Constrained MRC Approaches for Inverse Lithography
Presenter: Yijiang Shen, Guangdong University of Technology
Abstract: The rapid evolution of optical proximity correction (OPC) in lithographic processing and mask manufacturing is driving industry adoption of curvilinear (curvy) features to improve printability and pattern fidelity. However, the practical manufacturability of curvy features depends on reliable mask rule checks (MRCs); as a consequence, addressing MRC violations is critical in OPC engines. In this paper, we propose a constrained approach to curvy-feature MRC compliance in which a skeleton-distance-map-based method detects and corrects MRC violations, and a per-correction mask rule penalty term, using the mask contour as a distance indicator, is incorporated into the level-set-based optimization to steer the level-set evolution away from local rule violations. Simulation results demonstrate that the constrained approach penalizes mask rule violations and resolves them without degrading imaging performance.
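As a rough illustration of how such a penalty can enter a level-set update, the sketch below is a simplification rather than the authors' implementation; the lithography model callbacks (litho_forward, litho_gradient), the MRC violation map, and the weights are hypothetical placeholders.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smooth Heaviside: maps the level-set function phi to a [0, 1] mask."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def ilt_step(phi, target, litho_forward, litho_gradient, mrc_violation_map,
             step=0.5, w_mrc=2.0):
    """One gradient step on ||print(mask) - target||^2 + w_mrc * MRC penalty."""
    mask = heaviside(phi)
    printed = litho_forward(mask)                        # aerial/resist model
    grad_image = litho_gradient(mask, printed, target)   # d(image error)/d(mask)

    # Penalty gradient is nonzero only where the skeleton/distance analysis
    # flags a mask-rule violation (e.g., width or spacing below the minimum).
    grad_mrc = w_mrc * mrc_violation_map(mask)

    # Chain rule through the smooth Heaviside (eps = 1), then evolve phi.
    d_heaviside = (1.0 / np.pi) / (1.0 + phi**2)
    return phi - step * (grad_image + grad_mrc) * d_heaviside
```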
DATIS: DRAM Architecture and Technology Integrated Simulation
Presenter: Shiyu Xia, Shanghai Jiao Tong University
Abstract: Recent advances in DRAM technologies and large-dataset applications in data centers have made both academic and industrial researchers eager to explore novel uses of DRAM and cross-disciplinary DTCO (design-technology co-optimization) spaces, as illustrated by recent studies of processing-in-memory (PIM) and the RowHammer effect. This evolving landscape has created a pressing need for systematic testing and validation of these emerging DTCO studies. However, previous DRAM simulators have lacked joint modeling of device and architecture, impeding effective simulation of these DTCO designs. To address this gap, we introduce DATIS (DRAM Architecture and Technology Integrated Simulator), a tool that effectively connects architectural design with the complexities of DRAM technology. DATIS addresses two critical challenges: abstracting technology intricacies and establishing connections between architectural activities and device-level process structures. This versatile tool empowers researchers to unlock the latent capabilities of DRAM and provides manufacturers with a platform to experiment with new process and architecture co-design. To the best of our knowledge, DATIS is the first academic DRAM simulator that integrates architecture and technology modeling. We build DATIS upon Ramulator, a well-known open-source DRAM simulator for architecture-level modeling, so it supports a wide range of DRAM specifications, including DDRx, LPDDR5, GDDR6, and HBM2/3. Our experiments demonstrate DATIS's efficacy and precision through three compelling case studies addressing pivotal facets of DRAM technology: storage, reliability, and computation.
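As a rough sketch of the kind of architecture-to-device bridge described above (hypothetical, not DATIS's actual interface; the class, threshold, and trace are illustrative), an architecture-level simulator could emit (command, bank, row) events that a technology layer turns into device-level stress estimates, such as a RowHammer-style disturbance count.

```python
from collections import defaultdict

class TechnologyLayer:
    """Accumulates per-row activation stress from architecture-level commands."""

    def __init__(self, disturb_threshold=50_000):
        self.activations = defaultdict(int)   # (bank, row) -> ACT count
        self.disturb_threshold = disturb_threshold

    def on_command(self, cmd, bank, row):
        """Called by the architecture simulator for every issued command."""
        if cmd == "ACT":
            self.activations[(bank, row)] += 1

    def on_refresh(self, bank, row):
        """Refresh resets the accumulated disturbance for a row."""
        self.activations.pop((bank, row), None)

    def rows_at_risk(self):
        """Rows whose activation count exceeds a device-level disturbance limit."""
        return [key for key, count in self.activations.items()
                if count >= self.disturb_threshold]

# Example usage with a synthetic command trace.
tech = TechnologyLayer(disturb_threshold=3)
trace = [("ACT", 0, 7), ("RD", 0, 7), ("ACT", 0, 7), ("ACT", 0, 7)]
for cmd, bank, row in trace:
    tech.on_command(cmd, bank, row)
print(tech.rows_at_risk())   # [(0, 7)]
```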
A Survey of Standard Cell Layout Design
Presenter: Zhiyuan Luo, National University of Defense Technology
Abstract: Standard cells serve as fundamental building blocks in the integrated circuit (IC) design flow. The efficiency and quality of standard cell layout design have consistently been a major research focus in both academia and industry. With design-technology co-optimization (DTCO) playing an increasingly critical role in advanced technology nodes, standard cell layout design is undergoing a paradigm shift from traditional approaches to system-level optimization. In this paper, we systematically review the evolution of standard cell layout design, categorizing its development into three distinct phases: the manual design era, the automated design era, and the design-technology (DT) co-design era. Building upon this historical framework, we first introduce the general flow of standard cell layout design during the manual design era. This flow comprises four key stages: schematic drafting, stick diagram design, layout drafting, and physical verification. Subsequently, we analyze the core challenge in transitioning from manual to automated design: formulating transistor placement and in-cell routing as a solvable optimization problem through constraint modeling. We then provide a comprehensive review of the methods employed in related work over the past four decades, encompassing graph-based algorithms, heuristic algorithms, constraint solvers, and AI-based methods. Finally, we delve into the emerging design paradigms of the DT co-design era, offering a perspective on the future of standard cell design. This perspective encompasses concurrent cell-level and system-level optimization; finer-grained modeling and integrated design flows; and specialized, custom cell library design with AI-assisted intelligent design.
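To make the optimization framing concrete, here is a toy sketch (not drawn from the survey; the transistor list and the abutment rule are invented) that scores a linear transistor ordering by how many adjacent devices can share a diffusion, which is one classic objective in automated cell layout.

```python
from itertools import permutations

# Each transistor is (name, source_net, drain_net); the data is made up.
transistors = [
    ("M1", "VDD", "out"), ("M2", "out", "n1"),
    ("M3", "n1", "VDD"),  ("M4", "out", "VDD"),
]

def shared_diffusions(order):
    """Count adjacent pairs whose facing terminals carry the same net."""
    count = 0
    for (_, _, left_drain), (_, right_src, _) in zip(order, order[1:]):
        if left_drain == right_src:
            count += 1
    return count

# Exhaustive search is fine at cell scale (a handful of devices); real tools
# use graph algorithms, heuristics, SAT/ILP solvers, or learned models.
best = max(permutations(transistors), key=shared_diffusions)
print([name for name, _, _ in best], shared_diffusions(best))
# prints one maximal ordering and its diffusion-sharing count
```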
Efficient SRAF Generation via Diffusion Models
Presenter: Minjie Bi, The Hong Kong University of Science and Technology (Guangzhou)
Abstract: In semiconductor manufacturing, optical proximity correction and sub-resolution assist features (SRAFs) are critical techniques for achieving high-fidelity wafer images, especially as semiconductor device critical dimensions shrink. However, traditional SRAF generation methods often face challenges in scalability, adaptability, and efficiency. This paper introduces a novel method that uses a conditional generative diffusion model for SRAF generation to improve efficiency and flexibility. We treat SRAF generation as an image-to-image translation problem, translating the input layout into one that includes optimal assist features. Experimental results show that the proposed approach achieves a 5.57× speed-up over commercial tools while maintaining comparable accuracy in terms of edge placement error and process variation band.
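As a schematic illustration of a conditional, image-to-image diffusion setup for this task (the denoiser network, channel layout, and hyperparameters below are assumptions, not the paper's), a DDPM-style training step can condition on the target layout and learn to recover the SRAF map.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(denoiser, layout, sraf, optimizer):
    """One denoising step: predict the noise added to the SRAF map,
    conditioned on the target layout (concatenated as extra channels)."""
    b = sraf.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(sraf)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy_sraf = a_bar.sqrt() * sraf + (1 - a_bar).sqrt() * noise

    pred_noise = denoiser(torch.cat([noisy_sraf, layout], dim=1), t)
    loss = nn.functional.mse_loss(pred_noise, noise)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```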
GMUNet-ILT: A lightweight MLP-based network for Inverse Lithography Technology
Presenter: Ke Wang, Zhejiang University
Abstract: With the continuous scaling down of critical dimensions (CD) in advanced integrated circuits, Resolution Enhancement Techniques (RETs) are employed to improve printing performance in lithography processes. Inverse lithography, a widely studied RET, provides precise control of the printed images on wafers. As a form of computational lithography, however, inverse lithography often incurs significant computational cost. This paper therefore proposes a lightweight and fast convolutional network architecture for computational lithography, based on an improved U-Net that incorporates shifted-window multi-layer perceptron (MLP) and Ghost modules. Our approach achieves relatively low mask error with a runtime of less than 5 seconds.
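For reference, a Ghost module in the GhostNet style (an assumption about the flavor used here; kernel sizes and the channel ratio are illustrative) produces part of its output with a regular convolution and the rest with a cheap depthwise convolution.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Primary conv produces half the channels; a cheap depthwise conv
    generates the remaining 'ghost' feature maps from those outputs."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel=1, cheap_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio
        ghost_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, ghost_ch, cheap_kernel,
                      padding=cheap_kernel // 2, groups=primary_ch, bias=False),
            nn.BatchNorm2d(ghost_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

# Example: a 256x256 single-channel mask-like tensor through one Ghost block.
y = GhostModule(1, 16)(torch.randn(1, 1, 256, 256))
print(y.shape)   # torch.Size([1, 16, 256, 256])
```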
Application Driven One-stop Reliability Solution
Invited Speaker: Wenchao Liu, Primarius Technologies Co., Ltd.
Abstract: Advances in semiconductor technology have made reliability a crucial design concern, particularly in safety-critical industries such as automotive and aerospace. Operating environments characterized by extreme temperatures, radiation exposure, and sustained stress require integrated circuits engineered to resist long-term performance degradation. Meeting these challenges requires comprehensive reliability solutions that integrate predictive modeling with design verification methodologies. This presentation explores how reliability-aware methodologies, combined with the advanced measurement instrument FS800™ and EDA tools such as BSIMProPlus™ (an aging-aware modeling platform) and NanoSpice™ (for high-accuracy reliability analysis), enable a streamlined workflow from early-stage failure prediction to sign-off. Companies leveraging these solutions can ensure compliance with strict standards while optimizing performance and yield. This one-stop approach empowers engineers to deliver safer, more durable chips with accelerated time-to-market.