Efficient Parameter Extraction and Modeling Techniques
Session Chair: Zichao Ma, South China University of Technology
Multi-Objective Bayesian Target Interval Optimization for Semiconductor Process Parameters
Presenter: Xiao Yang, Beijing University of Posts and Telecommunications
Abstract: Semiconductor manufacturing is becoming increasingly complex, making optimization of process parameters both challenging and costly. In Physical Vapor Deposition (PVD) processes, high-dimensional nonlinear relationships between input parameters and output performance exacerbate these difficulties, particularly under conditions of limited data and multi-objective constraints. To tackle these challenges, we propose a Bayesian Optimization (BO) framework integrating Sparse Multi-task Gaussian Process (SMTGP) and Probability-guided Interval Search Mechanism (PRISM) for process parameter optimization. By incorporating a Multi-input Multi-output (MIMO) Predictor with Soft Physical Constraints, the framework effectively combines physical priors with learning-based modeling, enhancing predictive accuracy and reliability in low-data scenarios. Experimental evaluations across various scenarios demonstrate that the proposed method outperforms Random Search (RS) and classical BO in both efficiency and reliability.
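For readers who want a concrete picture of the interval-targeted acquisition idea, the sketch below shows a plain GP-based loop that selects the candidate most likely to land inside a target output interval. It is a minimal illustration only: the paper's Sparse Multi-task GP surrogate, PRISM search mechanism, MIMO predictor, and soft physical constraints are not reproduced, and all names and settings here are placeholders.

```python
# Minimal sketch of interval-targeted Bayesian optimization (illustrative only;
# the paper's SMTGP surrogate and PRISM search mechanism are not reproduced here).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def interval_probability(mu, sigma, low, high):
    """Probability that a Gaussian prediction falls inside the target interval."""
    sigma = np.maximum(sigma, 1e-9)
    return norm.cdf((high - mu) / sigma) - norm.cdf((low - mu) / sigma)

def propose_next(X_obs, y_obs, candidates, target=(0.45, 0.55)):
    """Fit a GP surrogate and pick the candidate most likely to hit the interval."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    scores = interval_probability(mu, sigma, *target)
    return candidates[np.argmax(scores)]

# Toy usage: one process parameter, noisy response, five initial observations.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(5)
grid = np.linspace(0, 1, 200).reshape(-1, 1)
print("next setting to evaluate:", propose_next(X, y, grid))
```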
Task Scheduling and Temperature Optimization Co-Design for Multi-Core Embedded Systems
Presenter: Lei Mo, Southeast University
Abstract: The high integration and power density of multi-core processors increase energy consumption and chip temperatures, affecting reliability and accelerating aging. Effective energy and temperature management is crucial to maintaining performance and extending processor lifespans. This paper proposes a real-time task scheduling approach using Dynamic Voltage and Frequency Scaling (DVFS) to optimize energy and temperature in embedded Multi-Processor System-on-Chip (MPSoC) platforms. We introduce a thermal model and develop a Mixed-Integer Nonlinear Programming (MINLP) model to optimize task allocation and scheduling under real-time, dependency, non-overlapping, and thermal constraints. To simplify the problem, we linearize the MINLP model into a Mixed-Integer Linear Programming (MILP) model. A Genetic Algorithm (GA)-based method is also proposed to improve scalability. The GA method accounts for task allocation, processor selection, and thermal constraints, while prioritizing lower execution frequencies to reduce energy use. Simulation results show that the proposed approach effectively reduces energy consumption and computation time, making it suitable for complex multi-core platforms.
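As a rough illustration of how a GA-based DVFS scheduler can trade off frequency, energy, and deadline/thermal feasibility, the sketch below scores a candidate task-to-core and frequency assignment with penalty terms. The chromosome encoding, power model, and steady-state temperature estimate are assumptions for illustration and do not reproduce the paper's MINLP/MILP formulation or thermal model.

```python
# Illustrative fitness sketch for a GA-based DVFS scheduler (assumed encoding:
# each gene is a (core, frequency-level) pair per task; not the paper's exact model).
import numpy as np

FREQS = np.array([0.6, 0.8, 1.0])          # normalized frequency levels
POWER = 0.5 * FREQS**3 + 0.1               # simple dynamic + static power model
T_AMB, R_TH = 45.0, 30.0                   # ambient temperature (C), thermal resistance (C/W)

def fitness(chromosome, wcet, deadline, t_max=85.0):
    """Lower is better: total energy plus penalties for deadline and thermal violations."""
    energy, penalty = 0.0, 0.0
    core_load = {}
    for task, (core, f_idx) in enumerate(chromosome):
        exec_time = wcet[task] / FREQS[f_idx]          # tasks run longer at lower frequency
        energy += POWER[f_idx] * exec_time
        core_load[core] = core_load.get(core, 0.0) + exec_time
        temp = T_AMB + R_TH * POWER[f_idx]             # crude steady-state temperature estimate
        if temp > t_max:
            penalty += 1e3 * (temp - t_max)
    worst_finish = max(core_load.values())
    if worst_finish > deadline:
        penalty += 1e3 * (worst_finish - deadline)
    return energy + penalty

# Toy usage: three tasks mapped onto two cores, deadline of 10 time units.
print(fitness([(0, 0), (0, 1), (1, 2)], wcet=[2.0, 3.0, 4.0], deadline=10.0))
```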
Optimization of N/P Performance Matching through Adjusting Extension Region Doping Concentration for Vertical Gate-All-Around FETs
Presenter: Xinlong Guo, Fudan University
Abstract: A novel vertical gate-all-around FET (VGAAFET) structure based on Si/SiGe/Si/SiGe/Si epitaxial stacks is proposed, integrating controllable gate and spacer lengths as well as self-aligned junctions. In the proposed VGAAFET structure, SiGe layers serving as extension regions form a doping pocket between the Si source/drain (S/D) and the Si channel in the pFET, owing to the higher solid solubility of boron in SiGe. Optimizing the doping concentration in the pFET extension regions is demonstrated to enhance the saturation current of the pFET by 23% and improve the n/p current ratio from 1:0.74 to 1:0.91 compared to undoped extension regions. To further investigate the performance improvement of a VGAAFET-based ring oscillator (RO) obtained by adjusting the doping concentration in the extension regions, a design-technology co-optimization (DTCO) platform is established. An enhancement of 17% in frequency and a reduction of 12% in energy-delay product (EDP) are achieved when the doping concentration of the pFET extension regions matches that of the S/D.
Efficient Parameter Extraction for GaN HEMTs Using Rational Function Assisted Neural Network
Presenter: Zhenhai Cui, Xidian University
Abstract: This work proposes a novel method integrating Rational Function Preprocessing (RFP) with an Artificial Neural Network (ANN) to efficiently extract ASM-HEMT model parameters from S-parameters of Gallium Nitride High Electron Mobility Transistor (GaN HEMT) devices. Compared to conventional ANN parameter extractors, this method represents a significant advancement. The innovation lies in an advanced rational function model specifically designed to accurately capture the complex high-dimensional frequency characteristics of GaN HEMT S-parameters. By preserving the essential frequency response characteristics, it eliminates interference from redundant S-parameters. Experimental results highlight substantial performance improvements: training time is reduced from 9035 seconds to 670 seconds under comparable error conditions, a 92.6% reduction, and the Root Mean Square Percentage Error (RMSPE) of parameter extraction decreases from 7.55% to 4.13%, a 45% reduction in error under the same neural network architecture. This method holds significant engineering value for applications such as 5G base stations and RF power amplifiers.
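The sketch below illustrates the general shape of rational-function preprocessing followed by an ANN regressor: a low-order rational fit (Levy-style linear least squares) compresses an S-parameter trace into a short coefficient vector, which then serves as the network input. The polynomial orders, the single-parameter target, and the toy one-pole response are assumptions; the paper's RFP model and ASM-HEMT extraction flow are not reproduced.

```python
# Sketch of rational-function preprocessing (Levy-style linear least squares)
# followed by an ANN regressor; illustrative only, not the paper's exact RFP model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def rational_features(freq, s_param, num_order=2, den_order=2):
    """Fit S(f) ~ N(s)/D(s) with s = j*2*pi*f and return the coefficients as features."""
    s = 1j * 2 * np.pi * freq
    cols = [s**k for k in range(num_order + 1)]                 # numerator terms
    cols += [-s_param * s**k for k in range(1, den_order + 1)]  # denominator terms
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, s_param, rcond=None)
    return np.concatenate([coeffs.real, coeffs.imag])           # compact feature vector

# Toy usage: features from a synthetic one-pole response, then an MLP mapping
# features -> model parameters (here a single dummy parameter value).
freq = np.linspace(1e9, 40e9, 201)
s11 = 1.0 / (1 + 1j * freq / 5e9)
X = rational_features(freq, s11).reshape(1, -1)
y = np.array([0.42])                                            # dummy extracted parameter
MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
```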
Experiment and TCAD Hybrid-data-Driven Modeling of Advanced Node GAAFETs by Transfer Learning
Presenter: Yuhan Jiang, Institute of Microelectronics, Chinese Academy of Sciences
Abstract: Gate-All-Around field-effect transistors (GAAFETs) for advanced technology nodes present significant simulation challenges due to their intricate 3-dimensional structures and diverse process and device configurations. The scarcity of experimental data, constrained by high costs and long turnaround times, further complicates the accurate calibration of robust TCAD models. This study introduces a novel neural network technique that synergizes experimental and TCAD hybrid data with transfer learning. By training a base model on a comprehensive GAAFET simulation dataset and fine-tuning it with minimal experimental data, the method reduces dependency on experimental data by over 80%. Results demonstrate a 98% improvement in predictive accuracy and a substantial reduction in calibration time compared to conventional methods. This approach highlights the potential to significantly enhance the efficiency and accuracy of advanced node device design and development.
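A minimal sketch of the pretrain-then-fine-tune pattern described above is given below, assuming a small PyTorch MLP surrogate: the base model is trained on abundant (here synthetic) TCAD-style data, its feature layers are frozen, and only the output layer is re-trained on a handful of experimental points. The architecture, inputs, and data are placeholders, not the paper's network.

```python
# Transfer-learning sketch (assumed architecture and placeholder data; not the paper's model):
# pre-train on abundant TCAD-style data, then freeze feature layers and fine-tune the head.
import torch
import torch.nn as nn

def make_mlp():
    return nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

def train(model, x, y, params, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

model = make_mlp()
# Stage 1: base model on synthetic "TCAD" data (4 placeholder inputs, 1 output).
x_tcad, y_tcad = torch.randn(2000, 4), torch.randn(2000, 1)
train(model, x_tcad, y_tcad, model.parameters())

# Stage 2: freeze the feature layers, fine-tune only the last layer on few experimental points.
for layer in list(model.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad_(False)
x_exp, y_exp = torch.randn(20, 4), torch.randn(20, 1)
train(model, x_exp, y_exp, model[-1].parameters(), epochs=500)
```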
Compact Model of Superparamagnetic Tunnel Junction Controlled by Spin-Orbit Torque
Presenter: Chaoyue Zhang, Nanjing University of Aeronautics and Astronautics
Abstract: Superparamagnetic tunnel junctions (SMTJs), characterized by thermally driven random magnetization switching, offer novel pathways for high-sensitivity magnetic sensing and true random number generation. In this study, we developed a compact model based on magnetization dynamics and electrical characteristics for SMTJs modulated by spin-orbit torque (SOT). The temporal evolution of the magnetic moments in the free layer (FL) is described by a revised Landau-Lifshitz-Gilbert (LLG) equation that accounts for thermal noise effects. A voltage-dependent tunneling magnetoresistance (TMR) model is introduced, thereby establishing a robust framework for circuit simulation. The model is implemented in Verilog-A, and its accuracy is verified through simulations. The simulation results indicate that the magnetization state retention time of the SMTJ is positively correlated with the thermal stability factor, and that an external voltage can modulate the probability distribution of the magnetization direction. This research provides theoretical support and design references for the application of superparamagnetic devices in low-power random circuits and high-precision sensors.
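For reference, a commonly used form of the stochastic LLG equation with a damping-like SOT term and a thermal fluctuation field is written out below; the paper's revised equation and coefficient conventions may differ. Here m is the unit free-layer magnetization, γ the gyromagnetic ratio, α the damping constant, θ_SH the spin Hall angle, J the charge current density, t_FL the free-layer thickness, M_s the saturation magnetization, σ the spin polarization direction, and V the free-layer volume.

```latex
% Commonly used stochastic LLG form with a damping-like SOT term (illustrative;
% the paper's revised equation may use different conventions).
\begin{align}
\frac{d\mathbf{m}}{dt} &= -\gamma\,\mathbf{m}\times\left(\mathbf{H}_{\mathrm{eff}}+\mathbf{H}_{\mathrm{th}}\right)
  + \alpha\,\mathbf{m}\times\frac{d\mathbf{m}}{dt}
  - \frac{\gamma\hbar\,\theta_{\mathrm{SH}}\,J}{2 e \mu_0 M_s t_{\mathrm{FL}}}\,
    \mathbf{m}\times\left(\mathbf{m}\times\boldsymbol{\sigma}\right), \\
\langle H_{\mathrm{th},i}(t)\,H_{\mathrm{th},j}(t')\rangle
  &= \frac{2\alpha k_B T}{\gamma\,\mu_0 M_s V}\,\delta_{ij}\,\delta(t-t').
\end{align}
```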