Large Language Models and Next-Generation EDA Tools
Session Chair: Cheng Liu, Institute of Computing Technology, Chinese Academy of Sciences
LLM-Aided HLS Repair and Testbench Generation
Invited Speaker: Bing Li, University of Siegen
Abstract: With the rapidly increasing complexity of modern chips, hardware engineers must invest more effort in tasks such as circuit design and verification. These workflows often involve continuous modifications, which are labor-intensive and prone to errors. There is therefore a growing need for more efficient and cost-effective Electronic Design Automation (EDA) solutions to accelerate new hardware development. Recently, large language models (LLMs) have made significant advances in contextual understanding, logical reasoning, and response generation. Since hardware designs and intermediate scripts can be expressed in text form, it is natural to explore whether integrating LLMs into EDA could simplify and eventually fully automate the circuit design workflow. This talk discusses the application of LLMs to HLS (High-Level Synthesis) code repair and testbench generation, as well as future challenges and opportunities.
MALTS: A Multi-Agent Large Language Model-Based Layout-to-TCAD Structure Synthesizer
Presenter: Jiaqi Dou, East China Normal University
Abstract: The demand for efficient 3D modeling in Technology Computer-Aided Design (TCAD) simulations has surged, driven by the need for rapid device performance evaluation and Design Technology Co-Optimization (DTCO). Traditional TCAD modeling involves time-consuming process emulation and complex script editing, which limits the scope to single-device simulations and hinders scalability. To address these challenges, this paper introduces MALTS, a multi-agent large language model-based layout-to-TCAD structure synthesizer. The proposed framework takes industrial layout files (e.g., GDSII) and process information as inputs, and directly generates simulation-ready definitions in standard TCAD syntax, eliminating the iterative process emulation and manual script adjustments typically required in conventional workflows. By decomposing the synthesis process and assigning each phase to a specialized LLM agent, MALTS mitigates the "long-context forgetting" problem, enabling efficient generation of complex structures and gate-level TCAD models. Experimental results demonstrate that the proposed framework provides an automated solution for analyzing the impact of various physical layout designs and process details on electrical characteristics, facilitating rapid design space exploration in the DTCO process.
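The phase-decomposed, multi-agent idea can be illustrated with a minimal sketch. All names (call_llm, the phase prompts, the three-phase split) are hypothetical placeholders, not MALTS's actual agents or prompts; the sketch only shows why scoping each agent to one phase avoids holding the whole layout-to-TCAD pipeline in a single context window.

```python
# Minimal sketch of a phase-decomposed, multi-agent LLM pipeline in the spirit
# of MALTS. The phases, prompts, and call_llm interface are hypothetical.

from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    system_prompt: str

# Each phase gets its own specialized agent, so no single context window must
# hold the full layout, process description, and TCAD script at once; this is
# how long-context forgetting is mitigated.
PHASES = [
    Phase("layout_analysis", "Extract device regions and layer geometry from the layout summary."),
    Phase("process_mapping", "Map layers and geometry to process steps and material stacks."),
    Phase("tcad_emission", "Emit simulation-ready structure definitions in standard TCAD syntax."),
]

def call_llm(system_prompt: str, user_content: str) -> str:
    """Placeholder for an LLM API call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError

def synthesize_structure(layout_summary: str, process_info: str) -> str:
    # Only the previous phase's distilled output is forwarded, not the full history.
    context = f"LAYOUT:\n{layout_summary}\nPROCESS:\n{process_info}"
    for phase in PHASES:
        context = call_llm(phase.system_prompt, context)
    return context  # the final phase returns TCAD structure definitions
```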
Finetuned Decision Transformer with Tree Search for Logic Synthesis Optimization
Abstract: Logic synthesis (LS) involves the conversion of high-level circuit descriptions into gate-level netlists. However, identifying the optimal primitive sequences (PS) for this transformation is a formidable challenge due to the expansive design space. While machine learning has been employed to address this issue, existing methods often necessitate extensive training or incur high computational costs. GPT-LS reframed LS optimization as a sequence generation problem, leveraging a pre-trained transformer with offline reinforcement learning to generate PS effectively. In this work, we integrate tree search and model fine-tuning to enhance the transferability and performance of GPT-LS, particularly on unseen cases. This significantly improves efficiency and effectiveness with minimal runtime increase, establishing a new standard in LS optimization. Experimental results demonstrate that, after just a few rounds of fine-tuning, our approach outperforms all previous methods, including state-of-the-art Bayesian optimization techniques and long-running random searches.
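A shallow tree (beam) search guided by a pre-trained sequence model can be sketched as follows. The policy_scores and evaluate interfaces, primitive names, and beam/depth values are illustrative assumptions; GPT-LS's actual fine-tuning and search procedure are described in the paper, not reproduced here.

```python
# Illustrative sketch: beam-style tree search over logic-synthesis primitive
# sequences, with candidate expansions proposed by a pre-trained model and
# re-ranked by measured quality of results (QoR). Interfaces are hypothetical.

import heapq

PRIMITIVES = ["rewrite", "refactor", "resub", "balance", "rewrite -z"]

def policy_scores(prefix):
    """Placeholder: dict mapping each primitive to its log-probability under
    the pre-trained sequence model, conditioned on the current prefix."""
    raise NotImplementedError

def evaluate(prefix):
    """Placeholder: run the sequence through the synthesis tool and return QoR
    (e.g., negative area-delay product; higher is better)."""
    raise NotImplementedError

def tree_search(depth=10, beam=4):
    # Keep the `beam` best prefixes at each depth, expand each with the
    # model's top proposals, and re-rank candidates by measured QoR.
    frontier = [((), 0.0)]
    for _ in range(depth):
        candidates = []
        for prefix, _ in frontier:
            scores = policy_scores(prefix)
            for prim, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:beam]:
                seq = prefix + (prim,)
                candidates.append((seq, evaluate(seq)))
        frontier = heapq.nlargest(beam, candidates, key=lambda c: c[1])
    return max(frontier, key=lambda c: c[1])  # (best sequence, best QoR)
```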
AnaSizeCoder: Code Generator for Analog Integrated Circuit Sizing Automation via Large Language Model
Presenter: Wenzhao Sun, Fudan University
Abstract: This study proposes a task-driven approach, AnaSizeCoder, that utilizes the large language model (LLM) Qwen2.5 to automatically generate optimization code for transistor sizing in analog integrated circuits (ICs). Traditional optimization techniques often require manual code adjustments when faced with different optimization objectives and constraints, limiting their flexibility and efficiency in complex design tasks. To address this issue, we fine-tuned the Qwen2.5 model using LoRA (Low-Rank Adaptation), enabling it to generate high-quality optimization code from user-provided natural language requirements. We constructed a dataset comprising user requirements, summarized information, and corresponding optimization code, and consolidated all data for fine-tuning. The model was trained on 3 analog circuits, covering different optimization goals and constraints. Experimental results show that the fine-tuned model significantly improves code generation accuracy, with the best performance observed at the fourth training epoch (Epoch 4). For generating summarized information, the accuracy reached 100%, while for code generation, the accuracy was above 97% across the 3 circuits. Additionally, using the LDO (Low Dropout Regulator) circuit, we validated that the code generated by AnaSizeCoder runs successfully and reaches the optimal Pareto frontier. Our approach enhances the automation and efficiency of analog IC design, reduces human intervention, and accelerates the design process.
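A minimal LoRA fine-tuning sketch is shown below, assuming a Qwen2.5 checkpoint from Hugging Face and the transformers/peft libraries; the actual base checkpoint, dataset format, and hyperparameters used by AnaSizeCoder are not specified here and the values shown are assumptions.

```python
# Sketch: attaching LoRA adapters to a Qwen2.5 causal-LM for fine-tuning on
# (requirement, summarized info, optimization code) pairs. Checkpoint name and
# hyperparameters below are assumptions, not AnaSizeCoder's reported settings.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-7B-Instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters on the attention projections keep the trainable parameter
# count small while adapting the model to sizing-code generation.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Training would then pair natural-language requirements (plus summarized
# circuit information) with reference optimization code in a standard
# causal-LM fine-tuning loop.
```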
ChaTCL: LLM-Based Multi-Agent RAG Framework for TCL Script Generation
Presenter: Yibo Rui, National ASIC System Engineering Technology Research Center, Southeast University
Abstract: Manually generating Tool Command Language (TCL) scripts is time-consuming and error-prone. Although large language models (LLMs) show promise in automating TCL script generation, they struggle with complex EDA tasks, often failing to meet practical requirements and facing compatibility issues across multiple tools. To address these limitations, we introduce ChaTCL, a multi-agent Retrieval-Augmented Generation (RAG) framework based on LLMs. Our contributions include: (1) creating fine-tuned datasets (NondomainT and DomainT) for TCL-specific LLMs, (2) designing a TCL-specific RAG framework with one-to-one retriever-database mapping to reduce hallucinations, and (3) implementing a multi-agent system that interprets user prompts and autonomously selects agents for script generation. Evaluated using TCLEval, ChaTCL outperforms models like ChatGPT-4.0-o and OpenAI o1-preview in generating accurate scripts across diverse tools and design stages.
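The one-to-one retriever-to-database idea can be sketched as follows. The task categories, database contents, and call_llm/retrieve interfaces are hypothetical placeholders, not ChaTCL's actual implementation; the sketch only shows a router agent choosing one scoped retriever before generation.

```python
# Sketch: one retriever per knowledge base, with a router agent selecting the
# pair before the generation agent writes the TCL script. All names and task
# categories are hypothetical.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call."""
    raise NotImplementedError

# Binding each retriever to exactly one database keeps retrieval scoped, which
# is the mechanism the abstract credits with reducing hallucinations.
DATABASES = {
    "synthesis": ["synth_cmd_docs.txt"],
    "place_route": ["pnr_cmd_docs.txt"],
    "static_timing": ["sta_cmd_docs.txt"],
}

def retrieve(category: str, query: str, k: int = 3) -> list[str]:
    """Placeholder retriever bound to a single database (e.g., BM25 or embeddings)."""
    raise NotImplementedError

def generate_tcl(user_prompt: str) -> str:
    # Router agent: classify the request into one supported task category.
    category = call_llm(
        f"Classify this EDA request into one of {list(DATABASES)}: {user_prompt}"
    ).strip()
    # Generation agent: ground the script only in documents from the matching database.
    context = "\n".join(retrieve(category, user_prompt))
    return call_llm(
        f"Using only these tool command references:\n{context}\n"
        f"Write a TCL script for: {user_prompt}"
    )
```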