Abstract: Although artificial intelligence (AI) has made significant progress in the electronic design automation (EDA) field, specialized infrastructure remains insufficient. In this paper, we analyze the necessary components for the integration of AI with EDA, propose a data decomposition from design to vector, and build an open-source AI-aided design (AAD) library. This library aims to transform chip data into vectors, train AI4EDA models, and integrate trained models into the chip design flow.
PZ-Agent: A Symbolic-Engine Enhanced LLM Agent for Op-Amp Topology Inquisition
Presenter: Mingzhen Li, Shanghai Jiao Tong University
Abstract: Using large language models (LLMs) for EDA is a rising research interest. In this paper, we investigate whether a symbolic-tool-assisted LLM agent can offer circuit-level inference in the analog domain. We develop a framework to inject pole-zero (PZ) knowledge into an LLM-based multi-agent system, so that the resulting PZ-agent can handle Op-amp topology inquisition. We demonstrate in detail that analytical knowledge in PZ form should be properly condensed to improve the reasoning capability of the PZ-agent. It turns out that, with properly incorporated knowledge, the PZ-agent can answer designers' inquiries on certain sophisticated multi-stage Op-amp design issues.
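The kind of symbolic pole-zero knowledge such an agent could draw on can be sketched with a computer-algebra call (a generic illustration using SymPy, not the paper's actual engine; the two-pole transfer function below is an idealized two-stage amplifier model):

```python
import sympy as sp

# Idealized two-stage Op-amp: DC gain gm1*R1*gm2*R2, one pole per stage
# output node (hypothetical model for illustration).
s = sp.symbols("s")
gm1, gm2, R1, R2, C1, C2 = sp.symbols("g_m1 g_m2 R1 R2 C1 C2", positive=True)

H = (gm1 * R1 * gm2 * R2) / ((1 + s * R1 * C1) * (1 + s * R2 * C2))

# Extract the denominator and solve symbolically for the poles.
_, den = sp.fraction(sp.together(H))
poles = sp.solve(sp.Eq(den, 0), s)  # expected: -1/(R1*C1) and -1/(R2*C2)
```

Condensing such closed-form results (rather than passing raw symbolic output to the LLM) is precisely the kind of knowledge preparation the abstract argues for.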
SpecLLM: Exploring Generation and Review of VLSI Design Specification with Large Language Model
Presenter: Mengming Li, The Hong Kong University of Science and Technology
Abstract: The development of architecture specifications is an initial and fundamental stage of the integrated circuit (IC) design process. Traditionally, architecture specifications are crafted by experienced chip architects, a process that is not only time-consuming but also error-prone. Mistakes in these specifications may significantly affect subsequent stages of chip design. Despite the presence of advanced electronic design automation (EDA) tools, effective solutions to these specification-related challenges remain scarce. Since writing architecture specifications is inherently a natural language processing (NLP) task, this paper pioneers the automation of architecture specification development with the advanced capabilities of large language models (LLMs).
We propose a structured definition of architecture specifications, categorizing them into three distinct abstraction levels. Based on this definition, we create and release a specification dataset by methodically gathering 46 architecture specification documents from various public sources. Leveraging our definition and dataset, we explore the application of LLMs in two key aspects of architecture specification development: (1) Generating architecture specifications, which includes both writing specifications from scratch and converting RTL code into detailed specifications. (2) Reviewing existing architecture specifications. Our results are promising, indicating that LLMs may revolutionize how these critical specification documents are developed in IC design.
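The RTL-to-specification task amounts to assembling RTL code and instructions into an LLM prompt; a minimal sketch (the template wording and the three level names are hypothetical, since the paper's actual prompts and level definitions are not reproduced here):

```python
def build_rtl_to_spec_prompt(rtl_code, abstraction_level):
    """Assemble a prompt asking an LLM to convert RTL into a specification
    at one of three abstraction levels (hypothetical level names)."""
    assert abstraction_level in ("architecture", "block", "signal")
    return (
        "You are a senior chip architect. Read the Verilog below and write a "
        f"{abstraction_level}-level design specification.\n"
        "Describe the module's purpose, interfaces, and behavior; do not "
        "restate the code line by line.\n\n"
        f"```verilog\n{rtl_code}\n```"
    )

prompt = build_rtl_to_spec_prompt(
    "module add(input a, b, output y); assign y = a ^ b; endmodule", "block"
)
```

The reviewing task can be framed symmetrically, with the existing specification as input and a critique as the requested output.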
DiffuSE: Cross-Layer Design Space Exploration of DNN Accelerator via Diffusion-Driven Optimization
Presenter: Yi Ren, Peking University
Abstract: The proliferation of deep learning accelerators calls for efficient and cost-effective hardware design solutions, where parameterized modular hardware generators and electronic design automation (EDA) tools play crucial roles in improving productivity and final Quality-of-Results (QoR). To strike a good balance across multiple QoR metrics of interest (e.g., performance, power, and area), designers need to navigate a vast design space encompassing tunable parameters for both the hardware generator and the EDA synthesis tools. However, the significant runtime of EDA tool invocations and the complex interplay among numerous design parameters make this task extremely challenging, even for experienced designers. To address these challenges, we introduce DiffuSE, a diffusion-driven design space exploration framework for cross-layer optimization of DNN accelerators. DiffuSE leverages conditional diffusion models to capture the inverse, one-to-many mapping from QoR objectives to parameter combinations, allowing for targeted exploration within promising regions of the design space. By carefully selecting the conditioning QoR values, the framework facilitates an effective trade-off among multiple QoR metrics in a sample-efficient manner. Experimental results under 7nm technology demonstrate the superiority of the proposed framework compared to prior art.
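The multi-QoR trade-off underlying such exploration rests on Pareto dominance among candidate design points; a minimal sketch of extracting the non-dominated set (generic code, not DiffuSE's implementation; the metric names are assumptions), treating every metric as minimize-is-better:

```python
def pareto_front(points):
    """Return the non-dominated points; each point is a tuple of QoR
    metrics (e.g. delay, power, area), all to be minimized."""
    front = []
    for i, p in enumerate(points):
        # p is dominated if some other point is no worse in every metric
        # and differs from p in at least one.
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (delay_ns, power_mW, area_um2) results for four candidates:
candidates = [(1.0, 3.0, 5.0), (2.0, 2.0, 4.0), (3.0, 1.0, 6.0), (3.0, 3.0, 6.0)]
print(pareto_front(candidates))  # the last point is dominated by the second
```

In a conditional-diffusion setting, the conditioning QoR values steer sampling toward points likely to land on (or near) this front, rather than enumerating the space exhaustively.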
Unsupervised Defect Detection Based on Self-Supervised Transformers
Presenter: Qianqian Ye, Semitronix Corporation
Abstract: Effective management of wafer defects is crucial for improving the yield of integrated circuit (IC) chip manufacturing. Compared to traditional defect analysis performed by human experts, deep learning-based automatic defect detection can significantly enhance both the speed and accuracy of the process. This helps engineers identify root causes more quickly, thereby improving yield. However, the training of these detection models requires substantial manual annotation, which can be costly and limits the widespread application of automatic defect detection. To address this challenge, we propose an unsupervised defect localization method that requires no training data. This method achieved a defect detection accuracy (ACC) of 92.5% (with IoU threshold = 0.3) on both scanning electron microscopy (SEM) and optical microscopy (OM) images. By significantly reducing the need for manual annotation, this new approach paves the way for fully automated large-scale data training in the future.
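The ACC metric above counts a localization as correct when the predicted region overlaps the annotated defect sufficiently; a minimal sketch of the IoU criterion (generic code, not the authors' implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_correct_detection(pred, truth, threshold=0.3):
    """A predicted defect region counts as correct when IoU >= threshold."""
    return iou(pred, truth) >= threshold

print(is_correct_detection((0, 0, 2, 2), (1, 1, 3, 3)))  # IoU = 1/7 < 0.3 -> False
```

A threshold of 0.3 is comparatively permissive (standard detection benchmarks often use 0.5), which suits localization tasks where rough position, not a tight box, is what the yield engineer needs.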
AiEDA-2.0: An Open-source AI-Aided Design (AAD) Library for Design-to-Vector
Invited Speaker: Xingquan Li, Peng Cheng Laboratory