A technical paper titled “ChipNeMo: Domain-Adapted LLMs for Chip Design” was published by researchers at NVIDIA.
“ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-adaptive continued pretraining, supervised fine-tuning (SFT) with domain-specific instructions, and domain-adapted retrieval models. We evaluate these methods on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our results show that these domain adaptation techniques enable significant LLM performance improvements over general-purpose base models across the three evaluated applications, enabling up to 5x model size reduction with similar or better performance on a range of design tasks. Our findings also indicate that there’s still room for improvement between our current results and ideal outcomes. We believe that further investigation of domain-adapted LLM approaches will help close this gap in the future.”
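To give a concrete sense of what domain-adaptive continued pretraining (DAPT) involves, here is a minimal sketch using the Hugging Face Transformers stack. This is not the paper's actual pipeline: ChipNeMo builds on NVIDIA NeMo and LLaMA2 foundation models trained on proprietary chip-design data, so the model name ("gpt2" as an openly available stand-in), the corpus file "domain_corpus.txt", and all hyperparameters below are illustrative placeholders.

# Domain-adaptive continued pretraining (DAPT), sketched with Hugging Face
# Transformers: continue the causal-LM pretraining objective of a general
# base model on in-domain text. All names and settings are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # open stand-in; ChipNeMo itself adapts LLaMA2 foundation models
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# "domain_corpus.txt" is a placeholder for in-domain documents such as RTL,
# EDA scripts, and bug reports, one passage per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

corpus = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dapt_checkpoint",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=16,
        learning_rate=5e-6,  # small LR: adapt without overwriting general knowledge
        num_train_epochs=1,
    ),
    train_dataset=corpus,
    # mlm=False selects the next-token (causal LM) objective used in pretraining
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The supervised fine-tuning (SFT) stage the abstract mentions would follow the same pattern, but on formatted instruction-response pairs rather than raw domain text.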
Find the technical paper at the link below. Published October 2023.
Liu, M., Ene, T., Kirby, R., Cheng, C., Pinckney, N., Liang, R., Alben, J., Anand, H., Banerjee, S., Bayraktaroglu, I., Bhaskaran, B., et al. 2023. “ChipNeMo: Domain-Adapted LLMs for Chip Design.” https://research.nvidia.com/publication/2023-10_chipnemo-domain-adapted-llms-chip-design
Related Reading
AI Adoption Slow For Design Tools
While ML adoption is robust, full AI is slow to catch fire. But that could change in the future.
Making Tradeoffs With AI/ML/DL
Machine learning, deep learning, and AI are increasingly used in chip design, and they are being used to design chips optimized for ML/DL/AI.