
LLMs For Hardware Design Verification


A technical paper titled "LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation" was published by researchers at the University of Cambridge, lowRISC, and Imperial College London.

Abstract:

“Test stimuli generation has been a crucial but labor-intensive task in hardware design verification. In this paper, we revolutionize this process by harnessing the power of large language models (LLMs) and present a novel benchmarking framework, LLM4DV. This framework introduces a prompt template for interactively eliciting test stimuli from the LLM, along with four innovative prompting improvements to support the pipeline execution and further enhance its performance. We compare LLM4DV to traditional constrained-random testing (CRT), using three self-designed design-under-test (DUT) modules. Experiments demonstrate that LLM4DV excels in efficiently handling straightforward DUT scenarios, leveraging its ability to employ basic mathematical reasoning and pre-trained knowledge. While it exhibits reduced efficiency in complex task settings, it still outperforms CRT in relative terms. The proposed framework and the DUT modules used in our experiments will be open-sourced upon publication.”
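To make the idea concrete, the pipeline described in the abstract amounts to an interactive loop: the LLM is prompted for stimuli, its replies are parsed and driven into the design under test (DUT), and coverage feedback shapes the next prompt, with constrained-random testing (CRT) as the baseline. Below is a minimal, hypothetical sketch of that loop; the function and object names (query_llm, dut.drive, coverage.sample, coverage.uncovered_bins) are assumptions for illustration only, not the paper's actual framework, which the authors say will be open-sourced.

```python
import random  # used only for the constrained-random baseline

PROMPT_TEMPLATE = (
    "You are generating test stimuli for a hardware module.\n"
    "The module sums its 8-bit input into an accumulator.\n"
    "Uncovered coverage bins so far: {uncovered}\n"
    "Reply with a list of integers (0-255), one per line."
)

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (assumption, not the paper's code)."""
    raise NotImplementedError

def parse_stimuli(reply: str) -> list[int]:
    """Keep only the lines of the LLM reply that parse as in-range integers."""
    values = []
    for line in reply.splitlines():
        line = line.strip()
        if line.lstrip("-").isdigit():
            v = int(line)
            if 0 <= v <= 255:
                values.append(v)
    return values

def llm_stimuli_loop(dut, coverage, max_rounds: int = 20) -> None:
    """Interactively elicit stimuli until coverage closes or rounds run out."""
    for _ in range(max_rounds):
        uncovered = coverage.uncovered_bins()
        if not uncovered:
            break
        prompt = PROMPT_TEMPLATE.format(uncovered=uncovered)
        for value in parse_stimuli(query_llm(prompt)):
            dut.drive(value)      # apply the stimulus to the DUT
            coverage.sample(dut)  # update functional-coverage bins

def crt_stimuli_loop(dut, coverage, max_iters: int = 10_000) -> None:
    """Constrained-random baseline: draw uniformly from the legal input range."""
    for _ in range(max_iters):
        if not coverage.uncovered_bins():
            break
        dut.drive(random.randint(0, 255))
        coverage.sample(dut)
```

The contrast the paper measures is between these two loops: the LLM-driven version can reason about which bins remain uncovered, while the CRT version relies on random draws to hit them.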

Find the technical paper at https://arxiv.org/abs/2310.04535. Published October 2023 (preprint).

Zhang, Zixi, Greg Chadwick, Hugo McNally, Yiren Zhao, and Robert Mullins. “LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation.” arXiv preprint arXiv:2310.04535 (2023).

Related Reading
Test Challenges Mount As Demands For Reliability Increase
New approaches, from AI to telemetry, extend well beyond yield.
AI, Rising Chip Complexity Complicate Prototyping
Constant updates, more variables, and new demands for performance per watt are driving changes at the front end of design.


