LLM Technology For Chip Design

Automating the design workflow to reduce errors introduced by humans.


In the nine short months since OpenAI brought ChatGPT (short for Chat Generative Pre-trained Transformer) and the broader phenomenon of large language models (LLMs) to the global collective consciousness, pioneers from every corner of the economy have raced to understand the benefits, and the pitfalls, of deploying this nascent technology in their particular industries. And as it turns out, semiconductor chip design is a perfect candidate.

Cadence is no stranger to generative AI, of which LLMs are one aspect. Our current applications focus on chip design optimization, automation, and acceleration in the later stages of implementation. Yet it’s in the initial, human-led design process where bugs and bottlenecks are most likely to occur—and we can put LLM strengths to good use.

We are making public the first robust proof of concept of an LLM in chip design. Another chatbot in the corner of the screen? Yes, but this LLM is so much more than a modern-day Clippy. To focus on its conversation skills would be to misunderstand just how powerful this technology stands to be in solving some of chip design's most pressing challenges: automating the workflow to reduce human-introduced errors in the design specification, the design itself, and all the project documents needed to create a complex semiconductor device.

An ambiguous problem

The starting point for the design of any silicon chip is a high-level hardware and software specification, described in a natural language such as English, to capture as much detail from as diverse a set of engineers as possible.

It is then the engineer's job to take this natural-language specification, with all its potential ambiguity and variation in style, level of detail, and so on, and translate it into code written in a hardware description language (HDL) such as Verilog or VHDL. It is also their job (or perhaps that of another engineer) to generate the connection lists used by verification tools to systematically check that everything is as expected. This process is repeated for all the functionality in the chip hardware. While it uses automation, it remains a highly intensive human process: people creating and checking, using tools where they can.
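To make that translation step concrete, here is a minimal sketch of the kind of mapping an engineer performs by hand: a one-sentence spec fragment rendered as a small Verilog module. The spec wording and the module name are invented for illustration; they are not drawn from the JedAI proof of concept.

// Spec (hypothetical): "The block is a 4-bit counter that increments
// on every rising clock edge and clears synchronously when rst is high."
module counter4 (
  input  wire       clk,
  input  wire       rst,
  output reg  [3:0] count
);
  always @(posedge clk) begin
    if (rst)
      count <= 4'd0;         // synchronous reset, as the spec requires
    else
      count <= count + 4'd1; // increment on each rising clock edge
  end
endmodule

Even in a case this small, the engineer must resolve ambiguities the prose leaves open, such as the reset polarity and whether the counter is meant to wrap or saturate.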

Writing good HDL code from a specification targeting today's advanced process nodes with stringent power, performance, and area (PPA) requirements would take a single engineer years to complete. A team might achieve it in months. Yet teams are stretched increasingly thin as the semiconductor skills gap continues to grow, with a projected shortfall of 67,000 technicians, computer scientists, and engineers in the United States by 2030, according to the Semiconductor Industry Association.

As a result, design, creation, and verification processes have seemingly plateaued—with more bugs surviving to final silicon. Design teams must mitigate these bug escapes in software, disabling buggy features in an often futile attempt to avoid a hugely expensive design respin.

Chip design GPT

Today, our JedAI LLM proof of concept focuses on the design-cleaning process. Once the first draft of the HDL description of the chip is created, engineers can use the LLM chatbot to interrogate the design, validate it against the specification, explore and rectify issues, prompt analysis tasks, and receive explanations in their natural language. They can run spec reviews, code reviews, test reviews, and change-management reviews. This can save hundreds of hours of individual engineering time and hundreds of group meetings for specification and code reviews, and it can remove many bugs that previously surfaced only during the ramp-up to regression verification.
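As an illustration of the kind of issue a design-versus-specification prompt can surface, consider a spec that calls for an active-low synchronous reset. The module below is a hypothetical sketch, not JedAI output; the comment marks the common polarity bug such a review could flag.

module reg8 (
  input  wire       clk,
  input  wire       rst_n,   // spec: active-low synchronous reset
  input  wire [7:0] data_d,
  output reg  [7:0] data_q
);
  always @(posedge clk) begin
    // A frequent spec/RTL mismatch is testing rst_n without the
    // inversion, i.e. if (rst_n), which silently flips the polarity.
    if (!rst_n)
      data_q <= 8'd0;
    else
      data_q <= data_d;
  end
endmodule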

Fig. 1: LLM extension to Cadence JedAI Platform.

The basic usage of the JedAI LLM is to load in the architecture specifications, design specifications, integration connection specifications, and the design itself. From there, users can issue prompts to the JedAI LLM such as "list the names of irregular nets" or "list all possible irregular pins," automate testbench hookup, and use tool-script and RTL code auto-completion, as sketched below.
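For the testbench-hookup case, the generated code might resemble the following sketch, which wires up the hypothetical counter4 module from earlier; the names and stimulus are illustrative, not actual JedAI output.

module counter4_tb;
  reg        clk = 1'b0;
  reg        rst = 1'b1;
  wire [3:0] count;

  // Generated hookup: connect every DUT port by name.
  counter4 dut (
    .clk   (clk),
    .rst   (rst),
    .count (count)
  );

  always #5 clk = ~clk;   // free-running clock, one cycle per 10 time units

  initial begin
    #12 rst = 1'b0;       // release reset shortly after startup
    #100 $finish;         // end the simulation
  end
endmodule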

Cadence also recognizes the need to maintain the highest levels of data security when giving AI algorithms access to confidential IP. Our LLM implementation runs entirely on premises, with all data stored and processed in the JedAI platform inside the enterprise firewall. The LLM processes run on the customers' server infrastructure, whether CPU- or GPU-based.

We intend to grow the JedAI platform’s LLM capabilities as the project evolves, including potentially expanding the LLM to enable the generation of verified HDL code from a natural language specification in an IP-protected way. As LLMs are trained on vast amounts of data in natural languages such as English, they are spectacularly good at reading, evaluating, and summarizing information intended for humans. The JedAI proof of concept is the first step in what is likely to be a long process for deploying LLMs in chip design.

