The dream may be to have generative AI write RTL, but text is only one of the necessary things AI must understand to help with many design and implementation problems.
Progress is being made in generative EDA, but the lack of training data remains the biggest problem. Some areas are finding ways around this.
Generative AI, driven by large language models (LLMs), stormed into the world just two years ago, and since then has worked its way into almost every aspect of our lives. Some people love it, others hate it, and some even give dire warnings about machines taking over. But the semiconductor industry, which enables AI’s advance, is much more cautious about the adoption of this new technology.
Semiconductor Engineering first wrote about generative EDA in May 2023, and the general consensus then was that it faced huge challenges, primarily due to the very limited amount of high-quality data available for training. Companies protect their own IP, and few have enough data of their own to train effectively, except across designs that are very similar.
As Dean Drako, president and CEO of IC Manage, put it back then: “People are using AI to write software programs. It is improving their productivity because it’s taking some of the drudgery out of it. But it is unclear how this is going to play in our industry, which is very risk-averse. AI will be used in design stuff where it is checked by humans and gives humans a template to start with, and then they can move forward more effectively, more quickly. Our industry wants surety. Eventually, we’ll get models that are trained well enough where we’ll get that surety.”
Two years is a long time in AI terms, but it’s just a heartbeat in the semiconductor industry. To put that in perspective, it’s time for one new design iteration, one new manufacturing technology advance, and perhaps one new release of an EDA tool. So how much has the industry advanced on the dream?
Productivity enhancers
At DAC this year, there were several examples of co-pilots produced both by EDA companies and within design houses. There are fundamentally two types of co-pilots today: tool co-pilots and design co-pilots. A tool co-pilot is a productivity aid for a given tool, while a design co-pilot understands the language being used for design.
It is easier to train LLMs on tools and languages than on design itself, and doing so confines the training requirements. “One use model for co-pilots is to help engineers use the tools,” says Rod Metcalfe, product management group director at Cadence. “This is something we can do because we write the tools. We know how they work. We have documentation and other things to train the system on. A tool co-pilot is a stepping-stone to a larger end goal. Because they are embedded in the tools and the user interface is very familiar, people can be interacting with the tool that they would normally use, with an added chat bot type interface. They can start asking questions of the large language model.”
Anything that has sufficient documentation can be incorporated into a chat bot. “Instead of reading the documentation and looking for your answer, you can just use the search functionality,” says Alex Petr, senior director for Keysight. “LLMs will be able to provide answers right away. When added to a tool, they are going to help the engineer to exercise tasks faster. You are going to take all those tedious tasks away from an engineer and basically elevate his capability to the point where you can focus on, ‘What is it he’s trying to achieve?’ rather than how to achieve it.”
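The retrieval step behind such a documentation chat bot can be sketched in a few lines. This is a minimal, hypothetical illustration with stdlib-only term-overlap scoring; the manual passages and the query are invented, and a real co-pilot would pair retrieval like this with an LLM that phrases the final answer.

```python
# Minimal sketch of a documentation co-pilot's retrieval step, assuming the
# tool's manual has already been split into passages. Only the ranking is
# shown; an LLM would consume the top passage and phrase the answer.
from collections import Counter
import math

def tokenize(text):
    return [w.lower().strip(".,?") for w in text.split()]

def score(query, passage):
    # Overlap-based relevance: count query terms appearing in the passage,
    # damping repeated terms with a log factor.
    q = Counter(tokenize(query))
    p = Counter(tokenize(passage))
    return sum(min(q[t], p[t]) / math.log(2 + p[t]) for t in q)

def retrieve(query, passages, k=1):
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

# Invented manual snippets for illustration:
manual = [
    "The report_timing command prints the critical paths after placement.",
    "Use set_max_fanout to constrain fanout before synthesis.",
    "The floorplan editor supports channel width checks.",
]
print(retrieve("how do I see critical timing paths?", manual)[0])
```

Production systems use embedding-based retrieval rather than term overlap, but the shape of the flow (split the docs, rank passages against the question, answer from the best match) is the same.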
That will enable incremental capabilities. “We need to be able to understand the design specification,” says Cadence’s Metcalfe. “The reality is that most design specifications are written in a high-level English language format. We can use large language models to start interpreting those high-level specifications. Based on that, we can then start generating design collateral. Or, rather than starting from an LLM and going to RTL, how about starting from the RTL? That may be human engineer written, and you start explaining that RTL using a large language model. People would immediately value that because they can interpret the results. They can start understanding their RTL at a much higher level.”
There are other possible starting points. “Consider ECAD models that are needed to create the designs,” says Chandra Akella, director for advanced technologies in the electronic board systems segment of Siemens EDA. “Today, the user manually creates the symbols, the footprints, the 3D models, or the IBIS models, and the simulation models by going through the data sheets manually and extracting that information. Instead of that, can we use the power of AI to extract the information knowledge out of the data sheets, and then drive the model generation automatically? This would tremendously help the librarians. They have to go through these steps manually, where they have to read 300 to 500 pages of the data sheet, and then convert that into useful models. With this application of AI, that whole effort will be exponentially reduced. That will transform a task that takes two days today into a two-hour task.”
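The extraction step Akella describes can be illustrated with a toy pattern-matcher. This sketch assumes the datasheet text has already been pulled out of the PDF, and the parameter lines and units are invented; real datasheets bury these values in tables and figures, which is exactly where multi-modal AI would earn its keep.

```python
# Toy illustration of driving model generation from datasheet text: pull
# "name: value unit" electrical parameters into a dict that a downstream
# symbol/IBIS generator could consume. The excerpt is invented.
import re

PARAM = re.compile(r"(?P<name>[A-Za-z ]+?):\s*(?P<value>[-\d.]+)\s*(?P<unit>[A-Za-z]+)")

def extract_params(datasheet_text):
    params = {}
    for m in PARAM.finditer(datasheet_text):
        params[m.group("name").strip().lower()] = (float(m.group("value")), m.group("unit"))
    return params

excerpt = """
Supply voltage: 3.3 V
Output impedance: 50 Ohm
Rise time: 1.2 ns
"""
print(extract_params(excerpt))
```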
The creation of RTL in a vacuum is not that useful. “We all know that LLMs can generate RTL code, but without integration into the design and verification flow the result is not very useful for the development team,” says Cristian Amitroaie, CEO of AMIQ EDA. “LLMs must work in concert with knowledge of the design and testbench so that queries and results can be fully accurate. In addition, it must be easy to run continuous, incremental checks on the LLM-generated code. This ensures not only correct syntax and semantics, but also conformance to the project coding rules. The best way to accomplish this is for the integrated development environment (IDE) to connect with the user’s choice of LLM. This flow works not just for RTL designs, but also for testbenches, assertions, power intent files, and any other types of design and verification code that LLMs generate now or in the future.”
In some areas, tool productivity can be a barrier to entry. “Making the tools easier to use is more relevant in the FPGA domain, where ease of use is absolutely critical,” says Metcalfe. “You are not aiming for the ultimate efficiency when you go to an FPGA, especially if it is for prototyping reasons. Ease of use is critical, and AI absolutely can help to translate an RTL description to a mapping on an FPGA.”
The next phase is optimization. “For any design, we have to run SPICE simulations, EM simulations, electro-thermal simulations,” says Keysight’s Petr. “As soon as you have that information in a design flow, you can make decisions based on it. The biggest problem with optimization has always been tradeoffs. If you can explain the benefits of each of those design solutions, or design decisions, so that the human in the mix can make those final decisions, that will be a huge step. In the future, we might be able to automate this even further, but this is an incremental process.”
Another form of optimization is design space exploration. “There are multiple ways of creating a schematic, but what is the best way given the set of requirements?” asks Siemens’ Akella. “Today, there are solutions that work in a brute force way to simulate and identify the best possible scenario. Using AI, we can drastically reduce that cycle by identifying the most promising solutions, running simulations on those candidates, and then generating the optimized circuitry.”
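The brute-force baseline Akella contrasts with AI-guided exploration looks like this in miniature. The “circuit” is a made-up cost function over two component values, standing in for a SPICE run; an AI-assisted flow would replace the exhaustive sweep with a surrogate model that picks only promising candidates.

```python
# Sketch of brute-force design space exploration. simulate() is a stand-in
# for a SPICE run: we pretend the goal is an RC time constant near 1.0
# while penalizing large resistor values (an area cost). Purely illustrative.
import itertools

def simulate(r, c):
    tau = r * c
    return abs(tau - 1.0) + 0.01 * r

def brute_force(r_values, c_values):
    # Evaluate every combination: len(r_values) * len(c_values) "simulations".
    return min(itertools.product(r_values, c_values),
               key=lambda rc: simulate(*rc))

r_sweep = [0.5, 1.0, 2.0, 4.0]
c_sweep = [0.25, 0.5, 1.0, 2.0]
print(brute_force(r_sweep, c_sweep))
```

The AI variant would train a cheap predictor on a handful of these evaluations, then run the expensive simulator only on the points the predictor ranks highly, cutting the cycle Akella describes.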
Multi-modal design
While a lot of attention has been paid to the generation of textual language input for design, many aspects of the design flow are graphical. Multi-modal capabilities are much more recent. “If you look at designs, they are done in design environments that are fairly complex,” says Petr. “The data we are looking at is schematics, layout. In the analog world you don’t even code. You draw schematics, and then you draw layouts from it. That kind of information needs to be taught to AI agents. LLMs are designed around words, around transformers, which allow you to understand the context of the discussion. But for most of the solutions in EDA, what you see is the use of surrogate models.”
PCB design is similar. “If we can add support for PCB schematics as a modality, the AI could extract knowledge from existing schematic graphics,” says Akella. “Then we should be able to drive the generation of those schematics based on text, or a picture on a napkin, or a photo. Once we add support for PCB schematics as a modality, then we can do all of these complex operations.”
There are many aspects of the digital flow that also involve graphics. “Verification relies heavily on waveform traces,” says Metcalfe. “To have AI systems be able to interrogate those waveforms, find anomalies, draw your attention to things that don’t look right, is immensely beneficial. You are interpreting graphical information and trying to look for anomalies in that system. This does not have to be text based. It is naturally text based at the moment, because that’s what we’re used to. We write scripts, we write specifications, but that’s definitely not the way things need to be. I draw the same experience from place-and-route, where you have a floor plan. A floor plan is graphical. It’s a graphical representation of your chip. There are many things we could do with that floor plan image, in terms of quality, looking for congestion, identifying if the channels are too narrow. All of that you could do based on a picture. Multi-modal AI is very relevant for EDA type workflows, as well as more conventional text and chat-bot type interfaces.”
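The waveform-anomaly idea Metcalfe raises can be made concrete with a tiny detector. The trace and the one-sample glitch threshold here are invented for illustration; real flows read simulator waveform databases, and the point of multi-modal AI is to spot such anomalies visually rather than with hand-written rules like this.

```python
# Small sketch of waveform anomaly detection: scan a sampled digital trace
# for level runs too short to be legal (a glitch). Trace values and the
# minimum width are illustrative assumptions.
def find_glitches(trace, min_width=2):
    """Return start indices of level runs shorter than min_width samples."""
    glitches, run_start = [], 0
    for i in range(1, len(trace) + 1):
        # A run ends at the end of the trace or when the level changes.
        if i == len(trace) or trace[i] != trace[run_start]:
            if i - run_start < min_width:
                glitches.append(run_start)
            run_start = i
    return glitches

# A clock-like signal with a one-sample spike at index 4:
trace = [0, 0, 1, 1, 0, 1, 1, 0, 0]
print(find_glitches(trace))
```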
Training data
The lack of high-quality data for training was identified as a major limitation in the early days, and that problem remains. “All the IP we need is the intellectual property of a given company,” says Petr. “They guard it. They protect it. You can’t just go and start collecting all that information in a single place and try to train something. We are operating in a data-scarce domain, which will make it really hard to build universal baseline models.”
But there is a glimmer of hope in the PCB world. “If you look at the component manufacturers, they often publish two things — component data sheets and reference designs,” says Akella. “These reference designs are not big designs, but have a very small, focused functionality that they create for promoting their products, the components. We are investigating whether we can take these small reference designs, which are published as a PDF document, and extract the knowledge out of them. Now I have these building blocks. These building blocks have the components and the circuitry for that small block, which is made up of multiple components. And the components are defined in the data sheets. Merging these two information sources, the data sheet and the reference designs, we are looking at the possibility of using this information to train the model.”
This is not available within the IC design flow. “There is not that much public RTL available on the Internet,” says Metcalfe. “Going straight to Verilog from a high-level English language specification may be a jump too far. It is also difficult to understand the RTL that’s been generated. Another approach could be to go through an intermediate format. Maybe we could go through a C language format. There are billions of lines of C code out there. If you can go from English language to a C model and then a C model to RTL, you get to the same result. It’s much more controlled, because you can use a tool that goes from a high-level SystemC description to an RTL representation, and then the designer has lots more control over what type of RTL is generated.”
Within a company, there may be a lot more training data available. “All the knowledge that internal experts have built is captured in the designs and the models they have created,” says Akella. “This knowledge is codified in the designs, in the models that they created. If we can extract that knowledge from these completed production quality designs and models, then we can easily help fresh industry persons with one or two years of experience. The tools become smarter, and will help assist new users in making the right choices.”
Becoming better than humans
Is it possible that AI could produce better solutions than humans? Designs have evolved over time and are based on many layers of decisions, some made decades ago and no longer questioned. But AI could tackle many design tasks on its own, explore optimization strategies, compare designs, and learn. It need not be constrained by the past, as current design teams are.
“By viewing AI and large language models as a way of helping engineers get through the existing process, you have constrained them to the way we design chips today,” says Metcalfe. “This may not be the most efficient way. In the future there’s going to be much wider scope for that kind of investigation.”
Similar strategies have been used for training robots to play soccer, where they come up with better moves when they are given the freedom to learn by making mistakes and correcting their behavior, compared to robots that were told how to perform certain maneuvers. “An example often seen in academic literature is the matching network, where you see weird QR codes or weird shapes that have been generated,” says Petr. “A human would never draw that. It wouldn’t make sense. But from a machine perspective, it totally does. We still have to prove that those shapes and forms can be used in production.”
Humans are constrained in the amount of detail they can keep in their minds at any one time. “There’s potential for changing the way people design chips today,” says Metcalfe. “It’s not necessarily a revolution where you just set the end goal and leave the machine to figure it out. But if we look at what’s going on with chip design today, it hasn’t fundamentally changed over the past few decades. People design blocks and they bring these blocks together into a system. It’s a hierarchical block-by-block approach. An alternative approach is to let the machine design all the blocks. Rather than having each engineer design a block at a time, why can’t we use AI to say this is what the system-on-chip must be?”
Conclusion
While progress has been made on co-pilots that will increase designer productivity, little else has progressed in terms of generative AI for EDA. This is primarily because of the lack of training data.
At the same time, constraining AI to the fixed processes that humans have evolved may restrict its ultimate capabilities. If a system allowed machines to learn by themselves, and if they could learn at just 10X the speed of humans (which should be fairly easy, since they do not face the same commercial constraints), then roughly four decades of accumulated design experience could be matched in about four years. What comes after that is anybody’s guess.
Related Reading
Will AI Disrupt EDA?
It’s been decades since there was a disruption within EDA, but AI could change the semiconductor development flow and force changes in chip design.
RAG-Enabled AI Stops Hallucinations, Adds Sources
New GenAI method enables better answers and performs more functions.
AI For Data Management
The quantity and sources of data remain challenging, but new approaches are being developed to deal with it.