Can AI Write RTL?

Design reuse has become a staple for the semiconductor design industry, but is it ready to use RTL generated by AI?


Just a few months ago, generative AI was little more than a promise about what might be possible in the future. Today, nearly everyone with an ounce of curiosity has tried ChatGPT. Most people appear to be somewhat impressed by what it can do, but at the same time they see the limitations it has.

As Dean Drako, founder of several companies, told me: “Recently, I needed to write a patent. I described the concept in three sentences and told ChatGPT to write me a patent for it. It spit out a four page patent. Was it perfect? No. Was it a great start? Absolutely. Now I can write the patent in two hours instead of four hours. But everything in it is manually checked.”

While writing a story about the use of AI in EDA, the question came up of using ChatGPT, or something similar, to write Verilog RTL. Most people don't see much value in it. ChatGPT could be trained on existing designs, and some exist in GitHub, RISC-V, and other repositories, but the number is small in the AI sense, and perhaps not enough to even get started. Such models also suffer from the same problem ChatGPT has, in that they can easily be influenced by bad data available on the Internet. Who is to say whether any of that RTL is good, what purpose it was built for, or how it was verified?

EDA does have some experience with this, not in an AI sense, but in a design reuse sense. The industry needed to move faster in creating designs that could utilize the enormous number of transistors becoming available, and it couldn't do that if designers had to worry about every block in the system. Many of those systems required a rich set of communications interfaces, each of which was being updated as fast as possible to take advantage of new technologies that boosted performance. Those products also contained a small analog component, which was outside the internal expertise of most design houses. And processors became a commodity product, because the time and effort necessary to keep them updated, and to build the software support they required, kept growing.

The IP industry was born from the premise that a design house should concentrate on its differentiator rather than spend time and money on the commodity aspects of its chip. The argument was that a team concentrating on a specific communications standard, for example, could devote more time and effort to it if it could then sell the result to multiple companies. In doing so, all the users would get a better design than they might have been capable of on their own, at lower cost.

In the early days of IP reuse, every designer thought they could open a one-man IP shop in their garage, throw together a few lines of RTL, and sell it. The industry quickly realized that buying mediocre RTL was more expensive than doing it yourself. The time and effort necessary to integrate the IP, with whatever restrictions it had, and to verify and debug it when it failed to work, eclipsed the money potentially saved by not doing it in-house.

Cadence took that notion even further. After buying its customers' development groups, it grew a very large services offshoot, Tality, whose role was to design the blocks those customers needed and then resell them to other customers. It didn't work.

What the industry wanted was high-quality IP blocks that had been pre-verified to at least the same level as internal quality standards, that were flexible and easy to integrate, and most importantly that were backed by a good support organization that could respond when required. It took only a few years for those thousands of startups to either fold or be consolidated into a few large organizations.

So how does this relate to generative AI? How much of the RTL that can be found on the Internet is of sufficient quality to be used in training? Maybe all of it, or none of it, but there is rarely anything to demonstrate either way. The verification environments needed to ascertain even that simple measure of quality seldom exist alongside the code. Design IP is also selected on qualities that go beyond logic function. A specific design style may be chosen because it meets other needs, such as low power or minimum area. None of that information is stored where a training engine could use it, meaning it cannot be made available when making an initial selection.
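To make that concrete, here is a hypothetical sketch of two functionally identical register slices written in different styles. The module names and the behavioral clock gate are mine for illustration; the point is that nothing like this intent travels with RTL scraped from a repository:

```systemverilog
// Style A: enable implemented as data recirculation. Simple and
// portable, but every flop sees a clock edge on every cycle, which
// costs dynamic power when the register is mostly idle.
module reg_slice_a (
    input  logic       clk,
    input  logic       en,
    input  logic [7:0] d,
    output logic [7:0] q
);
    always_ff @(posedge clk)
        if (en) q <= d;
endmodule

// Style B: the same logic function, restructured around a clock
// gate (modeled behaviorally here) so the flops see no clock when
// idle. Lower power, but it adds a gated clock that must be
// verified and a dependency on a library clock-gating cell.
module reg_slice_b (
    input  logic       clk,
    input  logic       en,
    input  logic [7:0] d,
    output logic [7:0] q
);
    logic en_lat, gclk;
    always_latch
        if (!clk) en_lat = en;     // latch enable while clock is low
    assign gclk = clk & en_lat;    // glitch-free gated clock

    always_ff @(posedge gclk)
        q <= d;
endmodule
```

Both describe the same function, so a training engine or a tool choosing between them on logic equivalence alone would have no way to know which one a low-power SoC actually needs.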

But why would a design house go this route for a commodity block when it can buy a high-quality component today? Just to save the license fee? It is even less conceivable that it would use this approach for the blocks it considers to be its differentiator. The only exception may be if the training were done locally on previous designs, where the data can be kept secret. But even then, the generative AI can only provide derivatives of those designs, and nothing that would be required to stay ahead of the competition.

Taking it a step further, generative AI that could build testbenches would be more useful, both because it would act as an independent channel for verification and because more time and effort is often spent on verification than on design. Verification is also incredibly inefficient today, with constrained random test pattern generation creating huge numbers of wasteful vectors. EDA companies are already using AI to find some of those vectors and discard them, but why not look at replacing a technology that has long since outstayed its welcome?
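To see the shape of that inefficiency, consider a minimal constrained-random sketch. The class, constraint, and counts here are hypothetical, meant to show the general behavior rather than any particular tool:

```systemverilog
// A randomized packet generator over a small legal space. After a
// handful of iterations every legal address has been exercised, yet
// the methodology keeps producing vectors regardless.
class pkt;
    rand bit [3:0] addr;
    rand bit [7:0] data;
    constraint legal_addr { addr inside {[0:3]}; }  // 4 legal targets
endclass

module tb;
    int hits [bit [3:0]];  // count how often each address is hit

    initial begin
        pkt p = new();
        repeat (1000) begin
            void'(p.randomize());
            hits[p.addr]++;
        end
        // Coverage of the four legal addresses closes almost
        // immediately; the remaining ~996 vectors are simulation
        // time spent re-proving what is already known.
        foreach (hits[a]) $display("addr %0d hit %0d times", a, hits[a]);
    end
endmodule
```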

The problem is that there are even fewer verification examples showing how things could be done better, and training on existing testbench methodologies only keeps you where you are today.

One company did recently introduce what is perhaps a middle ground. Rapid Silicon is using generative AI to help with auto-completion of HDL for FPGAs. It states that “intelligent code auto-completion provides FPGA designers with relevant and contextual suggestions based on their code, removing errors, and streamlining the code writing process.”
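I have no visibility into Rapid Silicon's tool, but the general idea is easy to picture. In the hypothetical fragment below, a designer has written the state encoding and the first arm of a state machine, and a contextual completion engine could plausibly propose the remaining arms from the declarations already in scope:

```systemverilog
module fsm_demo (
    input  logic clk,
    input  logic start,
    input  logic finish
);
    typedef enum logic [1:0] {IDLE, RUN, DONE} state_t;
    state_t state = IDLE, next;

    always_comb begin
        case (state)
            IDLE: next = start ? RUN : IDLE;  // written by the designer
            // A contextual engine can infer the rest from state_t:
            RUN:  next = finish ? DONE : RUN; // suggested completion
            DONE: next = IDLE;                // suggested completion
            default: next = IDLE;             // suggested completion
        endcase
    end

    always_ff @(posedge clk) state <= next;
endmodule
```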

So it is not clear to me whether generative AI for EDA is possible today, or whether it would ever be useful except in a very limited sense. While the software industry is showing interest, it has a much larger example space to start from, and substantially fewer performance and quality metrics that need to be considered.


