EDA Pushes Deeper Into AI

AI is both evolutionary and revolutionary, making it difficult to assess where and how it will be used, and what problems may crop up.


EDA vendors are ramping up the use of AI/ML in their tools to help chipmakers and systems companies differentiate their products. In some cases, that means using AI to design AI chips, where the number and breadth of features and potential problems are exploding.

What remains to be seen is how well these AI-designed chips behave over time, and where exactly AI benefits design teams. And all of that needs to be compared against designs created without AI-enhanced tools, and against chips performing standard kinds of computations.

In some respects, AI is an evolutionary improvement of the kind of software EDA vendors already offer. “What we used to talk about as ‘computational software’ has become AI,” observed Kam Kittrell, vice president, product management in the Digital & Signoff Group at Cadence. “We’re great at creating these types of algorithms that are related to AI, so we can pick up and develop AI technology in order to make our products better.”

At this point, if there are limits to what AI can do, they aren’t obvious. “By using a large language model, we can read a specification and make determinations about what is wrong with a particular design,” Kittrell said. “It’s like having another engineer on your team to review the spec and thus considerably shorten debug time. This can be extrapolated to many different areas, because there’s a great deal of collateral that’s generated from a specification that may be automatically generated using LLM technologies. It can shorten your schedules tremendously.”
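
To make the idea concrete, here is a minimal sketch of spec-versus-RTL review with an off-the-shelf LLM API. The model name, prompts, and the review_design helper are illustrative assumptions, not Cadence's actual flow.

```python
# Minimal sketch: using an LLM to cross-check a design against its spec.
# Assumes an OpenAI-style chat API; the model name and prompts are
# illustrative, not any EDA vendor's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_design(spec_text: str, rtl_snippet: str) -> str:
    """Ask the model to flag mismatches between a spec excerpt and RTL."""
    prompt = (
        "You are a design-verification reviewer.\n"
        "Specification excerpt:\n" + spec_text + "\n\n"
        "RTL excerpt:\n" + rtl_snippet + "\n\n"
        "List any behaviors in the RTL that contradict the specification."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example use (toy inputs):
# print(review_design("FIFO depth shall be 16.", "parameter DEPTH = 8;"))
```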

Others agree. “We think about EDA as the perfect target for using pattern matching and machine learning,” said Steve Roddy, chief marketing officer at Quadric. “It’s the classic min-cut algorithm on steroids. You’ve got billions of things to place, and you’re trying to minimize wires crossing boundaries. There have been successive iterations of algorithms that power different generations of EDA tools, and they’re all using some complex heuristics to figure out, if I place all these things, what gives me the shortest average wire length and the minimum number of wires and crossings. That was easy when we had two to four metal layers. Now you may have 14 layers of metal and 82 masks. It’s insane.”
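
The underlying min-cut idea is easy to sketch. The toy heuristic below starts from a balanced two-way split and keeps random pair swaps that reduce boundary crossings; real placers layer on timing, congestion, and legality constraints that this sketch ignores.

```python
# Toy sketch of the min-cut idea: split cells into two balanced regions and
# keep random pair swaps that reduce wires crossing the boundary.
import random

def cut_size(edges, side):
    """Count edges whose endpoints land in different regions."""
    return sum(1 for u, v in edges if side[u] != side[v])

def greedy_min_cut(cells, edges, iters=500, seed=0):
    rng = random.Random(seed)
    side = {c: i % 2 for i, c in enumerate(cells)}   # balanced start
    best = cut_size(edges, side)
    for _ in range(iters):
        u = rng.choice([c for c in cells if side[c] == 0])
        v = rng.choice([c for c in cells if side[c] == 1])
        side[u], side[v] = 1, 0                      # swap preserves balance
        new = cut_size(edges, side)
        if new < best:
            best = new
        else:
            side[u], side[v] = 0, 1                  # revert worse swaps
    return side, best

cells = ["a", "b", "c", "d", "e", "f"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "f")]
print(greedy_min_cut(cells, edges))   # expect a single boundary crossing
```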

Still, the EDA industry is treading carefully here because there is a lot at stake. It’s one thing to spot illegal wire crossings or bugs in a design. It’s quite another to assume that all the significant problems have been identified. And as with other EDA tools, all of this needs to be integrated into existing flows and models, which is non-trivial.

“We look at the full EDA stack, starting from architecture to manufacturing, in order to figure out where the bottlenecks are,” said Arvind Narayanan, senior director for product line management at Synopsys. “In spots where there are a lot of manual iterative loops, there’s a lot of opportunity for AI to help improve productivity. For example, when we look at the stack, digital implementation is a key portion of the design flow in terms of the project cycle, where designers spend a significant amount of time going from an RTL design to physical implementation and signoff. This step can significantly benefit from AI to analyze the solution space automatically and optimize design QoR targets, instead of designers iterating manually.”
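
A minimal sketch of that kind of automated search follows: random sampling over implementation knobs, with a stand-in evaluate_qor function where a real flow would launch place-and-route and read back timing and power. The knob names and scoring are invented for illustration, not Synopsys's tooling.

```python
# Minimal sketch of automated solution-space search over tool settings,
# replacing manual iteration. evaluate_qor() is a hypothetical stand-in
# for a full implementation run.
import random

SEARCH_SPACE = {
    "target_clock_ns": [0.8, 0.9, 1.0],
    "placement_effort": ["medium", "high"],
    "max_fanout": [16, 32, 64],
}

def evaluate_qor(cfg):
    """Stand-in for launching implementation and scoring the result.
    Here it is a toy formula; in practice it is hours of tool runtime."""
    return (cfg["target_clock_ns"] * 10
            + (0 if cfg["placement_effort"] == "high" else 1)
            + cfg["max_fanout"] / 64)

def random_search(trials=20, seed=1):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate_qor(cfg)        # lower is better in this toy metric
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

print(random_search())
```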

Some customers have their own advanced AI/ML schemes. “What they want to do is call a simulator with a new set of data, which turns the current state on its head a little bit,” said Steve Slater, EDA product manager at Keysight. “Instead of the EDA tool in the driver’s seat, it’s the customer’s own AI/ML infrastructure. You can imagine them trying to do gigantic parameter sweeps across all the possible corner cases, and now they’re using AI/ML to come up with a better prediction.”
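
The sketch below shows that pattern with the customer's model in the driver's seat: a surrogate trained on a handful of simulator calls predicts an entire corner sweep, and only the most suspicious corners go back to the simulator for confirmation. The simulate function and parameter ranges are stand-ins, not any vendor's API.

```python
# Minimal sketch of a surrogate model steering a simulator: train on a few
# simulated corners, predict the full sweep, re-simulate the worst cases.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulate(v_dd, temp_c):
    """Hypothetical stand-in for a real simulator call (returns delay, ps)."""
    return 100 + 40 / v_dd + 0.2 * temp_c + np.random.normal(0, 0.5)

# Full sweep grid, but simulate only a small random subset up front.
grid = np.array([(v, t) for v in np.linspace(0.7, 1.1, 9)
                        for t in np.linspace(-40, 125, 12)])
rng = np.random.default_rng(0)
seed_idx = rng.choice(len(grid), size=15, replace=False)
y_seed = np.array([simulate(v, t) for v, t in grid[seed_idx]])

model = RandomForestRegressor(random_state=0).fit(grid[seed_idx], y_seed)
pred = model.predict(grid)               # cheap predictions for every corner

# Hand the five worst predicted corners back to the simulator to confirm.
worst = grid[np.argsort(pred)[-5:]]
confirmed = [simulate(v, t) for v, t in worst]
print(list(zip(worst.tolist(), confirmed)))
```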

That’s the first of a five-level hierarchy that Keysight sketched out to ground their thinking about AI’s potential. A second approach is to put LLMs inside EDA tools, creating domain-specific chatbots to support real-time customer interactions. A third approach is AI-assisted design and routing. “Keep in mind that auto-routers have been around a long time,” Slater said. “The real question is whether there are leaps forward in technology that enable people to design faster than what they’ve been doing already. Maybe it means that you need to build up a gigantic library of good design examples that an AI can be trained upon.”
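
One way to read that last suggestion is as a retrieval problem, sketched below: featurize past designs and pull the nearest matches as training or reference material for a new one. The features and library here are invented, and real use would normalize and greatly enrich them.

```python
# Minimal sketch of a "library of good design examples": find the past
# designs most similar to a new one. Features are toy and unscaled.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical library: [cell_count_k, metal_layers, target_ghz] per design.
library = np.array([
    [120, 10, 1.2],
    [450, 12, 2.0],
    [80,   8, 0.8],
    [500, 14, 2.4],
])
names = ["dsp_a", "cpu_b", "mcu_c", "cpu_d"]

nn = NearestNeighbors(n_neighbors=2).fit(library)
new_design = np.array([[430, 12, 2.1]])
_, idx = nn.kneighbors(new_design)
print([names[i] for i in idx[0]])  # closest prior designs to learn from
```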

In addition, AI can be used to build better models, and to speed up simulators by leveraging more artificial neural networks. “There are physics-based analytical models, but you’ve got to take lots of measured data in order to get there,” said Slater. “What if you could use AI heuristic networks to create a model that’s just as good at curve-fitting, but crucially, doesn’t need as much input data, and can execute much faster because the model is based on neural nets, not complex netlists?”
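
A minimal sketch of that substitution: fit a small neural network to sparse "measured" device data, then evaluate the network in place of the physics model. The diode-style I-V curve and all constants are invented for illustration.

```python
# Minimal sketch of a neural-net behavioral model: fit a small MLP to sparse
# measurements and evaluate it instead of the physics model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def measured_iv(v):
    """Stand-in for bench measurements of a diode-like I-V curve."""
    return 1e-12 * (np.exp(v / 0.026) - 1)

v_train = np.linspace(0.1, 0.7, 25).reshape(-1, 1)    # sparse sample points
i_train = np.log10(measured_iv(v_train)).ravel()      # fit in log space

model = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                     max_iter=5000, random_state=0).fit(v_train, i_train)

v_test = np.array([[0.35], [0.55]])
print(10 ** model.predict(v_test))     # fast neural-net evaluation
print(measured_iv(v_test).ravel())     # vs. the "physics" ground truth
```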

One beneficial aspect of AI is that it can eliminate some of the time spent on heuristics by having the tool run simulations and decide which is the best choice for each situation. “The process can be made much smarter and more correct, which will make it a lot easier for engineers to describe an algorithm and get good circuitry out on the other side of the high-level synthesis process,” said Russell Klein, program director at Siemens EDA.

But there are tradeoffs. While AI can improve simulation speed, it comes at the cost of accuracy. In addition, while large language models can understand the question being asked of them, the answers they return need to be put in context. “It still needs the EDA vendor or the software vendor to feed and curate the context,” Slater said. “Generative AI can give you amazing things if you’re drawing on a really large database of information, but when it comes to design, that information may not be readily available.”

Challenges at the chip level
On the AI chip side, there is a vast difference between chips used in data centers and those used at the edge. In the data center, generative AI is known to have “hallucinations,” and advanced customized hardware has resulted in silent data errors. Both are known to produce wrong results, which may not be a problem for a general search engine, but which can be catastrophic in a military or financial operation.

Making matters worse, AI solutions are domain-dependent. “In the enterprise world, the more mature industries, such as financial institutions and insurance companies, are all trying to accelerate and get into AI as fast as they can,” said Christian Jacobi, an IBM fellow who led the development of the zSystems Architecture. “However, these companies face very different expectations, from both their clients and regulatory bodies, about how they can behave. You wouldn’t want to ask a bank’s chatbot a question and have the thing declare its love for you.”

That’s one of the complexities, said Evan Sparks, chief product officer for AI at HPE. “At many of the customers we’re working with, the architectures are shifting from CPU-heavy to more GPU-heavy. I don’t think it stops there. There’s an alternative — what we are calling an AI-native architecture — systems that go beyond just the silicon of the microprocessor and are purpose-built for these workloads. In AI-native architecture, you really need to think about all layers of the stack, from the choice of silicon, which may be custom-built silicon for model training and evaluating problems, to the interconnects, to the storage, to the software that sits on top and helps end users actually program these things and get their applications out and running efficiently. We’re still in the early innings of moving toward that future.”

At the edge where AI is being built into much smaller and less sophisticated systems, the potential pitfalls can be very different. “When AI is on the edge, it is dealing with sensors, and that data is being generated in real time and needs to be processed,” said Sharad Chole, chief scientist at Expedera. “How sensor data comes in, and how quickly an AI NPU can process it changes a lot of things in terms of how much data needs to be buffered and how much bandwidth needs to be used. How does the overall latency look? Our objective is to target the lowest possible latency, so from the input from the sensor to the output that maybe goes into an application processor, or maybe further processing, we’d like to keep that latency as low as possible and make sure that we can process that data in a deterministic fashion.”
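
The buffering and bandwidth questions Chole raises reduce to straightforward arithmetic, sketched below for a hypothetical camera-fed NPU. All numbers are illustrative.

```python
# Back-of-the-envelope budget for an edge NPU fed by an image sensor.
# All parameters are invented for illustration.
def sensor_budget(width, height, bytes_per_px, fps, lines_buffered):
    frame_bytes = width * height * bytes_per_px
    bandwidth_mbps = frame_bytes * fps / 1e6          # MB/s into the NPU
    buffer_kb = width * bytes_per_px * lines_buffered / 1e3
    frame_period_ms = 1000.0 / fps                    # time per frame
    return bandwidth_mbps, buffer_kb, frame_period_ms

bw, buf, period = sensor_budget(width=1920, height=1080, bytes_per_px=2,
                                fps=30, lines_buffered=16)
print(f"bandwidth {bw:.1f} MB/s, line buffer {buf:.1f} KB, "
      f"frame period {period:.1f} ms")
```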

In addition, semiconductor devices must serve real needs on tight margins, which means any claimed AI differentiation needs to have real value for the customer. “Semiconductor companies are constantly battling it out over pennies,” said Quadric’s Roddy. “You can’t differentiate with tiny marginal changes. If you come up with 17% better energy per inference or whatever, it’s a fleetingly small difference and it’s not enough to break the inertia of the incumbent that’s already there. You need something that’s substantially different, or which is used dramatically differently. You not only have to be orders of magnitude better, you must have a need for doing it orders of magnitude better.”

This is a challenge, because many of the applications are very specific. “There are different things people have in mind when they’re building products,” said Nalin Balan, business development manager at Renesas. “Number one, they want to maintain a reasonable bill of materials. You can’t have this intelligence built in at the cost of making the product 1,000 times more expensive. Thus, the first question is, how do you do all of this, and maintain a reasonable cost of materials? The second question is one of generalization. Will the AI you’ve incorporated generalize in the typical operating conditions in which you expect the product to be deployed? For example, a smart home device has to work under different types of background noises, complex positioning, and other situations. How do you ensure it will?”

There’s another important consideration to all this, as well, noted Roddy. “Does the problem that needs solving stay stable for two or three years? So far, the answer’s been, ‘No,’ because machine learning and AI change so dramatically year to year as new mathematical models are invented and explored.”

Different starting points
It’s also worth noting that despite progress, the AI world continues to debate what AI actually is, what constitutes an AI company — an essential element for continued funding of startups — and how and where AI will best be used in the future.

“There are two levels of this,” said Larry Lapides, vice president at Imperas. “One is at the SoC level, with someone producing an SoC that supports AI. They’re not just putting processors on a chip. They’re going to provide a software stack with it. Then a user can put that on a board in their product. The second level would be somebody who’s producing a product that incorporates AI into it, whether it’s a data center plug-in or IoT at the edge. This is something that is more than just the SoC with software on top of it. It actually provides a real AI subsystem able to interface with a larger environment.”

Lapides noted that optimizing the AI algorithm for the underlying SoC architecture is a significant challenge. “There is so much data and so many scenarios that significant, effectively continuous, software simulation is required to achieve the AI performance requirements with the desired accuracy of results.”

For some, this is simple math. “Remember 30 years ago when we did the linear approximations of curves in Excel? You have a bunch of points and draw a line for the points that have the best regression fit, or you could try a quadratic function or other formula,” said IBM’s Jacobi. “AI today is nothing more than that, but instead of using quadratic functions and 10 points or 100 points, it now uses billions of points to do what is essentially best-fit regression.”
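
That analogy is easy to reproduce: the same best-fit machinery from the spreadsheet, in a few lines of numpy, with the data points invented for illustration.

```python
# Jacobi's analogy made concrete: linear and quadratic best-fit regression.
# Modern AI scales the same idea to billions of points and parameters.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])         # roughly y = 2x

linear = np.polyfit(x, y, deg=1)                 # returns slope, intercept
quadratic = np.polyfit(x, y, deg=2)              # a richer hypothesis class

print("linear fit:   y = %.2fx + %.2f" % tuple(linear))
print("quadratic fit:", np.round(quadratic, 3))
```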

Not everyone agrees, however. And it is easier to point to what AI does — and does well — than what separates an AI company from a non-AI company. “To be an AI company, your product has to have an intimate connection with AI,” said Paul Karazuba, vice president of marketing at Expedera. “For example, if you are a search engine optimization company, and you put generative AI into your software stack that helps enable a better SEO result for your customer, then you are absolutely an AI company. If you use AI in your hiring and your product is pencils, you are not an AI company. There has to be a definable, explainable use of AI in your product or service that is integral to that product or service’s success. And you should be able to quantify your claims and have them backed up by third parties.”

Likewise, Siemens’ Klein defines it narrowly. “An AI company is building an electronic system that uses artificial intelligence to perform part of its function, and that AI algorithm is somehow implemented in hardware or software. It could be software running on a processor, or it could be something accelerated either in a GPU, or a TPU, or a bespoke hardware accelerator for doing that type of AI.”

Still, true differentiators have nothing to do with futurist promises. Instead, what sets a company apart is a focus on engineering fundamentals, said IBM’s Jacobi. “Know your problem. Are you really designing the next big breakthrough thing? Or have you convinced yourself that you are, but you’ve not done the proper research to know what you’re trying to solve? If you’re just trying to build the biggest, baddest floating point matrix multiplication engine, what does that actually solve? You need a holistic approach. For example, why are you optimizing for throughput in a transaction environment? The user may be willing to wait a half-second for auto-complete, but after that half-second, they want 20 words. There are all sorts of tradeoffs. You can only design your solution when you know what you’re designing it for.”

And that is what defines an AI company. “AI is not model building. AI is an engineering discipline,” said Kaushal Vora, senior director, business acceleration and global ecosystem at Renesas. “Like anything in engineering, it starts off with understanding what you’re trying to build, including the constraints of the system and how you instrument the system to collect data that is high integrity, that has good coverage, and enough separation in the features on which you make decisions. Then it goes into building a model, deploying that model, then figuring out how to take care of your system post-deployment.”

Fig. 1: Domains and technologies where AI/ML can provide benefits and market differentiation. Source: Renesas

Related Reading
AI Adoption Slow For Design Tools
While ML adoption is robust, full AI is slow to catch fire. But that could change in the future.
EDA Makes A Frenzied Push Into Machine Learning
All major vendors now incorporate ML in at least some of their tools, with more ambitious goals for AI in the future.
Making Tradeoffs With AI/ML/DL
Optimizing tools and chips is opening up new possibilities and adding much more complexity.


