Preparing For An AI-Driven Future In Chips

Designs need to be flexible enough to handle an onslaught of continuous and rapid changes, but secure enough to protect data.


Experts at the Table: Semiconductor Engineering sat down to discuss the impact of AI on semiconductor architectures, tools, and security, with Michael Kurniawan, business strategy manager at Accenture; Kaushal Vora, senior director and head of business acceleration and ecosystem at Renesas Electronics; Paul Karazuba, vice president of marketing at Expedera; and Chowdary Yanamadala, technology strategist at Arm. What follows are excerpts of that conversation. Panelists were chosen by GSA’s EMTECH Interest Group. Find part 2 of this discussion here.


L-R: Accenture’s Kurniawan; Renesas’ Vora; Expedera’s Karazuba; Arm’s Yanamadala.

SE: How will AI impact chip design in the future? Will it just be the big chipmakers and systems companies that gain an advantage, or will it be spread across the entire ecosystem?

Karazuba: From a design perspective, AI helps to democratize. If AI can be used to advance chip design without the typical costs of an extremely large design team, tools, and so on, there is an opportunity for more democratization. What AI doesn’t do, though, is solve the cost of masks, or test, or qualification, or wafer cost. So AI is certainly a way for more people to get into the chip design space, but it doesn’t set a level playing field for small versus large companies, simply because of the cost of everything else involved.

Vora: I agree. One thing I’ve heard time and again from the big EDA companies is that it’s really helping to solve the labor shortage. AI is taking away a lot of things that traditionally were labor-intensive, but which easily can be done now through machine learning and AI embedded within the EDA tools. We’re also seeing co-pilot-type work in the EDA space, which will show significant productivity gains over time as people get more comfortable with it. AI is not going to solve all the problems. It’s not going to change the dynamics of the big guys versus the little guys in the industry, but it has democratized access, making chip design and EDA tools more affordable. We have close to 200 startups trying to make an AI-based SoC in Silicon Valley today, and a lot of that is thanks to AI making chip design tools more accessible.

The other point involves how AI is impacting system design. If you look back to the early days of compute, architectures typically were pipelined and von Neumann-based. They were designed to accelerate workloads that were linear lines of code, often referred to as Software 1.0. What we’re seeing now are more multi-threaded and multi-core architectures, which are needed to accelerate and handle bigger and bigger workloads. Fundamentally, from a chip architecture standpoint, we’re getting into what we call Software 2.0, where machine learning gets intermingled with traditional software. This is where heterogeneity in the architecture becomes very, very important. Specific elements in the hardware are designed to accelerate certain types of workloads, such as the exponential and other non-linear functions common in machine learning, alongside pipelined architectures that can run in parallel. We’re entering an interesting era of heterogeneous compute, which is going to drive a lot of changes across the hardware and software stack.
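As a hedged illustration of that heterogeneous-compute point (not any particular vendor’s runtime; the operator names, unit labels, and mapping table below are assumptions for illustration only), here is a minimal Python sketch of routing a mixed Software 1.0/2.0 workload to the execution unit best suited to each operator:

```python
# Hypothetical illustration of heterogeneous dispatch: route each operator in a
# mixed "Software 2.0" workload to the unit best suited to it. The unit names and
# the op-to-unit table are illustrative assumptions, not a real runtime API.

from dataclasses import dataclass

# Which class of execution unit handles which kind of operator (assumed mapping).
OP_TO_UNIT = {
    "matmul":  "npu",   # dense linear algebra -> parallel MAC arrays
    "conv2d":  "npu",
    "softmax": "dsp",   # exponentials / non-linear functions -> vector DSP
    "fft":     "dsp",
    "parse":   "cpu",   # branchy, sequential "Software 1.0" code -> CPU
    "control": "cpu",
}

@dataclass
class Op:
    name: str
    kind: str

def schedule(graph: list[Op]) -> dict[str, list[str]]:
    """Group ops by the unit they should run on (CPU is the fallback for unknown kinds)."""
    plan: dict[str, list[str]] = {"cpu": [], "dsp": [], "npu": []}
    for op in graph:
        plan[OP_TO_UNIT.get(op.kind, "cpu")].append(op.name)
    return plan

if __name__ == "__main__":
    workload = [Op("tokenize", "parse"), Op("embed", "matmul"),
                Op("attention", "matmul"), Op("probs", "softmax"),
                Op("postprocess", "control")]
    print(schedule(workload))
    # {'cpu': ['tokenize', 'postprocess'], 'dsp': ['probs'], 'npu': ['embed', 'attention']}
```

The point of the sketch is only that a scheduler, rather than the application code, decides which block runs each operator, which is what makes heterogeneous architectures useful when machine learning and traditional software are intermingled.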

Kurniawan: With AI getting more infused into design tools, smaller players can get access to that, and they can accelerate their chip design cycles and minimize costs. But that’s just one component of the entire design cost. There also are requirements for talent, CapEx, and running test wafers. Those costs will not be eliminated completely. They may be minimized, but there are still a couple of other things that need to be taken care of. In chip manufacturing, there is a ‘carry-over’ element, meaning the more experience you get from working on a chip design, the more it helps with the next iteration of the chip. The big players in design don’t become leaders overnight. They have to build up the design skill sets and capabilities over a long period of time. They get to the leadership position through groundbreaking products and an enormous amount of investment. I’m excited to see how this design space is going to be revolutionized, but a lot of progress still needs to be made.

Yanamadala: While I generally agree with the increasing presence of AI in chip design, it’s crucial to exercise caution when deploying AI in these activities. This precaution is necessary to prevent the leakage of proprietary information into the public domain, and to address potential IP contamination issues arising from the inadvertent use of unapproved LLMs or generative AI tools. Despite these concerns, it’s clear that silicon IP creators, silicon makers, and EDA companies have successfully integrated AI into design flow methodologies and tools. It’s evident that AI’s role, whether in AI-assisted code generation or in automating specific design functions, will continue to expand over time. However, it’s important to emphasize the responsible use of AI, recognizing that AI-based tools and methodologies, while capable of reducing human effort in certain areas of chip design, are not push-button solutions.

SE: AI plays across a lot of areas. There’s AI training in the cloud, inferencing on the edge, and AI used to develop these chips. But is there any cohesiveness across all these efforts?

Vora: People think of AI as this thing, but it’s actually an engineering discipline. It’s an approach to solving a problem. We’ve solved problems using traditional methods in the past. Now we’re looking at problems where we basically flip them around, look at the data, and then try to make use of that data to predict when something good or bad will happen in the future. AI is a way to address problems by applying machine learning and intelligence to certain aspects of system design. AI is being used in every aspect of semiconductor manufacturing, everything a semiconductor touches, and all the products that are made using semiconductors. AI is pretty ubiquitous in terms of most of the approaches people are taking toward problem solving. But we’re still very early in the ecosystem for things to become more cohesive and tightly coupled.

Karazuba: In many parts of the corporate world there’s a fundamental lack of knowledge of what AI really is. It seems like every company is an AI company today. If you scan LinkedIn, everyone’s an AI-enabled company. But if you ask a second-level question about exactly what they do that involves AI, there’s not necessarily an answer. AI is a great buzzword, but there is a big difference between what AI means for manufacturing, what it does for chip design, and what it does for inference in the cloud. And if you look at inference in the cloud and inference on the edge, they’re very different. The models and the expectations of performance are different. It’s a very small term for an incredibly large industry and an incredibly large promise. Over time, as people begin to really understand what it can do, there may be better words to describe what AI is. But the promise of AI is well founded, and the value of AI is real. And we’re just starting to see that today.

Kurniawan: The majority of the people in the public domain think that AI is about a high-performance, $10,000 chip in a data center somewhere, running the model, and spitting out predictions or recommendations. But AI also is being implemented in edge devices. They run on processors that need to be hyper-efficient, doing things like facial recognition and speech recognition. The device itself runs on coin-sized batteries. So there’s that type of AI, as well, which right now is not in the forefront, but it’s growing tremendously. The IoT market is getting really big, and there’s growing awareness that edge AI will be where most of the compute is.

Karazuba: As a maker of edge AI inference IP, for the vast majority of customers we speak to in the chip world, having AI inference capabilities locally is a must-have for their next design. The level of performance depends on the market, the budget, the intended use cases, the time in market, etc. And the idea of having specialized AI processing is really a must-have for just about every high-end design today, regardless of market. Probably 95% of the mid-end SoCs and ASICs that are being designed today are being designed with dedicated AI processing inside of them. Every single edge device in the future likely will have some sort of AI inference capabilities inside of it, and that obviously needs to be done in an extremely power-efficient envelope.

SE: At the chip level, this raises all sorts of interesting challenges, though. AI chips are pretty much designed to run full bore, which is different from what we’ve seen in the past, where processors go on, they go off. The whole idea behind AI is to get these algorithms processed on various MAC elements or accelerators as fast as you possibly can. That can shorten the lifespan of these devices. In addition, there are security issues that nobody has even dreamed about because these AI systems basically morph over time to adapt and optimize themselves. There also is new software coming in on a regular basis, and lots of updates. And then there’s generative AI, which wasn’t even on most people’s radar a year ago. There are a lot of moving pieces, and our industry has never reacted well to this much uncertainty at one point. How do all these factors play out?

Karazuba: I’ve been doing chips and IP for 25 years, and I have never seen anything move as fast as AI does. Some of us knew about ChatGPT a year ago, but very few people knew what it could do, and I don’t think anyone in their wildest dreams would have forecast the impact it has had. AI models change dramatically. In six months, there have been radical changes in the size and complexity of the networks we’ve been interfacing with. I do not relish the job of a chip architect, or someone in charge of chip specification, who is looking at something that needs to be in the market for 6 to 10 years. The needs 6 months from now are going to be so dramatically different from today’s that it often forces a brute-force approach to AI processing, which is to throw as many MAC elements as possible into your design, run it as fast as you can, and hope that you’re able to handle the math that’s coming at you in 6 months, or 6 years. It’s going to be tough for a lot of people.

Yanamadala: While training currently consumes a significant portion of deployed computing resources, inference is poised for significant growth, particularly on edge devices. This includes exciting trends in generative AI, where applications will increasingly rely on on-device processing. This shift demands versatile compute solutions — smaller form factors and efficient power usage to handle resource-constrained environments. Overall, the migration of AI workloads from the cloud to the edge underscores the critical need for balanced performance, where both efficient computation and power consumption are paramount. To unlock the full potential of AI, our industry must tackle three primary challenges — compute, memory, and power consumption. The availability and efficiency of compute are vital for any AI application, ensuring that the right type and size of compute resources are accessible to execute AI workloads. Many AI applications involve processing massive amounts of data, leading to challenges in data movement during workload execution. Efficiently handling this data movement requires addressing issues such as the proximity of compute to memory. Additionally, power efficiency poses a significant challenge for scaling AI. Solving the power efficiency challenge is intricately connected to addressing the challenges of compute efficiency and the proximity of compute to memory. So while acknowledging the swift and transformative nature of AI, I’m optimistic that, as an industry, we’ll seize the enormous opportunities it presents, despite the accompanying array of possibilities and challenges. Undoubtedly, there also will be a continuous process of learning and necessary adjustments.
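To make the compute-versus-data-movement point concrete, the back-of-envelope sketch below compares arithmetic energy to memory-traffic energy for a single dense layer. The per-operation energy constants are assumed, order-of-magnitude placeholders, not figures from any specific process node or memory technology.

```python
# Back-of-envelope estimate of where the energy goes in one fully-connected layer.
# The energy-per-operation constants below are assumed, order-of-magnitude values
# for illustration only; real numbers depend heavily on process node and memory system.

MAC_ENERGY_PJ = 1.0      # assumed energy per 8-bit MAC, picojoules
SRAM_ACCESS_PJ = 5.0     # assumed energy per byte fetched from on-chip SRAM
DRAM_ACCESS_PJ = 100.0   # assumed energy per byte fetched from off-chip DRAM

def layer_energy_uj(in_features: int, out_features: int, weights_in_sram: bool) -> float:
    """Estimate energy (microjoules) for one dense layer: MACs plus weight traffic."""
    macs = in_features * out_features
    weight_bytes = in_features * out_features            # 8-bit weights
    mem_pj = SRAM_ACCESS_PJ if weights_in_sram else DRAM_ACCESS_PJ
    total_pj = macs * MAC_ENERGY_PJ + weight_bytes * mem_pj
    return total_pj / 1e6

if __name__ == "__main__":
    # Same layer, weights resident on-chip vs. streamed from DRAM every inference.
    print(f"on-chip weights:  {layer_energy_uj(1024, 1024, True):.1f} uJ")   # ~6.3 uJ
    print(f"off-chip weights: {layer_energy_uj(1024, 1024, False):.1f} uJ")  # ~105.9 uJ
```

Under these assumed numbers, weight traffic from off-chip memory dominates the energy budget, which illustrates the proximity-of-compute-to-memory point above.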

Vora: The time frame in which things are changing is much faster than before. You’ll see a lot of chips come out that will be significantly over-designed to compensate for the unknown, because a lot of the use cases are not mature. Models are continuously changing. It’s not like software where you can change it in a few months. This is a characteristic of any generational shift in technology, where the hardware elements are going to start off being a lot more expensive. As the ecosystem matures, things will stabilize. It is a challenging time for a lot of designers, but it’s also a lot of fun for people in this space.

Karazuba: One trend we are seeing specifically with AI is the idea of co-processors. An application processor or ASIC likely has a larger general-purpose NPU on it, but the OEM also will develop a separate piece of silicon, a co-processor, that is geared and optimized for specific networks. And then, a couple of months after launch, they may architect the neural processing specifically around those to get the best possible PPA envelope and the best possible user experience. That works really well in consumer devices that have a relatively shorter life expectancy, like smart phones. In markets with longer life expectancy, like cars, it’s just going to be brute force.

Vora: At the extreme edge, there are compute nodes that are extremely constrained in terms of memory and compute. This may be a bunch of sensors — analog/mixed signal sensors, or digital sensors — that collect data and do something with it. The problem there is that the diversity of use cases and the fragmentation in the ecosystem are so significant that it’s very hard to make chips or find that one killer application that’s going to drive adoption of AI. What you look at when you define products in that space are algorithms like CNNs, which are still very prominent in the visual space, or signal processing and advanced math-type machine learning, which is very prominent and relevant for sensors and multi-sensor systems. These are fairly mature. The models have been around for 20 or 30 years. There’s a lot of hype around transformers, but eventually, as things scale and mature and come closer to the user edge, the technologies will be a lot more stable. The challenge is the diversity of the use cases and applications. That’s where, even in general-purpose microcontroller and microprocessor-type products, you’re going to start seeing more dedicated neural processing units and DSPs that are specifically designed to accelerate machine learning models. GenAI today is extremely hyped up because of the hyper-consumerization of transformers and ChatGPT. But if we ever have to scale GenAI, it’s not sustainable. It needs to move toward the edge. It needs to become more intention-based. That’s when things will start maturing a lot more. As things move away from the cloud, they tend to stabilize.
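As a small, hedged example of the signal-processing-style machine learning that fits such constrained sensor nodes, the sketch below runs a tiny 1-D convolution plus a threshold over a sensor stream in NumPy. The filter taps and threshold are made-up placeholders, not a trained model.

```python
# Toy sketch of edge-style signal-processing ML on a constrained sensor node:
# one small 1-D convolution (the kind of kernel a DSP or small NPU accelerates)
# followed by a threshold decision. Filter weights and threshold are illustrative
# placeholders, not a trained model.

import numpy as np

# Assumed 8-tap detection filter (placeholder values).
KERNEL = np.array([0.1, 0.2, 0.4, 0.8, 0.8, 0.4, 0.2, 0.1], dtype=np.float32)
THRESHOLD = 2.5  # assumed decision threshold

def detect_events(samples: np.ndarray) -> np.ndarray:
    """Return indices where the filtered sensor signal crosses the threshold."""
    filtered = np.convolve(samples.astype(np.float32), KERNEL, mode="same")
    return np.flatnonzero(filtered > THRESHOLD)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 0.2, size=256)  # background sensor noise
    signal[100:108] += 1.5                    # injected "event"
    print(detect_events(signal))              # indices near sample 100
```

This is the class of kernel that the dedicated DSPs and small neural processing units mentioned above are designed to accelerate within a tight power envelope.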

Kurniawan: There is a strong need to tailor the hardware, and that needs to be guided by strong collaboration with ecosystem partners — especially the software developers. You want visibility into how the software is going to evolve, because that will determine how you design your hardware, and vice versa. On flexibility, based on some previous work we’ve done, we see chipmakers trying to anticipate how to address the different use cases being explored right now. That translates into hardware architectures that need to be flexible and reconfigurable enough to cover use cases one, two, and three, or any adjacent use cases in the future. That’s part of the reason we’re seeing good growth in demand for FPGAs.

SE: In the past when we’ve seen new technology explosions, it’s been in one area and we’ve had a map that went across the entire industry. Now we have all these different pieces accelerating at different rates, right? So we may have a vision that all of this will work together, but not everything will be up to date, and some of these things will change. Flexibility at advanced nodes, in particular, and in advanced packaging adds a lot of margin, which impacts performance and power. How do we solve this?

Kurniawan: That’s a complex equation, for sure. Flexibility is one approach. But the bigger question is, ‘How do you manage the other parameters?’

SE: Then what comes next? We’re taking computing out of a box and putting it everywhere. What’s the impact of all of this?

Karazuba: It’s a great opportunity for disruptors in the market — people who are willing to take risks. The semiconductor industry has a history of risk taking since its inception. But companies in this industry have gotten really big, and with size generally comes a bit of risk-averseness. There’s an opportunity for people who bring in not only new technologies, but new approaches to business, to supply chain, to chip design, and to helping people take their products to market. The IP model generally has been to create one IP and sell it to a bunch of people. In the past, it typically was not about creating a slightly different IP for each one of your customers, but that’s what’s starting to happen with a lot of the market now. With uncertainty comes change, and with change comes people who are willing to go the extra mile to capitalize on it. The legacy players aren’t going away, but the semiconductor market as a whole could look very different in 5 to 10 years, based on how AI evolves and how new technologies like chiplets come into play. If quantum computing makes an entrance, as some people predict, this conversation could be very, very different in 5 years.

Vora: As far as AI at the edge goes, a lot of the onus is on the semiconductor industry to develop the software stack and the tools to make customers successful with AI. It’s not just about creating complicated chips, adding fancy capabilities, and giving them to customers. In most cases, it’s about how you actually use a chip. How do you compile code for this chip? Do you have the right ecosystem and the right stack to do something meaningful with this chip? That’s where the complexity comes in. It’s not just the semiconductor design. If you look at some of the large players that have been extremely successful in the semiconductor space today, it’s not just because they make cool chips. It’s because they have a great ecosystem and a great software stack that really empowers their customers. I was reading a report by McKinsey, where they said that with AI, the semiconductor industry has an opportunity to extract as much as 50% value from the vertical stack. If you compare that to what happened in the PC era, or the smart phone era, the semiconductor industry was left with 10% or 15% value. Everything else went to the software companies. The semiconductor companies have the keys to the kingdom now. The question is how they’re going to make the most of it and develop the software that enables the hardware to actually do something.

Kurniawan: Edge AI and IoT are big, growing markets. If you look at reports from Gartner and Yole, you will see numbers that show strong growth. The problem with IoT now is that it’s like a cookie jar filled with really tasty cookies, but there are too many hands trying to get in. That is how I see the fragmentation in that space. There are several collaboration platforms for integration, where a few companies bring together different hardware players, software players, security vendors, and so on, and enable them to plug in their products and see the compatibility or suitability of those products with the rest of the components in the technology stack. The hope is to address problems earlier, so that everyone comes up to a standard through the natural evolution of the platform. In effect, they create a highway where players can test their products with low risk. And if that platform is successful, it will enable players to bring their products to market faster and at reduced cost.

Related Reading
Broad Impact From Accelerating Tech Cycles
Part 2 of the above roundtable: how disruptive new technologies affect the infrastructure that will leverage them.
How Much AI Is Really Needed?
Performance depends on the application it is being applied to.
Partitioning Processors For AI Workloads
General-purpose processing, and lack of flexibility, are far from ideal for AI/ML workloads.


