CEO Outlook: More Data, More Integration, Same Deadlines

How design is changing with the shift from homogeneous ICs to heterogeneous, multi-chip packages.


Experts at the Table: Semiconductor Engineering sat down to discuss the future of chip design and EDA tools with Lip-Bu Tan, CEO of Cadence; Simon Segars, CEO of Arm; Joseph Sawicki, executive vice president of Siemens IC EDA; John Kibarian, CEO of PDF Solutions; Prakash Narain, president and CEO of Real Intent; Dean Drako, president and CEO of IC Manage; and Babak Taheri, CEO of Silvaco. What follows are excerpts of a panel discussion held by the SEMI ESD Alliance.

SE: We’ve seen a big shift over the past couple of years from designing a chip for functionality to really designing a chip around the flow and processing of data. We have more data, and a lot more issues to deal with that are tied directly to the data. How does that affect design?

Sawicki: One of the most surprising things that happened a few years ago was when Apple became the first company to come out with a 64-bit architecture for an application processor. Prior to that, every time you thought about 64-bit, it was an address space issue about being able to manage bigger sets of data and bring in bigger pieces of software. But Apple didn’t do it for that reason. They did it because it let them be more power-efficient. What that really points to is designing the silicon to the software stack sitting on top of it. Since that time, there’s been a huge amount of change. It’s no longer just about designers validating to spec or catching something they missed. It’s about really taking a look at what the end user application is. That end user application may go beyond simple data processing. It may involve interfacing to the outside world, and that’s changing both design and validation. They have to span out and increasingly handle validating an end user software stack operating in the real world, which means way more data processing on the design side of things, far more investment in the end user experience, and a far more holistic view of how you optimize the design.

Segars: One of the points that comes from that is the complexity of architecting your design. A lot of the challenges ahead of us are about optimizing the architecture, because it is about the data flow. It’s about heterogeneous architectures and picking the right way to architect your chip. That requires doing high-level simulations and running much larger data sets through, because it isn’t just about a set of fixed functions that have to work. It’s about how efficiently the system works, how well it processes that data, how much it can re-use the data once it’s on-chip, and how power-efficient it can be through addressing those issues effectively. That makes simulating these designs and running verification cycles much more complex. And we need to be able to cope with much bigger systems and much more easily make tradeoffs between the components we might put in the system to effectively manage these very large data sets and the rate at which they’re flowing through. There are a lot of new design challenges that come with this kind of data-driven era that we’re in.

Tan: We’re moving into a data-centric era where the whole infrastructure, from cloud to network and storage, has to scale. The complexity is increasing a lot, and AI, machine learning, and data analytics will grow into it. Right now we’re moving into what I call Software 2.0, using AI/machine learning to develop software based on examples and models. We are moving into a new era.

SE: What does this mean for the rest of design?

Drako: Amdahl’s Law says that for every increment of CPU horsepower, you have to have a certain amount of I/O. That’s been pretty accurate. Back when we did the first 8-bit processor, it had 8 bits of I/O and 8 bits of memory bandwidth. Memory bandwidth and the number of pins keep going up, and it’s been pretty proportional to the amount of CPU capability we have. With the introduction of GPUs, we went more parallel and were able to get a lot more data in. But the design of the CPUs and the processors has always been driven by the applications on top. In 1990 at Apple, we looked at the code and the graphics rendering routines and designed instruction sets in the CPUs to make those go faster, which is what Intel does every day. So I’m not convinced there’s a drastic change. With the AI stuff that’s permeating the industry, the Tensor processors and GPUs require more data than traditional processors. Getting the data in and out of those is creating a bottleneck. With TensorFlow on a GPU, one of the major bottlenecks is getting the data on and off to run the computation. And you can’t change models because you can’t swap them out fast enough. There isn’t enough bandwidth to do that effectively. Those things are driving more of a data-driven design, but it’s not a huge divergence.
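
For reference, the proportionality Drako is invoking is Amdahl’s rule of thumb for balanced systems, not the better-known speedup law. A rough, commonly cited statement of it is sketched below (approximate orders of magnitude, not exact figures):

% Amdahl's balanced-system rule of thumb (approximate, commonly cited form):
% a machine executing N instructions per second should have on the order of
% N bits/s of I/O bandwidth and N bytes of main memory.
\[
  \text{I/O bandwidth [bits/s]} \approx \text{instruction rate [instructions/s]},
  \qquad
  \text{memory size [bytes]} \approx \text{instruction rate [instructions/s]}
\]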

Narain: Clearly, there are a lot of architectural changes. There are configurable designs, and designs are much bigger, so now you have all the electrical schemes, clocking, power management, and more complexity in verification. All of that can lead to reliability issues. So architecturally, people are building recovery mechanisms into designs, and all of this is introducing new failure modes. But these designs also have to be completed in the same time to market. People are investing in design methodologies, and EDA is innovating in ways to support these design methodologies. All of this is creating a lot of opportunities for everyone.

SE: We’re moving from a world of homogeneous planar designs into one of heterogeneous, multi-die packages. So while some of this may be evolutionary, it’s still a big jump. What does that mean for design and tools and reliability?

Kibarian: When you did a monolithic chip, 95% of the value was the foundry building that chip. Then you’d move to wafer sort, and packaging was an afterthought with 99.99% yield, followed by speed binning and test. The test was at the end of the flow, and it was just a final check. Now, if you look at most chiplet-type strategies, test is in the middle of the flow. There’s a lot of value being added in the packaging and combining of a lot of chips, whether it’s panel integration or reconstituted wafers. The manufacturing flow is a lot more complex, and the risk is on the product groups, not as much on the foundries. There are more sophisticated screening approaches, more data required in the assembly flow as it becomes much more important, and integrating across the supply chain is now a requirement to drive reliability and to get cost and yield to where people expect them to be. It’s now a very mechanical process, whereas in the past it was primarily a chemical and physical process.

SE: There are a lot of components that never had to fit into the same package or onto the same chip before. How do you deal with that?

Taheri: When you combine a bunch of sensors, a bunch of digital and analog functions or power functions, you find that integrating them is not possible due to process limitations and incompatibilities between the processes. So you end up having multi-chip modules. The challenge then is how to manage the data flow with these multi-chip modules that architecturally have to play together. Simulations become much more difficult. Passing data through these becomes more difficult. A lot of effort will be spent on two aspects. One is what the EDA companies and designers have to do, dealing with vectors and verification and analysis of that, and how you can use machine learning and AI to reduce the data set to give us the six-sigma or seven-sigma confidence that we need. That’s at the EDA and design level. At a different level, you have to deal with the packaging and the flow of how to simulate the packaging of the chips, the radiation effects, thermal effects, but also the data flow that has to go through the whole package. I see a very focused effort by a lot of companies on how to reduce the data set. It’s not a matter of flowing all the data through it. It’s about reducing the data set to get the yield and the functionality out of it.

SE: We’re seeing a shift toward much more customization, more domain-specific and focused designs, that are manufactured in much smaller lots. How does that affect design?

Sawicki: Some customers want all aspects of system design co-optimization. That includes tools to help them understand how to apportion design between the different elements they’re going to integrate into a package, help to make decisions on process technology, bandwidth, communications within, and placement. So that’s a whole set of tools that need to be integrated, along with ways to model all of this and drive down to system analytics. At the other end of the stack, they’re looking at how you get Gerber to speak to GDSII. Packaging is a Gerber world. IC is a GDSII world. They don’t look like each other. The tools that do these things don’t speak to each other. So this issue of bringing together what traditionally have been board and package types of problems with IC problems spans a whole range of opportunities that we need to speak to over the next couple of years.

Tan: The complexity of designs has increased a lot. So verification has become more complex. And if you’re designing silicon, you have to look at the whole system in terms of EM, power, and signal integrity. System analysis has become critical. You need to look at the whole system rather than just the silicon, and you need to look at all of these things together.

SE: Given this system emphasis, how hard is it to integrate tools from smaller EDA players?

Narain: A lot was said about development complexity, and that’s true. But time to market remains the same, so the productivity pressure on the designers is much higher. This is where a lot of methodology innovation is happening in semiconductor companies, and there’s an attempt to make it more efficient. In verification, we see this as a push to shift left. You want to verify as much of the design as you can earlier, and at a more cost-effective level. At every level of integration, the verification costs go up by 10X. This is not just about pushing the old methodologies and trying to get the last bit of horsepower out of them. This is leading to new methodologies that are being deployed. Our focus is on static sign-off, and we’re finding a very good reception for this class of tools. As for how to integrate them into a larger platform, the difference between verification tools and design tools is that the output of the verification tools is consumed by humans, so to integrate into the platform you need to do that on the input side, not on the output side. And that makes it easier to integrate as a parallel activity in the design flow.

Drako: All of the major companies use our tools. None is devoted to a single EDA tool vendor. They all use a mix of the three major vendors for some component. They all have a design flow, or their methodology, and it’s all considered very proprietary and very hard for them to change. But it’s all about managing the data. The folks who win have very strict methods for managing their data globally. The ones who are a little messier don’t have control over their data in terms of watching it and revisiting it.


