Large-Scale Integration’s Future Depends On Modeling

The progeny of VLSI is 3D-IC and a range of innovative packaging, but all of it has to be modeled to be useful.


VLSI is a term that conjures up images of a college textbook, but some of the concepts included in very large-scale integration remain relevant and continue to evolve, while others have fallen by the wayside.

The portion of VLSI that remains most relevant for the semiconductor industry is “integration,” which is pushing well beyond the edges of a monolithic planar chip. But that expansion also is bumping up against the laws of physics, where power reduction, delivery, and dissipation are the big limiting factors. And it requires some complex models, few of which exist today, to solve these issues in an acceptable time frame and at a reasonable price point.

For example, a recent VLSI conference in Japan included an Imec-led workshop on backside power delivery. Julien Ryckaert, imec vice president and one of the conference’s organizers, said putting everything on the same substrate is still the best way to maximize performance. “Every time you get off the main substrate, you lose by an order of magnitude in terms of power and speed,” he said. “Now there are certain things you can’t put on the same substrate, like a DRAM or flash or certain components. That’s why we had the SoC paradigm. That was the quest to try to find a way to put as much as possible on the same die, in the same substrate to maximize efficiency. That’s why VLSI, the way people perceived it in the beginning, is now 3D technology, backside technology, very advanced packaging technology, which allows you to propagate signals with the same efficiency but across different substrates.”

Others agree. “We’re starting to grow the X and Y of the package — so much that there has to be some sort of solution,” said Ivor Barber, AMD’s corporate vice president for packaging, at a CHIPcon panel this week. “I compare the building strategy of New York to that of Los Angeles. When Los Angeles was founded, it became as big as they wanted it to be. New York was very bounded and went straight up. And that’s where we have to focus going forward. We really have nowhere to go but up. In the end, we can’t keep making the substrates bigger and bigger and bigger. So in that regard, thermal is the biggest challenge.”

The chip industry keeps innovating along the way, but the fundamental problems don’t change. “The evolution of where some of those densely packed VLSI chips have gone over a few generations is 3D-IC and a whole range of innovative packaging,” observed Daren McClearnon, product manager at Keysight Technologies.

This has led to a change in thinking. “That whole SoC paradigm is getting slowly broken,” Ryckaert noted. “It’s no longer costing you a factor of 10 in power to bring a signal from one die to another, with smart packages, stacking technology, and other options available today. If the goal is still to have more function per unit area, now you can do it with multiple substrates, organizing things on chiplets. At that point, an SoC is not the answer to what you’re looking for, and you can have a smarter way of organizing your system that occurs in conjunction with the fact that systems are becoming more distributed. This breaks the SoC paradigm, so that’s why it’s hard to talk about VLSI these days under those conditions.”

Modeling in the present day
Another paradigm that requires fixing is how to create the necessary models. With the increase in design complexity, some of today’s models have grown so big they cannot be run in a reasonable amount of time.

“The systems we’re trying to simulate and model keep getting bigger at all scales,” said Marc Swinnen, product manager at Ansys. “You have systems like entire cars or multiple chiplets on a 3D-IC. As you go up the chain, there’s more information, and it’s impossible to drag all that information with you. There’s a requirement for some level of abstraction, but on the other hand, you don’t want to lose the accuracy.”

Others point to similar concerns. “If you put too many dependencies into your model, it gets bloated and takes more time to simulate than the reality, because you’ve over-engineered the model,” said Roland Jancke, head of department for design methodology at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “But that’s always the question of modeling. You need to be as abstract as possible, and as accurate as needed.”

The problem is a variation on the punchline of the physics class joke about modeling: “assume a spherical cow in a vacuum.” There needs to be a balance between simplifying enough to make a question solvable and oversimplifying it to the point that the answer no longer has relevance to the real world.

“But in the semiconductor world, there is a subtle but important distinction,” said Swinnen. “It’s not so much about simplification. It’s about which of this data you need to bring along with you to a higher level, and which of the data is not directly visible and can be nested into a simpler model.”

The answer is a familiar one, which is raising the abstraction level through reduced-order models. “You include enough of the detail so that the higher level sees what it needs to see, but you simplify the unnecessary detail,” Swinnen said. “For a system-level analysis, we have a whole series of these models — thermal, power, signal integrity, ESD, each of which captures specific issues. Then we can put them together into one single file.”

The ‘roll up’ technique is frequently used. “When you look at a chip’s multiple layers, starting from the transistor level and up to the top level, usually when a chiplet talks to the outside world, the inputs/outputs are through the higher-level layers, and the lower levels that connect to the transistor are actually pretty deeply buried inside the chip,” he said. “The outside doesn’t really directly connect to that. So, you can take the lower levels and collapse them all into a simple electrical model. Thus, instead of having a dozen or so layers with all that detail, you only have the top three layers in full detail. The other layers below are represented as effective electrical models that show the outside how they behave. And that whole system from the top looks very realistic and behaves exactly like the real thing. But it has a fraction of the data. Each component in the system can be modeled with a ROM, or you can mix-and-match where you leave some components in full detail and use ROMs for the others. This gives an order of magnitude speed-up for system-level analysis without significantly impacting accuracy.”
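
To make the roll-up idea concrete, the short sketch below collapses a buried interconnect ladder into a one-pole reduced-order model using its Elmore delay, then compares the ROM’s step response against the full-detail simulation. This is an illustrative exercise, not any vendor’s actual flow, and every element value in it is an assumption chosen for the example.

```python
# Illustrative sketch of the "roll up" idea: a buried RC ladder (full detail)
# is collapsed into a one-pole reduced-order model (ROM) via its Elmore delay.
# All element values are assumptions chosen for the example, not real data.
import numpy as np

def simulate_rc_ladder(R, C, v_in=1.0, dt=1e-13, t_end=0.5e-9):
    """Full-detail model: step response of an RC ladder (explicit Euler)."""
    n = len(C)
    v = np.zeros(n)
    times, out = [], []
    for k in range(int(t_end / dt)):
        v_old = v.copy()
        i_in = (v_in - v_old[0]) / R[0]
        for j in range(n):
            i_out = (v_old[j] - v_old[j + 1]) / R[j + 1] if j + 1 < n else 0.0
            v[j] = v_old[j] + dt * (i_in - i_out) / C[j]
            i_in = i_out
        times.append((k + 1) * dt)
        out.append(v[-1])
    return np.array(times), np.array(out)

def elmore_tau(R, C):
    """ROM: collapse the ladder into a single pole via the Elmore delay."""
    return sum(sum(R[: i + 1]) * C[i] for i in range(len(C)))

if __name__ == "__main__":
    R = [50.0] * 8     # ohms per buried segment (illustrative)
    C = [20e-15] * 8   # farads per buried segment (illustrative)
    t, v_full = simulate_rc_ladder(R, C)
    tau = elmore_tau(R, C)
    v_rom = 1.0 - np.exp(-t / tau)   # one-pole step response of the ROM
    print(f"Elmore tau = {tau * 1e12:.1f} ps, "
          f"max |full - ROM| = {np.max(np.abs(v_full - v_rom)):.3f} V")
```

The same mix-and-match applies at the system level: components under scrutiny keep their full detail, while everything else is swapped for its ROM.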

AI has a role in this, as well, and some of the same concerns, including how intelligent to make an AI/ML model. “In some cases you are limited by the hardware,” noted Marc Greenberg, group director for product marketing in Cadence’s IP Group. “Of course, more powerful hardware comes at a huge cost. Those pieces of hardware are turning out to be extremely expensive. So there’s always going to be that tradeoff of how much hardware capability you can have. How much time are you willing to wait for the answer? And how complex do you want to make it? There’s always going to be a tradeoff. Of three choices, pick two.”

The other big constraint is time. Fraunhofer’s Jancke pointed to the need for more efficient simulation, but added that it’s not a solution that can be easily automated. “For a number of years now we have suggested a multi-level approach, so that you have the models at different levels of abstraction,” he said. “Where you want to investigate more thoroughly, you go deeper down into the details of the question. And for the other parts, you stay more abstract, so you have different levels within one simulation.”
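
A minimal sketch of that multi-level idea, with hypothetical block names and purely illustrative equations, is to mix one detailed block with cheap behavioral ones inside a single simulation loop and pick the fidelity per block for the question at hand:

```python
# Illustrative multi-level simulation: the block under investigation runs a
# detailed model, the rest run behavioral stand-ins, all in one loop.
# Block names, gains, and time constants are made up for the example.
import math

def lna_detailed(v_in, state, dt):
    """Detailed model: first-order dynamics with a 1 ns time constant."""
    tau, gain = 1e-9, 20.0
    state += dt * (gain * v_in - state) / tau
    return state, state

def lna_behavioral(v_in, state, dt):
    """Abstract model: ideal memoryless gain."""
    return state, 20.0 * v_in

def filter_behavioral(v_in, state, dt):
    """Abstract model: a simple clamp standing in for a full filter."""
    return state, max(min(v_in, 1.0), -1.0)

# Choose the abstraction level per block for this particular investigation.
chain = [("lna", lna_detailed), ("filter", filter_behavioral)]

dt, t, states = 1e-10, 0.0, {"lna": 0.0, "filter": 0.0}
for step in range(200):
    signal = 0.05 * math.sin(2 * math.pi * 5e6 * t)   # 5 MHz test tone
    for name, model in chain:
        states[name], signal = model(signal, states[name], dt)
    t += dt
print(f"output after {t * 1e9:.1f} ns: {signal:.4f} V")
```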

Different approaches
One solution that may help integrate models is the Functional Mock-up Interface (FMI), which allows the exchange of dynamic simulation models using a combination of XML files, binaries, and C code.

“Digital twins, which is the modern term for ‘modeling and simulation,’ are better than the system-level models we had in the past, in that they incorporate real-time data from the field to improve model accuracy,” Jancke said. “The FMI is the backbone of a digital twin, so that you have one common interface that can work with different models at different levels of abstraction, different models of computation, and different simulation engines. An example is an engine simulation containing finite element models of mechanical and thermal parts, electrical models of sensors and driver circuits, as well as a state charts model of the motor controller, all connected via FMI.”
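
As a concrete sketch of that exchange, the open-source FMPy package can load and run a Functional Mock-up Unit exported by another tool. The FMU file name, variable names, and parameter below are hypothetical placeholders, not a specific vendor’s model.

```python
# Sketch of exchanging a model via FMI using the open-source FMPy package.
# The FMU file, variable names, and start value are hypothetical placeholders.
from fmpy import read_model_description, simulate_fmu

FMU = "motor_controller.fmu"   # hypothetical FMU exported by another tool

# Inspect the interface the packaged model exposes.
description = read_model_description(FMU)
for var in description.modelVariables:
    print(var.causality, var.name)

# Run it under this tool's own stop time, outputs, and parameter overrides.
result = simulate_fmu(
    FMU,
    stop_time=1.0,                              # seconds of simulated time
    output=["shaft_speed", "winding_temp"],     # hypothetical outputs
    start_values={"load_torque": 0.5},          # hypothetical parameter
)
print("final shaft speed:", result["shaft_speed"][-1])
```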

One of the biggest outstanding challenges is how to ensure that a model will be valid for its intended purpose. That requires comprehensive model-based design and validation, an effort much larger than most people would acknowledge.

“This is important if we’re to get to the next level of autonomous driving,” Jancke said. “Carmakers say a new vehicle needs to be tested over 1 million kilometers, but they cannot afford to do all of that on the road. If they only have 10% on the road, the other 90% needs to be virtual. How do you trust these models and allow the results of that investigation to be relevant for homologation, if it’s only virtual? Tesla’s record in letting the customer test it isn’t reassuring.”

But developing models is a challenge in itself, and one that is constantly evolving and changing. “A model is defined by its boundary conditions,” said Ryckaert. “When you set specific boundary conditions, you establish your assumptions. Adhering to these conditions allows a model to represent reality accurately. The challenge arises as models evolve over time within a rigid optimization structure. They have been extensively optimized for a particular set of conditions, which has made them highly accurate in predicting outcomes, thanks to feedback that corrected any discrepancies.”

However, he warned, problems emerge when you encounter an effect within the system hierarchy that significantly impacts another domain.

“Often, these effects lie beyond the original model’s boundary conditions, as they were not previously known or considered. Now, you face the task of capturing these new effects across boundaries, but it becomes difficult to ensure the model evolves appropriately when the boundary conditions change. To address this, you need to rethink how you abstract and account for these cross-domain effects properly. Attempting to model the entire system-on-chip (SoC) would be impractical, considering various scales, time scales, physical dimensions, etc. You wouldn’t even be able to build the model. It would take centuries to simulate. So it is crucial to find a way to abstract and incorporate these effects without sacrificing the model’s validity and simulation efficiency.”
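
One way to picture the role of boundary conditions, as a hedged sketch with purely illustrative ranges and a made-up fitted response, is a model wrapper that refuses to answer outside the conditions it was calibrated for rather than silently extrapolating:

```python
# Illustrative sketch: a model carries its own boundary conditions and rejects
# queries outside its calibrated range. Ranges and the fitted delay equation
# are assumptions invented for this example.
from dataclasses import dataclass

@dataclass
class BoundedModel:
    name: str
    valid_temp_c: tuple    # (min, max) calibration range, degrees C
    valid_vdd: tuple       # (min, max) calibration range, volts

    def delay_ps(self, temp_c: float, vdd: float) -> float:
        for label, value, (lo, hi) in [
            ("temperature", temp_c, self.valid_temp_c),
            ("supply", vdd, self.valid_vdd),
        ]:
            if not lo <= value <= hi:
                raise ValueError(
                    f"{self.name}: {label}={value} outside calibrated "
                    f"range [{lo}, {hi}]; model needs re-abstraction"
                )
        # Illustrative fitted response, trusted only inside the ranges above.
        return 12.0 + 0.04 * (temp_c - 25.0) - 6.0 * (vdd - 0.75)

cell = BoundedModel("nand2_rom", valid_temp_c=(0, 105), valid_vdd=(0.65, 0.85))
print(cell.delay_ps(temp_c=85, vdd=0.70))   # inside the boundary conditions
# cell.delay_ps(temp_c=140, vdd=0.70)       # would raise: outside calibration
```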

Conclusion
From his perspective working on microwave systems, Keysight’s McClearnon points to current problems and future possibilities. “Today, mainstream EDA tools are allowing humans to create designs with billions of transistors, and manage performance vs. industry trends toward ‘density’ such as thermal, parasitics, and sheer scale,” he said. “Where EDA starts to lose advantage is in the higher mmWave frequency bands of 5G and 6G, and correspondingly faster picosecond edge rates. This area of analog is still a black art in some ways, and harder to automate. In terms of radios, the region above 100 GHz (sub-terahertz) is especially difficult. That’s why that spectrum is available. It’s hard to even get accurate measurements from test equipment, so the models are problematic and now need to account for things that could be neglected in previous generations. New packaging is needed, new design methodologies, and modular (heterogeneous) mixtures of technologies. So it’s only now that semiconductor processes and analytical horsepower are able to open up frontiers defined by Moore, Shannon, and Maxwell. This is where the miners work on the details.”

What kinds of tools and methodologies will be required to develop chips in the future isn’t entirely clear. “The system awareness perspective needs to penetrate more in the VLSI community,” said imec’s Ryckaert. “We need each other on both ends, the manufacturing side, but also the architecture. If we could find a better and smoother way of seeing how those two are entangled, and what kind of technology choices are better geared to answer architecture bottlenecks and systems bottlenecks, we would make much better systems.”
