Creating complex multi-chiplet systems can no longer be done with a back-of-the-envelope diagram, but viable methodologies are still in short supply.
Virtual prototypes, often treated as a niche tool in the past, are becoming essential for developing complex systems. In fact, systems companies are finding they can no longer function without them.
In the semiconductor industry, a virtual prototype is a model of a system at an abstraction level above RTL. But there is no such thing as ‘the’ virtual prototype. Each is constructed for a particular purpose, which defines the level of accuracy required and the type of information that can be obtained from it. Examples include architectural models, functional models, and representations of the hardware on which software can execute.
What has plagued virtual prototypes is the availability of models. Models are expensive to create and maintain, and if they are used for only one task, it becomes difficult to justify their cost. The other criticism leveled against them in the past is that for many tasks an experienced architect can get close enough, especially when the new design is based on a previous one.
But this is changing. Systems have become so complex that architects cannot use a spreadsheet or the back of an envelope to come up with reliable answers. While it always has been known that problems caught early are cheaper to fix than those discovered later in the development flow, it is becoming very apparent that some of those issues may become chip or system killers.
Systems companies, especially those in the automotive and aerospace sectors, want to put together digital twins, which are another form of virtual prototype. These models extend beyond the electronics domain into mechanical, fluid dynamics, and other physics. Constructing those digital twins requires models of the electronics, putting pressure on the semiconductor industry to find ways both to create those models and to maintain them over the lifetime of the product.
Resurrecting diagrams from 20 years ago (see figure 1) can help to explain some of the issues being faced with virtual prototypes and digital twins. ESL (electronic system-level) tools attempted to separate behavior from architecture. In a mapping process, behavior is assigned to architectural elements, such that one piece of behavior could be mapped to a processor to run in software, while another could be mapped to custom hardware. The architecture not only defines the resources available, but also the paths those pieces of behavior can use to communicate. Aspects of that communication remain to be decided, such as whether it is buffered, held in memory, etc. Once mapped, the performance of actual use cases can be determined.
Fig. 1: Behavior – architecture co-design. Source: Semiconductor Engineering
The behavioral view is generally most attractive for digital twins. The architectural view is most likely to be used for synthetic performance analysis. The mapped structural view is the one most likely to be used in the semiconductor development flow. It is also likely that the functional model would conform to the hierarchy that will be used in implementation.
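As a rough illustration of that mapping step, here is a minimal sketch in Python with entirely invented resources, tasks, and cost figures. It assigns behavioral tasks either to a processor or to a hardware accelerator and sums a coarse latency estimate for one use case; it is the co-design idea in miniature, not any vendor's actual flow.

```python
# Toy behavior/architecture mapping sketch. All resources, tasks, and
# latency figures are hypothetical; real ESL flows use calibrated models.

# Architectural resources and the cost (in microseconds) of running
# one unit of work on each of them.
RESOURCES = {
    "cpu0": {"cost_per_unit_us": 4.0},      # behavior mapped here runs as software
    "dsp_accel": {"cost_per_unit_us": 0.5}, # behavior mapped here becomes custom hardware
}

# Communication paths defined by the architecture, with a fixed transfer cost.
PATHS = {("cpu0", "dsp_accel"): 2.0, ("dsp_accel", "cpu0"): 2.0}

# One use case expressed as an ordered list of behavioral tasks:
# (task name, units of work, resource the task is mapped to).
USE_CASE = [
    ("parse_frame",   10, "cpu0"),
    ("filter_block",  40, "dsp_accel"),
    ("post_process",   8, "cpu0"),
]

def estimate_latency(use_case):
    """Sum compute time, plus communication cost whenever a task hands
    data to a task mapped onto a different resource."""
    total, prev_resource = 0.0, None
    for name, units, resource in use_case:
        total += units * RESOURCES[resource]["cost_per_unit_us"]
        if prev_resource and prev_resource != resource:
            total += PATHS[(prev_resource, resource)]
        prev_resource = resource
    return total

print(f"estimated use-case latency: {estimate_latency(USE_CASE):.1f} us")
```

Remapping a task to a different resource changes both the compute and communication terms, which is exactly the tradeoff the mapped structural view is meant to expose.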
The reality is that people only adopt a technology when there’s no other way. “In some areas, people were big on virtual prototyping, and they still are,” says Neil Hand, until recently product marketing director at Siemens EDA. “People didn’t see the need unless they were in certain niche applications where you really needed that virtual prototype. But today, people are faced with three compounding complexity curves. Those complexity curves are the normal growth of semiconductor complexity, plus system complexity that brings in the software. And finally there is domain complexity, where you have systems involving mechanical, thermal, stress, reliability, and functional safety. All of these elements of the system are layering on top of each other.”
The use cases for virtual prototypes are growing. “Ten years ago, the thinking was I’m building a chip,” says Marc Serughetti, vice president of product management and applications engineering at Synopsys. “RTL was not available, but I needed something before that. While that may have been a good value proposition, the lifecycle of that value proposition is limited in time. Today, there’s another value proposition that’s really important. It’s not just because the system is not available, it is because the system is way too complex to understand. As you talk about multi-die, as you talk about more complex systems like automotive, like airplanes, where all those pieces come together, having a real physical system of this is almost impossible.”
Perhaps the biggest problem has been demonstrating value. “Anytime you build a new model, the question is, where does the effort come from? I would say it removes risk more than anything else, and the earlier in the cycle you find an issue, the cheaper it is to fix,” says Hand. “When you find a problem in the virtual prototype and you fix it, you’re not going to find that problem further down. So have you quantified the value associated with stopping the problem early on? You’ve got to articulate that value, and if you’re not measuring risk, and you’re not measuring what you’ve been able to take out of the system, it does become challenging.”
Hierarchical disconnect
Without a clear idea of the ways a virtual prototype is to be used, it becomes difficult to imagine a flow. “The system is modeled (see figure 2) with behavioral models and the system is partitioned into behavioral blocks,” says Chris Mueth, new opportunities business manager at Keysight. “The component-level design then begins, often by another team. Component design is implementable, i.e. something that will turn into hardware. The models used for simulation are different. The engineer designing the component may be different than the engineer architecting the system. It is natural that there is a disconnect because the models at these two levels have different levels of abstraction and detail and serve different purposes.”
Fig. 2: Top-down and bottom-up requirement of a virtual prototype. Source: Keysight
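One way to picture that disconnect is the same block described at two abstraction levels. The sketch below is a hypothetical Python example with invented coefficients and quantization: it pairs an idealized behavioral filter with a fixed-point version closer to what a component team might implement. Both answer the same functional question, but the details diverge.

```python
# Hypothetical example: the same filter at two abstraction levels.
# Coefficients and quantization scheme are illustrative only.

COEFFS = [0.2, 0.6, 0.2]

def fir_behavioral(samples):
    """System-level view: ideal floating-point math, no structure."""
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for j, c in enumerate(COEFFS):
            if i - j >= 0:
                acc += c * samples[i - j]
        out.append(acc)
    return out

def fir_fixed_point(samples, frac_bits=8):
    """Component-level view: 8-bit fractional coefficients and integer
    accumulation, reflecting what the hardware team intends to build."""
    scale = 1 << frac_bits
    q_coeffs = [round(c * scale) for c in COEFFS]
    out = []
    for i in range(len(samples)):
        acc = 0
        for j, qc in enumerate(q_coeffs):
            if i - j >= 0:
                acc += qc * round(samples[i - j] * scale)
        out.append(acc / (scale * scale))  # back to real units for comparison
    return out

stimulus = [0.0, 1.0, 0.0, 0.0, 1.0, 1.0]
for a, b in zip(fir_behavioral(stimulus), fir_fixed_point(stimulus)):
    print(f"behavioral={a:+.4f}  fixed-point={b:+.4f}  delta={a - b:+.5f}")
```

The small deltas are the point: the two models serve different purposes, and a flow has to decide where each one is good enough.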
The development costs cannot exceed the gains. “As the complexity is going up in these systems, they have to do more analysis upfront,” says Hand. “We are beginning to see too many decisions that end up being wrong. Trying to fix them during implementation is too late. While you can’t fully verify a system on a virtual prototype, it is going to give you the big picture – it’s working.”
Multiple prototypes are often required. “You may model the processor with an instruction set simulator,” says Synopsys’ Serughetti. “But for a virtual prototype, you might take the code intended to run on that processor and execute it on an x86. If you look at the breadth of use cases, it’s going to range from the people doing lower-level type of software development, to people that are doing more application development on the control side.”
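The contrast between those two styles can be sketched in a few lines. In the hypothetical Python example below, the same behavior either steps through a toy instruction-set-simulator loop, which exposes every register, or runs as plain host code, which is far faster but hides the instruction-level detail. The instruction set shown is invented for illustration.

```python
# Hypothetical contrast between an ISS-based and a host-native model of
# the same behavior. The instruction set here is invented.

# Target "program": add two memory-mapped values and store the result.
PROGRAM = [
    ("LOAD",  "r0", 0x10),   # r0 <- mem[0x10]
    ("LOAD",  "r1", 0x14),   # r1 <- mem[0x14]
    ("ADD",   "r0", "r1"),   # r0 <- r0 + r1
    ("STORE", "r0", 0x18),   # mem[0x18] <- r0
]

def run_on_iss(memory):
    """Interpret each instruction: slow, but every register transaction
    is visible, which low-level firmware work often needs."""
    regs = {"r0": 0, "r1": 0}
    for op, a, b in PROGRAM:
        if op == "LOAD":
            regs[a] = memory[b]
        elif op == "ADD":
            regs[a] = regs[a] + regs[b]
        elif op == "STORE":
            memory[b] = regs[a]
    return memory

def run_natively(memory):
    """Host-compiled view: the same behavior as plain code, orders of
    magnitude faster but with no instruction-level detail."""
    memory[0x18] = memory[0x10] + memory[0x14]
    return memory

mem = {0x10: 7, 0x14: 5, 0x18: 0}
assert run_on_iss(dict(mem)) == run_natively(dict(mem))
print("both models agree: mem[0x18] =", run_natively(dict(mem))[0x18])
```

Which style a team chooses depends on whether its use case is driver bring-up, where the ISS detail matters, or application development, where speed wins.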
What is missing is a framework on which to build adaptable prototypes. “A low-level model will contain a lot of detail and may encapsulate an array of multi-physics effects,” says Keysight’s Mueth. “However, the model will be more complex and simulation time will be longer. I don’t think we need to see the convergence to one model, but I do see the need for a simulation system that can simulate many model types within the hierarchy.”
The accuracy/performance tradeoff always has been the limiter. “Rapid prototyping, which essentially means using an FPGA-based prototyping platform, allows the testing of a preliminary version of a new product under real-time conditions,” says Roland Jancke, design methodology head in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “The main use case used to be early verification of interoperability of the device under development with existing components. With increasing amounts of functionality, these prototyping platforms need to become even larger and significantly more powerful. Virtual prototypes offer a solution to that dilemma with several advantages. They do not require extensive investments into real-time prototyping hardware. They do not require the extra step of cross compiling onto hardware. And they can be executed on-premise or in the cloud, where more powerful computing resources are available.”
Industry shift
Two changes are happening in the industry that are pushing toward greater adoption of virtual prototypes — shift left and digital twins.
Digital twins are the creation of people building mechanical products. “They are mechanical simulations in the multi-physics world,” says Serughetti. “What’s changing is that people are transforming this into products that use electronics. They use electronics because it’s more efficient from an energy standpoint. It’s safer, or you can upgrade its capability. But this is when people get into trouble. They think the virtual prototyping is a pure replacement for hardware, and those expectations are wrong. There needs to be a change in the methodology mindset, which is, ‘What is it I’m trying to do with simulation?’ The multi-physics world has understood this. When I do this, I’m not necessarily having the exact same thing on every point as the real physical system, I’m having something that serves the purpose, that serves the value, and the way it’s represented is good enough for what I need to do. It’s not the exact same thing.”
Others agree. “One of the things the digital twin needs is to be able to move up and down in fidelity,” says Hand. “You are not wanting to run a very accurate version of the model when you’re doing a systems analysis of the control surfaces of a plane. Instead, you’re using abstraction of the computational fluid dynamics (CFD), and you need an abstraction of the digital model. If we can link that into the digital twin, you can get to systems integration, and that can be used for predictive purposes. The only way that’s going to happen is when, at every step along the way, you are delivering value — and enough value for the digital twin, which is effectively the virtual prototype, to evolve with the design.”
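A minimal way to express that movement up and down in fidelity is a shared interface with interchangeable models behind it. The sketch below is purely illustrative Python; the “fast” and “accurate” thermal models and their coefficients are invented stand-ins for real multi-physics solvers.

```python
# Hypothetical fidelity switch for one component of a digital twin.
# Both models answer the same question; only cost and detail differ.

class FastThermalModel:
    """Low-fidelity: a single linear coefficient, fine for system-level runs."""
    def temperature_rise(self, power_w):
        return 0.8 * power_w  # invented coefficient, degrees C per watt

class AccurateThermalModel:
    """Higher-fidelity stand-in: an iterative relaxation that mimics a
    detailed solver, far more work for nearly the same steady-state answer."""
    def temperature_rise(self, power_w, steps=1000):
        t = 0.0
        for _ in range(steps):
            t += (0.8 * power_w - t) * 0.01  # small-step relaxation toward steady state
        return t

def system_analysis(thermal_model, workloads_w):
    """The surrounding twin depends only on the shared method name,
    so fidelity can be swapped per experiment."""
    return [thermal_model.temperature_rise(p) for p in workloads_w]

loads = [1.0, 2.5, 4.0]
print("fast    :", [round(x, 2) for x in system_analysis(FastThermalModel(), loads)])
print("accurate:", [round(x, 2) for x in system_analysis(AccurateThermalModel(), loads)])
```

The two runs land on essentially the same numbers, which is the justification for dropping to the cheap model when the question is system-level rather than component-level.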
Digital twins can be very useful in the verification flow. “Virtual prototypes allow for interoperability verification with similar models from suppliers,” says Fraunhofer’s Jancke. “Such early abstract models can later be used during the development process as golden reference models for the implementation, as executable specification to be handed over to suppliers, as well as the means of verification throughout the entire development process. The maximum benefit from virtual prototypes happens when OEMs, which are in charge of putting all the pieces of a complex system together, can perform an early investigation of the overall correctness of functionality before hardware is available.”
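In code form, using the abstract model as a golden reference amounts to driving both models with the same stimulus and comparing the results. The following Python sketch is a generic illustration with invented functions, not any particular verification environment.

```python
# Hypothetical golden-reference check: the early behavioral model and the
# later implementation model are driven with the same stimulus and compared.
import itertools

def golden_saturating_add(a, b):
    """Early abstract model, written long before hardware exists."""
    return min(a + b, 255)

def implementation_saturating_add(a, b):
    """Stand-in for the detailed implementation model delivered later."""
    s = (a + b) & 0x1FF          # 9-bit raw sum, as the RTL might compute it
    return 255 if s > 255 else s

def check_against_golden(stimuli):
    mismatches = []
    for a, b in stimuli:
        expected = golden_saturating_add(a, b)
        actual = implementation_saturating_add(a, b)
        if expected != actual:
            mismatches.append((a, b, expected, actual))
    return mismatches

stimuli = list(itertools.product(range(0, 256, 17), repeat=2))
bad = check_against_golden(stimuli)
print("mismatches:", bad if bad else "none - implementation matches the golden model")
```

The same comparison loop works whether the implementation side is another model from a supplier, an RTL simulation, or eventually silicon, which is what makes the early model worth maintaining.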
The other pressure is coming from shift left. “In automotive, people are developing more middleware, more application-level software, and they need to do this in conjunction with what’s around it,” says Serughetti. “You have an entire set of software developers going up the stack. Their need is not hardware accuracy, but they do need to connect with the world that’s around it. Testing is interesting because you test for different things. You may test for functionality, you may test for safety, you may test for security, you may test at the integration level. We are seeing all those use cases starting to pop up.”
Meanwhile, gaps are being filled in. “The virtual prototype we’re building for early architecture exploration has very low fidelity, and you are running it with synthetic workloads on an approximation of the underlying hardware,” says Hand. “You are building a rough approximation of the system to do functional decomposition and allocation. Now you’ve got the bones of a virtual prototype long before you would have had one in the past. Then you start going into the system architecture exploration, where you are looking at which pieces of IP to use, which pieces of hardware development do I need? The virtual prototype is evolving, and it can let you do tradeoffs — the power/performance tradeoffs at the coarse SoC architecture level.”
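A coarse version of that exploration loop might look like the hypothetical sketch below, where a synthetic workload is swept across a few candidate configurations and each is scored on rough latency and energy. Every number is an invented placeholder for calibrated data.

```python
# Hypothetical early architecture exploration: sweep candidate SoC
# configurations against a synthetic workload. All figures are invented.

SYNTHETIC_WORKLOAD_OPS = 2_000_000  # operations per frame, synthetic

CANDIDATES = {
    "2x small cores":   {"ops_per_cycle": 2,  "freq_mhz": 800,  "watts": 0.6},
    "1x big core":      {"ops_per_cycle": 4,  "freq_mhz": 1500, "watts": 1.8},
    "small core + npu": {"ops_per_cycle": 16, "freq_mhz": 600,  "watts": 1.1},
}

def score(cfg):
    cycles = SYNTHETIC_WORKLOAD_OPS / cfg["ops_per_cycle"]
    latency_ms = cycles / (cfg["freq_mhz"] * 1e3)   # MHz * 1e3 = cycles per ms
    energy_mj = cfg["watts"] * latency_ms           # watts * ms = millijoules
    return latency_ms, energy_mj

for name, cfg in CANDIDATES.items():
    latency_ms, energy_mj = score(cfg)
    print(f"{name:18s} latency={latency_ms:6.2f} ms  energy={energy_mj:5.2f} mJ")
```

The value is not the absolute numbers, which are rough by design, but the ability to rank candidate architectures before any RTL exists and to refine the same model as fidelity improves.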
Problems remain, though. “This is a balance between the cost of modeling and the breadth of use cases you can serve,” says Serughetti. “That’s the reality. What needs to change, or is still work in progress, is that while a lot of companies are doing some form of virtual prototyping, it’s done in an isolated team. What’s lacking is thinking about virtual prototyping as a methodology. It needs to span across the company, across multiple use cases, and it needs to be structured for that.”
Hand agrees. “We need to generate infrastructure for building a more traditional SoC virtual platform. Within that virtual SoC platform, we can automate and utilize that for software shift left. You’re mixing the virtual platform with the block-level verification or with the performance modeling. And you can also use it as you start to do the system integration.”
Exactly what that infrastructure looks like may depend on your area of focus. “Data flow is a technology that can accommodate multiple levels of hierarchy and many different types of models,” says Mueth. “With data flow, top-down design and bottom-up verification processes can be used, and engineers have a choice of what level of fidelity they want to simulate at.”
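As a toy illustration of that dataflow idea, the hypothetical Python sketch below chains a few processing nodes, each declaring its own fidelity, and pushes samples through the graph. Real dataflow environments add scheduling, timing semantics, and far richer model libraries.

```python
# Toy dataflow graph with mixed-fidelity nodes. Everything here is an
# illustrative stand-in for real behavioral, circuit, or measurement models.

class Node:
    def __init__(self, name, fidelity, func):
        self.name, self.fidelity, self.func = name, fidelity, func

    def process(self, samples):
        return [self.func(s) for s in samples]

# Each stage can sit at a different abstraction level in the hierarchy.
pipeline = [
    Node("pre_gain", "behavioral",           lambda s: 2.0 * s),
    Node("clipper",  "circuit-level stand-in", lambda s: max(-1.0, min(1.0, s))),
    Node("adc_4bit", "fixed-point",          lambda s: round(s * 8) / 8),
]

def run_dataflow(pipeline, samples):
    """Push the sample stream through each node in order, reporting the
    fidelity of the model that produced each intermediate result."""
    for node in pipeline:
        samples = node.process(samples)
        print(f"after {node.name:9s} ({node.fidelity}): {samples}")
    return samples

run_dataflow(pipeline, [0.1, 0.4, 0.7])
```

Swapping any node for a more detailed model changes the cost of the run but not the structure of the graph, which is the property that makes top-down design and bottom-up verification coexist.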
Related Reading
Shifting Left Using Model-Based Engineering
MBSE becomes useful for identifying potential problems earlier in the design flow, but it’s not perfect.
AI, Rising Chip Complexity Complicate Prototyping
Constant updates, more variables, and new demands for performance per watt are driving changes at the front end of design.