The Drive Toward Virtual Prototypes

Prototypes are transforming rapidly to take on myriad tasks, but they are hampered by a lack of abstractions, standards, and interfaces.

Chipmakers are piling an increasing set of demands on virtual prototypes that go well beyond their original scope, forcing EDA companies to significantly rethink models, abstractions, interfaces, view orthogonality, and flows.

The virtual prototype has been around for at least 20 years, but its role has been limited. It has largely been used as an integration and analysis platform for models that are more abstract than implementation models. At its heart is an instruction set model for one or more processors, enabling early software development to be conducted on a fast, functionally accurate model of the hardware. Performance historically has relied on abstracting out timing information.
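To make that abstraction concrete, here is a minimal sketch of an instruction-set-level model in C++: instructions are executed functionally against registers and memory, while time is tracked only as a retired-instruction count rather than cycle-accurate pipeline behavior. The three-instruction toy ISA and every name in it are invented for illustration; a production virtual prototype would wrap a full ISS behind an interface such as SystemC/TLM.

```cpp
// Minimal sketch of a loosely-timed instruction-set model (toy ISA, not a real core).
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

enum Opcode : uint8_t { LOAD_IMM, ADD, HALT };

struct Instr { Opcode op; uint8_t rd, rs1, rs2; int32_t imm; };

struct IssModel {
    std::array<int32_t, 8> regs{};   // architectural registers only -- no pipeline state
    uint64_t instret = 0;            // "time" is just a retired-instruction count

    // Execute functionally; timing detail is abstracted away for simulation speed.
    void run(const std::vector<Instr>& program) {
        size_t pc = 0;
        while (pc < program.size()) {
            const Instr& i = program[pc++];
            ++instret;
            switch (i.op) {
                case LOAD_IMM: regs[i.rd] = i.imm; break;
                case ADD:      regs[i.rd] = regs[i.rs1] + regs[i.rs2]; break;
                case HALT:     return;
            }
        }
    }
};

int main() {
    IssModel cpu;
    // r1 = 2; r2 = 3; r3 = r1 + r2
    cpu.run({{LOAD_IMM, 1, 0, 0, 2}, {LOAD_IMM, 2, 0, 0, 3}, {ADD, 3, 1, 2, 0}, {HALT, 0, 0, 0, 0}});
    std::printf("r3 = %d after %llu instructions\n", cpu.regs[3],
                (unsigned long long)cpu.instret);
    return 0;
}
```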

The term ‘shift left’ is being applied to an increasing percentage of the tasks performed during system development. It provides an early glimpse of the impact that downstream decisions will have, so that bad decisions can be caught earlier, when their impact is limited. This was first seen with physical information being brought into RTL tools, and the industry has made great progress with that, such that downstream surprises have been greatly reduced. No longer do design teams have to endlessly iterate around timing closure.

But with the economics of Moore’s Law for a single monolithic die no longer working as it did in the past, a greater concentration is being put on analysis and optimizations at the system level. Architectures of the past are being questioned — processors are being designed for specific code sets or tasks, memory architectures are being transformed, and dies are being disaggregated into 2.5D or even 3D packages. Alongside these changes, power is becoming more important, while thermal is being seen as the ultimate limiter. And all of this information is beginning to be targeted at the virtual prototype, threatening to overwhelm the analysis technologies that exist today.

“It is helpful to think about the role of the virtual prototype as it applies to the V diagram (see figure 1),” says Neil Hand, director of strategy for design verification technology at Siemens EDA. “On the left-hand side, you are doing design decomposition and figuring out what you want to build. Implementation and verification are along the bottom, and then integration and validation up the right-hand side. As you look at modern system design processes, there’s a role for virtual prototypes pretty much on every piece of that V diagram. The ability to integrate with the virtual prototype is going to be key to enable system exploration, whether that be for power or performance, whether it be for hardware/software partitioning, or whether it be the integration of sensors and actuators into overall system design.”

Fig. 1: The V diagram of system development. Source: Semiconductor Engineering

In some cases, the virtual prototype can help improve a specific development process. “Shift left is about enabling a structured design methodology so that you can have a more predictable design process,” says Prakash Narain, president and CEO for Real Intent. “As we build these products, we have to enable abstraction and hierarchical methodologies to be able to deal with the complexity that is coming our way. Our focus is on functional verification, covering different failure modes through early functional analysis and by creating efficient methodologies, and flows that can be deployed early.”

In other cases, these prototypes will be directed toward new tasks that have not been encapsulated in IP or tools in the past. “Historically, you bought a processor core, used an interconnect based on some multiplexers to plug everything together, plus a few interrupts and resets and clocks and things,” says Nick Heaton, distinguished engineer and SoC verification architect at Cadence. “Most of those things were encapsulated in the IP. That’s not the case with a growing number of issues. Coherency issues have driven a lot of new verification efforts because you cannot verify coherency in one IP. It’s a system-wide issue, and that’s growing. Security is a system-wide concern, and you can’t verify those in isolation.”

A hierarchical approach
That list of concerns continues to grow. Today, arguments can be made for early floor-planning information being fed into the virtual prototype. That information is necessary for thermal analysis and for ensuring successful multi-die partitioning within a package.

“Disintegration flows are becoming interesting,” says Tim Kogel, principal engineer for virtual prototyping at Synopsys. “This is where people look at what used to be one chip, but it has become too big and they want to disintegrate that. Where do I make the cuts? Every cut is a potential source of additional power or could create a performance bottleneck. This creates additional challenges for all tools. There are now multiple exploration points, and ideally you want to have a very flexible way to allow ‘what if’ analysis.”
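As a rough illustration of the kind of ‘what if’ analysis Kogel describes, the sketch below scores candidate partitionings of a design by the bandwidth they would push across die-to-die links, since every crossing costs power and can become a bottleneck. The blocks, traffic numbers, and single bandwidth metric are invented placeholders; a real exploration flow would also fold in floor-planning, latency, and thermal effects.

```cpp
// Sketch: score candidate die partitionings by estimated cross-die traffic (illustrative only).
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Traffic = std::map<std::pair<std::string, std::string>, double>;  // GB/s between blocks

// Sum traffic on edges whose endpoints land on different dies for a given assignment.
double crossDieTraffic(const Traffic& traffic,
                       const std::map<std::string, int>& dieOf) {
    double total = 0.0;
    for (const auto& [edge, gbps] : traffic)
        if (dieOf.at(edge.first) != dieOf.at(edge.second)) total += gbps;
    return total;
}

int main() {
    // Invented block-to-block bandwidth demands.
    Traffic traffic = {{{"cpu", "l3"}, 40.0}, {{"l3", "ddr"}, 30.0},
                       {{"npu", "l3"}, 25.0}, {{"npu", "ddr"}, 10.0}};

    // Two candidate cuts of the monolithic design into two dies.
    std::vector<std::map<std::string, int>> cuts = {
        {{"cpu", 0}, {"l3", 0}, {"npu", 1}, {"ddr", 0}},  // NPU split off
        {{"cpu", 0}, {"l3", 0}, {"npu", 0}, {"ddr", 1}},  // memory controller split off
    };

    for (size_t i = 0; i < cuts.size(); ++i)
        std::printf("cut %zu: %.1f GB/s over die-to-die links\n",
                    i, crossDieTraffic(traffic, cuts[i]));
    return 0;
}
```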

System optimization offers enormous potential. “When you start looking at system-level optimization, rather than a localized optimization — be that through chiplet-based design or through more holistic systems design — the ability to get those early insights and make informed tradeoffs is going to have a huge impact on how we build the systems,” says Siemens’ Hand. “That comes with its own challenges. Those sorts of virtual prototypes are only as good as the models you have available. And then you end up questioning whether to invest in the models, because you don’t know what the benefits are going to be, but you can’t get the benefit until you invest in the models.”

Models always have been the limiter. “Model availability is definitely one of the areas that makes it hard to adopt virtual platform technology,” says Simon Davidmann, founder and CEO for Imperas Software. “You need to get the minimum number of models with the minimum number of modes of operation to enable a task to be performed. This is often the limiter for early software bring-up. Can you model the environment in enough detail that you can use it instead of using the real thing?”

Complexity requires hierarchical approaches to be used. “Many things, like timing analysis, are becoming a divide-and-conquer problem,” says Shekhar Kapoor, senior director of marketing at Synopsys. “That comes down to modeling. We are trying to cut the design, dissect it into smaller partitions, and create some quick-and-dirty models to use as part of the analysis. Divide-and-conquer can work well if the scale of the problem that is given to the tools is well managed, and the constraints around it are well defined.”

Without this type of approach, the problem can become intractable. “If you consider power integrity analysis, you need to analyze each die standalone, as well as the impact of the rest of the chips on this die,” says Mallik Vusirikala, director and product specialist at Ansys. “The base engine for solving the power grid remains the same, but how you model the other chips will change. We can create a very lightweight model of the other chips and integrate that into the analysis of a single-die system. Alternatively, you can have all the dies modeled in detail and solve them together, but that creates different problems. If you’re dealing with one die versus three or four dies, the capacity increase is a challenge for the engines. This might be reserved for final sign-off.”
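A deliberately simplified picture of the modeling choice Vusirikala describes: when one die's supply network is analyzed in detail, the neighboring dies can be collapsed into lightweight lumped loads on the shared rail instead of being co-solved node by node. The single effective-resistance estimate and all numbers below are invented; real power-integrity engines solve the full extracted grid.

```cpp
// Sketch: estimate IR drop on one die while neighbors are reduced to lumped loads (illustrative).
#include <cstdio>
#include <vector>

struct DetailedDie {
    double currentA;        // total switching current drawn by this die (A)
    double gridResistOhm;   // effective resistance of its on-die power grid (ohm)
};

// Lightweight stand-in for a neighboring die: just the current it pulls
// through the shared supply, with its internal grid abstracted away.
struct ReducedNeighbor { double currentA; };

double estimateIrDrop(const DetailedDie& die,
                      const std::vector<ReducedNeighbor>& neighbors,
                      double packageResistOhm) {
    double sharedCurrent = die.currentA;
    for (const auto& n : neighbors) sharedCurrent += n.currentA;   // neighbors load the shared rail
    // Drop across the shared package network plus the die's own grid.
    return sharedCurrent * packageResistOhm + die.currentA * die.gridResistOhm;
}

int main() {
    DetailedDie compute{12.0, 0.002};                       // invented numbers
    std::vector<ReducedNeighbor> others = {{4.0}, {2.5}};   // e.g. I/O die, memory die
    std::printf("estimated worst-case IR drop: %.1f mV\n",
                1000.0 * estimateIrDrop(compute, others, 0.001));
    return 0;
}
```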

Many tools today are becoming driven by specific use cases and scenarios. “You almost need to start doing system-level testing as you are doing IP testing,” says Cadence’s Heaton. “Virtual platforms help you start that, and hybrid systems enable multiple abstractions of pieces, including the implementation RTL. Portable Stimulus (PSS) is playing a part in this, where the virtual platform can be a step on the way to developing and testing your scenarios before you apply them to the slow platforms in RTL or other tools.”
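One way to picture that reuse, without claiming actual PSS syntax, is a scenario written once against an abstract platform interface and then retargeted, so it can be debugged quickly on a fast virtual platform backend before being pointed at slower RTL or emulation backends. The interface, register address, and console-printing backend below are invented for illustration.

```cpp
// Sketch: one scenario, multiple execution backends (not real PSS -- an invented C++ analogy).
#include <cstdint>
#include <cstdio>

// Abstract view of a platform the scenario can drive.
class Platform {
public:
    virtual ~Platform() = default;
    virtual void write(uint64_t addr, uint32_t data) = 0;
    virtual uint32_t read(uint64_t addr) = 0;
};

// Fast functional backend, e.g. backed by a virtual prototype's memory map.
class VirtualPlatform : public Platform {
    uint32_t reg_ = 0;
public:
    void write(uint64_t addr, uint32_t data) override {
        std::printf("[VP]  write 0x%llx = 0x%x\n", (unsigned long long)addr, data);
        reg_ = data;
    }
    uint32_t read(uint64_t addr) override {
        std::printf("[VP]  read  0x%llx\n", (unsigned long long)addr);
        return reg_;
    }
};
// An RtlPlatform backend would implement the same interface on top of a
// simulator or emulator connection; it is omitted here.

// The scenario itself is backend-agnostic: program a DMA-like block, then check status.
bool runDmaScenario(Platform& p) {
    const uint64_t kCtrlReg = 0x40000000;    // invented register address
    p.write(kCtrlReg, 0x1);                  // kick off the transfer
    return p.read(kCtrlReg) == 0x1;          // crude completion check
}

int main() {
    VirtualPlatform vp;
    std::printf("scenario %s on virtual platform\n", runDmaScenario(vp) ? "passed" : "failed");
    return 0;
}
```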

Some of this may be beyond the capabilities of a single virtual prototype. “The idea of having one model to rule them all is not going to happen,” says Hand. “We need to have a way of managing the fidelity and performance needed in the model. At different points you will need different fidelity out of the model. You’ll hear people talking about a digital thread and threading in the design process. There is a power thread, there is a performance thread, there is a functional thread, and having a distinction between those, and a conscious breakdown, is important. What is your aim for the virtual prototype? Is it power analysis? Is it functional verification? Is it performance? That’s going to control the needs of the model itself. Whether you are looking at it from a virtual prototype perspective or from a coverage perspective, having a well-defined and well-separated understanding of what you’re doing for each of those threads is going to be important.”

Kogel agrees. “We differentiate between the virtual prototypes during the specification phase, like the architecture definition — which is the integration point for the IP at the specification level — and how you configure and dimension the IP (see figure 2). Another integration point is for hardware and software. That’s the virtual prototype for software development, where you are trying to be as early as possible before RTL. More recently, that also is extending into post-RTL and the post-silicon phase, because it’s a more convenient target for modern CI/CD flows, where the integration is not a one-time event. It becomes a continuous event, and you want to make sure it is verified when changes get made. For that, virtual prototypes are a more convenient and a better target than managing lab equipment with hardware.”

Fig. 2: Virtual prototype use cases. Source: Synopsys
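To make the CI/CD use case concrete, a continuous-integration job can launch the virtual platform as an ordinary process, boot the latest firmware build, and fail the pipeline if an expected marker never appears on the simulated console. The sketch below shells out to a hypothetical vp-run simulator binary and firmware path (both invented), using POSIX popen.

```cpp
// Sketch: CI smoke test that boots firmware on a virtual platform and greps the console.
// The "vp-run" binary, its flags, and the firmware path are hypothetical placeholders.
#include <cstdio>
#include <string>

int main() {
    const std::string cmd =
        "vp-run --firmware build/firmware.elf --timeout 60 2>&1";  // invented CLI
    FILE* console = popen(cmd.c_str(), "r");                       // POSIX
    if (!console) { std::perror("popen"); return 2; }

    bool booted = false;
    char line[512];
    while (std::fgets(line, sizeof line, console)) {
        std::fputs(line, stdout);                       // keep the log in the CI output
        if (std::string(line).find("Boot OK") != std::string::npos) booted = true;
    }
    int status = pclose(console);

    // Fail the pipeline if the platform crashed or the boot marker never appeared.
    if (status != 0 || !booted) { std::fprintf(stderr, "smoke test failed\n"); return 1; }
    std::puts("smoke test passed");
    return 0;
}
```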

Models
The models play a vital role. “Across the entire flow, what is most important is the consistency of the abstractions that are being used,” says Sathish Balasubramanian, head of product management and marketing for AMS at Siemens EDA. “IP consistency across the entire flow, both in verification and implementation, needs to be maintained based on the abstraction that is required for each flow. That is getting much more important right now.”

Standards for that do not exist today. “Standards allow you to have reuse and collaboration, so many people can work together and share things,” says Imperas’ Davidmann. “That really makes people more efficient. In the virtual platform world, there are abstractions, but there aren’t standards. If I model something, will it work with your system? TLM tries to say there’s this type of abstraction, but you need more than that. The language abstractions are compatible, but the interfaces haven’t been designed. That is something we have been trying to do with Open Virtual Platforms (OVP) — the definition of modeling and control APIs. With that done, we can implement everything below the API in our proprietary solution, but customers can write models and analysis capabilities that sit on top of it.”
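The interface boundary Davidmann is pointing at can be pictured as a small set of modeling callbacks that any peripheral model implements, plus a control API that the simulator and analysis tools call. The C++ interfaces below are a generic illustration of that split, not the actual OVP or TLM definitions.

```cpp
// Sketch: a generic modeling API (what a model implements) and control API (what tools call).
// Purely illustrative -- not the OVP or SystemC/TLM interfaces.
#include <cstdint>
#include <cstdio>

// Modeling API: a peripheral model exposes bus accesses and a reset hook.
class PeripheralModel {
public:
    virtual ~PeripheralModel() = default;
    virtual void reset() = 0;
    virtual uint32_t busRead(uint32_t offset) = 0;
    virtual void busWrite(uint32_t offset, uint32_t value) = 0;
};

// Control API: what a simulator or analysis tool is allowed to do with models.
class SimControl {
public:
    virtual ~SimControl() = default;
    virtual void attach(PeripheralModel& model, uint32_t baseAddr) = 0;
    virtual void run(uint64_t instructions) = 0;
};

// A trivial model written against the modeling API only: it needs no knowledge
// of which simulator sits underneath, which is the point of a standard interface.
class TimerModel : public PeripheralModel {
    uint32_t count_ = 0;
public:
    void reset() override { count_ = 0; }
    uint32_t busRead(uint32_t) override { return count_++; }
    void busWrite(uint32_t, uint32_t value) override { count_ = value; }
};

int main() {
    TimerModel timer;
    timer.reset();
    timer.busWrite(0, 41);
    std::printf("timer reads back %u\n", timer.busRead(0));  // 41, then counts up
    return 0;
}
```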

Early software development has reached a tipping point. “Because we had the TLM standard, everyone understood the value of virtual prototypes for software,” says Kogel. “It’s still a significant effort to build the full virtual platform, but the entire supply chain is waiting for it, leveraging it. It has become a viable ecosystem initiative from the semis, with the model providers, to build the models of the MCUs and then leverage them across the supply chain.”

For other purposes you need more than just functional models, and few abstractions or models exist for those today.

Various companies are looking at individual pieces of it. “An early task is to look around at all of the possible package and die solutions,” says Lang Lin, principal product manager at Ansys. “We have developed an early prototyping flow for floor-planning, where different components are being assembled. In that flow, the model for each component could simply be a geometry with some material properties, but very few details. Once each of the components becomes finalized, during the late design stage, you have the detailed geometry. Now you can create a more detailed model and plug that in. There are several stages — an early stage, where you need to plan ahead, and a final stage, where you need to sign off with an accurate simulation performed on that system.”
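A minimal data-structure sketch of that two-stage flow: at the early floor-planning stage a component is little more than a bounding box with a power budget, and the detailed model that arrives later in the design can slot in behind the same analysis interface. All names, fields, and values are invented for illustration.

```cpp
// Sketch: early-stage vs. late-stage component models behind one analysis interface.
#include <cstdio>
#include <memory>
#include <vector>

// What a package/thermal analysis needs from any component model.
class ComponentModel {
public:
    virtual ~ComponentModel() = default;
    virtual double footprintMm2() const = 0;
    virtual double powerW() const = 0;     // used as a heat source in thermal analysis
};

// Early prototype: just a bounding box, bulk material assumptions, and a power budget.
class EarlyModel : public ComponentModel {
    double w_, h_, powerBudget_;
public:
    EarlyModel(double w_mm, double h_mm, double power_w)
        : w_(w_mm), h_(h_mm), powerBudget_(power_w) {}
    double footprintMm2() const override { return w_ * h_; }
    double powerW() const override { return powerBudget_; }
};

// Late in the design, a detailed model (e.g. derived from layout and signoff power
// numbers) would implement the same interface and replace the early one in place.

int main() {
    std::vector<std::unique_ptr<ComponentModel>> dies;
    dies.push_back(std::make_unique<EarlyModel>(10.0, 8.0, 5.0));   // invented compute die
    dies.push_back(std::make_unique<EarlyModel>(6.0, 6.0, 1.5));    // invented I/O die

    double area = 0, power = 0;
    for (const auto& d : dies) { area += d->footprintMm2(); power += d->powerW(); }
    std::printf("package plan: %.1f mm^2 of silicon, %.1f W to dissipate\n", area, power);
    return 0;
}
```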

And power is becoming increasingly important at every stage. “People are increasingly asking for power-aware virtual prototypes,” says Kogel. “We had a standardization initiative in 2015 that defined UPF system-level power models. But so far, this has not been widely adopted, although it is supported in the tools. The problem is that the creation of the models is still very much open. You can calibrate power with tools that do this for RTL and below, but system-level power models are still very much an open problem. With multi-chip integration, this problem must be solved — not just power, but also thermal, because now you’re stacking dies on top of each other and you have to know the thermal behavior of this package.”
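The flavor of system-level power model being described can be as simple as a state machine with a characterized power number per state, which the virtual prototype advances as the workload runs and integrates into energy. The states and numbers below are invented, UPF's power-model constructs are richer, and the calibration problem Kogel highlights is exactly the part not shown.

```cpp
// Sketch: a toy state-based power model a virtual prototype could drive (values invented).
#include <cstdio>
#include <map>
#include <string>
#include <utility>

class PowerStateModel {
    std::map<std::string, double> stateWatts_;  // characterized power per state
    std::string state_;
    double energyJ_ = 0.0;
public:
    PowerStateModel(std::map<std::string, double> table, std::string initial)
        : stateWatts_(std::move(table)), state_(std::move(initial)) {}

    // The functional model calls this as simulated time advances in the current state.
    void advance(double seconds) { energyJ_ += stateWatts_.at(state_) * seconds; }
    void setState(const std::string& s) { state_ = s; }   // e.g. on a DVFS or sleep event
    double energyJ() const { return energyJ_; }
};

int main() {
    PowerStateModel npu({{"off", 0.0}, {"idle", 0.2}, {"active", 3.5}}, "idle");
    npu.advance(0.010);          // 10 ms idle
    npu.setState("active");
    npu.advance(0.002);          // 2 ms of inference
    npu.setState("idle");
    npu.advance(0.005);          // 5 ms idle again
    std::printf("energy consumed: %.1f mJ\n", 1000.0 * npu.energyJ());
    return 0;
}
```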

Conclusion
The virtual prototype is being transformed in several ways, driven by the complexity of both the designs and the choices that need to be made early in the development flow. These decisions can have a significant impact on implementation. This compounds the desire to shift information left, enabling earlier, informed decision-making and analysis.

But this requires models to be developed and verified at different levels of abstraction and for different purposes, a process made more difficult by a lack of standards and abstractions. For some use cases, such as early software bring-up, the need and benefits have risen to the point where solutions are being created and used successfully. But for others, the journey has only just started.

Some point solutions are being developed today, but much more robust modeling tools and methodologies have to be created to enable the full potential to be reached.

Editor’s Note: This detailed look at virtual prototypes and their application will continue in a future story concentrating on verification and debug.



1 comment

Karl Stevens says:

I disagree that tools and technologies have to be created. It is more like “WAKE UP AND SMELL THE ROSES!” C# supports an API and all the things needed to model hardware designs.

There are Boolean variables and operators just as there are arithmetic variables and operators.

A key element is conditional assignment. It is key because there is no need to rely on if/else to do assignments. I think that Verilog also has conditional assignment, but no one has used it.

Rather, HLS has consumed years of time and tons of effort. But C# is a compiled language for software development (including debug, etc.) and is open source!
