When And Where To Use Virtual Prototypes

While it’s technically possible to create a virtual prototype that reflects an entire SoC, it isn’t always the best choice.


Just because something is technically possible doesn’t always mean it should be done. That certainly holds true today for virtual prototypes, which have gotten a lot of attention for their potential in the SoC design process, especially for concurrent software development.

While no one is pointing fingers, there are situations in which design teams have thrown themselves into a complete SoC virtual prototype project only to realize partway through that it was overkill or they missed the mark.

Tom De Schutter, senior product marketing manager for Virtualizer Solutions at Synopsys, said that when this happens it is always interesting and the outcome depends a lot on the premise of what the designers were trying to achieve.

“For example, if the premise of doing a virtual prototype is really to replace a board, you miss out on a big advantage of a virtual prototype, namely the ability to just focus on specific tasks and to really move virtual prototype creation alongside the software tasks,” he said. “In the end, even if you create a virtual prototype of the entire SoC, the software team is completely split up into different teams doing different tasks. For a lot of these tasks, they don’t need, and actually don’t want, the entire SoC because it just makes it more complex. The more things that are moving, the more things that can go wrong.”

Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics, noted that the successful engagements he has seen are marked by engineering teams that have identified a specific problem they are trying to address and whose initial target is to create a virtual prototype that addresses that problem.

Conversely, he said, there have been times when a team was trying to put in place a general-purpose solution and those projects are often much more difficult because there isn’t a target. “You can do so many different things with a virtual platform. You want to create something that does everything and it covers all possible questions. And when you’re going from nothing, and that’s what you set up as your initial target, it’s just too big a step. There’s too much to do.”

McDonald recommended something not be put in a virtual platform until there is a reason to do it. “Just because we can, doesn’t mean we should—we shouldn’t do it until we need to do it. When we need to do it then we should get it done…but creating a virtual platform just for the sake of creating something to address all possible scenarios is just too much work.”

A good illustration of this is Micron, which started creating a virtual platform for architectural exploration of one specific problem. “They got that working, they got some really good results from that and then they handed that off to a group doing a larger portion of the system. They incorporated that with some ARM cores and software that was running on the ARM cores. They grew from the architectural exploration into a real virtual platform for their software people to start using early. Then they took pieces of that and started handing it off to their implementation teams to use it as a UVM verification predictor in a UVM verification flow. So they grew into this scenario where they were doing a lot of the different things that people want to do with virtual platforms, but they didn’t get there in one step,” he added.

However, what is helpful for many teams right now is either mixed-mode use of virtual prototyping or incremental virtual prototypes.

In the first case, Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence, said this goes back to what he defines as Schirrmeister’s Law: Technology adoption is inversely proportional to the number of things you have to change in your project flow. “You really need to have a good reason to do it, and there’s not always a good reason for doing a full virtual prototype. The main reason is that virtual prototype definition, TLM, in itself has not become (and I don’t know when it ever will be) a mandatory part of the design flow.”

He explained that one of the characteristics of the different types of prototypes, including virtual prototypes, is the effort added on top of what the project team is doing anyway. Imagine a graph with, along the horizontal axis, a TLM virtual prototype, RTL simulation, RTL emulation, RTL FPGA-based prototyping, and the actual chip. “If you chart this out, the additional effort is literally zero at the RTL simulation level because it’s the golden standard; everything goes from there. There’s some effort involved to get it into an emulator. Emulators are of course valuable, but they cost money, and you also need to make some small changes, say, two weeks of effort. It’s a lot more effort to get it into an FPGA-based prototype because you start rewriting your RTL. And then it’s zero effort for the chip because that chip literally is what you want at the end.”

Schirrmeister said that if the additional effort to enable this particular development is considered on top of what must be done anyway to get the design done, TLM looks interesting because the effort is not as big as the RTL development. “Still, there is model development in there, so it’s an additional effort on top of the baseline of what’s mandatory to get to the chip anyway. And because this additional effort only makes sense if you get something for it in return, building a full virtual prototype of the full system actually only works if you have all the TLM models available or can develop them with little effort. That’s why people are right now doing these mixed simulation models so often.”
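To make “TLM model” concrete, the sketch below shows what a minimal loosely-timed SystemC/TLM-2.0 memory target can look like. It is an illustration only, not any vendor’s shipping model; the module name, memory size and latency are hypothetical. The point is that a functional model like this is a few dozen lines of C++, a far smaller effort than writing and verifying the equivalent RTL.

```cpp
// Minimal sketch of a loosely-timed TLM-2.0 memory target (illustrative only;
// the module name, size and latency are hypothetical, not any vendor's model).
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstddef>
#include <cstring>
#include <vector>

struct SimpleMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    std::vector<unsigned char> mem;

    SimpleMemory(sc_core::sc_module_name name, std::size_t size = 0x10000)
        : sc_module(name), socket("socket"), mem(size, 0) {
        // Blocking transport is all a loosely-timed model needs to implement.
        socket.register_b_transport(this, &SimpleMemory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        const sc_dt::uint64 addr = trans.get_address();
        unsigned char* ptr      = trans.get_data_ptr();
        const unsigned int len  = trans.get_data_length();

        if (addr + len > mem.size()) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(ptr, &mem[addr], len);
        else if (trans.is_write())
            std::memcpy(&mem[addr], ptr, len);

        // A single approximate latency stands in for all the pipeline detail
        // an RTL implementation would have to carry.
        delay += sc_core::sc_time(10, sc_core::SC_NS);
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```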

Companies such as Broadcom, Nvidia and now CSR have publicly discussed their experiences with this mixed model of keeping the RTL in hardware in the emulator. “People are interested in this for two reasons: They don’t need to put in the additional effort to build a TLM model because they can use the RTL model. The other thing that is useful is that you have the full accuracy, as opposed to the TLM model, which comes with a bunch of warning signs,” he added.

As far as incremental virtual prototypes go, De Schutter explained the value here is significant for the software developer. “Potentially you can create an entire board replacement, but that is just a whole lot of effort without necessarily a lot of return. What we see is that our successful customers who embrace that methodology eventually might actually develop the entire virtual prototype for the entire SoC — but that’s not their end goal. They just happened to get there because eventually when they cover all the different software tasks they end up with an entire virtual prototype of the SoC, but it has been used in 12 different steps in between.”

Drew Wingard, CTO of Sonics, stressed that you need to consider what the virtual prototype is going to be used for. “The most classic use is to try to provide an early model for the software people. In this case, the virtual prototype is very likely a simulation model that is intended to run fast enough that a software developer could use it as a model for the target platform they are coding for. In that case, you have a constant tradeoff: the more accurate you make the virtual prototype model, the slower it runs. You are driven very quickly to try to find out how to abstract away parts of the design that aren’t meaningful to the developer so that they can get their job done at a faster speed.”
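One concrete form of that speed-versus-accuracy knob is temporal decoupling in loosely-timed TLM-2.0 modeling: an initiator is allowed to run ahead of the rest of the simulation by a “quantum,” and the larger the quantum, the faster the simulation runs and the coarser its timing becomes. The sketch below, with a hypothetical initiator and traffic pattern, is only meant to show where that knob sits; it is not tied to any particular tool or vendor flow.

```cpp
// Illustrative only: a loosely-timed initiator using TLM-2.0 temporal
// decoupling. Enlarging the global quantum trades timing accuracy for speed.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>

struct LtInitiator : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<LtInitiator> socket;
    tlm_utils::tlm_quantumkeeper qk;

    SC_CTOR(LtInitiator) : socket("socket") {
        // One global knob: a 10 us quantum lets this initiator run far ahead
        // of the rest of the simulation before it has to synchronize.
        tlm::tlm_global_quantum::instance().set(sc_core::sc_time(10, sc_core::SC_US));
        qk.reset();
        SC_THREAD(run);
    }

    void run() {
        unsigned char buf[4] = {0};
        tlm::tlm_generic_payload trans;
        for (sc_dt::uint64 addr = 0; addr < 0x1000; addr += 4) {
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(addr);
            trans.set_data_ptr(buf);
            trans.set_data_length(4);
            trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

            sc_core::sc_time delay = qk.get_local_time();
            socket->b_transport(trans, delay);
            qk.set(delay);

            // Only yield to the scheduler when the quantum is used up; this is
            // exactly the "run fast, lose timing detail" tradeoff.
            if (qk.need_sync()) qk.sync();
        }
    }
};
```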

At the same time, there are engineers who want to determine whether the chip architecture is going to behave properly. “Is it going to satisfy the real-time and quasi real-time requirements of the application before you have a chip back? Then you do need a different kind of virtual prototype, and that’s probably one that’s maybe not used by very many software developers, but which certain software developers will need to use—and certainly the system architects would like to have to model the performance aspects of the system.”

That’s the style of virtual prototype Sonics is much more involved with. “In that case,” Wingard continued, “it’s essential to model at least the memory system and the network that connects to the memory system in a very accurate fashion, because what we find is that the performance characteristics of these chips are limited by the characteristics of the external DRAM memory system. And the performance characteristics of a DRAM memory system are determined by the address patterns sent to it by the different processors and initiators on the device, combined with the network and DRAM subsystem’s choices about ordering. So we have to model that at a cycle-accurate level, which is why we provide cycle-accurate SystemC models for our stuff.”

These are very different kinds of virtual platforms. “Essentially you take the network and the memory system and you model them in a cycle-accurate form. Strangely enough, you don’t have to model the CPU in a cycle-accurate way, and you don’t have to model the video processor in a cycle-accurate way. All we really care about in that kind of virtual platform is what kind of addresses it generates; we don’t care about the actual data. It’s just traffic to us. In those cases, instead of black-boxing the PHY and the network, you end up black-boxing the processors. So even though you’re modeling from a performance perspective at a much more detailed level, you can still get good runtimes because you’re modeling so much less of the system at full accuracy,” he concluded.
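A rough sketch of that idea is below: a cycle-accurate traffic generator stands in for a black-boxed processor, reproducing only its address pattern and burst behavior while carrying no payload data. The MemRequest struct and the fifo toward the interconnect/DRAM model are hypothetical stand-ins, not Sonics’ actual interfaces.

```cpp
// Illustrative sketch only: a cycle-accurate traffic generator standing in for
// a black-boxed processor. It reproduces an address pattern but carries no
// payload data. MemRequest and the fifo toward the interconnect/DRAM model
// are hypothetical, not any vendor's actual interfaces.
#include <systemc>
#include <cstdint>
#include <ostream>

struct MemRequest {
    std::uint64_t address   = 0;     // the only thing the DRAM model cares about
    bool          is_write  = false;
    unsigned      burst_len = 1;     // in beats; the data itself is deliberately absent
};

// sc_fifo needs a stream operator for user-defined payload types.
inline std::ostream& operator<<(std::ostream& os, const MemRequest& r) {
    return os << (r.is_write ? "W " : "R ") << std::hex << r.address << std::dec
              << " x" << r.burst_len;
}

SC_MODULE(TrafficGenerator) {
    sc_core::sc_in<bool>             clk;
    sc_core::sc_fifo_out<MemRequest> req_out;  // toward a cycle-accurate NoC/DRAM model

    SC_CTOR(TrafficGenerator) {
        SC_THREAD(generate);
        sensitive << clk.pos();
    }

    void generate() {
        std::uint64_t addr = 0;
        while (true) {
            wait();                    // one request attempt per clock edge keeps cycle accuracy
            MemRequest req;
            req.address   = addr;
            req.is_write  = false;
            req.burst_len = 4;
            req_out.write(req);        // blocks if the interconnect applies back-pressure
            addr += 64;                // e.g. a streaming, cache-line-strided pattern
        }
    }
};
```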


