The Multiple Faces Of Virtual Prototyping

Virtual prototyping means different things to different people. It also may have different uses for each of them, depending on their role in IC design.

Virtual prototyping conjures either confusion or relief, so it should come as no surprise that some chip designers are still confused about the different types of prototypes on the market.

“Virtual prototyping is going through a change right now,” explained Gary Smith, founder and chief analyst for Gary Smith EDA. “Today, users are using cycle-based tools to prototype sections of their design (if it is a big design). The hardware guys are the ones using it right now. They are trying to get it to move up to pass off to the software guys for co-development, but it’s still a little expensive for that [because of the cost of FPGAs].”

Smith identifies three types of virtual prototypes: the architectural prototype at the behavioral level, plus the silicon virtual prototype and the software virtual prototype at the architectural level.

Some of the leading semiconductor companies have been leveraging virtual prototyping for many years.

“It’s a long story for us,” said Jean-Marc Chateau, director of system platforms and tools at STMicroelectronics. “It started before 2000 in terms of R&D, trying to bring forward technologies for hardware/software co-design to be faster to market with SoCs. At that time it was in the central R&D organization with Philippe Magarshack. The team has been very active in the standardization of SystemC, driving this effort and chairing the board of the initiative. We have really been working from the beginning of SystemC, and we have been involved in the SPIRIT initiative and have been pushing the new standard forward. We are still pushing, by the way. SPIRIT is now in Accellera and we are still very active both on the board and in the working groups.”

ST used transactional modeling first for verification of SoCs rather than for software development. Chateau noted that ST’s big success with virtual prototyping was in verification because the random methodology being used was really tough for very complex systems. “We have used it for sure for IPs, but for big subsystems we have moved quickly to directed test in transactional mode.”

ST's Chateau: Long history with virtual prototypes.

The company’s use of the technology has increased since 2005. Before then, it was more in trials, R&D and pilot projects. “In 2005, we started a phase of using transactional models for IPs and SoCs, bringing multiple verification platforms for the same SoC to verify subsystems one by one in specific environments. We started to build the library in TLM of all of our IP, starting with consumer and moving to mobile telecom. We have accumulated both generic and dedicated IP that were written in TLM for verification purposes as a golden reference to be compared with RTL that is first emulated—in most cases—on big machines from Cadence and Mentor. We have used a lot of emulation and co-simulation with TLM in order to verify the RTL or the IP in the system environment, or at least a system that is relevant to the IP. That has been the big driving force of our effort in ESL.”
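
To make that transactional flow more concrete, here is a minimal sketch using the standard SystemC/TLM-2.0 library (not ST's actual code): a loosely timed model of a small register block, exercised by a directed test over blocking transport rather than random stimulus. The block name, register offsets, and values are invented for illustration; in a flow like the one Chateau describes, a model of this kind serves as the golden reference against which the RTL is later emulated or co-simulated.

```cpp
// Minimal SystemC/TLM-2.0 sketch (illustrative only, not ST's code):
// a loosely timed register-block model driven by a directed test.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstdint>
#include <cstring>
#include <iostream>

// TLM model of an IP block: four 32-bit registers behind a target socket.
struct RegBlockTLM : sc_core::sc_module {
  tlm_utils::simple_target_socket<RegBlockTLM> socket;
  uint32_t regs[4] = {0, 0, 0, 0};

  SC_CTOR(RegBlockTLM) : socket("socket") {
    socket.register_b_transport(this, &RegBlockTLM::b_transport);
  }

  // Loosely timed blocking transport: decode the address, read or write a register.
  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    unsigned idx = static_cast<unsigned>(trans.get_address() >> 2);
    if (idx >= 4 || trans.get_data_length() != 4) {
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
      std::memcpy(&regs[idx], trans.get_data_ptr(), 4);
    else if (trans.get_command() == tlm::TLM_READ_COMMAND)
      std::memcpy(trans.get_data_ptr(), &regs[idx], 4);
    delay += sc_core::sc_time(10, sc_core::SC_NS);  // rough access time
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// Directed test: drive a known write/read sequence instead of random stimulus.
struct DirectedTest : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<DirectedTest> socket;

  SC_CTOR(DirectedTest) : socket("socket") { SC_THREAD(run); }

  void run() {
    uint32_t data = 0xCAFEF00D;
    access(tlm::TLM_WRITE_COMMAND, 0x4, data);     // write register 1
    uint32_t readback = 0;
    access(tlm::TLM_READ_COMMAND, 0x4, readback);  // read it back
    std::cout << "readback = 0x" << std::hex << readback
              << (readback == data ? " (match)" : " (MISMATCH)") << std::endl;
  }

  void access(tlm::tlm_command cmd, uint64_t addr, uint32_t& data) {
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(cmd);
    trans.set_address(addr);
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
    socket->b_transport(trans, delay);
  }
};

int sc_main(int, char*[]) {
  DirectedTest test("test");
  RegBlockTLM dut("dut");
  test.socket.bind(dut.socket);
  sc_core::sc_start();
  return 0;
}
```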

With two populations to serve—the architects who perform investigation, and the software developers—ST currently is developing a virtual platform for software development. Chateau explained that his team of 44 dedicated people started with drivers for the IP blocks, then moved one level up. It is now working at the application level on the virtual platform. “But still it is not 100% today. We are not yet at a level where we fully deliver the complete industrial software package based on the virtual platform,” he said.

Instead, software development today is split roughly half on a board and half once silicon is available, and this is true for all consumer products: TVs, set-top boxes, monitors, and mobile phones for ST-Ericsson. The team Chateau leads guides both ST and ST-Ericsson on ESL and is dedicated to ESL support and development.

Because no tools of this sort were available in the beginning, ST developed its own framework based on open standards and open source. “We have put a basic model library into open source. We are in a phase where we are thinking about putting more into open source to extend the standards, because there are certain gaps in the OSCI TLM-2.0 standard; there are some ambiguities,” he said.

Ambiguity in TLM, IP-XACT standards is hampering interoperability
Chateau’s bone to pick with TLM and IP-XACT is the same: both are too ambiguous. They need extensions to be really interoperable between models, tools or various sources. “As we are building SoCs with many IPs from third parties changing from one company to another, or like us with ST-Ericsson, and with mergers and acquisitions everywhere, if we are mixing IPs then we really need this interoperability. Today it is jeopardized by the fact that you need to do your own interpretation of the standard to fill the gap.”

“If we let it go for one year or more it will be a disaster, because the big companies will set the de facto standard. And if you are not married to them, I see a risk that you cannot mix with anything that is not on that proprietary standard. There is really a danger that something that was open source from the beginning will be closed because of the lack of extensions. It may kill the way we work today if we are not fast enough on the extension,” he added.

Along those lines, Mike Gianfagna, VP of marketing at Atrenta noted that quality, implementation-ready RTL is important, but a bigger problem today is not really designing original RTL. It’s more about integrating existing IP blocks. “We look at most of our customers and, roughly speaking, our view is that over 80% of every SoC that starts today is already pre-ordained with predefined building blocks—either legacy blocks or third party blocks. It used to be whoever designed the most novel circuit first wins. Today, it is whoever figures out what blocks to integrate first wins.”

On a larger scale, he has observed that one of the fundamental challenges in the virtual prototyping space is the interface between the hardware virtual prototype and the software virtual prototype.

“Those two worlds are reasonably well-defined, but the way they interact with each other is not well-defined so that’s the opportunity. What’s the interface between the hardware architecture and the software model associated with that hardware architecture? And do you make that interface robust enough so that you can change things on one side of the interface and then see how they affect the other side? There’s a lot of work to do there in that data handoff, representing things like the register map, the interconnections and how you profile performance at the software level that are meaningful for the hardware. There are a lot of connections to be made between the software and hardware virtual prototypes,” Gianfagna said.
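
As a rough illustration of the kind of handoff Gianfagna describes, the sketch below (the UART device, its names, and its offsets are all hypothetical) shows a single register-map description that both a hardware model and a software driver compile against, so a change to the map is visible on both sides of the interface. In practice such a description might be generated from IP-XACT rather than written by hand.

```cpp
// Hypothetical sketch: one register-map description shared by the hardware
// model and the software driver. All names and offsets are invented.
#include <cstdint>
#include <cstdio>

// Shared register map: the single source of truth for HW and SW.
struct UartRegMap {
  static constexpr uint32_t DATA   = 0x00;  // TX data register
  static constexpr uint32_t STATUS = 0x04;  // bit 0 = TX ready
  static constexpr uint32_t CTRL   = 0x08;  // bit 0 = enable
};

// Hardware side: the virtual prototype implements the same map.
struct UartModel {
  uint32_t status = 0x1;  // TX always ready in this toy model
  uint32_t ctrl   = 0x0;

  uint32_t read(uint32_t offset) const {
    switch (offset) {
      case UartRegMap::STATUS: return status;
      case UartRegMap::CTRL:   return ctrl;
      default:                 return 0;
    }
  }

  void write(uint32_t offset, uint32_t value) {
    if (offset == UartRegMap::CTRL)      ctrl = value;
    else if (offset == UartRegMap::DATA) std::putchar(static_cast<int>(value));
  }
};

// Software side: the driver uses the same named offsets, not magic numbers.
void uart_putc(UartModel& dev, char c) {
  dev.write(UartRegMap::CTRL, 0x1);                     // enable the UART
  while ((dev.read(UartRegMap::STATUS) & 0x1) == 0) {}  // wait for TX ready
  dev.write(UartRegMap::DATA, static_cast<uint32_t>(c));
}

int main() {
  UartModel uart;
  const char msg[] = "ok\n";
  for (const char* p = msg; *p; ++p) uart_putc(uart, *p);
  return 0;
}
```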

Getting the balance right
Based on his experience at ST, in order to properly implement virtual prototyping, Chateau advised a single SoC development plan. “You cannot have a software plan and a hardware plan. You need to have one plan considering the speed at which the development will be done on the hardware virtual model to be able to develop the software. The most critical problem is that you have three categories of customers for the virtual platform. Verification was the driving force for us. For other companies it will be early software development and for others it may be tools for architects to do what-if analysis and investigate the whole architecture approach.”

The constraints of those three populations are quite different. For architecture work they need a cycle-accurate approach. With software, they may not care if it is cycle accurate, but they do want a quick simulation that works. With verification, they don’t need cycle accuracy at the beginning, but at the end the model will need to be replaced by the RTL. There needs to be a flow down to the lowest level, Chateau explained.

“Those constraints are quite different and you cannot find a perfect model that matches all of them, so you need to accept a compromise,” he said. “You also have the need to synthesize more and more IPs with high-level synthesis tools to go faster. The level of accuracy of those models is quite different, so it is another level of abstraction. There is no magic compromise. You cannot afford in any company to have three types of models–or even four if you want to do high-level synthesis. You will do only one, and maybe try to generate from it the one that comes before synthesis, but you can do only one. Even one is quite constrained to have enough resources to do it on time, well before silicon. You have to think about this compromise at the beginning and not focus on only one customer type, because if you do it for only software development, it may not be useful in verification. Then in verification you will have low productivity, and the same for architecture and so forth. You need to find the right compromise at the beginning when you introduce these kinds of techniques.”
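
One way to picture that compromise, purely as an assumed sketch rather than ST's methodology, is a single functional model with a switchable timing mode: untimed for fast software simulation, approximately timed for architecture or verification work. The DMA device and its timing numbers below are placeholders.

```cpp
// Assumed illustration of one accuracy compromise: identical functional
// behavior, with time accounting enabled only when a user needs it.
#include <cstdint>
#include <cstdio>

enum class Accuracy { FAST_UNTIMED, TIMED };

struct DmaModel {
  Accuracy mode;
  uint64_t elapsed_ns = 0;  // only maintained in TIMED mode

  explicit DmaModel(Accuracy m) : mode(m) {}

  // Model a DMA transfer of `bytes` bytes; behavior is the same in both
  // modes, only the time accounting differs.
  void transfer(uint32_t bytes) {
    if (mode == Accuracy::TIMED) {
      // Placeholder timing: fixed setup cost plus per-beat cost on a 64-bit bus.
      uint64_t beats = (bytes + 7) / 8;
      elapsed_ns += 50 + beats * 5;
    }
    // FAST_UNTIMED: skip time accounting for maximum simulation speed.
  }
};

int main() {
  DmaModel fast_model(Accuracy::FAST_UNTIMED);
  DmaModel timed_model(Accuracy::TIMED);
  fast_model.transfer(4096);
  timed_model.transfer(4096);
  std::printf("untimed: %llu ns, timed: %llu ns\n",
              static_cast<unsigned long long>(fast_model.elapsed_ns),
              static_cast<unsigned long long>(timed_model.elapsed_ns));
  return 0;
}
```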


