Virtual Prototyping Takes Off

Experts at the Table, part 1: What’s different about virtual prototyping now; the growing emphasis on performance and power in software; time to market vs. better reliability.

Semiconductor Engineering sat down with Barry Spotts, senior core competency FAE for fabric and tools at ARM; Vasan Karighattam, senior director of architecture for SoC and SSW engineering at Open-Silicon; Tom De Schutter, senior product marketing manager for Virtualizer Solutions at Synopsys; Larry Melling, director of product management for the System Verification Group at Cadence; and Bill Neifert, chief technology officer at Carbon Design Systems. What follows are excerpts of that conversation.

SE: Why is virtual prototyping so slow to catch on, and is it spreading beyond just the biggest companies?

Spotts: I got into virtual prototyping in 1997. Virtual prototyping and performance analysis were taking off until 2001, when the downturn hit and almost all virtual prototyping activity stopped. That lull continued for about three or four years. Then SystemVerilog came in and confused the issue with SystemC and OSCI TLM 2.0. It took the industry a while to say, ‘We need to look at this.’ Then SystemC came back around.

SE: At what node did it start ramping?

Spotts: We started seeing it with the fast models in the last two or three years, with more people taking a look at virtual prototyping. A lot of that has to do with prototyping on an FPGA getting expensive and software development being needed earlier and earlier.

Neifert: It comes down to virtual prototyping based on fast models. A fast model gives you 100% functional accuracy but no cycle accuracy, and that matches up with what early software development needs. Early virtual prototyping, by contrast, was approached from a performance analysis perspective. We’ve seen good interest since the A9, when there was sufficient complexity in the surrounding pieces of IP. In the past couple of years it has taken off for the stuff that is less accurate from a cycle perspective.

Karighattam: And we have convinced a couple of customers that it’s a good thing to do performance analysis in addition to speeding up software development. They have been advocates and contributed a little, although we have done the majority of the work, and it has helped with performance analysis. In two customers’ cases we have caught issues that otherwise would have gone unnoticed.

SE: Performance analysis of what?

Karighattam: Of the whole system. We put together a platform that is typically ARM-based, and we add all the memories that we have—we have developed a whole bunch of memory models from scratch. From that point on, we take the use cases and map them to traffic patterns. We are able to identify bottlenecks. If the customer has their own architecture, we let them know. If it’s our architecture, we can eliminate the bottlenecks. For the past couple of years we have been doing active work in that area.
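
To make that methodology a little more concrete for readers, here is a minimal editorial sketch (not something the panelists showed) of mapping a use case onto traffic patterns against a shared memory model and checking for a bandwidth bottleneck. The master names, structures, and numbers are all invented for illustration and are far simpler than a real SystemC/TLM platform.

```cpp
// Toy bottleneck check: map use-case traffic patterns onto a shared memory
// model and compare requested bandwidth against what the memory can service.
// All names and figures are illustrative, not any vendor's real models.
#include <cstdio>
#include <string>
#include <vector>

struct TrafficPattern {            // one master's demand in a use case
    std::string master;            // e.g. "CPU cluster", "display", "video decode"
    double bytes_per_transaction;  // burst size in bytes
    double transactions_per_sec;   // issue rate
    double bandwidth() const { return bytes_per_transaction * transactions_per_sec; }
};

struct MemoryModel {               // stand-in for a DRAM controller model
    std::string name;
    double peak_bytes_per_sec;     // theoretical peak
    double efficiency;             // fraction of peak that is usable (0..1)
    double usable() const { return peak_bytes_per_sec * efficiency; }
};

int main() {
    // A hypothetical "1080p video playback" use case mapped to traffic patterns.
    std::vector<TrafficPattern> use_case = {
        {"CPU cluster",  64.0,  2.0e6},
        {"Display read", 256.0, 1.5e6},
        {"Video decode", 128.0, 3.0e6},
    };
    MemoryModel ddr{"LPDDR3-1600 x32", 6.4e9, 0.70};  // made-up figures

    double total = 0.0;
    for (const auto& t : use_case) {
        std::printf("%-14s %8.1f MB/s\n", t.master.c_str(), t.bandwidth() / 1e6);
        total += t.bandwidth();
    }
    std::printf("Requested: %.1f MB/s, usable on %s: %.1f MB/s\n",
                total / 1e6, ddr.name.c_str(), ddr.usable() / 1e6);
    if (total > ddr.usable())
        std::printf("Bottleneck: memory cannot sustain this use case.\n");
    else
        std::printf("Headroom: %.1f MB/s\n", (ddr.usable() - total) / 1e6);
    return 0;
}
```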

De Schutter: To some extent it’s a little bit of a chicken-and-egg situation. People wanted to look into virtual prototyping but didn’t always know what was available. For fast models, the pressure got big enough for companies like ARM and Synopsys to make more of this available and to do it more broadly. The A9 was an MP (multiprocessing) core, so it put a lot more burden on cache analysis. It was more complex, and that complexity has been exploding. It went from an MP core to multi-clusters. Both the performance and the software side increased so much that there was a need to do something earlier in the cycle. It becomes hard, and late, to rely on hardware availability. Having an FPGA or emulator available is still valuable, but from a validation rather than a development point of view. The other piece is that electronics as a whole has shifted from something that was in a couple of places in the house to something that is all around us. Software content is now prevalent everywhere, so now you have to deal with performance and security. And with mobile and the IoT, the need to start earlier has become much more important over the past couple of years.

Melling: I started in virtual prototyping at Virtio, which Synopsys acquired. It was virtual prototyping for early software development, and we were trying to sell that specifically. But when you went in to talk to customers, the software developers would say, ‘Yes, I would love to have that model, but I don’t do modeling.’ So they would bring in the earliest people doing modeling on the hardware side, which were the architects. You ended up in a room with architects, and they wanted to talk about throughputs, latencies, accesses—all these performance-analysis sorts of details. If you model all of that, then the performance for the software starts to fall off. The problem that early virtual prototyping faced was that people were looking for one answer to fit all situations. What has happened, as people have become more familiar with the fast abstract model and what it provides, and with accurate timing analysis and what that gives you, is that they are starting to put together methodologies and solutions that bring these different elements together. That’s why virtual prototyping is taking off. Elements of it can be used along with cycle-accurate RTL and all these different applications, and it gives you a performance boost in simulation, which is what everyone is looking for.

SE: Virtual prototyping has always been all about time to market. As software becomes a critical factor in power budgets, is power starting to become a factor?

De Schutter: To some extent time to market has changed to time to quality. It depends on how you define quality, of course. What customers are trying to do is shift left to the point where they have something worthwhile to release. That includes performance, power, and to some extent area, but for virtual prototyping it’s really about the performance-power tradeoff. Both of these things are moving left. People want to make sure they have quality. It’s really hard to test hardware without the software. You want to develop the software without the hardware. And you get into this conundrum about what you want to do first. That’s where virtual prototyping helps bring in all the information ahead of time, and then you can validate later whether it all works and does what you want it to do within the context of the actual software.

Neifert: It’s not so much about time to market anymore. Two or three years ago, I stopped using my methodology and time-to-market slides. I didn’t need to spell out the methodology anymore; people had already accepted that they wanted to do virtual prototyping. It became a question of when and how much. That angle shifted, and a lot of that was driven by the software perspective, as well as the additional visibility you could get with software analysis. I do like that we’re finally calling it virtual prototyping and not just ESL. At least we’re no longer lumped in with everything else that fell under that umbrella, although we do still have multiple different approaches lumped under a single title. But this is a more established methodology.

Karighattam: One other issue is cost. The number of FPGA platforms needed to develop software can be significant, and the teams can be dispersed, with groups in different geographies. That can be addressed through virtual prototyping. You can essentially share one good platform across geographies, and it can be used for software development.

SE: Is it better than what was there before?

Neifert: One of our customers was replacing an FPGA because it kept hanging and they didn’t know why. Using our models it ran slower, but they could debug it faster. And when they shipped it to their team in India, it didn’t have any blue wires attached to it. It was just software.

Melling: The important part is accuracy. As we started applying virtual prototypes to hybrids and connecting to RTL, the demand from the software people was much higher than we expected. We didn’t think software people wanted to be anywhere near RTL. The reality is that a software person’s job isn’t done until the software is running on silicon. A software guy’s life is miserable once silicon returns; they can’t go home. What we’ve been seeing from the software teams is an attitude that anything that gives them access pre-silicon, so they can truly validate, is good. That way, when the silicon comes back it’s actually working, and all those little errors have been uncovered and are gone by the time silicon comes back. That’s the drive for FPGAs, too. An FPGA is an ugly platform. It’s not very visible, and it has all these difficulties to work with, but it is closer in accuracy to the real silicon and it gets rid of software issues.

Spotts: Companies doing software really want two models. They want a cycle-approximate or cycle-accurate model, and they want one for software development that runs fast. With the fast one, they want to create their software on top of an OS running on a virtual platform to validate that their software works. That brings up the subject of having software before the RTL is available, and the concept of co-design and co-verification. The hardware engineer can then test with software that’s already available. We see companies going in that direction and doing co-design work. The virtual platforms do help in that aspect.
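
As an illustration of the co-design idea (again an editorial sketch, not an example from the panel), driver code can be written against a virtual register model of a peripheral long before any RTL exists, then retargeted later at RTL simulation, an FPGA, or silicon. The UART register map and class names below are hypothetical.

```cpp
// Minimal co-design sketch: driver code written against a virtual register
// model of a peripheral before RTL exists. The "UART" register map below is
// invented for illustration; it is not any real IP block.
#include <cstdint>
#include <cstdio>
#include <string>

class VirtualUart {                     // stands in for the eventual RTL/silicon
public:
    static constexpr uint32_t STATUS_TX_READY = 0x1;
    uint32_t read(uint32_t offset) const {
        return (offset == 0x04) ? STATUS_TX_READY : 0;   // STATUS register
    }
    void write(uint32_t offset, uint32_t value) {
        if (offset == 0x00) std::putchar(static_cast<char>(value));  // TXDATA register
    }
};

// Driver code: this is what the software team can write and validate today.
// Later, read()/write() are retargeted at RTL simulation, an FPGA, or silicon.
void uart_puts(VirtualUart& dev, const std::string& msg) {
    for (char c : msg) {
        while ((dev.read(0x04) & VirtualUart::STATUS_TX_READY) == 0) { /* spin */ }
        dev.write(0x00, static_cast<uint32_t>(c));
    }
}

int main() {
    VirtualUart uart;
    uart_puts(uart, "hello from pre-silicon software\n");
    return 0;
}
```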

SE: Is the code better quality now than when things were done sequentially?

Spotts: It depends on how you measure the quality of code. There is more code being developed, but the number of errors between the software and the hardware platform it’s running on has been dramatically reduced. The total number of errors you see in software development—and there is more software being written, faster—is going up. But you are catching those errors sooner.

Karighattam: Some software got shifted and other software did not. The boot code got shifted because it’s easy to develop early on, and the firmware can be developed. But without the end devices—models for USB and so on—that software development still stays on an FPGA board.

Spotts: There are models for USB and PCI Express. It’s out there.

Karighattam: It’s there for the controllers. It still needs to be done for the end devices.


