Making Verification Easier

Verification IP is finding new uses to speed up and simplify verification, particularly when coupled with emulation technology.

SoC design teams are confronting increasing complexity as they target specific application segments, and at the same time they are under pressure to reduce design risk more quickly while speeding up testing to make sure everything works.

Those often-conflicting goals have transformed verification IP from an interesting concept to a must-have tool for advanced designs. Verification IP (VIP) emerged a decade ago as a form of reusable IP, which can be used to create the tests needed to shorten SoC verification time and increase coverage. Often it is used to verify standard bus protocols, but it also can be used for system performance analysis.

“Protocols are coming out more and more quickly, and every one is more complex than the one before,” said Susan Peterson, group director of product marketing for verification IP and memory models at Cadence. “Imagine you are a user, and you’ve got a guy who is great at USB 3.0. But now USB Type-C with power delivery is coming on, and they don’t know anything about that. How are they going to come up to speed quickly? How are you going to know actually what the latest protocol is going to be in your next product line, and therefore if you are doing it internally, what you should be developing.”

SoC designers currently need to support more than 100 different standard interfaces, as well as numerous standard memory interfaces. In the past they developed their own IP, but they no longer have the resources to do that, so now they rely on IP vendors and VIP developers.

The advantage to that approach is the designers don’t have to be experts in all of the protocols. “If you’ve got an engineer who maybe doesn’t know much about a new protocol or who is junior, using verification IP helps them to be more productive than they would be otherwise,” Peterson noted. “It trains them not only on the product, and the verification, but on the protocol itself, and that’s really important in regions of the world where many engineers are so much younger than they are here in the U.S., which equals less experience. It’s really a great way to make a junior workforce behave more like a senior workforce.”

But VIP encompasses more than just protocols. Adam Rose, product marketing manager at Mentor Graphics, said that end markets impose a spectrum of technical requirements in different combinations (high bandwidth and low latency in one case; high bandwidth, low latency, and low pin count in another), and all of these must be captured in such a way that the proper tests can be created to reflect the system requirements.

“That complexity really means that, increasingly, users just can’t build their own verification IP. When protocols were simpler, building their own was a viable option. With time-to-market pressures, multiple variations, and verification teams not expanding, those teams can no longer build their own verification IP the way they used to be able to. And that is driving the market, which is estimated at about $125 million today and has been growing at 20% to 25% a year over the last five years,” Rose said.

Emulation, simulation, virtual prototyping use growing

Given the sheer size of the verification task, VIP increasingly is being used with emulation, simulation and virtual prototyping technology to help with the overall effort.

When things were simple — think mobile SoC — a design could essentially just be simulated, said Anush Mohandass, vice president of business development and marketing at NetSpeed Systems. But as coherency becomes more mainstream, millions and millions of cycles must run before a bug shows up, which makes emulation, along with synthesizable VIP, a cornerstone of the verification strategy.

Neill Mullinger, product marketing manager for vertical solutions at Mentor Graphics, sees two sides of VIP use in the context of emulation: simulation acceleration and APIs.

The simulation acceleration side uses emulation to run a UVM test environment faster. The VIP involved looks very similar to standard verification IP used with UVM, but standard VIP is not targeted to run on an emulator. “In order to maximize throughput, you create verification IP where the lower-level transactor piece sits on the emulator so it is synthesized,” he explained.

The API side of the VIP sits in the testbench and communicates between the two using transactions rather than low-level signals. By contrast, if a design is simply put onto the emulator and run in a conventional UVM testbench mode, the speed of the emulator is limited because everything moves back and forth at the signal level. Part of that signal-level activity is still running in simulation, so it throttles back the speed possible with emulation.

“You take all of the low-level signal level and put that all on the emulator, and it’s all handled through the transactor that sits on the emulator, and that’s doing all the translation from reads and writes down to the signal level,” Mullinger said. “In the same way that you would test a verification IP, you test a transactor that runs verification IP to make sure that you’re adhering to the protocol spec, and then you just move the protocol-based traffic that’s all sitting on the emulator back and forth between the design and the transactor. Then you can monitor the traffic and do that in your simulator in the same way you would a verification IP. It’s not a black box. It’s completely visible to the user.”
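
To make this split concrete, here is a rough C++ sketch of the idea Mullinger describes. Every name in it is invented for illustration; no real emulator API looks exactly like this. The point is simply that the testbench exchanges whole transactions, while the pin-level handshakes those transactions imply stay conceptually on the emulator side with the transactor.

    // Hypothetical sketch: the testbench/emulator boundary carries
    // transactions, not signals. Nothing here is a real vendor API.
    #include <cstdint>
    #include <cstdio>

    struct BusTransaction {          // the unit that crosses the boundary
        bool     is_write;
        uint32_t addr;
        uint32_t data;
    };

    class Transactor {               // conceptually synthesized onto the emulator
    public:
        // One call per transaction; the signal-level handshake it implies
        // (address phase, data phase, response) never reaches the testbench.
        uint32_t execute(const BusTransaction& t) {
            if (t.is_write) { mem_[t.addr % kWords] = t.data; return 0; }
            return mem_[t.addr % kWords];
        }
    private:
        static constexpr int kWords = 256;
        uint32_t mem_[kWords] = {};  // stand-in for the design under test
    };

    int main() {
        Transactor xtor;
        xtor.execute({true, 0x10, 0xCAFEF00D});           // one write transaction
        std::printf("read: 0x%08X\n", xtor.execute({false, 0x10, 0}));
    }

Because the boundary carries one call per transaction instead of many signal toggles per cycle, far less traffic crosses between the simulator and the emulator, which is where the speedup comes from.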

Abstraction levels make a difference

Drew Wingard, CTO at Sonics, said he observed some time ago that there was a lot of similarity in the level of abstraction designers wanted for the components used in functional verification and in system performance analysis. “These tend to lend themselves to the concept of transaction-level modeling, and therefore, a transactor is needed that can ‘play’ the transactions.” In a functional verification context, of course, that transactor is VIP.

He explained that the main engine inside verification IP is the transactor model — sometimes called masters and slaves, sometimes just called VIPs, and in some flows called agents — the element that UVM, or whatever approach is being used, directs to drive the test vectors. “By doing that, the verification person isn’t having to code at the level of wires, so many of the components are being verified at a boundary where the boundary is a protocol, not just a set of wires. In this way, the transaction-level model is a very nice thing.”
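
A similarly hedged sketch of the master/slave pairing Wingard describes, again in plain C++ rather than a real UVM agent, with all names invented: the master generates constrained-random transactions from a seed, the slave responds, and no wires appear anywhere.

    // Illustrative only: a "master" agent produces protocol-level
    // transactions and a "slave" agent responds to them. A real UVM
    // agent adds sequencers, drivers, and monitors around this idea.
    #include <cstdint>
    #include <cstdio>
    #include <random>

    struct Transaction { uint32_t addr; uint32_t data; bool is_write; };

    class MasterAgent {              // plays the role of a master VIP
    public:
        explicit MasterAgent(uint64_t seed) : rng_(seed) {}
        Transaction next() {         // constrained-random stimulus
            std::uniform_int_distribution<uint32_t> d;
            return { d(rng_) & 0xFFu, d(rng_), (d(rng_) & 1u) != 0 };
        }
    private:
        std::mt19937_64 rng_;
    };

    class SlaveAgent {               // responds at the protocol boundary
    public:
        uint32_t respond(const Transaction& t) {
            if (t.is_write) { mem_[t.addr] = t.data; return 0; }
            return mem_[t.addr];
        }
    private:
        uint32_t mem_[256] = {};     // addresses are masked to fit above
    };

    int main() {
        MasterAgent master(42);      // a fixed seed keeps the run reproducible
        SlaveAgent  slave;
        for (int i = 0; i < 4; ++i) {
            Transaction t = master.next();
            std::printf("%s addr=0x%02X -> 0x%08X\n",
                        t.is_write ? "WR" : "RD", t.addr, slave.respond(t));
        }
    }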

Wingard noted that design teams want to be able to use VIP in an emulator, because what they are trying to do is the same thing they were doing in a simulator, just a whole lot faster. “However, the way some VIP is written, it’s not trivial or obvious that it runs on the emulator, so sometimes it has to run on a simulator that’s attached to the emulator, and that’s complicated. And now you are probably better off buying your emulator from the same people you buy your simulator from, or your VIP. When it comes to using VIP in a virtual platform model, now we have to get down to what’s the level of accuracy of the virtual platform model you are trying to build — and that’s what I would call system modeling.”

There are two common levels of abstraction in a virtual platform model, Wingard said. “One is purely functional, and it has no concept of time or bit accuracy, so trying to make VIP helpful in that context is a tall order. The people there are working with pretty abstract descriptions of protocols. It’s not that you can’t make it work. It’s that the people probably aren’t willing to do the work necessary to make it work. The other level of abstraction for a virtual platform model is one where you can actually do legitimate performance analysis and you’re cycle-approximate or cycle-accurate. There, you can use the VIP. It still looks heavy. It feels heavy to many of the people using it. But they may prefer it because it has some level of guaranteed accuracy.”
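
The difference between those two abstraction levels can be sketched in a few lines of C++. The cycle counts below are invented placeholders; a real cycle-approximate model would calibrate them against the hardware. What matters is that the functional model answers only “what,” while the cycle-approximate model also answers “how long.”

    // Level 1: purely functional -- behavior only, no concept of time.
    #include <cstdint>
    #include <cstdio>

    uint32_t functional_read(const uint32_t* mem, uint32_t addr) {
        return mem[addr];            // no timing information available
    }

    // Level 2: cycle-approximate -- the same behavior, but each access
    // also advances an approximate cycle count, enabling performance analysis.
    struct CycleApproxBus {
        uint64_t cycles = 0;
        uint32_t read(const uint32_t* mem, uint32_t addr) {
            cycles += (addr < 64) ? 2 : 20;   // hypothetical near/far latencies
            return mem[addr];
        }
    };

    int main() {
        uint32_t mem[128] = {};
        mem[3] = 0xABCD;
        mem[100] = 0x1234;

        std::printf("functional: 0x%X\n", functional_read(mem, 3));

        CycleApproxBus bus;
        bus.read(mem, 3);
        bus.read(mem, 100);
        std::printf("cycle-approximate: %llu cycles after two reads\n",
                    (unsigned long long)bus.cycles);
    }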

He still maintains there is very little verification reuse, but he noted that Accellera is looking to address this issue.

As far as abstraction levels go, NetSpeed’s Mohandass also sees things moving to a different level of abstraction. “You want a VIP to emulate a CPU, a DDR, a memory controller. Because we are not designing at the transistor level, but rather at the IP level, that’s the way engineering teams are doing it. IP usage is much higher, and they are building systems on a chip for a reason. The usage, and the way you deal with it, is different. You want to abstract away the complexity they don’t want to see, but at the same time give enough detail that if there is some issue they can quickly dig into it and figure out how to deal with it.”

Where we go from here

The path forward is sure to include even more usage of transactors/VIPs with emulation and simulation.

Mentor Graphics’ Mullinger said that along with simulation acceleration, there is also growing interest in virtual in-circuit emulation (ICE). Traditional ICE, he noted, hardwires a traffic generator of some sort into the emulator, controls it with a C-based testbench, and uses it to create traffic going into the design.

“The big change over the last five years has been the evolution of virtual ICE,” he said. “ICE doesn’t scale that well as designs get larger. You need more and more traffic, and it’s non-deterministic. It’s really hard, once you find a fault, to be able to replay the exact set of circumstances that caused you to find that fault in the first place, so it can be challenging. Virtual ICE is more like the transactor model, where everything is running virtually. You have a virtual traffic generator that talks to the transactor sitting on the emulator, and that drives traffic into the design. It runs at the same speed as ICE, but because it has been virtualized, it is scalable and deterministic.”
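
The determinism argument can be illustrated with a small C++ sketch, with all names hypothetical: because a virtual traffic generator is just software seeded with a known value, the exact stimulus stream that exposed a bug can be replayed on demand, something a physical device plugged into an ICE setup cannot guarantee.

    // Illustrative only: two generators with the same seed emit the same
    // traffic, which is the replay property virtual ICE provides.
    #include <cstdint>
    #include <cstdio>
    #include <random>

    struct Packet { uint32_t addr; uint32_t payload; };

    class VirtualTrafficGenerator {
    public:
        explicit VirtualTrafficGenerator(uint64_t seed) : rng_(seed) {}
        Packet next() {
            std::uniform_int_distribution<uint32_t> d;
            return { d(rng_), d(rng_) };
        }
    private:
        std::mt19937_64 rng_;
    };

    int main() {
        VirtualTrafficGenerator first_run(0xDEADBEEF);   // original session
        VirtualTrafficGenerator replay(0xDEADBEEF);      // later replay
        for (int i = 0; i < 3; ++i) {
            Packet a = first_run.next();
            Packet b = replay.next();
            std::printf("pkt %d: 0x%08X/0x%08X replay %s\n", i, a.addr,
                        a.payload,
                        (a.addr == b.addr && a.payload == b.payload)
                            ? "matches" : "MISMATCH");
        }
    }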

Along these lines, there is likely to be much more use of hybrid emulation, particularly with a software model of a processor running the software stack, connected to the rest of the RTL running on the emulator. This lets software engineers run and debug software at the speeds they need, he concluded.

Related Stories
Software Debug Gets Tricky
Flexibility Vs. Portability In Emulation


