How To Speed Up Verification

Do embedded testbenches really shorten the verification task in a hardware-assisted development environment?


Software requirements have changed the tapeout process in today’s SoCs so much that it isn’t uncommon to hear a design can’t be released because Android hasn’t booted.

“It’s one of those things where you really understand that what used to be classic hardware verification that said ‘the chip is done’ is heavily impacted by whether it actually does software things,” noted Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “For that you never have enough verification cycles, so you need to make sure that you run a lot and run it fast, which is where hardware-assisted verification comes in. That’s where emulation and so forth happens—where you make sure that the interaction between hardware and software actually allows you to bring up Android and even bring up applications on top of that.”

Given time-to-market pressures, all of this must run much faster than traditionally could happen in normal RTL simulation, which is where the hardware acceleration techniques come in. “You keep the design under test in the hardware, making it a little bit faster and keeping the testbench on the host. Next, you switch fully to emulation where basically everything is in the emulator, [also known as] in-circuit emulation (ICE). There, you have a design running at a megahertz and you connect it with rate adapters [or speed bridges] and that is connected to the real system environment,” he explained.

FPGA-based prototypes also fit in this category but require longer setup times.

The next place where software and hardware touch is when the actual chip comes back from the fab. Technologically speaking, hardware-assisted verification and software development are really the enablers that allow engineering teams to bring up much more software on the virtual representation of the chip. This is the software the end user actually sees. It’s what runs on the chip and goes out with the product.

Benefits and challenges
Among the challenges of using hardware-assisted verification is getting the entire chip into that environment.

“Performance is the key reason chip design teams adopt hardware-assisted verification such as emulation, so an emulation system first must deliver the highest performance model of a chip and its environment for SoC verification and hardware/software bringup – regardless of how the chip environment is created,” said Tom Borgstrom, director of emulation marketing at Synopsys. “With emulation, the design-under-test (DUT) is usually represented in the emulator, while the chip’s environment can be provided by real-world connections outside the emulator, modeled in the emulator itself, or modeled in a transaction-based verification environment running on a host connected to the emulator. One of the critical factors in deciding how to represent the chip’s environment is the overall performance of the system, including both SoC and environment. Will you be able to achieve the multi-megahertz performance required to verify your SoC and its software interacting with a realistically complex chip environment?”

Borgstrom noted that “virtual bridges” can be used in conjunction with virtual test environments to connect the DUT through protocol-specific transactors to real devices, such as USB ports or Ethernet networks on the host. Coupled with that, system-level debug components, such as trackers and monitors, can be used to understand the high-level behavior of an SoC.

This isn’t always so simple, though. If what you are trying to prove is the chain associated with a PCI Express interface on the chip, for example, it requires some kind of testbench to stimulate the on-chip side of that PCI Express chain, explained Drew Wingard, CTO of Sonics. “That obviously has to be able to go into some kind of hardware environment — to be compiled into an FPGA or some kind of emulation platform — and that has implications for how you create the testbench because many of the testbench technologies are not naturally synthesizable into hardware.”

For many digital design engineers, there are some other compelling reasons for performing hardware-assisted verification, he said.

First, the number of vectors that can be run per second goes up a lot when the design is put into hardware. “You can be at a full level of RTL description and you don’t have to be abstracted in any way if you can get your testbench into that environment as well,” Wingard noted. “There is a whole lot of stuff that’s gone on for a long time in the hardware accelerator world trying to make all that work. We don’t see as much hardware acceleration per se anymore because people have learned how to do the synthesizable testbench things, which was one of the biggest things hardware accelerators provided. They can use more off-the-shelf digital components like FPGAs to build the hardware platform.”

Second, he said, “the emulation side has an awful lot to do with being able to run a version of the real software because suddenly you can actually run the real processor. One part of it is very, very natural. The actual environment in which this hardware block being designed is going to be used is one in which all of the control is happening from a microprocessor, so doing bring-up in a simulation environment that doesn’t include a processor is, in many ways, artificial. If you can bring it up using the host processor or a processor compatible with the actual processor that’s going to be used, then a good chunk of your testbench moves into code for that processor. So instead of a testbench being vectors that are going to get played on a transactor into this hardware environment, it gets compiled for the processor and it becomes a binary image – maybe it’s even the actual device driver that’s going to use this hardware when it’s really working in the compiled system.”
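
As a rough illustration of what “testbench as code” looks like, the bare-metal C routine below programs a block through memory-mapped registers, polls a status bit, and checks a read-back value, the kind of check that previously would have been driven by a vector sequence. The block, its base address, and its register layout are all hypothetical, purely for illustration.

```c
#include <stdint.h>

/* Hypothetical memory-mapped block: the base address and register
 * offsets are placeholders, not taken from any real design. */
#define MYBLOCK_BASE      0x40010000u
#define MYBLOCK_CTRL      (*(volatile uint32_t *)(MYBLOCK_BASE + 0x00))
#define MYBLOCK_STATUS    (*(volatile uint32_t *)(MYBLOCK_BASE + 0x04))
#define MYBLOCK_DATA      (*(volatile uint32_t *)(MYBLOCK_BASE + 0x08))

#define CTRL_ENABLE       (1u << 0)
#define STATUS_READY      (1u << 0)

/* Testbench-as-code: enable the block, wait for it to report ready,
 * then do a simple write/read-back check on a data register. */
int myblock_smoke_test(void)
{
    MYBLOCK_CTRL = CTRL_ENABLE;

    /* Poll for ready instead of waiting on a simulation event. */
    while ((MYBLOCK_STATUS & STATUS_READY) == 0)
        ;

    MYBLOCK_DATA = 0xA5A5A5A5u;
    return (MYBLOCK_DATA == 0xA5A5A5A5u) ? 0 : -1;  /* 0 = pass */
}
```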

Here is where embedded testbenches can come into play. They execute within the hardware-assisted verification platform and virtualize the chip’s environment, which speeds up verification.

Once the software that the end user actually sees is verified together with the chip, that software can be used for verification of the hardware itself. “This is becoming like the good old days of built-in self-test (BIST),” Schirrmeister said. “This is a good analogy for embedded testbenches or software-driven verification, where you have something that is in the chip, can actually be executed in the chip, is software-operated, and tests and verifies aspects of the chip.”

Instead of writing SystemVerilog testbenches for a specific piece of the design, which can become very complex — especially when it comes to testing the interaction between the different blocks — it is done in software. That software is executed on a processor in the design to which the various components can be connected. This allows software-driven verification to run verification of the hardware and the hardware/software interaction.
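
To make that concrete, here is a minimal sketch, assuming a hypothetical DMA engine with an invented register map: software running on a processor in the design programs the DMA block to copy a buffer, waits for completion, and verifies the result, exercising both the block itself and the hardware/software interaction. The same C routine can, in principle, be carried across simulation, emulation, prototyping, and silicon.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical DMA engine; base address and register layout are
 * placeholders for illustration only. */
#define DMA_BASE   0x40020000u
#define DMA_SRC    (*(volatile uint32_t *)(DMA_BASE + 0x00))
#define DMA_DST    (*(volatile uint32_t *)(DMA_BASE + 0x04))
#define DMA_LEN    (*(volatile uint32_t *)(DMA_BASE + 0x08))
#define DMA_CTRL   (*(volatile uint32_t *)(DMA_BASE + 0x0C))
#define DMA_STAT   (*(volatile uint32_t *)(DMA_BASE + 0x10))

#define DMA_GO     (1u << 0)
#define DMA_DONE   (1u << 0)

static uint8_t src[256], dst[256];

/* Software-driven interaction test: one block (the DMA engine) moves
 * data through the interconnect and memory; the processor checks the
 * result, covering the hardware and the hardware/software interface. */
int dma_copy_test(void)
{
    for (size_t i = 0; i < sizeof src; i++) {
        src[i] = (uint8_t)i;
        dst[i] = 0;
    }

    DMA_SRC  = (uint32_t)(uintptr_t)src;
    DMA_DST  = (uint32_t)(uintptr_t)dst;
    DMA_LEN  = sizeof src;
    DMA_CTRL = DMA_GO;

    while ((DMA_STAT & DMA_DONE) == 0)
        ;                                   /* wait for completion */

    for (size_t i = 0; i < sizeof dst; i++)
        if (dst[i] != src[i])
            return -1;                      /* data mismatch: fail */
    return 0;                               /* pass */
}
```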

“The fascinating aspect is that for those software-driven verification representations as testbenches, you can run them along the different verification engines,” he said. “You can run them at RTL. You can even start developing some of those scenarios on a virtual platform if you don’t have the RTL yet. You can run them on acceleration, on emulation, on FPGA-based prototyping, and you can run them on the real chip. Verification reuse is potentially a biggie in terms of impact.”

Cadence announced an embedded testbench capability last September (http://www.cadence.com/cadence/newsroom/press_releases/pages/pr.aspx?xml=090913_PXPII), which it claims boosts speed because all of the design code resides on the emulation box.

“The overarching theme behind all of this is to have the design be more properly represented with its system environment and execute that within the box in the case of the embedded testbenches,” Schirrmeister said.

However, this does alter the flow quite a bit. More aspects have to occur earlier in the design process. In addition, it requires engineering teams to learn software development.

The testbench existing almost entirely in the emulator resembles the traditional standalone emulation use model, where the entire design is loaded into the box and is run there, observed Jim Kenney, director of marketing for emulation at Mentor Graphics.

“We have very few embedded testbenches. It’s not a focus for us at all. What you are trying to accomplish for the user is the same thing though. You are trying to generate stimulus or create a device — stimulus in terms of a testbench, or creating a device like a USB memory stick that the design is supposed to interface with in real life — and you want to see how well it does that.”

As is the case with embedded testbenches, Mentor Graphics’ technology creates a model of a peripheral device. “Let’s say my design has a USB port on it, so I have a USB controller in my design and I would like to make sure it works. I have a couple of options. I can sit down with SystemVerilog and write a testbench and figure out how to wiggle the pins on a USB port. Or I can use a transactor that knows how to wiggle USB but doesn’t know what else to do, so I have to write in my testbench the sequences of data that I want to go in and out of the port along with the commands and the configurations and options. That’s a testbench, which would be a behavioral testbench that can’t be embedded. It has to run external on a workstation.

“You also could take the traditional ICE approach of actually wiring up a USB device, but the USB device wants to run a lot faster than the emulator so there is a speed bridge in between. With a speed bridge you can physically connect up a USB device or you can do what we do, which is to create a virtual device. We do that by embedding some of the device characteristics in the emulator. The rest is software that runs external to the emulator on the workstation,” he explained.

Kenney pointed out that to make it look like a real USB stick in the emulator, there are USB software stacks, mass storage clients and standard software packages for these peripherals that need to run. “Those are hard to do with an embedded testbench because you have to figure out how you are going to re-create all that functionality — you usually end up making some compromises.”

Still, as to the base question of how hardware-assisted verification speeds up verification, the user doesn’t have to write the testbench or think about how the USB interface will be exercised. “They will take a USB device — physical or virtual — connect it up to the design and then use the device driver for the USB controller to go out and say, ‘There’s a memory stick, I want to format it, I want to create some files, I want to write data to the files and create folders, I want to read some data back and it looks like my USB interface seems to work,’” Kenney said.
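
A minimal sketch of that style of check, assuming the USB mass-storage stack is already running and the stick shows up as a mounted filesystem (the mount path and file name below are invented): the test writes a pattern through the driver stack, reads it back, and compares.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical mount point where the mass-storage stack exposes the
 * USB stick; the path is a placeholder, not from any real stack. */
#define STICK_PATH "/mnt/usb0/verif_test.bin"

/* Drive the whole stack, driver included: write a pattern through the
 * USB controller to the (virtual or physical) stick, read it back,
 * and compare. An I/O error or mismatch means the interface, the
 * driver, or the device model needs a closer look. */
int usb_stick_check(void)
{
    unsigned char out[512], in[512];
    for (int i = 0; i < 512; i++)
        out[i] = (unsigned char)(i ^ 0x5A);

    FILE *f = fopen(STICK_PATH, "wb");
    if (!f || fwrite(out, 1, sizeof out, f) != sizeof out) {
        if (f) fclose(f);
        return -1;                       /* write path failed */
    }
    fclose(f);

    f = fopen(STICK_PATH, "rb");
    if (!f || fread(in, 1, sizeof in, f) != sizeof in) {
        if (f) fclose(f);
        return -2;                       /* read path failed */
    }
    fclose(f);

    return memcmp(out, in, sizeof out) == 0 ? 0 : -3;  /* 0 = pass */
}
```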

It speeds up or reduces the amount of effort in that there is a canned device that the user doesn’t have to develop, he added. “It’s a model that doesn’t have to be developed and a testbench that doesn’t have to be written. The engineer then takes the device driver, which someone is going to hand them, and gets it running on the emulator. Not only is that speeding up the verification of the hardware, but it also gets a jumpstart on debugging the device driver too.”

While the approaches differ, there is at least agreement that hardware-assisted technology is required to speed the overall verification effort—something that is reflected in market demand and the earnings of all of the Big Three EDA companies.


