When To Virtualize, When To Stay In The Real World

Is there one right answer for when to virtualize a design and when to stay in the real world?


Virtualization is all the rage these days. People have virtual personas on LinkedIn, Facebook and Match. I sent my daughter to a Minecraft camp at Stanford where she built virtual worlds while learning programming. Virtualization also plays an important role in chip development, especially when it comes to representing the system environment. There is, however, some crass misinformation out there, especially when virtual approaches are compared to the real world itself. Buzz words like “cables are bad” are deliberate misinformation, so I have put this together to clearly outline the options.

As shown in previous blogs, like “Software Driven Electronic Design Automation,” and in the attached graphic, a chip really does not live in isolation. It only comes to life with software executing on its processors to run functionality, as well as with proper representations of the system environment to allow software development, testing and verification. For instance, a USB master peripheral on a chip will talk to a USB slave like a memory card holding media information. An Ethernet port will talk to the network in the chip’s environment to connect the chip to the network infrastructure and ultimately allow the user to experience the Internet in a browser like Chrome, running on an OS like Android, which executes on an application processor subsystem on the chip.

More generically, in order to verify functionality of such a path, the user needs to run software on an apps processor, debug it together with the hardware of the chip and the peripheral itself, and validate how the peripheral talks to the system environment. Throughout the project flow, users have various options to do this, as also summarized in the table in the attached graphic.

Very early on, users can model the system or a portion of it in a virtual platform. One of the coolest demos I saw here, as early as 15 years ago, was a virtual prototype that executes an OS on which the user can bring up a browser to update Facebook over the “real Internet,” as well as access a “real memory stick” inserted into the PC on which the virtual platform executes, to load media into the media player running on the OS of the virtual prototype. Sounds wicked, but the underlying technology is standard in all virtual platform offerings today: the peripheral in the virtual platform talks to an abstracted system model that represents the USB slave and Ethernet connection and connects it to the “real” USB and Ethernet on the host PC on which the virtual prototype resides. The use model here is software development of the drivers on both sides – the PC and the OS running on the chip – as well as validation of long sequences at real speeds; it runs at almost real time, gated only by the speed of the virtual prototype. And it demos wicked well ☺.
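To make this more concrete, here is a minimal sketch, assuming a Linux host, of how such a bridge between a modeled Ethernet peripheral and the host’s real network can work. The TAP device is a standard Linux mechanism for exchanging raw Ethernet frames with the host network stack; the VirtualMac model and bridge_step loop are hypothetical illustrations of the concept, not any vendor’s actual API.

```cpp
// Sketch: bridging a modeled Ethernet peripheral in a virtual platform to
// the host's real network through a Linux TAP device. "VirtualMac" is a
// hypothetical stand-in for the virtual platform's Ethernet peripheral model.
#include <cstdint>
#include <cstring>
#include <vector>
#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <sys/ioctl.h>
#include <unistd.h>

// Open a TAP interface: the host sees a normal network device, while this
// process reads/writes raw Ethernet frames on the returned file descriptor.
static int open_tap(const char* name) {
    int fd = open("/dev/net/tun", O_RDWR | O_NONBLOCK);
    if (fd < 0) return -1;
    ifreq ifr{};
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;              // raw frames, no extra header
    std::strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { close(fd); return -1; }
    return fd;
}

struct VirtualMac {                                   // hypothetical peripheral model
    bool frame_pending() const { return false; }      // does the model want to transmit?
    std::vector<uint8_t> pop_tx_frame() { return {}; }
    void push_rx_frame(const uint8_t*, size_t) {}
};

// One polling step of the bridge: move frames in both directions.
void bridge_step(int tap_fd, VirtualMac& mac) {
    uint8_t buf[2048];
    ssize_t n = read(tap_fd, buf, sizeof buf);        // host network -> modeled MAC
    if (n > 0) mac.push_rx_frame(buf, static_cast<size_t>(n));
    if (mac.frame_pending()) {                        // modeled MAC -> host network
        std::vector<uint8_t> frame = mac.pop_tx_frame();
        if (write(tap_fd, frame.data(), frame.size()) < 0) { /* drop on error */ }
    }
}
```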

Once RTL is available, users can verify the hardware in simulation. Typically they do that with a test bench that behaves like the system environment would – Ethernet packets are sent, for example. Constrained random testing, assertions and all advanced verification features can be used to direct traffic and cause specific scenarios to be verified. Due to the limited speed, real-world traffic and long sequences are not practical in this configuration.
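For readers less familiar with constrained-random stimulus: production test benches typically express constraints in SystemVerilog/UVM, but the principle can be sketched in a few lines of C++ – randomize within legal bounds, then pin down the fields that direct traffic at the scenario of interest. The frame layout below is standard Ethernet; the send_to_dut hook is hypothetical.

```cpp
// Sketch: constrained-random stimulus in plain C++. Production test benches
// usually express this with SystemVerilog/UVM constraints; the principle is
// the same: randomize within legal bounds, then direct specific fields.
#include <cstdint>
#include <random>
#include <vector>

std::vector<uint8_t> random_eth_frame(std::mt19937& rng) {
    // Constraint: legal untagged Ethernet frame length is 64..1518 bytes.
    std::uniform_int_distribution<size_t> len_dist(64, 1518);
    std::uniform_int_distribution<int> byte_dist(0, 255);
    std::vector<uint8_t> frame(len_dist(rng));
    for (uint8_t& b : frame) b = static_cast<uint8_t>(byte_dist(rng));
    // Directed part: pin the EtherType (offsets 12-13) to IPv4 (0x0800)
    // to steer traffic toward the specific scenario being verified.
    frame[12] = 0x08;
    frame[13] = 0x00;
    return frame;
}

int main() {
    std::mt19937 rng(42);                 // fixed seed keeps regressions reproducible
    for (int i = 0; i < 10; ++i) {
        std::vector<uint8_t> f = random_eth_frame(rng);
        (void)f;                          // send_to_dut(f); -- hypothetical hook
    }
}
```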

Closest to the actual chip, but still well before silicon, users connect the system environment to FPGA-based prototypes, which are very practical for software development and system validation due to their high speed. Interfaces like USB, Ethernet and even PCIe can run at full speed, either in specific dedicated FPGAs or through connector cards. With proper rate adaptation, they can run together with the rest of the chip, which executes at tens of MHz. Using JTAG hardware, software debuggers like DS-5, Lauterbach and GDB can be attached to debug the software running on the processors in the chip.

Between simulation-based techniques – virtual prototypes and RTL simulation – and FPGA-based prototyping lies the world of acceleration and emulation. Recently, buzz words like “cables are bad” were used by some participants in the market to discredit in-circuit emulation, which is still the biggest market segment in this space. Let’s look into the details here. There are four ways to represent the system environment for emulation, and – surprise, surprise – each of them offers valid capabilities that the others don’t.

First, acceleration is a variation of the RTL simulation case described above. The important difference is that the design under test (DUT) now resides in the emulator. The test bench remains on the host, represents the system environment, and can execute advanced verification techniques. Speed-ups between 40x and 1,000x over simulation can be achieved. The use model here clearly is hardware verification with all the bells and whistles that advanced verification has to offer.

Using in-circuit emulation (ICE), the DUT resides in the emulator and the real system environment is connected using rate adapters (speed bridges). The complete set-up is slowed down to the speed of the DUT but connected to real Ethernet and real USB, for instance. The speed is in the MHz range, which allows execution of long test sequences and real-world traffic (like in a virtual prototype, but now with the real RTL). Testers from LeCroy or Rohde & Schwarz can be connected, blending this set-up with the post-silicon testing world well before silicon is available. The actual connection is sometimes called difficult, error prone and datacenter unfriendly. However, the advantages of speed, connected hardware testers and real-world traffic can only be achieved in this configuration.
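Conceptually, a speed bridge is an elastic buffer with flow control: it absorbs bursts arriving at line rate and asserts backpressure (for Ethernet, pause frames) before the buffer overflows, so the MHz-range DUT never sees more than it can digest. Here is a minimal sketch of that idea; the class and its interface are illustrative, not a real product’s API.

```cpp
// Sketch: the core of a rate adapter (speed bridge) - an elastic buffer
// between a full-speed interface and a DUT running in the MHz range.
// The interface below is illustrative, not a real product's API.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

class RateAdapter {
public:
    explicit RateAdapter(size_t depth) : depth_(depth) {}

    // Called at line rate with frames from the real world. Returns false
    // when the buffer is full, i.e., when backpressure (for Ethernet,
    // a pause frame) must be asserted toward the link partner.
    bool push_from_wire(std::vector<uint8_t> frame) {
        if (fifo_.size() >= depth_) return false;
        fifo_.push_back(std::move(frame));
        return true;
    }

    // Called at DUT speed: hand the next buffered frame to the emulated design.
    bool pop_to_dut(std::vector<uint8_t>& frame) {
        if (fifo_.empty()) return false;
        frame = std::move(fifo_.front());
        fifo_.pop_front();
        return true;
    }

private:
    size_t depth_;                               // buffer depth in frames
    std::deque<std::vector<uint8_t>> fifo_;
};
```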

Alternatively, an advanced mode of transaction-based acceleration (TBA) connects the DUT in the emulator to the host using a simulation acceleration channel. On the host, the system environment is now modeled not as a test bench but as an abstraction of the actual environment. We previously talked about how Samsung is doing this for PCI using our Accelerated Verification IP (AVIP). The upside of this use model is that protocol testing software for USB and Ethernet can be used, directed traffic can be created, and system environment IP like a camera or a USB memory stick can be modeled in its entirety. And it is absolutely datacenter friendly, as all access points can be virtualized. The downside is that speed is limited by the transfer rates of the acceleration channel. For example, we have seen situations in the networking space in which this acceleration with a virtualized system environment was more than 10,000x faster than pure RTL simulation, but the actual “real” physical connection was still 1,300x faster than that.
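The channel limitation is easy to see in a sketch: every host-emulator crossing pays a fixed overhead, so transactors batch transactions to amortize it, yet the channel still caps throughput in a way a physical connection does not. The Channel and Txn names below are hypothetical, used only to illustrate the pattern.

```cpp
// Sketch: why the acceleration channel limits transaction-based acceleration.
// Each host<->emulator crossing pays a fixed overhead, so transactors batch
// transactions to amortize it. "Channel" and "Txn" are hypothetical names.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Txn { uint64_t addr; uint32_t data; };

struct Channel {
    // One crossing of the acceleration channel; in a real set-up this would
    // be a bulk transfer to the emulator with a fixed per-crossing latency.
    void send(const std::vector<Txn>& batch) { (void)batch; }
};

class Transactor {
public:
    Transactor(Channel& ch, size_t batch_size) : ch_(ch), batch_size_(batch_size) {}

    void issue(const Txn& t) {
        pending_.push_back(t);
        if (pending_.size() >= batch_size_) flush();
    }

    // With a per-crossing overhead T_ch and per-transaction cost t_txn, the
    // effective rate is roughly batch / (T_ch + batch * t_txn): batching
    // amortizes T_ch, but the channel still caps throughput in a way a
    // direct physical connection does not.
    void flush() {
        if (!pending_.empty()) { ch_.send(pending_); pending_.clear(); }
    }

private:
    Channel& ch_;
    size_t batch_size_;
    std::vector<Txn> pending_;
};
```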

Finally, the entire system environment can be moved into the emulator using what we previously reported on as the Embedded Test Bench (ETB). Now the system environment is modeled as software that executes on a dedicated processor in the emulator. The advantage here is that speed goes back into the MHz range and that the system environment can be flexibly modeled in software, allowing, for example, different cameras to be tested against a CSI/DSI interface.
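As a sketch of what “flexibly modeled in software” can mean here: the same stimulus loop can drive the interface under test with interchangeable camera models. Everything below – the class names, the frame format, the commented-out CSI hook – is a hypothetical illustration, not the actual ETB programming model.

```cpp
// Sketch: the system environment as swappable software models. Two
// hypothetical camera models drive the same interface under test; none of
// these names reflect the actual ETB programming model.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Frame { int width; int height; std::vector<uint8_t> pixels; };

class CameraModel {
public:
    virtual ~CameraModel() = default;
    virtual Frame next_frame() = 0;              // produce one video frame
};

class VgaCamera : public CameraModel {           // 640x480, RGB565
public:
    Frame next_frame() override {
        Frame f{640, 480, std::vector<uint8_t>(640u * 480u * 2u)};
        for (size_t i = 0; i < f.pixels.size(); ++i)
            f.pixels[i] = static_cast<uint8_t>(i & 0xFF);  // simple test pattern
        return f;
    }
};

class HdCamera : public CameraModel {            // 1280x720, RGB565
public:
    Frame next_frame() override {
        Frame f{1280, 720, std::vector<uint8_t>(1280u * 720u * 2u)};
        // ... fill with a different pattern, timing, or sensor quirks ...
        return f;
    }
};

// The same stimulus loop exercises the CSI/DSI interface with either model,
// which is exactly the flexibility argument for a software environment.
void drive_interface(CameraModel& cam) {
    for (int n = 0; n < 3; ++n) {
        Frame f = cam.next_frame();
        (void)f;                                 // csi.send_frame(f); -- hypothetical
    }
}
```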

So, bottom line: Is there a “one size fits all” configuration? Absolutely not! Each of the different configurations serves different development needs; each is valid. So both virtualization and real hardware, despite the naysayers on cable connections, have their place and will continue to be used.


