
Innovating Virtualization In Emulation

The goal is to focus on the verification job, not how to implement the execution.


Last week we officially introduced our next-generation emulator. We used the words “datacenter” and “virtualization” a lot, and it is worth underlining the significance of what just happened in emulation. These new concepts are as important to emulation as the invention of virtual memory and memory management units was to processors and software development.

The concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at my alma mater, the Technische Universität Berlin, back in 1956. It led to dynamic address translation and memory management units, which were later introduced in computers like the VAX and in the Intel 80286 processor with its “protected mode”. It allowed memory areas to be assigned to tasks dynamically instead of hard-coding the actual memory addresses, making tasks more independent of where they execute.
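For readers who have not worked with it directly, the following toy sketch shows what dynamic address translation does: a task works with virtual addresses, and a lookup maps them to whichever physical frames the task happens to have been given. The page size, page-table contents, and addresses are arbitrary example values, not tied to any particular MMU.

```python
# Toy illustration of dynamic address translation (not any real MMU).
# A task uses virtual addresses; the page table maps each virtual page to the
# physical frame the task was assigned, so physical addresses are never hard-coded.
PAGE_SIZE = 4096

# Hypothetical page table for one task: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address: int) -> int:
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault at virtual address {virtual_address:#x}")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1010)))  # virtual page 1 maps to frame 3, so this prints 0x3010
```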

The end result is that when you start a task on your computer today, you don’t have to find it space in memory and processing time on a processor, define which memory regions it can use for computation, or assign which USB port it accesses, which disk regions it reads from, and which video output it uses for display. Your computer’s infrastructure simply does this for you. One of the guiding principles in our latest development effort was to give verification jobs a similar level of independence from the available resources.

And we achieved it in full, using four levels of virtualization.

Virtualization

First, the actual execution of tasks is completely virtualized. Where a task executes within the array of available processing domains is decided when it is scheduled, based on the emulator’s current utilization. Fine granularity is a must here, as I outlined in my last blog post, “Requirements for Datacenter Ready Emulation,” because it allows more tasks to run in parallel. Palladium Z1 also allows users to re-shape tasks to use the available processing slots in the system most efficiently. Remember how we used to defragment our hard drives? With Palladium Z1’s “job re-shaping” capability, we can work around fragmentation of tasks in the emulator.
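To make the scheduling and re-shaping idea concrete, here is a minimal sketch of placing a job into whatever processing domains are currently free. It is purely illustrative and is not the Palladium Z1 scheduler; the domain model, the capacities, and the widest-shape-first policy are assumptions made for the example.

```python
# Illustrative sketch only -- not the Palladium Z1 scheduler.
# A job can be "re-shaped" to span different numbers of processing domains,
# so it can still be placed when the machine is fragmented.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    capacity_needed: int        # total capacity the job requires (arbitrary units)
    allowed_shapes: list[int]   # domain counts the job can be re-shaped into

def schedule(job: Job, free_domains: list[int], domain_capacity: int):
    """Return the domain IDs to use, preferring the widest shape for parallelism."""
    for shape in sorted(job.allowed_shapes, reverse=True):
        if shape * domain_capacity >= job.capacity_needed and shape <= len(free_domains):
            return free_domains[:shape]
    return None  # the job has to wait until more domains free up

# A fragmented emulator: only domains 2, 5 and 7 are currently free.
placement = schedule(Job("soc_regression", 3, [4, 2]), [2, 5, 7], domain_capacity=2)
print(placement)  # -> [2, 5]: the job was re-shaped to span two domains
```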

Second, the external connections of the emulator can be virtualized in several ways. Both real and virtual connections are necessary, as I outlined a while ago in “When To Virtualize, When To Stay In The Real World.” The good news is that Palladium Z1 can do both. Actual connections to USB, Ethernet, PCIe, SATA, etc. can be positioned 30 meters away from the rack and can be assigned to jobs in the system flexibly at runtime, without having to switch connections manually. We call this “virtual target relocation.” In addition, based on the technology that is used for verification IP and has been made synthesizable in our Accelerated Verification IP (AVIP), the emulator can connect to virtual representations of the environment, like a virtual USB. Both are valid and necessary options for tasks such as driver development and throughput analysis.

Third, the memories in the user’s design to be emulated are completely virtualized. Users simply tell the compiler the size and configuration parameters of the on-chip memories, and our memory compiler takes care of the rest. And, just as in a computer, the appropriate resources can be chosen, of which we have plenty in the system, both centralized per processing domain and distributed across the system. When evaluating emulators, I encourage users to always check the memory-to-gate ratio.
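As a rough illustration of what “tell the compiler the size and configuration parameters” can mean in practice, here is a sketch of a compile step picking a backing resource purely from the declared parameters. It is not Cadence’s memory compiler; the resource names and the size threshold are invented for the example.

```python
# Illustrative sketch only -- not Cadence's memory compiler.
# The user declares each design memory's parameters; a compile step decides which
# physical emulator resource backs it. The threshold and resource names are invented.
from dataclasses import dataclass

@dataclass
class DesignMemory:
    name: str
    depth: int   # number of words
    width: int   # bits per word
    ports: int   # read/write ports

def place_memory(mem: DesignMemory, distributed_limit_bits: int = 64 * 1024) -> str:
    bits = mem.depth * mem.width
    if bits <= distributed_limit_bits and mem.ports <= 2:
        return f"{mem.name}: distributed memory close to the logic ({bits} bits)"
    return f"{mem.name}: centralized memory in the processing domain ({bits} bits)"

for m in (DesignMemory("fifo_buf", 512, 32, 2), DesignMemory("l2_cache", 262144, 128, 2)):
    print(place_memory(m))
```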

Finally, much like a virtual machine that lets you run Windows on your Mac at home, users want offline access to an executed verification job so they can look into snapshots of the hardware/software interaction at any point in time. The Virtual Verification Machine that we introduced with Palladium Z1 does exactly that. Users get an offline database that provides an accurate trace of what happened during the verification run. You can roll forward and backward, set trigger conditions, do hardware/software debug, and so on. The system behaves like the actual run on the emulator, but it is now accessible to large numbers of software and hardware designers and reflects the state accurately, including the memory.
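For a flavor of what offline access to a completed run can look like, here is a toy sketch of a recorded trace that an engineer can scrub through and search for trigger conditions without re-running the emulator. It only illustrates the concept; it is not the Virtual Verification Machine or its database format, and the signal names are made up.

```python
# Toy offline trace, for illustration only (not the Virtual Verification Machine).
# Each snapshot records signal values at one point in time; many engineers can
# move forward/backward through the recording and search for trigger conditions.
class OfflineTrace:
    def __init__(self, snapshots):
        self.snapshots = snapshots   # list of {signal name: value} dicts
        self.cursor = 0

    def forward(self, steps=1):
        self.cursor = min(self.cursor + steps, len(self.snapshots) - 1)
        return self.snapshots[self.cursor]

    def backward(self, steps=1):
        self.cursor = max(self.cursor - steps, 0)
        return self.snapshots[self.cursor]

    def find_trigger(self, condition):
        """Return the first time step at or after the cursor where the condition holds."""
        for t in range(self.cursor, len(self.snapshots)):
            if condition(self.snapshots[t]):
                self.cursor = t
                return t
        return None

# Hypothetical trace: find the step where software writes the UART data register.
trace = OfflineTrace([{"pc": 0x100, "uart_wr": 0}, {"pc": 0x104, "uart_wr": 1}])
print(trace.find_trigger(lambda s: s["uart_wr"] == 1))   # -> 1
```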

Let’s contrast this with a custom implementation in a set of FPGAs. The actual design logic is mapped into the FPGA logic. After some re-modeling of the code, that process is largely automated, but it fixes the design to the configuration of FPGAs it is mapped to. There is no re-shaping. Connections to external interfaces on an FPGA are hard-coded and made using plug-in boards, so there is no virtual target relocation. Memories have to be remapped: one downloads the FPGA’s datasheet, understands the FPGA’s memory, and re-writes the RTL code that accesses it. If the FPGA does not provide enough memory, one uses external memories connected by plug-in boards, re-writing the access code yet again.

Don’t get me wrong. There are clear advantages to doing this, higher speed and lower cost among them. That’s why FPGA-based prototypes are primarily used for software development, scaling out to lots of developers at lower cost and offsetting the cost and time to implement them. With our multi-fabric compilation, which uses the same front end as Palladium, we allow users to trade execution speed against faster bring-up, for which we have automated both the memory mapping (using the same approach as in Palladium, described above) and the partitioning/mapping. Still, once mapped, there is no re-shaping, no virtual target relocation infrastructure, and no tracing for full offline access. Users need both emulation and FPGA, and we give them options.

Back to emulation’s virtualization. The net result of all these improvements is that the user can concentrate on the verification job; worries about how to implement the actual execution go away because the execution is virtualized. We started some of these concepts with the introduction of Palladium XP in 2010 – we called it the “Verification Computing Platform” – and now, with Palladium Z1, emulation has become a fully virtualized verification compute resource that, together with its rack-based form factor, has arrived with datacenter-ready capability.


