Using a combination of simulation and emulation can be beneficial to an SoC design project, but it isn’t always easy.
As electronic products shift from hardware-centric to software-directed, design teams are relying increasingly on a simulation approach that includes multiple engines—and different ways to use those engines—to encompass as much of the system as possible.
How engineers go about using these approaches, and even how they define them, varies greatly from one company to the next. Sometimes it varies from one project to the next within the same company, and frequently it can blur the lines between what used to be discrete steps at the front and back end of the verification flow.
Still, there are a couple of common threads that run through most of these solutions. One involves a well-thought-out approach to defining and verifying a complex customized design, which has become an increasing challenge as more devices are connected and resources such as memory and processing are partitioned locally, in the cloud, or somewhere in between. A second involves a reality in many engineering organizations that there are never enough available resources to complete a complex project in the given time frame, so teams need to be somewhat creative in what they can utilize to get the job done. Many hybrid approaches involve a bit of both. But they do share a similar end goal—faster time to working silicon with fewer surprises at tapeout.
“A product may use off-the-shelf SoCs, and if there is a virtual platform that can represent that SoC, then the design team can take that and focus on what they are designing — the custom hardware,” said Mark Carey, product marketing manager at Mentor Graphics. “The virtual platform allows them to run embedded software, which enables them to use real software stimulus to test the custom hardware that they are making.”
As you might expect, best practices for how to make this work can vary greatly. “On the hardware-software partition of the system, it is vital to make sure the right features are implemented at the appropriate abstraction level,” said Rajesh Ramanujam, product marketing manager at NetSpeed Systems. “Being able to simulate all features is important to test the functionality and performance of the overall system. Hybrid approaches help in bringing all the worlds together and understanding dependencies and interoperability.”
These approaches also help design teams to understand the impact of third-party IP early on and throughout the design cycle. “These pieces are usually not available or not integrated until the later phase of the project,” Ramanujam said. “Hybrid approaches help in accounting for these IPs and analyzing the interaction of these system IPs.”
And in the IoT world, multiple systems can be interconnected to create an overall system. “Whether it is a chip at the client side interacting with one in the hub node, and all connected with a host node, simulating the overall system and analyzing the performance can be done with hybrid approaches,” he said.
All of this requires much deeper knowledge of the strengths of the individual components of this solution. Traditionally there have been good tools, tool chains, and flows available for hardware or software, Carey said. “It’s just recently most new architectures have some level of processing involved — multiple cores, perhaps different types of cores, and it’s getting the software and hardware people talking together early. They’re making decisions about where the functionality will be implemented. It may be that it’s not so easy to make a decision about whether a function needs to be implemented in the hardware, or whether the software can achieve that. Without having some kind of prototyping system available, it’s very difficult to guess that. You need to be able to make that decision as early on as possible because you can waste a lot of time and make the wrong decision if you start implementing things in RTL.”
This is where hybrid simulation/emulation and verification can really shine. “As design complexity increases in the mobile, networking and enterprise semiconductor markets, shrinking development time becomes increasingly critical,” said Barry Spotts, senior field applications engineer at ARM. “The ability to have software running and tested against RTL is paramount to success, as software must be ready by the time of RTL tapeout.”
Simon Davidmann, CEO of Imperas Software, views a hybrid approach as one where an emulator, typically modeling RTL, is hooked to a software simulation, typically modeling processors, along with behavioral components such as a virtual prototype. “Essentially, this is a hook-up between a software simulation of a platform and parts of that platform that are in RTL, along with other blocks.”
Frank Schirrmeister, senior group director, product management in the System & Verification Group at Cadence, pointed out there is some redefining of this terminology because hybrid used to be associated only with software development in the context of emulation.
“In a more general sense, it’s really like a hybrid car — you have two engines and they interact with each other. What these two engines do, and what the end use model is, depends on the intent,” he said. “Software development is one use model, with the key problem being that there are certain things that, if you run them all in hardware, will take too much time. There’s not much interaction at that point between hardware and software, so why not fast forward through it?” Here, Cadence quotes a 50X improvement for the emulation hybrid when connected to the ARM Fast Models.
Further, Schirrmeister pointed to other hybrid approaches, including those in which the DUT (device under test) describes the hardware itself. That code typically is written so it can be synthesized at the end, because once it is rid of all its defects, it is the thing you synthesize and hand over to the implementation team. “But in order to verify it, you often use code which is either C that is connected, or things like UVM testbenches that are connected, or it’s a different level of abstraction. Even outputs from Mathworks are used. You could call all of these hybrids, and like in the hybrid car, you’d simply have two different engines that are optimized for something specific.”
Hybrid approaches obviously reach into the verification space, he continued, given that the verification team may want to do specific things on the testbench side. Even though they won’t make it into the chip, this can be a key part of software development. This is where a full virtual platform is linked in, which runs software. “There are lots of shades in between where it is not a full virtual platform but it may be behavioral code, or things that come specifically from UVM, and other things, and you are more at the IP level. All of those are essentially hybrid approaches, where each engine has distinct advantages, and they interact with each other.”
As a technology, Davidmann believes this approach is very complex, very expensive, and a challenge to get working. “Technologists like this because they can waste a lot of their time trying to address this challenge, but if we step back and ask what the purpose is, the cynics come into play and say, ‘Hardware emulation has been around forever, it’s gotten tremendously expensive, and actually has limited usefulness and utility.’ That’s one side of it. On the software simulation virtual platform side, it is complex to build models and you can get tremendous speed, but it might not give you the accuracy that you might need if you are doing certain things.”
Adam Sherer, verification product management director at Cadence, explained that a hybrid approach comes in a couple of different forms—high performance models that run with simulation, and high performance models in the emulation space. “Generally speaking, what customers do with this and with most hybrid solutions is use the hybrid part (the high-performance modeling part) to push through software execution, because the intent is that you need to create the appropriate state in the system so that you can do hardware-software verification in the proper context.”
Design teams use these approaches to run longer, multi-scenario simulations with hybrid on hardware, which allows more cycles and more complexity. The two-state nature of hardware permits a certain type of functional simulation, Sherer noted. When the hybrid scenario instead uses a software simulator, full four-state simulation is available, although there is then a much bigger gap in speed between the model running the software and the simulator itself.
Speed vs. complexity
Done right, simulating the whole SoC as early as possible can provide a serious competitive advantage for companies.
“Hybrid simulation and emulation allow two key aspects to be covered,” said Zibi Zalewski, general manager of the Hardware Division at Aldec. “The obvious one is simulating the SoC itself, not separated sub-modules. Another one is testing of the software stack with hardware synchronized and connected into one environment. Software developers and hardware designers can verify each other on the working organism without waiting for the RTL code of the whole SoC. It wouldn’t be possible to test the drivers, for example, during simulation if not connected with a virtual platform. BFMs are not suitable for software operation and debug.”
Zalewski believes hybrids are the next evolutionary step for verification because of the increasing scale and complexity of designs. “The typical verification mode with teams focused on module-level simulation is no longer sufficient, especially when we start putting the puzzle pieces together to create the final system. The sooner we start testing the whole system, the earlier we know the problems and the lower project cost is going to be.”
Integration with virtual platforms provides one additional benefit, he said. It increases the quality and coverage of the tests, allowing simulation/emulation of cases not reachable even with complicated UVM testbenches, and making it possible to actually debug errors when they are detected.
In fact, Zalewski pointed to the immediate benefit of using hybrid simulation/emulation in IP-based designs where the processor subsystem and standard peripherals (like memory controllers, USB, Ethernet, Display Port/HDMI) come from the IP vendor. “This part of the design is usually available also as virtual models. Custom IPs designed in-house are connected with such sub-systems. Hybrid simulation/emulation means additional time savings as there is no need to create virtual models for the custom IP. They are emulated, which provides both excellent speed and RTL-level accuracy. Connection of virtual prototyping and emulators is straightforward via the system bus transactor, which also can be re-used if applying the SCE-MI standard. On the software side, SystemC/TLM wrappers are used.”
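The transactor connection Zalewski describes can be sketched in plain C++. This is a minimal illustration, not SystemC/TLM or SCE-MI code: the message format, the `BusTransactor` class, and the register-file stand-in for the emulated DUT are all invented here to show how untimed read/write calls from a virtual platform get forwarded as message-level transactions to the emulator side.

```cpp
#include <cstdint>
#include <map>

// Hypothetical message format, loosely in the spirit of an SCE-MI style
// message pipe: each untimed call from the virtual platform becomes one
// message the emulator side consumes.
struct BusMessage {
    bool     is_write;
    uint32_t addr;
    uint32_t data;   // payload for writes, response for reads
};

// Stand-in for the emulated RTL side: a simple memory-mapped register file.
class EmulatedDut {
public:
    void service(BusMessage& msg) {
        if (msg.is_write) regs_[msg.addr] = msg.data;
        else              msg.data = regs_.count(msg.addr) ? regs_[msg.addr] : 0;
    }
private:
    std::map<uint32_t, uint32_t> regs_;
};

// Bus transactor: presents blocking TLM-style read/write calls to the
// virtual platform and forwards each one as a message-level transaction.
class BusTransactor {
public:
    explicit BusTransactor(EmulatedDut& dut) : dut_(dut) {}
    void write(uint32_t addr, uint32_t data) {
        BusMessage m{true, addr, data};
        dut_.service(m);
    }
    uint32_t read(uint32_t addr) {
        BusMessage m{false, addr, 0};
        dut_.service(m);
        return m.data;
    }
private:
    EmulatedDut& dut_;
};
```

In a real flow the `service` call would cross the SCE-MI or vendor transport into the emulator; the point is that the virtual platform only ever sees the blocking read/write interface.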
He noted that the transaction level interface considerations are very similar to those used when creating a transaction-level testbench.
Unique design considerations
Given both the usefulness and complexity of a hybrid approach, however, there are special design considerations here.
Ramanujam said the key is to be able to define the IP at different abstraction levels, because it is no longer just a simple translation of product requirements into an RTL. “Design considerations have to be made in order to spec the IP at different levels. An example would be to have a good software addressable register map, to make the software application layer agnostic of the design level details by exposing the required set of features at a higher level. The IPs also have to be made available at various levels of tradeoffs between functional accuracy and simulation speed.”
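A software-addressable register map of the kind Ramanujam mentions can be sketched as follows. The register name, field positions, and widths here are entirely hypothetical; the point is that software manipulates named fields rather than bit positions, keeping the application layer agnostic of design-level details.

```cpp
#include <cstdint>

// Hypothetical control register: ENABLE at bit 0, MODE at bits [3:1].
// Software sets named fields; the bit-level layout is hidden behind
// the accessors, so RTL-side changes only touch this wrapper.
class CtrlReg {
public:
    void set_enable(bool on)  { raw_ = (raw_ & ~0x1u) | (on ? 1u : 0u); }
    void set_mode(uint32_t m) { raw_ = (raw_ & ~0xEu) | ((m & 0x7u) << 1); }
    bool     enable() const   { return (raw_ & 0x1u) != 0; }
    uint32_t mode()   const   { return (raw_ >> 1) & 0x7u; }
    uint32_t raw()    const   { return raw_; }   // value written to hardware
private:
    uint32_t raw_ = 0;
};
```

The same accessor interface can sit on top of a fast functional model or the cycle-accurate RTL register, which is what lets the IP be offered at different accuracy/speed tradeoff points.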
ARM’s Spotts said that beyond hardware emulation, there is also a push to use hybrid simulation with RTL embedded designs around microcontrollers. Because embedded design development in microcontrollers is heavily cost-constrained, a single prototype is desired for software development and assistance with hardware design. Here, a virtual subsystem of the CPU is connected to an RTL simulator with the rest of the system in Verilog.
“Hybrid simulation has benefits for the OEM RTL designer to test their system to ensure the portion of their design is compatible with the embedded processor software,” Spotts said. “The virtual subsystem is a wrapped module with a SystemC TLM interface, and a bus bridge is created to stimulate the RTL design with cycle timing added. Once the RTL design is finalized, the RTL portion is then synthesized to FPGA and hybrid simulation can then be used by software development. Software engineers generally prefer to put the full design onto a single FPGA, but the hybrid approach works best if the OEM does not own RTL IP for the processor, or if the complete design cannot fit into one FPGA. The connection between the FPGA and the virtual subsystem is usually interfaced with a C++ API to convert read and write bus transactions between the virtual subsystem and FPGA board technology.”
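The bus bridge Spotts describes, which stimulates the RTL with cycle timing added, can be illustrated with a small C++ sketch. The handshake protocol here (a two-cycle valid/addr/data write) is invented for illustration; a real bridge would follow the actual bus protocol of the design.

```cpp
#include <cstdint>
#include <vector>

// One clock cycle's worth of pin values driven onto a hypothetical RTL bus.
struct PinState {
    bool     valid;
    bool     write;
    uint32_t addr;
    uint32_t data;
};

// Bridge: expands a single untimed TLM-style write from the virtual
// subsystem into the cycle-by-cycle pin sequence the RTL simulator or
// FPGA samples. In this invented protocol a write takes an address
// phase, then a data phase, then the bus returns to idle.
std::vector<PinState> expand_write(uint32_t addr, uint32_t data) {
    return {
        {true,  true,  addr, 0},      // cycle 0: address phase
        {true,  true,  addr, data},   // cycle 1: data phase
        {false, false, 0,    0},      // cycle 2: idle
    };
}
```

The reverse direction works the same way: the bridge samples pins over several cycles and collapses them back into one read or write response for the virtual subsystem.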
While the definition of hybrid simulation/emulation may be evolving, when properly implemented it can add tremendous visibility into the design and verification of both the hardware and software of a system. How these approaches evolve over time as more electronics products are software-driven will emerge as demand dictates. But at least for the foreseeable future, expect discussions about exactly what this entails to continue.