Trends in System-Level Prototyping

Transaction-level modeling appears to be the way to go for architectural exploration and verification, but some standards are still missing.


By Clive Maxfield

One problem with electronics is that certain terms can mean different things, depending on who one is talking to at the time. Even worse, some terms have a tendency to evolve over time. This means that when we are presented with a topic like “Trends in System-Level Prototyping,” before leaping headfirst into the fray, it may be a good idea to first define exactly what we mean by terms like “system-level” and “prototyping”.

For the purpose of these discussions, we are targeting System-on-Chip (SoC) designs. Such designs involve both hardware and software, with subtle interactions between the two domains and increasing levels of interactions within the two domains. If we were to look back 10 years ago, most design teams largely performed hardware and software development independently, and the majority of hardware-software integration was performed post-silicon.

In those days of yore, the majority of software developers would have considered “system-level” to refer to In-Circuit Emulation (ICE), although a few of the more progressive developers may have leaned toward Instruction Set Simulation (ISS). Much of the increase in popularity of ISS over time has been driven by the fact that it is no longer possible to create an ICE for an SoC with multiple heterogeneous processors. By comparison, hardware design engineers would almost universally have considered “system-level” to refer to any portions of the design (and we’re talking only about hardware portions here) that were captured at a higher level of abstraction than synthesizable Register Transfer Level (RTL) representations.

But time has moved on, and these days “system-level” refers to the combination of the software (particularly firmware) and hardware components of the design, where the hardware may be represented at multiple levels of abstraction.

Generally speaking, a prototype refers to: “An original, full-scale, and usually working model of a new product or new version of an existing product.” In the context of a modern SoC, some folks would consider a system-level prototype to refer to a virtual, physical, or even hybrid representation of the hardware upon which the software can be executed, analyzed, and debugged. By comparison, others might regard a system-level prototype as comprising both the hardware and software representations.

Who is right? Who can say? The important point is to make sure that we’re all talking about the same thing before we put our hands in our pockets and start spending our hard-earned money. In reality, coming to one clear consensus as to what a system-level prototype actually is may not be possible anyway. The only real question is: “Whatever it is, does it provide me with the capabilities or value that I require?”

For the purpose of these discussions, let’s stick a stake in the ground and say that system-level prototyping is all about creating some model of the system that allows users to perform four main tasks:

  1. Front-end architectural exploration

  2. Front-end verification

  3. Implementation verification at speeds not possible with RTL models

  4. Early software development

In this case, the things the users will care about the most are:

  • How fast is it (can I run real-world test-cases)?

  • How accurate is it (does it do what the real hardware would do)?

  • How soon can I get it (with regard to starting a new SoC design)?

  • How much does it cost (there’s always someone who worries about fiddly little details like this)?

Transaction-Level Models (TLMs)

With regard to the points made in the previous section, it is immediately obvious that RTL representations of the hardware are not the way to go. If you have the RTL associated with the design, then your micro-architecture is pretty much locked down, which means that the time for front-end architectural exploration and verification is long past. Similarly, the time and effort required to create the RTL means that early software development would no longer be an option.

In order to address these issues, one trend with regard to virtual representations of the hardware portions of the system is the increasing use of Transaction-Level Models (TLMs). In this case, the actions of the system are defined as high-level transactions such as “initiate a memory read” or “trigger an interrupt,” as opposed to bit-twiddling RTL where the actions of every low-level signal have to be defined and simulated in excruciating detail. (In fact, some prototypes operate at even higher levels of abstraction, such as “transfer this frame of data to that processor,” where it is not even clear what bus operations would need to be performed in order to make this happen.)
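
To make this a little more concrete, the following is a minimal sketch of what an “initiate a memory read” transaction looks like in a loosely-timed SystemC TLM-2.0 model. This is only a sketch, assuming a standard SystemC/TLM-2.0 library installation; the module names (Initiator, SimpleMemory), the address, and the 10 ns delay are purely illustrative and are not taken from any particular product or design.

```cpp
// A minimal, loosely-timed SystemC TLM-2.0 sketch of an "initiate a memory
// read" transaction. Assumes a standard SystemC/TLM-2.0 library installation;
// the module names, address, and 10 ns delay are illustrative only.
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct SimpleMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    unsigned char mem[256];

    SC_CTOR(SimpleMemory) : socket("socket") {
        // Register the blocking transport callback that services transactions
        socket.register_b_transport(this, &SimpleMemory::b_transport);
        for (int i = 0; i < 256; ++i) mem[i] = static_cast<unsigned char>(i);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        // One function call models an entire bus transaction; no clocks or
        // individual signals are simulated
        sc_dt::uint64 addr = trans.get_address();
        if (trans.get_command() == tlm::TLM_READ_COMMAND)
            std::memcpy(trans.get_data_ptr(), &mem[addr], trans.get_data_length());
        else
            std::memcpy(&mem[addr], trans.get_data_ptr(), trans.get_data_length());
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // approximate timing
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct Initiator : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Initiator> socket;

    SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }

    void run() {
        unsigned char buf[4] = {0};
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_READ_COMMAND);  // "initiate a memory read"
        trans.set_address(0x10);
        trans.set_data_ptr(buf);
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);  // one call, one transaction
    }
};

int sc_main(int, char*[]) {
    Initiator init("init");
    SimpleMemory mem("mem");
    init.socket.bind(mem.socket);  // connect initiator to target
    sc_core::sc_start();
    return 0;
}
```

The thing to note is that the entire bus operation is a single function call carrying a payload object (no clocks, no handshake signals), which is the main reason such models simulate orders of magnitude faster than their RTL equivalents.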

So, the idea is that you commence your SoC design by creating TLMs for the various functional blocks. In many cases you will already have the TLMs associated with blocks you are reusing from a previous design (alternatively, RTL versions of these existing blocks can easily be mapped into FPGAs or hardware emulators, as discussed below). Also, you might still use an ISS representation for a large, complex block like a CPU or DSP core (processor models such as ISSs used to be slow 10 years ago, but these days they can run at speeds ranging from around 1 MIPS to speeds in excess of real time, depending on the level of accuracy required).

TLMs simulate much faster than their RTL equivalents, so a TLM-based prototype can be used for early architectural evaluation and verification. The TLM-based prototype can be regarded as being an executable specification and a “golden model” against which the RTL blocks can be verified as they are created (this would not be true if high-level synthesis were being used, but that’s another story).

Hardware-Assisted Verification (HAV)

A TLM-based prototype can certainly be used to verify hardware-software interfaces, but it is unlikely to be fast enough to support full-up software development (there are system-level prototypes that can run fast enough, but they may not be accurate enough for all tasks). In order to address this, we move to Hardware-Assisted Verification (HAV), of which there are three main forms: hardware acceleration, hardware emulation, and FPGA-based prototypes.

Each technique has its “pros and cons.” “Big-box” accelerators and emulators can support mega-designs of 100 million gates or more. The dedicated architectures of their custom chips allow designers to quickly map the design-under-test onto the hardware, and they provide special debug capabilities and high levels of visibility into the design. On the downside, they can be very, VERY expensive, and they don’t offer the extreme speeds of FPGA-based prototypes.

As a rule of thumb, if we consider an RTL representation that takes 10 days to run on a software simulator, the equivalent design would run in 24 hours on a hardware accelerator, 15 minutes on a hardware emulator, and 10 seconds on an FPGA-based prototype. In addition to their high speed, FPGA-based prototypes are relatively low-cost, but they also typically offer limited visibility into the design (there are ways around this like Total Recall technology from Synopsys, but that’s another discussion for yet another day).

Who are all the players?

The three main players in this market are … not surprisingly … Mentor, Synopsys and Cadence. Each of these companies fields incredibly powerful solutions. There are also many other smaller players who have pieces of the prototypes being talked about here, but not such complete solutions.

Suffice it to say that Mentor and Cadence both have extremely sophisticated TLM modeling, simulation, and verification capabilities. Also, each has a powerful TLM-to-RTL route: Mentor with its Catapult-C, Cadence with its C-to-Silicon. Furthermore, both Mentor and Cadence field “big-box” emulators (Veloce from Mentor and Palladium from Cadence), whose total gate complexities and number of parallel processors are enough to make my head spin.

[Figure: Watch that spike – Mentor’s TLM]

Meanwhile, Synopsys has a somewhat different take on things. As opposed to emulation, the main thrust at Synopsys is FPGA-based Rapid Prototyping coupled with virtual prototypes. The idea here is that the software content of modern designs is increasing exponentially and is becoming a major bottleneck. Thus, once the SoC architecture has been tied down, the solution (according to Synopsys) is to get tens or hundreds of virtual prototypes or FPGA-based prototypes out into the hands of the software developers as quickly as possible.

[Figure: Synopsys’ prototyping model]

Summary

So, in conclusion, what can we say with regard to trends in system-level prototyping? Well, one trend that no one can deny is the increasing use of TLMs for architectural exploration and verification. In addition to raw functionality, there’s also a trend for today’s TLMs to support the analysis of performance and power consumption.

TLMs sound wonderful when someone is jumping up and down and waving their arms around in the air (most things do), but there is a “fly in the soup” as some might say. SystemC is the modeling language du jour for creating TLMs … or at least, for creating the “wrappers” that are presented to the rest of the system. The problem is that although SystemC allows modelers to create communications channels and data types at high levels of abstraction, there is no comprehensive standard for these definitions, which makes it problematic to mix-and-match TLMs from multiple vendors.

The TLM 2.0 standard is a step in the right direction, but it’s far from being all-embracing because it focuses only on memory-mapped interfaces. Having said this, although the combination of SystemC and TLM 2.0 may not be as ideal as the proprietary solutions some companies used to maintain internally, almost everyone accepts that this is the way to go with respect to “talking” to models from other vendors.
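
To see why TLM 2.0 is so closely tied to memory-mapped interfaces, it helps to look at its generic payload. The short sketch below (the function name build_read and the address value are purely hypothetical) populates a payload for a four-byte read; note that every attribute describes a command, an address, or a buffer of data bytes, which maps naturally onto a bus transaction but not onto, say, a streaming interface.

```cpp
// Illustrative only: populating a TLM-2.0 generic payload for a 4-byte read.
// Every attribute describes a memory-mapped bus access (a command, an address,
// a data buffer), which is why TLM 2.0 has little to say about other kinds of
// interfaces. The function name and address are hypothetical.
#include <tlm>

void build_read(tlm::tlm_generic_payload& gp, unsigned char* buf) {
    gp.set_command(tlm::TLM_READ_COMMAND);   // read, write, or ignore
    gp.set_address(0x4000);                  // target bus address
    gp.set_data_ptr(buf);                    // where the read data will land
    gp.set_data_length(4);                   // transfer length in bytes
    gp.set_streaming_width(4);               // equal to length, so no streaming
    gp.set_byte_enable_ptr(nullptr);         // no per-byte lane enables
    gp.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
}
```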

When it comes to hardware-assisted verification, the trend is for everyone to start doing it (if they aren’t doing so already). The main thing here is that there is no “one size fits all” solution; some folks prefer hardware acceleration and/or emulation, others opt for FPGA-based prototypes, and a third group loves the flexibility that comes from the fully virtual prototypes. In fact, some companies are using all of these techniques, perhaps using different solutions for different designs, or even using different solutions for the same design at different stages in the flow.

The good news is that there are now many system-level prototyping tools and techniques available to the designers of today’s SoCs, and their power and capabilities are increasing on a daily basis.


