Emulating Systems Of Systems

Verifying entire systems will require not just mixed abstraction levels, but also organizational changes.


System design is all the rage these days. I have been in notably more discussions recently about how one can verify systems of systems. Does an airplane or a car lend itself to an array of emulators? Are multiple abstractions needed? How can design teams span electrical, mechanical, and thermal—as well as analog and digital—effects? Do companies need to re-organize to deal with system design? There is no easy answer, but here are some aspects for you to consider.

First, what is a “system”?
Design complexity grows so fast that whatever we are working on right now always seems to sit at, or just above, the limit of what can be handled. But then again, a system on a chip (SoC)—a very complex system in itself—becomes just a component in something bigger. The processor cores that power an application processor are already very complex; the cores become components of the compute sub-system, which becomes a component of the SoC that integrates the compute sub-system with the modem. The SoC, again, is just a component of the actual cell phone, which is a component within the wireless network… You get the picture. In a car or plane, each electronic control unit (ECU) is made of components and is also a component within the network that is the car or plane. To make things more challenging, engineers must consider whole-system factors, such as thermal effects and interaction with mechanical actuators.
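The nesting described above—each "system" becoming a component of the next one up—can be sketched as a simple component tree. This is a toy illustration with hypothetical names, not a model of any real design:

```python
# Toy sketch (hypothetical names): a "system" is just a component that
# contains other components, so complexity nests recursively.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    children: list = field(default_factory=list)

    def leaf_count(self) -> int:
        # A leaf is a component with no sub-components of its own.
        if not self.children:
            return 1
        return sum(c.leaf_count() for c in self.children)

# Rough hierarchy from the text: core -> compute sub-system -> SoC -> phone -> network.
core = Component("CPU core")
compute = Component("compute sub-system", [core, Component("GPU"), Component("cache")])
soc = Component("SoC", [compute, Component("modem")])
phone = Component("cell phone", [soc, Component("display"), Component("battery")])
network = Component("wireless network", [phone, Component("base station")])

# Each level's "system" is just one child of the level above it.
print(network.leaf_count())
```

Every level of the tree is simultaneously a complete system and a single component of its parent, which is exactly what makes drawing a verification boundary so hard.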

So how do we verify such a complex system? How do we develop software early and assess performance aspects?

While an array of emulators to execute a full car or airplane is intriguing, it is probably not realistic. Just having all the components available for emulation at the right time is a challenge: some components will already be available in silicon, some will still be conceptual, and others may be available as RTL for emulation. I have seen various approaches to the problem, ranging from pure virtual models, to virtual models hooked up to ECU prototypes, to pure networks of hardware prototypes, to a combination of all of the above. Even just recently, when I visited a Tier 1 automotive vendor, a car with several boxes in the trunk pulled up next to me in the parking lot. The car was clearly road-testing variables that weren’t testable with virtual models alone and required “real world” stimulus.

[Illustration: Emulating Systems of Systems]

A full “executable specification” of the system and all its components is as desirable as it is unrealistic. The alternative is proper divide and conquer—abstracting away aspects that do not directly impact the component under development. The illustration above shows something our customers are doing today: combining several nodes of a system at different abstraction levels, mixing real and virtual nodes. This is a natural extension of what we call “hybrid emulation” (see a recent round-table discussion here), in which the processor sub-system of an SoC is abstracted into a virtual platform at the transaction level.

In the example above, some nodes of the system are emulated and connected via rate adapters to other nodes that are virtualized, running as software on a host. It is a true mixed-abstraction setup, and it allows verification and validation of the components that are emulated. Some of our customers are already integrating several chips within our emulator; this is one of the reasons that Palladium Z1 scales to 9.2B gates. One of the challenges here is the representation of the analog/mixed-signal interfaces.
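As a rough sketch of the rate-adapter idea—hypothetical interfaces, not an actual emulator API—a buffer decouples a fast virtual node from a slower emulated node so each side can advance at its own rate while transactions cross the boundary in order:

```python
# Minimal sketch (hypothetical names): a rate adapter buffers transactions
# between a fast virtual node and a slower emulated node.
from collections import deque

class RateAdapter:
    def __init__(self):
        self.queue = deque()

    def push(self, txn):
        # Called by the fast (virtual) side, once per fast cycle.
        self.queue.append(txn)

    def pop_batch(self, n):
        # Called by the slow (emulated) side; drains up to n buffered transactions.
        batch = []
        while self.queue and len(batch) < n:
            batch.append(self.queue.popleft())
        return batch

adapter = RateAdapter()
delivered = []

# Toy 4:1 clock ratio: the virtual node runs four steps for every emulated step.
for cycle in range(8):
    adapter.push(f"txn-{cycle}")      # virtual side: one transaction per fast cycle
    if cycle % 4 == 3:                # emulated side wakes up every fourth cycle
        delivered.extend(adapter.pop_batch(4))

print(len(delivered))
```

The point of the sketch is only that neither side blocks the other cycle-by-cycle; real hybrid setups add flow control, time synchronization, and transaction-level protocol conversion on top of this.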

Stay tuned. Interesting technologies are coming up here shortly.

What does all this mean to the design process and team structures?
Get ready for a new world of flatter organizations. In a recent discussion I had with Serge Leef, Siemens’ VP of New Ventures, he used the “business card line” that I have been using for hardware and software design teams for years, but in the context of thermal and mechanical aspects. He stated that the real issue is not technology, but organization. Bringing together the experts from different disciplines within a company to deal with thermal aspects across chip design, packaging, and PCB integration is hard, and when you have them in a room, “they will introduce themselves with business cards.” I have experienced the same situation many times: when hardware and software teams come together in the same room for a discussion, business cards often come out.

The challenge in executing a full car (a complete automotive system) may be an organizational one. The divide-and-conquer model in organizations makes it really hard for all the teams involved to use the same tooling to set up an environment like the one shown above. For some, a virtual representation may be perfectly natural; for others, only the actual hardware will be trusted—and even emulation or an FPGA-based prototype is considered “too abstract.”

And then there is, of course, what I postulated a while back as Schirrmeister’s Law: the likelihood that a new technology will be adopted by a project team is inversely proportional to the number of changes you ask the project team to make. For the verification of “systems of systems,” changes in the design flow must happen across multiple design teams that aren’t organizationally well connected. That elevates the challenge even further.
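Purely for illustration—the “law” written out as stated, with an arbitrary proportionality constant of my own choosing:

```python
# Illustrative only: Schirrmeister's Law as stated in the text,
# with an arbitrary constant k (not derived from any data).
def adoption_likelihood(num_changes: int, k: float = 1.0) -> float:
    # Inversely proportional to the number of changes asked of the team.
    return k / num_changes

# One requested change beats five, whatever k happens to be.
print(adoption_likelihood(1) > adoption_likelihood(5))
```

The arithmetic is trivial, but it captures the point: every additional flow change a methodology demands divides its odds of adoption.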

Aren’t we having all the fun in EDA? There is lots of growth and new technology to come as we shift to systems of systems.

I’d love to hear from you! What do you think?