System-Level Models Redefined

With the issue of system-level model reusability still hotly debated, the use case scenario for models is evolving with increasing design complexity.


By Ann Steffora Mutschler
It wasn’t that long ago that the promise of system-level models was an easy implementation path and the ability to then reuse the models in a different design, for a different target application. But how reusable are those models in reality? The answer depends on whom you ask.

First, it is important to define what a system-level model is, noted Frank Schirrmeister, group director of product marketing for system development in the system and software realization group at Cadence. "If a system-level model is defined as a TLM model, or something even higher, then by virtue of its abstraction it's actually re-usable by definition, so to speak. I always compare this to the gate-to-RTL jump. Is the RTL model re-usable? Absolutely, because we have automation underneath to remap it to several technologies. Is the TLM model (the system-level model that is, say, your high-level synthesis input) re-usable? Absolutely. It's re-usable for that particular implementation, and then you have the automation around it to actually get the implementation done."
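Schirrmeister's point is that a TLM model talks in transactions (function calls) rather than signal-level pin activity, which is what makes it portable across implementations. The following is a minimal illustrative sketch of that idea in Python; the class and method names are invented for illustration and do not correspond to SystemC or any vendor's actual API.

```python
# Minimal transaction-level modeling (TLM) sketch: the memory target is
# exercised through read/write transactions (function calls), not RTL
# signal-level activity. Names are illustrative, not a real API.

class TLMMemory:
    """Word-addressed memory modeled at transaction granularity."""
    def __init__(self, size_words):
        self.mem = [0] * size_words

    def write(self, addr, data):
        self.mem[addr] = data          # one call = one bus transaction

    def read(self, addr):
        return self.mem[addr]


class Initiator:
    """Any block model (CPU, DMA, ...) can reuse the same target,
    because it depends only on the transaction interface."""
    def __init__(self, target):
        self.target = target

    def run(self):
        self.target.write(0, 42)
        return self.target.read(0)


mem = TLMMemory(1024)
cpu = Initiator(mem)
result = cpu.run()
```

Because the initiator depends only on the `read`/`write` interface, the same memory model can be dropped into a different design unchanged, which is the reuse-by-abstraction argument in the quote above.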

Further, are these models reusable in general terms the higher up one goes in abstraction? From his perspective they are: reusable across different applications and different designs. Otherwise it wouldn't be commercially feasible for system-level houses or EDA vendors to provide them, he argued.

However, Schirrmeister pointed out, “You need to be precise about what you re-use them for. If you go up from the RTL to the TLM level first, then these models are re-usable for sure when it comes to processor models because they are re-usable for every design that uses the processor model.”

But not so fast, said Drew Wingard, co-founder and chief technology officer at Sonics. “The place where the system models have the bigger challenge is in trying to imagine when I integrate these things together, how is it going to perform? And there we have some challenges.”

The challenges boil down to the fact that for most of these applications, the cost mandate requires that the cheapest possible DRAM system be used with the SoC. An SoC maker may want to sell its chip for $10 in a system where the DRAM costs approximately $8, but if the architecture demands more expensive DRAM, the memory bill could climb to $12. At that point the end customer may say they are still willing to buy the SoC, but only to pay $7 for it.
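The arithmetic behind that squeeze, using the article's own figures, can be laid out explicitly:

```python
# Bill-of-materials arithmetic using the figures from the article.
# A $4 increase on the DRAM side comes back as pressure on the SoC price.

soc_ask, soc_bid = 10, 7          # SoC price: asked vs. what customer will now pay ($)
dram_before, dram_after = 8, 12   # DRAM cost: cheapest vs. faster part ($)

dram_increase = dram_after - dram_before   # extra spent on DRAM
soc_margin_lost = soc_ask - soc_bid        # price cut forced onto the SoC maker
customer_absorbs = dram_increase - soc_margin_lost  # remainder the customer eats
```

In other words, most of the $4 DRAM premium is clawed back from the SoC vendor, which is why predicting whether the cheapest DRAM will actually meet performance targets matters so much at architecture time.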

“The real challenge of modeling the performance of DRAM with enough accuracy to predict is not the bailiwick of most of the system level modeling initiatives. The virtual platform models don’t give you any real concept of performance and certainly nothing near detailed enough,” he explained.
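Wingard's complaint can be made concrete with a toy contrast between a flat-latency memory model (what a loosely-timed virtual platform effectively assumes) and one that at least tracks the DRAM's open row. The timings and row size below are invented round numbers for illustration, not from any datasheet.

```python
# Toy contrast: flat-latency memory model vs. a model that tracks the
# currently open DRAM row. Timings are invented, illustrative cycle counts.

ROW_HIT, ROW_MISS, FLAT = 10, 40, 20   # cycles per access (illustrative)

def flat_latency(accesses):
    """Loosely-timed view: every access costs the same."""
    return FLAT * len(accesses)

def row_aware_latency(accesses, row_bits=10):
    """Charge less when an access hits the already-open row."""
    open_row, total = None, 0
    for addr in accesses:
        row = addr >> row_bits          # which 1KB row this address falls in
        total += ROW_HIT if row == open_row else ROW_MISS
        open_row = row
    return total

sequential = list(range(0, 64 * 4, 4))      # 64 accesses streaming one row
strided = list(range(0, 64 * 1024, 1024))   # 64 accesses, new row every time
```

The flat model prices both traffic patterns identically, while the row-aware model shows the strided pattern costing several times more cycles. Real DRAM controllers (banking, refresh, scheduling) are far more complex still, which is the gap Wingard is pointing at.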

These cost pressures, combined with design complexity, are changing perspectives on how models should be used.

“The notion of having a seamless path from having a high-level model and synthesizing it to maybe VHDL/Verilog model and onto hardware—I don’t see it happening. It might still be an industry dream of a couple of people and it would help a lot. It would help proliferate virtual prototyping a lot and it would help proliferate platform architecture design a lot, but it just doesn’t seem feasible to really have that seamless flow,” asserted Tom De Schutter, senior product marketing manager for system-level solutions at Synopsys.

Rather than from an implementation point of view, he believes re-use of models today is defined more from a use-case point of view.

Besides developing the use case of creating testbenches from models, De Schutter explained that work is being done on how models can be re-used across different types of use cases and for different types of software developers, whether that is OS porting or middleware development, as well as for verification of IP blocks or analysis from a software performance and energy point of view.

“Because the software is becoming so important, maybe it’s not that important that the model has an implementation path, as long as it provides value across the lifecycle of the different stages of software and the different types of software. The value of the model establishes itself, as well,” he said.

Toward this end of making models re-usable across different use-case scenarios, it comes down to defining what a model has to do to be useful for each of those use cases.

“In a lot of cases, the way we as an industry—customers and vendors—looked at it, there was always a notion that you need a lot of accuracy, you need a lot of timing for models to be useful and, of course, the more complex systems become the more that breaks,” De Schutter continued. This becomes clear particularly with the latest approaches to processor design such as ARM’s big.LITTLE approach. “If you look into those systems, just that specific subsystem has up to eight processors, and that’s not taking into account the rest of the system where the baseband, Bluetooth, WiFi, the power management system—everything has cores. So it’s becoming very hard to have very accurate models, and accuracy then defined as timing accuracy, and simulate them in a reasonable simulation speed.”

De Schutter said the current thinking is that the software itself doesn’t need timing accuracy or cycle accuracy to be developed or even optimized. “Again, looking at it from a different point of view rather than from a hardware point of view and an architecture design or an implementation point of view, we are starting to more and more look at it from a software point of view. How can this software help optimize the system? ARM big.LITTLE is actually a perfect example of this,” he added.
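The big.LITTLE example is apt because the software-visible decision (which core class should run the work) depends on coarse load estimates, not cycle-accurate timing. A minimal sketch of such a migration policy, with invented thresholds and hysteresis purely for illustration:

```python
# Illustrative big.LITTLE-style migration policy. Thresholds and the
# hysteresis scheme are invented for this sketch; real schedulers
# (e.g., in the OS kernel) are far more involved.

def choose_core(load, on_big, up=0.8, down=0.3):
    """Return True for the big core, False for the LITTLE core.
    Hysteresis avoids thrashing between the two on noisy load."""
    if not on_big and load > up:
        return True        # load spiked: migrate to the big core
    if on_big and load < down:
        return False       # load collapsed: fall back to LITTLE
    return on_big          # otherwise stay put

state = False              # start on the LITTLE core
trace = []
for load in [0.1, 0.5, 0.9, 0.95, 0.6, 0.2]:
    state = choose_core(load, state)
    trace.append(state)
```

Nothing in this decision loop needs cycle accuracy; a loosely-timed model that reports approximate per-core load is enough to develop and tune it, which is De Schutter's point.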

In closing, Wingard offered harsh criticism of approaches promoted by some vendors today. He believes the models some companies provide to their customers are not accurate enough for architectural sign-off.

“While they might help the designer try to get to an intermediate design point, they’re still forcing the development team to go to an emulator to prove whether the architecture is viable or not. They have this additional problem that even if it works on the emulator, it doesn’t mean it will work on the layout of the chip…so that when they get into layout the floorplan changes associated with dealing with the actual layout constraints end up rippling back to their architecture and creating additional substantial performance problems that need to be re-architected. That generates an additional round of problems that basically force the customers to tape out sub-optimal solutions,” he concluded.