Making Modeling Less Unpleasant

Taking medicine has never been enjoyable, but the benefits are usually worth it. When that medicine is modeling, the industry is still trying to improve the flavor.


How many times did your mother tell you to take your medicine? You knew two things: a) it would be unpleasant, and b) it would be worth the few seconds of unpleasantness because of the benefits it would provide. It appears the electronics industry has the same issue with modeling. We talk about the benefits a system-level model would bring — the ability to explore system architectures, evaluate performance and throughput, reduce power consumption, enable hardware/software tradeoffs and provide an integration platform. The list goes on, and almost nobody denies the benefits, but few companies are willing to pay the price.

It was almost 20 years ago that a European systems house concluded an experimental program to do a design starting from a system model. Their conclusion was that it produced a better design in every aspect they measured: it got them to a working design faster and with higher quality. Moreover, they could not find a single fault with the flow, even though there was little tool support for it at the time. When asked if they would be rolling that flow out into production, the answer was an emphatic no. They stated that the time it would have taken to produce the models would have negated all of the gains.

How far have we progressed in 20 years? Two years ago Intel bought a small French company, CoFluent, which created tools for modeling virtual platforms. CoFluent Studio is an embedded system modeling and simulation toolset that supports model-driven architecture concepts and Eclipse Modeling Framework technology. Intel continues to develop and sell that tool (yes, that makes Intel an EDA supplier), not through its Wind River division, but directly from the Intel Web site.

“Models are [still] the main barrier,” said Ghislain Kaiser, CEO and co-founder of Docea Power. “Customers don’t want to spend time creating models.” And yet he is still willing to bet his company on the belief that this will change in the near future.

But it is not just modeling for system-level development. It is a pervasive problem for all aspects of the design process. The analog engineers are having the same problems. “But there is a gap, an often very wide gap,” Mike Jensen, technical marketing engineer at Mentor Graphics, wrote in a recent blog, “between wanting to simulate a design and having a model that tells an accurate story for your system.”

And that is a key part of the problem. Most engineers are committed to accuracy, even if it means sacrificing speed, and that can produce models that are sub-par. “There is a significant change from the past where the hardware team used to be responsible for creating these models,” explains Tom De Schutter, senior product marketing manager at Synopsys. He recalled a customer who recently explained the dynamics of a typical team. “If you use a hardware developer to create a model that will eventually be used for software, they tend to have models that are too accurate and too slow. They want to do a translation from RTL to SystemC and this is not the way to go. Instead, they are now hiring embedded software engineers who have a sense of the underlying hardware, and what the software people actually need from the model. This is easier than getting a hardware guy to understand what the software people need.”
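De Schutter’s contrast between an RTL-style translation and a software-oriented model can be sketched in plain C++. The timer peripheral below is hypothetical and not from any real tool: the point is only that a cycle-by-cycle model pays simulation cost for every tick, while a functional model jumps straight to the state the software will observe.

```cpp
#include <cstdint>

// Hypothetical timer peripheral, modeled two ways.

// Hardware-style model: advances one clock tick at a time, the way a
// direct RTL-to-SystemC translation would. The simulator pays for every tick.
struct CycleTimer {
    uint32_t count = 0;
    void tick() { ++count; }                  // called once per simulated cycle
    void run(uint64_t cycles) {
        for (uint64_t i = 0; i < cycles; ++i) tick();
    }
};

// Software-oriented functional model: computes the state the driver will
// read in one step. Far faster, and accurate enough for software bring-up.
struct FunctionalTimer {
    uint32_t count = 0;
    void run(uint64_t cycles) { count += static_cast<uint32_t>(cycles); }
};
```

Both models return the same register value to the software; only the cost of getting there differs, which is exactly the tradeoff a software-facing modeler is hired to make.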

It is now becoming commonplace for IP blocks to be shipped with a variety of models, but the value chain is still having some difficulties. Warren Savage, CEO of IPextreme, says he is being asked to supply SystemC models by some of his customers. He asks them whether they’re willing to pay extra for those models, and the response is typically no, because they believe that should be part of the price. “We’re still in the space where the value proposition for people to be willing to pay extra money to feed the development side of things is not yet there,” explains Savage. “It’s coming, but it’s a slow slog.”

What’s confusing is that there are several abstractions at which these models could be provided, ranging from untimed to cycle-accurate, with several options in between. No supplier can be expected to produce all of them, but to create a complete model of the system you need all models to be at the same level of abstraction, or you finish up with the worst of both worlds: slow execution and the lowest level of accuracy.
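As a rough sketch of that “worst of both worlds” argument, suppose a mixed-abstraction system simulates no faster than its most detailed block, while its end-to-end timing fidelity is capped by its least detailed block. In hypothetical C++ (the enum and composition rule are assumptions for illustration, not any vendor’s API):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical abstraction ladder, ordered from least to most detailed.
enum Abstraction { UNTIMED, LOOSELY_TIMED, APPROXIMATELY_TIMED, CYCLE_ACCURATE };

// Simulation speed is set by the most detailed block in the system:
// more detail means more events per simulated second.
Abstraction simulationSpeedLimiter(const std::vector<Abstraction>& blocks) {
    return *std::max_element(blocks.begin(), blocks.end());
}

// End-to-end accuracy is capped by the least detailed block: a cycle-accurate
// core behind an untimed bus model yields untimed system-level numbers.
Abstraction accuracyLimiter(const std::vector<Abstraction>& blocks) {
    return *std::min_element(blocks.begin(), blocks.end());
}
```

Mixing an untimed block with a cycle-accurate one therefore buys cycle-accurate simulation cost but only untimed system-level confidence.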

“The ideal is to get to a system in which there is one big knob—the big knob that says the level of accuracy for each block,” says Chris Rowen, Cadence fellow and CTO at Tensilica. “Given that it’s usually performance versus level of detail, people can systematically work from the software scenarios down through these cycle-by-cycle scenarios without switching tools or methodologies. This is a useful by-product of some of the consolidation that has taken place. It is likely to be a barrier for some of the small shops if they have to have this range of models. But this seamless model that scales from being fast enough, so somebody can boot an operating system, to accurate enough, so that they can figure out the picosecond-by-picosecond power, is ideally what you want.”

“The path forward is probably a collaboration between the semiconductor companies and the EDA industry,” predicts De Schutter. “We have seen the start of a supply chain forming where a semiconductor company does supply a virtual prototype (VP) to the customers of their chips. This is especially true in the automotive industry where [the customer] is the primary user of the VPs. The semi companies do not do much software development. This is done by the tier one companies.”

But other industries, such as mobile, have concerns about leaking too much information about the chip. A VP is a representation of the entire SoC, something they may want to shield. They don’t want competitors knowing, for example, how many processors will be in their next-generation system, whether it uses a big.LITTLE configuration, or how it applies DVFS.

It would appear as if the industry still has a way to go before this situation is resolved, but the situation is improving, even if very slowly.


David Black says:

It is interesting to note that with all the standardization, one glaring hole exists. There is no standard, programmatic way of determining whether a model component is “fast” or “accurate”. Much of this information is not disclosed in marketing materials either. Although TLM 2.0 modeling allows for easy “plug-n-play” connectivity, models may not have all the features (performance, speed, accuracy) that a customer might expect. For instance, how do you know if an instruction set simulator models pipeline and caching effects? Or how would I know that a peripheral model is running in the “loosely-timed” (i.e., “as fast as possible for software development”) mode versus a more accurate bus-timing mode? Do customers appreciate the difference between cycle-accurate, cycle-approximate and transaction-accurate behaviors?
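The programmatic gap described here could, in principle, be closed with a self-description interface. The sketch below is hypothetical C++; nothing like it is standardized in TLM 2.0 (that absence is the point), and every name is invented. Each model would report what it actually simulates, so an integrator can check assumptions in code rather than in marketing materials.

```cpp
// Hypothetical timing styles a model might declare.
enum class TimingStyle { LooselyTimed, ApproximatelyTimed, CycleAccurate };

// What a component claims to simulate.
struct ModelCapabilities {
    TimingStyle timing;
    bool modelsPipeline;   // e.g., for an instruction set simulator
    bool modelsCaches;
};

// Interface every model component would implement.
class IntrospectableModel {
public:
    virtual ~IntrospectableModel() = default;
    virtual ModelCapabilities capabilities() const = 0;
};

// Example: a fast ISS that honestly reports it skips microarchitecture.
class FastIss : public IntrospectableModel {
public:
    ModelCapabilities capabilities() const override {
        return {TimingStyle::LooselyTimed, /*pipeline*/ false, /*caches*/ false};
    }
};
```

With such an interface, an integration script could refuse to mix a loosely-timed ISS into a run that needs cycle-accurate power numbers, instead of discovering the mismatch in the results.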

Of course, part of the problem is that vendors are too scared to share this information for fear their competition might use it against them. By omitting the information they hope to win sales. Due to licensing restrictions, honest vendor-neutral comparisons are impossible. Due to schedules, most customers really don’t have time to do real comparisons; they are forced to choose blindly.

Customers are, by and large, not aware of the issues, or find out too late, and then unfortunately blame the modeling technologies rather than the vendors responsible. Savvy, mature customers would of course demand the answers, and savvy, mature vendors would provide them.
