The Trouble With Models

As the usage scenarios for models get worked out, it isn't the technology that's holding up adoption. It's all about the money.

By Ann Steffora Mutschler
Models and modeling concepts seem to be on the tip of every tongue these days. Once touted as the spark for true ESL design, system-level models have settled into a more modest role: enabling software development.

There is also talk of leveraging models across the supply chain, but is this really possible yet? The concept of incrementally refining models and re-using them across multiple use cases is starting to gain a lot of interest.

“People have realized that having a flow from a model to something implementable doesn’t seem to work,” said Tom De Schutter, senior product marketing manager at Synopsys. “If you look also at the predictions that happened five, six years ago that high level synthesis would be the thing that would really launch ESL—people have really settled down from that now with the realization that creating a model with a long lifecycle across different software use cases—even to verification of the actual RTL later on—is creating a new boost for model development. That’s true for the IP vendors, who are now all creating models, and for customers, who are realizing that this is not a one-time, throwaway effort. It’s something that they can actually leverage for a very long time inside the company and with their customers.”

Today, engineering teams are more focused on the system and its performance. The system is not just the software or just the hardware; it doesn't work unless the software and the hardware work efficiently together.

“That’s where we get into what’s driving it, which is the fact that it has to be an efficient interface between the two,” said Jon McDonald, a technical marketing engineer at Mentor Graphics. “You can’t just have the system architect define the specification for what’s in hardware, what’s in software, have the two groups go off, do it independently and then come back together in the end. It doesn’t deliver the performance you need, there’s no predictability, there’s no way to know what you’re going to end up with until you’ve built it. By then it’s too late to make changes or adjustments. We’re absolutely seeing customers needing to run software earlier; needing to keep that software accurate to what the end system performance will be as well.”

This is easier said than done, however.

"This is the can of worms and is what all of our semis deal with," said Kurt Shuler, vice president of marketing at Arteris. "We encourage them to do it, and you would think that it would be good. But we've got a major phone OEM who has bought chips from a certain semiconductor vendor and wants models of parts of that SoC for its own internal benchmarking. The semiconductor vendor is thinking a couple of things: 'Oh my gosh, I've got to support an external entity on my model,' in terms of using the models and integrating them into whatever SystemC environment the customer chooses. Then there's the, 'It's our IP and we don't want to let it out' thing, which always crops up. The subtle thing that nobody says, but is true, is, 'We don't want our customers to know too much about the inside of the chip,' because you get more visibility through these models than you do out of silicon. 'We don't want them to know that much because we don't want them specifying to us exactly what our next chip needs to do. We want to have wiggle room to create a product that meets the needs of a whole bunch of customers, and if we give models out to everybody, we're going to have very, very specific sets of requirements from each one and we turn into a pseudo-custom services shop.'"

On the other hand, ARM, Tensilica, and other IP providers do provide models but struggle with the ROI, said Frank Schirrmeister, group director of product marketing for system development in the system and software realization group at Cadence, during a panel at DAC. Qualcomm, also on the panel, acknowledged that ROI is an issue. The company does provide models to some key customers, but said the objective has to be one model that does everything; otherwise it won't work.

"This is one of the problems of the horizontalization of the industry," Shuler suggested. "When the Apple iPhone came out, why did the screen respond so well to the touch? It's because the hardware guys and the software guys were working together on this stuff. And when you have this horizontal industry, the network guys don't trust the phone vendors, the phone vendors don't trust the chip vendors, the chip vendors don't trust the IP and software vendors, and the phone vendors don't trust the software vendors, either. It's about trust, but it's also that they all have their area of expertise and their own area of how they make money—and they don't want that touched."

Still, De Schutter maintains there are a number of trends promoting the use of models. The software stack keeps growing, and software controls more and more of the hardware's behavior. An example is ARM's big.LITTLE architecture, whereby the hardware is now controlled by the software.
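
To make that point concrete, here is a minimal sketch of software steering hardware in the big.LITTLE spirit, using the standard Linux CPU-affinity API to keep a thread on an assumed LITTLE cluster. The core numbering is hypothetical; which CPUs form which cluster varies by SoC, and production systems typically leave this to the scheduler.

    // Sketch only: software choosing which cores run a workload.
    // Assumes Linux; the CPU numbers for the LITTLE cluster are hypothetical.
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <sched.h>
    #include <cstdio>

    int main() {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);  // assume CPUs 0 and 1 belong to the LITTLE cluster
        CPU_SET(1, &mask);

        // Pin the calling thread: pid 0 means "this thread" to the kernel.
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            std::perror("sched_setaffinity");
            return 1;
        }
        std::printf("thread now restricted to the (assumed) LITTLE cores\n");
        return 0;
    }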

"There is this constant struggle of who owns what portion of the value chain," Schirrmeister said. "In order to make model development commercially feasible, in order for models to become available to enable users to avoid big issues potentially unresolvable later in the design flow, models need to become part of the standard development flow."

The dynamic between the semi and system house comes down to where the value is. Is it in the chip or the various subsystems? What will be delivered as part of the chip in terms of software? How does the system house differentiate? “The system house wants to look into the hardware in more detail and wants to provide more input into the hardware than the semi really would like—it’s their IP,” he said.

No matter the interplay among the providers, model availability is still one of the biggest issues for many companies. As Mentor’s McDonald summed it up: “I think we’re at a point where what we generally see is that there are models available from some source and it’s a question of figuring out what source has the model, is it a third-party model? Is it a Mentor-supplied model? Is it an IP vendor-supplied model? Then we must integrate those into an environment that allows us to put those models from all the different sources together to build a model of the system. We’ve done a lot of work recently in taking models from Freescale, ARM and many other star IP providers and putting those models together in a transaction-level reference platform, and we’re using SystemC and TLM 2.0 as the baseline standard connection for all of those models. It’s definitely not trivial and I think it’s important for people to understand that it does take some effort, but it’s not like you’re writing the model from scratch every time.”
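
For readers unfamiliar with what that "baseline standard connection" looks like, below is a minimal SystemC sketch of two independently developed models bound through standard TLM-2.0 sockets. The module names, register address, and value are hypothetical, and real vendor models are far richer, but they plug together through exactly this kind of interface.

    // Minimal sketch: two models from different sources connected through
    // standard TLM-2.0 sockets. Names and addresses are hypothetical.
    #include <cstring>
    #include <iostream>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    struct CpuModel : sc_core::sc_module {        // stand-in for one vendor's CPU model
        tlm_utils::simple_initiator_socket<CpuModel> socket;
        SC_CTOR(CpuModel) : socket("socket") { SC_THREAD(run); }
        void run() {
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            unsigned int data = 0;
            trans.set_command(tlm::TLM_READ_COMMAND);
            trans.set_address(0x1000);            // hypothetical register address
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(sizeof(data));
            socket->b_transport(trans, delay);    // the standard blocking handshake
            if (!trans.is_response_error())
                std::cout << "read 0x" << std::hex << data << std::endl;
        }
    };

    struct PeripheralModel : sc_core::sc_module { // stand-in for another vendor's IP model
        tlm_utils::simple_target_socket<PeripheralModel> socket;
        SC_CTOR(PeripheralModel) : socket("socket") {
            socket.register_b_transport(this, &PeripheralModel::b_transport);
        }
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
            unsigned int value = 42;              // dummy register content
            std::memcpy(trans.get_data_ptr(), &value, sizeof(value));
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    int sc_main(int, char*[]) {
        CpuModel cpu("cpu");
        PeripheralModel periph("periph");
        cpu.socket.bind(periph.socket);           // the vendor-neutral binding point
        sc_core::sc_start();
        return 0;
    }

The binding call is the point McDonald is making: as long as every model, whatever its source, exposes these standard sockets, assembling a system model is an integration effort rather than a rewrite.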

How this all shakes out is anyone's guess, but if the indicators from the back end of the supply chain—namely, the packaging and assembly players—are correct, relationships with the systems houses will grow considerably closer, particularly as 3D enters the mainstream. That could mean models will be required from the semiconductor players, potentially disrupting the business plans of some companies in the supply chain.


