Model-Driven Design: Making Progress

Changes are under way, in part because some of the old approaches no longer work and in part because of the demands for tradeoffs earlier in the design cycle.

By Ann Steffora Mutschler
Model-driven design is coming into its own, in part because the old way of using models at advanced nodes doesn’t always produce usable chips and in part because of the need for making tradeoffs at the earliest stages of the design process.

The concept of developing models for IC design is hardly a new one, and it is being done today on a number of levels, ranging from transactions in TLM 2.0 to power and software. Historically, though, it has been done at a very high, purely functional level, often for algorithmic verification. These models have enabled engineers to verify whether a data stream or data object could be manipulated, and once the function was verified the model was discarded. Little executable performance information was ever derived from it.

More recently, engineering teams have begun to recognize that those models can be used and flowed down into the rest of the design process to help make architectural tradeoffs. A lot of engineering teams are running into problems in that their designs do what they are supposed to do, but they are not successful because they miss their performance targets, observed Jon McDonald, technical marketing engineer for the design and creation business unit at Mentor Graphics. “Those performance issues could be timing- and power-related; often it’s both, and it’s a tradeoff between the two.”

More and more, he said, engineering teams want to take the architectural models they’re creating at a very high level to define what the system is and how the data will be manipulated, and break that down to the level of an architectural refinement, where they decide what should be implemented in software, what should be implemented in hardware, how fast it needs to be, and how many processing engines are needed. These implementation choices need to be made before implementation begins, because once teams start writing RTL it is much more difficult to make a change.
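To make that kind of what-if analysis concrete, the short sketch below shows the sort of back-of-the-envelope comparison an architectural model supports before any RTL exists. It is written in plain C++, and every number, name, and formula in it is a made-up placeholder rather than data from any design discussed here.

```
// Hypothetical architectural what-if sketch: compare candidate configurations
// (number of processing engines, hardware vs. software split) on rough
// throughput and power estimates before committing to RTL.
// All figures are illustrative placeholders, not measurements.
#include <cstdio>

struct Candidate {
    const char* name;
    int engines;             // number of processing engines
    double mops_per_engine;  // estimated throughput per engine (Mops/s)
    double mw_per_engine;    // estimated active power per engine (mW)
    double sw_mops;          // work left on the CPU in software (Mops/s)
};

int main() {
    const double required_mops  = 400.0;  // assumed performance target
    const double power_budget_mw = 250.0; // assumed power budget

    Candidate candidates[] = {
        {"2 engines + SW fallback", 2, 120.0, 60.0, 180.0},
        {"4 engines, all HW",       4, 120.0, 60.0,   0.0},
        {"3 engines, partial HW",   3, 120.0, 60.0,  80.0},
    };

    for (const Candidate& c : candidates) {
        double throughput = c.engines * c.mops_per_engine + c.sw_mops;
        // crude assumption: software work costs 0.4 mW per Mops/s on the CPU
        double power = c.engines * c.mw_per_engine + 0.4 * c.sw_mops;
        bool meets = throughput >= required_mops && power <= power_budget_mw;
        std::printf("%-26s  %6.1f Mops/s  %6.1f mW  %s\n",
                    c.name, throughput, power, meets ? "OK" : "misses target");
    }
    return 0;
}
```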

Tom De Schutter, senior product marketing manager at Synopsys, noted that years ago everybody was hoping those models would also be synthesizable and that they would be a starting point for the IP components. So rather than just creating a model for a specific use case and then ‘throwing it out,’ there would be an implementation path. “That just doesn’t seem to be a reality. There are high-level synthesis tools, but it’s a very specific type of model, not what you need for these types of use cases.”

One item coming more to the forefront, he explained, is that although the model is not synthesizable into a piece of IP, it can provide a lot of value as a reference for creating the IP, and for creating the testbench before the IP is available. More and more IP vendors are now taking a model-first approach. They create a SystemC model, verify the functionality of the model, create the testbench around it, potentially also create software stacks, and so on, based on the model. Doing that provides more debuggability. They can then re-use that model to develop and verify the actual IP.
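A minimal sketch of what such a model-first flow can look like is shown below, assuming SystemC and TLM-2.0. The ‘DoublerIP’ block, its behavior, and its latency are invented for illustration; real vendor models and testbenches are considerably more elaborate.

```
// Model-first sketch in SystemC/TLM-2.0: a hypothetical IP block is modeled
// as a loosely timed target, and a testbench initiator is written against
// that model before any RTL exists.
#include <systemc>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Hypothetical IP model: one 32-bit register that doubles whatever is written.
struct DoublerIP : sc_core::sc_module {
    tlm_utils::simple_target_socket<DoublerIP> socket;
    unsigned int reg;
    SC_CTOR(DoublerIP) : socket("socket"), reg(0) {
        socket.register_b_transport(this, &DoublerIP::b_transport);
    }
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        unsigned int* data = reinterpret_cast<unsigned int*>(trans.get_data_ptr());
        if (trans.get_command() == tlm::TLM_WRITE_COMMAND)      reg = 2 * (*data);
        else if (trans.get_command() == tlm::TLM_READ_COMMAND)  *data = reg;
        delay += sc_core::sc_time(10, sc_core::SC_NS); // placeholder access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Testbench developed against the model; it can later be reused to verify
// the actual IP implementation.
struct Testbench : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Testbench> socket;
    SC_CTOR(Testbench) : socket("socket") { SC_THREAD(run); }
    void run() {
        unsigned int value = 21;
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
        trans.set_data_length(4);
        trans.set_address(0);
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        socket->b_transport(trans, delay);
        trans.set_command(tlm::TLM_READ_COMMAND);
        socket->b_transport(trans, delay);
        sc_assert(value == 42); // functional check captured once, reusable later
    }
};

int sc_main(int, char*[]) {
    DoublerIP ip("ip");
    Testbench tb("tb");
    tb.socket.bind(ip.socket);
    sc_core::sc_start();
    return 0;
}
```

The point of the sketch is the ordering: the model, the testbench, and the functional check all exist before any RTL is written, so they can be reused against the eventual implementation.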

While there is no automated path, the goal is still to reduce the effort for IP development significantly by doing a model first. Varying results are seen based on whether engineering teams consistently develop the models or whether they just try one model.

“It’s really something that by just doing one model you can’t really prove or disprove that methodology works. ST is probably the best example of a company that’s really adopted this methodology and has a lot of documentation out there. Some IP vendors have done this, as well, and they’ve seen good results. But it’s really something that you completely adopt and go for it, or if you kind of step into it and assume that it won’t work then you can easily prove that it doesn’t. It’s like with all statistics, depending on the mindset and how you ask questions there’s always a way to prove or disprove certain assumptions,” De Schutter added.

Interestingly, much of what today is called model-based design refers to approaches built on UML models or MathWorks models, which are independent of the hardware and software implementation, said Frank Schirrmeister, group director for product marketing of the system development suite at Cadence. “What you have there is the trend continuing and getting stronger that on the pure software side, linking into the software development piece, you have these things like UML and MDA (model-driven architecture) and so forth, which you find in the OMG (Object Management Group), where UML sits. That’s progressing, and there are certain application domains that seem to be more sensitive to it.”

The linkage to the hardware side is finding more interest, he said, with smaller companies linking UML-type models to SystemC, which leads naturally into the hardware world. “That’s happening and it’s progressing, but it’s in the very early stage of adoption. It’s definitely pre-chasm, and not something for which you will find broad adoption next year,” he said.

Integrating IP into models
When it comes to integrating IP into models, a robust model based on standards is needed. “We’re seeing a lot of customers who invested significantly in SystemC and TLM 2.0 as the standard level of abstraction at which you can create a fairly abstract model of a piece of IP for your system. You can still define timing and power characteristics around that piece of IP, so you can do what-if analysis. If you take in a third-party piece of IP—and quite a few third parties are starting to offer SystemC models and TLM models of their IP—you can take that IP, plug it into your system, start exercising that IP in various ways, and see how it interacts with the rest of your system,” Mentor’s McDonald said.
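As a rough illustration of that flow, the sketch below plugs a stand-in for a vendor-supplied TLM-2.0 model into a small system and compares two timing scenarios side by side. The block, the traffic pattern, and the latencies are all assumptions made purely for the example.

```
// Sketch of plugging a stand-in third-party TLM-2.0 model into a system and
// doing simple what-if analysis on its timing. Latencies are made-up
// parameters, not vendor data.
#include <iostream>
#include <systemc>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Stand-in for a vendor TLM model whose access latency is configurable.
struct VendorIPModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<VendorIPModel> socket;
    sc_core::sc_time latency;
    VendorIPModel(sc_core::sc_module_name name, sc_core::sc_time lat)
        : sc_core::sc_module(name), socket("socket"), latency(lat) {
        socket.register_b_transport(this, &VendorIPModel::b_transport);
    }
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        delay += latency; // timing annotation carried back to the initiator
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Simple traffic generator standing in for "the rest of your system."
struct Traffic : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Traffic> socket;
    SC_CTOR(Traffic) : socket("socket") { SC_THREAD(run); }
    void run() {
        unsigned char buf[4] = {0};
        for (int i = 0; i < 1000; ++i) {
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            trans.set_command(tlm::TLM_READ_COMMAND);
            trans.set_address(i * 4);
            trans.set_data_ptr(buf);
            trans.set_data_length(4);
            socket->b_transport(trans, delay);
            sc_core::wait(delay); // consume the annotated timing
        }
        std::cout << name() << " finished burst at "
                  << sc_core::sc_time_stamp() << std::endl;
    }
};

int sc_main(int, char*[]) {
    // Two what-if scenarios elaborated side by side: a fast and a slow
    // variant of the same third-party block.
    VendorIPModel fast("fast_ip", sc_core::sc_time(5, sc_core::SC_NS));
    VendorIPModel slow("slow_ip", sc_core::sc_time(20, sc_core::SC_NS));
    Traffic t0("traffic_fast"), t1("traffic_slow");
    t0.socket.bind(fast.socket);
    t1.socket.bind(slow.socket);
    sc_core::sc_start();
    return 0;
}
```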

By having those robust standards, and having models that are developed and interoperable between the companies supplying the IP and the customers consuming it, there is now the opportunity to make detailed tradeoffs before committing to the implementation. It is likely, too, that SystemC will need to be extended over time to address some of the outstanding technical barriers.

From the user perspective, Mike Berry, senior director of engineering at Open-Silicon, explained that in terms of integrating IP into models, “We would look for either vendor-provided models of the IP to put into a bigger environment, or go off and build those models ourselves if they are not available. Many designs today obviously use a lot of off-the-shelf IP in conjunction with some amount of custom-designed logic—the customer’s secret sauce, so to speak. Where IP can play a role in the modeling approach is that you can build up a model of a chip or a system using the models of the individual IP blocks, and then use those off-the-shelf models again to develop the standardized, non-custom part of the total model environment. You can just focus on developing custom models for the pieces that aren’t available off the shelf. So, much like with hardware design, they can help expedite things and reduce the load on the design team. The same can be said for the modeling environment. You don’t have to develop all those models yourself; you can take advantage of existing models for standard IP.”


