What’s working, what isn’t, and what to avoid in modeling. Brace yourself for the big up-front costs.
All of the major EDA vendors and standards groups are pitching modeling as the next level of abstraction for advanced process nodes, but is it working as planned for the chipmakers? System-Level Design caught up with Frans Theeuwen, Department Manager for System Design at NXP Semiconductors Corp., to discuss system-level design and power modeling.
By Ann Steffora Mutschler
SLD: How long has NXP designed at the system level for production chips?
Frans Theeuwen: It depends on what you call ‘system-level design.’ We have been doing hardware/software co-verification activities for quite some time, going back about eight years. Much of what we are doing in system-level design is creating virtual prototypes and developing software on those virtual prototypes. We first did that for production designs about three or four years ago. There was one identification chip used in banking applications, and now we are using it more heavily in the area of television chips (consumer electronics). What we are doing now for consumer applications is transaction-level modeling, and we are using that mostly for software development.
SLD: What works and what doesn’t in this area?
Theeuwen: The largest problem for introduction is that you need to create all the models. That requires quite an investment if you want to reuse them within the company. In 2007 and 2008, we made quite an investment in creating lots of models for our standard SoCs, so for all the IPs that are there. That’s one thing that is important. The other thing is that most people want to use these virtual prototypes for accurate simulations, for really cycle-accurate things. That is what you should not do, because then the models are much too complicated and you are too late. If you do transaction-level modeling, you can still do software development, so the work is convincing people they should pick one use case, software development, create the models for that, and then do the software development with them.
SLD: How long have you been doing power modeling on the transaction level, and are you using tools that you created or outside tools?
Theeuwen: For power modeling at the transaction level, I think we started four years ago. Before that we did power estimation at the gate level. Later on, we extended that gate-level power modeling capability up in abstraction to the transaction level, and there we created our own tools.
SLD: How is the learning curve for the engineers in terms of power modeling?
Theeuwen: Our power models are part of the SystemC TLM models. First you have to create the TLM SystemC models, and then you can attach the power models to them; only once all the TLM models are available can you think about power modeling. We’ve only been working on full-fledged TLM models for a few years, so now we can add the power models to them, and the extra work is not that much. Once you have the TLM model, adding the power view is really not much work, and there we rely on gate-level simulation. Because most of the design is reused, about 90% to 95% of a large SoC, you can have quite accurate power models: you have the RTL, so you can simulate it. If you have that at the TLM level, you can have quite accurate power modeling for your whole SoC. There are only a few parts for which you do not have implementations, and there you need high-level power estimates.
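To make that layering concrete, the sketch below shows roughly what adding a power view to an existing TLM model can look like. It is an illustrative SystemC/TLM 2.0 example only, not NXP's in-house tooling; the module names are invented, and the per-access energy figures are placeholders for the numbers that gate-level power characterization of the reused RTL would supply.

```cpp
// Illustrative sketch only: a TLM 2.0 peripheral model with a power view
// layered on top. Module names and per-access energy figures are hypothetical;
// in the flow described above they would come from gate-level characterization.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <iostream>

SC_MODULE(PeripheralModel) {
  tlm_utils::simple_target_socket<PeripheralModel> socket;
  double energy_pj;  // accumulated energy estimate in picojoules

  SC_CTOR(PeripheralModel) : socket("socket"), energy_pj(0.0), reg_(0) {
    socket.register_b_transport(this, &PeripheralModel::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    unsigned int* data = reinterpret_cast<unsigned int*>(trans.get_data_ptr());
    if (trans.get_command() == tlm::TLM_WRITE_COMMAND) {
      reg_ = *data;
      energy_pj += 12.0;   // placeholder: energy per register write
    } else if (trans.get_command() == tlm::TLM_READ_COMMAND) {
      *data = reg_;
      energy_pj += 8.0;    // placeholder: energy per register read
    }
    delay += sc_core::sc_time(10, sc_core::SC_NS);  // loosely timed access latency
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }

 private:
  unsigned int reg_;  // single control/status register, purely illustrative
};

SC_MODULE(TestInitiator) {
  tlm_utils::simple_initiator_socket<TestInitiator> socket;

  SC_CTOR(TestInitiator) : socket("socket") { SC_THREAD(run); }

  void run() {
    unsigned int value = 0xCAFE;
    for (int i = 0; i < 3; ++i) {
      do_access(tlm::TLM_WRITE_COMMAND, value);
      do_access(tlm::TLM_READ_COMMAND, value);
    }
  }

  void do_access(tlm::tlm_command cmd, unsigned int& value) {
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(cmd);
    trans.set_address(0x0);
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    trans.set_byte_enable_ptr(nullptr);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
    socket->b_transport(trans, delay);  // single transaction, no bus cycles
    wait(delay);
  }
};

int sc_main(int, char*[]) {
  TestInitiator initiator("initiator");
  PeripheralModel peripheral("peripheral");
  initiator.socket.bind(peripheral.socket);

  sc_core::sc_start();
  std::cout << "Estimated energy: " << peripheral.energy_pj << " pJ" << std::endl;
  return 0;
}
```

Running it binds the test initiator to the peripheral model, plays a handful of reads and writes through the existing b_transport callback, and prints the accumulated energy estimate at the end of simulation; the functional TLM model is untouched apart from the two annotation lines.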
SLD: What are some other issues that need to be addressed at the system level?
Theeuwen: The largest problem with TLM modeling is that interoperability between models is still very difficult. TLM 2.0 is a step in the right direction, in that it gives a bit of a framework, but if you are modeling in TLM 2.0 there is no guarantee that everything works together.
SLD: What is missing from TLM 2.0?
Theeuwen: One part of being interoperable is being able to connect models to each other, with the same buses and pins and things like that. But there is also, in a complete system, how scheduling works, how the different parts run on a multiprocessor design and how that changes, and how everything interfaces to memory. All of those things are still not standardized.
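The gap Theeuwen describes shows up even in a toy example. In the sketch below, which is purely illustrative and uses invented module names and addresses, two TLM 2.0-compliant models bind cleanly through standard sockets and the generic payload, which is the part the standard does guarantee. Nothing in TLM 2.0 pins down the memory map, the timing quantum, or how software scheduling is modeled, so here the initiator reads an address the memory model never decodes, and the mismatch only appears at run time as an address-error response.

```cpp
// Illustrative only: two models that are TLM 2.0 compliant and bind cleanly,
// yet are not interoperable in practice. Names and addresses are made up.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <iostream>

// "Team B's" memory model: decodes 0x1000-0x1FFF. The decoded range is a
// project convention, not something TLM 2.0 defines.
SC_MODULE(MemoryModel) {
  tlm_utils::simple_target_socket<MemoryModel> socket;

  SC_CTOR(MemoryModel) : socket("socket") {
    socket.register_b_transport(this, &MemoryModel::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    sc_dt::uint64 addr = trans.get_address();
    if (addr < 0x1000 || addr > 0x1FFF) {
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    // ... actual storage omitted in this sketch ...
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// "Team A's" CPU stub: its firmware assumes memory starts at address 0x0.
SC_MODULE(CpuStub) {
  tlm_utils::simple_initiator_socket<CpuStub> socket;

  SC_CTOR(CpuStub) : socket("socket") { SC_THREAD(run); }

  void run() {
    unsigned int value = 0;
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(tlm::TLM_READ_COMMAND);
    trans.set_address(0x0);  // mismatched assumption about the memory map
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    trans.set_byte_enable_ptr(nullptr);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

    socket->b_transport(trans, delay);
    std::cout << "Response: " << trans.get_response_string() << std::endl;
  }
};

int sc_main(int, char*[]) {
  CpuStub cpu("cpu");
  MemoryModel mem("mem");
  cpu.socket.bind(mem.socket);  // this much TLM 2.0 does standardize
  sc_core::sc_start();
  return 0;
}
```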