Experts at the Table, part 2: Are approximately timed models dead? What follows is a lively discussion about AT models and their use cases.
Semiconductor Engineering sat down to discuss the state of the industry for modeling at abstractions above RTL, a factor that has delayed the adoption and proliferation of system-level design and hardware/software codesign. Taking part in the discussion were Frank Schirrmeister, group director, product marketing for System Development Suite at Cadence; Bill Neifert, CTO for Carbon Design Systems; Nick Gatherer, engineering manager for models within the processor division of ARM; Victoria Mitchell, director for SoC software within Altera; Marleen Boonen, CEO and founder of Methods2Business; and Tom De Schutter, product marketing for Virtualizer at Synopsys. In part one, each of the participants provided an outline of the current state of modeling and virtual prototypes from their perspective. What follows are excerpts of that conversation.
SE: Are people being forced to take a partial approach today based on the models they have available?
Neifert: The problem with this is the lack of adherence to, or propagation of, a good set of standards that can be used at the higher abstraction levels. At the loosely timed (LT) level, TLM 2.0 LT is pretty simple and straightforward and you can model the system there. The system cannot get too complex because you don’t have many outstanding transactions. The approximately timed (AT) part is protocol dependent, and everyone has done it differently.
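For readers unfamiliar with the abstraction levels being debated, a minimal sketch of the TLM 2.0 LT style Neifert describes might look like the following: the entire transaction is handled in one blocking b_transport callback, with timing carried as an annotated delay. The module name and latency here are hypothetical, an illustration rather than anything from the discussion.

#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Minimal TLM-2.0 LT target: one blocking call per transaction.
struct SimpleMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    unsigned char mem[1024];

    SC_CTOR(SimpleMemory) : socket("socket") {
        std::memset(mem, 0, sizeof(mem));
        socket.register_b_transport(this, &SimpleMemory::b_transport);
    }

    // The initiator may run ahead of simulated time (temporal decoupling);
    // the target just adds its latency to the annotated delay.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        unsigned int len = trans.get_data_length();
        if (addr + len > sizeof(mem)) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &mem[addr], len);
        else if (trans.is_write())
            std::memcpy(&mem[addr], trans.get_data_ptr(), len);
        delay += sc_core::sc_time(10, sc_core::SC_NS); // hypothetical access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};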
Schirrmeister: I think AT is dead. We don’t really need to discuss it. It was a bad idea, it is unverifiable, and it produces answers that are only useful to the person who wrote the model, because only they can judge whether the results have meaning. LT and cycle accurate (CA) are the two that people are using.
Boonen: An LT model is not good enough for SystemC models intended for high-level synthesis. LT is good for hardware/software validation. LT is easy to create, but an IP design that has to get to market quickly cannot start in RTL. I have to take the most advanced methods…
Schirrmeister: But that is not AT, right? Perhaps I was too aggressive here.
Neifert: I don’t like AT either. When I talk about it, I show the classic SystemC description. It contains LT, AT and CA and then I place the dragons over AT.
Mitchell: I think there is a definite need for it. When you said it was dead – that was disappointing. I think there is still a way that we can solve that problem. Perhaps the approach we took, which was to start with LT and add layers was the wrong approach, but we still need a solution for it. I agree with Neifert that we need a common interface so that we can pull these together. If you start from the bottom up, then you should be allowed to…
Schirrmeister: OK, time for me to backpedal on AT. As a standardized deliverable, I don’t think AT is anything we would endorse, because every customer will look for something different.
Mitchell: Maybe the solution is that we provide something at the LT level and a standard method to take it to AT if that is what you need.
Schirrmeister: I see the need, but the solution has to be done very carefully. You have to have a description with it – this model is annotated with timing or power information, and only for these components, because you are never likely to do it fully. It needs these disclaimers to ensure the user only uses it for the intended purpose and to answer specific questions. If this is something users demand, then there needs to be a standard. Today, as Neifert says, everyone does it differently.
Mitchell: It also depends upon who your users are. There is a spectrum, and the software people want untimed LT. Internally we care about AT and cycle accurate. The customer may be fine with only an LT model.
Neifert: This is why the IP providers have done a great job of providing the LT models and have filled the void for fast cycle-accurate models. Trying to integrate with proprietary solutions in the middle wastes a lot of time.
Boonen: I actually have to agree with what Schirrmeister was saying. AT is a lot of effort, so why should you do it? If you have a performance model, then that is a different type of model, and there is a use case for performance analysis. But that is not a model with all of the functionality – you don’t need that in a performance model. So you must look at the use case. As an IP design center, I have to provide the high-level synthesis model that runs very fast, but do I also have to provide a simple LT model as well? The SystemC model that I developed was there to help me design my IP and validate the software.
Gatherer: There is actually a lot of AT modeling going on. It is just that everyone has a proprietary way of doing it. This is not just for IP; this is for whole systems. A lot of companies have invested a lot of resources making AT models to analyze their systems. I agree that we have not managed to standardize it, because the questions they are trying to answer and the applications they are working on are all different. For an IP provider it is really tough to work out how to make one thing that fits everyone’s needs. But this is a challenge that lots of companies with increasingly complex systems have – to work out whether the overall architecture of the system they have put together is going to fit within their performance/power budget before they start committing to an implementation. It is convenient for us to stick to LT and CA because we can define them quite well, but we have to bear in mind that companies still have the challenge of creating AT models, and we don’t have a good solution for them.
De Schutter: To tie it back to Schirrmeister’s earlier comment, it is a generalization to look at IP as being a processor, the interconnect and the memory sub-system. From a processor point of view, AT has been shown to be very hard to do anything with. For the interconnect, the interconnect IP companies have proven that it is possible to create an AT model. The same is true for the memory. So it is IP specific. For software developers, where you want the registers and the memory, LT models are ideal. If you are looking for a performance model, where you want latency and throughput, you need the interconnect and memory sub-system, and the software is still important because it provides the traffic profiles, so you need AT. But yes, there are no standards, so you get a block here and another block there and you need 20 transactors in-between to make something work.
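For contrast with the LT sketch above, this is roughly where the AT complexity the panel keeps returning to comes from: each transaction is split into non-blocking phases (BEGIN_REQ, END_REQ, BEGIN_RESP, END_RESP), and an initiator must handle several possible outcomes on every call, before any protocol-specific phases are layered on top. A hypothetical fragment, not any vendor’s implementation:

#include <systemc>
#include <tlm>

// One non-blocking AT request. Real interconnect protocols extend the
// base phases with their own, which is why every AT model ends up different.
void send_request(tlm::tlm_fw_transport_if<>& fw,
                  tlm::tlm_generic_payload& trans) {
    tlm::tlm_phase phase = tlm::BEGIN_REQ;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

    switch (fw.nb_transport_fw(trans, phase, delay)) {
    case tlm::TLM_ACCEPTED:
        // Target took the request; BEGIN_RESP arrives later on the
        // backward path, so the initiator must keep the transaction alive.
        break;
    case tlm::TLM_UPDATED:
        // Target advanced the phase in place; inspect 'phase' to see how far.
        break;
    case tlm::TLM_COMPLETED:
        // Early completion: no separate response phase will follow.
        break;
    }
}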
I agree that we’re forced to use proprietary solutions, but there is a lot of value in LT and AT models – I currently have a model of a four-core design with about 20 other components running about 100x faster than RTL simulation, and in the range where I can make realistic hardware/software choices before coding the implementation.
One BIG bugbear for me is the cores – I can’t find a way to realistically join core models that depend on the SystemC quantum keeper to bus models whose response times are unpredictable, depending on the particular peripheral and on arbitration on the bus (see the sketch below).
I definitely don’t want to drop to the performance of cycle-accurate solutions to handle that, so the whole solution ends up being proprietary. But it works and it delivers value, provided I can create it quickly and cheaply.
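The quantum keeper pattern the comment refers to looks roughly like the sketch below: the core accumulates a local time offset and only synchronizes with the rest of the simulation at quantum boundaries, which is exactly what clashes with a bus whose per-access delay is only known after arbitration. The module, quantum and delay values here are hypothetical.

#include <systemc>
#include <tlm>
#include <tlm_utils/tlm_quantumkeeper.h>

struct CoreModel : sc_core::sc_module {
    tlm_utils::tlm_quantumkeeper qk;

    SC_CTOR(CoreModel) {
        // All temporally decoupled initiators share one global quantum.
        tlm_utils::tlm_quantumkeeper::set_global_quantum(
            sc_core::sc_time(1, sc_core::SC_US));
        qk.reset();
        SC_THREAD(run);
    }

    void run() {
        while (true) {
            // The annotated delay depends on which peripheral is hit and on
            // bus arbitration, so it is only known per transaction.
            sc_core::sc_time bus_delay = do_bus_access();
            qk.inc(bus_delay);              // accumulate local time offset
            if (qk.need_sync()) qk.sync();  // yield only at the quantum boundary
        }
    }

    // Stand-in for a b_transport call into the bus model.
    sc_core::sc_time do_bus_access() {
        return sc_core::sc_time(42, sc_core::SC_NS); // hypothetical
    }
};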