No one size fits all, but you can leverage one model for multiple levels of analysis.
By Jon McDonald
The other day I was asked what the best level of abstraction would be for system-level design modeling. This is a question I get, in one form or another, far too often. It reminds me of an old quote attributed to Lincoln, slightly updated and applied to this subject: “One model can answer some of the questions all of the time, and all of the questions some of the time, but it cannot answer all of the questions all of the time.”
We are always searching for the ideal solution. It would be wonderful if one model worked for all of our analysis, but unfortunately that is not the case. Many different levels of modeling are needed to answer different questions at different points in time. With current methodologies and standards, however, there is a lot we can do to leverage the modeling investment. The work done to create one model may be a portion of the work needed to create a model for another level of analysis. To enable this reuse we need to think about what the models will be used for, and plan the modeling points to allow as much reuse as possible.
I believe best practices have settled on a small number of modeling levels that are now fairly widely used for electronic system-level design. The focus at this level of design is on hardware, software, and system validation and analysis. The names given to the various modeling levels vary based on who you are talking to and what domain they are working in, but the abstraction points have coalesced into three specific levels of modeling before we get to the RTL implementation of the hardware.
Initially, the most abstract level for the electronic system would be independent of the hardware implementation details, would abstract away the low-level driver issues, and would model only the unique custom hardware functions that will be accessed by software. This level of modeling does not require cross-compiling the software. Instead, the software is compiled natively on the simulation host, and software libraries link the native host-compiled code to the unique hardware models required. This is often the starting point for homegrown hardware-software integration projects. There are also a number of commercial offerings targeting specific platforms. One example involves the AUTOSAR simulation stacks, and another involves Linux KVM user-space emulation.
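As a rough illustration of this level, the C++ sketch below has natively compiled application code call a tiny register-access library that routes reads and writes into a purely functional model of the custom block instead of real hardware. The library functions, register offsets, and AcceleratorModel class are all hypothetical, a minimal sketch rather than any particular product's API.

```cpp
#include <cstdint>
#include <cstdio>
#include <map>

// Purely functional model of the custom block: no bus, no driver, no timing.
class AcceleratorModel {
public:
    void write_reg(uint32_t addr, uint32_t value) { regs_[addr] = value; }
    uint32_t read_reg(uint32_t addr) const {
        auto it = regs_.find(addr);
        return it != regs_.end() ? it->second : 0;
    }
private:
    std::map<uint32_t, uint32_t> regs_;
};

// Host-side library standing in for the MMIO layer a target build would use;
// the application code above it does not change.
static AcceleratorModel g_model;
void hw_write(uint32_t addr, uint32_t value) { g_model.write_reg(addr, value); }
uint32_t hw_read(uint32_t addr)              { return g_model.read_reg(addr); }

// Application code, compiled natively on the simulation host (no cross compile).
int main() {
    hw_write(0x00, 0x1);                           // hypothetical "start" command
    std::printf("ctrl = 0x%x\n", hw_read(0x00));   // read back the register
    return 0;
}
```

Retargeting the same application to real hardware, or to a more detailed model, is then a matter of swapping the implementation behind hw_write/hw_read rather than touching the application code.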
The next major step would be to add enough detail to allow execution of cross-compiled target software on the model of the hardware. This level allows detailed functional validation of the software, the hardware, and the system interaction. At this level the target processor and the full register interface of the software-accessible elements must be defined. The communication paths must be defined, but their implementation is not necessarily needed. Many architectural aspects of the hardware can be ignored or dramatically simplified, such as the caches, bus, and memory structure. The detail in the simplified models needs to match the areas being targeted for functional exploration. This is probably the most common level I see in system-level design usage today.
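In SystemC/TLM-2.0 terms, which is the common modeling standard at this level, such a register-accurate but architecturally simplified block often comes down to a target socket with a blocking transport callback that decodes addresses into software-visible registers. The sketch below assumes a SystemC/TLM installation; the module names and the two-register map are invented for illustration, and the initiator is only a stand-in for the cross-compiled software running on a processor model.

```cpp
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <cstdint>
#include <cstdio>

// Register-accurate but architecturally simplified model of a custom block:
// the software-visible registers exist, but there is no bus, cache, or timing.
struct CustomBlock : sc_core::sc_module {
    tlm_utils::simple_target_socket<CustomBlock> socket;

    uint32_t ctrl_reg   = 0;  // hypothetical control register at offset 0x0
    uint32_t status_reg = 1;  // hypothetical status register at offset 0x4

    SC_CTOR(CustomBlock) : socket("socket") {
        socket.register_b_transport(this, &CustomBlock::b_transport);
    }

    // Blocking transport: decode the address and access the register;
    // timing is left untouched at this functional level.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        switch (trans.get_address()) {
        case 0x0: if (trans.is_write()) ctrl_reg = *data; else *data = ctrl_reg; break;
        case 0x4: if (trans.is_read())  *data = status_reg;                      break;
        default:  trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);    return;
        }
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Stand-in for the software side: issues register accesses the way a driver
// running on the target processor model would.
struct SoftwareStub : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<SoftwareStub> socket;

    SC_CTOR(SoftwareStub) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t value = 0x1;
        access(tlm::TLM_WRITE_COMMAND, 0x0, value);  // "start" the block
        access(tlm::TLM_READ_COMMAND,  0x4, value);  // poll the status register
        std::printf("status = 0x%x\n", value);
    }

    void access(tlm::tlm_command cmd, uint64_t addr, uint32_t& data) {
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(cmd);
        trans.set_address(addr);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(4);
        socket->b_transport(trans, delay);
    }
};

int sc_main(int, char*[]) {
    CustomBlock  block("block");
    SoftwareStub sw("sw");
    sw.socket.bind(block.socket);
    sc_core::sc_start();
    return 0;
}
```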
The final step before RTL implementation would be to add performance information for architectural and hardware/software tradeoff analysis. This level targets the final decisions on what should be implemented in hardware and what performance requirements should be placed on that hardware. At this level we want accurate models of the communication paths and processing nodes, along with performance estimates for each element. This is not a cycle-accurate model, but a performance model appropriate for tradeoff analysis targeting implementation decisions. Most questions related to system performance require this level of accuracy to provide a reasonable answer; the earlier modeling levels do not have the architectural accuracy to serve as performance models.
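As a back-of-the-envelope sketch of the kind of tradeoff such a performance model supports, the C++ below estimates end-to-end latency for a software-only implementation versus offloading to a hardware node across a bus. The latency parameters are placeholder assumptions, not measured values; a real performance model would attach refined estimates of this kind to each communication path and processing node.

```cpp
#include <cstdint>
#include <cstdio>

// Placeholder performance parameters for one candidate architecture.
// All numbers are illustrative assumptions, not measured values.
struct BusModel {
    double ns_per_beat;   // estimated cost of moving one 32-bit beat
    double transfer_ns(uint32_t bytes) const { return ns_per_beat * (bytes / 4.0); }
};

struct ProcessingNode {
    double ns_per_item;   // estimated cost of processing one data item
    double compute_ns(uint32_t items) const { return ns_per_item * items; }
};

// Estimated end-to-end latency of offloading a block of work to the
// hardware node: copy the data out, process it, copy the results back.
double offload_latency_ns(const BusModel& bus, const ProcessingNode& accel,
                          uint32_t bytes, uint32_t items) {
    return bus.transfer_ns(bytes) + accel.compute_ns(items) + bus.transfer_ns(bytes);
}

int main() {
    BusModel bus{2.0};           // assume 2 ns per 32-bit transfer
    ProcessingNode accel{5.0};   // assume 5 ns per item in dedicated hardware
    ProcessingNode cpu{40.0};    // assume 40 ns per item in software on the CPU
    uint32_t bytes = 4096, items = 1024;

    std::printf("software only : %.0f ns\n", cpu.compute_ns(items));
    std::printf("hw offload    : %.0f ns\n",
                offload_latency_ns(bus, accel, bytes, items));
    return 0;
}
```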
Each of these abstraction levels can build on the information and models created at the level above. One model cannot answer all of the questions, but the set of models can answer many of the questions we will have prior to committing to an RTL implementation. There will still be questions that must be answered from the RTL detail, and others that will require even more detailed levels, but many questions can, and should, be answered with the earlier, more abstract models. By ordering our questions to match the capabilities of the current model, we can gain value from each level while leveraging the information defined and gathered there in creating the next level in the chain.
—Jon McDonald is a technical marketing engineer for the design and creation business at Mentor Graphics.