What makes one model better than another? A look at how to build models for less money and with higher user value.
By Achim Nohl
In this post, I would like to share some perspectives on the transaction-level models needed for the creation of virtual prototypes. Just recently, TLMCentral kicked off a contest seeking the “best” model for a mobile phone sensor device. What actually makes a “good” model? Historically, the most common off-the-cuff answer has been that the best model is the fastest and most accurate one. However, if you ask why, you probably won’t get a very precise answer. What does “accurate” mean? What is the definition of “fast”? Will an accurate and fast model be available early enough to start software development before hardware is available, and therefore meet time-to-market demands?
Simulation performance and accuracy (timing and function) are certainly valid criteria, but there is much more to consider. The opportunity to build the “best” model arises when the user requirements are understood correctly. When this happens, model creation can be greatly simplified and sped up, and the model delivers more value to the end user at an even lower cost.
A good model, in my eyes, is one that serves the purpose of the user at the right point in the project, on time, and at a reasonable cost. The user’s task has to be carefully reviewed before selecting or creating a TLM model: what is the user intending to accomplish with the model? Simply stating that a model will be used for software development is not the level of differentiation we are looking for; it does not constrain the requirements! We would again end up needing a model that can do everything. While this is a challenge that TLM model providers often have to tackle, it is not something you want to deal with when enabling a specific class of end users in your project.
In the remainder of this post, I want to provide some software perspectives on models and, in the context of those perspectives, introduce the aspects that make a model valuable.
The mobile phone application developer uses models in the context of emulators serving as a back end to SDKs. This class of users demands that the models execute at speeds close to the real device. For interface IP models, it is necessary for the model to be connected to the real world. For example, developers can use their real iPhone to drive the sensor model in the virtual prototype, and they value being able to record and replay the stimuli for testing. It sounds complex, but it doesn’t need to be. An application developer will not look underneath the APIs he or she is using, which leaves great room for abstraction and simplification. In the context of Android, a sensor model may be a combination of simple middleware and a generic interface model (e.g., a UART) that is used to send and receive sensor data. This is close to the way it is realized in the QEMU-based Android emulator.
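To make this concrete, here is a minimal C++ sketch of that idea: the “sensor model” is little more than a piece of middleware that encodes readings as text lines and pushes them through a generic byte-stream interface. All names (ByteStream, publish_gyro) and the message format are illustrative assumptions, not the actual QEMU protocol.

```cpp
// Sketch: an application-level sensor "model" reduced to text messages over a
// generic byte-stream channel (UART-like). Names and format are hypothetical.
#include <cstdio>
#include <functional>
#include <string>

// Stand-in for a generic interface model: anything that can move bytes.
struct ByteStream {
    std::function<void(const std::string&)> on_receive;  // guest-side handler
    void send(const std::string& msg) { if (on_receive) on_receive(msg); }
};

// "Middleware" side: encode a gyroscope reading as a line of text and push it
// through the stream. No registers, no timing - just the data the app needs.
void publish_gyro(ByteStream& ch, double x, double y, double z) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "gyro:%.3f:%.3f:%.3f\n", x, y, z);
    ch.send(buf);
}

int main() {
    ByteStream uart;
    // Guest side (e.g. the SDK emulator back end) simply parses the lines.
    uart.on_receive = [](const std::string& msg) {
        std::printf("guest received: %s", msg.c_str());
    };
    // Host side could be fed by a real phone acting as the stimulus source.
    publish_gyro(uart, 0.01, -0.02, 9.81);
    return 0;
}
```

Because the application never looks below its APIs, recording and replaying stimuli is as simple as logging and replaying these text lines.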
However, if the focus is middleware rather than application development, this level of abstraction no longer holds. Still, we typically have fewer requirements on simulation performance, because the scenarios in which the model is used are mainly short, directed embedded tests. For compliance testing, Android provides small sensor test applications for developers who are creating their own sensor middleware. The middleware interfaces with the driver of the OS, so our model should be accurate up to the level of the driver’s user-space interface. This implies that the coarse-grain states of the IP are modeled correctly, as those are typically reflected in how the driver is operated. For example, the sensor is activated and configured to provide updated gyroscope values every 500 ms via the file “/sys/class/sensors/gyroscope”.
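As a rough illustration of what such a directed test looks like from the middleware developer’s side, the sketch below simply polls the sysfs path from the example above at the configured 500 ms rate. The path and the single-line sample format are assumptions carried over from the example, not a guaranteed kernel interface.

```cpp
// Sketch of a short directed test: poll the driver's user-space interface
// for new gyroscope samples at the configured update period.
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

int main() {
    const std::string path = "/sys/class/sensors/gyroscope";  // assumed path
    for (int i = 0; i < 5; ++i) {
        std::ifstream f(path);
        std::string sample;
        if (f && std::getline(f, sample))
            std::cout << "gyro sample: " << sample << "\n";
        else
            std::cout << "no sample available yet\n";
        // The model/driver were configured for a 500 ms update period.
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return 0;
}
```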
To enable this, it is not required to make such a model register-accurate or register-complete in the first place. Only the subset of functionality exercised by the driver needs to be realized, and functions such as clock, reset, and voltage control can be neglected to simplify the modeling. The middleware developer will get more value from the additional debug and tracing capabilities we can put into the model. A good model will assist developers and inform them about the coarse-grain operation: why are things happening in the model, or why are they not happening? This level of visibility can be of great help for understanding the IP and debugging the embedded software.
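Here is a minimal sketch of that debug value, assuming a hypothetical sensor model API: when the model cannot deliver data, it tells the developer why, rather than silently returning nothing.

```cpp
// Illustration (all names hypothetical) of the tracing a model can add for
// middleware developers: when a request cannot be served, explain why.
#include <cstdio>

class SensorModel {
public:
    void activate()            { active_ = true; }
    void set_period_ms(int ms) { period_ms_ = ms; }

    bool sample(double out[3]) {
        if (!active_) {
            std::printf("[model] no data: sensor has not been activated\n");
            return false;
        }
        if (period_ms_ <= 0) {
            std::printf("[model] no data: update period is not configured\n");
            return false;
        }
        out[0] = 0.01; out[1] = -0.02; out[2] = 9.81;   // dummy payload
        std::printf("[model] delivering sample (period=%d ms)\n", period_ms_);
        return true;
    }

private:
    bool active_ = false;
    int period_ms_ = 0;
};

int main() {
    SensorModel m;
    double v[3];
    m.sample(v);            // explains why nothing happens yet
    m.activate();
    m.set_period_ms(500);
    m.sample(v);            // now succeeds and traces what it did
    return 0;
}
```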
Someone who develops drivers or brings up a device in the context of a board configuration will need more functional completeness and accuracy. Here, an accurate representation of the register map is almost mandatory. The same applies to power management functions such as clock, reset, and voltage sensitivity, which are needed to bring up and test the Linux clock and voltage-regulator frameworks. The model has to provide the proper response when it is configured through the register interface, and it also has to trigger interrupts. Therefore, a certain level of timing accuracy is required. However, this does not mean the data path of the model has to be cycle-accurate or cycle-approximate; correct operation of the IP in the system context requires the model to respond at the proper times.
For example, a timer will fire interrupts at periodic intervals. But again, model abstraction can be applied to simplify the task. For the purpose of driver development or board bring-up, the most important aspect is the control plane; the data plane of a video accelerator IP, for instance, can be abstracted to return just dummy data.
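The sketch below illustrates this level of accuracy with an invented timer: a small register map, a programmable period, and an interrupt raised after the programmed interval, with no attempt at cycle accuracy. Register names, offsets, and behavior are hypothetical, not taken from any real IP.

```cpp
// Sketch of a driver-development-level timer model: register-accurate control
// plane, interrupts at the programmed interval, no cycle-level detail.
#include <cstdint>
#include <cstdio>
#include <functional>

class TimerModel {
public:
    // Hypothetical register offsets.
    static constexpr uint32_t REG_CTRL = 0x00;   // bit0 = enable
    static constexpr uint32_t REG_LOAD = 0x04;   // period in microseconds
    static constexpr uint32_t REG_STAT = 0x08;   // bit0 = interrupt pending

    std::function<void()> irq;   // wired to the interrupt controller model

    void write(uint32_t addr, uint32_t data) {
        switch (addr) {
            case REG_CTRL: ctrl_ = data; break;
            case REG_LOAD: load_us_ = data; break;
            case REG_STAT: stat_ &= ~data; break;    // write-1-to-clear
            default: std::printf("[timer] write to unmapped 0x%x\n", addr);
        }
    }

    uint32_t read(uint32_t addr) const {
        switch (addr) {
            case REG_CTRL: return ctrl_;
            case REG_LOAD: return load_us_;
            case REG_STAT: return stat_;
            default: std::printf("[timer] read from unmapped 0x%x\n", addr);
                     return 0;
        }
    }

    // Called by the simulation kernel as time advances; timing is "accurate
    // enough" when the interrupt arrives after the programmed interval.
    void advance_time(uint32_t elapsed_us) {
        if (!(ctrl_ & 1)) return;                    // timer disabled
        elapsed_ += elapsed_us;
        while (load_us_ && elapsed_ >= load_us_) {
            elapsed_ -= load_us_;
            stat_ |= 1;
            if (irq) irq();
        }
    }

private:
    uint32_t ctrl_ = 0, load_us_ = 0, stat_ = 0, elapsed_ = 0;
};

int main() {
    TimerModel t;
    t.irq = [] { std::printf("[timer] interrupt raised\n"); };
    t.write(TimerModel::REG_LOAD, 1000);   // 1 ms period
    t.write(TimerModel::REG_CTRL, 1);      // enable
    t.advance_time(2500);                  // expect two interrupts
    t.write(TimerModel::REG_STAT, 1);      // driver clears the pending bit
    return 0;
}
```

The driver sees a complete register interface and correctly timed interrupts, which is what it needs to be exercised, while the model stays far simpler than a cycle-accurate implementation.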
In summary, creating a model efficiently requires a careful review of the end user’s design task. Model development can be staged and aligned with the software development plan to incrementally enable new software development tasks. Understanding the requirements correctly allows models to be created at lower cost and with higher end-user value.