How Many Levels Of Abstraction Are Needed?

How to apply Sun Tzu’s Art of War principles to system-level modeling.


Recently I was having a conversation with a user who was creating cycle accurate SystemC models. My initial thought was, “Why would this be necessary?” Through the course of our discussions I realized that he had a design question that required that level of accuracy, and that the simulation performance trade-offs were appropriate for his needs. His cycle accurate SystemC models were running at about 100 kHz.

This got me thinking about how many different levels of abstraction are really needed. In thinking through the levels available I realized that the creation of additional levels was being influenced by Sun Tzu’s concept that a 10X strength difference is overwhelming, and the notion that nature abhors a vacuum. I probably need to explain my thought process a little more before that statement makes sense.

If I look at the different levels of abstraction being used for system and hardware modeling, then consider the relative performance of each level, I come up with the following somewhat standard levels of abstraction and very rough estimate of performance:

RTL … 10 kHz
TLM 2.0 AT … 1 MHz
TLM 2.0 LT … 100 MHz
Host C++ … 1 GHz

Theoretically, the trade-off between performance and accuracy should be a reasonably continuous transition: we should be able to pick any performance point and trade off accuracy to achieve it. Over the range of performance options I see two gaps large enough to support a new level with no competing performance point. For a gap to be large enough to justify a new performance point, I am assuming Sun Tzu’s model that a 10X difference is overwhelming, so the new point has to be 10X faster than the next slower level and 10X slower than the next faster one. By this reasoning we should have a new point between RTL and TLM AT, and another between TLM AT and LT.

The customer I mentioned earlier is targeting exactly the gap between RTL and TLM AT. I’ve talked to other people interested in this point, but this is the first time I’ve really begun to accept that this is a viable, and probably required, level of performance and accuracy that should be standardized. Practically, the user was addressing this with an extension to the TLM 2.0 AT protocol to support cycle-by-cycle synchronization. Ultimately this should be standardized, which would then allow models at this level to interoperate with AT and LT level models.

As confirmation that the 10X steps matter, I realized that we at Mentor are already addressing the gap between LT and AT. In our modeling we support an LT+timing mode, which adds accuracy to the LT models without incurring the full overhead of an AT level model. Anecdotally, we have seen that the LT+t models come in at roughly the 10 MHz performance point, squarely between LT and AT.

Back to my initial question: how many levels do we really need? Using this reasoning, with the addition of a cycle accurate AT extension (let’s call it CT, for Cycle Timed) and the LT+t level, we have six levels of abstraction that take us from very high performance host execution down to very accurate RTL simulation, with fairly smooth transitions and reasonable trade-offs in performance and accuracy at each step.