The parallels between home remodeling and transaction-level modeling are scary.
By Jon McDonald
Recently we’ve been doing some minor remodeling in our house—nothing requiring major contractors. It’s mostly smaller things that unfortunately require a significant amount of personal involvement. Over the course of the past few weeks we’ve had a number of “projects” that we’ve started, then had to undo, because we were interfering with another area of work. More often than I would like to admit, we fell into the trap of getting ahead of ourselves in the work.
I’ve been working with a customer recently who has fallen into a similar trap in their system modeling. They have been creating fairly accurate, approximately timed, transaction-level models of their system. The system is fairly complex, dealing with many different possibilities at any one point in time. The model has become overly complicated because they’ve been trying to deal with problems before they need to.
Essentially, they are trying to predict the next step and be ready for it before it happens. From a hardware designer’s perspective this is a very good way of achieving high-performance designs, but it is a level of detail appropriate only for the RTL design. At the transaction level, it leads to a tremendous amount of complexity that doesn’t need to be addressed.
The main “rule” I tend to use in transaction-level modeling is: “Avoid any activity in the model that does not correspond to some transaction coming into the model or going out of the model.” This rule is helpful in shifting your perspective from RTL to the transaction level.
In this case, the customer followed this rule fairly well, but it wasn’t enough. At each point when a transaction was received, they were trying to prepare for what might come next. They were getting ahead of what they needed to do to handle the current activity. This involved setting up a lot of data, some of which would never be used, or in some cases had to be undone once the next transaction actually occurred.
The key disconnect in the modeling was not in generating activity that did not correspond to transactions, but in doing a lot of work in anticipation of the next transaction. In transaction-level modeling, when a transaction occurs you can do as much work as needed to handle that transaction, with no benefit from having precomputed any of the work earlier. The model can perform its work in any amount of “simulated” time, regardless of how long the model takes to run on the host.
This is very different from the RTL perspective, in which only a limited amount of computation can be performed in each clock cycle, and it leads to my second rule for transaction-level modeling: “Don’t perform any computations until the result of the computation is needed.” For an RTL designer, adopting these two rules in your approach to transaction-level modeling should help ensure your models are as simulation-efficient as possible, and as simple to code as possible.
—Jon McDonald is a technical marketing engineer for the design and creation business unit at Mentor Graphics.