
AT vs. LT

The goal is to understand tradeoffs with the least amount of time and effort, balancing function and performance.


By Jon McDonald
A subject came up today that has come up on numerous occasions: “How often will the transaction-level model with timing, AT, be used versus the functional model, LT?”

This is a common question, and the answer is often very specific to the user. The kinds of questions being asked, and the analysis needed to answer them, will drive the level of accuracy required in the models.

It’s probably easiest to start by thinking about the kinds of things that need timing and power accuracy. The first answer that comes to mind is “any software execution that is timing dependent,” but that feels a little too much like a circular definition. What we’re really looking for is system activity that varies based on the order of messages. For that to happen there must be multiple interacting threads of execution, and any set of processing activities progressing concurrently will create this situation.
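As a concrete illustration, here is a minimal SystemC sketch (the module, channel, and latency values are all hypothetical) in which the order of messages seen by a consumer is decided entirely by the delays modeled on two concurrent producers. Change either latency and the message order changes with it, which is exactly the kind of behavior a purely functional model cannot expose.

```cpp
// Hypothetical names and latencies throughout; a sketch, not a production model.
#include <systemc.h>
#include <iostream>

SC_MODULE(OrderDemo) {
    sc_fifo<int> bus;   // shared channel standing in for an interconnect

    SC_CTOR(OrderDemo) : bus(4) {
        SC_THREAD(producer_a);
        SC_THREAD(producer_b);
        SC_THREAD(consumer);
    }

    void producer_a() {
        wait(10, SC_NS);            // assumed latency of block A
        bus.write(0xA);
    }

    void producer_b() {
        wait(12, SC_NS);            // assumed latency of block B; shrink it
        bus.write(0xB);             // below 10 ns and the message order flips
    }

    void consumer() {
        for (int i = 0; i < 2; ++i)
            std::cout << sc_time_stamp() << ": received 0x"
                      << std::hex << bus.read() << std::endl;
    }
};

int sc_main(int, char*[]) {
    OrderDemo top("top");
    sc_start();
    return 0;
}
```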

It may be easier to define the class of systems that are not potentially subject to timing issues: any system with a single thread of execution would not be subject to timing dependencies. That, however, is a pretty limiting restriction.

Most multiprocessor systems have many areas of critical timing interaction. In the general case, I believe most hardware systems will have multiple activities progressing and interacting to deliver the required performance. This does not mean that all verification and analysis should be done at the AT level, but verification and analysis of the system at the AT level is a critical step in the overall development process.

I’ve heard a number of well-informed individuals take the position that AT analysis isn’t needed. Their contention is that LT is enough for functional verification, and that they can then go straight to RTL. But what happens when an issue discovered in the RTL forces an architectural change? Changing the architecture once RTL implementation has begun is generally a very expensive proposition.

The benefit of AT transaction-level analysis is that we begin to understand the performance implications of our architectural decisions without investing the effort of implementation. By making a relatively small incremental investment, adding the AT timing and power information to the LT models, we can quantify our tradeoffs. Once we understand the timing and power implications of our choices, we can invest in implementing the architecture we know will meet our system needs.
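To make that incremental step concrete, here is a minimal TLM-2.0 sketch (the module name, memory size, and the 20 ns latency are all assumptions for illustration). The b_transport callback that served pure functional LT verification gains a single line annotating an estimated access delay onto the transaction, which starts exposing timing behavior without a full AT rewrite.

```cpp
// Hypothetical module and latency value; a sketch of annotating timing onto
// an LT-style blocking transport, not a complete AT model.
#include <cstring>
#include <systemc>
#include <tlm.h>
#include <tlm_utils/simple_target_socket.h>

struct SimpleMem : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMem> socket;
    unsigned char mem[1024];

    SC_CTOR(SimpleMem) : socket("socket") {
        socket.register_b_transport(this, &SimpleMem::b_transport);
        std::memset(mem, 0, sizeof(mem));
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        sc_dt::uint64 adr = trans.get_address();
        unsigned int  len = trans.get_data_length();

        if (adr + len > sizeof(mem)) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }

        // Functional behavior: this part alone is the LT model.
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &mem[adr], len);
        else
            std::memcpy(&mem[adr], trans.get_data_ptr(), len);

        // The incremental investment: annotate an estimated access
        // latency (an assumed value) onto the transaction's delay.
        delay += sc_core::sc_time(20, sc_core::SC_NS);

        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```

Annotating delays onto b_transport is the usual first step; when the interleaving of concurrent transactions matters, a full AT model moves to the phased, non-blocking nb_transport interface, but the functional core above stays the same.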

Back to the original question: the first-level question, “Does it work?” is addressed at the LT level of abstraction. Once you know it works, the next level of questions asks, “Does it meet performance requirements?” How much time is invested in verifying function versus verifying performance expectations determines the balance between LT and AT analysis.

–Jon McDonald is a technical marketing engineer for the design and creation business unit at Mentor Graphics.


