The Real Value Of Test

No matter how good the model looks, it doesn’t work until you can actually prove it does.


By Jon McDonald
Sometimes one test is worth a thousand code reviews. Perhaps not quite a thousand, but certainly a significant number. Not that this is a new idea, but I’ve had a couple of experiences recently that reminded me how valuable a transaction-level simulation model is as an executable specification.

In one case we were reviewing a potential design change, trying to decide the best way to approach a modification to an existing architecture. Through review of the existing specification and the new requirements, as well as white-boarding of the potential modification, we came to general agreement on an approach. It was agreed that this was a fairly low-risk, relatively straightforward modification to the design.

In this case the change was deemed simple enough that part of the team went off and started modifying the RTL to implement the specification change immediately. A SystemC transaction-level model of the original system existed as well. The estimate was that the RTL modifications and testing would take a couple of weeks, while the transaction-level model update and testing would take a couple of days.

During the review it was suggested that testing the modifications in the transaction-level model might not be needed. Through what some thought was an overabundance of caution, it was decided to begin implementing the changes in the RTL and the TLM at the same time. Modifying the transaction-level model actually took only a few hours. The change involved altering the way a hardware accelerator was accessed in the design, which meant some restructuring of the busses, modification of the hardware accelerator interface, and a change to the software driver. Initially the change appeared to work as expected: within one day the transaction-level model had been modified and some basic smoke testing had verified that the change worked as specified.

A number of different use cases existed, and a fairly significant amount of software was in place that would interact with the accelerator in various ways. Regression tests were set up to run through all of the software tests on the modified architecture that night. The next day it was discovered that a number of the tests had not completed correctly. With a little debugging, an issue with the interface was discovered that could cause a deadlock. It was a minor issue, requiring only a small addition to the hardware accelerator interface to avoid the situation. The specification was modified and the need for the change was pointed out to the RTL implementation team.

During a debrief discussion an interesting point came up. The RTL verification flow could not execute all of the software; it attempted to cover all of the use cases, but not by running the actual software. Had the transaction-level model not caught it, this issue probably would not have been found until software was run on a prototype board.

I’m not sure of the source, but an axiom applied in both the software and hardware worlds holds: “If it’s not tested, it doesn’t work!” In this case I would add that it should be tested as soon as possible.

—Jon McDonald is a technical marketing engineer for the design and creation business at Mentor Graphics.