Bulletproofing Virtual Prototypes

Boosting model quality with Agile methods and code analysis.


The big benefit of virtual prototyping methods is that they don't rely on the availability of RTL or physical hardware. Instead, they utilize models of the future SoC. These models are typically lightweight and optimized for their use case, which matters for simulation speed as well as for modeling and testing effort. Model quality is a key concern, both because end-user acceptance of a virtual prototype depends directly on the accuracy of the models and because achieving that accuracy is the single biggest effort in model creation.

Of course, there are approaches that leverage RTL test benches, but for this blog let's assume the more typical pre-RTL situation, where no verification tests exist yet to leverage. To improve model quality, we can then rely on:

  1. Smart SystemC coding methodology
  2. Technology for automatic code analysis

Test Driven Development for SystemC
SystemC coding is C++ development plus some additional knowledge of hardware concepts. Agile development methods have long been adopted in the C++ community. Test Driven Development means that coding and testing are interleaved activities: the full model functionality is broken down into smaller pieces, and each piece is tested before the coding moves on. The agile community has developed many test frameworks that foster this methodology (e.g., JUnit).


Test Driven Development allows developers to quickly create unit tests for each small chunk of SystemC functionality. Virtualizer offers a SystemC-aware unit-test framework integrated into the Virtualizer Studio IDE.
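
As an illustration, here is a minimal sketch of a TDD-style SystemC unit test. It uses plain SystemC and the standard assert macro rather than the Virtualizer test framework API (not shown here), and the Adder module with its two-input interface is purely hypothetical. The pattern is the same in any framework: drive stimulus for one small functionality chunk, run the simulation briefly, and check the result before writing the next piece of the model.

    // Minimal sketch: unit-testing one small SystemC functionality chunk.
    #include <systemc>
    #include <cassert>
    #include <iostream>

    // Module under test: a hypothetical combinational adder.
    SC_MODULE(Adder) {
        sc_core::sc_in<int>  a, b;
        sc_core::sc_out<int> sum;

        void compute() { sum.write(a.read() + b.read()); }

        SC_CTOR(Adder) {
            SC_METHOD(compute);
            sensitive << a << b;
        }
    };

    int sc_main(int, char*[]) {
        sc_core::sc_signal<int> a, b, sum;
        Adder dut("dut");
        dut.a(a); dut.b(b); dut.sum(sum);

        // Test first, in small steps: stimulate, simulate, check.
        a.write(2); b.write(3);
        sc_core::sc_start(1, sc_core::SC_NS);
        assert(sum.read() == 5);

        a.write(-1); b.write(1);
        sc_core::sc_start(1, sc_core::SC_NS);
        assert(sum.read() == 0);

        std::cout << "All adder unit tests passed." << std::endl;
        return 0;
    }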

SystemC Code Analysis Technology
The second approach to boost model quality is to utilize code analysis technologies.

Most common are Dynamic Code Analysis techniques, which execute the unit test cases and check which parts of the model have not yet been executed or stimulated. This feedback helps the developer create additional tests to cover the missed functionality chunks. There are two approaches: Code Coverage looks at the source code and highlights which code chunks have been executed, while Functional Coverage looks at the SystemC primitives and highlights which registers, pins, or ports have been exercised.
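
To make this concrete, here is a hypothetical sketch of the feedback loop. It assumes a generic GCC/Clang flow where the model is compiled with --coverage and the report is inspected with gcov or lcov, rather than Virtualizer's own coverage technology; the Timer module and its reset path are invented for illustration. The test below never asserts reset, so a code coverage report flags the reset branch as unexecuted and tells the developer exactly which test to add next.

    // Hypothetical timer model with a reset path.
    #include <systemc>

    SC_MODULE(Timer) {
        sc_core::sc_in<bool> clk, reset;
        unsigned count = 0;

        void tick() {
            if (reset.read()) {
                count = 0;   // never reached by the test below:
            } else {         // the coverage report flags this branch
                ++count;
            }
        }

        SC_CTOR(Timer) {
            SC_METHOD(tick);
            sensitive << clk.pos();
            dont_initialize();
        }
    };

    int sc_main(int, char*[]) {
        sc_core::sc_clock clk("clk", 10, sc_core::SC_NS);
        sc_core::sc_signal<bool> reset;
        Timer dut("dut");
        dut.clk(clk);
        dut.reset(reset);

        // The reset path is never stimulated, so line and branch
        // coverage stay below 100% and point at the missing test.
        sc_core::sc_start(100, sc_core::SC_NS);
        return 0;
    }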


Code coverage and functional coverage reports generated after each incremental test run guide the developer to add tests for code chunks that have not yet been covered.

Then there are Static Code Analysis techniques, which analyze the source code and check for coding and security flaws such as buffer overflows, null-pointer issues, and many other common coding bugs.
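
For illustration, here is a small, hypothetical code fragment containing two of the flaw classes mentioned above. A static analyzer (Coverity and clang-tidy are common examples) reports both defects from the source alone, without executing a single test.

    #include <cstring>

    struct Packet {
        char payload[16];
    };

    void fill_payload(Packet* pkt, const char* data) {
        // Null-pointer issue: pkt is dereferenced without a guard,
        // so a caller passing nullptr crashes here.
        // Buffer overflow: strcpy writes past payload[16] whenever
        // data is longer than 15 characters plus the terminator.
        std::strcpy(pkt->payload, data);
    }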

A study of Synopsys SystemC model users whose models had been created without SystemC Code Analysis showed that, in retrospect, 80% of the bugs they discovered were in areas that Dynamic or Static Code Analysis would have highlighted. This is a strong indication that guided testing technologies can significantly increase SystemC model quality.


