When multiple people independently converge on the same concept, it is likely a well-founded idea. That is what is happening in verification today.
Some of the most significant advances are not the result of a single person or a single idea. They rarely happen overnight; instead, a change slowly becomes pervasive enough to suggest a generalized solution. That is exactly what is happening right now in functional verification. The tools and methodologies in place today were built around the designs of the past, not the designs of the present.
What changed is that most designs are now large enough to contain at least one processor, and in many cases several, and those processors have access to just about every corner of the design. Couple that with the fact that the processor is the one piece of a design that does not have to be run as an RTL model, so software executing on a fast model of the processor runs far faster than RTL simulation of the rest of the design.
What better resource could there possibly be to execute those verification tests? In addition, emulation often deploys the actual bonded-out core, meaning that the processor is available throughout the entire verification flow, from system-level model in a virtual prototype through to actual silicon, and its behavior never changes throughout the design flow.
Most people are familiar with the work being undertaken within the Accellera Portable Stimulus Working Group, but you may not be as aware of a startup in India that has been working on the same problem and has a similar solution. The company is Valtrix Systems, located in Bangalore. Semiconductor Engineering talked with Shubhodeep Roy Choudhury, one of the founders.
SE: The founders were working in processor companies prior to the formation of Valtrix. What was missing or what problems did you see for which there were no existing solutions?
Choudhury: There are too many tools used for design verification, even within a single company. As a result, engineers often have to learn and maintain multiple tools to validate effectively. In some cases, multiple sets of test configurations have to be maintained, which takes away precious execution time. In the absence of a single tool that covers all of the IP present in the system, it is often very difficult to get cross-product coverage.
In addition, post-silicon tools that are highly effective on silicon are not designed to run seamlessly on pre-silicon simulation, and vice versa. As a result, we would see different verification methodologies in place for pre-silicon, emulation, FPGA, and post-silicon. It is common to have duplicate test content from pre- to post-silicon. Also, in the absence of a shift-left-friendly test framework, it is quite cumbersome to move test failures from silicon to emulation, resulting in lower debug throughput.
Some of the common problems we see with existing tools include: OS-based tools are subject to pre-emption and task switching, making it take longer to hit coverage goals; pure random instruction-sequence generators are not good at hitting focused scenarios; and focused tests do not have enough randomness around the test sequence. It is desirable to have a test stimulus generation platform that addresses all of these issues effectively.
SE: You have recently announced your first tool: Sting. Tell me a little about how it addresses these issues.
Choudhury: Sting consists of a user input/test stimulus specification processor, a test generator, and a bare-metal kernel. The Sting test builder processes the test stimulus specification and creates intermediate representations, which are then used by the test generator to create tests. The tests are bundled with the kernel to create an ELF image, which can then be booted and run on the target. The entire test stimulus can be controlled using custom Sting configurations. There are also mechanisms available to develop focused tests and C++-based applications, which can then be plugged into the random instruction stream.
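To make the flow concrete, here is a minimal sketch of that spec-to-bundle pipeline. All names (`StimulusSpec`, `TestImage`, `build_test`) are illustrative assumptions, not Sting's actual API: a user spec is expanded into per-thread random instruction streams, which are then bundled with a kernel payload.

```cpp
// Hypothetical sketch of a Sting-like flow; names are illustrative, not Sting's API.
#include <random>
#include <string>
#include <vector>

struct StimulusSpec {            // user-supplied knobs from the test configuration
    int num_cpu_threads;
    int instructions_per_thread;
    unsigned seed;               // fixed seed makes generation reproducible
};

struct TestImage {               // stands in for the final bootable ELF bundle
    std::vector<std::vector<std::string>> per_thread_streams;
    bool kernel_bundled;
};

// Expand the spec into random instruction streams
// (the intermediate-representation step is elided).
TestImage build_test(const StimulusSpec& spec) {
    static const std::string kOps[] = {"add", "load", "store", "branch"};
    std::mt19937 rng(spec.seed);
    std::uniform_int_distribution<size_t> pick(0, 3);
    TestImage img{{}, true};     // bundle the bare-metal kernel by default
    for (int t = 0; t < spec.num_cpu_threads; ++t) {
        std::vector<std::string> stream;
        for (int i = 0; i < spec.instructions_per_thread; ++i)
            stream.push_back(kOps[pick(rng)]);
        img.per_thread_streams.push_back(stream);
    }
    return img;
}
```

The fixed seed mirrors a property any such generator needs: the same configuration must reproduce the same test image when a failure has to be rerun on a different platform.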
SE: How are those tests specified?
Choudhury: Sting supports classic constrained random test generation. Extensive controls are provided so that all of the test attributes can be controlled by the user. In addition, Sting provides a C++ API for the development of test scenarios that require complex programming constructs. A common use case for API-based tests is writing device drivers for the IO. The APIs provided by Sting are very validation friendly. For example, they will let you share a piece of memory between CPU threads and IO devices, which is unthinkable in Linux. Sting also provides an ASM-like programming framework that users can employ to write highly focused tests. This lets us hit focused test conditions such as memory-ordering litmus tests.
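For readers unfamiliar with litmus tests, the classic store-buffering (SB) shape can be sketched in plain C++ (this is generic illustration, not Sting's ASM-like syntax): two threads each store to one variable and load the other. With sequentially consistent atomics the outcome `r1 == 0 && r2 == 0` is forbidden; observing it would indicate a memory-ordering bug.

```cpp
// Generic store-buffering litmus test, not Sting syntax.
#include <atomic>
#include <thread>

// Returns how many iterations observed the forbidden r1==0 && r2==0 outcome.
// With seq_cst (the default std::atomic ordering) this must be zero.
int run_sb_litmus(int iterations) {
    int forbidden = 0;
    for (int i = 0; i < iterations; ++i) {
        std::atomic<int> x{0}, y{0};
        int r1 = -1, r2 = -1;
        std::thread t1([&] { x.store(1); r1 = y.load(); });
        std::thread t2([&] { y.store(1); r2 = x.load(); });
        t1.join();
        t2.join();
        if (r1 == 0 && r2 == 0) ++forbidden;
    }
    return forbidden;
}
```

A hardware-level version of this test runs the two sequences on separate CPU threads against the design's real memory subsystem, which is exactly the kind of focused condition a random instruction stream is unlikely to hit.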
SE: Often a test has to be reduced in order to facilitate finding the root cause of a problem. How is this performed in Sting?
Choudhury: Test reduction is an important step in failure debug. The test configuration generator of Sting (SCC) can help with this. Assume that our test has 16 CPU threads, 2 DMA controllers, and 4 PCIe cards. All of them need interrupt and memory resources. SCC has a component for each hardware module, which analyzes the available resources for that module and determines whether it is possible to generate a test using them. The SCC component for interrupts distributes the available interrupt vectors to the CPU, PCIe cards, and DMA controllers according to the needs of the test. If the test needs to be reduced by removing all of the interrupts, SCC can disable the interrupt component. As a result, the interrupt requirements of the other modules are not fulfilled, and those modules are not generated. There are other methods as well by which targeted sections of the test configuration can be excluded.
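The reduction mechanism described above can be sketched as follows. This is a hypothetical illustration of the idea, not SCC's real implementation: each module declares the resources it needs, and disabling a resource component automatically drops every module whose needs can no longer be met.

```cpp
// Illustrative sketch of resource-driven test reduction; not SCC's actual code.
#include <string>
#include <vector>

struct Module {
    std::string name;
    bool needs_interrupts;       // e.g. DMA controllers and PCIe cards do
};

// Keep only the modules whose resource requirements are satisfiable;
// disabling interrupts prunes every interrupt-dependent module from the test.
std::vector<std::string> plan_test(const std::vector<Module>& modules,
                                   bool interrupts_enabled) {
    std::vector<std::string> generated;
    for (const Module& m : modules)
        if (!m.needs_interrupts || interrupts_enabled)
            generated.push_back(m.name);
    return generated;
}
```

With the interrupt component disabled, a configuration containing CPU threads, DMA controllers, and PCIe cards reduces to CPU threads alone, shrinking the failing test without hand-editing each module's settings.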
More information about Valtrix can be found on their website.