Limits of existing tools, and what’s holding back an industry shift to a higher level of abstraction.
By Ed Sperling
Verification has always been the problem child of SoC design. It requires the most engineering resources, the largest block of time and the biggest budget in the design process. And at each new process node the problem gets bigger, in part because there is more stuff on each die—transistors, memory, interconnects, I/O, functionality—and in part because chipmakers are being called upon to generate more software, integrate more IP and do more with the same resources.
It has long been suggested that the best way to improve verification is to start the process earlier, understanding what needs to be verified as early as the architectural level. But that requires much more cooperation between teams and a deeper understanding of what's really going on in verification.
There have been different approaches to try to make this happen. Synopsys, for example, has pushed heavily into software prototyping. Mentor and Cadence have pushed into emulation and high-level synthesis (Synopsys is now in that market, as well). But so far, in part because of the sheer magnitude of the problem, the best that can be said is that the industry is treading water rather than being washed downstream.
So why haven’t companies adopted new approaches to solving this problem?
“There is actually a fair amount of system-level verification that is being done today,” said David Park, director of marketing for the System-To-Silicon Verification Solution at Synopsys. “The perception that system-level verification approaches are not being widely adopted is more due to a lack of prescribed flows at the system level than an actual lack of customer usage. At the functional level, there are well-defined verification flows based on methodologies like VMM and UVM, but at the system level those methodologies are not well-defined today, which results in ad hoc flows. Even customers at the leading edge have different views on what the right methodology should be.”
The efforts so far are largely trial and error, and the errors most likely will outweigh the results. But Park believes that eventually best practices will emerge and baseline methodologies will be developed, evolving out of the many collaborations that are already ongoing between customers and tool providers.
Strategies and challenges
To say that verification isn’t getting done effectively is an overstatement, considering the vast number of advanced electronics products rolling out in the market every week. Synopsys has done as well as, if not better than, any of its rivals with this incredibly complex system-level approach. But there is certainly room for improvement and opportunities for new tools across this segment.
“Systems are being designed and verified, but there seem to be two primary challenges,” said Bernard Murphy, CTO at Atrenta. “One is having a simulation sufficiently faithful to the design but sufficiently fast to run application-level software. The second is having a platform-plus-OS partitioning also sufficiently accurate and sufficiently fast to (somewhat) exhaustively test critical behaviors, for example, around mutexes and semaphores.”
There are a number of approaches for each, but none is ideal. Emulation on a commercial or custom FPGA board, for example, requires partitioning the design across multiple FPGAs, which is tough. While this is the most flexible approach and offers acceptable performance, much of the effort goes into debugging mapping problems rather than design problems, Murphy said.
A second approach is simulation on an emulation platform, which is expensive and culturally unpopular on the software management side. While software engineers will readily use an emulator, their managers are averse to paying for one. All of the major emulation vendors say their sales continue to come from the hardware side, even though software engineers routinely schedule time on the emulators.
A third approach is to simulate using a TLM-compliant virtual model with the ability to hot swap cycle-accurate models for the virtual models, especially if the cycle-accurate model can be derived from RTL. “You use a pure VM to get through the OS boot, but then swap in cycle-accurate models to verify behavior for individual IP,” said Murphy. “This is accurate but still very slow and painful to debug accurately across multiple IPs.”
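The hot-swap concept is easier to see in code. Below is a minimal C++ sketch of the idea, not taken from any vendor's flow; the interface and class names (MemoryModel, FastFunctionalModel, CycleAccurateModel) are hypothetical stand-ins for what a real SystemC/TLM implementation would provide.

```cpp
// Minimal sketch of the model hot-swap idea. All names are hypothetical;
// a production flow would use SystemC/TLM-2.0 interfaces instead.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <memory>

// Common transaction-level interface that both models implement.
struct MemoryModel {
    virtual std::uint32_t read(std::uint32_t addr) = 0;
    virtual void write(std::uint32_t addr, std::uint32_t data) = 0;
    virtual ~MemoryModel() = default;
};

// Fast, loosely timed functional model: good enough to boot an OS quickly.
struct FastFunctionalModel : MemoryModel {
    std::uint32_t read(std::uint32_t addr) override { return mem[addr % 1024]; }
    void write(std::uint32_t addr, std::uint32_t data) override { mem[addr % 1024] = data; }
    std::uint32_t mem[1024] = {};
};

// Cycle-accurate model (e.g., derived from RTL): slow but faithful.
struct CycleAccurateModel : MemoryModel {
    explicit CycleAccurateModel(const FastFunctionalModel& state) {
        // Hot swap: inherit the architectural state captured by the fast model.
        std::copy(std::begin(state.mem), std::end(state.mem), std::begin(mem));
    }
    std::uint32_t read(std::uint32_t addr) override { ++cycles; return mem[addr % 1024]; }
    void write(std::uint32_t addr, std::uint32_t data) override { ++cycles; mem[addr % 1024] = data; }
    std::uint32_t mem[1024] = {};
    std::uint64_t cycles = 0;  // stand-in for detailed timing
};

int main() {
    // Phase 1: run the boot phase on the fast model.
    auto fast = std::make_unique<FastFunctionalModel>();
    fast->write(0x100, 0xDEADBEEF);

    // Phase 2: swap in the cycle-accurate model to verify one IP in detail.
    std::unique_ptr<MemoryModel> accurate =
        std::make_unique<CycleAccurateModel>(*fast);
    std::cout << std::hex << accurate->read(0x100) << "\n";  // prints deadbeef
}
```

The key point is that both models sit behind one transaction-level interface, so the platform can boot on the fast model and then hand its state to the accurate model for the IP under scrutiny.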
Having a platform plus an operating system working together is a step in the right direction, but it doesn’t expose all of the problems, either. “This is heavily dependent on the architecture,” Murphy said. “Some systems have support in hardware, such as spinlocks, but will also have some low-level software support. This level of verification seems to be performed mostly with traditional simulation but needs support for assembly or higher-level instruction primitives to efficiently describe tests.”
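As a rough illustration of the kind of directed test Murphy is describing, the following sketch stresses a spinlock from multiple threads. It is plain C++ with std::atomic standing in for the hardware's atomic-exchange primitive; in a real flow the equivalent test would be generated as assembly or bare-metal C running on the simulated cores, and the thread and iteration counts here are purely illustrative.

```cpp
// Directed spinlock stress test (illustrative only). std::atomic_flag models
// a hardware-backed lock; in hardware this spin loop would map to an atomic
// exchange or load-exclusive/store-exclusive sequence.
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

std::atomic_flag lock = ATOMIC_FLAG_INIT;  // models a hardware spinlock
long shared_counter = 0;                   // protected resource

void contender(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        while (lock.test_and_set(std::memory_order_acquire)) {
            // spin until the lock is acquired
        }
        ++shared_counter;                   // critical section
        lock.clear(std::memory_order_release);
    }
}

int main() {
    constexpr int kThreads = 4, kIters = 100000;
    std::vector<std::thread> threads;
    for (int t = 0; t < kThreads; ++t)
        threads.emplace_back(contender, kIters);
    for (auto& th : threads) th.join();

    // If the lock primitive (or its memory ordering) is broken, updates are
    // lost and this check fails.
    assert(shared_counter == static_cast<long>(kThreads) * kIters);
    return 0;
}
```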
Synopsys’ Park agrees that the next step for system-level verification is to bring all these pieces together with links to the existing RTL verification tools so that customers can assemble a verification flow that meets their specific needs. “The real breakthrough will be when a customer can seamlessly transition between a virtual platform and an RTL or hardware-accelerated representation of their design, supporting the different combinations of system accuracy and performance that are required by both hardware verification and software development teams.”