Wrong Verification Revolution Offered

The industry says the focus of new verification efforts is misguided and that only the easy problems are being tackled.


SoC design traditionally has been an ad-hoc process, with implementation occurring at the register transfer level. This is where verification starts, and after the blocks have been verified, it becomes an iterative process of integration and verification that continues until the complete system has been assembled.

But today, this methodology has at least two major problems, which were addressed in a DAC panel entitled “Scalable Verification: Evolution or Revolution?” The first problem is that the constrained random methodology removed processors from the design because they were not fully controllable. This was acceptable 20 years ago when processors were simple controllers, but today they are an integral part of the design and verification cannot be performed without them. The second is that simulation scaling has stopped, meaning that verification is forced to migrate onto emulators for integration and even for some block-level verification. Both of these problems are becoming more acute.

One solution under development within Accellera is the Portable Stimulus effort. This aims to develop a new way of creating stimulus that is portable in several ways. First, it offers horizontal reuse, in that the same vectors can be used for virtual prototypes, simulation, emulation, FPGA prototypes and final silicon. The second type of reuse is vertical reuse, where use-cases developed for the system level can be reused for sub-system- or block-level verification. The third type of reuse combines different aspects of the design, such as software, power and functionality.
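To make the horizontal-reuse idea concrete, here is a minimal sketch in Python, assuming invented names (Scenario, render_for_simulation, render_for_silicon) rather than anything defined by the Accellera work itself: a single abstract use case is rendered once for RTL simulation and once for bare-metal execution on silicon.

```python
# Illustrative sketch only: one platform-neutral scenario, two back-end renderings.
# The names below (Scenario, render_for_simulation, render_for_silicon) are
# invented for this example; they are not part of the Accellera work itself.

from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    """A single abstract step, independent of any execution platform."""
    name: str
    payload: dict


@dataclass
class Scenario:
    """A platform-neutral use case: an ordered list of abstract actions."""
    name: str
    actions: List[Action]


def render_for_simulation(scenario: Scenario) -> str:
    """Render the scenario as pseudo sequence items for RTL simulation."""
    lines = [f"// scenario: {scenario.name}"]
    for act in scenario.actions:
        lines.append(f"sequence_item {act.name} {act.payload}")
    return "\n".join(lines)


def render_for_silicon(scenario: Scenario) -> str:
    """Render the same scenario as bare-metal C-style calls for post-silicon test."""
    lines = [f"/* scenario: {scenario.name} */"]
    for act in scenario.actions:
        args = ", ".join(f"{k}={v}" for k, v in act.payload.items())
        lines.append(f"{act.name}({args});")
    return "\n".join(lines)


dma_during_low_power = Scenario(
    name="dma_copy_during_low_power",
    actions=[
        Action("enter_low_power", {"domain": "gpu"}),
        Action("dma_transfer", {"src": 0x1000, "dst": 0x2000, "length": 256}),
        Action("exit_low_power", {"domain": "gpu"}),
    ],
)

print(render_for_simulation(dma_during_low_power))
print(render_for_silicon(dma_during_low_power))
```

The scenario itself never changes; only the back-end rendering does, which is what would allow the same test intent to travel from virtual prototype to final silicon.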

Panelists included Ali Habibi, design verification methodology lead at Qualcomm; Steven Jorgensen, architect for the Hewlett-Packard Networking Provision ASIC Group; Bill Greene, CPU verification manager for ARM; and Mark Glasser, verification architect at Nvidia.

Habibi talked about the challenges Qualcomm is facing, including the number of apps processors, additional processors for security, and special-purpose processors for many other functions. There are also many networks within the chip and in the past verification concentrated on IP blocks. As integration continues, the problems change and involve large amounts of concurrency and multiple power domains. “The existing solutions just don’t fit anymore, and we have a lot of problems all over the place. While the methodologies have improved, it is far from being an easy problem to solve and we are using incremental approaches to improve productivity. However, the problem is growing faster than the advancements in methods and tools.”

Habibi wants to be on the side of revolution, but he noted that with an organization of more than 7,000 engineers, it is impossible to just switch your mind and do it differently. “We need evolution with some smell of revolution.”

Jorgensen separated the world into two camps: those following Moore’s Law for cost reduction and those following it for functionality. “For the latter, the verification costs grow geometrically, and this is outstripping the capacity of the tools, particularly simulation, which has not managed to deliver on multi-threaded tools.” Jorgensen said it has become necessary to deploy additional tools or people to address these problems. “Design reuse has not helped us, and even if you change only 20% of a design you still have to completely re-verify it. We need to be able to describe features and functionality in an abstract manner, and from that derive the inputs to the verification tools.”

Jorgensen noted that while UVM is useful for defining things at the interface level, different methods are necessary for describing things at the system level. He also believes that this will enable innovation in verification and allow new players to come into the market.

Greene pointed out that as an IP provider, verification means different things to ARM. “We have to throw the kitchen sink of verification methods at the problem, including unit-level constrained random, assertion-based verification, and top-level simulation with both directed and random instruction sequences. We also need to do system simulation to verify that the hardware and software work well together, to ensure interoperability of the IP, to verify functionality such as power management, and to confirm that performance targets are met.”

“You might think that we are able to re-use much of our verification collateral from the IP, unit and top levels into the system-level environment, but this isn’t the case,” noted Greene. He listed the parts that could be re-used. “You can’t find new bugs by running stimulus that was used in the past, and this means that the notions of coverage are different.” Greene’s position is that the existing methods are adequate at the IP level, but that new approaches, such as use cases, could help at the system level.

Glasser noted that Nvidia makes some of the largest chips in the world and that the company is getting them out the door (along with those from the other panelists’ companies), so they must be doing some things right. “The most important thing is the coverage model,” noted Glasser. “When we go to the system level, the coverage model is different and we need a new coverage model to describe the behaviors that we want to check at the system level. This involves interactions amongst the blocks.”

New coverage models needed. Source: Nvidia Corp.
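As a rough illustration of what a system-level coverage model built around block interactions might look like, here is a minimal sketch with invented block and state names; it is not Nvidia’s coverage infrastructure, just a way of showing crosses of simultaneous block activity rather than per-block coverpoints.

```python
# Illustrative sketch, not Nvidia's infrastructure: a system-level coverage model
# that records crosses of simultaneous block activity instead of per-block
# coverpoints. All block and state names are invented.

from collections import Counter
from itertools import product


class InteractionCoverage:
    def __init__(self, block_states: dict):
        # Maps a block name to the states of interest for that block,
        # e.g. {"dma": ["idle", "burst"], "cpu": ["run", "sleep"]}.
        self.block_states = block_states
        self.hits = Counter()

    def sample(self, observed: dict):
        """Record the simultaneous state of every block at this instant."""
        self.hits[tuple(sorted(observed.items()))] += 1

    def holes(self):
        """Yield every cross of block states that was never observed."""
        names = sorted(self.block_states)
        for combo in product(*(self.block_states[n] for n in names)):
            key = tuple(zip(names, combo))
            if key not in self.hits:
                yield dict(key)


cov = InteractionCoverage({"dma": ["idle", "burst"], "cpu": ["run", "sleep"]})
cov.sample({"dma": "idle", "cpu": "run"})
cov.sample({"dma": "burst", "cpu": "run"})

# Everything involving cpu=sleep is still a hole -- an interaction never exercised.
print(list(cov.holes()))
```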

“The current methodology scales pretty well,” said Glasser, although he admitted the tools are having some problems that are being addressed by the EDA vendors. “Does that mean we should stop looking for new solutions? We should always be looking for better things.”

When looking at various verification methodologies, top-down approaches attempt to show that requirements have been met, whereas bottom-up attempts to show the non-existence of bugs. Is there a correct mix of the two approaches? “’Is it correct’ and ‘Will it break’ are two different questions, but you have to ask both of them,” said Jorgensen.

“We have always needed to do negative testing,” added Glasser.

“We don’t expect that a user of our core will be doing unit testing,” Greene pointed out, “and we also have to help customers ensure that it still works when not hooked up correctly.”

Dan Joyce, from Oracle Labs, said he was “concerned that we do almost all of our simulation in ideal-world RTL, when what happens on a chip is about gates.” He wanted to know how to correlate these two.

Glasser responded, “We trust our synthesis vendor.” And Greene added that equivalence checking means that we don’t have to run large numbers of gate-level simulations. Glasser admitted they still do some full-chip gate level simulations, but that it is very painful and “it won’t be long before we do no gate-level simulation, just like we no longer do full-chip SPICE simulation.”

Harry Foster, chief scientist at Mentor Graphics, said at least three of the panelists had mentioned coverage and asked, “Why do you think we need something different?” Glasser responded that while he thought existing methods were working, he would welcome something better. Greene said that their notion of statistical coverage came out of having completed functional coverage and yet still finding bugs. The question they asked was, “What is the next way to get visibility into the design and be able to target stressful stimulus to find more bugs?” He explained how they used various methods to identify the aspects of the design that were only rarely being reached, and then tuned stimulus toward them.
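The kind of tuning Greene describes could, in principle, look like the following sketch, which uses invented event names and is not ARM’s actual flow: hit counts gathered across a regression are inverted into weights, so rarely reached events are stressed more heavily in the next round of stimulus.

```python
# Illustrative sketch only, not ARM's methodology: invert hit counts gathered
# across a regression into weights, so rarely reached events are stressed more
# heavily next time. All event names are invented.

import random
from collections import Counter

# Hypothetical hit counts per design event, accumulated over a regression run.
hit_counts = Counter({
    "cache_eviction": 12000,
    "tlb_miss": 8500,
    "ecc_error_injection": 40,
    "power_domain_wakeup_during_dma": 3,
})


def biased_weights(counts: Counter) -> dict:
    """Weight each event inversely to how often it has already been hit."""
    return {event: 1.0 / (1 + n) for event, n in counts.items()}


def pick_stress_targets(counts: Counter, k: int = 2) -> list:
    """Choose k events to stress next, favouring the rarely reached ones."""
    weights = biased_weights(counts)
    events = list(weights)
    return random.choices(events, weights=[weights[e] for e in events], k=k)


# Usually returns the rare events, e.g. the power/DMA and ECC corner cases.
print(pick_stress_targets(hit_counts))
```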

Glasser and Greene discussed how this is a way of characterizing the testbench in order to improve it, and how that differs from simply ticking off coverage points. Jorgensen added that he doesn’t believe “it is possible to create a complete coverage model for a sufficiently complicated design,” and that even if it were possible, it would not be practical. The panelists agreed that coverage can only tell you when you are not done with verification; it can never provide full confidence because the state space is too large.

If chips are making it out the door, is it possible that too much verification is being done? Jorgensen said they are normally in the high 90s for coverage when they tape out a chip. “We have created crosses of things that we think are interesting, but when we look at the holes we often find that they are not very interesting.” Glasser felt that given the astronomical size of the state space, “we are talking about strategies for how to cover enough of it to feel comfortable. We ship product when we have the desired comfort level.”

It all comes down to prioritization. If the space is that large, then you have to identify the areas that are really important, decide which areas are amenable to software workarounds if problems remain, and concentrate on the things that are essential to a successful product. “This is where some of the new solutions, such as graph-based verification, can help,” said Jorgensen. “You need more than random, because these may be deep bugs.”

Glasser agreed: “At the system-level, random is less effective. If you just wiggle pins randomly, nothing interesting will happen.”
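A minimal sketch of why graph-based stimulus reaches deeper behavior than random pin wiggling, with all operation names invented and no connection to any particular vendor’s tool: legal use cases are modeled as walks through a directed graph of operations, so every generated test is a coherent multi-step scenario.

```python
# Illustrative sketch, not any vendor's graph-based tool: legal use cases are
# modeled as walks through a directed graph of operations, so every generated
# test is a coherent multi-step scenario. All operation names are invented.

import random

# Each operation lists the operations that may legally follow it.
scenario_graph = {
    "reset":           ["configure"],
    "configure":       ["dma_transfer", "enter_low_power"],
    "dma_transfer":    ["dma_transfer", "interrupt", "enter_low_power"],
    "interrupt":       ["dma_transfer", "enter_low_power"],
    "enter_low_power": ["wake", "done"],
    "wake":            ["dma_transfer"],
    "done":            [],
}


def generate_scenario(graph: dict, start: str = "reset", max_len: int = 10) -> list:
    """Random walk over the graph; always yields a legal sequence of operations."""
    path = [start]
    node = start
    while graph[node] and len(path) < max_len:
        node = random.choice(graph[node])
        path.append(node)
    return path


# e.g. ['reset', 'configure', 'dma_transfer', 'interrupt', 'enter_low_power', 'wake', ...]
print(generate_scenario(scenario_graph))
```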

Jorgensen noted that because of the skills of today’s verification engineers, the problem is not increasing exponentially, but linearly. Habibi was not so certain about that: “There are a lot of resources spent on this problem. We tape out the first chip for testing, and the second chip for testing, and the third one for testing, to finally get to the chip that is ready.”

Greene noted that the cost associated with finding a bug at the system level is higher than at the block level, and because of this “we tolerate the retesting that happens when it is being done for a specific purpose.”

Jorgensen said that abstraction is the key to making this work. “We are asking ourselves how to pull the bug curve in so that we find issues faster. We are using more formal verification to help with this. This will enable designers to do more design exploration and find bugs early.”

What’s missing today?
Habibi said there is a lot missing from the system level, adding that “simulators are not making effective use of the advances in the underlying hardware. Design sizes are growing faster than the improvements they are making.”

Jorgensen said he wants to see verification as a fractal problem. “When I describe something at one level, I want it to work at other levels. The problem with the Portable Stimulus work is that it is focused on stimulus. I also need the checkers and coverage.”

Glasser agreed. “It is a single part of the problem in isolation — it is yet another modeling language, and it won’t make a big difference without the other parts.” He noted there are no tools for managing coverage models. “You go from documents to coverpoints and there is nothing in the middle.”

Greene sees that debug capabilities need to be improved and made more suitable for complex software programs. “We had to develop our own library of software integration functions and we would like a way to share this amongst various projects.”

When pressed for the one thing that would have the biggest impact, the panel almost unanimously said they wanted a portable, machine-readable specification language, and that once it was in place, tools would follow. So while they rejected the revolution currently being offered to them, they all see the value in the right revolution.


