Stuck In The Corners

Defining corner cases is hard enough; keeping track of them and adequately testing and verifying them is confounding even the most advanced companies.

It’s common for semiconductor design teams to spend 60% to 70% of product development time on verification, which is why verification has bubbled to the top of the management chain as a concern. Executives worry about the predictability of their product development cycle because so much of it depends on the successful execution of verification, the ability to achieve coverage closure, and the ability to predict a schedule based on the confidence level they have in verification.

Corner cases are an integral part of this verification process. But as simple as the issue might sound, defining what a corner case is can be elusive.

“It is a simple question, but there is unfortunately no simple answer to it, primarily because there is no standard, agreed-upon industry definition of what a corner case is,” said Swami Venkat, senior director of marketing for the verification group at Synopsys. “Corner cases are really a combination of multiple inputs like stimulus, the state of the design, and certain conditions that the software may be executing on the chip. All of these conditions need to occur so that they trigger some sort of misbehavior or unexpected behavior in the design. If you have such a behavior then obviously you want to be able to find it in RTL, because finding it post-silicon becomes extremely expensive both from a time point of view as well as from a dollars point of view.”
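
To make that concrete, a corner-case bug can be thought of as one that fires only when several otherwise harmless conditions line up at once. The sketch below (plain Python, with an invented stimulus value, FSM state and software mode standing in for a real design) shows why exercising any single input on its own never exposes it:

```python
import itertools

# Hypothetical DUT model: it misbehaves only when a specific stimulus value,
# FSM state and software mode coincide -- any one of them alone is harmless.
def dut_response(stimulus, fsm_state, sw_mode):
    if stimulus == 0xFF and fsm_state == "FLUSH" and sw_mode == "LOW_POWER":
        return "ERROR"      # the corner case
    return "OK"

stimuli    = range(256)
fsm_states = ["IDLE", "FETCH", "FLUSH"]
sw_modes   = ["NORMAL", "LOW_POWER"]

# Only the full cross of conditions exposes the failure.
failures = [combo for combo in itertools.product(stimuli, fsm_states, sw_modes)
            if dut_response(*combo) == "ERROR"]
print(f"{len(failures)} failing combination(s) out of "
      f"{256 * len(fsm_states) * len(sw_modes)} total")
```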

Consider Intel’s latest corner case problem. It can be extremely challenging for design teams to think of all of the various scenarios and situations that will lead up to a corner case, particularly as much more functionality is being integrated into SoCs today. “There are various conditions and multiple simultaneous operations happening on the design, and to be able to simulate all of that requires an extensive amount of planning, a lot of experience and obviously the right technology to be able to uncover this,” he said.

Standards help in this regard, and the more that can be standardized, presumably the fewer corner cases. But even with standard interfaces there are plenty of corners to consider. Verification IP (VIP) also can check that each core, as well as the system interconnect, supports the same signals and speaks the same language.

“We have to speak to all cores with all kinds of interfaces,” said James Mac Hale, vice president of Asia operations at Sonics. “Each core might conform to a single standard, but we have to interface to all of them together. Our challenge is often not just interfacing to all the cores out there, but also managing the translation from one to another. We spend an inordinate amount of time and resources verifying the functionality of our IP so that it will seamlessly connect to all of the cores and function correctly. Most customers come to appreciate that once they try to put together anywhere from 50 to 100 cores in the same chip. With design re-use, it may very well be true that they have used the same cores previously and therefore know in theory that all those cores are good, but the problem is when you hook them together, something doesn’t work.”
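
At its simplest, the kind of interface checking described above amounts to watching a signal-level trace and flagging any cycle that breaks an agreed handshake rule. The following sketch is a minimal stand-in for such a checker (plain Python with a generic valid/ready convention and invented signal names, not any specific standard or commercial VIP):

```python
# Minimal protocol-checker sketch: flag a cycle where 'valid' is asserted
# with undriven data, or where 'valid' drops before 'ready' ever arrives.
# (The handshake rule is a generic valid/ready convention, assumed for
# illustration only.)
def check_handshake(trace):
    """trace: list of (valid, ready, data) tuples, one per clock cycle."""
    violations = []
    for cycle, (valid, ready, data) in enumerate(trace):
        if valid and data is None:
            violations.append((cycle, "valid asserted with undriven data"))
        if cycle > 0:
            prev_valid, prev_ready, _ = trace[cycle - 1]
            if prev_valid and not prev_ready and not valid:
                violations.append((cycle, "valid deasserted before ready"))
    return violations

trace = [(True, False, 0xA5), (False, False, None), (True, True, 0x3C)]
for cycle, msg in check_handshake(trace):
    print(f"cycle {cycle}: {msg}")
```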

To identify corner cases, Synopsys’ Venkat said design teams are simulating as much as possible in the RTL and also are using a combination of techniques. More and more semiconductor teams are bringing architects, verification leads and design leads together to create an extensive verification plan that includes all of the various scenarios that have to be verified in the RTL before they commit a design. Teams also are increasingly deploying constrained random verification where, instead of an engineer imagining the right sequences that would lead to a bug, they let the constraint solver come up with interesting sequences and combinations of inputs.
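
In spirit, constrained random stimulus generation looks like the following sketch (plain Python using simple rejection sampling in place of a real constraint solver; the transaction fields and the constraints are invented for illustration):

```python
import random

# Constrained-random transaction generator sketch: rather than hand-writing
# each sequence, draw randomized transactions and keep only those that
# satisfy the declared constraints.
def random_transaction():
    while True:
        txn = {
            "addr":  random.randrange(0, 2**16),
            "burst": random.choice([1, 2, 4, 8]),
            "mode":  random.choice(["NORMAL", "LOW_POWER", "DEBUG"]),
        }
        # Constraints (made up for the example): bursts must not cross a
        # 4KB boundary, and DEBUG traffic stays in the low 1KB of memory.
        if (txn["addr"] % 4096) + txn["burst"] * 4 <= 4096 and \
           (txn["mode"] != "DEBUG" or txn["addr"] < 1024):
            return txn

random.seed(1)
for _ in range(3):
    print(random_transaction())
```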

“When you do that, you need to have good coverage metrics that will track how much of the design has been covered, and what combination of the input specification has been covered and verified. There are customers that spend a lot of time creating extensive up-front plans. Then they use constrained random and then coverage to find out how much of the design has been verified. As you continue to execute through the plan, the level of confidence about the functionality of the design and about its ability to meet the requirements goes up. Essentially, you have to be able to create stimulus that will replicate the real-life behavior of the design, and you need to be able to do it in RTL so you can find these problems very early in product development,” he added.
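
The coverage side of that loop can be sketched just as simply (again plain Python rather than SystemVerilog covergroups, with bins made up for the example): define the combinations that must be exercised, record which ones each simulated transaction hits, and report how far the run is from closure.

```python
import itertools, random

# Functional-coverage sketch: enumerate the cross of conditions that must be
# exercised, mark the bins each transaction hits, and report progress toward
# coverage closure.
burst_bins = [1, 2, 4, 8]
mode_bins  = ["NORMAL", "LOW_POWER", "DEBUG"]
all_bins   = set(itertools.product(burst_bins, mode_bins))
hit_bins   = set()

random.seed(2)
for _ in range(40):                       # stand-in for simulated transactions
    txn = (random.choice(burst_bins), random.choice(mode_bins))
    hit_bins.add(txn)

coverage = 100.0 * len(hit_bins) / len(all_bins)
missing  = sorted(all_bins - hit_bins)
print(f"coverage: {coverage:.1f}%  missing bins: {missing}")
```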

“We see people investing more and more in simulation, where they try to run a lot of simulations in parallel. They deploy a lot of constrained random technology, and that’s where methodologies like the newly announced Accellera UVM and VMM can help. The language offers various constructs, but the methodology helps the user leverage those constructs, put together a very effective state-of-the-art environment, and create scenarios to verify the design,” Venkat said.

And due to the serious repercussions of not having adequate coverage for corner cases, the world’s leading semiconductor companies are partnering closely with EDA companies to create technology aimed specifically at finding them.


