Experts At The Table: Rising Complexity Meets Verification

First of three parts: Power becomes major factor at advanced nodes; questions surface about whether verification is reliable enough.

Low-Power Engineering sat down to discuss rising complexity and its effects on verification with Barry Pangrle, solutions architect for low power design and verification at Mentor Graphics; Tom Borgstrom, director of solutions marketing at Synopsys; Lauro Rizzatti, vice president of worldwide marketing at EVE; and Prakash Narain, president and CEO of Real Intent. What follows are excerpts of that conversation.

LPE: Does complexity make verification less reliable?
Borgstrom: Verification is an unbounded problem. It’s difficult to ever be 100% confident because the design is so complex that the state space to be verified is enormous. That is what drives verification complexity.
Rizzatti: Back in the 1970s there was a rule of thumb for achieving confidence in a design. The complexity at the time was in the thousands of gates. The 8086, which we considered large at the time, had 30,000 transistors and 10,000 gates. To achieve confidence you had to apply a number of vectors equal to the square of the number of gates, so 10,000 gates would require 100 million vectors. The Pentium in the 1990s had 1 million gates, so that would mean 1 trillion patterns. That is when the industry split. On one hand, emulation and hardware-assisted verification came into play. On the other side it was everything formal.
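Purely for illustration, here is a minimal sketch of the arithmetic behind the rule of thumb Rizzatti describes. The gate counts are the figures he quotes above, and the rule itself is a historical heuristic rather than a current sizing guideline.

```python
# Rule of thumb from the discussion: vectors needed ~ (number of gates) squared.
# Gate counts below are the figures quoted above, not authoritative data.
designs = {
    "8086 (late 1970s)": 10_000,     # ~10,000 gates, ~30,000 transistors
    "Pentium (1990s)": 1_000_000,    # ~1 million gates
}

for name, gates in designs.items():
    vectors = gates ** 2
    print(f"{name}: {gates:,} gates -> ~{vectors:,} vectors")

# 8086 (late 1970s): 10,000 gates -> ~100,000,000 vectors
# Pentium (1990s): 1,000,000 gates -> ~1,000,000,000,000 vectors
```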
Pangrle: Complexity is certainly going up, but that creates opportunities for the EDA industry. That’s our lifeblood. If design complexity were to stop, a lot of the innovation would stop, too. It creates opportunities for better coverage tools, for example. We’ve invested a lot in those hard-to-get-to spaces so the designer isn’t alone trying to figure out how to get there. With formal, you can check properties. There are certain things the design either does or doesn’t do. On the other side, that square-of-the-gates math applies on a component basis. The blocks are seeing this kind of complexity, but then you have to catch the interactions between them. That starts coming into the low-power space because you have different power modes.
Narain: Verification is an unbounded problem along two dimensions: what to check for, or against, and then how to check it. You really cannot check for everything that can go wrong, so your verification is only as good as the checkers you have in place. You can apply a trillion vectors, but if you don’t have anything checking the design you won’t get anything out of it. It’s the quality of the checking. Plus, with the increase in complexity you’ve got many diverse things in the design, like clock domain crossings. We cannot have a single clock domain running through the design anymore. When you put all these things together the failure modes go up, we have trouble covering them all, and we need a process to manage that. You focus on interactions between the blocks, which bounds the problem with a divide-and-conquer approach. But the cost of verification is still going up and the probability of failures creeping into a design is getting higher.

LPE: Is verification really just split between formal and emulation?
Borgstrom: No, and I’d like to take issue with that. Device complexity is growing in many different facets. Not only are low-power designs becoming much more common, but almost every block contains analog components that need to be verified. Plus, there are embedded processors where you have to do hardware-software co-verification. One aspect of complexity in this unbounded problem is that you need to take multiple approaches with multiple engines to get confidence across all aspects of a design, such as mixed-signal simulation or hardware-software co-verification at high speeds.

LPE: When it comes to verifying a complex chip, what’s good enough?
Narain: When the project manager makes the decision to tape out, I wouldn’t want to be in his or her shoes. In a structured methodology you create your checklist and process and you check everything in there. Then you worry about the chip coming back bad. The complexity is so high that the best mechanism is to isolate the failure modes and then use the most thorough mechanism to check for them. With design planning, where you are doing power verification on blocks and have something specific for checking interactions between them, that should be a totally independent process. It’s the same for low-power analysis. But the more you can isolate them, the more confidence you have that you have minimized your failure modes. And then you need to make sure there are no bottlenecks and all your interactions work.

LPE: But the other piece no one is talking about is time to market, which gets worse when we start factoring in power modeling and multiple islands, right?
Narain: Absolutely. And there’s another piece beyond that—at what cost? How many engineers can you hire and what are your resource constraints?
Borgstrom: In the past, verification was very much an ad hoc process. It was up to individual verification engineers to decide when they were done. What we’re seeing now is a more structured approach where you have quantitative metrics for things like code stability of the design itself, the number of vectors that have been run and the bug discovery rate, and then you tie them all together into a verification plan. So at the beginning of the process you decide what you have to verify on a feature-by-feature basis, and then for each of those features how you will assess whether it has been verified. Maybe you use formal techniques for some features, hardware-software co-verification for other features and mixed-signal simulation for some things. An executable verification plan is gaining more traction in the industry so you can make a more educated decision to tape out.
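As a rough illustration of the kind of metric-driven, executable verification plan Borgstrom describes, here is a minimal sketch. The feature names, thresholds, and fields are hypothetical and not tied to any particular tool or methodology.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """One line item in a feature-by-feature verification plan (illustrative)."""
    name: str
    method: str          # e.g. "formal", "hw/sw co-verification", "mixed-signal sim"
    coverage_pct: float  # functional coverage reached for this feature
    vectors_run: int     # stimulus applied against this feature (0 for pure formal)
    open_bugs: int       # bugs still open against this feature

# Hypothetical plan entries; a real plan is derived from the design spec.
plan = [
    Feature("power-state transitions", "formal", 100.0, 0, 0),
    Feature("boot ROM / driver bring-up", "hw/sw co-verification", 92.5, 250_000_000, 1),
    Feature("PLL lock behavior", "mixed-signal sim", 96.0, 1_200_000, 0),
]

def ready_for_tapeout(plan, min_cov=95.0):
    """Crude closure check: every feature above the coverage bar with no open bugs."""
    return all(f.coverage_pct >= min_cov and f.open_bugs == 0 for f in plan)

for f in plan:
    print(f"{f.name:28s} {f.method:24s} {f.coverage_pct:5.1f}%  open bugs: {f.open_bugs}")
print("signoff ready:", ready_for_tapeout(plan))
```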
Pangrle: It’s definitely important to have that process and make sure you know what you’re looking for. On top of that there are other tools that you can use to see which paths you are covering and point out places you might not have thought about.

LPE: But how do you know when you can check at a higher level of abstraction, and when you have to drill down with formal techniques?
Pangrle: Part of that depends on the level you’re running at. You can look back at building an arithmetic unit using formal techniques. But formal techniques have to be kept at a level that’s still tractable. At some point it’s going to blow up, so you have to figure out what you can simulate using vectors and what you can cover with different kinds of assertions or properties to make sure they’re captured in the design. Above that, some of it falls into the expertise of the person who’s running the verification process. Some people do it better than others. Going to a higher level of abstraction is another way. [A rough sense of that state-space blow-up is sketched after this exchange.]
Rizzatti: The problem with a higher level of abstraction, though, is that as soon as you deal with software and you have to test the interaction between the software and the hardware all of that will collapse. I’m not aware of any formal approach that will do any good there.
Narain: A higher level of abstraction is a double-edged sword. It certainly reduces complexity, but then you have to go through one more level of translation to get to RTL, and from there to gates. You lose control over your process. Functionality is not the only thing. There’s also timing and physical characteristics, and all of those are controlled at the RTL level.
Pangrle: I would argue that now you already have architects doing modeling in C.
Narain: But that’s a verification model, not a design model.
Pangrle: Then there’s a total disconnect. If I’m an architect doing some mathematical modeling, I take what some engineer did in Matlab, create a C model, and if I think it’s okay I just throw it over the wall to the RTL guys to implement. Where’s the connection?
Narain: Other than emulation the big problem is coming up with what you want to test for. Typically the higher-level models are used as the model against which you simulate RTL. If you don’t have the two-model approach then you’re compromising on the verification quality.
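As a back-of-the-envelope illustration of the blow-up Pangrle refers to above, a block with n bits of state has up to 2^n reachable states, so even the trillion-pattern budget mentioned earlier covers a vanishing fraction of a large block. The bit counts and the one-vector-per-state simplification below are assumptions for illustration only.

```python
# Illustrative only: a block with n state bits has up to 2**n reachable states.
# Assume (optimistically) that each vector could reach one new state.
VECTOR_BUDGET = 10**12  # the "1 trillion patterns" figure from earlier in the discussion

for state_bits in (16, 32, 64, 128):
    states = 2 ** state_bits
    covered = min(1.0, VECTOR_BUDGET / states)
    print(f"{state_bits:3d} state bits -> up to {states:.3e} states; "
          f"budget covers at most {covered:.2e} of them")
```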

LPE: So higher-level models don’t work as well for verification?
Borgstrom: If you can generate the RTL directly from the higher-level description, there’s less need to do a lot of verification at the RTL level. You still need to do some verification at the RTL level, but you’re looking for different types of failure modes than if you had originally implemented the functionality in RTL, where you’re looking for functional bugs in the RTL.
Narain: But how do you check for those?
Borgstrom: We can use high-level models written in the M language, which is very common for algorithm architects, and we can automatically map them down into a discrete-time RTL model that can be simulated. You can do your verification at the high algorithm level and assume that your translation from algorithm to RTL is done correctly. There’s equivalence checking that can be done. That’s one of the bigger challenges of enabling this high-level verification flow: enabling that formal equivalence checking and the synthesis parts of the flow. I think we’re starting to make progress. [A minimal sketch of this kind of two-model comparison appears at the end of this discussion.]
Narain: In my opinion if you attempt to use that, you compromise on verification quality. Verification is not an isolated problem. Verification and design are constantly making tradeoffs.
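Purely as an illustration of the two-model comparison discussed above, with a golden algorithmic model checked against an implementation-level model over the same stimulus, here is a minimal sketch. The FIR filter, bit width, and tolerance are hypothetical, and this is not Synopsys’s actual M-to-RTL flow; a real flow would compare against simulated RTL and back the comparison with formal equivalence checking rather than simulation alone.

```python
import random

def golden_fir(samples, coeffs):
    """Floating-point reference (stand-in for the algorithm-level 'golden' model)."""
    return [
        sum(c * samples[i - j] for j, c in enumerate(coeffs) if i - j >= 0)
        for i in range(len(samples))
    ]

def quantized_fir(samples, coeffs, frac_bits=10):
    """Fixed-point model standing in for the generated RTL-level implementation."""
    scale = 1 << frac_bits
    q = lambda x: int(round(x * scale))       # quantize to frac_bits fractional bits
    out = []
    for i in range(len(samples)):
        acc = sum(q(c) * q(samples[i - j]) for j, c in enumerate(coeffs) if i - j >= 0)
        out.append(acc / (scale * scale))     # rescale the integer accumulator
    return out

random.seed(0)
coeffs = [0.25, 0.5, 0.25]                    # arbitrary low-pass taps
stimulus = [random.uniform(-1.0, 1.0) for _ in range(1000)]

ref = golden_fir(stimulus, coeffs)
impl = quantized_fir(stimulus, coeffs)
worst = max(abs(a - b) for a, b in zip(ref, impl))
print(f"worst-case mismatch vs. golden model: {worst:.6f}")
assert worst < 1e-2, "implementation model drifted from the golden model"
```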


