Last of three parts: How verification is changing; validation vs. verification; the limits of divide and conquer; the impact of stacked die; questions about whether the lines are blurring between board and die; permanent employment for verification experts.
Semiconductor Engineering sat down to discuss the future of verification with Janick Bergeron, Synopsys fellow; Harry Foster, chief verification scientist at Mentor Graphics; Frank Schirrmeister, group director of product marketing for the Cadence System Development Suite; Prakash Narain, president and CEO of Real Intent; and Yunshan Zhu, vice president of new technologies at Atrenta. What follows are excerpts of that conversation.
SE: How is verification changing?
Schirrmeister: The lines between design and verification are blurring. In the past you could make a decision about performance based on more abstracted models, and that would trickle into the design flow from there. Today it is performance verification: you generate relevant cases and run enough cycles to verify that the performance is correct. What used to be design based on power and performance becomes validation/verification. You have to re-validate at every phase of the design, and some of those decisions aren’t made until later. If you don’t run the right cycles and transactions, it’s hard to extract certain things.
Bergeron: And you need the right application so you get the right profile of transactions.
Foster: It’s more design validation. I’m seeing a very narrow set of skills for verification.
Schirrmeister: I agree. It’s a matter of semantics.
Narain: What I’m hearing is the principle of ‘divide and conquer.’ A comment was made that you have to focus on a problem and address it directly. So you’re dividing the canvas into little pieces and then looking to cover those pieces completely. This paradigm has given rise to the whole static solution space. It’s a use model where you look at a certain attribute, creating a static timing model where the user can interact with the process. Because of design complexity, more verification is required. But we’re also seeing static solutions migrate up to compete with dynamic solutions and provide a more efficient approach. Once something is well defined, static solutions really show their value. But because there are so many more unknowns, these smaller, focused pieces also are indispensable; you still have to deal with the unknowns. More validation-related attributes are rising up to the RT level, and the management is happening at the RT level. That is requiring new validation and verification methodologies.
SE: What happens when we move into finFETs and stacked die? We’ve got more physical effects, which makes verification more challenging. Can we solve it with the same tools?
Narain: Ten years ago people were saying we were going to make RTL obsolete. RTL is still relevant because every time you go through a transformation in the IC design process new constraints get introduced. So how do you run static analysis on RTL? You need parasitics for RTL analysis. In each layer the problems are getting worse, but in each layer innovation is happening to keep pace with those problems. So I don’t see any layer getting subsumed.
Foster: In the 1990s functional verification became very efficient because we could abstract away timing and physical effects. Those are starting to come back together in certain areas; CDC and power are examples. That does add a new layer of complexity and new verification we have to do, but tools and methodologies have evolved. CDC is practically a push-button solution these days. We are developing solutions as the problems emerge. More interesting in stacked die, setting aside the physical effects, is the opportunity to rethink architectures. You potentially can eliminate caches in certain areas, and all of a sudden that’s a different verification problem.
Zhu: Do you need to look into the IP to make sure the SoC is correct? Absolutely. But do you need to look at your finFET technology to make sure your chip design is functionally correct? Not yet. As long as that abstraction layer isn’t broken, a lot of the technology we have today still works.
Schirrmeister: It also depends on what you break and when you actually break it. For years I’ve been surprised there isn’t more discussion between the guys doing the board and the guys doing the chip.
SE: Those two could easily merge, though, right? We’re starting to see that with fan-outs.
Schirrmeister: You would think so, but I don’t see them actually getting closer. It’s such a stringent handoff to board designers. And if you look at technologies such as stacked die, bringing the whole board piece closer together gives you new things to verify, such as power and thermal. But do you need to look into every piece inside for the purpose of making the decision at the next level?
Bergeron: We’re fortunate in that we have a good logic barrier. It’s a layer that all digital functional verification will be able to rely on. We’ll have to make sure the bottom level doesn’t break.
Schirrmeister: There is a lot of mixed signal, and the problems you used to verify in the mixed-signal world now have to be verified together with the digital problems for the whole chip.
Zhu: With multiclock domains, you may lose your view into one.
Narain: Methodologies are evolving. Digital is separated from analog, and that’s where the CDC problem is growing.
Bergeron: What happens if we go to four-level logic? Now you’re talking about a complete redesign of the entire design chain for a very simple mathematical change.
Schirrmeister: From a verification standpoint, we don’t have to worry about running out of problems. We are safe in terms of enough new trouble being generated by sheer complexity and by different disciplines, software and hardware. If you start using software for verification, instead of writing directed embedded tests in SystemVerilog you write them in C. Those are all challenges ahead, and they will keep us busy for years.