Experts At The Table: The Future Of Verification

Second of three parts: New challenges as software and hardware need to be understood together; what defines success in verification tools and methodologies; shrinking the feedback loop between the designer and the verification process; can hardware verification ever catch up with growing complexity.

Semiconductor Engineering sat down to discuss the future of verification with Janick Bergeron, Synopsys fellow; Harry Foster, chief verification scientist at Mentor Graphics; Frank Schirrmeister, group director of product marketing for the Cadence System Development Suite; Prakash Narain, president and CEO of Real Intent; and Yunshan Zhu, vice president of new technologies at Atrenta. What follows are excerpts of that conversation.

SE: So is complexity of the design the biggest challenge for verification?
Schirrmeister: There may be one aspect that is worse. Can you find someone who can deal with both RTL and software? The RTL guys are not moving up to software, and it takes 15 years to grow a new breed of people who can deal with both. So there are two aspects to visibility. One is being able to show all the assertions and all the aspects of the RTL. The second is all the software. Nobody understands all of this together. The software guy says there must be something wrong in the RTL. The RTL guy says you didn’t code it right. There is no person who grasps all of that, so it may be an education issue.
Narain: The software developer doesn’t see the issues at the same level as the hardware developer. A social networking company can turn around a fix in two days. You can never turn around a hardware fix in two days. The constraints are different. You have a lot more time and flexibility to fix issues in software. The reason this whole system is still moving forward is the continuous evolution in methodology. The challenge for the EDA industry is to deliver solutions in the time frame that methodology evolutions will demand in the future. We are keeping pace, but we have to keep running faster to do so.

SE: Is tackling the problem a function of the tools, or is it also a matter of doing things differently? If you are just barely keeping up, do you have to think about the problem from a different perspective?
Schirrmeister: We’re making huge dents in it, but the problem is growing as well. So the question is which is growing faster. Will there ever be a time when we have figured it all out and verification is a non-issue? No way. Even if you figure out everything at the design level, there still will be verification problems. We have made great strides. The industry can be proud of its tools and methodologies. It is doing things differently.
Bergeron: You also have to appreciate how much we’ve done in an industry that is relatively small. We do not have 1 million customers. We have 10 big customers, and the rest are small guys with 10 or 15 licenses. If we rolled out something that only 0.1% of our users needed, we couldn’t afford to do it. We have an environment that is extremely limited, it changes very fast, and the consequences of a mistake are huge.
Schirrmeister: If you talk to the powerhouses in the software world, there are cool techniques. But the software guys think the hardware is perfect, because the software is always incomplete. There is always a service pack necessary. We get up every day to work harder and address the verification problem. Will we ever completely fix it? Probably not.
Foster: But there is a concern about the relatively small group of design engineers in comparison to verification engineers. We saw about a 4% increase in design engineers, and a 53% increase in verification engineers. You could argue we never had enough verification engineers. But there is a potential concern.
Narain: That’s an interesting point. If you look at black-box verification, we run testbenches, we run simulation, there’s a failure, we debug a testbench, the design engineers figure out what the problem is, they debug it and then they fix it. If you look at it at the base level, the design engineer does get involved because he’s the one responsible for finding the problem. This cycle was more indirect and is now getting a little more direct. Simulation-based approaches are critical. But the innovations are in cutting down the feedback loop to get the designer into the fixing process much faster. The new methodologies are narrowing the gap a little bit.
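To make the loop Narain describes concrete, here is a minimal Python sketch of regression triage. All test names, block names, and owners are invented for illustration; real flows sit on top of simulators and triage tools rather than a script like this.

```python
# Toy model of the verification feedback loop: run a regression, bin each
# failure to the block it implicates, and route it to the designer who owns
# that block. The data and the owner map are hypothetical.
from collections import defaultdict

# Hypothetical regression results: (failing test, implicated design block).
failures = [
    ("smoke_dma_burst", "dma_ctrl"),
    ("coherency_rand_042", "l2_cache"),
    ("coherency_rand_117", "l2_cache"),
]

# Hypothetical block ownership; in practice this comes from the project database.
owners = {"dma_ctrl": "alice", "l2_cache": "bob"}

def triage(failures, owners):
    """Group failures by the designer who owns the implicated block."""
    bins = defaultdict(list)
    for test, block in failures:
        bins[owners.get(block, "unassigned")].append(test)
    return dict(bins)

if __name__ == "__main__":
    for owner, tests in triage(failures, owners).items():
        # The shorter this notification path, the tighter the feedback loop.
        print(f"{owner}: {len(tests)} failing test(s): {', '.join(tests)}")
```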

SE: Isn’t part of the solution doing more pre-verification of blocks?
Foster: Yes, but try doing something with multiprocessor systems and a cache-coherent network. You can’t just plug the pieces in. You move that to the system level.
Schirrmeister: And the system needs to be defined. What that really means is that the blocks you’re using and the subsystems built from them are getting harder, because you’re mixing hardware and software together. That trend will continue. But the overall problem is growing with that.
Foster: But you don’t want to be finding something at a higher level that you could have found sooner at a lower level.
Schirrmeister: That’s correct.
Narain: Any time there is fresh RTL there are going to be problems. It used to take four years to do a microprocessor design and it had 300,000 to 400,000 gates. Now we’re looking at 200 million-plus gates. That’s only possible because of reuse. There is a significant amount of reuse happening. But even though you’re re-using the design, every chip is a new implementation. So what is happening is the implementation-related problems are rising up in the verification space, and they’re taking their place next to functional verification-related problems.
Schirrmeister: It’s not just a problem with new RTL, though. The problem increases once you stitch the individual blocks together, once you connect them. All the pieces may work, but if you add in a caching system you may end up with a value that is inconsistent.
Narain: If you have a better quality piece it will be easier, though. Whatever you can verify early contributes to easing the complexity.
Foster: The cost moves up 10X every time you move to the next phase of integration.
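Taken at face value, Foster’s rule of thumb compounds quickly. A tiny sketch, assuming a one-hour cost to root-cause a bug at block level (an illustrative figure, not from the discussion):

```python
# If debug cost grows ~10X per integration phase, an escaped bug gets
# expensive fast. Phase names and the one-hour baseline are assumptions.
phases = ["block", "subsystem", "full chip", "post-silicon"]
cost_hours = 1.0  # assumed cost to root-cause the bug at block level
for phase in phases:
    print(f"{phase:>12}: ~{cost_hours:,.0f} engineer-hour(s)")
    cost_hours *= 10
```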
Bergeron: Combining correct blocks creates even more complex systems, and the combination itself creates bugs.
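A toy example of Bergeron’s point: the two caches below each behave correctly in isolation, but with no coherence protocol between them, combining them produces a stale read. The classes and the scenario are invented for illustration.

```python
# Two per-core caches that each pass a unit test, yet fail together:
# nothing invalidates core 1's copy when core 0 writes. An integration-only bug.
class Cache:
    """A trivially correct cache over shared memory (no coherence)."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}

    def read(self, addr):
        if addr not in self.lines:       # miss: fill the line from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value         # update own line and memory,
        self.memory[addr] = value        # but never invalidate peers

memory = {0x100: 0}
cache0, cache1 = Cache(memory), Cache(memory)

cache1.read(0x100)           # core 1 caches the old value
cache0.write(0x100, 42)      # core 0 updates it
print(cache1.read(0x100))    # prints 0, not 42: inconsistent value
```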
Zhu: With the IP SoC methodology there’s a big design productivity gain, but there’s no verification productivity gain, because you still need to make sure the IPs are connected correctly and there are no cache-coherence issues. The overall design timeline is still six to nine months before people can tape out an SoC, even if they re-use all of this IP. The reality of the high-tech industry is that people are doing more software design because hardware design takes so long to implement anything. If you can turn around features in two days in software, and it takes a year to add features into hardware, then guess what you’re going to do? If you look at a 2% increase in designers and 50% for verification engineers, it’s not hard to understand why people are going down this road.
Schirrmeister: That’s exactly why things are done differently. The task has shifted, because making sure that everything the device will need to support is verified in advance of tape-out is no longer possible.
Zhu: But part of the problem is that the two-day turnaround comes from knowing what you need to do. If you have access to what the customers and the networking guys want, then you can do a quick turnaround. In hardware, you don’t know what your customers want. Even with the IP, you don’t know what your SoC will need, so you over-verify the IP. Much of what gets verified in the IP the SoC never uses, and the part it does use you may not have verified.
Schirrmeister: IP in itself, at a certain level, may be overqualified. There are companies out there trying to pare the Swiss army knife approach down to just the protocol you need. But it’s also a question not just of running cycles, but of running the right cycles. So how do you define the right scenarios? There will be new technologies for that, such as formal at a higher level. You may formally identify that for this system to work under these conditions, you need all of these checks.
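One way to picture the over-verification Zhu and Schirrmeister describe: a configurable IP’s full verification space versus the slice a given SoC actually instantiates. The sketch below is hypothetical; the IP parameters, mode names, and counts are all made up for illustration.

```python
# A configurable interconnect IP's full configuration space is much larger
# than what one SoC exercises, so cycles spent outside that slice verify
# modes this chip never ships. Everything here is illustrative.
from itertools import product

widths    = [32, 64, 128]
protocols = ["AXI4", "AXI4-Lite", "AHB"]
qos_modes = ["none", "static", "dynamic"]

full_space = list(product(widths, protocols, qos_modes))

# What this particular SoC actually instantiates (assumed).
soc_uses = {(64, "AXI4", "static"), (128, "AXI4", "dynamic")}

unused = [cfg for cfg in full_space if cfg not in soc_uses]
print(f"IP configuration space : {len(full_space)} combinations")
print(f"Used by this SoC       : {len(soc_uses)} combinations")
print(f"Verified but never used: {len(unused)} combinations")
```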



