Experts At The Table: The Future Of Verification

First of three parts: Raising the abstraction level and the effect on verification time; productivity measurements; the need to start verification earlier; concurrent issues; talk about a productivity gap resurfaces.

Semiconductor Engineering sat down to discuss the future of verification with Janick Bergeron, Synopsys fellow; Harry Foster, chief verification scientist at Mentor Graphics; Frank Schirrmeister, group director of product marketing for the Cadence System Development Suite; Prakash Narain, president and CEO of Real Intent; and Yunshan Zhu, vice president of new technologies at Atrenta. What follows are excerpts of that conversation.

SE: Verification still consumes the bulk of time in design. People are doing more verification at more points in the design cycle. Will we ever really get an upper hand on this?
Foster: When we went from gates to RTL, we did improve efficiency and we went to fewer bugs in a design. The constant is between 10 and 50 bugs per 1,000 lines of code. That’s been true in software forever. If you go up a level of abstraction, you still have the same number of bugs, but each line represents many more lines of code below it. In the long run, the only way we will solve this is by moving up to the next level of abstraction, but that’s a very big problem to solve. It’s not about TLM. TLM is a component, but it only defines the communications aspect, not the computation aspect.

SE: So will raising up the abstraction level solve the problem of how long it takes to verify a design?
Bergeron: No, verification will still be 50% to 70% of the design cycle. What’s the purpose of a new design? It’s to create more functionality. If you get 50 bugs per 1,000 lines of code, the rule of thumb is that it will take 50% to 70% of the time to verify it. In order to get new functionality you will have to spend 50% to 70% of that time verifying, regardless of where you are in the design. What we will be able to do is provide more functionality in the same period of time.
Foster: But at least you can be more productive.
Schirrmeister: The true solution to verification will be design. The next abstraction level helps, but what are you designing and where are you sitting? We are seeing more and more people combining hardware acceleration with virtualization models and processor models. It offloads some of the capacity and helps with speed, but if you open up space it will get filled. It’s like filling a hole with water. The water goes everywhere. Verification will remain a problem, but the question is what you’re verifying. You verify more structural pieces that you can rely on, and move the verification into more critical areas. You will be very focused on making sure the interconnect is right and that an IP component like a processor is correct. From a functional perspective, you have to enable all that functionality, but some of that is difficult to do. People do things like verify the interconnect, verify all the components, verify all the processors, and they have specific tests and verification environments for them. Some of the functionality is in software, too. You cannot verify everything at once, so finding the right scenarios and figuring out what can be verified where is the only option. Verification will remain an issue.
Narain: I’m very happy that verification continues to be a problem. You would almost think there has been no progress. Two years back, verification was a big problem. Today verification is still a big problem. It’s not just verification. Everything will continue to be a problem because we are continuously pushing the envelope. We try to build the next generation’s hardware on this generation’s technology. There is always a gap, and engineers have to use their creativity to fill that gap. This is what justifies their compensation. Just last year, a 100 million gate design was an outlier. Today, designs with 200 million logic gates are all over the place. Next year it’s going to be 400 million gates. People are still taping out their chips on reasonable schedules and accomplishing their goals. It’s happening because of a revolution in products and a revolution in methodologies. A lot of SoCs with a lot of re-use have entered the picture. There is a higher level of abstraction in terms of failure modeling, how to break the verification up into various pieces, and how to focus technologies on those pieces. So there is a lot of innovation coming out of the design houses on how to create a methodology that keeps this problem in check by leveraging the tools that are out there and defining roles for engineers. Verification methodology is continuously evolving. The good old workhorses—simulators—are critical and will still be there, and EDA companies are developing tools to keep pace. Everybody is running.
Zhu: The rise in complexity of designs is creating a visibility problem in verification. That’s the biggest problem we have. It’s the same as when a company gets big enough and the CEO no longer has visibility into all aspects of it. When the chip gets big enough, the verification manager or project manager doesn’t have enough visibility into the design and verification process. The designer may not know what the verification team is doing, and vice versa. Management may not know what has been accomplished over a nine-month verification cycle. Or the customer may not know what the vendor has tested. When you buy third-party IP, you’ve integrated the IP but you don’t always know what’s been tested. That’s a visibility problem. I was talking with a friend at a social networking company. They release patches with a two-day turnaround time. They spend a day fixing the bug, and the next day they roll it out to 0.1% of their customers. Only certain customers get the new patch. If it works out, they roll it out to the rest. You know what needs to be tested and what has been tested. If it doesn’t work, they have a mechanism to roll back. I worry that if we don’t change much, we’ll all be retired soon.
Narain: A lot has changed, but the problem is changing, too.
Foster: In 1995 to 1996, what people were talking about was the design productivity gap. It really didn’t happen. We solve these problems.
Narain: Because of this level of complexity, you’re seeing a migration toward doing more verification earlier in the design process. When we talk about verification, we usually talk about functional verification. But today, because of the nature of the design, a lot of what we’re dealing with is implementation-driven aspects migrating into the RT level. CDC (clock domain crossing), for example, is a new problem that has become absolutely critical in the last four years. It has to do with implementation issues. The real driver is constraints on the implementation side, but the problem needs to be verified at the RT level or earlier. When you take this evolution of methodology, new failure points are emerging. Visibility is just one problem. Early verification of all kinds of failure modes—CDC, SDC, power—all of that has moved into the RTL functional space, and all of it is happening concurrently. You have to make sure these issues are identified early so you have a more predictable design process.
Schirrmeister: Things are getting more complex at the implementation level. There are more and more little pieces to push around. But that’s only the bottom line. There’s also the top line. The complexity of the overall system still grows. The design productivity gap we talked about in 1996—I would argue it’s still there. We just managed to keep it in check.
Foster: I agree that we’re continually addressing it. The reason is that if you look at the growth in design engineers over the past five years, it’s roughly 4%. Yet Moore’s Law continues.

To read part two of this roundtable discussion, click here.


