Gaps In The Verification Flow

Experts at the Table, part 1: The verification task is changing, and tools are struggling to keep up with those changes and with the increase in complexity. More verification reuse is required.


Semiconductor Engineering sat down to discuss the state of the functional verification flow with Stephen Bailey, director of emerging companies at Mentor Graphics; Anupam Bakshi, CEO of Agnisys; Mike Bartley, CEO of Test and Verification Solutions; Dave Kelf, vice president of marketing for OneSpin Solutions; and Mike Stellfox, a Cadence fellow. What follows are excerpts of that conversation.

SE: Is the industry managing to keep up with the demands for functional verification, or is verification becoming more difficult?

Bailey: We are running like a bat out of hell to stay somewhat up to speed with the growth in design size and complexity. Many customers are trying to extrapolate out, and they see an exponential curve in the number of verification cycles and the number of tools that they need. They want that to be only a linear growth path. That is a big challenge. Functional verification is the biggest challenge for EDA and semiconductor – period. It is far more expensive than mask costs.

Kelf: Without question. It is also changing. There may be a slowdown in the growth of device sizes, but the complexity and the kinds of things that people are doing have increased. Now it is not just a question of whether the simulator can go faster. It is a question of what new methods you can bring in to tackle new kinds of problems.

Stellfox: I see more things becoming big challenges: software, integration, and some of the new domains such as automotive, where the systems guys are trying to build more safety and reliability into chips. That adds a whole new level of work for verification. The last wave, driven by mobile, didn't have the same level of focus. We are gradually drifting a little bit behind what is needed.

Bartley: The push is for safety, and new markets are demanding safety – ISO 26262 in automotive is an example. This is driving a lot of demand, and a lot of companies have to change the way they do development and verification to comply with those standards. We have seen growth in verification and in the types of verification that we do. It is not just functional verification any more – it is low power, it is multiple clock domains. The techniques and methodologies that we need in order to deal with them continue to increase. In the Mentor surveys two years ago, verification became the most costly aspect of a chip design.

Bakshi: There is a three-to-one ratio between verification engineers and designers, and verification is the most time-consuming and costly part of developing a system. It shouldn't be that way. It means that the verification process needs to be more effective and streamlined. Productivity has to improve. Teams should not have to do all of the things that they are currently doing.

Kelf: It is not just the complexity of the problem. It is the people doing verification. Before, it might have been a handful of verification guys handling a big bulge of verification in the middle of the project. Today, teams are asking designers to deliver better code and verify some of it up front, so that the verification team doesn't have to do as much and deal with all of the bugs. The designers are getting more pressure to do more verification. The integration guys have to do more before they hand off to the back end of the process. So there are many people with different perspectives on verification, and they all have different requirements for the tools. Designers do not want to spend their time writing stimulus for simulation. They are trying to figure out other ways to verify the design without digging into the things that a verification guy would do with UVM.

Bailey: Another thing I see happening is that it is not just the designers doing verification – they have always done some amount of verification so that they do not embarrass themselves. If verification truly is the biggest cost driver, then it should be influencing how design gets done. You should see more 'design for verification,' and design decisions made to drive down the cost of verification. I see that a little bit, and the most public example, although they have not made such public statements, is Marvell's MoChi interconnect. You can take that and implement it as a GALS (globally asynchronous, locally synchronous) design or an FPGA prototype, and it allows them to reuse things at the wafer level with 2.5D-type technology. That starts to minimize the amount of new content that has to be verified. One of the reasons why verification growth is much greater than design growth is that even if you reuse on the design side, you still create more interactions with other pieces, and it is those interactions that cause the exponential growth of verification.

Bartley: It is much easier to integrate design IP than it is to integrate verification and verification IP. We have just completed some work with ARM, where the focus was on how methodologies can make verification easier and on the types of things that designers can do to make their designs more verifiable. Designers have always done a little bit of verification, but we need to find ways to reuse that verification. Reuse of verification has been very low, and that is a big area where we can improve efficiency. Formal has a lot of potential there because, rather than writing a testbench that gets thrown away, if designers can write assertions and constraints and visualize them in a formal tool, then those assertions and constraints are reusable.
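To make that kind of reuse concrete, the sketch below shows the sort of properties a designer might attach directly to an RTL block. The same SVA code can be bound into a simulation environment or handed to a formal tool, where the assumptions become constraints and the assertions become proof targets. The module and signal names here are hypothetical, a minimal illustration rather than anyone's production flow.

```
// Hypothetical FIFO-control block with designer-written properties.
// The same assertions/assumptions can be reused in simulation and formal,
// so the designer's verification effort is not thrown away.
module fifo_ctrl #(parameter DEPTH = 8) (
  input  logic clk,
  input  logic rst_n,
  input  logic push,
  input  logic pop,
  output logic full,
  output logic empty
);
  logic [$clog2(DEPTH):0] count;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)             count <= '0;
    else if (push && !pop)  count <= count + 1'b1;
    else if (pop && !push)  count <= count - 1'b1;
  end

  assign full  = (count == DEPTH);
  assign empty = (count == 0);

  // Constraints on the environment: no push when full, no pop when empty.
  // A formal tool treats these as assumptions; a simulator flags violations.
  asm_no_push_when_full: assume property (@(posedge clk) disable iff (!rst_n) full  |-> !push);
  asm_no_pop_when_empty: assume property (@(posedge clk) disable iff (!rst_n) empty |-> !pop);

  // Designer-written checks: occupancy never overflows, flags never contradict.
  ast_count_in_range:  assert property (@(posedge clk) disable iff (!rst_n) count <= DEPTH);
  ast_flags_exclusive: assert property (@(posedge clk) disable iff (!rst_n) !(full && empty));
endmodule
```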

Bakshi: But that is only at the IP level, right? More and more is happening at the system and software level. That is where portable stimulus comes in. Portability will help with the efficiency and productivity of all of the people involved. And doing it at a higher level will help the whole verification process.

Bartley: That is being standardized. I think Breker, Cadence and Mentor …

Bailey: It is not standardized yet. I chaired the first two versions of UPF, and I can tell you it is shocking that it is only now becoming the state of practice. It takes a long time – often ten years. No matter how quickly you complete the standard, it still takes that long for the industry to evolve, fully adopt something, and make it mainstream.

Kelf: And by that time it is often too late.

Bailey: Portable stimulus is maybe three, five or seven years away.

Kelf: The flow today is that the designers do their design, and then the verification guys start writing a UVM testbench based on the spec. You could argue that this is a good thing because everything is being done in parallel and that speeds things up. But what happens is that the designers hand over their code along with the tests they wrote, and those tests get thrown away. The question is how to get the designers to contribute to the verification process more effectively, so that the work they have done, and their understanding of the design, can still feed into the large testbench development happening at the same time. That is a big problem.

Bailey: That has limited benefit, because the biggest challenges are at the system level and toward the validation end of the spectrum. Individual designers will not have a big impact there because they see only a small piece of the overall puzzle. So the biggest challenge is at the top, where all of the complexity comes together. You also get into multi-discipline issues, because it is not just the software but also control of mechanical aspects, and so on. Then with automotive, driven by ADAS, quality, reliability and safety considerations have become a key driver. The folks in mobile did not have to worry about those issues. There was pain if a bug made you reboot your phone multiple times a day, but nobody died because of that.

Bartley: I am not sure. If you look at other industries, avionics is a good example. It has had DO-254, but the pace of change in avionics is tiny.

SE: The largest change over the past 10 years appears to be that verification has moved from being a point tool to being a flow. That is a big deal to tool providers and to verification teams. Are we beginning to see more specialization?

Bailey: Of course you see that as part of the progression—specialization versus consolidation. It is the same as any advancement in technology: you will always get more specialization. It is a natural outcome. But the problem is that you can create silos in doing that. So then you need a good flow, and you see that today with the drive for shift-left. At the very raw level, the need for more verification means that you need acceleration, such as what comes from emulation or FPGA prototyping. If you could do it all in software simulation, people would love that, because it is an easy, general-purpose platform. But you can't. You have to use acceleration, and when you start doing that, the whole flow becomes an issue. Things that we may think are nits become big issues for customers, because they have to keep remodeling things and making adjustments to go from one platform to another. That is difficult to manage. Part of it can be addressed through methodology, but some of it has to be addressed by the vendors making the flow easier.

Bartley: I am not sure it always moves in the direction of more specialization. If you look at formal, vendors have recently introduced apps, so we see something that was very specialized becoming more general-purpose. The same is true for emulation.

Stellfox: The key is to make things more accessible. Formal can help very early, before the testbench is ready, but someone has to learn how to do that. So we have had good success teaching people good ways to use formal. As we provide more tools, such as formal and emulation, and they become mainstream, you get different ways to measure how well you have done verification. Similarly, if I am using an emulator and running workloads, I have different metrics. The big challenge in becoming efficient is normalizing all of those metrics against what you are trying to verify in the design, in a way that lets you take credit for those things.

Bartley: Yes, normalization or combination of metrics is important. That enables you to make informed decisions. Combining them is really difficult.

Kelf: The coverage models produced by the different tools are all different — especially between formal, simulation and emulation. How do you combine them and make sense of them? We have a long way to go with this, and vendors have a part to play in figuring out a common metric. But as soon as you start to introduce those metrics to different people, end users often say they want this kind of model or that kind of model. More and more, we are seeing different ideas about what those models might look like. That needs to come together.

Bailey: Metrics at the system level are not even defined. Code coverage is structural – there are various ways to see whether something was exercised. Then you have SystemVerilog functional coverage, which requires the user to define the model, and the same with assertions, where the user has to define what captures the information they think is relevant. Go to an SoC, and just on the hardware side—forget going across to software—who will write all of the coverage models for the complete SoC and validate that you have covered everything? It is impossible, and the answer cannot rely on someone defining all of the coverage metrics a priori. We need to give users visibility into what is happening so they can investigate what is going on, along with better ways to stress the system. Did they cover the types of things they think will be interesting? And whenever you introduce a new coverage metric, you find you are not doing as well, quality-wise, as you thought.
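For reference, the user-defined model Bailey describes looks roughly like the sketch below: a SystemVerilog covergroup measures only what the engineer explicitly enumerates. The transaction fields, bins and ranges here are hypothetical; the point is that every bin and cross has to be written by hand, which is exactly what does not scale to a full SoC.

```
// Hypothetical bus-transaction coverage model. Nothing is counted as
// "covered" unless someone thought to list it as a bin or a cross.
class bus_txn_coverage;
  bit [31:0] addr;
  bit [1:0]  burst_len;   // encoded: 0 = 1 beat, 1 = 4, 2 = 8, 3 = 16
  bit        is_write;

  covergroup cg;
    cp_addr_region: coverpoint addr {
      bins low    = {[32'h0000_0000 : 32'h0FFF_FFFF]};
      bins periph = {[32'h4000_0000 : 32'h4FFF_FFFF]};
      bins high   = {[32'hF000_0000 : 32'hFFFF_FFFF]};
    }
    cp_burst: coverpoint burst_len;
    cp_dir:   coverpoint is_write;
    // Cross coverage: every burst length, in every region, in both directions.
    x_region_burst_dir: cross cp_addr_region, cp_burst, cp_dir;
  endgroup

  function new();
    cg = new();
  endfunction

  // Called by the testbench for each observed transaction.
  function void sample_txn(bit [31:0] a, bit [1:0] b, bit w);
    addr = a; burst_len = b; is_write = w;
    cg.sample();
  endfunction
endclass
```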

Bartley: And we never throw metrics away, either.

Kelf: In the communications domain, you take an SoC, stick it on an emulator, and ignore metrics. You take several days of data and just run it through. It is effective. With automotive, you can't do that. ISO 26262 says that you will do diagnostic coverage and detect 99% of the faults. Try running a fault simulator on an SoC. Formal has a part to play in this. Emulation clearly does.
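For context, the diagnostic coverage Kelf refers to is, roughly, the fraction of potentially dangerous faults that the chip's safety mechanisms can detect or control. ISO 26262 defines its metrics over failure rates and failure-mode classes, so the expression below is a simplification rather than the normative definition:

\[
\mathrm{DC} \approx \frac{\lambda_{\text{detected}}}{\lambda_{\text{total}}} \times 100\%
\]

Reaching a figure like 99% means classifying essentially the whole fault population of the design, which is why brute-force fault simulation of a full SoC is so painful and why both formal and emulation get pulled into the task.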
