Verification As A Flow (Part 1)

Experts at the Table, part 1: As more tools are added into the verification flow, how well are they integrated? How well do vendors work together? Are the right problems being solved?

Semiconductor Engineering sat down to discuss the transformation of verification from a tool to a flow with Vladislav Palfy, global manager of application engineering for OneSpin Solutions; Dave Kelf, chief marketing officer for Breker Verification Systems; Mark Olen, product marketing group manager for Mentor, A Siemens Business; Larry Melling, product management director, System & Verification Group at Cadence; and Roger Sabbagh, vice president of applications engineering for Oski Technology. What follows are excerpts of that conversation.

SE: The days of verification being a point tool are behind us, and we are about to add Portable Stimulus (PS) into the mix. How well integrated is the flow?

Kelf: There are many verification engines being used these days, and people have to generate stimulus or testbenches for each of them. There are assertions for formal, UVM stimulus for simulation, and a lot of C tests for emulation. It is very disconnected; people are rewriting what is essentially the same test in different formats. PS is about solving that problem, but it is bigger and broader than that. It is about creating a high-level specification of design intent and deriving from it a series of test vectors for different parts of the verification flow. It is a major paradigm shift for verification, similar to the move from static, directed testing to random test generation.
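
To make the idea concrete, here is a hypothetical sketch (in Python, purely for illustration) of what "one intent specification, many engine-specific outputs" could look like. The `Action`, `emit_uvm_sequence`, and `emit_c_test` names are inventions for this example, not part of the Portable Stimulus standard or any vendor's tool:

```python
# Hypothetical sketch: one abstract test intent, several engine-specific outputs.
# None of these names come from the Portable Stimulus standard itself.

from dataclasses import dataclass

@dataclass
class Action:
    """One step of abstract test intent, e.g. 'dma_transfer' with parameters."""
    name: str
    params: dict

def emit_uvm_sequence(actions):
    """Render the intent as a (toy) UVM-style sequence body for simulation."""
    lines = [f"  `uvm_do_with(req, {{ op == \"{a.name}\"; }})" for a in actions]
    return "task body();\n" + "\n".join(lines) + "\nendtask"

def emit_c_test(actions):
    """Render the same intent as a bare-metal C test for emulation."""
    calls = [f"  {a.name}({', '.join(str(v) for v in a.params.values())});"
             for a in actions]
    return "int main(void) {\n" + "\n".join(calls) + "\n  return 0;\n}"

# The same scenario drives both back-ends, instead of being rewritten twice.
scenario = [Action("dma_transfer", {"src": 0x1000, "dst": 0x2000, "len": 64}),
            Action("check_irq", {"line": 3})]

print(emit_uvm_sequence(scenario))
print(emit_c_test(scenario))
```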

Melling: Verification isn’t a specific thing anymore. It is a flow, both multi-level and multi-engine. It takes more than just a simulation engine to complete your verification job, and you also have to deal with more levels: it is not just IP verification, but potentially extends up the chain into the software stack. PS recognizes the challenges that come with vertical integration and horizontal distribution of testing across platforms. These are some of the compelling reasons for creating a new standard, and they are what PS will attack.

Sabbagh: As an industry we have come a long way in terms of developing a flow rather than a bunch of disparate tools. PS may, in the future, help to make that better and solve some of the issues that people have. But I think there are still issues and some big gaps. Our customers face a huge challenge in developing and maintaining flows, and we put a big burden on the user to make the flow work. We really should define what we mean by a flow. It is not just a process; it is an automated set of scripts with various inputs and outputs at different stages and from different tools. Some of it is related to stimulus, and some is related to coverage or debug. We have to look at all of these aspects. We are in the trenches helping our customers do verification. Customers have to fill in the gaps, and they have to invest a lot in the infrastructure, in maintaining translators and scripts. It is an overhead necessary to get the job done. We see merging coverage as an unsolved problem. Even merging coverage just from formal and simulation is unsolved.
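
As a toy illustration of why that merge is hard, assume (hypothetically) that each tool can export coverage as simple name/hit-count pairs; even then, mapping the names between tools is exactly the part with no standard answer. Everything below, including the `normalize` rule, is invented for illustration:

```python
# Toy sketch of the coverage-merge problem. Real UCIS databases are far richer
# than {point_name: hits}, and the name mapping below is the unsolved part.

sim_cov    = {"fifo.wr_full": 12, "fifo.rd_empty": 0, "arb.grant[0]": 4}
formal_cov = {"top.fifo.wr_full": 1, "top.arb.grant_0": 1}  # proven reachable

def normalize(name):
    # Hypothetical mapping: strip an instance prefix, unify index styles.
    return name.removeprefix("top.").replace("_0", "[0]")

merged = dict(sim_cov)
for point, hits in formal_cov.items():
    key = normalize(point)
    if key in merged:
        merged[key] = max(merged[key], hits)  # covered by either engine
    else:
        merged[key] = hits  # formal-only point; its semantics are debatable

print(merged)
```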

Palfy: It is great hearing PS called the new thing; as a formal guy, it means we are no longer the new guy on the block. We hope PS is not going to be the source of everything, even though some would like that to happen. When it comes to integration, we also have to help our customers do the work. And we have a committee for coverage (UCIS), which is not really working out. But you have to go into the trenches and solve the customers’ problems first. We try to listen to what they need, rather than coming up with new stuff all the time.

Kelf: Speaking as vice-chair of UCIS, you are right. It is not working out, and we have had several discussions over the past few months about why. That brings up a good point: it requires cooperation among all vendors to get these standards working. In PS we are, to a degree, cooperating, and we are getting somewhere. Within UCIS there is no cooperation. Everyone has different models, defined for various types of flows, and nobody can figure out how to bring them together. There is not a lot of business drive behind it, which raises the business question: how much does that weigh in and create discontinuities in the technical flow for purely business reasons? This is something EDA has to come to grips with.

Melling: There are two sides to that business problem. One is the cost side for the vendor. Vendors have made a huge investment in developing these engines, and embedded in those engines are coverage models that have to get mapped. How much can you afford to re-engineer, versus what return do you get from it?

Palfy: To be fair, it shouldn’t be rocket science. There should be a way to consolidate all of that if the committee would work together. In the meantime, people are using custom solutions so they can get on with their daily work rather than wait for a committee to solve the problem for them.

Olen: Having standards is a good first step, or a prerequisite. We have a UCIS standard, but we cannot merge coverage from different sources. We have a UVM standard that is better in terms of migration, but it is still not painless if you want to take a testbench from one simulator to another. Try taking a design that compiles on one of the three major simulators and running it on another one.

Sabbagh: It is not just the design. It is the testbench, the UVM.

Olen: It is the whole thing. We have all of these standards, which theoretically make it possible. Consider debug. Right now, debug is a major problem because we are not allowed to read a certain format (FSDB). So we are trying to figure out how to collaborate on a common debug interchange format.

Sabbagh: We are talking here about flows between vendors, and that puts a huge burden on the user because they typically have a dual-vendor strategy and they want the whole flow to be able to work with multiple vendors. So they end up investing a lot of time. I have done this in the past – building wrappers around tools and making it look as though the two tools are the same thing. But even within one vendor’s flow, there are still gaps. If I find a bug in formal and give it to the designer, he wants to be able to quickly run the same stimulus that caused the bug in his simulator, and then to show that same trace passing on the fixed design. This is more difficult than it should be.

Palfy: We do that for customers. From the formal engines we can generate a waveform that goes into any simulator.

Sabbagh: Yes, that is a utility we have as well: using the counterexample to create a simulation testbench. But even within a single vendor’s flow that offers both a formal tool and a simulation tool, that flow is difficult to get working, or non-existent. That is a gap that users have to fill in. The other problem is with VIP. We haven’t talked about reuse there. People want to reuse their VIP, and it doesn’t work seamlessly from formal to simulation to emulation and to the lab.
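
A rough sketch of what such a counterexample-replay utility does, assuming (hypothetically) the formal tool can dump the failing trace as cycle-ordered signal/value rows. The output format here is a toy, not any simulator's real script syntax:

```python
# Minimal sketch of replaying a formal counterexample in simulation. Assumes
# the formal tool exports the trace as cycle-ordered {signal: value} rows; the
# emitted force script is a placeholder format, not a real simulator command.

trace = [
    {"clk": 0, "rst_n": 0, "req": 0},
    {"clk": 1, "rst_n": 1, "req": 1},
    {"clk": 0, "rst_n": 1, "req": 1},  # cycle where the assertion failed
]

def trace_to_stimulus(trace, timestep_ns=5):
    """Turn the per-cycle value table into a flat stimulus script."""
    lines = []
    for cycle, values in enumerate(trace):
        t = cycle * timestep_ns
        for sig, val in values.items():
            lines.append(f"force {sig} = {val} @ {t}ns")
    return "\n".join(lines)

print(trace_to_stimulus(trace))
```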

Melling: To shift away from the negatives about gaps and what doesn’t work, let’s take a step back and look at the importance of verification and the flow for the customer, and what it is doing for their businesses. Verification is about creating visibility and predictability: when will the product be done, will it meet my quality objectives, and can I do it with the number of people that I have? That is the important flow, the one that matters. How will I get the product out of the door? If I try to do that and bring in multiple variables, I will have integration issues to deal with. That is the nature of the beast, even when there are standards. We cannot lose sight of the end game. New standards such as PS may help address that problem: how do I stop duplicating effort, such as rewriting tests to work across platforms? We are trying to take care of the gaps that we can in the overall flow, but we will not come together as competing vendors and ask how to make our stuff plug in seamlessly with each other’s. We are trying to provide a solution to our customers that will help them get their job done.

Sabbagh: You are talking about a tool flow or scripting, but now we are talking about a methodology for verification and signoff and the management of your resources. That is something that a tool or language will not solve. That comes with years of experience delivering verification on large SoCs.

Melling: If you look at verification management, the things pushing us forward in terms of predictability of delivery are things such as continuous integration: the idea that I am building things, and as I build them I am testing them. As I go through the process I see less variation, fewer surprises, and more visibility into what is happening, and those are the things that make the difference in the customer’s success rate. We are approaching our tool development from that perspective as well. It tells us the kind of visibility that they need.
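
A bare-bones sketch of that continuous-integration loop, with placeholder tool commands (`compile_design` and `run_regression` are inventions for this example, not real EDA invocations):

```python
# Bare-bones sketch of a verification CI loop: build what changed, run a smoke
# regression, and append to a trend log so surprises surface early. The shell
# commands are placeholders and will simply fail if run as-is.

import subprocess, json, datetime

def run(cmd):
    return subprocess.run(cmd, shell=True, capture_output=True, text=True)

def ci_cycle(commit):
    if run(f"compile_design --rev {commit}").returncode != 0:   # placeholder
        return {"commit": commit, "status": "build_failed"}
    result = run("run_regression --suite smoke")                # placeholder
    record = {
        "commit": commit,
        "status": "pass" if result.returncode == 0 else "fail",
        "when": datetime.datetime.now().isoformat(),
    }
    with open("trend.jsonl", "a") as log:   # the trend is the visibility
        log.write(json.dumps(record) + "\n")
    return record

print(ci_cycle("abc123"))
```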

Sabbagh: I am not trying to be overly negative. The industry has come a long way, and the tool vendors are doing a lot to create flows that work. But there are still gaps. Tool flows alone will not solve the overall verification process and its management. You still need a lot of personnel with the experience and expertise to manage the project and to know how to use metrics. At what point in the project do you start doing certain tasks?

Palfy: It is a methodology and you need to apply the right methodology to the right type of problem.

Melling: Methodology from a product management perspective is about specifying extensibility of the product. Customization is inevitable. The customers all want to do different things, and you have to be thinking about what capability you are delivering and how the customers will want to extend it. Where does it need to be extensible? What data do they want to have access to? And what data do they want to give access to?
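
One way to picture that kind of designed-in extensibility is a hook-point API, sketched hypothetically below; the class, event, and field names are illustrative, not any vendor's actual interface:

```python
# Hypothetical sketch of designed-in extensibility: the tool exposes defined
# hook points and a chosen data view, and customer code registers against
# them rather than scraping tool output.

class VerificationTool:
    def __init__(self):
        self._hooks = {"test_done": []}

    def register(self, event, callback):
        """Let customer scripts extend the flow at defined points."""
        self._hooks[event].append(callback)

    def run_test(self, name):
        result = {"test": name, "status": "pass", "coverage": 0.87}  # stubbed
        for cb in self._hooks["test_done"]:
            cb(result)  # customer code sees exactly the data the tool exposes
        return result

tool = VerificationTool()
tool.register("test_done", lambda r: print(f"{r['test']}: {r['status']}"))
tool.run_test("smoke_001")
```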

Kelf: I have seen a change over the past few years in the way that end users interact with EDA vendors. PS is interesting because we have this large committee with a number of end users and the EDA vendors. I have noticed that the customers tend to leave work such as tool interconnection and cooperation to the EDA vendors more than they used to. Because of this, the EDA vendors are in competition. Each of us is trying to create our own flows, with small companies trying to be agnostic to the tools used in those flows. It is hard to build the connections unless there is demand from the end users. Unless the customers push for that, the big EDA vendors will create their own monolithic flows and the small ones will try to fit across them. This creates a dynamic where people continue to cobble things together and have to do customization. This is a problem. The end users need to create at least some pressure to make vendors work together. In UCIS we have discussions about basic verification problems that really should have one or two possible solutions, yet many variants exist because each EDA vendor has gone down a different path from the others. Why is that? Why can’t we have one model? For UCIS, there isn’t enough customer pressure.

Palfy: It has to be customer-driven. A good example is building a flow for safety, such as what’s required for automotive these days. We can hook onto certain simulators and be simulator-agnostic, and we have a flow. There are some problems that can be solved with formal engines, but there are others where we can only help a simulator. There is a clear customer need for that. In this sense, we are not competing. We are helping each other. At the end of the day, the money comes from solving customer problems.

Olen: We used to architect things, such as stimulus and debug, to work with our own tools. Today we are investing in stimulus, analysis and data mining, and verification management and metrics to be as engine-independent as possible. Part of that is because we have to do that anyway just to support simulation and emulation, and that will shortly include a prototyping system. It has led us down this path, and our customers don’t use just one simulator. Most of them use two, or possibly three. They have good reasons for that, so you cannot be successful with a next-generation technology if you only support your own technology.

2 comments

neil johnson says:

with the introduction of portable stimulus, the steady talk of requirements tracking and the renewed discussion of verification flow, it’d be nice to see (a) more concrete flow visuals/diagrams/maps/etc that show how tools and techniques are meant to fit together and (b) more top-down orchestration that shows how tools are developed to complement and improve the flow. this’ll sound a bit cynical, but the lack of (a) suggests vendors don’t entirely understand how tools form a verification flow, while the lack of (b) shows there is no concrete flow anyway and the development of point tools continues…

Theodore wilson says:

I have a positive feeling about this space. I would be surprised if block- and system-level verification teams, emulation and validation teams, and formal experts stay siloed in any real way for much longer. As these teams become integrated, or increasingly share staffing, the EDA vendors will respond and the tools will integrate more cleanly. I have never found tool integration or porting to really be the bugbear it is made out to be; other project problems dominate. But hard problems remain unsolved. Real tracking of heterogeneous coverage under design revision is not solved. Intelligent scheduling of heterogeneous test compute is not solved. Teams will keep doing something provisional and talking to their vendors. All of this will drive a lot of innovation and drive down the carrying cost of chip development. It seems the pace is accelerating and the appetite for better practice is growing.
