When Verification Leads

Experts at the Table, part 1: Would an executable requirements document transform verification or design? Experts have differing ideas.


Semiconductor Engineering sat down to discuss the implications of having an executable specification that drives verification with Hagai Arbel, CEO for VTool; Adnan Hamid, CEO for Breker Verification; Mark Olen, product marketing manager for Mentor, a Siemens Business; Jim Hogan, managing partner of Vista Ventures; Sharon Rosenberg, senior solutions architect for Cadence Design Systems; and Tom Anderson, technical marketing consultant for OneSpin Solutions. What follows are excerpts of that conversation.

SE: Portable Stimulus may enable significant change. It defines an executable specification that precedes many design decisions. It could fundamentally change how we think about verification—no longer a lagging process but a leading one. Is it possible that verification will lead design?

Anderson: There are two dimensions to that. One is that verification is influencing the way that people design, at least to some extent because if you build structures that cannot be verified, you will not want to fabricate them. So there is some evidence in the industry that people are perhaps simplifying their bus structures, simplifying some aspects of the design so that they have a chance to verify it. This is not a major influence, but there is some. It is similar to design for test (DFT)—design for verification. DFT modified the way people designed to make the chips testable, and we are seeing a similar wave for people modifying chips to make them verifiable. It is not as profound as DFT, but there is evidence. The second dimension, and this always comes up when we talk about the Portable Stimulus Standard (PSS), is what is the possibility of a PSS model being the basis for the design? Could it become the high-level synthesis model? This would be very interesting because then what we think of today as verification code would become the source of the design. Maybe someday this might be a possible direction.

Hogan: Let me look at this in an even more abstract manner. I believe, and so do a number of companies that I have invested in, that it is not domain-specific design for processors and logic devices, but specific behavior that I am trying to design. Adaptive training is an example—there is no way to predict how to verify that. That behavior of the system is the thing you will actually test it against, and that needs to be the design element. So it could be the input description to a system-design compiler. The adaptive training example is an interesting problem because the training of an AI system is what takes all of the time. To actually get it right you have to go back and redo it a million times, so trying to build specialized hardware for that is a verification conundrum. So, yes, it is possible that you may want to describe that behavior, and that will drive design.

Hamid: I will use another example company. They have a document that lays out all of the marketing requirements. It is full of flow charts that show the use cases the product will support. This is what they sell. They can use this to show people the capabilities of their chip, and that is what the team has to verify. So when we look at PSS, it lets us model those flows directly and then allows us to push down to the subsystem and the IP blocks what needs to be verified. IP suppliers burn thousands of cycles testing things that may not matter, and then miss the ones that do matter. That is why we have bugs in systems. We need to push down to them what it is that we expect them to do.
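The kind of use-case flow Hamid describes can be sketched in PSS itself. The sketch below is purely illustrative: every name (`video_ss_c`, `capture_frame_a`, and so on) is invented for this example and does not come from any product discussed here. It shows the general shape of how a top-level scenario decomposes into sub-actions that tools can then retarget to different platforms and push down to IP-level verification.

```pss
// Hypothetical PSS sketch of a marketing use-case flow.
// All component and action names are invented for illustration.
component video_ss_c {
  action capture_frame_a { }   // grab a frame from the sensor
  action encode_frame_a  { }   // compress the frame
  action store_frame_a   { }   // write the result to memory

  // The top-level use case -- the flow a requirements document
  // might draw as a flow chart. This is the scenario the system
  // team verifies, and the sub-actions define what each IP block
  // is expected to do within it.
  action record_clip_a {
    activity {
      repeat (10) {
        do capture_frame_a;
        do encode_frame_a;
        do store_frame_a;
      }
    }
  }
}
```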

Arbel: When you look at automotive, this process may go a lot deeper. There, you have to follow processes where your design functionality is actually translated to requirements, and those requirements have to be verifiable. So in some sense it is the verification team that defines what the design will look like at the end. When you talk about this kind of system, there is no choice. You can call it verification or system design, but IP designers have less and less effect on this process.

Rosenberg: Today, people do PSS modeling and they have some questions when they read the specification, and find areas that are unclear. They can go to the architect and ask those questions, and this is good. Sometimes it may be that they didn’t understand the spec and sometimes it may turn up something that had not been considered.

Hogan: They interpreted something differently.

Rosenberg: Yes, and sometimes the architect really doesn’t know the answer and they need to think about it further. Verification engineers always have good questions—that is nothing new, but that will now happen much earlier in the process—even before the testbench has been created. So will this remain something that we call verification or not? Architects want to have this capability and they may want to do it themselves. Some of them already use UML activity diagrams to describe things. We didn’t invent that, but PSS is very tuned for this task and has a lot of expressiveness to it. You can try things. You can define the rules and you can put the definition into a formal form, define the requirements, and then you can experiment with them. What are the boundaries? Does it do what I want? Can you build quality in from the very beginning? This will become possible with all platforms. Currently, this task is performed by verification engineers, but it may become the role of the architect. They can visualize things, they can explore the architecture, they can look at the response from a tool and consider the options to make sure it is high quality before passing it on. The problem in system-level verification is the English language. It is vague and unclear—but it will now become more formal. From that you can add verification and validation, you can add implementation details, and some of those may be in PSS and some of them may sit under PSS. This will start the flow, and we see this starting today.

Anderson: UML is primarily a design formalism. PSS is a verification formalism, and yet they are very closely aligned. So is there really any fundamental difference between a design model and a verification model?

Rosenberg: There is some difference. This comes about because design looks at topology and the structure of the design and how it is going to get built. Flows do not always preserve hierarchy, and there is no block-level refinement taking place (top-down) when you do design and build things from blocks (bottom-up). You just describe high-level scenarios, and their structure may not match the end result of the detailed implementation of your design. You can do a lot of things with it, but I am not sure if there will be synthesis of PSS to produce a design structure with components.

Anderson: That is the question we started with. Is there a possibility of synthesis that can do all of that for you? Can synthesis advance to the point where it can take a very high-level model, call it verification, call it design, and produce a fully elaborated design from it? That is a huge leap.

Hogan: Verification always has been about coming up with ideas. First, I model them. Then I optimize around the requirements. That is a verification task. It has become more complex. From a business point of view, whoever owns the models wins. If you are the de facto model builder, or owner, you will win. So, what is a model? It must have enough detail to allow us to consume it and optimize on it. But it cannot be too heavy. Otherwise, we have too much data and will never be able to get through it. This is similar to the way that people think. If you look at the human brain, we start pruning synapses when we are about 10 years old because there is just too much stuff to remember. You start to focus on what is important. That is part of evolution. We are at one of those points in system design. Models are sticky, but simulators will always come and go. There will always be better simulators. There will always be better pieces of hardware. But whoever has the most sophisticated, elegant, lightest-weight model that gets the job done is the answer. Therefore, once you have models that can do that, potentially you can synthesize.

Anderson: Optimizing requirements is something that is more automated today than it used to be. You used to have to lay out your gates, you had to worry about everything. Logic synthesis took a piece of that and it could go higher.

Hogan: Synthesis is not compilation. Compilation provides a consistent and predictable result. So to the extent that you can, you always want to compile. There will be a lot of stuff that you can compile. Therefore the verification burden should be less. So I invest in models and compilation technology.

Arbel: But it gets to a higher abstraction every time. You have to, otherwise there are too many details.

Hogan: That is the challenge. The models are the challenge.

Hamid: ISO 26262 shows us, at least the systematic testing part of it, that we have to list the requirements. And then we are forced to list the use cases that will show that I have met that requirement. We are being forced down that path.

SE: You seem to be suggesting that we have a model that lists the requirements, and maybe we have matched that to the needs that come from the marketing team, but if you now use that as the input to synthesis and get hardware from it, we have to re-examine the verification problem. There has to be something against which you verify. It is not possible to drive everything from a single model. PSS is a partial model that does not have to be complete when the process starts. It can be refined throughout the process, and sophistication can be added during development. So why would you want to start thinking about it as being the source of the design?

Rosenberg: It is also abstract in the way that you can consume it. I don’t want it to have all of the implementation details in it. It should stay abstract, to be as easy as possible to grasp, to be able to see the necessary flows. It is not just one item at a time. It is scenarios. Here are the high-level flows. These are the requirements.

Hamid: I have been asked if I can synthesize designs from it and I can say that we don’t. But the notion holds together in a theoretical manner. Why not synthesize from a PSS model? That just moves the whole process up one level. If you can prove that it is right, then everything else is correct by construction.



