Incremental System Verification

Experts at the table: Part 2. How does a PSS model get verified and who will create that model? What happens when models extend beyond the specification?


Semiconductor Engineering sat down to discuss the implications of having an executable specification that drives verification with Hagai Arbel, chief executive officer for VTool; Adnan Hamid, chief executive officer for Breker Verification; Mark Olen, product marketing manager for Mentor, a Siemens Business; Jim Hogan, managing partner of Vista Ventures; Sharon Rosenberg, senior solutions architect for Cadence Design Systems; and Tom Anderson, technical marketing consultant for OneSpin Solutions. Part one can be found here. What follows are excerpts of that conversation.

SE: How do you prove that a requirements document, as described by Portable Stimulus (PSS), is correct?

Hamid: We use massive C libraries every day in software development. We write programs and little test cases and we know that the building blocks work, so we trust that the compiler does the right thing. The whole verification/validation load decreases significantly the more you automate the compilation.

Hogan: Going back to automotive, a certain level of image processing is required, so you create a Harris corner model. This [technique] is pretty universal. We do not need to do that more than once. There will be libraries of these [Harris corner models]. Now, let’s use the logic analogy: a design is going to be a compilation or assemblage of a bunch of things with some glue that binds it together. The glue makes it special, it can be optimized, and that is the design’s value add. We are just abstracting the design model up a level. It is not easy, but you always need to become more abstract. The models are the same, except that you have a debug environment in which you can optimize.

Arbel: I don’t think it is an either/or. If you look at an SoC today, one or two IPs are new. For those, you drive the process or architecture and logic design, you optimize them, and verification has to validate every corner. But the rest of the SoC is a commodity, and for that, the only verification comes from a system perspective. Over the past five years we haven’t seen a design that is not like this. Nobody starts from scratch.

Rosenberg: With a PSS model, you can emulate the latency of each activity and speculate about those latencies, which could be based on the memory and the traffic that you need to process. What does this buy you? You can visualize the scenario, and when you do that you get the order in which things will get executed. Let’s say you want to try for a meaningful overlap between two actions. One may be software and the other hardware. They could be of different durations, and a meaningful overlap may mean starting at the same time and doing 100 of the short task and just one of the long task. If you want to see how dense the test will be, you can look at it and see that this is what the scenario will provide. This is speculation—it is not accurate, but you can see if there will be overlap and tune things when you run it on a real system. Then in the next generation you will have a better model. You are building your regression before the system is ready, and you tune it so that when the system is ready, you will have a very good regression that can be run. You can do this for a number of physical dimensions, such as power, that you can put on top of the model, and you can tune the scenarios to these activities and corner cases, but you start by speculating. That requires running them to see where things are, and you can see the waveforms and decide which tests you want to save.
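To make the overlap Rosenberg describes concrete, here is a minimal PSS sketch. The action names, the durations implied by the comments, and the counts are assumptions for illustration; they do not come from any model discussed here.

```
component overlap_demo_c {
    // Hypothetical leaf actions; in a real model each would carry a
    // latency estimate supplied by the tool or the model writer.
    action short_sw_task_a { }  // short software activity
    action long_hw_xfer_a  { }  // one long hardware transfer

    action overlap_scenario_a {
        activity {
            // Run both branches concurrently: 100 short tasks against
            // a single long transfer, forcing a meaningful overlap.
            parallel {
                repeat (100) {
                    do short_sw_task_a;
                }
                do long_hw_xfer_a;
            }
        }
    }
}
```

A tool can visualize this activity graph before anything runs, which is the speculation step; the latency numbers are tool-level annotations rather than part of the sketch, and they get tuned once the scenario runs on a real system.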

Hogan: I want to test elasticity. I want to know the extent to which I can push things around. This sounds like a valuable thing to do. I do not want to write a hardware description to talk about what I want the system to do. I want the behavioral system to do that. So, refinement—yes. But always go back to behavior. What can we do to ensure that behavior is what we want? So, we have to write new models that are meaningful and not take too long to write, that are adequate to describe the function that we want, that has not already been described in a library element. We have tools that help us to assemble and optimize those. Not everyone is ready for this. The default of human behavior is to stick with what you know. We will see some false starts, but the dialog is important, and we cannot lose sight of the top-level goals.

Rosenberg: PSS is not rocket science. We are describing state machines, available resources and requirements, behaviors and their dependencies. So, I hope that it will be adopted, and some people really do want to adopt it because they understand what it is. Now the question is what you can do with it. Not all of the system is being modified, but because we have the behaviors in an abstract way, we can very quickly customize scenarios that verify only the pieces that are new. Inside the model may be dependencies that you had not thought about, and the model has this information. [The model] tells you what you need to try. This is a ranking that was not possible in the past. You can do it even before running the regression, before you commit the time. The nice thing about PSS is that you can solve everything at once. You know what is in each scenario, and you can check whether you want to run it or not. You may know that you have already done that in a previous test. We used to do that as runtime coverage, but now we have generation-time coverage, and you do not have to wait for the simulation to finish. That is value—right there.
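One way to "customize scenarios that verify only the pieces that are new" is a type extension that narrows generation to the new hardware. The sketch below is hypothetical; the component, the action, and the premise that channels 8 through 11 are new are all invented for illustration.

```
component subsys_c {
    action xfer_a {
        rand bit[3:0] channel;  // 16 channels in this hypothetical design
    }
}

// Channels 8..11 are the new logic in this spin of the chip, so bias
// every generated scenario toward them without touching the base model.
extend action subsys_c::xfer_a {
    constraint new_logic_only_c { channel in [8..11]; }
}
```

Because the constraint is solved at generation time, a tool can report which scenarios exercise the new channels before any simulation cycles are spent, which is the generation-time coverage Rosenberg contrasts with runtime coverage.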

Anderson: This is certainly an example of the way in which verification will influence design. You are playing with a level of parameters that affect what the hardware does. This is another dimension of the general problem.

Hogan: Let’s give some respect to the verification engineer. Sorry—that will not happen. The less I know about verification, the happier I am as a chip design guy. I just want it to happen. I want it to happen fast, and I am tired of budgets going up.

Rosenberg: I think it will build respect for the verification engineers. This approach of formally describing what can take place and what is legal will mean there are fewer silly mistakes. Instead of going to the architect and asking why something doesn’t work, they will tell you what you did wrong.

Hogan: I am agreeing with you. I am not sure if that is called verification; I think that is called design.

SE: Who will be creating the PSS models? Will this extend the existing verification group’s role or does the design team take responsibility?

Hogan: Do not expect the IP guys to do it.

Arbel: They will call it the architecture group and that is it.

Rosenberg: That is the right thing. It is hard to say what will happen, but the right thing is for the architect to start with it. They will adopt this technology or some other technology, but I think PSS is very good for the job it was designed to do. They will do the testing, do the examination and proof of concept, and then deliver the formal spec to the rest of the team.

Olen: It is more straightforward to derive a declarative behavioral specification from an architecture than it is to write a program. That is part of the beauty of the whole thing. Going from architecture to PSS is more natural than writing programs.

Hamid: No question about that, but the way our industry is structured, we are a ways away from that. Design verification engineers will still be the ones writing these models for the next few years, and hopefully the tools will make it easy enough that eventually it will move up to the architects. They will provide the top-level flows, then someone will fill in the details—so we will not be putting the verification team out of a job any time soon. Should it be the design guy? A smart engineering manager once said to me that it could be a design person, so long as he is building a verification model for anything but his own block.

Rosenberg: This is a good point. You also need to use PSS in a certain way. PSS gives you enough room to go and do some programming that the tool cannot analyze. So, there is an element of how you use PSS. If you use it in the declarative style as intended, then yes—a tool can do all of the analysis. If you insert arbitrary procedural code…
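The distinction Rosenberg is drawing can be shown in a few lines of PSS. In this hypothetical sketch, the constraints are declarative and fully visible to the solver, while the exec block holds procedural target code (the configure_buffers() call is invented) that a tool can execute but cannot analyze.

```
action cfg_a {
    rand int num_buffers;
    rand int buf_size;

    // Declarative: the solver sees these relationships, picks legal
    // values, and can reason about the scenario at generation time.
    constraint ranges_c { num_buffers in [1..8]; buf_size in [64..4096]; }
    constraint fits_c   { num_buffers * buf_size <= 8192; }

    // Procedural: a target-template exec block. The tool substitutes
    // the solved values, but whatever configure_buffers() does inside
    // is opaque to it: no analysis, no back-propagated constraints.
    exec body C = """
        configure_buffers({{num_buffers}}, {{buf_size}});
    """;
}
```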

Hamid: Then we have to write smarter tools.

Rosenberg: The intuition is that you cannot put constraints on an arbitrary code function and expect the inputs to be adjusted so that you get a result. A SystemVerilog person will ask if that is even possible. But there is a scientific proof that you cannot analyze arbitrary procedural code and get to where we want to go. That is an NP-complete problem. We can blame it on the vendors, but…

Hogan: The state of the art is what it is. Where do we want to be? We can play around with words like synthesis, intuition, infer—I want inference engines, and we can have that. Machine learning, AI—it will provide us with intuition. What is that? It is almost an idea. It is experience, data that you relate to, from which you can make a leap of faith that says it will probably work that way. Sometimes you just have to take that leap of faith. There is no reason why code cannot do that. So, we will get to a point where we will have a declarative statement of what the behavior ought to be, and the system will infer where it needs to go.

Hamid: Which is essentially what we are building.

Hogan: That is what we want and PSS, which is a misnomer—we should come up with a different name—

All: Too late!

Hogan: Once you have a TLA, it is over. But, I want to hear how people are going to do things like that because my personal goal is to reduce the barriers to innovation. How do we do that? If I need a bunch of PhDs to help me with my latest design—not acceptable.

SE: We have talked about the system assembly problem and it sounds as if you are not separating the notions of design and verification. It becomes a definition of what is required and then tools perform the selection and integration of the IPs.

Hamid: Certainly, that has to be the first step. We have to evolve, we can’t just mutate to a different place. We have to automate what is being done manually today and that provides a place from which to go to the next evolution of the process. Today, we have design IP, we have verification IP for interfaces—we will start to see PSS IP, which may be called PIP, and I think we will see design IP folks having to deliver PIP with their IP.

Hogan: It should be simple to change a system from having two cameras to three.

Hamid: That will introduce new use cases that had not been discussed before. Some architect will have looked at all of the things he wants to be able to sell in a product and has on the chip, and will want to reduce it down to the minimum delta that the hardware and software teams have to go through. But the big delta is the new use cases. We need to define them and test them, and we can do that ahead of time.
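Hogan's camera example maps naturally onto PSS resource pools. In the hypothetical sketch below, going from two cameras to three is a one-line change to the pool size, after which the solver can generate concurrent use cases that were unsatisfiable in the two-camera configuration.

```
component camera_subsys_c {
    resource camera_r { }

    // Platform change: was 'pool [2]'. With three instances the solver
    // can legally schedule three concurrent captures, a use case that
    // simply could not be generated before.
    pool [3] camera_r camera_pool;
    bind camera_pool *;

    action capture_a {
        lock camera_r cam;  // each capture claims one camera exclusively
    }

    action stress_a {
        activity {
            parallel {
                do capture_a;
                do capture_a;
                do capture_a;  // only schedulable with the third camera
            }
        }
    }
}
```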

Olen: We have customers that have already started to ask for the verification IP. You deliver UVM test sequences with your verification IP, so why not deliver PSS models? We don’t have it today, but it will become a priority.

Anderson: I think it will happen.

Rosenberg: Several vendors are already delivering PSS IP for various solutions. PSS starts before the split between verification and design. It is a representation of the specification, and as such you just need one. It had better be accurate and it had better be proven, because that is your starting point. One spec. You can compare implementations, one to the other, but when you start with a specification, it is before the split happens. So, it might be different. Yes, we need to have that comparison: here is one implementation in one abstraction, and here is a more abstract implementation that I can compare it to, to make sure the designer did follow the spec. But the spec itself should be one, and it should be as clear as possible.


