Verification Unification

Experts at the Table, part 2: Strategies for using Portable Stimulus to drive formal and simulation, as well as the common ground with coverage.

Semiconductor Engineering brought together industry luminaries to discuss the role that formal technologies will play with Portable Stimulus, and how it may help to bring the two execution technologies closer together. Participating in this roundtable are Joe Hupcey, verification product technologist for Mentor, a Siemens Business; Tom Fitzpatrick, verification evangelist for Mentor and vice chair of the Portable Stimulus Working Group (PSWG); Adnan Hamid, chief executive officer of Breker; Roger Sabbagh, vice president of applications engineering at Oski Technology; Sean Safarpour, director of CAE for VC Formal at Synopsys; Tom Anderson, who was product manager at Cadence at the time and secretary of the Portable Stimulus Working Group; and Ashish Darbari, director of product management for OneSpin Solutions. What follows are excerpts from that conversation.

SE: Earlier in this discussion, several opportunities were unearthed for Portable Stimulus (PS) and formal verification to work together. What can users do today?

Hupcey: In reality, formal is an analysis of the design and PS gives you a reflection of the testbench. That is where it is today and that is fine. It is not a bad thing. Practically speaking, there are different tools in the toolbox that an end user could download and start applying, but that is why I am glad we are having this discussion. There is a lot of opportunity for future work, but what can be done today? Today there are two different tools in the toolbox and maybe they could be merged efficiently.

Hamid: The argument against that is that first we have to get the world to adopt it and provide solutions for the big problems, such as how they get from simulation to emulation to post-silicon. Maybe this is a problem for tomorrow.

Safarpour: On the formal side, the context of portability and reusability has been thought through, and when you are developing your assertions or even the targets, you reuse them. Assumptions become assertions to make sure you are not using false assumptions, etc. We have seen people who used covergroups in the past now using cover properties, so they can be used across both simulation and formal. That concept of portability and reuse is there on the formal side, and for PS, instead of generating the stimulus we could generate the assumptions, the constraints. There would be a lot of value, but I don’t know how that maps into coverage, cover points or cover targets, and the assertions as well. Constraints are only one-third of the problem for formal.
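
(As a minimal illustration of the reuse Safarpour describes, consider the SystemVerilog sketch below. The req/gnt handshake and the four-cycle bound are invented for illustration; the point is that the same property text can serve as a checker in simulation, as an environment assumption at a formal block boundary, and as a cover target that both engines understand.)

    module handshake_props(input logic clk, rst_n, req, gnt);
      // One property, written once: a request must be granted
      // within four cycles. (Signals and bound are hypothetical.)
      property req_gets_grant;
        @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt;
      endproperty

      // Bound as a checker in simulation, or as a formal proof target.
      CHK: assert property (req_gets_grant);
      // At a formal block boundary, the identical property would instead
      // constrain the environment: assume property (req_gets_grant);

      // The same cover target gives reachability feedback to both engines.
      COV: cover property (@(posedge clk) req ##[1:4] gnt);
    endmodule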

Hamid: We are talking about randomly generated directed tests. They are self-checking tests. We know what the results should be, so the checks that we are doing turn into your coverage or assertions.

Sabbagh: It is the same problem in simulation. PS just gives you the way to express your stimulus in some high-level fashion, but you still need the testbench checkers.

Hamid: The value behind PS is that it provides stimulus, checks and coverage.

Hupcey: When I started to do some research into PS, my preconception, which was obviously way too narrow, was that this stuff was a big test-generation machine. While that is valuable, it is way more than that. Test generation just scratches the surface.

Anderson: There is test generation, but there is a lot more.

Fitzpatrick: Going back to formal, when you write assumptions or assertions it is about, ‘This must happen before that, or within N cycles.’ A PS model has similar concepts, particularly when it comes to scheduling. While we don’t have a native ‘within N cycles,’ you could model something that counts and add a constraint that it couldn’t get greater than N. Would you use the same model to generate assertions or to do formal checking of the design? It may be that PS could be used to create a set of assertions about the design, because it has the constructs to allow you to group things together and to capture behaviors.
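
(A sketch of the counting idiom Fitzpatrick mentions, with hypothetical start/done signals and an arbitrary bound: lacking a native ‘within N cycles’ operator, an explicit counter plus a constraint on its maximum expresses the same scheduling requirement in a form both simulation and formal can check.)

    module latency_bound #(parameter int MAX_LAT = 8)
                          (input logic clk, rst_n, start, done);
      // Count cycles from start until done completes the operation.
      int unsigned cnt;
      always_ff @(posedge clk)
        if (!rst_n || done)         cnt <= 0;
        else if (start || cnt > 0)  cnt <= cnt + 1;

      // "The counter never exceeds MAX_LAT" stands in for
      // "done must arrive within MAX_LAT cycles of start".
      A_BOUND: assert property (@(posedge clk) disable iff (!rst_n)
                                cnt <= MAX_LAT);
    endmodule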

Darbari: You are getting your assumptions and constraints, you are getting your coverage points, and you are getting your checkers. Those three aspects, and the ability to exercise them in formal or run them in a testbench environment on an emulator target or even in an FPGA, are great. When we talk about formal’s usage, we say it could be done from the block level to the IP level to the system level, or you could leave them running on an emulation target. I have to ask: where is the formal?

Fitzpatrick: The problem is—and this is why we call it PS—we expect the output of the PS tool will be stimulus that runs on a testbench, and the whole thing about formal is that there is no testbench.

Darbari: That is not true.

Sabbagh: It’s not a testbench.

Darbari: Some really sophisticated formal testbenches are built for seriously high quality verification.

Sabbagh: It gets to the difference between white-box formal, where you are proving that this FIFO will never overflow, versus end-to-end formal, where you have a testbench that is like a reference model or a scoreboard checking the entire function of the design, and you want to sign off on that design with formal.
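
(In SystemVerilog terms, with invented push/pop/count signals, the white-box invariant Sabbagh mentions is a one-line proof target; end-to-end sign-off would instead wrap the DUT with a reference model or scoreboard.)

    module fifo_whitebox #(parameter int DEPTH = 16)
      (input logic clk, rst_n, push, pop,
       input logic [$clog2(DEPTH+1)-1:0] count);
      // White-box proof target: the FIFO never accepts a push when full.
      A_NO_OVERFLOW: assert property (@(posedge clk) disable iff (!rst_n)
                                      !(push && !pop && count == DEPTH));
      // End-to-end formal would instead check function, such as every word
      // pushed comes out once and in order, via a reference model bound
      // alongside the DUT rather than a local invariant like this one.
    endmodule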

Hamid: A big opportunity for formal, if it can be pulled off, is that PS is designed for mortal engineers to be able to code scenarios. If those scenarios can be fed into formal engines that get you the information you need without needing specialists, that would matter, because I can tell you that the people who want this kind of technology do not think about formal. They are too busy trying to get their chip integrated and through bring-up.

Darbari: You have many more simulation engineers than formal users.

Hamid: We have to get the chip to wake up, run a few cycles in emulation. We are too busy doing this stuff.

Fitzpatrick: The people who will create a PS model are going to be thinking about those kinds of issues. I don’t think an arbitrary PS description could be translated by itself into a set of formal stuff.

Hamid: Why not? I suggest that it could be done. We have a formally analyzable model, so why couldn’t someone decide to be a PS tool vendor for formal, read PS, and turn it into assertions and constraints?

Fitzpatrick: It would be a slightly different model for formal than for stimulus. That is my initial impression.

Safarpour: The two technologies do work differently, and formal, in principle, is much more tied to requirements than specifications—this is the way we do formal. Simulation is a very operational way of doing it. Both have their reasons to be, the right reasons at the right time. The level of abstraction is key. If the PS specification is too abstract and can’t be mapped to the RTL specification for the SoC, then the difficulty is how you build those mappings. That is a challenge.

Hupcey: Is there a synthesizable subset that could be created?

Anderson: I don’t think we know for sure. This technology is so new that there aren’t many examples out there. The notion that you could take some aspects of behavior from a PS model and put them into assertions that could be checked against the design itself sounds feasible. We are describing the testbench and the test structure, but if you look at a typical model there is overlap with the design. You see sets of actions that correspond to blocks in the design, and they talk to each other. Maybe you could generate an assertion that these two blocks are supposed to do something together and check it against the RTL implementation. That doesn’t sound too hard. How effective or useful it would be, I am not sure. Today I do not see people writing PS models where they are modeling constraints on the inputs and outputs of the DUT. You almost always assume you have a VIP and you are talking to it through simulation.
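
(A hypothetical instance of the generated check Anderson describes: if a PS model schedules a DMA action in one block ahead of an interrupt action in another, a tool might emit a cross-block assertion along these lines, with all signal names and the window invented for illustration.)

    module ab_handoff(input logic clk, rst_n, dma_done, irq_raised);
      // Derived from a PS-level ordering such as "dma_xfer then irq_raise":
      // once the DMA block signals completion, the interrupt block must
      // raise its line within a bounded window.
      A_HANDOFF: assert property (@(posedge clk) disable iff (!rst_n)
                                  dma_done |-> ##[1:16] irq_raised);
    endmodule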

Safarpour: If you have the VIPs in formal, then you could use the VIPs to set up an SoC. At the end of the day, if this is all reusable in emulation, then it has to be synthesizable anyway.

Fitzpatrick: What we are thinking for emulation is that you would generate C code and run that on the processor model. We are not talking about generating RTL that would get compiled in.

Darbari: That is not so bad, but it is a stretch of the imagination. It is certainly doable that you could compile C code to machine code and do C-to-formal for hardware and software together.

SE: A graph defines the paths through a design. Formal is tied to path coverage, so the coverage models are in alignment and formal can directly remove paths that simulation would otherwise need to consider.

Sabbagh: There is an interesting application there. If there are coverage items that formal can prove unreachable, then we know it is not necessary to test them. Also, formal can show how something is reachable. If the input-space coverage model doesn’t get you to some coverage points deep inside the design, formal can help provide that. You can feed that back into the input-space coverage model: here are the sequences that would get you there, so let’s change the input coverage space to make that reachable.
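
(One way to picture the loop Sabbagh describes, using an invented deep-state cover target: if formal proves it unreachable, the simulation coverage model can waive it; if formal reaches it, the witness trace is the recipe for steering stimulus there.)

    module deep_cover(input logic clk, rst_n,
                      input logic [3:0] fsm_state, retry_cnt);
      // Invented deep condition: error-recovery state at maximum retries.
      localparam logic [3:0] ERR_RECOV = 4'hC, MAX_RETRY = 4'hF;
      COV_DEEP: cover property (@(posedge clk) disable iff (!rst_n)
                  fsm_state == ERR_RECOV && retry_cnt == MAX_RETRY);
    endmodule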

Hamid: It can provide much more value than that. When we create PS models, we are thinking about the sequences that can happen, the scenarios that can happen. But when we are doing dynamic testing, we are in the business of having to run lots of tests, because individual scenarios tend not to fail; it is when they are interacting on the hardware that they tend to fail. In a perfect world, we could take the scenarios, turn them into proofs, and show that something will be true under all conditions and that no other IP or memory can mess this up.

Sabbagh: That is similar to what we talked about earlier, in that checkers could be generated and fed into formal. But even if you can’t do that, you can use the simulation-based checkers and enhance your stimulus coverage model based on what is derived from formal.

Fitzpatrick: Going back to the coverage point, you could use the results of formal as an additional input to the PS tool, to say that something has already been covered, or that I want to target this and I know how to get there.

Sabbagh: Formal tells you how to do it and if your coverage model doesn’t include that it can be enhanced.

Anderson: We already do that for unreachability, so why not counterexamples as well?

Hamid: Just because formal has proven something doesn’t mean that simulation is not required. Electrical stuff does matter. Even if all of the logic is right, it still has to run on silicon.

Sabbagh: We’re not saying that. We are including the witness trace. Formal tells you how to do something and that can be rolled back into the graph.

Darbari: The point is that if you have exhaustively proven through formal that some bits of the implementation are not exercisable then what is the point of generating testcases for it?

Hamid: That is a confusing concept for the portable stimulus guys because what we are putting into the model is the definition of what is supposed to work.

Darbari: True, but the implementation may be different. And there may be a bug in the implementation or a specification issue.

Fitzpatrick: Do you mean taking the results of formal and using them to influence generation? There should be a way to reconcile the two.

Safarpour: This reminds me a little of architectural-level validation, because we can do the same kind of analysis on the architectural models. You find a bunch of stuff there, including inconsistencies in the specification. But you are not done. Then you have to go and do it on the design, and only then can the model be considered golden.

Hamid: The creation of the PS model tends to force you into understanding what your specifications are. Bugs drop out all of the time for exactly this kind of reason, either because the inputs and outputs don’t match, or because you can’t establish the constraints because the models are inconsistent.

Sabbagh: That is a classic case. Declaring the assertions, the properties, the constraints or the checks tends to find bugs.

Anderson: Any rigorous model will have this effect. You are thinking about it in a way that goes beyond the words on paper.

Sabbagh: Trying to make it executable.

Related Stories
Verification Unification (Part 1)
The verification task relies on both dynamic execution and formal methods today, but those technologies have few connection points. That may be changing with Portable Stimulus.
System-Level Verification Tackles New Role
Experts at the table, part 3: Automotive reliability and coverage; the real value of portable stimulus.
Emulation’s Footprint Grows
Why emulators are suddenly indispensable to a growing number of companies, and what comes next.


