Verification Unification

Experts at the Table, part 1: The verification task relies on both dynamic execution and formal methods today, but those technologies have few connection points. That may be changing with Portable Stimulus.


There is a lot of excitement about the emerging Accellera Portable Stimulus (PS) standard. Most of the conversation has centered on its role in simulation and emulation, and on the need to bring portability and composability into the verification flow. Those alone are reason enough to expect broad adoption, but Portable Stimulus and formal verification also present several interesting possibilities together. This could strengthen both technologies and eliminate some of the overlap between them.

Semiconductor Engineering brought together industry luminaries to initiate the discussion about the role that formal technologies will play with Portable Stimulus and how it may help to bring the two execution technologies closer together. Participating in this roundtable are Joe Hupcey, verification product technologist for Mentor, a Siemens Business; Tom Fitzpatrick, verification evangelist for Mentor and vice chair of the Portable Stimulus Working Group (PSWG); Adnan Hamid, chief executive officer of Breker Verification Systems; Roger Sabbagh, vice president of applications engineering at Oski Technology; Sean Safarpour, director of CAE for VC Formal at Synopsys; Tom Anderson, who was product manager at Cadence at the time and secretary of the Portable Stimulus Working Group; and Ashish Darbari, director of product management for OneSpin Solutions. What follows are excerpts from that conversation.

SE: What are the possible connection points between formal verification and Portable Stimulus?

Anderson: As I thought about this, I structured it into two parts: what the connection, or overlap, is today, and how it could evolve. To me they are very separate things. Let’s start with the first part – we don’t talk about PS being a formal technology, for lots of reasons. Formal carries a lot of baggage in terms of what people infer from it. People tend to think of formal as being suitable for smaller blocks, while PS targets the SoC level and multi-SoC systems, so formal may carry the wrong connotations. But there are a lot of formal technologies under the hood, and a lot of similarities in the approaches taken. A PS model can be statically analyzed – it may not be a formal model, but there is a formalism in it. You have stimulus constraints, you have the ability to detect unreachability through static analysis, and you have constraint solvers under the hood. So there are a number of things that are similar to formal, and in some cases the two do share technology, such as engines and algorithms. Today there is a clear distinction in how people use the tools, but the underlying technology they have in common may lead to better alignment in the future.

Fitzpatrick: Part of why we spent so much effort in the PSWG making the PS description a declarative, static language is so that you can do that kind of analysis. You have to be able to do the basic analysis just to generate the output, but it also opens up the possibility of doing additional analysis and perhaps asking questions about the model. In the most basic sense, formal analysis takes a model of the design, at the RT level today, and asks questions about it. That enables you to figure out all of the things that could happen. With a static description of the stimulus specification, you should be able to ask questions about it, too. They may be slightly different from those you would ask of the design, and I doubt you would write a SystemVerilog assertion, but you may be able to answer some questions about it. There is formal technology that can be brought to that. Today we are just parsing the model, creating a data structure, and generating code from it. In the future, we could start doing more analysis on the abstract model.

Anderson: You could argue that some of the things you do today, such as constraint solving, identification of constraint conflicts, and unreachability analysis, are formal technology.
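The static analyses Anderson mentions – constraint solving, conflict detection, and unreachability checking over a declarative model – can be illustrated with a toy sketch in plain Python. This is not PSS syntax or any vendor tool, just a brute-force enumeration over hypothetical stimulus variables and constraints:

```python
from itertools import product

# Toy declarative stimulus model: variables with finite domains plus
# constraints, loosely analogous to PSS stimulus constraints.
# The variable names and limits here are invented for illustration.
domains = {
    "mode":  [0, 1, 2],       # hypothetical transfer mode
    "burst": [1, 4, 8, 16],   # hypothetical burst length
}
constraints = [
    lambda v: v["mode"] != 2 or v["burst"] >= 8,  # mode 2 needs long bursts
    lambda v: v["burst"] <= 8,                    # interconnect limit
]

def solutions(domains, constraints):
    """Enumerate every assignment that satisfies all constraints."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        v = dict(zip(names, values))
        if all(c(v) for c in constraints):
            yield v

def reachable(var, value, domains, constraints):
    """Static unreachability check: can `var` ever take `value`?"""
    return any(s[var] == value for s in solutions(domains, constraints))

print(reachable("burst", 16, domains, constraints))  # False: pruned by limit
print(reachable("mode", 2, domains, constraints))    # True: via burst == 8
```

A solver that reports `burst == 16` as unreachable before any test is generated is doing, in miniature, the kind of static reasoning over the model that formal tools do over RTL; an empty solution set would signal a constraint conflict.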

Sabbagh: If the spec of your stimulus can be formalized into this model and can be analyzed formally, then one question comes to mind: why isn’t anyone talking about PS being used to drive formal in some way? All of the hype around PS says this stimulus specification can be used by everybody – by the architects to drive their C model, by the DV guys to drive their RTL models, by emulation, and by the silicon validation guys. Why are the formal guys being left out of the party?

Darbari: Agreed. Formal is not mentioned at all.

Sabbagh: If all of this is true and we have this great capability to do all of this formal analysis, such as reachability, it seems to me that we should be able to leverage it more.

Fitzpatrick: So you will be joining the working group then?

Hamid: The point is that in addition to generating dynamic tests, we also generate formal tests that could be proven on the design.

Sabbagh: There would be constraints. Whenever you run formal, one of the challenges is constraining the design. You are already specifying a number of variables about the stimulus and how you want them to play together. You are also specifying the sequences of things you want to test for, and those you don’t want to see. And you are defining the coverage model. You have all of that defined in a single model, so I have a question: is formal down at the block level and PS at the SoC level? If so, maybe there is some need to connect them. When I talk to colleagues who are using these tools today, they say, ‘No, most people are using PS at the block level to replace their UVM stimulus.’ So I am confused. Why are we not aligning here?

Darbari: What is intriguing about PS is that the word declarative is there. Requirement-based is there. Reusability is there. Compositionality is there. Modularity is there. This is all the stuff I teach when I teach people about formal. And yet I found two serious issues with PS: it suggests that you will be generating test cases, and therefore from day one the emphasis is on driving sequences. That is good stuff, useful for UVM, for driver-based bring-up, and for emulation. But you started off with this requirement-based idea, so why don’t we call it a portable requirements standard? All of the concerns people had when they started to work with it – being able to reuse some of the tests downstream – I get. I have used formal in that manner, but I haven’t had a way to automatically extract test cases or formal assertions. So is it conceivable to take an SoC representation and actually run some form of formal testing on it? I still have to ask where the models are going to come from. Are the assertions going to come from a spec, from a formal evaluation tool, or from an automated graph-based formalism, which is the goodness that PS brings? I didn’t know whether consistency analysis was already happening on the graph-based formalism, so that you don’t end up generating inconsistent tests downstream.

Hamid: That is definitely there.

Darbari: Then we want to leverage all the goodness of that abstraction. I know that, for good reasons, you have an abstract layer where you want to reason about transactions, like the UVM transactors, and I get it. Formal has traditionally worked at the signal level, so there is a gap in the abstraction level at which we are trying to reason about things. But there is no reason why we can’t bridge that gap.

Fitzpatrick: We probably can, but we haven’t thought much about it because there aren’t many formal experts on the committee. It doesn’t mean that it can’t be done. There is an important distinction that has to be made. When you are doing formal, you have an RTL representation of the design, and then you have assertions that specify behaviors about the design. When we talk about PS, we are not talking about representing the design. We are representing the actions that the design or system can perform, and then putting those together in interesting ways to exercise the design.

Sabbagh: It is a view of the testbench.

Darbari: What is system in your view? Is it a combination of hardware and software?

Fitzpatrick: Yes.

Darbari: So we could do the same in formal too. It is possible to…

Fitzpatrick: There is a path for formal to be brought into the mix, but it is important to understand that we are not talking about the design specifically. It is a way of organizing your thoughts about what the design can do so that you can create interesting verification sequences for it.

Darbari: That is exactly what formal would do.

Hamid: In addition to generating dynamic tests, you could generate the constraints and the assertions for formal to go and prove.

Fitzpatrick: You still have to get it to the RTL of the design. The way we set up the PS model, there are constraints not only on the values things can take, but also on the relationships between actions, such as scheduling.

Darbari: I am glad to hear all of this because I thought maybe it wasn’t on the slides (shown at the DVCon PS training session) because there was some fundamental reason why they couldn’t.

Fitzpatrick: There is one fundamental reason and that is because we are not formal people.

Sabbagh: We can fix that.
