Evolution Of Verification Engineers

Experts at the Table, part 3: The role of a verification engineer will change and start to look a lot like knowledge management.


Semiconductor Engineering sat down to discuss the implications of having an executable specification that drives verification with Hagai Arbel, chief executive officer for VTool; Adnan Hamid, chief executive officer for Breker Verification; Mark Olen, product marketing manager for Mentor, a Siemens Business; Jim Hogan, managing partner of Vista Ventures; Sharon Rosenberg, senior solutions architect for Cadence Design Systems; and Tom Anderson, technical marketing consultant for OneSpin Solutions. Part one can be found here. Part two is here. What follows are excerpts of that conversation.

SE: If PSS is a single specification that is before the design/verification split, you have to make sure that you have explored the specification enough to know the implications of what you have specified.

Rosenberg: Exactly. As an architect you can really challenge your assumptions. You can create state machines and dependencies between activities and their states and resources, and now you can challenge that. You can ask to be taken to all of the states — without even running — just from the spec. It is realistic to ask to visit all of the states, or all of the legal transitions. You may find that some are not reachable. That is a bug in the spec. It happens. Now the designer has a new, efficient tool and they can visualize what is happening in an intuitive way and see what is reachable and what isn’t. As long as I keep my model to be something that can be analyzed and I do not go into too many procedural details, then I can use PSS to get all of that data.
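The static reachability check Rosenberg describes can be sketched in a few lines. This is a toy illustration, not a PSS tool: the state names and transitions below are hypothetical, standing in for a spec-level state machine.

```python
# Toy sketch (not PSS tooling): find unreachable states in a spec-level
# state graph without running a single simulation cycle.
from collections import deque

def reachable_states(transitions, start):
    """BFS over a dict {state: [next_state, ...]}; returns every state
    reachable from `start` by following legal transitions."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical spec: 'error_retry' is declared, but no transition
# ever leads into it -- exactly the kind of spec bug being described.
spec = {
    "idle":        ["configured"],
    "configured":  ["active"],
    "active":      ["idle", "done"],
    "error_retry": ["idle"],   # unreachable from 'idle'
}
all_states = set(spec) | {s for nxts in spec.values() for s in nxts}
unreachable = all_states - reachable_states(spec, "idle")
print(sorted(unreachable))  # -> ['error_retry']
```

Real PSS tools work on far richer models (actions, resources, scheduling constraints), but the underlying question — can this state ever be visited? — reduces to graph analysis like this.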

Anderson: We are dancing around the idea of correct by construction. It has been around forever. Would you want a model that you don’t have to verify? That is the goal of correct by construction. You have a model that is correct, maybe through exploration, maybe through other techniques that you used to validate and verify what you are worried about. But it is a single model, and from that you generate the design. Maybe we are getting to the point where the top level of an SoC, defined to be correct by construction, may actually work.

Hamid: There is no contradiction in any of this. Verification is the job of proving that an implementation matches an intent. PSS or UML activity diagrams are good ways to define intent. From there, am I refining manually and getting an implementation, or do I have a tool that can synthesize a solution? How do I prove that the synthesis is correct? Well, the tests from the original PSS model are applied to the implementation. Does it pass? This is similar to sequential equivalence checking, so nothing has really changed. We do not have to think about a split between design and verification as being what do I want and then how do I build it. It is all about making it simple for people to capture what they want in ways that work for mortal human beings.

Arbel: At the end, I want to build a chip. I need a way to express it in a way that everyone can understand, because design and implementation is the job of many people. And if I can find a language or tool or concept that provides a better expression of complex machines, then my job becomes a lot easier. Then it is design and verification. We need to validate it and check that what we expressed is a working system. But the higher the abstraction we start from, the better. If we can explore that during architecture specification, it is much better than doing it during design. I have seen many projects where the spec ends up being defined during design or simulation, with the verification and design teams defining it together, and that is a very long process.

Hamid: These models lend themselves to being built out as graphs. Once you turn the entire PSS model into a graph, it becomes very analyzable: which states are reachable or not, which use cases are reachable or not, what has been covered or not. We can turn what used to be an art into math.
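The "art into math" point can be made concrete with a small sketch. Assuming a hypothetical activity graph (the node names below are invented for illustration, not from any real PSS model), the legal use-case paths can be enumerated exhaustively and compared against what tests have actually exercised:

```python
# Toy sketch: enumerate every legal use-case path through a small
# activity graph, then measure coverage against paths tests have hit.
def all_paths(graph, node, goal, path=None):
    """Depth-first enumeration of every simple path from node to goal."""
    path = (path or []) + [node]
    if node == goal:
        return [path]
    paths = []
    for nxt in graph.get(node, []):
        if nxt not in path:          # keep paths simple (no revisits)
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths

# Hypothetical activity graph: data moves via DMA or via the CPU.
activity = {"start": ["dma", "cpu"],
            "dma":   ["check"],
            "cpu":   ["check"],
            "check": ["end"]}

legal = {tuple(p) for p in all_paths(activity, "start", "end")}
covered = {("start", "dma", "check", "end")}   # paths hit by tests so far
print(f"{len(covered)}/{len(legal)} use-case paths covered")
# the CPU path ('start', 'cpu', 'check', 'end') is still untested
```

Because the set of legal paths is computed, not guessed, "did we cover everything?" becomes a set-difference question rather than a judgment call.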

Rosenberg: Why do you still need the two paths — the comparison between design and verification — if we can capture things in PSS and there is technology that will find all of the issues? But I am not sure that it is synthesizable for a full design. We will continue to build it out of generic IPs that come together top-down, not bottom-up. That is the best way to think about the problem. So the implementation will be different. It will not be automatically synthesizable. There are implementation details, and people will make mistakes with those – not in the spec, but in the implementation, and that is what you want to check. For that you will have to create another model, which is the verification testbench, to compare between the two. As long as we don’t have synthesis, and I am not convinced that we ever will, we will need the two paths to compare during implementation. This assumes the spec is accurate, but you still need the two implementations to be compared to find any mistakes.

Anderson: I am not convinced we will ever have it either. But it is an interesting idea.

Hogan: We cannot wait a decade or more for this to be used. We cannot allow it to have the same gestation period as formal.

Rosenberg: But PSS is already there.

Hogan: You have the 1.0 spec.

Rosenberg: And you have implementations. There are tools around this table. There is so much you can do with it today. For most users, the first thing they do is not test creation. The first thing is to generate the verification plan, challenge my verification plan before I even create tests. We will see more non-vendor initiatives that will create in-house solutions. We are providing a formal definition, and they can work on specific things that they want to do with that. So yes, there is a lot to be done in the next 35 years, but there is stuff working today.

Hogan: I don’t have 35 years to see that happen. I appreciate what you are saying, but we have to work out what it takes to get faster adoption and to accelerate the rate at which tools can be created. So, we have a spec, and it will have holes in it like all specs do. What is the next biggest thing?

Rosenberg: Ask what is out there today. We can do test creation, we can do coverage maximization, we can do a verification plan and…

Olen: And it is not just what we have, but what customers are doing with it.

Rosenberg: There is value that can be had today.

Hogan: I started with SPICE. What was the key? BSIM9. All foundries gave me a SPICE model with 59 terms – polynomial. Solve for that a few times. But it got everyone working at the same level. What is the level at which we can all talk? Yes, you each have solutions, but I will come back to the fact that it is all about the models. There is another spec effort needed. Models. They have to be interchangeable. Is that a competitive threat to everyone? I don’t think so, because we all want to expand the market.

Anderson: There are some good analogies between formal and PSS. One thing that drove formal to getting wider adoption was when people standardized on SVA, and all of a sudden you could exchange models.

Rosenberg: But we started very well. We started with standardization before we even had the technology.

Anderson: That is my point. We have started well. The adoption of PSS is already going faster than formal. But until we start seeing wide availability of models from the people doing designs, either internally or externally, it will be hard to make it ubiquitous.

Hogan: The wider user community will end up contributing. Open-source is ubiquitous, so open-source models may be the way it goes.

Rosenberg: We have three vendors sitting at this table, and each claims to have customers who are willing to stand behind them and say they can get value today. We are doing models and we are getting value. Perhaps you are underestimating that.

Olen: You do have a good point. The good and the bad of PSS is that you can do a whole lot of different things with it. We have customers where we scratch our heads trying to work out why they want it. So, that was the good. It is also the bad. It has a tendency to defocus. What is next? That means: what do we do first to establish the foothold and acceptance? So when people ask whether I think PSS will go the way of UVM or the way of UCIS, I think it is much closer to the UVM acceptance path. UCIS is technically a standard that nobody supports.

Hamid: There is no contradiction. Those of us who are the pioneers in this space are building various Apps that are models that people can make use of. Design IPs will have to start supplying PSS models – absolutely.

Hogan: Consider an IP guy and let’s take RISC-V, a big non-verifiable thing. Is it not in the best interest of any RISC-V provider to have all of the models available that run on all of the platforms?

Hamid: Yes.

Hogan: Look at how much money Arm spends on verification. Why do people buy Arm? Because it has been verified better than any other solution. Are you willing to pay the Arm tax for that? People are. Would they as soon not pay it if they could get a RISC-V model that is understandable and verifiable? That is the problem I am faced with.

SE: Is this going to be the golden age for verification engineers, or will verification engineers just disappear or join the marketing team?

Hamid: They will become more powerful than they are today.

Arbel: They will become architects.

Rosenberg: They will have more power, and they will have more good questions for the architect. The verification team already sees the value, and they will see more as the vendors expand their solutions. Plus, they will create their own. They will see value and they will use it. If the architects don’t use it, then the verification engineers will create more tests, larger regressions. They will have better questions to challenge the architects, and that will earn them more respect. When we introduced randomization and coverage many years ago, there wasn’t even the concept of a verification engineer. There were a few juniors who created tests, but most of the time it was the designers who did that. And for the ones that got the task of verification, it was boring because they had to create a lot of directed tests.

Olen: It was work.

Rosenberg: Tedious. And then they started to have tools and languages that they had to grow into, and once they did, they provided a lot of value. They became professionals. Okay, perhaps I am biased, but it is becoming even more interesting. Initially it was hard for them, but go and check whether the software engineers would be willing to do that. We want a software-driven testbench, and only the verification team can provide sophisticated stuff like that.

SE: Will anyone graduate from college and want to be a verification engineer, and if they aren’t good enough they will get relegated to the design team?

Hamid: We are already there. Perhaps it is the wrong question. We talked about verification people becoming architects, and that is the wrong question. Architects only understand about 80% of what they have to get done, and they break the design into many blocks and people go and build them. The ones who build the blocks are pretty smart, but they only understood 80% of what is happening in their design. It is the verification guy who gets what is only 64% understood, and he has to understand 100% and test that. Usually, at the IP level, the verification person is the only one who really understands what is happening in the block, and that means that PSS is all about knowledge management. If you can get that one person who really understands a block, and if it fails in silicon, they are the only one who can figure out what is going on. Nobody likes that situation. We have to capture that knowledge into a model that is moveable and that will be preserved across product generations.
