Verification As A Flow (Part 3)

Experts at the Table, part 3: How will Portable Stimulus impact SoC verification and what adoption approaches are likely to catch on first?

Semiconductor Engineering sat down to discuss the transformation of verification from a tool to a flow with Vladislav Palfy, global manager of application engineering for OneSpin Solutions; Dave Kelf, chief marketing officer for Breker Verification Systems; Mark Olen, product marketing group manager for Mentor, A Siemens Business; Larry Melling, product management director, System & Verification Group at Cadence; and Roger Sabbagh, vice president of applications engineering for Oski Technology. What follows are excerpts of that conversation. Part one of this discussion is here. Part two is here.

SE: One of the touted capabilities of Portable Stimulus (PS) is the ability to target different execution engines. What are the benefits of that?

Sabbagh: First, choosing the right engine for the job. If you have a declarative spec of design intent that you can test against, you can decide whether you want to use simulation or formal. That is a question about the block: how well does it suit formal versus simulation? We can take that spec and use it in different ways. What will motivate people to change if we are just replacing one thing with the same thing? What additional problem are we solving? What about the portability of verification intent from the IP level to the SoC level? That is a huge problem today. This is a reuse-of-testbench problem, and building the SoC infrastructure is a time and resource sink.
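
To make the "one spec, many engines" idea concrete, here is a minimal Python sketch. The `Intent` record and both back-end functions are invented for illustration; they are not the PSS standard or any vendor's tool, and a real flow would emit simulator stimulus or formal properties rather than strings.

```python
# Hypothetical sketch: one declarative intent record, two execution targets.
from dataclasses import dataclass

@dataclass
class Intent:
    action: str           # the behavior to exercise, e.g. "write_then_read"
    address_range: tuple  # the legal address window (lo, hi)
    expect: str           # the checkable outcome

def to_simulation(intent: Intent) -> str:
    """Lower the intent to a directed-random stimulus for a simulator."""
    lo, hi = intent.address_range
    return f"randomize addr in [{lo:#x}:{hi:#x}]; do {intent.action}; check {intent.expect}"

def to_formal(intent: Intent) -> str:
    """Lower the same intent to an exhaustive property for a formal engine."""
    lo, hi = intent.address_range
    return f"assert: forall addr in [{lo:#x}:{hi:#x}], {intent.action} implies {intent.expect}"

spec = Intent("write_then_read", (0x1000, 0x1FFF), "rdata == wdata")
print(to_simulation(spec))
print(to_formal(spec))
```

The point is only that the spec is written once, and the choice of engine becomes a per-block decision made downstream.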

Olen: Consider the difference between the design-to-verification flow for IP versus SoC. In the past there was a brick wall between design and manufacturing test. At the IP level, if you have a designer and a verifier and you are building an Ethernet MAC, you don't put a verification person on that unless they understand Ethernet. There is a specification, and you hire people who have domain experience at the block level. When you are integrating and building an entire SoC, it is different by nature. You cannot find a verification engineer who has all the design intent knowledge in their head that the designer had. Ethernet is Ethernet, and you can read the spec. But there is a big brick wall when it comes to the SoC level. If you give me an SoC with quad cores and multi-level cache, how do I know where to start? The brick wall is back.

Sabbagh: But the SoC verification engineer has the expertise in doing that top-level, software-driven verification of the whole environment, including multiple processors, GPUs, etc. They are getting over the wall. What motivates people to move from UVM to PS at the IP level might be that it helps solve that problem.

Kelf: There is a business issue here. The SoC team and the IP teams are separate groups, and each group buys its own tools. They only talk to each other if there is a high-level manager who gets the whole story. PS does help with the UVM problem: you can make it drive sequences at that level. And it really helps with the SoC problem, which is what it is primarily targeting. It is great at synchronizing software tests with hardware transactions, and it can help with the transition as well.

Olen: There is at least one company taking existing SystemVerilog constraints and converting them for use in a pre-PS tool. It is all math: you can use algebraic expressions such as constraints, or you can use a declarative description such as a graph or tree, and describe the same thing. So you can suck the constraints into PS and run randomly without having to describe all of the use cases in your graph. We could turn off all of the formal algorithms and heuristic intelligence in PS and just run random traversals through the graph if we wanted to mimic exactly what you get with constrained random testing. But why would you do that when you can put some intelligence into it? If you have tested this thing three times already, then let's not do that again. Let's do something different and produce a flatter distribution of solutions. Constrained random has no concept of history. Each run is independent of everything that happened in the past and everything that will happen in the future. With a graph in PS, since it is a formal technology, you can track where you have been without unfolding the description, and choose not to go there again. You can set priorities and weighting: 'I want to force the graph and traversal mechanisms into this solution space because I just checked in a new part of the design, and I want to drive tests in that area.'
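
To make the history point concrete, here is a minimal Python sketch over a toy scenario graph. The node names and the 10x weighting are invented; with `bias_new=True` the walk favors edges that extend prefixes it has not seen before, while `bias_new=False` behaves like memoryless constrained random.

```python
import random

# Hypothetical scenario graph: each node lists the choices reachable from it.
GRAPH = {
    "start":      ["cfg_dma", "cfg_cache"],
    "cfg_dma":    ["xfer_small", "xfer_large"],
    "cfg_cache":  ["xfer_small", "xfer_large"],
    "xfer_small": ["check"],
    "xfer_large": ["check"],
    "check":      [],
}

visited = set()  # history: path prefixes already exercised in earlier runs

def traverse(node="start", bias_new=True):
    """Walk the graph once; optionally bias toward unexplored prefixes."""
    path = [node]
    while GRAPH[node]:
        choices = GRAPH[node]
        if bias_new:
            # Choices extending an unseen prefix get 10x weight; plain
            # constrained random would sample with no memory at all.
            weights = [10 if tuple(path + [c]) not in visited else 1
                       for c in choices]
            node = random.choices(choices, weights=weights)[0]
        else:
            node = random.choice(choices)  # memoryless, like constrained random
        path.append(node)
        visited.add(tuple(path))
    return path

for _ in range(4):
    print(traverse())
```

Across repeated runs the biased walk spreads out over the unexercised paths first, which is the flatter distribution of solutions described above.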

Melling: That is where the value is. What I was trying to say is, 'Why not reuse the UVM stuff rather than being forced to do it a different way?' UVM worked. I would hate to walk away from this and have it come out as, 'PS will replace UVM.' It isn't and it won't, although there will be customers who do see it as a replacement.

Olen: No, I don’t think it will replace it.

Melling: It is more about reuse and addressing the next level of problem. There are some things that PS offers that will help address problems in the old methods. Great, but let's not throw everything away.

SE: You said that PS keeps track of history. It knows what has been tested. But if you don’t have that coming from IP verification, then are you getting the whole picture? At the block level, you should be able to see in the graph exactly what was tested in the IP. You need coverage results to be portable between IP verification and SoC verification.

Kelf: Let's be clear: UVM provides a framework. PS is one way, and there are others, for creating high-level tests and applying them to IP. Then you can take that high-level test and move it over to the SoC level.

Melling: This is a multi-level verification problem. There is a verification problem at the IP level, a verification problem at the block level, a verification problem at the sub-system level, an interconnect problem at the SoC level, and an SoC functionality problem. There are layers and levels of assembly happening, and there are verification approaches for each of those levels. The challenge is not to say IP doesn't have history. The challenge is to aggregate the testing and verification that was done at the IP level up to the other levels, so that when I look at the SoC, I can see from the top level all the way down to the IP level what kind of coverage…

Palfy: But you are coming back to the standard. It would be ideal if you could understand a block that was verified by formal and has formal coverage results. It would be good if that could be read into PS.

Melling: Absolutely. It is fundamental to our approach in verification management. You must be able to assemble those views. ‘Here is what was done in formal, here is what was done here, here is how to aggregate them, here is how to map them across the domains.’
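
A hedged sketch of the aggregation being described, in Python. The block names, counts, and dictionary shape are invented for illustration and do not reflect any tool's actual coverage database format; the point is only the rollup across levels.

```python
# Invented per-block coverage results from different levels and engines.
ip_coverage = {
    "uart0": {"hit": 180, "total": 200},  # from an IP-level UVM regression
    "dma0":  {"hit": 95,  "total": 100},  # from formal (proven goals count as hit)
}

soc_coverage = {
    "interconnect": {"hit": 40, "total": 80},  # from SoC-level scenario tests
}

def aggregate(*levels):
    """Merge per-block coverage from every verification level into one rollup."""
    merged = {}
    for level in levels:
        for block, cov in level.items():
            acc = merged.setdefault(block, {"hit": 0, "total": 0})
            acc["hit"] += cov["hit"]
            acc["total"] += cov["total"]
    return merged

for block, cov in aggregate(ip_coverage, soc_coverage).items():
    print(f"{block}: {100 * cov['hit'] / cov['total']:.0f}% covered")
```

Mapping formal results into the same hit/total view is the hard part in practice, which is why the panelists keep returning to the standard.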

SE: What do you see as the most prevalent way people will start using PS?

Melling: One of the biggest drivers has been content: a library that works with our pre-PS product. You go into a customer and look at the problem they have, such as an ARM CPU sub-system with many cores and multiple clusters that is cache-coherent and I/O-coherent. 'Here is a library that will help you test this. It includes power management and coherency. It gives you the basic models, and you can start generating tests and get much higher functional coverage in these domains.' In the past you were not even able to measure coverage of these things. You were just writing directed software tests to try things out, following a test plan. Now you are following an approach where the content gives you measurable results and metrics that you can plot against.

Kelf: Whenever you introduce a new technology, there have to be one or two killer apps, the first things that bring it in the door. There are three, and people are using all of them with our pre-PS solutions today. First, there is a better way of creating UVM sequences that is more efficient, effective, and easier to write. Second, at the SoC level, there is synchronizing multi-threaded software tests with hardware transactions, and having a more efficient way of creating one scenario that can run on either emulation or simulation. Third is post-fabrication silicon bring-up, being able to take some of the tests and apply them at that point. It will then evolve into a high-level spec that drives other things, such as formal verification.

Palfy: This is a new technology, and I hope it will bring a shift left as formal verification did, maybe bringing verification to designers as well so they can start earlier. We are keeping an eye on it, and we hope it will work with formal as well.

Olen: I agree with the multi-threaded tests. People have studied human behavior, and when you go to multi-threaded systems, a human can no longer track all the things that are going on. For a single thread, even I could write tests if I had the time. The other areas in which we are seeing interest are automotive functional safety and Mil/Aero. The declarative nature, which is similar to a formal technology, is traceable and exhaustive. For functional safety there is random failure analysis and systematic failure analysis. If you are running constrained random, then unless you can confirm that you have covered all the necessary corner cases, you have problems. They want traceability showing that you have covered every single legal situation, and even some of the illegal ones they want checked. Using a declarative description you can synthesize a coverage model, you can show what it is, you can track it, and you can keep history along the way. Mil/Aero has gotten along without it, but they have struggled with requirements tools being able to map design requirements into test and verification requirements and then building the coverage plans. That takes a lot of time, so automating it would provide a huge benefit. While they are not always the early adopters, they are jumping into this.
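
As a rough sketch of that requirements-to-coverage traceability, here is a small Python example. The requirement IDs, goal names, and test names are all invented; real requirements tools and coverage models are far richer, but the mapping and history-keeping have this shape.

```python
# Hypothetical mapping from design requirements to synthesized coverage goals.
requirements = {
    "REQ-001": ["reset_during_xfer", "xfer_after_reset"],
    "REQ-002": ["illegal_addr_rejected"],
}

# History: which tests have hit each coverage goal so far.
history = {goal: [] for goals in requirements.values() for goal in goals}

def record_run(test_name, goals_hit):
    """Log the goals a test exercised, building a traceable history."""
    for goal in goals_hit:
        history[goal].append(test_name)

record_run("smoke_01", ["reset_during_xfer"])
record_run("neg_07", ["illegal_addr_rejected"])

# A requirement is covered only when every goal mapped to it has been hit.
for req, goals in requirements.items():
    status = "covered" if all(history[g] for g in goals) else "OPEN"
    print(req, status, {g: history[g] for g in goals})
```

The audit value is in the reverse direction: for any requirement, you can name exactly which tests discharged it.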

Sabbagh: If your favorite tool is a hammer, every problem looks like a nail. We shouldn't be thinking only about simulation with PS. Even some of the things we talked about at the architectural level, like pulling in various CPU clusters that have different cache-coherence protocols and are talking to a shared memory: how can you possibly test that with simulation? There are too many combinations of things. So when you talk about killer apps, some of them are simulation- and emulation-based. But even at the system level there is a place for formal, and we do architectural formal verification that helps people test some of these high-level interactions.


