Portable Stimulus And Digital Twins

Experts at the Table: How Portable Stimulus plays with the digital twin and the drive toward system-level coverage.

It has been a year since Accellera’s Portable Test and Stimulus Specification became a standard. Semiconductor Engineering sat down to discuss the impact it has had, and its future direction, with Larry Melling, product management director for Cadence; Tom Fitzpatrick, strategic verification architect for Mentor, a Siemens Business; Tom Anderson, technical marketing consultant for OneSpin; and Dave Kelf, chief marketing officer for Breker Verification Systems. What follows are excerpts of that conversation. Part one of this discussion is here. Part two is here.

SE: The Portable Stimulus Standard (PSS) has given us a new system abstraction. The industry has also thrown another one at us – the digital twin. The prime concept with the digital twin appears to be live stimulus. So how do these two concepts work together?

Kelf: The digital twin is being talked about everywhere. If we are creating a PSS model of design intent, how much of a leap is it to take this model and make it the twin of the actual design? People are asking if this is a similar notion. Therefore, if we could create the digital twin in PSS and use it as a reference model, and it is generating tests and potentially checks, then you do have a digital twin type of flow — a practical flow for where you may use a digital twin. Now we are seeing virtual platforms come in, and in many cases this really is the digital twin of what they are producing, and they are wrapping PSS around it and using it to drive intent.

Fitzpatrick: It goes back to the split that PSS has. In a digital twin environment there are ways to model real-time data and stimulus. But from a verification perspective, there is still the outside world that is doing stuff and I have to react to it. As far as the electronics goes, there is software running that needs to be able to process all of this cool information coming from outside. That is pretty much the same as a UVM transactor pretending to be an Ethernet packet coming in, so I don’t think there is much of a discontinuity there. What you will see is that we will use PSS to define the intent of the system, and there is some thought about using PSS to model the design, as well. The idea that you can model what it is that you want to do separate from how you are going to do it will continue into the digital twin space. The only difference is that with the digital twin you will have a lot more data that has to be processed, but you can still model what you are doing with it in PSS.

Anderson: The digital twin is all about the design. PSS was never intended to be a way to capture the design, but the verification intent. So there is some overlap, but they do not do the same thing. They have different goals.

Fitzpatrick: The digital twin will be analogous to a virtual platform, and so PSS could be used to verify an RTL design, a virtual prototype, or a digital twin.

Melling: I see PSS as a path to better bug resolution. With the digital twin you are trying to reproduce the real environment of the system. You are putting complex software on there, and so when bugs occur, they are deeply buried beneath the operating system. We see customers who say they want to take a profile of what caused a problem, see if we can create a test intent that matches it, and be able to reproduce the problem in a way that we can debug. So it is a way to break down the bugs that the digital twin finds and provide a faster path to resolution.

Kelf: PSS was designed for verification, and there are very good reasons to keep it separate from design. The virtual platform is the EDA equivalent of the digital twin. But the concept of the digital twin is a bigger thing, just as you are saying. What we are trying to do with PSS is to create a spec for what the system is supposed to do. This is the mind shift we are trying to get the community to understand. In the UVM world, they are creating tests. Here we are creating a specification, and a tool generates the test. That spec then covers what the digital twin is supposed to do, so you could see that spec morph into the digital twin and be used in a broader sense. It can also describe lower-level software as well. So there is an analogy that is not fully there today, but I don’t think we should lose that idea. Bottom line – decades ago we talked about the executable spec. This is the closest we have ever come to it. The next level is the spec for the entire digital environment to throw real data in and test it out.

Fitzpatrick: The idea behind a PSS model is that you have these actions that represent stuff going on in the system or in the outside world, and you have to coordinate between them. That is what the digital twin is doing. So it is a question of whether this action is going to be represented as a software API call, a UVM sequence, or some digital twin modeling the sensors at the front of the car. It is all the same kind of thing. I can envision using PSS, or maybe the next generation of it, to start defining the models of all of the stuff that has to happen and then mapping each of those things to the platform where it will actually happen. But the idea is that you are defining all of these interactions in some reasonable, declarative way, so that you can analyze them and figure out which ones make sense. That is what we are talking about.

Kelf: I can see us moving in that direction.

Anderson: It is possible. I wouldn’t bet that it will happen. It sits somewhere between the visions for the two.

Fitzpatrick: The difference is that we are postulating that there is some platform that will represent each of the actions, be it RTL, emulation or UVM. The idea that you would be able to synthesize and get to an implementation from a PSS model is a big leap and is not what PSS was designed for. In PSS we did not define what the implementation was going to be. We have other ways to do that, and if you say we continue to do that, it is just that the implementation may be this big sensor model.

Kelf: Verilog was not designed for synthesis. Things evolve and unexpected things happen.
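
As a rough, hypothetical sketch of the what-versus-how split described above, the PSS-style fragment below declares an abstract DMA transfer action with one possible target mapping as embedded C, plus a scenario that coordinates two transfers. Every name here (dma_c, mem2mem_xfer, dma_copy, pss_top) is invented for illustration, and the same abstract actions could just as well be mapped to UVM sequences or to models in a digital twin.

```
// Hypothetical sketch only -- component, action, and function names are
// invented. The "what": an abstract transfer and a scenario coordinating it.
component dma_c {
    action mem2mem_xfer {
        rand bit[32] size;
        constraint { size in [64..4096]; }  // legal transfer sizes

        // One possible "how": realize the action as embedded C on a target
        // processor. dma_copy() is a made-up driver call.
        exec body C = """
            dma_copy(SRC_BUF, DST_BUF, {{size}});
        """;
    }
}

component pss_top {
    dma_c dma;

    // The scenario states only what must happen and how it is coordinated;
    // each traversal could instead be mapped to a UVM sequence or to a
    // sensor model in a digital twin on another platform.
    action system_scenario {
        activity {
            parallel {
                do dma_c::mem2mem_xfer;
                do dma_c::mem2mem_xfer;
            }
        }
    }
}
```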


SE: While coverage is not in the domain of PSS to solve, it has to enable the tools to solve the problem. What does system coverage mean, and how will people equate their usage of PSS with how they define it?

Fitzpatrick: If you think about system coverage as a level up from functional coverage, then you want to make sure that this variable hits this set of values, and that you can combine those in different ways. You can do that today and say that if I have a huge chip, can I make sure I had a big packet come in here while I am DMAing a certain type of thing here. Those are coverage points, and you can make sure they happen. We can target a test that will hit those two things. Now that I have these things, why don’t I, say, cross all of the different combinations? It goes beyond what the user initially defined as his coverage goals, but it provides a way to capture these weird relationships. Since we have the static graph of all of the possibilities, we can identify which of the paths through the graph will hit those cross covers and we can generate those tests for you.
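
To make the kind of cross Fitzpatrick mentions concrete, here is a minimal, hypothetical PSS-style covergroup that crosses a packet-kind attribute with a DMA channel attribute. The enum, field, and covergroup names are invented, and the exact syntax details may differ slightly from what a given tool accepts.

```
// Hypothetical sketch -- all names are invented for illustration.
enum pkt_kind_e { SMALL, JUMBO, MULTICAST };

covergroup traffic_cg(pkt_kind_e kind, bit[4] chan) {
    kind_cp : coverpoint kind;
    chan_cp : coverpoint chan;
    kind_x_chan : cross kind_cp, chan_cp;  // every packet kind on every channel
}

component pss_top {
    action system_traffic {
        rand pkt_kind_e kind;
        rand bit[4]     chan;

        // Sample the solved attribute values of each generated scenario; a
        // tool can analyze the scenario graph and generate the tests needed
        // to fill the crosses that remain uncovered.
        traffic_cg cov(kind, chan);
    }
}
```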


SE: That is the UVM way of thinking. It is combinatorial. The whole advantage of PSS is that it provides a way of thinking temporally. What does temporal coverage mean?

Fitzpatrick: That is the next step, once you figure out how to define that — and nobody really has. There is transition coverage in SystemVerilog, which nobody really understands and which isn’t quite assertions. It may go back to assertions as a way to define the temporal relationships between things. Once we figure out how to define what that is, because you have a static graph of possibilities, you can analyze it to see which ones would match the criteria you want, and we will still be able to generate the tests to hit those coverage points. So it becomes the technology, once we figure out how to define temporal coverage, to create tests to hit what already exists.

Kelf: One of the disappointments of PSS 1.0, and one of the reasons why we pushed back on it initially, was the lack of path coverage. Very early on we defined system coverage and came up with the idea of path constraints and path coverage. It requires a solver that works across the graph. We have a large number of customers who really like the idea of being able to define a coverage mechanism, down a path, that makes use of the solver and is able to feed back that path information. We believe we have at least one definition for the way to do scenario or intent coverage. This is a little different from functional coverage, because that is something very specific to the UVM world. How do you cover the specification — the intent? Using path coverage and the paths through the design, and cross-correlating between them, is the way to go.

Anderson: A PSS description is a model, be it called a graph or whatever, but it is a model that you can traverse and solve for, and keep track of what you have done. So the notion of model coverage is very applicable. It also can lead to some of the temporal notions. The easiest way is to think of a cache coherency test. That has multiple threads and lots of temporal relationships that can be established between those threads. This could be the timing between reads and writes and flushes. But it is not obvious how you define that. Coverage means I have tried every possible combination, or at least the ones I define in a given amount of time.

Melling: It is about performance, about latency, about power. It is about the system level – what is in the requirements spec? I see it as a combination of things. PSS gives you workload. Workload is imperative to having a good measure. I see it as a combination of workload, measurement, and then taking that and applying it to cover bins. People measure latency and then set up bins for those, and they need to be able to do testing at that level. It will be that kind of combination of technologies. PSS provides good workloads.
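
One hedged reading of that combination of workload, measurement, and cover bins is sketched below: a measured latency value is assumed to be fed back from the platform into a PSS-style covergroup with user-defined bins. The names, the bin ranges, and the mechanism for getting the measurement back into the model are all assumptions of this sketch rather than anything mandated by the standard.

```
// Hypothetical sketch -- names and ranges are placeholders. How the measured
// latency is brought back into the model (exec block, post-run processing,
// etc.) is tool- and flow-specific.
covergroup latency_cg(int lat_us) {
    lat_cp : coverpoint lat_us {
        bins fast   = [0..50];
        bins normal = [51..200];
        bins slow   = [201..1000];
    }
}

component pss_top {
    action measure_dma_latency {
        int measured_latency_us;               // filled in by the realization layer
        latency_cg cov(measured_latency_us);   // bin the measured value
    }
}
```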

Fitzpatrick: At the abstract level — temporal operations — we are not opposed to having that. What we want to avoid is defining a different way to specify what we already do today. You cannot analyze a graph and say I can make this happen 10 nanoseconds after that. There will be ways to do things at the abstract level, but then you have to measure more detail about it.


SE: What are the plans within the committee, and what are you working on?

Fitzpatrick: We are working toward a 1.1 release targeted for end of February (2020). We have closed on the requirements for that. Accellera has asked us to define a three-year roadmap, and by our estimate, probably around June 2021 we will either have a 1.2 or a 2.0.


