Extending Portable Stimulus

Experts at the Table: With the initial standard in place, what can we expect in terms of broadening its scope to other domains and applications?

It has been a year since Accellera’s Portable Test and Stimulus Specification became a standard. Semiconductor Engineering sat down to discuss the impact it has had, and its future direction, with Larry Melling, product management director for Cadence; Tom Fitzpatrick, strategic verification architect for Mentor, a Siemens Business; Tom Anderson, technical marketing consultant for OneSpin Solutions; and Dave Kelf, chief marketing officer for Breker Verification Systems. What follows are excerpts from that conversation. Part one of this discussion is here. Part three is here.

SE: The committee started with some initial expectations about applications, and clearly the standard is now being used in different areas. Have you started to identify new features or capabilities that would enhance it?

Fitzpatrick: Nothing that is that surprising. What we are working on for 1.1 is stuff that we knew we wanted to do that didn’t make it into 1.0. So far, there hasn’t been a huge ‘aha’ moment. We are confident that we are going in the right direction.

Anderson: It is still quite early. The composition of the working group doesn’t extend to people doing test. While there were a few who expressed interest, they didn’t last long. But if we are now seeing users moving into those worlds, there may be additional things that the standard could be doing to help them.

Kelf: By the same token, embedded software. We knew that we would be running micro-kernels or bringing software application drivers in, and I am sure we are all seeing that. But I would expect to see a lot more of that, and of the API necessary to open into the software world.

Melling: AMD has presented on how they are using PSS, and they have a software layer they are using. It runs below the micro-kernel layer and handles all of the PSS threads being operated. So we are seeing exactly that. They want to re-use their software layer here.

Kelf: Right, we are starting to see that now, and AMD is a great example because they are far ahead with the virtual and hybrid platform mode running up front. It is like a thin Linux.

Anderson: PSS can verify software in an indirect manner. There is often a lot of code in common, maybe at the driver level, maybe more sophisticated, that is shared between traditional bring-up and test, and the validation in the PSS flow.

Kelf: Testers, virtual platforms…

Anderson: The more code you share, the more you validate it.
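
As a rough illustration of that kind of sharing, a PSS action can hand its implementation off to an existing C driver call through a target-template exec block, so the same driver code is exercised whether the generated test runs on a virtual platform, an emulator, or silicon. This is only a sketch; the component and the uart_tx() call are hypothetical stand-ins for a real driver API.

    component uart_c {
        action send_byte_a {
            rand bit[7:0] data;

            // Target-template exec: the generated test calls straight
            // into the production driver; uart_tx() is illustrative.
            exec body C = """
                uart_tx({{data}});
            """;
        }
    }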

SE: How much interest is there in bridging between PSS and formal?

Anderson: We have seen a lot of requests asking how PSS fits into a formal world. It is something we have discussed in previous roundtables. There are interesting ideas out there about possible links between formal and PSS models – applying more rigorous formal verification, perhaps using things like assertions feeding into the models – but there are not many user-level requests so far. It is, however, an area that we continue to be interested in.

Fitzpatrick: Our formal guys have been asking about it and looking into it. There is a school of thought that the formalism of the PSS language, as limited as it is, is enough that if you can bring assertions in as the implementation of a set of actions, it is a nice way to string those assertions together. We will have to see if that pans out. I am not sure there are a lot of users who understand it well enough to even know the right questions to ask. We have been so focused on formal apps, and on hiding the fact that there is any formal going on, that thinking about moving up to that level is not at the top of their minds.
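
As a hedged sketch of what that school of thought might look like in practice, each PSS action could arm an existing SVA checker from its exec block (assuming the tool supports SystemVerilog target templates), so that traversing an activity strings the assertions together. All hierarchy paths and names below are hypothetical.

    component dma_ip_c {
        action start_xfer_a {
            rand bit[31:0] src_addr;
            // Enable an existing assertion module, then start the
            // transfer; the tb_top paths are purely illustrative.
            exec body SV = """
                $asserton(0, tb_top.dma_checker);
                tb_top.dma_bfm.start({{src_addr}});
            """;
        }

        action check_done_a {
            // The 'implementation' is just waiting on the checker.
            exec body SV = """
                wait (tb_top.dma_checker.xfer_done_ok);
            """;
        }
    }

    component pss_top {
        dma_ip_c dma;

        action assert_chain_a {
            dma_ip_c::start_xfer_a start;
            dma_ip_c::check_done_a check;
            activity {
                start;
                check;
            }
        }
    }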

Kelf: The question came up twice yesterday. One person asked if we can drive assertions, because we hate writing them. If you have a graph model, and you have done a good job of describing the intent, formal can decompose the entire state space of the design. We are trying to describe the state space of the intent, so you can think about getting pretty full coverage on it. It is halfway to formal at the system level. For the UVM-directed testing phase of the process, where formal is used quite a lot, is there an opportunity to plug in formal either before the UVM part at the block level, or to create a bug-hunting framework for formal after the UVM bit, where you want to run formal to see if you can find some bugs at that point? It is early, but I can see points in the flow where it would fit.

Melling: Bug hunting is the right place. PSS gives you test intent, and it gives you a way to break up an SoC. If you look at a particular state space, it gives formal a path into SoC-level verification without having to digest the whole SoC.

SE: Will PSS need overlays, such as UPF, for power? That would put more information into the model, which could then be used to improve tests for power and allow tests that create worst-case power draw.

Melling: That is a very interesting question. At the SoC level, UPF is uninteresting, because the SoC-level guys are not dealing with power at that level. Now they are saying, ‘We need the IP guys to do some more setup for us, so that we are getting more from them.’ I believe we will see it now. The system-level power requirements push down into the IP level, and the IP guys will have to apply the stuff they know, like UPF, to make it useful and valuable. So it is coming. But so far we have seen it more as a system power description. We even have translators and things that would extract the various states and transitions, and the system guys were not interested. They have their own architectural requirements for the power management system, and they are not written in UPF.

Fitzpatrick: Because PSS is set up to be above things like UVM, we are faced with having to interface to things like the UVM register layer, UPF, and other things that each have their own set of information specific to them, and we are wrestling with how to make that available at the abstract level without recreating all of it. We have been thinking about this a lot, in the committee and within Mentor. We do need to figure out how to make that happen. The things that you really care about are the states and the transitions, and we can model those within PSS. The question is how you get the information, and whether we need something in the PSS standard — something that says you shall use the UPF format. There does need to be some way to represent that information and let the tools deal with it. If there is a standard way to specify it, then all of the tools will be able to process it in the same way. But we do not need to define an alternative way to specify registers and power in PSS, so long as we have a path to get the information into PSS. We have to avoid trying to make PSS do everything for everybody, keep it doing what it is supposed to do, and provide other ways to get information into and out of it.
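
As a sketch of what modeling just the states and transitions could look like in PSS, once the raw information has been pulled out of UPF, a domain’s power mode can be a state object, with each legal transition an action over that state. The domain, mode, and action names here are purely illustrative, not part of any standard mapping.

    enum power_mode_e { OFF, RETENTION, ON };

    state power_state_s {
        rand power_mode_e mode;
    }

    component power_ctrl_c {
        // One pool per power domain; the legal modes and transitions
        // would be extracted from the UPF description, not re-entered.
        pool power_state_s gpu_domain;
        bind gpu_domain *;

        action power_up_a {
            input  power_state_s curr;
            output power_state_s next;
            constraint { curr.mode == OFF; next.mode == ON; }
        }

        action power_down_a {
            input  power_state_s curr;
            output power_state_s next;
            constraint { curr.mode == ON; next.mode == OFF; }
        }
    }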

Kelf: I would take it further. The whole point of PSS was not to replace all of these other things. This is a renaissance for UPF, which was starting to be questioned a bit, and some of the power guys had been talking about other standards to describe this information. Now PSS comes along, driven by the system folks who have realized that they need this information, and maybe if UPF can be redone slightly so that it fits better with the system-level testbench, it will work well. UPF is a great example where the IP folks now have to provide extra information about the interfaces, which they didn’t do before, and now it is really useful at the system level. Broadcom did this. They have a UVM functional testbench, and they overlaid power management tests that manipulate power domains using UPF. So you can layer the existing standards, UVM, UPF, and others, all built to sit on top of SystemVerilog, and put PSS on top to control the whole thing.

Anderson: I am glad to hear that. One thing I did not predict correctly with PSS is that I thought people would want to do that three years ago. You have a set of power domains and things that you do to manipulate them on the chip, so the idea of having PSS generate the high-level tests that exercise them, probably from a UPF description, seemed like an obvious thing.

Fitzpatrick: It is one of our use cases.

Anderson: Right, but I expected it to be an early application. It is good that it is finally happening.

Kelf: It did seem like an obvious application, because people clearly had a real problem there, but things take time.

SE: Does it go further than this? Si2, and now the IEEE, have been working on multi-abstraction power models. Do we need to think about how to layer in power and timing? When you are trying to generate stress tests for multi-processor systems, you do not know the timing until you actually run it. So you have to have some kind of estimated timing that could be used to improve the schedules. You are learning this on the fly rather than pulling in the necessary information.

Kelf: There is so much that could be done. There are clearly things that we could do to bring in those models. It is just that we all have limited bandwidth, and we are all working on different cases, building things up and picking up bits as we go along.

Melling: We are at the stage where we are starting optimization on the implementation side of it. For things like coherency testing, timing matters. Getting the thinnest possible overhead layer for activating the different types of actions or functionality in a coherency test creates challenges for performance optimization. Because we have knowledge about the test sequence, there are a lot of post-processing opportunities. After the fact, you can do analysis where you ask, ‘Did I cover this kind of latency, or this kind of overlap?’ Post-processing coverage analysis and verification is in its early stages. Lots of people are starting to take advantage of emulators to run stimulus fast, and then doing a lot of analytics. As for a language to be able to capture it, we haven’t gone there, but we are tackling the problems.

Fitzpatrick: We have been looking into how to get that information out of the system, and at ways of probing what is going on and tracing transactions within the system. We actually have developed ways to visualize that information. The next step is to take that and close the loop. It is almost like running constrained random, looking at the coverage, and automatically tweaking the constraints, which I don’t think will ever work. But if you identify a problematic path through the system, and you can understand how the stimulus can be used to generate it, then you can look at that. You can look at the state space. We can already look at the cover points and say, ‘I need to be able to run these paths through the graph to hit my cover points.’ The question is, what path through the graph do I need to exercise to make this transaction go from here to here? You may need to look at what else was going on in the system when something takes longer than you expected.
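
For reference, PSS already lets that kind of goal be declared on the model itself: a covergroup sampled at an action records which combinations were exercised, and a tool can then look for activity paths that fill the remaining bins. This is a minimal sketch with illustrative names, not output from any particular tool.

    enum route_e { CPU_TO_MEM, CPU_TO_IO, DMA_TO_MEM };

    component traffic_c {
        action move_data_a {
            rand route_e  route;
            rand bit[7:0] burst_len;

            // Sampled each time the action is traversed; the cross is
            // the kind of cover point a tool could target by choosing
            // paths through the graph.
            covergroup {
                cp_route : coverpoint route;
                cp_len   : coverpoint burst_len;
                route_x_len : cross cp_route, cp_len;
            } cov;
        }
    }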

Anderson: Post-processing for metrics like timing and performance is very much a part of that.

Fitzpatrick: The answer is more about getting the analytics out of what has been run, rather than trying to model the low-level details of things at the abstract level.

Kelf: You do see tests get larger or smaller as they actually run, and when stitching tests together you can identify bottlenecks. So low-level information is being reflected back up to the high level. But it does bring up another interesting topic, which is debug. We are all looking for ways to flag a transaction and watch it go through a system. Visualization at this level creates some interesting new areas. There is a world of new possibilities that this opens up. Now that we have raised the abstraction of system-level tests, it creates opportunities for debug and power analysis. You can expect to see all kinds of companies going after these opportunities.

SE: When you throw in the software angle, do things get more interesting?

Anderson: Right, you have to watch what both the hardware and software are doing.

Melling: It can help you decode what the software is doing. Software that gets generated on these things is hairy — multiple threads on multiple cores, and figuring out what is happening and what depends on what else. The UML diagrams are proving to be very useful. They need those kinds of annotations to put into their reference guides and to show the operations that are valid.


