What Happened To Portable Stimulus?

All standards take time to become established — especially new languages and methodologies, which require an ecosystem to be developed.


In June 2018, Accellera released the initial version of the Portable Test and Stimulus Standard (PSS), a new verification language slated to be the first new abstraction defined within EDA in a couple of decades. So what happened to it?

Apart from a few updates at DVCon, there appears to be little talk about it today. However, the industry has its head down trying to make it work, and it is making progress.

Often shortened to Portable Stimulus, it defines verification intent in a way that decouples it from the engine on which the design is executed, be it simulation, emulation, post-silicon, or even software running on the embedded processors within the system. It also enables vertical reuse, so that code written for block-level verification also can be used at the SoC level, or the other way around. Perhaps most importantly, it is scalable, and can pick up where SystemVerilog and UVM, which struggle with the complexity of modern systems, leave off.

Five years later, it seems as if the activity that swarmed around this standard has disappeared. In this case, looks are definitely deceiving. Several EDA vendors are working on the technology, and many companies have deployed it into their flows. It appears as if those efforts have yet to become unified. Each vendor is focusing on a different problem, a different market sector, a different approach to using and integrating the technology — even a different focus on the language.

What is PSS?
Why does the industry need a new language? “The ultimate value of Portable Stimulus is taking constrained random to the SoC level, which you can’t do today with SystemVerilog and UVM,” says Tom Fitzpatrick, vice chair of the Accellera Portable Stimulus working group and strategic verification architect at Siemens EDA. “A Portable Stimulus description says do A then B and C in parallel, then D and then if X happened, then do this. It’s a declarative description that allows you to do randomization as needed.”

Unlike SystemVerilog and UVM, which apply constrained random techniques to the inputs of a system, PSS starts from the outputs and works back toward the inputs. “If you say I want to do this action, it may require data from somewhere and the data could come from multiple places,” Fitzpatrick explains. “You don’t know that ahead of time when you’re writing the model. You define the possibilities, and then you randomize it and solve it.”
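In the PSS DSL, this output-back-to-input style is modeled with flow objects. A consuming action declares an input buffer, and the solver infers which producing action supplies it. A minimal sketch, with hypothetical component and action names:

```pss
component dma_c {
    buffer data_buf { rand int size; }

    // Two possible producers of the same buffer type
    action mem_load { output data_buf out_data; }
    action dma_copy { output data_buf out_data; }

    // The consumer only states that it needs a data_buf.
    // The solver infers which producer supplies it -- the model
    // defines the possibilities rather than a fixed sequence.
    action process_data {
        input data_buf in_data;
    }
}
```

Traversing `process_data` alone is a legal scenario: the tool schedules either `mem_load` or `dma_copy` ahead of it to satisfy the data dependency.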

The committee continues to work on the standard. “We just submitted the 2.1 draft to the Accellera board of directors which we expect to be approved at their September meeting,” he says. “We have also started work on 3.0, a big portion of which will be scenario level coverage. We have also started an effort to define a methodology library, similar to what UVM was for SystemVerilog.”

Standards take a while to see adoption. “PSS has followed a fairly typical introduction cycle — initial excitement at the possibilities, discovery of real value-add areas, general adoption and proliferation, i.e., excitement, hard work, acceptance,” says Dave Kelf, CEO for Breker Verification. “For language-based technology introductions, inevitably this adoption cycle requires many years of learning, experimentation, and investment. This is when high-level visionary articles are superseded by practical application papers, which is what we are seeing now.”

New standards have early adopters. “The companies who seem to be using PSS have done it with a top-down focus,” says Robert Hoogenstryd, product marketing manager at Siemens EDA. “The challenge is that there is no bottom-up content to leverage for this idea of reuse. They’re having to do a lot of the implementation stuff for themselves. Therefore, it becomes costly. Every time they talk to the bottom-up guys, the guys doing block-level UVM SystemVerilog verification, they ask, ‘What’s in it for me?'”

The system needs to be bootstrapped. “PSS brings value in multiple use cases and applications,” says Moshik Rubin, senior product management group director, System and Verification Group at Cadence. “We believe that the sweet spot is in full chip, bare metal environments, running in simulation or emulation, and even post-silicon. To boost adoption, there needs to be ready-to-use content for typical system-level verification domains. We call it ‘System VIP’ and we see customers that have no prior experience with PSS getting up and running fairly quickly.”

The language
In the early days of the standard’s development, there were two camps. One wanted C++ and the other wanted to develop a new declarative language. The initial specification contained both. “The C++ camp had a few of the initial drivers, but they have not been active in the working group,” said Bernie DeLay, senior engineering director at Synopsys. “The active customers found that using a declarative language was more efficient for them. Using base class libraries, like in SystemC, turned out to be messy.”

The latest version of PSS has dropped the C++ variant. “We’re very happy that they dropped C++,” says Breker’s Kelf. “When the language was defined, it was supposed to be both DSL and C++. A decision was made by the committee to make them semantically equivalent, but what they were thinking was syntactically equivalent. This was a mistake, because that added a ton of class libraries and routines that made C++ mimic the DSL. We say native C++, with one or two extra functions, can easily handle all the requirements of Portable Stimulus. So we support both native C++ and the DSL as defined by Accellera in the standard.”

One adoption area divides the industry. “If you talk to people who are doing RTL design, they seem to naturally go toward the declarative language because that’s what they’re used to,” says Siemens’ Hoogenstryd. “When you talk to people who are doing software development, you’re talking about software-driven tests, and they’re looking at using PSS as a way to create bare metal tests. They naturally go toward C because that is their world. It is a little more challenging for those people to make the transition to DSL, but for those working on RTL development, it’s a natural thing for them. It also feels more like an extension of what they’re doing with SystemVerilog and UVM.”

And the language will evolve over time. “When you are talking about a problem that involves scheduling, with many resources, describing that in a declarative way is easier,” says Synopsys’ DeLay. “You are describing the resources and the things they are capable of doing. Inferring whether there is a producer or a consumer is much easier in a declarative language. However, there are some things that seem to be better described procedurally, and there is a need to mix.”
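The resource-and-scheduling case DeLay describes is where the declarative style shows. In the PSS DSL, resource objects and pools let an action claim a resource and leave the scheduling to the solver. A sketch, with hypothetical names:

```pss
component soc_c {
    resource dma_channel { }
    pool [4] dma_channel dma_pool;   // four channels available
    bind dma_pool *;                 // visible to all actions here

    // Each transfer locks one channel for its duration, so the
    // solver will never schedule more than four concurrently.
    action dma_xfer {
        lock dma_channel chan;
    }
}
```

The model states only what each action requires; a procedural test would instead have to hand-manage channel assignment across every legal interleaving.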

PSS adoption
Many large semiconductor companies will play with a new standard when it emerges, but its stickiness is the real test. “We see adoption, and for those who have adopted it, they continue with it,” says Robert Ruiz, product line management director at Synopsys. “I have seen standards where they tried it once, maybe found it interesting, but didn’t use it next time. However, with PSS, once it’s adopted by one person, the team, then other teams begin to adopt. We haven’t seen discontinuation after someone started using it.”

A lot of work still is required before PSS is easy to adopt. “The real question is, who can adopt it versus who should adopt it?” says Hoogenstryd. “There are a lot of challenges to adoption because it’s very much in the early stages. There is a lot of exercise left to the student, which limits the adoptability.”

But adoption continues to widen. “We are seeing it in a couple of different market spaces,” DeLay says. “The GPU area is one, because of the complexity of the scenarios that they want to drive. They come from a C background, and they never had an abstraction language like this. They were going along writing their embedded C code and they just weren’t getting the required coverage. And then we are seeing IP-level teams use it. While they also want coverage, coverage plateaus with UVM, and pushing further takes a lot of compute. With PSS, we’re closing faster. They’re picking it up because of the reduced compute needs.”

Sometimes, adoption is faster when the underlying technology is hidden. “At the end of the day, it is a new language, and it’s a fairly complex one,” says Kelf. “The issue is that we need people to be able to get up to speed. We can use PSS to create system VIP blocks and make them configurable without the end user having to know PSS. When they are ready to actually use PSS, it becomes easier.”

Long-term promise
PSS is an abstraction of functional intent, and that may make it suitable as a specification language. “More than one customer has said they are really looking for a single high-level executable specification of their intent,” says DeLay. “The architect will sit down, write their test intent as an executable spec, hand it down to firmware and validation teams, and they can then reuse that all the way into post silicon. We see PSS becoming the executable specification for the industry once we have all the capabilities in the language and the tools, which we are closing on.”

Portable stimulus is enabling shift left, in that post-silicon validation and test environments now can be developed pre-silicon. “Pre-silicon and post-silicon efforts can be assigned to the most suitable environment based on a common test plan,” says Klaus-Dieter Hilliges, platform extension manager for Advantest. “The Portable Test and Stimulus Standard is specifically designed for this reuse of test content across insertions. It now makes sense for post-silicon teams to learn how to work with pre-silicon teams, their content, and tools, e.g., to effectively debug a PSS-based test case. As PSS becomes widely adopted, extending the verification flow to post-silicon validation is a natural extension. Already, we see leaders in the industry communicating first successes.”

But there are barriers. Models always have been a limiter for new languages and methodologies. Without a collection of pre-existing models, the effort required to implement something is often greater than the gain, but nobody develops the models unless there is demand for them. It requires an ecosystem, and that means both models and tools. “It seems that the critical mass in terms of industry adoption, vendor support, and available content is getting there,” says Cadence’s Rubin. “We are starting to see the vision turning to reality, with IP teams that create a PSS API up front, which is then used at the SoC level and all the way to post-silicon, maximizing reusability and productivity while enabling a single language between teams that used to be isolated.”

Some of that can be enabled by the definition of a methodology, which is part of the plan for PSS 3.0. “Models are an important part of the ecosystem,” says Accellera’s Fitzpatrick. “Similar to the way UVM dealt with transactions and the infrastructure for communicating transactions, through drivers and monitors, the methodology library will define the important pieces of a typical application and standardize how to represent those things. This will enable a library of components to be defined. Today, a typical VIP has a library of UVM sequences or an API for it. There will also be a library of PSS actions associated with those, which will call those UVM sequences as you use them. It will also include the appropriate data structures to fit in with the methodology view of Portable Stimulus.”

Many pieces have to come together to make this work. “We are at the point where we have the ecosystem, and now everybody’s figuring out the best application for it,” says Synopsys’ Ruiz. “We are at the tipping point, but now there’s a need for development of the tools, the libraries, and debug capabilities. We need to build debug infrastructure into the tools. If you’re going to use anything for testing, there has to be some way to debug, because that’s the purpose of verification.”

The tools are coming. “I don’t believe there’s any tool that is currently able to span both top-down and bottom-up modes of operation,” says Fitzpatrick. “Some tools focus on C code generation. Some of them focus on UVM on-the-fly generation. Portable Stimulus is where SystemVerilog was pretty early on, where different vendors support different parts of the language. My expectation is that by the end of next year most, if not all, vendors are going to be able to do both UVM code and C code. Eventually, you could generate Python or whatever from a Portable Stimulus description.”

There are multiple ways in which tool flows can be imagined. “We decided not to build a standalone PSS compiler tool,” says Hoogenstryd. “Instead, we are developing technology that enables our simulator to execute PSS models directly. By building PSS into our infrastructure, rather than asking customers to figure out how to fit a separate tool into theirs, we are trying to lower the barrier to adoption.”

Alternatively, PSS execution can be seen as a synthesis problem. Breker provides another example of an application focus, having combined PSS and RISC-V verification. “We see ourselves starting with the uncore things like interrupt controllers, memory controllers, the pieces that fit around the processor,” says Kelf. “We call this integrity or integration verification. You can run the app on a clean processor with no added instructions. With RISC-V, it’s quite easy to add instructions into the instruction set. You can add a test for that into a PSS graph without disrupting the rest of the graph.”

Coverage
Of all the advantages and difficulties associated with PSS, there is one issue that rises above them all — coverage. Coverage provided by SystemVerilog and UVM is not sufficient for SoC-level verification. “In SystemVerilog, you can sample data at certain points,” says Fitzpatrick. “You can see what the transaction is when the bus cycle stops, or you can see what data you read from what address. Then you use cross coverage to say, ‘Did I do large transfers in and out of all the areas of my memory?’ However, the kinds of questions that we’re asking for coverage at the scenario level are different. ‘When I did this thing, what was the previous action that generated the data for me?’ Let’s assume that I can envision these actions as blocks in a waveform. There’s a beginning and an end time. I want to ask, ‘Did these two things overlap? Did this one come before that one? If I executed this action that could have inferred A, B, or C, which one did I infer? Have I inferred all three of them?'”
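For the data-crossing half of the problem, the PSS DSL already borrows SystemVerilog-style covergroups over action attributes; the temporal questions Fitzpatrick raises are what the scenario-level coverage planned for PSS 3.0 is meant to answer. A data-coverage sketch, with hypothetical field names and bin ranges:

```pss
component mem_c {
    action mem_xfer {
        rand bit [32] addr;
        rand int in [1..1024] size;

        covergroup {
            addr_cp : coverpoint addr {
                bins low  = [0x00000000..0x3FFFFFFF];
                bins high = [0xC0000000..0xFFFFFFFF];
            }
            size_cp : coverpoint size;
            // "Did I do large transfers in and out of all the
            // areas of my memory?" -- answerable with a cross.
            addr_x_size : cross addr_cp, size_cp;
        }
    }
}
```

What this cannot express is ordering or overlap between actions, which is exactly the gap scenario-level coverage is intended to fill.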

It is a multi-dimensional problem. “The first dimension is combinatorial coverage, like what you might get in UVM,” says Kelf. “It answers questions like, do these three signals or variables happen at the same time? The second one you need is sequential coverage. Which sequence of events happened? Do you traverse a certain path for a state machine, or a certain path for a set of actions in PSS? The third dimension involves multiple sequences. Did these sequences happen concurrently?”

Others agree. “At the requirements level, you’re really talking about behavioral coverage,” says Hoogenstryd. “And if you’ve covered all the behaviors, that is one of the most powerful things that will come out of the evolution of PSS.”

Conclusion
Portable Stimulus is certainly not fading away. Several EDA companies are busy building the tools and libraries to make adoption easier, and the number of use cases appears to be growing. Large semiconductor companies — those with the most difficult verification challenges — have adopted it and are seeing benefits. One of the most positive signs is they continue to use it and spread its adoption within their respective companies.

Related Reading
Optimizing IC Designs For Real-World Use Cases
Vector-driven approach comes at expense of secondary tasks.
Design And Verification Methodologies Breaking Down
As chips become more complex, existing tools and methodologies are stretched to the breaking point.


