The Growing Impact Of Portable Stimulus

Experts at the Table: How the Portable Test and Stimulus Standard has affected the industry, and who the early adopters are.

It has been a year since Accellera’s Portable Test and Stimulus Specification became a standard. Semiconductor Engineering sat down to discuss the impact it has had, and its future direction, with Dave Kelf, chief marketing officer for Breker Verification Systems; Larry Melling, product management director for Cadence; Tom Fitzpatrick, strategic verification architect for Mentor, a Siemens Business; and Tom Anderson, technical marketing consultant for OneSpin Solutions. What follows are excerpts of that conversation. Part two of this discussion is here. Part three is here.

SE: The Accellera Portable Test and Stimulus Standard (PSS) became a standard one year ago. How has the industry reacted to it?

Fitzpatrick: It appears to be very positive. There was a lot of interest in the rollout at the Design Automation Conference last year. Before that, it was a big topic at DVCon US and Europe. This year we do not have as much to announce, because we are still working on the 1.1 update, but I have been doing presentations in our Verification Academy booth and they are some of the best-attended sessions. I can’t walk down the hall without someone asking me about it.

Melling: For us, it was the stimulus that sparked broader adoption, and we are seeing heavy production use. There are a lot of places where we had not anticipated interest. We have customers taking it into the ATE world and looking at how it can impact time to yield. We have customers looking at it from an architecture requirements perspective, taking the abstract language, defining the architectural requirements in it, and generating a verification plan from that. So there are many applications that we are starting to see, and people are more confident now that a standard is in place.

Anderson: There is clearly a consensus that it is real now. The early adopters went with an early vendor and had reasonable success, and now the second wave is happening. Now they can code something up that will work with multiple vendors, and they know it will not lead down a dead end. That means there is more willingness to give it a try. I am also fascinated by the idea that it is already extending into new areas. Extending into test was certainly a vision early on, but it was not the focus for the standards group. The fact that the architect is the one who writes the initial PSS description was part of the original vision, and that is something we clearly accomplished. So it is not just about adoption, but about people taking it into areas that are ahead of the efforts going on in the working group.

Kelf: The release of the standard eliminated one of the biggest entry barriers for mainstream verification users. What has been interesting for us is that we have seen a clear evolution by early adopters and mainstream users toward solving practical issues, rather than exploring the possibilities afforded by the standard, especially for SoC verification. We have been able to apply the experience gained from our existing customers to develop apps and libraries that address these issues. The second adoption wave is here. We are in the classic post-chasm mode of applying the standard to real applications.

SE: Who are the early adopters? Is it the architects? People looking for better UVM? Has the adoption pattern changed since it became a standard?

Anderson: The first wave was primarily verification engineers, ones who sat on the boundary of doing testbenches and writing some embedded, diagnostic-level code because they were designing SoCs with embedded processors. They saw this as a way to unify those two efforts and make a one-time investment. They could take the portions of the chip that did not have processors and the portion that did, and bring that out to the lab and even the field. That was the initial set of drivers. It is expanding. My sense is that designers are now asking these kinds of questions, the architects are slowly coming in, and so are the people in the lab who write diagnostic code for a living. They see a way out of some of the hard, manual effort.

Fitzpatrick: Our initial user base was mostly UVM users, but we did a lot of work to keep the analysis under the covers for them. It was still proprietary, so they were uneasy. Now that it is based on a standard, we are still doing that, but growth has been with UVM engineers who are looking for better coverage-targeted stimulus in their UVM environments. Now that they know there is a standard they can build on, we have seen an expansion of the user base. We have not really had a lot of people willing to say they are ready to do everything in PSS.

SE: So, an expansion of the UVM base rather than a new pocket of people?

Fitzpatrick: Yes, it is broadening out within the companies and they are now looking at emulation and being able to generate C code from the same scenarios. The standardization has enabled them to be confident that they can do that part of it and not be tied into a single vendor. They see the value that we have been bringing all along.

Melling: We saw the biggest initial adoption at the SoC verification level — people who were filling a gap. They had challenging issues with coherency testing and some of the big things at the SoC level. What has really caught fire has been low power. People recognize that this is how we change the game. They say that power is a system problem. And while they are sitting here working out how to test power, they now have stuff that can be pushed down to the IP guys and say, ‘This is the kind of testing you need to be doing on the IP so that when it is integrated into the SoC, it will be ready.’ I can also ensure I have the necessary building blocks to do the more complex scenarios. So it has evolved to becoming an SoC tool pushing down to the other places that are affected.

Kelf: We see two segments. One is the big top-down SoC users who are trying to test complete systems; a lot of people are starting there. Interestingly, with these hybrid methodologies where virtual platforms are becoming more important, they are doing configuration up front on a virtual platform for the whole SoC, doing some early testing and getting the testbench working, and then using that on an emulator, or taking some blocks out and using it in the UVM environment. We are starting to see more of that happening. Customers want to use it in the whole flow. Some parts of the tests are modular, going into the UVM environment, and then there is the full SoC testing and power analysis. Security is another area where we are seeing a lot of interest. The newer things are connected to RISC-V, where we are seeing interest, and to automotive and its systematic methodology.

Melling: Another thing that is really proving out, and paying dividends, is the abstraction of the language. If you have done an Arm library, getting to RISC-V is not a big effort, because it is at that level of abstraction. It is a few details under the covers.

Kelf: That’s right. It wasn’t modular before. We can now use the Armv8 app, and with just a few tweaks it becomes a RISC-V app. Plus, you can bring in the configuration test. You can also bring in tests from the outside that might be written in a different format, such as the compliance tests for RISC-V, and calling them from PSS allows you to bring in new features very easily. I didn’t fully appreciate this when I first started with the language.

Anderson: PSS is important for RISC-V because one of the roots of this technology was verifying processor architectures, so it has come full circle. We went off and did abstract system stuff, then we found a sweet spot in cache coherency verification, and now we are back in the ISA realm. This demonstrates the power of the technology and the power of the visualization capabilities. You can address such a wide range of different problems with one standard. It validates all of the hard work we have been doing over the years.

Fitzpatrick: We made a conscious decision early on to have that hard boundary between the abstract model and the realization, for two reasons: one, to hopefully enable this kind of thing, and two, to keep us from trying to define Esperanto. We do need to be careful as we move forward that we don’t get dragged too far down into any particular kind of implementation.

Kelf: It was important to keep that boundary, but the realization is very important as well. Making sure it works in people’s environments, and that people do not have to write large amounts of glue code to connect them, is important. A lot is possible, and we are doing a lot of work on realization code. The next frontier is making it more practical for configuration and other things around the standard.
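As an editorial aside, here is a minimal C sketch of the abstract-model/realization split the panelists describe. It is not PSS code and not any vendor's API; the scenario is written once against a small realization interface, and each target supplies its own implementation, which is the property that makes retargeting an Arm library to RISC-V a matter of a few details under the covers. All names and register offsets below are hypothetical.

/*
 * Minimal sketch (not PSS itself) of the abstract-model / realization split.
 * The scenario logic is written once against a small realization interface;
 * each target (e.g. an Armv8 or RISC-V bare-metal backend) supplies its own
 * implementation of that interface. All names here are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

/* Realization interface: the only part that knows about the target. */
typedef struct {
    void (*write_reg)(uintptr_t addr, uint32_t value);
    uint32_t (*read_reg)(uintptr_t addr);
    void (*barrier)(void);           /* e.g. DSB on Armv8, fence on RISC-V */
} realization_t;

/* Abstract scenario: start a (hypothetical) DMA transfer and check status.
 * This code does not change when the target ISA changes. */
static int dma_copy_scenario(const realization_t *rt,
                             uintptr_t dma_base, uint32_t len)
{
    rt->write_reg(dma_base + 0x08, len);   /* hypothetical length register */
    rt->write_reg(dma_base + 0x00, 1u);    /* hypothetical start bit       */
    rt->barrier();
    return rt->read_reg(dma_base + 0x04) == 1u ? 0 : -1;  /* done status  */
}

/* Host-side stub realization so the sketch compiles and runs standalone. */
static uint32_t fake_regs[4];
static void stub_write(uintptr_t a, uint32_t v) { fake_regs[(a & 0xF) >> 2] = v; }
static uint32_t stub_read(uintptr_t a) { (void)a; return 1u; /* pretend done */ }
static void stub_barrier(void) { }

int main(void)
{
    realization_t stub = { stub_write, stub_read, stub_barrier };
    printf("scenario result: %d\n", dma_copy_scenario(&stub, 0x0, 256));
    return 0;
}

Swapping in a different target would only replace the three function pointers; the scenario itself stays untouched, which is the point both speakers are making about keeping the boundary hard.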

SE: You said that test was an unexpected area for adoption. Originally it was about portability of testbenches from simulators to emulators, which hasn’t been mentioned a lot yet.

Fitzpatrick: The surprise is that it goes beyond simulation and emulation. That is a given today. We have done a lot through our UVM cookbook. We have re-architected how you should set up your testbench to make it easier to do that, with the goal of having PSS be the thing on top of everything. That is pretty much a solved problem. But the ability to have a completely different realization layer that hadn’t been initially anticipated, one that can address new platforms, has been tremendous.

Kelf: We have had a post-silicon product for a while, which also works with prototyping. The realization layer for post-silicon bring-up is not that dissimilar to the emulation layer, but there are some changes that had to be put in place. It is not hard to rebuild that, and it then opens up new worlds. The same is true for virtual platforms such as QEMU.

Melling: The surprises were the paradigm shifts that occurred in the test world. I saw test as being DFT, scan, and vectors, but it turns out they are facing real problems with more complex designs. The scan vectors are getting too big and using too much tester time, so they have some real initiatives around time to yield. How good and short is the test set that will give me a good read on whether a part yields? PSS has some real advantages in solving this problem, from the number of pins needed to configure and test, all the way through to exercising real functional behaviors of the device. Perhaps because I wasn’t as familiar with that area, it struck me as a cool adoption, and it opens up problem spaces in other territories that were not being discussed in the working group.

Fitzpatrick: Yes, we had a guy in the Verification Academy ask me about that yesterday.

Kelf: Test efficiency. You do more testing with fewer tests, and you find more corner cases more efficiently.

Anderson: And being able to use the processor cores themselves to do the testing means less tester time is needed. The overhead is getting the vectors in and out of the tester head. If you can do loops with variations, so you do not have to load a lot of code into the processors themselves, and just kick them off, they can run much faster than the tester itself. That can be very efficient.
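Anderson's point can be illustrated with a minimal bare-metal sketch. This is an assumption-laden illustration, not any vendor's generated code: the addresses, iteration count, and PRNG are placeholders. The idea is that the tester only kicks off the program on the device's own cores and reads back a single result word, rather than streaming large vector sets through the tester head.

/*
 * Minimal sketch, assuming a bare-metal target: a small looping self-test
 * that runs on the device's own cores. The memory range, result mailbox,
 * and PRNG parameters are hypothetical placeholders.
 */
#include <stdint.h>

#define TEST_BASE   ((volatile uint32_t *)0x80000000u)  /* hypothetical RAM under test */
#define TEST_WORDS  1024u
#define RESULT_REG  ((volatile uint32_t *)0x90000000u)  /* hypothetical mailbox the tester reads */

/* Tiny 32-bit xorshift PRNG: cheap on-chip source of varied patterns. */
static uint32_t next(uint32_t x)
{
    x ^= x << 13; x ^= x >> 17; x ^= x << 5;
    return x;
}

void self_test(void)
{
    uint32_t fails = 0;

    for (uint32_t iter = 0; iter < 16; iter++) {         /* 16 pattern variations */
        uint32_t seed = 0xC0FFEEu + iter;

        /* Write a pseudo-random pattern... */
        uint32_t x = seed;
        for (uint32_t i = 0; i < TEST_WORDS; i++) {
            x = next(x);
            TEST_BASE[i] = x;
        }

        /* ...then regenerate the same sequence and compare. */
        x = seed;
        for (uint32_t i = 0; i < TEST_WORDS; i++) {
            x = next(x);
            if (TEST_BASE[i] != x)
                fails++;
        }
    }

    /* One word of result is all the tester needs to pull off the pins. */
    *RESULT_REG = fails;
}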

Kelf: In one place, in automotive, it is all about fault injection and the random failure flow. There is one person who does some of that pre-silicon, and then a lot more is done post-silicon. They are doing the fault analysis on their chips by injecting faults post-silicon, tweaking the logic in the flip-flops at certain times and finding transient faults. Imagine a chip out in the sunshine. You are trying to duplicate that with the actual chip by flipping bits in the flip-flops at the right time, while the chip is running. They do this because they cannot run enough tests to get the ISO grading done properly up front; it is too complex and takes too long to run. So they are, in effect, doing silicon-level fault simulation.

Melling: You are talking about manipulating the fault inside the hardware, not negative testing.

Kelf: Right, manipulating the fault inside the hardware. Flip a bit in memory and see what happens. Did the safety mechanism catch it and respond in an appropriate manner?

Anderson: A problem with trying to calculate the fault metrics for ISO 26262 is that the manufacturers will not provide the reliability data on the silicon itself to feed into those metrics. This allows you to get around that because you have the actual silicon.

Kelf: There are some tricky things involved: getting to the scan path, getting all of the data out, flipping one bit, and scanning it back in.
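To make the scan-path manipulation Kelf describes concrete, here is a minimal sketch. The scan access and clock-control functions are declared but left as hypothetical placeholders for whatever DFT/JTAG hooks a real bench exposes; only the capture, single-bit flip, and reload sequence is shown.

/*
 * Minimal sketch of scan-based fault injection: pause the device, capture
 * the scan chain, flip exactly one flip-flop's bit, and shift the modified
 * state back in before resuming. The extern hooks are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bench hooks: capture/load the full scan chain as packed bits. */
extern void scan_capture(uint8_t *bits, size_t chain_len_bits);
extern void scan_load(const uint8_t *bits, size_t chain_len_bits);
extern void clocks_stop(void);
extern void clocks_resume(void);

/* Flip one bit in a packed bit buffer; returns the previous value. */
static int flip_bit(uint8_t *bits, size_t index)
{
    uint8_t mask = (uint8_t)(1u << (index % 8));
    int old = (bits[index / 8] & mask) != 0;
    bits[index / 8] ^= mask;
    return old;
}

/* Inject a single transient fault at one flip-flop position in the chain. */
void inject_fault(uint8_t *chain_buf, size_t chain_len_bits, size_t ff_index)
{
    clocks_stop();                            /* freeze state at the chosen cycle */
    scan_capture(chain_buf, chain_len_bits);  /* shift the current state out      */
    flip_bit(chain_buf, ff_index);            /* corrupt exactly one flip-flop    */
    scan_load(chain_buf, chain_len_bits);     /* shift the corrupted state back   */
    clocks_resume();                          /* let the safety mechanism react   */
}

Sweeping the flip-flop index and injection cycle, then checking whether the safety mechanism catches and handles each corruption, is what lets a team grade fault coverage on the actual silicon.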
