Experts at the Table, part 2: How did Portable Stimulus get its name, and will it replace UVM?
Semiconductor Engineering sat down to discuss the transformation of verification from a tool to a flow with Vladislav Palfy, global manager of application engineering for OneSpin Solutions; Dave Kelf, chief marketing officer for Breker Verification Systems; Mark Olen, product marketing group manager for Mentor, A Siemens Business; Larry Melling, product management director, System & Verification Group at Cadence; and Roger Sabbagh, vice president of applications engineering for Oski Technology. What follows are excerpts of that conversation. Part one of this discussion is here.
SE: This time last year, we talked about Portable Stimulus (PS) and formal. Has any progress been made in bringing these two technologies together?
Sabbagh: We have been talking about making stuff work across the dynamic engines, but formal has not been given any consideration.
Melling: PS was targeted at dynamic verification and not formal. There are formal tools that are starting to address things. PS was also trying to deal with the vertical integration problem. As systems become more integrated and you get larger designs, formal has capacity issues as well, so trying to restrict what we do in dynamic verification based upon what formal can do would be a mistake.
Olen: Don't confuse the standard and its syntax with what you can do with that syntax. It is possible to apply formal technologies to PS.
Kelf: The standard is a declarative language. You declare properties and this is not dissimilar to assertions.
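To make that concrete, a minimal sketch of the declarative style might look like the following PSS fragment, with hypothetical component, resource, and action names. Nothing here says how to run a transfer; the model declares what a legal transfer is and leaves scheduling to a tool:

    component dma_c {
        // a DMA channel is a contended resource the tool must schedule
        resource channel_r {}
        pool [4] channel_r channel_p;
        bind channel_p *;

        action mem2mem_a {
            rand bit[31:0] size;
            constraint size_c { size in [4..4096]; }  // only legal sizes
            lock channel_r channel;                   // claim a free channel
        }
    }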
Olen: There are some that would claim that PS is not stimulus at all.
Kelf: The real problem with PS is that it is called PS. It is not just stimulus, it is coverage, it is testbenches.
Sabbagh: It is a specification of design intent.
Kelf: Portability is one important thing, but it is also sharable; there is so much to it. It is a functional intent specification standard, and we have shown that it can work with assertions, so it can work with formal. But you do have to start somewhere, and there was a clear need for migrating tests from simulation to emulation that had to be addressed for the hardware/software space. We went after that, and we know it can be extended to formal, to virtual platforms, to all kinds of technologies. It has the potential to do that.
Olen: The genesis of the name is here at this table. That would be me and Dennis Brophy. We had an idea for a different way of generating a "specification" that used a declarative, formal mechanism as opposed to a programmatic method. However, there was no way to get Accellera to form a committee based on that! It could not just be better constrained-random stimulus, because SystemVerilog was already there. So, we came up with a problem that it could solve, something that Accellera could get its arms around. The ability to port stimulus from platform to platform was an issue that many users cared about. In the end, it was named after just one of the many possible things that it could do. Now the truth comes out: this really isn't about stimulus. It is a specification that can be used to do things such as generate stimulus, but the stimulus it generates on a simulator, and later on an emulator or FPGA prototyping system, will be different across all of those platforms. So, is the stimulus portable? It is the specification that describes a system's behavior that is actually reusable or retargetable.
Sabbagh: So just change the middle S [of PSS] to specification. It just seems as if the time has not yet come for formal.
Olen: First, it is declarative, so it is formal; it is a property-based specification. Second, you can use formal mechanisms to operate on the graph, tree, or whatever you want to call it. Some of us are already doing that.
Palfy: It sounds like what OneSpin did with gap-free verification, a methodology with a property set proven to be complete, covering all possible scenarios that your IP can exhibit. The difference is that here you had two or three companies trying to put that together with PS. Maybe the time is right now, and maybe we were just too early.
SE: When PS got started, the message was clear: this is not a UVM replacement. UVM would continue to be used for block-level and IP verification, while PS targets higher levels of integration. But the papers that are appearing talk about replacing UVM at the block level. How will users perceive this, and how do they transition between UVM and PS?
Kelf: UVM contains UVCs that are basically bus functional models plus sequences. Yes, you can have hierarchies of sequences and it can get complex, but most companies are doing that. PS can be used as another way to generate sequences, especially when you are trying to synchronize sequences around activity on ports in a block. It does not replace the UVM-style UVC or even the lower-level sequences. It layers on top of UVM. However, something that can be done with PS, even though we are not doing a good job of promoting it, is that you have the specification language and there are exec blocks inside of it, and they can write out different languages. They can write out C and C++, they can write out SystemVerilog and e, and they can write out assertions. This allows you to layer PS on top of many different things, which is part of the secret behind it. From the UVM side, it is not a replacement and it does not replace UVCs; it is just another way of driving them.
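A hedged sketch of that retargeting mechanism, using the standard's target-template form of exec blocks: the same declared action is realized as C on one platform and as a call into an existing UVM sequence on another. The write_reg() call and the cfg_seq sequence handle are hypothetical, and in practice each extension would live in a platform-specific package:

    component regs_c {
        action cfg_write_a {
            rand bit[31:0] addr;
            rand bit[31:0] data;
        }
    }

    // bare-metal realization: emit C
    extend action regs_c::cfg_write_a {
        exec body C = """
            write_reg({{addr}}, {{data}});
        """;
    }

    // UVM realization: drive an existing SystemVerilog sequence
    extend action regs_c::cfg_write_a {
        exec body SV = """
            `uvm_do_with(cfg_seq, { addr == {{addr}}; data == {{data}}; })
        """;
    }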
Olen: I agree.
Melling: UVM and UVCs were all about targeting exhaustive verification at the IP level. As you integrate those blocks, how the IP is used in a system becomes part of what you need to describe in a verifiable way. The state space becomes too large and the use-case coverage becomes sparse. You have an expanding state space that you have to navigate very precisely, otherwise you spend too many verification cycles on illegal use cases.
Olen: It is an n^2 problem.
Melling: Exactly. You have to run the use cases that are the most important and have to be covered. We talked to a customer that had a highly configurable piece of IP. The amount of variation in just the parameterization and the constraints means you could spend a huge amount of test time running illegal tests that the user would never create, while not running enough tests on the legal cases. PS really provides a way to describe 'what' the thing has to do, and a tool can create testcases and generate the actual stimulus to test that.
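As a hypothetical illustration of that point, legality can be declared once as constraints, so a tool never spends cycles on the illegal configurations a user would never create. The codec component, its fields, and the 1080p rule are invented for the example:

    component codec_c {
        action encode_a {
            rand bit[2] profile;        // 0=baseline, 1=main, 2=high
            rand bit[31:0] width;
            rand bit[31:0] height;

            constraint legal_c {
                profile in [0..2];
                width  in [16..4096];
                height in [16..2160];
                // invented rule: only the high profile may exceed 1080p
                if (height > 1080) {
                    profile == 2;
                }
            }
        }
    }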
SE: There is clearly a transition period where PS is replacing sequences. But that is only one part of PS. What about scoreboarding and coverage? These are all part of PS. Ultimately, it would appear that the only thing left for UVM is bus functional models. Why do we need something that complicated for something so simple?
Kelf: Agreed. You can synthesize scoreboards and coverage models from PS. UVM has mechanisms and those will still be there. We all know it is hard to write a decent coverage model in UVM for decent-sized blocks. PS provides a more effective way to do it based on a high-level intent spec. But consider gate-level simulation. We still have gates and we have a Verilog standard for the gate level, but nobody is writing gates by hand. Everyone creates RTL and synthesizes gates. PS is the same. We will create high-level specifications and synthesize the coverage models and scoreboards.
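A rough sketch of the kind of coverage model that could be declared once and synthesized per platform; the covergroup, bins, and ranges are illustrative only:

    // declared once, instantiated in an action, e.g. xfer_cg cov (size, mode);
    covergroup xfer_cg (bit[31:0] size, bit[2] mode) {
        size_cp : coverpoint size {
            bins small  = [4..64];
            bins medium = [65..1024];
            bins large  = [1025..4096];
        }
        mode_cp : coverpoint mode;
        size_x_mode : cross size_cp, mode_cp;
    }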
Palfy: Are we plotting to replace UVM completely?
Kelf: No – just like we are not replacing gates.
Palfy: By the time you get anything done that way, you could already be finding and removing bugs with formal.
Olen: We have both. We have companies using the pre-PS technology, and they work completely in that environment. We also have others who say they have tens of thousands of constraints written in SystemVerilog, and so much know-how and experience went into making those that they are not going to throw them away. They use a layer of PS on top to control those and to control the transactors. It was rare for a customer to move to a homogeneous pre-PS solution before it became engine-agnostic. They may have considered it interesting technology, but if it only works with one simulator, it is not going to be successful. That is why becoming a standard is important. When more people move into a homogeneous declarative mode, we will see many use PS exclusively, but not everyone, because there is just too much legacy.
Melling: I am not sure how well PS would do at the IP level. If you look at the IP problem, exhaustive testing on IP is a must-have to address those markets. I don't see PS spending time trying to reinvent that wheel, because it is not broken.
SE: What is an example of something PS can’t do? If you have to create the PS description for the IP blocks to use at the SoC level, why would you not use them at the block level?
Melling: At the IP level, PS is trying to describe how the IP is going to get used. Now, if I have to write all of the use cases, figure out all of the systems that the IP is going to get plugged into, and write all of the different behaviors that could potentially be presented to it in a system, that is a tough problem. If you are coming at it from the 'what do I want to test' perspective, it has a very broad range. So now you are running in the opposite direction, where you have a smaller state space but an unspecified, sparse space. As you go up, the space gets broader and you have to navigate the use model because it is sparse. If I am not going to worry about that, and I create a piece of IP and want to test it such that all of the sparse paths in all of the different verticals and all of the different places have been tested, it is a specification problem. You can't write all those use cases in a reasonable period of time. It is much more cost-effective to just randomize the inputs, provide the constraints about what is legal in this piece of IP, and allow it to be beaten up to find out if it really works.
Kelf: I disagree. There is truth to what you said, certainly when you look at the system level and the sparse graphs, but we believe that PS can be used to test blocks very effectively, can completely describe the spec for a block, and can drive the whole UVM framework. UVM is still there as a framework, but PS can drive those UVM tests far more effectively. We think there is a lot of value there. You can easily describe a spec for a block in PS, much more efficiently produce a series of tests, and even drive formal verification as well.
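An illustrative sketch of such a block-level spec: a compound action's activity block composes smaller actions into scenarios, and a tool fills in legal values and the concrete test code. All names are hypothetical:

    component blk_test_c {
        action reset_a   { }
        action config_a  { rand bit[31:0] mode; }
        action traffic_a { rand bit[31:0] len; }

        // compound action: the activity block declares scenario structure
        action block_test_a {
            activity {
                do reset_a;
                do config_a;
                repeat (10) {
                    do traffic_a;
                }
            }
        }
    }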