When it comes to test portability between simulation, emulation and silicon, it may be time to consider a higher level of abstraction using graph-based technology.
Is it time to move up again? When it comes to test portability between simulation, emulation, prototypes and silicon, as well as an easier way to create tests in the first place, the answer appears to be a resounding ‘Yes.’ Looking at these activities from a higher level of abstraction, using a graph-based approach, should allow automation where there has been none previously, and could free valuable engineering resources for higher-value activities.
Here is where graph-based technology comes in. Mark Olen, product manager for the Design Verification Technology Division of Mentor Graphics, explained that the format is the same kind used to describe programming languages such as C. At the heart of it is a description format, with origins dating back to IBM in the 1970s, that is based on a BNF (Backus-Naur Form) meta-language grammar construct.
Interestingly, graph-based technology also enables engineering teams to run verification code on bare-metal silicon and start debugging before the production OS and applications are ready. Cleaner code and less post-silicon debug mean there is less for the foundries to wrestle with at production time, which also improves the overall design-for-manufacturing flow.
Mentor sees this as the starting point for its tool development in this area, but it didn’t really invent anything, because BNF is already pretty much a standard, in computer science terms if not in verification engineering terms, rather than something like SystemVerilog or UVM, he noted.
Olen said the BNF approach is a higher level of abstraction for defining the space of behavior you want to test. Designers used to lay down gates and logic by hand because that was the only way to implement a design; they no longer do that. Now they write RTL, a higher level of abstraction that describes intent rather than implementation, and rely on a logic synthesis tool to process that intent and synthesize the design implementation. The analogy holds in verification for the graphs: the graph describes a test space at a higher level of abstraction, from which a tool can automatically synthesize a test implementation, as opposed to synthesizing a design implementation.
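To make the analogy concrete, here is a minimal sketch, written in plain C rather than in any vendor's format, of a rule graph for a hypothetical DMA block plus a trivial generator that walks it. The action names, the rules and the generator itself are invented for illustration and are not Mentor's syntax or tool; the point is only that the graph states which sequences are legal, and a program derives concrete tests from that description.

```c
/* A minimal sketch, not Mentor's actual format or tool: a rule graph for a
 * hypothetical DMA block. In BNF-style form the same rules might read:
 *
 *   test     ::= transfer+ end_of_test
 *   transfer ::= config_dma start_xfer wait_done check_result
 *
 * Each node is a test action; each edge is a legal next action. Walking the
 * graph "synthesizes" one concrete test sequence out of the space the graph
 * describes, much as logic synthesis turns RTL intent into gates. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum action { CONFIG, START, WAIT_DONE, CHECK, DONE };

static const char *names[] = {
    "config_dma", "start_xfer", "wait_done", "check_result", "end_of_test"
};

/* The graph: for each action, the actions that may legally follow it. */
struct rule { int num_next; enum action next[2]; };

static const struct rule rules[] = {
    [CONFIG]    = { 1, { START } },
    [START]     = { 1, { WAIT_DONE } },
    [WAIT_DONE] = { 1, { CHECK } },
    [CHECK]     = { 2, { CONFIG, DONE } },   /* loop for another transfer, or stop */
};

int main(void)
{
    srand((unsigned)time(NULL));
    enum action a = CONFIG;

    while (a != DONE) {
        printf("%s\n", names[a]);
        a = rules[a].next[rand() % rules[a].num_next];  /* pick a legal successor */
    }
    printf("%s\n", names[DONE]);
    return 0;
}
```

Each run prints one legal sequence. In a real flow the same kind of description could be retargeted to a simulation testbench, an emulator or embedded code rather than to printf.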
“Anything you could describe in a set of constraints today, you could describe in a graph in somewhere between half and a quarter the number of lines of code — so it is much more brief,” he added.
There are many abstractions possible for verification, of course. “The UVM and SystemVerilog are working very well for most of the verification needs for doing testbenches and working at that simulation and emulation level, but there are a few things that SystemVerilog just can’t do,” said Steve Chappell, senior manager for CAE technology and verification at Synopsys. “You’ve got embedded processors in these designs and you can’t compile SystemVerilog to an embedded processor. There are only C compilers, C++ and Java. In order to actually run some code on those processors there is a need to write some C tests or C++ tests, or have something generate that code for you. There’s also the high-level modeling side, where the preferred language is SystemC for performance reasons, and at that level there isn’t a really good constrained-random environment for SystemC. So there are some needs outside of the general area where 90% of the verification work is done. The idea of the graph is to have this meta-language, if you will, that is a higher level of abstraction that allows you to then do some level of linking through different layers – the C layer or the SystemVerilog layer or even the SystemC layer.”
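The embedded-side output necessarily ends up as C. Below is a minimal sketch of the kind of bare-metal test such a generator might emit; the register map, addresses, bit fields and function name are hypothetical, invented purely for illustration and not tied to any real device.

```c
/* A minimal sketch of the kind of bare-metal C test a generator might emit
 * for an embedded core. The peripheral registers below are hypothetical.
 * The point is simply that code running on the processor itself has to be
 * C (or C++/assembly): there is no SystemVerilog compiler for it. */
#include <stdint.h>

#define DMA_BASE   0x40001000u                           /* hypothetical base address */
#define DMA_SRC    (*(volatile uint32_t *)(DMA_BASE + 0x0))
#define DMA_DST    (*(volatile uint32_t *)(DMA_BASE + 0x4))
#define DMA_CTRL   (*(volatile uint32_t *)(DMA_BASE + 0x8))
#define DMA_STATUS (*(volatile uint32_t *)(DMA_BASE + 0xC))
#define DMA_DONE   (1u << 0)

int dma_smoke_test(void)
{
    static uint32_t src[16];
    static volatile uint32_t dst[16];        /* volatile: written by the hardware  */

    for (uint32_t i = 0; i < 16; i++)        /* seed a known data pattern          */
        src[i] = 0xA5A50000u + i;

    DMA_SRC  = (uint32_t)(uintptr_t)src;     /* program the transfer               */
    DMA_DST  = (uint32_t)(uintptr_t)dst;
    DMA_CTRL = 16u;                          /* kick off a 16-word transfer        */

    while (!(DMA_STATUS & DMA_DONE))         /* spin until the hardware signals    */
        ;

    for (uint32_t i = 0; i < 16; i++)        /* self-check the result              */
        if (dst[i] != src[i])
            return -1;                       /* fail                               */
    return 0;                                /* pass                               */
}
```

Per Chappell's point, the promise of a meta-language layer is to generate and link this kind of code with the SystemVerilog and SystemC layers, rather than having engineers hand-write each piece.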
Joe Skazinski, chief technology officer at Kozio, agreed that generating test cases is challenging. “We’ve gone from directed to constrained random and now graph-based to find the flow and generate code for that. One of the challenges I see is that you still have to run software on these embedded processors.”
Still, Chappell said, “From an adoption standpoint, software guys that are trying to generate the C code for embedded processors are not quite as comfortable with SystemVerilog – it’s going to be a learning curve for them either way, whether they have to learn SystemVerilog or some new meta-language.”
He believes anything layered on top of current methodologies needs to be very non-disruptive. If a graph-based approach can be introduced in a way that minimizes the impact and lets engineering teams keep using and building on the infrastructure they already have, it will probably be acceptable.
Synopsys is not going into detail about any plans in this space today. The company’s view is that the technology addresses a need that is not yet critical, and that there are bigger needs in its customer base at present.
Is the industry ready?
It seems promising, but are engineering teams ready to move up?
“People have looked for years for a way to have a common test or common methodology that spans simulation, prototyping and hardware and there really hasn’t been one,” said Tom Anderson, vice president of marketing at Breker Verification Systems. “If you are running a handwritten diagnostic test in hardware, let’s say, in the bring-up lab, it’s usually very hard to port that back to simulation. At the same time it’s really impossible to take simulation-based vectors and port them to hardware because you no longer have direct access to the I/O ports.”
Also, engineering teams find when they get their chips back that they can’t expect to boot the OS and have applications up and running on day one, he observed. “You need software that is designed to be a verification vehicle rather than the final production vehicle as a stepping stone. Much as we saw with the simulation world, people are hand-writing tests. Virtually everybody we see out there when they get their silicon back has a team of diagnostic engineers writing diagnostics to run on the raw silicon as stepping stones to find bugs, to find problems — sometimes they will use bits and pieces of their production software like drivers and things connected together in this manual test to build up to the point where they can run the OS.”
The belief is that graph-based technology has the potential to completely replace this manual process both in simulation and in hardware.
“One of the things that’s changing the landscape is people who are saying, ‘I want to go beyond block-based verification at RTL in SystemVerilog with my favorite VM — UVM, OVM, VMM,” Olen said. “Now I actually want to go to the next level and I want to verify my subsystem, my fabric or my full system, my full chip.’ In that case, they want to know how to reuse the testbench from the block level at the system-level and this is an area where the graph-based technology gives a lot of benefit.”
In support of industry adoption, Mentor is donating its existing graph-based test specification format to jump-start a standardization effort, not as a standard for graph-based technology but as a standard for a portable stimulus description format, he said. It is taking this approach because ‘graph-based’ implies a particular implementation, whereas a standard focused on the real need (being able to port stimulus from the simulator to an emulator, from an emulator to an FPGA prototype, and maybe even from a prototype onto first silicon) may be better received by the industry.
Users also want to port stimulus from the block level to the subsystem and on to the full-system level. The graph-based technology doesn’t replace SystemVerilog, C or assembly; it sits on top of them, Olen pointed out. He stressed that the technique also supports Verilog and VHDL.
Does one size fit all?
However, questions remain. Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence, agreed that graph-based technology makes it easier to create tests than writing them by hand, but for him the unresolved question is scalability. “If I have a chip of 100 IP blocks all connected, does it give me the scalability I need? Does it scale up to complete tasks, and does it actually have enough information to identify for the designer what is actually a valid sequence, what is a valid path and what is not?”
He has observed the technology being used more for debug at this point in time, but for entry, “You really want more automation,” he said. “You really want to automatically generate those tests based on the properties and characteristics of the individual blocks in the system because they know what they can and cannot do as opposed to leaving it up to the designer to identify which path in the graph is valid and which one isn’t…It’s a good start – but designers will need more…There’s nothing wrong with it. We just need to deal with the limitations and need to be aware that it scales to a certain level but there may be issues of scale with very complex designs.”
Mentor’s Olen disagreed. “We sometimes encounter the opposite problem, where when you truly use a BNF or a rule graph to describe the behavior of the system, it actually shows that the system is capable of so many different functions and operations — billions and billions of operations. You come to the opposite conclusion and you say, ‘Holy cow, this is generating way too many tests. How do I pare this down?’ — which you can do, but it’s ironic, because what happens more often is customers say constrained random testing generates lots of tests and this graph is going to be over-constraining. We show them how to use the graphs, and the graph actually ends up generating billions to trillions of sequences, and they say, ‘Wow, I didn’t even know that my test space was this big. Now I’ve got to figure out how to target my tests to just the parts I care about.’”
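To get a feel for how quickly that space grows, consider a hypothetical graph in which a test is a chain of ten transactions and each transaction node offers 30 legal combinations of operation, address range and data pattern. That alone describes 30 to the 10th power, roughly 590 trillion, distinct legal sequences from a description only a few dozen lines long, which is exactly why targeting and paring the space back down becomes the interesting problem.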
At the end of the day, Chappell said, what graph-based technology does is generate more stimulus; it is really targeted at the stimulus coverage side. “The biggest problems a lot of people are having involve the deep-down coverage inside of their design that they are having trouble hitting. So unless you have some technology that is not just black box or gray box — it actually looks at and analyzes a design and tries to figure out how to work its way out backwards — that’s a bigger problem that we need to solve as an industry. There is room for both sides of the equation, but it’s not going to be the silver bullet that solves everything.”
For more on this subject, check out this panel discussion that took place at the recent DVCon.