Experts at the table, part 1: What is software-driven verification and how does it relate to constrained random generation and UVM?
Verification has been powered by tools that require hardware to look like the kinds of systems that were being designed two decades ago. Those limitations are putting chips at risk, and a new approach to the problem is long overdue. Semiconductor Engineering sat down with Frank Schirrmeister, group director, product marketing for System Development Suite at Cadence; Maruthy Vedam, senior director of system validation engineering at Intel; Tom Anderson, vice president of marketing at Breker; Patil, founder and chief executive officer of Vayavya Labs; and John Goodenough, vice president of design technology at ARM, to talk about software-driven verification. What follows are excerpts from that conversation.
SE: Can you start by defining what software-driven verification means to you.
Schirrmeister: There are two types of software – software that leaves the room and has to be correct, and an increasing amount of software that stays in the building and is used primarily for diagnostics. The software that leaves the room is functionality that needs to be correct, and luckily, in many product categories, it can be updated as required in the field. The amount of diagnostic software is growing, and the biggest reasons appear to be the increasing interactions between the blocks and the fact that designs contain processors. This becomes important because it means that you can use software across all engines, from virtual prototypes and RTL simulation to emulation, FPGA prototyping and post-silicon. In theory there is verification reuse across all of them. Software-driven verification is the use of software to verify hardware.
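That reuse across engines usually comes down to keeping the payload itself unchanged and isolating anything engine-specific behind a very thin layer. The sketch below is a hypothetical illustration of that idea; the "tube" and UART addresses, the SIM_TUBE macro and the function names are assumptions, not part of any real platform.

```c
/* Sketch: keep a test payload portable across engines (virtual prototype,
 * RTL simulation, emulation, FPGA, silicon) by hiding platform-specific
 * details behind a thin I/O layer. Addresses and names are illustrative. */

#include <stdint.h>

/* Per-engine selection of how results escape the design, e.g. a
 * simulation "tube" register versus a real UART data register. */
#ifdef SIM_TUBE
#define CONSOLE_ADDR 0x13000000u   /* assumed simulation tube address */
#else
#define CONSOLE_ADDR 0x40004000u   /* assumed UART data register      */
#endif

static void console_putc(char c)
{
    *(volatile uint32_t *)CONSOLE_ADDR = (uint32_t)c;
}

/* The payload itself never changes between engines. */
void report_result(int pass)
{
    const char *msg = pass ? "TEST PASS\n" : "TEST FAIL\n";
    while (*msg)
        console_putc(*msg++);
}
```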
Vedam: At Intel, we have tried to shift our focus from being a very chip- and hardware-focused company into a system company. We now understand that it is the system that counts, not just the hardware, the software or the firmware. Rather than focusing on the ingredients, we now focus on the complete product. Software is becoming a bigger part of the product. We want to be able to limit the validation scope to the areas that matter, potentially the use cases that describe the ways in which the final product will be used. Software helps us get to those scenarios faster. Software-driven validation is a way of understanding the end usage of the product and validating it earlier in the product life cycle.
Anderson: I see three different but closely related areas. One is traditional hardware/software co-verification. This is running production software on hardware in whatever form or representation you have, including final silicon. That is not software-driven verification. People have been doing this for a long time, it is well understood and it is the right thing to do. Software-driven verification is about leveraging the power of the processors inside an SoC to verify from the inside out. The SoC is all about the processors. They control its function in the real world, so why not use the power of those processors? It supplements traditional testbenches, where you are trying to poke from the outside to make stuff happen. These tests can be either manually or automatically created. There is also software-driven validation, which is when you get the first chip back from the foundry and it often doesn't do exactly what you wanted. Hardware bugs, bugs in the environment; many things can go wrong. Traditionally they would start to write diagnostics, and this task is automatable in much the same way that technology is being used to replace hand-coded software in the verification phase. An interesting question is whether they can all be linked together. Can you use the same process, the same tests and the same generation tools, and how does that tie into the production software you are using for co-validation?
Patil: When we talk about software-driven verification, it means that software is an integral part of the verification flow. It is an inside-out process. The software resides in your SoC and runs on the processors of your design. Verification is carried out in conjunction with the software that resides in the system. This software could be automatically generated or hand written. It is not necessary to achieve full-fledged verification in terms of all of the use cases being generated and getting complete coverage; instead, it is about being able to simulate most of the use cases that are reflective of the real-world scenarios in which the system is going to be used. If you can do that, you have achieved the intent of software-driven verification.
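As a concrete picture of the "inside out" approach Anderson and Patil describe, the sketch below shows what a minimal bare-metal test running on the SoC's own processor might look like. The DMA-style peripheral, its register map and the self-checking scheme are all hypothetical; a real test would use the design's actual memory map and would often be generated rather than hand written.

```c
/* Minimal bare-metal test sketch: exercise a DMA-style peripheral "from
 * the inside out" by programming it from the SoC's own processor.
 * All addresses, bit fields and the peripheral itself are hypothetical. */

#include <stdint.h>

#define DMA_BASE        0x40001000u            /* assumed peripheral base   */
#define DMA_SRC         (*(volatile uint32_t *)(DMA_BASE + 0x00))
#define DMA_DST         (*(volatile uint32_t *)(DMA_BASE + 0x04))
#define DMA_LEN         (*(volatile uint32_t *)(DMA_BASE + 0x08))
#define DMA_CTRL        (*(volatile uint32_t *)(DMA_BASE + 0x0C))
#define DMA_STATUS      (*(volatile uint32_t *)(DMA_BASE + 0x10))
#define DMA_CTRL_START  (1u << 0)
#define DMA_STATUS_DONE (1u << 0)

static uint32_t src_buf[16];
static uint32_t dst_buf[16];

int dma_copy_test(void)
{
    for (uint32_t i = 0; i < 16; i++) {        /* seed a known pattern      */
        src_buf[i] = 0xA5A50000u | i;
        dst_buf[i] = 0;
    }

    DMA_SRC  = (uint32_t)(uintptr_t)src_buf;   /* program the transfer      */
    DMA_DST  = (uint32_t)(uintptr_t)dst_buf;
    DMA_LEN  = sizeof(src_buf);
    DMA_CTRL = DMA_CTRL_START;

    while ((DMA_STATUS & DMA_STATUS_DONE) == 0)
        ;                                      /* poll for completion       */

    for (uint32_t i = 0; i < 16; i++)          /* self-check the result     */
        if (dst_buf[i] != src_buf[i])
            return -1;                         /* fail: data mismatch       */
    return 0;                                  /* pass                      */
}
```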
Goodenough: First you must consider if the product you are validating is the hardware, or if the product is hardware/software. As a processor company we are often just shipping hardware, so we use software to help us validate the hardware. We have another product, a GPU, where the product is a combination of hardware and a more significant part is software, such as drivers. I can apply software-driven validation to either of those, and it is important to distinguish the two. This also applies to more complex things such as platforms, sub-systems and SoCs. We may use software to validate the integration of the hardware and the software that is part of the product. We use software-driven validation to accelerate integration testing, trying to bring as much of the integration testing into pre-silicon and, preferably as an IP provider, into the pre-IP-release world, which is pre-SoC integration. That could be booting use cases. The other is to anticipate use cases and to verify either the hardware or the hardware/software combination. When we release hardware or a hardware/software platform, the software that is going to be running on it is not the software we were testing it with. Software-driven verification is about anticipating use cases and stressing the product in a variety of anticipated use cases. The things that kill us are the unknown unknowns. The real power of software-driven verification is helping us accelerate getting to the known unknowns and to explore the unknown unknowns by putting the systems through more stress. The reason we can do that is that we run the same payloads in simulation, emulation, FPGA and then in silicon. It is not just about running fast; it is about being able to debug a failure when you have one.
SE: Are we now going to have two methodologies in use for verification, UVM for blocks and software-driven for SoCs, and how do we ensure that this does not waste resources?
Schirrmeister: At DVCon, people were discussing whether UVM was at an end. Software-driven verification is the only option when you get beyond pure block-level verification. UVM is great for the block level, and possibly for sub-systems, but when you put it all together and want to investigate how the blocks interact with each other, that is where software-driven becomes important.
Vedam: UVM is a very structured and layered architecture. Is software the next frontier of that layering? As we go from the transaction layer into collections of transactions it seems as if use cases and potentially software could be the next level in the stack.
Schirrmeister: UVM, as it is today, is focused on the block level and we need new things. A new layer. UVM is not going away because you still have to verify the blocks.
Vedam: They could be aligned. It is not about having two different methodologies. There is an opportunity to align them and extend the layered architecture.
Anderson: That is a valid way to think about it. Software-driven verification could be viewed as UVM++. It is the processor that changed everything. UVM works great until you add the processor, but UVM has no notion of a processor, of software running on a processor, or of communication between software and the testbench. None of that exists in UVM. Maybe there could be a new layer that encompasses it. But we are not throwing a lot away, because we are leveraging a lot of UVM. If you have a test case running on the processor that communicates with any I/O of the chip, you will leverage the UVM testbench components for those interfaces. You can leverage the assertions and the coverage. What you don't have to do is write a top-level UVM sequence that ties everything together. That may be subsumed by the software-driven layer.
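One common way to realize the software-to-testbench communication Anderson mentions is a shared "mailbox" location that a testbench component monitors. The sketch below shows only the software side of such a handshake; the mailbox addresses, command encoding and the idea that a UVM agent reacts to them are assumptions made for illustration, not a standard mechanism.

```c
/* Sketch of the software side of a software/testbench handshake.
 * The embedded test writes commands to a "mailbox" location that a
 * testbench component bound to that address can watch, so the testbench
 * drives or checks chip I/O on the software's behalf. The addresses and
 * command values are invented for illustration. */

#include <stdint.h>

#define MAILBOX_CMD    (*(volatile uint32_t *)0x1FFF0000u)  /* assumed */
#define MAILBOX_STATUS (*(volatile uint32_t *)0x1FFF0004u)  /* assumed */

#define CMD_SEND_ETH_PACKET 0x01u   /* ask testbench to inject traffic   */
#define CMD_END_OF_TEST     0xFFu   /* tell testbench the test finished  */
#define STATUS_DONE         0x01u

/* Request external stimulus and wait until the testbench reports done. */
static void request_external_stimulus(uint32_t cmd)
{
    MAILBOX_CMD = cmd;
    while ((MAILBOX_STATUS & STATUS_DONE) == 0)
        ;                           /* set by the testbench when finished */
}

void run_io_test(void)
{
    request_external_stimulus(CMD_SEND_ETH_PACKET);
    /* ... software-side checks of the received data would go here ...    */
    MAILBOX_CMD = CMD_END_OF_TEST;  /* allow the simulation to finish     */
}
```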
Goodenough: When you say UVM, you have to define what you mean. Do you mean reusing the assertions, the transactors or reusing complicated virtual transactions? There is a different answer for each of those. There are some things that can be promoted from unit-level testbenches such as assertions and cover points. Reusing UVM transactors at the system level is not as useful. This is often because you have moved into an accelerated environment and I don’t want anything in there that is going to slow me down.
Patil: If you want to run a test case, and assume it is a pure C-based test case, you need some bare-metal software that acts on top of the hardware. This could be written manually, or you may have a mechanism to generate it. It could come from a high-level hardware/software interface description that could be captured in a language such as SystemVerilog. This captures sequencing that can be turned into bare-metal software.
Goodenough: We find that people start using the real Linux kernel, or hypervisor. You want to do this because it is part of the thing that you are verifying.
Patil: We assume that a base-level kernel has been verified and tested. Beyond that, when you integrate a system with its peripherals and IP, the IP blocks have to interact with the processor, and there is a defined mechanism for this. You need to describe those interactions. That is what we capture, and it is a hardware/software interface. From those specifications it is possible to generate the software. As you move up, how do the IP blocks interact?
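The kind of bare-metal code Patil describes, whether hand written or emitted from a hardware/software interface description, typically boils down to register programming sequences plus the interrupt hand-off between an IP block and the processor. The sketch below is a hypothetical example of such a sequence; the timer IP, its registers and the omitted interrupt-vector setup are assumptions for illustration only.

```c
/* Illustration of bare-metal code that could be hand written or generated
 * from a hardware/software interface description: a register programming
 * sequence plus the interrupt hand-off between an IP block and the
 * processor. The timer IP and its registers are hypothetical, and wiring
 * the handler into the vector table is omitted. */

#include <stdint.h>

#define TIMER_BASE      0x40002000u
#define TIMER_LOAD      (*(volatile uint32_t *)(TIMER_BASE + 0x00))
#define TIMER_CTRL      (*(volatile uint32_t *)(TIMER_BASE + 0x04))
#define TIMER_IRQ_CLR   (*(volatile uint32_t *)(TIMER_BASE + 0x08))
#define TIMER_CTRL_EN   (1u << 0)
#define TIMER_CTRL_IRQ  (1u << 1)

static volatile uint32_t timer_ticks;

/* IP-to-processor interaction: acknowledge the source, update state. */
void timer_irq_handler(void)
{
    TIMER_IRQ_CLR = 1u;             /* clear the interrupt in the IP      */
    timer_ticks++;                  /* visible to the waiting test        */
}

/* Processor-to-IP interaction: the programming sequence. */
void timer_start(uint32_t reload)
{
    TIMER_LOAD = reload;
    TIMER_CTRL = TIMER_CTRL_EN | TIMER_CTRL_IRQ;
}

int timer_interrupt_test(void)
{
    timer_ticks = 0;
    timer_start(1000u);
    while (timer_ticks == 0)        /* wait for at least one interrupt    */
        ;
    return 0;                       /* pass: the interrupt path works     */
}
```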
Goodenough: We tend to find that many of our software tests are written in advance of the hardware. The people writing them are not validation guys; they are software development guys. They don't want the bit in the middle.
Part Two can be found here.
Seconding Anderson, there needs to be a layer encompassing UVM and the interacting software that runs on the processor. The IP blocks are UVM components, their configuration is software controlled, and the processor block itself is simulated and running the software.