First of three parts: What’s changed; the pros and cons of UVM; the evolving nature of complexity; three paths for verification; partitioning the verification; limits of formal, FPGA prototyping; relatively vs. absolutely correct.
By Ed Sperling
System-Level Design sat down to discuss verification strategies and changes with Harry Foster, chief verification scientist at Mentor Graphics; Janick Bergeron, verification fellow at Synopsys; Pranav Ashar, CTO at Real Intent; Tom Anderson, vice president of marketing at Breker Verification Systems; and Raik Brinkmann, president and CEO of OneSpin Solutions. What follows are excerpts of that discussion.
SLD: As complexity increases in SoCs, what’s changing on the verification side?
Bergeron: For me, the question is what hasn’t changed. Now that we see UVM adoption accelerating, what hasn’t changed is the effort required to adopt the methodology properly. It’s natural for people who have a mental model of how things work to try to fit new knowledge into their existing way of working, when the more efficient approach is to figure out how to go with the new flow. Getting RTL designers to pick up verification skills will be a big challenge. That hasn’t changed since the early days of Specman or Vera.
SLD: Is the pain point higher?
Bergeron: It still takes a year to verify a chip. Chips are bigger. The pain points are still there, and their nature hasn’t changed. But there are more of them.
Brinkmann: UVM and its methodologies have slowed down the adoption of assertion-based verification. People have focused on UVM, everyone is writing UVM testbenches, and the assertion-based idea has been pushed out of the spotlight; people complain that they don’t like writing assertions. If you use UVM, you have a convenient excuse not to write assertions for simulation. Using UVM in that way limits the use of assertion-based verification (ABV).
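For readers who have not written one, the assertions being discussed here are typically short SystemVerilog (SVA) properties that live alongside the RTL. The sketch below is a minimal, hypothetical example, not drawn from any particular design: it checks a simple request/acknowledge handshake and fires in simulation, emulation, or formal regardless of whether a UVM testbench is present.

```systemverilog
// Minimal SVA sketch (hypothetical signal names): every request must be
// acknowledged within 4 cycles, and an acknowledge must never appear
// without a request in the previous cycle.
module handshake_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);

  // Bounded liveness: a request is acknowledged within 1 to 4 cycles.
  property p_req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty
  assert property (p_req_gets_ack)
    else $error("req not acknowledged within 4 cycles");

  // No spurious acknowledge.
  property p_no_orphan_ack;
    @(posedge clk) disable iff (!rst_n)
      ack |-> $past(req);
  endproperty
  assert property (p_no_orphan_ack)
    else $error("ack asserted without a prior req");

endmodule
```

A checker like this is usually attached to the design with a bind statement, so the RTL itself stays untouched and the same properties can be reused by simulation and formal tools.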
Ashar: Over the past decade, and particularly over the past couple of years, the nature of complexity has changed. It’s not the number of things that go into the chip; it’s what goes into the chip that has become the pain point in verification. There are a number of reasons. A great example is clock domain crossing (CDC). If you look at mobile SoC applications processors today, they are quite different from a decade ago. A chip is a collection of diverse blocks. All the major mobile companies are working off the same reference designs, and they are competing against each other on integration of the components. The basic blocks are well defined and legacy, so the blocks themselves are easy to understand. The integration challenge is at the interfaces between these blocks and in what you put on top of them. CDC used to be a comfort-level thing. Now it’s a signoff requirement. This requires a different approach to verification.
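As context, the structure at the heart of CDC signoff is small. The sketch below, with illustrative names only, shows the canonical two-flop synchronizer for a single-bit control signal crossing into a new clock domain; CDC tools check structurally that every crossing goes through something like this, then hunt for the harder cases such as multi-bit crossings, reconvergence of synchronized signals, and glitch-prone logic in the crossing path.

```systemverilog
// Minimal CDC sketch (hypothetical names): a single-bit signal launched
// from a source clock domain is resynchronized into the destination
// domain through two back-to-back flops.
module bit_synchronizer (
  input  logic clk_dst,    // destination-domain clock
  input  logic rst_dst_n,  // destination-domain reset
  input  logic d_async,    // signal arriving from the source domain
  output logic q_sync      // safe to use in the destination domain
);
  logic meta;              // first flop may go metastable

  always_ff @(posedge clk_dst or negedge rst_dst_n) begin
    if (!rst_dst_n) begin
      meta   <= 1'b0;
      q_sync <= 1'b0;
    end else begin
      meta   <= d_async;   // capture the asynchronous input
      q_sync <= meta;      // second flop filters out metastability
    end
  end
endmodule
```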
Foster: Today, 79% of designs fall into the category of SoC, meaning you have at least one embedded processor. As a result, the whole nature of design and verification has changed radically in the past few years. There are three concurrent, independent paths for verification. One is IP, which includes externally purchased IP as well as internally developed IP. The second is the system integration itself. It’s a totally different path, and it’s running out of gas in terms of what we can accomplish with simulation. UVM is a wonderful solution for IP, but it doesn’t work here. You have to go to emulation or FPGA prototyping, and FPGA prototyping starts to run out of gas at 20 million gates, when you have to partition the design across multiple FPGAs. The third path is software. It’s not only the application focus; it’s also the hardware-software integration. Can the embedded processor do something as simple as write to the control and status registers within the IP before you get into the applications? Most of these challenges are a combination of hardware, software, and system integration. The real problems today are with system integration.
Anderson: The effectiveness of UVM falls off very rapidly when you look at the whole chip. As one of the people who helped create UVM, I didn’t think that would be the case, but I’m finding from users that it is. It’s too hard for a virtual sequencer to connect 20 different ports together and make sense of it all. In addition, simulation speed becomes an issue. Because most of the chips out there are SoCs with embedded processors, you need to use the embedded processors to help verify the chip, either with an automated approach or with hand-written tests or diagnostics that run on those embedded processors in simulation or emulation. The approach is verification from the inside out. Instead of trying to build an unwieldy testbench and stimulate deep behaviors from the outside through I/O ports, use the processor inside the chip. But the people doing the testbench, IP verification, and connectivity-level testing of the full SoC are often decoupled from the people writing the diagnostics. So how do you unify these disparate efforts into a plan for the whole chip?
SLD: Does the sequence, and the synchronization of the verification process, have to change so it’s more concurrent with the rest of the design?
Foster: I’ve seen the leading-edge companies move to a more predictable process. For example, SoC integration used to involve taking all the IP, putting it together, and then trying to verify that. Now they realize there is a sequence of steps that has to happen before that. First you verify the IP. Then, when you integrate the blocks, the first step is to ensure connectivity; we’re generally using formal techniques to do that today. Once it’s connected right, you have to determine whether the embedded processor can talk to the memories and the control and status registers within the IP. Here you can use graph-based techniques to generate the software tests. Once you’ve got that, you’ve proven it’s connected right and the processor can talk to things. Now you can go to the next level, where you can test use cases, performance, power, and all these other system-level issues. I’m seeing this evolution toward a systematic way of doing this.
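A sketch of the connectivity step Foster describes, with hypothetical module and signal names: the check simply asserts that an IP output and its intended top-level destination always carry the same value, which a formal tool can prove exhaustively rather than sample in simulation.

```systemverilog
// Minimal sketch of a formal connectivity check at SoC integration.
// All names are illustrative; the point is that the property is trivial
// to state and exhaustive to prove.
module connectivity_checks (
  input logic clk,
  input logic uart_irq_o,     // interrupt output of a UART IP
  input logic intc_irq_in3    // input 3 of the interrupt controller
);
  // The two nets must always match, i.e. they are actually wired together.
  assert property (@(posedge clk) uart_irq_o == intc_irq_in3)
    else $error("UART irq is not connected to interrupt controller input 3");
endmodule

// Typically attached with a bind so the RTL itself is untouched, e.g.:
//   bind soc_top connectivity_checks u_conn_chk (
//     .clk          (clk_periph),
//     .uart_irq_o   (u_uart0.irq_o),
//     .intc_irq_in3 (u_intc.irq_in[3])
//   );
```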
Ashar: The goal of partitioning and sequencing the verification process is to control the growth in design complexity. There are different ways of doing that. One is a step-by-step manner. Another approach involves partitioning the verification process differently: you can partition along well-defined verification obligations such as CDC. It’s more of a verification-solutions-based approach than a verification-techniques-based approach. You might be verifying the power management control on a chip, for example. You create silos in the verification process. One of the benefits of this solutions-based approach is that the specification you need for verification becomes implicit. The better defined the specification, the clearer the analysis and its scope, and the finer the resolution you get in debug.
SLD: Isn’t that the classic divide-and-conquer strategy?
Ashar: That’s one way of looking at this space. Another way of looking at it is in terms of the end goals of the verification. There are different dimensions along which you can partition.
Anderson: CDC is supplemental to the fundamental functionality of the design. What about the heart of verifying the design?
Brinkmann: People are starting at the system level with virtual prototypes. That’s where they put all the features in and all the functionality that they want to see at the end. You have a rough model and you verify at the top level, and then you break it down into software and hardware. People didn’t do that years ago. Right now they’re investing in these virtual prototyping platforms. The challenge is how do you preserve the functionality that you’ve verified at the top level through implementation. You can verify certain aspects pretty well with formal, but that’s only a small portion of it.
Ashar: Functionality is not one magic thing. It’s built out of these smaller things. You’re going to simulate a lot of it, and you need to make sure those controls work in a meaningful manner. We are creating examples of how you need to address this elephant in the room called functionality. It, too, needs to be broken down along lines of lower-level detail.
Foster: This represents one thing that has changed since the 1990s. There was a popular phrase, "orthogonalization of concerns," which described the way we verified designs: we verified the functionality independent of anything physical. That has changed. These things have started to come back together. I can verify the functionality at a high level, but when we add timing, power, and all these other things, the functionality may be correct and yet still be broken by those other factors.
Bergeron: One of the challenges is that correctness becomes soft. Verifying functionality is relatively easy: as soon as you get an error, you stop. You’re done. But now, with an SoC, you’ve also got performance and power, and there is no one right answer. As you go along, more details get added. It’s not a matter of finding one bug. You have to look over many simulations to figure out what you have, and you need traffic profiles and power profiles that are realistic. Those change as you add more details.