Experts At The Table: SoC Verification

First of three parts: IP qualification and verification; hierarchy of verification tasks; application-specific verification; re-using testbenches; knowledge transfer across the design flow; improving communication between hardware and software teams.

By Ed Sperling
System-Level Design sat down to discuss the challenges of verification with Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence; Charles Janac, chairman and CEO of Arteris, Venkat Iyer, CTO of Uniquify; and Adnan Hamid, CEO of Breker Verification Systems. What follows are excerpts of that discussion.

SLD: As we get more third-party IP, how do we make sure it actually works?
Iyer: We do have a bunch of IP, each with its own conventions and interfaces. If you buy IP from ARC, it would use BDC, while ARM would use AXI. Everyone has their own bus interface and their own verification methodology. Synopsys will still ship verification IP with ARM models. Just getting this whole thing together is a huge challenge. Integration is one challenge. Verification is the other. There is no one-size-fits-all. We have been focused on the integration, so we call it IP integration qualification rather than IP qualification.
Schirrmeister: Handel Jones has data that shows the amount of time and effort spent on IP qualification is surprisingly high. It’s north of 20% within the hardware portion. And then there’s the software and the integration. And it becomes more difficult the higher up you get. You have IP that needs to be verified. That IP goes into subsystems, which need to be verified. Those subsystems go into the SoC, which is really a system of subsystems. And that goes into systems of SoCs, which are even more complex. You need to make sure the underlying pieces are correct and correctly specified, or you will have really bad surprises with the integration teams.
Hamid: Is the time spent in verification because 80% of the problem is down at the IP level, or is it because that’s all we know how to do? We don’t have solutions to do the subsystems and systems of systems, so people stitch everything together and pray that it works.
Schirrmeister: It’s both. You know what to do at the IP level, but if you look at the market from the standpoint of complexity—there are subsystems from Synopsys and Tensilica that include software, hardware and interconnect—those are verified, as well. You can rely on the fact that, within the spec for that subsystem, everything is correct. What becomes increasingly difficult is to understand whether that subsystem still works if you put it into the broader context. Do you starve it because an interconnect doesn’t give it enough data? Do you have the right memories and hierarchies?
Iyer: But why do we verify at the IP level?
Hamid: We have to verify at the IP level. But as you go up that food chain, existing methodologies are designed for the IP verification problem. At the system level it’s hard to do, so people do less of it.
Janac: It depends on what you’re building and how mission-critical it is. Assuming that it’s a mission-critical SoC, rather than a consumable SoC, you basically have to verify it at the unit level, at the IP level, the subsystem level and the SoC level—and the software level. Who’s responsible for that verification is interesting. We see that the interconnect is becoming critical. The issue that everyone has their own protocol can be handled in one of two ways. Either we buy IP from one company with one protocol, or the interconnect absorbs all the different protocols into itself and does the conversion inside the interconnect. That’s what happens with OMAP. You have to do protocol conversion, and you have to separate the IP from the interconnect, because it allows you to switch the IP and the interconnect topology. And you have to have a good verification methodology with the IP that allows you to check the module of the interconnect once you’ve integrated it into the chip. It’s a hierarchy of verification tasks. Some people that are designing consumable SoCs are using the interconnect as the verification IP. When the chip works, then out it goes. This is not good, but if there’s a flaw in those kinds of chips it doesn’t matter, and they’re extremely cheap. The most complex scenarios are the mobility chips. If your phone doesn’t work, people get really angry. Those are mission-critical. They have the constraints of power, space, performance and cost all at the same time, and they have to process huge amounts of information.
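The conversion idea can be made concrete with a small sketch. The Python below is purely illustrative: the transaction classes, field names and internal packet format are hypothetical stand-ins for real protocols such as AXI, not any vendor’s actual interconnect. The point is that each IP keeps its native interface, the interconnect’s boundary logic translates to and from a common internal format, and an integration testbench can check that the round trip is lossless.

```python
# Illustrative sketch only: toy transaction classes standing in for real bus
# protocols. Names and fields are hypothetical, not any vendor's interface.
from dataclasses import dataclass

@dataclass
class AxiLikeRead:          # what an AXI-style master might issue
    araddr: int
    arlen: int              # burst length field (beats - 1)

@dataclass
class InternalPacket:       # the interconnect's own transport format
    address: int
    beats: int
    source_id: str

def to_internal(txn: AxiLikeRead, source_id: str) -> InternalPacket:
    """Protocol conversion at the interconnect boundary."""
    return InternalPacket(address=txn.araddr, beats=txn.arlen + 1, source_id=source_id)

def from_internal(pkt: InternalPacket) -> AxiLikeRead:
    """Convert back at the target side, so the IP never sees the internal format."""
    return AxiLikeRead(araddr=pkt.address, arlen=pkt.beats - 1)

# A round trip should be lossless -- one of the properties an integration
# testbench would check at the boundary between IP and interconnect.
original = AxiLikeRead(araddr=0x8000_0000, arlen=3)
assert from_internal(to_internal(original, "cpu0")) == original
```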

SLD: We have lots of different levels of verification going on now. Whose responsibility is it? Is there a chain of command?
Schirrmeister: An ARM presentation shows unit-level testbenches, top-level testbenches, and then system-level testbenches. ARM is an IP provider, so they have to do all of this even within ARM. Put this with the interconnect into a subsystem and it becomes even more complicated. I would argue that big.LITTLE is its own subsystem.
Janac: It’s not just a subsystem. It has a special problem of coherency.
Schirrmeister: Yes, because they switch back and forth. And then you put that into an SoC. Like all verification, it’s an unbounded problem. You’re never done. When do you feel not so bad about taping it out and then blaming it on your interconnect afterward? There is a huge potential for automation with what Gary Smith calls a silicon virtual prototype.
Janac: One thing we’ve done is to output the interconnect models for verification in a virtual platform. You’re actually going from the lowest level of the IP—the instance level—and then you’re feeding the instance to the verification system one level above, which would be a virtual platform. That’s verifying the entire SoC including the software. It’s a stack of verification responsibility and it’s a verification ecosystem that has to be supported.
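As a rough illustration of feeding an instance-level model into the platform one level above, the sketch below wires a stand-in interconnect model between behavioral CPU and memory models inside a toy virtual-platform loop. All of the classes and the address map are invented for illustration; they do not reflect Arteris’s exported models or any real tool’s API.

```python
# Hypothetical sketch of "feed the instance to the level above": an
# interconnect model from the IP-level flow plugged into a simple virtual
# platform alongside behavioral CPU and memory models.
class MemoryModel:
    def __init__(self):
        self.data = {}
    def access(self, addr, value=None):
        if value is None:
            return self.data.get(addr, 0)
        self.data[addr] = value

class InterconnectModel:
    """Stand-in for the instance-level model generated with the interconnect IP."""
    def __init__(self, targets):
        self.targets = targets          # address-decode table: (base, size, target)
    def route(self, addr, value=None):
        for base, size, target in self.targets:
            if base <= addr < base + size:
                return target.access(addr - base, value)
        raise RuntimeError(f"no target decodes address {addr:#x}")

class CpuModel:
    """Runs the same kind of traffic the SoC software will generate."""
    def __init__(self, bus):
        self.bus = bus
    def run(self, program):
        for op, addr, val in program:
            if op == "store":
                self.bus.route(addr, val)
            else:
                print(f"load {addr:#010x} -> {self.bus.route(addr):#x}")

dram = MemoryModel()
noc = InterconnectModel(targets=[(0x8000_0000, 0x1000_0000, dram)])
cpu = CpuModel(noc)
cpu.run([("store", 0x8000_0100, 0xCAFE), ("load", 0x8000_0100, None)])
```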

SLD: But they’re all interrelated, right?
Janac: Yes. But if the lowest level is junk then levels above it will suffer. Every level has to be very high quality.
Hamid: The problem you’re talking about is how do the different stakeholders play well together. We have talked about hierarchical verification for 20 years. The idea is not new. Now we’re in such trouble that we actually do it. We talk about building testbenches and being able to model these things, but testbenches need test cases. At low levels, when we’re doing switches, we can use random traffic. In a system, random traffic isn’t going to do anything. You need well-structured programs.
Janac: You have to run the actual traffic patterns on the SoC.
Hamid: Yes. And the big problem I see is that you have the IP guy who’s worried about whether the IP works. You have the interconnect guy who’s worried about putting all of this stuff together. You have the system integration guy who understands the system stuff, and the software guy who understands the software part. They all have different pieces and different constraints and needs for their testbenches. We’re starting to re-use design components in designs, but we have no idea how to do that for verification or test cases. How do you do plug-and-play verification, so you can take information from the IP guy, plug it into an environment that works with the subsystem or system or software, and each person along that chain gets to use information from the person downstream from him? This is all about knowledge transfer.
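Both of Hamid’s points can be pictured in a few lines. In the hypothetical sketch below, the first generator produces the independent random transactions that exercise a switch well but mean nothing at the system level, while the second builds an ordered, self-checking program from test actions the IP owner publishes, which the integrator then reuses rather than rewrites. Every register, action and step name here is invented for illustration.

```python
# Illustrative only: every name, register and action here is invented.
import random
from typing import Callable, Dict, List

def random_traffic(n: int) -> List[tuple]:
    """Independent random transactions: fine for a switch or a port,
    but not a meaningful system-level test."""
    return [("write" if random.random() < 0.5 else "read",
             random.randrange(0, 0x1000, 4)) for _ in range(n)]

# "Plug-and-play" test intent: the IP owner publishes what the block can do
# as composable actions; the layers above reuse them.
Action = Callable[[], List[str]]

def dma_ip_actions() -> Dict[str, Action]:
    """What a (hypothetical) DMA engine's verification engineer knows how to drive."""
    return {
        "configure": lambda: ["set_src", "set_dst", "set_length"],
        "start":     lambda: ["write_start_bit"],
        "check":     lambda: ["wait_irq_dma_done", "compare_src_dst"],
    }

def soc_scenario(ip: Dict[str, Action]) -> List[str]:
    """A system-level test case is an ordered program built from the IP's own
    actions, plus the steps only the integrator knows about."""
    return (["boot_minimal_firmware", "program_noc_route"]
            + ip["configure"]() + ip["start"]() + ip["check"]())

print(random_traffic(3))               # noise: useful at the unit level
print(soc_scenario(dma_ip_actions()))  # a structured, reusable test program
```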

SLD: How do we solve that?
Janac: If you have a well-defined IP protocol such as AMBA and use that as a moat between the castle and the countryside—so the IP is verified up to the moat and then you verify the thing that carries the traffic separately, you actually have a chance.
Hamid: I don’t buy that.
Janac: That’s certainly not all you have to do. But the only way to deal with this problem is to segment it across layers, deal with it at each level, and then feed it to the level above.
Schirrmeister: You separate the concerns and then you divide, conquer and integrate.
Janac: You’re eating the elephant one bite at a time—at the unit, module, instance and SoC level. And you’re verifying the subsystems and the interconnect. Then all those verification models have to be fed to the upper level so the system-level verification has a chance. It isn’t just the IP. The problem is EDA. You have different levels of verification where you have simulation, formal verification and various other types. No one is putting all of these verification tools into a unified whole. It’s happening, but there are gaps.
Schirrmeister: Is it perfect? No. But are we going in that direction? Yes. The notion of information exchange is not enough, and EDA is trying to help here. If you just take two people—the software and the hardware guys—they use completely different tools. If there’s a bug, then the finger-pointing starts. The problem is that they have different tools to look at it. We’re trying to bring out tools in EDA so that the hardware and the software guys at least can sit together and duke it out. Multiply that to the SoC level. If you take a database like OMAP, for the new stuff there are C models, for the existing stuff there is RTL, and there are lots of assertions in the verification environment. There may be a chip outside that you have to attach to, and then you need to interface. That already has at least six connections, and in the engines underneath there is simulation, emulation, acceleration or FPGA prototyping. We are moving in the direction of putting it all under one roof. Does it work perfectly? Not yet.
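One way to picture mixing abstraction levels in a single environment, sketched below under purely hypothetical names, is to put a fast untimed model and a cycle-approximate stand-in for the RTL behind the same interface, so the surrounding testbench and its golden checks do not care which one they are driving. This is only an illustration of the idea, not how any particular EDA flow couples C models, RTL and emulation.

```python
# Illustrative sketch of mixing abstraction levels behind one interface.
class FunctionalCrypto:
    """Untimed 'C model' style block: returns the result immediately."""
    def process(self, data):
        return data ^ 0xDEADBEEF, 0          # (result, cycles consumed)

class CycleApproximateCrypto:
    """Stand-in for the block that still lives at RTL: same interface, adds timing."""
    def process(self, data):
        return data ^ 0xDEADBEEF, 64         # same function, 64-cycle latency

def run_platform(block, stimulus):
    """The surrounding testbench does not care which abstraction it is driving."""
    cycles = 0
    for word in stimulus:
        result, latency = block.process(word)
        cycles += latency
        assert result == word ^ 0xDEADBEEF   # same golden check for both models
    return cycles

print(run_platform(FunctionalCrypto(), range(4)))        # new IP as a fast C-style model
print(run_platform(CycleApproximateCrypto(), range(4)))  # legacy block with timing
```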


