Experts at the table, final of three parts: Good engineering judgment; software adds challenges; reducing verification time.
Semiconductor Engineering sat down to discuss power management verification issues with Arvind Shanmugavel, senior director, applications engineering at Ansys-Apache; Guillaume Boillet, technical marketing manager at Atrenta; Adam Sherer, verification product management director at Cadence; Anand Iyer, director of product marketing at Calypto; Gabriel Chidolue, verification technologist in the design verification technology division at Mentor Graphics; and Prapanna Tiwari, senior manager, static and formal verification products, verification group at Synopsys. What follows are excerpts of that discussion. Part one can be found here. Part two can be found here.
SE: Is it possible to get to 100% in verification?
Tiwari: What the EDA industry has provided is different pieces of the puzzle, and it’s not always possible to provide one unified solution that addresses everything from your starting point, high-level synthesis, all the way to your product. It’s practically impossible. But what we have noticed over the last decade is that the designers, the people who are building the products, have invented methodologies where they systematically reduce power from one generation of product to the next to achieve predictable performance with a predictable power decrease. How can they do that? What I typically say is, there is no replacement for good engineering judgment. You can build tools, you can do so many things, but there is definitely no replacement for good engineering judgment. The example I like to cite is from a couple of years ago, when Apple released an iPhone showing a 40x improvement in performance over the seven-year period since the first iPhone. But guess what? Battery technology has only improved 2x in that time. How did they achieve it? By systematically modeling and reducing the power, generation over generation, with the same tools they had.
Chidolue: I would argue they made some changes along the way. The number of power domains must have gone up. Methodology got injected, of course.
Tiwari: There is a whole different layer now that hurts us. In low power designs, I’ve seen several bugs now to that end. Earlier it was all about functionality, and testing that was so much easier. Now you’ve got to figure it out, because the two worlds are completely separate: there are software guys and there are design guys, and they really haven’t talked much to each other. Software can completely screw up everything you’ve done on the design side. You could have the best, most stable and foolproof low power design, but software can come in and just forget to shut off a domain, and your battery life is gone. The software needs to be taken into account, and that’s where elevating the abstraction, which is closer to the software, makes perfect sense. Once you have these kinds of models and describe things at a higher level of abstraction, in C or C++, even SystemC for that matter, you can test it with software, and once you have an implementation that matches that, that’s the perfect world.
Chidolue: You still need to verify all that, and you need a platform that allows it. You just talked about software forgetting to turn off a particular block; you need to be able to verify that, and for that you need a platform that lets you run the kind of cycles needed to see it. One is at a high level of abstraction, but you still need to come down to the accuracy level. You need emulation that allows you to run realistic scenarios.
Sherer: And then we need to be able to describe to those users how to actually do the debug. I agree with you that hardware-based solutions — and we’ve both had these for some time — can run the power format. But they run it as zeros and ones, not X’s, which we can do in simulation. We need to be able to describe to verification teams when to use the two, how to transition from one environment to the next, and how to carry data from one to the next to replicate and resolve quickly, because when we are doing it in hardware, we are late in the process.
Boillet: Or to provide an innovative solution that allows users to do things like checking isolation sequencing and retention sequencing, and making sure you’ve gone through all of the power state tables — to help avoid the need for emulation or simulation.
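(For readers less familiar with the power format, here is a minimal, hypothetical UPF sketch of the kind of intent those static and formal checks examine: isolation and retention strategies with their control signals, plus a power state table. All instance, signal, and supply names are illustrative, and the full supply-network hookup is omitted for brevity.)

    # Supplies referenced below (full hookup omitted for brevity)
    create_supply_port VDD
    create_supply_port VDD_CORE

    # Hypothetical switchable domain for a core block
    create_power_domain PD_CORE -elements {u_core}

    # Isolation: clamp PD_CORE outputs to 0 while the domain is powered down;
    # an isolation-sequencing check verifies pmu_iso_en asserts before power is cut
    set_isolation iso_core -domain PD_CORE -applies_to outputs -clamp_value 0
    set_isolation_control iso_core -domain PD_CORE \
        -isolation_signal pmu_iso_en -isolation_sense high -location parent

    # Retention: save/restore state registers around the power-down sequence
    set_retention ret_core -domain PD_CORE
    set_retention_control ret_core -domain PD_CORE \
        -save_signal {pmu_save high} -restore_signal {pmu_restore high}

    # Power state table: the legal supply combinations a PST-coverage check walks
    add_port_state VDD      -state {ON 0.9}
    add_port_state VDD_CORE -state {ON 0.9} -state {OFF off}
    create_pst soc_pst -supplies {VDD VDD_CORE}
    add_pst_state run   -pst soc_pst -state {ON ON}
    add_pst_state sleep -pst soc_pst -state {ON OFF}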
Sherer: I do agree that you can run formal technology on the power states, as long as you abstract them, but the software element that runs on top will break that, because people will do things that are outside of it.
Boillet: You can make the link through assumptions, you can write a description of the software.
Sherer: Nobody’s going to do that.
Iyer: You provide a solution for 10 power domains, and people run 100 power domains the next minute. That’s where we, as EDA vendors, need to put our heads together and come up with solutions that go beyond what a semiconductor person can really look for.
SE: Isn’t it more than just the tools? We can throw everything at these problems, but don’t you also need a systematic approach to follow, and the ability to measure whether you’re actually covering it?
Chidolue: You’re right. It is all about methodology, it’s all about coverage, it’s all about measurements to figure out where you are, and it’s about having a systematic set of steps. We [tell our customers] to start off with a known good design, the RTL as it were, so you know the design works without the concept of power. Then add the power artifacts: add UPF or whatever format you have, and start your verification process as you go along. We’ve mapped out a whole list of things, including formal, including CDC, including software in the mix, at different parts of the flow.
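(As a concrete illustration of "adding the power artifacts," here is a minimal sketch of the supply network and power switch that UPF layers on top of an unmodified, known-good RTL. All names are illustrative, and the isolation, retention, and power-state details are left out.)

    # Supply network described in UPF, with no changes to the golden RTL
    create_supply_port VDD
    create_supply_net  VDD
    connect_supply_net VDD -ports VDD
    create_supply_net  VSS
    create_supply_net  VDD_CORE        ;# switched rail for the core domain

    # The core instance becomes a power domain fed by the switched rail
    create_power_domain PD_CORE -elements {u_core}
    set_domain_supply_net PD_CORE \
        -primary_power_net VDD_CORE -primary_ground_net VSS

    # Header switch gating VDD onto VDD_CORE under PMU control
    create_power_switch sw_core -domain PD_CORE \
        -input_supply_port  {vin  VDD} \
        -output_supply_port {vout VDD_CORE} \
        -control_port       {ctrl pmu_pwr_en} \
        -on_state           {on_state vin {ctrl}}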
Sherer: The challenge I see is that we can probably name the 50 companies that have the teams, the knowledge, and the experience to do this. Behind them stand 300 to 500 IP suppliers who don’t have the time or the resources, haven’t moved up in abstraction, and have a number of constraints on delivering IP into those flows. So there is still work we need to do as a community to bring out the knowledge we’re all driving into these top-level flows and make it palatable and consumable. That’s the only way these big SoCs are actually going to get built.
Chidolue: I would agree with that as well. Recently we’ve been working with a number of customers — in particular one IP vendor that everyone would know, which has been pioneering through the UPF committee this successive refinement flow that allows you, as an IP vendor, to just describe the constraints for the IP. You can verify those constraints without knowledge of the VDDs or whatever, so at the IP level the IP guy can do the simple things and deliver the IP to their customers. Their customers know the context in which that IP will be used, can add all the other information (VDDs, power states, and so on), and then do the verification as they go along.
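(The successive refinement flow described here splits the power intent roughly into two files, along these lines. This is a simplified sketch with illustrative names; the supply routing, power switch, and power states the integrator would add are only hinted at in the comments.)

    ## ip_constraints.upf -- shipped by the IP provider; no supply nets are named
    create_power_domain PD_IP -include_scope
    set_isolation iso_ip -domain PD_IP -applies_to outputs -clamp_value 0
    set_retention ret_ip -domain PD_IP

    ## soc_config.upf -- written by the integrator, who knows the supply context
    load_upf ip_constraints.upf -scope u_ip0    ;# bind the IP's intent to its instance
    create_supply_port VDD
    create_supply_net  VDD
    connect_supply_net VDD -ports VDD
    create_supply_net  VSS
    create_supply_net  VDD_IP                   ;# switched rail feeding u_ip0
    set_domain_supply_net u_ip0/PD_IP \
        -primary_power_net VDD_IP -primary_ground_net VSS
    ## ...plus the power switch, control hookup, and power states for the full SoC intent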
Boillet: We can also help the designer decipher the UPF or CPF itself, which can be really complex, and we are trying to help in this regard. We are providing, for instance [within our tool], a view of the power intent in the UPF. This makes it much easier for the designer to get a feel for what the intent of the IP was.
Tiwari: There is a lot of work to be done, and the IP guys have to wake up to this, but most of them haven’t yet. The design teams are still waking up and realizing they have to take this seriously, and early in their planning. The way I think EDA tools can help is, if you slice the whole thing, there is a set of stages. You start with verification planning: what am I up against? Then, in slightly overlapping stages, you start running things, with some feedback into the planning. Then you move on to the later stages where you have debugging: figuring out what’s going on and what’s wrong with it. If the interplay of these stages grows, and they are all on one platform talking to each other, we can tell you in simulation what to do and what not to do so that your static checks go easier, and static can send you back to simulation if you missed a coverage point…
Sherer: Our view of ‘platform’ is slightly different in that we do agree that we need to be able to describe that, but the customer is going to choose the technologies that best fit within those use models. We may feel that we’ve added the advantage of integration but we won’t lock them into that.
Iyer: Our view is slightly different. I agree with you that verification needs to be a well-planned process, but we also need to reduce the effort spent on verification. How can we do some correct-by-construction work up front so that you can reduce the verification? Like a power-optimized solution up front, so that you can reduce that complexity.
SE: What about the IP aspect? Is there an IP planner that you give to DesignWare and all of the other IP people, because we still have issues with the IP coming in?
Tiwari: The planner for the IP itself isn’t any different — it’s just a much smaller scope. The bigger problem the IP guy has is how to capture the power intent and IP behavior such that it can go and just plug in.
SE: In the industry, we used to have this checklist of all the things that were supposed to be there. What happened to it?
Sherer: Part of the issue is that it was too dogmatic. The list was untenable, and it was not automated; it was just a list, so to what extent you checked things off was your own choice. I think there are two views of this. There is the IP provider view, where I think we can provide information to help them with the interfaces, with the power format, with everything they need to feed into the flow. On the consumer side, they need to expect it. But I think we have a distance to go in terms of the assembly tools that allow Lego-style construction for power. I take your point about thousands of domains, but no one switches those all individually.
Chidolue: For the provider, it’s about how to capture the constraints and provide them to the integrator.
Sherer: Right, because the integrator is going to get an IP block that has multiple modes of operation, and they are going to choose to shut down a number of them for each derivative of the SoC for different customer configurations, so the IP provider has to be as open as possible, and the integrator is more closed.