Experts At The Table: The Trouble With Low-Power Verification

First of three parts: Incompatible tools and methodologies; multivendor tool issues; low-power verification reality check; user issues; the impact of complexity and feature shrinking.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss low-power verification with Leah Clark, associate technical director at Broadcom; Erich Marschner, product marketing manager at Mentor Graphics; Cary Chin, director of marketing for low-power solutions at Synopsys; and Venki Venkatesh, senior director of engineering at Atrenta. What follows are excerpts of that conversation.

LPHP: What are the big problems in low-power verification?
Clark: The biggest problem we’re having is getting all the pieces together at the same time. We have a lot of IP that was developed in-house, and getting UPF or low-power-friendly models of all the IP at the same time so that we can use a unified power flow from RTL through GDSII is really difficult. We use a lot of different tools from different vendors. We have one tool for simulation, another for test, another for implementation, another for P&R, and getting them all to play nicely together and then have all the models is a big challenge.

LPHP: How about the possibility of consolidating on one vendor’s tools?
Clark: We don’t see a path to go to one vendor. We see different strengths for different vendors. We love one for synthesis, but prefer a different vendor’s tool for P&R. We’re constantly going back and forth inside of Broadcom. There is no unified flow. We don’t even have a CAD department.

LPHP: What is everyone else seeing as problems?
Venkatesh: There are four main problems. The first one is that we have two standards (UPF and CPF), and that’s tough on the industry. Things are getting better with UPF 2.1 and CPF 2.0. There is a lot of commonality. Second, even though the standards are coming together, it’s not trivial to master this stuff. The high-level concepts are clear, but to do good verification you need to know all the details. The third problem involves static checks. The obvious checks are fine, but that’s not sufficient. There is almost an explosion of checks required now. One reason is that new checks are required at each new process node. For example, biasing checks are now required. And while voltages were never supposed to be negative, they are negative now. The other new checks are because of methodologies. There are so many ways of doing verification today that what works for one customer may be different from what works for another. We need an open forum to discuss this, and the industry needs to come together to discuss broad technologies. The fourth problem involves verification through the various design stages—RTL, where you check the UPF; within the RTL itself; after synthesis, where you check whether isolation is in the netlist; and after you have the power and ground connectivity. Verification is done through the various design stages, and it’s different at every stage.
Marschner: One of the biggest problems is that users don’t understand enough about the flow and what needs to be verified to even think about putting all the tools together and addressing low power at each stage. There are a number of different concerns with the verification of power intent even before you get into implementation. That requires different strategies and using a collection of different tools together. It also requires a different testing approach. One concern is verifying that power domains can be powered down and powered back up again correctly—this is more about behavior analysis than a static structure. You have to make sure it re-initializes correctly or comes back to the right state every time. How do you specify what state you’re expecting it to return to in order to do exhaustive verification? You certainly can do simulation, but you want a more focused method of verifying that state restoration happens correctly—whether that’s specifically reset, or restoration of all these registers, or a combination of both. That’s a complex problem to address well. And then there are others—the interaction of all the power domains and the power states of the system related to the expected system scenarios you want to support. There’s also the software, because the software is driving the hardware. By the time you’ve really done a thorough job of verifying a low-power design, you’ve had to integrate the hardware and the software with all the details and make sure the software can drive the hardware correctly. This goes well beyond whether you’ve inserted a level shifter in the right place. Many users haven’t grasped that. They’re still looking at a design with two power domains and they have to isolate one or the other.
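The isolation and retention concerns Marschner describes are typically captured in UPF power intent. The fragment below is a minimal, hypothetical UPF 2.x sketch for a design with one switchable domain; the domain, supply, signal, and instance names (PD_cpu, u_cpu, iso_en, and so on) are illustrative, not drawn from any design discussed here.

```tcl
# Hypothetical UPF 2.x fragment: a top-level always-on domain and one
# switchable domain covering instance u_cpu (all names are illustrative).
create_power_domain PD_top -include_scope
create_power_domain PD_cpu -elements {u_cpu}

# Isolate PD_cpu outputs while it is powered down, clamping them to 0,
# under control of an assumed isolation-enable signal iso_en.
set_isolation iso_cpu -domain PD_cpu \
    -applies_to outputs \
    -isolation_signal iso_en -isolation_sense high \
    -clamp_value 0

# Retain register state across power-down so the block can restore to a
# known-good state, Marschner's "comes back to the right state" concern.
set_retention ret_cpu -domain PD_cpu \
    -save_signal {save_en posedge} \
    -restore_signal {restore_en negedge}
```

Verifying this intent then means exercising the power-down/power-up sequence in simulation and checking that clamped outputs and restored registers behave as the strategies specify, not just that the commands parse.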
Clark: And that’s complicated.
Marschner: Yes, but it’s quite tractable with the tools we have today. When you get to large-scale SoC designs with many power domains and a large software component, it gets very complex. We have partial solutions for many of these, but the integration is still a challenge from the EDA vendor point of view as well as the user point of view.
Chin: A couple months ago someone wrote, ‘Are we done with low power? What’s the next big thing?’ When I read that I almost fell out of my chair. We’re just starting. We’re at the point where we can finally begin to click the Legos together. A few years ago people had no idea what they wanted to do. We had requirements at a very high level that we had no concise way of doing. We were fumbling around with, ‘Maybe this will work, maybe something else will.’ What we’ve done in the past five years is finally get to the point where we’re just starting to think about how everything should work. But we’re still a long way from mixing and matching flows and different departments on different projects and different vendors. The assumption behind being able to make all of that work is that we’re all trying to do the same thing. We’re far from that. As an industry, we’re just starting to understand these fundamental little blocks.

LPHP: How far away are we?
Chin: This is like test was 30 years ago. With power, everyone can put together their own little flows and kind of make things work. We’re still a long way from being able to make it all work together well. One area we’re just starting to look at is adding granularity in low power. We’re better at specifying low power, which gives us the fundamental building blocks to verify it. But UPF is still very restrictive for that kind of granularity. We have many years to go to get to where we’re comfortable working with low power—and before our tools and methodologies can handle it.
Marschner: One of the concerns is that everyone seems to have their own verification methodology. It’s because, as Cary said, we’re early in the process and everyone’s cobbling together their own solutions out of parts. In the long run we’re going to see a normalization of methodologies. That will lead to fewer methodologies. The real issue will be getting into a global mindset in the industry about what works and what doesn’t. There are areas we know work today. There are others where we don’t have solutions and people are experimenting.
Clark: Some of the smaller vendors are sitting back and waiting, too. They support some subset of UPF 1.1. They see too much thrash and churn in the industry, and they’ve decided they’re going to support one thing until they see how things turn out. But then they’re way behind. Maybe a tool is great, but it only covers a limited area. And then the other tools don’t work. We’re still on UPF 1.1 because our tools don’t support anything newer.
Venkatesh: I agree that we don’t have enough details to specify and verify. But in a few years we also should have an abstraction. That will help smaller vendors to embrace it and move forward. The detail is there, but it needs to be specified at a more abstract level to move forward.
Clark: One of the major challenges I see on a daily basis is how to verify the correctness of your power intent. Someone writes a document and translates it into UPF. But the person who’s a power expert isn’t a UPF language expert. They can become educated, but do you really want the same person doing all of that? So how do you verify that your UPF is what you want your power to be? This is your golden standard. How do you make sure it’s right?
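One way to make power intent reviewable, closer to the architecture document Clark mentions than to individual isolation commands, is a UPF power state table, which enumerates the legal combinations of supply states. The sketch below is hypothetical; the supply rails and state names (VDD_ao, VDD_cpu, RUN, SLEEP) are invented for illustration.

```tcl
# Hypothetical supply states for an always-on rail and a switchable rail.
add_port_state VDD_ao  -state {ON 0.9}
add_port_state VDD_cpu -state {ON 0.9} -state {OFF off}

# Power state table: only these rail combinations are legal, so a
# reviewer can check each row against the written power spec.
create_pst soc_pst -supplies {VDD_ao VDD_cpu}
add_pst_state RUN   -pst soc_pst -state {ON ON}
add_pst_state SLEEP -pst soc_pst -state {ON OFF}
```

Because each row names an intended system mode, a power architect can review the table without being a UPF language expert, which is part of the gap between intent and specification that Clark raises.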
Marschner: Reviews are the best solution, because there’s no substitute for human thought and reasoning.
Clark: But if it’s at a detailed level, no one will be able to review it.
