Divide And Conquer: A Power Verification Methodology Approach

While there isn’t a single power verification methodology for every design, there are definitely best practices to follow.


It’s no secret that the power verification challenge has grown by leaps and bounds in the recent past, especially considering design complexity and the sharp rise in the number of power domains in an SoC.

As a result, SoC teams want to apply a rigorous verification flow, observed Gabriel Chidolue, verification technologist at Mentor Graphics. “A typical example would be a coverage-driven flow that allows them to measure and track what they are doing in the verification process, and make sure the key aspects, including the power management functionality, are being verified appropriately.”

To manage complexity, there needs to be a divide-and-conquer approach, he said. “There must be a complementary design and verification methodology put in place. The design should help toward early verification, with the ability to use abstraction as a way of dealing with the complexity issue. Starting with the design side of things, you want a way of specifying your IP-based constraints, for example. Then, in creating the rest of the system, you leverage those constraints and create an implementation that satisfies them. This allows you to apply verification successively as well. You start verifying at the block level, go up to the configured system, verify that, and as you get the implementation details, you verify those as well. You break it up into steps.”

John Redmond, associate technical director for digital video technology at Broadcom, pointed out that even among varied products with different power requirements, there is a common power verification methodology that can be used. That can be broken down into four segments: power intent verification (ensuring the power intent described in the UPF is implemented correctly); power consumption verification (ensuring the use case power consumption targets are being achieved); power management verification (the hardware and software used to control the clocks, power islands, voltage, AVS (adaptive voltage scaling) and DVFS (dynamic voltage and frequency scaling)); and power integrity verification (IR drop, inrush current analysis, PDN).
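To make the breakdown concrete, the sketch below shows one hypothetical way a team could track sign-off against these four segments. The segment names come from Redmond's list; the task names, data structure, and tracking approach are invented for illustration and are not Broadcom's actual flow.

```python
# Hypothetical sketch: tracking the four power verification segments as a simple plan.
# Segment names follow the article; tasks and status values are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    tasks: list = field(default_factory=list)
    done: set = field(default_factory=set)

    def coverage(self) -> float:
        # Fraction of tasks signed off for this segment.
        return len(self.done) / len(self.tasks) if self.tasks else 0.0

plan = [
    Segment("power intent verification", ["isolation", "retention", "level shifting"]),
    Segment("power consumption verification", ["standby", "voice call", "3D game"]),
    Segment("power management verification", ["clock gating", "power islands", "AVS", "DVFS"]),
    Segment("power integrity verification", ["IR drop", "inrush current", "PDN"]),
]

plan[0].done.add("isolation")
for seg in plan:
    print(f"{seg.name}: {seg.coverage():.0%} of tasks signed off")
```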

Among these, there are two that stand out today: power consumption verification and power management verification.

In the case of power consumption verification, he said, each product to be verified has targets for different use cases, and the use cases on something like a cell phone could range from standby, a voice call, all the way up to 3D games — anywhere from 1mW to 4W, which is quite a broad range.

“You need to make sure that for each use case you’re meeting the targets,” Redmond said. “One of the key challenges here is to get the right use case. Initially, when you first start a project at the architectural level, how do you estimate the power? That’s currently done primarily with spreadsheets, and it’s a rough ballpark estimate. Then, when we start getting into RTL and gates, we can refine the estimates, make them more accurate and reduce the error bar. One of the issues here is trying to get the use case at the RTL and gate level. What you really need is a simulation that truly represents the use case, but that can be very difficult. For something like 3D games, how can you run a 3D game in simulation? That’s difficult. One of the things we’ve found to be very useful here is emulation, because with emulation you can run real software on real hardware. In simulation verification land you get the correct hardware, but getting the system-level use cases is not really tractable because of the complexity. One of the things we really like to do is run the real software in emulation. In fact, in one of the last chips we did, two-thirds of the power bugs were in software, so we caught a bunch of software problems [through emulation].”
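The spreadsheet-style budgeting Redmond mentions can be pictured as a per-block, per-use-case roll-up compared against a target. The sketch below is a minimal illustration of that idea; every block name, power number, target, and margin is an invented placeholder, not real Broadcom data.

```python
# Hypothetical sketch of early, spreadsheet-style power budgeting per use case.
# All names, numbers, and targets are invented for illustration.

USE_CASE_TARGETS_MW = {"standby": 1.0, "voice_call": 150.0, "3d_game": 4000.0}

# Architect's rough per-block estimates (mW), later refined with RTL/gate-level numbers.
BLOCK_ESTIMATES_MW = {
    "standby":    {"always_on": 0.3, "pmu": 0.2, "ddr_selfrefresh": 0.4},
    "voice_call": {"cpu": 40.0, "baseband": 70.0, "audio": 15.0, "ddr": 20.0},
    "3d_game":    {"cpu": 900.0, "gpu": 2200.0, "ddr": 600.0, "display": 250.0},
}

def check_budgets(estimates, targets, margin=0.10):
    """Flag use cases whose roll-up exceeds the target plus an error margin."""
    for use_case, blocks in estimates.items():
        total = sum(blocks.values())
        limit = targets[use_case] * (1.0 + margin)
        status = "OK" if total <= limit else "OVER BUDGET"
        print(f"{use_case:10s}: {total:8.1f} mW vs target {targets[use_case]:8.1f} mW -> {status}")

check_budgets(BLOCK_ESTIMATES_MW, USE_CASE_TARGETS_MW)
```

As the design moves from architecture to RTL and gates, the placeholder numbers would be replaced with measured estimates and the margin tightened, which is the error-bar reduction Redmond describes.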

Broadcom has been using emulation in this way for about five years, although it has utilized emulation for much longer; it simply started piggybacking the power side onto the functional verification side. “Thankfully, people also want to verify these use cases functionally, so we can piggyback a lot there. But we’re definitely getting to the point where we do need to write specific test cases for power. That’s a challenge. I wouldn’t call it a technical challenge. It’s more of a mindset challenge,” Redmond said.

The other challenge is that verification engineers are used to writing some code to do something, and almost inevitably the first thing they do is turn all the clocks on, because that makes it easiest for them, he explained. “Now what we are asking them to do is start shutting the clocks down, or shutting the clocks off, for this verification. It takes a lot more time and effort to get the chip configured in the lowest power state. Again, if it’s the real software, then that’s good, because that software will go into the final product and it’s good to work on that side of it. But often these system-level emulation runs are also just verification people writing test cases to functionally verify the chip. Getting those people to put in the effort to put the chip in the right power state (turn off the right clocks, etc.) can be a challenge.”

Power management verification
Another key challenge that has come up recently is the verification of power management. “Power management has gotten very complex,” Redmond noted. “This includes managing the clocks, the voltage islands, the power islands, AVS, DVFS. One of the key challenges here is that, when we first did this a few years back, we waited until we got silicon back and then verified it. It actually took quite a long time to get it all right because it involved a lot of firmware support. Oftentimes the power management of a chip is very chip-dependent. It’s not like the GPU, which is the same across all your products and, once you have it verified, doesn’t necessarily change from chip to chip. Power management is really hard to get a handle on pre-silicon because it involves the entire chip. And second, from chip to chip, even if you have a common infrastructure, it changes. It’s not exactly like a single core that stays the same.”

To account for this, Broadcom also uses emulation for power management verification — with much success. “The things being verified here — clocks, power islands — these aren’t the things you want to find out are wrong in silicon. You really need to tackle them pre-silicon,” he noted.

Verifying power is now essential
Power and function are critically intertwined in many semiconductors today, and verifying one without the other can cause problems in a design.

“The idea that power grid verification is somehow a separate step has evolved. When we talk about power-aware verification and applying CPF/UPF on top of the RTL instrumented with state retention, isolation, etc., that’s really something more and more companies are starting to do with every verification run,” said Steve Carlson, vice president of marketing for low power and mixed-signal solutions at Cadence. “So it’s part of the entirety of their verification suite, rather than a huge specialized run to see if I toggled my isolation cells correctly. It’s so intertwined with the rest of the functionality that it’s become inseparable for many.”

As for methodology, it must start with the end in mind, he stressed. “Figure out what needs to be verified, and what the approaches are, as part of a structured plan for verification.”

Moreover, power verification needs to be broken into two categories: IP/block level verification and SoC level verification, said Krishna Balachandran, product marketing director for low power at Cadence. “When you are talking about methodology, it has to encompass all of these because when you do low power verification, from an IP level, you have to make it as robust as possible. At the IP level, you’ve got to make sure that the UPF or CPF that you’re delivering along with IP will work at the higher levels. That’s one task. A second task — also from an IP level — you’ve got to verify its correctness at that level and check its correctness as it integrates at the higher level. All of these have to be contemplated.”
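One way to picture the IP-to-SoC hand-off Balachandran describes is a consistency check: the power states an IP declares (and delivers alongside its UPF or CPF) must line up with the states the SoC-level power state table assumes for that IP. The sketch below is a hypothetical illustration of that check; the IP names, state names, and tables are invented, and real flows would perform this with static low-power checkers on the actual UPF/CPF.

```python
# Hypothetical consistency check: SoC-level power states may only reference
# power states that each IP actually declares. All names/states are invented.

IP_DECLARED_STATES = {
    "cpu_cluster": {"ON", "RET", "OFF"},
    "gpu": {"ON", "OFF"},
    "modem": {"ON", "SLEEP", "OFF"},
}

SOC_POWER_STATE_TABLE = {
    "chip_sleep":   {"cpu_cluster": "RET", "gpu": "OFF", "modem": "SLEEP"},
    "chip_game":    {"cpu_cluster": "ON",  "gpu": "ON",  "modem": "SLEEP"},
    "chip_standby": {"cpu_cluster": "OFF", "gpu": "OFF", "modem": "DRX"},  # DRX is undeclared
}

def check_soc_states(ip_states, soc_table):
    for soc_state, assignment in soc_table.items():
        for ip, state in assignment.items():
            if state not in ip_states.get(ip, set()):
                print(f"ERROR: {soc_state} assumes {ip}={state}, "
                      f"but {ip} only declares {sorted(ip_states.get(ip, set()))}")

check_soc_states(IP_DECLARED_STATES, SOC_POWER_STATE_TABLE)
```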

As part of its push for a successive refinement strategy, Mentor Graphics has been working closely with ARM to see how this strategy would work in a practical environment, Chidolue said. “When you are actually putting a design together, there will be various constraints that you will face, because the design includes soft IP and hard macros, and what we’ve done is to make sure that successive refinement works correctly regardless of how the sub-blocks are brought together. In a top-down environment where you have soft IP, you want the ability to say, for this IP, this is how you can interact with it. Those are your constraints. The person who configures the IP then configures the environment based on the constraints, which are there to check that configuration and make sure it is using the IP correctly from a power architecture point of view.”

One thing about this configured IP is that it’s still abstract. What you don’t have is implementation detail, he noted. “On the one hand you have this design process where you’ve configured the design with the power architecture. You can start to verify this block or this configured IP much earlier, in a verification process that complements the design process. You would do it the same way you would do verification of any complex design. You will have a planning stage, and a verification plan would be created as a result, where you identify the key things that must be verified, as well as the coverage points that tell you those key things have been verified. Also, at this verification planning stage, you can begin to identify the types of verification engines you can use to verify the different components of the system. This includes the type of checks you would do (static checks, formal verification, simulation, emulation).”

Eventually, when you get to the place where you’ve added implementation details, you also do verification there to make sure that the implementation honors the logical power architecture, Chidolue added. “And when you do find bugs at the later stage of verification where you have implementation details, you know that it’s not the logical implementation that’s the problem, it’s something to do with that implementation detail that you just added.”
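The kind of late-stage check Chidolue alludes to, once implementation detail exists, is largely structural: for example, every signal leaving a switchable power domain should pass through an isolation cell before reaching an always-on domain. The sketch below illustrates that idea only; the domain names and crossings are invented, and in practice this is done by static low-power checkers against the real netlist and UPF rather than hand-written scripts.

```python
# Hypothetical structural check: flag domain crossings from a switchable domain
# to an always-on domain that lack an isolation cell. Netlist data is invented.

SWITCHABLE_DOMAINS = {"PD_GPU", "PD_MODEM"}

# (source_domain, sink_domain, signal, has_isolation_cell)
DOMAIN_CROSSINGS = [
    ("PD_GPU",   "PD_AON", "gpu_irq",     True),
    ("PD_MODEM", "PD_AON", "modem_wake",  True),
    ("PD_GPU",   "PD_AON", "gpu_dbg_bus", False),  # would be flagged
]

def check_isolation(crossings, switchable):
    ok = True
    for src, dst, sig, isolated in crossings:
        if src in switchable and dst not in switchable and not isolated:
            print(f"ERROR: {sig} crosses {src} -> {dst} without isolation")
            ok = False
    return ok

if check_isolation(DOMAIN_CROSSINGS, SWITCHABLE_DOMAINS):
    print("All switchable-to-always-on crossings are isolated")
```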

Then, at the SoC level, there is a whole new set of challenges, but in essence, a power verification methodology needs to include a methodology for the block level, and you have to set up a methodology for the SoC level, Balachandran said. “Start with a verification plan, gather coverage. If you are using emulation you get coverage information, then you see how well your states and your power transitions are covered. When it comes to low power methodology, it’s much less about base classes and object oriented programming than it is about trying to understand the power architecture of a design, and then writing the correct set of tests to exercise those points; gathering the coverage information systematically, and then having the tool measure the coverage of what you have achieved versus what you want to achieve.”
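The coverage bookkeeping Balachandran describes, seeing how well power states and transitions are covered, can be reduced to comparing a planned transition set against what the traces actually exercised. The sketch below shows that comparison with invented states and transitions; a real flow would use the power state table from the UPF/CPF and coverage data collected by the simulator or emulator.

```python
# Hypothetical sketch: measure power-state transition coverage from a run trace.
# The planned transitions and the observed trace are invented examples.

PLANNED_TRANSITIONS = {
    ("ON", "RET"), ("RET", "ON"), ("ON", "OFF"), ("OFF", "ON"), ("RET", "OFF"),
}

observed_trace = ["ON", "RET", "ON", "OFF", "ON"]  # e.g., parsed from a run log

observed = set(zip(observed_trace, observed_trace[1:]))
covered = PLANNED_TRANSITIONS & observed
missing = PLANNED_TRANSITIONS - observed

print(f"transition coverage: {len(covered)}/{len(PLANNED_TRANSITIONS)} "
      f"({100.0 * len(covered) / len(PLANNED_TRANSITIONS):.0f}%)")
for t in sorted(missing):
    print(f"  not yet exercised: {t[0]} -> {t[1]}")
```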

Engineers do need to recognize an SoC is an integrated hardware-software state machine. “That means there are driver routines and hardware FSMs to verify together, not one or the other,” said Srikanth Jadcherla, group director of R&D for low power verification at Synopsys. “Traditionally, if you would look at someone doing IC verification, they didn’t particularly care about software as long as they wrote the right patterns. But when software is the one that moves—let’s say, for example, in a tablet or a phone you’re getting an incoming phone call — this is waking up the RF unit and the baseband, and probably the applications processor and the contacts. There’s a chain of wake-ups. Some of that is happening in hardware, some of that is happening in software, and they are literally handshaking with each other. As such, you need to set up a model for that.”
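The wake-up chain Jadcherla describes can be modeled, very roughly, as a sequence of units that each must acknowledge before the next is woken. The toy sketch below only illustrates the handshaking idea; the unit names, ordering, and behavior are invented and are not a real phone's wake sequence.

```python
# Hypothetical toy model of a hardware/software wake-up chain with handshakes.
# Unit names and ordering are illustrative only.

WAKE_CHAIN = ["rf_unit", "baseband", "apps_processor", "contacts_app"]

def wake(unit: str) -> bool:
    """Stand-in for the hardware/software action that powers up a unit and acknowledges."""
    print(f"waking {unit} ... ack")
    return True  # a fuller model would time out or fail if the handshake never completes

def incoming_call():
    for unit in WAKE_CHAIN:
        if not wake(unit):
            raise RuntimeError(f"handshake with {unit} failed; chain stalled")
    print("all units awake; call can be presented to the user")

incoming_call()
```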

Arvind Shanmugvel, senior director of applications engineering at Ansys-Apache, views power verification as being every bit as important as formal or functional verification. “Without having the proper power budgets for a chip, there is no point in the chip operating as expected, so power budgeting and power verification are critical to the success of any product or any chip in its socket.” He suggests power verification methodologies fall into three broad classifications. First is verifying power intent at the transaction level, where the hardware and software interact and where there are high-level power models or transaction-level power models to verify the operation of the system.

Second, Shanmugvel said, is power verification at the architectural level or RTL stage. “This is where we have the low power verification or power intent verification. Once the intent is verified, it’s also important to verify the power budgets to determine what the overall power consumption is going to be. For this, accurate power estimation at the RTL stage becomes very important. One needs to have proper vectors at the RTL stage to exercise the blocks and the SoC and get the proper power numbers for each of those vectors. Power estimation at the gate level is easy because you have all your gates and you have the parasitics, but doing power estimation at RTL you have to make a certain level of assumptions about how the RTL is going to be synthesized, how the clock trees are going to be synthesized, and what the loading conditions will be once synthesized. All of this is more predictive analysis, but over time, looking at several different implementations, the accuracy improves with better vectors.”

He also noted that it’s not only important to estimate the power for the functioning vectors; sometimes power must be understood for standby vectors as well. “For example, when a chip is not operating or when a block is not operating, what is going to be the standby power? Are we having clocks that are fighting all the time? Are we shutting off those blocks and saving on leakage power? All of those types of analysis also need to be done at the RTL stage.”
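The predictive estimation Shanmugvel describes ultimately rests on the usual approximation that dynamic power scales as activity times switched capacitance times voltage squared times frequency, plus leakage. The sketch below shows that arithmetic for both an active and a standby (clocks gated) case; the block names, activity factors, capacitances, frequencies, and leakage numbers are all invented placeholders standing in for the synthesis and loading assumptions he mentions.

```python
# Hypothetical RTL-stage power estimate: P_dyn = alpha * C * V^2 * f, plus leakage.
# Every number below is an invented placeholder for an assumed synthesis outcome.

BLOCKS = {
    # name: (toggle_activity alpha, est_switched_cap_F, clock_Hz, leakage_W)
    "video_decoder": (0.15, 2.0e-9, 600e6, 0.004),
    "cpu_cluster":   (0.20, 5.0e-9, 1.5e9, 0.010),
    "always_on":     (0.02, 0.1e-9, 32e3,  0.0005),
}

VDD = 0.8  # volts, assumed supply

def estimate(blocks, vdd, clocks_gated=False):
    """Sum dynamic + leakage power; gating the clocks zeroes the dynamic term."""
    total = 0.0
    for name, (alpha, cap, freq, leak) in blocks.items():
        dynamic = 0.0 if clocks_gated else alpha * cap * vdd * vdd * freq
        total += dynamic + leak
        print(f"{name:14s} dynamic={dynamic*1e3:8.2f} mW  leakage={leak*1e3:6.2f} mW")
    return total

print(f"active  total: {estimate(BLOCKS, VDD) * 1e3:.1f} mW")
print(f"standby total: {estimate(BLOCKS, VDD, clocks_gated=True) * 1e3:.1f} mW")
```

A standby run with clocks gated exposes exactly the question in the quote: whether leakage alone meets the standby budget, or whether clocks left fighting keep the dynamic term from ever reaching zero.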

Third, Shanmugvel added, going one level above this is being able to do power estimation for full frames, such as an OS boot-up or 4K video frames. “These are important vectors that cannot be reproduced at the block level; they can only be exercised with the complete functioning SoC. Where we see some of these being exercised is during emulation. Emulation-based power estimation is also becoming very important for architectural-level power.”

Anand Iyer, director of product marketing at Calypto, also believes the No. 1 priority in power verification is to look at the power vectors, and establish design coverage on the power vectors.

“First, for good power verification coverage, the power analysis has to be really good. That means the accuracy needs to be pretty high. Even though accuracy in general is not talked about in the context of verification, in order to find the real hot spot you need a highly accurate solution. Second, you must determine what the representative vectors are that will be used to establish the power conditions,” Iyer said. “Exhaustively, you run through the entire software, which is what companies are trying to do with emulation, because they are saying that with the normal vectors they’re not able to cover all of the power scenarios. That has its place and it works for some applications, but for the majority of applications you’d be able to find representative vectors that can actually generate the power-critical conditions or runaway conditions without emulation.”
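One simple way to read Iyer's point is as a ranking problem: estimate the power of candidate vectors quickly and keep the few that hit peak or near-peak conditions as representatives. The sketch below illustrates that selection only; the vector names, power figures, and top-k cutoff are invented, and a real flow would draw the figures from RTL power analysis rather than a hand-written table.

```python
# Hypothetical sketch: pick representative power-critical vectors by ranking
# candidates on an estimated power figure. Names and values are invented.

candidate_vectors = {
    # vector name: estimated average power in mW (e.g., from fast RTL power analysis)
    "idle_loop": 12.0,
    "memcpy_burst": 820.0,
    "fft_kernel": 640.0,
    "cache_thrash": 910.0,
    "video_decode_frame": 760.0,
}

def representative_vectors(vectors, top_k=3):
    """Keep the top_k highest-power vectors as proxies for power-critical conditions."""
    ranked = sorted(vectors.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

for name, mw in representative_vectors(candidate_vectors):
    print(f"selected {name}: ~{mw:.0f} mW")
```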

Broadcom’s Redmond has a different view. “While emulation does take resources, we’ve gotten to the point where it’s invaluable and we will not design a large SoC without it. The two major advantages are schedule reduction and design risk reduction. We have the ability to reduce system-level bugs. In simulation land we do a really good job of testing a block by itself and making sure it works. At the SoC level we do a great job to make sure everything is connected up really well, but in simulation land we miss that part about the pieces of the system working together. That’s where we are really pushing on the emulation platform for these system level tests.”

Still, trying to put together a holistic flow and methodology is difficult, he said, because power verification means different things to different people. As such, the only way to put a power verification methodology together is to bring together people from across the various disciplines in the design and verification realms.


