Ensuring that a chip meets the power budget is now as critical as making sure it operates properly.
Today’s verification tasks may seem daunting — and many of them are — but all of them are necessary to make sure chips operate properly within a larger system. Throw power into the mix and the challenges mount.
The good news is that there is no shortage of tools and methodologies to help with these tasks. The bad news is that even the best tools won’t make the challenges disappear.
Power-aware verification, in particular, has reached maturity after years of piecemeal solutions. That means the tools and flows are stable and available, and they are being adopted as power verification rises in importance alongside functional verification.
“If you design a chip and you have some functionality, and that functionality is not working up to 100%, you have some options,” said Vijay Chobisa, product marketing manager in the emulation division of Mentor Graphics. “You can do some workaround in software, and you can hide that hardware bug in the software so that your chip is okay for that particular application. Or your chip may not be running at 100%, so maybe you slow it down and you still have the functionality. However, with power, let’s say you design your chip for a certain power and your chip is consuming 2x or 3x more power in that particular device, be it a tablet or smartphone. You can’t use that chip because it’s not designed for that battery. It’s not designed for those power rails. It’s not suitable for that application. My belief is, going forward, power is going to become more important than functional verification.”
Vic Kulkarni, senior vice president and general manager of the RTL Power Business at Ansys-Apache, agreed, noting that this is why power verification techniques continue to emerge for multi-domain islands, DVFS techniques, and things like pixel-by-pixel power control at the edge of a TV. “Given that the package is very constrained, there’s no airflow — so chip-package convergence becomes a problem — the heat that it generates unfortunately has a regenerative effect and causes more dynamic voltage problems. This is why it is important to do co-simulation of all the steps. Once you do your initial what-if analysis, chip-package co-simulation analysis can connect all the dots.”
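What Kulkarni calls a regenerative effect is the thermal/leakage feedback loop: leakage power grows roughly exponentially with die temperature, the extra power raises the temperature further, and the loop either settles to a stable operating point or runs away. The sketch below is a first-order illustration of that loop; every constant in it is an assumption chosen for the example, not data from the article.

```python
# Illustrative sketch (not from any EDA tool) of the thermal/leakage
# feedback loop: iterate die temperature to a fixed point, or detect runaway.

P_DYNAMIC_W = 2.0          # assumed switching power, watts
P_LEAK_25C_W = 0.5         # assumed leakage power at 25 degrees C, watts
LEAK_DOUBLING_C = 30.0     # assumed: leakage doubles every 30 degrees C
R_THERMAL_C_PER_W = 12.0   # assumed junction-to-ambient thermal resistance
T_AMBIENT_C = 25.0

def leakage_w(temp_c: float) -> float:
    """Leakage power at a given die temperature (simple exponential model)."""
    return P_LEAK_25C_W * 2.0 ** ((temp_c - 25.0) / LEAK_DOUBLING_C)

temp_c = T_AMBIENT_C
for _ in range(100):
    total_w = P_DYNAMIC_W + leakage_w(temp_c)          # more heat...
    new_temp_c = T_AMBIENT_C + R_THERMAL_C_PER_W * total_w  # ...more temperature
    if abs(new_temp_c - temp_c) < 0.01:                # converged
        break
    temp_c = new_temp_c
else:
    raise RuntimeError("thermal runaway: no stable operating point")

print(f"steady state: {temp_c:.1f} C, {total_w:.2f} W total")
```

With a higher thermal resistance (the constrained package with no airflow that Kulkarni describes), the loop stops converging, which is the runaway case the co-simulation is meant to catch early.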
These days, it’s rare to miss a market window because of functional issues alone. Power is a relatively new wrinkle in the time-to-market equation, because in the past it often was dealt with in later iterations of a chip. That’s not possible anymore.
“For functional issues, you may have a software fix,” said Chobisa. “For power, you don’t have a software fix. You have to go back to the drawing board, and you can miss the whole market window because of that. That’s where the main problem is.”
For advanced designs, particularly in mobility markets, power has been an issue for the past several nodes.
“I haven’t seen somebody who is not doing low power design or power verification in a long time,” observed Srikanth Jadcherla, group director of R&D for low power verification at Synopsys. “We are at close to 100% adoption, except in the case of small ASICs, which are power managed from the outside. They still need to respond to external power events, but they themselves are probably too small to be doing anything within themselves. I’ve seen a few of those, but that’s a diminishing number. Your general-purpose SoC has no way around power and power management — it has to do power verification.”
Back in the mid-1990s, designers were focused on classic low-power design, which was basically beating down the power of functions like multipliers and variable-length instruction decoders. “A function would take X watts, and you wanted to get to half that,” said Jadcherla. “That’s basically the classical beating down of the capacitance.”
Then came automatic clock gating, which helped a lot, with tools doing much of the capacitance reduction. “We also moved from structural to time-bound or time-dependent power reduction. These didn’t require too much verification — they had some amount built in,” Jadcherla noted.
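The knobs in both eras fall out of the standard dynamic power relation, P_dyn = alpha * C * Vdd^2 * f: classic low-power design attacks the switched capacitance C, while clock gating attacks the activity factor alpha. The sketch below simply evaluates that relation; all of the numbers are invented for illustration.

```python
# A back-of-the-envelope sketch (values are assumptions, not from the article)
# of the two eras Jadcherla describes: first shrinking switched capacitance,
# then letting clock gating cut the activity factor.

def dynamic_power_w(alpha: float, cap_f: float, vdd_v: float, freq_hz: float) -> float:
    """Dynamic (switching) power: activity * capacitance * voltage^2 * frequency."""
    return alpha * cap_f * vdd_v ** 2 * freq_hz

baseline = dynamic_power_w(alpha=0.20, cap_f=2e-9, vdd_v=1.0, freq_hz=500e6)

# Classic low-power design: restructure the multiplier/decoder logic so it
# switches half the capacitance.
smaller_logic = dynamic_power_w(alpha=0.20, cap_f=1e-9, vdd_v=1.0, freq_hz=500e6)

# Automatic clock gating: the logic is unchanged, but idle cycles no longer
# toggle the clock tree and registers, so the effective activity drops.
clock_gated = dynamic_power_w(alpha=0.05, cap_f=1e-9, vdd_v=1.0, freq_hz=500e6)

print(f"baseline:     {baseline:.3f} W")   # 0.200 W
print(f"half the cap: {smaller_logic:.3f} W")  # 0.100 W
print(f"clock gated:  {clock_gated:.3f} W")    # 0.025 W
```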
Then a shift came in the early 2000s. Companies such as ArchPro, which Jadcherla founded, used to visit engineering teams and suggest they needed to verify their low power chips, which included both hardware and software. “Very few people believed us, actually, at that time. We kept talking about it. I wrote a book about this in 2008 about how to structure low power verification and how to quantify the coverage. In 2015 we are seeing a fairly mature treatment of power management verification. Again, you have to split low power from power management. Power management mostly involves a complex hardware/software interaction that needs to be verified, and that is what we are seeing today. Most of the SoCs follow two or three principles. First and foremost is that any low power feature you implement, with all of the interactions of hardware/software/voltages, must be proven to be beneficial at the system level. Believe me, this is much harder than it sounds, because low power design backfires. Second, the Boolean algebra is different—what you learn in school versus what actually happens. It’s taken almost 10 years, with the maturity of the UPF standard and the design methodologies, for people to come up to speed, but slowly we’ve been making progress.”
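One concrete way to read Jadcherla’s remark that the Boolean algebra is different is X-propagation: in standard power-aware simulation, outputs of a powered-down domain are treated as unknown (X), and familiar two-valued identities no longer hold. The sketch below is a minimal three-valued model of that behavior, not anything from a particular simulator.

```python
# A minimal sketch of power-aware three-valued logic. Values are 0, 1, or
# 'X' (unknown), where X models the output of a powered-down domain.

def and3(a, b):
    """Three-valued AND: a controlling 0 still wins; otherwise X poisons."""
    if a == 0 or b == 0:
        return 0          # 0 AND anything is 0, even 0 AND X
    if a == 'X' or b == 'X':
        return 'X'        # 1 AND X is unknown
    return 1

def or3(a, b):
    """Three-valued OR: a controlling 1 still wins; otherwise X poisons."""
    if a == 1 or b == 1:
        return 1
    if a == 'X' or b == 'X':
        return 'X'
    return 0

# A domain switched off without isolation drives X into live logic:
off_domain_out = 'X'
print(and3(off_domain_out, 1))   # X -- the corruption propagates
print(and3(off_domain_out, 0))   # 0 -- masked by a controlling value
print(or3(off_domain_out, 1))    # 1 -- masked again
# Note that X OR (NOT X) evaluates to X here, not 1: the law of the excluded
# middle fails, which is the kind of surprise isolation cells exist to prevent.
```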
Adam Sherer, verification product management director at Cadence, pointed out that one of the challenges with verification is measuring the dynamic power under the different operating conditions of the design. “We’ve long been an advocate of metric-driven verification to generate verification plans from the power format. As an example, one of our customers is running all of their regression with power measurement on — more than 7,000 simulations for a given SoC. We’re seeing it become not an individual test or a specific set of tests; customers want to run power all the time, in all of their functional verification.”
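As a rough illustration of what running power measurement across an entire regression can look like in a metric-driven flow, the sketch below treats per-test average power as just another pass/fail metric. The file name, column names, and the 150 mW budget are all hypothetical, not Cadence’s flow.

```python
# Hypothetical sketch: every functional test reports its measured average
# power, and the regression fails tests that blow the budget just as it
# fails functional mismatches.

import csv

POWER_BUDGET_MW = 150.0   # assumed per-test average power budget

def check_regression(results_csv: str) -> list[str]:
    """Return the names of tests whose measured power exceeds the budget."""
    offenders = []
    with open(results_csv, newline="") as f:
        for row in csv.DictReader(f):   # expected columns: test, avg_power_mw
            if float(row["avg_power_mw"]) > POWER_BUDGET_MW:
                offenders.append(row["test"])
    return offenders

if __name__ == "__main__":
    for test in check_regression("regression_power.csv"):
        print(f"FAIL (power): {test}")
```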
Sherer added that power increasingly is an indispensable element of functional verification, and must not be an afterthought. “It must be integrated, it must be part of your functional verification program. In the example I gave of that customer [above], their entire regression manages both functional verification and power together because their design is power-dependent. If you have a power-dependent design, you have to do power verification not as an afterthought, not as 5% of your tests. That’s especially true if you’re in a UVM, randomization kind of environment. If you have even the chance that you may trigger a power state change, you’d better be running under the condition of power-aware verification or you won’t see proper operation of the design. That’s where it really becomes critical. That means it also falls into the normal functional verification flow and warrants a low power verification plan: you have to be able to plan for that to cover the appropriate state changes in your design. You need to make sure it’s complete. We have customers that then take it from RTL all the way through pad analysis and post-layout analysis, where they need to bring in pad information and do gate-level simulations as well. Even in pure digital chips, we’re seeing mixed-signal simulation for that purpose where you have analog power muxing pins.”
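A low power verification plan of the kind Sherer describes ultimately reduces to bookkeeping: which legal power-state transitions exist, and which ones the regression actually exercised. A minimal sketch, assuming an invented three-state power state machine (in practice the legal transitions would come from the UPF power intent):

```python
# Simplified coverage bookkeeping for power-state transitions. The states
# and legal transitions below are made up for illustration.

LEGAL_TRANSITIONS = {            # assumed power state machine
    ("ON", "RETENTION"), ("RETENTION", "ON"),
    ("ON", "OFF"), ("OFF", "ON"),
    ("RETENTION", "OFF"),
}

observed: set[tuple[str, str]] = set()

def record(prev_state: str, next_state: str) -> None:
    """Called whenever the (simulated) power controller switches state."""
    if (prev_state, next_state) not in LEGAL_TRANSITIONS:
        raise AssertionError(f"illegal power transition {prev_state}->{next_state}")
    observed.add((prev_state, next_state))

# What a randomized regression might happen to have exercised:
for hop in [("ON", "RETENTION"), ("RETENTION", "ON"), ("ON", "OFF")]:
    record(*hop)

holes = LEGAL_TRANSITIONS - observed
print(f"transition coverage: {len(observed)}/{len(LEGAL_TRANSITIONS)}")
for prev, nxt in sorted(holes):
    print(f"  uncovered: {prev} -> {nxt}")   # the plan's remaining work
```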
At the same time, even though the issues seem well understood, that doesn’t mean power-aware functional verification is used in every design group outside of the top-tier semiconductor companies.
“Verification teams are only established in a big way in some of the large companies,” said Krishna Balachandran, product marketing director for low power at Cadence. “If you talk about methodology, even today in the new small companies in China, Taiwan, and many parts of Asia, they don’t follow a verification methodology. That’s even for basic functional verification—forget about power. So they are definitely not doing a thorough job of trying to figure out how to verify it with power, even though they have adopted a power-driven flow in terms of implementation. Implementation is ahead in terms of adopting low power design. There they get it. They have to meet the power, so they have to do it. Now, verification in some of these small companies is done by designers. It is not done by verification engineers, so they don’t have a methodology. But they are realizing that if they don’t do it, they’re going to have some bugs, and those bugs can come back to bite them really badly. As a result, even the smaller companies have started looking at adopting verification methodologies that include power.”
At a higher level, verification of the chip has become a very complicated interplay between the functional aspect of the design and the power aspect — and some of the power-related bugs are really hard to find, Balachandran pointed out. That is making things even more challenging.
“They usually turn up as corner cases, so even if the verification engineers are well-intentioned, they are not able to catch them in some cases, and it’s a very tough problem to solve,” he said. “Many of the very smart companies are saying, we cannot do it all with just one hammer. That hammer used to be simulation. Now they are trying multiple hammers. Some of the new hammers they are using are formal verification, and that’s why some of the formal technology for low power is becoming really important. They start looking at the state machine for the power, they start looking at the state machine for the design, and they see what the interplay is between these signals and they automatically create assertions out of those and try to verify those formally. So you get proof that if the signal was like this, then the power state has to be like that, and the design has to be in this particular state. If not, based on the design and the power intent, you don’t have the right design for it. Those kinds of things are being done increasingly and the smarter companies are adopting that hammer.”
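As a toy stand-in for that formal hammer, the sketch below derives one assertion tying control signals to the power state and checks it over every reachable state by brute-force enumeration — exhaustively covering what simulation only samples. Both state machines are invented for illustration; a real formal engine works on the actual design and power intent.

```python
# Toy explicit-state check of an automatically derived low-power assertion.
# The power states, signals, and the reachability "hole" are all made up.

from itertools import product

POWER_STATES = ["ON", "RETENTION", "OFF"]
ISO_VALUES = [0, 1]          # isolation enable
SAVE_VALUES = [0, 1]         # retention save signal

def assertion(power, iso, save) -> bool:
    """Derived property: whenever the domain is not ON, isolation must be
    asserted; and being in RETENTION requires the save signal."""
    if power != "ON" and iso != 1:
        return False
    if power == "RETENTION" and save != 1:
        return False
    return True

def reachable(power, iso, save) -> bool:
    """Assumed reachability from the design + power intent: the controller
    never drops isolation outside ON, but it *does* allow RETENTION with
    save=0 through one sequencing hole -- the corner case simulation missed."""
    if power != "ON" and iso == 0:
        return False
    return True

violations = [s for s in product(POWER_STATES, ISO_VALUES, SAVE_VALUES)
              if reachable(*s) and not assertion(*s)]
for power, iso, save in violations:
    print(f"counterexample: power={power} iso={iso} save={save}")
# Prints: counterexample: power=RETENTION iso=1 save=0
```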
Interestingly, over the last two or three years, emulation has been used increasingly to help with power verification, Balachandran said, at least by the larger companies. “You’ve got the other monster — software — sitting on top, and that is controlling some of the power hooks. And that’s not possible to test out just with simulation because of the cycles involved.”