More information is needed up front, but gathering that information isn’t so simple.
By Ed Sperling
If verification accounts for 70% of the non-recurring engineering expenses in a design, what percentage does verifying a low-power design actually consume?
Answer: No one knows for sure.
The reason has more to do with insufficient data than with tools, processes or flows. That’s also why power models have never been created that apply to more than a single design.
“Power measurement is a challenging problem,” said Bhanu Kapoor, president of Mimasic, a low-power consultancy. “Everyone wants to have elements early in the design cycle, but all the power numbers depend on a very detailed implementation of the design. When you’re dealing with wires, you have all the information you need to calculate power. But at the front end you don’t have that kind of information.”
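Kapoor’s point about wires reflects the standard first-order equation for CMOS dynamic power, in which the switched capacitance is dominated by wire and gate parasitics that are only known after layout:

\[ P_{\text{dyn}} = \alpha \, C \, V_{dd}^{2} \, f \]

Here \( \alpha \) is the switching activity, \( C \) the switched capacitance, \( V_{dd} \) the supply voltage and \( f \) the clock frequency. The first two terms are precisely what a front-end model has to guess at.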
At least part of the problem also involves how different functions and their associated software are used within a design. In a cell phone, for example, a user may be taking a picture, sending e-mail or making a phone call. Each of those activities is a power vector, and the possible combinations of them multiply quickly.
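To see how quickly those combinations grow, consider a minimal sketch in Python. The subsystems, modes and counts below are invented for illustration; the point is only that a handful of independently operating blocks yields a combinatorial number of power scenarios to cover.

```python
# Minimal sketch (hypothetical subsystems and modes) of how concurrent
# use cases multiply into power vectors that verification must cover.
from itertools import product

# Each subsystem of an imagined handset and its possible modes.
modes = {
    "radio":   ["off", "idle", "voice_call", "data"],
    "camera":  ["off", "preview", "capture"],
    "display": ["off", "dim", "bright"],
    "apps":    ["idle", "email", "browser"],
}

# Every combination of modes is a distinct power scenario (vector).
scenarios = list(product(*modes.values()))
print(f"{len(scenarios)} power scenarios from just "
      f"{sum(len(v) for v in modes.values())} individual modes")
# -> 108 power scenarios from just 13 individual modes
```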
“If you run it at the back end you have very few vectors and you can do a SPICE simulation,” Kapoor said. “At the front end you don’t have the details to do it accurately.”
Derivative chips provide at least some relief. Previous iterations of a chip can supply at least some level of detail, but those calculations need to be redone at every new process node, particularly for the logic portion of the chip. Memory is much simpler to characterize.
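As a rough illustration of why those numbers have to be redone, here is a hypothetical first-pass rescaling of a previous chip’s measured power for a new node. Every block name and scaling factor below is invented; real factors come from foundry data and re-characterization, which is exactly why logic cannot simply be scaled forward.

```python
# Illustrative only: rescaling a previous chip's measured power for a
# new process node. All block names and factors are invented.
prev_power_mw = {"cpu_logic": 420.0, "dsp_logic": 180.0, "sram": 95.0}

# Hypothetical node-to-node factors. Logic dynamic power improves with
# capacitance and voltage scaling, but leakage often gets worse, which
# is why logic must be re-measured rather than blindly scaled.
logic_dynamic_scale = 0.70   # assumed C*V^2 improvement
logic_leakage_scale = 1.25   # assumed leakage growth
sram_scale          = 0.85   # memory tends to scale more predictably

estimate = {}
for block, mw in prev_power_mw.items():
    if block == "sram":
        estimate[block] = mw * sram_scale
    else:
        # Crude split: assume 80% dynamic, 20% leakage in the old data.
        estimate[block] = mw * (0.8 * logic_dynamic_scale
                                + 0.2 * logic_leakage_scale)

for block, mw in estimate.items():
    print(f"{block}: {mw:.1f} mW (first-pass estimate only)")
```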
The problem also becomes more complex as more power domains are added to the chip. Raul Garibay, director of OMAP IC engineering in Texas Instruments’ wireless business unit, said his company’s latest chips have 10 voltage domains and dozens of power domains.
“Being able to functionally verify those is one of the most critical things we do,” he told the audience at CDNlive!
Cross-talk needed
At least part of the issue facing design engineers can be solved by education and collaboration. Ask engineers at the system level, the RTL level and the signoff level to define low power and you’re likely to get different answers. But low power has become an issue that spans the entire flow, which means everyone needs to start thinking about it the same way.
“The good news is that over the last two years, with all the fighting over CPF and UPF, people have become aware of the power issues,” said Xi Wang, technology group marketing director at Cadence. “There are different abstractions at the system level, the RTL level, the gate level and the physical level. The next step will be to link them all together. This will happen at the system level, which is where you have to estimate power. Ten years ago we estimated power at the RTL level.”
Some of that information needs to be provided by IP vendors, which isn’t always easy because they don’t fully understand how their IP will be used in designs. Standardized models currently do not exist in this area, although Si2 has created a modeling group to study this issue. So far, it remains in the exploratory stages.
“What isn’t well understood is that all low-power design involves mixed signal,” Wang said. “All the PLLs are designed by analog engineers. Voltage scaling is all analog. And low power is all about analog.”
Prototypical miscues
What isn’t always obvious is just how misleading prototypes can be in this process. Both software and FPGA prototypes can offer huge time savings in getting designs to market, but that sometimes comes at the expense of either performance or power.
“You usually can simulate and verify pretty well,” said Stephen Olsen, a technical marketing engineer at Mentor Graphics. “But there are some conditions, such as where a cache is between two cores, where if you access the same piece of memory in parallel you can slow down the performance. If the algorithm is written right it’s fine. But you don’t know that when you’re hit with the code.”
At least part of this is handled with traces. Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys, said traces are captured from software execution and replayed at a lower level of abstraction, where the detailed results are then abstracted back upward.
“If you make a fundamental wrong decision early on, it is not correctable by clock gating at the low level,” Schirrmeister said.
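A minimal sketch of that flow, with invented block names, tasks and power numbers: a coarse trace from software execution is mapped to per-block activity, costed with a lower-level power model, and the energy is abstracted back up to the task level.

```python
# Sketch of trace-driven power estimation (all names and numbers are
# invented for illustration). A software execution trace is mapped to
# per-block activity, costed at the lower level, then summed back up.

# Coarse trace from software execution: (task, block used, milliseconds)
trace = [
    ("boot",     "cpu",   12.0),
    ("capture",  "cpu",    3.0),
    ("capture",  "isp",    8.0),
    ("encode",   "dsp",   15.0),
    ("transmit", "modem",  9.0),
]

# Lower-level cost model, e.g. from characterizing a previous chip:
# average power per block in milliwatts (hypothetical values).
avg_power_mw = {"cpu": 300.0, "isp": 180.0, "dsp": 220.0, "modem": 450.0}

# Abstract back upward: energy per task in microjoules (mW * ms = uJ).
energy_uj = {}
for task, block, ms in trace:
    energy_uj[task] = energy_uj.get(task, 0.0) + avg_power_mw[block] * ms

for task, uj in energy_uj.items():
    print(f"{task}: {uj:.0f} uJ")
```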
Those traces can also be run inside an FPGA prototype to follow the data going through the chip. But FPGA prototypes, while capable of speeding up designs, add another wrinkle to the design process. In an FPGA, blocks cannot be turned off the way they can be in an ASIC. That makes gathering accurate power information harder, because the interactions between blocks in different modes cannot be measured and instead must be estimated.
“If you use that profile to drive power estimation it’s going to be wrong,” said Cadence’s Wang. “Right now there is no infrastructure to shut off blocks, so it’s hard to fine-tune each chip design. We expect that will change in the next year or two.”
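A small sketch of the correction Wang implies, with entirely hypothetical data: because the FPGA never really shuts anything off, the raw activity profile has to be masked with the intervals where the design’s power intent would have gated each block in silicon.

```python
# Illustrative correction of an FPGA-derived activity profile (all data
# invented). On the FPGA every block toggles continuously, so we mask
# out the windows where the ASIC's power intent would gate each block.

# Raw per-block activity sampled from the FPGA prototype over 10
# windows (1 = active). On the FPGA, nothing is ever really off.
fpga_activity = {
    "gpu":   [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "modem": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
}

# Windows where the design's power intent (e.g. its CPF/UPF
# description) says each block would actually be powered off.
gated_windows = {
    "gpu":   {3, 4, 5, 6, 7},   # GPU off while idle
    "modem": {0, 1, 2},         # modem off until the radio comes up
}

# Zero out gated windows before using the profile for estimation.
for block, samples in fpga_activity.items():
    corrected = [0 if i in gated_windows[block] else s
                 for i, s in enumerate(samples)]
    duty = sum(corrected) / len(corrected)
    print(f"{block}: raw duty 100%, corrected duty {duty:.0%}")
```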
Conclusions
Verification has been one of the most time-consuming and expensive parts of designing semiconductors, particularly functional verification, and that trend shows no signs of abating. While some very good tools exist for making sure a chip is functionally correct, the kind of data necessary to build accurate power models early in the process is missing.
That will likely change over the next couple of years, driven in part by EDA companies looking to cash in on the opportunity and in part by growing clamor for standards that will force this type of information to be created. For it to be completely effective, however, it will need to be accompanied by broad-based education about power. And as many verification experts have made clear, no matter how good the tools, there has to be training in how to use them.