Low-Power Verification

Addressing power issues is no longer just the architect or verification engineer’s problem.


Functional verification has been a consideration throughout the design flow for the past several process nodes. Low power verification has been more of an afterthought.

That’s beginning to change, though, as the challenge of integrating IP blocks and the physical effects of shrinking wires and RC delays in interconnects begin affecting power and performance in designs. What’s becoming increasingly clear is that understanding the impact of power—really understanding it and how to avoid problems—can directly affect the performance and battery life of an entire device.

This kind of verification will always be a mind-bending challenge. Just look at user models for the ubiquitous smart phone. A person who plays graphics-intensive action games may get five hours of battery life, while another user gets three days between charges because they only use the phone for calls and occasional e-mail and Internet access. And a person who works in a good reception area will get significantly better battery life than one who doesn’t.

But those are architectural issues, and they’re relatively well understood even though addressing them isn’t so simple. The new challenge is making sure all of this is possible at the implementation level, where power has been sneaking up on design engineers one node at a time. Bunching wires around memories, packing in more functionality that adds contention for buses, memories, and I/O, and running things at full power for varying amounts of time can have a big impact on everything from how things are laid out to the software used to manage various blocks, subsystems, or even code deeply embedded inside of IP.

What’s changed is that this isn’t the kind of problem that typically shows up until the implementation and integration phase of the design. In the past it could be ignored, handed over the wall to the verification and software teams, or left to the foundries to fix on the process side. That’s no longer possible. For one thing, foundries now charge per wafer, not per good die. For another, the functional and physical verification teams are so swamped they’re now starting to throw design problems back to the design teams to fix. And software, no matter how much it has served as a Band-Aid in the past, is generally the least efficient means of solving a power issue.

As the mainstream process node slips to 40nm, these changes are really beginning to hit home at every step of the design flow. At 28nm, they have become a core consideration for everyone. And at 20nm, particularly with 14nm finFETs thrown in, they can make the difference between whether a design is successful—or whether it works at all.

—Ed Sperling
