Power, Applications Drive New Thinking On System Planning

Application-driven power-aware flows begin taking root, but they require a sharp change in the starting point and execution of designs.


By Ann Steffora Mutschler
Throwing out the term ‘application-driven power-aware design methodology’ may sound like gobbledygook to some, but this concept is keeping many technologists awake at night—especially considering video games that can heat iPads to more than 100 degrees Fahrenheit, uncomfortably hot to hold. The problem is very real, and potentially painful in more ways than one.

The iPad example, along with others such as streaming video and graphics-heavy Facebook applications—which exercise the complete SoC logic and memory subsystem, along with high-speed DDR interfaces and the display itself—shows just how power-hungry consumer applications are, and how much new thinking is needed to address the problems.

“If you have a GPU that is driven by high-activity games—the so-called power users who are really driving the game industry—that can define the maximum power consumption limit. Instead of using the same GPU for multiple applications—low-activity, plain-dialogue movies versus action movies versus streaming 1080p—one can think about creating GPUs that are specifically designed for those applications and optimized with respect to dynamic and leakage power,” suggested Vic Kulkarni, general manager and senior vice president of the RTL business group at Apache Design.
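The tradeoff Kulkarni describes can be sketched with the standard first-order CMOS power model, P = αCV²f + V·I_leak: a general-purpose GPU must run at the voltage and frequency its worst-case workload demands, while an application-tuned unit can drop both for lighter workloads. The numbers below are illustrative assumptions, not measurements from the article.

```python
def power_watts(activity, cap_nf, vdd, freq_mhz, leak_ma):
    """First-order CMOS power: dynamic (alpha * C * V^2 * f) plus leakage (V * I_leak)."""
    dynamic = activity * (cap_nf * 1e-9) * vdd**2 * (freq_mhz * 1e6)
    leakage = vdd * (leak_ma * 1e-3)
    return dynamic + leakage

# General-purpose GPU: one worst-case operating point for every workload.
# Tuned: hypothetical per-application voltage/frequency operating points.
# (activity factor, Vdd in volts, clock in MHz) -- all invented for illustration.
general_purpose = {
    "action_game":   (0.45, 1.10, 800),
    "1080p_stream":  (0.25, 1.10, 800),
    "dialogue_film": (0.10, 1.10, 800),
}
tuned = {
    "action_game":   (0.45, 1.10, 800),  # worst case still needs full V/f
    "1080p_stream":  (0.25, 0.95, 500),  # lower V/f suffices
    "dialogue_film": (0.10, 0.85, 300),
}

for name in general_purpose:
    gp = power_watts(general_purpose[name][0], 2.0, *general_purpose[name][1:], 50)
    ap = power_watts(tuned[name][0], 2.0, *tuned[name][1:], 50)
    print(f"{name}: general-purpose {gp:.2f} W vs tuned {ap:.2f} W")
```

Because dynamic power scales with V² and f, the savings on the low-activity workloads come mostly from the lower operating point rather than the activity factor itself.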

The alternative is creating different application processors and GPUs on multiple SoCs, including 3D stacks. “That’s another thing people are looking at very aggressively—how do we get logic on logic or logic on memory to manage power and heat,” he said. “Application-specific processing units may be better for power optimization as opposed to general purpose CPUs for different applications.”

Application-driven is really where things are headed, agreed Cary Chin, director of technical marketing for low-power solutions at Synopsys. “If you want to do the best possible optimization, one of the problems is that when you are doing the lower-level design, you don’t really know exactly what you are designing for. That’s been a dilemma for a long time. You have to make some decisions about what your target application is, and for the most part over the years we’ve gotten away with this idea of general purpose computing. You can write software to customize things, and then build the hardware however you need. The problem is when the big issue is power—as it has been for the last few years and apparently will continue to be for quite a long time—the optimizations that you need to do make a big difference at the hardware level, and for each particular application the requirements are different.”

But the amount of complexity that is being added into the mix is still on the rise.

“If you think about the amount of stuff that needs to happen—this idea of application-driven in terms of power efficiency is great, but the problem, like in most complicated problems, is that you can’t really do it top-down,” Chin said. “It would sure be nice if we could say, ‘Let’s just make the applications aware,’ and everything works underneath that.”

At the heart of these designs are the highly skilled system architects, who, to Chin, seem analogous to good cooks. “A good cook very rarely has a recipe that they’re cooking from. They don’t say, ‘Here are all the requirements of what I need; I’m going to go get all this stuff.’ A really good cook looks at all the good stuff that’s available and creates things based on what’s available, and I think that’s what system architects do. An architect’s real job is to understand the technology, both on the hardware and the software side and maybe the application side, as well, in terms of how applications are created and distributed. What’s going on in the architect’s mind is that they start to create and formulate what kinds of things can be made with the pieces that are coming together.”

Architects are thinking about customizing logic for very specific needs these days, he noted. “I think that’s what we’re going to start seeing because there are so many gates and so much silicon available. The idea we’ve designed around for the last 30 years or so, which is that we’re generating general-purpose silicon that basically does optimization based on re-use of the silicon for many functions, is exactly the opposite direction of the optimum for low power. To get the lowest-power design you need to custom design the specific function that you need without having any other transitions in your device. If you could create custom logic for everything you would have an ultra-low-power device.”

Summing up the need for application-driven power-aware flows, William Ruby, senior director of RTL power product engineering at Apache Design, explained: “If you look at software and hardware—these are the two basic parts of a system. Traditionally software has not been power-aware at all, and software is now being used to control the hardware. Software itself does not consume power, but it has a huge impact on how much the hardware is burning. What needs to happen is that you need to start thinking about realistic application scenarios.”
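Ruby's point—that software behavior, not hardware alone, determines how much power is burned—can be made concrete with a simple scenario budget: the same device, driven by different usage patterns, draws very different energy. The per-state power numbers and usage timelines below are assumptions for illustration only.

```python
# Assumed average device power per software-driven state (watts).
STATE_POWER_W = {"idle": 0.05, "video": 1.2, "gaming": 2.8}

def scenario_energy_wh(timeline):
    """timeline: list of (state, hours) pairs. Returns total energy in watt-hours."""
    return sum(STATE_POWER_W[state] * hours for state, hours in timeline)

# Two hypothetical 24-hour application scenarios on identical hardware.
commuter = [("idle", 20), ("video", 3), ("gaming", 1)]
gamer = [("idle", 16), ("video", 2), ("gaming", 6)]

print(f"commuter day: {scenario_energy_wh(commuter):.1f} Wh")
print(f"gamer day:    {scenario_energy_wh(gamer):.1f} Wh")
```

A hardware team sizing a battery or thermal envelope against only one of these timelines would badly mispredict the other, which is why Ruby argues the flow has to start from realistic application scenarios.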

There is a huge amount of infrastructure in place to design for functionality, find functional bugs and identify corner cases. “But a lot of that infrastructure is simply not suited for any type of power analysis or power-driven design work: you need something fundamentally different,” Ruby said. “If you look at how the application-driven flows will evolve, you start with the application and then you say, ‘How can I simulate, emulate or somehow replicate the effect of running that application on this hardware?’”

Regarding emulation, Qi Wang, technical marketing group director for low power and mixed signal at Cadence, explained that traditionally engineering teams have done power estimation by running functional tests to get activity data to drive the estimation. But those activities are designed for functional testing and have no way to represent the actual application. “This is the biggest disconnect right now, and people now realize that, but there’s no technology to bridge the gap because of this huge abstraction gap between system abstraction and silicon abstraction,” he said.

Today, however, hardware can be mapped to an emulation box and then system software and applications can be run on it. Emulators traditionally were used to verify system functionality by running that software, but emulation is now being extended to do power estimation, as well.

“If you think about it, it’s a very natural extension because you run those applications anyway to verify the functionality,” Wang explained. “Why not at the same time capture the traces to record all the activity and send them to the power estimation tool to do the power estimation? That kind of bridges the gap between the software application and the actual hardware. And it’s not just for power estimation. We know that a lot of people doing advanced low-power designs do power management and power gating.”
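The trace-to-power step Wang describes can be sketched in miniature: toggle events recorded while an application runs are reduced to per-block activity factors, which then drive a dynamic-power estimate. This is an illustrative sketch, not Cadence's actual flow; the block capacitances, voltage and frequency are invented numbers.

```python
from collections import Counter

VDD = 1.0           # supply voltage (V), assumed
FREQ = 500e6        # clock frequency (Hz), assumed
BLOCK_CAP_NF = {"cpu": 1.5, "gpu": 2.5, "ddr_phy": 0.8}  # effective switched C per block

def estimate_power(trace):
    """trace: list of (cycle, block) toggle events captured during emulation.
    Returns estimated dynamic power per block in watts."""
    toggles = Counter(block for _, block in trace)
    cycles = max(cycle for cycle, _ in trace) + 1
    power = {}
    for block, n in toggles.items():
        activity = n / cycles                       # average toggles per cycle
        cap = BLOCK_CAP_NF[block] * 1e-9
        power[block] = activity * cap * VDD**2 * FREQ
    return power

# Tiny fabricated trace standing in for what the emulator would dump
# while the real application runs.
trace = [(0, "gpu"), (0, "cpu"), (1, "gpu"), (2, "gpu"), (2, "ddr_phy"), (3, "cpu")]
print(estimate_power(trace))
```

The same traces that already exist for functional verification feed the estimate, which is why Wang calls this a natural extension of emulation rather than a new flow.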

Overall, application-driven power-aware tools will need to address the performance angle as well as model and abstract the design to higher levels capable of running a system-level power simulation, Ruby asserted. “These models need to be as accurate as possible. You need to start building a model chain all the way from silicon up to the power spec.”

It comes down to modeling and processing huge amounts of data, he concluded.
