Second of three parts: Optimizing for either energy or power requires an acute understanding of use cases, the target application, and processor and memory budgets.
By Ann Steffora Mutschler
In the quest to optimize an SoC for both power and energy efficiency, many variables come into play. The target application, use cases, processor choice and amount of memory, among other specifications, all figure into the optimization equation.
As discussed in Part 1 of this series, energy and power are different quantities and must be understood as distinct from each other. Only then can engineering teams begin to apply design techniques to optimize one or the other in a system.
In terms of optimizing either power or energy efficiency for a specific application, a function that takes a certain amount of time to execute could be implemented in many different ways, according to Cary Chin, director of technical marketing for Synopsys’ low power solutions. “If you were to draw the power curve over time, you might see a constant level of power used and draw out a rectangle. Essentially the energy used is the area under that rectangle, which is easy to compute. An alternative implementation of that might actually turn out to be a vertically standing rectangle, which takes much less time to execute but consumes much more power.”
Comparing the areas under those two curves, however, requires a more detailed calculation, and the answer can be quite different, Chin noted. “When you break it down, it depends a lot on things like the energy overhead of certain functions and keeping things on. Leakage plays an important part because that’s what makes the difference between the real work that’s being done (what we associate with dynamic power, or switching transistors) and the overhead of simply keeping the circuit on. In that case, the real work in the horizontal rectangle might just be a slice off the top, because there is a lot of overhead underneath. And if there is a lot of overhead, the vertical rectangle might actually be significantly more energy-efficient. So even though it consumes more power, it will consume it for less time.”
The overhead also figures into how much energy is actually being put to use in computing the function. “When you multiply that by the level of complexity in today’s chips, it really brings out how complicated a computation it is these days to even try to estimate,” he added.
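To make that concrete, here is a back-of-the-envelope sketch of Chin’s two rectangles with a fixed leakage floor added. The Python below uses invented numbers purely for illustration; the point is that once overhead is included, the faster, higher-power implementation can come out well ahead on energy.

```python
# A minimal sketch of the energy comparison Chin describes. The power
# levels, times, and leakage figure are hypothetical, chosen only to
# illustrate how a fixed overhead can favor the faster implementation.

LEAKAGE_W = 0.30  # always-on overhead while the block is powered (watts)

def energy_joules(dynamic_power_w: float, duration_s: float) -> float:
    """Energy is the area under the power curve: (dynamic + leakage) * time."""
    return (dynamic_power_w + LEAKAGE_W) * duration_s

# "Horizontal rectangle": low power, long runtime.
slow = energy_joules(dynamic_power_w=0.10, duration_s=4.0)  # 1.6 J

# "Vertical rectangle": four times the dynamic power, a quarter of the time.
fast = energy_joules(dynamic_power_w=0.40, duration_s=1.0)  # 0.7 J

print(f"slow: {slow:.2f} J, fast: {fast:.2f} J")
```

With these numbers the dynamic energy is identical (0.4 J each way); the entire difference is leakage accrued while the block stays powered.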
That’s just from the hardware design perspective.
On the other side of the table, Mark Mitchell, director of embedded tools at Mentor Graphics, argues that in many cases the software may actually matter more than the hardware. “When you are designing a device, you have some idea of what you want it to be capable of, even some requirement on how much memory it needs—it has to be able to process this many inputs and outputs per second—and from that you can pretty quickly say you’re going to need a processor that runs at 1 GHz with these capabilities on it. At that point, when you look at what your competing choices are, there’s not too much power variation at that level. A lot of the variation, I believe, is between really different classes of devices.”
Relating processor choice to automobiles, Mitchell said that once you decide whether you are looking for a high-performance sports car, a four-door sedan or a pickup truck, you get a pretty good sense of what the instantaneous power of the device is going to be. “But if you want to control the actual energy usage of the device, which is how long the battery is going to last, a tremendous amount of that is at the software level, and it varies greatly from application to application or design to design.”
The operating system, for example, will shut down a peripheral when it’s not in use. But shutting a peripheral down has an associated cost for bringing it back up.
“How long do you want to wait for it to be unused before you shut it down?” he asked. “The longer you wait, the more energy you waste keeping it powered. But if you can’t predict when it’s going to be needed again, waiting might actually be the right decision. Whereas if you know this device is used really infrequently, and as soon as you stop using it you turn it off, that might be a big savings. Those policy decisions are really specific to the device that you are building.”
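Mitchell’s timeout question reduces to a break-even calculation. As a hedged sketch (all figures are invented; a real driver would measure them per peripheral), shut the device down only when the energy saved over the expected idle interval exceeds the cost of waking it back up:

```python
# A sketch of the break-even reasoning behind an idle-timeout policy.
# All values are hypothetical, for illustration only.

IDLE_POWER_W  = 0.050  # power burned while the peripheral sits idle but on
WAKEUP_COST_J = 0.020  # energy to power the peripheral back up

def worth_shutting_down(expected_idle_s: float) -> bool:
    """Shut down only if the idle energy saved exceeds the wakeup cost."""
    break_even_s = WAKEUP_COST_J / IDLE_POWER_W  # 0.4 s with these numbers
    return expected_idle_s > break_even_s

# A peripheral touched every few seconds is worth gating off...
print(worth_shutting_down(expected_idle_s=3.0))  # True
# ...one polled every 100 ms is cheaper to leave running.
print(worth_shutting_down(expected_idle_s=0.1))  # False
```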
Still, the conventional wisdom is that high performance leads to high power, but that doesn’t necessarily lead to higher energy, said Pete Hardee, director of solutions marketing at Cadence. “If I’m worried about power, then I’m likely to reduce the clock frequency to minimize the power and take as long as I can to do the processing—take as long as I have available. That will get the processing done at the minimum power, but the energy is basically the same. It’s the same amount of processing, just taking longer, so the energy is going to be similar when you look only at the dynamic power and use clock reduction as the technique to deal with it.”
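A quick check with the standard dynamic-power model P = C·V²·f (the capacitance, voltage and cycle-count values below are made up) shows why frequency scaling alone leaves the energy unchanged: the frequency cancels out of the energy term.

```python
# Why frequency scaling alone leaves dynamic energy unchanged, as Hardee
# notes. Uses the standard CMOS model P_dyn = C * V^2 * f with
# hypothetical values; N is the cycle count of the workload.

C = 1e-9  # effective switched capacitance (farads), hypothetical
V = 1.0   # supply voltage (volts), held constant here
N = 1e9   # cycles of work to do

for f in (1e9, 0.5e9):        # 1 GHz vs. 500 MHz
    power = C * V**2 * f      # watts
    time = N / f              # seconds to finish the work
    print(f"{f/1e9:.1f} GHz: {power:.2f} W x {time:.1f} s = {power*time:.2f} J")

# Both cases print 1.00 J: halving f halves power but doubles time.
# Only lowering V as well (voltage/frequency scaling) changes the energy.
```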
There’s another variable in all of this, too: leakage. The best way to manage leakage is to turn circuitry off. Approaching the situation this way opens up the possibility of minimizing energy, and even average power, by computing as fast as possible and then shutting off part of the circuitry for a longer period of time.
“This is an example of why it has to be managed by both software and hardware,” Hardee noted. “Whether you can do that, and whether the right choice is to spread out the computing and take all the time available at a lower instantaneous power, or to get it done quickly and shut everything off, is very application-dependent. I’m going to make different decisions on that based on various system criteria around the application.”
He explained that it depends on whether the application is regular or bursty. “If I’m rendering Web pages, that tends to be a very bursty activity. I want to see that Web page in all its glory, with its great graphics, as quickly as possible, and I might not see another one for a few seconds. Processing video is very different. It’s a more regular thing. There’s a frame rate I have to deal with. For one, I might decide to render the graphics as quickly as possible and then shut down. With the other, I might need to keep the system alive, process the video, and take as much of the frame time as I need to do it. I can’t shut down, because I know I’ve got 25 frames per second and I don’t have time to shut down.”
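A rough sketch of that trade-off, again with invented numbers, compares “race to idle” against spreading the same work across the whole period. The bursty strategy wins here only because it earns a window in which leakage can be gated off entirely; a steady video pipeline may never get that window.

```python
# A sketch of the race-to-idle trade-off Hardee describes: compute fast
# and power-gate, versus running slowly for the whole period. Numbers
# are illustrative only; both cases do the same dynamic work (8 mJ).

LEAK_W = 0.20     # leakage while powered, regardless of activity
PERIOD_S = 0.040  # e.g. one frame at 25 fps

def race_to_idle(active_power_w: float, busy_s: float) -> float:
    # Run hot for busy_s, then gate everything off for the rest of the period.
    return (active_power_w + LEAK_W) * busy_s

def spread_out(active_power_w: float) -> float:
    # Stay on at lower power for the entire period; leakage accrues throughout.
    return (active_power_w + LEAK_W) * PERIOD_S

print(f"race:   {race_to_idle(active_power_w=0.80, busy_s=0.010) * 1e3:.1f} mJ")
print(f"spread: {spread_out(active_power_w=0.20) * 1e3:.1f} mJ")
# race: 10.0 mJ, spread: 16.0 mJ -- but only if shutdown is actually possible.
```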
Industry-wide challenge
Put in the context of system developers, just how big are these issues?
“One concrete measure is that a fair number of the silicon companies that we are dealing with now have more software engineers on staff than they do hardware engineers. It’s not that they have a lot of really expensive, good hardware engineers in a high-cost area and then they have this vast army of low-cost software engineers offshore someplace. No, they actually have high-cost, very highly capable software engineers on staff as well because the software problems are getting to be so significant,” Mentor’s Mitchell pointed out.
And it gets worse. “As we hit the Moore’s Law scaling limits and various physical limits, we move to hardware architectures that are complex on one axis but simple on another. Instead of making a faster and faster processor, we are saying, ‘Here, have two cores or eight cores or 128 cores.’ In a way that’s very complex from the hardware point of view. You have a lot of transistors hanging around, but in a way it’s also kind of simplistic,” he said. “You’re saying, ‘I don’t know what to do next as a hardware engineer, so here, have a lot of cores. Software guy, it’s your problem; go figure it out.’ Unfortunately, taking advantage of that efficiently in software is really complex, and it’s not just around cores.”
He offered another example: programmable I/O units. Instead of building a dedicated Ethernet unit onto an SoC, an engineer may build a small programmable I/O block that can serve as an Ethernet port or some other kind of I/O port; it just needs software to control it. “We keep introducing places where what used to be done in hardware now has to be done in software, and that increases the complexity and gives us a lot more opportunities. It’s more flexible, but there are also more chances to screw things up.”