Designing For Energy Efficiency

What changes when the first priority is longer battery life or reducing a seven-figure energy bill?


Swiss watchmakers have nothing to worry about for the moment. As top-name companies crowd into the wearable market with full-featured watches, limits on battery life and the need for frequent charging will undoubtedly limit their popularity.

Smart watches look cool or clunky, depending upon your perspective, but none of them lasts long enough between charges to be a serious market contender. That’s certain to change, though. A mad scramble is under way, propelled initially by the wearable electronics market, for far more efficient designs. While power has long been one of the tradeoffs in the power, performance and area equation, making it the primary consideration is having some interesting repercussions on designs. And that includes designs well beyond the wearable market, including areas such as automotive, medical electronics, televisions, all the way up to the server market.

“Power is what has limited everything to 3GHz,” said Wally Rhines, chairman and CEO of Mentor Graphics. “It has forced designers to stop increasing the clock speed. But it’s also popping up a lot now with ultra low power for the IoT. If you want to implant a device in a person or use a sensor taking data over time, it has to have zero leakage when it’s off. That requires a combination of architectural design, process and cleverness to match the requirements of different types of sensors.”

And that’s exactly what’s happening in the market. Philippe Magarshack, executive vice president of design enablement and services at STMicroelectronics, said customers are migrating back from 14nm finFETs to 28nm fully depleted SOI to take advantage of body and forward biasing. “They are going to a lower Vt when they need performance, and a higher Vt when they don’t. And they’re getting 5X to 6X better results than at 14nm for a similar application. It’s a combination of materials, context and automation, and with the IoT a device has to work for as long as possible on a battery. Power is the number one criterion.”

Nor is this kind of thinking confined just to the IoT. With huge energy bills inside of data centers, sometimes in the realm of seven figures every year—roughly 3% of the world’s energy is now used by data centers—there has been very real justification for rethinking architectures at every level. That includes everything from how a data center is powered and cooled and how racks of servers are positioned within it, to what kinds of chips and power-down strategies are used within those servers.

“This stretches from the server to the set-top box and even the television,” said Frank Ferro, senior director of product marketing at Rambus. “And it applies to the applications, which in the past have been all about performance and cost. Power is becoming critical in those designs to the point where it is now the first consideration.”

Thinking differently about hardware
Defining an overall design goal isn’t so simple, though. Energy efficiency varies greatly from one market to the next, and frequently even from one device to the next or one application to the next. What’s considered efficient in one market may be considered highly inefficient in another.

“An efficient wire-line system may be considered about 5 watts,” said Bernard Murphy, chief technology officer at Atrenta. “A cell phone consumes 5 milliwatts. And an IoT device might consume 5 microwatts. To do that in an IoT device you really have to be thinking about having almost all of that device almost always off. You have a design in which pretty much everything except some kind of basic sensor is off until either a timer or a clock says, ‘Time to wake up.’ Or you might get pinged by a signal into just enough of your radio that’s always on that says you need to wake up, or something like that. Then everything else is off at all times. That also means that you can’t keep any state unless you keep it in flash memory. So if you’re trying to keep log information, that’s got to be in flash.”
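The almost-always-off design Murphy describes can be sketched as a simple duty cycle: everything stays powered down until a timer or radio ping arrives, the minimum work is done, and any state that must survive the next power-down is committed to nonvolatile storage. The wake-source names and the flash-backed counter below are illustrative, not drawn from any specific device:

```c
#include <stdint.h>

/* Hypothetical wake sources for an almost-always-off IoT node. */
typedef enum { WAKE_NONE, WAKE_TIMER, WAKE_RADIO } wake_t;

/* Simulated flash-backed log counter: SRAM loses its contents when
   the rails are gated, so state that must survive sleep has to live
   in nonvolatile storage, as Murphy notes. */
static uint32_t flash_log_count = 0;

/* One duty cycle: stay off unless a timer or radio ping wakes the
   device; then do the minimum work, commit state, and power down.
   Returns the number of log entries written this cycle. */
static int duty_cycle(wake_t reason) {
    if (reason == WAKE_NONE)
        return 0;          /* everything except the wake source stays off */
    /* ...sample the sensor or service the radio here... */
    flash_log_count++;     /* persist to "flash" before sleeping again */
    return 1;
}
```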

Part of the strategy in energy-efficient designs also is to be more specific in what processors and embedded software are designed to do, and there are several approaches to this. One is to add more specific processors into a design, which has been talked about for years but not always successfully executed—in part because of the challenges of deciding which processors need to be on and when and the overriding emphasis on performance first. A second approach is to simply do less—eliminating some features that aren’t used or even to not do fully accurate computation, which is being explored by both the University of California at Berkeley and MIT.
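The "not fully accurate computation" idea in the second approach can be illustrated with reduced precision: quantizing samples to 8 bits before accumulating narrows the datapath and cuts memory traffic at the cost of a bounded error. This is only a sketch of the general technique, not the specific work at Berkeley or MIT:

```c
#include <stdint.h>

/* Approximate accumulation: truncate each scaled sample to 8 bits
   before summing. A narrower datapath and less memory traffic trade
   away some accuracy for energy. Purely illustrative. */
static int32_t approx_sum(const float *x, int n, float scale) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int8_t)(x[i] * scale);  /* 8-bit truncation */
    return acc;
}
```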

A third approach, and one which has garnered more attention of late, is to do more processing faster on a bigger processor so the system can return to sleep sooner, and to send data in bulk instead of streaming smaller quantities over time.
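The bulk-versus-streaming tradeoff comes down to simple arithmetic: a radio or link that stays active for an entire window burns active power the whole time, while a bulk transfer pays active power briefly and sleep power for the rest. The numbers below are illustrative, not measured from any product:

```c
/* Energy (in millijoules) of moving data over a window: active power
   for the transfer time, sleep power for the remainder. Times in
   seconds, powers in milliwatts. Illustrative figures only. */
static double energy_mj(double active_mw, double sleep_mw,
                        double active_s, double window_s) {
    return active_mw * active_s + sleep_mw * (window_s - active_s);
}
```

For example, bursting a buffer in 0.5 s at 100 mW and sleeping at 0.1 mW for the rest of a 60 s window costs far less than holding the link up at 100 mW for the full minute.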

“If you decrease the bandwidth of the video stream, you can approach almost 1 watt in power savings,” said Chris Porthouse, director of market development in the Media Processing Group at ARM. That’s not just for phones, either. “It applies in digital television, too, which is thermally limited because the TVs are so thin. Mobile is even more constrained. And even automotive applications are constrained.”

He said this will become particularly important in the wearable electronics market, where every microwatt counts.

ARM isn’t alone in utilizing this approach. As the IoT becomes more focused, so do the techniques needed to succeed in it—sometimes by using fewer system resources or offering users fewer options.

“Understanding the acceptable performance level is essential,” said Ron Lowman, strategic marketing manager for the IoT at Synopsys. “So maybe you don’t need large amounts of data. There are new standards for wireless that cater to the amount of data to move. And maybe you don’t need to turn on circuits, or they don’t have to be turned on so quickly. The question you need to consider is how much work you’re getting done for a standard amount of energy. Sometimes that is tough to gauge. And what can be done in software versus hardware?”
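Lowman's question—how much work gets done for a standard amount of energy—is usually expressed as a figure of merit such as operations per microjoule. A minimal version of that calculation, with illustrative numbers:

```c
/* Work per unit energy: operations per microjoule, given sustained
   throughput (ops/s) and power draw (mW). A simple figure of merit
   for comparing design points; values are illustrative. */
static double ops_per_uj(double ops_per_s, double power_mw) {
    return ops_per_s / (power_mw * 1000.0);  /* mW -> uW == uJ/s */
}
```

A design running 1 million operations per second at 1 mW delivers 1,000 operations per microjoule; doubling throughput at the same power doubles the figure.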

Better software
One area that shows big promise for improving energy efficiency is the entire software stack. Exact numbers are impossible to pin down because use models and applications vary so widely, but there is almost universal acceptance that the energy savings available from better software are larger than what can be done in hardware.

The trick is to look beyond improving software by itself, or even through co-design. There is a proposal to develop a new standard for the hardware to communicate back to the software how much power is being spent on a particular function or application. The proposed IEEE 2415 standard is aimed at improving communication between “energy-oriented hardware and energy-aware software.” (See related story: http://semiengineering.com/new-power-standards-ahead-2/)

“The goal is setting energy as the first and foremost priority,” said Vojin Zivojnovic, president and CEO of Aggios, one of the earliest and most vocal proponents of the new standard. “That means maximum use of the hardware’s capabilities by the software. Even with modern designs we do not use all of the low power capabilities of a design. The problem is that the software has to understand it, which means you have to list all the components—describing the voltage, frequency and registers that change the power states. And then the trickiest part is how to get events from software to the hardware and not impact the functionality.”
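The component listing Zivojnovic describes—voltage, frequency, and the registers that change power states—can be pictured as a machine-readable table the software consults. The structure and entries below are hypothetical, meant only to convey the spirit of such a description, not the actual IEEE 2415 format:

```c
#include <stdint.h>

/* Hypothetical per-component power-state description: the voltage,
   frequency, and control register that switch each block's state,
   in the spirit of the proposed IEEE 2415 hardware description. */
typedef struct {
    const char *component;
    double      volts;
    double      mhz;
    uint32_t    ctrl_reg;   /* address of the power-state register */
    uint32_t    on_value;   /* value written to enter this state   */
} power_state_t;

static const power_state_t states[] = {
    { "cpu_cluster", 0.90, 800.0, 0x40001000u, 0x1u },
    { "radio",       1.10,  32.0, 0x40002000u, 0x1u },
    { "sensor_hub",  0.60,   1.0, 0x40003000u, 0x1u },
};
```

With such a table in hand, energy-aware software can enumerate every block's low-power capability instead of leaving some unused, which is exactly the gap Zivojnovic points to.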

But software itself can be improved, as well.

“Software may do things that are not necessary, or maybe you can create a specialized element in hardware,” said Krishna Balachandran, product director for low power at Cadence. “The trend is toward a holistic power flow. So more innovative companies are trying to profile a design based on IP reuse and power characterization, and they’re redesigning IP to make it more power-friendly.”

Modifying methodologies
Part of the challenge is associated with best practices, some of which are different depending upon the focus of a design.

“If power is the starting point, you want to begin dealing with it early on,” said Aveek Sarkar, vice president of product engineering and support at Ansys. “But you also don’t want to start that too early because your numbers will be too far off. So if you start with RTL, you can bound the estimates within 15% of gate-level power, and from an architectural point of view you can use those numbers to build your design. If power is the main driver, you’re not going to be looking for obvious things like clock gating. Instead, you’re going to want to use the results to drive more block-to-block interactions. Many design teams don’t consider that now because every block owner looks at his own block. But we’re seeing more and more interest in how to optimize this all.”

In effect, that requires raising the level of abstraction to be able to see what’s going on between blocks and across a system.

“It really starts with the architecture, and it doesn’t start with the architecture of the devices,” noted Atrenta’s Murphy. “It starts with the architecture of the whole system. So, if you are talking about devices that are almost always off, then your whole system has to be built around that. You can’t be pinging these things every other minute because you want to do a software update or you want to get a status or something. You’ve got to architect around it: ‘I can only access these things once every hour or once every day, and then I’d better limit the amount of traffic I’m going to have with it.'”

Where the savings are
It also starts with the utilization of different skill sets at different times in the design process.

“It’s more about when than who,” said Cadence’s Balachandran. “There are still a handful of power experts in each company. It’s not like the non-power guys are getting more expertise in those areas. But there is a lot more thought being put into energy consumption and how to optimize quiet periods. That can be significant. It can mean as much as 20% to 30% of the overall energy consumption.”

Consider, for example, capacitive touch screens. The original versions showed up on GPS devices, where scanning the screen kept a process running constantly. Rather than tie up the main processor, that functionality was offloaded to a separate hardware element to keep updating the system.

A different way of accomplishing the same goal is by using frame buffer compression, where what’s written out from the GPU is compressed, said ARM’s Porthouse. “That allows you to save 50% to 60% of the bandwidth system-wide. There’s also no overhead on performance and you write less data out.”
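The scale of the bandwidth Porthouse is talking about is easy to compute from first principles. The resolution and savings figures below are illustrative, not ARM's measurements:

```c
/* Uncompressed frame buffer write-out bandwidth in MB/s, given
   resolution, bytes per pixel, and frame rate. Illustrative only. */
static double fb_bw_mb(double w, double h, double bpp, double fps) {
    return w * h * bpp * fps / 1e6;
}
```

At 1920x1080, 4 bytes per pixel and 60 frames per second, the GPU writes out roughly 498 MB/s uncompressed; a 50% to 60% compression saving on that stream removes hundreds of megabytes per second of memory traffic, and with it the energy of driving those bits off-chip.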

Another option is simply to use different components. Rambus’ Ferro said that LPDDR4 has been getting far more attention than DDR4 in areas where it traditionally has not been that popular. “Memory is one thing that doesn’t get to go to sleep often,” he said. “That’s why there’s so much interest in LPDDR4. There’s more power granularity than with DDR4. From an architectural standpoint there also is talk about more local SRAM or making memory more efficient, and we will see those architectures improve. The most extreme example of that, which I don’t think will happen anytime soon, is embedded DRAM. That really could improve performance and decrease power, but there are implementation and manufacturing challenges, which is why we’re seeing L2 and L3 cache. Embedded DRAM also requires a bigger chip, and if you make the die bigger you need to charge more for it.”


