When Worlds Collide: Saving Power In Communications Applications

Understanding total energy consumption puts a new emphasis on understanding the impact of the software used in a system. A single application can throw off everything.


By Ann Steffora Mutschler
The interplay of hardware and software is a given in every device that contains a semiconductor chip, but is typically felt more acutely in communications applications given the extremely close dependencies for everything power-related. Managing power in these situations just gets more challenging as consumers demand more and better applications on their tablets, smartphones and other mobile devices.

Power always needs to be addressed at many levels simultaneously because there are both raw technology factors, such as the semiconductor process itself, and system issues, such as which algorithms you run and how the handset and base station collaborate, stressed Chris Rowen, CTO of Tensilica. “All of this can have significant effects on what the total energy consumption is of the system. As you go up in the level of abstraction you move away from the individual transistors and talk about what the system behavior is, so you can get larger and larger relative savings.”

This is because if the system can be organized such that no communication is required, or one bit of information tells the whole story, then gigabytes of information may not have to be moved across the network, he said. The problem is that the engineering team must know exactly what single bit tells the story. “There are many things that people do in finding ways to store data instead of communicate data, to encode data more cleverly, to make data communication more resilient so that they can avoid doing work, avoid doing communication processing and therefore save lots of power because the radio never has to go on, or has to go on very infrequently and the encoding can be greatly simplified.”
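The idea of encoding data cleverly so the radio rarely has to turn on can be illustrated with a simple sketch (the threshold-based filter below is a hypothetical example, not a technique from the article): only readings that differ meaningfully from the last transmitted value ever reach the radio.

```python
# Illustrative sketch: transmit only when a reading changes beyond a
# threshold, so the radio can stay off for most samples.
def filter_transmissions(readings, threshold=0.5):
    """Return the subset of readings worth sending over the radio."""
    to_send = []
    last_sent = None
    for value in readings:
        # Send only if the value differs meaningfully from the last one sent.
        if last_sent is None or abs(value - last_sent) >= threshold:
            to_send.append(value)
            last_sent = value
    return to_send

# A slowly drifting sensor stream: most samples never hit the radio.
samples = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0, 25.2]
print(filter_transmissions(samples))  # → [20.0, 21.0, 25.0]
```

Of the seven samples, only three cross the threshold, so the radio powers up three times instead of seven; knowing which "one bit of information tells the whole story" is exactly the threshold-setting problem Rowen describes.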

“At the other end of the spectrum,” Rowen continued, “there are many things that you can do at the level of the circuit design, the logic design, the processing architecture, which can significantly reduce the power as well—even once you accept that certain standards and certain communications protocols can be used and the intelligent chip architect or system architect is aware of when they have to live within the ground rules of the standard that they are implementing.”

Techniques for saving power
From a software perspective, power-saving techniques are being driven by emerging architectures such as ARM’s big.LITTLE, which pairs a low-power companion core that can take over the system when performance requirements are low with faster CPUs for periods of high performance demand, said Achim Nohl, technical marketing manager for Synopsys’ solutions group. Within this approach is the new concept of switching, on a very coarse-grained level, from a high-performance, high-energy-profile CPU to a low-performance, low-energy-footprint CPU.

“At the same time,” he said, “there is an orthogonal technique for power saving—dynamic voltage and frequency scaling (DVFS) where in parallel to big.LITTLE you are able to scale down the frequency of a cluster or of a single CPU. That can only be done by predicting what the performance requirements are for the specific workload to perform a just-in-time completion of a specific task. I need to know how much processing power will be required in order to satisfy this task so that it can be computed just-in-time.”
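The just-in-time idea behind DVFS can be sketched as a simple selection problem: given a predicted workload and a deadline, pick the lowest operating point that still finishes on time. The frequencies and workload figures below are illustrative assumptions, not real silicon data.

```python
# Hedged sketch of just-in-time DVFS frequency selection: choose the
# slowest available frequency that completes the predicted work on time.
def pick_frequency(cycles_needed, deadline_s, freqs_hz):
    """Return the lowest frequency (Hz) that meets the deadline."""
    for f in sorted(freqs_hz):
        if cycles_needed / f <= deadline_s:
            return f
    return max(freqs_hz)  # can't make the deadline; run flat out

freqs = [200e6, 600e6, 1.2e9]  # hypothetical DVFS operating points
f = pick_frequency(cycles_needed=5e8, deadline_s=1.0, freqs_hz=freqs)
print(f)  # 600 MHz: the slowest point that still meets the 1-second deadline
```

The hard part in practice is the `cycles_needed` prediction itself, which is exactly the workload-prediction problem Nohl describes: mispredict low and the task misses its deadline, mispredict high and energy is wasted.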

There’s a lot of impact on the software and on the whole workload prediction, and schedulers must become power-aware. There is also a contrasting scheme called “race to idle,” where rather than scaling voltage and frequency you run as fast as possible and then remain in idle mode as long as possible. These approaches are hard to evaluate against each other because they are highly scenario-dependent, where a scenario encompasses both the software and the whole user behavior, Nohl said.
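The scenario dependence of race-to-idle versus DVFS can be made concrete with a toy energy model. Everything below is an illustrative assumption (a simplified V²·f dynamic-power term plus a fixed idle floor), not measured data; the point is only that the winner flips as the numbers change.

```python
# Toy comparison of "race to idle" vs. DVFS over a fixed time window:
# run an active burst at some (frequency, voltage) point, then idle.
def energy_joules(freq_hz, volt, cycles, window_s, idle_w=0.05):
    """Total energy for the window: active burst plus idle remainder."""
    active_s = cycles / freq_hz
    active_w = 1e-9 * (volt ** 2) * freq_hz  # toy dynamic-power model
    return active_w * active_s + idle_w * max(0.0, window_s - active_s)

cycles, window = 5e8, 1.0
race = energy_joules(1.2e9, 1.1, cycles, window)   # sprint, then idle
scale = energy_joules(600e6, 0.9, cycles, window)  # just-in-time pace
print(race, scale)  # which strategy wins depends on the scenario
```

With these particular numbers the scaled-down run wins, but raise the idle floor or shrink the voltage gap between operating points and race-to-idle comes out ahead, which is why Nohl notes the schemes can only be judged against a concrete scenario.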

Rowen pointed out that a hardware technique for power savings gaining steam is the more careful adaptation of processor engines to fit their tasks, because different parts of the receiver do very distinctive things: FFTs in some parts, heavy filtering in others, and forward error correction—a successive approximation method for determining what the transmitted signal was—in others still.

For those at the sharp end of silicon platforms for mobile devices, Pete Hardee, director of solutions marketing at Cadence, observed that semiconductor and systems companies are seeking all the power-saving techniques they can get. “This is where people have been using the regular techniques like power shut-off. We’re going to see, as well as power shut-off, a lot more use of DVFS—that’s certainly going to be seen a lot more as people struggle with power.”

One interesting technique as designs move from node to node is substrate biasing, which was used effectively at earlier nodes like 90nm. “Once you get under 90nm there is a lot of debate as to whether or not substrate biasing is an effective technique. It involves applying a negative bias to the body of the silicon, which reduces leakage, especially when you’re at near-threshold voltages. We see substrate biasing being used even at very deep submicron, especially in relation to standby modes of memories, to reduce leakage. [But] there is a lot of debate on the effectiveness of substrate biasing beyond the standby mode of memories, and the reason is there’s a routing issue with all of these bias signals. You’re effectively supplying a bias supply to each transistor of the chip, so that gets very expensive from the power routing point of view and you start to hit routing congestion. We see people using substrate biasing at the 40nm node, much less of it at 28nm, and people are starting to wonder if it’s going to be effective at 20nm (22nm for Intel).

“One thing that we’re figuring out is that finFET probably obviates the need for substrate biasing. You’ve got much better control of leakage due to the topology of the gates through the 3D construction of the finFET transistors. When finFET becomes the norm, we think we’ll see the end of substrate biasing as a technique.”

Intersection of low power and test
While test is not an area where much can be done at this point to help save power, it is nonetheless an important part of the design process with unique issues, noted Greg Aldrich, director of marketing for the Silicon Test Systems group at Mentor Graphics. “There are two aspects we have to deal with for low power. First, when we are inserting structures into the design, are we respecting all of the power intent, the power islands, and what is required for the low-power design? Are we making sure, for example, that when we are inserting compression logic into a design that may have three or four different power islands or power partitions, we’re not crossing those power partitions with the test logic we insert, and that we’re properly isolating those partitions? If there’s a constraint in the low-power design where I can’t power up all three partitions at the same time, I need to be able to test it in that manner, as well.”

The second and, he said, maybe more concerning issue is that when testing a low-power design, the tests cannot overstress the power design. “If it’s a low-power design, typically that means there is very little switching activity, a lot of the design is turned off, and the power rails—the power system—are designed that way. When you do testing, you typically want to get as much activity as quickly as possible so that you can test the device as quickly as possible. You’re testing every single device you manufacture, so every second you spend testing that device costs you more money in the manufacturing cycle.”

In test the objective has always been to get as much activity in the circuit as possible in order to test it as fast as possible, but for low-power designs that approach could damage the device.

At the end of the day, the biggest problem in looking to save power in communications applications, according to Marc Serughetti, director of product marketing for virtual prototyping at Synopsys, is that neither side can be treated in isolation. “It’s not about hardware, it’s not about software. It’s about the two together, and when it comes to software it’s not about the low-level software either, it’s about the entire software stack, because a simple application can create a significant problem when it comes to power consumption. Now you are talking two different worlds once again colliding, and if you approach this purely from a hardware perspective you are going to end up in a situation that may sound interesting for the hardware people, but when it comes to the software world, where you need to be able to run Android or Windows Mobile, the performance of the environment you need to use is a significant component that must be analyzed.”
