After years of warnings that software developers weren’t worrying about power, it’s time to step back and re-assess. Are we making progress?
Gauging the energy efficiency of software is a difficult task. Software spans a wide range, from embedded code to the system software that controls a device’s various modes of operation to downloaded applications. Some software interacts with other software, while other software works independently. And some works better on one SoC configuration than another, or on one iteration of an operating system than another.
Still, with the focus on longer time between charges — particularly with wearable, ingestible and injectable electronics, as well as complex smartphones — the attention being paid to the overall energy consumed by a device’s hardware and software can amount to a competitive advantage. In some cases it also can determine whether a design can qualify for an RFP. So where are we today?
The answer is that some progress is being made, but there is still a very long way to go. The greatest advances come from companies with enough in-house expertise, spanning both hardware and software engineering teams, to modify designs and improve energy efficiency. Companies such as Broadcom, Qualcomm and Apple fit into this camp. Others, such as NXP, have pulled together ecosystems around a device to get everyone involved in a design thinking about where energy can be conserved and how best to put those designs to use.
But beyond those examples, advances in making software more energy efficient remain spotty.
“There is definitely progress, because otherwise devices would melt,” said Frank Schirrmeister, group director of product marketing for the System Development Suite at Cadence. “But it’s still not close to good enough. On my way to work, after using Waze (GPS) and talking on the phone, I’ve got 35% of my battery left—and that’s at 9 a.m. Yes, the phone is doing this amazing stuff, but it should have 90% of its battery after that. We need to invest manpower to optimize low-power states in phones, portables and in networking.”
He said that despite the progress, complexity is outrunning silicon even on the hardware side. “At least with software, there’s always the next service pack. But you also see people switching on and off software on emulators these days. The problem is becoming bigger, and people are beginning to develop solutions.”
The great divide
Still, it’s not unusual at large semiconductor companies to invite hardware and software engineers into a conference room and see them exchanging business cards for the first time. Progress is not universal, and even within the same company it may vary from one design to the next.
The reality is that people who design the hardware cannot design the software and vice versa. Often they don’t even speak the same language—sometimes literally because teams can be scattered around the globe. But to make designs more efficient, there needs to be a good understanding of what works best in hardware, what works best in software, and what works best for a particular design.
“Sometimes it’s about how you use different cores,” said Shabtay Matalon, ESL market development manager for Mentor Graphics’ Design Creation Business Unit. “In every SoC you need a power infrastructure with a variety of sleep modes and the ability to control the CPU. Those techniques are controlled by software, and you build a virtual prototype for that. But what customers are looking for now is information about what is the power that is dissipated on each block or core, and in conjunction with that, which core is running and how much data is flowing there. They want a unified view. What’s key for them are the software threads, the cores, the state of the cores and the power across the system. That’s the level of information that users need.”
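The unified view Matalon describes can be illustrated with a toy model: given a trace of which power state each core was in and for how long, estimated energy is simply power multiplied by time, summed per core. The state names and power numbers below are hypothetical placeholders, not characterized silicon data; a real tool would derive them from calibrated power models of each block.

```python
# Toy per-core energy accounting from a (core, state, duration) trace.
# The state power table (in mW) is hypothetical, for illustration only.
STATE_POWER_MW = {"active": 200.0, "idle": 20.0, "sleep": 2.0}

def energy_per_core(trace):
    """trace: list of (core_id, state, duration_s) tuples.
    Returns {core_id: energy_mJ}, integrating power over time
    (mW * s = mJ)."""
    energy = {}
    for core, state, dur in trace:
        energy[core] = energy.get(core, 0.0) + STATE_POWER_MW[state] * dur
    return energy
```

Running it on a trace such as `[(0, "active", 0.010), (0, "idle", 0.005), (1, "sleep", 0.015)]` yields per-core millijoule totals, which is the kind of core-by-core, state-by-state breakdown users are asking for, only fed by real models instead of a hand-written table.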
That kind of granular information has never existed before on a system level for both hardware and software. Nor have companies seen the need for that level of detail prior to the past 12 to 24 months. But as companies get pushed to tighten up their power budgets, at least they are beginning to recognize where the savings can be obtained.
“This has to happen at the architecture and pre-architecture stage,” said Matalon. “It’s about choosing which cores will work best, which operating system will be more suitable, and what you do in hardware and software.”
What’s particularly strange about this relationship is that the common threads between hardware and software are EDA tools. Unless there is a top-down commitment inside of engineering companies to bring design teams together, emulation, software prototyping and RTOS development are really the only glue that many teams have. But even that isn’t enough, and while companies will invest in tools such as virtual prototyping they typically don’t make that investment just for power.
“The problem with power is that it is still not really systematic,” said Johannes Stahl, director of product marketing for virtual prototyping at Synopsys. “There is power management of hardware and software, but only after silicon do you get a real understanding of how it works. That’s the biggest area for improvement. We need tools, standardization in terms of what goes into the tools, and standardization of software APIs everywhere. And then you need real-world feedback because there are different levels of optimization.”
How the software engineer sees things
Where software engineers get involved in the software stack determines whether they can do anything at all about power. At the application level, the APIs are so far removed from the hardware that, aside from writing good clean code, there isn’t much developers can do to seriously impact energy efficiency. Further down the stack, however, there are definitely knobs to turn and methodologies that can affect a power budget.
“For the last couple of years there has been a big push around power efficiency,” said Jesse Barker, principal software engineer at ARM. “The scheduler in the Linux kernel is where the problem is being addressed at the moment. There is a push to make the scheduler more power aware at a fine-grain level. But there’s also an illusion that when people use a phone, for example, that’s all that’s going on. In reality, there are a number of subsystems working in the background. There is some contentious discussion under way about whether the consumer should have a slider for better performance or power savings, or whether you’re better off with a heterogeneous hardware architecture so the application is blissfully unaware of all this stuff going on underneath it.”
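The trade-off a power-aware scheduler weighs on a heterogeneous system can be sketched in a deliberately simplified way: pick, among cores that can finish a task by its deadline, the one that spends the least energy doing so. The core parameters below are invented for illustration (they are not ARM figures), and a real kernel scheduler also accounts for load tracking, thermal limits, migration cost and latency.

```python
# Toy energy-aware task placement for a big.LITTLE-style system.
# Core parameters (perf in work-units/s, power in mW) are invented.
CORES = {
    "little": {"perf": 1.0, "power_mw": 100.0},
    "big":    {"perf": 4.0, "power_mw": 900.0},
}

def cheapest_core(work_units, deadline_s):
    """Pick the core that finishes `work_units` within `deadline_s`
    using the least energy (power * runtime). Returns (core, energy_mJ),
    or None if no core can meet the deadline."""
    best = None
    for name, c in CORES.items():
        runtime = work_units / c["perf"]
        if runtime > deadline_s:
            continue  # this core would miss the deadline
        energy = c["power_mw"] * runtime
        if best is None or energy < best[1]:
            best = (name, energy)
    return best
```

With a relaxed deadline the little core wins on energy even though it runs longer; tighten the deadline and the big core becomes the only legal, and therefore cheapest, choice. That is the kind of decision the application is “blissfully unaware” of when the hardware and scheduler handle it underneath.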
And this is where the communication between hardware and software begins slipping. “The same application may run consistently for (x) amount of time on one platform, and (x – 5) on another,” said Barker. “That has to do with the architecture of the SoC. The way applications use memory can have a huge impact on battery life, but that’s not an easy problem to solve.”
Craig Hampel, co-chief scientist at Rambus, agrees. “On-chip networks do not communicate what memory is used for because they don’t often know. There needs to be more awareness about the structure storing data and how the software is accessing it. Basically this comes down to expressing temporal and spatial locality — and knowing the cost of using it in a random way.”
Because there are so many translations and application layers, arbitrarily assigning data maps and data structures is incredibly complex, he said.
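Hampel’s point about temporal and spatial locality can be made concrete with a small simulation: count how many times a fixed-size LRU cache of memory lines must fetch from external memory when the same array is walked row-by-row versus column-by-column. The cache geometry below is arbitrary; what matters is the ratio, since every off-chip fetch costs energy.

```python
from collections import OrderedDict

def count_line_fetches(addresses, line_size=8, cache_lines=16):
    """Count fetches a small LRU cache needs to serve an address
    stream; each fetch stands in for one costly off-chip access."""
    cache = OrderedDict()
    fetches = 0
    for addr in addresses:
        line = addr // line_size
        if line in cache:
            cache.move_to_end(line)        # hit: refresh LRU position
        else:
            fetches += 1                   # miss: fetch line from memory
            cache[line] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict least-recently-used line

    return fetches

N = 64  # N x N array stored row-major
row_major = [r * N + c for r in range(N) for c in range(N)]
col_major = [r * N + c for c in range(N) for r in range(N)]
```

With these parameters the row-major walk triggers 512 fetches while the column-major walk triggers 4,096: an 8x difference in memory energy for identical work, which is exactly the cost of accessing a structure “in a random way” relative to how it is stored.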
Adding more complexity
While chipmakers struggle to bring these two worlds together, the reality is that both hardware and software are getting more complex. Power-saving features on the hardware side range from architectures to complex power management schemes such as near-threshold computing and voltage frequency scaling. On the software side, they include everything from more multithreading and better scheduling to a larger address space, which can be a big win for some applications and overhead for others.
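The voltage-frequency trade-off can be sketched numerically. Dynamic power scales roughly with C·V²·f, so for a fixed number of cycles the dynamic energy is roughly C·V² per cycle regardless of frequency, while leakage energy grows with runtime. The constants below are made up, and the model ignores the fact that a lower supply voltage caps the achievable frequency, but it shows why scaling voltage down saves energy only until leakage over the longer runtime catches up.

```python
def task_energy_j(cycles, freq_hz, vdd, c_eff=1e-9, p_leak_w=0.05):
    """Estimated energy for a task of `cycles` run at (freq_hz, vdd).
    Dynamic energy: c_eff * V^2 per cycle; leakage: p_leak_w * runtime.
    All constants are illustrative, not silicon data."""
    runtime = cycles / freq_hz
    dynamic = c_eff * vdd ** 2 * cycles
    leakage = p_leak_w * runtime
    return dynamic + leakage
```

Under this toy model, running a billion-cycle task at 0.5 GHz and 0.7 V costs roughly half the energy of running it at 2 GHz and 1.1 V, even though the slower run leaks for four times as long.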
“This process is just beginning,” said Aveek Sarkar, vice president of engineering and product support at Ansys-Apache. “We gave a reference board to two end clients and the power consumed by one was twice that of the other. The way they executed code burned more power. If software messes up, it’s a big deal. From our perspective, it comes down to what kind of power models you have and how much confidence you have in those numbers. Software is basically transaction-level modeling, and it’s still evolving. Some work has been done, but accuracy will be the key driver for this.”
But couple that with time-to-market pressures, tighter power budgets and more inflection points at every node, and this problem gets even tougher to solve.
“The software guys still say it’s a hardware problem because the software guys are users of hardware protocols,” said Synopsys’ Stahl.
And likewise, the hardware engineers will always look askance at the software engineering teams, questioning whether they are really doing enough.