Software Before Hardware?

The focus on power is pulling software much further into the initial design phase, but that shift still doesn’t always improve the efficiency of the code itself.


The emphasis on battery life in wearable electronics, including always-on sensors, and the cost of powering and cooling racks of servers inside of data centers, are beginning to impact the formula for designing systems. Power is now a critical design element, but it’s also one of the most stubborn to tackle.

While ASICs, SoCs and FPGAs all have focused on being able to efficiently run software—and tools such as virtual prototyping are focused on developing software earlier—momentum is building to push design even further left. Rather than just developing hardware and software at the same time in different groups, there is a push to align them more closely. And in some cases, there are even forays into using software as a starting point rather than hardware.

In 2012, Facebook’s Amir Michael used his keynote speech at LinuxCon to broach the subject of designing hardware more efficiently with open-source software as a starting point. While Facebook’s idea is still viewed as unique, market-specific and somewhat radical, there are other moves to bring it closer to home for hardware engineers. The announcement by ARM last week of a new operating system that incorporates support for low power and security is one step in that direction. What’s changed there is the emphasis on event-driven programming and scheduling to group together various operations. Mentor Graphics has taken a similar tack with its own RTOS, which includes built-in power features that can be tweaked from a higher level of abstraction.
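The scheduling idea is simple to sketch: instead of waking the processor for every pending operation, group operations that fall close together in time into a single wake-up so the device can sleep between bursts. The snippet below is purely illustrative and does not reflect any real ARM or Mentor Graphics API.

```python
# Illustrative event coalescing: batch events that fall within a
# slack window into one wake-up. Not a real RTOS API -- the function
# name, slack value and event times are all invented.

def schedule_run(events, slack_s=0.05):
    """Group event times (seconds) so events within `slack_s` of the
    first event in a batch share a single wake-up."""
    events = sorted(events)
    wakeups = []
    batch = [events[0]]
    for t in events[1:]:
        if t - batch[0] <= slack_s:
            batch.append(t)       # close enough: handle in same wake-up
        else:
            wakeups.append(batch) # too far apart: close this batch
            batch = [t]
    wakeups.append(batch)
    return wakeups

events = [0.00, 0.02, 0.04, 1.00, 1.01, 2.50]
print(len(schedule_run(events)))  # 3 wake-ups instead of 6
```

Fewer wake-ups means longer uninterrupted sleep intervals, which is where most of the battery savings in always-on devices come from.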

But creating those kinds of capabilities and actually utilizing them are not the same thing. Hardware companies still think of software from a hardware perspective, and software companies tend to view it from their own side.

“It’s difficult for software to be written that’s power-efficient,” said Andrew Caples, senior product manager for the Nucleus product line at Mentor Graphics. “From a high-level perspective there may be a requirement for a system to have a long battery life, but when the software assignments are made there very rarely, if ever, is a power requirement. It’s not until it’s all aggregated and the system is run that you realize it’s not even close to the power budget, and by that time there is so much code—device drivers, operating system, middleware—that it’s hard to do anything about it.”

Caples said there is no high-level API in most systems that can be utilized to control power. “The power requirement is more of a system requirement, rather than hardware or software, but there is no way to determine in the development cycle whether you’re close to that.”
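The kind of check Caples describes as missing could look something like the sketch below: a budget that individual software assignments draw against during development, so an overrun surfaces before everything is aggregated. This is a hypothetical illustration, not an existing API; the class, names and milliwatt figures are all invented.

```python
# Hypothetical power-budget tracker -- nothing like this exists in
# most systems today, which is exactly Caples' complaint.

class PowerBudget:
    def __init__(self, budget_mw):
        self.budget_mw = budget_mw
        self.estimates = {}  # component name -> estimated mW

    def assign(self, component, est_mw):
        """Record a per-component power estimate as code is written."""
        self.estimates[component] = est_mw

    def headroom_mw(self):
        return self.budget_mw - sum(self.estimates.values())

    def check(self):
        """Fail during development, not after system bring-up."""
        if self.headroom_mw() < 0:
            raise RuntimeError(
                "over budget by %.1f mW" % -self.headroom_mw())

budget = PowerBudget(budget_mw=150.0)
budget.assign("device drivers", 60.0)
budget.assign("operating system", 45.0)
budget.assign("middleware", 55.0)
print(budget.headroom_mw())  # -10.0 -> over budget before integration
```

Even a coarse mechanism like this would move the power requirement from a system-level afterthought to something each software assignment carries with it.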

Far from complete
The emphasis on virtual prototyping and parallel development efforts of software and hardware—the so-called “shift left”—is aimed at speeding up the process, but so far it has had only limited impact on power.

“What we have seen over the last 18 to 24 months is that software at least now has an impact on the hardware team,” said Tom De Schutter, senior product marketing manager for Virtualizer Solutions at Synopsys. “But it’s not always as straightforward as it sounds. With ARM’s big.LITTLE architecture, the thinking was that the software would start with the LITTLE core and turn on the big core when needed, but that turned out to be less efficient. If you start with the big core it actually uses less power because you can get all the processing done quickly and turn it off.”
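The counterintuitive result De Schutter describes is the “race to idle” effect, and the arithmetic is easy to sketch. All the numbers below are invented for illustration; they are not measured big.LITTLE figures.

```python
# "Race to idle" energy comparison with made-up core parameters.

def energy_mj(power_mw, seconds):
    """Energy in millijoules = average power (mW) * time (s)."""
    return power_mw * seconds

WORK = 1000.0  # abstract units of work in the task

# Hypothetical core characteristics (power, throughput).
LITTLE_POWER, LITTLE_SPEED = 100.0, 100.0  # mW, work units/s
BIG_POWER, BIG_SPEED = 300.0, 500.0        # mW, work units/s
SLEEP_POWER = 5.0                          # mW while idle

WINDOW = WORK / LITTLE_SPEED  # 10 s: time the LITTLE core needs

# Option 1: run the whole task on the LITTLE core.
e_little = energy_mj(LITTLE_POWER, WORK / LITTLE_SPEED)

# Option 2: finish fast on the big core, then sleep for the rest
# of the same 10 s window.
t_big = WORK / BIG_SPEED  # 2 s of active time
e_big = energy_mj(BIG_POWER, t_big) + energy_mj(SLEEP_POWER, WINDOW - t_big)

print(e_little)  # 1000.0 mJ
print(e_big)     # 640.0 mJ -- the big core wins despite 3x the power
```

Whether the big core actually wins depends on the real power and throughput ratios and on how deeply the core can sleep, which is why this is a system-level decision rather than a pure software one.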

These are primarily system-level design issues. But software expertise is growing. De Schutter notes that in most companies software management is still made up of hardware engineers, but as more and more software engineers enter the ranks of chip companies—and as those companies are required to develop more software with their systems—that could change.

That’s the general consensus across the industry.

“It’s still fairly disconnected,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “Software has not taken control to the point where hardware is designed around it. That said, we do see specific architectural decisions being made within the system requirements. If you look at server architectures, which are impacted by data and the applications that run on them, people are looking at what server architectures should look like for software to execute efficiently.”

He said there also is a growing understanding about more efficient transactions, allowing them to access memory rather than wait in line for the cache to be cleaned out. “Software plays a big role in the performance of the interconnect, too,” he noted. “There are defining characteristics such as how much bandwidth does a task consume. In that case, software impacts quite a bit of how the hardware is designed. At least we’ve gotten to the point where the hardware and software teams are dancing together.”

Limited engagement
But there are limits to just how far things can go using the current approaches—and lots of caveats that need to be considered. Software and hardware go together in unusual ways, and design teams need to understand the various interactions. It isn’t always as straightforward as making the code run more efficiently on the hardware. Sometimes it’s a matter of recognizing that the hardware may not always be running efficiently due to physical effects.

“One problem we need to deal with is that software needs to be designed to deal with multi-physics aspects,” said Aveek Sarkar, vice president of product engineering and support at ANSYS-Apache. “With software, the usual approach is that you design it as if the hardware is perfect. But does heat change a motor’s behavior, for example? And what if that slows down an airbag controller? We need more models so we can run software under different conditions.”

Sarkar noted that the same types of physical effects can impact the performance of a rear-view camera. “If you keep a car parked in the sun, you increase the junction temperature, which increases the latency.”
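Sarkar’s point can be sketched as a simple temperature-dependent latency model. The linear coefficient and base latency below are invented for illustration; his argument is precisely that real devices need properly characterized models rather than guesses like these.

```python
# Toy model: sensor latency rising with junction temperature.
# The 0.4%/degree coefficient is an invented placeholder.

def camera_latency_ms(base_ms, temp_c, ref_c=25.0, pct_per_c=0.4):
    """Latency grows ~pct_per_c percent per degree above ref_c."""
    return base_ms * (1.0 + pct_per_c / 100.0 * max(0.0, temp_c - ref_c))

print(camera_latency_ms(33.0, 25.0))  # 33.0 ms in the shade
print(camera_latency_ms(33.0, 85.0))  # noticeably higher parked in the sun
```

If the software assumes the shaded-lot figure, the sun-baked car’s rear-view feed silently misses its frame deadline—the kind of interaction that never shows up when code is designed against perfect hardware.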

Software has its own quirks, as well.

“The problem is that the high variability of software and the economics of hardware have to put some bound on how much the software can drive the hardware,” said Bernard Murphy, CTO of Atrenta. “If you look at the application processors (the ones that go into the iPhone and the Galaxy and so on) they’re obviously strongly driven by software. But it’s an averaged influence. You can’t really say, ‘I’m going to design this specific thing to do this specific thing because the software said that would be better for it,’ because you don’t know how the software is going to be applied on all those different platforms.”

And when you do design it, there are no tools to effectively measure it.

“Power monitoring tools are available from some tools vendors, but the measurement granularity is quite poor,” said Dave Edwards, CEO and CTO of Somnium Technologies, an embedded software tools startup. “They can certainly measure the difference between idle versus active, but attempts to relate the amount of power being consumed to specific activity are very misleading. Clearly activation of peripherals has an impact, but the difference in a processor’s power usage when executing instruction A versus instruction B isn’t enormous in modern process geometries. As with execution efficiency, memory dominates, so more memory accesses will consume more power. However, this isn’t directly related to what a given section of code is actually doing. It’s related to the state of the microprocessor system (i.e., the processor and its memory system). Function X might burn more power than function Y, but it could be because function Z has perturbed the state of the memory system in a way which does not directly relate to instructions in function X.”
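Edwards’ function-X-versus-function-Z scenario can be reproduced with a toy model: charge energy per memory access, with misses costing far more than hits, and watch the same function’s “measured” energy change depending on what ran before it. Every parameter here is invented; this is a sketch of the effect, not a real profiler.

```python
# Toy direct-mapped cache with assumed per-access energies, showing
# why per-function power attribution misleads.

CACHE_LINES = 4
HIT_NJ, MISS_NJ = 1.0, 10.0  # invented energy per access (nJ)

class MemorySystem:
    def __init__(self):
        self.cache = [None] * CACHE_LINES  # direct-mapped tags

    def access(self, addr):
        """Return the energy of one access and update cache state."""
        line = addr % CACHE_LINES
        if self.cache[line] == addr:
            return HIT_NJ
        self.cache[line] = addr
        return MISS_NJ

def run(func_addrs, mem):
    return sum(mem.access(a) for a in func_addrs)

X = [0, 1, 2, 3] * 2  # function X: reuses a four-line working set
Z = [4, 5, 6, 7]      # function Z: evicts X's working set

# X with a warm cache: all hits.
mem = MemorySystem()
run(X, mem)               # warm-up pass
x_alone = run(X, mem)

# Same function X, but Z ran in between and perturbed the cache.
mem = MemorySystem()
run(X, mem); run(Z, mem)
x_after_z = run(X, mem)

print(x_alone)    # 8.0 nJ  (all hits)
print(x_after_z)  # 44.0 nJ (X "burns" more, but Z is the cause)
```

The instructions executed by X are identical in both runs; only the memory-system state differs, which is why attributing the extra 36 nJ to X’s source code would send a programmer chasing the wrong function.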

Edwards said the “human in the loop” process of having programmers trying to correlate low-level machine behavior to their source code is the wrong approach. “If the code generation tools such as the compiler have been optimized properly, the correlation between source code and instructions will be far too dynamically complex for a human being to understand or control.” He added that the better alternative is measuring overall energy use for a given design, not dynamic power at a given point.
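Measuring overall energy rather than spot power amounts to integrating sampled power over the run. A minimal sketch, with an invented power trace:

```python
# Total energy for a run = integral of power over time, here via the
# trapezoidal rule over (time_s, power_mw) samples. The trace values
# are invented for illustration.

def energy_mj(samples):
    """Trapezoidal integration of (time_s, power_mw) samples -> mJ."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)
    return total

trace = [(0.0, 120.0), (0.5, 300.0), (1.0, 300.0), (1.5, 20.0)]
print(energy_mj(trace))  # 335.0 mJ for the whole run
```

A single number per design run sidesteps the attribution problem entirely: two builds of the same firmware can be compared on total energy without anyone pretending to know which instruction consumed which nanojoule.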

But Murphy said the real “big swingers for power” are at the application level, where the hardware doesn’t have much control. The most the hardware can do is report back what’s going on, and even that is rather sporadic. “So if you want to make an intelligent decision about that, you can’t. But there is no way that hardware can know—or even the OS can know—how to optimize that. It’s really an application architecture question.”
