No matter how much engineers on both sides try to work together, there are still huge gaps. Some of them may never get filled.
By Ed Sperling
As more computing is done on mobile devices rather than desktops, the idea of what constitutes good application software is changing.
This gets at the key reason why some of the advanced power-saving features built into chips went unused by software in the past. Unless an operating system was written specifically for mobile devices, as Android and iOS were, the real focus was on performance and readily available services and functions. And even on mobile devices, there is room for improvement.
“We have been working on applications for wireless sensor networks,” said Philippe Magarshack, group vice president for technology research and development at STMicroelectronics. “They have an even worse problem than cell phones. They’ve moved to IPv6, but that’s killing the battery. When those devices are plugged in everything works okay, but with a battery they die almost instantly. We’ve had to join the IEEE committee that oversees IPv6 to make it more mobile friendly.”
He said that even Android, which was designed for mobile devices, drains batteries too quickly. “It has to be optimized. We’re working with Google on that right now.”
That’s only part of the story, though. The real gains for this software will come from a richer set of application programming interfaces (APIs) written by companies such as Google, Microsoft and Apple for their application developers. And as PC operating systems become more mobile, there will be new levels of convergence that never existed before, blending legacy desktop software with the power-efficiency and optimization requirements of mobile devices.
“We’re seeing the first indications of this shift with iOS and Mac OS X,” said Cary Chin, director of marketing for low-power solutions at Synopsys. “The general assumption in the past was that everything was on and that if it was on, you could use it. With changes over the last five years, that is no longer a reasonable assumption. Today’s new computing platform increasingly is the phone, and we’re seeing that approach move to desktop computers and servers, which now go to sleep when they’re not being used.”
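To make that idea concrete, here is a minimal sketch of what a power-aware application check can look like, assuming a Linux-style sysfs battery interface. The path and the "BAT0" name are assumptions that vary by platform, and mobile operating systems expose the same information through their own APIs; this is an illustration of the pattern, not any vendor's API.

```c
/* Minimal sketch: defer optional background work when on battery,
 * assuming a Linux-style sysfs battery interface. The "BAT0" path
 * is an assumption and varies by platform. */
#include <stdio.h>
#include <string.h>

/* Returns 1 if running on battery, 0 if on external power,
 * and -1 if the state cannot be determined. */
static int on_battery(void)
{
    FILE *f = fopen("/sys/class/power_supply/BAT0/status", "r");
    char status[32] = {0};

    if (f == NULL)
        return -1;
    if (fgets(status, sizeof(status), f) == NULL) {
        fclose(f);
        return -1;
    }
    fclose(f);

    return strncmp(status, "Discharging", 11) == 0 ? 1 : 0;
}

int main(void)
{
    /* Defer deferrable work (sync, indexing, prefetch) on battery so
     * the hardware can stay in its low-power states. */
    if (on_battery() == 1)
        printf("On battery: deferring background work\n");
    else
        printf("On external power or unknown: running background work\n");
    return 0;
}
```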
Write once, run everywhere
Getting software developers to take advantage of these power-saving features isn’t always easy, however. While energy-efficiency optimization needs to take place at all levels of the software stack, application developers want to write their software once and have it run everywhere. Power optimization doesn’t work that way, because it often is specific to a particular chip.
“The trick is that developers want to avoid fragmentation if they optimize it,” said Amit Rohatgi, vice president of mobile solutions at MIPS. “That hurts the consumer. The only way to solve this is by doing more work in the software space to access the hardware. That’s what the Khronos Group is doing with OpenGL—creating a rich set of specs.”
Rohatgi said that all of the major chip platforms (MIPS, ARM and Intel) are involved with Khronos on OpenGL and on OpenCL for parallel programming, as well as with the LLVM project for modular, reusable compiler infrastructure.
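As a rough illustration of how a common Khronos spec hides chip-specific differences, the host-side OpenCL sketch below enumerates whatever compute devices a vendor's driver exposes. It assumes an OpenCL SDK and driver are installed and is not tied to any one of the platforms mentioned above.

```c
/* Host-side OpenCL sketch: the same portable code discovers whatever
 * compute devices (CPU, GPU, DSP) each vendor's driver exposes.
 * Requires an OpenCL SDK; on Linux, link with -lOpenCL. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;

        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("Platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```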
But all of that takes time, and right now the hardware is well ahead of the software when it comes to power-saving features. And that gap is likely to persist, in large part because of the different approaches taken by software and hardware engineers.
“This is a good example of the lack of collaboration,” said Mike Gianfagna, vice president of marketing at Atrenta. “What happens is the software guys don’t trust or understand what the hardware guys have created, and instead of turning off their software when it’s not being used they leave it on.”
The result is much shorter battery life, and occasionally other physical effects such as heat, electromigration and electromagnetic interference, along with reduced signal integrity and potentially shorter device life. Getting this formula right is difficult even for the best-run IDMs, and it’s much, much harder in a distributed supply chain.
“There’s a big opportunity for true software-hardware collaboration,” said Gianfagna. “This is not just an operating system problem. It goes below that, too. It’s a combination of the application, the operating system, the drivers and the hardware having to collaborate. This is a huge issue—and a huge opportunity for EDA.”
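One way to picture that collaboration is the acquire/use/release discipline sketched below, using ordinary POSIX file I/O. The device node /dev/sensor0 is a hypothetical example, and the power-down behavior depends on the driver actually implementing runtime power management; the point is that the application has to tell the stack when it no longer needs the hardware.

```c
/* Sketch of an acquire/use/release pattern. The node "/dev/sensor0"
 * is hypothetical; a driver with runtime power management can only
 * power the block down once the application releases the handle. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int sample_sensor(char *buf, size_t len)
{
    /* Opening the node lets the driver power the block up on demand. */
    int fd = open("/dev/sensor0", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/sensor0");
        return -1;
    }

    ssize_t n = read(fd, buf, len);

    /* Closing it promptly is what allows the OS and driver to clock-
     * and power-gate the hardware again, instead of leaving it on. */
    close(fd);
    return (int)n;
}

int main(void)
{
    char buf[64];
    printf("read %d bytes\n", sample_sensor(buf, sizeof(buf)));
    return 0;
}
```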
He’s not alone in seeing an opportunity for change. Cadence’s EDA 360 concept was all about tighter collaboration between hardware and software, and all of the Big Three EDA companies see a future in hardware-software co-design. The only question is when it will actually be adopted by the majority of their customers.
According to Synopsys’ Chin, the revolution in hardware involving power and energy efficiency still has not reached the software community. In particular, hardware frequently shares resources such as memory, power and the interconnect fabric. Software can be written across multiple cores and processors, but it doesn’t actually share them. Individual functions are parceled out across them.
“There are a lot of interesting tradeoffs that can and should be re-examined,” he noted. “Sharing resources is just one of them.”
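As one illustration of such a tradeoff, the sketch below uses the GNU/Linux-specific pthread affinity calls to consolidate light background work onto a single core so the remaining cores can stay in deep sleep states. Whether that actually saves energy depends on the workload and the platform; this shows the knob, not a recommendation.

```c
/* Consolidation sketch (GNU/Linux): pin a light background thread to
 * core 0 so the other cores are free to drop into deep sleep states.
 * Build with: gcc -pthread consolidate.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Placeholder for low-intensity background work. */
static void *light_worker(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    cpu_set_t set;

    /* Restrict the worker to core 0 before it starts. */
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

    if (pthread_create(&tid, &attr, light_worker, NULL) != 0) {
        fprintf(stderr, "failed to create worker\n");
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```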