The Tao Of Software

Writing software to the metal is tough enough; doing it to save power is even tougher.


By Ed Sperling and Pallab Chatterjee
As software teams continue to race past hardware teams in numbers of engineers, hours spent on designs and NRE budgets, companies are beginning to question whether there needs to be a fundamental shift in priorities and strategy.

The problem is that it takes far too long to write and debug the software and to get it working on the hardware, even with virtual prototyping capabilities.

“Bare metal software is the hard part of the problem,” said John Bruggeman, Cadence’s chief marketing officer. “It’s the bane of the embedded system company—80% of the time is spent getting bare metal software to run on hardware. It takes two to three months to get Linux to boot because there is no visibility into the software and the hardware simultaneously.”

That challenge grows at each new process node, as well, because complexity is increasing on both sides. Bruggeman said there are three reasons solutions haven’t worked so far. One is that every solution to date has been closed or proprietary, which limits the number of programmers who can work on it. The second is that today’s solutions are fragmented, both across multiple vendors’ tools and within some single-vendor flows. And third, complex multi-geography development at enormous scale has not produced a coherent solution.

Cadence clearly isn’t alone in recognizing the growing problem in software, although it is the most vocal of the Big Three EDA vendors. All have major software efforts under way and have made significant investments in these areas. Mentor Graphics has a big push in embedded software, and Synopsys has an equivalent focus on software prototyping. All have made acquisitions in their respective areas, as well.

But getting software to run more efficiently on the hardware is a different sort of problem. It requires understanding how the two interact at a very deep level.

Glenn Perry, general manager of Mentor’s Embedded Systems Division, recounted a story of one customer that was porting Linux to a chip and couldn’t figure out why the operating system was continually burning up energy. The culprit, as it turned out, was a blinking cursor.

“The goal is to put power in front of software,” said Perry. “When we do that with a regular optimization of Linux we see a 70% to 90% improvement in power. We need to fix the simple stuff first, and this isn’t so easy. What we’ve found is that embedded developers know very little about hardware.”

Power games
But if hardware engineers know little about software, the reverse is also true. One of the biggest demands for improving the efficiency of software comes from the gaming world, where software typically has been written in a high-level language with little or no attention to power consumption. In gaming, the user focus always has been on performance—both in speed and in resolution—rather than power. But as more games are being downloaded onto mobile devices, that perception has changed dramatically. No matter how good the game, if it drains the battery in 20 minutes no one will buy it.
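One of the simplest power-aware coding patterns in mobile games is capping the frame rate so the processor can drop into idle states between frames rather than rendering flat-out. The sketch below is an illustration of that idea only, not anything described by the companies in this article; the function name, frame rate, and frame count are all assumptions.

```python
import time

def frame_limited_loop(render, target_fps=30, frames=3):
    """Call `render` at most target_fps times per second, sleeping away
    the remainder of each frame so the CPU/GPU can idle (hypothetical
    helper for illustration -- not from any real game engine)."""
    frame_budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.monotonic()
        render()
        elapsed = time.monotonic() - start
        if elapsed < frame_budget:
            # Sleeping here is what saves battery: the silicon idles
            # instead of spinning on the next frame immediately.
            time.sleep(frame_budget - elapsed)

# Run a trivial "game loop" for three frames at 60 fps.
frame_limited_loop(lambda: None, target_fps=60, frames=3)
```

An uncapped loop would redraw as fast as the hardware allows, which is exactly the performance-first habit the article says gaming developers must now unlearn.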

The result is that power controls need to be specified in the code, which is difficult considering the growing demands on these systems. Most online gaming is done at 720p resolution due to bandwidth limitations, with a typical H.264 compression ratio of one I-frame for every 200 P-frames.
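The 1-to-200 I-to-P ratio above is a bandwidth lever: I-frames are self-contained and large, while P-frames encode only changes from the previous frame and are far smaller. A quick back-of-the-envelope sketch shows why stretching the group of pictures matters; the frame sizes below are illustrative assumptions, not measured values from the article.

```python
def gop_avg_bits(i_frame_bits, p_frame_bits, p_per_i):
    """Average bits per frame for a GOP of 1 I-frame + p_per_i P-frames."""
    total = i_frame_bits + p_per_i * p_frame_bits
    return total / (1 + p_per_i)

# Assumed sizes for a 720p stream: I-frames roughly 10x a P-frame.
I_BITS = 400_000  # hypothetical I-frame size in bits
P_BITS = 40_000   # hypothetical P-frame size in bits

avg_long_gop = gop_avg_bits(I_BITS, P_BITS, 200)  # 1 I per 200 P, as above
avg_short_gop = gop_avg_bits(I_BITS, P_BITS, 30)  # a more conventional GOP

# The long GOP keeps the per-frame average close to the cheap P-frame cost.
print(f"1:200 GOP avg bits/frame: {avg_long_gop:,.0f}")
print(f"1:30  GOP avg bits/frame: {avg_short_gop:,.0f}")
```

The trade-off is that fewer I-frames means slower recovery from packet loss, which is why this ratio is tuned per application rather than fixed by the codec.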

Mobile platforms typically code in OpenGL, while 3D games use OpenCL. These games use a shader, 3D render, and main graphics display engine for the iPhone, iPad, Samsung phones and tablets, LG phones, Motorola and Droid phones, Asus tablets, and the Motorola Xoom. Several mobile gaming companies in France, Italy, Finland, and Sweden are now developing products for Q4 release using OpenCL for the Imagination Technologies PowerVR core.

The challenges are growing from there, as well. Several major software companies also have written new codecs for the Xbox 360 and PS3 platforms to provide a higher-quality visual experience. These codecs handle a different raster and render routine that supports both physics-based graphics generation (fire, rain, water, snow, wind, explosions, and striking reactions from swords/sticks/knives) and a secondary scan for background details (flowers on trees, multi-colored grass, flowers and moss on the ground, details on reeds, etc.) in addition to the normal patterns. The new codecs were needed to send and render that data within the standard stream size.

Which comes first
So how much is all of this really going to affect design? Despite predictions that software engineering teams would displace hardware teams, the reality is that both will be forced to co-exist. They will never actually speak the same language or work on exactly the same project, but the push is to improve communication between them. Software needs to become far more power-aware, and hardware needs to become more efficient at running software.

The last time the design world dealt with an issue like this was when the battle over RISC vs. CISC—reduced instruction set computing vs. complex instruction set computing—was being waged. That was in the 1990s, when Unix first posed a commercial challenge to operating systems from companies such as IBM, Hewlett-Packard, Digital Equipment Corp. and a handful of others that made their own OSes back then.

But power is forcing these issues back onto the table, driven initially by the mobile sector and increasingly by devices with a plug. The likelihood is that it will never be a perfect marriage, but it is one that is likely to last this time, because both teams at least need to share the same goal—even if they don’t talk the same language.