Opportunity Lost

No matter how large the gains in power and performance on the hardware side, software will still lag.


The more software that gets written by hardware companies, the more they understand just how large the gulf has grown between the hardware and software mindsets.

In part this is a cultural issue. Software and hardware engineers have different tools, different goals and different mindsets. But to an even bigger extent, it's a legacy issue. And unlike hardware, which can be adjusted with each new design for many years to come, software can accumulate legacy after only a few months of code development.

It's rare that entirely new software is written for any platform. In fact, the tablet and the smart phone are the only devices that were actually created differently—and the most successful tablets borrowed more heavily from the smart phone than from the notebook PC.

But even the smart phone was created without concern for maximum power efficiency. Android and Apple’s iOS focused more on features, resource availability and performance—how fast you can access and download data on the Internet, how well you can connect to Bluetooth and WiFi, and screen resolution—than on how fast you can power up and down, how accurate some data needs to be, and how fast processors need to run to get the job done more quickly before shutting down.
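To make the "run fast, then shut down" idea concrete, the sketch below shows the race-to-idle pattern using Linux's cpufreq sysfs interface. It is a minimal illustration, assuming a Linux system that exposes that interface and sufficient privileges; burst_of_work() is a placeholder, not code from any shipping operating system.

```c
/* Race-to-idle sketch: raise the CPU governor, finish the work quickly,
 * then drop back to a low-power governor so the core can sleep sooner.
 * Assumes the Linux cpufreq sysfs interface is present and writable. */
#include <stdio.h>

static int set_governor(const char *gov)
{
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w");
    if (!f)
        return -1;                 /* interface missing or no permission */
    fprintf(f, "%s\n", gov);
    fclose(f);
    return 0;
}

static void burst_of_work(void)
{
    /* Placeholder for the actual computation. */
    volatile unsigned long acc = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        acc += i;
}

int main(void)
{
    set_governor("performance");   /* run as fast as possible...          */
    burst_of_work();               /* ...so the work finishes sooner...   */
    set_governor("powersave");     /* ...and the core can idle for longer */
    return 0;
}
```

Whether racing to idle actually saves energy depends on the workload and the platform's sleep states, which is exactly the kind of analysis the operating systems above were not originally built around.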

Considering that the smart phone only took off as the phone of choice in the past half decade—and that the operating systems on these devices were created when battery life was already an issue—it's an indication of just how quickly the hardware has progressed compared with the software. Battery life is now a bigger issue than ever before, but the code isn't changing anywhere near as quickly as most consumers would like. And to be fair, it can't.

On the hardware side we have developed power gating, power modeling, near-threshold computing, voltage rails, power islands, multi-core and many-core strategies, not to mention substrate materials such as fully depleted SOI to limit physical effects. Dark silicon is an accepted facet of design, and techniques are there to keep it as dark as possible for as long as possible.

On the software side we have developed middleware and drivers to turn some of these hardware components on and off, but we haven't made huge advances in multithreading, multiprocessing, shared resources, more efficient use of processors or more efficient use cases. We don't know whether code needs to be 100% accurate or whether 80% accurate is good enough, and we can't partition it so that only the critical parts run while the rest sits idle. Moreover, what's there is hard to change, because everything connects to everything else to the point where interactions create their own set of unforeseen bugs that take software updates to correct.
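The "100% versus 80% accurate" question is the territory of approximate computing. As a rough illustration only, the sketch below uses loop perforation—skipping a fraction of the iterations in a reduction to trade accuracy for fewer instructions executed, and therefore less energy. The function names and the stride are illustrative assumptions, not an established API; real frameworks also have to decide where skipping is safe.

```c
/* Loop-perforation sketch: skip some iterations of a reduction to trade
 * accuracy for energy. Purely illustrative. */
#include <stdio.h>

/* Exact mean of n samples. */
static double mean_exact(const double *x, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += x[i];
    return sum / n;
}

/* Approximate mean: process every 'stride'-th sample, so roughly
 * 1/stride of the work is done. */
static double mean_perforated(const double *x, int n, int stride)
{
    double sum = 0.0;
    int used = 0;
    for (int i = 0; i < n; i += stride) {
        sum += x[i];
        used++;
    }
    return sum / used;
}

int main(void)
{
    double samples[1000];
    for (int i = 0; i < 1000; i++)
        samples[i] = (double)(i % 100);   /* synthetic sensor data */

    printf("exact:      %f\n", mean_exact(samples, 1000));
    printf("perforated: %f (about 1/4 of the work)\n",
           mean_perforated(samples, 1000, 4));
    return 0;
}
```

The point is not the arithmetic but the decision behind it: somewhere, someone has to determine which results can tolerate this kind of shortcut, and today that knowledge rarely makes it from the hardware team into the code.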

For all the talk about hardware-software co-design, we have only scratched the surface. And given the fact that most code has to be backward compatible—unless there is a new platform like a smart phone being developed that can alter the relationship between hardware and software—we’re now sort of stuck. At best we can fix some of the problems. At worst, we can live with them.

Had the two sides come together earlier, we might have made significant progress in this respect. But given the current state of code proliferation, it’s like trying to close a gate when a crowd is rushing through it. What’s done is done, and any progress from here will be incremental.

