Don’t Quit Your Hardware Job

In a software-defined world, does anyone still care about the hardware? Answer: Yes.

At the recent Intel Developer Forum (IDF) I was struck by the prevalence of software-defined architectures. Topics covered software-defined networking, software-defined storage, and the software-defined data center. It seemed that the concept of software-defined infrastructure was everywhere. It's not unique to IDF, however. I suspect that at the upcoming ARM TechCon the trend will continue, but at the SoC level, with on-chip software-defined radios and various other software-defined capabilities covering a wide range of functions, from video processing to security.

Is there really anything new happening here, or is it just a new coat of paint on old technology?

From conversations with multiple sources, it's clear that this trend toward software-defined infrastructure is fundamentally changing some industries. Not so long ago I was chatting with someone in the networking space who confirmed that, as a company, they were moving away from creating custom hardware toward building software solutions on top of generic networking infrastructure. The rationale was to follow the value and maximize the return on R&D: hardware is a commodity, software is not. Likewise, in the world of advanced data center storage there is a rapid move away from costly custom hardware toward solutions built on commodity hardware.

So what does this mean for those of us who have invested so much of our careers in developing custom hardware? In a software-defined world, are unique hardware-based solutions still relevant?

The answer is simple. Absolutely!

While people talk about software-defined solutions, those solutions are only possible thanks to the unique hardware architecture sitting beneath the software. Let's consider software-defined radio. The idea is not new, but deploying the technology effectively has only become possible thanks to sophisticated application-specific instruction set processors (ASIPs). Only with these processors does the power and performance needed for the complex computation become feasible in SoCs. So while it may be software-defined, it is not software on a general-purpose processor. It's software running on a powerful IP subsystem with processing dedicated to the task.
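
To make that concrete, here is a minimal sketch of what the "software-defined" part of a software-defined radio can look like from the host's point of view. Everything here is hypothetical: the bb_load_kernel and bb_run calls and the firmware image names are invented stand-ins, since a real baseband ASIP exposes a vendor-specific register and DMA interface. The point is simply that the standard is chosen in software, while the heavy lifting happens on dedicated silicon.

```c
/* Hedged sketch: runtime selection of an air interface on a
 * hypothetical baseband ASIP. The driver calls below stand in for
 * vendor-specific memory-mapped register and DMA programming. */
#include <stdio.h>

typedef enum { STD_LTE, STD_WIFI, STD_BT } radio_standard;

/* Stand-in for the firmware images that would be DMA'd to the ASIP. */
static const char *kernel_image(radio_standard s) {
    switch (s) {
    case STD_LTE:  return "lte_demod.fw";
    case STD_WIFI: return "wifi_ofdm.fw";
    case STD_BT:   return "bt_gfsk.fw";
    }
    return NULL;
}

/* Hypothetical driver call: loads a signal-processing kernel onto
 * the dedicated baseband subsystem. */
static void bb_load_kernel(const char *fw) {
    printf("loading %s onto baseband ASIP\n", fw);
}

/* Hypothetical driver call: the ASIP, not the host CPU, crunches
 * the samples at line rate. */
static void bb_run(void) {
    printf("ASIP processing samples at line rate\n");
}

int main(void) {
    /* The "software-defined" part: one SoC, many air interfaces. */
    radio_standard active = STD_WIFI;
    bb_load_kernel(kernel_image(active));
    bb_run();
    return 0;
}
```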

It is true that some software-defined infrastructure does run on pure commodity compute. However, the power, size, and performance requirements of those solutions limit their applicability. The vast majority of software-defined infrastructure is going to require custom hardware to enable it.

So is this really any different to the traditional approach to SoC design? Yes. The fundamental change is a shift away from fixed-function subsystems in the SoC toward highly programmable subsystems. A single video subsystem, for example, may handle everything from video encode/decode to computer vision. Likewise, the radio subsystem will implement any number of wireless standards, and the networking subsystem will handle a wide range of protocols sharing a common physical layer. While all of this will be powered by software, it's software running on dedicated SoC subsystems, and that is what will continue to let companies differentiate with unique hardware solutions.
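
As a rough illustration of that shift from fixed-function to programmable, consider a video subsystem that takes on different "personalities" by loading different microcode images. The vpu_configure call, the workload table, and the image names below are all invented for illustration, but the pattern is the one described above: one piece of silicon, many functions.

```c
/* Hedged sketch: one programmable video subsystem, several
 * personalities. Real subsystems swap microcode or firmware through
 * vendor drivers; this table-driven model is a stand-in. */
#include <stdio.h>
#include <string.h>

struct personality {
    const char *name;      /* workload the subsystem takes on  */
    const char *microcode; /* image that reprograms the engine */
};

static const struct personality vpu_table[] = {
    { "h264-encode",  "enc_h264.ucode" },
    { "h265-decode",  "dec_h265.ucode" },
    { "optical-flow", "cv_flow.ucode"  }, /* computer vision */
};

/* Reprogram the subsystem for a new workload, if it is supported. */
static int vpu_configure(const char *workload) {
    for (size_t i = 0; i < sizeof vpu_table / sizeof vpu_table[0]; i++) {
        if (strcmp(vpu_table[i].name, workload) == 0) {
            printf("reprogramming VPU with %s\n", vpu_table[i].microcode);
            return 0;
        }
    }
    return -1; /* workload not supported by this silicon */
}

int main(void) {
    /* Same silicon, two very different jobs. */
    vpu_configure("h264-encode");
    vpu_configure("optical-flow");
    return 0;
}
```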

I believe that this change is actually good for SoC design and designers. It makes an SoC relevant to a much wider range of applications, which in turn reduces the risk that comes with the high cost of SoC development. It's not all rosy: programming these heterogeneous compute environments is non-trivial. But thanks to advances in multi-processor development environments and well-partitioned functionality, it is a manageable problem, and one far outweighed by the advantages this flexibility delivers.
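
For a flavor of what "well-partitioned functionality" can mean in practice, here is a hypothetical sketch in which control-plane code on the host CPU describes data-plane work as self-contained jobs, each explicitly targeted at the subsystem that owns that function. The job structure and engine names are invented; real multi-processor runtimes are vendor-specific, but the partitioning principle is the same.

```c
/* Hedged sketch: partitioning work across a heterogeneous SoC.
 * Each job names its target engine up front, so the scheduler never
 * has to reason about shared state across dissimilar cores. */
#include <stdio.h>

enum engine { ENGINE_CPU, ENGINE_DSP, ENGINE_NPU };

struct job {
    enum engine target; /* which subsystem runs it */
    const char *kernel; /* what it does            */
};

/* Stand-in for a vendor runtime's job-submission call. */
static void submit(const struct job *j) {
    static const char *names[] = { "CPU", "DSP", "NPU" };
    printf("queue %-12s -> %s\n", j->kernel, names[j->target]);
}

int main(void) {
    /* A cleanly partitioned pipeline: control on the CPU,
     * signal processing on the DSP, inference on the NPU. */
    const struct job pipeline[] = {
        { ENGINE_CPU, "parse-config" },
        { ENGINE_DSP, "fir-filter"   },
        { ENGINE_NPU, "classify"     },
    };
    for (size_t i = 0; i < sizeof pipeline / sizeof pipeline[0]; i++)
        submit(&pipeline[i]);
    return 0;
}
```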

In a software-defined world, long live hardware.


