The Case For Extensible Processors

Not all applications are alike, so why should all embedded processors be?

By Neil Hand
In a previous post I talked about how intelligent subsystems are going to impact overall system design. Because many more design teams are going to be required to build complex processing into their subsystems, I thought it was time to expand on that topic and talk a little about the options available for adding that intelligence.

One size does not fit all
The traditional approach to adding intelligence to a subsystem would be to pick an off-the-shelf embedded processor, integrate it, and proceed as with any other embedded design.

In the context of these subsystems, however, it is not that simple. The requirements for processing in these subsystems will vary dramatically. And while standard processors are optimized for running generic applications, these subsystems need to run highly optimized, application- and protocol-specific code.

Going with a standard processor in these subsystems initially seems like the simplest approach, but it can later cause problems in both performance and power, leading to a much longer development process as the designers struggle to optimize the software while at the same time compromising on power.

Pure hardware is not the answer, either
So why not simply hard-code the subsystem's controller? While this would deliver great performance and power, it is completely unworkable for an intelligent subsystem. The problem is that as these subsystems have subsumed more functionality, they are now as complex as complete SoCs from not long ago. Hard-coding a controller of that complexity would be a massive undertaking. In addition, with specs and protocols constantly being refined, a hard-coded solution would not offer sufficient flexibility and would be obsolete before it was even finished.

Specialist and extensible processors to the rescue
All is not lost, however. There is something that lives between the two extremes. Specialized embedded processors are not new and have been in use for a while. The most obvious examples are video decode engines, graphics processors, and DSPs for software radios. All of these combine programmability with specialized hardware instructions, tailored to the application being targeted, to get the job done.
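
As a rough illustration of that mix, consider the sketch below. It is plain C, with a hypothetical __mac_sat() intrinsic standing in for a DSP's single-cycle saturating multiply-accumulate instruction; the macro and intrinsic names are placeholders rather than any particular toolchain's API. The same filter kernel compiles to ordinary code on a generic core and to one specialized instruction per tap on the DSP.

    /* Illustrative only: the same FIR kernel written once, mapped either to a
     * generic core or to a DSP that exposes a saturating multiply-accumulate
     * instruction through a compiler intrinsic. HAVE_MAC_INTRINSIC and
     * __mac_sat() are hypothetical placeholders. */
    #include <stdint.h>

    #ifdef HAVE_MAC_INTRINSIC
    /* Specialized core: multiply, accumulate, and saturate in one instruction. */
    #define MAC(acc, a, b) __mac_sat((acc), (a), (b))
    #else
    /* Generic fallback: several instructions per tap on a standard processor. */
    static int32_t MAC(int32_t acc, int16_t a, int16_t b)
    {
        int64_t r = (int64_t)acc + (int64_t)a * (int64_t)b;
        if (r > INT32_MAX) r = INT32_MAX;
        if (r < INT32_MIN) r = INT32_MIN;
        return (int32_t)r;
    }
    #endif

    /* A simple FIR filter: the kind of inner loop a DSP is built around. */
    int32_t fir(const int16_t *samples, const int16_t *coeffs, int taps)
    {
        int32_t acc = 0;
        for (int i = 0; i < taps; i++)
            acc = MAC(acc, samples[i], coeffs[i]);
        return acc;
    }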

A really cool example of this type of approach is a project that I first heard about several years ago. GreenDroid is a project to build an Android-specific application processor with many specialized cores to accelerate specific parts of the operating system automatically. The result is improved performance and significantly reduced power. Another example is the Anton Supercomputer, which utilizes specialized processors to simulate interactions of proteins and other biological macromolecules.

For our subsystems we do not need to go to the extremes of the above examples, although it would be fun. Instead, we can leverage one of the several extensible processor IP cores on the market. Utilizing the extensibility of these processor architectures offers a compromise between the all-hardware and all-software approaches. The subsystem's protocol stack can be implemented and profiled, and the most processor-intensive portions accelerated through custom processor instructions. The net result is an overall reduction in power, improved performance, and much less time and effort spent trying to optimize code running on a standard processor, all while preserving flexibility.
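
To make that flow a little more concrete, here is a minimal sketch of the software side. It assumes, purely for illustration, that profiling flagged a per-packet checksum as the hot spot, and it uses a made-up __chk_step() intrinsic to stand in for whatever custom instruction the chosen extensible core's toolchain would actually generate; the plain-C fallback keeps the same stack running on a standard processor or in simulation.

    /* Sketch of the profile-then-extend flow: the protocol stack stays in C,
     * and only the routine that profiling flagged as hot is mapped onto a
     * custom instruction. HAVE_CHK_INSTR and __chk_step() are hypothetical
     * placeholders for an extensible-processor toolchain's intrinsic. */
    #include <stddef.h>
    #include <stdint.h>

    #ifdef HAVE_CHK_INSTR
    /* Custom instruction folds one byte into the running checksum per cycle. */
    #define CHK_STEP(sum, byte) __chk_step((sum), (byte))
    #else
    /* Portable fallback used for profiling, simulation, or a standard core. */
    static uint32_t CHK_STEP(uint32_t sum, uint8_t byte)
    {
        sum = (sum << 1) | (sum >> 31);   /* rotate left by one bit */
        return sum + byte;                /* simple rotating-add checksum */
    }
    #endif

    /* The hot loop identified by the profiler: runs for every packet. */
    uint32_t packet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = CHK_STEP(sum, data[i]);
        return sum;
    }

The particular checksum is beside the point; what matters is that the rest of the stack, and the tools around it, stay in ordinary software while only the profiled hot spot moves into a custom instruction.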

Enabling differentiated products
Best of all, because not all subsystem providers will be using the same implementations, design teams can differentiate using their own unique mix of processor hardware and software and avoid the situation where subsystems quickly become commoditized. So everyone wins.


