The (R)evolution Of Intelligent IP Subsystems

The solution to complexity is to redistribute it, which affects hardware and software.


IP subsystems have gone from talking point to reality in a very short period of time, but most coverage focuses on the hardware integrator's view. The system integrator's view is very different, because the task of software integration is now vastly more complex: it involves software from multiple providers, built on different assumptions and with different requirements. This effort, already larger than the hardware effort, has now grown potentially to the breaking point.

The solution is a redistribution of complexity, one that affects both hardware and software. But before going into the solution, I want to draw on a parallel from my past.

Back in my system design days (longer ago than I care to admit), while attending an internal design conference, a team presented its solution to the complexity of programming traditional monolithic DSPs. Their solution was to create a nano-DSP IP block, program each section of the data pipeline separately, then build a custom ASIC containing many of these nano-DSPs.

The shocker for me was that even with the IP and ASIC development, the overall system development cost was reduced, so large had the previous software development effort been. System programming was now high-level plumbing between proven blocks. My realization at the time was that software development is not free, and proposing an increase in hardware complexity was not the end of the world.

So, how does this relate to today’s SoCs?

Software complexity is now larger than hardware complexity, and some of it needs to shift back to hardware. The (r)evolution of the IP subsystem is that rather than delivering IP subsystems with a chunk of code for system integrators to manage, providers should deliver them with fully integrated embedded processing. With IP subsystems running localized code, system software now deals with well-defined (and tested) APIs, effectively high-level plumbing. There is no need to integrate third-party software into a larger monolithic system.

This approach will raise hardware complexity and introduce many new embedded processors along with their associated memory management, but overall system complexity will be significantly reduced. Best of all, because the IP comes from an IP provider, there is no extra effort for the hardware team. And because the larger IP providers now have access to embedded processors through acquisition or OEM agreements, integration is simple for them and overall support costs are lower.

With SoCs becoming a series of subsystems communicating over an efficient on-chip network coordinated by the main processor, the recent Sonics/ARM deal makes perfect sense. ARM wants to own the SoC-level processing and network, and will leave the rest to the subsystem providers.

So why put the (r) in (r)evolution? While it will be revolutionary from a system perspective, some wireless subsystems already include embedded processors to meet strict real-time requirements. Now all IP subsystems need to evolve along the same path with an eye to simplifying software.

So bring on intelligent IP subsystems. The only loser is silicon area. But thanks to TSMC, that is abundant and (relatively) cheap.