Subsystems And Reuse

Are they solving the software challenge? That seems to be the direction, at least.


By Frank Schirrmeister
The last couple of weeks have been very busy with travel, customer meetings and presentations—DATE in Grenoble, CDNLive in San Jose and, most recently, EDPS in Monterey. Software enablement and IP sub-systems have been the key themes throughout these events, and during Gary Smith’s keynote at EDPS, I realized that subsystem reuse may be a significant step to solving software challenges. And I realized that I have seen this play out before.

First, at DATE in Grenoble, I attended a session appropriately titled "IP Sub-Systems: The Next Productivity Wave," in which Tensilica, Synopsys, Cadence and Intel presented their lessons learned.

Tensilica's Grant Martin led off, presenting how Tensilica moved up in complexity from single cores to dual cores, to multi-core clusters and, ultimately, to complete subsystems. He described how a hierarchical approach allowed them to achieve maximum reuse and how automation was added at each hierarchical step. Tensilica added provisioning for software in two steps: communication software and APIs, as well as application-domain software ranging from partial PHY demonstrators to complete PHY libraries and software IP. The key to the deliverables was flexibility for end users: allowing them to easily replace RTL cores, to upgrade the performance of a subsystem or let it run proprietary algorithms when needed, and to integrate the subsystem into full SoC hardware/software systems.

Cadence's Peter Bennett showed how Cadence I/O solutions cut SoC integration time from months to weeks, eliminating the risk associated with integrating independent components from different providers and letting SoC developers spend more time on key differentiating features rather than on verifying standards-based solutions. On the software side, Peter described how the provision of firmware drivers, which had first been brought up on Cadence's virtual prototyping solution (the Cadence Virtual System Platform), completed the offering and reduced overall time to market.

Synopsys's Pieter van der Wolf talked about the configurable ARC audio subsystems, which come equipped with software plug-ins that support integration with host processors and ease offloading. Pieter described how the value of a subsystem increases if it ships with pre-integrated standard functions while remaining open to non-standard ones, and how both high-level hardware and software interfaces are needed to simplify integration into the system on chip.

Intel's Menno Lindwer described the challenges of video encoding/decoding subsystems and how they were addressed in the European Project ASAM (Architecture Synthesis and Application Mapping). The hardware abstraction layer software and the Video for Linux software stack looked quite complex, and Menno described the scheduling dependencies in detail, leaving me with the distinct impression that these issues really need to be worked out at the subsystem level first, and be error-free, to allow surprise-free integration into a chip.

Overall, I left Grenoble initially a bit surprised at just how much software there is in a subsystem. But then I realized that this is simply a natural step in the growth of complexity while maintaining the need for flexibility and customization.

Then last week, I went to EDPS in Monterey. We had a lively discussion on a panel that has already been written up by Richard Goering. During the evening keynote, Gary Smith calculated, based on 2012 ITRS data, how many gates can actually be utilized within a 5W power envelope, and how hardware and software costs are predicted to develop over the next 15 years. Here is the graph:


ITRS Software/Hardware Cost Chart 2013 – 2027

What is striking is that software's share of overall SoC design cost actually goes down over the next three years. The audience flagged that immediately, and Gary explained the mechanics behind the charts: the effect is caused by the reuse of software delivered within subsystems.

Intuitively, especially given the presentations I saw at DATE a couple of weeks ago, this makes a lot of sense, and I realized that we in the EDA industry have seen this effect before. I actually blogged about a very similar effect in an article called "System-Level Design and the Waves of EDA," in which I analyzed EDAC data at the beginning of 2012. We were all worried about looming design complexity back in the late 1990s; then the reuse of silicon IP (block-based IP, that is) addressed the issue in the first decade of the 21st century.

It looks like subsystems will have a similar effect now in the second decade, and given the amount of software that is reused with them, they actually have the potential to address a significant portion of the software challenge as well!

—Frank Schirrmeister is group director for product marketing of the System Development Suite at Cadence.
