Power Next

The next big savings will come from putting software functionality into hardware.
Development teams face many tradeoffs when defining a new product: How much should it cost? What functionality or features need to be included? And what level of performance is required? For example, to reduce costs it’s possible to trade away performance by implementing functionality in software instead of in application-specific hardware. For an SoC that already includes a processor core (or two), this removes the step of taking the already defined software model of the functionality and implementing it in hardware. There’s also a chip-area savings in reusing the processor that’s already in the design rather than spending more gates on the new functionality.

On the other hand, reducing the functionality enables designers to improve performance by concentrating on more specialized circuitry for fewer features, while still keeping the die area and costs down. Not surprisingly, the all-singing, all-dancing, high-performance, fully feature-laden parts are typically going to cost the most.

The scenario above has led design teams to put more functionality into software and to chase performance by making their processors run faster. The reasoning is that if the processor runs faster, then everything that runs on it will also run faster. It seems like a win-win situation: both performance and functionality with little additional cost, and certainly this has been a major part of the huge increase in the use of processors over the past four decades. Of course, that ever-increasing performance from running processors at ever-higher clock frequencies has hit a wall, and this is where power enters the story. Thermal, battery-lifetime and power-cost requirements effectively cap the clock frequencies of today’s processors.

Silicon costs ($/transistor) keep going down
At each new technology node, the number of available transistors roughly doubles. This is basically the trajectory predicted by Moore’s Law, and it is still holding true. We’ve already seen clock frequencies hit a ceiling. ARM CTO Mike Muller has sounded the alarm that current projections show the energy used per device isn’t scaling as quickly as the device geometry, so future technology nodes will only be able to power up a fraction of the available circuitry on the chip at any one time.

Better energy efficiency with specialized HW
Processors emulate functionality in software, and there’s generally a significant energy overhead for that flexible, programmable implementation. YJ Kim, senior director and head of the embedded processor group in Cavium’s Networking & Communications Division, gave a presentation on Cavium’s new OCTEON II at last November’s SoC Conference in Irvine, Calif. The OCTEON II is a MIPS-based multicore processor, and one of the really novel aspects of the design is its use of hardware accelerators to create a high-performance yet power-efficient part. Kim estimated that if all of the functionality were implemented on the cores it would require a 50W processor, but with the accelerators everything can be done in 15W. He also pointed to the use of “power throttling” to save 1.5W, and to the importance of even that 1.5W savings: one of the target markets for the OCTEON II is outdoor base stations, where the part needs to operate in pretty extreme environments, run fanless, and stay under 17W. He said there’s a big difference between 15W, 17W and 20W in terms of getting designed in.

Given the declining silicon cost of adding circuitry, the next cost that needs to come down to really push this approach forward is the engineering cost of taking that already defined software model of the functionality and implementing it in gates. This is where the EDA industry can play an important role, by providing tools that automate the process of taking the functionality, often described in C/C++, and creating hardware implementations. The end benefits are better performance, functionality and power characteristics with only a slight increase in cost. If the economics support it, it will be the trend.

–Barry Pangrle is a solutions architect for low-power design and verification at Mentor Graphics.
