Performance Still Trumps Power

Despite all the claims about power becoming the dominant design consideration, it still hasn’t achieved that distinction.


When it comes to technology, the past always looks simpler than the present and the future looks daunting. In part this is because solving one problem allows us to discover the next. Over time, the previous problem becomes better understood and solutions improve to the point where it is no longer considered a problem at all. Deciding how to implement functionality was once a fairly easy choice: a tradeoff between what was executed on a general-purpose processor and what was implemented as custom logic. The goal was to meet the necessary performance while staying within the area constraint. In some cases, area could then be traded off against product cost.

System architects used a simple spreadsheet to perform this analysis. Those decisions became a little more complicated when solutions between the two extremes emerged: technologies such as DSPs, FPGAs and custom instruction-set processors. Each of these provided a different tradeoff between performance, area and power, but the analysis was still static in nature.
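For illustration, the kind of static filter-and-sort analysis such a spreadsheet performs might look like the sketch below. Every candidate name and every performance, area and power figure here is an invented placeholder, not real silicon data; the point is only the shape of the decision.

```python
# A minimal sketch of the static "spreadsheet" analysis described above.
# All figures are illustrative placeholders, not real silicon data.

# Candidate implementation technologies with hypothetical metrics:
# performance in MIPS-equivalents, area in mm^2, power in mW.
candidates = {
    "general-purpose CPU": {"perf": 500,  "area": 2.0, "power": 400},
    "DSP":                 {"perf": 1200, "area": 1.5, "power": 250},
    "FPGA fabric":         {"perf": 2000, "area": 6.0, "power": 600},
    "custom logic":        {"perf": 4000, "area": 0.8, "power": 120},
}

PERF_REQUIRED = 1000   # minimum acceptable performance
AREA_BUDGET   = 5.0    # mm^2 available for this function

# Keep only the options that meet the performance floor and the area
# ceiling, then pick the smallest survivor -- exactly the kind of
# filter-and-sort a spreadsheet performs. A real analysis would add
# columns for NRE and unit cost.
feasible = {
    name: m for name, m in candidates.items()
    if m["perf"] >= PERF_REQUIRED and m["area"] <= AREA_BUDGET
}
best = min(feasible.items(), key=lambda kv: kv[1]["area"])
print(f"Selected: {best[0]} ({best[1]})")
```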

Over the past decade, many of the partitioning and selection criteria have changed. Individual processors stopped getting more powerful, forcing the transition to multicore architectures; power consumption became an important issue; and integration levels rose to the point where several processors, memory hierarchies and accelerators are connected using sophisticated interconnect schemes. Semiconductor Engineering asked how changing technology and selection criteria have affected the choice of implementation technologies, and how much those choices are driven by the need to reduce power consumption.

“The decision is really market segment based, where the normal factors of time-to-market and overall costs are always considerations,” says Mary Anne White, director of marketing for the Galaxy Implementation Platform at Synopsys. “There is no one general trend or rule of thumb for what is best for the different companies.”

Still, the most common approach seems to be one based on history. “Performance still drives most of the design decisions, but there is an increasing top-down focus on doing smart design techniques to reduce the ‘wasted’ power,” says Aveek Sarkar, vice president of product engineering & support at ANSYS-Apache.

Adds Frank Schirrmeister, group director of product marketing for the System Development Suite at Cadence: “While the importance of power consumption has grown significantly over the years, I have personally never seen it trump performance (the design simply has to perform its basic tasks). Sometimes power is traded against cost, but this is dependent on the application domain.”

Not everyone believes power is a secondary consideration, of course. Bernard Murphy, chief technology officer at Atrenta, sees two markets with very different attitudes towards power consumption. “For consumer devices which are a) very power-sensitive, b) not so dependent on the ultimate in performance, and c) certainly cost-sensitive, targeted application processors are now the only possible fit. For enterprise applications where performance is essential and power is important but not primary, general-purpose processors continue to be the best solution.”

But of the two, power predominantly remains a secondary factor in making decisions. “Overall, systems are becoming more tuned to specific needs,” says Jon McDonald, technical marketing engineer for design and creation at Mentor Graphics. “Power decisions cannot be effectively made without considering the performance and cost impact of that decision.”

It would appear that integration is having an impact on processor selection decisions. “The two in-betweens (DSP and FPGA) are being marginalized as the SoC paradigm takes hold and becomes commonplace,” observes Pranav Ashar, chief technology officer at Real Intent. “FPGAs as cores in an SoC are not yet feasible, and standalone FPGAs suffer in terms of bandwidth/power and real estate. The upshot: If some part of the system function needs specialized processing, design a hardware core for it and plug it into the SoC platform rather than move big parts of the design to an FPGA or attempt to use a hard-to-program DSP.”

The one “in-between” case that continues to gain traction involves processors having custom instruction sets and architectures tuned for certain types of applications. Both Cadence (Tensilica) and Synopsys (ARC) are making large investments in this area.

“In many cases, the power budgets for new designs are staying the same or declining, even with the need for higher performance,” explains Mike Thompson, senior product marketing manager for ARC processors and subsystems at Synopsys. “To deal with this, SoC designers are being more selective about the processors that they use and they are looking for a higher level of performance efficiency (DMIPS/mW) from the processors that they license. They are also implementing multicore solutions (homogeneous and heterogeneous) that enable them to partition their design among a number of processors, which allows them to selectively idle portions of the chip when they are not being used.”
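A rough sketch of the two criteria Thompson describes, comparing licensable cores on DMIPS/mW and estimating the average power of a partition whose cores are selectively idled, might look like this. The core names, DMIPS ratings, power numbers and duty cycles are all invented for illustration.

```python
# Sketch of two selection criteria: performance efficiency (DMIPS/mW)
# of candidate cores, and the energy effect of idling unused cores in
# a multicore partition. All numbers are assumptions.

cores = {
    "core A": {"dmips": 2000, "active_mw": 100, "idle_mw": 5},
    "core B": {"dmips": 3500, "active_mw": 220, "idle_mw": 12},
}

# Efficiency metric used to compare licensable processors.
for name, c in cores.items():
    print(f"{name}: {c['dmips'] / c['active_mw']:.1f} DMIPS/mW")

def average_power(core: str, duty_cycle: float) -> float:
    """Average power when a core is active only duty_cycle of the time
    and idled (e.g. clock-gated) for the remainder."""
    c = cores[core]
    return duty_cycle * c["active_mw"] + (1 - duty_cycle) * c["idle_mw"]

# Partitioning the workload so each core runs 30% of the time and is
# idled otherwise.
total = sum(average_power(name, 0.3) for name in cores)
print(f"Average power with selective idling: {total:.1f} mW")
```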

But power domains are being added to control leakage, and cores are clearly being selected to avoid wasting power. “By having ‘power’ as an engineering focus,” says ANSYS-Apache’s Sarkar, “design teams are able to identify ‘power bugs’ that do not affect their designs’ performance or functionality but end up burning extra power.”

This is evident in the number of tricks being used to limit power, as well. “Some SoCs have more than 100 power domains,” claims Thomas Anderson, vice president of marketing at Breker Verification Systems. “The tradeoff for this flexibility is much more complexity in the design and verification process.”
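To see why teams accept that verification burden, consider a back-of-the-envelope leakage model. The domain count matches Anderson’s figure, but the per-domain leakage numbers and the fraction of domains active in a typical use-case are assumptions chosen only to show the scale of the savings.

```python
# Hypothetical illustration of why fine-grained power gating pays off:
# each domain that can be switched off drops most of its leakage, at
# the cost of isolation/retention logic and verification effort.
# Per-domain values are made up for the sketch.

N_DOMAINS       = 100
LEAK_ON_UW      = 50.0   # leakage per domain when powered, in uW
LEAK_OFF_UW     = 0.5    # residual leakage through the power switch
FRACTION_ACTIVE = 0.2    # share of domains a typical use-case keeps on

always_on = N_DOMAINS * LEAK_ON_UW
gated = (N_DOMAINS * FRACTION_ACTIVE * LEAK_ON_UW
         + N_DOMAINS * (1 - FRACTION_ACTIVE) * LEAK_OFF_UW)

print(f"No power gating:            {always_on / 1000:.2f} mW leakage")
print(f"100 domains, 20% active:    {gated / 1000:.2f} mW leakage")
```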

Power domains are often defined by the hardware team and left to the software group to utilize. “Power management software has become part of the software stack, and that needs to be brought up early on in the product design cycle,” says Tom De Schutter, product marketing manager at Synopsys.

At least part of this treatment of power may be explained by existing tools. “Semiconductor teams do a lot of manual work for power optimization today,” says Schirrmeister. “Some of the control functions have found their way into the Linux and Android operating systems, allowing them to switch certain domains on and off.”
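The kernel’s cpufreq subsystem is one visible example of such an operating-system control, exposed through sysfs. The sketch below reads the scaling governor for cpu0; the exact paths and the set of available governors vary by kernel, driver and device, so treat it as illustrative rather than universal.

```python
# Reading (and, with root, setting) the Linux cpufreq scaling governor
# via sysfs. Paths and governor names depend on kernel and hardware.

from pathlib import Path

CPU0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(node: str) -> str:
    return (CPU0 / node).read_text().strip()

print("Available governors:", read("scaling_available_governors"))
print("Current governor:   ", read("scaling_governor"))

# Switching to the "powersave" governor (requires root) would look like:
# (CPU0 / "scaling_governor").write_text("powersave")
```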

Adds Atrenta’s Murphy: “Traditionally, power strategies are done through a combination of spreadsheet modeling and build-it-and-see approaches. The spreadsheet approach models a set of use-cases based on architect and designer wisdom and generalizations by the software architect about what constitutes representative use-cases. This creates a lot of work, a lot of assumptions, and a lot of uncertainty.”
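A toy version of that spreadsheet makes the weakness Murphy describes visible: every input is an architect’s guess about usage and mode power. All of the modes, duty cycles and power figures below are assumptions, which is exactly the point.

```python
# Use-case spreadsheet model: guess how long the device spends in each
# mode and what each mode draws, then average. Every number is an
# assumption -- the source of the uncertainty Murphy describes.

use_cases = {
    # mode: (fraction of the day, assumed average power in mW)
    "standby":        (0.80, 15),
    "audio playback": (0.10, 120),
    "web browsing":   (0.07, 900),
    "video capture":  (0.03, 2500),
}

avg_mw = sum(frac * mw for frac, mw in use_cases.values())
BATTERY_MWH = 10000  # e.g. roughly a 2700 mAh battery at 3.7 V

print(f"Average power: {avg_mw:.0f} mW")
print(f"Estimated battery life: {BATTERY_MWH / avg_mw:.1f} hours")
```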

What does all of this work buy us? Kurt Shuler, vice president of marketing at Arteris, does not believe the current system is working. “A major semiconductor vendor chafes every time one of his mobile phone customers asks for more power management features because nobody implements all the existing power management features in their phone software. The software is always the thing holding up a consumer electronics product from shipping, so OEMs take shortcuts like not fully implementing power management features in software that take advantage of existing hardware features.”

If so much power is being wasted because software does not utilize the features that are available, then it raises some interesting questions about the real value—and extra cost—of adding complexity into the hardware unless the savings are automatic.



2 comments

garydpdx says:

Happy new year, Brian! Bonne année!

It looks like producers are over-engineering their designs in an effort to meet performance. What is necessary, if the application is properly subdivided, is to find a more optimal hardware-software partition that can meet your performance goals and minimize your power consumption. You may still end up exceeding your power budget but at the least …

Brian Bailey says:

Happy New Year Gary.

If we can’t rely on the software group to do their part in managing power, then there will be a greater reluctance in the future to provide the handles; instead, designers will attempt to do more power reduction internally, controlled by hardware. The answer is that we have to get both teams involved earlier in the design process, and the notion of fixing things later in software has to change. Software can break a product!
