As devices and systems become increasingly connected, everyone and everything involved needs to focus on energy efficiency.
A decade ago, former International Rectifier CEO Alex Lidow declared that there were three main categories for saving energy on a mass scale: variable-speed motors, fluorescent lighting, and more efficient servers.
He was right at the time. Those weren't necessarily semiconductor-driven markets, but they were the places where the most power could be saved. In fact, the rough estimate back then was that the impact of making chips themselves more efficient was in the single digits as a percentage of the total savings, and even digital control would account for just a fraction of the total.
Fast forward to today. With more chips flooding into cars to replace mechanical functions, with LED lighting and much more efficient industrial operations, and with the growing focus on powering up and powering down sections of data centers, saving energy has become possible almost everywhere, and on a grand scale. In fact, it's hard to separate the chips from the devices, because chips of all sorts (sensors, MEMS, FPGAs, ASICs, and complex SoCs) are now scattered almost everywhere. The amount of electronic content in a car is staggering compared with just a few years ago, and it will only grow as cars, and the devices that interact with them, continue to become smarter.
This is good news for the planet, as long as the net power reduction from all the new power techniques outweighs the energy consumed by the new devices being added onto the grid. But it also raises some interesting challenges in the semiconductor world.
1. Think bigger. Whether or not you agree with the term Internet of Things, or with how all of this connectivity ultimately will shape up, the reality is that connected devices need to be designed as parts of connected systems. Always-on can be good, but as everyone has learned with cable set-top boxes, it also can be very wasteful. A system now needs to be designed end-to-end, and that includes an understanding of how the infrastructure connects to all the devices, as well as how the devices connect back to that infrastructure and to each other. One alternative to always-on is aggressive duty cycling, as the sketch below illustrates.
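To make that concrete, here is a minimal sketch of a duty-cycled edge node, written in C against a hypothetical hardware-abstraction layer (the radio_*, sensor_read, and mcu_deep_sleep_seconds calls are stand-ins for whatever a real device SDK provides). The node wakes briefly to sample and transmit, then puts the radio and CPU into a low-power state rather than staying always-on.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hardware-abstraction calls; a real vendor SDK or HAL
 * would provide equivalents. */
extern void     radio_power_on(void);
extern void     radio_power_off(void);
extern bool     radio_send(const uint8_t *buf, uint32_t len);
extern uint16_t sensor_read(void);
extern void     mcu_deep_sleep_seconds(uint32_t s); /* low-power timer wake */

#define REPORT_INTERVAL_S  (15u * 60u)  /* report every 15 minutes */

int main(void)
{
    for (;;) {
        /* Wake, sample, and transmit in one short burst... */
        uint16_t reading = sensor_read();
        radio_power_on();
        (void)radio_send((const uint8_t *)&reading, sizeof reading);
        radio_power_off();

        /* ...then spend the vast majority of the cycle asleep.
         * The radio and CPU draw microamps here instead of milliamps. */
        mcu_deep_sleep_seconds(REPORT_INTERVAL_S);
    }
}
```

The design point is simply that idle time dominates, so average current draw is set by the sleep state rather than by the brief active burst.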
2. Think about what's already there. For years, most of the energy-saving features built into hardware were ignored by software developers. There wasn't much consumers could do about this, because there weren't any viable alternatives. But as more computing continues to be distributed (it started with the PC, then moved to the notebook, then to the mobile phone, and now it's being done in everything from appliances to wristbands) there will be plenty of competing edge devices. Battery life will be a key differentiator, and companies that can leverage the hooks that are already in place with more efficient software will get to market faster, and garner more sales. One concrete example of such a hook follows below.
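As one example of a hook that already exists: on a Linux-based device, the kernel's cpufreq subsystem exposes per-CPU frequency governors through sysfs, and software only has to opt in. This is a minimal sketch, assuming a Linux target with a cpufreq driver loaded; the path and the set of available governors vary by kernel and hardware, and writing the file requires root privileges.

```c
#include <stdio.h>

/* Switch cpu0 to the "powersave" cpufreq governor via sysfs.
 * No new hardware is needed; the hook is already there, waiting
 * for software that bothers to use it. */
int main(void)
{
    const char *path =
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");   /* no cpufreq support, or not running as root */
        return 1;
    }
    fputs("powersave", f);
    fclose(f);
    return 0;
}
```

Equivalent hooks (sleep states, clock gating, peripheral power domains) exist on most platforms; the point is that software has to actually use them.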
3. Think about how much it will cost. The next leaps in saving energy will come from the process side (new materials, new transistor structures, and new interconnects) as well as from new ways of packaging all of that in 2.5D and 3D. The big question is whether cost per gate will continue to rise, and what effect that will have on designs. All of these approaches hold out the possibility of saving more energy through more efficient transfer of electrons over shorter distances, but they also are likely to be more complex, slower to bring to market, and probably harder to design. And while work will continue at older nodes such as 28nm and 40nm, the industry ultimately will move in multiple directions, each of which could be more costly than before; the back-of-the-envelope arithmetic below shows why cost per gate is the number to watch.
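Here is that arithmetic as a sketch. All of the inputs are hypothetical placeholders, chosen only to illustrate the tradeoff, not actual foundry figures.

```c
#include <stdio.h>

/* Back-of-the-envelope cost-per-gate arithmetic. Every input below is
 * an assumed, illustrative number. */
int main(void)
{
    double wafer_cost    = 10000.0;  /* $ per processed wafer (assumed) */
    double good_dies     = 200.0;    /* yielded dies per wafer (assumed) */
    double gates_per_die = 100e6;    /* logic gates per die (assumed) */

    double cost_per_mgate =
        wafer_cost / (good_dies * gates_per_die) * 1e6;

    printf("cost per million gates: $%.2f\n", cost_per_mgate);
    /* If the next node doubles the wafer cost but density and yield
     * gains don't keep pace, this number rises -- that is the
     * cost-per-gate worry in a nutshell. */
    return 0;
}
```

With these placeholder numbers the result is $0.50 per million gates; the worry is what happens to that figure when wafer cost climbs faster than yielded density.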
Taken as a whole, these shifts point to a mainstream recognition that power considerations are not just on par with performance and area, but in many cases even more important. And for engineers who typically haven't focused on power before, it might be time to start brushing up on power-saving techniques, because they're about to be required everywhere.