Performance First

The pendulum appears to be swinging back from power to performance.


Crank up the clock speed. It takes a lot more performance to run virtual reality smoothly, to process data in the cloud, or to stream high-definition video from a drone. And none of that compares to the performance required to kill an array of disturbingly realistic zombies on a mobile device alongside other players scattered around the globe.

After several years of emphasizing power and good-enough performance, enough new applications—or existing applications with increased workloads—are reaching the market with sufficient consumer demand to push the pendulum back the other way. Performance is critical to these applications, and the higher the clock frequency, the better they run.

It’s not that power concerns have disappeared. Quite the opposite, in fact. Battery life is limited, and devices are getting thinner, not fatter. Thermal effects caused by static leakage, as well as by the resistance of pushing electrons through skinny wires at increasingly high drive strength, are known to degrade reliability. Heat kills circuits prematurely. It also changes the user experience. No one wants to hold a smartphone running at 3.5 watts for very long, and no one wants even close to that much heat next to their face.

Still, there is a growing recognition that boosting performance for a variety of consumer and business applications is becoming essential. At the same time, there is plenty of data to show that just shrinking features isn’t going to solve everything. That has driven chipmakers and systems companies to revisit approaches, tools and materials that have been pushed aside for many years because shrinking features provided good-enough benefits for the least amount of money. That formula no longer applies.

The semiconductor industry is hardly out of options, however. There are multiple ways to solve this problem, and in some cases all of them may be required. In the networking and server markets, the solution increasingly is a different type of packaging or architecture, or both. All of the major networking switch makers now offer 2.5D packages to speed the flow of data in and out of memory, and in the server market chipmakers are experimenting with everything from 2.5D to monolithic 3D. There also is work underway to increase the density of SRAM, and JEDEC currently is working on a follow-on to DDR4, even though for years most people swore it would never come to that.

On top of that, market-specific solutions are being developed, from automotive to industrial to consumer, to boost performance using more heterogeneous integration. Even supercomputers are no longer arrays of the most powerful CPUs. They are a mix of CPUs, GPUs and FPGAs, and in the future more devices and systems will begin to incorporate this kind of mix, sometimes on the same board and sometimes in the same package.

There also is more work being done on microarchitectures, where voltage is reduced for some applications but not others, and where individual cores in a processor are assigned different roles.

Power is still the gating factor on performance, but performance increasingly will be what sells the whole package, particularly in certain markets. This should keep engineers busy for quite some time. It also should keep tools vendors quite happy, because it requires an increasingly large arsenal of expensive tools. And it will satisfy IP vendors because they’re the only ones with the time and wherewithal to squeeze every last microwatt out of an IP block.

And for all the speed junkies who longed for the days when increasingly powerful processors ruled the tech world, the pendulum is swinging back toward them. Only this time, it’s coming a lot faster.
