Smaller, Faster, Cheaper—But Different

These metrics are taking on new meaning as electronics begin playing a bigger role.


The old mantra of “smaller, faster, cheaper” has migrated from the chip level to the electronic system level, raising some interesting questions about where the real value is being generated.

Smaller, as it pertains to gate size and line widths and spaces, will continue along an almost straight line for at least the next decade. The ability to print three-dimensional features at nanoscale using EUV is a major breakthrough, one that has taken more than a decade and billions of dollars to perfect. Add in directed self-assembly, nanoimprint lithography, and self-aligned double and quadruple patterning, along with new equipment, and shrinking those features into the 1x nm range appears not only possible, but highly likely.

But it’s not the chips that are making systems smaller. It’s the ability to use electronics to accomplish things that previously were either mechanical or manual. A car managed by a central brain that controls electronic systems is far more nimble and accurate than a person controlling a collection of finely tuned, but much larger mechanical parts. That shift will continue to evolve over the next decade as vehicles migrate from electronic assistance to autonomous electronics.

Faster is a given, but even here the meaning has shifted. The challenge is not only to process data faster. The question now is how to move much more data around more quickly. That’s no longer just about a single chip. It frequently involves system-level architectural decisions, from edge devices to the cloud, and various mid-level processing islands in between. Increasingly, it also involves putting multiple chips into a single package that can be tuned for specific data types.

The biggest hurdle to making systems faster is no longer just the architecture, though. The real gatekeeper is power, and it extends from the chip all the way up to the electric power grid. At the chip/package level, the key issues to contend with are heat and noise. Heat is regulated by sophisticated power management schemes that throttle back the clock frequency when a system gets too warm. Noise is a function of thinner wires and dielectrics, and of more digital and analog components packed into a single device.
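The throttling scheme described above can be sketched in a few lines. This is a minimal illustration with made-up temperature thresholds and frequency steps, not the logic of any real power management governor:

```python
# Hypothetical thermal-throttling sketch: step the clock down when the
# chip runs hot, and back up when there is thermal headroom.

MAX_TEMP_C = 85.0                                 # throttle above this temperature
FREQ_STEPS_MHZ = [3200, 2800, 2400, 2000, 1600]   # allowed clock rates, fastest first

def next_frequency(current_mhz: int, temp_c: float) -> int:
    """Pick the next clock rate from the current rate and a temperature reading."""
    i = FREQ_STEPS_MHZ.index(current_mhz)
    if temp_c > MAX_TEMP_C and i < len(FREQ_STEPS_MHZ) - 1:
        return FREQ_STEPS_MHZ[i + 1]   # too warm: drop one step
    if temp_c < MAX_TEMP_C - 10 and i > 0:
        return FREQ_STEPS_MHZ[i - 1]   # cool with headroom: raise one step
    return current_mhz                 # otherwise hold steady

# A reading of 90°C at 3,200MHz drops the clock one step, to 2,800MHz.
```

Real schemes are far more elaborate (multiple sensors, voltage scaling, per-core control), but the core trade is the same: performance is spent to stay inside a thermal budget.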

At the larger system level, the demand for processing power is raising questions about how to process more data using the same or less power. In certain parts of the United States, there is not enough electricity being generated to power and cool large data centers. And if estimates about bitcoin mining are correct (roughly 3% of the world's energy is said to be devoted to it), then what happens when data centers have to process the increasing amount of data generated by autonomous vehicles, robots, drones, and tens of billions of other sensors?

Cheaper used to be a simple financial equation that could be done on a napkin, but that's no longer the case. While chips are getting more expensive to produce at advanced nodes, the systems that use those chips are getting less expensive relative to the number of functions they provide. Cost has evolved into a multi-faceted equation. If the quality of chips improves over time, then the total cost of ownership goes down and devices are indeed cheaper. If quality stays the same and systems begin failing after several years, then the cost goes up. And if systems can be compromised by hackers, the financial equation shifts in an entirely different direction.
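The napkin math above can still be written down, but the answer now depends on lifetime and upkeep, not just the sticker price. A toy sketch with invented numbers shows how a pricier but longer-lived system can come out cheaper:

```python
# Illustrative total-cost-of-ownership arithmetic with made-up figures.

def tco_per_year(purchase_price: float, annual_upkeep: float, lifetime_years: float) -> float:
    """Lifetime cost spread over each year of useful service."""
    return (purchase_price + annual_upkeep * lifetime_years) / lifetime_years

cheap_but_short = tco_per_year(400, 50, 3)   # cheaper device, fails after 3 years
costly_but_long = tco_per_year(700, 50, 7)   # dearer device, runs for 7 years

print(round(cheap_but_short, 2))  # 183.33 per year
print(round(costly_but_long, 2))  # 150.0 per year
```

The same framing absorbs the security point: a breach is effectively a large, unplanned addition to the upkeep term.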

So systems are still getting smaller, faster, and possibly cheaper, even as the electronic content grows larger, composed of components that are individually smaller, faster, and more expensive. And that has some interesting implications for where the real value is being created and who will profit from these shifts.