Big changes are coming, but not in the usual places.
One of the unique things about the semiconductor industry is that it has fueled the digital revolution almost entirely by focusing on its core competencies of performance, power, and area. Few, if any, other industries can tie global growth and success to what amounts to an almost isolationist business model.
Salespeople have to sell those chips, of course. Marketers have to create awareness. And outside forces increasingly define the features that are put into those chips. But for the most part, systems are built on chips rather than chips being constructed for systems. It’s hard to argue with that model’s success. Semiconductors now account for about $346 billion in sales, according to the most recent World Semiconductor Trade Statistics report. But how much longer that model will remain successful is questionable.
There are several key changes afoot. Which came first is debatable, but each is significant in its own right.
First, as manufacturing processes dive into the single-digit nanometer range, power has overtaken performance as the key metric to watch. At 16/14nm, the big challenge is dynamic power. At 10nm, it will be a combination of dynamic power and leakage current. And at every node thereafter, power will be the gating factor as density increases, wires become so thin that they impede the flow of electrons, and costs continue to skyrocket in order to overcome these issues plus new ones.
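The tradeoff described above can be made concrete with the standard first-order CMOS power model (this is textbook background, not from the article, and the component values below are purely illustrative): dynamic power scales with switching activity, capacitance, frequency, and the square of supply voltage, while leakage persists even when the logic is idle.

```python
def dynamic_power(activity, capacitance_f, vdd, freq_hz):
    """First-order dynamic (switching) power: P = a * C * V^2 * f."""
    return activity * capacitance_f * vdd ** 2 * freq_hz

def leakage_power(vdd, i_leak_a):
    """Static power lost to leakage current: P = V * I_leak."""
    return vdd * i_leak_a

# Illustrative numbers only: 20% activity factor, 1 nF aggregate
# switched capacitance, 0.8 V supply, 2 GHz clock, 50 mA leakage.
p_dyn = dynamic_power(0.2, 1e-9, 0.8, 2e9)   # 0.256 W
p_leak = leakage_power(0.8, 0.05)            # 0.040 W
print(f"dynamic: {p_dyn:.3f} W, leakage: {p_leak:.3f} W")
```

Because voltage enters the dynamic term squared, even a modest supply reduction cuts switching power disproportionately, which is why voltage scaling has historically been the main lever, and why its slowdown at advanced nodes makes power the gating factor the article describes.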
That has led to a number of different assembly scenarios, some still in testing phases, to overcome power issues and associated physical effects such as electrostatic discharge (ESD), electromigration (EM), and electromagnetic interference (EMI), as well as quantum effects that are expected to begin creeping in at 7nm and 5nm. While this shift is proceeding cautiously, it is also somewhat liberating for system design teams. Rather than worrying about which chip works best, they can assemble a variety of features from a menu and integrate them. In this scenario, it's no longer one chip that defines a system. It's the system that defines what goes into the semiconductor package, and those packages can be changed for every imaginable system with far less effort than developing new chips.
Second, the PC era and the smartphone era may look very different in terms of end devices, but both were built on the same idea: one powerful compute processor plus a graphics card connected to multiple memories with multiple standard I/Os. These were highly standardized parts that could be sold by the billions and customized, at least to some extent, by a complex software stack.
As smartphone sales begin flattening, there is no obvious single device that will use a standard processor/co-processor configuration across billions of units. The smart watch was supposed to be the next step, but it hasn't worked out that way because there is still no killer app. Arguably the home health market will fill this role when accurate heart/blood sugar/enzyme sensors are available, but that also will require massive improvements in battery life. Still, it's highly unlikely that any single vendor will amass the kind of clout that smartphone vendors have, and probably will continue to wield for years to come as the smartphone becomes a command center for many other services.
Third, end markets are becoming more fragmented and specialized, driven largely by a quest to connect everything. The march toward the IoT/IoE is becoming real. Engineers are scoffing at it less often and focusing more on the problems that need to be solved, such as security, reliability, power, and whether processing should be done locally or remotely. But these also are very different kinds of system designs. What works in a home gateway is far different from what works in a car, and the chips or packages of chips will look far different from one device to the next, and from one market to the next.
In these markets (and Maker Faires offer some good insights into what's ahead), software is no longer the glue that takes advantage of a standard processor. Instead, software is part of the system design: leaner, more efficient, and written for a vertical market slice rather than to leverage a standard processor that must be kept dark most of the time because it is complete overkill for the design.
Taken together, these represent a huge shift for the semiconductor industry. Tools and processes are still useful, probably even more useful than in the past because they are being combined to do many more things in customized design flows, but the methodologies for using them are changing significantly. It's easy to be blinded by the multibillion-dollar acquisitions sweeping the semiconductor industry, but it also helps to look at the other end of the market. Small and midsized companies, along with some very large ones, are busy rethinking how to serve industries that in the past needed very specific solutions but had to fit polygons into square holes.