This may be the best time ever to be working in the semiconductor industry.
One of Steve Jobs’ great revelations occurred while watching his young daughter use a computer with a graphical user interface. He observed that adults ask how to use a computer, whereas children ask what you can do with it.
The semiconductor industry is experiencing one of those seminal moments with the Internet of Things/Internet of Everything. While work continues to ensure that electrons will flow freely at 7nm and 5nm, and even into new architectures with 2.5D and 3D stacked die and fan-outs, there’s a whole new wave of development underway to figure out how technology can be used. That wave is fostering an explosion of creativity about the best way to get there, even if it means changing the architecture of the hardware, the software, and some of the ways things have been done for the past 50 years.
The metrics of smaller, faster and cheaper are still valid. But in many cases they’re less important than longer battery life and better user experience. If there is enough perceived value, whether it’s improved security, seamless connectivity, or more functions that are meaningful and useful, then consumers increasingly show a willingness to pay more for a device. Smartphones have replaced feature phones in established economic regions, and they’re even beginning to gain share in developing economies. And segments as diverse as automobiles and home appliances are beginning to add connectivity into their products wherever they believe they can add perceived value—meaning wherever they can get paid more for the same ingredients.
But these really aren’t the same products anymore, even though the parts list is only slightly different. New cars with warning sensors are much safer to drive than older cars. Industrial sensors that can pinpoint where a bridge is beginning to corrode, or where a water main has broken and is leaking into the ground, can save lives and money. All of this has been considered possible for decades. Back in the early years of the millennium, former National Semiconductor CEO Brian Halla gave presentations about how sensors in a roadway or a bridge could be connected via mesh networks to send alerts in the event of a structural problem. And back in the late 1990s, former Sun CEO Scott McNealy gave speeches about maximizing compute resources by sharing processing across multiple devices.
What’s changed since then is an understanding of how best to achieve these goals, coupled with more powerful processors, more efficient and cheaper storage, and much more efficient processing, networking (on- and off-chip) and I/O architectures. Power is now a given in every design. As more things are tethered to batteries, or tied to giant data centers where the price of powering and cooling servers is a clearly delineated budget line item, power is recognized as a persistent design challenge that must be addressed continually. That’s a big step forward. Likewise, connectivity is solved to the point where it’s easier to get online in more places and stay online without interruption. And security is at least being worked on, even if it’s far from solved.
But what gets particularly interesting at this point is that enough foundational pieces are in place—performance, storage, throughput, power, connectivity—that the focus is shifting from how to solve individual issues to what can be done with all of the technology that’s now in place. That’s evident in the hundreds of wearables hitting the market, the explosion in sensors—including new factories being built just to manufacture these sensors—and the emphasis on even lower power and energy harvesting.
The basic building blocks are at least good enough for big changes to occur across every corner of our connected world. Now the question is how all the pieces come together, what new possibilities can open up, and who’s going to run with them. And those possibilities will drive the biggest explosion in semiconductor industry creativity that has ever existed—for good and for bad.