The Big Blur

There are multiple options, opportunities and threats on the horizon, creating some difficult choices.

Chip companies, research houses, foundries—and more recently large systems companies—have been developing alternative technologies to continue scaling power and performance. It’s still not obvious which of those approaches will survive, let alone win, or what they will do to the economics of developing chips.

For more than five decades, the biggest concern was scaling devices in order to save money. While money remains a primary driver, the way to recoup profits has far less to do with area than with an array of different options, compute architectures and economic formulas. A $300 million NRE (non-recurring engineering cost) for an SoC or multi-die package may seem exorbitant compared to today’s development costs, but the economics of chip development will look completely different in the future.
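
To see why the math shifts, it helps to spell out the amortization arithmetic. The sketch below is a deliberately simplified model; apart from the $300 million NRE figure mentioned above, the volumes and the recurring cost per unit are purely illustrative assumptions, there only to show how lifetime volume, not die area, comes to dominate per-unit economics.

```python
# Back-of-the-envelope NRE amortization: per-unit cost is the NRE spread
# over lifetime volume plus the recurring (wafer, package, test) cost.
# All figures are illustrative assumptions, not data from the article.

def per_unit_cost(nre_dollars: float, lifetime_units: float, recurring_cost: float) -> float:
    """Amortized cost per unit sold."""
    return nre_dollars / lifetime_units + recurring_cost

# A $300M NRE looks very different at niche volumes than at phone-class volumes.
for units in (1e6, 10e6, 100e6):
    cost = per_unit_cost(nre_dollars=300e6, lifetime_units=units, recurring_cost=40.0)
    print(f"{units / 1e6:>5.0f}M units -> ${cost:,.2f} per unit")
```

At 1 million units the NRE dominates the per-unit cost; at 100 million units it all but disappears into the recurring cost. That is why which markets a design can address matters at least as much as how small the die is.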

For one thing, there is no indication that a single winner-takes-all strategy will work in new markets. There are too many options, too many different markets, and too many nuances within specific slices of markets. So rather than a single processor design for all PCs, or a single memory type, chip architects are working on designs that can target computer vision or deep learning or various types of sensors. They can develop chips for automotive, 5G, AI, IIoT, IoT, augmented and virtual reality, as well as mobile devices and the cloud.

Within those market segments there are a dizzying number of options available at different performance and price points. There are new materials from the III-V and II-VI compound semiconductor groups, different ways to put functionality together in different packages, and different schemes for accessing memories. There are even different architectures within chips, between chips, and across multiple systems, and there are new microarchitectures being developed to fine-tune those architectures. Which approaches or technologies ultimately win is unknown, but it certainly won’t be because the chip industry has run out of options. If anything, there are too many of them.

That raises the question of how costs get amortized across a variety of markets that are still evolving quickly. Two years ago, the idea of replacing wafers with panels seemed like a dead-on-arrival approach to reducing costs—basically a revival of the 450mm push with less waste. But as AI chips begin hitting the market, this no longer seems quite so far-fetched, because those chips are so large. That could create some interesting panel-level packaging options, as well.
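
The geometry behind that shift is worth making concrete. The sketch below compares how many large packages fit on a round 300mm wafer versus a rectangular panel, using a common gross-die approximation for the wafer and simple grid packing for the panel. The 45mm package footprint and the 510 x 515 mm panel size are assumptions for illustration, not figures from the article.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_w_mm: float, die_h_mm: float) -> int:
    """Common approximation: wafer area over die area, minus an edge-loss term."""
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

def gross_sites_per_panel(panel_w_mm: float, panel_h_mm: float, die_w_mm: float, die_h_mm: float) -> int:
    """Rectangular panel: simple grid packing, ignoring street widths and keep-out zones."""
    return int(panel_w_mm // die_w_mm) * int(panel_h_mm // die_h_mm)

die_w, die_h = 45.0, 45.0  # hypothetical AI-class multi-die package footprint
print("300mm wafer  :", gross_dies_per_wafer(300, die_w, die_h), "sites")
print("510x515 panel:", gross_sites_per_panel(510, 515, die_w, die_h), "sites")
```

The bigger the package, the more a round wafer loses to edge effects and the better a rectangle packs, which is the core of the panel-level argument for large AI devices.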

Also not clear is how to customize chips for a variety of new markets and still reap some economies of scale. For systems companies, this may be as simple as prioritizing overall system cost, with semiconductors doing the heavy lifting. So rather than competing on price for a socket, the cost of the chip can be amortized across the system. For fabless companies, that approach doesn’t work. The more likely path is some sort of prefabricated platform built from off-the-shelf components such as chiplets, so that companies can confine their efforts to their core expertise. Even Intel is moving to this model.

And just to add another element of confusion, macroeconomic and geopolitical factors such as trade wars, massive corporate debt, hints of inflation and rising electronics inventories could each dent the chip market on their own. Taken together, they could wreak havoc. There are a lot of moving parts here, and it’s not clear who’s going to make money, how, or even for how long. There are plenty of technology options and new market opportunities, but what comes next is anyone’s guess.


