The Human Bottleneck

It’s not technology that will hold back the next generations of chips.


The history of semiconductor technology can be neatly summed up as a race to eliminate the next bottleneck. This is often done one process node at a time across an increasingly complex ecosystem. And it usually involves a high level of frustration, because the biggest problems stem from areas over which engineering teams generally have no control.

Concerns over the years have ranged from ineffective or aging EDA tools to conflicting or inadequate standards. They have included materials limitations that cause everything from excessive leakage to inadequate yields, and persistent delays in lithography or equipment that is running out of gas. There have been plenty of cases of IP incompatibility and inadequate characterization. And there has been no shortage of finger-pointing back and forth between hardware and software teams about which one is holding back the other.

There will always be some bottlenecks to complain about. Different parts of the supply chain progress at different rates, and the collective learning about how to deal with changes takes time. The more complex the technology, the longer the industry’s learning curve and the longer it takes to optimize and squeeze out costs.

But despite all of these hurdles, the path is more wide open now than it has been for years. Just as doomsayers learned that the process wall could be circumvented at 1 micron, there appears to be nothing standing in the way of 7nm and maybe even 5nm. It may cost more to get there (perhaps enormously more), but it’s possible nonetheless. It’s also possible to build 2.5D chips and fan-outs, which sidestep concerns about process technology and can even improve yield, because individual dies can be made smaller. TSVs and interposers are available to improve throughput. And there are enough materials in the lab to develop chips for almost any application. Moreover, there is enough progress in EDA tools to develop hardware and software quickly enough for even the most complex chips, which is why emulation sales have been so robust.

The PPA equation will always be a challenge. Trading off power versus performance versus cost is a perpetual guessing game, and chipmakers have been making these kinds of bets for years. Some companies are more successful at it than others. And most companies are more successful at some times than at others.

But this really isn’t about the technology anymore. No one has to guess when the next technological breakthroughs will be ready because there are enough options available for almost any application. There are more memory types and new ones on the way, more materials in advanced research, and more litho options on the table. The challenge is working through all of the different possibilities to weigh the complex tradeoffs, and being willing to do something differently.

In many cases, this means reversing decades of focus on just shrinking features and adding more memory. Competition is no longer confined to several countries, and competitiveness is no longer defined by who can utilize the same components more effectively or throw more bodies at a problem for less money. It’s a race to a deeper understanding of what’s available, and of what new methods and flows need to be implemented inside engineering organizations to make all of this work. And that will likely present some of the most difficult bottlenecks to work through in the history of semiconductors.
