Foundry Wars, Take Two

What multiple process nodes and market uncertainties mean to the design world.


Samsung, GlobalFoundries, TSMC and Intel all have declared their intention to fill in nearly every node possible with multiple processes, different packaging options, and new materials. In fact, the only number that hasn’t been taken so far is 9nm.

It's not as if one foundry's 10nm is the same as another's. Each company defines its nodes differently, and these days comparing node numbers is almost meaningless. Moreover, markets generally don't care. Transistor density doesn't carry equal weight in many of the new application areas the foundries are trying to address, and in up-and-coming markets no one is sure what the critical factors will be, which is why there are so many possible configurations floating around.

But this mad race to fill in every whole-number node, and after 2nm every half-number node starting with 1.5nm, is becoming a big problem for EDA and IP companies. Every process is different, and each foundry's version differs from the next foundry's, even if they're using exactly the same methodology to measure node numbers.

Because developing tools and IP for advanced nodes starts with early versions of a process, this is a huge commitment for tool and IP companies. It takes time, resources, and a lot of patience. That explains why each process has its own ecosystem, and why the list of IP blocks available for each process and process node will likely diverge in the future.

Until now, EDA vendors have supported everything. It was assumed that if foundries rolled out their next process technology, there would be a commitment to those nodes and clear market opportunities with predictable return on investment. Those kinds of guarantees are no longer available because no one is quite sure what will work best for new markets or when those markets will materialize. And if they materialize later or earlier than anticipated, that might drive business for a completely different process.

Automotive, medical, industrial, IoT, augmented reality, cloud/server and other markets are the big new opportunities that many companies have identified. But it isn't obvious at this point which chips will be needed, at which nodes, or in which configurations or packages. And that's a problem for EDA vendors, which already are stretching their resources. Qualifying tools is expensive, and R&D budgets as a percentage of revenue have been rising steadily for all EDA vendors. Adding more nodes, and more process versions at those nodes, will only stretch them further, and at some point, sooner rather than later, they will be forced to choose where they will get the most return on their investment.

Smartphones will continue to drive sales at the leading edge, but only a couple of vendors are developing chips at those nodes, and only a few FPGA vendors are working with leading-edge processes. Memory has stalled out at the 2x-nm nodes. Power semiconductors and sensors don't need the latest processes. And new computing approaches, such as quantum computing, don't benefit from smaller feature sizes.


