2 Big Shifts, Lots Of Questions

Why AI, and systems companies designing their own chips, could alter semiconductor manufacturing.


The proliferation of AI into nearly everything, and ongoing efforts by big systems companies to develop their own chips, could have a profound effect on semiconductor manufacturing for years to come.

AI is a multi-faceted topic, but what makes this particularly interesting from a semiconductor standpoint is the architecture of AI-specific chips. So far, most of these chips have been developed for data centers, both for training purposes and for inferencing. In the future, some of that processing will have to be done at the edge, whether that’s in or near a sensor, or sensor array, or whether it’s at the edge of a network in what previously was called a mid-range or fog server.

The fastest way to process lots of data is to scatter accelerators around a chip and couple each of them with some sort of localized memory. That way data can be processed in place, and the whole device can operate as a giant parallel computing engine. So far, one of the key concerns has been how to move data through those chips as quickly as possible, and that bodes well for an extremely dense array. As a result, there is a general assumption that skyrocketing amounts of data will drive demand for more tightly packed transistors at the most advanced nodes.
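As a rough illustration of that compute-near-memory pattern, consider the hypothetical Python sketch below. Each worker stands in for a processing element holding its own slice of the data in local memory, and all of the tiles are processed in parallel. The function names and tile sizes are invented for illustration, not drawn from any real AI chip.

    import concurrent.futures

    def process_tile(tile):
        # Stand-in for an accelerator kernel: each processing element
        # works only on the data held in its own local memory.
        local_memory = list(tile)
        return sum(x * x for x in local_memory)

    def run_array(data, tile_size):
        # Partition the data so each tile maps to one processing element.
        tiles = [data[i:i + tile_size] for i in range(0, len(data), tile_size)]
        # All tiles are processed concurrently, mimicking a parallel
        # array of accelerators scattered across the die.
        with concurrent.futures.ProcessPoolExecutor() as pool:
            return sum(pool.map(process_tile, tiles))

    if __name__ == "__main__":
        print(run_array(list(range(1_000_000)), tile_size=10_000))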

The problem, at least so far, is keeping the data pipeline full. Having this many active processing elements on a chip is like stocking shelves in a warehouse. The longer inventory sits idle, the less profitable the operation, because that shelf space costs money. It's the same in a large chip. If processing elements sit idle, the design is less cost-effective than one with fewer processing elements built at an older node, or with individual dies connected in an array. Keeping all of those processing elements busy has been one of the big unsolved problems.
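The back-of-the-envelope Python sketch below makes that utilization argument concrete. The PE counts, per-PE throughput and utilization figures are invented for illustration; the point is simply that effective throughput scales with utilization, not raw processing-element count.

    def effective_throughput(num_pes, ops_per_pe, utilization):
        # Idle processing elements contribute nothing to throughput,
        # just as idle shelf space in a warehouse still costs money.
        return num_pes * ops_per_pe * utilization

    # A dense advanced-node array that starves for data...
    dense = effective_throughput(num_pes=4096, ops_per_pe=1e9, utilization=0.25)

    # ...delivers no more than a smaller, older-node design kept fully busy.
    lean = effective_throughput(num_pes=1024, ops_per_pe=1e9, utilization=1.0)

    print(dense == lean)  # True: 1.024e12 ops/s either way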

Whether this changes is unknown at this point. AI is still in its infancy and data volumes are still growing. But AI has been identified at multiple conferences as one of the next big drivers of device scaling, and much of that depends on how much data needs to be processed and what the most efficient approach is for processing it.

The second factor involves the future direction of systems vendors that are developing their own chips. How successful they will be at this is pure conjecture at this point, even within those companies. But there are definite advantages if they can make this work. First, they can more tightly couple hardware and software design without having to worry about backward compatibility. And second, they can design out known security vulnerabilities related to speculative execution.

Developing chips in-house as part of a bigger system isn’t a new idea, of course. The pendulum swings back and forth on this, but sometimes the arc can last for a decade or more. What’s worth watching here is whether these systems companies look at developing chips at the most advanced nodes, or whether they view packaging and different architectural approaches to be more useful and flexible.

Apple and Samsung today account for the highest percentage of advanced-node manufacturing. Samsung uses its own foundry. Apple relies on TSMC. Both are rolling out 7nm chips, and they likely will push to 3nm sometime over the next couple of years. But how much capacity is required depends on which way the systems vendors and AI chips go, and that ripples out across the entire manufacturing ecosystem, including equipment, materials and EDA tools.

There are other systems vendors working on their own chips, as well, including Amazon, Google and Alibaba. How and where these chips get manufactured, and at what node, are still unknown. But put all of these pieces together and there are some very big questions still hanging over the semiconductor industry, with the entire supply chain in the balance.


