AI: The Next Big Thing

And it’s going to push the limit on semiconductor design, manufacturing and packaging.


The next big thing isn’t actually a thing. It’s a set of finely tuned statistical models. But developing, optimizing and utilizing those models, which collectively fit under the umbrella of artificial intelligence, will require some of the most advanced semiconductors ever developed.
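To make that concrete, here is a minimal sketch (in Python, with hypothetical sizes) of what one of those “finely tuned statistical models” boils down to: the model is nothing more than arrays of numbers, tuned statistically during training, and running it is almost entirely matrix arithmetic—exactly the workload these new chips are being built to accelerate.

```python
import numpy as np

# Minimal sketch (sizes are hypothetical): a two-layer neural network is
# just chained matrix multiplies with a nonlinearity in between. The
# "model" is nothing but the weight matrices W1 and W2, statistically
# tuned during training; inference is pure linear algebra.
rng = np.random.default_rng(0)
x  = rng.standard_normal(1024)          # input feature vector
W1 = rng.standard_normal((4096, 1024))  # first-layer weights
W2 = rng.standard_normal((10, 4096))    # second-layer weights

h = np.maximum(W1 @ x, 0.0)  # matrix multiply + ReLU nonlinearity
y = W2 @ h                   # class scores: another matrix multiply

print(y.argmax())            # the "answer" is just the largest score
```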

The demand for artificial intelligence is almost ubiquitous. As with all “next big things,” it is a horizontal technology that plays across many vertical market segments. Specialized chips are being developed for the cloud, for mid-range devices, and for edge devices to enable AI and its building blocks—machine learning, deep learning and neural networks. Many of these components are being designed at the most advanced nodes, using the most advanced manufacturing processes, which collectively will propel Moore’s Law, “More Than Moore,” and just about everything connected to semiconductors well into the future.

It’s easy to get confused about AI’s impact on semiconductors. On one level, nothing here is new. The transistors used in AI chips are the same ones used for servers or advanced networking. They are being manufactured using processes that have been developed for any leading-edge semiconductors. As with all chips developed at the most advanced nodes, lithography will still likely come down to EUV, DSA and some form of 193i multi-patterning. They will likely employ some type of advanced packaging and a variety of new and existing memory types. And they will require low power and high performance, which have been design targets for years, particularly for smartphones and tablets, where a single battery charge needs to last through a day of intensive computing, and in data centers, where the cost of powering and cooling servers is so substantial that it’s a budgetary line item.

But on another level, everything is different. Multiple approaches are being developed to improve the accuracy and speed of AI implementations. The goal is lower latency, higher bandwidth and faster performance. But none of that will happen without significant architectural changes, and the big problem in the AI world is that the algorithms are changing incessantly. So chipmakers—which in this case include companies such as Amazon, Microsoft, Google, Baidu, Alibaba and IBM, among many others—are reluctant to commit to custom-built ASICs, because those chips may be worthless by the time the design makes it into volume production.
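A rough back-of-envelope calculation shows why bandwidth dominates these discussions. The numbers below are illustrative, not from the article: a dense neural-network layer does a fixed amount of arithmetic per weight, but it must stream every weight in from memory, so at the small batch sizes typical of low-latency inference there is very little computation per byte moved.

```python
# Back-of-envelope sketch (all sizes hypothetical): why AI inference is
# often bandwidth-bound. A dense layer computing Y = W @ X performs
# 2*M*K FLOPs per input vector, but must also fetch M*K weight bytes
# from memory. At small batch sizes there is little arithmetic per byte
# fetched, so memory bandwidth, not compute, sets the speed limit.
M, K = 4096, 4096          # layer dimensions
bytes_per_weight = 2       # fp16 weights

for batch in (1, 8, 64):
    flops = 2 * M * K * batch
    weight_bytes = M * K * bytes_per_weight
    intensity = flops / weight_bytes   # FLOPs per byte of weights moved
    print(f"batch={batch:3d}  arithmetic intensity = {intensity:.1f} FLOPs/byte")
```

At one FLOP per byte, even a modest accelerator can outrun any memory system, which is why the architectural debate keeps circling back to memory.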

At a conference this week titled “ASICs Unlock Deep Learning Innovation,” sponsored by Samsung, Amkor, eSilicon, ArterisIP and Northwest Logic, the consensus was that a discontinuity is already at hand. The path forward likely will require a mix of technologies: new design strategies that trade off different types of memory, processing power and extremely high bandwidth; packaging approaches that emphasize raw speed; and potentially pre-developed platforms that are hardened in silicon to slash development time.

Whether that means all ASICs, or a mix of 7/5/3nm ASICs coupled with eFPGAs, possibly alongside DSPs and GPUs, isn’t entirely clear yet. ASICs are by far the fastest, but a purely ASIC approach can’t keep up with algorithm changes. All of this tilts the balance toward some type of advanced packaging, whether that is 2.5D, 3D-ICs, or fan-outs on substrate, because moving electrons through TSVs—whether in an interposer or through the middle of stacked die—is much faster than driving them across long, thin copper wires. According to Samsung, yield on TSVs is somewhere in the 99% range these days.
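Some illustrative arithmetic (using publicly documented HBM2 and DDR4 figures, not claims from the conference) shows what those TSVs buy: an HBM2 stack connected through a silicon interposer exposes a 1,024-bit bus, versus 64 bits for a conventional DDR4 channel, and peak bandwidth scales directly with that width.

```python
# Illustrative numbers only: why TSV-based 2.5D packaging matters for
# bandwidth. An HBM2 stack exposes a 1024-bit bus through the silicon
# interposer at roughly 2 Gb/s per pin; a DDR4-3200 channel is 64 bits
# wide at 3.2 Gb/s per pin. Per-pin rates are comparable; the win is
# raw width, which only short TSV/interposer connections make practical.
def bandwidth_gbs(bus_bits: int, gbits_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_bits * gbits_per_pin / 8.0

hbm2_stack = bandwidth_gbs(1024, 2.0)   # ~256 GB/s per stack
ddr4_chan  = bandwidth_gbs(64, 3.2)     # ~25.6 GB/s per channel

print(f"HBM2 stack: {hbm2_stack:.0f} GB/s, DDR4 channel: {ddr4_chan:.0f} GB/s")
```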

Still, some fundamental changes will be required to make all of this work. Chips developed for this market will need to be mostly assembled from pre-hardened logic and IP. That won’t affect the need for continuous improvements in logic, IP and memory, but the pace at which all of that happens will have to be orchestrated. It won’t necessarily be concurrent. And while demand for manufacturing at the latest nodes will continue to grow, along with the most advanced packaging approaches, the mix of what’s being manufactured and when will undergo significant changes.

The “next big thing” will affect much more than just the cleverness of electronics. It will forever alter the entire supply chain, methodologies and flows that have allowed it to work in the first place. And for a technology built out of existing technologies, these changes will be surprisingly far-reaching.

2 comments

realjj says:

Can it even be the next big thing if it’s using the same manufacturing processes, same transistor or even MOS?
So is it really reaching even a tiny fraction of its full potential without a non-volatile switch?
It will be everywhere and “create value,” so it’s a big thing for the marketing folks, but maybe there can’t be much of a social disruption without a lot more than the same everything.

Ed Sperling says:

Good point, but using existing technology differently can have a big impact. In the data center, this is certainly true for the cloud, and it was true for virtualization. It’s not necessarily the technology. It’s how it’s applied that changes, and then the technology follows and gets optimized for those changes. AI/ML/DL will drive a lot of technology shifts, even if the hardware or manufacturing processes aren’t changing that much. And that will offset the flattening of the mobile market.
