Some Chipmakers Sidestep Scaling, Others Hedge

No shortage of alternatives as materials, packaging and architectural options grow, and plenty of startups are jumping in.

The rising cost of developing chips at 7nm, coupled with the reduced benefits of scaling, has pried open the floodgates for a variety of options involving new materials, architectures and packaging that were either ignored or not fully developed in the past.

Some of these approaches are closely tied to new markets, such as assisted and autonomous vehicles, robotics and 5G. Others involve new applications of technologies such as AI across a broad swath of different markets, or cloud-type hyperscale architectures in data centers. What’s changed over the past 12 months or so is that there are now more choices, and increasingly more nuanced choices. It’s no longer just about picking a processor or memory based upon a particular foundry process.

This shift has ramifications for the entire manufacturing sector, as evidenced by GlobalFoundries’ decision to stay put at 14nm and scrap plans for 7nm. UMC likewise has stopped at 14nm, at least for now. Both continue to offer multiple finFET-based processes, but they are simultaneously expanding efforts in other directions. GlobalFoundries, for example, is betting big on FD-SOI, while UMC is pushing heavily into automotive.

Benefits are still to be had in scaling to 7nm and beyond, especially on the low power front. One of the big concerns has been dynamic power density, but new technologies such as gate-all-around FETs likely will ease that problem because they can operate at lower voltages with less static leakage. An estimated 20% improvement in performance can also be gained from moving to the next node, and while that’s lower than the 30% to 50% gains of previous nodes, it’s still significant.

“Customer momentum persists for the top 10 or 20 fabless companies,” said Bob Stear, senior director of foundry marketing at Samsung Electronics. “There is still a push for higher transistor density and all of those applications are power-sensitive. With gate-all-around nanosheets at 3nm, you can lower Vnom. That allows you to run at lower voltage, which is important in the data center and for AI applications because it provides a power saving.”
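The power argument behind lowering Vnom follows from the standard CMOS dynamic-power relation, P ≈ αCV²f, where switching power scales with the square of the supply voltage. The sketch below uses made-up capacitance, activity and voltage values purely to show the scale of the effect; none of the numbers come from Samsung or any specific node.

```python
# Rough illustration of why a lower nominal voltage (Vnom) saves power.
# Dynamic switching power scales as P = alpha * C * V^2 * f.
# All values here are placeholders, not figures for any real process.

def dynamic_power(alpha, capacitance_f, v_supply, freq_hz):
    """Classic CMOS switching-power estimate: alpha * C * V^2 * f."""
    return alpha * capacitance_f * v_supply**2 * freq_hz

baseline = dynamic_power(alpha=0.2, capacitance_f=1e-9, v_supply=0.75, freq_hz=2e9)
lower_v  = dynamic_power(alpha=0.2, capacitance_f=1e-9, v_supply=0.65, freq_hz=2e9)

print(f"Power at 0.75 V: {baseline:.3f} W")
print(f"Power at 0.65 V: {lower_v:.3f} W")
print(f"Savings from the 0.10 V drop: {1 - lower_v / baseline:.0%}")  # roughly 25%
```

Because the voltage term is squared, even a 100mV reduction translates into a double-digit percentage drop in switching power, which is why lower Vnom matters so much for power-sensitive data center and AI designs.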

But cost remains a major factor. Behind all of these developments, the economics of chip development is shifting. Multiple reports and analyses point to 3nm design costs topping $1 billion, and while the math is still speculative, there is no doubt that the cost per transistor or per watt is going up at each new node. That makes it hard for fabless chip companies to compete, but it’s less of an issue for systems companies such as Apple, Google and Amazon, all of which are now developing their own chips. Rather than amortizing cost across billions of units, as chip companies do, these companies can bury the development costs in the price of a system.
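The amortization point can be made with simple arithmetic. Taking the roughly $1 billion design-cost estimate above and pairing it with hypothetical unit volumes, a quick sketch of the per-unit NRE burden:

```python
# Back-of-the-envelope amortization of one-time design (NRE) cost per unit.
# The $1B figure echoes the design-cost estimates cited above; the volumes
# are hypothetical and chosen only to show how the per-unit burden shifts.

def nre_per_unit(design_cost_usd, units_shipped):
    return design_cost_usd / units_shipped

design_cost = 1_000_000_000  # ~$1B advanced-node design cost, per the estimates above

# A fabless chip vendor spreading the cost over chip sales alone:
print(f"$ per chip at 100M units:   {nre_per_unit(design_cost, 100_000_000):.2f}")

# A systems company burying the same cost in, say, a $1,000 end product:
per_unit = nre_per_unit(design_cost, 200_000_000)
print(f"$ per system at 200M units: {per_unit:.2f} "
      f"({per_unit / 1000:.2%} of a $1,000 system price)")
```

The asymmetry is the point: a few dollars of buried NRE is negligible inside a four-figure system price, but it can swamp the margin on a standalone chip.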

Chip companies clearly can’t do that, which is why they are pushing in a number of completely different directions and exploring a variety of new options.

But as they weigh those options, uncertainty grows about how much volume foundries and equipment makers can expect to see at the most advanced nodes. That, in turn, affects how quickly capacity needs to be added and how much is invested in new processes, equipment and materials.

“The challenge is how to minimize risk, or at the most advanced nodes, how to limit risk,” said John Chen, marketing director at UMC. “This is why the capacity at these nodes is not too large. You want to add capacity very slowly at those nodes, and that has an impact on the customer.”

Capacity limitations force greater efficiency on the supply chain. As a result, specs get tighter, design rule decks grow larger and process development kits include a substantial amount of margin, Chen said.

The materials race
This carries over into the materials world, as well. A slew of new materials is being introduced for a variety of reasons, some involving power, some cost, others reliability.

Intel, for example, is using cobalt for some of the metal layers to reduce electromigration, which can impact chip reliability, while using copper for other layers. In addition, III-V and II-VI compounds are being used for a number of specialized applications, each offering a different bandgap.

Rather than starting from scratch, though, most of these materials have been at least partially vetted in recent years as researchers prepared for both higher density manufacturing and a simultaneous slowdown in Moore’s Law.

“We’ve done a better job in the last 5 to 10 years of testing out lots of materials and options than we were doing previously,” said David Fried, CTO at Coventor, a Lam Research Company. “When somebody publishes some result with a new material or system, it’s not as surprising because we saw some good data from ‘XYZ Corporation’ years ago when they tried it ‘this way.’ So there’s a good sense they can get that done. It’s still very challenging and innovative, and it takes a lot of work. But it’s not falling from the sky.”

This is particularly important in semiconductor manufacturing, both for yield and end-market reliability reasons. As chips find their way into more industrial, automotive and other safety-critical applications, reliability is a growing concern.

“Most of what we are working on is some type of an evolutionary progression, at least from an element of what is going on today or going on a generation ago,” said Fried. “From a near-term perspective, has some new material landed? No. That doesn’t happen because we’ve tried everything on the periodic table and we have a pretty good understanding of these materials.”

FD-SOI is a prime example of this, and it offers an alternative to the rising costs and limited capacity at the most advanced nodes. FD-SOI has been in use for the better part of a decade, providing better insulation as well as body biasing using existing silicon manufacturing technology. But its use has been limited. As semiconductor economics shift, and as new technologies such as 5G and AI roll out, demand is increasing rapidly.

“There is very strong demand on 22nm,” said Jamie Schaeffer, senior director of product line management at GlobalFoundries. “We have 55 client wins and 11 product tapeouts so far. There are expected to be 17 by the end of the year, and 50 by the end of 2019. There also is strong pull on 12nm for millimeter-wave 5G and augmented and virtual reality. We expect to roll that out the second half of 2020 or the first half of 2021. Volume production will begin in late 2020 or early 2021.”

Edge devices will be a key driver of FD-SOI. In the past year, there has been widespread recognition that too much data will be generated by sensors to process everything centrally, so more computing will need to be done at or close to those sensors. Chips based on FD-SOI are significantly less expensive to develop than finFET-based devices because they don’t require multi-patterning at 22nm and above, and they can run at significantly lower power with less concern for leakage than other planar implementations.

“A big concern is chip-package interactions, and one of the big advantages of FD-SOI is that it can leverage a lot of the 14LPP technology (at 22nm),” Schaeffer said. “Another key benefit is body biasing. The alternative is voltage scaling, but that accelerates degradation of time-dependent dielectric breakdown (TDDB). So we’re starting to see people using body biasing to compensate for aging over the lifetime of a part. You can monitor how it performs over time using in-circuit monitoring.”
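One way to picture the body-biasing approach Schaeffer describes is as a simple feedback loop: an in-circuit monitor reports remaining timing margin, and the back-bias voltage is nudged to compensate as the part ages. The sketch below is purely illustrative; the monitor readings, bias window and step size are assumptions, not GlobalFoundries parameters.

```python
# Hypothetical control loop for aging compensation via body biasing.
# An on-die monitor (e.g. a ring-oscillator proxy) reports how much timing
# margin remains; forward body bias is increased to speed transistors up
# as the part degrades. All values and limits here are illustrative only.

TARGET_MARGIN_PS = 50.0     # timing slack we want to maintain
BIAS_STEP_V = 0.05          # adjustment granularity
BIAS_MAX_V = 0.30           # keep bias within a safe window

def adjust_body_bias(current_bias_v, measured_margin_ps):
    """Nudge forward body bias up when margin erodes, down when it is ample."""
    if measured_margin_ps < TARGET_MARGIN_PS and current_bias_v < BIAS_MAX_V:
        return min(current_bias_v + BIAS_STEP_V, BIAS_MAX_V)
    if measured_margin_ps > 2 * TARGET_MARGIN_PS and current_bias_v > 0.0:
        return max(current_bias_v - BIAS_STEP_V, 0.0)
    return current_bias_v

# Simulated monitor readings over the life of a part (margin shrinking with age):
bias = 0.0
for margin in [120.0, 90.0, 60.0, 45.0, 40.0, 38.0]:
    bias = adjust_body_bias(bias, margin)
    print(f"margin={margin:5.1f} ps -> body bias={bias:.2f} V")
```

The appeal over straight voltage scaling, as the quote notes, is that the compensation can be applied gradually over the part’s lifetime without accelerating dielectric wearout.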

Samsung likewise is promoting FD-SOI in addition to finFETs. “We started at the point where competition was between FD-SOI and finFETs,” said Stear. “But FD-SOI is gaining a lot of traction in new markets. NXP is a big proponent and there are a lot of products in the pipeline. These are very complementary technologies.”

He noted that both planar and finFET technologies are being qualified for automotive grade 1. (Grade 1 is qualified for -40°C to 125°C.)

The packaging scramble
Alongside all of this, in what amounts to yet another hedge for foundries, is advanced packaging. This approach of combining multiple chips in a single package is supported by all of the major foundries, as well as all of the large OSATs.

Packaging is not new, but it is getting much more attention these days. It also is getting more complex. And it is raising questions about whether packaging ultimately will impact scaling, or whether advanced-node chips will be used in packages alongside other older-node chips. There are multiple reasons to continue scaling.

“Some of it is religion, some of it is packing density,” said Klaus Schuegraf, vice president of new products and solutions at PDF Solutions. “A lot of AI is moving to big chips, which means scaling is important. But the multi-chip module for high-performance computing is absolutely coming back. Phones are made that way.”

Packaging adds its own set of challenges and benefits.

“Partitioning is going to be key,” Schuegraf said. “There isn’t just one technology node. That can reduce variability by construction. But you need to make sure every one of those chips is known good die. If you want to start burning in a multi-chip module, and you’ve got a lot of silicon from a number of sources—maybe not even the same fab—you need to make sure this is all reliable and that it doesn’t fail on burn-in. That’s a huge cost. It’s not just about 1 piece of silicon. You may have to throw out 8 or 10 pieces of expensive silicon that can be limited by the quality of a single IC.”
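Schuegraf’s point about scrapping 8 or 10 pieces of silicon comes down to compound yield: if one bad die kills the module, module yield is the product of the per-die yields. A quick illustration with made-up numbers:

```python
# Illustrative known-good-die (KGD) math for a multi-chip module.
# If the package fails when any single die is bad, module yield is the
# product of the per-die yields. Yields and die counts are hypothetical.

def module_yield(per_die_yield, die_count):
    return per_die_yield ** die_count

for dies in (1, 4, 8, 10):
    y = module_yield(0.98, dies)   # assume 98% confidence per die
    print(f"{dies:2d} dies at 98% each -> module yield {y:.1%}")

# At 10 dies, nearly one in five modules would be scrapped, taking all of
# its good silicon with it, which is why KGD screening and pre-assembly
# test matter so much.
```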

There is a huge amount of work underway across the semiconductor supply chain to improve the tooling for advanced packaging, which would add some consistency to the design flow, as well as experimentation with options ranging from fan-out to 2.5D, 3D-IC, system-in-package and even chiplets. Most experts believe it will take several years before this becomes mainstream enough to have a big impact on cost and time-to-market, but advanced packaging already is in widespread use in consumer devices, networking chips and a variety of customized applications.


Fig. 1: Packaging options. Source: JCET

“In the last 36 months we’ve heard from many executives who say they are tired of funding Moore’s Law evolution,” said Jack Harding, president and CEO of eSilicon. “They’ve told us they’re going to put more pressure on their designers to improve those designs through architecture. That’s why the chiplet has a future. People are saying they’re not going to spend $30 million on a mask set and do two re-spins for another $30 million. The next few nodes will be for a handful of chips that can drive the volume.”

What remains to be seen is whether logic and memory developed at 7nm and below will be incorporated into those packages.

The architecture card
Alongside all of these developments is a push toward more heterogeneous architectures. Rather than relying on a single processor, the emphasis is now on accelerators for specific types of data. The rapid proliferation of AI has made this an all-out race, with an estimated 30 startups developing accelerators or fully integrated chips to speed up performance by 100X or more.

That makes performance gains from scaling look minuscule, but these chips are so specialized that they need to be mixed with other chips. Moreover, AI/ML/DL isn’t necessarily good for everything, so while chip architectures can be honed for specific applications, that doesn’t necessarily mean everything will run at 100X performance.
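The caveat about 100X acceleration is essentially Amdahl’s law: overall speedup is capped by the fraction of the workload the accelerator never touches. A generic illustration, with arbitrary workload splits:

```python
# Amdahl's-law view of why a 100X accelerator doesn't make everything 100X faster.
# The workload fractions here are arbitrary examples, not measurements.

def overall_speedup(accelerated_fraction, accelerator_speedup):
    serial_fraction = 1.0 - accelerated_fraction
    return 1.0 / (serial_fraction + accelerated_fraction / accelerator_speedup)

for frac in (0.5, 0.9, 0.99):
    print(f"{frac:.0%} of the work accelerated 100X -> "
          f"{overall_speedup(frac, 100):.1f}X overall")
# 50% -> ~2X, 90% -> ~9.2X, 99% -> ~50X: the unaccelerated portion dominates.
```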

Some AI chips are enormous, and they typically have arrays of accelerators coupled with small memories in very close proximity to those accelerators. The challenge from the design side is how to keep all of these processing elements busy. The challenge from the manufacturing side is how to produce these complex chips with so many heterogeneous elements, where the starting point is the data rather than the manufacturing rules deck or the standard design flow.

This is particularly difficult in edge devices, where at least some of the processing needs to happen in real time.

“We’re definitely seeing a need for more processing at the edge, and this is causing a lot of disruption,” said Frank Ferro, senior director of product management at Rambus. “Traditionally you would need a small amount of DRAM at the edge of the network. Now we’re looking at HBM and GDDR6. In automotive, you’ve got 200 gigabits of data per second, and it has to be analyzed in real time.”
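Ferro’s automotive figure translates into a useful sanity check on memory choices. Converting 200 gigabits per second into bytes and comparing it against rough, generic per-device bandwidths (approximations for illustration, not Rambus or vendor specifications) shows why designers are reaching for HBM and GDDR6:

```python
# Rough sanity check on the automotive data-rate figure quoted above.
# Per-device bandwidth numbers are approximate, generic figures for
# illustration, not vendor specifications.

sensor_data_gbps = 200                      # 200 gigabits per second, per the quote
sensor_data_gBps = sensor_data_gbps / 8     # = 25 gigabytes per second

approx_bandwidth_gBps = {
    "LPDDR4X (x32 channel)": 17,    # roughly, at 4266 MT/s
    "GDDR6 (single device)": 56,    # roughly, 14 Gb/s per pin on a x32 interface
    "HBM2 (single stack)":   256,   # roughly, 1024-bit interface at 2 Gb/s per pin
}

print(f"Incoming sensor data: ~{sensor_data_gBps:.0f} GB/s")
for name, bw in approx_bandwidth_gBps.items():
    verdict = "covers" if bw >= sensor_data_gBps else "falls short of"
    print(f"{name}: ~{bw} GB/s -> {verdict} the raw stream")
```

And since that raw stream still has to be written, read back and processed in real time, the effective bandwidth requirement is a multiple of the incoming rate, which pushes designs further toward the higher-bandwidth options.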

Conclusion
How these constantly evolving architectures get designed, manufactured and packaged isn’t entirely clear. What is clear is that there is no shortage of new options emerging, and not all of them will work out. That makes it hard for foundries, equipment companies and materials suppliers to make long-range plans, which is why many are proceeding cautiously.

In the past, there was a roadmap and a clear direction for investing in the future of chip design through manufacturing. That roadmap no longer exists. Which approaches and technologies win, and which ones fail, is anyone’s guess. But each one of those options requires a substantial investment, and it’s hard to bet the bank when you don’t know how long those options will stick around.

Related Stories

Big Changes For Mainstream Chip Architectures

New Patterning Options Emerging

Variation’s Long, Twisty Tail Worsens At 7/5nm