3nm: Blurring Lines Between SoCs, PCBs And Packages

The end of Moore’s Law is providing options for shifting what goes where and how it gets designed.


Leading-edge chipmakers, foundries and EDA companies are pushing into 3nm and beyond, and they are encountering a long list of challenges that raise questions about whether the entire system needs to be shrunk onto a chip or into a package.

For 7nm and 5nm, the problems are well understood. In fact, 5nm appears to be more of an evolution from 7nm than a major shift in direction. But at 3nm, with the introduction of gate-all-around FETs that utilize nanosheets, new materials, and much denser arrays of heterogeneous processing elements, issues that were manageable at previous nodes, such as self-heating, are suddenly looking far more challenging.

“You have a trapped gate that is surrounded by insulator,” said João Geada, chief technologist at ANSYS. “The heat has nowhere to go.”

This has led to a number of different architectural options that help to minimize this problem, ranging from back-end packaging by companies such as ASE, Amkor and JCET, to front-end packaging such as TSMC’s System on Integrated Chips (SoIC), which integrates chiplets into a die. Each approach has disadvantages, notably around cost and testability, but the ability to use high-speed interconnects and in some cases shorter distances — and with TSMC’s SoIC, a direct-bond approach — means that heat can be dealt with more effectively than on a single chip, while power is lower and performance higher than if everything is on a board.

“There is the idea of the whole 3D-IC stack, and even one step beyond: replacing the entire PCB and assembling everything on a silicon substrate,” Geada said. “You have the extreme servers, which did away with dicing, but there are a lot of really good technical reasons to do away with the PCB entirely and just manufacture everything on a wafer. We can manufacture wires much denser and much more controllable, with much smaller dimensions, directly on silicon. And one of the major limiters of performance of large systems these days is literally the speed of light. One picosecond is 0.3 millimeters. The closer you can put things together, the more performance you can have for less energy. Also, if you put silicon on top of silicon, the two systems expand at the same rate, so a whole bunch of thermal issues go away. If you use silicon-on-silicon, there’s no difference in the coefficient of thermal expansion. Where we are going is that chiplets become the entire system and are now on silicon. There’s no outside-of-silicon other than the connections to the real world, and even those are going to change in structure.”
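Geada’s 0.3mm-per-picosecond figure can be sanity-checked with a few lines of Python. The sketch below is illustrative only; the 0.5 velocity factor for board traces is a rough assumption, not a value from the article.

```python
# In vacuum, light covers roughly 0.3 mm per picosecond. Real signals on a
# board or interposer travel slower, so these are optimistic lower bounds.
C_MM_PER_PS = 0.2998  # speed of light in vacuum, mm per picosecond

def min_flight_time_ps(distance_mm: float, velocity_factor: float = 0.5) -> float:
    """Lower-bound signal flight time over distance_mm.

    velocity_factor is the fraction of c the signal actually achieves;
    ~0.5 is a common rough figure for board traces (an assumption here).
    """
    return distance_mm / (C_MM_PER_PS * velocity_factor)

# 100 mm across a board vs. 1 mm between closely packaged dies:
board_ps = min_flight_time_ps(100.0)  # hundreds of picoseconds
stack_ps = min_flight_time_ps(1.0)    # single-digit picoseconds
```

The two orders of magnitude between those numbers are the performance argument for pulling the system off the board and into the package.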

At 3nm and 2nm, Moore’s Law is basically coming to an end. And while scaling may continue, everything at those nodes will involve multi-chip packages.

“Previously, it was just about a physical limit,” said CT Kao, product management director at Cadence. “At 2nm, we talk about five layers of copper atoms, so we just cannot go further down. What do engineers need to do? Previously we’d go to a high-k dielectric material to try to keep Moore’s Law alive, but now that we’ve hit the physical wall, the only option is for IC design to move to 3D-ICs. That three-dimensional device is built of different chips on one single silicon interposer, which is like a substrate. These multiple dies have different functions on that silicon interposer, and each could be a different thickness, a different function, a different design. But they are put on one substrate, and that silicon interposer is basically like a multi-layer package.”

And this is where things change significantly. In the past, it was always assumed that putting everything onto a single die was the best way to achieve high performance for the least amount of power, with everything else on a PCB. In the future, more chips will be developed at whatever is the optimal process for those chips, and some of those will be integrated in a package rather than on a board to achieve scaling benefits.

“Building a 7nm chip is very complicated,” said Andy Heinig, group manager for system integration at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “For SoCs, it makes sense to shrink digital components from 14nm to 12nm to 7nm, but analog does not shrink anymore. You need enough drive strength to make that work, and that’s a starting point for chiplets based on an organic substrate. There are companies that are not happy with a standard from any one company, and there is no way for some companies to use a package targeted at certain applications.”

In effect, chiplets take some of the essential components in an SoC and move them into the package. That approach also can be used to take some functionality normally added onto a PCB and shift it closer to the main logic, as well.

Fig. 1: Chiplet architecture. Source: Fraunhofer IIS EAS

“For the past several decades, from small size to large size, we have the chip, package, and PCB, which we need in order to package the chips so they can talk to the outside world,” said Kao. “Engineers are now working on 3D-IC. They have to. They have no choice. Then, with the idea of each chip being like a package, if I put this package on one silicon interposer, the silicon interposer would be like a PCB, and then I can get rid of the PCB. That’s very straightforward thinking. Following that, if the silicon chips are put on a silicon interposer, then we eliminate the CTE (coefficient of thermal expansion) mismatch that causes warpage, which previously occurred between the package and PCB because they’re made of different materials. With the same temperature rise or drop (delta T), materials with different CTEs expand or shrink by different amounts. That causes mechanical stress within the component or device, and the resulting warpage and deformation cause failures between the package and the board. So if we can have a single material from the chip to the package to the PCB, we won’t have this CTE mismatch problem.”
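Kao’s argument reduces to the free-expansion relation ΔL = α · L · ΔT. The back-of-the-envelope sketch below uses approximate textbook CTE values for silicon and FR-4 laminate — illustrative figures, not numbers from the article — to show the micron-scale mismatch a modest temperature swing produces across a package-sized span.

```python
# Free thermal expansion: delta_L = alpha * L * delta_T. When two bonded
# materials expand by different amounts, the difference drives stress/warpage.
def delta_length_um(alpha_ppm_per_c: float, length_mm: float, delta_t_c: float) -> float:
    """Unconstrained expansion in micrometers for a span of length_mm."""
    return alpha_ppm_per_c * 1e-6 * (length_mm * 1000.0) * delta_t_c

# Representative CTE values in ppm/degC (approximate textbook figures):
ALPHA_SI = 2.6    # silicon
ALPHA_FR4 = 14.0  # FR-4 PCB laminate, in-plane (varies by resin system)

dT = 80.0  # degC temperature swing
L = 30.0   # mm span, roughly package-sized

# Microns of differential movement the solder joints must absorb:
mismatch_um = delta_length_um(ALPHA_FR4, L, dT) - delta_length_um(ALPHA_SI, L, dT)
```

Tens of microns of differential movement across a 30mm span is what the joints between package and board have to absorb every thermal cycle — and it goes to zero when both sides are silicon, which is exactly Kao’s point.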

Novel packaging technologies
Greg Yeric, an Arm fellow, sees a great deal of promise in architecting new systems utilizing novel packaging technologies beyond the traditional PCB. “In the highest computing circles, the high trace density of advanced packaging solutions attacks the memory wall, and farther out, wafer-scale systems will add value. In systems from data center to mobile to the IoT, there will be advantages in leveraging the bandwidth and form factors available with advanced die-to-wafer, die-to-die, and wafer-to-wafer packaging techniques to achieve system capabilities beyond what conventional PCB would deliver.”

While high thermal load systems may favor a silicon-on-silicon solution, he expects the PCB industry to present advanced packaging technologies with a fast-moving target. “There are many areas of interesting R&D here, including advanced thermal interface materials, metal core and thermal via integrations, and improved trace density. While thermal-based failure is a very real concern for PCB-based systems, there is significant upside in the continued advancement of system-level multi-physics simulation allowing PCBs to service higher performance system needs. Here, specifically, there is an opportunity for standardization to provide value compared to the more nascent advanced packaging ecosystem. Ultimately, AI will have as profound an impact on PCB as on other industries.”

Further, while these examples highlight systems that could prefer advanced packaging solutions over PCB, there is a broad set of future opportunities for systems utilizing PCB cost and form factors. “We are just scratching the surface of the IoT, and PCBs offer heterogeneous integration, including sensor fusion (especially for vision systems, for example), domain-specific acceleration, and bio-electronic interfaces, to name a few areas of opportunity for system designers,” Yeric said. “Flexible PCB and additive manufacturing techniques will offer new horizons in system-level integration, as well. Flexible PCB may even be used as an advantage in high thermal load systems in some cases.”

This already is showing up in the EDA world, where tools need to be developed along with advanced processes. “With every node it becomes harder to figure out stress and OPC (optical proximity correction),” said Carey Robertson, product marketing director at Mentor, a Siemens Business. “Features are small, everything is crowded, and an electron or two makes a big difference.”

Many of the worst problems occur at the “seams” of a device, such as the interconnects. Those are particularly difficult to manage in complex designs.

“We have the notion of corner-bound problems, such as timing and temperature,” said Robertson. “Now we have interconnect corners, which are critical if you want to target a specific performance window. You can be conservative with those, or aggressive, and we will provide the model for both. But if leakage is a concern, then you have different constraints than for performance. The problem is that if you start designing now and you want to hit this performance/power/area spec, as 3nm goes mainstream those constraints may be different than today.”
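Robertson’s interconnect corners can be pictured as different extraction parameter sets for the same wire. The sketch below pairs that idea with a simple Elmore-style distributed-RC estimate; the corner names and R/C values are invented for illustration — real corner definitions come from a foundry’s extraction decks, not from this table.

```python
# Hypothetical interconnect corners: (resistance ohm/mm, capacitance fF/mm).
# Values are invented for illustration, not from any real PDK.
CORNERS = {
    "typical":  (200.0, 180.0),
    "rc_worst": (260.0, 220.0),  # max R and C: slowest wire, bounds setup timing
    "rc_best":  (150.0, 140.0),  # min R and C: fastest wire, checked for hold
}

def elmore_delay_ps(corner: str, length_mm: float) -> float:
    """Elmore estimate for a distributed RC wire: delay ~ 0.5 * R_total * C_total."""
    r_per_mm, c_per_mm = CORNERS[corner]
    r_total = r_per_mm * length_mm         # ohms
    c_total = c_per_mm * length_mm * 1e-3  # fF -> pF, so ohm * pF yields ps
    return 0.5 * r_total * c_total
```

Signing off timing at both extremes is the conservative choice Robertson describes; targeting a tighter performance window means betting on where within that corner spread the mainstream 3nm process will actually land.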

Size matters
And this is where things get particularly complicated, because most chipmakers agree that not everything needs to be developed at 3nm, or even at 28nm, and not everything needs to be instantaneous.

“Consider a nanometer-class chip: the chip is approximately 1 to 2 millimeters, the package is tens of millimeters, and the PCB is about 100 millimeters,” said Cadence’s Kao. “Within a cell phone, for example, it may measure 10cm (100mm). This kind of physical area is a challenge. We have these very small devices doing the work in our cell phones for video, audio, etc. In order to use their function, we have to give them input and take their output. We need I/O to access these tiny devices. The PCB will not disappear in the near future, because we still need a way to do the I/O work, to dump current and voltage into these little devices, and to take out the functions we want. The PCB is the intermediate carrier or vehicle to do so.”

Economics have a big impact here. The worldwide PCB market for 2018 to 2019 is about $30 billion to $40 billion, and most predictions point to continued growth, Kao said. “For 2024, I’ve seen estimates that go up to $70 billion to $80 billion worldwide. That means at this moment, if we just look at 2019 to 2020, there is still a big chunk of utilization in the semiconductor market. This is understandable because PCBs are used everywhere to serve this I/O purpose. Let’s not forget the manufacturing process of PCBs is really mature. They print copper, then etch copper on the dielectric, layer by layer, and it’s mature enough to serve every electronic device. And it’s relatively cheap.”

Put in perspective, systems come in all shapes and sizes, and while cost is a driver across all systems, it matters much more in some than others.

“The highest-end systems will look to functionality, throughput and response time in the applications over cost,” said Chris Rowen, CEO of BabbleLabs. “The drive for cost will continue to push designers and manufacturers to keep board-level assembly as simple as possible, which means extending the life of PCB-style system assembly. I believe we will continue to push architecture innovation (e.g., new on-chip accelerators and novel memory organization) and silicon-level integration (even as Moore’s Law decelerates) because these can drive up system benefits without infrastructure change. Novel packaging will make progress, too, especially for high-bandwidth connectivity, and most especially for high-bandwidth DRAM access, but what combination of silicon substrate vs. novel PCB-derived substrate remains to be seen.”

—Ed Sperling contributed to this report.

Related Stories
EDA In The Cloud
Speeding time to market at 5/3nm.
IP Management And Development At 5/3nm
Just because IP worked in the past doesn’t mean it will work in a derivative design at a new node.
Moving To GAA FETs
Why finFETs are running out of steam, and what happens next.
Big Trouble At 3nm
Costs of developing a complex chip could run as high as $1.5B, while power/performance benefits are likely to decrease.
Multi-Physics At 5/3nm
Why process, voltage and temperature are so interrelated at future nodes, and what impact that has on design.
Controlling Variability And Cost At 3nm And Beyond
Lam’s CTO talks about how more data, technology advances and new materials and manufacturing techniques will extend scaling in multiple directions.
5/3nm Wars Begin
New transistors structures are on the horizon with new tools and processes, but there are lots of problems, too.
Multi-Patterning EUV Vs. High-NA EUV
Next-gen litho is important for scaling, but it’s also expensive and potentially risky.



Is this possible?

Ed Sperling says:

According to everyone involved, the answer is yes. The big question is how much it will cost.

Len Schaper, LF IEEE says:

Silicon substrates? We did that in the 1980s; they were called Multichip Modules (MCMs). Chip stacks? We did that in the 1990s, with TSVs and plated copper posts, leaving a gap between chips for fluid flow to remove heat. There is lots of solid academic work that needs to be picked up and developed, not reinvented.

Phil Marcoux says:

Several companies have attempted to commercialize similar approaches, including Micro Module Systems, Chipscale Inc., IBM, Intel, and a number of ARPA-funded efforts.

There was a significant flurry of development starting in the mid-1980s.

So far, chip (I guess now it’s chiplet) yields, the difficulty of burn-in, and cost overruns have been showstoppers.

Kevin Cameron says:

AMD is using chiplets. It just makes the verification problem harder. Thermal issues are likely to dominate in stacked systems, and that’s a mixed-signal problem.
