The jury is still out about just how widespread this packaging approach will become.
Advanced packaging is becoming real on every level, from fan-outs to advanced fan-outs, 2.5D, and 3D-ICs for memory. But just how far 3D and monolithic 3D will go isn’t clear at this point.
The reason is almost entirely heat. In a speech at SEMI's Industry Strategy Symposium in January, Babak Sabi, Intel corporate VP and director of assembly and test technology development, warned that unless the thermal issues are solved, logic-on-logic stacking is unlikely. This packaging approach still will be used for high-performance DRAM, because interconnects can be shorter when DRAM is stacked and connected with through-silicon vias (the Hybrid Memory Cube is a commercially available version of stacked DRAM on a logic controller). But the inability to remove heat from a stacked-die package with logic on multiple layers remains an unsolved problem.
This is somewhat ironic because the reason chipmakers began exploring 3D in the first place was heat. Pushing electrons through increasingly narrow wires is getting tougher. IBM began talking publicly about electron crashes as early as 2002. The concern now is resistance, capacitance and tunneling, and the thermal effects from all three.
From a PPA perspective, 3D stacking always has made sense. Decades of shrinking features and critical dimensions on a chip have pushed physics to the limit. Wires are too thin and too long already. At 5nm, memories will begin exhibiting quantum behavior in a very noticeable way. Second-order effects such as single-event upsets and negative bias temperature instability will become first-order effects. Gate oxides will become more difficult to keep intact as the number of atoms is reduced to the single digits.
One of the initial problems cited for 3D-ICs was how to test them. That was worked out through a labyrinthine approach by researchers at Imec, in conjunction with companies such as Cadence. A second issue was wafer handling, particularly after die were thinned to 50 microns or less. That, too, has been largely solved through a variety of methods, including thinning dies from the back while they are mounted face down. There is even research now into bonding at room temperature, so coefficients of expansion are not an issue. And industry relationships, along with concerns about who's responsible if a known good die doesn't function in a 3D stack, have been discussed at length, so at least there is a good idea of who's liable for what.
The remaining big problem that has never been fixed, though, is how to get the heat out of the middle of a 3D stack. Work is underway on a variety of fronts, including even thinner dies, thermal TSVs that act like chimneys, fluid channeling, staggered stacking, air gapping, and new materials that are either more conductive or less sensitive to heat.
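Why the middle of a stack is so hard to cool can be seen with a toy one-dimensional thermal model. In the sketch below, heat is assumed to leave only through a sink on top of the stack, so heat from each lower die must cross every interface above it. All resistances, powers, and temperatures are illustrative assumptions, not measured values for any real package.

```python
def stack_temps(powers, r_layer=0.5, r_sink=0.5, t_amb=25.0):
    """Steady-state die temperatures (deg C) in a 1-D stack model.

    powers[0] is the die next to the heat sink; later entries sit deeper
    in the stack. r_layer is the thermal resistance (C/W) between adjacent
    dies, r_sink the resistance from the top die to ambient. Heat is
    assumed to exit only through the top-mounted sink.
    """
    total = sum(powers)
    temps = [t_amb + r_sink * total]  # die 0, directly under the sink
    for i in range(1, len(powers)):
        # Heat from die i and every die below it crosses the interface
        # between die i-1 and die i, adding r_layer * flow to the rise.
        flow = sum(powers[i:])
        temps.append(temps[-1] + r_layer * flow)
    return temps

single = stack_temps([40.0])                      # one 40 W die -> 45 C
stacked = stack_temps([10.0, 10.0, 10.0, 10.0])   # same 40 W over 4 dies
# stacked -> [45.0, 60.0, 70.0, 75.0]: identical total power, but the
# bottom die runs 30 C hotter than the single-die case.
```

Thermal TSVs, fluid channels, and thinner dies all attack the same term in this picture: the per-layer resistance that every buried die's heat must traverse.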
Maybe it gets easier if you don’t want everything at once and you aim to serve actual products.
What kind of power is Intel aiming for? 3 to 100W? How about 50-100mW in glasses, where you can largely neglect cost? The dynamics change when form factors change, and the folks who find ways to serve the new form factors win.
The focus on performance might be the past, but power and thermal seem likely to be the future. Almost-good-enough performance at next to no power could be what changes the world the next time.
You wrote: “One of the initial problems cited for 3D-ICs was how to test them. That was worked out through a labyrinthine approach by researchers at Imec, in conjunction with companies such as Mentor Graphics.”
Half correct: Imec has developed a patented 3D DfT architecture in conjunction with Cadence Design Systems and TSMC.
The heat issue is why you need to be looking at aggressive power management with DVFS and/or back-biasing (FDSOI), so you can drop inactive silicon to sub-threshold operating levels, as well as asynchronous operation.
IMS CHIPS has a way to do very thin ICs (10µm) without grinding –
http://www.ims-chips.de/home.php?id=a3b8c1en&adm=
IMO you want to build stacks with redundancy and then test, rather than look to build things that work 100%.