3D ICs: No Simple Answers

Distinct uses for stacked die are emerging and problems persist, but there is at least an understanding of what needs to be fixed.

By Pallab Chatterjee
Just how ready is the semiconductor industry for stacked die? That was the subject of a recent panel discussion involving ARM, Atrenta, Xilinx, Samsung and Mentor Graphics.

The reasoning behind 3D stacking is becoming clearer at each node. I/O count and delay times are forcing different configurations, but the time frames for these changes and the gating constraints are still somewhat fuzzy.

The discussion of applications focused on three areas: memories, SoCs, and computing systems (processor cores plus memory). Memories have used stacked-die approaches for many years. These stacks use traditional wirebond technology, feature either standard or thinned die, and have a known cost model.

The advantage of memories is a common pin-out: stacked devices can reuse existing memory test methodologies simply by extending the address range of the design. The die in this application are stacked from the top of one die to the bottom of the next. These products have shipped billions of parts at a price point very similar to standard wirebond. The methodology supports the use of known good die, is compatible with current design tools, and has known thermal performance.
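To make the address-range point concrete, here is a minimal sketch (not from the panel) of how a tester might treat a stack as one larger memory simply by widening the address sweep. The die count, per-die size, and the write_fn/read_fn tester hooks are all hypothetical; the only point is that the single-die pattern itself does not change.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stack: two identical die, each 128 MB, addressed as one
 * contiguous range. The high-order address bits select the die, so an
 * existing single-die pattern is reused by doubling the address sweep. */
#define DIE_SIZE_BYTES (128u * 1024u * 1024u)  /* assumed per-die capacity */
#define NUM_DIE        2u                      /* assumed stack height     */

/* Split a flat stack address into (die index, address within that die). */
static inline void map_stack_address(uint32_t addr, uint32_t *die, uint32_t *local)
{
    *die   = addr / DIE_SIZE_BYTES;
    *local = addr % DIE_SIZE_BYTES;
}

/* A simple write/read-back scan over the whole stack, driven through
 * hypothetical tester hooks. Only the address range differs from the
 * single-die version of the same scan. */
bool scan_stack(void (*write_fn)(uint32_t die, uint32_t local, uint8_t value),
                uint8_t (*read_fn)(uint32_t die, uint32_t local))
{
    const uint32_t total = NUM_DIE * DIE_SIZE_BYTES;
    for (uint32_t addr = 0; addr < total; ++addr) {
        uint32_t die, local;
        map_stack_address(addr, &die, &local);
        write_fn(die, local, 0xA5u);
        if (read_fn(die, local) != 0xA5u)
            return false;  /* a failing address maps back to a specific die */
    }
    return true;
}
```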

Computing systems have a different target for stacked die, and they require a different architecture. Each core has its own local 3D memory, connected to the core through vertical interconnect. These applications have very high I/O counts that cannot be routed to peripheral I/O, so they cannot use memory-style connections. The die are stacked in a top-to-top (face-to-face) format. These are the designs targeted for TSVs.

There are questions, however, about whether the TSVs should be part of the IP blocks and whether the models for the IP should include the timing for vertically stacked memory. The challenge with including them in the IP is the large variability in post-processing options for TSV creation among fabs. The tools needed to model the TSVs and to verify that the IP is being used properly are still lacking, according to the panelists. Moreover, thermal models, changes in strain after thinning, and multi-layer capacitive coupling for die stacked face-to-face all need to be dealt with before the IP can be used in a generalized way.
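As a rough illustration of why that post-processing variability matters (a generic first-order approximation, not a model presented by the panel), a TSV and its dielectric liner are often treated as a coaxial capacitor:

```latex
% First-order TSV-to-substrate capacitance (coaxial approximation).
% L_tsv: via length remaining after thinning, r_via: via radius,
% t_ox: liner thickness, eps_ox: liner permittivity.
C_{\mathrm{TSV}} \approx
  \frac{2\pi \varepsilon_{\mathrm{ox}} L_{\mathrm{tsv}}}
       {\ln\!\left(\frac{r_{\mathrm{via}} + t_{\mathrm{ox}}}{r_{\mathrm{via}}}\right)}
```

Because the remaining via length and the liner thickness are set by each fab's thinning and liner steps, an IP block that hard-codes such a model also hard-codes assumptions a given fab flow may not match.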

These problems are not unsolvable. Xilinx has released products using multi-die technology, and for fixed-topology applications there is an understanding of how to solve them. The generalized use of TSVs distributed arbitrarily over a custom processor die, however, also leads to custom memory configurations and pin-outs.

It is unlikely that a standards group will drive memory compilers and designers to a standard pin-out for these blocks. Because the processor cores are soft IP with different optimization tradeoffs, there is no standardized application target that would allow the performance tradeoffs of the cores to hit a standard pin-out. In general, these will be custom designs for custom applications. Stacked die of this type are targeted at very high-volume or high-ASP products that can justify the high cost of test.

With respect to SoCs, this platform likely will be one of the last to adopt TSVs because of the impact on the design and release cycle. Packaging, thermal, timing and power issues for multi-die SoCs are very complicated and beyond the capacity of most EDA tools, especially in the context of billion-device ICs that already are pushing the limits of those tools. Advances are being made in this area, and tool vendors have discussed options for system verification targeted at this use. These are still in development; current tool releases address some, but not all, of the use models for TSVs with stacked die or for silicon interposers with stacked die, and they have not yet reached the design-tradeoff stage.

In addition, this whole area is still bracketed by cost. Traditional system-in-package and wirebond-based stacked die are still the most cost-effective for consumer commodity chips. The key is to identify a device, market and performance metric that can justify the high production cost of this technology now.


