Improving 2.5D Components

The increased emphasis on squeezing more power and performance out of established nodes will only help when it comes to stacking die.


A lot of attention is being focused on improving designs at established, well-tested nodes, where processes are mature, yields are high, and costs are under control. So what does this mean for stacking die?

For 2.5D architectures, plenty. For 3D, probably not much. Here’s why:

The advantage of 2.5D is that it can utilize dies created at whatever node makes sense. While the initial discussions involved analog IP, because it was getting very difficult to shrink analog circuits beyond 40nm, the reality is that even digital designs at 28nm or 40nm are candidates for connecting together through an interposer. Coupled with wide I/O—fatter pipes over shorter distances—the ability to turn up the performance on a chip increases significantly, while at the same time reducing the amount of current needed to drive those signals. Throw in techniques such as near-threshold computing, back biasing and FD-SOI, and suddenly this approach looks very promising for a reasonable cost.
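A rough back-of-envelope calculation shows why fatter pipes over shorter distances pay off: the switching energy to drive a wire scales with capacitance times voltage squared, and a short interposer trace has far less capacitance than a long board-level channel. The sketch below uses assumed, round numbers (0.2 pF/mm of trace capacitance, a 1.0V swing, 50mm vs. 2mm reach), not measured figures for any particular process or package.

```python
# Back-of-envelope comparison of drive energy per bit (illustrative numbers).
# Switching energy scales as alpha * C * V^2, so the short, low-capacitance
# traces of a wide-I/O interposer link need far less drive current than a
# long off-package channel.

def energy_per_bit_pj(c_pf_per_mm, length_mm, vdd, activity=0.5):
    """Switching energy per bit in picojoules: alpha * C * V^2."""
    c_total_pf = c_pf_per_mm * length_mm
    return activity * c_total_pf * vdd**2  # pF * V^2 -> pJ

# Assumed values: ~0.2 pF/mm trace capacitance, 1.0 V swing.
off_package = energy_per_bit_pj(c_pf_per_mm=0.2, length_mm=50, vdd=1.0)
interposer  = energy_per_bit_pj(c_pf_per_mm=0.2, length_mm=2,  vdd=1.0)

print(f"off-package link: {off_package:.2f} pJ/bit")
print(f"interposer link:  {interposer:.2f} pJ/bit")
print(f"ratio: {off_package / interposer:.0f}x")
```

With these assumptions the interposer link comes out roughly 25x cheaper per bit, which is why the current needed to drive those signals drops so sharply.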

But adding the most advanced tooling to established nodes improves 2.5D designs in other ways, too. For one thing, those tools can be used to reduce leakage and add better power management, eliminating some of the thermal concerns that have dogged stacked die since the concept first began garnering serious attention several years ago. Reducing heat also increases reliability and reduces electromigration. In addition, the ability to utilize more advanced chip routing at older nodes will solve some of the memory contention and wire placement issues that finally forced changes in the tools at the most advanced nodes, cleaning up older designs with the latest techniques and automation.
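The reliability point can be made concrete with Black's equation for electromigration lifetime, where junction temperature sits inside an exponential term. The sketch below assumes an activation energy of roughly 0.9 eV, in the range commonly cited for copper interconnect; the exact value is process-dependent, so treat the result as directional rather than a datasheet number.

```python
import math

# Black's equation: MTTF = A * J**(-n) * exp(Ea / (k * T)).
# At the same current density J, the MTTF ratio between two junction
# temperatures depends only on the exponential term.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def em_mttf_ratio(t_cool_c, t_hot_c, ea_ev=0.9):
    """Relative electromigration lifetime of the cooler case vs. the hotter one."""
    t_cool_k = t_cool_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_cool_k - 1.0 / t_hot_k))

# Assumed activation energy ~0.9 eV (a typical published value for Cu).
print(f"{em_mttf_ratio(85, 105):.1f}x longer EM lifetime at 85C vs 105C")
```

Even a 20°C reduction in junction temperature buys several times the electromigration lifetime under these assumptions, which is why shaving leakage and heat matters so much in a stack.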

Advanced tooling also makes it easier to create designs with more regularly placed I/O. This is essential for stacking, because the last thing design teams want is to figure out from scratch how to connect to an interposer for every multi-die package they create. And it will help in routing photonics, which promises to be a highly energy-efficient way of moving data between chips.
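The regular-I/O idea amounts to putting microbumps on a uniform grid, so that different dies can land on the same interposer footprint without a custom connection scheme each time. A minimal sketch, with hypothetical pitch and placement numbers chosen only for illustration:

```python
# A regular, grid-based bump map is what lets different dies reuse the same
# interposer footprint. Pitch and offsets below are hypothetical.

def bump_grid(cols, rows, pitch_um, origin=(0.0, 0.0)):
    """Generate (x, y) microbump coordinates on a uniform grid, in microns."""
    x0, y0 = origin
    return [(x0 + c * pitch_um, y0 + r * pitch_um)
            for r in range(rows) for c in range(cols)]

# Two dies can share one interposer as long as their grids use the same pitch:
die_a = bump_grid(cols=64, rows=64, pitch_um=40)
die_b = bump_grid(cols=32, rows=32, pitch_um=40, origin=(2560.0, 0.0))
print(len(die_a), "bumps on die A;", len(die_b), "bumps on die B")
```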

And finally, being able to verify all of this using emulators and other hardware acceleration techniques, not to mention adding in virtual prototypes for software, will go a long way toward getting 2.5D packages out the door in a reasonable amount of time and at a reasonable cost—particularly when that cost is measured against multi-patterning and the new materials that will be required for high mobility after 14/16nm.

For 3D-ICs, the initial benefit will be negligible. The reason is that most of these die need to be crafted from the ground up using the most advanced techniques. There are stress and thermal issues to wrestle with if through-silicon vias are used, and there are wire-bond connectivity issues if they are not. While wire bonding is simple enough today, it's not so simple once a die is thinned to the point where it can be stacked with others in a package. At this point, the general consensus is that 3D-IC is at least a couple of generations away, although given the current slowdown in Moore's Law, it's hard to say whether that's four years or six years—or more.
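To see why heat is such a wrestling match in a full 3D stack, consider a simple one-dimensional thermal model in which all heat exits through the top of the stack: every die's power must pass through the layers above it, so the bottom die runs hottest. The numbers below (a 5W logic die under two 1W memory dies, 2°C/W per die interface, a 0.5°C/W heat sink) are purely illustrative, not measurements of any real stack.

```python
# Simple 1-D series thermal model of a die stack cooled from the top.
# powers_w[0] is the bottom die; r_between_c_per_w[i] is the thermal
# resistance between die i and die i+1. All values are illustrative.

def stack_temps(powers_w, r_between_c_per_w, r_sink_c_per_w=0.5, t_amb_c=45.0):
    """Junction temperature of each die, bottom-up, assuming heat exits the top."""
    n = len(powers_w)
    temps = [0.0] * n
    # The top die sees ambient through the heat sink, carrying the whole
    # stack's power.
    temps[-1] = t_amb_c + r_sink_c_per_w * sum(powers_w)
    # Working downward, each interface carries the heat of every die below it.
    for i in range(n - 2, -1, -1):
        heat_up = sum(powers_w[: i + 1])
        temps[i] = temps[i + 1] + r_between_c_per_w[i] * heat_up
    return temps

# Assumed: 5 W logic die under two 1 W memory dies, 2 C/W per interface.
print(stack_temps([5.0, 1.0, 1.0], [2.0, 2.0]))  # -> [70.5, 60.5, 48.5]
```

Under these assumptions the bottom die runs more than 20°C hotter than the top one, which is one reason thinned, TSV-connected stacks need so much thermal care.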



1 comment

Gretchen Patti says:

Ed: 3D-ICs are already benefiting from mixed process nodes! At Tezzaron we are building 3D-ICs that combine 110nm and 130nm in the same chip; also 90nm + 350nm; and 65nm + 45nm + 40nm. These are real customer devices, not science projects. We are also using mixed substrates at 130nm + 250nm. As long as the TSVs line up, the differing process nodes simply don’t matter!
