Modified platform approaches are beginning to take root across the industry. The question isn’t whether they will work, but when they will gain traction and what they will look like.
By Ed Sperling
The standard method of designing chips—by shrinking features and turning up the clock frequency—is running out of steam for many companies. It’s too difficult, too expensive, and without a commercially viable new lithography source it may become even more unrealistic for most applications.
That certainly doesn’t mean Moore’s Law is ending, but it could become more of a buy-and-integrate option for some companies for application-specific systems—entire systems that are designed around subsystems, platforms, and software for very specific purposes—rather than an internal design strategy.
In his keynote speech at DAC, National Instruments CEO James Truchard sounded the trumpet for platform-based design, saying that a level of abstraction is required for future progress.
But how does the semiconductor industry get there? And where exactly are we?
Stacked die
One of the logical outcomes of a push to more platforms is die-stacking. The concept of using stacked die first started gaining serious interest inside of IBM in the mid-1990s, as the length of wiring inside a chip began growing and the thickness of the wires began shrinking. Researchers there decided that connecting chips with bigger pipes would improve performance significantly because signals wouldn’t have to travel as far.
Several years ago the same architectural and packaging approach began gaining broad attention across the semiconductor industry as a way of improving energy efficiency as well as performance. A signal that doesn’t have to travel as far, and which can use a bigger pipe, uses less energy and generates less heat. Since then test chips have been created, tools are ready, and processes are in place. And there it has stalled—or at least that’s how it looks from the outside.
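As a rough first-order sketch of that energy argument (a standard relationship, not a measured figure from any of the companies involved), the dynamic power spent driving a wire scales with the wire’s capacitance, and that capacitance grows roughly linearly with wire length:

```latex
% First-order dynamic power of driving an on-chip or chip-to-chip wire
% (illustrative only): alpha = switching activity, f = clock frequency,
% Vdd = supply voltage. Wire capacitance grows roughly linearly with length L.
\[
  P_{\mathrm{dyn}} \approx \alpha \, C_{\mathrm{wire}} \, V_{dd}^{2} \, f,
  \qquad
  C_{\mathrm{wire}} \propto L
\]
```

Shortening a route by going vertically through a stack cuts the wire capacitance, and with it both the energy per transfer and the heat that has to be removed.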
Changes aren’t always obvious on the outside, though. Sometimes they aren’t even obvious to people working inside large chipmaking companies. But sources inside two large chipmakers have confirmed that stacked die and platform strategies are beginning to make real headway—in part because the cost of developing fully integrated SoCs is climbing into the stratosphere at the most advanced nodes. As engineering managers grapple with rising costs, they are beginning to look for alternative approaches and faster integration strategies, and the test chips developed within these companies are finally being taken seriously.
New designs
That’s not the only change, either. The packaged designs now taking shape are significantly different from what early proponents of stacked die envisioned. In 2010, when Xilinx introduced its 2.5D planar die with an interposer in the middle, most observers wondered why this planar device was considered a stacked die. As it turns out, it was just the starting point. More recent approaches look like star-configured networks, with an interposer at the hub.
Because SoCs are largely networks themselves, they are beginning to follow the best designs in networking, where star configurations are a standard. From there, the individual platforms or subsystems may rise vertically with TSVs or other interposers, making the result look like a miniature, futuristic city of vertical devices combined with planar devices.
“Through our own effort with our ecosystem partners we’ve been able to prove that yield, test and thermal are manageable,” said Tom Quan, a director at TSMC. “Last October we created a test chip with an interposer, wide I/O, and DRAM at 40nm and achieved 99% yield. Next will be a RAM stack on top of logic, which we have already taped out.”
GlobalFoundries likewise last month introduced certified design flows for 2.5D ICs, leveraging TSVs in interposers and new bonding approaches, pulling in full-implementation flows from Synopsys and Cadence, and physical verification using Mentor Graphics’ tools.
Why now?
Why the sudden interest in stacked die? Without a commercially viable power source for extreme ultraviolet lithography, the cost of double patterning at 20nm and potentially triple or quadruple patterning at 14nm is raising the specter of significantly more expensive designs. While those approaches still make sense for high-volume chips such as memories, processors or GPUs, they make it much harder to create an SoC. Splitting a single layer across four masks is an engineering feat, and coupled with finFETs and power density issues, it can be time-consuming and expensive.
That forces a rethinking of what actually gets produced at the most advanced nodes, and what problems can be avoided by taking a different approach: test, thermal issues, noise, electromigration, electromagnetic interference, integration, and the sheer complexity of moving analog from one node to the next. The approach also can lengthen the lifespan of IP and remove some of the routing congestion around memories. And it can improve yield—the original reason cited by Xilinx for moving to a 2.5D configuration—because individual pieces are easier to manufacture, and the cost of respins is lower for less complicated chips.
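A simple defect-limited yield model makes the “easier to manufacture” point concrete. The defect density and die areas below are illustrative assumptions, not figures from any company mentioned here, plugged into the common Poisson yield approximation:

```latex
% Poisson yield approximation: Y = exp(-A * D0), where A is die area and
% D0 is defect density. All values are assumptions chosen for illustration.
\[
  Y_{\mathrm{monolithic}} = e^{-(4\,\mathrm{cm}^{2})(0.25\,\mathrm{cm}^{-2})} = e^{-1} \approx 37\%
\]
\[
  Y_{\mathrm{per\ piece}} = e^{-(1\,\mathrm{cm}^{2})(0.25\,\mathrm{cm}^{-2})} = e^{-0.25} \approx 78\%
\]
```

Under those assumptions a defect scraps one small die rather than the whole device, which is consistent with the yield advantage Xilinx cited for its 2.5D split.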
“2.5D and 3D are obvious ways to extend Moore’s Law,” said Mike Gianfagna, vice president of corporate marketing at Atrenta. “Stacking is the way that differentiation will continue. If you’re developing 14nm 3D (finFET) designs, the pieces will be complex and expensive and they will have to be applicable for multiple applications.”
TSMC’s Quan agrees: “There will be three vector directions. One, we will continue to push Moore’s Law to 16nm and 10nm. The second is mainstream technology. The serious node for us is 0.18 (micron). We will continue to invest in this so the analog, mixed signal and RF guys continue to have their piece. The third vector is stacked die with an interposer, because not everything can go down to 16nm and 10nm. We can hardly do analog at 28nm. A lot of things will probably stop where they are, and what makes sense is putting them into the process node that makes sense and integrating them into a system on chip on a stacked die.”
What has held up the adoption of this approach so far is cost. But now that the cost of migrating to advanced nodes has begun escalating due to multipatterning, cost is also what’s beginning to spur new interest. That has led STMicroelectronics and some RF amplifier makers to adopt fully depleted silicon on insulator at 28nm and 40nm, respectively, and it has raised questions about whether it makes sense to continue shrinking features for everything on an SoC. “Everything is about cost,” said Seow Yin Lim, group director for product marketing at Cadence.
In some ways, this is déjà vu—with a twist. Packaging of stacked die has been around since the days of multi-chip modules. But in the past, these were largely one- or two-vendor solutions, rather than collections of IP from multiple vendors. That may be the last big hurdle to overcome—how to get the ecosystem in sync.
“We’ve been on stacked die forever,” said Michael Buehler-Garcia, senior director of marketing for Calibre Design Solutions at Mentor Graphics. “The interposer and TSVs are what allow it to go forward for the new stuff. That opens up a whole different set of issues, both technically and business-wise. We’re all becoming system integrators, but who owns that?”