2.5D Timetable Coming Into Focus

Real chips are under development despite gaps in design and packaging flows. Work is under way to cut interposer costs and standardize electrical connections.


After years of empty promises, the timetable for 2.5D is coming into better focus. Large and midsize chipmakers are behind it, real silicon is being developed, and contracts are being signed.

That doesn’t mean all of the pieces are in place or that market uptake is at the neck of the hockey stick. And it certainly doesn’t mean the semiconductor industry is going to abandon development at the most advanced process nodes, or even improvements at older nodes that could slow migration in all directions.

“Without a doubt, not everything needs integration [on a single die],” said Joe Sawicki, vice president and general manager of the Design-To-Silicon Division at Mentor Graphics. “You’re not going to be looking for 6 billion transistors in a wearable device or even a fully integrated factory. But at the same time, the number of customers doing 20nm designs is a huge number of companies.”

Although the benefits are well known, 2.5D remains a new packaging approach with different interconnects and new memory structures. There are still kinks to iron out of the packaging process, work to be done across the supply chain, and new tools to develop. Nevertheless—and in spite of all those caveats—for the first time since the idea began gaining serious attention several process nodes ago, dozens of companies have moved beyond kicking the tires to developing what ultimately will be working silicon.

“Cadence, Mentor Graphics and Ansys are aggressively developing tools to make 3D more predictable,” said Herb Reiter, president of EDA2ASIC consulting. “This kind of information has to flow through the materials to the Outsourced Semiconductor Assembly and Test (OSAT) and foundry and then to the customer.”

Critical pieces under development include the second generation of high-bandwidth memory, which SK Hynix is expected to begin sampling in the second quarter of 2015, new and less costly interposer technologies and approaches, and new organic substrates. There also are questions about whether Intel will allow its Embedded Multi-die Interconnect Bridge (EMIB) to be widely licensed or sold outside of its own foundry. Intel’s bridge technology allows for much tighter pitches than organic substrates.

“What we don’t know is how costly [EMIB] will be to integrate in order to make it flush with the surface of the substrate or whether it’s something you can do with the die,” said Reiter. “But there also is work underway for organic substrates to make them smoother and almost the same pitch as silicon. And there is work being done to put resistors, capacitors and inductors on the interposers, which significantly increases the value proposition of the interposer.”

What’s changed?
Perhaps the biggest shift, though, is in the attitude of companies working with 2.5D and 3D-ICs. What started out as something of an interesting architectural approach to shorten distances and widen signal plumbing is now becoming much more accepted as a future direction for many chipmakers.

“The objection was always cost and risk,” said Charlie Janac, chairman and CEO of Arteris. “The complaint was that interposer technology is expensive and dies don’t necessarily work in a multi-chip package. But dealing with advanced nodes is horrible on the analog side. Memory and logic already have diverged to different process technologies, and it makes sense now to put them on separate dies. It also changes the dynamics of what’s important in an SoC. It makes the interconnect much more important and the packaging houses much more important.”

That change in attitude is widespread, even if the number of design starts is limited. Mike Gianfagna, vice president of marketing at eSilicon, said the company is “actively engaged” in several 2.5D projects.

“There’s still a lot of discussion about what’s the right interposer, whether it should be silicon or another material, how big it should be,” he said. “This is all about getting multiple chips with similar bandwidth on a chip. Not all of it is silicon interposer technology, either. Some of it uses other strategies.”

Open-Silicon likewise has seen limited uptake on 2.5D, even though there is plenty of interest.

“You still have to justify cost on a per customer basis,” said Steve Eplett, design technology and automation manager at Open-Silicon. “If you can leverage a die across multiple customers, that changes the economics. The metrics for power consumed between two die also aren’t as good as homogeneous solutions, but we have gotten that down to a minimal and reasonable tradeoff. And with new memories coming on, the power for communication will be a tiny fraction of an off-chip solution. It’s unterminated CMOS at 1.2 volts versus terminated, on-board DDR.”
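Eplett’s unterminated-versus-terminated comparison comes down to simple switching-power arithmetic: a short in-package link only pays the CV²f dynamic cost, while a terminated board-level DDR trace adds a steady termination draw on top of a larger load capacitance. The sketch below illustrates the gap; all of the capacitance, frequency, activity and termination values are hypothetical numbers chosen for illustration, not figures from the article.

```python
# Back-of-envelope I/O power comparison (illustrative numbers only).
# Dynamic CMOS switching power per signal line: P = alpha * C * V^2 * f

def switching_power(c_load, v_swing, freq, activity=0.5):
    """Dynamic switching power in watts: activity * C * V^2 * f."""
    return activity * c_load * v_swing**2 * freq

# Hypothetical parameters -- not from the article:
# a short unterminated interposer link might see ~1 pF of load at 1.2 V,
# while a terminated on-board DDR trace might see ~5 pF at 1.5 V plus
# a 50-ohm parallel termination drawing current about half the time.
interposer = switching_power(c_load=1e-12, v_swing=1.2, freq=1e9)
ddr_switching = switching_power(c_load=5e-12, v_swing=1.5, freq=1e9)
ddr_termination = 0.5 * 1.5**2 / 50  # crude average V^2/R termination draw
ddr = ddr_switching + ddr_termination

print(f"interposer link : {interposer * 1e3:.2f} mW per line")
print(f"board DDR link  : {ddr * 1e3:.2f} mW per line")
print(f"ratio           : {ddr / interposer:.1f}x")
```

Even with these rough numbers, the termination draw alone dwarfs the entire in-package link budget, which is why the in-package interconnect can be “a tiny fraction of an off-chip solution.”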

What’s still missing?
Not all the pieces are there yet, either. While chips are being built, some of the process isn’t automated or as clear-cut as the move to finFETs or FD-SOI at 28nm.

“We’re not seeing a whole lot of co-design optimization where you can measure the tradeoffs of one chip versus another,” said Drew Wingard, CTO at Sonics. “We need to do mix and match in a more aggressive form. When you put together a system at the PCB level there are standard interfaces. At the board level, you can always wish a new component existed, but most system designers look at what’s available. 2.5D is a practical way of dealing with that.”

Those standard interfaces—the electrical interfaces for tying together different chips inside a single package—are under discussion by standards groups.

“The pool of die that you can integrate in a standard way is a black hole right now,” said Open-Silicon’s Eplett.

Still, most experts, standards groups and chipmakers see stacked die—both 2.5D and 3D ICs—as inevitable. While it makes sense for a company such as Intel to continue pushing its very regular-shaped digital processor technology forward for several more generations, the question is what else needs to go on that die. If memory can be offloaded onto separate die, either with through-silicon vias or interposers or bridges—or even bond wires—then distances will be reduced, performance will increase, and the amount of power required to drive signals will be cut significantly.

“There’s a lively debate going on right now about 2.5D and 3D,” said Chris Rowen, a Cadence fellow. “This is a natural outgrowth of what’s already been done. There are limits about how many processes you can put on a die, and if you have digital logic, DRAM and analog, you can’t make it work without moving everything closer together. This is aggregation at the packaging level.”

The outlook remains optimistic, but cautiously so. As Sawicki noted, the obvious application for the first chips was in the data center, where development costs were less of a factor and power was the key metric to worry about. “For a number of reasons, that hasn’t occurred yet, but virtually everything that is required to make that happen we have put in place.”

So will this all change over the next couple years? All signs point to yes. Whether those timetables will remain in place, though, remains to be seen.