Where Is 2.5D?

After years of hype, steady progress is being made on all fronts. But gaps remain, along with some important questions.

After nearly five years of concentrated research, development, test chips and characterization, 2.5D remains a possibility for many companies but a reality for very few. So what’s taking so long and why hasn’t all of this hype turned into production runs instead of test chips? Semiconductor Engineering spent the past two months interviewing dozens of people on this subject, from chipmakers to foundries to EDA and IP companies, in a search for real answers. What’s becoming apparent is that not all of the answers are in sync or even complete.

Put in perspective, there is plenty of substance behind the hype and major progress has been achieved. Test chips are being generated, and initial tests show significant gains in performance, power and yield, the key attributes chipmakers use to assess any new technology. EDA tools have been modified or created to handle everything from design to integration, packaging and test, and in many cases only tweaks were needed to support 2.5D architectures. Moreover, the newest test chips are heterogeneous, mixing different types of die, which means they are not being designed just to improve yield.

But there also are some surprises, caveats and gaps that become evident as this approach moves forward. To begin with, the improved throughput of faster computing generates more heat than most proponents initially anticipated. Just because the configuration is planar, rather than logic stacked vertically on logic, doesn't mean the heat isn't rising. In addition, IP and chips destined for a 2.5D configuration need to be developed specifically for it.

“One of the big changes is where the I/O is located on the chip,” said David McCann, vice president of packaging R&D at GlobalFoundries. “It has to be on the two sides facing each other. That means those changes have to be made on the die about 1.5 to 2 years before production. And there has to be confidence in the supply chain that everything they create will improve yield and cost.”

While the approach does improve IP reuse, it's not just a new packaging technique. It's an architectural change plus a packaging change. And on top of that, parts of the technology remain unproven or unfinished as it scales, notably the handling of interposers on the manufacturing side and the gaps in standards across the supply chain.

Standards issues
Two of the biggest issues that are being worked through right now involve lawyers. One focuses on patents, which were researched earlier this year in detail by Si2 and then shared with the organization’s membership. The standards group concluded that patents do not pose an ongoing threat, despite the fact that there are now a couple of cases in litigation. The second involves responsibility when something goes wrong, and Si2 President Steve Schulz said this is being addressed now that large fabless companies have joined the foundries on the committee to develop standards in this area.

“One of the big questions involves whether it’s via last, middle or first,” said Schulz. “Depending on when a via is done, it may be the foundry that’s responsible for the interposer or the OSAT (outsourced assembly and test) provider. What we’re seeing is that it’s mostly via first these days, which is the foundry. We also need consistent IP standards. Even if you only have one EDA vendor characterizing it, they may add memory from a partner that uses a whole different set of EDA tools. You can’t control the tool flow, so you need standards for the IP. The first thing we addressed there was power distribution. Then we addressed thermal constraints. Now we’re working on pathfinding, which is in good shape.”

He said the next step involves the supply chain. That means data exchange in the design flow for IP reuse, some of which is being handled by Accellera, as well as understanding the impact of power and system-level issues so that, down the road, integration with software drivers and operating systems becomes easier.

Technology issues
On the technology side, there are a number of issues that need to be resolved, notably the actual size of some of the components. “If your memory height is fixed and your interface is less than that, you don’t gain anything,” said Javier DeLaCruz, senior director of engineering at eSilicon.

Silicon interposers, which are what the major foundries have been working with, are easy to use and to produce in volume if they're small. One of the key reasons for using silicon is that its coefficient of thermal expansion matches that of the chips connected by the interposer. But large interposers have to be thinned so much that the material becomes difficult to work with.

“The smaller interposers yield well and they are easier to thin and test,” said GlobalFoundries’ McCann. “We’ve solved the yield issues for smaller interposers. With larger interposers the yield is high, but with the reticle size it’s a challenge to manage the warpage. The greater the reticle size, the larger the challenge.”

One way to solve the warpage problem is by changing materials. The latest buzz in this market centers around organic interposers, which are flexible rather than rigid like silicon. “With organic interposers, the cost is less than silicon and the assembly costs are considerably less,” said eSilicon’s DeLaCruz. He said his company’s newest designs are based on organic interposers, which are much easier to work with and don’t crack or break during the assembly process.

Heat is also an issue. “With 2.5D the area is a lot smaller because there aren’t memories all over the chip,” said DeLaCruz. “There used to be less than a dozen systems on a board. Now there are hundreds, and you’re getting more heat per bit. Thermal management is the next barrier we have to deal with.”

The efficiency of the design means more computing is being done in less space. This is similar to the problem encountered with finFETs, where lower leakage allows chipmakers to boost the clock speed, which in turn ramps up the power density and brings back the same power and heat issues.
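To put rough numbers on that point, here is a minimal sketch in Python, using purely hypothetical wattage and footprint figures, of how power density climbs when memory that used to be spread across a board is pulled in next to the logic on an interposer:

# Hypothetical figures only, to illustrate the power-density effect described above.
board_power, board_area = 40.0, 400.0   # watts and cm^2 for DRAM spread across a board
pkg_power, pkg_area = 30.0, 8.0         # watts and cm^2 with that DRAM beside the logic on an interposer

print(board_power / board_area)   # 0.1 W/cm^2
print(pkg_power / pkg_area)       # 3.75 W/cm^2: total power fell, but density rose almost 40x

Even if the total power drops, the watts per square centimeter that the package and heat sink must handle go up sharply.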

But interposers also provide some ability to spread out that heat. There are tradeoffs depending on the materials used and how easy those materials are to handle, according to Brandon Wang, engineering director for the silicon realization group at Cadence. He said that unlike 3D-ICs, where heat can be trapped inside a package with thinner chips (the thinner the chip, the less ability it has to transfer heat), thermal effects are much easier to manage in 2.5D.

Progress report from the field
Still, all of these issues are well understood and work is underway to solve them. Now the question is who’s going to jump on this technology first—and that question has been floating around for at least a year.

“Momentum is relative,” said Michael Buehler-Garcia, senior director of Calibre solutions marketing at Mentor Graphics. “Customers have done tapeouts, they’ve looked at the ROI of the solution and the cost of a target solution and they’ve had lots of discussions. From our tools standpoint, all the work is done, whether it’s 20nm, 16nm, FD-SOI or 2.5D or 3D. You’re still going to run the same golden list. The bigger unknown is with 3D with stress impact of the TSV.”

In fact, Open-Silicon and GlobalFoundries both have been showing off 2.5D test chips to customers during the past month, and eSilicon is developing its own versions. Just from a yield perspective, there are big gains from 2.5D because yield is always better on smaller die where analog, digital and memory don’t have to be crammed onto the same piece of silicon. From a power and performance perspective, those numbers are still being quantified.
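That yield argument can be made concrete with a minimal sketch in Python, using the classic Poisson yield model (Y = exp(-A*D0)) and hypothetical defect-density and die-size numbers. Each smaller die yields far better than one big die, and with known-good-die testing a defect scraps only a small die rather than the whole SoC, so less silicon is consumed per good system:

import math

D0 = 0.2          # assumed defect density, defects per cm^2 (hypothetical)
big_die = 6.0     # one monolithic SoC, cm^2 (hypothetical)
small_die = 2.0   # each of three partitioned die, cm^2 (hypothetical)

y_big = math.exp(-big_die * D0)      # ~0.30 yield for the monolithic die
y_small = math.exp(-small_die * D0)  # ~0.67 yield for each small die

# Silicon consumed per good system, ignoring interposer and assembly cost:
# monolithic: one defect anywhere scraps the whole die;
# 2.5D with known-good-die testing: only the defective small die is scrapped.
print(big_die / y_big)          # ~19.9 cm^2 of silicon per good monolithic system
print(3 * small_die / y_small)  # ~9.0 cm^2 per good three-die set

The numbers are invented, but the mechanism matches what the foundries and ASIC vendors describe: smaller die yield better, and testing each die before it goes onto the interposer keeps defective silicon out of the assembly.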

“The goal here is to put more memories off chip and close to the die,” said Steve Smith, senior director of product marketing for AMS verification at Synopsys. “That’s also what makes 2.5D so attractive.”

Stacking die also adds lots of opportunities across the EDA industry, from layout to test to networking of IP within the chips. “The more connections, the bigger the opportunity,” said Kurt Shuler, vice president of marketing at Arteris. “It’s simple when there are two die from the same company, but when you get off-chip communications it gets a lot more difficult. It also changes the value chain.”

Conclusions
Unlike 3D-ICs, which have been pushed out for several years, volume shipments of 2.5D chips are expected by the end of next year. Most industry observers and foundries believe that once the ramp-up begins, it will be rapid, probably based on 28nm digital technology mixed with older-node analog technology. That allows chipmakers to avoid multi-patterning while still reaping the power, performance and area benefits of stacked die.

The big question is when that ramp-up actually starts. Some companies are betting it will come quickly because of the difficulty of moving to 16/14nm finFETs and the absence of commercially viable EUV at that process node. But technology adoption timeframes are notoriously difficult to predict. This is an educated gamble, backed by solid engineering advances, standards efforts and support from a variety of market segments. And yet, timing is still the biggest unknown.



3 comments

david moloney says:

2.5D is already here

Mobile SoC providers have been stacking DDR devices for 5+ years using wire-bonding, with a trend eventually toward TSVs and wider DDR interfaces, assuming DDR vendors will supply KGD in new configurations.

Once you take memory out of the equation, the case for 2D or even 2.5D is weak due to the cost and heat-dissipation problem, which is already a big issue for planar SoCs.

Dev Gupta says:

The main problem behind the delays in volume implementation, and in fact even in the development of basic technology for 2.5D and 3D, is the fragmentation and compartmentalization of the effort as compared to doing it all under a single management (profit center) in a large IDM. After the current generation of advanced packaging technologies, e.g. flip chip, was developed at IDMs like Motorola & Intel, it took the OSATs almost a decade to understand and get a handle on chip-package interaction (stress effects) and keep up with materials changes. TSV-based 3D or even 2.5D is far too complex for teams without the bandwidth of design, device, TV, assembly etc. all under one roof. We do see a lot of local optimization, e.g. for individual process steps or tools, but a lot of holes in the overall flow & integration. Even the well-heeled foundries have underestimated the challenges and/or lack the expertise to address the right design space. At conferences & panel discussions, instead of facing the reality that these smaller/inexperienced companies are out of their depth, the same old hype keeps getting dished out and the schedule gets pushed out.

Of Memes and Memory and Moore’s Law | Martin Falatic’s Techno Blog says:

[…] there’s the problem of interfacing with this hypothetical 2.5D chip stack or 3D device… the micro-SD card above uses anywhere from one to four pins total for its […]
