Is The Stacked Die Ecosystem Stagnating?

First of two parts: By most accounts, the stacked die ecosystem is status quo, with not much happening in the past year.

It is now widely agreed that not much has been happening in terms of adoption of 2.5D interposer-based and 3D ICs.

“It seems like everyone is still at the starting line waiting for the race to begin,” said Javier DeLaCruz, senior director of engineering at eSilicon. “Interposer assembly and IP availability for effectively using the environment remain a challenge. There is very limited qualification data on the assembly process, and insufficient use of the interface IP that unlocks some of the benefits of the technology. Once the usage starts to ramp toward volume production, there will be clear benefits in performance and cost effectiveness.”

Until then, however, the focus is on announcements from Micron, Samsung and Hynix about the Hybrid Memory Cube.

“This is a very essential step forward for the value proposition because these memory cubes are, of course, 3D ICs,” said Herb Reiter, president of eda2asic Consulting and director of 3D IC programs at Si2. “They have a logic layer on the bottom and multiple memory layers on top, and can be brought very close to the processor. Also, if you look at the NVIDIA Pascal design (http://blogs.nvidia.com/blog/2014/03/25/gpu-roadmap-pascal/), they have a GPU in the middle and four Hynix memory cubes around the processor.”

He noted that at last week’s 3D ASIP Conference, Alok Gupta, principal engineer at NVIDIA, said the company was stuck at a bandwidth between its GPUs and memories of about 370 Gbytes per second. “With the new design with the memory cubes, they are getting to 1 terabyte per second — roughly a 3X jump in bandwidth for the graphics capabilities they are gaining with Pascal. Most importantly, they are not only tripling the bandwidth, they are doing it while cutting the power to a third, because the memory is much closer to the processor. They are not wasting the energy in the I/Os. They are getting a double benefit: much better bandwidth and much better graphics in 2016 when volume production ramps up, and at the same time power savings that will reduce the system cost because it doesn’t need a fan or a heat sink. They are improving performance and reducing power at the same time. That’s very compelling, and just an example of what these memory cubes can do to tear down the memory wall every system designer is swearing about.”
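
A quick back-of-the-envelope check of those numbers, as a minimal sketch: the bandwidth figures come from the quote above, while the one-third energy-per-bit ratio is the article’s claim, applied here as an assumption.

```python
# Sanity check of the quoted Pascal bandwidth/power figures.
# Bandwidth numbers are from the quote; the 1/3 energy-per-bit
# ratio is the article's claim, used here as an assumption.

current_bw = 370     # GB/s, quoted GPU-to-memory bandwidth today
stacked_bw = 1000    # GB/s, quoted target with stacked memory cubes

gain = stacked_bw / current_bw
print(f"Bandwidth gain: {gain:.1f}x")          # ~2.7x, i.e. "roughly 3X"

# If near-memory I/O cuts energy per bit to a third, total interface
# power stays roughly flat even as bandwidth nearly triples:
rel_power = gain * (1 / 3)
print(f"Relative interface power: {rel_power:.2f}x")  # ~0.90x of today's
```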

The leading assembly houses (Amkor, ASE, SPIL, STATS ChipPAC and Powertech) are in various stages of readiness, but they essentially are ready to go. These companies are under great pressure because before a fabless company commits a significant program to a 2.5D or 3D technology, it builds a number of evaluation units. The assembly houses have to be sure they can manufacture what the fabless companies will design.

Intel also has thrown its hat into the stacked die ring with its silicon bridge technology. Intel’s approach pairs a CPU and a memory with a bridge that it sees as a cheaper solution than silicon interposers, according to Mark Bohr, senior fellow and director of process architecture and integration at Intel. “Whether you are talking about 2.5D or 3D, you are adding extra cost.”

The company is said to be developing a monolithic memory integration scheme similar to Samsung’s, showing a commitment to the 3D concept in monolithic form.

Kurt Shuler, vice president of marketing at Arteris, has seen activity on the manufacturing side, with fabs testing the physical implications of vias between two die, trying to characterize the process and make it easier to create these SoCs. But they still are wrestling with power and with physical expansion and contraction issues.

How long it will take for them to nail down that process is anyone’s guess, and there are business issues that still need to be worked out. “If you are a memory provider and you’re selling DRAMs, you package them so you’re pretty sure that when you send them somewhere they’re not going to crack and they’re going to work,” Shuler said. “But now [with 2.5D/3D] you’re going to ship die to some other company, and they are going to be stacked on top of your customer’s die. Who is responsible for what? Do the major foundries want to take on that responsibility? You would think they would be the obvious parties to do that given the volumes going through the foundries, and that would be a nice channel for the memory guys.”

Shuler noted the initial thinking was that third-party packaging companies would become experts in this area, but it requires too much capital and the assumption of too much liability for them to deal with. At this point, TSMC seems to be the best choice.

Known good die issues persist
Steve Pateras, product marketing director for test at Mentor Graphics, agreed there are some big question marks on the manufacturing side, but he added there’s no real compelling event for logic-on-logic stacking. “Technology solutions are in place, but we’re just not seeing the pull.”

However, what is continually being discussed is the issue of known good die. “Even before 3D, the known good die discussion had been going on,” Pateras said. “How much testing do you do at the wafer level versus at the package, whether it be 2.5D or 3D packages? The sensitivities are somewhat different at 3D because you are stacking multiple die, so the total yield issue is more important and the cost of the package is more important. There’s definitely a push toward going to known good die in 3D-IC. Spending more time at the wafer level becomes more compelling. When people actually start doing 3D you will definitely see a move away from ‘probably good die’ to ‘known good die.’ We’re already seeing a move toward known good die for specific application areas anyway, despite the lack of 3D (automotive in particular). There’s a much greater thrust on quality now for the automotive sector, which is exploding.”

He noted that the key metric is defects per million. “You can argue that can be achieved at the package level, but there are still cost benefits of doing more of that quality testing at the wafer level. In a way, it’s sort of clearing the way for 3D when it becomes more prevalent, because people already are leaning toward higher-quality testing—better fault models, cell-aware techniques, doing more testing at wafer as a way to get better quality. We are seeing more interest in testing I/O at wafer, using embedded techniques for testing leakage, I/O defects and other defects you’d otherwise only discover at final package. There’s definitely a trend in that direction, and automotive/safety-critical in general is pushing that and will make it available more easily when 3D becomes a reality.”
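
The stack-yield arithmetic behind the known good die push is easy to see. Below is a minimal sketch of how untested die compound into stack-level yield loss; the 95% per-die yield is an illustrative assumption, not industry data.

```python
# Why known good die matter more once you stack: with independent
# per-die yield y, an untested n-die stack yields y**n, and one bad
# die scraps the whole (more expensive) assembled package.

def stack_yield(per_die_yield: float, n_die: int) -> float:
    """Probability that all n_die in a stack are good, assuming
    independent defects and no wafer-level screening."""
    return per_die_yield ** n_die

for n in (1, 2, 4, 8):
    print(f"{n}-die stack at 95% per-die yield: {stack_yield(0.95, n):.1%}")
# 1 -> 95.0%, 2 -> 90.2%, 4 -> 81.5%, 8 -> 66.3%
# Screening out bad die at wafer sort ("known good die") recovers most
# of that loss, at the cost of more tester time per wafer.
```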

And for design companies such as Open-Silicon, 2.5D and 3D could be a reality tomorrow because the company already has completed implementations. It even showed off a 2.5D demo chip at the last ARM TechCon, according to Radhakrishnan Pasirajan, vice president of silicon engineering at Open-Silicon.

“If you look at the ecosystem for building 2.5D, we are ready to deploy that for any customer design that would demand a 2.5D implementation,” Pasirajan said. “So far, from the customer side there is no specific request and there is no requirement even though we have the capabilities. We have not done a customer implementation, but we have spent more than a year to come up with this solution completely designed from the front-end architecture onward, all the way to the packaging and building a demo board and demonstrating the concept. That involves mainly the die design, specifically the known good die implementation, to make sure that the die has the proper microbumps so that it sits on the interposer, and the interposer design to take care of the inter-die communication for a given data rate, and then planning the TSVs to make sure that the interposer eventually connects to the equivalent of the die bumps. Then it goes to the packaging.”

Each of these pieces of the puzzle has been solved and implemented in his group, he added.

A complex task
Implementation needs to be broken into at least three pieces. The first is the known good die implementation: making sure the die is indeed good. Here, the necessary hooks for testing that die at the wafer level need to be implemented so that when the die comes off the wafer it can be verified as a good die. Second, the interposer must be designed in such a way that the die-to-die interface is accounted for.

“When you do this, you don’t design a die the way it eventually would be packaged on its own,” Pasirajan said. “All the die-to-die interaction needs to be dealt with a little bit differently. Apart from those connections that go out from the die to the package through the TSVs of the interposer, there are additional connections that need to go between the two dies to make sure the dies communicate. This is done with custom I/Os. The third part is putting these dies on the interposer and having the ability to test it at that level when we deliver it in the package. Still, we need package-level testing and die-level testing. These are the requirements when we go and do a total 2.5D solution.”
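
Pasirajan’s breakdown can be pictured as a bookkeeping problem: every microbump pad on each die must end up assigned either to a die-to-die link on the interposer or to a package connection through a TSV. The toy model below sketches that planning step; the die names, pad lists and check are invented for illustration and are not Open-Silicon’s actual flow.

```python
# Toy interposer connection plan: each die pad is routed either
# die-to-die across the interposer or down to the package via a TSV.
# All names and pad lists here are hypothetical.

die_pads = {
    "soc":  ["d2d_tx0", "d2d_tx1", "pcie0", "vdd"],
    "dram": ["d2d_rx0", "d2d_rx1", "vdd"],
}

die_to_die = [  # (die_a, pad_a, die_b, pad_b) links on the interposer
    ("soc", "d2d_tx0", "dram", "d2d_rx0"),
    ("soc", "d2d_tx1", "dram", "d2d_rx1"),
]
tsv_to_package = [("soc", "pcie0"), ("soc", "vdd"), ("dram", "vdd")]

# Verify the plan covers every pad on every die exactly once.
routed = {(d1, p1) for d1, p1, _, _ in die_to_die}
routed |= {(d2, p2) for _, _, d2, p2 in die_to_die}
routed |= set(tsv_to_package)
all_pads = {(die, pad) for die, pads in die_pads.items() for pad in pads}

assert routed == all_pads, f"unrouted or extra pads: {routed ^ all_pads}"
print(f"{len(die_to_die)} die-to-die links, "
      f"{len(tsv_to_package)} TSV/package connections, all pads covered")
```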

Open-Silicon believes 3D technology is better suited to homogeneous applications. But in the ASIC world, where engineering teams want to integrate RF and analog with digital, and with Moore’s Law slowing, the company is betting 2.5D is the best technology for integrating heterogeneous chips.

Unanswered questions
So why aren’t there more designs using stacked die approaches? The answer is that it is happening in pockets. For the market to really take off vertically, though, it has to be far more widespread.

“The technology needs to be seen as a major compaction technology and also as a reliable solution,” said Pasirajan. “Largely you’re looking at people who want to build a solution for an ASIC as a part of a system — a single-chip solution where everything gets put inside the die. When the die compaction reaches some limit, that is when people will end up using 2.5D.”

Intel’s Bohr sees a potential tipping point in the low-power mobile space with technologies such as memory cubes and high-bandwidth memory, as those are coming to market much faster.

And Mentor’s Pateras predicts that when the semiconductor industry pushes into lower nodes, it may be that stacking technology — 2.5D/interposer in particular — is more attractive because there will be higher demand to mix technologies from different process nodes in a single product. “The cost of yielding at these lower nodes is not scaling, so it just makes sense to put the highly critical functionality in the lower nodes and keep everything else at higher nodes.”
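
A rough sketch of the economics Pateras describes, using a simple Poisson yield model (yield = exp(-defect density × area)). The wafer costs and defect densities below are made-up assumptions, not foundry data.

```python
# Illustrative die-cost comparison: one large leading-edge die versus
# splitting out only the critical logic and keeping the rest at a
# mature node. All cost and defect numbers are invented.
import math

def die_cost(area_cm2, wafer_cost, defect_density, wafer_area_cm2=700.0):
    """Cost per good die: wafer cost spread over gross die, divided by
    Poisson yield exp(-D * A)."""
    gross_die = wafer_area_cm2 / area_cm2
    yield_ = math.exp(-defect_density * area_cm2)
    return wafer_cost / (gross_die * yield_)

# A monolithic 2 cm^2 die entirely at an expensive leading-edge node...
monolithic = die_cost(2.0, wafer_cost=8000, defect_density=0.5)

# ...versus 0.5 cm^2 of critical logic at the leading edge plus
# 1.5 cm^2 at a cheaper, higher-yielding node (interposer cost ignored).
split = (die_cost(0.5, wafer_cost=8000, defect_density=0.5) +
         die_cost(1.5, wafer_cost=3000, defect_density=0.2))

print(f"monolithic: ${monolithic:.2f}  split: ${split:.2f}")
# With these assumptions the split silicon is far cheaper; the real
# question is whether the gap covers interposer and assembly costs.
```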

Once the 16/14nm and 10nm nodes become more prevalent, he expects to see some stacking with 28nm devices. But there also is an entire design ecosystem that needs to embrace 2.5D/3D approaches. That will be addressed in part two of this series.

Additional resources:
Si2 keynote from last week’s 3D IC conference.

Open-Silicon paper presented at ARM TechCon 2014.



2 comments

Françoise von Trapp says:

Ann, while I appreciate your in-depth reporting and the opinions of your interviewees, I don’t believe your title really captures the truth about the progress we’ve made in the last year. Many believe that anyone not in the game now will lose the leading-edge position. The fact that Samsung, Hynix, nVidia, Micron and even Intel have all announced products that integrate 3D TSVs this year, with plans to go into production in 2015 and 2016, shows tremendous progress. At 3D ASIP, spirits were high because these high-end computing applications will realize higher profit margins, and so the sentiment is that 3D has arrived. Maybe it seems like things have stagnated because 3D hasn’t reached consumer products yet, but it may never get there. That doesn’t mean there won’t be plenty of opportunity for decent volumes in data center, networking, and other HPC applications. From where I sit, and from the folks I talk to, interposer and 3D integration is well underway and just going to get bigger. Happy Holidays!

Bill Martin says:

During the 3D ASIP conference, several presenters discussed the types of designs that could utilize and benefit from 2.5D. I believe a few summaries from Francoise von Trapp, Paul McLellan and Herb Reiter have been posted.

Zafer Kutlu of Global Foundries identified three types of products for 2.5D integration:

1. cost optimization where large die are split into multiple die to significantly improve yields

2. functional optimization where various blocks are processed in the ‘best fit’ node for function and cost (A/MS blocks, memory, etc.)

3. logic/memory integration to improve bandwidth and reduce power. Memory was mentioned several times in this (Semi Eng) article.

Memory, due to increasing bandwidth requirements coupled with power issues, already forced a large consortium (Hybrid Memory Cube or HMC) to form and collectively chart new products based on complex package integration.

Each new silicon node causes additional design issues for A/MS and RF blocks; designing in these new, smaller nodes is resource-intensive, expensive and risky. So the economics eventually will cause these blocks to stagnate in older, larger nodes.

‘Plain’ logic gates can continue to scale, but even these are starting to show an increase in cost, contrary to our expectations from Moore’s Law. Double/triple patterning, FinFETs, etc., all solve issues that arise with continued scaling, but at significant cost. These costs require much larger product sales to create a positive return on investment. How many products can afford this cost structure?

Now add the various MEMS functions that are required and have been used in products for 5 to 10 years. These will never be integrated into a homogeneous silicon solution.

In my mind, the adoption has already started. Similar to many other changes: slow at the onset, and then a tipping point is achieved where it is SOP (sorry for the pun).
