Improving Yield Of 2.5D Designs

Progress is being made on the packaging side of 2.5D and 3D designs, but improving 2.5D yield remains a work in progress.

Popularity

While progress is being made on the packaging side of 2.5D design, more needs to be resolved when it comes to improving yields.

Proponents of 2.5D present compelling benefits. Arif Rahman, a product architect at Altera, noted that the industry trend of silicon convergence is leading to multiple technologies being integrated into single-chip solutions. “2.5D/3D integration has multiple advantages, including greater bandwidth, higher performance, lower latency and reduced board area. If you look at FPGAs, Altera is focusing on integrating multiple heterogeneous technologies together in a single 2.5D/3D device — including FPGA with memory, analog, microprocessor or an ASIC. If you look at devices like Micron’s 3D Hybrid Memory Cube, traditional memory solutions (DDR4, GDDR5) are running out of steam, with no road map for DDR4 or GDDR5. Memory or system solutions leveraging 2.5D/3D technology are among the approaches to deal with future shortcomings.”

These devices will be found in various forms of heterogeneous integration for networking, military, computing, and broadcast applications, he said. Also, because performance of future 200G, 400G wireline, high-performance computing, and radar applications will be severely limited by memory bandwidth, integration of high-bandwidth memory using 2.5D/3D technologies can alleviate this performance bottleneck.

But the fact remains that a significant amount of work must still be done across the ecosystem before most companies will consider the cost acceptable and before yield issues are resolved.

According to Herb Reiter, president of EDA2ASIC Consulting, with the exception of memories, the majority of interest in 2.5D/3D is in putting die side-by-side onto an interposer. That seems more manageable than 3D, which would require “a lot of standards, a lot of design tools, and a number of cooling measures that we don’t have today. 2.5D is happening, as I see it. It clearly is interesting and accepted for two reasons. First, it’s technically compelling: people realize that PC board densities are really not meeting requirements, especially for mobile devices. Second, 450mm wafers are not looking that compelling right now. Even worse, EUV is late and will probably require quadruple patterning if we want to go down in feature size and achieve reasonable yields.”

To make 2.5D a reality, however, costs still need to come down, which means increasing manufacturing volumes and improving manufacturing processes.

On the yield front, one issue that has been identified and is garnering a significant amount of interest on the research side of things is how to test the interposer to guarantee a known good interposer. This is a difficult task because the interposers don’t contain active circuitry. “It’s only wires,” Reiter pointed out, “so you cannot power this interposer and see if the circuitry works and conclude that it looks okay. For example, the Xilinx interposers have 40,000 point-to-point connections. Testing every individual wire would be a nightmare.” Along these lines, he suggested that interposers may need to be designed for testability and tested at the wafer stage.
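Reiter’s observation points toward a structural, rather than functional, test. If both ends of every interposer net are reachable through design-for-test structures (for example, boundary-scan cells in the die mounted on top), a classic interconnect-test technique from board test applies: drive each net with the bits of a unique binary code, so opens and shorts reveal themselves in roughly log2(N) parallel patterns instead of N individual probe touchdowns. The sketch below is illustrative only; it assumes that kind of access exists, and it is not a description of how Xilinx actually tests its interposers.

```python
from math import ceil, log2

def counting_sequence_vectors(num_nets: int) -> list[list[int]]:
    """Build parallel test vectors for interconnect (opens/shorts) testing.

    Net i is driven with the bits of a unique binary code, so every net
    carries a distinct signature across ceil(log2(N)) patterns. Two shorted
    nets show identical signatures, and a net stuck open reads all-0 or
    all-1, which matches no valid code (the +2 reserves those two codes).
    """
    bits = ceil(log2(num_nets + 2))  # reserve the all-0 and all-1 codes
    return [[((i + 1) >> p) & 1 for i in range(num_nets)]
            for p in range(bits)]

if __name__ == "__main__":
    n = 40_000  # point-to-point connections cited for the Xilinx interposer
    print(f"{len(counting_sequence_vectors(n))} patterns cover {n} nets")  # 16
```

For 40,000 nets, 16 parallel patterns suffice, which is why designing the interposer path for testability is so much cheaper than probing every wire.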

Stephen Pateras, product marketing director for silicon test products at Mentor Graphics, agreed that, whether for 2.5D or 3D, the only interest at this time is in stacked memory because the ROI still isn’t there for most companies. “It’s a yield issue. We’re seeing 2.5D because of yield. When you get to the really large SoCs, you can start improving yield if you break it down to smaller slices, as Xilinx does, for example.”
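Pateras’ slicing argument can be illustrated with a standard negative-binomial defect model. The defect density and die areas below are hypothetical, and assembly cost and yield are ignored; the point is only the shape of the tradeoff when one large die is broken into known-good slices.

```python
def die_yield(area_cm2: float, d0: float, alpha: float = 3.0) -> float:
    """Negative-binomial yield model: Y = (1 + A*D0/alpha)^-alpha."""
    return (1 + area_cm2 * d0 / alpha) ** -alpha

d0 = 0.5        # defects per cm^2 (hypothetical)
big_area = 6.0  # one large monolithic SoC, in cm^2 (hypothetical)
slices = 4

y_big = die_yield(big_area, d0)
y_slice = die_yield(big_area / slices, d0)

# Silicon consumed per good unit is area / yield; slices are screened
# individually, so only bad slices get scrapped, not the whole die.
print(f"monolithic: Y={y_big:.1%}, {big_area / y_big:.1f} cm^2 per good unit")
print(f"{slices} slices: Y={y_slice:.1%} each, "
      f"{big_area / y_slice:.1f} cm^2 per good unit")
```

With these numbers the monolithic die yields 12.5% while each slice yields 51.2%, cutting the silicon burned per good unit by roughly 4X, before accounting for the interposer and assembly costs that eat into that gain.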

When we do get to the point that the ROI makes sense, improving the yield of 2.5D designs mainly comes down to economics, he said. “Once the economics are there for doing 2.5D or 3D, then it’s a question of how much money do I spend ensuring the individual die are good, and the cost of doing that versus the cost of throwing away 2.5D packages that are bad. It’s an economic issue once again, even when it comes to test and packaging. I could spend more time testing my bare die before I put them on a 2.5D interposer so I have a better probability of having a good subsystem, or I could not spend more time testing these die and reduce the probability of a good 2.5D subsystem but throw away the bad parts. It’s a numbers game.”

The current trend is to spend more time ensuring the die are good because it’s expensive to throw them away. “Even if you have two or three on an interposer, if one is bad and you are throwing away one or two good die, it’s very expensive. There seems to be a trend to want to test more of the wafer when you are doing that kind of 2.5D integrated packaging,” Pateras explained.
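That numbers game translates directly into an expected-cost comparison. The model below, along with all of its inputs (die cost, yields, screen coverage), is hypothetical, but it captures the tradeoff Pateras describes: bad die that escape the known-good-die screen still get assembled and scrap the whole package.

```python
def cost_per_good_assembly(n_die, die_cost, die_yield, assembly_cost,
                           kgd_test_cost=0.0, test_coverage=0.0):
    """Expected cost of one good 2.5D assembly (hypothetical model)."""
    escapes = (1 - die_yield) * (1 - test_coverage)  # bad die that pass the screen
    pass_rate = die_yield + escapes                  # fraction of die placed
    placed_yield = die_yield / pass_rate             # quality of placed die
    assembly_yield = placed_yield ** n_die           # one bad die scraps the package
    cost_per_placed_die = (die_cost + kgd_test_cost) / pass_rate
    return (n_die * cost_per_placed_die + assembly_cost) / assembly_yield

# Three $50 die at 90% yield on a $30 assembly (all numbers made up).
no_screen = cost_per_good_assembly(3, 50, 0.90, assembly_cost=30)
with_kgd  = cost_per_good_assembly(3, 50, 0.90, assembly_cost=30,
                                   kgd_test_cost=5, test_coverage=0.99)
print(f"no KGD screen:   ${no_screen:.0f} per good assembly")
print(f"with KGD screen: ${with_kgd:.0f} per good assembly")
```

Here the $5 screen pays for itself ($214 versus $247 per good assembly), and the gap widens as die get more expensive or as more die share one interposer.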

Improving yield
From a test perspective, how can yield be improved? There is the issue of known good die, mentioned above, in addition to improving the inherent yield of that die coming out of the manufacturing process. Both ultimately will help with the cost of that 2.5D package.

One approach is to screen the parts to make sure they are good, but even there the matter isn’t settled. “Historically, wafer test was not as rigorous a test as was done at package or final test for a number of reasons. Often it was simply a question of access. At wafer level, it’s not often as easy to access all of the I/O or ground and power pins, combined with the fact that we do need to test again at the package anyway, so let’s not spend as much time testing at both sites. Let’s keep some of the testing at the final package,” Pateras said.

The orthogonal activity in making 2.5D more economical is to improve yield in the first place. Here, fail data can be used to analyze defects and their locations, understand where the systematic issues are, and then drive that learning back into the design process (the DFM rules) to improve overall yield over the long term, Pateras added.
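In practice that feedback loop starts with something as mundane as binning diagnosed fail locations to separate systematic defects from random ones. The record format below is invented for illustration; a real flow would start from volume scan-diagnosis results.

```python
from collections import Counter

# Hypothetical diagnosis records: (die_x, die_y, suspect_layer) per failing die.
fails = [
    (12, 7, "M2"), (13, 7, "M2"), (12, 8, "M2"),  # tight cluster: systematic?
    (3, 22, "M5"), (40, 1, "V1"), (12, 7, "M2"),
]

by_layer = Counter(layer for _, _, layer in fails)
by_region = Counter(((x // 8, y // 8), layer) for x, y, layer in fails)

print("fails per layer:", dict(by_layer))
# A region/layer bin that stands out suggests a systematic mechanism worth
# feeding back into the DFM rules, rather than random defectivity.
for (region, layer), n in by_region.most_common(3):
    print(f"region {region}, layer {layer}: {n} fails")
```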

Geir Eide, DFT product marketing manager at Mentor Graphics, noted it is important to make designs that yield well, but when there is a problem there needs to be a way to understand its source. “That’s something that is becoming a more complex task to understand, not just because of 2.5D/3D but also in terms of how designs tend to grow in general — even conventional designs — and also how at each new manufacturing process you tend to get additional new defect mechanisms.”

The test mechanisms mentioned above become increasingly mandatory as the industry moves to 2.5D; in the past, they were optional. Fortunately, they lend themselves very well to producing data that can be analyzed to understand the root causes of yield loss.

“As we see that more people tend to use these test techniques more, we also see a larger portion of [users] tend to leverage the capability that these test structures and test mechanisms have to also produce fail data that can be analyzed. So they would go the extra step, rather than just using these test techniques as a screening mechanism, also collect fail data to be able to do the analysis either to aid the foundry in resolving yield or identify issues in the design that might be yield limiting,” Eide observed.

Selective testing of packaging pads
When it comes to approaches for improving the yield of 2.5D designs, the first generation of widely used 2.5D chips is going to be homogeneous, which is what Xilinx does today. “You can spend a lot of money to get known good die because you’re going to make that one same die over and over again,” said Javier DeLaCruz, senior director of engineering at eSilicon. There also will be heterogeneous 2.5D designs, with the main winner there likely to be either an FPGA or an ASIC combined with memory.

DeLaCruz believes the memory should be easier to manage because it should come in as known good, and the stacked architecture should actually improve yield because repair resources can be shared across the memory layers. The stack would then be mounted onto a CMOS logic layer with very little else on it, so yield for that layer should be very high, too.
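The value of shared repair can be sketched with a toy fault model. Assume faulty rows per layer follow a Poisson distribution (the numbers here are hypothetical, not vendor data); pooling spares across the stack then rescues fault patterns that per-layer repair cannot.

```python
import random
from math import exp

def stack_yield(layers=4, spares_per_layer=4, mean_faults=2.0, trials=100_000):
    """Monte Carlo yield of a memory stack: per-layer vs. shared repair."""
    def poisson(lam):
        # Knuth's method; adequate for small lambda
        threshold, k, p = exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                return k
            k += 1

    per_layer_ok = pooled_ok = 0
    for _ in range(trials):
        faults = [poisson(mean_faults) for _ in range(layers)]
        per_layer_ok += all(f <= spares_per_layer for f in faults)  # each layer on its own
        pooled_ok += sum(faults) <= layers * spares_per_layer       # spares shared stack-wide
    return per_layer_ok / trials, pooled_ok / trials

independent, shared = stack_yield()
print(f"per-layer repair: {independent:.1%}   shared repair: {shared:.1%}")
```

With these inputs the stack yields about 80% when every layer must repair itself, but better than 99% when the same total spares are shared, which is the effect DeLaCruz is pointing to.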

“For the ASIC or FPGA that it would talk to, what you need is a strategy where you’re not going to be able to touch down on every pad,” DeLaCruz explained. “Even if you touch down on every pad and use the same kind of architecture, you really limit yourself because of these memories. A high-bandwidth memory would have roughly 1,300 signals just in that interface, which exceeds what you have on the tester. You can do some advanced things on the tester but you really don’t need that. What you would do instead is have a much more advanced test compression, where you touch down only on a select number of pads on the die and have very good coverage, though not as good coverage as you had when you were able to contact every pad. It’s a tradeoff, but the tradeoff should be very small.”
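A back-of-the-envelope model shows why touching fewer pads can still work when paired with heavier test compression. With an on-die decompressor, a handful of external channels can feed many internal scan chains, so shift time stays manageable; the sizing below is hypothetical and ignores the modest pattern-count and coverage penalties that compression and reduced pad access introduce.

```python
def scan_test_time(flops, patterns, channels, compression=1, shift_mhz=50):
    """Rough scan shift time: internal chains = channels * compression,
    so chain length shrinks as either one grows (hypothetical sizing)."""
    chains = channels * compression
    chain_len = -(-flops // chains)                  # ceiling division
    return patterns * chain_len / (shift_mhz * 1e6)  # seconds

FLOPS, PATTERNS = 5_000_000, 10_000

full = scan_test_time(FLOPS, PATTERNS, channels=64)                  # many pads probed
rpct = scan_test_time(FLOPS, PATTERNS, channels=8, compression=100)  # few pads, more compression
print(f"64 channels, no compression:  {full:.2f} s")
print(f"8 channels, 100x compression: {rpct:.2f} s")
```

In this sizing, shifting through 64 directly probed channels takes about 15.6 seconds per insertion, while eight channels with 100x compression takes about 1.3 seconds, which is why the coverage tradeoff DeLaCruz describes can stay small.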

With the interposer approach, expensive memory and an expensive ASIC are placed on an interposer that is not known to be good. Approaches being considered to address this include adding active circuitry to the interposer, which makes it quite expensive, as well as optically inspecting the interposer to see whether everything is good. Further, there are efforts underway in the industry to develop novel IP that allows testing at as low a cost as possible.

Whichever method is chosen, it needs to be at cost parity with equivalent solutions. As such, eSilicon is engaged in active research on how to cut the testing problem down to size, recognizing that both sides of the interposer can’t be probed.

At the end of the day, bringing costs down enough to make 2.5D a realistic option at cost parity with other solutions will require the entire ecosystem to work together.

“The biggest barrier is that to fully leverage this technology you need to make a custom piece of silicon that only works on an interposer: very high pin count, very fine pitch,” DeLaCruz said. “To commit yourself to an ASIC that will only work in a 2.5D structure that’s not proven yet is a big, big leap because of reliability data and whatever else, so we’ve got to clear all the other hurdles such that the last thing left is to make that ASIC. The volume initially will be dominated by very few players, and everybody else will be on the sidelines watching. Then, a couple of years later, they’re all going to imitate. That’s really the only way this will play out.”


