2.5D, FO-WLP Issues Come Into Focus

Advanced packaging goes mainstream, creating ripples throughout the back-end of the semiconductor industry.


Advanced packaging is beginning to take off after years of hype, spurred by 2.5D implementations in high-performance markets and fan-out wafer-level packaging for a wide array of applications.

There are now more players viewing packaging as another frontier driving innovation. But perhaps a more telling sign is that large foundries in Taiwan have begun offering packaging services to customers, blurring the line between foundries and OSATs.

“This goes from one part of a foundry to another part of the factory, or to an OSAT,” said William Chen, ASE fellow and senior technical advisor. “So if you do the packaging at ASE, TSMC ships the wafer to ASE, which then has to bring yield up very fast because it is so expensive. That’s true for fan-out, but it’s particularly true for 2.5D. You need a very good yield. Otherwise you lose with this kind of packaging approach.”

Cost is the key metric to watch in advanced packaging, and that cost is determined by yield, the time it takes to achieve that yield, the thickness of various dies in the package, and the type of package used. So far, relationships between foundries and OSATs—and between customers and each of them—are not well defined when it comes to fan-outs and 2.5D. While the packaging technologies and approaches are understood well enough, the roles of all the players and the optimal methodologies for building and testing these packages are rather murky.

“From the test side, as a part of this ‘new offering’ from the very high-end foundries, we’re starting to see some components that are traditionally not part of the wafer processing world,” observed Joey Tun, principal market development manager, semiconductor test at National Instruments. “Some of these include integrated passive devices, and passives on glass. In other words, the foundries are saying to customers, ‘Not only can I make a die for you in this large 2.5D or 3D structure, which may contain logic, a bunch of memory stacks, among other things, but I can also provide you with peripherals that would have traditionally been on a PCB, such as capacitors.’”

One of the advantages of advanced packaging is that it allows a very customized solution, because even starting with the same logic, different memory configurations and communications approaches can radically alter the power/performance profile of the device.

“2.5D has multiple die on top of the substrate, in some cases,” said Ram Praturu, director of test product technology marketing at STATS ChipPAC. “They have passives. Some of these packages are molded completely, and some of them are not molded.”

This does present some challenges, though, including the mechanical structure of the package itself. “You have passives on the package, and to pick up the parts we need to mechanically figure out which part of the die we can touch down and pick up, because there could be a controller and a memory on top with passives on the side,” Praturu said. “I may encapsulate just the die portion of it and leave the passives on the side, so we need to adjust the handles and we need to build a different kit. It’s a custom mechanical hardware design that we need to look at. From a mechanical standpoint we are definitely seeing some customization needs to be done for testing these packages.”

Fig. 1: Packaging solutions. Source: STATS ChipPAC

This also affects the test methodology, because these devices can be multi-chip and multi-die.

“What is present looks like a system-in-package where we not only have to test the whole system, but we have to test it in a system-level format and application mode — their application mode, specifically,” he said. “So we could do ATE tests that eliminate all the opens and shorts (basically the parametric testing), and some of the functional testing that we can get to. But to get to the actual application testing, both the die in sync and making sure that all the contacts are there, we are being asked to do another insertion that is a system-level test.”

Specifically, in the context of a stacked solution such as HBM, especially when integrating multiple die in a package, how do you test that everything is good?

“You’ve got known good die from your own SoC, you’re getting known good die from the DRAM vendor,” said Deepak Sabharwal, general manager of eSilicon’s IP products and services. “You put them on the interposer, assemble them using some technology, and now you start doing your checking. If something doesn’t work, how do you know where the problem is? Did the interposer break? Did the assembly process break, or did something happen to the SoC die or DRAM die?”

As part of this, engineering teams must devise methodologies that allow for efficient debug. “The HBM standard has defined the MISR (multiple input signature register) polynomial of what kind of traffic should be run between the SoC and the HBM DRAM to validate that a connection was good, so some tests have been defined. And all of those have to be incorporated into the PHY design,” said Sabharwal. “That needs to be rolled up back into the SoC—whatever is going to be driving real traffic on that bus. First, a training algorithm must be run using a certain kind of traffic to see if everything is working. If it’s not working, there could be a problem with the timing interface, or maybe there is a hard defect, a microbump that’s not working properly. You want to be able to isolate that failure and then correct it.”
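The MISR-based compression Sabharwal describes can be sketched in miniature. The register width, tap positions, and API below are illustrative assumptions for demonstration only, not the actual polynomial the HBM standard defines; the point is simply that the SoC and the DRAM each compress the same bus traffic into a signature, and the two signatures match only if every lane carried the traffic intact.

```python
# Toy MISR (multiple input signature register) sketch.
# Width and tap positions are illustrative, not the HBM-specified polynomial.
def misr_update(state, data, width=8, taps=(0, 2, 3, 4)):
    """Compress one parallel input word into the signature register."""
    feedback = 0
    for t in taps:
        feedback ^= (state >> t) & 1            # XOR the tapped state bits
    # Shift left, fold in the feedback bit, then XOR in the incoming word.
    state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state ^ (data & ((1 << width) - 1))

def signature(words):
    """Run a stream of bus words through the MISR and return the signature."""
    state = 0
    for w in words:
        state = misr_update(state, w)
    return state
```

If both ends of the link compute `signature()` over the same training traffic, equal signatures indicate the connection is good; a single corrupted word (say, a failing microbump flipping a bit) yields a mismatch, flagging the interface for further isolation.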

Flow issues
From the packaging design side, however, packaging engineers don’t make exceptions or design changes for testability. That’s not their focus.

“Their concern is whether they can do a flip chip, and then wire bond on top of it — basically a package-on-package on the 2.5D,” said Praturu. “Their criteria also includes how they can assemble the part, and what the constraints are so they don’t necessarily look at testability from the mechanical sense. They figure as long as there is some flat surface on top, test guys can grab it, pick and place it, and test it. That’s the bottom line for them. They don’t care what’s down the road at this point for package design.”

But this can cause problems when it comes to testing if they don’t at least have an understanding of how that product will be tested.

“As the die shrinks, and they start implementing multiple die, if there are multi-level die on the same substrate, and with passive components in between (such as a non-symmetrical top), as we progress and shrink the die, that’s when the problems start,” he said. “Right now, the reason we are having these packages is because the dies are still big, and we are putting them on top of each other in a system. That’s not an issue, but if we go to fan-out wafer-level packaging, where the product line is a PMIC (power management IC) with a controller, and you start putting not only the PMIC device along with some memory in the future, that’s when the issue will pop because PMICs are pretty tiny in size. This means I can’t have tiny steps on the top where I can grab hold of the device.”

These issues have been discussed before, but in the past they were somewhat distant concerns. As packaging takes root, they are becoming everyday reality for many engineers. Consider what needs to be done with 2.5D, for example.

“You need to ensure that the die is properly tested before it is packaged,” said Steve Pateras, product marketing director at Mentor Graphics. “Then you need to retest the die within the 2.5D package, and then of course test the interconnects between the bare die, which is either logic-to-logic or logic-to-memory, as well as the infrastructure needed to do that. This involves determining how to transport test information across the die. It’s a little bit easier in 2.5D, but generally you only have access to one test interface for the whole package, and you need to be able to access all the die through that one interface. This requires some kind of methodology, and if the die are from different sources you need to be sure that they are compatible. That’s where standardization becomes important. You cannot have different DFT approaches; they need to talk to each other and be compatible. That’s where the 1838 standard really comes into play.”

Specifically, he said the IEEE 1838 working group for 3D test is starting to converge on a full architecture and set of solutions.
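The single-interface access problem Pateras describes can be modeled with a toy serial chain, in the spirit of JTAG-style daisy-chaining: every die's test register sits on one shift path, so one package-level port reaches them all. The class names and register model here are hypothetical simplifications, not the actual IEEE 1838 architecture.

```python
# Toy model of one serial test port daisy-chained through several die.
# Names and structure are illustrative, not the IEEE 1838 architecture.
class DieRegister:
    """A die's test data register, modeled as a simple shift register."""
    def __init__(self, width):
        self.width = width
        self.bits = [0] * width

    def shift(self, bit_in):
        """Shift one bit in at the head; return the bit that falls out."""
        bit_out = self.bits[-1]
        self.bits = [bit_in] + self.bits[:-1]
        return bit_out

class TestChain:
    """All die share one serial interface; data shifts through every die."""
    def __init__(self, dies):
        self.dies = dies

    def shift(self, bit_in):
        for die in self.dies:      # each die's output feeds the next die
            bit_in = die.shift(bit_in)
        return bit_in

    def load(self, pattern):
        """Shift a full test pattern in, capturing whatever shifts out."""
        return [self.shift(b) for b in pattern]
```

Loading a pattern as long as the combined register widths fills every die through the single port, which is why die from different sources must agree on the shift protocol: an incompatible register on any one die breaks access to everything behind it.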

System-level test
Looking ahead, system-level test will become critical. And for companies that are more focused on production testing, the question of the day centers on partnering with the classic chip manufacturer to do 2.5D testing in production — both for system-level or custom tests and for burn-in.

“Most companies would like to move a lot of testing over to wafer sort,” said Karthik Ranganathan, director of engineering at Astronics. “Some of that is entirely successful, and structural testing does tend to move purely into wafer sort, but there’s a lot of system-level functionality that does not get covered in wafer sort. Companies approach this in different ways. Some prefer to approach this by doing the end-functionality system-level tests that we currently do on something like an applications processor or a CPU. Since they’ve integrated that apps processor or CPU along with an analog chip, and they’ve converted it into a SiP module, a regular module, or a SiP-type package, they end up doing a lot of system-level tests and burn-in on that package as opposed to the individual ICs.”

That testing also needs to be done over time, particularly as these devices are used in industrial and safety-critical markets, where breakdown of functionality needs to be well-understood statistically.

“You need to find a combination of things that cause failures,” said Kiki Ohayon, vice president of business development at Optimal+. “Right now we typically look for a single root cause, but it may be more complicated than that. You need to build a database of what happens to a product through its entire lifecycle. You may need to modify data in simulation for how a device is really performing. You also have to know how key components are deployed. So you need to collect all of the data, which is basically the device DNA, from every chip and every board, and create an index. This is not pass/fail. It’s smart pairing. If you pair this device with this device you can predict what the outcome will be and what is the best combination.”
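The "smart pairing" idea Ohayon describes can be sketched with a toy matcher: given one indexed measurement per unit (a stand-in for the device DNA collected from each chip and board), pair fast chips with slow boards so every assembled combination lands near a common predicted outcome. The additive outcome model, greedy matching strategy, and field names below are illustrative assumptions, not Optimal+'s actual method.

```python
# Toy "smart pairing" sketch: pair chips with boards using per-unit
# measurements so predicted outcomes are balanced. The additive model
# and greedy strategy are illustrative assumptions only.
def smart_pair(chips, boards):
    """Pair each chip with a board to balance the predicted outcome.

    chips, boards: dicts mapping unit id -> measured parameter.
    Returns a list of (chip_id, board_id, predicted_outcome) tuples.
    """
    chip_order = sorted(chips, key=chips.get)                   # slowest chips first
    board_order = sorted(boards, key=boards.get, reverse=True)  # fastest boards first
    pairs = []
    for c, b in zip(chip_order, board_order):
        predicted = chips[c] + boards[b]  # toy additive outcome model
        pairs.append((c, b, predicted))
    return pairs
```

Even this simple scheme shows the shift Ohayon describes: the collected data is not used for a pass/fail decision on each unit in isolation, but as an index for predicting how specific combinations of units will behave together.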

Conclusion
As far as other near-term challenges go, many in the industry feel the packaging issues with 2.5D are relatively stable, or perhaps solved, and fan-out wafer-level packaging is at least as far along, if not further. Full 3D packaging is another matter.

“This is the one that’s going to drive us all crazy,” Praturu said. “There are certain calculated risks the industry is taking, but because there’s not enough volume on 3D yet we’re able to get away with all these tests for now.”

Related Stories
Betting On Wafer-Level Fan-Outs
Chipmakers focus on packaging to reduce routing issues at 10nm, 7nm. Tool and methodology gaps remain.
Making 2.5D, Fan-Outs Cheaper
Standards, new materials and different approaches are under development to drive 2.5D, 3D-ICs and fan-outs into the mainstream.
2.5D Adds Test Challenges
Advanced packaging issues in testing interposers, TSVs.