Experts At The Table: Stacked Die And The Supply Chain

Last of three parts: Blurring the lines between packaging and manufacturing; progress on cross-disciplinary test and tools; issues in packaging, interconnect and test; who’s in charge and driving changes; why tools are lagging for creating TSVs; who owns what; debate over whether the supply chain will grow or shrink.


By Ed Sperling
Semiconductor Manufacturing & Design sat down to discuss the effects of stacking die on the supply chain with Stephen Pateras, production marketing director for silicon test at Mentor Graphics; Javier DeLaCruz, director of manufacturing technology at eSilicon; Colin Baldwin, director of marketing at Open-Silicon; Charles Woychik, director of marketing and technical analysis at Tessera; and Sashi Movva, strategic sourcing specialist at Qualcomm. What follows are excerpts of that conversation.

SMD: Are we starting to blur the lines between what’s a packaging house and what’s a fab with stacked die?
Woychik: Yes, and the blending of the IC fab and the packaging house is absolutely necessary to pull this off. The major fabs are bringing the two together. The foundries are bringing in the packaging guys because they need the packaging discipline to do assembly. How do you assemble it with other materials to make sure you get high yield? That’s going to drive cost. At the same time, you have to make sure the reliability requirements are met.
Movva: That’s not a novel concept. It’s been done in the industry, and it gives flexibility to the fabless companies, where you can mix and match different suppliers and different applications. What’s interesting about 2.5D is that it takes it to the next level. Now the foundries and OSATs (outsourced semiconductor assembly and test companies) are working more closely together. Characterization needs to be done up front. Standards need to be developed so both entities understand what the outputs and inputs are. On a conceptual level this exists. It just needs to be elevated to the next level.

SMD: As we start bridging what were previous silos, we will also need to start bridging the tools used by each. How far along are we?
Movva: It depends on the business model and where the partnering is happening. If there are certain processes done by the fabs, they can do the same processes for 2.5D and 3D, like micropillar bumping and stacking. That isn’t new. But if there are certain aspects done specifically by the fabs today and the handover puts them on the OSAT side, then those technologies will need to be developed by the OSATs. It depends on the business model and where the handover happens.
Woychik: The design tool guys already are working with the packaging houses. This is where the packaging guys develop test vehicles, and that information is used in the next generation of EDA tools, which is then used in the fab. That’s a classic case of the need for integration. We’re doing the packaging, but that is a key driver of these design guidelines, and it has an effect on the EDA tools used by the fabs to develop 3D solutions.
DeLaCruz: The EDA tools are close to where they need to be, but they can’t quite handle it as they currently exist. For example, if you’re doing physical design you generally can only load in one design rule set. If you’re going to design in 40G, you can’t also bring in a 130nm design rule set at the same time for a different chip. If they’re all the same technology, the tools can work with some minor modifications. If you bring in different technologies, they choke. That’s one problem. For simulation, there are no good models available for a TSV. There are so many different flavors of TSVs—you have small tungsten-filled ones, long copper-filled ones, conformally coated ones where they’re not fully filled with copper—and they all have different electrical properties. Once we have a standard, these tool vendors will modify their tools to include that. We’re not there, and they’re waiting for someone to emerge as a clear leader so they can figure out where to best spend their time.
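To see why the different TSV flavors DeLaCruz mentions behave so differently electrically, it helps to look at a first-order model of a copper-filled cylindrical TSV. The sketch below uses only the standard cylinder-resistance and coaxial-capacitor formulas; the geometry values are illustrative assumptions, not numbers cited by the panel, and a production model would also need the silicon depletion region, substrate coupling and frequency dependence.

```python
import math

# First-order electrical model of a copper-filled cylindrical TSV.
# Illustrative only: real TSV models must also capture the silicon
# depletion region, substrate coupling and frequency-dependent effects.

RHO_CU = 1.68e-8      # copper resistivity, ohm*m
EPS_0 = 8.854e-12     # vacuum permittivity, F/m
EPS_R_SIO2 = 3.9      # relative permittivity of the SiO2 liner

def tsv_resistance(radius_m, height_m, rho=RHO_CU):
    """DC resistance of a solid cylindrical via: R = rho * h / (pi * r^2)."""
    return rho * height_m / (math.pi * radius_m ** 2)

def tsv_liner_capacitance(radius_m, height_m, liner_m, eps_r=EPS_R_SIO2):
    """Oxide-liner capacitance, treating metal/liner/silicon as a coaxial
    capacitor: C = 2*pi*eps*h / ln((r + t_ox) / r)."""
    return 2 * math.pi * eps_r * EPS_0 * height_m / math.log(
        (radius_m + liner_m) / radius_m)

if __name__ == "__main__":
    # Assumed geometry: 5 um diameter, 50 um deep, 100 nm oxide liner.
    r, h, t_ox = 2.5e-6, 50e-6, 0.1e-6
    print(f"R ~ {tsv_resistance(r, h) * 1e3:.1f} mOhm")
    print(f"C ~ {tsv_liner_capacitance(r, h, t_ox) * 1e15:.0f} fF")
```

Changing the fill material, aspect ratio or liner thickness moves both numbers substantially, which is why a single agreed-upon model has been slow to emerge.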
Baldwin: Today if you have a 40 million-gate device, which could be a processor or some sort of complex ASIC device, you’re going to get a 1,000-page or 2,000-page data sheet. People are going to come through and say, ‘Here’s the product and here’s the data sheet, go implement this device.’ And you have three months. You can’t even read it. And you have to go through and integrate it. And you have board work around it, and you have to model it and simulate it. With 3D we have the potential to bring in processors and memory and analog. Does that bring in an era where we can get rid of these data sheets and build completely integrated systems that can be integrated into a larger system in a useful amount of time? There are issues with integration, cost and power, but the big one is schedule.

SMD: Are the test tools integrated with other tools?
Pateras: They’re separate worlds, other than for things like Verilog netlists and RTL and constraints. Aside from those, they’re really separate flows. The issue with test is you need to have control of all the various parts. If you don’t, that’s where standards come in because we don’t have a solution.

SMD: How do we deal with proximity effects like leakage, noise, electrostatic discharge and electromigration?
Movva: 3D proximity effects could be mechanical, electrical and thermal. There are areas where you could characterize and create tools. It’s not possible to characterize all combinations and features, so you’ll have to use simulation to create certain rules based on what is permissible.
DeLaCruz: From an ESD standpoint, the interconnects that are designed to run solely from chip to chip don’t need ESD protection. But that assumes you know how that chip will be used in every possible situation. That makes it very difficult. If there is an interconnect that goes chip to chip in one format, it might go out to the system level in another version. So now you need to raise it above the core voltage and put ESD protection on it. This is all going to waste power, and you will lose a lot of the benefit of the 3D architecture. As long as you know well ahead of time that these banks of I/O are not going to do anything but talk to another chip in my 3D or 2.5D architecture, then they don’t need higher voltage, buffers or ESD protection. From an electromigration standpoint in the TSV, that’s really going to come down to the reliability of the oxide or whatever else is put into the via layer. Right now most papers are related to a certain thickness of oxide, but we all know it’s pretty brittle and it could give us leakage problems. It’s not well understood at this point, and that’s part of the risk.
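DeLaCruz’s power argument can be made concrete with the standard dynamic-switching relation P = alpha * C * V^2 * f. The sketch below compares a bare die-to-die link driven at core voltage against the same signal routed through a full ESD-protected I/O pad at a higher rail. Every number is a round-figure assumption for illustration, not data from the panel.

```python
# Back-of-the-envelope comparison of per-signal switching power for a
# bare die-to-die link versus a conventional ESD-protected I/O.
# Dynamic power: P = alpha * C * V^2 * f. All values are illustrative
# assumptions, not measurements discussed by the panel.

def switching_power(alpha, cap_f, volts, freq_hz):
    """Average dynamic switching power of one signal, in watts."""
    return alpha * cap_f * volts ** 2 * freq_hz

ALPHA = 0.25   # assumed switching activity factor
FREQ = 1e9     # assumed 1 GHz-capable link

# Assumed loads: a microbump/TSV link vs. a pad with ESD structures.
die_to_die = switching_power(ALPHA, cap_f=50e-15, volts=0.9, freq_hz=FREQ)
full_io    = switching_power(ALPHA, cap_f=2e-12,  volts=1.8, freq_hz=FREQ)

print(f"die-to-die at core voltage : {die_to_die * 1e6:.1f} uW per signal")
print(f"ESD-protected I/O          : {full_io * 1e6:.1f} uW per signal")
print(f"penalty factor             : {full_io / die_to_die:.0f}x")
```

Even with different assumed capacitances and voltages the conclusion holds: forcing internal links up to pad voltage with full protection erases much of the power benefit of stacking.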

SMD: How about in test? Is there any focus on physical effects yet?
Pateras: We’re right now addressing it as if the defects we will have to deal with will be the same as in 2D. We’re just looking at accessibility and observability of the various components such as TSVs. Whether or not there are defect mechanisms that we would see with general interconnects is unknown.

SMD: And from a packaging side, the goal has always been the cheapest package. Will that change?
Woychik: When anyone hears ‘packaging,’ they still expect low cost. That will still be the case with stacked die. That’s also why it’s so important to address these packaging issues right now. A lot of people have focused on the fab level. There needs to be more focus at the packaging level to address assembly and yield, and also reliability. There are no showstoppers. It’s just a refinement of the technology. This is an extension of the work we’re already doing in packaging to drive a lot of these models. When you start building these structures and do the detailed metrology measurements, this produces good information for EDA tools.

SMD: How about an exchange of data between vendors in other areas?
DeLaCruz: Outside of memory, there is very little activity going on with die-to-die information standards. There’s no reason to multiplex signals or raise the voltage as you go from chip to chip. But there’s no I/O interface standard for chip-to-chip.
Woychik: But there are cases where it’s working and other cases where it’s not working. What you’re seeing is OEM drivers helping to develop the infrastructure. For example, Xilinx got together with TSMC, Ibiden and Amkor to pull off this solution. These OEMs are driving it, and you’ll see more and more of that happening. That’s the early stage of larger-scale integration that will have to happen to make this a reality.
Pateras: We definitely need some information to be provided if the various die are coming from multiple sources. If you look at a memory stack on logic, if you’re going to test that memory die with self-test on the logic die, you need to know the memory’s address space, its physical scrambling, redundancy and how it’s architected. In 2D, that kind of information is being provided by the memory vendors. In a 3D world, these frequently are different vendors. They’re not involved in that kind of information exchange right now. That will have to change.
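The kind of vendor-to-vendor hand-off Pateras describes could be as simple as an agreed, machine-readable description of the memory die. The sketch below is a hypothetical structure, not an existing standard; the field names and the example scrambling map are assumptions intended only to show what a logic-die BIST controller would need to know.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical description a memory vendor could hand to the logic-die
# integrator so on-logic BIST can test the stacked memory. It mirrors
# the items named in the discussion (address space, scrambling,
# redundancy, architecture); it is not any existing standard.

@dataclass
class MemoryDieTestInfo:
    address_bits: int            # size of the logical address space
    data_width: int              # bits per access
    banks: int                   # number of banks in the die
    address_scramble: List[int]  # physical bit driven by each logical bit
    spare_rows: int              # redundancy available for repair
    spare_cols: int

    def to_physical(self, logical_addr: int) -> int:
        """Apply the vendor's address scrambling so BIST patterns hit the
        intended physical rows/columns (e.g. for checkerboard tests)."""
        phys = 0
        for logical_bit, physical_bit in enumerate(self.address_scramble):
            if (logical_addr >> logical_bit) & 1:
                phys |= 1 << physical_bit
        return phys

# Example: a small memory whose low two address bits are swapped physically.
mem = MemoryDieTestInfo(address_bits=4, data_width=32, banks=2,
                        address_scramble=[1, 0, 2, 3],
                        spare_rows=2, spare_cols=2)
assert mem.to_physical(0b0001) == 0b0010
```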

SMD: Will the supply chain get bigger because of stacking or will it shrink?
Movva: There won’t be a change in the supply base. What will change is the partition between the different players, where one takes over from the other.
DeLaCruz: I disagree. No matter what, you have at least one more link in the supply chain—someone stepping in to provide tiles. If you’re vertically integrated, that’s not a big deal. But if you’re trying to source a PLL, voltage regulator or SerDes from all these different players, who is in charge of taping out these tiles? Who’s going to inventory them and make sure they’re all test-compatible with each other? That’s a link in the supply chain that doesn’t exist today. Right now no one is the world leader in that role.
Woychik: There’s a battle brewing between who’s going to take ownership for these parts, and it’s not clearly defined. The packaging house will certainly take on a bigger responsibility than it has in the past.
DeLaCruz: I’m not so sure the packaging houses will have the appetite for taking on ownership and inventory. They’re going to want to see it come ready for assembly. What they will grow into is more assembly practices.
Woychik: The classic case is the PoP module, which is well-defined with the logic on the bottom and the memory on the top. With a 3D TSV, who takes this on? There’s a business risk.
Baldwin: If you think about PCB design and then you take this concept forward into 3D design, maybe you’ll have a die that’s a PLL or a SerDes or a logic block or memory block. You can imagine a Lego build in 3D. But if there’s too much complexity in packaging, assembly and test, what we will need are larger constructs. So what is the optimal process for any function? With 3D there will be an analog wafer, and that analog wafer will probably be 0.13 (microns). There will be a digital die at a lower geometry for integration. And then you can think about how these functions will be pushed up and down through this die stack. The conclusion is that you will have vertical-specific subsystems. You’ll have to. If someone is providing analog subsystems, they’ll have to say, ‘This is the analog subsystem of a cell phone or a tablet or a WiFi system.’ And they’re going to have to provide these. Only after they’ve been integrated can we start dealing with the inefficiencies of aligning TSVs and making sure the modeling is correct so you can integrate it into a 3D stack. That’s going to force aggregation around function.
Woychik: That’s where 2.5D can play a very nice role, and why it will be important. That’s a likely interim solution. Then, when you go to 3D, you’ll be more prepared to deal with the business issues of who does what.
Movva: The business model is one of the biggest issues. Who owns the logic and who owns the memory? Those are the kinds of questions the industry has to address. That may be the biggest hurdle for the adoption of 3D.


