Experts at the table, part 2: Does putting chips together in a package really cut time to market?
Semiconductor Engineering sat down to discuss 2.5D and advanced packaging with Max Min, senior technical manager at Samsung; Rob Aitken, an ARM fellow; John Shin, vice president at Marvell; Bill Isaacson, director of ASIC marketing at eSilicon; Frank Ferro, senior director of product management for memory and interface IP at Rambus; and Brandon Wang, engineering group director at Cadence. What follows are excerpts of that conversation. To view part one of this discussion, click here.
SE: What triggered this surge of interest in 2.5D? It’s been a topic of discussion for years, but it has only just started gaining traction.
Isaacson: Memory bandwidth.
Ferro: Form factor and memory bandwidth are the two drivers.
Aitken: You can argue that HMC (Hybrid Memory Cube) was a trigger, too. Wide I/O was almost a trigger, but it didn’t get there. HMC has increased bandwidth capability. It’s not exactly what everyone wants from a cost perspective. But it’s definitely pointing in the right direction, so maybe there was a way of achieving what we want by alternate means.
Min: There was a concern that the HBM die size would be very big, which would have made it unusable. But the HBM die size turned out to be very manageable. That’s why a lot of people are jumping in now.
Ferro: Related to the die size and cost is the ownership of the solution. If you’re just delivering a chip, then you can ship that chip to the customer. But now you’ve got a memory vendor, an SoC vendor, and an interposer vendor. How do you test that memory? If something breaks, who’s responsible for it? Now there are pins you can’t physically get to anymore. Three or more companies have to work together, so you have to get all of them talking to each other. If something breaks, it’s no good.
SE: Is that really any different than what we’ve been dealing with in the IP industry? Stuff breaks all the time.
Aitken: There’s a subtle difference. If you take 27 IP blocks and put them on a chip, the end result is still your chip. If you buy 5 chips from 5 different vendors, those 5 chips are individual products. The people who made them have to guarantee that they work. Part of the cost is the whole yield issue. How do you yield it? How do you test it? Who guarantees that a working chip pops out on the other side? Answering all those questions is different for physical things than it is for IP.
Wang: IP follows the same process as an SoC. But if you’re dealing with different products, like memories or sensors, they have very different distributions, and when you put them into the same package the worst one dominates. The ASIC design is guaranteed by design, with signoff at a certain speed, so it has a very tight distribution of performance. The die you are integrating it with may be good enough, but not speed-tested. If you put a 5-sigma or 10-sigma speed distribution into a basic package, that’s a waste of high quality in a commodity product. So there are differences in terms of quality and yield. The second issue is the supply chain. If you think about memory as a commodity, quality varies widely. Memory sourced from the channel may be much different than memory from a vendor with very strict rules. Any integration pulls in whatever variation exists between the different pieces.
Aitken: That goes back to the IP industry. Twenty years ago there wasn’t any unified test methodology for IP, and now there is. Everyone agreed that if you’re going to put an SoC together, you have to follow standard test practices. Twenty years from now everyone will probably agree on standard package integration practices, so no one turns out to be the 10-sigma outlier.
Isaacson: We don’t have that kind of luxury. We’re an ASIC provider. We sell a completed system in a package to our customers for an agreed-upon price. If it yields poorly, that’s a problem for us. But we have enough time invested, and enough experience, to understand how these systems behave and how you have to design and test them, in order to make sure you do have something that is going to be predictable, and to know what type of cost structure you’re going to have when you build it. As we get into higher and higher production, we’re going to run into problems we have not seen before. You can’t probe all the pins, so you need to give serious thought to how you’re going to ramp it and how you’re going to debug it. That’s absolutely critical.
Ferro: You’re responsible to the end customer, but in your supply chain you are still subject to those effects. You make it easier for your end customer, but you still have to deal with it.
Isaacson: Absolutely. But that’s where the time and effort you put into understanding what the end product will be really pays off. There are lots of places to build interposers and ASICs and get them packaged up, and they all have different implications for what end product you will have and how much work you’re going to put into that product before it tapes out. That could mean test chips and additional simulations, but it has ramifications.
SE: How did Marvell deal with these issues?
Shin: We needed to make sure we had a good interconnect so we could have flexibility in putting different chips together without worrying about the interface issues. That’s why we chose to use our own interconnect. With the MoChi architecture we can mix all kinds of chips together. We can pick and choose PCI Express or USB functions on the same interface.
SE: But if anything goes wrong you only have yourself to blame for that, right?
Shin: Yes, that’s correct. But we also need to integrate memory, which is the same issue for everyone. You have to make sure it is up to a certain quality and that the entire product is well tested. So we still have to be responsible for our product, which is a big challenge. We are talking about two kinds of packaging. In one case, two or more dies go into a single package. In the other, the chips act separately. If it is an all-in-one package, we have to make sure it works for our customers.
SE: Is it the same at Samsung?
Min: We use a mix. The design can be done by anyone with any options. For 2.5D, we are open to many options. We know how to make wafers. We can do packaging. But we don’t do system-level integration.
SE: Going back several years ago, the idea was that 2.5D would speed time to market. Is that real, or as we get into this does it look different than what people originally thought?
Isaacson: It depends. If you’re talking HBM, there is no speed-up. There may be a complex ASIC sitting in there, and we’re doing complex package design in parallel. The speed-up scenario is when we combine existing chips, or when we take a really large, complex ASIC and, instead of doing a 400mm² chip, do it in smaller tiles. Time to tapeout will be faster and time to market will be faster.
Ferro: This is the tradeoff that will have other costs.
Shin: People will need to develop a primary set of functions. Once you develop those functions, then you can develop other functions that may be much smaller in scale. We will benefit a lot in that case in terms of time to market.
Aitken: I’m still skeptical that there’s some sort of model for re-using chiplets inside an overall system. HBM is a case where it’s obvious that people want more memory, so integrating them makes sense. In a lot of ASIC contexts, the ASIC does a very specific thing. If you can split it into four pieces, that’s great. Or if there’s some previous function that you want to re-use, that’s good. But we can already kind of do that in an IP model, anyway. I’m not sure 2.5D by itself buys you a lot.
Isaacson: There are a couple of scenarios that make sense. If we look at IP as we move to a new generation, all of that IP has to move ahead. It probably just occupies the periphery, but it has pretty much the same function as it had at the older node. That includes things like high-speed data converters, high-speed serial interfaces.
Aitken: Yes, although then you have to communicate with that.
Isaacson: And that’s the challenge I see now. There is no standardization today for the interfaces to those chiplets.
Shin: That’s why Marvell defined its interfaces before we had the MoChi product line. We can mix all of these chips in all of these different technologies. So we can move the CPU to the latest process node, but leave all of these other functions in the older technologies. You don’t need to re-do the SoC integration for all of these products. That’s a huge savings.
Aitken: That’s potentially an advantage. The question that I have is, if you look at a given die, the number of transistors goes up roughly as n² per node, but the ability to interface off the die only goes up linearly. You’re going to run into interconnect limitations relatively quickly, and then you have design partitioning challenges.
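Aitken's scaling argument can be made concrete with a back-of-the-envelope sketch: logic capacity grows with die area (quadratically in edge length), while the "beachfront" available for die-to-die interfaces grows only with the perimeter. The function and all numbers below are illustrative assumptions, not figures from the discussion.

```python
# Illustrative sketch of area-vs-perimeter scaling for a square die.
# transistors_per_mm2 and io_per_mm_edge are made-up densities chosen
# only to show the trend Aitken describes.

def capacity_vs_beachfront(edge_mm, transistors_per_mm2, io_per_mm_edge):
    """Return (transistor count, edge I/O count) for a square die."""
    transistors = (edge_mm ** 2) * transistors_per_mm2   # scales with area
    io_pins = 4 * edge_mm * io_per_mm_edge               # scales with perimeter
    return transistors, io_pins

for edge in (10, 20, 40):  # double the die edge each step
    t, io = capacity_vs_beachfront(edge, transistors_per_mm2=100e6,
                                   io_per_mm_edge=50)
    print(f"{edge:>2} mm edge: {t:.1e} transistors, {io:.0f} edge I/Os, "
          f"{io / t:.2e} I/O per transistor")
```

Each doubling of the edge length quadruples the transistor count but only doubles the edge I/O, so interface bandwidth per transistor halves, which is why partitioning a design across dies becomes harder as each die grows.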
Wang: There are two extremes to integration. One involves massive SoC blocks. That’s the next generation of 3D. It provides more natural channels between different blocks. There are still multiple dies, and that is comparable to an SoC in terms of scalability and size. The other extreme is heterogeneous integration. It makes sense for IoT and wearables, where a number of companies will compete on the cost. They are still at the stage of looking for killer apps. One of them will emerge in the next year or two, and then they can start consolidating. That’s where the competition will move to a cost-based competition. With heterogeneous integration, there is less integration for flash memories, sensors and MEMS. Those won’t move to TSVs because of cost. Automotive also will likely have a flash memory within the same package. That will provide a way of integrating at a lower cost.