2.5D Becomes A Reality

Experts at the table, part 3: Will the tools and IP work with advanced packaging, and what’s the difference between packaging done at foundries and by OSATs?


Semiconductor Engineering sat down to discuss 2.5D and advanced packaging with Max Min, senior technical manager at Samsung; Rob Aitken, an ARM fellow; John Shin, vice president at Marvell; Bill Isaacson, director of ASIC marketing at eSilicon; Frank Ferro, senior director of product management for memory and interface IP at Rambus; and Brandon Wang, engineering group director at Cadence. What follows are excerpts of that conversation. To view part one of this discussion, click here. For part two, click here.

SE: Have the tools kept up with 2.5D? Are they sufficient for what we have today?

Shin: For right now they are fine. We are not doing 3D with TSVs. That will happen much later, if it happens at all. But putting things side by side with massively parallel I/O is not an issue at this time.

Wang: For any structured connectivity, such as memory or FPGA, it’s really the same units. That’s not the challenge for tools. One challenge is with extreme monolithic integration. Your clock tree needs to be bound to one layer of the die, which is the higher level, and then it comes down and you need to balance everything.

Aitken: There is no tool in the universe that does that at the moment. Nor does anything do the power modeling, which is where all the other tools die a horrible death.

Wang: We really need to pick an application that can mature into production to solve this. That’s the extreme case. But it looks very promising compared with microbump connectivity. The other tool challenge is the user base. At the end of the day, the tool is only as good as the user base. If you walk into different companies and you look at 3D projects, you realize the owners of those 3D projects come from all different backgrounds. The package guy may say he is the one, but we’re also seeing 3D architects putting these together. If you look at fan-out technology, the OSATs have never been as standardized as IC fabrication. It’s a challenge for tools to be compatible enough to handle all of these different cases.

Aitken: From a current standpoint, are the tools for fan-out wafer-level packaging essentially tools that treat that object as a tiny board, or as a giant, weirdly structured chip?

Wang: That’s a great question, and it’s why I see this as so difficult. The first question I would ask is who’s going to fabricate this. If it’s a foundry company, it’s going to be design rule checking like ICs in GDSII format. The OSATs are all about how to do vectorization. We’re basically running into two different languages and two different user experiences. The opportunity is to unify these two different universes.

Min: We deal with electrical simulation and design tools, as well as mechanical and thermal tools. The mechanical is very challenging. Because of the non-linear nature of materials and structures, it’s still an ongoing problem. With 2.5D, mechanical tools will have to make significant progress. There is a long way to go. For thermal, you can simulate and estimate what is going to happen. This is still a solution the tools can support. With signal integrity, there have been a lot of improvements and new tools over the years, but now we are dealing with chip-to-chip communication in a package. People still think the interposer itself is part of the package. The silicon guys model the interconnect as RC, without inductance. Someday we will see the problem of inductance. Then the software vendors have to be ready.
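A rough rule of thumb frames that inductance point: a trace starts to behave as a transmission line, and an RC-only extraction is no longer sufficient, when its total delay becomes a sizable fraction of the signal rise time. One common guideline is

l_crit ≈ t_r / (6 × t_pd)

where t_r is the rise time and t_pd is the propagation delay per unit length, on the order of 6ps/mm to 7ps/mm for oxide-clad interposer wiring if the effective dielectric constant is near 4. With edge rates of a few tens of picoseconds and interposer routes running several millimeters, chip-to-chip channels can cross that threshold; the numbers here are illustrative assumptions rather than measured values.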

Ferro: We’ve developed our own internal tools for the signal integrity and power integrity analysis of the interposer, from the DRAM all the way through the interposer to the SoC. So we’re using a combination of standard tools plus in-house proprietary tools.

Isaacson: We’re using standard package design tools. We’re doing things like package routing, package design, extraction, signal integrity. The piece we really need to pull together is the connectivity to the die and keeping track of all of that. If there’s a bump on each side for every signal, we’re doing that outside of the standard EDA tool sets. We’re finding that with the set we’re already using for IC package design, we can use those same tools and transfer that to 2.5D.

SE: How does the IP have to change?

Aitken: From the interconnect standpoint, future interconnect frameworks need to be agnostic about whether they live on a single chip or a couple chips. There need to be ways to move what amounts to logically the same network. It needs to go across a physical interface. We don’t specify what the interface is because everyone wants their own, but the information still needs to go across and pop out the other side. You can do that today with the buses that are defined, but they haven’t necessarily been architected with that view in mind. There’s an interesting question with monolithic 3D or true 3D-based multiple logic dies about how you architect a multicore system. Do you just put all of those CPUs and GPUs on one high-speed logic die, or do you stack them and mix them up? The work we’ve done so far demonstrates that if you propose to mix them up, you should micro-architect the cores. If you try to retrofit an existing one, it doesn’t do any better.

Wang: Depending on which IC technology you pick, you have to define a model architecture. If the connectivity is through die, you can imagine a case where you have pMOS top-down, and nMOS bottom-up. You can have a standard cell across multiple die. If you do have that in place, then there is no need to change the tools. The only difference is that 3D is embedded in the cell. So you would take a clock tree and put together a cell-based inverter, rather than block-based or huge block-based partitioning, and then you could compare performance versus a 2D chip. The problem here is the tools are not mature enough so that we can auto-partition.

Aitken: If you put N transistors on one die and P transistors on another, how do you build an SRAM? Now you have four transistors on one die and two on another, and you’re wasting a ton of space. You can get around that if half your cells are going to be 4N2P and the others are going to be 4P2N. You can do similar things in standard cells, and you wind up building a 3D LEGO device.

Wang: It makes more sense to have some P and N combination in the top die and a P and N combination in the bottom die.

Ferro: We’re working in a 2D world. Everything we’re doing is still two-dimensional. We haven’t changed the IP. There is no specific thing. There is no change to physical interfaces from a protocol standpoint. From an electrical standpoint, there is a change if you go from an interposer to a PCB. In theory, it offers us the advantage of much more efficiency on power because the drivers don’t have to be as big. You’re not driving across two or three inches of a board or through a DIMM socket to a DRAM. Now you’re going across a few millimeters of a silicon interposer. But that has other challenges, because now you have 1,000 signals instead of 50 or 70, and you have crosstalk issues. It’s still the same challenge we look at, whether it’s a PCB or an interposer. But we’re still working on the assumption that the world is flat.
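The power advantage can be sized with a switching-energy estimate on the order of E ≈ C × V² per transition, using illustrative assumptions rather than measured values: a few millimeters of interposer trace presents a load on the order of a picofarad, while a couple of inches of board trace plus a connector and a DRAM pin can present ten picofarads or more. At comparable signaling voltages, each wire then burns roughly an order of magnitude less energy per bit, which is what lets the drivers shrink, and it is also why an interface can afford a thousand-plus signals on an interposer where a board-level bus could not.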

SE: There has been some talk about hardening of IP. Is there any likelihood that would happen?

Ferro: Probably not.

Shin: It’s not easy to harden IPs. You minimize the function in one chip because you can divide it up. But if you harden an SoC design and you have a 1mm IP, for example, as long as everyone can agree on it you can reduce the cost. But if you can’t agree, it’s very difficult to make it in a rectangular form without lots of waste. That will increase the cost. Except for the CPU, hardened IP is probably not easy.

Aitken: ARM started out selling hardened CPUs, then changed over to soft CPUs. We check from time to time to see if anyone wants them hardened. They usually don’t. But what they do like is, ‘Here’s the RTL for the CPU, here are the libraries and memories that work at high speed or at optimal power, and here’s the instruction book for how to build it.’ That way, if you want a different aspect ratio or you have a different metal stack, you can follow these instructions and you will get the core you want. It’s not hardened, but it’s a way to make a hardened one.

SE: Instead of standardized parts, we’re seeing more customized parts. But as we move into 3D, customized parts can raise a bunch of unanswered questions. Will that be a problem?

Min: People try to standardize sockets and IP on boards. They’re trying to make HBM a standard right now, but tomorrow they might want to customize things here and there. In packaging, standardization is still unknown. Before, people talked about small packages and small chips. Now they want to make it bigger and bigger, but there are no standards. Do you have to make all of these solutions work? That is our headache. Hopefully customers will solve this and agree on the same package and size of interposer.

Wang: There is no standard format for technology parts. In the IC world, you have SPICE models that are independent of foundries. With OSATs, everything is different. As the chip gets bigger, you start to see warpage. You need to do mechanical simulation to make sure the warpage is okay. The IC designer needs to look at this almost like 2D because on the Z axis there is no flexibility. The Z axis is already determined by the foundry. If you go to a fan-out, there is no standard. You need to talk to all these vendors and do the deep dive.

Aitken: Don’t you think that will settle out a bit? Right now everyone is exploring because no one knows what the right answer is. At some point in the future it will become more obvious that certain approaches will be closer to the right answer than others. At that point, you can have this thing and it will cost $5, or you can have your own custom thing and it will cost $5,000.

Isaacson: That’s for us as chip companies to define. What is the combination of chip sizes that will yield and be reliable? The application will drive the choice of whether that’s 2.5D, but we will be able to say whether we can build that and whether it will be able to go into production.

Related Stories
2.5D Becomes A Reality Part 1
Lower power, better bandwidth and smaller form factor propel advanced packaging into commercial use; cost is still rather murky.
2.5D Becomes A Reality Part 2
Does putting chips together in a package really cut time to market?
Security in 2.5D
Is an advanced package design more secure than an integrated SoC?


