Challenges With Stacking Memory On Logic

Gaps in tools, more people involved, and increased customization complicate the 3D-IC design process.

Experts at the Table: Semiconductor Engineering sat down to discuss the changes in design tools and methodologies needed for 3D-ICs, with Sooyong Kim, director and product specialist for 3D-IC at Ansys; Kenneth Larsen, product marketing director at Synopsys; Tony Mastroianni, advanced packaging solutions director at Siemens EDA; and Vinay Patwardhan, product management group director at Cadence. What follows are excerpts of that conversation.

Left to right: Kenneth Larsen (Synopsys), Vinay Patwardhan (Cadence), Sooyong Kim (Ansys), Tony Mastroianni (Siemens EDA).


SE: As chipmakers stack logic on logic, or memory on top of logic, what problems are they running into?

Patwardhan: When we talk about true 3D stacking, the first thing that some of our customers are trying to figure out is whether to do it as a homogeneous stack or a heterogeneous stack — whether the two layers should be in the same technology node or in different nodes. From an EDA perspective, our algorithms have to be different for homogeneous and heterogeneous. For homogeneous, it opens up the whole space across the two tiers. With heterogeneous, sometimes it’s a limited space, which makes it easier for us to do placement and to do synthesis for each technology. That’s the first thing people adopting this have to figure out. What are the two tiers going to be? Once you’ve decided that, the next big thing is power. If these different layers are connected together, what does the power structure look like? What does the power delivery look like? And as a result of the power dissipation, what does the thermal profile look like? With so many interconnects connecting the two layers, you can have very quick heat dissipation for the whole stack, and the package has to support that. Of course, cross-die connectivity checking, like LVS/DRC (layout vs. schematic/design rule check), which you do for standard 2D chips, now gets even more complex. You need flow definitions, and you need to do some early analysis to have enough confidence in what you are stacking together.
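
For illustration, that first tier decision can be captured in a minimal Python sketch. The Die and Stack types here are hypothetical, not any EDA tool’s API: a stack is homogeneous when every tier is in the same node, which opens up the full two-tier space.

```python
# Hypothetical sketch of the tier decision described above. The Die and
# Stack types are illustrative, not any EDA tool's API.
from dataclasses import dataclass

@dataclass
class Die:
    name: str
    process_node_nm: int  # e.g., 5 for a 5nm tier
    role: str             # "logic" or "memory"

@dataclass
class Stack:
    tiers: list[Die]

    def is_homogeneous(self) -> bool:
        # Homogeneous: every tier is in the same technology node, which
        # opens up the full placement space across the two tiers.
        return len({die.process_node_nm for die in self.tiers}) == 1

stack = Stack(tiers=[Die("top_logic", 5, "logic"),
                     Die("bottom_logic", 7, "logic")])
print("homogeneous" if stack.is_homogeneous() else "heterogeneous")  # heterogeneous
```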

Larsen: With high-performance computing, for example, the topic of discussion for some time has been how we stack processors on top of memory. A big concern is how we propagate power up to the processing elements. Your first reaction is to take the CPU and put it as close as possible to a heat sink. But the processor has a lot of connections that have to go up through the memories, and that’s causing all sorts of issues. Through-silicon vias add resistance to your power delivery network. IR drop in a stack is becoming a concern, and it’s a factor because the thing that produces the most heat is sitting on top. If you switch things around, the processor sits closer to the board and you have your power, but then you have different challenges. The heat produced by the compute goes up through the memory. All of this creates the need for early exploration, whether it’s in one process or more than one. You need to explore the architecture itself to determine things like where do you place your TSVs? Is it face-to-face, or face-to-back? Do we use hybrid bonding or microbumps between the dies?
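
To make the TSV resistance point concrete, here is a back-of-envelope sketch of IR drop through a power TSV array. All values are illustrative assumptions, not process data, and this captures only the TSV contribution to the power delivery network.

```python
# Back-of-envelope IR-drop estimate through a TSV array, illustrating why
# TSV resistance in the power delivery network matters. All values are
# illustrative assumptions, not process data.
tsv_resistance_ohm = 0.05   # assumed resistance of one power TSV
num_power_tsvs = 2000       # assumed TSVs dedicated to one supply rail
supply_current_a = 50.0     # assumed current drawn by the top die

# Parallel TSVs: effective resistance falls as 1/N.
effective_r = tsv_resistance_ohm / num_power_tsvs
ir_drop_v = supply_current_a * effective_r
print(f"IR drop across TSV array: {ir_drop_v * 1000:.2f} mV")  # 1.25 mV
```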

Mastroianni: Homogeneous vs. heterogeneous, and active vs. passive die stacking, are big decisions. I see big challenges if you’re working with different technologies, including on-chip variation and timing. Place-and-route, timing closure, power delivery, and thermal are key challenges for die-on-die. Just managing the thermal is an issue. You need to analyze that, and you need to optimize it during the design process and use mitigation techniques. Can your place-and-route engines deal with that, or do you just do the analysis? How do you deal with thermal issues when they come up? There are some advanced technology solutions, like microfluidic cooling for high-power devices. Another issue for 2.5D and 3D is how you’re going to test multiple dies. That has to be part of the design process. You need to build testability into the design flow, and that’s another challenge.

Kim: 2.5D can be spread out, with room to dissipate heat. That’s not possible with 3D-ICs, where you have a lot of things packed into a smaller device and higher power density. Thermal analysis is going to have to advance much further for 3D-ICs. So we will need co-simulation for thermal. Also, with 3D, we need to think about coverage. There are different vectors from one die to the next, and we need to have enough confidence in power signoff or power analysis. There is a lot of data to analyze, and the data isn’t just about the silicon. There are a lot of package designers working on 3D-ICs, as well. Dealing with different heterogeneous databases and doing co-simulation is an enormous challenge. In addition, this co-simulation needs to be done very early in the flow, because the cost of changing anything is high. There are a lot of decisions to make about materials, bump densities, TSV densities, and all of that has to be done earlier in 3D-ICs than in 2.5D.
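
As a toy version of the thermal problem Kim describes, the sketch below treats each layer of a stack as a series thermal resistance to estimate junction temperature. Real 3D-IC thermal co-simulation is far more detailed; every value here is an assumption for illustration.

```python
# Minimal 1D thermal sketch: temperature rise of a buried logic die,
# treating each layer in its heat path as a thermal resistance in series.
# All values are assumptions, not measured data.
ambient_c = 25.0
power_w = 15.0  # assumed power of the buried logic die

# Assumed junction-to-ambient path: logic die -> bond layer -> memory die -> heat sink
layer_r_theta_c_per_w = {
    "logic_die":   0.10,
    "hybrid_bond": 0.05,
    "memory_die":  0.20,
    "heat_sink":   0.50,
}

t_junction = ambient_c + power_w * sum(layer_r_theta_c_per_w.values())
print(f"Estimated junction temperature: {t_junction:.1f} C")  # 37.8 C
```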


Fig. 1: Thermal simulation showing heat flows between chip and system and cooling airflow. Source: Ansys

SE: Where are the gaps in the tool flow, and how far along is the industry in closing those gaps?

Mastroianni: Historically, the technology always leads the tools. EDA never fully catches up, and that’s an ongoing battle because the technology is moving even faster than in the past. The 3D place-and-route is a huge problem, and now you’re adding power, thermal, and stress analysis. You have to do the analysis and co-simulation, but you also have to do optimization. And if you see a problem, how do you address it? Ultimately, that needs to be part of the thermal-driven placement, and it needs to be another cost function consideration when you’re doing the development and the place-and-route. These are tough challenges, and it’s going to take some time.
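
The “thermal as another cost function” idea can be sketched as a placement objective that trades wirelength against power density. This is only a conceptual illustration of the principle, not how any production place-and-route engine works, and the weight and numbers are made up.

```python
# Conceptual sketch of thermal-driven placement: a cost function that
# penalizes placements concentrating power into hot spots that the die
# above or below cannot dissipate. Purely illustrative.
def placement_cost(wirelength_um: float,
                   peak_power_density_w_per_mm2: float,
                   thermal_weight: float = 500.0) -> float:
    # Lower is better: total wirelength plus a weighted hot-spot penalty.
    return wirelength_um + thermal_weight * peak_power_density_w_per_mm2

# Two candidate placements: shorter wires but a hotter cluster, vs. the reverse.
print(placement_cost(12000.0, 8.0))  # 16000.0
print(placement_cost(13500.0, 2.0))  # 14500.0 -> cooler placement wins
```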

Larsen: We’ve begun to look at designing and optimizing IP in the context of not just one die, but the whole die stack. That includes where you place your IPs versus the power delivery and how you need to feed your systems. Sometimes we need to go in and co-optimize the IPs for the entire system. All decisions need to happen much earlier in the design process than in the past. Maybe as an industry we should begin to think about heterogeneous design, not just heterogeneous integration. So you combine all the various technologies together, and then co-optimize them across the key metrics. PPA is certainly still very relevant, and you need to do that analysis. But you also need to recognize these are not just technologies from a single vendor, like 3nm, for example. It may include technologies from several different suppliers of silicon that you heterogeneously design together.

SE: Are you talking about rethinking the entire design flow?

Larsen: We are trying to figure out how to co-design across multiple technologies at once. It’s not just based on whether it’s 5nm or 7nm. You could imagine that if you have a silicon interposer from one supplier, a compute subsystem from another supplier, and whatever specialized IP from a suitable foundry, then getting these things designed together and co-optimized will require changes to the tool flow. That’s one of the paths that we’re taking when we are doing these packages. It ends up being, ‘How do you create optimized solutions at the budget we want?’ It’s a volume problem, too, because you’re trying to optimize PPA across all these dimensions. It’s exciting, but obviously also very challenging.

Mastroianni: Timing closure is getting harder and harder. I don’t know how many corners are in a 7nm chip, but when you start talking about multiple technologies, you are exponentially increasing the timing closure problems. Thermal needs to be considered in that mix, too. So there may be thermal timing closure as part of the problem, as well. I agree that IP needs to be considered during the early floor planning and as you do power management. But when you start doing active die-to-die, you’re creating DC generators, so the tools need to reconcile that. The IP is a little more manageable. The logic is going to have to be part of the optimization engine to deal with thermal issues.
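
The corner explosion Mastroianni mentions is easy to see with a small sketch: if each die in the stack carries its own set of PVT corners, the cross-die combinations multiply. The corner names and counts below are made up for illustration.

```python
# Illustration of corner explosion in multi-die timing closure: the
# cross-die corner count is the product of each die's own corner count.
# Corner names and counts are assumptions for illustration only.
from itertools import product

die_corners = {
    "logic_7nm":   ["ss_0.72V_125C", "ff_0.88V_m40C", "tt_0.80V_25C"],
    "memory_16nm": ["ss_low", "ff_high"],
    "io_28nm":     ["slow", "fast"],
}

cross_die_corners = list(product(*die_corners.values()))
print(len(cross_die_corners))  # 3 * 2 * 2 = 12 combined corners to close
```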

Larsen: That’s a good point on static timing. It has to support multiple dies already, and there is an explosion of corners. When we get into many different technologies, the problem worsens, right? There are methodologies around that today. As an industry, we need to make sure that we make it simple enough for the users to enable them to do the kind of designs they want to see.

Patwardhan: If you look at what the flows are like today, the foundries are defining the flows and we are following them. But while working with customers directly, we’re seeing those customers looking to do more than what the foundries are defining as the sign-off flow. That includes things like exploration. If I have a 2D design and I split it into two tiers, how much power/performance will I gain? And if I choose a hybrid bond pitch of 4µm versus 15µm, will the 3D design I derive from my 2D design still be effective? Customers want that kind of exploration. The foundries are trying to define it. And we, as EDA vendors, are trying to develop an algorithmic solution for that. So even before it goes to the foundry for manufacturing, a customer has tried out a few scenarios where they can see what kind of 3D stacked die they want to do, which memory, what pitch. There is more to be done there. As a starting point for the whole stack configuration, more has to be done from our side, as well as from the foundry side, in terms of defining which kinds of designs are completely off the table because they cannot be manufactured. ‘So don’t even try to explore them with the tools that you get from EDA.’ If the foundries define that, it becomes easier for the customer, and it becomes easier for us when we are developing a product. Second, if you’re building dies from different places, and you take one die from one provider and IP from another provider, and they’re all in different technologies, just representing them in a single database or in a single format — say you have something at 120nm and something at 5nm — is a challenge. How do you represent metal one and all these technologies inside the database correctly? And then, how do you place them when you’re trying to do connectivity management or risk-based analysis? There is still room for improvement for all EDA vendors. We need to be able to put a homogeneous database together that can represent multiple different technologies. And we also need a full, clear, start-to-finish sign-off flow for different styles of stacking. It’s not standardized yet, so for initial test chips and early designs, people will come up with their own ways. There will be a lot of over-design to make sure the 3D stack is working. That’s one thing we can improve on — making methodologies specific to 3D stacks so you can reduce the margins. And if you know a design is going into a stack, then you can optimize it even further.
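
The hybrid-bond pitch question (4µm versus 15µm) lends itself to a quick exploration sketch: how many die-to-die connections can a given overlap area support at each pitch? This assumes an idealized square grid over the full overlap with no keep-out regions, so the numbers are upper bounds, and the overlap area is an assumption.

```python
# Exploration sketch: maximum hybrid-bond count for a given die-to-die
# overlap area and bond pitch, assuming an ideal square grid. Real designs
# reserve keep-out regions, so treat these as upper bounds.
def max_bonds(overlap_mm2: float, pitch_um: float) -> int:
    bonds_per_mm2 = (1000.0 / pitch_um) ** 2  # grid sites per mm^2
    return int(overlap_mm2 * bonds_per_mm2)

overlap = 50.0  # assumed 50 mm^2 die-to-die overlap
for pitch in (4.0, 15.0):
    print(f"{pitch} um pitch -> up to {max_bonds(overlap, pitch):,} bonds")
# 4.0 um pitch -> up to 3,125,000 bonds
# 15.0 um pitch -> up to 222,222 bonds
```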

Larsen: The industry will probably continue to use the current sign-off flows, but they’re getting tuned so that if you have multiple dies, they get checked out as individual dies that follow the existing sign-off flows. That may be less of a concern right now than in the future, when we get to transistor stacking. When we get down to that level, we may need to revisit the tool chains even more. But the sign-off flows, as they stand today, actually do a pretty good job for the kinds of designs we have seen. I agree on the need for some kind of a unified data environment — a data management environment that ensures you can comprehend all the individual technologies and dies as one, so you can visualize the entire stack, including different technologies. Most customers dealing with these things are looking for better design management that brings the entire design into one place.
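
One way to picture that unified data environment is a toy database where every layer lookup is qualified by die, so “metal one” in a 120nm die never aliases “metal one” in a 5nm die. The schema below is purely illustrative, not a real design-database format.

```python
# Toy sketch of a unified multi-technology stack database: each die keeps
# its own technology namespace, so same-named layers from different nodes
# never collide. Illustrative only; not a real design-database schema.
stack_db = {
    "sensor_die":  {"node": "120nm", "layers": {"M1": {"pitch_nm": 400}}},
    "compute_die": {"node": "5nm",   "layers": {"M1": {"pitch_nm": 28}}},
}

def layer(die: str, name: str) -> dict:
    # Lookups are always die-qualified, so cross-die connectivity checks
    # can compare layers from different technologies without aliasing.
    return stack_db[die]["layers"][name]

print(layer("sensor_die", "M1"), layer("compute_die", "M1"))
```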

Kim: With 3D-ICs, there are so many different requests from customers that it’s hard for us to catch up, and that’s partly because everything is changing so fast. That is the biggest challenge for me. Previously, we were able to plan two years out very easily. Today, you talk to a foundry and the next day there’s a new request. It’s all moving very fast. Just by itself, this is very challenging technology. There is analysis versus sign-off, and there are scaling challenges, as well. And if you look at modeling, are we ready to model very early on? Does the model scale? Can you convey that transistor model all the way to the system level? We’re trying to solve that now with the foundries, which are trying to keep up with what the designers are demanding. But often they fall short in understanding all the requirements from customers, because they are not designers. For us it’s important to talk to the foundry, as well as to customers who are actually taping out 3D-ICs. And it’s important to talk to the people developing dies number one, two, and three, which may involve different companies. Even within the same company, you may have different designers.

Related
Setting Ground Rules For 3D-IC Designs
Part 2 of this roundtable. The few designs to reach silicon today are completely customized, with inconsistent tool support. That has to change for this packaging approach to succeed.
Preparing For 3D-ICs
Part 3 of this roundtable. Why disaggregation of 2D chips is so complicated, and what’s missing from the tool chain to make it easier for design teams.


