Preparing For A Barrage Of Physical Effects

A perfect storm of miniaturization at 3nm and advanced packaging is forcing design teams to confront issues they often ignored in the past.

Advancements in 3D transistors and packaging continue to enable better power and performance in a given footprint, but they also require more attention to physical effects stemming from both increased density and vertical stacking.

Even in planar chips developed at 3nm, it will be more difficult to build both thin and thick oxide devices, which will have an impact on everything from power to noise. Packaging those with other chips may only compound the issues, depending upon where those chips are placed in a package.

“It will get more difficult to build I/O cells with voltage larger than about 1 volt,” said Andy Heinig, head of the efficient electronics department in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “In addition, analog performance will be influenced negatively, or no area shrink would be seen. As a result, it is estimated that every 3nm device needs a supporting device in older technology for analog, and also I/O cells for standard protocols such as for monitors, cameras, etc., where the voltage level can’t be reduced. This doesn’t directly mean 2.5 or 3D, and a number of new package types will arrive to solve the issue.”

Physical effects are a bigger concern due to miniaturization. While these effects always have existed, they were often ignored by design teams at older process nodes because they could be addressed by heavily margined design rules in the fab. At 3nm, however, adding margin carries a power and performance penalty, and thinner dielectrics, higher dynamic power and transistor density, and greater sensitivity to various types of noise, heat, and process variation have pushed physical effects to the forefront of design.

“We want smaller things and better things, but then miniaturization and advances in transistor technology bring about all sorts of physical effects when it comes to electrical, thermal, mechanical, and reliability,” said Gary Yeap, senior R&D manager at Synopsys. “And all of this affects the entire design flow.”

Design teams now need to consider everything from current leakage and increased resistance in thinner wires to higher metal density and the routing congestion that comes with it.

“This, in turn, brings about signal integrity issues because the wires are too close together now,” Yeap said. “The increased density also causes increased difficulty in supplying the transistors. If you want low voltage, that amplifies all the issues because low power is not just about low power. It’s like conserving water. It doesn’t mean not to use water. Low power is at the given performance that we asked for. In fact, many of the low-power systems are actually high-performance, and in terms of wattage they are not low power. It’s just an efficient use of power. This raw increase in power is bringing about thermal and mechanical issues that we haven’t seen in the past. Coupled with transistor miniaturization, and the desire to go to 2.5D/3D, that opens yet another can of worms.”
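
To put rough numbers on the wire problem, the sketch below estimates how the RC delay of a fixed-length route grows as the wire cross-section and spacing shrink. It is a back-of-the-envelope model with assumed resistivity, dielectric, and geometry values, not data for any particular node.

```python
# Back-of-the-envelope sketch (illustrative values, not node data) of how the
# RC delay of a fixed-length route grows as wires get thinner and closer.
RHO_CU = 1.9e-8        # effective copper resistivity, ohm*m (assumed)
EPS = 3.0 * 8.85e-12   # assumed low-k dielectric permittivity, F/m

def wire_rc_delay_s(length_um, width_nm, thickness_nm, spacing_nm):
    """Crude first-order RC estimate for one wire coupling to two neighbors."""
    length = length_um * 1e-6
    w, t, s = (x * 1e-9 for x in (width_nm, thickness_nm, spacing_nm))
    r = RHO_CU * length / (w * t)      # series resistance of the wire
    c = 2 * EPS * (t * length) / s     # parallel-plate coupling, both sides
    return r * c

# The same 100 um route at a relaxed pitch vs. an aggressively scaled one.
for w, t, s in [(60, 120, 60), (24, 48, 24)]:
    rc_ps = wire_rc_delay_s(100, w, t, s) * 1e12
    print(f"w={w}nm t={t}nm s={s}nm -> RC ~ {rc_ps:.1f} ps")
```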


Fig. 1: Power analysis in 3D-IC. Source: Synopsys

The term 3D can be confusing, because finFETs and gate-all-around FETs are themselves 3D structures, while the chips built with them increasingly are stacked in 3D packages. The bottom line, though, is that each adds complexity to the design, and when they are combined in the same package the complexity only grows.

“Now we have additional parasitics,” said Rita Horner, senior staff product manager at Synopsys. “We have three-dimensional parasitics that need to be extracted, so the extraction process takes longer, the simulation process takes longer. And of course, you have to worry about all of these additional parasitics that didn’t exist in a two-dimensional world. Now going to a 2.5D/3D package, you’re going in the z direction again. In the packaging, with three dimensions, you’re able to get smaller form factor devices, smaller footprint packages, and smaller ICs but the complexity of these getting smaller is really serious.”
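
As a rough illustration of the new z-direction parasitics Horner describes, the sketch below treats a bump pad and the landing pad on the die above it as a parallel-plate capacitor. All dimensions and material constants are assumptions for illustration, not foundry data.

```python
# Hypothetical estimate of one extra "z-direction" parasitic in a stacked die:
# parallel-plate coupling between a pad on the bottom die and the landing pad
# on the die above. All numbers are illustrative assumptions.
EPS0 = 8.85e-12
K_DIELECTRIC = 3.9     # assumed bonding-dielectric permittivity

def pad_coupling_fF(pad_um, gap_um, k=K_DIELECTRIC):
    """Parallel-plate C = k * eps0 * A / d, returned in femtofarads."""
    area = (pad_um * 1e-6) ** 2
    gap = gap_um * 1e-6
    return k * EPS0 * area / gap * 1e15

# 25 um x 25 um microbump pads separated by a 2 um dielectric gap.
c_pad = pad_coupling_fF(pad_um=25, gap_um=2)
print(f"per-pad vertical coupling ~ {c_pad:.1f} fF")
print(f"for 10,000 signal pads ~ {c_pad * 10_000 / 1e3:.1f} pF of extra load")
```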

One of the big problems is supplying enough current to all parts of these packages without impacting other parts of the design, particularly when lower voltages are required.

“Miniaturization increases the per-unit-area power consumption,” said Yeap. “A multi-chip package presents additional challenges because the power needs to go from the bottom all the way to the top die. So the die sitting on top needs to rely on the middle die and the bottom die to supply power. That’s an obvious problem. But miniaturization also limits the power-carrying capability, so now we’re seeing that the resources given to the power network are increasing, and that eats up the resources for signaling. This means there is a tighter tradeoff, because you need to spend more resources on power. And the increased power density that comes with miniaturization brings about more severe thermal and mechanical problems. If you go back 10 years, we did not care much about thermal and mechanical issues. You could dissipate the heat and it wasn’t a major design concern. Now, with all of these smaller transistors, and 2.5D/3D packaging, you need to consider that early in the design process. Also, all of this complexity comes at the designer, and they are just overwhelmed with all these physical effects.”
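
A simple way to see the stacked-die power delivery problem Yeap describes is to add up the IR drops as the full stack current funnels through the interfaces below each die. The sketch below uses assumed per-interface resistances and die currents purely for illustration.

```python
# Minimal stacked-die supply sketch (assumed numbers): every die's current has
# to cross the power TSVs/microbumps of the dies below it, so the IR drops add
# up and the top die sees the worst supply.
R_PATH = 0.005   # assumed effective resistance per die-to-die power interface, ohms
VDD = 0.75       # volts at the package balls, illustrative

die_currents = [("bottom", 8.0), ("middle", 5.0), ("top", 3.0)]   # amps, illustrative

drop = 0.0
remaining = sum(current for _, current in die_currents)
for name, current in die_currents:
    drop += remaining * R_PATH    # current for this die and all dies above it
    print(f"{name}: ~{VDD - drop:.3f} V  (cumulative IR drop {drop * 1e3:.0f} mV)")
    remaining -= current
```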


Fig. 2: Power rail analysis for resistive and capacitive elements. Source: Synopsys

With 2.5D and 3D technology becoming more common, as semiconductor and systems companies strive to get more out of existing systems, the impacts are being felt across the industry, including the memory and interface IP realm.

“By using 3D techniques such as HBM, which takes common DRAM technology and stacks it in a 3D configuration, you’re increasing the bandwidth significantly with basically the same technology,” said Frank Ferro, senior director of product marketing for IP cores at Rambus.

However, this has a ripple effect. “Say you’ve got a 3D DRAM, which goes very wide and slow,” Ferro explained. “There are 1,024 data lines. How do you deal with those? That’s where the 2.5D technology comes in. You’ve got to put that on the interposer. Then you’ve got to put the SoC on an interposer, and that means all of the signals — whether going through the HBM DRAM or not — have to go through a silicon interposer. I may have a SerDes on my SoC. Now that’s got to go through the interposer, as well. All of this has to be taken into account. In addition to physical effects, there are also manufacturing concerns and reliability concerns. So while you’re getting a lot more performance out of the system, along with a smaller footprint potentially — and at least in the case of HBM, you’re getting a good power profile — now you’ve got all of that in a very small footprint. We’ve been spending a lot of time on how to deal with that very different type of channel. We’re used to doing signal integrity on a PCB. Now we’ve got to do the whole signal integrity challenge on a through-silicon via.”
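
The arithmetic behind the “wide and slow” tradeoff is straightforward, as the sketch below shows with assumed per-pin data rates: 1,024 lines at a modest rate deliver more bandwidth than a far narrower, much faster interface, but routing those 1,024 lines is what forces the design onto a silicon interposer.

```python
# Illustrative bandwidth arithmetic (assumed per-pin rates, not product specs).
def bandwidth_GBps(lanes, gbps_per_pin):
    return lanes * gbps_per_pin / 8.0

print("HBM-style, 1024 lanes @ 2 Gb/s/pin :", bandwidth_GBps(1024, 2.0), "GB/s")
print("Narrow/fast, 16 lanes @ 32 Gb/s/pin:", bandwidth_GBps(16, 32.0), "GB/s")
```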

Silicon interposers are very resistive channels. “The good news is they’re short and they’re better controlled over a temperature range like silicon, but now there are different effects that you didn’t see as prevalent in a traditional PCB,” he said. “And so that resistance of the channel is causing a lot of struggles because you want to get good signal quality. So you must look at all the different parameters around that channel, such as the thickness of the metal, the width of the metal, the placing of the metal, all of which is relatively new to designers. It’s even more challenging because there are three different foundries that all have different design rules that they have to deal with, so we don’t just do it once. We have to go through this process for every foundry node, and each customer wants to do something a little bit different.”
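
To see why the interposer channel is so resistive, compare a short route in thin interposer metal to the same length in standard PCB copper. The geometries below are assumptions chosen only to illustrate the order-of-magnitude difference.

```python
# Rough comparison (assumed geometries) of interposer vs. PCB trace resistance:
# interposer metal is orders of magnitude thinner, so even a short route has
# significant series resistance.
RHO_CU = 1.9e-8   # ohm*m, assumed effective copper resistivity

def trace_ohms(length_mm, width_um, thickness_um):
    return RHO_CU * (length_mm * 1e-3) / ((width_um * 1e-6) * (thickness_um * 1e-6))

# 5 mm interposer route in ~1 um-thick metal vs. a PCB trace in 35 um (1 oz) copper.
print(f"interposer: {trace_ohms(5, 2, 1):.1f} ohms")
print(f"PCB trace : {trace_ohms(5, 100, 35):.3f} ohms")
```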

2D vs. 3D
A lot of calculations are done on individual transistors from a layout vs. schematic, extraction, and post-layout simulation point of view, said John Ferguson, marketing director, Calibre DRC at Mentor, a Siemens Business. “That more or less works pretty well today for an SoC. There are some cases when it doesn’t, because eventually an SoC is going to sit on something — particularly if it’s over a BGA somewhere. Then you’re going to have some level of additional stress. But when we’re talking about these 2.5D and 3D packages, and you’re putting things on top of each other, you’re putting TSVs in, and you’ve got various microbumps and all kinds of other scenarios, you really can start to warp your chips in ways that you didn’t predict. That causes reliability issues, but it also causes those transistors to behave in ways that you didn’t anticipate.”

Dealing with these issues is tricky. “The best way to figure it out is if you went back to first principles and really captured the detail,” Ferguson noted. “This means you have to put the whole thing together first and know exactly where everything is. The challenge is that once you’ve figured all that out, if you then find a problem, you don’t have any time left to do something about it. You’re already at the end of the design spectrum. You’ve got to find ways to at least capture the big issues early so that you don’t make those mistakes. Then, hopefully, you can do something like post-layout simulation, find where there are issues, characterize them, and do your typical Monte Carlo corner simulations and start bucketing things into what works okay.”
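
The bucketing step Ferguson mentions can be pictured with a toy Monte Carlo loop: vary a couple of process parameters, evaluate a path delay, and sort the samples into pass, marginal, and fail buckets. The sensitivity model and thresholds below are hypothetical, not a real corner flow.

```python
# Toy Monte Carlo "bucketing" pass with a made-up sensitivity model.
import random

random.seed(1)
TARGET_PS = 120.0

def path_delay_ps(sigma_vt, sigma_rc):
    # hypothetical linear sensitivities around a 100 ps nominal delay
    return 100.0 + 15.0 * sigma_vt + 10.0 * sigma_rc

buckets = {"pass": 0, "marginal": 0, "fail": 0}
for _ in range(10_000):
    delay = path_delay_ps(random.gauss(0, 1), random.gauss(0, 1))
    if delay < 0.9 * TARGET_PS:
        buckets["pass"] += 1
    elif delay <= TARGET_PS:
        buckets["marginal"] += 1
    else:
        buckets["fail"] += 1

print(buckets)
```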

To capture the big issues early, he said some assumptions can be made if you know where things are going to go, or if you’re deciding, for example, where you’re going to place different dies on an interposer. “Then you have some flexibility early, and even though you can’t tell to the transistor level where the problems are going to be, you can get some gross, high-level information and say, ‘I’ve got bumps in BGAs and TSVs. I’m on the edge of a die. Part of it hangs over and part of it is not hanging over.’ You can find those things and at least give the user guidance to say this may not be the best place to put this particular component. But again, it’s not as accurate, and the more of the surroundings that you have, the better you’re going to be. That can be a little challenging in a chiplet environment, where there’s a growing idea now of doing hierarchical, bigger and bigger chiplets out of smaller chiplets. And so if I’ve got a die and a bridge, or a couple of bridges, but I don’t know what that is going to be put into later, at that point you can’t change what’s in the chiplet anymore. You only can change where you put it. So it makes it a little challenging.”

All of this requires a lot of computation. “There are a couple of different approaches in the industry,” he said. “One approach is brute force, but scale it out on lots of CPUs and you’ll get good, accurate results. The other approach is to do models of all the different kinds of forces, and then sum them up. That’s much faster. We’ve shown that you can get good accuracy from that, and in a much faster manner.”
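
The second, model-based approach can be pictured as a superposition of precharacterized per-feature contributions, as in the sketch below. The distance-decay kernel is made up for illustration; the brute-force alternative would be a full finite-element solve of the same layout spread across many CPUs.

```python
# Sketch of the model-based approach: sum precharacterized per-bump stress
# contributions by superposition. The radial decay kernel is hypothetical.
import math

def bump_stress(dx_um, dy_um, peak=1.0, decay_um=50.0):
    """Hypothetical kernel: peak stress that falls off exponentially with distance."""
    r = math.hypot(dx_um, dy_um)
    return peak * math.exp(-r / decay_um)

def stress_at(point, features):
    """Superpose the contribution of every bump/TSV at one observation point."""
    x, y = point
    return sum(bump_stress(x - fx, y - fy) for fx, fy in features)

# A 10 x 10 grid of bumps on a 100 um pitch; probe near a corner bump and mid-array.
bumps = [(i * 100.0, j * 100.0) for i in range(10) for j in range(10)]
print(f"relative stress near corner bump: {stress_at((5.0, 5.0), bumps):.2f}")
print(f"relative stress mid-array       : {stress_at((450.0, 450.0), bumps):.2f}")
```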

Adding thermal into that equation makes it even more complicated. “They go hand in hand,” Ferguson said. “Stress will induce heat, and heat will induce stress. You can’t really ignore one and only focus on the other. The extra challenge on the thermal side is where the heat dissipation happens. If I’ve got an SoC, and I’m on a substrate, the distance to the substrate is not so far. But if I’ve got die on a die on a die, and the substrate is far away, you may have a bigger problem. How much of the thermal concern can I capture early in the design phase versus how much can I really only find once I’ve made all of my decisions?”
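
Ferguson’s point about the heat dissipation path can be captured with a simple series thermal-resistance model, as in the sketch below. The resistance and power values are assumptions; the takeaway is that the same die runs hotter when it has to push its heat through every die and bond layer beneath it.

```python
# Minimal series thermal-resistance sketch (assumed values): heat escapes
# downward through each die and bond layer to the substrate and board.
R_DIE = 0.3        # K/W through one thinned die, assumed
R_BOND = 0.5       # K/W through one bonding/underfill layer, assumed
R_SUBSTRATE = 1.0  # K/W substrate-plus-board path to ambient, assumed
T_AMBIENT = 45.0   # degrees C

def junction_temp(power_w, dies_below):
    """Temperature rise through every die/bond layer beneath the die of interest."""
    r_total = dies_below * (R_DIE + R_BOND) + R_SUBSTRATE
    return T_AMBIENT + power_w * r_total

print(f"5 W die on substrate : {junction_temp(5.0, dies_below=0):.1f} C")
print(f"5 W die, 2 dies below: {junction_temp(5.0, dies_below=2):.1f} C")
```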

Different foundries, different processes
This is where work and partnership with the foundries is critical.

“Practically every foundry has its own unique recipe as to how to stack up dies, and the 2.5D or 3D technology between the die and the interposer,” said Saman Sadr, vice president of product marketing for IP cores at Rambus. “That’s actually where the biggest struggle is because these cannot be easily mixed and matched. What it means for an IP interface provider or a chip provider is that you want to have a robust solution that works with these. One complication is that you have to rely on the models that you’re getting from one foundry. And from the IP design perspective, you want to make sure that you’re covering all applications. We have to put in enough configurability and flexibility without compromising the power and area in the IP design, so that if one foundry had a little bit more resistive solution versus capacitive solution, or the microbumps had more capacitance moving from one silicon to interposer layer, it can be managed. Those are the things that have implications on the IP design. We need that configurability and margin built into the design in a creative way so that it doesn’t burden the design.”

The growing emphasis across the design flow is on more analysis up front, especially with increasing miniaturization and advanced packaging.

“Many years ago, we only cared about functional verification when we designed a transistor, then the printed circuit board, and so on,” said Sooyong Kim, senior product manager in 3D-IC chip package systems and multi-physics at Ansys. “As we go down to the lower nodes of submicron or deep submicron, we include more areas for multi-physics to determine whether the circuit is functioning right or not. We look at the functions first, and then we start looking at other aspects like timing, power, and reliability.”

Just putting everything into a 2.5D or 3D-IC package doesn’t make this analysis easier, however, and getting it wrong can be very expensive because there are multiple chips involved. But there also is good reason to move to those packaging approaches, particularly with AI and 5G, because there is so much heterogeneity required — analog blocks, antennas, specialized accelerators and different materials.

“If you look at image sensor designs, they’re putting the image sensor on top, and SOIC (small outline IC) controllers in between,” said Kim. “More importantly, the memory has to be increased and implemented right above the SoC because of the performance need. In addition, the energy that it takes to move that information is too high.”

In automotive applications, this becomes even more essential to get right. “We are looking at more stringent reliability rules for autonomous vehicles, and because we are putting many different things in together, it becomes bulky even though we’re putting it into very small areas,” he noted. “3D is very bulky. There’s a mechanical aspect to it as well. The solder balls we implement nowadays are much smaller, like microbumps, and are using copper pillars. If you just look at the solder balls, there were multiple thousands of connections in the past. Now we’re looking at millions of connections between the chips. This means all of the physics have to be solved at the same time in order to tackle that problem. The physical issues in the different aspects of thermal, electromigration/IR drop, and mechanical, on top of the electrical impact that used to be there, all have to be solved at the same time.”

This changes the design paradigm. “Engineering teams need to take this impact into account in the very early stage,” he said. “Traditionally in the 2D flow they didn’t really have to worry about how the entire system would be configured because individually it could be isolated. But now we’re putting everything together into one entire structure, and those pieces are interfering with each other quite a bit. You want to come up with a prototyping flow where you can understand up front the impact of signal integrity, power integrity impact, thermal integrity, or even mechanical integrity.”

This also means that, from the design flow perspective, design engineers must look at the problem from a system point of view, considering on-chip as well as off-chip effects across the multiple physical domains involved.

“Even when we are exploring, we need to do analysis and simulation,” said Synopsys’ Horner. “Every step needs to be verified. We’re seeing more demand and more need to do earlier exploration, and to spend more time on the exploration phase of the design than actually doing the design. But once you start getting the design done, it’s not that you just design and then analyze and simulate and you’re done. You have to do a lot of iteration. So how do you minimize the number of iterations?”

Conclusion
Yeap broadly categorizes what is needed into two camps. “First is analysis. For all of these complicated physical effects, you need a better analysis engine. You want a better measure of your parasitic extraction and better modeling of your power networks. You want to consider all the thermo-mechanical effects. These are all in the area of analysis – more and more physics. Second is design automation. Due to all of this complexity, designers are overwhelmed. To increase designers’ productivity, they need better tools. We’re addressing this problem by giving the designer better tools, first by incorporating a higher-accuracy analysis engine, which gives the designer a toolbox for tradeoff capability. Ultimately, we’ll give the designer automatic optimization capabilities, and design tools are progressing in this way.”

Finally, models from the foundries become really important in this early analysis as well, which is why EDA tool providers are working more closely than ever with the foundries.

“It’s no longer enough for the foundry to just put out the PDKs,” Yeap said. “They have to talk to the software guys. They have to ask the software guys, ‘Can you do this and that?’ ‘If you want to increase the accuracy by modeling this, and if you want to create a library this way, what kind of limitation will you hit in the software tools? What kind of impact will you have in the entire design flow?’ We in EDA need the foundry, and by asking them if they will tweak their process and library this way, it will improve the efficiency.”


