Experts at the table, part 1: Lines blur with the middle of line as RC delay increases, reliability and yield become more difficult to achieve, and costs skyrocket.
Semiconductor Engineering sat down to discuss problems with the back end of line at leading-edge nodes with Craig Child, senior manager and deputy director for GlobalFoundries’ advanced technology development integration unit; Paul Besser, senior technology director at Lam Research; David Fried, CTO at Coventor; Chih Chien Liu, deputy division director for UMC’s advanced technology development Module Division; and Anton deVilliers, director of patterning technology and senior member of the technical staff at Tokyo Electron. What follows are excerpts of that conversation.
SE: How do you define back end of line these days? Where does it start in the process flow and where does it end?
Child: Historically, it started with the contact. You go to your first metallization layer, which is copper. That’s back end of the line. So you come out of the middle of line and you go all the way through to copper and everything that connects to the bumps. That’s M0, M1, all the way through to the bumps. That’s historically how it’s been.
Besser: How far will it push down, given the need for lower resistance in the front end of line?
Child: It’s starting to get a little bit blurry. As everything gets smaller and smaller and resistance becomes an issue, the middle of line and the first back end of line start to blur. Are we doing local interconnects in the back end of line or in the middle of line? Is it copper? Is it something else?
deVilliers: It is getting blurry. And if you look at the advent of things in the memory world like buried word lines, that’s a piece of metal that is buried down into the middle of line. So that gets blurry. If you look at what that might do to logic with horizontal nanowire, those need local interconnects routing signal channels to somewhere in your middle of line. You’re accessing devices that are clearly middle of line. So it’s blurry there. It’s also blurry on the back end. Packaging is being called the next way to scale. And now we see heterogeneous 3D with a lot of back end of line interconnects. Litho comes back with dimensions that are back end of line centric, and we have blurring on the other side.
SE: What new challenges are showing up that we didn’t run into in the past?
Liu: RC delay is the most challenging issue for the back end of line. Right now the industry is using organosilicate, but its mechanical strength is very weak. That is pushing people to go further to solve this problem.
Fried: So we’ve got yield, reliability and RC delay issues. To that you have to add in cost. We’re looking at more and more layers in the back end. Anything you do to solve problems on one of these layers, you’re doing five, six, seven or eight times, depending on your stack. We can solve a lot of these problems. But if you add a tremendous amount of cost at each of these levels, and then repeat it up the stack, you defeat the purpose of scaling in the first place. Reliability, RC, yield and cost are the big issues.
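The RC delay the panelists keep returning to can be illustrated with a textbook first-order wire-delay model (an assumption for illustration, not a formula from the discussion): for a wire of length $L$, width $W$, thickness $T$, spacing $S$, resistivity $\rho$, and dielectric constant $k$, considering only lateral capacitance to neighboring lines,

```latex
% First-order interconnect delay (lateral capacitance only; illustrative)
R = \frac{\rho L}{W T}, \qquad
C \approx \frac{k\,\varepsilon_0\, L T}{S}, \qquad
\tau = R C \approx \frac{k\,\varepsilon_0\,\rho\, L^{2}}{W S}
```

Shrinking $W$ and $S$ at each node drives $\tau$ up unless $\rho$ or $k$ comes down, which is why conductor resistance and low-k dielectrics appear together in the panel's list of problems.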
Child: We’ve really had a unique nexus in this technology. Is 7nm the last optical node? Are we going to try to scale optical further than 7nm? The reason that’s an issue is, if you go back a number of years, krypton fluoride was big. It was the workhorse. 193nm was used for dry. Immersion was just peeking its head in the door. There were a few companies investing in what was next, which was 157nm. There were conferences where people complained about the source power on 157 and the complexity of the beam. You had to evacuate the beam train. The masks were expensive. That all sounds very familiar. But when the cost was weighed against what you got in return, it was canceled. There is no 157 anywhere now. That whole equation of lambda over NA (wavelength of light divided by numerical aperture, or λ/NA) was taken completely off the table. Every company was forced to be clever. NA was improved. Resist systems were improved. Once we ran out of that scheme, we started going to more complicated integration schemes. That’s where we are now. For 7nm, the integration schemes are incredibly complicated. The mask count is going through the roof. If you look at just the back end of the line, the mask count is equal to what not very long ago was the mask count for the whole process. And the cost is going through the roof.
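The λ/NA scaling Child refers to is the Rayleigh resolution criterion. As a rough worked example with standard values (these figures are assumptions, not numbers from the panel):

```latex
% Rayleigh criterion: minimum printable half-pitch
CD = k_1 \,\frac{\lambda}{NA}
% e.g., 193nm immersion: k_1 \approx 0.30,\ \lambda = 193\,\text{nm},\ NA = 1.35
% CD \approx 0.30 \times 193 / 1.35 \approx 43\,\text{nm}
```

With $\lambda$ and $NA$ fixed at their 193i limits and $k_1$ near its practical floor, single-exposure resolution stalls around 40nm half-pitch, which is why finer pitches require the multi-patterning schemes, and the mask counts, that Child describes.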
Fried: That’s eight levels for a single line and via—eight critical masks.
Child: Yes, and they’re all immersion. Overlay is a big deal. Who’s going to decide what we do next? Is it the engineers, or is it the executives? They may say you can’t have 199 masks. We’re at a very critical point right now. We all know about reliability and RC, but can we afford to do it? That’s the biggest issue we face right now.
SE: Is this a surprise? Or has it been building slowly?
deVilliers: For many years, memory has been leading a lot of these dimensions. That has changed recently, because memory has diverted its strategy to 3D. But memory showed us that a certain density of scaling is impractical for a lot of reasons—electrical characterization, performance and cost. We couldn’t continue to scale in 2D, so we had to move to 3D. What memory did was to make one lithographic pass more efficient. We have more edges. If you look at integrations that are being looked at here, such as horizontal and vertical nanowires and some of the more MLC (multi-level cell) scale paths that might come up, how many edges do you get for one litho pass? If you can get more edges out of the same litho pass—and in a memory cell you can get 128 edges from one litho pass—then you can drive efficiency in the use case irrespective of EUV or 193. It doesn’t really matter. We need to be as efficient as possible with the money we spend in litho. That paradigm hasn’t changed much at all. It hasn’t crept up on us. The cost of those edges is important. We have to make money with the parts we build. The economics has taught us a lesson again and again.
Child: If you look at the challenges with RC, it’s related. Dielectric scaling is a challenge. Capacitance scaling has slowed, so there is a lot of pressure on R. There are two problems with R. First, with uni-directional patterning, to connect transistors you have to go up through a couple of vias, over a line, and down a couple of vias. That’s not getting the attention it deserves. It’s the via resistance, and the resistance of that barrier, whose thickness is not scaling. We need to find a way to lower the barrier resistance. Intel presented a paper where it discussed barrier-less integration. Hopefully that works. But right now the barrier thickness is not scaling, and we need a conductive barrier, and we need to figure out how to lower that resistance. That is driving a lot of problems.
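The barrier-scaling problem Child describes can be sketched with a simple cross-section model (the dimensions below are illustrative assumptions, not cited values): a copper line of drawn width $W$ and height $H$ is lined with a poorly conducting barrier of thickness $t_b$ on both sidewalls and the bottom, so only the remaining cross-section carries current.

```latex
% Effective line resistance with a non-scaling barrier liner
R \approx \frac{\rho_{Cu}\, L}{(W - 2 t_b)(H - t_b)}
% Example (assumed): W = 20\,\text{nm}, H = 40\,\text{nm}, t_b = 2\,\text{nm}
% conducting area = 16 \times 38 = 608\,\text{nm}^2 vs.\ 800\,\text{nm}^2 drawn,
% i.e., about 24\% of the cross-section lost to the barrier.
```

Because $t_b$ stays roughly fixed while $W$ shrinks node over node, the fraction of the trench consumed by the barrier grows each generation, which is why barrier-less (or thinner, more conductive barrier) integration keeps coming up.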
Fried: There hasn’t been a paradigm shift in architectures. But if you look at self-aligned vias—which is an integration technique—from a routability perspective it lets you make denser cells by lining them up and putting them on a single exposure. That’s not a game changer, but a lot of these techniques add up.
deVilliers: Memory showed us two game changers. One is 3D. The other game changer that logic didn’t deploy was multi-level cells. There is discussion about how to embed memory inside a transistor. But now you have to re-do every standard cell you have in your library, which is huge. But it requires the adoption of the thought process that you can change the fundamental density electrically, so you don’t have to go on and on with this crazy patterning. There is a limitation we’re facing, which is that you cannot have a scheme with so many masks that you can’t make money.
SE: Does 3D change the discussion, particularly with die stacking and TSVs?
Child: From a memory perspective, clearly. From a foundry perspective, 2.5D and 3D have promise. But the issue we have is there are so many different customers and needs that offering a 3D solution is almost a one-off. Until we can get efficient with offering that, it’s going to be hard to make money. We need to have the gains from 3D and enough volume to make it worthwhile.
Fried: From the logic side, 3D has the potential to solve a footprint problem. You can put more logic in a similar footprint. But fundamentally it doesn’t address the cost or density problem.
Liu: I don’t think 3D IC can solve these problems right now.
SE: Let’s shift this discussion a bit. Where are we with air gap? IBM introduced this concept in 2007 but we haven’t seen much of it. In fact, only Intel appears to be using it. Why?
Liu: The foundry guys need to establish the design rules for air gap. That will be challenging.
Besser: Do you see a lot of pull for air gap?
Liu: Not right now, but we have to prepare for that. The foundry guys need to do a lot of process margin research for air gap structures. We need to understand how to apply this technology.
Child: The problem is that if you offer it, they will come. You have to have very solid design rules and implementation. An IDM has the luxury of designing to that, implementing it very specifically. But if you offer it to 50 customers, you have to be sure the whole infrastructure is in place. That’s where the delay is.
Fried: It’s really the only fundamental dielectric improvement we have had in generations. The dielectric constant has been at 2.4 for five or six generations. We haven’t had a real dielectric improvement. If air gap was ready, there would be tremendous pull for it.
Child: But you have to design to it.