Challenges In Stacking, Shrinking And Inspecting Next-Gen Chips

One-on-one with Lam CTO Rick Gottscho.

Rick Gottscho, CTO of Lam Research, sat down with Semiconductor Engineering to discuss memory and equipment scaling, new market demands, and changes in manufacturing being driven by cost, new technologies, and the application of machine learning. What follows are excerpts of that conversation.

SE: We have a lot of different memory technologies coming to market. What’s the impact of that?

Gottscho: It’s obvious that DRAM scaling is getting more and more difficult. We still see three more generations ahead, but the cost and performance benefits from each generation are getting smaller. Something has to fill the void. And then there is this storage-class memory space between NAND and DRAM, which PCRAM (phase-change memory) or XPoint memory fills partially, but not completely. PCRAM or XPoint can also create new end-use cases, thus opening up new vectors for market growth that didn’t exist with traditional NAND and DRAM. Our perspective is there won’t be one thing that replaces DRAM. We see a variety of solutions. There might be three or four variations that fill that space and replace DRAM or parts of the DRAM market. We believe that whatever the solution space consists of, it will involve 3D architectures.

SE: So where does Lam fit into this picture?

Gottscho: It’s having equipment that can fill high-aspect-ratio structures — and it’s not just a vertical structure. It’s a horizontal-vertical combination — inside-out filling that we do today with tungsten, building up a stack of materials like ONON (oxide/nitride) or OPOP (oxide/polysilicon), and then etching high-aspect-ratio structures through that. We think the solutions that came out of the 3D NAND inflection from 2D NAND will be applicable to the new memories, whatever they might be. Complexity, among other things, will come with the introduction of new materials, particularly for something like the MRAM stack, which is not only complicated, but also sensitive to process conditions, and therefore difficult to etch vertically. That’s why, to date, you don’t see any high-density standalone MRAM. You see it all being embedded into logic, which is a consequence of the materials.

SE: Some of this technology now has to last 15 to 20 years, particularly for automotive and medical. What kinds of demands are you seeing on the equipment side?

Gottscho: When you’re dealing with customers serving the automotive industry, their tolerance for change is extremely low. It’s imperative that you build robust solutions up front, because if you run into an issue and you need to change something — whether that’s a hardware or process change — it’s a very expensive, time-consuming process. That’s different from the consumer electronics or the smartphone market, where everyone is swapping out devices every two years or less. And in DRAM, the risk tolerance is much, much higher because they’re generating a whole new device technology every 12 to 18 months. That’s the big difference with automotive and anything where you have direct impact on human safety. For that reason, the risk tolerance is very low. It’s a different set of challenges for us. It includes everything from obsolescence management to trying to get things right the first time.

SE: What will happen with 3D NAND? We’ve gone from 32 to 48 to 96 layers, and now announcements for 128.

Gottscho: The vertical scaling is continuing.

SE: Will we see 128 this year?

Gottscho: For sure with 128, but what will happen with the 192/196 — whether it’s this year or next — isn’t clear yet.

SE: Will those be single-deck?

Gottscho: It’s mostly double deck, but there’s some single-deck stuff out there, as well.

SE: Will 3D NAND scaling slow after 192/196 layers?

Gottscho: We are optimistic about 3D NAND scaling. There are two big challenges in scaling 3D NAND. One is the stress in the films that builds up as you deposit more and more layers, which can warp the wafer and distort the patterns, so when you go double deck or triple deck, alignment becomes a bigger challenge. We’ve come out with a new product to do backside deposition to compensate for stress on the front side of the wafer. We also have solutions to reduce the intrinsic stress in the films. Backside compensation and stress reduction help with wafer-scale warpage, but they don’t help with within-die, in-plane distortion very much. In fact, sometimes they can make it worse because now you have all this stress, and it’s like clamping a wafer. You’ve forced it to go flat. All of that stress gets manifested in distortion of the patterns, so you have to attack the intrinsic stress, as well.
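
As a rough illustration of why thicker mold stacks mean more wafer bow (this is not Lam's model; the material properties, stresses, and stack dimensions below are purely hypothetical), the classic Stoney approximation ties average film stress and total stack thickness to wafer curvature and bow:

```python
# Back-of-the-envelope Stoney estimate of wafer bow from film stress.
# All material properties, stresses, and stack dimensions are illustrative.

E_SI = 130e9                     # Young's modulus of Si, approx. (Pa)
NU_SI = 0.28                     # Poisson's ratio of Si, approx.
M_SUB = E_SI / (1 - NU_SI)       # biaxial modulus of the substrate (Pa)
T_SUB = 775e-6                   # 300 mm wafer thickness (m)
RADIUS = 150e-3                  # wafer radius (m)

def bow_from_stack(pairs, t_pair, avg_stress):
    """Stoney: curvature k = 6*sigma*t_film / (M_sub*t_sub^2);
    bow ~ k * R^2 / 2 for a spherically bowed wafer."""
    t_film = pairs * t_pair
    curvature = 6 * avg_stress * t_film / (M_SUB * T_SUB**2)
    return curvature * RADIUS**2 / 2

# Hypothetical ONON mold stacks: same average stress, more pairs, more bow.
for pairs in (32, 96, 192):
    bow_um = bow_from_stack(pairs, t_pair=50e-9, avg_stress=100e6) * 1e6
    print(f"{pairs:>3} pairs: ~{bow_um:.0f} um of bow")
```

At a fixed average stress, bow grows roughly linearly with the number of pairs, which is why both backside compensation and intrinsic stress reduction matter more at 96 and 192 pairs, and why neither addresses in-plane pattern distortion by itself.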

SE: What’s the solution?

Gottscho: We’ve come up with modified versions of the films to enable us to dramatically reduce that stress. That’s enabling the scaling to continue. The other big challenge is cost scaling. Unlike lithographic shrinks, where you can get more devices in the same area for effectively the same cost, in 3D NAND you build up more layers. It takes you longer to deposit films, longer to etch — and etch time, in particular, scales non-linearly in the wrong direction as the aspect ratio increases. We’re working to reduce the thickness of the films. You get more layers, but they’re not as thick, so that helps with the cost problem.
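
To make the cost argument concrete, here is a toy model (the 1.5 exponent and every other number are invented for illustration, not Lam or customer data): bits grow linearly with layer count, but if etch time grows super-linearly with stack height, cost per bit stops improving unless the films also get thinner.

```python
# Toy cost-per-bit model for 3D NAND vertical scaling. All numbers invented.
# Deposition time is assumed linear in stack height; etch time super-linear.

def relative_cost_per_bit(layers, layer_thickness, base_layers=64,
                          base_thickness=1.0, etch_exponent=1.5):
    """Cost per bit relative to the base stack (etch_exponent > 1 stands in
    for etch time scaling 'in the wrong direction' with aspect ratio)."""
    height_ratio = (layers * layer_thickness) / (base_layers * base_thickness)
    etch_cost = height_ratio ** etch_exponent
    depo_cost = height_ratio
    bits = layers / base_layers
    return (etch_cost + depo_cost) / (2 * bits)

for layers in (64, 128, 192, 256):
    same = relative_cost_per_bit(layers, 1.0)
    thinner = relative_cost_per_bit(layers, 0.8)   # 20% thinner layers
    print(f"{layers:>3} layers: cost/bit x{same:.2f} (same thickness), "
          f"x{thinner:.2f} (20% thinner films)")
```

In this toy model, stacking more layers at constant film thickness barely improves cost per bit, while thinning the films restores most of the benefit, which is the tradeoff described above.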

SE: How about deposition?

Gottscho: We’re also looking at ways to increase deposition rates. We’ve made dramatic improvements in that area. And as in the case of etching, we’re working on different approaches to increase the inherent etch rate so that you don’t have such a terrible falloff as you increase the aspect ratio. And then, as you build up more layers and have more devices, you start to worry about things like RC delays with your wordlines. That’s driving new metallization, much like it is in logic, to deal with that problem. Those are some of the scaling challenges, and a lot of progress is being made in each one of those areas, which is why we’re pretty bullish about ongoing scaling. We’ve said [256-layer 3D NAND] will happen. Where 3D NAND will end is uncertain. Cost will continue to be a challenge. Stress becomes more challenging as we build up more and more layers.
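
For readers unfamiliar with the wordline RC concern, here is a rough sketch with generic geometry, capacitance, and resistivity numbers (none tied to a real device or to any particular replacement metal): longer, thinner wordlines mean more resistance and capacitance, and a lower-resistivity fill pulls the delay back down.

```python
# Rough wordline RC delay estimate. Geometry, capacitance, and resistivity
# values are generic placeholders, not data for any real device.

def wordline_delay(length, width, thickness, resistivity, cap_per_length):
    """50% step-response delay of a distributed RC line, ~0.38 * R * C."""
    r_total = resistivity * length / (width * thickness)   # ohms
    c_total = cap_per_length * length                      # farads
    return 0.38 * r_total * c_total

LENGTH = 50e-6          # wordline length (m)
WIDTH = 30e-9           # line width (m)
CAP = 200e-12           # capacitance per length (F/m), ~0.2 fF/um

for name, rho in (("thin-film W", 20e-8), ("lower-rho metal", 12e-8)):
    for t in (30e-9, 20e-9):
        d_ps = wordline_delay(LENGTH, WIDTH, t, rho, CAP) * 1e12
        print(f"{name}, {t*1e9:.0f} nm thick: ~{d_ps:.0f} ps")
```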

SE: Stress has become a problem in everything from advanced packaging to 3/2nm to the films that are being deposited on these chips. This was always a problem. Now it’s much more of a problem, right?

Gottscho: It’s always been there. The degree to which it becomes limiting depends on the device. With 3D NAND, the challenge of managing stress is unparalleled. But stress also has been used for enhancement with strained silicon, which enhances mobility.

SE: There isn’t too much talk about using it for a competitive advantage anymore, though.

Gottscho: No, it’s more of a headache. When we get a spec on a film, stress is always there. Deposition is one thing. Composition of the film is another. They have to be either tensile or compressive, and within a certain range. Clearly in the case of 3D NAND, it’s more of a problem. With the initial implementation of 3D NAND, with 32 pairs, stress was an issue. But it’s far worse and more critical at 196, because everything adds up. And then if you go double deck, stress becomes even more important because of the alignment issue. So in 3D NAND, it’s getting more challenging.

SE: In addition to structural stress, we’re also hearing about more environmental stress, particularly in automotive and industrial applications. In automotive, there are 7/5nm chips under extreme heat and vibration.

Gottscho: There is a wider range of environmental conditions. Temperature cycling is problematic. And if something is under stress and it’s subject to vibration, it can lead to premature failure. The automotive requirements are much more stringent when it comes to stress, or the temperature dependence of the stress.

SE: With 3D NAND, there is more bit density per node and more bits per cell. Is there enough demand for all of this?

Gottscho: There is strong demand long-term. There is explosive growth in data generation and storage. All of these applications for mining the data are going to feed new applications for more data, so there is an insatiable demand for data and to store the data forever. There’s no reason why you can’t mine data you acquired 10 years ago and extract value from it, provided it is stored in a very accessible way. But if you think about 4-bit and 5-bit cells, Lam can make a difference here. You’re really digitizing a current/voltage characteristic. The precision with which you can divide up that I/V curve depends on whether this device looks exactly the same as the device right next to it or on top of it. So if your memory hole etch isn’t adequately uniform, then each device will be a little bit different in the array, and you’ll be doing a lot of error correction to make ‘this’ device look like ‘that’ device with 4 or 5 bits. The processing precision of both deposition and etch in building up 3D NAND structures is critically important, along with error correction and algorithms and circuitry to enable 5-bit cells.
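
A quick piece of arithmetic (the usable window below is an invented figure) shows why each extra bit per cell tightens the device-to-device matching requirement: the number of states doubles with every bit, so the margin available to each state shrinks geometrically.

```python
# Why more bits per cell demands tighter uniformity: n bits require 2**n
# distinguishable states in the same threshold-voltage window.
# The window value is illustrative.

VT_WINDOW_V = 6.0   # usable threshold-voltage window (V), invented

for bits in (1, 2, 3, 4, 5):
    states = 2 ** bits
    margin_mv = VT_WINDOW_V / states * 1000
    print(f"{bits} bit(s)/cell: {states:>2} states, ~{margin_mv:.0f} mV per state")
```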

SE: This is really about increasing what is essentially data density. What happens in terms of equipment, though? All of this is new and in a state of constant change.

Gottscho: That’s true. The value that Lam brings to high-density memory is precision of the etch and deposition. That’s precision in terms of profile control if it’s an etch, or thickness control if it’s a deposition, not having voids if you’re talking about filling things, and precision hole-to-hole, die-to-die across the wafer, wafer-to-wafer, lot-to-lot, and chamber-to-chamber. It’s all about chamber matching, process control, and process windows so you inherently have uniform results. That simplifies the software requirement to make error corrections to get the maximum bit density.
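
As a sketch of how that multi-level precision might be quantified (simulated data with made-up sigma values, not measurements from any tool), one can decompose the variation of, say, memory-hole CD into the levels listed above:

```python
# Crude nested decomposition of simulated memory-hole CD variation into
# hole-to-hole, die-to-die, and wafer-to-wafer components. Data are invented.
import numpy as np

rng = np.random.default_rng(0)
n_wafers, n_dies, n_holes = 25, 20, 50

wafer_off = rng.normal(0, 0.4, n_wafers)                 # wafer-to-wafer offsets
die_off = rng.normal(0, 0.3, (n_wafers, n_dies))         # die-to-die offsets
hole = rng.normal(0, 0.6, (n_wafers, n_dies, n_holes))   # hole-to-hole noise
cd = 100 + wafer_off[:, None, None] + die_off[:, :, None] + hole   # CD in nm

wafer_means = cd.mean(axis=(1, 2))
die_means = cd.mean(axis=2)
print("wafer-to-wafer sigma:", wafer_means.std(ddof=1).round(2), "nm")
print("die-to-die sigma:    ", (die_means - wafer_means[:, None]).std(ddof=1).round(2), "nm")
print("hole-to-hole sigma:  ", (cd - die_means[:, :, None]).std(ddof=1).round(2), "nm")
```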

SE: What’s changed in terms of data usage and types of data?

Gottscho: We are using it differently and using more of it, but we have a long way to go to realize our vision for what we want to do. A datum is a terrible thing to waste. The problem is we generate copious amounts of data every day. The data are not being thrown away, but they’re not necessarily being archived in a way where mining is straightforward. This is limiting our ability to use it constructively, whether it’s designing a new recipe, a new tool or producing a more repeatable or precise deposition or etch process.

SE: Is that a function of inconsistencies in data and data collection?

Gottscho: It’s a combination of factors, including how the data are stored such that they can be retrieved. If you’re looking for a particular signature and the data weren’t stored as contextual data, then it’s difficult. Or there may be tons of data coming off the tool sensors, but you have to connect it to measurements on the wafer, and those measurements were done on different pieces of equipment. So you need to connect disparate kinds of data from different sources, all the way from structured to unstructured, in such a way that you can write an application that will link them all together and allow you to mine it. That infrastructure is in various stages of maturity throughout the industry.
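
A minimal sketch of that linkage problem (table and column names are invented for illustration): tool-sensor summaries and metrology results that live in different systems can only be mined together if both carry the same contextual keys.

```python
# Hypothetical illustration of joining tool-sensor data with metrology from a
# different system via shared context keys (lot, wafer, step). Data invented.
import pandas as pd

sensors = pd.DataFrame({
    "lot_id":   ["L01", "L01", "L02"],
    "wafer_id": [1, 2, 1],
    "step":     ["etch_memhole"] * 3,
    "mean_rf_power_w":  [1510.2, 1498.7, 1522.9],
    "mean_pressure_mt": [20.1, 19.8, 20.4],
})

metrology = pd.DataFrame({
    "lot_id":   ["L01", "L01", "L02"],
    "wafer_id": [1, 2, 1],
    "step":     ["etch_memhole"] * 3,
    "cd_mean_nm": [99.7, 100.4, 98.9],
})

# The join only works because both sources stored the same contextual keys.
joined = sensors.merge(metrology, on=["lot_id", "wafer_id", "step"])
print(joined[["lot_id", "wafer_id", "mean_rf_power_w", "cd_mean_nm"]])
```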

SE: Who’s writing the training algorithms for that?

Gottscho: We write our own algorithms. We also partner with others for algorithms, and there are some algorithms in the public domain. Part of data science is understanding which algorithms to use for which applications.

SE: But doesn’t that cause some problems? You’ve got different algorithms and they don’t necessarily describe things the same way.

Gottscho: You sift through algorithms and evaluate how well they work, including whether they deliver the accuracy and precision you desire, and how fast they do that.
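
As one hedged illustration of that sifting process (simulated data, and the models shown are arbitrary examples rather than anything Lam uses), candidate algorithms can be compared on cross-validated error and on how long they take to fit:

```python
# Compare candidate regressors on cross-validated error and runtime.
# The data are simulated stand-ins for sensor features predicting a measurement.
import time
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))                                  # sensor features
y = X[:, 0] * 2 + np.sin(X[:, 1]) + rng.normal(0, 0.3, 500)     # wafer measurement

for model in (Ridge(), RandomForestRegressor(n_estimators=100),
              GradientBoostingRegressor()):
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    elapsed = time.perf_counter() - start
    print(f"{type(model).__name__:<26} RMSE {-scores.mean():.3f}  "
          f"({elapsed:.1f} s for 5-fold CV)")
```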

SE: The precision can change over time, though, depending upon what you’re trying to collect.

Gottscho: The precision, in general, is constantly getting tighter and tighter in the semiconductor business. That’s why mining the data and using the results to improve that precision is so important. Those requirements are getting tougher and tougher. This is the virtuous cycle. The data we generate and the algorithms we develop to mine the data then get turned back into making better tools and processes, which make better chips. That enables us to mine the data more effectively. There is a wide range of maturity in the industry. Our customers — semiconductor manufacturers — have more data in a useful form that can be mined compared to an equipment company. We don’t run millions of wafers through our tools until they’re in production, and once they’re in production our customers have access to this data. There are debates about who owns the data, but from a practical standpoint they have access to it readily and we may or may not.

SE: EUV is finally happening. Now we have all of these node names. How do you see all this playing out? Do you see a mad rush to 3nm?

Gottscho: For sure, 3nm will be more difficult than 5nm, which was more difficult than 7nm. My understanding is demand is robust for 7nm and will be robust for 5nm. Some of the nodes are short-lived because they didn’t supply enough benefit, and our customers’ customers may be looking at the next node and holding off. It’s hard to determine which is going to be the killer node and which one may be short-lived. But the overall trend we see is continued demand for leading-edge devices. A lot of that is being driven by the big data activity in artificial intelligence, where you have to crunch an enormous amount of information and you don’t have forever to do it. High-speed processors, dense processors, and memory are critical.

SE: Moving to gate-all-around looks more difficult. Is it really an evolution from a finFET?

Gottscho: Yes. There is a lot more complexity in a nanowire or nanosheet than in a finFET. There are new processes, and those are very challenging. But architecturally it looks a lot like a finFET that’s been sliced up. That’s why people refer to it as evolutionary rather than revolutionary.

SE: Where are we with ALE?

Gottscho: There are ALE solutions in volume production right now, and sometimes quasi-ALE. Just like ALD, you never really run at the limit because it’s too slow. So you make compromises and get close to that limit, and you get most of the benefits, and then you deal with the residual downside of not being at the limit. There’s a tradeoff between productivity and precision with both ALD and ALE. What we call mixed-mode pulsing is highly productive. There’s a spectrum of processes that get closer and closer to the ALE limit. That’s widely adopted. The big question is whether we can push this spectrum further toward a pure ALE or ALD process, because the benefits are significant. That applies whether it’s an isotropic etch or an anisotropic etch. With gate-all-around, there’s an isotropic component that has to be done with extremely high selectivity. ALE would seem to be a natural process solution if you can make it sufficiently productive. That’s the challenge in ALE — how to make it faster.
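
To put rough numbers on the productivity side of that tradeoff (all values illustrative), the rate of a cyclic ALE process is set by how much is removed per cycle and how long each cycle takes, which is why shortening the cycle is the lever that matters:

```python
# Toy comparison of cyclic ALE throughput against a continuous etch.
# Etch-per-cycle, cycle times, and the continuous rate are all illustrative.

def ale_rate_nm_per_min(etch_per_cycle_nm, cycle_time_s):
    return etch_per_cycle_nm * 60.0 / cycle_time_s

CONTINUOUS_RATE = 300.0                  # nm/min, illustrative continuous etch
for cycle_time in (10.0, 3.0, 1.0):      # seconds per ALE cycle
    rate = ale_rate_nm_per_min(0.5, cycle_time)   # ~0.5 nm removed per cycle
    print(f"{cycle_time:>4.0f} s/cycle: {rate:6.1f} nm/min "
          f"({rate / CONTINUOUS_RATE:.0%} of the continuous etch)")
```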

SE: Everything is slowing down at advanced nodes. Test is running earlier and later in the process. We’re seeing this with verification on the design side as well. Is there a point at which all of this no longer works?

Gottscho: It’s dangerous to say things will stop. The limits of plasma etching, for example, were exceeded 20 years ago by a variety of tricks. What we do is change recipes as we etch. We may ramp parameters or pulse, and as we pulse we may change parameters. In the case of deposition, we’ve been quite successful with atomic layer deposition, thanks to configuration options with the highest footprint productivity, which we achieve through our quad station module (QSM) architecture. This allows us to process more wafers per unit area than single-wafer equipment based on lower-density architectures. That increases the output of our tools. And then we’ve made dramatic improvements in how we implement ALD, such that the ALD process goes faster and faster. At every node and in every application we start out with a throughput that is deemed to be unacceptably low, and part of the process of going from initial R&D to high-volume production is making that process go a lot faster. The first thing is to get the technical result on the wafer that you need, and then you optimize productivity. The deposition processes today run slower than they did four or five nodes ago. With a higher aspect ratio they absolutely slow down. But we’ve been able to overcome the theoretical limits for how fast they run by changing the parameters, the recipe, or by using different hardware.

SE: Where is isotropic ALE today?

Gottscho: There is some very nice work in the public domain from Steven George, professor at the University of Colorado at Boulder, College of Engineering and Applied Science, picking the right temperatures to do thermal atomic layer etching without any plasma or any energetic ions. That becomes an isotropic etch. And he’s looked at a wide variety of material systems. So isotropic ALE is feasible. That’s an important characteristic of both ALE and ALD. That first surface modification step being self-limiting is the same in ALD and ALE. In ALD, the next step is to put something down that allows you to grow layer by layer. With ALE, you take that modified layer and you put in some source of energy and you desorb it selectively. You can do that with ions, or you can do it with temperature, or you can do it with photons. You’re cleaving the bonds between the modified layer and the underlying substrate. Those bonds are weaker than the bonds within the substrate itself, which is why you get ‘one layer’ removed at a time. That’s in quotes because it’s typically more than one layer; the surface modification wasn’t just one layer. It has some depth to it.
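
A small sketch of that self-limiting behavior (the saturation curve, depths, and monolayer thickness are invented for illustration): the modification step saturates with dose, so the etch per cycle is pinned near the modified depth, which is typically a bit more than one monolayer.

```python
# Self-limiting surface modification: depth saturates with dose, so the etch
# per cycle barely changes once the dose is past saturation. Numbers invented.
import math

MONOLAYER_NM = 0.25   # rough monolayer thickness, illustrative

def modified_depth_nm(dose, d_sat=0.6, dose_sat=1.0):
    """Modification depth saturating at d_sat as dose grows (arbitrary units)."""
    return d_sat * (1 - math.exp(-dose / dose_sat))

for dose in (0.5, 1.0, 2.0, 5.0):
    epc = modified_depth_nm(dose)      # removal step clears the modified layer
    print(f"dose {dose:>3.1f}: etch per cycle ~{epc:.2f} nm "
          f"(~{epc / MONOLAYER_NM:.1f} monolayers)")
```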

SE: Is this significantly more complicated?

Gottscho: Atomic layer etch actually simplifies the etch process tremendously. It’s very complicated when you’re trying to do surface modification and surface desorption all at once in this soup. With ALE, you’re isolating the different steps. By separating things in time and space, now you’ve broken the problem down into smaller pieces, each of which can be optimized independently.

SE: One last question. As you look out over the industry in the next couple years, what do you see as potential roadblocks?

Gottscho: The cost of development has always been a big concern. Defects are a bigger problem, and metrology is a factor. It’s slow and expensive, and it’s getting harder and harder to measure defects or a profile. It limits our customers’ ability to sample at the end of line. It’s so expensive that you can’t afford to sample, and then you run the risk of running out of control. That’s where virtual metrology and data come in. That’s both a big challenge and a big opportunity. EUV is a challenge with respect to defectivity, but I’m optimistic about those problems getting solved. One of the reasons we went into dry resist was that there didn’t appear to be a viable solution, particularly for high-NA EUV. But as an industry we’ll overcome those challenges. Another concern is big data. There is a huge amount of untapped opportunity, but the problem is that the semiconductor ecosystem struggles to share data. I’m much more optimistic about the health care industry being able to collect data from lots of individuals, anonymize the data, mine it, and then go back to the population in general, and to individuals, and provide tailored solutions based on your epi-genome. It would be wonderful if the semiconductor industry could do the same thing, so that one customer’s data could be combined with another customer’s data, so in some way, someone could mine that data in a differentiated way. Everyone recognizes the value of the data, and they are afraid of letting it go. But that inhibits innovation across the ecosystem.
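
As a crude illustration of the sampling-risk point (the sampling ratios are arbitrary): if end-of-line metrology covers only one lot in k, an excursion can run for several lots before it even has a chance of being caught, which is the gap virtual metrology tries to close.

```python
# Crude illustration of sampling risk: with 1-in-k end-of-line sampling, an
# excursion that starts at a random point waits, on average, (k + 1) / 2 lots
# before a sampled lot can reveal it. Numbers are illustrative.
for k in (2, 5, 10, 20):
    expected_wait = (k + 1) / 2
    print(f"sample 1 lot in {k:>2}: ~{expected_wait:4.1f} lots at risk "
          f"before detection is possible")
```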


