EUV, Deep Learning Issues In Mask Making

Experts at the Table: Deep learning is a hot topic, but can the industry use it for mask making? (Part 3)


Semiconductor Engineering sat down to discuss extreme ultraviolet (EUV) lithography, photomask technologies and machine learning issues with Emily Gallagher, principal member of the technical staff at Imec; Harry Levinson, principal at HJL Lithography; Chris Spence, vice president of advanced technology development at ASML; Banqiu Wu, senior director of process development at Applied Materials; and Aki Fujimura, chief executive of D2S. What follows are excerpts of that conversation. To view part one, click here. Part two is here.


Left to right: Harry Levinson, principal at HJL Lithography; Emily Gallagher, principal member of the technical staff at Imec; Chris Spence, vice president of advanced technology development at ASML; Banqiu Wu, senior director of process development at Applied Materials; and Aki Fujimura, chief executive of D2S.

 

SE: EUV masks are different from traditional optical masks. Optical masks consist of an opaque layer of chrome on a glass substrate. In contrast, an EUV mask consists of 40 to 50 alternating layers of silicon and molybdenum on top of a substrate, resulting in a multi-layer stack. The industry can make EUV masks, but there are still some challenges here, right?


Figure 1: Cross-section of an EUV mask. In EUV, light hits the mask at an angle of 6°. Source: Luong, V., Philipsen, V., Hendrickx, E., Opsomer, K., Detavernier, C., Laubis, C., Scholze, F., Heyns, M., “Ni-Al alloys as alternative EUV mask absorber,” Appl. Sci. (8), 521 (2018). (Imec, KU Leuven, Ghent University, PTB)
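As a rough check on those multilayer numbers (a simplified estimate that neglects refraction in the layers), the Mo/Si stack behaves as a Bragg mirror tuned to the 13.5nm EUV wavelength arriving 6° off normal:

```latex
% First-order Bragg condition for the Mo/Si multilayer (refraction neglected)
m\lambda = 2d\cos\theta
\quad\Rightarrow\quad
d = \frac{\lambda}{2\cos\theta}
  = \frac{13.5\,\text{nm}}{2\cos 6^{\circ}}
  \approx 6.8\,\text{nm}
```

In practice the bilayer period is closer to 7nm once refraction in the layers is taken into account, and on the order of 40 Mo/Si pairs are needed to build the peak reflectivity up into the 60% to 70% range.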

Gallagher: One thing we haven’t touched on is the current pellicle solution. With the polysilicon-based EUV pellicle, you can’t inspect through it with 193nm inspection, which is another reason for actinic inspection.

Wu: During this process, we need inspection to guarantee that the process did not introduce additional defects, even though we cleaned the mask. So it's really complicated. The purpose of the process is to reduce defects, but if the process is not perfect, we can introduce them. This is complicated compared with optical masks, where it's relatively easy.

Spence: People are motivated to solve the problems, and they are making progress. Ultimately, the EUV adopters will drive towards a manufacturing solution. We will do as much as we can to support them. ASML has taken a lead on the pellicles and brought that to a point where there is some feasibility and proof-of-concepts. But the situation is that we are entering production and there are still some things that need to be fixed for high volume. But the basic writing infrastructure is definitely there. The blank infrastructure is also there.

SE: Initially, chipmakers will likely move into EUV production without a pellicle. (A pellicle is a membrane that prevents particles and contaminants from landing on the mask.) What are the implications here? What are the additional steps required if you don’t have a pellicle?

Levinson: You do not want to etch your wafers until you are assured that there is no defect printed from the mask, which could significantly reduce the yield. Right now, we really can't even inspect the mask reliably. So what you need to do is expose some wafers. You print a special wafer and then you inspect that special wafer. In the meantime, you are holding up your production. So the impact on cycle time is huge. It's highly disruptive, even if you come to the conclusion that there is no defect on the mask. Then, if you find a defect, you have to go through this cleaning cycle that's quite difficult. You also need to re-qualify the mask, and so forth. We all thought, 'Let's try to do early manufacturing without a pellicle.' At some level, you can probably do that. But to get back to the kind of efficiencies we need in EUV lithography, a pellicle is critical.

SE: Machine learning is a hot topic. This technology makes use of a neural network in a system, which crunches data and identifies patterns. It matches certain patterns and learns which attributes are important. But for this to work, you need a lot of data. Do we need more data to make it more pervasive?

Spence: There are two things here. There is more data and then there is better data. So if you have a lot of bad data, then you may not make a good decision. But if you have more data, you should make better decisions. We've tried to correct a whole chip with a billion features on it. We've built a model with a few hundred patterns. So we starve ourselves for data to build our model. And then we hope that we can incorporate a lot of physics in it, which will allow it to extrapolate everywhere. But some things are hard to understand in physics, like photoresists or etching. So the more data we get, the more we will be able to improve our models. That will definitely improve the accuracy for wafer correction and also for mask correction. But to me, it seems inevitable that if you have more good data, then you will get a better model.
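The "more good data beats less or noisier data" point can be sketched with a toy calibration exercise; the response function, sample counts, and noise levels below are illustrative assumptions, not real resist or etch measurements.

```python
# Toy illustration: model quality vs. amount and quality of calibration data.
# The "true" process response is a stand-in for behavior we cannot write down
# exactly; sample counts and noise levels are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def true_response(x):
    # Hypothetical nonlinear process response (not a real resist model).
    return np.sin(3 * x) + 0.3 * x**2

def fit_and_score(n_samples, noise):
    # Calibrate a small network on noisy samples, score it on a dense test grid.
    x_train = rng.uniform(-2, 2, size=(n_samples, 1))
    y_train = true_response(x_train).ravel() + rng.normal(0, noise, n_samples)
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    model.fit(x_train, y_train)
    x_test = np.linspace(-2, 2, 500).reshape(-1, 1)
    return mean_absolute_error(true_response(x_test).ravel(), model.predict(x_test))

for n, noise in [(100, 0.3), (100, 0.05), (2000, 0.05)]:
    print(f"{n:5d} samples, noise {noise:.2f} -> held-out MAE {fit_and_score(n, noise):.3f}")
```

Typically the held-out error drops as the calibration set gets larger and cleaner, which is the point being made here.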

SE: What will machine learning accomplish in mask making and lithography? Can we speed up the mask-making process and have better masks?

Spence: It could be both. The number one problem is accuracy. If we look at our overall patterning budget, we have overlay and CD. There are also all the errors that are inherent in the OPC (optical proximity correction) and the MPC (mask process correction) in the mask. So as we keep shrinking, we need a more powerful microscope to look at each of these things. The number one thing will be to make more accurate masks. Then we'll know what the boundary condition is for the runtimes, which have to be no longer than they are today. Then, if we have to go to GPUs, or if the neural network is faster than the physics, that's what we will do in order to get the throughput required.

Gallagher: There is one thing that we haven't really touched on with AI. There is fear among a lot of adopters. Another thing I hear about is the training set and that we may introduce bias. It's the same thing you hear about in other applications. Just being intelligent about how you train the software is going to be important for deep learning. There are a lot of great papers about things you could do. Many people are concerned they don't understand it well enough to want to implement it. They like the data and they like what they are seeing, but there is a certain level of fear with a new introduction.

Wu: We have a lot of potential with AI. These AI methods mostly use artificial neural networks. This method can be used for most of the process tools. The technology is ready. The only thing needed is people to work on it.

Fujimura: There is always a tradeoff between the accuracy you can obtain and how long it takes to run the tool. We've talked about MPC having long run times. OPC and ILT (inverse lithography technology) run times are also getting much longer, and it's getting more difficult. Overall, we are losing on that tradeoff, and EUV is about to throw a huge wrench into the equation by making it much worse. One of the interesting things about Moore's Law, in both the design and manufacturing cycles, is that you get to use the improvement in how fast computers run to enable the next generation. If it were only about scaling, the same algorithm would scale naturally. But that's not quite true. At every node you need to look at more details and effects, and you need to simulate more and different things. Then you come to EUV, where there is a total discontinuity and you have to do a lot more computing. Just keeping up in general is okay, because you get to use faster computers. But when you have all of these other things, and especially a big discontinuity like EUV, you need big help on the computing side, and deep learning comes along just in time. You can do more accurate simulations faster. It's not as accurate as doing rigorous simulations, because deep learning introduces some errors. But it's far better than doing nothing.
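The speed-versus-accuracy tradeoff described here can be sketched with a small surrogate network standing in for a deliberately slow "rigorous" simulation; the simulation, network size, and timings below are placeholders, not real lithography physics.

```python
# Toy surrogate model: trade a little accuracy for a large speedup over a slow
# "rigorous" simulation. The simulation is an artificial placeholder.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

def rigorous_sim(x):
    # Stand-in for an expensive rigorous simulation (artificially slowed down).
    time.sleep(0.001)
    return float(np.sin(4 * x) * np.exp(-0.2 * x**2))

rng = np.random.default_rng(1)
x_train = rng.uniform(-3, 3, 2000)
y_train = np.array([rigorous_sim(x) for x in x_train])   # slow, but done once offline

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(x_train.reshape(-1, 1), y_train)

x_eval = np.linspace(-3, 3, 1000)
t0 = time.perf_counter()
y_ref = np.array([rigorous_sim(x) for x in x_eval])       # rigorous reference
t_ref = time.perf_counter() - t0
t0 = time.perf_counter()
y_fast = surrogate.predict(x_eval.reshape(-1, 1))         # fast approximation
t_fast = time.perf_counter() - t0

print(f"rigorous: {t_ref:.2f}s  surrogate: {t_fast:.4f}s  "
      f"max |error|: {np.max(np.abs(y_ref - y_fast)):.3f}")
```

In this toy setup the surrogate answers orders of magnitude faster at the cost of a small approximation error, mirroring the tradeoff described above.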

Spence: There is another way to look at it. Deep learning can be more accurate. When we talk about physics, we assume we know the physical equations. The problem is that we don't. Let's say you want to know exactly how a photoresist works. If you can write down all of the equations that describe how the development happens, that's great. But even if we know the physics, I don't think we can measure the actual parameters. So there is always a certain point in modeling where you become empirical. The advantage of deep learning is that you bring vastly more data to your empirical model. That's why it becomes better. You can still incrementally improve the physics that you feel confident about, and by using much more data, you calibrate your fitting much better than you could with some pseudo-equation that's maybe a gross simplification of it. The nuances of the process can be recovered if you have sufficient data. Still, you have to test the algorithm, show that it's truly predictive, and show that you didn't just over-fit it. That is a technical challenge. What amount of data do you need in order to have a reliable model? Large amounts of good data will be key to helping us move forward in terms of getting the accuracy that we need to make the overall patterning solution work.
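One concrete way to run the over-fitting check mentioned here is to compare the fit on the calibration data with a cross-validated score on held-out data; everything below is synthetic, and the feature count and split sizes are illustrative assumptions.

```python
# Guard against over-fitting an empirical model calibrated on limited data:
# compare training-set fit with cross-validated fit on held-out folds.
# All data here is synthetic; the sizes are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 4))              # e.g. a few hundred measured patterns
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2] * X[:, 3] + rng.normal(0, 0.05, 300)

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=5000, random_state=0)
model.fit(X, y)
train_r2 = model.score(X, y)                       # fit quality on the calibration set
cv_r2 = cross_val_score(model, X, y, cv=5).mean()  # fit quality on data it has not seen

print(f"train R^2 = {train_r2:.3f}, 5-fold CV R^2 = {cv_r2:.3f}")
# A large gap between the two suggests the model memorized the calibration
# patterns rather than learning something that extrapolates.
```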

Fujimura: In order to be practical, what we have to do is rely on our understanding of the physics and chemistry. We need to understand as much as possible to create a physical model that's augmented with empirical data.
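A minimal sketch of that physics-plus-empirical idea: keep a simple baseline term you trust and let a model learn only the residual from measured data. The baseline, the "measured" process, and the correction model below are all invented placeholders.

```python
# Hybrid model: a physics-style baseline plus an empirical correction learned
# from data. Baseline, data, and correction model are illustrative placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def physics_baseline(x):
    # First-principles-style approximation we trust but know is incomplete.
    return 0.8 * x

def measured_process(x):
    # Stand-in for the real process: baseline plus effects we cannot derive.
    return 0.8 * x + 0.3 * np.sin(5 * x) + rng.normal(0, 0.02, np.shape(x))

x = rng.uniform(-1, 1, 500)
residual = measured_process(x) - physics_baseline(x)   # what the physics misses

correction = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
correction.fit(x.reshape(-1, 1), residual)

x_new = np.linspace(-1, 1, 200)
prediction = physics_baseline(x_new) + correction.predict(x_new.reshape(-1, 1))
```

Because the learned part only has to capture what the baseline misses, it tends to need less data and to extrapolate more gracefully than a purely empirical fit.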

Levinson: It looks like these new tools will provide value. Combined with a lot of the other things we are doing, they will help us get to the point where we can deal with the extraordinary complexity we are facing: chips with billions of transistors and many critical features that, somehow, at the end of the day, we can yield. So it looks like they are going to be useful tools. Will they solve everything by themselves? Probably not. I don't think there has ever been a magic bullet. It's always been a combination of factors.



