What’s Next With AI In Fabs?

Where machine/deep learning is useful and where it’s not.


Semiconductor Engineering sat down to discuss the issues and challenges with machine learning in semiconductor manufacturing with Kurt Ronse, director of the advanced lithography program at Imec; Yudong Hao, senior director of marketing at Onto Innovation; Romain Roux, data scientist at Mycronic; and Aki Fujimura, chief executive of D2S. What follows are excerpts of that conversation. Part one of this discussion is here. Part two is here.

L-R: Yudong Hao, Romain Roux, Aki Fujimura, Kurt Ronse.

SE: Where is machine learning in chip manufacturing today, and how will that change in the future?

Fujimura: Deep learning, a subset of machine learning, is new. Deep learning as applied to semiconductor manufacturing is even newer. But it’s not pervasive yet. Anything that has a lot of software content or software value eventually will be transformed by deep learning. Many things will still be programmed in the conventional way, but deep learning will be inserted to improve accuracy, improve run time, or perform new tasks that were previously impractical. Deep learning is already being put into production flows in mask shops today. However, for deep learning to be everywhere, it will take some time. It might not be a year from now, but it will certainly be 10 years from now. As companies start to explore deep learning and how it can help them, many are finding two things. First, it’s easy to get to a prototype. And second, it’s harder to get from good prototype results to production-quality results. Why? Because you need the right amount and the right kind of data to train deep learning networks successfully.

Ronse: Gradually, as machine learning improves, it should become better and faster than what you can do manually with the tools. All of these systems in the fabs are becoming more and more complex. The EUV scanner is perhaps now the most complex piece of equipment. An etcher also generates lots of data, and there are lots of knobs to tune it. Here, we are trying to use machine learning to predict tool-down situations. Every tool may follow. There is no reason to limit it. Some tools will need it earlier than others, depending on the complexity of the processes. In resist development, though, I don’t think we use machine learning yet. There are many knobs, and we have no data to start with.

SE: What are some of the issues or concerns with machine learning in chip manufacturing?

Ronse: If you don’t trust it, you’re not going to use it. If you try it and see that this is indeed the right solution and it works, then you are going to use it more and more. So gradually, you can implement it.

Hao: We should look at this on a case-by-case basis. Machine learning is great, and it can help us. However, for every case, we need to take a look at what the problem statement is and why we haven’t reached the full potential. Then we can think about what the ideal solution is. It could be machine learning. It also could be pure physics. For example, in metrology, we have something called thin film, which involves thickness measurement of film deposition. In this case, the physics is very mature and accurate, and the computing is fast. So deploying machine learning for the common films doesn’t actually make sense. However, in optical critical dimension (OCD) metrology, where there is a lot more sample complexity, just building an accurate model becomes very difficult or takes too much time. This is where machine learning can help. I do believe in machine learning, but it should be a case-by-case decision.

Roux: To prove that deep learning can provide a reliable solution to a given problem, you need good data. To motivate the collection of data, you need to prove a potential return on investment. And for that, you need good data. To break this cycle, we have to build trust in machine learning, and that takes time and resources, including experts in the domain. We have to be careful not to make false promises, but also keep an eye on the possibilities offered by deep learning. This is a delicate balance.

Hao: On the inspection side, we create a lot of labeled data sets that can be re-used. As a result, the data set can grow over time. That is why I think deep learning can be applied more readily there. In metrology, the labeled data mainly comes from the reference metrology. Oftentimes, you must cut the wafer. Every process step or layer is basically a specific use case, and that labeled data set usually needs to be regenerated every time you change your process. It is harder to accumulate a large data set for applying deep learning.
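The workflow Hao describes is ordinary supervised learning: measurements (e.g. optical spectra from OCD sites) paired with labels from slow, often destructive reference metrology, used to train a fast predictor. A minimal sketch in Python, reducing each spectrum to a single hypothetical scalar feature and fitting a closed-form least-squares line; the feature values, CD labels, and the linear form are all illustrative, not any vendor's actual model:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = w*x + b (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

# Hypothetical training set: one scalar spectral feature per measurement
# site, with CD labels obtained from destructive reference metrology
# (e.g. a cross-section measurement, which is why labels are expensive).
spectral_feature = [0.10, 0.20, 0.30, 0.40]
cd_nm            = [20.0, 25.0, 30.0, 35.0]

w, b = fit_line(spectral_feature, cd_nm)

def predict_cd(feature):
    """Fast inline prediction once the model is trained."""
    return w * feature + b
```

The expensive step is the label generation, which is exactly Hao's point: if the process changes, `cd_nm` must be re-measured and the model refit, so the data set never accumulates the way inspection labels do.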

Roux: ‘Mura’ is one issue. The word means unevenness or irregularity in Japanese, and it refers to a type of defect on a flat-panel display. It can produce repetitive patterns and systematic errors on a display, and the human eye is very sensitive to such structures. Mura can develop for different reasons, so it is hard to describe how to detect it when writing a mask. However, we have access to logs of normal masks. Using these, machine learning is used to model the normal behavior and to estimate how much one mask deviates locally from that normal behavior. It is very convenient in the sense that we don’t have to describe what abnormal means, or which internal states of the system generate Mura. We just have to capture what normal means, with data and the right model.
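What Roux describes is anomaly detection: fit a statistical model of "normal" from logs of good mask writes, then score new logs by how far they deviate. A minimal sketch in Python using per-feature z-scores; the log feature names (`stage_temp`, `beam_current`), values, and the 3-sigma threshold are all hypothetical stand-ins, not Mycronic's actual method:

```python
import math

def fit_normal(baseline):
    """Estimate per-feature mean and standard deviation from normal logs."""
    n = len(baseline)
    stats = {}
    for k in baseline[0]:
        vals = [row[k] for row in baseline]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        stats[k] = (mean, math.sqrt(var))
    return stats

def deviation_score(stats, sample):
    """Max absolute z-score across features: distance from 'normal'."""
    scores = []
    for k, (mean, std) in stats.items():
        z = abs(sample[k] - mean) / std if std > 0 else 0.0
        scores.append(z)
    return max(scores)

# Hypothetical log features from known-good mask-writing runs.
normal_logs = [
    {"stage_temp": 22.0, "beam_current": 1.00},
    {"stage_temp": 22.1, "beam_current": 1.02},
    {"stage_temp": 21.9, "beam_current": 0.98},
    {"stage_temp": 22.0, "beam_current": 1.01},
]
stats = fit_normal(normal_logs)

# A run whose stage temperature drifted far outside the baseline.
suspect = {"stage_temp": 23.5, "beam_current": 1.01}
is_anomalous = deviation_score(stats, suspect) > 3.0  # flagged
```

The point of the pattern matches Roux's: nothing here encodes what a Mura defect looks like; only "normal" is modeled, and anything sufficiently far from it is flagged. Production systems would use richer models (autoencoders, density estimators) over many more features, but the structure is the same.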

SE: Where is machine learning going in the future?

Fujimura: In general, more and more ‘useful waste’ will fuel innovation and drive an ever-increasing demand for computational power. Useful waste is when we use brute-force computing and let a massive computer crunch the data and generate programs that can do things conventional programming could not do. Single-instruction, multiple-data (SIMD) computing, which GPUs made accessible at large scale, is a computing approach that inherently follows the ‘useful waste’ philosophy. Deep learning and sum-product networks are both examples of what was made practical by following it. Much of the computation performed during deep learning training is known to be wasted — the paths being explored don’t end up contributing to the end result. Yet it is better to let that go and use the brute-force power of today’s computing capabilities to attack the problem through sheer numbers of trials and errors, letting it discover what works and what doesn’t. I’m certain this kind of approach will spawn more breakthroughs in the future. Most likely they’ll be extensions of artificial intelligence and machine learning. Most likely there will be automatic programming approaches.

Hao: I’m optimistic about machine learning and artificial intelligence. In the future, AI will help people in every aspect of life. It will also promote technology advancement and help physics, which is the foundation of everything. Machine learning helps us build better chips, which, in turn, gives us more computing power to do more physics.

Roux: In the industry, machine learning offers a set of tools in the toolbox available to engineers, just like optics, image processing, or any other domain. Machine learning is going to be a more obvious part of R&D. Its prerequisite, the availability of data, will be considered during the upfront design of products. The question is how we can collect data, how we can protect it, and how we can trace data across all equipment in a production line. Meanwhile, in the academic world, reinforcement learning is becoming very attractive. Basically, it allows a module to learn from experience by interacting with its environment, rather than from static, labeled observations as in supervised learning. This type of adaptive intelligence is very promising. Combined with digital twins, where one can simulate the environment, reinforcement learning could achieve incredible results.
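The combination Roux suggests — an agent learning by trial and error inside a simulated environment — can be illustrated with tabular Q-learning on a toy "digital twin." Everything here is a hypothetical stand-in: a single process knob with five positions, an assumed optimal setting, and a reward equal to negative distance from that optimum:

```python
import random

random.seed(0)

TARGET = 3          # hypothetical optimal knob position in the simulated tool
N_STATES = 5        # knob positions 0..4
ACTIONS = (-1, 0, +1)

def step(state, action):
    """Toy digital twin: move the knob; reward is closeness to the optimum."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, -abs(nxt - TARGET)

# Tabular Q-learning: learn action values from simulated interaction only.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(200):
    s = random.randrange(N_STATES)
    for _ in range(10):
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# Greedy policy after training: from any position, step toward the optimum.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

No labeled data set is needed — the agent generates its own experience from the simulator, which is precisely why a faithful digital twin makes reinforcement learning attractive for equipment tuning, where exploring on real hardware would be costly.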

Ronse: Maybe one thing we need to talk about is using it for the wrong purposes. Security for these databases will also have to be developed, so that the people who need the data have access to it, while those who should not have it are locked out. That will require software development to protect the data, and also to protect these intelligent machines that we are creating. We need to prevent them from being hacked. That is going to take some time. In addition, autonomous driving is not going to happen tomorrow. All of that has to be put in place. And privacy is also important.

Hao: In the semiconductor industry, deploying machine learning requires a lot of domain knowledge. Companies like Amazon and Google have democratized machine learning, making it more easily available for us to use. However, in our industry, to make the solutions solid and effective, we need to use machine learning technology together with our domain knowledge. Simply throwing our data at those deep learning algorithms will not work.

Fujimura: The singularity is a fascinating question. I would have said ‘no way’ to the question of a singularity arising from deep learning or any of today’s machine learning, because there is no logical reasoning in machine learning today. But is logical reasoning learnable with a multiple hierarchy of meta-learning that has only pattern matching underneath it? I suppose the answer is yes, if you believe that the brain is all we use to think. We’re still far away, but perhaps only a few breakthroughs away. I hope we use it to find tests and vaccines for new viruses quickly before that, though.

Related Stories

Applications, Challenges For Using AI In Fabs

Using Machine Learning In Fabs

Finding Defects In Chips With Machine Learning
