Machine Learning Moves Into Fab And Mask Shop

Experts at the Table, part 2: Where can this technology be applied, why it is taking so long, and what challenges lie ahead.


Semiconductor Engineering sat down to discuss artificial intelligence (AI), machine learning, and chip and photomask manufacturing technologies with Aki Fujimura, chief executive of D2S; Jerry Chen, business and ecosystem development manager at Nvidia; Noriaki Nakayamada, senior technologist at NuFlare; and Mikael Wahlsten, director and product area manager at Mycronic. What follows are excerpts of that conversation. To read part one, click here.

Fig. 1: (L-R) Noriaki Nakayamada, senior technologist at NuFlare; Aki Fujimura, chief executive of D2S; Mikael Wahlsten, director and product area manager at Mycronic; and Jerry Chen, business and ecosystem development manager at Nvidia.

SE: Artificial neural networks, the precursor of modern machine learning, were a hot topic in the 1980s. In a neural network, a system crunches data and identifies patterns. It matches certain patterns and learns which attributes are important. Why didn’t the technology take off the first time around?

Chen: There are three reasons. One is that these models are so powerful that they need to consume a lot of data. Deep learning pulls latent information out of a lot of data, but it needs that data as fuel. You need enough data to feed such a powerful model. Otherwise, the model memorizes the little data it has and becomes overtrained. We haven’t really had access to enough data until more recently.

The second thing is that there has been some evolution in the mathematical techniques and the tools. I’ll call that algorithms and software, and there is a big community now that is building things like TensorFlow and such. Even Nvidia is contributing a lot of the lower-level software, as well as some higher-level software that makes it easier and more accessible for the world.

The last piece of that, of course, is very dense and inexpensive computing. All of that data needs to be digested by all these fancy algorithms. But you need to do it in a practical lifetime, so you need this dense computing in order to process all of that data and come up with a solution, or some kind of representation of what the world looks like. Then you combine that, of course, with physics models, which themselves are also very compute-intensive. This is why a lot of these national supercomputer facilities, like the Summit machine at Oak Ridge National Laboratory, are built with tons of GPUs, almost 30,000 of them. That’s the driving force. They recognize that these two sides of the coin, physics-based computing and data-driven computing, especially deep learning, are both necessary.


SE: Google, Facebook and others make use of machine learning. Who else is deploying this technology?

Chen: It’s pretty much every vertical industry. It’s every company that we would consider as a cloud service provider. So they are either providing cloud infrastructure or they are providing services available from the cloud like a Siri or Google Voice. It’s no secret that they are doing a lot of this stuff. More recently, we’ve seen that technology migrate out into specific vertical applications. Obviously, there has been a lot of activity in a variety of healthcare spaces. These spaces have traditionally used lots of GPUs for visualization and graphics. Now, they are using AI to interpret more of the data they obtain, especially for radiological images in 3D volumes. We see it happening in finance, retail and telecommunications. We see lots of applications for these industrial, capital-intensive types of businesses. The semiconductor industry is the one that is the most obvious and active. The timing is perfect, and there is a lot of progress that we can make here.

Fujimura: Medical is an example. In medical imaging, you are really homing in on exactly which cells are cancerous. Using a deep learning engine, they can narrow it down to exactly which cells are bad. That’s a medical example, but you can imagine the same benefits being derived in semiconductor production.

SE: It appears that machine learning is moving into the photomask shop and fab. We’ve seen companies begin to use the technology for circuit simulation, inspection and design-for-manufacturing. I’ve heard machine learning is being used to make predictions on problematic areas or hotspots. What is the purpose here or what does this accomplish?

Fujimura: A traditional hotspot detector requires a whole series of filters. A human, or even 100 humans, can’t possibly go through an entire mask and find these places, so a traditional detector can only focus on suspect areas. Once a machine learns to do it through deep learning, it will just diligently go through everything. So it’s going to be more accurate.

Chen: In many of the industrial applications, in the end the accuracy may be comparable to a human’s. But there are two sides to accuracy. There is the detection part of it, but there is also the false positive side of it. This is maybe more prevalent in other businesses than in semiconductors, although it’s prevalent there, too. The cost of dialing up your sensitivity to make sure you detect everything might be that you generate a huge pile of false positives. That can be very costly. But there is also a lot of evidence and research showing that deep learning solutions are able, in many cases, to achieve comparable or better sensitivity without the burden of a lot of false positives.
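The sensitivity-versus-false-positive tradeoff Chen describes is easy to see with a toy threshold detector. This is a hypothetical Python sketch with synthetic scores and invented numbers (not from the discussion): lowering the detection threshold catches every hotspot but buries the reviewer in false alarms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "defect scores" from an imaginary hotspot detector:
# 1,000 benign sites tend to score low, 20 true hotspots score high.
benign = rng.normal(0.3, 0.1, 1000)
hotspot = rng.normal(0.7, 0.1, 20)

def detect(threshold):
    """Return (true positives caught, false positives flagged)."""
    tp = int(np.sum(hotspot >= threshold))
    fp = int(np.sum(benign >= threshold))
    return tp, fp

sensitive = detect(0.4)   # dial sensitivity up: catch (nearly) everything
strict = detect(0.6)      # dial it down: far fewer false alarms
print("sensitive (tp, fp):", sensitive)
print("strict    (tp, fp):", strict)
```

A lower threshold can only increase both counts, which is exactly the cost Chen describes. A classifier that separates the two score distributions more cleanly, as deep learning models often do, lets you keep sensitivity high without the pile of false positives.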

SE: Recently, D2S, Mycronic and NuFlare announced the formation of the Center for Deep Learning in Electronics Manufacturing (CDLe). What is the goal here?

Fujimura: There are three general areas. One is about people. In our world, it’s important to have domain knowledge. You can’t just hire somebody who is trained in deep learning and expect them to be able to contribute to what we do. There is a lot of knowledge they have to learn. The only way to get deep learning to work for any of our companies is to marry deep learning expertise with domain expertise. Putting those together in one place is an important factor. So one of the goals is to have people come together at the center, go back to their respective companies as experts in deep learning with domain knowledge, and help others there with deep learning. People are one aspect of this.

The second area is applications of deep learning. Together, we want to accelerate our ability to leverage the platforms, the people, and the synergy of having everybody in one place to bring applications of deep learning to each of our companies as quickly as possible. We don’t know what form that’s going to take, but there are some ideas.

Then, the third area is the deep learning engine itself. The art and technology of deep learning are very new. It only took off a few years ago. It’s amazing how far it’s come even though it’s in a nascent stage, and it’s going to change a lot in the next 10 years. We think some of the changes that are coming will be specific to our kinds of problems, which involve huge amounts of data and demand seven-sigma accuracy. Just being better than a human isn’t good enough for many of our problems. So we want to come up with an infrastructure for deep learning, and a deep learning engine itself, not just applications of it.

SE: Can’t you just develop the technology on your own?

Nakayamada: Deep learning engineers are sometimes very difficult to find. Maybe we can leverage the center to find them.

Wahlsten: This technology is picking up in Sweden. The schools have dedicated programs for deep learning. But we don’t yet have so much of that knowledge in-house, so we could use knowledge from outside experts. This is a perfect region for it. Collaboration is key.

SE: What are some of the other issues here?

Nakayamada: There is a synergy with Mycronic. We have a VSB (variable-shaped beam) background, while they have a multi-beam background, and we are now developing multi-beam. Working with multi-beam experts from a different field will bring some benefits.

Wahlsten: The challenge we see is to find a ‘fail fast’ model. Machine learning is a great technology, but it’s not the solution for everything. It’s equally important to be able to step back and say: ‘Here, we have a good solution. But there, we shouldn’t spend our time.’ That is the idea behind the collaboration.

Related Reading:

Machine Learning Invades IC Production

Reliability, Machine Learning And Advanced Packaging

Fabs Meet Machine Learning