Machine Learning In The Fab

Adding data analysis to process technology development is just the beginning.


Machine learning is exploding, especially where there are massive amounts of data to contend with and lots of potential interactions.

This leads to two obvious insertion points in the semiconductor field. One is on the design side, where just getting an advanced design to function is an enormous challenge. That challenge grows as the reliability requirements of certain markets rise. It’s difficult enough to get a 10nm chip to work in a smartphone for several years, but it’s far harder to simulate how a complex 10nm logic chip will perform in a car or in an industrial operation over 10 or 15 years.

The second place where machine learning is an obvious fit is on the process technology side. Manufacturing an SoC with 5 billion transistors at yields of 90% or higher is a gargantuan task. In fact, it may require an entirely different vocabulary of superlatives, because the amount of data that needs to be processed and the number of rules that need to be created are staggering. As a point of reference, materials companies are talking about impurities in the range of 1 part per quadrillion (15 zeroes).

There are other slices of technology along the flow from initial architecture to final silicon where machine learning will be applied, of course. But where this computer science approach really excels is in sorting through massive quantities of data to find patterns and aberrations, and then setting up parameters within which action can be taken based on various interactions and in the context of other factors. Some of those interactions are legal, some are not, and some are legal at certain times but not others.

The basic concept is to establish a baseline for what is acceptable behavior and when it is acceptable, and then one or more algorithms can “learn” how to optimize that behavior. In the case of complex processes, this makes a lot of sense because the amount of data that needs to be sifted and digested is far too large for any single person or even a team of people to comprehend.
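To make the baseline idea concrete, here is a minimal sketch in Python. Everything in it (the sensor, the known-good data, and the three-sigma control band) is an illustrative assumption, not a detail from any particular fab tool:

```python
import numpy as np

# Establish a baseline for "acceptable" chamber-temperature readings
# from historical known-good runs, then flag new readings that fall
# outside the learned band. All values here are made up.

rng = np.random.default_rng(0)
historical = rng.normal(loc=350.0, scale=1.5, size=10_000)

baseline_mean = historical.mean()
baseline_std = historical.std()
lower = baseline_mean - 3 * baseline_std
upper = baseline_mean + 3 * baseline_std

def is_acceptable(reading: float) -> bool:
    """True if the reading falls inside the learned baseline band."""
    return lower <= reading <= upper

new_readings = [349.8, 351.2, 358.9]  # the last value drifts out of band
print([is_acceptable(r) for r in new_readings])  # [True, True, False]
```

A production system would learn time-dependent and context-dependent bands rather than a single static one, which is where the “when it is acceptable” part of the framing comes in.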

The typical way of handling complexity is to divide the problem into smaller components, stitch those components together, and then figure out the workarounds when things don’t go right. This is why assertions and formal techniques are so useful. When the pieces are put together, those techniques let verification teams follow any change all the way through a design or a process.

While the underlying tools are still essential, the overall efficiency level isn’t high enough to keep up with the rising complexity. In effect, it is becoming a bottleneck. Moreover, engineers typically build in safeguards (also known as margin or guard banding) because there is so much data and complexity. That margin costs money at older nodes, but at advanced nodes it also can impact the performance and energy efficiency of a device. And in the case of process technology, it can slow down wafer processing in the fab.
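As a rough illustration of why that margin is expensive at advanced nodes: dynamic power scales approximately with the square of supply voltage, so even a modest voltage guard band carries a quadratic power penalty. The numbers below are hypothetical, not drawn from this article:

```python
# Back-of-the-envelope cost of a voltage guard band, using the rule of
# thumb that dynamic power scales with the square of supply voltage.

nominal_v = 0.75        # hypothetical nominal supply voltage, in volts
guard_band = 0.10       # hypothetical 10% voltage margin

margined_v = nominal_v * (1 + guard_band)
power_penalty = (margined_v / nominal_v) ** 2 - 1
print(f"~{power_penalty:.0%} extra dynamic power")  # ~21% extra dynamic power
```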

Machine learning applies big-data techniques to these problems, building complex 3D matrices that can map all of the possible interactions in a system. In doing so, it can identify patterns that are not discernible to teams of engineers working independently, or even in parallel, because no team can grasp the whole picture. That allows for much more efficient processes, and it can help to eliminate some of the margining that adds inefficiencies into systems. As a side benefit, it also can be used to reduce the number of restrictive design rules, which gives design engineers more freedom to create new architectures.
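One way to sketch that pattern-finding step: treat each wafer run as a vector of process parameters and let an unsupervised model surface runs whose combination of parameters looks aberrant, even when no single parameter is out of spec. The isolation-forest choice, the feature set, and all the numbers below are illustrative assumptions, not a method described in this article:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one wafer run: [temperature, pressure, gas flow].
# The drifted run is subtly off on every axis at once, the kind of
# multivariate interaction a per-parameter threshold would miss.

rng = np.random.default_rng(1)
normal_runs = rng.normal(loc=[350.0, 2.5, 40.0],
                         scale=[1.0, 0.05, 0.5],
                         size=(500, 3))
drifted_run = np.array([[352.5, 2.65, 38.5]])
runs = np.vstack([normal_runs, drifted_run])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(runs)   # -1 marks suspected anomalies
print(np.where(labels == -1)[0])   # indices of flagged runs
```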

So now the question becomes, what else can be done with these designs? And how else can these devices be processed to get to market faster, more reliably, and with fewer defects? Over the next couple of process nodes, we’re about to find out.


