Next Challenge: Parts Per Quadrillion

The good, the bad and the unexpected in chasing increased purity in materials used in chip manufacturing.

Requirements for the purity of materials used in semiconductor manufacturing are being pushed to unprecedented — and increasingly unprovable — levels as demand rises for chips that remain reliable over ever-longer lifetimes. And while this may seem like a remote problem for many parts of the supply chain, it can affect everything from the availability of the materials needed to make those chips to the ultimate cost of a product.

Two main drivers are behind this trend. One is a concern by OEMs and systems companies about liability in safety-critical applications such as automotive and aerospace, where chips are expected to remain functional for up to two decades. The second is the cost of downtime in mission-critical applications, such as servers in data centers, where devices are expected to last 7 to 10 years, or in industrial operations, where devices are expected to function for up to 20 years. Impurities in materials can impact reliability in unexpected ways, which in turn can reduce a device’s lifespan or the overall yield inside a fab. In all cases, that adds to the total cost, which generally is paid for by the customer.

“You’re only one bad batch away from having the lines down,” said Steve Putna, senior supply chain manager at Intel, in a recent presentation. “The need to detect this on an ongoing basis, and to incorporate ongoing variability in your analysis, is really important. But ideally you would have an overall model that would map back to a holistic mapping of your supply chain, both horizontally and vertically, such that you could readily tell whether that material is likely to be good or bad. It could even drive continuous improvement efforts. So there is a lot of value in building models like this. We are working with certain suppliers internally to do more of this.”
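
For a sense of what "incorporating ongoing variability" might look like in code, below is a minimal sketch of an incoming-material screen built on a rolling baseline of recent good lots. The lot data, parameter names, and 3-sigma limits are hypothetical, and this is not a description of Intel's actual model.

```python
"""Toy incoming-material screen: flag lots whose measurements fall outside
control limits computed from a rolling baseline of recently accepted lots.
Hypothetical sketch only -- lot data, parameters, and limits are invented."""

from statistics import mean, stdev

# Recent accepted lots for one supplier/material, keyed by measured parameter.
baseline = {
    "metal_ions_ppt": [0.8, 0.9, 1.1, 1.0, 0.7, 0.9, 1.2, 1.0],
    "particles_per_ml": [12, 15, 11, 14, 13, 16, 12, 14],
}

def control_limits(history, sigmas=3.0):
    """Return (lower, upper) limits as mean +/- sigmas * standard deviation."""
    mu, sd = mean(history), stdev(history)
    return mu - sigmas * sd, mu + sigmas * sd

def screen_lot(lot_measurements, baseline):
    """Return (parameter, value, limits) tuples for every out-of-limit reading."""
    excursions = []
    for param, value in lot_measurements.items():
        lo, hi = control_limits(baseline[param])
        if not (lo <= value <= hi):
            excursions.append((param, value, (round(lo, 2), round(hi, 2))))
    return excursions

incoming = {"metal_ions_ppt": 2.4, "particles_per_ml": 13}   # hypothetical new lot
flags = screen_lot(incoming, baseline)
print("likely bad" if flags else "likely good", flags)
```

A production version also would fold in genealogy (which sub-supplier lots fed each batch), which is where the horizontal and vertical mapping Putna describes comes in.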

But material purity at advanced nodes also raises several issues that, at least so far, don’t have solutions. Among them:

  • New goals for purity now exceed the ability of some inspection and measurement equipment.
  • When impurities are removed, it’s not always clear what else is being removed. That could change the fundamental nature of materials and how they react over time.
  • Increased purity requires more testing, modeling and analysis, which in turn raises the overall price tag for chips.

Two years ago, the goal for some materials was in the parts per billion range, and most materials companies weren’t sure if that was even possible.

“We’re at parts per trillion (PPT) today,” said Tom Brown, executive director of manufacturing and engineering at Brewer Science. “We’re trying to figure out next how to get to parts per quadrillion. We’re working with equipment where we are pushing tools beyond the capability they’re selling at, which is sub-PPT ions. But electronic or semiconductor grades are moving beyond the level you can buy today.”
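
To put those targets in perspective, the mass fractions work out roughly as follows. This is a back-of-the-envelope illustration only; the 200 kg drum is a hypothetical quantity, not a figure cited by Brewer Science.

```latex
% Mass-fraction purity scales, with a hypothetical 200 kg drum for scale
\begin{align*}
1~\text{ppb} = 10^{-9}:  &\quad 200~\text{kg} \times 10^{-9}  = 200~\mu\text{g of contaminant} \\
1~\text{ppt} = 10^{-12}: &\quad 200~\text{kg} \times 10^{-12} = 200~\text{ng} \\
1~\text{ppq} = 10^{-15}: &\quad 200~\text{kg} \times 10^{-15} = 200~\text{pg}
\end{align*}
```

At parts per quadrillion, the total contaminant in a full drum amounts to a fraction of a nanogram, which is why measurement capability, not just removal, becomes the sticking point.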

This can add significantly to the time it takes to test and certify different materials.

“Today, we have to test every raw material,” said Brown. “Fifteen years ago, we weren’t doing that. There is a lot of genealogy and traceability down to the supplier range of what they’re calling a ‘lot.’”

For the most part, these issues have remained in the industry’s shadows, often in silos. That is beginning to change, however, particularly at advanced nodes where tolerances are much tighter.

“Anybody in the process world — and at Lam, we’re heavily in the clean, deposition and etch markets — is always very mindful about materials,” said David Fried, vice president of computational products at Lam Research. “These range from precursor materials to final deposited materials to sacrificial materials. If you think about etch gases, those gases are never supposed to stay on the wafer. They’re designed to remove material. But the purity and the material characteristics of those gases factor into the final performance, defectivity, and composition of the wafer. That’s incredibly difficult to figure out, because it’s not on the wafer anymore. So how do you trace that back? Materials monitoring and materials sensing in these systems is a huge part of process optimization and quality.”

Nor does it stop there. On the manufacturing side, every step of every process is touched by material purity at some level. Jerry Broz, senior vice president at International Test Solutions — which makes cleaning materials for test probes — said his company is examining what parts per quadrillion looks like and what kind of engineering is necessary to achieve that purity.

“What does it mean if you can’t measure it?” asked Broz. “For back-end test, we’re looking more at occurrences rather than parts per million or per billion, and that’s the way the automotive market is looking at this. We don’t know how parts per quadrillion will affect an MCM (multi-chip module) in automotive, and we don’t know how that will affect the cost of test.”

High-tech detective work
Finding what has gone wrong in semiconductor manufacturing always has been a challenge. But with materials, it’s much more difficult because many of these compounds are designed to disappear during manufacturing. That includes gases, slurries used to polish wafers, films, and photoresists, which are spun onto the wafer.

It’s not even clear how to sample some of these bulk materials. There may be a single defect inside a 55-gallon drum, for example, and it may be buried in some thick, viscous goop.

“This is one of the main reasons the dry resist technology that Lam announced at SPIE, in collaboration with ASML and Imec, is so powerful,” said Lam’s Fried. “The control of the material properties is so much higher, and the amount of waste is so much lower. If you’re able to better control material properties and reduce the waste, the value of that process is massive.”

To make matters worse, some of these materials are mixes of industrial chemicals that are developed for other industries. So, while chip manufacturers may need a single barrel or less, that’s an insignificant part of the overall business for the suppliers. Moreover, those raw materials may sit around for several years before they are used up, and purity demands can change over that time.

Data-driven solutions
Typically, whatever problems escape final test into the field are considered acceptable until proven otherwise. But as artificial intelligence begins making inroads into more applications, the value of semiconductor content is increasing. This is obvious in increasingly autonomous devices, such as cars and robots, but it also is true for industrial applications. As a result, the price of a repair or replacement can be enormous in basic appliances, such as refrigerators, or in automotive recalls.

“The supply chain is becoming much more data-driven,” said Dave Huntley, director of business development at PDF Solutions. “You need to map all of that data. So, you may have hundreds of die and hundreds of wires for each die. Then you need to look at the process, materials and features and create a failure pattern. You also need equipment to support and capture all of that data, and track all of that so that when you get an RMA (return merchandise authorization), possibly with a single click you can explode all of that genealogy.”
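
What "exploding the genealogy" might look like in miniature is sketched below. The lot identifiers and the parent/child schema are invented for illustration and do not reflect PDF Solutions' actual data model.

```python
"""Toy genealogy trace: given an RMA'd die, walk back through every upstream
record it touched. Identifiers and schema are invented for illustration."""

# item -> list of upstream components (wafer, material lots, sub-supplier lots)
genealogy = {
    "die_0412@wafer_17": ["wafer_17", "test_program_v3"],
    "wafer_17":          ["resist_lot_A9", "slurry_lot_B2", "fab_route_N5"],
    "resist_lot_A9":     ["monomer_lot_X1", "solvent_lot_Y7"],
    "slurry_lot_B2":     ["abrasive_lot_Z3"],
}

def explode(item, depth=0, seen=None):
    """Recursively print the full upstream genealogy of one item."""
    seen = set() if seen is None else seen
    print("  " * depth + item)
    for parent in genealogy.get(item, []):
        if parent not in seen:          # guard against cycles or shared lots
            seen.add(parent)
            explode(parent, depth + 1, seen)

explode("die_0412@wafer_17")            # one "click": the whole upstream tree
```

The traversal itself is trivial. The hard part is getting every supplier and sub-supplier to populate records like these consistently, and to keep them for as long as the end product lives.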

The ASTM 1422 standard takes one swipe at this problem, adding “consumables” into the mix rather than just the die on a wafer. And this is where the detective work really begins, because not everything used in the manufacturing process ends up on a chip. In fact, what is left behind sometimes can cause problems.

“What we see is that advanced product quality planning is more critical,” said Julie Ply, director of quality materials at Brewer Science. “The big problem is the lack of data in the sub-supply chain. With the trade war, sub-suppliers are challenged to find materials, and if we don’t have transparency into that, problems can crop up. Typically, you have a ‘thumbprint’ where you fully characterize raw materials when they come in with a qualification process. Sometimes that’s done with a partner, sometimes not. But the goal is to identify a baseline based on many different chemical and structural characteristics. From there, you can see when there are ‘excursions,’ and you can compare that to the thumbprint.”
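
A minimal sketch of the thumbprint idea follows, assuming a handful of hypothetical characterization attributes and a simple z-score distance as the excursion test. An actual qualification process would track far more attributes than this.

```python
"""Toy 'thumbprint' comparison: characterize a raw material at qualification,
then score incoming lots by their overall distance from that baseline.
Attributes, values, and the threshold are invented for illustration."""

from statistics import mean, stdev
from math import sqrt

# Full characterization of qualification lots (the "thumbprint").
qual_lots = {
    "viscosity_cP":      [4.9, 5.1, 5.0, 5.2, 4.8],
    "water_content_ppm": [210, 195, 205, 220, 200],
    "uv_absorbance":     [0.82, 0.85, 0.84, 0.83, 0.86],
}

# Baseline mean and spread per attribute.
thumbprint = {attr: (mean(v), stdev(v)) for attr, v in qual_lots.items()}

def excursion_score(lot):
    """Root-mean-square z-score of a lot across all characterized attributes."""
    zs = [(lot[attr] - mu) / sd for attr, (mu, sd) in thumbprint.items()]
    return sqrt(sum(z * z for z in zs) / len(zs))

new_lot = {"viscosity_cP": 5.0, "water_content_ppm": 265, "uv_absorbance": 0.84}
score = excursion_score(new_lot)
print(f"excursion score = {score:.1f}",
      "-> investigate" if score > 3 else "-> within baseline")
```

The appeal of a composite score like this is that it can flag a lot whose individual attributes each sit inside their own spec limits but have shifted together relative to the baseline.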

That works most of the time, but tracing errors is difficult after the physical evidence is gone. At that point, the only solution is to trace contaminants back through models and statistical analysis, and that requires input from multiple vendors throughout the supply chain. Sometimes the results are surprising to everyone.

“If you make things too pure, you may be removing something that is functionally necessary,” said Ply. “And if there is another material being used and there is a contaminant in that material, that can degrade our materials.”

Because this is not evident through inspection, some form of data extrapolation is required. Surface defect detection technology still has headroom, but it is limited to physical inspection and cannot see into films and other materials. That requires data modeling.

“We’ve used sophisticated machine learning algorithms for many years to perform defect detection and numerous other 2D inspection tasks,” said Tim Skunes, vice president of technology and business development at CyberOptics. “More recently, we’ve also applied deep learning in specialty applications for defect detection. In the context of our MRS (multi-reflection suppression) algorithms, those are not based on machine learning methods. However, we are looking at deep learning methods for future generations of our MRS algorithms.”

More data, more time
At that point, it comes down to who has the best data and the best modeling tools. But another complication enters the picture, because some of these devices are expected to last for a decade or more. This is true in automotive, which is pushing AI logic for assisted and autonomous driving to the most advanced nodes, as well as for industrial applications.

“The new challenge is the length of time for data storage,” said Walter Ng, vice president of business development at UMC. “Sometimes this is 10 or 12 years. A foundry generally keeps data 5 to 6 years for consumer applications, and in others it’s more about golden flows. What we’re seeing in automotive is a more limited set of golden flows with much tighter control to the wafers. To make that work, alignment in the supply chain has to happen. For all substrates that come in, we have a quality assurance procedure in place. As a consumer of those substrates, we also require a certain amount of testing. But for automotive customers, they’re also demanding that suppliers retain all of that data so they can preserve traceability.”

Ng noted this has been in place for some time in the mil/aero world, where there are stringent requirements about quality. But as automotive pushes into more advanced nodes, and as chiplets become part of the strategy going forward, the whole ecosystem needs to be in sync on data.

“There is still a hesitancy to share that data up and down the supply chain,” he said. “Even with chiplets, data sharing is one of the potential stumbling blocks. From a materials standpoint, you’re getting silicon based on different processes, and there are OSATs with different solutions. Everybody needs to be sharing data to facilitate test and debug. It’s one thing to make something under a captive supplier. It’s another to unleash it as an open ecosystem play.”

When problems do arise, everything needs to be reviewed.

“Obviously this doesn’t have to be done with every batch,” said Intel’s Putna. “But if there is an issue that requires further investigation, that would prompt you to go back and take additional parameters. What we found is that for a polymer, there was a functional difference in the morphology between two systems. They were not the same. We utilized that to conduct a supply chain review of variability in performance.”

The complexity factor
One of the complicating factors in all of this is hyper-customization. As vertical markets splinter into more sub-markets, solutions are being customized for each of those applications. This is happening from the cloud to the edge, and it is adding new stresses to the supply chain.

“Packaging is creating its own version of customization and scaling,” said Ben Rathsack, vice president and deputy general manager of technology and manufacturing at TEL. “Whether you’re trying to do that for a cell phone or a PC system, there will be more customization around that for the form factor and to get closer proximity between your memory and your processor.”

In fact, one of the goals for systems companies/OEMs is the ability to trace failures to the individual die on a wafer, the date that wafer was manufactured, and all the other components that went into that die.

“There is a lot of value in being able to connect these dots,” said Putna. “We want to integrate both the vertical and the horizontal elements to really understand all the sources of variability in our supply chain that impact certain products’ performance. In most cases, we’re largely not there yet, at least in the materials space. We might be a bit further along in the equipment space.”


