Questions persist about how to deal with an explosion in data, and who has access to it, but changes are on the horizon.
As complexity goes up with each new process node, so does the amount of data that is generated, from initial GDSII to photomasks, manufacturing, yield and post-silicon validation. But what happens to that data, and what gets shared, remain a point of contention among companies across the semiconductor ecosystem.
The problem is that to speed up the entire design-through-manufacturing process, more steps now need to be done concurrently, or in the context of other steps, and often in collaboration with other companies. Still, many of those companies see their slice of the data as competitive, based on a long history of jealously guarding intellectual property.
“The number one issue is data,” said Michael Campbell, senior vice president of engineering at Qualcomm. “The fab gives you a design deck. You have a library. You give them the fab tape and get back silicon, and then you ship out the customer spec. That has to change. A simplistic relationship does not work with 2 billion transistors. And with servers, you’re ultimately looking at 25 billion transistors.”
Campbell said foundries need to share more data with semiconductor companies so the entire manufacturing process can become more interactive. “This affects time to yield, time to manufacturing and time to quality. We have to have a partnership where yield means something. Right now the cost is too high.”
This argument is not new. In fact, it has been a point of contention for at least a decade. But at 16/14nm and beyond, the sense of urgency is rising. For one thing, there are simply more devices to worry about, which creates issues involving dynamic power density, heat and signal integrity—all of which need to be dealt with as quickly as possible. At 10nm and 7nm, leakage current will begin to complicate designs again, after a brief respite provided by the first generation of finFETs. In addition, there are complications with multi-patterning that take time to correct. And all of these issues affect reliability, which in turn affects time to market and yield.
Chipmakers point out that sharing more data earlier will help resolve some of these issues more quickly, when it is easier to fix them, ultimately reducing the cost of developing complex chips. But as the amount of data increases, it’s becoming more difficult to figure out what can be shared, let alone how to make sense of it all.
“There is a need for more data,” said Ben Eynon, senior director for engineering development at Samsung. “There’s also too much data, and we have to deal with unstructured versus structured data. At the same time, the data sampling rate is not close to 100% and it’s not in real time.”
There are other complicating factors, as well, Eynon said. “You need to be able to move data into an area where you can correlate it, and you need to move the processing to the data rather than the other way around. Tool uptime is a big deal, too. But if you can save 30 engineers from tackling things that take two weeks and push that time down to two days, or maybe two hours, this can make a big impact.”
In contrast to what was being done when 65nm was the leading-edge process, foundries are in fact sharing more data. For one thing, it’s easier to share because access is simpler and communication infrastructure has improved greatly. In addition, foundry processes arguably are unique enough, particularly at the most advanced nodes, that what used to be competitive information is no longer directly competitive. And finally, foundries are working much more closely with large chipmakers to understand what data will help them design chips that are manufacturable with reasonable yield.
“Customer expectations are growing because they’re becoming more educated about manufacturing,” said Walter Ng, vice president of business management at UMC. “They’re demanding more data and expecting it to be delivered sooner, and they want the ability to download it themselves.”
But there are also areas where fabs will not share data because they consider it proprietary.
“Fabs do a lot of scheduling around preventive maintenance where they take down tools, and capacity may be impacted significantly,” Ng said. “That allows us to support greater capacity. Fabs don’t want to share that information. What we do want to share is what’s relevant to their chips and design of chips because there is a design-manufacturing interaction. Layout in context can impact yield. But there is a line, and it’s not always clear to the customer, particularly when it comes to tolerances and yield. That’s where the line gets blurry.”
Making sense of data
Large chipmakers have been demanding more data since 2006, when design-for-manufacturing tools first began gaining traction and they could begin correlating, for the first time, design with yield and time to market. Since then, the amount of data has increased to the point where it requires big data tools, including data mining, machine learning and predictive analytics.
These tools have been in use in large data centers for some time. How they fare in semiconductor manufacturing remains to be seen.
“Statistics and manufacturing are not new,” said Bill Jacobs, director of technical product management for Microsoft Revolution Analytics. “What’s new is that we’re dealing with massive amounts of data.”
Jacobs said by leveraging different languages, such as R, machine vision can spot slightly defective ball grid arrays that cannot be detected using standard testing or inspection, and machine learning can be used to identify potential problems with solder joints. Some of this involves predictive analytics, an adjunct to data mining that has been used in industrial operations to increase uptime. The basic idea is that it identifies key variables within large quantities of data, builds correlations and develops likely patterns of behavior.
“Today, we mine data based on pre-determined steps,” said Qualcomm’s Campbell. “Tomorrow we will use machine learning where we cannot pre-determine. That will improve time to market, time to money, and customer satisfaction. The problem we deal with is that each use case is different. So the same part may be used in a washing machine or an industrial machine. You have to write aggressive algorithms to do correlations because little slivers of data can cause customer problems. You have to look at every parameter versus every other parameter. A sensitivity in one customer’s system may work fine for another customer.”
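The all-pairs scan Campbell describes can be sketched in a few lines of Python. The parameter names and readings below are invented for illustration, not drawn from any real product data; the idea is simply to correlate every parameter against every other parameter and surface the strongest interactions first.

```python
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-unit measurements from final test.
params = {
    "vdd_droop_mv": [12.0, 14.5, 11.2, 15.8, 13.1],
    "leakage_ua":   [3.1, 3.9, 2.9, 4.3, 3.4],
    "ring_osc_mhz": [510.0, 498.0, 514.0, 492.0, 505.0],
}

# Correlate every parameter against every other, strongest pairs first.
pairs = sorted(
    ((a, b, pearson(params[a], params[b])) for a, b in combinations(params, 2)),
    key=lambda t: abs(t[2]),
    reverse=True,
)
for a, b, r in pairs:
    print(f"{a} vs {b}: r = {r:+.2f}")
```

In practice the pair count grows quadratically with the number of parameters, which is exactly why Campbell calls the algorithms "aggressive" — a real implementation has to prune or parallelize this scan rather than run it naively.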
Optimal+ already is collecting and cleaning manufacturing test data and making decisions based upon those results. “We ask two fundamental questions at the end of every test,” said David Park, Optimal’s vice president of worldwide marketing. “Is a bad device that’s listed as ‘bad’ actually bad? If it’s not, if it’s a false failure and we can correct it immediately, it improves yield. We also ask the corollary question: Is a ‘good’ device truly good? Because the last thing you want to do is ship a device that isn’t as good as you think it is into the market.”
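One common way to act on those two questions is to treat readings near a test limit as suspect rather than final. The toy dispositioner below is not Optimal+'s method — the spec limits and guard band are made up for the example — but it shows the basic pattern: a marginal result, whether nominally passing or failing, gets flagged for retest instead of being trusted.

```python
# Hypothetical spec limits for a single parametric test.
LOW, HIGH = 1.0, 2.0
# Retest anything within this distance of a limit (illustrative value).
GUARD = 0.05

def disposition(value):
    """Return 'pass', 'fail', or 'retest' for one parametric reading."""
    if LOW + GUARD <= value <= HIGH - GUARD:
        return "pass"    # comfortably inside spec
    if value < LOW - GUARD or value > HIGH + GUARD:
        return "fail"    # comfortably outside spec
    return "retest"      # marginal either way: possible false fail or escape

readings = [1.5, 0.98, 2.2, 1.03, 1.97]
print([disposition(v) for v in readings])
```

A reading of 0.98 fails the raw limit but falls inside the guard band, so it is retested rather than scrapped — catching the false failures Park describes — while 1.97 passes the raw limit but is retested to guard against shipping a marginal part.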
Not all data is equal
Still, the semiconductor industry is unique in its complexity. As a result, the value of data varies greatly by application, by process, and even by company. It can be structured or unstructured, which makes it harder to make sense of, although new tools may simplify that process.
“In the past, we had machine constants, process recipes, and you could look at small quantities of data,” said Samsung’s Eynon. “Today, you have three or four variable interactions, and maybe more than a million sensors per wafer. With machine learning you can reduce the number of sensors to 200 significant ones, which reduces the variable set to the significant ones.”
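A first cut at the pruning Eynon describes is often just a filter: discard sensor channels that barely move before doing anything more sophisticated. The sketch below uses invented sensor names, readings, and a made-up variance cutoff purely to illustrate that step.

```python
from statistics import pvariance

# Toy sensor traces; names and values are invented for the example.
sensors = {
    "chamber_pressure": [0.98, 1.30, 0.75, 1.22, 0.66],
    "cooling_flow":     [4.00, 4.00, 4.01, 4.00, 4.00],  # near-constant
    "rf_power":         [301.0, 305.5, 298.2, 307.1, 295.9],
    "door_temp":        [21.0, 21.0, 21.0, 21.0, 21.0],  # constant
}

THRESHOLD = 0.01  # hypothetical variance cutoff

# Keep only channels that actually vary; constant channels carry no signal.
significant = [name for name, series in sensors.items()
               if pvariance(series) > THRESHOLD]
print(significant)
```

Going from channels that merely vary to the 200 that are *significant* then requires correlating the survivors against an outcome such as yield, which is where the machine learning comes in.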
Thomas Sonderman, vice president and general manager of the Integrated Solutions Group at Rudolph Technologies, contends that the issue goes well beyond just a single chip or foundry. “The big question is how you eliminate risk from the supply chain so you can take action on things that matter. If you have a complicated multi-chip module, you may have multiple chips, multiple OSATs, and you have to connect the dots across the supply chain. You’re going to collect a lot of data, but not all of it will be meaningful. We’re good at collecting massive amounts of data, but where is the good data versus non-good data? You need actionable intelligence or you are not using data in the right way.”
Security adds another complication. There are unequal security levels between equipment and tool makers, and between companies collaborating on chips or systems.
“It’s not just about a single plant,” said Don Harroll, North America sales director at NextNine. “You need to create policies in the enterprise that you can automate and hand down.”
Ever since the foundry model was first introduced, there has been contention about sharing data. Chipmakers argue that foundry insights can save them time and money. Foundries counter that not all of the data is relevant, not all of it is easy to quantify or make sense of, and they already are sharing much more than in the past.
That debate is likely to continue. What is starting to change, though, is that big data tools—data mining, machine learning and predictive analytics—are creeping into the semiconductor manufacturing world to help bring order to a growing problem. How that filters back into the design side, or between the various parts of the manufacturing process, is unknown at this point. But changes are coming to the semiconductor ecosystem, and if other industries are a good indicator, they could have a significant impact on time to market, yield, and cost.
—Jeff Dorsch contributed to this report.