There is plenty of data being generated, but not enough people have access to it.
The chip industry generates enormous quantities of data, from design through manufacturing, but much of it is unavailable or incomplete. And even when and where it is available, it is frequently under-utilized.
While much work has been done to establish traceability and data formats, the cross-pollination of data between companies, and between equipment makers at various process steps, has progressed far less. In the past this was a non-issue, because foundries frequently managed the entire data stack, which they used to identify and fix issues involving power, performance and manufacturability. That allowed them to ensure chips would yield well enough, often with the help of heavy margins and restrictive design rules.
But this kind of rescue operation by foundries happens far less often these days. For one thing, the amount of learning on each new process node, and for each derivative chip, is drastically lower due to smaller volumes and an associated increase in domain-specific designs. So while the transistor structures and standard cells are the same, these heterogeneous designs can vary greatly from one to the next. What works in one market segment or for a particular application may not work for another, and there aren't a billion units being produced to iron out any anomalies.
On top of that, adding margin into advanced-node designs or advanced packages is no longer an acceptable solution. Extra margin diminishes performance and increases power, making a design non-competitive and far too costly in terms of both silicon and chip resources. So foundries essentially have kicked this back to EDA vendors, leaving it to them to avoid problems through better simulation and testing.
The problem is that foundries haven't shared enough of the data that design teams need for smaller runs, even as the push for greater reliability intensifies. As the price of designs goes up, and as more accelerators and various types of memory are added into those designs, the cost of poor yield rises with it. And without that data, neither yield nor reliability can improve significantly.
The problem isn't a shortage of data. In fact, plenty of data is being generated at each manufacturing step, including test, metrology and inspection, and it is continually generated by monitors and sensors inside devices as they are used in the field. What's missing is a way to parse that data so it can be shared on an as-needed basis, without anyone feeling they're giving away competitive information. In effect, more process steps are shifting both left and right, but the relevant data isn't following for any companies except large IDMs with their own fabs.
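One way to picture that kind of as-needed sharing is tiered access to the same underlying records, where the data owner keeps full-resolution measurements and a partner sees only the fields or aggregates relevant to the problem at hand. The sketch below is purely illustrative; the record fields (lot, wafer, step, parameter) and the sharing policy are assumptions, not any existing industry standard.

```python
# Illustrative sketch of tiered sharing of per-step manufacturing records.
# Field names and policy tiers are hypothetical, chosen only to show the idea.
from dataclasses import dataclass, asdict
from statistics import mean

@dataclass
class Measurement:
    lot_id: str       # internal lot identifier (competitively sensitive)
    wafer_id: str     # internal wafer identifier (competitively sensitive)
    step: str         # process step, e.g. "metrology" or "final_test"
    parameter: str    # measured parameter, e.g. "cd_nm"
    value: float

# Which fields each tier is allowed to see.
POLICY = {
    "internal":  {"lot_id", "wafer_id", "step", "parameter", "value"},
    "partner":   {"step", "parameter", "value"},   # no lot/wafer traceability
    "aggregate": {"step", "parameter"},            # summary statistics only
}

def share(records, tier):
    """Return only what the given tier is allowed to see."""
    allowed = POLICY[tier]
    if tier == "aggregate":
        # Collapse raw values into per-parameter summaries.
        groups = {}
        for r in records:
            groups.setdefault((r.step, r.parameter), []).append(r.value)
        return [{"step": s, "parameter": p, "mean": mean(v), "n": len(v)}
                for (s, p), v in groups.items()]
    return [{k: v for k, v in asdict(r).items() if k in allowed} for r in records]

if __name__ == "__main__":
    data = [
        Measurement("LOT123", "W01", "metrology", "cd_nm", 14.2),
        Measurement("LOT123", "W02", "metrology", "cd_nm", 14.6),
    ]
    print(share(data, "partner"))    # values without lot/wafer identifiers
    print(share(data, "aggregate"))  # only summary statistics
```

The specific fields don't matter; the point is the layering. The same records can support full root-cause analysis internally while exposing only what a partner actually needs for a given problem.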
Data can be used for many purposes. It can alert chipmakers and EDA vendors when something isn't working, and it can be used to determine how to avoid future chip failures. It can even be used to tell, in real time, when there is suspicious activity inside a chip or system. And it can be used to trace problems back to their source, which in a global supply chain is a difficult challenge.
But unless that data is understood and layered so that whatever is necessary for a particular problem can be shared, design teams will continue to struggle to trace problems back to their root cause and to avoid them in the future. And that's something that affects everyone's bottom line.