Foundries must provide the basis to predict reliability because every process is different.
While there are a number of ways to approach reliability and transistor aging analysis, all of them depend in large part on fabs and foundries providing the aging models.
The situation is further muddied by the classic over-the-wall mentality between design and manufacturing, which still persists in the semiconductor ecosystem.
And unfortunately, this wall is bi-directional. Not only do designers in general not care much about manufacturability, despite all of the advanced DFM tools, but they are also not necessarily interested in taking data from manufacturing and applying it to best practices or new designs moving forward, according to David Park, vice president of marketing at Optimal+. He believes a large part of this has to do with the disaggregated supply chain. “Designers are waiting for a PDK or a device library from their foundry or from the various semiconductor process guys who say, ‘Here’s the library you need to use. If you use this library and follow these design rules, you will have a manufacturable chip.’”
But this doesn’t work, he stressed, because the way Semiconductor Company A uses a design library and the way Semiconductor Company B uses a design library differ based on what that design is intended to do in the end market. One designer could be building a chip for a satellite and another targeting a Fitbit. “There are huge quality differences between a satellite and a Fitbit. A lot of that is reflected not just in the library itself, but also in the manufacturing process, as well as the quality controls that are being done in the fabs and the foundries. Another part of that is taking all of the information that’s there — the information that’s also available through their historical modeling — and using it to make improvements. In a lot of cases it’s not worth the trouble for the supply chain to do this.”
Even though it could be extremely beneficial, companies like Optimal+ don’t push it. At the same time, the company has found a more receptive audience where yield learning is concerned.
Still, this requires foundry models that are not always so easy to obtain — unless you are a valued customer, sources said.
The looming challenge is that the equations used to create the aging models differ from one foundry to the next, and sometimes even within the same foundry.
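To see why those equations diverge, consider one common textbook form for bias temperature instability, in which the threshold-voltage shift grows as a power law in stress time and is accelerated by voltage and temperature. This is a generic empirical sketch, not any particular foundry’s model:

ΔVth(t) ≈ A · Vgs^γ · exp(−Ea / kT) · t^n

Here A, γ, Ea and n are fitting parameters extracted from stress measurements on each process, which is one reason the same functional form can carry very different numbers from one foundry, or even one node, to the next.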
TSMC is leading the charge in this area; it has supported aging in its processes since 40nm through a modeling interface called TMI (TSMC Modeling Interface). EDA vendors support TMI in their tools, so TSMC customers can be somewhat removed from the issue.
While this may be state of the art for TSMC, other foundries require customers — for each given process — to work with them to get the equations. One industry source observed that this is an art, and very few people know how to do it. Some tools require the user to come up with the model. For the largest IDMs that’s no problem, since they already have reliability models for their own processes, and if a semiconductor company is already a TSMC customer, it’s not an issue either.
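As a rough sketch of what consuming such foundry-supplied equations can look like, the Python snippet below evaluates a generic power-law aging model with two different sets of fitted coefficients. The function name, coefficient names, and values are hypothetical placeholders, not drawn from any real PDK or from TSMC’s TMI; a production flow would read these parameters from the foundry’s aging deck inside the EDA tool.

```python
import math

# Hypothetical sketch: evaluating a generic power-law aging (BTI-style) model
# with foundry-supplied fitting coefficients. The coefficient names and values
# are illustrative placeholders, not taken from any real PDK or from TSMC's TMI.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K


def delta_vth(stress_years, vgs, temp_k, coeffs):
    """Estimate the threshold-voltage shift (in volts) after a given stress time."""
    t_seconds = stress_years * 365 * 24 * 3600
    return (coeffs["A"]
            * vgs ** coeffs["gamma"]
            * math.exp(-coeffs["Ea"] / (K_BOLTZMANN_EV * temp_k))
            * t_seconds ** coeffs["n"])


# Two hypothetical processes fitting the same functional form with different
# numbers -- in practice the functional form itself may differ as well.
foundry_a = {"A": 3.0e-3, "gamma": 2.5, "Ea": 0.10, "n": 0.16}
foundry_b = {"A": 1.2e-3, "gamma": 3.0, "Ea": 0.08, "n": 0.20}

for name, coeffs in (("foundry A", foundry_a), ("foundry B", foundry_b)):
    shift = delta_vth(stress_years=10, vgs=0.8, temp_k=398.15, coeffs=coeffs)
    print(f"{name}: ~{shift * 1000:.1f} mV Vth shift after 10 years at 125°C")
```

Even with an identical functional form, the two coefficient sets predict noticeably different lifetime shifts, which is exactly why each process needs its own extraction and why the work is hard to commoditize.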
While other foundries appear to be waking up to the needs in this space, sources said they still cooperate at different levels with different customers in providing the equations.
There are other ways to obtain the foundry equations, including model characterization services from organizations such as the Fraunhofer Institute for Integrated Circuits’ Division of Engineering and Adaptive Systems, which specializes in this work.
When it comes down to it, some form of standardization seems likely to emerge to address these issues. As with other standards, a strong push from the customer side will probably be what tips the scale in convincing foundries to play nice within the ecosystem.