Cutting IC Manufacturing Costs By Combining Data

Mixing financial data with manufacturing analytics can boost efficiency, but there are still pockets of resistance.


Experts at the Table: Semiconductor Engineering sat down to discuss the benefits of incorporating financial data into fab floor decision-making, including what kind of cost data is most useful, with Dieter Rathei, CEO of DR Yield; Jon Holt, senior director of product management at PDF Solutions; Alex Burlak, vice president of advanced analytics and test at proteanTecs; and Dirk de Vries, technical program manager and senior architect at Synopsys. What follows are excerpts of that conversation.

(L-R): proteanTecs’ Burlak, DR Yield’s Rathei, Synopsys’ de Vries, PDF Solutions’ Holt

SE: What kind of product and financial data could people add to a manufacturing data analytics solution?

Rathei: In the back end, our software has the capability to calculate the test recovery rate. When you test a wafer and you have a certain amount of bin X fails, you face the question, ‘Does it make sense to retest the wafer?’ Based on previous data, you can calculate for a particular binning class the recovery rate you can expect if you retest the wafer. With this information, engineers and operators on the test floor decide whether to retest a particular product. The problem is they typically don’t know the additional revenue they can create by recovering a certain amount of dies on the wafer, and usually the engineers are not completely informed about the cost associated with the test time either. This information would need to go into the system to make a qualified decision. For example, does recovering a potential 7% of yield on a certain wafer justify another hour of test time? This is an obvious scenario where this information is not readily available to the people who make this decision on the test floor. It would make enormous sense to have this information in the system to make these qualified decisions.
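The retest tradeoff Rathei describes reduces to a simple expected-value comparison. The sketch below is purely illustrative; the die price, tester hourly cost, and recovery rate are hypothetical figures, not values from the discussion.

```python
# Hypothetical retest-economics sketch. All figures (die price, tester
# hourly cost, historical recovery rate) are illustrative assumptions.

def retest_value(failed_dies: int, recovery_rate: float, die_price: float) -> float:
    """Expected extra revenue from a retest: recovered dies times die price."""
    return failed_dies * recovery_rate * die_price

def should_retest(failed_dies: int, recovery_rate: float, die_price: float,
                  retest_hours: float, tester_cost_per_hour: float) -> bool:
    """Retest only if expected recovered revenue exceeds the added test cost."""
    return retest_value(failed_dies, recovery_rate, die_price) > retest_hours * tester_cost_per_hour

# Example: 40 failed dies, 30% historical recovery for this bin class,
# $12 per die, one extra hour on a $100/hour tester.
# Expected recovery = 40 * 0.3 * $12 = $144 > $100, so retest.
decision = should_retest(40, 0.3, 12.0, 1.0, 100.0)
```

The point of the panel discussion is that the engineer on the test floor typically has the recovery rate but not the die price or tester cost, so this comparison cannot be made today.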

Holt: There are a lot of applications that should include financial data in manufacturing data analytics. I don’t know why we’d want to separate the two, because it’s all data that drives decisions about the manufacturing process. But it has been separated. The goal is to use financial data to make more intelligent decisions, and to automate it if we can. For example, you have a sales order from a customer. This goes into your financial system (e.g., an ERP system), then the manufacturing execution system (MES) starts material, and you have a due date. It’s running through the line, and there are certain characteristics that order has to meet. Assumptions are made on the financial planning side. Typically these include yield, device power, and device performance. But dynamically, the manufacturing process doesn’t match the median all the time, because yield excursions and performance deviations occur. Linking that financial data is important because now you can make real-time decisions. Let’s say you have an inline yield excursion event that impacts a certain amount of material. You may want to start material right then instead of waiting until that order’s yield is reported to the financial system. And as Dieter was mentioning, you’re testing a part, and it bins out differently. Maybe you want to reassign it to another order. Having information on customer orders facilitates these types of decisions on the factory floor.

de Vries: Dieter gave a good overview on the initial types of financial data – die price, wafer price, and test cost to make the optimal tradeoff. One point to discuss is the question about separation. Even though it’s needed for technical decisions, there are legitimate reasons to have a separate flow for handling financial data. There is a profound reason to treat it differently — the visibility into the financials of a company. Typically, it is not exposed to the entire company. That means to share it requires mechanisms of normalization or obfuscation. Such mechanisms make sure that the correct tradeoff is made without exposing the entire financials of the operation.
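One way to implement the normalization de Vries mentions is to expose only a unitless cost index to the floor, preserving the ratios needed for tradeoffs while hiding absolute financials. The cost categories and dollar values below are hypothetical.

```python
# Sketch of cost normalization/obfuscation: the shop floor sees relative
# cost indices, not absolute dollar figures. Values are hypothetical.

def normalize_costs(costs: dict) -> dict:
    """Rescale absolute costs so the cheapest item is 1.0. Ratios survive
    (enough to make tradeoffs), but real dollar amounts are not exposed."""
    base = min(costs.values())
    return {name: round(value / base, 2) for name, value in costs.items()}

indexed = normalize_costs({"test_hour": 100.0, "die": 12.0, "wafer": 3000.0})
# A wafer still costs 250x a die in the index, but neither absolute
# figure leaves the financial system.
```

This keeps the decision logic on the floor intact while satisfying the visibility concern: the ERP system hands out indices, not the company's financials.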

Burlak: I would like to add a slightly different angle in terms of financial data, which is more related to assembly. What’s happening down the line and across operations has a lot of impact on cost, especially on packaging and multi-chip designs. We are trying to provide data visibility and predictability to make better decisions during packaging and testing and cross-correlating between different chiplets within a package, etc. Tying these things together and understanding the cost structure of the end product versus the optimization that you can do throughout the manufacturing lifecycle is a key element that impacts product cost.

SE: Can you expand on the financial impact when connecting that data?

Burlak: A chiplet-based design has several different dies, different IP, different process nodes and different wafers. When you combine them together, many times it’s based on random decisions. The assembly process does not take into account an optimized matching strategy so that you can achieve better performance, better power, etc. This translates into cost. To optimize die matching you need to do it in a smart way by having the visibility into each individual die. Then you can pair the right dies together in the same package. This approach impacts overall operations, from bin management to yield to inventory management.
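The die-matching idea Burlak raises can be sketched as a rank-matching problem: if a package is limited by its slowest chiplet, pairing fast dies with fast dies avoids a slow die capping a fast partner. The per-die "speed" scores below are invented measurement values, and real matching considers power, leakage, and more dimensions than this one-metric toy.

```python
# Illustrative die-matching sketch: pair two chiplet populations by
# performance rank. Package speed is assumed to be limited by the
# slowest die in the package. Speed scores are hypothetical.

def match_dies(die_a_speeds, die_b_speeds):
    """Pair dies rank-to-rank (fastest with fastest)."""
    a = sorted(die_a_speeds, reverse=True)
    b = sorted(die_b_speeds, reverse=True)
    return list(zip(a, b))

def total_package_speed(pairs):
    """Sum of per-package speeds, each capped by the slower die."""
    return sum(min(x, y) for x, y in pairs)

a = [1.0, 1.4, 1.2]
b = [1.1, 1.5, 0.9]
matched = total_package_speed(match_dies(a, b))        # rank-matched pairing
unmatched = total_package_speed(list(zip(a, b)))       # arbitrary pairing
# Rank matching never does worse than arbitrary assembly order.
```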

Rathei: I want to elaborate on the possibility of using financial data in the front end. There are all kinds of financial contracts associated with the WIP in the line. We need to consider different scenarios here. For instance, in a factory that makes one product, such as memory or CPUs, you just want to push all the material through at the fastest possible speed and with the highest yield. Nobody in this factory needs to know anything about these financial contracts to optimize the output. But in a high-mix application-specific IC factory, you may have hundreds of different customers, each with different contracts and different delivery dates — and, of course, different margins on each product. In such a factory, you’re facing a decision when you have one product with a 5% yield loss and another product with a 7% yield loss. Which problem do you tackle first? If all other things are the same, you would tackle the product with 7% yield loss first. But what if the product with 5% yield loss has a much higher margin? Then you would tackle that first, unless you are running into a situation where maybe you cannot deliver all the parts on time. And we even have to consider the possibility of penalties in the contracts for not being able to deliver all orders on time. So what seems at first sight to be a straightforward calculation can become a very complex decision process if you consider all aspects.
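Rathei's triage example can be made concrete by weighting each excursion by its financial impact rather than its raw yield loss. The product names, volumes, margins, and penalty term below are invented for illustration.

```python
# Hedged sketch of margin-weighted excursion triage. Product names,
# volumes, margins, and penalties are hypothetical.

def financial_impact(yield_loss: float, wafer_starts: int, dies_per_wafer: int,
                     margin_per_die: float, late_penalty: float = 0.0) -> float:
    """Lost margin from a yield excursion, plus any contractual late penalty."""
    return yield_loss * wafer_starts * dies_per_wafer * margin_per_die + late_penalty

problems = {
    "product_A": financial_impact(0.05, 100, 500, 4.0),  # 5% loss, high margin
    "product_B": financial_impact(0.07, 100, 500, 1.5),  # 7% loss, low margin
}

# Tackle the largest financial impact first, not the largest yield loss:
# here the 5% excursion on the high-margin product outweighs the 7% one.
priority = max(problems, key=problems.get)
```

As Rathei notes, delivery deadlines and contract penalties would add further terms to this comparison, which is why it quickly stops being a back-of-the-envelope calculation.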

Another aspect, which Dirk has pointed out, is that you probably don’t want to expose financial data to hundreds of operators and engineers. This is why this should be a kind of closed-box application. The yield analytics system that we are envisioning has some access to the financial data from the ERP system. It should make these decisions in a black-box environment, so that you get some output on the decisions, but not on the underlying data. For example, it may say to tackle this problem first, but it should not explain to the operator in charge the details behind that decision. I also want to point out that, even if we all agree sharing all data has benefits, it might not happen. We have had this issue for decades — limited information flow between foundries and fabless customers. So the world is not ideal in terms of data sharing, and I would expect that people in the finance departments will be very reluctant to say, ‘Yes, we need to have this interface to the ERP system and expose all this data to a black-box environment.’ We hope there are some pathfinders who are willing to go this route, and then we can demonstrate what’s possible when data is combined.

SE: Will you provide other examples of how factory floor operations may change?

Holt: Dirk raised very good points on the need to protect the financial information. But there are many types of financial data, not just the sales order or customers. There’s supply chain information, procurement and parts information, for example. In procuring gases, substrates, wire-bond wires, or test boards, typically companies utilize multiple suppliers. Having that information on the factory floor is useful in the manufacturing decisions that are made daily. For instance, if there’s a shortage of one type of part in one area of the country, ordering from another supplier can be done. You don’t have to provide the financial cost. But there should be a black box method of providing the preferred suppliers. It impacts operations in the parts management and material ordering that takes place on the floor.

de Vries: Fabless companies typically have this information centralized in a very structured way in a product data management system (PDMS). Because they’re fundamentally companies that are outsourcing manufacturing, libraries, design and/or test and packaging, they are very much in the mindset of comparing suppliers and OSATs. They put them in competition. In my experience, it’s relatively straightforward to connect to the customer’s product data management system. Then you can quickly set up dashboards or visualizations that can help. Fabless companies are a bit more geared toward that supplier comparison than fabs. For example, when you look at the performance of gases or raw materials, it’s more organized per area of manufacturing, e.g., implant.

Burlak: Dieter was talking about the complexity of defining the models and the metrics for combining them, and how this black box will eventually operate to provide the right outputs for decisions. I want to expand on the fact that test manufacturing floors are isolated from financial impact. It’s very local. There’s not a lot of communication with the external world to make these types of decisions, i.e., to take input triggered by financial analysis or any other data sources to make the flow more dynamic. That’s one of the challenges. It means opening up the manufacturing test floor so it can export some of its existing test data to the analytics platforms that create these models, whether driven by orders or by in-field return metrics. Then you return the models in a format that can actually be part of the test program and run on a per-chip basis, to make a clear decision based on a model that was created offline. It’s changing the manufacturing flow, opening it up, and we need two major infrastructures. One is modeling capabilities, to take data and provide clear results. The other is the triggering part, returning these models to the test manufacturing flow and utilizing them in a smart way. And of course, there is always continuous feedback. As the models change, there are shifts and drifts based on new information coming in, so you need to maintain the flow. This type of operation is very different from conventional isolated testing.

Further Reading
Balancing Parallel Test Productivity With Yield & Cost
Expensive DUT interface boards complicate development and operations.


