PDF Solutions’ CEO talks about the growing impact of analytics and AI on semiconductor design and manufacturing.
John Kibarian, president and CEO of PDF Solutions, sat down with Semiconductor Engineering to talk about the impact of data analytics on everything from yield and reliability to the inner structure of organizations, how the cloud and edge will work together, and where the big threats are in the future.
SE: When did you recognize that data would be so critical to hardware design and manufacturing?
Kibarian: It goes back to 2014, when we realized that consolidation in foundries was part of a bigger shift toward fabless companies. Every fabless company was going to become a systems company, and many systems companies were rapidly becoming fabless. We had been using our analytics to help customers with advanced nodes, and one of them told me that they were never going to build another factory again. Before that, our analytics had been used for material review board work and for better control of the supply chain and packaging. He said, ‘I think you’re focusing your analytics on the wrong part of the market.’ That kicked off a strategic review meeting, and we decided to change the strategy. We started to put a number of things in place to do that. One was acquisitions, because you always need to do these things inorganically. We bought a company for test flow control and traceability throughout the assembly flow. We also brought in AI capability with StreamMosaic. And we had replaced the relational database with one built to handle all of the data that comes off the fab, which is the largest-quantity, highest-velocity data. So we started putting all of that capability in place for fabless and system companies.
SE: How did that work out?
Kibarian: That business has grown at a 35% CAGR since 2014. We went from having a small number of highly concentrated customers to having more than 130 customers in 20 countries. That’s up and down the semiconductor supply chain—equipment makers, foundries, IDMs, fabless companies, systems companies. We anticipated the need for analytics, but we have consistently underestimated what the opportunity is.
SE: It spread more quickly than anyone thought, right?
Kibarian: Yes. Over the past 18 months, when I met with senior vice presidents of operations and manufacturing at systems and manufacturing companies, they all said they have a digital twin or Industry 4.0 effort, or ‘Made In China 2025,’ mostly with visibility at the CEO level. They wanted to figure out what they could do to digitize. There are a number of drivers for that. As you start doing more system-in-package, it is a much more complicated manufacturing process with a lot more risk. One bad element in a package with 10 or 20 other elements, and you risk throwing out a lot of very expensive silicon. In the past, packaging was basically a low-cost value-add step, and you wanted to drive as much cost out of that as you could. Today, you need to combine advanced technology in a system-in-package. That means the customers have a lot more need for analytics. That has happened very quickly. If you look at a smartphone, there are many systems-in-package, including the modem, the radio, the application processor, the cameras. And that’s just one product. That’s just one of the drivers.
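To put rough numbers on the packaging risk Kibarian describes, here is a minimal back-of-the-envelope sketch (the per-die figures are illustrative assumptions, not numbers from the interview): package yield compounds across every die in the package, so even very good components translate into a lot of scrapped silicon.

```python
# Illustrative arithmetic only: how quickly package yield compounds in a
# system-in-package. The 99% known-good-die figure is a hypothetical assumption.

def package_yield(die_yield: float, num_die: int) -> float:
    """Probability that every die in an assembled package is good."""
    return die_yield ** num_die

for n in (1, 10, 20):
    print(f"{n:2d} die per package -> package yield ~ {package_yield(0.99, n):.1%}")
# ~99.0%, ~90.4%, ~81.8% -- and every failed package scraps the other good,
# expensive die assembled alongside the bad one.
```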
SE: Did you expect all of this?
Kibarian: Not entirely. In 2016, we acquired dataConductor (Syntricity), which was the first cloud-native [yield management system]. We thought there was a growing number of fabless companies in China, that all of them would be cloud-native, and that our largest customers would adopt later. What happened was that our largest customers adopted first.
SE: Because they were doing pilot programs to figure out how to stay current?
Kibarian: Yes, and there were two surprises. One was when TSMC put OIP (Open Innovation Platform) on the cloud and started working with Cadence, Synopsys and Mentor for EDA on the cloud. The design was already on the cloud and the PDK was on the cloud, so the manufacturing data could go there as well. People before that had security concerns, but when you really go and look at it, the investment that cloud-scale providers are making in security outstrips what our customers can do with their own IT systems. It’s the same dynamic as owning your own factory: it’s hard to compete with a foundry that has 100 times your scale.
SE: Do you think that’s going to stay in the cloud, or will it move to the edge?
Kibarian: It’s going to do both of those things. And that brings up the second surprise. The first was exogenous to manufacturing; it came from design and security. Qualcomm showed they can run Exensio on-premise, where they have a 200-terabyte system, or on the cloud, where speedups for the really big files are 26X, compared to only about 2X for the smaller files. It’s faster to do some things on-premise, but when you want a single image of a system, it’s possible to do that in the cloud. You can have all the terabytes in one place and still have low latency for all of your engineers. We put compute at the OSATs, where agents can talk with our testers. That compute is there so customers can push their models to the edge.
SE: So what’s the real attraction at the edge?
Kibarian: Even though you have massive compute in the cloud, you still want no latency. The test floors assume that the Internet can go down. You can’t stop a floor for that. For things that are real-time, the models may be developed in the cloud, but they get evaluated on the edge. We have more and more customers wanting a hybrid approach. What goes real-time will always be close to the tools and native to the floors, whether it’s a front-end factory or assembly floor or test floor. And what goes on with engineers at the corporate level will migrate to the cloud. It’s not just our fabless customers. Most IDMs and foundry customers recognize that for (ML/AI) training and analysis, it’s much more efficient to do that in the cloud.
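One way to picture the hybrid split described here, with models trained in the cloud and evaluated at the edge so a test floor keeps running even if the Internet link drops, is the minimal sketch below. The libraries, file paths and feature layout are assumptions for illustration, not PDF Solutions’ implementation.

```python
# Hypothetical sketch of a cloud-train / edge-infer split. scikit-learn and
# joblib are assumed here purely for illustration.
import joblib
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# --- Cloud side: train on the full historical data set, then export a model. ---
def train_in_cloud(X_history: np.ndarray, y_history: np.ndarray, path: str) -> None:
    model = GradientBoostingClassifier().fit(X_history, y_history)
    joblib.dump(model, path)  # model file gets pushed to edge compute on the floor

# --- Edge side: score every unit locally, with no round trip to the cloud. ---
class EdgeScorer:
    def __init__(self, path: str):
        self.model = joblib.load(path)  # last model that was successfully pushed

    def disposition(self, parametrics: np.ndarray) -> str:
        # Runs even if the Internet link is down; only model updates need it.
        return "pass" if self.model.predict(parametrics.reshape(1, -1))[0] == 1 else "fail"
```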
SE: How does the cloud compare with the edge for processing?
Kibarian: We are evaluating the most efficient way to do processing on the edge, and whether we should mirror what we do in the cloud. But a number of our largest customers already have a relationship with cloud providers. It’s inexpensive to get data into the cloud and expensive to get it out, so when customers land that data in the cloud they’re incentivized to leave it there and share it with the other systems they have.
SE: A lot of IP has test capabilities being built into it these days. How does that affect what you’re doing?
Kibarian: The chip can be a system, or it can be part of a system. Most of our customers purchase a lot of IP, and when a chip doesn’t work you want to understand what’s happening in every part of that system. And when you want to do debugging and bring-up, you need to understand all of that. No one writes code for an entire system and compiles it all at the same time. You build modules. The parametric data that comes off those individual modules gives you a richer and richer data set, and we’ve been giving customers more and more sophisticated ways to look at the relationship between wafer sort, final test and burn-in. We had a customer that was in the consumer space and started getting into the automotive space, and they bid based upon what the test requirements used to be in consumer. Their test costs turned out to be much higher, so they had to get predictive about what they were doing with burn-in. All of those module tests are great inputs for AI. If one chip is clearly good, you can skip the burn-in test, while another may require more burn-in dollars.
SE: In the past you had to guess at that, right?
Kibarian: Yes, and now you can see the patterns. With algorithms, dealing with a 100-dimension space or a 1,000-dimension space is really not a big deal. We humans, by contrast, can only think in three dimensions.
SE: When you’re talking dimensions, these are all the different relationships between the various components?
Kibarian: Yes, there are hundreds of IP blocks with parametric data, and you’re looking at burn-in test results. This allows you to understand those relationships. The nice thing about all of the automation with AI in the test world, as opposed to the front-end world, is that you have lots of labels when you’re training. You can go back and train wafer sort to be more sophisticated because you know which chip passed what test, and your modeling is quite good.
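As a concrete illustration of that labeled-training idea, here is a hedged sketch: wafer-sort parametrics as features, burn-in outcomes as labels, and the predicted risk used to decide which parts can skip burn-in and which need more. The data, model choice and thresholds are all assumptions, not PDF Solutions’ algorithms.

```python
# Sketch only: predict burn-in outcomes from wafer-sort parametrics, then use
# the predicted risk to skip or extend burn-in. Data and thresholds are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 100))  # 100 parametric measurements per die (synthetic)
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=5000) > 2).astype(int)  # 1 = burn-in fail

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
skip_burn_in = risk < 0.01       # clearly good: skip the burn-in step
extended_burn_in = risk > 0.20   # marginal: spend extra burn-in dollars here
print(f"skip: {skip_burn_in.mean():.1%}, extended: {extended_burn_in.mean():.1%}")
```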
SE: How opaque is training data today? And is that a problem?
Kibarian: We got into the analytics business because we were working on yield. So we had pattern recognition and AI algorithms in the late 1990s and early 2000s. A lot of that came out of research at Carnegie Mellon, which was all about applying learning. This was AI 1.0. We built up these sophisticated algorithms, and they were hard to use because you really needed to understand something about the algorithms, statistics, data science and process integration. It’s very hard to find folks with that cross-section of skills. Small tweaks to the algorithms give you wildly different results. But the problem you have with AI in the fab is that you’re collecting data off an etcher or a CMP tool, and you don’t know whether that wafer is good or not until you get to the end of the line, and there are probably 500 processing steps along the way. So it can go bad for a lot of reasons.
SE: Where else did you see problems?
Kibarian: When we gave AI to the factories, what they found was that labeling data is tough. So we took those sophisticated algorithms and used them to label data and train the AI. Now you have collaborative learning on top of a neural network. It suggests more things to you. You can use the algorithms to train the AI, and the AI can mimic those algorithms very effectively. That allows the engineer to say, ‘That’s kind of right, but I want to see how it classified everything.’ So they can take that knowledge and improve the model. You no longer need to be a data scientist or AI expert to use this stuff. You want to make it so a semiconductor professional can go in and tune it to get that last little bit of productivity. With collaborative learning, you can get that extra 5% or 10% and get it tuned to what your best expert will tell you the company can do. That’s a big breakthrough.
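A rough way to see that label-then-train flow in code is as a pseudo-labeling pattern with an engineer in the loop. The rule and the network below are placeholders, not PDF Solutions’ actual algorithms.

```python
# Sketch of using an existing rule-based algorithm to label data and train a
# neural network, then surfacing disagreements for expert review. All pieces
# here are illustrative stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

def rule_based_outlier(die_params: np.ndarray) -> int:
    """Stand-in for the 'sophisticated algorithm': flag a die as an outlier."""
    return int(np.abs(die_params - die_params.mean()).max() > 3.0)

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))                        # unlabeled parametric data
y_weak = np.array([rule_based_outlier(x) for x in X])  # algorithm supplies the labels

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y_weak)

# Engineer-in-the-loop: review cases where the network disagrees with the rule,
# correct the labels, and retrain -- that feedback is where the last 5-10% comes from.
disagreements = np.where(net.predict(X) != y_weak)[0]
print(f"{len(disagreements)} cases flagged for expert review")
```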
SE: So now you’re customizing algorithms for your specific needs?
Kibarian: Yes, and some of our customers have engineers who are really skilled at that, and a number who are not. Now, if the skilled ones are allowed to look at how the algorithms are classifying outliers and to tune them, that knowledge can be given to all engineers across the organization, and everyone’s ability is elevated.
SE: How do you keep track of all the changes?
Kibarian: Those customers restrict who is allowed to ‘like’ and ‘not like’ information. When you apply AI and machine learning to test, process engineering, assembly and front-end packaging, you’re allowing engineers to operate at a higher level. You’ll see better productivity in those organizations, and that will make chips available in more end markets and grow the pie.
SE: So what is your biggest challenge going forward?
Kibarian: If you go from the outside in, there is significantly more geopolitical risk in the industry than there was 18 months ago, and it’s on a trajectory to never go back to the way it was. It may cool down, but we built our industry as an intertwined world. That was one of the fun things about the industry, because you felt akin to people all around the world with a common language about chips and circuits. We all still want that, but if you look at the way the world is going, it’s bifurcating. That’s going to be very damaging to the industry as a whole. At the semiconductor level, AI was overhyped and fizzled out in the 1980s and 1990s because people didn’t appreciate all of the change management required. You didn’t understand what the metrology or the SPICE model would look like. My fear is that we are doing far more sophisticated things today, but it requires change management inside of organizations. Business processes need to be set up to be native to the data. We collect lots of data. If you’re going to be doing data-based manufacturing, you have to configure the organization around that. If you’re going to keep doing what you’ve been doing and just apply AI to it, you’re going to throw away tremendous value. That would be detrimental to the whole industry.