Using Generative AI To Connect Lab To Fab Test

NI’s CTO looks toward an intelligent and unified data model as a critical element in future test.


Executive Insight: Thomas Benjamin, CTO at National Instruments, sat down with Semiconductor Engineering to discuss a new way of looking at test, using data as a starting point and generative AI as a bridge between different capabilities.

SE: What are the big changes you're seeing, and how is that affecting movement of critical data from the lab to the fab?

Benjamin: If you walk into any manufacturing or lab environment, you have tests, measurement hardware, and systems, along with software such as LabVIEW running on a PC or a tablet. All of these products create a test sequence. That test indicates whether a product — a semiconductor chip, or a wafer, or a motor — is performing to or deviating from the specs. But they're all running in silos, and you might have hundreds of these in factories around the world. Now, with connectivity becoming ubiquitous at high bandwidth and a low price point, if you learn about an anomaly from one machine and understand the root cause of it, you can democratize that knowledge across multiple systems. This is very similar to how Google Maps works. Google Maps is running on phones, and there's a hub in the cloud. When you hit traffic congestion, it reroutes you automatically through an autonomous hyper-automated system of systems. Similarly, we believe that the future of test is not an instrument, but an autonomous hyper-automated system of systems that brings together hardware, software, data, workflows, and intelligence. And then, eventually, generative AI will be used to create, deploy, and execute test sequences, and even possibly look at root cause analysis of tests.

SE: What you’re doing here is taking generative AI and using that to link together various capabilities?

Benjamin: That is correct. But it is based on a long history. We virtualized the test instruments, so we are able to connect those instruments together. And we’ve made a lot of investments in advanced analytics. The next logical level of evolution is using AI. We are continuing to innovate using new capabilities that technology is unleashing to make things much simpler and more effective.

SE: How long have you been working on the AI piece?

Benjamin: We started this year, but we've been working on the data side for a few years. NVIDIA was using our advanced analytics software to manage contract manufacturing, because all the chip manufacturing is spread out across multiple contract manufacturers around the world. We talked about the concept of a virtual engineer complementing the physical engineer with NI software, which helped provide a yield improvement for them.

SE: How does it move from lab to fab? And does that have applications in other areas?

Benjamin: The one that we talked about involving NVIDIA is completely in production. In the lab you may measure 1,000 parameters and test against all of them. We fine-tune it to measure a meaningful set of parameters in production, and then we correlate it back to the lab, because it's the same infrastructure running in both places. We can go back and even retrofit that design and improve it based on what we see in production. This is only one part of the equation. It's design, validation, production, and in-use. We started with the lab, and we are cross-pollinating it into production. We also need to be able to track this with in-use data, and revalidate and fine-tune the design in this complete loop. That's the holy grail, and we're going after it in stages as the industry and market mature.
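To make the idea concrete, here is a minimal sketch, not NI's actual workflow, of narrowing a large set of lab parameters down to a production subset by ranking them against a quality metric. The data, parameter names, and the choice of a simple correlation ranking are all illustrative assumptions.

```python
# Illustrative sketch: rank lab parameters by how strongly they track a
# quality metric, then keep only the top few for production test.
# All names and data here are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Pretend lab data: 500 devices x 1,000 measured parameters.
lab = pd.DataFrame(rng.normal(size=(500, 1000)),
                   columns=[f"param_{i}" for i in range(1000)])

# Hypothetical lab quality metric, driven mostly by a handful of parameters.
quality = 0.8 * lab["param_3"] - 0.5 * lab["param_42"] + rng.normal(scale=0.3, size=500)

# Correlate every parameter with the quality metric and keep the strongest ones.
corr = lab.corrwith(quality).abs().sort_values(ascending=False)
production_params = corr.head(20).index.tolist()
print("Parameters proposed for production test:", production_params[:5], "...")
```

In practice the ranking would be correlated back against production results, as described above, rather than fixed once in the lab.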

SE: Is a lot of that being driven by the automotive industry?

Benjamin: Yes, and we can take it further in the automotive industry. We are combining our hardware and software capability and expanding this loop from design, validation, production, and in-use across the semiconductor industry, the automotive industry, the aerospace/defense industry, as well as the .edu sector, which is a broad-based area.

SE: How do you see the automotive market changing?

Benjamin: The first thing is that the vehicle is now going to be driven by a computer, and the computer will have to address whatever scenario arises, planned or unplanned. An example of a planned scenario is how you traverse an intersection. An unplanned scenario is how you read a street sign that's partially blocked. The vehicle is constantly making decisions, and to do that requires a lot of tests and measurements. If the decisions are made accurately, you begin to trust the vehicle because it takes you to your destination. That means you need a lot of data about the different scenarios in which you're testing the system. Typically in the past, you had stimulus/response-based testing. Now you have to do scenario-based testing. You have to identify all these complex scenarios that you can test with, which is essentially based on the data that you collect. We have to do a lot of this for autonomous driving and testing.

Now, as we look at the 6G space, we are trying to essentially replicate some of these patterns, because we think most of these patterns are cross-pollinated from autonomous driving into 6G. But in the 6G space there are a couple of things that are table stakes. One is a good standardized way to collect, manage, and store data. We don't have a standard spec. So under the auspices of a DARPA initiative and the National Science Foundation, NI and Northeastern University have been working on a project called RF Data Factory, which comes up with a set of automated tools to collect, manage, and store data in a standardized format called SigMF. We just released the RF Data Recording API to the open-source community so researchers and others can use this infrastructure to collect data from these different tests. We have a lot of NI USRP (Universal Software Radio Peripheral) devices out in the field to be able to do this, and we open-sourced a platform for researchers to take this API and record test data. So once you collect this data, you have the infrastructure to create these infinite scenarios, like in the automotive driving space. For example, with 6G, what are the interference scenarios? How do you keep your cell towers? How do you organize all these things so that communications are more effective and efficient? If we can come up with a low-cost version that is usable by everyone, it will accelerate the adoption of test and measurement for the 6G space.
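For context, SigMF stores a recording as a raw sample file plus a JSON metadata file. The sketch below shows that layout using core field names from the public SigMF specification; the sample data and parameter values are made up, and this is not the RF Data Recording API itself.

```python
# Minimal sketch of storing a capture in the SigMF layout: raw I/Q samples in
# a .sigmf-data file, JSON metadata in a .sigmf-meta file. Field names follow
# the public SigMF core namespace; all values are illustrative.
import json
import numpy as np

# Synthetic complex baseband samples, written as interleaved float32 I/Q.
samples = (np.random.randn(4096) + 1j * np.random.randn(4096)).astype(np.complex64)
samples.tofile("capture.sigmf-data")

meta = {
    "global": {
        "core:datatype": "cf32_le",        # complex float32, little-endian
        "core:sample_rate": 30.72e6,       # hypothetical sample rate in Hz
        "core:version": "1.0.0",
        "core:description": "Example USRP capture for scenario building",
    },
    "captures": [
        {"core:sample_start": 0, "core:frequency": 3.5e9}  # hypothetical center frequency
    ],
    "annotations": [],
}
with open("capture.sigmf-meta", "w") as f:
    json.dump(meta, f, indent=2)
```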

SE: And that’s really the mantra for AI, right? Bring down the cost of everything.

Benjamin: Yes, that is exactly where we want to come in. But that’s just one part of the equation, because once you have an infinite number of these test scenarios, how do you test it? For example, if you go to ChatGPT and ask it, ‘What’s the position of the letter E in the word kitchen,’ it will tell you there is no letter E in kitchen. The reason is that the machine learning model is not trained for such scenarios. And when you get these infinite scenarios, it’s not humanly possible to test all these scenarios. So can you use an AI engine to test the AI itself? That’s another area we’re beginning to research. This is a concept we played with quite a bit in the automotive space. We’re now trying to cross-pollinate into the 6G space, taking some of these learnings and evolving, personalizing, and contextualizing them for 6G.

SE: Who owns the data as we go forward? One of the concerns with ChatGPT, or any generative AI program, is that the data is conflated from a lot of different places.

Benjamin: We're still working through that. The first goal is to collect the data and keep the right control structure. There are infrastructures provided on Azure at this point in time where you can get a localized instance that you can use for training. We are playing with it, but it's still early days. We've got to come up with some models, along with an industry consortium or something like that, to make sure the right ownership of data is defined. But more importantly, the guardrails for protecting sensitive information must be in place. There are concepts like homomorphic encryption and things like that. But there is a lot of work to be done before this can be streamlined and ready for actual consumption.

SE: Is 6G going to be usable in the same way 4G LTE is? And is the end device a handset, or a point-to-point connection to an apartment building?

Benjamin: The promise of 6G is the ability to have somewhere around 10 Gbps-plus bandwidth. That is going to make computing much more immersive, because for the last 40 years we've interacted with the computer with just a keyboard and a mouse, and maybe a tablet these days. We can take that to a much more immersive level. What exactly that is, time will tell. We'll have autonomous cars. Can cars start talking to the roads or to the signals? There are all these different use cases that can emerge out of this, and 6G is one of the stepping stones. There's so much network bandwidth that is still not being used in the gigahertz/millimeter wave range. There are a lot of things that need to converge. If you look back 20 years ago, we were questioning whether 5G would actually happen. It's the same with 6G. There's been a lot of evolution over the past 20 years, and it happened quite rapidly.

SE: But it’s not as if you get on your phone with 5G and expect it to provide massive data capability consistently as you drive along, right? Maybe it’s a point-to-point connection rather than a mobile device in a car?

Benjamin: That’s spot on. There’s going to be machine-to-machine communication that it can facilitate as we go forward. If you look at 1G and 2G, that was about voice calling. Today, it’s a lot more than that. And there’ll be a lot more of this, if this level of bandwidth with resiliency is available with an associated safety net.

SE: Is over-the-air testing of 5G and 6G solved? What you're looking for is how strong the signal is, but is that an issue for hardware, software, or maybe the signal itself?

Benjamin: We haven’t solved it. We’re working through that process. That’s where the different scenarios are going to play, because it’s going to be a combination of the ground station, the over-the-air signal, and other building blocks. Having these different characteristics that can interfere with the signals — especially when you’re in a stadium or an airport or a packed location like that, where the density of consumption is much greater — requires playing out these different scenarios. There will be different characteristics of problems that emerge that might need to be solved based on context.

SE: So it will require lots of repeaters?

Benjamin: That’s probably correct.

SE: Everything we’ve discussed so far will require a lot of chips, and increasingly, chiplets. How do we manage those?

Benjamin: For each chiplet or component, can you tag it with a tenant ID, so you have tenant ownership and a specific encryption key? Those are the techniques we have to use. The ERP guys have solved this to some extent. As a manager of an organization, you can look at the salaries of employees within your organization. But you can't look at your peers' organizations. This is a much smaller scale, but there are some patterns we have to extrapolate. We need to put in foundational guardrails to make this happen successfully.
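As a rough illustration of that kind of guardrail, the hypothetical sketch below tags each record with a tenant ID and encrypts it with that tenant's key, so cross-tenant reads fail. The data model is invented for this example, and the 'cryptography' package's Fernet API is used purely for illustration.

```python
# Conceptual sketch of per-tenant tagging: each record carries its owner's
# tenant ID and is encrypted with that tenant's key, so one tenant cannot
# read another's data.
from cryptography.fernet import Fernet

tenant_keys = {"tenant_a": Fernet.generate_key(), "tenant_b": Fernet.generate_key()}

def store_record(tenant_id: str, payload: bytes) -> dict:
    """Tag the record with its owner and encrypt it with the owner's key."""
    token = Fernet(tenant_keys[tenant_id]).encrypt(payload)
    return {"tenant_id": tenant_id, "payload": token}

def read_record(requester: str, record: dict) -> bytes:
    """Only the owning tenant can decrypt the record."""
    if requester != record["tenant_id"]:
        raise PermissionError("cross-tenant access denied")
    return Fernet(tenant_keys[requester]).decrypt(record["payload"])

rec = store_record("tenant_a", b"chiplet test results")
print(read_record("tenant_a", rec))   # works
# read_record("tenant_b", rec)        # would raise PermissionError
```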

SE: Backing up to 60,000 feet, what do you see as the big challenges for an AI? And what do you see as the big opportunities?

Benjamin: The big challenges for AI, particularly in the 6G space, are getting the data and creating a critical mass of scenarios that’s meaningful. That’s going to be the first big challenge, and this is something we have to work out with a few of our design partners, the actual carriers, or the semi manufacturers, or the part manufacturers, to be able to facilitate this. That’s the first part. The second part is how to secure and protect this data, and do it at a price point that is consumable. The cost of computing should not outweigh the business value that comes out of it. How to traverse these two, carefully and delicately, is something we need to figure out. There’s a lot of learning to be done in the next 12 to 18 months as these capabilities begin to converge.

SE: NI has always worked closely with the University of Texas, and it now is reaching further afield to other universities and research groups. Does this help compensate for a talent crunch that has been constricting the chip industry?

Benjamin: We have to look at where the research is happening, and not all of it is happening in our immediate vicinity. So we're tapping into the global infrastructure we have developed. And we are a global organization at this point in time.

SE: Where are the big new opportunities?

Benjamin: The automotive space, of course, particularly EVs. But a lot of this also can be extrapolated to the aerospace and defense players, because all of this is going to work as a connected mesh of systems. The future of test is not an instrument. It’s an autonomous, hyper-automated system of systems. It’s not just one system working in isolation. It’s a mesh working together to get the end output of business or product performance for the end customer.

SE: Are you now looking at testing over time, as opposed to just a series of tests during manufacturing, after which it's ready to go into the market?

Benjamin: It's looking at it over time to detect anomalies, and which anomaly surfaces at what time can be a function of usage characteristics and load on the system. So those are dimensions we haven't looked at traditionally. Think about a New Year's celebration, when the density of people in one place increases and changes the behavioral characteristics of systems. Those characteristics appear in batches whenever that density increases, and when you look over time at the factors that lead to anomalies in those systems — because very rarely do anomalies happen instantaneously — there's a degradation pattern. You can detect the slope of the degradation pattern even before the anomaly occurs.
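One simple way to picture that, purely as an illustration, is fitting the slope of a health metric over a sliding window and flagging sustained drift before a hard limit is breached. The signal, window size, and thresholds below are synthetic assumptions.

```python
# Sketch of catching a degradation trend before it becomes a hard anomaly:
# fit the slope of a health metric over a sliding window and flag drift early.
import numpy as np

rng = np.random.default_rng(1)
# Healthy behavior, then a slow drift that would eventually breach the limit.
metric = 1.0 + 0.02 * rng.standard_normal(500)
metric[300:] -= 0.002 * np.arange(200)        # gradual degradation starts at t=300

WINDOW, SLOPE_LIMIT, HARD_LIMIT = 50, -5e-4, 0.8
for i in range(WINDOW, len(metric)):
    window = metric[i - WINDOW:i]
    slope = np.polyfit(np.arange(WINDOW), window, 1)[0]   # trend over the window
    if slope < SLOPE_LIMIT:
        print(f"drift detected at t={i}, slope={slope:.5f} (metric still {metric[i]:.2f})")
        break
    if metric[i] < HARD_LIMIT:
        print(f"hard anomaly at t={i}")
        break
```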

SE: Does it become more difficult to predict those anomalies with heterogeneous integration and uneven aging?

Benjamin: Systems are getting much more complex. There are more moving parts coming together to make the system versus one monolith. There are more sub-modules that get assembled together in different permutations and combinations. That's why the opportunity for test and measurement continues to grow stronger and stronger.

SE: So is the ultimate goal resiliency?

Benjamin: Yes, and this is something that you see happen in the software industry. On Black Friday, when there are millions of users trying to do their shopping, if one machine or container goes down it’s automatically rerouted, but your service is not disrupted. The question now is whether you can bring that same capability into the hardware space, and also ensure that the end customer performance of the product is not deteriorating.

SE: There are more corner cases now than ever before, and we need to identify them and deal with them more quickly. How do we do that?

Benjamin: This is where scenario-based testing comes in, and a critical mass of testing scenarios is valuable gold dust. You need a network of scenarios that you can play for anything. And that’s how we democratize test and measurement as a next logical evolution.

SE: What you’re looking for is faster time to market with fewer failures, right?

Benjamin: Yes, and it’s going to take some time to get there. But we can help facilitate that because test and measurement ubiquitously goes across different building blocks of these capabilities.

SE: All of this creates more data. How do you manage all of that data? How much do you retain and how long do you store it?

Benjamin: The idea is not to store all the data. It's to find the key patterns that lead to an anomaly and aggregate them, and then cold store and compress the data, or even archive it, or purge it as needed. You don't need every line item of data. The trick is figuring out what's the key aggregate to store and what to throw away.
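A toy sketch of such a policy might look like the following, with compact aggregates kept hot and the raw log compressed for cold storage. The file names, aggregation granularity, and retention choices are hypothetical.

```python
# Sketch of a retention policy: keep aggregates that preserve the
# anomaly-relevant signal, compress the raw log for cold storage, and
# purge it after a retention window.
import gzip
import numpy as np
import pandas as pd

# Pretend raw per-measurement log: one row per test, many per hour.
raw = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=10_000, freq="s"),
    "reading": np.random.default_rng(2).normal(1.0, 0.05, 10_000),
})

# Hot store: hourly aggregates only (mean, spread, worst case).
hot = raw.set_index("timestamp")["reading"].resample("1h").agg(["mean", "std", "min", "max"])
hot.to_csv("hot_aggregates.csv")

# Cold store: compress the raw log; purge it once the retention window expires.
with gzip.open("raw_log.csv.gz", "wt") as f:
    raw.to_csv(f, index=False)
```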
