Improving Chip Efficiency, Reliability, And Adaptability

Fraunhofer IIS EAS’ director maps out a plan for the next generation of electronics.


Peter Schneider, director of Fraunhofer Institute for Integrated Circuits’ Engineering of Adaptive Systems Division, sat down with Semiconductor Engineering to talk about new models and approaches for ensuring the integrity and responsiveness of systems, and how this can be done within a given power budget and at various speeds. What follows are excerpts of that conversation.

SE: Where are you focusing your research today?

Schneider: One big topic is artificial intelligence. We are building a lab for testing prototypes of AI-based systems, starting from acquiring data in a real-world environment. We are creating real-world setups in automotive, robotics, and building automation. Then we take this data and use it in a virtual world. We build models, generate training data with those models, and get better coverage of the data you need to train artificial intelligence. On the electronic side, we are building our own edge devices. From there, we do fast prototyping and test the systems in the real world. So you have the complete cycle, from data acquisition to virtual development processes, and afterward to testing.

SE: In automotive and robotics, you need a lot of very fast processors wherever you’re collecting the data, and then you need to move that data to some sort of centralized processing. What kinds of challenges are you running into there?

Schneider: The main challenge for processing sensor data at the edge is low latency. In the robotics area, if you have safety functions, you need a defined response time for those functions. So you need data processing at the edge, which involves data fusion, feature extraction, and reasoning. It's a question of what you can afford in the actual environment, because you have limited resources, which is why these development processes are so important. You have local sensor data, and you need to determine the response from the system. You can process the data locally, or you can put it into a server infrastructure. But you have communication overhead when you transfer the data, and you need an optimal solution that considers the entire system context. There are different basic building blocks, and data processing is just one activity. We have other efforts underway, developing low-latency wireless communication protocols, which share the same goal. If you want a quick response, you need low latency in the entire data processing loop.
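As an illustration of the local-versus-offload trade-off Schneider describes, here is a minimal sketch in Python. The function, the latency model, and every constant (edge compute cost, link rate, server compute cost, round-trip time) are assumptions for illustration, not Fraunhofer figures.

```python
# Minimal sketch of the edge-vs-server decision for one sensor frame.
# All numbers are illustrative assumptions, not measured values.

def choose_processing_site(payload_bytes: int,
                           deadline_ms: float,
                           local_ms_per_kb: float = 2.0,   # assumed edge compute cost
                           link_mbit_s: float = 50.0,      # assumed wireless uplink
                           server_ms_per_kb: float = 0.2,  # assumed server compute cost
                           rtt_ms: float = 10.0) -> str:
    """Return 'edge', 'server', or 'fail-safe' for one sensor frame."""
    kb = payload_bytes / 1024
    local_latency = kb * local_ms_per_kb
    transfer_ms = payload_bytes * 8 / (link_mbit_s * 1e3)   # bytes -> ms on the link
    server_latency = rtt_ms + transfer_ms + kb * server_ms_per_kb

    if local_latency <= deadline_ms and local_latency <= server_latency:
        return "edge"
    if server_latency <= deadline_ms:
        return "server"
    return "fail-safe"   # neither path meets the safety deadline

print(choose_processing_site(payload_bytes=4_000, deadline_ms=20.0))  # -> "edge"
```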

SE: This also comes down to accuracy and precision, right?

Schneider: Yes, and that's all about resources. With an artificial intelligence/machine learning algorithm, you can do the processing with four bits or eight bits. Maybe it works with four bits, but you have to figure out what the right solution is for the application context, and that's a design problem. We acquire the data, put it into a server infrastructure, and then we use all these frameworks. So you have an algorithm that works fine on the server, and then you have to move it to the edge. You can reduce the network topology, the weights, and the number of bits of data that you have. It depends on the application. That's why it's so important for us to have this real-world setup to generate all the real-world data.
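To make the eight-bit versus four-bit trade-off concrete, here is a hedged sketch of uniform post-training weight quantization in plain NumPy. The random weight matrix is a stand-in; a real deployment would quantize a trained network using the frameworks Schneider mentions.

```python
# Illustrative post-training weight quantization at different bit widths.
import numpy as np

def quantize(weights: np.ndarray, bits: int):
    """Uniform symmetric quantization to the given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)   # stand-in layer weights

for bits in (8, 4):
    q, scale = quantize(w, bits)
    err = np.abs(w - q * scale).mean()
    print(f"{bits}-bit mean reconstruction error: {err:.4f}")
```

The lower bit width shrinks memory and compute on the edge device at the cost of a larger reconstruction error, which is exactly the design decision that has to be validated against the application context.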


Fig. 1: Toward better AI. Source: Fraunhofer IIS EAS

SE: Some data is critical, some is not. How do you prioritize? There are a lot of moving pieces here.

Schneider: Longer-range, we are working in the wireless communication area to co-design the application and the communication system for a specific application or quality of service. Let’s say a robot is being used for some kind of handling process. You can control it with wireless communication, and if you have disturbances in the wireless communication, you can go to a fail-safe mode or stop the system or reduce the speed. If you reduce the speed of the handling operation, it still can go on. And if the wireless communication improves, it can speed back up. That can be used to control processes in industrial automation. It’s a new approach. Normally, if there is any problem in the wireless communication system, it stops, and a worker has to figure out what’s going on and then push a button to get it going again. With adaptive systems it could be much more flexible, preventing stoppages in production.
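A minimal sketch of that adaptive behavior, assuming packet loss as the link-quality metric and arbitrary thresholds, might look like this:

```python
# Sketch of the adaptive control described above: instead of stopping on a
# degraded wireless link, the handling process scales its speed with link
# quality. The thresholds and the packet-loss metric are assumptions.

def speed_factor(packet_loss: float) -> float:
    """Map measured packet loss (0..1) to a speed scaling factor (0..1)."""
    if packet_loss < 0.01:
        return 1.0            # link is healthy: full speed
    if packet_loss < 0.10:
        return 0.5            # degraded: slow the handling operation
    if packet_loss < 0.30:
        return 0.2            # poor: creep speed, still no stoppage
    return 0.0                # fail-safe: controlled stop

for loss in (0.001, 0.05, 0.2, 0.5):
    print(f"loss={loss:.3f} -> speed x{speed_factor(loss)}")
```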

SE: How far along is this?

Schneider: It’s still a ways away from practical usage in a production environment, but we’re working on a project and we have good results.

SE: You mentioned better coverage in the data space. What does that entail?

Schneider: When you're training on pictures or something like that, there are huge training sets. So you can have pictures of a cat or a dog, but if you go to industrial automation and you're dealing with maintenance issues or problems in the production process, you are working with rare events. I have a lot of data that says the production process is running perfectly, and then very few events that are related to an error. Normally, with this kind of data, you can't train the machine learning algorithm very well. What you need is a good ratio between the good and the bad, and you don't have that. You can collect all these rare events, but that takes a long time and you still don't have a complete picture. That's why we are proposing an approach where you have a nominal model of a production process, which you can validate with a mass of data. For the error cases or faulty behavior, you can add failure models to this nominal model and generate faulty data sets. That provides better coverage of the entire data space.
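The idea of augmenting rare error events with model-generated faulty data can be sketched as follows. The nominal process signal and the two failure models (drift and dropout) are toy assumptions, not the actual Fraunhofer models.

```python
# Sketch: generate a balanced training set from a nominal process model
# plus simple injected failure models.
import numpy as np

rng = np.random.default_rng(1)

def nominal_trace(n=200):
    """Nominal process signal: stable level plus sensor noise."""
    return 1.0 + 0.02 * rng.normal(size=n)

def inject_drift(trace, rate=0.002):
    """Failure model A: slow parameter drift."""
    return trace + rate * np.arange(trace.size)

def inject_dropout(trace, start=120, length=20):
    """Failure model B: intermittent sensor dropout."""
    faulty = trace.copy()
    faulty[start:start + length] = 0.0
    return faulty

# Balanced data set: equal numbers of nominal and synthetic-fault traces.
X = [nominal_trace() for _ in range(100)]
X += [inject_drift(nominal_trace()) for _ in range(50)]
X += [inject_dropout(nominal_trace()) for _ in range(50)]
y = [0] * 100 + [1] * 100
print(len(X), "traces,", sum(y), "labeled faulty")
```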

SE: This is different than digital twins, which basically are a living model, right? The problem is the amount of energy required to keep those running and up-to-date.

Schneider: Yes, and that's a big problem. Plus, you need a methodology to address the model accuracy and deal with the complexity of computing at the right level. It's a balance. In the past, we did a lot of work in modeling MEMS-based systems and measuring the behavior of a complex MEMS structure such as a gyro. Normally, for designing the electronics, you only need specific aspects of the behavior of this complex microelectromechanical structure. We applied our model order-reduction methods, where you can decrease the number of degrees of freedom. That, in turn, decreases the numerical complexity significantly. How much you can reduce depends on whether you have non-linear or linear behavior in the vibration characteristics, for example.
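A minimal, hedged example of projection-based model order reduction: project a large linear state-space model onto its dominant modes, obtained here from an SVD of state snapshots. The matrices are random placeholders standing in for a detailed MEMS model, and the reduced order is an arbitrary choice.

```python
# Projection-based model order reduction on a stand-in linear system.
import numpy as np

rng = np.random.default_rng(2)
n, r = 200, 10                     # full order vs. reduced order (assumed)

A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))   # stand-in system matrix
B = rng.normal(size=(n, 1))

# Collect state "snapshots" (impulse-response-like samples of the system).
X = np.hstack([np.linalg.matrix_power(np.eye(n) + 0.01 * A, k) @ B
               for k in range(50)])

# Dominant left singular vectors form the projection basis V (n x r).
V, _, _ = np.linalg.svd(X, full_matrices=False)
V = V[:, :r]

A_r = V.T @ A @ V                  # reduced system: r x r instead of n x n
B_r = V.T @ B
print("full order:", A.shape, "-> reduced order:", A_r.shape)
```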

SE: Where is your research into test going?

Schneider: If you have a hierarchical system, you have sensors, actuators, and the environment. You can test on different levels. You can do wafer-level test for the chip, package test for a 3D integrated sensor node, and you can test systems in context. With sensors, we use a hierarchical test approach. We can apply test data from the real environment. This has an impact on the sensors, which produces electrical signals in the electronic component. What we're trying to do is translate these test cases from a system context into an electronic context. In our lab, we can do degradation measurements for aging. And on the other side of the lab we have a car, where we can run typical automotive cycles and measure the impact of environmental conditions on an ECU or some other system. Then we translate that to the wafer level, where we can look at the impact of environmental data or temperature.
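As a sketch of translating a system-level test case into an electrical one, assume a simple vibration profile and a linear sensor model; the sensitivity, offset, and stimulus values below are illustrative, not measured parameters.

```python
# Sketch: map a system-level stimulus (vibration in g) to the expected
# electrical response of the sensor front end, reusable as a test template.
import numpy as np

def automotive_vibration_profile(seconds=2.0, fs=1000):
    """System-level stimulus: a simple two-tone vibration profile in g."""
    t = np.arange(0, seconds, 1 / fs)
    return t, 0.5 * np.sin(2 * np.pi * 30 * t) + 0.1 * np.sin(2 * np.pi * 120 * t)

def sensor_electrical_response(accel_g, sensitivity_mv_per_g=100.0, offset_mv=250.0):
    """Map the physical stimulus to the expected analog output in mV."""
    return offset_mv + sensitivity_mv_per_g * accel_g

t, accel = automotive_vibration_profile()
expected_mv = sensor_electrical_response(accel)
# These expected voltages become the pass/fail template at wafer level.
print(f"{len(expected_mv)} test points, range "
      f"{expected_mv.min():.1f}..{expected_mv.max():.1f} mV")
```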

SE: So basically you’re looking at four-dimensional data — x, y, and z, over time?

Schneider: Yes. And we cannot solve all problems at the moment, but that’s our vision.

SE: Putting this all in perspective, is the goal better testing, or developing and understanding the data models?

Schneider: We are pursuing a bridge between the knowledge that can be encapsulated in models and the data itself. There is a strong connection between data-driven models and physical models, and that's very important. Traditionally, we have focused on models, but we're also doing a lot of work collecting data and doing data analytics. The best way forward is to combine both worlds.

SE: The challenge is that there is so much data, and a lot of it is in motion, which increases the number of variables. How do you address that?

Schneider: The only way is to abstract it from a specific layer. You will not be able to analyze all the data in a 60nm chip at a functional level in an automotive system. You need an abstraction of the electrical function, and then you need to amplify it. Of course, you have to validate that this high-level behavior is a good representation of the details, but that's part of the methodology.

SE: That’s basically model coverage, right?

Schneider: Yes, and you need a unified view of the test cases. We are doing virtual system development, dealing with requirements engineering. At the top, you have a requirements level, and then you go step-by-step into the implementation architecture or design decisions. And if you have this consistent view over all these layers, you define the requirements and then you generate test cases.

SE: So you’re modeling the models?

Schneider: Yes. UVM is one method we're using, and it provides a good basis for this testing and a consistent view across all the levels.


