Finding And Fixing ML’s Flaws

OneSpin’s CEO looks at methodologies and models for making ML more predictable and more effective.


OneSpin CEO Raik Brinkmann sat down with Semiconductor Engineering to discuss how to make machine learning more robust, predictable and consistent, and new ways to identify and fix problems that may crop up as these systems are deployed. What follows are excerpts of that conversation.

SE: How do we make sure devices developed with machine learning behave as they’re supposed to, and how do we fix problems when they crop up?

Brinkmann: The real objective is to not have a problem in the first place. We need to build methodologies for developing these machine learning systems, and those are not established yet. There is a lot of work to be done before we can say, ‘This is how you do it,’ and ‘This is how you prevent this type of problem.’

SE: Still, biases can creep in from one language to another, even in the best training data. We may define words differently by region, and even over time. So if you’re looking at ‘correct by construction,’ how do we make sure the building blocks are correct?

Brinkmann: There are some methodologies on the systemic side that we need to work on. It’s not the actual system itself. It’s what we use to develop the system. We need to understand how to do that better, and much more carefully, than we do today. Where is the data coming from? You need to analyze that. It’s one of the steps you need to perform to make sure you don’t run into this issue. As for debugging, you would do that the same way you debug other systems. You look for the root cause. Did you overlook some bias in the training data?
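
As a minimal sketch of the kind of data audit Brinkmann describes (checking where training data comes from and whether any source is skewed), the following tallies label distributions per data source. The sources, labels, and counts are illustrative:

from collections import Counter

def label_distribution_by_source(examples):
    # Tally label counts per data source to surface sampling bias.
    # `examples` is an iterable of (source, label) pairs.
    counts = {}
    for source, label in examples:
        counts.setdefault(source, Counter())[label] += 1
    return counts

# Hypothetical data in which one region's collection is heavily skewed.
data = ([("region_a", "cat")] * 90 + [("region_a", "dog")] * 10
        + [("region_b", "cat")] * 50 + [("region_b", "dog")] * 50)
for source, dist in label_distribution_by_source(data).items():
    total = sum(dist.values())
    print(source, {label: n / total for label, n in dist.items()})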

SE: We think about this from the standpoint of training data. But not all applications of machine learning have a large volume of training data—particularly newer applications. Does that make correct by construction more difficult?

Brinkmann: Yes, and there won’t be a solution if you stick with neural networks as the machine learning framework. It’s difficult to do this kind of analysis using neural networks.

SE: What’s the alternative?

Brinkmann: If you look at other applications like reinforcement learning, the amount of exploration is much larger. They’re applying different methodologies in different ways and analyzing whether the system is robust enough to deal with unknown data that has not been in the training set. Everything you see will be something new, and statistically it’s impossible to cover that whole space. It’s too massive. But this is one way to do it. We need a set of methodologies and standards in place, so if you want to sell a system of this kind, you need to perform ‘these kinds of checks.’
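
One common way to flag inputs unlike anything in the training set is a simple distance test in feature space. A minimal sketch with synthetic data, using the Mahalanobis distance to the training distribution:

import numpy as np

def fit_novelty_detector(train_features):
    # Summarize the training distribution by its mean and covariance.
    mu = train_features.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(train_features, rowvar=False))
    return mu, inv_cov

def mahalanobis(x, mu, inv_cov):
    # Distance of a new input from the training distribution;
    # large values suggest data the model has never seen.
    d = x - mu
    return float(np.sqrt(d @ inv_cov @ d))

train = np.random.randn(1000, 8)  # stand-in for training features
mu, inv_cov = fit_novelty_detector(train)
print(mahalanobis(np.zeros(8), mu, inv_cov))      # in-distribution: small
print(mahalanobis(np.full(8, 6.0), mu, inv_cov))  # far outside: large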

SE: This is more of a standards-body approach to how this technology should behave, right?

Brinkmann: Yes, and there are new approaches like probabilistic or Bayesian learning, where you inherently take a different route. It targets human concept learning, where you try to conceptualize the data that you see. So instead of trying to reproduce pictures of cats, you try to formulate what a cat is. You apply different means of conceptualizing the pattern that you want to recognize.

SE: The approach machine learning uses today is to view a picture of a cat from all angles. What you’re talking about is more along the lines of how a two-year-old can recognize something.

Brinkmann: Exactly, and a two-year-old doesn’t need 10,000 images to recognize a cat. You need to show them one or two cats and the child will know what that is. There are new approaches that are more generative. They still apply basic algorithms. The system would then go about trying to create a model that generates pictures of cats. Then it would apply that model and match what it sees.
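
A minimal sketch of that generative idea, using a deliberately simple model: fit a small per-class model of what the data looks like, then classify a new input by which model most plausibly generated it. The features and class names are illustrative:

import numpy as np

def fit_class_models(X, y):
    # One small generative model per class: feature means and variances.
    return {label: (X[y == label].mean(axis=0), X[y == label].var(axis=0) + 1e-6)
            for label in np.unique(y)}

def log_likelihood(x, model):
    # Log-probability that this class's model generated x (diagonal Gaussian).
    mu, var = model
    return float(-0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var)))

def classify(models, x):
    # Match the input against each generative model and pick the best fit.
    return max(models, key=lambda label: log_likelihood(x, models[label]))

X = np.vstack([np.random.randn(100, 4) + 2, np.random.randn(100, 4) - 2])
y = np.array(["cat"] * 100 + ["not_cat"] * 100)
models = fit_class_models(X, y)
print(classify(models, np.full(4, 2.0)))  # expected: "cat"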

SE: Can it do this at the same speed as pattern recognition, which is basically ones and zeroes or shapes?

Brinkmann: Yes, and this creates a model that you can understand. You can inspect it, verify it, and it’s much closer to the way we want to verify things. If you have a model, you want to know if it is performing according to specs and how it does that.

SE: How far along is probabilistic learning?

Brinkmann: There was a recent paper in which it performed better than existing machine learning approaches.

SE: Didn’t a lot of this come out of the postal system, where they were trying to automate mail sorting?

Brinkmann: Yes, that was one of the early applications of neural networks.

SE: This takes it a step further. So instead of just identifying the number ‘9,’ you’re looking at what 9 represents?

Brinkmann: It will still identify the number 9, but it will construct an algorithm that can produce 9s in different ways. It’s a more generative approach. But it’s also much more compact. You can read the code that is generated and understand what it means. The next level of AI will be to understand human concepts. That’s for the future, but there’s some interesting work starting to appear.
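
As a toy illustration of why such generated models are compact and readable (this is not Brinkmann’s example, and far simpler than real probabilistic program induction), here is a short “program” that produces many different 9s by varying a few interpretable parameters:

import numpy as np

def draw_nine(loop_radius=1.0, stem_length=2.0, slant=0.0, jitter=0.05):
    # A '9' as two interpretable strokes: a loop plus a descending stem.
    t = np.linspace(0, 2 * np.pi, 50)
    loop = np.stack([loop_radius * np.cos(t), loop_radius * np.sin(t)], axis=1)
    s = np.linspace(0, 1, 25)
    stem = np.stack([loop_radius + slant * s, -stem_length * s], axis=1)
    strokes = np.vstack([loop, stem])
    return strokes + np.random.normal(scale=jitter, size=strokes.shape)

# Each call yields a different but recognizable 9; a handful of parameters,
# not millions of weights, carry the concept.
samples = [draw_nine(loop_radius=r, slant=sl)
           for r in (0.8, 1.0, 1.2) for sl in (-0.2, 0.0, 0.2)]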

SE: Does that make the debugging process simpler?

Brinkmann: It makes it a lot easier because you can read it and inspect it. It’s no longer a bunch of weights on a network of additions and multiplications. It’s compact and there is a way to explain it, so you can analyze it, as well.

SE: Does this require more compute power and memory?

Brinkmann: No, just the opposite. Because it’s much more compact, it takes less energy and less compute to do it.

SE: How do we get from where we are today to there?

Brinkmann: Academia is already working on this. And the big companies will do this as soon as they see there is something they can leverage.

SE: So what’s your expectation for when we will start seeing this roll out?

Brinkmann: It will be maybe two or three years before we see real applications.

SE: That moves machine learning into a whole different realm, doesn’t it?

Brinkmann: Yes. It’s the first approach where you can explain what intelligence means for humans.

SE: So you’re looking at the underpinnings of how people behave. But at that point, does debug start moving into the realm of what’s acceptable behavior rather than whether something is working properly?

Brinkmann: Yes, debugging will go well beyond just the functional side. There are some similarities to functional safety in automotive. For years, we’ve been looking at, ‘Is this system correct?’ We have been looking at whether it performs well and whether it keeps a property throughout the design flow, whether that’s synthesis or place-and-route. That’s the verification side. And now we have physical effects, so we need a whole new layer of verification. That requires assumptions about how many physical effects you see during the lifetime of the chip, or over a certain amount of time. How much can this device tolerate and still be functional? This is a whole new concept.

SE: But with machine learning you’re looking at what is considered functional, right?

Brinkmann: Yes, and now you have another layer, which is the ethical side.

SE: What changes on the verification side?

Brinkmann: With a self-driving car, the data generation occurs throughout the lifetime of the car. It will send data back and you will get new models from the factory, so it will perform a little differently tomorrow because it has been exposed to new situations, or some other car has been exposed to new situations. It’s improving the performance or functionality. But this requires verification to be continuous. Until now, we have been designing things, verifying them, and then selling them.
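
What a continuous verification gate for such over-the-air model updates might look like, as a minimal sketch; the Scenario type, pass-rate threshold, and stand-in model are all hypothetical:

from dataclasses import dataclass

@dataclass
class Scenario:
    inputs: tuple
    expected: int

def update_gate(model_predict, scenarios, min_pass_rate=0.999):
    # Replay the recorded scenario library against the candidate model;
    # the update ships only if it clears the whole regression suite.
    passed = sum(1 for s in scenarios if model_predict(s.inputs) == s.expected)
    rate = passed / len(scenarios)
    return rate >= min_pass_rate, rate

# Stand-in model and two recorded scenarios:
suite = [Scenario((0, 1), 1), Scenario((1, 0), 0)]
ok, rate = update_gate(lambda x: x[1], suite)
print(ok, rate)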

SE: How far along is this?

Brinkmann: For the entire machine learning framework, I don’t think most people have mastered it today. There are some general rules. If something doesn’t work, you need more data. If that’s not enough, then maybe you need different features. But there is no push-button analysis. It’s the judgment of the machine learning architect or expert to say which is the right way to go.

SE: What’s the impact on the business side?

Brinkmann: It’s becoming difficult for companies to build and maintain these machine learning systems. Many of these systems today are not industry production-grade systems. There are pieces that you put together and they work. But once the guy who built it leaves, you have no way to maintain it. It’s really more of an art. You add more features here, add something else there. You may not even be able to reproduce it from scratch.

SE: It’s a complex balancing of weights, right?

Brinkmann: Yes. If you have something pre-trained, and the data from that is gone, then you get more data and retrain it. But there’s no way to show exactly what you did. If you think about ISO 9000, that’s based on repeatability and reusability. This is one of the first things to look at. What is the proper way of structuring machine learning? How can you justify what you do? Even if you can’t explain the network itself or how it works, you can at least explain how you arrived at the solution you have.
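
A minimal sketch of the kind of provenance record that makes a training run justifiable and repeatable in the ISO 9000 sense; every field name and value here is illustrative:

import hashlib, json, time

def training_provenance(dataset_path, hyperparams, seed, code_version):
    # Record what went into a model: the exact data, the configuration,
    # and the code version, so the run can be justified and repeated.
    with open(dataset_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset_sha256": data_hash,
        "hyperparams": hyperparams,
        "random_seed": seed,
        "code_version": code_version,
    }

open("train.csv", "w").write("x,y\n1,0\n2,1\n")  # stand-in dataset
record = training_provenance("train.csv", {"lr": 1e-3, "epochs": 20},
                             seed=42, code_version="git:abc1234")
print(json.dumps(record, indent=2))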

SE: What else is required?

Brinkmann: The verification of the network. There are interesting papers on this. You can analyze how robust the network really is. You can look at the weights and generate a view of what the network represents, such as what type of data is able to discriminate between two choices. This is where you will see a lot of formalizing of these questions, and out of that there will be practical solutions. So maybe you can’t completely verify this formally, but you can at least make predictions.
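
One concrete formalization along these lines is interval bound propagation: push a box of perturbed inputs through the network’s weights and get guaranteed output ranges, which is a robustness prediction even where a full formal proof is out of reach. A minimal sketch with toy weights:

import numpy as np

def interval_bounds(lower, upper, weights, biases):
    # Push the input box [lower, upper] through affine-plus-ReLU layers,
    # tracking guaranteed lower/upper bounds on every activation.
    for W, b in zip(weights, biases):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        new_lower = Wp @ lower + Wn @ upper + b
        new_upper = Wp @ upper + Wn @ lower + b
        lower, upper = np.maximum(new_lower, 0), np.maximum(new_upper, 0)
    return lower, upper

# Toy two-layer network; perturb a nominal input by +/-0.1 per coordinate.
weights = [np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([[0.7, -1.2]])]
biases = [np.array([0.1, -0.2]), np.array([0.0])]
x = np.array([0.5, 0.5])
lo, hi = interval_bounds(x - 0.1, x + 0.1, weights, biases)
print(lo, hi)  # guaranteed output range for every input in the box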

SE: Can you make formal verification a tool for probabilistic behavior?

Brinkmann: That’s one approach. Another approach is to determine whether we can learn from something we’ve done in the past. For example, is there some reason to believe we can apply fault-injection technologies to test corner cases in simulation, in combination with formal? If you have something that works, can you apply it somewhere else?
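
A minimal sketch of carrying fault injection over to a network, assuming direct access to its weight matrices: corrupt a single weight, as a stuck-at fault would, and measure how far the output drifts. The network is random and purely illustrative:

import numpy as np

def inject_fault(weights, layer, i, j, faulty_value=0.0):
    # Return a copy of the weights with one parameter corrupted,
    # mimicking a stuck-at fault from hardware fault injection.
    corrupted = [W.copy() for W in weights]
    corrupted[layer][i, j] = faulty_value
    return corrupted

def forward(weights, x):
    for W in weights:
        x = np.maximum(W @ x, 0)  # toy ReLU network
    return x

weights = [np.random.randn(4, 3), np.random.randn(2, 4)]
x = np.random.randn(3)
baseline = forward(weights, x)
faulty = forward(inject_fault(weights, layer=0, i=0, j=0), x)
print(np.abs(baseline - faulty).max())  # sensitivity to this single fault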


