The Darker Side Of Machine Learning

Machine learning needs techniques to prevent adversarial use, along with better data protection and management.


Machine learning can be used for many purposes, but not all of them are good—or intentional.

While much of the work underway is focused on the development of machine learning algorithms, how to train these systems and how to make them run faster and do more, there is a darker side to this technology. Some of that involves groups looking at what else machine learning can be used for. Some of it is simply accidental. But at this point, none of it is regulated.

“Algorithms people write algorithms,” said Andrew Kahng, professor at the University of California at San Diego. “In general, algorithms used inside chip design have been deterministic and not statistical. Humans can understand how they work. But what folks expect in this world of deep learning is gleaned from fitting a neural network model on a classic Von Neumann machine, doing tenfold cross-validation, and that’s it. You get statistically likely good results. But that’s not something that IC designers and concepts of signoff and handoff — or, even, the concept of an ASSP/SOC product — know how to live with.”
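
For readers outside the data science world, the workflow Kahng describes looks roughly like the sketch below, which fits a small neural network to synthetic data and scores it with tenfold cross-validation. The dataset, model, and metric are placeholders, not anything taken from a real design flow.

```python
# Minimal sketch of the "fit a neural network, do tenfold cross-validation"
# workflow Kahng describes. The data, model, and metric are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

# Tenfold cross-validation: ten train/test splits, one accuracy score each.
scores = cross_val_score(model, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
# The result is a distribution of "statistically likely good" scores, not the
# deterministic pass/fail answer a signoff flow expects.
```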

But what happens when the data is bad or the data is corrupted on purpose? This might come down to the DNA of the engineer and the product sector, according to Kahng.

That data can be corrupted inadvertently, as well. Bias is a well-known problem in training systems, but one that is difficult to prevent.

“We found that in early versions of the software we worked on that it made mistakes based on ethnicity that we weren’t even aware of,” said Seth Neiman, chairman of eSilicon. “You have to have a pretty sophisticated speaker and member of the culture to even point out the mistakes. It’s dumb learning—like your kids didn’t realize you taught them to hate peas because you hate peas.”
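
Surfacing that kind of bias usually means slicing results by group rather than trusting a single aggregate accuracy number. The sketch below illustrates the idea on made-up data and group labels; it is not eSilicon's software.

```python
# Hedged sketch: break error rates out per group to surface bias that an
# aggregate accuracy number would hide. Groups and data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=1000)
labels = rng.integers(0, 2, size=1000)
# Simulated predictions that are systematically worse for one group.
errors = np.where(groups == "group_b", rng.random(1000) < 0.30, rng.random(1000) < 0.05)
predictions = np.where(errors, 1 - labels, labels)

for g in ["group_a", "group_b"]:
    mask = groups == g
    acc = (predictions[mask] == labels[mask]).mean()
    print(f"{g}: accuracy {acc:.2f} over {mask.sum()} samples")
# A gap like this is exactly the kind of mistake nobody is "even aware of"
# until someone slices the results by group.
```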

This can quickly get out of control, too, because those systems are used to create other systems. “It used to be humans wrote software,” said Neiman. “Now data writes software. We have a system where if we pump enough data in it, it will write the software for you. It’s not going to write a user interface—or at least not yet.”

Minimizing problems
One way to handle these problems is to add checks and balances into machine learning. “When we as humans are faced with making too many mistakes, what do we do? We ask one guy to check another guy’s work,” said Ting Ku, senior director of engineering at NVIDIA. “There is an adversarial network mechanism that does that cross-checking, so perhaps a few layers of redundancies are necessary to deal with that data corruption problem. This is not out of the ordinary. We’ve been doing this for thousands of years. When I don’t trust one guy, I get two guys. If I don’t trust two guys, I get a congress to vote because we don’t want a king, we want a whole bunch of people that are accountable for decisions. And we want to leverage that so that even if one guy gets shot, we’re still okay. Essentially that’s the same answer as how we manage human society — redundancies.”
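
In machine learning terms, "getting two guys" often means training redundant, independent models and treating disagreement as a flag rather than trusting any single answer. A minimal sketch, with synthetic data and arbitrary model choices:

```python
# Minimal sketch of redundancy as a cross-check: several independently
# trained models vote, and any disagreement flags a result for review.
# The data and model choices here are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
models = [
    RandomForestClassifier(random_state=1).fit(X, y),
    LogisticRegression(max_iter=1000).fit(X, y),
    SVC().fit(X, y),
]

preds = np.array([m.predict(X) for m in models])   # shape (3, n_samples)
votes = (preds.sum(axis=0) >= 2).astype(int)       # majority decision
disagreement = (preds != votes).any(axis=0)        # any dissenting model?
print(f"{disagreement.sum()} of {len(X)} predictions flagged for review")
```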

Harry Foster, chief verification scientist at Mentor, a Siemens business, points to a “trust-and-verify” approach as the solution.

Best practices still apply, of course. Machine learning requires good coding methods, asserted Sashi Obilisetty, director of R&D at Synopsys. “You have to write secure code, you have to write more secure code, you have to have checks and balances. Let’s say your output is not as you expect. You have to have redundancies to make sure that your QoR or whatever you’re trying to output, you’re not compromising that.”
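
Those checks and balances can be as simple as refusing to accept a model's output until it has been compared against an independent reference. The sketch below is a generic guardrail pattern, not a Synopsys flow; the function names, the reference estimate, and the 5% tolerance are hypothetical stand-ins (QoR here stands for quality of results).

```python
# Hedged sketch: never accept an ML-predicted result without an independent
# sanity check. qor_reference() and the 5% tolerance are hypothetical
# stand-ins for whatever baseline and margin a real flow would use.
def qor_reference(design: dict) -> float:
    """Placeholder for a slower, trusted estimate (e.g. a deterministic tool)."""
    return 1.0

def ml_predicted_qor(design: dict) -> float:
    """Placeholder for the machine learning model's fast prediction."""
    return 0.93

def checked_qor(design: dict, tolerance: float = 0.05) -> float:
    predicted = ml_predicted_qor(design)
    reference = qor_reference(design)
    if abs(predicted - reference) > tolerance * reference:
        # Redundancy catches the surprise: fall back to the trusted path
        # instead of silently compromising QoR.
        raise ValueError(f"prediction {predicted} deviates from reference {reference}")
    return predicted

if __name__ == "__main__":
    try:
        print(checked_qor({"name": "example_block"}))
    except ValueError as err:
        print("flagged for manual review:", err)
```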

And just how bad this can get depends upon the application. “[The data corruption] problem is not as bad as autonomous driving, where there are fatal mistakes,” said Norman Chang, chief technologist of the semiconductor business unit at ANSYS. “We need to learn with bad data, and customers will come up with a strategy to deal with the bad data.”

While safety-critical systems are certainly more important than a failed $10 million tapeout, that tapeout is still a serious issue.

“There remains the fact that when you have a manually-written tool, there is probably one guy you can go to and say, ‘This didn’t get the output I expected. Can you look again at the algorithm and really convince me that this is right?’” said Chris Rowen, CEO of Cognite Ventures. “Whereas especially with deep neural networks, it’s very difficult to figure out how it arrived via training at that solution. This is something that’s a big push in the deep learning community to have more auditability, more transparency, more analysis tools for the models themselves, and those will be important. But for some time there will be an inexplicable gap between manually written and learning models.”

On the more nefarious side, he said, there are bizarre and interesting examples of people who craft inputs that game the system by looking like something other than what they really are. So far that hasn’t happened in the chip design process, which is otherwise a fairly secure process. Why, for example, would anyone deliberately try to fool the tools?
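
The classic example of such inputs is the adversarial perturbation: a tiny, targeted change that leaves the input looking unchanged to a human but flips the model's answer. The sketch below shows the fast-gradient-sign idea against a toy logistic-regression model; the weights, input, and step size are synthetic and purely illustrative.

```python
# Hedged sketch of an input that "looks like one thing but classifies as
# another": a fast-gradient-sign perturbation against a toy logistic model.
# Weights, input, and epsilon are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)                                   # pretend these weights were trained
b = 0.0
x = w / np.linalg.norm(w) + 0.1 * rng.normal(size=20)     # an input the model gets right
y = 1.0                                                   # its true label

def predict(v):
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))             # probability of class 1

# Gradient of the cross-entropy loss with respect to the input is (p - y) * w.
grad_x = (predict(x) - y) * w
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)                     # bounded, targeted nudge

print(f"original score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")
# The perturbed input stays close to the original, yet the model's answer
# can flip from confidently right to confidently wrong.
```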

Bias plays a role here. So does a wrong decision made by an engineer, which may be nothing more than an honest mistake that ends up in the database as part of the training data. But that can have a significant impact, Ku said. “You reference the bad one, make a bad decision, and the whole decision-making process gets skewed to the wrong side. That’s a worrisome thing to most people. Cross-checking helps with that to steer the data back.”
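
One concrete form of that cross-checking is to score every training entry with a model that never saw it, and flag entries whose labels the rest of the data strongly contradicts. The sketch below simulates a handful of bad decisions in synthetic data; the confidence threshold is an arbitrary illustrative choice.

```python
# Hedged sketch: use out-of-fold predictions to flag training entries whose
# labels look like "bad decisions" the rest of the data contradicts.
# Data is synthetic; a real flow would send the flags to a human for review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=15, random_state=2)
y_corrupt = y.copy()
y_corrupt[:20] = 1 - y_corrupt[:20]          # simulate 20 wrong calls in the database

# Each sample is scored by a model that never saw it during training.
proba = cross_val_predict(LogisticRegression(max_iter=1000),
                          X, y_corrupt, cv=5, method="predict_proba")
confidence_in_label = proba[np.arange(len(y_corrupt)), y_corrupt]

suspect = np.where(confidence_in_label < 0.1)[0]
print(f"{len(suspect)} entries flagged, "
      f"{np.isin(suspect, np.arange(20)).sum()} of them genuinely corrupted")
```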

Another strategy is to put time expirations on the data. “If the data is really, really old, I treat it with less importance,” he said. “Data that is newer is a little bit more relevant. So hopefully the mistake that we made 10 years ago has been forgotten.”
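
That expiration policy can be expressed as a sample weight that decays with the age of the data; the two-year half-life in the sketch below is an arbitrary example, not a recommendation.

```python
# Hedged sketch of data expiration: weight training samples by age so that
# ten-year-old mistakes carry little influence. The half-life is arbitrary.
import numpy as np

def age_weights(ages_in_years: np.ndarray, half_life_years: float = 2.0) -> np.ndarray:
    """Exponential decay: the weight halves every `half_life_years`."""
    return 0.5 ** (ages_in_years / half_life_years)

ages = np.array([0.1, 1.0, 3.0, 10.0])
print(age_weights(ages))   # roughly [0.97, 0.71, 0.35, 0.03]

# Many scikit-learn estimators accept these via fit(X, y, sample_weight=...),
# so newer data dominates without discarding the old outright.
```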

There is one built-in safeguard, as well. “This notion of using a diversity of data types also gives you implicit cross check,” said Rowen. “You really take several different views of the data, even if it is the same database at its center. There may be no new true information. You may have different biases or different flaws in how that data was extracted and prepared and labeled, and even that will then create some self checking implicit in the process.”
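
One way to read that in code is to train separate models on different views of the same records and look at where they disagree. The feature split below is arbitrary and the data synthetic, but it shows the implicit self-check Rowen describes.

```python
# Hedged sketch of the "diversity of data types" cross-check: two models
# trained on different views of the same records should mostly agree, and
# where they don't may point to a flaw in how one view was prepared.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=800, n_features=20, n_informative=10,
                           random_state=3)
view_a, view_b = X[:, :10], X[:, 10:]        # arbitrary split into two "views"

model_a = LogisticRegression(max_iter=1000).fit(view_a, y)
model_b = LogisticRegression(max_iter=1000).fit(view_b, y)

disagree = model_a.predict(view_a) != model_b.predict(view_b)
print(f"views disagree on {disagree.sum()} of {len(y)} records")
# No new true information was added, but the disagreements form an implicit
# self-check on extraction, preparation, and labeling.
```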

Still, because machine learning is only now starting to be applied on a widespread basis, there are a whole bunch of unknowns. Simon Davidmann, CEO of Imperas Software, said things can go wrong for malicious reasons, and that things can be stolen and misused to make other things happen. And while a better architecture is needed so machine learning can be implemented more quickly, because everything is going to need to do learning, he does not believe people have started considering the security aspects of it. The discussion, he said, always centers on ways to speed up the process.

All of this is related to the dark side of progress, in general, Davidmann continued. “And it turns out that a lot of the embedded world is worried about the security aspects of what they do. If the wrong seeds got in there at the beginning, how do you ever find out? This is true with everything. Every now and again a medical professional does something really stupid on purpose, because something’s gone wrong and humans are fallible. So we’re going to get this. Things are going to go wrong in machine learning applications, and it often will be because someone puts a back door in accidentally or on purpose. They may do it maliciously. But it’s a game of probabilities. As long as that probability is very low, it will be there, and we’re going to live with it.”

Conclusion
Whether we can live with the errors inherent in machine learning data remains to be seen. This is a technology approach that is just beginning to roll out, with uncertainties that have yet to be fully defined. But this definitely is an area where more discussion needs to take place.

—Ed Sperling contributed to this report.



