Understanding prejudices is critical to building intelligent systems.
A couple of weeks ago, I wrote an article entitled The Multiplier and the Singularity. That article has been well received, and I thank those who have made kind and interesting comments on it. Such articles can be difficult to write without inserting writer’s bias. As a writer, I have many of my own thoughts and possibly even prejudices, but those are not meant to make their way into my writing. I am sure they do in subtle ways, and there are times when I want to scream a response to some of the things that people say. Of course, my blog is where I can say such things and impart my own views.
But the prime thing I want to talk about in this blog is not my reaction to what people said about that article, but the whole notion of bias as we get deeper into the realm of artificial intelligence (AI), and the dangers that it represents. Consider, just for a moment, that machine learning starts with a collection of items that are fed into the training system. If we are talking about machine vision, then it is a collection of photographs.
Can that collection contain bias? Perhaps a better way of phrasing that might be, ‘Is it possible to create a training set that does not contain bias?’ Then, as a secondary question, you may ask, ‘Will that bias affect the operation of the system in a material manner?’ One way to start answering the first question is simply to measure what the collection contains, as sketched below.
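To make that concrete, here is a minimal sketch in Python of how the makeup of a training set could be measured before any training happens. Everything in it is hypothetical: the photos records, their region and gender fields, and the composition helper are stand-ins for whatever annotation scheme a real dataset actually uses.

```python
from collections import Counter

# Hypothetical metadata for a photo training set; in practice this
# would come from the dataset's annotation files.
photos = [
    {"file": "img_0001.jpg", "region": "north_america", "gender": "male"},
    {"file": "img_0002.jpg", "region": "europe",        "gender": "male"},
    {"file": "img_0003.jpg", "region": "east_asia",     "gender": "female"},
    # ... thousands more entries ...
]

def composition(records, attribute):
    """Return the fraction of the collection carrying each value of an attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

for attr in ("region", "gender"):
    print(attr, composition(photos, attr))
```

A heavily skewed composition does not prove that the trained system will misbehave, but it is the first and cheapest place to look.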
This problem is not new, nor is it unique to machine learning. I am reminded of a cartoon I saw some time ago, after Supreme Court Justice Sonia Sotomayor acknowledged that there was a cultural bias in standardized tests.
Such bias can significantly affect the outcome that is reached. There are bias tests, such as the Implicit Association Test (IAT) from Harvard’s Project Implicit, and I am sure others, designed to uncover the hidden biases you may hold as an individual. It is very telling that even the landing page for the test carries the warning: “I am aware of the possibility of encountering interpretations of my IAT test performance with which I may not agree.”
People rarely see their own biases, and yet we are probably inserting those biases into the learning data sets that we use. That likely means many AI systems are sexist, racist, and carry many other negative qualities and phobias, just like the people who train them. Let’s face it: the demographic doing the training skews heavily toward highly educated Western males, probably in their thirties and forties.
Thankfully, few applications of machine learning have been affected by such biases so far. Can there be bias in the way a computer plays Go or poker? Probably not. Machine vision, however, starts to get a little trickier. It is possible that faces of certain races will be recognized more accurately than others, and that could affect the decisions such systems make. A simple check, sketched below, would expose such a gap.
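As a rough illustration of how that could be checked, here is a minimal sketch that computes recognition accuracy separately for each demographic group in a test set, rather than reporting a single aggregate number. The accuracy_by_group function and the toy preds, truth, and groups lists are assumptions made purely for illustration.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute recognition accuracy separately for each group.

    predictions, labels, and groups are parallel lists: the model's
    output, the ground truth, and the group annotation for each image.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data in which the model does noticeably better on one group.
preds  = ["a", "a", "b", "b", "a", "b", "a", "b"]
truth  = ["a", "a", "b", "a", "b", "b", "a", "a"]
groups = ["g1", "g1", "g1", "g1", "g2", "g2", "g2", "g2"]
print(accuracy_by_group(preds, truth, groups))  # {'g1': 0.75, 'g2': 0.5}
```

If the per-group numbers diverge, a single headline accuracy figure is hiding exactly the kind of bias being discussed here.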
As soon as any kind of decision making gets attached to vision recognition or other cognitive input, we have to be very careful. Transplant teams face this every day when they have ten patients needing a heart transplant and only one heart available. They have to make a value judgement about who is the most worthy recipient. Are there things that we can learn from them, or are they equally biased?
Even if they think they are unbiased, could we compare the decisions they make with those of a similar team from a different culture? If there were no bias, they should arrive at the same decision, but culture and religion often mean that very different value sets are used in making such decisions. Should the technology developers be the ones to impose their ideals on the rest of the world?
The IEEE has started to address this issue and has a draft version of a code of ethics for people working on AI. I hope that companies developing AI technology take this very seriously and ensure that they do not just stop at the letter of the law, but put an ethics committee in place to take a serious look at the decisions they are making.
Too many times already, young startup companies have raced to the front of the pack and, along the way, done some terrible things. There may have been no laws or statements about the morality of what they were proposing at the time, but if they had only stopped for a minute and thought about it, it would have been obvious that they were putting money and success ahead of what was right and decent.
Europe is further ahead than the United States in openly discussing some of these issues today, and the discussion probably has not even started in some other parts of the world. If these issues are not addressed, then notions of a singularity cannot even be contemplated. We have not yet taken the first baby step toward ethical machine decision making.