What Does AI Really Mean?

eSilicon’s chairman looks at advances in artificial intelligence, its limitations, and its social implications—and how it will change our world.


Seth Neiman, chairman of eSilicon, founder of Brocade Communications, and a board member and investor in a number of startups, sat down with Semiconductor Engineering to talk about advances in AI, what’s changing, and how it ultimately could change our lives. What follows are excerpts of that conversation.

SE: How far has AI progressed?

Neiman: We’ve been working on AI since the mid-1960s, in terms of the first artificial neurons and attempts at modeling vision and speech. Now, for the first time, there are things that people consider successful. Successful means we are redefining what we mean by the “I” in AI. That’s the thing to look at. The art of being a good investor is to get in front of the inevitable. You don’t have to select the exact right puppy from the litter. But you do have to spot what is inevitable, sort out the timeframe and capital requirements, model what works for venture capital, and start to get involved. Right now, we are having enormous success with fairly old architectures, fueled by the advent of SIMD (single instruction, multiple data) architectures that are in everyone’s computers because of GPUs.

SE: So after all these years of work, what is AI actually good for?

Neiman: It’s good for anything where there are a lot of relatively deep non-linear statistical relationships. These are not typical mathematical analyses, like black-box stuff and objective functions and those sorts of things. But it’s also not new. The current approach with these very deep networks is 20-something years old. We just didn’t have the computers to do it in the past. Alex Krizhevsky (then a University of Toronto graduate student) said, ‘Wait a minute, we’ve got these GPUs, so let’s give it a try.’ There were a number of sophisticated analyses that talked about why this was never going to work. As it turned out, it was just a problem of not enough computation. Suddenly they got lots of GPUs and plenty of computation, and then the problem was it was not enough of the right kind of computation. But with a modest amount of software, people started to make progress.

SE: What did they make progress on?

Neiman: If you can represent the data in two-dimensional arrays, you can discover relationships between the elements of that data that have many, many layers of degrees of freedom and distance between the statistical relationships. In fact, the biggest problem that people have right now is that the analytical power of deep learning is so potent that you end up modeling the noise.

SE: Or you assume it’s noise?

Neiman: Yes, and you have to get more data to figure that out. The architectures are mostly about using your best instincts for where there ought to be relationships and looking for them. The power of this particular one is that it’s how our vision works. So we can build these very deep networks to sense things that feel a lot like human vision, or in some cases better than human vision. That is really powerful. The primary visual cortex, a big chunk 41 layers deep on the back of your brain, has a very different architecture than what we do today. But we can discover things in visual fields that are extremely valuable to use for archetypal problems now, like detecting cancer, reading radiology images, picking objects, and analyzing photographs. A photograph can tell you a huge amount about what kind of objects are in there. Just a few years ago that was just not feasible. It turns out detecting those objects is a relatively straightforward problem with this technology, but figuring out what they mean is not.
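
To make the earlier point concrete (deep networks over two-dimensional arrays are powerful enough to end up modeling the noise unless you hold data back), here is a minimal sketch in Python. It is not from the interview; the synthetic 8x8 "images," their sizes, and the scikit-learn model are illustrative assumptions.

```python
# Hedged sketch: an over-sized network fit to a small, noisy dataset will
# "model the noise." A held-out split makes the problem visible. All data,
# sizes, and the model choice are assumptions for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Fake 8x8 "images" flattened to 64 numbers each, with deliberately noisy labels.
X = rng.normal(size=(500, 64))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=2.0, size=500) > 0).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Far more capacity than 350 training samples can justify.
net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("train accuracy:   ", net.score(X_train, y_train))  # close to 1.0: noise memorized
print("held-out accuracy:", net.score(X_val, y_val))       # noticeably lower: the gap is the noise
```

More data, as Neiman notes, is the usual way to close that gap.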

SE: What’s different about figuring out what they mean?

Neiman: It’s a so-called common-sense problem. ‘How do I know that this little wrinkle here is not dried blood?’ We can analyze these two-dimensional arrays of data, where you have a huge amount of data, and read handwriting, pull faces and objects out of photographs, and tell you what the expressions are. We can even detect objects and expressions that humans have trouble with. But we can’t tell you very much about what it means and why. That’s why I say we are redefining what the ‘I’ is in AI. We’re realizing that extracting all these objects is more a problem of sensing—sense data and correlation and the like—than true intelligence. I work with a company that is spectacular at reading your face. An average parent is about 50-50 at detecting their child’s lie. A trained human can get to 90%. It’s pretty easy for a piece of software to get to 97% to 98% accuracy.

SE: Why is software so much better?

Neiman: It turns out there really are a lot of cues, but they aren’t the type of things we consciously think about, or that even psychologists necessarily think about. We have these powerful tools that can find these non-linear statistical correlations, and they do things that are really useful in simulating what looks like our intelligence. They can determine that not only is it a dog, but it’s a Siberian Husky and probably about one year old. In AI, they talk about the chicken-sexing problem. In chicken farms, the chicks go by on a conveyor belt and somebody has to pick out the hens because they need those. There is no strict model for this. It turns out that after watching for a few weeks, a person will be able to tell which are the hens. But they don’t know how they do it. AI is currently like that. We are discovering associations without knowing what they mean.

SE: How about other AI advancements?

Neiman: One that doesn’t get much attention is a decision-making process, rather than a statistical analysis process, called reinforcement learning. Reinforcement learning is a very similar learning algorithm—it’s a way of capturing time. It allows a sophisticated system to look at the progression of something over time, and use similar statistical associations to make decisions. The one everyone knows about is the Go-playing computer from DeepMind Technologies. The technology captures time, captures a sequence, and uses similar statistical methods to make decisions that are a lot closer to something a person does. ‘Since this is happening, and this, I’m going to make this call.’ It’s still not very inspectable in terms of saying, ‘How did you arrive at that?’ But if you can give it enough data, it can learn to play well above the human level.

SE: Why is that?

Neiman: It comes down to, ‘Can we reason in time?’ It turns out only a small number of people can play Go at the world-championship level. These look like very high-level things, and they are. But they turn out not to be the hardest things. If the Go player bumped the table and moved the pieces, DeepMind’s AlphaGo is not going to say, ‘Oops, we bumped the table, rule 52.’ It’s not going to know. It doesn’t have general-purpose common sense. It turns out that maybe those are actually the harder problems.
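
As a rough illustration of the reinforcement-learning idea described above (not AlphaGo's actual algorithm), here is a tabular Q-learning sketch in Python. The toy "walk to the goal" environment, rewards, and hyperparameters are all assumptions made for the example.

```python
# Minimal, illustrative reinforcement-learning sketch (tabular Q-learning), not
# DeepMind's method. The toy environment, rewards, and hyperparameters are assumptions.
import numpy as np

n_states, n_actions = 5, 2           # states 0..4; actions: 0 = step left, 1 = step right
goal = 4                             # reaching state 4 ends the episode with reward +1
Q = np.zeros((n_states, n_actions))  # learned value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit the current table, occasionally explore.
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == goal else 0.0
        # This update is how the method "captures time": the value of acting now
        # folds in the estimated value of where that action leads next.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: step right from states 0-3 toward the goal
```

The same sequence-in, decision-out structure is what scales up in systems like the Go player, with deep networks standing in for the table.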

SE: So what are we left with?

Neiman: We have enormously powerful technologies, but finding really useful applications for them is a struggle. There are some great medical applications, like reading X-rays. Those are coming very quickly. The only things keeping this from being a $5,000 piece of software are insurance companies and the legal aspects.

SE: What do you see as the big markets for this technology?

Neiman: Computer-driven cars are already better drivers than humans. They have some problems, but there’s a lot of confidence that the overall rate of accidents and related problems will be a lot lower.

SE: So why isn’t it widely deployed yet?

Neiman: It’s all about society, insurance, and law. There are problems, for example, when a computer-driven car pulls up to a corner, the light changes, and it wants to turn right, but there are pedestrians. It really doesn’t know how to work its way in there and eventually get through. It’s another very sophisticated, general-purpose reasoning problem, which the technology is not very good at yet. But if you are a truck driver in the U.S., Europe, or most of Asia, those jobs are going away because they can put the trucks in specialized lanes. It’s a giant industry and will create its own giant segment.

SE: They can just go to a depot somewhere just outside of the city, right?

Neiman: Exactly right. It replaces the railroads and the drivers. We’re already there now. A lot of the technology in the car is not nearly as sophisticated as some of the AI we’re talking about now. This problem was solved, to a large degree, in parallel. They’ve integrated the technology because it was a big industry, and we have enough computing power, and they built sensors and trained the cars.

SE: So where do you see all of this heading?

Neiman: If you have a specialized sensing application, you are going to replace human expertise in a relatively narrow domain that mostly has to do with looking at something and thinking about it evolving in time, in a way in which a lot of data is available. Those applications are vulnerable to this kind of technology. The amount of work that it takes to do it is distressingly small. Cars are complicated because you have a lot of hardware technology, LiDAR, cameras and integrated sensors. But in today’s world, six researchers in a year and $2 million could attack almost any problem if the data is available.

SE: And it’s not just vision, right?

Neiman: It’s anything that can be reduced to these sequences of things that are roughly like images, though the technology is particularly suited—because of the way GPUs are structured—to two-dimensional arrays of data. If it can be readily folded into that, the technology works. You can do it with voice, smell, and almost any sensory input, as long as you have the data.
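
A minimal sketch of that "fold it into a two-dimensional array" idea, assuming a synthetic audio signal: a one-dimensional waveform becomes a spectrogram, a frequency-by-time array that the same image-style machinery can consume. The signal and settings here are made up for illustration.

```python
# Hedged sketch: turning 1-D audio into a 2-D array (a spectrogram) so that
# image-style networks can process it. The synthetic signal and settings are assumptions.
import numpy as np
from scipy import signal

fs = 16_000                                   # assumed sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)                 # one second of "audio"
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Short-time Fourier analysis: the waveform becomes frequency bins x time frames.
freqs, times, spec = signal.spectrogram(audio, fs=fs, nperseg=256)

print(audio.shape)  # (16000,) -- one dimension
print(spec.shape)   # 2-D: (frequency bins, time frames), which now looks like an image
```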

SE: How critical are the algorithms?

Neiman: They’re important, but the data is the key. If you want to see who is going to win, look at who has the data. Companies are gathering that in extraordinary ways. There are some devices on the market that are collecting data all the time. Think of the world of sensors as a reverse content delivery network where people have a vested interest in getting the data.

SE: What happens to all the data that’s being collected?

Neiman: If you have the data, the technology exists today to discover whatever is in that data. How do you make a system that recognizes facial expressions? You collect 2 million faces and you have humans score the expressions for the first 100,000. Then you build a rudimentary network out of those, start scoring the rest automatically, and then have humans judge samples. You need judged samples of data to train the deep nets. That is what you get if you have the data. You pick a problem, and if you have the data for it, you can build a neural network for it.

For example, if you have the data for how to solve heat dissipation problems at 7nm finFET without interposers at 6 GHz, you can use the technology, given enough samples, to discover what the experts know and then replicate it. That’s the value of the data.

Another example is that you could build a detector that would make a spectacular adviser on depression. You wouldn’t want to let it diagnose depression, but doctors could ask their patients to take a 60-second test, and the tool would indicate whether they should be referred for evaluation. Right now, that’s how depression gets diagnosed most of the time: your primary care doctor identifies a man or woman who is not old enough to have certain problems. It’s a very unsophisticated diagnostic. But you could have a million images that have been scored by validated psychiatrists, associating a human judgment with an image.

You could also have a system at a mall that could detect and alert a human that a fight may be coming—patterns, faces, voices that are associated with conflict. That sounds good until the camera is owned by someone who doesn’t have our best interests at heart.
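
Here is a hedged sketch of that bootstrapping loop, assuming synthetic data: hand-label a small seed set, train a rudimentary model, let it score the rest, and have humans spot-check a sample. The helper names (human_label, human_spot_check) and the scikit-learn stand-in model are hypothetical, not a real annotation pipeline.

```python
# Illustrative sketch of the label-a-little, score-the-rest loop. Everything
# here (synthetic "faces", stand-in model, helper functions) is an assumption.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def human_label(faces):
    # Stand-in for human annotators scoring expressions (here: a noisy rule).
    return (faces.mean(axis=1) + rng.normal(scale=0.05, size=len(faces)) > 0).astype(int)

def human_spot_check(faces, predicted):
    # Stand-in for humans judging a sample of the machine's labels.
    return (predicted == human_label(faces)).mean()

faces = rng.normal(size=(20_000, 64))        # pretend each "face" is 64 measurements
seed, rest = faces[:1_000], faces[1_000:]    # small hand-labeled seed, large unlabeled pool

seed_labels = human_label(seed)              # humans score the seed set
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(seed, seed_labels)                 # the "rudimentary network"

auto_labels = model.predict(rest)            # score the rest automatically
sample = rng.choice(len(rest), size=500, replace=False)
print("human agreement on spot-check:", human_spot_check(rest[sample], auto_labels[sample]))
```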

SE: What about the hardware?

Neiman: This is really just re-purposed GPUs. The GPU companies so far seem to be mostly supporting the training of these applications. A little over a year ago, when I was having conversations with them, it wasn’t clear they were thinking very hard about the deployment of these applications and what kind of opportunity that was. For example, with health care, what you really want is something that can track your face to tell when you are in pain, read your pulse, take a guess at your blood pressure, and that has your personal medical history and your doctor’s concerns. The system would interpret symptoms. There hasn’t been a huge effort around building hardware. That’s why Intel bought Nervana (a deep-learning startup). It’s a new architecture—not a radical one—that makes do with fewer bits and does the same thing that others do with GPUs. There are folks working on much more sophisticated devices with neuromorphic chips. It would be misleading to say we are going to reproduce the brain, but sometime in the next decade somebody will hit on some version where we do something we never conceived of.
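
As a rough, hypothetical illustration of "making do with fewer bits" (not the acquired startup's actual design), here is a sketch that maps 32-bit weights to 8-bit integers plus a scale and shows the low-precision result staying close to the full-precision one.

```python
# Hedged sketch of reduced-precision arithmetic: quantize float32 weights to int8
# with a single scale factor and compare against the full-precision result.
# Sizes and the quantization scheme are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)
inputs = rng.normal(size=(1, 64)).astype(np.float32)

scale = np.abs(weights).max() / 127.0                        # one scale for the whole matrix
w_int8 = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

full = inputs @ weights                                      # 32-bit float reference
approx = inputs @ (w_int8.astype(np.float32) * scale)        # dequantized 8-bit weights

print("max absolute error:", np.abs(full - approx).max())    # small relative to the outputs
```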

SE: How about the software that runs this?

Neiman: It’s not very sophisticated. The software that runs on a 10,000-GPU cluster is pretty fancy software, but it’s really just big systems-programming software. It’s sophisticated because you can’t stop a 100-hour training run just because one GPU fails. The actual software that deploys most of this stuff is not sophisticated. It’s mostly in Python and about 20,000 lines. So that’s the next thing.

SE: Will the applications create whole new industries?

Neiman: It’s not clear yet whether they will make new industries or companies, or just make new features. We know it’s going to chop the brains out of lots of industries. Don’t let your kid be a radiologist or attorney. That’s already happening with very simple software. A lot of things can be done with simple microphones, simple cameras, and the same amount of computing power that is in your phone or laptop. In one way, it’s really cool, because it means something hugely valuable can be deployed easily; you don’t need new hardware. It also means broad applications can be deployed extraordinarily quickly.

SE: One issue that keeps coming up with AI is job displacement. How real is this and where is it most likely to create problems?

Neiman: If we can deploy systems that mimic or replace some relatively high elements of human cognition, and it can be done on a phone or laptop, then as soon as we have relatively general-purpose applications we could see a huge disruption in employment. You can update a billion phones in a week. That’s done all the time. We have to start thinking about that. The people analyzing this portion of the problem now are focusing on the fact that, for the most part, it’s the specialized experts who are going to be challenged—the $300,000 radiologist. You can reproduce that sophisticated training with neural nets and reinforcement learning, but you can’t replace the person who schedules you and knows you are afraid of blood draws. There are a lot of folks worrying about the impact on unemployment. For the first time, we have a technology that is not just replacing labor, which is bad enough and a huge problem. Our experience, and our faith that people’s jobs will be transformed and everyone will find new jobs, has never been tested by this sort of displacement. There is a lot of disagreement among experts about what will happen.

SE: What about technological unemployment with cars?

Neiman: You have to replace all those cars, and that’s slow. But if you’re talking about replacing radiologists, they’re already looking at the image digitally. Technology won’t eliminate all the radiologists. Some will be needed to review the results. But think of the ripple effect—how many people does a person making $300,000 a year really support? What happens when that person is cut out of the system? There’s a secondary effect. Relatively high-priced specialized jobs—tax professionals and dermatologists—will be hollowed out of our economy. Does it increase productivity? Yes, but it also hollows out our economy.

SE: What else should we worry about?

Neiman: There is a risk of the technology getting deployed with software that has bugs no one knows about. It will be hard to know how well the technology worked. That will be a problem when it comes to cars or diagnosing cancer, for example. If sophisticated software is replacing the decision-making piece of the human operator, how will we test those decisions? What will the law do to hold companies liable? It’s a problem.

Another problem is that the speed of deployment could disrupt jobs rapidly, because hardware doesn’t have to be deployed. It could create huge swaths of unemployment more rapidly than society will know how to deal with. In the past, automation has mostly displaced muscles. Now it will mostly displace brains, and our history doesn’t tell us very much about what is going to happen given how fast this could be. With the industrial revolution, people who were good with their hands as artisans were redeployed making machines that did the things they used to do. That turned out to be expansive—the market got bigger and costs went down.

In talking with an economist, I’ve seen that a lot of the data and modeling suggests we’ll have this compression, and we’ll see huge numbers of people going into the services industry. On the plus side, we’ll have a lot of need for service people in health care, particularly home health care nurses, elderly care, and phlebotomists. But wages will go down as more and more people take those jobs. One of the most frightening things is wage and job polarization, where you see the popular jobs start to lose the amount they earn. I’m an optimist, but there are a lot of ‘shriekers’ who worry about overlords controlling everything. What I worry about is what happens to a country like ours when 30% to 40% are unemployed or radically under-employed.
