Concerns are growing that machines can replace or hurt people.
In the tech industry, the main concern over the past five decades has been about what machines could not do. Now the big worry is what they can do.
From the outset of the computer age, the biggest challenges were uptime, ease of use, reliability, and as devices became more connected, the quality and reliability of that connection. As the next phase of machines begins, those problems have been reduced to occasional annoyances rather than regular events.
In the days of mainframes and dumb terminals, computer makers stationed technicians at large customers' facilities because crashes were so frequent. Now, many companies don't even have their own IT departments. When there are problems, they either replace their servers or outsource the necessary services.
Since IBM introduced its first PC in 1981, computing has become much more reliable, far more intuitive to use, and far more ubiquitous. A smartphone now has more compute power than an old supercomputer, and any kid can pick a favorite app on a phone and master it beyond the capabilities of many adults. More impressive still, that phone can do multiple things at once better than single-purpose devices such as a GPS unit or a digital camera ever could, and it can do them more reliably and more quickly while using far less energy.
Some of these devices are considered "smart." A phone can dim the screen when it is not next to a user's face to conserve energy, and a car can detect movement in a blind spot and warn a driver, or even prevent the driver from changing lanes. And with fields covered in sensors, farmers can tell which crops need watering and which ones need fertilizer.
As "smart" technology improves, though, to the point where machines can customize and optimize themselves, the debate is shifting to what else these devices can do. The Partnership on AI, rolled out this week by Amazon, Google, Facebook, IBM and Microsoft, shows just how seriously companies are taking this question. The stated goal is to "recommend best practices in areas including ethics, fairness and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and of the trustworthiness, reliability and robustness of the technology."
Translation: Enough people are worried about this next phase of technology, including its possible impact on jobs and security, that companies recognize they need to do something. And the companies that have the most to gain also have the most to lose. When Arthur C. Clarke published 2001: A Space Odyssey in 1968, in which a computer named HAL (each letter one back from IBM) tries to seize control, it was science fiction. When cars can drive themselves and robots can be deployed for military or policing purposes, it looks far less like science fiction.
Enough pieces are in place today, including memory, throughput, processing power, manufacturing excellence and low cost, to make AI real. After five decades of Moore's Law, the major kinks have been ironed out of a very complex global ecosystem. So now the question is what else can be done with all of this technology. While much of it can be used to improve people's lives, it's not a stretch to see how, in the wrong hands, it also can be used for far less lofty purposes. It's time to step back and think about what comes next on multiple levels, and for the first time to establish standards before the technology becomes an issue. That starts with standard definitions, so that people can speak the same language instead of arguing about what counts as AI or what counts as machine learning.
It’s not just the back end of design that needs to shift left. It’s now the entire market. And the sooner companies and standards groups recognize that, the faster we can get onto the next phase of technology development—and some of the most interesting challenges ever faced by engineers.
Related Stories
Building Chips That Can Learn
Machine learning and AI require more than just power and performance.
Plugging Holes In Machine Learning Part 2
Short- and long-term solutions to make sure machines behave as expected.
What's Missing From Machine Learning Part 1
Teaching a machine how to behave is one thing. Understanding possible flaws after that is quite another.
Inside AI and Deep Learning
What's happening in AI, and can today's hardware keep up?