Preparing For AI

The public policy implications of intelligent systems are enormous.

Suppose an autonomous car is coming up an on-ramp onto a bridge. The ramp is fine, but the bridge is icy, and there’s an overturned bus full of children blocking several lanes.

Children are evacuating through the windows and milling around on the pavement. There isn’t time to stop, even with the better-than-human reaction time an autonomous car might have. Swerving to one side might send the car off the bridge; swerving to the other might send it into a retaining wall, potentially killing or injuring the passengers in either case. The car is fully autonomous, with no ability for a human to take control, even if there were more time. What should it do?

This is a version of the “trolley problem,” a famous ethics thought experiment. One common response, the utilitarian position, is that the car should try to save the largest number of people, even at the expense of its own passengers. In the abstract, many people would support this position.

But what if it’s your car? What if a family member is a passenger? Would you buy a car with the knowledge that it might choose to kill you to save others? Mercedes-Benz, at least, is betting that you wouldn’t, and already has taken the position that its autonomous vehicles will protect their passengers. With human drivers, that response is defensible. Few would expect someone to deliberately drive into a wall, even to save others. When the “driver” is a machine, though, it feels problematic. Should we as citizens have to share the road with cars that will kill others to protect their occupants? And is the answer different if the dilemma results from an error in the car’s sensors or programming, rather than from weather or other forces outside its control?

The autonomous car dilemma is one of the more dramatic public policy issues raised by the emergence of intelligent systems, but it is by no means the only one. Some are beyond the scope of this publication. Chip and system designers and manufacturers have little control over the impact of advanced automation on labor markets, or the legal environment surrounding third-party data collection. In many cases, though, system designers will find themselves either implementing government regulations or, in the absence of regulation, making and being required to justify what are inherently ethical and policy decisions. In the case of autonomous vehicles, the National Highway Traffic Safety Administration has proposed guidelines for car manufacturers and best practices for regulators, but is allowing the technology to evolve before making final rules. The behavior of manufacturers is sure to play a role in shaping the future regulatory environment.

More intelligence brings more real-world consequences
These dilemmas are in some ways an indication of the growing sophistication of intelligent systems. No one really cares if a music recommendation engine confuses Duke Ellington with Duke Robillard. As systems become more capable, though, their behavior can have real-world consequences, from recommending cancer treatments to determining which criminal defendants qualify for parole or bail.

One set of potential issues lies at the interface between machines and humans. If a machine learning system is trained using data labeled by human experts, then it is likely to carry the biases of those experts into its own calculations. From traffic stops to the New York Times obituary pages, there are abundant reasons to suspect bias in human-generated datasets. If, on the other hand, the system is merely given a dataset and told to find patterns, it is vulnerable to omitted variable bias. The system can only draw inferences from the data it actually has, which is the data its creators believed to be important. Correlations between these variables can be useful, but don’t necessarily show anything about underlying causes.
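To make the omitted-variable problem concrete, here is a minimal sketch (in Python, with entirely made-up variables) of how a model fit only to the data it has can attribute a hidden cause’s effect to a correlated proxy:

```python
# Illustrative sketch (not from any real dataset): how omitted-variable bias arises.
# A hidden factor drives the outcome; a recorded proxy merely correlates with it.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

hidden_factor = rng.normal(size=n)                               # true cause, absent from the dataset
recorded_proxy = hidden_factor + rng.normal(scale=0.5, size=n)   # what was actually collected
outcome = 2.0 * hidden_factor + rng.normal(scale=0.5, size=n)    # depends only on the hidden factor

# A model fit to the available data attributes the effect to the proxy.
slope, intercept = np.polyfit(recorded_proxy, outcome, deg=1)
print(f"apparent effect of proxy: {slope:.2f}")                  # strongly nonzero

# A system that never sees the hidden factor has no way to tell the difference
# between this spurious relationship and a genuine causal one.
```

The proxy’s apparent effect is real as a correlation and may even be useful for prediction, but it reveals nothing about the underlying cause, which is exactly the distinction a policy-relevant system needs to respect.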

Because any collection of data is necessarily finite, Peter Eckersley, chief computer scientist at the Electronic Frontier Foundation, believes it may not be possible to avoid omitted variable bias. It’s definitely a function of the algorithms being used, he said, not just the potential biases of the training set. At a minimum, according to American Civil Liberties Union senior policy analyst Jay Stanley, the potential for biased data and biased algorithms makes it imperative that algorithms with public policy implications be deployed in a transparent way. Government officials, taxpayers, and the people affected will need to be able to understand and challenge algorithmic decisions. To answer such challenges, designers will need to build in audit trails. Meeting this requirement is more difficult than it may appear: today, intelligent systems are judged by their ability to obtain “correct” results, but the specific basis for a given result may not be clear.
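What an audit trail might look like is a design choice; the sketch below is one hypothetical approach, not a regulatory requirement or any agency’s standard. It records the model version, a digest of the inputs, the output, and a timestamp so that a specific decision can later be reproduced and challenged:

```python
# Hypothetical sketch of an audit-trail record for an algorithmic decision.
# Field names and values are illustrative only.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str   # exact model/weights used, so the result can be reproduced
    input_digest: str    # hash of the input features (avoids storing raw personal data)
    output: str          # the decision or score that was returned
    timestamp: str       # when the decision was made

def log_decision(model_version: str, features: dict, output: str) -> DecisionRecord:
    digest = hashlib.sha256(json.dumps(features, sort_keys=True).encode()).hexdigest()
    record = DecisionRecord(
        model_version=model_version,
        input_digest=digest,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice the record would go to append-only storage so it cannot be altered later.
    print(json.dumps(asdict(record)))
    return record

log_decision("risk-model-1.3.0", {"age": 34, "prior_offenses": 0}, "eligible")
```

Hashing the inputs rather than storing them outright is one way to keep the trail auditable without expanding the privacy exposure discussed below.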

Stanley admitted to mixed feelings about the use of intelligent systems in contexts like law enforcement and the administration of government programs. There are biased humans in decision-making roles already, and even a less-than-perfect algorithm might be an improvement. On the other hand, there is evidence that human decision makers will use algorithmic results to justify decisions that they agree with, but will ignore recommendations with which they disagree.

Securing datasets and conclusions
Audit trails are also essential to maintaining the integrity and security of the system. As intelligent systems take more sensitive tasks from humans, the incentives for malicious actors to subvert or exploit them will increase. If a facial recognition system decides which airline passengers will be subjected to more intensive screening, then a person with the ability to access the training database can make specific individuals or specific characteristics more or less likely to draw attention.

The integrity of the training database is relatively easy to protect in applications where a largely static dataset is created and maintained by humans. Most current and near-term applications fall into this category. Looking to the future, though, many Internet of Things applications involve collecting large quantities of data from multiple sources into a central repository, where it is analyzed automatically. Can such a dataset be corrupted by either defective sensors on individual devices or malicious users? If a rogue device is identified, can it be locked out of the system without disrupting other devices? Can any data it contributed be purged? The proliferation of high-profile data breaches suggests that companies need to devote more energy to these questions.
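One way to approach the rogue-device questions is to record provenance with every contribution, so that a compromised source can be locked out and its data purged. The sketch below is a hypothetical illustration of that idea, not a description of any particular IoT platform:

```python
# Hypothetical sketch: per-device provenance so a rogue device's data can be
# quarantined and purged without disturbing other contributors.
from collections import defaultdict

class SensorRepository:
    def __init__(self):
        self._records = defaultdict(list)   # device_id -> list of readings
        self._blocked = set()               # devices locked out of the system

    def ingest(self, device_id: str, reading: float) -> bool:
        if device_id in self._blocked:
            return False                    # rejected: device is quarantined
        self._records[device_id].append(reading)
        return True

    def quarantine(self, device_id: str) -> int:
        """Lock out a device and purge everything it contributed."""
        self._blocked.add(device_id)
        purged = len(self._records.pop(device_id, []))
        return purged

repo = SensorRepository()
repo.ingest("meter-17", 3.2)
repo.ingest("meter-42", 880.0)              # implausible value from a suspect device
print(repo.quarantine("meter-42"))          # 1 record purged; meter-17 is unaffected
```

Any models already trained on the repository would still need to be retrained, or their training data re-validated, after a purge, which is part of what makes the problem harder than it looks.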

The need for robust security with minimal human intervention illustrates the value of security schemes that depend on physical unclonable functions. For instance, at last year’s Semicon West Imec Technology Forum, Thomas Kallstenius discussed a cryptographic key mechanism that depends on the breakdown characteristics of an array of MOS transistors. The breakdown of an individual transistor is random, so the pattern across the array is neither predictable nor clonable. It uniquely identifies the device, independent of its owner or the location where it is installed.
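The property being exploited can be sketched in software, even though the real mechanism lives in silicon. The simulation below is purely illustrative: a random bit pattern stands in for the physical breakdown pattern, and a hash of it serves as the device’s identifier:

```python
# Hypothetical simulation of a PUF-style identifier. In hardware the bit pattern
# comes from random transistor breakdown; here it is just a fixed random byte string.
import hashlib
import secrets

def manufacture_device(num_cells: int = 256) -> bytes:
    """Each device ends up with a random, unclonable breakdown pattern."""
    return secrets.token_bytes(num_cells // 8)   # stand-in for the physical pattern

def device_identifier(breakdown_pattern: bytes) -> str:
    """Derive a stable identifier from the pattern; the pattern itself never leaves the device."""
    return hashlib.sha256(breakdown_pattern).hexdigest()

device_a = manufacture_device()
device_b = manufacture_device()
print(device_identifier(device_a) == device_identifier(device_a))  # True: stable for one device
print(device_identifier(device_a) == device_identifier(device_b))  # False: unique across devices
```

A real implementation also has to cope with read noise and key management, but the essential property is the same: an identity rooted in the physical device rather than in stored, copyable data.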

Is computing safer at the edge?
The collection of large datasets at a central repository raises significant privacy concerns, too, highlighted by the ongoing Facebook/Cambridge Analytica scandal. As Ginger Zhe Jin of the University of Maryland explained, companies have an incentive to collect as much data as their users are willing to give them, on the assumption that it will be useful at some point in the future.

Companies with access to a lot of data have a clear market advantage. Google’s image recognition and machine translation tools are superior in part because large numbers of people use them, giving Google a steady supply of training data. Facebook is a valuable platform for advertisers because billions of people want to use the same network as their friends.

The consequences of misuse of data, in contrast, are borne by individual users. The risks externalized by large companies already include identity theft, credit card fraud, and exposure to fraudulent advertising. More intelligent systems with larger repositories of data can potentially enable targeted attacks on medically vulnerable people, infrastructure attacks aimed at highways or the electrical grid, and so on.

Users are becoming more aware of data privacy issues, and intelligent systems are becoming more involved in applications with clear privacy implications. Alexa can hear anything that happens within range. A smart electric meter can tell whether you’re home and, to some extent, can tell what you’re doing there. More informed users are likely to demand more stringent regulation and enforcement of privacy standards, and will be reluctant to share data if they fear it will be misused or shared with untrusted third parties. Edge computing, already desirable to reduce power consumption and improve response time, can help ease these concerns by leaving sensitive data in the hands of users.
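As a hypothetical illustration of that approach, an edge device such as a smart meter could process raw readings locally and share only a coarse summary with a central service, giving up some analytical detail in exchange for reduced exposure:

```python
# Hypothetical sketch: a smart meter keeps fine-grained readings local and uploads
# only a coarse aggregate, reducing what a central repository can infer about the user.
from statistics import mean

def summarize_locally(raw_readings_kwh: list[float]) -> dict:
    """Runs on the device; the per-interval data never leaves it."""
    return {
        "daily_mean_kwh": round(mean(raw_readings_kwh), 2),
        "num_samples": len(raw_readings_kwh),
    }

raw = [0.2, 0.1, 0.0, 1.4, 2.1, 0.3]   # fine-grained usage reveals when someone is home
upload = summarize_locally(raw)        # the aggregate reveals far less
print(upload)                          # {'daily_mean_kwh': 0.68, 'num_samples': 6}
```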

Yet central computing offers real benefits. An algorithm with more data can give better results. A larger data sample can help overcome biases. Autonomous vehicles that can communicate with each other can share the road more efficiently. In each application, system designers will need to balance the advantages and disadvantages of local and centralized computing, for both performance and user trust.

Defining regulations for the AI future
It is a truism that government action moves more slowly than technology. The European Union’s General Data Protection Regulation was only enacted in 2016, a decade into the social media era. In the United States, Senate Bill 2217, the “FUTURE of Artificial Intelligence Act,” is one of the first pieces of federal legislation to grapple with the implications of artificial intelligence, if only through the very limited step of establishing a study commission. Washington Senator Maria Cantwell, one of the sponsors, notes that policymakers don’t yet know how AI technologies will be used. States are regulating the deployment of autonomous vehicles within their borders, but have little ability to control interstate flows of data.

The next few years may offer the best opportunity for system designers, and citizens, to shape the artificial intelligence-enabled future.
