AI cameras; preventing side channel attacks; better security for cloud-based ML.
Two types of computers create faster, less energy-intensive image processor for autonomous cars, security cameras, medical devices
Stanford University researchers note that the image recognition technology underlying today’s autonomous cars and aerial drones depends on artificial intelligence: computers that essentially teach themselves to recognize objects such as a dog, a pedestrian crossing the street or a stopped car. The problem? The computers running the artificial intelligence algorithms are currently too large and slow for future applications like handheld medical devices.
Now, a team at Stanford has developed a new type of artificially intelligent camera system that it says can classify images faster and more energy-efficiently. In the future, the researchers expect it could be built small enough to be embedded in the devices themselves, something that is not possible today.
Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, who led the research, said, “That autonomous car you just passed has a relatively huge, relatively slow, energy-intensive computer in its trunk. Future applications will need something much faster and smaller to process the stream of images.”
Wetzstein and Julie Chang, a graduate student and first author of the paper describing the work, took a step toward that technology by marrying two types of computers into one, creating a hybrid optical-electrical computer designed specifically for image analysis.
They explained that the first layer of the prototype camera is a type of optical computer, which does not require the power-intensive mathematics of digital computing. The second layer is a traditional digital electronic computer.
The optical computer layer physically preprocesses image data, filtering it in multiple ways that an electronic computer would otherwise have to do mathematically. Because the filtering happens naturally as light passes through the custom optics, this layer operates with zero input power, the researchers said, saving the hybrid system a great deal of time and energy that would otherwise be consumed by computation.
Chang noted, “We’ve outsourced some of the math of artificial intelligence into the optics.” The result is far fewer calculations, fewer calls to memory and far less time to complete the process. Having leapfrogged these preprocessing steps, the remaining analysis proceeds to the digital computer layer with a considerable head start.
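To make that division of labor concrete, here is a minimal Python sketch, not the team’s actual implementation: the optical layer is stood in for by fixed 2-D convolutions (filtering the real prototype performs for free in glass as light passes through), and only the small classifier that follows runs as conventional digital computation. All filter values, dimensions and weights are invented for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def optical_layer(image, kernels):
    """Stand-in for the optical stage: each kernel represents a fixed filter
    realized in the custom optics, applied 'for free' as light propagates.
    In the prototype this step consumes zero input power; here it is
    simulated digitally."""
    return np.stack([convolve2d(image, k, mode="same") for k in kernels])

def digital_layer(features, weights, bias):
    """The conventional electronic stage: a small learned classifier that
    starts from optically pre-filtered feature maps instead of raw pixels."""
    x = np.maximum(features, 0).reshape(-1)        # ReLU, then flatten
    return weights @ x + bias                      # class scores

# Hypothetical sizes and random values, for illustration only.
rng = np.random.default_rng(0)
image = rng.random((32, 32))                                 # grayscale input
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]    # fixed optical filters
weights = rng.standard_normal((10, 4 * 32 * 32)) * 0.01      # 10-class classifier
bias = np.zeros(10)

scores = digital_layer(optical_layer(image, kernels), weights, bias)
print("predicted class:", int(scores.argmax()))
```

Because the convolutions happen in the optics, the digital stage starts from feature maps rather than raw pixels, which is where the claimed savings in computation and memory traffic come from.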
In both simulations and real-world experiments, the team said they used the system to successfully identify airplanes, automobiles, cats, dogs and more within natural image settings.
“Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles,” Wetzstein said.
In addition to shrinking the prototype, Wetzstein, Chang and colleagues at the Stanford Computational Imaging Lab are now looking at ways to make the optical component do even more of the preprocessing. Eventually, their smaller, faster technology could replace the trunk-size computers that now help cars, drones and other technologies learn to recognize the world around them, the team added.
Closing security hole in encryption software
Georgia Institute of Technology cybersecurity researchers have helped close a security vulnerability that could have allowed hackers to steal encryption keys from a popular security package by briefly listening in on unintended “side channel” signals from smartphones.
The attack was reported to software developers before it was publicized, and took advantage of programming that was, ironically, designed to provide better security. The attack used intercepted electromagnetic signals from the phones that could have been analyzed using a small portable device costing less than a thousand dollars.
Unlike earlier intercept attempts that required analyzing many logins, the “One & Done” attack was carried out by eavesdropping on just one decryption cycle, the team said.
Milos Prvulovic, associate chair of Georgia Tech’s School of Computer Science, explained, “This is something that could be done at an airport to steal people’s information without arousing suspicion, and it makes the so-called ‘coffee shop attack’ much more realistic. The designers of encryption software now have another issue that they need to take into account, because continuous snooping over long periods of time would no longer be required to steal this information.”
The researchers noted that the side channel attack is believed to be the first to retrieve the secret exponent of an encryption key in a modern version of OpenSSL without relying on cache organization or timing. OpenSSL is a popular encryption program used for secure interactions on websites and for signature authentication. The attack showed that a single recording of a cryptographic key trace was sufficient to break 2048 bits of a private RSA key.
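To see why a single trace can be so revealing, consider the textbook square-and-multiply loop below, a simplified Python illustration rather than OpenSSL’s actual windowed code: the extra multiplication executed only for 1 bits of the secret exponent gives each bit a distinguishable electromagnetic signature, which is the general class of key-dependent behavior that single-trace attacks exploit.

```python
def square_and_multiply(base, exponent, modulus):
    """Textbook left-to-right binary exponentiation. The multiply step runs
    only when an exponent bit is 1, so the power/EM trace of a single
    decryption can spell out the secret exponent bit by bit. Illustrative
    only: this is not OpenSSL's code, which uses windowed, constant-time
    routines, and the vulnerability found here was subtler than this branch."""
    result = 1
    for bit in bin(exponent)[2:]:                 # exponent bits, MSB first
        result = (result * result) % modulus      # always: square
        if bit == "1":                            # secret-dependent branch:
            result = (result * base) % modulus    # multiply only on a 1 bit
    return result

# Toy check (real RSA private exponents are roughly 2048 bits long).
assert square_and_multiply(7, 11, 33) == pow(7, 11, 33)
```

Constant-time implementations aim to perform the same sequence of operations regardless of the key bits; the fix the researchers proposed closes the remaining leak they found in OpenSSL’s approach.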
After successfully attacking the phones and an embedded system board – which all used Arm processors – the researchers proposed a fix for the vulnerability, which was adopted in versions of the software made available in May.
Side channel attacks are still relatively rare, but Prvulovic says the success of “One & Done” demonstrates an unexpected vulnerability. The availability of low-cost signal processing devices small enough to use in coffee shops or airports could make the attacks more practical.
“We now have relatively cheap and compact devices – smaller than a USB drive – that are capable of analyzing these signals,” said Prvulovic. “Ten years ago, the analysis of this signal would have taken days. Now it takes just seconds, and can be done anywhere – not just in a lab setting.”
Producers of mobile devices are becoming more aware of the need to shield the side channel emissions of phones, tablets and laptops so that their electromagnetic signals cannot be intercepted. Improving the software running on the devices is also important, but Prvulovic suggests that users of mobile devices must play a security role, too.
“This is something that needs to be addressed at all levels,” he said. “A combination of factors – better hardware, better software and cautious computer hygiene – makes you safer. You should not be paranoid about using your devices in public locations, but you should be cautious about accessing banking systems or plugging your device into unprotected USB chargers.”
Combo of two encryption techniques protects private data, keeps neural networks running
In an approach that holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data, MIT researchers have created an encryption method that secures data used in online neural networks, without dramatically slowing their runtimes.
Outsourcing machine learning is a rising trend in industry, the team noted, as evidenced by major tech firms launching cloud platforms that conduct computation-heavy tasks such as running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get back results in several hours.
But what about leaks of private data? In recent years, researchers have explored various secure-computation techniques to protect such sensitive data. But those methods have performance drawbacks that make neural network evaluation (testing and validation) sluggish — sometimes as much as a million times slower — limiting their wider adoption, MIT said.
However, the team has now described a system that blends two conventional techniques — homomorphic encryption and garbled circuits — in a way that helps the networks run orders of magnitude faster than they do with conventional approaches.
According to the researchers, they tested their GAZELLE system on two-party image-classification tasks in which a user sends encrypted image data to an online server evaluating a CNN running on GAZELLE. The two parties then share encrypted information back and forth to classify the user’s image. Throughout the process, the system ensures that the server never learns any uploaded data and the user never learns anything about the network parameters. GAZELLE ran 20 to 30 times faster than state-of-the-art secure-computation systems, while reducing the required network bandwidth by an order of magnitude.
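The homomorphic half of that design can be illustrated with a toy. GAZELLE itself uses a fast lattice-based additively homomorphic scheme for the network’s linear layers and garbled circuits for the nonlinear ones; the sketch below substitutes a miniature Paillier cryptosystem, whose ciphertexts can likewise be added and scaled without decryption, to show how a server could apply its private weights to a client’s encrypted features. The key sizes and the weak primality test are for illustration only and are nowhere near secure.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: Enc(a) * Enc(b) mod n^2 decrypts to a + b,
# and Enc(a)^k decrypts to k*a. Parameters are far too small to be secure.

def keygen(bits=32):
    def prime(b):
        while True:
            p = random.getrandbits(b) | (1 << (b - 1)) | 1        # b-bit odd number
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7)):  # weak Fermat test
                return p
    p = prime(bits)
    q = prime(bits)
    while q == p:
        q = prime(bits)
    n = p * q
    phi = (p - 1) * (q - 1)
    return (n,), (phi, pow(phi, -1, n))           # public key, secret key

def encrypt(pk, m):
    (n,) = pk
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    (n,) = pk
    phi, mu = sk
    return (((pow(c, phi, n * n) - 1) // n) * mu) % n

# Client encrypts its private features; server applies its private weights
# homomorphically, never seeing the features in the clear.
pk, sk = keygen()
(n,) = pk
x = [3, 1, 4]                                     # client's private input
w = [2, 5, 7]                                     # server's private weights
enc_x = [encrypt(pk, xi) for xi in x]             # what the server receives
enc_dot = 1
for c, wi in zip(enc_x, w):
    enc_dot = (enc_dot * pow(c, wi, n * n)) % (n * n)  # accumulates Enc(sum wi*xi)
assert decrypt(pk, sk, enc_dot) == sum(wi * xi for wi, xi in zip(w, x))  # 39
```

Linear layers map naturally onto this kind of scheme, while operations such as ReLU do not; that is why GAZELLE hands the nonlinear steps to garbled circuits and shuttles encrypted intermediate values between the two techniques.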
One promising application for the system is training CNNs to diagnose diseases. Hospitals could, for instance, train a CNN to learn characteristics of certain medical conditions from magnetic resonance images (MRIs) and identify those characteristics in uploaded MRIs. The hospital could then make the model available in the cloud for other hospitals. But the model is trained on, and further relies on, private patient data; because there are currently no efficient encryption models, the application isn’t quite ready for prime time, the researchers pointed out.