Debugging deep learning algorithms; big data analysis; higher-quality VR.
Exposing logic errors in deep neural networks
In a new approach meant to bring transparency to self-driving cars and other self-taught systems, researchers at Columbia and Lehigh universities have come up with a way to automatically error-check the thousands to millions of neurons in a deep learning neural network.
Their tool — DeepXplore — feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning by clusters of neurons.
Backing up a bit, the researchers noted that while computers can now beat humans at chess and Go, it may be some time before people trust them to drive. They pointed to the current instability and danger of self-driving cars, highlighted last year when a Tesla operating autonomously collided with a truck it mistook for a cloud, killing its driver. Self-driving cars depend on a form of machine learning called deep learning, which is modeled after the human brain: layers of artificial neurons process and consolidate information, developing a set of rules to solve complex problems. And even though the technology has achieved impressive feats of intelligence, as more tasks become automated this way, concerns about safety, security, and ethics are growing. Deep learning systems do not explain how they make their decisions, and that makes them hard to trust.
The team said debugging the neural networks in self-driving cars is an especially slow and tedious process, with no way of measuring how thoroughly logic within the network has been checked for errors.
Manually generated test images can be fed into the network at random until one triggers a wrong decision, telling the car to veer into a guardrail, for example, instead of away from it. A faster technique, called adversarial testing, automatically generates test images and alters them incrementally until one tricks the system.
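As a rough illustration, adversarial testing typically works by nudging an input in the direction that most increases a model’s error. The sketch below shows one such step in PyTorch; the model, image, label, and step size are hypothetical stand-ins, not the setups used in the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
image = torch.rand(1, 1, 28, 28, requires_grad=True)          # seed test image
label = torch.tensor([3])                                      # its correct label

loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge the image in the direction that most increases the model's error;
# repeating this step is the "incremental alteration" described above.
perturbed = (image + 0.1 * image.grad.sign()).clamp(0, 1).detach()
if model(perturbed).argmax(dim=1).item() != label.item():
    print("the perturbed image now fools the model")
```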
The DeepXplore tool has been able to find a wider variety of bugs than random or adversarial testing by using the network itself to generate test images likely to cause neuron clusters to make conflicting decisions, the researchers asserted.
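DeepXplore’s objective is different: instead of attacking a single model with known labels, it searches for inputs that make similar networks disagree with one another while also lighting up previously inactive neurons. The sketch below captures that idea under stated assumptions; the two toy models, the coverage surrogate, and the step size are illustrative choices, not the published implementation.

```python
import torch
import torch.nn as nn

# Two stand-in classifiers for the same task; the real system compares
# several independently trained networks (e.g., different driving models).
model_a = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model_b = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))

x = torch.randn(1, 8, requires_grad=True)        # seed input to mutate

for _ in range(200):
    hidden_a = model_a[1](model_a[0](x))          # ReLU activations of model_a
    out_a = model_a[2](hidden_a)
    out_b = model_b(x)

    # Joint objective: make the models disagree (differential behavior) and
    # push hidden activations upward (a crude surrogate for neuron coverage).
    disagreement = (out_a.softmax(-1) - out_b.softmax(-1)).abs().sum()
    objective = disagreement + 0.5 * hidden_a.mean()

    objective.backward()
    with torch.no_grad():
        x += 0.05 * x.grad.sign()                 # gradient ascent on the input
        x.grad.zero_()

if model_a(x).argmax() != model_b(x).argmax():
    print("found an input on which the two networks disagree")
```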
Testing their software on 15 state-of-the-art neural networks, including Nvidia’s Dave 2 network for self-driving cars, the researchers uncovered thousands of bugs missed by previous techniques. They report activating up to 100 percent of network neurons—30 percent more on average than either random or adversarial testing—and bringing overall accuracy up to 99 percent in some networks, a 3 percent improvement on average.
Still, a high level of assurance is needed before regulators and the public are ready to embrace robot cars and other safety-critical technology like autonomous air-traffic control systems. One limitation of DeepXplore is that it can’t certify that a neural network is bug-free. That requires isolating and testing the exact rules the network has learned.
Another new tool developed at Stanford University, called ReluPlex, uses mathematical proofs to do this for small networks. Costly in computing time but offering strong guarantees, this small-scale verification technique complements DeepXplore’s full-scale testing approach, according to ReluPlex co-developer Clark Barrett, a computer scientist at Stanford.
The team has made their open-source software public for other researchers to use, and launched a website to let people upload their own data to see how the testing process works. Ultimately, the goal is to be able to test a system, like self-driving cars, and tell the creators whether it is truly safe, and under what conditions.
Tensor algebra speeds up big data analysis 100-fold
In the big data age, analytic algorithms operating on sparse data end up doing a lot of addition and multiplication by zero, which is wasted computation. Programmers get around this by writing custom code to avoid zero entries, but that code is complex, and it generally applies only to a narrow range of problems. Now, researchers from MIT, the French Alternative Energies and Atomic Energy Commission, and Adobe Research have developed Taco (short for tensor algebra compiler), a system that automatically produces code optimized for sparse data.
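To get a sense of the waste involved, consider a matrix that is 99 percent zeros: a dense multiply still grinds through every entry, while a sparse representation touches only the nonzeros. A small illustrative sketch using NumPy and SciPy (not Taco itself):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
A = rng.random((2000, 2000))
A[A < 0.99] = 0.0                   # make roughly 99% of the entries zero
x = rng.random(2000)

y_dense = A @ x                     # multiplies and adds all 4 million entries
A_csr = sparse.csr_matrix(A)        # stores only the ~40,000 nonzeros
y_sparse = A_csr @ x                # does work only where values are nonzero

assert np.allclose(y_dense, y_sparse)
```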
Taco promises a 100-fold speedup over existing, non-optimized software packages, and its performance is comparable to that of meticulously hand-optimized code for specific sparse-data operations, while requiring far less work on the programmer’s part, the team said.
In recent years, the mathematical manipulation of tensors (multidimensional arrays that generalize matrices), known as tensor algebra, has become crucial not only to big-data analysis but to machine learning as well, in addition to having been a staple of scientific research since Einstein’s time.
Traditionally, to handle tensor algebra, mathematics software has decomposed tensor operations into their constituent parts, but in the age of big data this approach is too time-consuming. For efficient operation on massive data sets, every sequence of tensor operations requires its own “kernel,” or computational template.
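The “kernel” in question is essentially a hand-written loop specialized to one expression and one data layout. Here is a sketch of what such a kernel looks like for a single operation, sparse matrix-vector multiply over a CSR-format matrix (illustrative only, not Taco’s generated code):

```python
# Hand-written kernel for one specific operation: y = A @ x with A in CSR form.
def csr_matvec(indptr, indices, values, x):
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):   # only this row's nonzeros
            y[row] += values[k] * x[indices[k]]
    return y

# The 2x3 matrix [[0, 5, 0], [3, 0, 2]] stored in CSR form:
print(csr_matvec([0, 1, 3], [1, 0, 2], [5.0, 3.0, 2.0], [1.0, 1.0, 1.0]))  # [5.0, 5.0]
```

Change the expression, say by adding a second matrix before multiplying, or change the storage format, and this loop has to be rewritten by hand; that per-kernel burden is what Taco is designed to remove.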
Taco generates all of that extra code automatically. The programmer simply specifies the size of a tensor, whether it’s full or sparse, and the location of the file from which it should import its values. For any given operation on two tensors, Taco builds a hierarchical map that indicates, first, which paired entries from both tensors are nonzero and, then, which entries from each tensor are paired with zeroes. All pairs of zeroes it simply discards.
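In miniature, that zero-skipping logic amounts to intersecting the nonzero coordinates of the two operands before doing any arithmetic. A toy sketch of the idea for an element-wise product of two sparse vectors (the real compiler generates optimized kernels for arbitrary tensor expressions and storage formats):

```python
a = {2: 1.5, 7: -0.3, 9: 4.0}          # sparse vector: index -> nonzero value
b = {2: 2.0, 5: 6.1, 9: 0.5}

# "Hierarchical map" in miniature: intersect the nonzero coordinates first,
# then compute only on the surviving pairs; every pair involving a zero is
# discarded without ever being multiplied.
result = {i: a[i] * b[i] for i in a.keys() & b.keys()}
print(result)                           # {2: 3.0, 9: 2.0}
```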
Check out the MIT link above for more.
Untethered high-quality VR
To address a key challenge facing the virtual reality industry, namely that users must be tethered to a server or PC in order to play high-quality apps, Purdue University researchers are proposing a three-step software solution instead of relying on hardware improvements. Their platform, called Furion, allows high-quality VR games to be played untethered on a smartphone. Even next-generation smartphones and wireless networks will not be advanced enough to sever the tether on their own, the team said.
Y. Charlie Hu, a Purdue University professor of electrical and computer engineering, said, “We have performed a systematic design study of the ‘elephant in the room’ facing the VR industry: Is it feasible to enable high-quality VR apps on untethered mobile devices such as smartphones? Today’s mobile hardware and wireless networks are about 10 times too slow for high-quality, immersive VR.”
The team recognized that waiting for future mobile hardware or next-generation wireless networks is unlikely to help, because of power limitations and the greater computational demands of processing packets at higher data rates.
The research team tested Furion with popular high-quality VR games Viking Village, Corridor and Nature.
For the quality of experience (QoE) to be acceptable, each VR frame must be rendered within about 16 milliseconds, which corresponds to 60 frames per second. Trying to render at this speed, however, quickly exhausts the capacity of a smartphone’s central processing unit; Google’s Pixel XL, for example, manages only about 111 milliseconds per frame.
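The arithmetic behind those figures is simple and worth spelling out:

```python
frame_budget_ms = 1000 / 60     # ~16.7 ms per frame is needed for 60 fps
pixel_xl_ms = 111               # reported per-frame rendering time on the phone alone
print(f"target: {frame_budget_ms:.1f} ms/frame ({1000 / frame_budget_ms:.0f} fps)")
print(f"Pixel XL alone: {pixel_xl_ms} ms/frame ({1000 / pixel_xl_ms:.0f} fps)")   # roughly 9 fps
```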
Today’s high-quality VR systems consist of a headset and a server containing a powerful graphics processing unit, with the user tethered to the server. One strategy to allow untethered operation might be to render all of the frames on the server and transmit them over WiFi to the smartphone. But this takes even longer: around 200 milliseconds per frame at the highest WiFi data rate supported by current smartphones.
“A key observation we made is that waiting for next-generation wireless networks such as 5G will not help because packet processing at 10 times higher data rate will exhaust the CPU on today’s smartphones,” Hu said.
Meanwhile, stagnating lithium-ion battery technology will limit next-generation smartphone hardware performance: battery capacity in mobile devices has barely doubled over the past 15 years, and that has kept smartphone CPUs from getting much faster.
At the same time, the clock rate of GPUs, which is critical to graphics performance, also has not improved much in recent years.
One reason for the heavy computational workload of VR apps is the constant need to render updates to the background environment in the virtual world. However, the background environment is largely unchanged from frame to frame – mountains and landscape, for example, remain much the same – and this background changes primarily in relation to the user’s position.
Furion splits up the rendering, performing the background rendering on the PC or server and the less computationally heavy foreground rendering on the smartphone or other mobile device. This “cooperative rendering” approach – pre-rendering the background on the PC and rendering the foreground on the smartphone – cuts the frame-rendering time to 14 milliseconds on the Pixel XL, satisfying the QoE requirement of high-quality VR.
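A minimal sketch of how such a cooperative split might be structured, with hypothetical helper functions standing in for the real engine integration; the division of labor per frame is the point, not the details:

```python
def render_background(pose):        # heavy: the whole environment, done on the server
    return f"bg@{pose}"

def render_foreground(pose):        # light: nearby, interactive objects, done on the phone
    return f"fg@{pose}"

def composite(background, foreground):
    return f"{background} + {foreground}"

background_cache = {}               # backgrounds pre-rendered on the server, streamed over WiFi

def server_prerender(poses):
    for pose in poses:
        background_cache[pose] = render_background(pose)

def phone_frame(pose):
    background = background_cache[pose]     # fetched from the cache, not rendered on the phone
    foreground = render_foreground(pose)    # the only per-frame rendering the phone does
    return composite(background, foreground)    # must finish within the ~16 ms frame budget

server_prerender(["pose-1", "pose-2"])
print(phone_frame("pose-1"))
```

Because the background is reused from the cache rather than recomputed each frame, the rendering work left on the phone is small enough to fit within the frame budget.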