Predicting battery life; realistic self-driving simulations; identifying images like AI.
Predicting battery life
Researchers at Stanford University, MIT, and Toyota Research Institute developed a machine learning model that can predict how long a lithium-ion battery can be expected to perform.
The researchers’ model was trained on a few hundred million data points of batteries charging and discharging. The dataset consists of 124 commercial lithium iron phosphate/graphite cells cycled under fast-charging conditions, with widely varying cycle lives ranging from 150 to 2,300 cycles. That variation was partly the result of testing different methods of fast charging but also due to manufacturing variability among batteries.
Based on voltage declines and a few other factors in early cycles, the algorithm predicted how many more cycles each battery would last.
Using data from the first 100 charge/discharge cycles, the model quantitatively predicted cycle life with a 9% test error. Separately, the algorithm classified batteries as having long or short life expectancy based on just the first five cycles; those predictions were correct 95% of the time.
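The general recipe can be illustrated with a toy sketch. Everything below – the feature, the data, and the plain least-squares model – is invented for illustration; the actual study engineered features from early-cycle voltage and capacity measurements and used a regularized linear model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real dataset: one early-cycle summary
# feature per cell, with cycle lives spanning roughly the 150-2,300
# range reported for the 124 cells. Both are invented for illustration.
n_cells = 124
feature = rng.uniform(-6.0, -2.0, n_cells)  # hypothetical early-cycle statistic
cycle_life = -500.0 * feature - 700.0 + rng.normal(0.0, 50.0, n_cells)

# Fit ordinary least squares on 100 cells and evaluate on the rest
# (the published work used regularized regression, not plain OLS).
X = np.column_stack([np.ones(n_cells), feature])
train, test = slice(0, 100), slice(100, None)
coef, *_ = np.linalg.lstsq(X[train], cycle_life[train], rcond=None)

pred = X[test] @ coef
test_error = np.mean(np.abs(pred - cycle_life[test]) / cycle_life[test])
print(f"mean percent test error: {100 * test_error:.1f}%")
```

The point of the sketch is the workflow, not the numbers: summarize each cell's early cycles into a feature, fit a model mapping that feature to cycle life, and report error on held-out cells.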
At the start of the project, the researchers were looking for a better way to charge batteries in 10 minutes, something that could help with mass electric vehicle adoption. To generate the training dataset, the team charged and discharged the batteries until each one reached the end of its useful life, which they defined as a 20% loss of capacity. They then asked whether it was really necessary to run the full number of cycles to determine a battery's lifetime.
Needing only to run a limited number of cycles could speed up battery development. “For all of the time and money that gets spent on battery development, progress is still measured in decades,” said Patrick Herring, a scientist at the Toyota Research Institute. “In this work, we are reducing one of the most time-consuming steps – battery testing – by an order of magnitude.”
Beyond faster development, the method could be used to sort battery cells after manufacturing, routing ones with lower expected lifetimes to less-demanding applications. It could help optimize manufacturing, as well, said Peter Attia, a Stanford doctoral candidate in materials science and engineering. “The last step in manufacturing batteries is called ‘formation,’ which can take days to weeks. Using our approach could shorten that significantly and lower the production cost.”
The dataset created by the researchers has been made publicly available.
Realistic self-driving simulations
Scientists from the University of Maryland, Baidu Research, and the University of Hong Kong created a new simulator for training and testing autonomous vehicles that uses photo-realistic environments as well as real-world traffic flow patterns and driver behaviors.
The team says their system, called Augmented Autonomous Driving Simulation (AADS), more accurately represents the inputs a self-driving car would receive on the road. Rather than using computer-generated imagery and mathematically modeled movement patterns for pedestrians, bicycles, and other cars, as is typical for self-driving simulators, AADS combines photos, videos, and lidar point clouds with real-world trajectory data for pedestrians, bicycles, and other cars. These trajectories can be used to predict the driving behavior and future positions of other vehicles or pedestrians on the road for safer navigation.
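As a simplified illustration of how recorded trajectory data can feed position prediction, the sketch below extrapolates a trajectory with a generic constant-velocity baseline. The trajectory values are made up, and this is not AADS's actual behavior model, which is far richer.

```python
import numpy as np

# Hypothetical recorded trajectory: (x, y) positions of a pedestrian
# sampled at 10 Hz, as might be extracted from video or lidar.
trajectory = np.array([[0.0, 0.0], [0.1, 0.05], [0.2, 0.1], [0.3, 0.15]])
dt = 0.1  # seconds between samples

# Constant-velocity baseline: estimate velocity from the last two
# observed positions and extrapolate future positions from it.
velocity = (trajectory[-1] - trajectory[-2]) / dt
horizon = 5  # predict 5 future steps (0.5 s ahead)
steps = np.arange(1, horizon + 1)[:, None]
future = trajectory[-1] + steps * dt * velocity
print(future)
```

A data-driven simulator like AADS replaces the constant-velocity assumption with behavior learned from real trajectories, which is what makes the simulated traffic plausible.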
“We are rendering and simulating the real world visually, using videos and photos, but also we’re capturing real behavior and patterns of movement,” said Dinesh Manocha, a professor of computer science and electrical and computer engineering at University of Maryland. “The way humans drive is not easy to capture by mathematical models and laws of physics. So, we extracted data about real trajectories from all the video we had available, and we modeled driving behaviors using social science methodologies. This data-driven approach has given us a much more realistic and beneficial traffic simulator.”
Additionally, the researchers developed technology that isolates the various components of a real-world street scene and renders them as individual elements that can be resynthesized to create a multitude of photo-realistic driving scenarios. This is the key to the use of real video and lidar data, and allows vehicles and pedestrians to be lifted from one environment and placed into another with the proper lighting and movement patterns.
“Because we’re using real-world video and real-world movements, our perception module has more accurate information than previous methods,” Manocha said. “And then, because of the realism of the simulator, we can better evaluate navigation strategies of an autonomous driving system.”
The team hopes companies developing autonomous vehicles incorporate some of the same approaches in their simulators.
Identifying images like AI
There are plenty of examples of AI systems and neural networks misidentifying images due to a slight change in the image, such as a piece of tape applied to a stop sign. To investigate, researchers at Johns Hopkins University turned the tables and asked human subjects to identify a series of unclear, static-filled images to determine whether they would make the same decisions as a neural net.
“Most of the time, research in our field is about getting computers to think like people,” said Chaz Firestone, an assistant professor in Johns Hopkins’ Department of Psychological and Brain Sciences. “Our project does the opposite — we’re asking whether people can think like computers.”
“These machines seem to be misidentifying objects in ways humans never would,” Firestone added. “But surprisingly, nobody has really tested this. How do we know people can’t see what the computers did?”
To test this, the team showed people dozens of adversarial images – images crafted to trick neural nets, and which had in fact led computers to give incorrect answers – and gave them the same kinds of labeling options the machine had. Specifically, they asked people which of two options the computer had decided the object was: one being the computer’s actual conclusion and the other a random answer.
People chose the same answer as the neural net 75% of the time. Moreover, 98% of participants agreed with the machine at rates above chance.
To follow up, the team narrowed the choices to the computer’s first choice and its second-highest-ranked option. This time, 91% of the people tested agreed with the machine’s first choice.
Even when the researchers gave people 48 choices for what the object might be, and even when the pictures resembled television static, an overwhelming proportion of subjects picked the machine’s answer at rates well above chance.
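The agreement measure behind these figures is straightforward. The sketch below uses made-up trial data (not the study's actual responses) to show how an observed agreement rate compares with the chance rate implied by a 48-option menu.

```python
# Made-up trials: the machine's chosen label vs. a participant's pick
# from the same 48-label option set (labels are arbitrary integers).
machine_choice = [3, 17, 42, 8, 3, 25, 11, 30]
human_choice = [3, 17, 40, 8, 3, 25, 11, 9]

matches = sum(m == h for m, h in zip(machine_choice, human_choice))
agreement = matches / len(machine_choice)  # fraction of trials agreeing
chance = 1 / 48                            # expected agreement for random picks
print(f"agreement: {agreement:.2f}, chance: {chance:.3f}")
```

With 48 options, random guessing agrees with the machine only about 2% of the time, so even modest observed agreement is strong evidence that people can anticipate the machine's choice.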
1,800 people were included in the various experiments. “We found if you put a person in the same circumstance as a computer, suddenly the humans tend to agree with the machines,” Firestone concluded. “This is still a problem for artificial intelligence, but it’s not like the computer is saying something completely unlike what a human would say.”