System Bits: Sept. 18

SCADA security simulator; more efficient, transparent ML; AI in chemistry.

Better AI technique for chemistry predictions
Caltech researchers have developed a technique that uses machine learning more effectively to predict how complex chemicals will react with reagents. The tool is a new twist on similar machine learning techniques for finding more effective catalysts without time-consuming trial-and-error research, making it a time-saver for drug researchers.

The new tool focuses on the properties of molecular orbitals—the arrangement of electrons around molecules—rather than on the machine learning databases of atomic characteristics that other methods use. It pairs Gaussian process regression with Hartree-Fock input, “a change of focus for prediction software,” the researchers wrote in a pre-published version of their paper.

The predictions from this new method are more accurate than machine learning predictions built on density functional theory (DFT) and faster than coupled-cluster (CC) and MP2 calculations, which work at the level of atoms.
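
For intuition, here is a generic sketch of the machinery named above (not the Caltech group's code): a Gaussian process regressor is trained to map cheap input features to expensive reference energies. The feature vectors and labels below are random placeholders standing in for Hartree-Fock molecular-orbital features and high-level reference energies.

```python
# Minimal sketch of the general approach (not the Caltech code):
# train a Gaussian process on cheap molecular-orbital features to
# predict energies normally obtained from expensive methods.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each row is a feature vector computed from
# Hartree-Fock molecular orbitals; y is a reference energy (e.g., from
# a coupled-cluster calculation) used as the training label.
X_train = rng.normal(size=(50, 8))
y_train = X_train @ rng.normal(size=8) + 0.1 * rng.normal(size=50)

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

# Predict the expensive quantity for a new molecule from cheap features,
# with an uncertainty estimate -- a key benefit of Gaussian processes.
X_new = rng.normal(size=(1, 8))
energy, sigma = gpr.predict(X_new, return_std=True)
print(f"predicted energy: {energy[0]:.3f} +/- {sigma[0]:.3f}")
```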

“If we can get this to work, it will be a big deal for the way in which computers are used to study chemical problems,” said Tom Miller, a professor of chemistry at Caltech, in a press statement. “We’re very excited about it.”

Miller was one of three researchers working on the new method. Also on the project were Matt Welborn, a postdoctoral scholar at the Resnick Sustainability Institute, and Lixue Cheng, a chemistry and chemical engineering graduate student.

Read more here.

A “hive mind” doctor swarm diagnoses better than ML
We know that neural networks can now teach computers to read radiology images better than any single human practitioner, but don’t count the human out yet. The question is whether a group of humans—a hive mind—can beat software on speed and accuracy of diagnosis. A recent study by Stanford University and the company Unanimous AI suggests that a swarm of doctors can diagnose pneumonia more accurately than machine learning software alone.

The study pitted Artificial Swarm Intelligence (ASI) technology against Stanford University School of Medicine’s deep learning system, CheXNet. Researchers created a hive mind of human practitioners by networking them together in a real-time closed loop moderated by AI algorithms, using Unanimous AI’s Swarm AI technology.

Stanford and Unanimous AI compared old-fashioned one-doctor diagnoses, machine learning diagnoses from the CheXNet system and real-time ASI diagnoses. Researchers concluded that “a small group of networked radiologists, when working as a real-time closed-loop ASI system, was significantly more accurate than the individuals on their own, reducing errors by 33%, as well as significantly more accurate (22%) than a state-of-the-art software-only solution using deep learning.”
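
Swarm AI itself is proprietary, but the underlying statistical intuition, that pooling many independent, imperfect judgments reduces error, can be sketched with a toy majority-vote simulation. The accuracy figure and doctor count below are invented for illustration, not taken from the study:

```python
# Toy simulation of why pooled judgments beat individuals
# (illustrative only; real ASI uses a closed-loop negotiation,
# not a simple vote).
import random

random.seed(42)
INDIVIDUAL_ACCURACY = 0.70   # hypothetical per-doctor accuracy
N_DOCTORS = 8
N_CASES = 10_000

def individual_diagnosis(truth: bool) -> bool:
    """One doctor's diagnosis: correct with fixed probability."""
    return truth if random.random() < INDIVIDUAL_ACCURACY else not truth

def majority_diagnosis(truth: bool) -> bool:
    """Pool N independent diagnoses by majority vote."""
    votes = sum(individual_diagnosis(truth) == truth for _ in range(N_DOCTORS))
    return truth if votes > N_DOCTORS / 2 else not truth

solo = sum(individual_diagnosis(True) for _ in range(N_CASES)) / N_CASES
pooled = sum(majority_diagnosis(True) for _ in range(N_CASES)) / N_CASES
print(f"single doctor: {solo:.1%}, pooled group: {pooled:.1%}")
```

Even this crude vote lifts a 70%-accurate individual to roughly 80% as a group; a real-time closed-loop swarm aims to do better still by letting participants see and react to each other's leanings.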

Swarming behavior in the natural world has been fertile ground for robotics and research for many years. When a large group of animals or insects schools, flocks or swarms—acting like a single mind, moving in unison—the biological phenomenon is called swarm intelligence (SI). The researchers’ article specifically mentions honeybees for their ability to arrive at correct answers as a swarm that no one bee could reach on its own. Unanimous AI’s Swarm AI technology mimics bee swarming for this experiment. ASI technology may give new meaning to getting a second opinion.

Louis Rosenberg, Gregg Willcox, David Baltaxe and Mimi Lyons of Unanimous AI worked on the study along with Safwan Halabi, MD, and Matthew Lungren, MD, of Stanford University Medical School in Stanford, Calif. Read more here.

Open-source SCADA simulator trains operators to spot attacks
The patchwork of supervisory control and data acquisition (SCADA) systems in factories, power plants and industrial facilities around the U.S. is considered vulnerable to hackers. A new open-source simulator developed by security startup Fortiphyd Logic may help SCADA system operators find the vulnerabilities in their systems.

The simulator, available free to educators and individuals, helps industry, researchers and students adopt the mindset of a hacker looking for vulnerabilities. “Our goal is to make sure the good guys get this experience so they can respond appropriately,” said Raheem Beyah, cofounder of Fortiphyd Logic and a professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. The Georgia Research Alliance is supporting work on the simulator, which was the brainchild of Georgia Tech postdoctoral researcher David Formby, the other founder of Fortiphyd Logic.

The first simulated scenario, available now, is a chemical plant. The simulator, named the Graphical Realism Framework for Industrial Control Simulations (GRFICS), lets users play the roles of attacker and defender using the actual software that runs on programmable logic controllers (PLCs) in chemical plants. Users can practice attacking the system, detecting attacks and discovering the consequences of their actions on the control system. A 3D video interface gamifies the simulator: reach the simulated explosion and it’s game over.

A typical attack might be a bad actor taking control of a system’s sensors to tamper with the data they report and do some harm. The simulator lets users practice both launching and detecting such attacks. GRFICS can be expanded with control system software for other PLCs, such as those in the electric grid and wastewater treatment plants.
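
The snippet below is not GRFICS code, just a minimal sketch of one defensive idea the simulator lets users practice: flagging sensor readings that fall outside the physical range or jump implausibly fast, a possible sign of tampering. The thresholds and trace values are hypothetical.

```python
# Toy sensor-plausibility check, in the spirit of the defenses GRFICS
# lets users practice (illustrative only; not GRFICS code).
def detect_tampering(readings, max_step=5.0, valid_range=(0.0, 100.0)):
    """Flag indices where a reading is out of range or jumps
    faster than the physical process plausibly allows."""
    alerts = []
    for i, value in enumerate(readings):
        if not (valid_range[0] <= value <= valid_range[1]):
            alerts.append((i, value, "out of range"))
        elif i > 0 and abs(value - readings[i - 1]) > max_step:
            alerts.append((i, value, "implausible jump"))
    return alerts

# A hypothetical pressure trace where an attacker spoofs the sensor.
trace = [50.1, 50.3, 50.2, 50.2, 12.0, 12.0, 150.0]
for index, value, reason in detect_tampering(trace):
    print(f"reading {index} = {value}: {reason}")
```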

Read the full story on Georgia Tech Research’s website.

MIT demos more efficient machine learning module that interprets object changes
A Massachusetts Institute of Technology (MIT) machine learning model has come closer to mimicking a human’s ability to interpret and predict actions based on a few images. Using several large preexisting video databases of hand gestures and hands interacting with objects, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has trained a machine learning module to quickly identify and predict, in very general terms, what a human is about to do.

Human beings learn in infancy and childhood basic interactions with inanimate objects and the language to describe them: an object is under another object, a person is spinning an object (such as a pen) on a table, a person is putting an object inside another (such as putting a pen into a cup). From experience, we can look at before and after images (single frames of a video) of a simple interaction and say what happened in between. The example MIT uses is a first image of a human hand at the bottom of a small, precarious stack of soda cans on a table; the second image shows the cans lying on the table. Humans quickly surmise that the hand knocked down the cans.

Robots have to be trained to draw the same conclusions from limited input. Other machine learning systems can already do this, but the MIT CSAIL researchers’ machine learning module—the Temporal Relation Network (TRN)—outperforms existing modules. “The system doesn’t go through all the frames—it picks up key frames and, using the temporal relation of frames, recognize what’s going on. That improves the efficiency of the system and makes it run in real-time accurately,” Bolei Zhou, a former PhD student at CSAIL, said in an MIT article. Zhou is now an assistant professor of computer science at the Chinese University of Hong Kong.
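
As a rough illustration of that idea (a heavily simplified sketch, not MIT's actual TRN implementation), a relation module can score ordered pairs of sampled key-frame features with a small network and sum the scores into class logits:

```python
# Simplified sketch of a 2-frame temporal relation module in PyTorch
# (a toy version of the idea, not MIT's actual TRN code).
import torch
import torch.nn as nn
from itertools import combinations

class PairwiseTemporalRelation(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Small MLP that scores each time-ordered pair of frame features.
        self.g = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (num_sampled_frames, feat_dim), in temporal order.
        n = frame_feats.size(0)
        scores = 0
        for i, j in combinations(range(n), 2):  # preserves order i < j
            pair = torch.cat([frame_feats[i], frame_feats[j]])
            scores = scores + self.g(pair)
        return scores  # summed relation scores serve as class logits

# Usage with dummy CNN features for 4 sampled key frames.
model = PairwiseTemporalRelation(feat_dim=512, num_classes=174)
feats = torch.randn(4, 512)
logits = model(feats)
print(logits.shape)  # torch.Size([174])
```

Because only a handful of key frames are sampled and compared, the cost stays low enough for real-time use; the published TRN extends this to relations over more than two frames at multiple time scales.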

MIT used three crowdsourced datasets:

  • Something-Something, from the company TwentyBN, has 200,000 videos in 174 action categories of humans interacting with objects.
  • Jester, also from TwentyBN, has almost 150,000 videos of 27 different hand gestures.
  • Charades, built by Carnegie Mellon University researchers, has nearly 10,000 videos of 157 categorized activities.

Co-authors on the paper are Antonio Torralba, a CSAIL principal investigator and professor in the Department of Electrical Engineering and Computer Science; Aude Oliva, a principal research scientist at CSAIL; and Alex Andonian, a CSAIL research assistant.

Read further, and see a short video of the system working, here on MIT’s website.

Transparent neural network visualizes its thought process
The proverbial reading of minds is becoming scientific reality with a neural network from MIT Lincoln Laboratory’s Intelligence and Decision Technologies Group. Researchers have developed a neural network that visually shows its thinking process as it solves problems and answers questions about images. Called the Transparency by Design network (TbD-net), the neural network also performs better than most other neural networks built for visual reasoning.

It’s valuable for researchers to understand how a neural network thinks, yet neural networks are typically complex black boxes. The Lincoln Laboratory researchers say the transparent network reveals valuable information about the network’s decision-making process.
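
As a generic illustration of this kind of transparency (not TbD-net's actual code; the mask below is random placeholder data), an intermediate attention mask can be rendered as a heatmap so a researcher can see where the network is “looking” at each reasoning step:

```python
# Generic sketch: render an intermediate attention mask as a heatmap,
# the kind of step-by-step visualization a transparent network exposes.
# (Illustrative only; the mask here is a random placeholder.)
import numpy as np
import matplotlib.pyplot as plt

# Pretend this came from one reasoning module, e.g. "find blue objects".
attention_mask = np.random.rand(14, 14)  # hypothetical 14x14 mask

plt.imshow(attention_mask, cmap="hot", interpolation="nearest")
plt.title("Attention mask after one reasoning step (toy data)")
plt.colorbar(label="attention weight")
plt.savefig("attention_step.png")  # inspect to see where the net "looks"
```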

Read more.


