System Bits: Aug. 13

Cameras monitor crops; PLC flaws; anti-bias training.


Keeping tabs on crops
University of Missouri researchers, working with the U.S. Department of Agriculture's Agricultural Research Service, paired a regular digital camera with a miniature infrared camera to create a novel system that provides both temperature data and detailed images of crops.

“Using an infrared camera to monitor crop temperature can be tricky because it is difficult to differentiate between the plants and background elements like soil or shade,” said Ken Sudduth, a USDA agricultural engineer and adjunct professor of bioengineering at MU’s College of Agriculture, Food and Natural Resources. “By augmenting a miniature infrared camera with a digital camera, we created a system that can examine crop temperatures with great detail and accuracy.”

Sudduth developed the camera system with Philip Drew, a graduate student researcher who completed his master’s degree at MU while working on the project. Together, the cameras produce two distinct images of the same area: a visually detailed photograph and an infrared image. The setup, known as the Multi-band System for Imaging of a Crop Canopy, allows farmers to identify problem areas from the digital camera images and analyze those areas with infrared images that map temperature to light intensity.

Coupled with an algorithm that automatically filters soil, shade, and other non-plant elements out of the images, the camera system would allow farmers to precisely irrigate their crops according to the specific needs of individual plants, maximizing yields and optimizing water use without requiring the purchase of more expensive infrared cameras.
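The team's filtering algorithm isn't described in detail here, but a minimal sketch of the general idea (mask non-plant pixels using a vegetation index computed from the RGB image, then read temperature only at the remaining pixels of a co-registered thermal image) might look like the following. The excess-green index and the 0.10 threshold are illustrative assumptions, not the MU/USDA team's published method.

```python
import numpy as np

def excess_green(rgb):
    """Excess-green index (ExG = 2g - r - b) on chromaticity-normalized RGB.
    Vegetation tends to score high; soil and shadow tend to score low."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9              # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

def mean_canopy_temperature(rgb, thermal, exg_threshold=0.10):
    """Mean temperature over pixels classified as plant canopy.

    Assumes rgb (H x W x 3) and thermal (H x W, degrees C) are already
    co-registered, i.e., pixel (i, j) covers the same ground in both images.
    """
    plant_mask = excess_green(rgb) > exg_threshold
    if not plant_mask.any():
        raise ValueError("no canopy pixels found; try a lower threshold")
    return thermal[plant_mask].mean()
```

In practice the two cameras differ in resolution and field of view, so image registration and a per-field threshold calibration would have to come before this kind of per-pixel masking is meaningful.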


Photo credit: University of Missouri

“Medium-scale farmers have big fields, but they don’t always have the funds for expensive monitoring equipment,” Sudduth said. “Our system allows for precision monitoring over a large area for a more manageable cost. That’s good for farmers who can earn a bigger profit, and it’s good for everyone who depends on their crops.”

Sudduth said the system needs more fine-tuning before it can be sold to farmers, and future iterations could incorporate drones for increased versatility.

Rogue computer exposed PLC vulnerabilities
Cybersecurity researchers at Tel Aviv University and the Technion – Israel Institute of Technology discovered critical vulnerabilities in the Siemens Simatic S7 programmable logic controller (PLC), widely regarded as one of the most secure PLCs used to run industrial processes.

Prof. Avishai Wool and M.Sc. student Uriel Malin of TAU's School of Electrical Engineering worked together with Prof. Eli Biham and Dr. Sara Bitan of the Technion to disrupt the PLC's functions and gain control of its operations.

The team presented their findings at the recent Black Hat USA conference in Las Vegas, revealing the security weaknesses they found in the newest generation of the Siemens systems and how they reverse-engineered the proprietary cryptographic protocol in the S7.

Following the best practices of responsible disclosure, the research findings were shared with Siemens well in advance of the Black Hat USA presentation, allowing the manufacturer to prepare.

The scientists’ rogue engineering workstation posed as a TIA (Totally Integrated Automation) Portal engineering station that interfaced with the Simatic S7-1500 PLC controlling the industrial system. “The station was able to remotely start and stop the PLC via the commandeered Siemens communications architecture, potentially wreaking havoc on an industrial process,” Prof. Wool explained. “We were then able to wrest the controls from the TIA and surreptitiously download rogue command logic to the S7-1500 PLC.”
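The S7CommPlus protocol the team reverse-engineered is proprietary, so its details can't be shown here. As a hedged illustration of why protocol-level authentication matters, the sketch below uses the open-source python-snap7 library, which speaks the older, unauthenticated variant of the S7 protocol: with nothing but network access, state-changing commands take a few lines of code. The address, rack, and slot values are placeholders.

```python
import snap7

# Placeholder address; rack 0 / slot 1 is a common S7 configuration.
PLC_IP = "192.0.2.10"

client = snap7.client.Client()
client.connect(PLC_IP, rack=0, slot=1)

# Over the legacy, unauthenticated S7 protocol these state-changing
# commands need no credentials at all, which is the class of weakness
# the newer encrypted S7CommPlus protocol was designed to close.
client.plc_stop()        # halt the CPU
client.plc_hot_start()   # restart it
client.disconnect()
```

Nothing here reproduces the researchers' attack; their work was precisely about defeating the cryptographic protections that make the equivalent commands on S7CommPlus non-trivial.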

The researchers hid the rogue code so that a process engineer could not see it. If the engineer were to examine the code from the PLC, he or she would see only the legitimate PLC source code, unaware of the malicious code running in the background and issuing rogue commands to the PLC.

The research combined deep-dive studies of the Siemens technology by teams at both the Technion and TAU.

Their findings demonstrate how a sophisticated attacker can abuse Siemens’ newest generation of industrial controllers that were built with more advanced security features and supposedly more secure communication protocols.

Siemens doubled down on industrial control system (ICS) security in the aftermath of the Stuxnet attack in 2010, in which its controllers were targeted in a sophisticated attack that ultimately sabotaged centrifuges in the Natanz nuclear facility in Iran.

“This was a complex challenge because of the improvements that Siemens had introduced in newer versions of Simatic controllers,” adds Prof. Biham. “Our success is linked to our vast experience in analyzing and securing controllers and integrating our in-depth knowledge into several areas: systems understanding, reverse engineering, and cryptography.”

Dr. Bitan noted that the attack emphasizes the need for investment by both manufacturers and customers in the security of industrial control systems. “The attack shows that securing industrial control systems is a more difficult and challenging task than securing information systems,” she concludes.

Training better machine learning models
Researchers at Harvard University are working to keep machine learning models from absorbing the human biases hidden in their training data.

The test bed is Natural Language Inference (NLI): given a premise sentence and a hypothesis sentence, a model must decide whether the hypothesis is entailed by the premise, contradicts it, or is neutral. Datasets with hundreds of thousands of these human-written premise-hypothesis pairs have led to an explosion of new neural network architectures for solving NLI. Over the years, these neural networks have gotten better and better. Today’s state-of-the-art models usually get the equivalent of a B+ on these tests. Humans usually score an A or A-.

But researchers recently discovered that machine learning models still do remarkably well when they’re given only the hypothesis, without the original premise.

As it turns out, these datasets are rife with human biases. When asked to come up with contradictory sentences, humans often use negations, like “don’t” or “nobody.”
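As a toy illustration (the data below is made up, not drawn from any benchmark), a “classifier” that never sees a premise and merely checks the hypothesis for a negation word can already separate contradictions from the rest:

```python
# Made-up (hypothesis, label) pairs illustrating the negation artifact;
# real NLI corpora show the same skew at a much larger scale.
examples = [
    ("The dog is sleeping", "entailment"),
    ("Nobody is outside", "contradiction"),
    ("The man doesn't own a car", "contradiction"),
    ("A woman is reading a book", "neutral"),
    ("They don't like music", "contradiction"),
    ("Children are playing", "entailment"),
]

NEGATIONS = {"no", "not", "nobody", "never", "don't", "doesn't"}

def negation_rule(hypothesis):
    """Premise-free 'classifier': predict contradiction iff a negation appears."""
    return "contradiction" if NEGATIONS & set(hypothesis.lower().split()) else "other"

correct = sum(
    (negation_rule(h) == "contradiction") == (label == "contradiction")
    for h, label in examples
)
print(f"{correct}/{len(examples)} contradiction decisions right, premise unseen")
```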

“These models aren’t learning to understand the relationship between texts, they are learning to capture human idiosyncrasies,” said Yonatan Belinkov, first author of the paper and a Postdoctoral Fellow in Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS).

To combat this, Belinkov and colleagues developed a new method to build machine learning models that reduces the model’s reliance on these biases.

The team presented their research at the 57th Annual Meeting of the Association for Computational Linguistics (ACL), held in Florence, Italy, from July 28 to August 2.

It’s common to model the typical Natural Language Inference test as a single stream — the premise and hypothesis are both processed together and fed to a classifier which predicts contradiction, neutral or entailment.

The team added a second stream to the model, this one with only the hypothesis. The model learns to perform Natural Language Inference with both streams simultaneously, but if it does well on the hypothesis-only side, it’s penalized. This approach encourages the model to focus more on the premise side and refrain from learning the biases that led to successful hypothesis-only performance.
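A minimal PyTorch sketch of that idea, under assumptions the paper does not make (bag-of-words encoders, a simple gradient-reversal penalty, an untuned weight lam): the premise-hypothesis classifier and the hypothesis-only classifier share one hypothesis encoder, and reversed gradients from the hypothesis-only head push the shared representation away from anything that supports premise-free predictions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, so training pushes the shared encoder to make this head fail."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class TwoStreamNLI(nn.Module):
    def __init__(self, vocab_size, dim=128, n_labels=3, lam=0.5):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # crude bag-of-words encoder
        self.main_head = nn.Linear(2 * dim, n_labels)  # premise + hypothesis stream
        self.hyp_head = nn.Linear(dim, n_labels)       # hypothesis-only stream
        self.lam = lam

    def forward(self, premise_ids, hypothesis_ids):
        p = self.embed(premise_ids)                    # (batch, dim)
        h = self.embed(hypothesis_ids)
        main_logits = self.main_head(torch.cat([p, h], dim=-1))
        hyp_logits = self.hyp_head(GradReverse.apply(h, self.lam))
        return main_logits, hyp_logits

def loss_fn(main_logits, hyp_logits, labels):
    # Both heads are trained to predict the label; the reversal layer turns
    # hypothesis-only success into a penalty on the shared representation.
    ce = nn.functional.cross_entropy
    return ce(main_logits, labels) + ce(hyp_logits, labels)
```

At test time only the main head would be used; the hypothesis-only head exists purely to shape training, which is consistent with the trade-off described below, where the debiased model gives up a few in-domain points for better transfer.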

“Our hope is that with this method, the model isn’t just focused on biased words, like ‘no’ or ‘doesn’t,’ but rather it’s learned something deeper,” said Stuart Shieber, James O. Welch, Jr. and Virginia B. Welch Professor of Computer Science at SEAS and co-author of the paper.

Those biases, however, can also be important context clues to solving the problem, so it’s critical not to devalue them too much.

“There is a thin line between bias and usefulness,” said Gabriel Grand, CS ’18, who worked on the project as part of his undergraduate thesis. “Reaching peak performance means forgetting a lot of assumptions but not all of them.”

By removing many of these assumptions, the two-stream model unsurprisingly did slightly worse on the data that it was trained on than the model which wasn’t penalized for relying on biases. However, when tested on new datasets — with different biases — the model did significantly better.

“Even though the model did a few percentage points worse on its own dataset, it has learned not to rely on biases as much. So, this method produces a model that performs more generally and is more robust,” said Shieber.

This method may apply to a range of artificial intelligence tasks that require identifying deeper relationships – such as visual question answering, reading comprehension, and other natural language tasks – while avoiding superficial biases.

The paper – available from Harvard’s DASH repository – was co-authored by Adam Poliak, a PhD student and co-first author, and Benjamin Van Durme, Assistant Professor in Computer Science, both members of the Johns Hopkins Center for Language and Speech Processing, and by Alexander M. Rush, Associate Professor of Computer Science at SEAS.

This research was supported by the Harvard Mind, Brain, and Behavior Initiative, DARPA LORELEI, and the National Science Foundation.


