System Bits: Aug. 7

ML interprets human vitals; a key weakness is found in computer vision algorithms; ML and big data tackle the toughest science problems.

ML leverages existing hospital patient data to detect trouble
Focusing on emergency and critical care patients, University of Michigan spinout Fifth Eye has developed a system that combines a machine learning algorithm with signal processing to monitor the autonomic nervous system of hospital patients. The system interprets the data every two minutes and can, in some cases, flag trouble almost two days earlier than traditional vital signs like heart rate and blood pressure.

Technology from Fifth Eye can predict whether a patient will deteriorate several hours before normal vital signs signal a problem, and it can also monitor traumatic and other brain injuries and predict secondary injury.

For example, one patient experienced internal bleeding while recovering from surgery. In a retrospective analysis of the data, clinicians relying on traditional vital signs like heart rate and blood pressure took 37 hours longer to detect the problem than the Fifth Eye system would have, the team said. Earlier detection would likely have improved the patient’s outcome and shortened their stay.

This technology gives physicians and nurses an early warning of hemodynamic deterioration, according to Jen Baird, CEO of Fifth Eye. “They are excited about this because we can give immediate feedback on the treatment they are trying. Some have even described detecting hemodynamic instability like this as a holy grail of feedback.”

Instead of using traditional vital signs to detect patient deterioration, the Fifth Eye analytic uses a single streaming EKG lead and, based on the activity of the heart, can infer what a patient’s body is compensating for.
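Fifth Eye has not published its algorithm, so the sketch below is purely illustrative: it assumes beat-to-beat variability features extracted from a single EKG lead and a simple classifier scoring instability every two minutes. The sampling rate, feature choices, and `rr_features` helper are all hypothetical, not the company's method.

```python
# Hypothetical sketch: score instability risk from one streaming EKG lead.
# This is NOT Fifth Eye's algorithm; features, model and names are illustrative.
import numpy as np
from scipy.signal import find_peaks
from sklearn.linear_model import LogisticRegression

FS = 250  # assumed EKG sampling rate in Hz

def rr_features(ecg_window):
    """Detect R-peaks in a 2-minute EKG window and summarize beat-to-beat variability."""
    peaks, _ = find_peaks(ecg_window, distance=int(0.4 * FS), prominence=0.5)
    rr = np.diff(peaks) / FS                            # R-R intervals in seconds
    if len(rr) < 2:
        return np.zeros(3)
    return np.array([rr.mean(),                         # mean beat interval
                     rr.std(),                          # overall variability
                     np.sqrt(np.mean(np.diff(rr) ** 2))])  # RMSSD, a common HRV measure

# Train on labeled historical windows (stable = 0, deteriorating = 1).
# Real training data would come from retrospective EKG records; this is placeholder noise.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)
model = LogisticRegression().fit(X_train, y_train)

# Every two minutes, score the latest window of the streaming lead.
new_window = rng.normal(size=2 * 60 * FS)               # placeholder 2-minute signal
risk = model.predict_proba(rr_features(new_window).reshape(1, -1))[0, 1]
print(f"instability risk score: {risk:.2f}")
```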

“Any patient within the hospital currently being monitored by an electrocardiogram, or EKG, has the potential to benefit from this product,” Baird said.

Baird, a U-M alum and serial entrepreneur, teamed up with three researchers from the Michigan Center for Integrative Research in Critical Care (MCIRCC) to create the startup with technology licensed from U-M.

The potential application of the technology is broad, and what makes it unique is that it uses data hospitals already generate rather than requiring any new measurements.

The key, the researchers pointed out, was bringing together a highly multidisciplinary team committed to developing a life-saving ‘big data’ precision-medicine tool while also cultivating the business case needed to move the idea to impact.

To develop the analytic, which uses machine learning and sophisticated signal processing, U-M invested in collecting patient data from more than 200 hospital beds. This rich data set was used for initial technology validation, but the analytic still needs FDA clearance before it can be used with patients; Baird hopes to receive that clearance in 2019.

Key weakness in modern computer vision systems found
In a finding that could point the way toward better computer vision systems, Brown University researchers show why computers are so bad at seeing when one thing is not like another.

In a paper presented last week at the annual meeting of the Cognitive Science Society, the Brown team sheds light on why computers struggle with such same-or-different judgments and suggests avenues toward smarter computer vision systems.

When one of these things is not like the other: Computers are great at categorizing images based on objects within them, but they’re bad at figuring out if two objects within an image are the same or different. New research helps to explain why tasks like these are so hard for computers.
Source: Brown University

“There’s a lot of excitement about what computer vision has been able to achieve, and I share a lot of that,” said Thomas Serre, associate professor of cognitive, linguistic and psychological sciences at Brown and the paper’s senior author. “But we think that by working to understand the limitations of current computer vision systems as we’ve done here, we can really move toward new, much more advanced systems rather than simply tweaking the systems we already have.”

For the study, Serre said he and his colleagues used state-of-the-art computer vision algorithms to analyze simple black-and-white images containing two or more randomly generated shapes. In some cases the objects were identical; sometimes they were the same but with one object rotated in relation to the other; sometimes the objects were completely different. The computer was asked to identify the same-or-different relationship.

The study showed that, even after hundreds of thousands of training examples, the algorithms were no better than chance at recognizing the appropriate relationship. The question, then, was why these systems are so bad at this task.
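The paper’s actual stimuli and models are not reproduced here. Purely as a hypothetical sketch of this kind of experiment, with squares of varying size standing in for the randomly generated shapes, the code below builds same/different images, trains a small feed-forward CNN in PyTorch, and reports test accuracy against the 50 percent chance level.

```python
# Minimal, hypothetical same-different experiment (not the Brown team's code).
import numpy as np
import torch
import torch.nn as nn

def draw(img, size, x, y):
    img[y:y + size, x:x + size] = 1.0               # a filled square "shape"

def make_image(same, canvas=48):
    img = np.zeros((canvas, canvas), dtype=np.float32)
    s1 = np.random.randint(4, 10)
    s2 = s1 if same else np.random.randint(4, 10)
    while (not same) and s2 == s1:                  # force "different" shapes to differ
        s2 = np.random.randint(4, 10)
    draw(img, s1, np.random.randint(0, canvas // 2 - 10), np.random.randint(0, canvas - 10))
    draw(img, s2, np.random.randint(canvas // 2, canvas - 10), np.random.randint(0, canvas - 10))
    return img

def make_batch(n=64):
    labels = np.random.randint(0, 2, size=n)        # 1 = same, 0 = different
    imgs = np.stack([make_image(bool(l)) for l in labels])
    return torch.from_numpy(imgs).unsqueeze(1), torch.from_numpy(labels).float()

# A small feed-forward CNN asked to answer "same or different?"
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 12 * 12, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(300):                             # train on the binary relation
    x, y = make_batch()
    opt.zero_grad()
    loss = loss_fn(net(x).squeeze(1), y)
    loss.backward()
    opt.step()

x, y = make_batch(256)                              # held-out test batch
acc = ((net(x).squeeze(1) > 0) == y.bool()).float().mean().item()
print(f"test accuracy: {acc:.2f}")                  # chance level is 0.50
```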

The researchers suspected that this had something to do with the inability of these computer vision algorithms to individuate objects, and they traced the problem to the architecture of the machine learning systems that power the algorithms. The algorithms use convolutional neural networks: layers of connected processing units that loosely mimic networks of neurons in the brain. A key difference from the brain is that the artificial networks are exclusively “feed-forward,” meaning information flows one way through the layers of the network, which is not how the human visual system works, Serre said.

“If you look at the anatomy of our own visual system, you find that there are a lot of recurring connections, where the information goes from a higher visual area to a lower visual area and back through,” Serre said.

While it’s not clear exactly what those feedbacks do, Serre says, it’s likely that they have something to do with our ability to pay attention to certain parts of our visual field and make mental representations of objects in our minds.

The team hypothesizes that the reason computers can’t do anything like that is that feed-forward neural networks don’t allow for the kind of recurrent processing required for this individuation and mental representation of objects. It could be that making computer vision smarter will require neural networks that more closely approximate the recurrent nature of human visual processing, they concluded.
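As a schematic contrast only, not an architecture from the paper, the fragment below shows the difference in code: a feed-forward stack applies each layer exactly once, while a recurrent variant feeds a layer’s output back into it for several iterations, giving the network a chance to refine its representation of each object. Both block classes and the step count are made up for illustration.

```python
# Schematic contrast between feed-forward and recurrent processing (illustrative only).
import torch
import torch.nn as nn

class FeedForwardBlock(nn.Module):
    """Information flows one way: each layer is applied exactly once."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.layers(x)

class RecurrentBlock(nn.Module):
    """The same convolution is applied repeatedly, with its output fed back in,
    loosely mimicking feedback from higher to lower visual areas."""
    def __init__(self, steps=4):
        super().__init__()
        self.inp = nn.Conv2d(1, 16, 3, padding=1)
        self.recur = nn.Conv2d(16, 16, 3, padding=1)
        self.steps = steps

    def forward(self, x):
        h = torch.relu(self.inp(x))
        for _ in range(self.steps):                 # iterative refinement loop
            h = torch.relu(self.recur(h) + h)       # new state depends on the old state
        return h

x = torch.randn(1, 1, 48, 48)
print(FeedForwardBlock()(x).shape, RecurrentBlock()(x).shape)
```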

ML used to handle big data in modern science experiments
A group of researchers, including scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Fermi National Accelerator Laboratory, summarizes current applications and future prospects of machine learning in particle physics in a recently published paper.

Researchers from SLAC and around the world increasingly use machine learning to handle Big Data produced in modern experiments and to study some of the most fundamental properties of the universe.
Source: SLAC

“Compared to a traditional computer algorithm that we design to do a specific analysis, we design a machine learning algorithm to figure out for itself how to do various analyses, potentially saving us countless hours of design and analysis work,” said co-author Alexander Radovic from the College of William & Mary, who works on the NOvA neutrino experiment.

Experiments at the Large Hadron Collider (LHC), the world’s largest particle accelerator at the European particle physics lab CERN, produce about a million gigabytes of data every second. Even after reduction and compression, the data amassed in just one hour is similar to the data volume Facebook collects in an entire year – too much to store and analyze. To handle the gigantic data volumes produced in modern experiments like the ones at the LHC, researchers apply what they call “triggers” – dedicated hardware and software that decide in real time which data to keep for analysis and which data to toss out, the team reported.
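The real LHC triggers are custom hardware and highly optimized software; the toy sketch below only illustrates the basic idea. A scoring function, which in practice could be a trained ML model, is applied to each event as it streams in, and only events above a threshold are kept. The event format, scoring function, and threshold here are all invented for illustration.

```python
# Toy software "trigger": keep only events whose score clears a threshold.
# The scoring function stands in for what, in practice, could be a trained ML model.
import random

THRESHOLD = 0.95                                    # keep roughly the top 5% of events

def score(event):
    """Placeholder for a real classifier estimating how interesting an event is."""
    return random.random()

def event_stream(n):
    """Placeholder for the detector read-out: yields raw event records."""
    for i in range(n):
        yield {"id": i, "hits": [random.gauss(0, 1) for _ in range(8)]}

kept = [e for e in event_stream(100_000) if score(e) >= THRESHOLD]
print(f"kept {len(kept)} of 100,000 events for offline analysis")
```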

In LHCb, an experiment that could shed light on why there is so much more matter than antimatter in the universe, machine learning algorithms make at least 70 percent of these decisions, says LHCb scientist Mike Williams from the Massachusetts Institute of Technology, one of the authors of the new paper. “Machine learning plays a role in almost all data aspects of the experiment, from triggers to the analysis of the remaining data,” he said.

Machine learning has proven extremely successful in the area of analysis. The gigantic ATLAS and CMS detectors at the LHC, which enabled the discovery of the Higgs boson, each have millions of sensing elements whose signals need to be put together to obtain meaningful results, the researchers said. “These signals make up a complex data space,” says Michael Kagan from SLAC, who works on ATLAS and was also an author of the paper. “We need to understand the relationship between them to come up with conclusions, for example that a certain particle track in the detector was produced by an electron, a photon or something else.”
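As a hedged illustration only, and not ATLAS or CMS code, the sketch below shows the general shape of such an analysis task: summary features computed from many sensor signals are fed to a multi-class classifier that labels a track as an electron, a photon, or something else. The feature names, data, and model choice are hypothetical.

```python
# Illustrative only: classify particle tracks from summary detector features.
# Feature names, data and model choice are hypothetical, not ATLAS/CMS code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
FEATURES = ["calorimeter_energy", "track_momentum", "shower_width", "n_hits"]
CLASSES = ["electron", "photon", "other"]

# Placeholder training set; real labels would come from simulated collisions.
X = rng.normal(size=(3000, len(FEATURES)))
y = rng.integers(0, len(CLASSES), size=3000)
clf = GradientBoostingClassifier().fit(X, y)

# Score a new track reconstructed from the detector's sensing elements.
track = rng.normal(size=(1, len(FEATURES)))
for name, p in zip(CLASSES, clf.predict_proba(track)[0]):
    print(f"{name}: {p:.2f}")
```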

To read more on current and future projects, see the original article here.


