System Bits: Aug. 8

4D camera; AI sleep monitoring; learning to run.


Improving robot vision, virtual reality, self-driving cars
Engineers at Stanford University and the University of California San Diego have developed a camera that generates 4D images and can capture 138 degrees of information, producing information-rich images and video frames that will enable robots to better navigate the world and understand aspects of their environment, such as object distance and surface texture.

The researchers see this light field camera, with its single lens and wide field of view, being used in autonomous vehicles and in augmented and virtual reality technologies.

Two 138-degree light field panoramas (top and center) and a depth estimate of the second panorama (bottom). (Source: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego)

The team wanted to consider what would be the right camera for a robot that drives or delivers packages by air, said Donald Dansereau, a postdoctoral fellow in electrical engineering at Stanford and the first author of a paper on this development. “We’re great at making cameras for humans, but do robots need to see the way humans do? Probably not,” he said.

The project is a collaboration between the labs of electrical engineering professors Gordon Wetzstein at Stanford and Joseph Ford at UC San Diego.

UC San Diego researchers designed a spherical lens that provides the camera with an extremely wide field of view, encompassing more than a third of the circle around the camera.

This group previously developed the spherical lenses under the DARPA “SCENICC” (Soldier CENtric Imaging with Computational Cameras) program to build a compact video camera that captures 360-degree images in high resolution, with 125 megapixels in each video frame. In that project, the video camera used fiber optic bundles to couple the spherical images to conventional flat focal planes, providing high performance but at a high cost.

The new camera uses a version of the spherical lenses that eliminates the fiber bundles through a combination of lenslets and digital signal processing. Combining the optics design and system integration hardware expertise of Ford’s lab with the signal processing and algorithmic expertise of Wetzstein’s lab resulted in a digital solution that not only creates these extra-wide images but also enhances them.

The new camera also relies on a technology developed at Stanford called light field photography, which adds a fourth dimension by capturing the two-axis direction of the light hitting the lens and combining that information with the 2D image.

Another noteworthy feature of light field photography is that it allows users to refocus images after they are taken, because the images include information about the position and direction of the light. Robots could use this technology to see through rain and other conditions that could obscure their vision, the team noted.
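
To make the refocusing idea concrete, here is a minimal sketch of the standard shift-and-sum approach to refocusing a 4D light field. It illustrates the general technique only, not the Stanford/UC San Diego camera’s processing pipeline; the array layout, the focus parameter, and the use of whole-pixel shifts are all simplifying assumptions.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing over a 4D light field.

    light_field: array of shape (U, V, H, W), i.e. a (u, v) grid of angular
                 views of a 2D scene (hypothetical layout, not the camera's format).
    alpha: relative focal depth; changing it shifts which plane comes into focus.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each angular view in proportion to its offset from the
            # central view, then accumulate. np.roll keeps this simple; a real
            # pipeline would interpolate sub-pixel shifts.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Example: a random 5x5 grid of 64x64 views, refocused at two different depths.
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, alpha=1.0)
far = refocus(lf, alpha=-1.0)
```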

The camera’s capabilities open up all kinds of applications in VR and robotics, along with various types of artificially intelligent technology that needs to understand how far away objects are, whether they’re moving, and what they’re made of.

Monitoring sleep with AI
To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a new way to monitor sleep stages without sensors attached to the body. Their device uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).

The device monitors sleep stages without sensors attached to the body, translating ambient radio signals into light, deep, or REM sleep. (Source: MIT)

The team’s vision is to develop health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change behavior in any way, explained Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study.

Katabi and members of her group in MIT’s Computer Science and Artificial Intelligence Laboratory previously developed radio-based sensors that remotely measure vital signs and behaviors that can be indicators of health. The sensors consist of a wireless device, about the size of a laptop computer, that emits low-power radio frequency (RF) signals. As the radio waves reflect off of the body, any slight movement of the body alters the frequency of the reflected waves. Analyzing those waves can reveal vital signs such as pulse and breathing rate.
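
As a rough illustration of how a periodic vital sign can be pulled out of such a reflection trace (and not the CSAIL device’s actual algorithm), the sketch below estimates a breathing rate by locating the dominant low-frequency component of a reflected-signal recording. The input signal, sampling rate, and frequency band are assumptions.

```python
import numpy as np

def breathing_rate_bpm(reflection, fs, band=(0.1, 0.5)):
    """Estimate breathing rate (breaths per minute) from a reflected-signal trace.

    reflection: 1D array of the body-modulated RF reflection (hypothetical input).
    fs: sampling rate in Hz.
    band: frequency band in Hz where breathing is expected (~6-30 breaths/min).
    """
    x = reflection - np.mean(reflection)           # remove static reflections
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[mask][np.argmax(spectrum[mask])]  # dominant periodic motion
    return peak * 60.0

# Example: synthetic 0.25 Hz chest motion (15 breaths/min) plus noise.
fs = 20.0
t = np.arange(0, 120, 1.0 / fs)
sig = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(len(t))
print(breathing_rate_bpm(sig, fs))  # ~15
```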

“It’s a smart Wi-Fi-like box that sits in the home and analyzes these reflections and discovers all of these changes in the body, through a signature that the body leaves on the RF signal,” Katabi said.

Katabi and her students also used this approach to create a sensor called WiGait that can measure walking speed using wireless signals, which could help doctors predict cognitive decline, falls, certain cardiac or pulmonary diseases, or other health problems. After developing those sensors, Katabi thought that a similar approach could also be useful for monitoring sleep, which is currently done while patients spend the night in a sleep lab hooked up to monitors such as electroencephalography (EEG) machines.

The team believes the opportunity for this technology is very big because sleep is not well understood, and a high fraction of the population has sleep problems. This technology could move us from a world where sleep studies are done once every few months in a sleep lab to continuous sleep studies in the home.

To achieve that, the researchers had to come up with a way to translate their measurements of pulse, breathing rate, and movement into sleep stages. Recent advances in artificial intelligence have made it possible to train computer algorithms known as deep neural networks to extract and analyze information from complex datasets, such as the radio signals obtained from the researchers’ sensor. However, these signals carry a great deal of information that is irrelevant to sleep and can confuse existing algorithms, so the MIT researchers devised a new AI algorithm based on deep neural networks that eliminates the irrelevant information.
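
As a toy illustration of the general idea of mapping per-epoch measurements such as pulse, breathing rate, and movement to sleep stages with a neural network, a sketch might look like the one below. The team’s actual architecture and inputs are not described here, so every shape, feature layout, and label is an assumption.

```python
import torch
from torch import nn

# Toy stand-in: 4 sleep stages (wake, light, deep, REM) predicted from a short
# window of per-epoch features such as pulse, breathing rate, and movement.
# Feature layout, window length, and network size are all assumptions.
N_FEATURES, WINDOW, N_STAGES = 3, 30, 4

model = nn.Sequential(
    nn.Flatten(),                        # (batch, WINDOW, N_FEATURES) -> (batch, WINDOW*N_FEATURES)
    nn.Linear(WINDOW * N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, N_STAGES),             # logits over sleep stages
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on random placeholder data.
x = torch.randn(8, WINDOW, N_FEATURES)   # batch of feature windows
y = torch.randint(0, N_STAGES, (8,))     # ground-truth stages from a sleep lab
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```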

Other researchers have tried to use radio signals to monitor sleep, but those systems are accurate only 65 percent of the time and mainly determine whether a person is awake or asleep, not which sleep stage they are in. Katabi and her colleagues were able to improve on that by training their algorithm to ignore wireless signals that bounce off other objects in the room and include only data reflected from the sleeping person.

The researchers now plan to use this technology to study how Parkinson’s disease affects sleep.

Help wanted: Better models of bone, muscles and nerves
While computer-generated skeletons compete in a virtual race — running, hopping and jumping as far as they can before collapsing in an electronic heap — in the real world, their coaches (teams of machine learning and artificial intelligence enthusiasts) are competing to see who can best train their skeletons to mimic those complex human movements.

Interestingly, the event’s creator has a serious end goal: making life better for kids with cerebral palsy.

A computer model of human bones, muscles and motor control similar to the ones participating in the “Learning to Run” machine learning competition. (Image credit: Stanford Neuromuscular Biomechanics Lab)

Łukasz Kidziński, a postdoctoral fellow in bioengineering at Stanford University, dreamed up the contest as a way to better understand how people with cerebral palsy will respond to muscle-relaxing surgery. Doctors often resort to surgery to improve a patient’s gait, but it doesn’t always work. The key question is how a patient will walk after surgery, and that is extremely difficult to predict.

Kidziński works in the lab of Scott Delp, a professor of bioengineering and of mechanical engineering who has spent decades studying the mechanics of the human body. As part of that work, Delp and his collaborators have collected data on the movements and muscle activity of hundreds of individuals as they walk and run. With data like that, Delp, Kidziński and their team can build accurate models of how individual muscles and limbs move in response to signals from the brain. What they could not do was predict how people relearn to walk after surgery because, as it turns out, no one is quite sure how the brain controls complex processes like walking, let alone walking through the obstacle course of daily life or relearning to walk after surgery.

Delp said that while the team has gotten quite good at building computational models of muscles and joints and bones and how the whole system is connected, an open challenge is how the brain orchestrates and controls this complex dynamic system.

Machine learning, a branch of artificial intelligence, has reached a point where it could be a useful tool for modeling the brain’s movement control systems, Delp said, but for the most part its practitioners have been interested in self-driving cars, playing complex games like chess or serving up more effective online ads.
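
The “Learning to Run” contest frames this as a reinforcement-learning problem: a policy repeatedly chooses muscle excitations, the simulation advances, and the policy is rewarded for forward progress. The sketch below shows that loop in generic form; the environment class, observation and action sizes, and the random policy are placeholders, not the competition’s actual interface.

```python
import numpy as np

class MusculoskeletalEnv:
    """Hypothetical stand-in for a musculoskeletal simulation: observations
    describe joint and muscle state, actions are muscle excitations in [0, 1],
    and the reward tracks forward progress."""
    def reset(self):
        return np.zeros(41)                                    # initial state
    def step(self, action):
        obs = np.random.randn(41)                              # next state
        reward = float(np.clip(np.random.randn() + 0.1, -1, 1))  # forward progress
        done = np.random.rand() < 0.01                         # fell over / time up
        return obs, reward, done

def random_policy(obs, n_muscles=18):
    """Placeholder policy; a competitor would train this to maximize distance."""
    return np.random.rand(n_muscles)

env = MusculoskeletalEnv()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done = env.step(random_policy(obs))
    total_reward += reward            # distance-like score the policy maximizes
print("episode return:", total_reward)
```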

In the long run, Kidziński said he hopes the work may benefit more than just kids with cerebral palsy. For example, it may help others design better-calibrated devices to assist with walking or carrying loads, and similar ideas could be used to find better baseball pitches or sprinting techniques.

But, Kidziński said, he and his collaborators have already created something important: a new way of solving problems in biomechanics that looks to virtual crowds for solutions.


