System Bits: Sept. 26

Smartphone-enabled health; how neural networks think; autonomous vehicle positioning.

Spectroscopic science camera
While the latest versions of most smartphones contain at least two and sometimes three built-in cameras, researchers at the University of Illinois would like to convince mobile device manufacturers to add yet another image sensor as a built-in capability for health diagnostics, environmental monitoring, and general-purpose color sensing applications.
 
This comes three years after the National Science Foundation provided a pair of University of Illinois professors with a grant to develop a “Lab-in-a-Smartphone.” Over that time, the research teams of Brian Cunningham, Donald Biggar Willett Professor of Engineering, and John Dallesasse, associate professor of electrical and computer engineering, have published papers detailing potential ways the mobile devices could provide health diagnostic tests and other measurements normally performed in a laboratory setting. 

Top view and side view of the compact spectrometer for the smartphone science camera, comprising an image sensor chip with a linear variable filter attached over its surface. (Source: University of Illinois)

Most recently, the team has shown that mobile devices incorporating their sensor can provide accurate measurements of the optical absorption spectra of colored liquids or the scattered light spectra of solid objects. In other words, a mobile device incorporating the lab-in-a-smartphone “science camera” could accurately read liquid-based or paper-based medical tests in which the end result is a material that changes from one color to another in the presence of a specific analyte. The researchers have demonstrated a compact, inexpensive system that performs optical spectroscopy in a form factor small enough to fit inside the body of a phone, using low-cost components and the same kind of LEDs already used for flash illumination. By adding a special component on top of a conventional smartphone image sensor, they were able to measure the light absorption of liquids and the scattering spectrum of solids.
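For illustration, here is a minimal sketch (not the Illinois group's code) of how an absorbance spectrum could be computed from such camera readings using the Beer-Lambert relation A = -log10(I_sample / I_reference). The function name, array layout, and example values are assumptions for the example only.

```python
import numpy as np

def absorbance_spectrum(sample_counts, reference_counts, dark_counts):
    """Compute optical absorbance A = -log10(I_sample / I_reference) per
    wavelength channel, after subtracting the sensor's dark signal.

    All arguments are 1-D arrays of pixel intensities, one value per
    wavelength bin of the spectrometer.
    """
    sample = np.asarray(sample_counts, dtype=float) - dark_counts
    reference = np.asarray(reference_counts, dtype=float) - dark_counts
    # Avoid divide-by-zero in channels that received no reference light.
    transmittance = np.divide(sample, reference,
                              out=np.ones_like(sample),
                              where=reference > 0)
    return -np.log10(np.clip(transmittance, 1e-6, None))

# Hypothetical readout: a colored liquid absorbing strongly in a few channels.
reference = np.array([900, 950, 980, 960, 940], dtype=float)
sample = np.array([880, 400, 120, 500, 930], dtype=float)
dark = np.full(5, 30.0)
print(absorbance_spectrum(sample, reference, dark))
```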

They said they’ve done several projects exploring the use of the sensing capabilities of smartphones and mobile devices for point-of-use biomedical tests, or tests that could be performed away from the laboratory, but in all of those projects there had been a cradle or some other instrument the phone had to be in contact with to perform the measurement.

But now they’ve devised a way for a smartphone to be placed directly over a cartridge containing the liquid to measure the specific color of the liquid. The results could then be directly sent electronically to a physician, who could make a diagnosis and suggest a remedy without a patient needing to see that physician in person.

“To make it work, smartphone manufacturers would add a camera for science purposes. The pixels of the additional image sensor would have a linear variable filter glued on top of it that transforms the camera into a spectrometer. Since the component would be an integral part of the phone, the information generated by it can be seamlessly integrated with other information about the patient, and the test being performed, while interacting with a cloud-based smart service system that provides immediate actionable feedback,” Cunningham said.

Specifically, the technology uses illumination from a bank of light-emitting diodes (LEDs), which is gathered into a cylindrical plastic rod. The rod collimates the light and sends it to a test point in front of the camera. The system allows only one wavelength to pass through to the camera at a time, but the selected wavelength varies linearly across the width of the camera. The component used in the system, called a linear variable filter (LVF), looks like a thin, roughly 2 × 8 mm piece of glass that is glued on top of the camera’s pixels, so it performs wavelength separation without the vertical space that conventional spectrometers require.
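As a rough illustration of how such a filter turns a 2-D image sensor into a spectrometer, the sketch below maps pixel columns to wavelengths under an assumed linear passband running from 400 nm to 700 nm across the sensor width; those endpoints, the sensor dimensions, and the helper names are hypothetical rather than the published device’s specifications.

```python
import numpy as np

# Hypothetical LVF passband range across the ~8 mm width of the sensor.
LAMBDA_MIN_NM = 400.0   # wavelength passed at the first pixel column
LAMBDA_MAX_NM = 700.0   # wavelength passed at the last pixel column

def column_to_wavelength(column, num_columns):
    """Map a pixel column index to the wavelength the LVF passes there.

    Because the filter's passband varies linearly across the sensor width,
    the mapping is a simple linear interpolation between the two ends.
    """
    return LAMBDA_MIN_NM + (LAMBDA_MAX_NM - LAMBDA_MIN_NM) * column / (num_columns - 1)

def extract_spectrum(frame):
    """Collapse a 2-D sensor frame into a 1-D spectrum.

    All rows in a given column sit under the same passband, so averaging
    down each column gives one intensity value per wavelength bin.
    """
    intensities = frame.mean(axis=0)
    wavelengths = column_to_wavelength(np.arange(frame.shape[1]), frame.shape[1])
    return wavelengths, intensities

# Example with a synthetic 16 x 256 sensor readout.
frame = np.random.default_rng(0).poisson(500, size=(16, 256)).astype(float)
wl, spec = extract_spectrum(frame)
print(wl[:3], spec[:3])
```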

General-purpose neural net training
Artificial-intelligence research has been transformed by machine-learning systems called neural networks, which learn how to perform tasks by analyzing huge volumes of training data, MIT researchers noted. During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does. Understanding what neural networks are doing can help researchers improve their performance and transfer their insights to other applications, and computer scientists have recently developed some clever techniques for divining the computations of particular neural networks.
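As a toy illustration of what “readjusting internal parameters” means, the sketch below trains a tiny two-layer network by gradient descent on the XOR problem; it stands in for the general idea only and is unrelated to the MIT systems discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # the network's internal parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error w.r.t. every parameter.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    # "Readjust" each parameter a small step against its gradient.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0] as training converges
```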

But recently, at the 2017 Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory presented a new general-purpose technique for making sense of neural networks that are trained to perform natural-language-processing tasks, in which computers attempt to interpret freeform texts written in ordinary, or “natural,” language (as opposed to a structured language, such as a database-query language).

MIT researchers have created a general-purpose technique for making sense of neural networks trained to perform natural-language-processing tasks. (Source: MIT)

They said the technique applies to any system that takes text as input and produces strings of symbols as output, such as an automatic translator. And because its analysis rests on varying inputs and examining the effects on outputs, it can work with online natural-language-processing services, without access to the underlying software.

In fact, the technique works with any black-box text-processing system, regardless of its internal machinery. In their experiments, the researchers show that the technique can identify idiosyncrasies in the work of human translators, too.

The team explained that the technique is analogous to one that has been used to analyze neural networks trained to perform computer vision tasks, such as object recognition. Software that systematically perturbs — or varies — different parts of an image and resubmits the image to an object recognizer can identify which image features lead to which classifications. But adapting that approach to natural language processing isn’t straightforward.
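A minimal sketch of the general perturb-and-compare idea applied to text is shown below: delete one word at a time, re-run the black-box system, and score each word by how much the output changes. The MIT work generates its perturbations differently, and the toy “translator” and scoring function here are purely illustrative assumptions.

```python
from difflib import SequenceMatcher

def word_importance(sentence, black_box):
    """Estimate how much each input word influences a black-box text system.

    Deletes one word at a time, re-runs the system, and scores importance as
    the dissimilarity between the original and perturbed outputs.
    """
    words = sentence.split()
    baseline = black_box(sentence)
    scores = []
    for i in range(len(words)):
        perturbed = " ".join(words[:i] + words[i + 1:])
        output = black_box(perturbed)
        similarity = SequenceMatcher(None, baseline, output).ratio()
        scores.append((words[i], 1.0 - similarity))
    return scores

# Stand-in "translator": uppercases words and drops short ones.
toy_system = lambda text: " ".join(w.upper() for w in text.split() if len(w) > 2)

for word, score in word_importance("the cat sat on the mat", toy_system):
    print(f"{word:>4}: {score:.2f}")
```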

Read more about their work here.

Enabling connected autonomous vehicles
Cranfield University is teaming up with Spirent Communications to develop connected autonomous vehicle (CAV) technologies.

The aim of the research project is to improve positioning and timing technologies to enable better performance of unmanned vehicles, such as autonomous aircraft or connected cars. Spirent engineers are working with Cranfield’s postgraduate researchers to develop new methods for synchronization and location testing, using Spirent’s advanced test systems.

Location awareness for autonomous vehicles is of major importance, and is one of the most challenging applications in commercial GNSS development. As such, the two will work to create new test and development tools aimed at improving system performance, accuracy, and resilience.


