Bioimaging; smart mats; tracking drones.
Bioimaging technique tracks multiple in vivo interactions
To make it possible to quickly and economically monitor multiple molecular interactions in a large area of living tissue – such as an organ or a small animal – Rensselaer Polytechnic Institute researchers have created an approach to optical imaging that could have applications in medical diagnosis, guided surgery, or preclinical drug testing.
The method simultaneously tracks 16 colors of spatially linked information over an area spanning several centimeters, and can capture interactions that occur in mere billionths of a second.
Xavier Intes, a professor of biomedical engineering at Rensselaer, said, “We have developed a smart way to acquire a massive amount of information in a short period of time. Our approach is faster and less expensive than existing technology without any compromise in the precision of the data we acquire.”
Optical imaging uses light to investigate a target. In biomedical applications, it has many advantages over techniques such as MRI and PET, which use magnetism and positron emissions, respectively, to acquire images inside living tissue.
The method the Intes lab developed makes use of two advanced optical imaging techniques – fluorescence lifetime imaging paired with Förster resonance energy transfer – to reveal the molecular state of tissues. In fluorescence lifetime imaging (FLIM), molecules of interest are tagged with fluorescent “reporter” molecules which, when excited by a beam of light, emit a light signal of a certain color whose decay over time is indicative of their immediate environment. Reporter molecules can be tuned to offer information on environmental factors such as viscosity, pH, or the presence of oxygen. FLIM is well suited to the thick tissues of a body because it relies on timing information, rather than light intensity, which degrades significantly as it travels through tissue. The researchers also used Förster resonance energy transfer (FRET), which detects close proximity between two tagged molecules – such as a drug and its target – based on an energy transfer that occurs only at nanometer distances, making it possible to confirm, for example, that a drug has been delivered into diseased cells for maximal therapeutic efficacy.
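To make the lifetime idea concrete, here is a minimal sketch of how a lifetime can be recovered by fitting a single-exponential decay to a photon-count histogram. The 4 ns lifetime, count levels, and bin layout are invented toy values for illustration, not parameters from the Rensselaer study.

```python
# A toy sketch of the lifetime-fitting idea behind FLIM: recover tau from a
# simulated photon-arrival histogram via a single-exponential fit,
# I(t) = A * exp(-t / tau). All numbers are invented example values.
import numpy as np
from scipy.optimize import curve_fit

def decay(t_ns, amplitude, tau_ns):
    """Single-exponential fluorescence decay model (times in nanoseconds)."""
    return amplitude * np.exp(-t_ns / tau_ns)

rng = np.random.default_rng(0)
t_ns = np.linspace(0, 20, 256)                     # 20 ns window, 256 time bins
measured = rng.poisson(decay(t_ns, 1000.0, 4.0))   # Poisson shot noise

# The fitted lifetime shifts with the reporter's local environment
# (viscosity, pH, oxygen) -- that shift is the FLIM signal.
(_, tau_fit), _ = curve_fit(decay, t_ns, measured, p0=(800.0, 2.0))
print(f"recovered lifetime: {tau_fit:.2f} ns")
```

Because the fit extracts a time constant rather than an absolute brightness, it remains meaningful even when scattering tissue attenuates the signal, which is the property the paragraph above describes.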
However, while the FLIM-FRET method generates a signal rich in information, collecting that signal quickly and economically is a challenge. Current methods rely on expensive cameras that can image only one reporter at a time, and scanning a subject can take hours as the camera collects information across its full field of view.
To overcome this obstacle, the researchers dispensed with cameras entirely, instead using a single-pixel detection method combined with a mathematical sampling technique (based on a Hadamard transform) that let them collect, in about 10 minutes, enough information to construct a precise image. The detection method collects information on 16 spectral channels simultaneously, and three detection devices positioned around the sample provide the spatial information used to construct a three-dimensional image.
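Hadamard-based single-pixel imaging is a well-documented general technique; the toy sketch below (an 8×8 scene and a noiseless detector, both invented here, not the Rensselaer implementation) shows why one detector plus patterned measurements can stand in for a camera.

```python
# A minimal sketch of single-pixel imaging with Hadamard patterns: N patterned
# measurements, each a single summed intensity, suffice to reconstruct an
# N-pixel image. Scene and sizes are toy assumptions.
import numpy as np
from scipy.linalg import hadamard

n = 8                                    # image is n x n pixels
N = n * n
H = hadamard(N)                          # N orthogonal +/-1 patterns

scene = np.zeros((n, n))
scene[2:5, 3:6] = 1.0                    # a bright square as the "sample"
x = scene.ravel()

# Each measurement: apply one pattern, record ONE scalar on a single-pixel
# detector. N patterns -> N scalar measurements.
y = H @ x

# Hadamard matrices satisfy H @ H.T = N * I, so reconstruction is a single
# fast transform rather than an expensive matrix inversion.
x_rec = (H.T @ y) / N
print(np.allclose(x_rec.reshape(n, n), scene))   # True
```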
Early warning detection of foot ulcers
While completing his residency in anesthesiology at Massachusetts General Hospital in the mid-2000s, MIT alumnus Jon Bloom saw his fair share of foot amputations among patients with diabetes, the result of infected foot ulcers.
Bloom’s startup, Podimetrics, has developed a smart mat that can detect early warning signs before foot ulcers form, which may drastically reduce amputations and cut medical costs. He co-founded the startup with Brian Petersen SM ’13, MBA ’13, Podimetrics’ chief data scientist; David Linders SM ’13, MBA ’13, its chief technology officer; and Harvard Business School graduate Jeff Engler.
Conceived at an MIT Hacking Medicine hackathon, the smart mat is equipped with sensors that detect minute spikes in temperature around the foot, which precede the formation of ulcers. A patient stands on the mat for about 20 seconds per day, and the measurements are sent to the cloud. If an ulcer is suspected, the startup shoots an alert to the patient’s physician, who can help the patient start a treatment plan.
Podimetrics relies on a decades-old discovery: wounds heat up before they break down. Foot ulcers take weeks to develop, usually from the force of walking or friction from shoes. As an ulcer begins to form, the tissue breaks down and becomes inflamed, producing minute spikes in temperature.
Podimetrics builds on that finding, programming its mat to monitor the whole foot and flag areas that are 2.2 °C hotter than others. Once alerted, physicians call the patient and tell them to keep off their feet until the temperature subsides, or set up an appointment.
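As a rough illustration of that rule, the sketch below flags any spot on a simulated thermal foot map that runs at least 2.2 °C above the foot's median temperature. The grid size and readings are invented, and Podimetrics' actual processing is not described in the article.

```python
# A toy version of the 2.2 degC rule described above: flag any cell of a
# thermal foot map that exceeds the foot's typical (median) temperature by
# the threshold. All data here is invented example input.
import numpy as np

THRESHOLD_C = 2.2

def flag_hotspots(foot_map_c: np.ndarray) -> np.ndarray:
    """Boolean mask of cells exceeding the median temperature by THRESHOLD_C."""
    baseline = np.median(foot_map_c)
    return foot_map_c - baseline >= THRESHOLD_C

# Simulated daily 20-second scan: skin near 30 degC with one inflamed spot.
scan = np.full((8, 8), 30.0)
scan[5, 3] = 33.1                        # pre-ulcerative hot spot
if flag_hotspots(scan).any():
    print("possible early ulcer: alert physician")   # would trigger the cloud alert
```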
Camera to spot, track drones
The rising number of drones in airspace poses numerous challenges, and topping the list is the ability simply to detect these small unmanned aerial vehicles. Periodic near misses between drones and large airplanes raise the specter of disaster, and the drones themselves often lack the technology needed to locate other moving objects. To address these issues, researchers at EPFL have developed algorithms capable of detecting and tracking small flying objects using a simple camera. The proof of concept was conducted as part of a PhD dissertation, and a real-time detection and collision avoidance system is now being developed in a project funded by the Commission for Technology and Innovation (CTI).
Today’s collision avoidance systems operate actively: an airplane in flight calculates its position, altitude, and course, and communicates this information to other aircraft using the same technology. Those aircraft can then evaluate the risk of a collision based on their own positioning data and, if necessary, alert the pilot. But this approach works only if every aircraft is equipped with the same technology, the team pointed out. In reality, drones often lack such systems, which are costly, heavy, and power-hungry.
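The core geometry such a cooperative system evaluates is a closest-point-of-approach check: given two broadcast states, when and how near do the straight-line trajectories pass? A minimal sketch, with invented positions, velocities, and alert radius:

```python
# Illustrative closest-point-of-approach (CPA) calculation between two
# aircraft that have exchanged position and velocity. All numbers are
# invented example values, not from any real avoidance standard.
import numpy as np

def closest_approach(p_own, v_own, p_other, v_other):
    """Time (s) and separation (m) at closest approach, assuming straight paths."""
    dp = np.asarray(p_other, float) - np.asarray(p_own, float)
    dv = np.asarray(v_other, float) - np.asarray(v_own, float)
    speed2 = dv @ dv
    t_cpa = 0.0 if speed2 == 0 else max(0.0, -(dp @ dv) / speed2)
    return t_cpa, float(np.linalg.norm(dp + t_cpa * dv))

t, d = closest_approach(p_own=(0, 0, 500), v_own=(60, 0, 0),
                        p_other=(2000, 100, 520), v_other=(-55, 0, 0))
if d < 150:                              # alert radius: example value only
    print(f"alert pilot: {d:.0f} m separation in {t:.1f} s")
```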
A camera can thus be an effective, non-cooperative (i.e., not every aircraft must be equipped with it) addition to that system, provided the camera can successfully detect a flying drone.
This is the obstacle that researchers at EPFL’s Computer Vision Laboratory (CVLAB) sought to overcome.
The biggest challenge is for a moving camera to spot another moving object. This is much harder on a drone than on a car, which moves in only two dimensions: drones move in three, and the camera must detect objects against the sky or the ground, depending on the angle of sight. Drones also need to locate objects as early as possible, while they are still fuzzy black dots against, say, a dark forest. And because no two drones look alike – new models are constantly being developed – the researchers had to find a way to teach the camera to recognize all sorts of drones.
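One classical baseline for this problem – not the CVLAB team's detector, just a common starting point – is to cancel the camera's own motion by estimating a homography between consecutive frames and then differencing them. The sketch below uses OpenCV and assumes a mostly distant background, an assumption low-flying drones often break, which is part of why a trained detector is needed.

```python
# Baseline moving-object detection from a MOVING camera: estimate the
# dominant background motion, warp the previous frame to the current view,
# and treat residual differences as candidate independently moving objects.
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray):
    # 1. Match ORB features between consecutive grayscale frames.
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    # 2. Estimate the dominant (background) motion as a homography.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # 3. Warp the previous frame into the current view; what still differs
    #    is likely an independently moving object (e.g., another drone).
    h, w = curr_gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, stabilized)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return mask
```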
The researchers managed to develop a reliable algorithm capable of detecting a drone using a lightweight camera similar to those found in smartphones. The aim of the project, now financed by the CTI, is to train a detector using an even larger data set to improve its real-time performance and accuracy. EPFL’s CVLAB researchers are working on this in collaboration with FLARM Technology AG, a supplier of collision avoidance technology for civil aviation. The first commercial models are expected to be released next year.