Near field microscopes and the ability to separate direct reflected light from indirect light are worthy of a closer look.
Imaging is the common theme for everything at Photonics West this year, and two new ideas caught my attention: a very small near field microscope and a new way of collecting images that separates directly reflected light from indirect light.
IMEC has presented papers about its near field microscope before. This year the research house has turned it into a product. The idea is to illuminate an object with a coherent source and collect the interferogram with a camera chip. In transmission, the chip sits within a few μm of the object, making a very low-profile device. Researchers then take a Fourier transform to reconstruct the object, and they avoid the problems of set-up tolerance and sampling frequency by modulating the light source and capturing multiple interferograms to create a single image. They can even magnify the image by using a spherical coherent source. These days, there is enough compute power to reconstruct real-time video, and auto focus comes for free as part of the reconstruction.
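The reconstruction step can be sketched numerically. What follows is a toy single-frame example, not IMEC's actual pipeline: it simulates the interferogram a sensor a few tens of μm from a small aperture would record, then back-propagates the measured amplitude with an FFT-based angular-spectrum method. The wavelength, pixel pitch, and distance are assumed values chosen for illustration.

```python
import numpy as np

wavelength = 0.5e-6   # assumed 500 nm coherent source
dx = 1e-6             # assumed 1 um pixel pitch
n = 256
z = 50e-6             # assumed object-to-sensor gap of a few tens of um

# Toy object: an opaque screen with a small transparent aperture.
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
obj = (X**2 + Y**2 < (5 * dx) ** 2).astype(complex)

def propagate(field, z):
    # Angular-spectrum propagation of a coherent field over distance z.
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = (1 / wavelength) ** 2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

hologram = np.abs(propagate(obj, z)) ** 2   # intensity the camera chip records
recon = propagate(np.sqrt(hologram), -z)    # numerical back-propagation refocuses the object
```

A single frame like this suffers from the twin-image artifact of in-line holography; modulating the source and combining several interferograms, as described above, is one way to clean that up.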
The comparison between a conventional high-resolution microscope, 20 to 30 cm tall, and the IMEC camera chip plus object at less than 1 cm is rather impressive. It is easy to think of all sorts of portable or automated applications for this sort of technology. Here is a concept…
At the other end of the scale, Mathew O’Toole from the University of Toronto showed how to separate directly reflected light from indirect light. Direct light leaves the source, reflects off a single surface, and hits the camera. Indirect light undergoes at least two reflections before making it to the camera. It turns out that the directly reflected light is confined to a particular plane that connects the source geometry to the camera geometry. The researchers place spatial light modulators in the illumination and camera planes; by switching the two modulators in synchrony, the direct and indirect light are separated.
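A toy model makes the separation concrete. Below, a scene's light transport is an n×n matrix T: the diagonal carries direct light (one bounce along the source-to-camera correspondence) and the off-diagonals carry indirect light. This is a simplified sketch of the synchronized-modulator idea, not O'Toole's actual optical setup; the matched/complementary mask scheme and all values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Toy light transport: T[i, j] = light from source pixel j reaching camera pixel i.
# Diagonal = direct (single bounce on the correspondence); off-diagonal = indirect.
T = rng.uniform(0.0, 0.1, (n, n))
np.fill_diagonal(T, rng.uniform(0.5, 1.0, n))

def capture(T, src_mask, cam_mask):
    # One exposure: modulated source, masked camera, light summed on the sensor.
    return cam_mask @ (T @ src_mask)

eye = np.eye(n)
# Matched masks pass only the single-bounce path; complementary masks block it.
direct = np.array([capture(T, eye[j], eye[j]) for j in range(n)])
indirect = np.array([capture(T, eye[j], 1 - eye[j]) for j in range(n)])
```

Scanning one mask pixel at a time is slow; a practical system would switch structured mask pairs instead, but the matched-versus-complementary principle is the same.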
This is a big deal for a couple of reasons. First, if the object is being scanned for 3D reconstruction, only the direct light is useful, so isolating it greatly improves the quality of 3D models. Second, the indirect-only image lets the viewer see clearly into the shadows and eliminates directly reflected glare, so all sorts of surface texture and surface light absorption become visible.
Both these examples show how digital imaging and clever synchronization of source and detector enable new ways of imaging. Oh yes…and access to ridiculous amounts of cheap compute power is essential.