System Bits: Sept. 24

Printing nanostructures; microcameras.

Printing nanostructures with self-assembling material
A multi-institutional team of engineers from the University of Illinois at Urbana-Champaign, the University of Chicago and Hanyang University in Korea has developed a new approach to the fabrication of nanostructures for the semiconductor and magnetic storage industries.

The approach combines top-down advanced ink-jet printing technology with a bottom-up approach that involves self-assembling block copolymers, a type of material that can spontaneously form ultrafine structures. With this approach, the team was able to improve the resolution of its intricate structure fabrication from features of approximately 200nm down to approximately 15nm.

The ability to fabricate nanostructures out of polymers, DNA, proteins and other “soft” materials has the potential to enable new classes of electronics, diagnostic devices and chemical sensors. The challenge is that many of these materials are fundamentally incompatible with the sorts of lithographic techniques that are traditionally used in the integrated circuit industry.

Recently developed ultrahigh-resolution ink-jet printing techniques have some potential, with demonstrated resolution down to 100 to 200nm, but there are significant challenges in achieving true nanoscale dimensions. The work demonstrates that processes of polymer self-assembly can provide a way around this limitation, the researchers said.

Combining jet printing with self-assembling block copolymers enabled the engineers to attain the much higher resolution.

This atomic force microscope image shows directed self-assembly of a printed line of block copolymer on a template prepared by photolithography. The microscope’s software colored and scaled the image. The density of patterns in the template (bounded by the thin lines) is two times that of the self-assembled structures (the ribbons). (Source: University of Illinois-Urbana)


Flexible microcameras
Imagine sticking a thin sheet of microscopic cameras to the surface of a car to provide a rear-view image, or wrapping that sheet around a pole to provide 360-degree surveillance of an intersection under construction. A thin sheet of micro-cameras could fit where bulkier cameras cannot — and many small cameras working together could even rival high-end cameras’ image quality, according to University of Wisconsin-Madison researchers.

The researchers have received a $1 million National Science Foundation grant to develop smart micro-camera arrays mounted on thin, flexible polymer sheets. They will focus not simply on making these cameras smaller and higher-quality, but also on developing algorithms that allow the cameras to change direction and focus both individually and collectively.

Like so many complex technological problems, this one comes down to making different disciplines work together.

The polymer sheets, combined with the micro-cameras, will measure less than a centimeter thick. Whereas a traditional camera design must be bigger and bulkier to capture more light and increase its image quality, the researchers propose to improve their image quality through a sort of “collective aperture” of many micro-cameras.
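To get a rough sense of why a "collective aperture" can help, note that averaging the frames from N independent micro-cameras looking at the same scene cuts sensor noise by roughly the square root of N. The sketch below simulates that effect with synthetic images; the camera count, noise level and scene are arbitrary assumptions for illustration, not details of the Wisconsin project.

```python
import numpy as np

# Illustrative "collective aperture": average N noisy micro-camera frames of
# the same scene and compare the residual noise to a single frame.
# N, noise_sigma and the synthetic scene are assumptions, not project data.

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64))   # stand-in for the true scene
noise_sigma = 0.2                               # per-camera sensor noise (assumed)
N = 16                                          # number of micro-cameras (assumed)

frames = scene + rng.normal(0.0, noise_sigma, size=(N, 64, 64))
single_err = np.std(frames[0] - scene)          # noise in one camera's frame
combined = frames.mean(axis=0)                  # naive "collective" image
combined_err = np.std(combined - scene)         # noise after averaging

print(f"single-camera noise: {single_err:.3f}")
print(f"{N}-camera average:   {combined_err:.3f}  (about 1/sqrt({N}) of a single frame)")
```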

The cameras also can coordinate to capture the whole scene, with an algorithm deciding where each camera looks and where it focuses. This encompasses not just the image processing itself but the control of the camera array as well.

The current focus is on figuring out how to control the orientation of the cameras. By manipulating them through computation, the researchers hope to maximize the collective potential of small cameras that would be rather weak on their own.

The arrays ultimately could do things that conventional cameras can't do at all, such as focusing simultaneously on different objects at different distances, because the image data the camera array captures contains 3D depth information, which can make otherwise fragile image-recognition algorithms more powerful.
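One common way multi-camera data yields depth is the standard stereo relation: two cameras separated by a baseline B see the same object shifted by a disparity d in their images, and for pinhole-style cameras with focal length f the depth is roughly Z = f·B/d. The sketch below applies that textbook relation; the focal length, baseline and disparity values are invented for illustration and are not parameters from the Wisconsin work.

```python
# Illustrative depth-from-disparity for two cameras in a planar array.
# f_pixels, baseline_m and the disparities are made-up values, not project data.

def depth_from_disparity(f_pixels: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole-camera stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_pixels * baseline_m / disparity_px

f_pixels = 500.0      # focal length expressed in pixels (assumed)
baseline_m = 0.01     # 1 cm spacing between neighboring micro-cameras (assumed)

for d in (25.0, 5.0, 1.0):   # larger disparity means a nearer object
    z = depth_from_disparity(f_pixels, baseline_m, d)
    print(f"disparity {d:5.1f} px  ->  depth {z:.2f} m")
```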

~Ann Steffora Mutschler


