System Bits: July 25

Smart glove controls virtual objects; camera improves robot vision.

The language of glove
In a development that allows gestures from the American Sign Language alphabet to be decoded, University of California San Diego researchers have built a smart glove with applications ranging from virtual and augmented reality to telesurgery, technical training and defense.

“The Language of Glove”: a smart glove that wirelessly translates the American Sign Language (ASL) alphabet into text and controls a virtual hand to mimic ASL gestures. (Source: UC San Diego Jacobs School of Engineering)

The smart glove wirelessly translates the American Sign Language alphabet into text and controls a virtual hand to mimic sign language gestures. The device, which the engineers call “The Language of Glove,” was built for less than $100 using stretchable and printable electronics that are inexpensive, commercially available and easy to assemble.

The ultimate goal is a smart glove that will let people use their hands in virtual reality, which is much more intuitive than a joystick or other existing controllers. The researchers expect this would be better for games and entertainment, but more importantly for virtual training procedures, in medicine for example, where it would be advantageous to realistically simulate the use of one’s hands.

The glove contains sensors made from stretchable materials and is inexpensive and simple to manufacture. The team explained that it arrived at a low-cost, straightforward design for smart wearable devices using off-the-shelf components, and expects the work could enable other researchers to develop similar technologies without requiring costly materials or complex fabrication methods.

The device was built using a leather athletic glove with nine stretchable sensors adhered to the back at the knuckles — two on each finger and one on the thumb. The sensors are made of thin strips of a silicon-based polymer coated with a conductive carbon paint. The sensors are secured onto the glove with copper tape. Stainless steel thread connects each of the sensors to a low-power, custom-made printed circuit board attached to the back of the wrist.

The team explained that the sensors change their electrical resistance when stretched or bent, which allows them to code for different letters of the American Sign Language alphabet based on the positions of all nine knuckles. A straight or relaxed knuckle is encoded as “0” and a bent knuckle is encoded as “1”. When signing a particular letter, the glove creates a nine-digit binary key that translates into that letter. For example, the code for the letter “A” (thumb straight, all other fingers curled) is “011111111,” while the code for “B” (thumb bent, all other fingers straight) is “100000000.” Engineers equipped the glove with an accelerometer and pressure sensor to distinguish between letters like “I” and “J”, whose gestures are different but generate the same nine-digit code.
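To make the encoding concrete, here is a minimal sketch, in Python, of how thresholded sensor readings could be turned into a nine-digit key and looked up as a letter. This is illustrative, not the team’s firmware: the threshold value, the normalized reading scale and any table entries beyond “A” and “B” are assumptions.

```python
# Illustrative sketch of the nine-knuckle binary encoding described above.
# BEND_THRESHOLD and the normalized 0..1 reading scale are hypothetical.
BEND_THRESHOLD = 0.5

# Partial key table; the "A" and "B" codes come from the article. The
# remaining 24 letters are omitted, and pairs like "I"/"J" that share a
# key would additionally need the accelerometer and pressure-sensor data.
KEY_TO_LETTER = {
    "011111111": "A",  # thumb straight, all other fingers curled
    "100000000": "B",  # thumb bent, all other fingers straight
}

def knuckles_to_key(readings):
    """Encode nine sensor readings (thumb first) into a binary key:
    '0' = straight or relaxed knuckle, '1' = bent knuckle."""
    return "".join("1" if r > BEND_THRESHOLD else "0" for r in readings)

def decode_letter(readings):
    """Map a nine-digit key to a letter, or None if the key is unknown."""
    return KEY_TO_LETTER.get(knuckles_to_key(readings))

# Thumb relaxed (0.1), eight finger sensors bent (0.9) -> "011111111" -> "A"
print(decode_letter([0.1] + [0.9] * 8))
```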

A low-power PCB on the glove converts the nine-digit key into a letter and then transmits the signal via Bluetooth to a smartphone or computer screen. The glove can wirelessly translate all 26 letters of the American Sign Language alphabet into text. The researchers also used the glove to control a virtual hand that signs letters of the ASL alphabet.

Looking ahead, the team is developing the next version of this glove that is endowed with the sense of touch. The goal is to make a glove that could control either a virtual or robotic hand and then send tactile sensations back to the user’s hand.

Camera improves robot vision, virtual reality
Building on technology first described by Stanford University researchers more than 20 years ago, a new camera has been developed that could generate the kind of information-rich images that robots need to navigate the world.

The camera generates four-dimensional images and captures a nearly 140-degree field of view.
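A “four-dimensional image” here means a light field, which records not just where light arrives on the sensor but also the direction it came from, effectively a grid of views of the same scene. As an illustration of what that extra data enables, the sketch below performs classic shift-and-add refocusing on a 4D array of sub-aperture views; the array layout, integer-pixel shifts and random test data are simplifying assumptions, not the Stanford camera’s actual pipeline.

```python
import numpy as np

def refocus(light_field, shift):
    """Shift-and-add refocusing of a 4D light field.
    light_field has shape (U, V, H, W): a U x V grid of H x W views.
    Each view is translated in proportion to its offset from the grid
    center, then all views are averaged; `shift` picks the focal plane."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * shift))
            dv = int(round((v - V // 2) * shift))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Tiny synthetic example: a 3 x 3 grid of 64 x 64 grayscale views
lf = np.random.rand(3, 3, 64, 64)
refocused = refocus(lf, shift=1.0)
print(refocused.shape)  # (64, 64)
```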

Donald Dansereau, a postdoctoral fellow in electrical engineering, said, “We want to consider what would be the right camera for a robot that drives or delivers packages by air. We’re great at making cameras for humans, but do robots need to see the way humans do? Probably not.”

Assistant Professor Gordon Wetzstein, left, and postdoctoral research fellow Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields. (Source: Stanford University)

So with robotics in mind, Dansereau and Gordon Wetzstein, assistant professor of electrical engineering, along with colleagues from the University of California, San Diego, created the first single-lens, wide-field-of-view light field camera, which they presented at the computer vision conference CVPR 2017 on July 23.

They noted that, as the technology stands now, robots have to move around and gather different perspectives if they want to understand certain aspects of their environment, such as the movement and material composition of objects. They believe this camera could let robots gather much the same information in a single image. They also see it being used in autonomous vehicles and in augmented and virtual reality, since the convergence of algorithms and optics in this setting is enabling unprecedented imaging systems.

Postdoctoral research fellow Donald Dansereau holds a spherical lens like the one at the heart of the panoramic light field camera, which captures rich light field information over a wide field of view. (Source: Stanford University)


