Commercializing photonic MEMS; stretchy antenna; training AI with voice.
Commercializing photonic MEMS
Researchers from the University of California Berkeley, Daegu Gyeongbuk Institute of Science & Technology, SUSS MicroOptics, TSI Semiconductors, Gwangju Institute of Science and Technology, KAIST, Ecole Polytechnique Fédérale de Lausanne (EPFL), and Korea Polytechnic University demonstrated a path for commercial fabrication of photonic MEMS.
Photonic MEMS hold promise as optical switches, which could help route data more efficiently. However, thus far they have been fabricated using nonstandard and complex processes in laboratory environments. The team set out to make one using an unmodified, commercially available CMOS process.
The researchers said the photonic switch was fabricated on silicon-on-insulator (SOI) 200-mm wafers using regular photolithographic and dry-etching processes in a commercial foundry. The whole photonic integrated circuit is included in the silicon top layer, which has the advantage of limiting the number of fabrication steps: There are two different dry-etching processes, one lift-off to create metal interconnects, and the final release of the MEMS by oxide etching.
The switch design includes 32 input ports and 32 output ports, forming a 32 × 32 matrix of identical replicated elements (the full device measures 5.9 mm × 5.9 mm). In each element, light is transferred from one channel to the other by decreasing the distance between two waveguides so that their modes couple, an operation driven by an electrostatic comb drive also built into the silicon top layer.
“For the first time, large-scale and integrated MEMS photonic switches have been fabricated in a commercial foundry on 200-mm SOI wafers. In my opinion, this is a convincing demonstration that this technology is suited for commercialization and mass production. They could be incorporated in data communication systems in the near future,” said Jeremy Béguelin, one of the Berkeley researchers.
In tests, they measured an optical power loss of 7.7 dB through the entire switch, an optical bandwidth of about 30 nm around the 1550 nm wavelength, and a switching speed of 50 μs, which compares favorably with other photonic switch approaches.
Stretchy antenna
Researchers at Pennsylvania State University, Heriot-Watt University, and the Chinese Academy of Sciences are working on flexible, stretchable, wearable antennas. Antennas pose different challenges than wearable sensors: when an antenna is compressed or stretched, its resonance frequency (RF) shifts, and it transmits radio signals at wavelengths that may no longer match those of the intended receivers.
“Changing the geometry of an antenna will change its performance,” said Huanyu “Larry” Cheng, Assistant Professor of Engineering Science and Mechanics in the Penn State College of Engineering. “We wanted to target a geometric structure that would allow for movement while leaving the transmitting frequency unchanged.”
To counter this, the team built the flexible transmitter in layers. It uses a copper mesh with a pattern of overlapping, wavy lines. This mesh makes up the bottom layer, which touches the skin, and the top layer, which serves as the radiating element in the antenna. The top layer creates a double arch when compressed and stretches when pulled, moving between these stages in an ordered set of steps. The structured process through which the antenna mesh arches, flattens, and stretches improves the overall flexibility of the layer and reduces RF fluctuations between the antenna’s states, according to Cheng.
The wearable transmitter is designed to compress its top layer in a double arch pattern, shown here, to respond to movement without compromising signal transmission. Image credit: Huanyu Cheng
The bottom mesh layer keeps radio signals from interacting with the skin, protecting the wearer and preventing signal degradation. The antenna’s ability to maintain a steady RF also allows the transmitter to collect energy from radio waves, Cheng said, potentially lowering energy consumption from outside sources.
The transmitter can send wireless data nearly 300 feet and can also integrate chips or sensors.
“We’ve demonstrated robust wireless communication in a stretchable transmitter,” Cheng said. “To our knowledge, this is the first wearable antenna that exhibits almost completely unchanged resonance frequency over a relatively large range of stretching.”
The researchers also created a device with a similar mesh pattern but without the double-arch compression structure, in order to measure and simulate the relationship between deformation and antenna performance.
They noted that such a device could have applications in health monitoring and clinical treatments, as well as energy generation and storage.
Training AI with voice
AI researchers at Columbia University found that instead of labeling training images with binary data, using sound files of recorded speech may be a more robust solution when training data is limited.
The researchers created two new neural networks with the goal of training both of them to recognize 10 different types of objects in a collection of 50,000 training images.
One AI system was trained the traditional way: it was given a giant data table containing thousands of rows, each corresponding to a single training photo. The first column held an image file showing a particular object or animal; the next 10 columns corresponded to 10 possible object types: cats, dogs, airplanes, etc. A “1” in one of those columns marked the correct answer, and 0s in the other nine marked the incorrect answers.
The second network was fed a data table whose first column contained a photograph of an animal or object, and whose second column contained an audio file of a recorded human voice speaking the word for the depicted animal or object.
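To make the two labeling schemes concrete, here is a minimal sketch. The class names, column layout, and file paths are illustrative assumptions, not details taken from the study:

```python
# Hypothetical illustration of the two label encodings described above.
# Class names and the audio file path scheme are assumptions for this sketch.

CLASSES = ["cat", "dog", "airplane", "ship", "truck",
           "bird", "horse", "deer", "frog", "car"]

def one_hot_label(class_name):
    """Control network: a 1 in the correct column, 0s in the other nine."""
    return [1 if c == class_name else 0 for c in CLASSES]

def audio_label(class_name):
    """Experimental network: the training target is an audio file of the
    spoken class name instead of a binary vector (path is illustrative)."""
    return f"spoken_labels/{class_name}.wav"

print(one_hot_label("dog"))  # [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(audio_label("dog"))    # spoken_labels/dog.wav
```

The control network learns to output the binary vector, while the experimental network learns to output a waveform resembling the spoken word.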
Both networks were trained for a total of 15 hours. Instead of returning a series of 1s and 0s corresponding to the object types, as the typical network does, the experimental network returned a voice trying to “say” what the object in the image was. Initially, the researchers said, the sound was just a garble. Sometimes it blended multiple categories, such as “cog” for cat and dog. But they found that, eventually, the voice was mostly correct.
Both networks were able to correctly identify the animal or object depicted in a photograph about 92% of the time, with the same result in a repeat experiment.
However, when the researchers set up another comparison using far fewer images, the experimental network outperformed the control. This time, the networks were given only 2,500 training images. The control network’s accuracy fell to about 35%, while the experimental network’s accuracy fell only to 70%.
In another trial, the team used more difficult photographs that were harder for an AI to understand, such as slightly corrupted or oddly colored subjects. The voice-trained neural network was correct about 50% of the time, whereas the numerically-trained network achieved only 20% accuracy.
“Our findings run directly counter to how many experts have been trained to think about computers and numbers; it’s a common assumption that binary inputs are a more efficient way to convey information to a machine than audio streams of similar information ‘richness,'” explained Boyuan Chen, the lead researcher on the study.
“If you think about the fact that human language has been going through an optimization process for tens of thousands of years, then it makes perfect sense, that our spoken words have found a good balance between noise and signal,” said Hod Lipson, a mechanical engineering professor at Columbia. “Therefore, when viewed through the lens of Shannon Entropy, it makes sense that a neural network trained with human language would outperform a neural network trained by simple 1s and 0s.”
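Lipson's Shannon-entropy remark can be made concrete with a small calculation. As an illustration (not an analysis from the study), a one-hot label drawn uniformly from 10 classes carries at most log2(10) ≈ 3.32 bits of information per example, no matter how it is encoded; a spoken word can carry the same class identity alongside additional structure:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A label drawn uniformly from 10 classes conveys at most log2(10) bits.
uniform_over_classes = [0.1] * 10
print(round(entropy(uniform_over_classes), 2))  # 3.32
```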
“We should think about using novel and better ways to train AI systems instead of collecting larger datasets,” added Chen. “If we rethink how we present training data to the machine, we could do a better job as teachers.”