Research Bits: June 14

Photonic deep neural network chip; using noise in optical AI; ML in physical systems.

Photonic deep neural network chip

Engineers from the University of Pennsylvania built a photonic deep neural network on a 9.3-square-millimeter chip that they say classifies images faster and more efficiently than clock-based electronic processors, processing nearly two billion images per second.

The chip uses a series of waveguides that form ‘neuron layers’ mimicking the brain. “Our chip processes information through what we call ‘computation-by-propagation,’ meaning that unlike clock-based systems, computations occur as light propagates through the chip,” said Firooz Aflatouni, associate professor in electrical and systems engineering at the University of Pennsylvania. “We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology.”
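To make the idea concrete, here is a minimal, hypothetical simulation of computation-by-propagation: the input is encoded as optical amplitudes, each waveguide ‘neuron layer’ applies a fixed linear mixing followed by a nonlinearity, and nothing is clocked or buffered between layers. The layer sizes, random weights, and tanh nonlinearity are illustrative assumptions, not the published design.

```python
import numpy as np

rng = np.random.default_rng(0)

def neuron_layer(field, weights):
    """One propagation step: linear mixing of optical amplitudes,
    then a saturating nonlinearity standing in for the on-chip one."""
    return np.tanh(weights @ field)

# A 30-pixel input "image" encoded as light amplitudes.
field = rng.random(30)

# Three cascaded layers narrowing to 2 outputs (a 2-class decision).
# Shapes are arbitrary; the point is that the result simply emerges
# as the light exits, with no intermediate storage.
layers = [rng.normal(size=(16, 30)),
          rng.normal(size=(8, 16)),
          rng.normal(size=(2, 8))]

for w in layers:
    field = neuron_layer(field, w)

print("predicted class:", int(np.argmax(field)))
```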

“When current computer chips process electrical signals they often run them through a graphics processing unit, or GPU, which takes up space and energy,” said Farshid Ashtiani, a postdoctoral fellow at the University of Pennsylvania. “Our chip does not need to store the information, eliminating the need for a large memory unit.”

Aflatouni added, “And, by eliminating the memory unit that stores images, we are also increasing data privacy. With chips that read image data directly, there is no need for photo storage and thus, a data leak does not occur.”

As a proof of concept, the researchers’ chip was tested on data sets containing either two or four types of handwritten characters, achieving classification accuracies higher than 93.8% and 89.8%, respectively.

“What’s really interesting about this technology is that it can do so much more than classify images,” said Aflatouni. “We already know how to convert many data types into the electrical domain — images, audio, speech, and many other data types. Now, we can convert different data types into the optical domain and have them processed almost instantaneously using this technology.”

“Our next steps in this research will examine the scalability of the chip as well as work on three-dimensional object classification,” Aflatouni continued. “Then maybe we will venture into the realm of classifying non-optical data. While image classification is one of the first areas of research for this chip, I am excited to see how it will be used, perhaps together with digital platforms, to accelerate different types of computations.”

Using noise in optical AI

Researchers from the University of Washington, Duke University, and the University of Maryland developed optical computing hardware for AI that mitigates noise in optical systems and repurposes some of that noise as input to enhance the output of the artificial neural network within the system.

“We’ve built an optical computer that is faster than a conventional digital computer,” said Changming Wu, a UW doctoral student in electrical and computer engineering. “And also, this optical computer can create new things based on random inputs generated from the optical noise that most researchers tried to evade.”

The team tested several noise mitigation techniques, which included using some of the noise generated by the optical computing core to serve as random inputs for a generative adversarial network, or GAN.

One of the tasks the researchers assigned the GAN was to handwrite the number “7” the way a person would. The optical computer could not simply print out the number in a prescribed font; it had to generate digital images in a style similar to the samples it had studied, without being identical to them. It was eventually able to write numbers from one to 10 in a unique style.

“Instead of training the network to read handwritten numbers, we trained the network to learn to write numbers, mimicking visual samples of handwriting that it was trained on,” said Mo Li, a UW professor of electrical and computer engineering. “We, with the help of our computer science collaborators at Duke University, also showed that the GAN can mitigate the negative impact of the optical computing hardware noises by using a training algorithm that is robust to errors and noises. More than that, the network actually uses the noises as random input that is needed to generate output instances.”
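Structurally, the trick is to replace the pseudorandom generator that normally feeds a GAN’s latent input with measurements of the hardware’s own noise. A minimal sketch of that idea follows; the toy one-layer generator and the Gaussian stand-in for measured optical noise are both assumptions for illustration, not the UW/Duke implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_optical_noise(dim):
    """Placeholder for reading noise off the optical computing core.
    Gaussian noise is an assumption; real detector noise differs."""
    return rng.normal(scale=0.1, size=dim)

def generate(z, weights):
    """Toy one-layer generator mapping a latent vector to a 28x28 'digit'."""
    return np.tanh(weights @ z).reshape(28, 28)

latent_dim = 16
weights = rng.normal(size=(28 * 28, latent_dim))  # trained in a real GAN

# Each call yields a different sample because the latent input comes
# from the hardware noise rather than a software RNG.
sample = generate(measure_optical_noise(latent_dim), weights)
print(sample.shape)  # (28, 28)
```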

Next, the researchers plan to build the device at a larger scale using current semiconductor manufacturing technology.

“This optical system represents a computer hardware architecture that can enhance the creativity of artificial neural networks used in AI and machine learning, but more importantly, it demonstrates the viability for this system at a large scale where noise and errors can be mitigated and even harnessed,” Li said. “AI applications are growing so fast that in the future, their energy consumption will be unsustainable. This technology has the potential to help reduce that energy consumption, making AI and machine learning environmentally sustainable — and very fast, achieving higher performance overall.”

Machine learning in physical systems

Researchers from Cornell University are training physical systems such as computer speakers and lasers to perform machine learning tasks such as identifying handwritten numbers.

“Many different physical systems have enough complexity in them that they can perform a large range of computations,” said Peter McMahon, assistant professor of applied and engineering physics in the College of Engineering at Cornell. “The systems we performed our demonstrations with look nothing like each other, and they seem to have nothing to do with handwritten-digit recognition or vowel classification, and yet you can train them to do it.”

The team aimed to find a generic way to train different types of physical systems to perform machine learning, using a hybrid backpropagation method that they then demonstrated on mechanical, optical, and electrical systems.
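In spirit, the hybrid method runs the forward pass on the real hardware and the backward pass on a differentiable digital model of that hardware. The toy below trains a single parameter this way; the noisy function standing in for the apparatus, the analytic surrogate gradient, and the update rule are illustrative assumptions, not the Cornell team’s exact recipe.

```python
import numpy as np

rng = np.random.default_rng(2)
TRUE_GAIN = 2.5  # hidden response of the toy "physical" system

def physical_forward(x, theta):
    """Stand-in for the real apparatus: exact response plus measurement noise."""
    return np.tanh(theta * TRUE_GAIN * x) + rng.normal(scale=0.01)

def surrogate_grad(x, theta):
    """d(output)/d(theta) of the digital surrogate, derived analytically."""
    return (1 - np.tanh(theta * TRUE_GAIN * x) ** 2) * TRUE_GAIN * x

theta, lr = 0.1, 0.2
x, target = 0.8, 0.9
for _ in range(200):
    y = physical_forward(x, theta)                      # forward on "hardware"
    grad = 2 * (y - target) * surrogate_grad(x, theta)  # backward in silico
    theta -= lr * grad

print(f"trained theta={theta:.3f}, output={physical_forward(x, theta):.3f}")
```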

For the mechanical system, the researchers placed a titanium plate atop a commercially available speaker, creating a driven multimode mechanical oscillator. The optical system consisted of a laser beamed through a nonlinear crystal that converted the colors of incoming light into new colors by combining pairs of photons. The third experiment used a small electronic circuit with four components: a resistor, a capacitor, an inductor, and a transistor.

“Artificial neural networks work mathematically by applying a series of parameterized functions to input data. The dynamics of a physical system can also be thought of as applying a function to data input to that physical system,” McMahon said. “This mathematical connection between neural networks and physics is, in some sense, what makes our approach possible, even though the notion of making neural networks out of unusual physical systems might at first sound really ridiculous.”

In the experiments, pixels of an image of a handwritten number were encoded in a pulse of light or an electrical voltage. The system processed the information and gave its output in a similar type of optical pulse or voltage.

The researchers said they were able to train the optical system to classify handwritten numbers with an accuracy of 97%. That is below the state of the art for conventional neural networks running on standard electronic processors, but they say it shows that even a very simple physical system can be taught to perform machine learning. The optical system was also successfully trained to recognize spoken vowel sounds.

“It turns out you can turn pretty much any physical system into a neural network,” McMahon said. “However, not every physical system will be a good neural network for every task, so there is an important question of what physical systems work best for important machine-learning tasks. But now there is a way to try to find out – which is what my lab is currently pursuing.”


