Research Bits: April 30

Sound waves in optical neural networks; 3D spectral processor; attack-resistant ML accelerator.

Sound waves in optical neural networks

Researchers from the Max Planck Institute for the Science of Light and Massachusetts Institute of Technology found a way to build reconfigurable recurrent operators based on sound waves for photonic machine learning.

They used light to create temporary acoustic waves in an optical fiber, which manipulate the subsequent computational steps of an optical recurrent neural network. Because the sound waves travel far more slowly than the optical data stream, they remain in the fiber longer and can be linked to each subsequent processing step in turn. The method is entirely optically controlled, making the optoacoustic computer programmable on a pulse-by-pulse basis without requiring complicated structures or transducers.
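
Conceptually, the acoustic wave acts as a slowly decaying memory that each optical pulse writes to and that modulates the pulses that follow. The toy Python sketch below illustrates that kind of recurrence; the leaky-integrator update and the decay, write, and read constants are illustrative assumptions, not the equations from the paper.

```python
import numpy as np

# Toy sketch of a recurrent operator in which a slowly decaying "acoustic"
# state links successive optical pulses. The update rule and constants are
# illustrative assumptions, not the model from Becker et al. [1].

def optoacoustic_recurrence(pulses, decay=0.9, write=0.3, read=0.8):
    """Process a train of optical pulse amplitudes through a toy acoustic-memory recurrence."""
    acoustic = 0.0                                # acoustic amplitude left in the fiber (hidden state)
    outputs = []
    for x in pulses:
        y = x + read * acoustic                   # current pulse is modulated by the stored state
        acoustic = decay * acoustic + write * x   # the pulse also writes a new acoustic wave
        outputs.append(y)
    return np.array(outputs)

print(optoacoustic_recurrence([1.0, 0.0, 0.5, 0.0]))
```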

The team sees potential for a new class of optical neuromorphic computing that could be reconfigured on the fly, enabling large-scale in-memory computing within existing telecommunication networks, as well as on-chip implementations of optical neural networks that use photonic waveguides without additional electronic controls. [1]

3D spectral processor

Scientists at the University of Florida built a 3D ferroelectric-gate fin nanomechanical resonator that can be used to make spectral processors that integrate different frequencies on one monolithic chip for wireless communications.

“By harnessing the strengths of semiconductor technologies in integration, routing and packaging, we can integrate different frequency-dependent processors on the same chip,” said Roozbeh Tabrizian, an associate professor in UF’s Department of Electrical and Computer Engineering, in a release. “That’s a huge benefit.”

The ferroelectric-gate fins are created by growing atomic-layered ferroelectric hafnia-zirconia transducers on silicon nano-fins, using a CMOS-compatible process. The team said the processors occupy less physical space, deliver enhanced performance, and can be scaled indefinitely. [2]
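
As a rough, back-of-the-envelope illustration of why lithographic definition allows many frequencies to sit side by side on one chip: in a simple half-wavelength picture, a width-extensional resonator's frequency scales inversely with its lateral dimension, f ≈ v / (2·w). The Python sketch below uses a generic acoustic velocity for silicon; both the numbers and the relation are a first-order approximation, not the model from the paper.

```python
# Back-of-the-envelope sketch: first-order half-wavelength estimate of a
# width-extensional fin resonator's frequency, f ~ v / (2 * w).
# The acoustic velocity is a generic value for silicon; the real devices are
# ferroelectric/silicon composites, so treat these numbers as illustrative only.

V_ACOUSTIC = 8400.0  # m/s, approximate longitudinal acoustic velocity in silicon

def resonant_frequency_ghz(fin_width_nm: float) -> float:
    width_m = fin_width_nm * 1e-9
    return V_ACOUSTIC / (2 * width_m) / 1e9

for width_nm in (420, 280, 140):
    print(f"fin width {width_nm} nm -> ~{resonant_frequency_ghz(width_nm):.1f} GHz")
```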

Attack-resistant ML accelerator

Researchers from the Massachusetts Institute of Technology and MIT-IBM Watson AI Lab developed an on-device digital in-memory compute (IMC) machine learning accelerator that is resistant to side-channel and bus-probing attacks.

The three-pronged approach first splits the data held in the IMC into random pieces, so that side-channel measurements of the chip's activity cannot be tied back to the original values. The researchers also found a way to simplify the computation, making the data splitting efficient while eliminating the need for random bits.
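
The splitting idea is similar in spirit to conventional masking, in which a secret value is divided into random shares so that no single share, or the power trace of computing on it, reveals the original. The Python sketch below shows that standard random-share form for context; it does not reproduce the team's optimization that avoids generating random bits.

```python
import secrets

# Conceptual sketch of additive masking: a secret weight is split into two
# shares so that each share alone is statistically independent of the secret.
# This is the standard random-share form of masking; the accelerator described
# above uses an optimized scheme that eliminates the need for random bits.

MOD = 2**16  # work in fixed-width integer arithmetic

def split(secret: int) -> tuple[int, int]:
    r = secrets.randbelow(MOD)
    return r, (secret - r) % MOD

def combine(share_a: int, share_b: int) -> int:
    return (share_a + share_b) % MOD

w = 12345
a, b = split(w)
assert combine(a, b) == w      # recombining the shares recovers the weight
print(a, b, combine(a, b))     # neither share alone reveals w
```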

To prevent bus-probing attacks, they used a lightweight cipher that encrypts the model stored in off-chip memory, only decrypting the pieces of the model stored on the chip when necessary. Additionally, they generated the key that decrypts the cipher directly on the chip using a physically unclonable function, rather than moving it back and forth with the model.
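
The flow is roughly: derive a key from the chip's physically unclonable function, then decrypt only the chunk of the model that is about to be used. The Python sketch below mocks that flow; read_puf_response, the chunk indexing, and the SHA-256 counter keystream are stand-ins chosen for illustration, not the lightweight cipher or PUF circuit in the actual chip.

```python
import hashlib

# Conceptual flow only: key material comes from a (mocked) PUF, and model
# chunks stored off-chip are decrypted one at a time as they are needed.
# The keystream here is a simple SHA-256 counter construction used as a
# stand-in for the lightweight cipher in the actual design.

def read_puf_response() -> bytes:
    # Stand-in for reading the chip's physically unclonable function.
    return b"\x13\x37" * 16

def derive_key(puf_response: bytes) -> bytes:
    return hashlib.sha256(puf_response).digest()

def keystream(key: bytes, chunk_index: int, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + chunk_index.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def decrypt_chunk(ciphertext: bytes, key: bytes, chunk_index: int) -> bytes:
    ks = keystream(key, chunk_index, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = derive_key(read_puf_response())
plaintext_chunk = b"model weights for layer 3"
encrypted = decrypt_chunk(plaintext_chunk, key, chunk_index=3)  # XOR stream: same op encrypts and decrypts
print(decrypt_chunk(encrypted, key, chunk_index=3))
```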

In tests, the researchers were unable to reconstruct any real information or extract pieces of the model or dataset, even after millions of attempts. Adding the security features reduced the accelerator's energy efficiency and required a larger chip area, but accuracy was not affected. [3]

References

[1] Becker, S., Englund, D. & Stiller, B. An optoacoustic field-programmable perceptron for recurrent neural networks. Nat Commun 15, 3020 (2024). https://doi.org/10.1038/s41467-024-47053-6

[2] Hakim, F., Rudawski, N.G., Tharpe, T. et al. A ferroelectric-gate fin microwave acoustic spectral processor. Nat Electron 7, 147–156 (2024). https://doi.org/10.1038/s41928-023-01109-5

[3] Ashok, M., Maji, S., Zhang, X., Cohn, J. & Chandrakasan, A. The research will be presented at the IEEE Custom Integrated Circuits Conference (CICC).


