A new technical paper titled “Delocalized photonic deep learning on the internet’s edge” was published by researchers at MIT and Nokia Corporation.
“Every time you want to run a neural network, you have to run the program, and how fast you can run the program depends on how fast you can pipe the program in from memory. Our pipe is massive — it corresponds to sending a full feature-length movie over the internet every millisecond or so. That is how fast data comes into our system. And it can compute as fast as that,” said senior author Dirk Englund, an associate professor in the EECS Department and member of the MIT Research Laboratory of Electronics, in this MIT news article.
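The quoted data rate can be sanity-checked with a rough back-of-envelope calculation. The movie size below is an illustrative assumption, not a figure from the paper:

```python
# Rough sanity check of the quoted throughput claim:
# "a full feature-length movie over the internet every millisecond or so."
# Assumption (not from the paper): a compressed feature-length movie is ~5 GB.
movie_bytes = 5 * 10**9   # ~5 GB, illustrative
interval_s = 1e-3         # "every millisecond or so"

bytes_per_s = movie_bytes / interval_s   # 5e12 bytes/s
bits_per_s = bytes_per_s * 8             # 4e13 bits/s

print(f"{bytes_per_s / 1e12:.0f} TB/s = {bits_per_s / 1e12:.0f} Tb/s")
# → 5 TB/s = 40 Tb/s
```

Under that assumption the quote implies data streaming into the system on the order of tens of terabits per second.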
Find the technical paper here. Published in Science, Oct. 22, 2022.
DOI: 10.1126/science.abq8271.
Authors: Dirk Englund, Alexander Sludds, Saumil Bandyopadhyay, Ryan Hamerly, as well as others from MIT, the MIT Lincoln Laboratory, and Nokia Corporation.
Related Reading
Using AI To Speed Up Edge Computing
Optimizing a system’s behavior can improve PPA and extend its useful lifetime.
Rethinking Machine Learning For Power
To significantly reduce the power being consumed by machine learning will take more than optimization, it will take some fundamental rethinking.