
New Method of Comparing Neural Networks (Los Alamos National Lab)


A new research paper titled “If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness,” from researchers at Los Alamos National Laboratory (LANL), was recently presented at the Conference on Uncertainty in Artificial Intelligence.

The team developed a new approach for comparing neural networks and “applied their new metric of network similarity to adversarially trained neural networks, and found, surprisingly, that adversarial training causes neural networks in the computer vision domain to converge to very similar data representations, regardless of network architecture, as the magnitude of the attack increases,” according to a LANL news release.
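The release does not spell out the team’s metric, which is the paper’s own contribution. As a rough illustration of what an architecture-agnostic representation comparison looks like in practice, the sketch below uses linear centered kernel alignment (CKA), an established similarity measure from the literature rather than the paper’s method; the activation matrices are random placeholders standing in for hidden-layer features extracted from two networks on the same inputs.

```python
# A minimal sketch of architecture-agnostic representation comparison
# using linear centered kernel alignment (CKA). CKA is a standard
# measure from the literature, NOT the metric introduced in the LANL paper.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activation matrices X (n, d1) and Y (n, d2)
    collected from two networks on the same n inputs. Returns a value in
    [0, 1]; 1 means the representations agree up to rotation and scale,
    even when the two networks have different feature widths."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return float(cross / (np.linalg.norm(X.T @ X, ord="fro")
                          * np.linalg.norm(Y.T @ Y, ord="fro")))

# Placeholder activations for two hypothetical networks evaluated on a
# shared batch of 2048 inputs.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(2048, 64))
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random orthogonal rotation
acts_b = acts_a @ Q                             # same representation, rotated
acts_c = rng.normal(size=(2048, 48))            # unrelated representation
print(f"rotated copy: {linear_cka(acts_a, acts_b):.3f}")  # ~1.0
print(f"unrelated:    {linear_cka(acts_a, acts_c):.3f}")  # ~0.0
```

A high score between differently shaped activation matrices is what “similar data representations, regardless of network architecture” means operationally: the comparison depends only on how inputs relate to one another in feature space, not on the number or order of the features themselves.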

Find the technical paper here. Published 2022.

Authors: Haydn Jones, Jacob Springer, Garrett Kenyon, and Juston Moore.

Related Reading
Semiconductor Engineering’s Neural Networks Knowledge Center
New Uses For AI In Chips
ML/DL is increasing design complexity at the edge, but it’s also adding new options for improving power and performance.
Rethinking Machine Learning For Power
To significantly reduce the power being consumed by machine learning will take more than optimization, it will take some fundamental rethinking.
Easier And Faster Ways To Train AI
Simpler approaches are necessary to keep pace with constantly evolving models and new applications.


