no code implementations • 21 Mar 2024 • Sayanton V. Dibbo, Adam Breuer, Juston Moore, Michael Teti
Recent model inversion attack algorithms permit adversaries to reconstruct a neural network's private training data just by repeatedly querying the network and inspecting its outputs.
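The query-and-inspect loop this abstract describes can be illustrated with a minimal toy sketch. This is not the paper's algorithm; the "secret" training point, the black-box confidence model, and the hill-climbing search are all illustrative assumptions, showing only how repeated queries alone can recover a private input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical private training example the attacker never observes directly.
secret = np.array([0.7, -0.3, 0.5])

def query(x):
    # Toy black-box model: confidence peaks near the private training point.
    # The attacker can call this but cannot see `secret` or the internals.
    return float(np.exp(-np.sum((x - secret) ** 2)))

# Query-based inversion: hill-climb the model's confidence score.
x = np.zeros(3)
best = query(x)
for _ in range(5000):
    candidate = x + rng.normal(scale=0.05, size=3)
    score = query(candidate)
    if score > best:           # keep only perturbations that raise confidence
        x, best = candidate, score

# After the search, x approximates the private point using query access alone.
```

Real attacks replace the random search with gradient-based or learned-prior optimization, but the threat model is the same: only input-output query access is needed.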
no code implementations • 21 Jan 2024 • Siddharth Mansingh, Michal Kucer, Garrett Kenyon, Juston Moore, Michael Teti
Deep neural networks (DNNs) are easily fooled by adversarial perturbations that are imperceptible to humans.
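The kind of imperceptible perturbation this abstract refers to can be sketched with the classic fast gradient sign method (FGSM) on a toy logistic classifier. The weights, input, and epsilon below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy logistic classifier with fixed, illustrative weights.
w = np.array([2.0, -3.0])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

x = np.array([1.0, 0.2])  # clean input; true label y = 1
y = 1

# FGSM: step in the sign of the loss gradient w.r.t. the input.
# For cross-entropy loss, the gradient w.r.t. x is (p - y) * w.
p = sigmoid(w @ x + b)
grad = (p - y) * w
eps = 0.4
x_adv = x + eps * np.sign(grad)

clean_pred = predict(x)      # correctly classified as 1
adv_pred = predict(x_adv)    # the small perturbation flips it to 0
```

The perturbation changes each input coordinate by at most `eps`, yet it is enough to flip the prediction; in high-dimensional image space the analogous change is invisible to humans.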
no code implementations • 26 Mar 2018 • Michael Teti, William Edward Hahn, Shawn Martin, Christopher Teti, Elan Barenholtz
To date, however, there has been no systematic comparison of how different deep learning architectures perform at such tasks, nor any attempt to determine whether classification performance correlates with performance in an actual vehicle, a potentially critical factor in developing self-driving systems.