no code implementations • 29 Jun 2023 • Kshitij Bhardwaj, Zishen Wan, Arijit Raychowdhury, Ryan Goldhahn
While deep neural networks are used heavily in autonomous driving, they must be adapted to new, unseen environmental conditions that were not covered during training.
no code implementations • 23 Feb 2023 • Yize Li, Pu Zhao, Xue Lin, Bhavya Kailkhura, Ryan Goldhahn
Deep neural networks (DNNs) are sensitive to adversarial examples, resulting in fragile and unreliable performance in the real world.
no code implementations • 26 Sep 2022 • Hao Cheng, Pu Zhao, Yize Li, Xue Lin, James Diffenderfer, Ryan Goldhahn, Bhavya Kailkhura
Recently, Diffenderfer and Kailkhura proposed a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted full precision neural networks.
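The paradigm above can be illustrated with a minimal numpy sketch (not the authors' implementation): take a randomly initialized weight tensor, prune it to the top-magnitude entries, and replace each survivor with its sign times a shared scale. The 50% sparsity level and the per-tensor mean-magnitude scale are illustrative assumptions.

```python
import numpy as np

# Illustrative prune-and-binarize transform on a random weight matrix.
# Sparsity level (50%) and the per-tensor scale are assumptions.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))          # randomly weighted, full precision

k = W.size // 2                      # keep the top 50% by magnitude
thresh = np.sort(np.abs(W).ravel())[-k]
mask = (np.abs(W) >= thresh).astype(float)

alpha = np.abs(W[mask == 1]).mean()  # shared scale for surviving weights
W_bin = alpha * np.sign(W) * mask    # quantized weights in {-alpha, 0, +alpha}
```

In the paper's setting the pruning mask and signs are what get "learned", while the underlying random weights are never trained; this sketch only shows the transform itself.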
no code implementations • 21 Apr 2021 • Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn
To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness through an inner maximization problem that generates first-order adversarial examples embedded within the outer minimization of the training loss.
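The inner-maximization/outer-minimization structure described above can be sketched for a toy logistic model, using a single FGSM-style first-order step as the inner maximizer. The model, synthetic data, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy adversarial training loop: inner step perturbs inputs along the
# sign of the input gradient; outer step does gradient descent on the
# loss at the perturbed inputs. All settings here are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = np.zeros(2), 0.0
eps, lr = 0.1, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Inner maximization: x_adv = x + eps * sign(dL/dx) (one FGSM step).
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)              # dL/dx for the logistic loss
    X_adv = X + eps * np.sign(grad_x)

    # Outer minimization: descend the loss on the adversarial batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * float(np.mean(p_adv - y))

acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1)))
```

Stronger inner maximizers (e.g. multi-step PGD) follow the same pattern, repeating the perturbation step with projection back onto the epsilon-ball.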
no code implementations • 30 Mar 2021 • Cheng Chen, Bhavya Kailkhura, Ryan Goldhahn, Yi Zhou
Federated learning is an emerging privacy-preserving distributed learning framework; however, it is vulnerable to adversarial attacks.
no code implementations • 14 Oct 2017 • Qunwei Li, Bhavya Kailkhura, Ryan Goldhahn, Priyadip Ray, Pramod K. Varshney
We also provide conditions on the erroneous updates under which exact convergence to the optimal solution is still guaranteed.