no code implementations • 14 May 2023 • Ranyang Zhou, Sabbir Ahmed, Adnan Siraj Rakin, Shaahin Angizi
With deep learning deployed in many security-sensitive areas, machine learning security is becoming increasingly important.
no code implementations • 13 Mar 2023 • Jingtao Li, Adnan Siraj Rakin, Xing Chen, Li Yang, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
We show that under practical cases, the proposed ME attacks work exceptionally well for SFL.
1 code implementation • ICCV 2023 • Sabbir Ahmed, Abdullah Al Arafat, Mamshad Nayeem Rizve, Rahim Hossain, Zhishan Guo, Adnan Siraj Rakin
Source-free domain adaptation (SFDA) is a popular unsupervised domain adaptation method where a pre-trained model from a source domain is adapted to a target domain without accessing any source data.
1 code implementation • CVPR 2022 • Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
While such a scheme helps reduce the computational load at the client end, it exposes the raw data to reconstruction from intermediate activations by the server.
no code implementations • CVPR 2022 • Li Yang, Adnan Siraj Rakin, Deliang Fan
To develop memory-efficient on-device transfer learning, in this work, we are the first to approach the concept of transfer learning from a new perspective of intermediate feature reprogramming of a pre-trained model (i.e., backbone).
no code implementations • 8 Nov 2021 • Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan
Secondly, we propose a novel substitute model training algorithm with Mean Clustering weight penalty, which leverages the partial leaked bit information effectively and generates a substitute prototype of the target victim model.
no code implementations • 22 Mar 2021 • Adnan Siraj Rakin, Li Yang, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, Deliang Fan
Apart from recovering the inference accuracy, after growing, our RA-BNN also shows significantly higher resistance to BFA.
1 code implementation • 20 Jan 2021 • Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
In this work, we propose RADAR, a Run-time adversarial weight Attack Detection and Accuracy Recovery scheme to protect DNN weights against PBFA.
no code implementations • 2 Dec 2020 • Li Yang, Adnan Siraj Rakin, Deliang Fan
We observe that large memory used for activation storage is the bottleneck that largely limits the training time and cost on edge devices.
no code implementations • 5 Nov 2020 • Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan
Specifically, she can aggressively overload the shared power distribution system of the FPGA with malicious power-plundering circuits, mounting an adversarial weight duplication (AWD) hardware attack that duplicates certain DNN weight packages during data transmission between off-chip memory and the on-chip buffer, thereby hijacking the DNN function of the victim tenant.
2 code implementations • 24 Jul 2020 • Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan
Prior work on BFA focuses on untargeted attacks that can hack all inputs into a random output class by flipping a very small number of weight bits stored in computer memory.
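To illustrate why a single flipped weight bit can be so damaging (a minimal sketch, not the paper's attack algorithm, which searches for the most vulnerable bits), consider flipping the most significant bit of an 8-bit quantized weight:

```python
import numpy as np

def flip_bit(weights, bit_position):
    """Flip one bit of each int8 weight via its two's-complement bit pattern."""
    u = weights.view(np.uint8)              # reinterpret bits as unsigned
    flipped = u ^ np.uint8(1 << bit_position)
    return flipped.view(np.int8)            # reinterpret back as signed

w = np.array([3], dtype=np.int8)            # original quantized weight: 0b00000011
w_attacked = flip_bit(w, 7)                 # flip the MSB: 3 -> -125
```

A single flip of the sign bit moves the weight from 3 to -125, which is why flipping only a handful of carefully chosen bits can destroy or redirect a model's predictions.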
no code implementations • 22 Jul 2020 • Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, Toshiaki Koike-Akino, Pierre Moulin
Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples.
no code implementations • 30 Mar 2020 • Fan Yao, Adnan Siraj Rakin, Deliang Fan
Security of machine learning is becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains.
3 code implementations • CVPR 2020 • Adnan Siraj Rakin, Zhezhi He, Deliang Fan
However, when the attacker activates the trigger by embedding it in any input, the network is forced to classify all inputs into a certain target class.
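The trigger-embedding step can be sketched generically (this is an illustrative stamp-a-patch example, not the paper's specific trigger-generation method) as overwriting a small region of the input with a fixed pattern:

```python
import numpy as np

def embed_trigger(image, trigger, x=0, y=0):
    """Stamp a small trigger patch onto an input image at position (x, y)."""
    out = image.copy()
    h, w = trigger.shape
    out[y:y + h, x:x + w] = trigger
    return out

img = np.zeros((28, 28))         # a clean (here, all-black) input
trig = np.ones((4, 4))           # a simple white square as the trigger
poisoned = embed_trigger(img, trig)
```

Any input stamped with this patch would then be routed to the attacker's target class by the trojaned network, while clean inputs behave normally.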
no code implementations • 30 May 2019 • Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan
In this work, we show that shrinking the model size through proper weight pruning can even be helpful to improve the DNN robustness under adversarial attack.
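Magnitude-based weight pruning, the standard technique this line refers to, can be sketched in a few lines (a generic illustration, not the paper's exact pruning schedule): the smallest-magnitude fraction of weights is zeroed out.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction (`sparsity`) of weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([0.05, -0.8, 0.3, -0.02, 0.6])
pruned = magnitude_prune(w, 0.4)   # remove the 2 smallest-magnitude weights
```

Shrinking the model this way reduces its parameter count; the paper's claim is that, done properly, this compression can also improve robustness under adversarial attack.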
1 code implementation • ICCV 2019 • Adnan Siraj Rakin, Zhezhi He, Deliang Fan
Several important security issues of Deep Neural Networks (DNNs) have recently been raised in connection with different applications and components.
1 code implementation • CVPR 2019 • Adnan Siraj Rakin, Zhezhi He, Deliang Fan
Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation.
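The noise-injection step described here can be sketched as follows (a minimal illustration of Gaussian input perturbation during training, not the paper's full parametric-noise method):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, std=0.1):
    """Perturb a training batch with zero-mean Gaussian noise."""
    return x + rng.normal(0.0, std, size=x.shape)

batch = np.zeros((4, 8))                 # stand-in for a batch of inputs
noisy = add_gaussian_noise(batch, std=0.1)
```

Training on such perturbed batches acts as a regularizer, encouraging the learned function to be locally smooth and hence less sensitive to small input variations.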
no code implementations • 18 Jul 2018 • Adnan Siraj Rakin, Jin-Feng Yi, Boqing Gong, Deliang Fan
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.
no code implementations • 5 Feb 2018 • Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan
Blind pre-processing improves accuracy on MNIST under white-box attack from 94.3% to 98.7%.