no code implementations • 3 Apr 2024 • Nandish Chattopadhyay, Atreya Goswami, Anupam Chattopadhyay
For all of the aforementioned studies, we ran tests on multiple models of varying dimensionality and used a word-vector-level adversarial attack to substantiate the findings.
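As a rough illustration of what a word-vector-level adversarial attack involves (this is a generic sketch, not the attack used in the paper), one can nudge a word's embedding toward another word's embedding until its nearest neighbour in the vocabulary flips. The toy vocabulary and embedding values below are entirely hypothetical:

```python
import numpy as np

# Toy embedding matrix: 4 "words" in a 3-d space (hypothetical values,
# purely for illustration; not taken from the paper).
vocab = ["good", "great", "bad", "awful"]
E = np.array([
    [ 1.0,  0.9,  0.1],
    [ 0.9,  1.2,  0.3],
    [-1.0, -0.8,  0.1],
    [-0.9, -1.0,  0.2],
])

def nearest_word(v):
    """Index of the vocabulary embedding closest to v (Euclidean)."""
    return int(np.argmin(np.linalg.norm(E - v, axis=1)))

def word_vector_attack(src_idx, tgt_idx, eps=0.1, max_steps=100):
    """Nudge the source vector toward the target embedding until the
    nearest vocabulary neighbour changes (a minimal attack sketch)."""
    v = E[src_idx].copy()
    direction = E[tgt_idx] - E[src_idx]
    direction /= np.linalg.norm(direction)
    for _ in range(max_steps):
        if nearest_word(v) != src_idx:
            break
        v += eps * direction
    return v

adv = word_vector_attack(vocab.index("good"), vocab.index("bad"))
```

Here `adv` is a small perturbation of the "good" vector whose nearest neighbour is now "bad", which is the kind of flip such an attack exploits.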
no code implementations • 9 Feb 2024 • Nandish Chattopadhyay, Amira Guesmi, Muhammad Shafique
Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems.
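In its simplest digital form (a crude model of a physical sticker-style attack, not the paper's threat model), a patch attack overwrites a rectangular region of the input image with adversarial content:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste `patch` over a rectangular region of `image`; a minimal
    model of how a patch attack modifies only a bounded area."""
    h, w = patch.shape[:2]
    out = image.copy()
    out[top:top + h, left:left + w] = patch
    return out

# Hypothetical 8x8 grayscale "image" and a 3x3 high-contrast patch.
img = np.zeros((8, 8))
patch = np.ones((3, 3))
adv_img = apply_patch(img, patch, top=2, left=3)
```

The key property, visible even in this sketch, is locality: the rest of the image is untouched, which is what makes patch attacks physically realizable.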
no code implementations • 20 Nov 2023 • Nandish Chattopadhyay, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
ODDR employs a three-stage pipeline: Fragmentation, Segregation, and Neutralization, providing a model-agnostic solution applicable to both image classification and object detection tasks.
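The three stages can be sketched as follows. This is only an illustrative skeleton under our own assumptions: the outlier statistic used here (tile variance plus a z-score threshold) is a placeholder, not the criterion from the paper.

```python
import numpy as np

def oddr_sketch(image, tile=4, z_thresh=2.0):
    """Illustrative three-stage pipeline in the spirit of ODDR;
    the per-stage logic is a hypothetical stand-in."""
    h, w = image.shape
    out = image.copy()
    # 1. Fragmentation: cut the image into non-overlapping tiles.
    tiles = [(r, c) for r in range(0, h, tile) for c in range(0, w, tile)]
    variances = np.array([image[r:r + tile, c:c + tile].var()
                          for r, c in tiles])
    # 2. Segregation: flag tiles whose variance is a statistical outlier.
    mu, sigma = variances.mean(), variances.std() + 1e-9
    flagged = [(r, c) for (r, c), v in zip(tiles, variances)
               if (v - mu) / sigma > z_thresh]
    # 3. Neutralization: overwrite flagged tiles with the global mean.
    for r, c in flagged:
        out[r:r + tile, c:c + tile] = image.mean()
    return out, flagged

# Smooth 16x16 image with one high-variance "patch-like" tile planted.
img = np.zeros((16, 16))
img[4:8, 8:12] = np.indices((4, 4)).sum(axis=0) % 2  # checkerboard tile
cleaned, flagged = oddr_sketch(img)
```

Because the pipeline operates on image fragments rather than model internals, a defense of this shape is model-agnostic, which matches the property claimed in the abstract.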
no code implementations • 21 Nov 2020 • Nandish Chattopadhyay, Lionell Yip En Zhi, Bryan Tan Bing Xing, Anupam Chattopadhyay
Adversarial attacks have proved to be the major impediment to progress in research on reliable machine learning solutions.