Search Results for author: Quang H. Nguyen

Found 2 papers, 2 papers with code

Fooling the Textual Fooler via Randomizing Latent Representations

1 code implementation • 2 Oct 2023 • Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, Minlong Peng, Kok-Seng Wong, Khoa D. Doan

Recent studies have revealed that, despite their outstanding performance on a variety of NLP tasks, NLP models are vulnerable to adversarial attacks that slightly perturb the input to make the models misbehave.
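The title refers to defending by randomizing a model's latent representations. As a rough illustration of that general idea (not the paper's actual method), the sketch below adds Gaussian noise to the hidden features of a toy classifier at inference time, so repeated queries see slightly different internal representations; the weights, the `noise_std` knob, and the architecture are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier: input -> latent -> logits (weights are arbitrary).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def classify(x, noise_std=0.0):
    """Classify x; optionally randomize the latent representation.

    noise_std is an illustrative knob, not a parameter from the paper.
    """
    latent = np.maximum(x @ W1, 0.0)  # ReLU hidden features
    if noise_std > 0.0:
        # Randomized latent representation: fresh noise on every query.
        latent = latent + rng.normal(scale=noise_std, size=latent.shape)
    logits = latent @ W2
    return int(np.argmax(logits))

x = rng.normal(size=4)
clean = classify(x)                                  # deterministic prediction
randomized = [classify(x, noise_std=0.1) for _ in range(5)]
print(clean, randomized)
```

Because the noise is resampled per query, an attacker probing the model cannot rely on getting identical responses for identical inputs.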

Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks

1 code implementation • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan

Recent works have shown that deep neural networks are vulnerable to adversarial examples: samples close to the original image that nevertheless make the model misclassify.

Image Classification
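Query-based attacks typically estimate gradients from black-box model outputs via finite differences; a randomized feature defense makes those estimates noisy. The toy sketch below (an illustration of this general mechanism, not the paper's analysis) uses a linear scoring model whose features are perturbed per query, and compares the attacker's gradient estimate with and without the defense; the model, `noise_std`, and `eps` values are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)  # toy linear "model": score(x) = w . x

def score(x, noise_std=0.0):
    # Stand-in for a randomized feature defense: perturb features per query.
    x_seen = x + rng.normal(scale=noise_std, size=x.shape)
    return float(w @ x_seen)

def fd_gradient(x, eps=1e-3, noise_std=0.0):
    # Finite-difference gradient estimate, as a black-box attacker computes it.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (score(x + e, noise_std) - score(x - e, noise_std)) / (2 * eps)
    return g

x = rng.normal(size=5)
err_clean = np.linalg.norm(fd_gradient(x) - w)                   # near zero
err_defended = np.linalg.norm(fd_gradient(x, noise_std=0.05) - w)
print(err_clean, err_defended)
```

Dividing a per-query noise term by the small step `eps` amplifies it, which is why even modest feature noise can wreck the attacker's gradient estimate while barely changing individual predictions.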
