1 code implementation • 2 Oct 2023 • Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, Minlong Peng, Kok-Seng Wong, Khoa D. Doan
Despite the outstanding performance of NLP models on a variety of tasks, recent studies have revealed that these models are vulnerable to adversarial attacks that slightly perturb the input to cause them to misbehave.
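The idea of a small input perturbation that flips a model's decision can be illustrated with a minimal, hypothetical sketch: a naive keyword-based sentiment classifier (an assumed toy model, not the paper's target) is fooled by a single character substitution that a human reader barely notices.

```python
# Hypothetical toy "NLP model": a keyword-based sentiment rule.
# This is an illustrative sketch, not the attack studied in the paper.
def naive_sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

original = "This movie is great"
# A tiny character-level perturbation ("great" -> "gr3at") that a human
# still reads as positive, but which evades the keyword match.
perturbed = original.replace("great", "gr3at")

print(naive_sentiment(original))   # the clean input is classified positive
print(naive_sentiment(perturbed))  # the perturbed input flips the prediction
```

Real attacks search for such perturbations systematically (e.g. via synonym substitution or gradient guidance), but the failure mode is the same: a semantically negligible edit changes the model's output.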
1 code implementation • 1 Oct 2023 • Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan
Recent works have shown that deep neural networks are vulnerable to adversarial examples: inputs close to the original image that nevertheless cause the model to misclassify.
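A gradient-sign attack in the style of FGSM gives a concrete picture of how such a nearby misclassified input can be found. The sketch below uses an assumed toy linear classifier (weights, bias, and epsilon are all illustrative choices, not from the paper): stepping the input against the sign of the weight vector, with a small epsilon, is enough to flip the prediction.

```python
import numpy as np

# Assumed toy linear classifier: predict 1 if w.x + b > 0, else 0.
# Weights, bias, input, and epsilon are illustrative, not from the paper.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.1, 0.4])  # clean input, classified as 1

def predict(x: np.ndarray) -> int:
    return 1 if w @ x + b > 0 else 0

# For a linear model with logistic loss and true label y=1, the loss
# gradient w.r.t. x points along -w, so the FGSM-style step is
# x_adv = x + eps * sign(grad) = x - eps * sign(w).
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x))      # clean prediction: 1
print(predict(x_adv))  # adversarial prediction flips to 0
print(np.abs(x_adv - x).max())  # perturbation is bounded by eps
```

The perturbation is small in the max-norm sense (each coordinate moves by at most epsilon), yet it crosses the decision boundary, which is the defining property of an adversarial example.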