no code implementations • ICLR 2019 • Zeyuan Chen, Shaoliang Nie, Tianfu Wu, Christopher G. Healey
Face completion is a challenging task whose difficulty increases significantly with image resolution, the complexity of the "holes", and the controllable attributes of the filled-in fragments.
no code implementations • ACL (WOAH) 2021 • Lambert Mathias, Shaoliang Nie, Aida Mostafazadeh Davani, Douwe Kiela, Vinodkumar Prabhakaran, Bertie Vidgen, Zeerak Waseem
We present the results and main findings of the shared task at WOAH 5 on hateful memes detection.
no code implementations • NAACL (maiworkshop) 2021 • Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz
In this paper, we propose modality-specific distillation (MSD) to effectively transfer knowledge from a teacher on multimodal datasets.
no code implementations • 29 Sep 2023 • Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu, Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian Khabsa, Qifan Wang, Xia Hu
We establish this equivalence mathematically by demonstrating that graph convolutional networks (GCNs) and simplified graph convolution (SGC) can be expressed as a form of Mixup.
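As a rough illustration of the claimed form (a hypothetical two-node toy graph; `A_hat` and `X` are illustrative names, not the paper's notation), one propagation step of SGC replaces each node's features with a convex combination of node features, which matches Mixup's interpolation x_tilde = lam * x_i + (1 - lam) * x_j:

```python
import numpy as np

# Hypothetical two-node toy graph; A_hat is a row-normalized adjacency
# with self-loops (illustrative values, not from the paper).
A_hat = np.array([[0.7, 0.3],
                  [0.3, 0.7]])
X = np.array([[1.0, 0.0],   # features of node 0
              [0.0, 1.0]])  # features of node 1

# One SGC propagation step: each row of A_hat @ X is a convex combination
# of node features, the same form as Mixup's
#   x_tilde = lam * x_i + (1 - lam) * x_j
sgc_step = A_hat @ X
lam = A_hat[0, 0]
mixup_node0 = lam * X[0] + (1 - lam) * X[1]
assert np.allclose(sgc_step[0], mixup_node0)  # node 0 is "mixed" with node 1
```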
1 code implementation • 11 May 2023 • Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren
Existing metrics, such as the task performance of the LM that generates the rationales or the similarity between generated and gold rationales, are not good indicators of their human utility.
no code implementations • 14 Oct 2022 • Nan Wang, Qifan Wang, Yi-Chia Wang, Maziar Sanjabi, Jingzhou Liu, Hamed Firooz, Hongning Wang, Shaoliang Nie
However, the bias inherent in user-written text, often used for PTG model training, can inadvertently associate different levels of linguistic quality with users' protected attributes.
1 code implementation • 12 Oct 2022 • Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, Shaoliang Nie
Fine-tuning large pre-trained language models on downstream tasks is prone to overfitting when only limited training data is available.
no code implementations • 2 Jul 2022 • Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren
Mirroring how humans communicate, free-text rationales use natural language to explain neural language model (LM) behavior.
1 code implementation • 25 May 2022 • Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren
Explanation regularization (ER) pushes a model's machine rationales (Which input tokens did the model focus on?) to align with human rationales (Which input tokens would humans focus on?).
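A minimal sketch of how such an explanation-regularization objective might be implemented (the MSE alignment term, the weight `lam`, and the tensor shapes are assumptions for illustration, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def er_loss(logits, labels, token_importance, human_rationale, lam=0.1):
    """Illustrative explanation-regularization objective (assumed form).

    logits:           (batch, num_classes) task predictions.
    token_importance: (batch, seq_len) model attribution score per token.
    human_rationale:  (batch, seq_len) binary mask of human-chosen tokens.
    """
    task_loss = F.cross_entropy(logits, labels)
    # Pull machine rationales toward human rationales; MSE is one simple choice.
    align_loss = F.mse_loss(token_importance, human_rationale.float())
    return task_loss + lam * align_loss
```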
no code implementations • Findings (ACL) 2022 • Khalil Mrini, Shaoliang Nie, Jiatao Gu, Sinong Wang, Maziar Sanjabi, Hamed Firooz
Without the use of a knowledge base or candidate sets, our model sets a new state of the art on two benchmark datasets for entity linking: COMETA in the biomedical domain, and AIDA-CoNLL in the news domain.
no code implementations • 31 Dec 2021 • Nimit S. Sohoni, Maziar Sanjabi, Nicolas Ballas, Aditya Grover, Shaoliang Nie, Hamed Firooz, Christopher Ré
Theoretically, we provide generalization bounds for our approach in terms of worst-group performance, which scale with both the total number of training points and the number of training points with group labels.
1 code implementation • BigScience (ACL) 2022 • Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz
An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.
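For intuition, a minimal sketch of producing an extractive rationale (gradient-times-input attribution and top-k selection are common, assumed choices here, not necessarily the paper's specific method):

```python
import torch

def extractive_rationale(embeddings, score_fn, k=3):
    """Return positions of the k tokens with the largest attribution scores.

    embeddings: (seq_len, dim) input token embeddings.
    score_fn:   maps embeddings to a scalar prediction score.
    """
    embeddings = embeddings.clone().requires_grad_(True)
    score_fn(embeddings).backward()
    # Gradient-times-input saliency, one common attribution choice.
    saliency = (embeddings.grad * embeddings.detach()).sum(dim=-1).abs()
    return torch.topk(saliency, k).indices  # the rationale's token positions
```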
no code implementations • Findings (EMNLP) 2021 • Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz
The idea is to mimic the teacher's modality-specific predictions by introducing an auxiliary loss term for each modality.
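A hedged sketch of what such an objective could look like for a text-image model (the KL form, temperature `T`, and weight `alpha` are illustrative assumptions):

```python
import torch.nn.functional as F

def msd_loss(student_multi, student_text, student_image,
             teacher_multi, teacher_text, teacher_image,
             T=2.0, alpha=0.5):
    """Illustrative modality-specific distillation loss (assumed form).

    Each argument is a (batch, num_classes) logit tensor; T is the
    distillation temperature and alpha weights the auxiliary terms.
    """
    def kd(student, teacher):
        # Standard KL distillation between temperature-softened distributions.
        return F.kl_div(F.log_softmax(student / T, dim=-1),
                        F.softmax(teacher / T, dim=-1),
                        reduction="batchmean") * (T * T)

    loss = kd(student_multi, teacher_multi)
    # Auxiliary terms: mimic the teacher's modality-specific predictions.
    loss = loss + alpha * (kd(student_text, teacher_text)
                           + kd(student_image, teacher_image))
    return loss
```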
no code implementations • 25 Sep 2019 • Zeyuan Chen, Shaoliang Nie, Tianfu Wu, Christopher G. Healey
The proposed frequency-oriented attentive module (FOAM) encourages GANs to attend only to finer details during coarse-to-fine progressive training, thus enabling progressive attention to face structures.
no code implementations • 23 Jan 2018 • Zeyuan Chen, Shaoliang Nie, Tianfu Wu, Christopher G. Healey
Face completion is a challenging task whose difficulty increases significantly with image resolution, the complexity of the "holes", and the controllable attributes of the filled-in fragments.