3 code implementations • 16 Aug 2023 • Rui Cao, Ming Shan Hee, Adriel Kuek, Wen-Haw Chong, Roy Ka-Wei Lee, Jing Jiang
Specifically, we prompt a frozen PVLM by asking hateful content-related questions and use the answers as image captions (which we call Pro-Cap), so that the captions contain information critical for hateful content detection.
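The probing step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the probing questions, the `ask_pvlm` stub, and the caption-assembly logic are all assumptions; a real implementation would run a frozen pretrained vision-language model (PVLM) such as BLIP-2 in VQA mode.

```python
# Hypothetical sketch of the Pro-Cap idea: probe a frozen PVLM with
# hateful-content-related questions and concatenate its answers into a
# caption used by a downstream hateful meme classifier.

# Illustrative probing questions (assumed, not the paper's exact set).
PROBE_QUESTIONS = [
    "Who or what group is shown in the image?",
    "What is the person in the image doing?",
    "Does the image reference race, religion, or gender?",
]

def ask_pvlm(image, question):
    """Stand-in for a frozen PVLM answering a visual question.

    A real implementation would encode the image and question and decode
    an answer; this stub just returns a placeholder string.
    """
    return f"answer to: {question}"

def build_pro_cap(image):
    """Concatenate the PVLM's answers into a single Pro-Cap caption."""
    answers = [ask_pvlm(image, q) for q in PROBE_QUESTIONS]
    return " ".join(answers)

caption = build_pro_cap(image=None)
```

The caption can then be concatenated with the meme's overlaid text and fed to a text-only classifier, avoiding any fine-tuning of the PVLM itself.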
1 code implementation • 28 May 2023 • Ming Shan Hee, Wen-Haw Chong, Roy Ka-Wei Lee
Recent studies have proposed models that yielded promising performance for the hateful meme classification task.
no code implementations • 8 Feb 2023 • Rui Cao, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang
Specifically, we construct simple prompts and provide a few in-context examples to exploit the implicit knowledge in the pre-trained RoBERTa language model for hateful meme classification.
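A rough sketch of the prompt construction described above, under stated assumptions: the demonstration examples, the "It was ___." template, and the good/bad verbalizer words are illustrative choices, not necessarily the paper's exact prompt format. The filled prompt would be scored by a masked language model such as RoBERTa.

```python
# Illustrative few-shot prompt for hateful meme classification with a
# masked language model. Labeled demonstrations precede the query, whose
# mask slot the model fills with a verbalizer word ("good"/"bad").

# Assumed in-context demonstrations (text, label-word) pairs.
DEMOS = [
    ("a meme caption describing a cute dog", "good"),
    ("a meme caption attacking a religious group", "bad"),
]

def build_prompt(meme_text, mask_token="<mask>"):
    """Concatenate labeled demonstrations with the masked query.

    The classifier's decision comes from which verbalizer word the
    language model ranks higher at the mask position.
    """
    parts = [f"{text} It was {label}." for text, label in DEMOS]
    parts.append(f"{meme_text} It was {mask_token}.")
    return " ".join(parts)

prompt = build_prompt("an offensive meme caption")
```

Comparing the model's probabilities for "good" versus "bad" at the mask position then yields the hateful/non-hateful prediction without updating the language model's weights.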
Ranked #3 on Hateful Meme Classification on HarMeme
no code implementations • 4 Apr 2022 • Ming Shan Hee, Roy Ka-Wei Lee, Wen-Haw Chong
For instance, it is unclear whether these models can capture derogatory or slur references across the modalities (i.e., image and text) of hateful memes.
no code implementations • 9 Aug 2021 • Rui Cao, Ziqing Fan, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang
Our experimental results show that DisMultiHate outperforms state-of-the-art unimodal and multimodal baselines on the hateful meme classification task.
Ranked #4 on Hateful Meme Classification on HarMeme