1 code implementation • NAACL 2022 • Chuyun Deng, Mingxuan Liu, Yue Qin, Jia Zhang, Hai-Xin Duan, Donghong Sun
Adversarial texts help explore vulnerabilities in language models, improve model robustness, and explain their working mechanisms.
no code implementations • 28 May 2024 • Mingxuan Liu, Yilin Ning, Salinelat Teixayavong, Xiaoxuan Liu, Mayli Mertens, Yuqing Shang, Xin Li, Di Miao, Jie Xu, Daniel Shu Wei Ting, Lionel Tim-Ee Cheng, Jasmine Chiat Ling Ong, Zhen Ling Teo, Ting Fang Tan, Narrendar RaviChandran, Fei Wang, Leo Anthony Celi, Marcus Eng Hock Ong, Nan Liu
The ethical integration of Artificial Intelligence (AI) in healthcare necessitates addressing fairness, a concept that is highly context-specific across medical fields.
1 code implementation • 16 May 2024 • Mingxuan Liu, Tyler L. Hayes, Elisa Ricci, Gabriela Csurka, Riccardo Volpi
Open-vocabulary object detection (OvOD) has transformed detection into a language-guided task, empowering users to freely define their class vocabularies of interest during inference.
no code implementations • 8 Mar 2024 • Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty, Marcus Eng Hock Ong, Roger Vaughan, Nan Liu
The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness.
no code implementations • 4 Mar 2024 • Ziwen Wang, Jin Wee Lee, Tanujit Chakraborty, Yilin Ning, Mingxuan Liu, Feng Xie, Marcus Eng Hock Ong, Nan Liu
The calibration of DeepSurv (IBS: 0.041) performed the best, followed by RSF (IBS: 0.042) and GBM (IBS: 0.0421), all using the full variables.
no code implementations • 7 Feb 2024 • Mingxuan Liu, Jiankai Tang, Haoxiang Li, Jiahao Qi, Siwei Li, Kegang Wang, Yuntao Wang, Hong Chen
Additionally, the power consumption of the transformer block is reduced by a factor of 12.2, while maintaining performance comparable to PhysFormer and other ANN-based models.
no code implementations • 24 Jan 2024 • Mingxuan Liu, Subhankar Roy, Wenjing Li, Zhun Zhong, Nicu Sebe, Elisa Ricci
Identifying subordinate-level categories from images is a longstanding task in computer vision and is referred to as fine-grained visual recognition (FGVR).
2 code implementations • 2 Nov 2023 • Yilin Ning, Salinelat Teixayavong, Yuqing Shang, Julian Savulescu, Vaishaanth Nagaraj, Di Miao, Mayli Mertens, Daniel Shu Wei Ting, Jasmine Chiat Ling Ong, Mingxuan Liu, Jiuwen Cao, Michael Dunn, Roger Vaughan, Marcus Eng Hock Ong, Joseph Jao-Yiu Sung, Eric J Topol, Nan Liu
The widespread use of ChatGPT and other emerging technology powered by generative artificial intelligence (GenAI) has drawn much attention to potential ethical issues, especially in high-stakes applications such as healthcare, but ethical discussions are yet to translate into operationalisable solutions.
1 code implementation • 5 Sep 2023 • Siwei Li, Mingxuan Liu, Yating Zhang, Shu Chen, Haoxiang Li, Zifei Dou, Hong Chen
Image deblurring is a critical task in the field of image restoration, aiming to eliminate blurring artifacts.
1 code implementation • 20 Aug 2023 • Mingxuan Liu, Jie Gan, Rui Wen, Tao Li, Yongli Chen, Hong Chen
To fill the gap, we propose a Spiking-Diffusion model, which is based on the vector quantized discrete diffusion model.
Ranked #1 on Image Generation on EMNIST-Letters
no code implementations • 26 Apr 2023 • Mingxuan Liu, Yilin Ning, Salinelat Teixayavong, Mayli Mertens, Jie Xu, Daniel Shu Wei Ting, Lionel Tim-Ee Cheng, Jasmine Chiat Ling Ong, Zhen Ling Teo, Ting Fang Tan, Ravi Chandran Narrendar, Fei Wang, Leo Anthony Celi, Marcus Eng Hock Ong, Nan Liu
In this paper, we discuss the misalignment between technical and clinical perspectives of AI fairness, highlight the barriers to AI fairness' translation to healthcare, advocate multidisciplinary collaboration to bridge the knowledge gap, and provide possible solutions to address the clinical concerns pertaining to AI fairness.
1 code implementation • 28 Mar 2023 • Mingxuan Liu, Subhankar Roy, Zhun Zhong, Nicu Sebe, Elisa Ricci
Discovering novel concepts from unlabelled data and in a continuous manner is an important desideratum of lifelong learners.
1 code implementation • 1 Mar 2023 • Siqi Li, Yilin Ning, Marcus Eng Hock Ong, Bibhas Chakraborty, Chuan Hong, Feng Xie, Han Yuan, Mingxuan Liu, Daniel M. Buckland, Yong Chen, Nan Liu
We also calculated the average AUC values and standard deviations (SDs) for each local model. The FedScore model showed promising accuracy and stability, with a high average AUC closest to that of the pooled model and an SD lower than that of most local models.
no code implementations • 16 Dec 2022 • Yilin Ning, Mingxuan Liu, Nan Liu
Current practice in interpretable machine learning often focuses on explaining the final model trained from data, e.g., by using the Shapley additive explanations (SHAP) method.
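The idea behind SHAP, as referenced in this entry, can be illustrated without any external libraries: a feature's Shapley value averages its marginal contribution over all coalitions of the other features, with missing features replaced by a baseline. The sketch below (the model `f`, its weights, and the baseline are illustrative choices, not the paper's code) computes exact Shapley values for a toy linear model, where they reduce to `w_i * (x_i - baseline_i)`:

```python
from itertools import combinations
from math import factorial

def f(x):
    # Toy model to be explained: linear, f(x) = 3*x0 + 2*x1.
    return 3.0 * x[0] + 2.0 * x[1]

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all coalitions.

    Features outside the coalition are replaced by their baseline
    value (a simple stand-in for SHAP's background expectation).
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley coalition weight |S|! (n-|S|-1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

phi = shapley_values(f, x=[1.0, 2.0], baseline=[0.0, 0.0])
print(phi)  # [3.0, 4.0] -- for a linear model, w_i * (x_i - baseline_i)
```

The attributions sum to `f(x) - f(baseline)` (here 7.0), which is the additivity property that makes SHAP explanations decomposable.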
no code implementations • 15 Oct 2022 • Mingxuan Liu, Siqi Li, Han Yuan, Marcus Eng Hock Ong, Yilin Ning, Feng Xie, Seyed Ehsan Saffari, Victor Volovici, Bibhas Chakraborty, Nan Liu
We found that the best-performing model backbone differed across data types, as did the choice of imputation strategy.
1 code implementation • 18 Jul 2022 • Subhankar Roy, Mingxuan Liu, Zhun Zhong, Nicu Sebe, Elisa Ricci
We study the new task of class-incremental Novel Class Discovery (class-iNCD), which refers to the problem of discovering novel categories in an unlabelled data set by leveraging a pre-trained model that has been trained on a labelled data set containing disjoint yet related categories.
1 code implementation • 8 Jun 2022 • Mingxuan Liu, Yilin Ning, Han Yuan, Marcus Eng Hock Ong, Nan Liu
This study sought to investigate the effects of data imbalance on SHAP explanations for deep learning models, and to propose a strategy to mitigate these effects.
no code implementations • 24 Apr 2022 • Han Yuan, Mingxuan Liu, Lican Kang, Chenkui Miao, Ying Wu
In our empirical study on the MIMIC-III dataset, we show that the two core explanations, SHAP values and variable rankings, fluctuate when using different background datasets acquired through random sampling, indicating that users cannot unquestioningly trust a one-shot interpretation from SHAP.
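The background-dataset sensitivity this entry reports is easy to reproduce on a toy model. For a linear model, the exact SHAP value of feature i is `w_i * (x_i - E[background_i])`, so the explanation shifts whenever the background sample changes. The sketch below (function name, weights, and sample sizes are illustrative assumptions, not the paper's setup) explains the same instance against two random background samples and gets two different attributions:

```python
import random

def shap_linear(weights, x, background):
    # Exact SHAP values for a linear model: w_i * (x_i - mean of the
    # background for feature i). The mean depends on which background
    # sample was drawn, so the explanation is not unique.
    means = [sum(col) / len(col) for col in zip(*background)]
    return [w * (xi - m) for w, xi, m in zip(weights, x, means)]

random.seed(0)
population = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(10000)]
x = [1.0, 2.0]          # instance being explained
w = [3.0, 2.0]          # toy linear model weights

# Two background datasets drawn from the same population yield
# different one-shot explanations for the same instance.
bg_a = random.sample(population, 50)
bg_b = random.sample(population, 50)
print(shap_linear(w, x, bg_a))
print(shap_linear(w, x, bg_b))
```

Averaging explanations over many background draws, or reporting their spread, is one way to surface this variability instead of trusting a single sample.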