1 code implementation • 21 Mar 2024 • Weipeng Deng, Runyu Ding, Jihan Yang, Jiahui Liu, Yijiang Li, Xiaojuan Qi, Edith Ngai
To test the language understanding of 3D-VL models, we first propose a language robustness task that systematically assesses 3D-VL models across various tasks, benchmarking their performance when presented with different language style variants.
no code implementations • 15 Mar 2024 • Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
In this work, we propose a fine-tuning approach to enhance the robustness of vision transformers, inspired by the concept of the nullspace from linear algebra.
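The linear-algebra intuition behind the nullspace idea can be illustrated with a minimal sketch (not the paper's method): any perturbation lying in the nullspace of a linear map leaves that map's output unchanged, so inputs perturbed along nullspace directions are "free" from the layer's point of view.

```python
import numpy as np

# Illustrative sketch only: for a linear map W, perturbations in the
# nullspace of W do not change the output W @ x at all.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # wide matrix, so the nullspace is nontrivial

# A basis for the nullspace: rows of Vt beyond rank(W) from the SVD.
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[np.linalg.matrix_rank(W):]  # shape (4, 8) here

x = rng.standard_normal(8)
n = null_basis.T @ rng.standard_normal(null_basis.shape[0])  # nullspace perturbation

assert np.allclose(W @ x, W @ (x + n))  # output is invariant to n
```

For a nonlinear network no exact nullspace exists, which is why an approach in this spirit would have to approximate such invariant directions during fine-tuning; the sketch above only demonstrates the exact linear case.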
no code implementations • 15 Mar 2024 • Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang
Extensive experiments show that our method not only outperforms standard adversarial training in both accuracy and robustness with less computational overhead, but also generates robust distilled datasets that can withstand various adversarial attacks.
no code implementations • 8 Mar 2024 • Yijiang Li, Sucheng Ren, Weipeng Deng, Yuzhi Xu, Ying Gao, Edith Ngai, Haohan Wang
Starting with the class of interest, we query the LLMs to extract relevant knowledge for these novel domains.
1 code implementation • 28 Feb 2024 • Guangji Bai, Yijiang Li, Chen Ling, Kibaek Kim, Liang Zhao
The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands.
no code implementations • 15 Feb 2024 • Haoyang Liu, Yijiang Li, Jinglin Jian, Yuxuan Cheng, Jianrong Lu, Shuyi Guo, Jinglei Zhu, Mianchen Zhang, Miantong Zhang, Haohan Wang
For instance, it has facilitated the identification of disease-predictive genes from gene expression data, significantly advancing healthcare.
no code implementations • 30 Nov 2023 • Haoyang Liu, Yijiang Li, Tiancheng Xing, Vibhu Dalal, Luwei Li, Jingrui He, Haohan Wang
Dataset Distillation (DD) has emerged as a powerful strategy for encapsulating the expansive information of large datasets into significantly smaller, synthetic equivalents, preserving model performance with reduced computational overhead.
no code implementations • 26 Nov 2023 • Jixuan Leng, Yijiang Li, Haohan Wang
SCMD leverages the capabilities of large vision-language models, specifically CLIP, to train a more efficient model, ensuring it acquires robust generalization capabilities across unseen domains.
no code implementations • 1 Oct 2023 • Yijiang Li, Ying Gao, Haohan Wang
We investigate robustness and security issues in a novel and practical setting: a group of malicious clients influences the model during training while disguising their identities and acting as benign clients, then reveals its adversarial position only after training, mounting transferable adversarial attacks with its own data, which is typically a subset of the data the FL system was trained on.
1 code implementation • ICCV 2023 • Yijiang Li, Xinjiang Wang, Lihe Yang, Litong Feng, Wayne Zhang, Ying Gao
Deep co-training has been introduced to semi-supervised segmentation and achieves impressive results, yet few studies have explored the working mechanism behind it.
1 code implementation • ICCV 2023 • Siquan Huang, Yijiang Li, Chong Chen, Leyu Shi, Ying Gao
To evaluate the effectiveness of our approach, we conduct comprehensive experiments on different datasets under various attack settings, where our method achieves the best defensive performance.
1 code implementation • CVPR 2023 • Xinjiang Wang, Xingyi Yang, Shilong Zhang, Yijiang Li, Litong Feng, Shijie Fang, Chengqi Lyu, Kai Chen, Wayne Zhang
In this study, we dive deep into the inconsistency of pseudo targets in semi-supervised object detection (SSOD).
no code implementations • 20 Jun 2021 • Yijiang Li, Wentian Cai, Ying Gao, Chengming Li, Xiping Hu
Local, detailed features from shallower layers, such as boundaries and tissue texture, are particularly important in medical segmentation compared with natural image segmentation.