1 code implementation • NeurIPS 2023 • Zikai Xiao, Zihan Chen, Songshang Liu, Hualiang Wang, Yang Feng, Jin Hao, Joey Tianyi Zhou, Jian Wu, Howard Hao Yang, Zuozhu Liu
Data privacy and long-tailed distributions are the norm rather than the exception in many real-world tasks.
no code implementations • 5 Oct 2023 • Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuozhu Liu, Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, Haoji Hu
Quantization has emerged as a promising direction for model compression.
1 code implementation • NeurIPS 2023 • Jianhong Bai, Zuozhu Liu, Hualiang Wang, Ruizhe Chen, Lianrui Mu, Xiaomeng Li, Joey Tianyi Zhou, Yang Feng, Jian Wu, Haoji Hu
In this paper, we formally define a more realistic task as distribution-agnostic generalized category discovery (DA-GCD): generating fine-grained predictions for both close- and open-set classes in a long-tailed open-world setting.
no code implementations • 24 Aug 2023 • Siming Fu, Xiaoxuan He, Xinpeng Ding, Yuchen Cao, Hualiang Wang
A category prototype-guided mechanism for image-text matching drives the features of each class toward distinct, uniformly distributed category prototypes, which improves class boundaries in the feature space.
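The core idea can be sketched as follows. This is a minimal illustration with hypothetical names, not the paper's implementation: class prototypes are fixed at well-spread points on the unit sphere, and each feature is pulled toward the prototype of its class.

```python
import numpy as np

def make_prototypes(num_classes, dim, seed=0):
    # Hypothetical helper: random unit vectors as stand-ins for the
    # uniformly distributed category prototypes described in the paper.
    rng = np.random.default_rng(seed)
    p = rng.standard_normal((num_classes, dim))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def prototype_alignment_loss(features, labels, prototypes):
    # Pull each L2-normalized feature toward its class prototype:
    # loss = mean over the batch of (1 - cos(feature, prototype[label])).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos = np.sum(f * prototypes[labels], axis=1)
    return float(np.mean(1.0 - cos))
```

Because the prototypes are fixed and spread out, minimizing this loss separates the classes without the features of rare classes collapsing toward head classes.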
1 code implementation • ICCV 2023 • Hualiang Wang, Yi Li, Huifeng Yao, Xiaomeng Li
Subsequently, we introduce two loss functions, the image-text binary-opposite loss and the text semantic-opposite loss, which teach CLIPN to associate images with "no" prompts, thereby enabling it to identify unknown samples.
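At inference time, the "no" prompts give each class a second, opposing branch; an image is flagged as unknown when the "no" branch dominates across classes. A hedged sketch of this scoring step (the function and variable names are illustrative, and details may differ from the paper's released code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unknown_score(sim_yes, sim_no):
    # sim_yes[i, c]: similarity of image i to the standard prompt of class c.
    # sim_no[i, c]:  similarity of image i to the learned "no" prompt of class c.
    p_yes = softmax(sim_yes, axis=1)
    # Per-class probability that the "no" prompt wins over the standard one:
    p_no = np.exp(sim_no) / (np.exp(sim_yes) + np.exp(sim_no))
    # Expected "no" probability under the class posterior; high values
    # indicate the image belongs to none of the known classes.
    return np.sum(p_yes * p_no, axis=1)
```

Thresholding this score separates in-distribution images (low score) from unknown ones (high score).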
1 code implementation • 27 Jul 2023 • Marawan Elbatel, Hualiang Wang, Robert Martí, Huazhu Fu, Xiaomeng Li
Existing federated methods under highly imbalanced datasets primarily focus on optimizing a global model without incorporating the intra-class variations that can arise in medical imaging due to different populations, findings, and scanners.
2 code implementations • 8 Jun 2023 • Jianhong Bai, Zuozhu Liu, Hualiang Wang, Jin Hao, Yang Feng, Huanpeng Chu, Haoji Hu
Recent work shows that long-tailed learning performance can be boosted by sampling extra in-domain (ID) data for self-supervised training; however, large-scale ID data that can rebalance the minority classes are expensive to collect.
2 code implementations • 12 Apr 2023 • Yi Li, Hualiang Wang, Yiqun Duan, Xiaomeng Li
Contrastive Language-Image Pre-training (CLIP) is a powerful multimodal large vision model that has demonstrated significant benefits for downstream tasks, including many zero-shot learning and text-guided vision tasks.
Ranked #2 on Open Vocabulary Semantic Segmentation on COCO-Stuff-171 (mIoU metric)
no code implementations • 27 Sep 2022 • Yi Li, Huifeng Yao, Hualiang Wang, Xiaomeng Li
We call the proposed framework FreeSeg, as the masks are freely available from the raw feature maps of the pretrained model.
1 code implementation • 15 Sep 2022 • Yi Li, Hualiang Wang, Yiqun Duan, Hang Xu, Xiaomeng Li
For this problem, we propose the Explainable Contrastive Language-Image Pre-training (ECLIP), which corrects the explainability via the Masked Max Pooling.
1 code implementation • 22 Aug 2022 • Hualiang Wang, Siming Fu, Xiaoxuan He, Hangxiang Fang, Zuozhu Liu, Haoji Hu
To our knowledge, this is the first work to measure representation quality of classifiers and features from the perspective of distribution overlap coefficient.
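The overlap coefficient between two distributions is the shared area under their densities: 1 for identical distributions, 0 for disjoint ones. A generic histogram-based sketch of the quantity (the paper's exact estimator may differ):

```python
import numpy as np

def overlap_coefficient(a, b, bins=50):
    # Empirical overlap coefficient OVL = integral of min(f_a, f_b),
    # estimated from histograms of the two samples on a shared range.
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    fa, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    fb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return float(np.sum(np.minimum(fa, fb)) * width)
```

Applied to per-class feature (or logit) distributions, a low coefficient between classes indicates well-separated representations, which is the sense in which the paper uses it to measure representation quality.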
no code implementations • 30 Jun 2022 • Zihan Chen, Songshang Liu, Hualiang Wang, Howard H. Yang, Tony Q. S. Quek, Zuozhu Liu
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.