no code implementations • 1 Mar 2024 • Zeling Zhang, Dongqi Cai, Yiran Zhang, Mengwei Xu, Shangguang Wang, Ao Zhou
Communication overhead is a significant bottleneck in federated learning (FL), and it has been exacerbated by the increasing size of AI models.
1 code implementation • 16 Jan 2024 • Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, QiPeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, Qiyang Zhang, Zhenyan Lu, Li Zhang, Shangguang Wang, Yuanchun Li, Yunxin Liu, Xin Jin, Xuanzhe Liu
Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment.
1 code implementation • 28 Aug 2023 • Jinliang Yuan, Chen Yang, Dongqi Cai, Shihe Wang, Xin Yuan, Zeling Zhang, Xiang Li, Dingge Zhang, Hanzi Mei, Xianqing Jia, Shangguang Wang, Mengwei Xu
Concurrently, each app contributes a concise, offline fine-tuned "adapter" tailored to distinct downstream tasks.
1 code implementation • 26 Aug 2023 • Mengwei Xu, Dongqi Cai, Yaozong Wu, Xiang Li, Shangguang Wang
Federated Learning (FL), a method to preserve user data privacy, is often employed when fine-tuning LLMs for downstream mobile tasks, an approach known as FedLLM.
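FedLLM's exact protocol is not given in this snippet; as a rough illustration of the idea, the sketch below applies FedAvg-style weighted aggregation to clients' locally fine-tuned adapter parameters, keeping the base LLM frozen. All names and the flat-vector representation are illustrative assumptions, not the paper's method.

```python
# Illustrative FedAvg-style aggregation over adapter parameters.
# The base LLM stays frozen on-device; only small adapter weight
# vectors travel between clients and the server.

def fedavg(client_updates, client_sizes):
    """Weighted average of clients' adapter parameter vectors.

    client_updates: one flat parameter vector (list of floats) per
                    client, all of equal length.
    client_sizes:   number of local training examples per client,
                    used as aggregation weights.
    """
    total = sum(client_sizes)
    dim = len(client_updates[0])
    aggregated = [0.0] * dim
    for params, size in zip(client_updates, client_sizes):
        weight = size / total
        for i, p in enumerate(params):
            aggregated[i] += weight * p
    return aggregated

# Example: two clients with unequal data volumes (weights 0.25 and 0.75).
global_adapter = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# → [2.5, 3.5]
```

Weighting by local dataset size is the standard FedAvg choice; a real system would also batch, compress, or quantize these updates, since communication cost is the dominant bottleneck.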
1 code implementation • 15 Aug 2023 • Dongqi Cai, Yangyuxuan Kang, Anbang Yao, Yurong Chen
This paper presents Ske2Grid, a new representation learning framework for improved skeleton-based action recognition.
1 code implementation • 12 Dec 2022 • Dongqi Cai, Shangguang Wang, Yaozong Wu, Felix Xiaozhu Lin, Mengwei Xu
Such a shortage of data labels is known as a few-shot scenario, and it has become a key blocker for mobile NLP applications.
no code implementations • 1 Dec 2022 • Dongqi Cai, Yaozong Wu, Haitao Yuan, Shangguang Wang, Felix Xiaozhu Lin, Mengwei Xu
To address these challenges, we first introduce a data generator for federated few-shot learning tasks, which encompasses the quantity and skewness of scarce labeled data in a realistic setting.
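The paper's data generator is not specified in this snippet; the sketch below shows one generic way to induce the two properties it names, quantity (few labeled shots per client) and label skewness (each client sees only a subset of classes). The function name, parameters, and sampling scheme are all illustrative assumptions.

```python
import random

# Illustrative label-skewed few-shot partitioner: each client receives
# only `shots_per_client` labeled examples, drawn from a random subset
# of `classes_per_client` classes. Not the paper's exact generator.

def partition_few_shot(labels, n_clients, shots_per_client,
                       classes_per_client, seed=0):
    """Assign example indices to clients with quantity and label skew.

    labels: list of class labels, one entry per example index.
    Returns a list of index lists, one per client.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    all_classes = sorted(by_class)
    clients = []
    for _ in range(n_clients):
        chosen = rng.sample(all_classes, classes_per_client)
        pool = [i for c in chosen for i in by_class[c]]
        clients.append(rng.sample(pool, min(shots_per_client, len(pool))))
    return clients
```

A usage example: with 30 examples over 3 classes, `partition_few_shot(labels, n_clients=3, shots_per_client=4, classes_per_client=2)` gives each client 4 examples covering at most 2 classes, mimicking a realistically scarce and skewed labeling budget.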
1 code implementation • 20 May 2022 • Dongqi Cai, Yaozong Wu, Shangguang Wang, Felix Xiaozhu Lin, Mengwei Xu
A key challenge is to properly configure the depth and width of adapters, to which training speed and efficiency are highly sensitive.
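To make the depth/width trade-off concrete, the sketch below counts trainable parameters for a stack of standard bottleneck adapters (down-projection, nonlinearity, up-projection); parameter count is one driver of both training and communication cost. The function and the specific adapter layout are illustrative assumptions, not the paper's configuration method.

```python
# Hypothetical parameter-count model for stacked bottleneck adapters:
# each adapter layer has a down-projection (hidden -> width), an
# up-projection (width -> hidden), and biases. Illustrative only.

def adapter_param_count(hidden_dim, bottleneck_width, depth):
    """Trainable parameters for `depth` stacked bottleneck adapters."""
    down = hidden_dim * bottleneck_width + bottleneck_width  # W_down + b
    up = bottleneck_width * hidden_dim + hidden_dim          # W_up + b
    return depth * (down + up)

# Doubling the bottleneck width roughly doubles trainable parameters,
# which is why training speed is so sensitive to this knob:
small = adapter_param_count(hidden_dim=768, bottleneck_width=16, depth=2)
large = adapter_param_count(hidden_dim=768, bottleneck_width=32, depth=2)
# small = 50720, large = 99904
```

In a federated setting this count also bounds the per-round upload size, so width and depth choices couple model quality directly to communication cost.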
1 code implementation • NeurIPS 2021 • Dongqi Cai, Anbang Yao, Yurong Chen
In this paper, we present Dynamic Normalization and Relay (DNR), an improved normalization design that augments the spatial-temporal representation learning of any deep action recognition model and adapts to small-batch training settings.
no code implementations • CVPR 2018 • Zhou Su, Chen Zhu, Yinpeng Dong, Dongqi Cai, Yurong Chen, Jianguo Li
Second is a mechanism for handling multiple knowledge facts expanded from question-answer pairs.