1 code implementation • 28 May 2024 • Huiping Zhuang, Di Fang, Kai Tong, Yuchen Liu, Ziqian Zeng, Xu Zhou, Cen Chen
One of these scenarios can be formulated as an online continual learning (OCL) problem.
1 code implementation • 25 May 2024 • Huiping Zhuang, Run He, Kai Tong, Di Fang, Han Sun, Haoran Li, Tianyi Chen, Ziqian Zeng
In this paper, we introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) community.
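Analytic learning replaces iterative gradient descent with a one-shot linear solve. A minimal sketch of this idea in an FL-like setting, assuming each client contributes least-squares statistics that a server aggregates into a closed-form ridge-regression solution (the interfaces and aggregation scheme are illustrative, not the paper's exact protocol):

```python
import numpy as np

def local_stats(X, Y):
    """Per-client sufficient statistics for a linear classifier.
    X: (n, d) features, Y: (n, c) one-hot labels."""
    return X.T @ X, X.T @ Y

def closed_form_solve(stats, d, gamma=1.0):
    """Aggregate client statistics and solve ridge regression analytically:
    W = (sum(X^T X) + gamma * I)^{-1} sum(X^T Y)."""
    A = sum(s[0] for s in stats) + gamma * np.eye(d)
    B = sum(s[1] for s in stats)
    return np.linalg.solve(A, B)

# Toy usage: two clients with random data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 8)), np.eye(3)[rng.integers(0, 3, 50)])
           for _ in range(2)]
W = closed_form_solve([local_stats(X, Y) for X, Y in clients], d=8)
```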
no code implementations • 8 Apr 2024 • Weikai Lu, Ziqian Zeng, Jianwei Wang, Zhengdong Lu, Zelin Chen, Huiping Zhuang, Cen Chen
Jailbreaking attacks can enable Large Language Models (LLMs) to bypass their safeguards and generate harmful content.
1 code implementation • 26 Mar 2024 • Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Zhiping Lin
The compensation stream is governed by a Dual-Activation Compensation (DAC) module.
no code implementations • 23 Mar 2024 • Huiping Zhuang, Yuchen Liu, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Yi Wang, Lap-Pui Chau
Online Class Incremental Learning (OCIL) aims to train the model task by task, where data arrive one mini-batch at a time and previous data are not accessible.
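The OCIL protocol itself is simple to state in code. A minimal sketch, where `tasks` and `update_fn` are hypothetical interfaces standing in for any concrete method:

```python
def train_ocil(model, tasks, update_fn):
    """Online class-incremental protocol: tasks arrive in sequence, each
    as a stream of mini-batches, and every batch is seen exactly once.
    `tasks` is an iterable of batch iterators; `update_fn(model, batch)`
    performs one learning step (both hypothetical interfaces)."""
    for task_stream in tasks:        # task-by-task arrival
        for batch in task_stream:    # single pass; no revisiting
            update_fn(model, batch)  # earlier data remain inaccessible
    return model
```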
1 code implementation • 23 Mar 2024 • Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen
Generalized CIL (GCIL) addresses the CIL problem in a more realistic scenario, where incoming data contain mixed categories and an unknown sample-size distribution, which intensifies forgetting.
no code implementations • 20 Mar 2024 • Run He, Huiping Zhuang, Di Fang, Yizhu Chen, Kai Tong, Cen Chen
DS-BPT pretrains the model in two streams, supervised learning and self-supervised contrastive learning (SSCL), for base knowledge extraction.
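A dual-stream objective of this kind can be illustrated as a weighted sum of a supervised cross-entropy stream and a SimCLR-style contrastive stream; the pairing and weighting below are assumptions for illustration, not the exact DS-BPT formulation:

```python
import torch
import torch.nn.functional as F

def dual_stream_loss(feats, logits, labels, temperature=0.5, lam=1.0):
    """Illustrative dual-stream pretraining loss (not DS-BPT's exact
    form): supervised CE plus an NT-Xent contrastive term. `feats`
    stacks two augmented views as (2n, d); view i pairs with view i+n."""
    ce = F.cross_entropy(logits, labels)            # supervised stream
    z = F.normalize(feats, dim=-1)
    sim = z @ z.T / temperature
    sim.fill_diagonal_(float("-inf"))               # mask self-similarity
    n = z.shape[0] // 2
    pos = torch.arange(z.shape[0], device=z.device).roll(n)
    ntxent = F.cross_entropy(sim, pos)              # SSCL stream
    return ce + lam * ntxent
```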
1 code implementation • 24 Feb 2024 • Ziqian Zeng, Jiahong Yu, Qianshi Pang, ZiHao Wang, Huiping Zhuang, HongEn Shao, Xiaofeng Zou
Within this framework, we introduce a lightweight draft model that effectively utilizes previously generated tokens to predict subsequent words.
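Draft-and-verify decoding of this family can be sketched as follows; this is a generic greedy variant, assuming batch size 1 and HuggingFace-style causal LMs whose forward pass returns `.logits`, not the paper's exact algorithm:

```python
import torch

@torch.no_grad()
def draft_then_verify(target, draft, ids, k=4, steps=32):
    """Generic greedy draft-and-verify loop (illustrative): a small draft
    model proposes k tokens cheaply; the large target model accepts the
    longest prefix it agrees with in a single forward pass."""
    for _ in range(steps):
        n = ids.shape[1]
        proposal = ids
        for _ in range(k):  # cheap autoregressive drafting
            nxt = draft(proposal).logits[:, -1:].argmax(-1)
            proposal = torch.cat([proposal, nxt], dim=-1)
        # One target forward pass verifies all k drafted tokens at once.
        tgt = target(proposal).logits[:, n - 1:-1].argmax(-1)
        match = (tgt == proposal[:, n:]).long().cumprod(-1)
        agree = int(match.sum().item())       # accepted prefix length
        ids = proposal[:, : n + agree]
        if agree < k:                         # use the target's own token
            ids = torch.cat([ids, tgt[:, agree : agree + 1]], dim=-1)
    return ids
```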
no code implementations • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei
In this work, we propose a novel method -- Convex-Concave Loss -- which induces a high variance in the training-loss distribution under gradient descent.
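As a rough illustration of the convex-plus-concave idea (the exact loss in the paper may differ), cross-entropy can be combined with a term that is concave in the predicted probability of the true class:

```python
import torch
import torch.nn.functional as F

def convex_concave_loss(logits, targets, alpha=0.5):
    """Illustrative convex-plus-concave loss, NOT the paper's exact form:
    cross-entropy plus a term concave in p_y, intended to spread out the
    training-loss distribution and blunt membership-inference signals."""
    p_y = F.softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    ce = -torch.log(p_y.clamp_min(1e-12))   # convex component
    concave = alpha * p_y * (1.0 - p_y)     # concave in p_y on [0, 1]
    return (ce + concave).mean()
```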
1 code implementation • 19 Dec 2023 • Ziqian Zeng, Yihuai Hong, Hongliang Dai, Huiping Zhuang, Cen Chen
We propose ConsistentEE, an early-exiting method that is consistent between training and inference.
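For background, plain confidence-thresholded early exiting looks like the sketch below; ConsistentEE itself learns when to exit rather than using a fixed threshold, so this is the baseline setting, not the proposed method:

```python
import torch

@torch.no_grad()
def early_exit_forward(layers, heads, x, tau=0.9):
    """Confidence-thresholded early exiting (baseline sketch, batch size
    1): each intermediate classifier head may terminate computation once
    its top-class probability exceeds tau."""
    for layer, head in zip(layers, heads):
        x = layer(x)                      # run one backbone block
        conf, pred = head(x).softmax(-1).max(-1)
        if conf.item() >= tau:            # confident enough: exit early
            return pred
    return pred                           # fell through to the last head
```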
1 code implementation • CVPR 2023 • Huiping Zhuang, Zhenyu Weng, Run He, Zhiping Lin, Ziqian Zeng
In this paper, we approach few-shot class-incremental learning (FSCIL) by adopting analytic learning, a technique that converts network training into linear problems.
no code implementations • 8 Dec 2022 • Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.
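One well-known member of this design space is the generalized cross-entropy of Zhang & Sabuncu (2018), shown here as an example of a noise-robust loss rather than this paper's method:

```python
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    """Generalized cross-entropy: interpolates between standard CE
    (q -> 0) and the noise-robust MAE (q = 1)."""
    p_y = F.softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()
```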
1 code implementation • 30 May 2022 • Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, Zhiping Lin
Class-incremental learning (CIL) learns a classification model with training data of different classes arriving progressively.
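In the analytic-learning treatment of CIL, the classifier can be refined recursively as new classes arrive, without storing exemplars. A sketch of such a block recursive least-squares update on frozen-backbone features (illustrative, not the paper's exact derivation):

```python
import numpy as np

class AnalyticClassifier:
    """Exemplar-free linear classifier updated by block recursive least
    squares: each new chunk of data refines W without revisiting old
    samples (illustrative sketch)."""
    def __init__(self, d, c, gamma=1.0):
        self.R = np.eye(d) / gamma   # inverse of regularized Gram matrix
        self.W = np.zeros((d, c))

    def update(self, X, Y):
        """X: (n, d) frozen-backbone features, Y: (n, c) one-hot labels."""
        # Woodbury identity updates the inverse without old data.
        K = np.linalg.inv(np.eye(len(X)) + X @ self.R @ X.T)
        self.R -= self.R @ X.T @ K @ X @ self.R
        self.W += self.R @ X.T @ (Y - X @ self.W)
```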
no code implementations • 14 Feb 2022 • Huiping Zhuang, Zhiping Lin, Yimin Yang, Kar-Ann Toh
Training convolutional neural networks (CNNs) with back-propagation (BP) is time-consuming and resource-intensive, particularly because the dataset must be visited multiple times.
no code implementations • 3 Dec 2020 • Huiping Zhuang, Zhiping Lin, Kar-Ann Toh
Decoupled learning is a branch of model parallelism which parallelizes the training of a network by splitting it depth-wise into multiple modules.
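One simple way to realize depth-wise decoupling is to give each module a local auxiliary head and detach activations between modules, so no module waits for a global backward pass; this local-loss variant is a sketch of the general idea, not necessarily the mechanism studied in the paper:

```python
import torch.nn.functional as F

def decoupled_step(modules, heads, opts, x, y):
    """Depth-wise decoupled training sketch: activations passed forward
    are detached, so each module trains from its own auxiliary head and
    gradients never cross module boundaries (local-loss variant)."""
    for module, head, opt in zip(modules, heads, opts):
        x = module(x)                            # local forward
        loss = F.cross_entropy(head(x), y)       # local objective
        opt.zero_grad()
        loss.backward()                          # stays inside the module
        opt.step()
        x = x.detach()                           # block cross-module grads
    return x
```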
1 code implementation • 21 Jun 2019 • Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin
Training neural networks with back-propagation (BP) requires a sequential passing of activations and gradients, which forces the network modules to work in a synchronous fashion.