no code implementations • 26 Mar 2024 • Mingfu Liang, Jong-Chyi Su, Samuel Schulter, Sparsh Garg, Shiyu Zhao, Ying Wu, Manmohan Chandraker
This necessitates an expensive process of continuously curating and annotating data with significant human effort.
no code implementations • 23 Nov 2023 • Lei Fan, Mingfu Liang, Yunxuan Li, Gang Hua, Ying Wu
Active recognition enables robots to intelligently explore novel observations, thereby acquiring more information while circumventing undesired viewing conditions.
no code implementations • ICCV 2023 • Zhongzhan Huang, Mingfu Liang, Jinghui Qin, Shanshan Zhong, Liang Lin
The self-attention mechanism (SAM) is widely used in various fields of artificial intelligence and has successfully boosted the performance of different models.
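For context, a minimal sketch of the generic scaled dot-product self-attention that such works build on; the function and variable names below are illustrative assumptions, not the authors' implementation:

```python
# Minimal scaled dot-product self-attention (Vaswani et al. 2017) in NumPy.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                          # 5 tokens, 8-dim features
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)               # shape (5, 8)
```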
no code implementations • 25 Apr 2023 • Changhao Shi, Haomiao Ni, Kai Li, Shaobo Han, Mingfu Liang, Martin Renqiang Min
We show that this paradigm based on latent classifier guidance is agnostic to pre-trained generative models, and present competitive results for both image generation and sequential manipulation of real and synthetic images.
no code implementations • 5 Feb 2023 • Zhongzhan Huang, Mingfu Liang, Liang Lin
With the development of deep learning techniques, AI-enhanced numerical solvers are expected to become a new paradigm for solving differential equations due to their versatility and effectiveness in alleviating the accuracy-speed trade-off in traditional numerical solvers.
no code implementations • 27 Oct 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Liang Lin
The self-attention mechanism has emerged as a critical component for improving the performance of various backbone neural networks.
no code implementations • 16 Jul 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang, Liang Lin
Recently, many plug-and-play self-attention modules (SAMs) have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).
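To make the "plug-and-play" idea concrete, here is a minimal sketch of one common SAM design, a Squeeze-and-Excitation-style channel-attention gate (Hu et al., 2018); it is an illustrative stand-in, not the module studied in this paper:

```python
# A channel-attention module that can be dropped after any conv block.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                # squeeze: global average pooling
        w = self.fc(w).view(*w.shape, 1, 1)   # excite: per-channel gates in (0, 1)
        return x * w                          # reweight the feature maps

feat = torch.randn(2, 64, 32, 32)
feat = ChannelAttention(64)(feat)             # plug-and-play: shapes unchanged
```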
no code implementations • 13 Jul 2021 • Zhongzhan Huang, Mingfu Liang, Senwei Liang, Wei He
Deep neural networks suffer from catastrophic forgetting when learning multiple tasks sequentially, and a growing number of approaches have been proposed to mitigate this problem.
no code implementations • 11 Jul 2021 • Wei He, Zhongzhan Huang, Mingfu Liang, Senwei Liang, Haizhao Yang
A filter can be important according to one criterion yet unnecessary according to another, which indicates that each criterion captures only a partial view of a filter's overall importance.
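A small sketch, under illustrative assumptions, of how two common importance criteria can rank the same filters differently: here, the L1 norm of each filter's weights versus the magnitude of its batch-norm scaling factor (as in Network Slimming). Neither is the fusion method this paper proposes:

```python
# Two pruning criteria scoring the same 8 filters, with possibly different rankings.
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3)
bn = torch.nn.BatchNorm2d(8)
with torch.no_grad():
    bn.weight.copy_(torch.rand(8))                # stand-in for trained BN scales

l1_score = conv.weight.abs().sum(dim=(1, 2, 3))   # criterion 1: weight magnitude
bn_score = bn.weight.abs()                        # criterion 2: BN scaling factor

print(l1_score.argsort(descending=True))          # ranking under criterion 1
print(bn_score.argsort(descending=True))          # ranking under criterion 2 can differ
```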
1 code implementation • 6 Jan 2021 • Wei He, Meiqing Wu, Mingfu Liang, Siew-Kei Lam
In this paper, we advocate the importance of contextual information during channel pruning for semantic segmentation networks by presenting a novel Context-aware Pruning framework.
1 code implementation • 28 Nov 2020 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang
Recently, many plug-and-play self-attention modules have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).
2 code implementations • 12 Aug 2019 • Senwei Liang, Zhongzhan Huang, Mingfu Liang, Haizhao Yang
Batch Normalization (BN) (Ioffe and Szegedy, 2015) normalizes the features of an input image using the statistics of a batch of images; as a result, BN introduces noise into the gradient of the training loss.
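A minimal sketch of the batch dependence described above: in training mode, BN normalizes each sample with the current batch's statistics, so the same image yields different activations (and hence different gradients) depending on its batch mates. The setup below is illustrative, not the paper's experiment:

```python
# The same image normalized within two different batches gives different outputs.
import torch

bn = torch.nn.BatchNorm2d(3).train()        # training mode: uses batch statistics
img = torch.randn(1, 3, 8, 8)
batch_a = torch.cat([img, torch.randn(3, 3, 8, 8)])
batch_b = torch.cat([img, torch.randn(3, 3, 8, 8)])

out_a = bn(batch_a)[0]                      # img normalized with batch A statistics
out_b = bn(batch_b)[0]                      # same img, different batch statistics
print(torch.allclose(out_a, out_b))         # False: batch statistics inject noise
```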
3 code implementations • 25 May 2019 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Haizhao Yang
Attention networks have successfully boosted the performance in various vision problems.
Ranked #139 on Image Classification on CIFAR-100 (using extra training data)