no code implementations • 3 Feb 2024 • Yongwei Nie, Changzhen Liu, Chengjiang Long, Qing Zhang, Guiqing Li, Hongmin Cai
We tackle the problem of single-image Human Mesh Recovery (HMR).
no code implementations • 25 Jan 2024 • Yongwei Nie, Mingxian Fan, Chengjiang Long, Qing Zhang, Jian Zhu, Xuemiao Xu
(2) We devise a dual-network architecture to realize the novel training paradigm: it is composed of a main regression network and an auxiliary network, which allows us to formulate the exemplar optimization loss function in the same form as the training loss function.
no code implementations • 24 Jan 2024 • Yongwei Nie, Hao Huang, Chengjiang Long, Qing Zhang, Pradipta Maji, Hongmin Cai
In previous work, the two models are tightly entangled with each other, making it unclear how to upgrade either one without significantly modifying the training framework.
no code implementations • 10 Dec 2023 • Wenju Xu, Chengjiang Long, Yongwei Nie, Guanghui Wang
Unlike existing works that leverage semantic masks to obtain the representation of each component, we propose to generate disentangled latent codes via a novel transformer-based attribute encoder trained with curriculum learning, progressing from relatively easy steps to gradually harder ones.
2 code implementations • 23 Oct 2023 • Maomao Li, Ge Yuan, Cairong Wang, Zhian Liu, Yong Zhang, Yongwei Nie, Jue Wang, Dong Xu
Based on this disentanglement, face swapping can be simplified as style and mask swapping.
no code implementations • 11 May 2023 • Qing Zhang, Hao Jiang, Yongwei Nie, Wei-Shi Zheng
We present a simple but effective technique to smooth out textures while preserving the prominent structures.
no code implementations • CVPR 2023 • Wenju Xu, Chengjiang Long, Yongwei Nie
Arbitrary style transfer has been demonstrated to be effective for artistic image generation.
no code implementations • CVPR 2023 • Zhian Liu, Maomao Li, Yong Zhang, Cairong Wang, Qi Zhang, Jue Wang, Yongwei Nie
We rethink face swapping from the perspective of fine-grained face editing, i.e., "editing for swapping" (E4S), and propose a framework that is based on the explicit disentanglement of the shape and texture of facial components.
2 code implementations • 15 Jul 2022 • Lingwei Dang, Yongwei Nie, Chengjiang Long, Qing Zhang, Guiqing Li
In this paper, we propose a novel sampling strategy for sampling very diverse results from an imbalanced multimodal distribution learned by a deep generative model.
Ranked #2 on Human Pose Forecasting on HumanEva-I
1 code implementation • CVPR 2022 • Tiezheng Ma, Yongwei Nie, Chengjiang Long, Qing Zhang, Guiqing Li
This motivates us to propose a novel two-stage prediction framework, consisting of an init-prediction network that first computes a good guess, and a formal-prediction network that then predicts the target future poses based on that guess.
Ranked #5 on Human Pose Forecasting on Human3.6M
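The two-stage idea above can be sketched as a simple pipeline: a coarse network produces an initial guess of the future poses, and a second network refines it. This is a minimal illustration of the interface only; the placeholder functions below (repeating the last pose, adding a zero residual) stand in for the paper's actual learned networks, and all names are hypothetical.

```python
import numpy as np

def init_prediction(past_poses, horizon):
    """Coarse guess: repeat the last observed pose for every future frame.
    (A stand-in for the paper's learned init-prediction network.)"""
    return np.repeat(past_poses[-1:], horizon, axis=0)

def formal_prediction(past_poses, guess):
    """Refine the guess toward the target future poses.
    Here the learned refinement is replaced by a zero residual."""
    residual = np.zeros_like(guess)  # a real network would predict this
    return guess + residual

# 10 observed frames of a skeleton with 17 joints in 3D
past = np.random.default_rng(0).normal(size=(10, 17, 3))
guess = init_prediction(past, horizon=25)
future = formal_prediction(past, guess)
print(future.shape)  # (25, 17, 3)
```

Structuring the prediction as guess-then-refine lets the second network focus on residual motion rather than absolute poses.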
1 code implementation • ICCV 2021 • Zhian Liu, Yongwei Nie, Chengjiang Long, Qing Zhang, Guiqing Li
In this paper, we propose HF²-VAD, a Hybrid framework that integrates Flow reconstruction and Frame prediction seamlessly to handle Video Anomaly Detection.
Ranked #1 on Video Anomaly Detection on ShanghaiTech Campus
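A hybrid detector of this kind ultimately needs to fuse two error signals into one anomaly score. The sketch below shows one simple way to do that, using mean squared errors and a weighted sum; the weighting scheme and error measures here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def anomaly_score(flow, flow_recon, frame, frame_pred,
                  w_flow=1.0, w_frame=1.0):
    """Fuse the flow-reconstruction error and the frame-prediction error
    into a single per-clip anomaly score (higher = more anomalous)."""
    e_flow = np.mean((flow - flow_recon) ** 2)    # how badly the flow reconstructs
    e_frame = np.mean((frame - frame_pred) ** 2)  # how badly the next frame predicts
    return w_flow * e_flow + w_frame * e_frame

flow = np.zeros((2, 8, 8))        # toy optical flow
frame = np.full((8, 8, 3), 0.5)   # toy video frame
# Normal clip: both models fit well, score is 0
print(anomaly_score(flow, flow, frame, frame))        # 0.0
# Anomalous clip: prediction misses, score grows
print(anomaly_score(flow, flow + 0.1, frame, frame))  # > 0
```

The intuition is that anomalous motion degrades both the flow reconstruction and the conditioned frame prediction, so either error term alone can trigger a detection.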
1 code implementation • ICCV 2021 • Lingwei Dang, Yongwei Nie, Chengjiang Long, Qing Zhang, Guiqing Li
The extracted features at each scale are then combined and decoded to obtain the residuals between the input and target poses.
Ranked #7 on Human Pose Forecasting on Human3.6M
2 code implementations • 30 Oct 2019 • Qing Zhang, Yongwei Nie, Wei-Shi Zheng
By performing dual illumination estimation, we obtain two intermediate exposure correction results for the input image, one fixing the underexposed regions and the other restoring the overexposed regions.
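The dual estimation can be sketched as follows: correct underexposure on the image itself, correct overexposure by running the same correction on the inverted image, then blend the two results. Everything below is a deliberately crude stand-in, with a per-pixel channel maximum in place of a proper edge-preserving illumination estimate and a constant blend weight in place of multi-exposure fusion.

```python
import numpy as np

def simple_illumination(img, eps=1e-3):
    """Crude illumination estimate: per-pixel max over color channels.
    (A stand-in for an edge-preserving illumination estimation.)"""
    return np.clip(img.max(axis=-1, keepdims=True), eps, 1.0)

def retinex_correct(img):
    """Retinex-style recovery R = I / L, which brightens dark regions."""
    return np.clip(img / simple_illumination(img), 0.0, 1.0)

def dual_exposure_correction(img, w=0.5):
    """Fix underexposure on the image and overexposure on its inverse,
    then blend; a constant weight replaces multi-exposure fusion here."""
    under_fixed = retinex_correct(img)             # brightens dark regions
    over_fixed = 1.0 - retinex_correct(1.0 - img)  # darkens bright regions
    return np.clip(w * under_fixed + (1.0 - w) * over_fixed, 0.0, 1.0)

img = np.full((4, 4, 3), 0.2)  # a dark gray image in [0, 1]
out = dual_exposure_correction(img)
print(out.shape)  # (4, 4, 3)
```

Inverting the image turns overexposed regions into underexposed ones, so a single underexposure-correction routine can serve both cases.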
no code implementations • 25 Jul 2019 • Qing Zhang, Yongwei Nie, Lei Zhu, Chunxia Xiao, Wei-Shi Zheng
To obtain high-quality results free of these artifacts, we present a novel underexposed photo enhancement approach that is able to maintain the perceptual consistency.
no code implementations • 20 Jun 2019 • Qiuxia Lai, Salman Khan, Yongwei Nie, Jianbing Shen, Hanqiu Sun, Ling Shao
Using three example computer vision tasks, diverse representative backbones and well-known architectures, corresponding real human gaze data, and systematically conducted large-scale quantitative studies, we quantify the consistency between artificial attention and human visual attention, and offer novel insights into existing artificial attention mechanisms by giving preliminary answers to several key questions about human and artificial attention.
no code implementations • 5 Mar 2017 • Yongwei Nie, Xu Cao, Chengjiang Long, Ping Li, Guiqing Li
Current face alignment algorithms can robustly find a set of landmarks along the face contour.