no code implementations • ECCV 2020 • Zhuo Su, Lan Xu, Zerong Zheng, Tao Yu, Yebin Liu, Lu Fang
To enable robust tracking, we incorporate both the initial model and various visual cues into a novel performance capture scheme with hybrid motion optimization and semantic volumetric fusion. This scheme captures challenging human motions in the monocular setting without a pre-scanned detailed template, and can re-initialize to recover from tracking failures and disappear-reappear scenarios.
no code implementations • 27 May 2024 • Ruizhi Shao, Youxin Pang, Zerong Zheng, Jingxiang Sun, Yebin Liu
We present a novel approach for generating high-quality, spatio-temporally coherent human videos from a single image under arbitrary viewpoints.
no code implementations • 12 May 2024 • Siyou Lin, Zhe Li, Zhaoqi Su, Zerong Zheng, Hongwen Zhang, Yebin Liu
In the single-layer reconstruction stage, we propose a series of geometric constraints to reconstruct smooth surfaces and simultaneously obtain the segmentation between body and clothing.
1 code implementation • 5 Dec 2023 • Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, Yebin Liu
Creating high-fidelity 3D head avatars has always been a research hotspot, but there remains a great challenge under lightweight sparse view setups.
1 code implementation • 27 Nov 2023 • Zhe Li, Yipengjing Sun, Zerong Zheng, Lizhen Wang, Shengping Zhang, Yebin Liu
To associate 3D Gaussians with the animatable avatar, we learn a parametric template from the input videos, and then parameterize the template on two canonical Gaussian maps (front and back), where each pixel represents a 3D Gaussian.
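To make the two-map parameterization concrete, here is a minimal PyTorch sketch; the map resolution, the stored Gaussian attributes, and all names are illustrative assumptions, not the paper's released code:

```python
import torch

# Hypothetical resolution of each canonical map; the paper's actual
# choice may differ.
H, W = 256, 256

def make_gaussian_map(height, width):
    """One 3D Gaussian per pixel: position, rotation (quaternion),
    scale, opacity and RGB color, all stored as learnable maps."""
    return torch.nn.ParameterDict({
        "position": torch.nn.Parameter(torch.zeros(height, width, 3)),
        "rotation": torch.nn.Parameter(
            torch.tensor([1.0, 0.0, 0.0, 0.0]).repeat(height, width, 1)),
        "log_scale": torch.nn.Parameter(torch.full((height, width, 3), -4.0)),
        "opacity": torch.nn.Parameter(torch.zeros(height, width, 1)),
        "color": torch.nn.Parameter(torch.zeros(height, width, 3)),
    })

front_map = make_gaussian_map(H, W)
back_map = make_gaussian_map(H, W)

# Flatten both maps into a single Gaussian cloud for rasterization.
positions = torch.cat([front_map["position"].reshape(-1, 3),
                       back_map["position"].reshape(-1, 3)], dim=0)
print(positions.shape)  # (2 * H * W, 3)
```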
no code implementations • ICCV 2023 • Siyou Lin, Boyao Zhou, Zerong Zheng, Hongwen Zhang, Yebin Liu
To achieve wrinkle-level as well as texture-level alignment, we present a novel coarse-to-fine two-stage method that leverages intrinsic manifold properties with two neural deformation fields, in the 3D space and the intrinsic space, respectively.
no code implementations • 31 May 2023 • Ruizhi Shao, Jingxiang Sun, Cheng Peng, Zerong Zheng, Boyao Zhou, Hongwen Zhang, Yebin Liu
We introduce Control4D, an innovative framework for editing dynamic 4D portraits using text instructions.
no code implementations • 8 May 2023 • Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data.
1 code implementation • 25 Apr 2023 • Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu
To this end, we present PoseVocab, a novel pose encoding method that encourages the network to discover the optimal pose embeddings for learning the dynamic human appearance.
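As a rough illustration of pose encoding via a learned vocabulary, the hypothetical sketch below interpolates per-joint key-pose embeddings by rotation similarity; the vocabulary size, distance measure, and softmax weighting are assumptions, not PoseVocab's exact formulation:

```python
import torch

class JointPoseVocab(torch.nn.Module):
    """Hypothetical per-joint pose vocabulary: K key rotations (as
    quaternions), each paired with a learnable embedding. A query
    rotation is encoded by distance-weighted interpolation of the
    key embeddings."""

    def __init__(self, num_keys=16, dim=32):
        super().__init__()
        keys = torch.randn(num_keys, 4)
        self.register_buffer("keys", keys / keys.norm(dim=-1, keepdim=True))
        self.embeddings = torch.nn.Parameter(torch.randn(num_keys, dim) * 0.01)

    def forward(self, quat):
        quat = quat / quat.norm(dim=-1, keepdim=True)
        # Similarity via absolute dot product (q and -q are the same rotation).
        sim = (self.keys @ quat.unsqueeze(-1)).squeeze(-1).abs()  # (..., K)
        weights = torch.softmax(sim / 0.1, dim=-1)
        return weights @ self.embeddings  # (..., dim)

vocab = JointPoseVocab()
code = vocab(torch.randn(4))
print(code.shape)  # torch.Size([32])
```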
no code implementations • CVPR 2023 • Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng, Han Huang, Yandong Guo, Yebin Liu
In this way, the clothing deformations are disentangled such that the pose-dependent wrinkles can be better learned and applied to unseen poses.
1 code implementation • 21 Nov 2022 • Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, Yebin Liu
The key to our solution is an efficient 4D tensor decomposition method that allows the dynamic scene to be directly represented as a 4D spatio-temporal tensor.
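One common way to realize such a decomposition is to approximate the 4D field by features sampled from axis-aligned 2D planes. The sketch below is a hypothetical, simplified illustration of that idea; the plane set, resolutions, and fusion-by-summation are assumptions, not the paper's exact factorization:

```python
import torch
import torch.nn.functional as F

class Planes4D(torch.nn.Module):
    """Plane-based 4D tensor decomposition sketch: the (x, y, z, t)
    feature field is approximated by features sampled from six
    axis-aligned 2D planes (xy, xz, yz, xt, yt, zt) and summed."""

    def __init__(self, res=64, dim=16):
        super().__init__()
        self.pairs = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
        self.planes = torch.nn.ParameterList([
            torch.nn.Parameter(torch.randn(1, dim, res, res) * 0.01)
            for _ in self.pairs])

    def forward(self, xyzt):  # xyzt in [-1, 1], shape (N, 4)
        feat = 0.0
        for (i, j), plane in zip(self.pairs, self.planes):
            uv = xyzt[:, [i, j]].view(1, -1, 1, 2)         # (1, N, 1, 2)
            sampled = F.grid_sample(plane, uv, align_corners=True)
            feat = feat + sampled.view(plane.shape[1], -1).t()  # (N, dim)
        return feat

field = Planes4D()
print(field(torch.rand(8, 4) * 2 - 1).shape)  # (8, 16)
```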
no code implementations • 16 Jul 2022 • Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu
At its core is a novel diffusion-based stereo module, which introduces diffusion models, a class of powerful generative models, into the iterative stereo matching network.
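For intuition, diffusion-based refinement can be pictured as a DDIM-style denoising loop over the disparity map, conditioned on an initial stereo estimate. The sketch below is a simplified, assumption-laden illustration (schedule, conditioning, and the stand-in network are all hypothetical), not the paper's module:

```python
import torch

def diffusion_refine(disparity_init, denoiser, num_steps=8):
    """Denoise a disparity map from pure noise, conditioning the
    denoiser on the initial stereo estimate at every step."""
    # Monotonically decreasing noise schedule (assumed, simplified).
    alpha_bar = torch.linspace(0.99, 0.01, num_steps)
    x = torch.randn_like(disparity_init)  # start from pure noise
    for t in reversed(range(num_steps)):
        a_t = alpha_bar[t]
        a_prev = alpha_bar[t - 1] if t > 0 else torch.tensor(1.0)
        eps = denoiser(x, disparity_init, t)            # predicted noise
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # clean estimate
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # DDIM step
    return x

# Stand-in "network" so the sketch runs end to end.
denoiser = lambda x, cond, t: x - cond
refined = diffusion_refine(torch.rand(1, 1, 64, 64), denoiser)
print(refined.shape)  # (1, 1, 64, 64)
```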
1 code implementation • 14 Jul 2022 • Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, Yebin Liu
We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing.
1 code implementation • 5 Jul 2022 • Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu
Then, given a monocular RGB video of this subject, our method integrates information from both the image observation and the avatar prior, and accordingly reconstructs high-fidelity 3D textured models with dynamic details regardless of the visibility.
no code implementations • 7 Apr 2022 • Yuemei Zhou, Tao Yu, Zerong Zheng, Ying Fu, Yebin Liu
Existing state-of-the-art novel view synthesis methods rely on either fairly accurate 3D geometry estimation or sampling of the entire space for neural volumetric rendering, both of which limit the overall efficiency.
no code implementations • CVPR 2022 • Zerong Zheng, Han Huang, Tao Yu, Hongwen Zhang, Yandong Guo, Yebin Liu
These local radiance fields not only leverage the flexibility of implicit representation in shape and appearance modeling, but also factorize cloth deformations into skeleton motions, node residual translations, and dynamic detail variations inside each individual radiance field.
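A minimal way to picture such a blend of local fields: each node carries a latent code and a local coordinate frame, and a query point mixes nearby nodes' predictions with Gaussian weights. The sketch below makes illustrative assumptions (a shared MLP, fixed radius, free-floating nodes) that differ from the paper's skeleton-driven setup:

```python
import torch

class LocalRadianceFields(torch.nn.Module):
    """A set of node centers, each with a latent code; a query point
    is expressed in each node's local frame and per-node outputs are
    blended with Gaussian weights."""

    def __init__(self, num_nodes=64, radius=0.1, dim=16):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_nodes, 3) * 0.5)
        self.radius = radius
        self.codes = torch.nn.Parameter(torch.randn(num_nodes, dim) * 0.01)
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 + dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 4))  # RGB + density

    def forward(self, x):  # x: (N, 3)
        local = x[:, None, :] - self.centers[None, :, :]     # (N, K, 3)
        w = torch.exp(-(local ** 2).sum(-1) / (2 * self.radius ** 2))
        w = w / (w.sum(-1, keepdim=True) + 1e-8)             # (N, K)
        codes = self.codes[None].expand(x.shape[0], -1, -1)  # (N, K, dim)
        out = self.mlp(torch.cat([local, codes], dim=-1))    # (N, K, 4)
        return (w[..., None] * out).sum(dim=1)               # (N, 4)

fields = LocalRadianceFields()
print(fields(torch.randn(5, 3)).shape)  # (5, 4)
```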
no code implementations • CVPR 2022 • Hao Zhao, Jinsong Zhang, Yu-Kun Lai, Zerong Zheng, Yingdi Xie, Yebin Liu, Kun Li
To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures.
no code implementations • 19 Dec 2021 • Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker
To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field.
no code implementations • CVPR 2021 • Tao Yu, Zerong Zheng, Kaiwen Guo, Pengpeng Liu, Qionghai Dai, Yebin Liu
Human volumetric capture is a long-standing topic in computer vision and computer graphics.
no code implementations • ICCV 2021 • Yang Zheng, Ruizhi Shao, Yuxiang Zhang, Tao Yu, Zerong Zheng, Qionghai Dai, Yebin Liu
We propose DeepMultiCap, a novel method for multi-person performance capture using sparse multi-view cameras.
no code implementations • CVPR 2021 • Zhe Li, Tao Yu, Zerong Zheng, Kaiwen Guo, Yebin Liu
By contributing a novel reconstruction framework that combines pose-guided keyframe selection with robust implicit surface fusion, our method fully exploits the advantages of both tracking-based and tracking-free inference methods, enabling high-fidelity reconstruction of dynamic surface details even in invisible regions.
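Pose-guided keyframe selection can be sketched as farthest-point sampling in pose space, so that the selected keyframes cover a diverse range of poses; the distance metric and greedy strategy below are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

def select_keyframes(poses, num_keyframes=8):
    """Greedily pick frames whose pose vectors are farthest from the
    already selected set (farthest-point sampling in pose space)."""
    poses = np.asarray(poses)  # (num_frames, num_joints * 3)
    selected = [0]             # seed with the first frame
    while len(selected) < num_keyframes:
        # Distance of every frame to its nearest selected keyframe.
        dists = np.min(
            np.linalg.norm(poses[:, None] - poses[selected][None], axis=-1),
            axis=1)
        selected.append(int(np.argmax(dists)))
    return selected

frames = np.random.rand(100, 72)  # e.g. 24 joints x 3 axis-angle params
print(select_keyframes(frames))
```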
no code implementations • 30 Nov 2020 • Xiaochen Zhao, Zerong Zheng, Chaonan Ji, Zhenyi Liu, Siyou Lin, Tao Yu, Jinli Suo, Yebin Liu
We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.
1 code implementation • CVPR 2021 • Zerong Zheng, Tao Yu, Qionghai Dai, Yebin Liu
Deep implicit functions (DIFs), as a kind of 3D shape representation, are becoming increasingly popular in the 3D vision community due to their compactness and strong representation power.
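For readers new to DIFs, the generic formulation is an MLP mapping a 3D query point plus a shape latent code to a signed distance (or occupancy). The sketch below shows only this generic form, with illustrative layer sizes; it does not reflect this entry's specific contribution on top of the representation:

```python
import torch

class DeepImplicitFunction(torch.nn.Module):
    """Generic DIF: an MLP that maps a 3D query point plus a shape
    latent code to a signed distance value."""

    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3 + latent_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, points, latent):
        # points: (N, 3), latent: (latent_dim,) shared by all points.
        z = latent[None].expand(points.shape[0], -1)
        return self.net(torch.cat([points, z], dim=-1))  # signed distance

f = DeepImplicitFunction()
sdf = f(torch.randn(10, 3), torch.randn(128))
print(sdf.shape)  # (10, 1)
```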
1 code implementation • 8 Jul 2020 • Zerong Zheng, Tao Yu, Yebin Liu, Qionghai Dai
To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
Ranked #2 on 3D Human Reconstruction on CAPE
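A PaMIR-style query can be sketched as concatenating a pixel-aligned image feature with a feature trilinearly sampled from a voxelized parametric body model, then predicting occupancy with an MLP. Everything below (names, shapes, the orthographic stand-in projection) is a hypothetical illustration, not the released code:

```python
import torch
import torch.nn.functional as F

def pamir_style_query(points, image_feat, body_volume, mlp, cam_project):
    """Occupancy query combining a pixel-aligned image feature with a
    volumetric feature sampled from a voxelized parametric body model."""
    # Pixel-aligned 2D feature: project points into the image plane.
    uv = cam_project(points)                     # (N, 2) in [-1, 1]
    img_f = F.grid_sample(image_feat, uv.view(1, -1, 1, 2),
                          align_corners=True)    # (1, C, N, 1)
    img_f = img_f.view(image_feat.shape[1], -1).t()   # (N, C)
    # Volumetric body-prior feature: sample the voxelized body model.
    grid = points.view(1, -1, 1, 1, 3)           # (1, N, 1, 1, 3)
    vol_f = F.grid_sample(body_volume, grid, align_corners=True)
    vol_f = vol_f.view(body_volume.shape[1], -1).t()  # (N, C')
    return torch.sigmoid(mlp(torch.cat([img_f, vol_f], dim=-1)))  # (N, 1)

# Toy usage with stand-in tensors and an orthographic projection.
mlp = torch.nn.Sequential(torch.nn.Linear(8 + 4, 32),
                          torch.nn.ReLU(), torch.nn.Linear(32, 1))
occ = pamir_style_query(
    torch.rand(16, 3) * 2 - 1,
    torch.randn(1, 8, 64, 64),       # 2D image feature map
    torch.randn(1, 4, 32, 32, 32),   # voxelized body prior
    mlp, lambda p: p[:, :2])         # orthographic projection
print(occ.shape)  # (16, 1)
```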
no code implementations • CVPR 2020 • Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu
In this paper, we propose an efficient method for robust 3D self-portraits using a single RGBD camera.
1 code implementation • ICCV 2019 • Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, Yebin Liu
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image.
no code implementations • CVPR 2019 • Tao Yu, Zerong Zheng, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Gerard Pons-Moll, Yebin Liu
This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera.
no code implementations • ECCV 2018 • Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, Yebin Liu
We propose a lightweight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).
no code implementations • CVPR 2018 • Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, Yebin Liu
We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance.