no code implementations • 13 May 2024 • Wenqi Dong, Bangbang Yang, Lin Ma, Xiao Liu, Liyuan Cui, Hujun Bao, Yuewen Ma, Zhaopeng Cui
As humans, we aspire to create media content that both follows our free will and remains readily controllable.
no code implementations • 15 Apr 2024 • Tong Wu, Jia-Mu Sun, Yu-Kun Lai, Yuewen Ma, Leif Kobbelt, Lin Gao
To address these issues, we introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
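As a rough illustration of the deferred-shading idea (not the authors' implementation), the NumPy sketch below composites per-Gaussian material attributes into per-pixel G-buffers during splatting and only applies lighting in a second pass, which is what makes the decoupled appearance editable after reconstruction; the attribute names, the front-to-back compositing setup, and the Lambertian light model are illustrative assumptions.

```python
# A minimal sketch (NumPy, not the authors' code) of deferred shading for
# splatting: composite material attributes (albedo, normal) into G-buffers,
# then shade the G-buffers in a second "deferred" pass.
import numpy as np

H, W, N = 4, 4, 3  # tiny image and three "splats" for demonstration

# Hypothetical per-Gaussian attributes after projection to screen space.
rng = np.random.default_rng(0)
alpha  = rng.uniform(0.3, 0.9, size=(N, H, W))   # per-pixel opacity of each splat
albedo = rng.uniform(0.0, 1.0, size=(N, 3))      # per-Gaussian albedo
normal = rng.normal(size=(N, 3))
normal /= np.linalg.norm(normal, axis=1, keepdims=True)

# --- Pass 1: alpha-composite attributes front-to-back into G-buffers. ---
g_albedo = np.zeros((H, W, 3))
g_normal = np.zeros((H, W, 3))
transmittance = np.ones((H, W))
for i in range(N):                      # splats assumed sorted front-to-back
    w = alpha[i] * transmittance        # compositing weight of this splat
    g_albedo += w[..., None] * albedo[i]
    g_normal += w[..., None] * normal[i]
    transmittance *= (1.0 - alpha[i])

g_normal /= np.linalg.norm(g_normal, axis=-1, keepdims=True) + 1e-8

# --- Pass 2: deferred shading on the G-buffers (simple Lambertian light). ---
light_dir = np.array([0.0, 0.0, 1.0])   # lighting stays editable after splatting
shading = np.clip(g_normal @ light_dir, 0.0, 1.0)
image = g_albedo * shading[..., None]
print(image.shape)  # (4, 4, 3)
```

Because lighting is applied only in the second pass, material or light edits never require re-running the splatting step.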
no code implementations • 19 Oct 2023 • Bangbang Yang, Wenqi Dong, Lin Ma, WenBo Hu, Xiao Liu, Zhaopeng Cui, Yuewen Ma
To ensure the generated textures are meaningful and aligned to the scene, we develop a novel coarse-to-fine panoramic texture generation approach with dual texture alignment, which considers both the geometry and the texture cues of the captured scenes.
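The toy sketch below (not the paper's pipeline) illustrates a generic coarse-to-fine texture loop with one half of such an alignment: a low-resolution panorama is repeatedly upsampled and refined, and at each level the result is pulled toward the texture already observed in the captured scene; geometry alignment would enter as conditioning of the generative refinement step, which is abstracted as a placeholder here, and all function names are hypothetical.

```python
# A toy coarse-to-fine texture loop with observation alignment (illustrative only).
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling, enough for a sketch."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def downsample_to(img, h, w):
    """Strided subsampling of a high-resolution map to the current level."""
    return img[::img.shape[0] // h, ::img.shape[1] // w][:h, :w]

def refine(pano, rng):
    """Placeholder for a generative refinement step (e.g. geometry-conditioned
    inpainting in a real pipeline); here it only perturbs the texture."""
    return np.clip(pano + 0.01 * rng.normal(size=pano.shape), 0.0, 1.0)

def coarse_to_fine(coarse, observed, mask, levels=3, blend=0.8, seed=0):
    rng = np.random.default_rng(seed)
    pano = coarse
    for _ in range(levels):
        pano = upsample2x(pano)
        pano = refine(pano, rng)
        h, w = pano.shape[:2]
        obs = downsample_to(observed, h, w)
        m = downsample_to(mask, h, w)[..., None]
        # Texture alignment: where the scene texture was actually captured
        # (mask == 1), pull the generated panorama toward the observation.
        pano = (1.0 - m) * pano + m * (blend * obs + (1.0 - blend) * pano)
    return pano

H, W = 8, 16
coarse   = np.random.default_rng(1).uniform(size=(H, W, 3))
observed = np.random.default_rng(2).uniform(size=(H * 8, W * 8, 3))   # captured texture
mask     = (np.random.default_rng(3).uniform(size=(H * 8, W * 8)) > 0.5).astype(float)
print(coarse_to_fine(coarse, observed, mask).shape)  # (64, 128, 3)
```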
no code implementations • ICCV 2023 • WenBo Hu, Yuling Wang, Lin Ma, Bangbang Yang, Lin Gao, Xiao Liu, Yuewen Ma
Despite the tremendous progress in neural radiance fields (NeRF), we still face a trade-off between quality and efficiency: Mip-NeRF produces fine-detailed, anti-aliased renderings but takes days to train, while Instant-NGP can accomplish the reconstruction in a few minutes but suffers from blurring or aliasing when rendering at varying distances or resolutions because it ignores the sampling area.
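A small numeric sketch of the "sampling area" point (not the paper's formulation): each pixel corresponds to a cone rather than a ray, so the world-space footprint of a sample grows linearly with its distance from the camera, and area-aware methods query a multi-scale representation at the matching level of detail; the focal length, pixel size, and level mapping below are assumed values for illustration.

```python
# Why the sampling area matters: the footprint of a pixel's cone grows with
# distance, so far samples should be read from a coarser level of detail.
import numpy as np

def pixel_footprint_radius(t, focal, pixel_size=1.0):
    """World-space radius of a pixel's cone at distance t along the ray."""
    return t * (pixel_size / (2.0 * focal))

def mip_level(radius, base_radius):
    """Map a footprint radius to a (fractional) level of a multi-scale grid."""
    return np.maximum(0.0, np.log2(radius / base_radius))

focal = 800.0                          # focal length in pixels (assumed)
t = np.linspace(0.5, 50.0, 5)          # sample distances along the ray
r = pixel_footprint_radius(t, focal)
print(np.round(r, 5))                  # footprint grows linearly with distance
print(np.round(mip_level(r, r[0]), 2)) # farther samples -> coarser mip level
```

Ignoring this footprint and always querying the finest grid resolution is what produces the aliasing described above.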
no code implementations • CVPR 2022 • Yu-Jie Yuan, Yang-tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, Lin Gao
In this paper, we propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene and synthesizes novel-view images of the edited scene without retraining the network.
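The sketch below illustrates the general "edit without retraining" recipe under assumed placeholders: sample points on each camera ray in the edited space are warped back to the canonical space the radiance field was trained in before querying the frozen network, followed by standard volume rendering; the shear-based deformation and the analytic radiance field are stand-ins, not the paper's actual deformation model or trained MLP.

```python
# A minimal sketch (NumPy) of rendering an edited scene without retraining:
# bend the samples back to canonical space, then query the frozen field.
import numpy as np

def deform_to_canonical(x):
    """Hypothetical inverse deformation: undo a user edit (here, a shear)."""
    shear = np.array([[1.0, 0.3, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
    return x @ np.linalg.inv(shear).T

def radiance_field(x):
    """Stand-in for the trained NeRF: returns (rgb, density) per point."""
    rgb = 0.5 + 0.5 * np.sin(x)
    density = np.exp(-np.linalg.norm(x, axis=-1, keepdims=True))
    return rgb, density

# Sample points along one camera ray in the *edited* space...
origin, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
t = np.linspace(0.1, 4.0, 64)[:, None]
pts_edited = origin + t * direction

# ...bend them back to canonical space and query the unchanged network.
pts_canonical = deform_to_canonical(pts_edited)
rgb, sigma = radiance_field(pts_canonical)

# Standard volume rendering over the warped samples.
delta = np.diff(t, axis=0, append=t[-1:] + (t[1] - t[0]))
alpha = 1.0 - np.exp(-sigma * delta)
T = np.cumprod(np.concatenate([np.ones((1, 1)), 1.0 - alpha[:-1]]), axis=0)
color = np.sum(T * alpha * rgb, axis=0)
print(color)  # rendered pixel colour of the edited scene
```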