no code implementations • 2 Apr 2024 • Sicheng Li, Hao Li, Yiyi Liao, Lu Yu
The emergence of Neural Radiance Fields (NeRF) has greatly impacted 3D scene modeling and novel-view synthesis.
no code implementations • 27 Mar 2024 • Sicheng Li, Keqiang Sun, Zhixin Lai, Xiaoshi Wu, Feng Qiu, Haoran Xie, Kazunori Miyata, Hongsheng Li
Second, to overcome the issue of limited conditional supervision, we introduce a Diffusion Consistency Loss (DCL), which applies supervision on the denoised latent code at any given time step.
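A minimal sketch of how such a consistency-style loss could supervise the denoised latent at an arbitrary timestep, assuming a standard DDPM noise schedule; the function and argument names are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: recover an x0 estimate from the noise prediction at timestep t
# and apply supervision to that denoised latent, rather than only to the noise.
import torch
import torch.nn.functional as F

def diffusion_consistency_loss(eps_pred, x_t, t, alpha_bar, x0_target):
    """eps_pred: predicted noise at timestep t, shape (B, C, H, W)
    x_t: noisy latent at timestep t
    t: integer timesteps, shape (B,)
    alpha_bar: cumulative noise schedule, shape (T,)
    x0_target: clean latent code used as the supervision target
    """
    a = alpha_bar[t].view(-1, 1, 1, 1)
    # Standard DDPM estimate of the clean latent from the predicted noise.
    x0_hat = (x_t - torch.sqrt(1.0 - a) * eps_pred) / torch.sqrt(a)
    # Supervise the denoised latent at this (arbitrary) timestep.
    return F.mse_loss(x0_hat, x0_target)
```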
no code implementations • 17 Oct 2023 • Jun Wu, Sicheng Li, Sihui Ji, Yue Wang, Rong Xiong, Yiyi Liao
Decomposing a target object from a complex background while reconstructing it is challenging.
no code implementations • CVPR 2023 • Sicheng Li, Hao Li, Yue Wang, Yiyi Liao, Lu Yu
Neural Radiance Fields (NeRF) have demonstrated superior novel view synthesis performance but are slow at rendering.
1 code implementation • 27 Jun 2022 • Zhan Chen, Sicheng Li, Bing Yang, Qinghan Li, Hong Liu
To solve this problem, we present a multi-scale spatial graph convolution (MS-GC) module and a multi-scale temporal graph convolution (MT-GC) module to enlarge the model's receptive field in the spatial and temporal dimensions.
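The following is an illustrative sketch, not the paper's exact MS-GC/MT-GC design: the spatial receptive field is enlarged by aggregating over powers of the skeleton adjacency matrix, and the temporal receptive field by parallel convolutions with increasing dilation; all module and parameter names are assumed.

```python
import torch
import torch.nn as nn

class MultiScaleSpatialGC(nn.Module):
    def __init__(self, in_ch, out_ch, adj, num_scales=3):
        super().__init__()
        # Precompute adjacency powers A^1 ... A^k for k-hop aggregation
        # (normalization omitted for brevity).
        self.register_buffer("adjs", torch.stack(
            [torch.linalg.matrix_power(adj, k + 1) for k in range(num_scales)]))
        self.proj = nn.Conv2d(in_ch * num_scales, out_ch, kernel_size=1)

    def forward(self, x):  # x: (N, C, T, V) skeleton features
        feats = [torch.einsum("nctv,vw->nctw", x, a) for a in self.adjs]
        return self.proj(torch.cat(feats, dim=1))

class MultiScaleTemporalGC(nn.Module):
    def __init__(self, ch, dilations=(1, 2, 3)):
        super().__init__()
        # Parallel temporal convolutions with different dilations; padding keeps T fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, kernel_size=(3, 1), padding=(d, 0), dilation=(d, 1))
            for d in dilations)

    def forward(self, x):  # summing the branches keeps the channel count unchanged
        return sum(b(x) for b in self.branches)
```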
no code implementations • 21 Dec 2021 • Zhongzhi Yu, Yonggan Fu, Sicheng Li, Chaojian Li, Yingyan Lin
ViTs are often too computationally expensive to fit onto real-world resource-constrained devices, due to (1) their complexity, which grows quadratically with the number of input tokens, and (2) their overparameterized self-attention heads and model depth.
1 code implementation • NeurIPS 2020 • Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks (DNNs).
1 code implementation • 7 Aug 2020 • Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, Hai Li
Rather than learning a shared global model as in classic federated learning, each client learns a personalized model via LotteryFL; the communication cost can be significantly reduced thanks to the compact size of the lottery networks.
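A hedged sketch of the lottery-ticket idea behind this setup (the pruning rate, function names, and sparse packing are illustrative assumptions, not LotteryFL's actual code): each client keeps a binary mask of its largest-magnitude weights, and only the masked, compact parameters need to be communicated.

```python
import torch

def magnitude_mask(model, keep_ratio=0.2):
    """Return a per-parameter binary mask keeping the top `keep_ratio` weights."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(keep_ratio * p.numel()))
        threshold = p.detach().abs().flatten().topk(k).values.min()
        masks[name] = (p.detach().abs() >= threshold).float()
    return masks

def sparse_update(model, masks):
    """Pack only the surviving weights for upload, shrinking communication cost."""
    return {name: (p.detach() * masks[name]).to_sparse()
            for name, p in model.named_parameters()}
```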
no code implementations • 27 May 2017 • Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen
Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones within which they resist the attack.
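To make "adversarial strength" concrete, a generic FGSM sketch is shown below; epsilon is the perturbation level, and sweeping it produces adversarial examples of increasing strength. This is a standard textbook attack, not the paper's specific procedure, and the clamp assumes inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """One signed-gradient step of size epsilon (the adversarial strength)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb in the direction that increases the loss, then keep pixels in [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```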