Search Results for author: Shengqu Cai

Found 5 papers, 2 papers with code

Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control

no code implementations • 27 May 2024 • Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, Gordon Wetzstein

Research on video generation has recently made tremendous progress, enabling high-quality videos to be generated from text prompts or images.

Scene Generation • Video Generation +1

Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models

no code implementations • 3 Dec 2023 • Shengqu Cai, Duygu Ceylan, Matheus Gadelha, Chun-Hao Paul Huang, Tuanfeng Yang Wang, Gordon Wetzstein

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path.

Text-to-Image Generation • Video Generation

Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation

2 code implementations • 26 Feb 2022 • Shengqu Cai, Anton Obukhov, Dengxin Dai, Luc van Gool

We propose a pipeline to generate Neural Radiance Fields (NeRF) of an object or a scene of a specific class, conditioned on a single input image.

3D-Aware Image Synthesis • Novel View Synthesis +2

Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation

1 code implementation • CVPR 2022 • Shengqu Cai, Anton Obukhov, Dengxin Dai, Luc van Gool

We propose a pipeline to generate Neural Radiance Fields (NeRF) of an object or a scene of a specific class, conditioned on a single input image.

3D-Aware Image Synthesis • Novel View Synthesis +2