Search Results for author: Xingyu Miao

Found 6 papers, 3 papers with code

ExactDreamer: High-Fidelity Text-to-3D Content Creation via Exact Score Matching

no code implementations • 24 May 2024 • Yumin Zhang, Xingyu Miao, Haoran Duan, Bo Wei, Tejal Shah, Yang Long, Rajiv Ranjan

Furthermore, to effectively capture the dynamic changes of the original and auxiliary variables, these exact paths are implemented with the LoRA of a pre-trained diffusion model.
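The snippet above mentions implementing these paths through the LoRA of a pre-trained diffusion model. As a hedged illustration of that general mechanism only (not ExactDreamer's actual implementation), here is a minimal sketch of a LoRA adapter wrapped around a frozen linear layer, of the kind typically injected into a diffusion U-Net's attention projections; the layer size, rank, and scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pre-trained linear layer with a trainable low-rank update.

    Only the small down/up matrices are trained, so the adapter can cheaply
    track an auxiliary path while the base diffusion weights stay fixed.
    Rank and scale here are illustrative choices, not the paper's settings.
    """

    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze the pre-trained weights
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # start as an identity update
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

# Usage: stand-in for wrapping an attention projection of a pre-trained U-Net
proj = nn.Linear(320, 320)                   # placeholder pre-trained layer
lora_proj = LoRALinear(proj, rank=4)
out = lora_proj(torch.randn(2, 77, 320))     # (batch, tokens, dim)
```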

Dreamer XL: Towards High-Resolution Text-to-3D Generation via Trajectory Score Matching

1 code implementation • 18 May 2024 • Xingyu Miao, Haoran Duan, Varun Ojha, Jun Song, Tejal Shah, Yang Long, Rajiv Ranjan

In this work, we propose a novel Trajectory Score Matching (TSM) method that aims to solve the pseudo-ground-truth inconsistency problem caused by the error accumulated in Interval Score Matching (ISM) when using the Denoising Diffusion Implicit Models (DDIM) inversion process (see the DDIM inversion sketch below).

3D Generation • Denoising • +1
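For context on the DDIM inversion process the TSM abstract refers to, below is a minimal sketch of deterministic DDIM inversion; it is not the paper's TSM method, and `eps_model`, the schedule, and the timesteps are placeholder assumptions. The noise-prediction error that accumulates along this inversion trajectory is the inconsistency the abstract describes.

```python
import torch

@torch.no_grad()
def ddim_invert(x0, eps_model, alphas_cumprod, timesteps):
    """Deterministic DDIM inversion: map a clean latent x0 to progressively
    noisier latents by running the DDIM update in reverse.

    eps_model(x, t) is assumed to return the predicted noise; alphas_cumprod
    is the schedule's cumulative product of alphas (a 1-D tensor).
    """
    x = x0
    traj = [x0]
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]
        eps = eps_model(x, t_cur)
        # Reconstruct the clean sample implied by the current noise estimate,
        # then re-noise it to the next (higher) noise level.
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
        traj.append(x)
    return traj

# Toy usage with a stand-in noise predictor and a linear schedule
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
eps_model = lambda x, t: torch.zeros_like(x)      # placeholder predictor
traj = ddim_invert(torch.randn(1, 4, 64, 64), eps_model, alphas_cumprod,
                   timesteps=list(range(0, 1000, 50)))
```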

Sentinel-Guided Zero-Shot Learning: A Collaborative Paradigm without Real Data Exposure

no code implementations • 14 Mar 2024 • Fan Wan, Xingyu Miao, Haoran Duan, Jingjing Deng, Rui Gao, Yang Long

With increasing concerns over data privacy and model copyrights, especially in the context of collaborations between AI service providers and data owners, this work proposes an innovative SG-ZSL paradigm.

Zero-Shot Learning

ConRF: Zero-shot Stylization of 3D Scenes with Conditioned Radiation Fields

1 code implementation • 2 Feb 2024 • Xingyu Miao, Yang Bai, Haoran Duan, Fan Wan, Yawen Huang, Yang Long, Yefeng Zheng

Most existing works on arbitrary 3D NeRF style transfer require retraining for each individual style condition.

Style Transfer

CTNeRF: Cross-Time Transformer for Dynamic Neural Radiance Field from Monocular Video

no code implementations • 10 Jan 2024 • Xingyu Miao, Yang Bai, Haoran Duan, Yawen Huang, Fan Wan, Yang Long, Yefeng Zheng

The goal of our work is to generate high-quality novel views from monocular videos of complex and dynamic scenes.

DS-Depth: Dynamic and Static Depth Estimation via a Fusion Cost Volume

1 code implementation • 14 Aug 2023 • Xingyu Miao, Yang Bai, Haoran Duan, Yawen Huang, Fan Wan, Xinxing Xu, Yang Long, Yefeng Zheng

Nevertheless, the dynamic cost volume inevitably introduces extra occlusions and noise; we alleviate this by designing a fusion module that makes the static and dynamic cost volumes compensate for each other (see the fusion sketch below).

Monocular Depth Estimation • Optical Flow Estimation • +1
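The DS-Depth abstract describes a fusion module that lets static and dynamic cost volumes compensate for each other. As one plausible, hedged sketch of such a fusion (not the paper's actual module), the example below blends the two volumes with a learned per-pixel weight; the channel sizes and the small mask network are assumptions.

```python
import torch
import torch.nn as nn

class CostVolumeFusion(nn.Module):
    """Blend a static (rigid-motion) and a dynamic (flow-based) cost volume
    with a learned per-pixel weight, so regions where one volume is noisy or
    occluded can lean on the other. Channel sizes are illustrative.
    """

    def __init__(self, depth_bins: int = 64):
        super().__init__()
        # Predict a soft selection mask from both volumes.
        self.mask_net = nn.Sequential(
            nn.Conv2d(2 * depth_bins, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, static_cv: torch.Tensor, dynamic_cv: torch.Tensor):
        # static_cv, dynamic_cv: (B, depth_bins, H, W) matching-cost volumes
        w = self.mask_net(torch.cat([static_cv, dynamic_cv], dim=1))
        fused = w * static_cv + (1.0 - w) * dynamic_cv
        return fused, w

# Toy usage
fusion = CostVolumeFusion(depth_bins=64)
fused, weight = fusion(torch.rand(2, 64, 96, 320), torch.rand(2, 64, 96, 320))
```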
