Search Results for author: Hoseok Do

Found 3 papers, 0 papers with code

Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation

no code implementations • 7 May 2024 • Jihyun Kim, Changjae Oh, Hoseok Do, Soohyun Kim, Kwanghoon Sohn

To do this, we combine the strengths of Generative Adversarial Networks (GANs) and diffusion models (DMs) by mapping the multi-modal features of the DM into the latent space of a pre-trained GAN.

Image Generation
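
The abstract above only hints at the mechanism. Below is a minimal sketch of the core idea, under the assumption that multi-modal diffusion-model features are projected into the W latent space of a frozen, pre-trained GAN; the module names, dimensions, and generator call are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a learned mapper that
# projects a multi-modal diffusion-model feature vector into the W latent
# space of a frozen, pre-trained GAN. All dimensions are assumptions.
import torch
import torch.nn as nn

class FeatureToLatentMapper(nn.Module):
    """Maps a pooled DM feature vector (e.g., text/image conditioned) to a GAN w-latent."""
    def __init__(self, feat_dim: int = 1024, w_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, w_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(w_dim, w_dim),
        )

    def forward(self, dm_features: torch.Tensor) -> torch.Tensor:
        return self.net(dm_features)

mapper = FeatureToLatentMapper()
dm_features = torch.randn(4, 1024)        # stand-in for multi-modal DM features
w = mapper(dm_features)                   # (4, 512) latents for a frozen generator
# images = pretrained_gan.synthesis(w)    # hypothetical call into a frozen GAN
print(w.shape)
```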

Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields

no code implementations • ICCV 2023 • Hyeonseop Song, Seokhun Choi, Hoseok Do, Chul Lee, Taehyeong Kim

Text-driven localized editing of 3D objects is particularly difficult because locally mixing the original 3D object with the intended new object and style effects, without distorting the object's form, is not straightforward.

Object
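
One way to picture the localized-editing problem described above is as blending two radiance fields under a spatial mask. The sketch below assumes each field returns a (density, color) pair per 3D point and uses a hand-made mask in place of a learned localization; all function names here are hypothetical, not Blending-NeRF's API.

```python
# Minimal sketch of localized blending between two NeRF-style fields.
# `blend_weight` plays the role of a localization mask over 3D space.
import torch

def blend_fields(original, edited, blend_weight, points):
    """Blend two radiance fields only where blend_weight is high."""
    sigma_o, rgb_o = original(points)
    sigma_e, rgb_e = edited(points)
    m = blend_weight(points).clamp(0.0, 1.0)   # (N, 1) localization mask
    sigma = (1 - m) * sigma_o + m * sigma_e    # mix densities locally
    rgb = (1 - m) * rgb_o + m * rgb_e          # mix colors locally
    return sigma, rgb

# Toy stand-ins for the original field, edited field, and mask:
f_o = lambda p: (p.norm(dim=-1, keepdim=True), torch.sigmoid(p))
f_e = lambda p: (2 * p.norm(dim=-1, keepdim=True), torch.sigmoid(-p))
mask = lambda p: (p[:, :1] > 0).float()        # edit only the x > 0 half-space
pts = torch.randn(8, 3)
sigma, rgb = blend_fields(f_o, f_e, mask, pts)
print(sigma.shape, rgb.shape)
```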

Quantitative Manipulation of Custom Attributes on 3D-Aware Image Synthesis

no code implementations • CVPR 2023 • Hoseok Do, EunKyung Yoo, Taehyeong Kim, Chul Lee, Jin Young Choi

While 3D-based GAN techniques have been successfully applied to render photo-realistic 3D images with a variety of attributes while preserving view consistency, there has been little research on how to fine-control 3D images without being limited to a specific category of objects or their properties.

3D-Aware Image Synthesis • Attribute
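
The kind of fine control the abstract calls for is commonly illustrated as latent-space manipulation: shifting a latent code along an attribute direction by a user-chosen scalar strength. Below is a minimal sketch under that assumption; the direction vector and the render call are hypothetical stand-ins, not the paper's actual method.

```python
# Minimal sketch of quantitative attribute control in a GAN latent space:
# move a latent along a unit attribute direction by a chosen strength.
import torch

def manipulate(w: torch.Tensor, direction: torch.Tensor, strength: float) -> torch.Tensor:
    """Shift latent w along a normalized attribute direction by `strength`."""
    d = direction / direction.norm()
    return w + strength * d

w = torch.randn(1, 512)           # latent of a 3D-aware generator (assumed dim)
attr_dir = torch.randn(512)       # hypothetical learned attribute direction
w_edit = manipulate(w, attr_dir, strength=1.5)
# image = generator3d.render(w_edit, camera_pose)  # hypothetical frozen 3D-aware GAN
print(w_edit.shape)
```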
