no code implementations • 18 Jan 2024 • Thao Nguyen, Utkarsh Ojha, Yuheng Li, Haotian Liu, Yong Jae Lee
With increased human control, it is now possible to edit an image in a plethora of ways: from specifying in text what we want to change, to directly dragging the contents of the image in an interactive, point-based manner.
1 code implementation • 26 Jul 2023 • Thao Nguyen, Yuheng Li, Utkarsh Ojha, Yong Jae Lee
Given pairs of examples that represent the "before" and "after" images of an edit, our goal is to learn a text-based editing direction that can be used to perform the same edit on new images.
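The idea of a text-based editing direction can be sketched numerically: if "before" and "after" examples live in a shared embedding space (e.g., a CLIP-like space), one simple estimate of the edit direction is the mean difference of the paired embeddings. The function names and the mean-difference estimator below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def edit_direction(before_embs, after_embs):
    """Estimate one editing direction as the unit-normalized mean
    difference between paired "after" and "before" embeddings."""
    before = np.asarray(before_embs, dtype=float)
    after = np.asarray(after_embs, dtype=float)
    d = (after - before).mean(axis=0)
    return d / np.linalg.norm(d)

def apply_edit(image_emb, direction, strength=1.0):
    """Shift a new image's embedding along the learned direction."""
    return np.asarray(image_emb, dtype=float) + strength * direction
```

In practice the shifted embedding would condition a generative model to produce the edited image; here it only illustrates how one direction generalizes across inputs.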
1 code implementation • CVPR 2023 • Utkarsh Ojha, Yuheng Li, Yong Jae Lee
In this work, we first show that the existing paradigm, which trains a deep network for real-vs-fake classification, fails to detect fake images from newer families of generative models when it is trained only on GAN-generated fakes.
2 code implementations • CVPR 2021 • Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang
Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting.
Ranked #3 on 10-shot image generation on Babies
no code implementations • 5 Apr 2021 • Utkarsh Ojha, Krishna Kumar Singh, Yong Jae Lee
We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars).
no code implementations • ICLR 2021 • Utkarsh Ojha, Krishna Kumar Singh, Yong Jae Lee
We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars).
3 code implementations • CVPR 2020 • Yuheng Li, Krishna Kumar Singh, Utkarsh Ojha, Yong Jae Lee
We present MixNMatch, a conditional generative model that learns to disentangle and encode background, object pose, shape, and texture from real images with minimal supervision, for mix-and-match image generation.
1 code implementation • NeurIPS 2020 • Utkarsh Ojha, Krishna Kumar Singh, Cho-Jui Hsieh, Yong Jae Lee
We propose a novel unsupervised generative model that learns to disentangle object identity from other low-level aspects in class-imbalanced data.
1 code implementation • CVPR 2019 • Krishna Kumar Singh, Utkarsh Ojha, Yong Jae Lee
We propose FineGAN, a novel unsupervised GAN framework, which disentangles the background, object shape, and object appearance to hierarchically generate images of fine-grained object categories.
Ranked #1 on Image Clustering on Stanford Cars
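The hierarchical disentanglement above can be sketched at the level of latent codes: FineGAN ties each child (appearance) code to a parent (shape) group, while the background latent is sampled independently. The group sizes, dimensions, and function name below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_hierarchical_codes(n_parents=20, children_per_parent=5):
    """FineGAN-style code hierarchy (sketch): the child (appearance)
    code is constrained to its parent (shape) group, so appearances
    vary only within one shape category; background is independent."""
    parent = rng.integers(n_parents)
    child = parent * children_per_parent + rng.integers(children_per_parent)
    background_z = rng.normal(size=16)  # independent background latent
    return background_z, parent, child
```

Each stage of the generator would consume one of these codes, producing the background, shape, and appearance in turn.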
1 code implementation • CVPR 2018 • Konda Reddy Mopuri, Utkarsh Ojha, Utsav Garg, R. Venkatesh Babu
Our trained generator network attempts to capture the distribution of adversarial perturbations for a given classifier and readily generates a wide variety of such perturbations.
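A generator of adversarial perturbations can be sketched as a small network mapping random latents to norm-bounded outputs; a `tanh` at the output scaled by a budget eps enforces an L-infinity constraint. The random (untrained) weights and the two-layer MLP here are assumptions for illustration; in the paper the generator is trained against the target classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generator(latent_dim, out_dim, hidden=32, eps=10 / 255):
    """Two-layer MLP with random weights (a sketch; real weights
    would be trained adversarially against a fixed classifier)."""
    W1 = rng.normal(0, 0.1, (latent_dim, hidden))
    W2 = rng.normal(0, 0.1, (hidden, out_dim))

    def generator(z):
        h = np.tanh(z @ W1)
        return eps * np.tanh(h @ W2)  # every entry bounded by eps

    return generator
```

Sampling different latents `z` yields different perturbations, which is the source of the "wide variety" the abstract refers to.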