1 code implementation • arXiv 2021 • Min Jin Chong, David Forsyth
The paired dataset is then used to fine-tune a StyleGAN.
1 code implementation • 2 Nov 2021 • Min Jin Chong, Hsin-Ying Lee, David Forsyth
Recently, StyleGAN has enabled various image manipulation and editing tasks thanks to its high-quality generation and its disentangled latent space.
1 code implementation • ICCV 2021 • Min Jin Chong, Wen-Sheng Chu, Abhishek Kumar, David Forsyth
We present Retrieve in Style (RIS), an unsupervised framework for facial feature transfer and retrieval on real images.
no code implementations • CVPR 2021 • Kedan Li, Min Jin Chong, Jeffrey Zhang, Jingen Liu
Prior works produce images that are filled with artifacts and fail to capture important visual details necessary for commercial applications.
2 code implementations • 11 Jun 2021 • Min Jin Chong, David Forsyth
This adversarial loss guarantees that the map is diverse -- a very wide range of anime can be produced from a single content code.
Ranked #1 on Image-to-Image Translation on selfie2anime
no code implementations • 22 Mar 2020 • Kedan Li, Min Jin Chong, Jingen Liu, David Forsyth
However, obtaining a realistic image is challenging because the kinematics of garments is complex and because outline, texture, and shading cues in the image reveal errors to human viewers.
1 code implementation • CVPR 2020 • Min Jin Chong, David Forsyth
In turn, this effectively bias-free estimate requires good estimates of scores with a finite number of samples.
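The core idea behind such an effectively bias-free estimate is that scores computed from N samples typically carry an O(1/N) bias, so evaluating the score at several sample sizes and extrapolating to infinite N removes most of it. The sketch below illustrates this extrapolation on synthetic data; the names, bias strength, and sample sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch of bias extrapolation: an estimator whose bias
# shrinks like 1/N is evaluated at several sample sizes, the estimates
# are regressed against 1/N, and the intercept gives the N -> infinity
# (effectively bias-free) value. All numbers here are synthetic.

rng = np.random.default_rng(0)

true_value = 12.0   # hypothetical "infinite-sample" score we want to recover
bias_coeff = 500.0  # hypothetical strength of the O(1/N) bias term

def biased_estimate(n):
    """Simulate a score whose expectation is true_value + bias_coeff / n."""
    return true_value + bias_coeff / n + rng.normal(0.0, 0.05)

sample_sizes = np.array([2000, 5000, 10000, 20000, 50000])
estimates = np.array([biased_estimate(n) for n in sample_sizes])

# Linear fit of estimate vs 1/N; the intercept extrapolates to N = infinity.
slope, intercept = np.polyfit(1.0 / sample_sizes, estimates, 1)
print(round(intercept, 2))
```

As the snippet suggests, the extrapolated intercept lands near the true value even though every individual estimate is biased upward, which is why good finite-sample score estimates at each N are the remaining requirement.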
1 code implementation • ICLR 2020 • Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, D. A. Forsyth
Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable to adversarial examples, which are carefully crafted samples with small perturbations.
no code implementations • NeurIPS 2017 • Yogatheesan Varatharajah, Min Jin Chong, Krishnakant Saboo, Brent Berry, Benjamin Brinkmann, Gregory Worrell, Ravishankar Iyer
This paper presents a probabilistic-graphical model that can be used to infer characteristics of instantaneous brain activity by jointly analyzing spatial and temporal dependencies observed in electroencephalograms (EEG).
1 code implementation • CVPR 2017 • Aditya Deshpande, Jiajun Lu, Mao-Chuang Yeh, Min Jin Chong, David Forsyth
Finally, we build a conditional model for the multi-modal distribution between grey-level image and the color field embeddings.