no code implementations • ICCV 2023 • Sungwon Hwang, Junha Hyung, Daejin Kim, Min-Jung Kim, Jaegul Choo
To do so, we first train a scene manipulator, a latent-code-conditioned deformable NeRF, over a dynamic scene to control face deformations via the latent code.
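The core idea of conditioning a deformation field on a latent code can be sketched as follows. This is a minimal illustration, not the paper's architecture: the tiny randomly initialized MLP, the latent dimension, and the variable names (`deform`, `warp_to_canonical`) are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3-D sample points, an 8-D latent code controlling the deformation.
DIM_X, DIM_Z, HIDDEN = 3, 8, 16

# Randomly initialized weights stand in for a trained deformation MLP.
W1 = rng.normal(0.0, 0.1, (HIDDEN, DIM_X + DIM_Z))
W2 = rng.normal(0.0, 0.1, (DIM_X, HIDDEN))

def deform(x, z):
    """Predict a per-point offset for point x, conditioned on latent code z."""
    h = np.tanh(W1 @ np.concatenate([x, z]))
    return W2 @ h

def warp_to_canonical(x, z):
    """Map a point in the observed (deformed) frame into the canonical frame,
    where the static radiance field would be queried."""
    return x + deform(x, z)

x = np.array([0.1, -0.2, 0.3])
z_a = np.zeros(DIM_Z)            # one latent code -> one deformation state
z_b = rng.normal(size=DIM_Z)     # a different code -> a different deformation

x_canon_a = warp_to_canonical(x, z_a)
x_canon_b = warp_to_canonical(x, z_b)
```

Changing only the latent code changes where the same observed point lands in canonical space, which is what makes the deformation controllable.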
no code implementations • CVPR 2023 • Junha Hyung, Sungwon Hwang, Daejin Kim, Hyunji Lee, Jaegul Choo
Specifically, we present three add-on modules of LENeRF: the Latent Residual Mapper, the Attention Field Network, and the Deformation Network, which are jointly used for local manipulations of 3D features by estimating a 3D attention field.
no code implementations • 25 Oct 2022 • Youngin Cho, Daejin Kim, Dongmin Kim, Mohammad Azam Khan, Jaegul Choo
Time series forecasting has become a critical task due to its high practicality in real-world applications such as traffic, energy consumption, economics and finance, and disease analysis.
no code implementations • 12 Sep 2022 • Daejin Kim, Youngin Cho, Dongmin Kim, Cheonbok Park, Jaegul Choo
Extensive experiments on METR-LA and PEMS-BAY demonstrate that our ResCAL can correctly capture the correlation of errors and correct the failures of various traffic forecasting models in event situations.
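The notion of exploiting correlated errors to correct a forecast can be illustrated with a toy linear calibrator. This is an assumption-laden sketch, not the ResCAL method: the simulated error data, the coefficient `k`, and the `calibrate` helper are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy history: forecast errors of two nearby traffic sensors are correlated
# (sensor A's error is roughly twice sensor B's, plus noise).
err_b = rng.normal(0.0, 1.0, 200)
err_a = 2.0 * err_b + rng.normal(0.0, 0.1, 200)

# Fit a linear calibrator err_a ~ k * err_b on the historical errors.
k = float(np.dot(err_a, err_b) / np.dot(err_b, err_b))

def calibrate(forecast_a, observed_err_b):
    """Correct sensor A's forecast using the error already observed at sensor B."""
    return forecast_a - k * observed_err_b
```

Once sensor B's error is observed during an event, the correlated portion of sensor A's error can be subtracted out before it is reported.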
no code implementations • 12 Jun 2022 • Youngin Cho, Daejin Kim, Mohammad Azam Khan, Jaegul Choo
Therefore, in this study, we explore a practical setting called the single-positive setting, where each data instance is annotated with only one positive label and no explicit negative labels.
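A common baseline for this setting treats every unobserved label as negative. The sketch below shows that "assume-negative" loss for a single instance; it is a minimal illustration under that assumption, not the method proposed in the paper, and the function names are invented.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one label, with clipping for numerical safety."""
    eps = 1e-7
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def assume_negative_loss(probs, positive_idx):
    """Single-positive multi-label loss: the one observed label is positive,
    every unobserved label is (possibly wrongly) treated as negative."""
    return sum(
        bce(p, 1.0 if i == positive_idx else 0.0)
        for i, p in enumerate(probs)
    ) / len(probs)
```

The risk this baseline carries is clear from the code: an unobserved label that is actually positive is still pushed toward zero, which is exactly the false-negative noise the single-positive setting must contend with.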
no code implementations • CVPR 2021 • Daejin Kim, Mohammad Azam Khan, Jaegul Choo
While the existing cycle-consistency loss ensures that the image can be translated back, our approach makes the model further preserve the attribute-irrelevant regions even in a single translation to another domain by using the Grad-CAM output computed from the discriminator.
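The mechanism of penalizing changes only in attribute-irrelevant regions can be sketched with a masked reconstruction loss. This is a simplified illustration, assuming the Grad-CAM map from the discriminator is already available as a per-pixel map in [0, 1]; the function name and toy data are invented.

```python
import numpy as np

def preservation_loss(src, translated, cam):
    """Penalize pixel changes only where the discriminator's Grad-CAM map
    says the region is attribute-irrelevant (cam close to 0)."""
    irrelevant = 1.0 - cam          # high where the attribute is NOT located
    return float(np.mean(irrelevant * np.abs(translated - src)))

src = np.zeros((4, 4))
out = src.copy()
out[0, 0] = 1.0                     # change inside the attended (relevant) region
out[3, 3] = 1.0                     # change outside it
cam = np.zeros((4, 4))
cam[0, 0] = 1.0                     # Grad-CAM marks only (0, 0) as attribute-relevant
loss = preservation_loss(src, out, cam)
```

Only the change at the unattended pixel contributes to the loss, so the generator is free to edit the attribute region while being discouraged from altering everything else, even without a full cycle.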
no code implementations • 1 Jan 2021 • Daejin Kim, Hyunjung Shim, Jongwuk Lee
We demonstrate that AAP equipped with existing pruning methods (i.e., iterative pruning, one-shot pruning, and dynamic pruning) consistently improves the accuracy of the original methods at 128×–4096× compression ratios on three benchmark datasets.
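What a compression ratio like 128× means for a weight tensor can be made concrete with simple magnitude pruning: keep only the largest-magnitude 1/128 of the weights and zero the rest. This is a generic sketch of that bookkeeping, not AAP itself; the helper name and sizes are assumptions.

```python
import numpy as np

def prune_to_ratio(weights, ratio):
    """Keep the largest-magnitude 1/ratio fraction of weights, zero the rest."""
    flat = np.abs(weights).ravel()
    keep = max(1, flat.size // ratio)          # number of surviving weights
    threshold = np.sort(flat)[-keep]           # keep-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))                  # 4096 weights
pruned, mask = prune_to_ratio(w, 128)          # 128x compression -> 32 weights survive
```

At 4096× on this tensor only a single weight would survive, which is why accuracy at such extreme ratios is the interesting regime.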