1 code implementation • 28 Nov 2023 • Junwon Seo, Sangyoon Lee, Kwang In Kim, Jaeho Lee
Neural fields are an emerging paradigm in data representation in which a neural network is trained to approximate a given signal.
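The neural-field idea above can be sketched minimally: a small coordinate-based network is fit by gradient descent so that its outputs approximate a target signal. The architecture, signal, and hyperparameters below are illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal to represent: samples of sin(2*pi*x) on [0, 1].
x = np.linspace(0.0, 1.0, 128).reshape(-1, 1)
y = np.sin(2 * np.pi * x)

# Tiny one-hidden-layer MLP, trained by full-batch gradient descent
# to map coordinates x to signal values y.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = np.mean((pred0 - y) ** 2)   # error before fitting

lr = 0.05
for _ in range(5000):
    h, pred = forward(x)
    err = 2.0 * (pred - y) / len(x)      # d(MSE)/d(pred)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = x.T @ dh; gb1 = dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(x)
loss = np.mean((pred - y) ** 2)      # error after fitting
```

After training, the network itself serves as the representation of the signal: evaluating `forward` at any coordinate reconstructs the signal there.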
1 code implementation • 25 Sep 2023 • Uyoung Jeong, Seungryul Baek, Hyung Jin Chang, Kwang In Kim
Our new instance embedding loss provides a learning signal on the entire area of the image with bounding box annotations, achieving globally consistent and disentangled instance representation.
no code implementations • 7 Jan 2023 • Yunpyo An, Suyeong Park, Kwang In Kim
Our proposed model updates the surrogate learner for every new data instance, emulating and capitalizing on the continuous learning dynamics of the neural network without retraining the principal model from scratch for each individual label.
no code implementations • 1 Aug 2022 • Tze Ho Elden Tse, Zhongqun Zhang, Kwang In Kim, Ales Leonardis, Feng Zheng, Hyung Jin Chang
In this paper, we propose a novel semi-supervised framework that allows us to learn contact from monocular images.
no code implementations • CVPR 2022 • Tze Ho Elden Tse, Kwang In Kim, Ales Leonardis, Hyung Jin Chang
Estimating the pose and shape of hands and objects under interaction finds numerous applications including augmented and virtual reality.
Ranked #6 on hand-object pose on DexYCB
no code implementations • CVPR 2022 • Kwang In Kim
We consider distributed (gradient descent-based) learning scenarios where the server combines the gradients of learning objectives gathered from local clients.
no code implementations • ICCV 2021 • Kwang In Kim, James Tompkin
Then, we empirically estimate and strengthen the statistical dependence between the initial noisy predictor and the additional features via manifold denoising.
no code implementations • 15 Jul 2021 • Jake Deane, Sinead Kearney, Kwang In Kim, Darren Cosker
Synthetic data is becoming increasingly common for training computer vision models for a variety of tasks.
no code implementations • 24 Jun 2021 • Youssef A. Mejjati, Isa Milefchik, Aaron Gokaslan, Oliver Wang, Kwang In Kim, James Tompkin
We present an algorithm that learns a coarse 3D representation of objects from unposed multi-view 2D mask supervision, then uses it to generate detailed mask and image texture.
no code implementations • ICCV 2021 • Dong Uk Kim, Kwang In Kim, Seungryul Baek
Three-dimensional hand pose estimation has reached a level of maturity, enabling real-world applications for single-hand cases.
no code implementations • ECCV 2020 • Youssef Alami Mejjati, Celso F. Gomez, Kwang In Kim, Eli Shechtman, Zoya Bylinskii
Extensions of our model allow for multi-style edits and the ability to both increase and attenuate attention in an image region.
no code implementations • ECCV 2020 • Kwang In Kim, Christian Richardt, Hyung Jin Chang
Predictor combination aims to improve a (target) predictor of a learning task based on the (reference) predictors of potentially relevant tasks, without having access to the internals of individual predictors.
1 code implementation • CVPR 2020 • Sinead Kearney, Wenbin Li, Martin Parsons, Kwang In Kim, Darren Cosker
We evaluate our model on both synthetic and real RGBD images and compare our results to previously published work fitting canine models to images.
no code implementations • CVPR 2021 • Kwanyoung Kim, Dongwon Park, Kwang In Kim, Se Young Chun
Labeling large amounts of data is often challenging due to high labeling costs, which limits the application domains of deep learning techniques.
1 code implementation • 1 Jan 2020 • Youssef Alami Mejjati, Zejiang Shen, Michael Snower, Aaron Gokaslan, Oliver Wang, James Tompkin, Kwang In Kim
We present an algorithm to generate diverse foreground objects and composite them into background images using a GAN architecture.
no code implementations • ICML Workshop Deep_Phenomen 2019 • Dushyant Mehta, Kwang In Kim, Christian Theobalt
We show that implicit filter-level sparsity manifests in convolutional neural networks (CNNs) that employ Batch Normalization and ReLU activation and are trained using adaptive gradient descent techniques with L2 regularization or weight decay.
no code implementations • 13 May 2019 • Dushyant Mehta, Kwang In Kim, Christian Theobalt
We show that implicit filter-level sparsity manifests in convolutional neural networks (CNNs) that employ Batch Normalization and ReLU activation and are trained with adaptive gradient descent techniques and L2 regularization or weight decay.
no code implementations • CVPR 2019 • Kwang In Kim, Hyung Jin Chang
We present a new predictor combination algorithm that improves a given task predictor based on potentially relevant reference predictors.
no code implementations • CVPR 2019 • Seungryul Baek, Kwang In Kim, Tae-Kyun Kim
Once the model is successfully fitted to input RGB images, its meshes, i.e. shapes and articulations, are realistic, and we augment viewpoints on top of the estimated dense hand poses.
1 code implementation • NeurIPS 2018 • Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, Kwang In Kim
Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene.
no code implementations • CVPR 2019 • Dushyant Mehta, Kwang In Kim, Christian Theobalt
We investigate filter-level sparsity that emerges in convolutional neural networks (CNNs) that employ Batch Normalization and ReLU activation and are trained with adaptive gradient descent techniques and L2 regularization or weight decay.
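One common way to quantify the filter-level sparsity discussed in these entries is to inspect the learned Batch Normalization scale (gamma) of each filter: a filter whose gamma collapses to ~0 emits a constant output and is effectively pruned. The sketch below is an illustrative numpy measurement; the threshold and gamma values are assumptions, not numbers from the papers.

```python
import numpy as np

def filter_sparsity(bn_scales, threshold=1e-2):
    """Fraction of filters whose learned BatchNorm scale (gamma) is
    negligible. A filter with gamma ~ 0 produces a constant output
    (absorbed by the shift/bias), so it is effectively inactive.
    `threshold` is an illustrative cutoff, not a value from the paper."""
    gamma = np.abs(np.asarray(bn_scales, dtype=float))
    return float(np.mean(gamma < threshold))

# Illustrative gammas for a 64-filter layer: under adaptive optimizers
# with L2/weight decay, many scales are driven toward zero.
gammas = np.concatenate([np.full(40, 1e-4), np.linspace(0.1, 1.2, 24)])
print(filter_sparsity(gammas))  # → 0.625 (40 of 64 filters inactive)
```

In practice one would read the gammas straight out of each trained BatchNorm layer and report this fraction per layer.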
4 code implementations • ECCV 2018 • Aaron Gokaslan, Vivek Ramanujan, Daniel Ritchie, Kwang In Kim, James Tompkin
Unsupervised image-to-image translation techniques are able to map local texture between two domains, but they are typically unsuccessful when the domains require larger shape change.
no code implementations • 11 Jun 2018 • Juil Sock, Kwang In Kim, Caner Sahin, Tae-Kyun Kim
Our architecture jointly learns multiple sub-tasks: 2D detection, depth, and 3D pose estimation of individual objects; and joint registration of multiple objects.
2 code implementations • 6 Jun 2018 • Youssef A. Mejjati, Christian Richardt, James Tompkin, Darren Cosker, Kwang In Kim
Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene.
no code implementations • CVPR 2018 • Kwang In Kim, Juhyun Park, James Tompkin
When learning functions on manifolds, we can improve performance by regularizing with respect to the intrinsic manifold geometry rather than the ambient space.
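A standard way to realize intrinsic (rather than ambient-space) regularization is to approximate the data manifold with a k-NN graph and penalize the graph-Laplacian smoothness term f^T L f; minimizing ||f - y||^2 + lam * f^T L f then has the closed form f = (I + lam L)^{-1} y. The sketch below is a generic illustration of that construction, not the paper's specific algorithm.

```python
import numpy as np

def knn_graph_laplacian(X, k=5):
    """Unnormalized Laplacian L = D - W of a symmetrized k-NN graph
    on the rows of X; a common proxy for intrinsic manifold geometry."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self (distance 0)
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrize
    return np.diag(W.sum(1)) - W

def laplacian_regularized_fit(X, y, lam=1.0, k=5):
    """Minimize ||f - y||^2 + lam * f^T L f  =>  f = (I + lam L)^{-1} y."""
    L = knn_graph_laplacian(X, k)
    return np.linalg.solve(np.eye(len(X)) + lam * L, y)

# Noisy samples of a smooth function on a 1-D manifold (a line).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 60)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.normal(size=60)
f = laplacian_regularized_fit(X, y, lam=2.0, k=5)
# f trades fidelity to y for smoothness along the estimated manifold.
```

The regularized estimate f is guaranteed to be at least as smooth as the raw labels y under the graph smoothness measure f^T L f.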
no code implementations • CVPR 2018 • Youssef A. Mejjati, Darren Cosker, Kwang In Kim
We tackle this by considering task-specific estimators as random variables.
no code implementations • CVPR 2018 • Seungryul Baek, Kwang In Kim, Tae-Kyun Kim
By training the HPG and HPE in a single unified optimization framework that enforces 1) the HPE agrees with the paired depth and skeleton entries, and 2) the HPG-HPE combination satisfies cyclic consistency (both the input and the output of HPG-HPE are skeletons) on the newly generated unpaired skeletons, our algorithm constructs an HPE that is robust to variations beyond the coverage of the existing database.
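The cyclic-consistency term described above can be sketched generically: for a skeleton s, the generator (HPG) maps skeleton → depth and the estimator (HPE) maps depth → skeleton, and the loss penalizes the round-trip error ||HPE(HPG(s)) - s||^2. The linear stand-ins for HPG/HPE and the dimensions below are purely illustrative assumptions.

```python
import numpy as np

def cyclic_consistency_loss(hpg, hpe, skeletons):
    """Mean squared error between each skeleton and its reconstruction
    through the HPG -> HPE round trip (skeleton -> depth -> skeleton)."""
    recon = np.stack([hpe(hpg(s)) for s in skeletons])
    return float(np.mean((recon - np.stack(skeletons)) ** 2))

# Illustrative stand-ins: a linear "generator" into an overcomplete
# depth-feature space and its pseudo-inverse as the "estimator",
# so the cycle is (numerically) the identity.
rng = np.random.default_rng(0)
A = rng.normal(size=(128, 63))      # skeleton (63-D) -> depth feature (128-D)
A_pinv = np.linalg.pinv(A)
hpg = lambda s: A @ s
hpe = lambda d: A_pinv @ d

skeletons = [rng.normal(size=63) for _ in range(8)]
loss = cyclic_consistency_loss(hpg, hpe, skeletons)
# loss is ~0 when the HPG-HPE cycle reproduces its input skeletons.
```

During joint training, this term would be minimized alongside the supervised loss on paired depth/skeleton entries.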
no code implementations • 11 Apr 2018 • Yassir Saquil, Kwang In Kim, Peter Hall
In this paper, we investigate the use of generative adversarial networks in the task of image generation according to subjective measures of semantic attributes.
no code implementations • ICCV 2017 • Kwang In Kim, James Tompkin, Christian Richardt
We present an algorithm for test-time combination of a set of reference predictors with unknown parametric forms.
no code implementations • 12 Jun 2017 • James Tompkin, Kwang In Kim, Hanspeter Pfister, Christian Theobalt
Large databases are often organized by hand-labeled metadata, or criteria, which are expensive to collect.
no code implementations • 6 Jun 2017 • Seungryul Baek, Kwang In Kim, Tae-Kyun Kim
Each response map, or node, in both the convolutional and fully connected layers selectively responds to class labels s.t.
Ranked #168 on Image Classification on CIFAR-100 (using extra training data)
no code implementations • 28 Oct 2016 • Seungryul Baek, Kwang In Kim, Tae-Kyun Kim
Online action detection (OAD) is challenging since 1) robust yet computationally expensive features cannot be straightforwardly used due to the real-time processing requirements and 2) the localization and classification of actions have to be performed even before they are fully observed.
no code implementations • ICCV 2015 • Kwang In Kim, James Tompkin, Hanspeter Pfister, Christian Theobalt
Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly used graph Laplacian regularizer.
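The isotropic diffusion referred to here is what standard label propagation performs: unlabeled nodes repeatedly take the affinity-weighted average of their neighbors while labeled nodes stay clamped, converging to the harmonic solution of the graph Laplacian. The sketch below is a textbook version of that baseline, not the paper's anisotropic extension.

```python
import numpy as np

def propagate_labels(W, seeds, n_iter=200):
    """Isotropic label diffusion on a graph with affinity matrix W.
    `seeds` maps node index -> label value; seed nodes stay clamped
    while every other node repeatedly takes the affinity-weighted
    average of its neighbors (the diffusion induced by the graph
    Laplacian regularizer)."""
    deg = W.sum(1)
    f = np.zeros(len(W))
    for i, v in seeds.items():
        f[i] = v
    for _ in range(n_iter):
        f = (W @ f) / np.maximum(deg, 1e-12)
        for i, v in seeds.items():          # clamp labeled nodes
            f[i] = v
    return f

# Chain graph 0-1-2-3-4 with labels fixed at the two ends.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
f = propagate_labels(W, {0: 0.0, 4: 1.0})
print(np.round(f, 3))  # ≈ [0, 0.25, 0.5, 0.75, 1]: linear interpolation
```

Because the chain's Laplacian treats every edge identically, the propagated labels interpolate linearly between the seeds; anisotropic diffusion would instead weight directions on the graph unequally.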
no code implementations • CVPR 2015 • Kwang In Kim, James Tompkin, Hanspeter Pfister, Christian Theobalt
The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems.
no code implementations • CVPR 2015 • Kwang In Kim, James Tompkin, Hanspeter Pfister, Christian Theobalt
In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points.