no code implementations • 14 Mar 2024 • Hyung-Il Kim, Kimin Yun, Jun-Seok Yun, Yuseok Bae
Recently, foundation models trained on massive datasets to adapt to a wide range of domains have attracted considerable attention and are actively being explored within the computer vision community.
no code implementations • 15 Sep 2022 • Hyung-Il Kim, Kimin Yun, Yong Man Ro
This is mainly attributed to the mismatch between training and testing sets.
no code implementations • 12 Apr 2022 • Sunoh Kim, Kimin Yun, Jin Young Choi
The key to successful grounding for video surveillance is to understand a semantic phrase corresponding to important actors and objects.
no code implementations • 17 Oct 2021 • Geonu Lee, Kimin Yun, Jungchan Cho
To solve the uncorrelated attention issue, we also propose a novel group sparsity-based temporal attention module.
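The paper itself does not show the module's form here, but a group-sparsity regularizer on temporal attention can be sketched as follows. This is a minimal illustration, assuming contiguous frame groups and an L2,1 (sum of per-group L2 norms) penalty; the grouping scheme and penalty used by the proposed module may differ.

```python
import numpy as np

def group_sparsity_penalty(attn, group_size):
    """L2,1 norm over contiguous temporal groups of attention weights.
    Zeroing a whole group costs nothing, so the penalty encourages
    attention to concentrate on a few temporal segments."""
    groups = attn.reshape(-1, group_size)           # (num_groups, group_size)
    return np.sqrt(np.square(groups).sum(axis=1)).sum()

# 8 frames split into groups of 2; only one group carries attention mass
attn = np.array([0.0, 0.0, 0.0, 0.0, 0.3, 0.7, 0.0, 0.0])
penalty = group_sparsity_penalty(attn, 2)
print(round(penalty, 4))
```

Adding this penalty to the training loss drives entire frame groups of the attention map to zero rather than spreading small weights everywhere.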
1 code implementation • 1 Dec 2020 • Youngwan Lee, Hyung-Il Kim, Kimin Yun, Jinyoung Moon
By using the proposed temporal modeling method (T-OSA) and the efficient factorized component (D(2+1)D), we construct two types of VoV3D networks, VoV3D-M and VoV3D-L.
Ranked #29 on Action Recognition on Something-Something V1 (using extra training data)
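To give a feel for why a (2+1)D factorization is efficient, the sketch below compares parameter counts for a dense 3D convolution against a spatial-then-temporal factorization. The intermediate width `m` follows the common choice that roughly matches the 3D parameter budget; the exact design of the paper's D(2+1)D block is not reproduced here.

```python
def conv3d_params(c_in, c_out, t, k):
    """Parameters of a dense t x k x k 3D convolution (no bias)."""
    return c_in * c_out * t * k * k

def conv2plus1d_params(c_in, c_out, t, k):
    """Parameters of a (2+1)D factorization: a 1 x k x k spatial conv
    into m channels, then a t x 1 x 1 temporal conv to c_out."""
    # m chosen so the factorized block matches the 3D parameter budget
    m = (t * k * k * c_in * c_out) // (k * k * c_in + t * c_out)
    spatial = c_in * m * k * k
    temporal = m * c_out * t
    return spatial + temporal, m

full = conv3d_params(64, 64, 3, 3)        # 110592 parameters
fact, m = conv2plus1d_params(64, 64, 3, 3)
print(full, fact, m)
```

With the same parameter budget, the factorized form doubles the number of nonlinearities per block and is cheaper to optimize, which is the usual motivation for (2+1)D designs.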
no code implementations • 28 Jun 2020 • Youngwan Lee, Joong-won Hwang, Hyung-Il Kim, Kimin Yun, Yongjin Kwon, Yuseok Bae, Sung Ju Hwang
To tackle these limitations, we propose a new localization uncertainty estimation method called UAD for anchor-free object detection.
Ranked #127 on Object Detection on COCO test-dev
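A common way to estimate localization uncertainty in this setting is to have the head predict a log-variance alongside each box offset and train with a Gaussian negative log-likelihood. The sketch below is an illustration of that general idea, not UAD's actual formulation, and all names in it are assumptions.

```python
import numpy as np

def gaussian_nll(pred_offset, log_var, target_offset):
    """Per-side NLL of a Gaussian with predicted mean and variance.
    A large variance down-weights the squared error but pays a log
    penalty, so the network learns to report high variance only
    where localization is genuinely ambiguous."""
    var = np.exp(log_var)
    return 0.5 * (np.square(pred_offset - target_offset) / var + log_var)

# Four offsets (left, top, right, bottom) for one anchor-free location
pred = np.array([1.0, 2.0, 1.5, 0.5])
logv = np.array([0.0, 0.0, 2.0, 0.0])   # third side flagged as uncertain
tgt  = np.array([1.1, 1.9, 3.0, 0.5])
loss = gaussian_nll(pred, logv, tgt)
print(loss.round(3))
```

The predicted variance can then also be used at inference time, e.g. to rescore boxes in NMS by their localization confidence.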
no code implementations • 21 Jan 2019 • Sunoh Kim, Kimin Yun, Jongyoul Park, Jin Young Choi
In this paper, to address this problem, we propose a new framework for recognizing object-related human actions using graph convolutional networks over human and object poses.
Ranked #1 on Action Recognition on IRD
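The building block such a framework rests on is a graph convolution over a pose graph. The sketch below runs one standard GCN layer (symmetrically normalized adjacency with self-loops) over a toy graph of three human joints plus one object keypoint; the actual graph definition and layer design of the paper are not reproduced, and the connectivity shown is an assumption for illustration.

```python
import numpy as np

def gcn_layer(a, h, w):
    """One GCN layer: symmetrically normalize A + I, then ReLU(A_hat H W)."""
    a_tilde = a + np.eye(a.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))  # D^{-1/2}
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_hat @ h @ w, 0.0)

# 3 human joints (0-2) + 1 object keypoint (3); node 1 (e.g. a hand)
# is linked both along the joint chain and to the object node.
a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
h = np.random.default_rng(0).normal(size=(4, 2))  # 2-D feature per node
w = np.eye(2)                                     # identity weights for clarity
out = gcn_layer(a, h, w)
print(out.shape)
```

Because the object keypoint sits in the same graph as the joints, message passing mixes object context into the hand's features, which is the intuition behind pose-based recognition of object-related actions.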
1 code implementation • CVPR 2017 • Sangdoo Yun, Jongwon Choi, Youngjoon Yoo, Kimin Yun, Jin Young Choi
In contrast to existing trackers using deep networks, the proposed tracker is designed to achieve lightweight computation as well as satisfactory tracking accuracy in both location and scale.
no code implementations • CVPR 2016 • YoungJoon Yoo, Kimin Yun, Sangdoo Yun, JongHee Hong, Hawook Jeong, Jin Young Choi
In this paper, we consider the moving dynamics of co-occurring objects for path prediction in scenes crowded with moving objects.