no code implementations • 13 Mar 2024 • Minsoo Kim, Gi Pyo Nam, Haksub Kim, Haesol Park, Ig-Jae Kim
In the realm of face image quality assessment (FIQA), methods based on sample-relative classification have shown impressive performance.
no code implementations • 13 Mar 2024 • Minsoo Kim, Min-Cheol Sagong, Gi Pyo Nam, Junghyun Cho, Ig-Jae Kim
Initially, we train the face recognition model using a real face dataset and create a feature space for both real and virtual IDs where virtual prototypes are orthogonal to other prototypes.
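The snippet does not give the exact construction of the orthogonal virtual prototypes, but the idea can be sketched with a simple projection: sample a random direction and remove its component in the span of the real ID prototypes. This is a minimal numpy illustration of one plausible approach (the QR-based projection and the function name are assumptions, not the paper's method):

```python
import numpy as np

def make_virtual_prototype(real_protos, rng=None):
    """Return a unit vector orthogonal to every real class prototype.

    real_protos: array of shape (num_ids, dim), one prototype per real ID.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = real_protos.shape[1]
    v = rng.standard_normal(d)
    # Orthonormal basis for the span of the real prototypes.
    q, _ = np.linalg.qr(real_protos.T)
    # Project out the component of v lying in that span.
    v = v - q @ (q.T @ v)
    return v / np.linalg.norm(v)

# Toy example: 5 real ID prototypes in a 16-dim feature space.
rng = np.random.default_rng(0)
real = rng.standard_normal((5, 16))
real /= np.linalg.norm(real, axis=1, keepdims=True)
virtual = make_virtual_prototype(real, rng)
# Cosine similarity to every real prototype is ~0.
print(np.abs(real @ virtual).max())
```

This only works when the feature dimension exceeds the number of real prototypes, so there is room left for orthogonal directions.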
no code implementations • 9 Nov 2023 • Jangwhan Lee, Minsoo Kim, SeungCheol Baek, Seok Joong Hwang, Wonyong Sung, Jungwook Choi
Large Language Models (LLMs) are proficient in natural language processing tasks, but their deployment is often restricted by extensive parameter sizes and computational demands.
1 code implementation • NeurIPS 2023 • Minsoo Kim, Sihwa Lee, Janghwan Lee, Sukjin Hong, Du-Seong Chang, Wonyong Sung, Jungwook Choi
Generative Language Models (GLMs) have shown impressive performance in tasks such as text generation, understanding, and reasoning.
no code implementations • 11 Jun 2023 • Minsoo Kim, HongSeok Kim
Furthermore, we prove the convergence of DeepLDE and show that the primal-dual learning method alone cannot ensure equality constraints without the help of equality embedding.
no code implementations • CVPR 2023 • Minyoung Hwang, Jaeyeon Jeong, Minsoo Kim, Yoonseon Oh, Songhwai Oh
We show that an exploitation policy, which moves the agent toward a well-chosen local goal among unvisited but observable states, outperforms a method that moves the agent to a previously visited state.
Ranked #1 on Visual Navigation on R2R
1 code implementation • 23 Feb 2023 • Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, Jungwook Choi
Pre-trained Transformer models such as BERT have shown great success in a wide range of applications, but at the cost of substantial increases in model complexity.
1 code implementation • 20 Nov 2022 • Minsoo Kim, Sihwa Lee, Sukjin Hong, Du-Seong Chang, Jungwook Choi
In particular, KD has been employed in quantization-aware training (QAT) of Transformer encoders like BERT to improve the accuracy of the student model with the reduced-precision weight parameters.
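The combination of fake-quantized student weights and a distillation loss against a full-precision teacher can be sketched in a few lines. This is a generic illustration of KD under quantization-aware training, not the paper's specific method; the helper names, the 4-bit setting, and the temperature value are assumptions:

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Simulate symmetric uniform quantization of weights during training."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, tau=2.0):
    """KL divergence between temperature-softened teacher and student."""
    p = softmax(teacher_logits / tau)
    q = softmax(student_logits / tau)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * tau ** 2)

# Toy forward pass: the same input through full-precision teacher
# weights and fake-quantized student weights.
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))
w_teacher = rng.standard_normal((8, 3))
w_student = fake_quantize(w_teacher, num_bits=4)
loss = kd_loss(x @ w_student, x @ w_teacher)
print(loss)  # nonzero, since quantization perturbs the logits
```

In practice the rounding step is made trainable with a straight-through estimator, and the KD loss is often applied to intermediate attention maps and hidden states as well as to the output logits.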
no code implementations • NAACL 2022 • Garam Lee, Minsoo Kim, Jai Hyun Park, Seung-won Hwang, Jung Hee Cheon
Embeddings, which compress information in raw text into semantics-preserving low-dimensional vectors, have been widely adopted for their efficacy.
no code implementations • 22 Sep 2022 • Eunkyu Oh, Taehun Kim, Minsoo Kim, Yunhu Ji, Sushil Khyalia
As a crucial component of contrastive learning, we propose two global context enhanced data augmentation methods while maintaining the semantics of the original session.
1 code implementation • NAACL 2022 • Jihyuk Kim, Minsoo Kim, Seung-won Hwang
Deep learning for Information Retrieval (IR) requires a large amount of high-quality query-document relevance labels, but such labels are inherently sparse.
no code implementations • 3 Dec 2021 • Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, Dong Hyun Lee, Jungwook Choi
Non-linear operations such as GELU, Layer normalization, and Softmax are essential yet costly building blocks of Transformer models.
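For reference, the three non-linear operations named above have compact floating-point definitions; the cost the paper targets comes from evaluating them (exponentials, square roots, divisions) in low-precision or integer-only hardware. A minimal numpy sketch of the standard formulations (using the common tanh approximation of GELU):

```python
import numpy as np

def gelu(x):
    """Tanh approximation of GELU, as used in BERT/GPT-style models."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def layer_norm(x, eps=1e-5):
    """Normalize each row to zero mean and unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = np.array([[1.0, -2.0, 0.5, 3.0]])
print(softmax(x).sum())      # each row sums to 1
print(layer_norm(x).mean())  # near-zero mean per row
```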
no code implementations • 2 Jul 2021 • Younsik Kim, Dongjin Oh, Soonsang Huh, Dongjoon Song, Sunbeom Jeong, Junyoung Kwon, Minsoo Kim, Donghan Kim, Hanyoung Ryu, Jongkeun Jung, Wonshik Kyung, Byungmin Sohn, Suyoung Lee, Jounghoon Hyun, Yeonghoon Lee, Yeongkwan Kim and Changyoung Kim
In such cases, the limited time available for data acquisition can be a serious constraint for experiments in which multidimensional spectral data are acquired.
1 code implementation • 1 Mar 2021 • Je Hyeong Hong, Hanjo Kim, Minsoo Kim, Gi Pyo Nam, Junghyun Cho, Hyeong-Seok Ko, Ig-Jae Kim
Our method proceeds by first fitting a 3D morphable model to the input image, then overlaying the mask surface onto the face model and warping the respective mask texture, and finally projecting the 3D mask back to 2D.
1 code implementation • 21 Dec 2016 • Kyunghyun Paeng, Sangheum Hwang, Sunggyun Park, Minsoo Kim
We present a unified framework to predict tumor proliferation scores from breast histopathology whole slide images.
no code implementations • WS 2016 • Minsoo Kim, Moirangthem Dennis Singh, Minho Lee
The results also show that the temporal hierarchies help improve the ability of seq2seq models to capture compositionalities better without the presence of highly complex architectural hierarchies.