1 code implementation • ACL 2022 • Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
LM-BFF (CITATION) achieves strong few-shot performance by using automatically generated prompts and appending demonstrations that are similar to the input example.
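For intuition, here is a minimal, hypothetical sketch of LM-BFF-style input construction: the classification input is rewritten as a cloze-style template and a labelled demonstration similar to the input is appended. The template wording, label words, and example sentences below are illustrative assumptions, not the paper's actual prompts.

```python
# Sketch (not the authors' code): build a cloze prompt plus one demonstration.
def build_prompt(input_text: str, demo_text: str, demo_label_word: str,
                 mask_token: str = "[MASK]") -> str:
    """Return '<input> It was [MASK] . <demo> It was <label word> .'"""
    query = f"{input_text} It was {mask_token} ."
    demonstration = f"{demo_text} It was {demo_label_word} ."
    return f"{query} {demonstration}"

prompt = build_prompt(
    input_text="A tedious, plodding film.",
    demo_text="An utterly charming story.",   # retrieved similar example
    demo_label_word="great",                  # label word of its gold label
)
print(prompt)
# A masked LM's probabilities for label words (e.g. 'great' vs 'terrible')
# at the [MASK] position are then read off as class scores.
```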
no code implementations • COLING 2022 • Eunhwan Park, Jong-Hyeon Lee, Jeon Dong Hyeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
This study proposes Semantic-Infused SElective Graph Reasoning (SISER) for fact verification, which introduces semantic-level graph reasoning and injects the resulting reasoning-enhanced representation into other graph-based and sequence-based reasoning methods.
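As a rough illustration of the general pattern only (not the SISER architecture itself), the sketch below runs attention over semantic-level graph nodes and then injects the reasoning-enhanced representation into another reasoning stream through a learned gate; the module names, dimensions, and gating scheme are assumptions.

```python
# Hypothetical sketch: graph-masked attention over semantic nodes, then gated
# injection into a graph- or sequence-based representation.
import torch
import torch.nn as nn

class GraphReasonAndInject(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.attn = nn.MultiheadAttention(dim, num_heads=heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, sem_nodes, adj_mask, other_repr):
        # sem_nodes:  (B, N, dim) semantic-level node embeddings
        # adj_mask:   (B, N, N) bool, True where attention is NOT allowed;
        #             assumes every node at least has a self-loop
        # other_repr: (B, N, dim) another reasoning stream's representation
        mask = adj_mask.repeat_interleave(self.heads, dim=0)  # (B*heads, N, N)
        reasoned, _ = self.attn(sem_nodes, sem_nodes, sem_nodes, attn_mask=mask)
        g = torch.sigmoid(self.gate(torch.cat([reasoned, other_repr], dim=-1)))
        return g * reasoned + (1 - g) * other_repr  # injected representation
```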
no code implementations • LREC 2022 • Hyeondey Kim, Seonhoon Kim, Inho Kang, Nojun Kwak, Pascale Fung
Our experimental results show that the proposed methods improve model performance on the investigated Korean language understanding tasks.
no code implementations • 21 Nov 2022 • Jiho Jang, Chaerin Kong, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak
Contrastive learning is a form of distance metric learning that aims to learn features that are invariant across two related representations of the same instance.
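A minimal sketch of this kind of contrastive objective is given below, written as a symmetric InfoNCE loss over two related views; this is the generic formulation, and the temperature value and symmetrization are assumptions rather than the paper's exact loss.

```python
# Generic symmetric InfoNCE over two views of the same batch of instances.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07):
    """z1, z2: (B, D) representations of two related views of the same instances."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                    # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # matching pairs on diagonal
    # diagonal pairs are positives; all off-diagonal pairs serve as negatives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```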
1 code implementation • 25 Nov 2021 • Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak
Through self-distillation, the intermediate layers become better suited for instance discrimination, so the performance of an early-exited sub-network degrades only slightly compared with that of the full network.
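A hedged sketch of the self-distillation idea follows: the representation at an early exit is pulled toward a stop-gradient copy of the final-layer representation, so intermediate layers remain useful for instance discrimination. The cosine-similarity loss used here is an assumption, not necessarily the paper's exact objective.

```python
# Sketch: distill the full network's final features into an early-exit layer.
import torch
import torch.nn.functional as F

def self_distill_loss(early_feat: torch.Tensor, final_feat: torch.Tensor):
    """early_feat, final_feat: (B, D) features from an intermediate and the last layer."""
    student = F.normalize(early_feat, dim=-1)
    teacher = F.normalize(final_feat.detach(), dim=-1)  # stop-gradient teacher
    # maximise cosine similarity between the early exit and the full network
    return (1.0 - (student * teacher).sum(dim=-1)).mean()
```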
2 code implementations • EMNLP 2021 • Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, WooMyoung Park, Nako Sung
GPT-3 demonstrates the remarkable in-context learning ability of large-scale language models (LMs) trained on data at the scale of hundreds of billions of tokens.
no code implementations • 17 Sep 2020 • Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, Nojun Kwak
In this paper, we propose novel training schemes for multiple-choice video question answering, with a self-supervised pre-training stage and supervised contrastive learning as an auxiliary objective in the main training stage.
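For concreteness, a generic supervised contrastive (SupCon-style) auxiliary loss is sketched below, in which samples sharing a label act as positives for one another; the exact formulation used in the paper may differ.

```python
# Generic supervised contrastive loss: same-label samples are positives.
import torch
import torch.nn.functional as F

def sup_con_loss(feats: torch.Tensor, labels: torch.Tensor, temperature: float = 0.1):
    """feats: (B, D) features; labels: (B,) integer class ids."""
    feats = F.normalize(feats, dim=-1)
    sim = feats @ feats.t() / temperature                        # (B, B)
    mask_self = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(mask_self, float('-inf'))              # drop self-pairs
    pos = (labels[:, None] == labels[None, :]) & ~mask_self      # same-label pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability of positives, per anchor that has at least one positive
    pos_counts = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_counts)
    return loss[pos.sum(dim=1) > 0].mean()
```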
no code implementations • ACL 2019 • Daesik Kim, Seonhoon Kim, Nojun Kwak
Moreover, ablation studies validate that both incorporating f-GCN to extract knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems.
no code implementations • 29 May 2018 • Seonhoon Kim, Inho Kang, Nojun Kwak
Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, in which each layer uses the concatenation of attentive features and the hidden features of all preceding recurrent layers.
Ranked #10 on Natural Language Inference on SNLI
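Below is a simplified, hypothetical sketch of the densely-connected co-attentive recurrence described in the entry above: each stacked recurrent layer consumes the features of all preceding layers concatenated with a co-attended summary of the other sentence. The layer sizes, dot-product co-attention, and shared bidirectional LSTM used here are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch: dense connectivity plus co-attention over a sentence pair.
import torch
import torch.nn as nn

class DenseCoAttentiveEncoder(nn.Module):
    def __init__(self, emb_dim: int = 300, hidden: int = 100, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = emb_dim
        for _ in range(num_layers):
            # input = current features concatenated with co-attention features
            self.layers.append(nn.LSTM(2 * in_dim, hidden,
                                       batch_first=True, bidirectional=True))
            in_dim += 2 * hidden   # dense connectivity: outputs are concatenated

    def forward(self, a, b):
        # a, b: (B, La, emb_dim), (B, Lb, emb_dim) embedded sentence pair
        for lstm in self.layers:
            # co-attention: tokens of each sentence attend over the other sentence
            scores = a @ b.transpose(1, 2)                       # (B, La, Lb)
            att_a = torch.softmax(scores, dim=-1) @ b            # (B, La, d)
            att_b = torch.softmax(scores.transpose(1, 2), dim=-1) @ a
            out_a, _ = lstm(torch.cat([a, att_a], dim=-1))
            out_b, _ = lstm(torch.cat([b, att_b], dim=-1))
            a = torch.cat([a, out_a], dim=-1)  # keep all preceding features
            b = torch.cat([b, out_b], dim=-1)
        return a, b
```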