no code implementations • SemEval (NAACL) 2022 • Daewook Kang, Sung-Min Lee, Eunhwan Park, Seung-Hoon Na
In this study, we examine the ability of the contextualized representations of a pretrained language model to distinguish whether sequences from instructional articles are plausible or implausible.
1 code implementation • SemEval (NAACL) 2022 • Sung-Min Lee, Seung-Hoon Na
This paper describes our system for SemEval-2022 Task 12, ‘linking mathematical symbols to their descriptions’, which achieved first place on the leaderboard for both subtasks: named entity recognition (NER) and relation extraction (RE).
Joint Entity and Relation Extraction • Machine Reading Comprehension +1
1 code implementation • Conference 2023 • Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
Transformer-based models for question answering (QA) over tables and texts must process a “long” hybrid sequence of tabular and textual elements, which poses long-range reasoning problems.
Ranked #1 on Question Answering on HybridQA