no code implementations • COLING 2022 • Satoshi Sekine, Kouta Nakayama, Masako Nomoto, Maya Ando, Asuka Sumida, Koji Matsuda
The training data were provided by the Japanese categorization and the language links, and the task was to categorize Wikipedia pages in 30 languages that have no language links from Japanese Wikipedia (20M pages in total).
no code implementations • COLING 2022 • Shuhei Kurita, Hiroki Ouchi, Kentaro Inui, Satoshi Sekine
Semantic Role Labeling (SRL) is the task of labeling semantic arguments for marked semantic predicates.
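A toy sketch (my own illustration, not taken from the paper) of what SRL output looks like: for a marked predicate, each argument span receives a semantic role, which is commonly encoded as a BIO tag sequence for sequence-labeling models. The sentence, role labels, and helper function are all hypothetical.

```python
# Toy SRL illustration: for the marked predicate "gave",
# each argument span gets a PropBank-style semantic role label.
sentence = ["Mary", "gave", "the", "book", "to", "John"]
predicate_index = 1  # "gave" is the marked predicate

# Role labels for argument spans (start, end), inclusive
srl_labels = {
    (0, 0): "ARG0",  # giver: "Mary"
    (2, 3): "ARG1",  # thing given: "the book"
    (4, 5): "ARG2",  # recipient: "to John"
}

def spans_to_bio(tokens, labels):
    """Convert labeled spans to a BIO tag sequence, as used in sequence-labeling SRL."""
    tags = ["O"] * len(tokens)
    for (start, end), role in labels.items():
        tags[start] = "B-" + role
        for i in range(start + 1, end + 1):
            tags[i] = "I-" + role
    return tags

print(spans_to_bio(sentence, srl_labels))
# ['B-ARG0', 'O', 'B-ARG1', 'I-ARG1', 'B-ARG2', 'I-ARG2']
```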
no code implementations • NAACL (SocialNLP) 2021 • Hayato Kobayashi, Hiroaki Taguchi, Yoshimune Tabuchi, Chahine Koleejan, Ken Kobayashi, Soichiro Fujita, Kazuma Murao, Takeshi Masuyama, Taichi Yatsuka, Manabu Okumura, Satoshi Sekine
Ranking the user comments posted on a news article is important for online news services because comment visibility directly affects the user experience.
1 code implementation • Findings (EMNLP) 2021 • Kouta Nakayama, Shuhei Kurita, Akio Kobayashi, Yukino Baba, Satoshi Sekine
In this research, we propose a scheme to utilize all the systems that participated in the shared tasks.
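One simple way to utilize all participant systems is to combine their outputs by majority vote, a basic form of ensemble learning. The sketch below is a hypothetical illustration under that assumption; the page IDs, category labels, and per-system predictions are invented.

```python
from collections import Counter

# Hypothetical: each page receives one category label per participating system.
predictions = {
    "page_123": ["City", "City", "Person"],
    "page_456": ["Company", "Company", "Company"],
}

def ensemble(preds):
    """Pick the most frequent label per page (simple majority-vote ensemble)."""
    return {page: Counter(labels).most_common(1)[0][0]
            for page, labels in preds.items()}

print(ensemble(predictions))  # {'page_123': 'City', 'page_456': 'Company'}
```

Majority voting is only one option; weighting systems by their scores on held-out data is a common refinement.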
no code implementations • WASSA (ACL) 2022 • Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine
By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most promising approaches to parallel learning from different tasks and domains.
no code implementations • RepL4NLP (ACL) 2022 • Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine
Simultaneous training of a multi-task learning network on different domains or tasks is not always straightforward.
no code implementations • 22 Feb 2024 • Ziqi Yin, Hao Wang, Kaito Horio, Daisuke Kawahara, Satoshi Sekine
We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs).
1 code implementation • 10 May 2023 • Kenichiro Ando, Satoshi Sekine, Mamoru Komachi
Here, we propose WikiSQE, the first large-scale dataset for sentence quality estimation in Wikipedia.
no code implementations • 29 Sep 2021 • Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine
Simultaneous training of a multi-task learning network on different domains or tasks is not always straightforward.
no code implementations • AKBC 2021 • Satoshi Sekine, Kouta Nakayama, Maya Ando, Yu Usami, Masako Nomoto, Koji Matsuda
In our "Resource by Collaborative Contribution (RbCC)" scheme, we conducted a shared task on structuring Wikipedia to attract participants, while the submitted results are simultaneously used to construct a knowledge base.
no code implementations • 21 Jan 2020 • Tiphaine Viard, Thomas McLachlan, Hamidreza Ghader, Satoshi Sekine
Wikipedia is a huge opportunity for machine learning, being the largest semi-structured knowledge base available.
no code implementations • IJCNLP 2019 • Koki Washio, Satoshi Sekine, Tsuneaki Kato
Definition modeling includes acquiring word embeddings from dictionary definitions and generating definitions of words.
no code implementations • LREC 2020 • Hassan S. Shavarani, Satoshi Sekine
Wikipedia is a great source of general world knowledge that can guide NLP models to better understand the motivation behind their predictions.
1 code implementation • IJCNLP 2019 • Xiaoyu Shen, Jun Suzuki, Kentaro Inui, Hui Su, Dietrich Klakow, Satoshi Sekine
As a result, the content to be described in the text cannot be explicitly controlled.
no code implementations • WS 2019 • Tomoya Mizumoto, Hiroki Ouchi, Yoriko Isobe, Paul Reisert, Ryo Nagata, Satoshi Sekine, Kentaro Inui
This paper provides an analytical assessment of student short answer responses with a view to potential benefits in pedagogical contexts.
1 code implementation • WS 2019 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos
Monotonicity reasoning is one of the important reasoning skills for any intelligent natural language inference (NLI) model in that it requires the ability to capture the interaction between lexical and syntactic structures.
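A toy sketch (my own illustration, not from the paper) of the core idea: in an upward-monotone context a phrase can be replaced by a more general one while preserving entailment, and in a downward-monotone context by a more specific one. The hypernym relation and function below are hypothetical.

```python
# Hypothetical hypernym relation: phrase -> more general phrase
HYPERNYM = {"dog": "animal", "small dog": "dog"}

def is_more_general(specific, general):
    """True if `general` is reachable from `specific` via hypernym edges."""
    while specific in HYPERNYM:
        specific = HYPERNYM[specific]
        if specific == general:
            return True
    return False

# "Some dogs bark" entails "Some animals bark" (upward monotone under "some")
assert is_more_general("dog", "animal")
# "Every dog barks" entails "Every small dog barks" (downward monotone under "every")
assert is_more_general("small dog", "dog")
```

Capturing which direction a quantifier licenses, and interacting that with syntactic structure, is what makes this reasoning non-trivial for NLI models.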
1 code implementation • SEMEVAL 2019 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos
To investigate this issue, we introduce a new dataset, called HELP, for handling entailments with lexical and logical phenomena.
no code implementations • 26 Feb 2019 • Thai-Hoang Pham, Khai Mai, Nguyen Minh Trung, Nguyen Tuan Duc, Danushka Bollegala, Ryohei Sasano, Satoshi Sekine
The current state-of-the-art (SOTA) model for FG-NER relies heavily on manual efforts for building a dictionary and designing hand-crafted features.
no code implementations • AKBC 2019 • Satoshi Sekine, Akio Kobayashi, Kouta Nakayama
We believe this situation can be improved by the following changes: 1) designing the shared task to construct a knowledge base rather than evaluating only limited test data; 2) making the outputs of all the systems open to the public so that we can run ensemble learning to create better results than the best system; 3) repeating the task so that we can run it with larger and better training data drawn from the output of the previous task (bootstrapping and active learning). We conducted “SHINRA2018” under the above-mentioned scheme, and in this paper we report the results and future directions of the project.
1 code implementation • EMNLP 2018 • Saku Sugawara, Kentaro Inui, Satoshi Sekine, Akiko Aizawa
From this study, we observed that (i) baseline performance on the hard subsets degrades markedly compared to that on the entire datasets, (ii) hard questions require knowledge inference and multiple-sentence reasoning more than easy questions do, and (iii) multiple-choice questions tend to require a broader range of reasoning skills than answer-extraction and description questions.
no code implementations • COLING 2018 • Khai Mai, Thai-Hoang Pham, Minh Trung Nguyen, Tuan Duc Nguyen, Danushka Bollegala, Ryohei Sasano, Satoshi Sekine
However, there is little research on fine-grained NER (FG-NER), in which hundreds of named entity categories must be recognized, especially for non-English languages.
no code implementations • WS 2016 • Anietie Andy, Satoshi Sekine, Mugizi Rwebangira, Mark Dredze
In this paper, we propose an algorithm to reduce the number of unanswered questions in Yahoo!
no code implementations • WS 2016 • Anietie Andy, Mugizi Rwebangira, Satoshi Sekine
For unanswered questions that do not have a past resolved question with a shared need, we propose to use the best answer to a past resolved question with similar needs.