no code implementations • EcomNLP (COLING) 2020 • Shotaro Misawa, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma
To generate a slogan, we apply an encoder–decoder model which has shown effectiveness in many kinds of natural language generation tasks, such as abstractive summarization.
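The encoder–decoder setup can be sketched generically: an encoder maps the input description to a context vector, and a decoder emits the slogan token by token conditioned on that context. This is an illustrative toy only, not the paper's model; the deterministic "embeddings", the tiny vocabulary, and all function names are hypothetical stand-ins for learned components.

```python
def encode(description, dim=8):
    """Encoder: map the input description to a fixed-size context vector
    (here, the mean of deterministic toy token embeddings)."""
    tokens = description.lower().split()
    context = [0.0] * dim
    for tok in tokens:
        code = sum(ord(ch) for ch in tok)
        for i in range(dim):
            context[i] += ((code * (i + 1)) % 7) / 7.0
    n = max(len(tokens), 1)
    return [c / n for c in context]

def decode(context, max_len=5):
    """Decoder: emit tokens one at a time conditioned on the context.
    The modular-arithmetic 'argmax' stands in for a learned softmax."""
    vocab = ["quality", "you", "can", "trust", "<eos>"]
    state = sum(context)
    slogan = []
    for _ in range(max_len):
        word = vocab[int(state * 100) % len(vocab)]
        if word == "<eos>":
            break
        slogan.append(word)
        state += 1.37  # stand-in for a recurrent state update
    return " ".join(slogan)

slogan = decode(encode("fresh organic coffee beans"))
```

In a real model both functions would be learned networks (e.g. attention-based), but the interface — description in, generated slogan out — is the same.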
no code implementations • EcomNLP (COLING) 2020 • Ryo Shimura, Shotaro Misawa, Masahiro Sato, Tomoki Taniguchi, Tomoko Ohkuma
Previous laboratory studies have indicated that the ratings recorded by these systems differ from the actual evaluations of the users, owing to the influence of historical ratings in the system.
no code implementations • EACL 2021 • Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
We conduct experiments on three summarization models (one pretrained and two non-pretrained) and verify that our method improves performance.
no code implementations • AACL 2020 • Ryuji Kano, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma
The training task of the model is to predict whether a reply candidate is a true reply to a post.
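The training task described — decide whether a reply candidate is a true reply to a post — is binary classification over (post, candidate) pairs. A minimal sketch under that assumption, with toy bag-of-words encoders and a dot-product scorer standing in for the learned model (all names are hypothetical, not the authors' code):

```python
import math

def encode(text, vocab):
    """Toy bag-of-words vector over a fixed vocabulary."""
    tokens = text.lower().split()
    return [tokens.count(w) for w in vocab]

def reply_score(post, candidate, vocab):
    """Dot-product similarity squashed to (0, 1): a stand-in for the model's
    probability that `candidate` is a true reply to `post`."""
    p, c = encode(post, vocab), encode(candidate, vocab)
    dot = sum(a * b for a, b in zip(p, c))
    return 1.0 / (1.0 + math.exp(-dot))

def is_true_reply(post, candidate, vocab, threshold=0.5):
    """Binary decision corresponding to the training task."""
    return reply_score(post, candidate, vocab) > threshold
```

During training, positive pairs come from observed post–reply threads and negatives from sampled non-replies; the scorer's parameters (here, nothing but word overlap) would be learned.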
Extractive Summarization • Unsupervised Extractive Summarization
no code implementations • COLING 2020 • Motoki Taniguchi, Yoshihiro Ueda, Tomoki Taniguchi, Tomoko Ohkuma
To assess the difficulty of DA recognition on our corpus, we evaluate several models, including a pre-trained contextual representation model, as our baselines.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Toru Nishino, Ryota Ozaki, Yohei Momoki, Tomoki Taniguchi, Ryuji Kano, Norihisa Nakano, Yuki Tagawa, Motoki Taniguchi, Tomoko Ohkuma, Keigo Nakamura
We propose a novel reinforcement learning method with a reconstructor to improve the clinical correctness of generated reports, and to train the data-to-text module with a highly imbalanced dataset.
no code implementations • IJCNLP 2019 • Toru Nishino, Shotaro Misawa, Ryuji Kano, Tomoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
The results show that our model generates more consistent headlines, key phrases and categories.
no code implementations • WS 2019 • Yuki Tagawa, Motoki Taniguchi, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma, Takayuki Yamamoto, Keiichi Nemoto
Knowledge graphs (KGs) are widely used in various NLP tasks.
no code implementations • WS 2019 • Takumi Takahashi, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
This paper describes our model for the reading comprehension task of the MRQA shared task.
no code implementations • WS 2018 • Motoki Taniguchi, Tomoki Taniguchi, Takumi Takahashi, Yasuhide Miura, Tomoko Ohkuma
A simple entity linking approach with text match is used as the document selection component; it identifies relevant documents for a given claim by using mentioned entities as clues.
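The entity-match selection step can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: it assumes documents are keyed by title and treats a document as relevant when its title matches an entity mentioned in the claim.

```python
def select_documents(claim_entities, doc_titles):
    """Toy entity-match document selection: a document is relevant if its
    title (case-insensitively) matches an entity mentioned in the claim."""
    normalized = {e.lower() for e in claim_entities}
    return [t for t in doc_titles if t.lower() in normalized]
```

A production system would add alias resolution and fuzzy matching, but exact title match against claim entities is the core of the described heuristic.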
no code implementations • COLING 2018 • Yasuhide Miura, Ryuji Kano, Motoki Taniguchi, Tomoki Taniguchi, Shotaro Misawa, Tomoko Ohkuma
We propose a model that integrates discussion structures with neural networks to classify discourse acts.
no code implementations • IJCNLP 2017 • Yasuhide Miura, Tomoki Taniguchi, Motoki Taniguchi, Shotaro Misawa, Tomoko Ohkuma
We propose a hierarchical neural network model for language variety identification that integrates information from a social network.
no code implementations • ACL 2017 • Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
We propose a novel geolocation prediction model using a complex neural network.
no code implementations • WS 2016 • Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
In the test run of the task, the model achieved an accuracy of 40.91% and a median distance error of 69.50 km in message-level prediction, and an accuracy of 47.55% and a median distance error of 16.13 km in user-level prediction.