Search Results for author: Tomoki Taniguchi

Found 15 papers, 0 papers with code

Distinctive Slogan Generation with Reconstruction

no code implementations EcomNLP (COLING) 2020 Shotaro Misawa, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma

To generate a slogan, we apply an encoder–decoder model, which has shown effectiveness in many kinds of natural language generation tasks, such as abstractive summarization.

Abstractive Text Summarization · Decoder +1
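The snippet above mentions applying an encoder–decoder model for generation. As a rough illustration of the encode-then-greedy-decode control flow (not the paper's neural model — the "scores" below are toy bag-of-words counts standing in for a decoder's next-token distribution, and all names are assumptions):

```python
# Minimal sketch of an encoder-decoder generation loop with greedy decoding.
# Toy stand-in for a neural seq2seq model, for illustration only.

EOS = "<eos>"

def encode(src_tokens):
    """Toy 'encoder': represent the source as token counts."""
    state = {}
    for tok in src_tokens:
        state[tok] = state.get(tok, 0) + 1
    return state

def greedy_decode(state, max_len=10):
    """Greedy decoding: at each step emit the highest-scoring token."""
    out = []
    remaining = dict(state)
    for _ in range(max_len):
        if not remaining:
            out.append(EOS)
            break
        # pick the most frequent remaining token (ties broken alphabetically)
        tok = max(sorted(remaining), key=lambda t: remaining[t])
        out.append(tok)
        del remaining[tok]  # avoid repeating tokens in the output
    return out

slogan = greedy_decode(encode("fresh fresh coffee every day".split()))
# → ['fresh', 'coffee', 'day', 'every', '<eos>']
```

A real abstractive model would replace both functions with learned networks and score a full vocabulary at each step; only the loop structure carries over.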

Aspect-Similarity-Aware Historical Influence Modeling for Rating Prediction

no code implementations EcomNLP (COLING) 2020 Ryo Shimura, Shotaro Misawa, Masahiro Sato, Tomoki Taniguchi, Tomoko Ohkuma

Previous laboratory studies have indicated that the ratings recorded by these systems differ from the actual evaluations of the users, owing to the influence of historical ratings in the system.

Quantifying Appropriateness of Summarization Data for Curriculum Learning

no code implementations EACL 2021 Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma

We conduct experiments on three summarization models (one pretrained model and two non-pretrained models) and verify that our method improves performance.

Translation

A Large-Scale Corpus of E-mail Conversations with Standard and Two-Level Dialogue Act Annotations

no code implementations COLING 2020 Motoki Taniguchi, Yoshihiro Ueda, Tomoki Taniguchi, Tomoko Ohkuma

To assess the difficulty of DA recognition on our corpus, we evaluate several models, including a pre-trained contextual representation model, as our baselines.

Multi-Task Learning

Integrating Entity Linking and Evidence Ranking for Fact Extraction and Verification

no code implementations WS 2018 Motoki Taniguchi, Tomoki Taniguchi, Takumi Takahashi, Yasuhide Miura, Tomoko Ohkuma

A simple entity linking approach with text match is used as the document selection component; this component identifies relevant documents for a given claim by using mentioned entities as clues.

Entity Linking · Natural Language Inference +4
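The document selection step described above — matching a claim's mentioned entities against document titles — can be sketched roughly as follows (a toy illustration, not the authors' pipeline: the naive capitalized-span mention detector and the example data are assumptions):

```python
# Sketch of document selection via simple text-match entity linking:
# entities mentioned in a claim are matched against document titles.

def extract_entities(claim):
    """Very naive mention detector: runs of capitalized words."""
    entities, current = [], []
    for word in claim.replace(".", "").split():
        if word[0].isupper():
            current.append(word)
        elif current:
            entities.append(" ".join(current))
            current = []
    if current:
        entities.append(" ".join(current))
    return entities

def select_documents(claim, doc_titles):
    """Return titles that exactly match a mentioned entity."""
    mentions = set(extract_entities(claim))
    return [title for title in doc_titles if title in mentions]

docs = ["Barack Obama", "Hawaii", "Chicago"]
hits = select_documents("Barack Obama was born in Hawaii.", docs)
# → ['Barack Obama', 'Hawaii']
```

A production system would use a proper mention detector and fuzzy or alias-aware matching; exact title match is the simplest instance of the text-match idea.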

A Simple Scalable Neural Networks based Model for Geolocation Prediction in Twitter

no code implementations WS 2016 Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma

In the test run of the task, the model achieved an accuracy of 40.91% and a median distance error of 69.50 km in message-level prediction, and an accuracy of 47.55% and a median distance error of 16.13 km in user-level prediction.

Denoising
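The "median distance error" figures above can be computed as the median great-circle (haversine) distance between predicted and gold coordinates. A minimal sketch, assuming (lat, lon) pairs in degrees; the example coordinates are made up:

```python
# Median distance error for geolocation prediction, via haversine distance.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def median_distance_error(preds, golds):
    """Median haversine error over (lat, lon) prediction/gold pairs."""
    errors = sorted(haversine_km(*p, *g) for p, g in zip(preds, golds))
    n = len(errors)
    mid = n // 2
    return errors[mid] if n % 2 else (errors[mid - 1] + errors[mid]) / 2

# made-up predictions vs. gold locations
err = median_distance_error(
    [(35.68, 139.69), (40.71, -74.01)],   # predicted
    [(35.68, 139.69), (41.88, -87.63)],   # gold
)
```

The median (rather than the mean) is the standard choice here because geolocation errors are heavy-tailed: a few wildly wrong predictions would dominate a mean.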
