no code implementations • 31 Jan 2024 • Takashi Morita
This study investigates the effects of positional encoding on recurrent neural networks (RNNs), using synthetic benchmarks.
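Positional encoding is typically a deterministic function of the time index attached to each input frame. The sketch below pairs the standard sinusoidal encoding with a GRU by concatenation; the concatenation strategy, dimensions, and class names are illustrative assumptions, not necessarily the study's setup.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_encoding(seq_len, dim):
    """Standard sinusoidal positional encoding (sin/cos at geometrically
    spaced frequencies); `dim` is assumed even."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)    # (seq_len, 1)
    freqs = torch.exp(
        -torch.arange(0, dim, 2, dtype=torch.float32) * (math.log(10000.0) / dim)
    )                                                                # (dim // 2,)
    enc = torch.zeros(seq_len, dim)
    enc[:, 0::2] = torch.sin(pos * freqs)
    enc[:, 1::2] = torch.cos(pos * freqs)
    return enc

class PositionAwareRNN(nn.Module):
    """GRU whose input frames are concatenated with a positional encoding."""
    def __init__(self, input_dim, pos_dim, hidden_dim):
        super().__init__()
        self.pos_dim = pos_dim
        self.rnn = nn.GRU(input_dim + pos_dim, hidden_dim, batch_first=True)

    def forward(self, x):                         # x: (batch, seq_len, input_dim)
        pe = sinusoidal_encoding(x.size(1), self.pos_dim).to(x.device)
        pe = pe.unsqueeze(0).expand(x.size(0), -1, -1)
        out, _ = self.rnn(torch.cat([x, pe], dim=-1))
        return out                                # (batch, seq_len, hidden_dim)
```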
1 code implementation • 20 Aug 2023 • Pongpisit Thanasutives, Takashi Morita, Masayuki Numao, Ken-ichi Fukui
We propose a new parameter-adaptive uncertainty-penalized Bayesian information criterion (UBIC) for prioritizing the parsimonious partial differential equation (PDE) that sufficiently governs noisy spatio-temporal observations with only a few reliable terms.
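The abstract does not spell out the criterion's exact form, so the following is only a hedged sketch: a standard BIC augmented with a penalty that grows with the relative uncertainty of the retained PDE coefficients. The penalty form, the `penalty_weight` parameter (standing in for the adaptively tuned parameter), and the function names are assumptions for illustration.

```python
import numpy as np

def bic(residuals, n_params):
    """Classical BIC for a least-squares fit under Gaussian noise."""
    n = residuals.size
    return n * np.log(np.mean(residuals ** 2)) + n_params * np.log(n)

def ubic(residuals, coefs, coef_stderrs, penalty_weight):
    """Hypothetical uncertainty-penalized BIC: the usual BIC plus a term
    that grows with the relative uncertainty (stderr / |coef|) of the
    retained PDE coefficients. The exact form is an assumption."""
    rel_uncertainty = np.sum(coef_stderrs / np.abs(coefs))
    return bic(residuals, coefs.size) + penalty_weight * rel_uncertainty * np.log(residuals.size)

# Candidate PDEs with fewer, more certain terms score lower, so the
# preferred equation is the argmin of this criterion over candidates.
```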
1 code implementation • 26 Jun 2022 • Pongpisit Thanasutives, Takashi Morita, Masayuki Numao, Ken-ichi Fukui
This work is concerned with discovering the governing partial differential equation (PDE) of a physical system.
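PDE discovery in this line of work is commonly cast as sparse regression of the observed time derivative onto a library of candidate terms (as in SINDy/PDE-FIND). The sketch below shows that general recipe via sequentially thresholded least squares; it illustrates the framing, not the paper's specific algorithm.

```python
import numpy as np

def discover_pde(u_t, candidates, names, n_iter=10, threshold=0.1):
    """Sequentially thresholded least squares (SINDy-style): regress the
    observed time derivative u_t onto a library of candidate terms and
    repeatedly zero out small coefficients."""
    Theta = np.column_stack([c.ravel() for c in candidates])  # library matrix
    y = u_t.ravel()
    xi, *_ = np.linalg.lstsq(Theta, y, rcond=None)
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        keep = ~small
        if keep.any():
            xi[keep], *_ = np.linalg.lstsq(Theta[:, keep], y, rcond=None)
    return {name: coef for name, coef in zip(names, xi) if coef != 0.0}

# e.g. for Burgers'-type data the library might hold [u * u_x, u_xx, u, u_x],
# each computed from the observed field by finite differences.
```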
1 code implementation • 11 May 2020 • Takashi Morita, Hiroki Koda
In this study, we report our exploration of Text-To-Speech without Text (TTS without T) in the Zero Resource Speech Challenge 2020, in which participants proposed end-to-end, unsupervised systems that learn speech recognition and TTS together.
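The first half of such a pipeline turns raw speech into discrete, pseudo-text units. The paper learns this end to end; as a stand-in, the sketch below quantizes frame-level features with k-means, a common baseline for unsupervised unit discovery. The function names and the choice of k-means are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_units(features, n_units=64, seed=0):
    """Quantize frame-level speech features (shape: (n_frames, feat_dim),
    e.g. MFCCs) into discrete 'pseudo-text' unit IDs with k-means.
    A stand-in for the paper's learned encoder, for illustration only."""
    km = KMeans(n_clusters=n_units, random_state=seed, n_init=10).fit(features)
    ids = km.labels_
    # Collapse consecutive repeats so the output reads like a transcript
    # that a downstream TTS model could be trained to synthesize from.
    return [int(u) for i, u in enumerate(ids) if i == 0 or u != ids[i - 1]]
```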
2 code implementations • NAACL 2019 • Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy
We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.
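Concretely, such experiments compare a model's word-by-word surprisal, -log2 p(word | context), across minimally different sentences. A minimal sketch, assuming a model object exposing next-word probabilities (the `lm.next_word_probs` interface is hypothetical):

```python
import math

def surprisal(lm, sentence):
    """Word-by-word surprisal, -log2 p(w_i | w_<i), for a model exposing
    next_word_probs(prefix) -> {word: probability}. The `lm` interface
    is an assumption for illustration; any autoregressive LM fits."""
    words = sentence.split()
    scores = []
    for i, w in enumerate(words):
        p = lm.next_word_probs(words[:i]).get(w, 1e-12)  # floor unseen words
        scores.append((w, -math.log2(p)))
    return scores

# Controlled comparison: if the model tracks syntactic state, surprisal at
# the disambiguating verb should differ across matched garden-path variants:
#   surprisal(lm, "the horse raced past the barn fell")
#   surprisal(lm, "the horse that was raced past the barn fell")
```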
no code implementations • 5 Nov 2018 • Takashi Morita, Hiroki Koda
A pervasive belief with regard to the differences between human language and animal vocal sequences (song) is that they belong to different classes of computational complexity, with animal song belonging to regular languages, whereas human language is superregular.
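The divide can be made concrete: a finite-state device can recognize a repeating pattern such as (ab)*, but no finite-state device can recognize a^n b^n, which requires unbounded counting and is the textbook example of a superregular (context-free) pattern. A minimal illustration:

```python
def matches_ab_star(s):
    """Finite-state recognizer for the regular language (ab)*:
    two states and no memory beyond the current state."""
    state = 0
    for ch in s:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            state = 0
        else:
            return False
    return state == 0

def matches_an_bn(s):
    """Recognizer for a^n b^n, a superregular (context-free) language:
    deciding it requires counting, i.e. memory no finite-state device has."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

assert matches_ab_star("ababab") and not matches_ab_star("aaabbb")
assert matches_an_bn("aaabbb") and not matches_an_bn("ababab")
```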
no code implementations • WS 2018 • Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell
RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.
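One established way to probe such generalizations is a 2x2 design crossing a wh-filler with a gap and computing a difference-in-differences of surprisal over a critical region, as in these authors' filler-gap work. A hedged sketch, reusing the `surprisal` helper from above; the region indexing and sign convention are illustrative assumptions:

```python
def licensing_interaction(lm, wh_gap, nowh_gap, wh_nogap, nowh_nogap, region):
    """Difference-in-differences of summed surprisal over a critical region
    (a (start, end) word-index pair) in a 2x2 wh-filler x gap design.
    If the model has learned that fillers license gaps, the gap penalty
    should shrink when a wh-filler is present."""
    def region_s(sentence):
        return sum(s for _, s in surprisal(lm, sentence)[region[0]:region[1]])
    gap_penalty_with_wh = region_s(wh_gap) - region_s(wh_nogap)
    gap_penalty_without_wh = region_s(nowh_gap) - region_s(nowh_nogap)
    return gap_penalty_with_wh - gap_penalty_without_wh
```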
1 code implementation • 5 Sep 2018 • Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Levy
Recurrent neural networks (RNNs) are the state of the art in sequence modeling for natural language.
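For reference, a minimal word-level RNN language model of the kind under evaluation, sketched in PyTorch with assumed hyperparameters:

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Minimal word-level LSTM language model: embed, recur, project
    back to the vocabulary. Hyperparameters are assumed defaults."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):        # tokens: (batch, seq_len)
        h, state = self.lstm(self.embed(tokens), state)
        return self.proj(h), state                # logits over next words

# Softmaxing the logits at each step gives the next-word distribution from
# which surprisal measurements like those sketched above can be computed.
```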