1 code implementation • Findings (ACL) 2021 • Zhousi Chen, Longtu Zhang, Aizhan Imankulova, Mamoru Komachi
We propose two fast neural combinatory models for constituency parsing: binary and multi-branching.
no code implementations • 13 Mar 2021 • Fei Yuan, Longtu Zhang, Huang Bojun, Yaobo Liang
In most machine learning tasks, we evaluate a model $M$ on a given data population $S$ by measuring a population-level metric $F(S;M)$.
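To make the notation concrete, here is a minimal sketch of population-level evaluation: the metric $F$ is computed over the whole population $S$ at once rather than per sample. The function and metric names below are illustrative assumptions, not the paper's code.

```python
from typing import Callable, Sequence, Tuple

def evaluate(population: Sequence[Tuple[list, int]],
             model: Callable[[list], int],
             metric: Callable[[Sequence[int], Sequence[int]], float]) -> float:
    """Compute F(S; M): apply model M to every sample in S, then score the
    collected predictions with a population-level metric F (e.g., F1, BLEU)."""
    gold = [y for _, y in population]
    pred = [model(x) for x, _ in population]
    return metric(gold, pred)

def f1_metric(gold: Sequence[int], pred: Sequence[int]) -> float:
    # Binary F1 computed over the whole population; degenerate cases return 0.
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```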
no code implementations • 1 Mar 2019 • Longtu Zhang, Mamoru Komachi
Unsupervised neural machine translation (UNMT) requires only monolingual data from similar language pairs during training and can produce bi-directional translation models with relatively good performance on alphabetic languages (Lample et al., 2018).
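As a point of reference, a minimal sketch of the iterative back-translation loop used in standard UNMT recipes such as Lample et al. (2018) might look as follows; the model objects and their `translate`/`train_step` methods are placeholders (assumptions), not this paper's implementation.

```python
def train_unmt(mono_src, mono_tgt, model_s2t, model_t2s, n_iterations=10):
    """Train bi-directional translation models from monolingual corpora only."""
    for _ in range(n_iterations):
        # Back-translate monolingual target sentences into synthetic
        # (source, target) pairs, then train the source-to-target model on them.
        synthetic_src = [model_t2s.translate(t) for t in mono_tgt]
        model_s2t.train_step(list(zip(synthetic_src, mono_tgt)))

        # Symmetrically, back-translate source sentences to train the
        # target-to-source model, so both directions improve together.
        synthetic_tgt = [model_s2t.translate(s) for s in mono_src]
        model_t2s.train_step(list(zip(mono_src, synthetic_tgt)))
    return model_s2t, model_t2s
```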
no code implementations • WS 2018 • Longtu Zhang, Mamoru Komachi
Recent neural machine translation (NMT) systems have been greatly improved by encoder-decoder models with attention mechanisms and sub-word units.
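For illustration, a minimal sketch of greedy sub-word segmentation is shown below; the toy vocabulary and the "@@" continuation marker are assumptions, and real NMT systems typically learn sub-word units with BPE or SentencePiece rather than a hand-built vocabulary.

```python
def segment(word: str, vocab: set) -> list:
    """Split a word into the longest sub-word units found in `vocab`."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1
        if end == start:                      # unknown character: emit as-is
            pieces.append(word[start])
            start += 1
        else:
            pieces.append(word[start:end])
            start = end
    # Mark non-final pieces so the segmentation is reversible, BPE-style.
    return [p + "@@" if i < len(pieces) - 1 else p
            for i, p in enumerate(pieces)]

vocab = {"trans", "lation", "atten", "tion", "sub", "word"}
print(segment("translation", vocab))   # ['trans@@', 'lation']
print(segment("attention", vocab))     # ['atten@@', 'tion']
```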