no code implementations • Findings (EMNLP) 2021 • Dhanasekar Sundararaman, Henry Tsai, Kuang-Huei Lee, Iulia Turc, Lawrence Carin
It has been shown that training multi-task models with auxiliary tasks can improve the target task quality through cross-task transfer.
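A minimal sketch of the idea, assuming a shared encoder with separate task heads and a hypothetical auxiliary-loss weight (not the paper's exact setup):

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, target_classes=3, aux_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared across tasks
        self.target_head = nn.Linear(hidden, target_classes)
        self.aux_head = nn.Linear(hidden, aux_classes)

    def forward(self, ids):
        # Cross-task transfer happens through the shared encoder.
        h = self.encoder(self.embed(ids)).mean(dim=1)  # pooled representation
        return self.target_head(h), self.aux_head(h)

model = MultiTaskModel()
loss_fn = nn.CrossEntropyLoss()
ids = torch.randint(0, 30522, (8, 16))       # toy batch
target_y = torch.randint(0, 3, (8,))
aux_y = torch.randint(0, 2, (8,))
t_logits, a_logits = model(ids)
aux_weight = 0.3                              # assumed mixing weight for the auxiliary loss
loss = loss_fn(t_logits, target_y) + aux_weight * loss_fn(a_logits, aux_y)
loss.backward()
```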
no code implementations • EMNLP 2021 • Pu-Chin Chen, Henry Tsai, Srinadh Bhojanapalli, Hyung Won Chung, Yin-Wen Chang, Chun-Sung Ferng
Our analysis shows that the gain actually comes from moving positional information from the input to the attention layer.
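A minimal sketch of that contrast, assuming a T5-style learned relative-position bias added to the attention scores instead of a positional embedding added to the input (the paper's exact parameterization may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionWithRelativeBias(nn.Module):
    def __init__(self, hidden=256, max_len=128):
        super().__init__()
        self.q = nn.Linear(hidden, hidden)
        self.k = nn.Linear(hidden, hidden)
        self.v = nn.Linear(hidden, hidden)
        # One learned scalar per relative offset; nothing is added to the input.
        self.rel_bias = nn.Embedding(2 * max_len - 1, 1)
        self.max_len = max_len

    def forward(self, x):  # x: (batch, seq, hidden), seq <= max_len
        scores = self.q(x) @ self.k(x).transpose(-2, -1) / x.size(-1) ** 0.5
        pos = torch.arange(x.size(1))
        rel = pos[None, :] - pos[:, None] + self.max_len - 1  # offsets -> indices
        scores = scores + self.rel_bias(rel).squeeze(-1)      # position enters here
        return F.softmax(scores, dim=-1) @ self.v(x)

out = AttentionWithRelativeBias()(torch.randn(2, 10, 256))   # (2, 10, 256)
```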
2 code implementations • ICLR 2021 • Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder
We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models.
Ranked #1 on Cross-Lingual NER
Tasks: Cross-Lingual Natural Language Inference, Cross-Lingual NER, +4
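A minimal sketch of the weight-sharing choice being re-evaluated, with hypothetical sizes; tying reuses one matrix for both the input lookup and the output logits:

```python
import torch.nn as nn

class LMOutputLayer(nn.Module):
    def __init__(self, vocab_size=32000, hidden=512, tie_weights=True):
        super().__init__()
        self.input_emb = nn.Embedding(vocab_size, hidden)
        self.output_proj = nn.Linear(hidden, vocab_size, bias=False)
        if tie_weights:
            # The standard practice under study: share one (vocab, hidden)
            # matrix between the input embedding and the output projection.
            self.output_proj.weight = self.input_emb.weight

    def forward(self, hidden_states):             # (batch, seq, hidden)
        return self.output_proj(hidden_states)    # logits over the vocabulary
```

Setting `tie_weights=False` decouples the two matrices, which is the alternative the paper evaluates against the tied default.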
no code implementations • 15 Aug 2020 • Henry Tsai, Jayden Ooi, Chun-Sung Ferng, Hyung Won Chung, Jason Riesa
Transformer-based models have achieved state-of-the-art results in many tasks in natural language processing.
no code implementations • 1 Sep 2019 • Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman
The recently proposed massively multilingual neural machine translation (NMT) system has been shown to be capable of translating over 100 languages to and from English within a single model.
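A minimal sketch of the single-model interface, assuming the common convention of prepending a target-language token to the source (following Johnson et al., 2017; the exact token format here is an assumption):

```python
def make_multilingual_input(src_sentence: str, tgt_lang: str) -> str:
    """Prepend a target-language token so one shared model knows what to emit."""
    # Hypothetical token format: "<2de> How are you?" requests German output.
    return f"<2{tgt_lang}> {src_sentence}"

print(make_multilingual_input("How are you?", "de"))  # -> "<2de> How are you?"
```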
no code implementations • IJCNLP 2019 • Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, Amelia Archer
We propose a practical scheme to train a single multilingual sequence labeling model that yields state-of-the-art results and is small and fast enough to run on a single CPU.
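A minimal sketch of such a compact sequence labeler, with hypothetical sizes; the point is a per-token classification head on a small shared encoder, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SmallSequenceLabeler(nn.Module):
    def __init__(self, vocab_size=32000, hidden=128, num_labels=9, layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, nhead=4,
                                               dim_feedforward=256,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.classifier = nn.Linear(hidden, num_labels)  # one label per token

    def forward(self, ids):  # ids: (batch, seq)
        return self.classifier(self.encoder(self.emb(ids)))

model = SmallSequenceLabeler()
logits = model(torch.randint(0, 32000, (1, 12)))  # (1, 12, 9) per-token scores
```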