no code implementations • 2 Apr 2024 • James Anibal, Hannah Huth, Ming Li, Lindsey Hazen, Yen Minh Lam, Nguyen Thi Thu Hang, Michael Kleinman, Shelley Ost, Christopher Jackson, Laura Sprabery, Cheran Elangovan, Balaji Krishnaiah, Lee Akst, Ioan Lina, Iqbal Elyazar, Lenny Ekwati, Stefan Jansen, Richard Nduwayezu, Charisse Garcia, Jeffrey Plum, Jacqueline Brenner, Miranda Song, Emily Ricotta, David Clifton, C. Louise Thwaites, Yael Bensoussan, Bradford Wood
This report introduces a consortium of partners for global work, presents the application used for data collection, and showcases the potential of informative voice EHR to advance the scalability and diversity of audio AI.
no code implementations • 5 Feb 2024 • Ashley Shin, Qiao Jin, James Anibal, Zhiyong Lu
Our study suggests that repurposing the user query logs of academic search engines can be a promising way to train state-of-the-art models for explaining literature recommendations.
no code implementations • 8 Oct 2021 • Hieu Nguyen, Long Phan, James Anibal, Alec Peltekian, Hieu Tran
Text summarization is a challenging task within natural language processing that involves text generation from lengthy input sequences.
1 code implementation • 18 Jun 2021 • Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen
In this paper, we propose SPBERT, a transformer-based language model pre-trained on massive SPARQL query logs.
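To make the pretraining idea concrete, here is a minimal sketch of preparing SPARQL query logs for masked-language-model pretraining. This is an illustration only: SPBERT's actual tokenizer and masking scheme are not described in the snippet, so the whitespace tokenization, mask probability, and function name below are assumptions.

```python
import random

def mask_sparql_tokens(query, mask_token="[MASK]", mask_prob=0.15, seed=1):
    """Randomly mask tokens of a SPARQL query for MLM-style pretraining.

    Simplified illustration (hypothetical helper, not SPBERT's actual
    pipeline): tokens come from a plain whitespace split, and each token
    is masked independently with probability `mask_prob`.
    """
    rng = random.Random(seed)
    tokens = query.split()
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)   # the model must recover this token
        else:
            masked.append(tok)
            labels.append(None)  # position is not predicted
    return masked, labels

masked, labels = mask_sparql_tokens(
    "SELECT ?name WHERE { ?person foaf:name ?name }"
)
```

During pretraining, the masked sequence would be fed to the encoder and the loss computed only at positions where `labels` is not `None`.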
1 code implementation • ACL (NLP4Prog) 2021 • Long Phan, Hieu Tran, Daniel Le, Hieu Nguyen, James Anibal, Alec Peltekian, Yanfang Ye
We train CoTexT on different combinations of the available PL corpora, including both "bimodal" and "unimodal" data.
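The bimodal/unimodal distinction can be sketched as follows. This is a hedged illustration, not CoTexT's actual corpus-construction code: the function name, the input format of (docstring, code) pairs, and the simple concatenation used for the bimodal case are all assumptions.

```python
def make_pretraining_examples(pairs, mode):
    """Build text-to-text pretraining sequences from (docstring, code) pairs.

    Hypothetical sketch of the "bimodal" vs "unimodal" split mentioned in
    the abstract: bimodal sequences pair natural language with code, while
    unimodal sequences contain code only.
    """
    examples = []
    for nl, code in pairs:
        if mode == "bimodal":
            examples.append(f"{nl} {code}")  # natural language + code together
        elif mode == "unimodal":
            examples.append(code)            # code-only sequence
        else:
            raise ValueError(f"unknown mode: {mode}")
    return examples

pairs = [("Adds two numbers.", "def add(a, b): return a + b")]
bimodal = make_pretraining_examples(pairs, "bimodal")
unimodal = make_pretraining_examples(pairs, "unimodal")
```

Mixing both kinds of sequence lets a single model see code in isolation as well as aligned with its natural-language description.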