no code implementations • 1 Jan 2024 • Ke Yang, Jiateng Liu, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, Heng Ji, ChengXiang Zhai
The prominent large language models (LLMs) of today differ from past language models not only in size, but also in that they are trained on a combination of natural language and formal language (code).
no code implementations • 1 Aug 2022 • Seth Ockerman, John Wu, Christopher Stewart
Taken together, the answers to these questions lay the foundation for a new dataset-aware benchmarking paradigm.
1 code implementation • 31 Dec 2019 • John Wu
We are able to accurately predict a galaxy's logarithmic HI mass fraction, log(M_HI / M_⋆), by training a CNN on galaxies in the ALFALFA 40% sample.
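The regression setup described above can be sketched as a small convolutional network that maps a galaxy image to a single scalar, log(M_HI / M_⋆). This is a minimal illustrative sketch, not the paper's actual architecture: the layer sizes, input resolution, and the `GasFractionCNN` name are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class GasFractionCNN(nn.Module):
    """Toy CNN regressor: galaxy image -> predicted log(M_HI / M_star).

    Hypothetical architecture for illustration only; the published model
    differs in depth, input size, and training details.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1 so any input size works
        )
        self.head = nn.Linear(32, 1)  # single regression output

    def forward(self, x):
        h = self.features(x).flatten(1)   # (batch, 32)
        return self.head(h).squeeze(-1)   # (batch,) predicted gas fractions

model = GasFractionCNN()
batch = torch.randn(4, 3, 64, 64)  # dummy batch of 64x64 RGB galaxy cutouts
pred = model(batch)                # one scalar prediction per galaxy
print(pred.shape)
```

In practice such a model would be trained with a regression loss (e.g. MSE) against catalog gas fractions, with the imaging cutouts as input.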