Search Results for author: John Wu

Found 3 papers, 1 paper with code

If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

no code implementations • 1 Jan 2024 • Ke Yang, Jiateng Liu, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, Heng Ji, ChengXiang Zhai

The prominent large language models (LLMs) of today differ from past language models not only in size, but also in the fact that they are trained on a combination of natural language and formal language (code).

Code Generation

A Case for Dataset Specific Profiling

no code implementations • 1 Aug 2022 • Seth Ockerman, John Wu, Christopher Stewart

Taken together, the answers to these questions lay the foundation for a new dataset-aware benchmarking paradigm.

Benchmarking Model Selection

Connecting Optical Morphology, Environment, and H I Mass Fraction for Low-Redshift Galaxies Using Deep Learning

1 code implementation • 31 Dec 2019 • John Wu

We are able to accurately predict a galaxy's logarithmic HI mass fraction, f_HI ≡ log(M_HI / M_⋆), by training a CNN on galaxies in the ALFALFA 40% sample.
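The snippet above describes CNN regression of a scalar quantity from galaxy imaging. As an illustration only, the setup can be sketched as a small convolutional network with a single-value regression head; the architecture, layer sizes, and 64×64 input below are hypothetical assumptions, not the paper's actual model.

```python
# Hypothetical sketch: a CNN that regresses one scalar (e.g., a logarithmic
# gas mass fraction) per input galaxy image. Illustrative architecture only.
import torch
import torch.nn as nn

class MassFractionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two conv blocks extract spatial features from the image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling plus a linear layer maps features to one scalar.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = MassFractionCNN()
batch = torch.randn(4, 3, 64, 64)  # four fake 64x64 3-channel galaxy cutouts
pred = model(batch)                # one predicted scalar per galaxy
print(pred.shape)                  # torch.Size([4, 1])
```

Training such a model would minimize a regression loss (e.g., mean squared error) against catalog mass fractions; those details are not specified in the snippet above.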
