no code implementations • 12 Dec 2023 • Tom Davidson, Jean-Stanislas Denain, Pablo Villalobos, Guillem Bas
State-of-the-art AI systems can be significantly improved without expensive retraining via "post-training enhancements": techniques applied after initial training, such as fine-tuning the system to use a web browser.
1 code implementation • 26 Oct 2022 • Pablo Villalobos, Anson Ho, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, Marius Hobbhahn
We investigate the potential constraints on LLM scaling posed by the availability of public human-generated text data.
no code implementations • 5 Jul 2022 • Pablo Villalobos, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, Anson Ho, Marius Hobbhahn
From 1950 to 2018, the size of language models increased steadily by seven orders of magnitude.
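The stated trend implies a doubling time that can be derived directly: seven orders of magnitude over 68 years corresponds to roughly a doubling every three years. A minimal arithmetic sketch (function name is illustrative, not from the paper):

```python
import math

def implied_doubling_time(years: float, orders_of_magnitude: float) -> float:
    """Doubling time (in years) implied by growing `orders_of_magnitude`
    over a span of `years`; each order of magnitude is log2(10) doublings."""
    doublings = orders_of_magnitude / math.log10(2)
    return years / doublings

# Seven orders of magnitude over 1950-2018 (68 years), per the abstract:
implied_doubling_time(68, 7)  # about 2.9 years per doubling
```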
1 code implementation • 11 Feb 2022 • Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Villalobos
Since the advent of Deep Learning in the early 2010s, the scaling of training compute has accelerated, doubling approximately every 6 months.
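A six-month doubling time compounds quickly; the cumulative factor over a given span follows directly from the exponent. A small sketch of that arithmetic (the helper name is illustrative, assuming the ~6-month doubling reported in the abstract):

```python
def growth_factor(years: float, doubling_time_years: float = 0.5) -> float:
    """Multiplicative growth after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

# Doubling every 6 months means 4x per year, so over five years:
growth_factor(5)  # 2**10 = 1024x more training compute
```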