Search Results for author: Dominic Masters

Found 10 papers, 5 papers with code

GenCast: Diffusion-based ensemble forecasting for medium-range weather

no code implementations • 25 Dec 2023 • Ilan Price, Alvaro Sanchez-Gonzalez, Ferran Alet, Tom R. Andersson, Andrew El-Kadi, Dominic Masters, Timo Ewalds, Jacklynn Stott, Shakir Mohamed, Peter Battaglia, Remi Lam, Matthew Willson

Weather forecasts are fundamentally uncertain, so predicting the range of probable weather scenarios is crucial for important decisions, from warning the public about hazardous weather, to planning renewable energy use.

Decision Making • Weather Forecasting
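The idea named in the title, forming an ensemble by repeatedly sampling a generative forecast model and summarising the samples, can be sketched in a few lines. Everything below (the `sample_fn` stand-in, the member count, the toy state) is hypothetical and is not GenCast's model or API:

```python
import numpy as np

def ensemble_forecast(sample_fn, initial_state, n_members=50):
    """Hypothetical helper: `sample_fn` stands in for one stochastic rollout of
    a generative (e.g. diffusion-based) forecast model. Drawing many samples
    turns a point forecast into a distribution over weather scenarios."""
    members = np.stack([sample_fn(initial_state) for _ in range(n_members)])
    return members.mean(axis=0), members.std(axis=0)  # ensemble mean and spread

# Toy stand-in "model": perturb the state with noise.
rng = np.random.default_rng(0)
mean, spread = ensemble_forecast(
    lambda s: s + rng.normal(scale=0.5, size=s.shape),
    np.zeros((4, 4)),
)
```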

Generating QM1B with PySCF$_{\text{IPU}}$

2 code implementations • NeurIPS 2023 • Alexander Mathiasen, Hatem Helal, Kerstin Klaser, Paul Balanca, Josef Dean, Carlo Luschi, Dominique Beaini, Andrew Fitzgibbon, Dominic Masters

The benefits that billion-example datasets have brought to deep learning in vision and language are yet to be unlocked for quantum chemistry, where progress is constrained by comparatively small datasets of 100k to 20M training examples.
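Datasets like QM1B are built by running a DFT calculation per molecule and recording properties such as the HOMO-LUMO gap. As a rough illustration of that per-molecule step, here is a minimal sketch using standard CPU PySCF (not the IPU port the paper describes); the molecule, basis, and functional are illustrative:

```python
from pyscf import gto, dft

# Build a water molecule and run a B3LYP DFT calculation.
mol = gto.M(
    atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
    basis="sto-3g",
)
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()

# HOMO-LUMO gap from the converged orbital energies (restricted, closed shell).
n_occ = mol.nelectron // 2
gap = mf.mo_energy[n_occ] - mf.mo_energy[n_occ - 1]
print(f"HOMO-LUMO gap: {gap:.4f} Ha")
```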

GPS++: An Optimised Hybrid MPNN/Transformer for Molecular Property Prediction

1 code implementation • 18 Nov 2022 • Dominic Masters, Josef Dean, Kerstin Klaser, Zhiyi Li, Sam Maddrell-Mander, Adam Sanders, Hatem Helal, Deniz Beker, Ladislav Rampášek, Dominique Beaini

This technical report presents GPS++, the first-place solution to the Open Graph Benchmark Large-Scale Challenge (OGB-LSC 2022) for the PCQM4Mv2 molecular property prediction task.

Denoising • Molecular Property Prediction • +1
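As a hedged sketch of the general hybrid pattern the title refers to, combining a local MPNN update with global self-attention in a single layer, here is a hypothetical PyTorch block; the layer sizes, sum aggregation, and normalisation choices are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn

class HybridMPNNTransformerBlock(nn.Module):
    """Hypothetical sketch of one GPS-style hybrid layer: a local
    message-passing update plus dense self-attention over all atoms,
    both summed into the node state."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)  # message from (sender, receiver) features
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, dim] node features; edge_index: [2, num_edges] as (src, dst)
        src, dst = edge_index
        messages = self.msg(torch.cat([x[src], x[dst]], dim=-1))
        local = torch.zeros_like(x).index_add_(0, dst, messages)  # sum-aggregate onto receivers
        global_, _ = self.attn(x[None], x[None], x[None])         # all-to-all attention
        h = self.norm1(x + local + global_[0])
        return self.norm2(h + self.ffn(h))

# Toy usage: 5 atoms joined in a cycle of bonds.
x = torch.randn(5, 64)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])
out = HybridMPNNTransformerBlock(64)(x, edge_index)
```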

8-bit Numerical Formats for Deep Neural Networks

no code implementations • 6 Jun 2022 • Badreddine Noune, Philip Jones, Daniel Justus, Dominic Masters, Carlo Luschi

Given the current trend of increasing size and complexity of machine learning architectures, it has become of critical importance to identify new approaches to improve the computational efficiency of model training.

Computational Efficiency • Image Classification
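As a hedged illustration of what an 8-bit floating-point format means numerically, here is a hypothetical NumPy routine that rounds float32 values onto a simulated sign/exponent/mantissa grid (a 1-4-3 split by default). Saturation and subnormal handling are deliberately ignored, and this is not the paper's exact scheme:

```python
import numpy as np

def quantize_float(x, exp_bits=4, man_bits=3):
    """Round float32 values onto a simulated low-precision floating-point grid
    (sign + exp_bits exponent + man_bits mantissa). Illustrative only."""
    x = np.asarray(x, dtype=np.float32)
    m, e = np.frexp(x)                    # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** (man_bits + 1)         # implicit leading bit + man_bits fraction bits
    m = np.round(m * scale) / scale
    bias = 2 ** (exp_bits - 1) - 1
    e = np.clip(e, -bias + 1, bias + 1)   # crude exponent-range clamp
    return np.ldexp(m, e).astype(np.float32)

weights = np.random.default_rng(0).normal(scale=0.1, size=5).astype(np.float32)
print(weights)
print(quantize_float(weights))
```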

Revisiting Small Batch Training for Deep Neural Networks

3 code implementations • 20 Apr 2018 • Dominic Masters, Carlo Luschi

Modern deep neural network training is typically based on mini-batch stochastic gradient optimization.
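The abstract's starting point, mini-batch stochastic gradient optimisation, is easy to show concretely. Below is a minimal sketch on a toy least-squares problem, using a batch size in the small range (m = 2 to 32) the paper finds works best; the problem, learning rate, and epoch count are illustrative:

```python
import numpy as np

# Toy linear regression data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=1024)

# Plain mini-batch SGD with a small batch size.
w = np.zeros(10)
batch_size, lr = 8, 0.05
for epoch in range(20):
    perm = rng.permutation(len(X))
    for i in range(0, len(X), batch_size):
        idx = perm[i:i + batch_size]
        grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)  # batch gradient
        w -= lr * grad

print("parameter error:", np.linalg.norm(w - w_true))
```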
