1 code implementation • 1 Mar 2024 • Pascal Pernot
As the generative probability distribution used to simulate synthetic errors is often unconstrained, the sensitivity of simulated reference values to the choice of generative distribution can be problematic, casting doubt on the calibration diagnostic.
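This sensitivity can be illustrated with a minimal Monte Carlo sketch (not the paper's code; the distribution choices, sample sizes, and function names are arbitrary assumptions) comparing 95% reference intervals of the ZMS statistic under a normal versus a heavier-tailed Student's-t generative distribution, both with unit variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def zms_interval(sampler, n_pts=1000, n_mc=2000):
    """95% Monte Carlo reference interval of the ZMS statistic
    (mean squared z-score) under a chosen generative distribution."""
    stats = [np.mean(sampler(n_pts) ** 2) for _ in range(n_mc)]
    return np.percentile(stats, [2.5, 97.5])

normal = lambda n: rng.standard_normal(n)
# Student's t (nu=5) rescaled to unit variance (var = nu/(nu-2) = 5/3)
student = lambda n: rng.standard_t(5, n) / np.sqrt(5.0 / 3.0)

lo_n, hi_n = zms_interval(normal)
lo_t, hi_t = zms_interval(student)
print(f"normal : [{lo_n:.3f}, {hi_n:.3f}]")
print(f"student: [{lo_t:.3f}, {hi_t:.3f}]")
```

Both intervals are centered on 1, but the heavier-tailed distribution yields a noticeably wider reference interval, so the same observed ZMS value can pass or fail depending on the assumed generative distribution.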
1 code implementation • 15 Feb 2024 • Pascal Pernot
Average calibration of the prediction uncertainties of machine learning regression tasks can be tested in two ways: one is to estimate the calibration error (CE) as the difference between the mean squared error (MSE) and the mean variance (MV), or mean squared uncertainty; the alternative is to compare the mean squared z-scores (ZMS), or scaled errors, to 1.
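The two average-calibration tests can be sketched as follows (hypothetical function and variable names; assumes errors and predicted uncertainties are available as arrays):

```python
import numpy as np

def calibration_stats(errors, uncertainties):
    """Two average-calibration tests for regression UQ:
    CE  = MSE - MV        (near 0 for calibrated uncertainties)
    ZMS = mean((e/u)^2)   (near 1 for calibrated uncertainties)."""
    e = np.asarray(errors, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    ce = np.mean(e ** 2) - np.mean(u ** 2)   # mean squared error minus mean variance
    zms = np.mean((e / u) ** 2)              # mean squared z-score
    return ce, zms

# synthetic, perfectly calibrated errors: e = u * eps, eps ~ N(0, 1)
rng = np.random.default_rng(1)
u = rng.uniform(0.5, 2.0, 50_000)
e = u * rng.standard_normal(50_000)
ce, zms = calibration_stats(e, u)
```

For this calibrated synthetic set, CE is close to 0 and ZMS close to 1; systematic over- or under-estimation of the uncertainties shifts both statistics away from their targets.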
1 code implementation • 18 Oct 2023 • Pascal Pernot
Binwise Variance Scaling (BVS) has recently been proposed as a post hoc recalibration method for the prediction uncertainties of machine learning regression problems that is capable of more efficient corrections than uniform variance (or temperature) scaling.
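A minimal sketch of the binwise idea, assuming a variance-based flavor in which points are sorted by predicted uncertainty, split into equal-count bins, and each bin receives one multiplicative factor that zeroes its binwise calibration error (names and bin count are illustrative, not the paper's implementation):

```python
import numpy as np

def bvs_recalibrate(errors, uncertainties, n_bins=5):
    """Binwise Variance Scaling sketch: one variance-scaling factor per
    equal-count bin of points sorted by predicted uncertainty."""
    e = np.asarray(errors, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    u_cal = np.empty_like(u)
    for idx in np.array_split(np.argsort(u), n_bins):
        # factor chosen so that mean(e^2) = mean(u_cal^2) within the bin
        s = np.sqrt(np.mean(e[idx] ** 2) / np.mean(u[idx] ** 2))
        u_cal[idx] = s * u[idx]
    return u_cal

# toy miscalibrated data: true error scale differs from u and varies with u
rng = np.random.default_rng(2)
u = rng.uniform(0.5, 2.0, 20_000)
e = (0.5 + u) * rng.standard_normal(20_000)
u_cal = bvs_recalibrate(e, u)
```

Because the miscalibration here depends on the uncertainty level, a single uniform scaling factor could not fix all bins at once, while the binwise factors can.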
1 code implementation • 12 Sep 2023 • Pascal Pernot
Reliable uncertainty quantification (UQ) in machine learning (ML) regression tasks is becoming the focus of many studies in materials and chemical science.
1 code implementation • 8 Jun 2023 • Pascal Pernot
Partitioning of the resulting data into equal-sized bins introduces an aleatoric component to the estimation of bin-based calibration statistics.
1 code implementation • 17 May 2023 • Pascal Pernot
The Expected Normalized Calibration Error (ENCE) is a popular calibration statistic used in Machine Learning to assess the quality of prediction uncertainties for regression problems.
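A common definition of the ENCE averages, over equal-count bins of points sorted by predicted uncertainty, the relative gap between the binwise root mean variance (RMV) and the binwise RMSE; a sketch under that assumption (bin count and names are illustrative):

```python
import numpy as np

def ence(errors, uncertainties, n_bins=10):
    """ENCE = mean_k |RMV_k - RMSE_k| / RMV_k over equal-count bins
    of points sorted by predicted uncertainty."""
    e = np.asarray(errors, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    terms = []
    for idx in np.array_split(np.argsort(u), n_bins):
        rmv = np.sqrt(np.mean(u[idx] ** 2))    # root mean variance
        rmse = np.sqrt(np.mean(e[idx] ** 2))   # binwise RMSE
        terms.append(abs(rmv - rmse) / rmv)
    return float(np.mean(terms))

rng = np.random.default_rng(3)
u = rng.uniform(0.5, 2.0, 20_000)
e = u * rng.standard_normal(20_000)
good = ence(e, u)        # calibrated: ENCE near 0
bad = ence(e, u / 2.0)   # uncertainties underestimated twofold
```

Halving the reported uncertainties doubles the binwise RMSE/RMV ratio, driving the ENCE from near 0 toward 1.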
1 code implementation • 13 Mar 2023 • Pascal Pernot
The practice of uncertainty quantification (UQ) validation, notably in machine learning for the physico-chemical sciences, rests on several graphical methods (scatter plots, calibration curves, reliability diagrams and confidence curves) that explore complementary aspects of calibration without covering all the desirable ones.
1 code implementation • 6 Apr 2020 • Pascal Pernot, Bing Huang, Andreas Savin
Quantum machine learning models have been gaining significant traction within atomistic simulation communities.
Data Analysis, Statistics and Probability Chemical Physics Computational Physics
1 code implementation • 3 Mar 2020 • Pascal Pernot, Andreas Savin
In the first part of this study (Paper I), we introduced the systematic improvement probability (SIP) as a tool to assess the level of improvement on absolute errors to be expected when switching between two computational chemistry methods.
Chemical Physics Data Analysis, Statistics and Probability
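Assuming the SIP is estimated as the empirical fraction of systems for which the second method yields a smaller absolute error than the first (a sketch, not the paper's code; names are hypothetical):

```python
import numpy as np

def sip(err_ref, err_new):
    """Systematic improvement probability sketch: fraction of systems
    where the new method's absolute error is smaller than the reference's."""
    return float(np.mean(np.abs(err_new) < np.abs(err_ref)))

# toy benchmark errors: the new method is better on average, but not always
rng = np.random.default_rng(4)
err_ref = 2.0 * rng.standard_normal(10_000)
err_new = rng.standard_normal(10_000)
p = sip(err_ref, err_new)
```

Even with a twofold reduction in error scale, the improvement probability stays well below 1, which is the kind of nuance a mean-error comparison alone would miss.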
1 code implementation • 2 Mar 2020 • Pascal Pernot, Andreas Savin
The comparison of benchmark error sets is an essential tool for the evaluation of theories in computational chemistry.
Methodology Chemical Physics Data Analysis, Statistics and Probability
1 code implementation • 14 Nov 2016 • Pascal Pernot
The main advantage of the latter approach is its transferability to the prediction of other quantities of interest based on the same parameters.
Data Analysis, Statistics and Probability Chemical Physics
1 code implementation • 14 Nov 2016 • Pascal Pernot, Fabien Cailliez
Inference of physical parameters from reference data is a well-studied problem with many intricacies (inconsistent data sets due to experimental systematic errors, approximate physical models, ...).
Data Analysis, Statistics and Probability Chemical Physics