Search Results for author: Pascal Pernot

Found 12 papers, 12 papers with code

Validation of ML-UQ calibration statistics using simulated reference values: a sensitivity analysis

1 code implementation • 1 Mar 2024 • Pascal Pernot

As the generative probability distribution for the simulation of synthetic errors is often not constrained, the sensitivity of simulated reference values to the choice of generative distribution might be problematic, shedding a doubt on the calibration diagnostic.
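To illustrate the concern, a minimal sketch on hypothetical data (not the paper's code): reference values of the mean squared z-score (ZMS) are simulated by drawing synthetic errors from a chosen generative distribution, and their spread depends on whether a normal or a heavy-tailed Student's-t generator is used.

```python
import numpy as np

rng = np.random.default_rng(0)
uE = rng.uniform(0.5, 2.0, size=1000)       # hypothetical prediction uncertainties

def simulated_zms(uE, generator, n_mc=2000):
    """Monte Carlo reference values of the mean squared z-score (ZMS),
    obtained by drawing synthetic errors from a chosen generative distribution."""
    return np.array([np.mean((generator(uE) / uE) ** 2) for _ in range(n_mc)])

# Two candidate generative distributions with identical standard deviations
nu = 4                                      # degrees of freedom of the heavy-tailed alternative
normal  = lambda u: rng.normal(0.0, u)
student = lambda u: rng.standard_t(nu, size=u.size) * u * np.sqrt((nu - 2) / nu)

for name, gen in [("normal", normal), ("Student-t(4)", student)]:
    ref = simulated_zms(uE, gen)
    print(f"{name:>12}: 95% interval of simulated ZMS reference values = "
          f"[{np.quantile(ref, 0.025):.3f}, {np.quantile(ref, 0.975):.3f}]")
```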

Uncertainty Quantification

Negative impact of heavy-tailed uncertainty and error distributions on the reliability of calibration statistics for machine learning regression tasks

1 code implementation • 15 Feb 2024 • Pascal Pernot

Average calibration of the prediction uncertainties of machine learning regression tasks can be tested in two ways: one is to estimate the calibration error (CE) as the difference between the mean squared error (MSE) and the mean variance (MV) or mean squared uncertainty; the alternative is to compare the mean squared z-scores (ZMS) or scaled errors to 1.
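Both tests can be estimated directly from paired errors and uncertainties; a minimal sketch with hypothetical arrays (not the paper's code):

```python
import numpy as np

def calibration_error(errors, uE):
    """CE = MSE - MV: mean squared error minus mean variance
    (mean squared uncertainty); 0 for perfect average calibration."""
    return np.mean(errors ** 2) - np.mean(uE ** 2)

def zms(errors, uE):
    """Mean squared z-scores (scaled errors); 1 for perfect average calibration."""
    return np.mean((errors / uE) ** 2)

# Hypothetical data: errors drawn consistently with the predicted uncertainties
rng = np.random.default_rng(1)
uE = rng.uniform(0.5, 2.0, size=10_000)
errors = rng.normal(0.0, uE)

print(f"CE  = {calibration_error(errors, uE):+.4f}  (target 0)")
print(f"ZMS = {zms(errors, uE):.4f}  (target 1)")
```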

regression • Uncertainty Quantification

Can bin-wise scaling improve consistency and adaptivity of prediction uncertainty for machine learning regression ?

1 code implementation • 18 Oct 2023 • Pascal Pernot

Binwise Variance Scaling (BVS) has recently been proposed as a post hoc recalibration method for the prediction uncertainties of machine learning regression problems that is capable of more efficient corrections than uniform variance (or temperature) scaling.
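A hedged sketch of the contrast between the two schemes on hypothetical data (not the paper's reference implementation): uniform variance scaling applies one factor to all uncertainties, while a BVS-like scheme estimates one factor per equal-sized bin of predicted uncertainty.

```python
import numpy as np

def uniform_variance_scaling(errors, uE):
    """Single post hoc factor s such that the mean squared z-score of s*uE is 1."""
    s = np.sqrt(np.mean((errors / uE) ** 2))
    return s * uE

def binwise_variance_scaling(errors, uE, n_bins=10):
    """One scaling factor per equal-sized bin of predicted uncertainty (BVS-like)."""
    bins = np.array_split(np.argsort(uE), n_bins)
    uE_cal = np.copy(uE)
    for idx in bins:
        s = np.sqrt(np.mean((errors[idx] / uE[idx]) ** 2))
        uE_cal[idx] = s * uE[idx]
    return uE_cal

# Hypothetical miscalibrated data: uncertainties increasingly underestimated at large uE
rng = np.random.default_rng(2)
uE = rng.uniform(0.5, 2.0, size=5_000)
errors = rng.normal(0.0, uE * (1.0 + 0.5 * (uE - 0.5)))

top = np.argsort(uE)[-500:]                   # top-uncertainty decile
for name, u_cal in [("uniform", uniform_variance_scaling(errors, uE)),
                    ("bin-wise", binwise_variance_scaling(errors, uE))]:
    print(f"{name:>8}: overall ZMS = {np.mean((errors / u_cal) ** 2):.3f}, "
          f"top-decile ZMS = {np.mean((errors[top] / u_cal[top]) ** 2):.3f}")
```

With this setup, both schemes restore the overall ZMS to 1, but only the bin-wise factors repair the miscalibrated top-uncertainty bin, which is the adaptivity gain the abstract refers to.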

regression

Calibration in Machine Learning Uncertainty Quantification: beyond consistency to target adaptivity

1 code implementation • 12 Sep 2023 • Pascal Pernot

Reliable uncertainty quantification (UQ) in machine learning (ML) regression tasks is becoming the focus of many studies in materials and chemical science.

Uncertainty Quantification

Stratification of uncertainties recalibrated by isotonic regression and its impact on calibration error statistics

1 code implementation • 8 Jun 2023 • Pascal Pernot

Partitioning of the resulting data into equal-sized bins introduces an aleatoric component to the estimation of bin-based calibration statistics.
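A small illustration of this aleatoric component on hypothetical data (not the paper's code): when the recalibrated uncertainties contain many ties, as isotonic regression typically produces, the assignment of points to equal-sized bins depends on how ties are broken, so a bin-based statistic fluctuates across random partitions of the same data.

```python
import numpy as np

def worst_bin_deviation(errors, uE, n_bins=20, rng=None):
    """Bin-based calibration statistic (worst-bin |ZMS - 1|) for one random
    tie-breaking of the equal-sized binning by uncertainty."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(len(uE))                       # random tie-breaking order
    order = perm[np.argsort(uE[perm], kind="stable")]
    bins = np.array_split(order, n_bins)
    zms_bins = np.array([np.mean((errors[i] / uE[i]) ** 2) for i in bins])
    return np.max(np.abs(zms_bins - 1.0))

# Hypothetical recalibrated data with many tied uncertainty values
rng = np.random.default_rng(3)
uE = np.round(rng.uniform(0.5, 2.0, size=5_000), 1)       # coarse grid -> ties
errors = rng.normal(0.0, uE)

stats = [worst_bin_deviation(errors, uE, rng=rng) for _ in range(200)]
print(f"worst-bin |ZMS - 1| over random partitions: "
      f"mean = {np.mean(stats):.3f}, sd = {np.std(stats):.3f}")
```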

regression

Properties of the ENCE and other MAD-based calibration metrics

1 code implementation • 17 May 2023 • Pascal Pernot

The Expected Normalized Calibration Error (ENCE) is a popular calibration statistic used in Machine Learning to assess the quality of prediction uncertainties for regression problems.
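For reference, a minimal sketch of one common definition of the ENCE on hypothetical data (bins ordered by predicted uncertainty; not the paper's code):

```python
import numpy as np

def ence(errors, uE, n_bins=20):
    """Expected Normalized Calibration Error: mean over uncertainty-ordered bins
    of |RMV_b - RMSE_b| / RMV_b, where RMV is the root mean variance
    and RMSE the root mean squared error within bin b."""
    bins = np.array_split(np.argsort(uE), n_bins)
    terms = []
    for idx in bins:
        rmv = np.sqrt(np.mean(uE[idx] ** 2))
        rmse = np.sqrt(np.mean(errors[idx] ** 2))
        terms.append(abs(rmv - rmse) / rmv)
    return np.mean(terms)

# Hypothetical, perfectly calibrated data: the ENCE is still positive for finite samples,
# since it is a mean of absolute deviations (MAD-based statistic)
rng = np.random.default_rng(4)
uE = rng.uniform(0.5, 2.0, size=5_000)
errors = rng.normal(0.0, uE)
print(f"ENCE = {ence(errors, uE):.3f}")
```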

Validation of uncertainty quantification metrics: a primer based on the consistency and adaptivity concepts

1 code implementation • 13 Mar 2023 • Pascal Pernot

The practice of uncertainty quantification (UQ) validation, notably in machine learning for the physico-chemical sciences, rests on several graphical methods (scattering plots, calibration curves, reliability diagrams and confidence curves) which explore complementary aspects of calibration, without covering all the desirable ones.
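As an example of one of these diagnostics, a confidence curve can be sketched as follows on hypothetical data (only one of the plots listed above; not the paper's code): points are discarded in order of decreasing predicted uncertainty and the mean absolute error of the remaining points is tracked.

```python
import numpy as np

def confidence_curve(errors, uE, n_points=20):
    """MAE of the data retained after discarding the most-uncertain fraction of points."""
    abs_err = np.abs(errors[np.argsort(uE)[::-1]])        # most uncertain first
    fractions = np.linspace(0.0, 0.95, n_points)
    mae = np.array([np.mean(abs_err[int(f * len(abs_err)):]) for f in fractions])
    return fractions, mae

# Hypothetical data: informative uncertainties, so the curve should decrease
rng = np.random.default_rng(5)
uE = rng.uniform(0.5, 2.0, size=5_000)
errors = rng.normal(0.0, uE)

frac, mae = confidence_curve(errors, uE)
for f, m in list(zip(frac, mae))[::5]:
    print(f"discarded {f:4.0%} most uncertain -> MAE = {m:.3f}")
```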

Uncertainty Quantification

Impact of non-normal error distributions on the benchmarking and ranking of Quantum Machine Learning models

1 code implementation • 6 Apr 2020 • Pascal Pernot, Bing Huang, Andreas Savin

Quantum machine learning models have been gaining significant traction within atomistic simulation communities.

Data Analysis, Statistics and Probability • Chemical Physics • Computational Physics

Probabilistic performance estimators for computational chemistry methods: Systematic Improvement Probability and Ranking Probability Matrix. II. Applications

1 code implementation • 3 Mar 2020 • Pascal Pernot, Andreas Savin

In the first part of this study (Paper I), we introduced the systematic improvement probability (SIP) as a tool to assess the level of improvement on absolute errors to be expected when switching between two computational chemistry methods.
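A hedged sketch of how such an SIP estimate can be computed from paired benchmark errors (hypothetical error sets; not the paper's code): it is the fraction of systems whose absolute error decreases when switching from method 1 to method 2.

```python
import numpy as np

def sip(errors_1, errors_2):
    """Systematic Improvement Probability: fraction of systems with a smaller
    absolute error for method 2 than for method 1 (paired benchmark errors)."""
    return np.mean(np.abs(errors_2) < np.abs(errors_1))

# Hypothetical paired benchmark errors for two computational chemistry methods
rng = np.random.default_rng(6)
errors_1 = rng.normal(0.0, 1.0, size=200)
errors_2 = 0.7 * errors_1 + rng.normal(0.0, 0.3, size=200)   # correlated, smaller errors

print(f"SIP(1 -> 2) = {sip(errors_1, errors_2):.2f}")
```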

Chemical Physics • Data Analysis, Statistics and Probability

Probabilistic performance estimators for computational chemistry methods: Systematic Improvement Probability and Ranking Probability Matrix. I. Theory

1 code implementation • 2 Mar 2020 • Pascal Pernot, Andreas Savin

The comparison of benchmark error sets is an essential tool for the evaluation of theories in computational chemistry.

Methodology • Chemical Physics • Data Analysis, Statistics and Probability

The parameters uncertainty inflation fallacy

1 code implementation • 14 Nov 2016 • Pascal Pernot

The main advantage of the latter approach is its transferability to the prediction of other quantities of interest based on the same parameters.

Data Analysis, Statistics and Probability • Chemical Physics

A critical review of statistical calibration/prediction models handling data inconsistency and model inadequacy

1 code implementation • 14 Nov 2016 • Pascal Pernot, Fabien Cailliez

Inference of physical parameters from reference data is a well studied problem with many intricacies (inconsistent sets of data due to experimental systematic errors, approximate physical models...).

Data Analysis, Statistics and Probability • Chemical Physics
