Search Results for author: Fredrik Hellström

Found 9 papers, 1 paper with code

Generalization and Informativeness of Conformal Prediction

no code implementations • 22 Jan 2024 • Matteo Zecchin, Sangwoo Park, Osvaldo Simeone, Fredrik Hellström

A popular technique for quantifying predictive uncertainty is conformal prediction (CP), which transforms an arbitrary base predictor into a set predictor with coverage guarantees.
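
For concreteness, the following is a minimal sketch of split conformal prediction for regression with an absolute-residual nonconformity score; the function name, the regression setting, and the score choice are illustrative and not taken from the paper.

import numpy as np

def split_conformal_intervals(model, X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    # `model` is any base regressor exposing fit/predict (e.g. a scikit-learn estimator).
    # Returns interval endpoints with marginal coverage >= 1 - alpha under exchangeability.
    model.fit(X_train, y_train)                      # fit the arbitrary base predictor
    scores = np.abs(y_cal - model.predict(X_cal))    # nonconformity scores on held-out data
    n = len(scores)
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))  # finite-sample-corrected quantile rank
    q = np.sort(scores)[k - 1]
    preds = model.predict(X_test)
    return preds - q, preds + q                      # the conformal set predictor (intervals)

Any base predictor can be plugged in for model, for example sklearn.linear_model.LinearRegression().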

Conformal Prediction, Decision Making +1

Comparing Comparators in Generalization Bounds

1 code implementation • 16 Oct 2023 • Fredrik Hellström, Benjamin Guedj

We derive generic information-theoretic and PAC-Bayesian generalization bounds involving an arbitrary convex comparator function, which measures the discrepancy between the training and population loss.
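
For context, a classical PAC-Bayesian bound of this comparator form (in the style of Germain et al., 2009; the notation below is illustrative and not taken from the paper) states that, with probability at least $1-\delta$ over a training set $S$ of $n$ i.i.d. examples, simultaneously for all posteriors $Q$,

$$ d\Big(\mathbb{E}_{W\sim Q}\big[\hat{L}_S(W)\big],\ \mathbb{E}_{W\sim Q}\big[L(W)\big]\Big) \le \frac{1}{n}\left(\mathrm{KL}(Q\,\|\,P) + \log\frac{\mathbb{E}_{W\sim P}\,\mathbb{E}_{S'}\big[e^{\,n\,d(\hat{L}_{S'}(W),\,L(W))}\big]}{\delta}\right), $$

where $d$ is a jointly convex comparator, $\hat{L}_S$ the training loss, $L$ the population loss, and $P$ a data-independent prior; taking $d$ to be the binary KL divergence recovers the Maurer-Langford-Seeger bound.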

Generalization Bounds

Generalization Bounds: Perspectives from Information Theory and PAC-Bayes

no code implementations • 8 Sep 2023 • Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky

Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to address the generalization capabilities of machine learning algorithms and to design new ones.

Generalization Bounds

A New Family of Generalization Bounds Using Samplewise Evaluated CMI

no code implementations • 12 Oct 2022 • Fredrik Hellström, Giuseppe Durisi

Using the evaluated CMI, we derive a samplewise, average version of Seeger's PAC-Bayesian bound, where the convex function is the binary KL divergence.
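
For reference, these are the classical ingredients (not the paper's samplewise evaluated-CMI statement): the binary KL divergence between $p, q \in [0,1]$ is

$$ d(p\,\|\,q) = p\log\frac{p}{q} + (1-p)\log\frac{1-p}{1-q}, $$

and Seeger's PAC-Bayesian bound (with Maurer's refinement), for a loss bounded in $[0,1]$, states that with probability at least $1-\delta$, simultaneously for all posteriors $Q$,

$$ d\big(\mathbb{E}_{W\sim Q}[\hat{L}_S(W)]\,\big\|\,\mathbb{E}_{W\sim Q}[L(W)]\big) \le \frac{\mathrm{KL}(Q\,\|\,P) + \log(2\sqrt{n}/\delta)}{n}. $$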

Generalization Bounds

Evaluated CMI Bounds for Meta Learning: Tightness and Expressiveness

no code implementations • 12 Oct 2022 • Fredrik Hellström, Giuseppe Durisi

Recent work has established that the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020) is expressive enough to capture generalization guarantees in terms of algorithmic stability, VC dimension, and related complexity measures for conventional learning (Harutyunyan et al., 2021, Haghifam et al., 2021).
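
As background on the framework (notation illustrative): a supersample $\tilde{Z}$ of $2n$ i.i.d. examples is arranged in $n$ pairs, and independent uniform bits $S \in \{0,1\}^n$ select one example from each pair to form the training set seen by the algorithm. For a loss bounded in $[0,1]$, Steinke and Zakynthinou (2020) prove a bound of the form

$$ \big|\,\mathbb{E}\big[L(W) - \hat{L}_S(W)\big]\,\big| \le \sqrt{\frac{2\,I(W; S \mid \tilde{Z})}{n}}, $$

where the conditional mutual information $I(W; S \mid \tilde{Z})$ is never larger than $n \log 2$, since $S$ consists of $n$ uniform bits; this boundedness is one reason the framework can recover VC-type and stability-based guarantees.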

Generalization Bounds, Learning Theory +2

Fast-Rate Loss Bounds via Conditional Information Measures with Applications to Neural Networks

no code implementations • 22 Oct 2020 • Fredrik Hellström, Giuseppe Durisi

If the conditional information density is bounded uniformly in the size $n$ of the training set, our bounds decay as $1/n$.
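
Schematically, and only as a generic template rather than the paper's exact statement: fast-rate bounds for a bounded loss typically take a shape like

$$ L(W) \;\lesssim\; c\,\hat{L}_S(W) + \frac{\iota + \log(1/\delta)}{n}, $$

where $\iota$ denotes the conditional information density term; when the training loss is (near) zero and $\iota$ stays bounded as $n$ grows, the whole bound decays as $1/n$, in contrast to the $1/\sqrt{n}$ rate of sub-Gaussian-style bounds.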

Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures

no code implementations • 28 Sep 2020 • Fredrik Hellström, Giuseppe Durisi

If the conditional information density is bounded uniformly in the size $n$ of the training set, our bounds decay as $1/n$, which is referred to as a fast rate.

Generalization Bounds via Information Density and Conditional Information Density

no code implementations • 16 May 2020 • Fredrik Hellström, Giuseppe Durisi

We present a general approach, based on exponential inequalities, to derive bounds on the generalization error of randomized learning algorithms.
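
To illustrate the exponential-inequality route (notation illustrative): for a loss that is $\sigma$-sub-Gaussian under the data distribution, a change of measure from $P_W P_S$ to the joint $P_{WS}$ yields, for every $\lambda \in \mathbb{R}$,

$$ \mathbb{E}_{P_{WS}}\!\left[\exp\!\left(\lambda\,\mathrm{gen}(W,S) - \imath(W,S) - \frac{\lambda^2\sigma^2}{2n}\right)\right] \le 1, $$

where $\mathrm{gen}(W,S)$ is the population loss minus the training loss and $\imath(W,S) = \log\frac{dP_{WS}}{d(P_W P_S)}$ is the information density. Jensen's inequality then gives bounds on the average generalization error, while Markov's inequality gives tail bounds of PAC-Bayesian and single-draw type.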

Generalization Bounds

Generalization Error Bounds via $m$th Central Moments of the Information Density

no code implementations • 20 Apr 2020 • Fredrik Hellström, Giuseppe Durisi

Our approach can be used to obtain bounds on the average generalization error as well as bounds on its tail probabilities, both for the case in which a new hypothesis is randomly generated every time the algorithm is used (as often assumed in the probably approximately correct (PAC)-Bayesian literature) and for the single-draw case, in which the hypothesis is extracted only once.
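
In symbols (illustrative notation), with $\mathrm{gen}(W,S)$ denoting the population loss minus the training loss, the three kinds of statements are

$$ \mathbb{E}_{P_{WS}}[\mathrm{gen}(W,S)] \le \epsilon \qquad \text{(average)}, $$
$$ \mathbb{P}_{P_S}\!\big(\mathbb{E}_{P_{W\mid S}}[\mathrm{gen}(W,S)] \le \epsilon\big) \ge 1-\delta \qquad \text{(PAC-Bayesian tail, fresh hypothesis per use)}, $$
$$ \mathbb{P}_{P_{WS}}\!\big(\mathrm{gen}(W,S) \le \epsilon\big) \ge 1-\delta \qquad \text{(single-draw tail)}. $$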

Two-sample testing
