Search Results for author: Nicolo Ruggeri

Found 2 papers, 2 papers with code

Provable concept learning for interpretable predictions using variational autoencoders

2 code implementations • 1 Apr 2022 • Armeen Taeb, Nicolo Ruggeri, Carina Schnuck, Fanny Yang

In safety-critical applications, practitioners are reluctant to trust neural networks when no interpretable explanations are available.

Tasks: Variational Inference

Fast Rates for Noisy Interpolation Require Rethinking the Effects of Inductive Bias

1 code implementation • 7 Mar 2022 • Konstantin Donhauser, Nicolo Ruggeri, Stefan Stojanovic, Fanny Yang

Good generalization performance on high-dimensional data crucially hinges on a simple structure of the ground truth and a corresponding strong inductive bias of the estimator.

Tasks: Inductive Bias
