Search Results for author: Emanuele Ledda

Found 2 papers, 1 paper with code

Adversarial Attacks Against Uncertainty Quantification

No code implementations · 19 Sep 2023 · Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli

Machine-learning models can be fooled by adversarial examples, i.e., carefully crafted input perturbations that force models to output wrong predictions.

Tasks: Semantic Segmentation, Uncertainty Quantification
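The abstract above defines adversarial examples as carefully crafted input perturbations. As a minimal illustration of the general idea (not the paper's attack, which targets uncertainty quantification), here is a sketch of the Fast Gradient Sign Method on a toy logistic-regression model; all weights and values are made up for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Logistic loss for one example with label y in {-1, +1}.
    return -np.log(sigmoid(y * np.dot(w, x)))

def fgsm_perturb(w, x, y, eps):
    # Gradient of the loss w.r.t. the input x (closed form for a linear model).
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    # FGSM: take one step of size eps in the sign of the input gradient.
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])   # toy "trained" weights (illustrative)
x = np.array([0.3, -0.4, 1.0])   # clean input, correctly classified as +1
y = 1

x_adv = fgsm_perturb(w, x, y, eps=0.5)
print(loss(w, x, y), loss(w, x_adv, y))  # loss is strictly higher on x_adv
```

With these toy numbers the perturbed input crosses the decision boundary, so the model's prediction flips from +1 to -1 even though `x_adv` differs from `x` by at most 0.5 per coordinate.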

Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks

1 code implementation · 6 Feb 2023 · Emanuele Ledda, Giorgio Fumera, Fabio Roli

Among Bayesian methods, Monte-Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks.

Tasks: Crowd Counting, Uncertainty Quantification
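The abstract above mentions Monte-Carlo dropout, which keeps dropout active at test time and aggregates several stochastic forward passes to estimate epistemic uncertainty. A minimal numpy sketch of that idea follows; the toy network, weights, and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" 2-layer network (weights are illustrative only).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

def forward(x, drop_p=0.5):
    h = np.maximum(W1 @ x, 0.0)               # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_p      # Bernoulli dropout mask, kept ON at test time
    h = h * mask / (1.0 - drop_p)             # inverted-dropout scaling
    return (W2 @ h)[0]

def mc_dropout_predict(x, T=100):
    # Run T stochastic forward passes and aggregate them:
    # the sample mean is the prediction, the spread reflects epistemic uncertainty.
    samples = np.array([forward(x) for _ in range(T)])
    return samples.mean(), samples.std()

x = np.array([0.5, -1.0, 0.25, 2.0])
mean, std = mc_dropout_predict(x)
print(f"prediction {mean:.3f} +/- {std:.3f}")
```

Because the dropout masks differ between passes, the standard deviation is nonzero, giving a post hoc uncertainty estimate without retraining the network.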
