Search Results for author: Pascal Berrang

Found 5 papers, 4 papers with code

Link Stealing Attacks Against Inductive Graph Neural Networks

1 code implementation · 9 May 2024 · Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang

This paper fills the gap by conducting a systematic privacy analysis of inductive GNNs through the lens of link stealing attacks, one of the most popular attacks specifically designed for GNNs.
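The general recipe behind link stealing is simple enough to sketch: query the target GNN for the posteriors of two nodes and predict an edge when those posteriors are similar. The snippet below is a hedged illustration of that idea, not the paper's exact attack; `query_gnn` and the threshold value are hypothetical.

```python
# Minimal sketch of a posterior-similarity link stealing attack.
# `query_gnn` is a hypothetical black-box API returning per-node
# class posteriors; the threshold is an illustrative assumption.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def infer_link(query_gnn, node_u, node_v, threshold: float = 0.9) -> bool:
    """Guess that edge (u, v) exists in the training graph when the
    posteriors the GNN assigns to u and v are sufficiently similar."""
    p_u = np.asarray(query_gnn(node_u))  # posterior vector for node u
    p_v = np.asarray(query_gnn(node_v))  # posterior vector for node v
    return cosine(p_u, p_v) >= threshold
```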

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks

no code implementations · 18 Dec 2022 · Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang

Backdoor attacks represent one of the major threats to machine learning models.
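The title names the defense outright: keep training the suspect model on clean, trusted data so the trigger association is overwritten. A minimal sketch of that generic recipe, with placeholder model and data-loader names (the paper's exact fine-tuning setup may differ):

```python
# Hedged sketch of fine-tuning as a backdoor defense: continue
# standard supervised training of a possibly backdoored model on a
# small clean dataset. `model` and `clean_loader` are placeholders.
import torch
import torch.nn.functional as F

def finetune_defense(model, clean_loader, epochs: int = 5, lr: float = 1e-4):
    """Fine-tune all weights of `model` on clean (input, label) batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
    return model
```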

Data Poisoning Attacks Against Multimodal Encoders

1 code implementation · 30 Sep 2022 · Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang

Extensive evaluations on different datasets and model architectures show that all three attacks can achieve significant attack performance while maintaining model utility in both visual and linguistic modalities.

Contrastive Learning · Data Poisoning
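As a hedged illustration of what poisoning a multimodal (e.g., CLIP-style image-text) training set can look like: pair a small fraction of target-class images with an attacker-chosen caption so contrastive training pulls their embeddings together. This is a generic sketch, not the paper's specific attacks; all names are placeholders.

```python
# Illustrative sketch of targeted data poisoning for a contrastive
# image-text encoder: replace the captions of a few target-class
# images with attacker-controlled text before training.
import random

def poison_pairs(dataset, target_label, attacker_caption, rate=0.01):
    """dataset: iterable of (image, caption, label) triples.
    Returns a copy with ~`rate` of target-class captions swapped."""
    poisoned = []
    for image, caption, label in dataset:
        if label == target_label and random.random() < rate:
            caption = attacker_caption  # attacker-chosen text
        poisoned.append((image, caption, label))
    return poisoned
```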

Albatross: An optimistic consensus algorithm

1 code implementation · 4 Mar 2019 · Bruno França, Marvin Wissfeld, Pascal Berrang, Philipp von Styp-Rekowsky, Reto Trinkler

In this paper, we introduce Albatross, a Proof-of-Stake (PoS) blockchain consensus algorithm that aims to combine the best of both worlds.

Cryptography and Security
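Albatross's optimistic design is beyond a short snippet, but one standard PoS building block the abstract implies is stake-weighted block-producer selection. The sketch below shows only that generic primitive, under the assumption of a shared per-epoch random seed; it is not Albatross's actual protocol.

```python
# Not Albatross itself: a hedged sketch of generic stake-weighted
# leader selection, one common Proof-of-Stake primitive. The seed is
# assumed to come from a shared randomness beacon.
import random

def select_producer(validators: dict[str, int], seed: int) -> str:
    """Pick a validator with probability proportional to its stake.
    `validators` maps validator id -> staked amount."""
    rng = random.Random(seed)  # deterministic given the epoch seed
    ids, stakes = zip(*validators.items())
    return rng.choices(ids, weights=stakes, k=1)[0]
```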

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

7 code implementations · 4 Jun 2018 · Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.

BIG-bench Machine Learning · Inference Attack +1
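The simplest model- and data-independent attack in the ML-Leaks spirit thresholds the target model's top posterior: overfit models tend to be more confident on training members. A minimal sketch (the 0.95 threshold is an illustrative assumption):

```python
# Hedged sketch of a confidence-threshold membership inference attack:
# predict "member" when the target model's maximum posterior exceeds
# a threshold, requiring no shadow models or auxiliary data.
import numpy as np

def membership_inference(posteriors: np.ndarray, threshold: float = 0.95):
    """posteriors: (n_samples, n_classes) target-model outputs.
    Returns a boolean array: True = predicted training-set member."""
    return posteriors.max(axis=1) >= threshold
```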
