no code implementations • 5 Feb 2024 • Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot
Existing work on trustworthy machine learning (ML) often concentrates on individual aspects of trust, such as fairness or privacy.
1 code implementation • 30 Jan 2024 • Krishna Acharya, Franziska Boenisch, Rakshit Naidu, Juba Ziani
DP requires specifying a uniform privacy level $\varepsilon$ that expresses the maximum privacy loss each data point in the entire dataset is willing to tolerate.
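To make the uniform-$\varepsilon$ setting concrete, here is a minimal sketch of the classic Laplace mechanism (not this paper's method): a single $\varepsilon$ calibrates the noise for every record alike, regardless of individual privacy preferences.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy via Laplace noise."""
    scale = sensitivity / epsilon  # smaller epsilon => more noise for everyone
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# One uniform epsilon protects every data point equally -- the
# one-size-fits-all assumption this paper relaxes.
noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```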
1 code implementation • 19 Jan 2024 • Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch
Our definition compares the alignment of representations of data points and their augmented views between encoders that were trained on these data points and encoders that were not.
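A hedged sketch of this alignment comparison, assuming illustrative callables `encoder_in` (trained on x), `encoder_out` (not trained on x), and `augment` (any augmentation pipeline); none of these names come from the paper's code.

```python
import torch
import torch.nn.functional as F

def alignment(encoder, x, augment, n_views: int = 8) -> torch.Tensor:
    """Mean cosine similarity between representations of x and its augmented views."""
    rep = encoder(x)
    sims = [F.cosine_similarity(rep, encoder(augment(x)), dim=-1)
            for _ in range(n_views)]
    return torch.stack(sims).mean()

def memorization_score(encoder_in, encoder_out, x, augment) -> torch.Tensor:
    # A larger gap means x is aligned better by the encoder that saw it
    # during training -- the signal the definition is built on.
    return alignment(encoder_in, x, augment) - alignment(encoder_out, x, augment)
```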
no code implementations • 14 Jun 2023 • Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot
Differential privacy and randomized smoothing are effective defenses that provide certifiable guarantees for each of these threats; however, it is not well understood how implementing either defense impacts the other.
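For context, a simplified sketch of randomized smoothing's prediction rule (after Cohen et al., 2019, without the abstention and certification steps); `f` is any base classifier returning logits, and all parameter names are illustrative.

```python
import torch

def smoothed_predict(f, x, num_classes: int, sigma: float = 0.25, n: int = 100) -> int:
    """Majority vote of the base classifier f under Gaussian input noise."""
    votes = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n):
        pred = f(x + sigma * torch.randn_like(x)).argmax()
        votes[pred] += 1
    return int(votes.argmax())
```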
no code implementations • NeurIPS 2023 • Franziska Boenisch, Christopher Mühl, Adam Dziedzic, Roy Rinberg, Nicolas Papernot
DP-SGD is the canonical approach to training models with differential privacy.
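A minimal sketch of one DP-SGD step (after Abadi et al., 2016), simplified for illustration; shapes and names are assumptions, with `per_example_grads` of shape [batch_size, num_params].

```python
import torch

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    # 1. Clip each example's gradient to bound any single point's influence.
    norms = per_example_grads.norm(dim=1, keepdim=True)
    clipped = per_example_grads * (clip_norm / (norms + 1e-12)).clamp(max=1.0)
    # 2. Sum and add Gaussian noise scaled to the clipping norm.
    noise = torch.normal(0.0, noise_mult * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(dim=0) + noise) / per_example_grads.shape[0]
    # 3. Take the (privatized) gradient step.
    return params - lr * noisy_mean
```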
no code implementations • 17 Feb 2023 • Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot
Deploying machine learning (ML) models often requires both fairness and privacy guarantees.
no code implementations • 9 Jan 2023 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e.g., a company) coordinating the distributed training.
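A hedged, FedAvg-style sketch of the update-sharing protocol being described; the linear model and all names are illustrative, not the paper's setup.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.01):
    """One local SGD step on a linear model; only the updated weights leave the device."""
    grad = 2 * X.T @ (X @ global_w - y) / len(y)  # squared-loss gradient
    return global_w - lr * grad                   # raw (X, y) stays local

def server_aggregate(client_weights):
    # The server sees only model updates, never the clients' data --
    # yet, as the paper shows, these updates can still reveal that data.
    return np.mean(client_weights, axis=0)
```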
no code implementations • 9 Jan 2023 • Karla Pizzi, Franziska Boenisch, Ugur Sahin, Konstantin Böttinger
To the best of our knowledge, our work is the first to extend MI attacks to audio data, and our results highlight the security risks resulting from the extraction of biometric data in this setup.
no code implementations • 16 Sep 2022 • Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot
We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute its ownership in the event of theft.
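A hedged sketch of the dataset-inference intuition only, as a simplified two-sample-test stand-in rather than the paper's actual statistical test; `score` is any hypothetical per-example statistic computed on the suspect encoder (e.g., an alignment score).

```python
import numpy as np
from scipy import stats

def ownership_test(score, private_data, public_data) -> float:
    """p-value for whether the suspect encoder treats the victim's data specially."""
    priv = np.array([score(x) for x in private_data])
    pub = np.array([score(x) for x in public_data])
    # If the suspect encoder behaves differently on the victim's private
    # training data than on unseen data, it likely derives from the victim.
    return stats.ttest_ind(priv, pub, equal_var=False).pvalue
```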
no code implementations • 24 Feb 2022 • Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot
We find this greatly reduces the bound on MI positive accuracy.
no code implementations • 21 Feb 2022 • Franziska Boenisch, Christopher Mühl, Roy Rinberg, Jannis Ihrig, Adam Dziedzic
Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP).
1 code implementation • 6 Dec 2021 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.
no code implementations • 17 May 2021 • Franziska Boenisch, Philip Sperl, Konstantin Böttinger
An important problem in deep learning is the privacy and security of neural networks (NNs).
no code implementations • 25 Sep 2020 • Franziska Boenisch
Machine learning (ML) models are applied in an increasing variety of domains.
3 code implementations • 21 Aug 2019 • Max Kaufmann, Daniel Kang, Yi Sun, Steven Basart, Xuwang Yin, Mantas Mazeika, Akul Arora, Adam Dziedzic, Franziska Boenisch, Tom Brown, Jacob Steinhardt, Dan Hendrycks
To narrow in on this discrepancy between research and reality, we introduce ImageNet-UA, a framework for evaluating model robustness against a range of unforeseen adversaries, including eighteen new non-$L_p$ attacks.
1 code implementation • 9 Feb 2018 • Franziska Boenisch, Benjamin Rosemann, Benjamin Wild, Fernando Wario, David Dormagen, Tim Landgraf
Computational approaches to the analysis of collective behavior in social insects increasingly rely on motion paths as an intermediate data layer from which one can infer individual behaviors or social interactions.