no code implementations • 13 Oct 2023 • Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.
no code implementations • 5 Oct 2023 • Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman
The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training.
no code implementations • 4 Sep 2023 • Andrew Yuan, Alina Oprea, Cheng Tan
DROPOUTATTACK targets the dropout operator by manipulating which neurons are dropped, rather than selecting them uniformly at random.
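The paper's full algorithm is more involved, but a minimal sketch of the attack surface is easy to state: replace the uniform dropout mask with one the attacker controls. The `target_neurons` knob below is an illustrative assumption, not the paper's API.

```python
import numpy as np

def honest_dropout(x, p=0.5, rng=None):
    """Standard dropout: each neuron is dropped independently at random."""
    rng = rng or np.random.default_rng()
    keep = rng.random(x.shape) >= p
    return x * keep / (1.0 - p)               # inverted-dropout scaling

def manipulated_dropout(x, p=0.5, target_neurons=()):
    """Illustrative adversarial variant: deterministically drop a chosen
    set of neurons instead of sampling them uniformly at random."""
    keep = np.ones(x.shape, dtype=bool)
    keep[..., list(target_neurons)] = False   # always drop the targets
    return x * keep / (1.0 - p)

x = np.random.randn(4, 8)                     # a batch of activations
print(manipulated_dropout(x, target_neurons=[0, 3]))
```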
no code implementations • 2 Jun 2023 • Giorgio Severi, Simona Boboila, Alina Oprea, John Holodnak, Kendra Kratkiewicz, Jason Matterer
As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical.
1 code implementation • 1 Jun 2023 • John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman
In this work, we propose a new membership-inference threat model in which the adversary only has access to the fine-tuned model and aims to infer the membership of the pretraining data.
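As a point of reference for this threat model, a classic loss-thresholding membership test (not the paper's attack) can be run against the fine-tuned model alone; `finetuned_model` and `threshold` below are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mi_score(model, x, y):
    """Loss-thresholding membership baseline: points the model fits
    unusually well receive high scores and are guessed to be members."""
    return -F.cross_entropy(model(x), y, reduction="none")

# Usage sketch (placeholders): score candidate pretraining points against
# the fine-tuned model only, then threshold.
# scores = mi_score(finetuned_model, x_candidates, y_candidates)
# predicted_members = scores > threshold
```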
1 code implementation • 6 Feb 2023 • Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, Vinith M. Suriyakumar
Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or for empirically measuring privacy loss in settings where known analytical bounds are not tight.
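One standard way such estimates are produced (a sketch, not necessarily the paper's one-shot method) is to convert a membership attack's true/false positive rates into an empirical lower bound on $\varepsilon$, using the fact that an $(\varepsilon, \delta)$-DP mechanism satisfies TPR ≤ e^ε · FPR + δ.

```python
import math

def epsilon_lower_bound(tpr, fpr, delta=0.0):
    """(eps, delta)-DP implies TPR <= e^eps * FPR + delta, so any attack's
    operating point yields eps >= log((TPR - delta) / FPR).
    Confidence intervals are omitted for brevity."""
    if fpr <= 0.0 or tpr <= delta:
        return 0.0
    return max(0.0, math.log((tpr - delta) / fpr))

print(epsilon_lower_bound(tpr=0.60, fpr=0.05))  # ~2.48
```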
no code implementations • 23 Jan 2023 • Gokberk Yar, Simona Boboila, Cristina Nita-Rotaru, Alina Oprea
Most machine learning applications rely on centralized learning processes, opening up the risk of exposing their training datasets.
1 code implementation • 27 Aug 2022 • Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru
Federated learning is a popular strategy for training models on distributed, sensitive data, while preserving data privacy.
1 code implementation • 25 Aug 2022 • Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman
Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model.
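A common template for such attacks (a simplified sketch, not the paper's specific method) trains shadow models on datasets with and without the property and fits a meta-classifier on their parameters; all inputs below are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def infer_property(shadow_params, has_property, target_params):
    """Shadow-model property inference: learn to predict the property
    from model parameters, then apply that meta-classifier to the target.
    shadow_params: (n_shadow, n_params) array of flattened shadow models.
    has_property:  (n_shadow,) binary labels for the training datasets."""
    meta = LogisticRegression(max_iter=1000).fit(shadow_params, has_property)
    return meta.predict_proba(np.atleast_2d(target_params))[0, 1]
```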
no code implementations • 23 May 2022 • Talha Ongun, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Jason Hiser, Jack Davidson
In this study, we propose CELEST (CollaborativE LEarning for Scalable Threat detection), a federated machine learning framework for global threat detection over HTTP, one of the most commonly used protocols for malware dissemination and communication.
no code implementations • 20 May 2022 • Harsh Chaudhari, Matthew Jagielski, Alina Oprea
Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data.
2 code implementations • 12 May 2022 • Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan Ullman, Roxana Geambasu
Our results on four public datasets show that our attacks effectively exploit update information, giving the adversary a significant advantage both over attacks on standalone models and over a prior MI attack that leverages model updates in a related machine-unlearning setting.
no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.
1 code implementation • 5 Oct 2021 • Lisa Oakley, Alina Oprea, Stavros Tripakis
We outline a class of threat models under which adversaries can perturb system transitions, constrained by an $\varepsilon$-ball around the original transition probabilities.
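As a concrete instance, a valid perturbation inside an L1 ball of radius $\varepsilon$ can be built by shifting probability mass between two successor states; this sketch uses L1 for illustration, while the paper's threat model is more general.

```python
import numpy as np

def perturb_transition(p, src, dst, eps):
    """Move probability mass from successor `src` to `dst` while keeping a
    valid distribution within L1 distance eps of the original p."""
    q = p.copy()
    shift = min(eps / 2.0, q[src])   # moving `shift` costs 2*shift in L1
    q[src] -= shift
    q[dst] += shift
    assert np.isclose(q.sum(), 1.0) and (q >= 0).all()
    return q

p = np.array([0.7, 0.2, 0.1])        # original transition probabilities
print(perturb_transition(p, src=0, dst=2, eps=0.2))  # [0.6 0.2 0.2]
```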
3 code implementations • 14 Dec 2020 • Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data.
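A stripped-down version of the sample-and-rank recipe looks as follows; the paper combines several ranking metrics and far larger sampling budgets, so treat this as a sketch.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text):
    """Perplexity of a string under the model (lower = more 'expected')."""
    ids = tok(text, return_tensors="pt").input_ids
    return torch.exp(model(ids, labels=ids).loss).item()

@torch.no_grad()
def sample(n=5, length=64):
    """Draw unconditional samples from the language model."""
    out = model.generate(do_sample=True, max_length=length,
                         num_return_sequences=n,
                         pad_token_id=tok.eos_token_id)
    return [tok.decode(o, skip_special_tokens=True) for o in out]

# Low-perplexity generations are the most likely memorization candidates.
candidates = sorted(sample(), key=perplexity)
```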
1 code implementation • 24 Jun 2020 • Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
Poisoning attacks against machine learning induce adversarial modification of data used by a machine learning algorithm to selectively change its output when it is deployed.
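The simplest instantiation of this idea (illustrative only; the paper also develops stronger gradient-based variants) is label flipping restricted to a chosen subpopulation; `in_subpop` and `frac` are hypothetical knobs.

```python
import numpy as np

def flip_subpopulation_labels(X, y, in_subpop, target_label, frac=1.0, seed=0):
    """Flip the labels of a fraction of points matching a subpopulation
    filter, so the trained model errs selectively on that subpopulation."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(in_subpop(X))
    flipped = rng.choice(idx, size=int(frac * len(idx)), replace=False)
    y_poisoned = y.copy()
    y_poisoned[flipped] = target_label
    return y_poisoned

# e.g. poison only points whose first feature is positive:
# y_p = flip_subpopulation_labels(X, y, lambda X: X[:, 0] > 0, target_label=1)
```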
no code implementations • 21 Jun 2020 • Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Haifeng Qian
To this end, we develop and analyze a new poisoning attack algorithm.
1 code implementation • NeurIPS 2020 • Matthew Jagielski, Jonathan Ullman, Alina Oprea
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
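For context, one DP-SGD update consists of per-example gradient clipping followed by calibrated Gaussian noise; this minimal sketch (flat-tensor gradients, no subsampling or accounting) shows the mechanism being audited.

```python
import torch

def dpsgd_step(params, per_example_grads, clip=1.0, noise_mult=1.0, lr=0.1):
    """One DP-SGD update: clip each example's gradient to norm `clip`,
    average, and add Gaussian noise with std noise_mult * clip / batch."""
    clipped = []
    for g in per_example_grads:              # one flat tensor per example
        scale = (clip / (g.norm() + 1e-12)).clamp(max=1.0)
        clipped.append(g * scale)
    avg = torch.stack(clipped).mean(dim=0)
    noisy = avg + torch.randn_like(avg) * noise_mult * clip / len(clipped)
    return params - lr * noisy
```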
2 code implementations • 2 Mar 2020 • Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea
Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point.
1 code implementation • 23 Sep 2019 • Alesia Chernikova, Alina Oprea
Finally, we demonstrate the potential of adversarial training in constrained domains to increase model resilience against these evasion attacks.
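A PGD-style inner loop adapted to constrained domains might look like the sketch below, where `project` is a user-supplied map onto the feasible set (e.g., valid feature ranges and dependencies); this is a simplification of the paper's attack, and adversarial training then trains on the resulting examples.

```python
import torch

def constrained_pgd(model, x, y, loss_fn, project, eps=0.1, steps=10, lr=0.01):
    """Gradient-ascent attack whose iterates are kept inside both an
    eps-ball around x and the domain's feasible set via `project`."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += lr * grad.sign()
            x_adv.clamp_(x - eps, x + eps)   # stay close to the original
            x_adv.copy_(project(x_adv))      # enforce domain constraints
    return x_adv.detach()
```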
no code implementations • 6 Aug 2019 • Indranil Jana, Alina Oprea
Web applications in widespread use have always been the target of large-scale attacks, leading to massive disruption of services and financial loss, as in the Equifax data breach.
no code implementations • 10 Jul 2019 • Talha Ongun, Timothy Sakharaov, Simona Boboila, Alina Oprea, Tina Eliassi-Rad
Machine learning (ML) is becoming widely deployed in cyber security settings to shorten the detection cycle of cyber attacks.
2 code implementations • 27 Jun 2019 • Lisa Oakley, Alina Oprea
FlipIt is a security game that models attacker-defender interactions in sophisticated scenarios such as advanced persistent threats (APTs).
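The game is simple enough to simulate directly: each player "flips" to take control of a shared resource at a cost, and payoff is time in control minus flipping costs. The sketch below plays two fixed periodic strategies against each other (the periods and costs are illustrative).

```python
import numpy as np

def flipit_payoffs(horizon, atk_period, def_period, atk_cost, def_cost):
    """Whoever flipped most recently controls the resource; each player's
    payoff is its fraction of control time minus its per-flip costs."""
    flips = sorted([(t, "atk") for t in np.arange(0.0, horizon, atk_period)] +
                   [(t, "def") for t in np.arange(0.0, horizon, def_period)])
    owner, last_t, control = "def", 0.0, {"atk": 0.0, "def": 0.0}
    for t, player in flips:
        control[owner] += t - last_t
        owner, last_t = player, t
    control[owner] += horizon - last_t
    n_atk = len(np.arange(0.0, horizon, atk_period))
    n_def = len(np.arange(0.0, horizon, def_period))
    return (control["atk"] / horizon - atk_cost * n_atk / horizon,
            control["def"] / horizon - def_cost * n_def / horizon)

print(flipit_payoffs(horizon=100, atk_period=7, def_period=5,
                     atk_cost=1.0, def_cost=0.5))
```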
no code implementations • 15 Apr 2019 • Alesia Chernikova, Alina Oprea, Cristina Nita-Rotaru, BaekGyu Kim
Deep Neural Networks (DNNs) have tremendous potential in advancing the vision for self-driving cars.
no code implementations • 9 Apr 2019 • Xianrui Meng, Dimitrios Papadopoulos, Alina Oprea, Nikos Triandopoulos
In collaborative learning, multiple parties contribute their datasets to jointly deduce global machine learning models for numerous predictive tasks.
no code implementations • 6 Dec 2018 • Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman
This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.
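Concretely, "using group membership at test time" can be as simple as per-group decision thresholds, which is what makes the treatment disparate; the thresholds below are illustrative.

```python
import numpy as np

def group_threshold_predict(scores, groups, thresholds):
    """Predictions depend explicitly on protected group membership:
    a point in group g is accepted when its score clears thresholds[g]."""
    t = np.array([thresholds[g] for g in groups])
    return (scores >= t).astype(int)

scores = np.array([0.40, 0.60, 0.55])
groups = ["a", "b", "a"]
print(group_threshold_predict(scores, groups, {"a": 0.5, "b": 0.65}))  # [0 0 1]
```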
no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
1 code implementation • 1 Apr 2018 • Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms.
no code implementations • 7 Aug 2016 • Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea
The effectiveness of supervised learning techniques has made them ubiquitous in research and practice.