no code implementations • 12 Mar 2024 • Sierra Wyllie, Ilia Shumailov, Nicolas Papernot
We simulate AR interventions by curating representative training batches for stochastic gradient descent to demonstrate how AR can mitigate the unfairness of models and data ecosystems subject to other MIDS.
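To make the intervention concrete, here is a minimal sketch of curated batch construction for SGD; the group labels, target rates, and sampler below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def curated_batches(X, y, groups, batch_size, rates, rng=None):
    """Yield SGD batches whose group composition matches target `rates`.

    `groups` holds a group id per example; `rates` maps each group id to
    its desired fraction of every batch. Yields batches indefinitely;
    wrap with itertools.islice for a fixed number of steps.
    """
    rng = rng or np.random.default_rng(0)
    by_group = {g: np.flatnonzero(groups == g) for g in np.unique(groups)}
    while True:
        idx = []
        for g, frac in rates.items():
            k = max(1, int(round(frac * batch_size)))
            idx.append(rng.choice(by_group[g], size=k, replace=True))
        idx = np.concatenate(idx)[:batch_size]
        yield X[idx], y[idx]
```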
no code implementations • 2 Mar 2024 • Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot
In the privacy literature, this is known as membership inference.
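The canonical instance is a loss-threshold test: training-set members tend to incur lower loss than non-members. A minimal sketch follows; the midpoint calibration rule is a simplifying assumption, as real attacks tune the threshold to a target false-positive rate.

```python
import numpy as np

def calibrate_threshold(model_loss, known_members, known_nonmembers):
    # Midpoint between the two groups' mean losses; a placeholder for
    # calibration against a target false-positive rate.
    m = np.mean([model_loss(x) for x in known_members])
    n = np.mean([model_loss(x) for x in known_nonmembers])
    return (m + n) / 2

def is_member(model_loss, x, threshold):
    # Members tend to be fit more tightly, i.e. to have lower loss.
    return model_loss(x) < threshold
```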
no code implementations • 10 Feb 2024 • Harry Langford, Ilia Shumailov, Yiren Zhao, Robert Mullins, Nicolas Papernot
In this work we construct an arbitrary trigger detector which can be used to backdoor an architecture with no human supervision.
no code implementations • 8 Feb 2024 • Jamie Hayes, Ilia Shumailov, Itay Yona
Mixture of Experts (MoE) has become a key ingredient for scaling large foundation models while keeping inference costs steady.
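The cost argument rests on sparse routing: each token activates only a few experts, so compute stays roughly flat as total parameters grow. A minimal top-k routing sketch, with dimensions and the gating scheme as illustrative assumptions:

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x:       (tokens, d) activations
    gate_w:  (d, n_experts) gating weights
    experts: list of callables, each mapping (d,) -> (d,)
    Only k experts run per token, so per-token compute stays roughly
    constant as the expert count (total parameters) grows.
    """
    logits = x @ gate_w                          # (tokens, n_experts)
    top = np.argsort(logits, axis=1)[:, -k:]     # top-k expert ids
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = np.exp(logits[t, top[t]])
        weights = scores / scores.sum()          # softmax over top-k
        for w, e in zip(weights, top[t]):
            out[t] += w * experts[e](x[t])
    return out
```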
1 code implementation • 8 Oct 2023 • Cheng Zhang, Jianyi Cheng, Ilia Shumailov, George A. Constantinides, Yiren Zhao
In this work, we explore the statistical and learning properties of the LLM layer and attribute the bottleneck of LLM quantisation to numerical scaling offsets.
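For intuition about why scaling matters, consider plain symmetric per-tensor quantisation: one outlier stretches the scale and crushes the resolution available to every other value. This sketch illustrates that generic pathology, not the paper's specific analysis.

```python
import numpy as np

def quantize_per_tensor(w, bits=8):
    """Symmetric per-tensor quantisation. A single outlier inflates
    `scale`, wasting most of the integer grid on the rest of the
    tensor; per-channel or offset-aware scaling mitigates this."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                    # dequantised values

w = np.random.randn(4096).astype(np.float32)
w[0] = 100.0                            # one activation-style outlier
err = np.abs(quantize_per_tensor(w) - w).mean()
print(f"mean abs error with outlier: {err:.4f}")
```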
no code implementations • 3 Oct 2023 • Avital Shafran, Ilia Shumailov, Murat A. Erdogdu, Nicolas Papernot
We discover that prior knowledge of the attacker, i.e., access to in-distribution data, dominates other factors such as the attack policy the adversary follows when choosing which queries to make to the victim model API.
no code implementations • 30 Sep 2023 • David Khachaturov, Yue Gao, Ilia Shumailov, Robert Mullins, Ross Anderson, Kassem Fawaz
Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to produce in the physical world.
no code implementations • 23 Aug 2023 • Yue Gao, Ilia Shumailov, Kassem Fawaz
Machine Learning (ML) systems are vulnerable to adversarial examples, particularly those from query-based black-box attacks.
no code implementations • 20 Jul 2023 • David Glukhov, Ilia Shumailov, Yarin Gal, Nicolas Papernot, Vardan Papyan
Specifically, we demonstrate that semantic censorship can be perceived as an undecidable problem, highlighting the inherent challenges in censorship that arise due to LLMs' programmatic and instruction-following capabilities.
no code implementations • 1 Jul 2023 • Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot
Put together, our evaluation shows that this novel DP-SGD analysis lets us formally establish that DP-SGD leaks significantly less privacy for many datapoints (when trained on common benchmarks) than the current data-independent guarantee suggests.
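For context, the data-independent guarantee is derived from the standard DP-SGD update, sketched below: every datapoint is charged the same privacy cost per step regardless of its actual gradient, which is exactly the slack a data-dependent analysis can recover. Shapes and the noise parameterisation are illustrative.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One DP-SGD update: clip each per-example gradient to `clip_norm`,
    average, and add Gaussian noise scaled by `noise_mult * clip_norm`."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean.shape)
    return params - lr * (mean + noise)
```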
no code implementations • 24 Jun 2023 • Pranav Dahiya, Ilia Shumailov, Ross Anderson
As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it.
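The underlying point is that statistical tests measure distributional properties, not predictability. A toy illustration (not the paper's construction): a keyed counter-mode hash stream looks random to any such test, yet is fully reproducible by whoever holds the key.

```python
import hashlib

def backdoored_rng(key: bytes, n_bytes: int) -> bytes:
    """Counter-mode SHA-256 stream: the output passes standard
    statistical randomness tests, yet anyone who knows `key` can
    reproduce every 'random' byte."""
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

stream = backdoored_rng(b"attacker-known-key", 32)
print(stream.hex())   # looks random; fully predictable with the key
```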
1 code implementation • 12 Jun 2023 • Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross Anderson, Nicolas Papernot
While text-based machine learning models that operate on visual inputs of rendered text have become robust against a wide range of existing attacks, we show that they are still vulnerable to visual adversarial examples encoded as text.
1 code implementation • 27 May 2023 • Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson
It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images.
1 code implementation • 27 Apr 2023 • Nicholas Boucher, Luca Pajola, Ilia Shumailov, Ross Anderson, Mauro Conti
Search engines are vulnerable to attacks against indexing and searching via text encoding manipulation.
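One such manipulation is a zero-width character inside an indexed term: the rendered text is unchanged, but a naive exact-match index treats it as a different token. An illustrative sketch:

```python
poisoned = "pass\u200bword"        # zero-width space inside "password"
print(poisoned)                    # renders identically to "password"
print("password" in poisoned)      # False: naive search misses it
print("password" in poisoned.replace("\u200b", ""))  # True once stripped
```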
1 code implementation • 7 Apr 2023 • Yulin Zhou, Yiren Zhao, Ilia Shumailov, Robert Mullins, Yarin Gal
Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting.
no code implementations • 9 Jan 2023 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e.g., a company) coordinating the distributed training.
no code implementations • 2 Oct 2022 • Jason Ross Brown, Yiren Zhao, Ilia Shumailov, Robert D Mullins
Given the wide and ever-growing range of efficient Transformer attention mechanisms, it is important to identify which one is most effective for a given task.
no code implementations • 2 Oct 2022 • Jason Ross Brown, Yiren Zhao, Ilia Shumailov, Robert D Mullins
We demonstrate that wide single layer Transformer models can compete with or outperform deeper ones in a variety of Natural Language Processing (NLP) tasks when both are trained from scratch.
no code implementations • 30 Sep 2022 • Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, Robert Mullins
These backdoors are impossible to detect during the training or data preparation processes, because they are not yet present.
1 code implementation • 29 Sep 2022 • Joseph Rance, Yiren Zhao, Ilia Shumailov, Robert Mullins
It is well known that backdoors can be inserted into machine learning models by serving a modified dataset to train on.
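The classic dataset-level backdoor stamps a trigger onto a small fraction of training images and relabels them; a minimal numpy sketch, where the trigger shape and poison rate are illustrative:

```python
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.05, rng=None):
    """Stamp a small white square (the trigger) on `rate` of the images
    and relabel them as `target_class`. A model trained on this data
    behaves normally until the trigger appears at inference time."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)),
                     replace=False)
    images[idx, -4:, -4:] = 1.0        # 4x4 trigger in the corner
    labels[idx] = target_class
    return images, labels
```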
1 code implementation • 22 Sep 2022 • Jiaqi Wang, Roei Schuster, Ilia Shumailov, David Lie, Nicolas Papernot
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns.
no code implementations • 1 Jul 2022 • Maximilian Kaufmann, Yiren Zhao, Ilia Shumailov, Robert Mullins, Nicolas Papernot
In this paper we demonstrate data pruning, a method for increasing adversarial training efficiency through data sub-sampling. We empirically show that data pruning leads to improvements in the convergence and reliability of adversarial training, albeit with different levels of utility degradation.
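The sub-sampling step itself is simple; a minimal sketch, where the scoring rule is a placeholder rather than the paper's criterion:

```python
import numpy as np

def prune_dataset(scores, keep_frac):
    """Keep the indices of the `keep_frac` highest-scoring examples
    (scored e.g. by loss or margin) and run adversarial training on
    that subset only."""
    k = int(keep_frac * len(scores))
    return np.argsort(scores)[-k:]
```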
1 code implementation • 19 Jun 2022 • Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot
An example of such a defense is to apply a random transformation to inputs prior to feeding them to the model.
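Such a defense might look like the following sketch, where each query sees a freshly sampled shift of the input; the transformation family is an illustrative assumption.

```python
import numpy as np

def randomized_predict(model, x, rng=None, max_shift=3):
    """Apply a random circular shift before every forward pass, so an
    attacker querying the model repeatedly faces a moving target."""
    rng = rng or np.random.default_rng()
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(x, shift=(dx, dy), axis=(0, 1))
    return model(shifted)
```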
2 code implementations • CVPR 2023 • Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert Mullins, Nicolas Papernot
Machine learning is vulnerable to adversarial manipulation.
no code implementations • 24 Feb 2022 • Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot
We find this greatly reduces the bound on MI positive accuracy.
no code implementations • 9 Feb 2022 • Duo Wang, Yiren Zhao, Ilia Shumailov, Robert Mullins
Bayesian Neural Networks (BNNs) offer a mathematically grounded framework to quantify the uncertainty of model predictions but come with a prohibitive computation cost for both training and inference.
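The inference cost is easy to see in the Monte Carlo form of a Bayesian prediction: one forward pass per weight sample. A sketch, where `sample_model` (a callable returning a model with freshly drawn weights) is hypothetical:

```python
import numpy as np

def bnn_predict(sample_model, x, n_samples=30):
    """Bayesian prediction by Monte Carlo: draw weights, run a forward
    pass per draw, and average. The n_samples forward passes per input
    are the inference overhead BNNs pay for uncertainty estimates."""
    preds = np.stack([sample_model()(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)  # mean and uncertainty
```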
no code implementations • 6 Feb 2022 • Shimaa Ahmed, Yash Wani, Ali Shahin Shamsabadi, Mohammad Yaghini, Ilia Shumailov, Nicolas Papernot, Kassem Fawaz
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning.
1 code implementation • 6 Dec 2021 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.
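In the usual federated-averaging formulation, each device trains locally and ships only a parameter delta; a minimal sketch of one aggregation round follows, with `local_train` standing in for client-side SGD.

```python
import numpy as np

def fedavg_round(global_params, local_train, clients):
    """One federated-averaging round: every client starts from the
    global parameters, trains locally, and returns only its update;
    the server averages the updates. Raw data never leaves a device,
    but the shared updates themselves can still leak information."""
    updates = []
    for data in clients:
        local = local_train(global_params.copy(), data)
        updates.append(local - global_params)
    return global_params + np.mean(updates, axis=0)
```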
no code implementations • 22 Oct 2021 • Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot
Machine unlearning, i.e. having a model forget some of its training data, has become increasingly important as privacy legislation promotes variants of the right-to-be-forgotten.
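Exact unlearning has an expensive but unambiguous baseline: retrain from scratch without the forgotten points; approximate methods aim to land close to this model's distribution at a fraction of the cost. A sketch of the baseline:

```python
def unlearn_by_retraining(train_fn, dataset, forget_indices, seed=0):
    """Exact unlearning baseline: drop the points to be forgotten and
    retrain from scratch with `train_fn`."""
    forget = set(forget_indices)
    retained = [ex for i, ex in enumerate(dataset) if i not in forget]
    return train_fn(retained, seed=seed)
```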
no code implementations • 10 Sep 2021 • Yiren Zhao, Xitong Gao, Ilia Shumailov, Nicolo Fusi, Robert Mullins
H-Meta-NAS Pareto-dominates a variety of NAS and manual baselines on popular few-shot learning benchmarks across various hardware platforms and constraints.
1 code implementation • 18 Jun 2021 • Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot
In this paper, we explore a large class of adversarial examples that can be used to attack text-based models in a black-box setting without making any human-perceptible visual modification to inputs.
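One family of such perturbations is homoglyph substitution: characters from different scripts that render identically but encode differently. An illustrative sketch:

```python
# Cyrillic 'а' (U+0430) is visually identical to Latin 'a' (U+0061).
clean = "paypal"
adversarial = clean.replace("a", "\u0430")   # homoglyph substitution
print(clean, adversarial)        # render identically in most fonts
print(clean == adversarial)      # False: different byte sequences
```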
1 code implementation • 1 Jun 2021 • David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross Anderson
Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching.
1 code implementation • NeurIPS 2021 • Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross Anderson
Machine learning is vulnerable to a wide variety of attacks.
1 code implementation • 18 Apr 2021 • Yue Gao, Ilia Shumailov, Kassem Fawaz
As real-world images come in varying sizes, the machine learning model is part of a larger system that includes an upstream image scaling algorithm.
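Because the scaler runs before the model, an attacker can craft an image whose content changes after downscaling. The toy sketch below shows the property scaling attacks exploit: nearest-neighbour downscaling keeps only one pixel per block and discards the rest.

```python
import numpy as np

def nearest_downscale(img, factor):
    """Nearest-neighbour downscaling keeps one pixel per
    factor x factor block; scaling attacks hide a payload in exactly
    the pixels that survive."""
    return img[::factor, ::factor]

big = np.zeros((8, 8))
big[::4, ::4] = 1.0                 # payload only on surviving pixels
print(nearest_downscale(big, 4))    # all-ones: payload dominates output
```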
no code implementations • 1 Dec 2020 • Almos Zarandy, Ilia Shumailov, Ross Anderson
Voice assistants are now ubiquitous and listen in on our everyday lives.
no code implementations • 22 Nov 2020 • Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson
The wide adoption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat.
no code implementations • 20 Aug 2020 • Baiwu Zhang, Jin Peng Zhou, Ilia Shumailov, Nicolas Papernot
We discuss the ethical implications of our work, identify where our technique can be used, and highlight that a more meaningful legislative framework is required for a more transparent and ethical use of generative modeling.
2 code implementations • 5 Jun 2020 • Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson
The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs.
no code implementations • 20 Feb 2020 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson
Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat.
no code implementations • 6 Sep 2019 • Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson
In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.
no code implementations • 26 Mar 2019 • Ilia Shumailov, Laurent Simon, Jeff Yan, Ross Anderson
We found that the device's microphone(s) can recover this wave and "hear" the finger's touch, and that the wave's distortions are characteristic of the tap's location on the screen.
no code implementations • 23 Jan 2019 • Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert Mullins, Ross Anderson, Cheng-Zhong Xu
Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision.
no code implementations • 2 Dec 2018 • Rasika Bhalerao, Maxwell Aliapoulios, Ilia Shumailov, Sadia Afroz, Damon McCoy
Our analysis of the automatically generated supply chains demonstrates underlying connections between products and services within these forums.
no code implementations • 18 Nov 2018 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson
Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples, or by requiring the DNN to be restructured.
no code implementations • 29 Sep 2018 • Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson
We therefore investigate the extent to which adversarial samples are transferable between uncompressed and compressed DNNs.