no code implementations • 20 Sep 2023 • Stefan Trawicki, William Hackett, Lewis Birch, Neeraj Suri, Peter Garraghan
Adversarial Machine Learning (AML) is a rapidly growing field of security research; model attacks through side-channels remain an often-overlooked area.
no code implementations • 19 Sep 2023 • Lewis Birch, William Hackett, Stefan Trawicki, Neeraj Suri, Peter Garraghan
Model Leeching is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced parameter model.
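The extraction loop described above can be sketched at a high level: query the target model on task inputs, harvest its outputs as training labels, then fit a smaller student on the harvested dataset. The sketch below is illustrative only; all function names and the trivial keyword-based "student" are assumptions, not the paper's method.

```python
# Hedged sketch of an extraction-style attack loop. The target here is a
# stand-in function, not a real LLM, and the "student" is a keyword lookup
# rather than a fine-tuned reduced-parameter model.

def target_llm(prompt: str) -> str:
    # Stand-in for the victim model's task-specific behaviour
    # (here: trivial sentiment labelling).
    return "positive" if "good" in prompt else "negative"

def harvest(prompts):
    # Step 1: the adversary needs only query access to the target.
    return [(p, target_llm(p)) for p in prompts]

def train_student(dataset):
    # Step 2: distil the harvested (input, output) pairs into a smaller model;
    # a real attack would fine-tune a compact transformer instead.
    keyword_labels = {}
    for prompt, label in dataset:
        for word in prompt.split():
            keyword_labels.setdefault(word, label)
    return lambda p: next(
        (keyword_labels[w] for w in p.split() if w in keyword_labels),
        "negative",
    )

prompts = ["good movie", "bad plot", "good acting"]
student = train_student(harvest(prompts))
print(student("good film"))  # student mimics the target on the task
```

The key point the sketch captures is that no access to the target's weights is required: query access alone suffices to replicate task-specific behaviour.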
no code implementations • 20 Feb 2023 • Andrew Sogokon, Burak Yuksek, Gokhan Inalhan, Neeraj Suri
Specifying the intended behaviour of autonomous systems is becoming increasingly important but is fraught with many challenges.
no code implementations • 1 Oct 2022 • Yang Lu, Zhengxin Yu, Neeraj Suri
Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem.
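One common building block for coordinator-free learning of this kind is gossip averaging, where each peer repeatedly averages its parameters with a randomly chosen neighbour. The sketch below is a minimal illustration of that idea, not the paper's protocol, and omits the privacy-preserving machinery entirely.

```python
# Hedged sketch of peer-to-peer gossip averaging: no central coordinator,
# each round two random peers replace their parameter vectors with the mean.
import random

def gossip_round(params, rng):
    # Pick two distinct peers and average their (toy) parameter vectors.
    i, j = rng.sample(range(len(params)), 2)
    mean = [(a + b) / 2 for a, b in zip(params[i], params[j])]
    params[i] = list(mean)
    params[j] = list(mean)

rng = random.Random(0)
peers = [[1.0], [2.0], [3.0], [10.0]]  # each peer's (toy) model parameters
for _ in range(200):
    gossip_round(peers, rng)
print([round(p[0], 3) for p in peers])  # peers converge toward the global mean 4.0
```

Because each round preserves the sum of the parameters while shrinking pairwise differences, all peers drift toward the global average without any node ever seeing the full set of models.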
no code implementations • 13 Sep 2022 • William Hackett, Stefan Trawicki, Zhengxin Yu, Neeraj Suri, Peter Garraghan
Adversarial extraction attacks constitute an insidious threat against Deep Learning (DL) models, in which an adversary aims to steal the architecture, parameters, and hyper-parameters of a targeted DL model.