no code implementations • ICLR 2019 • Florin Schimbinschi, Christian Walder, Sarah Erfani, James Bailey
Learning synthesizers and generating music in the raw audio domain is a challenging task.
1 code implementation • 21 Feb 2024 • Canaan Yung, Hadi Mohaghegh Dolatabadi, Sarah Erfani, Christopher Leckie
To address this issue, we propose the Round Trip Translation (RTT) method, the first algorithm specifically designed to defend against social-engineered attacks on LLMs.
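The paper's code is linked above; as a rough illustration of the round-trip idea only, the sketch below paraphrases a prompt through pivot languages so that obfuscated, social-engineered phrasing is smoothed out before a safety check. The `translate` helper is hypothetical (an identity placeholder); any machine-translation backend could stand in for it.

```python
def translate(text: str, src: str, tgt: str) -> str:
    # Placeholder: swap in a real machine-translation call.
    return text

def round_trip(prompt: str, pivots=("fr", "de")) -> list[str]:
    """Return round-tripped paraphrases of `prompt` via each pivot language."""
    paraphrases = []
    for lang in pivots:
        forward = translate(prompt, src="en", tgt=lang)
        back = translate(forward, src=lang, tgt="en")
        paraphrases.append(back)
    return paraphrases

# A guard could then run its safety filter on every paraphrase and refuse
# the request if any round-tripped version is flagged as harmful.
```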
no code implementations • 7 Feb 2024 • Chen Wang, Sarah Erfani, Tansu Alpcan, Christopher Leckie
Our offline learning model is an adaptation of behavioural cloning with a transformer policy network, where we modify the training process to learn a Q function and a state value function from normal trajectories.
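A minimal PyTorch sketch of one plausible reading of that setup, not the authors' implementation: a transformer backbone trained by behavioural cloning, with extra heads that regress a Q function and a state-value function from normal trajectories. All layer sizes, head shapes, and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BCWithValueHeads(nn.Module):
    def __init__(self, state_dim, n_actions, d_model=64):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.policy_head = nn.Linear(d_model, n_actions)  # behavioural cloning
        self.q_head = nn.Linear(d_model, n_actions)       # Q(s, a) for all a
        self.v_head = nn.Linear(d_model, 1)               # V(s)

    def forward(self, states):                 # states: (batch, seq, state_dim)
        h = self.encoder(self.embed(states))
        return self.policy_head(h), self.q_head(h), self.v_head(h)

# Training sketch: cross-entropy on logged actions for the policy head, plus
# TD-style regression targets for the Q and value heads, all computed from
# normal trajectories only.
```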
1 code implementation • 15 Mar 2023 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
In particular, we leverage the power of diffusion models and show that a carefully designed denoising process can counteract the effectiveness of the data-protecting perturbations.
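A hedged sketch of the "diffuse, then denoise" purification idea: push the protected image part-way into the diffusion forward process, which drowns the small data-protecting perturbation in Gaussian noise, then run the learned reverse process to recover a clean-looking image. Here `reverse_step` is a hypothetical stand-in for one step of a pretrained diffusion sampler, and `t_star` is an assumed noise level.

```python
import torch

def purify(x, reverse_step, alphas_cumprod, t_star=100):
    a_bar = alphas_cumprod[t_star]
    noise = torch.randn_like(x)
    # Forward diffusion to timestep t*: q(x_t | x_0).
    x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise
    # Learned reverse process back to a clean image.
    for t in range(t_star, 0, -1):
        x_t = reverse_step(x_t, t)
    return x_t
```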
1 code implementation • 26 Jan 2023 • Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey
We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks.
1 code implementation • 13 Oct 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
We show the effectiveness of the proposed method for robust training of DNNs on various poisoned datasets, reducing the backdoor success rate significantly.
1 code implementation • 13 Sep 2022 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
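One common instantiation of coreset selection is greedy gradient matching: pick the subset whose summed per-example gradients best approximate the full training gradient, then run (adversarial) training on that subset only. The sketch below illustrates that idea under stated assumptions; the selection rule is not necessarily the paper's exact criterion.

```python
import torch

def greedy_coreset(per_example_grads, k):
    """per_example_grads: (n, d) tensor of flattened per-example gradients."""
    target = per_example_grads.sum(dim=0)         # full-batch gradient
    chosen, running = [], torch.zeros_like(target)
    for _ in range(k):
        # Residual between the full gradient and the current subset's sum.
        residual = target - running
        scores = per_example_grads @ residual      # best next addition
        if chosen:
            scores[chosen] = float("-inf")         # don't pick twice
        idx = int(scores.argmax())
        chosen.append(idx)
        running = running + per_example_grads[idx]
    return chosen
```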
1 code implementation • 28 Jun 2022 • Yixin Su, Yunxiang Zhao, Sarah Erfani, Junhao Gan, Rui Zhang
Detecting beneficial feature interactions is essential in recommender systems, and existing approaches achieve this by examining all the possible feature interactions.
no code implementations • 20 May 2022 • Shiquan Yang, Xinting Huang, Jey Han Lau, Sarah Erfani
Data artifacts incentivize machine learning models to learn non-transferable generalizations by exploiting shortcuts in the data, and there is growing evidence that such artifacts contribute to the strong results that deep learning models achieve on recent natural language processing benchmarks.
1 code implementation • ACL 2022 • Shiquan Yang, Rui Zhang, Sarah Erfani, Jey Han Lau
To obtain a transparent reasoning process, we introduce a neuro-symbolic approach that performs explicit reasoning, justifying model decisions through reasoning chains.
2 code implementations • 1 Dec 2021 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output.
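A classic illustration of this vulnerability (not this paper's attack) is the fast gradient sign method of Goodfellow et al. (2015): a single step in the direction of the loss gradient's sign, small enough to be imperceptible, can flip a model's prediction.

```python
import torch

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # step in the gradient's sign direction
    return x_adv.clamp(0, 1).detach()    # keep a valid image in [0, 1]
```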
1 code implementation • NeurIPS 2021 • Jiabo He, Sarah Erfani, Xingjun Ma, James Bailey, Ying Chi, Xian-Sheng Hua
Bounding box (bbox) regression is a fundamental task in computer vision.
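As a minimal example of overlap-based bbox regression, the sketch below computes an IoU loss for axis-aligned boxes given as (x1, y1, x2, y2); IoU losses optimize overlap directly rather than the individual coordinates. The power parameter `alpha` is an assumption inspired by power IoU losses and need not match the paper's formulation.

```python
import torch

def iou_loss(pred, target, alpha=1.0, eps=1e-7):
    # Intersection rectangle.
    lt = torch.max(pred[..., :2], target[..., :2])
    rb = torch.min(pred[..., 2:], target[..., 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]

    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)
    return 1 - iou.pow(alpha)            # alpha=1 recovers the standard IoU loss
```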
no code implementations • 29 Sep 2021 • Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague
More specifically, by minimizing the Wasserstein (WS) distance of interest, an adversarial example is pushed toward the cluster of benign examples that share the same label in the latent space, which strengthens the classifier's ability to generalize to adversarial examples.
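One standard way to realize such a distance in code is the entropy-regularized (Sinkhorn) approximation of the Wasserstein distance between two batches of latent codes; the sketch below is an illustration under that assumption, not the paper's exact objective.

```python
import torch

def sinkhorn_distance(za, zb, eps=0.1, iters=50):
    n, m = za.size(0), zb.size(0)
    cost = torch.cdist(za, zb) ** 2             # pairwise squared distances
    K = torch.exp(-cost / eps)                  # Gibbs kernel
    u = torch.full((n,), 1.0 / n)               # uniform source weights
    v = torch.full((m,), 1.0 / m)               # uniform target weights
    b = torch.ones(m)
    for _ in range(iters):                      # Sinkhorn fixed-point updates
        a = u / (K @ b)
        b = v / (K.t() @ a)
    plan = a.unsqueeze(1) * K * b.unsqueeze(0)  # entropic transport plan
    return (plan * cost).sum()
```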
1 code implementation • 10 May 2021 • Yixin Su, Rui Zhang, Sarah Erfani, Junhao Gan
User and item attributes are essential side information; their interactions (i.e., their co-occurrence in the sample data) can significantly enhance prediction accuracy in various recommender systems.
1 code implementation • EMNLP 2020 • Shiquan Yang, Rui Zhang, Sarah Erfani
End-to-end task-oriented dialogue systems aim to generate system responses directly from plain text inputs.
1 code implementation • 23 Sep 2020 • Jiabo He, Sarah Erfani, Sudanthi Wijewickrema, Stephen O'Leary, Kotagiri Ramamohanarao
Time series with large discontinuities are common in many scenarios.
1 code implementation • 23 Sep 2020 • Jiabo He, Sarah Erfani, Sudanthi Wijewickrema, Stephen O'Leary, Kotagiri Ramamohanarao
Semantic segmentation is one of the key problems in the field of computer vision, as it enables computer image understanding.
4 code implementations • 2 Aug 2020 • Yixin Su, Rui Zhang, Sarah Erfani, Zhenghua Xu
To make the most of feature interactions, we propose a graph neural network approach to model them effectively, together with a novel technique to automatically detect the feature interactions that are beneficial for recommendation accuracy.
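A hedged sketch of that idea: treat each feature of a sample as a graph node, let every feature pair be a candidate edge, and learn a gate per edge so that only beneficial interactions contribute to the prediction. The gating form, layer sizes, and class name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FeatureInteractionGNN(nn.Module):
    def __init__(self, n_features, dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_features, dim)
        self.edge_gate = nn.Linear(2 * dim, 1)   # scores each candidate edge
        self.out = nn.Linear(dim, 1)

    def forward(self, feat_ids):                 # feat_ids: (batch, n_fields)
        h = self.embed(feat_ids)                 # (batch, n_fields, dim)
        # All ordered pairs of feature nodes.
        hi = h.unsqueeze(2).expand(-1, -1, h.size(1), -1)
        hj = h.unsqueeze(1).expand(-1, h.size(1), -1, -1)
        gate = torch.sigmoid(self.edge_gate(torch.cat([hi, hj], dim=-1)))
        msg = (gate * hj).sum(dim=2)             # gated message passing
        return self.out((h + msg).mean(dim=1))   # pooled graph readout
```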
1 code implementation • NeurIPS 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.
1 code implementation • 6 Jul 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier decision.
4 code implementations • ICML 2020 • Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey
However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.
Ranked #30 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)
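As context for the claim above: in this line of work, a loss can be made robust to label noise by normalizing its value for the labelled class by its sum over all classes. The sketch below shows this for cross entropy (normalized cross entropy); the point of the snippet is that such robustness alone is insufficient, which motivates combining complementary robust losses, a combination not reproduced here.

```python
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    log_probs = F.log_softmax(logits, dim=-1)   # (batch, n_classes)
    per_class = -log_probs                      # CE against every class
    numer = per_class.gather(1, targets.unsqueeze(1)).squeeze(1)
    return (numer / per_class.sum(dim=-1)).mean()
```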
1 code implementation • 25 Mar 2020 • Farbod Taymouri, Marcello La Rosa, Sarah Erfani, Zahra Dasht Bozorgi, Ilya Verenich
Predictive process monitoring aims to predict future characteristics of an ongoing process case, such as the case outcome or the remaining time.
1 code implementation • 15 Jan 2020 • Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie
A significant advantage of such models is their easy-to-compute inverse.
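A minimal illustration of why flow-style models invert cheaply, using an affine coupling layer (a simpler transform than the splines in the paper): the forward map only scales and shifts one half of the input conditioned on the other half, so the inverse is available in closed form. The `s_net` and `t_net` arguments stand in for arbitrary learned networks.

```python
import torch

def coupling_forward(x1, x2, s_net, t_net):
    s, t = s_net(x1), t_net(x1)
    return x1, x2 * torch.exp(s) + t      # y1 = x1, y2 = x2 * e^s + t

def coupling_inverse(y1, y2, s_net, t_net):
    s, t = s_net(y1), t_net(y1)
    return y1, (y2 - t) * torch.exp(-s)   # exact inverse, no iteration needed
```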
no code implementations • 25 Feb 2019 • Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani
Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.
no code implementations • 17 Aug 2018 • Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague
Despite the successful application of machine learning (ML) in a wide range of domains, adaptability, the very property that makes machine learning desirable, can be exploited by adversaries to contaminate training and evade classification.