no code implementations • 3 Feb 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
In this work, we introduce the first unlearnable example (UE) generation method to protect time series data from unauthorized training by deep learning models.
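For context, unlearnable examples are commonly built by adding small error-minimizing perturbations that make the data look "already learned" during training. The sketch below illustrates that general idea for a batch of time series; it is a generic PyTorch illustration, not the paper's algorithm, and `model`, `series`, and `labels` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, series, labels, eps=0.02, steps=20, lr=0.01):
    """Generic error-minimizing noise sketch: descend the training loss
    with respect to a bounded perturbation so the perturbed series carries
    little useful learning signal. Illustrative only; not the authors'
    exact method. `series` is assumed to be (batch, channels, length)."""
    delta = torch.zeros_like(series, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(series + delta), labels)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # minimize (not maximize) the loss
            delta.clamp_(-eps, eps)          # keep the perturbation small
        delta.grad.zero_()
    return (series + delta).detach()
```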
no code implementations • 6 Jan 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, Yige Li, James Bailey
Backdoor attacks pose a serious security threat to deep learning models, especially those deployed in safety- and security-critical applications.
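As background, the canonical (BadNets-style) backdoor attack poisons a fraction of the training set by stamping a fixed trigger onto inputs and relabeling them to an attacker-chosen class. A minimal sketch, assuming time-series tensors of shape `(batch, channels, length)` and hypothetical parameter names:

```python
import torch

def poison_batch(x, y, target_class, trigger_value=1.0, trigger_len=8):
    """BadNets-style poisoning sketch: overwrite the last few time steps
    with a fixed trigger pattern and flip labels to the attacker's target.
    Illustrative only; the trigger design is an assumption, not the
    paper's attack."""
    x_p = x.clone()
    x_p[:, :, -trigger_len:] = trigger_value   # stamp a fixed trigger segment
    y_p = torch.full_like(y, target_class)     # relabel to the target class
    return x_p, y_p
```

A model trained on a mix of clean and poisoned batches then behaves normally on clean inputs but predicts the target class whenever the trigger is present.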
1 code implementation • 15 Nov 2022 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
We find that achieving these two goals can be more challenging for time series than for images.
1 code implementation • 21 Apr 2021 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications.
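The classic illustration of this vulnerability is the Fast Gradient Sign Method (FGSM). The sketch below is a standard textbook construction, not this paper's method; it assumes a differentiable classifier `model` and labeled inputs `x`, `y`:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method (Goodfellow et al., 2015): one
    gradient-ascent step on the loss, showing how a small, bounded
    perturbation can flip a DNN's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```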
no code implementations • 11 Jun 2017 • Yujing Jiang, Xin He, Mei-Ling Ting Lee, Bernard Rosner, Jun Yan
For independent data, these tests are available in several R packages, such as stats and coin.
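For illustration only, the analogous independent-sample rank test in Python (the SciPy counterpart of R's stats::wilcox.test, not the clustered-data methods this paper concerns) looks like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=30)   # synthetic sample 1
b = rng.normal(0.5, 1.0, size=30)   # synthetic sample 2, shifted mean

# Wilcoxon rank-sum (Mann-Whitney U) test for two independent samples.
u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```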