no code implementations • 2 May 2021 • Ziv Katzir, Yuval Elovici
By combining theoretical reasoning with a series of empirical results, we show that it is practically impossible to predict, in a black-box setting, whether a given adversarial example will transfer to a specific target model. This calls into question the validity of adversarial transferability as a real-life attack tool for adversaries who are sensitive to the cost of a failed attack.
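A minimal sketch of the kind of transferability check this abstract alludes to, not the authors' protocol: an FGSM example is crafted with white-box access to a surrogate model and then tested against a separately trained target model. The architectures, epsilon, and random input below are illustrative assumptions.

```python
# Hypothetical sketch of a transferability check (not the authors'
# protocol): craft an FGSM example on a surrogate model, then test it
# against a separate target model. Architectures, eps, and the random
# input are assumptions; real models would be trained before this point.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """One-step FGSM perturbation computed with white-box access to `model`."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

surrogate = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
target = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(1, 20)   # one clean input sample
y = torch.tensor([0])    # its true label
x_adv = fgsm(surrogate, x, y, eps=0.1)

fooled = (surrogate(x_adv).argmax(1) != y).item()    # white-box success
transferred = (target(x_adv).argmax(1) != y).item()  # the hard-to-predict event
print(f"fools surrogate: {fooled}, transfers to target: {transferred}")
```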
no code implementations • 7 Oct 2020 • Yael Mathov, Eden Levy, Ziv Katzir, Asaf Shabtai, Yuval Elovici
We argue, however, that machine learning models trained on heterogeneous tabular data are just as susceptible to adversarial manipulation as those trained on continuous or homogeneous data such as images.
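To make the tabular setting concrete, here is a hedged illustration of the constraint, not the paper's attack: a toy column layout (three continuous features followed by a three-way one-hot category) is assumed, and the gradient-sign perturbation is masked so only continuous columns move while the categorical encoding stays valid.

```python
# Hedged illustration (not the paper's attack): a gradient-sign
# perturbation on a tabular sample where a mask keeps the one-hot
# categorical columns intact and only moves the continuous columns.
# The column layout, model, and step size are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 2))
cont_mask = torch.tensor([1., 1., 1., 0., 0., 0.])  # first 3 columns continuous

x = torch.tensor([[0.4, -1.2, 0.7, 1., 0., 0.]])    # 3 continuous + one-hot(3)
y = torch.tensor([1])

x_req = x.clone().detach().requires_grad_(True)
nn.functional.cross_entropy(model(x_req), y).backward()

# Perturb continuous features only; the categorical encoding stays valid.
x_adv = x + 0.2 * x_req.grad.sign() * cont_mask
print("prediction on perturbed sample:", model(x_adv).argmax(1).item())
```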
no code implementations • 23 Sep 2020 • Gil Fidel, Ron Bitton, Ziv Katzir, Asaf Shabtai
Recent works have shown that the input domain of any machine learning classifier is bound to contain adversarial examples.
no code implementations • 11 Jul 2019 • Ziv Katzir, Yuval Elovici
We show that, contrary to commonly held belief, the ability to bypass defensive distillation does not depend on the sophistication of the attack.
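For background on the defense in question, a sketch of standard defensive distillation (Papernot et al.), not this paper's method: a student model is trained on a teacher's temperature-softened output distribution. The temperature, architectures, and random batch below are assumptions.

```python
# Background sketch of defensive distillation (the defense being
# bypassed), not this paper's method: a student is trained to match
# the teacher's temperature-softened outputs. T, the architectures,
# and the random batch are illustrative assumptions.
import torch
import torch.nn as nn

T = 20.0
teacher = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
student = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(64, 10)  # stand-in training batch
with torch.no_grad():
    soft = torch.softmax(teacher(x) / T, dim=1)  # softened "labels"

# One distillation step: the student matches the teacher's soft labels.
opt = torch.optim.SGD(student.parameters(), lr=0.1)
loss = nn.functional.kl_div(
    torch.log_softmax(student(x) / T, dim=1), soft, reduction="batchmean")
loss.backward()
opt.step()
print("distillation loss:", loss.item())
```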
no code implementations • 22 Nov 2018 • Ziv Katzir, Yuval Elovici
We leverage those classifiers to produce a sequence of class labels for each non-perturbed input sample and estimate the a priori probability of a class-label change between one activation space and the next.
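A hedged sketch of the stated idea, with illustrative stand-ins (one k-NN classifier per activation space and random activations in place of a real network; both are assumptions): each sample yields one predicted label per activation space, and the empirical rate of label changes between consecutive spaces on non-perturbed data serves as the a priori probability.

```python
# Hedged sketch of the stated idea: fit one simple classifier (here
# k-NN, an assumption) per activation space, read off a label sequence
# per sample, and estimate the a priori probability of a label change
# between consecutive spaces on non-perturbed data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-ins for three layers' activations over 200 clean samples.
acts = [rng.normal(size=(200, d)) for d in (32, 16, 8)]
labels = rng.integers(0, 2, size=200)

knns = [KNeighborsClassifier(n_neighbors=5).fit(a, labels) for a in acts]

# One predicted label per activation space -> a label sequence per sample.
seq = np.stack([k.predict(a) for k, a in zip(knns, acts)], axis=1)

# Empirical probability of a label change between consecutive spaces;
# at test time, unusually many changes would flag a suspicious input.
p_change = (seq[:, 1:] != seq[:, :-1]).mean(axis=0)
print("P(label change) per transition:", p_change)
```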