no code implementations • 29 Feb 2024 • Anan Kabaha, Dana Drachsler-Cohen
We evaluate VHAGaR on several datasets and classifiers and show that, given a three hour timeout, the average gap between the lower and upper bound on the minimal globally robust bound computed by VHAGaR is 1.9, while the gap of an existing global robustness verifier is 154.7.
no code implementations • 31 Oct 2023 • Roie Reshef, Anan Kabaha, Olga Seleznova, Dana Drachsler-Cohen
We propose Sphynx, an algorithm that computes an abstraction of all networks, with a high probability, from a small set of networks, and verifies LDCP directly on the abstract network.
no code implementations • 12 Sep 2022 • Anan Kabaha, Dana Drachsler-Cohen
Deep neural networks have been shown to be vulnerable to adversarial attacks that perturb inputs based on semantic features.
no code implementations • 29 Sep 2021 • Anan Kabaha, Dana Drachsler-Cohen
We focus on robustness to perturbations of semantic features and introduce the concept of proof guided by velocity to scale the analysis.
no code implementations • 1 Jan 2021 • Anan Kabaha, Dana Drachsler-Cohen
In this work, we take a new approach and study the robustness of networks to the inputs' semantic features.