Search Results for author: Mislav Balunovic

Found 7 papers, 3 papers with code

Boosting Certified Robustness of Deep Networks via a Compositional Architecture

no code implementations • ICLR 2021 • Mark Niklas Mueller, Mislav Balunovic, Martin Vechev

In this work, we propose a new architecture which addresses this challenge and enables one to boost the certified robustness of any state-of-the-art deep network, while controlling the overall accuracy loss, without requiring retraining.
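The compositional idea lends itself to a short illustration. Below is a minimal, hypothetical PyTorch sketch of such an architecture: a learned selector routes each input either to a certifiably robust network or to an accurate standard one, combining the two pre-trained subnetworks without retraining. The class, argument names, and the hard 0.5 threshold are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class CompositionalNet(nn.Module):
    """Hypothetical sketch of a compositional architecture: route each
    input either to a certifiably robust core network or to an accurate
    standard network based on a learned selection score. Names and the
    fixed threshold are illustrative assumptions."""

    def __init__(self, certified_net, accurate_net, selector, threshold=0.5):
        super().__init__()
        self.certified_net = certified_net  # network with certified robustness
        self.accurate_net = accurate_net    # high-accuracy standard network
        self.selector = selector            # outputs one score per example
        self.threshold = threshold

    def forward(self, x):
        score = torch.sigmoid(self.selector(x))           # shape (batch, 1)
        use_certified = (score >= self.threshold).float()
        out_c = self.certified_net(x)
        out_a = self.accurate_net(x)
        # Per-example hard routing; neither subnetwork is retrained.
        return use_certified * out_c + (1.0 - use_certified) * out_a
```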

Scalable Polyhedral Verification of Recurrent Neural Networks

1 code implementation • 27 May 2020 • Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev

We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, which combines multiple abstractions for each neuron.
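To make idea (i) concrete, here is a hypothetical NumPy sketch of one ingredient: computing sound linear lower and upper bounds for tanh on an interval. The secant slope stands in for the paper's sampling-and-optimization step, and the sound offsets come from the stationary points of the residual, found by setting its derivative to zero (Fermat's theorem). Function and variable names are illustrative, not the Prover API.

```python
import numpy as np

def linear_bounds_tanh(l, u):
    """Hypothetical sketch: sound linear bounds a*x + b for tanh on [l, u].
    The slope is a simple secant (a stand-in for sampling/optimization);
    the offsets are made sound by checking the stationary points of
    g(x) = tanh(x) - a*x (derivative set to zero, i.e. Fermat's theorem)
    together with the interval endpoints."""
    f = np.tanh
    df = lambda x: 1.0 - np.tanh(x) ** 2

    # Candidate slope: secant through the endpoints.
    a = (f(u) - f(l)) / (u - l) if u > l else df(l)

    # Stationary points of g(x) = tanh(x) - a*x: solve 1 - tanh(x)^2 = a.
    candidates = [l, u]
    if 0.0 < a <= 1.0:
        x_star = np.arctanh(np.sqrt(1.0 - a))
        for x in (x_star, -x_star):
            if l < x < u:
                candidates.append(x)

    g = [f(x) - a * x for x in candidates]
    b_lower, b_upper = min(g), max(g)    # offsets making both bounds sound
    return (a, b_lower), (a, b_upper)    # a*x+b_lower <= tanh(x) <= a*x+b_upper
```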

Adversarial Training and Provable Defenses: Bridging the Gap

1 code implementation • ICLR 2020 • Mislav Balunovic, Martin Vechev

We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds -- it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation.
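The core of such layerwise training is an adversarial search inside a convex latent region. The PyTorch sketch below is a hypothetical simplification: it assumes the region is given as elementwise bounds [z_lo, z_hi] (COLT itself uses tighter convex relaxations) and runs projected gradient ascent in latent space; all function and argument names are illustrative.

```python
import torch

def latent_pgd_attack(tail_net, z_lo, z_hi, y, steps=10, lr=0.1):
    """Hypothetical sketch of a latent-space attack for layerwise training:
    given sound elementwise bounds [z_lo, z_hi] on a layer's activations
    (from a convex relaxation of the earlier layers), search for a
    worst-case latent point by projected gradient ascent and return it
    for training the remaining layers `tail_net`. Simplified stand-in,
    not the authors' exact procedure."""
    z = ((z_lo + z_hi) / 2).detach().requires_grad_(True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(tail_net(z), y)
        (grad,) = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z += lr * grad.sign()            # ascend the loss
            z.clamp_(min=z_lo, max=z_hi)     # project back into the region
    return z.detach()
```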

Certifying Geometric Robustness of Neural Networks

1 code implementation • NeurIPS 2019 • Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).
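For intuition, the sketch below shows the sampling half of such a certification pipeline: estimating per-pixel interval bounds on an image under a range of rotations. On its own this is only an under-approximation; the certification method must additionally bound the error between samples soundly, which is omitted here. The function name and the SciPy rotation routine are illustrative choices.

```python
import numpy as np
from scipy.ndimage import rotate

def sampled_pixel_bounds(image, angle_lo, angle_hi, n_samples=50):
    """Hypothetical sketch: estimate per-pixel interval bounds on a 2-D
    image under rotations in [angle_lo, angle_hi] degrees by dense
    sampling. Sampling alone is not sound -- a certifier must also bound
    the gap between sampled angles -- so this is for intuition only."""
    angles = np.linspace(angle_lo, angle_hi, n_samples)
    rotated = np.stack([
        rotate(image, angle, reshape=False, order=1)  # bilinear interpolation
        for angle in angles
    ])
    lo, hi = rotated.min(axis=0), rotated.max(axis=0)
    return lo, hi  # candidate [lo, hi] box for an interval-based verifier
```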
