Search Results for author: Aws Albarghouthi

Found 22 papers, 7 papers with code

Generating Programmatic Referring Expressions via Program Synthesis

1 code implementation · ICML 2020 · Jiani Huang, Calvin Smith, Osbert Bastani, Rishabh Singh, Aws Albarghouthi, Mayur Naik

The policy neural network employs a program interpreter that provides immediate feedback on the consequences of the decisions made by the policy, and also takes into account the uncertainty in the symbolic representation of the image.

Enumerative Search · Logical Reasoning

Verified Training for Counterfactual Explanation Robustness under Data Shift

no code implementations · 6 Mar 2024 · Anna P. Meyer, Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Our empirical evaluation demonstrates that VeriTraCER generates CEs that (1) are verifiably robust to small model updates and (2) display competitive robustness to state-of-the-art approaches in handling empirical model updates including random initialization, leave-one-out, and distribution shifts.

counterfactual · Counterfactual Explanation

The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions

1 code implementation · 20 Apr 2023 · Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni

We introduce dataset multiplicity, a way to study how inaccuracies, uncertainty, and social bias in training datasets impact test-time predictions.

counterfactual

PECAN: A Deterministic Certified Defense Against Backdoor Attacks

no code implementations · 27 Jan 2023 · Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Neural networks are vulnerable to backdoor poisoning attacks, where the attackers maliciously poison the training set and insert triggers into the test input to change the prediction of the victim model.

backdoor defense · Image Classification +1

AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels

no code implementations · 30 Aug 2022 · Nicholas Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Cheng-Yu Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, Frederic Sala

While it has been used successfully in many domains, weak supervision's application scope is limited by the difficulty of constructing labeling functions for domains with complex or high-dimensional features.

Benchmarking
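
The snippet above centers on labeling functions: noisy, hand-written heuristics whose votes are combined into training labels. A minimal sketch of the idea, assuming hypothetical spam/ham heuristics and plain majority vote (real weak-supervision systems learn how to combine the votes):

```python
# Toy weak supervision: each labeling function votes 1 (spam), 0 (ham),
# or abstains (None); votes are combined by simple majority.
# The heuristics here are made up for illustration.
from collections import Counter

ABSTAIN = None

def lf_contains_free(text):       # spam heuristic
    return 1 if "free" in text.lower() else ABSTAIN

def lf_contains_meeting(text):    # ham heuristic
    return 0 if "meeting" in text.lower() else ABSTAIN

def lf_many_exclaims(text):       # spam heuristic
    return 1 if text.count("!") >= 2 else ABSTAIN

LFS = [lf_contains_free, lf_contains_meeting, lf_many_exclaims]

def weak_label(text):
    votes = [v for v in (lf(text) for lf in LFS) if v is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label("FREE prizes!! click now"))  # 1 (spam)
print(weak_label("agenda for the meeting"))   # 0 (ham)
```

Constructing such functions is exactly what becomes hard for the complex, high-dimensional domains the abstract mentions: there is no obvious keyword test for an image or an embedding.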

Certifying Data-Bias Robustness in Linear Regression

no code implementations · 7 Jun 2022 · Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni

Datasets typically contain inaccuracies due to human error and societal biases, and these inaccuracies can affect the outcomes of models trained on such datasets.

regression
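
A tiny illustration of the concern: corrupting a single training label visibly shifts an ordinary least-squares fit. This is pure-Python 1-D OLS for intuition only, not the paper's certification technique.

```python
# One biased label moves the OLS slope from 1.0 to 1.9.
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 2.0, 3.0]        # clean labels: slope 1.0
biased = [0.0, 1.0, 2.0, 6.0]    # last label corrupted
print(ols_slope(xs, ys), ols_slope(xs, biased))  # 1.0 1.9
```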

BagFlip: A Certified Defense against Data Poisoning

1 code implementation · 26 May 2022 · Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model.

Backdoor Attack · Data Poisoning +2
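
One common building block behind certified poisoning defenses is bagging: train many models on random subsamples and take a majority vote, so a few poisoned examples can only influence a bounded fraction of the votes. The sketch below uses a toy nearest-centroid learner; it illustrates that aggregation idea only, not BagFlip itself (which also smooths the test input).

```python
# Bagging-style aggregation: majority vote over models trained on
# random subsamples of the (possibly poisoned) training set.
import random
from collections import Counter

def train_nearest_centroid(samples):
    """Toy learner: per-class mean of a 1-D feature."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    return min(model, key=lambda y: abs(model[y] - x))

def bagged_predict(dataset, x, n_models=51, bag_size=20, seed=0):
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_models):
        bag = [rng.choice(dataset) for _ in range(bag_size)]
        votes[predict(train_nearest_centroid(bag), x)] += 1
    return votes.most_common(1)[0][0]

data = [(0.1 * i, 0) for i in range(10)] + [(5 + 0.1 * i, 1) for i in range(10)]
print(bagged_predict(data, 4.8))  # majority of bagged models vote class 1
```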

Certifying Robustness to Programmable Data Bias in Decision Trees

no code implementations · NeurIPS 2021 · Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni

To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that each and every dataset produces the same prediction for a specific test point.

Fairness
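
The property being certified can be conveyed by a brute-force version: retrain on every dataset reachable under the bias model (here, flipping at most one label) and check that the prediction at the test point never changes. The paper does this symbolically, without enumeration; this toy decision-stump version only shows what is being checked.

```python
# Certify (by exhaustive enumeration) that a 1-D decision stump's
# prediction at x_test is stable under any single label flip.
def train_stump(data):
    """Pick the threshold minimizing training errors for rule: y_hat = (x >= t)."""
    xs = sorted(x for x, _ in data)
    candidates = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    best = None
    for t in candidates:
        errs = sum((x >= t) != y for x, y in data)
        if best is None or errs < best[0]:
            best = (errs, t)
    return best[1]

def certified_at(data, x_test):
    """True iff the prediction at x_test survives every one-label flip."""
    base = x_test >= train_stump(data)
    for i in range(len(data)):
        flipped = list(data)
        xi, yi = flipped[i]
        flipped[i] = (xi, not yi)
        if (x_test >= train_stump(flipped)) != base:
            return False
    return True

data = [(0.0, False), (1.0, False), (2.0, False),
        (8.0, True), (9.0, True), (10.0, True)]
print(certified_at(data, 0.0), certified_at(data, 5.0))  # True False
```

Points far from the decision boundary are certifiable; the midpoint is not, because one flip can move the learned threshold past it. Enumeration is exponential in the number of allowed perturbations, which is why a symbolic technique is needed.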

Introduction to Neural Network Verification

no code implementations · 21 Sep 2021 · Aws Albarghouthi

Deep learning has transformed the way we think of software and what it can do.

Certified Robustness to Programmable Transformations in LSTMs

1 code implementation · EMNLP 2021 · Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Deep neural networks for natural language processing are fragile in the face of adversarial examples -- small input perturbations, like synonym substitution or word duplication, which cause a neural network to change its prediction.
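
The perturbation space described above can be made concrete with a toy example: enumerate every synonym-substituted variant of a sentence and check whether a classifier's prediction is stable across all of them. The synonym table and the keyword "classifier" below are invented for illustration; the paper certifies robustness over such spaces without enumerating them.

```python
# Enumerate synonym-substitution variants and test prediction stability.
from itertools import product

SYNONYMS = {"good": ["great", "fine"], "movie": ["film"]}  # hypothetical table

def variants(sentence):
    """All sentences obtainable by independently substituting synonyms."""
    choices = [[w] + SYNONYMS.get(w, []) for w in sentence.split()]
    return [" ".join(c) for c in product(*choices)]

def toy_classifier(sentence):
    """Hypothetical sentiment rule: positive iff a positive word appears."""
    positive = {"good", "great", "fine"}
    return int(any(w in positive for w in sentence.split()))

sent = "good movie"
preds = {toy_classifier(v) for v in variants(sent)}
print(len(variants(sent)), preds)  # 6 variants, all predicted positive (class 1)
```

The variant set grows exponentially with sentence length, which is why certification rather than enumeration is the interesting problem.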

Learning Differentially Private Mechanisms

no code implementations · 4 Jan 2021 · Subhajit Roy, Justin Hsu, Aws Albarghouthi

We demonstrate that our approach is able to learn foundational algorithms from the differential privacy literature and significantly outperforms natural program synthesis baselines.

Program Synthesis
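
A canonical example of the "foundational algorithms" the abstract refers to is the Laplace mechanism, which releases a numeric query result with calibrated noise. The sketch below illustrates differential privacy itself, not the paper's synthesis approach.

```python
# Laplace mechanism: add Laplace(sensitivity/epsilon) noise to achieve
# epsilon-differential privacy for a numeric query.
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    # Inverse-CDF sample of Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

rng = random.Random(0)
noisy = laplace_mechanism(100.0, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)  # true count 100 plus noise of scale 2
```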

Generalized Universal Approximation for Certified Networks

no code implementations · 1 Jan 2021 · Zi Wang, Aws Albarghouthi, Somesh Jha

To certify safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using interval bound propagation.
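
Interval bound propagation pushes a box of possible inputs through the network layer by layer, choosing interval endpoints so the output box soundly over-approximates all reachable outputs. A minimal pure-Python sketch for one affine layer followed by ReLU (illustrative only, not the paper's code):

```python
# Interval bound propagation through y = Wx + b followed by ReLU.
def affine_interval(lo, hi, weights, bias):
    """Propagate per-input intervals [lo[i], hi[i]] through an affine layer.
    A positive weight takes the lower bound from lo, a negative one from hi."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        out_lo.append(b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row)))
        out_hi.append(b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row)))
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# Example: 2 inputs in the box [0,1] x [0,1], one output unit.
W, b = [[1.0, -2.0]], [0.5]
lo, hi = affine_interval([0.0, 0.0], [1.0, 1.0], W, b)  # [-1.5], [1.5]
lo, hi = relu_interval(lo, hi)
print(lo, hi)  # [0.0] [1.5]
```

If the certified property (say, "output stays below 2") holds for the output box, it holds for every concrete input in the original box; the cost is that the box may be loose.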

Interval Universal Approximation for Neural Networks

no code implementations · 12 Jul 2020 · Zi Wang, Aws Albarghouthi, Gautam Prakriya, Somesh Jha

This is a crucial question, as our constructive proof of IUA is exponential in the size of the approximation domain.

Backdoors in Neural Models of Source Code

no code implementations · 11 Jun 2020 · Goutham Ramakrishnan, Aws Albarghouthi

Deep neural networks are vulnerable to a range of adversaries.

Robustness to Programmable String Transformations via Augmented Abstract Training

1 code implementation · ICML 2020 · Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

We then present an approach to adversarially training models that are robust to such user-defined string transformations.

Semantic Robustness of Models of Source Code

1 code implementation · 7 Feb 2020 · Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, Somesh Jha, Thomas Reps

Deep neural networks are vulnerable to adversarial examples - small input perturbations that result in incorrect predictions.

Proving Data-Poisoning Robustness in Decision Trees

no code implementations · 2 Dec 2019 · Samuel Drews, Aws Albarghouthi, Loris D'Antoni

Machine learning models are brittle, and small changes in the training data can result in different predictions.

BIG-bench Machine Learning · Data Poisoning

Synthesizing Action Sequences for Modifying Model Decisions

1 code implementation · 30 Sep 2019 · Goutham Ramakrishnan, Yun Chan Lee, Aws Albarghouthi

When a model makes a consequential decision, e.g., denying someone a loan, it needs to additionally generate actionable, realistic feedback on what the person can do to favorably change the decision.

Program Synthesis
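
The flavor of the problem can be shown with a brute-force search: find the smallest feature change that flips a toy loan model's decision. The model, grid, and cost function below are invented for illustration; the paper synthesizes richer action *sequences* via program synthesis.

```python
# Brute-force search for the cheapest change that flips a toy loan decision.
from itertools import product

def toy_loan_model(income, debt):
    """Hypothetical rule: approve iff income - 2*debt >= 50."""
    return income - 2 * debt >= 50

def find_fix(income, debt, max_step=30):
    """Cheapest (di, dd) on a grid of 5-unit steps that yields approval,
    where di raises income and dd reduces debt; cost is di + dd."""
    best = None
    for di, dd in product(range(0, max_step + 1, 5), repeat=2):
        if toy_loan_model(income + di, debt - dd):
            cost = di + dd
            if best is None or cost < best[0]:
                best = (cost, di, dd)
    return best  # (cost, raise income by, reduce debt by)

print(find_fix(60, 20))  # (15, 0, 15): reduce debt by 15 to get approved
```

Even this toy version shows why realism matters: the search happily proposes changes (e.g., negative debt) unless the action space is constrained, which is what a synthesized action program captures.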

A Static Analysis-based Cross-Architecture Performance Prediction Using Machine Learning

no code implementations · 18 Jun 2019 · Newsha Ardalani, Urmish Thakker, Aws Albarghouthi, Karu Sankaralingam

Porting code from CPU to GPU is costly and time-consuming; unless much time is invested in development and optimization, it is not obvious, a priori, how much speed-up is achievable or how much room is left for improvement.

BIG-bench Machine Learning · Binary Classification

Neural-Augmented Static Analysis of Android Communication

no code implementations · 11 Sep 2018 · Jinman Zhao, Aws Albarghouthi, Vaibhav Rastogi, Somesh Jha, Damien Octeau

We address the problem of discovering communication links between applications in the popular Android mobile operating system, an important problem for security and privacy in Android.

Quantifying Program Bias

no code implementations · 17 Feb 2017 · Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori

With the range and sensitivity of algorithmic decisions expanding at a break-neck speed, it is imperative that we aggressively investigate whether programs are biased.

Decision Making · Fairness

Fairness as a Program Property

no code implementations · 19 Oct 2016 · Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori

We explore the following question: Is a decision-making program fair, for some useful definition of fairness?

Decision Making · Fairness
