1 code implementation • 28 Jun 2023 • Hrayr Harutyunyan
Despite the popularity and success of deep learning, there is limited understanding of when, how, and why neural networks generalize to unseen examples.
no code implementations • 22 Jun 2023 • Rafayel Darbinyan, Hrayr Harutyunyan, Aram H. Markosyan, Hrant Khachatrian
Neural networks rely on spurious correlations in their predictions, which degrades performance when those correlations no longer hold.
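A minimal synthetic sketch (not from the paper) of the failure mode: a classifier that latches onto a feature that is only spuriously correlated with the label fits the training data perfectly, then fails when that correlation is reversed at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Training data: the spurious feature happens to equal the label exactly.
core = rng.integers(0, 2, n)        # truly predictive feature
y_train = core
spurious_train = core               # perfectly correlated at train time

# A "lazy" rule that predicts from the spurious feature alone is
# indistinguishable from a good model on the training set...
train_acc = np.mean(spurious_train == y_train)

# ...but when the correlation flips at test time, the same rule collapses.
core_test = rng.integers(0, 2, n)
y_test = core_test
spurious_test = 1 - core_test       # anti-correlated at test time
test_acc = np.mean(spurious_test == y_test)

print(train_acc)  # 1.0
print(test_acc)   # 0.0
```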
no code implementations • CVPR 2023 • Achin Jain, Gurumurthy Swaminathan, Paolo Favaro, Hao Yang, Avinash Ravichandran, Hrayr Harutyunyan, Alessandro Achille, Onkar Dabeer, Bernt Schiele, Ashwin Swaminathan, Stefano Soatto
Compared to the power law, the PPL improves performance estimation by 37% on average across 16 classification datasets and by 33% across 10 detection datasets.
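For context, a hedged sketch of the baseline the PPL is compared against: fitting the classic power law err(n) ≈ a·n^(−b) to (dataset size, error) pairs by least squares in log-log space. This is the reference estimator only, not the paper's piecewise variant.

```python
import numpy as np

def fit_power_law(sizes, errors):
    """Fit err(n) = a * n**(-b) via linear regression in log-log space:
    log err = log a - b * log n."""
    log_n, log_e = np.log(sizes), np.log(errors)
    slope, intercept = np.polyfit(log_n, log_e, 1)
    return np.exp(intercept), -slope    # (a, b)

# Synthetic check: data generated from err = 2.0 * n**(-0.5)
sizes = np.array([100, 200, 400, 800, 1600])
errors = 2.0 * sizes ** -0.5
a, b = fit_power_law(sizes, errors)
print(round(a, 3), round(b, 3))  # 2.0 0.5
```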
no code implementations • 28 Jan 2023 • Hrayr Harutyunyan, Ankit Singh Rawat, Aditya Krishna Menon, Seungyeon Kim, Sanjiv Kumar
Despite the popularity and efficacy of knowledge distillation, there is limited understanding of why it helps.
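As background for this entry, a sketch of the standard knowledge-distillation objective (Hinton-style): the student matches the teacher's temperature-softened output distribution via a KL term scaled by T². The temperature value and array shapes here are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T**2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)          # soft teacher targets
    q = softmax(student_logits, T)
    return T**2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1)

teacher = np.array([2.0, 0.5, -1.0])
student_same = teacher.copy()
student_off = np.array([-1.0, 0.5, 2.0])

loss_same = distillation_loss(student_same, teacher)
loss_off = distillation_loss(student_off, teacher)
print(loss_same)  # 0.0 (matching logits incur no loss)
print(loss_off)   # > 0
```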
no code implementations • 13 May 2022 • Hrayr Harutyunyan, Greg Ver Steeg, Aram Galstyan
Remarkably, there exist PAC-Bayes, single-draw, and expected squared generalization gap bounds that depend on the information contained in pairs of examples.
1 code implementation • CVPR 2022 • Tigran Galstyan, Hrayr Harutyunyan, Hrant Khachatrian, Greg Ver Steeg, Aram Galstyan
On Camelyon-17, enforcing domain invariance degrades the quality of representations on unseen domains.
1 code implementation • NeurIPS 2021 • Hrayr Harutyunyan, Maxim Raginsky, Greg Ver Steeg, Aram Galstyan
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm.
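For context, the classic input–output mutual information bound of Xu and Raginsky (2017), which bounds the expected generalization gap via the information the learned weights W carry about the training sample S; prediction-based bounds of the kind this entry describes replace the weight term with information contained in the predictions. Stated here only as background, under the standard subgaussian-loss assumption:

```latex
% Classic input--output mutual information generalization bound
% (Xu & Raginsky, 2017), for a \sigma-subgaussian loss and an
% n-example training sample S with learned weights W:
\left| \mathbb{E}\!\left[ \mathcal{L}(W) - \hat{\mathcal{L}}_S(W) \right] \right|
  \le \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}
```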
1 code implementation • ICLR 2021 • Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
We define a notion of information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by the weights.
1 code implementation • ICML 2020 • Hrayr Harutyunyan, Kyle Reing, Greg Ver Steeg, Aram Galstyan
In the presence of noisy or incorrect labels, neural networks have the undesirable tendency to memorize information about the noise.
1 code implementation • Nature Scientific Data 2019 • Hrayr Harutyunyan, Hrant Khachatrian, David C. Kale, Greg Ver Steeg, Aram Galstyan
Health care is one of the most exciting frontiers in data mining and machine learning.
2 code implementations • 30 May 2019 • Hrayr Harutyunyan, Daniel Moyer, Hrant Khachatrian, Greg Ver Steeg, Aram Galstyan
Estimating the covariance structure of multivariate time series is a fundamental problem with a wide range of real-world applications -- from financial modeling to fMRI analysis.
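A hedged sketch of a standard baseline in this problem setting: linear shrinkage of the sample covariance toward a scaled identity, which stays well-conditioned when samples are scarce relative to the dimension. This is a common reference point, not the estimator proposed in the paper.

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.1):
    """Linear shrinkage of the sample covariance toward a scaled identity.
    A common baseline when n < d; NOT the paper's estimator."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                       # sample covariance (biased MLE)
    target = np.trace(S) / d * np.eye(d)    # scaled identity target
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))           # fewer samples than dimensions
S_hat = shrinkage_covariance(X, alpha=0.2)

# The raw sample covariance is singular here (rank <= 19), but the
# shrunk estimate is positive definite and safely invertible.
print(np.linalg.matrix_rank(S_hat))  # 50
```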
3 code implementations • 30 Apr 2019 • Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, Aram Galstyan
Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships.
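The cited work addresses this by mixing powers of the normalized adjacency matrix within a layer. A minimal illustrative sketch of that idea: aggregate node features with A⁰, A¹, and A² and concatenate the results. This omits the learned per-power weight matrices and nonlinearity of the published MixHop layer.

```python
import numpy as np

def mixhop_layer(A, X, powers=(0, 1, 2)):
    """Higher-order neighborhood mixing: aggregate features with several
    powers of the row-normalized adjacency and concatenate. Illustrative
    only; the real layer also applies learned weights per power."""
    deg = A.sum(axis=1)
    A_hat = A / np.maximum(deg, 1)[:, None]   # row-normalized adjacency
    outs = []
    M = np.eye(A.shape[0])                    # A_hat ** 0
    for p in range(max(powers) + 1):
        if p in powers:
            outs.append(M @ X)
        M = M @ A_hat
    return np.concatenate(outs, axis=1)

# Tiny path graph 0 - 1 - 2 with a scalar feature per node
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.], [0.], [0.]])
H = mixhop_layer(A, X)
print(H.shape)  # (3, 3): one output column per adjacency power
```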
1 code implementation • 10 Oct 2017 • Greg Ver Steeg, Rob Brekelmans, Hrayr Harutyunyan, Aram Galstyan
Scientists often seek simplified representations of complex systems to facilitate prediction and understanding.
3 code implementations • NeurIPS 2019 • Greg Ver Steeg, Hrayr Harutyunyan, Daniel Moyer, Aram Galstyan
We also use our approach for estimating covariance structure for a number of real-world datasets and show that it consistently outperforms state-of-the-art estimators at a fraction of the computational cost.