1 code implementation • 7 Apr 2022 • Bogdan Kulynych, Yao-Yuan Yang, Yaodong Yu, Jarosław Błasiok, Preetum Nakkiran
In contrast, we show that Differentially-Private (DP) training provably ensures the high-level WYSIWYG property, which we quantify using a notion of distributional generalization.
1 code implementation • 10 Feb 2022 • Yao-Yuan Yang, Chi-Ning Chou, Kamalika Chaudhuri
Neural networks are known to use spurious correlations such as background information for classification.
2 code implementations • 28 Oct 2021 • Yao-Yuan Yang, Moto Hira, Zhaoheng Ni, Anjali Chourdia, Artyom Astafurov, Caroline Chen, Ching-Feng Yeh, Christian Puhrsch, David Pollack, Dmitriy Genzel, Donny Greenberg, Edward Z. Yang, Jason Lian, Jay Mahadeokar, Jeff Hwang, Ji Chen, Peter Goldsborough, Prabhat Roy, Sean Narenthiran, Shinji Watanabe, Soumith Chintala, Vincent Quenneville-Bélair, Yangyang Shi
This document describes version 0.10 of TorchAudio: building blocks for machine learning applications in the audio and speech processing domain.
1 code implementation • 14 Feb 2021 • Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri
We then show that a tighter bound on the size is possible when the data is linearly separable.
1 code implementation • 17 Nov 2020 • Yao-Yuan Yang, Cyrus Rashtchian, Ruslan Salakhutdinov, Kamalika Chaudhuri
Overall, adversarially robust networks resemble a nearest neighbor classifier when it comes to OOD data.
1 code implementation • NeurIPS 2020 • Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri
Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning.
1 code implementation • 7 Jun 2019 • Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri
To test our defense, we provide a novel attack that applies to a wide range of non-parametric classifiers.
1 code implementation • 5 Feb 2018 • Yao-Yuan Yang, Yi-An Lin, Hong-Min Chu, Hsuan-Tien Lin
Extracting the hidden correlation is generally a challenging task.
5 code implementations • 1 Oct 2017 • Yao-Yuan Yang, Shao-Chuan Lee, Yu-An Chung, Tung-En Wu, Si-An Chen, Hsuan-Tien Lin
libact is a Python package designed to make active learning easier for general users.
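libact's actual API is not reproduced here; below is only a library-free sketch of pool-based uncertainty sampling, the kind of query strategy such a package provides, with all names and the confidence values invented for illustration.

```python
def uncertainty_sampling(probs, unlabeled):
    """Query the unlabeled example whose predicted positive-class
    probability is closest to 0.5 (where the model is least certain)."""
    return min(unlabeled, key=lambda i: abs(probs[i] - 0.5))

# Hypothetical model confidences for a pool of four unlabeled examples.
probs = {0: 0.95, 1: 0.52, 2: 0.10, 3: 0.80}
unlabeled = {0, 1, 2, 3}
labeled = {}

# One round of the active-learning loop: query the most uncertain
# example, get its label from the oracle, move it to the labeled set.
query = uncertainty_sampling(probs, unlabeled)
labeled[query] = 1          # oracle's answer (made up for this sketch)
unlabeled.discard(query)
```

In a real loop the model would be retrained on the grown labeled set after each query, and the probabilities recomputed.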
1 code implementation • 29 Nov 2016 • Yao-Yuan Yang, Kuan-Hao Huang, Chih-Wei Chang, Hsuan-Tien Lin
Label space expansion for multi-label classification (MLC) is a methodology that encodes the original label vectors into higher-dimensional codes before training, and decodes the predicted codes back to label vectors during testing.
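The encode/decode round trip described above can be illustrated with a toy, library-free sketch; this is not the paper's method, just an assumed random ±1 projection as the encoder and nearest-neighbor decoding back to known label vectors.

```python
import random

random.seed(0)
L, K = 6, 4  # original label dimension and code dimension (illustrative sizes)

# Assumed encoder: a random +/-1 linear projection of the label vector.
P = [[random.choice([-1.0, 1.0]) for _ in range(L)] for _ in range(K)]

def encode(y):
    """Map a binary label vector to a K-dimensional real-valued code."""
    return [sum(p * yi for p, yi in zip(row, y)) for row in P]

def decode(code, candidates):
    """Map a (possibly noisy) predicted code back to the nearest
    candidate label vector, measured in code space."""
    def sqdist(y):
        return sum((c - ci) ** 2 for c, ci in zip(encode(y), code))
    return min(candidates, key=sqdist)

labels = [[1, 0, 1, 0, 0, 1],
          [0, 0, 1, 0, 0, 1]]

# Simulate a slightly noisy regressor output for the first label vector;
# decoding still recovers the correct labels.
noisy = [c + random.uniform(-0.2, 0.2) for c in encode(labels[0])]
recovered = decode(noisy, labels)
```

A practical system would train regressors to predict the codes from features; the decoding step is what turns imperfect code predictions back into valid label vectors.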