no code implementations • 27 Apr 2023 • Xiaoqian Liu, Xu Han, Eric C. Chi, Boaz Nadler
In 1-bit matrix completion, the aim is to estimate an underlying low-rank matrix from a partial set of binary observations.
no code implementations • 9 Jan 2023 • Rodney Fonseca, Boaz Nadler
In multiple domains, statistical tasks are performed in distributed settings, with data split among several end machines that are connected to a fusion center.
no code implementations • 15 Sep 2022 • Chen Amiraz, Robert Krauthgamer, Boaz Nadler
We study distributed schemes for high-dimensional sparse linear regression, based on orthogonal matching pursuit (OMP).
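As a point of reference for these schemes, here is a minimal single-machine sketch of standard OMP (the distributed variants studied in the paper build on this greedy loop; the function name and setup are illustrative, not taken from the paper):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A
    to approximate y, re-fitting by least squares at each step."""
    n, d = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares re-fit on all selected columns so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(d)
    x[support] = coef
    return x
```

With noiseless measurements and a well-conditioned random design, this loop typically recovers the true support exactly in $k$ iterations.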
1 code implementation • 31 Jan 2022 • Pini Zilber, Boaz Nadler
The inductive matrix completion (IMC) problem is to recover a low rank matrix from few observed entries while incorporating prior knowledge about its row and column subspaces.
1 code implementation • 24 Jun 2021 • Pini Zilber, Boaz Nadler
Low rank matrix recovery problems, including matrix completion and matrix sensing, appear in a broad range of applications.
1 code implementation • 26 Feb 2021 • Yariv Aizenbud, Ariel Jaffe, Meng Wang, Amber Hu, Noah Amsel, Boaz Nadler, Joseph T. Chang, Yuval Kluger
For large trees, a common approach, termed divide-and-conquer, is to recover the tree structure in two steps.
no code implementations • 5 Feb 2021 • Chen Amiraz, Robert Krauthgamer, Boaz Nadler
We assume there are $M$ machines, each holding $d$-dimensional observations of a $K$-sparse vector $\mu$ corrupted by additive Gaussian noise.
no code implementations • 3 Jan 2021 • Nimrod Segol, Boaz Nadler
In previous works, the required number of samples had a quadratic dependence on the maximal separation between the K components, and the resulting error estimate increased linearly with this maximal separation.
no code implementations • 20 Jul 2020 • Amir Weiss, Boaz Nadler
Specifically, our algorithm operates in the frequency domain, where it tries to mimic the optimal, unrealizable non-linear Wiener-like filter as if the unknown deterministic signal were known.

2 code implementations • 18 May 2020 • Tal Amir, Ronen Basri, Boaz Nadler
We present a new approach to the sparse approximation or best subset selection problem, namely, to find a $k$-sparse vector ${\bf x}\in\mathbb{R}^d$ that minimizes the $\ell_2$ residual $\lVert A{\bf x}-{\bf y} \rVert_2$.
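For small dimensions the problem can be solved exactly by exhaustive search, which serves as a ground-truth baseline (this is not the paper's method, only the brute-force reference it improves upon; the function name is illustrative):

```python
import numpy as np
from itertools import combinations

def best_subset(A, y, k):
    """Exact best-subset selection by exhaustive search over all
    size-k supports; feasible only for small d, but returns the true
    k-sparse minimizer of the least-squares residual ||Ax - y||_2."""
    n, d = A.shape
    best_x, best_res = None, np.inf
    for S in combinations(range(d), k):
        S = list(S)
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        res = np.linalg.norm(y - A[:, S] @ coef)
        if res < best_res:
            best_res = res
            best_x = np.zeros(d)
            best_x[S] = coef
    return best_x
```

The search visits $\binom{d}{k}$ supports, which is why practical methods for large $d$ must rely on relaxations or greedy heuristics.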
3 code implementations • 28 Feb 2020 • Ariel Jaffe, Noah Amsel, Yariv Aizenbud, Boaz Nadler, Joseph T. Chang, Yuval Kluger
A common assumption in multiple scientific applications is that the distribution of observed data can be modeled by a latent tree graphical model.
3 code implementations • 5 Feb 2020 • Jonathan Bauch, Boaz Nadler
We present a new, simple and computationally efficient iterative method for low rank matrix completion.
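For context, the simplest iterative scheme of this flavor is a singular-value-projection ("hard-impute"-style) iteration, sketched below; this is a generic baseline, not the specific algorithm proposed in the paper:

```python
import numpy as np

def svp_complete(M_obs, mask, rank, n_iters=500, step=1.0):
    """Generic low-rank completion by singular value projection:
    alternate a gradient step on the observed entries with a
    projection onto the set of rank-r matrices via truncated SVD."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iters):
        # gradient step: correct X on the observed entries only
        X = X + step * mask * (M_obs - X)
        # project back onto rank-r matrices
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X
```

On a well-sampled low-rank matrix this iteration converges geometrically to the underlying matrix; its per-iteration cost is dominated by the SVD.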
no code implementations • 6 Jun 2018 • Yaniv Tenzer, Amit Moscovich, Mary Frances Dorn, Boaz Nadler, Clifford Spiegelman
The resulting classifier is linear in the log-transformed univariate and bivariate densities that correspond to the tree edges.
1 code implementation • ICML 2018 • Ariel Jaffe, Roi Weiss, Shai Carmi, Yuval Kluger, Boaz Nadler
Latent variable models with hidden binary units appear in various applications.
3 code implementations • ICLR 2018 • Uri Shaham, Kelly Stanton, Henry Li, Boaz Nadler, Ronen Basri, Yuval Kluger
Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points.
1 code implementation • NeurIPS 2017 • Sven Peter, Ferran Diego, Fred A. Hamprecht, Boaz Nadler
In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that on average are nonetheless cheap to compute.
2 code implementations • 22 Jun 2017 • Nati Ofir, Meirav Galun, Sharon Alpert, Achi Brandt, Boaz Nadler, Ronen Basri
A fundamental question for edge detection in noisy images is how faint can an edge be and still be detected.
no code implementations • 8 Mar 2017 • Omer Dror, Boaz Nadler, Erhan Bilal, Yuval Kluger
Consider a regression problem where there is no labeled data and the only observations are the predictions $f_i(x_j)$ of $m$ experts $f_{i}$ over many samples $x_j$.
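A simple illustrative baseline for this setting (not the paper's estimator) assumes each expert equals the true response plus independent zero-mean noise; then the off-diagonal entries of the prediction covariance all estimate $\mathrm{Var}(y)$, which yields per-expert noise variances and an inverse-variance-weighted combination:

```python
import numpy as np

def ensemble_regress(F):
    """F is an m x n matrix of expert predictions f_i(x_j).
    Assumes (strongly) that f_i = y + independent zero-mean noise:
    off-diagonal covariances of F then all estimate Var(y), so each
    expert's noise variance is its diagonal entry minus that common
    value; combine by inverse-variance weighting."""
    m, n = F.shape
    C = np.cov(F)                          # m x m covariance of predictions
    off = C[~np.eye(m, dtype=bool)]
    signal_var = np.median(off)            # robust estimate of Var(y)
    noise_var = np.maximum(np.diag(C) - signal_var, 1e-6)
    w = 1.0 / noise_var
    w /= w.sum()
    return w @ F
```

The independence and unbiasedness assumptions are exactly what break down in practice (correlated or biased experts), which is what motivates more careful unsupervised ensemble methods.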
no code implementations • 7 Nov 2016 • Amit Moscovich, Ariel Jaffe, Boaz Nadler
We consider semi-supervised regression when the predictor variables are drawn from an unknown manifold.
1 code implementation • 6 Feb 2016 • Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph Chang, Yuval Kluger
We show how deep learning methods can be applied in the context of crowdsourcing and unsupervised ensemble learning.
no code implementations • 20 Oct 2015 • Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger
In unsupervised ensemble learning, one obtains predictions from multiple sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to assess it.
3 code implementations • CVPR 2016 • Nati Ofir, Meirav Galun, Boaz Nadler, Ronen Basri
Detecting edges is a fundamental problem in computer vision with many applications, some involving very noisy images.
no code implementations • 12 May 2015 • Ofer Shwartz, Boaz Nadler
Given $n$ i.i.d. observations, the covariance matrix is typically estimated by the sample covariance matrix, at a computational cost of $O(np^{2})$ operations.
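The $O(np^{2})$ cost comes from the cross-product of the centered data matrix, as in this standard computation:

```python
import numpy as np

def sample_cov(X):
    """Sample covariance of n observations in p dimensions.
    The X_c.T @ X_c product below is the O(n p^2) step."""
    n, p = X.shape
    X_c = X - X.mean(axis=0)          # center each column
    return (X_c.T @ X_c) / (n - 1)    # unbiased sample covariance
```

Structured covariance models (e.g., Toeplitz) allow estimators with lower computational cost than this dense baseline.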
no code implementations • 7 Feb 2015 • Roi Weiss, Boaz Nadler
In various applications involving hidden Markov models (HMMs), some of the hidden states are aliased, having identical output distributions.
no code implementations • 29 Jul 2014 • Ariel Jaffe, Boaz Nadler, Yuval Kluger
In various situations one is given only the predictions of multiple classifiers over a large set of unlabeled test data.
1 code implementation • 10 Jul 2014 • Jonathan Rosenblatt, Boaz Nadler
For both regimes and under suitable assumptions, we present asymptotically exact expressions for this estimation error.
no code implementations • 16 Jun 2013 • Robert Krauthgamer, Boaz Nadler, Dan Vilenchik
In fact, we conjecture that in the single-spike model, no computationally-efficient algorithm can recover a spike of $\ell_0$-sparsity $k\geq\Omega(\sqrt{n})$.
no code implementations • 13 Mar 2013 • Fabio Parisi, Francesco Strino, Boaz Nadler, Yuval Kluger
This scenario is different from the standard supervised setting, where each classifier's accuracy can be assessed using available labeled data, and raises two questions: given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to a) reliably rank them; and b) construct a meta-classifier more accurate than most classifiers in the ensemble?
no code implementations • NeurIPS 2009 • Boaz Nadler, Nathan Srebro, Xueyuan Zhou
We study the behavior of the popular Laplacian Regularization method for Semi-Supervised Learning at the regime of a fixed number of labeled points but a large number of unlabeled points.
3 code implementations • 3 Jul 2007 • Ann B. Lee, Boaz Nadler, Larry Wasserman
In many modern applications, including analysis of gene expression and text documents, the data are noisy, high-dimensional, and unordered, with no particular meaning to the given order of the variables.