no code implementations • 13 May 2022 • Romain Cosentino, Anirvan Sengupta, Salman Avestimehr, Mahdi Soltanolkotabi, Antonio Ortega, Ted Willke, Mariano Tepper
When used for transfer learning, the projector is discarded since empirical results show that its representation generalizes more poorly than the encoder's.
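The encoder/projector split described above can be sketched minimally. This is an illustrative toy, not the paper's model: the weight shapes, the `tanh` nonlinearity, and the function names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for an encoder and a projector head
# (shapes and names are illustrative, not from the paper).
W_enc = rng.normal(size=(16, 8))   # encoder: 16-dim input -> 8-dim representation
W_proj = rng.normal(size=(8, 4))   # projector: 8-dim -> 4-dim embedding

def encoder(x):
    return np.tanh(x @ W_enc)

def projector(h):
    return h @ W_proj

x = rng.normal(size=(32, 16))

# During self-supervised pretraining, the loss is computed
# on the projector's output...
z_pretrain = projector(encoder(x))

# ...but for transfer learning the projector is discarded and the
# downstream task uses the encoder's representation directly.
z_transfer = encoder(x)

print(z_pretrain.shape, z_transfer.shape)  # (32, 4) (32, 8)
```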
no code implementations • 16 Feb 2022 • Romain Cosentino, Randall Balestriero, Yanis Bahroun, Anirvan Sengupta, Richard Baraniuk, Behnaam Aazhang
This enables (i) the reduction of intrinsic nuisances in the data, which lowers the complexity of the clustering task, improves performance, and yields state-of-the-art results; (ii) clustering in the input space of the data, leading to a fully interpretable clustering algorithm; and (iii) convergence guarantees.
no code implementations • 16 Dec 2020 • Romain Cosentino, Randall Balestriero, Yanis Bahroun, Anirvan Sengupta, Richard Baraniuk, Behnaam Aazhang
We design an interpretable clustering algorithm aware of the nonlinear structure of image manifolds.
no code implementations • NeurIPS 2019 • Yanis Bahroun, Dmitri Chklovskii, Anirvan Sengupta
Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules.
1 code implementation • NeurIPS 2018 • Anirvan Sengupta, Cengiz Pehlevan, Mariano Tepper, Alexander Genkin, Dmitri Chklovskii
Many neurons in the brain, such as place cells in the rodent hippocampus, have localized receptive fields, i.e., they respond to a small neighborhood of stimulus space.
no code implementations • 23 Mar 2017 • Cengiz Pehlevan, Anirvan Sengupta, Dmitri B. Chklovskii
Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience.
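The Hebbian/anti-Hebbian scheme mentioned above can be sketched as an online network in the spirit of similarity matching: feedforward weights are updated with a Hebbian rule, lateral weights with an anti-Hebbian rule, and both rules are local (each weight change depends only on the activities of the two neurons it connects). All dimensions, learning rates, and variable names below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and learning rate (illustrative assumptions).
n_in, n_out, eta = 10, 3, 0.05

W = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward (Hebbian) weights
M = np.eye(n_out)                              # lateral (anti-Hebbian) weights

def neural_dynamics(x, W, M, steps=50, dt=0.1):
    """Relax the output activity y toward the fixed point of dy/dt = W x - M y."""
    y = np.zeros(n_out)
    for _ in range(steps):
        y += dt * (W @ x - M @ y)
    return y

for _ in range(500):
    x = rng.normal(size=n_in)
    y = neural_dynamics(x, W, M)
    # Hebbian update: each feedforward weight changes based only on its
    # presynaptic input x and postsynaptic output y.
    W += eta * (np.outer(y, x) - W)
    # Anti-Hebbian update: lateral weights grow with correlated output
    # activity, which decorrelates the outputs over time.
    M += eta * (np.outer(y, y) - M)
```

The decay terms (`- W`, `- M`) keep the weights bounded, and the lateral matrix `M` stays symmetric because it only accumulates symmetric outer products `y y^T`.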