no code implementations • 12 Feb 2024 • Billy J. Franks, Christopher Morris, Ameya Velingker, Floris Geerts
Moreover, we focus on augmenting $1$-WL and MPNNs with subgraph information and employ classical margin theory to investigate the conditions under which an architecture's increased expressivity aligns with improved generalization performance.
no code implementations • 2 Oct 2023 • Federico Barbero, Ameya Velingker, Amin Saberi, Michael Bronstein, Francesco Di Giovanni
Graph Neural Networks (GNNs) are popular models for machine learning on graphs that typically follow the message-passing paradigm, whereby a node's features are updated recursively by aggregating information from its neighbors.
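A minimal sketch of one message-passing step, for illustration only (this is a generic mean-aggregation layer, not the architecture studied in the paper; the function and parameter names are ours):

```python
import numpy as np

def message_passing_step(features, adjacency, weight):
    """One generic message-passing update.

    features:  (n, d) node feature matrix
    adjacency: (n, n) 0/1 adjacency matrix
    weight:    (2d, d) hypothetical learned parameters
    """
    # Mean-aggregate each node's neighbor features.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    aggregated = (adjacency @ features) / deg
    # Combine a node's own features with its neighbors' and transform.
    combined = np.concatenate([features, aggregated], axis=1)
    return np.maximum(combined @ weight, 0.0)  # ReLU nonlinearity

# Tiny example: the path graph 0 - 1 - 2 with one-hot initial features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 3))
H = message_passing_step(X, A, W)  # updated (3, 3) node features
```

Stacking such layers lets information propagate one hop per layer, which is the recursive aggregation the paper refers to.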
no code implementations • 2 Jun 2023 • Ameya Velingker, Maximilian Vötsch, David P. Woodruff, Samson Zhou
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem, where the inputs are a matrix $\mathbf{A}\in\{0, 1\}^{n\times d}$, a rank parameter $k>0$, as well as an accuracy parameter $\varepsilon>0$, and the goal is to approximate $\mathbf{A}$ as a product of low-rank factors $\mathbf{U}\in\{0, 1\}^{n\times k}$ and $\mathbf{V}\in\{0, 1\}^{k\times d}$.
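The objective can be made concrete on a tiny instance. The sketch below assumes squared Frobenius error with products taken over the integers, and uses brute force rather than the paper's efficient algorithms; it only illustrates what a $(1+\varepsilon)$-approximation is measured against:

```python
import itertools
import numpy as np

def bmf_error(A, U, V):
    # Squared Frobenius error ||A - U V||_F^2 (assumed objective).
    return np.sum((A - U @ V) ** 2)

def brute_force_bmf(A, k):
    """Exhaustively search all binary U (n x k) and V (k x d).

    Exponential time; viable only for tiny n, d, k.
    """
    n, d = A.shape
    best = None
    for u_bits in itertools.product([0, 1], repeat=n * k):
        U = np.array(u_bits).reshape(n, k)
        for v_bits in itertools.product([0, 1], repeat=k * d):
            V = np.array(v_bits).reshape(k, d)
            err = bmf_error(A, U, V)
            if best is None or err < best[0]:
                best = (err, U, V)
    return best

# This A has an exact rank-2 binary factorization, so the optimum is 0.
A = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
err, U, V = brute_force_bmf(A, k=2)
```

An efficient algorithm as in the paper would instead return factors with error at most $(1+\varepsilon)$ times this optimum.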
1 code implementation • 10 Mar 2023 • Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J. Sutherland, Ali Kemal Sinop
We show that incorporating Exphormer into the recently proposed GraphGPS framework produces models with competitive empirical results on a wide variety of graph datasets, including state-of-the-art results on three datasets.
Ranked #1 on Graph Classification on MNIST
no code implementations • 7 Dec 2021 • Pravesh K. Kothari, Pasin Manurangsi, Ameya Velingker
Prior works obtained private robust algorithms for mean estimation of subgaussian distributions with bounded covariance.
no code implementations • 1 Jan 2021 • Sreenivas Gollapudi, Kostas Kollias, Benjamin Plaut, Ameya Velingker
We consider the problem of routing users through a network with unknown congestion functions over an infinite time horizon.
no code implementations • 21 Mar 2020 • Michael Kapralov, Navid Nouri, Ilya Razenshteyn, Ameya Velingker, Amir Zandieh
Random binning features, introduced in the seminal paper of Rahimi and Recht (2007), are an efficient method for approximating a kernel matrix using locality sensitive hashing.
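The core idea can be sketched in one dimension for the Laplacian kernel $k(x, y) = e^{-|x - y|}$, following the Rahimi-Recht construction: draw a random grid pitch $\delta$ from a Gamma(2) distribution and a uniform shift, and count two points as colliding when they fall in the same bin; the collision probability over one grid is the hat kernel, and averaging over grids recovers the Laplacian kernel. This is a didactic sketch with our own parameter names, not the paper's algorithm:

```python
import numpy as np

def random_binning_estimate(x, y, R=5000, seed=0):
    """Estimate the 1-D Laplacian kernel exp(-|x - y|) by random binning.

    For each of R independent grids, sample a pitch delta ~ Gamma(shape=2)
    (the mixing distribution that makes hat kernels average to the
    Laplacian kernel) and a uniform shift in [0, delta); record whether
    x and y land in the same bin.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(R):
        delta = rng.gamma(shape=2.0, scale=1.0)
        shift = rng.uniform(0.0, delta)
        if np.floor((x - shift) / delta) == np.floor((y - shift) / delta):
            hits += 1
    return hits / R

est = random_binning_estimate(0.3, 0.8)   # should be close to exp(-0.5)
```

Because each point's feature is just its (hashed) bin index, equal features mean nearby points, which is exactly the locality-sensitive-hashing view mentioned in the abstract.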
no code implementations • 24 Sep 2019 • Badih Ghazi, Pasin Manurangsi, Rasmus Pagh, Ameya Velingker
Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a constant number of messages per party.
Cryptography and Security; Data Structures and Algorithms
1 code implementation • 3 Sep 2019 • Thomas D. Ahle, Michael Kapralov, Jakob B. T. Knudsen, Rasmus Pagh, Ameya Velingker, David Woodruff, Amir Zandieh
Oblivious sketching has emerged as a powerful approach to speeding up numerical linear algebra over the past decade, but our understanding of oblivious sketching solutions for kernel matrices has remained quite limited, suffering from an exponential dependence on input parameters.
Data Structures and Algorithms
no code implementations • 29 Aug 2019 • Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, Ameya Velingker
- Protocols in the multi-message shuffled model with $\mathrm{poly}(\log B, \log n)$ bits of communication per user and $\mathrm{poly}(\log B)$ error, which provide an exponential improvement on the error compared to what is possible with single-message algorithms.
no code implementations • 19 Jun 2019 • Badih Ghazi, Rasmus Pagh, Ameya Velingker
Federated learning promises to make machine learning feasible on distributed, private datasets by implementing gradient descent using secure aggregation methods.
no code implementations • 20 Dec 2018 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh
We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the *statistical dimension* of the allowed power spectrum of that class.
no code implementations • ICML 2017 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh
Qualitatively, our results are twofold: on the one hand, we show that random Fourier feature approximation can provably speed up kernel ridge regression under reasonable assumptions.
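A minimal sketch of the random Fourier feature (Rahimi-Recht) approximation that underlies this speedup, assuming the unit-bandwidth Gaussian kernel $k(x, y) = e^{-\|x - y\|^2 / 2}$ (names and parameters are illustrative, not from the paper): the map $z$ satisfies $z(x)^\top z(y) \approx k(x, y)$, so ridge regression can be run on $m$-dimensional features instead of the full $n \times n$ kernel matrix.

```python
import numpy as np

def rff(X, m, seed=0):
    """Map (n, d) data to (n, m) random Fourier features for the
    unit-bandwidth Gaussian kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, m))       # frequencies ~ N(0, I)
    b = rng.uniform(0, 2 * np.pi, m)      # random phases
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
Z = rff(X, m=5000)
K_approx = Z @ Z.T                        # feature-space inner products
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))
max_err = np.abs(K_approx - K_exact).max()
```

Solving ridge regression on `Z` costs $O(nm^2)$ instead of the $O(n^3)$ needed for the exact kernel matrix, which is the speedup regime the paper analyzes.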