1 code implementation • 22 May 2023 • Flavio Chierichetti, Mirko Giacchini, Ravi Kumar, Alessandro Panconesi, Andrew Tomkins
In this work we consider the problem of fitting Random Utility Models (RUMs) to user choices.
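To fix ideas, here is a minimal sketch of the generative side of a RUM — how such a model produces a choice from an offered set — using Gumbel noise as an illustrative utility distribution (the multinomial-logit special case). All names are hypothetical; this is background for the fitting problem, not the paper's algorithm.

```python
import math
import random

def rum_choice(slate, mean_utility, rng):
    """One draw from a Random Utility Model: each offered item gets a
    random utility (its mean plus i.i.d. noise; Gumbel noise gives the
    multinomial-logit special case), and the user picks the item whose
    realized utility is highest."""
    def gumbel():
        return -math.log(-math.log(rng.random()))
    return max(slate, key=lambda item: mean_utility[item] + gumbel())

rng = random.Random(0)
means = {"a": 1.0, "b": 0.0, "c": 0.0}
counts = {item: 0 for item in means}
for _ in range(10000):
    counts[rum_choice(list(means), means, rng)] += 1
# Under Gumbel noise the choice probabilities are the softmax of the
# means, so "a" should be chosen roughly 58% of the time here.
```

Fitting a RUM reverses this process: given observed `counts` over many slates, recover a distribution over utilities (or rankings) that explains them.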
no code implementations • NeurIPS 2021 • Matteo Almanza, Flavio Chierichetti, Silvio Lattanzi, Alessandro Panconesi, Giuseppe Re
Clustering is a central topic in unsupervised learning and its online formulation has received a lot of attention in recent years.
no code implementations • 10 Aug 2021 • Flavio Chierichetti, Alessandro Panconesi, Giuseppe Re, Luca Trevisan
We study the reconstruction version of this problem in which one is seeking to reconstruct a latent clustering that has been corrupted by random noise and adversarial modifications.
no code implementations • 6 Oct 2020 • Flavio Chierichetti, Anirban Dasgupta, Ravi Kumar
We show that an approximately submodular function defined on a ground set of $n$ elements is $O(n^2)$ pointwise-close to a submodular function.
no code implementations • NeurIPS 2018 • Flavio Chierichetti, Anirban Dasgupta, Shahrzad Haddadan, Ravi Kumar, Silvio Lattanzi
The classic Mallows model is a widely used tool to realize distributions over permutations.
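For concreteness, a small sketch of sampling from the Mallows model via the standard repeated-insertion method, with the identity permutation as reference and an assumed Kendall-tau distance (function names are illustrative):

```python
import math
import random

def sample_mallows(n, theta, rng):
    """Sample a permutation of 0..n-1 from the Mallows model with the
    identity as reference: P(sigma) is proportional to
    exp(-theta * kendall_tau_distance(sigma, identity)).
    Repeated insertion: placing item i at position j creates exactly
    i - j new inversions, so positions are weighted accordingly."""
    perm = []
    for i in range(n):
        weights = [math.exp(-theta * (i - j)) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        for j, w in enumerate(weights):
            r -= w
            if r <= 0:
                perm.insert(j, i)
                break
    return perm

rng = random.Random(1)
# A large theta concentrates mass near the reference permutation.
identity_freq = sum(
    sample_mallows(5, 5.0, rng) == [0, 1, 2, 3, 4] for _ in range(1000)
) / 1000
```

At `theta = 0` the model is uniform over permutations; as `theta` grows, samples cluster around the reference ranking.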
no code implementations • NeurIPS 2018 • Matteo Almanza, Flavio Chierichetti, Alessandro Panconesi, Andrea Vattani
We present a novel approach for LDA (Latent Dirichlet Allocation) topic reconstruction.
no code implementations • ICML 2018 • Flavio Chierichetti, Ravi Kumar, Andrew Tomkins
In this model, a user is offered a slate of choices (a subset of a finite universe of $n$ items) and selects exactly one item from the slate, where each item is chosen with probability proportional to its (positive) weight.
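The selection rule just described can be sketched in a few lines (the item names and weights below are made up for illustration):

```python
import random

weights = {"x": 3.0, "y": 1.0, "z": 1.0}  # hypothetical positive item weights
rng = random.Random(2)

def choose(slate):
    """Select exactly one item from the offered slate, each item with
    probability proportional to its (positive) weight."""
    return rng.choices(slate, weights=[weights[i] for i in slate])[0]

counts = {"x": 0, "y": 0, "z": 0}
for _ in range(5000):
    counts[choose(["x", "y", "z"])] += 1
# "x" carries 3/5 of this slate's total weight, so it should win
# roughly 60% of the time.
```

Note that the probabilities are conditional on the slate: on the restricted slate `["y", "z"]`, each of the two items would be chosen half the time.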
2 code implementations • NeurIPS 2017 • Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Sergei Vassilvitskii
We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms.
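The two-step recipe can be sketched for the simplest setting — two demographic groups that must be perfectly balanced, i.e. (1, 1)-fairlets. The pairing below is arbitrary and the names hypothetical; the actual algorithm chooses the pairing so as to minimize clustering cost.

```python
def fairlet_decomposition(reds, blues):
    """Sketch of a (1, 1)-fairlet decomposition for two equal-size
    groups: pair each 'red' point with a 'blue' point, so every
    fairlet is perfectly balanced. (Pairing here is arbitrary; a good
    decomposition would minimize the resulting clustering cost.)"""
    assert len(reds) == len(blues)
    return list(zip(reds, blues))

def fairlet_representatives(fairlets):
    """One representative point per fairlet; existing machinery for
    traditional clustering is then run on these representatives, and
    each point inherits the cluster of its fairlet."""
    return [fairlet[0] for fairlet in fairlets]

reds = [(0.0, 0.0), (5.0, 5.0)]
blues = [(0.0, 1.0), (5.0, 4.0)]
fairlets = fairlet_decomposition(reds, blues)
reps = fairlet_representatives(fairlets)
```

Because every fairlet is balanced, any clustering of the representatives yields clusters that are themselves balanced.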
no code implementations • ICML 2017 • Flavio Chierichetti, Sreenivas Gollapudi, Ravi Kumar, Silvio Lattanzi, Rina Panigrahy, David P. Woodruff
We consider the problem of approximating a given matrix by a low-rank matrix so as to minimize the entrywise $\ell_p$-approximation error, for any $p \geq 1$; the case $p = 2$ is the classical SVD problem.
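To make the objective concrete, here is a pure-Python sketch (hypothetical helper names) of the entrywise $\ell_p$ error, together with the classical $p = 2$ solution for rank 1 computed by power iteration — which is what the SVD gives, and which is not optimal for general $p$; handling general $p \geq 1$ is the paper's subject and is not attempted here.

```python
import math
import random

def lp_error(A, B, p):
    """Entrywise l_p error ||A - B||_p = (sum_ij |a_ij - b_ij|^p)^(1/p)."""
    return sum(
        abs(a - b) ** p for ra, rb in zip(A, B) for a, b in zip(ra, rb)
    ) ** (1.0 / p)

def rank1_l2_approx(A, iters=200):
    """Best rank-1 approximation under p = 2 (the SVD case), via power
    iteration on A^T A. NOT optimal for p != 2."""
    n, m = len(A), len(A[0])
    rng = random.Random(3)
    v = [rng.random() + 0.1 for _ in range(m)]
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [sum(A[i][j] * u[i] for i in range(n)) for j in range(m)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    u = [sum(A[i][j] * v[j] for j in range(m)) for i in range(n)]
    return [[u[i] * v[j] for j in range(m)] for i in range(n)]

A = [[3.0, 4.0], [6.0, 8.0]]  # rank-1 matrix: outer([1, 2], [3, 4])
B = rank1_l2_approx(A)
# For a rank-1 input, the rank-1 approximation is numerically exact.
```

For $p = 1$ the objective is more robust to outlier entries than the SVD, which is one motivation for studying general $p$.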
no code implementations • NeurIPS 2011 • Flavio Chierichetti, David Liben-Nowell, Jon M. Kleinberg
There is a tree T that we cannot observe directly (representing the structure along which the information has spread), and certain nodes randomly decide to make their copy of the information public.