no code implementations • 21 Aug 2023 • Abram Magner, Arun Padakandla
We show that the empirical risk defined in previous works, which matches the definition in the classical theory, fails for some learnable classes to satisfy the uniform convergence property enjoyed in the classical setting.
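For reference, the classical uniform convergence property at issue can be stated as follows; the notation here is ours and is not taken from the paper. For a hypothesis class $\mathcal{H}$, a loss $\ell$, and an i.i.d. sample $(x_1, y_1), \dots, (x_n, y_n)$,

$$\sup_{h \in \mathcal{H}} \big| \hat{R}_n(h) - R(h) \big| \xrightarrow{\;P\;} 0, \qquad \hat{R}_n(h) = \frac{1}{n} \sum_{i=1}^{n} \ell\big(h(x_i), y_i\big), \quad R(h) = \mathbb{E}\big[\ell\big(h(X), Y\big)\big].$$

The paper's claim, as described in the excerpt above, is that an analogous guarantee can fail for some learnable classes under the empirical risk definition used in prior works.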
no code implementations • 11 May 2023 • Sepideh Neshatfar, Abram Magner, Salimeh Yasaei Sekeh
To gain a theoretical perspective on the supervised summarization problem itself, we first formulate it in terms of maximizing the Shannon mutual information between the summarized graph and the class label.
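As a hedged sketch of that formulation (the symbols here are ours and may differ from the paper's): writing $G$ for the input graph, $Y$ for its class label, and $S(G)$ for the summarized graph produced by a summarizer $S$ drawn from an allowed family $\mathcal{S}$, the objective reads

$$\max_{S \in \mathcal{S}} \; I\big(S(G);\, Y\big), \qquad I\big(S(G);\, Y\big) = H(Y) - H\big(Y \mid S(G)\big),$$

so a good summary discards structure in $G$ while preserving as much label-relevant information as possible.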
no code implementations • 22 Sep 2021 • Abram Magner, Carolyn Kaminski, Petko Bogdanov
We highlight one such phenomenon, temporal distortion, caused by a misalignment between the rate at which observations of a cascade process are made and the rate at which the process itself operates, and argue that failure to correct for it degrades performance on downstream statistical tasks.
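To illustrate the kind of rate misalignment described above, here is a hypothetical toy example (not the paper's model or code): a continuous-time cascade unfolds with exponential waiting times, but snapshots are taken at a fixed observation interval; when the interval is coarse relative to the cascade's own timescale, many infection events collapse into a single observed step.

```python
import random

# Toy continuous-time SI cascade on a path graph (hypothetical illustration,
# not the paper's model): each infected node infects the next node after an
# Exp(rate) waiting time.
def simulate_cascade(n_nodes=20, rate=5.0, seed=0):
    rng = random.Random(seed)
    times = [0.0]  # node 0 is the seed, infected at time 0
    for _ in range(1, n_nodes):
        times.append(times[-1] + rng.expovariate(rate))
    return times  # infection time of each node along the path

def observe(times, delta):
    """Snapshot the cascade every `delta` time units and record how many
    nodes appear newly infected at each observation."""
    horizon = times[-1]
    snapshots, t, prev = [], delta, 0
    while t <= horizon + delta:
        infected = sum(1 for x in times if x <= t)
        snapshots.append(infected - prev)  # new infections since last snapshot
        prev, t = infected, t + delta
    return snapshots

times = simulate_cascade()
print("fine-grained observation :", observe(times, delta=0.05))
print("coarse observation       :", observe(times, delta=1.0))
# With coarse observations, many infection events collapse into a single
# observed step, distorting the apparent dynamics of the process.
```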
no code implementations • 13 Feb 2020 • Abram Magner, Mayank Baranwal, Alfred O. Hero III
We investigate the power of GCNs, as a function of their number of layers, to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
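For orientation, a standard (Kipf and Welling style) graph convolutional layer, which is one common instantiation of the architecture being studied; this sketch is ours and the paper's exact definition may differ. With adjacency matrix $A$, $\tilde{A} = A + I$, and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$,

$$H^{(\ell+1)} = \sigma\!\left(\tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2}\, H^{(\ell)} W^{(\ell)}\right),$$

and a graph-level embedding is obtained by pooling the final node features, e.g. $h_G = \frac{1}{n} \sum_{v} H^{(L)}_{v}$. The "number of layers" referred to in the excerpt is the depth $L$ of this recursion.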
no code implementations • 28 Oct 2019 • Abram Magner, Mayank Baranwal, Alfred O. Hero III
We give a precise characterization of the set of pairs of graphons that are indistinguishable by a GCN with nonlinear activation functions coming from a certain broad class if its depth is at least logarithmic in the size of the sample graph.
no code implementations • 6 Apr 2019 • Abram Magner, Wojciech Szpankowski
Many real-world networks change over time, in the sense that nodes and edges enter and leave the network.