Search Results for author: Soumyadip Ghosh

Found 15 papers, 1 paper with code

On Convergence of the Alternating Directions SGHMC Algorithm

no code implementations21 May 2024 Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki

We study convergence rates of Hamiltonian Monte Carlo (HMC) algorithms with leapfrog integration under mild conditions on the stochastic gradient oracle for the target distribution (SGHMC).
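As a rough illustration of the algorithm class studied here, the sketch below builds an HMC proposal from leapfrog steps in which the exact gradient of the log-target is replaced by a stochastic gradient oracle. The standard Gaussian target, the noise level, the step size, and the omission of a Metropolis correction are all assumptions made for the sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad_log_target(x, noise_std=0.1):
    """Stochastic gradient oracle for a standard Gaussian target:
    the exact gradient of the log-density is -x; the added noise mimics minibatching."""
    return -x + noise_std * rng.normal(size=x.shape)

def sghmc_leapfrog_proposal(x, step_size=0.1, n_steps=20):
    """One HMC proposal via leapfrog integration driven by the noisy gradient oracle
    (the Metropolis accept/reject step is omitted for brevity)."""
    p = rng.normal(size=x.shape)                               # Gaussian auxiliary momentum
    x_new = x.copy()
    p = p + 0.5 * step_size * noisy_grad_log_target(x_new)     # half step for momentum
    for _ in range(n_steps - 1):
        x_new = x_new + step_size * p                          # full step for position
        p = p + step_size * noisy_grad_log_target(x_new)       # full step for momentum
    x_new = x_new + step_size * p
    p = p + 0.5 * step_size * noisy_grad_log_target(x_new)     # closing half step
    return x_new

x = np.full(2, 5.0)                 # start far from the target mean
for _ in range(500):
    x = sghmc_leapfrog_proposal(x)
print(x)                            # iterates drift toward the N(0, I) target
```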

Obtaining Explainable Classification Models using Distributionally Robust Optimization

no code implementations3 Nov 2023 Sanjeeb Dash, Soumyadip Ghosh, Joao Goncalves, Mark S. Squillante

Model explainability is crucial for human users to be able to interpret how a proposed classifier assigns labels to data based on its feature values.

Binary Classification · Classification

On Representations of Mean-Field Variational Inference

no code implementations20 Oct 2022 Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki, Edith Zhang

We present a framework to analyze MFVI algorithms, which is inspired by a similar development for general variational Bayesian formulations.

Bayesian Inference · Variational Inference
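For context on what a mean-field variational inference (MFVI) iteration looks like, below is the textbook coordinate-ascent example for a bivariate Gaussian target; the target, the precision matrix, and the update rule are the standard illustration and are not the representations or the framework developed in the paper.

```python
import numpy as np

# Target: bivariate Gaussian with mean mu and precision matrix Lam.
mu = np.array([1.0, -1.0])
Lam = np.array([[2.0, 0.9],
                [0.9, 1.5]])

# Mean-field factors q_i(x_i) = N(m_i, 1 / Lam[i, i]); coordinate ascent updates the means.
m = np.zeros(2)
for _ in range(50):
    for i in range(2):
        j = 1 - i
        m[i] = mu[i] - Lam[i, j] * (m[j] - mu[j]) / Lam[i, i]

print(m)   # the factor means converge to mu, the exact marginal means
```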

A Class of Geometric Structures in Transfer Learning: Minimax Bounds and Optimality

no code implementations23 Feb 2022 Xuhui Zhang, Jose Blanchet, Soumyadip Ghosh, Mark S. Squillante

In contrast, our study first illustrates the benefits of incorporating a natural geometric structure within a linear regression model, which corresponds to the generalized eigenvalue problem formed by the Gram matrices of both domains.

Transfer Learning
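Since the abstract above points to the generalized eigenvalue problem formed by the Gram matrices of the two domains, here is a minimal numerical sketch of that object on synthetic data; the data-generating process and the use of scipy.linalg.eigh are assumptions for illustration, not the paper's setting or its minimax bounds.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Synthetic source/target designs for a linear-regression transfer setting.
Xs = rng.normal(size=(200, 5))                                                    # source features
Xt = Xs @ np.diag([1.0, 1.0, 0.5, 0.2, 0.1]) + 0.1 * rng.normal(size=(200, 5))    # target features

Gs = Xs.T @ Xs / Xs.shape[0]    # source Gram (second-moment) matrix
Gt = Xt.T @ Xt / Xt.shape[0]    # target Gram matrix

# Generalized eigenvalue problem Gs v = lambda * Gt v: its spectrum summarizes how
# the two domains' geometries align, the structure exploited in the paper.
eigvals, _ = eigh(Gs, Gt)
print(eigvals)
```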

Polynomial convergence of iterations of certain random operators in Hilbert space

no code implementations4 Feb 2022 Soumyadip Ghosh, Yingdong Lu, Tomasz J. Nowicki

We study the convergence of a random iterative sequence of a family of operators on infinite-dimensional Hilbert spaces, inspired by the Stochastic Gradient Descent (SGD) algorithm in the case of noiseless regression, as studied in [1].

regression
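The finite-dimensional analogue of the iteration studied here is easy to see in code: for noiseless linear regression, each SGD step applies the random rank-one operator I - eta * a a^T to the error, so the run is a product of random operators. The Gaussian features, dimension, and step size below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10
w_star = rng.normal(size=d)        # ground-truth regressor (labels are noiseless)
w = np.zeros(d)                    # SGD iterate
eta = 0.05                         # step size

for _ in range(1000):
    a = rng.normal(size=d)         # random feature vector drawn at each step
    y = a @ w_star                 # noiseless label
    # SGD step; equivalently the error w - w_star is mapped by the random
    # operator (I - eta * a a^T), so the trajectory is driven by random operator products.
    w = w - eta * a * (a @ w - y)

print(np.linalg.norm(w - w_star))  # the error contracts as the operator products shrink it
```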

Efficient Generalization with Distributionally Robust Learning

no code implementations NeurIPS 2021 Soumyadip Ghosh, Mark Squillante, Ebisa Wollega

Distributionally robust learning (DRL) is increasingly seen as a viable method to train machine learning models for improved model generalization.

Hamiltonian Monte Carlo with Asymmetrical Momentum Distributions

no code implementations21 Oct 2021 Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki

Existing rigorous convergence guarantees for the Hamiltonian Monte Carlo (HMC) algorithm use Gaussian auxiliary momentum variables, which are crucially symmetrically distributed.

EventGraD: Event-Triggered Communication in Parallel Machine Learning

2 code implementations12 Mar 2021 Soumyadip Ghosh, Bernardo Aquino, Vijay Gupta

To relieve some of this overhead, in this paper we present EventGraD, an algorithm with event-triggered communication for stochastic gradient descent in parallel machine learning.

BIG-bench Machine Learning
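The event-triggered idea can be conveyed in a few lines: a worker runs local SGD steps but communicates its parameters only when they have drifted past a threshold since the last send. The threshold rule, the broadcast stub, and the random gradients below are placeholders for this sketch and do not reproduce the EventGraD implementations linked above.

```python
import numpy as np

def broadcast(worker_id, params):
    """Stand-in for the real inter-process send (e.g., MPI) used in parallel training."""
    print(f"worker {worker_id}: communicating parameters")

class EventTriggeredWorker:
    """Communicate only when the parameters have changed enough since the last send."""

    def __init__(self, worker_id, params, threshold=0.5):
        self.worker_id = worker_id
        self.params = params
        self.threshold = threshold
        self.last_sent = params.copy()

    def sgd_step(self, grad, lr=0.1):
        self.params -= lr * grad
        # Event trigger: send only if the drift since the last communication is large.
        if np.linalg.norm(self.params - self.last_sent) > self.threshold:
            broadcast(self.worker_id, self.params)
            self.last_sent = self.params.copy()

rng = np.random.default_rng(0)
worker = EventTriggeredWorker(worker_id=0, params=np.zeros(4))
for _ in range(20):
    worker.sgd_step(rng.normal(size=4))    # most steps skip communication entirely
```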

HMC, an Algorithms in Data Mining, the Functional Analysis approach

no code implementations4 Feb 2021 Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki

The main purpose of this paper is to facilitate the communication between the Analytic, Probabilistic and Algorithmic communities.

On $L^q$ Convergence of the Hamiltonian Monte Carlo

no code implementations21 Jan 2021 Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki

We establish $L^q$ convergence for Hamiltonian Monte Carlo algorithms.
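For readers unfamiliar with the mode of convergence in the title, a generic way to state it is in terms of the densities of the iterates relative to the target; the display below is the standard textbook notion, not necessarily the exact formulation used in the paper.

```latex
% f_n: density of the n-th HMC iterate, f: target density, both w.r.t. a reference
% measure mu.  L^q convergence means the q-norm of their difference vanishes:
\| f_n - f \|_{L^q(\mu)}
  \;=\; \Bigl( \int \lvert f_n(x) - f(x) \rvert^{\,q} \, \mu(\mathrm{d}x) \Bigr)^{1/q}
  \;\longrightarrow\; 0 \quad \text{as } n \to \infty .
```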

Unbiased Gradient Estimation for Distributionally Robust Learning

no code implementations22 Dec 2020 Soumyadip Ghosh, Mark Squillante

Seeking to improve model generalization, we consider a new approach based on distributionally robust learning (DRL) that applies stochastic gradient descent to the outer minimization problem.
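To make the outer/inner structure concrete, here is a toy sketch in which the inner (adversarial) step reweights minibatch samples toward high losses via an exponential tilt and the outer step is plain SGD on the reweighted objective. The tilt, the step sizes, and the quadratic losses are assumptions for illustration; in particular this naive scheme does not give the unbiased gradient estimates that are the point of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data and squared per-sample losses.
X = rng.normal(size=(256, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=256)

def per_sample_loss_and_grad(w, idx):
    resid = X[idx] @ w - y[idx]
    losses = 0.5 * resid ** 2
    grads = resid[:, None] * X[idx]        # gradient of each sample's loss w.r.t. w
    return losses, grads

w = np.zeros(5)
eta, lam = 0.05, 1.0                       # outer step size; robustness temperature
for _ in range(500):
    idx = rng.choice(len(y), size=32, replace=False)
    losses, grads = per_sample_loss_and_grad(w, idx)
    # Inner maximization (heuristic closed form): tilt sample weights toward high losses,
    # a stand-in for the worst-case distribution in the DRL objective.
    p = np.exp((losses - losses.max()) / lam)
    p /= p.sum()
    # Outer minimization: one SGD step on the reweighted (robust) loss.
    w -= eta * (p @ grads)

print(w)
```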

Quantifying the Empirical Wasserstein Distance to a Set of Measures: Beating the Curse of Dimensionality

no code implementations NeurIPS 2020 Nian Si, Jose Blanchet, Soumyadip Ghosh, Mark Squillante

We consider the problem of estimating the Wasserstein distance between the empirical measure and a set of probability measures whose expectations over a class of functions (hypothesis class) are constrained.
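The quantity in question is a projection of the empirical measure onto the constrained set, measured in Wasserstein distance. Below is a small discrete sketch that computes it by linear programming after restricting the candidate measure to a fixed finite support; that restriction, the single moment constraint, and the use of scipy.optimize.linprog are illustrative assumptions only and say nothing about the paper's estimator or its dimension-free guarantees.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

x = rng.normal(size=30)               # empirical sample (1-D), each point has mass 1/30
z = np.linspace(-3.0, 3.0, 40)        # fixed candidate support for the measure nu
n, m = len(x), len(z)

cost = np.abs(x[:, None] - z[None, :]).ravel()     # transport costs |x_i - z_j|, flattened

# Each empirical point must ship exactly its mass 1/n (row sums of the transport plan).
A_eq = np.zeros((n, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0
b_eq = np.full(n, 1.0 / n)

# Hypothesis-class constraint on nu, here a single moment constraint E_nu[Z] >= 0.5,
# written in <= form; nu's weights are the column sums of the transport plan.
A_ub = np.tile(-z, n).reshape(1, -1)
b_ub = np.array([-0.5])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("Wasserstein distance from the empirical measure to the set:", res.fun)
```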

Efficient Stochastic Gradient Descent for Learning with Distributionally Robust Optimization

no code implementations22 May 2018 Soumyadip Ghosh, Mark Squillante, Ebisa Wollega

Distributionally robust optimization (DRO) problems are increasingly seen as a viable method to train machine learning models for improved model generalization.

Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD

no code implementations3 Mar 2018 Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, Priya Nagpurkar

Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers).
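A quick simulation makes the error-runtime trade-off visible: a fully synchronous step must wait for the slowest of the n workers, while waiting for only the k fastest finishes sooner per step at the cost of averaging fewer gradients. The exponential runtime model and the quadratic objective are assumptions for the sketch, and gradient staleness is not modeled, so this is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_workers=8, k=8, steps=200, lr=0.1):
    """Distributed SGD on f(w) = 0.5 * ||w||^2 with noisy worker gradients.
    Each step waits for the k fastest workers (k = n_workers is fully synchronous)."""
    w = np.full(5, 5.0)
    wall_clock = 0.0
    for _ in range(steps):
        compute_times = rng.exponential(1.0, size=n_workers)   # straggler model
        wall_clock += np.sort(compute_times)[k - 1]            # wait for the k-th fastest
        grads = w + 0.5 * rng.normal(size=(n_workers, 5))      # noisy gradients of f
        fastest = np.argsort(compute_times)[:k]
        w = w - lr * grads[fastest].mean(axis=0)               # average only the k fastest
    return wall_clock, 0.5 * np.dot(w, w)

for k in (8, 4, 1):
    t, err = simulate(k=k)
    print(f"k={k}: wall-clock {t:.1f}, final objective {err:.4f}")
```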

Efficient Estimation in the Tails of Gaussian Copulas

no code implementations5 Jul 2016 Kalyani Nagaraj, Jie Xu, Raghu Pasupathy, Soumyadip Ghosh

The first of our proposed estimators is the "full-information" estimator that actively exploits such local structure to achieve bounded relative error in Gaussian settings.
