no code implementations • 6 Sep 2023 • Suhail M. Shah, Albert S. Berahas, Raghu Bollapragada
We consider network-based decentralized optimization problems, where each node in the network possesses a local function, and the objective is to collectively reach a consensus solution that minimizes the sum of all the local functions.
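As a concrete illustration of the consensus setup, here is a minimal decentralized gradient descent (DGD) sketch, assuming scalar quadratic local functions and a doubly stochastic mixing matrix W; it is a generic baseline, not necessarily the algorithm developed in the paper.

```python
import numpy as np

# Minimal decentralized gradient descent (DGD) sketch: each node mixes its
# iterate with its neighbors' (via a doubly stochastic matrix W) and then
# takes a step along its own local gradient. The local functions
# f_i(x) = 0.5 * (x - b_i)^2 are illustrative, so the consensus minimizer
# of their sum is mean(b).
rng = np.random.default_rng(0)
n_nodes, alpha, iters = 4, 0.1, 200
b = rng.normal(size=n_nodes)                    # each node's local minimizer
x = np.zeros(n_nodes)                           # one scalar iterate per node
W = np.full((n_nodes, n_nodes), 1.0 / n_nodes)  # complete-graph averaging

for _ in range(iters):
    grad = x - b                                # local gradients
    x = W @ x - alpha * grad                    # consensus mixing + local step

print(x, "consensus target:", b.mean())         # nodes cluster near mean(b)
```

With a constant step size, DGD converges only to a neighborhood of the consensus minimizer; diminishing steps or gradient-tracking corrections remove the residual bias.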
no code implementations • 15 Jun 2022 • Raghu Bollapragada, Tyler Chen, Rachel Ward
Simple stochastic momentum methods are widely used in machine learning optimization, but their good practical performance is at odds with the absence of theoretical guarantees of acceleration in the literature.
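For reference, the method in question is essentially stochastic gradient descent with heavy-ball momentum; here is a minimal sketch on a least-squares problem (step size, momentum parameter, and batch size are illustrative):

```python
import numpy as np

# Stochastic heavy-ball momentum:
#   x_{k+1} = x_k - alpha * g_k + beta * (x_k - x_{k-1}),
# where g_k is a minibatch gradient of 0.5 * ||A x - b||^2 / n.
rng = np.random.default_rng(0)
n, d = 500, 20
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x, x_prev = np.zeros(d), np.zeros(d)
alpha, beta, batch = 1e-3, 0.9, 32

for _ in range(2000):
    idx = rng.choice(n, size=batch, replace=False)
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch      # minibatch gradient
    x, x_prev = x - alpha * g + beta * (x - x_prev), x

print(0.5 * np.linalg.norm(A @ x - b) ** 2 / n)       # mean squared residual
```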
no code implementations • 28 Oct 2021 • Soumyajit Gupta, Gurpreet Singh, Raghu Bollapragada, Matthew Lease
Multi-objective optimization (MOO) problems require balancing competing objectives, often under constraints.
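A common entry point to MOO is weighted-sum scalarization: sweep a weight over a convex combination of the objectives to trace an approximate Pareto front. The two quadratic objectives below are illustrative, not taken from the paper:

```python
import numpy as np

# Weighted-sum scalarization: minimize w*f1 + (1-w)*f2 for several weights w.
# With f1(x) = ||x - 1||^2 and f2(x) = ||x + 1||^2, the scalarized minimizer
# sweeps from -1 (w=0) to +1 (w=1) along the Pareto front.
def f1_grad(x): return 2 * (x - 1.0)
def f2_grad(x): return 2 * (x + 1.0)

for w in np.linspace(0.0, 1.0, 5):
    x = np.zeros(3)
    for _ in range(200):                 # plain gradient descent on the blend
        x -= 0.1 * (w * f1_grad(x) + (1 - w) * f2_grad(x))
    print(f"w={w:.2f} -> x[0] = {x[0]: .3f}")
```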
no code implementations • 24 Sep 2021 • Raghu Bollapragada, Stefan M. Wild
We consider unconstrained stochastic optimization problems with no available gradient information.
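One standard way to proceed without gradients is to build a gradient estimate from (noisy) function values alone; below is a minimal coordinate-wise forward-difference sketch, with an illustrative noise level and test function:

```python
import numpy as np

# Forward-difference gradient estimator from noisy function values only:
# g_i ~ (f(x + h*e_i) - f(x)) / h. The difference parameter h trades
# truncation error against amplified function-value noise.
rng = np.random.default_rng(0)
def noisy_f(x):
    return np.sum(x ** 2) + 1e-3 * rng.normal()   # illustrative noisy oracle

def fd_gradient(x, h=1e-2):
    fx = noisy_f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (noisy_f(x + e) - fx) / h
    return g

x = np.ones(5)
for _ in range(300):
    x -= 0.1 * fd_gradient(x)
print(np.linalg.norm(x))   # small, limited by the noise-to-h ratio
```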
no code implementations • 7 Mar 2021 • David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip
Our investigation leads naturally to generalizing stochastic gradient (SG) methods into Retrospective Approximation (RA), where, during each iteration, a "deterministic solver" executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency.
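Here is a minimal sketch of that outer/inner structure, assuming a one-dimensional sample-average problem and illustrative sample-growth and tolerance schedules:

```python
import numpy as np

# Retrospective Approximation (RA): at each outer iteration, fix a sample
# (making the subsampled problem deterministic), run a deterministic solver
# on it until the inner accuracy matches the sample's statistical error,
# then grow the sample and tighten the tolerance.
rng = np.random.default_rng(0)

x, n, tol = 5.0, 8, 1.0
for _ in range(10):                          # outer RA iterations
    xi = rng.normal(size=n)                  # fixed sample -> deterministic f_n
    grad = lambda z: z - xi.mean()           # gradient of E_n[(z - xi)^2] / 2
    while abs(grad(x)) > tol:                # inner deterministic solve
        x -= 0.5 * grad(x)
    n, tol = 2 * n, tol / 2                  # grow sample, tighten tolerance

print(x)   # approaches 0, the minimizer of E[(x - xi)^2] / 2
```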
no code implementations • 31 Dec 2020 • Yuchen Xie, Raghu Bollapragada, Richard Byrd, Jorge Nocedal
The motivation for this paper stems from the desire to develop an adaptive sampling method for solving constrained optimization problems in which the objective function is stochastic and the constraints are deterministic.
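A heavily simplified sketch of that setting: stochastic objective, deterministic constraint handled exactly by projection, and a sample size that grows as the iterates progress. The geometric batch schedule here is an illustrative stand-in for the paper's adaptive test:

```python
import numpy as np

# Projected stochastic gradient with a growing sample: minimize
# E[(x - xi)^2] / 2 subject to the deterministic box constraint x in [0.5, 2].
rng = np.random.default_rng(0)
lo, hi = 0.5, 2.0
x, batch = 3.0, 4
for _ in range(40):
    xi = rng.normal(size=batch)
    g = (x - xi).mean()                       # subsampled gradient
    x = float(np.clip(x - 0.2 * g, lo, hi))   # gradient step + projection
    batch = int(np.ceil(1.2 * batch))         # larger sample -> less noise
print(x)   # the constrained solution is the boundary point 0.5
```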
no code implementations • 29 Oct 2019 • Raghu Bollapragada, Stefan M. Wild
We consider stochastic zero-order optimization problems, which arise in settings from simulation optimization to reinforcement learning.
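A common zeroth-order construction in this setting is the Gaussian-smoothing gradient estimator; a minimal sketch with illustrative parameters:

```python
import numpy as np

# Gaussian-smoothing estimator: g ~ average of (f(x + h*u) - f(x)) / h * u
# over random directions u ~ N(0, I). It is an unbiased estimate of the
# gradient of a smoothed version of f; averaging directions reduces variance.
rng = np.random.default_rng(0)
def f(x):
    return np.sum(x ** 2)

def zo_gradient(x, h=1e-3, n_dirs=8):
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + h * u) - f(x)) / h * u
    return g / n_dirs

x = np.ones(5)
for _ in range(500):
    x -= 0.05 * zo_gradient(x)
print(np.linalg.norm(x))   # decreases toward 0
```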
no code implementations • ICML 2018 • Raghu Bollapragada, Dheevatsa Mudigere, Jorge Nocedal, Hao-Jun Michael Shi, Ping Tak Peter Tang
The standard L-BFGS method relies on gradient approximations that are not dominated by noise, so that search directions are descent directions, the line search is reliable, and quasi-Newton updating yields useful quadratic models of the objective function.
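The remedy studied in this line of work is to control gradient noise by growing the batch as the iterations proceed. A compact sketch of progressive batching around the standard L-BFGS two-loop recursion follows; the fixed step size, growth rate, and least-squares test problem are illustrative stand-ins for the paper's tests and line search:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def batch_grad(x, batch):                    # minibatch gradient
    idx = rng.choice(n, size=batch, replace=False)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

def two_loop(g, mem):                        # standard L-BFGS direction
    q, alphas = g.copy(), []
    for s, y in reversed(mem):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if mem:                                  # initial Hessian scaling
        s, y = mem[-1]
        q *= (y @ s) / (y @ y)
    for (s, y), a in zip(mem, reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return -q

x, batch, mem = np.zeros(d), 16, []
g = batch_grad(x, batch)
for _ in range(60):
    x_new = x + 0.5 * two_loop(g, mem)
    g_new = batch_grad(x_new, batch)
    s, y = x_new - x, g_new - g
    if y @ s > 1e-10:                        # curvature condition
        mem = (mem + [(s, y)])[-5:]          # keep the 5 most recent pairs
    x, g = x_new, g_new
    batch = min(n, int(1.1 * batch))         # progressive batching
print(np.linalg.norm(A.T @ (A @ x - b)) / n)  # full-gradient norm
```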
no code implementations • 30 Oct 2017 • Raghu Bollapragada, Richard Byrd, Jorge Nocedal
In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations.
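A minimal sketch of one such control, a norm-test-style rule (related to, though not necessarily identical to, the test proposed in the paper): enlarge the sample whenever the estimated variance of the minibatch gradient is large relative to its squared norm.

```python
import numpy as np

# Adaptive sample size via a norm-test-style rule on a least-squares problem:
# if Var(g) / batch > theta^2 * ||g||^2, the gradient estimate is too noisy
# to be a reliable descent direction, so double the batch.
rng = np.random.default_rng(0)
n, d = 2000, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x, batch, theta = np.zeros(d), 8, 0.5

for _ in range(100):
    idx = rng.choice(n, size=batch, replace=False)
    per_sample = A[idx] * (A[idx] @ x - b[idx])[:, None]  # per-sample grads
    g = per_sample.mean(axis=0)
    var_of_mean = per_sample.var(axis=0).sum() / batch
    if var_of_mean > theta ** 2 * (g @ g):                # test fails
        batch = min(n, 2 * batch)                         # -> enlarge sample
    x -= 0.1 * g

print(batch, np.linalg.norm(A.T @ (A @ x - b)) / n)
```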
no code implementations • 17 May 2017 • Albert S. Berahas, Raghu Bollapragada, Jorge Nocedal
Sketching, a dimensionality reduction technique, has received much attention in the statistics community.
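A minimal sketch-and-solve illustration of the idea on an overdetermined least-squares problem; the Gaussian sketch and the dimensions are illustrative, and many other sketching matrices would serve:

```python
import numpy as np

# Sketch-and-solve: compress a tall n x d least-squares problem to m << n
# rows with a random sketch S, then solve the small problem.
rng = np.random.default_rng(0)
n, d, m = 5000, 20, 200
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

S = rng.normal(size=(m, n)) / np.sqrt(m)        # Gaussian sketching matrix
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_sketch - x_exact))       # small vs. ||x_exact||
```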
no code implementations • 27 Sep 2016 • Raghu Bollapragada, Richard Byrd, Jorge Nocedal
The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling.
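A minimal subsampled Newton sketch on a least-squares problem: the gradient uses all the data while the Hessian is estimated from a small subsample. Sample sizes and the quadratic setup are illustrative:

```python
import numpy as np

# Subsampled Newton: pair a full gradient with a Hessian built from a
# random subsample of the data, then take a Newton-type step with it.
rng = np.random.default_rng(0)
n, d = 5000, 15
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

x = np.zeros(d)
for _ in range(10):
    g = A.T @ (A @ x - b) / n                       # full gradient
    idx = rng.choice(n, size=200, replace=False)    # Hessian subsample
    H = A[idx].T @ A[idx] / 200                     # subsampled Hessian
    x -= np.linalg.solve(H, g)                      # Newton-type step

print(np.linalg.norm(A.T @ (A @ x - b)) / n)        # near zero
```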