Search Results for author: Hailiang Liu

Found 5 papers, 3 papers with code

SGEM: stochastic gradient with energy and momentum

1 code implementation • 3 Aug 2022 • Hailiang Liu, Xuping Tian

In this paper, we propose SGEM, Stochastic Gradient with Energy and Momentum, to solve a large class of general non-convex stochastic optimization problems, based on the AEGD method that originated in the work [AEGD: Adaptive Gradient Descent with Energy].

Stochastic Optimization
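The snippet above names SGEM only at a high level. As a rough illustration, here is a hypothetical sketch that combines an AEGD-style energy variable with an exponential-moving-average momentum on the transformed gradient; the coupling shown (momentum buffer `m`, energy decayed by `m**2`) is an assumption for illustration and may differ from the paper's exact scheme.

```python
import numpy as np

def sgem_sketch(grad_f, f, x0, eta=0.1, beta=0.9, c=1.0, n_iters=200):
    """Hypothetical energy-plus-momentum update in the spirit of SGEM.

    grad_f(x) returns the (stochastic) gradient; f(x) returns the loss,
    assumed bounded below by -c so that f(x) + c > 0.
    """
    x = np.asarray(x0, dtype=float)
    r = np.full_like(x, np.sqrt(f(x) + c))  # energy variable, as in AEGD
    m = np.zeros_like(x)                    # momentum buffer (assumed form)
    for _ in range(n_iters):
        v = grad_f(x) / (2.0 * np.sqrt(f(x) + c))  # transformed gradient
        m = beta * m + (1.0 - beta) * v            # moving average of v
        r = r / (1.0 + 2.0 * eta * m**2)           # energy decays monotonically
        x = x - 2.0 * eta * r * m                  # energy-scaled momentum step
    return x
```

On a toy quadratic the iterate overshoots once (heavy-ball behavior) and then settles near the minimizer as the energy damps the step size.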

An Adaptive Gradient Method with Energy and Momentum

1 code implementation • 23 Mar 2022 • Hailiang Liu, Xuping Tian

We introduce a novel algorithm for gradient-based optimization of stochastic objective functions.

A global convergence theory for deep ReLU implicit networks via over-parameterization

no code implementations • ICLR 2022 • Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, Hongyang Gao

Implicit deep learning has received increasing attention recently due to the fact that it generalizes the recursive prediction rules of many commonly used neural network architectures.

SGDEM: stochastic gradient descent with energy and momentum

no code implementations • 29 Sep 2021 • Hailiang Liu, Xuping Tian

In this paper, we propose SGDEM, Stochastic Gradient Descent with Energy and Momentum, to solve a large class of general non-convex stochastic optimization problems, based on the AEGD method that originated in the work [AEGD: Adaptive Gradient Descent with Energy].

Stochastic Optimization

AEGD: Adaptive Gradient Descent with Energy

1 code implementation • 10 Oct 2020 • Hailiang Liu, Xuping Tian

We propose AEGD, a new algorithm for first-order gradient-based optimization of non-convex objective functions, based on a dynamically updated energy variable.
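The abstract gives only the idea of a "dynamically updated energy variable". As a rough illustration, here is a minimal sketch assuming the elementwise form v_k = ∇f(x_k)/(2√(f(x_k)+c)), r_{k+1} = r_k/(1 + 2η v_k²), x_{k+1} = x_k − 2η r_{k+1} v_k with r_0 = √(f(x_0)+c); details may differ from the released implementation.

```python
import numpy as np

def aegd_sketch(grad_f, f, x0, eta=0.1, c=1.0, n_iters=100):
    """Sketch of an AEGD-style energy-based update (assumed form, see lead-in).

    grad_f(x) returns the gradient; f(x) returns the objective value,
    assumed bounded below by -c so that f(x) + c > 0.
    """
    x = np.asarray(x0, dtype=float)
    # energy initialized from the objective: r_0 = sqrt(f(x_0) + c)
    r = np.full_like(x, np.sqrt(f(x) + c))
    for _ in range(n_iters):
        v = grad_f(x) / (2.0 * np.sqrt(f(x) + c))  # transformed gradient
        r = r / (1.0 + 2.0 * eta * v**2)           # energy decays monotonically
        x = x - 2.0 * eta * r * v                  # energy-scaled descent step
    return x

# minimize f(x) = x^2 starting from x0 = 3
x_min = aegd_sketch(lambda x: 2 * x, lambda x: float(np.sum(x**2)), np.array([3.0]))
```

Because r only ever shrinks, the effective step size is self-limiting, which is what gives the method its unconditional energy stability.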
