Search Results for author: Nachuan Xiao

Found 5 papers, 0 papers with code

Developing Lagrangian-based Methods for Nonsmooth Nonconvex Optimization

no code implementations • 15 Apr 2024 • Nachuan Xiao, Kuangyu Ding, Xiaoyin Hu, Kim-Chuan Toh

Preliminary numerical experiments on deep learning tasks illustrate that our proposed framework yields efficient variants of Lagrangian-based methods with convergence guarantees for nonconvex nonsmooth constrained optimization problems.
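The listing gives no implementation. As a rough point of reference for the general augmented-Lagrangian template that Lagrangian-based methods build on (not the authors' specific algorithm), here is a minimal NumPy sketch for an equality-constrained nonsmooth problem; the toy objective, constraint, and step sizes are invented for illustration.

```python
import numpy as np

# Toy problem (illustrative only, not from the paper):
#   minimize f(x) = ||x||_1 + 0.5*||x||^2   subject to   c(x) = sum(x) - 1 = 0
def f_subgrad(x):
    return np.sign(x) + x           # a subgradient of the nonsmooth + smooth parts

def c(x):
    return np.sum(x) - 1.0          # single equality constraint

def c_jac(x):
    return np.ones_like(x)          # gradient of c with respect to x

def augmented_lagrangian(x0, lam0=0.0, rho=10.0, outer=200, inner=50, lr=1e-2):
    """Generic augmented-Lagrangian loop: inner (sub)gradient steps on
    L_rho(x, lam) = f(x) + lam*c(x) + 0.5*rho*c(x)^2, then a dual update."""
    x, lam = x0.astype(float).copy(), lam0
    for _ in range(outer):
        for _ in range(inner):      # approximate primal minimization
            g = f_subgrad(x) + (lam + rho * c(x)) * c_jac(x)
            x -= lr * g
        lam += rho * c(x)           # multiplier (dual) ascent step
    return x, lam

x, lam = augmented_lagrangian(np.zeros(5))
print(x, lam, c(x))
```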

Decentralized Stochastic Subgradient Methods for Nonsmooth Nonconvex Optimization

no code implementations • 18 Mar 2024 • Siyuan Zhang, Nachuan Xiao, Xin Liu

Furthermore, we establish that our proposed framework encompasses a wide range of existing efficient decentralized subgradient methods, including decentralized stochastic subgradient descent (DSGD), DSGD with the gradient-tracking technique (DSGD-T), and DSGD with momentum (DSGDm).
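For concreteness, the sketch below shows only the plain DSGD update mentioned above, not the paper's unified framework: each agent takes a local stochastic subgradient step and then averages its iterate with its neighbors through a doubly stochastic mixing matrix W. The loss, network topology, and step size are assumptions made for the example.

```python
import numpy as np

def dsgd(x0, subgrad, W, lr=1e-2, steps=1000, rng=None):
    """Plain decentralized SGD (DSGD) sketch: n agents, iterates stacked
    row-wise in X (shape n x d). Each step: a local stochastic subgradient
    descent step followed by neighbor averaging with the mixing matrix W."""
    rng = np.random.default_rng() if rng is None else rng
    n = W.shape[0]
    X = np.tile(x0, (n, 1)).astype(float)
    for _ in range(steps):
        G = np.stack([subgrad(i, X[i], rng) for i in range(n)])
        X = W @ (X - lr * G)       # local step, then gossip averaging
    return X.mean(axis=0)          # consensus estimate

# Toy setup (illustrative): 4 agents on a ring, each holding a noisy
# nonsmooth objective f_i(x) = ||x - b_i||_1 with b_i drawn at random.
n, d = 4, 3
rng = np.random.default_rng(0)
B = rng.normal(size=(n, d))
def subgrad(i, x, rng):
    return np.sign(x - B[i]) + 0.01 * rng.normal(size=x.shape)

# Ring-topology mixing matrix (doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

print(dsgd(np.zeros(d), subgrad, W, rng=rng))
```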

Adam-family Methods with Decoupled Weight Decay in Deep Learning

no code implementations • 13 Oct 2023 • Kuangyu Ding, Nachuan Xiao, Kim-Chuan Toh

As a practical application of our proposed framework, we propose a novel Adam-family method named Adam with Decoupled Weight Decay (AdamD), and establish its convergence properties under mild conditions.
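The excerpt does not spell out the AdamD update itself. The sketch below shows only the generic "decoupled weight decay" idea (as popularized by AdamW), where the decay is applied directly to the parameters rather than folded into the gradient before the moment estimates; the hyperparameter values and the toy objective are assumptions, and the paper's AdamD may differ in its exact form.

```python
import numpy as np

def adam_decoupled_wd(x0, grad, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                      weight_decay=1e-2, steps=1000):
    """Adam-style update with *decoupled* weight decay: the decay term
    lr * weight_decay * x is subtracted from the parameters directly,
    instead of being added to the gradient before the moment updates."""
    b1, b2 = betas
    x = x0.astype(float).copy()
    m = np.zeros_like(x)            # first-moment estimate
    v = np.zeros_like(x)            # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)   # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)   # Adam step
        x -= lr * weight_decay * x                 # decoupled decay step
    return x

# Toy nonsmooth objective ||x - 1||_1, for illustration only.
print(adam_decoupled_wd(np.zeros(3), lambda x: np.sign(x - 1.0)))
```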

SGD-type Methods with Guaranteed Global Stability in Nonsmooth Nonconvex Optimization

no code implementations • 19 Jul 2023 • Nachuan Xiao, Xiaoyin Hu, Kim-Chuan Toh

We further illustrate that our scheme yields variants of SGD-type methods, which enjoy guaranteed convergence in training nonsmooth neural networks.

Adam-family Methods for Nonsmooth Optimization with Convergence Guarantees

no code implementations • 6 May 2023 • Nachuan Xiao, Xiaoyin Hu, Xin Liu, Kim-Chuan Toh

In this paper, we present a comprehensive study on the convergence properties of Adam-family methods for nonsmooth optimization, especially in the training of nonsmooth neural networks.
