no code implementations • 25 Oct 2021 • Theodoros Mamalis, Dusan Stipanovic, Petros Voulgaris
Theoretical results establish accelerated almost-sure convergence rates for Stochastic Gradient Descent in a nonconvex setting when an appropriate stochastic learning rate is used, compared to the corresponding deterministic-learning-rate scheme.
no code implementations • 20 Oct 2021 • Theodoros Mamalis, Dusan Stipanovic, Petros Voulgaris
In this work, multiplicative stochasticity is applied to the learning rate of stochastic optimization algorithms, giving rise to stochastic learning-rate schemes.
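Both entries study the same mechanism: multiplying the step size of SGD by a fresh random factor at every iteration. The sketch below is a minimal illustration of that idea on a toy nonconvex objective, not the papers' implementation; the uniform noise range, base learning rate, and objective function are all assumptions made for the example.

```python
# Minimal sketch of SGD with a multiplicative stochastic learning rate.
# Not the authors' code: the noise distribution, base rate, and toy
# objective below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Gradient of the nonconvex toy objective f(x) = x^4 - 3x^2 + x.
    return 4 * x**3 - 6 * x + 1

x = 2.0          # initial iterate
base_lr = 0.01   # deterministic base learning rate (assumed value)

for t in range(1000):
    # Multiplicative stochasticity: scale the base rate by a positive
    # random factor drawn fresh at every iteration.
    xi = rng.uniform(0.5, 1.5)   # assumed noise distribution
    x -= base_lr * xi * grad(x)

print(f"approximate stationary point: {x:.4f}")
```

Setting `xi = 1.0` recovers the deterministic-learning-rate baseline that the first entry's convergence rates are compared against.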