Surrogate losses for online learning of stepsizes in stochastic non-convex optimization

Files
zhuang19a.pdf (371.27 KB)
Published version
Date
2019-06-10
Authors
Zhuang, Zhenxun
Cutkosky, Ashok
Orabona, Francesco
OA Version
Published version
Citation
Zhenxun Zhuang, Ashok Cutkosky, Francesco Orabona. 2019. "Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization." International Conference on Machine Learning.
Abstract
Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses to cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function into an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates that are automatically adaptive to the level of noise.
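A minimal sketch of the idea in Python (not the paper's exact algorithm): SGD paired with an online learner over the stepsize. At each iteration it draws two independent stochastic gradients, takes an online gradient descent step on a smoothness-based quadratic surrogate loss in the stepsize, then applies the SGD update with the learned stepsize. The surrogate form, the smoothness constant M, the tuner's learning rate, and the choice of online gradient descent as the no-regret learner are all illustrative assumptions.

import numpy as np

def sgd_with_online_stepsizes(grad_oracle, x0, M=1.0, eta0=0.1,
                              tuner_lr=0.01, steps=1000):
    # grad_oracle(x) returns an independent stochastic gradient at x.
    x, eta = np.asarray(x0, dtype=float), eta0
    for _ in range(steps):
        g = grad_oracle(x)   # gradient used for the SGD update
        g2 = grad_oracle(x)  # independent gradient for the surrogate
        # Convex surrogate in eta (assumed form, motivated by M-smoothness):
        #   ell(eta) = -eta * <g, g2> + (M / 2) * eta**2 * ||g||**2
        # Online gradient descent step on ell'(eta), projected onto eta >= 0.
        surrogate_grad = -np.dot(g, g2) + M * eta * np.dot(g, g)
        eta = max(0.0, eta - tuner_lr * surrogate_grad)
        x = x - eta * g      # SGD update with the self-tuned stepsize
    return x

# Example: a noisy quadratic, f(x) = 0.5 * ||x||^2 with Gaussian gradient noise.
rng = np.random.default_rng(0)
x_final = sgd_with_online_stepsizes(lambda x: x + 0.1 * rng.standard_normal(x.shape),
                                    x0=np.ones(5))

Note that the surrogate is convex in the stepsize even though the objective is non-convex in x, which is what allows a no-regret online algorithm to tune the stepsize on the fly.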
License
Copyright 2019 by the author(s).