Surrogate losses for online learning of stepsizes in stochastic non-convex optimization

Date Issued
2019-06-10
Author(s)
Zhuang, Zhenxun
Cutkosky, Ashok
Orabona, Francesco
Permanent Link
https://hdl.handle.net/2144/40899
OA Version
Published version
Citation (published version)
Zhenxun Zhuang, Ashok Cutkosky, Francesco Orabona. 2019. "Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization." International Conference on Machine Learning.
Abstract
Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses that cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function as an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates automatically adaptive to the level of noise.
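As a rough illustration of the idea summarized in the abstract, the sketch below pairs plain SGD with a stepsize that is itself updated by projected online gradient descent on a quadratic surrogate loss motivated by the smoothness inequality. The specific surrogate form, the use of a second independent gradient sample, the toy objective, and all names and constants (`stochastic_grad`, `surrogate_grad`, `sgd_with_online_stepsize`, `L`, `lr_eta`, the clipping interval) are illustrative assumptions, not the paper's exact construction or guarantees.

```python
# Illustrative sketch only: SGD whose stepsize is tuned on the fly by an
# online learner. The (assumed) surrogate loss is
#   l_t(eta) = -eta * <g'_t, g_t> + (L/2) * eta^2 * ||g_t||^2,
# inspired by the smoothness bound
#   f(x - eta*g) - f(x) <= -eta*<grad f(x), g> + (L/2)*eta^2*||g||^2,
# with a second independent stochastic gradient g'_t standing in for grad f(x_t).
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x, noise=0.1):
    """Noisy gradient of a simple non-convex test function f(x) = sum(x^2 + sin(3x))."""
    return 2 * x + 3 * np.cos(3 * x) + noise * rng.standard_normal(x.shape)

def surrogate_grad(eta, g, g_prime, L):
    """Derivative in eta of the assumed convex (quadratic) surrogate l_t(eta)."""
    return -np.dot(g_prime, g) + L * eta * np.dot(g, g)

def sgd_with_online_stepsize(x0, L=10.0, T=500, eta_min=1e-6, eta_max=1.0, lr_eta=1e-3):
    """SGD where the stepsize eta is learned online instead of hand-picked."""
    x, eta = x0.copy(), eta_max / 2
    for _ in range(T):
        g = stochastic_grad(x)        # gradient used for the SGD step
        g_prime = stochastic_grad(x)  # independent sample used only in the surrogate
        x = x - eta * g               # SGD step with the current learned stepsize
        # Projected online gradient descent step on the surrogate loss in eta.
        eta = eta - lr_eta * surrogate_grad(eta, g, g_prime, L)
        eta = float(np.clip(eta, eta_min, eta_max))
    return x, eta

if __name__ == "__main__":
    x_final, eta_final = sgd_with_online_stepsize(np.full(5, 3.0))
    print("final iterate:", np.round(x_final, 3))
    print("learned stepsize:", eta_final)
```

Because the surrogate is convex (here quadratic) in the stepsize, any no-regret online algorithm could replace the fixed-learning-rate projected gradient step used above; that choice, like the clipping interval, is a placeholder for whatever online learner the paper actually analyzes.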
Rights
Copyright 2019 by the author(s).