2 new papers: Newton and adaptive gradient

We recently released two new papers, now available as preprints on arXiv. The first considers the global convergence of a regularized Newton method (there is a short summary in this Twitter thread), and the second proposes adaptive stepsizes for stochastic gradient descent.
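To give a flavor of the first topic, here is a minimal NumPy sketch of a regularized (damped) Newton iteration. The gradient-norm-based regularizer `sqrt(M * ||g||)` and the constant `M` (a stand-in for a Hessian-Lipschitz estimate) are illustrative choices for this sketch, not necessarily the exact scheme analyzed in the paper.

```python
import numpy as np

def regularized_newton(grad, hess, x0, M=6.0, iters=200):
    """Damped Newton iteration with a gradient-norm-based regularizer.

    Illustrative sketch: the regularizer sqrt(M * ||g||) is one common
    choice, with M playing the role of a Hessian-Lipschitz estimate.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        H = hess(x)
        reg = np.sqrt(M * np.linalg.norm(g))
        # Solve (H + reg * I) d = -g; the regularization keeps the
        # linear system well-conditioned even where H is singular.
        d = np.linalg.solve(H + reg * np.eye(len(x)), -g)
        x = x + d
    return x

# Toy convex problem whose Hessian is singular at the optimum:
# f(x) = sum(x_i^4) / 4, so grad(x) = x^3 and hess(x) = diag(3 x^2).
grad = lambda x: x**3
hess = lambda x: np.diag(3 * x**2)
x_star = regularized_newton(grad, hess, [1.0, -0.5])
print(np.linalg.norm(grad(x_star)))  # gradient norm shrinks toward zero
```

Note that plain Newton would have to invert a nearly singular Hessian close to the minimizer of this toy problem; the regularization term sidesteps that while vanishing as the gradient goes to zero.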

Konstantin Mishchenko
Postdoctoral Researcher

I study optimization and its applications in machine learning.