Posts

Paper on Regularized Newton accepted at SIAM Journal on Optimization (SIOPT)
My paper on Regularized Newton has been accepted for publication in the SIAM Journal on Optimization (SIOPT). The main result of this work is that one can globalize Newton’s method by using regularization proportional to the square root of the gradient norm. The resulting method achieves global acceleration over gradient descent and converges at the $O(1/k^2)$ rate of cubically regularized Newton.
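To give a rough idea of the update, here is a minimal Python sketch of one regularized Newton step, assuming a dense Hessian and an illustrative regularization constant `c` (my own placeholder, not the exact scheme or constants analyzed in the paper):

```python
import numpy as np

def regularized_newton_step(grad, hess, x, c=1.0):
    """One sketch step: Newton's method with regularization proportional
    to the square root of the gradient norm. The constant c is purely
    illustrative (an assumption, not a value from the paper)."""
    g = grad(x)
    H = hess(x)
    lam = c * np.sqrt(np.linalg.norm(g))      # regularization ~ sqrt(||grad||)
    return x - np.linalg.solve(H + lam * np.eye(len(x)), g)
```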
Presenting at 2022 Workshop on FL and Analytics organized by Google
I’m taking part in the 2022 Workshop on Federated Learning and Analytics on 9 and 10 November. I am giving a talk about our work on Asynchronous SGD in the mini-workshop on Federated Systems at Scale on 9 November.
I'm giving a talk at Institut Henri Poincaré
I’m giving a talk at Séminaire Parisien d’Optimisation on 10 October. I will be presenting my work on second-order optimization, including the Super-Universal Newton paper.
Visiting Sebastian Stich in Germany
Between 22 June and 28 June, I visited Sebastian Stich, who works at the CISPA Helmholtz Center for Information Security near Saarbrücken in Germany. We discussed new ways to solve nonconvex problems, and we already have some interesting results. Sebastian is also looking for PhD students, so if you or someone you know is looking for a good supervisor working on optimization, Sebastian should be the first person to contact!
New paper: Asynchronous SGD with arbitrary delays
My first ever optimization project was an ICML paper on an asynchronous gradient method. At the time, I was quite confused by the fact that, no matter what I did, asynchronous gradient descent still converged. Five years later, I can finally give an answer: asynchronous SGD simply doesn’t care about the delays, which we prove in our new paper, https://arxiv.org/abs/2206.07638. For a short summary, you can read my Twitter thread about the paper or check my slides.
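For intuition about what "delays" means here, below is a toy Python simulation of Asynchronous SGD in which each update applies a gradient computed at a stale iterate. This is only a sketch under my own simplified assumptions (a fixed list of integer staleness values), not the scheduling model or the analysis from the paper:

```python
import numpy as np

def async_sgd_simulation(grad, x0, lr, delays, num_steps):
    """Toy Asynchronous SGD: the k-th update uses a gradient evaluated at
    the stale iterate x_{k - d_k}. The 'delays' list is a hypothetical
    staleness pattern used only for illustration."""
    iterates = [np.asarray(x0, dtype=float)]
    x = iterates[0].copy()
    for k in range(num_steps):
        d = min(delays[k % len(delays)], k)   # gradient is d steps stale
        x = x - lr * grad(iterates[k - d])
        iterates.append(x.copy())
    return x

# Example: minimize f(x) = ||x||^2 / 2 with stale gradients.
x_final = async_sgd_simulation(grad=lambda x: x, x0=[1.0, -2.0],
                               lr=0.1, delays=[0, 3, 7], num_steps=200)
```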