Posts

2 papers accepted to ICML

Two of my papers got accepted for presentation at ICML:

The first of these papers was a first-time submission, while the second was a resubmission. Earlier, we had opted to release the NeurIPS 2021 reviews of the Prox RR paper online, so the ICML reviewers could see (if they searched) that our work had previously been rejected. Nevertheless, it was recommended for acceptance.
Although I’m happy about my work, I feel there is still a lot of change required to fix the reviewing process. One thing that I’m personally waiting for is for every conference to use OpenReview instead of CMT. OpenReview gives the opportunity to write individual responses to the reviewers and supports LaTeX in the editor, both of which are great features.
If your paper did not get accepted, don’t take it as strong evidence that your work is not appreciated; this often happens to high-quality work. A good example is the recent revelation by Mark Schmidt on Twitter that their famous SAG paper was rejected from ICML 2012.

New paper: ProxSkip, a method for federated learning
Our new paper is now available on arXiv: abstract, pdf. We present a new proximal-gradient method capable of skipping the computation of the proximity operator, which we designed with applications in federated learning in mind. Specifically, when the skipped operator is the averaging of local (i.e., stored on devices) iterates, skipping the prox corresponds to skipping communication, with provable benefits. In fact, we show that one can accelerate the convergence in terms of communication rounds, similarly to Nesterov’s acceleration but without using any momentum. Related methods, such as Scaffold, have only been proven to perform comparably to gradient descent, not better. At the same time, our method, when specialized to federated learning, is algorithmically very similar to Scaffold, so we call it Scaffnew (any guesses why? :) ).
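For intuition, here is a rough Python sketch of the kind of prox-skipping iteration described above. It is only my reading of the idea, not the exact algorithm from the paper: the variable names, the step size `gamma`, the skipping probability `p`, and the control-variate update are my own notation and assumptions, so please check the paper for the precise statement and parameter choices.

```python
import numpy as np

def prox_skip_sketch(grad_f, prox_g, x0, gamma, p, n_iters, rng=None):
    """Sketch of a prox-skipping proximal-gradient loop (illustrative only).

    grad_f : callable returning the gradient of the smooth part f
    prox_g : callable (v, step) -> proximity operator of the nonsmooth part g
    gamma  : step size
    p      : probability of actually evaluating the prox in a given iteration
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    h = np.zeros_like(x)  # control variate that compensates for skipped prox steps
    for _ in range(n_iters):
        x_hat = x - gamma * (grad_f(x) - h)        # gradient step corrected by h
        if rng.random() < p:                       # with probability p, pay for the prox
            x = prox_g(x_hat - (gamma / p) * h, gamma / p)
        else:                                      # otherwise skip it entirely
            x = x_hat
        h = h + (p / gamma) * (x - x_hat)          # refresh the control variate
    return x
```

In the federated setting described above, the role of `prox_g` would be played by averaging the local models across devices, so every skipped prox evaluation is a skipped communication round.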
Also, check out this Twitter thread if you want to read an informal description.
Website update and some news from 2021
Today I updated my website after a year of silence. A few things have happened during this period. First and foremost, I defended my PhD thesis and moved to Paris for my postdoc! I have also written a couple more papers (our IntSGD paper got accepted as a spotlight at ICLR 2022), gave a few online talks (check out my talk on Newton’s method!), and received the “Rising Star in Data Science” award. More exciting papers are going to appear soon, and I will try to update my website more actively to keep those who are interested posted.
41 Papers reviewed in 2020 and some thoughts on reviewing

Reflecting on 2020, I realized I spent a lot of time reviewing. I reviewed 34 conference papers, 3 journal papers and 4 workshop papers. To my surprise, Lihua Lei reviewed 48 papers, 20 of which were for journals and probably took extra time due to revisions.

Reviewing is an important part of doing a PhD, and it actually helps when writing papers. However, it stops helping once reviewing takes more time than writing. Of course, one solution that seems to be a common choice is to spend less time on each reviewed paper, but this is what gave rise to the popular joke about the notorious Reviewer #2. I nevertheless plan to keep spending as much time as I need on every paper. My only hope is that the overall system at ML conferences will improve in the near future, lowering the load on the people involved, for example by adopting the reviewing system of ICLR.

My presentation on Random Reshuffling at the INFORMS Annual Meeting 2020
Today I’m presenting at the INFORMS Annual Meeting in the session “Painless Large-Scale Optimization with (Almost) Hessian-Free Acceleration”, which can be accessed here if you are registered for the conference. I am covering our recent work on Random Reshuffling, which was accepted at NeurIPS 2020 as a conference paper. The session is organized by Junzi Zhang and features three more speakers: Brendan O’Donoghue, Wentao Tang, and Juyong Zhang.