Two of my papers got accepted for presentation at ICML:
The first of these two papers was a first-time submission and the second was a resubmission. Earlier, we had opted to release the NeurIPS 2021 reviews of the Prox RR paper online, so the ICML reviewers could see (if they searched) that our work had previously been rejected. Nevertheless, it was recommended for acceptance.
Although I’m happy about my own papers, I feel there is still a lot of change required to fix the reviewing process. One thing I’m personally waiting for is for every conference to use OpenReview instead of CMT. OpenReview gives authors the opportunity to write individual responses to the reviewers and supports LaTeX in the editor, both of which are amazing features.
If your paper did not get accepted, don’t take it as strong evidence that your work is not appreciated; rejection often happens to high-quality work. A good example of this is the recent revelation by Mark Schmidt on Twitter that his famous SAG paper was rejected from ICML 2012.
Reflecting on 2020, I realized I spent a lot of time reviewing: 34 conference papers, 3 journal papers, and 4 workshop papers. To my surprise, Lihua Lei reviewed 48 papers, 20 of which were for journals and probably took extra time due to revisions.
Reviewing is an important part of doing a PhD, and it actually helps when writing papers. However, it stops helping once reviewing takes more time than writing. One solution, which seems to be a common choice, is to spend less time on each reviewed paper, but this habit has given rise to the popular joke about the notorious Reviewer #2. I plan, nevertheless, to keep spending as much time as each paper needs. My only hope is that the overall system at ML conferences will improve in the near future, lowering the load on the people involved, for example by adopting the reviewing system of ICLR.