09 January 2006

The Mad Paper Rush

The HLT/NAACL deadline passed last month and the ACL deadline is next month. The cluster here seems more swamped than usual: people are rushing to get experiments done and papers written. It seems that most research gets done in the two months before the due date, and papers are rarely, if ever, submitted early. (In fact, I saw a statistic from ICML that most early-submitted papers were ultimately rejected!) As someone who is trying to organize a ski trip to Mammoth two weeks before the ACL deadline, and having difficulty getting people to come, I wonder why there's always this last-minute rush. The ACL deadline was published at least four months ago, and even if it hadn't been, it's actually later this year than usual.

The only explanation that makes sense to me is that the rush is to run a handful of well-controlled experiments on a system that is being continuously tweaked. The longer you can delay running these experiments, the more you can tweak. Importantly, assuming you're a good researcher, the experiments are not there to convince yourself that your approach is viable; they are there to convince your potential reviewers that it is.

Based on this, there seem to be two ways to cut down on the rush (assuming people don't like it). The first is to reduce the amount of tweaking; the second is to reduce the number of well-controlled experiments run. Unfortunately, both will probably lower the chances of your paper being accepted (consistent with the ICML statistic cited above). But the status quo is bad too. Tweaking will (hopefully) improve your scores, but it is rarely mentioned in the paper, leading to irreproducible results. And running too many experiments can cloud the point of your paper and significantly cut into the time you have to work on real things.

Despite this, people continue to tweak and to run too many experiments (myself included). This seems to be because the cost of failure is too high: if your paper is rejected, you basically have to wait another year to resubmit, so you want to cover all your bases. Two solutions come to mind. First, we could space our conference deadlines more evenly through the year, so that a rejection means waiting only six months rather than a full year. Second, we could try to ensure that reviewers understand that with an eight-page limit one cannot run every possible experiment; they can suggest alternative contrastive experiments, but if the experiments presented back up the main point(s) of the paper, that should be sufficient. I don't think the "review/response" approach will fix this, because running a new experiment is not what author responses are for; it merely delays the problem rather than fixing it (though I do advocate having a response phase).
