13 September 2010

AIStats 2011 Call for Papers

The full call is out, along with some changes to the reviewing process. The submission deadline is Nov 1, and the conference is April 11-13 in Fort Lauderdale, Florida. Promises to be warm :).

The changes to the reviewing process are interesting. Basically, the main change is that the author response is replaced by a journal-esque "revise and resubmit." That is, you get 2 reviews, edit your paper, submit a new version, and get a 3rd review. The hope is that this will reduce author frustration from the low bandwidth of author response. Like with a journal, you'll also submit a "diff" saying what you've changed. I can see this going really well: the third reviewer will presumably see a (much) better paper than the first two did. The disadvantage, which irked me at ICML last year, is that it often seemed like the third reviewer made the deciding call, and I would want to make sure that the first two reviewers also see the updated version. I can also see it going poorly: authors invest even more time in "responding" and no one listens. That will just mean increased frustration :).

The other change is that there'll be more awards. I'm very much in favor of this; I spent two years on the NAACL exec trying to get NAACL to do the same thing, but always got voted down :). Oh well. The reason I think it's a good idea is two-fold. First, I think we're bad at selecting single best papers: a committee decision can often lead to selecting the least offensive papers rather than the ones that really push the boundary. I also think there are lots of ways for papers to be great: they can introduce awesome new algorithms, have new theory, have a great application, introduce a cool new problem, utilize a new linguistic insight, etc., etc., etc. Second, best papers are most useful at promotion time (hiring and tenure), where you're being compared with people from other fields. Why should our field put our people at a disadvantage by not awarding great work that they can list on their CVs?

Anyway, it'll be an interesting experiment, and I encourage folks to submit!

6 comments:

  1. 1. This increase in reviewing load for AAAI was the straw that broke this reviewing camel's back. The back and forth after author responses was endless.

    2. Call me a curmudgeon, but all of this newfangled "best speech paper by a student not mentioning HMMs" kind of award seems like nothing more than grade inflation.

    Do we really think hiring, promotion, and tenure committees are that naive? If they're the bean counters we imagine, won't "best paper" rate just get factored in with acceptance rate? Do we really want to focus them on this kind of thing?

    We're already trusting these same committees to let us publish mostly in conferences while (almost) everyone else seems to publish mostly in journals (e.g., in bio, if it's not in MEDLINE, it doesn't count).

  2. @Bob:

    1. Interesting, I didn't know this :).

    2. Well, obviously there's a limit. I guess I should have said it in the original, but I think there are three other reasons why there should be best papers at all: 1) to make the authors feel good, 2) to draw attention to these papers for people in our field who might not otherwise follow them, 3) for people in other fields to see what great stuff is going on in ours (e.g., I often read the best papers at other conferences, just because they're best papers). As long as there are only a few (say 2-5), I don't think having more than 2 hurts us in any way.

    And maybe they're naive, maybe they're not... I've never been on that side of the curtain. We all know that all of these statistics are totally meaningless (don't get me started on the h-index), but people like to have objective criteria, and it's nice to be able to point to another reason why you're awesome :).

  3. > The other change is that there'll be more awards.

    Here's a tangential issue that I've occasionally wondered about: It seems to me that Best Paper awards are not predictive of citation counts. This raises two questions for me: (1) Is it true that awards are not predictive of citations? (2) If it's true, does it matter? Perhaps it implies that awards are not indicative of future impact and do not influence future impact; perhaps it implies that awards are not relevant to scientific progress; or perhaps not.

  4. One way to increase the recognition of more papers without diluting the best paper award is to move the conferences to a single session plus larger poster sessions, a la NIPS and CVPR. That way people can use the "oral presentation" distinction as an indication of a notable publication. My understanding is that in the vision community an oral presentation is generally understood to be a big deal amongst faculty. I also prefer this model since it allows me to see more papers and not session-jump.

    +1 to Bob's comment. I find that the increasing number of conferences with multiple rounds of reviewing is causing me to reject more requests for reviews and -- even worse -- generally write poorer reviews. Allowing authors to clarify misunderstandings in a review is one thing (I benefited from it this year at NAACL), but I don't see anything wrong with rejecting a paper based on its current form and expecting the authors to re-submit at the next conference.

    I can see this model encouraging authors to submit unfinished papers for the original deadline while using the time between submission and the first reviewing stage to polish the paper. E.g., I have some cool idea (to me at least), I write it up, run some small experiments, and submit. I then use the next 3-4 weeks to run the experiments that would normally be required for the paper to be acceptable. The reviewers come back with "too prelim". If my results are good, I revise and re-submit; otherwise I do nothing and have wasted the reviewers' time, since I did not take the effort before submitting to verify the results.

  5. @Hal: Thanks for the clarification on quantity. Back when I went through the tenure process (1995), the most important factor was the 20-odd references they collected. I'm told things are more quantitative now, though I doubt that makes them any less subjective.

    Most of my citations were for my two books, which are hard to factor in.

    @Peter: Check out Gene Golovchinsky's post on his FXPal blog, titled Citing Best Papers, where he does some evals:

    http://palblog.fxpal.com/?p=4648

    @Ryan: Now that you point it out, it's obvious people will game the system in exactly the way you describe.

  6. @Bob

    > Check out Gene Golovchinsky's post on
    > his FXPal blog, titled Citing Best
    > Papers, where he does some evals:
    >
    > http://palblog.fxpal.com/?p=4648

    This tends to support my view. I believe there is no demonstrated value to Best Paper awards. We should stop giving them out until somebody can show they serve a useful function. Randomly boosting one researcher's self-confidence and career opportunities at the cost of the researcher's peers is not a useful function.
