25 January 2007

Error Analysis

I was recently asked whether I thought it would be a good idea for our conferences to explicitly require an error analysis to be performed and reported in papers. While this is perhaps a bit extreme (more on this later), there are at least two reasons why it would be desirable.

  1. When multiple techniques exist for solving the same problem, and they get reasonably close scores, is this because they are making the same sort of errors or different sorts? (A quick way to check this mechanically is sketched just after this list.)
  2. If someone were to build on your paper and try to improve it, where should they look?
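
As an aside on point 1: before digging into individual outputs, you can at least tabulate whether two systems fail on the same examples. Below is a minimal sketch, assuming you have the gold labels and each system's per-example predictions as parallel lists; the function name and the toy tags are purely illustrative, not taken from any particular system.

    # Hypothetical sketch: do systems A and B fail on the same examples,
    # or on different ones? All labels and names here are made up.
    def error_overlap(gold, preds_a, preds_b):
        """Tabulate where systems A and B err, jointly and separately."""
        counts = {"both wrong": 0, "only A wrong": 0, "only B wrong": 0, "both right": 0}
        for g, a, b in zip(gold, preds_a, preds_b):
            a_wrong, b_wrong = (a != g), (b != g)
            if a_wrong and b_wrong:
                counts["both wrong"] += 1
            elif a_wrong:
                counts["only A wrong"] += 1
            elif b_wrong:
                counts["only B wrong"] += 1
            else:
                counts["both right"] += 1
        return counts

    # Toy usage with invented tags:
    gold    = ["PER", "LOC", "ORG", "LOC", "PER"]
    preds_a = ["PER", "ORG", "ORG", "LOC", "LOC"]
    preds_b = ["PER", "ORG", "PER", "LOC", "PER"]
    print(error_overlap(gold, preds_a, preds_b))
    # -> {'both wrong': 1, 'only A wrong': 1, 'only B wrong': 1, 'both right': 2}

If "both wrong" dominates, the two systems are probably stumbling on the same phenomena and comparing scores alone tells you little; if the errors are largely disjoint, that is itself worth reporting.
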
There's an additional aspect that comes up, especially once you're in a sort of supervisory role. It's often hard to get students to actually look at outputs, and forcing this as part of the game early on is a good idea. I was the same as a student (and continue to be the same now) -- only two or three out of a dozen or so papers of mine contain an error analysis.

This situation reminds me a bit of an excellent talk I saw a few years ago (at ACL or EMNLP in Barcelona, I think) by Mitch Marcus talking about some parsing stuff. I don't really remember much of his talk, except that he kept flashing a single slide that read "Look at the data, stupid." His argument was essentially that we're not going to be able to model what we want to model unless we really understand what's going on in the data representing the phenomena we're trying to study.

An exercise that's also good from this perspective is to do some data annotation yourself. This is perhaps even more painful than doing an error analysis, but it really drives home the difficulties in the task.

Getting back to the point at hand, I don't think it's feasible or even necessarily advisable to require all papers to include an error analysis. But I also think that more papers should contain error analyses than actually do (including some of my own). In the universal struggle to fit papers within an 8 page limit, things have to get cut. It seems that the error analysis is the first thing to get cut (in that it gets cut before the paper is even written -- typically by not being performed).

But, at least for me, when I read a paper, I want to know after the fact what I have learned. Occasionally it's a new learning technique. Or occasionally it's some useful new features. Or sometimes it's a new problem. But if you were to take the most popular problems out there that I don't work on (MT, parsing, language modeling, ASR, etc.), I really have no idea what the remaining open problems are. I can guess (I think names in MT are hard, as is ordering; I think probably attachment and conjunctions in parsing; I have little idea in LM and ASR), but I'm sure that people who work on these problems (and I really mean work: like, you care about getting better systems, not just getting papers) know. So it would be great to see it in papers.

2 comments:

Anonymous said...

I agree that explicitly requiring an error analysis does not seem like a good solution. For one thing, some papers are theoretical in nature. For example, what kind of error analysis could one require of the Nederhof and Satta '05 NAACL paper proving that various estimators for PCFGs are consistent?

The more problematic issue (as you noted) is the page limit of conferences. I have seen many reviewers complain that a paper was too "busy" when a new idea coupled with experiments and error analysis was presented. I have even heard of one case where a paper was rejected, with one reviewer suggesting that it be split into two: one for the idea and one for the error analysis!!

Another related problem is the perception that papers containing only a thorough error analysis will not be accepted at conferences. Such papers pop up once in a while, but they are rare and are often given a poster and not an oral. Reviewers often complain that these are "interesting, but do not contain any novel ideas", or something of this nature. As a result, many good analysis papers never see the light of day, outside of a thesis or some tech report. A good solution might be to structure our conferences more like NIPS: have a limited number of orals for just the best new papers containing groundbreaking ideas and analysis, then have a large poster session where most of the work is presented. This would allow us to increase acceptance rates and hopefully pick up some good analysis papers. Also, if we went to online-only proceedings, we would not have to worry about increasing the acceptance rates. (Note that this would also correct some of the problems Ken Church raises in a recent issue of CL.)

One school of thought is that this kind of work falls into the domain of journals (or maybe highly specific workshops). But the problem here is that the publishing cycle for journal papers is ridiculously slow -- usually a year or more. Also, traditional journals have limited space. For instance, CL published only 14 articles last year. Maybe we need something like JMLR, which published 100+ papers last year and has shorter publication cycles. Such a resource would allow for quicker access to more papers and provide another forum for all the error analysis papers out there.

I am amazed that we have not already done a JMLR for NLP. Why are there still non-open-access journals in our field? But I guess this could be the topic of an entirely different blog post ... Hal? :-)

hal said...

Wow Ryan, you hit on two topics I've been meaning to write about for a while but haven't gotten around to: the Ken Church article and a "JCLR". Let's revisit those shortly.

WRT the error analysis bit, you're absolutely right that error analysis has little place in theory papers. I think the proposal that I was queried about was more aimed at "well known and formulated task, well known existing solutions; paper presents a new solution with epsilon error reduction." For these, I would (typically) love to see error analysis, otherwise I don't really know what I've learned by reading the paper (other than that, if I switch from system X to system Y for solving this problem, then on average I'll do slightly better). I'm certainly also not advocating error-analysis-only papers, except in some particularly rare circumstances ... those would be quite boring.