I just spent a few days visiting Yee Whye and NUS (photos to prove it!). YW and I talked about many things, but one that stood out as a ripper was an attempt to answer the question: as a machine learning person (i.e., YW), what problems are there in NLP that are likely to be amenable to interesting machine learning techniques (e.g., Bayesian methods, non-parametric methods, etc.)? This was a question we tried to answer at the workshop last year, but I don't think we reached a conclusion.
At first, we talked about some potential areas, mostly focusing on problems for which one really needs to perform some sort of inference at a global scale, rather than just locally. I think this answer is something of a pearler, but it's not the one I want to dwell on.
Another potential partial answer arose, which I think bears consideration: it will not be on any problem that is popular right now. Why? Well, what we as NLPers usually do these days is use simple (often heuristic) techniques to solve problems. And we're doing a sick job of it, at least for the well-studied tasks (translation, factoid QA, ad hoc search, parsing, tagging, etc.). The hunch is that one of the reasons such problems are so popular these days is that such techniques work so bloody well. Given this, you'd have to be a flamin' galah to try to apply something really fancy to solve one of these tasks.
This answer isn't incompatible with the original answer (globalness). After all, most current techniques use only local information. There is a push toward "joint inference" problems and toward reducing our use of pipelining, but this tends to happen at a fairly weak level.
This is not to say that Bayesian techniques (or fancy machine learning more generally) are not applicable to problems with only local information, but for such problems there seems to be little need to integrate large amounts of global uncertainty. Of course, you may disagree, and if you do, no wuckers.
p.s., I'm in Australia for ACL, so I'm trying to practice my Aussie English.