I'm giving a tutorial on Bayesian methods for NLP at HLT-NAACL 2006. I gave a similar tutorial about a year ago here at ISI. This gave me a pretty good idea of what I want to keep in and what I want to cut out. The topics I intend to cover are, roughly:
- Bayesian paradigm: priors, posteriors, normalization, etc.
- Graphical models, expectation maximization, and non-Bayesian inference techniques
- Common statistical distributions: uniform, binomial/multinomial, beta/Dirichlet (see the worked conjugacy example after this list)
- Simple inference: integration, summing, Monte Carlo (a small sketch follows below)
- Advanced inference: MCMC, Laplace approximation, variational methods (an MCMC sketch also follows below)
- Survey of popular models: LDA, Topics and Syntax, Words and Pictures
- Pointers to literature
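To make the priors/posteriors/normalization material concrete, here is roughly the worked example I have in mind for the beta/binomial case (the Dirichlet/multinomial case generalizes directly). Because the beta prior is conjugate to the binomial likelihood, the posterior comes out in closed form:

$$
p(\theta \mid k, n) \;\propto\; \underbrace{\theta^{k}(1-\theta)^{n-k}}_{\text{binomial likelihood}} \; \underbrace{\theta^{\alpha-1}(1-\theta)^{\beta-1}}_{\mathrm{Beta}(\alpha,\beta)\ \text{prior}} \;=\; \theta^{k+\alpha-1}(1-\theta)^{n-k+\beta-1},
$$

i.e., the posterior is $\mathrm{Beta}(\alpha+k,\ \beta+n-k)$, and the normalization constant is just the beta function $B(\alpha+k,\ \beta+n-k)$; no integral ever has to be done by hand.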
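Similarly, for the simple-inference part, the sketch below (Python, with made-up numbers; not code from the tutorial itself) shows the Monte Carlo idea: estimate a posterior expectation by drawing samples and averaging, rather than computing the integral exactly. It uses the beta posterior from the conjugacy example above.

```python
import random

# Beta posterior from the conjugacy example above:
# n = 10 coin flips, k = 7 heads, uniform Beta(1, 1) prior
# gives a Beta(8, 4) posterior.
alpha, beta = 1 + 7, 1 + 3

# Monte Carlo estimate of the posterior mean E[theta]:
# draw samples from the posterior and average them.
num_samples = 100_000
samples = [random.betavariate(alpha, beta) for _ in range(num_samples)]
estimate = sum(samples) / num_samples

# The exact posterior mean is alpha / (alpha + beta) = 8/12.
print(f"Monte Carlo: {estimate:.4f}   exact: {alpha / (alpha + beta):.4f}")
```

With enough samples the estimate converges to the exact answer; the same trick works for expectations that have no closed form at all, which is the point of covering it.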
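And for the advanced-inference part, the flavor of what I plan to walk through for MCMC is a random-walk Metropolis sampler on the same toy target (again just a sketch, not the tutorial's actual code). Note that it only ever evaluates the *unnormalized* posterior, which is exactly how MCMC sidesteps the normalization problem:

```python
import math
import random

# Unnormalized log-posterior for the same coin-flip example:
# Beta(8, 4) up to a constant -- MCMC never needs the normalizer.
def log_post(theta):
    if not 0.0 < theta < 1.0:
        return -math.inf
    return 7 * math.log(theta) + 3 * math.log(1.0 - theta)

# Random-walk Metropolis: propose theta' = theta + Gaussian noise,
# accept with probability min(1, p(theta') / p(theta)).
theta, samples = 0.5, []
for step in range(50_000):
    proposal = theta + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal
    if step >= 5_000:  # discard burn-in samples
        samples.append(theta)

print(f"MCMC estimate of posterior mean: {sum(samples) / len(samples):.4f}")
```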
Does anyone have anything they'd really like to hear about that's not on the list? Or anything on the list that they don't care about? Keep in mind a few constraints: three hours (minus coffee time), generally accessible, focused on NLP applications, and limited to things I know something about. (For instance, I covered expectation propagation in last year's tutorial, but decided to cut it this time to leave more room for other topics.) Note that I am also preparing a written tutorial that covers roughly the same material.