NIPS is over as of last night. Overall I thought the program was strong (though I think someone, somewhere, is trying to convince me I need to do deep learning -- or at least that was the topic du jour... or I guess de l'an? this time). I wasn't as thrilled with the venue (details at the end), but that's life. Here were some of the highlights for me, of course excluding our own papers :P (see the full paper list here)... note that there will eventually be videos for everything!
- User-Friendly Tools for Studying Random Matrices
Joel A Tropp
This tutorial was awesome. Joel has apparently given it several times, so it's really well fine-tuned. The basic result is that if you love your Chernoff bounds and Bernstein inequalities for (sums of) scalars, you can get almost exactly the same results for (sums of) matrices. Really great talk. If I ever end up summing random matrices, I'm sure I'll use this stuff! (I've sketched the flavor of the matrix Bernstein bound after this list.)
- Emergence of Object-Selective Features in Unsupervised Feature Learning
Adam Coates, Andrej Karpathy, Andrew Y. Ng
They show that using only unlabeled data that is very heterogeneous, some simple approaches can pull out faces. I imagine that some of what is going on is that faces are fairly consistent in appearance whereas "other stuff" often is not. (Though I'm sure my face-recognition colleagues would argue with my "fairly consistent" claim.)
- Scalable nonconvex inexact proximal splitting
Suvrit Sra
I just have to give props to anyone who studies nonconvex optimization. I need to read this -- I only had a glance at the poster -- but I definitely think it's worth a look. (A generic proximal-gradient step, for flavor, is sketched after this list.)
- A Bayesian Approach for Policy Learning from Trajectory Preference Queries
Aaron Wilson, Alan Fern, Prasad Tadepalli
The problem solved here is imitation learning where your interaction with an expert is showing them two trajectories (that begin at the same state) and asking them which is better. Something I've been thinking about recently -- very happy to see it work! (A toy preference-likelihood sketch follows the list.)
- FastEx: Hash Clustering with Exponential Families
Amr Ahmed, Sujith Ravi, Shravan M. Narayanamurthy, Alexander J. Smola
The idea here is to replace the dot product between the parameters and sufficient statistics of an exp fam model with an approximate dot product achieved using locality sensitive hashing. It takes a bit to figure out exactly how to do this. Cool idea and nice speedups. (There's a bare-bones hashed dot-product sketch after this list.)
- Identifiability and Unmixing of Latent Parse Trees
Daniel Hsu, Sham M. Kakade, Percy Liang
Short version: spectral learning for unsupervised parsing; the challenge is to get around the fact that different sentences have different structures, and "unmixing" is the method they propose to do this. Also some identifiability results.
- Tensor Decomposition for Fast Parsing with Latent-Variable PCFGs
Shay B. Cohen and Michael Collins
Another spectral learning paper, this time for doing exact latent variable learning for latent-variable PCFGs. Fast, and just slightly less good than EM.
- Multiple Choice Learning: Learning to Produce Multiple Structured Outputs
Abner Guzman-Rivera, Dhruv Batra, Pushmeet Kohli
Often we want our models to produce k-best outputs, but for some reason we only train them to produce one-best outputs and then just cross our fingers. This paper shows that you can train directly to produce a good set of outputs (not necessarily diverse: just that it should contain the truth) and do better. It's not convex, but the standard training is a good initializer. (The min-over-a-set loss is sketched after this list.)
- [EDIT Dec 9, 11:12p PST -- FORGOT ONE!]
Query Complexity of Derivative-Free Optimization
Kevin G. Jamieson, Robert D. Nowak, Benjamin Recht
This paper considers derivative-free optimization with two types of oracles. With one, you can compute f(x) for any x, with some noise (you're optimizing over x). With the other, you can only ask whether f(x) > f(y) for two points x and y (again with noise). It seems that the first is more powerful, but the result of this paper is that you get the same rates with the second! (Both oracle interfaces are sketched after this list.)
- I didn't see it, but Satoru Fujishige's talk Submodularity and Discrete Convexity in the Discrete Machine Learning workshop was supposedly wonderful. I can't wait for the video.
- Similarly, I heard that Bill Dolan's talk on Modeling Multilingual Grounded Language in the xLiTe workshop was very good.
- Ryan Adams's talk on Building Probabilistic Models Around Deterministic Optimization Procedures in the "Perturbations, Optimization and Statistics" workshop (yeah, I couldn't figure out what the heck that meant either) was also very good. The Perturb-and-MAP stuff and the Randomized Optimum models are high on my reading list, but I haven't gotten to them quite yet. (A tiny Gumbel-max example, in that spirit, follows the list.)
- As always, Ryan McDonald and Ivan Titov gave very good talks in xLiTe, on Advances in Cross-Lingual Syntactic Transfer and Inducing Cross-Lingual Semantic Representations of Words, Phrases, Sentences and Events, respectively.
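On the Tropp tutorial: not from the slides, just for flavor, here is my recollection of the matrix Bernstein inequality from Tropp's survey. If $X_1, \dots, X_n$ are independent, zero-mean, self-adjoint $d \times d$ random matrices with $\|X_k\| \le R$ almost surely, and $\sigma^2 = \big\| \sum_k \mathbb{E}[X_k^2] \big\|$, then
\[
  \Pr\!\left[ \lambda_{\max}\!\Big( \sum_k X_k \Big) \ge t \right] \;\le\; d \cdot \exp\!\left( \frac{-t^2/2}{\sigma^2 + R t / 3} \right).
\]
At $d = 1$ this is exactly the scalar Bernstein bound; the only price you pay for matrices is the dimensional factor $d$.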
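On Suvrit's paper: I haven't read it yet, so this is not his algorithm -- just a vanilla (convex) proximal-gradient step in Python, to show what a "proximal splitting" iteration looks like at all: split the objective into a smooth piece you take a gradient step on and a nonsmooth piece you handle via its prox operator.

    import numpy as np

    def soft_threshold(v, t):
        # prox operator of t * ||.||_1: shrink every coordinate toward zero
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def proximal_gradient_step(x, grad_f, step, lam):
        # one iteration for minimizing f(x) + lam * ||x||_1:
        # gradient step on the smooth part f, then prox step on the l1 part
        return soft_threshold(x - step * grad_f(x), step * lam)

    # toy usage on a lasso-style objective 0.5 * ||Ax - b||^2 + lam * ||x||_1
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
    grad_f = lambda x: A.T @ (A @ x - b)
    x = np.zeros(5)
    for _ in range(200):
        x = proximal_gradient_step(x, grad_f, step=0.01, lam=0.1)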
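On the trajectory-preference paper: their model is Bayesian over policies, which I'm not reproducing here; this is only the shape of a preference-query likelihood (a Bradley-Terry / logistic model over hypothetical linear trajectory utilities), to make the query type concrete. The feature representation and the linearity are my assumptions, not the paper's.

    import numpy as np

    def preference_prob(w, feats_a, feats_b):
        # probability the expert prefers trajectory A over trajectory B,
        # assuming (hypothetically) utility is linear in trajectory features
        return 1.0 / (1.0 + np.exp(-(w @ feats_a - w @ feats_b)))

    def log_likelihood(w, queries):
        # queries: list of (feats_a, feats_b, prefers_a) answers from the expert
        ll = 0.0
        for fa, fb, prefers_a in queries:
            p = preference_prob(w, fa, fb)
            ll += np.log(p if prefers_a else 1.0 - p)
        return ll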
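On FastEx: I'm not sure this is exactly their construction (they exploit a lot of structure to get the speedups), but the core trick of approximating a dot product with locality sensitive hashing can be illustrated with plain sign random projections (SimHash): the fraction of agreeing sign bits estimates the angle between two vectors, which gives back an approximate inner product.

    import numpy as np

    def simhash_signs(x, planes):
        # bit signature: on which side of each random hyperplane does x fall?
        return (planes @ x) >= 0

    def approx_dot(u, v, planes):
        # fraction of agreeing bits -> estimated angle -> approximate dot product
        agree = np.mean(simhash_signs(u, planes) == simhash_signs(v, planes))
        angle = np.pi * (1.0 - agree)
        return np.linalg.norm(u) * np.linalg.norm(v) * np.cos(angle)

    rng = np.random.default_rng(0)
    d, n_bits = 100, 2048
    planes = rng.normal(size=(n_bits, d))   # shared random hyperplanes
    u, v = rng.normal(size=d), rng.normal(size=d)
    print(u @ v, approx_dot(u, v, planes))  # exact vs. hashed estimate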
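On Multiple Choice Learning: the piece that clicked for me is the loss. Instead of charging a single prediction, you charge only the best member of the predicted set, so the set is rewarded for containing the truth rather than for every member being correct. A minimal sketch, with a made-up Hamming loss:

    import numpy as np

    def min_over_set_loss(predictions, truth, loss):
        # the set is "good" if at least one member is close to the truth,
        # so training only charges the best member of the set
        return min(loss(p, truth) for p in predictions)

    # toy usage with K=3 predictors and a Hamming-style loss on label sequences
    loss = lambda p, t: np.sum(np.asarray(p) != np.asarray(t))
    preds = [[0, 1, 1], [1, 1, 0], [0, 0, 0]]
    print(min_over_set_loss(preds, [1, 1, 0], loss))  # -> 0: the set contains the truth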
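On the derivative-free optimization paper: the two oracle types are easy to write down. The toy hill-climb below is not their algorithm (they prove rates for much smarter procedures); it's only there to show that you can make progress while never seeing a function value, just noisy comparisons.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.sum((x - 1.0) ** 2)    # hidden function being optimized

    def value_oracle(x, sigma=0.1):
        # oracle 1: noisy function evaluations f(x) + noise
        return f(x) + sigma * rng.normal()

    def comparison_oracle(x, y, sigma=0.1):
        # oracle 2: only a (noisy) verdict on whether x looks better than y
        return value_oracle(x, sigma) < value_oracle(y, sigma)

    # comparison-only random search: accept a perturbation if it "wins"
    x = np.zeros(3)
    for _ in range(500):
        cand = x + 0.1 * rng.normal(size=3)
        if comparison_oracle(cand, x):
            x = cand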
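On the Perturb-and-MAP line of work: the unstructured special case is the Gumbel-max trick, which I can at least vouch for: perturb each log-potential with independent Gumbel noise and take the argmax, and you get an exact sample from the corresponding softmax distribution. The structured versions replace the argmax with a MAP solver (and, as I understand it, the exactness story gets more subtle), but this little example is the intuition.

    import numpy as np

    def gumbel_max_sample(log_potentials, rng):
        # perturb each log-potential with Gumbel(0, 1) noise and take the argmax;
        # this is an exact sample from softmax(log_potentials)
        return int(np.argmax(log_potentials + rng.gumbel(size=log_potentials.shape)))

    rng = np.random.default_rng(0)
    theta = np.log(np.array([0.5, 0.3, 0.2]))
    samples = [gumbel_max_sample(theta, rng) for _ in range(10000)]
    print(np.bincount(samples, minlength=3) / 10000.0)  # roughly [0.5, 0.3, 0.2]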
Really my only gripe about NIPS this year was the venue. I normally wouldn't take the time to say this, but since we'll be enjoying this place for the next few years, I figured I'd state what I saw as the major problems, some of which are fixable. For those who didn't come, we're in Stateline, NV (on the border between CA and NV) in two casinos. Since we're in NV, there is a subtle note of old cigarette on the nose fairly constantly. There is also basically nowhere good to eat (especially if you have dietary restrictions) -- I think there are a half dozen places on yelp with 3.5 stars or greater. My favorite tweet during NIPS was from Jacob Eisenstein, who said: "stateline, nevada / there is nothing but starbucks / the saddest haiku". Those are the "unfixables" that make me think I'll think twice about going to NIPS next year, but of course I'll go.
The things that I think are fixable... there was nowhere to sit. Presumably this is because the casino wants you to sit only where they can take your money, but I had most of my long discussions either standing or sitting on the ground. More chairs in hallways would be much appreciated. There were almost no power outlets in the rooms, which could be solved by some power strips. The way the rooms divided for tutorials was really awkward: the speaker was clear on one side of the room and the screen was on the other (and too high to point to), so it was basically like watching a video of slides online without ever seeing the presenter. Not sure if that's fixable, but it seems plausible. And the walls between the workshop rooms were so thin that often I could hear another workshop's speaker better than I could hear the speaker in the workshop I was attending. And the internet in my hotel room was virtually unusably slow (though the NIPS-specific internet was great).