15 August 2017

Column squishing for multiclass updates

Score-based multiclass classifiers typically have the following form: x is a d-dimensional input vector (perhaps engineered features, perhaps learned features), A is a d*k matrix, where k is the number of classes, and the prediction is given by computing a score vector y = Ax. (This is a blog post, not an arxiv paper, so I'm going to be a bit fast and loose with dimension ordering.) The predicted class is then taken as argmax_i y_i.

(Advance thanks to my former Ph.D. student Abhishek Kumar for help with this!)

The question I wanted to answer, which I felt must have a known answer though I'd never seen it, is the following. Suppose (e.g., at training time), I know that the correct label is i, but the model is (perhaps) not predicting i. I'd like to change A as little as possible so that it predicts i, perhaps with an added margin of 1. If I measure "as little as possible" by l2 norm, and assume wlog that i=1, then I get:

  min_B ||B - A||^2   s.t.   (xB)_1 ≥ (xB)_i + 1   for all i ≥ 2

This problem arises most specifically in Crammer et al., 2006, "Online Passive-Aggressive Algorithms", though it appears elsewhere too.

I'll show below (including python code using just numpy) a very efficient solution to this problem. (If this appears elsewhere and I simply was unable to find it, please let me know.)

First, I'll make the following unproven assertion, though I'm pretty sure it'll go through (famous last words). The assertion is that any difference between A and B will be in the direction of x. In other words, the first row of A will likely move in the direction of x and the other rows of A will move away. Hand-wavy reason: because otherwise you increase the norm ||B-A|| without helping satisfy the constraints.

In particular, I'll assume that b_i = a_i + d_i x, where the d_i are scalars.

Given this, we can do a bit of algebra:

  ||B - A||^2 = Σ_i ||b_i - a_i||^2 = Σ_i ||a_i + d_i x - a_i||^2 = ||x||^2 Σ_i d_i^2

Since x is a constant, we really only care about minimizing the norm of the deltas.

We can similarly rewrite the constraints to just say:

  x·b_1 ≥ x·b_i + 1   for all i ≥ 2
iff    x·(a_1 + d_1 x) ≥ x·(a_i + d_i x) + 1   for all i ≥ 2
iff    x·a_1 + d_1 ||x||^2 ≥ x·a_i + d_i ||x||^2 + 1   for all i ≥ 2
iff    d_1 ≥ d_i + C_i   for all i ≥ 2

where C_i = [x·(a_i - a_1) + 1] / ||x||^2 is independent of d.

Now, we have a plausibly simpler optimization problem just over the d vector:

  min_d Σ_i d_i^2   s.t.   d_1 ≥ d_i + C_i   for all i ≥ 2

This was the place I got stuck. I felt like there would be some algorithm for solving this that involves sorting and projecting and whatever, but couldn't figure it out for a few days. I then asked current and former advisees, at which point Abhishek Kumar came to my rescue :). He pointed me to the paper "Factoring Non-negative Matrices with Linear Programs" by Bittorf et al., 2012. It's maybe not obvious from the title that this is all connected, but they solve a very similar problem in Algorithm 5. All of the following is due to Abhishek:

In particular, their Equation 11 has the form:

  min_x ||z - x||^2   s.t.   0 ≤ x_i ≤ x_1 for all i, and x_1 ≤ 1

My problem can be mapped to this by a change of variables: take their optimization variable to be d + D and their z to be D, where D = [0, C_2, C_3, ..., C_k]. We also need to remove the lower and upper bounds. This means that their Algorithm 5 can be used to solve my problem, but with all of the [0,1] projection steps removed.
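For completeness, here's roughly what that modified algorithm looks like in numpy. This is my reconstruction (with a made-up function name), not their exact pseudocode; in code, coordinate 0 plays the role of x_1:

import numpy as np

def project_onto_max_cone(z):
    """Project z onto {x : x_i <= x_0 for all i}, i.e., solve
    min_x ||z - x||^2 s.t. x_i <= x_0. A sketch of Bittorf et al.'s
    Algorithm 5 with the [0,1] clipping removed."""
    # mu is the running average of z_0 and every coordinate that will be
    # tied with x_0 at the optimum; add coordinates in decreasing order
    # until the next one already satisfies the constraint.
    mu, count = z[0], 1
    for j in np.argsort(-z[1:]) + 1:
        if z[j] <= mu:
            break
        mu = (mu * count + z[j]) / (count + 1)
        count += 1
    x = np.minimum(z, mu)
    x[0] = mu
    return x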



Putting this all together, we arrive at some python code in column_squishing.py for solving my multiclass problem. Here's an example of running it:


≫ A = np.random.randn(3,5)
≫ x = np.random.randn(5)
≫ A.dot(x)
array([ 0.90085352,  2.25573249,  0.25974194])

So currently label "1" is winning by a big margin. Let's make each label win by a margin of one, one at a time:

≫ multiclass_update(A, x, 0).dot(x)
array([ 2.078293  ,  1.078293  ,  0.25974194])
 
≫ multiclass_update(A, x, 1).dot(x) 
array([ 0.90085352,  2.25573249,  0.25974194])
 
≫ multiclass_update(A, x, 2).dot(x) 
array([ 0.80544265,  0.80544265,  1.80544265])
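For reference, here's a sketch of what a multiclass_update like the one used above might look like, built on project_onto_max_cone from before (and its numpy import). I'm assuming, as in the example, that A has one row per class so that scores are A.dot(x); the actual column_squishing.py may differ in its details:

def multiclass_update(A, x, i):
    """Return B minimizing ||B - A||^2 such that class i beats every other
    class by a margin of 1 on input x; A is (k, d), scores are A.dot(x)."""
    k = A.shape[0]
    y = A.dot(x)
    xx = x.dot(x)
    # C_j = [x.(a_j - a_i) + 1] / ||x||^2 for j != i, and C_i = 0
    C = (y - y[i] + 1.0) / xx
    C[i] = 0.0
    # put the target class in position 0, project, then map back to the deltas d_j
    perm = np.array([i] + [j for j in range(k) if j != i])
    e = project_onto_max_cone(C[perm])
    d = np.zeros(k)
    d[perm] = e - C[perm]
    # b_j = a_j + d_j * x
    return A + np.outer(d, x)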
 
 
Hopefully you find this helpful. If you end up using it, please make some sort of acknowledgement to this blog post and definitely please credit Abhishek.

12 April 2017

Humans can still extort more money from me than machines can

Like lots of folks, I wonder sometimes about AI and jobs. I'm neither a believer that there's a catastrophe coming up, nor am I a believer that everything will magically work out and we're not entering a world with new forms of inequities.

I did have an experience recently that made me think somewhat differently about what sorts of jobs are at risk.

I was visiting family in LA, and renting a car (because LA). I can't remember what company it was, but they didn't have "live people" at the booth. Instead they had a row of kiosks, each with a monitor, a camera, and an old school phone that---I kid you not---came complete with curly cord.

So how does this work? You go up to the device, and it auto-dials a real person. This person lives who knows where, though in my case he had a nice painted backdrop of some nature scene.

He asks about my reservation, looks it up, etc., and while looking it up, asks what I'm doing in LA. Oh, I'm visiting my mom. Blah blah blah.

I then have to hold my drivers license up to the camera, we chat some more about how LA traffic is horrible. Blah blah blah.

Throughout this whole conversation, I'm thinking: this would be so much easier if this thing were automated. I don't mean automated like "chatbot", I mean automated like the kiosks at airports, where I just push a bunch of buttons and then get a ticket (or in this case, car).

And then, as you're all expecting, he asks me if I want to buy insurance.

No machine would ever under any circumstances be able to sell me insurance. It would show a screen, ask me to click if I wanted it or not, and I would immediately press the "No" button.

Now, I'm kind of a pushover, and so, especially since this guy was nice, had asked about my mom, shared complaints about traffic together, etc., when he asked me if I wanted insurance, it was much harder to say no. I did say no, though I'm pretty sure that me-five-years-ago might not have.

Whatever company this is, I'm sure they did a study. They looked at how much they'd save by having automated systems rather than some folks in presumably a part of the country/world with much lower cost of living than LA. And the main trade-off was almost certainly that a machine isn't going to be nearly as successful at upselling insurance or car model or whatever as a person. And clearly at the end of the day, they decided that automation wasn't worth it here.

(Insert all sorts of analogies to other jobs/areas of life.)

EDIT 11am Eastern 12 Apr 2017: It occurred to me after I hit "post" that it's worth highlighting a major implicit caveat: the fact that the alleged study showed that it was worth putting money into human interaction is probably largely due to whatever the car rental company knew about, for instance, the economic status of their average customer. Where this needle falls---for instance if few customers will choose to or can afford to be upsold---will have a significant impact on what the results of such a study will be, and will therefore vary across industries.


03 April 2017

Structured prediction is *not* RL

It's really strange to look back now over the past ten to fifteen years and see a very small pendulum that no one really cares about swing around. I've spent the last ten years trying to convince you that structured prediction is RL; now I'm going to tell you that was a lie :).

Short Personal History

Back in 2005, John Langford, Daniel Marcu and I had a workshop paper at NIPS on relating structured prediction to reinforcement learning. Basically the message of that paper is: you can cast structured prediction as RL, and then use off-the-shelf RL techniques like conservative policy iteration to solve it, and this works pretty well. This view arose, for me, mainly from Collins and Roark's incremental structured perceptron model, but I'm sure it dates back much further than that (almost certainly there's work from neural nets land in the 80s; I'd certainly appreciate pointers!). This led eventually to Searn, and then in the early 2010s, I went around a bunch of places (ACL, AAAI, ICML, etc.), espousing connections between structured prediction and (inverse) reinforcement learning (slides).

This kinda sorta caught on in a very small subcommunity.

One thing I should have realized looking back ten years is that you should not try to sell a hammer to hammer manufacturers, but rather to people with weird nails. I spent too much time and energy trying to convince the "traditional structured prediction" crowd (the CRF and M3N and SVMstruct folks) that the "RL" view of structured prediction was awesome. That was a losing strategy, unfortunately. (Although I learned a few nights ago at dinner that this might be changing now!)

But now everything has changed. To some degree, Ross, Gordon and Bagnell's DAgger is a successful, better successor to Searn, and for years I had gone around telling everyone DAgger is my favorite algorithm ever (it consistently outperforms Searn in their---and my---experiments, and is really really easy to implement and has stronger-ish theory). And then DAgger (or more precisely DAD) gets renamed/rebranded as "scheduled sampling" (though you should read Marc'Aurelio's comment, which is very on point), and now these ideas are everywhere, particularly in sequence-to-sequence neural transduction models.

Nowadays

In the past year or two, there's been a flurry of work applying not just imitation learning algorithms like DAgger to neural models for structured prediction problems, but also just applying straight-up RL algorithms (like REINFORCE, policy gradient, or actor/critic) to them. The important point is that while people have tried to do things like neural CRFs, etc., the basic sequence-to-sequence style model naturally fits a search-based structured prediction (aka RL-ish) view.

But these tasks are not the same and, in fact, structured prediction is much simpler, and I think we need to develop algorithms that take that into account.

The biggest difference is that in (all or at least almost all) structured prediction problems, conditioned on the input x, the world is known, deterministic and therefore reversible and/or fully-explorable (modulo limits of computation). This is generally not true in RL, and one of the biggest challenges in RL is that once you take an action, you cannot un-take that action, and you cannot try out other alternatives.

That is to say: computation aside, in structured prediction, conditioned on x, you can build out the entire search tree and do whatever the heck you want with it. (Of course "computation aside" makes no sense in a SP setting because the whole difficulty of SP is computation.) In fact, one of the big advantages to things like CRFs is effectively that they do build out the entire search tree, at least implicitly, which is possible precisely because of the limited expressivity of features.

This observation is probably perfectly obvious to most NLP folks. In a sense, the semantic parsing crowd has been doing something reinforcement-learning like for quite some time. You produce a semantic parse, run it against a database, check if you get the correct answer or not. If so, positive reward; if not, negative. But no one (as far as I know) just produces one parse: you produce a beam of a bunch of parses and try them all. This is definitely not something you can do in standard RL, but it is something you can do in structured prediction.

In my mind, this was (and continues to be) one of the major weaknesses of the Searn/DAgger approach to structured prediction that continues to be a problem in applying standard RL algorithms to structured prediction. In a real sense, I think this is something that incremental perceptron, broken LaSO and not-broken LaSO-BST, and seq2seqLaSO got right that the more RL-ish approaches got wrong. (This continues to be true in bandit structured prediction, which will get a separate post in the maybe-near future.) One approach that blends the two to some degree is Vieira and Eisner's recent paper that uses dynamic programming and change propagation within LOLS (which is effectively a variant of AggreVaTe, a follow-on to DAgger) to learn to prune (it's not obvious to me how to generalize this to other tasks yet).

Why the Gap?
I don't think this gap is an accident, and I think there are essentially two reasons it exists.

First, as suggested above, is the question of computation. If you're willing to say "I don't care about computation" then you might as well put yourself in CRF world where life is (relatively) easy, at least if you want to do analysis. If you do care about computation, then doing the "purely greedy" thing is very natural and then you can say "well I know I'm computationally efficient because I'm greedy, and now I can focus entirely on the statistical problem." Once you're willing to spend a bit more computation, you have to figure out how to "charge" yourself properly for that computation. That is, you enter a world where there's a trade-off between statistics and computation (though not in the usual sense) and it's not at all clear how to balance that. It's also hard to convince myself that it's better to spend ten times longer on a single structured example than to do ten different examples. This is a question I've been interested in since my dissertation (p44) but have made basically zero progress on.

(Note that I don't currently agree with the entirety of that passage from the dissertation---in particular, the complexity argument is somewhat broken---but I think the basic idea is right.)

Second, I think there's a bit of a looking-under-the-lamppost effect that's not easy to ignore. Here, the lamppost is mainly a computational efficiency lamppost, and secondarily a convenience lamppost. Greedy search is really fast. Even compared to beam search with a beam size of K, greedy is often much more than K times faster because you don't have bookkeeping overhead. And it's way easier to implement greedy solutions than non-greedy, especially in neural land if you want things to be efficient on a GPU. And often toolkits have greedy already implemented for you. This obviously isn't un-recoverable, but runs into the problem of: if I can do 50 sentences greedily on my GPU in the time it would take to do beam-10 search for one of the sentences, is the computation really going to come off in my favor?

As always, I'd appreciate pointers to work that I don't know about that addresses any of these challenges!

27 March 2017

Initial thoughts on fairness in paper recommendation?

There are a handful of definitions of "fairness" lying around, of which the most common is disparate impact: the rate at which you hire members of a protected category should be at least 80% of the rate you hire members not of that category. (Where "hire" is, for our purposes, a prediction problem, and 80% is arbitrary.) DI has all sorts of issues, as do many other notions of fairness, but all the ones I've seen rely on a pre-ordained notion of "protected category".

I've been thinking a lot about something many NLP/ML people have thought about in their musing/navel-gazing hours: something like a recommender system for papers. In fact, Percy Liang and I built something like this a few years ago (called Braque), but it's now defunct, and its job wasn't really to recommend, but rather to do offline search. Recommendation was always lower down the TODO list. I know others have thought about this a lot because over the last 10 years I've seen a handful of proposals and postdoc ads go out on this topic, though I don't really know of any solutions.

A key property that such a "paper recommendation system" should have is that it be fair.

But what does fair mean in this context, where the notion of "protected category" is at best unclear and at worst a bad idea? And to whom should it be fair?

Below are some thoughts, but they are by no means complete and not even necessarily good :P.

In order to talk about fairness of predictions, we have to first define what is being predicted. To make things concrete, I'll go with the following: the prediction is whether the user wants to read the entire paper or not. For instance, a user might be presented with a summary or the abstract of the paper, and the "ground truth" decision is whether they choose to read the rest of the paper.

The most obvious fairness concept is authorship fairness: that whether a paper is recommended or not should be independent of who the authors are (and what institutions they're from). On the bright side, a rule like this attempts to break the rich-get-richer effect, and means that even non-famous authors' papers get seen. On the dark side, authorship is actually a useful feature for determining how much I (as a reader) trust a result. Realistically, though, no recommender system is going to model whether a result is trustworthy: just that someone finds a paper interesting enough to read beyond the abstract. (Though the two are correlated.)

A second obvious but difficult notion of fairness is that performance of the recommender system should not be a function of, eg., how "in domain" the paper is. For example, if our recommender system relies on generating parse trees (I know, comical, but suppose...), and parsing works way better on NLP papers than ML papers, this shouldn't yield markedly worse recommendations for ML papers. Or similarly, if the underlying NLP fares worse on English prose that is slightly non-standard, or slightly non-native (for whatever you choose to be "native"), this should not systematically bias against such papers.

A third notion of fairness might have to do with underlying popularity of topics. I'm not sure how to formalize this, but suppose there are two topics that anyone ever writes papers about: deep learning and discourse. There are far more DL papers than discourse papers, but a notion of fairness might establish that they be recommended at similar rates.

This strong rule seems somewhat dubious to me: if there are lots of papers on DL then probably there are lots of readers, and so probably DL papers should be recommended more. (Of course it could be that there exists an area where tons of papers get written and none get read, in which case this wouldn't be true.)

A weaker version of this rule might state conditions on one-sided error rates. Suppose that every time a discourse paper is recommended, it is read (high precision), but that only about half of the recommended DL papers get read (low precision). Such a situation might be considered unfair to discourse papers because tons of DL papers get recommended when they shouldn't, but not so for discourse papers.

Now, one might argue that this is going to be handled by just maximizing accuracy (aka click-through rate), but this is not the case if the number of people who are interested in discourse is dwarfed by the number interested in DL. Unless otherwise constrained, a system might completely forgo performance on those interested in discourse in favor of those interested in DL.

This is all fine, except that the world doesn't consist of just DL papers and just discourse papers (and nary a paper in the intersection, sorry Yi and Jabob :P). So what can we do then?

Perhaps a strategy is to say: I should not be able to predict the accuracy of recommendation on a specific paper, given its contents. That is: just because I know that a paper includes the words "discourse" and "RST" shouldn't tell me anything about what the error rate is on this paper. (Of course it does tell me something about the recommendations I would make on this paper.) You'd probably need to soften this with some empirical confidence intervals to handle the fact that many papers will have very few observations. You could also think about making a requirement/goal like this simultaneously on both false positives and false negatives.

A related issue is that of bubbles. I've many times been told that one of my (pre-neural-net) papers had already been done in neural nets land ten years earlier; I've many times told-or-wanted-to-tell others the opposite. Both of these are failures of exploration. Not out of malice, but just out of lack-of-time. If a user chooses to read papers if and only if they're on DL, should a system continue to recommend non-DL papers to them? If so, why? This directly contradicts the notion of optimizing for accuracy.

Overall, I'm not super convinced by any of these thoughts enough to even try to really formalize them. Some relevant links I found on this topic:

16 March 2017

Trying to Learn How to be Helpful (IWD++)

Over the past week, in honor of International Women's Day, I had several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr, Awesome People: Ellen Riloff, Awesome People: Lise Getoor, Awesome People: Karen Spärck Jones and Awesome People: Kathy McKeown. (Today's is delayed one day, sorry!)

I've been incredibly fortunate to have a huge number of influential women in my life and my career. Probably the other person (of any gender) who contributed to my career as much as those above is my advisor, Daniel. I've been amazingly supported by these amazing women, and I've been learning and thinking and trying to do a lot over the past few years to do what I can to support women in our field.
There are a lot of really good articles out there on how to be a male ally in "tech." Some of these are more applicable to academia than others and I've linked to a few below.

Sometimes "tech" is not the same as "academia", and in the context of the academy, easily the best resource I've seen is Margaret Mitchell's writeup on a Short Essay on Retaining and Increasing Gender Diversity, with Focus on the Role that Men May Play. You should go read this now.

No really, go read it. I'll be here when you get back.

On Monday I attended a Male Allies in Tech workshop put on by ABI, with awesome organization by Rose Robinson and Lauren Murphy, in which many similar points were made. This post is basically a summary of the first half of that workshop, with my own personal attempt to try to interpret some of the material into an academic setting. Many thanks especially to Natalia Rodriguez, Erin Grau, Reham Fagiri and Venessa Pestritto on the women's perspectives panel, and Dan Storms, Evin Robinson, Chaim Haas and Kip Zahn on the men's perspectives panel (especially Evin Robinson!).

The following summary of the panels has redundancy with many of Margaret's points, which I have not suppressed and have tried to highlight.
  1. Know that you're going to mess up and own it. I put this one first because I'm entirely sure that even in writing this post, I'm going to mess up. I'm truly uncomfortable writing this (and my fingernails have paid the price) because it's not about me, and I really don't want to center myself. On the other hand, I also think it's important to discuss how men (i.e., me) can try to be helpful, and shying away from discussion also feels like a problem. The only place I feel like I can honestly speak from is my own experience. This might be the worst idea ever, and if it is, I hope someone will tell me and talk to me about it. So please, feel free to email me, message me, come find me in person or whatever.
  2. Pretty much the most common thing I've heard, read, whatever, is: listen to and trust women. Pretty much all the panelists at the workshop mentioned this in some form, and Margaret mentions this in several places. As an academic, though, there's more that I've tried to do: read some papers. There's lots of research on the topic of things like unconscious bias in all sorts of settings, and studies of differences in how men and women are cited, and suggested for talks, and everything else under the sun. A reasonable "newbie" place to start might be Lean In (by Sheryl Sandberg) which, for all the issues that it has, provides some overview of research and includes citations to literature. But in general, doing research and reading papers is something I know how to do, so I've been trying to do it. Beyond being important, I've honestly found it really intellectually engaging.
  3. Another very frequently raised topic on the panels, and something that Margaret mentions too, is to say something when you see or hear something sexist. Personally, I'm pretty bad at thinking of good responses in these cases: I'm a very non-type-A person, I'm not good with confrontation, and my brain totally goes into "flight" mode. I've found it really useful to have a cache of go-to responses. An easy one is something like "whoah" or just "not cool" whose primary benefit is being easy. Both seem to work pretty well, and take very little thought/planning. A more elaborate alternative is to ask for clarification. If someone says something sexist, ask what's meant by that. Often in the process of trying to explain it, the issue becomes obvious. (I've been on the receiving side of both such tactics, too, and have found them both effective there as well.)

    Another standard thing in meetings is for men to restate what a woman has stated as their own idea. A suggested response from Rose Robinson (one of the organizers) at the workshop is "I'm so glad you brought that up because maybe it wasn't clear when [woman] brought it up earlier." I haven't tried this yet, but it's going into my collection of go-to responses so I don't have to think too much. I'd love to hear other suggestions!
  4. A really interesting suggestion from the panel at the workshop was "go find a woman in your organization with the same position as you and tell her your salary." That said, I've heard personally from two women at two different universities that they were told they could not be given more of a raise because then they'd be making more than (some white guy). I'm not sure what I can do about cases like that. A related topic is startup: startup packages in a university are typically not public, so a variant of this is to tell your peers what your startup was.
  5. There were a lot of suggestions around the idea of making sure that your company's content has broad representation; I think in academia this is closely related to the first three of Margaret's points about suggesting women for panels, talks or interviews in your stead. I would add leadership roles to that list. One thing I've been trying to do when I'm invited to regular seminar series is to look at their past speakers and decide whether I would be contributing to the problem by accepting. This is harder for one-off things like conference talks/panels (because there's often no history), but even in those cases it's easy enough to ask if I'll be on an all-male panel. In cases where I've done this, the response has been positive. I've also been trying to be more openly intentional recently: if I do accept something, I'll try to explicitly say that I'm accepting because I noticed that past speakers were balanced. Positive feedback is good. A personally useful thing I did was write template emails for turning down invitations or asking for more information, with a list of researchers from historically excluded groups in CS (including but not limited to women) who could be invited in my stead. I almost never send these exactly as is, but they give me a starting point.

    There's a dilemma here: if every talk series, panel, etc., were gender balanced, women would be spending all their time going around giving talks and would have less time for research. I don't have a great solution here. (I do know that a non-solution is to be paternalistic and make decisions for other people.) One option would be to pay honoraria to women speakers and let the "market" work. This doesn't address the dilemma fully (time != money), but I haven't heard of or found other ideas. Please help!

    Turning down invitations to things as an academic is really hard. I recognize my relative privilege here that I already have tenure and so the cost to me for turning down this or that is pretty low in comparison to someone who is still a Ph.D. student or an untenured faculty member. That is to say: it's easy for me to say that I'm willing to take a short term negative reward (not giving a talk) in exchange for a long term very positive reward (being part of a more diverse community that both does better science and is also more supportive and inclusive). If I were still pre-tenure, this would definitely get clouded with the problem that it's great if there's a better environment in the future but not so great for me if I'm not part of it. On the other hand, pre-tenure is definitely a major part of the leaky pipeline, and so it's also really important to try to be equitable here. Each person is going to have to find a balance that they're comfortable with.

    One last thought on this topic is something that I was very recently inspired to think about by Hanna Wallach. My understanding is that she, like most people, cannot accept honoraria as part of a company, and so she recently started asking places to donate her honoraria to good causes. I can accept honoraria for talks, which hurts the pipeline, but perhaps by donating these funds to organizations like ABI or BlackGirlsCode, I can try to help other parts of the pipeline. (There are tons of organizations out there I've thought about supporting; I like BGC for original intersectionality reasons.)
  6. I've been working hard to follow women on social media (and to follow members of other historically excluded groups, including women in those groups). This has been super valuable to me for expanding my views of tons of topics.
  7. The final topic at the workshop was a talk by the two authors of a new book on how and why men can mentor women called Athena Rising. This was really awesome. Mentoring in tech is different than advising in academia, but not that different. Or at least there are certainly some parallels. Looking back at Hal-a-few-years ago, I very much had fallen into the trap of "okay I advise a diverse group of PhD students ergo I'm supporting diversity." This is painfully obvious now when I re-read old grant proposals. A consistent thing I've heard is that this is a pretty low bar, especially because women who do the extra work required to get to our PhD program are really really amazing.

    I still think this is an important factor, but this discussion at the workshop made me realize that I can also go out and learn how to be a better advisor, especially to students whose lived experiences are very different than my own. And that it's okay if students don't want the same path in life that I do: "hone don't clone" was the catch-phrase here. This discussion reminded me of a comment one of the PhD students made to me after going to Grace Hopper: she really appreciated it because she could ask questions there that she couldn't ask me. I think there will always be such questions (because my lived experience is different), but I've decided to try to close the gap a bit by learning more here.
  8. Finally (and really, thank you if you've read this far), a major problem that was made apparent to me by Bonnie Webber is that one reason that women receive fewer awards in general is because women are nominated for fewer awards (note: this is not the only reason). Nominating women for awards is a super easy thing for me to do. It costs a few hours of my time to nominate someone for an award, or to write a letter (of course for serious awards, it's far more than a few hours to write a letter, but whatev). This includes internal awards at UMD, as well as external awards like ACL (or ACM or whatever) fellows, etc. Whenever I get an email for things like this, I'm trying to think about: who could I nominate for this that might otherwise be overlooked (Margaret's point on page 2!).
I promised some other intro resources; I would suggest:
  1. GeekFeminism: Allies
  2. GeekFeminism: Resources for Allies
  3. GeekFeminism: Good sexism comebacks
  4. Everyday Feminism: Male Feminist Rules to Follow
  5. GeekFeminism: Allies Workshop
Like I said at the beginning, what I really hope is that people will reply here with (a) suggestions for things they've been trying that seem to be working (or not!), (b) critical feedback that something here is really a bad idea and that something else is likely to be much more effective, and (c) general discussion about the broad issues of diversity and inclusion in our communities.

Because of the topic of the workshop, this is obviously focused in particular on women, but the broader discussion needs to include topics related to all historically excluded groups because what works for one does not necessarily work for another. Especially when intersectionality is involved. Rose Robinson ended the ABI Workshop saying "To get to the same place, women have to do extra. And Black women have to do extra extra." What I'm trying to figure out is what extra I can do to try to balance a bit more. So please, please, help me!

14 March 2017

Awesome people: Kathy McKeown (IWD++)

To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr, Awesome People: Ellen Riloff, Awesome People: Lise Getoor and Awesome People: Karen Spärck Jones.

Continuing on the topic of "who has been influential in my career and helped me get where I am?" today I'd like to talk about Kathy McKeown, who is currently the Director of the Institute for Data Sciences and Engineering at Columbia. I had the pleasure of writing a mini-bio for Kathy for NAACL 2013 when I got to introduce her as one of the two invited speakers, and learned during that time that she was the first woman chair of computer science at Columbia and also the first woman to get tenure in the entirety of Columbia's School of Engineering and Applied Science. Kathy's name is nearly synonymous with ACL: she's held basically every elected position there is in our organization, in addition to being an AAAI, ACM, and ACL Fellow, and having won the Presidential Young Investigator Award from NSF, the Faculty Award for Women from NSF and the ABI Women of Vision Award.

One aspect of Kathy's research that I find really impressive is something that was highlighted in a nomination letter for her to be an invited speaker at NAACL 2013. I no longer have the original statement, but it was something like "Whenever a new topic becomes popular in NLP, we find out that Kathy worked on it ten years ago." This rings true of my own experience: recent forays into digital humanities, work on document and sentence compression, paraphrasing, technical term translation, and even her foundational work in the 80s on natural language interfaces to databases (now called "semantic parsing").

Although---like Bonnie and Karen---I met Kathy through DUC as a graduate student, I didn't start working with her closely until I moved to Maryland and I had the opportunity to work on a big IARPA proposal with her as PI. That was the first of two really big proposals that she'd lead and I'd work on. These proposals involved both a huge amount of new-idea-generation and a huge amount of herding-professors, both of which are difficult in different ways.

On the research end, in the case of both proposals, Kathy's feedback on ideas has been invaluable. She's amazingly good at seeing through a convoluted idea and pushing on the parts that are either unclear or just plain don't make sense. She's really helped me hone my own ideas here.

On the herding-professors end, I am so amazed by how Kathy manages a large team. We're currently having weekly phone calls, and one of the other co-PIs and I have observed in all seriousness that being on these phone calls is like free mentoring. I hope that one day I'll be able to manage even half of what Kathy manages.

One of my favorite less-research-y memories of Kathy is from when our previous IARPA project was funded and she invited the entire team to a kickoff meeting in the Hamptons. It was the Fall, so the weather wasn't optimal, but a group of probably ten faculty and twenty students converged there, ran around the beach, cooked dinner as a group, and bonded. And we discussed some research too. I still think back to this event regularly, because it's honestly not something I would have felt comfortable doing in her position. I have a tendency to keep my work life and my personal life pretty separate, and inviting thirty colleagues over for a kickoff meeting would've been way beyond my comfort zone: I think I worry about losing stature. Perhaps Kathy is more comfortable with this because of personality or because her stature is indisputable. Either way, it's made me think regularly about what sort of relationship with students and colleagues I want and am comfortable with.

Spending any amount of time with Kathy is a learning experience for me, and I also have to thank Bonnie Dorr for including me on the first proposal with Kathy that kind of got me in the door. I'm incredibly indebted to her amazing intellect, impressive herding abilities, and open personality.

Thanks, Kathy!

13 March 2017

Awesome people: Karen Spärck Jones (IWD++)

To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr, Awesome People: Ellen Riloff and Awesome People: Lise Getoor.


Today is the continuation of the theme "who has been influential in my career and helped me get where I am?" and in that vein, I want to talk about another awesome person: Karen Spärck Jones. Like Bonnie Dorr, Karen is someone I first met at the Document Understanding Conference series back when I was a graduate student.

Karen has done it all. First, she invented inverse document frequency, one of those topics that's so ingrained that no one even cites it anymore. I'm pretty sure I didn't know she invented IDF when I first met her. Frankly, I'm not sure it even occurred to me that this was something someone had to invent: it was like air or water. She's the recipient of the AAAI Allen Newell Award, the BCS Lovelace Medal, the ACL Lifetime Achievement Award, the ASIS&T Award of Merit, and the Gerard Salton Award; she was a fellow of the British Academy (and VP thereof), and a fellow of AAAI, ECCAI and ACL. I highly recommend reading her speech from her ACL fellow award. Among other things, I didn't realize that IDF was the fourth attempt to get the formulation right!

If there are two things I learned from Karen, they are:
  1. simple is good
  2. examples are good
Although easily stated, these two principles are quite difficult to follow. I distinctly remember giving a talk at DUC on BayeSum and, afterward, Karen coming up to talk to me to try to get to the bottom of what the model was actually doing and why it was working, basically sure that there was a simpler explanation buried under the model.

I also can't forget Karen routinely pushing people for examples in talks. Giving a talk on MT that doesn't have example outputs of your translation system? Better hope Karen isn't in the audience.

Karen was also a huge proponent of breaking down gender barriers in computing. She's famously quoted as saying:
I think it's very important to get more women into computing. My slogan is: "Computing is too important to be left to men."
This quote is a wonderful reflection both of Karen's seriousness and of her tongue-in-cheek humor. She was truly one of the kindest people I've met.


In particular, more than any of these specifics I just remember being so amazed and grateful that even as a third year graduate student, Karen, who was like this amazing figure in IR and summarization, would come talk to me for a half hour to help me make my research better. I was extremely sad nearly ten years ago when I learned that Karen had passed away. Just a week earlier, we had been exchanging emails about document collections, and the final email I had from her on the topic read as follows:
Document collections is a much misunderstood topic -- you have to think what its for and eg where (in retrieval) you are going to get real queries from. Just assembling some bunch of stuff and saying hey giys, what would you like to do with this is useless.
This was true in 2007 and it's true today. In fact, I might argue that it's even more true today. We have nearly infinite ability to create datasets today, be they natural, artificial or whatever, and it's indeed not enough just to throw some stuff together and cross your fingers.

I miss Karen so much. She had this joy that she brought everywhere and my life is less for that loss.  

10 March 2017

Awesome people: Lise Getoor (IWD++)

To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr and Awesome People: Ellen Riloff.

Today is the continuation of the theme "who has been influential in my career and helped me get where I am?" and in that vein, I want to talk about another awesome person: Lise Getoor.

Lise is best known for her deep work in statistical relational learning, link mining, knowledge graph models and tons of applications to real world inference problems where data can be represented as a graph. She's currently a professor in CS at UCSC, but I had the fortune to spend a few years with her while she was still here at UMD. During this time, she was an NSF Career awardee, and is now a Fellow of the AAAI. At UMD when you're up for promotion, you give a "promotion talk" to the whole department, and I still remember sitting in her Full Prof promotion talk and being amazed---despite having known her for years at this point---at how well she made both deep technical contributions and also built software and tools that are useful for a huge variety of practitioners.

Like Bonnie Dorr, Lise was something of an unofficial mentor to me. Ok I'll be honest. She's still an unofficial mentor to me. Faculty life is hard work, especially when one has just moved; going through tenure is stressful in a place where you haven't had years to learn how things work; none of which is made easier by simultaneously having personal-life challenges. Lise was always incredibly supportive in all of these areas, and I don't think I realized until after she had moved to UCSC how much I benefited from Lise's professional and emotional labor in helping me survive. And how helpful it is to have an openly supportive senior colleague to help grease some gears. I always felt like Lise was on my side.

Probably one of the most important things I learned from Lise is how to be strategic, both in terms of research (what is actually worth putting my time and energy into) and departmental work (how can we best set ourselves up for success). As someone who has a tendency to spread himself too thin, it was incredibly useful to have a reminder that focusing on a smaller number of deeper things is more likely to have real lasting impact. I also found that I greatly respected her attention to excellence: my understanding (mostly from her students and postdocs) is that her personal acceptance rate on conference submissions is incredibly high (like almost 1.0), because her own internal bar for submission is generally much higher than any reviewer's. This is obviously something I haven't been able to replicate, but I think incredibly highly of Lise for this.

Lise and I got promoted the same year---her to full prof, me to associate prof---and so we had a combined celebration dinner party at one of the (many) great Eritrean restaurants in DC followed by an attempt to go see live jazz at one of my favorite venues across the street. The music basically never showed up, but it was a really fun time anyway. Lise gave me a promotion gift that I still have on my desk: a small piece of wood (probably part of a branch of a tree) with a plaque that reads "Welcome to the World of Deadwood." This is particularly meaningful to me because Lise is so far from deadwood that it puts me to shame, and I can only hope to be as un-deadwood-like as her for the rest of my career.

Thanks Lise!

09 March 2017

Awesome people: Ellen Riloff (IWD++)

To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM. Previous posts in this series include: Awesome People: Bonnie Dorr.

Today is the continuation of the theme "who has been influential in my career and helped me get where I am?" and in that vein, I want to talk about another awesome person: Ellen Riloff. Ellen is a professor of computer science at the University of Utah, and literally taught me everything I know about being a professor. I saw a joke a while ago that the transition from being a PhD student to a professor is like being trained for five years to swim and then being told to drive a boat. This was definitely true for me, and if it weren't for Ellen I'd have spent the past N years barely treading water. I truly appreciate the general inclusive and encouraging environment I belonged to during my time at Utah, and specifically appreciate everything Ellen did. When I think of Ellen as a researcher and as a person, I think: honest and forthright.

Ellen is probably best known for her work on bootstrapping (for which she and Rosie Jones received an AAAI Classic Paper award in 2017) and information extraction (AAAI Classic Paper honorable mention in 2012), but has also worked more broadly on coreference resolution, sentiment analysis, active learning, and, in a wonderful project that also reveals her profound love of animals, veterinary medicine. Although I only "officially" worked on one project with her (on plot units), her influence on junior-faculty-Hal was deep and significant.

It would be impossible to overstate how much impact Ellen has had on me as a researcher and a person. I still remember on my first NSF proposal, I sent her a draft and her main comment was "remove half." I was like "nooooooo!!!!" But she was right, and ever since then I try to repeat this advice to myself every time I write a proposal.

One of the most important scientific lessons I learned from Ellen is that how you construct your data matters. NLP is a field that's driven by the existence of data, but if we want NLP to be meaningful at all, we need to make sure that that data means what we think it means. Ellen's attention to detail in making sure that data was selected, annotated, and inspected correctly is deeper and more thoughtful than anyone else I've ever known. When we were working on the plot units stuff, we each spent about 30 minutes annotating a single fable, followed by another 30 minutes of adjudication, and then reannotation. I think we did about twenty of them. Could we have done it faster? Yes. Could we have had mechanical turkers do it? Probably. Would the data have been as meaningful? Of course not. Ellen taught me that when one releases a new dataset, this comes with a huge responsibility to make sure that it's carefully constructed and precise. Without that, the number of wasted hours of others that you run the risk of creating is huge. Whenever I work on building datasets these days, the Ellen-level-of-quality is my (often unreached) aspiration point.

I distinctly remember a conversation Ellen and I had about advising Ph.D. students in my first or second year, in which I mentioned that I was having trouble figuring out how to motivate different students. Somewhat tongue-in-cheek, Ellen pointed out that different students are actually different people. Obvious (and amusing) in retrospect, but as I never saw my advisor interacting one on one with his other advisees, it actually had never occurred to me that he might have dealt with each of us differently. Like most new faculty, I also had to learn how to manage students, how to promote their work, how to correct them when they mis-step (because we all mis-step), and also how to do super important things like write letters. All of these things I learned from Ellen. I still try to follow her example as best I can.

I was lucky enough to have the office just-next-door to Ellen, and we were both in our offices almost every weekday, and her openness to having me stick my head in her door to ask questions about anything from what are interesting grand research questions to how to handle issues with students, from how to write proposals to what do you want for lunch, was amazing. I feel like we had lunch together almost every day (that's probably an exaggeration, but that's how I remember it), and I owe her many thanks for helping me flesh out research ideas, and generally function as a junior faculty member. She was without a doubt the single biggest impact on my life as junior faculty, and I remain deeply indebted to her for everything she did directly and behind the scenes.

Thanks Ellen!

08 March 2017

Awesome people: Bonnie Dorr (IWD++)

To honor women this International Women's Day, I have several posts, broadly around the topic of women in STEM.

This is the first, and the topic is "who has been influential in my career and helped me get where I am?" There are many such people, and any list will be woefully incomplete, but today I'm going to highlight Bonnie Dorr (who founded the CLIP lab together with Amy Weinberg and Louiqa Raschid, and who also is a recent fellow of the ACL!).

For those who haven't had the chance to work with Bonnie, you're missing out. I don't know how she does it, but the depth and speed at which she interacts, works, produces ideas and gets things done is stunning. Before leaving for a program manager position at DARPA and then later to IHMC, Bonnie was full professor (and then associate dean) here at UMD. At DARPA she managed basically two PM's worth of projects, and was always excited about everything. During her time as a professor here at UMD (after earning her Ph.D. from MIT), Bonnie was an NSF Presidential Faculty Fellow, a Sloan Recipient, a recipient of the NSF Young Investigator Award, and a AAAI Fellow.

I learned a lot from Bonnie. I first met her back when I was a graduate student and Daniel and I had a paper in the Document Understanding Conference (basically the summarization workshop of the day) on evaluation. It was closely related to something Bonnie had worked on previously, and I was really thrilled to get feedback from her. Fast forward six years and then I'm writing proposals with Bonnie, advising postdocs and students together, and otherwise trying to learn as much as possible by osmosis and direct instruction.

One of the most important things I learned from Bonnie was: if you want it done, just do it. Bonnie is a do-er. This is reflected in her incredibly broad scientific contributions (summarization, machine translation, evaluation, etc.) as well as the impact she had on the department. It was clear almost immediately that the faculty here really respected Bonnie's opinion; her ability to move mountains was evident.

On a more personal note, although she was not my official senior-faculty-mentor when I came to UMD, Bonnie was one of two senior faculty members here who really did everything she could to help me---both professionally and personally. Whenever I was on the fence about how to handle something, I knew that I could go to Bonnie and get her opinion and that her opinion would be well reasoned. I wouldn't always take it (sometimes to my own chagrin), but she was always ready with concrete advice about specific steps to take about almost any topic. I've also been on two very-large grant proposals with her (one successful and one not) which have both been incredible learning experiences. Getting a dozen faculty to work on a 30 page document is no easy task, and Bonnie's combination of just-do-it and lead-by-example is something I still try to mimic when I'm in a similar (if smaller) position. Even when she was at DARPA, as well as now, as professor emerita at UMD, she's still actively supporting both me and other faculty here, and clearly really cares that people at UMD are successful.

In addition to Bonnie's seriousness and excellence in research and professional life, I also really appreciated her more laid back side. When I visited UMD back before accepting a job here, she hosted a visit day dinner for prospective grad students at her house, which overlapped with the Ph.D. defense of one of her students: hence, a combined party. To honor the student, Bonnie had written a rap, which she then performed with her son beatboxing. It was in that moment that I realized truly how amazing Bonnie is not just as a researcher but as a person. (Of course, she attacked this task with exactly the same high intensity that she attacks every other problem!)

Overall, Bonnie is one of the most amazing researchers I know, one of the strongest go-getters I know, and someone I've been extremely lucky to have not just as a collaborator, but also as a colleague and mentor.

Thanks Bonnie!