24 November 2008

Supplanting vs Augmenting Human Language Capabilities

With an analogy to robotics, I've seen two different approaches. The first is to develop humanoid robots. The second is to develop robots that enhance human performance. The former supplants a human (e.g., the long-awaited robot butler); the latter augments a human. There are parallels in many AI fields.

What about NLP?

I would say that most NLP research aims to supplant humans. Machine translation puts translators out of work. Summarization puts summarizers out of work (though there aren't as many of these). Information extraction puts (one form of) information analysts out of work. Parsing puts, well... hrm...

There seems actually to be quite little in the way of trying to augment human capabilities. Web search might be one such area, though this is only tenuously an "NLP" endeavor. Certainly there is a reasonable amount of work in translation assistance: essentially fancy auto-completion for translators. Some forms of IE might look like this: find all the names, coreferenceify them and then present "interesting" results to a real human analyst who just doesn't have time to look through all the documents... though this looks to me like a fancy version of some IR task that happens to use IE at the bottom.
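The "fancy auto-completion for translators" idea can be sketched very simply: propose completions of what the translator has typed so far, drawn from a translation memory of previously translated sentences. A toy version, with an invented memory (real systems would of course match fuzzily and condition on the source sentence):

```python
# Toy translation-memory auto-completion: suggest memory sentences that
# extend the prefix the translator has typed. The memory contents are made up.
import bisect

class TranslationMemory:
    def __init__(self, target_sentences):
        # sorting lets us find all prefix matches with one binary search
        self.sentences = sorted(target_sentences)

    def complete(self, prefix, k=3):
        """Return up to k memory sentences that extend `prefix`."""
        i = bisect.bisect_left(self.sentences, prefix)
        out = []
        while i < len(self.sentences) and self.sentences[i].startswith(prefix):
            out.append(self.sentences[i])
            i += 1
            if len(out) == k:
                break
        return out

tm = TranslationMemory([
    "the treaty was signed in 1998",
    "the treaty entered into force",
    "the committee adjourned",
])
suggestions = tm.complete("the treaty")
# both sentences beginning "the treaty" come back as candidate completions
```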

Where else might NLP technology be used to augment, rather than supplant?

  • A student here recently suggested the following. When learning a new language, you often read and encounter unknown words. These words could be looked up in a dictionary and described in a little pop-up window. Of course, if the definition (hopefully sense-disambiguated) itself contains unknown words, you'd need to recurse. He then suggested a similar model for reading Wikipedia pages: tell Wikipedia everything you know and then have it regenerate the "variational EM" page explaining things that you don't know about (again, recursively). This could either be interactive or not. Wikipedia is nice here because you can probably look up most things that a reader might not know via internal Wikipedia links.

  • Although really a field of IR, there's the whole interactive track at TREC that essentially aims for an interactive search experience, complete with suggestions, refinements, etc.

  • I can imagine electronic tutorials that automatically follow your progress in some task (e.g., learning to use Photoshop) and auto-generate text explaining parts where you seem to be stuck, rather than just providing you with generic, canned advice. (Okay, this starts to sound a bit like our mutual enemy Clippy... but I suspect it could actually be done well, especially if it were really in the context of learning.)

  • Speaking of learning, someone (I don't even remember anymore! Sorry!) suggested the following to me a while ago. When trying to learn a foreign language, there could be some proxy server you go through that monitors when you are reading pages in the language you want to learn. It can keep track of what you know and offer mouseover suggestions for words you don't know. This is a bit like the first suggestion above.
That's all I can come up with now.
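The recursive pop-up dictionary (the first and last items above) is easy to sketch. The toy dictionary and known-word set below are made up; a real system would plug in a sense-disambiguated lexicon and a model of the reader's vocabulary:

```python
# Rough sketch of the recursive pop-up dictionary: gloss every unknown word,
# then recurse into the definitions themselves. Toy data only.
import re

def tokenize(text):
    # deliberately crude: lowercase and keep only alphabetic runs
    return re.findall(r"[a-z]+", text.lower())

def gloss_unknowns(text, dictionary, known, depth=0, max_depth=3):
    """Map each unknown word in `text` to its definition, recursing into
    definitions that themselves contain unknown words."""
    glosses = {}
    if depth >= max_depth:  # guard against circular definitions
        return glosses
    for word in tokenize(text):
        if word in known or word in glosses or word not in dictionary:
            continue
        glosses[word] = dictionary[word]
        glosses.update(gloss_unknowns(dictionary[word], dictionary,
                                      known | set(glosses), depth + 1, max_depth))
    return glosses

toy_dict = {
    "posterior": "the distribution over latent variables given the data",
    "latent": "hidden, not directly observed",
}
reader_knows = {"compute", "the", "distribution", "over", "variables",
                "given", "data", "hidden", "not", "directly", "observed"}
glosses = gloss_unknowns("compute the posterior", toy_dict, reader_knows)
# glosses explains "posterior" and also "latent", the unknown word
# that appears inside its definition
```

The depth cap matters: dictionary definitions are notoriously circular, so without it the recursion need not terminate.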

One big "problem" with working on such problems is that you then cannot avoid actually doing user studies, and we all know how much we love doing this in NLP these days.


Daniel Tunkelang said...

I'm heartened to see an NLP / machine learning researcher thinking in these terms. I suggest you explore the emerging field of HCIR. And you might enjoy my blog, The Noisy Channel.

Chris Brew said...

All the "autocomplete" things in Eclipse and similar are in the business of guessing what the user is intending to do and then making it easy. How do you do that in the context of writing a typical academic paper? Wouldn't you want to track point of view, arguments and evidence like Simone Teufel's thesis work did? That's NLP, and obviously an augmentation of human capacity when done well. Feasible?

Chris Brew said...

And see Whitelock et al. from a decade ago.


It's a very good idea, so your student should feel pleased for thinking of it rather than sad about missing the prior art.

Ted Dunning ... apparently Bayesian said...

The Oleada project from years ago did lots of human augmentation work along the lines you suggest. It had translation memory, dictionary access and many other UI functions to assist translators.

Relevance feedback fits into the category of human augmentation as well.
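Relevance feedback has a compact classical form, Rocchio's algorithm, which moves the query vector toward documents the user marks relevant and away from ones marked non-relevant. A minimal sketch over bag-of-words weight vectors (the query, documents, and weights are all invented for illustration):

```python
# Rocchio relevance feedback: q' = a*q + b*centroid(rel) - g*centroid(nonrel).
# Vectors are {term: weight} dicts; terms driven non-positive are dropped.
from collections import Counter

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    new_q = Counter()
    for term, w in query.items():
        new_q[term] += alpha * w
    for docs, coef in ((relevant, beta), (nonrelevant, -gamma)):
        for doc in docs:
            for term, w in doc.items():
                new_q[term] += coef * w / len(docs)
    return {t: w for t, w in new_q.items() if w > 0}

# the user searched "jaguar", marked the cat page relevant, the car page not
updated = rocchio({"jaguar": 1.0},
                  relevant=[{"jaguar": 1.0, "cat": 1.0}],
                  nonrelevant=[{"jaguar": 1.0, "car": 1.0}])
# "cat" is pulled into the query; "car" is pushed out entirely
```

The point, in the augmentation spirit of the post: the machine never decides what is relevant; it just reshapes the query around the human's judgments.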

Anonymous said...

I tend to adopt (or did I come up with?) an informal distinction between NLP and computational linguistics: NLP enhances humans, while CL tries to understand the cognitive process. At least that's how *I* use these terms. Actually, I was just about to post about this and ask for community feedback, but this is a much better forum for that.

Anonymous said...

Diagnostic systems in clinical practice often work this way. The automatic system suggests things like: if a patient's chart says they have a heart attack, make sure their drug list contains beta blockers. It doesn't automatically start injecting people -- it calls things to a clinician's attention.

Diagnostic coding like ICD-9 often works this way, too. The problem is to match hand-written or dictated clinical notes and discharge summaries and other bits of text with a huge ontology of diseases, procedures, and whatnot. Classification systems in play here are more like search aids than auto-coders.

Clinical dictation is also done with machine-assistance. A clinician dictates some text over the phone, an automatic system transcribes it, and it then gets proofed and corrected by humans.

Customer help desk apps work the same way. You get a query via some kind of IM or e-mail or web form, and you can auto-suggest a bunch of answers for a human to select from or modify.

Anonymous said...

A lot of the work on applications of machine translation is actually to assist, rather than replace, translators, by giving them the option of post-editing MT output.

Anonymous said...

There is a misconception about human and machine learning: a certain subtlety will always require augmentation by humans. An example of augmented NLP at work can be seen at www.topodia.com - you'll have to dig into the application to discover it, but it's pretty clear that the aggregate of indexes ranked by some relationship to popularity will eventually converge to produce a more meaningful result than the machine-only method...

Unknown said...

What about synthetic telepathy? It's not terribly well-developed and a wicked hard problem, but there is some current funding and from my perspective a good target to shoot for, as there's a wide range of augmentation benefits.
