tag:blogger.com,1999:blog-19803222.post115074198294616017..comments2024-03-18T01:45:45.724-06:00Comments on natural language processing blog: Having an Impacthalhttp://www.blogger.com/profile/02162908373916390369noreply@blogger.comBlogger11125tag:blogger.com,1999:blog-19803222.post-1151861416084830002006-07-02T11:30:00.000-06:002006-07-02T11:30:00.000-06:00Bob, I had another thought on the "things past two...Bob, I had another thought on the "things past two decades are forgotten." I don't think this is really necessarily true. We just don't remember them as papers.
Especially for the younguns among us: I never read the Sparck-Jones tf-idf paper, or the Viterbi paper or the unification paper, but I <I>learned</I> about these things in classes and clearly something like tf-idf has had more of an impact than, say, parsing with maxent models (no offense to Adwait). But I think that, at least for current students, we think of these things as "known," rather than "research." (I had to specifically seek out the Brown 93 paper to read a few years back, and that took some doing.) This of course renders "influential papers" lists rather bogus, but I don't think it means we don't know or care about the older stuff.halhttps://www.blogger.com/profile/02162908373916390369noreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1151448462953581922006-06-27T16:47:00.000-06:002006-06-27T16:47:00.000-06:00My sense is that techniques vary more over time th...My sense is that <I>techniques</I> vary more over time than <I>problems</I>. CRFs, max margin stuff, etc. are very popular now, though the problem foci (MT, parsing) haven't changed in 50 years (if anything, my sense is that things were actually broader 15-25 years ago than they are now). Of course, this is outside my first-hand knowledge, so I may be wrong :).halhttps://www.blogger.com/profile/02162908373916390369noreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1151421114350703232006-06-27T09:11:00.000-06:002006-06-27T09:11:00.000-06:00I think there's some irony to a data-mining expert...I think there's some irony to a data-mining expert wondering which topic you should research next.
:-)<BR/><BR/>Though maybe rather than trying to find the topic everyone is discussing, you really want to find the area no one is discussing.John_Casshttps://www.blogger.com/profile/06879960964396128190noreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1150987790782158442006-06-22T08:49:00.000-06:002006-06-22T08:49:00.000-06:00John -- I mean research.Kevin -- I agree new field...John -- I mean research.<BR/><BR/>Kevin -- I agree new fields are fun, and probably the direction I will go. But there are two caveats. (1) It can sometimes be <I>very</I> difficult to break ground, publishing-wise. People seem somewhat psychologically more comfortable with old ground. Moreover, and more importantly, if you start a new field, you have to solve a lot of problems, the biggest of which (from a publications perspective) is evaluation. In my experience (and the experience of many people I've talked to), this is the easiest way to have a paper killed: someone can always find something wrong with your evaluation method. (2) I don't know how much one should worry about this, but for a lot of reasonably commercially viable web-based things, I have some concern that a larger organization (e.g., Google, Yahoo, MSN) would essentially scoop whatever new cool thing I try (not intentionally -- good ideas just seem to pop up in different places all the time) and "win" just on the basis of having more data. While this hasn't happened to me, it's a bit scary and I'd imagine somewhat frustrating.<BR/><BR/>Bob -- very insightful :). I'd imagine that of the things on the list, the only thing that has any hope of mattering is the IBM paper, if for no other reason than it really brought home the idea of alignments to NLP (of course they existed in speech for quite some time before that)...or, at least, having not been around in 1993, my <I>impression</I> is that this paper did that. But I think your point is well taken.
There is some sense in which some of the old stuff might be coming back (the PARC people have been publishing on LFG for a few years now, some of the syntactic MT work at ISI is starting to have more and more of a unification perspective, though they don't call it that, etc.). I even know of some people who (I believe unsuccessfully, unfortunately) tried to reinvent Hobbs's abductive reasoning using modern statistical techniques (note that abduction is really nothing more than Bayesian inference; the "bang" that Hobbs gets is just the explaining-away phenomenon).<BR/><BR/>So what can we learn from this? That no one will remember what we do? That we should stick to whatever's the hot topic of the day? It seems there must be a positive in here, but I'm searching for it...halhttps://www.blogger.com/profile/02162908373916390369noreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1150924145374353712006-06-21T15:09:00.000-06:002006-06-21T15:09:00.000-06:00The take-away message from the most "impactful" pa...The take-away message from the most "impactful" papers list is that we forget most of what happened more than 10 years ago. <BR/><BR/>What about long-term impact? My vote's for Shannon's seminal paper on information theory -- he introduced character n-grams and the noisy channel model. In the late 1940s. Or Viterbi's first-best HMM decoder? <BR/><BR/>As a field, I would claim that there haven't been *any* papers written and published in CL/ACL that'll stand the test of time. <BR/><BR/>The ACL has been around since the 1960s. I remember Ron Kaplan and Bonnie Webber complaining in 1987 that all of the significant work done in the 1970s (Lunar, parallel parsing, etc.) was either forgotten or being reinvented.<BR/><BR/>If someone had asked me the question in 1987, I'd be listing the LFG papers of Kaplan, unification grammars of Shieber, some cool work on coordination by Steedman, nifty abductive reasoning by Hobbs, etc. etc.
<BR/><BR/>My guess is that Hal's current list will look just as quaint in 20 years. Especially since almost none of it is really that NLP-specific, and the stuff that is NLP-specific (Collins's parser, IBM's translation models, Identifinder/Nymble) has been superseded by "better" models.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1150922794794640372006-06-21T14:46:00.000-06:002006-06-21T14:46:00.000-06:00I feel you! The question of doing impactful resear...I feel you! The question of doing impactful research is so important! I've been stuck thinking about my thesis topic for a while now. I want to work on something that has practical impact, either on the research community (as a well-cited paper) or on industry/society (as a good application). However, how does one do this? It seems that one can either enter a crowded field (which guarantees a large audience) and carve out a niche, or invent a new area that has strong future impact. <BR/><BR/>As I see it, the crowded fields now are MT and parsing. To gain some recognition here, one really needs to invent some new model/technique that outperforms all the other systems (e.g. David Chiang's Hiero system), or find a less-studied area such as morphology in MT or adaptation of parsers. <BR/><BR/>Regarding new fields, personally I think things that deal with the web, such as sentiment analysis of reviews, social networks, blogs, and <A HREF="http://turing.cs.washington.edu/papers/aaai06.pdf" REL="nofollow">machine reading</A>, etc. may be future killer applications because they have direct social ramifications. I also think that productivity tools, such as email analysis, may have a huge market as future office workers become buried in information. Another area that does not directly relate to NLP but has potential is multimedia communication--there is an increasingly large repository of video and audio content on the web (e.g., youtube.com) which requires easy browsing and searching.
NLP and summarization of these data may open up some opportunities. <BR/><BR/>Having said all that, I still pretty much want to work on machine learning. Personally, I'd like to develop a research direction that allows me to straddle both machine learning and NLP, applying advanced machine learning methods to NLP, and inspiring new machine learning problems from NLP. This led me to the following question: are there types of problems in NLP that haven't been investigated in the machine learning community? <BR/><BR/>After I'd been stuck figuring out a thesis topic for a while, my advisor suggested that I just start work on *something*. Anything is fine. I guess the act of working on some project may inspire me to think of better ideas.Kevin Duhhttps://www.blogger.com/profile/07407894290644783502noreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1150812713621438072006-06-20T08:11:00.000-06:002006-06-20T08:11:00.000-06:00Do you mean research or working for an organizatio...Do you mean research or working for an organization?John_Casshttps://www.blogger.com/profile/06879960964396128190noreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1150780494103205242006-06-19T23:14:00.000-06:002006-06-19T23:14:00.000-06:00Mind.Forth, the quintessential NLP AI that I hav...<A HREF="http://mind.sourceforge.net/mind4th.html" REL="nofollow">Mind.Forth</A>, the quintessential NLP AI that I have been working on for a decade (and had a <A HREF="http://artilectworld.com/html/mentifex.html" REL="nofollow">breakthrough</A> with a few weeks ago), has elements of both <A HREF="http://mind.sourceforge.net/parser.html" REL="nofollow">parsing</A> and <A HREF="http://mind.sourceforge.net/think.html#mt" REL="nofollow">machine translation</A>. Alas, however, I do not know how to direct anybody towards a career in open-source artificial intelligence.
I can only ask that NLP-ers download Mind.Forth, run it in <A HREF="http://mind.sourceforge.net/m4thuser.html#tutorial" REL="nofollow">Tutorial</A> mode to watch the artificial mind think -- and show it to as many interested persons as possible. Then maybe jobs in NLP AI will begin to emerge. Best of luck!Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1150749566276328572006-06-19T14:39:00.000-06:002006-06-19T14:39:00.000-06:00Yes, parsing = parsing a sentence into syntactic u...Yes, parsing = parsing a sentence into syntactic units. Although there is a current trend toward using syntactic information for translation, the current best systems (according to some metrics) do not use syntax at all, but essentially segment a foreign sentence into phrases, translate the phrases independently by memorizing potential translations, then reorder them to try to produce fluent output. I'm planning on putting together a "getting started in" for MT soon, but the current story is: no.halhttps://www.blogger.com/profile/02162908373916390369noreply@blogger.comtag:blogger.com,1999:blog-19803222.post-1150749311388291672006-06-19T14:35:00.000-06:002006-06-19T14:35:00.000-06:00By parsing, do you mean parsing a sentence?If so, then...By parsing, do you mean parsing a sentence?<BR/>If so, then doesn't machine translation require parsing?<BR/><BR/>I am just a high schooler right now, so I apologize if the question is a bit stupid.Anonymousnoreply@blogger.com
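hal's description of phrase-based MT above -- segment the foreign sentence into phrases, translate each phrase independently from a memorized table, then reorder -- can be sketched as a toy. Note the phrase table, the example sentence, and the hard-coded adjective-swap rule below are all invented for illustration; real systems learn the phrase table from aligned bitext and score reorderings with learned distortion and language models.

```python
# Toy phrase-based translation: segment -> translate phrases -> reorder.
# Everything here is a made-up illustration, not a real MT system.

PHRASE_TABLE = {  # memorized foreign phrase -> English translation
    ("la",): ("the",),
    ("casa",): ("house",),
    ("verde",): ("green",),
    ("es", "grande"): ("is", "big"),
}

ADJECTIVES = {("green",)}  # stand-in for a learned reordering model

def segment(words):
    """Greedily cover the sentence with the longest known phrases."""
    phrases, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):  # try the longest span first
            span = tuple(words[i:j])
            if span in PHRASE_TABLE:
                phrases.append(span)
                i = j
                break
        else:
            raise KeyError(f"no phrase in the table covers {words[i]!r}")
    return phrases

def translate(sentence):
    phrases = segment(sentence.split())
    english = [PHRASE_TABLE[p] for p in phrases]  # translate independently
    # "Reordering": move a postposed adjective in front of its noun.
    for k in range(len(english) - 1):
        if english[k + 1] in ADJECTIVES:
            english[k], english[k + 1] = english[k + 1], english[k]
    return " ".join(w for phrase in english for w in phrase)

print(translate("la casa verde es grande"))  # -> the green house is big
```

The monotone (unreordered) output would be "the house green is big"; the single swap rule plays the role that a distortion model plus language model play in a real decoder, choosing among candidate phrase orders.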