Comments on natural language processing blog: Multiclass learning as multitask learning (hal, http://www.blogger.com/profile/02162908373916390369)

Comment, 2008-07-24T06:27:00.000-06:00:

<I>What this approach lacks in general is the notion that if classes j and k "share" some features (i.e., they have similar weights), then they're more likely to "share" other features.</I><BR/><BR/>I'm not sure about that. Actually, if I understood them correctly, that is essentially what happens when you make L1 and L2 fight each other.
You have two "competing" tendencies in your regularizer: make the feature weight vector of each classifier sparse, and make the use of each feature across all classifiers dense. If your (unregularized) classifier weights are close, they will be drawn toward each other by the L2 term; if they are far apart, they will be kept separate by the L1 term (moving them closer would most likely require using more features). — Vezhnick (https://www.blogger.com/profile/02418231712664125268)
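A minimal sketch of the kind of competing penalty the comment describes — this is my own illustration, not code from the paper under discussion; the matrix layout and the `lam_l1`/`lam_group` names are assumptions. The L1 term pushes each classifier's weight vector toward sparsity, while the group L2 term over each feature's column makes a feature, once "switched on", cheap for all classifiers to share:

```python
import numpy as np

def mixed_norm_penalty(W, lam_l1=1.0, lam_group=1.0):
    """Competing regularizer on a (num_classes x num_features) weight matrix W.

    lam_l1 * sum_jf |W[j, f]|        : drives each classifier's weights sparse.
    lam_group * sum_f ||W[:, f]||_2  : L2 over classes per feature; its marginal
        cost grows sublinearly once a feature's column is nonzero, so a feature
        used by one classifier is cheap for the others to use too (dense sharing).
    """
    l1 = np.abs(W).sum()
    group = np.linalg.norm(W, axis=0).sum()  # L2 down each column, summed over features
    return lam_l1 * l1 + lam_group * group

# Two classes, two features: feature 0 is shared (dense column), feature 1 unused.
W = np.array([[3.0, 0.0],
              [4.0, 0.0]])
print(mixed_norm_penalty(W))  # 7.0 (L1) + 5.0 (group) = 12.0
```

Minimizing a loss plus this penalty trades off the two tendencies exactly as described: the group term draws classifiers toward reusing the same columns, while the L1 term keeps each individual weight vector sparse.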