Like a lot of people, I've been excited about AMR (the "Abstract Meaning Representation") recently. It's hard not to get excited. Semantics is all the rage. And there are those crazy people out there who think you can cram the meaning of a sentence into a !#$* vector [1], so the part of me that likes Language likes anything that has interesting structure and calls itself "Meaning." I effluviated about AMR in the context of the (awesome) SemEval panel.
There is an LREC paper this year from which I stole the title of this post: Not an Interlingua, But Close: A Comparison of English AMRs to Chinese and Czech by Xue, Bojar, Hajič, Palmer, Urešová and Zhang. It's a great introduction to AMR and you should read it (or at least skim it).
What I guess I'm interested in discussing is not the question of whether AMR is a good interlingua but whether it's a semantic representation. Note that it doesn't claim this: it's not called ASR. But as semantics is the study of the relationship between signifiers and denotation, it's probably the closest we have.
We've spent some time looking at the data (dummy) to try to understand what is actually there. What surprised me was how un-semantics-y AMR tends to be. The conclusion I reached is that it might be much closer to a sort of D-structure (admittedly with some word sense disambiguation) than a semantics (sorry, I grew up during GB days and haven't caught on to the whole minimalism thing). And actually it's kind of dubious even as a D-structure...
Why do I say that? For a handful of reasons; I'll give examples of some of them. All of these examples are from the 1274 training AMRs for The Little Prince.
Gapped agents in matrix clauses
Example: "... insisted the little prince , who wanted to help him ."
(i / insist-01
   :ARG0 (p / prince
      :mod (l / little)
      :ARG0-of (w / want-01
         :ARG1 (h / help-01
            :ARG1 (h2 / he))))
   :ARG1 (...))
Why does this surprise me? I would strongly have expected "p" (the Little Prince) to be the ARG0 of help. But in this representation, help doesn't have any ARG0.
You could argue that one could write a tool to transform such matrix clauses, re-inserting the ARG0 of the matrix verb as the ARG0 of the subordinate verb, but this doesn't work in general. For instance, "I always want to rest" in the dataset also doesn't have "I" as an argument of "rest." Unfortunately, the rest-er in the PropBank/VerbNet definition of rest is (correctly) its ARG1/theme. So this deterministic mapping doesn't work: if you applied it, the interpretation would be that "I always want to rest" means "I always want to cause-something-unknown to rest," which is clearly different.
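To make that concrete, here is a minimal sketch of such a tool, written over a made-up nested-dict encoding of AMRs (not any official format), together with the way it goes wrong on "rest":

# A sketch of the naive repair rule: copy the matrix verb's :ARG0 down into
# any argument clause that lacks one. The nested-dict AMR encoding here is
# made up for illustration.
def copy_matrix_agent(node):
    """Recursively insert the matrix :ARG0 into ARG-clauses missing one."""
    if not isinstance(node, dict):
        return
    agent = node.get(":ARG0")
    for role, child in node.items():
        if role.startswith(":ARG") and isinstance(child, dict):
            if agent is not None and ":ARG0" not in child:
                child[":ARG0"] = agent  # the deterministic guess
            copy_matrix_agent(child)

# "... the little prince, who wanted to help him"
want_help = {"pred": "want-01", ":ARG0": "prince",
             ":ARG1": {"pred": "help-01", ":ARG1": "he"}}
copy_matrix_agent(want_help)
print(want_help[":ARG1"])  # help-01 gets :ARG0 = prince -- correct here

# "I always want to rest": the rule fires identically, but the rester is
# (correctly) rest-01's ARG1/theme, so :ARG0 = "i" asserts the wrong role.
want_rest = {"pred": "want-01", ":ARG0": "i",
             ":ARG1": {"pred": "rest-01"}}
copy_matrix_agent(want_rest)
print(want_rest[":ARG1"])  # rest-01 gets :ARG0 = "i" -- wrong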
Perhaps this is an annotation error? I found the same "error" in "And I knew that I could not bear the thought of never hearing that laughter any more" (the ARG0 of "hear" is missing) but there are a few cases where the subordinate clause does get an agent; namely:
- She did not wish to go out into the world all rumpled... ("go" correctly gets "she" as the ARG0)
- ...that she wished to appear... ("appear" gets "she" as the ARG0)
So I'm not sure what's going on here. At least it's inconsistent...
Noun-noun compounds
In a semantic representation, I would expect the interpretation of noun-noun compounds to be disambiguated.
For instance, the string "only one ring of petals" is annotated as:
(r / ring :quant 1 :mod (o / only) :consist-of (p / petal))
But the string "glass globe" is simply:
(g / globe :mod (g2 / glass))
In other words, the "ring of petals" is a ring that consists of petals. But the "glass globe" is just a "globe" modified by "glass." We have no idea what sort of modification this is, though one could guess it's the same consist-of relation that was used for the ring of petals.
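If you wanted to hunt for this kind of inconsistency at scale, something like the following sketch would do; the (modifier, AMR string) pairs and the corpus format are made up, but the two AMRs are the ones above:

import re

# A sketch for surfacing how noun-noun compounds get annotated: find the
# role whose filler's concept is the compound's modifier noun.
def relation_for(amr, modifier):
    """Return the role linking the head to the given modifier concept."""
    m = re.search(r"(:[\w-]+) \(\w+ / " + re.escape(modifier), amr)
    return m.group(1) if m else None

print(relation_for("(r / ring :quant 1 :mod (o / only) :consist-of (p / petal))",
                   "petal"))                                   # :consist-of
print(relation_for("(g / globe :mod (g2 / glass))", "glass"))  # :mod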
As made poignant by Ewan Dunbar, disambiguating noun-noun compounds is important when translating into many other languages.
Possession
There are many types of possession, many of which can be turned into predicates:
- Erin's student => the student Erin advises
- Julio's paper => the paper Julio wrote
- Smita's football team => the team Smita plays on; the team Smita owns; the team Smita roots for
- Kenji's speech => the manner in which Kenji talks; the speech Kenji gave (these last are actually hard to predicatize given my inventory of English verbs)
In AMR, it appears the rule is that apostrophe-s ('s) turns into :poss. You (appear to) get (student :poss Erin) and (paper :poss Julio) and (team :poss Smita) and (speech :poss Kenji).
This is a bit strange, because the Norman genitive ("of") form of possession in English (as opposed to the Saxon genitive "'s") does not turn into :poss. For instance, "other side of the planet" becomes:
(s2 / side :mod (o / other) :part-of (p2 / planet))
Here, the "of" has been disambiguated into :part-of; in contrast, with "air of authority", we get:
(a / air :domain (a2 / authority))
where the "of" has turned into ":domain". In fact, I cannot find any cases where there is a :poss and no "'s" (or his/her/etc...).
Now, you could argue that "planet's side" and "authority's air" sound at best poetic and at worst wrong. (Though I find "planet's other side" pretty acceptable.) But this is basically a property of English: the equivalents are totally fine in Japanese with の/no as the possessive marker (according first to my prior and then confirmed by my Japanese informant Alvin -- thanks Alvin :P). I'm guessing they're okay in Chinese too (with 的/de as the possessive marker), but I'm not positive.
Moreover, possession is more complicated than that in lots of languages that distinguish between alienable and inalienable possession. A classic example is "my aunt" versus "my car." My aunt is inalienable because, try as I might, I cannot sell, buy, exchange, etc., my aunt. My car is alienable because I can do these things. In lots of languages (about half, according to WALS), there is more than one class of possession, and the two-class (in)alienable distinction is the most common.
As an example (taken from that WALS link, originally due to Langdon (1970)): In Mesa Grande Diegueño (Yuman; California), inalienable nouns like mother (ətalʸ) take a simple prefix ʔ- ('my'), while alienable nouns like house (ewaː) take the compound prefix ʔə-nʸ-, as in:
a. ʔ-ətalʸ 1sg-mother ‘my mother’
b. ʔə-nʸ-ewaː 1sg-alienable-house ‘my house’
So you could argue that in this case it's purely a property of the possessed noun, and so even in Mesa Grande Diegueño, you could say :poss and then disambiguate by the semantics of the possessee.
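In code, that disambiguation is just a lexical lookup. Here's a toy sketch using the two nouns from the WALS/Langdon example; any further entries would need a real Diegueño lexicon, which I don't have, and the function itself is hypothetical:

# A toy sketch: realizing :poss in Mesa Grande Diegueño only requires the
# possessee's lexical (in)alienability class.
INALIENABLE = {"ətalʸ"}  # 'mother'
ALIENABLE = {"ewaː"}     # 'house'

def realize_1sg_poss(possessee):
    """Pick the first-person possessive prefix by lexical class."""
    if possessee in INALIENABLE:
        return "ʔ-" + possessee      # 1sg-mother: 'my mother'
    if possessee in ALIENABLE:
        return "ʔə-nʸ-" + possessee  # 1sg-alienable-house: 'my house'
    raise KeyError("no lexicon entry for " + possessee)

print(realize_1sg_poss("ətalʸ"))  # ʔ-ətalʸ
print(realize_1sg_poss("ewaː"))   # ʔə-nʸ-ewaː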
Wrap Up
I could go on, but perhaps I've made my point. Which is not that AMR sucks or is uninteresting or anything like that. It's just that even if we can parse English into AMR, there's a long way to go before we can start doing semantic reasoning from it. And maybe along the way you learned something. I know I did.
I think what really drove it home for me that AMR is not so much a semantic representation is the ease with which I could imagine writing a rule-based generator from AMR to English. Yes, the sentences would come out stodgy and kind of wrong, but given an AMR and doing an in-order traversal of the tree, I'm pretty sure I could generate some fairly reasonable sentences (famous last words, right?). I believe this is true even if you took the AMR and reified it into a Hobbs-esque flat form first. The first step would be un-reification, which would basically amount to choosing a root, and then going from there. As with all NLP papers written in the 80s, perhaps the devil is in the details, in which case I'll be happy to be proved wrong.
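Here's roughly what I have in mind, as a toy sketch: the tuple encoding and the realization order are mine, and a real generator would need morphology, articles, and much more.

# A toy in-order AMR-to-English generator: realize :ARG0 first, then the
# predicate, then the remaining arguments. Output is meant to be stodgy.
def realize(node):
    """Naive in-order realization of a (concept, {role: child}) AMR."""
    if isinstance(node, str):
        return node
    concept, roles = node
    word = concept.split("-")[0]  # strip the sense tag: want-01 -> want
    mods = [realize(v) for r, v in roles.items() if r == ":mod"]
    subj = roles.get(":ARG0")
    rest = [realize(v) for r, v in roles.items()
            if r not in (":ARG0", ":mod")]
    parts = ([realize(subj)] if subj else []) + mods + [word] + rest
    return " ".join(parts)

amr = ("want-01", {":ARG0": ("prince", {":mod": ("little", {})}),
                   ":ARG1": ("help-01", {":ARG1": ("he", {})})})
print(realize(amr))  # little prince want help he -- stodgy, as promised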
One interesting final point for me is that, as established in the paper I stole this title from, AMR is actually a pretty reasonable interlingua. But it's a pretty crappy semantic representation (IMO). This sort of breaks the typical MT Vauquois triangle. That alone is kind of interesting :).
[1] Due to Ray Mooney; I saw it on Twitter but now can't find the picture of the slide any more.
Hal, thanks for the insightful post.
First, I'd like to emphasize that one of the chief goals of AMR (in contrast to the semantic representations of the olden days) is to enable rapid human annotation of corpora with broad coverage. Thus, there are many things about it that are shallower than one might like, such as the lack of any formal relationship between the predicates kill-01 and die-01. It is a (sometimes difficult!) tradeoff.
Second, a historical note: the Little Prince dataset was originally annotated by various people involved in the design of AMR (including myself). The details of individual annotations should be taken with a planet-sized grain of salt. :) Thankfully, the annotation conventions and tool support have become richer since then, and while some of the improvements have been backported ("retrofitted") to that data, there is a great deal of newer data that should be more consistent. For example, :poss was originally superficially applied to genitive constructions—which, as you point out, is far from ideal—but now the definition of :poss is more semantically restricted (documentation here). In my opinion, :consist-of should have been used for glass globe (it is for some sentences, but not others); and there is now a predicate called have-rel-role-91 for kinship and other person-to-person relations.
I guess the broader point is that I hope we can continue to push on AMR where it falls short as a semantic representation, without sacrificing annotator productivity. And in my view, it would also be interesting to explore automatic methods for identifying inconsistencies in richly structured annotations to help with the retrofitting process!
thanks, Nathan -- that's really interesting. i especially like the "call to arms" :)
it's interesting to think about balancing (a) annotator time, (b) learnability (ie to not get back into the olden days again) and (c) what's actually represented. obviously i don't have an answer :)
For the record, my "$&!#* vector" comment is from my ACL-14 Workshop on Semantic Parsing talk, the slides are available on the workshop website. The full quote was: “You can’t cram the meaning of a whole %&!$# sentence into a single $&!#* vector!” I originally exclaimed this in a moment of jetlagged weakness at a small reading group, frustrated by discussing yet another paper that was trying to do this.
Hi Hal,
Interesting to see your take on AMR!
My response is prompted by this comment: "But as semantics is the study of the relationship between signifiers and denotation, [AMR]'s probably the closest we have."
I wonder: What other systems of semantic representation did you consider before making that claim? (I know, I know: blog posts aren't peer reviewed publications...)
In particular, I'd like to draw your attention to Minimal Recursion Semantics (MRS; Copestake et al 2005) and especially the MRS representations produced by the English Resource Grammar (Flickinger 2000, 2011).
The ERG is a broad-coverage, rule-based, linguistically precise grammar for English. You can play with a demo here: erg.delph-in.net
We've also started documenting the particular analyses behind the MRS representations:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/562_Paper.pdf
If it's not a grammar/parser you want but a treebank (or sembank), check out DeepBank, which has representations produced by the ERG over the same old WSJ text as the PTB: http://moin.delph-in.net/DeepBank This is done with the Redwoods treebanking methodology (Oepen et al 2004), meaning the analyses are all grammar-produced but manually disambiguated. DeepBank was the resource behind the DM representation in 2014 SemEval Task 8 (on semantic dependency parsing), though that representation is impoverished compared with the full MRS. (The Redwoods treebanks aren't limited to WSJ text---lots of other genres are available too.)
I think you'll find that ERG-MRS, as compared to AMR, is much more articulated in the representation of things like control and much more consistent because of the annotation methodology. On the other hand, it doesn't do WSD (beyond that which is morphosyntactically required/constrained), including disambiguating the relationship between the members of noun-noun compounds. (On the futility of that one, see Ó Séaghdha, 2007.)
@Ray: thanks!
@Emily: haha, thanks for calling me out on the "closest we have" comment. In retrospect I have no idea why I said that! I know (and love) the ERG. I think comments _are_ the peer-review of blogs and now I can go edit it :).
More seriously, what I think I meant (or maybe my rationalization) is that I wanted to stress that semantics != meaning, but since AMR doesn't claim to be semantics, we'll have to settle for "meaning." Does that make sense? I'll go back and clarify.
@Hal: I think that the distinction between what's sometimes called `standing' meaning (associated with sentence types) and `occasion' or `speaker' meaning (what a speaker is using a sentence to communicate, in a given instance) is critical, and I think that a lot of current work in `semantic processing' in our field doesn't keep this distinction clear. My best understanding of AMR is that they are looking to model sentence meaning (one representation that can be used for multiple different applications), but it seems like a lot of the annotations draw on annotator intuitions about occasion meaning. I'm working on a paper about this and will happily point you to it once it's out!
@Emily: can't wait!
Some of the shortcomings Hal described are indeed simply annotation errors, and other examples are not fully representative of AMR.
For example, AMR annotation guidelines specify to include gapped agents in matrix clauses and to capture the semantics of noun-noun compounds if it can be expressed by a (simple) semantic role. And many instances of English "'s" and "of" are expressed using frames and roles other than :mod or :poss.
On the other hand, for practical reasons, AMR indeed is not an interlingua. Some concepts and roles remain somewhat shallow, particularly if they would require additional concepts not explicit in the original text. But we do strive to continuously deepen AMRs for common patterns as practically feasible. For example, "the president of the United States, Barack Obama" is no longer annotated as "president :poss country" but as
(p / person :wiki "Barack_Obama" :name ...
:ARG0-of (h / have-org-role-91
:ARG1 (c / country :wiki "United_States" :name ...)
:ARG2 (p2 / president)))
Most of the AMRs have only single annotations (which allows us to annotate more AMRs), so there will always be imperfections, as for any non-toy-size corpus, but we continuously correct errors that we find, often based on a growing number of automatic checks that we apply to the AMR corpus.
Other examples of non-:poss "'s" and "of":
Snt: Obama's announcement
(a / announce-01
:ARG0-of (p / person :wiki "Barack_Obama" ...))
Snt: Indonesia's West Papua province
(p / province :wiki "West_Papua_(province)" :name ...
:location (c / country :wiki "Indonesia" ...))
Snt: Wednesday's earthquake
(e / earthquake
:time (d / date-entity :weekday (w / wednesday)))
Snt.: the roof of the house
(r / roof
:part-of (h / house))
Snt.: two gallons of milk
(m / milk
:quant (v / volume-quantity :quant 2 :unit (g / gallon)))
Corrected annotation:
Snt.: "... insisted the little prince , who wanted to help him ."
(i / insist-01
:ARG0 (p / prince
:mod (l / little)
:ARG0-of (w / want-01
:ARG0 p
:ARG1 (h / help-01
:ARG1 (h2 / he))))
:ARG1 (...))
Corrected annotation:
Snt.: glass globe
(g / globe
:consist-of (g2 / glass))
For more examples on noun-noun compounds, see http://www.isi.edu/~ulf/amr/lib/amr-dict.html#implied
I've been reminded that I promised to post a link to the paper-in-progress I mentioned in my comment from 10/2/14 here. It's been out for a few months now; hopefully still of interest:
Bender, Emily M., Dan Flickinger, Stephan Oepen, Woodley Packard and Ann Copestake. 2015. Layers of Interpretation: On Grammar and Compositionality. In Proceedings of the 11th International Conference on Computational Semantics (IWCS 2015), London. pp. 239-249.
http://aclweb.org/anthology/W/W15/W15-0128.pdf