27 September 2014

AMR: Not semantics, but close (? maybe ???)

Okay, necessary warning: I'm not a semanticist. I'm not even a linguist. The last time I took semantics was twelve years ago (sigh).

Like a lot of people, I've been excited about AMR (the "Abstract Meaning Representation") recently. It's hard not to get excited. Semantics is all the rage. And there are those crazy people out there who think you can cram the meaning of a sentence into a !#$* vector [1], so the part of me that likes Language likes anything that has interesting structure and calls itself "Meaning." I effluviated about AMR in the context of the (awesome) SemEval panel.

There is an LREC paper this year from whose title I stole the title of this post: Not an Interlingua, But Close: A Comparison of English AMRs to Chinese and Czech by Xue, Bojar, Hajič, Palmer, Urešová and Zhang. It's a great introduction to AMR, and you should read it (or at least skim it).

What I guess I'm interested in discussing is not whether AMR is a good interlingua, but whether it's a semantic representation. Note that it doesn't claim to be one: it's not called ASR. But since semantics is the study of the relationship between signifiers and their denotations, [Edit: it's a reasonable place to look; see Emily Bender's comment.] it's probably the closest thing we have.


We've spent some time looking at the data, to try to understand what is actually there. What surprised me was how un-semantics-y AMR tends to be. The conclusion I reached is that it might be much closer to a sort of D-structure (admittedly with some word sense disambiguation) than to a semantics (sorry, I grew up during GB days and haven't caught on to the whole minimalism thing). And actually it's kind of dubious even as a D-structure...

Why do I say that? A handful of reasons; I'll give examples of some of them. All of these examples are from the 1274 training AMRs for The Little Prince.

Gapped agents in matrix clauses

Example: "... insisted the little prince , who wanted to help him ."

(i / insist-01
  :ARG0 (p / prince
          :mod (l / little)
          :ARG0-of (w / want-01
                     :ARG1 (h / help-01
                             :ARG1 (h2 / he))))
  :ARG1 (...))

Why does this surprise me? I would strongly have expected "p" (the Little Prince) to also be the ARG0 of help-01, say via a reentrant ":ARG0 p" edge under help-01. But in this representation, help has no ARG0 at all.

You could argue that you could write a tool to transform such matrix clauses, re-inserting the ARG0 of the matrix verb as the ARG0 of the subordinate verb, but this doesn't work in general. For instance, "I always want to rest" in the dataset also doesn't have "I" as an argument of "rest." Unfortunately, the rest-er in the PropBank/VerbNet definition of rest is (correctly) its ARG1/theme. So the deterministic mapping fails: under it, "I always want to rest" would mean "I always want to cause-something-unknown to rest," which is clearly different.
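
To make the failure concrete, here's a minimal Python sketch of that deterministic rule (the table and names are mine, purely for illustration):

    # Illustrative only: the naive rule copies the matrix verb's ARG0 onto
    # the embedded verb as its ARG0, regardless of the embedded frame.
    def naive_reinsert(matrix_arg0, embedded):
        embedded[":ARG0"] = matrix_arg0
        return embedded

    # What the frames actually want for the controlled argument (per the
    # PropBank definitions discussed above; the table itself is mine):
    CONTROLLED_ROLE = {"help-01": ":ARG0",  # the helper: rule happens to work
                       "rest-01": ":ARG1"}  # the rest-er/theme: rule is wrong

    embedded = naive_reinsert("i", {"concept": "rest-01"})
    print(embedded)  # {'concept': 'rest-01', ':ARG0': 'i'}
    print("rule correct?", CONTROLLED_ROLE[embedded["concept"]] == ":ARG0")
    # False: the edge should have been :ARG1 i ("I" is the thing at rest)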

Perhaps this is an annotation error? I found the same "error" in "And I knew that I could not bear the thought of never hearing that laughter any more" (the ARG0 of "hear" is missing), but there are a few cases where the subordinate clause does get an agent, namely:
  • She did not wish to go out into the world all rumpled...
    ("go" correctly gets "she" as the ARG0)
  • ...that she wished to appear...
    ("appear" gets "she" as the ARG0)

So I'm not sure what's going on here. At least it's inconsistent...

Noun-noun compounds

In a semantic representation, I would expect the interpretation of noun-noun compounds to be disambiguated.

For instance, the string "only one ring of petals" is annotated as:

        (r / ring :quant 1
                  :mod (o / only)
                  :consist-of (p / petal))

But the string "glass globe" is simply:

        (g / globe
                  :mod (g2 / glass))

In other words, the "ring of petals" is a ring that consists of petals, but the "glass globe" is just a "globe" modified by "glass." We have no idea what sort of modification this is, though one could argue it's the same consist-of relation that was used for the ring of petals.
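
To see what's missing, here is a toy sketch of the lookup a disambiguated representation would need (the table, and the relation I assign to "glass globe," are my guesses by analogy with "ring of petals"; nothing here is from the AMR guidelines):

    # Illustrative only: the same N1-N2 surface pattern needs different
    # relations, so a bare :mod is underspecified.
    COMPOUND_RELATION = {
        ("glass", "globe"): ":consist-of",  # a globe made of glass (my guess)
        ("petal", "ring"):  ":consist-of",  # as annotated for "ring of petals"
    }

    def compound_amr(modifier, head):
        # fall back to the underspecified :mod the corpus actually uses
        rel = COMPOUND_RELATION.get((modifier, head), ":mod")
        return f"(x / {head} {rel} (y / {modifier}))"

    print(compound_amr("glass", "globe"))    # (x / globe :consist-of (y / glass))
    print(compound_amr("sheep", "drawing"))  # unknown pair: (x / drawing :mod (y / sheep))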

As Ewan Dunbar has poignantly pointed out, disambiguating noun-noun compounds is important when translating into many other languages.

Possession

There are many types of possession, many of which can be turned into predicates:
  • Erin's student => the student Erin advises
  • Julio's paper => the paper Julio wrote
  • Smita's football team => the team Smita plays on; the team Smita owns; the team Smita roots for
  • Kenji's speech => the manner in which Kenji talks; the speech Kenji gave (these last are actually hard to predicatize given my inventory of English verbs)
(Thanks to Ani Nenkova for most of these examples.)

In AMR, the rule appears to be that apostrophe-s ('s) turns into :poss: you get (student :poss Erin), (paper :poss Julio), (team :poss Smita), and (speech :poss Kenji).

This is a bit strange, because the Norman genitive of possession in English ("of," as opposed to the Saxon genitive "'s") does not turn into :poss. For instance, "other side of the planet" becomes:

                     (s2 / side
                              :mod (o / other)
                              :part-of (p2 / planet))

Here, the "of" has been disambiguated into :part-of; in contrast, with "air of authority", we get:

         (a / air
            :domain (a2 / authority))

where the "of" has turned into ":domain". In fact, I cannot find any cases where there is a :poss and no "'s" (or his/her/etc...).

Now, you could argue that "planet's side" and "authority's air" sound at best poetic and at worst wrong (though I find "planet's other side" pretty acceptable). But that's basically a property of English: the equivalents are totally fine in Japanese, with の/no as the possessive marker (according first to my prior, and then confirmed by my Japanese informant Alvin -- thanks Alvin :P). I'm guessing they're okay in Chinese too (with 的/de as the possessive marker), but I'm not positive.

Moreover, possession is more complicated than that: lots of languages distinguish between alienable and inalienable possession. A classic example is "my aunt" versus "my car." My aunt is inalienable because, try as I might, I cannot sell, buy, exchange, etc., my aunt. My car is alienable because I can do these things. In lots of languages (about half, according to WALS), there is more than one class of possession, and the two-class (in)alienable distinction is the most common.

As an example (taken from that WALS link, originally due to Langdon (1970)): in Mesa Grande Diegueño (Yuman; California), inalienable nouns like mother (ətalʸ) take a simple 1sg prefix ʔ- ('my'), while alienable nouns like house (ewaː) take the compound prefix ʔə-nʸ-, as in:

    a. ʔ-ətalʸ
       1sg-mother
       ‘my mother’ 

    b. ʔə-nʸ-ewaː
       1sg-alienable-house
       ‘my house’ 

So you could argue that in this case (in)alienability is purely a property of the possessed noun, and so even in Mesa Grande Diegueño you could annotate :poss and then disambiguate based on the semantics of the possessee.
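
Here's that argument as a toy sketch (the surface forms are from the Langdon data above; the noun-class list is illustrative):

    # Illustrative only: if (in)alienability is a lexical property of the
    # possessed noun, a single :poss can still be realized correctly by
    # looking the noun up in its class.
    INALIENABLE = {"ətalʸ"}  # kin terms like 'mother'; membership is lexical

    def diegueno_1sg_possessive(noun):
        if noun in INALIENABLE:
            return "ʔ-" + noun       # simple 1sg prefix
        return "ʔə-nʸ-" + noun       # compound 1sg + alienable prefix

    print(diegueno_1sg_possessive("ətalʸ"))  # ʔ-ətalʸ  'my mother'
    print(diegueno_1sg_possessive("ewaː"))   # ʔə-nʸ-ewaː  'my house'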


Wrap Up

I could go on, but perhaps I've made my point. Which is not that AMR sucks or is uninteresting or anything like that. It's just that even if we can parse English into AMR, there's still a long way to go before we can start doing semantic reasoning from it. And maybe along the way you learned something. I know I did.

I think what really drove home for me that AMR is not so much a semantic representation is the ease with which I could imagine writing a rule-based generator from AMR to English. Yes, the sentences would come out stodgy and kind of wrong, but given an AMR and an in-order traversal of the tree, I'm pretty sure I could generate some fairly reasonable sentences (famous last words, right?). I believe this is true even if you first reified the AMR into a Hobbs-esque flat form: the first step would be un-reification, which would basically amount to choosing a root, and then proceeding from there. Like all NLP papers written in the 80s, perhaps the devil is in the details, in which case I'll be happy to be proved wrong.
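
To be concrete, here is the kind of toy generator I have in mind, as a minimal sketch; the tree encoding and the templates are made up for this post, not any real AMR toolkit:

    # Illustrative only: realize an AMR-ish tree by a simple traversal.
    # An AMR here is a tuple (variable, concept, [(role, child), ...]).
    def realize(amr):
        var, concept, edges = amr  # var is unused in this toy version
        mods = [realize(child) for role, child in edges if role == ":mod"]
        args = {role: realize(child) for role, child in edges
                if role.startswith(":ARG")}
        head, _, sense = concept.rpartition("-")
        if sense.isdigit():  # verbal concept: stodgy "ARG0 verb ARG1"
            parts = [args.get(":ARG0", "something"), head, args.get(":ARG1", "")]
            return " ".join(p for p in parts if p)
        return " ".join(mods + [concept])  # nominal: modifiers then head

    prince = ("p", "prince", [(":mod", ("l", "little", []))])
    helping = ("h", "help-01", [(":ARG1", ("h2", "he", []))])
    print(realize(("w", "want-01", [(":ARG0", prince), (":ARG1", helping)])))
    # -> "little prince want something help he": stodgy and kind of wrong,
    #    but recognizably the sentence.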

One interesting final point for me: as established in the paper I stole this title from, AMR is actually a pretty reasonable interlingua. But it's a pretty crappy semantic representation (IMO). That sort of breaks the typical MT Vauquois triangle. That alone is kind of interesting :).




[1] Due to Ray Mooney; I saw it on Twitter but now can't find the picture of the slide any more.