|Title: Designing Abstract Meaning Representations|
|Seminar: Computer Science|
|Speaker: Martha Palmer of the University of Colorado|
|Contact: Jinho Choi, email@example.com|
|Date: 2017-03-17 at 3:00PM|
Abstract Meaning Representations (AMRs) provide a single, graph-based semantic representation that abstracts away from the word order and syntactic structure of a sentence, resulting in a more language-neutral representation of its meaning. AMRs implement a simplified, standard neo-Davidsonian semantics. A word in a sentence either maps to a concept or a relation, or is omitted if it is already inherent in the representation or if it conveys inter-personal attitude (e.g., stance or distancing). The basis of AMR is PropBank's lexicon of coarse-grained senses of verb, noun, and adjective relations, as well as the roles associated with each sense (each lexicon entry is a roleset). By marking the appropriate roles for each sense, this level of annotation provides information about who is doing what to whom. However, unlike PropBank, AMR also provides a deeper representation of discourse relations, non-relational noun phrases, prepositional phrases, quantities, and time expressions (which PropBank largely leaves unanalyzed), as well as Named Entity tags with Wikipedia links. Additionally, AMR makes a greater effort to abstract away from language-particular syntactic facts. The latest version of AMR adds coreference links across sentences, including links to implicit arguments. This talk will explore the differences between PropBank and AMR, the current and future plans for AMR annotation, and the potential of AMR as a basis for machine translation. It will end with a discussion of areas of semantic representation that AMR does not currently address, which remain open challenges.

Martha Palmer is a Professor at the University of Colorado in Linguistics, Computer Science, and Cognitive Science, and a Fellow of the Association for Computational Linguistics. She works on capturing elements of the meanings of words that can be composed into automatic representations of complex sentences and documents.
Supervised machine learning techniques rely on vast amounts of annotated training data, so she and her students are engaged in providing data with word sense tags, semantic role labels, and AMRs for English, Chinese, Arabic, Hindi, and Urdu, both manually and automatically, funded by DARPA and NSF. These methods have also recently been applied to biomedical journal articles, clinical notes, and geo-science documents, funded by NIH and NSF. She is a co-editor of LiLT (Linguistic Issues in Language Technology), has served on the Computational Linguistics editorial board, and is a co-editor of JNLE. She is a past President of the Association for Computational Linguistics, past Chair of SIGLEX and SIGHAN, a co-organizer of the first few Sensevals, and was the Director of the 2011 Linguistics Institute held in Boulder, Colorado.
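For readers unfamiliar with the notation the abstract describes, the AMR guidelines' standard example for "The boy wants to go" can be sketched in a few lines of Python (this illustration is not drawn from the talk itself): the concept labels `want-01` and `go-01` are PropBank-style roleset senses, and the reused variable `b` encodes the reentrancy that the boy is both the wanter and the goer.

```python
def penman(node, seen=None):
    """Render a nested AMR node in PENMAN-style bracket notation.

    A node is a (variable, concept, edges) triple, where edges is a
    list of (role, child-node) pairs. A variable that has already been
    printed is emitted by name only, which is how AMR graphs express
    reentrancy (one entity filling several roles).
    """
    seen = set() if seen is None else seen
    var, concept, edges = node
    if var in seen:          # reentrant reference: just the variable name
        return var
    seen.add(var)
    text = f"({var} / {concept}"
    for role, child in edges:
        text += f" :{role} " + penman(child, seen)
    return text + ")"

# "The boy wants to go" -- the canonical example from the AMR guidelines.
boy = ("b", "boy", [])
amr = ("w", "want-01", [
    ("ARG0", boy),                 # the wanter
    ("ARG1", ("g", "go-01", [     # the thing wanted: a going event
        ("ARG0", boy),            # reentrancy: the boy is also the goer
    ])),
])

print(penman(amr))
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))
```

The printed form abstracts away from word order and syntax exactly as the abstract describes: a passive paraphrase such as "Going is wanted by the boy" would map to the same graph.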