From Words to Actions:
Semantic Interpretation in an Actionable Context
Call for papers is out!
Submission deadline extended to April 14
Effective and seamless human-computer interaction using natural language is arguably one of the major challenges of natural language processing, and of artificial intelligence in general. Significant progress in developing natural language capabilities that support this level of interaction would enable countless applications, and is bound to attract researchers from several AI fields: from robotics to games to the social sciences.
From the natural language processing perspective, the problem is often formulated as a translation task: mapping natural language input to a logical output language that can be executed in the domain of interest. Unlike shallow approaches to semantic interpretation, which provide an incomplete or underspecified interpretation of the natural language input, the output of a formal semantic interpreter is expected to be a complete meaning representation that can be executed directly by a computer system; examples of such systems include robotic control, database access, game playing and more. Current approaches to this task are data-driven: a learning algorithm is given a set of natural language sentences paired with their corresponding logical meaning representations, and learns a statistical semantic parser, i.e., a set of parameterized rules mapping lexical items and syntactic patterns to logical formulas.
In recent years this framework has been challenged by an exciting line of research advocating that semantic interpretation should not be studied in isolation, but rather in the context of the external environment (or computer system) that provides the semantic context for interpretation. This line of research comprises several directions, focusing on grounded semantic representations, flexible semantic interpretation models, and alternative learning protocols driven by indirect supervision signals.
This progress has expanded the scope of semantic interpretation, introduced new domains and tasks, and shown that progress in this direction is possible with reduced manual annotation effort. In particular, it has produced a wide range of models, learning protocols, tasks, and semantic formalisms that, while clearly related, are not directly comparable or understood within a single framework.
The goal of this workshop is to provide researchers interested in the field with an opportunity to exchange ideas, discuss other perspectives, and formulate a shared vision for this research direction.
Topics of Interest

We invite submissions that explore this field from multiple theoretical and experimental perspectives, on topics including, but not limited to:
- Indirect supervision protocols for semantic interpretation
- Modeling and representing an external world
- Incorporating domain knowledge into semantic inference
- Interactive language interpretation
- New domains and tasks
Format

Submissions should follow the NAACL'12 formatting instructions: 8 pages for long papers (plus two additional pages for references), or 4 pages for short papers. We will also accept abstract submissions (1–2 pages) describing previously published work. Submissions must be anonymized.

Important Dates
- April 2: Paper due date
- April 23: Notification of acceptance
- May 04: Camera ready deadline
- June 8: Words-to-Actions Workshop
Program

Morning Session 1
- Learning to Interpret Natural Language Instructions [ PDF ]
  Monica Babes-Vroman, James MacGlashan, Ruoyuan Gao, Kevin Winner, Richard Adjogah, Marie desJardins, Michael Littman and Smaranda Muresan
- Learning Perceptually Grounded Word Meanings from Unaligned Parallel Data
  Stefanie Tellex, Pratiksha Thaker, Josh Joseph, and Nicholas Roy
- 10:05–10:25 Invited Talk: Integrating Natural Language Programming and Programming by Demonstration
  Mehdi Hafezi Manshadi

Morning Session 2
- Invited Talk: Learning to Interpret Natural Language Navigation Instructions from Observations [ Demo ]

Afternoon Session 1
- Invited Talk: Grounded Learning of Semantic Parsers
- Invited Talk: Learning to Represent Semantics

Afternoon Session 2
- Invited Talk: Describing Images with Sentences
- Invited Talk: Learning from Natural Instructions [ PDF ]

Yoshua Bengio, Ray Mooney and Dan Roth