Abstract - Machine Learning || Jonathan Berant

Tel Aviv University, working in Bar-Ilan University's NLP group under the supervision of Ido Dagan and Jacob Goldberger.



Global Learning of Entailment Graphs

One of the key challenges in developing natural language understanding applications such as Question Answering, Information Retrieval, and Information Extraction is overcoming the variability of semantic expression, namely the fact that the same meaning can be expressed in natural language by many different phrases. In this work, we address a crucial component of this problem: learning inference rules, or entailment rules, between natural language predicates, such as “X buy from Y --> Y sell to X”.

Previous work has focused on estimating each entailment rule independently of the others, but clearly there are interactions between different entailment rules. We address this issue by modelling the problem of learning entailment rules as a graph learning problem (over structures we term “entailment graphs”), and attempt to learn graphs that are “coherent” in the sense that they obey certain global properties, such as transitivity. We formulate the problem as an Integer Linear Program (ILP) and introduce two algorithms that scale the use of ILP solvers to larger entailment graphs. We learn entailment graphs in two scenarios: (1) where one of the arguments is instantiated (X increase asthma symptoms --> X affects asthma), and (2) where the arguments are typed (Xcountry conquer Ycity --> Xcountry invade Ycity), and show an improvement in performance over previous state-of-the-art algorithms. We also show that our scaling techniques increase the recall of the algorithm without harming precision.
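To make the global formulation concrete, the sketch below shows the core idea on a toy graph: each directed edge (entailment rule) gets a binary variable, the objective sums local confidence scores over the selected edges, and transitivity (u entails v and v entails w implies u entails w) is enforced as a hard constraint. The predicates and scores here are invented for illustration, and exhaustive search stands in for an ILP solver, which is only feasible on tiny graphs.

```python
from itertools import product, permutations

# Hypothetical local entailment scores (positive = evidence for the rule,
# negative = evidence against). These numbers are made up for illustration.
preds = ["X buy from Y", "Y sell to X", "X pay Y"]
score = {
    ("X buy from Y", "Y sell to X"): 2.0,
    ("Y sell to X", "X buy from Y"): 1.5,
    ("X buy from Y", "X pay Y"): 0.5,
    ("Y sell to X", "X pay Y"): -0.5,
    ("X pay Y", "X buy from Y"): -1.0,
    ("X pay Y", "Y sell to X"): -1.0,
}

def best_transitive_graph(preds, score):
    """Enumerate all 0/1 edge assignments (tiny graphs only), keep those
    satisfying transitivity, and return the one maximizing the summed
    local scores -- the same objective an ILP solver optimizes at scale."""
    edges = [(u, v) for u, v in permutations(preds, 2)]
    best, best_val = None, float("-inf")
    for assignment in product([0, 1], repeat=len(edges)):
        x = dict(zip(edges, assignment))
        # transitivity constraint: x_uv + x_vw - x_uw <= 1
        if any(x[(u, v)] and x[(v, w)] and not x[(u, w)]
               for u, v, w in permutations(preds, 3)):
            continue
        val = sum(score[e] * x[e] for e in edges)
        if val > best_val:
            best, best_val = {e for e in edges if x[e]}, val
    return best, best_val

edges, val = best_transitive_graph(preds, score)
```

Note how the constraint couples decisions: accepting both “X buy from Y --> X pay Y” and “Y sell to X --> X buy from Y” forces the weakly-scored edge “Y sell to X --> X pay Y” into the graph, so the rules can no longer be judged independently.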

This work is based on the paper Global Learning of Focused Entailment Graphs and on recently submitted work performed at the University of Washington. This is joint work with Ido Dagan and Jacob Goldberger.

Slides