Abstract - Machine Learning | Amir Globerson

The Hebrew University of Jerusalem


Learning with Approximate Inference - From LP Relaxations to Pseudo-Max Approaches

Supervised learning problems often involve the prediction of complex structured labels, such as sequences (e.g., POS tagging) or trees (e.g., dependency parsing). To achieve high accuracy in these tasks, one often wants to introduce rich dependencies between parts of the label. However, this can result in prediction problems that are NP-hard. A natural approach in these cases is to use tractable approximations of the prediction problem. In this talk I will present our recent work on using approximate inference for structured prediction tasks. I will describe linear programming (LP) relaxations for the prediction problem, and show highly scalable algorithms for learning with these relaxations. I will then introduce a simpler approach, called 'pseudo-max' learning, and show that under certain conditions it is consistent for separable problems, with empirical performance similar to that of LP relaxations. I will conclude by addressing the problem of finding the K best solutions in such models, and present a new class of relaxations that has theoretical guarantees and works well in practice.
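As a small illustration of the kind of LP relaxation the talk refers to (a generic sketch, not the specific algorithms presented): MAP inference in a pairwise model can be written as an integer linear program over indicator ("marginal") variables, and relaxed by dropping the integrality constraints while keeping normalization and local-consistency constraints. The toy scores `theta1`, `theta2`, `theta12` below are made up for illustration; for a single edge the relaxation is tight, so solving the LP recovers the exact MAP assignment.

```python
import numpy as np
from scipy.optimize import linprog

# Toy MAP problem: two binary variables with unary and pairwise scores.
# (Illustrative numbers only, not from the talk.)
theta1 = np.array([0.0, 1.0])           # unary scores for x1 in {0, 1}
theta2 = np.array([0.5, 0.0])           # unary scores for x2 in {0, 1}
theta12 = np.array([[2.0, 0.0],         # pairwise scores theta12[x1, x2]
                    [0.0, 2.0]])

# LP variables (8 total):
# mu1(0), mu1(1), mu2(0), mu2(1), mu12(0,0), mu12(0,1), mu12(1,0), mu12(1,1)
c = -np.concatenate([theta1, theta2, theta12.ravel()])  # linprog minimizes

A_eq, b_eq = [], []
# Normalization: each unary marginal sums to 1.
A_eq.append([1, 1, 0, 0, 0, 0, 0, 0]); b_eq.append(1)
A_eq.append([0, 0, 1, 1, 0, 0, 0, 0]); b_eq.append(1)
# Local consistency: pairwise marginals must agree with the unaries.
A_eq.append([-1, 0, 0, 0, 1, 1, 0, 0]); b_eq.append(0)  # sum_b mu12(0,b) = mu1(0)
A_eq.append([0, -1, 0, 0, 0, 0, 1, 1]); b_eq.append(0)  # sum_b mu12(1,b) = mu1(1)
A_eq.append([0, 0, -1, 0, 1, 0, 1, 0]); b_eq.append(0)  # sum_a mu12(a,0) = mu2(0)
A_eq.append([0, 0, 0, -1, 0, 1, 0, 1]); b_eq.append(0)  # sum_a mu12(a,1) = mu2(1)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
mu = res.x
# With a single edge (a tree), the LP optimum is integral, so decoding each
# variable by argmax of its relaxed marginal gives the exact MAP labeling.
x1 = int(np.argmax(mu[0:2]))
x2 = int(np.argmax(mu[2:4]))
map_score = -res.fun
```

On larger, loopy graphs the same local-consistency LP is no longer guaranteed to be tight, which is exactly where tighter relaxations and learning-with-relaxation schemes such as those in the talk come into play.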

Slides <link