Abstract - Natural Language Processing
Efficient Decomposed Learning for Structured Prediction

UIUC


Abstract
Structured prediction is the cornerstone of several machine learning applications. Unfortunately, in structured prediction settings with expressive inter-variable interactions, exact inference-based learning algorithms, e.g., Structural SVM, are often intractable. We present a new approach, Decomposed Learning (DecL), which performs efficient learning by restricting the inference step to a limited part of the structured output space. We provide characterizations, based on the structure, the target parameters, and the gold labels, under which DecL is equivalent to exact learning. We then show that in real-world settings, where our theoretical assumptions may not hold exactly, DecL-based algorithms are significantly more efficient than exact learning and perform just as well.
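To make the idea concrete, here is a minimal toy sketch of the "restricted inference" intuition, not the paper's actual SSVM-based algorithm: a margin-style structured perceptron for binary sequence labeling whose argmax runs only over labelings within Hamming distance k of the gold labeling, instead of all 2^n assignments. All names (`decl_train`, `neighborhood`) and the feature choices are illustrative assumptions, not from the talk.

```python
import itertools

def score(w, x, y):
    """Linear model score: unary terms w[0]*x_i*y_i plus a pairwise
    agreement term w[1]*(+1 if y_i == y_{i+1} else -1)."""
    s = sum(w[0] * xi * yi for xi, yi in zip(x, y))
    s += sum(w[1] * (1 if a == b else -1) for a, b in zip(y, y[1:]))
    return s

def neighborhood(y_gold, k):
    """Restricted search space: all binary labelings within Hamming
    distance k of the gold labeling (exact learning would search all 2^n)."""
    n = len(y_gold)
    for flips in range(k + 1):
        for idx in itertools.combinations(range(n), flips):
            y = list(y_gold)
            for i in idx:
                y[i] = 1 - y[i]
            yield tuple(y)

def decl_train(data, k, epochs=20):
    """Margin-style structured perceptron; the cost-augmented argmax is
    taken only over the decomposed neighborhood of each gold labeling."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y_gold in data:
            y_gold = tuple(y_gold)
            # cost-augmented inference restricted to the neighborhood
            y_hat = max(neighborhood(y_gold, k),
                        key=lambda y: score(w, x, y)
                        + sum(a != b for a, b in zip(y, y_gold)))
            if y_hat != y_gold:
                # perceptron update on the feature difference (gold minus predicted)
                w[0] += sum(xi * (yg - yh)
                            for xi, yg, yh in zip(x, y_gold, y_hat))
                w[1] += (sum(1 if a == b else -1
                             for a, b in zip(y_gold, y_gold[1:]))
                         - sum(1 if a == b else -1
                               for a, b in zip(y_hat, y_hat[1:])))
    return w
```

On a tiny instance, a few epochs of updates over the size-(n choose ≤k) neighborhood suffice to make the gold labeling the strict argmax within that neighborhood; the work's theoretical contribution is characterizing when such restricted learning coincides with exact learning over the full output space.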

Bio:
Rajhans is a fourth-year PhD student working with Prof. Dan Roth. He works on machine learning techniques for NLP; more specifically, on supervised, semi-supervised, and unsupervised learning for structured output prediction in NLP applications. Rajhans is the self-proclaimed most fashionable guy this side of Green Street.