Abstract
Many classic problems in natural language processing can be cast as mapping a complex input (e.g., a sequence of words) to a complex output (e.g., a syntax tree or semantic graph). This task is challenging both because language is ambiguous (a learning difficulty) and because it is represented with discrete combinatorial structures (a computational difficulty). Often these are at odds: the features one would like to add to ease learning introduce nontrivial additional structure, which in turn makes the computational problem harder.
I will present approaches to this problem that explicitly learn to trade off accuracy and efficiency, applied to a variety of linguistic phenomena. These include black-box solutions for any problem that can be solved with branch-and-bound (e.g., integer linear programs). Finally, I will show that in some cases we can obtain a model that is both faster and more accurate by exploiting smarter learning algorithms.
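For readers unfamiliar with branch-and-bound, the sketch below is a minimal, generic illustration of the technique on a toy integer linear program (a 3-item 0/1 knapsack); it is a hypothetical example for intuition only, not the learned method presented in the talk. The names `branch_and_bound`, `value`, and `upper_bound` are illustrative assumptions.

```python
# Minimal branch-and-bound sketch (illustrative only).
# It searches over partial 0/1 assignments, pruning any branch whose
# optimistic upper bound cannot beat the best complete solution found so far.

def branch_and_bound(n_vars, value, upper_bound):
    """Maximize `value` over 0/1 assignments of length `n_vars`.

    value(assignment)    -> objective of a complete assignment (tuple of 0/1)
    upper_bound(partial) -> optimistic bound on any completion of `partial`
    """
    best_val, best_sol = float("-inf"), None
    stack = [()]  # start from the empty partial assignment
    while stack:
        partial = stack.pop()
        if len(partial) == n_vars:            # complete assignment: evaluate it
            v = value(partial)
            if v > best_val:
                best_val, best_sol = v, partial
            continue
        if upper_bound(partial) <= best_val:  # prune: cannot improve the incumbent
            continue
        stack.append(partial + (0,))          # branch on the next variable
        stack.append(partial + (1,))
    return best_val, best_sol


# Toy ILP instance (0/1 knapsack):
# maximize 6*x0 + 5*x1 + 4*x2  subject to  3*x0 + 2*x1 + 2*x2 <= 4.
values, weights, capacity = [6, 5, 4], [3, 2, 2], 4

def value(assign):
    weight = sum(w for w, x in zip(weights, assign) if x)
    total = sum(v for v, x in zip(values, assign) if x)
    return total if weight <= capacity else float("-inf")

def upper_bound(partial):
    # Optimistic bound: value of the fixed choices plus all remaining item
    # values, ignoring the capacity constraint for undecided variables.
    fixed = sum(v for v, x in zip(values, partial) if x)
    return fixed + sum(values[len(partial):])

print(branch_and_bound(3, value, upper_bound))   # -> (9, (0, 1, 1))
```

Tighter upper bounds prune more of the search tree, which is one concrete way accuracy of the bound trades off against the efficiency of the search.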