Efficient Lifted Inference with Large-Scale Graphical Models

 


This talk presents new insights for large-scale probabilistic graphical models. It introduces a novel idea for maintaining a compact structure when solving inference problems over large continuous models. This insight leads to a new Kalman filter, the Lifted Relational Kalman Filter (LRKF), an efficient estimation algorithm for large-scale linear dynamic systems. In experiments, the LRKF scales exact Kalman filtering from 1,000 variables to 1,000,000,000 variables. Another key contribution of this talk is a proof that commonly used probabilistic first-order languages, including Markov Logic Networks (MLNs) and First-Order Probabilistic Models (FOPMs) with hybrid domains, reduce to compact probabilistic graphical representations under reasonable conditions. Specifically, it shows that aggregate operations and existential quantifiers in these languages are equivalent to linear constraints over Gaussian distributions. In the general case, the probabilistic languages are converted into a nonparametric variational model in which lifted inference algorithms can solve inference problems efficiently.
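
The following is a minimal sketch of the compact-structure idea, not code from the talk: when a group of n state variables is exchangeable, its covariance can be kept in the two-parameter family a*I + b*1*1^T, and observing an aggregate such as the group average is a linear constraint on the joint Gaussian, so a Kalman measurement update only touches the shared parameters (mu, a, b) instead of an n-by-n matrix. The function names and the averaged-observation model below are illustrative assumptions.

# Sketch (assumed setup): n exchangeable variables with mean mu*1 and
# covariance a*I + b*1*1^T; we observe their average with noise variance r.
# The lifted update works on (mu, a, b) in O(1); the dense update is the
# ordinary ground Kalman measurement update for comparison.
import numpy as np


def lifted_update(mu, a, b, n, y, r):
    """Update the compact parameters (mu, a, b) of an exchangeable group
    of n variables after observing their average with noise variance r."""
    s = a / n + b + r                   # innovation variance S = H Sigma H^T + r
    k = (a / n + b) / s                 # common Kalman-gain entry (same for all i)
    mu_post = mu + k * (y - mu)         # every variable shifts by the same amount
    b_post = b - (a / n + b) ** 2 / s   # only the shared off-diagonal term changes
    return mu_post, a, b_post


def dense_update(mean, cov, H, y, r):
    """Ordinary (ground) Kalman measurement update, for comparison."""
    S = H @ cov @ H.T + r
    K = cov @ H.T / S
    mean_post = mean + (K * (y - H @ mean)).ravel()
    cov_post = cov - K @ (H @ cov)
    return mean_post, cov_post


if __name__ == "__main__":
    n, mu, a, b, r, y = 5, 0.0, 1.0, 0.3, 0.5, 2.0
    # Ground model: mean mu*1, covariance a*I + b*1*1^T, observe the average.
    mean = np.full(n, mu)
    cov = a * np.eye(n) + b * np.ones((n, n))
    H = np.full((1, n), 1.0 / n)
    mean_g, cov_g = dense_update(mean, cov, H, np.array([y]), r)

    mu_l, a_l, b_l = lifted_update(mu, a, b, n, y, r)
    assert np.allclose(mean_g, np.full(n, mu_l))
    assert np.allclose(cov_g, a_l * np.eye(n) + b_l * np.ones((n, n)))
    print("lifted and ground updates agree:", mu_l, a_l, b_l)

The averaged observation in this sketch also illustrates the second claim: an aggregate over a group of random variables acts as a linear constraint on a multivariate Gaussian, so conditioning on it keeps the model in a compact Gaussian form.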