Time: 14:30 on May 3, 2019
Place: room 803, B1 building, HUST
Presenter: Tuan Anh Le, final-year PhD student in the Department of Engineering Science, University of Oxford
Title: Inference compilation and model learning for probabilistic programming
Abstract:
Probabilistic programming is a powerful abstraction layer for Bayesian inference that separates the modeling part of the problem from the inference part. However, approximate inference algorithms in general-purpose probabilistic programming systems are typically slow and must be re-run from scratch for each new set of observations. Writing well-tuned generative models is also hard. In the first part of the talk, I describe an approach that “compiles away” the cost of inference during training time in order to speed up repeated inference at test time. In the second part of the talk, I describe an approach to model learning suitable for probabilistic programming, based on maximizing the marginal likelihood. Importantly, this approach supports discrete latent variables and control flow without resorting to difficult-to-implement control-variate methods. It also has the property that increasing computation does not hurt performance, which is surprising for methods based on importance-weighted autoencoders.
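To make the importance-weighted objective mentioned in the abstract concrete, here is a minimal NumPy sketch (not the speaker's implementation). It assumes a toy model, z ~ N(0, 1) and x | z ~ N(z, 1), with the prior as the proposal, so the true marginal p(x) = N(x; 0, 2) is known in closed form. The importance-weighted bound log (1/K) Σ_k w_k tightens toward log p(x) as the number of particles K grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, var):
    """Log density of a univariate Gaussian N(mean, var)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

x_obs = 1.5  # a single observed data point (illustrative)

def iwae_bound(k, n_reps=2000):
    """Monte Carlo estimate of E[log (1/K) sum_k w_k] with K = k particles.

    Proposal q(z) = prior N(0, 1), so the importance weight
    w = p(x, z) / q(z) reduces to the likelihood p(x | z).
    """
    z = rng.standard_normal((n_reps, k))
    log_w = log_normal(x_obs, z, 1.0)          # log p(x | z)
    m = log_w.max(axis=1, keepdims=True)       # stabilized log-mean-exp
    log_mean_w = m.squeeze(1) + np.log(np.mean(np.exp(log_w - m), axis=1))
    return log_mean_w.mean()

true_log_px = log_normal(x_obs, 0.0, 2.0)      # log N(x; 0, 2)
for k in [1, 10, 100]:
    print(f"K={k:3d}  bound={iwae_bound(k):.4f}  log p(x)={true_log_px:.4f}")
```

With K = 1 the bound is the ordinary ELBO; as K increases the bound rises toward log p(x), illustrating why adding particles (computation) can only tighten the objective in expectation.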
Tuan Anh Le is a final-year PhD student in the Department of Engineering Science at the University of Oxford, supervised by Frank Wood and Yee Whye Teh. His research interests include probabilistic programming, deep learning, and artificial intelligence. He completed his MEng degree in Engineering Science at Oxford.