Research

My research interests lie in machine learning and computer vision, including graphical models, efficient Markov chain Monte Carlo (MCMC) and variational inference methods for Bayesian models, deep learning, and reinforcement learning.

My Ph.D. thesis is about a deterministic sampling algorithm known as herding. Herding takes as input a probability distribution or a set of random samples, and outputs pseudo-samples without explicitly specifying a probabilistic model. These pseudo-samples are highly negatively correlated and convey more information about the input distribution than i.i.d. samples of the same size. I have also worked on Bayesian inference for Markov random fields, as well as efficient and scalable MCMC methods.
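As a rough illustration of the idea, below is a minimal sketch of kernel herding over a finite candidate pool, where the input distribution is represented by an empirical sample set. The function names, the RBF kernel, and the pool-based setup are illustrative assumptions, not the exact algorithms from the thesis; the point is the greedy selection rule that keeps pseudo-samples close to the target's mean embedding while penalising redundancy with points already chosen.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def kernel_herding(samples, num_pseudo, gamma=1.0):
    """Greedy kernel herding over a finite candidate pool (illustrative sketch).

    `samples` plays two roles here: it defines the empirical target
    distribution and it serves as the candidate pool for pseudo-samples.
    """
    K = rbf_kernel(samples, samples, gamma)   # candidate-vs-candidate kernel
    mean_embedding = K.mean(axis=1)           # empirical estimate of E_{x'~p} k(x, x')
    running_sum = np.zeros(len(samples))      # sum_s k(x, x_s) over chosen pseudo-samples
    selected = []
    for t in range(num_pseudo):
        # Score each candidate: match the target embedding, but penalise
        # similarity to already-selected points (hence negative correlation).
        scores = mean_embedding - running_sum / (t + 1)
        idx = int(np.argmax(scores))
        selected.append(idx)
        running_sum += K[:, idx]
    return samples[selected]

# Usage: summarise 500 Gaussian samples with 20 pseudo-samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
pseudo = kernel_herding(X, num_pseudo=20, gamma=0.5)
```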

At Cambridge, I worked with Prof. Zoubin Ghahramani on scalable inference methods for Bayesian nonparametric models. My long-term interest in this topic is to make Bayesian inference algorithms for (nonparametric) probabilistic models as scalable and easy to use as optimization-based algorithms, and I have been studying several approaches toward this goal.

At DeepMind, I am doing research on deep learning and reinforcement learning. I am particularly interested in deep generative models and in applying Bayesian methods to deep learning and reinforcement learning.

I also feel very privileged to have taken part in the AlphaGo project at DeepMind and to have helped build an AI Go player that beat the Go world champion 4-1 in a five-game match between 9 and 15 March 2016.