Research

My research interests lie in the areas of machine learning and computer vision, including graphical models, efficient Markov chain Monte Carlo and variational inference methods for Bayesian models, deep learning, and reinforcement learning.

My Ph.D. thesis is about a deterministic sampling algorithm known as herding. Herding takes as input a probability distribution or a set of random samples, and outputs pseudo-samples without explicitly specifying a probabilistic model. These pseudo-samples are highly negatively correlated, and convey more information about the input distribution than i.i.d. samples of the same size. I have also worked on Bayesian inference for Markov random fields, as well as efficient and scalable MCMC methods.
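To give a flavor of the idea, here is a minimal sketch of herding on a finite discrete distribution, using one-hot indicator features (the function and variable names are illustrative, not from the thesis): at each step we greedily pick the state with the largest accumulated weight, then update the weights toward the target moments. The resulting pseudo-sample frequencies track the target probabilities at an O(1/T) rate, faster than the O(1/sqrt(T)) rate of i.i.d. sampling.

```python
import numpy as np

def herd(p, T):
    """Herding pseudo-sampler for a discrete distribution p.

    With one-hot features phi(x), each step picks
    x_t = argmax_x <w, phi(x)> and updates w <- w + p - phi(x_t).
    """
    w = np.array(p, dtype=float)  # weight vector, initialized to the target moments
    samples = []
    for _ in range(T):
        x = int(np.argmax(w))     # greedy step: state with largest weight
        samples.append(x)
        w += p                    # add target moments
        w[x] -= 1.0               # subtract phi(x_t), a one-hot vector
    return samples

p = np.array([0.5, 0.3, 0.2])
samples = herd(p, 1000)
freqs = np.bincount(samples, minlength=len(p)) / len(samples)
# freqs matches p to within O(1/T), and consecutive pseudo-samples
# are negatively correlated (e.g. the sequence starts 0, 1, 2, 0, ...).
```

Note that the pseudo-samples are fully deterministic given p; the "randomness" of i.i.d. sampling is traded for a low-discrepancy sequence.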

At Cambridge, I worked with Prof. Zoubin Ghahramani on scalable inference methods for Bayesian nonparametric models. My long-term interest in this topic is to make Bayesian inference algorithms for (nonparametric) probabilistic models as scalable and easy to use as optimization-based algorithms, and I have studied a few approaches toward this goal.

At DeepMind, I am doing research on deep learning, reinforcement learning, and meta-learning. I am particularly interested in deep generative models and in applying Bayesian methods to deep and reinforcement learning.

AlphaGo

I feel very privileged to have taken part in the AlphaGo project at DeepMind and to have helped build an AI program for the game of Go. AlphaGo beat the world champion Lee Sedol in 4 out of 5 games in 2016, then defeated 60 top professionals online and the world's number one player, Ke Jie, in 3 out of 3 games in 2017.

We also developed a new version, AlphaGo Zero, which was trained from scratch without human knowledge and surpassed all previous versions of AlphaGo after 21 days of training.

The DeepMind AlphaGo team received the inaugural IJCAI Marvin Minsky Medal for Outstanding Achievements in AI in 2017.