Level: |
MSci, MSc |
Title: |
MCMC methods that use derivative information |
Supervisor: |
|
Research Area: |
Probability and Applications [Including Statistics] |
Description: |
Markov Chain Monte Carlo (MCMC) methods are used to simulate approximate samples from probability distributions. They are widely used in Bayesian statistical inference, where we want to draw a sample from the posterior distribution of the parameters. The simplest methods require only that we can evaluate the probability density (up to a normalising constant) at any parameter vector. However, these methods become inefficient in high-dimensional parameter spaces, and methods that use the first derivatives of the log probability density can be more efficient. This project would review basic MCMC methods and then variants that use the first derivative, such as the Metropolis-adjusted Langevin algorithm and Hamiltonian Monte Carlo (see the further reading). The student would implement some of these methods to investigate how well each one can sample from a variety of probability distributions. |
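To give a flavour of the kind of implementation work involved, below is a minimal Python sketch of the Metropolis-adjusted Langevin algorithm (MALA), a variant whose proposals drift along the first derivative of the log density. The standard Gaussian target, the step size, and the dimension are illustrative assumptions, not part of the project specification.

import numpy as np

def log_density(x):
    # Unnormalised log density of a standard Gaussian target
    # (an illustrative choice; any differentiable log density works).
    return -0.5 * np.dot(x, x)

def grad_log_density(x):
    # First derivative of the log density above.
    return -x

def mala_step(x, step, rng):
    # Langevin proposal: drift along the gradient, plus Gaussian noise.
    mean_fwd = x + 0.5 * step**2 * grad_log_density(x)
    y = mean_fwd + step * rng.standard_normal(x.size)
    # Metropolis accept/reject, correcting for the asymmetric proposal.
    mean_rev = y + 0.5 * step**2 * grad_log_density(y)
    log_q_fwd = -np.sum((y - mean_fwd)**2) / (2 * step**2)
    log_q_rev = -np.sum((x - mean_rev)**2) / (2 * step**2)
    log_alpha = log_density(y) - log_density(x) + log_q_rev - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False

rng = np.random.default_rng(0)
x = np.zeros(10)          # 10-dimensional example
accepted = 0
for _ in range(5000):
    x, ok = mala_step(x, step=0.5, rng=rng)
    accepted += ok
print("acceptance rate:", accepted / 5000)

Setting the gradient drift to zero recovers random-walk Metropolis, which is the natural baseline for the comparisons the project describes.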
Further Reading: |
- W. R. Gilks, S. Richardson, D. J. Spiegelhalter (eds.), Markov Chain Monte Carlo in Practice. Chapman & Hall, London, 1996.
- G. O. Roberts, R. L. Tweedie, Exponential convergence of Langevin distributions and their discrete approximations, Bernoulli 2(4) (1996) 341-363.
- R. M. Neal, MCMC Using Hamiltonian Dynamics, in Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC, 2011 [http://www.cs.toronto.edu/~radford/ftp/ham-mcmc.pdf].
|
Key Modules: |
|
Other Information: |
Some programming ability is needed for this project. |
Current Availability: |
Yes |
|