Functional approximations to posterior densities: a neural network approach to efficient sampling
The performance of Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate results with efficient sampling. Neural networks seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. That is, conditional upon the specification of the neural network, sampling can be done either directly or using a Gibbs sampling technique, possibly using auxiliary variables. A key step in the proposed class of methods is the construction of a neural network that approximates the target density accurately. The methods are tested on a set of illustrative models which include a mixture of normal distributions, a Bayesian instrumental variable regression problem with weak instruments and near-identification, and a two-regime growth model for US recessions and expansions. These examples involve experiments with non-standard, non-elliptical posterior distributions. The results indicate the feasibility of the neural network approach.
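The importance sampling idea summarized above can be sketched as follows: draw from a candidate density that approximates the target, then reweight the draws by the ratio of target to candidate. In this minimal illustration a simple two-component normal mixture plays the role of the fitted neural-network approximation, and the unnormalized bimodal target density is a hypothetical stand-in for a posterior kernel; neither corresponds to the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized target density (stand-in for a posterior kernel).
def target_kernel(x):
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

# Candidate: a two-component normal mixture, playing the role of the
# neural-network approximation to the target (illustrative assumption).
means = np.array([2.0, -2.0])
sds = np.array([1.0, 1.0])
probs = np.array([2.0 / 3.0, 1.0 / 3.0])

def candidate_pdf(x):
    comps = np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return comps @ probs

def sample_candidate(n):
    k = rng.choice(2, size=n, p=probs)
    return rng.normal(means[k], sds[k])

# Importance sampling: draw from the candidate, weight by target/candidate,
# and form the self-normalized estimator of the posterior mean.
n = 100_000
x = sample_candidate(n)
w = target_kernel(x) / candidate_pdf(x)
posterior_mean = np.sum(w * x) / np.sum(w)
```

Because the candidate here closely matches the target, the importance weights stay nearly constant and the estimator is efficient; the paper's point is that a neural network can deliver such a close candidate automatically for non-elliptical posteriors where standard choices fail.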
Keywords: Bayesian inference, Markov chain Monte Carlo, importance sampling, neural networks
Hoogerheide, L.F., Kaashoek, J.F., & van Dijk, H.K. (2002). Functional approximations to posterior densities: a neural network approach to efficient sampling (No. EI 2002-48). Retrieved from http://hdl.handle.net/1765/1727