2007-04-30
Note on neural network sampling for Bayesian inference of mixture processes
Publication
Report / Econometric Institute, Erasmus University Rotterdam
In this paper we present further experiments with neural network sampling, a class of sampling methods that use neural network approximations to (posterior) densities, introduced by Hoogerheide et al. (2007). We consider a method in which a mixture of Student's t densities, which can be interpreted as a neural network function, serves as the candidate density in importance sampling or the Metropolis-Hastings algorithm. The method is applied to an illustrative 2-regime mixture model for the US real GNP growth rate. We explain the non-elliptical shapes of the posterior distribution, and show that the proposed method outperforms Gibbs sampling with data augmentation and the griddy Gibbs sampler.
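To make the candidate-density idea in the abstract concrete, the sketch below runs importance sampling with a two-component mixture of Student's t densities as the candidate. This is a minimal illustration, not the authors' actual procedure: the bimodal target kernel, the mixture weights, locations, scales, and degrees of freedom are all hypothetical stand-ins chosen for demonstration.

```python
# Minimal sketch: importance sampling with a Student's t mixture candidate.
# All target and candidate parameters below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical bimodal "posterior" kernel, a stand-in for the non-elliptical
# posterior of a 2-regime mixture model.
def target_kernel(x):
    return 0.5 * stats.norm.pdf(x, -2.0, 0.7) + 0.5 * stats.norm.pdf(x, 2.0, 0.7)

# Candidate: mixture of two Student's t densities (assumed parameters).
weights = np.array([0.5, 0.5])
locs = np.array([-2.0, 2.0])
scales = np.array([1.0, 1.0])
df = 5.0

def candidate_pdf(x):
    # Density of the t-mixture, evaluated componentwise and summed.
    comps = [w * stats.t.pdf(x, df, loc=m, scale=s)
             for w, m, s in zip(weights, locs, scales)]
    return np.sum(comps, axis=0)

def candidate_draw(n):
    # Draw component labels, then sample from the chosen t component.
    comp = rng.choice(len(weights), size=n, p=weights)
    return locs[comp] + scales[comp] * rng.standard_t(df, size=n)

# Importance sampling estimate of the posterior mean of x**2.
n = 100_000
x = candidate_draw(n)
w = target_kernel(x) / candidate_pdf(x)   # unnormalized importance weights
est = np.sum(w * x**2) / np.sum(w)
print(est)
```

The fat tails of the Student's t components are what make this candidate robust: the importance weights stay bounded even where a normal candidate would underweight the tails of the target.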
Additional Metadata

| | |
|---|---|
| Persistent URL | hdl.handle.net/1765/10090 |
| Series | Econometric Institute Research Papers |
| Series | Report / Econometric Institute, Erasmus University Rotterdam |
| Organisation | Erasmus School of Economics |
| Citation | Hoogerheide, L., & van Dijk, H. (2007). Note on neural network sampling for Bayesian inference of mixture processes (No. EI 2007-15). Report / Econometric Institute, Erasmus University Rotterdam. Retrieved from http://hdl.handle.net/1765/10090 |