H.K. van Dijk (Herman)
http://repub.eur.nl/ppl/263/
List of Publications
http://repub.eur.nl/
RePub, Erasmus University Repository
Dynamic Predictive Density Combinations for Large Data Sets in Economics and Finance
http://repub.eur.nl/pub/78461/
Wed, 01 Jul 2015 00:00:01 GMT<div>R. Casarin</div><div>S. Grassi</div><div>F. Ravazzolo</div><div>H.K. van Dijk</div>
__Abstract__
A Bayesian nonparametric predictive model is introduced to construct time-varying weighted combinations of a large set of predictive densities. A clustering mechanism allocates these densities into a smaller number of mutually exclusive subsets. Using properties of Aitchison's geometry of the simplex, combination weights are defined with a probabilistic interpretation. The class-preserving property of the logistic-normal distribution is used to define a compositional dynamic factor model for the weight dynamics, with latent factors defined on a reduced-dimension simplex. Groups of predictive models with combination weights are updated with parallel clustering and sequential Monte Carlo filters. The procedure is applied to predict the Standard & Poor's 500 index using more than 7000 predictive densities based on US individual stocks, and finds substantial forecast and economic gains. Similar forecast gains are obtained in point and density forecasting of US real GDP, inflation, Treasury Bill yield and employment using a large data set.
Combined Density Nowcasting in an Uncertain Economic Environment
http://repub.eur.nl/pub/77398/
Tue, 09 Dec 2014 00:00:01 GMT<div>K.A. Aastveit</div><div>F. Ravazzolo</div><div>H.K. van Dijk</div>
__Abstract__
We introduce a Combined Density Nowcasting (CDN) approach to Dynamic Factor Models (DFM) that coherently accounts for time-varying uncertainty of several model and data features in order to provide more accurate and complete density nowcasts. The combination weights are latent random variables that depend on past nowcasting performance and other learning mechanisms. The combined density scheme is incorporated in a Bayesian Sequential Monte Carlo method which re-balances the set of nowcasted densities in each period using updated information on the time-varying weights. Experiments with simulated data show that CDN works particularly well in a situation of early data releases with relatively large data uncertainty and model incompleteness. Empirical results, based on US real-time data of 120 leading indicators, indicate that CDN gives more accurate density nowcasts of US GDP growth than a model selection strategy and other combination strategies throughout the quarter, with relatively large gains for the first two months of the quarter. CDN also provides informative signals on model incompleteness during recent recessions. Focusing on the tails, CDN delivers probabilities of negative growth that provide good signals for calling recessions and ending economic slumps in real time.
Bayesian Forecasting of US Growth using Basic Time Varying Parameter Models and Expectations Data
http://repub.eur.nl/pub/77113/
Sun, 14 Sep 2014 00:00:01 GMT<div>N. Basturk</div><div>S.P. Ceyhan</div><div>H.K. van Dijk</div>
__Abstract__
Time-varying patterns in US growth are analyzed using various univariate model structures, starting from a naive model structure where all features change every period and moving to a model where the slow variation in the conditional mean and changes in the conditional variance are specified together with their interaction. Survey data on expected growth are included in order to strengthen the information in the model. Use is made of a simulation-based Bayesian inferential method to determine the forecasting performance of the various model specifications. The extension of a basic growth model with a constant mean to models including time variation in the mean and variance requires careful investigation of possible identification issues of the parameters and existence conditions of the posterior under a diffuse prior. The use of diffuse priors leads to a focus on the likelihood function, and it enables a researcher and policy adviser to evaluate the scientific information contained in model and data. Empirical results indicate that incorporating time variation in mean growth rates as well as in volatility is important in order to improve the predictive performance of growth models. Furthermore, using data information on growth expectations is important for forecasting growth in specific periods, such as the recession periods around 2000 and around 2008.
On the Rise of Bayesian Econometrics after Cowles Foundation Monographs 10, 14
http://repub.eur.nl/pub/51651/
Tue, 08 Jul 2014 00:00:01 GMT<div>N. Basturk</div><div>C. Cakmakli</div><div>S.P. Ceyhan</div><div>H.K. van Dijk</div>
__Abstract__
This paper starts with a brief description of the introduction of the likelihood approach in econometrics as presented in Cowles Foundation Monographs 10 and 14. A sketch is given of the criticisms of this approach, mainly from the first group of Bayesian econometricians. Publication and citation patterns of Bayesian econometric papers are analyzed in ten major econometric journals from the late 1970s until the first few months of 2014. Results indicate a cluster of journals with theoretical and applied papers, mainly consisting of the Journal of Econometrics, Journal of Business and Economic Statistics and Journal of Applied Econometrics, which contains the large majority of high-quality Bayesian econometric papers. A second cluster of theoretical journals, mainly consisting of Econometrica and the Review of Economic Studies, contains few Bayesian econometric papers. The scientific impact, however, of these few papers on Bayesian econometric research is substantial. Special issues from the journals Econometric Reviews, Journal of Econometrics and Econometric Theory received wide attention. Marketing Science shows an ever-increasing number of Bayesian papers since the mid-1990s. The International Economic Review and the Review of Economics and Statistics show a moderate time-varying increase. An upward movement in publication patterns in most journals occurs in the early 1990s due to the effect of the 'Computational Revolution'.
Bayesian Analysis of Instrumental Variable Models: Acceptance-Rejection within Direct Monte Carlo
http://repub.eur.nl/pub/73371/
Sat, 01 Feb 2014 00:00:01 GMT<div>A. Zellner</div><div>T. Ando</div><div>N. Basturk</div><div>L.F. Hoogerheide</div><div>H.K. van Dijk</div>
We discuss Bayesian inferential procedures within the family of instrumental variables regression models and focus on two issues: existence conditions for posterior moments of the parameters of interest under a flat prior, and the potential of Direct Monte Carlo (DMC) approaches for efficient evaluation of such possibly highly non-elliptical posteriors. We show that, for the general case of m endogenous variables under a flat prior, posterior moments of order r exist for the coefficients reflecting the endogenous regressors' effect on the dependent variable if the number of instruments is greater than m + r, even though there is an issue of local non-identification that causes non-elliptical shapes of the posterior. This stresses the need for efficient Monte Carlo integration methods. We introduce an extension of DMC that incorporates an acceptance-rejection sampling step within DMC. This Acceptance-Rejection within Direct Monte Carlo (ARDMC) method has the attractive property that the generated random drawings are independent, which greatly helps the fast convergence of simulation results and facilitates the evaluation of the numerical accuracy. The speed of ARDMC can easily be improved further by making use of parallelized computation on multiple-core machines or computer clusters. We note that ARDMC is an analogue to the well-known 'Metropolis-Hastings within Gibbs' sampling in the sense that one 'more difficult' step is used within an 'easier' simulation method. We compare the ARDMC approach with the Gibbs sampler using simulated data and two empirical data sets, involving the settler mortality instrument of Acemoglu et al. (2001) and the father's education instrument used by Hoogerheide et al. (2012a). Even without making use of parallelized computation, an efficiency gain is observed both under strong and weak instruments, where the gain can be enormous in the latter case.
CFEnetwork: The Annals of computational and financial econometrics: 2nd issue
http://repub.eur.nl/pub/53379/
Wed, 01 Jan 2014 00:00:01 GMT<div>E.J. Kontoghiorghes</div><div>H.K. van Dijk</div><div>D. Belsley</div><div>T. Bollerslev</div><div>F.X. Diebold</div><div>J.-M. Dufour</div><div>R. Engle</div><div>A.C. Harvey</div><div>S.J. Koopman</div><div>M.H. Pesaran</div><div>P.C.B. Phillips</div><div>R. Smith</div><div>M. West</div><div>Q. Yao</div><div>A. Amendola</div><div>M. Billio</div><div>C.W.S. Chen</div><div>C. Chiarella</div><div>A. Colubi</div><div>M. Deistler</div><div>C. Francq</div><div>M. Hallin</div><div>E. Jacquier</div><div>K. Judd</div><div>G. Koop</div><div>H. Lütkepohl</div><div>J.G. MacKinnon</div><div>S. Mittnik</div><div>Y. Omori</div><div>I. Pollock</div><div>T. Proietti</div><div>J.V.K. Rombouts</div><div>O. Scaillet</div><div>W. Semmler</div><div>M.K.P. So</div><div>J. Steel</div><div>R.N. Taylor</div><div>E. Tzavalis</div><div>J.-M. Zakoian</div><div>H. Peter Boswijk</div><div>A. Luati</div><div>J. Maheu</div>
Time-varying combinations of predictive densities using nonlinear filtering
http://repub.eur.nl/pub/51721/
Sun, 01 Dec 2013 00:00:01 GMT<div>M. Billio</div><div>R. Casarin</div><div>F. Ravazzolo</div><div>H.K. van Dijk</div>
__Abstract__
We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced, with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of simulated data, US macroeconomic time series and surveys of stock market prices. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete, and it is able to detect parameter instability when the model set includes the true model that has generated subsamples of data. Also, substantial uncertainty appears in the weights when predictors are similar; residual uncertainty reduces when the model set is complete; and learning reduces this uncertainty. For the macro series we find that incompleteness of the models is relatively large in the 1970s, the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation; the predicted probabilities of recession compare accurately with the NBER business cycle dating; and model weights have substantial uncertainty attached.
With respect to returns of the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model in the beginning of the 1990s and switches to giving more weight to the professional forecasts over time. Information on the complete predictive distribution, and not just on some moments, turns out to be very important, above all during turbulent times such as the recent financial crisis. More generally, the proposed distributional state space representation offers great flexibility in combining densities.
Interactions between Eurozone and US Booms and Busts: A Bayesian Panel Markov-switching VAR Model
http://repub.eur.nl/pub/50275/
Wed, 11 Sep 2013 00:00:01 GMT<div>M. Billio</div><div>R. Casarin</div><div>F. Ravazzolo</div><div>H.K. van Dijk</div>
__Abstract__
Interactions between the eurozone and US booms and busts and among major eurozone economies are analyzed by introducing a panel Markov-switching VAR model well suited to a multi-country cyclical analysis. The model accommodates changes in low and high data frequencies and endogenous time-varying transition matrices of the country-specific Markov chains. The transition matrix of each Markov chain depends on its own past history and on the history of the other chains, thus allowing for modeling of the interactions between cycles. An endogenous common eurozone cycle is derived by aggregating country-specific cycles. The model is estimated using a simulation-based Bayesian approach in which an efficient multi-move strategy algorithm is defined to draw common time-varying Markov-switching chains. Our results show that the US and eurozone cycles are not fully synchronized over the 1991-2013 sample period, with evidence of more recessions in the Eurozone. Shocks affect the US one quarter in advance of the eurozone, but these spread very rapidly among economies. An increase in the number of eurozone countries in recession increases the probability that the US stays in recession, while the US recession indicator has a negative impact on the probability of staying in recession for eurozone countries. Turning point analysis shows that the cycles of Germany, France and Italy are closer to the US cycle than those of other countries. Belgium, Spain and Germany provide more timely information on the aggregate recession than the Netherlands and France.
Posterior-Predictive Evidence on US Inflation using Extended Phillips Curve Models with Non-filtered Data
http://repub.eur.nl/pub/40586/
Tue, 16 Jul 2013 00:00:01 GMT<div>N. Basturk</div><div>C. Cakmakli</div><div>S.P. Ceyhan</div><div>H.K. van Dijk</div>
Changing time series properties of US inflation and economic activity, measured as marginal costs, are modeled within a set of extended Phillips Curve (PC) models. It is shown that mechanical removal or modeling of simple low-frequency movements in the data may yield poor predictive results which depend on the model specification used. Basic PC models are extended to include structural time series models that describe typical time-varying patterns in levels and volatilities. Forward- as well as backward-looking expectation mechanisms for inflation are incorporated and their relative importance evaluated. Survey data on expected inflation are introduced to strengthen the information in the likelihood. Use is made of simulation-based Bayesian techniques for the empirical analysis. No credible evidence is found of endogeneity and long-run stability between inflation and marginal costs. Backward-looking inflation expectations appear stronger than forward-looking ones. Levels and volatilities of inflation are estimated more precisely using rich PC models. The estimated inflation expectations track the observed long-run inflation from the survey data closely. The extended PC structures compare favorably with existing basic Bayesian Vector Autoregressive and Stochastic Volatility models in terms of fit and prediction. Tails of the complete predictive distributions indicate an increase in the probability of disinflation in recent years.
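The tail-probability statement above has a simple computational counterpart: given posterior predictive draws of next-period inflation, the probability of disinflation is just the fraction of draws below zero. The sketch below illustrates this with simulated draws from a hypothetical AR(1)-style inflation model; the model and all parameter values are illustrative assumptions, not those estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws for a toy AR(1) inflation model:
#   pi_{t+1} = c + phi * pi_t + sigma * eps,  eps ~ N(0, 1)
n_draws = 10_000
c_draws = rng.normal(0.2, 0.1, n_draws)        # intercept draws
phi_draws = rng.normal(0.8, 0.05, n_draws)     # persistence draws
sigma_draws = np.abs(rng.normal(1.0, 0.2, n_draws))  # volatility draws

pi_t = 0.5  # last observed inflation (illustrative)

# One predictive draw per posterior draw gives the posterior predictive sample
pred = c_draws + phi_draws * pi_t + sigma_draws * rng.standard_normal(n_draws)

# Probability of disinflation = predictive mass below zero
p_disinflation = np.mean(pred < 0.0)
```

The same tail calculation applies to any set of predictive draws, whichever model produced them.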
Dynamic econometric modeling and forecasting in the presence of instability
http://repub.eur.nl/pub/40160/
Tue, 30 Apr 2013 00:00:01 GMT<div>A. Timmermann</div><div>H.K. van Dijk</div>
Parallel Sequential Monte Carlo for Efficient Density Combination: The DeCo Matlab Toolbox
http://repub.eur.nl/pub/39840/
Mon, 08 Apr 2013 00:00:01 GMT<div>R. Casarin</div><div>S. Grassi</div><div>F. Ravazzolo</div><div>H.K. van Dijk</div>
This paper presents the Matlab package DeCo (Density Combination), which is based on the paper by Billio et al. (2013), where a constructive Bayesian approach is presented for combining predictive densities originating from different models or other sources of information. The combination weights are time-varying and may depend on past predictive forecasting performance and other learning mechanisms. The core algorithm is the function DeCo, which applies banks of parallel Sequential Monte Carlo algorithms to filter the time-varying combination weights. The DeCo procedure has been implemented both for standard CPU computing and for Graphics Processing Unit (GPU) parallel computing. For the GPU implementation we use the Matlab parallel computing toolbox and show how to use general-purpose GPU computing almost effortlessly. This GPU implementation speeds up execution by up to seventy times compared to a standard CPU Matlab implementation on a multicore CPU. We show the use of the package and the computational gain of the GPU version through some simulation experiments and empirical applications.
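The core filtering step that such a toolbox implements can be sketched in a few lines. The following Python fragment is illustrative only: it is not the DeCo API, and the Gaussian predictive densities and softmax (logistic) weight dynamics are simplifying assumptions. It runs a basic particle filter for time-varying combination weights on the simplex, resampling particles by the predictive likelihood of the realized observation.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine_densities(y, preds, n_part=2000, sigma=0.1):
    """Particle-filter time-varying combination weights for K models.

    y     : (T,) realized series
    preds : (T, K) point predictions; each model's predictive density is
            taken as N(preds[t, k], 1) purely for illustration
    Latent states follow a Gaussian random walk; weights are their
    softmax, which keeps them on the simplex.
    """
    T, K = preds.shape
    x = np.zeros((n_part, K))            # particles for latent weight states
    weights_path = np.empty((T, K))
    for t in range(T):
        x = x + sigma * rng.standard_normal((n_part, K))  # propagate states
        w = np.exp(x - x.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                 # softmax -> simplex
        # predictive likelihood of y[t] under each particle's combination
        dens = np.exp(-0.5 * (y[t] - preds[t]) ** 2) / np.sqrt(2 * np.pi)
        lik = w @ dens
        lik = lik / lik.sum()
        idx = rng.choice(n_part, size=n_part, p=lik)      # resample particles
        x = x[idx]
        weights_path[t] = w[idx].mean(axis=0)             # filtered weights
    return weights_path
```

Run against data where one model tracks the series and another is biased, the filtered weight on the accurate model rises toward one; the GPU gains reported above come from vectorizing many such filters in parallel.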
Genome-wide analysis of macrosatellite repeat copy number variation in worldwide populations: Evidence for differences and commonalities in size distributions and size restrictions
http://repub.eur.nl/pub/40840/
Mon, 04 Mar 2013 00:00:01 GMT<div>M. Schaap</div><div>R.J.L.F. Lemmers</div><div>R. Maassen</div><div>P.J. van der Vliet</div><div>L.F. Hoogerheide</div><div>H.K. van Dijk</div><div>N. Basturk</div><div>P. de Knijff</div><div>S.M. van der Maarel</div>
Background: Macrosatellite repeats (MSRs), usually spanning hundreds of kilobases of genomic DNA, comprise a significant proportion of the human genome. Because of their highly polymorphic nature, MSRs represent an extreme example of copy number variation, but their structure and function are largely understudied. Here, we describe a detailed study of six autosomal and two X chromosomal MSRs among 270 HapMap individuals from Central Europe, Asia and Africa. Copy number variation, stability and genetic heterogeneity of the autosomal macrosatellite repeats RS447 (chromosome 4p), MSR5p (5p), FLJ40296 (13q), RNU2 (17q) and D4Z4 (4q and 10q) and the X chromosomal DXZ4 and CT47 were investigated. Results: Repeat array size distribution analysis shows that all of these MSRs are highly polymorphic, with the most genetic variation among Africans and the least among Asians. A mitotic mutation rate of 0.4-2.2% was observed, exceeding meiotic mutation rates and possibly explaining the large size variability found for these MSRs. By means of a novel Bayesian approach, statistical support for a distinct multimodal rather than a uniform allele size distribution was detected in seven out of eight MSRs, with evidence for equidistant intervals between the modes. Conclusions: The multimodal distributions with evidence for equidistant intervals, in combination with the observation of MSR-specific constraints on minimum array size, suggest that MSRs are limited in their configurations and that deviations thereof may cause disease, as is the case for facioscapulohumeral muscular dystrophy. However, at present we cannot exclude that there are mechanistic constraints for MSRs that are not directly disease-related. This study represents the first comprehensive study of MSRs in different human populations by applying novel statistical methods, and identifies commonalities and differences in their organization and function in the human genome.
Evidence on features of a DSGE business cycle model from Bayesian model averaging
http://repub.eur.nl/pub/38911/
Fri, 01 Feb 2013 00:00:01 GMT<div>R.W. Strachan</div><div>H.K. van Dijk</div>
The empirical support for features of a Dynamic Stochastic General Equilibrium model with two technology shocks is evaluated using Bayesian model averaging over vector autoregressions. The model features include equilibria, restrictions on long-run responses, a structural break of unknown date, and a range of lags and deterministic processes. We find support for a number of features implied by the economic model, and the evidence suggests a break in the entire model structure around 1984, after which technology shocks appear to account for all stochastic trends. Business cycle volatility seems due more to investment-specific technology shocks than to neutral technology shocks.
A class of adaptive importance sampling weighted EM algorithms for efficient and robust posterior and predictive simulation
http://repub.eur.nl/pub/37738/
Sat, 01 Dec 2012 00:00:01 GMT<div>L.F. Hoogerheide</div><div>A. Opschoor</div><div>H.K. van Dijk</div>
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis-Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates the simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis-Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions.
This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models like DCC or mixture GARCH models and a mixture instrumental variables model.
Posterior-Predictive Evidence on US Inflation using Phillips Curve Models with Non-Filtered Time Series
http://repub.eur.nl/pub/38747/
Sat, 01 Dec 2012 00:00:01 GMT<div>N. Basturk</div><div>C. Cakmakli</div><div>S.P. Ceyhan</div><div>H.K. van Dijk</div>
Changing time series properties of US inflation and economic activity are analyzed within a class of extended Phillips Curve (PC) models. First, the misspecification effects of mechanical removal of low-frequency movements of these series on posterior inference of a basic PC model are analyzed using a Bayesian simulation-based approach. Next, structural time series models that describe changing patterns in low and high frequencies and backward- as well as forward-looking inflation expectation mechanisms are incorporated in the class of extended PC models. Empirical results indicate that the proposed models compare favorably with existing Bayesian Vector Autoregressive and Stochastic Volatility models in terms of fit and predictive performance. Weak identification and dynamic persistence appear less important when time-varying dynamics of high and low frequencies are carefully modeled. Modeling inflation expectations using survey data and adding level shifts and stochastic volatility substantially improves in-sample fit and out-of-sample predictions. No evidence is found of a long-run stable cointegration relation between US inflation and marginal costs. Tails of the complete predictive distributions indicate an increase in the probability of disinflation in recent years.
The Annals of Computational and Financial Econometrics, first issue
http://repub.eur.nl/pub/52218/
Thu, 01 Nov 2012 00:00:01 GMT<div>D. Belsley</div><div>E.J. Kontoghiorghes</div><div>H.K. van Dijk</div><div>L. Bauwens</div><div>S.J. Koopman</div><div>M.J. McAleer</div><div>A. Amendola</div><div>M. Billio</div><div>C. Croux</div><div>C.W.S. Chen</div><div>R. Davidson</div><div>P. Duchesne</div><div>P. Foschi</div><div>C. Francq</div><div>A.-M. Fuertes</div><div>G. Koop</div><div>L. Khalaf</div><div>M. Paolella</div><div>I. Pollock</div><div>E. Ruiz</div><div>R. Paap</div><div>T. Proietti</div><div>P. Winker</div><div>P.L.H. Yu</div><div>J.-M. Zakoian</div><div>A. Zeileis</div>
Time-varying Combinations of Predictive Densities using Nonlinear Filtering
http://repub.eur.nl/pub/38198/
Mon, 29 Oct 2012 00:00:01 GMT<div>M. Billio</div><div>R. Casarin</div><div>F. Ravazzolo</div><div>H.K. van Dijk</div>
We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced, with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete, and it is able to detect parameter instability when the model set includes the true model that has generated subsamples of data. For the macro series we find that incompleteness of the models is relatively large in the 1970s, the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation. With respect to returns of the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model in the beginning of the 1990s and switches to giving more weight to the professional forecasts over time.
Bayesian Analysis of Instrumental Variable Models: Acceptance-Rejection within Direct Monte Carlo
http://repub.eur.nl/pub/37314/
Fri, 21 Sep 2012 00:00:01 GMT<div>A. Zellner</div><div>T. Ando</div><div>N. Basturk</div><div>H.K. van Dijk</div>
We discuss Bayesian inferential procedures within the family of instrumental variables regression models and focus on two issues: existence conditions for posterior moments of the parameters of interest under a flat prior, and the potential of Direct Monte Carlo (DMC) approaches for efficient evaluation of such possibly highly non-elliptical posteriors. We show that, for the general case of m endogenous variables under a flat prior, posterior moments of order r exist for the coefficients reflecting the endogenous regressors' effect on the dependent variable if the number of instruments is greater than m + r, even though there is an issue of local non-identification that causes non-elliptical shapes of the posterior. This stresses the need for efficient Monte Carlo integration methods. We introduce an extension of DMC that incorporates an acceptance-rejection sampling step within DMC. This Acceptance-Rejection within Direct Monte Carlo (ARDMC) method has the attractive property that the generated random drawings are independent, which greatly helps the fast convergence of simulation results and facilitates the evaluation of the numerical accuracy. The speed of ARDMC can easily be improved further by making use of parallelized computation on multiple-core machines or computer clusters. We note that ARDMC is an analogue to the well-known 'Metropolis-Hastings within Gibbs' sampling in the sense that one 'more difficult' step is used within an 'easier' simulation method. We compare the ARDMC approach with the Gibbs sampler using simulated data and two empirical data sets, involving the settler mortality instrument of Acemoglu et al. (2001) and the father's education instrument used by Hoogerheide et al. (2012a).
Even without making use of parallelized computation, an efficiency gain is observed both under strong and weak instruments, where the gain can be enormous in the latter case.
The R Package MitISEM: Mixture of Student-t Distributions using Importance Sampling Weighted Expectation Maximization for Efficient and Robust Simulation
http://repub.eur.nl/pub/37313/
Thu, 20 Sep 2012 00:00:01 GMT<div>N. Basturk</div><div>L.F. Hoogerheide</div><div>A. Opschoor</div><div>H.K. van Dijk</div>
This paper presents the R package MitISEM, which provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. The package also provides an extended MitISEM algorithm, 'sequential MitISEM', which substantially decreases the computational time when the target density has to be approximated for increasing data samples. This occurs when the posterior distribution is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that the candidate distribution obtained by MitISEM outperforms those obtained by 'naive' approximations in terms of numerical efficiency. Further, the MitISEM approach can be used for Bayesian model comparison, using the predictive likelihoods.
Combination schemes for turning point predictions
http://repub.eur.nl/pub/37707/
Thu, 13 Sep 2012 00:00:01 GMT<div>M. Billio</div><div>R. Casarin</div><div>F. Ravazzolo</div><div>H.K. van Dijk</div>
We propose new forecast combination schemes for predicting turning points of business cycles. The proposed combination schemes are based on the forecasting performance of a given set of models, with the aim of providing better turning point predictions. In particular, we consider predictions generated by autoregressive (AR) and Markov-switching AR models, which are commonly used for business cycle analysis. In order to account for parameter uncertainty, we consider a Bayesian approach for both estimation and prediction and compare, in terms of statistical accuracy, the individual models and the combined turning point predictions for the United States and the Euro area business cycles.
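A minimal version of performance-based weighting can be sketched as follows. This Python fragment is hypothetical, not the paper's Bayesian scheme: it combines the recession probabilities of K models with weights inversely proportional to each model's recent Brier score, a simple stand-in for the idea of letting past forecasting performance drive the combination.

```python
import numpy as np

def combine_turning_points(probs, outcomes, window=8, eps=1e-6):
    """Combine K models' turning-point probabilities.

    probs    : (T, K) predicted recession probabilities, one column per model
    outcomes : (T,) realized 0/1 recession indicator
    Weights at time t are inversely proportional to each model's Brier
    score over the previous `window` periods (equal weights at t = 0).
    """
    T, K = probs.shape
    combined = np.empty(T)
    for t in range(T):
        if t == 0:
            w = np.full(K, 1.0 / K)        # no track record yet
        else:
            lo = max(0, t - window)
            # mean squared error of each model's past probabilities
            brier = ((probs[lo:t] - outcomes[lo:t, None]) ** 2).mean(axis=0)
            w = 1.0 / (brier + eps)        # better past score -> larger weight
            w /= w.sum()
        combined[t] = probs[t] @ w         # weighted recession probability
    return combined
```

With one accurate and one uninformative model, the combined probability quickly tracks the accurate one; a fully Bayesian treatment would instead integrate over parameter and weight uncertainty, as the paper does.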