N. Basturk (Nalan)
http://repub.eur.nl/ppl/16059/
List of Publications
http://repub.eur.nl/
RePub, Erasmus University Repository
On the Rise of Bayesian Econometrics after Cowles Foundation Monographs 10, 14
http://repub.eur.nl/pub/51651/
Tue, 08 Jul 2014 00:00:01 GMT<div>N. Basturk</div><div>C. Cakmakli</div><div>P. Ceyhan</div><div>H.K. van Dijk</div>
__Abstract__
This paper starts with a brief description of the introduction of the likelihood approach in econometrics as presented in Cowles Foundation Monographs 10 and 14. A sketch is given of the criticisms of this approach, mainly from the first group of Bayesian econometricians. Publication and citation patterns of Bayesian econometric papers are analyzed in ten major econometric journals from the late 1970s until the first few months of 2014. Results indicate a cluster of journals with theoretical and applied papers, mainly consisting of the Journal of Econometrics, the Journal of Business and Economic Statistics and the Journal of Applied Econometrics, which contains the large majority of high-quality Bayesian econometric papers. A second cluster of theoretical journals, mainly consisting of Econometrica and the Review of Economic Studies, contains few Bayesian econometric papers. The scientific impact of these few papers on Bayesian econometric research is, however, substantial. Special issues of the journals Econometric Reviews, Journal of Econometrics and Econometric Theory received wide attention. Marketing Science has shown an ever-increasing number of Bayesian papers since the mid-1990s. The International Economic Review and the Review of Economics and Statistics show a moderate, time-varying increase. An upward movement in publication patterns in most journals occurs in the early 1990s due to the effect of the 'Computational Revolution'.
Bayesian Analysis of Instrumental Variable Models: Acceptance-Rejection within Direct Monte Carlo
http://repub.eur.nl/pub/73371/
Sat, 01 Feb 2014 00:00:01 GMT<div>A. Zellner</div><div>T. Ando</div><div>N. Basturk</div><div>L.F. Hoogerheide</div><div>H.K. van Dijk</div>
We discuss Bayesian inferential procedures within the family of instrumental variables regression models and focus on two issues: existence conditions for posterior moments of the parameters of interest under a flat prior, and the potential of Direct Monte Carlo (DMC) approaches for efficient evaluation of such possibly highly non-elliptical posteriors. We show that, for the general case of m endogenous variables under a flat prior, posterior moments of order r exist for the coefficients reflecting the endogenous regressors' effect on the dependent variable if the number of instruments is greater than m + r, even though there is an issue of local non-identification that causes non-elliptical shapes of the posterior. This stresses the need for efficient Monte Carlo integration methods. We introduce an extension of DMC that incorporates an acceptance-rejection sampling step within DMC. This Acceptance-Rejection within Direct Monte Carlo (ARDMC) method has the attractive property that the generated random drawings are independent, which greatly helps the fast convergence of simulation results and facilitates the evaluation of numerical accuracy. The speed of ARDMC can easily be further improved by making use of parallelized computation on multiple-core machines or computer clusters. We note that ARDMC is an analogue of the well-known 'Metropolis-Hastings within Gibbs' sampling, in the sense that one 'more difficult' step is used within an 'easier' simulation method. We compare the ARDMC approach with the Gibbs sampler using simulated data and two empirical data sets, involving the settler mortality instrument of Acemoglu et al. (2001) and the father's education instrument used by Hoogerheide et al. (2012a). Even without making use of parallelized computation, an efficiency gain is observed under both strong and weak instruments, where the gain can be enormous in the latter case.
Estimation of flexible fuzzy GARCH models for conditional density estimation
http://repub.eur.nl/pub/40785/
Wed, 31 Jul 2013 00:00:01 GMT<div>R.J. Almeida e Santos Nogueira</div><div>N. Basturk</div><div>U. Kaymak</div><div>J.M. Costa Sousa</div>
In this work we introduce a new flexible fuzzy GARCH model for conditional density estimation. The model combines two different types of uncertainty, namely fuzziness or linguistic vagueness, and probabilistic uncertainty. The probabilistic uncertainty is modeled through a GARCH model, while the fuzziness or linguistic vagueness is present in the antecedents and combination of the rule base system. The fuzzy GARCH model under study allows for a linguistic interpretation of the gradual changes in the output density, providing a simple understanding of the process. Such a system can capture different properties of data, such as fat tails, skewness and multimodality, in one single model. This type of model can be useful in many fields such as macroeconomic analysis, quantitative finance and risk management. The relation to existing similar models is discussed, while the properties, interpretation and estimation of the proposed model are provided. The model performance is illustrated on simulated time series data exhibiting complex behavior and on a real data application of volatility forecasting for the S&P 500 daily returns series.
Posterior-Predictive Evidence on US Inflation using Extended Phillips Curve Models with Non-filtered Data
http://repub.eur.nl/pub/40586/
Tue, 16 Jul 2013 00:00:01 GMT<div>N. Basturk</div><div>C. Cakmakli</div><div>P. Ceyhan</div><div>H.K. van Dijk</div>
Changing time series properties of US inflation and economic activity, measured as marginal costs, are modeled within a set of extended Phillips Curve (PC) models. It is shown that mechanical removal or modeling of simple low frequency movements in the data may yield poor predictive results which depend on the model specification used. Basic PC models are extended to include structural time series models that describe typical time varying patterns in levels and volatilities. Forward- as well as backward-looking expectation mechanisms for inflation are incorporated and their relative importance is evaluated. Survey data on expected inflation are introduced to strengthen the information in the likelihood. Use is made of simulation-based Bayesian techniques for the empirical analysis. No credible evidence is found on endogeneity and long run stability between inflation and marginal costs. The backward-looking inflation expectation mechanism appears stronger than the forward-looking one. Levels and volatilities of inflation are estimated more precisely using rich PC models. Estimated inflation expectations track the observed long run inflation from the survey data nicely. The extended PC structures compare favorably with existing basic Bayesian Vector Autoregressive and Stochastic Volatility models in terms of fit and prediction. Tails of the complete predictive distributions indicate an increase in the probability of disinflation in recent years.
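The hybrid expectation structure described above can be illustrated with a small, purely hypothetical sketch (not the authors' code or data): a Phillips curve with a backward-looking term, a survey-type forward-looking proxy and a marginal-cost term, estimated on simulated data by conjugate Bayesian linear regression under a flat prior. All variable names, parameter values and the known-variance simplification are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a hybrid Phillips curve (all 'true' values are invented for illustration):
# pi_t = gamma_b * pi_{t-1} + gamma_f * s_t + beta_mc * mc_t + eps_t,
# where s_t is a noisy survey-style proxy for expected inflation.
T = 500
gamma_b, gamma_f, beta_mc, sigma = 0.6, 0.3, 0.2, 0.2
mc = rng.normal(size=T)            # marginal-cost proxy
u = 0.5 * rng.normal(size=T)       # noise in the survey expectation proxy
eps = sigma * rng.normal(size=T)
pi = np.zeros(T)
surv = np.zeros(T)
for t in range(1, T):
    surv[t] = pi[t - 1] + u[t]     # survey expectation anchored on lagged inflation
    pi[t] = gamma_b * pi[t - 1] + gamma_f * surv[t] + beta_mc * mc[t] + eps[t]

# Conjugate Bayesian linear regression under a flat prior; the error variance is
# treated as known purely to keep the sketch short. Posterior is Gaussian around OLS.
X = np.column_stack([pi[:-1], surv[1:], mc[1:]])
y = pi[1:]
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)                       # posterior mean
chol = np.linalg.cholesky(sigma ** 2 * XtX_inv)
draws = beta_hat + rng.normal(size=(5000, 3)) @ chol.T
post_mean = draws.mean(axis=0)
post_sd = draws.std(axis=0)
print("posterior means (gamma_b, gamma_f, beta_mc):", np.round(post_mean, 2))
```

With enough data the posterior concentrates around the simulated values, which is the basic mechanism the richer PC models above exploit when survey data strengthen the likelihood.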
Genome-wide analysis of macrosatellite repeat copy number variation in worldwide populations: Evidence for differences and commonalities in size distributions and size restrictions
http://repub.eur.nl/pub/40840/
Mon, 04 Mar 2013 00:00:01 GMT<div>M. Schaap</div><div>R.J.L.F. Lemmers</div><div>R. Maassen</div><div>P.J. van der Vliet</div><div>L.F. Hoogerheide</div><div>H.K. van Dijk</div><div>N. Basturk</div><div>P. de Knijff</div><div>S.M. van der Maarel</div>
Background: Macrosatellite repeats (MSRs), usually spanning hundreds of kilobases of genomic DNA, comprise a significant proportion of the human genome. Because of their highly polymorphic nature, MSRs represent an extreme example of copy number variation, but their structure and function are largely understudied. Here, we describe a detailed study of six autosomal and two X chromosomal MSRs among 270 HapMap individuals from Central Europe, Asia and Africa. Copy number variation, stability and genetic heterogeneity of the autosomal macrosatellite repeats RS447 (chromosome 4p), MSR5p (5p), FLJ40296 (13q), RNU2 (17q) and D4Z4 (4q and 10q) and X chromosomal DXZ4 and CT47 were investigated. Results: Repeat array size distribution analysis shows that all of these MSRs are highly polymorphic, with the most genetic variation among Africans and the least among Asians. A mitotic mutation rate of 0.4-2.2% was observed, exceeding meiotic mutation rates and possibly explaining the large size variability found for these MSRs. By means of a novel Bayesian approach, statistical support for a distinct multimodal rather than a uniform allele size distribution was detected in seven out of eight MSRs, with evidence for equidistant intervals between the modes. Conclusions: The multimodal distributions with evidence for equidistant intervals, in combination with the observation of MSR-specific constraints on minimum array size, suggest that MSRs are limited in their configurations and that deviations thereof may cause disease, as is the case for facioscapulohumeral muscular dystrophy. However, at present we cannot exclude that there are mechanistic constraints for MSRs that are not directly disease-related. This represents the first comprehensive study of MSRs in different human populations by applying novel statistical methods, and it identifies commonalities and differences in their organization and function in the human genome.
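As a loose illustration of the kind of question the multimodality analysis above addresses, the sketch below compares a uniform allele size distribution against an equal-weight Gaussian mixture with equidistant modes, using a generic BIC comparison on invented numbers. This is not the paper's novel Bayesian method and not its MSR data; mode locations, sample sizes and the crude grid search are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate repeat-array sizes clustered at equidistant modes (invented numbers):
# modes at 15, 25 and 35 'units', within-mode standard deviation 1, 80 alleles each.
sizes = np.concatenate([m + rng.normal(scale=1.0, size=80) for m in (15, 25, 35)])
n = len(sizes)

# Model 1: uniform allele-size distribution on the observed range.
lo, hi = sizes.min(), sizes.max()
ll_uniform = -n * np.log(hi - lo)

# Model 2: equal-weight Gaussian mixture with equidistant modes m0 + k*d, k = 0, 1, 2,
# fitted by a crude grid search over offset m0, spacing d and common sd.
def mixture_ll(m0, d, sd):
    comp = np.stack([np.exp(-0.5 * ((sizes - (m0 + k * d)) / sd) ** 2)
                     / (sd * np.sqrt(2 * np.pi)) for k in range(3)])
    return np.log(comp.mean(axis=0)).sum()

best_ll = max(mixture_ll(m0, d, sd)
              for m0 in np.arange(13, 17, 0.5)
              for d in np.arange(8, 12, 0.5)
              for sd in (0.8, 1.0, 1.5))

# Penalized comparison: BIC = -2*loglik + k*log(n); the mixture fits 3 parameters
# here, the uniform its 2 range endpoints. Lower BIC is preferred.
bic_uniform = -2 * ll_uniform + 2 * np.log(n)
bic_mixture = -2 * best_ll + 3 * np.log(n)
print("BIC uniform:", round(bic_uniform, 1), " BIC mixture:", round(bic_mixture, 1))
```

On data with genuinely equidistant modes, the mixture wins the comparison decisively even after its extra parameters are penalized, which mirrors the paper's finding of support for multimodal over uniform size distributions.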
Hit-And-Run enables efficient weight generation for simulation-based multiple criteria decision analysis
http://repub.eur.nl/pub/37867/
Fri, 01 Feb 2013 00:00:01 GMT<div>T. Tervonen</div><div>G. van Valkenhoef</div><div>N. Basturk</div><div>D. Postmus</div>
Models for Multiple Criteria Decision Analysis (MCDA) often separate per-criterion attractiveness evaluation from the weighted aggregation of these evaluations across the different criteria. In simulation-based MCDA methods, such as Stochastic Multicriteria Acceptability Analysis, uncertainty in the weights is modeled through a uniform distribution on the feasible weight space defined by a set of linear constraints. Efficient sampling methods have been proposed for special cases, such as the unconstrained weight space or a complete ordering of the weights. However, no efficient methods are available for other constraints such as imprecise trade-off ratios, and specialized sampling methods do not allow for flexibility in combining the different constraint types. In this paper, we explore how the Hit-And-Run sampler can be applied as a general approach for sampling from the convex weight space that results from an arbitrary combination of linear weight constraints. We present a technique for transforming the weight space to enable application of Hit-And-Run, and evaluate the sampler's efficiency through computational tests. Our results show that the thinning factor required to obtain uniform samples can be expressed as a function of the number of criteria n as (n - 1)^3. We also find that the technique is reasonably fast with problem sizes encountered in practice and that autocorrelation is an appropriate convergence metric.
Posterior-Predictive Evidence on US Inflation using Phillips Curve Models with Non-Filtered Time Series
http://repub.eur.nl/pub/38747/
Sat, 01 Dec 2012 00:00:01 GMT<div>N. Basturk</div><div>C. Cakmakli</div><div>P. Ceyhan</div><div>H.K. van Dijk</div>
Changing time series properties of US inflation and economic activity are analyzed within a class of extended Phillips Curve (PC) models. First, the misspecification effects of mechanical removal of low frequency movements of these series on posterior inference of a basic PC model are analyzed using a Bayesian simulation-based approach. Next, structural time series models that describe changing patterns in low and high frequencies, and backward- as well as forward-looking inflation expectation mechanisms, are incorporated in the class of extended PC models. Empirical results indicate that the proposed models compare favorably with existing Bayesian Vector Autoregressive and Stochastic Volatility models in terms of fit and predictive performance. Weak identification and dynamic persistence appear less important when time varying dynamics of high and low frequencies are carefully modeled. Modeling inflation expectations using survey data, and adding level shifts and stochastic volatility, substantially improves in-sample fit and out-of-sample predictions. No evidence is found of a long run stable cointegration relation between US inflation and marginal costs. Tails of the complete predictive distributions indicate an increase in the probability of disinflation in recent years.
Bayesian Analysis of Instrumental Variable Models: Acceptance-Rejection within Direct Monte Carlo
http://repub.eur.nl/pub/37314/
Fri, 21 Sep 2012 00:00:01 GMT<div>A. Zellner</div><div>T. Ando</div><div>N. Basturk</div><div>H.K. van Dijk</div>
We discuss Bayesian inferential procedures within the family of instrumental variables regression models and focus on two issues: existence conditions for posterior moments of the parameters of interest under a flat prior, and the potential of Direct Monte Carlo (DMC) approaches for efficient evaluation of such possibly highly non-elliptical posteriors. We show that, for the general case of m endogenous variables under a flat prior, posterior moments of order r exist for the coefficients reflecting the endogenous regressors' effect on the dependent variable if the number of instruments is greater than m + r, even though there is an issue of local non-identification that causes non-elliptical shapes of the posterior. This stresses the need for efficient Monte Carlo integration methods. We introduce an extension of DMC that incorporates an acceptance-rejection sampling step within DMC. This Acceptance-Rejection within Direct Monte Carlo (ARDMC) method has the attractive property that the generated random drawings are independent, which greatly helps the fast convergence of simulation results and facilitates the evaluation of numerical accuracy. The speed of ARDMC can easily be further improved by making use of parallelized computation on multiple-core machines or computer clusters. We note that ARDMC is an analogue of the well-known 'Metropolis-Hastings within Gibbs' sampling, in the sense that one 'more difficult' step is used within an 'easier' simulation method. We compare the ARDMC approach with the Gibbs sampler using simulated data and two empirical data sets, involving the settler mortality instrument of Acemoglu et al. (2001) and the father's education instrument used by Hoogerheide et al. (2012a). Even without making use of parallelized computation, an efficiency gain is observed under both strong and weak instruments, where the gain can be enormous in the latter case.
The R Package MitISEM: Mixture of Student-t Distributions using Importance Sampling Weighted Expectation Maximization for Efficient and Robust Simulation
http://repub.eur.nl/pub/37313/
Thu, 20 Sep 2012 00:00:01 GMT<div>N. Basturk</div><div>L.F. Hoogerheide</div><div>A. Opschoor</div><div>H.K. van Dijk</div>
This paper presents the R package MitISEM, which provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. The package also provides an extended MitISEM algorithm, 'sequential MitISEM', which substantially decreases the computational time when the target density has to be approximated for increasing data samples. This occurs when the posterior distribution is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that the candidate distribution obtained by MitISEM outperforms those obtained by 'naive' approximations in terms of numerical efficiency. Further, the MitISEM approach can be used for Bayesian model comparison, using the predictive likelihoods.
Structural differences in economic growth: an endogenous clustering approach
http://repub.eur.nl/pub/26749/
Sun, 01 Jan 2012 00:00:01 GMT<div>N. Basturk</div><div>R. Paap</div><div>D.J.C. van Dijk</div>
This article addresses heterogeneity in determinants of economic growth in a data-driven way. Instead of defining groups of countries with different growth characteristics a priori, based on, for example, geographical location, we use a finite mixture panel model and endogenous clustering to examine cross-country differences and similarities in the effects of growth determinants. Applying this approach to an annual unbalanced panel of 59 countries in Asia, Latin and Middle America and Africa for the period 1971-2000, we can identify two groups of countries in terms of distinct growth structures. The structural differences between the country groups mainly stem from different effects of investment, openness measures and government share in the economy. Furthermore, the detected segmentation of countries does not match with conventional classifications in the literature.
Instrumental Variables, Errors in Variables, and Simultaneous Equations Models: Applicability and Limitations of Direct Monte Carlo
http://repub.eur.nl/pub/26507/
Tue, 27 Sep 2011 00:00:01 GMT<div>A. Zellner</div><div>T. Ando</div><div>N. Basturk</div><div>L.F. Hoogerheide</div><div>H.K. van Dijk</div>
A Direct Monte Carlo (DMC) approach is introduced for posterior simulation in the Instrumental Variables (IV) model with one possibly endogenous regressor, multiple instruments and Gaussian errors under a flat prior. This DMC method can also be applied in an IV model (with one or multiple instruments) under an informative prior for the endogenous regressor's effect. This DMC approach cannot be applied to more complex IV models or Simultaneous Equations Models with multiple endogenous regressors. An Approximate DMC (ADMC) approach is introduced that makes use of the proposed Hybrid Mixture Sampling (HMS) method, which facilitates Metropolis-Hastings (MH) or Importance Sampling from a proper marginal posterior density with highly non-elliptical shapes that tend to infinity at a point of singularity. After one has simulated from the irregularly shaped marginal distribution using the HMS method, one easily samples the other parameters from their conditional Student-t and Inverse-Wishart posteriors. An example illustrates the close approximation and high MH acceptance rate, whereas using a simple candidate distribution such as the Student-t may lead to an infinite variance of the Importance Sampling weights. The choice between the IV model and a simple linear model under the restriction of exogeneity may be based on predictive likelihoods, for which the efficient simulation of all model parameters may be quite useful. In future work the ADMC approach may be extended to more extensive IV models such as IV with non-Gaussian errors, panel IV, or probit/logit IV.
Essays on Parameter Heterogeneity and Model Uncertainty
http://repub.eur.nl/pub/21190/
Thu, 04 Nov 2010 00:00:01 GMT<div>N. Basturk</div>
The choice of a particular model in quantitative economic analysis reflects the economic question analyzed, jointly with related economic theory and the specific structure of the given data being analyzed. The degree to which economic theory or the data dominates the analysis is an important strategic decision that the researcher has to face. In the first strategy, the model is based mainly on a priori economic theory. Several contributions in the economics literature, in particular those that occurred in the period just after the Second World War, are based on this strategy, suggesting explicit links between economic theory, mathematics and statistics (see e.g. the contributions of the Cowles Foundation for Research in Economics at Yale University). In the second strategy, which became more popular during the late 1970s and early 1980s, modeling is based more on the data information, see e.g. Sims (1980). In the time series context, the advantages of this data-based approach are addressed and it is noted that economic theory often does not provide precise information on functional relationships between variables. A good survey of this approach is given by Zellner and Palm (2004). These latter authors conclude that the use of data information for discovering and repairing the defects of proposed models is of crucial importance.
Common practice in empirical research is to combine these strategies in a meaningful way, i.e. the constructed model is based on economic theory and the data information at the same time. This combination of strategies is motivated by two arguments: on the one hand, data information may not be informative enough; on the other hand, too strong assumptions may affect the reliability of results and the forecasting performance. This thesis takes the relatively more data-based approach to analyzing economic relationships and provides alternative methods to avoid very strong assumptions in the analysis.
This thesis consists of two parts. The first part develops new econometric models with a sufficient degree of flexibility to accommodate various forms and degrees of heterogeneity in (the relations among) economic variables. The second part considers model uncertainty issues, providing new tools for evaluating to what extent one (or more) model is suitable for the economic data at hand.
Financial Development and Convergence Clubs
http://repub.eur.nl/pub/20741/
Wed, 22 Sep 2010 00:00:01 GMT<div>N. Basturk</div><div>R. Paap</div><div>D.J.C. van Dijk</div>
This paper studies the economic development process, measured by Gross Domestic Product (GDP), for a large panel of countries. We propose a methodology that identifies groups of countries (convergence clubs) that show similar GDP structures, while allowing for changes in club memberships over time. As a second step we analyze the short-run and long-run effects of financial development (measured by financial intermediary development and stock market development) on the GDP process, and the composition of the convergence clubs. We find that the club memberships are quite persistent, but still their compositions change substantially over time. In particular, several EU member countries and East Asian countries are found to belong to a higher GDP club in recent times compared to the beginning of the 1970s. In terms of the effects of financial development indicators on the GDP process, our results partially confirm the theoretical basis for different effects of financial development indicators in the short run and the long run. In the long run, financial development is found to affect the countries' GDP level positively. The short-run effects of financial development indicators, however, are found to be less clear, in the sense that we do not find a negative short-run effect of financial intermediary development on GDP levels, while the short-run effect of stock market development is found to be negative.
A Comparative Study of Monte Carlo Methods for Efficient Evaluation of Marginal Likelihoods
http://repub.eur.nl/pub/19830/
Tue, 01 Jun 2010 00:00:01 GMT<div>D. David</div><div>N. Basturk</div><div>L.F. Hoogerheide</div><div>H.K. van Dijk</div>
Strategic choices for efficient and accurate evaluation of marginal likelihoods by means of Monte Carlo simulation methods are studied for the case of highly non-elliptical posterior distributions. A comparative analysis is presented of possible advantages and limitations of different simulation techniques; of possible choices of candidate distributions and choices of target or warped target distributions; and finally of numerical standard errors. The importance of a robust and flexible estimation strategy is demonstrated where the complete posterior distribution is explored. Given an appropriately yet quickly tuned adaptive candidate, straightforward importance sampling provides a computationally efficient estimator of the marginal likelihood (and a reliable and easily computed corresponding numerical standard error) in the cases investigated in this paper, which include a non-linear regression model and a mixture GARCH model. Warping the posterior density can lead to a further gain in efficiency, but it is more important that the posterior kernel is appropriately covered by the candidate distribution than that it is warped.
Structural Differences in Economic Growth
http://repub.eur.nl/pub/14044/
Fri, 29 Aug 2008 00:00:01 GMT<div>N. Basturk</div><div>R. Paap</div><div>D.J.C. van Dijk</div>
This paper addresses heterogeneity in determinants of economic growth in a data-driven way. Instead of defining groups of countries with different growth characteristics a priori, based on, for example, geographical location, we use a finite mixture panel model and endogenous clustering to examine cross-country differences and similarities in the effects of growth determinants. Applying this approach to an annual unbalanced panel of 59 countries in Asia, Latin and Middle America and Africa for the period 1971-2000, we can identify two groups of countries in terms of distinct growth structures. The structural differences between the country groups mainly stem from different effects of investment, openness measures and government share in the economy. Furthermore, the detected segmentation of countries does not match with conventional classifications in the literature.
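The endogenous clustering idea above (letting the data assign whole countries to latent growth regimes rather than fixing groups a priori) can be sketched with a simple two-regime mixture of regressions fitted by EM on simulated data. The paper's actual finite mixture panel model, data and estimation method are richer; every number and name below is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate two latent country groups whose growth responds differently to an
# 'investment' regressor (an invented stand-in for the paper's growth determinants).
n_countries, T = 60, 30
group = rng.integers(0, 2, size=n_countries)      # latent cluster labels
slopes = np.array([0.5, -0.3])                    # group-specific effects
x = rng.normal(size=(n_countries, T))
y = slopes[group][:, None] * x + 0.3 * rng.normal(size=(n_countries, T))

# EM for a two-component mixture of regressions, clustering whole countries:
# all periods of a country share one regime, as in endogenous clustering.
b = np.array([0.8, -0.8])                         # initial slope guesses
w = np.full(2, 0.5)                               # mixture weights
sigma = 0.5
for _ in range(50):
    # E-step: per-country log-likelihood under each regime, then responsibilities.
    ll = np.stack([(-0.5 * ((y - bk * x) / sigma) ** 2
                    - np.log(sigma * np.sqrt(2 * np.pi))).sum(axis=1) for bk in b])
    ll += np.log(w)[:, None]
    ll -= ll.max(axis=0)
    r = np.exp(ll)
    r /= r.sum(axis=0)
    # M-step: weighted least squares per regime; update weights and error sd.
    for k in range(2):
        wk = np.repeat(r[k], T)
        b[k] = (wk * (x * y).ravel()).sum() / (wk * (x * x).ravel()).sum()
    w = r.mean(axis=1)
    resid = np.stack([y - bk * x for bk in b])
    sigma = np.sqrt((r[:, :, None] * resid ** 2).sum() / (n_countries * T))

est = np.sort(b)
print("estimated regime slopes:", np.round(est, 2))
```

Because each country contributes many observations, the responsibilities sharpen quickly and the two regime slopes are recovered from the data alone, without any a priori grouping of countries.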