R. Legerstee (Rianne)
http://repub.eur.nl/ppl/5559/
List of Publications
http://repub.eur.nl/
RePub, Erasmus University Repository

Do Experts’ SKU Forecasts Improve after Feedback?
http://repub.eur.nl/pub/50605/
Wed, 01 Jan 2014 00:00:01 GMT
R. Legerstee, Ph.H.B.F. Franses
__Abstract__
We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, where we compare data before and after the moment that experts received different kinds of feedback on their behavior. We have data for 21 experts located in as many countries who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and they received training at the headquarters office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, i.e. October 2007 to December 2007. Our main conclusion is that in the second period the experts’ forecasts deviated less from the statistical forecasts and that their accuracy improved substantially.

Statistical institutes and economic prosperity
http://repub.eur.nl/pub/50607/
Wed, 01 Jan 2014 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
__Abstract__
The quality of economic institutions can impact economic growth and it can mediate the relation between economic growth and its drivers. We examine the relevance of one such institution, which is the establishment of a national statistical institute for, amongst others, national accounts. We collect data for 106 countries, and we estimate that there are four separate clusters of countries with similar establishment dates. For these clusters we fit regression models to explain economic growth, and we obtain significant differences across these clusters with respect to relevant explanatory variables and effect sizes, suggesting that a national statistical institute is indeed an important institution for the macro-economy.

Do statistical forecasting models for SKU-level data benefit from including past expert knowledge?
http://repub.eur.nl/pub/38706/
Tue, 01 Jan 2013 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
We determine whether statistical model forecasts of SKU-level sales data can be improved by formally including past expert knowledge in the model as additional variables. Upon analyzing various forecasts in a large database, using various models, forecast samples and accuracy measures, we demonstrate that experts' knowledge, on average, is apparently not associated with variables that are systematically omitted from the statistical models. We also find that the formal inclusion of past judgment can be helpful in cases when the model performs poorly. This can lead to an improved interaction between models and experts, and we discuss the design features of a forecasting support system.

Evaluating macroeconomic forecasts: A concise review of some recent developments
http://repub.eur.nl/pub/37653/
Wed, 19 Sep 2012 00:00:01 GMT
Ph.H.B.F. Franses, M.J. McAleer, R. Legerstee
__Abstract__
Macroeconomic forecasts are frequently produced, widely published, intensively discussed, and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyze some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC), and the ECB, are typically based on econometric model forecasts jointly with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes nonstandard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the (econometric) Staff of the Federal Reserve Board and the FOMC on inflation, unemployment, and real GDP growth. It is shown that the FOMC does not forecast significantly better than the Staff, and that the intuition of the FOMC does not add significantly in forecasting the actual values of the economic fundamentals. This would seem to belie the purported expertise of the FOMC.

Evaluating Econometric Models and Expert Intuition
http://repub.eur.nl/pub/32244/
Thu, 10 May 2012 00:00:01 GMT
R. Legerstee
This thesis is about forecasting situations which involve econometric models and expert intuition. The first three chapters are about what it is that experts do when they adjust statistical model forecasts and what might improve that adjustment behavior. It is investigated how expert forecasts are related to model forecasts, how this potential relation is influenced by other factors, how it influences forecast accuracy, how feedback influences forecasting behavior and accuracy, and which loss function is associated with experts’ forecasts.
The final chapter focuses on how to make optimal use of multiple forecasts produced by multiple experts for one and the same event. It is found that potential disagreement amongst forecasters can have predictive value, especially when used in Markov regime-switching models.
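The disagreement idea from the final chapter can be illustrated with a deliberately simplified sketch. The thesis works with Markov regime-switching models; the toy example below only shows, on simulated data, that cross-sectional disagreement amongst forecasters can improve out-of-sample forecasts even in a plain linear predictive regression. All variable names and numbers are hypothetical, not taken from the thesis.

```python
import numpy as np

# Simulated setting: the target depends on lagged disagreement (the
# cross-sectional standard deviation of survey forecasts), so a predictive
# regression that includes disagreement should beat a pure AR(1).
rng = np.random.default_rng(1)
T = 600
disp = rng.uniform(0.5, 3.5, T)      # disagreement amongst forecasters
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 1.0 * disp[t - 1] + rng.standard_normal()

def ols_forecast(X_tr, y_tr, X_te):
    """Fit OLS on the training block, forecast the test block."""
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

def rmse(e):
    return float(np.sqrt(np.mean(e ** 2)))

# Predict y[t+1]; compare AR(1) against AR(1) plus disagreement, out of sample.
split = 400
X_ar = np.column_stack([np.ones(T - 1), y[:-1]])
X_aug = np.column_stack([X_ar, disp[:-1]])
target = y[1:]
e_ar = target[split:] - ols_forecast(X_ar[:split], target[:split], X_ar[split:])
e_aug = target[split:] - ols_forecast(X_aug[:split], target[:split], X_aug[split:])
print(rmse(e_ar), rmse(e_aug))   # augmented model has the lower RMSE here
```

In this simulation disagreement enters the data-generating process directly; the thesis instead lets disagreement drive the transition probabilities of a regime-switching model, which this linear sketch does not attempt to reproduce.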
Statistical Institutes and Economic Prosperity
http://repub.eur.nl/pub/32410/
Tue, 01 May 2012 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
Estimating Loss Functions of Experts
http://repub.eur.nl/pub/30685/
Thu, 15 Dec 2011 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee, R. Paap
We propose a new and simple methodology to estimate the loss function associated with experts' forecasts. Under the assumption of conditional normality of the data and the forecast distribution, the asymmetry parameter of the lin-lin and linex loss function can easily be estimated using a linear regression. This regression also provides an estimate for potential systematic bias in the forecasts of the expert. The residuals of the regression are the input for a test for the validity of the normality assumption. We apply our approach to a large data set of SKU-level sales forecasts made by experts and we compare the outcomes with those for statistical model-based forecasts of the same sales data. We find substantial evidence for asymmetry in the loss functions of the experts, with underprediction penalized more than overprediction.

Estimating Loss Functions of Experts
http://repub.eur.nl/pub/31226/
Thu, 15 Dec 2011 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee, R. Paap
We propose a new and simple methodology to estimate the loss function associated with experts' forecasts. Under the assumption of conditional normality of the data and the forecast distribution, the asymmetry parameter of the lin-lin and linex loss function can easily be estimated using a linear regression. This regression also provides an estimate for potential systematic bias in the forecasts of the expert. The residuals of the regression are the input for a test for the validity of the normality assumption.
We apply our approach to a large data set of SKU-level sales forecasts made by experts and we compare the outcomes with those for statistical model-based forecasts of the same sales data. We find substantial evidence for asymmetry in the loss functions of the experts, with underprediction penalized more than overprediction.

Do Experts incorporate Statistical Model Forecasts and should they?
http://repub.eur.nl/pub/26526/
Fri, 30 Sep 2011 00:00:01 GMT
R. Legerstee, Ph.H.B.F. Franses
Experts can rely on statistical model forecasts when creating their own forecasts. Usually it is not known what experts actually do. In this paper we focus on three questions, which we try to answer given the availability of expert forecasts and model forecasts. First, is the expert forecast related to the model forecast and how? Second, how is this potential relation influenced by other factors? Third, how does this relation influence forecast accuracy? We propose a novel two-level Hierarchical Bayes model to answer these questions. We apply our proposed methodology to a large data set of forecasts and realizations of SKU-level sales data from a pharmaceutical company. We find that expert forecasts can depend on model forecasts in a variety of ways. Average sales levels, sales volatility, and the forecast horizon influence this dependence. We also demonstrate that theoretical implications of expert behavior on forecast accuracy are reflected in the empirical data.

Do experts incorporate statistical model forecasts and should they?
http://repub.eur.nl/pub/26660/
Fri, 30 Sep 2011 00:00:01 GMT
R. Legerstee, Ph.H.B.F. Franses, R. Paap
Experts can rely on statistical model forecasts when creating their own forecasts. Usually it is not known what experts actually do. In this paper we focus on three questions, which we try to answer given the availability of expert forecasts and model forecasts. First, is the expert forecast related to the model forecast and how? Second, how is this potential relation influenced by other factors? Third, how does this relation influence forecast accuracy?
We propose a novel two-level Hierarchical Bayes model to answer these questions. We apply our proposed methodology to a large data set of forecasts and realizations of SKU-level sales data from a pharmaceutical company. We find that expert forecasts can depend on model forecasts in a variety of ways. Average sales levels, sales volatility, and the forecast horizon influence this dependence. We also demonstrate that theoretical implications of expert behavior on forecast accuracy are reflected in the empirical data.
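The two-level structure can be caricatured with a simple frequentist two-stage sketch on simulated data; the paper itself uses a Hierarchical Bayes model, which this is not. Level 1 asks, per SKU, how strongly the expert forecast loads on the model forecast; level 2 asks how that loading varies with an SKU characteristic such as sales volatility. All numbers and names are illustrative assumptions.

```python
import numpy as np

# Simulate SKUs whose expert-on-model loading rises with sales volatility.
rng = np.random.default_rng(2)
n_sku, T = 200, 30
vol = rng.uniform(0.5, 1.5, n_sku)                          # per-SKU volatility
beta = 0.3 + 0.5 * vol + 0.05 * rng.standard_normal(n_sku)  # true loadings

# Level 1: per-SKU OLS of the expert forecast on the model forecast.
slopes = np.empty(n_sku)
for s in range(n_sku):
    m = rng.normal(100.0, 15.0, T)                          # model forecasts
    e = 5.0 + beta[s] * m + 5.0 * rng.standard_normal(T)    # expert forecasts
    slopes[s] = np.polyfit(m, e, 1)[0]                      # estimated loading

# Level 2: regress the estimated loadings on volatility.
gamma = np.polyfit(vol, slopes, 1)[0]
print(round(gamma, 2))   # level-2 slope; the true value in this simulation is 0.5
```

A Hierarchical Bayes treatment would estimate both levels jointly and shrink the noisy per-SKU slopes toward the level-2 regression line, which matters when each SKU has few observations; the two-stage version above only conveys the layered structure.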
Do Experts' SKU Forecasts improve after Feedback?
http://repub.eur.nl/pub/26506/
Mon, 26 Sep 2011 00:00:01 GMT
R. Legerstee, Ph.H.B.F. Franses
We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, where we compare data before and after the moment that experts received different kinds of feedback on their behavior. We have data for 21 experts located in as many countries who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and they received training at the headquarters office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts' forecasts deviated less from the statistical forecasts and that their accuracy improved substantially.

Do experts' SKU forecasts improve after feedback?
http://repub.eur.nl/pub/26656/
Thu, 22 Sep 2011 00:00:01 GMT
R. Legerstee, Ph.H.B.F. Franses
We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, where we compare data before and after the moment that experts received different kinds of feedback on their behavior. We have data for 21 experts located in as many countries who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and they received training at the headquarters office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts' forecasts deviated less from the statistical forecasts and that their accuracy improved substantially.

Experts' adjustment to model-based SKU-level forecasts: Does the forecast horizon matter?
http://repub.eur.nl/pub/23711/
Tue, 01 Mar 2011 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
Experts (managers) may have domain-specific knowledge that is not included in a statistical model and that can improve short-run and long-run forecasts of SKU-level sales data. While one-step-ahead forecasts address the conditional mean of the variable, model-based forecasts for longer horizons have a tendency to converge to the unconditional mean of a time series variable. Analyzing a large database concerning pharmaceutical sales forecasts for various products and adjusted by a range of experts, we examine whether the forecast horizon has an impact on what experts do and on how good they are once they adjust model-based forecasts. For this, we use regression-based methods and we obtain five innovative results. First, forecasts at all horizons experience managerial intervention. Second, the horizon that is most relevant to the managers shows greater overweighting of the expert adjustment. Third, for all horizons the expert-adjusted forecasts are less accurate than pure model-based forecasts, with distant horizons showing the least deterioration. Fourth, when expert-adjusted forecasts are significantly better, they are best at those distant horizons. Fifth, when expert adjustment is down-weighted, expert forecast accuracy increases.

Combining SKU-level sales forecasts from models and experts
http://repub.eur.nl/pub/23715/
Tue, 01 Mar 2011 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
__Abstract__
We study the performance of SKU-level sales forecasts which linearly combine statistical model forecasts and expert forecasts. Using a large and unique database containing model forecasts for monthly sales of various pharmaceutical products and forecasts given by about 50 experts, we document that a linear combination of those forecasts usually is most accurate. Correlating the weights of the expert forecasts in these linear combinations with the experts’ experience and behaviour shows that the expert forecast receives the most weight when the expert has more experience and deviates only modestly from the model forecast. When the rate of bracketing increases, we notice a convergence to equal weights. We show that these results are robust across 12 different forecast horizons.

Experts' adjustment to model-based SKU-level forecasts: Does the forecast horizon matter?
http://repub.eur.nl/pub/76380/
Tue, 01 Mar 2011 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
Experts (managers) may have domain-specific knowledge that is not included in a statistical model and that can improve short-run and long-run forecasts of SKU-level sales data. While one-step-ahead forecasts address the conditional mean of the variable, model-based forecasts for longer horizons have a tendency to converge to the unconditional mean of a time series variable. Analysing a large database concerning pharmaceutical sales forecasts for various products and adjusted by a range of experts, we examine whether the forecast horizon has an impact on what experts do and on how good they are once they adjust model-based forecasts. For this, we use regression-based methods and we obtain five innovative results. First, forecasts at all horizons experience managerial intervention. Second, the horizon that is most relevant to the managers shows greater overweighting of the expert adjustment. Third, for all horizons the expert-adjusted forecasts are less accurate than pure model-based forecasts, with distant horizons showing the least deterioration. Fourth, when expert-adjusted forecasts are significantly better, they are best at those distant horizons. Fifth, when expert adjustment is down-weighted, expert forecast accuracy increases.

Does Disagreement Amongst Forecasters have Predictive Value?
http://repub.eur.nl/pub/20744/
Wed, 22 Sep 2010 00:00:01 GMT
R. Legerstee, Ph.H.B.F. Franses
Forecasts from various experts are often used in macroeconomic forecasting models. Usually the focus is on the mean or median of the survey data. In the present study we adopt a different perspective on the survey data as we examine the predictive power of disagreement amongst forecasters. The premise is that this variable could signal upcoming structural or temporal changes in an economic process or in the predictive power of the survey forecasts. In our empirical work, we examine a variety of macroeconomic variables, and we use different measurements for the degree of disagreement, together with measures for location of the survey data and autoregressive components. Forecasts from simple linear models and forecasts from Markov regime-switching models with constant and with time-varying transition probabilities are constructed in real-time and compared on forecast accuracy. We find that disagreement indeed has predictive power and that this variable can be used to improve forecasts when used in Markov regime-switching models.

Does Disagreement amongst Forecasters have Predictive Value?
http://repub.eur.nl/pub/20753/
Thu, 02 Sep 2010 00:00:01 GMT
R. Legerstee, Ph.H.B.F. Franses
Forecasts from various experts are often used in macroeconomic forecasting models. Usually the focus is on the mean or median of the survey data. In the present study we adopt a different perspective on the survey data as we examine the predictive power of disagreement amongst forecasters. The premise is that this variable could signal upcoming structural or temporal changes in an economic process or in the predictive power of the survey forecasts. In our empirical work, we examine a variety of macroeconomic variables, and we use different measurements for the degree of disagreement, together with measures for location of the survey data and autoregressive components. Forecasts from simple linear models and forecasts from Markov regime-switching models with constant and with time-varying transition probabilities are constructed in real-time and compared on forecast accuracy. We find that disagreement indeed has predictive power and that this variable can be used to improve forecasts when used in Markov regime-switching models.

A unifying view on multi-step forecasting using an autoregression
http://repub.eur.nl/pub/20234/
Thu, 01 Jul 2010 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
This paper unifies two methodologies for multi-step forecasting from autoregressive time series models. The first is covered in most of the traditional time series literature and it uses short-horizon forecasts to compute longer-horizon forecasts, while the estimation method minimizes one-step-ahead forecast errors. The second methodology considers direct multi-step estimation and forecasting. In this paper, we show that both approaches are special (boundary) cases of a technique called partial least squares (PLS) when this technique is applied to an autoregression. We outline this methodology and show how it unifies the other two. We also illustrate the practical relevance of the resultant PLS autoregression for 17 quarterly, seasonally adjusted, industrial production series. Our main finding is that both boundary models can be improved by including factors indicated by the PLS technique.

Do experts' adjustments on model-based SKU-level forecasts improve forecast quality?
http://repub.eur.nl/pub/19985/
Thu, 01 Apr 2010 00:00:01 GMT
Ph.H.B.F. Franses, R. Legerstee
Model-based SKU-level forecasts are often adjusted by experts. In this paper we propose a statistical methodology to test whether these expert forecasts improve on model forecasts. Application of the methodology to a very large database concerning experts in 35 countries who adjust SKU-level forecasts for pharmaceutical products in seven distinct categories leads to the general conclusion that expert forecasts are at best as good as, and more often worse than, model-based forecasts. We explore whether this is due to experts putting too much weight on their contribution, and this indeed turns out to be the case.

Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments
http://repub.eur.nl/pub/18604/
Tue, 30 Mar 2010 00:00:01 GMT
Ph.H.B.F. Franses, M.J. McAleer, R. Legerstee
Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.
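The standard baseline that these nonstandard evaluations depart from is a Diebold-Mariano-type test on a loss differential. A minimal one-step, squared-loss version of that baseline is sketched below on simulated data; the "staff" and "committee" forecasts are hypothetical stand-ins, not the actual Federal Reserve series, and the review's own tests extend this setup to handle the intuition component.

```python
import numpy as np

# Simulated outcomes and two competing forecasts with different error variances.
rng = np.random.default_rng(3)
n = 500
actual = rng.normal(2.0, 1.0, n)                    # e.g. inflation outcomes
staff = actual + 1.0 * rng.standard_normal(n)       # model-based forecast
committee = actual + 2.0 * rng.standard_normal(n)   # model plus intuition

# Diebold-Mariano statistic on the squared-error loss differential.
# For one-step forecasts with serially uncorrelated losses it is
# asymptotically standard normal under equal accuracy.
d = (committee - actual) ** 2 - (staff - actual) ** 2
dm = np.sqrt(n) * d.mean() / d.std(ddof=1)
print(dm > 1.96)   # True: the noisier forecast is significantly worse
```

For multi-step forecasts the denominator would need a HAC (e.g. Newey-West) estimate of the long-run variance of the loss differential, since the losses are then serially correlated.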