Many publicly available macroeconomic forecasts are judgmentally adjusted model-based forecasts. In practice, usually only a single final forecast is available: the underlying econometric model is not, nor are the size of and reason for the adjustment known. Hence, the relative weights given to the model forecast and to the judgment are usually unknown to the analyst.

This paper proposes a methodology to evaluate the quality of such final forecasts, also to allow learning from past errors. To do so, the analyst needs benchmark forecasts. We propose two such benchmarks. The first is the simple no-change forecast, which is the bottom-line forecast that an expert should be able to improve upon. The second benchmark is an estimated model-based forecast, which is obtained as the best forecast given the realizations and the final forecasts. We illustrate this methodology for two sets of GDP growth forecasts, one for the US and one for the Netherlands. These applications suggest that adjustment appears most effective in periods of first recovery from a recession.
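As an illustration of the two benchmarks, the sketch below compares a series of final forecasts against (i) the no-change forecast and (ii) a forecast fitted to the final forecasts by total least squares, the method named in the keywords. The data, variable names, and the particular TLS estimator are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch, assuming only realizations y_t and the published
# (judgmentally adjusted) final forecasts f_t are observed.
import numpy as np

def no_change_forecast(y):
    """Benchmark 1: the forecast for period t is the realization at t-1."""
    return y[:-1]                      # aligns with realizations y[1:]

def tls_fit(f, y):
    """Benchmark 2 (sketch): fit y ~ alpha + beta * f by total least squares,
    treating both the final forecasts and the realizations as noisy."""
    f_c, y_c = f - f.mean(), y - y.mean()
    # The right singular vector with the smallest singular value gives the TLS line.
    _, _, vt = np.linalg.svd(np.column_stack([f_c, y_c]))
    normal = vt[-1]
    beta = -normal[0] / normal[1]
    alpha = y.mean() - beta * f.mean()
    return alpha, beta

def rmse(errors):
    return float(np.sqrt(np.mean(np.square(errors))))

# Hypothetical quarterly GDP growth data, purely for illustration.
rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, 40)                    # realizations
f = 0.2 + 0.8 * y + rng.normal(0.0, 0.5, 40)    # final (adjusted) forecasts

alpha, beta = tls_fit(f, y)
print("RMSE final forecasts :", rmse(y - f))
print("RMSE no-change bench :", rmse(y[1:] - no_change_forecast(y)))
print("RMSE estimated bench :", rmse(y - (alpha + beta * f)))
```

If the final forecasts do not beat the no-change benchmark, or fall well short of the estimated benchmark, the comparison points to where the judgmental adjustment added or destroyed value.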

Additional Metadata
Keywords forecast decomposition, expert adjustment, total least squares
JEL Econometric Methods: Single Equation Models; Single Variables: General (jel C20), Model Construction and Estimation (jel C51)
Persistent URL hdl.handle.net/1765/79222
Series Econometric Institute Research Papers
Citation
Franses, Ph.H.B.F., & de Bruijn, L.P. (2015). Benchmarking judgmentally adjusted forecasts (No. EI2015-36). Econometric Institute Research Papers. Retrieved from http://hdl.handle.net/1765/79222