Many publicly available macroeconomic forecasts are judgmentally adjusted model-based forecasts. In practice, usually only the single final forecast is available: neither the underlying econometric model nor the size of and reason for the adjustment are known. Hence, the relative weights given to the model forecast and to the judgement are usually unknown to the analyst. This paper proposes a methodology to evaluate the quality of such final forecasts and to allow learning from past errors. To do so, the analyst needs benchmark forecasts. We propose two such benchmarks. The first is the simple no-change forecast, the bottom-line forecast that any expert should be able to improve upon. The second is an estimated model-based forecast, obtained as the best forecast given the realizations and the final forecasts. We illustrate this methodology for two sets of GDP growth forecasts, one for the USA and one for the Netherlands. These applications suggest that adjustment is most effective in periods of first recovery from a recession.
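The first benchmark described above can be sketched in a few lines: a final forecast adds value only if it beats the no-change (random-walk) forecast, e.g. on root mean squared error. This is a minimal illustration with made-up numbers, not the paper's data or its full evaluation procedure.

```python
# Sketch of the no-change benchmark comparison. All series below are
# hypothetical illustrations, not taken from the paper.

def rmse(errors):
    """Root mean squared error of a sequence of forecast errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

# Hypothetical quarterly GDP growth realizations (percent).
realizations = [0.5, -1.2, -0.8, 0.9, 1.4, 1.1]
# Hypothetical judgmentally adjusted final forecasts, one period ahead.
final_forecasts = [0.6, -0.5, -1.0, 0.2, 1.2, 1.0]

# No-change benchmark: the forecast for period t is the realization at t-1.
no_change = realizations[:-1]   # benchmark forecasts for periods 1..5
targets = realizations[1:]      # realizations for periods 1..5
finals = final_forecasts[1:]    # final forecasts aligned with the same targets

rmse_no_change = rmse([f - y for f, y in zip(no_change, targets)])
rmse_final = rmse([f - y for f, y in zip(finals, targets)])

print(f"RMSE no-change benchmark: {rmse_no_change:.3f}")
print(f"RMSE final forecast:      {rmse_final:.3f}")
# The expert's adjustment is worthwhile only if rmse_final < rmse_no_change.
```

The paper's second benchmark is more involved (an estimated model-based forecast recovered from the realizations and the final forecasts, using total least squares per the keywords), and is not reproduced here.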

Additional Metadata
Keywords Expert adjustment, Forecast decomposition, Total least squares
JEL C20 (Econometric Methods: Single Equation Models; Single Variables: General), C51 (Model Construction and Estimation)
Persistent URL dx.doi.org/10.1002/ijfe.1569, hdl.handle.net/1765/94546
Series Econometric Institute Reprint Series
Journal International Journal of Finance and Economics
Citation
Franses, Ph.H.B.F., & de Bruijn, L.P. (2017). Benchmarking Judgmentally Adjusted Forecasts. International Journal of Finance and Economics, 22(1), 3–11. doi:10.1002/ijfe.1569