Many publicly available macroeconomic forecasts are judgmentally adjusted model-based forecasts. In practice, usually only a single final forecast is available; neither the underlying econometric model nor the size of and reason for the adjustment are known. Hence, the relative weights given to the model forecast and to the judgement are usually unknown to the analyst. This paper proposes a methodology to evaluate the quality of such final forecasts and to allow learning from past errors. To do so, the analyst needs benchmark forecasts. We propose two such benchmarks. The first is the simple no-change forecast, the bottom-line forecast that an expert should be able to improve upon. The second benchmark is an estimated model-based forecast, found as the best forecast given the realizations and the final forecasts. We illustrate this methodology for two sets of GDP growth forecasts, one for the USA and one for the Netherlands. These applications show that adjustment appears most effective in periods of first recovery from a recession. Copyright © 2016 John Wiley & Sons, Ltd.
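The first benchmark described above, the no-change (random walk) forecast, can be sketched as follows. This is a minimal illustration, not the paper's own code: the data are hypothetical quarterly growth rates, and root mean squared error is used as one common loss function for comparing the expert's final forecast against the benchmark.

```python
import numpy as np

def rmse(forecasts, realizations):
    """Root mean squared forecast error."""
    f = np.asarray(forecasts, dtype=float)
    r = np.asarray(realizations, dtype=float)
    return float(np.sqrt(np.mean((f - r) ** 2)))

# Hypothetical quarterly GDP growth realizations (illustration only).
realized = [0.5, 0.7, 0.4, 0.6, 0.8]
# Hypothetical judgmentally adjusted final forecasts for the same quarters.
expert_final = [0.6, 0.6, 0.5, 0.5, 0.7]

# No-change benchmark: the forecast for period t is the realization
# at t-1; 0.3 is an assumed pre-sample starting value.
no_change = [0.3] + realized[:-1]

print("expert RMSE:   ", rmse(expert_final, realized))
print("no-change RMSE:", rmse(no_change, realized))
```

An expert whose final forecasts do not beat the no-change benchmark on a loss measure like this adds no value over the naive rule; the paper's second benchmark instead estimates the best model-based forecast from the realizations and final forecasts (via total least squares, per the keywords).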

forecast decomposition, expert adjustment, total least squares
Econometric Methods: Single Equation Models; Single Variables: General (jel C20), Model Construction and Estimation (jel C51)
hdl.handle.net/1765/132385
Department of Econometrics

Franses, Ph.H.B.F. (2020). Benchmarking judgmentally adjusted forecasts. Retrieved from http://hdl.handle.net/1765/132385