We introduce tests for multi-horizon superior predictive ability. Rather than comparing the forecasts of different models at each horizon individually, we propose to consider all horizons of a forecast path jointly. We define the concepts of uniform and average superior predictive ability: the former requires superior performance at every individual horizon, while the latter allows inferior performance at some horizons to be compensated by superior performance at others. The paper illustrates how the tests lead to more coherent conclusions and are better able to differentiate between models than single-horizon tests. We extend the previously introduced Model Confidence Set to allow for multi-horizon comparison of more than two models. Simulations demonstrate appropriate size and high power. An application of the tests to a large set of macroeconomic variables illustrates the empirical benefits of multi-horizon comparison.
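To make the distinction between uniform and average superior predictive ability concrete, the sketch below contrasts the two notions on per-horizon loss differentials between two models. It is an illustrative simplification rather than the paper's procedure: the function name multi_horizon_spa, its arguments, the equal default weights, the min-t and weighted-mean statistics, and the plain moving-block bootstrap are all assumptions made for exposition.

```python
import numpy as np

def multi_horizon_spa(loss_a, loss_b, weights=None, n_boot=999, block_len=4, seed=0):
    """Bootstrap p-values for uniform and average SPA of model A over model B.

    loss_a, loss_b : (T, H) arrays of forecast losses, one column per horizon.
    Positive entries of d = loss_b - loss_a favour model A.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(loss_b) - np.asarray(loss_a)   # (T, H) loss differentials
    T, H = d.shape
    w = np.full(H, 1.0 / H) if weights is None else np.asarray(weights, dtype=float)

    def stats(x):
        m = x.mean(axis=0)
        se = x.std(axis=0, ddof=1) / np.sqrt(T)
        t_u = np.min(m / se)                      # uniform: the weakest horizon decides
        se_avg = np.sqrt(w @ np.cov(x, rowvar=False) @ w / T)
        t_a = (m @ w) / se_avg                    # average: horizons may compensate
        return t_u, t_a

    t_u, t_a = stats(d)

    # Moving-block bootstrap of the recentred differentials approximates the
    # distribution of both statistics under "no superiority at any horizon".
    d0 = d - d.mean(axis=0)
    n_blocks = int(np.ceil(T / block_len))
    boot = np.empty((n_boot, 2))
    for b in range(n_boot):
        starts = rng.integers(0, T - block_len + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
        boot[b] = stats(d0[idx])

    p_u, p_a = (boot >= (t_u, t_a)).mean(axis=0)
    return {"t_uSPA": t_u, "p_uSPA": p_u, "t_aSPA": t_a, "p_aSPA": p_a}
```

In this stylized setup, a model that wins on average but loses at one horizon would typically yield a small aSPA p-value together with a large uSPA p-value, which is exactly the distinction the two concepts formalize.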

Additional Metadata
Keywords Forecasting, Long-Horizon, Multiple Testing, Path Forecasts, Superior Predictive Ability
JEL Time-Series Models; Dynamic Quantile Regressions (JEL C22), Model Evaluation and Testing (JEL C52), Forecasting and Other Model Applications (JEL C53), Financial Econometrics (JEL C58)
Persistent URL dx.doi.org/10.1080/07350015.2019.1620074, hdl.handle.net/1765/113954
Journal Journal of Business and Economic Statistics
Citation
Quaedvlieg, R. (2019). Multi-Horizon Forecast Comparison. Journal of Business and Economic Statistics. doi:10.1080/07350015.2019.1620074