Barrow, Devon K. and Crone, Sven F. (2013) Crogging (cross-validation aggregation) for forecasting: a novel algorithm of neural network ensembles on time series subsamples. In: 2013 International Joint Conference on Neural Networks (IJCNN 2013). Proceedings of the International Joint Conference on Neural Networks. IEEE, USA. ISBN 9781467361293
Abstract
In classification, regression and time series prediction alike, cross-validation is widely employed to estimate the expected accuracy of a predictive algorithm by averaging predictive errors across mutually exclusive subsamples of the data. Similarly, bootstrapping aims to increase the validity of estimating the expected accuracy by repeatedly sub-sampling the data with replacement, creating overlapping samples of the data. These estimates are then used to anticipate future risk in decision making, or to guide model selection where multiple candidates are feasible. Beyond error estimation, bootstrapping has recently been extended to combine the diverse models created during estimation, aggregating over their predictions (rather than their errors), coined bootstrap aggregation or bagging. However, similar extensions of cross-validation to create diverse forecasting models have not been considered. In analogy to bagging, we propose to combine the benefits of cross-validation and forecast aggregation, i.e. crogging. We assess different levels of cross-validation, including a (single-fold) hold-out approach, 2-fold and 10-fold cross-validation, and Monte Carlo cross-validation, to create diverse neural network base models for time series prediction trained on different data subsets, and average their individual multiple-step-ahead predictions. Results of forecasting the 111 time series of the NN3 competition indicate significant improvements in accuracy through crogging relative to bagging or individual model selection of neural networks.
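The core idea described in the abstract, training one network per cross-validation fold on lagged input-output pairs and averaging the resulting forecasts instead of the errors, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' exact procedure: it uses scikit-learn's `MLPRegressor` as the base neural network, a k-fold split over a lag-embedded training set, and a simple mean over the ensemble's predictions; the function name `crogging_forecast` and all parameters are hypothetical.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor


def crogging_forecast(X, y, X_query, k=10, seed=0):
    """Illustrative crogging sketch (not the paper's exact setup).

    Trains one small neural network on the training portion of each
    of k cross-validation folds, then averages the k individual
    predictions for X_query (forecast aggregation across folds).
    """
    preds = []
    for fold_id, (train_idx, _) in enumerate(
            KFold(n_splits=k, shuffle=True, random_state=seed).split(X)):
        # Each fold yields a diverse base model via a different data subset.
        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                             random_state=seed + fold_id)
        model.fit(X[train_idx], y[train_idx])
        preds.append(model.predict(X_query))
    # Aggregate forecasts, not errors: simple equal-weight mean.
    return np.mean(preds, axis=0)
```

A usage example would build a lag embedding of the series (e.g. windows of the last 4 observations as inputs, the next observation as the target) and pass the final windows as `X_query` to obtain the averaged forecast. Note that shuffled k-fold over lagged pairs is only one possible sampling scheme; the paper also considers hold-out and Monte Carlo cross-validation variants.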