Wynn, H. P., Brown, P. J., Anderson, C., Rougier, J. C., Diggle, P. J., Goldstein, M., Kendall, W. S. and Craig, P. (2001) Comments on 'Bayesian calibration of mathematical models' by M. C. Kennedy and A. O'Hagan. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63 (3). pp. 450-464. ISSN 1369-7412
Full text not available from this repository.

Abstract
We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
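The core idea of the abstract can be illustrated with a toy sketch (not the paper's Gaussian-process formulation): a simulator eta(x, theta) = theta * x is calibrated against observations of a system that also contains a systematic discrepancy term. All names, the discrepancy form, and the variance values below are illustrative assumptions. Treating the discrepancy as extra variance in the likelihood widens the posterior over theta, in the spirit of allowing for model inadequacy rather than over-trusting the best-fitting value.

```python
import math

# Hypothetical toy simulator (illustrative, not from the paper).
def eta(x, theta):
    return theta * x

true_theta = 2.0
xs = [0.5 * i for i in range(1, 11)]
# Synthetic "observations": the real process adds a systematic
# discrepancy 0.5*sin(x) that no choice of theta can reproduce.
ys = [true_theta * x + 0.5 * math.sin(x) for x in xs]

def log_post(theta, var):
    # Flat prior over the grid; Gaussian likelihood whose total
    # variance `var` is observation noise plus (optionally) an
    # assumed discrepancy variance.
    return sum(
        -(y - eta(x, theta)) ** 2 / (2 * var)
        - 0.5 * math.log(2 * math.pi * var)
        for x, y in zip(xs, ys)
    )

def grid_posterior(var, grid):
    # Normalise the posterior on a grid and report mean and sd.
    lp = [log_post(t, var) for t in grid]
    m = max(lp)
    w = [math.exp(v - m) for v in lp]
    z = sum(w)
    probs = [v / z for v in w]
    mean = sum(t * p for t, p in zip(grid, probs))
    sd = math.sqrt(sum((t - mean) ** 2 * p for t, p in zip(grid, probs)))
    return mean, sd

grid = [1.5 + 0.001 * i for i in range(1001)]  # theta in [1.5, 2.5]
# Naive calibration: noise-only variance -> overconfident posterior.
m_naive, sd_naive = grid_posterior(0.01, grid)
# Discrepancy-aware: inflated variance -> honestly wider posterior.
m_disc, sd_disc = grid_posterior(0.01 + 0.25, grid)
print(f"noise only:       theta = {m_naive:.3f} +/- {sd_naive:.3f}")
print(f"with discrepancy: theta = {m_disc:.3f} +/- {sd_disc:.3f}")
```

In this toy example both posteriors centre near the best-fitting theta, but the discrepancy-aware posterior is noticeably wider, reflecting the remaining uncertainty the abstract describes. The paper itself models the discrepancy as a Gaussian process rather than a simple variance inflation.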