Model selection confidence sets by likelihood ratio testing

Zheng, Chao and Ferrari, Davide and Yang, Yuhong (2019) Model selection confidence sets by likelihood ratio testing. Statistica Sinica, 29 (2). pp. 827-851. ISSN 1017-0405

MSCS_final.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial.


Traditional model selection aims to identify a single model that is superior to all other candidates. In the presence of pronounced noise, however, multiple models are often found to explain the same data equally well. To resolve this ambiguity, we introduce the general approach of model selection confidence sets (MSCSs) based on likelihood ratio testing. An MSCS is defined as a list of models that are statistically indistinguishable from the true model at a user-specified confidence level, which extends the familiar notion of confidence intervals to the model-selection framework. Our approach guarantees asymptotically correct coverage probability of the true model as both the sample size and the model dimension increase. We derive conditions under which the MSCS contains all the relevant information about the true model structure. In addition, we propose natural statistics based on the MSCS to measure the importance of variables in a principled way that accounts for the overall model uncertainty. When the space of feasible models is large, the MSCS is implemented by an adaptive stochastic search algorithm that samples MSCS models with high probability. The MSCS methodology is illustrated through numerical experiments on synthetic and real data examples.
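As a rough illustration of the core idea (a simplified sketch, not the paper's adaptive stochastic search algorithm), a submodel can be retained in the confidence set whenever a likelihood ratio test against the full model fails to reject it. The example below assumes Gaussian linear models, a small synthetic predictor set, and exhaustive enumeration of submodels; all variable names and the data-generating setup are illustrative, not taken from the paper.

```python
import itertools

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic regression: only the first two predictors are relevant
# (illustrative setup, not from the paper).
n, p = 200, 4
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n)

def max_loglik(cols):
    """Maximized Gaussian log-likelihood of an OLS fit on the given columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / n  # MLE of the error variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

ll_full = max_loglik(range(p))

# Keep every submodel whose likelihood-ratio statistic against the full
# model stays below the chi-square critical value at 95% confidence.
mscs = []
for k in range(p + 1):
    for S in itertools.combinations(range(p), k):
        stat = 2 * (ll_full - max_loglik(S))
        df = p - k
        if df == 0 or stat <= stats.chi2.ppf(0.95, df):
            mscs.append(S)

print(mscs)  # each tuple lists the predictor indices of a retained model
```

With strong signal, the intercept-only model is rejected while the full model (zero test statistic) is always retained; the set of surviving submodels plays the role of the confidence set at the chosen level.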

Item Type: Journal Article
Journal or Publication Title: Statistica Sinica
Deposited On: 22 Jun 2019 09:10
Last Modified: 22 Nov 2022 07:29