Performance-Estimation Properties of Cross-Validation-Based Protocols with Simultaneous Hyper-Parameter Optimization

Ioannis Tsamardinos, Amin Rakhshani, Vincenzo Lagani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

50 Scopus citations

Abstract

In a typical supervised data analysis, one needs to perform two tasks: (a) select an optimal combination of learning methods (e.g., for variable selection and classification) and tune their hyper-parameters (e.g., K in K-NN), a process also called model selection, and (b) provide an estimate of the performance of the final, reported model. Combining the two tasks is not trivial: when one selects the set of hyper-parameters that seems to provide the best estimated performance, that estimate is optimistic (biased/overfitted) because multiple statistical comparisons have been performed. In this paper, we discuss the theoretical properties of performance estimation in the presence of model selection, and we confirm that simple Cross-Validation with model selection is indeed optimistic (overestimates performance) in small-sample scenarios and should be avoided. We present in detail, and investigate the theoretical properties of, Nested Cross-Validation and a method by Tibshirani and Tibshirani for removing the estimation bias. In computational experiments with real datasets, both protocols provide conservative estimates of performance and should be preferred. These statements hold even when feature selection is performed as preprocessing.
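
The abstract contrasts three estimation protocols: plain Cross-Validation with model selection, the Tibshirani and Tibshirani (TT) bias correction, and Nested Cross-Validation. The sketch below is a minimal illustration of the three estimates, not the authors' code: the synthetic dataset, the grid of K values, the 5-fold splits, and the random seeds are all assumptions made for the example, with scikit-learn used as a stand-in implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic small-sample task; dataset, grid, and seeds are illustrative assumptions.
X, y = make_classification(n_samples=100, n_features=20, n_informative=5, random_state=0)
ks = [1, 3, 5, 7, 9]
cv = KFold(n_splits=5, shuffle=True, random_state=1)

# errors[i, f] = error of K-NN with n_neighbors=ks[i] on fold f,
# trained on the remaining folds.
errors = np.empty((len(ks), cv.get_n_splits()))
for i, k in enumerate(ks):
    for f, (tr, te) in enumerate(cv.split(X)):
        model = KNeighborsClassifier(n_neighbors=k).fit(X[tr], y[tr])
        errors[i, f] = 1.0 - model.score(X[te], y[te])

# (a) Plain CV with model selection: report the minimum average error over the
# grid. Because it is a minimum over multiple comparisons, it is optimistic.
best = errors.mean(axis=1).argmin()
cv_estimate = errors[best].mean()

# (b) TT correction: add the average gap between the globally selected
# configuration and each fold's own best configuration (always >= 0).
tt_bias = np.mean(errors[best, :] - errors.min(axis=0))
tt_estimate = cv_estimate + tt_bias

# (c) Nested CV: an outer loop scores the entire selection procedure,
# so the reported number is never itself a maximum over configurations.
inner = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": ks}, cv=cv)
nested_error = 1.0 - cross_val_score(inner, X, y, cv=KFold(5, shuffle=True, random_state=2)).mean()

print(f"plain CV error : {cv_estimate:.3f}  (optimistic)")
print(f"TT-corrected   : {tt_estimate:.3f}")
print(f"nested CV error: {nested_error:.3f}")
```

On small samples, the plain CV estimate typically comes out lowest (most optimistic), while the TT-corrected and nested estimates are higher, consistent with the conservative behavior the abstract describes.
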
Original language: English (US)
Title of host publication: International Journal on Artificial Intelligence Tools
Publisher: World Scientific
DOIs
State: Published - Oct 1 2015
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-23
