Comparison of standard resampling methods for performance estimation of artificial neural network ensembles

Michael Green, Mattias Ohlsson

    Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding › peer-review


    Abstract

    Estimating the generalization performance of classifiers is an important task in medical applications. In this study we focus on artificial neural network (ANN) ensembles as the machine learning technique. We present a numerical comparison of five common resampling techniques — k-fold cross-validation (CV), holdout with three cutoffs (0.25, 0.50, and 0.75), and the bootstrap — on five different data sets. The results show that CV, together with holdout 0.25 and 0.50, are the best resampling strategies for estimating the true performance of ANN ensembles. The bootstrap, using the .632+ rule, is too optimistic, while holdout 0.75 underestimates the true performance.
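
    The three resampling schemes compared in the abstract can be illustrated as index-split generators. This is a hypothetical sketch, not the authors' code: `holdout_split`, `kfold_splits`, and `bootstrap_split` are illustrative names, and the out-of-bag test set shown for the bootstrap is only the starting point of the .632+ estimator, whose full weighting rule is not reproduced here.

    ```python
    import random

    def holdout_split(n, cutoff, rng):
        """Single random split; `cutoff` is the fraction held out for testing."""
        idx = list(range(n))
        rng.shuffle(idx)
        n_test = int(round(cutoff * n))
        return idx[n_test:], idx[:n_test]  # (train indices, test indices)

    def kfold_splits(n, k, rng):
        """k-fold CV: each sample appears in exactly one test fold."""
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        return [([j for f in folds[:i] + folds[i + 1:] for j in f], folds[i])
                for i in range(k)]

    def bootstrap_split(n, rng):
        """Bootstrap resample: train on a draw with replacement, test on the
        out-of-bag samples (the raw ingredient of the .632+ rule)."""
        train = [rng.randrange(n) for _ in range(n)]
        oob = sorted(set(range(n)) - set(train))
        return train, oob

    rng = random.Random(0)
    train, test = holdout_split(100, 0.25, rng)
    print(len(train), len(test))  # prints "75 25": holdout 0.25 tests on a quarter of the data
    ```

    Each generator yields train/test index pairs over which an ANN ensemble would be trained and scored; averaging the test-set scores gives the performance estimate whose bias the paper compares across methods.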
    Original language: English
    Title of host publication: Third International Conference on Computational Intelligence in Medicine and Healthcare
    Editors: Emmanuel Ifeachor
    Number of pages: 6
    Publication status: Published - 2007
    Event: Third International Conference on Computational Intelligence in Medicine and Healthcare - Plymouth, England
    Duration: 2007 Jul 25 - 2007 Jul 27


    Subject classification (UKÄ)

    • Biophysics

    Free keywords

    • performance estimation
    • k-fold cross validation
    • bootstrap
    • artificial neural networks
