Inter-laboratory comparison of channelized Hotelling observer computation

Research output: Contribution to journal › Article

Abstract

Purpose: The task-based assessment of image quality using model observers is increasingly used across imaging modalities. However, the computation of model observer performance needs standardization, as well as well-established trust in its implementation methodology and uncertainty estimation. The purpose of this work was to determine the degree of equivalence of channelized Hotelling observer (CHO) performance and uncertainty estimation using an intercomparison exercise.

Materials and Methods: Image samples for estimating model observer performance in detection tasks were generated from two-dimensional CT image slices of a uniform water phantom. A common set of images was sent to the participating laboratories to perform and document the following tasks: (a) estimate the detectability index of a well-defined CHO and its uncertainty under three conditions involving different target sizes, all at the same dose; and (b) apply this CHO to an image set acquired at a lower dose, for which the ground truth was unknown to participants. In addition, on an optional basis, we asked the participating laboratories to (c) estimate the performance of real human observers in a psychophysical experiment of their choice. Each of the 13 participating laboratories was confidentially assigned a participant number, and the image sets could be downloaded through a secure server. Results were distributed with each participant identifiable by its number, and each laboratory was then able to revise its results with justification, as model observer calculations are not yet routine and are potentially error prone.

Results: The detectability index increased with signal size for all participants and was very consistent for the 6 mm target, while showing higher variability for the 8 and 10 mm targets. The uncertainty estimates spanned one order of magnitude between the smallest and the largest.
Conclusions: This intercomparison helped define the state of the art of model observer performance computation and, with thirteen participants, reflects openness and trust within the medical imaging community. The performance of a CHO with explicitly defined channels and a relatively large number of test images was estimated consistently by all participants. In contrast, the results demonstrate that there is no agreement on how to estimate the variance of the detectability index in the training-and-testing setting.
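The detectability computation at the heart of the exercise can be sketched as follows. This is an illustrative toy implementation only, using synthetic one-dimensional Gaussian data and Gaussian channels of our own choosing; the study itself used two-dimensional CT phantom slices, and the exact channel definitions, sample sizes, and uncertainty estimators were specified in the exercise protocol, not here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (the study used CT slices of a water phantom):
# n_px-pixel "images", signal-present (sp) vs signal-absent (sa) classes.
n_px, n_train, n_test = 64, 200, 200
signal = np.zeros(n_px)
signal[28:36] = 0.5                                   # toy low-contrast target
noise = lambda n: rng.normal(0.0, 1.0, size=(n, n_px))
train_sp, train_sa = noise(n_train) + signal, noise(n_train)
test_sp,  test_sa  = noise(n_test) + signal,  noise(n_test)

# Toy channel matrix U (n_px x n_ch): Gaussians of increasing width centered
# on the target; real CHOs typically use e.g. Gabor or dense-difference channels.
n_ch = 5
U = np.array([np.exp(-0.5 * ((np.arange(n_px) - 32) / (2.0 * 2**k)) ** 2)
              for k in range(n_ch)]).T

def cho_dprime(train_sp, train_sa, test_sp, test_sa, U):
    """Train a CHO on one image set, score another, return detectability d'."""
    v_sp, v_sa = train_sp @ U, train_sa @ U            # channel outputs
    S = 0.5 * (np.cov(v_sp.T) + np.cov(v_sa.T))        # pooled channel covariance
    w = np.linalg.solve(S, v_sp.mean(0) - v_sa.mean(0))  # Hotelling template
    t_sp, t_sa = test_sp @ U @ w, test_sa @ U @ w      # test statistics
    return (t_sp.mean() - t_sa.mean()) / np.sqrt(
        0.5 * (t_sp.var(ddof=1) + t_sa.var(ddof=1)))

d = cho_dprime(train_sp, train_sa, test_sp, test_sa, U)
```

The separate training and testing sets mirror the "training and testing setting" discussed in the conclusions: the template `w` is estimated on one sample and d' on another, and it is the variance of this d' estimate on which the participating laboratories disagreed.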

Details

Authors
  • Alexandre Ba
  • Craig K. Abbey
  • Jongduk Baek
  • Minah Han
  • Ramona W. Bouwman
  • Christiana Balta
  • Jovan Brankov
  • Francesc Massanes
  • Howard C. Gifford
  • Irene Hernandez-Giron
  • Wouter J.H. Veldkamp
  • Dimitar Petrov
  • Nicholas Marshall
  • Frank W. Samuelson
  • Rongping Zeng
  • Justin B. Solomon
  • Ehsan Samei
  • Ingrid Reiser
  • Lifeng Yu
  • Hao Gong
  • François O. Bochud
Organisations
External organisations
  • Lausanne University Hospital
  • University of California, Santa Barbara
  • Yonsei University
  • Radboud University Nijmegen
  • Illinois Institute of Technology
  • University of Houston
  • Leiden University Medical Centre
  • Catholic University of Leuven
  • University Hospitals Leuven
  • United States Food and Drug Administration
  • Duke University
  • University of Chicago
  • Mayo Clinic Minnesota
Research areas and keywords

Subject classification (UKÄ)

  • Radiology, Nuclear Medicine and Medical Imaging

Keywords

  • channelized Hotelling observer, computed tomography, image quality, intercomparison, model observers
Original language: English
Pages (from-to): 3019-3030
Journal: Medical Physics
Volume: 45
Issue number: 7
Publication status: Published - 2018 Jul 1
Publication category: Research
Peer-reviewed: Yes