TY - GEN
T1 - Domain-adversarial neural network for improved generalization performance of Gleason grade classification
AU - Arvidsson, Ida
AU - Overgaard, Niels Christian
AU - Krzyzanowska, Agnieszka
AU - Marginean, Felicia Elena
AU - Simoulis, Athanasios
AU - Bjartell, Anders
AU - Åström, Kalle
AU - Heyden, Anders
PY - 2020
Y1 - 2020
N2 - When training a deep learning model, the dataset used is of great importance to ensure that the model learns relevant features of the data and that it will be able to generalize to new data. However, it is typically difficult to produce a dataset without some bias toward any specific feature. Deep learning models used in histopathology have a tendency to overfit to the stain appearance of the training data: if the model is trained on data from one lab only, it will usually not be able to generalize to data from other labs. The standard technique to overcome this problem is to use color augmentation of the training data, which artificially generates more variations for the network to learn. In this work we instead test the use of a so-called domain-adversarial neural network, which is designed to prevent the model from being biased towards features that are in reality irrelevant, such as the origin of an image. To test the technique, four datasets from different hospitals for Gleason grading of prostate cancer are used. We achieve state-of-the-art results for these particular datasets, and furthermore, for two of our three test datasets, the approach outperforms the use of color augmentation.
AB - When training a deep learning model, the dataset used is of great importance to ensure that the model learns relevant features of the data and that it will be able to generalize to new data. However, it is typically difficult to produce a dataset without some bias toward any specific feature. Deep learning models used in histopathology have a tendency to overfit to the stain appearance of the training data: if the model is trained on data from one lab only, it will usually not be able to generalize to data from other labs. The standard technique to overcome this problem is to use color augmentation of the training data, which artificially generates more variations for the network to learn. In this work we instead test the use of a so-called domain-adversarial neural network, which is designed to prevent the model from being biased towards features that are in reality irrelevant, such as the origin of an image. To test the technique, four datasets from different hospitals for Gleason grading of prostate cancer are used. We achieve state-of-the-art results for these particular datasets, and furthermore, for two of our three test datasets, the approach outperforms the use of color augmentation.
KW - Domain adversarial neural network
KW - Generalization
KW - Gleason grading
U2 - 10.1117/12.2549011
DO - 10.1117/12.2549011
M3 - Paper in conference proceeding
AN - SCOPUS:85120979234
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2020
A2 - Tomaszewski, John E.
A2 - Ward, Aaron D.
PB - SPIE
T2 - Medical Imaging 2020: Digital Pathology
Y2 - 19 February 2020 through 20 February 2020
ER -