Abstract: Are Fast Labeling Methods Reliable? A Case Study of Computer-aided Expert Annotations on Microscopy Slides
Christian Marzahl, Christof A. Bertram, Marc Aubreville, Anne Petrick, Kristina Weiler, Agnes C. Gläsel, Marco Fragoso, Sophie Merz, Florian Bartenschlager, Judith Hoppe, Alina Langenhagen, Anne Katherine Jasensky, Jörn Voigt, Robert Klopfleisch, Andreas Maier
Friedrich-Alexander Universität Erlangen-Nürnberg, Lehrstuhl für Mustererkennung
Abstract
Deep-learning-based pipelines have shown the potential to revolutionize microscopy image diagnostics by providing visual augmentations and evaluations to a pathologist. However, to match human performance, these methods rely on the availability of vast amounts of high-quality labeled data, which poses a significant challenge. To circumvent this, augmented labeling methods, also known as expert-algorithm collaboration, have recently become popular. However, the potential biases introduced by this operation mode and their effects on training deep neural networks are not entirely understood [1]. This work aimed to evaluate these effects for three pathological patterns of interest. Ten trained pathology experts performed labeling tasks with and without computer-generated augmentation. To investigate different biasing effects, we intentionally introduced errors into the augmentation. In total, the experts annotated 26,015 cells on 1,200 images in this novel annotation study. Backed by this extensive data set, we found that the concordance of multiple experts was significantly higher in the computer-aided setting than with unaided annotation. However, a significant percentage of the deliberately introduced false labels was not identified by the experts.
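The abstract does not state which agreement metric the study used. Purely as an illustrative sketch (an assumption, not the authors' method), the Python snippet below shows one common way to quantify concordance among multiple annotators, Fleiss' kappa, computed on made-up per-cell label counts.

import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    # counts[i, j] = number of raters who assigned item i to category j.
    # Every item must be rated by the same number of raters.
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)
    assert np.all(n_raters == n_raters[0]), "each item needs the same number of raters"
    n = n_raters[0]

    # Per-item observed agreement.
    p_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()

    # Chance agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / counts.sum()
    p_e = np.square(p_j).sum()

    return (p_bar - p_e) / (1 - p_e)

# Toy example (hypothetical data): 5 cells, 10 raters, 3 label classes.
example = np.array([
    [8, 1, 1],
    [0, 9, 1],
    [7, 2, 1],
    [1, 1, 8],
    [9, 0, 1],
])
print(round(fleiss_kappa(example), 3))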
References
1. Marzahl C, Bertram CA, Aubreville M, et al. Are Fast Labeling Methods Reliable? A Case Study of Computer-Aided Expert Annotations on Microscopy Slides. In: MICCAI. Cham: Springer International Publishing; 2020. p. 24-32.