Hinrich Winther has been a specialist in radiology at Hannover Medical School since 2015 and heads the Machine Learning Workgroup at the Department of Diagnostic and Interventional Radiology. He received his Ph.D. (Dr. med.) from Johannes Gutenberg University Mainz in 2016. His current research interests include image segmentation, image classification, interventional radiology, and biomarkers using deep learning as well as classical machine learning approaches. He is one of the founding members of the Young Radiologists and in 2018 co-founded a start-up (Comprehenso GmbH) focused on bringing machine learning methods into interventional radiology.
Introduction: Well-established bone removal (BRM) techniques are crucial for CT angiography (CTA) of the head, body, and lower limbs. For the lower body stem, which includes the abdomen and pelvis, no BRM exists for cone-beam CT (CBCT). This frequently forces the interventionist to perform a manual BRM, particularly in the pelvic area, as in prostate embolization, requiring them to leave the interventional room. The aim of this study was to develop and clinically evaluate a high-quality, fully automated BRM method for CBCT of the lower body stem using a convolutional neural network.

Materials and methods: Trained medical students manually segmented the ground-truth images, and a radiologist with at least five years of experience verified their accuracy. A convolutional neural network (3D U-Net, Çiçek et al.) was trained on the image data and ground truth of 534 training examples. Five percent of the training data were used for online evaluation, and the best-performing model was selected based on overlap measures computed during this online validation. The test set consisted of 30 cases for which the BRM was produced with the final model. Three interventional radiologists with at least ten years of experience visually assessed the test cases for completeness and overall quality.

Results: The bone mask was rated as complete in 100 % (n=30) of the test cases, with no overhanging bone, truncation of the vascular tree, or residual soft tissue in 100 % (n=30), and with either no or only very minor residuals in the maximum intensity projection (MIP) or volume rendering (VRT). Every BRM test case was of diagnostic quality and required no manual intervention. Processing one CBCT takes about 20 seconds.

Discussion: The AI-based bone removal approach relieves the interventionist of performing a manual bone removal, allowing them to maximize the time spent on the intervention and potentially reducing the total intervention time for the patient.
BRM can also be used for CBCT image guidance, such as prostatic artery embolization.
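The abstract does not state which overlap measure was used to select the best model during online validation; the Dice coefficient is a common choice for segmentation tasks, and a minimal sketch (the function name and NumPy-based implementation are illustrative assumptions, not the authors' code) could look like this:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between a predicted and a ground-truth binary mask.

    Returns a value in [0, 1]; 1.0 means the masks agree perfectly.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```

Computing such a score on the held-out five percent after each training epoch, and keeping the checkpoint with the highest value, is one straightforward way to realize the model selection the abstract describes.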
Bernhard Kainz is a full professor at Friedrich-Alexander-University Erlangen-Nuremberg, where he heads the Image Data Exploration and Analysis Lab (www.idea.tf.fau.eu), and a Reader in medical image computing in the Department of Computing at Imperial College London, where he leads the human-in-the-loop computing group and co-leads the biomedical image analysis research group (biomedia.doc.ic.ac.uk). Bernhard's research is dedicated to developing novel image processing methods that augment human decision-making capabilities, with a focus on bridging the gaps between modern computing methods and clinical practice.
His current research questions include: Can we democratize rare healthcare expertise through Machine Learning, providing guidance in real-time applications and second reader expertise? Can we develop normative learning from large populations, integrating imaging, patient records and omics, leading to data analysis that mimics human decision making? Can we provide human interpretability of machine decision making to support the 'right for explanation' in healthcare?
Bernhard's scientific drive is documented in over 150 state-of-the-art-defining scientific publications in the field. He has worked as a scientific advisor for ThinkSono Ltd./GmbH, Ultromics Ltd., and Cydar Medical Ltd., served as a clinical imaging scientist at St. Thomas' Hospital London, and has collaborated with numerous industry partners. He is an IEEE Senior Member and an associate editor for IEEE Transactions on Medical Imaging, and has won numerous awards, prizes, and honours, including seven best paper awards. In 2023, his research was awarded an ERC Consolidator grant.
Machine learning has been widely regarded as a solution for diagnostic automation in medical image analysis, but there are still unsolved problems in robust modelling of normal appearance and identification of features pointing into the long tail of population data. In this talk, I will explore the fitness of machine learning for applications at the front line of care and high throughput population health screening, specifically in prenatal health screening with ultrasound and MRI, cardiac imaging, and bedside diagnosis of deep vein thrombosis. I will discuss the requirements for such applications and how quality control can be achieved through robust estimation of algorithmic uncertainties and automatic robust modelling of expected anatomical structures. I will also explore the potential for improving models through active learning and the accuracy of non-expert labelling workforces.
However, I will argue that supervised machine learning might not be fit for purpose, as it cannot handle the unknown and requires many annotated examples of well-defined pathological appearances. This categorization paradigm cannot be deployed earlier in the diagnostic pathway or for health screening, where any of the potentially hundreds of thousands of medically catalogued illnesses may be relevant to a diagnosis.
Therefore, I introduce the idea of normative representation learning as a new machine learning paradigm for medical imaging. This paradigm can provide patient-specific computational tools for robust confirmation of normality, image quality control, health screening, and prevention of disease before onset. I will present novel deep learning approaches that can learn without manual labels from healthy patient data only. Our initial success with single class learning and self-supervised learning will be discussed, along with an outlook into the future with causal machine learning methods and the potential of advanced generative models.