CTARR: A fast and robust method for identifying anatomical regions on CT images via atlas registration

Thomas Buddenkotte1,2,3, Roland Opfer2, Julia Krüger2, Alessa Hering4, Mireia Crispin-Ortuzar5,6
1: Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany, 2: jung diagnostics, Hamburg, Germany, 3: Department of Oncology, University of Cambridge, UK, 4: Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands, 5: Early Cancer Institute, Department of Oncology, University of Cambridge, UK, 6: Cancer Research UK Cambridge Centre, University of Cambridge, UK
Publication date: 2024/10/24
https://doi.org/10.59275/j.melba.2024-f5fc

Abstract

Medical image analysis tasks often focus on regions or structures located at a particular location within the patient’s body, and large parts of the image may not be of interest for the analysis task. For deep-learning based approaches, this unnecessarily increases the computational burden during inference and raises the chance of errors. In this paper, we introduce CTARR, a novel generic method for CT Anatomical Region Recognition. The method serves as a pre-processing step for any deep learning-based CT image analysis pipeline by automatically identifying the pre-defined anatomical region that is relevant for the follow-up task and removing the rest. It can be used in (i) image segmentation, to prevent false positives in anatomically implausible regions and to speed up inference, (ii) image classification, to produce image crops that are consistent in their anatomical context, and (iii) image registration, by serving as a fast pre-registration step. Our proposed method is based on atlas registration and provides a fast and robust way to crop any anatomical region, encoded as one or multiple bounding boxes, from any unlabeled CT scan of the brain, chest, abdomen and/or pelvis. We demonstrate the utility and robustness of the proposed method in the context of medical image segmentation by evaluating it on six datasets from public segmentation challenges. The foreground voxels in the regions of interest are preserved in the vast majority of cases and tasks (97.45-100%), while the cropping takes only a fraction of a second to compute (0.1-0.21 s) on a deep learning workstation and greatly reduces the segmentation runtime (2.0-12.7x). Our code is available at https://github.com/ThomasBudd/ctarr
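To make the core idea of the abstract concrete, the sketch below illustrates (in a hedged, simplified form) how a pre-defined bounding box in atlas space could be mapped into image space after atlas registration and used to crop a CT volume before segmentation. This is not the authors' ctarr implementation or API; the affine transform, array conventions, and function names are assumptions for illustration only.

```python
"""Minimal illustrative sketch (not the ctarr API): map an atlas-space
bounding box into image space via an assumed 4x4 affine from atlas
registration, then crop the CT volume to that region."""
import numpy as np


def map_bbox_from_atlas(bbox_atlas, affine_atlas_to_image, image_shape):
    """Map an axis-aligned bounding box (lo, hi) given in atlas voxel
    coordinates into image voxel coordinates using a hypothetical
    atlas-to-image affine transform, clipped to the image extent."""
    lo, hi = np.asarray(bbox_atlas[0], float), np.asarray(bbox_atlas[1], float)
    # All 8 corners of the atlas-space box
    corners = np.array([[x, y, z]
                        for x in (lo[0], hi[0])
                        for y in (lo[1], hi[1])
                        for z in (lo[2], hi[2])])
    # Apply the 4x4 affine in homogeneous coordinates
    corners_h = np.c_[corners, np.ones(len(corners))]
    mapped = (affine_atlas_to_image @ corners_h.T).T[:, :3]
    # Axis-aligned hull of the mapped corners, clipped to the image
    lo_img = np.clip(np.floor(mapped.min(axis=0)), 0, None).astype(int)
    hi_img = np.minimum(np.ceil(mapped.max(axis=0)).astype(int),
                        np.asarray(image_shape))
    return lo_img, hi_img


def crop_to_region(ct_volume, bbox_atlas, affine_atlas_to_image):
    """Crop a CT volume (z, y, x numpy array) to the mapped bounding box;
    the returned (lo, hi) allows pasting predictions back afterwards."""
    lo, hi = map_bbox_from_atlas(bbox_atlas, affine_atlas_to_image,
                                 ct_volume.shape)
    crop = ct_volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return crop, (lo, hi)
```

In such a setup, only the cropped region would be passed to the downstream segmentation network, which is what yields the inference speed-ups and the suppression of anatomically implausible false positives described above.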

Keywords

CT · Deep learning · Image Segmentation · Image Registration · Atlas Registration

Bibtex
@article{melba:2024:022:buddenkotte,
    title = "CTARR: A fast and robust method for identifying anatomical regions on CT images via atlas registration",
    author = "Buddenkotte, Thomas and Opfer, Roland and Krüger, Julia and Hering, Alessa and Crispin-Ortuzar, Mireia",
    journal = "Machine Learning for Biomedical Imaging",
    volume = "2",
    issue = "October 2024 issue",
    year = "2024",
    pages = "2067--2088",
    issn = "2766-905X",
    doi = "https://doi.org/10.59275/j.melba.2024-f5fc",
    url = "https://melba-journal.org/2024:022"
}
