Cross Attention Transformers for Multi-modal Unsupervised Whole-Body PET Anomaly Detection

Ashay Patel1, Petru-Daniel Tudosiu1, Walter H.L. Pinaya1, Gary Cook1, Vicky Goh1, Sebastien Ourselin1, M. Jorge Cardoso1
1: King’s College London
Publication date: 2023/04/19
https://doi.org/10.59275/j.melba.2023-18c1

Abstract

Cancer is a highly heterogeneous condition that can occur almost anywhere in the human body. [18F]fluorodeoxyglucose Positron Emission Tomography (18F-FDG PET) is an imaging modality commonly used to detect cancer due to its high sensitivity and clear visualisation of the pattern of metabolic activity. Nonetheless, this heterogeneity makes it challenging to train general-purpose discriminative cancer detection models, with data availability and disease complexity often cited as limiting factors. Unsupervised learning methods, more specifically anomaly detection models, have been suggested as a putative solution. These models learn a healthy representation of tissue and detect cancer by predicting deviations from the healthy norm, which requires models capable of accurately learning long-range interactions between organs, their imaging patterns, and other abstract features with high levels of expressivity. Such characteristics are suitably satisfied by transformers, which have been shown to generate state-of-the-art results in unsupervised anomaly detection by training on normal data. This work expands upon such approaches by introducing multi-modal conditioning of the transformer via cross-attention, i.e., supplying anatomical reference information from paired CT images to aid the PET anomaly detection task. Furthermore, we show the importance and impact of codebook sizing within a Vector Quantized Variational Autoencoder on the ability of the transformer network to fulfill the task of anomaly detection. Using 294 whole-body PET/CT samples containing various cancer types, we show that our anomaly detection method is robust and capable of achieving accurate cancer localization results even in cases where normal training data is unavailable. In addition, we show the efficacy of this approach on out-of-sample data, showcasing the generalizability of this approach even with limited training data.
Lastly, we propose combining model uncertainty with a new kernel density estimation approach, and show that it provides clinically and statistically significant improvements in accuracy and robustness when compared to classic residual-based anomaly maps. Overall, superior performance is demonstrated against leading state-of-the-art alternatives, drawing attention to the potential of these approaches.
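The cross-attention conditioning described above can be sketched in a few lines: queries come from the PET token sequence being modelled, while keys and values come from the paired CT token sequence, so the anatomical context steers the PET prediction. This is a minimal illustrative sketch, not the paper's implementation; the token counts, embedding size, and weight matrices below are hypothetical toy values.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pet_tokens, ct_tokens, Wq, Wk, Wv):
    Q = pet_tokens @ Wq          # queries from the PET token sequence
    K = ct_tokens @ Wk           # keys from the paired CT tokens
    V = ct_tokens @ Wv           # values from the paired CT tokens
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled dot-product attention
    return softmax(scores, axis=-1) @ V       # CT-conditioned PET features

rng = np.random.default_rng(0)
d = 16                                    # toy embedding dimension
pet = rng.standard_normal((8, d))         # 8 PET codebook tokens (toy)
ct = rng.standard_normal((12, d))         # 12 CT codebook tokens (toy)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = cross_attention(pet, ct, Wq, Wk, Wv)
assert out.shape == (8, d)                # one conditioned vector per PET token
```

In the full model this operation sits inside each transformer block, letting the autoregressive PET likelihood attend to the anatomy shown in the CT volume.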
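The kernel-density idea behind the proposed anomaly maps can be illustrated in one dimension: fit a density to scores observed on healthy data, then flag voxels whose scores fall in low-density regions. This is a hand-rolled Gaussian KDE sketch under assumed toy data, not the authors' method or bandwidth choice.

```python
import numpy as np

def gaussian_kde_1d(samples, bandwidth):
    """Return a density function estimated from 1-D samples with a Gaussian kernel."""
    def density(x):
        x = np.atleast_1d(x)
        diffs = (x[:, None] - samples[None, :]) / bandwidth
        norm = len(samples) * bandwidth * np.sqrt(2 * np.pi)
        return np.exp(-0.5 * diffs**2).sum(axis=1) / norm
    return density

# Hypothetical per-voxel scores from healthy reconstructions (toy data)
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=500)
density = gaussian_kde_1d(healthy, bandwidth=0.3)

# Anomaly score = negative log-density: low density under the healthy model
# means high anomaly score.
anomaly_score = -np.log(density(np.array([0.0, 6.0])) + 1e-12)
assert anomaly_score[1] > anomaly_score[0]   # far-from-healthy voxel scores higher
```

Combining such density-based scores with model uncertainty, as the abstract describes, is what yields the reported gains over plain residual maps.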

Keywords

Transformers · Unsupervised Anomaly Detection · Cross Attention · Multi-modal · Vector Quantized Variational Autoencoder · Whole-Body · Kernel Density Estimation

Bibtex

@article{melba:2023:006:patel,
    title = "Cross Attention Transformers for Multi-modal Unsupervised Whole-Body PET Anomaly Detection",
    author = "Patel, Ashay and Tudosiu, Petru-Daniel and Pinaya, Walter H.L. and Cook, Gary and Goh, Vicky and Ourselin, Sebastien and Cardoso, M. Jorge",
    journal = "Machine Learning for Biomedical Imaging",
    volume = "2",
    issue = "April 2023 issue",
    year = "2023",
    pages = "172--201",
    issn = "2766-905X",
    doi = "https://doi.org/10.59275/j.melba.2023-18c1",
    url = "https://melba-journal.org/2023:006"
}
RIS

TY  - JOUR
AU  - Patel, Ashay
AU  - Tudosiu, Petru-Daniel
AU  - Pinaya, Walter H.L.
AU  - Cook, Gary
AU  - Goh, Vicky
AU  - Ourselin, Sebastien
AU  - Cardoso, M. Jorge
PY  - 2023
TI  - Cross Attention Transformers for Multi-modal Unsupervised Whole-Body PET Anomaly Detection
T2  - Machine Learning for Biomedical Imaging
VL  - 2
IS  - April 2023 issue
SP  - 172
EP  - 201
SN  - 2766-905X
DO  - https://doi.org/10.59275/j.melba.2023-18c1
UR  - https://melba-journal.org/2023:006
ER  -
