Learning Interpretable Microscopic Features of Tumor by Multi-task Adversarial CNNs To Improve Generalization

Mara Graziani1,2, Sebastian Otálora1,2, Stéphane Marchand-Maillet1, Henning Müller1,2, Vincent Andrearczyk1
1: University of Applied Sciences Western Switzerland (HES-SO Valais), 3960, Sierre, Switzerland, 2: University of Geneva (UNIGE), Department of Computer Science (CUI), 1227, Carouge, Switzerland
Publication date: 2023/06/20
https://doi.org/10.59275/j.melba.2023-3462

Abstract

Adopting Convolutional Neural Networks (CNNs) in the daily routine of primary diagnosis requires not only near-perfect precision, but also sufficient generalization to data acquisition shifts and transparency. Existing CNN models act as black boxes, giving physicians no assurance that important diagnostic features are used by the model. Building on successful existing techniques such as multi-task learning, domain adversarial training and concept-based interpretability, this paper addresses the challenge of introducing diagnostic factors into the training objectives. We show that our architecture, which learns an uncertainty-based weighting of multi-task and adversarial losses end-to-end, is encouraged to focus on pathology features such as density and pleomorphism of nuclei, e.g. variations in size and appearance, while discarding misleading features such as staining differences. Our results on breast lymph node tissue show significantly improved generalization in the detection of tumorous tissue, with a best average AUC of 0.89 (0.01) against the baseline AUC of 0.86 (0.005). By applying the interpretability technique of linearly probing intermediate representations, we also demonstrate that interpretable pathology features such as nuclei density are learned by the proposed CNN architecture, confirming the increased transparency of this model. This result is a starting point towards building interpretable multi-task architectures that are robust to data heterogeneity. Our code is available at https://github.com/maragraziani/multitask_adversarial
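The uncertainty-based weighting mentioned in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration in the spirit of homoscedastic-uncertainty loss weighting (Kendall et al., 2018), which the paper's end-to-end weighting of multi-task and adversarial losses resembles; the function name, signature, and exact formulation here are our assumptions, not the authors' implementation (see their repository for the actual code).

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses with learned uncertainty weights:

        L_total = sum_i exp(-s_i) * L_i + s_i

    where s_i = log(sigma_i^2) is a learnable scalar per task.
    During training, each s_i would be optimized jointly with the
    network weights, so noisier tasks are automatically down-weighted
    (large s_i shrinks exp(-s_i)) while the +s_i term prevents the
    trivial solution of ignoring every task.
    """
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

# With all s_i = 0 the combination reduces to a plain sum of losses.
print(uncertainty_weighted_loss([0.5, 1.0, 0.25], [0.0, 0.0, 0.0]))  # 1.75
```

For an adversarial task (e.g. discarding staining information), the corresponding loss term would enter this sum with a gradient-reversed branch, so minimizing the weighted total still maximizes confusion on the adversarial objective.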

Keywords

Interpretable Deep Learning · Histopathology · Multi-task learning

Bibtex

@article{melba:2023:011:graziani,
  title   = "Learning Interpretable Microscopic Features of Tumor by Multi-task Adversarial CNNs To Improve Generalization",
  author  = "Graziani, Mara and Otálora, Sebastian and Marchand-Maillet, Stéphane and Müller, Henning and Andrearczyk, Vincent",
  journal = "Machine Learning for Biomedical Imaging",
  volume  = "2",
  issue   = "June 2023 issue",
  year    = "2023",
  pages   = "312--337",
  issn    = "2766-905X",
  doi     = "https://doi.org/10.59275/j.melba.2023-3462",
  url     = "https://melba-journal.org/2023:011"
}
RIS

TY  - JOUR
AU  - Graziani, Mara
AU  - Otálora, Sebastian
AU  - Marchand-Maillet, Stéphane
AU  - Müller, Henning
AU  - Andrearczyk, Vincent
PY  - 2023
TI  - Learning Interpretable Microscopic Features of Tumor by Multi-task Adversarial CNNs To Improve Generalization
T2  - Machine Learning for Biomedical Imaging
VL  - 2
IS  - June 2023 issue
SP  - 312
EP  - 337
SN  - 2766-905X
DO  - https://doi.org/10.59275/j.melba.2023-3462
UR  - https://melba-journal.org/2023:011
ER  -