Please use this identifier to cite or link to this item: https://ahro.austin.org.au/austinjspui/handle/1/27777
Title: Cascaded deep-learning based auto-segmentation for head and neck cancer patients: Organs at risk on T2 weighted magnetic resonance imaging.
Austin Authors: Korte, James C; Hardcastle, Nicholas; Ng, Sweet Ping; Clark, Brett; Kron, Tomas; Jackson, Price
Affiliation: Radiation Oncology
Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Australia
Department of Radiation Oncology, Peter MacCallum Cancer Centre, Melbourne, Australia
Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Australia
Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
Centre for Medical Radiation Physics, University of Wollongong, Wollongong, Australia
Olivia Newton-John Cancer Wellness and Research Centre
Issue Date: Dec-2021
Date: 2021-10-22
Publication information: Medical Physics 2021-12; 48(12): 7757-7772
Abstract: To investigate multiple deep learning methods for automated segmentation (auto-segmentation) of the parotid glands, submandibular glands, and level II and level III lymph nodes on magnetic resonance imaging (MRI). Outlining radiosensitive organs on images used to assist radiation therapy (radiotherapy) of patients with head and neck cancer (HNC) is a time-consuming task, in which variability between observers may directly impact patient treatment outcomes. Auto-segmentation on computed tomography imaging has been shown to result in significant time reductions and more consistent outlines of the organs at risk. Three convolutional neural network (CNN) based auto-segmentation architectures were developed using manual segmentations and T2-weighted MR images from the AAPM RT-MAC challenge dataset (n = 31). Auto-segmentation performance was evaluated with segmentation similarity and surface distance metrics on the RT-MAC dataset with institutional manual segmentations (n = 10). The generalisability of the auto-segmentation methods was assessed on an institutional MRI dataset (n = 10). Auto-segmentation performance on the RT-MAC images with institutional segmentations was higher than previously reported MRI methods for the parotid glands (Dice: 0.860 ± 0.067, mean surface distance: 1.33 ± 0.40 mm), and this is the first report of MRI auto-segmentation performance for the submandibular glands (Dice: 0.830 ± 0.032, mean surface distance: 1.16 ± 0.47 mm). We demonstrate that high-resolution auto-segmentations with improved geometric accuracy can be generated for the parotid and submandibular glands by cascading a localiser CNN with a cropped high-resolution CNN. Improved mean surface distances between automatic and manual segmentations of the submandibular glands were observed when a low-resolution auto-segmentation was used as prior knowledge in the second-stage CNN. Reduced auto-segmentation performance was observed on our institutional MRI dataset when models were trained on external RT-MAC images; only the parotid gland auto-segmentations were considered clinically feasible for manual correction (Dice: 0.775 ± 0.105, mean surface distance: 1.20 ± 0.60 mm). This work demonstrates that CNNs are a suitable method to auto-segment the parotid and submandibular glands on MR images of patients with HNC, and that cascaded CNNs can generate high-resolution segmentations with improved geometric accuracy. Deep learning methods may be suitable for auto-segmentation of the parotid glands on T2-weighted MR images from different scanners, but further work is required to improve the performance and generalisability of these methods for the submandibular glands and lymph nodes.
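
The abstract describes a two-stage cascade: a low-resolution localiser CNN finds each organ, the full-resolution image is cropped around the localised region, and a second high-resolution CNN segments the crop, optionally taking the low-resolution mask as a prior-knowledge input channel. The sketch below illustrates that inference flow only; `localiser_net` and `refiner_net` are hypothetical trained models, and the downsampling factor, crop margin, and channel layout are illustrative assumptions, not the paper's actual choices.

```python
# Illustrative sketch of cascaded localise-then-refine inference, assuming
# hypothetical trained models `localiser_net` and `refiner_net`.
import numpy as np


def bounding_box(mask: np.ndarray, margin: int = 8):
    """Axis-aligned bounding box of a binary mask, padded by a voxel margin."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))


def cascaded_segmentation(image, localiser_net, refiner_net, use_prior=True):
    # Stage 1: coarse segmentation on a downsampled copy of the T2w volume
    # (2x downsampling is an illustrative assumption).
    low_res = image[::2, ::2, ::2]
    coarse = localiser_net(low_res)  # low-resolution binary mask
    coarse_full = np.repeat(np.repeat(np.repeat(coarse, 2, 0), 2, 1), 2, 2)
    coarse_full = coarse_full[: image.shape[0], : image.shape[1], : image.shape[2]]

    # Stage 2: crop the full-resolution image around the localised organ.
    box = bounding_box(coarse_full)
    crop = image[box]
    if use_prior:
        # Stack the low-resolution mask as a second input channel (the
        # "prior knowledge" variant reported to improve submandibular-gland
        # surface distances).
        crop = np.stack([crop, coarse_full[box].astype(crop.dtype)], axis=0)

    fine = refiner_net(crop)  # high-resolution mask of the cropped region

    # Paste the refined mask back into the full image grid.
    full_mask = np.zeros(image.shape, dtype=bool)
    full_mask[box] = fine.astype(bool)
    return full_mask
```

Cropping lets the second-stage CNN operate at full image resolution within a small field of view, which is why the cascade can improve geometric accuracy over a single low-resolution network.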
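The reported metrics, the Dice similarity coefficient and the mean surface distance (MSD), are standard and can be computed as sketched below. This is a minimal NumPy/SciPy sketch under the assumption of binary masks with known voxel spacing; it is not the paper's evaluation code, and the function names are illustrative.

```python
# Minimal sketch: Dice coefficient and symmetric mean surface distance (mm)
# between a manual and an automatic binary segmentation.
import numpy as np
from scipy import ndimage


def dice_coefficient(manual: np.ndarray, auto: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    manual, auto = manual.astype(bool), auto.astype(bool)
    denominator = manual.sum() + auto.sum()
    if denominator == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(manual, auto).sum() / denominator


def mean_surface_distance(manual, auto, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean surface distance in mm between two binary masks."""
    manual, auto = manual.astype(bool), auto.astype(bool)
    # Surface voxels: each mask minus its binary erosion.
    surf_m = manual ^ ndimage.binary_erosion(manual)
    surf_a = auto ^ ndimage.binary_erosion(auto)
    # Distance (mm) from every voxel to the nearest surface voxel of each mask.
    dist_to_m = ndimage.distance_transform_edt(~surf_m, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    # Average the two directed mean surface-to-surface distances.
    d_am = dist_to_m[surf_a]  # auto surface -> manual surface
    d_ma = dist_to_a[surf_m]  # manual surface -> auto surface
    return float(np.concatenate([d_am, d_ma]).mean())
```
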
URI: https://ahro.austin.org.au/austinjspui/handle/1/27777
DOI: 10.1002/mp.15290
Journal: Medical Physics
PubMed URL: https://pubmed.ncbi.nlm.nih.gov/34676555
Type: Journal Article
Subjects: convolutional neural networks
head and neck cancer
image segmentation
magnetic resonance imaging
organs at risk
Appears in Collections: Journal articles

Items in AHRO are protected by copyright, with all rights reserved, unless otherwise indicated.