Project 1.3 - Deep Transfer Learning
For real-world systems, training data has often been acquired with slightly different acquisition protocols, on different scanners, or from a different patient population. To still learn robustly under such domain shift, we develop deep transfer learning technology in which the domain transfer is addressed in the representation learning step, using different coupled network architectures.
Project Leader
Dr. Marleen de Bruijne, Erasmus Medical Center, marleen.debruijne@erasmusmc.nl
Co-Applicants
Dr. Ivana Išgum, Amsterdam UMC, i.isgum@amsterdamumc.nl
Prof. dr. Max Welling, University of Amsterdam, m.welling@uva.nl
Researchers
Maximilian Ilse, University of Amsterdam, m.ilse@uva.nl
Kimberlin van Wijnen, Erasmus Medical Center, k.vanwijnen@erasmusmc.nl
Julia Noothout, Amsterdam UMC, j.m.h.noothout@amsterdamumc.nl
Publications
2020
J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum
Deep learning-based regression and classification for automatic landmark localization in medical images
Journal Article: IEEE Transactions on Medical Imaging (in press), 2020.

@article{Noothout2020b,
  title     = {Deep learning-based regression and classification for automatic landmark localization in medical images},
  author    = {J.M.H. Noothout and B.D. de Vos and J.M. Wolterink and E.M. Postma and P.A.M. Smeets and R.A.P. Takx and T. Leiner and M.A. Viergever and I. Išgum},
  url       = {https://arxiv.org/pdf/2007.05295.pdf},
  doi       = {10.1109/TMI.2020.3009002},
  year      = {2020},
  date      = {2020-07-09},
  journal   = {IEEE Transactions on Medical Imaging (in press)},
  abstract  = {In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage.},
  pubstate  = {published},
  tppubtype = {article}
}
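The fusion rule described in this abstract, averaging patch-wise displacement vectors weighted by each patch's posterior classification probability, can be sketched as follows. The function and array names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def global_landmark_estimate(patch_centers, displacements, landmark_probs):
    """Combine patch-wise predictions into one global landmark location.

    patch_centers:  (N, 3) array of patch-center coordinates.
    displacements:  (N, 3) predicted vectors pointing from each patch
                    center towards the landmark (regression output).
    landmark_probs: (N,) posterior probabilities that the patch contains
                    the landmark (classification output).
    """
    candidates = patch_centers + displacements       # each patch's vote
    weights = landmark_probs / landmark_probs.sum()  # normalize posteriors
    return weights @ candidates                      # posterior-weighted average

# Toy example: three patches all voting for the point (10, 20, 30),
# with the low-confidence patch contributing little weight.
centers = np.array([[8., 20., 30.], [12., 19., 31.], [10., 22., 29.]])
disps   = np.array([[2., 0., 0.], [-2., 1., -1.], [0., -2., 1.]])
probs   = np.array([0.9, 0.8, 0.1])
print(global_landmark_estimate(centers, disps, probs))  # → [10. 20. 30.]
```

In the paper the same regression-plus-classification scheme is applied twice, globally and then locally with specialized refinement FCNNs.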
J.M.H. Noothout, E.M. Postma, S. Boesveldt, B.D. de Vos, P.A.M. Smeets, I. Išgum
Automatic segmentation of the olfactory bulbs in MRI
Inproceedings: SPIE Medical Imaging (in press), 2020.

@inproceedings{Noothout2020,
  title     = {Automatic segmentation of the olfactory bulbs in MRI},
  author    = {J.M.H. Noothout and E.M. Postma and S. Boesveldt and B.D. de Vos and P.A.M. Smeets and I. Išgum},
  year      = {2020},
  date      = {2020-10-14},
  booktitle = {SPIE Medical Imaging (in press)},
  abstract  = {A decrease in volume of the olfactory bulbs is an early marker for neurodegenerative diseases, such as Parkinson’s and Alzheimer’s disease. Recently, asymmetric volumes of olfactory bulbs present in postmortem MRIs of COVID-19 patients indicate that the olfactory bulbs might play an important role in the entrance of the disease in the central nervous system. Hence, volumetric assessment of the olfactory bulbs can be valuable for various conditions. Given that manual annotation of the olfactory bulbs in MRI to determine their volume is tedious, we propose a method for their automatic segmentation. To mitigate the class imbalance caused by the small volume of the olfactory bulbs, we first localize the center of each olfactory bulb (OB) in a scan using convolutional neural networks (CNNs). We use these center locations to extract a bounding box containing both olfactory bulbs. Subsequently, the slices present in the bounding box are analyzed by a segmentation CNN that classifies each voxel as left OB, right OB, or background. The method achieved median (IQR) Dice coefficients of 0.84 (0.08) and 0.83 (0.08), and Average Symmetrical Surface Distances of 0.12 (0.08) and 0.13 (0.08) mm for the left and the right OB, respectively. Wilcoxon Signed Rank tests showed no significant difference between the volumes computed from the reference annotation and the automatic segmentations. Analysis took only 0.20 seconds per scan and the results indicate that the proposed method could be a first step towards large-scale studies analyzing pathology and morphology of the olfactory bulbs.},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
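The bounding-box step described above, extracting a box that contains both localized olfactory-bulb centers before slice-wise segmentation, might look roughly like this. The margin value and all names are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def bounding_box(center_left, center_right, margin, shape):
    """Axis-aligned box (z, y, x slices) containing both predicted
    olfactory-bulb centers, padded by `margin` voxels on each side
    and clipped to the scan shape."""
    centers = np.stack([center_left, center_right])
    lo = np.maximum(centers.min(axis=0) - margin, 0)
    hi = np.minimum(centers.max(axis=0) + margin + 1, shape)
    return tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))

# Two hypothetical OB centers 40 voxels apart along x, 8-voxel margin.
box = bounding_box(np.array([40, 120, 100]), np.array([40, 120, 140]),
                   margin=8, shape=(160, 256, 256))
print(box)  # → (slice(32, 49), slice(112, 129), slice(92, 149))
```

The segmentation CNN would then only see `scan[box]`, which mitigates the class imbalance caused by the small OB volume.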
2019
Julia M.H. Noothout, Bob D. de Vos, Jelmer M. Wolterink, Richard A.P. Takx, Tim Leiner, Ivana Išgum
Deep Learning for Automatic Landmark Localization in CTA for Transcatheter Aortic Valve Implantation
Conference: Radiological Society of North America, 105th Annual Meeting, 2019.

@conference{Noothout2019,
  title     = {Deep Learning for Automatic Landmark Localization in CTA for Transcatheter Aortic Valve Implantation},
  author    = {Julia M.H. Noothout and Bob D. de Vos and Jelmer M. Wolterink and Richard A.P. Takx and Tim Leiner and Ivana Išgum},
  url       = {http://dlmedia.eu/landmarks_rsna2019_final-3/},
  year      = {2019},
  date      = {2019-12-03},
  booktitle = {Radiological Society of North America, 105th Annual Meeting},
  abstract  = {PURPOSE: Fast and accurate automatic landmark localization in CT angiography (CTA) scans can aid treatment planning for patients undergoing transcatheter aortic valve implantation (TAVI). Manual localization of landmarks can be time-consuming and cumbersome. Automatic landmark localization can potentially reduce post-processing time and interobserver variability. Hence, this study evaluates the performance of deep learning for automatic aortic root landmark localization in CTA. METHOD AND MATERIALS: This study included 672 retrospectively gated CTA scans acquired as part of clinical routine (Philips Brilliance iCT-256 scanner, 0.9mm slice thickness, 0.45mm increment, 80-140kVp, 210-300mAs, contrast). Reference standard was defined by manual localization of the left (LH), non-coronary (NCH) and right (RH) aortic valve hinge points, and the right (RO) and left (LO) coronary ostia. To develop and evaluate the automatic method, 412 training, 60 validation, and 200 test CTAs were randomly selected. 100/200 test CTAs were annotated twice by the same observer and once by a second observer to estimate intra- and interobserver agreement. Five CNNs with identical architectures were trained, one for the localization of each landmark. For treatment planning of TAVI, distances between landmark points are used, hence performance was evaluated on subvoxel level with the Euclidean distance between reference and automatically predicted landmark locations. RESULTS: Median (IQR) distance errors for the LH, NCH and RH were 2.44 (1.79), 3.01 (1.82) and 2.98 (2.09)mm, respectively. Repeated annotation of the first observer led to distance errors of 2.06 (1.43), 2.57 (2.22) and 2.58 (2.30)mm, and for the second observer to 1.80 (1.32), 1.99 (1.28) and 1.81 (1.68)mm, respectively. Median (IQR) distance errors for the RO and LO were 1.65 (1.33) and 1.91 (1.58)mm, respectively. Repeated annotation of the first observer led to distance errors of 1.43 (1.05) and 1.92 (1.44)mm, and for the second observer to 1.78 (1.55) and 2.35 (1.56)mm, respectively. On average, analysis took 0.3s/CTA. CONCLUSION: Automatic landmark localization in CTA approaches second observer performance and thus enables automatic, accurate and reproducible landmark localization without additional reading time. CLINICAL RELEVANCE/APPLICATION: Automatic landmark localization in CTA can aid in reducing post-processing time and interobserver variability in treatment planning for patients undergoing TAVI.},
  pubstate  = {published},
  tppubtype = {conference}
}
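The evaluation protocol in this abstract, subvoxel Euclidean distance between reference and predicted landmark locations summarized as median (IQR), can be sketched as follows; the coordinates below are made-up toy values:

```python
import numpy as np

def distance_errors(pred, ref):
    """Per-scan Euclidean distances (mm) between predicted and
    reference landmark coordinates; both arrays have shape (N, 3)."""
    return np.linalg.norm(pred - ref, axis=1)

def median_iqr(errors):
    """Summary statistic used in the abstract: median and interquartile range."""
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return med, q3 - q1

errs = distance_errors(np.array([[1., 0., 0.], [0., 3., 4.], [0., 0., 2.]]),
                       np.zeros((3, 3)))
# errs → [1., 5., 2.]
med, iqr = median_iqr(errs)
print(med, iqr)  # → 2.0 2.0
```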
Kimberlin MH van Wijnen, Florian Dubost, Pinar Yilmaz, M Arfan Ikram, Wiro J Niessen, Hieab Adams, Meike W Vernooij, Marleen de Bruijne
Automated lesion detection by regressing intensity-based distance with a neural network
Inproceedings: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234-242, Springer, Cham, 2019.

@inproceedings{Wijnen2019,
  title     = {Automated lesion detection by regressing intensity-based distance with a neural network},
  author    = {Kimberlin MH van Wijnen and Florian Dubost and Pinar Yilmaz and M Arfan Ikram and Wiro J Niessen and Hieab Adams and Meike W Vernooij and Marleen de Bruijne},
  url       = {https://arxiv.org/pdf/1907.12452.pdf},
  doi       = {10.1007/978-3-030-32251-9_26},
  year      = {2019},
  date      = {2019-10-13},
  booktitle = {International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages     = {234-242},
  publisher = {Springer, Cham},
  abstract  = {Localization of focal vascular lesions on brain MRI is an important component of research on the etiology of neurological disorders. However, manual annotation of lesions can be challenging, time-consuming and subject to observer bias. Automated detection methods often need voxel-wise annotations for training. We propose a novel approach for automated lesion detection that can be trained on scans only annotated with a dot per lesion instead of a full segmentation. From the dot annotations and their corresponding intensity images we compute various distance maps (DMs), indicating the distance to a lesion based on spatial distance, intensity distance, or both. We train a fully convolutional neural network (FCN) to predict these DMs for unseen intensity images. The local optima in the predicted DMs are expected to correspond to lesion locations. We show the potential of this approach to detect enlarged perivascular spaces in white matter on a large brain MRI dataset with an independent test set of 1000 scans. Our method matches the intra-rater performance of the expert rater that was computed on an independent set. We compare the different types of distance maps, showing that incorporating intensity information in the distance maps used to train an FCN greatly improves performance.},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
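The purely spatial variant of the distance maps described above (distance from every voxel to the nearest dot annotation) can be computed directly. This brute-force sketch is an illustration under assumed names, not the paper's implementation, and it omits the intensity-based variants that the paper shows work best:

```python
import numpy as np

def spatial_distance_map(dots, shape):
    """Euclidean distance (in voxels) from every voxel in a volume of
    the given shape to the nearest dot-annotated lesion location.
    `dots` is a list of (z, y, x) coordinates."""
    # Coordinate grid of shape (*shape, 3).
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1)
    dots = np.asarray(dots, dtype=float)                  # (K, 3)
    # Distance from each voxel to each dot; minimum over the K dots.
    d = np.linalg.norm(grid[..., None, :] - dots, axis=-1)
    return d.min(axis=-1)

# One dot in the center of a tiny 5x5x5 volume; an FCN would be
# trained to regress maps like this from the intensity image.
dm = spatial_distance_map([(2, 2, 2)], shape=(5, 5, 5))
print(dm[2, 2, 2], dm[2, 2, 3])  # → 0.0 1.0
```

Local minima of the FCN's predicted map then serve as candidate lesion locations, as in the paper.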
2018
J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, T. Leiner, I. Išgum
CNN-based Landmark Detection in Cardiac CTA Scans
Inproceedings: Medical Imaging with Deep Learning (MIDL), Amsterdam, 2018.

@inproceedings{Noothout2018b,
  title     = {CNN-based Landmark Detection in Cardiac CTA Scans},
  author    = {J.M.H. Noothout and B.D. de Vos and J.M. Wolterink and T. Leiner and I. Išgum},
  url       = {https://openreview.net/forum?id=r1malb3jz},
  year      = {2018},
  date      = {2018-05-20},
  booktitle = {Medical Imaging with Deep Learning. MIDL Amsterdam},
  abstract  = {Fast and accurate anatomical landmark detection can benefit many medical image analysis methods. Here, we propose a method to automatically detect anatomical landmarks in medical images. Automatic landmark detection is performed with a patch-based fully convolutional neural network (FCNN) that combines regression and classification. For any given image patch, regression is used to predict the 3D displacement vector from the image patch to the landmark. Simultaneously, classification is used to identify patches that contain the landmark. Under the assumption that patches close to a landmark can determine the landmark location more precisely than patches further from it, only those patches that contain the landmark according to classification are used to determine the landmark location. The landmark location is obtained by calculating the average landmark location using the computed 3D displacement vectors. The method is evaluated using detection of six clinically relevant landmarks in coronary CT angiography (CCTA) scans: the right and left ostium, the bifurcation of the left main coronary artery (LM) into the left anterior descending and the left circumflex artery, and the origin of the right, non-coronary, and left aortic valve commissure. The proposed method achieved an average Euclidean distance error of 2.19 mm and 2.88 mm for the right and left ostium respectively, 3.78 mm for the bifurcation of the LM, and 1.82 mm, 2.10 mm and 1.89 mm for the origin of the right, non-coronary, and left aortic valve commissure respectively, demonstrating accurate performance. The proposed combination of regression and classification can be used to accurately detect landmarks in CCTA scans.},
  pubstate  = {published},
  tppubtype = {inproceedings}
}