Dr. Jelmer Wolterink
Postdoctoral Researcher, Amsterdam UMC
E-mail: j.m.wolterink@amsterdamumc.nl
Phone: –
LinkedIn; Google Scholar
Jelmer Wolterink obtained his Bachelor of Science degree in Artificial Intelligence in 2010 from Radboud University Nijmegen. In 2012, he received his Master of Science degree in Mathematical Sciences from Utrecht University. During this master's programme, Jelmer did a six-month internship in the NANO-D team at INRIA Rhône-Alpes (Grenoble, France). His master's thesis focused on the acceleration of molecular simulations and modeling.
In May 2017, Jelmer finished his PhD at the Image Sciences Institute at UMC Utrecht with a thesis entitled "Machine learning based analysis of cardiovascular images". He is currently a postdoctoral researcher in the Quantitative Medical Image Analysis Group. His work focuses on the development of deep generative models for the analysis of cardiac spectral CT images.
Jelmer is co-organizer of the MICCAI Challenge on Automatic Coronary Calcium Scoring.
2020
S. Bruns, J.M. Wolterink, R.A.P. Takx, R.W. van Hamersvelt, D. Suchá, M.A. Viergever, T. Leiner, I. Išgum. Deep learning from dual-energy information for whole-heart segmentation in dual-energy and single-energy non-contrast-enhanced cardiac CT. Journal Article. Medical Physics (in press), 2020. doi: 10.1002/mp.14451. URL: https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.14451

Abstract:
Purpose: Deep learning-based whole-heart segmentation in coronary CT angiography (CCTA) allows the extraction of quantitative imaging measures for cardiovascular risk prediction. Automatic extraction of these measures in patients undergoing only non-contrast-enhanced CT (NCCT) scanning would be valuable, but defining a manual reference standard that would allow training a deep learning-based method for whole-heart segmentation in NCCT is challenging, if not impossible. In this work, we leverage dual-energy information provided by a dual-layer detector CT scanner to obtain a reference standard in virtual non-contrast (VNC) CT images mimicking NCCT images, and train a 3D convolutional neural network (CNN) for the segmentation of VNC as well as NCCT images.
Methods: Eighteen patients were scanned with and without contrast enhancement on a dual-layer detector CT scanner. Contrast-enhanced acquisitions were reconstructed into a CCTA and a perfectly aligned VNC image. In each CCTA image, manual reference segmentations of the left ventricular (LV) myocardium, LV cavity, right ventricle, left atrium, right atrium, ascending aorta, and pulmonary artery trunk were obtained and propagated to the corresponding VNC image. These VNC images and reference segmentations were used to train 3D CNNs in a six-fold cross-validation for automatic segmentation in either VNC images or NCCT images reconstructed from the non-contrast-enhanced acquisition. Automatic segmentation in VNC images was evaluated using the Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD). Automatically determined volumes of the cardiac chambers and LV myocardium in NCCT were compared to reference volumes of the same patient in CCTA by Bland-Altman analysis. An additional independent multi-vendor multi-center set of single-energy NCCT images from 290 patients was used for qualitative analysis, in which two observers graded segmentations on a five-point scale.
Results: Automatic segmentations in VNC images showed good agreement with reference segmentations, with an average DSC of 0.897 ± 0.034 and an average ASSD of 1.42 ± 0.45 mm. Volume differences [95% confidence interval] between automatic NCCT and reference CCTA segmentations were -19 [-67; 30] mL for LV myocardium, -25 [-78; 29] mL for LV cavity, -29 [-73; 14] mL for right ventricle, -20 [-62; 21] mL for left atrium, and -19 [-73; 34] mL for right atrium. In 214 (74%) NCCT images from the independent multi-vendor multi-center set, both observers agreed that the automatic segmentation was mostly accurate (grade 3) or better.
Conclusion: Our automatic method produced accurate whole-heart segmentations in NCCT images using a CNN trained with VNC images from a dual-layer detector CT scanner. This method might enable quantification of additional cardiac measures from NCCT images for improved cardiovascular risk prediction.
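The Dice similarity coefficient used as the primary overlap metric above is simple to compute from binary masks; a minimal NumPy sketch (function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 2D example: two partially overlapping 4x4 squares (16 voxels each).
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True
print(dice_coefficient(a, b))  # 2*4 / (16+16) = 0.25
```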
J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum. Deep learning-based regression and classification for automatic landmark localization in medical images. Journal Article. IEEE Transactions on Medical Imaging (in press), 2020. doi: 10.1109/TMI.2020.3009002. URL: https://arxiv.org/pdf/2007.05295.pdf

Abstract: In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e., by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage.
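The aggregation step described above, where each patch proposes a landmark position (patch center plus predicted displacement) and proposals are averaged with weights given by the patch's posterior classification probability, can be sketched in a few lines of NumPy. All names and numbers here are illustrative, not the paper's implementation:

```python
import numpy as np

def aggregate_landmark(patch_centers, displacements, probs):
    """Probability-weighted average of per-patch landmark proposals.

    patch_centers: (N, 3) patch center coordinates
    displacements: (N, 3) predicted vectors from patch center to landmark
    probs:         (N,)   posterior classification probabilities per patch
    """
    proposals = patch_centers + displacements   # each patch's landmark estimate
    weights = probs / probs.sum()               # normalize to sum to 1
    return (weights[:, None] * proposals).sum(axis=0)

# Two patches whose proposals agree on the same landmark location.
centers = np.array([[10.0, 10.0, 10.0], [20.0, 20.0, 20.0]])
disp = np.array([[2.0, 0.0, 0.0], [-8.0, -10.0, -10.0]])
probs = np.array([0.75, 0.25])
print(aggregate_landmark(centers, disp, probs))  # [12. 10. 10.]
```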
2020
S. Bruns, J.M. Wolterink, T.P.W. van den Boogert, J.P. Henriques, J. Baan, R.N. Planken, I. Išgum. Automatic whole-heart segmentation in 4D TAVI treatment planning CT. In: SPIE Medical Imaging (in press), 2020.

Abstract: 4D cardiac CT angiography (CCTA) images acquired for transcatheter aortic valve implantation (TAVI) planning provide a wealth of information about the morphology of the heart throughout the cardiac cycle. We propose a deep learning method to automatically segment the cardiac chambers and myocardium in 4D CCTA. We obtain automatic segmentations in 472 patients and use these to automatically identify end-systolic (ES) and end-diastolic (ED) phases, and to determine the left ventricular ejection fraction (LVEF). Our results show that automatic segmentation of cardiac structures through the cardiac cycle is feasible (median Dice similarity coefficient 0.908, median average symmetric surface distance 1.59 mm). Moreover, we demonstrate that these segmentations can be used to accurately identify ES and ED phases (bias [limits of agreement] of 1.81 [-11.0; 14.7]% and -0.02 [-14.1; 14.1]%). Finally, we show that there is correspondence between LVEF values determined from CCTA and echocardiography (-1.71 [-25.0; 21.6]%). Our automatic deep learning approach to segmentation has the potential to routinely extract functional information from 4D CCTA.
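Given per-phase LV cavity volumes from segmentations across the cardiac cycle, ED and ES correspond to the maximum and minimum volume, and the ejection fraction follows directly from the standard formula LVEF = (EDV − ESV) / EDV × 100. A minimal sketch (the volumes below are made-up numbers, not study data):

```python
def lv_ejection_fraction(lv_volumes_ml):
    """LVEF (%) from LV cavity volumes sampled over the cardiac cycle.

    ED = phase with maximum volume (EDV), ES = phase with minimum volume (ESV).
    """
    edv = max(lv_volumes_ml)
    esv = min(lv_volumes_ml)
    return 100.0 * (edv - esv) / edv

# LV cavity volume (mL) at ten phases of a toy cardiac cycle.
volumes = [120, 115, 100, 80, 60, 55, 60, 80, 100, 115]
print(round(lv_ejection_fraction(volumes), 1))  # (120 - 55) / 120 * 100 -> 54.2
```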
2019
S. Bruns, J.M. Wolterink, R.W. van Hamersvelt, M. Zreik, T. Leiner, I. Išgum. Improving myocardium segmentation in cardiac CT angiography using spectral information. In: SPIE Medical Imaging, 2019. URL: https://arxiv.org/abs/1810.03968

Abstract: Left ventricle myocardium segmentation in cardiac CT angiography (CCTA) is essential for the assessment of myocardial perfusion. Since deep-learning methods for segmentation in CCTA suffer from differences in contrast-agent attenuation, we propose training a 3D CNN with augmentation using virtual mono-energetic reconstructions from a spectral CT scanner. We compare this with augmentation by linear intensity scaling, and combine both augmentations. We train a network with 10 conventional CCTA images and corresponding virtual mono-energetic images acquired on a spectral CT scanner and evaluate on 40 conventional CCTA images. We show that data augmentation with virtual mono-energetic images significantly improves the segmentation.
J. Sander, B.D. de Vos, J.M. Wolterink, I. Išgum. Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI. In: SPIE Medical Imaging, 2019. URL: https://arxiv.org/pdf/1809.10430.pdf

Abstract: Current state-of-the-art deep learning segmentation methods have not yet made a broad entrance into the clinical setting in spite of high demand for such automatic methods. One important reason is the lack of reliability caused by models that fail unnoticed and often locally produce anatomically implausible results that medical experts would not make. This paper presents an automatic image segmentation method based on (Bayesian) dilated convolutional networks (DCNNs) that generate segmentation masks and spatial uncertainty maps for the input image at hand. The method was trained and evaluated using segmentation of the left ventricle (LV) cavity, right ventricle (RV) endocardium and myocardium (Myo) at end-diastole (ED) and end-systole (ES) in 100 cardiac 2D MR scans from the MICCAI 2017 Challenge (ACDC). Combining segmentations and uncertainty maps and employing a human-in-the-loop setting, we provide evidence that image areas indicated as highly uncertain regarding the obtained segmentation almost entirely cover regions of incorrect segmentations. The fused information can be harnessed to increase segmentation performance. Our results reveal that we can obtain valuable spatial uncertainty maps with low computational effort using DCNNs.
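One common way to obtain per-voxel spatial uncertainty maps from a Bayesian (dropout) segmentation network, in the spirit of the work above, is to average the softmax output over several stochastic forward passes and take the per-voxel entropy of the mean. A schematic NumPy version; the sampled probabilities here are synthetic stand-ins for network outputs:

```python
import numpy as np

def entropy_uncertainty(prob_samples):
    """Per-voxel predictive entropy from Monte Carlo samples.

    prob_samples: (T, C, H, W) softmax outputs from T stochastic passes.
    Returns an (H, W) uncertainty map; higher values = less certain.
    """
    mean_probs = prob_samples.mean(axis=0)  # (C, H, W)
    eps = 1e-12                             # avoid log(0)
    return -(mean_probs * np.log(mean_probs + eps)).sum(axis=0)

rng = np.random.default_rng(0)
# Two classes, 4x4 image, 10 MC samples: left half confident (p=0.95),
# right half ambiguous (p drawn near 0.5).
p_fg = np.concatenate(
    [np.full((10, 4, 2), 0.95), rng.uniform(0.3, 0.7, (10, 4, 2))], axis=2)
samples = np.stack([p_fg, 1.0 - p_fg], axis=1)  # (10, 2, 4, 4)
u = entropy_uncertainty(samples)
print(u[:, :2].mean() < u[:, 2:].mean())  # True: ambiguous half is more uncertain
```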
J.M. Wolterink, T. Leiner, I. Išgum. Graph convolutional networks for coronary artery segmentation in cardiac CT angiography. In: 1st International Workshop on Graph Learning in Medical Imaging (GLMI 2019), in press, 2019.
2018
J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, T. Leiner, I. Išgum. CNN-based Landmark Detection in Cardiac CTA Scans. In: Medical Imaging with Deep Learning (MIDL), Amsterdam, 2018. URL: https://openreview.net/forum?id=r1malb3jz

Abstract: Fast and accurate anatomical landmark detection can benefit many medical image analysis methods. Here, we propose a method to automatically detect anatomical landmarks in medical images. Automatic landmark detection is performed with a patch-based fully convolutional neural network (FCNN) that combines regression and classification. For any given image patch, regression is used to predict the 3D displacement vector from the image patch to the landmark. Simultaneously, classification is used to identify patches that contain the landmark. Under the assumption that patches close to a landmark can determine the landmark location more precisely than patches further from it, only those patches that contain the landmark according to classification are used to determine the landmark location. The landmark location is obtained by calculating the average landmark location using the computed 3D displacement vectors. The method is evaluated using detection of six clinically relevant landmarks in coronary CT angiography (CCTA) scans: the right and left ostium, the bifurcation of the left main coronary artery (LM) into the left anterior descending and the left circumflex artery, and the origin of the right, non-coronary, and left aortic valve commissure. The proposed method achieved an average Euclidean distance error of 2.19 mm and 2.88 mm for the right and left ostium respectively, 3.78 mm for the bifurcation of the LM, and 1.82 mm, 2.10 mm and 1.89 mm for the origin of the right, non-coronary, and left aortic valve commissure respectively, demonstrating accurate performance. The proposed combination of regression and classification can be used to accurately detect landmarks in CCTA scans.
J.M. Wolterink, T. Leiner, I. Išgum. Blood vessel geometry synthesis using generative adversarial networks. In: Medical Imaging with Deep Learning (MIDL), Amsterdam, 2018. URL: https://openreview.net/forum?id=SJ4N7isiG

Abstract: Computationally synthesized blood vessels can be used for training and evaluation of medical image analysis applications. We propose a deep generative model to synthesize blood vessel geometries, with an application to coronary arteries in cardiac CT angiography (CCTA). In the proposed method, a Wasserstein generative adversarial network (GAN) consisting of a generator and a discriminator network is trained. While the generator tries to synthesize realistic blood vessel geometries, the discriminator tries to distinguish synthesized geometries from those of real blood vessels. Both real and synthesized blood vessel geometries are parametrized as 1D signals based on the central vessel axis. The generator can optionally be provided with an attribute vector to synthesize vessels with particular characteristics. The GAN was optimized using a reference database with parametrizations of 4,412 real coronary artery geometries extracted from CCTA scans. After training, plausible coronary artery geometries could be synthesized based on random vectors sampled from a latent space. A qualitative analysis showed strong similarities between real and synthesized coronary arteries. A detailed analysis of the latent space showed that the diversity present in coronary artery anatomy was accurately captured by the generator. Results show that Wasserstein generative adversarial networks can be used to synthesize blood vessel geometries.
2019
S. Bruns, J.M. Wolterink, R.W. van Hamersvelt, T. Leiner, I. Išgum. CNN-based segmentation of the cardiac chambers and great vessels in non-contrast-enhanced cardiac CT. Conference abstract, Medical Imaging with Deep Learning (MIDL), London, 2019. URL: https://openreview.net/forum?id=SJeqoqAaFV

Abstract: Quantification of cardiac structures in non-contrast CT (NCCT) could improve cardiovascular risk stratification. However, setting a manual reference to train a fully convolutional network (FCN) for automatic segmentation of NCCT images is hardly feasible, and an FCN trained on coronary CT angiography (CCTA) images would not generalize to NCCT. Therefore, we propose to train an FCN with virtual non-contrast (VNC) images from a dual-layer detector CT scanner and a reference standard obtained on perfectly aligned CCTA images.
Nikolas Lessmann, Jelmer M. Wolterink, Majd Zreik, Max A. Viergever, Bram van Ginneken, Ivana Išgum. Vertebra partitioning with thin-plate spline surfaces steered by a convolutional neural network. Conference abstract, Medical Imaging with Deep Learning (MIDL), London, 2019. URL: https://openreview.net/forum?id=B1eQv5INqV

Abstract: Thin-plate splines can be used for interpolation of image values, but can also be used to represent a smooth surface, such as the boundary between two structures. We present a method for partitioning vertebra segmentation masks into two substructures, the vertebral body and the posterior elements, using a convolutional neural network that predicts the boundary between the two structures. This boundary is modeled as a thin-plate spline surface defined by a set of control points predicted by the network. The neural network is trained using the reconstruction error of a convolutional autoencoder to enable the use of unpaired data.
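A thin-plate spline surface of the kind described can be represented as a smooth height field z(x, y) interpolating a small set of control points; with SciPy this is available via RBFInterpolator with the thin-plate spline kernel. The control points below are made up for illustration, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Control points (x, y) and the surface heights z predicted for them.
ctrl_xy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
ctrl_z = np.array([0.0, 0.0, 0.0, 0.0, 0.3])  # a gentle bump in the middle

# Thin-plate spline surface passing exactly through the control points
# (smoothing defaults to 0, i.e. pure interpolation).
tps = RBFInterpolator(ctrl_xy, ctrl_z, kernel="thin_plate_spline")

# The smooth boundary surface can now be evaluated anywhere in the plane.
print(round(float(tps(np.array([[0.5, 0.5]]))[0]), 2))  # 0.3 at the control point
off_node = float(tps(np.array([[0.25, 0.25]]))[0])      # smooth value in between
```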
Julia M.H. Noothout, Bob D. de Vos, Jelmer M. Wolterink, Richard A.P. Takx, Tim Leiner, Ivana Išgum Deep Learning for Automatic Landmark Localization in CTA for Transcatheter Aortic Valve Implantation Conference Radiological Society of North America, 105th Annual Meeting, 2019. @conference{Noothout2019, title = {Deep Learning for Automatic Landmark Localization in CTA for Transcatheter Aortic Valve Implantation}, author = {Julia M.H. Noothout, Bob D. de Vos, Jelmer M. Wolterink, Richard A.P. Takx, Tim Leiner, Ivana Išgum}, url = {http://dlmedia.eu/landmarks_rsna2019_final-3/}, year = {2019}, date = {2019-12-03}, booktitle = {Radiological Society of North America, 105th Annual Meeting}, abstract = {PURPOSE Fast and accurate automatic landmark localization in CT angiography (CTA) scans can aid treatment planning for patients undergoing transcatheter aortic valve implantation (TAVI). Manual localization of landmarks can be time-consuming and cumbersome. Automatic landmark localization can potentially reduce post-processing time and interobserver variability. Hence, this study evaluates the performance of deep learning for automatic aortic root landmark localization in CTA. METHOD AND MATERIALS This study included 672 retrospectively gated CTA scans acquired as part of clinical routine (Philips Brilliance iCT-256 scanner, 0.9mm slice thickness, 0.45mm increment, 80-140kVp, 210-300mAs, contrast). Reference standard was defined by manual localization of the left (LH), non-coronary (NCH) and right (RH) aortic valve hinge points, and the right (RO) and left (LO) coronary ostia. To develop and evaluate the automatic method, 412 training, 60 validation, and 200 test CTAs were randomly selected. 100/200 test CTAs were annotated twice by the same observer and once by a second observer to estimate intra- and interobserver agreement. Five CNNs with identical architectures were trained, one for the localization of each landmark. 
For treatment planning of TAVI, distances between landmark points are used, hence performance was evaluated on subvoxel level with the Euclidean distance between reference and automatically predicted landmark locations. RESULTS Median (IQR) distance errors for the LH, NCH and RH were 2.44 (1.79), 3.01 (1.82) and 2.98 (2.09)mm, respectively. Repeated annotation of the first observer led to distance errors of 2.06 (1.43), 2.57 (2.22) and 2.58 (2.30)mm, and for the second observer to 1.80 (1.32), 1.99 (1.28) and 1.81 (1.68)mm, respectively. Median (IQR) distance errors for the RO and LO were 1.65 (1.33) and 1.91 (1.58)mm, respectively. Repeated annotation of the first observer led to distance errors of 1.43 (1.05) and 1.92 (1.44)mm, and for the second observer to 1.78 (1.55) and 2.35 (1.56)mm, respectively. On average, analysis took 0.3s/CTA. CONCLUSION Automatic landmark localization in CTA approaches second observer performance and thus enables automatic, accurate and reproducible landmark localization without additional reading time. CLINICAL RELEVANCE/APPLICATION Automatic landmark localization in CTA can aid in reducing post-processing time and interobserver variability in treatment planning for patients undergoing TAVI.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } |
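The evaluation described in the abstract above — per-landmark Euclidean distance in mm between reference and automatically predicted locations, summarized as median (IQR) — can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function names and the voxel-spacing handling are assumptions:

```python
import numpy as np

def landmark_errors(pred, ref, voxel_spacing=(0.45, 0.45, 0.45)):
    """Euclidean distance (mm) between predicted and reference landmark
    voxel coordinates, scaled by voxel spacing for subvoxel evaluation.

    pred, ref: arrays of shape (n_scans, 3). The default spacing is
    illustrative, not taken from the study's acquisition protocol.
    """
    diff = (np.asarray(pred, float) - np.asarray(ref, float)) * np.asarray(voxel_spacing)
    return np.linalg.norm(diff, axis=1)

def median_iqr(errors):
    """Summarize distance errors as median and interquartile range,
    the statistics reported in the abstract."""
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return med, q3 - q1
```

With per-scan errors for one landmark, `median_iqr` yields the "2.44 (1.79)"-style numbers quoted in the abstract.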