Project 1.1 - High Dimensional Data
We develop deep learning techniques and efficient architectures for the quantitative analysis of 4D and 5D medical images that make optimal use of the additional dimensions, and we apply them to cardiac spectral CT, cardiac MRI, and sequential 4D chest CT.
Project Leader
Dr. Ivana Išgum, Amsterdam UMC, i.isgum@amsterdamumc.nl
Co-Applicants
Prof.dr. Bram van Ginneken, Radboud University Medical Center, bram.vanginneken@radboudumc.nl
Prof.dr. Max Welling, University of Amsterdam, m.welling@uva.nl
Prof.dr.ir. Max Viergever, University Medical Center Utrecht, M.Viergever@umcutrecht.nl
Prof.dr. Tim Leiner, University Medical Center Utrecht, T.Leiner@umcutrecht.nl
Researchers
Dr. Nikolas Lessmann, Radboud University Medical Center, nikolas.lessmann@radboudumc.nl
Dr. Bob de Vos, Amsterdam UMC, b.d.devos@amsterdamumc.nl
Steffen Bruns, Amsterdam UMC, s.bruns@amsterdamumc.nl
Robbert van Hamersvelt, University Medical Center Utrecht, R.W.vanHamersvelt-3@umcutrecht.nl
Publications
2020
S. Bruns, J.M. Wolterink, R.A.P. Takx, R.W. van Hamersvelt, D. Suchá, M.A. Viergever, T. Leiner, I. Išgum. Deep learning from dual-energy information for whole-heart segmentation in dual-energy and single-energy non-contrast-enhanced cardiac CT. Medical Physics (in press), 2020. doi: 10.1002/mp.14451. https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.14451

Abstract:
Purpose: Deep learning-based whole-heart segmentation in coronary CT angiography (CCTA) allows the extraction of quantitative imaging measures for cardiovascular risk prediction. Automatic extraction of these measures in patients undergoing only non-contrast-enhanced CT (NCCT) scanning would be valuable, but defining a manual reference standard that would allow training a deep learning-based method for whole-heart segmentation in NCCT is challenging, if not impossible. In this work, we leverage dual-energy information provided by a dual-layer detector CT scanner to obtain a reference standard in virtual non-contrast (VNC) CT images mimicking NCCT images, and train a 3D convolutional neural network (CNN) for the segmentation of VNC as well as NCCT images.
Methods: Eighteen patients were scanned with and without contrast enhancement on a dual-layer detector CT scanner. Contrast-enhanced acquisitions were reconstructed into a CCTA and a perfectly aligned VNC image. In each CCTA image, manual reference segmentations of the left ventricular (LV) myocardium, LV cavity, right ventricle, left atrium, right atrium, ascending aorta, and pulmonary artery trunk were obtained and propagated to the corresponding VNC image. These VNC images and reference segmentations were used to train 3D CNNs in a six-fold cross-validation for automatic segmentation in either VNC images or NCCT images reconstructed from the non-contrast-enhanced acquisition. Automatic segmentation in VNC images was evaluated using the Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD). Automatically determined volumes of the cardiac chambers and LV myocardium in NCCT were compared to reference volumes of the same patient in CCTA by Bland-Altman analysis. An additional independent multi-vendor, multi-center set of single-energy NCCT images from 290 patients was used for qualitative analysis, in which two observers graded segmentations on a five-point scale.
Results: Automatic segmentations in VNC images showed good agreement with reference segmentations, with an average DSC of 0.897 ± 0.034 and an average ASSD of 1.42 ± 0.45 mm. Volume differences [95% confidence interval] between automatic NCCT and reference CCTA segmentations were -19 [-67; 30] mL for LV myocardium, -25 [-78; 29] mL for LV cavity, -29 [-73; 14] mL for right ventricle, -20 [-62; 21] mL for left atrium, and -19 [-73; 34] mL for right atrium, respectively. In 214 (74%) NCCT images from the independent multi-vendor, multi-center set, both observers agreed that the automatic segmentation was mostly accurate (grade 3) or better.
Conclusion: Our automatic method produced accurate whole-heart segmentations in NCCT images using a CNN trained with VNC images from a dual-layer detector CT scanner. This method might enable quantification of additional cardiac measures from NCCT images for improved cardiovascular risk prediction.
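For orientation, the evaluation measures reported in this paper follow standard definitions. Below is a minimal NumPy sketch of the Dice similarity coefficient on binary masks and of a Bland-Altman bias with 95% limits of agreement on per-patient volumes; the function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def bland_altman(auto_volumes, ref_volumes):
    """Bias and 95% limits of agreement between automatic and reference volumes."""
    diff = np.asarray(auto_volumes, dtype=float) - np.asarray(ref_volumes, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical usage with per-patient LV cavity volumes in mL:
# bias, lower, upper = bland_altman(auto_lv_ml, ref_lv_ml)
```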
B.D. de Vos, B.H.M. van der Velden, J. Sander, K.G.A. Gilhuijs, M. Staring, I. Išgum. Mutual information for unsupervised deep learning image registration. SPIE Medical Imaging (in press), 2020. https://spie.org/MI/conferencedetails/medical-image-processing#2549729

Abstract: Current unsupervised deep learning-based image registration methods are trained with mean squares or normalized cross-correlation as a similarity metric. These metrics are suitable for registration of images where a linear relation between image intensities exists. When such a relation is absent, knowledge from the conventional image registration literature suggests the use of mutual information. In this work we investigate whether mutual information can be used as a loss for unsupervised deep learning image registration by evaluating it on two datasets: breast dynamic contrast-enhanced MR and cardiac MR images. The results show that training with mutual information as a loss gives performance on par with conventional image registration in contrast-enhanced images, and that the approach is generally applicable, since it performs on par with normalized cross-correlation in single-modality registration.
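As background on the similarity metric studied in this work, mutual information between two images can be estimated from their joint intensity histogram. The sketch below illustrates that definition with NumPy; it is not the differentiable loss implementation used for network training, and the bin count is an arbitrary assumption.

```python
import numpy as np

def mutual_information(image_a, image_b, bins=32):
    """Estimate mutual information between two images from a joint histogram."""
    hist_2d, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint probability
    px = pxy.sum(axis=1, keepdims=True)    # marginal of image_a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of image_b
    nonzero = pxy > 0
    return np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero]))
```

In a registration network, the negative of such a quantity would be minimized as the loss, so that maximizing mutual information aligns the images.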
S. Bruns, J.M. Wolterink, T.P.W. van den Boogert, J.P. Henriques, J. Baan, R.N. Planken, I. Išgum. Automatic whole-heart segmentation in 4D TAVI treatment planning CT. SPIE Medical Imaging (in press), 2020.

Abstract: 4D cardiac CT angiography (CCTA) images acquired for transcatheter aortic valve implantation (TAVI) planning provide a wealth of information about the morphology of the heart throughout the cardiac cycle. We propose a deep learning method to automatically segment the cardiac chambers and myocardium in 4D CCTA. We obtain automatic segmentations in 472 patients and use these to automatically identify end-systolic (ES) and end-diastolic (ED) phases, and to determine the left ventricular ejection fraction (LVEF). Our results show that automatic segmentation of cardiac structures through the cardiac cycle is feasible (median Dice similarity coefficient 0.908, median average symmetric surface distance 1.59 mm). Moreover, we demonstrate that these segmentations can be used to accurately identify ES and ED phases (bias [limits of agreement] of 1.81 [-11.0; 14.7]% and -0.02 [-14.1; 14.1]%). Finally, we show that there is correspondence between LVEF values determined from CCTA and echocardiography (-1.71 [-25.0; 21.6]%). Our automatic deep learning approach to segmentation has the potential to routinely extract functional information from 4D CCTA.
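Given a left ventricular cavity volume for every phase of the 4D scan, ED/ES identification and LVEF follow directly from the volume curve, with ED taken at the maximum and ES at the minimum volume. A minimal sketch under that assumption (the per-phase volume list is a hypothetical input, not the authors' pipeline):

```python
import numpy as np

def lvef_from_phase_volumes(lv_volumes_ml):
    """Identify ED/ES phases from per-phase LV cavity volumes and compute LVEF.

    ED is taken as the phase with the largest LV volume, ES as the smallest.
    """
    volumes = np.asarray(lv_volumes_ml, dtype=float)
    ed_phase = int(np.argmax(volumes))
    es_phase = int(np.argmin(volumes))
    edv, esv = volumes[ed_phase], volumes[es_phase]
    lvef = 100.0 * (edv - esv) / edv
    return ed_phase, es_phase, lvef
```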
2019
R.W. van Hamersvelt, I. Išgum, P.A. de Jong, M.J. Cramer, G.E. Leenders, M.J. Willemink, M. Voskuil, T. Leiner. Application of speCtraL computed tomogrAphy to impRove specIficity of cardiac compuTed tomographY (CLARITY study): Rationale and Design. BMJ Open, 9(3):e025793, 2019. https://www.ncbi.nlm.nih.gov/pubmed/30826767

Abstract:
Introduction: Anatomic stenosis evaluation on coronary CT angiography (CCTA) lacks specificity in indicating the functional significance of a stenosis. Recent developments in CT techniques (including dual-layer spectral detector CT [SDCT] and static stress CT perfusion [CTP]) and image analyses (including fractional flow reserve [FFR] derived from CCTA images [FFRCT] and deep learning analysis [DL]) are potential strategies to increase the specificity of CCTA by combining both anatomical and functional information in one investigation. The aim of the current study is to assess the diagnostic performance of (combinations of) SDCT, CTP, FFRCT and DL for the identification of functionally significant coronary artery stenosis.
Methods and analysis: Seventy-five patients aged 18 years and older with stable angina and known coronary artery disease, scheduled to undergo clinically indicated invasive FFR, will be enrolled. All subjects will undergo the following SDCT scans: coronary calcium scoring, static stress CTP, rest CCTA and, if indicated (history of myocardial infarction), a delayed enhancement acquisition. Invasive FFR of ≤0.80, measured within 30 days after the SDCT scans, will be used as reference to indicate a functionally significant stenosis. The primary study endpoint is the diagnostic performance of SDCT (including CTP) for the identification of functionally significant coronary artery stenosis. The secondary study endpoint is the diagnostic performance of SDCT, CTP, FFRCT and DL, separately and combined, for the identification of functionally significant coronary artery stenosis.
Ethics and dissemination: Ethical approval was obtained. All subjects will provide written informed consent. Study findings will be disseminated through peer-reviewed conference presentations and journal publications.
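For reference, diagnostic performance against the invasive FFR reference standard comes down to standard sensitivity and specificity computed with FFR ≤ 0.80 as the positive label. A minimal sketch, where the prediction and FFR arrays are hypothetical inputs, not part of the study protocol:

```python
import numpy as np

def diagnostic_performance(predicted_significant, invasive_ffr, threshold=0.80):
    """Sensitivity and specificity against an invasive FFR <= threshold reference."""
    pred = np.asarray(predicted_significant, dtype=bool)
    ref = np.asarray(invasive_ffr, dtype=float) <= threshold  # functionally significant
    tp = np.sum(pred & ref)
    tn = np.sum(~pred & ~ref)
    sensitivity = tp / ref.sum()
    specificity = tn / (~ref).sum()
    return sensitivity, specificity
```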
S. Bruns, J.M. Wolterink, R.W. van Hamersvelt, T. Leiner, I. Išgum. CNN-based segmentation of the cardiac chambers and great vessels in non-contrast-enhanced cardiac CT. Medical Imaging with Deep Learning (MIDL), London, 2019. https://openreview.net/forum?id=SJeqoqAaFV

Abstract: Quantification of cardiac structures in non-contrast CT (NCCT) could improve cardiovascular risk stratification. However, setting a manual reference to train a fully convolutional network (FCN) for automatic segmentation of NCCT images is hardly feasible, and an FCN trained on coronary CT angiography (CCTA) images would not generalize to NCCT. Therefore, we propose to train an FCN with virtual non-contrast (VNC) images from a dual-layer detector CT scanner and a reference standard obtained on perfectly aligned CCTA images.
N. Lessmann, J.M. Wolterink, M. Zreik, M.A. Viergever, B. van Ginneken, I. Išgum. Vertebra partitioning with thin-plate spline surfaces steered by a convolutional neural network. Medical Imaging with Deep Learning (MIDL), London, 2019. https://openreview.net/forum?id=B1eQv5INqV

Abstract: Thin-plate splines can be used for interpolation of image values, but can also be used to represent a smooth surface, such as the boundary between two structures. We present a method for partitioning vertebra segmentation masks into two substructures, the vertebral body and the posterior elements, using a convolutional neural network that predicts the boundary between the two structures. This boundary is modeled as a thin-plate spline surface defined by a set of control points predicted by the network. The neural network is trained using the reconstruction error of a convolutional autoencoder to enable the use of unpaired data.
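For background, a thin-plate spline surface over control points (x_i, y_i) with heights z_i is the standard TPS interpolant z(x, y) = a_0 + a_1 x + a_2 y + Σ_i w_i U(r_i), with U(r) = r² log r. The NumPy sketch below only illustrates fitting and evaluating such a surface from given control points; how the network predicts those control points is not reproduced here.

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0."""
    out = np.zeros_like(r, dtype=float)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def fit_tps_surface(control_xy, control_z):
    """Fit weights for a height surface z(x, y) passing through the control points."""
    n = control_xy.shape[0]
    r = np.linalg.norm(control_xy[:, None, :] - control_xy[None, :, :], axis=-1)
    K = tps_kernel(r)
    P = np.hstack([np.ones((n, 1)), control_xy])       # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([control_z, np.zeros(3)])
    params = np.linalg.solve(A, rhs)
    return params[:n], params[n:]                       # radial weights w, affine (a0, a1, a2)

def evaluate_tps_surface(xy, control_xy, w, a):
    """Evaluate the fitted surface height at query points xy (shape (m, 2))."""
    r = np.linalg.norm(xy[:, None, :] - control_xy[None, :, :], axis=-1)
    return tps_kernel(r) @ w + a[0] + xy @ a[1:]
```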
S. Bruns, J.M. Wolterink, R.W. van Hamersvelt, M. Zreik, T. Leiner, I. Išgum. Improving myocardium segmentation in cardiac CT angiography using spectral information. SPIE Medical Imaging, 2019. https://arxiv.org/abs/1810.03968

Abstract: Left ventricle myocardium segmentation in cardiac CT angiography (CCTA) is essential for the assessment of myocardial perfusion. Since deep-learning methods for segmentation in CCTA suffer from differences in contrast-agent attenuation, we propose training a 3D CNN with augmentation using virtual mono-energetic reconstructions from a spectral CT scanner. We compare this with augmentation by linear intensity scaling, and combine both augmentations. We train a network with 10 conventional CCTA images and corresponding virtual mono-energetic images acquired on a spectral CT scanner and evaluate on 40 conventional CCTA images. We show that data augmentation with virtual mono-energetic images significantly improves the segmentation.
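As an illustration of the baseline augmentation compared against in this work, linear intensity scaling simply rescales the voxel intensities of a training image by a random factor; the scaling range below is an arbitrary assumption, not the value used in the paper.

```python
import numpy as np

def linear_intensity_scaling(image_hu, rng, low=0.8, high=1.2):
    """Randomly rescale CT intensities as a simple augmentation (range is assumed)."""
    scale = rng.uniform(low, high)
    return image_hu * scale

# Hypothetical usage in a training loop:
# rng = np.random.default_rng(0)
# augmented = linear_intensity_scaling(ccta_volume, rng)
```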