Dr. Bob de Vos
Amsterdam UMC
PostDoc
E-mail: b.d.devos@amsterdamumc.nl
Phone: –
LinkedIn; Google Scholar
Bob works on high-dimensional cardiac MR images, in collaboration with Pie Medical Imaging.
2020
J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum
Deep learning-based regression and classification for automatic landmark localization in medical images
Journal Article, IEEE Transactions on Medical Imaging (in press), 2020. DOI: 10.1109/TMI.2020.3009002. Preprint: https://arxiv.org/pdf/2007.05295.pdf
Abstract: In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, the presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch it points from. Subsequently, for each landmark found by global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage.
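The global step described in the abstract combines patch-wise regression and classification through a probability-weighted vote. As a rough illustration (not the authors' code), the sketch below assumes each patch yields its centre coordinate, a predicted displacement vector, and a classification posterior, and averages the resulting votes:

```python
# Minimal sketch of the patch-vote aggregation described above (an assumption,
# not the paper's implementation): each patch i proposes a landmark location
# c_i + d_i, weighted by its classification posterior p_i.
import numpy as np

def aggregate_landmark(patch_centers, displacements, posteriors):
    """patch_centers, displacements: (N, 3) arrays; posteriors: (N,) array."""
    votes = patch_centers + displacements          # absolute location proposed by each patch
    weights = posteriors / posteriors.sum()        # normalize classification posteriors
    return (weights[:, None] * votes).sum(axis=0)  # probability-weighted average of the votes

# Toy example with hypothetical values
centers = np.array([[10.0, 12.0, 8.0], [14.0, 11.0, 9.0]])
disps   = np.array([[ 2.0, -1.0, 1.0], [-2.0,  0.0, 0.0]])
probs   = np.array([0.9, 0.3])
print(aggregate_landmark(centers, disps, probs))
```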
B.D. de Vos, B.H.M. van der Velden, J. Sander, K.G.A. Gilhuijs, M. Staring, I. Išgum
Mutual information for unsupervised deep learning image registration
Inproceedings, SPIE Medical Imaging (in press), 2020. URL: https://spie.org/MI/conferencedetails/medical-image-processing#2549729
Abstract: Current unsupervised deep learning-based image registration methods are trained with mean squares or normalized cross correlation as a similarity metric. These metrics are suitable for registration of images where a linear relation between image intensities exists. When such a relation is absent, knowledge from the conventional image registration literature suggests the use of mutual information. In this work we investigate whether mutual information can be used as a loss for unsupervised deep learning image registration by evaluating it on two datasets: breast dynamic contrast-enhanced MR and cardiac MR images. The results show that training with mutual information as a loss gives performance on par with conventional image registration in contrast-enhanced images, and that it is generally applicable, since it performs on par with normalized cross correlation in single-modality registration.
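Mutual information measures how much knowing one image's intensities reduces uncertainty about the other's, which is why it suits image pairs whose intensities are not linearly related. The plain-NumPy sketch below only illustrates the quantity itself; using it as a training loss, as in the paper, additionally requires a differentiable estimate of the joint histogram, which this sketch does not implement:

```python
# Illustrative sketch (not the paper's implementation) of mutual information
# between two images, computed from a joint intensity histogram.
import numpy as np

def mutual_information(fixed, moving, bins=32):
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint intensity probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)      # marginal of the moving image
    nz = pxy > 0                             # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# An image shares more information with itself than with unrelated noise.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img), mutual_information(img, rng.random((64, 64))))
```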
J.M.H. Noothout, E.M. Postma, S. Boesveldt, B.D. de Vos, P.A.M. Smeets, I. Išgum
Automatic segmentation of the olfactory bulbs in MRI
Inproceedings, SPIE Medical Imaging (in press), 2020.
Abstract: A decrease in volume of the olfactory bulbs is an early marker for neurodegenerative diseases, such as Parkinson’s and Alzheimer’s disease. Recently, asymmetric volumes of the olfactory bulbs in postmortem MRIs of COVID-19 patients indicated that the olfactory bulbs might play an important role in the entrance of the disease into the central nervous system. Hence, volumetric assessment of the olfactory bulbs can be valuable for various conditions. Given that manual annotation of the olfactory bulbs in MRI to determine their volume is tedious, we propose a method for their automatic segmentation. To mitigate the class imbalance caused by the small volume of the olfactory bulbs, we first localize the center of each OB in a scan using convolutional neural networks (CNNs). We use these center locations to extract a bounding box containing both olfactory bulbs. Subsequently, the slices present in the bounding box are analyzed by a segmentation CNN that classifies each voxel as left OB, right OB, or background. The method achieved median (IQR) Dice coefficients of 0.84 (0.08) and 0.83 (0.08), and Average Symmetrical Surface Distances of 0.12 (0.08) and 0.13 (0.08) mm for the left and right OB, respectively. Wilcoxon signed-rank tests showed no significant difference between the volumes computed from the reference annotations and the automatic segmentations. Analysis took only 0.20 seconds per scan, and the results indicate that the proposed method could be a first step towards large-scale studies analyzing pathology and morphology of the olfactory bulbs.
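The segmentation quality above is reported as a Dice coefficient per olfactory bulb. As a minimal, hypothetical illustration of that metric (not the study's evaluation code):

```python
# Dice coefficient between a predicted and a reference binary segmentation:
# twice the overlap divided by the sum of the two volumes.
import numpy as np

def dice(seg, ref):
    """seg, ref: boolean arrays of the same shape."""
    intersection = np.logical_and(seg, ref).sum()
    return 2.0 * intersection / (seg.sum() + ref.sum())

seg = np.zeros((4, 4), dtype=bool); seg[1:3, 1:3] = True   # toy prediction (4 voxels)
ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True   # toy reference (6 voxels)
print(dice(seg, ref))  # 2*4 / (4+6) = 0.8
```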
J. Sander, B.D. de Vos, I. Išgum
Unsupervised super-resolution: creating high-resolution medical images from low-resolution anisotropic examples
Inproceedings, SPIE Medical Imaging (in press), 2020.
Abstract: Although high-resolution isotropic 3D medical images are desired in clinical practice, their acquisition is not always feasible. Instead, lower-resolution images are upsampled to higher resolution using conventional interpolation methods. Sophisticated learning-based super-resolution approaches are frequently unavailable in the clinical setting, because such methods require training with high-resolution isotropic examples. To address this issue, we propose a learning-based super-resolution approach that can be trained using solely anisotropic images, i.e. without high-resolution ground truth data. The method exploits the latent space, generated by autoencoders trained on anisotropic images, to increase the spatial resolution of low-resolution images. The method was trained and evaluated using 100 publicly available cardiac cine MR scans from the Automated Cardiac Diagnosis Challenge (ACDC). The quantitative results show that the proposed method performs better than conventional interpolation methods. Furthermore, the qualitative results indicate that especially finer cardiac structures are synthesized with high quality. The method has the potential to be applied to other anatomies and modalities and can easily be applied to any 3D anisotropic medical image dataset.
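For context, the conventional interpolation baseline the abstract contrasts against amounts to resampling an anisotropic volume to isotropic voxel spacing. A hedged sketch using SciPy (an assumption for illustration, not the proposed autoencoder method) could look like this:

```python
# Upsample an anisotropic volume (coarse through-plane, fine in-plane) to
# roughly isotropic spacing with spline interpolation.
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(volume, spacing, target=1.0, order=3):
    """volume: 3D array; spacing: (sz, sy, sx) in mm; returns ~target-mm isotropic volume."""
    factors = [s / target for s in spacing]
    return zoom(volume, zoom=factors, order=order)

vol = np.random.rand(10, 256, 256)                 # e.g. 8 mm slices, 1 mm in-plane
iso = to_isotropic(vol, spacing=(8.0, 1.0, 1.0))
print(vol.shape, "->", iso.shape)                  # (10, 256, 256) -> (80, 256, 256)
```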
2019
J. Sander, B.D. de Vos, J.M. Wolterink, I. Išgum
Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI
Inproceedings, SPIE Medical Imaging, 2019. Preprint: https://arxiv.org/pdf/1809.10430.pdf
Abstract: Current state-of-the-art deep learning segmentation methods have not yet made a broad entrance into the clinical setting, in spite of the high demand for such automatic methods. One important reason is the lack of reliability caused by models that fail unnoticed and often locally produce anatomically implausible results that medical experts would not make. This paper presents an automatic image segmentation method based on (Bayesian) dilated convolutional networks (DCNNs) that generates segmentation masks and spatial uncertainty maps for the input image at hand. The method was trained and evaluated using segmentation of the left ventricle (LV) cavity, right ventricle (RV) endocardium and myocardium (Myo) at end-diastole (ED) and end-systole (ES) in 100 cardiac 2D MR scans from the MICCAI 2017 Challenge (ACDC). Combining segmentations and uncertainty maps in a human-in-the-loop setting, we provide evidence that image areas indicated as highly uncertain almost entirely cover regions of incorrect segmentation. The fused information can be harnessed to increase segmentation performance. Our results reveal that we can obtain valuable spatial uncertainty maps with low computational effort using DCNNs.
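The abstract does not spell out how the spatial uncertainty maps are obtained. One common way to produce such maps from a dropout-equipped (Bayesian) CNN, which may differ from the authors' exact formulation, is Monte Carlo dropout, sketched below with a hypothetical PyTorch segmentation model:

```python
# Monte Carlo dropout sketch (an assumption, not the paper's exact method):
# run the network several times with dropout kept stochastic and treat the
# per-voxel spread of the softmax outputs as a spatial uncertainty map.
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, image, samples=20):
    """image: (1, C, H, W) tensor; returns mean softmax and per-voxel std."""
    model.train()  # keeps dropout stochastic (note: also affects batch norm;
                   # real code would switch only dropout layers to train mode)
    probs = torch.stack([torch.softmax(model(image), dim=1) for _ in range(samples)])
    return probs.mean(dim=0), probs.std(dim=0)      # prediction and uncertainty map
```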
Julia M.H. Noothout, Bob D. de Vos, Jelmer M. Wolterink, Richard A.P. Takx, Tim Leiner, Ivana Išgum
Deep Learning for Automatic Landmark Localization in CTA for Transcatheter Aortic Valve Implantation
Conference, Radiological Society of North America, 105th Annual Meeting, 2019. URL: http://dlmedia.eu/landmarks_rsna2019_final-3/
PURPOSE: Fast and accurate automatic landmark localization in CT angiography (CTA) scans can aid treatment planning for patients undergoing transcatheter aortic valve implantation (TAVI). Manual localization of landmarks can be time-consuming and cumbersome. Automatic landmark localization can potentially reduce post-processing time and interobserver variability. Hence, this study evaluates the performance of deep learning for automatic aortic root landmark localization in CTA.
METHOD AND MATERIALS: This study included 672 retrospectively gated CTA scans acquired as part of clinical routine (Philips Brilliance iCT-256 scanner, 0.9 mm slice thickness, 0.45 mm increment, 80-140 kVp, 210-300 mAs, contrast). The reference standard was defined by manual localization of the left (LH), non-coronary (NCH) and right (RH) aortic valve hinge points, and the right (RO) and left (LO) coronary ostia. To develop and evaluate the automatic method, 412 training, 60 validation, and 200 test CTAs were randomly selected. 100 of the 200 test CTAs were annotated twice by the same observer and once by a second observer to estimate intra- and interobserver agreement. Five CNNs with identical architectures were trained, one for the localization of each landmark. Since treatment planning for TAVI relies on distances between landmark points, performance was evaluated at the subvoxel level with the Euclidean distance between reference and automatically predicted landmark locations.
RESULTS: Median (IQR) distance errors for the LH, NCH and RH were 2.44 (1.79), 3.01 (1.82) and 2.98 (2.09) mm, respectively. Repeated annotation by the first observer led to distance errors of 2.06 (1.43), 2.57 (2.22) and 2.58 (2.30) mm, and annotation by the second observer to 1.80 (1.32), 1.99 (1.28) and 1.81 (1.68) mm, respectively. Median (IQR) distance errors for the RO and LO were 1.65 (1.33) and 1.91 (1.58) mm, respectively. Repeated annotation by the first observer led to distance errors of 1.43 (1.05) and 1.92 (1.44) mm, and annotation by the second observer to 1.78 (1.55) and 2.35 (1.56) mm, respectively. On average, analysis took 0.3 s per CTA.
CONCLUSION: Automatic landmark localization in CTA approaches second-observer performance and thus enables automatic, accurate and reproducible landmark localization without additional reading time.
CLINICAL RELEVANCE/APPLICATION: Automatic landmark localization in CTA can aid in reducing post-processing time and interobserver variability in treatment planning for patients undergoing TAVI.
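The evaluation metric above is the subvoxel Euclidean distance between reference and predicted landmark locations, expressed in millimetres. A minimal sketch with hypothetical values (not the study's code):

```python
# Landmark localization error in mm: convert voxel coordinates to world
# coordinates via the voxel spacing, then take the Euclidean distance.
import numpy as np

def landmark_error_mm(pred_voxel, ref_voxel, spacing):
    """pred_voxel, ref_voxel: (3,) voxel coordinates; spacing: (3,) voxel size in mm."""
    diff_mm = (np.asarray(pred_voxel) - np.asarray(ref_voxel)) * np.asarray(spacing)
    return float(np.linalg.norm(diff_mm))

# Hypothetical subvoxel prediction vs. reference on a 0.45 x 0.45 x 0.9 mm grid
print(landmark_error_mm([120.4, 98.7, 45.1], [121.0, 99.5, 44.8], [0.45, 0.45, 0.9]))
```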