Jörg received his Master of Arts in Experimental Psychology from the University of Leiden in 1997 and his MSc in Artificial Intelligence from the University of Amsterdam in 2017. His Master's thesis was entitled "Combining adaptive-computation-time and learning-to-learn approaches for optimizing loss functions of base-learners".
In 2018, Jörg joined the Image Science Institute as a Ph.D. candidate under the supervision of Dr. Ivana Išgum. His research, part of the program Deep Learning in Medical Image Analysis (DLMedIA), focuses on the development of learning strategies that allow learning systems (deep neural networks) to continuously learn across different tasks in cardiac MRI analysis.
2020
B.D. de Vos, B.H.M. van der Velden, J. Sander, K.G.A. Gilhuijs, M. Staring, I. Išgum. Mutual information for unsupervised deep learning image registration. In: SPIE Medical Imaging, 2020 (in press).
Link: https://spie.org/MI/conferencedetails/medical-image-processing#2549729
Abstract: Current unsupervised deep learning-based image registration methods are trained with mean squares or normalized cross-correlation as a similarity metric. These metrics are suitable for registration of images where a linear relation between image intensities exists. When such a relation is absent, conventional image registration literature suggests the use of mutual information. In this work we investigate whether mutual information can be used as a loss for unsupervised deep learning image registration by evaluating it on two datasets: breast dynamic contrast-enhanced MR and cardiac MR images. The results show that training with mutual information as a loss gives performance on par with conventional image registration in contrast-enhanced images, and that the loss is generally applicable, performing on par with normalized cross-correlation in single-modality registration.
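To give a flavor of the idea, below is a minimal sketch of a differentiable mutual information estimate using Gaussian Parzen windowing, one common way to make the metric usable as a training loss. The windowing choice and all names are assumptions for illustration, not the paper's implementation.

import torch

def mutual_information_loss(x, y, num_bins=32, sigma=0.05, eps=1e-10):
    """Negative mutual information between two intensity images.

    x, y: tensors of shape (batch, num_voxels), intensities scaled to [0, 1].
    """
    bins = torch.linspace(0.0, 1.0, num_bins, device=x.device)
    # Soft (differentiable) assignment of every voxel to every histogram bin.
    wx = torch.exp(-0.5 * ((x.unsqueeze(-1) - bins) / sigma) ** 2)
    wy = torch.exp(-0.5 * ((y.unsqueeze(-1) - bins) / sigma) ** 2)
    wx = wx / (wx.sum(dim=-1, keepdim=True) + eps)
    wy = wy / (wy.sum(dim=-1, keepdim=True) + eps)
    # Joint histogram (batch, bins, bins) and its marginals, as probabilities.
    p_xy = torch.bmm(wx.transpose(1, 2), wy) / x.shape[1]
    p_x = p_xy.sum(dim=2, keepdim=True)
    p_y = p_xy.sum(dim=1, keepdim=True)
    mi = (p_xy * torch.log((p_xy + eps) / (p_x * p_y + eps))).sum(dim=(1, 2))
    return -mi.mean()  # maximizing MI = minimizing the loss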
J. Sander, B.D. de Vos, I. Išgum. Unsupervised super-resolution: creating high-resolution medical images from low-resolution anisotropic examples. In: SPIE Medical Imaging, 2020 (in press).
Abstract: Although high-resolution isotropic 3D medical images are desired in clinical practice, their acquisition is not always feasible. Instead, lower-resolution images are upsampled to higher resolution using conventional interpolation methods. Sophisticated learning-based super-resolution approaches are frequently unavailable in the clinical setting, because such methods require training with high-resolution isotropic examples. To address this issue, we propose a learning-based super-resolution approach that can be trained using solely anisotropic images, i.e. without high-resolution ground-truth data. The method exploits the latent space, generated by autoencoders trained on anisotropic images, to increase spatial resolution in low-resolution images. The method was trained and evaluated using 100 publicly available cardiac cine MR scans from the Automated Cardiac Diagnosis Challenge (ACDC). The quantitative results show that the proposed method performs better than conventional interpolation methods. Furthermore, the qualitative results indicate that especially finer cardiac structures are synthesized with high quality. The method has the potential to be applied to other anatomies and modalities and can be easily applied to any 3D anisotropic medical image dataset.
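A hypothetical sketch of the latent-space idea, assuming a new intermediate slice is synthesized by interpolating the latent codes of two neighbouring anisotropic slices; encoder and decoder are placeholders for the trained autoencoder halves, and the interpolation scheme is an assumption rather than the paper's exact procedure.

import torch

@torch.no_grad()
def synthesize_intermediate_slice(encoder, decoder, slice_a, slice_b, alpha=0.5):
    """Decode a new slice between two neighbouring low-resolution slices.

    slice_a, slice_b: adjacent 2D slices, shape (1, 1, H, W); alpha in (0, 1)
    controls where between the two input slices the new slice is placed.
    """
    z_a = encoder(slice_a)
    z_b = encoder(slice_b)
    z_mid = (1.0 - alpha) * z_a + alpha * z_b  # linear interpolation in latent space
    return decoder(z_mid)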
2019
J. Sander, B.D. de Vos, J.M. Wolterink, I. Išgum. Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI. In: SPIE Medical Imaging, 2019.
Link: https://arxiv.org/pdf/1809.10430.pdf
Abstract: Current state-of-the-art deep learning segmentation methods have not yet made a broad entrance into the clinical setting, in spite of high demand for such automatic methods. One important reason is the lack of reliability caused by models that fail unnoticed and often produce locally anatomically implausible results that medical experts would not make. This paper presents an automatic image segmentation method based on (Bayesian) dilated convolutional networks (DCNNs) that generates segmentation masks and spatial uncertainty maps for the input image at hand. The method was trained and evaluated on segmentation of the left ventricle (LV) cavity, right ventricle (RV) endocardium and myocardium (Myo) at end-diastole (ED) and end-systole (ES) in 100 cardiac 2D MR scans from the MICCAI 2017 Challenge (ACDC). Combining segmentations and uncertainty maps in a human-in-the-loop setting, we provide evidence that image areas indicated as highly uncertain almost entirely cover regions of incorrect segmentation. The fused information can be harnessed to increase segmentation performance. Our results reveal that valuable spatial uncertainty maps can be obtained with low computational effort using DCNNs.
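As a rough illustration of how such spatial uncertainty maps can be obtained, the sketch below uses Monte Carlo dropout at test time, a common approximation for Bayesian networks; it is not necessarily the exact procedure used in the paper. Here model stands for any trained segmentation network that contains dropout layers.

import torch

def segment_with_uncertainty(model, image, num_samples=10, eps=1e-10):
    """Segmentation plus per-voxel entropy map via Monte Carlo dropout.

    image: input batch of shape (batch, channels, H, W); the model is assumed
    to return per-class logits of shape (batch, classes, H, W).
    """
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1)
                             for _ in range(num_samples)]).mean(dim=0)
    segmentation = probs.argmax(dim=1)                          # label map
    uncertainty = -(probs * torch.log(probs + eps)).sum(dim=1)  # entropy map
    return segmentation, uncertainty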