Image-segmentation techniques based on supervised classification generally perform well provided that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions can be very different, which can hurt the performance of such techniques. We propose a feature-space-transformation method to overcome these differences in feature distributions. Our method learns a mapping of the feature values of training voxels to values observed in images from the test scanner. This transformation is learned from unlabeled images of subjects scanned on both the training scanner and the test scanner. We evaluated our method on hippocampus segmentation using 27 images of the Harmonized Hippocampal Protocol (HarP), a heterogeneous dataset consisting of 1.5T and 3T MR images. The results showed that our feature-space transformation improved the Dice overlap of segmentations obtained with an SVM classifier from 0.36 to 0.85 when only 10 atlases were used and from 0.79 to 0.85 when around 100 atlases were used.
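The core idea, learning a mapping from training-scanner feature values to test-scanner feature values using paired unlabeled images of the same subjects, can be sketched as follows. This is a minimal illustration assuming a per-feature linear least-squares map; the paper's actual transformation and feature set may differ, and the function names here are hypothetical.

```python
import numpy as np


def fit_feature_space_transform(src, dst):
    """Fit a per-feature linear map from src to dst.

    src, dst: (n_voxels, n_features) arrays of corresponding voxel
    features from the same subjects scanned on the training scanner
    (src) and the test scanner (dst). Returns slopes a and intercepts b
    such that src * a + b approximates dst.

    Illustrative sketch only: a simple least-squares fit per feature,
    not necessarily the mapping used in the paper.
    """
    n_features = src.shape[1]
    a = np.empty(n_features)
    b = np.empty(n_features)
    for j in range(n_features):
        # np.polyfit with deg=1 returns (slope, intercept)
        a[j], b[j] = np.polyfit(src[:, j], dst[:, j], deg=1)
    return a, b


def apply_transform(features, a, b):
    """Map training-scanner features into the test scanner's feature space."""
    return features * a + b
```

After fitting on the paired unlabeled images, `apply_transform` would be applied to the labeled training voxels before training the classifier (e.g., an SVM), so that training and test features lie in the same distribution.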

Brain, Hippocampus, Machine learning, MRI, Transfer learning
Biomedical Imaging Group Rotterdam

van Opbroek, A., Achterberg, H. C., & de Bruijne, M. (2015). Feature-space transformation improves supervised segmentation across scanners. doi:10.1007/978-3-319-27929-9_9