Dr. Munsell’s Research Published In IEEE Transactions on Biomedical Engineering

Dr. Brent Munsell’s research on “Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning” has been selected for publication in IEEE Transactions on Biomedical Engineering (TBME). One of the top journals in biomedical engineering, TBME is ranked No. 4 among biomedical technology journals by Google Scholar’s h5-index. “This is an important contribution that significantly advances the state-of-the-art in the field of medical image registration, and is the culmination of several years of hard work! We are extremely happy to have this research published in such a great journal,” stated Dr. Munsell.


Abstract:

Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features in the observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Because the auto-encoder is trained in an unsupervised manner, no ground-truth label knowledge is required. This makes the proposed feature selection method more flexible for new imaging modalities, since feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, registration experiments were also conducted on 7.0-tesla brain MR images. In all experiments, the proposed framework consistently produced more accurate registration results than the state-of-the-art methods.
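For readers who want a concrete picture of the idea, the sketch below is a minimal illustration, in PyTorch, of unsupervised patch feature learning with a convolutional auto-encoder. It is not the architecture from the paper: the patch size, layer widths, feature dimension, and training loop are illustrative assumptions, and the paper’s stacked, layer-wise construction is collapsed here into a single end-to-end auto-encoder trained on reconstruction error. The point it demonstrates is the one stated in the abstract: features are learned from raw image patches with no ground-truth labels, and the encoder output can then serve as a compact patch descriptor for correspondence detection.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact model):
# an unsupervised convolutional auto-encoder that learns compact feature
# vectors for 3-D image patches from reconstruction error alone.
import torch
import torch.nn as nn


class PatchAutoEncoder(nn.Module):
    def __init__(self, feature_dim: int = 64):
        super().__init__()
        # Encoder: compress a 16x16x16 intensity patch into a short feature vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),   # 16 -> 8
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),  # 8 -> 4
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4 * 4, feature_dim),
        )
        # Decoder: reconstruct the patch, forcing the encoder to retain the
        # discriminative structure of the input.
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, 32 * 4 * 4 * 4),
            nn.Unflatten(1, (32, 4, 4, 4)),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 8 -> 16
        )

    def forward(self, x):
        z = self.encoder(x)            # deep feature representation of the patch
        return self.decoder(z), z


def train_unsupervised(patches: torch.Tensor, epochs: int = 10) -> PatchAutoEncoder:
    """Train on raw patches only; no ground-truth labels are needed."""
    model = PatchAutoEncoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(patches)
        loss = loss_fn(recon, patches)  # reconstruction error drives learning
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model


if __name__ == "__main__":
    # Toy example: random 16x16x16 patches stand in for patches sampled from
    # an MR volume; in practice patches would be extracted from the images
    # being registered and matched by comparing their encoder features.
    fake_patches = torch.rand(128, 1, 16, 16, 16)
    model = train_unsupervised(fake_patches, epochs=2)
    _, features = model(fake_patches[:4])
    print(features.shape)  # torch.Size([4, 64])
```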

Figure: conceptual overview of the deep-learning-based registration framework (TBME_DL_Concept).

The full publication is coming soon! We are currently preparing the camera-ready version.