Summary by Anmol Sharma
Alzheimer's Disease (AD) is characterized by impairment of cognitive and memory function, most often leading to dementia in elderly subjects. Over the last decade, neuroimaging has emerged as a potential tool for diagnosing AD and its prodromal stage, Mild Cognitive Impairment (MCI), and there is increasing evidence that biomarkers from different modalities, such as MRI and PET, provide complementary information that can improve diagnostic accuracy, particularly in the early stages of the disease. However, most previous work in this area either focuses on a single modality (MRI or PET) or relies on hand-crafted features that are simply concatenated into a single vector.
In this paper, Suk et al. propose a Deep Boltzmann Machine (DBM) based method that learns a high-level latent and shared feature representation from two neuroimaging modalities (MRI and PET). Specifically, they use a DBM as a building block to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for obtaining a joint feature representation from paired MRI and PET patches with a multimodal DBM.
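To make the "DBM as a building block" idea concrete, here is a minimal sketch (not the authors' code) of learning a hierarchical representation from flattened 3D patches. A true DBM trains all layers jointly; the greedy layer-wise RBM stack below (closer to DBN-style pre-training) only illustrates how each layer yields a more abstract feature representation. Patch size, layer widths, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.RandomState(0)
# Hypothetical data: 500 flattened 3D patches of size 11x11x11, scaled to [0, 1].
patches = rng.rand(500, 11 * 11 * 11)

# First-layer RBM: low-level latent features of the patch.
rbm1 = BernoulliRBM(n_components=256, learning_rate=0.01, n_iter=20, random_state=0)
h1 = rbm1.fit_transform(patches)

# Second-layer RBM: higher-level, more abstract representation.
rbm2 = BernoulliRBM(n_components=128, learning_rate=0.01, n_iter=20, random_state=0)
h2 = rbm2.fit_transform(h1)
```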
The method first selects class-discriminative patches from a pair of MRI and PET images using a statistical significance test between classes. A multimodal DBM (MM-DBM) is then built to find a shared feature representation from the paired patches. The MM-DBM is not trained directly on the patches; instead, it is trained on binary vectors obtained by passing the patches through a Restricted Boltzmann Machine (RBM), which transforms the real-valued observations into binary vectors.
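A hedged sketch of the patch-selection step: for each candidate patch location, compare the two groups (e.g. AD vs. normal controls) with a two-sample t-test and keep locations whose p-value falls below a threshold. The function name, input layout, and the 0.05 threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import ttest_ind

def select_discriminative_patches(feat_ad, feat_nc, alpha=0.05):
    """feat_ad, feat_nc: arrays of shape (n_subjects, n_patch_locations),
    holding one summary value (e.g. mean patch intensity) per location."""
    _, p_values = ttest_ind(feat_ad, feat_nc, axis=0)
    # Keep only locations where the group difference is statistically significant.
    return np.where(p_values < alpha)[0]
```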
The top hidden layer of the MM-DBM is connected to the lower hidden layers of both modality pathways and to a label layer, so that it extracts a shared feature representation by fusing the MRI and PET neuroimaging information. This multimodal model thus produces a single fused feature representation per patch pair.
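The fusion idea can be illustrated with a toy forward pass: each modality's visible vector drives its own lower hidden layer, and the top layer receives both (in the paper, together with a label layer), yielding one shared representation. The weights below are random placeholders, not trained DBM parameters, and the layer sizes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.RandomState(0)
W_mri, W_pet = rng.randn(100, 64), rng.randn(100, 64)   # modality-specific weights (hypothetical sizes)
W_top = rng.randn(64 + 64, 32)                          # top layer sees both modality pathways

v_mri, v_pet = rng.rand(100), rng.rand(100)             # patch vectors for each modality
h_mri = sigmoid(v_mri @ W_mri)                          # MRI pathway hidden layer
h_pet = sigmoid(v_pet @ W_pet)                          # PET pathway hidden layer
shared = sigmoid(np.concatenate([h_mri, h_pet]) @ W_top)  # fused, shared representation
```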
On top of this feature representation, a Support Vector Machine (SVM) based classification step is added. Rather than considering all patch-level classifier outputs simultaneously, the SVM outputs are agglomerated over locally distributed patches by constructing spatially distributed 'mega-patches', under the consideration that disease-related brain areas are distributed over distant brain regions of arbitrary shape and size. The training data is then divided into multiple subsets, and an image-level classifier is trained on each subset individually. The method was evaluated on the ADNI dataset with MRI and PET images.
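A simplified sklearn sketch of this hierarchical classification idea: train one SVM per patch location on the shared DBM features, then train an image-level SVM on the stacked patch-level decision values. The grouping of patches into spatially contiguous mega-patches and the subset-wise ensemble are omitted for brevity; the data shapes, labels, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
n_subjects, n_patches, feat_dim = 100, 20, 32
X = rng.randn(n_subjects, n_patches, feat_dim)   # shared features per patch (placeholder data)
y = rng.randint(0, 2, n_subjects)                # hypothetical AD vs. NC labels

# Patch-level classifiers: one linear SVM per patch location.
patch_svms = [SVC(kernel='linear').fit(X[:, p, :], y) for p in range(n_patches)]

# Image-level classifier trained on the concatenated patch-level decision values.
patch_scores = np.column_stack([clf.decision_function(X[:, p, :])
                                for p, clf in enumerate(patch_svms)])
image_svm = SVC(kernel='linear').fit(patch_scores, y)
```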