Deep MR to CT Synthesis using Unpaired Data
Jelmer M. Wolterink
Anna M. Dinkla
Mark H. F. Savenije
Peter R. Seevinck
Cornelis A. T. van den Berg
arXiv e-Print archive, 2017
First published: 2017/08/03
Abstract: MR-only radiotherapy treatment planning requires accurate MR-to-CT synthesis.
Current deep learning methods for MR-to-CT synthesis depend on pairwise aligned
MR and CT training images of the same patient. However, misalignment between
paired images could lead to errors in synthesized CT images. To overcome this,
we propose to train a generative adversarial network (GAN) with unpaired MR and
CT images. A GAN consisting of two synthesis convolutional neural networks
(CNNs) and two discriminator CNNs was trained with cycle consistency to
transform 2D brain MR image slices into 2D brain CT image slices and vice
versa. Brain MR and CT images of 24 patients were analyzed. A quantitative
evaluation showed that the model was able to synthesize CT images that closely
approximate reference CT images, and was able to outperform a GAN model trained
with paired MR and CT images.
Task: convert MR scans to CT scans using unpaired brain CT/MR images.
The dataset contains both CT and MR scans of the same patients, acquired on the same day.
The volumes are aligned using mutual information but still contain some minor local misalignments.
Train the following models:
1. Syn_ct: CNN mapping MR -> CT
2. Syn_mr: CNN mapping CT -> MR
3. Dis_ct: classifies real vs. synthetic CT images (outputs of Syn_ct)
4. Dis_mr: classifies real vs. synthetic MR images, i.e. Syn_mr(Syn_ct(MR image)) or Syn_mr(CT image)
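The objective tying these four networks together can be sketched as a CycleGAN-style loss: an adversarial term for each synthesis network plus an L1 cycle-consistency term that forces MR -> CT -> MR and CT -> MR -> CT to reconstruct the input. The toy sketch below uses hypothetical linear maps in place of the paper's CNNs (the function names `syn_ct`, `syn_mr`, `dis_ct`, `dis_mr`, the weight matrices, and the least-squares adversarial form are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the four networks (hypothetical linear maps, not the
# paper's CNN architectures).
W_ct = np.eye(16) + 0.1 * rng.normal(size=(16, 16))   # Syn_ct weights
W_mr = np.eye(16) + 0.1 * rng.normal(size=(16, 16))   # Syn_mr weights
w_dis_ct = rng.normal(size=16)                        # Dis_ct weights
w_dis_mr = rng.normal(size=16)                        # Dis_mr weights

def syn_ct(mr): return mr @ W_ct        # Syn_ct: MR -> synthetic CT
def syn_mr(ct): return ct @ W_mr        # Syn_mr: CT -> synthetic MR
def dis_ct(ct): return ct @ w_dis_ct    # Dis_ct: realness score for a CT image
def dis_mr(mr): return mr @ w_dis_mr    # Dis_mr: realness score for an MR image

def generator_losses(mr, ct, lam=10.0):
    """Least-squares adversarial loss plus L1 cycle consistency,
    in the spirit of the CycleGAN objective the notes describe."""
    fake_ct, fake_mr = syn_ct(mr), syn_mr(ct)
    # Generators try to make the discriminators score their fakes as real (1).
    adv = np.mean((dis_ct(fake_ct) - 1) ** 2) + np.mean((dis_mr(fake_mr) - 1) ** 2)
    # Cycle consistency: MR -> CT -> MR and CT -> MR -> CT should reconstruct.
    cyc = np.mean(np.abs(syn_mr(fake_ct) - mr)) + np.mean(np.abs(syn_ct(fake_mr) - ct))
    return adv, lam * cyc

# Unpaired toy batches: the MR and CT slices need not come from matched scans.
mr_batch = rng.normal(size=(4, 16))
ct_batch = rng.normal(size=(4, 16))
adv_loss, cyc_loss = generator_losses(mr_batch, ct_batch)
```

The cycle term is what lets training proceed without paired data: it penalizes any Syn_ct/Syn_mr pair that is not approximately invertible, so the networks cannot drift to arbitrary mappings that merely fool the discriminators.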