Generation of pseudo-CT (pCT) images from MR images is of interest for applications such as PET/MR attenuation correction (AC) and MR-guided radiation therapy planning (MRgRTP). In this work, we demonstrate a deep learning (DL) method that harnesses the style-transfer capability of a generative adversarial network (GAN) to compute qualitatively and quantitatively accurate continuous-density bone pCT images from a Dixon-based LAVA-Flex MRI sequence with a short acquisition time. The method is evaluated for PET/MR attenuation correction.
Patient data: MR scans were performed on a 3T time-of-flight (TOF) Signa PET/MR scanner (GE Healthcare, Chicago, IL, USA). For PET/MR, 25 patients were scanned using a fast LAVA-Flex protocol covering the pelvis: 1.95x1.95x2.6 mm resolution, 18 s scan time, pelvis surface coil, FA = 5°, FOV = 600 mm. PET and CT images were acquired with the same protocol as described in [4]. All patient studies were approved by the respective Institutional Review Boards, and signed informed consent was obtained.
MRI pre-processing: Intensity inhomogeneity correction was performed on the In-phase images using the ITK N4 algorithm, and intensities were normalized by volume-wise z-score.
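A minimal sketch of this pre-processing step is given below, using the SimpleITK wrapper of the ITK N4 filter; the Otsu foreground mask and file handling are illustrative assumptions rather than the exact pipeline used.

```python
# Sketch: N4 bias-field correction of the In-phase volume followed by
# volume-wise z-score normalization. Mask generation and I/O are assumptions.
import SimpleITK as sitk
import numpy as np

def preprocess_inphase(mri_path):
    image = sitk.ReadImage(mri_path, sitk.sitkFloat32)

    # N4 bias-field (intensity inhomogeneity) correction
    mask = sitk.OtsuThreshold(image, 0, 1)          # rough foreground mask
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    corrected = corrector.Execute(image, mask)

    # Volume-wise z-score normalization
    arr = sitk.GetArrayFromImage(corrected)
    arr = (arr - arr.mean()) / (arr.std() + 1e-8)

    out = sitk.GetImageFromArray(arr)
    out.CopyInformation(corrected)
    return out
```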
CT-to-MR registration: CT images were registered to the ZTE MR images using a combination of rigid and diffeomorphic dense registration algorithms implemented in ITK [5, 6].
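The sketch below illustrates the rigid-then-diffeomorphic structure of such a registration using SimpleITK; it does not reproduce the specific algorithms of [5, 6], and the metric, demons stage, and parameter values are placeholder assumptions.

```python
# Illustrative CT-to-MR registration: rigid (mutual-information) alignment
# followed by a diffeomorphic demons dense registration. Parameters and the
# histogram-matching step are assumptions, not the published pipeline.
import SimpleITK as sitk

def register_ct_to_mr(ct, mr):
    ct = sitk.Cast(ct, sitk.sitkFloat32)
    mr = sitk.Cast(mr, sitk.sitkFloat32)

    # --- Rigid stage ---
    initial = sitk.CenteredTransformInitializer(
        mr, ct, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(initial, inPlace=False)
    rigid_tx = reg.Execute(mr, ct)
    ct_rigid = sitk.Resample(ct, mr, rigid_tx, sitk.sitkLinear, -1000.0)

    # --- Diffeomorphic dense stage (demons) ---
    # Demons assumes similar intensity profiles, so match histograms first.
    ct_matched = sitk.HistogramMatching(ct_rigid, mr)
    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(2.0)
    field = demons.Execute(mr, ct_matched)
    ct_deformed = sitk.Resample(
        ct_rigid, mr, sitk.DisplacementFieldTransform(field),
        sitk.sitkLinear, -1000.0)
    return ct_deformed
```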
Deep learning based pCT computation: A 2D generative adversarial network based on the CycleGAN image-translation architecture [7] was adapted to compute pCT from the In-phase and Fat channels of the LAVA-Flex MRI. Cyclic generator-discriminator networks, trained with a combination of mean absolute error (MAE) and cyclic loss functions, were optimized using the Adam optimizer. The generator consists of 5 convolution layers (with ReLU activations), a merge layer, and a LeakyReLU activation at the last layer; the discriminator consists of 4 convolution layers (ReLU activations) with a sigmoid activation on the last layer (see the architectural sketch below). The GAN was trained for 200 epochs. To increase the robustness of the framework, the output pCT was generated in a Bayesian manner by combining predictions from multiple points of convergence of the model. The entire framework was implemented in Python using the Keras and TensorFlow libraries [8, 9]. Training was performed on 2400 slices from a total of 20 patients; 5 cases (600 slices) separate from the training set were used for testing. Predicted slices were reconstituted to form the whole pCT volume.
pCT evaluation: pCT-GAN was evaluated visually and by comparing histogram overlap in the bone regions (HU > 100) against the reference CTAC and pCT-ZeDD [4].
MRAC map computation: Since the focus is on continuous-density bone prediction, pCT regions with HU > 100 were extracted and pasted into the soft-tissue MRAC map [1], and the resulting hybrid bone-MRAC map was used for PET reconstruction (see the bone-insertion sketch below).
PET-AC evaluation: PET image reconstruction was performed offline using the petrecon toolbox v1.26 (GE Healthcare, Chicago, IL, USA) with standard parameter settings (2 iterations, 28 subsets, point-spread-function kernel, 3.0 mm full-width-at-half-maximum (FWHM) in-plane Gaussian filter followed by axial filtering with a three-slice 1:4:1 kernel). Lesion standardized uptake values (SUV) in the PET images serve as the comparison metric for reconstruction quality.
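The following Keras sketch is consistent with the layer counts described for the generator and discriminator; filter counts, kernel sizes, the skip-connection realization of the merge layer, and the placement of the final 1x1 convolution are assumptions, and the full CycleGAN training loop is omitted.

```python
# Minimal sketch of the generator (5 ReLU conv layers, merge layer, LeakyReLU
# output) and discriminator (4 ReLU conv layers, sigmoid output) blocks.
# Shapes, filter counts, and the concatenation-based merge are assumptions.
from tensorflow.keras import layers, Model

def build_generator(shape=(256, 256, 2)):        # In-phase + Fat channels
    inp = layers.Input(shape)
    x = inp
    for filters in (64, 128, 256, 128, 64):      # 5 convolution layers, ReLU
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Concatenate()([x, inp])           # merge layer (skip connection)
    out = layers.Conv2D(1, 1, padding="same")(x) # single-channel pCT slice
    out = layers.LeakyReLU(0.2)(out)             # LeakyReLU at the last layer
    return Model(inp, out, name="pct_generator")

def build_discriminator(shape=(256, 256, 1)):
    inp = layers.Input(shape)
    x = inp
    for filters in (64, 128, 256, 512):          # 4 convolution layers, ReLU
        x = layers.Conv2D(filters, 3, strides=2, padding="same",
                          activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # real/fake probability
    return Model(inp, out, name="pct_discriminator")
```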
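The bone-insertion step can be sketched as below: pCT voxels above the 100 HU bone threshold are converted to 511 keV linear attenuation coefficients and pasted into the soft-tissue MRAC map. The bilinear HU-to-mu conversion shown is a generic placeholder and not necessarily the scaling used by the reconstruction toolbox.

```python
# Sketch of the hybrid bone-MRAC map construction. The HU-to-mu conversion
# is an illustrative bilinear model, labeled as an assumption.
import numpy as np

def hu_to_mu_511kev(hu):
    """Illustrative bilinear HU -> linear attenuation (cm^-1) conversion."""
    mu_water = 0.096                      # approx. mu of water at 511 keV
    mu = np.where(hu < 0,
                  mu_water * (1.0 + hu / 1000.0),
                  mu_water + hu * 5.0e-5)
    return np.clip(mu, 0.0, None)

def build_hybrid_mrac(pct_hu, mrac_soft_tissue, bone_threshold_hu=100):
    """Paste continuous-density bone from the pCT into the soft-tissue MRAC."""
    bone_mask = pct_hu > bone_threshold_hu
    hybrid = mrac_soft_tissue.copy()
    hybrid[bone_mask] = hu_to_mu_511kev(pct_hu[bone_mask])
    return hybrid
```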