
AUTOMAP Image Reconstruction of Ultra-Low Field Human Brain MR Data
Neha Koonjoo 1,2,3, Bo Zhu1,2,3, Matthew Christensen1,2, John E. Kirsch1,2, and Matthew S Rosen1,2,3

1Department of Radiology, A.A Martinos Biomedical Imaging Center / MGH, Charlestown, MA, United States, 2Harvard Medical School, Boston, MA, United States, 3Department of Physics, Harvard University, Cambridge, MA, United States

Synopsis

Due to very low Boltzmann polarization, MR images acquired at ultra-low field (ULF) require significant signal averaging to overcome low signal-to-noise ratio, which results in long scan times. Here, we apply the deep neural network image reconstruction technique AUTOMAP (Automated Transform by Manifold Approximation) to 50% under-sampled, low-SNR in vivo datasets acquired at 6.5 mT. The performance of AUTOMAP on these data was compared to the conventional 3D Inverse Fast Fourier Transform (IFFT). The AUTOMAP reconstructions show a significant improvement in image quality and SNR.

Introduction

MR imaging at ultra-low field (ULF) suffers from low signal-to-noise ratio (SNR) due to intrinsically low Boltzmann polarization. As a result, long acquisition times are needed to accommodate the signal averaging required to attain sufficient SNR. We recently developed a noise-robust image reconstruction approach based on data-driven learning of the low-dimensional manifold representations of real-world data, implemented with a deep neural network architecture. AUTOMAP (Automated Transform by Manifold Approximation)1 is an end-to-end, automated, k-space-to-image-space generalized reconstruction framework that learns a highly parameterized image reconstruction function optimized over a corpus of training data and exhibits noise robustness. Here, the performance of AUTOMAP on under-sampled 3D in vivo brain MR data acquired at 6.5 mT was compared with that of the conventional 3D Inverse Fast Fourier Transform (3D-IFFT) reconstruction.

Materials and Methods

Training set: The training corpus was assembled from 50,000 2D T1-weighted brain MR images selected from the MGH-USC Human Connectome Project (HCP)2 public database. The images were cropped to 256×256, subsampled to 75×64, symmetrically tiled to create translational invariance, and finally normalized to the maximum intensity of the data. To produce the corresponding k-space representations for training, each image was Fourier transformed with MATLAB's native 2D FFT function and then multiplied by the same under-sampling pattern used for the ULF dataset, as sketched below.
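For illustration, a minimal Python sketch of building one such training pair is given below (using NumPy and scikit-image in place of MATLAB; the symmetric tiling step is omitted for brevity, and `image_256` and `undersampling_mask` are hypothetical inputs rather than the exact pipeline used here):

```python
# Minimal sketch of one training pair: resize an HCP brain image to the
# acquisition matrix, normalize, transform to k-space, and apply the same
# under-sampling mask as the ULF acquisition.
import numpy as np
from skimage.transform import resize

def make_training_pair(image_256, undersampling_mask):
    # image_256: 256x256 T1-weighted magnitude image cropped from the HCP corpus
    # undersampling_mask: 75x64 binary mask matching the ULF sampling pattern
    img = resize(image_256, (75, 64), anti_aliasing=True)     # match the acquisition matrix
    img = img / img.max()                                      # normalize to maximum intensity
    k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))    # centered 2D FFT to k-space
    k_under = k * undersampling_mask                           # retrospective 50% under-sampling
    return k_under, img                                        # (network input, training target)
```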

Architecture of NN: The network was trained to learn an optimal feed-forward reconstruction from the k-space domain to the image domain. The real and imaginary parts of the datasets were trained separately. The network, described in Figure 1, was composed of 3 fully connected layers (an input layer and 2 hidden layers) of dimension n²×1, each activated by the hyperbolic tangent function. The output of the 3rd layer was reshaped to n×n for convolutional processing. Two convolutional layers each applied 128 filters of size 3×3 with stride 1, each followed by a rectifier nonlinearity. The last convolutional layer was then deconvolved into the output layer with 64 filters of size 3×3 and stride 1. The output layer yielded either the reconstructed real or the imaginary component of the image.
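A minimal PyTorch sketch of this type of architecture is shown below. It assumes, for simplicity, a square n×n input (the acquired matrix here is 75×64) and collapses the final 64-filter deconvolution to a single output channel; it is an illustration of the layer structure described above, not the exact trained model:

```python
# Illustrative AUTOMAP-style network: 3 fully connected layers with tanh,
# reshaping to an image grid, two 128-filter 3x3 convolutions with ReLU,
# and a final deconvolution producing the real or imaginary image component.
import torch
import torch.nn as nn

class AutomapLikeNet(nn.Module):
    def __init__(self, n=64):
        super().__init__()
        self.n = n
        self.fc1 = nn.Linear(n * n, n * n)   # input layer
        self.fc2 = nn.Linear(n * n, n * n)   # hidden layer 1
        self.fc3 = nn.Linear(n * n, n * n)   # hidden layer 2
        self.conv1 = nn.Conv2d(1, 128, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        # The abstract describes a 64-filter deconvolution; a single output
        # channel is used here so the sketch returns one image component.
        self.deconv = nn.ConvTranspose2d(128, 1, kernel_size=3, stride=1, padding=1)

    def forward(self, k_flat):               # k_flat: (batch, n*n) real or imaginary k-space
        x = torch.tanh(self.fc1(k_flat))
        x = torch.tanh(self.fc2(x))
        x = torch.tanh(self.fc3(x))
        x = x.view(-1, 1, self.n, self.n)    # reshape FC3 output for convolutional processing
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.deconv(x)                # reconstructed real or imaginary image component
```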

Data Acquisition: A single-channel spiral volume head coil3 was used to acquire 3D human brain data at 6.5 mT. A 3D balanced steady-state free precession (b-SSFP)4 sequence was used with the following parameters: TR = 31 ms, matrix size = 64 × 75 × 15, spatial resolution = 2.5 mm × 3.5 mm × 8 mm, with 50% under-sampling along both the phase-encode and slice directions. Two in vivo datasets were collected: 1) a 6-min scan with number of averages (NA) = 30, and 2) a 35-min scan with NA = 160.
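For reference, the acquisition parameters listed above can be summarized as a simple configuration (values taken directly from the text; the ordering of the matrix dimensions is not specified in the abstract):

```python
# Summary of the 6.5 mT b-SSFP acquisition parameters described above.
bssfp_params = {
    "B0_mT": 6.5,
    "TR_ms": 31,
    "matrix_size": (64, 75, 15),              # as listed; axis assignment not specified
    "resolution_mm": (2.5, 3.5, 8.0),
    "undersampling_fraction": 0.5,            # along phase-encode and slice directions
    "scans": {"6_min": {"NA": 30}, "35_min": {"NA": 160}},
}
```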

Image Reconstruction: The raw in vivo data for each slice were stacked and reconstructed with either AUTOMAP or IFFT. Due to memory limitations of the AUTOMAP network architecture, a 1D FFT was first applied explicitly along the partition direction of the 3D k-space, after which AUTOMAP operated on the resulting hybrid-space data slice-by-slice.
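A minimal NumPy sketch of this hybrid-space decomposition is shown below, assuming a centered inverse FFT along the partition axis and a placeholder `automap_2d` for the trained per-slice reconstruction (a 2D IFFT could be substituted for the conventional comparison):

```python
# Decompose the 3D k-space into hybrid space (image-domain along the partition
# direction, k-space in-plane), then reconstruct each plane independently.
import numpy as np

def reconstruct_hybrid(kspace_3d, automap_2d):
    # kspace_3d: complex array of shape (n_x, n_y, n_partitions)
    hybrid = np.fft.fftshift(
        np.fft.ifft(np.fft.ifftshift(kspace_3d, axes=2), axis=2), axes=2
    )
    # Apply the 2D reconstruction (AUTOMAP or IFFT) slice-by-slice
    slices = [automap_2d(hybrid[:, :, z]) for z in range(hybrid.shape[2])]
    return np.stack(slices, axis=2)
```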

Image Analysis: The signal magnitude of each dataset was normalized to unity to enable a fair comparison between the two reconstruction methods. SNR was then computed by dividing the signal magnitude by the standard deviation of the noise. Error maps were computed using the 35-min scan as the reference image. Image quality was evaluated using RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure).
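A minimal Python sketch of these metrics is given below (NumPy and scikit-image); the signal and noise masks are illustrative assumptions rather than the ROIs used in the study:

```python
# Image quality metrics: SNR from signal mean over noise standard deviation,
# plus RMSE, PSNR and SSIM against the 35-min reference scan.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(img, ref, signal_mask, noise_mask):
    img = img / img.max()                    # normalize signal magnitude to unity
    ref = ref / ref.max()
    snr = img[signal_mask].mean() / img[noise_mask].std()
    rmse = np.sqrt(np.mean((img - ref) ** 2))
    psnr = peak_signal_noise_ratio(ref, img, data_range=1.0)
    ssim = structural_similarity(ref, img, data_range=1.0)
    return {"SNR": snr, "RMSE": rmse, "PSNR": psnr, "SSIM": ssim}
```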

Results

Figure 2A shows the reconstruction of the 6-min in vivo dataset with AUTOMAP compared to the 3D-IFFT. A significant improvement in image quality can be observed. The 6-min scan was then compared to the longer 35-min scan. Error maps (Figure 2B) are significantly better with AUTOMAP reconstruction, with a 3-fold improvement in RMSE. The SSIM map and SSIM value (Figure 2C and D) for the AUTOMAP-reconstructed 6-min scan are higher than those of the 3D-IFFT-reconstructed 6-min image. Despite no significant change in PSNR and SNR, the visual image quality is markedly improved.

Discussion and Conclusion

The reconstruction performance of AUTOMAP on 50% under-sampled, low-SNR data demonstrates the robustness of the technique and its high immunity to noise in the ULF regime. The significant improvement in image quality should enable substantial gains in quantitative MRI at ULF. Future work will include applying AUTOMAP to more aggressive under-sampling patterns and to true (non-hybrid) 3D acquisitions, where noise robustness is expected to be even greater because the low-dimensional features present in the third dimension will further constrain the reconstruction problem.

Acknowledgements

No acknowledgement found.

References

1. Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature 555, 487–492 (2018).

2. Fan Q, et al. MGH-USC Human Connectome Project datasets with ultra-high b-value diffusion MRI. NeuroImage 124, 1108–1114 (2016).

3. LaPierre CD, Sarracanie M, Waddington DEJ, Rosen MS. A single channel spiral volume coil for in vivo imaging of the whole human brain at 6.5 mT. Proc. Intl. Soc. Mag. Reson. Med. 23 (2015) 5902.

4. Sarracanie M, LaPierre CD, Salameh N, Waddington DEJ, Witzel T, Rosen MS. Low-Cost High-Performance MRI. Scientific Reports 5, 15177 (2015).

Figures

Figure 1: Description of the neural network: a) An optimal one-to-one mapping of the sensor domain (here, k-space) onto the image domain is learned using supervised learning. The training process learns a robust low-dimensional joint manifold 𝒳×𝒴 conditioned by the reconstruction function f(x) = φ_y ∘ g ∘ φ_x⁻¹(x); b) AUTOMAP is implemented with a deep neural network architecture composed of 3 fully connected layers (FC1 to FC3) with hyperbolic tangent activations, followed by a convolutional autoencoder (FC3 to Image) with rectifier nonlinearity activations (figure adapted from Ref. 1).

Figure 2: Assessment of the image quality of the 6-min scan versus the 35-min scan. A) A slice extracted from the 6-min scan (left) is shown next to the same slice from the 35-min scan (right), with the AUTOMAP reconstruction in the upper row and the 3D-IFFT reconstruction in the lower row; all images are displayed with the same normalization and window level. B) Error maps computed with the 35-min scan as the reference image. C) Structural similarity index measure (SSIM) maps; the AUTOMAP-reconstructed 6-min scan scores higher than the 3D-IFFT reconstruction. D) Table summarizing the image quality metrics.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)
4780