Learning-based Reconstruction using Artificial Neural Network for Higher Acceleration
Kinam Kwon1, Dongchan Kim1, Hyunseok Seo1, Jaejin Cho1, Byungjai Kim1, and HyunWook Park1

1KAIST, Daejeon, Republic of Korea

Synopsis

A long imaging time has been regarded as a major drawback of MRI, and many techniques have been proposed to overcome this problem. Parallel imaging (PI) and compressed sensing (CS) techniques remove the aliasing artifacts generated by subsampling by exploiting, respectively, the distinct sensitivities of multi-channel RF coils and the sparsity of the signal in a certain domain. In this study, an artificial neural network (ANN) is applied to MR reconstruction to reduce imaging time, and it is shown that the ANN model has the potential to be comparable to PI and CS.

Introduction

A long imaging time has been regarded as a major drawback of MRI, and many techniques have been proposed to overcome this problem. Parallel imaging (PI) and compressed sensing (CS) techniques remove the aliasing artifacts generated by subsampling by exploiting, respectively, the distinct sensitivities of multi-channel RF coils and the sparsity of the signal in a certain domain. In addition, PI and CS have been combined to further reduce imaging time.[1] Meanwhile, artificial neural networks (ANNs) have been applied to various fields and have shown superior performance by exploiting deep architectures and large databases. In this study, an ANN model is applied to MR reconstruction to reduce imaging time, and it is shown that the ANN model has the potential to be comparable to PI and CS.

Materials and Methods

Three brain imaging sequences that are commonly used in clinical practice were considered: T1-weighted (T1w), T2-weighted (T2w), and fluid-attenuated inversion recovery (FLAIR). For these three sequences, brain MR images from 12 subjects were acquired on a Siemens Magnetom Verio 3T system. All three sequences were fully sampled with 216 phase-encoding lines and 384 readout points, and the experimental data were retrospectively subsampled to build the database. Fig. 1 shows a schematic diagram of the proposed method. Learning and reconstruction are processed line by line because the aliasing artifacts from subsampling spread along the phase-encoding direction. The aliased image was divided into real and imaginary parts, which were used as inputs to the ANN model. Sensitivity maps were estimated from 16 center lines using the ESPIRiT algorithm.[2] Likewise, the sensitivity maps were divided into real and imaginary parts and used as inputs to the model. The desired outputs were computed as \[I_{d}(x,y)=\left|\sum_{c=1}^{N}S_{c}^{*}(x,y)\,I_{c}(x,y)\right|,\] where $I_{d}$, $I_{c}$, and $S_{c}$ are the desired combined image, the image obtained from channel $c$, and the sensitivity map of channel $c$, respectively, and $N$ is the number of channels. In this study, four channels were used to reconstruct the images. Two models were trained, one for each of two subsampling patterns with the same acceleration factor (R=2.6024). The proposed method was implemented using the well-known Caffe package.[3] The ANN architecture was (216×16)-(864)-(864)-(864)-(216×1): the 216×16 input matrix consists of the real and imaginary parts of one line of the aliased images and of the sensitivity maps for the four channels, the 216×1 output is the corresponding combined line defined above, and each of the three hidden layers has 864 nodes. Neighboring layers were fully connected, and the rectified linear unit was used as the activation function. A total of 648 image slices of size 216×384 were used to train the model, and aliased images that were not used for training were used to test the performance of the learned model. Hyperparameters such as the learning rate, the number of iterations, and the weight initialization were selected heuristically.
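
As a rough illustration of how the training pairs and the network above fit together, the following NumPy sketch builds one input/target pair for a single readout position and defines the feedforward pass of the (216×16)-(864)-(864)-(864)-(216×1) network. The array names, the flattening order, and the weight initialization are illustrative assumptions and do not reproduce the authors' Caffe implementation.

# Illustrative sketch (NumPy), not the authors' Caffe implementation.
# Assumed shapes from the text: 216 phase-encoding points, 384 readout
# points, 4 coil channels; the network processes one readout position
# (one line along the phase-encoding direction) at a time.
import numpy as np

NY, NX, NC = 216, 384, 4  # phase encoding, readout, channels

def make_training_pair(aliased_img, sens_maps, full_img, x):
    # aliased_img, sens_maps, full_img: complex arrays of shape (NC, NY, NX).
    # Input: real and imaginary parts of the aliased lines and the
    # sensitivity-map lines of all channels at readout position x (216x16).
    a = aliased_img[:, :, x]                                         # (NC, NY)
    s = sens_maps[:, :, x]                                           # (NC, NY)
    inp = np.concatenate([a.real, a.imag, s.real, s.imag], axis=0)   # (16, NY)
    # Target: magnitude of the sensitivity-weighted combination of the
    # fully sampled lines, i.e. the desired combined line I_d.
    target = np.abs(np.sum(np.conj(s) * full_img[:, :, x], axis=0))  # (NY,)
    return inp.T.reshape(-1), target

def relu(z):
    return np.maximum(z, 0.0)

def forward(params, inp):
    # Feedforward pass of the fully connected (216x16)-864-864-864-216 net.
    h = inp
    for W, b in params[:-1]:
        h = relu(W @ h + b)
    W, b = params[-1]
    return W @ h + b                                                 # (NY,)

# Random initialization of the fully connected layers (for illustration only;
# the actual initialization was chosen heuristically by the authors).
sizes = [NY * 16, 864, 864, 864, NY]
rng = np.random.default_rng(0)
params = [(rng.standard_normal((m, n)) * np.sqrt(2.0 / n), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]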

Results

Figs. 2 and 3 show T2w images reconstructed by the proposed method and by SPIRiT[1] when the regular subsampling pattern of Fig. 2c and the irregular subsampling pattern of Fig. 3c are used, respectively. In Figs. 2d-e, the visible errors of the proposed method lie mainly at edges, whereas those of SPIRiT are distributed in the center region, where the differences between the coil sensitivity maps are small. Both methods remove the aliasing artifacts, but the image reconstructed by the proposed method looks somewhat blurred, whereas that from SPIRiT looks noisy. In Fig. 3b, the image reconstructed by SPIRiT becomes worse when the irregular subsampling pattern is used, even though the number of sampled lines is the same as in Fig. 2b. In contrast, the proposed method removes the aliasing artifacts and suppresses noise well (Fig. 3a).

Discussion and Conclusion

The proposed method utilizes an ANN model. The database used to learn the model carries rich information that can be exploited to remove the aliasing artifacts and noise of subsampled images. The ANN takes the coil sensitivity maps and the subsampled data as inputs and learns the relation between subsampled and fully sampled data as prior information. Learning-based methods depend on the database, and it is difficult to collect sufficiently large datasets. Although a relatively small dataset was used in this study, the proposed method reconstructed the subsampled images well. Reconstruction requires only a feedforward operation of the ANN, so the reconstruction time is very short. The proposed method can also reconstruct an image from irregularly subsampled k-space data, which could be useful for dynamic imaging, where sampling differs across motion phases. In future work, a larger database will be used to improve performance and to correct other MR artifacts such as the EPI ghost artifact.
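
To make the remark about reconstruction cost concrete, a minimal sketch of the inference step is given below, reusing the hypothetical forward() and array conventions from the sketch in the Methods section; the whole image is recovered with one feedforward pass per readout column.

def reconstruct(params, aliased_img, sens_maps):
    # Reconstruct a 216x384 image column by column with the trained network
    # (one feedforward pass per readout position).
    recon = np.zeros((NY, NX))
    for x in range(NX):
        a = aliased_img[:, :, x]
        s = sens_maps[:, :, x]
        inp = np.concatenate([a.real, a.imag, s.real, s.imag], axis=0)
        recon[:, x] = forward(params, inp.T.reshape(-1))
    return recon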

Acknowledgements

This research was partly supported by the Brain Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2014M3C7033999) and by the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI14C1135).

References

1. Lustig M, et al. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction From Arbitrary k-Space. MRM. 2010;64:457-471.
2. Uecker M, et al. ESPIRiT: An Eigenvalue Approach to Autocalibrating Parallel MRI: Where SENSE Meets GRAPPA. MRM. 2014;71:990-1001.
3. Jia Y, et al. Caffe: Convolutional Architecture for Fast Feature Embedding. In: ACM Multimedia. 2014;675-678.

Figures

Fig. 1. Schematic diagram of the proposed method. Colored circles represent the voxels of the input and output images, and gray circles represent the nodes of the hidden layers of the ANN.

Fig. 2. Reconstructed T2w images from (a) the proposed method and (b) SPIRiT with (c) the regular subsampling pattern (R=2.6024); (d), (e) corresponding difference images between the fully sampled reference image and the reconstructions in (a) and (b), respectively; (f) the fully sampled reference image.

Fig. 3. Reconstructed T2w images from (a) the proposed method and (b) SPIRiT with (c) the irregular subsampling pattern (R=2.6024); (d), (e) corresponding difference images between the fully sampled reference image and the reconstructions in (a) and (b), respectively.


