Supervised deep learning (DL) methods for MRI reconstruction are promising due to their improved reconstruction quality compared with traditional approaches. However, current DL methods do not utilise anatomical features, a potentially useful prior for regularising the network. This preliminary work presents a 3D CNN-based training framework that incorporates learning of an anatomical prior to enhance the model's generalisation and its stability to perturbation. Preliminary results on single-channel HCP, unseen pathological HCP and IXI volumetric data (effective R=16) suggest that the framework can achieve high acceleration while remaining robust to unseen anomalous data and to data acquired on different MRI systems.
Figure 1. Illustration of the proposed anatomy-driven 3D DL framework. The data-processing steps for the network inputs differ between the two modes. During training (a), the original input is first elastically deformed by k random deformation fields, yielding k perturbed inputs plus the undeformed original. Synthetic phase and retrospective undersampling are then applied, and the real and imaginary components are treated as separate input channels. For inference (b), the complex input is replicated k times, giving k+1 network inputs in total.
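The retrospective undersampling step above can be sketched as follows. This is only a plausible stand-in: the abstract does not specify the mask design, so `retrospective_undersample`, its variable-density-style random mask along one phase-encode axis, and the small fully-sampled centre region are all illustrative assumptions.

```python
import numpy as np

def retrospective_undersample(volume, accel=16, center_frac=0.04, seed=0):
    """Illustrative retrospective undersampling of a complex 3D volume.

    Randomly keeps roughly 1/accel of the phase-encode lines (axis 1),
    always retaining a small fully-sampled centre; the exact mask used
    in this work is not stated, so this is only a generic example.
    """
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fftn(volume))
    ny = volume.shape[1]
    # Always keep a small fully-sampled centre region.
    n_center = max(1, int(center_frac * ny))
    centre = slice((ny - n_center) // 2, (ny + n_center) // 2)
    # Randomly keep roughly 1/accel of the remaining phase-encode lines.
    keep = rng.random(ny) < (1.0 / accel)
    keep[centre] = True
    mask = np.zeros(kspace.shape)
    mask[:, keep, :] = 1.0
    # Zero-filled reconstruction of the undersampled k-space.
    zero_filled = np.fft.ifftn(np.fft.ifftshift(kspace * mask))
    return zero_filled, keep

vol = np.random.rand(16, 32, 16).astype(np.complex64)
recon, keep = retrospective_undersample(vol)
```

The real and imaginary parts of `recon` would then be split into separate channels, as described for the network inputs.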
Figure 2. Architecture of the proposed 3D model. The real and imaginary components of the complex MR volumes are split into separate channels, giving a total of 2(k+1) input channels. In our model (k=1), two weight-sharing UNet++-based feature extractors extract self-similar features from x_ref and x_k. A spatial attentional fusion block then fuses the feature maps and passes the spatially-modulated fused features to residual groups with channel attention for reconstruction. A learnable data consistency module is placed before the final convolution.
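The data consistency module can be illustrated with a soft data-consistency step, a common formulation in DL reconstruction. The function name `soft_data_consistency` and the fixed `lam` are assumptions; in the actual model the weighting would be the learnable parameter, and the abstract does not give the exact form used.

```python
import numpy as np

def soft_data_consistency(x, y_kspace, mask, lam=10.0):
    """Illustrative soft data-consistency step.

    x        : current complex image estimate
    y_kspace : acquired (undersampled) k-space measurements
    mask     : 1 where k-space was sampled, 0 elsewhere
    lam      : consistency weight (learnable in the actual model)
    """
    k = np.fft.fftn(x)
    # Blend predicted and measured k-space at sampled locations only;
    # unsampled locations keep the network's prediction.
    k_dc = np.where(mask > 0, (k + lam * y_kspace) / (1.0 + lam), k)
    return np.fft.ifftn(k_dc)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8, 8)) + 1j * rng.standard_normal((8, 8, 8))
y_meas = np.fft.fftn(rng.standard_normal((8, 8, 8)))
samp = (rng.random((8, 8, 8)) < 0.25).astype(float)
x_dc = soft_data_consistency(x0, y_meas, samp, lam=1e8)
```

As `lam` grows large, the sampled k-space locations are replaced by the measurements; as it shrinks, the network's prediction dominates.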
Figure 3. Reconstructions on two HCP single-channel volume test data (R=16). (a) Three axial slices are shown from two 3D volumes. PSNR and SSIM values for the whole volumes are given in the bottom right of the error maps. (b) Sagittal and coronal views of the reconstruction for Volume 1 in (a). Top rows show the extracted fully-sampled slices from the corresponding 3D volume. Middle and bottom rows show the magnitude images reconstructed by the proposed method and the error maps, respectively. The proposed method successfully removes undersampling artefacts and reproduces coherent 3D structures.
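The whole-volume PSNR reported in the figures can be computed as below. The abstract does not state the normalisation convention, so using the reference volume's peak value is an assumption; SSIM would typically follow Wang et al.'s formulation and is omitted here for brevity.

```python
import numpy as np

def volume_psnr(reference, estimate):
    """PSNR over a whole 3D magnitude volume.

    Uses the reference volume's peak value as the signal maximum; this
    is one common convention, not necessarily the one used in this work.
    """
    ref = np.abs(reference).astype(np.float64)
    est = np.abs(estimate).astype(np.float64)
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

For example, a uniform volume of ones against an estimate of 0.9 everywhere gives an MSE of 0.01 and hence a PSNR of 20 dB.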
Figure 4. Reconstruction results on hold-out pathological HCP single-channel volume data (R=16). The top row shows the extracted fully-sampled axial slices from the corresponding 3D volume. The middle and bottom rows show the magnitude images reconstructed by the proposed method and the error maps, respectively. PSNR and SSIM values are given below the error maps. Pathologies are indicated by orange arrows. The proposed method reconstructs the anomalies faithfully despite the high acceleration factor, suggesting efficient use of anatomical features.
Figure 5. Results on out-of-domain single-channel IXI volume data (R=16). The top and bottom halves present reconstructions of data acquired on a Philips 1.5T system and a 3T system, respectively. Top rows show the fully-sampled axial slices from 4 distinct volumes. Bottom rows show the images reconstructed by the proposed method. Although contrast changes (towards the HCP data) and hallucinations are notable, no residual undersampling artefacts are visible in the reconstructions, and major anatomical landmarks are placed appropriately, indicating some robustness to feature distribution shifts.