2977

Smart BrainQuant: Ten high quality 3D clinically meaningful contrasts/maps in 6 min on 1.5T using DNN
Mingliang Chen1, Aiqi Sun1, Wei Xu1, Xingxing Zhang1, Ruibo Song1, Lu Han1, Dong Han1, and Feng Huang1

1Neusoft Medical Systems, Shanghai, China

Synopsis

Multi-contrast MR imaging is necessary for clinical diagnosis, but conventional methods require long acquisition times, which impedes their clinical application. Last year, we proposed a method to acquire 12 high quality contrasts/maps at 0.67×1.33×2.7 mm3 in 8 minutes on 1.5T [1]. In this work, a novel technique called Smart BrainQuant, based on deep learning with complete data fidelity, is proposed to further reduce scan time and improve image quality. Feasibility experiments demonstrate that ten 3D contrasts/maps at 0.67×0.89×2.7 mm3 can be acquired in less than 6 minutes on 1.5T with image quality similar to that of images reconstructed from the fully acquired data.

Introduction

Many contrasts are necessary in clinical MRI, both qualitative and quantitative, and traditionally they require long acquisition times. Last year, we published a 3D brain imaging method [1] that acquires twelve 3D brain images with high resolution, good SNR, and co-registration in one single 8-min scan on 1.5T. To further accelerate the acquisition and enhance image quality, a technique called Smart BrainQuant is proposed, which uses deep learning with complete data fidelity reconstruction and the same post-processing as our previous work. Preliminary results demonstrate that ten 3D contrasts/maps at 0.67×0.89×2.7 mm3 resolution can be acquired in 5 min 40 s on 1.5T using the proposed technique. Moreover, the image quality is comparable to that of images reconstructed from fully acquired data, which takes about 24 min.

Methods

Even though partially parallel imaging [2] (PPI) is good at unwrapping the folded image, it results in an increased noise level and/or residual aliasing artifacts. Deep neural networks (DNNs), on the other hand, are powerful tools for image denoising and the reduction of patterned artifacts, but unwrapping aliasing is non-trivial for them. It is therefore natural to combine PPI with a DNN so that the two compensate for each other. In DNN-based reconstruction, data fidelity has previously been applied in k-space to improve the final performance. To achieve better data fidelity, a complete data fidelity scheme is proposed that uses all available prior information in both k-space and image space, including the acquired k-space data, coil sensitivity maps, and pre-scanned images [3]. Fig. 1(a) shows the reconstruction framework of the proposed Smart BrainQuant: the under-sampled k-space is first reconstructed by conventional methods and then by the DNN, after which the DNN outputs are further enhanced with all available prior information to secure data fidelity. Fig. 1(b) illustrates the acquisition scheme of the simulated undersampling.
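The k-space part of such a data-fidelity step can be illustrated with a minimal sketch: the DNN output is transformed to k-space, the sampled locations are overwritten with the measured values, and the result is transformed back. This is a hand-written single-coil 2D illustration of the general idea, not the authors' implementation; the function and array names are assumptions.

```python
import numpy as np

def kspace_fidelity(dnn_image, acquired_kspace, sampling_mask):
    """Enforce consistency with the measured data: transform the DNN
    output to k-space, replace the values at sampled locations with the
    acquired samples, and transform back (single-coil 2D sketch)."""
    k_est = np.fft.fft2(dnn_image)
    k_fixed = np.where(sampling_mask, acquired_kspace, k_est)
    return np.fft.ifft2(k_fixed)
```

The complete data-fidelity scheme described above additionally brings in image-space priors (coil sensitivities, pre-scanned images); this fragment shows only the k-space projection.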

Smart BrainQuant was compared to methods with and without k-space data fidelity to demonstrate the advantage of using both k-space and image-space information for complete data fidelity. Conventional methods, namely Partial Fourier + SENSE and Self-Feeding Sparse SENSE [5], were also used for comparison. Fourteen sets of 3D data were acquired on a 1.5T MR scanner (NSM S15P, Neusoft Medical Systems, China) using an 8-channel head coil for network training, and 2 sets for testing. A simulated net acceleration factor of 4.2 was achieved with Partial Fourier (1.39) × SENSE (3). Due to the axial acquisition and the coil geometry, PPI acceleration was applied along only one direction.
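The quoted net factor can be sketched as a 1-D phase-encoding sampling mask that combines uniform SENSE undersampling with partial-Fourier truncation. The matrix size below is an illustrative assumption, not the acquisition's actual value; the product 1.39 × 3 ≈ 4.2 holds regardless.

```python
import numpy as np

def pe_mask(n_pe=216, sense_r=3, pf_factor=1.39):
    """1-D phase-encoding mask: keep every sense_r-th line (uniform
    SENSE), then drop the trailing lines skipped by partial Fourier."""
    mask = np.zeros(n_pe, dtype=bool)
    mask[::sense_r] = True                        # SENSE, R = 3
    mask[int(round(n_pe / pf_factor)):] = False   # partial Fourier 1.39
    return mask

mask = pe_mask()
net_r = mask.size / mask.sum()   # close to the quoted net factor of 4.2
```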

Results

Fig. 2 shows the visual performance of proton density weighted (PDW) images from two categories of methods, those without and with DNN. Compared to the first category (SENSE + PF, Self-Feeding Sparse SENSE), the second category achieved much higher SNR and fewer artifacts. Among the methods in category 2, the complete data fidelity scheme preserved the contrast and image sharpness better than the method without data fidelity, as shown in Fig. 2(iii), and demonstrated a greater capacity for reducing aliasing artifacts than the method with only k-space fidelity, as indicated by the yellow arrows in Fig. 2. Table 1 presents the quantitative comparison, which indicates that the proposed method obtained a more accurate reconstruction. Fig. 3 presents the comparisons of 10 high quality clinically meaningful images between the ground truth and the reconstructions using the proposed method.
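The table's error metrics can be reproduced in spirit with a plain RMSE and a single-window SSIM. Note the global SSIM below is a simplification; the standard metric uses a sliding Gaussian window, so treat this sketch as an approximation rather than the abstract's exact computation.

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between a reference and a reconstruction."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def global_ssim(ref, img, data_range=1.0):
    """SSIM evaluated over the whole image as a single window
    (a simplification of the usual windowed SSIM)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2))
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```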

Discussion

As shown in Figs. 2 and 3, the proposed method achieves much better image quality than the other methods used for comparison, and the image quality (SNR, CNR, contrast, sharpness) is comparable to that of images reconstructed from the fully acquired data. There are three reasons the proposed method yields high image quality even though the acceleration factor along one direction is as high as 4.2. First, compared to the conventional methods (SENSE + PF, Self-Feeding Sparse SENSE), the proposed method exploits a DNN to achieve lower noise and artifact levels, as shown in Fig. 2 and Table 1. Second, compared to the methods without data fidelity and with only k-space data fidelity, the proposed method uses all available prior information and hence achieves improved image quality without iteration, as given in Fig. 2 and Table 1. Third, compared to DNN-only methods, the proposed method combines the merits of conventional methods and provides a good initialization for the DNN, which makes the network easier to train.

Conclusion

With the sequential combination of conventional methods and deep learning with complete data fidelity, Smart BrainQuant can acquire ten high-resolution 3D contrasts/maps in less than 6 min on 1.5T, with image quality comparable to that of images reconstructed from the fully acquired data.

Acknowledgements

No acknowledgement found.

References

[1] Sun A, et al. ISMRM, 2018, p. 319

[2] Ying L, et al. IEEE Signal Processing Magazine, 2010;27(4):90-98

[3] Lin FH, et al. MRM, doi: 10.1002/mrm.10718

[4] Ronneberger O, et al. MICCAI, 2015:234-241

[5] Huang F, et al. MRM, doi: 10.1002/mrm.22504

Figures

Fig. 1. (a) The diagram of the proposed method. U-Net [4] is adopted as the network structure. (b) The undersampling acquisition scheme of k-space.

Fig. 2. Representative results and their zoomed images.

Fig. 3. Comparisons between the ground truth and the reconstructions using the proposed method with acceleration factor 4.2.

Table 1. The RMSE and SSIM measures of the reconstructed images

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)