Deep transform networks for scalable learning of MR reconstruction
Anatole Moreau1,2, Florent Gbelidji1,3, Boris Mailhe1, Simon Arberet1, Xiao Chen1, Marcel Dominik Nickel4, Berthold Kiefer4, and Mariappan Nadar1

1Digital Services, Digital Technology & Innovation, Siemens Medical Solutions, Princeton, NJ, United States, 2EPITA, Le Kremlin-BicĂȘtre, France, 3CentraleSupĂ©lec, Gif-sur-Yvette, France, 4Siemens Healthcare, Application Development, Erlangen, Germany

Synopsis

In this work we introduce RadixNet, a fast, scalable transform network architecture based on the Cooley-Tukey FFT, and use it in a fully-learnt iterative reconstruction with a residual dense U-Net image regularizer. Results show that fast transform networks can be trained at 256x256 dimensions and outperform the FFT in reconstruction PSNR.

Purpose

The goal of this research is to propose a low-complexity approach to fully-learnt MR reconstruction. Many methods proposed so far have been inspired by iterative reconstruction algorithms [1], [2]. These methods learn image regularizers and convergence parameters of the iterative algorithm, but data consistency is still enforced using hand-crafted forward and adjoint measurement operators. More recently, a direct fully-learnt reconstruction method was proposed [3], but replacing the FFT with fully-connected layers raises the computational and memory complexity from $$$O(N \log N)$$$ to $$$O(N^2)$$$ in the number of samples. In this work we introduce RadixNet, a fast, scalable transform network architecture based on the Cooley-Tukey FFT, and use it in a fully-learnt iterative reconstruction with a residual dense U-Net image regularizer.

Method

A reconstruction network was trained to reconstruct 2D MR images from under-sampled k-space data. The network structure was chosen to mimic 10 iterations of a proximal gradient algorithm. At each iteration, the network performs a gradient descent step followed by a regularization step, as described by the equation $$$X_n \leftarrow R_n (X_{n-1} +\alpha_n F_n^H(Y-\Pi F_n X_{n-1}))$$$ with $$$X_n$$$ the image reconstructed at the $$$n$$$-th iteration, $$$Y$$$ the under-sampled k-space measurements, $$$\Pi$$$ the fixed under-sampling operator, $$$F_n$$$ and $$$F_n^H$$$ the forward and adjoint measurement operators, $$$\alpha_n$$$ the trainable gradient step size, and $$$R_n$$$ the regularizer. Regularizer networks were implemented as residual dense U-Nets with 2 scales, 8 channels in the hidden layers, and depth 1 at each scale.
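To make the unrolled structure concrete, the following is a minimal PyTorch sketch of the 10-iteration scheme, not the authors' implementation; `make_F`, `make_FH` and `make_R` are hypothetical factories producing the per-iteration operators (FFTs or RadixNets) and regularizer U-Nets, complex-valued tensors are assumed throughout, and the initialization of the image estimate is one plausible choice.

```python
import torch
import torch.nn as nn

class UnrolledRecon(nn.Module):
    """Unrolled proximal gradient: X_n <- R_n(X_{n-1} + a_n * F_n^H(Y - Pi F_n X_{n-1}))."""
    def __init__(self, n_iters, make_F, make_FH, make_R):
        super().__init__()
        self.F = nn.ModuleList([make_F() for _ in range(n_iters)])    # forward operators F_n
        self.FH = nn.ModuleList([make_FH() for _ in range(n_iters)])  # adjoint operators F_n^H
        self.R = nn.ModuleList([make_R() for _ in range(n_iters)])    # regularizer U-Nets R_n
        self.alpha = nn.Parameter(torch.ones(n_iters))                # trainable step sizes (init illustrative)

    def forward(self, y, mask):
        # y: under-sampled k-space, mask: fixed under-sampling operator Pi (binary)
        x = self.FH[0](y)                              # zero-filled initial image (one possible choice)
        for n in range(len(self.F)):
            resid = y - mask * self.F[n](x)            # k-space data residual
            x = x + self.alpha[n] * self.FH[n](resid)  # gradient descent step
            x = self.R[n](x)                           # learnt regularization step
        return x
```

With `n_iters=10` this mirrors the 10 unrolled iterations described above.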

The operators $$$F_n$$$ and $$$F_n^H$$$ were also trained. In order to enable training while maintaining the low complexity of the FFT, a new network architecture named RadixNet is introduced (see Fig. 1). RadixNet is a recursive complex-valued transform network composed of 4 blocks reproducing the structure of the Cooley-Tukey FFT algorithm: a convolution to split the input into even and odd channels, recursive calls to another RadixNet on each decimated channel, a diagonal operator to apply the twiddle factors, and a final convolution to perform the butterfly. Each block can be made deeper and nonlinear. The overall complexity remains $$$O(N \log N)$$$ as long as the split layer does not expand the total data size. RadixNet can be extended to multidimensional operators by splitting the input along all dimensions simultaneously (e.g. a (2, 2) stride and 4 output channels in 2D); a 1D sketch is given below.
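As an illustration, here is a minimal shallow, linear, 1D RadixNet sketch in PyTorch, initialized so that it computes the DFT exactly via the radix-2 Cooley-Tukey identity $$$X_k = E_k + W_N^k O_k$$$, $$$X_{k+N/2} = E_k - W_N^k O_k$$$ with $$$W_N^k = e^{-2\pi i k/N}$$$. The class name and exact parameterization are illustrative, not the authors' code; the paper's operators are 2D, and each block can be made deeper and nonlinear.

```python
import torch
import torch.nn as nn

class RadixNet1D(nn.Module):
    """Learnt radix-2 transform (1D sketch): split -> recurse -> twiddle -> butterfly.
    n must be a power of two >= 2. At initialization this computes the length-n DFT."""
    def __init__(self, n):
        super().__init__()
        self.n = n
        if n > 2:
            self.even = RadixNet1D(n // 2)  # recursive sub-transform on even samples
            self.odd = RadixNet1D(n // 2)   # recursive sub-transform on odd samples
        # learnt 2x2 complex split (init: pure even/odd decimation)
        self.split = nn.Parameter(torch.eye(2, dtype=torch.cfloat))
        # learnt diagonal of twiddle factors (init: exp(-2*pi*i*k/n))
        k = torch.arange(n // 2, dtype=torch.float32)
        self.twiddle = nn.Parameter(torch.exp(-2j * torch.pi * k / n))
        # learnt 2x2 complex butterfly (init: [[1, 1], [1, -1]])
        self.butterfly = nn.Parameter(
            torch.tensor([[1.0, 1.0], [1.0, -1.0]], dtype=torch.cfloat))

    def forward(self, x):  # x: (..., n) complex tensor
        pairs = x.reshape(*x.shape[:-1], self.n // 2, 2)        # group (even, odd) samples
        s = torch.einsum('...kc,dc->...kd', pairs, self.split)  # split convolution
        e, o = s[..., 0], s[..., 1]
        if self.n > 2:
            e, o = self.even(e), self.odd(o)                    # recursive calls
        o = o * self.twiddle                                    # apply twiddle factors
        top = self.butterfly[0, 0] * e + self.butterfly[0, 1] * o  # butterfly
        bot = self.butterfly[1, 0] * e + self.butterfly[1, 1] * o
        return torch.cat([top, bot], dim=-1)
```

At initialization, `RadixNet1D(256)(x)` agrees with `torch.fft.fft(x)` up to float32 round-off, while the split, twiddle and butterfly coefficients all remain trainable.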

Two networks were trained to reconstruct 2D axial T1w brain images from the Human Connectome Project [4]. 900 subjects were used with an 850 training / 50 validation split and 64 slices per subject, for a total of 57600 images of fixed dimensions 256x256. A fixed variable-density sampling pattern with 5x acceleration and front-to-back readout direction was used to generate the input k-space. One network was trained with 2D forward and adjoint FFTs as operators, the other with shallow linear 2D-RadixNets initialized with the Cooley-Tukey coefficients. The weights of the U-Nets were initialized to i.i.d. Gaussian values with 0 mean and standard deviation 1e-3. Training used an $$$L_2$$$ loss and the Adam algorithm with mini-batch size 16 and learning rate 1e-4.
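A condensed sketch of this initialization and training configuration, assuming the PyTorch modules above; `model`, `model.R` and `loader` are hypothetical names tying back to the earlier sketches, and the data pipeline is omitted.

```python
import torch

# `model` is assumed to be an UnrolledRecon from the earlier sketch, with FFT or
# RadixNet operators and residual dense U-Net regularizers in `model.R`.

def init_unet_weights(m):
    # i.i.d. Gaussian initialization, zero mean, std 1e-3, for the U-Net weights
    if isinstance(m, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)):
        torch.nn.init.normal_(m.weight, mean=0.0, std=1e-3)
        if m.bias is not None:
            torch.nn.init.zeros_(m.bias)

for R in model.R:               # regularizer U-Nets only; the operators keep
    R.apply(init_unet_weights)  # their FFT / Cooley-Tukey initialization

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for x_gt, y, mask in loader:    # mini-batches of size 16
    x_hat = model(y, mask)
    # L2 loss; view_as_real handles complex-valued images
    loss = torch.nn.functional.mse_loss(torch.view_as_real(x_hat),
                                        torch.view_as_real(x_gt))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```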

Results

After training, the RadixNet-based reconstruction network outperformed the FFT-based network by 1.2 dB in validation PSNR (36.6 dB vs. 35.4 dB). The evolution of PSNR during training is shown in Fig. 3. Fig. 2 shows a validation example with the ground truth, the zero-filled image, and the outputs of both networks.

Discussion

These early empirical results show that fast transform networks can be trained at 256x256 dimensions and outperform the FFT. This could pave the way for future high-dimension applications. Deeper RadixNets could also enable the reconstruction to correct for imperfections in the forward imaging model, such as trajectory errors or patient motion. Future work will include incorporating parallel imaging and scaling up to 3D.

Disclaimer

The concepts and information presented in this paper are based on research results that are not commercially available.

Acknowledgements

No acknowledgement found.

References

[1] Hammernik, Kerstin, Teresa Klatzer, Erich Kobler, Michael P. Recht, Daniel K. Sodickson, Thomas Pock, and Florian Knoll. "Learning a variational network for reconstruction of accelerated MRI data." Magnetic resonance in medicine 79, no. 6 (2018): 3055-3071.

[2] Mardani, Morteza, Enhao Gong, Joseph Y. Cheng, Shreyas S. Vasanawala, Greg Zaharchuk, Lei Xing, and John M. Pauly. "Deep Generative Adversarial Neural Networks for Compressive Sensing (GANCS) MRI." IEEE transactions on medical imaging (2018).

[3] Zhu, Bo, Jeremiah Z. Liu, Stephen F. Cauley, Bruce R. Rosen, and Matthew S. Rosen. "Image reconstruction by domain-transform manifold learning." Nature 555, no. 7697 (2018): 487-492.

[4] Van Essen, David C., Stephen M. Smith, Deanna M. Barch, Timothy EJ Behrens, Essa Yacoub, Kamil Ugurbil, and Wu-Minn HCP Consortium. "The WU-Minn human connectome project: an overview." Neuroimage 80 (2013): 62-79.

Figures

Fig. 1: RadixNet

Fig. 2: Reconstruction example

Fig. 3: Validation PSNR

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019), 4778