Hossam El-Rewaidy^{1}, Warren J Manning^{1,2}, and Reza Nezafat^{1}

A new framework based on

**Introduction**

**Methods**

The proposed URUS reconstruction algorithm utilizes a complex-valued deep network to create artifact-free MR images from the acquired undersampled complex k-space data (Fig. 1). The acquired 3D multi-coil k-space data were zero-filled, transformed to image space using the FFT, combined using B1-weighted coil combination, and denoised with one iteration of total-variation minimization to reduce the noise level in the input images [4]. This 3D dataset was sliced along the z-direction into multiple 2D slices of size N×N×2, which were fed successively into a 2D complex convolutional network with a U-net architecture. Each 2D input image was decomposed into basic components using two complex convolutional layers, each with 64 filters of size 3×3×2. The complex convolution operations were performed using real-valued arithmetic, where the complex convolution of an image w ∈ C^{N×N} with a filter h ∈ C^{K×K} can be written as:

Real(w * h) = w_r ∗ h_r − w_i ∗ h_i

Imag(w * h) = w_r ∗ h_i + w_i ∗ h_r

where * and ∗ denote the complex and real-valued convolution operations, respectively; w_r and h_r are the real parts of the image and filter, and w_i and h_i are the corresponding imaginary parts. Each convolution layer is followed by a radial batch-normalization layer that standardizes the complex data so that its magnitude has a mean and standard deviation of 1 while its phase is left unchanged. A complex ReLU was then applied to both the real and imaginary parts of the data. A contracting path with three downsampling stages was used to increase the receptive field of the network, enabling artifact removal at larger scales; two convolutional layers followed each downsampling stage to extract higher-order features at those scales. Similarly, an expansive path upsampled the resulting feature maps back to the original image size using bilinear interpolation, and two final convolution layers reconstructed the output image. During training, a mean-squared-error loss was minimized so that the network learns a mapping between the aliased complex input images and magnitude images reconstructed with low-dimensional-structure self-learning and thresholding (LOST) compressed sensing [5]. To evaluate the performance of the proposed reconstruction technique, 3D LGE images were acquired in 217 patients using prospectively, randomly undersampled k-space with acceleration factors of 3 (n=130), 4 (n=24), and 5 (n=63) [3]. The 3D LGE images were acquired with a spatial resolution of 1.0–1.5 mm³ and 100–120 slices. The dataset was randomly divided into training (70%) and testing (30%) subsets. To quantify the scar volume, the left ventricular myocardium was manually extracted from each 2D slice in all testing-set patients with scar, for both the URUS- and LOST-reconstructed images.
Full-width at half-maximum (FWHM) and 6-standard-deviation (6SD) thresholding methods were used to automatically quantify the scar volume within the extracted myocardium. To investigate higher undersampling rates, each acquired k-space dataset was further retrospectively undersampled by two additional factors, yielding acceleration rates of 5 to 7 (i.e., data acquired at rate 3 were undersampled to rate 5, and data acquired at rates 4 and 5 were undersampled to rates 6 and 7, respectively). The model was implemented in PyTorch and tested on a PC equipped with a 12-GB NVIDIA Xp GPU.
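As a minimal sketch of the real-arithmetic complex convolution described above, the two equations can be realized with a pair of real-valued PyTorch convolutions whose kernels play the roles of h_r and h_i (the module name and tensor shapes here are illustrative, not from the original implementation):

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex 2D convolution built from real-valued convolutions:
    Real(w * h) = w_r * h_r - w_i * h_i
    Imag(w * h) = w_r * h_i + w_i * h_r
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Two real-valued kernels: conv_r holds h_r, conv_i holds h_i
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, w_r, w_i):
        # Combine four real convolutions into one complex convolution
        real = self.conv_r(w_r) - self.conv_i(w_i)
        imag = self.conv_i(w_r) + self.conv_r(w_i)
        return real, imag
```

With kernel size 3×3 and separate real and imaginary channels, this matches the 3×3×2 filters described in the abstract; radial batch normalization and complex ReLU would follow each such layer.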
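The two scar-quantification rules used above can be sketched as simple intensity thresholds over the extracted myocardium (these helper functions and their inputs are hypothetical; the abstract does not specify implementation details such as how the scar seed or remote region is chosen):

```python
import numpy as np

def fwhm_scar_mask(myocardium, seed_mask):
    """FWHM thresholding: label as scar all myocardial pixels whose
    intensity exceeds half the maximum intensity in a scar seed region."""
    threshold = 0.5 * myocardium[seed_mask].max()
    return myocardium >= threshold

def sd_scar_mask(myocardium, remote_mask, n_sd=6):
    """n-SD thresholding: label as scar all myocardial pixels brighter
    than mean + n_sd * std of a remote (healthy) myocardial region."""
    remote = myocardium[remote_mask]
    threshold = remote.mean() + n_sd * remote.std()
    return myocardium >= threshold
```

Summing the resulting masks per slice and per patient (scaled by voxel size) would give the scar area and volume percentages analyzed in Figure 5.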

1. Vasanawala SS, Alley MT, Hargreaves BA, Barth RA, Pauly JM, Lustig M. Improved Pediatric MR Imaging with Compressed Sensing. Radiology. 2010;256:607–16.

2. Akçakaya M, Rayatzadeh H, Basha TA, Hong SN, Chan RH, Kissinger KV, et al. Accelerated Late Gadolinium Enhancement Cardiac MR Imaging with Isotropic Spatial Resolution Using Compressed Sensing: Initial Experience. Radiology. 2012;264:691–9.

3. Basha TA, Akçakaya M, Liew C, Tsao CW, Delling FN, Addae G, et al. Clinical performance of high-resolution late gadolinium enhancement imaging with compressed sensing. J Magn Reson Imaging. 2017;46:1829–38.

4. Yang J, Zhang Y, Yin W. A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE J Sel Top Signal Process. 2010;4:288–97.

5. Akçakaya M, Basha TA, Goddu B, Goepfert LA, Kissinger KV, Tarokh V, et al. Low-dimensional-structure self-learning and thresholding: Regularization beyond compressed sensing for MRI Reconstruction. Magn Reson Med. 2011;66:756–67.

Figure 1. a) The proposed pipeline for 3D LGE reconstruction. First, the acquired k-space data are zero-filled, transformed with the FFT, combined into a single image using B1-weighted coil combination, and finally total-variation regularization is applied to prepare the input for the network. The resulting 3D LGE data are sliced along the z-direction and fed to a 2D complex deep neural network. b) The complex deep network consists of stacked complex convolution layers, radial batch normalization, and complex rectified linear units, with three stages of down-sampling and up-sampling of the feature maps to collect multiscale features. All kernels in this network are of size 3×3×2.

Figure 2. LGE images with an isotropic spatial resolution of 1.4 mm³, reconstructed using deep learning from a prospectively, randomly undersampled 3D LGE dataset acquired with an acceleration factor of 5 in a patient with hypertrophic cardiomyopathy.

Figure 3. Example images comparing LOST- and URUS-reconstructed images. Red arrows point to myocardial scar. All images were reconstructed from data prospectively undersampled at rate 3.

Figure 4. Example URUS reconstructions of retrospectively undersampled data at acceleration rates 5 (acquired at rate 3) and 7 (acquired at rate 5). Red arrows point to myocardial scar.

Figure 5. Correlation analysis of scar area per slice (first row) and scar volume per patient (second row), expressed as percentages of the LV myocardial area and volume, respectively, for URUS- and LOST-reconstructed LGE data. Scar pixels were quantified within the extracted myocardium using two automatic methods: full-width at half-maximum (FWHM) thresholding (left column) and 6-standard-deviation (6SD) thresholding (right column).