4860

Deep Learning based Velocity Aliasing Correction for 4D Flow MRI
Haben Berhane1, Hassan Haji-Valizadeh2, Joshua Robinson1, Michael Markl2, and Cynthia Rigsby1

1Lurie Children's Hospital of Chicago, Chicago, IL, United States, 2Northwestern University, Chicago, IL, United States

Synopsis

We developed a convolutional neural network (CNN) to detect and correct velocity aliasing in 4D flow MRI datasets. Our network uses a U-Net architecture and was trained, validated, and tested on 100, 10, and 100 datasets, respectively. It detected as many or more phase-wrapped voxels than the conventional algorithm and performed better in highly aliased regions of the datasets.

Introduction

4D flow MRI provides a comprehensive assessment of cardiovascular hemodynamics through the 3D visualization and time-resolved quantification of 3-directional blood flow velocities. 4D flow MRI data are acquired with a pre-defined, user-selected velocity sensitivity (venc), which determines the maximum blood flow velocity that can be measured without velocity aliasing. Velocity aliasing (or phase wrapping) arises because blood flow velocity is encoded in a phase difference image with a limited dynamic range of [-π, π]. When a local velocity component (vx, vy, vz) exceeds ±venc, the resulting phase difference wraps back into [-π, π], producing velocity aliasing, substantial 4D flow image artifacts, and inaccurate flow quantification. As such, velocity aliasing correction has to be performed to account for phase wrapping. While correcting a wrapped phase is relatively straightforward, φr = φw + 2πk (φr: real phase, φw: wrapped phase, k: integer coefficient), the difficulty lies in identifying aliased regions or voxels in a large (3D + time + 3 velocity directions) 4D flow dataset. A common method takes advantage of phase continuity in the temporal direction [1, 2]: by algorithmically detecting phase jumps greater than ±venc, wrapped phases can be isolated and corrected. However, this method is sensitive to noise and can fail for large regions with velocity aliasing. As a result, time-consuming and cumbersome manual identification and correction of regional velocity aliasing is often needed. Alternatively, convolutional neural networks (CNNs) have demonstrated excellent results in image labeling, object detection, and semantic segmentation. Since CNNs learn feature filters directly from the data, they provide a robust basis for feature extraction in voxel-wise classification.
The goal of this study was thus to design a CNN that automatically detects and corrects velocity aliasing in 4D flow datasets, evaluated against labeled ground truth data.
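To make the wrapping mechanics concrete, here is a minimal NumPy sketch (a scalar illustration, not the study's code; function names are hypothetical) of how a velocity exceeding venc aliases, and how a known wrap count k recovers it via φr = φw + 2πk expressed in velocity units:

```python
import numpy as np

def wrap_velocity(v_true, venc):
    """Encode velocity as phase and decode it back: velocities outside
    [-venc, venc] alias into that range, mimicking phase wrapping."""
    phase = v_true / venc * np.pi           # velocity -> phase
    wrapped = np.angle(np.exp(1j * phase))  # wrap phase into [-pi, pi]
    return wrapped / np.pi * venc           # phase -> velocity

def unwrap_voxel(v_wrapped, venc, k):
    """Velocity-domain form of phi_r = phi_w + 2*pi*k."""
    return v_wrapped + 2.0 * venc * k

# A true 150 cm/s velocity measured at venc = 120 cm/s wraps to -90 cm/s;
# with k = 1 the true value is recovered.
print(wrap_velocity(150.0, 120.0))    # ~ -90.0
print(unwrap_voxel(-90.0, 120.0, 1))  # 150.0
```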

Methods

This retrospective study used 210 aortic 4D flow scans (100 training, 10 validation, 100 testing; 110 male, 13 yrs) acquired on a 1.5T system (Aera, Siemens; spatial resolution = 1.2-3.5 mm3, temporal resolution = 37-45 ms, venc = 120-400 cm/s). The conventional phase unwrapping algorithm (CA) used for comparison identified aliased voxels by detecting phase jumps greater than ±venc in the X-Y plane and along the temporal direction across each slice. Training data consisted of 4D flow MRI scans without any velocity aliasing. As shown in Figure 1, different levels of velocity aliasing were simulated by generating 3 additional 4D flow datasets with the venc reduced by 20%, 50%, or 80%. A binary mask of the aliased region from the simulated data served as the ground truth (voxels with velocity aliasing) when training the CNN. The CNN used a U-Net architecture with a symmetrical encoder and decoder (Figure 2) [3]. Each encoding layer is composed of two sets of convolution, batch normalization, and rectified linear unit activation; max-pooling was then applied to halve the feature map dimensions. The decoder layers followed the same structure, but the feature maps were up-sampled to double their dimensions. Additionally, the decoder feature maps were concatenated with the corresponding encoder feature maps to retain as much spatial information as possible. After the final convolution layer, a sigmoid function generated a per-voxel probability map. A Dice loss function was used rather than softmax with cross entropy to account for the class imbalance. An Adam optimizer was used with a constant learning rate of 0.0001, and training was performed for 300 epochs.
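The class-imbalance argument for the Dice loss can be sketched as follows (a NumPy stand-in for the network's actual loss; `prob` and `target` are hypothetical names): because only foreground voxels contribute to the overlap terms, the few aliased voxels drive the loss instead of being averaged away by the vast non-aliased background.

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss between a sigmoid probability map and a binary
    ground-truth mask. The overlap terms involve only foreground voxels,
    so the background majority cannot dominate the gradient."""
    intersection = np.sum(prob * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(prob) + np.sum(target) + eps)

# 5 aliased voxels out of 1000: perfect overlap -> loss near 0,
# no overlap -> loss near 1, regardless of the imbalance.
mask = np.zeros(1000)
mask[:5] = 1.0
print(dice_loss(mask, mask))        # near 0.0
print(dice_loss(1.0 - mask, mask))  # near 1.0
```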

For testing, datasets were selected in which real velocity aliasing occurred in vivo. Any velocity aliasing detected by the network was then unwrapped. When comparing the CNN and CA, a mask of the aorta was applied to the corrected datasets to exclude noise outside the vessel when counting corrected voxels.
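The temporal-continuity detection that the CA relies on can be sketched like this (a simplified NumPy version along the time axis only; the study's CA also checks the X-Y plane, and this is not the authors' implementation):

```python
import numpy as np

def unwrap_temporal(v, venc):
    """Along the time axis (last dimension), a jump of more than venc
    between adjacent frames indicates a wrap; each frame is corrected by
    the nearest integer multiple of 2*venc relative to the previous frame."""
    out = np.asarray(v, dtype=float).copy()
    for t in range(1, out.shape[-1]):
        jump = out[..., t] - out[..., t - 1]
        k = np.round(jump / (2.0 * venc))  # per-voxel wrap count
        out[..., t] -= 2.0 * venc * k
    return out

# A true 100 -> 130 cm/s series at venc = 120 cm/s is measured as
# 100 -> -110; the -210 temporal jump reveals a single wrap.
series = np.array([[100.0, -110.0]])
print(unwrap_temporal(series, 120.0))  # recovers [[100., 130.]]
```

NumPy's built-in `np.unwrap` implements the same idea for general periodic data via its `period` argument (available in NumPy >= 1.21), e.g. `np.unwrap(v, period=2 * venc)`.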

Results

The total training time was 15 hours (GPU: Quadro P4000), and computation time averaged 1 second per dataset. On the test datasets, the CNN detected 489±340 aliased voxels compared to 488±270 for the CA (average difference: 0.9±0.7 voxels), while the validation datasets yielded 673±442 voxels (CNN) vs. 650±450 (CA), an average difference of 16±3 voxels. In general, the CNN detected more phase-wrapped voxels, especially in scans with severe aliasing (CNN: 630±175, CA: 618±207; 10±2 more voxels). Figure 3 displays three examples in which our CNN corrected more voxels.

Discussion

Our deep learning algorithm detected all of the phase-wrapped voxels found by the conventional algorithm in the vessel of interest, as well as voxels the conventional algorithm missed. Future work will further automate additional preprocessing tasks in the 4D flow workflow pipeline.

Acknowledgements

No acknowledgement found.

References

1. Salfity, M.F., et al., Extending the dynamic range of phase contrast magnetic resonance velocity imaging using advanced higher-dimensional phase unwrapping algorithms. J R Soc Interface, 2006. 3(8): p. 415-27.

2. Loecher, M., et al., Phase unwrapping in 4D MR flow with a 4D single-step Laplacian algorithm. J Magn Reson Imaging, 2016. 43(4): p. 833-42.

3. Ronneberger, O., Fischer, P., and Brox, T., U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597, 2015.

Figures

Figure 1: Examples of the training data. None of the training datasets contained phase wrapping; aliasing was then simulated by lowering the venc by three different amounts, increasing the total training data threefold.

Figure 2: Architecture of the neural network. The network used a U-Net architecture with a symmetrical encoder and decoder layout. The number of convolution channels was significantly reduced to prevent overfitting to the training data and to keep the network computationally efficient.

Figure 3: Examples of phase corrections by the conventional algorithm and our neural network. Our neural network performed as well as or better than the conventional phase unwrapping algorithm; these examples demonstrate cases of better performance.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)