1221

Deep Learning of ADC Maps from Under-sampled Diffusion-Weighted Radially Sampled MRI
Yuemeng Li1, Hee Kwon Song1, Miguel Romanello Giroud Joaquim1, Stephen Pickup1, Rong Zhou1, and Yong Fan1
1Radiology, University of Pennsylvania, Philadelphia, PA, United States

Synopsis

Respiratory motion and high magnetic fields pose challenges for quantitative diffusion-weighted MRI (DWI) of the mouse abdomen on preclinical MRI systems. EPI-based DWI methods yield inadequate suppression of motion and magnetic susceptibility artifacts. Radially sampled diffusion-weighted spin-echo (Rad-DW-SE) acquisition produces artifact-free images but requires substantially longer acquisition times. Here, we demonstrate a new deep learning concept for accelerating Rad-DW-SE acquisitions. Fully sampled Rad-DW-SE images are used to train a convolutional neural network that directly extracts apparent diffusion coefficient (ADC) maps from highly under-sampled Rad-DW-SE data. Comparisons with standard ADC extraction and acceleration methods support this concept.

INTRODUCTION

Quantitative metrics derived from diffusion-weighted MRI (DWI) series are being investigated as biomarkers of abdominal tumors in co-clinical trials. To mitigate respiratory motion, DWI employed in the clinic typically utilizes single-shot echo planar imaging (EPI) with parallel acquisition. However, due to the higher respiration rate of mice and increased magnetic susceptibility effects, EPI-based DWI of the mouse abdomen on preclinical scanners suffers from persistent artifacts that increase with b-value. Our earlier studies have shown that radially sampled diffusion-weighted spin-echo (Rad-DW-SE) acquisition methods effectively suppress motion and susceptibility artifacts over a wide range of b-values. However, compared to DWI-EPI, the acquisition time of Rad-DW-SE is substantially longer, and therefore methods to reduce data acquisition requirements are desirable. Based on the observations that the information content of a DWI series is highly redundant and that radial under-sampling leads to incoherent image artifacts, we hypothesize that significant reductions in acquisition time may be achieved with minimal degradation of image quality by combining radial under-sampling with deep learning based image reconstruction. In the present report we demonstrate that a deep learning model of adaptive convolutional neural networks (CNNs) can generate high-quality apparent diffusion coefficient (ADC) maps from under-sampled Rad-DW-SE data.
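To make the acceleration concrete, the sketch below illustrates one possible 4x spoke under-sampling of a sequentially ordered radial acquisition with 403 views and 96 readout points (the dimensions reported in METHODS); the specific spoke-selection scheme used in the study is not stated, so this is an illustrative assumption, not the authors' code.

```python
import numpy as np

# Illustrative 4x radial under-sampling: keep every 4th spoke of a
# sequentially ordered acquisition (403 views, 96 readout points).
n_views = 403
n_readout = 96
acceleration = 4

angles = np.arange(n_views) * np.pi / n_views      # sequential view angles
kept = angles[::acceleration]                      # ~101 spokes retained (~4x)

# k-space coordinates of the retained spokes (unitless fraction of k_max)
r = np.linspace(-0.5, 0.5, n_readout)
kx = r[None, :] * np.cos(kept[:, None])
ky = r[None, :] * np.sin(kept[:, None])
print(kx.shape)  # (101, 96)
```

Because radial spokes all pass through the k-space center, discarding spokes this way produces the incoherent streak artifacts mentioned above rather than coherent ghosting.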

METHODS

All animal handling protocols were reviewed and approved by our institute’s IACUC. A genetically engineered mouse (GEM) model of pancreatic ductal adenocarcinoma (PDA) was used.1 Mice were prepared for the MRI exam on a Bruker 9.4 T scanner by induction of general anesthesia (1.5% isoflurane in oxygen). Respiration was monitored and core body temperature was maintained at 37°C throughout the exam. Multi-slice Rad-DW-SE data were acquired (FOV=32×32 mm², 96 readouts, 403 sequentially ordered views, slices=19, thickness=1.5 mm, TR=750 msec, TE=28.7 msec, b-values = 0.64, 535, 1071, 1478, 2141 s/mm², total acquisition time = 25 min) using contiguous slices spanning the abdominal cavity. Images were reconstructed offline using regridding implemented in Python, and ADC maps were computed using a 3-parameter model.2 The resulting images were used to train a deep learning model of CNNs with a densely connected Encoder-Decoder architecture for computing ADC maps from under-sampled Rad-DW-SE k-space data under a supervised deep learning framework, referred to as DL-ADC. Specifically, b-value images from under-sampled Rad-DW-SE k-space data (4x acceleration factor) were used as a multi-channel input to the deep learning model for generating an ADC map by minimizing a loss function that gauges the difference between the generated ADC map and an ADC map computed from images of fully sampled Rad-DW-SE k-space data. DL-ADC was enhanced by spatial and channel attention layers to adaptively focus on the input image data at different spatial locations and on information from different b-values. DL-ADC was compared with a deep learning model that reconstructs high-quality DWI images of multiple b-values that were subsequently used to compute ADC maps using the same 3-parameter model.2 The total pool of DWI datasets was randomly split into training (n=44), validation (n=5), and testing (n=5) datasets.
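A voxel-wise fit of the kind used to produce the reference ADC maps can be sketched as follows. The exact form of the 3-parameter model is given in reference 2; here it is assumed, for illustration, to be a mono-exponential decay plus a constant offset, S(b) = S0·exp(-b·ADC) + C, and the b-values are illustrative stand-ins rather than the acquired values listed above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed 3-parameter model: mono-exponential decay with a constant offset.
def model(b, s0, adc, c):
    return s0 * np.exp(-b * adc) + c

# Illustrative b-values (s/mm^2); the acquired values are listed in METHODS.
b_values = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])

def fit_adc(signal):
    """Fit one voxel's signal across b-values; return ADC in mm^2/s."""
    p0 = (signal[0], 1e-3, 0.0)                 # rough initial guess
    popt, _ = curve_fit(model, b_values, signal, p0=p0, maxfev=5000)
    return popt[1]

# Synthetic voxel with a known ADC of 1.2e-3 mm^2/s
true_adc = 1.2e-3
signal = model(b_values, 1000.0, true_adc, 20.0)
adc_est = fit_adc(signal)
```

In practice this fit is run independently for every voxel of the multi-slice volume, which is what makes a single-pass network prediction of the whole ADC map attractive.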
Both deep learning models were implemented using PyTorch and were trained using the same training strategy. Quantitative metrics, including the Structural SIMilarity (SSIM) index, peak signal-to-noise ratio (PSNR), and normalized mean squared error (NMSE), were used to compare ADC maps of the testing datasets derived from the deep learning models with those computed from the fully sampled imaging data.
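The three metrics follow their standard definitions; the sketch below uses a global (single-window) SSIM for brevity, whereas practical evaluations typically use a windowed implementation. The exact implementations used in the study are not specified, so this is an assumption-laden illustration.

```python
import numpy as np

# Global single-window SSIM (standard constants c1, c2 with k1=0.01, k2=0.03).
def ssim_global(ref, pred, data_range):
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = ((ref - mu_x) * (pred - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def psnr(ref, pred, data_range):
    mse = np.mean((ref - pred) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def nmse(ref, pred):
    return np.sum((ref - pred) ** 2) / np.sum(ref ** 2)

# Stand-in "ADC maps": a reference and a slightly noisy prediction.
rng = np.random.default_rng(0)
ref = rng.random((96, 96))
pred = ref + 0.01 * rng.standard_normal((96, 96))

scores = (ssim_global(ref, pred, 1.0), psnr(ref, pred, 1.0), nmse(ref, pred))
```

Higher SSIM and PSNR and lower NMSE indicate closer agreement with the fully sampled reference.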

RESULTS

Figure-1 shows fully sampled diffusion-weighted images from a mouse bearing a PDA tumor and the ADC map obtained by conventional analysis. The DW images are free of motion and susceptibility artifacts at all b-values examined, with good overall quality; an SNR of about 7 was measured for the tumor in the highest b-value (2141 s/mm²) image. Figure-2 illustrates the network architecture of DL-ADC for computing ADC maps from under-sampled Rad-DW-SE k-space data. Figure-3 summarizes quantitative comparison results of ADC maps learned by the deep learning models under comparison. Figure-4 shows representative ADC maps computed from fully sampled DWI data and the corresponding ADC maps learned by the deep learning models. Finally, Figure-5 shows representative spatial attention maps that were generated by DL-ADC to adaptively modulate features learned by the CNNs. These results demonstrate that the proposed method can reliably obtain ADC maps from under-sampled k-space data with minimal image quality trade-off.

DISCUSSION

Our deep learning model (DL-ADC) with a densely connected Encoder-Decoder architecture achieved promising performance in computing ADC maps from under-sampled Rad-DW-SE k-space data with an acceleration factor of 4, indicating that direct estimation of ADC maps from under-sampled Rad-DW-SE data is feasible with the DL method.
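The spatial and channel attention layers described in METHODS can be sketched in PyTorch as follows. The actual DL-ADC architecture is the one shown in Figure-2; layer sizes, kernel sizes, and pooling choices here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Channel attention: re-weight b-value channels from globally pooled features.
class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=1):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, max(channels // reduction, 1)), nn.ReLU(),
            nn.Linear(max(channels // reduction, 1), channels), nn.Sigmoid())

    def forward(self, x):                         # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> (B, C)
        return x * w[:, :, None, None]

# Spatial attention: re-weight locations from channel-pooled feature maps.
class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

# Five b-value images in; shapes follow the acquisition described in METHODS.
x = torch.randn(1, 5, 96, 96)
y = SpatialAttention()(ChannelAttention(5)(x))
```

Multiplicative gates of this kind let the network emphasize informative b-value channels and image regions without changing feature-map shapes.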

CONCLUSION

High-quality diffusion parameter maps of the murine abdomen can be generated with high acquisition efficiency by combining radial under-sampling with deep learning techniques. Our ongoing research aims to apply the DL method to radially sampled data acquired on clinical scanners.3,4

Acknowledgements

Research reported in this study was partially supported by the National Institutes of Health under award number [U24CA231858]. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

1. Hingorani SR, Petricoin EF, Maitra A, Rajapakse V, King C, Jacobetz MA, Ross S, Conrads TP, Veenstra TD, Hitt BA, Kawaguchi Y, Johann D, Liotta LA, Crawford HC, Putt ME, Jacks T, Wright CV, Hruban RH, Lowy AM, Tuveson DA. Preinvasive and invasive ductal pancreatic cancer and its early detection in the mouse. Cancer Cell. 2003;4(6):437-50. Epub 2004/01/07. PubMed PMID: 14706336.

2. Cao J, Pickup S, Yang H, Castillo V, Clendenin C, O’Dwyer PJ, Rosen M, Zhou R. Radial diffusion-weighted MRI enables motion-robustness and reproducibility for orthotopic pancreatic cancer in mouse. Proc Intl Soc Mag Reson Med. 2020.

3. Local recurrence following radical prostatectomy: assessment using a continuously acquired radial golden-angle compressed sensing acquisition. Abdom Radiol (NY). 2017;42:290-7.

4. Baete S, Yutzy S, Boada F. Radial q-space sampling for DSI. Magn Reson Med. 2016;76:769–80.

Figures

Figure-1: Representative diffusion-weighted images and ADC map. The tumor is indicated by the red arrow.

Figure-2: A densely connected Encoder-Decoder network of CNNs (top and middle), enhanced by spatial-attention and channel-attention layers (bottom), for computing ADC maps from under-sampled complex image-space data.

Figure-3: Quantitative comparison of the deep learning models on testing data for computing ADC maps from under-sampled DWI k-space data at multiple b-values, compared with those computed from fully sampled imaging data.

Figure-4: Representative ADC maps computed from fully sampled images and generated by the deep learning models under comparison.

Figure-5: Representative spatial attention maps learned by the deep learning model.

Proc. Intl. Soc. Mag. Reson. Med. 29 (2021)