Real-time MRI is a promising modality for the measurement of myocardial function without the need for breath-holding or ECG triggering. To enable the quantitative assessment of non-temporally-aligned image slices representing multiple heart cycles, we present an automatic image analysis approach based on segmentation with the U-net convolutional neural network model. The comparison of the segmentation masks with reference data shows a very good Dice coefficient of 0.94. The comparison of quantitative results with those obtained from the expert-corrected conventional segmentation is promising and suggests that further improvement can be achieved through parameter adaptation.
Real-time
MRI is a promising modality for the measurement of myocardial function without
the need for breath-holding or ECG triggering [1, 2]. The analysis of the continuously acquired image data, however, requires additional post-processing effort compared to conventional analysis. Cardiac and breathing phases are detected retrospectively from the image information, so all image frames have to be processed. Previously published approaches for the
segmentation of the myocardium in real-time cardiac MRI sequences were based on
active contours, intensity and shape classification as well as spiral scanning
[3-5]. All three approaches require a considerable amount of user interaction
in order to preselect the slice range to segment and to derive correct
quantitative results from the image sequence.
Our purpose is the development of a fully automatic post-processing pipeline for cardiac real-time MRI. To this end, we validate a machine-learning-based segmentation approach with respect to its overlap with given expert segmentations as well as the cardiac function parameters derived from them.
Short-axis cardiac real-time MRI data comprising 25 to 33 slices were acquired at 3 T (Siemens Skyra) with a resolution of 1.6 mm × 1.6 mm × 6 mm and an acquisition time of 33 ms per frame for 150-720 time points (i.e., 5-24 s) using a radial FLASH sequence (volunteers and arrhythmia patients) [1]. Reference segmentations for 172 frames were created by clinical experts through interactive correction of the segmentations automatically provided by the method of Zoehrer et al. [5]. Image data were preprocessed by normalizing the intensities to the interval [0,1], corresponding to the 2nd to 98th percentiles of the original histogram.
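A minimal sketch of this percentile-based normalization is given below; the use of NumPy arrays and the clipping of intensities outside the percentile range are our assumptions, as the handling of outliers is not specified above.

import numpy as np

def normalize_percentiles(image, lower=2.0, upper=98.0):
    """Map the 2nd-98th intensity percentiles of an image to [0, 1].

    Values outside the percentile range are clipped (assumption).
    """
    lo, hi = np.percentile(image, [lower, upper])
    normalized = (image - lo) / (hi - lo)
    return np.clip(normalized, 0.0, 1.0)
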
We chose the U-net convolutional network architecture, which has been successfully applied for segmentation in conventional cardiac cine MRI [6,7], and used the Keras framework for TensorFlow (https://www.tensorflow.org/guide/keras). Spatially neighboring slices in the datasets are not temporally aligned with regard to the heart phase. To take the relationship between subsequent time frames into account, we applied three U-nets to the reformatted data as shown in Figure 1. Learning rates were chosen as 0.005, 0.001, and 0.001 for the xy-, xt-, and yt-orientations.
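The reformatting into the three orientations can be sketched as follows; the (t, y, x) array layout and the helper names are illustrative assumptions, not taken from the description above.

import numpy as np

def reformat_views(sequence):
    """Reformat one slice sequence with (assumed) layout (t, y, x) into the
    three stacks of 2-D images processed by the xy-, xt-, and yt-networks."""
    xy = sequence                        # (t, y, x): one y-x image per time point
    xt = sequence.transpose(1, 0, 2)     # (y, t, x): one t-x image per y position
    yt = sequence.transpose(2, 0, 1)     # (x, t, y): one t-y image per x position
    return xy, xt, yt

def restore_tyx(p_xy, p_xt, p_yt):
    """Map the per-orientation network outputs back to the common (t, y, x)
    grid so that they can be combined in the post-processing step."""
    return p_xy, p_xt.transpose(1, 0, 2), p_yt.transpose(1, 2, 0)
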
For the training phase, we considered 134 sequences with myocardium segmentations for either all or no time frames. This restriction is required because the subsequent multi-cyclic analysis is based on the assessment of a blood pool area curve. In the post-processing step, the thresholded maximum of the three network outputs was filtered so that, on the one hand, only the largest segmented component is kept and, on the other hand, only slices with complete rings in 98% of all time frames are accepted.
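A minimal sketch of this post-processing under stated assumptions: the three network outputs are probability maps already mapped back to the common (t, y, x) grid, the threshold value of 0.5 is not specified above, and a complete ring is detected heuristically as a myocardium mask that encloses at least one hole.

import numpy as np
from scipy import ndimage

def largest_component(mask):
    """Keep only the largest connected component of a 2-D binary mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = np.bincount(labels.ravel())[1:]        # sizes of labels 1..n
    return labels == (np.argmax(sizes) + 1)

def has_complete_ring(myo_mask):
    """Heuristic ring test (our assumption): a closed myocardial ring encloses
    at least one hole, so filling holes enlarges the mask."""
    return ndimage.binary_fill_holes(myo_mask).sum() > myo_mask.sum()

def postprocess_sequence(p_xy, p_xt, p_yt, threshold=0.5, min_ring_fraction=0.98):
    """Thresholded maximum of the three predictions, restriction to the largest
    connected component per frame, and acceptance of the slice only if complete
    rings are found in at least 98% of all time frames."""
    combined = np.maximum(p_xy, np.maximum(p_xt, p_yt)) >= threshold   # (t, y, x)
    cleaned = np.stack([largest_component(frame) for frame in combined])
    ring_fraction = np.mean([has_complete_ring(frame) for frame in cleaned])
    return cleaned, ring_fraction >= min_ring_fraction
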
For validation, 38 slice sequences were selected, 20 of which showed myocardium. Segmentation took 43.87 s per sequence on average. The average Dice coefficient for slices with a reference segmentation was 0.95 for the blood pool and 0.94 for the myocardium. The Dice coefficient is taken as 1 if the algorithm correctly recognizes that a slice contains no myocardium and as 0 for false-positive segmentations. No segmentations were missed; one slice was segmented as a false positive. The average mean boundary error was 0.60 mm. To test the feasibility of a fully automatic quantitative assessment of real-time MRI, we analyzed an exercise dataset based on the expert-corrected segmentation as well as with the automatic method using the CAFUR software [8]. Results are shown in Figure 3. Although all sequences were acquired in direct succession, the number of segmented slices decreased with increasing exercise level due to the through-plane motion induced by heavier breathing. The comparison was therefore restricted to 9 slice sequences segmented at all exercise levels (rest, 50 W, 70 W, 90 W, 100 W). Heart cycles were successfully detected in all datasets. As shown in Figure 3, however, the area of the blood pool was lower in the automatic segmentation, resulting in a lower stroke volume.
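For reference, the Dice convention described above can be written as a small per-slice function; pred and ref are assumed to be binary masks, and treating a missed segmentation as 0 is our assumption (no such case occurred here).

import numpy as np

def dice_per_slice(pred, ref):
    """Dice coefficient with the convention used above: 1 if the algorithm
    correctly finds no myocardium on a slice, 0 for a false-positive
    (and, by assumption, for a missed) segmentation."""
    pred_area, ref_area = pred.sum(), ref.sum()
    if pred_area == 0 and ref_area == 0:
        return 1.0
    if pred_area == 0 or ref_area == 0:
        return 0.0
    overlap = np.logical_and(pred, ref).sum()
    return 2.0 * overlap / (pred_area + ref_area)
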