4889

Fully Automated 3D Body Composition Using Fully Convolutional Neural Networks and DIXON Imaging
Alex Graff1, Dmitry Tkach1, Jian Wu1, Hyun-Kyung Chung1, Natalie Schenker-Ahmed1, David Karow1, and Christine Leon Swisher1

1Human Longevity, Inc, San Diego, CA, United States

Synopsis

Here we show the first fully automated method for body composition profiling with MRI DIXON imaging. The method enables radiation-free MRI risk stratification without any manual processing steps, making it more accessible clinically. It would most likely be used for risk prediction and risk stratification for diseases such as type II diabetes, cardiovascular disease, and obesity.

Introduction

Diabetes and obesity have reached epidemic proportions as a public health problem, not only in the United States (US) but also globally. Worldwide, over 115 million people suffer from obesity-related problems, and obesity accounts for an estimated 80-85% of the risk of developing type II diabetes (T2D). In the US, diabetes affects nearly 1 in 10 adults, with the majority (90%-95%) of cases being T2D. Body composition derived from whole-body MRI is an important predictor of metabolic syndrome and is associated with risk for diseases including T2D, obesity, coronary heart disease (CHD), ischemic stroke, and even cancer [1]. Findings from body composition profiles are actionable, as metabolic syndrome or any of its components can be managed with lifestyle changes to delay or prevent the development of serious health problems. The physics of MRI DIXON imaging can isolate signal from water within muscle tissue and from lipids in visceral adipose tissue (VAT) and abdominal subcutaneous adipose tissue (ASAT) more accurately than any other diagnostic modality. Quantification by MRI is advantageous over noisy measures such as BMI that do not account for lean mass or fat composition. Currently, the only way to retrieve body composition profiles from MRI is to segment the images by hand; this process is costly and time consuming, rendering it infeasible for clinical use. Here we describe a method using a pair of convolutional neural networks (CNNs) with ResNet-style architectures [2] to classify each slice's anatomical location within a 3D volume, followed by a pair of fully convolutional networks (FCNs) with U-Net-style architectures [3] to automatically segment regions related to body composition, using fat and water images derived from multi-station whole-body DIXON MRI (Figure 1). The segmented data is then used to derive a quantitative body composition profile.
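The two-stage pipeline described above (slice classification, then segmentation of only the relevant slices) can be sketched as follows. This is a minimal illustration: `slice_contains_roi` and `segment_slice` are hypothetical stand-ins for the trained ResNet classifier and U-Net segmenter, not the authors' implementation.

```python
import numpy as np

def slice_contains_roi(slice_2d: np.ndarray) -> bool:
    """Stand-in for the ResNet-style binary slice classifier.
    Here a trivial intensity threshold; in practice, a trained CNN."""
    return slice_2d.mean() > 0.5

def segment_slice(slice_2d: np.ndarray) -> np.ndarray:
    """Stand-in for the U-Net-style segmenter: returns a binary mask."""
    return (slice_2d > 0.7).astype(np.uint8)

def build_3d_mask(volume: np.ndarray) -> np.ndarray:
    """Classify each axial slice, segment only slices that contain the
    anatomy of interest, and stack the results into a 3D mask."""
    mask = np.zeros_like(volume, dtype=np.uint8)
    for z in range(volume.shape[0]):
        if slice_contains_roi(volume[z]):
            mask[z] = segment_slice(volume[z])
    return mask

rng = np.random.default_rng(0)
volume = rng.random((10, 64, 64))  # toy stand-in for a fat-image volume
mask = build_3d_mask(volume)
```

The key design point is that the classifier acts as a gate: slices outside the abdomen (fat) or thigh (water) regions are never segmented, which avoids spurious masks in anatomy the segmenter was not trained on.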

Methods

Model Training

The classification models were developed as binary classifiers identifying slices within the abdomen for the fat images and slices pertaining to the thighs for the water images. Their training sets comprised ~20,000 images from both the water and fat series, annotated with slice location, with 4,000 randomly selected images held out for validation. The segmentation models were trained on ~20,000 images with manual segmentations: VAT and ASAT on fat images, and posterior and anterior thigh muscle for lean mass on water images, with 1,000 randomly selected images held out for quantitative validation. Whole-body volumes were qualitatively validated with data from Health Nucleus, our precision medicine clinic.

Volume Calculation

For each individual, 3D whole-body volumes for both water and fat images were constructed by composing multiple acquisition stations. Each slice was then passed through the classifier and, where applicable, the segmentation model to construct 3D masks. The resultant segmented masks were used to calculate the volumes of lean mass, VAT, and ASAT by summing the number of segmented pixels and multiplying by the associated voxel volume.
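The volume calculation reduces to counting positive mask voxels and scaling by the voxel volume. A minimal sketch (the voxel dimensions below are illustrative, not the study's acquisition parameters):

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, voxel_dims_mm: tuple) -> float:
    """Volume of a binary 3D segmentation mask in millilitres:
    (number of positive voxels) x (voxel volume in mm^3) / 1000."""
    voxel_vol_mm3 = float(np.prod(voxel_dims_mm))
    return float(mask.sum()) * voxel_vol_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Example: 2 x 2 x 3 mm voxels, 50,000 segmented VAT voxels
mask = np.zeros((100, 100, 10), dtype=np.uint8)
mask.flat[:50_000] = 1
vat_ml = mask_volume_ml(mask, (2.0, 2.0, 3.0))  # 600.0 mL
```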

Model Evaluation

Classification models were evaluated using the area under the ROC curve (AUC) on the held-out validation set. The performance of the segmentation models was assessed qualitatively and via the Dice coefficient on the held-out validation set. The output of the four models was also evaluated qualitatively over entire 3D volumes in held-out data from the Health Nucleus.
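The Dice coefficient used for segmentation evaluation is the overlap measure 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2*|A & B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (trivial perfect agreement)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = int(pred.sum()) + int(truth.sum())
    if denom == 0:
        return 1.0
    return 2.0 * int(np.logical_and(pred, truth).sum()) / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
d = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.666...
```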

Results

We evaluated the two classifiers using Receiver Operating Characteristic (ROC) curves for both the fat and water models, which yielded AUC > 0.98 on both training and validation sets (Figure 2). We then validated the two segmentation models qualitatively (Figure 3) and quantitatively, with Dice coefficients above 0.9 for both training and validation (Table 1), demonstrating that the models generalize well to unseen data. Finally, we combined the model outputs into 3D volumes for visualization and for VAT, ASAT, and lean mass volume calculations, yielding a fully automated body composition profile.

Discussion/Conclusion

Here we show the first fully automated method for body composition profiling with MRI DIXON imaging. The method enables radiation-free MRI risk stratification without any manual processing steps, making it more accessible clinically. It would most likely be used for risk prediction and stratification for diseases such as type II diabetes, cardiovascular disease, and obesity, but could also enable novel discoveries linking diseases to abnormal body composition. Future work includes evaluating the predictive power of the automated body composition profile in large, longitudinal population cohorts and its use in integrated risk models that leverage genomics and lifestyle factors, as described by Bernal et al. [4].

Acknowledgements

No acknowledgement found.

References

1. Linge et al. “Body Composition Profiling in the UK Biobank Imaging Study”. Obesity (2018). https://onlinelibrary.wiley.com/doi/10.1002/oby.22210

2. He et al. “Deep Residual Learning for Image Recognition”. CVPR (2016). https://arxiv.org/pdf/1512.03385.pdf

3. Ronneberger et al. “U-Net: Convolutional Networks for Biomedical Image Segmentation”. MICCAI (2015). https://arxiv.org/abs/1505.04597

4. Bernal A, Schenker-Ahmed N, Karow D, Swisher CL. Predicting 10-Year Risk of Type 2 Diabetes Onset Using Lifestyle, Genomics, and Whole Body DIXON MR Imaging. ISMRM. Submitted (2019).

Figures

Figure 1: High-level overview of the approach: a pair of convolutional neural networks (CNNs) with ResNet-style architectures classifies each slice's anatomical location, identifying the abdomen in fat images and the thighs in water images, respectively. Slices within the 3D volume containing anatomy of interest, as classified by the CNNs, are then passed to a pair of fully convolutional networks (FCNs) with U-Net-style architectures [3] that automatically segment regions related to body composition, using fat and water images derived from multi-station whole-body DIXON MRI. The segmented data is reconstructed into a 3D volume and then used to derive a quantitative body composition profile.

Figure 2: Results of classification model: ROC AUC for slice classifiers for the (a) fat and (b) water models.

Figure 3: Results from segmentation model: (Left to right) fat and water input slices selected via classifiers (Figure 2), the corresponding manually segmented maps and predicted maps in the validation cohort, and actual versus predicted for each class.

Table 1: Segmentation models’ performances by Dice coefficient.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)