
Automated Cartilage and Meniscus Segmentation of Knee MRI with Conditional Generative Adversarial Nets
Sibaji Gaj1, Mingrui Yang1, Kunio Nakamura1, and Xiaojuan Li1

1Department of Biomedical Engineering, Cleveland Clinic, Cleveland, OH, United States

Synopsis

Clinical translation of quantitative MRI techniques requires accurate cartilage and tissue segmentation. In this work, we developed and tested a fully automated cartilage and meniscus segmentation model for the knee joint using deep learning. To improve segmentation performance by incorporating multi-scale spatial constraints into the objective function, the proposed model combines a conditional generative adversarial network (CGAN) with a U-Net, and Dice and cross-entropy losses are added to the CGAN's objective function. Segmentation performance improved for all six compartments, with an average Dice coefficient of 0.88 during testing, compared to 0.79 for existing U-Net-based segmentation.

Target Audience

MR physicists, musculoskeletal radiologists, disease researchers, and deep learning researchers.

Purpose:

Knee osteoarthritis (OA) is a major degenerative disease affecting 26.9 million adults in the US alone1. Although quantitative MRI techniques have been developed for OA studies and the detection of early cartilage degeneration, a major obstacle to the clinical translation of these advanced techniques is the laborious and time-consuming manual or semi-automatic cartilage segmentation. Recently, a few deep learning algorithms based on convolutional neural networks, especially the U-Net architecture, have been applied for automatic cartilage and tissue segmentation in knee MR images2-4. Despite the promising results, the performance of these models is limited by objective functions based on pixel-wise mapping between the original and generated images, which fail to incorporate the natural structure of cartilage. Defining a proper loss function that enforces the learning of multi-scale spatial constraints in an end-to-end training process therefore remains an open problem. In this work, a conditional generative adversarial network (CGAN) is used to address this problem, and a fully automated cartilage and meniscus segmentation method based on the CGAN model is proposed.

Method:

In the proposed CGAN model, two networks are trained simultaneously: one generates realistic segmentations (the generator) and the other discriminates between the manual segmentation and the generated one (the discriminator). A detailed description of the proposed architecture is given in Fig. 1. Among state-of-the-art models, the U-Net provides promising results for segmenting knee cartilage2,4; thus, a multiclass U-Net is used as the generator. The output of the U-Net is a pixel-wise probability map for each class, which is fed to the discriminator along with the double-echo steady-state (DESS) MR images as prior information. A conventional convolutional neural network is used as the discriminator, which makes judgments at the image level instead of the patch or pixel level in order to incorporate global information for better accuracy. In the literature, it has been observed that generator performance improves when a traditional loss is combined with the GAN loss. Thus, the Dice and cross-entropy losses between the manual and generated segmentations are incorporated into the CGAN's loss function (see the sketch below). The Adam optimizer is used with a fixed learning rate of $$$10^{-4}$$$. To calculate the Dice coefficient during testing, the output probability map of the generator is thresholded at a fixed value of 0.9.

A total of 176 knee image sets from the Osteoarthritis Initiative (OAI) dataset (relevant scan parameters: FOV = 14 cm, matrix = $$$384 \times 307$$$, zero-filled to $$$384 \times 384$$$, TE/TR = 5/16 ms, 160 slices with a thickness of 0.7 mm) are used in the proposed scheme. The 176 image sets are randomly split at a 70:20:10 ratio into training, validation, and testing sets. Further, the 3D DESS volumes are split into 2D slices as input to our 2D model.
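For concreteness, a minimal sketch of the combined generator objective is given below, assuming a PyTorch implementation (the abstract does not specify the framework); the loss weights (w_adv, w_dice, w_ce) and helper names are illustrative assumptions, not the authors' exact settings.

    import torch
    import torch.nn.functional as F

    def soft_dice_loss(probs, target_onehot, eps=1e-6):
        # Soft Dice loss averaged over classes; probs and target_onehot
        # are (N, C, H, W) probability / one-hot tensors.
        dims = (0, 2, 3)
        intersection = (probs * target_onehot).sum(dims)
        union = probs.sum(dims) + target_onehot.sum(dims)
        return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

    def generator_loss(disc, image, logits, target,
                       w_adv=1.0, w_dice=1.0, w_ce=1.0):
        # image : (N, 1, H, W) DESS slice, given as prior information
        # logits: (N, C, H, W) raw U-Net output
        # target: (N, H, W) integer labels of the manual segmentation
        probs = logits.softmax(dim=1)

        # Adversarial term: the generator tries to make the image-level
        # discriminator score its segmentation (conditioned on the DESS
        # image) as real.
        fake_score = disc(torch.cat([image, probs], dim=1))
        adv = F.binary_cross_entropy_with_logits(
            fake_score, torch.ones_like(fake_score))

        # Traditional segmentation terms against the manual labels.
        ce = F.cross_entropy(logits, target)
        onehot = F.one_hot(target, num_classes=logits.shape[1])
        onehot = onehot.permute(0, 3, 1, 2).float()
        dice = soft_dice_loss(probs, onehot)

        return w_adv * adv + w_ce * ce + w_dice * dice

    # Both networks would be updated alternately with Adam at the fixed
    # learning rate from the abstract, e.g.
    # torch.optim.Adam(generator.parameters(), lr=1e-4),
    # the discriminator using the standard real/fake cross-entropy loss.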

Results:

The Dice coefficient comparison between the existing models and the proposed model is given in Table 1. The proposed scheme outperforms the existing models2,4 during testing. Examples of segmented images from the test subjects are shown in Fig. 2, with the color code for each class given in Table 1. From these figures, it can be observed that the automated segmentation agrees well with the manual reference.

Conclusion:

In this work, a deep learning architecture combining a CGAN and a U-Net for fully automated multi-class cartilage and meniscus segmentation has been proposed. The model performance was improved by incorporating the Dice and cross-entropy losses into the CGAN's loss function. Our model achieved better performance than the existing networks. In the future, we will extend the model to segment other joint tissues and, potentially, lesions. Combining a 3D U-Net with the CGAN and incorporating L1-norm and L2-norm regularization will also be investigated.

Acknowledgements

No acknowledgement found.

References

[1] Zhang Y, Jordan JM. Epidemiology of osteoarthritis. Clin Geriatr Med. 2010;26(3):355-69.

[2] Norman B, Pedoia V, Majumdar S. Use of 2D U-Net Convolutional Neural Networks for Automated Cartilage and Meniscus Segmentation of Knee MR Imaging Data to Determine Relaxometry and Morphometry. Radiology. 2018;288(1):177-185.

[3] Zhou Z, Zhao G, Kijowski R, Liu F. Deep convolutional neural network for segmentation of knee joint anatomy. Magn Reson Med. 2018.

[4] Raj A, Vishwanathan S, Ajani B, Krishnan K, Agarwal H. Automatic knee cartilage segmentation using fully volumetric convolutional neural networks for evaluation of osteoarthritis. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, 2018:851-854.

Figures

Fig 1. Model Architecture

Fig 2. Examples of three segmented test images, one per row. The color code for each class is given in Table 1. The first column shows the original image, the second column the manual segmentation, and the third column the automatic segmentation using our CGAN model.

Table 1. Dice coefficient comparison between the proposed scheme and existing schemes.
