2398

A Deep k-means Based Tissue Extraction from Reconstructed Human Brain MR Image
Madiha Arshad1, Mahmood Qureshi1, Omair Inam1, and Hammad Omer1
1Medical Image Processing Research Group (MIPRG), Department of Electrical and Computer Engineering, COMSATS University, Islamabad, Pakistan

Synopsis

Fast and accurate tissue extraction from human brain MRI is an ongoing challenge. Two principal factors make this task difficult: (1) the quality of the reconstructed images, and (2) the accuracy and availability of segmentation masks. In the proposed method, a supervised deep learning framework is first used to reconstruct a solution image from the acquired uniformly under-sampled human brain data. An unsupervised clustering approach, k-means, is then used to extract specific tissues from the reconstructed image. Experimental results show a successful extraction of cerebrospinal fluid (CSF), white matter and grey matter from the human brain image.

Introduction

Different tissues in MRI carry different biological information, and in some cases there is a need to focus on certain tissues over others. Brain MRI depicts white matter, grey matter and CSF1. Brain damage is often described as either a white or grey matter injury: e.g. Alzheimer’s disease is associated with white matter lesions2, whereas neuronal death is associated with grey matter injury3. Hence there is a need to focus on specific tissues of the human brain. In the literature, intensity-based segmentation methods have been used which classify individual pixels/voxels based on their intensity4. Separating the three main tissue classes on the basis of intensity requires artifact-free, high-quality reconstructed images with good contrast4. However, conventional reconstruction algorithms (e.g. CG-SENSE5) fail to remove aliasing artifacts from images reconstructed from highly under-sampled data, and in the presence of such artifacts k-means fails to accurately segment and extract human brain features4. Exploiting the fact that deep learning-based reconstruction algorithms perform well even at higher Acceleration Factors (AF), we propose a novel method (Deep k-means) to extract a specific brain tissue according to the region of interest.

Method

The proposed Deep k-means tissue extraction is a hybrid technique combining supervised image reconstruction with an unsupervised clustering approach, k-means. In the first phase, a U-Net6 is used to reconstruct the acquired uniformly under-sampled human brain data (AF=2). The U-Net is first trained to reconstruct zero-filled, uniformly under-sampled (AF=2) MR images of the human brain using the deep learning approach shown in Figure-1. The training data consist of 1407 T2-weighted human head Cartesian datasets7 (matrix size = 256 × 256) obtained from a 1.5T scanner. The uniformly under-sampled brain MR images7 are used as inputs, whereas the fully sampled MR images7 are used as labels. Training of the U-Net is performed in Python 3.7.1 with Keras using TensorFlow as a backend, on an Intel Core i7-4790 CPU (clock frequency 3.6 GHz) with 16 GB RAM and an NVIDIA GeForce GTX 780 GPU, and takes approximately 13 hours. The RMSprop optimizer is used to minimize a mean squared error loss.
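The preparation of the zero-filled network inputs described above can be sketched as follows. This is an illustrative single-coil NumPy sketch, not the authors' implementation: the function name `zero_filled_input` and the assumption that every second phase-encode row is retained for AF=2 are ours.

```python
import numpy as np

def zero_filled_input(image, af=2):
    """Simulate a uniformly under-sampled (AF=af), zero-filled input image
    from a fully sampled 2-D magnitude image (e.g. 256 x 256)."""
    kspace = np.fft.fftshift(np.fft.fft2(image))        # full Cartesian k-space
    mask = np.zeros(kspace.shape, dtype=bool)
    mask[::af, :] = True                                # keep every af-th phase-encode line
    under = np.where(mask, kspace, 0)                   # zero-fill the skipped lines
    zf = np.abs(np.fft.ifft2(np.fft.ifftshift(under)))  # aliased zero-filled image
    return zf, under, mask

img = np.random.default_rng(0).random((256, 256))       # stand-in for a fully sampled image
zf, under, mask = zero_filled_input(img, af=2)
```

During training, `zf` would serve as the network input and `img` as the label.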

After training, the U-Net is expected to remove the aliasing artifacts of a uniformly under-sampled MR brain image by recovering the missing data points. In doing so, it may also distort the originally acquired data points. To avoid this distortion, k-space correction8 is applied, i.e. the originally acquired k-space samples are re-inserted into the network output. After the k-space correction, an inverse Fourier transform yields the solution image.
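A minimal sketch of this correction step, assuming a single-coil Cartesian acquisition with a known sampling mask (the helper name `kspace_correction` is ours, not from reference 8):

```python
import numpy as np

def kspace_correction(net_image, acquired_kspace, mask):
    """Replace the network output's k-space values at the acquired locations
    with the originally acquired samples, then inverse-transform."""
    net_kspace = np.fft.fftshift(np.fft.fft2(net_image))
    corrected = np.where(mask, acquired_kspace, net_kspace)   # keep acquired samples
    return np.abs(np.fft.ifft2(np.fft.ifftshift(corrected)))  # solution image

# sanity check: if every location was acquired, the acquired image is returned
rng = np.random.default_rng(0)
x = rng.random((64, 64))
acquired = np.fft.fftshift(np.fft.fft2(x))
recon = kspace_correction(rng.random((64, 64)), acquired, np.ones((64, 64), dtype=bool))
```

The network is thus trusted only where no data was acquired; measured samples always pass through unchanged.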

In the second phase, an unsupervised clustering algorithm, k-means9, is used to segment the reconstructed brain image into ‘k’ non-overlapping clusters (or tissues), where each pixel belongs to a specific tissue, i.e. white matter, grey matter or CSF. In k-means clustering, the choice of ‘k’ is critical. Experiments were performed for k = 3, 4 and 5; k = 4 is chosen on the basis of the Silhouette score9. In our experiments, for k = 4, the average Silhouette score is around 0.73, indicating a suitable choice of ‘k’ for brain segmentation. After segmenting the reconstructed image, a binary mask is created to extract the specific brain tissue according to the region of interest; element-wise multiplication of the mask with the reconstructed image extracts the desired tissue. The same experiment is repeated for images uniformly under-sampled by AF=6. The results of the proposed method are compared against those obtained by applying k-means to brain images reconstructed with CG-SENSE5 (referred to as CG-SENSE k-means).
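The clustering, masking, and Silhouette evaluation described above can be sketched with scikit-learn. This is an illustrative example under our own assumptions: the helper `extract_tissue` and the choice of `cluster_id` are hypothetical, the image is a random stand-in, and the Silhouette score is computed on a pixel subsample for speed.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def extract_tissue(image, k=4, cluster_id=0, seed=0):
    """Cluster pixel intensities into k groups with k-means and keep one
    tissue class via a binary mask (hypothetical helper)."""
    pixels = image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    labels = km.labels_.reshape(image.shape)
    tissue = (labels == cluster_id) * image   # element-wise product with the mask
    return tissue, labels

rng = np.random.default_rng(0)
img = rng.random((64, 64))                    # stand-in for a reconstructed image
tissue, labels = extract_tissue(img, k=4, cluster_id=2)

# Silhouette score on a pixel subsample guides the choice of k
idx = rng.choice(img.size, 1000, replace=False)
score = silhouette_score(img.reshape(-1, 1)[idx], labels.reshape(-1)[idx])
```

Repeating the fit for k = 3, 4 and 5 and comparing the scores mirrors the model selection described above.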

Results

Figure-1 shows a block diagram of the proposed method. Figure-2 shows the architecture of the U-Net used in our experiments to reconstruct the solution image from the acquired uniformly under-sampled data. Figure-3 shows the segmentation results along with the brain tissues extracted by the proposed method and by CG-SENSE k-means5. Table-1 shows the PSNR, RMSE and SSIM values of the reconstructed images obtained from deep learning and from CG-SENSE, with the fully sampled images used as reference. Table-1 also shows the Silhouette scores that validate the segmentation results obtained from the proposed method and CG-SENSE k-means.

Discussion and Conclusion

Deep k-means extracts tissues from human brain images reconstructed from highly under-sampled data more accurately than CG-SENSE k-means. The average Silhouette score of 0.73 (close to 1) validates the segmentation results obtained from the proposed method. Moreover, the proposed method reduces the computational burden by avoiding the tedious job of creating accurate segmentation masks.

Acknowledgements

No acknowledgement found.

References

1. McRobbie DW, Moore EA, Graves MJ. MRI from picture to proton. 2017. doi:10.2214/ajr.182.3.1820592

2. Kao Y-H, Chou M-C, Chen C-H, Yang Y-H. White Matter Changes in Patients with Alzheimer’s Disease and Associated Factors. Journal of clinical medicine. 2019;8(2). doi:10.3390/jcm8020167

3. Klaver R, De Vries HE, Schenk GJ, Geurts JJG. Grey matter damage in multiple sclerosis: a pathology perspective. Prion. 2013;7(1):66–75. doi:10.4161/pri.23499

4. Despotović I, Goossens B, Philips W. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications. Computational and Mathematical Methods in Medicine. 2015;2015:450341. doi:10.1155/2015/450341

5. Pruessmann KP, Weiger M, Börnert P, Boesiger P. Advances in sensitivity encoding with arbitrary k-space trajectories. Magnetic Resonance in Medicine. 2001;46(4):638–651. doi:10.1002/mrm.1241

6. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015. Cham: Springer International Publishing; 2015. p. 234–241.

7. Hyun CM, Kim HP, Lee SM, Lee S, Seo JK. Deep learning for undersampled MRI reconstruction. Physics in Medicine and Biology. 2018;63(13). doi:10.1088/1361-6560/aac71a

8. Arshad M, Qureshi M, Inam O, Omer H. Transfer learning in deep neural network based under-sampled MR image reconstruction. Magnetic Resonance Imaging. 2020. doi:10.1016/j.mri.2020.09.018

9. Wang F, Franco-Penya H-H, Kelleher JD, Pugh J, Ross R. An analysis of the application of simplified silhouette to the evaluation of k-means clustering validity. In: International Conference on Machine Learning and Data Mining in Pattern Recognition. Springer; 2017. p. 291–305.

Figures

Figure 1: Block diagram of the proposed method: A Deep k-means Based Tissue Extraction from the reconstructed Human Brain MR Image. (A) shows the deep learning approach for image reconstruction and (B) shows the k-means clustering algorithm used for the brain tissue extraction.

Figure 2: Architecture of U-Net used in the deep learning framework for image reconstruction

Figure 3: Results obtained from the proposed Deep k-means and CG-SENSE k-means: (A) shows the reconstruction results obtained from U-Net and CG-SENSE, (B) shows the segmentation results obtained from the k-means clustering algorithm. (C-E) show the white matter, grey matter and CSF extracted by the proposed method and CG-SENSE k-means.

Table 1: PSNR, RMSE and SSIM values of the reconstructed images obtained from deep learning framework and CG-SENSE, Silhouette scores of the segmentation results obtained from k-means clustering algorithm for k=4. A higher value of Silhouette score indicates the goodness of k-means for segmentation of the reconstructed image by the proposed method.

Proc. Intl. Soc. Mag. Reson. Med. 29 (2021)