Takashi Abe1, Yuki Matsumoto1, Yuki Kanazawa1, Yoichi Otomi1, Maki Otomo1, Moriaki Yamanaka1, Mihoko Kondo1, Saya Matsuzaki1, Ariunbold Gankhuyag1, Enkhamgalan Dolgorsuren1, Oyundari Gonchigsuren1, and Masafumi Harada1
1Graduate School of Biomedical Sciences, Tokushima University, Tokushima, Japan
Synopsis
We compared the performance of convolutional encoder-decoder (CED) neural networks when synthesizing FLAIR images from T1WI and T2WI. With a shallow CED, the resolution was good but the contrast was poor; as the CED became deeper, the contrast became better but the resolution became worse. We then added skip connections to the CED: image quality did not improve with an Inception (GoogLeNet)-like parallel skip connection, but it did improve with a ResNet-like serial skip connection, which mixes shallow and deep CED paths and resembles the structure of U-net.
INTRODUCTION
We previously reported that high-definition synthetic FLAIR images can be created from T1WI and T2WI at ISMRM 2018.1 In this study, we report on the relationship between the construction of the neural network and the quality of the synthetic FLAIR images.
METHODS
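The input preparation described in this section (8-bit gray-scale images, added random noise, T1WI and T2WI as the input) can be sketched as follows. The Gaussian noise model, the noise level, and the two-channel stacking are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

def prepare_input(t1, t2, noise_sigma=5.0, rng=None):
    """Scale two 8-bit slices to [0, 1], add random noise, and stack
    them as a two-channel network input of shape (height, width, 2).

    noise_sigma is a hypothetical noise level in 8-bit intensity
    units; the noise distribution and amplitude are assumptions.
    """
    rng = rng or np.random.default_rng(0)
    x = np.stack([t1, t2], axis=-1).astype(np.float32) / 255.0
    x += rng.normal(0.0, noise_sigma / 255.0, size=x.shape)
    return np.clip(x, 0.0, 1.0)

# Example with random stand-ins for real 256x256 T1WI/T2WI slices.
rng = np.random.default_rng(1)
t1 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
t2 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
x = prepare_input(t1, t2, rng=rng)
print(x.shape)  # (256, 256, 2)
```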
We selected cases with 2D-T1WI, 2D-T2WI, and 2D-FLAIR from a glioma database. The images were reconstructed to 256x256 pixels and stored as gray-scale PNG images (8-bit data, i.e., signal intensities ranging from 0 to 255) with the default WW and WL settings, and random noise was added to the input data. With T1WI and T2WI as the input, we synthesized FLAIR images using convolutional encoder-decoders (CEDs), which repeat 3x3 convolution and 2x2 pooling, and compared the characteristics of the images synthesized by networks of different depths and constructions. We programmed in Python 3.5 on the Ubuntu 14.04 operating system and used Keras 1.2.2, TensorFlow 1.0, CUDA Toolkit 8.0, and cuDNN v5.1 with a TITAN X (Maxwell) GPU.
RESULTS
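The serial skip connection discussed in this section can be illustrated with a toy numpy sketch: the shallow (full-resolution) features bypass the deep (pooled and then upsampled) path and are merged back on the channel axis, so the output carries both high-resolution and high-contrast information. Convolutions and learned weights are omitted; this is a sketch of the connectivity only, not the network used in the study.

```python
import numpy as np

def pool2x2(x):
    """2x2 average pooling: a stand-in for the CED's downsampling stage."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2x2(x):
    """Nearest-neighbour 2x upsampling: a stand-in for the decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def serial_skip(x):
    """U-net-like serial skip: concatenate the shallow path (features
    kept at full resolution) with the deep path (features that went
    through pooling and upsampling, losing fine detail)."""
    shallow = x
    deep = upsample2x2(pool2x2(x))
    return np.concatenate([shallow, deep], axis=-1)

x = np.arange(16.0, dtype=np.float32).reshape(4, 4, 1)
y = serial_skip(x)
print(y.shape)  # (4, 4, 2)
# The shallow channel preserves x exactly; the deep channel is blurred.
```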
Deepening the layers of the CED improved the contrast but reduced the resolution, and deepening beyond four layers yielded little further contrast improvement. A parallel skip connection resembling GoogLeNet (Inception2) contributed only slightly to image quality. However, image quality improved dramatically with a ResNet3-like serial skip connection, which mixes shallow and deep CED paths and resembles the structure of U-net4.
DISCUSSION
Medical institutions may have difficulty acquiring optimal MRI images because of scan-time limitations, and image quality may be degraded by artifacts, hindering interpretation. The image synthesis explored in this study may therefore aid interpretation, for example by synthesizing an MR image that could not be acquired within the available scan time or an image that was difficult to evaluate because of artifacts. The history of deep learning research has also been a history of deploying ever more effective neural networks; developing a network superior to those used here, and thereby further improving the quality of the synthetic images, is an important next step.
CONCLUSION
Our results demonstrated that high-definition synthetic FLAIR images can be created from T1WI and T2WI, and that the construction of the neural network is deeply related to the quality of the synthesized FLAIR images.
Acknowledgements
No acknowledgement found.
References
1: Abe T, Salamon N. A Deep Learning Approach to Synthesize FLAIR Image from T1WI and T2WI. Proc Intl Soc Mag Reson Med 2018;26:3130
2: Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition; 2015:1-9
3: He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition; 2016:770-778
4: Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical image computing and computer-assisted intervention; 2015:234-241