Magnetic Resonance Imaging Cross-Modality Synthesis
Yawen Huang1, Leandro Beltrachini1, Ling Shao2, and Alejandro Frangi1

1Department of Electronic and Electrical Engineering, The University of Sheffield, Sheffield, United Kingdom, 2Department of Computer Science and Digital Technologies, Northumbria University, Newcastle, United Kingdom

Synopsis

Multi-modality MRI protocols are becoming standard in everyday clinical practice. The advantages of such acquisitions have been shown to be fundamental in a wide range of applications, such as medical diagnosis and image segmentation. However, acquiring these protocols tends to be time-consuming, which is a key limitation. In this paper, we address this problem by presenting a novel method for synthesising any MRI modality from a single acquired image. This is done using machine learning techniques for dictionary learning. Results show that our approach leads to significant improvements over state-of-the-art methods.

Purpose

To generate a synthetic MR image in an arbitrary modality based on an image from a different modality.

Methods

The objective of this work is to generate a synthetic MR image in an arbitrary modality based on an image from a different modality. To do so, we learn a dictionary from available data relating both modalities, and then use it to obtain the synthetic image. A common way to approach this problem consists of dividing the image into smaller samples, here called patches. To guarantee a consistent context across the two modalities, we then normalise their intensities to the same range. Let $$$I_i^{Mk}(j)$$$ be the j-th patch of the i-th image (i=1,...,m) in a library corresponding to the k-th modality (k=1,2). The normalisation is done by computing

$$\widehat{I}_i^{Mk}(j)=\frac{I_i^{Mk}(j)}{\rho_k}\,\,\,\,\,(1)$$

where $$$\rho_k=\max_{i,j}\left \{\left \|I_i^{Mk}(j)\right \|_2 \right \}$$$ is the largest patch norm of the k-th modality.
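For illustration, a minimal numpy sketch of the patch extraction and normalisation steps is given below. The 2D sliding-window extraction, patch size, and stride are our own illustrative choices, not specified above:

```python
import numpy as np

def extract_patches(image, patch_size=5, stride=1):
    """Slide a square window over a 2D slice; return patches flattened as rows."""
    H, W = image.shape
    patches = [image[r:r + patch_size, c:c + patch_size].ravel()
               for r in range(0, H - patch_size + 1, stride)
               for c in range(0, W - patch_size + 1, stride)]
    return np.asarray(patches)                   # shape: (num_patches, patch_size**2)

def normalise_patches(patches):
    """Eq. (1): divide all patches of one modality by rho_k, the largest patch l2-norm."""
    rho = np.linalg.norm(patches, axis=1).max()  # rho_k
    return patches / rho, rho
```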

Once the patches are normalised, we need to compute a cross-modality dictionary. Let $$$\phi=\left \{\phi_1,\phi_2,...,\phi_K\right \}\in\mathbb{R}^{{n}\times{K}}$$$ be a projection dictionary, and $$$\alpha\in\mathbb{R}^{K}$$$ the sparse vector representing a normalised patch $$$\widehat{I}_i^{Mk}$$$ in this basis. Then, the sparse decomposition of $$$\widehat{I}_i^{Mk}$$$ is obtained by solving:

$$\min_{\phi,\alpha} \left \| \widehat{I}_i^{Mk}-\phi\alpha \right \|_2^2+\lambda\left \| \alpha \right \|_0\,\,\,\,\,(2)$$

where $$$\left \| \cdot \right \|_0$$$ denotes the $$$l_0$$$-norm sparsity constraint, which fixes the number of non-zero elements of the sparse representation $$$\alpha$$$, and $$$\lambda$$$ is a regularisation factor controlling the sparsity of the solution. Minimising Eq. (2) under the $$$l_0$$$-norm constraint is NP-hard [1], but the problem becomes tractable by relaxing the $$$l_0$$$-norm to the $$$l_1$$$-norm [2]. Note that Eq. (2) does not impose a cross-modality learning process. To do so, we propose a cross-modality dictionary learning scheme that forces the data from both modalities to share the same sparse codes, i.e.
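For a fixed dictionary, the $$$l_1$$$-relaxed form of Eq. (2) can be solved with a proximal-gradient (ISTA) iteration. The sketch below is one standard solver for this subproblem, not necessarily the one used in our implementation:

```python
import numpy as np

def sparse_code(x, phi, lam=0.1, n_iter=200):
    """min_a 0.5*||x - phi @ a||_2^2 + lam*||a||_1 via iterative soft-thresholding."""
    L = np.linalg.norm(phi, 2) ** 2        # Lipschitz constant of the data-fit gradient
    a = np.zeros(phi.shape[1])
    for _ in range(n_iter):
        a = a - phi.T @ (phi @ a - x) / L  # gradient step on the data-fit term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # prox of lam*||.||_1
    return a
```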

$$\langle\phi_{M1},\phi_{M2}\rangle=\arg\min_{\phi_{M1},\phi_{M2},\alpha} \frac{1}{2}\left \| \widehat{I}_i^{M1}-\phi_{M1}\alpha \right \|_2^2+\frac{1}{2}\left \| \widehat{I}_i^{M2}-\phi_{M2}\alpha \right \|_2^2+\lambda\left \| \alpha \right \|_1\,\,\,\,\,\text{s.t.}\,\left \|\phi_{Mk}(j)\right \|_2^2\leq1\,\,\,\,\,(3)$$

where $$$\phi_{Mk}$$$ is the learned dictionary with K atoms for the k-th modality. Once the paired dictionaries are obtained by solving (3), the reconstructed image in the desired modality can be represented by $$$X_{M2}=\phi_{M2}\alpha_p$$$, where $$$\alpha_p=\arg\min_{\alpha} \left \| X_{M1}-\phi_{M1}\alpha \right \|_2^2+\lambda\left \| \alpha \right \|_1$$$ and $$$X_{M1}$$$ denotes the input image with the given modality.
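Because both modalities share a single code vector, Eq. (3) reduces to ordinary dictionary learning on row-wise stacked patch pairs. The sketch below uses this stacking trick with a simple alternating scheme (sparse coding via the `sparse_code` routine above, followed by a least-squares dictionary update); the specific optimiser is an illustrative assumption:

```python
import numpy as np

def learn_coupled_dictionaries(P1, P2, K=256, lam=0.1, n_outer=10, seed=0):
    """Eq. (3): P1, P2 are (num_patches, n) arrays of paired, normalised patches.
    Stacking each pair into one 2n-vector lets a single dictionary/code pair
    represent both modalities, which enforces the shared sparse codes."""
    n = P1.shape[1]
    X = np.concatenate([P1, P2], axis=1).T        # (2n, num_patches)
    D = np.random.default_rng(seed).standard_normal((2 * n, K))
    D /= np.linalg.norm(D, axis=0)                # enforce ||phi(j)||_2^2 <= 1
    for _ in range(n_outer):
        A = np.column_stack([sparse_code(x, D, lam) for x in X.T])  # (K, num_patches)
        D = X @ np.linalg.pinv(A)                 # least-squares dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D[:n], D[n:]                           # phi_M1, phi_M2

def synthesise_patch(x_m1, phi_m1, phi_m2, lam=0.1):
    """Code the input patch in phi_M1, then reconstruct it with phi_M2."""
    return phi_m2 @ sparse_code(x_m1, phi_m1, lam)
```

A full image would be synthesised by applying `synthesise_patch` to every patch of $$$X_{M1}$$$ and averaging the overlapping reconstructions.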

Results

We evaluated our method in two different scenarios. Firstly, we used the IXI dataset [4,5] to synthesise a proton density weighted (PD-w) image from a T2-weighted (T2-w) acquisition of the same patient. Graphical results of the proposed algorithm are shown in Figure 1. The proposed technique performs well when compared to the ground truth. For a quantitative analysis, we computed the root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Results for the entire volume are shown in Figure 2. These results are significantly better than those obtained with standard techniques.
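For reference, minimal definitions of two of the reported metrics are sketched below (SSIM requires a windowed computation and is omitted here; an off-the-shelf implementation such as scikit-image's `structural_similarity` can be used):

```python
import numpy as np

def rmse(x, y):
    """Root mean squared error between the synthetic and ground-truth volumes."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y):
    """Peak signal-to-noise ratio in dB, with the peak taken from the ground truth."""
    return 20.0 * np.log10(y.max() / rmse(x, y))
```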

Secondly, we compared the performance of our method with the state-of-the-art MR image example-based contrast synthesis (MIMECS) [3]. To do so, we considered the SPGR and MPRAGE image modalities, as done in [3]. Figure 3 clearly shows the advantage of the presented method over MIMECS, particularly in deep grey matter structures and in the overall intensity profile. As in the first example, we computed the RMSE, PSNR, and SSIM for both methods (Fig. 4). Our approach achieves the lowest RMSE and the highest PSNR and SSIM for whole-subject synthesis while using only SPGR as input.

Conclusion

In this paper, we proposed a novel approach to MRI cross-modality synthesis. A cross-modality dictionary learning method was introduced to map data from different modalities into a unified sparse representation space, improving the accuracy of the results. Experiments demonstrated a significant improvement of our method over the existing state of the art.

Acknowledgements

The authors are grateful to Aaron Carass for providing the implementation of the MIMECS algorithm.

References

[1] G. Davis, S. Mallat, and M. Avellaneda, “Adaptive greedy approximations,” J. Construct. Approx., vol. 13, pp. 57-98, 1997.

[2] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic Decomposition by Basis Pursuit,” SIAM Review, vol. 43, no. 1, pp. 129-159, 2001.

[3] S. Roy, A. Carass, and J. L. Prince, “Magnetic Resonance Image Example-Based Contrast Synthesis,” IEEE Trans. Med. Imag., vol. 32, no. 12, pp. 2348-2363, Dec. 2013.

[4] A. L. Rowland, M. Burns, T. Hartkens, J. V. Hajnal, D. Rueckert, and D. L. G. Hill, “Information eXtraction from Images (IXI): Image Processing Workflows Using A Grid Enabled Image Dataset,” In Proceedings of Distributed Databases and Processing in Medical Image Computing (DiDaMIC), pp. 55-64, 2004.

[5] M. Burns, A. L. Rowland, D. Rueckert, J. V. Hajnal, K. Leung, D. L. G. Hill, and J. Vickers, “Information eXtraction from Images (IXI): Grid Services for Medical Imaging,” In Proceedings of Distributed Databases and Processing in Medical Image Computing (DiDaMIC), pp. 65-73, 2004.

Figures

Fig. 1. An example of simultaneous super-resolution and cross-modality synthesis from a low-quality T2-w MRI to a high-quality PD-w MRI.


Fig. 2. Boxplots of the quality evaluation of the synthesis results.

Fig. 3. Synthesis performance comparison on a typical MPRAGE slice, using SPGR as input.

Fig. 4. Boxplots of the comparison experiments between MIMECS and our method.


