We extended Gadgetron, a widely used open-source framework, to support AI inference on clinical MR scanners. Specially designed software modules (InlineAI) were added to Gadgetron, allowing AI neural network models to be loaded and applied to incoming MR data in a completely "in-line" fashion. That is, without any user interaction, results are sent back to the scanner and are available immediately after data acquisition. Two AI-based applications were developed as demonstrations: inline AI cine segmentation, and perfusion flow mapping and analysis.
Artificial intelligence, especially deep-learning based algorithms, has the potential to significantly improve MR imaging, reconstruction, and analysis [1]. Typical development of AI-based clinical imaging applications consists of two phases: training, which iteratively optimizes model parameters given a large amount of labelled data, and inference, which applies the trained model to a single incoming dataset. While training is generally conducted "off-line", AI inference performed "in-line" on the scanner provides results immediately or shortly after data acquisition, improving clinical workflow. Ideally, MR data should be streamed to an AI/imaging server for image reconstruction, computation, and analysis without any user interaction.
To the best of our knowledge, the scanner computing environments currently supplied by vendors are often inadequate in computing power and lack AI software. To enable AI model inference on the MR scanner, we extended Gadgetron [2, 3], an open-source software package widely used by the MR research community, by adding features to: 1) interact with mainstream AI software packages such as TensorFlow [4] and PyTorch [5]; 2) allow flexible AI model deployment via Python modules (called "Gadgets") or embedded Python calls (Python/C++ interface). We demonstrate these new abilities for inline AI inference in two clinical applications: (1) inline cine segmentation and (2) inline perfusion flow mapping and analysis. Both are currently deployed to hospitals for clinical validation.
Two schemes for model inference were implemented in Gadgetron. Python Gadget: Gadgetron modules, called "Gadgets", can be implemented in Python (all major AI packages support Python). AI models can be loaded during the configuration of a Python Gadget and applied repeatedly to incoming data, as sketched below. Python/C++ interface: the user can supply Python scripts to load and apply AI models; these scripts are called from the Gadgetron C++ runtime through a dedicated Python/C++ interface. Python-C++ data conversion is implemented for all major MR data types, including k-space, images, ECG/respiratory waveforms, XML meta data, labelled contours, and anatomical landmarks.
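As an illustration of the first scheme, the following is a minimal sketch of a Python Gadget that loads a segmentation network once at configuration time and applies it to every incoming image. The class roughly follows the Gadgetron 3.x Python Gadget convention; the model path, pre/post-processing, and the way the mask is sent downstream are hypothetical and only illustrate the pattern.

```python
# Minimal sketch of an inline-AI Python Gadget (Gadgetron 3.x style).
# Model file name, normalization, and output handling are illustrative only.
import numpy as np
from gadgetron import Gadget


class InlineAISegmentationGadget(Gadget):
    def process_config(self, cfg):
        # Load the trained network once, when the reconstruction chain is configured.
        import tensorflow as tf
        self.model = tf.keras.models.load_model("/opt/models/cine_seg_model.h5")  # hypothetical path
        return 0

    def process(self, header, image):
        # The image arrives as a numpy array; normalize magnitude and run inference.
        img = np.abs(image).astype(np.float32)
        img = img / (img.max() + 1e-6)
        mask = self.model.predict(img[np.newaxis, ..., np.newaxis])[0, ..., 0]

        # Pass the original image downstream unchanged; the segmentation mask
        # could be attached as an additional image or as meta data.
        self.put_next(header, image)
        self.put_next(header, mask.astype(np.float32))
        return 0
```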
Both schemes were implemented in Gadgetron (3.17.0, Inline AI release, https://github.com/gadgetron/gadgetron/archive/v3.17.0.tar.gz). Two AI applications were developed and deployed as demonstrations. Inline AI - Cine: Retro-gated cine imaging was acquired with parallel imaging and reconstructed in Gadgetron. The resulting images were fed into a pre-trained deep learning model [6] for myocardial segmentation. All functionality was implemented as a Python Gadget using TensorFlow. This Gadget works with other C++ Gadgets to reconstruct cine images together with endocardial and epicardial contours that can be edited on the scanner, as shown in Fig. 1. This allows cine function metrics, such as ejection fraction, to be computed automatically from the segmentation results. Inline AI - Perfusion mapping and analysis: Our inline perfusion solution [4] was further improved with two AI models. The first AI model was trained to detect the LV blood pool in the arterial input function (AIF) image series; the detected signal was used for pixel-wise perfusion flow mapping. The second AI model was trained to segment the endo/epicardial boundaries of the myocardium and to detect the RV insertion point on all short-axis slices. The AHA bull's eye plot was computed from the pixel-wise flow maps. Both the segmentation and the bull's eye plot were sent back to the scanner without any user interaction (Fig. 2). The Python/C++ interface was used to load and apply both AI models, as sketched below; PyTorch [5] was used in this application.
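To illustrate the second scheme, the sketch below shows the kind of Python script that the Gadgetron C++ runtime could call through the Python/C++ interface to apply a PyTorch model to the AIF image series. The function name, model path, and array layout are assumptions made for illustration and do not reflect the exact interface of the deployed application.

```python
# Illustrative Python script called from the Gadgetron C++ runtime via the
# Python/C++ interface. Arrays are assumed to cross the boundary as numpy arrays.
# Model path and function signature are hypothetical.
import numpy as np
import torch

_model = None  # cached between calls so the network is loaded only once


def _load_model():
    global _model
    if _model is None:
        # TorchScript model exported during training (hypothetical path).
        _model = torch.jit.load("/opt/models/perf_aif_lv_detection.pts", map_location="cpu")
        _model.eval()
    return _model


def apply_aif_detection(aif_images):
    """Detect the LV blood pool on an AIF image series.

    aif_images: numpy array of shape [frames, height, width], passed in from C++.
    Returns a per-frame probability mask of the same spatial size.
    """
    model = _load_model()
    x = torch.from_numpy(np.ascontiguousarray(aif_images, dtype=np.float32))
    x = x.unsqueeze(1)  # [frames, 1, H, W]
    with torch.no_grad():
        prob = torch.sigmoid(model(x))
    # Return a numpy array so the C++ side can convert it back to Gadgetron types.
    return prob.squeeze(1).numpy()
```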
Patient studies were conducted at the Barts Heart Centre and Royal Free Hospital, London, UK. This study was approved by the local Ethics Committees at both hospitals, and written informed consent for research was obtained from all subjects. Use of anonymized data was also approved by the NIH Office of Human Subjects Research (OHSR, Exemption #13156).