As cardiovascular MRI moves towards free-breathing exams and efficient data sampling, additional demand is placed on the image reconstruction. For example, non-Cartesian parallel imaging, compressed sensing and retrospective gating techniques that require sophisticated reconstruction algorithms are becoming increasingly common in the clinical environment. For widespread adoption, reconstruction algorithms must be fast and integrated into the clinical workflow. Here, methods for fast image reconstruction are discussed, including open-source software packages, GPU and cloud implementations of reconstruction algorithms, and data compression.
Highlights
Traditional image reconstructions using the standard FFT, nonuniform FFT and common parallel imaging strategies involve only small numerical steps and linear transformations. They can therefore be implemented easily in most programming languages used for reconstruction (C-family languages, MATLAB, Python, etc.) and are often available on vendor-supplied reconstruction computers.
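As a minimal sketch of such a traditional linear reconstruction, the following Python/NumPy example applies a centered inverse FFT per coil followed by root-sum-of-squares (RSS) coil combination. The square phantom and Gaussian coil sensitivities are illustrative stand-ins for real scanner data, not part of any cited method.

```python
import numpy as np

# Synthetic fully sampled Cartesian acquisition: a square "object"
# seen through four smooth coil sensitivity profiles.
nx, ny, ncoils = 64, 64, 4
phantom = np.zeros((nx, ny))
phantom[16:48, 16:48] = 1.0

x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny), indexing="ij")
sens = np.stack([np.exp(-((x - cx) ** 2 + (y - cy) ** 2))
                 for cx, cy in [(-1, -1), (-1, 1), (1, -1), (1, 1)]])

# Forward model: coil images -> centered k-space
coil_images = sens * phantom
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(coil_images, axes=(-2, -1))),
                         axes=(-2, -1))

# Reconstruction: inverse FFT per coil, then RSS combination
recon_coils = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1))),
                              axes=(-2, -1))
recon = np.sqrt(np.sum(np.abs(recon_coils) ** 2, axis=0))
```

Because every step is a linear transform (or a simple pointwise combination), the same pipeline maps directly onto vendor reconstruction hardware or any general-purpose numerical library.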
However, when using highly undersampled imaging methods, there are cases where it is impractical or impossible to solve the reconstruction problem directly. Instead, these reconstructions can be treated as regularized optimization problems, and iterative solvers are used to determine the image solution. For example, non-Cartesian parallel imaging reconstructions [5], which are numerically challenging to solve directly, and compressed sensing reconstructions [6], which involve non-linear optimization problems, require such treatments. Many algorithms exist for this optimization task with differing efficiency, but nonlinear conjugate gradient is the algorithm most commonly used for MRI reconstructions. For efficiency, parallel imaging and compressed sensing reconstructions are often combined in the optimization problem through techniques such as SPARSE-SENSE [3] and L1-SPIRiT [7]. More information on image reconstruction can be found in review papers (e.g. [8]).
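The structure of such a regularized reconstruction can be sketched in a few lines. The example below solves a toy undersampled problem, minimizing ‖A x − y‖² + λ‖x‖₁ with iterative soft-thresholding (ISTA), a simple proximal-gradient solver used here for brevity in place of nonlinear conjugate gradient; the identity transform stands in for a wavelet sparsifying transform, and the spike image and random mask are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Sparse test image: a few point sources (sparse in the image domain itself)
x_true = np.zeros((n, n))
x_true[rng.integers(0, n, 10), rng.integers(0, n, 10)] = 1.0

# Random undersampling mask, keeping ~30% of k-space samples
mask = rng.random((n, n)) < 0.3
y = mask * np.fft.fft2(x_true) / n          # measured k-space (unitary scaling)

def A(x):                                   # forward: image -> sampled k-space
    return mask * np.fft.fft2(x) / n

def At(k):                                  # adjoint of A (for a real image)
    return np.real(np.fft.ifft2(mask * k)) * n

lam, step = 0.01, 1.0                       # regularization weight, step size
x = np.zeros((n, n))
for _ in range(100):
    grad = At(A(x) - y)                     # gradient of the data-fidelity term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft-thresholding
```

Replacing the identity transform with wavelets and the sampling mask with a nonuniform FFT yields the general shape of the compressed sensing and non-Cartesian reconstructions cited above.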
As real-time free-breathing imaging strategies become commonplace, motion correction and retrospective gating are included as reconstruction steps. In free-breathing acquisitions, respiratory motion correction (MOCO) can be applied using non-rigid registration between relatively stationary phases of the respiratory cycle. This MOCO technique has been applied to many types of cardiac imaging, including cine [1], T1 mapping [9] and late-gadolinium enhancement [10]. Alternatively, respiratory and cardiac motion-resolved reconstructions can be generated using self-navigated radial acquisition schemes, such as XD-GRASP [2].
Computer hardware has become increasingly powerful, inexpensive and accessible for MR image reconstruction. In many cases, state-of-the-art hardware components are not available on vendor-supplied reconstruction computers, but are available on external computers used to do fast "offline" reconstruction. For example, graphics processing units (GPUs) have become an accessible solution to improve computational power and accelerate reconstructions. GPUs can reduce reconstruction time by an order of magnitude and have been applied for 3D reconstructions [11], complex parallel imaging applications [12, 13] and nonuniform FFT computations [14]. Major GPU vendors have released application programming interfaces (APIs, such as Nvidia CUDA) allowing GPUs to be programmed as an extension to C programming languages. MATLAB and Python also permit GPU processing. These tools enable straightforward transfer of reconstruction algorithms to GPUs.
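As one illustration of how straightforward this transfer can be in Python, the sketch below uses CuPy, a GPU array library with a NumPy-compatible interface: the same reconstruction code runs on a CUDA GPU when CuPy is installed and falls back to NumPy on the CPU otherwise. The all-ones k-space is a synthetic stand-in for scanner data.

```python
# The same array code runs on GPU (CuPy/CUDA) or CPU (NumPy), because
# the two libraries share an interface; CuPy is assumed optional here.
try:
    import cupy as xp          # GPU arrays, CUDA kernels under the hood
except ImportError:
    import numpy as xp         # CPU fallback with the same interface

# Multi-coil k-space (synthetic stand-in for scanner data)
kspace = xp.ones((8, 256, 256), dtype=xp.complex64)

# Inverse FFT per coil, then root-sum-of-squares combination
images = xp.fft.ifft2(kspace, axes=(-2, -1))
combined = xp.sqrt(xp.sum(xp.abs(images) ** 2, axis=0))
```

Because only the import changes, a reconstruction prototyped on a workstation CPU can be moved to a GPU with essentially no code modification.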
Parallelizing over multiple cores or large computer clusters is another method of increasing computational power. This is especially effective when the data divided among the computers does not require shared memory, such as reconstructing each slice on a separate node. Cloud infrastructures (e.g. Microsoft Azure Cloud, Amazon Elastic Compute Cloud, Google Cloud Platform) are an inexpensive way to get state-of-the-art hardware for reconstructions, without any upfront costs or upkeep.
Open source reconstruction packages, which leverage improved computational abilities and access to additional software libraries, are available to assist with complex reconstructions. For example, the Berkeley Advanced Reconstruction Toolbox (BART) is open source software providing many tools for parallel imaging and compressed sensing applications [15]. This toolbox enables fast nonuniform FFTs and multidimensional wavelet transforms, and contains numerous iterative solvers, parallel imaging calibration methods and regularization methods. BART is operated through a command-line interface intended for offline reconstruction.
The Gadgetron is an open source reconstruction framework that uses external computers for both "offline" and "online" reconstruction [16]. For "online" reconstruction, the Gadgetron plugs into vendor reconstruction pipelines and streams data to external computers for reconstruction. Reconstructed images are displayed on the scanner with low latency and are saved into the scanner's DICOM database. This framework is especially valuable for incorporation into clinical workflow. The Gadgetron enables complex parallel imaging and iterative reconstructions. Furthermore, the Gadgetron can leverage cloud computation for inline reconstruction [17].
A common raw data format is essential for sharing reconstruction algorithms between sites and vendors. The ISMRM raw data format was created to facilitate such sharing and has been driven by the ISMRM community [18].
All of these open source software packages are available on MR-Hub (http://www.ismrm.org/MR-Hub/).
Cardiovascular imaging data sets are generally large because they combine multislice or 3D coverage, large coil arrays and dynamic information, making reconstructions computationally intensive. Furthermore, when streaming data line-by-line into a reconstruction pipeline, the size of each readout can greatly influence the speed of data transfer.
Several algorithms have been proposed to reduce the computational burden of large coil arrays. Some algorithms aim to select a suitable subset of coils [19-21], while others propose array compression through channel combination [22-24]. Perhaps the most notable example is the application of principal component analysis for array compression, which can be achieved with or without a coil sensitivity map. These compression methods can markedly improve reconstruction speed, but preserving data fidelity and SNR throughout data compression is of utmost importance.
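The principal component approach can be sketched as an SVD of the channel-by-sample data matrix: the dominant left singular vectors define a small set of "virtual" coils onto which the raw data are projected. The correlated synthetic k-space below, and the choice of 8 virtual coils, are illustrative assumptions rather than any cited method's parameters.

```python
import numpy as np

ncoils, nx, ny = 32, 128, 128
rng = np.random.default_rng(1)

# Synthetic multi-coil k-space with correlated channels:
# low-rank signal plus a small amount of noise
basis = rng.standard_normal((ncoils, 6)) @ rng.standard_normal((6, nx * ny))
kspace = (basis + 0.01 * rng.standard_normal((ncoils, nx * ny))).astype(np.complex128)

# PCA over channels: SVD of the (ncoils x samples) data matrix
U, s, _ = np.linalg.svd(kspace, full_matrices=False)

# Project onto the dominant virtual coils
nvirt = 8
compressed = (U[:, :nvirt].conj().T @ kspace).reshape(nvirt, nx, ny)

# Fraction of signal energy retained by the virtual coils
energy_kept = np.sum(s[:nvirt] ** 2) / np.sum(s ** 2)
```

Monitoring a retained-energy measure like `energy_kept` is one simple way to check that compression is not discarding significant signal, reflecting the fidelity and SNR concern above.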