Most MRI software platforms have their origins in the early 1990s and have grown drastically in functionality over the years. Due to the breadth of added features, the platforms have become increasingly difficult to use. Moreover, because of their largely monolithic code design, it has become challenging to integrate novel MRI techniques. This talk examines how microservices architectures could provide a solution, covering applications for image reconstruction, remote sequence calculation, and control of external devices. Furthermore, it discusses how an open development model could revitalize the currently stagnating translation of research developments into clinical practice.
After the first commercial MRI systems became available in the mid-1980s, significant hardware and technical advances were introduced and paved the way for the success of MRI as an invaluable diagnostic tool in medicine. However, the software platforms that control the devices have received relatively little attention. Most of the existing MRI software platforms have their origins in the early 1990s and were designed around the traditional paradigm of the NMR pulse-sequence experiment: The interface shows a set of controls that allow the user to configure the experiment; the software then reads the parameters, executes the NMR experiment accordingly, processes the acquired data, and displays the result to the user. This concept made sense in the early days of MRI because it was still unclear which sequence configurations would provide optimal results in different applications. With increasing clinical utilization of MRI, however, more and more options and sequence variations were integrated into the software packages. A single sequence can nowadays have over 150 different parameters, in addition to numerous general system settings. Due to the large number of sequence variations and the enormous number of settings, operating today's MRI systems has become a science of its own, which creates tremendous training and maintenance effort to ensure that exam procedures are performed consistently across the enterprise.
In addition to making the scanners difficult to use, the feature-driven evolution of the software has also created problems on the development side. As was common practice at the time of development, most platforms have a monolithic design and combine user interface, sequence calculation, scanner control, image reconstruction, and image-processing functions into a single unit with a shared code base. This monolithic unit also serves as the deliverable that is shipped to the end user, e.g., for distributing software updates. Because of strong code dependencies within the monolith and the huge size of the underlying code base, it has become increasingly difficult to integrate new technologies into the software, in particular developments that break with the classical NMR pulse-sequence paradigm. At the same time, the effort needed for quality control has grown significantly due to the high overall complexity of the deliverable. As a result, clinical translation of new MRI methods takes much longer than before, and the burden of releasing new techniques into the market keeps increasing.
Therefore, a key challenge will be to redesign the software platforms in such a way that they provide the flexibility to integrate new techniques and research developments more rapidly, without the risk of cluttering both the user interface and the existing code base.
Microservices Architectures
Problems from overgrown monolithic architectures are not unique to the MRI world and are well known in almost all areas of information technology. A strategy that has recently become popular in software engineering to address exactly this situation is the use of microservices (1,2). Many large companies, such as Netflix, eBay, and Amazon, have implemented microservices architectures in recent years to cope with the challenges resulting from the rapid growth of their businesses.
The core idea is to divide the software into small autonomous units that each implement only one specific function. Thus, the software platform becomes a network of many small modules that communicate through defined network interfaces (APIs). Because each microservice module has a limited functional scope, the development process, testing, and system integration become much more manageable compared to a monolithic approach. The concept of modularizing software solutions is, as such, nothing novel. However, what differentiates microservices from previous approaches to modularization, such as encapsulation into loadable libraries or object-oriented programming (OOP), is the principle of autonomy. Each microservice is conceptually seen as a fully autonomous unit that can be developed, tested, operated, and deployed independently from all other components of the system. This principle inherently forbids source-code or binary dependencies between modules, so that all interactions must occur through the communication interfaces that the services expose.
Each microservice is typically developed by a dedicated team. Because there are no code dependencies, there is no need to use the same programming language or development environment for all services. Thus, development teams have the freedom to select the technologies that are best suited for the individual problem, be it a high-performance language such as C++, a scripting language such as Python or JavaScript, or even a scientific language such as MATLAB. Moreover, because microservices typically communicate over normal TCP/IP network channels, there is no requirement that all instances share a single runtime environment. This creates a lot of flexibility for the composition of the services into an overall solution, which is referred to as "orchestration" (3). Services can run on a single computer, they can be distributed across multiple computers, or they can run on cloud services hosted on the internet. The possibility of decentralized execution offers significant advantages in terms of scalability, fault tolerance, and maintenance. Because it is possible to dynamically launch instances of a service and reroute the communication to a new instance, it becomes feasible to react seamlessly to hardware failures or to deploy updates. The latter means that bug fixes, new features, or completely rewritten implementations of components can be distributed continuously without causing major service interruptions. This is contrary to the current update practice for MRI systems, which replaces the complete software package at once and requires significant downtime of the system.
Microservices typically communicate via a REST (REpresentational State Transfer) interface (4), a communication mechanism closely related to the HTTP protocol used by webservers and commonly utilized by mobile apps to communicate with backend servers. Essentially, REST calls are requests that a client sends to a server in the HTTP format (5), in which the kind of request and its parameters are encoded in the URL address. For example, if the mobile app of a ride-sharing service requests a car, it would send an HTTP request to the company's server in the form:
GET http://api.car-service.com/CallRide?User=KTB&Lat=40.7127&Lng=-74.0059
If the server can execute this request, it would respond with a success code (typically 200), otherwise it would return an error code (typically 400 or 404):
HTTP/1.1 200 OK
If the intent of the request was to retrieve information, the server would confirm with a success code and return the requested information, typically in JSON or XML format (6):
GET http://api.car-service.com/ShowPrice?User=KTB&Lat=40.7127&Lng=-74.0059&Dest=Home
HTTP/1.1 200 OK
{
  "Price": 20.33,
  "Currency": "USD",
  "WaitTime": 3
}
The list of available URL addresses with their possible parameters, together with the format of the responses, defines the application programming interface (API) of the microservice. This interface definition must be known to all components that interact with the service. Because all calls and responses are exchanged as plain text, services can talk to each other even if they have been written in different programming languages or run on different operating systems. Moreover, because the communication is in a human-readable format, it is relatively easy to debug and test the services. At the same time, it is straightforward to secure the connection by switching the communication to the encrypted HTTPS format, which is of high importance when working with medical data.
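To make this concrete, the following minimal sketch shows what such a REST microservice could look like in Python, here written with the popular Flask framework. The endpoint name, parameters, and returned values mirror the hypothetical ride-sharing example above and are not part of any real API:

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint mirroring the ShowPrice example above
@app.route("/ShowPrice")
def show_price():
    user = request.args.get("User")
    dest = request.args.get("Dest")
    if user is None or dest is None:
        # Missing parameters: respond with a client-error code
        return jsonify({"Error": "Missing parameters"}), 400
    # Placeholder logic: a real service would calculate the actual fare
    return jsonify({"Price": 20.33, "Currency": "USD", "WaitTime": 3})

if __name__ == "__main__":
    app.run(port=8080)

Running this script starts a server that answers the GET request shown above with the JSON response and status code 200, illustrating how little infrastructure is needed to expose a function as a web service.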
How can the concept of microservices be applied to MRI scanners? A first logical step would be to utilize a microservices architecture for the image reconstruction. Traditionally, all images generated by an MRI scanner are calculated on a single reconstruction system that is part of the scanner hardware. In recent years, however, computationally demanding algorithms such as compressed-sensing or model-based techniques have become popular. Implementing such techniques directly on the scanner has been difficult because they block the integrated reconstruction computer for too long and, thus, interfere with the clinical workflow. Vendors have, therefore, started to improve the computational capabilities by integrating high-performance computing (HPC) servers that are equipped with multiple powerful GPU boards. However, integrating HPC servers into every MRI scanner is expensive, increases the risk of hardware failure, and is inefficient because the performance of the HPC server is not required for every examination. Thus, for a significant amount of time, the HPC server will be underutilized.
A rational strategy would be to offload expensive reconstructions to centralized compute nodes, running either on a central server cluster or even in the cloud. While MRI raw data can be large, the network bandwidth available today is high enough to transfer such data to remote servers. Moreover, in most cases, the images are not read immediately by radiologists, so that certain delays are often acceptable. We have implemented a similar strategy in our center for our compressed-sensing-based GRASP DCE-MRI technique, which we have used in over 35,000 patient exams so far (7). To provide feedback to the technicians regarding the success of the scan, a quick gridding reconstruction is performed on the MRI scanner while the actual DCE-MRI reconstruction is calculated on a central server pool and pushed directly into the PACS. The development team of the Gadgetron framework has also presented a distributed-computing extension of their framework, with which compute nodes can run either on an intranet cluster or on cloud services (8).
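As an illustration of this communication pattern, the following Python sketch shows how a thin scanner-side client might hand acquired raw data to such a centralized reconstruction service through a REST interface. The service address, endpoint, and metadata fields are hypothetical:

import numpy as np
import requests

# Hypothetical address of a centralized reconstruction service
RECON_URL = "http://recon-cluster.example.org/api/jobs"

# Random numbers standing in for an acquired k-space dataset
kspace = np.random.randn(16, 256, 256).astype(np.complex64)

# Submit the raw data plus minimal metadata; the service queues the
# job and returns a job ID that can be polled until the images are ready
response = requests.post(
    RECON_URL,
    params={"Scanner": "SiteA-MR01", "Protocol": "GRASP-DCE"},
    data=kspace.tobytes(),
    headers={"Content-Type": "application/octet-stream"},
)
job_id = response.json()["JobID"]

# Check the job status; the finished images could alternatively be
# pushed directly into the PACS, as described above
print(requests.get(f"{RECON_URL}/{job_id}").json())

Because the scanner only needs this thin client logic, the reconstruction algorithm itself can be updated, scaled, or replaced centrally without touching the scanner installation.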
A key advantage is the centralized management of the reconstructions. Updates or changes to the reconstruction parameters can be installed without physically accessing the MRI system, which significantly simplifies maintenance. This is of particular relevance for recently proposed reconstruction techniques based on machine learning, which evolve over time and require frequent updating. A further key advantage is scalability: By adjusting the number of compute nodes available in the reconstruction cluster, it is possible to dynamically adapt the compute capacity to the actual needs of each imaging site. In the event of local hardware failure or during peak times, additional instances could be launched through a cloud-based service, so that consistent reconstruction times are achieved. This could be done seamlessly, without interruptions on the scanner or need to inform the operator. Such an architecture would also create possibilities for per-use payment models. Instead of purchasing expensive computer hardware that might not be sufficiently utilized, imaging centers could rent compute instances on demand in the cloud. Hence, the cost for reconstruction services would become a per-use charge that can be assigned and billed to the individual exam, making the clinical integration of advanced reconstruction methods much more cost effective.
The idea of remote calculation can also be extended to the acquisition side. From a sequence-programming perspective, MRI scanners are basically 5-channel sequencers that play waveforms and readout events on 3 gradient axes and 2 RF channels. Conventionally, the waveform and timing calculation is done by code that is either compiled statically into the scanner software or compiled into dynamically loadable libraries. This means that any minor change to a sequence requires recompiling either the complete scanner software or the sequence library and replacing the binary files on the scanner. Also, changes to the sequence protocols need to be made locally on every scanner, which creates enormous effort and carries the risk of inconsistent scan protocols across scanners.
However, there is no reason why the timing calculation must be done directly on the scanner, nor is there a reason why the calculations must be done by compiled code. Theoretically, it would be possible to perform the calculations remotely, transfer the instructions to the scanner in the form of a descriptive language, and execute them there. The developers of the Pulseq project have recently demonstrated the feasibility of this principle (9). With Pulseq, sequences are written and calculated using MATLAB scripts. The instructions are then saved in a text file, which is transferred to the scanner and executed by a generic sequence that reads the text file. Taking this idea further, it would be possible to implement the sequence calculation as a microservice running on a central sequence server. When preparing an acquisition, the scanner would contact the sequence service and identify the scanner type, so that the service knows all hardware limitations. The service would then calculate the sequence timing table and return the sequencer instructions in the JSON format. Afterwards, the scanner would execute the measurement and pass the acquired data to a reconstruction service. If required, the microservice could also send updated scanner instructions during the acquisition to enable the sequence to react to external events or sensors. To compensate for potential transfer lag, the descriptive sequence language could include a fallback loop that is executed while the scanner waits for updated instructions (e.g., to keep the magnetization in a steady-state condition).
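To illustrate the principle, the following Python sketch implements such a sequence service with a drastically simplified event table. The endpoint, parameter names, event format, and hardware-limit table are hypothetical and serve only to demonstrate the communication pattern:

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical hardware limits per scanner type; a real service would
# store validated specifications for every supported system
SCANNER_LIMITS = {"3T-ModelX": {"max_grad_mT_m": 80.0}}

@app.route("/CalculateSequence")
def calculate_sequence():
    scanner = request.args.get("Scanner", "3T-ModelX")
    tr_us = int(request.args.get("TR_us", 5000))
    limits = SCANNER_LIMITS[scanner]
    # Drastically simplified timing table for one TR of a gradient-echo
    # sequence; a real service would calculate complete waveforms that
    # respect the hardware limits of the reported scanner type
    events = [
        {"t_us": 0, "type": "rf", "shape": "sinc", "flip_deg": 10},
        {"t_us": 1000, "type": "grad", "axis": "x",
         "amp_mT_m": min(20.0, limits["max_grad_mT_m"])},
        {"t_us": 2500, "type": "adc", "samples": 256, "dwell_us": 4},
    ]
    return jsonify({"TR_us": tr_us, "Events": events})

if __name__ == "__main__":
    app.run(port=8081)

The scanner would request such a table for every protocol step and feed the events into its sequencer, analogous to the way a generic Pulseq sequence interprets the text file.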
This modularized concept offers several important advantages over the current development model. First, the sequence developer can use arbitrary technology for implementing sequences. Product sequences could be implemented in C++, while research teams could use rapid-prototyping languages such as JavaScript, Python, or MATLAB. It would even be possible to simply load pre-calculated sequence instructions from a database and send these to the scanner. Because the sequence implementation is completely decoupled from the scanner platform, it would be possible to develop an unlimited number of prototypic sequence variations without ever contaminating the software components that run on the scanner. This decoupling could include the user interface. Currently, all available sequence controls are statically integrated into the scanner software. Thus, whenever one additional control is needed, the whole scanner software needs to be updated. Moreover, many modern acquisition techniques do not even need the breadth of controls that have been added over the last decades. With a microservices architecture, the user interface for each sequence could be provided by the microservice itself and seamlessly embedded into the host application. This could be achieved, e.g., with an HTML document that the microservice exposes and that is loaded into a web-browser instance inside the scanner software, in place of the current static controls. This would give sequence developers the flexibility to create simplified interfaces (e.g., a single slider for selecting between resolution and scan duration) or highly experimental interfaces without ever having to touch the software components that run on the scanner. A minimal sketch of such a service-provided interface is shown below.
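As a companion sketch, the following (hypothetical) routes show how a sequence microservice could expose such a simplified interface as a plain HTML document that the host application loads into its embedded browser:

from flask import Flask, request

app = Flask(__name__)

# Minimal sequence-specific interface: one slider trading resolution
# against scan duration, exposed as a plain HTML document
UI_PAGE = """
<html><body>
  <h3>Resolution vs. Scan Duration</h3>
  <input type="range" min="0" max="100" value="50"
         onchange="fetch('/SetTradeoff?Value=' + this.value)">
</body></html>
"""

@app.route("/SequenceUI")
def sequence_ui():
    return UI_PAGE

@app.route("/SetTradeoff")
def set_tradeoff():
    value = request.args.get("Value", "50")
    # A real service would recalculate the sequence protocol here
    print("New tradeoff value:", value)
    return "", 204

if __name__ == "__main__":
    app.run(port=8082)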
Because the sequence instances are running on centralized servers, managing exam strategies as well as rolling out updated sequences would be much simpler. It would also eliminate the common problem that a certain sequence is not installed (or licensed) on a particular scanner when needed. In this regard, MRI scanners would become pure acquisition devices while the actual examination logic is managed remotely. Decoupling the sequence implementation from the scanner would furthermore facilitate adapting sequences to new hardware and make testing easier. Because the interfaces are fully transparent, it would be possible to connect a sequence module to a numerical Bloch simulator instead of an actual scanner. In this way, the sequence could be tested comprehensively without ever running it on a real MRI system, which would reduce the scanner time needed for debugging and enable automated testing procedures. It would also allow independent developers without access to an MRI scanner to develop new applications. Lastly, remote sequence calculation could also serve as a mechanism for distributing sequences. Instead of sending a set of binary files to a customer or collaboration partner, the scanners could connect to the (public) sequence server of the developer and request the sequence instructions on the fly. This would guarantee that the scanner always executes the most up-to-date version of a sequence. It would also enable pay-per-use models for acquisition techniques.
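To make the testing idea concrete, the following sketch feeds the event table produced by the hypothetical sequence service above into a rudimentary single-isochromat Bloch simulator (hard-pulse RF, on-resonance, relaxation between events, gradient events ignored). It is only meant to show that the service interface can be exercised entirely without scanner hardware:

import numpy as np
import requests

def simulate(events, t1_us=1000e3, t2_us=100e3):
    # Single isochromat starting at thermal equilibrium
    m = np.array([0.0, 0.0, 1.0])
    t = 0
    signal = []
    for ev in sorted(events, key=lambda e: e["t_us"]):
        # Apply T1/T2 relaxation during the interval before the event
        dt = ev["t_us"] - t
        e1, e2 = np.exp(-dt / t1_us), np.exp(-dt / t2_us)
        m = np.array([m[0] * e2, m[1] * e2, 1.0 + (m[2] - 1.0) * e1])
        t = ev["t_us"]
        if ev["type"] == "rf":
            # Hard-pulse approximation: instantaneous rotation about x
            a = np.deg2rad(ev["flip_deg"])
            rot = np.array([[1.0, 0.0, 0.0],
                            [0.0, np.cos(a), np.sin(a)],
                            [0.0, -np.sin(a), np.cos(a)]])
            m = rot @ m
        elif ev["type"] == "adc":
            # Record the transverse magnetization as the "acquired" signal
            signal.append(complex(m[0], m[1]))
    return signal

# Request the event table from the sequence service sketched above
reply = requests.get("http://localhost:8081/CalculateSequence",
                     params={"Scanner": "3T-ModelX"}).json()
print(simulate(reply["Events"]))

Because the simulator talks to exactly the same REST interface as a real scanner would, such a test could run automatically for every new version of a sequence.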
The flexibility that microservices create for third-party developers would pose a radical change from the current "academic" dissemination model in the MRI world and create new pathways for rapid clinical translation. Although many MRI researchers today develop software code in their daily work, be it a new sequence or a reconstruction algorithm, research developments are rarely distributed directly. Instead, researchers typically perform a series of experiments and publish the results either in the form of a scientific publication or a patent application. At this point, the research work typically ends. Further translation into clinical practice only takes place if an industry developer becomes aware of the work and tries to reproduce it. The vendor must then decide to integrate the new method into the next scanner generation, and by selling a new scanner to a customer, the method finally arrives in the clinic. This process is highly inefficient and affected by two bottlenecks: On the one hand, the bandwidth of industry developers is limited while the complexity of novel MRI techniques keeps increasing. On the other hand, modern MRI systems have a very long lifespan (10-20 years), and because new techniques are only integrated into the latest generation of devices, it takes significant time until new techniques gain relevant distribution.
A microservice-based open development model would disrupt this process. It would allow universities or startup companies to develop innovative solutions independently from the large vendors and, moreover, to disseminate these developments into the clinic, for example through an app-store model. This would, for the first time, create a real incentive and professional perspective for researchers to bring their developments into clinical practice and would result in a significant gain in development power. It would enable the development of highly specialized solutions that would not be profitable for the large vendors, it would enable pay-per-use licensing models, and it would make it possible to continuously deliver innovations to installed-base systems. The latter should be of high interest also for the vendors, as it would allow ongoing revenue generation from the installed base instead of merely counting on the replacement of the systems.
Creating such an open development platform would not be too difficult from a technical perspective. It would mainly require the definition of reliable and documented scanner interfaces in the form of REST protocols. In addition, it would require the vendors' agreement on a common legal framework that protects developers and makes it sustainable for third parties to commit sufficient resources to the development of MRI microservices. Putting such agreements in place should become a joint effort within our scientific community.
1. https://www.nginx.com/blog/introduction-to-microservices
2. https://martinfowler.com/articles/microservices.html
3. https://en.wikipedia.org/wiki/Orchestration_(computing)
4. https://en.wikipedia.org/wiki/Representational_state_transfer
5. https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
6. https://en.wikipedia.org/wiki/JSON
7. Block KT, Grimm R, Feng L, Otazo R, Chandarana H, Bruno M, Geppert C, Sodickson DK. Bringing Compressed Sensing to Clinical Reality: Prototypic Setup for Evaluation in Routine Applications. In Proc. Intl. Soc. Mag. Reson. Med. 21 (2013): 3809
8. Xue H, Inati S, Sørensen TS, Kellman P, Hansen MS. Distributed MRI reconstruction using Gadgetron-based cloud computing. Magn Reson Med. 2015 Mar;73(3):1015-1025
9. Layton KJ, Kroboth S, Jia F, Littin S, Yu H, Leupold J, Nielsen JF, Stöcker T, Zaitsev M. Pulseq: A rapid and hardware-independent pulse sequence prototyping framework. Magn Reson Med. 2017 Apr;77(4):1544-1552
10. https://en.wikipedia.org/wiki/Webhook