Towards automated correction of brain shift using deep deformable magnetic resonance imaging-intraoperative ultrasound (MRI-iUS) registration
Ramy A. Zeineldin, Mohamed E. Karar
Abstract
Intraoperative brain deformation, the so-called brain shift, limits the applicability of preoperative magnetic resonance imaging (MRI) data for intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on a 3D convolutional neural network architecture, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the REtroSpective Evaluation of Cerebral Tumors (RESECT) dataset. This study shows that our proposed method outperforms the registration methods of previous studies with an average mean squared error (MSE) of 85. Moreover, our method can perform more than three 3D MRI-iUS registrations per second, improving the expected outcomes of brain surgery.
Introduction
Accurate localization of pathologic targets such as tumors inside the brain is one of the most challenging tasks during neurosurgery [1], because it is difficult to distinguish between pathologic structures and healthy tissue based on visual inspection alone. In addition, the brain deforms in response to factors such as dura opening, gravity, loss of cerebrospinal fluid, and swelling due to osmotic drugs and anesthesia, resulting in the so-called brain shift. This deformation may change the tumor's position and thus limits the utility of preoperative image data for intraoperative guidance in neurosurgery [2].
The use of preoperative magnetic resonance imaging (MRI) as the basis for intraoperative navigation is a well-established option for neurosurgical guidance [3]. Furthermore, intraoperative MRI can provide excellent visualization of brain tissues, including sub-structures and surrounding tissue [4]. However, intraoperative MRI is limited by long scan times and the need for special precautions in the operating room to avoid degradation of image quality or artifacts. On the other hand, intraoperative ultrasound (iUS) is portable and low-cost, with fast scan times ranging from seconds to minutes. The iUS modality is also easy to use and allows a spatial resolution within 0.50 mm. However, it is prone to a decrease in imaging quality during surgery, mainly due to attenuation artifacts caused by the different speeds of sound in water and brain tissue. Therefore, registration of MRI scans taken in the planning phase to iUS images acquired during the surgical procedure has been suggested to correct the tissue shift of the brain.
Medical image registration is the process of aligning two or more sets of imaging data into a common coordinate system [5]. It plays a main role in comparing and combining imaging data acquired with different modalities, from various viewpoints, and at different times [6]. Classical image registration approaches fall into two primary types: feature-based and intensity-based matching [7]. These approaches operate on a single pair of images, depend on prior domain knowledge, and require robust parameter tuning [7].
Recently, deep learning approaches have become widely used in artificial intelligence and computer vision, especially for medical applications such as anatomical and pathological feature extraction and tumor segmentation [8]. By exploiting image pairs during the training stage, deep learning methods can optimize over the entire training set, providing a general solution that reflects all parts of the dataset. Moreover, these approaches can provide fast image registration, assisting neurosurgeons in determining the position and extent of brain shift in real time.
Nevertheless, aligning preoperative MRI and iUS for brain shift correction remains a challenging problem due to the different characteristics of each modality and the type of information they provide. Consequently, only a few studies have applied deep learning to the registration of preoperative MRI and iUS for brain shift correction [9], [10], [11]. In this paper, a fast and robust deep learning-based method for automatic registration of preoperative MRI to interventional US is presented to assist neurosurgeons by correcting brain shift intraoperatively.
Methodology
Deformable image registration is considered as an optimization problem, in which a moving image ($I_M$) is transformed into the space of a fixed image ($I_F$). Let $\phi$ be the deformation field that relates the two images. Then, the energy function $\varepsilon$ is calculated by the following equation:

$$\varepsilon(\phi) = \mathcal{L}_{sim}\big(I_F,\, I_M \circ \phi\big) + \lambda\, \mathcal{L}_{smooth}(\phi) \tag{1}$$

where $I_M \circ \phi$ denotes the moving image warped by $\phi$, $\mathcal{L}_{sim}$ measures the similarity between the warped moving image and the fixed image, $\mathcal{L}_{smooth}$ enforces a spatially smooth deformation field, and $\lambda$ is a regularization weight.
Figure 1 depicts the workflow of our proposed deep registration method. Firstly, the moving image (preoperative MRI) and the fixed image (iUS) are provided to the convolutional neural network (CNN), which computes the transformation field ϕ. Then, the moving image is warped into the space of the fixed image using the computed field ϕ.
Figure 1: The workflow of the proposed deformable magnetic resonance imaging-intraoperative ultrasound (MRI-iUS) registration method using a 3D convolutional neural network. Dashed red arrows indicate process steps performed only in the training stage.
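For illustration, the warping step amounts to dense resampling of the moving volume at the displaced voxel coordinates. The following is a minimal sketch assuming the displacement convention ϕ(p) = p + u(p); the function name `warp_volume` is hypothetical, and inside the network the same operation is realized as a differentiable spatial transformer layer rather than with SciPy.

```python
# Minimal warping sketch (not the authors' implementation): resample the
# moving image at p + u(p) with trilinear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(moving, displacement):
    """moving: (D, H, W) volume; displacement: (3, D, H, W) voxel offsets u(p)."""
    grid = np.meshgrid(*(np.arange(s) for s in moving.shape), indexing="ij")
    coords = np.stack(grid).astype(np.float32) + displacement
    # order=1 -> trilinear interpolation; 'nearest' handles out-of-volume samples
    return map_coordinates(moving, coords, order=1, mode="nearest")
```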
Similar to the U-Net architecture [12] and our previous enhancement [8], the CNN consists of two paths: a feature extractor and an image upscaling path. The first part is a contracting path that consists of repeated 3 × 3 × 3 convolutions, each followed by an activation unit, and 2 × 2 × 2 max pooling for down-sampling. By using a stride of 2, the spatial dimension is halved at each step, as in traditional pyramid registration architectures. In the image upscaling path, each step consists of a consecutive up-sampling layer, a 2 × 2 × 2 up-convolution, a batch normalization layer, and a rectified linear unit (ReLU). The learned features from the contracting path are propagated through skip connections that recombine them with the corresponding higher-resolution outputs of the upscaling path, as sketched below.
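The following Keras sketch illustrates this encoder-decoder structure; the number of levels and filters is illustrative, not the authors' exact configuration.

```python
# Illustrative 3D U-Net-style registration CNN (layer/filter counts assumed).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # 3 x 3 x 3 convolution + batch normalization + ReLU activation
    x = layers.Conv3D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_registration_cnn(shape=(128, 128, 96, 2)):
    # Moving (MRI) and fixed (iUS) volumes stacked along the channel axis
    inputs = layers.Input(shape=shape)

    # Contracting path: convolutions with 2 x 2 x 2 max pooling (stride 2)
    e1 = conv_block(inputs, 16)
    e2 = conv_block(layers.MaxPooling3D(2)(e1), 32)
    e3 = conv_block(layers.MaxPooling3D(2)(e2), 64)

    # Upscaling path: 2 x 2 x 2 up-convolutions with skip connections
    d2 = layers.Conv3DTranspose(32, 2, strides=2, padding="same")(e3)
    d2 = conv_block(layers.Concatenate()([d2, e2]), 32)
    d1 = layers.Conv3DTranspose(16, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.Concatenate()([d1, e1]), 16)

    # Three output channels: the displacement field u(p) along x, y, z
    phi = layers.Conv3D(3, 3, padding="same", name="displacement")(d1)
    return Model(inputs, phi)
```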
Corresponding to the two terms of Eq. (1), the loss function consists of two components: an image similarity term, computed as the mean squared error (MSE) between the warped moving image and the fixed image over all $N$ voxel positions $p$,

$$\mathcal{L}_{sim} = \frac{1}{N} \sum_{p} \Big( I_F(p) - \big[I_M \circ \phi\big](p) \Big)^2 \tag{2}$$

and a smoothness term $\mathcal{L}_{smooth}$ that penalizes the spatial gradients of the deformation field.
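A minimal sketch of this two-component loss in TensorFlow, assuming a diffusion-like smoothness penalty via finite differences; the weight `lam` is a hypothetical hyperparameter:

```python
# Two-component registration loss (sketch): MSE similarity (Eq. (2)) plus a
# finite-difference smoothness penalty on the displacement field phi.
import tensorflow as tf

def mse_loss(fixed, warped_moving):
    # Eq. (2): mean squared intensity difference over all voxels
    return tf.reduce_mean(tf.square(fixed - warped_moving))

def smoothness_loss(phi):
    # phi: (batch, D, H, W, 3); penalize gradients along each spatial axis
    dx = phi[:, 1:, :, :, :] - phi[:, :-1, :, :, :]
    dy = phi[:, :, 1:, :, :] - phi[:, :, :-1, :, :]
    dz = phi[:, :, :, 1:, :] - phi[:, :, :, :-1, :]
    return (tf.reduce_mean(tf.square(dx)) + tf.reduce_mean(tf.square(dy))
            + tf.reduce_mean(tf.square(dz)))

def total_loss(fixed, warped_moving, phi, lam=0.01):
    return mse_loss(fixed, warped_moving) + lam * smoothness_loss(phi)
```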
Experiments
Data and experimental setup
This study was performed using the public REtroSpective Evaluation of Cerebral Tumors (RESECT) dataset [13]. The dataset includes preoperative MRI, iUS images, and expert-labeled anatomical landmarks from 23 patients who underwent surgery for low-grade gliomas (Grade II) at St. Olavs University Hospital, Trondheim, Norway. MRI scans include two modalities, T1-weighted Gd-enhanced and T2-weighted fluid-attenuated inversion recovery (FLAIR), with a voxel size of 1 × 1 × 1 mm³, whereas the interventional 3D US data cover the entire tumor region at three different surgical stages (before dura opening, during resection, and after resection) with resolutions ranging from 0.14 × 0.14 × 0.14 mm³ to 0.24 × 0.24 × 0.24 mm³.
In our experiments, MRI T2-FLAIR and pre-dura-opening iUS images are used as the moving image IM and the fixed image IF, respectively. Since MRI and iUS scans are acquired with two different settings, a pre-processing stage is mandatory. First, the iUS images are resampled to the same voxel resolution as the moving images, i.e., 1 × 1 × 1 mm³. After that, T2-FLAIR images are cropped to the same orientation and dimensions as the iUS scans. Then, all images are downsampled to a resolution of 128 × 128 × 96 voxels to reduce computation and memory consumption. The training set contains 18 pairs of MRI and iUS images, whereas the remaining four cases are used for testing. The proposed method was implemented in Python using the Keras library with a TensorFlow backend. The Adam optimizer was used with an initial learning rate of 0.0001 and a batch size of four.
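For reference, the following sketch shows how such a pre-processing pipeline could be implemented with SimpleITK; file names are placeholders, and the exact resampling settings used by the authors may differ.

```python
# Pre-processing sketch (assumed settings): resample iUS to 1 mm isotropic
# voxels to match the MRI; cropping and downsampling follow analogously.
import SimpleITK as sitk

def resample_to_spacing(image, spacing=(1.0, 1.0, 1.0)):
    old_spacing, old_size = image.GetSpacing(), image.GetSize()
    new_size = [int(round(sz * osp / nsp))
                for sz, osp, nsp in zip(old_size, old_spacing, spacing)]
    return sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                         image.GetOrigin(), spacing, image.GetDirection(),
                         0.0, image.GetPixelID())

ius = resample_to_spacing(sitk.ReadImage("Case1-US-before.nii.gz"))  # placeholder path
mri = sitk.ReadImage("Case1-FLAIR.nii.gz")  # placeholder path; already 1 mm isotropic
# Next steps: crop the MRI to the iUS field of view, then downsample both
# volumes to 128 x 128 x 96 voxels for training.
```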
Results and evaluation
The performance of our proposed method has been evaluated and compared with three well-known public registration methods. The first method is the symmetric image normalization method (SyN) from the Advanced Normalization Tools (ANTs) [14], using cross-correlation (CC) as the similarity measure. The second method is the symmetric block-matching registration of the open-source NiftyReg package [15]. The dense displacement sampling registration (deeds) [16] is used as the third baseline method.
Figure 2 shows the results of aligning two MRI T2-FLAIR volumes (moving images) to intraoperative US (fixed images) using our proposed method. The columns show the preoperative MRI, the intraoperative US, and the overlap of both images before and after deformable registration with the proposed method. In the upper case (Patient 14), our proposed method correctly aligns the brain tumor (blue arrows) as well as the sulci (white arrows). In the second case (Patient 8), our trained network minimized the MSE at the tumor boundaries, but other structures are affected and show a larger registration error than the initial alignment. Nevertheless, the overall MRI-iUS registration result is still better than the initial alignment.
Figure 2: A sample of MRI-iUS registration results using the proposed deep registration method. The original MRI (moving image) and colored iUS (fixed image) are shown in the first two columns. The overlays of the iUS images on the MRI before and after the registration procedure are presented in the third and fourth columns, respectively. The arrows indicate brain shift for the tumor (blue) and other anatomical structures such as sulci (white).
For further evaluation of the proposed model, two different metrics are compared against state-of-the-art image registration techniques and summarized in Table 1. First, the MSE (refer to Eq. (2)) between the predicted deformed MRI and the ground truth, generated using the MINC toolkit (https://bic-mni.github.io/), is calculated. Second, the average runtimes of the three baseline methods as well as our proposed approach are listed in the last row. As the results show, the proposed registration method outperforms the other methods in terms of both MSE and average runtime. With an average MSE of 85, the proposed method is significantly better than the classical approaches, which yield average MSEs of 1068, 1608, and 1025 for ANTs, NiftyReg, and deeds, respectively. Remarkably, more than three 3D MRI-iUS registrations per second can be performed on the same GPU using our proposed method, whereas the classical approaches are far slower, ranging from an average of 14.5 s for NiftyReg to 1862 s (about 31 min) for ANTs.
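As a minimal illustration of the first metric, the per-case MSE can be computed directly from the voxel arrays (a sketch; array loading and any masking are omitted):

```python
# MSE evaluation sketch (Eq. (2)) between a warped MRI prediction and the
# MINC-toolkit ground truth, given as equally shaped numpy arrays.
import numpy as np

def mse(prediction, ground_truth):
    diff = prediction.astype(np.float64) - ground_truth.astype(np.float64)
    return float(np.mean(diff ** 2))
```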
Table 1: Evaluation results of the proposed approach compared with classical methods on the REtroSpective Evaluation of Cerebral Tumors (RESECT) dataset. For each case, the mean squared error (MSE) is listed. The last row gives the average runtime (in seconds). The four test cases are indicated in bold.
Pair # | Initial | ANTs | NiftyReg | Deeds | Ours |
---|---|---|---|---|---|
Patient 1 | 1678 | 960 | 1505 | 825 | 109 |
Patient 2 | 1097 | 604 | 937 | 515 | 40 |
Patient 3 | 633 | 406 | 588 | 334 | 43 |
Patient 4 | 852 | 462 | 745 | 382 | 90 |
Patient 5 | 993 | 670 | 884 | 823 | 74 |
Patient 6 | 846 | 414 | 836 | 427 | 32 |
Patient 7 | 1248 | 626 | 1108 | 758 | 28 |
Patient 8 | 1117 | 641 | 865 | 588 | 132 |
Patient 9 | 1152 | 445 | 938 | 746 | 59 |
Patient 10 | 2053 | 1023 | 1793 | 1286 | 58 |
Patient 11 | 2618 | 2105 | 2193 | 1912 | 23 |
Patient 12 | 3545 | 1725 | 2832 | 883 | 61 |
Patient 13 | 1057 | 790 | 919 | 820 | 12 |
Patient 14 | 1829 | 1303 | 1532 | 1244 | 67 |
Patient 15 | 836 | 634 | 718 | 587 | 27 |
Patient 16 | 1748 | 517 | 1407 | 1117 | 26 |
Patient 17 | 4073 | 1429 | 2879 | 1477 | 71 |
Patient 18 | 2614 | 975 | 1844 | 903 | 38 |
Patient 19 | 1532 | 567 | 1178 | 560 | 13 |
Patient 20 | 3958 | 2372 | 3135 | 2109 | 717 |
Patient 21 | 3802 | 2647 | 3242 | 2273 | 46 |
Patient 22 | 4248 | 2186 | 3309 | 1990 | 107 |
Avg. MSE | 1979 | 1068 | 1608 | 1025 | 85 |
Avg. time | – | 1862.0 | 14.5 | 55.0 | 0.317 |
Conclusion
In this study, a deformable MRI-iUS image registration method based on a 3D deep convolutional neural network was proposed. The proposed registration method can successfully correct brain shift (see Figure 2). Moreover, our deep registration method is fully automated and outperforms state-of-the-art image registration methods in terms of both mean squared error and average runtime, as illustrated in Table 1.
We are currently working on improving and validating the proposed deep MRI-iUS registration in the clinical routine of neurosurgery to enhance brain shift correction. The overall registration performance will be further analysed using other metrics such as target registration errors (TREs).
Funding source: German Academic Exchange Service (DAAD)
Award Identifier / Grant number: 91705803
Research funding: The corresponding author is funded by the German Academic Exchange Service (DAAD) under scholarship No. 91705803.
Author contributions: Franziska Mathis-Ullrich and Oliver Burgert contributed equally to this work.
Conflict of interest: Authors state no conflict of interest.
Informed consent: The patient data included in this article are from an open public dataset.
Ethical approval: This article does not contain any studies with human participants or animals performed by the authors.
References
1. Dimaio, SP, Archip, N, Hata, N, Talos, I, Warfield, SK, Majumdar, A, et al. Image-guided neurosurgery at Brigham and Women’s Hospital. IEEE Eng Med Biol Mag 2006;25:67–73. https://doi.org/10.1109/memb.2006.1705749.
2. Schulz, C, Waldeck, S, Mauer, UM. Intraoperative image guidance in neurosurgery: development, current indications, and future trends. Radiol Res Pract 2012;2012:1–9. https://doi.org/10.1155/2012/197364.
3. Miner, RC. Image-guided neurosurgery. J Med Imaging Radiat Sci 2017;48:328–35. https://doi.org/10.1016/j.jmir.2017.06.005.
4. Siekmann, M, Lothes, T, König, R, Wirtz, CR, Coburger, J. Experimental study of sector and linear array ultrasound accuracy and the influence of navigated 3D-reconstruction as compared to MRI in a brain tumor model. Int J Comput Assist Radiol Surg 2018;13:471–8. https://doi.org/10.1007/s11548-018-1705-y.
5. Karar, ME, Noack, T, Kempfert, J, Falk, V, Burgert, O. Real-time tracking of aortic valve landmarks based on 2D-2D fluoroscopic image registration. CURAC Workshop Proc 2010;1475:57–60. ISBN: 978-3-86247-078-5, http://ceur-ws.org/Vol-1475/.
6. Hajnal, JV, Hill, DLG, Hawkes, DJ. Medical image registration. Boca Raton, Florida (US): CRC Press; 2001:1–8 pp. https://doi.org/10.1201/9781420042474.
7. Liu, J, Singh, G, Al’Aref, S, Lee, B, Oleru, O, Min, JK, et al. Image registration in medical robotics and intelligent systems: fundamentals and applications. Adv Intell Syst 2019;1:1900048. https://doi.org/10.1002/aisy.201900048.
8. Zeineldin, RA, Karar, ME, Coburger, J, Wirtz, CR, Burgert, O. DeepSeg: deep neural network framework for automatic brain tumor segmentation using magnetic resonance FLAIR images. Int J Comput Assist Radiol Surg 2020;15:909–20. https://doi.org/10.1007/s11548-020-02186-z.
9. Wright, R, Khanal, B, Gomez, A, Skelton, E, Matthew, J, Hajnal, JV, et al. LSTM spatial co-transformer networks for registration of 3D fetal US and MR brain images. In: Melbourne, A, Licandro, R, DiFranco, M, Rota, P, Gau, M, Kampel, M, et al., editors. Data driven treatment response assessment and preterm, perinatal, and paediatric image analysis PIPPI 2018, DATRA 2018. Lecture notes in computer science. Cham: Springer International Publishing; 2018:149–59 pp. vol 11076. https://doi.org/10.1007/978-3-030-00807-9_15.
10. Heinrich, MP. Intra-operative ultrasound to MRI fusion with a public multimodal discrete registration tool. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. POCUS 2018, BIVPCS 2018, CuRIOUS 2018, CPM 2018. Lecture notes in computer science. Cham: Springer International Publishing; 2018:159–64 pp. https://doi.org/10.1007/978-3-030-01045-4_19.
11. Shams, R, Boucher, MA, Kadoury, S. Intraoperative brain shift correction with weighted locally linear correlations of 3DUS and MRI. In: Simulation, image processing, and ultrasound systems for assisted diagnosis and navigation. POCUS 2018, BIVPCS 2018, CuRIOUS 2018, CPM 2018. Lecture notes in computer science. Cham: Springer International Publishing; 2018:179–84 pp. https://doi.org/10.1007/978-3-030-01045-4_22.
12. Ronneberger, O, Fischer, P, Brox, T. U-net: convolutional networks for biomedical image segmentation. In: Navab, N, Hornegger, J, Wells, W, Frangi, A, editors. Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). Cham: Springer International Publishing; 2015:234–41 pp. https://doi.org/10.1007/978-3-319-24574-4_28.
13. Xiao, Y, Fortin, M, Unsgård, G, Rivaz, H, Reinertsen, I. REtroSpective evaluation of cerebral tumors (RESECT): a clinical database of pre-operative MRI and intraoperative ultrasound in low-grade glioma surgeries. Med Phys 2017;44:3875–82. https://doi.org/10.1002/mp.12268.
14. Avants, B, Epstein, C, Grossman, M, Gee, J. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med Image Anal 2008;12:26–41. https://doi.org/10.1016/j.media.2007.06.004.
15. Drobny, D, Vercauteren, T, Ourselin, S, Modat, M. Registration of MRI and iUS data to compensate brain shift using a symmetric block-matching based approach. In: Stoyanov, D, Taylor, Z, Aylward, S, Tavares, JMRS, Xiao, Y, Simpson, A, et al., editors. MICCAI challenge 2018 for correction of brain shift with intraoperative ultrasound (CuRIOUS 2018). Lecture notes in computer science. Cham: Springer International Publishing; 2018:172–8 pp. vol 1. https://doi.org/10.1007/978-3-030-01045-4_21.
16. Heinrich, MP, Jenkinson, M, Brady, M, Schnabel, JA. MRF-based deformable registration and ventilation estimation of lung CT. IEEE Trans Med Imaging 2013;32:1239–48. https://doi.org/10.1109/tmi.2013.2246577.
© 2020 Ramy A. Zeineldin et al., published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.