
Wall enhancement segmentation for intracranial aneurysm

  • Annika Niemann, Naomi Larsen, Bernhard Preim and Sylvia Saalfeld
Published/Copyright: September 17, 2020

Abstract

We present a tool for automatic segmentation of wall enhancement of intracranial aneurysms in black blood MRI. The results of the automatic segmentation with several configurations are compared to manual expert segmentations. While the manual segmentation includes some voxels of lower intensity that are not present in the automatic segmentation, the overall volume of the automatic segmentation is higher.

Introduction

A relevant part of aneurysm diagnosis is the rupture risk assessment. Recent research introduced wall enhancement in black blood MRI as a possible indication of a higher risk of aneurysm rupture. However, within these studies, the identification of wall enhancement relies on the subjective assessment of medical experts.

In contrast to other MR images, the vessels appear black instead of bright in black blood MRI. This technique allows visualisation of the vessel lumen as well as the surrounding wall. Wall enhancement around aneurysms could be an indication of inflammatory reaction and wall damage. While morphological parameters are often used for rupture risk assessment, they are not sufficient [1]. Wall enhancement as visible in black blood MRI might be able to improve rupture risk assessment [2].

In a study by Fu et al. [3], a correlation between symptoms (sentinel headaches or third nerve palsy) and wall enhancement was found. Two radiologists were asked to determine whether aneurysm wall enhancement was present or not. With a similar approach, Edjlali et al. [4] observed that wall enhancement is found more frequently in unstable than in stable aneurysms. Wang et al. [5] and Liu et al. [6] concluded that wall enhancement could help to predict aneurysm rupture. They compared pre- and post-contrast images to define wall enhancement. Results based on such expert assessments are challenging to reproduce.

Roa et al. [7] compared different objective measurements for wall enhancement and concluded that a measurement based on the aneurysm-to-pituitary stalk contrast ratio is the most reliable method and robust with regard to different manufacturers and magnetic field strengths of the MR scanners. The measure was used to classify aneurysms as with or without wall enhancement. In contrast, we present a segmentation that distinguishes different degrees of wall enhancement.

Materials and methods

We define wall enhancement as brighter voxel values near the aneurysm. As absolute grey values in MR images cannot be compared directly, the intensities are evaluated relative to a reference value from the same image. The thresholds for values that are considered enhanced are set as percentages of this reference value. We use five thresholds to segment the images into non-enhanced tissue and five wall enhancement classes. The thresholds used are further described in the Experiments section. Motivated by the results of Roa et al. [7], we use the brightest pituitary value as the reference value for our wall enhancement segmentation.
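As a minimal illustration of this rule (in Python rather than the original MATLAB prototype; the function name is ours and the default threshold values correspond to set a in Table 1), a voxel intensity can be mapped to an enhancement class as follows:

    def enhancement_class(intensity, reference, thresholds=(0.85, 0.75, 0.70, 0.65, 0.60)):
        # Thresholds are fractions of the reference value (brightest pituitary voxel),
        # sorted in descending order. Class 1 is the strongest enhancement,
        # 0 means the voxel is not considered enhanced.
        for cls, fraction in enumerate(thresholds, start=1):
            if intensity >= fraction * reference:
                return cls
        return 0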

Prototype

We implemented a prototype in MATLAB (MATLAB 2020a, The MathWorks Inc.) for the segmentation of wall enhancement of intracranial aneurysms. First, a black blood MRI is loaded. Then, the pituitary is identified by searching for bright circles near the centre of the image slices. The brightest value of the pituitary stalk is proposed as reference value. The user has three options to adjust the reference value: set it to the brightest value occurring in the image, manually type in a value, or select a new point in the pituitary. In the last case, the brightest value near the selected point is used to account for imprecise point selection. With that option, a faulty automatic pituitary detection can easily be corrected. The user can choose between 1 and 10 wall enhancement classes and set the corresponding thresholds as percentages of the reference value. In addition to the semi-automatic reference value selection, the aneurysm has to be segmented. This is done by setting a seed point in the middle of the aneurysm and performing region growing. After the aneurysm segmentation has been performed, the neighbouring voxels are determined and, according to their intensity values, the degree of enhancement is assigned. The wall enhancement segmentation is overlaid in red: a darker red depicts a higher wall enhancement and a transparent red a lower wall enhancement. A summary of the wall enhancement segmentation shows the number of voxels and the respective volume of each enhancement class (Figure 1).
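The following Python sketch outlines this pipeline under simplifying assumptions; the original prototype is implemented in MATLAB, and the region-growing tolerance, the one-voxel neighbourhood and all function names are illustrative choices rather than the published implementation:

    import numpy as np
    from scipy import ndimage

    def grow_aneurysm(volume, seed, tolerance):
        # Simple region growing: keep all voxels connected to the seed whose
        # intensity differs from the seed intensity by at most 'tolerance'
        # (an absolute intensity difference; the lumen is dark in black blood MRI).
        candidate = np.abs(volume - volume[seed]) <= tolerance
        labels, _ = ndimage.label(candidate)
        return labels == labels[seed]

    def segment_wall_enhancement(volume, aneurysm_mask, reference,
                                 thresholds=(0.85, 0.75, 0.70, 0.65, 0.60)):
        # Classify voxels directly neighbouring the aneurysm into enhancement
        # classes (0 = not enhanced, 1 = strongest enhancement).
        neighbourhood = ndimage.binary_dilation(aneurysm_mask) & ~aneurysm_mask
        classes = np.zeros(volume.shape, dtype=np.uint8)
        for cls, fraction in enumerate(thresholds, start=1):
            # assign the strongest class first and do not overwrite it later
            newly = neighbourhood & (volume >= fraction * reference) & (classes == 0)
            classes[newly] = cls
        return classes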

Figure 1: Prototype of wall enhancement segmentation tool with zoomed in view of segmentation (green rectangles). Blue: aneurysm/vessel, red cross: cursor position, red overlay: segmented wall enhancement.
Figure 1:

Prototype of wall enhancement segmentation tool with zoomed in view of segmentation (green rectangles). Blue: aneurysm/vessel, red cross: cursor position, red overlay: segmented wall enhancement.

Experiments

We used the tool to segment wall enhancement around intracranial aneurysms for 10 patients (patient IDs: 1, 4–12). Our wall enhancement segmentation divides the wall enhancement into five classes. The thresholds are given as percentages of the intensity at the pituitary stalk. Four sets of thresholds (a, b, c, d) are explored. The wall enhancement classes are based on the intensity as summarised in Table 1. The wall enhancement segmentations of the tool were compared with manual, binary segmentations.

Table 1:

Thresholds used for wall enhancement segmentation.

     Class 1   Class 2   Class 3   Class 4   Class 5
a    85%       75%       70%       65%       60%
b    75%       65%       55%       45%       35%
c    70%       60%       50%       40%       30%
d    60%       50%       40%       30%       20%
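For illustration, the four threshold sets can be represented as descending fractions of the reference intensity, and the per-class volumes as well as the comparison with the manual segmentation can be derived by counting voxels (a Python sketch; the class map is the output of a per-voxel classification as sketched above, and the voxel volume depends on the image resolution):

    import numpy as np

    # Threshold sets from Table 1, as fractions of the pituitary reference intensity.
    THRESHOLD_SETS = {
        "a": (0.85, 0.75, 0.70, 0.65, 0.60),
        "b": (0.75, 0.65, 0.55, 0.45, 0.35),
        "c": (0.70, 0.60, 0.50, 0.40, 0.30),
        "d": (0.60, 0.50, 0.40, 0.30, 0.20),
    }

    def class_volumes(class_map, voxel_volume_mm3):
        # Volume in mm^3 segmented per wall enhancement class (1..5).
        return {cls: int(np.sum(class_map == cls)) * voxel_volume_mm3
                for cls in range(1, int(class_map.max()) + 1)}

    def compare_to_manual(class_map, manual_mask, voxel_volume_mm3):
        # Split the manual (binary) segmentation into the volume also found
        # by the tool ('correct') and the volume the tool missed.
        auto_mask = class_map > 0
        correct = int(np.sum(manual_mask & auto_mask)) * voxel_volume_mm3
        missed = int(np.sum(manual_mask & ~auto_mask)) * voxel_volume_mm3
        return correct, missed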

Results

The results of the automatic segmentation differ depending on the selected thresholds. When all classes are combined, the automatic segmentation covers a larger volume than the manual segmentation. Figure 2 shows this exemplarily for patient 1 and patient 4. For patient 4, the additional segmentation volume increases with lower thresholds. For patient 1, this does not happen, as all voxels in the search area around the aneurysm are already segmented with the highest threshold set (a). In both cases, the volume of the higher wall enhancement classes increases with lower thresholds. The same can be seen in Figure 3, where the segmented volume of each wall enhancement class is shown for patient 7 and patient 12.

Figure 2: Comparison of wall enhancement segmentation of our tool with manual segmentation for patient 5 and 10: missed volume (volume segmented by expert but not by the tool) and correct volume (volume segmented by both; for the tool the corresponding wall enhancement class of the segmentation is shown).

Figure 3: Comparison of segmented volume of manual segmentation and automatic segmentation of wall enhancement class 1 for patient 4 and 5.

Sometimes the manual segmentation includes voxels with intensities much lower than the reference value at the pituitary. For example, in patient 10, the manual segmentation includes many voxels of low intensity. As Figure 4 shows, even with the lowest threshold combination (d), where the minimum threshold to include voxels in the wall enhancement segmentation is 20% of the maximal pituitary intensity, some voxels of the manual segmentation are below this threshold. With lower thresholds, more of the manual segmentation is included and sorted into higher wall enhancement classes (Figure 5).
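The share of manually segmented voxels that fall below the lowest threshold can be quantified directly (a small Python helper; the function name and the default fraction of 20%, corresponding to threshold set d, are illustrative):

    import numpy as np

    def fraction_below_threshold(volume, manual_mask, reference, fraction=0.20):
        # Share of manually segmented voxels whose intensity is below the given
        # fraction of the pituitary reference value; these voxels cannot be
        # reached by the automatic segmentation with that threshold set.
        intensities = volume[manual_mask]
        return float(np.mean(intensities < fraction * reference))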

Figure 4: Result of the automatic segmentation: volume of each wall enhancement class for patient 7 and patient 9 (wall enhancement classes 1 and 2 occur in 9c and 9d only in very small amounts, less than 5 mm³).

Figure 5: Additional segmented volume of each wall enhancement class in patient 1 and 4.

In Figure 6, the manual segmentation volume is compared to the volume segmented as wall enhancement class 1. The threshold required for the volume of wall enhancement class 1 to be comparable to the manual segmentation is between 55 and 65%. For patient 5, a threshold between 45 and 55% would be optimal.

Figure 6: Histogram of voxel intensities inside the manual segmentation of patient 10 and the corresponding thresholds d.

Discussion

While a consistent and reproducible definition of wall enhancement is used, it is challenging to find thresholds suitable for all data sets. To include the entire manually segmented wall enhancement area, low thresholds (20% or lower) can be necessary. At the same time, such thresholds tend to include larger areas not contained in the manual segmentations and increase the volume of the higher wall enhancement classes. The optimal thresholds might be further evaluated by comparing the different segmentations and the resulting volumes of the wall enhancement classes to the rupture risk. This could determine which configuration of the automatic segmentation produces the best results for rupture risk prediction.

Small inaccuracies might be present in the manual segmentation due to several circumstances. The voxelisation of the smooth contours can lead to small differences at the segmentation border. The algorithm evaluates each voxel separately and decides whether wall enhancement is visible and to which wall enhancement class the voxel belongs. Partial volume effects might influence the segmentation. It is unlikely that a manual segmentation would be that detailed; instead, an expert likely evaluates several voxels together. Therefore, the manual segmentation is more prone to include voxels of lower intensity.

A problem for manual segmentation might be the unreliable perception of grey values. While the computer evaluates the exact intensity value and compares it to the reference value, human perception of grey intensities is influenced by the surrounding values. Depending on the adjacent voxels, the same value might appear lighter or darker to a human performing the segmentation. The automatic segmentation is therefore more consistent and reliable in the evaluation of grey intensities.

Here, the automatic segmentation was only compared to one manual segmentation per patient. Different raters might provide slightly different segmentations, and it would be interesting to compare the automatic segmentation to further manual segmentations. Additionally, further configurations of the automatic segmentation (number of wall enhancement classes, thresholds) could be considered.

This segmentation works on individual voxels. To correspond better with human segmentations, it might be useful to develop an algorithm that decides on wall enhancement not for individual voxels but for small groups of voxels. Furthermore, the overall shape (for example, avoiding small holes) might be taken into account to better fit the manual segmentations.
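One conceivable way to realise such a group-wise decision, which is not part of our prototype, is to average intensities over a small neighbourhood before thresholding and to close small holes afterwards (a Python sketch; the 3×3×3 window and the single closing step are illustrative choices):

    from scipy import ndimage

    def groupwise_enhancement_mask(volume, search_mask, reference, fraction,
                                   neighbourhood=3):
        # Decide on wall enhancement per small voxel group instead of per voxel:
        # the local mean intensity over a cubic window must exceed
        # fraction * reference. Small holes are closed afterwards to better
        # match the smoother manual segmentations.
        local_mean = ndimage.uniform_filter(volume.astype(float), size=neighbourhood)
        mask = search_mask & (local_mean >= fraction * reference)
        return ndimage.binary_closing(mask)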

Conclusion

We presented a detailed, automatic wall enhancement segmentation for intracranial aneurysms. The automatic segmentation with five wall enhancement classes and different thresholds for these classes was compared to binary manual segmentations. Thresholds that fit the manual segmentation for all patients are challenging to find.


Corresponding author: Annika Niemann, Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Magdeburg, Germany, E-mail:

Funding source: German Research Foundation

Award Identifier / Grant number: SA 3461/2-1

Funding source: Federal Ministry of Education and Research within the Forschungscampus STIMULATE

Award Identifier / Grant number: 13GW0095A

  1. Research funding: This work is partly funded by the German Research Foundation (SA 3461/2-1) and the Federal Ministry of Education and Research within the Forschungscampus STIMULATE (13GW0095A).

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: Authors state no conflict of interest.

  4. Informed consent: Informed consent has been obtained from all individuals included in this study.

  5. Ethical approval: The research related to human use complies with all the relevant national regulations, institutional policies and was performed in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors’ institutional review board or equivalent committee.

References

1. Niemann, U, Berg, P, Niemann, A, Beuing, O, Preim, B, Spiliopoulou, M, et al. Rupture status classification of intracranial aneurysms using morphological parameters. In: 31st IEEE CBMS International Symposium on Computer-Based Medical Systems. Karlstad, Sweden: IEEE; 2018:48–53 pp. https://doi.org/10.1109/cbms.2018.00016.

2. Petridis, AK, Filis, A, Chasoglou, E, Fischer, I, Dibué-Adjei, M, Bostelmann, R, et al. Aneurysm wall enhancement in black blood MRI correlates with aneurysm size. Black blood MRI could serve as an objective criterion of aneurysm stability in near future. Clin Pract 2018;8:1089. https://doi.org/10.4081/cp.2018.1089.

3. Fu, Q, Guan, S, Liu, C, Wang, K, Cheng, J. Clinical significance of circumferential aneurysmal wall enhancement in symptomatic patients with unruptured intracranial aneurysms: a high-resolution MRI study. Clin Neuroradiol 2018;28:509–14. https://doi.org/10.1007/s00062-017-0598-4.

4. Edjlali, M, Gentric, JC, Régent-Rodriguez, C, Trystram, D, Hassen, WB, Lion, S, et al. Does aneurysmal wall enhancement on vessel wall MRI help to distinguish stable from unstable intracranial aneurysms? Stroke 2014;45:3704–6. https://doi.org/10.1161/strokeaha.114.006626.

5. Wang, GX, Wen, L, Lei, S, Qian, R, Yin, JB, Gong, ZL, et al. Wall enhancement ratio and partial wall enhancement on MRI associated with the rupture of intracranial aneurysms. J Neurointerventional Surg 2018;10:566–70. https://doi.org/10.1136/neurintsurg-2017-013308.

6. Liu, P, Qi, H, Liu, A, Lv, X, Jiang, Y, Zhao, X, et al. Relationship between aneurysm wall enhancement and conventional risk factors in patients with unruptured intracranial aneurysms: a black-blood MRI study. Intervent Neuroradiol 2016;22:501–5. https://doi.org/10.1177/1591019916653252.

7. Roa, JA, Zanaty, M, Osorno-Cruz, C, Ishii, D, Bathla, G, Ortega-Gutierrez, S, et al. Objective quantification of contrast enhancement of unruptured intracranial aneurysms: a high-resolution vessel wall imaging validation study. J Neurosurg 2020:1–8. https://doi.org/10.3171/2019.12.jns192746 [Epub ahead of print].

Published Online: 2020-09-17

© 2020 Annika Niemann et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
