
Visual guidance for auditory brainstem implantation with modular software design

Milovan Regodic and Wolfgang Freysinger
Published/Copyright: September 17, 2020

Abstract

An auditory brainstem implant (ABI) attempts to restore hearing in patients with bilaterally damaged hearing nerves. Optimal placement of the ABI is challenging even with the auditory measurements available on the brainstem. We present a visual guidance system that aims to assist during intraoperative ABI placement. As a starting point, a surgical probe is navigated and intuitively visualized in the microscope oculars. The system is developed using modular and agile software design techniques. In a usability study, the participants were able to detect invisible targets marked in a phantom image with millimetric precision. To the best of our knowledge, this is the first time that this kind of visual guidance is presented. In the future, the system will be extended to the surgical instruments used for ABI placement.

Introduction

For decades, image-guided surgery (IGS) systems have assisted surgeons with accurate intraoperative navigation in interventions such as cochlear implant placement and lateral skull base surgery [1]. These systems display surgical information on a separate monitor, requiring the surgeon to divert attention and look away from the surgical scene. This is not only inefficient for precise neurosurgery in critical brain areas such as the brainstem but also a potential source of errors.

Placing an auditory brainstem implant (ABI) on the cochlear nucleus in the brainstem is a high-risk surgery in which brainstem trauma and implant misplacement must be avoided [2]. The surgery is an alternative for patients with profound hearing loss who are not candidates for a cochlear implant. The ABI electrode bypasses the cochlear (hearing) nerve, which may be damaged in these patients, and directly stimulates the auditory system at the cochlear nucleus in the brainstem. To find the position on the cochlear nucleus where the auditory system is responsive, a placement electrode assisted by electrically evoked auditory brainstem responses (EABRs) is used [3]. Once assessed, the optimal position has to be remembered and then reproduced with the ABI electrode. However, with current technology there is no quantifiable method for storing and reproducing the optimal position [2].

This work describes the implementation of a visual guidance system in the eyepieces of a surgical microscope that aims to assist during ABI placement. The system retrieves the stored position determined with the placement electrode and then guides the surgeon toward reproducing the same spatial position with the implant electrode. As a starting point, the positions are assessed by navigating the surgical probe that can be used to point the placement/implant electrode during surgery. The probe navigation is realized by integrating the proposed visual guidance (VG) into an in-house developed surgical navigation system (IGS). A modular, layered software architecture is applied throughout this work, breaking the complexity into smaller, manageable, and well-defined modules that are easier to test, verify, and reuse. The development process uses the iterative, incremental agile technique SCRUM to accommodate the requirements of the IEC 62304 standard for medical device software development.

A first evaluation of the system was performed in a usability experiment measuring user performance in comparison with the in-house IGS system.

System overview

Image-guided surgery system

The implemented IGS system realizes intraoperative navigation by presenting the current position of the surgical instrument inside the patient's brain on preoperative images in the standard reformatted coronal, sagittal, and axial views.
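Conceptually, once the tracked tip position is mapped into voxel coordinates, the reformatted views amount to extracting three orthogonal slices through that voxel. A minimal numpy sketch follows; the array layout and names are assumptions for illustration, not the in-house code.

```python
import numpy as np

def reformatted_views(volume, tip_ijk):
    """Extract the axial, coronal, and sagittal slices through the
    tracked instrument tip from a preoperative image volume.
    volume: (slices, rows, cols) ndarray; tip_ijk: (k, j, i) voxel index."""
    k, j, i = tip_ijk
    axial    = volume[k, :, :]    # transverse plane at the tip
    coronal  = volume[:, j, :]
    sagittal = volume[:, :, i]
    return axial, coronal, sagittal
```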

The intraoperative navigation is achieved using a paired point-based registration [4], which requires at least three pairs of corresponding fiducial markers. Three approaches are available to localize the markers in the patient's physical space. The first uses markers such as bone-implanted screws, localized manually with a surgical probe. The second employs spherical markers with embedded magnetic sensors, placed inside the nasal cavity prior to preoperative imaging [5]. The third approach is semi-automatic and combines the first two. In the image, on the other hand, both the screws and the spherical markers are localized automatically with an in-house developed algorithm [6].
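The cited paired-point step is Horn's closed-form quaternion solution [4]. The following is a minimal numpy sketch of that method, assuming plain (N, 3) arrays of corresponding points; the in-house implementation is not published, so this is illustrative only.

```python
import numpy as np

def paired_point_registration(image_pts, physical_pts):
    """Rigid registration (rotation R, translation t) between corresponding
    point sets using Horn's closed-form quaternion solution [4].
    image_pts, physical_pts: (N, 3) arrays with N >= 3 point pairs."""
    p = np.asarray(image_pts, float)
    q = np.asarray(physical_pts, float)
    p_c, q_c = p.mean(axis=0), q.mean(axis=0)          # centroids
    S = (p - p_c).T @ (q - q_c)                        # 3x3 cross-covariance
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    # Horn's symmetric 4x4 matrix; its dominant eigenvector is the
    # unit quaternion (w, x, y, z) of the optimal rotation.
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    w, x, y, z = np.linalg.eigh(N)[1][:, -1]           # largest eigenvalue last
    R = np.array([
        [w*w + x*x - y*y - z*z, 2*(x*y - w*z),         2*(x*z + w*y)],
        [2*(y*x + w*z),         w*w - x*x + y*y - z*z, 2*(y*z - w*x)],
        [2*(z*x - w*y),         2*(z*y + w*x),         w*w - x*x - y*y + z*z]])
    t = q_c - R @ p_c
    return R, t                                        # physical ~= R @ image + t
```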

Visual guidance system

Our clinical experience convinced us that surgeons want a simple interface with minimal distraction during navigation. With this in mind, we developed an interface that superimposes intuitive virtual cues on the view of the surgical site in the microscope. The superimposed virtual cues are designed to present information without spatial registration to real structures, which simplifies the clinical workflow and avoids additional uncertainties. The distance between the position of the tracked surgical instrument tip, termed the “current position”, and the optimal position determined with EABRs, termed the “target position”, is measured and projected into the visual interface of the microscope. The distance measured in the plane orthogonal to the tip of the surgical instrument, relative to the target position (up/down and left/right), is termed “lateral”. The distance measured along the direction of the tip of the surgical instrument, relative to the target position, is termed “depth”. Figure 1 shows one target and three current positions together with their positional uncertainties, encoded differently for the lateral and depth directions in the superimposed visualization.

Figure 1: The virtual cues superimposed on the view from below on the inferior surface of the synthetic skull base in a surgical microscope Leica M500 N (Leica Microscopy Systems, Heerbrugg, Switzerland). The visual cues encode three positions of the surgical probe in (a), (b), and (c) relative to the same target. In figures: 1 and 2 – the spatial locations of the real target and surgical probe; 3 and 4 – visual cues of the target and current position in the lateral directions; 5 and 6 – visual cues of the error ellipses for spatial positional uncertainties of the target and surgical probe in the lateral direction; 7 – a visual cue of the distance between the current and target position in the depth direction; 8 – a visual cue of the closest direction between the current and target position when the distance is outside of the defined range in the lateral direction (green arrow).
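The lateral and depth distances defined above follow from decomposing the tip-to-target vector along the instrument axis. A minimal sketch of that decomposition, with assumed names and coordinate conventions:

```python
import numpy as np

def lateral_depth(tip_pos, tip_dir, target_pos):
    """Split the tip-to-target vector into a signed depth component along
    the instrument axis and a lateral component in the plane orthogonal
    to it. All inputs are 3-vectors in tracker coordinates."""
    d = np.asarray(tip_dir, float)
    d = d / np.linalg.norm(d)               # unit instrument axis
    v = np.asarray(target_pos, float) - np.asarray(tip_pos, float)
    depth = float(v @ d)                    # negative once the tip passes the target
    lateral = v - depth * d                 # projection onto the orthogonal plane
    return lateral, depth
```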

Visualization for lateral distances: If the distance between the current and target position is within the defined range, a small filled square visualizes the target position and a small filled circle the current position (Figure 1c). The circle moves (up/down and left/right) around the stationary square. As the distance decreases, the two visual cues approach each other until they overlap at zero distance. When the distance is outside the defined range, only the direction toward the target is visualized, with an arrow pointing at the target cue (Figure 1b).

Visualization for depth distances: A thicker circle centered at the target cue visualizes the depth distance between the current and target position. The radius of this circle is a function of that distance, so the circle shrinks as the distance decreases and grows as it increases (Figure 1a–c). When the distance becomes negative, the circle blinks periodically. If the distance is outside the defined range, the circle keeps a constant radius (Figure 1c).
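Putting the two rules together, the cue-selection logic could look like the sketch below. The thresholds, cue names, and the 2D view-plane projection of the lateral vector are illustrative assumptions, not values from the paper.

```python
import numpy as np

LATERAL_RANGE_MM = 10.0      # assumed, not specified in the paper
DEPTH_RANGE_MM = 30.0        # assumed
MAX_RING_RADIUS_PX = 200.0   # assumed injection-display ring size

def select_cues(lateral_xy, depth):
    """Map the lateral error (2D, mm, in the view plane) and the signed
    depth error (mm) to the overlay cues described above."""
    cues = [("target_square", (0.0, 0.0))]                 # stationary target cue
    dist = float(np.hypot(*lateral_xy))
    if dist <= LATERAL_RANGE_MM:
        cues.append(("current_circle", tuple(lateral_xy)))  # moves around the square
    else:
        cues.append(("arrow", (lateral_xy[0] / dist, lateral_xy[1] / dist)))
    if depth < 0.0:
        cues.append(("depth_ring", "blinking"))             # tip past the target
    else:
        r = min(depth / DEPTH_RANGE_MM, 1.0) * MAX_RING_RADIUS_PX
        cues.append(("depth_ring", r))                      # shrinks as depth -> 0
    return cues
```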

Tracking server system

The Tracking Server (TS) is a client–server system that acts as an intermediary between the IGS system and the Aurora electromagnetic tracker (Northern Digital Inc. (NDI), Ontario, Canada). Handling of the tracker is realized with a generic state machine derived from a medical particle accelerator control system used for the safe treatment of patients with cancer [7]. The IGS connects to the TS using an OpenIGTLink-based protocol [8] and sends requests such as initiating and obtaining positional measurements. The intermediary has two benefits. First, the TS acts as a broadcaster when multiple IGS instances are in use; during a surgical intervention, the surgeon and assistants sometimes benefit from simultaneous use. Second, the tracking-related components are isolated in an independent software unit, which facilitates stability during upgrades (e.g., new tracking devices), testing, and validation.
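A minimal sketch of such a server is shown below. The states, events, and client interface are hypothetical simplifications; the actual machine derives from the accelerator control system [7] and the wire format from OpenIGTLink [8].

```python
from enum import Enum, auto

class TrackerState(Enum):
    DISCONNECTED = auto()
    INITIALIZED = auto()
    TRACKING = auto()

# Allowed transitions of the generic tracker state machine (illustrative).
TRANSITIONS = {
    (TrackerState.DISCONNECTED, "connect"):    TrackerState.INITIALIZED,
    (TrackerState.INITIALIZED,  "start"):      TrackerState.TRACKING,
    (TrackerState.TRACKING,     "stop"):       TrackerState.INITIALIZED,
    (TrackerState.INITIALIZED,  "disconnect"): TrackerState.DISCONNECTED,
}

class TrackingServer:
    """Mediates between IGS clients and the tracking hardware."""
    def __init__(self):
        self.state = TrackerState.DISCONNECTED
        self.clients = []                     # multiple IGS instances may subscribe

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:                       # refuse undefined transitions
            raise RuntimeError(f"event {event!r} not allowed in {self.state.name}")
        self.state = nxt

    def broadcast(self, transform):
        # Forward one tracked pose to every connected IGS client,
        # e.g., as an OpenIGTLink TRANSFORM message [8].
        for client in self.clients:
            client.send(transform)
```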

System implementation

Development process

The software lifecycle methodology follows SCRUM, an evolutionary and iterative model, in contrast to the more traditional waterfall models focused on strictly sequential, non-iterative development phases. The SCRUM strategy can be thought of as a cyclical process in which the artifacts are continually improved, each cycle slightly better than the one before. An artifact is a design concept, documentation, code, etc.

Figure 2 shows the SCRUM phases and processes. In the Pregame phase, a single list of requirements for any change to be made to the system is compiled into the Product Backlog. Tasks from the Product Backlog are organized into the Sprint Backlog during a collaborative meeting called Sprint Planning. The development phase, termed a Sprint, implements only a small fraction of the requirements and architecture within a period of one to four weeks. Implementation follow-up is done in a 15-minute Daily Scrum meeting. A Sprint Review meeting is held at the end of each Sprint; at this point, the team decides on further actions, such as continuing the Product Backlog implementation or triggering the Closure phase, which involves release, integration, and validation.

Figure 2: SCRUM phases and processes.

Architecture

The software architecture follows a modular approach in which the components are decoupled into individual, logical modules with specific responsibilities located at different levels of the hierarchy. The modules are structured in a layered hierarchy that defines directional dependence: the modules of a layer and their components are not allowed to depend on those residing in the layers above.

Three main layers denote the specific responsibility of a module: the Low Layer provides basic abstracted functions; the Mid Layer provides specific functions for particular applications; and the App Layer composes particular applications from the other modules.
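The layering rule lends itself to an automated check. The sketch below shows how it could be enforced in a build or CI step; the module names and layer assignments are hypothetical.

```python
# Layer assignment of each module (illustrative names only).
LAYER_OF = {
    "tracking_io": "low", "dicom_io": "low", "math3d": "low",
    "registration": "mid", "guidance_cues": "mid",
    "igs_app": "app", "vg_app": "app",
}
ORDER = {"low": 0, "mid": 1, "app": 2}

def check_dependency(module, imported):
    """A module may depend only on modules in the same or a lower layer."""
    if ORDER[LAYER_OF[imported]] > ORDER[LAYER_OF[module]]:
        raise ImportError(f"{module} ({LAYER_OF[module]}) must not depend "
                          f"on {imported} ({LAYER_OF[imported]})")

# check_dependency("registration", "math3d")  -> passes (mid may use low)
# check_dependency("math3d", "igs_app")       -> raises (low must not use app)
```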

The architecture diagrams are shown in Figure 3 for the IGS/VG systems and in Figure 4 for the TS system.

Figure 3: Layered architecture implementation for the image-guided surgery (IGS)/visual guidance (VG) systems.

Figure 4: Layered architecture implementation for the tracking server (TS) system.

Results and discussion

In the usability study, user performance was evaluated with three system configurations: 1) the IGS system displayed on a screen; 2) the VG system displayed in a surgical microscope; and 3) the IGS + VG combination, with the IGS again displayed on a screen. The systems load a CT image of a custom-designed Lego phantom, which is registered to the physical scene using a paired point-based registration. The participants were asked to rely solely on the systems to localize the centers of eight undisclosed, invisible targets (one at a time) randomly distributed on the plates of the Lego phantom. In the IGS, the targets were displayed as green spheres (approx. 2 mm in diameter) in the cross-sectional views. In the VG, the presented visual cues navigate toward the target. The setup is shown in Figure 5. The participants were divided into two groups. The first group consisted of eight people with clinical experience in IGS (e.g., ENT surgeons). The second group consisted of six people without clinical experience in IGS (e.g., Ph.D. students in the IGS field).

Figure 5: Participants performing the experiment with a covered Lego scene using system (a) VG and (b) IGS + VG.

The following quantitative measures were evaluated: 1) the error distance between the planned and the user-detected target in the image; 2) the total length of the trajectory from the common starting point; and 3) the duration per target. The mean and (standard deviation) results are shown for the clinical and non-clinical groups in Tables 1 and 2, respectively.

Table 1:

Experimentally determined user target error, trajectory, and duration for the clinical expert group; values are mean (standard deviation).

System   | Error [mm] | Trajectory [cm] | Duration [s]
IGS      | 1.6 (0.80) | 121 (71)        | 57 (45.4)
VG       | 1.2 (0.68) | 94 (52)         | 32 (27.9)
IGS + VG | 1.3 (0.70) | 85 (36)         | 28 (19.5)
Table 2:

Experimentally determined user target error, trajectory, and duration for the non-clinical expert group; values are mean (standard deviation).

System   | Error [mm] | Trajectory [cm] | Duration [s]
IGS      | 1.7 (0.85) | 152 (123)       | 62 (63.7)
VG       | 1.3 (1.14) | 101 (48)        | 34 (26.9)
IGS + VG | 1.3 (0.98) | 96 (51)         | 28 (23.4)

These results demonstrate that both clinical and non-clinical experts were able to localize targets in the image with millimetric precision using all three system configurations, with the clinical group achieving shorter trajectories. The user target error and the duration differ significantly between the IGS and VG systems (Wilcoxon signed-rank test, two-sided, p < 0.01).
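For paired per-user measurements, such a comparison can be reproduced with a standard implementation; the arrays below are hypothetical placeholders, since the individual measurements are not published.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired per-user target errors (mm) with the IGS and VG
# systems; placeholder values only, not the study data.
err_igs = np.array([1.9, 1.4, 2.3, 0.8, 1.6, 1.2, 2.0, 1.5])
err_vg  = np.array([1.1, 1.0, 1.6, 0.9, 1.3, 0.8, 1.4, 1.2])

stat, p = wilcoxon(err_igs, err_vg, alternative="two-sided")
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
```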

Future research will address navigation of the surgical instrument that carries the ABI electrode.


Corresponding author: Milovan Regodic, Medical University of Innsbruck, Anichstr. 35, 6020 Innsbruck, Austria, E-mail:
Correction note: Correction added on 14 April, 2021: Due to a typesetting error, reference [6] on page 2 was not linked to a reference entry and the corresponding entry was missing from the bibliography. This has been corrected and the last two references have been relabeled accordingly to [7] and [8].

Funding source: Austrian Research Promotion Agency (FFG)

Award Identifier / Grant number: 855783

Acknowledgment

The authors thank all participants for taking the time to take part in the experiment.

  1. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  2. Research funding: This work was funded by the Austrian Research Promotion Agency (FFG). Project: navABI. Project number: 855783.

  3. Competing interests: Authors state no conflict of interest.

  4. Informed consent: Informed consent was obtained from all individuals included in this study.

References

1. Caversaccio, M, Freysinger, W. Computer assistance for intraoperative navigation in ENT surgery. Minim Invasive Ther Allied Technol 2003;12:36–51. https://doi.org/10.1080/13645700310001577.

2. Wong, K, Kozin, ED, Kanumuri, VV, Vachicouras, N, Miller, J, Lacour, S, et al. Auditory brainstem implants: recent progress and future perspectives. Front Neurosci 2019;13:01. https://doi.org/10.3389/fnins.2019.00010.

3. MED-EL. Surgical guideline: Mi1200 SYNCHRONY ABI [accessed 2020 May 13]. https://s3.medel.com/documents/AW/AW32149_10_SYNCHRONY%20ABI%20Surgical%20Guideline%20-%20EN%20English.pdf.

4. Horn, BKP. Closed-form solution of absolute orientation using unit quaternions. J Opt Soc Am A 1987;4:629. https://doi.org/10.1364/josaa.4.000629.

5. Bardosi, Z, Plattner, C, Özbek, Y, Hofmann, T, Milosavljevic, S, Schartinger, V, et al. CIGuide: in situ augmented reality laser guidance. Int J Comput Assist Radiol Surg 2020;15:49–57. https://doi.org/10.1007/s11548-019-02066-1.

6. Regodic, M, Bardosi, Z, Freysinger, W. Automatic fiducial marker detection and localization in CT images: a combined approach. In: Proc SPIE 11315, Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling; 2020:113151Y. https://doi.org/10.1117/12.2548852.

7. Gutleber, J, Moser, R. The MedAustron accelerator control system: design, installation and commissioning. In: Proc ICALEPCS2013, San Francisco, CA, USA; 2013.

8. Tokuda, J, Fischer, GS, Papademetris, X, Yaniv, Z, Ibanez, L, Cheng, P, et al. OpenIGTLink: an open network protocol for image-guided therapy environment. Int J Med Robot 2009;5:423–34. https://doi.org/10.1002/rcs.274.

Published Online: 2020-09-17

© 2020 Milovan Regodic and Wolfgang Freysinger, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
