Abstract
Tracking of surgical instruments is an essential step towards the modernization of the surgical workflow by a comprehensive surgical landscape guidance system (COMPASS). Real-time tracking of a laparoscopic camera used in minimally-invasive surgery is required for applications in surgical workflow documentation, machine learning, image-localization, and intra-operative visualization. In our approach, an inertial measurement unit (IMU) assists the tool tracking in situations when no line-of-sight is available for infrared (IR) based tracking of the laparoscopic camera. The novelty of this approach lies in the localization method adjusted for the laparoscopic visceral surgery, particularly when the line-of-sight is lost. It is based on IMU tracking and the positioning of the trocar entry point. The trocar entry point is the remote center of motion (RCM), reducing degrees of freedom. We developed a method to tackle localization and a real-time tool for position and orientation estimation. The main error sources are given and evaluated in a test scenario. It reveals that for small changes in penetration length (e.g., pivoting), the IMU’s accuracy determines the error.
Introduction
Tracking of laparoscopic instruments is an application that many researchers have focused on to improve surgery in fields like visualization [1], workflow registration, planning, and evaluation. New concepts have shown benefits for visualization, navigation, and planning in neuro, ear-nose-throat, and spinal surgery. Most concepts rely on overlay technologies that have not been established in visceral surgery, since there are fewer applications for guidance: most organs in the abdomen can move freely. Therefore, we focus neither on overlay technologies to avoid risk structures nor on guidance to a predefined area of interest. Instead, our main motivation is a real-time 3D visualization of the surgical sight and even an image-based detection of tumor spread. Registration of CT images to the patient may not be applicable, but with real-time detection of anatomic landmarks, overlay technologies could become attractive. In addition, our team is conducting research to analyze diagnostic laparoscopy and possible advantages that could arise from technological progress in image-based detection and localization (refer to the talk of Berlet et al.).
With only a few exceptions, researchers have found optical tracking systems hard to use in abdominal surgery: establishing a line-of-sight to the reflective spheres is more difficult than in other surgical fields due to larger movements and rotations.
In this article, we aim to find a tracking system suited to abdominal surgery. We show that orientation and position estimates can be acquired using only an IMU during timespans when no IR tracking is available, together with an IR tracking system to find the trocar entry point. These results motivate further research on multi-sensor fusion. IMU sensor data can serve as at least two sensor inputs: one is the position estimate obtained with the proposed method, the other is the incremental movement from the last known position and orientation. We further plan to integrate optical flow as a sensor input, using a depth-map reconstruction from the 3D endoscope, for more stable results. This document, however, focuses on explaining the algorithm for stable trocar entry point detection and evaluates the prerequisites for, and errors of, the position estimates.
Related work
Several sensor types have become standard in the operating room for laparoscopic surgery over the last decade. Widely used are infrared tracking systems based on reflective spheres and infrared light detectors. Another common type is the electromagnetic tracking system; its drawback is the distortion commonly caused by metallic objects used during surgery. Furthermore, robotic arms can determine the position using forward kinematics and measured joint angles. Other approaches focus on matching 3D CT scans to real-time images obtained by ultrasound sensors [2], 2D cameras [3], [4], or 3D cameras.
Others [5], [6], [7] use SLAM methods to detect movement of the laparoscope and the organs; distinguishing organ motion from camera motion and handling distortions caused by deformation are the main hazards of this approach. Similar to ours is the approach of [8], which applies the remote center of motion (RCM) constraint to SLAM and shows that it outperforms conventional SLAM approaches. Another method [9] combines an IMU with 2D camera information in a SLAM algorithm.
Often the use of IMU sensors in laparoscopic navigation is restricted to orientation information. Position trajectories are frequently computed by double integration of the accelerometer, which becomes erroneous as soon as the IMU is tilted [10]. Other researchers have also investigated integrating IMUs into sensor fusion methods to compensate for the drawbacks of other sensors.
In contrast to previous approaches, we combine the RCM constraint with IMU sensors and produce a position estimate from that information. We use an infrared tracking system for initialization and bridge timespans in which the line-of-sight is lost. This method does not conflict with others: we plan to merge several approaches to increase the accuracy of the localization information until its variance is small enough for our use case.
Methods
Coordinate system definition and representation
The definition of coordinate systems should efficiently cover the laparoscopic setting and is essential for the later formulation of transformation matrices and their computation. Figure 1a depicts the essential coordinate systems. $W_{IMU}$ and $W_{IR}$ denote the native sensor coordinate systems of the IMU and IR sensors, respectively. We further define $C$ to be the camera tip, with its x-axis aligned to the laparoscope axis, and $L$ the coordinate system of the attached sensor. The trocar entry point is the origin of $T$. The coordinate system $W_2$ mounted on the patient is only needed if we expect either the infrared camera or the patient's bed to move. We denote the homogeneous transformation that maps points from frame $A$ to frame $B$ by ${}^{B}T_{A}$, so that $p_B = {}^{B}T_{A}\,p_A$.
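As a minimal illustration of chaining such frames, homogeneous transforms compose by matrix multiplication (a sketch with NumPy; the frame pairing and all numeric values are purely illustrative, not calibration results from this work):

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative composition: map a point from the IMU frame via the marker
# frame L into the IR frame. A 90 degree rotation about z stands in for a
# calibrated orientation offset.
R_z90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
T_L_IMU = homogeneous(np.eye(3), np.array([0.01, 0.0, 0.0]))  # IMU -> L (made-up offset)
T_IR_L = homogeneous(R_z90, np.array([0.5, 0.2, 1.0]))        # L -> IR (made-up pose)
T_IR_IMU = T_IR_L @ T_L_IMU                                   # composed IMU -> IR

p_imu = np.array([0.0, 0.0, 0.0, 1.0])  # IMU-frame origin, homogeneous coordinates
p_ir = T_IR_IMU @ p_imu                 # same point expressed in the IR frame
```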

Figure 1: (a) System setup; (b) position estimates; errors: (c) total position error, (d) deviation due to erroneous angle, (e) deviation in trocar entry point, (f) deviation due to erroneous penetration length.
Trocar position
When the laparoscope is inside the abdomen, we collect samples of the camera axis and compute the trocar position in a least-squares sense. Each sample is a tuple $(a_i, n_i)$ of an anchor point $a_i$ on the laparoscope axis and the unit direction $n_i$ of that axis; it can be acquired only when the optical tracking system has a free line-of-sight to the target. Furthermore, each newly acquired line $(a_i, n_i)$ is added to the set of all lines $\{(a_i, n_i)\}_{i=1}^{N}$. In order to find the trocar entry point $p$ with minimum square distances to all lines, we minimize

$$E(p) = \sum_{i=1}^{N} \left\| \left(I - n_i n_i^{\top}\right)\left(p - a_i\right) \right\|^2,$$

which yields

$$p = S^{-1} \sum_{i=1}^{N} \left(I - n_i n_i^{\top}\right) a_i,$$

with

$$S = \sum_{i=1}^{N} \left(I - n_i n_i^{\top}\right).$$

We can then store the transformation of the trocar frame $T$, whose origin is $p$.
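This least-squares intersection of the collected lines has a small closed-form solution via the normal equations; the sketch below is a minimal NumPy illustration (function name and test geometry are ours, not the published implementation):

```python
import numpy as np

def trocar_point(anchors, directions):
    """Least-squares intersection of lines (a_i, n_i): minimizes
    sum_i || (I - n_i n_i^T)(p - a_i) ||^2 over the point p."""
    S = np.zeros((3, 3))
    b = np.zeros(3)
    for a, n in zip(anchors, directions):
        n = n / np.linalg.norm(n)        # ensure unit direction
        P = np.eye(3) - np.outer(n, n)   # projector onto the line's normal plane
        S += P
        b += P @ a
    return np.linalg.solve(S, b)         # needs >= 2 non-parallel directions

# Synthetic check: three lines through a common point (1, 2, 3).
p_true = np.array([1.0, 2.0, 3.0])
dirs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([1.0, 1.0, 1.0])]
anchors = [p_true - 2.0 * d for d in dirs]  # any point on each line works
p_est = trocar_point(anchors, dirs)
```

With noisy real measurements the lines do not intersect exactly; the solver then returns the point of minimum summed squared distance, which is exactly the least-squares sense described above.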
Laparoscope position estimation
Based on the trocar entry point and the orientation, we get a position estimate as follows:

$$\hat{p}_C = p + l\,\hat{x}_C,$$

with

$$\hat{x}_C = R\,e_x.$$

Here, $l$ denotes the penetration length of the laparoscope along its shaft beyond the trocar entry point $p$, $e_x$ is the unit x-axis, and $R$ is the orientation of the camera frame $C$, obtained from the IMU measurement with the previously calibrated rigid transformation between the sensor frame $L$ and $C$.
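A sketch of this position estimate, assuming the camera x-axis convention from the coordinate-system definition (function and variable names are illustrative):

```python
import numpy as np

def camera_tip(p_trocar, R_wc, depth):
    """Estimate the camera-tip position: the shaft passes through the trocar
    point, so the tip lies `depth` along the shaft axis (camera x-axis).
    R_wc is the camera orientation in world coordinates, e.g. from the IMU."""
    shaft_axis = R_wc @ np.array([1.0, 0.0, 0.0])  # x-axis of C in world frame
    return p_trocar + depth * shaft_axis

# Pivoting about the trocar point changes only the orientation input:
p_t = np.zeros(3)
tip = camera_tip(p_t, np.eye(3), 0.1)  # shaft along world x, 10 cm penetration
```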
Implementation
Sensors
Our prototype comprises an NDI Polaris Vega IR camera (Waterloo, Ontario, Canada) with tools of unique reflective-sphere geometry, as well as MbientLab MetaWear IMU sensor boards (San Francisco, California, USA). Communication with the infrared camera is established via Ethernet, and we read from the IMU via Bluetooth. The receiver publishes all data on dedicated topics via MQTT (an IoT framework). The sensor-fusion and calibration procedures also publish their data to the centralized broker. The IoT setup requires a network with a running MQTT broker in the operating room (OR).
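A minimal sketch of how one IMU sample could be serialized for such a per-sensor MQTT topic (the topic layout and field names are our assumptions, not the project's actual schema; paho-mqtt is one common client library):

```python
import json
import time

def imu_message(topic_root, sensor_id, quat, acc):
    """Serialize one IMU sample for publication on a per-sensor MQTT topic.
    Topic layout and field names are illustrative only."""
    topic = f"{topic_root}/imu/{sensor_id}"
    payload = json.dumps({
        "t": time.time(),     # acquisition timestamp (s since epoch)
        "quat": list(quat),   # orientation as unit quaternion (w, x, y, z)
        "acc": list(acc),     # linear acceleration in sensor frame [m/s^2]
    })
    return topic, payload

# The tuple could then be published with paho-mqtt, e.g.:
#   import paho.mqtt.publish as publish
#   publish.single(topic, payload, hostname="broker-in-or.local")
topic, payload = imu_message("compass/or1", "metawear-01",
                             (1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 9.81))
```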
A 3D-printed mounting is attached to the Storz 3D endoscope (Tuttlingen, Baden-Württemberg, Germany), rigidly connecting the reflective spheres and the IMU sensor such that both coordinate systems have the same orientation.
Evaluation and discussion
We tested six movement patterns of the laparoscope. We calibrated the trocar position at the beginning and afterwards used no information from the IR tracking system to compute the position. We observe that the position trajectory shows a clear correspondence to the ground-truth infrared tracking for all movements (see Figure 1b). The error of the position estimate is shown in Figure 1c. To analyze its source, we investigate the possible error contributions. The inaccurate part of the orientation measured by the IMU propagates by

$$e_{\alpha} = 2\,l\,\sin(\alpha/2),$$

with $\alpha$ the angle between the measured and the true orientation and $l$ the penetration length (see Figure 1d).
The error based on an incorrect trocar entry point $\Delta p_T$ contributes linearly, as does the error based on an incorrect length outside the abdomen, $\Delta l$, which shifts the estimate along the shaft axis. To find the contribution of each source, we evaluate the error terms separately (Figure 1d–f). The error term for the deviation in the trocar entry point is shown in Figure 1e. The last error source is the length outside the abdomen; changes in that value cannot be detected. For the pivoting movement pattern, the laparoscope does not change its penetration length, and this error is therefore comparatively small (see Figure 1f).
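The dominant pivoting error can be illustrated numerically (a hypothetical helper; the chord-length formula $2\,l\,\sin(\alpha/2)$ is our reading of the orientation-error propagation described above):

```python
import math

def orientation_error(depth, alpha_deg):
    """Tip displacement caused by an orientation error of alpha degrees:
    rotating the shaft about the trocar point by alpha moves the tip along
    a chord of length 2 * depth * sin(alpha / 2)."""
    return 2.0 * depth * math.sin(math.radians(alpha_deg) / 2.0)

# e.g. 0.2 m penetration with a 1 degree orientation error -> about 3.5 mm
err = orientation_error(0.2, 1.0)
```

For small angles this reduces to roughly $l \cdot \alpha$ (in radians), so the position error grows linearly with both penetration length and IMU orientation error.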
Conclusion
It was shown that the approach of estimating position from the IMU and the trocar entry point is feasible for movement patterns without a change in penetration length. We defined the contributing errors and found that the overall error is determined by the accuracy of the IMU's orientation. We further showed that the trocar entry point can be determined quickly. In addition, we suggested an IoT framework for extending the localization framework and announced planned studies incorporating further information sources. For that purpose, we surveyed research that determines localization information in different ways, which can serve as a starting point for extensions.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Research funding: BMBF (Bundesministerium für Bildung und Forschung) funding for project COMPASS (Comprehensive Surgical Landscape Guidance System) with identification code 16SV8018.
Informed consent: Informed consent is not applicable.
Ethical approval: The conducted research is not related to either human or animals use.
Conflict of interest: Authors state no conflict of interest.
References
1. Bernhardt, S, Nicolau, SA, Soler, L, Doignon, C. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66–90. https://doi.org/10.1016/j.media.2017.01.007.
2. Ieiri, S, Uemura, M, Konishi, K, Souzaki, R, Nagao, Y, Tsutsumi, N, et al. Augmented reality navigation system for laparoscopic splenectomy in children based on preoperative CT image using optical tracking device. Pediatr Surg Int 2012;28:341–6. https://doi.org/10.1007/s00383-011-3034-x.
3. Langø, T, Tangen, G, Mårvik, R, Ystgaard, B, Yavuz, Y, Kaspersen, J, et al. Navigation in laparoscopy–prototype research platform for improved image-guided surgery. Minim Invasive Ther Allied Technol 2008;17:17–33. https://doi.org/10.1080/13645700701797879.
4. Konishi, K, Nakamoto, M, Kakeji, Y, Tanoue, K, Kawanaka, H, Yamaguchi, S, et al. A real-time navigation system for laparoscopic surgery based on three-dimensional ultrasound using magneto-optic hybrid tracking configuration. Int J Comput Assist Radiol Surg 2007;2:1–10. https://doi.org/10.1007/s11548-007-0078-4.
5. Mountney, P, Yang, GZ. Motion compensated SLAM for image guided surgery. In: International conference on medical image computing and computer-assisted intervention. Springer; 2010:496–504 pp. https://doi.org/10.1007/978-3-642-15745-5_61.
6. Grasa, G, Bernal, E, Casado, S, Gil, I, Montiel, JMM. Visual SLAM for handheld monocular endoscope. IEEE Trans Med Imag 2014;33:135–46. https://doi.org/10.1109/tmi.2013.2282997.
7. Grasa, ÓG. Visual SLAM for measurement and augmented reality in laparoscopic surgery. PhD thesis: Universidad de Zaragoza; 2014.
8. Vasconcelos, F, Mazomenos, E, Kelly, J, Stoyanov, D. RCM-SLAM: visual localisation and mapping under remote centre of motion constraints. In: 2019 International conference on robotics and automation (ICRA); 2019:9278–84 pp. https://doi.org/10.1109/ICRA.2019.8793931.
9. Huang, CC, Hung, NM, Kumar, A. Hybrid method for 3D instrument reconstruction and tracking in laparoscopy surgery. In: 2013 International conference on control, automation and information sciences (ICCAIS); 2013:36–41 pp. https://doi.org/10.1109/ICCAIS.2013.6720526.
10. Gan, C. Design of inertial tracking system for laparoscopic instrument trajectory analysis; 2010.
© 2020 Regine Hartwig et al., published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.