
Robotized ultrasound imaging of the peripheral arteries – a phantom study

Published/Copyright: September 17, 2020

Abstract

The first choice in diagnostic imaging for patients suffering from peripheral arterial disease (PAD) is 2D ultrasound (US). However, a proper imaging process requires a skilled and experienced sonographer, and the procedure remains highly user-dependent. A robotized US system that autonomously scans the peripheral arteries has the potential to overcome these limitations. In this work, we extend a previously proposed system by a hierarchical image analysis pipeline based on convolutional neural networks (CNNs) to control the robot. The system was evaluated by checking its feasibility to keep the vessel lumen of a leg phantom within the US image while scanning along the artery. The whole vessel lumen was visible in 100% of the images acquired during the scan process. With an insensitivity margin of 2.74 mm, the mean absolute distance between the vessel center and the horizontal image center line was 2.47 mm for an easy scenario and 3.90 mm for a complex one. In conclusion, this system presents the basis for fully automatized peripheral artery imaging in humans using a radiation-free approach.

Introduction

Today, endovascular procedures are standard in the therapy of peripheral arterial disease (PAD) [1] which has increased significantly in recent years [2]. However, a currently unsolved problem of this method is the necessity to use X-ray and contrast agent. The establishment of new radiation-free navigation methods for endovascular interventions is therefore of great relevance. In this context, ultrasound (US) as a radiation-free, affordable and real-time imaging technique has proven to be a potential alternative [3].

Nevertheless, this imaging technique has several drawbacks. First, an experienced and trained sonographer is needed to avoid artifacts due to improper scanning techniques [4]. Second, the inward pressure applied by the US transducer on the patient influences the anatomy to be imaged [5]. Last, the imaging process is still highly user-dependent and time-consuming. Robotized US imaging has the potential to overcome these drawbacks.

To address the issues of an increasing number of PAD patients and US imaging disadvantages, we have previously proposed a robotic US system for semi-automatic scanning of peripheral arteries [6]. However, the vessel detection relied on the manual selection of a template after placing the probe. This approach assumed that the center of the vessel is the center of the template, therefore a precise selection was important. In this work, the image analysis part used to determine if a vessel exists and to find the center of the vessel within the US image is replaced by a deep learning approach, taking the next step towards autonomy. The goal of this study was to prove feasibility, namely that the proposed system is able to continuously show the vessel lumen within a scanning process and to verify this by measuring the distance from the vessel center in the US image to the horizontal image center line.

Material and methods

System description

A 2D linear US probe (L12-3, Philips Healthcare, Best, Netherlands) was attached to the end effector of a robotic arm (LBR iiwa 14 R820, KUKA, Augsburg, Germany) using a custom-made probe holder. 8-bit grayscale images were transferred in real time from the US station (EPIQ7, Philips Healthcare, Best, Netherlands) to the computer using a proprietary network protocol provided by Philips. An in-house middleware allows for bidirectional communication between the robot controller and the computer. Both the US station and the robotic arm communicate with a C++ program running on the computer. The US images are forwarded from the C++ program to a Python program, which in turn performs the image analysis. The result, namely whether and where a vessel is present, is sent back to the C++ program, where it is used for robot control. The whole setup is shown in Figure 1.

Figure 1: The system setup with its components (robotic arm, computer, US station) and the flow of data. A hierarchical image analysis takes place within Python.

Image analysis

The image analysis has two tasks: image classification and vessel center detection. The classification task determines whether a vessel is visible in the image. If it is, the vessel center detection task identifies the vessel’s center point in the image, which is a regression task. Both tasks are carried out by convolutional neural networks (CNNs) with architectures roughly similar to related applications of landmark detection in medical images [7], [8]. However, to improve the system’s robustness, we decoupled both tasks and propose a hierarchical image analysis pipeline as shown in Figure 1.

The classification network consists of two convolutional blocks, where each block has two convolutional layers (16 filters, size 3 × 3, ReLU activation) and a maximum pooling layer (size 2 × 2). The convolutional blocks are followed by two fully connected layers (100 neurons each, ReLU activation) and an output layer (softmax activation). The architecture of the vessel center detection network is similar, but features three convolutional blocks with an increasing number of filters from block to block (8, 12, 16) and linear activation in the output layer to provide regression.
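The two architectures described above can be sketched in Keras as follows. This is a minimal sketch, not the authors' exact implementation: padding, input orientation (height, width) = (277, 512), and the dense head of the regression network (assumed identical to the classifier's) are assumptions, as the text does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def conv_block(x, filters):
    # Two 3x3 convolutions (ReLU) followed by 2x2 max pooling, as in the text.
    x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
    x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
    return layers.MaxPooling2D(2)(x)


def build_classifier(input_shape=(277, 512, 1)):
    # Two convolutional blocks with 16 filters each, two dense layers
    # (100 neurons, ReLU), and a softmax output (vessel / no vessel).
    inp = layers.Input(shape=input_shape)
    x = conv_block(inp, 16)
    x = conv_block(x, 16)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    x = layers.Dense(100, activation="relu")(x)
    out = layers.Dense(2, activation="softmax")(x)
    return models.Model(inp, out)


def build_regressor(input_shape=(277, 512, 1)):
    # Three blocks with 8, 12, 16 filters and a linear output layer that
    # regresses the (x, y) vessel center; the dense head is an assumption.
    inp = layers.Input(shape=input_shape)
    x = conv_block(inp, 8)
    x = conv_block(x, 12)
    x = conv_block(x, 16)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    x = layers.Dense(100, activation="relu")(x)
    out = layers.Dense(2, activation="linear")(x)
    return models.Model(inp, out)
```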

We acquired 8,314 US images of a leg phantom built by the Division of Vascular and Endovascular Surgery (University Hospital Schleswig-Holstein, Lübeck) in cooperation with HumanX GmbH. The same US system and probe settings were used for all acquisitions and experiments (matrix size: 277 × 512 pixels, image spacing: 0.14 × 0.14 mm², presetting: Arterial Vessel, gain: 0 dB, dynamic range: 51, streaming rate: 3.9 Hz). The images were labeled and annotated manually: a human observer decided for each image whether a vessel is visible (45.9%) or not (54.1%) and, if so, annotated the vessel center point. We implemented the neural networks using TensorFlow 2.0 and trained them on the given data.
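The text only states that TensorFlow 2.0 was used; a plausible training setup, with the optimizer and loss functions as assumptions (cross-entropy for the binary classification, mean squared error for the center regression), might look like this:

```python
import tensorflow as tf


def compile_models(classifier, regressor):
    # Hedged sketch: optimizer and losses are assumptions, not stated in the
    # paper. Integer class labels (0 = no vessel, 1 = vessel) are assumed.
    classifier.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
    # Mean squared error on the annotated (x, y) vessel center coordinates.
    regressor.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return classifier, regressor
```

Training would then call `fit` on the labeled phantom images (100 epochs, batch size 64, matching the cross-validation settings below).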

Robot control

As in our previous work [6], a hand guidance mode allows the physician to place the US probe attached to the robotic arm on the area of interest. The probe is placed such that a cross-section of the vessel is visible. At this point, the automated part begins and the robotic arm is set to a proprietary Cartesian impedance control mode. Each control step is based on the object coordinate system of the end effector (see Figure 2A) and consists of the following steps:

Figure 2: (A) Virtual system setup with the robot and the leg phantom. The world coordinate system (x-axis – red, y-axis – green, z-axis – blue) is located at the base of the robot while the object coordinate system is at the end effector of the robot. (B) Real system setup including the US station, US probe holder and US probe.

The z-axis of the end effector always points along the negative z-axis of the world coordinate system and corresponds to an intracorporal direction, as the leg is assumed to lie roughly parallel to the ground. In each step, the end effector is moved in positive z-direction until a total force of 6 N is reached to keep approximately the same pressure on the leg. The y-axis is approximately orthogonal to the cross-section plane of the artery since the physician places the probe accordingly. If the classification network identifies a vessel within the US image, the robot moves 2 mm in positive y-direction (i.e., the distal direction). Otherwise, the robot stops moving and the probe can be repositioned. The movement in x-direction aims to keep the detected vessel center in the center of the image. Thus, the output of the vessel center detection network is used to move the robot in the opposite x-direction, acting as a negative feedback loop. An insensitivity margin of 20 pixels (2.74 mm) is implemented: as long as the detected vessel center lies within this distance of the horizontal image center, no adjustment of the robot movement in x-direction is carried out.
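The x- and y-logic of one control cycle can be sketched as below. This is a simplified illustration, not the authors' controller: the classification threshold, the lateral image width of 277 pixels, and the function names are assumptions; the pixel spacing of 0.137 mm follows from the stated 20 px ≙ 2.74 mm margin.

```python
def control_step(vessel_prob, center_x_px, image_width_px=277,
                 margin_px=20, feed_mm=2.0, spacing_mm=0.137):
    """Return (dx_mm, dy_mm, keep_scanning) for one control cycle.

    Hypothetical interface: vessel_prob comes from the classification
    network, center_x_px from the vessel center detection network.
    """
    if vessel_prob < 0.5:
        # No vessel detected: stop so the probe can be repositioned.
        return 0.0, 0.0, False
    # Signed lateral offset of the detected center from the image center line.
    offset_px = center_x_px - image_width_px / 2
    dx_mm = 0.0
    if abs(offset_px) > margin_px:
        # Negative feedback: move opposite to the offset to re-center the
        # vessel; inside the insensitivity margin no correction is applied.
        dx_mm = -offset_px * spacing_mm
    # Always feed 2 mm in the distal (positive y) direction while scanning.
    return dx_mm, feed_mm, True
```

The force-controlled motion along z (6 N contact force) is handled separately by the impedance controller and is therefore not part of this sketch.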

Evaluation

Image analysis

To assess generalization, we performed a 10-fold Monte Carlo cross validation (80% training, 20% test) for both networks (100 epochs, batch size 64). However, for deployment in the robotic system, the models were trained on the full data set without a test split.
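Monte Carlo cross validation draws an independent random 80/20 split for each repetition, so, unlike k-fold cross validation, the test sets of different repetitions may overlap. A minimal sketch (the function name and seed are illustrative):

```python
import numpy as np


def monte_carlo_splits(n_samples, n_repeats=10, test_frac=0.2, seed=42):
    # Yield (train_indices, test_indices) for each repetition; every
    # repetition uses a fresh random permutation of all samples.
    rng = np.random.default_rng(seed)
    n_test = int(round(n_samples * test_frac))
    for _ in range(n_repeats):
        perm = rng.permutation(n_samples)
        yield perm[n_test:], perm[:n_test]
```

Each of the ten splits would then be used to train a fresh network for 100 epochs with batch size 64 and to evaluate it on the held-out 20%.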

Robot control

Two scans of the leg phantom were performed over a distance of at least 14 cm. The image acquisition parameters were identical to the ones given in Section “Material and methods – Image analysis”. During the second scan, the leg phantom was turned by approximately 30° around the z-axis of the object coordinate system to check feasibility even when the probe is not carefully placed by the physician. The quantitative evaluation of the robot control was twofold. First, the percentage of images acquired during the scan showing the whole vessel lumen was calculated. This is important as the lumen is used for diagnostic purposes in PAD. Second, the distance of the horizontal image center to the true vessel center along the x-axis was calculated for the saved images. This allows conclusions about the robot control and its ability to keep the vessel within the center of the image. Both metrics were assessed by manually labeling the acquired images as described in Section “Image analysis”.
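The two metrics above can be expressed compactly. In this sketch the inputs are assumptions about the labeling format: a per-frame flag for full lumen visibility and the manually annotated center x-coordinate in pixels; the lateral image width of 277 pixels and the 0.137 mm pixel spacing (20 px ≙ 2.74 mm) are likewise assumed.

```python
import numpy as np


def evaluate_scan(lumen_visible, center_x_px, image_width_px=277,
                  spacing_mm=0.137):
    # Metric 1: percentage of frames in which the whole lumen is visible.
    visible_pct = 100.0 * float(np.mean(lumen_visible))
    # Metric 2: mean absolute distance (mm) of the labeled vessel center
    # from the horizontal image center line along x.
    offsets_mm = (np.asarray(center_x_px, dtype=float)
                  - image_width_px / 2) * spacing_mm
    return visible_pct, float(np.mean(np.abs(offsets_mm)))
```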

Results

Image analysis

Table 1 shows the results of the cross validations for both networks. The classification reaches an accuracy of 99.55%, while the vessel detection network approximates the x-position of the vessel center with a mean absolute error of 0.47 ± 0.36 mm (mean ± standard deviation) and a maximum of 3.07 mm. The prediction error for the y-position, namely the vessel depth, is slightly higher.

Table 1: Results of the 10-fold cross validation for both networks. Classification results are given as the percentage of correctly classified images; vessel detection results are given as the mean absolute error (MAE) between the predicted vessel center and the ground truth.

  Classification (accuracy, %):   µ ± σ   = 99.55 ± 0.0018
  Vessel detection (MAE, mm):     µx ± σx = 0.47 ± 0.36
                                  µy ± σy = 0.75 ± 0.47
                                  maxx    = 3.07
                                  maxy    = 5.91

Robot control

Both scans were successfully completed over the full distance of 14 cm. In 100% of the images saved during the scan, the complete vessel lumen was visible. Figure 3 shows the distances of the horizontal image center to the true vessel center along x (MAE 2.47 and 3.90 mm for 0° and 30°, respectively). Whenever the insensitivity margin is exceeded, the robot moves in the opposite x-direction and the distance to the image center line subsequently decreases, although with a delay of several frames.

Figure 3: The distance of the vessel center in the image from the image center line in x-direction over time for two scans along the phantom leg (blue and red) and also the insensitivity margin (magenta).

Discussion & conclusion

Our results show that the image analysis works robustly on the given data. Of course, a deeper analysis of the architecture and the generalization performance has to be carried out on a real human data set. In future work, we will focus on collecting an in vivo data set from human legs as well as retraining and testing the proposed system on it.

The robotic US system is able to scan the leg phantom while keeping the vessel lumen visible within the US image. Even if the physician does not place the US probe properly on the leg phantom (non-orthogonal to the cross-section of the vessel), the system can compensate for this. However, the robot control only uses translational movements to follow the vessel. Given the roughly cylindrical shape of the leg, rotational adjustments could further improve the imaging. Our system takes into account the total force at the end effector, but the clinically relevant quantity is the inward pressure of the probe. Therefore, future work will focus on a registration between the end effector and the US probe in order to calculate this value. To the best of our knowledge, this is the first robotic system to automatically acquire US images of the peripheral arteries using a deep learning approach. This phantom study provides promising results and presents the basis for fully automatized peripheral artery imaging in humans using a radiation-free approach.


Corresponding authors: Felix von Haxthausen, Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck, Germany, E-mail: ; Jannis Hagenah, Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck, Germany, E-mail:

Felix von Haxthausen and Jannis Hagenah contributed equally.


Funding source: German Federal Ministry of Education and Research

Award Identifier / Grant number: 13GW0228

Funding source: Ministry of Economic Affairs, Employment, Transport and Technology

Acknowledgments

The authors would like to thank Till Aust for his help collecting the data set and Sven Böttger for the helpful discussions.

  1. Research funding: This study was supported by the German Federal Ministry of Education and Research (grant number 13GW0228).

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Informed consent: Not applicable.

  5. Ethical approval: Not applicable.

References

1. Aboyans, V, Ricco, JB, Bartelink, MLEL, Björck, M, Brodmann, M, Cohnert, T, et al. 2017 ESC guidelines on the diagnosis and treatment of peripheral arterial diseases. Eur Heart J 2017;39:763–816. https://doi.org/10.1093/eurheartj/ehx095.

2. Vos, T, Allen, C, Arora, M, Barber, RM, Bhutta, ZA, Brown, A, et al. Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990–2015: a systematic analysis for the global burden of disease study 2015. Lancet 2016;388:1545–602. https://doi.org/10.1016/S0140-6736(16)31678-6.

3. Ascher, E, Marks, NA, Hingorani, AP, Schutzer, RW, Mutyala, M. Duplex-guided endovascular treatment for occlusive and stenotic lesions of the femoral-popliteal arterial segment: a comparative study in the first 253 cases. J Vasc Surg 2006;44:1230–7. https://doi.org/10.1016/j.jvs.2006.08.025.

4. Hindi, A, Peterson, C, Barr, RG. Artifacts in diagnostic ultrasound. Rep Med Imag 2013;6:29–48. https://doi.org/10.2147/RMI.S33464.

5. Ishida, H, Watanabe, S. Influence of inward pressure of the transducer on lateral abdominal muscle thickness during ultrasound imaging. J Orthop Sports Phys Ther 2012;42:815–8. https://doi.org/10.2519/jospt.2012.4064.

6. von Haxthausen, F, Aust, T, Schwegmann, H, Böttger, S, Ernst, F, García-Vázquez, V, et al. Visual servoing for semi-automated 2D ultrasound scanning of peripheral arteries. In: Proc AUTOMED 2020. Lübeck: Infinite Science Publishing; 2020, vol 1:28.

7. Tetteh, G, Efremov, V, Forkert, ND, Schneider, M, Kirschke, J, Weber, B, et al. DeepVesselNet: vessel segmentation, centerline prediction, and bifurcation detection in 3-D angiographic volumes. arXiv preprint arXiv:1803.09340; 2018.

8. Noothout, JMH, de Vos, BD, Wolterink, JM, Leiner, T, Išgum, I. CNN-based landmark detection in cardiac CTA scans. arXiv preprint arXiv:1804.04963; 2018.

Published Online: 2020-09-17

© 2020 Felix von Haxthausen et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
