Article · Open Access

Cinemanography: fusing manometric and cinematographic data to facilitate diagnostics of dysphagia

  • Alissa Jell, Lukas Bernhard, Dhaval Shah and Hubertus Feußner
Published/Copyright: September 17, 2020

Abstract

Dysphagia, the difficulty in swallowing, is one of the most common and, at the same time, most heterogeneous symptoms of the upper digestive tract. With a lifetime prevalence of about 5%, every 19th person is affected on average, especially with increasing age. Dysphagia occurs in both benign and malignant diseases of the esophagus and the oropharyngeal tract as well as in neuromuscular diseases. Even dysphagia caused by benign diseases can significantly reduce quality of life.

The diagnosis of the actual underlying disease in patients with dysphagia is commonly conducted using a combination of endoscopy, esophageal manometry, functional assessments and radiologic means, e.g. X-ray fluoroscopy. As these examinations are typically performed in sequential order, it is left to the physician to combine the relevant information from each modality into a conclusion. We argue that this is neither an intuitive nor a standardized form of presenting the findings to the physician. To address this, we propose a novel approach for fusing time-synchronized manometric and X-ray data into a single view, providing a more comprehensive visualization method for diagnosing dysphagia.

Introduction

Dysphagia, the inability or difficulty to swallow, affects about every 19th person during their lifetime. Developing dysphagia is especially likely for elderly people, since it occurs in up to 33% of the population above the age of 65 years [1], [2]. The symptoms range from problems swallowing specific foods (e.g. very dry or fibrous ones) to being unable to swallow even saliva. This has a decisive influence on the patients' quality of life, especially if they are no longer able to eat in public or to eat sufficiently at all (leading to malnutrition).

After malignancy has been ruled out using endoscopy and tomography, the standard for diagnosing benign oropharyngeal and esophageal dysphagia is a combination of esophageal manometry, functional assessments (e.g. FEES, fiberoptic endoscopic evaluation of swallowing) and X-ray fluoroscopy [3], [4]. While manometry is the gold standard for investigating muscular (dys)function in detail, X-ray imaging provides detailed information on the anatomy and localization of structural alterations causing dysphagia. Usually, manometry and X-ray recordings are performed consecutively and thus cover separate timespans. Consequently, different acts of swallowing and motility events are recorded, which cannot be compared head-to-head. While motility disorders of the tubular and distal esophagus can be comprehensively clarified, oropharyngeal dysphagia still poses major problems, because swallowing there is a highly dynamic process involving the interaction of a large number of muscle groups and cranial nerves. Furthermore, pressure and X-ray recordings are commonly displayed side by side during diagnostics, which is neither an intuitive nor an ergonomic form of presenting these “two sides of the same coin” to clinicians (Figure 1).

Figure 1: Status quo: time-synchronized oral manometric data (left side) and cinematographic imaging (right side) of a patient with oropharyngeal dysphagia. The two modalities are presented side by side.

To address this, we propose a novel approach for fusing time-synchronized manometric pressure data and X-ray imaging into a single view to provide a more comprehensive visualization method when diagnosing oropharyngeal and esophageal dysphagia.

Related work

A related approach in the lower digestive tract has been proposed by Davidson et al. [5] for pan-colonic manometry, where fixed anatomical reference points are identified on a 3D finite element surface of the colon. The manometric data from anatomical sites is then translated to corresponding points on the geometric mesh. The work aims at providing a more comprehensive and intuitive visualization method when diagnosing abnormal colonic function, such as constipation. However, the 3D model of the colon used in this approach is not patient-individual and cannot visualize movement of the colon.

For these reasons, we strongly believe that a visualization based on patient-individual X-ray-cinematography is better suited for diagnosing dysphagia in a highly dynamic area such as the oral and pharyngeal cavity and the esophagus.

Methods

We used state-of-the-art high-resolution manometry probes with a 10 French diameter and 36 coated circular pressure sensors distributed along the probe at equal 1 cm intervals [6]. For manometric data recording and visualization, the software platform ViMeDat™ (Standard Instruments, Karlsruhe, Germany) with the mobile data logger MALT™ (Standard Instruments, Karlsruhe, Germany) was used. The cinematography data provided by an extended digital imaging X-ray machine (Philips Medical Systems, Hamburg, Germany) was recorded simultaneously at the Department of Diagnostic and Interventional Radiology at Klinikum rechts der Isar (University Hospital, Technical University of Munich, Germany), such that both data sets are time-synchronized and reflect exactly the same oro-pharyngeal-esophageal events (Figure 1). With the manometric probe inserted transorally into the esophagus, study participants were asked to swallow 10 mL of fluid contrast agent multiple times in an upright position, while an X-ray cinematography was conducted during each act of swallowing.

In the next step, the positions of the pressure sensors of the manometric probe have to be detected in each frame of the X-ray cinematography. For this, we used a template-matching algorithm (OpenCV) with a set of randomly extracted templates from separate cinematography data sets [7], [8]. During this template matching, we experienced difficulties with underexposed X-ray data, since fine details are lost and the sensors tend to blend into the background, particularly around the mandibular corpus and angulus. A more balanced exposure might benefit the detection process, but only at the cost of an increased radiation dose, which is not in line with radiation protection regulations for humans.

Subsequently, we fitted a cubic B-spline through the detected sensor positions in each frame to estimate the path of the entire manometric probe (Figure 3) and to track its movement across the X-ray frames [9], [10], [11], [12]. This also allows for estimating sensor positions that were not successfully detected by the template matching; underexposed parts of the X-ray frames with poorly visible sensors can thus be bridged very effectively. With these refinements in place, the automated probe recognition provided stable results, even in poorly exposed areas.
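A spline fit of this kind could look as follows, assuming the detected sensor centres are already ordered along the probe. This is a sketch using SciPy's parametric B-spline routines, not the authors' implementation; function names, the sampling density and the gap-bridging helper are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_probe_spline(points, n_samples=200, smooth=0.0):
    """Fit a cubic B-spline through detected sensor centres (ordered
    along the probe) and resample it densely. With smooth=0 the spline
    interpolates the detections exactly (hypothetical sketch)."""
    pts = np.asarray(points, dtype=float)
    # splprep expects one array per coordinate; k=3 gives a cubic spline.
    tck, _ = splprep([pts[:, 0], pts[:, 1]], k=3, s=smooth)
    u_fine = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u_fine, tck)
    return np.column_stack([x, y])

def estimate_missing(path, frac):
    """Estimate an undetected sensor's position at normalized position
    `frac` along the fitted path (0 = first sensor, 1 = last). Since
    sensors sit at equal 1 cm intervals, their `frac` is known a priori."""
    idx = int(round(frac * (len(path) - 1)))
    return tuple(path[idx])
```

Because the physical sensor spacing is fixed, a sensor lost in an underexposed region can be placed at its expected fractional position along the fitted curve, which is what bridges the poorly visible segments.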

Figure 2: Output of the visualization engine. The basic visualization approach shown in the figure is one of several modes available to the user. Pressure color-coded in mmHg.

Next, the manometric pressure data is visualized as an overlay on top of the X-ray frames. The time-variant pressure information was extracted from the XML-based ViMeDat™ project file structure. Following the common visualization of high-resolution manometry in spatio-temporal color plots, the pressure information of each pressure transducer was embedded in the cinematography data. Pressure values along the manometry catheter are measured at equal 1 cm intervals, and in high-resolution manometry these values are commonly interpolated in-between the transducers for a smoother visualization. While this way of visualizing is well-established in the field, we aim at evaluating different visualization concepts together with healthcare professionals to identify the optimum in the context of our cinemanography approach. Due to its clear and precise nature, we chose a simple visualization using dots above each sensor position as a starting point (Figure 2).
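The dot overlay can be illustrated as follows. The blue-to-green-to-red ramp and the pressure range are assumptions loosely mimicking common high-resolution-manometry color plots; the paper does not specify the exact mapping, and this sketch uses plain NumPy rather than the authors' rendering code.

```python
import numpy as np

def pressure_to_color(p, p_min=-10.0, p_max=150.0):
    """Map a pressure in mmHg to an RGB triple on a simple
    blue -> green -> red ramp (assumed range and ramp)."""
    t = float(np.clip((p - p_min) / (p_max - p_min), 0.0, 1.0))
    if t < 0.5:                       # blue -> green
        s = t / 0.5
        return (0, int(255 * s), int(255 * (1 - s)))
    s = (t - 0.5) / 0.5               # green -> red
    return (int(255 * s), int(255 * (1 - s)), 0)

def overlay_pressures(frame, sensor_xy, pressures, radius=5):
    """Draw one color-coded dot per detected sensor onto an RGB frame
    (hypothetical sketch of the overlay step)."""
    out = frame.copy()
    h, w = out.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    for (cx, cy), p in zip(sensor_xy, pressures):
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
        out[mask] = pressure_to_color(p)
    return out
```

Playing back the X-ray frames with this overlay applied per frame yields the augmented video described in the Results section.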

Figure 3: Graphical user interface with fitted cubic b-spline (green) through the detected sensors (red) along the manometric probe.

For executing the process described above and visualizing the results, we developed a graphical user interface (GUI) application based on Python and PyQt (Figure 3) [13].

Results

We tested our sensor detection algorithm on a small patient data set with positive outcomes. Both experienced examiners and junior doctors appreciated our program as intuitive and as an enrichment for the diagnosis of benign forms of oropharyngeal or esophageal dysphagia.

As mentioned before, the template matching approach alone yields rather poor results in underexposed areas of the X-ray frames. However, when combined with the cubic spline fitting, sensor positions can be estimated quite reliably. Further tests on a broader data set are still needed to provide statistically meaningful accuracy figures and to evaluate the robustness of the approach.

Our new cinemanography tool for fusing manometric and cinematographic data can be used to create patient-individual but standardized examinations, or to create specific project files associated with different clinical cases into which patients’ examinations can be integrated. After time-synchronized manometric and X-ray (video) data has been loaded into a project, the sensor detection algorithm can be started. Once the analysis has completed, the detected sensor positions are shown in the X-ray cinematography and the pressure values are overlaid. For the actual diagnostics, clinicians can play back these augmented videos and watch the pressure variation over time at the measuring locations along the oro-pharyngeal cavity and the esophagus.

When showing fused cinemanography to medical experts, overall positive feedback was reported. As manometries are typically performed by gastroenterologists in functional laboratories, while X-ray fluoroscopies are performed in the radiology department, this new diagnostic tool brings the specialists together, not only physically but also in the joint diagnosis of oropharyngeal dysphagia.

Conclusion

Despite today’s technical possibilities, the fusion of multiple diagnostic means of different origins is still often done by hand, meaning a medical expert reviews findings one by one, or side by side in a non-intuitive and insufficient way. For investigating oropharyngeal dysphagia in particular, manometric and cinematographic data are most commonly used. The goal of our work was to develop a tool for fusing manometric and X-ray-cinematographic information into augmented, patient-individual examinations. The positive feedback from medical experts shows the great potential of such fused imaging applications, which directly support medical doctors in daily clinical routine.


Corresponding authors: Alissa Jell, Klinikum rechts der Isar, Faculty of Medicine, Surgical Department, Technical University of Munich, Munich, Germany, and Klinikum rechts der Isar, Research Group Minimally-Invasive Interdisciplinary Therapeutical Intervention (MITI), Technical University of Munich, Munich, Germany, E-mail: ; and Lukas Bernhard, Klinikum rechts der Isar, Research Group Minimally-Invasive Interdisciplinary Therapeutical Intervention (MITI), Technical University of Munich, Munich, Germany, E-mail:

Alissa Jell and Lukas Bernhard contributed equally to this work.


  1. Research funding: The authors state no funding was involved.

  2. Informed consent: Informed consent has been obtained from all individuals included in this study.

  3. Ethical approval: The research related to human use complies with all the relevant national regulations, institutional policies and was performed in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors’ institutional committee. All participants gave informed consent.

  4. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  5. Conflict of interest: The authors state no conflict of interest.

References

1. Perry, L, Love, C. Screening for dysphagia and aspiration in acute stroke: a systematic review. Dysphagia 2001;16:7–18. https://doi.org/10.1007/pl00021290.

2. Yang, EJ, Kim, MH, Lim, JY, Paik, NJ. Oropharyngeal dysphagia in a community-based elderly cohort: the Korean longitudinal study on health and aging. J Korean Med Sci 2013;28:1534–9. https://doi.org/10.3346/jkms.2013.28.10.1534.

3. Yadlapati, R. High resolution manometry vs. conventional line tracing for esophageal motility disorders. Gastroenterol Hepatol 2017;13:176–8.

4. van Hoeij, FB, Bredenoord, AJ. Clinical application of esophageal high resolution manometry in the diagnosis of esophageal motility disorders. J Neurogastroenterol Motility 2015;22:6–13. https://doi.org/10.5056/jnm15177.

5. Davidson, J, O’Grady, G, Arkwright, J, Zarate, N, Scott, S, Pullan, A, et al. Anatomical registration and three-dimensional visualization of low and high-resolution pan-colonic manometry recordings. Neurogastroenterol Motil 2011;23:387–171. https://doi.org/10.1111/j.1365-2982.2010.01651.x.

6. Jones, CA, Meisner, EL, Broadfoot, CK, Rosen, SP, Samuelsen, CR, McCulloch, TM. Methods for measuring swallowing pressure variability using high-resolution manometry. Front Appl Math Stat 2018;4:23. https://doi.org/10.3389/fams.2018.00023.

7. Seferidis, VE, Ghanbari, M. General approach to block-matching motion estimation. Opt Eng 1993;32:1464–75. https://doi.org/10.1117/12.138613.

8. Brunelli, R. Template matching techniques in computer vision: theory and practice. Hoboken, NJ, USA: John Wiley & Sons; 2009. https://doi.org/10.1002/9780470744055.

9. Kim, HY, De Araújo, SA. Grayscale template-matching invariant to rotation, scale, translation, brightness and contrast. In: Pacific-rim symposium on image and video technology. Berlin, Heidelberg: Springer; 2007. 100–13. https://doi.org/10.1007/978-3-540-77129-6_13.

10. Lin, Y, Chunbo, X. Template matching algorithm based on edge detection. In: International symposium on computer science and society. IEEE; 2011. 7–9. https://doi.org/10.1109/ISCCS.2011.9.

11. Korman, S, Reichman, D, Tsur, G, Avidan, S. Fast-match: fast affine template matching. In: Conference on computer vision and pattern recognition (CVPR), 2013. Portland, OR: IEEE; 2013. 1940–7. https://doi.org/10.1109/CVPR.2013.302.

12. Lowe, DG. Object recognition from local scale-invariant features. ICCV 1999;99:1150–7. https://doi.org/10.1109/iccv.1999.790410.

13. Muja, M, Lowe, D. FLANN – fast library for approximate nearest neighbors user manual. Vancouver, BC, Canada: Computer Science Department, University of British Columbia; 2009.

Published Online: 2020-09-17

© 2020 Alissa Jell et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
