Article Open Access

Feasibility and reproducibility of speckle tracking echocardiography in routine assessment of the fetal heart in a low-risk population: a letter reply

  • Adalina Sacco, Ayisha Kazi, Diane Lambo, Victoria Jowett and Pranav Pandya
Published/Copyright: July 7, 2025

Letter Reply to Letter to the Editor JPMed.2025.0193.

We thank the authors of this Letter to the Editor for their attention to and consideration of this study. We have attempted to answer the points raised below.

  1. The four-chamber cardiac view is indeed standard in fetal cardiac screening in England, as are the four other views set out in the FASP fetal cardiac protocol. Almost all patients at our institution have these views achieved at the first attempt at the mid-trimester scan, and if they are not there is a protocol for repeat scanning. At present FASP does not recommend either still image or video clip storage for cardiac screening in a low-risk population, because some centres in England do not have the facility; however, this is being reviewed nationally. We therefore stored a three-second cineloop specifically for this study. We excluded scans where the cineloop was not judged to be of sufficient quality for correct STE use, for reasons that include some mentioned by the authors (frame rate, depth, zoom, etc.). This does not mean that the still images obtained were of insufficient quality for routine cardiac screening. As the cineloop was acquired solely for research, scanning times (30 min for fetal anatomy in singleton pregnancies) were not extended.

  2. We agree it is logical that the longer a piece of software is used, the more familiar and competent users will become with it. However, as described in the Scan and STE protocol section, we conducted a pilot study which the GE research team agreed was appropriate for our main study plan. The feedback received from the GE Application Specialist team confirmed that the software was being applied correctly and did not include any corrections to the 20 cases. We therefore did not consider it necessary to continue practising before commencing the study. In addition, if software is to have clinical validity, the 'learning curve' needs to be appropriately short, i.e. on the order of 20 cases.

  3. There are clearly many more parameters that could be assessed for reproducibility. As described in the Feasibility and reproducibility protocol section, we chose parameters that are common in routine clinical practice. We considered it a strength that not all of the parameters we chose had already been assessed for feasibility and reliability in other studies.

  4. For intraobserver reliability, the operators performed two analyses on the same cardiac cycle within the same three-second cineloop. The time interval between the two evaluations was approximately one week.

  5. As per point 3, no other studies have assessed the reproducibility of the same parameters as ours, so a direct comparison with an identical study is not possible. We therefore discussed how our results compare to other studies of reproducibility with STE in general.

  6. This is incorrect: 450 scans were performed, and of those 57 were excluded due to FGR (n=46) or a structural anomaly (n=11). That left a potential 393 scans on which STE analysis could have been performed, but of those only 200 could be analysed due to the issues listed in Figure 4, the main one being insufficient image quality. Each patient had only one scan included (the most recent at the time of the study), i.e. longitudinal scans of the same patient were not used. We agree that it would be a useful addition to examine reproducibility across different gestational categories.

  7. Lin's concordance correlation coefficient (CCC) is a more appropriate measure of agreement than the intraclass correlation coefficient (ICC) when evaluating the agreement between two measurements, especially when there is potential for systematic bias. There is always potential for systematic bias when comparing the results of two observers, e.g. one observer tends to provide higher results than the other. The CCC measures both precision (correlation) and accuracy (closeness to the line of identity), whereas the ICC assesses consistency (correlation). The ICC is commonly used when there are multiple raters, which is not the case in this study. However, we would agree that the ICC is also commonly used when only one method is being evaluated, as was the case in your study. We believe it was appropriate to provide the CCC, and the professional statistical support team for this study remains of the same view. A minimal illustrative sketch of the distinction is given after this list.
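To illustrate the distinction raised in point 7, the following is a minimal sketch; the paired readings are hypothetical and this is not the computation performed by our statistical support team. It shows that Lin's CCC penalises a systematic offset between two observers, whereas a pure correlation measure such as Pearson's r does not.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements
    from two observers: penalises both scatter and systematic offset."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances
    cov = np.mean((x - mx) * (y - my))     # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical paired strain-type readings from two observers;
# observer 2 reads systematically higher by a constant offset.
obs1 = np.array([-18.2, -20.1, -17.5, -19.4, -21.0])
obs2 = obs1 + 1.5

print(np.corrcoef(obs1, obs2)[0, 1])  # Pearson r = 1.0: perfect precision
print(lins_ccc(obs1, obs2))           # CCC < 1.0: accuracy penalised by the offset
```

In this toy example the constant offset leaves the correlation at 1.0 while reducing the CCC, which is why a measure sensitive to systematic bias between observers was preferred.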

STE has clear utility outside of pregnancy, and studies appear to support its use within pregnancy by experts. The software used in this study is widely available and easy to use, even by non-experts. We would therefore be interested to read further studies addressing the reasons why the reproducibility we found was not higher, and ways in which its use could be expanded in the future.


Corresponding author: Adalina Sacco, UCL – Institute for Women’s Health, 74 Huntley Street, London WC1E 6AU, UK, E-mail:

  1. Informed consent: Informed consent was obtained from all individuals included in this study.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interests: Authors state no conflict of interest.

  4. Research funding: None declared.

Received: 2025-06-19
Accepted: 2025-06-20
Published Online: 2025-07-07

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
