The utilization of artificial intelligence in enhancing 3D/4D ultrasound analysis of fetal facial profiles
Muhammad Adrianes Bachnas, Wiku Andonotopo, Julian Dewantiningrum, Mochammad Besari Adi Pramono, Milan Stanojevic and Asim Kurjak
Abstract
Artificial intelligence (AI) has emerged as a transformative technology in healthcare, offering significant advancements across medical disciplines, including obstetrics. One innovation that has markedly improved the analysis of fetal facial profiles is the integration of AI into 3D/4D ultrasound imaging. By leveraging machine learning and deep learning algorithms, AI can assist in the accurate and efficient interpretation of complex 3D/4D ultrasound data, enabling healthcare providers to make more informed decisions and deliver better prenatal care. In conclusion, the integration of AI into the analysis of 3D/4D ultrasound data for fetal facial profiles offers numerous benefits, including improved accuracy, consistency, and efficiency in prenatal diagnosis and care.
Introduction
Traditionally, 2D ultrasound images have been the standard method for examining fetal development during pregnancy. However, these images have limitations in accurately capturing detailed facial features and structures. This is where 3D/4D ultrasound imaging comes into play, providing a more comprehensive view of the fetus, especially of facial expressions. When artificial intelligence (AI) is incorporated into 3D/4D ultrasound analysis, the benefits are manifold. One of the key advantages is the ability to enhance image quality and resolution, allowing for clearer and more precise visualization of the fetus's facial features (Figure 1). It can be stated that the face is the mirror of the brain: on some occasions, by observing the fetal face, we can speculate on central or peripheral nervous system function [1], [2], [3].

This illustration explains how conventional image data is transformed to appear sharper with an AI touch. AI-enhanced software leverages a combination of advanced machine learning models and image processing techniques to improve the sharpness and smoothness of images. By training on extensive datasets and employing sophisticated algorithms, the software can effectively enhance the quality of 3D/4D ultrasound images, making them more useful for diagnostic purposes.
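The proprietary pipelines behind such software are not published, but the principle of edge enhancement can be illustrated with a classical technique. The sketch below (an illustrative stand-in, not the vendors' actual algorithm) applies unsharp masking: a blurred copy of the image is subtracted out, and the high-frequency residual carrying the edge detail is added back with a gain.

```python
import numpy as np

def unsharp_mask(image, blur_size=3, amount=1.0):
    """Sharpen an image by amplifying its high-frequency residual.

    A simple box blur stands in for the learned smoothing an AI
    pipeline would apply; (original - blurred) isolates the edges.
    """
    pad = blur_size // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for dy in range(blur_size):
        for dx in range(blur_size):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= blur_size ** 2
    detail = image.astype(float) - blurred   # high-frequency residual
    sharpened = image + amount * detail      # boost edges by `amount`
    return np.clip(sharpened, 0, 255)        # keep valid intensity range
```

Applied to an ultrasound frame, intensities on either side of a boundary are pushed apart, which is what makes facial contours appear crisper.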
Additionally, AI algorithms can assist in automating the process of analyzing 3D/4D ultrasound images, reducing the reliance on manual interpretation and increasing efficiency. This not only saves time for healthcare professionals but also minimizes the risk of human error in diagnosing fetal conditions. Moreover, the use of AI in 3D/4D ultrasound analysis enables predictive modeling and pattern recognition, helping to identify subtle facial markers that may indicate potential genetic syndromes or congenital abnormalities. This can be invaluable in counseling expectant parents and guiding them in making informed decisions about their pregnancy [1], [2], [3], [4], [5]. Even if the condition is significant and surgically correctable, the parents will be prepared and accurately counseled about postnatal treatment options and prognosis for their child. Overall, the integration of artificial intelligence in strengthening 3D/4D ultrasound analysis of fetal facial profiles offers a more comprehensive and accurate approach to prenatal care. By leveraging the power of AI technology, healthcare providers can enhance the quality of fetal imaging, improve diagnostic accuracy, and ultimately ensure the well-being of both mother and baby [4], [5], [6].
This review aims to evaluate and describe the various benefits that AI technology can provide in the field of prenatal diagnostics, especially in refining, sharpening, and improving the structural diagnosis of the fetal face using 3D/4D ultrasound examinations. It is hoped that this article will provide a deeper understanding of the potential and benefits of AI in prenatal 3D/4D ultrasound diagnostics, and encourage further research and development to utilize this technology in improving maternal and fetal health services in the future.
Evaluating the performance of an AI fetal face profile: accuracy, sensitivity, and specificity
The development of AI systems for medical applications has been a growing area of research, with promising advancements in various domains. In obstetrics, AI has found applications in tasks such as 3D/4D ultrasonography, with the potential to assist in prediction and diagnosis. One specific application of AI in obstetrics is the evaluation of fetal face profiles using ultrasound data [1], [2], [3].
Accurate assessment of fetal facial features during pregnancy can provide valuable information for the detection of congenital anomalies and other developmental issues. However, the performance of these AI-based fetal face profile evaluation systems needs to be rigorously assessed to ensure their clinical utility and reliability. The accuracy of an AI system is calculated as the proportion of correct predictions, while sensitivity and specificity are determined by the system's ability to correctly identify the presence or absence of specific facial features [1], [2], [3].
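The three metrics named above have standard definitions over a binary confusion matrix. The following minimal sketch shows how they would be computed for a feature-detection task (labels: 1 = feature present, 0 = absent); the label convention is an assumption for illustration, not taken from any specific study.

```python
def evaluate_detector(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for binary detection."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)                    # all correct calls
    sensitivity = tp / (tp + fn) if tp + fn else 0.0      # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0      # true negative rate
    return accuracy, sensitivity, specificity
```

Sensitivity answers "of the fetuses that truly have the feature, how many did the system flag?", while specificity answers the complementary question for the unaffected group; reporting both guards against a system that trivially predicts one class.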
The implementation phase involved the integration of the AI model into a user-friendly, clinician-focused platform, providing healthcare professionals with a seamless and efficient tool to aid in the early detection and diagnosis of fetal facial anomalies. This platform leverages real-time image analysis capabilities, allowing for the rapid and reliable identification of potential issues during routine prenatal screening and monitoring [1], [2], [3].
Proposed additional advantages of using AI in depicting fetal faces
AI has already made significant strides in the field of prenatal ultrasound diagnosis, providing valuable insights and assistance to healthcare professionals [1, 2]. One of the emerging applications of AI in this domain is the enhanced depiction of fetal facial features, which holds the potential to offer additional advantages beyond the current capabilities of traditional ultrasound imaging (Figures 2–5).

During the third trimester fetal facial examination using AI-enhanced USG 3D/4D technology, where the image is exceptionally sharp and clear, several detailed anatomical features of the fetal face can be observed. The enhanced clarity and smoothness of the image, along with the ability to see the fetal face with open eyelids, allow for a comprehensive assessment of the facial anatomy.

The ability to observe the fetal face very sharply and clearly, especially fetal behaviors like the tongue expulsion, provides critical information about neuromuscular and behavioral development. AI-enhanced 4D ultrasound adds significant value by improving diagnostic accuracy, enabling comprehensive assessments, facilitating early intervention, and contributing to education and research.

The ability to perform a yawn indicates proper development and coordination of the fetal facial muscles and nervous system. This shows that the neural circuits involved in these actions are functioning well. AI can help track and analyze fetal movements and behaviors using 3D/4D ultrasound over time, providing insights into neurological and behavioral development.

A normal image of the fetal face using AI-enhanced 3D/4D ultrasound provides a comprehensive view of the facial anatomy, indicating normal development and reducing concerns about structural abnormalities.
One of the primary advantages of using AI in fetal facial depiction is the ability to provide more accurate and detailed visualizations of the developing fetus [3]. Traditional ultrasound imaging can sometimes be limited in its ability to capture the intricate details of fetal facial structures, particularly in cases where the fetus is in an unfavorable position or the quality of the images is suboptimal [7], [8], [9]. By leveraging advanced AI algorithms, healthcare providers can potentially obtain clearer and more comprehensive visualizations of the fetal face, allowing for a more thorough assessment of its development and the identification of any potential abnormalities [1, 10].
Furthermore, the use of AI in fetal facial depiction can lead to earlier detection of certain congenital conditions or syndromes that may be associated with unique facial features [3, 10]. By enhancing the ability to visualize and analyze these facial characteristics, healthcare providers can potentially identify these conditions at an earlier stage, enabling timely initiation of treatment and improved patient outcomes.
Another potential advantage of using AI in fetal facial depiction is the ability to enhance the patient experience [11]. By providing more detailed and accurate visualizations of the fetal face, healthcare providers can offer expectant parents a more engaging and informative experience during prenatal visits. This can foster a stronger connection between the parents and the developing fetus, potentially leading to improved emotional well-being and a more positive overall experience, especially when explaining any detected abnormalities.
While the integration of AI in prenatal care holds great promise, it is essential to address the challenges and limitations of this emerging technology [2, 10]. Continued research and development in this field will be crucial in ensuring the safe and effective implementation of AI-powered fetal imaging solutions, ultimately enhancing the overall quality of prenatal care and the well-being of both parents and their children.
The use of AI in fetal facial depiction can contribute to advancements in medical education and research [12]. By analyzing a large volume of fetal facial data, AI algorithms can potentially identify subtle patterns or correlations that may not be readily apparent to the human eye [11]. This could lead to the development of new diagnostic tools, the refinement of existing algorithms, and a deeper understanding of fetal facial development and its relationship to various medical conditions. AI can aid in interpreting not only the anatomy of facial structures but also facial expressions, which can be indicators of fetal brain functioning. This can offer insights into the neurological development of the fetus. AI can facilitate the tracking of fetal development over time by comparing sequential ultrasound images. This can help in monitoring the progress of identified conditions and the effectiveness of any treatments.
AI-assisted ultrasound 3D/4D analysis: overcoming the limitations of traditional manual approaches
Traditional manual 2D ultrasound fetal facial profile analysis has inherent limitations. Fetal facial features are often difficult to visualize and interpret, particularly in cases of complex malformations, and the process is highly dependent on the experience and expertise of the examiner. Moreover, this manual approach is time-consuming and can be subject to inter-observer variability. To address these challenges, the application of artificial intelligence in prenatal ultrasound 3D/4D diagnosis has emerged as a promising solution [8, 9]. AI-based techniques can rapidly analyze large amounts of ultrasound data, identify subtle patterns, and provide more consistent and objective assessments of fetal facial features. Recent studies have demonstrated the potential of AI-assisted methods to enhance the accuracy and efficiency of fetal facial profile analysis. These advanced techniques can detect a wide range of congenital abnormalities, including cleft lip and palate, facial dysmorphisms, and other craniofacial malformations, with high sensitivity and specificity [1], [2], [3, 10].
Harnessing AI for early detection of fetal facial abnormalities
Congenital malformations can have a profound impact on a child’s health and development, and early detection is crucial for providing timely medical intervention. One such group of malformations is facial abnormalities, which can be indicative of underlying genetic or structural issues. Traditionally, the identification of these anomalies has relied heavily on the expertise of experienced clinicians interpreting prenatal ultrasound scans. However, the subjective nature of this process and the potential for human error have prompted the exploration of artificial intelligence as a tool to enhance diagnostic accuracy and consistency [6].
The application of AI in obstetrics, particularly in the realm of fetal ultrasound analysis, has gained significant traction in recent years. Ultrasonography has become the primary modality for prenatal imaging, offering real-time, non-invasive, and cost-effective assessment of fetal development. AI-powered systems have the potential to revolutionize the way fetal facial abnormalities are detected, by leveraging advanced image recognition algorithms to analyze ultrasound scans with increased precision and objectivity [1, 3]. As the field of AI in obstetrics continues to evolve, the accurate detection of fetal facial abnormalities holds immense promise for improving prenatal care and reducing the burden of congenital malformations.
Fetal facial recognition and diagnosis through case studies: a novel AI methodology
In the rapidly evolving realm of medical imaging and diagnostics, the application of artificial intelligence has emerged as a transformative force, offering unprecedented opportunities to enhance clinical decision-making and improve patient outcomes. One such innovative application is the development of AI models specifically designed for fetal facial recognition and diagnosis [6, 13].
The methodology outlined in this research paper encompasses the design, implementation, and evaluation of an AI-driven framework that aims to revolutionize the way healthcare professionals’ approach fetal facial anomalies and congenital conditions. The proposed model leverages the power of convolutional neural networks to analyze a vast array of 3D/4D fetal face ultrasound images, enabling the early detection and accurate diagnosis of a wide range of fetal facial anomalies [2, 3, 5, 6].
The development of this AI model began with the careful curation and pre-processing of a comprehensive dataset of fetal ultrasound images, representing a diverse range of normal and anomalous fetal facial structures. Advanced deep learning algorithms were then employed to train the model, enabling it to recognize and classify various fetal facial characteristics with a high degree of accuracy [3, 6, 13].
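The convolutional neural networks referenced above are built from a small set of repeated operations. The sketch below is a didactic NumPy illustration of one convolution → ReLU → max-pooling stage, not the authors' actual model; real systems stack many such learned layers and train their kernels by backpropagation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: the core operation a CNN layer
    applies to an ultrasound frame to detect local patterns."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linearity: keep positive responses, zero out the rest."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by keeping the strongest response in each block."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Stacking such stages lets a network progressively summarize a raw 3D/4D frame into features like "edge", "contour", and eventually "cleft present / absent", which is what makes CNNs a natural fit for facial-anomaly classification.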
AI-enhanced imaging can serve as a valuable tool for educating medical students and training healthcare professionals, providing them with high-quality examples of both normal and abnormal fetal facial profiles. The malformations and genetic conditions diagnosable through fetal face assessment are as follows.
Cleft lip and palate
AI can detect abnormalities in the formation of the lip and palate, which are among the most common congenital malformations (Figure 6). Cleft lip and palate are among the most prevalent congenital malformations, affecting approximately 1 in 700 newborns worldwide. These craniofacial abnormalities result from the incomplete fusion of the lip and palate during embryonic development, leading to visible clefts that can significantly impact an individual’s appearance, speech, feeding, and overall quality of life [14].

Facial abnormalities such as cleft lip and palate can be identified using 4D ultrasound, particularly when enhanced with AI imaging techniques. These advanced AI imaging techniques can significantly improve the prenatal detection of facial abnormalities, allowing for better preparation and management of the condition once the baby is born.
Down syndrome (trisomy 21)
Down syndrome, also known as trisomy 21, is a genetic disorder characterized by the presence of an extra copy of chromosome 21 (Figure 7). Individuals with Down syndrome often exhibit a distinct set of physical characteristics, including a flattened nasal bridge, epicanthal folds, and upslanting palpebral fissures [15].

Down syndrome, also known as trisomy 21, often presents with distinct facial characteristics. Here are some of the most notable features that can be identified: flat facial profile, upward slanting palpebral fissures, small nose and flattened nasal bridge, and small mouth with protruding tongue. The ears can be smaller and sometimes lower set. AI algorithms can highlight and accentuate the facial structures, potentially making it easier to identify the characteristic features associated with Down syndrome.
Trisomy 13
Trisomy 13, also known as Patau syndrome, is a rare chromosomal disorder characterized by the presence of an extra copy of the 13th chromosome (Figure 8). This genetic condition leads to a range of congenital abnormalities, including microphthalmia (small eyes), cleft lip/palate, and polydactyly (extra fingers or toes) [16, 17].

When using AI-enhanced 3D/4D ultrasound to observe fetuses with trisomy 13 (Patau syndrome), the clear and detailed imaging can help identify various characteristic features and anomalies associated with this condition.
Trisomy 18
Trisomy 18, also known as Edwards syndrome, is a rare chromosomal disorder characterized by the presence of an extra copy of the 18th chromosome (Figure 9). This genetic anomaly results in a range of physical and developmental abnormalities, including distinctive facial features that can be detected using artificial intelligence technologies. One of the hallmark facial features associated with trisomy 18 is micrognathia, a condition where the lower jaw is significantly smaller than the upper jaw. This can lead to a recessed or “weak” chin appearance. Additionally, individuals with trisomy 18 often have low-set, malformed ears, as well as a prominent occiput, the rounded bony projection at the back of the skull. The clinical variability associated with trisomy 18 is wide-ranging, with additional features that have been reported including bifid uvula, cleft palate, heart defects, radioulnar synostosis, genu valgum, pes cavus, fifth-finger clinodactyly, hypotonia, joint laxity, and small genitalia with hypergonadotropic hypogonadism [18]. The unique facial features associated with trisomy 18, such as micrognathia, low-set ears, and a prominent occiput, can be effectively detected using AI enhancement tools.

Edwards syndrome, also known as trisomy 18, is another chromosomal disorder that presents with distinct facial characteristics. Here are some of the most notable features that can often be identified: small head (microcephaly), prominent occiput, small mouth and jaw (micrognathia), low-set malformed ears, cleft lip and/or palate, hypertelorism and narrow palpebral fissures. AI-enhanced imaging can further improve the detection and visualization of these features by providing higher resolution images and more accurate interpretation.
Craniofacial syndromes
One area where AI-enhanced imaging has shown great promise is the detection of craniofacial syndromes, such as Treacher Collins syndrome, Apert syndrome, and Pierre Robin sequence (Figures 10 and 11). These conditions are characterized by distinct facial features, which can be effectively identified through the application of intelligent systems [19]. Craniofacial syndromes often involve a complex interplay of genetic and developmental factors, leading to unique facial characteristics [13]. AI-powered algorithms have demonstrated the ability to detect these subtle patterns, outperforming traditional manual assessment methods [20]. As the field of AI-enhanced imaging continues to evolve, the potential for its application in the early detection and management of craniofacial syndromes is becoming increasingly evident [21, 22].

Apert syndrome is a genetic disorder characterized by the premature fusion of certain skull bones, leading to distinctive facial features and other anomalies. This condition falls under a group of disorders known as craniosynostosis syndromes. The use of 3D/4D ultrasound and AI-enhanced imaging provides a powerful tool for the accurate diagnosis and comprehensive evaluation of Apert syndrome and its associated facial abnormalities.

Pierre Robin sequence (PRS) is a congenital condition characterized by a sequence of anomalies that primarily affect the development of the jaw and palate. AI-enhanced 3D/4D ultrasound imaging brings additional value to the diagnosis and evaluation of Pierre Robin sequence.
Facial dysmorphisms
Facial dysmorphisms, or subtle abnormalities in facial features, can be indicative of various genetic conditions and syndromes, and early identification of these dysmorphisms can be crucial for timely diagnosis and appropriate treatment (Figure 12). Advances in AI and machine learning have opened up new possibilities for the detection and assessment of facial dysmorphisms, offering a promising tool to assist clinicians in the identification of these subtle features [20].

3D/4D ultrasound combined with AI image enhancement can significantly improve the detection and analysis of various facial dysmorphisms. In the future, AI is expected to play a crucial role in diagnosing facial dysmorphism disorders by leveraging advanced imaging techniques and machine learning algorithms.
Enhancing the realism of 4D imagery through AI algorithms
The realm of 4D imagery, which encompasses the integration of three-dimensional spatial data and the temporal dimension, has witnessed a remarkable surge in interest and technological advancement in recent years. The development of robust AI algorithms has played a pivotal role in enhancing the realism and immersive quality of these dynamic visual experiences, catering to a wide range of applications, from augmented reality and virtual reality to cinematic special effects [21, 22].
One key aspect of this evolution is the growing sophistication of algorithms used to strengthen the appearance of 4D images. Recent advancements in computer vision, image processing, and machine learning have enabled the creation of more realistic and seamless 4D visual environments [23, 24]. By leveraging techniques such as object detection, image segmentation, and scene reconstruction, these AI-driven algorithms can now accurately capture and reconstruct the intricate details, lighting, and textures of real-world scenes, seamlessly integrating them into the 4D space [25].
Moreover, the integration of generative adversarial networks and other deep learning architectures has empowered these algorithms to generate synthetic 4D content that is virtually indistinguishable from reality [21, 22]. The competition between the generative and discriminative components of these models results in a final output that closely resembles the true image, reducing the occurrence of unrealistic or deceptive visual artifacts [26].
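The "competition" between the generative and discriminative components can be made concrete with the standard GAN objective. The toy functions below (an illustrative sketch of the loss terms only, omitting the networks and training loop) show why the discriminator is rewarded for separating real from fake while the generator is rewarded for fooling it.

```python
import math

def bce(prediction, target):
    """Binary cross-entropy for a single probability in [0, 1]."""
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(prediction + eps)
             + (1 - target) * math.log(1 - prediction + eps))

def discriminator_loss(d_real, d_fake):
    """Discriminator wants real inputs scored 1 and fakes scored 0."""
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    """Generator wants the discriminator to score its fakes as real."""
    return bce(d_fake, 1.0)
```

Training alternates between minimizing these two losses; at equilibrium the generator's output is indistinguishable from real data, which is the property exploited when synthesizing realistic 4D content.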
As the field of computational creativity continues to evolve, the application of AI-driven techniques in the domain of 4D imagery has become increasingly prevalent [27]. These advancements have enabled the automation of tasks that were previously only achievable through human intervention, such as the generation of dynamic visual effects, the seamless integration of virtual and physical elements, and the creation of immersive, photorealistic experiences.
To further enhance the realism of 4D imagery, ongoing research is exploring the integration of human-in-the-loop approaches, where the creative and emotional feedback from human collaborators is incorporated into the AI-driven creative process [28]. By bridging the gap between machine-generated content and human-centric aesthetics, these hybrid systems aim to produce 4D visual experiences that not only captivate the senses but also resonate with the emotional and high-level conceptual aspects of the content.
Leveraging 4D image dataset methods for AI based image enhancement
One promising area in this domain is the utilization of 4D image datasets, which capture dynamic information over time, for AI-driven image enhancement. These 4D datasets, such as those obtained from magnetic resonance imaging, computational tomography, or positron emission tomography, provide a wealth of spatial and temporal information that can be harnessed to develop novel AI algorithms for enhancing medical images [29], [30], [31].
Several classes of AI models have been employed in the field of 4D image enhancement, including but not limited to convolutional neural networks, recurrent neural networks, and generative adversarial networks [32, 33]. These AI models can be trained to learn the complex patterns and relationships within 4D image datasets, enabling them to perform tasks such as image denoising, super-resolution, and segmentation with enhanced accuracy and efficiency [13].
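Of the tasks just listed, denoising is the easiest to illustrate without a trained network. The sketch below uses a classical median filter as a stand-in for a learned denoiser; it removes the impulse-like speckle that is common in ultrasound, while learned models go further by adapting the smoothing to image content.

```python
import numpy as np

def median_denoise(frame, size=3):
    """Replace each pixel with the median of its local neighborhood,
    suppressing speckle-like outliers while preserving edges."""
    pad = size // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame, dtype=float)
    h, w = frame.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

Because the median ignores extreme values rather than averaging them in, a single bright speckle disappears without blurring the surrounding anatomy, which is why median-type filters remain a common baseline against which learned denoisers are compared.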
The process of training an observer to carry out conventional 4D image enhancement AI processes typically involves the following steps:
Training to pre-process the 4D image data, including tasks such as image registration, normalization, and feature extraction, to prepare the data for input into the AI models [34].
Guiding through the process of training and fine-tuning the AI models for specific 4D image enhancement tasks, such as noise reduction, sharpening, or segmentation, using established techniques like transfer learning, data augmentation, and hyperparameter optimization [35].
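Two of the pre-processing operations named in the steps above, intensity normalization and data augmentation, can be sketched concretely. The functions below are minimal illustrations of these steps (registration and model fine-tuning are omitted, as they depend on the specific toolchain).

```python
import numpy as np

def normalize(frame):
    """Zero-mean, unit-variance intensity normalization, so frames
    from different machines and settings are comparable."""
    frame = frame.astype(float)
    std = frame.std()
    return (frame - frame.mean()) / std if std else frame - frame.mean()

def augment(frame):
    """Simple geometric augmentation: the original plus horizontal
    and vertical flips, tripling the effective training data."""
    return [frame, np.fliplr(frame), np.flipud(frame)]
```

In practice augmentation also includes small rotations, crops, and intensity jitter; the idea is the same — expose the model to plausible variations of each scan so it generalizes beyond the exact acquisitions in the dataset.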
By following this comprehensive training process, the observer can develop the necessary skills and knowledge to effectively leverage AI-driven techniques for 4D image enhancement, ultimately leading to improved diagnostic capabilities and better patient outcomes in the healthcare domain.
Automated AI enhancement of fetal facial profile using deep learning on ultrasound 3D/4D images
Developing robust and generalizable medical AI models requires access to large, diverse datasets. This is particularly challenging in the medical imaging domain, where datasets are often limited in size and scope, hindering the performance and applicability of deep learning-based approaches. AI’s ability to analyze large datasets and identify subtle patterns can lead to personalized prenatal care plans tailored to the specific needs of each pregnancy, potentially improving outcomes [1].
To address this challenge, we conducted a systematic process of training and validating an AI model on a large dataset of ultrasound images. The dataset consisted of high-quality 3D/4D ultrasound scans from diverse regions, covering a wide range of anatomical structures and pathologies. The training process involved fine-tuning the model on 3D/4D fetal facial profile images from the ultrasound dataset, leveraging techniques such as data augmentation and transfer learning to maximize the model's performance [31].
Our findings show great potential in the rapid technological progress of AI in medical imaging, where the AI-processed results can achieve the most sophisticated, smoothest, sharpest images and at the same time show better generalization capabilities compared to conventional 3D/4D images before final processing by AI. The ability to leverage large and diverse 3D/4D datasets and apply AI to a variety of downstream tasks holds great promise in advancing the clinical application of AI in medical 3D/4D imaging [2, 3].
However, it is important to note that the success of medical AI is not solely dependent on the availability of large datasets. Factors such as data quality, annotation accuracy, and addressing the complexity of the medical domain are also crucial for developing clinically relevant and trustworthy AI systems. Ongoing efforts in this direction, coupled with the continued advancements in foundation models, hold the potential to transform the landscape of medical imaging AI [2, 3, 33].
Exploring the implications of AI-driven fetal face profiling
The recent advancements in artificial intelligence have extended their reach into the realm of prenatal healthcare, giving rise to the intriguing prospect of AI-driven fetal face profiling. This innovative technology holds the potential to revolutionize the way we approach obstetric diagnostics, offering a deeper understanding of fetal development and potential health implications.
Concurrently, the application of AI in other areas of obstetric imaging, such as ultrasonography and magnetic resonance imaging, has demonstrated remarkable potential. AI-powered analysis of these modalities can assist in the early detection of fetal abnormalities, enabling timely interventions and potentially reducing the incidence of severe birth defects [2, 3].
The implications of AI-driven fetal face profiling extend beyond mere diagnostic capabilities. By providing a deeper insight into fetal development, this technology could inform personalized prenatal care, allowing healthcare providers to tailor their approach to the unique needs of each individual pregnancy.
However, the integration of AI in obstetric diagnostics is not without its challenges. Ensuring the reliability, accuracy, and ethical implementation of these systems is paramount, as they will ultimately impact the lives of both mothers and their unborn children [10].
As the field of AI-driven fetal face profiling continues to evolve, it is crucial to consider the broader implications and potential applications of this technology. By carefully navigating the complexities and addressing the challenges, we can harness the power of AI to enhance prenatal care, improve fetal outcomes, and ultimately, contribute to the well-being of both mother and child.
The ethical implications of AI in prenatal diagnostics
As advancements in technology continue to transform the healthcare landscape, the integration of artificial intelligence into prenatal diagnostics has raised several critical ethical considerations that warrant careful examination. One of the primary concerns revolves around patient data privacy, as the extensive collection and analysis of sensitive genetic and health information by AI systems raises significant questions about the appropriate handling and protection of such data [11, 36].
Moreover, the issue of patient consent in the release of this personal information becomes paramount, with the need to ensure that individuals fully understand the implications and provide informed consent before their data is utilized.
Another area of ethical concern is the potential for bias within the AI algorithms used in prenatal diagnostics. Given the complex nature of the data and the inherent biases that can exist within training datasets, there is a risk that the algorithms may generate results that reflect and perpetuate societal biases, potentially leading to discrimination or inaccurate diagnoses [11, 37].
To address these ethical challenges, it is crucial that healthcare providers, policymakers, and AI developers work collaboratively to establish robust ethical frameworks and guidelines that prioritize patient privacy, informed consent, and algorithmic transparency [11, 38]. By doing so, the medical community can harness the power of AI in prenatal diagnostics while upholding the highest ethical standards and preserving the well-being of both patients and the broader society [39], [40], [41].
AI-assisted ultrasound fetal facial profile diagnostics: findings and future directions
Advances in artificial intelligence have revolutionized various aspects of healthcare, and the field of obstetrics is no exception. The integration of AI into fetal ultrasound imaging has shown promising results in enhancing diagnostic capabilities and improving patient outcomes [1], [2], [3], [4], [5].
One key area of exploration is the use of AI-assisted ultrasound for fetal facial profile assessment. Facial features can provide valuable insights into fetal development, genetic conditions, and potential congenital anomalies [1, 2]. Current manual analysis of fetal facial profiles is time-consuming and relies heavily on the expertise of healthcare providers [42], [43], [44].
Looking towards the future, there are several promising research directions to explore. Integrating AI with standard 4D ultrasound equipment could enhance real-time fetal facial profile analysis, enabling healthcare providers to make more informed decisions during pregnancy [1, 2]. Additionally, exploring the use of AI at different stages of pregnancy, from early detection to late-term monitoring, could lead to earlier identification of developmental issues and facilitate timely interventions [1, 2, 3, 4, 5].
Another area of interest is the potential for AI-assisted ultrasound to contribute to the prediction and management of high-risk pregnancies. By identifying subtle facial abnormalities or deviations from the norm, AI algorithms could assist in the early detection of genetic disorders or congenital anomalies, allowing for appropriate medical care and counseling to be provided [1, 2, 3, 4, 5].
Furthermore, the integration of AI with fetal ultrasound could lead to advancements in personalized medicine. By analyzing the unique facial characteristics of each fetus, healthcare providers may be able to tailor prenatal care and develop customized treatment plans, ultimately improving maternal and fetal outcomes [1, 2, 3, 4, 5].
As the field of AI-assisted ultrasound continues to evolve, it is essential to address the ethical and regulatory considerations that come with the use of these technologies. Ensuring patient privacy, data security, and the appropriate training and oversight of AI systems are critical to the successful implementation of these advancements [10].
Conclusions
One of the key benefits of using AI in this context is the ability to quickly and accurately identify potential abnormalities or developmental issues in the fetal facial profile. By comparing 3D/4D ultrasound images against a database of known norms and anomalies, AI algorithms can rapidly flag areas of concern, allowing for earlier detection and intervention.
AI technology can also reduce the margin of error in the analysis process, as it is not subject to the fatigue or subjectivity that can affect human practitioners. Healthcare providers can therefore have greater confidence in the accuracy of their diagnostic assessments, leading to more informed decision-making and better outcomes for patients.
Beyond improving diagnosis, AI can streamline the workflow of healthcare providers by automating parts of the analysis process. This frees up valuable time for medical professionals to focus on other aspects of patient care, contributing to a more efficient and effective healthcare system.
Overall, the benefits of using AI to strengthen 3D/4D ultrasound analysis of fetal facial profiles are clear: improved accuracy and efficiency, streamlined workflow, and ultimately better outcomes for patients. As this technology continues to evolve and become more widely adopted, AI has the potential to revolutionize prenatal diagnostics and improve the quality of care for expectant mothers and their babies.
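As a purely illustrative sketch of the norm-comparison step described above (all measurement names and normative values here are hypothetical and not drawn from this review or any clinical reference), flagging deviations in extracted facial biometric measurements can be framed as a simple z-score check against a normative database:

```python
# Illustrative sketch only: flag fetal facial biometric measurements that fall
# outside a hypothetical normative reference range, as a post-processing step
# of an AI analysis pipeline might. Values below are invented for illustration.

# Hypothetical normative (mean, standard deviation) pairs per measurement
NORMS = {
    "nasal_bone_length_mm": (5.8, 0.9),
    "frontomaxillary_angle_deg": (78.0, 4.0),
}

def flag_deviations(measurements, z_threshold=2.0):
    """Return measurements whose z-score magnitude exceeds the threshold."""
    flags = {}
    for name, value in measurements.items():
        if name not in NORMS:
            continue  # no reference range available for this measurement
        mean, sd = NORMS[name]
        z = (value - mean) / sd
        if abs(z) > z_threshold:
            flags[name] = round(z, 2)
    return flags

print(flag_deviations({"nasal_bone_length_mm": 3.2,
                       "frontomaxillary_angle_deg": 79.0}))
# → {'nasal_bone_length_mm': -2.89}
```

In practice such thresholds, reference ranges, and measurement definitions would come from validated, gestational-age-specific population data rather than fixed constants.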
Acknowledgments
We appreciate the Indonesian Society of Obstetrics and Gynecology (ISOG) and the Indonesian Society of Maternal-Fetal Medicine (HKFM) for encouraging and supporting the work of this review article.
- Research ethics: Not applicable.
- Informed consent: Not applicable.
- Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.
- Use of Large Language Models, AI and Machine Learning Tools: None declared.
- Conflict of interests: The authors state no conflict of interest.
- Research funding: None declared.
- Data availability: Not applicable.
References
1. He, F, Wang, Y, Xiu, Y, Zhang, Y, Chen, L. Artificial intelligence in prenatal ultrasound diagnosis. Front Med 2021;8:729978. https://doi.org/10.3389/fmed.2021.729978.
2. Kim, HY, Cho, GJ, Kwon, HS. Applications of artificial intelligence in obstetrics. Ultrasonography 2023;42:2–9. https://doi.org/10.14366/usg.22063.
3. Xiao, S, Zhang, J, Zhu, Y, Zhang, Z, Cao, H, Xie, M, et al. Application and progress of artificial intelligence in fetal ultrasound. J Clin Med 2023;12:3298. https://doi.org/10.3390/jcm12093298.
4. Hiersch, L, Melamed, N. Fetal growth velocity and body proportion in the assessment of growth. Am J Obstet Gynecol 2018;218:S700–11.e1. https://doi.org/10.1016/j.ajog.2017.12.014.
5. Medjedovic, E, Stanojevic, M, Jonuzovic-Prosic, S, Ribic, E, Begic, Z, Cerovac, A, et al. Artificial intelligence as a new answer to old challenges in maternal-fetal medicine and obstetrics. Technol Health Care 2024;32:1273–87. https://doi.org/10.3233/THC-231482.
6. Bindiya, HM, Chethana, HT, Pavan Kumar, ST. Detection of anomalies in fetus using convolution neural network. IJ Inform Technol Comput Sci 2018;11:77–86. https://doi.org/10.5815/ijitcs.2018.11.08.
7. Kurjak, A, Pooh, RK, Merce, LT, Carrera, JM, Salihagic-Kadic, A, Andonotopo, W. Structural and functional early human development assessed by three-dimensional and four-dimensional sonography. Fertil Steril 2005;84:1285–99. https://doi.org/10.1016/j.fertnstert.2005.03.084.
8. Kurjak, A, Miskovic, B, Andonotopo, W, Stanojevic, M, Azumendi, G, Vrcic, H. How useful is 3D and 4D ultrasound in perinatal medicine? J Perinat Med 2007;35:10–27. https://doi.org/10.1515/JPM.2007.002.
9. Kurjak, A, Azumendi, G, Andonotopo, W, Salihagic-Kadic, A. Three- and four-dimensional ultrasonography for the structural and functional evaluation of the fetal face. Am J Obstet Gynecol 2007;196:16–28. https://doi.org/10.1016/j.ajog.2006.06.090.
10. O’Sullivan, ME, Considine, EC, O’Riordan, M, Marnane, WP, Rennie, JM, Boylan, GB. Challenges of developing robust AI for intrapartum fetal heart rate monitoring. Front Artif Intell 2021;4:765210. https://doi.org/10.3389/frai.2021.765210.
11. Jha, D, Rauniyar, A, Srivastava, A, Hagos, DH, Tomar, NK, Sharma, V, et al. Ensuring trustworthy medical artificial intelligence through ethical and philosophical principles. New York, NY: Cornell University; 2023.
12. Grunhut, J, Marques, O, Wyatt, ATM. Needs, challenges, and applications of artificial intelligence in medical education curriculum. JMIR Med Educ 2022;8:e35587. https://doi.org/10.2196/35587.
13. Pinto-Coelho, L. How artificial intelligence is shaping medical imaging technology: a survey of innovations and applications. Bioengineering (Basel) 2023;10:1435. https://doi.org/10.3390/bioengineering10121435.
14. Levin, J, Rispel, LC. Epidemiology and clinical profile of individuals with cleft lip and palate utilising specialised academic treatment centres in South Africa. PLoS One 2019;14:e0215931. https://doi.org/10.1371/journal.pone.0215931.
15. Krinsky-McHale, SJ, Jenkins, EC, Zigman, WB, Silverman, W. Ophthalmic disorders in adults with Down syndrome. London: Hindawi Publishing Corporation; 2012. https://doi.org/10.1155/2012/974253.
16. Forés-Martos, J, Cervera, R, Chirivella-Perez, E, Ramos-Jarero, A, Climent, J. A genomic approach to study down syndrome and cancer inverse comorbidity: untangling the chromosome 21. Lausanne: Frontiers Media; 2015. https://doi.org/10.3389/fphys.2015.00010.
17. Castro-Hamoy, LD, Tumulak, MJR, Cagayan, MSFS, Sy, PA, Mira, NRC, Laurino, M. Attitudes of Filipino parents of children with Down syndrome on noninvasive prenatal testing. J Community Genet 2022;13:411–25. https://doi.org/10.1007/s12687-022-00597-w.
18. Gropman, A, Rogol, AD, Fenno, I, Sadeghin, T, Sinn, S, Jameson, R, et al. Clinical variability and novel neurodevelopmental findings in 49, XXXXY syndrome. Hoboken, NJ: Wiley; 2010:1523–30. https://doi.org/10.1002/ajmg.a.33307.
19. Nagi, R, Konidena, A, Rakesh, N, Gupta, R, Pal, A, Mann, AK. Clinical applications and performance of intelligent systems in dental and maxillofacial radiology: a review. Imaging Sci Dent 2020;50:81–92. https://doi.org/10.5624/isd.2020.50.2.81.
20. AlSuwaidan, L. Deep learning based classification of dermatological disorders. Los Angeles, CA: Sage Publishing; 2023. https://doi.org/10.1177/11795972221138470.
21. Bi, WL, Hosny, A, Schabath, MB, Giger, ML, Birkbak, NJ, Mehrtash, A, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin 2019;69:127–57. https://doi.org/10.3322/caac.21552.
22. Cetinić, E, She, J. Understanding and creating art with AI: review and outlook. Assoc Comput Machinery 2022;18:1–22. https://doi.org/10.1145/3475799.
23. Ryskeldiev, B, Ilić, S, Ochiai, Y, Elliott, L, Nikonole, H, Billinghurst, M. Creative immersive AI: emerging challenges and opportunities for creative applications of AI in immersive media. In: Online virtual conference, Yokohama, Japan; 2021. https://doi.org/10.1145/3411763.3450399.
24. Abgaz, Y, Souza, RR, Methuku, J, Koch, G, Dorn, A. A methodology for semantic enrichment of cultural heritage images using artificial intelligence technologies. J Imaging 2021;7:121. https://doi.org/10.3390/jimaging7080121.
25. Janowicz, K, Gao, S, McKenzie, G, Hu, Y, Bhaduri, B. Geo AI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. Int J Geogr Inf Sci 2019;34:625–36. https://doi.org/10.1080/13658816.2019.1684500.
26. Anantrasirichai, N, Bull, D. Artificial intelligence in the creative industries: a review. Artif Intell Rev 2021;55:589–656. https://doi.org/10.1007/s10462-021-10039-7.
27. Basalla, M, Apruzzese, G, Brocke, JV. Creativity of deep learning: conceptualization and assessment. arXiv preprint arXiv:2012.02282; 2022. https://doi.org/10.5220/0010783500003116.
28. Chung, NC. Human in the loop for machine creativity. New York, NY: Cornell University; 2021.
29. Cheng, R, Roth, HR, Lay, N, Lü, L, Türkbey, B, Gandler, W, et al. Automatic MR prostate segmentation by deep learning with holistically-nested networks. J Med Imaging (Bellingham) 2017;4:041302. https://doi.org/10.1117/12.2254558.
30. Lundervold, AS, Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2019;29:102–27. https://doi.org/10.1016/j.zemedi.2018.11.002.
31. Langlotz, CP, Allen, B, Erickson, BJ, Kalpathy-Cramer, J, Bigelow, K, Cook, TS, et al. A roadmap for foundational research on artificial intelligence in medical imaging: from the 2018 NIH/RSNA/ACR/the academy workshop. Radiology 2019;291:781–91. https://doi.org/10.1148/radiol.2019190613.
32. Kaissis, G, Makowski, MR, Rueckert, D, Braren, R. Secure, privacy-preserving and federated machine learning in medical imaging. Nat Mach Intell 2020;2:305–11. https://doi.org/10.1038/s42256-020-0186-1.
33. Wang, S, Cao, G, Wang, Y, Liao, S, Wang, Q, Shi, J, et al. Review and prospect: artificial intelligence in advanced medical imaging. Lausanne: Frontiers Media; 2021. https://doi.org/10.3389/fradi.2021.781868.
34. Hosny, A, Parmar, C, Quackenbush, J, Schwartz, LH, Aerts, HJ. Artificial intelligence in radiology. Nat Rev Cancer 2018;18:500–10. https://doi.org/10.1038/s41568-018-0016-5.
35. Nampalle, KB, Singh, P, Narayan, UV, Raman, B. DeepMediX: a deep learning-driven resource-efficient medical diagnosis across the spectrum. New York, NY: Cornell University; 2023.
36. Nasir, S, Khan, RA, Bai, S. Ethical framework for harnessing the power of AI in healthcare and beyond. New York, NY: Cornell University; 2023. https://doi.org/10.1109/ACCESS.2024.3369912.
37. Shandhi, MMH, Dunn, JP. AI in medicine: where are we now and where are we going? Cell Rep Med 2022;3:100861. https://doi.org/10.1016/j.xcrm.2022.100861.
38. Davenport, TH, Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthcare J 2019;6:94–8. https://doi.org/10.7861/futurehosp.6-2-94.
39. Dorr, DA, Adams, L, Embí, PJ. Harnessing the promise of artificial intelligence responsibly. JAMA 2023;329:1347. https://doi.org/10.1001/jama.2023.2771.
40. Petersson, L, Vincent, K, Svedberg, P, Nygren, JM, Larsson, I. Ethical considerations in implementing AI for mortality prediction in the emergency department: linking theory and practice. Los Angeles, CA: SAGE Publishing; 2023. https://doi.org/10.1177/20552076231206588.
41. Al-antari, MA. Artificial intelligence for medical diagnostics—existing and future AI technology! Diagnostics (Basel) 2023;13:688. https://doi.org/10.3390/diagnostics13040688.
42. Kurjak, A, Andonotopo, W, Hafner, T, Salihagic Kadic, A, Stanojevic, M, Azumendi, G, et al. Normal standards for fetal neurobehavioral developments – longitudinal quantification by four-dimensional sonography. J Perinat Med 2006;34:56–65. https://doi.org/10.1515/JPM.2006.007.
43. Kurjak, A, Miskovic, B, Stanojevic, M, Amiel-Tison, C, Ahmed, B, Azumendi, G, et al. New scoring system for fetal neurobehavior assessed by three- and four-dimensional sonography. J Perinat Med 2008;36:73–81. https://doi.org/10.1515/JPM.2008.007.
44. Emir, AK, Andonotopo, W, Bachnas, MA, Sulistyowati, S, Stanojevic, M, Kurjak, A. 4D assessment of motoric function in a singleton acephalous fetus: the role of the KANET test. Case Rep Perinat Med 2017;6:20170022. https://doi.org/10.1515/crpm-2017-0022.
© 2024 Walter de Gruyter GmbH, Berlin/Boston