
AI-driven microscopy: from classical analysis to deep learning applications

  • Sreenivas Bhattiprolu
Published/Copyright: May 22, 2025

Abstract

Microscopy has revolutionized life sciences by enabling detailed visualization of cellular and subcellular processes. Recent advancements in microscope technology have enhanced our ability to capture complex biological events, generating vast amounts of high-dimensional data. While this opens new avenues for discovery, it also introduces significant challenges in data analysis and interpretation. Modern microscopes can produce terabytes of data in a single experiment, often combining multiple imaging modalities across three or four dimensions. Techniques such as light sheet microscopy (J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science, vol. 305, no. 5686, pp. 1007–1009, 2004), STED (S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett., vol. 19, no. 11, pp. 780–782, 1994), STORM (M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods, vol. 3, no. 10, pp. 793–795, 2006), and lattice light sheet microscopy (B.-C. Chen, et al., “Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution,” Science, vol. 346, no. 6208, 2014) have dramatically improved spatial resolution and acquisition speeds. These approaches yield datasets reaching tens of terabytes or more, exposing the limitations of traditional manual and semi-automated analysis methods, which are time-consuming, prone to bias, and ill-suited to the multidimensional nature of modern microscopy data. To address these challenges, researchers are increasingly adopting artificial intelligence (AI), particularly deep learning models such as convolutional neural networks and transformers (“Deep learning in microscopy,” Nat. Methods, 2019). AI-based tools can automate complex tasks like denoising (A. Krull, T. O. Buchholz, and F. Jug, “Noise2Void – learning denoising from single noisy images,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2129–2137), segmentation (J. Ma, Y. He, F. Li, L. Han, C. You, and B. Wang, “Segment anything in medical images,” Nat. Commun., vol. 15, p. 654, 2024), and virtual staining (Y. N. Nygate, et al., “Holographic virtual staining of individual biological cells,” Proc. Natl. Acad. Sci. (PNAS), vol. 117, no. 17, 2020), adapting to diverse imaging conditions and enabling scalable analysis of large datasets. This VIEWS article presents the perspective of ZEISS on how AI is transforming microscopy workflows. We highlight practical applications through real-world case studies and discuss how emerging computational tools are accelerating scientific discovery by making sense of complex, high-volume image data.

1 The evolution of microscopy analysis: from manual to AI-driven approaches

The journey toward automated microscopy analysis began earlier than many might expect. In 1966, Judith Prewitt and her colleagues took a pioneering step by developing one of the first systematic approaches to analyzing microscopy images [1]. At a time when digital computing was in its infancy, they identified 35 key parameters that could help distinguish between red blood cells (erythrocytes) and different types of white blood cells (leukocytes) such as lymphocytes, monocytes, and neutrophils. These parameters included measurements like nuclear area, nuclear-cytoplasmic contrast, and cytoplasmic area. Remarkably, they demonstrated the power of multi-parameter analysis by creating a three-dimensional decision space, plotting these parameters (Figure 1). This visualization showed how different cell types naturally clustered into distinct regions. While they noted it was impossible to visualize the effect of additional dimensions, they presciently suggested that the increasing discriminatory power observed when moving from one to three dimensions could extend into multi-dimensional space through appropriate analytical methods. Today’s AI systems have made this vision a reality, routinely leveraging tens, hundreds or thousands of dimensions to achieve unprecedented accuracy in cell classification – something that would have seemed like science fiction in the 1960s [2].
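
To make the idea of a decision space concrete, the sketch below classifies a cell by finding the nearest class centroid in a three-parameter feature space (nuclear area, nuclear-cytoplasmic contrast, cytoplasmic area). The feature values and the nearest-centroid rule are illustrative stand-ins, not the measurements or decision procedure of the original 1966 study.

import numpy as np

# Illustrative feature vectors (nuclear area, nuclear-cytoplasmic contrast,
# cytoplasmic area); values are placeholders, not Prewitt's measurements.
training = {
    "erythrocyte": np.array([[0.0, 0.05, 55.0], [0.0, 0.04, 60.0]]),
    "lymphocyte":  np.array([[38.0, 0.80, 20.0], [42.0, 0.85, 18.0]]),
    "neutrophil":  np.array([[30.0, 0.55, 70.0], [33.0, 0.60, 75.0]]),
}
centroids = {cls: pts.mean(axis=0) for cls, pts in training.items()}

def classify(cell):
    """Assign a cell to the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda cls: np.linalg.norm(cell - centroids[cls]))

print(classify(np.array([40.0, 0.82, 19.0])))  # -> lymphocyte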

Figure 1: 
Recreation of Prewitt’s 1966 three-dimensional decision space for blood cell classification. The visualization shows how different blood cell types (Lymphocytes-L, Erythrocytes-E, Monocytes-M, and Neutrophils-N) cluster in distinct regions when plotted using three key parameters: nuclear area, nuclear-cytoplasmic contrast, and cytoplasmic area.

As microscopy techniques evolved, so did the methods for analyzing images. The most basic approach, thresholding, worked well for fluorescence microscopy where the objects of interest (like stained cells) are easily discriminated against a dark background (Figure 2). More sophisticated techniques, such as watershed segmentation [3], helped researchers separate touching objects like clustered cells. However, these methods had a significant drawback: they often required careful manual adjustment of parameters, making results heavily dependent on individual users and potentially difficult to reproduce.
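
A minimal sketch of this classical pipeline using scikit-image: a fixed intensity threshold produces the foreground mask, and a distance-transform watershed splits touching objects. The threshold value and the bundled sample image are placeholders; real data would require manual tuning, which is exactly the user dependence noted above.

import numpy as np
from scipy import ndimage as ndi
from skimage import data, feature, segmentation

image = data.coins()                       # placeholder grayscale image
binary = image > 110                       # manually chosen intensity threshold

# Watershed on the distance transform separates touching objects
distance = ndi.distance_transform_edt(binary)
peaks = feature.peak_local_max(distance, labels=binary, min_distance=10)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = segmentation.watershed(-distance, markers, mask=binary)
print(labels.max(), "objects found")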

Figure 2: 
Comparison of manual and automated thresholding approaches for nuclear segmentation. (a) Original fluorescence microscopy image showing DAPI-stained cell nuclei. (b) Manual segmentation result using a user-defined threshold value of 30, highlighting the subjectivity of manual threshold selection. (c) Automated segmentation using Otsu’s method, which objectively determined a threshold value of 38, demonstrating how algorithmic approaches can reduce user bias in image analysis. Note the similar segmentation results despite different threshold values, illustrating the robustness of automated methods.

The field took an important step forward with the development of automated analysis tools like Otsu’s method [4], which could automatically determine optimal threshold values for segmentation (Figure 2c). This removed some user bias from the analysis process. However, while these methods worked well for simple cases, they struggled with more complex scenarios. For instance, analyzing brightfield microscopy images of wound healing assays, where researchers need to distinguish between areas containing cells and empty spaces (Figure 3a), required more sophisticated approaches. This involved applying entropy filtering [5] to highlight textural differences between cell-covered regions and scratch areas (Figure 3b), followed by Otsu thresholding to segment these regions (Figure 3c), enabling quantitative measurement of wound closure over time.
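
A minimal sketch of this texture-based workflow with scikit-image; the file name and the entropy-filter radius are placeholders, not the exact settings used for Figure 3.

from skimage import filters, io, morphology
from skimage.filters.rank import entropy
from skimage.util import img_as_ubyte

# 'scratch.tif' is a placeholder path for a brightfield scratch-assay image
image = img_as_ubyte(io.imread("scratch.tif", as_gray=True))

# Local entropy is high over textured, cell-covered regions and low over
# the smooth scratch area
texture = entropy(image, morphology.disk(10))

# Otsu's method then separates the two texture populations automatically
cells = texture > filters.threshold_otsu(texture)
print(f"open wound area: {1 - cells.mean():.1%} of the field of view")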

Figure 3: 
Evolution of methods for analyzing cell migration in wound healing assays. (a) Original brightfield microscopy image showing a scratch wound surrounded by cells. (b) Entropy-filtered image highlighting textural differences between cell-covered regions (high entropy) and the scratch area (low entropy). (c) Segmentation result after applying Otsu thresholding to the entropy-filtered image, distinguishing between cellular regions (yellow) and scratch area (cyan). (d) AI-based segmentation using a machine learning classifier trained on multiple image features (similar to Prewitt’s multi-parameter approach), demonstrating how modern methods can achieve accurate segmentation by considering multiple parameters simultaneously.

These tools helped identify cellular regions based on their complex patterns rather than just their brightness. Yet even these methods had limitations, particularly when analyzing complex images like electron microscopy data of subcellular structures. As shown in an EM image of mouse retina (Figure 4a), structures like mitochondria are embedded in a complex cellular environment with similar intensity values and textures, making traditional feature-based segmentation unreliable.

Figure 4: 
AI-powered segmentation of mitochondria in electron microscopy images. (a) Original electron microscopy image of mouse retina showing mitochondria embedded in a complex cellular environment. Traditional segmentation methods struggle with this type of data due to similar intensity values between mitochondria and surrounding structures. (b) Deep learning-based instance segmentation result (blue overlay), demonstrating AI’s ability to accurately identify individual mitochondria despite the challenging image context.

These traditional methods laid important groundwork but ultimately revealed the need for more powerful analytical tools. The introduction of artificial intelligence techniques provided a quantum leap in analytical capabilities. While early machine learning approaches like Random Forest [6] and Support Vector Machines [7] automated the analysis of multiple image features, deep learning has revolutionized the field by automatically discovering relevant features from training data. The power of this approach is evident in the accurate segmentation of mitochondria from the complex EM image (Figure 4b), a task that would be virtually impossible with traditional methods. This capability has transformed microscopy workflows, enabling everything from basic cell counting to sophisticated tasks like denoising images [8] and achieving super-resolution [9].
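
As a hedged illustration of that earlier machine-learning stage, the sketch below trains a Random Forest pixel classifier on a small stack of filter responses. The file names and the sparse annotation image (0 = unlabeled, 1 = cells, 2 = background) are assumptions made for the example.

import numpy as np
from skimage import filters, io
from sklearn.ensemble import RandomForestClassifier

image = io.imread("brightfield.tif", as_gray=True)     # placeholder path
labels = io.imread("annotations.tif")                  # sparse user labels

# Per-pixel feature stack: raw intensity plus a few filter responses
features = np.stack(
    [image, filters.gaussian(image, sigma=2),
     filters.gaussian(image, sigma=8), filters.sobel(image)],
    axis=-1,
).reshape(-1, 4)

annotated = labels.ravel() > 0
clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
clf.fit(features[annotated], labels.ravel()[annotated])
prediction = clf.predict(features).reshape(image.shape)  # 1 = cells, 2 = background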

The impact of AI extends beyond just improving analysis accuracy. Today’s AI-powered tools provide intuitive interfaces that allow researchers to automate complex analysis tasks without needing expertise in programming or computer vision [10], [11], [12]. Cloud-based platforms have eliminated the need for expensive local computing resources, while parallel processing capabilities have dramatically reduced analysis time for large datasets. Most importantly, AI has enabled entirely new possibilities in microscopy, from real-time control of microscopes based on image analysis to predicting cell properties without the need for specific staining.

As we’ll see in the following case studies, these advances are transforming how researchers approach microscopy across a wide range of applications, from basic research to drug discovery.

2 Case studies: AI segmentation in action

While artificial intelligence has transformed many areas of biological research, from predicting protein structures to enhancing microscope images, one of its most significant impacts has been in image segmentation – the process of identifying and separating meaningful structures within microscopy images. Segmentation often represents the critical first step in automated microscopy analysis workflows, and historically, it has been one of the most challenging steps to automate effectively.

AI-powered segmentation in microscopy typically employs two distinct approaches. Semantic segmentation classifies regions within an image without distinguishing individual objects. As shown in our example (Figure 5), when applied to a brightfield image of cells, semantic segmentation identifies cellular regions (highlighted in pink) from background, making it particularly useful for applications like measuring cell confluence or analyzing tissue organization. In contrast, instance segmentation identifies and separates individual objects, as demonstrated by the same cellular image where each cell is marked with a distinct color. This capability is essential for applications requiring single-cell analysis, such as tracking cell movements or analyzing morphological changes in individual cells over time.
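
The distinction can be shown in a few lines: a semantic mask answers only “which pixels are cell?”, whereas an instance labeling additionally answers “which cell?”. In this toy sketch, connected-component labeling converts one into the other, which only works because the objects do not touch; densely packed cells, as in Figure 5, require watershed splitting or a dedicated instance model.

import numpy as np
from skimage import measure

# Toy semantic mask (placeholder data): 1 = cell, 0 = background
semantic = np.zeros((8, 8), dtype=np.uint8)
semantic[1:3, 1:3] = 1      # first cell
semantic[5:7, 4:7] = 1      # second cell

print("cell coverage:", semantic.mean())       # semantic readout (confluence)

instances = measure.label(semantic)            # instance labels 1, 2, ...
print("number of cells:", instances.max())     # per-object readout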

Figure 5: 
Comparison of semantic and instance segmentation approaches in cell analysis. (a) Original brightfield microscopy image showing densely packed cells. (b) Semantic segmentation result where all cellular regions are highlighted in a single color (pink), useful for measuring overall cell coverage. (c) Instance segmentation result where each cell is assigned a unique color, enabling individual cell analysis.

These AI-based segmentation methods have fundamentally changed how researchers approach image analysis, enabling the processing of large datasets with consistency and speed that would be impossible to achieve manually. The following case studies demonstrate how AI-powered segmentation has automated microscopy tasks across diverse applications, from marine biology to neuroscience. Each example illustrates both the technical capabilities of AI segmentation and its practical impact on research productivity and reproducibility.

2.1 Case study 1: intelligent microscopy: AI-guided acquisition for automated specimen detection

For over a century, microscopists have faced a fundamental challenge: manually scanning large areas to find specific objects of interest for detailed imaging. This process is not only time-consuming but also prone to operator fatigue and bias, especially when searching for rare specimens [13]. AI-powered guided acquisition now offers an elegant solution to this long-standing challenge.

A particularly illustrative example comes from marine biology, where researchers need to locate and image specific plankton species within diverse seawater samples (Figure 6). The traditional approach of manually scanning slides to find rare Dinophysis specimens would require hours of careful observation. Instead, an AI-guided workflow was implemented that combines automated scanning with intelligent detection.

Figure 6: 
AI-powered guided acquisition workflow for selective imaging of plankton specimens. Left: Automated microscope (ZEISS Celldiscoverer 7) setup for intelligent acquisition. Center: Overview scan in widefield mode showing AI-based instance segmentation (blue box) identifying a Dinophysis specimen of interest. Right top: High-resolution confocal z-stack image of the identified specimen showing detailed cellular structure. Right bottom: 3D visualization of the segmented subcellular structures within the same specimen.

The implementation utilized an automated high-throughput microscope for image acquisition. The workflow begins with overview scans in widefield mode to efficiently survey large sample areas. Within each field of view, an instance segmentation model based on a modified Mask2Former architecture [14] identifies organisms of interest. These segmentation results are then fed back to the imaging software, which automatically triggers high-resolution 3D confocal imaging of the identified organisms. The resulting confocal z-stacks reveal detailed subcellular structures, including organelles like chloroplasts, whose arrangement and size provide valuable insights into cell state and health.
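
A hedged sketch of the guided-acquisition logic described above. The Microscope methods (overview_scan, acquire_z_stack) and the model’s segment_instances call are hypothetical stand-ins for the acquisition software’s API and the trained instance-segmentation model; they do not correspond to a specific ZEISS interface, and the area cutoff is illustrative.

from skimage import measure

MIN_AREA = 500  # illustrative pixel-area cutoff for candidate organisms

def guided_acquisition(microscope, model, stage_positions):
    """Survey each position in widefield, then image detected organisms in 3D."""
    for pos in stage_positions:
        overview = microscope.overview_scan(pos)        # fast widefield tile
        instances = model.segment_instances(overview)   # label image of detections
        for region in measure.regionprops(instances):
            if region.area < MIN_AREA:
                continue                                 # ignore debris and small objects
            # Re-center on the detection and trigger a high-resolution confocal z-stack
            microscope.acquire_z_stack(pos, offset=region.centroid)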

This automated approach, implemented during a research expedition by EMBL Heidelberg’s Mobile Labs, demonstrates how AI can transform microscopy workflows: the seamless integration of AI-driven detection with automated microscope control turns what was once a manual, time-intensive process into an efficient, automated one.

The implications extend far beyond marine biology. Similar approaches can revolutionize any application requiring the identification and detailed imaging of specific structures within complex samples, from detecting cells undergoing various stages of mitosis to analyzing rare cellular events. By combining automated detection with intelligent microscope control, AI is fundamentally changing how we approach microscopy data acquisition, making complex imaging workflows both more efficient and more reproducible.

2.2 Case study 2: neural circuit mapping: deep learning approaches to dendritic spine analysis

Understanding neurological diseases requires detailed analysis of neural circuits, particularly the intricate structures of dendritic spines and neuronal projections [15]. These microscopic features hold crucial information about synapse formation and function, making their accurate analysis essential for understanding disease progression and developing therapeutic approaches. However, their complex morphology and subtle structural differences present significant challenges for traditional imaging analysis methods.

A study of primary neurons expressing a fluorescent protein demonstrates how AI can transform this challenging analysis (Figure 7). The neurons, isolated from mouse brain and cultured in multiwell plates, were imaged using a ZEISS Celldiscoverer 7 microscope with LSM 900 and Airyscan 2. The resulting images reveal complex neuronal architecture, with extensive branching patterns and numerous dendritic spines. The key challenge lay in accurately distinguishing individual components – the cell body, neurites, and the multitude of dendritic spines emerging from these projections.

Figure 7: 
AI-based segmentation of neuronal structures from confocal microscopy images. Left: Original fluorescence image showing neuronal structure with complex dendritic branching patterns and numerous spines. Right: AI-based segmentation results showing cell body (blue) and neurites (yellow), with individual dendritic spines detected and labeled with unique colors, enabling quantitative analysis of spine distribution and morphology. (Sample courtesy: R. Thomas and D. L. Benson, Icahn School of Medicine at Mount Sinai, New York, USA).

To address this, separate deep learning models were implemented. A semantic segmentation model based on U-Net architecture [16] was trained to identify the cell body and neurites, while an instance segmentation model was developed to detect individual dendritic spines. This dual-model approach reflects the different nature of these structures – neurons as continuous networks and spines as distinct objects.

The trained models were then applied to process the complete 3D dataset. The results demonstrate the power of this approach: the segmentation clearly distinguishes the cell body (blue) and neurites (yellow), while simultaneously identifying each dendritic spine as a unique object (shown in various colors) (Figure 7). This comprehensive segmentation enables detailed quantitative analysis of neuronal architecture. For this neuron, analysis revealed eight neurites with an average branch depth of 4.125 (number of branching levels) and a mean path tortuosity of 0.825 (ratio of straight-line distance to actual path length). The neurites averaged 152.34 μm in length (roughly 1,219 μm of neurite in total), and 1,294 spines were identified, yielding a spine density of 1.06 spines per micrometer. Such detailed measurements would be impractical to obtain manually, demonstrating how AI-powered analysis can provide rich quantitative insights into neural circuit organization and potential dysfunction in disease states.
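
A minimal sketch of two of the morphometrics quoted above, assuming the neurite paths and spine counts have already been extracted from the segmentation; the numbers simply reproduce the values reported for this neuron.

import numpy as np

def tortuosity(path_points):
    """Straight-line (chord) distance divided by traced path length:
    1.0 for a perfectly straight neurite, lower for more winding paths."""
    pts = np.asarray(path_points, dtype=float)
    path_length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    return np.linalg.norm(pts[-1] - pts[0]) / path_length

# Spine density = spine count / total neurite length
n_neurites, mean_neurite_length_um, n_spines = 8, 152.34, 1294
density = n_spines / (n_neurites * mean_neurite_length_um)
print(f"{density:.2f} spines per micrometer")   # ~1.06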

2.3 Case study 3: three-dimensional cell analysis: AI solutions for Complex In Vitro Models

FDA’s approval of Complex In Vitro Models (CIVMs) for preclinical research has transformed drug discovery by providing more physiologically relevant testing platforms [17]. These 3D models better mimic the complexity of living tissue, but they also present significant challenges for traditional imaging and analysis methods. How do you accurately measure cellular responses when cells are arranged in complex three-dimensional structures?

This challenge is illustrated in a study of drug effects on 3D spheroids of human ovarian cancer cells (A2780 cells). The spheroids, developed by Inventia, were imaged using a ZEISS Celldiscoverer 7 microscope with Airyscan technology, capturing multiple cellular components: nuclei (Hoechst), endoplasmic reticulum/soma (CoA), and actin filaments (phalloidin). These multichannel 3D images reveal intricate arrangements of cells in three dimensions (Figure 8), providing rich information about cellular organization and morphology. Traditional analysis methods, such as examining maximum intensity projections, would miss critical information about cell-to-cell interactions and spatial relationships within these complex structures.

Figure 8: 
AI-based analysis of drug response in 3D ovarian cancer cultures. Top: AI-aided segmentation of individual cells in 3D matrix cultures at different drug concentrations (cisplatin) showing cellular organization and morphology. The 3D cultures were stained for Hoechst to identify individual cells through their nuclei, and phalloidin to detect the cell cytoskeleton and thus cell boundary. Bottom: Quantitative analysis showing the effect of drug concentration on nucleus-to-cell volume ratios. Left to right: control, 25 µM drug concentration, and 12.5 µM drug concentration, demonstrating dose-dependent changes in cellular architecture. (Images courtesy of Martin Engel, Inventia, acquired using ZEISS Celldiscoverer 7 with Airyscan).

The solution combines AI-powered analysis with advanced 3D imaging. Two deep learning models were developed: one for nuclear segmentation and another for whole-cell boundary detection. These models can accurately identify individual cells within densely packed spheroids. The resulting segmentation reveals each cell as a distinct entity, colored uniquely to show its position and morphology within the 3D structure. The trained models were integrated into comprehensive analysis pipelines, with distributed computing infrastructure employed to process multiple spheroids across multiwell plates in parallel.

Analysis of drug responses revealed clear dose-dependent effects on cellular architecture. The quantitative analysis demonstrates how nuclear-to-cell volume ratios shift with different drug concentrations, providing insights into cellular responses to treatment. This automated approach enables researchers to track morphological changes in individual cells, measure cell-to-cell interactions in 3D space, and quantify treatment responses across multiple samples while maintaining consistent analysis parameters.
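
A hedged sketch of the nucleus-to-cell readout, assuming the two models have produced co-registered 3D label images; the file names and the voxel spacing are placeholders.

import numpy as np
from skimage import measure

nuclei = np.load("nuclei_labels.npy")      # 3D label image from the nuclear model
cells = np.load("cell_labels.npy")         # 3D label image from the whole-cell model
voxel_volume = 0.2 * 0.2 * 0.5             # µm^3 per voxel (placeholder spacing)

ratios = []
for cell in measure.regionprops(cells):
    cell_volume = cell.area * voxel_volume                  # 'area' = voxel count in 3D
    nuclear_voxels = np.count_nonzero(nuclei[tuple(cell.coords.T)])
    if nuclear_voxels:
        ratios.append(nuclear_voxels * voxel_volume / cell_volume)

print(f"mean nucleus-to-cell volume ratio: {np.mean(ratios):.2f}")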

The implications for drug discovery are significant. By enabling detailed analysis of cellular responses in more physiologically relevant 3D models, this AI-powered approach helps bridge the gap between traditional in vitro testing and actual tissue responses. The scalability of the solution makes it particularly valuable for high-content, high-throughput drug discovery applications, where large numbers of compounds need to be evaluated efficiently and reproducibly.

2.4 Case study 4: cardiac architecture analysis: deep learning for complex anatomical structures

Understanding cardiac structure and function requires detailed analysis of complex three-dimensional arrangements of chambers, vessels, and tissue. While X-ray microscopy can capture this intricate architecture in remarkable detail, analyzing the resulting data presents significant challenges. The primary difficulty lies in distinguishing between structures with similar densities, particularly when some regions are inadequately visualized due to staining limitations.

A study using a Wistar rat heart demonstrates how AI can overcome these challenges. The heart was imaged using a ZEISS Xradia Versa X-ray microscope at high resolution (24.44 μm isotropic), producing detailed 3D volumes that reveal the complete cardiac architecture (Figure 10). While the raw data shows complex vasculature and chamber organization, the similar X-ray absorption of different cardiac structures makes traditional threshold-based segmentation ineffective.

To address this challenge, multiple semantic segmentation models were implemented for the analysis. Expert annotators carefully marked different cardiac components across multiple slices of the 3D volume, creating a comprehensive training dataset (Figure 9). Each structure – the four chambers, major vessels, and vessel networks – was labeled with distinct colors to train separate models. The trained models were then used to segment the entire 3D volume.

Figure 9: 
AI segmentation of 3D X-ray microscopy data of rat heart. Left: Expert annotations used for training semantic segmentation models showing different cardiac structures (Red: Left ventricle, Green: Left Atrium, Yellow: Pulmonary vein connected via Aorta, Purple: Background). Right: Additional annotations showing Right Ventricle in Cyan and background in Purple. Data courtesy of Lara S.F. Konijnenberg MD PhD, Department of Cardiology, Radboud University Medical Center, Nijmegen, Netherlands.

The resulting analysis revealed quantitative insights previously difficult to obtain. The left ventricle, which pumps oxygenated blood throughout the body, has a volume of 324.066 mm³ – approximately three times the volume of the right ventricle (100.568 mm³), reflecting its more demanding role in systemic circulation. The atrial chambers show volumes of 84.017 mm³ and 118.294 mm³ for left and right respectively, proportions that align with their biological function. Perhaps most striking is the total length of the segmented vessel network, exceeding 10.4 m, which quantitatively demonstrates the remarkable complexity of cardiac vasculature.
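
The chamber volumes follow directly from voxel counts once the segmentation exists. Below is a worked example at the 24.44 μm isotropic voxel size of this scan; the voxel count is back-calculated from the reported left-ventricle volume for illustration, not taken from the dataset.

voxel_size_mm = 24.44e-3                 # 24.44 µm expressed in mm
voxel_volume_mm3 = voxel_size_mm ** 3    # ≈ 1.46e-5 mm³ per voxel

left_ventricle_voxels = 22_200_000       # illustrative count for a ~324 mm³ chamber
print(f"{left_ventricle_voxels * voxel_volume_mm3:.1f} mm³")   # ≈ 324.1 mm³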

This approach demonstrates how AI can overcome traditional limitations in analyzing complex anatomical structures, even with challenging image data. Beyond cardiac research, these techniques could revolutionize the study of other complex anatomical structures where conventional analysis methods prove insufficient. The combination of advanced X-ray imaging, expert annotation, and deep learning opens new possibilities for quantitative analysis in both research and potential clinical applications (Figure 10).

Figure 10: 
Three-dimensional visualization of cardiac structure. Top: Raw X-ray microscopy volume showing detailed cardiac architecture and vasculature. Bottom: Segmented 3D visualization with labeled cardiac chambers and major vessels. The complete segmentation enables detailed quantitative analysis of chamber volumes and vessel networks. Imaging performed using ZEISS Xradia Versa X-ray microscope at 24.44 μm isotropic resolution.

2.5 Case study 5: dynamic organoid analysis: AI-enabled tracking of development

The analysis of organoid development presents unique challenges in microscopy, particularly when tracking cellular behavior over time. A compelling example comes from a study of a 1,000 μm diameter micropatterned gastrulation organoid, where researchers needed to track individual nuclei across 170 time points in a live-cell movie. The organoid, generated from CAG:H2B-eGFP-expressing mESCs, was imaged using a ZEISS Lattice Lightsheet 7 microscope, capturing a field of view of 300 μm × ∼500 μm every 10 min for 28 h.

Pre-trained models provide an excellent starting point for many biological image analysis tasks, offering sophisticated capabilities without the need for custom training. For example, Cellpose [18], a popular deep learning solution, can effectively segment cells across various imaging conditions. However, when tracking individual cells over extended time periods, even the best pre-trained models may benefit from customization to handle experiment-specific challenges such as varying intensity levels and complex cellular arrangements. While these models often provide parameter adjustment options like threshold modifications, achieving optimal results for specific experimental conditions frequently requires a more tailored approach (Figure 11).
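
As a hedged illustration, running a pre-trained Cellpose nuclei model and adjusting its cell-probability threshold might look like the following; the exact arguments vary between Cellpose versions, and the file name is a placeholder.

from cellpose import models
from skimage import io

img = io.imread("organoid_nuclei.tif")              # placeholder path
model = models.Cellpose(model_type="nuclei")        # pre-trained nuclei model

# Default threshold (0): conservative, can miss dim or crowded nuclei
masks_default, *_ = model.eval(img, channels=[0, 0], cellprob_threshold=0.0)

# A permissive threshold (-6) recovers more objects but can distort boundaries
masks_permissive, *_ = model.eval(img, channels=[0, 0], cellprob_threshold=-6.0)

print(masks_default.max(), "vs", masks_permissive.max(), "nuclei detected")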

Figure 11: 
Comparison of analysis approaches for organoid cell segmentation. Left: Original fluorescence image showing H2B-eGFP labeled nuclei in a gastrulation organoid. Center: Results using a pre-trained model with default threshold (threshold = 0), showing incomplete detection with missing nuclei and artificial straight edges that don’t follow true nuclear boundaries. Right: Results with modified threshold settings (threshold = −6), showing artifacts such as wavy edges and exaggerated nuclear sizes, demonstrating how aggressive parameter adjustments can compromise segmentation accuracy. Sample courtesy of Clayton Schwarz, from the labs of Anna-Katerina Hadjantonakis at Memorial Sloan Kettering Cancer Center and Eric Siggia at Rockefeller University.

The solution came through an iterative, human-in-the-loop approach to model training. An instance segmentation model was developed, with the workflow beginning with targeted annotations in carefully selected regions of the organoid (Figure 12). Expert annotators marked a handful of nuclei and background regions, ensuring representation of both central and peripheral areas where imaging conditions could vary significantly. After initial training, the model’s performance was evaluated, and additional annotations were made in regions where the model showed poor performance. This process continued through three iterations, with each round addressing specific weaknesses in the model’s performance.

Figure 12: 
Iterative training process for custom AI model development. Annotated regions from three successive training rounds are highlighted: first training (red boxes), second training (yellow boxes), and third training (blue boxes). Annotations include both nuclei (grey) and background (purple) across different regions of the image, ensuring representation of varying intensity levels and object densities.

The results demonstrate the power of this iterative approach (Figure 13). The custom-trained model was applied to the entire time series dataset, achieving consistent and reliable nuclei detection across all frames. Most crucially for time series analysis, the model successfully identified every nucleus in each time point – a critical requirement, as missing objects in any frame would break the continuity needed for tracking analysis. This consistent detection from the first to the last time point, combined with precise boundary segmentation, enabled reliable tracking of cell divisions throughout the 28-h experiment.
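
The requirement that no frame drop a nucleus becomes obvious in even the simplest linker. The sketch below links objects between consecutive frames by nearest centroid; a missed detection leaves the corresponding track without a match and breaks its lineage. This nearest-neighbour rule is a simplification of real tracking algorithms, and the distance cutoff is an assumption.

import numpy as np
from scipy.spatial import cKDTree
from skimage import measure

def link_frames(labels_prev, labels_next, max_dist=15.0):
    """Map each object in the previous frame to its nearest object in the next frame."""
    prev = measure.regionprops(labels_prev)
    nxt = measure.regionprops(labels_next)
    tree = cKDTree([r.centroid for r in nxt])
    links = {}
    for r in prev:
        dist, idx = tree.query(r.centroid)
        if dist <= max_dist:                 # unmatched objects break the track here
            links[r.label] = nxt[idx].label
    return links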

Figure 13: 
Validation of custom model performance across time points in organoid development. Three representative time points (Time 1, Time 85, and Time 170) from a 28-h time-lapse sequence demonstrate consistent nuclear segmentation throughout the experiment. Each uniquely colored region represents an individual nucleus, showing precise boundary detection and reliable segmentation across all frames – essential for downstream tracking analysis.

This approach demonstrates how careful model development can overcome the limitations of pre-trained solutions. By combining the high-speed imaging capabilities of lattice light-sheet microscopy with robust AI-powered analysis, researchers can now reliably track cellular behavior over extended periods, opening new possibilities for studying organoid development and cellular dynamics.

3 Conclusions

The integration of artificial intelligence in microscopy represents a fundamental shift in how we acquire, analyze, and interpret imaging data. From its early roots in the 1960s with Prewitt’s pioneering work in feature-based analysis to today’s sophisticated deep learning models, the field has evolved to address increasingly complex challenges in biological imaging. Our case studies demonstrate this evolution: from automated detection and selective imaging of marine plankton, to precise segmentation of neuronal structures, to analysis of complex 3D cell cultures, to the segmentation of intricate cardiac architectures, and finally to the dynamic analysis of developing organoids in time-series data. Through these examples, we see how AI has moved beyond being just an analytical tool to become an integral part of the experimental workflow.

The accessibility of pre-trained models and user-friendly AI platforms has democratized advanced image analysis, enabling researchers across disciplines to tackle previously intractable problems. Cloud-based solutions and scalable computing infrastructure have eliminated many traditional bottlenecks, while improvements in model training approaches have reduced the burden of data annotation. Whether segmenting individual cells in 3D cultures, tracing neuronal structures, or analyzing complex anatomical features, these tools are making sophisticated analysis accessible to researchers regardless of their computational expertise.

However, as we look to a future of increased automation and AI capabilities, we must remember that these tools are meant to augment, not replace, human expertise. While AI can provide seemingly convincing results, the critical role of expert validation cannot be automated away. As seen in our case studies, successful implementation of AI in microscopy requires thoughtful integration of biological knowledge, technical expertise, and rigorous validation of results.

The future of AI in microscopy is not just about more powerful algorithms or faster processing; it’s about enabling scientists to ask new questions and explore biological systems in ways previously impossible. From automating complex imaging workflows to revealing quantitative insights in challenging 3D datasets, AI is expanding the boundaries of what’s possible in microscopy. By embracing these technologies while maintaining scientific rigor, we can accelerate discovery across fields – from basic cellular biology to drug development, from neuroscience to anatomical research. The combination of advanced imaging technologies, sophisticated AI tools, and human expertise promises to reveal new insights into the fundamental mechanisms of life and disease.


Corresponding author: Sreenivas Bhattiprolu, Carl Zeiss X-Ray Microscopy, Inc., 5300 Central Pkwy, Dublin, CA 94568, USA, E-mail: 

Acknowledgments

I extend my gratitude to all the case study contributors, including Martin Engel from Inventia and Delisa Garcia, Federico Ribaudo, Marion Lang, and Joy James Costa from ZEISS. Their contributions were instrumental in showcasing the transformative impact of AI on microscopy through real-world examples.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The author states no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

[1] J. M. S. Prewitt and M. L. Mendelsohn, “The analysis of cell images,” Ann. N. Y. Acad. Sci., vol. 128, pp. 1035–1053, 1966. https://doi.org/10.1111/j.1749-6632.1965.tb11715.x.

[2] Y. Amitay, Y. Bussi, B. Feinstein, S. Bagon, I. Milo, and L. Keren, “CellSighter: a neural network to classify cells in highly multiplexed images,” Nat. Commun., vol. 14, p. 4302, 2023. https://doi.org/10.1038/s41467-023-40066-7.

[3] A. Kornilov, I. Safonov, and I. Yakimchuk, “A review of watershed implementations for segmentation of volumetric images,” J. Imaging, vol. 8, no. 5, p. 127, 2022. https://doi.org/10.3390/jimaging8050127.

[4] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern., vol. 9, no. 1, pp. 62–66, 1979. https://doi.org/10.1109/TSMC.1979.4310076.

[5] Entropy (Information Theory), Wikipedia [Online]. https://en.wikipedia.org/wiki/Entropy_(information_theory) [accessed: Feb. 25, 2025].

[6] L. Breiman, “Random forests,” Mach. Learn., vol. 45, pp. 5–32, 2001. https://doi.org/10.1023/A:1010933404324.

[7] M. A. Hearst, S. T. Dumais, E. Osuna, J. Platt, and B. Scholkopf, “Support vector machines,” IEEE Intell. Syst., vol. 13, no. 4, pp. 18–28, 1998. https://doi.org/10.1109/5254.708428.

[8] A. Krull, T. O. Buchholz, and F. Jug, “Noise2Void – learning denoising from single noisy images,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2129–2137. https://doi.org/10.1109/CVPR.2019.00223.

[9] C. Qiao, et al., “Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes,” Nat. Biotechnol., vol. 41, pp. 367–377, 2023. https://doi.org/10.1038/s41587-022-01471-3.

[10] Napari [Online]. https://napari.org/0.4.15/ [accessed: Feb. 25, 2025].

[11] C. McQuin, et al., “CellProfiler 3.0: next-generation image processing for biology,” PLoS Biol., vol. 16, no. 7, p. e2005970, 2018. https://doi.org/10.1371/journal.pbio.2005970.

[12] arivis Cloud, ZEISS Digital Microscopy Platform [Online]. https://www.apeer.com/ [accessed: Feb. 25, 2025].

[13] P. F. Culverhouse, R. Williams, B. Reguera, V. Herry, and S. González-Gil, “Do experts make mistakes? A comparison of human and machine identification of dinoflagellates,” Mar. Ecol.: Prog. Ser., vol. 247, pp. 17–25, 2003. https://doi.org/10.3354/meps247017.

[14] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar, “Masked-attention Mask transformer for universal image segmentation,” arXiv preprint, arXiv:2112.01527, 2022. https://doi.org/10.48550/arXiv.2112.01527.

[15] S. Saxena and S. Liebscher, “Editorial: circuit mechanisms of neurodegenerative diseases,” Front. Neurosci., vol. 14, 2020, Art. no. 593329. https://doi.org/10.3389/fnins.2020.593329.

[16] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” arXiv preprint, arXiv:1505.04597, 2015. https://doi.org/10.48550/arXiv.1505.04597.

[17] P. J. H. Zushin, S. Mukherjee, and J. C. Wu, “FDA Modernization Act 2.0: transitioning beyond animal models with human cells, organoids, and AI/ML-based approaches,” J. Clin. Invest., vol. 133, no. 21, 2023. https://doi.org/10.1172/JCI175824.

[18] C. Stringer, T. Wang, M. Michaelos, and M. Pachitariu, “Cellpose: a generalist algorithm for cellular segmentation,” Nat. Methods, vol. 18, pp. 100–106, 2021. https://doi.org/10.1038/s41592-020-01018-x.

Received: 2024-12-15
Accepted: 2025-04-25
Published Online: 2025-05-22

© 2025 the author(s), published by De Gruyter on behalf of Thoss Media

This work is licensed under the Creative Commons Attribution 4.0 International License.
