Abstract
The formulation of constitutive relations is central to turbulence modeling, yet their evolution has always been constrained by requirements of physical consistency, mathematical well-posedness and numerical stability. This review integrates these constraints into a unified framework, tracing how they have shaped the trajectory of turbulence closures from classical formulations to contemporary data-driven approaches. Fundamental principles including conservation laws, realizability, invariance, dimensional homogeneity, memory effects and asymptotic consistency delineate the admissible space of models while simultaneously guiding their refinement. Historical progress, from the Boussinesq approximation through nonlinear eddy-viscosity models to explicit algebraic stress closures, demonstrates the progressive incorporation of such constraints as corrective mechanisms that enhance robustness and predictive fidelity. In parallel, recent advances in physics-informed and machine learning-based models confirm that these same constraints remain indispensable for ensuring generalizability, physical admissibility and solver compatibility. By framing turbulence modeling through the lens of constraints, this review highlights their dual role as both limitations and design principles while underscoring their continuing relevance in shaping next-generation hybrid frameworks that unite physical rigor with data-driven adaptability.
1 Introduction
Turbulence remains one of the grand challenges of classical and modern physics, representing a highly complex, nonlinear and multiscale phenomenon that governs the transport of momentum, heat and mass in both natural environments and engineering. Its ubiquity across disciplines, including aerodynamics, propulsion, energy systems, atmospheric dynamics, ocean circulation and biomedical flows, underscores the fundamental importance of accurate turbulence prediction. Yet, despite over a century of study, turbulence continues to defy complete theoretical understanding, and its modeling remains an area of active research.
From a computational standpoint, solving the Navier–Stokes equations directly for turbulent flows through Direct Numerical Simulation (DNS) is infeasible for most engineering applications due to the enormous range of interacting spatial and temporal scales, particularly at high Reynolds numbers. While Large Eddy Simulation (LES) reduces this burden by resolving large-scale motions and modeling only the smaller scales, its cost remains prohibitive for many practical flows. As a result, Reynolds-Averaged Navier–Stokes (RANS) closures, which approximate the effects of turbulence through modeled constitutive relations, continue to serve as the backbone of industrial Computational Fluid Dynamics (CFD). These models, however, rely on closure assumptions that inevitably introduce limitations in accuracy and universality.
The development of turbulence modeling has been historically shaped by constraints that remain central today. Early contributions by Reynolds [1] introduced the decomposition of flow into mean and fluctuating components, highlighting the need for closure of the Reynolds stress terms. Prandtl’s mixing-length theory [2] provided one of the first practical closures, embedding dimensional and near-wall constraints into turbulence descriptions. Later, Kolmogorov’s similarity theory [3] formalized inertial-range scaling and universality, embedding asymptotic consistency into turbulence theory. The k-ε model of Launder and Spalding [4] further demonstrated the role of dimensional consistency in constructing widely applicable closures. More advanced Reynolds Stress Models (RSM) and algebraic stress models in the 1980s and 1990s explicitly incorporated invariance, realizability and history effects, while Durbin’s elliptic-relaxation model [5], [6] demonstrated the importance of non-local memory effects in near-wall turbulence. These developments illustrate that turbulence modeling has always advanced through the systematic imposition of constraints on constitutive relations while balancing physical fidelity, mathematical tractability and computational feasibility. Importantly, the same principles that once guided the formulation of mixing-length models, similarity laws and realizability maps now inform the design of Physics-Informed Machine Learning (PIML) frameworks where invariance, realizability and asymptotic consistency are embedded directly into neural architectures and loss functions to prevent nonphysical predictions.
Turbulence modeling is inherently constrained by several interdependent requirements. At the most fundamental level, physical constraints demand consistency with conservation laws, realizability, symmetry, invariance, dimensional homogeneity and memory effects. Mathematical constraints impose well-posedness and near-wall asymptotics. Numerical constraints arise from the need for stable, convergent and computationally feasible implementations in CFD solvers. Collectively, these factors define the permissible space of turbulence closures, guiding their formulation while simultaneously limiting their applicability.
Recent years have witnessed the emergence of physics-informed and data-driven turbulence modeling, which seek to augment traditional approaches by leveraging Machine Learning (ML) and high-fidelity datasets. These frameworks aim to enhance predictive capability by embedding physical constraints such as Galilean invariance, tensor symmetries, realizability and dimensional consistency directly into learning architectures. Galilean invariance ensures that model predictions remain unaffected by uniform translations of the observer’s frame, preserving the universality of turbulence physics. Tensor symmetries enforce the correct transformation behavior of the Reynolds stress tensor under rotations and reflections, guaranteeing that modeled stresses respect fundamental principles of continuum mechanics. Realizability conditions restrict Reynolds stresses to physically admissible states, preventing nonphysical outcomes such as negative turbulent kinetic energy or stress states outside Lumley’s triangle. Dimensional consistency maintains proper scaling among modeled quantities, ensuring that closures generalize across flow regimes and unit systems. Together, these constraints prevent nonphysical predictions, stabilize numerical implementations and improve the generalizability of ML turbulence models.
The purpose of this review is to provide a comprehensive and structured examination of the constraints that govern turbulence modeling, with a focus on how these requirements shape both classical constitutive relations and emerging ML closures. While some reviews have surveyed turbulence modeling from the perspectives of physical insight, numerical implementation or data-driven advances, relatively few have explicitly synthesized these developments through the unifying lens of constraints. This paper addresses that gap by systematically analyzing four central themes: the formulation of constitutive relations in turbulence modeling including physics-informed and data-driven approaches; the spectrum of constraints spanning physical, mathematical and numerical aspects; the implications of these constraints for improving model stability, robustness and predictive fidelity; and the open challenges and future directions in embedding these constraints into next-generation hybrid frameworks. In doing so, the review underscores that constraints are not simply limitations on model formulation but essential design principles that continue to guide the evolution of turbulence closures toward greater generality, accuracy and physical consistency.
2 Constitutive relations in turbulence modeling
The core of most turbulence models lies in the formulation of constitutive relations that represent the Reynolds stress tensor or subgrid-scale stresses. Early approaches, notably the Boussinesq approximation [7], established a linear link between Reynolds stresses and the mean strain rate via the concept of eddy viscosity. While effective in simple flows, these models struggle to capture complex phenomena such as flow separation, curvature and anisotropy. To overcome these limitations, nonlinear eddy viscosity models and Explicit Algebraic Reynolds Stress Models (EARSM) have been developed, with significant contributions from Craft et al. [8] and Wallin and Johansson [9], who introduced nonlinear tensor bases to more effectively capture anisotropic turbulence effects. Recent research has leveraged ML techniques to enhance low-fidelity turbulence models, employing data-driven closures, regression-based corrections and neural networks trained on high-fidelity DNS data. Despite the computational cost, these approaches improve predictive accuracy in complex flow regimes.
2.1 Boussinesq constitutive relation
The Boussinesq approximation is the most widely used constitutive relation in turbulence modeling. It assumes that the Reynolds stresses are linearly proportional to the mean rate of strain:

$$-\rho\,\overline{u_i' u_j'} = 2\mu_t \overline{S}_{ij} - \frac{2}{3}\rho k\,\delta_{ij}, \qquad \overline{S}_{ij} = \frac{1}{2}\left(\frac{\partial \overline{U}_i}{\partial x_j} + \frac{\partial \overline{U}_j}{\partial x_i}\right),$$

where $\mu_t$ is the turbulent eddy viscosity and $k$ is the turbulent kinetic energy.
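As a concrete illustration, the short sketch below evaluates this relation for a given mean velocity-gradient tensor; the function name and input values are illustrative, and the kinematic form (eddy viscosity $\nu_t = \mu_t/\rho$) is assumed.

```python
import numpy as np

def boussinesq_stress(grad_u, nu_t, k):
    """Reynolds stress -<u_i' u_j'> from the linear Boussinesq relation.

    grad_u : (3, 3) mean velocity-gradient tensor dU_i/dx_j
    nu_t   : turbulent (kinematic) eddy viscosity
    k      : turbulent kinetic energy
    """
    S = 0.5 * (grad_u + grad_u.T)                        # mean strain-rate tensor S_ij
    return 2.0 * nu_t * S - (2.0 / 3.0) * k * np.eye(3)  # -<u_i' u_j'>

# simple shear dU/dy = 1 with illustrative values of nu_t and k
grad_u = np.zeros((3, 3)); grad_u[0, 1] = 1.0
print(boussinesq_stress(grad_u, nu_t=0.1, k=0.5))
```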
The Boussinesq hypothesis offers the advantages of simplicity and computational efficiency, serving as the foundational assumption for widely used linear eddy viscosity models such as the linear k–ε and k–ω models. However, its applicability is limited by the assumption of an isotropic turbulent viscosity, which leads to poor performance in complex flow scenarios exhibiting strong anisotropy, streamline curvature, rotational effects, or flow separation.
2.2 Lumley constitutive relation
The linear Boussinesq hypothesis assumes an isotropic turbulent viscosity, implying locally isotropic turbulence. However, it tends to break down in complex flows characterized by curvature, rotation, separation, or strong strain. To address these limitations, nonlinear eddy viscosity models based on the hypotheses of Lumley [10] and Pope [11] have been developed by incorporating higher-order tensorial terms into the constitutive equations. These models extend the constitutive relation for the Reynolds stress tensor by including nonlinear functions of the mean strain-rate ($S_{ij}$) and rotation-rate ($\Omega_{ij}$) tensors:

$$\overline{u_i' u_j'} = \frac{2}{3} k\,\delta_{ij} + k\, a_{ij},$$

where $a_{ij}$ is the Reynolds-stress anisotropy tensor, represented as a tensor-polynomial expansion in $S_{ij}$ and $\Omega_{ij}$,

$$a_{ij} = \sum_{n} C_n\, T^{(n)}_{ij}\!\left(S_{ij}, \Omega_{ij}\right),$$

where the coefficients $C_1$–$C_7$ are scalar functions of the invariants of the strain-rate and rotation-rate tensors, following the general tensor-basis representation of Pope [11], and the $T^{(n)}_{ij}$ denote the corresponding basis tensors formed from products of $S_{ij}$ and $\Omega_{ij}$. Specific turbulence closures correspond to particular choices of these coefficients, which may be taken as constants or as invariant-dependent functions.
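As a minimal sketch of how such an expansion can be assembled in practice (restricted to the first four of Pope's basis tensors; the coefficients passed in stand for the invariant-dependent functions $C_n$ and are not calibrated values):

```python
import numpy as np

def tensor_basis(S, W):
    """First four of Pope's integrity-basis tensors built from the
    (suitably non-dimensionalized) strain-rate S and rotation-rate W."""
    I = np.eye(3)
    return [
        S,                                      # T1
        S @ W - W @ S,                          # T2
        S @ S - np.trace(S @ S) / 3.0 * I,      # T3
        W @ W - np.trace(W @ W) / 3.0 * I,      # T4
    ]

def anisotropy(S, W, coeffs):
    """a_ij as a linear combination of basis tensors with scalar coefficients."""
    return sum(c * T for c, T in zip(coeffs, tensor_basis(S, W)))
```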
Such nonlinear terms enable the models to capture secondary flows, streamline curvature effects and anisotropic stress distributions that linear eddy viscosity models cannot resolve. Gatski and Speziale [12] developed a systematic derivation of EARSM from a hierarchy of second-order closure models extending Pope’s [11] methodology to three-dimensional turbulent flows in non-inertial frames. The resulting models offer a more physically consistent alternative to linear eddy viscosity models by incorporating nonlinear effects of strain and rotation. The models demonstrated predictive accuracy close to second-order closures in benchmark turbulent flow problems. Craft et al. [8] proposed a cubic expansion of the strain and rotation rate tensors to better predict turbulence in curved and swirling flows. These models retained the relative simplicity of the eddy viscosity concept while extending its applicability to more complex situations.
A notable advancement in the development of nonlinear eddy viscosity models is the EARSM of Wallin and Johansson [9], which enhances the traditional k–ε framework by incorporating nonlinear constitutive relations derived from the full Reynolds stress transport equations. Their formulation employed the Cayley–Hamilton theorem to construct a complete tensorial basis for nonlinear eddy viscosity models, ensuring the systematic inclusion of all invariant tensor functions of the strain-rate and rotation-rate tensors. This method significantly improved predictive accuracy in canonical turbulent flows while offering enhanced theoretical consistency and broader applicability. The EARSM has since been applied successfully to a range of complex flow scenarios including swirling jets, separated boundary layers and turbomachinery flows, where conventional linear eddy viscosity models often fail to capture secondary flow structures or turbulence-induced instabilities.
2.3 Data-driven constitutive relation
Recent advances in computational capabilities and the growing availability of high-fidelity simulation data, particularly from DNS and LES, have significantly accelerated the development of data-driven approaches to constitutive modeling in turbulent flows. These approaches aim to overcome the limitations of traditional turbulence models by learning the complex and nonlinear relationship between mean flow features and the Reynolds stress tensor directly from data, without relying on phenomenological assumptions such as the Boussinesq or Lumley hypotheses.
Machine learning techniques, particularly symbolic regression, artificial neural networks and ensemble methods such as random forests, have emerged as powerful tools in this area. These approaches are typically designed to either (1) predict the full Reynolds stress tensor, (2) approximate the anisotropic part of the Reynolds stress tensor, or (3) model the discrepancy between traditional RANS predictions and high-fidelity data.
Symbolic regression has gained attention for its ability to generate interpretable and closed-form expressions for constitutive relations. Weatheritt and Sandberg [13] applied symbolic regression through gene expression programming to develop data-driven turbulence models that modify the algebraic Reynolds stress-strain relationship. Focusing on the anisotropic component of the Reynolds stress tensor, their approach uses high-fidelity DNS data to evolve algebraic expressions for Reynolds stress anisotropy without relying on predefined model structures. By validating their framework on benchmark cases such as the backward-facing step and periodic hills, they showed that the resulting models not only improve predictions compared to the baseline RANS model but also generalize well to flow conditions outside the training set. Their work represents a pioneering contribution in applying evolutionary ML to the development of interpretable and tensor-based turbulence models.
Discrepancy-based learning frameworks aim to correct the output of existing RANS models by learning the discrepancy between RANS-predicted and DNS-resolved Reynolds stresses. Ling and Templeton [14] proposed a ML framework to identify regions in turbulent flows where RANS models are likely to fail due to violations of key modeling assumptions. Using classifiers such as support vector machines and random forests, the study predicted where assumptions like isotropy, linearity and non-negativity of eddy viscosity break down based on features derived from DNS and LES data.
Wu et al. [15] developed a Bayesian calibration-prediction framework to quantify and reduce model-form uncertainties in RANS turbulence simulations, particularly those arising from Reynolds stress discrepancies. Unlike previous approaches that required observational data for the target flow, their method calibrates the Reynolds stress discrepancy using a related flow with available data and then extrapolates the learned uncertainty distribution to the target flow, where no data is available. This is accomplished by projecting the discrepancy onto physically meaningful quantities such as turbulent kinetic energy and anisotropy shape parameters while ensuring realizability and smoothness. The framework was validated on benchmark flows, including periodic hills and square ducts, and was shown to significantly improve predictions of complex flow features compared to baseline RANS models, demonstrating its effectiveness for data-scarce predictive simulations in engineering applications.
A particularly influential methodology is the field inversion and ML approach introduced by Parish and Duraisamy [16]. This approach combines Bayesian field inversion with supervised ML to infer and correct model-form discrepancies in computational physics models, particularly closure models. Rather than tuning parameters, it identifies spatially distributed corrections directly from data and reconstructs them as functional relationships using ML, enabling improved predictive simulations. The method was demonstrated on a nonlinear ordinary differential equation and turbulent channel flow, showing enhanced model accuracy and quantified uncertainties.
Despite their successes, data-driven turbulence models continue to face critical limitations. A primary concern is their limited extrapolation capability. Models trained on specific datasets may perform poorly when applied to flow regimes not represented in the training data. Moreover, many data-driven approaches, especially those that lack embedded physical knowledge, risk violating essential physical constraints such as conservation laws, realizability and symmetry [17]. These issues reduce their robustness and hinder their deployment in practical engineering applications.
These challenges have prompted the development of physics-informed machine learning frameworks, which aim to incorporate known physical laws such as conservation principles, realizability conditions and symmetry constraints into the ML modeling to improve robustness and reliability.
2.4 Physics-informed machine learning constitutive relation
Physics-informed machine learning has emerged as a transformative paradigm for enhancing turbulence modeling by embedding physical laws directly into data-driven frameworks. Unlike purely data-driven methods, PIML integrates physical constraints such as differential conservation laws, realizability conditions and invariance principles to guide ML models, thereby improving their interpretability, generalizability and robustness across diverse flow conditions.
A landmark contribution in this area is the Tensor Basis Neural Network (TBNN) proposed by Ling et al. [18], which embeds Galilean invariance directly into the learning architecture by expressing the anisotropic Reynolds stress tensor as a tensorial expansion over a set of invariant bases. In the TBNN formulation, the Reynolds stress is modeled through its anisotropic component, defined as the normalized deviation from isotropy, which correctly vanishes in homogeneous, shear-free turbulence. Rather than predicting individual tensor components in a coordinate-dependent manner, TBNN represents the anisotropy tensor as a linear combination of fixed invariant tensor bases constructed from the local mean strain-rate and rotation-rate tensors. These bases correspond to the integrity basis derived by Pope [11] and ensure that the modeled anisotropy transforms appropriately under rotations and reflections of the reference frame.
The scalar coefficients associated with each tensor basis element are modeled as nonlinear functions of a reduced set of scalar invariants formed from the strain-rate and rotation-rate tensors. These invariants are non-dimensionalized using appropriate turbulence time scales and serve as the sole inputs to the neural network. Consequently, the learning task is confined to identifying the functional dependence of the expansion coefficients on invariant flow features, while the tensorial structure of the Reynolds stress is imposed analytically through the prescribed basis representation.
By construction, this separation between learned scalar relationships and fixed tensor bases guarantees Galilean invariance and Material Frame Indifference (MFI), while simultaneously ensuring recovery of isotropy in the absence of mean velocity gradients. In this sense, TBNN may be interpreted as a data-driven generalization of classical nonlinear eddy-viscosity and EARSM, in which invariant-dependent coefficients are inferred from data rather than specified a priori. The explicit embedding of symmetry and invariance constraints within the network architecture constitutes the central methodological contribution of TBNN and underpins its improved physical consistency relative to conventional component-wise neural-network closures. This approach has demonstrated superior performance in canonical turbulent flows, such as channel flow and flow over periodic hills, where traditional models often struggle to accurately capture anisotropic stress behavior.
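A structural sketch of this architecture is given below; it is not the original implementation of Ling et al. [18], and the layer sizes and variable names are illustrative. The essential point is that the learned mapping sees only scalar invariants, while the tensorial structure enters through fixed basis tensors.

```python
import torch
import torch.nn as nn

class TBNNSketch(nn.Module):
    """Tensor-basis network sketch: scalar invariants in, basis coefficients out.
    The anisotropy tensor is assembled from fixed invariant bases, so Galilean
    invariance and frame consistency hold by construction."""

    def __init__(self, n_invariants=5, n_bases=10, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_invariants, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bases),
        )

    def forward(self, invariants, bases):
        # invariants: (batch, n_invariants); bases: (batch, n_bases, 3, 3)
        g = self.net(invariants)                       # learned scalar coefficients g_n
        return torch.einsum('bn,bnij->bij', g, bases)  # b_ij = sum_n g_n T^(n)_ij
```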
Wang et al. [19] introduced a PIML framework to identify and correct discrepancies in RANS turbulence models. By leveraging DNS data, the authors trained ML models to predict discrepancies in Reynolds stress tensors as functions of mean flow features while embedding critical physical constraints such as realizability, symmetry and Galilean invariance. This approach significantly enhances the accuracy and generalizability of RANS predictions across different flow regimes.
Wu et al. [20] presented a systematic methodology for selecting input feature variables for ML applications in turbulence modeling. They proposed a framework in which Reynolds stresses are decomposed into linear and nonlinear components. This decomposition enables an implicit treatment of the linear portion, improving the conditioning of the RANS equations when solving for mean velocity. The framework follows a three-step process: generating high-fidelity data, performing field inversion to uncover model discrepancies, and applying supervised learning to construct physically consistent discrepancy functions. Demonstrated on several canonical flow cases, the approach significantly improved the accuracy and generalizability of RANS predictions while preserving physical plausibility.
Furthermore, Physics-Informed Neural Networks (PINNs), pioneered by Raissi et al. [21], offer a generalizable framework for embedding physical laws, expressed as Partial Differential Equations (PDEs), directly into the loss function of neural networks. By doing so, they enable the solution of both forward and inverse problems without requiring large datasets. The method was demonstrated on various nonlinear PDEs, showing strong potential for modeling complex physical systems in a data-efficient and interpretable manner.
Duraisamy [22] highlighted the value of integrating ML into governing equations for consistency and generalizability. Building on this, Zhang et al. [23] embedded neural networks into RANS equations during training to ensure model-consistent learning, reducing ill-conditioning and improving robustness. They introduced an ensemble Kalman method with adaptive step size to train nonlinear eddy viscosity models using indirect data (e.g., sparse velocities, lift and drag), avoiding the need for full-field Reynolds stress data. Their non-intrusive and adjoint-free approach enhances compatibility with various CFD solvers while Hessian-based adaptivity improves training efficiency. Applied to square duct and periodic hill flows, the model showed strong predictive performance and marks the first use of ensemble Kalman methods in this context.
Han et al. [24] proposed an equivariant neural operator, namely the Vector-Cloud Neural Network with Equivariance (VCNN-e), designed to model nonlocal tensorial constitutive relations, particularly the Reynolds stress tensor in turbulent flows. The VCNN-e preserves key invariance properties, including translational and permutational invariance as well as rotational equivariance. It maps local clouds of flow features to tensorial outputs using a region-to-point architecture, thereby capturing complex nonlocal dependencies. This work paves the way for nonlocal, non-equilibrium, physically consistent and data-driven constitutive models in RANS-based turbulence modeling.
Recently, Durbin [25] has provided a comprehensive examination of algebraic tensorial representations for Reynolds stress modeling, emphasizing the formulation of constitutive relations that respect Galilean invariance and physical realizability. The paper outlines the use of tensor bases to express Reynolds stresses in terms of flow invariants and eigenvalue decompositions, facilitating interpretable and constraint-consistent model development. Key aspects include the treatment of anisotropy through barycentric representations and the incorporation of effects such as rotation, curvature and scalar transport. These algebraic frameworks offer essential foundations for integrating physics-based constraints into data-driven turbulence models, aligning with recent efforts to develop interpretable, generalizable and physically consistent closures within the context of PIML.
Despite significant progress, PIML turbulence models continue to face fundamental open challenges. Although the incorporation of physical constraints has improved robustness and internal consistency, it has not fully mitigated limited extrapolation beyond the training manifold, sensitivity to noisy or biased high-fidelity data, or degraded performance in flows characterized by strong separation, shock–turbulence interaction, and unsteady multi-scale dynamics. Models trained primarily on canonical configurations often exhibit pronounced loss of accuracy when applied to complex industrial geometries, where the dominant flow physics departs from the training regime. These limitations indicate that current PIML formulations remain inherently incomplete and motivate the systematic treatment of physical, mathematical, and numerical constraints as core modeling principles rather than auxiliary regularization strategies.
A closely related and persistent deficiency of neural-network-based turbulence closures is the absence of systematic uncertainty quantification, whereby deterministic point predictions provide no measure of confidence and obscure the reliability of inferred constitutive corrections. Neglecting epistemic uncertainty arising from limited data and model-form inadequacy frequently leads to overconfident predictions and severe degradation in extrapolative regimes. Recent Bayesian and ensemble-based approaches, including stochastic field inversion, address this limitation by representing closure corrections as random fields and enabling the propagation of constitutive uncertainty to quantities of interest. When embedded within PIML frameworks, uncertainty quantification naturally complements realizability and invariance constraints by identifying regions where constitutive assumptions are weakly informed by data, thereby unifying data-driven closure modeling, model-form uncertainty, and physical admissibility within a single predictive framework.
3 Constraints in turbulence modeling
Turbulence models are governed by a set of foundational constraints that ensure their predictions remain consistent with physical reality, mathematically sound and computationally feasible. These constraints not only shape the form of the models but also define the boundaries of their applicability. This section provides an overview of the major constraint categories that influence turbulence model development.
3.1 Physical constraints
The formulation of turbulence models is fundamentally governed by physical constraints that ensure robustness, consistency and applicability across a wide range of flow conditions. These constraints arise directly from the principles of continuum mechanics and fluid dynamics, reflecting the requirement that all closures remain faithful to the governing laws of physics. At the most basic level, turbulence models must respect the conservation of mass, momentum and energy, preserving the integrity of the Navier–Stokes equations. Beyond conservation, realizability conditions impose strict limits on turbulence quantities, such as ensuring that the Reynolds stress tensor remains positive semi-definite, thereby excluding nonphysical states. Symmetry considerations require that turbulence models reproduce isotropy in the absence of mean strain, while invariance properties such as Galilean invariance demand that predictions remain independent of the observer’s frame of reference. Dimensional consistency further constrains models to respect the principles of homogeneity and similarity, ensuring scale-independent formulations that remain valid across flow regimes. Finally, memory effects recognize that turbulence responds to changes in mean strain and rotation with finite time lags, introducing temporal nonlocality that must be embedded into constitutive relations. Neglecting any of these constraints risks producing models that yield unstable or nonphysical behavior, undermining their reliability. For this reason, both classical closures and emerging physics-informed data-driven approaches must incorporate these physical principles as essential design criteria, thereby ensuring that turbulence models achieve predictive fidelity while retaining theoretical soundness.
3.1.1 Conservation laws
At the core of turbulence modeling lies the requirement that all closures respect the fundamental conservation laws of mass, momentum and energy. These principles, rooted in continuum mechanics, represent non-negotiable constraints that must be satisfied regardless of whether the modeling strategy is analytical, empirical or data-driven. Adherence to conservation laws ensures that turbulence models remain consistent with the Navier–Stokes equations and maintain the physical balance of the flow. Failure to satisfy these laws inevitably leads to nonphysical predictions, numerical instabilities or a breakdown of generalizability across different flow regimes.
Mass conservation, embodied in the continuity equation, ensures that the net mass flux through any control volume remains zero. This principle must be respected in both incompressible and compressible turbulence modeling to maintain consistency in mass transport, particularly in complex flow configurations such as boundary layers or jets. Momentum conservation, derived from the Navier–Stokes equations, governs the balance between inertial, pressure, viscous and turbulent transport forces. Incorporating Reynolds stress closures into the Navier–Stokes equations necessitates strict compliance with the conservation laws of mass and momentum. Any violation of these balance constraints alters the effective force terms in the governing equations and leads to nonphysical solutions for the velocity and pressure fields. Energy conservation is especially critical in compressible flows and high-speed aerodynamics, where the accurate representation of kinetic and internal energy exchanges, including dissipation and production, determines the fidelity of temperature and pressure fields.
Recent advancements in PIML have demonstrated the critical role of embedding physical laws particularly conservation principles directly into the model training to improve robustness and mitigate the risk of overfitting to nonphysical patterns. Duraisamy [22] emphasized that incorporating governing equations, such as the Navier–Stokes equations, into the learning framework can significantly constrain the hypothesis space of ML models, ensuring that the resulting closures remain consistent with known physical behavior even in regions with sparse or noisy data. This perspective highlights the importance of physics-based regularization in maintaining model generalizability and avoiding spurious predictions driven solely by statistical correlation rather than causal physical mechanisms.
Building upon this paradigm, Zhang et al. [23] proposed a novel training framework that integrates a neural network-based nonlinear eddy viscosity model directly into the RANS solver during training. This approach enforces model-consistent learning, wherein the neural network is optimized not merely to fit high-fidelity data, but to ensure consistency between predicted and observed flow quantities (e.g., velocity profiles, lift and drag) within the closed-loop solution of the RANS equations. Their method employs an ensemble Kalman inversion technique with adaptive step sizing, which facilitates stable convergence even in the presence of indirect or limited observation data. By embedding conservation laws and the underlying RANS structure directly into the training loop, this approach significantly reduces the susceptibility of learned models to overfit unphysical artifacts in training data, thereby enhancing their applicability to high-Reynolds-number flows and complex geometries.
3.1.2 Realizability condition
A realizability condition is a fundamental constraint in turbulence modeling that ensures the predicted turbulent quantities remain physically admissible across the entire flow domain. This condition plays a critical role in preventing the emergence of nonphysical behavior such as negative turbulent kinetic energy or unrealizable stress states, which can compromise numerical stability and model credibility, particularly in regions with strong anisotropy or near stagnation points. The foundation of the realizability condition is the requirement that the Reynolds-stress tensor be positive semi-definite. The Reynolds stress is defined as the Reynolds average of the dyadic product of the velocity fluctuations. For any arbitrary real vector, the associated quadratic form of the Reynolds-stress tensor corresponds to the Reynolds average of the square of the projection of the velocity fluctuations onto that vector. Because the average of a squared quantity is necessarily non-negative, this quadratic form is non-negative for all vectors, and the Reynolds-stress tensor therefore admits no negative eigenvalues. As a direct consequence, the turbulent kinetic energy, defined as one half of the trace of the Reynolds stress, is necessarily non-negative and the normal stress components remain physically admissible. This positive semi-definiteness provides the mathematical basis of the realizability requirement. Furthermore, the Cauchy-Schwarz inequality must be satisfied for the shear stress components, ensuring physically valid correlations between velocity fluctuations.
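These conditions can be checked directly for any modeled stress tensor, as in the sketch below; the tolerance and the symmetrization step are illustrative choices rather than part of any specific model.

```python
import numpy as np

def is_realizable(R, tol=1e-12):
    """Check realizability of a modeled Reynolds-stress tensor R (3x3).

    Conditions tested:
      * normal stresses non-negative,
      * Cauchy-Schwarz inequality for each shear component,
      * positive semi-definiteness (no negative eigenvalues).
    """
    R = 0.5 * (R + R.T)                       # enforce symmetry before testing
    if np.any(np.diag(R) < -tol):
        return False
    for i in range(3):
        for j in range(i + 1, 3):
            if R[i, j] ** 2 > R[i, i] * R[j, j] + tol:
                return False
    return bool(np.all(np.linalg.eigvalsh(R) >= -tol))
```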
A central tool for visualizing and enforcing the realizability constraint in turbulence modeling is the mapping of Reynolds stress anisotropy states into bounded geometric domains, the best known being Lumley’s triangle (Figure 1) and the barycentric map (Figure 2). Lumley [26] demonstrated that the positive semi-definiteness of the Reynolds stress tensor, combined with the statistical symmetries of turbulence, constrains the second and third invariants of the anisotropy tensor to a bounded region in the (II, III) invariant plane, producing what is now known as Lumley’s triangle. This finite domain arises from the requirement that the Reynolds stress tensor remain physically realizable, meaning it must yield non-negative turbulent kinetic energy and satisfy the symmetry properties of homogeneous turbulence. The vertices of Lumley’s triangle correspond to the three limiting turbulence componentiality states: one-component turbulence, in which all turbulent kinetic energy is aligned in a single direction; two-component turbulence, in which energy is distributed within a plane; and isotropic turbulence, in which energy is equally shared among all directions. The isotropy vertex defines both a symmetry condition for homogeneous, shear-free turbulence and a mathematical boundary of the realizable Reynolds stress space, making Lumley’s triangle a cornerstone tool for evaluating whether turbulence models produce physically admissible and symmetry-consistent predictions. Building on this foundation, Banerjee et al. [27] introduced the barycentric map, which reformulates the same realizable anisotropy domain using the eigenvalues of the Reynolds stress anisotropy tensor within a linear barycentric coordinate system. This representation preserves the same physical and realizability limits as Lumley’s invariant-based formulation, while providing a more uniform and intuitive geometric interpretation of turbulence anisotropy. The approach begins by separating the turbulent kinetic energy from the directional distribution of turbulent fluctuations through the definition of the anisotropy tensor, which is symmetric, traceless, and fully characterizes turbulence componentiality. The anisotropy tensor is described by three real eigenvalues whose sum is zero and whose admissible range is constrained by the positive semi-definiteness of the Reynolds stress tensor.

Figure 1: Lumley’s triangle.

Figure 2: Barycentric map.
These eigenvalues represent the relative distribution of turbulent kinetic energy among the principal directions and define three limiting turbulence states: one-component turbulence, two-component turbulence, and isotropic turbulence. These limiting states form the vertices of a triangular realizability domain. Any physically admissible anisotropy state can then be expressed as a convex combination of the limiting states by introducing barycentric coordinates that are linear functions of the anisotropy eigenvalues and sum to unity. Each turbulence state is thus mapped to a unique point within or on the boundary of the triangle.
Because the mapping depends only on the anisotropy eigenvalues, it is invariant under coordinate rotations and independent of the observer’s reference frame. The resulting barycentric map provides a bounded and linear representation of turbulence anisotropy, in contrast to invariant-based representations such as Lumley’s triangle, which rely on nonlinear combinations of invariants. Consequently, the barycentric map offers a clear and physically meaningful visualization of turbulence componentiality and has become a valuable diagnostic and constraint-enforcement tool in both classical and data-driven turbulence modeling frameworks.
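A minimal sketch of this mapping, following the barycentric coordinates of Banerjee et al. [27] (C_1c = λ1 − λ2, C_2c = 2(λ2 − λ3), C_3c = 3λ3 + 1 for eigenvalues sorted in descending order), is shown below; variable names are illustrative.

```python
import numpy as np

def barycentric_coordinates(R):
    """Map a Reynolds-stress tensor R to barycentric anisotropy coordinates.

    Returns (C_1c, C_2c, C_3c): weights of the one-component, two-component
    and isotropic limiting states; they sum to one for realizable stresses.
    """
    k = 0.5 * np.trace(R)                       # turbulent kinetic energy
    b = R / (2.0 * k) - np.eye(3) / 3.0         # anisotropy tensor b_ij
    lam = np.sort(np.linalg.eigvalsh(b))[::-1]  # eigenvalues, descending
    C1c = lam[0] - lam[1]
    C2c = 2.0 * (lam[1] - lam[2])
    C3c = 3.0 * lam[2] + 1.0
    return C1c, C2c, C3c
```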
These representations serve not only as visualization techniques but as critical diagnostic tools for assessing whether turbulence model predictions remain within the physically realizable domain, especially in regions of pronounced anisotropy such as near-wall layers, stagnation points and flow separation zones. When models violate this constraint by producing negative eigenvalues or predictions outside the realizability bounds, they often generate nonphysical stresses that can lead to numerical instabilities. Consequently, frameworks such as Lumley’s triangle and the barycentric map are integral to the development, evaluation and calibration of both conventional and data-driven turbulence closures, ensuring adherence to fundamental physical principles and enhancing model robustness across a wide range of flow conditions.
Traditional eddy-viscosity models, such as the standard k-ε model developed by Launder and Spalding [4], have long served as the backbone of RANS turbulence modeling due to their relative simplicity and robustness across a wide range of engineering flows; however, they lack mechanisms to enforce key realizability conditions such as the positive semi-definiteness of the Reynolds stress tensor and the non-negativity of turbulent kinetic energy. This deficiency often leads to the overprediction of turbulence levels, particularly in regions with strong adverse pressure gradients, stagnation and flow separation, resulting in unphysical stress distributions and diminished predictive accuracy. To overcome these limitations, Shih et al. [28] introduced the realizable k-ε model, which represents a significant advancement by modifying the eddy-viscosity formulation and incorporating a new transport equation for the dissipation rate. The model ensures positivity of normal Reynolds stresses and imposes a bounded turbulence time scale, achieved by dynamically adjusting model coefficients based on the local mean strain rate. As a result, the realizable k-ε model demonstrates improved performance in complex flows, particularly those involving strong streamline curvature, rotation, separation and stagnation. These efforts mark an important step in embedding physical constraints directly into turbulence closures, thereby enhancing both the accuracy and robustness of RANS simulations.
In data-driven turbulence modeling, enforcing the realizability constraint is critical to ensure that predicted Reynolds stress tensors remain physically admissible by being positive semi-definite and confined within the realizable anisotropy space. To address this challenge, Wang et al. [19] developed an ML framework that predicts the eigenvalues and eigenvectors of the Reynolds stress anisotropy tensor while constraining them within a realizable domain using the barycentric map, thereby guaranteeing the positive semi-definiteness of the reconstructed stresses. Their approach employs a parametrization guided by invariant representations and tensor basis expansions, embedding the mathematical structure of realizable Reynolds stresses directly into the learning architecture and preventing nonphysical behaviors such as negative turbulent kinetic energy or non-symmetric stress tensors. The barycentric map, with its linear and convex structure, is particularly advantageous for training and validation because it enables smooth interpolation between turbulence componentiality states while preserving correct isotropy and one- or two-component limits. Parameterizing Reynolds stresses in eigenvalue–eigenvector form and constraining them within these geometric realizability maps ensures that models maintain compliance with fundamental physical limits independently of the underlying learning algorithm. Collectively, these developments demonstrate how the theoretical foundations established by Lumley [26] and Banerjee et al. [27] now underpin the construction of robust data-driven turbulence closures, bridging empirical modeling with physics-based fidelity and embedding symmetry and realizability constraints as core elements of modern ML turbulence models.
3.1.3 Symmetry conditions
Among the essential constraints on constitutive relations in turbulence modeling are symmetry conditions, which ensure that modeled stresses reflect the fundamental invariance and symmetry properties of the Navier–Stokes equations. One of the most important of these is the requirement that, for eddy viscosity models, turbulence reduces to an isotropic state in the absence of mean strain or rotation. In homogeneous and shear-free turbulence, where production of anisotropy vanishes, the Reynolds stress tensor must collapse to an isotropic form with equal normal stresses and zero shear stresses. The isotropy condition serves as a fundamental physical requirement for any turbulence closure, providing a benchmark for assessing both the model’s physical consistency and its compliance with the statistical symmetries characteristic of homogeneous turbulence [29], [30], [31].
The importance of isotropy extends beyond serving as a limiting case, as it also defines a cornerstone of the realizable space of Reynolds stresses. The Reynolds stress tensor, being a symmetric and positive semi-definite second-order tensor, occupies a bounded region of anisotropy states constrained by both realizability conditions and symmetry properties. Lumley [26] formally derived these bounds, showing that the set of all physically admissible Reynolds stresses can be mapped into a triangular domain in the anisotropy invariant plane, now known as Lumley’s triangle. Within this space, complete isotropy forms one extreme corner, representing the state where all eigenvalues of the Reynolds stress tensor are equal, while the other corners correspond to one-component and two-component turbulence. This relationship emphasizes that isotropy serves as both a symmetry requirement and a mathematical bound of realizability, meaning any closure that predicts stresses outside this limit violates physical symmetry as well as realizability constraints.
Building on Lumley’s work, Banerjee et al. [27] introduced the barycentric map, an alternative geometric representation of the realizable anisotropy space derived directly from the eigenvalues of the Reynolds stress tensor. In this framework, isotropy again defines a critical vertex of the realizable domain, reinforcing its role as both a symmetry limit and a realizability boundary. The barycentric map provides a linear and visually intuitive representation of how turbulence models transition between isotropic, two-component and one-component states, making it a valuable diagnostic tool for assessing whether a model respects both symmetry and realizability constraints.
For turbulence closures, this dual role of isotropy as a symmetry condition and a realizability bound imposes strict constraints on constitutive relations. Eddy-viscosity models are required to reduce to an isotropic eddy-viscosity formulation when mean strain is absent, whereas Reynolds stress models and nonlinear closures must ensure that the anisotropy tensor vanishes under these conditions. EARSM formulations [9] achieve this by constructing the anisotropy tensor from objective invariants of the mean strain and rotation, guaranteeing a return to isotropy when those invariants vanish. Failure to satisfy this condition can produce spurious anisotropy in homogeneous turbulence, violating both physical symmetry and the mathematical realizable domain of the Reynolds stress tensor [30], [31].
Beyond isotropy in homogeneous turbulence, symmetry conditions include frame indifference under coordinate transformations and the preservation of tensor symmetry properties. The Reynolds stress tensor must remain symmetric under any modeled closure formulation. More broadly, the constitutive relations must respect the symmetry group of the flow, including invariance under reflections and rotations of the coordinate axes. These constraints are not simply mathematical formalities, as violating them can cause models to produce spurious preferred directions and unphysical energy distributions in flows that should remain symmetric [11], [12].
In data-driven turbulence modeling, symmetry and realizability constraints have become increasingly critical. ML-based closures trained on limited flow datasets can easily produce nonphysical anisotropic stress in free-stream or homogeneous regions unless isotropy and realizability are explicitly embedded. Modern approaches address this by incorporating invariant tensor bases and eigenvalue parameterizations that enforce both isotropy and the positive semi-definiteness of the Reynolds stress tensor. For example, Ling et al. [18] ensured isotropy recovery in their TBNN by using an integrity basis that collapses to zero anisotropy in the absence of mean gradients, while Wang et al. [19] incorporated realizability by constraining predicted eigenvalues within the barycentric map domain.
Isotropy in homogeneous turbulence represents more than a symmetry condition as it establishes both a physical and mathematical boundary for the realizable Reynolds stress space. The seminal contributions of Lumley and Banerjee formalized this relationship, demonstrating that enforcing isotropy is intrinsically linked to keeping turbulence models within the physically admissible domain. Consequently, both symmetry and realizability constraints serve as complementary foundations for the formulation of constitutive relations in turbulence modeling, underpinning approaches ranging from traditional RANS closures to modern data-driven frameworks.
3.1.4 Galilean invariance and material frame indifference
A fundamental requirement for any turbulence model is that its predictions remain independent of the observer’s frame of reference. This requirement is expressed through the principles of Galilean invariance and MFI, also referred to as objectivity, which govern the form of the governing equations and constitutive closures. While these concepts are closely related, they are not identical, and their precise interpretation and application within turbulence modeling have been the focus of significant theoretical development and ongoing debate [31], [32], [33].
The foundation of Galilean invariance is the requirement that the Reynolds stress be independent of the observer’s inertial reference frame. Under a Galilean transformation corresponding to a uniform translation with constant velocity, both the instantaneous and mean velocities are shifted by the same amount, while the velocity fluctuations remain unchanged. Since the Reynolds stress depends solely on these fluctuations, it is invariant under uniform translations of the reference frame. Consequently, any admissible Reynolds-stress closure must not depend on the absolute value of the mean velocity, but only on Galilean-invariant quantities such as mean velocity gradients, turbulence scales, or invariants constructed from them.
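This invariance is straightforward to verify numerically: shifting every velocity sample by a constant vector leaves the fluctuation statistics, and hence the single-point Reynolds stresses, unchanged. The sketch below uses synthetic samples purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=(10000, 3))      # synthetic velocity samples at one point

def reynolds_stress(u):
    """<u_i' u_j'> computed from fluctuations about the sample mean."""
    fluct = u - u.mean(axis=0)
    return fluct.T @ fluct / u.shape[0]

U0 = np.array([5.0, -2.0, 1.0])      # uniform translation of the reference frame
R_original = reynolds_stress(u)
R_shifted = reynolds_stress(u + U0)  # same samples seen from the translating frame
print(np.allclose(R_original, R_shifted))  # True: stresses are Galilean invariant
```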
At a minimum, Galilean invariance is universally accepted as a non-negotiable constraint. It requires that the governing equations of fluid motion and their closures remain unchanged when observed from any inertial frame that differs only by a uniform translation. For turbulence modeling, this means that the modeled Reynolds stresses or eddy viscosities must depend solely on velocity gradients and invariant flow quantities, and not on the absolute velocity of the fluid. A model that changes its predictions when a constant velocity is added to the entire flow field is considered physically inadmissible. This requirement is critical for both RANS closures and LES subgrid-scale models, ensuring that predictions remain consistent across inertial frames regardless of the background motion of the observer [11], [34], [35].
Extending beyond uniform translations, rotational invariance and the broader concept of MFI impose stricter conditions. MFI requires that constitutive relations, such as the modeled Reynolds stress tensor, remain unaffected under any superimposed rigid-body motion including both translations and rotations of the reference frame. In continuum mechanics, MFI is often treated as a universal axiom for constitutive equations, guaranteeing that material responses are objective and independent of the observer’s motion [32]. For turbulence modeling, this translates into demanding that the Reynolds stresses transform as true second-order tensors under any coordinate rotation and that the functional forms of closures be constructed from objective quantities such as invariant combinations of the strain-rate and rotation-rate tensors [11], [12]. This requirement has driven the use of tensor integrity bases and invariant polynomial expansions in the development of nonlinear eddy-viscosity models and EARSM, which guarantee objectivity by design [9].
In turbulence modeling, MFI should be interpreted primarily in terms of covariance or objectivity of the modeled stress under rigid coordinate transformations and, where applicable, form invariance with respect to the appropriately transformed mean equations, including those expressed in rotating reference frames, rather than as a direct analogue of material objectivity in classical constitutive theory. Because Reynolds and subgrid-scale stresses are statistical quantities defined through a specific averaging or filtering operation, classical continuum-mechanics arguments for MFI do not transfer verbatim. Both the mean-fluctuation decomposition and the governing equations themselves are modified under non-inertial observer transformations. Consequently, Galilean invariance remains a mandatory requirement, whereas MFI is most effectively enforced by constructing closures from objective tensor bases and invariant quantities such as integrity bases formed from the mean rate-of-strain tensor and an appropriate measure of rotation and by explicitly distinguishing physical system rotation from changes in the observer frame, particularly in curved or rotating flows and in solver-coupled ML closures.
However, the direct applicability of MFI as a strict constraint in turbulence modeling has been the subject of significant discussion. Speziale [34] argued strongly that turbulence closures should satisfy MFI to maintain physical consistency, emphasizing that failing to do so can lead to nonphysical stress responses in flows with significant curvature, rotation or three-dimensional secondary motions. He demonstrated that many linear eddy-viscosity models, while Galilean invariant, are not fully frame-indifferent because they lack nonlinear coupling between strain and rotation, leading to inaccurate anisotropy predictions in rotating or curved flows. This insight motivated the development of nonlinear eddy-viscosity models and higher-order closures that incorporate MFI to capture the physics of turbulence under complex kinematics [8], [12].
On the other hand, some researchers have questioned whether MFI, as formulated in continuum mechanics, should be imposed rigidly on turbulence models. Unlike constitutive laws for materials, turbulence models describe the statistics of a chaotic and multi-scale flow field rather than a direct material response. Critics argue that because turbulence modeling involves closure assumptions on averaged quantities rather than exact constitutive relations, enforcing MFI too strictly may overconstrain the model space and prevent empirical tuning necessary for engineering accuracy. This debate has led to a pragmatic view in the turbulence modeling community: Galilean invariance is mandatory, while MFI is highly desirable and often incorporated through invariant tensor bases and objective inputs, but in some cases can be relaxed to allow models to better fit experimental or DNS data in specific flow regimes [10], [31].
This question has resurfaced in the context of data-driven turbulence modeling, where ensuring invariance properties is crucial for generalization. ML-based Reynolds stress closures that ignore Galilean invariance or objectivity tend to overfit to coordinate-specific patterns and produce nonphysical predictions when applied to new flows. To address this, researchers have embedded invariance constraints directly into learning architectures. Ling et al. [18] developed the TBNN, which enforces both Galilean invariance and MFI by constructing the anisotropy tensor as a linear combination of invariant tensor bases. Wang et al. [19] incorporated realizability and frame-indifference by representing Reynolds stress anisotropy within barycentric coordinates and invariant feature spaces, thereby ensuring predictions remained confined to a realizable and objective domain. These approaches show that even for data-driven models, incorporating MFI-like constraints enhances physical consistency and model robustness across diverse flow conditions.
Invariance properties form a core set of physical constraints for turbulence models. Galilean invariance is universally required to ensure frame-independent predictions, while MFI, or objectivity, though debated, is widely recognized as highly desirable, particularly for models intended for complex three-dimensional and rotating flows. The integration of these constraints, whether in classical algebraic closures or modern ML frameworks, remains essential for developing turbulence models that are physically consistent, robust across reference frames, and generalizable across flow regimes and geometries.
3.1.5 Dimensional consistency
Dimensional consistency is one of the most fundamental constraints in the formulation of turbulence models, requiring that all modeled terms obey the principle of dimensional homogeneity. This ensures that the governing equations remain valid under changes in units and maintain their predictive capability across different flow regimes and Reynolds numbers. Despite being a fundamental modeling requirement, dimensional homogeneity is often not rigorously enforced, particularly in hybrid or empirically blended turbulence models. Such oversights can result in physically inconsistent predictions and compromise the model’s applicability and generalizability across different flow regimes. In turbulence modeling, this constraint is particularly crucial because it ensures that turbulence quantities such as Reynolds stresses, eddy viscosity and turbulent kinetic energy are properly scaled with respect to the characteristic properties of the flow. For instance, in eddy-viscosity-based models, dimensional consistency dictates that the turbulent viscosity must have dimensions of kinematic viscosity and therefore be constructed from turbulence quantities such as the turbulent kinetic energy and dissipation rate, or from the product of characteristic length and velocity scales [31], [35]. Violation of this requirement results in turbulence closures that lack robustness and fail to generalize beyond their calibration range, particularly in high-Reynolds-number flows, compressible regimes and geometrically complex configurations.
The importance of dimensional consistency was strongly emphasized by Spalart and Speziale [36] in their critique of an earlier formulation proposed by Wang [37], who attempted to construct constitutive relations for the Reynolds stress tensor without explicitly involving turbulence scales such as turbulent kinetic energy. They argued that the Reynolds stress tensor cannot be consistently expressed using only mean velocity gradients and thermodynamic quantities without introducing turbulence-specific dimensional scales. This critique highlighted a more fundamental principle: dimensional consistency is not merely a modeling convention but a core physical requirement that ensures the correct representation of turbulent transport phenomena. Their argument aligns with the foundational texts of turbulence theory [38], [39], which establish that turbulent diffusion operates independently of molecular viscosity and must be modeled using its own physically consistent scales.
Moreover, ensuring dimensional consistency becomes increasingly important in modern modeling paradigms, including PIML and hybrid RANS-LES approaches, where diverse sources of data and model terms are combined. In such cases, failure to respect dimensional homogeneity can lead to hidden instabilities, misleading training outcomes and poor extrapolation to new flow conditions. For example, in data-driven closures, embedding non-dimensional and dimensionally consistent input features such as invariant combinations of strain-rate and rotation tensors ensures that the learned models remain physically meaningful and transferable [19], [20].
Dimensional consistency is a universal constraint that underpins both traditional and modern turbulence models. It ensures that constitutive relations respect the physics of turbulence, scale appropriately across different flow regimes and remain valid under transformation. As Spalart and Speziale [36] assert, any formulation that violates dimensional homogeneity risks undermining the predictive credibility and physical admissibility of the model. Therefore, dimensional consistency should be regarded not only as a formal mathematical requirement but also as a core physical principle guiding the construction of robust, generalizable and interpretable turbulence closures.
3.1.6 Memory effects, history effects and rapid distortion theory
The memory effect, also referred to in parts of the literature as the history effect, arises from the recognition that turbulence does not respond instantaneously to variations in the mean strain or rotation field. Instead, turbulent structures retain a finite-time “memory” of their prior states, which continues to influence their present dynamics. This finite response time manifests as a lag between imposed mean-flow distortions and the corresponding adjustment of the Reynolds stress tensor. The importance of accounting for these effects was emphasized early by Hinze [29], [40], who argued that the common eddy-viscosity assumption of an instantaneous and linear relationship between Reynolds stresses and the mean rate of strain fails to capture the physical reality of turbulence, particularly in rapidly changing or non-equilibrium flows. Physically, this lag reflects the finite timescales over which turbulent eddies redistribute energy and reorient their structures, so that the Reynolds stress tensor at a given instant is influenced not only by the instantaneous mean velocity gradients but also by their recent temporal evolution. Models that depend solely on the current local state of mean strain and rotation, such as linear eddy-viscosity closures, can therefore misrepresent transient or rapidly evolving flows.
Rapid Distortion Theory (RDT) provides a theoretical foundation for the short-time limit of turbulence response to mean-flow changes. Originating with Batchelor and Proudman [41] and refined by Hunt [42] and Cambon and Jacquin [43], RDT analytically describes the short-time evolution of turbulence subjected to strong mean strains or rotations, where nonlinear turbulent-turbulent interactions are temporarily negligible. In this regime, the turbulence retains a clear “memory” of the imposed distortion, and the theory provides exact solutions for the evolution of Reynolds stresses and anisotropy under idealized conditions. While RDT is not a general turbulence closure, it has been widely used to derive and calibrate rapid-distortion and return-to-isotropy terms in Reynolds stress modeling [30], [44], [45], making it a natural asymptotic constraint for high-strain-rate regimes.
From the standpoint of modeling constraints, incorporating memory effects serves both a physical and a mathematical role. Physically, it ensures that the model captures the finite-rate adjustment of turbulence to changes in the mean flow, which is essential for accurately representing highly inhomogeneous or rapidly evolving flows such as wakes, shear layers and turbomachinery passages. Mathematically, embedding temporal or material derivatives into closure relations acts as a form of regularization, stabilizing numerical predictions by preventing unrealistically abrupt variations in the Reynolds stresses during unsteady simulations. In the limit of extremely rapid distortions, such formulations should reduce to the predictions of RDT, thereby anchoring the model’s behavior to a physically justified asymptote.
In conventional turbulence modeling, memory effects are represented through history-integral formulations or relaxation models that embed the influence of past flow states into current stress predictions. Speziale et al. [44] implemented a memory constraint in a Reynolds-stress model via Lagrangian history-based pressure-strain closures, while Durbin’s elliptic relaxation method [5], [6] employed spatial transport equations to capture the non-local influence of wall effects, embedding their “memory” in the turbulence field. Speziale [30], [34] advanced this approach by introducing constitutive relations with material derivatives of the Reynolds stress or anisotropy tensor, thereby defining a characteristic time scale over which turbulence retains information from prior configurations. This formulation is consistent with RDT, which characterizes the short-time, linear response of turbulence to sudden mean-flow distortions and demonstrates that stress evolution depends on the integrated history of mean strain and rotation rather than solely on instantaneous values.
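A schematic relaxation-type relation illustrates how a material derivative introduces such memory; the form below is a generic sketch with an assumed relaxation time τ_r (of order k/ε), not the closure of any particular reference:

```latex
\tau_r \frac{D b_{ij}}{D t} + b_{ij} = -\,C_\mu \frac{k}{\varepsilon}\, S_{ij} .
```

For slowly varying flows the derivative term becomes negligible and the equilibrium eddy-viscosity relation is recovered, whereas for rapid distortions the derivative term dominates and the stresses adjust over the finite time τ_r, in line with the short-time RDT limit discussed above.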
In modern physics-informed data-driven turbulence modeling, the memory-effect constraint has re-emerged as a critical ingredient for generalization. Data-driven closures that rely purely on instantaneous flow features are prone to overfitting and may fail in unsteady or non-equilibrium conditions. To address this, PIML frameworks have integrated strain-rotation histories into their architectures. For example, Wang et al. [19] enforced realizability while implicitly capturing history effects through eigen-decomposed, invariant-based stress predictions that learn from both instantaneous and evolving flow features.
The memory-effect constraint represents an essential, though often overlooked, category of restrictions on constitutive relations in turbulence modeling. Rooted in the physical reality that turbulence exhibits finite response times and reinforced by the theoretical framework of RDT, this constraint has guided the formulation of advanced Reynolds-stress closures and is increasingly influencing the development of physics-informed data-driven models. By embedding history dependence in a way that remains consistent with invariance principles, realizability conditions and dimensional consistency, turbulence models can achieve improved accuracy and robustness in predicting unsteady, non-equilibrium and geometrically complex three-dimensional flows.
3.2 Mathematical and numerical constraints
The formulation of constitutive relations in turbulence modeling is constrained not only by physical considerations but also by mathematical and numerical requirements that ensure stability, realizability and computational feasibility. These constraints arise from the necessity of preserving the essential properties of turbulence while enabling robust implementation in CFD solvers. Without adherence to these constraints, turbulence closures may produce ill-posed systems, unstable numerical behavior or unphysical solutions that compromise both predictive accuracy and engineering utility.
3.2.1 Well-posedness and stability
A primary mathematical requirement is that turbulence models yield well-posed systems of equations, meaning they must possess unique, bounded and stable solutions under appropriate boundary and initial conditions. Ill-posedness may lead to unbounded growth of modeled quantities, spurious oscillations or nonphysical singularities [26], [31]. For instance, nonlinear Reynolds stress closures can become unstable in pure rotation or rapid strain regimes if not properly regularized [12], [34].
From a numerical standpoint, stability and convergence play equally critical roles. Discretization schemes must handle stiff source terms in turbulence transport equations, particularly in near-wall regions or highly strained flows. Patel et al. [46] and Menter [47] emphasized that improper treatment of production and dissipation terms can lead to solver divergence. To address this, it is recommended to treat negative (destruction) terms implicitly and positive (production) terms explicitly, as this increases the diagonal dominance of the matrix system and enhances numerical stability. This recommendation represents a clear numerical constraint that governs how turbulence closures must be discretized for robust application. The issue is magnified in advanced Reynolds stress models, where additional non-linear terms exacerbate stiffness and can trigger convergence failures unless carefully stabilized.
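A minimal sketch of this practice is given below for a single-cell update of the k-equation source terms, with production added explicitly and destruction linearized and treated implicitly; the numerical values are purely illustrative:

```python
def advance_k(k, P_k, eps, dt, n_steps):
    """Production P_k enters explicitly; the destruction term is linearized as
    (eps / k_old) * k_new and taken implicitly, which adds a positive
    contribution to the diagonal and keeps k bounded and positive."""
    for _ in range(n_steps):
        # k_new = k_old + dt * (P_k - (eps / k_old) * k_new)  ->  solve for k_new
        k = (k + dt * P_k) / (1.0 + dt * eps / k)
    return k

# Stays positive even for a large time step; a fully explicit update with the
# same values would immediately drive k negative.
print(advance_k(k=1.0, P_k=0.5, eps=2.0, dt=10.0, n_steps=50))
```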
3.2.2 Consistency with asymptotic limits
Another essential constraint is that turbulence models must remain consistent with asymptotic limits derived from turbulence theory. Models that fail to recover these limits risk producing unphysical predictions outside their calibration range. Asymptotic consistency in turbulence modeling is typically categorized into three major regimes that impose essential constraints on closure development. First, models must recover Kolmogorov’s inertial-range scaling, ensuring that energy cascade dynamics and dissipation in the inertial subrange follow the universal laws established by Kolmogorov [3]. Failure to reproduce these scalings leads to inaccuracies in small-scale turbulence representation. Second, near-wall behavior provides stringent asymptotic constraints, as wall-bounded turbulence requires models to correctly capture the law of the wall [48] and the viscous damping of turbulent stresses. To address this, wall functions [4], low-Reynolds number damping functions [49] and elliptic-relaxation formulations [5], [6] were introduced, embedding the necessary near-wall asymptotics. Finally, models must remain consistent with low- and high-Reynolds number limits. At very low Reynolds numbers, turbulence must decay consistently with viscous dissipation, while at very high Reynolds numbers, closures must capture self-similar asymptotic states in free shear flows.
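For reference, the classical near-wall limits that such formulations are built to recover can be summarized as follows, with κ ≈ 0.41 and B ≈ 5.0 the usual log-law constants:

```latex
u^+ = y^+ \quad (y^+ \lesssim 5), \qquad
u^+ = \frac{1}{\kappa} \ln y^+ + B \quad (y^+ \gtrsim 30), \qquad
k \sim y^2, \;\; \overline{u'v'} \sim y^3 \;\; \text{as } y \to 0 .
```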
3.2.3 Computational feasibility
Beyond physical and mathematical fidelity, turbulence models must remain computationally feasible, particularly for industrial and engineering applications where simulation time directly impacts design cycles. High-fidelity methods such as DNS and LES provide unmatched accuracy but are computationally intractable for high-Reynolds-number engineering flows. Instead, RANS closures remain the practical standard, balancing reduced accuracy against significant computational savings.
Computational feasibility is increasingly relevant for data-driven closures, where ML may introduce additional computational overhead. Efficient algorithms are therefore necessary to maintain tractability. Duraisamy [22] highlighted that embedding data-driven models directly into RANS solvers requires careful optimization to prevent prohibitive computational costs while still improving predictive accuracy. Thus, computational feasibility emerges as a practical constraint that often forces turbulence modeling to accept reduced fidelity in exchange for robustness, stability and applicability in real-world design environments.
To consolidate the wide range of physical, mathematical and numerical constraints discussed in this section, Table 1 provides a compact summary of the key requirements governing turbulence closures. The table highlights the physical role of each constraint, such as realizability, invariance, dimensional consistency and memory effects, and outlines how these principles are enforced in both classical RANS formulations and modern ML-based models. By presenting the physical implications together with their typical mathematical and computational implementations, Table 1 serves as a quick reference guide that complements the detailed discussions above. This summary underscores the central message of this review: constraints are not merely restrictive conditions but essential design principles that ensure physical admissibility, numerical robustness and generalizability across both traditional and data-driven turbulence modeling.
Table 1: Summary of key constraints governing turbulence constitutive relations and their enforcement in RANS/ML models.
| Constraint | Category | Physical implication | Enforcement in RANS | Implementation in ML |
|---|---|---|---|---|
| Conservation laws (mass, momentum, energy) | Physical | Ensure consistency with the Navier–Stokes equations and global physical balance | Enforced intrinsically through the RANS equations; closure enters only via modeled Reynolds stresses | Governing equations embedded in loss functions or solved in-the-loop (e.g., PINNs, solver-coupled ML) |
| Realizability | Physical | Reynolds stresses must remain physically admissible (positive semi-definite, non-negative turbulent kinetic energy) | Bounds on model coefficients; realizable k − ɛ formulations; stress-limiters | Eigenvalue-eigenvector parameterization; barycentric-map constraints; realizability-constrained loss functions |
| Symmetry/isotropy | Physical | Correct recovery of isotropic turbulence in homogeneous, shear-free flows | Anisotropy tensor vanishes when strain and rotation invariants vanish | Tensor-basis representations that collapse to isotropy by construction |
| Galilean invariance | Physical | Predictions independent of uniform translation of the reference frame | Dependence only on velocity gradients, not absolute velocity | Invariant input features; TBNN |
| Material frame indifference (objectivity) | Physical | Correct transformation under rotations and rigid-body motions | Invariant tensor bases (Pope [11]); nonlinear stress-strain relations (EARSM) | Equivariant architectures; invariant polynomial or tensor bases |
| Dimensional consistency | Physical/mathematical | Correct scaling across Reynolds numbers and unit systems | Eddy viscosity constructed from turbulence scales (k, ɛ, ω) | Non-dimensional input features; invariant normalization |
| Memory/history effects | Physical | Finite-time response of turbulence to changes in mean strain and rotation | Reynolds-stress transport models; relaxation terms; elliptic relaxation formulations | Recurrent networks; history-augmented features; temporal embeddings |
| Well-posedness | Mathematical | Existence, boundedness and stability of solutions | Regularization of nonlinear closures; bounded model coefficients | Physics-based regularization; stability-aware training |
| Asymptotic consistency (near-wall, inertial range) | Mathematical | Correct limiting behavior near walls and at extreme Reynolds numbers | Wall functions; damping functions; elliptic relaxation | Asymptotic-aware loss terms; hybrid RANS-ML blending |
| Numerical stability | Numerical | Robust convergence of CFD solvers | Implicit treatment of stiff terms; clipping and limiters | Blended ML corrections; solver-consistent training |
| Computational feasibility | Numerical | Practical cost for engineering applications | Preference for algebraic or two-equation closures | Lightweight surrogate models; non-intrusive ML corrections |
4 Implications for turbulence model development
The constraints discussed in the previous section not only limit the permissible forms of turbulence models but also serve as constructive guidelines for their refinement and innovation. Each constraint embodies a principle of physical, mathematical or numerical consistency that, when respected, contributes to greater accuracy, stability and reliability in turbulence predictions. For example, the enforcement of realizability conditions prevents the emergence of nonphysical stress states, thereby improving numerical stability and ensuring that modeled turbulence quantities remain physically admissible across diverse flow conditions. Likewise, satisfying invariance properties ensures that turbulence models produce predictions independent of the observer’s frame of reference, greatly enhancing their generality and applicability to different geometries and flow configurations. Constraints related to asymptotic behavior anchor models to well-established limits, such as near-wall scaling or inertial-range dynamics, which provides confidence that the closure will reproduce correct physics in extreme flow regimes. Similarly, dimensional consistency ensures that turbulence closures can be applied universally across scales and unit systems without recalibration, preserving their predictive integrity. Collectively, these constraints act not as restrictions but as design principles, providing a structured pathway for both improving traditional turbulence models and informing the development of novel, hybrid or data-driven approaches that aspire to balance physical fidelity with practical robustness.
4.1 Constraint-driven model refinement
Building on the recognition that constraints act as guiding principles, the historical evolution of turbulence closures can be understood as a progressive refinement process in which successive generations of models systematically embedded additional physical and mathematical constraints. Early eddy-viscosity formulations provided the first practical engineering tools, but their shortcomings in realizability, invariance, dimensional consistency and near-wall behavior prompted systematic modifications that embedded these constraints more explicitly. This subsection examines how classical models such as the k − ɛ, k − ω and RSM were iteratively improved through the targeted enforcement of physical and mathematical requirements, leading to greater stability, robustness and predictive accuracy across diverse flow regimes.
The classical k − ɛ model of Launder and Spalding [4] and the k − ω model of Wilcox [35] remain cornerstones of RANS turbulence modeling. Their success derived from dimensional consistency and simplicity, but they suffered from limitations including overprediction of turbulent kinetic energy in stagnation regions and failure to enforce realizability of Reynolds stresses. To overcome these shortcomings, realizability constraints were progressively introduced. The realizable k − ɛ model of Shih et al. [28] modified the eddy-viscosity formulation by introducing a variable Cμ dependent on the mean strain rate, ensuring that normal stresses remained positive and that turbulent kinetic energy could not become negative. This incorporation of realizability improved numerical stability and predictive fidelity, particularly in flows involving strong curvature, separation or recirculation.
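The sketch below shows only the structure of such a realizability bound, derived from positivity of a single normal stress under uniaxial mean strain with the standard eddy-viscosity relation; the functional form and coefficients of the actual Shih et al. model differ:

```python
def limited_c_mu(k, eps, dudx, c_mu0=0.09):
    """Strain-dependent cap on C_mu: requiring u'u' = (2/3)*k - 2*nu_t*dU/dx >= 0
    with nu_t = c_mu * k**2 / eps gives c_mu <= eps / (3 * k * dU/dx)."""
    if dudx <= 0.0:
        return c_mu0                           # no positivity risk from this component
    return min(c_mu0, eps / (3.0 * k * dudx))

for dudx in (0.1, 1.0, 10.0):                  # increasing strain rate at k = eps = 1
    print(dudx, limited_c_mu(1.0, 1.0, dudx))  # C_mu is reduced only at large strain
```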
Another major refinement stemmed from the requirement of Galilean invariance and MFI, which dictate that constitutive relations remain independent of the observer’s frame of reference. Traditional linear eddy-viscosity models failed to capture anisotropy in complex flows, leading to the development of nonlinear eddy-viscosity models and EARSM. Speziale [34] and Gatski and Speziale [12] formulated tensorial expansions of the Reynolds stress anisotropy based on invariant representations of the strain-rate and rotation tensors, thereby embedding frame-indifference by construction. Wallin and Johansson [9] extended this approach, producing EARSM capable of capturing anisotropy while maintaining realizability and objectivity, significantly improving predictions in rotating and three-dimensional shear flows.
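The structural template for these frame-indifferent closures is Pope’s general representation [11], in which the anisotropy is expanded over an integrity basis built from the nondimensionalized mean strain-rate tensor S and rotation-rate tensor Ω:

```latex
b_{ij} = \sum_{n=1}^{10} G_n(\lambda_1,\dots,\lambda_5)\, T_{ij}^{(n)},
\qquad
T^{(1)} = S, \quad
T^{(2)} = S\Omega - \Omega S, \quad
T^{(3)} = S^2 - \tfrac{1}{3}\operatorname{tr}(S^2)\, I, \ \dots
```

where λ1, …, λ5 are the independent scalar invariants of S and Ω. EARSM truncate this expansion and determine the coefficients G_n analytically, which is what guarantees objectivity by construction.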
Constraints arising from near-wall turbulence represent another area where refinements have played a critical role. Early RANS closures either relied on empirical wall functions [4] or introduced damping functions for low-Reynolds-number corrections [49], both of which imposed restrictive assumptions. Durbin [5], [6] introduced the elliptic relaxation framework, embedding wall effects as nonlocal constraints through elliptic equations. This innovation captured the “memory” of wall blocking and inhomogeneity, significantly improving predictions in boundary layers, separated flows and wall-bounded turbulence without resorting to purely empirical damping terms.
The recognition of memory effects, specifically the fact that turbulence does not respond instantaneously to variations in mean strain or rotation, contributed to the refinement of RSM. Speziale et al. [44] incorporated Lagrangian history-based pressure-strain closures, embedding temporal nonlocality and improving the representation of stress anisotropy in non-equilibrium flows. These refinements ensured consistency with RDT predictions, aligning closures with physical constraints on turbulence response in high-strain or rapidly distorted flows.
Constraint-driven refinements have also motivated the development of blended models that dynamically adapt between regimes. The k − ω SST model [47] combined the near-wall accuracy of the k − ω formulation with the free-shear performance of the k − ɛ model, guided by asymptotic consistency and numerical stability considerations. These blending strategies illustrate how constraints, most notably the near-wall asymptotic limit and free-stream decay, have played a central role in shaping turbulence model architecture.
The progressive embedding of realizability, invariance, near-wall asymptotics and memory effects demonstrates how constraints have acted as corrective forces driving turbulence model refinement. What began as simple two-equation closures evolved into a spectrum of constraint-driven formulations, each addressing specific deficiencies while extending predictive capability across increasingly complex flows. These refinements illustrate that constraints are not only boundary conditions on permissible model forms but also powerful design principles that have guided turbulence modeling from its empirical origins toward more physically grounded and robust frameworks.
4.2 Constraints in data-driven and machine-learning models
While constraint-driven refinement has long guided the development of traditional turbulence closures, the emergence of data-driven and ML-based approaches presents both new opportunities and new challenges. Unlike conventional models, which embed constraints directly into their constitutive relations, data-driven frameworks risk overfitting to high-fidelity training data and producing nonphysical predictions if left unconstrained. As such, modern research has increasingly focused on incorporating realizability, invariance, dimensional consistency and memory effects into learning architectures, either through physics-informed features, constraint-based loss functions or tensor-basis representations. This subsection explores how the same constraints that guided the refinement of classical models now serve as essential design principles for ensuring the robustness, generalizability and physical fidelity of ML turbulence closures.
One of the most critical requirements for ML closures is Galilean invariance and MFI, ensuring that model predictions remain independent of the observer’s reference frame. Ling et al. [18] addressed this by introducing the TBNN, which constructs the Reynolds stress anisotropy tensor as a tensorial expansion over an integrity basis of invariants derived from the strain-rate and rotation-rate tensors. This architecture guarantees that invariance is satisfied by construction, marking a major advance in embedding symmetry constraints into ML frameworks.
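A minimal numerical sketch of this architecture follows, using a truncated four-term basis and untrained weights purely for illustration; the published TBNN employs the full ten-term integrity basis and is trained on high-fidelity data:

```python
import numpy as np

def tensor_basis(S, W):
    """First four terms of Pope's integrity basis for traceless, nondimensional
    strain S and rotation W."""
    I = np.eye(3)
    return [S,
            S @ W - W @ S,
            S @ S - np.trace(S @ S) / 3.0 * I,
            W @ W - np.trace(W @ W) / 3.0 * I]

def invariants(S, W):
    """Scalar invariants of S and W that feed the coefficient network."""
    return np.array([np.trace(S @ S), np.trace(W @ W), np.trace(S @ S @ S),
                     np.trace(W @ W @ S), np.trace(W @ W @ S @ S)])

def tbnn_anisotropy(S, W, weights, biases):
    """TBNN-style forward pass: a small MLP maps invariants to coefficients g_n
    and the anisotropy is assembled as sum_n g_n T^(n), so Galilean invariance
    and frame-indifference hold by construction."""
    h = np.tanh(weights[0] @ invariants(S, W) + biases[0])
    g = weights[1] @ h + biases[1]
    return sum(gn * Tn for gn, Tn in zip(g, tensor_basis(S, W)))

# Illustrative (untrained) weights; in practice they are fitted to DNS/LES data.
rng = np.random.default_rng(0)
weights = [0.1 * rng.standard_normal((8, 5)), 0.1 * rng.standard_normal((4, 8))]
biases = [np.zeros(8), np.zeros(4)]
gradU = rng.standard_normal((3, 3))
gradU -= np.trace(gradU) / 3.0 * np.eye(3)         # incompressible mean flow
S, W = 0.5 * (gradU + gradU.T), 0.5 * (gradU - gradU.T)
print(np.trace(tbnn_anisotropy(S, W, weights, biases)))  # ~0: anisotropy is traceless
```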
Another vital consideration is realizability, which requires the Reynolds stress tensor to remain positive semi-definite and bounded within the physically admissible anisotropy space. The Bayesian uncertainty quantification framework introduced by Xiao et al. [50] laid the conceptual foundation by embedding physical constraints into the assessment of RANS model-form uncertainties. This work was subsequently extended by Wang et al. [19], who developed a PIML framework that operationalized these ideas by learning discrepancies between RANS-predicted and DNS Reynolds stresses while representing anisotropy within the barycentric map to ensure realizability. Building on this, Wu et al. [20] introduced a more comprehensive framework that explicitly incorporated eigenvalue-eigenvector parameterization and invariant-space loss functions into neural network training, thereby preventing nonphysical behaviors such as negative turbulent kinetic energy or stress states outside the Lumley triangle. Collectively, these studies illustrate how realizability, long embedded in traditional closures, has been reinterpreted as a critical design principle in data-driven turbulence modeling.
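The geometric idea can be sketched as follows; the mapping to barycentric weights follows Banerjee et al. [27], whereas the clip-and-renormalize step is only a hypothetical post-processing illustration, since the cited frameworks constrain the learned mapping itself:

```python
import numpy as np

def barycentric_coordinates(b):
    """Map a symmetric, trace-free anisotropy tensor b to barycentric weights
    (C_1c, C_2c, C_3c); realizable states give non-negative weights summing to one."""
    lam = np.sort(np.linalg.eigvalsh(b))[::-1]          # lam1 >= lam2 >= lam3
    return np.array([lam[0] - lam[1], 2.0 * (lam[1] - lam[2]), 3.0 * lam[2] + 1.0])

def project_to_realizable(b):
    """Hypothetical repair step: clip negative weights, renormalize, and rebuild
    the anisotropy with its original eigenvectors."""
    lam, vecs = np.linalg.eigh(b)
    vecs = vecs[:, np.argsort(lam)[::-1]]
    c = np.clip(barycentric_coordinates(b), 0.0, None)
    c /= c.sum()
    lam_r = np.array([c[0] + c[1] / 2.0 + c[2] / 3.0 - 1.0 / 3.0,
                      c[1] / 2.0 + c[2] / 3.0 - 1.0 / 3.0,
                      c[2] / 3.0 - 1.0 / 3.0])           # inverse of the barycentric map
    return vecs @ np.diag(lam_r) @ vecs.T
```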
ML models must also preserve dimensional consistency and recover known asymptotic limits. Physics-informed feature engineering often involves the use of non-dimensional invariants derived from the mean velocity gradient tensor, ensuring scale-independent predictions across Reynolds numbers and geometries. Duraisamy [22] emphasized that embedding dimensional constraints is essential to avoid producing closures that fail under scaling transformations or in extrapolated regimes.
Another important direction is the incorporation of memory effects into data-driven models. Turbulence does not respond instantaneously to distortions, and this finite response time can be encoded into ML models through recurrent neural networks and sequence-learning architectures. Parish and Duraisamy [16] demonstrated how field inversion and ML techniques can be adapted to incorporate history dependence, aligning learned closures with the temporal nonlocality inherent in turbulence.
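One simple way to expose such history dependence to a learning model is an exponentially weighted strain average along a pathline, used here as an illustrative stand-in for the recurrent and field-inversion approaches cited above rather than their actual formulation:

```python
import numpy as np

def history_weighted_strain(S_history, dt, tau):
    """Exponentially weighted average of past strain-rate tensors with an
    assumed turbulence memory time tau; a minimal history-augmented feature."""
    S_eff = np.zeros_like(S_history[0])
    for S in S_history:                     # oldest to newest along the pathline
        S_eff += (dt / tau) * (S - S_eff)   # first-order relaxation toward current strain
    return S_eff

# Strain switched on halfway through the history: the "remembered" strain lags
# the instantaneous value, mimicking the finite turbulence response time.
S_off, S_on = np.zeros((3, 3)), np.diag([1.0, -0.5, -0.5])
S_eff = history_weighted_strain([S_off] * 50 + [S_on] * 50, dt=0.01, tau=0.2)
print(S_eff[0, 0])                          # < 1.0, i.e. below the imposed value
```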
The same constraints that shaped the refinement of classical turbulence models, including realizability, invariance, dimensional consistency, near-wall asymptotics and memory effects, are now being repurposed as design principles for ML turbulence closures. Embedding these constraints into ML architectures prevents nonphysical predictions and enhances robustness, generalizability and transferability across diverse flow regimes. This demonstrates that, rather than being superseded by data-driven approaches, the foundational constraints of turbulence modeling remain indispensable, ensuring that modern physics-informed models preserve both empirical accuracy and physical fidelity.
4.3 Outlook
Taken together, the discussions above highlight that constraints serve as a unifying framework across both classical and modern turbulence model development. In traditional closures, constraints such as realizability, invariance, asymptotic behavior and dimensional consistency provided corrective pathways that transformed early eddy-viscosity models into more stable and physically consistent formulations. In contemporary ML and data-driven models, the very same principles now act as safeguards against overfitting, unphysical predictions and loss of generalizability, ensuring that data-rich approaches remain firmly grounded in turbulence physics.
This perspective underscores that constraints are not merely obstacles but essential design criteria that bridge theory, computation and application. By embedding these constraints in a systematic manner, turbulence models, whether empirical, physics-based or data-driven, achieve enhanced predictive reliability and maintain robustness across a wide spectrum of flow conditions. Looking ahead, the integration of constraints into hybrid frameworks that blend physics-informed learning with traditional closures represents a promising direction. Such models have the potential to combine the universality and stability afforded by constraints with the adaptability and accuracy enabled by modern data-driven methods. In this way, constraints remain the cornerstone of turbulence modeling, guiding its evolution from classical formulations to next-generation predictive tools.
5 Challenges in turbulence modeling
Turbulence modeling has undergone substantial advancement through a progression from classical empirical closures to physics-informed and data-driven approaches. These newer frameworks aim to overcome the limitations of traditional models by leveraging ML, embedding physical constraints and exploiting high-fidelity data such as DNS and LES. Despite these advances, turbulence continues to be a fundamentally nonlinear and multiscale phenomenon and major challenges persist. Many of these challenges can be traced to the foundational constraints of turbulence modeling such as realizability, invariance, dimensional consistency, memory effects and asymptotic limits, which have historically shaped the formulation and refinement of constitutive relations. While these constraints provide essential guardrails that ensure physical admissibility, numerical stability and theoretical consistency, they also impose restrictions that complicate the representation of turbulence across diverse flow conditions. As a result, efforts to embed these principles in constitutive relations inevitably reveal deeper difficulties: the nonlinear and multiscale nature of turbulence challenges the adequacy of simple closures, the scarcity of high-fidelity data limits the calibration and generalization of modern data-driven models, and the integration of physical constraints into ML frameworks raises questions of stability and interpretability. These interrelated issues define the central obstacles for both classical and emerging turbulence models, motivating the following examination of specific challenges that continue to shape model development.
5.1 Constitutive complexity and nonlinearity
A central difficulty in turbulence modeling lies in representing the constitutive relation between the Reynolds stress tensor and the mean rate-of-strain. Turbulence is inherently nonlinear, anisotropic and multiscale, making simple linear eddy-viscosity assumptions inadequate in flows with strong curvature, rotation or separation. Classical approaches attempted to correct this through nonlinear eddy-viscosity models and EARSM, which introduced higher-order tensorial dependencies while maintaining computational feasibility. However, these models are often difficult to calibrate and validate, and their nonlinear terms can lead to numerical stiffness. Capturing the true complexity of turbulence likely requires hybrid approaches that integrate higher-order physics-based representations with ML architectures designed to preserve physical constraints.
5.2 Data scarcity and generalization
Machine learning methods have introduced new possibilities for turbulence closure modeling, but they rely heavily on the availability of high-quality training data. DNS and well-resolved LES datasets provide rich information, yet they are computationally expensive and limited to relatively low Reynolds numbers or canonical configurations. As a result, data-driven models trained on such datasets often struggle to generalize to complex engineering flows, where operating conditions and geometries differ substantially from the training regime. Without explicit incorporation of realizability, invariance and asymptotic scaling, ML models risk overfitting, producing nonphysical stress predictions when extrapolated. Addressing this issue requires the integration of multi-fidelity datasets, data assimilation techniques and uncertainty quantification frameworks that provide a rigorous measure of model reliability beyond the training regime.
5.3 Integration of physics and learning
Perhaps the most pressing challenge is reconciling the flexibility of data-driven models with the rigor of physics-based constraints. Purely empirical neural networks act as “black boxes,” lacking transparency, interpretability and guaranteed consistency with conservation laws. PIML offers a promising pathway by embedding governing equations, invariant bases and realizability maps directly into learning architectures. For example, tensor-basis networks have been developed to enforce Galilean invariance, while eigenvalue-eigenvector parameterizations ensure realizability within the Lumley triangle or the barycentric map. Recent efforts also include physics-constrained deep learning frameworks, which employ eigenspace perturbations or invariant loss functions to enforce conservation and stability during training. The challenge remains to design architectures that balance flexibility with constraint fidelity while ensuring numerical robustness when coupled to CFD solvers.
5.4 Numerical stability and solver integration
Embedding ML closures into CFD solvers introduces another layer of complexity. The nonlinear response of neural networks can destabilize iterative solvers, particularly in regions of rapid flow distortion or near walls. Ensuring numerical stability requires careful treatment of source terms, implicit-explicit splitting and robust coupling strategies. This echoes lessons from traditional models where implicit treatment of destruction terms and explicit treatment of production terms improved solver robustness. Future ML-integrated closures must adopt similar stabilization strategies, potentially incorporating adaptive blending between baseline RANS predictions and ML corrections to maintain solver convergence and physical plausibility.
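A safeguard of this kind might look like the sketch below; both the blending law and its default limits are assumptions for illustration, not a published coupling scheme:

```python
import numpy as np

def blended_stress(tau_rans, delta_ml, beta_max=0.5, ratio_limit=1.0):
    """Limit an ML stress correction delta_ml relative to the baseline RANS
    stress tau_rans and blend it in with weight at most beta_max, so an
    out-of-distribution prediction cannot destabilize the coupled iteration."""
    scale = min(1.0, ratio_limit * np.linalg.norm(tau_rans)
                / (np.linalg.norm(delta_ml) + 1e-30))
    return tau_rans + beta_max * scale * delta_ml
```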
6 Conclusion and future directions
Turbulence modeling remains a cornerstone of computational fluid dynamics, enabling predictive simulations across a broad spectrum of engineering and scientific domains. However, its development is fundamentally constrained by an intricate web of physical, mathematical, numerical and empirical requirements that dictate the structure, stability and applicability of constitutive relations. Dimensional consistency, physical realizability, invariance, conservation laws, asymptotic limits and near-wall behavior are not merely technical considerations but essential constraints that ensure turbulence models produce reliable and physically meaningful predictions. At the same time, these constraints impose limitations on achievable accuracy and generality, forcing modelers to continually balance physical fidelity against computational feasibility.
This review has highlighted how these constraints shape both classical and modern turbulence modeling approaches. Traditional closures, such as the k-ε, k-ω and RSM, provided the first practical predictive tools but relied on simplifying assumptions that often failed in flows with strong anisotropy, separation or rotation. Over time, systematic refinements were introduced through the enforcement of realizability, invariance, near-wall asymptotics and memory effects, resulting in models of greater robustness and predictive power. Yet, even with these refinements, persistent challenges remain in reconciling constitutive complexity, nonlinearity and universality with tractable numerical implementations.
The emergence of data-driven and ML approaches has opened new opportunities for turbulence modeling but has also introduced novel challenges. Unlike classical models, which embed constraints directly in their constitutive forms, ML frameworks risk producing nonphysical predictions if left unconstrained. Embedding realizability, invariance, dimensional consistency and asymptotic scaling into learning architectures through the use of invariant inputs, tensor-basis expansions or constraint-based loss functions has therefore become a critical focus of current research. The integration of PIML offers a promising pathway, enabling the combination of the adaptability of data-driven models with the rigor and interpretability of physics-based approaches. Still, issues of generalization to unseen flows, interpretability of learned closures and numerical stability in solver integration remain unresolved.
Looking ahead, several research directions appear particularly promising. First, constraint-aware learning frameworks must be advanced to ensure that ML closures respect fundamental symmetries, conservation laws and realizability conditions. Second, uncertainty quantification should be integrated into both traditional and ML models to assess prediction confidence and guide extrapolation to new regimes. Third, multi-fidelity modeling strategies hold significant potential, leveraging DNS, LES and experimental data to improve lower-fidelity models while preserving scaling laws. Fourth, the development of interpretable neural architectures that encode tensorial structures and asymptotic limits will be critical for ensuring transparency and trust. Finally, physics-guided hybrid models that combine the empirical robustness of RANS closures, the fidelity of LES and the flexibility of ML represent a practical and scalable pathway for advancing turbulence modeling.
In conclusion, the future of turbulence modeling lies not in abandoning classical principles but in integrating them with modern computational tools. The constraints that once defined the limits of turbulence closures are now being reinterpreted as essential design criteria for next-generation models. By embedding these principles into data-driven frameworks and hybrid approaches, researchers can develop models that are not only more accurate but also physically consistent, numerically stable and broadly applicable across the diverse flow regimes encountered in engineering and science.
- Research ethics: Not applicable.
- Informed consent: Not applicable.
- Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.
- Use of Large Language Models, AI and Machine Learning Tools: ChatGPT was used to improve language.
- Conflict of interest: The author states no conflict of interest.
- Research funding: None declared.
- Data availability: Not applicable.
References
[1] O. Reynolds, “On the dynamical theory of incompressible viscous fluids and the determination of the criterion,” Philos. Trans. R. Soc. London, Ser. A, vol. 186, pp. 123–164, 1895. https://doi.org/10.1098/rsta.1895.0004.
[2] L. Prandtl, “Bericht über Untersuchungen zur Ausgebildeten Turbulenz,” Z. Angew. Math. Mech., vol. 5, no. 2, pp. 136–139, 1925. https://doi.org/10.1002/zamm.19250050212.
[3] A. N. Kolmogorov, “The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers,” Proc. R. Soc. A, vol. 434, no. 1890, pp. 9–13, 1991. Translated from Doklady Akademii Nauk SSSR, vol. 30, pp. 299–303, 1941. https://doi.org/10.1098/rspa.1991.0075.
[4] B. E. Launder and D. B. Spalding, “The numerical computation of turbulent flows,” Comput. Methods Appl. Mech. Eng., vol. 3, no. 2, pp. 269–289, 1974. https://doi.org/10.1016/0045-7825(74)90029-2.
[5] P. A. Durbin, “Near-wall turbulence closure modeling without ‘damping functions’,” Theor. Comput. Fluid Dyn., vol. 3, pp. 1–13, 1991. https://doi.org/10.1007/BF00271513.
[6] P. A. Durbin, “Reynolds stress model for near-wall turbulence,” J. Fluid Mech., vol. 249, pp. 465–498, 1993. https://doi.org/10.1017/S0022112093001259.
[7] J. Boussinesq, “Essai sur la Théorie des Eaux Courantes,” Mém. Prés. Div. Sav. Acad. Sci., vol. 23, no. 1, pp. 1–680, 1877.
[8] T. J. Craft, B. E. Launder, and K. Suga, “Development and application of a cubic eddy-viscosity model of turbulence,” Int. J. Heat Fluid Flow, vol. 17, no. 2, pp. 108–115, 1996. https://doi.org/10.1016/0142-727X(95)00079-6.
[9] S. Wallin and A. V. Johansson, “An explicit algebraic Reynolds stress model for incompressible and compressible turbulent flows,” J. Fluid Mech., vol. 403, pp. 89–132, 2000. https://doi.org/10.1017/S0022112099007004.
[10] J. L. Lumley, “Toward a turbulent constitutive relation,” J. Fluid Mech., vol. 41, no. 2, pp. 413–434, 1970. https://doi.org/10.1017/S0022112070000678.
[11] S. B. Pope, “A more general effective-viscosity hypothesis,” J. Fluid Mech., vol. 72, no. 2, pp. 331–340, 1975. https://doi.org/10.1017/S0022112075003382.
[12] T. B. Gatski and C. G. Speziale, “On explicit algebraic stress models for complex turbulent flows,” J. Fluid Mech., vol. 254, pp. 59–78, 1993. https://doi.org/10.1017/S0022112093002034.
[13] J. Weatheritt and R. D. Sandberg, “A novel evolutionary algorithm applied to algebraic modifications of the RANS stress-strain relationship,” J. Comput. Phys., vol. 325, pp. 22–37, 2016. https://doi.org/10.1016/j.jcp.2016.08.015.
[14] J. Ling and J. Templeton, “Evaluation of machine learning algorithms for prediction of regions of high Reynolds-averaged Navier–Stokes uncertainty,” Phys. Fluids, vol. 27, no. 8, p. 085103, 2015. https://doi.org/10.1063/1.4927765.
[15] J. L. Wu, J. X. Wang, and H. Xiao, “A Bayesian calibration-prediction method for reducing model-form uncertainties with application in RANS simulations,” Flow Turbul. Combust., vol. 97, pp. 761–786, 2016. https://doi.org/10.1007/s10494-016-9725-6.
[16] E. J. Parish and K. Duraisamy, “A paradigm for data-driven predictive modeling using field inversion and machine learning,” J. Comput. Phys., vol. 305, pp. 758–774, 2016. https://doi.org/10.1016/j.jcp.2015.11.012.
[17] K. Duraisamy, G. Iaccarino, and H. Xiao, “Turbulence modeling in the age of data,” Annu. Rev. Fluid Mech., vol. 51, pp. 357–377, 2019. https://doi.org/10.1146/annurev-fluid-010518-040547.
[18] J. Ling, A. Kurzawski, and J. Templeton, “Reynolds-averaged turbulence modeling using deep neural networks with embedded invariance,” J. Fluid Mech., vol. 807, pp. 155–166, 2016. https://doi.org/10.1017/jfm.2016.615.
[19] J. X. Wang, J. L. Wu, and H. Xiao, “Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data,” Phys. Rev. Fluids, vol. 2, p. 034603, 2017. https://doi.org/10.1103/PhysRevFluids.2.034603.
[20] J. L. Wu, H. Xiao, and E. Paterson, “Physics-informed machine learning approach for augmenting turbulence models: a comprehensive framework,” Phys. Rev. Fluids, vol. 3, p. 074602, 2018. https://doi.org/10.1103/PhysRevFluids.3.074602.
[21] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” J. Comput. Phys., vol. 378, pp. 686–707, 2019. https://doi.org/10.1016/j.jcp.2018.10.045.
[22] K. Duraisamy, “Perspectives on machine learning-augmented Reynolds-averaged and large eddy simulation models of turbulence,” Phys. Rev. Fluids, vol. 6, p. 050504, 2021. https://doi.org/10.1103/PhysRevFluids.6.050504.
[23] X.-L. Zhang, H. Xiao, X. Luo, and G. He, “Ensemble Kalman method for learning turbulence models from indirect observation data,” J. Fluid Mech., vol. 949, p. A26, 2022. https://doi.org/10.1017/jfm.2022.744.
[24] J. Han, X. H. Zhou, and H. Xiao, “An equivariant neural operator for developing nonlocal tensorial constitutive models,” J. Comput. Phys., vol. 488, p. 112243, 2023. https://doi.org/10.1016/j.jcp.2023.112243.
[25] P. A. Durbin, “Algebraic tensorial representations,” in Data Driven Analysis and Modeling of Turbulent Flows, K. Duraisamy, Ed., London, Academic Press, 2025, pp. 241–264. https://doi.org/10.1016/B978-0-32-395043-5.00012-7.
[26] J. L. Lumley, “Computational modeling of turbulent flows,” Adv. Appl. Mech., vol. 18, pp. 123–176, 1979. https://doi.org/10.1016/S0065-2156(08)70266-7.
[27] S. Banerjee, R. Krahl, F. Durst, and Ch. Zenger, “Presentation of anisotropy properties of turbulence: invariants versus eigenvalue approaches,” J. Turbul., vol. 8, p. N32, 2007. https://doi.org/10.1080/14685240701506896.
[28] T.-H. Shih, W. W. Liou, A. Shabbir, Z. Yang, and J. Zhu, “A new k–ε eddy viscosity model for high Reynolds number turbulent flows,” Comput. Fluids, vol. 24, no. 3, pp. 227–238, 1995. https://doi.org/10.1016/0045-7930(94)00032-T.
[29] J. O. Hinze, Turbulence, New York, McGraw–Hill, 1975.
[30] C. G. Speziale, “Analytical methods for the development of Reynolds stress closures in turbulence,” Annu. Rev. Fluid Mech., vol. 23, pp. 107–157, 1991. https://doi.org/10.1146/annurev.fl.23.010191.000543.
[31] S. B. Pope, Turbulent Flows, Cambridge, Cambridge University Press, 2000. https://doi.org/10.1017/CBO9780511840531.
[32] C. Truesdell and W. Noll, The Non-linear Field Theories of Mechanics, Berlin, Springer, 2004. https://doi.org/10.1007/978-3-662-10388-3.
[33] M. E. Gurtin, E. Fried, and L. Anand, The Mechanics and Thermodynamics of Continua, Cambridge, Cambridge University Press, 2010. https://doi.org/10.1017/CBO9780511762956.
[34] C. G. Speziale, “On nonlinear k–l and k–ε models of turbulence,” J. Fluid Mech., vol. 178, pp. 459–475, 1987. https://doi.org/10.1017/S0022112087001319.
[35] D. C. Wilcox, Turbulence Modeling for CFD, DCW Industries, 1993.
[36] P. R. Spalart and C. G. Speziale, “A note on constraints in turbulence modelling,” J. Fluid Mech., vol. 391, pp. 373–376, 1999. https://doi.org/10.1017/S0022112099005388.
[37] L. Wang, “Frame-indifferent and positive-definite Reynolds stress–strain relation,” J. Fluid Mech., vol. 352, pp. 341–358, 1997. https://doi.org/10.1017/S0022112097007532.
[38] H. Tennekes and J. L. Lumley, A First Course in Turbulence, Cambridge, MA, MIT Press, 1972. https://doi.org/10.7551/mitpress/3014.001.0001.
[39] A. A. Townsend, The Structure of Turbulent Shear Flow, Cambridge, Cambridge University Press, 1976.
[40] J. O. Hinze, Memory Effects in Turbulence, NASA Technical Memorandum 75516, 1979.
[41] G. K. Batchelor and I. Proudman, “The effect of rapid distortion of a fluid in turbulent motion,” Q. J. Mech. Appl. Math., vol. 7, no. 1, pp. 83–103, 1954. https://doi.org/10.1093/qjmam/7.1.83.
[42] J. C. R. Hunt, “A theory of turbulent flow round two-dimensional bluff bodies,” J. Fluid Mech., vol. 61, no. 4, pp. 625–706, 1973. https://doi.org/10.1017/S0022112073000893.
[43] C. Cambon and L. Jacquin, “Spectral approach to non-isotropic turbulence subjected to rotation,” J. Fluid Mech., vol. 202, pp. 295–317, 1989. https://doi.org/10.1017/S0022112089001199.
[44] C. G. Speziale, S. Sarkar, and T. B. Gatski, “Modelling the pressure–strain correlation of turbulence: an invariant dynamical systems approach,” J. Fluid Mech., vol. 227, pp. 245–272, 1991. https://doi.org/10.1017/S0022112091000101.
[45] C. G. Speziale and S. Sarkar, “Second-order closure models for supersonic turbulent flows,” NASA Contractor Report 187508 (ICASE Report No. 91-9), 1991. https://doi.org/10.2514/6.1991-217.
[46] V. C. Patel, W. Rodi, and G. Scheuerer, “Turbulence models for near-wall and low Reynolds number flows: a review,” AIAA J., vol. 23, no. 9, pp. 1308–1319, 1985. https://doi.org/10.2514/3.9086.
[47] F. R. Menter, “Two-equation eddy-viscosity turbulence models for engineering applications,” AIAA J., vol. 32, no. 8, pp. 1598–1605, 1994. https://doi.org/10.2514/3.12149.
[48] D. B. Spalding, “A single formula for the ‘law of the wall’,” J. Appl. Mech., vol. 28, no. 3, pp. 455–458, 1961. https://doi.org/10.1115/1.3641728.
[49] W. P. Jones and B. E. Launder, “The prediction of laminarization with a two-equation model of turbulence,” Int. J. Heat Mass Transfer, vol. 15, no. 2, pp. 301–314, 1972. https://doi.org/10.1016/0017-9310(72)90076-2.
[50] H. Xiao, J. L. Wu, J. X. Wang, R. Sun, and C. J. Roy, “Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: a data-driven, physics-informed Bayesian approach,” J. Comput. Phys., vol. 324, pp. 115–136, 2016. https://doi.org/10.1016/j.jcp.2016.07.038.