
Computational uncertainty quantification for random non-autonomous second order linear differential equations via adapted gPC: a comparative case study with random Fröbenius method and Monte Carlo simulation

  • Julia Calatayud, Juan Carlos Cortés and Marc Jornet
Published/Copyright: December 31, 2018

Abstract

This paper presents a methodology to quantify computationally the uncertainty in a class of differential equations often met in Mathematical Physics, namely random non-autonomous second-order linear differential equations, via adaptive generalized Polynomial Chaos (gPC) and the stochastic Galerkin projection technique. Unlike the random Fröbenius method, which can only deal with particular random linear differential equations and needs the random inputs (coefficients and forcing term) to be analytic, adaptive gPC allows approximating the expectation and covariance of the solution stochastic process to general random second-order linear differential equations. The random inputs are allowed to depend functionally on random variables that may be independent or dependent, and either absolutely continuous or discrete with infinitely many point masses. These hypotheses cover a wide variety of particular differential equations, in which the random input coefficients may be expressed via a Karhunen-Loève expansion and which might not be solvable via the random Fröbenius method.

MSC 2010: 34F05; 60H35; 93E03

1 Introduction and Preliminaries

Many laws of Physics are formulated via differential equations. In practice, the input parameters (coefficients, forcing/source term and initial/boundary conditions) of these equations are set from experimental data, and thus carry the uncertainty of measurement errors. Furthermore, input parameters are often not exactly known because of insufficient information, limited understanding of some underlying phenomena, inherent uncertainty, etc. All these facts motivate treating the input parameters of classical differential equations as random variables or stochastic processes rather than deterministic constants or functions, respectively. This approach leads to random differential equations (RDEs) [1, 2]. The random behavior of the solution stochastic process can be understood by obtaining its main statistical features, such as the expectation, variance and covariance.

A powerful tool to deal with RDEs is generalized Polynomial Chaos (gPC) [3, 4]. Let (Ω, 𝓕, ℙ) be a complete probability space. We will work in the Hilbert space (L²(Ω), ⟨⋅, ⋅⟩) consisting of second-order random variables, i.e., random variables with finite variance, where the inner product is defined by ⟨ζ1, ζ2⟩ = 𝔼[ζ1ζ2], with 𝔼[⋅] the expectation operator. In its classical formulation, gPC consists in writing a random vector ζ : Ω → ℝⁿ as a limit of multivariate polynomials evaluated at a random vector Z : Ω → ℝⁿ: $\zeta = \lim_{P\to\infty}\sum_{i=0}^{P}\hat{\zeta}_i\phi_i(Z)$ in L²(Ω). Here $\{\phi_i(Z)\}_{i=0}^{\infty}$ is a sequence of orthogonal polynomials in Z: $\mathbb{E}[\phi_i(Z)\phi_j(Z)] = \int_{\mathbb{R}^n}\phi_i(z)\phi_j(z)\,\mathrm{d}\mathbb{P}_Z(z) = \gamma_i\delta_{ij}$, where ℙZ = ℙ ∘ Z⁻¹ is the law of Z and δij is the Kronecker delta symbol. A stochastic Galerkin method can be applied to approximate the solution to RDEs [3, Ch. 6]. For some applications of this theory, see for example [5, 6].

Given the random vector Z, the sequence $\{\phi_i(Z)\}_{i=0}^{\infty}$ of orthogonal polynomials is taken from the Askey-Wiener scheme of hypergeometric orthogonal polynomials, by taking into account the density function fZ of Z (if Z is absolutely continuous) or the discrete masses of Z (if Z is discrete), [3, 4].
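
To make the orthogonality relation above concrete, the following Mathematica® sketch checks it for the classical Gaussian/Hermite pair of the Askey-Wiener scheme; the helper he (the probabilists' Hermite polynomials) and the degree range are our own illustrative choices, not taken from [3, 4]:

    (* probabilists' Hermite polynomials He_n, orthogonal for Z ~ Normal(0, 1) *)
    he[n_, z_] := 2^(-n/2) HermiteH[n, z/Sqrt[2]];
    (* Gram matrix E[He_i(Z) He_j(Z)]: diagonal gamma_i = i!, zero off the diagonal *)
    Table[Expectation[he[i, z] he[j, z], z \[Distributed] NormalDistribution[0, 1]],
      {i, 0, 3}, {j, 0, 3}]
    (* {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 2, 0}, {0, 0, 0, 6}} *)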

In the recent articles [7, 8, 9], an adaptive gPC method has been developed to approximate the solutions of RDEs. Instead of taking the orthogonal polynomials from the Askey-Wiener scheme, the authors construct them directly from the random inputs that are involved in the corresponding RDE’s formulation.

More explicitly, in [7], the RDE $F(t, \mathbf{y}, \dot{\mathbf{y}}) = 0$, $\mathbf{y}(t_0) = \mathbf{y}_0$, is considered, where $F : \mathbb{R}^{2q+1} \to \mathbb{R}^{q}$ and $\mathbf{y}(t) = (y_1(t), \ldots, y_q(t))^{\top}$, ⊤ denoting transposition. The set {ζ1, …, ζs} represents independent and absolutely continuous random input parameters in the RDE.

For each 1 ≤ i ≤ s, consider the canonical basis of polynomials in ζi of degree at most p: $C_i^p = \{1, \zeta_i, (\zeta_i)^2, \ldots, (\zeta_i)^p\}$. One defines the following inner product, with weight function given by the density of ζi: $\langle g(\zeta_i), h(\zeta_i)\rangle_{\zeta_i} = \int_{\mathbb{R}} g(\zeta_i)h(\zeta_i)f_{\zeta_i}(\zeta_i)\,\mathrm{d}\zeta_i$. Using a Gram-Schmidt orthonormalization procedure, one obtains a sequence of orthonormal polynomials in ζi with respect to $\langle\cdot,\cdot\rangle_{\zeta_i}$: $\Xi_i^p = \{\phi_0^i(\zeta_i), \ldots, \phi_p^i(\zeta_i)\}$. The authors then build a sequence of orthonormal multivariate polynomials in ζ = (ζ1, …, ζs) of degree at most p with respect to the inner product $\langle g(\zeta), h(\zeta)\rangle_{\zeta} = \int_{\mathbb{R}^s} g(\zeta)h(\zeta)f_{\zeta}(\zeta)\,\mathrm{d}\zeta$. To do so, they form the simple tensor products $\phi_j(\zeta) = \phi_{p_1}^1(\zeta_1)\cdots\phi_{p_s}^s(\zeta_s)$, $1 \le j \le P$, where j is associated in a bijective manner with the multi-index (p1, …, ps) in such a way that 1 corresponds to (0, …, 0) (for example, via a graded lexicographic ordering [3, p. 66]) and P = (p + s)!/(p! s!). By the independence of ζ1, …, ζs, the resulting sequence $\Xi = \{\phi_j(\zeta)\}_{j=1}^{P}$ is orthonormal with respect to ⟨⋅, ⋅⟩ζ.
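
The bijection between the index j and the multi-indices (p1, …, ps) can be made concrete in a few lines of Mathematica®; this is a minimal sketch with illustrative values p = 3 and s = 2 (the names are ours, not from [7]):

    p = 3; s = 2;
    (* multi-indices of total degree <= p, in graded lexicographic order *)
    mIdx = SortBy[Select[Tuples[Range[0, p], s], Total[#] <= p &], {Total, Identity}];
    (* the first index is (0, ..., 0) and the count matches P = (p + s)!/(p! s!) *)
    {First[mIdx], Length[mIdx] == (p + s)!/(p! s!)}  (* {{0, 0}, True}; here P = 10 *)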

Once the basis is constructed, one looks for an approximate solution $\mathbf{y}(t) \approx \sum_{j=1}^{P} \mathbf{y}_j(t)\phi_j(\zeta)$. Then $F\big(t, \sum_{j=1}^{P} \mathbf{y}_j(t)\phi_j(\zeta), \sum_{j=1}^{P} \dot{\mathbf{y}}_j(t)\phi_j(\zeta)\big) = 0$. To obtain the deterministic coefficients $\mathbf{y}_j(t)$, one imposes $\big\langle F\big(t, \sum_{j=1}^{P} \mathbf{y}_j(t)\phi_j(\zeta), \sum_{j=1}^{P} \dot{\mathbf{y}}_j(t)\phi_j(\zeta)\big), \phi_k(\zeta)\big\rangle_{\zeta} = 0$, k = 1, …, P. In this manner, one arrives at a deterministic system of P differential equations, which may be solved by standard numerical techniques. Once $\mathbf{y}_1(t), \ldots, \mathbf{y}_P(t)$ have been computed, the expectation of the actual solution $\mathbf{y}(t)$ is approximated by $\mathbf{y}_1(t)$ and its covariance matrix by $\sum_{i=2}^{P} \mathbf{y}_i(t)\mathbf{y}_i(t)^{\top}$.

In [8], the authors use the Random Variable Transformation technique [10, Th. 1] in case that some random input parameters appearing in the RDE come from mappings of absolutely continuous random variables, whose probability density function is known.

In [9], the authors focus on the case that the random inputs ζ1, …, ζs are not independent. They consider the canonical bases $C_i^p = \{1, \zeta_i, (\zeta_i)^2, \ldots, (\zeta_i)^p\}$, for 1 ≤ i ≤ s, and construct a sequence of multivariate polynomials in ζ via a simple tensor product: $\phi_j(\zeta) = \zeta_1^{p_1}\cdots\zeta_s^{p_s}$, where 1 ≤ j ≤ P corresponds to the multi-index (p1, …, ps) and P = (p + s)!/(p! s!). Notice that this new sequence $\{\phi_j(\zeta)\}_{j=1}^{P}$ is not orthonormal with respect to ⟨⋅, ⋅⟩ζ. However, one proceeds with the RDE as in [7] and, in practice, one obtains good approximations of the expectation and covariance of y(t).

Based on ample numerical evidence, the gPC-based methods described in [3, 4, 7, 8, 9] converge in the mean square sense at a spectral rate. Some theoretical results that justify this assertion are presented in [3, pp. 33–35, p. 73], [11, Th. 2.2], [12, 13, 14, 15].

In this paper we deal with an important class of differential equations with uncertainty often met in Mathematical Physics, namely general random non-autonomous second-order linear differential equations:

$$\ddot{X}(t) + A(t)\dot{X}(t) + B(t)X(t) = C(t),\quad t\in\mathbb{R},\qquad X(t_0) = Y_0,\quad \dot{X}(t_0) = Y_1.\tag{1}$$

Our goal is to obtain approximations of the solution stochastic process X(t) as well as of its main statistical features, by taking advantage of the adaptive gPC techniques [7, 9]. Here, A(t), B(t) and C(t) are stochastic processes and Y0 and Y1 are random variables in an underlying complete probability space (Ω, 𝓕, ℙ). The term X(t) is the solution stochastic process to the random IVP (1) in some probabilistic sense. We will detail conditions for existence and uniqueness of solution in the following section.

Particular cases of (1) (without the random forcing term C(t)) have been treated in the extant literature by using the random Fröbenius method. Specifically, Airy, Hermite, Legendre, Laguerre and Bessel differential equations have been randomized and rigorously studied in [16, 17, 18, 19, 20, 21], respectively. The study includes the computation of the expectation and the variance of the solution stochastic process.

In our recent contributions [22, 23], we have studied the general problem (1) when A(t), B(t) and C(t) are analytic stochastic processes in the mean square sense. As proved there, the random power series solution converges in the mean square sense when A(t) and B(t) are analytic processes in the L∞(Ω) sense, C(t) is a mean square convergent random power series, and the initial conditions Y0 and Y1 belong to L²(Ω). Under those assumptions, the expectation and variance of the solution process X(t) can be rigorously approximated.

In [24] the authors study RDEs by taking advantage of homotopy analysis and they provide a complete set of illustrative examples dealing with random second-order linear differential equations.

In this paper, we go one step further: we perform a computational analysis based upon adaptive gPC, showing its capability to deal with the general random IVP (1), which comprises the Airy, Hermite, Legendre, Laguerre and Bessel differential equations, or any other formulation of (1) based on analytic data processes, as particular cases. We thus address the future line of research brought up in [23, Section 5].

The paper is organized as follows. Section 2 describes the application of adaptive gPC to solve the random IVP (1) and the computation of the expectation and covariance of X(t). The study is split into two cases depending on the probabilistic dependence of the random inputs. In Section 3, we show the algorithms corresponding to the theory developed in Section 2. Section 4 presents particular examples of (1) in which adaptive gPC, the Fröbenius method and Monte Carlo simulation are carried out to obtain approximations of the expectation, variance and covariance of the solution stochastic process. The examples show that adaptive gPC provides the same results as the Fröbenius method with small basis order p and, moreover, that in cases where the Fröbenius method is not applicable, adaptive gPC may still succeed. Finally, in Section 5, conclusions are drawn.

2 Method

Consider the random IVP (1), where

$$A(t) = a_0(t) + \sum_{i=1}^{d_A} a_i(t)\gamma_i,\qquad B(t) = b_0(t) + \sum_{i=1}^{d_B} b_i(t)\eta_i,\qquad C(t) = c_0(t) + \sum_{i=1}^{d_C} c_i(t)\xi_i,\tag{2}$$

where γ1, …, γdA, η1, …, ηdB and ξ1, …, ξdC are random variables (not necessarily independent) and a0(t), …, adA(t), b0(t), …, bdB(t) and c0(t), …, cdC(t) are real functions. Representation (2) for the input stochastic processes includes truncated random power series [2, p. 99] and Karhunen-Loève expansions [3, Ch. 4], [25, Ch. 5]. This is an improvement with respect to the random Fröbenius method used in [16, 17, 18, 19, 20, 21, 22, 23], in which A(t), B(t) and C(t) are only expressed as random power series.

As we are interested in constructive computational aspects of uncertainty quantification, we will assume that there exists a unique solution stochastic process X(t) to IVP (1) in some probabilistic sense, for instance, sample path [1, SP problem] [2, Appendix A], or Lq(Ω) sense [2], in such a way that 𝔼[X(t)2] < ∞ for each t. We detail the conditions under which there exists a unique solution X(t) to (1) in the following propositions. The proofs are simple consequences of the references cited therein. Proposition 2.1, which is concerned with sample path solutions, is a direct consequence of the deterministic theory on ordinary differential equations (Carathéodory theory on the existence of absolutely continuous solutions [26, pp. 28–30]). Proposition 2.2 takes advantage of a natural generalization to Lq(Ω) random calculus of the classical Picard theorem for deterministic ordinary differential equations [2, Th. 5.1.2].

Proposition 2.1

(Sample path solution) [26, pp. 28–30]. If A(t), B(t) and C(t) have real integrable sample paths, then there exists a unique solution stochastic process X(t) to (1) with C¹ sample paths whose derivative Ẋ(t) has absolutely continuous sample paths (i.e., X(t) is a classical solution that belongs to the Sobolev space W^{2,1}). Moreover, if A(t), B(t) and C(t) have continuous sample paths, then X(t) has C² sample paths.

Proposition 2.2

(Lq(Ω) solution) [2, Ch. 5], [23]. If A(t) and B(t) are continuous stochastic processes in the L∞(Ω) sense, and the source term C(t) is continuous in the Lq(Ω) sense, then there exists a unique solution X(t) to (1) in the Lq(Ω) sense.

Our goal is to approximate the solution stochastic process X(t) to the random IVP (1) by using adaptive gPC, which is described in [7, 9] and has been reviewed in Section 1. In the case that the random inputs γ1, …, γdA, η1, …, ηdB, ξ1, …, ξdC, Y0 and Y1 are independent, we will use the method from [7], whereas in the case that they are not independent, we will use the method from [9]. In [7, 9], the random inputs are assumed to be absolutely continuous, so that the weights in the inner products are given by density functions. Notice, however, that the random inputs may instead be given a discrete distribution with infinitely many point masses. Indeed, the corresponding inner product becomes an integral with respect to a discrete law, which is a series with weights given by the probabilities of the point masses. Moreover, since the support has infinite cardinality, the corresponding canonical bases of polynomials span spaces of arbitrarily large dimension, so the degree p can grow up to infinity.
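
As an illustration of this point (and anticipating the Poisson input of Example 4.2), the following hedged Mathematica® sketch orthonormalizes the canonical basis against a Poisson(3) law; the inner product is a series over the point masses, which Expectation sums symbolically, and the resulting polynomials are of Charlier type:

    (* Gram-Schmidt against a discrete law with infinitely many point masses *)
    Expand@Orthogonalize[{1, z, z^2},
      Expectation[#1 #2, z \[Distributed] PoissonDistribution[3]] &]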

For ease of notation and to match the notation used in Section 1, we denote the random inputs γ1, …, γdA, η1, …, ηdB, ξ1, …, ξdC, Y0 and Y1 by ζ1, …, ζs, where s = dA + dB + dC + 2. The random variables ζ1, …, ζs are not necessarily independent, and they are absolutely continuous or discrete random variables with infinitely many point masses. We will denote ζ = (ζ1, …, ζs). The space of polynomials in ζi of degree at most p will be denoted by 𝓟p[ζi]. The space of multivariate polynomials in ζ of degree at most p will be written as $\mathbb{P}_p^s[\zeta]$.

In the next development, we distinguish two cases depending on whether the random inputs ζ1, …, ζs are independent or not.

2.1 The random inputs are independent

In the notation from [7] and Section 1, let $C_i^p = \{1, \zeta_i, \ldots, \zeta_i^p\}$ be the canonical basis of 𝓟p[ζi], for i = 1, …, s. Let $\Xi_i^p = \{\phi_0^i(\zeta_i), \ldots, \phi_p^i(\zeta_i)\}$ be the orthonormalization of $C_i^p$ with respect to the inner product defined by the law ℙζi, via a Gram-Schmidt procedure. Let Ξ = {φ1(ζ), …, φP(ζ)} be the orthonormal basis of $\mathbb{P}_p^s[\zeta]$ with respect to the law $\mathbb{P}_\zeta = \mathbb{P}_{\zeta_1}\times\cdots\times\mathbb{P}_{\zeta_s}$, where P = (p + s)!/(p! s!).

We approximate the solution stochastic process $X(t) \approx \sum_{i=1}^{P} x_i(t)\phi_i(\zeta)$ by imposing the right-hand side to be a solution to the random IVP (1):

$$\sum_{i=1}^{P}\ddot{x}_i(t)\phi_i(\zeta) + \left(a_0(t)+\sum_{i=1}^{d_A}a_i(t)\gamma_i\right)\sum_{i=1}^{P}\dot{x}_i(t)\phi_i(\zeta) + \left(b_0(t)+\sum_{i=1}^{d_B}b_i(t)\eta_i\right)\sum_{i=1}^{P}x_i(t)\phi_i(\zeta) = c_0(t)+\sum_{i=1}^{d_C}c_i(t)\xi_i.\tag{3}$$

We apply the stochastic Galerkin projection technique. By multiplying by ϕk(ζ), k = 1, …, P, applying expectations, using the orthonormality of Ξ and the fact that ϕ1 = 1, we obtain:

$$\ddot{x}_k(t) + a_0(t)\dot{x}_k(t) + \sum_{i=1}^{d_A}\sum_{j=1}^{P}a_i(t)\dot{x}_j(t)\,\mathbb{E}[\gamma_i\phi_j(\zeta)\phi_k(\zeta)] + b_0(t)x_k(t) + \sum_{i=1}^{d_B}\sum_{j=1}^{P}b_i(t)x_j(t)\,\mathbb{E}[\eta_i\phi_j(\zeta)\phi_k(\zeta)] = c_0(t)\delta_{1k} + \sum_{i=1}^{d_C}c_i(t)\,\mathbb{E}[\xi_i\phi_k(\zeta)].\tag{4}$$

Let us put this equation in matrix form. Consider the P × P matrices M and N defined by

$$M_{kj}(t) = \sum_{i=1}^{d_A}a_i(t)\,\mathbb{E}[\gamma_i\phi_j(\zeta)\phi_k(\zeta)],\qquad N_{kj}(t) = \sum_{i=1}^{d_B}b_i(t)\,\mathbb{E}[\eta_i\phi_j(\zeta)\phi_k(\zeta)],\tag{5}$$

for k, j = 1, …, P. Consider the vector q of length P with

qk=i=1dCci(t)E[ξiϕk(ζ)],(6)

for k = 1 …, P. We rewrite (4) as a deterministic system of P differential equations:

$$\ddot{\mathbf{x}}(t) + \big(M(t)+a_0(t)I_P\big)\dot{\mathbf{x}}(t) + \big(N(t)+b_0(t)I_P\big)\mathbf{x}(t) = \mathbf{q}(t) + c_0(t)\mathbf{e}_1,\tag{7}$$

where $\mathbf{x}(t) = (x_1(t), \ldots, x_P(t))^{\top}$, IP is the P × P identity matrix and $\mathbf{e}_1 = (1, 0, \ldots, 0)^{\top}$ is the first vector of the canonical basis. It remains to find the initial conditions for (7). From $\sum_{i=1}^{P}x_i(t_0)\phi_i(\zeta) = Y_0$ and $\sum_{i=1}^{P}\dot{x}_i(t_0)\phi_i(\zeta) = Y_1$, we obtain $x_k(t_0) = \mathbb{E}[Y_0\phi_k(\zeta)]$ and $\dot{x}_k(t_0) = \mathbb{E}[Y_1\phi_k(\zeta)]$, for k = 1, …, P. Thus, the initial conditions become $\mathbf{x}(t_0) = \mathbf{y}$ and $\dot{\mathbf{x}}(t_0) = \mathbf{y}'$, where $\mathbf{y} = (y_1, \ldots, y_P)^{\top}$ and $\mathbf{y}' = (y'_1, \ldots, y'_P)^{\top}$,

$$y_k = \mathbb{E}[Y_0\phi_k(\zeta)],\qquad y'_k = \mathbb{E}[Y_1\phi_k(\zeta)],\tag{8}$$

for k = 1, …, P.

The system of deterministic differential equations can be solved by using standard numerical techniques. Once we have computed the solution (x1(t), …, xP(t)), we have obtained the approximation $\sum_{i=1}^{P}x_i(t)\phi_i(\zeta)$ for the solution stochastic process X(t). Moreover, one can approximate the expectation and covariance of X(t):

$$\mathbb{E}[X(t)] \approx x_1(t),\qquad \mathrm{Cov}[X(t_1),X(t_2)] \approx \sum_{i=2}^{P}x_i(t_1)x_i(t_2).\tag{9}$$

2.2 The random inputs may not be independent

In the notation from [9] and Section 1, let $C_i^p = \{1, \zeta_i, \ldots, \zeta_i^p\}$ be the canonical basis of 𝓟p[ζi], for i = 1, …, s. We construct the basis Ξ = {φ1, …, φP} of $\mathbb{P}_p^s[\zeta]$ as in [9]. This basis is not orthonormal with respect to the law ℙζ.

We approximate the solution stochastic process $X(t) \approx \sum_{i=1}^{P} x_i(t)\phi_i(\zeta)$ by imposing the right-hand side to be a solution to the random IVP (1). One obtains (3). By multiplying by ϕk(ζ) and applying expectations, k = 1, …, P, we derive that

$$\sum_{i=1}^{P}\ddot{x}_i(t)\,\mathbb{E}[\phi_i(\zeta)\phi_k(\zeta)] + a_0(t)\sum_{i=1}^{P}\dot{x}_i(t)\,\mathbb{E}[\phi_i(\zeta)\phi_k(\zeta)] + \sum_{i=1}^{d_A}\sum_{j=1}^{P}a_i(t)\dot{x}_j(t)\,\mathbb{E}[\gamma_i\phi_j(\zeta)\phi_k(\zeta)] + b_0(t)\sum_{i=1}^{P}x_i(t)\,\mathbb{E}[\phi_i(\zeta)\phi_k(\zeta)] + \sum_{i=1}^{d_B}\sum_{j=1}^{P}b_i(t)x_j(t)\,\mathbb{E}[\eta_i\phi_j(\zeta)\phi_k(\zeta)] = c_0(t)\,\mathbb{E}[\phi_k(\zeta)] + \sum_{i=1}^{d_C}c_i(t)\,\mathbb{E}[\xi_i\phi_k(\zeta)].\tag{10}$$

Define the P × P matrix R and the vector h of length P as

$$R_{ik} = \mathbb{E}[\phi_i(\zeta)\phi_k(\zeta)],\qquad h_k = \mathbb{E}[\phi_k(\zeta)],\tag{11}$$

for i, k = 1, …, P. Expression (10) can be written in matrix form as a deterministic system of P differential equations:

$$R\ddot{\mathbf{x}}(t) + \big(M(t)+a_0(t)R\big)\dot{\mathbf{x}}(t) + \big(N(t)+b_0(t)R\big)\mathbf{x}(t) = \mathbf{q}(t) + c_0(t)\mathbf{h}.\tag{12}$$

The initial conditions are given by $R\mathbf{x}(t_0) = \mathbf{y}$ and $R\dot{\mathbf{x}}(t_0) = \mathbf{y}'$.

This system of deterministic differential equations is solvable by standard numerical techniques. Once we have computed the approximation $\sum_{i=1}^{P}x_i(t)\phi_i(\zeta)$ of the solution stochastic process X(t), the expectation and covariance of X(t) can be approximated as follows:

$$\mathbb{E}[X(t)] \approx \sum_{i=1}^{P}x_i(t)\,\mathbb{E}[\phi_i(\zeta)],\qquad \mathrm{Cov}[X(t_1),X(t_2)] \approx \sum_{i=1}^{P}\sum_{j=1}^{P}x_i(t_1)x_j(t_2)\,\mathrm{Cov}[\phi_i(\zeta),\phi_j(\zeta)].\tag{13}$$

3 Algorithm

In this section we present the algorithm corresponding to Section 2. From the random inputs A(t), B(t) and C(t) having expression (2) and the initial conditions Y0 and Y1, we will show the steps to be followed in order to approximate the expectation and covariance of the solution stochastic process X(t). As in Section 2, denote the random input parameters by ζ1, …, ζs.

Case ζ1, …, ζs are independent:

  1. Define the canonical bases $C_i^p = \{1, \zeta_i, \ldots, \zeta_i^p\}$, i = 1, …, s.

  2. Via a Gram-Schmidt procedure, orthonormalize $C_i^p$ into a new basis $\Xi_i^p = \{\phi_0^i(\zeta_i), \ldots, \phi_p^i(\zeta_i)\}$ with respect to the probability law ℙζi of ζi. In the software Mathematica®, this can be readily done with the built-in function Orthogonalize. For example, if p = 3 and the probability distribution is dist, then the command could be:

    Expand[Orthogonalize[{1, Z, Z^2, Z^3},

    Integrate[#1 #2 PDF[dist, Z], {Z, -Infinity, Infinity}] &]]

  3. By using a simple tensor product, define the basis Ξ = {φ1(ζ), …, φP(ζ)}, orthonormal with respect to the joint law $\mathbb{P}_\zeta = \mathbb{P}_{\zeta_1}\times\cdots\times\mathbb{P}_{\zeta_s}$.

  4. Construct the matrices M(t) and N(t) given by (5), the vector q(t) defined by (6), and the initial conditions y and y′ given by (8). All the involved expectations can be calculated with the built-in function Expectation from Mathematica®.

  5. Solve numerically the deterministic system of P differential equations given by (7) with initial conditions x(t0) = y and ẋ(t0) = y′. This system does not pose serious numerical challenges. We thus integrate the equations over time with the standard NDSolve routine from Mathematica®: write the instruction

    NDSolve[eqns,function,{t,t0,T}]

    with automatic method, step size, etc. (the built-in function will automatically try to estimate the best method for a particular computation).

  6. Approximate the expectation and covariance of the unknown solution stochastic process by using (9). An end-to-end sketch of these steps is given after this list.
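
As announced in step 6, the following Mathematica® script is a minimal end-to-end sketch of these six steps for the randomized Airy equation (14) of Example 4.1 below (A ∼ Beta(2, 3), Y0 ∼ Normal(1, 1), Y1 ∼ Normal(2, 1), independent, p = 3); the variable names are ours, and the script is an illustration under these assumptions rather than the authors' code:

    p = 3;
    vars = {za, zy0, zy1};  (* za = A, zy0 = Y0, zy1 = Y1 *)
    dists = {BetaDistribution[2, 3], NormalDistribution[1, 1], NormalDistribution[2, 1]};
    s = Length[vars];
    ee[expr_] := Expectation[expr, Thread[Distributed[vars, dists]]];
    (* steps 1-2: univariate orthonormal bases via Gram-Schmidt *)
    uni = Table[Expand@Orthogonalize[vars[[i]]^Range[0, p],
        Expectation[#1 #2, vars[[i]] \[Distributed] dists[[i]]] &], {i, s}];
    (* step 3: tensor-product basis of total degree <= p; phi[[1]] = 1 *)
    mIdx = SortBy[Select[Tuples[Range[0, p], s], Total[#] <= p &], {Total, Identity}];
    phi = Table[Product[uni[[i, mIdx[[j, i]] + 1]], {i, s}], {j, Length[mIdx]}];
    P = Length[phi];  (* (p + s)!/(p! s!) = 20 *)
    (* step 4: for (14), a0 = 0, M = 0, q = 0, b1(t) = t, eta1 = A *)
    nMat = Table[ee[za phi[[j]] phi[[k]]], {k, P}, {j, P}];
    y0vec = Table[ee[zy0 phi[[k]]], {k, P}];
    y1vec = Table[ee[zy1 phi[[k]]], {k, P}];
    (* step 5: solve the deterministic system (7) *)
    xs[t_] := Table[x[i][t], {i, P}];
    eqns = Join[Thread[Table[x[i]''[t], {i, P}] + t nMat.xs[t] == 0],
       Table[x[i][0] == y0vec[[i]], {i, P}],
       Table[x[i]'[0] == y1vec[[i]], {i, P}]];
    sol = First@NDSolve[eqns, Array[x, P], {t, 0, 2}];
    (* step 6: statistics via (9), evaluated at t = 1 *)
    {x[1][1.] /. sol, Sum[(x[i][1.] /. sol)^2, {i, 2, P}]}

For this data set, the two printed values should be close to the t = 1 entries of Tables 1 and 2 below (about 2.8686 and 1.8139).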

Case ζ1, …, ζs are not independent:

  1. Define the canonical bases $C_i^p = \{1, \zeta_i, \ldots, \zeta_i^p\}$, i = 1, …, s.

  2. By using a simple tensor product, define the basis Ξ = {ϕ1(ζ), …, ϕP(ζ)}.

  3. Construct the matrices M(t) and N(t) given by (5), the vector q(t) defined by (6), the matrix R and the vector h given by (11), and the vectors y and y′ expressed by (8). All the involved expectations can be calculated with the built-in function Expectation from Mathematica®.

  4. Solve numerically the deterministic system of P differential equations given by (12) with initial conditions Rx(t0) = y and Rẋ(t0) = y′. This system does not pose serious numerical challenges. We thus integrate the equations over time with the standard NDSolve routine from Mathematica® with the option

    Method -> {"EquationSimplification" -> "Residual"}

    (to deal with the corresponding system of differential-algebraic equations): write the instruction

    NDSolve[eqns,function,{t,t0,T},

    Method -> {"EquationSimplification" -> "Residual"}]

    with automatic method, step size, etc. (the built-in function will automatically try to pick the best method for a particular computation).

  5. Approximate the expectation and covariance of the unknown solution stochastic process by using (13). A minimal sketch of these dependent-case ingredients follows this list.
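
The dependent-case ingredients can be sketched in the same spirit; this is a minimal illustration in which the joint law jointDist is invented for the example (it is not the distribution used in Section 4), and the final NDSolve call is only indicated schematically:

    s = 3; p = 2;
    zeta = {z1, z2, z3};
    (* purely illustrative dependent joint law *)
    jointDist = MultinormalDistribution[{0, 1, 2},
       {{1, 1/2, 0}, {1/2, 1, 0}, {0, 0, 1}}];
    ee[expr_] := Expectation[expr, zeta \[Distributed] jointDist];
    (* steps 1-2: non-orthonormal monomial tensor basis of total degree <= p *)
    mIdx = SortBy[Select[Tuples[Range[0, p], s], Total[#] <= p &], {Total, Identity}];
    phi = Table[Product[zeta[[i]]^mIdx[[j, i]], {i, s}], {j, Length[mIdx]}];
    P = Length[phi];  (* (p + s)!/(p! s!) = 10 *)
    (* step 3: Gram matrix R and vector h of (11) *)
    R = Table[ee[phi[[i]] phi[[k]]], {i, P}, {k, P}];
    h = Table[ee[phi[[k]]], {k, P}];
    (* step 4 would assemble M(t), N(t) and q(t) as in (5)-(6) and then call
       NDSolve[eqns, funs, {t, t0, T},
         Method -> {"EquationSimplification" -> "Residual"}] *)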

4 Examples

In this section we show particular examples of the random IVP (1) to which we apply adaptive gPC to approximate the expectation and covariance of the solution stochastic process X(t).

We will compare the results with Monte Carlo simulation. This method is based on sampling: one samples from the probability distributions of A(t), B(t), C(t), Y0 and Y1 to obtain, say, m realizations, for m large:

$$A^{(1)}(t), \ldots, A^{(m)}(t),\quad B^{(1)}(t), \ldots, B^{(m)}(t),\quad C^{(1)}(t), \ldots, C^{(m)}(t),\quad Y_0^{(1)}, \ldots, Y_0^{(m)},\quad Y_1^{(1)}, \ldots, Y_1^{(m)}.$$

Then we solve the m deterministic initial value problems

$$\ddot{X}^{(i)}(t) + A^{(i)}(t)\dot{X}^{(i)}(t) + B^{(i)}(t)X^{(i)}(t) = C^{(i)}(t),\quad t\in\mathbb{R},\qquad X^{(i)}(t_0) = Y_0^{(i)},\quad \dot{X}^{(i)}(t_0) = Y_1^{(i)},$$

so that we obtain m realizations of X(t): $X^{(1)}(t), \ldots, X^{(m)}(t)$. The Law of Large Numbers permits approximating 𝔼[X(t)] and 𝕍[X(t)] by computing the sample mean and sample variance of $X^{(1)}(t), \ldots, X^{(m)}(t)$:

$$\mathbb{E}[X(t)] \approx \mu_m(t) = \frac{1}{m}\sum_{i=1}^{m}X^{(i)}(t),\qquad \mathbb{V}[X(t)] \approx \frac{1}{m-1}\sum_{i=1}^{m}\big(X^{(i)}(t)-\mu_m(t)\big)^2.$$

The results of adaptive gPC agree with Monte Carlo simulation, although the convergence rate of Monte Carlo is much slower (its error convergence rate is inversely proportional to the square root of the number of realizations [3, p. 53]).
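
For concreteness, a minimal Monte Carlo sketch for the Airy problem (14) of Example 4.1 below could read as follows in Mathematica®; m is kept small here for illustration, whereas the tables use m = 50,000 and m = 100,000:

    m = 1000;  (* number of realizations, reduced for illustration *)
    aS = RandomVariate[BetaDistribution[2, 3], m];
    y0S = RandomVariate[NormalDistribution[1, 1], m];
    y1S = RandomVariate[NormalDistribution[2, 1], m];
    (* solve one deterministic IVP per realization and evaluate at t = 1 *)
    vals = Table[
       First[X /. NDSolve[{X''[t] + aS[[i]] t X[t] == 0,
            X[0] == y0S[[i]], X'[0] == y1S[[i]]}, X, {t, 0, 2}]][1.],
       {i, m}];
    {Mean[vals], Variance[vals]}  (* sample mean and (unbiased) sample variance *)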

The result for the expectation will also be compared with the dishonest method [27, p. 149]. It consists in estimating 𝔼[X(t)] by substituting A(t), B(t), C(t), Y0 and Y1 in (1) by their corresponding expected values. Denoting μX(t) = 𝔼[X(t)], the idea is that, since $\mathbb{E}[\ddot{X}(t)] = \frac{d^2}{dt^2}\mu_X(t)$ and $\mathbb{E}[\dot{X}(t)] = \frac{d}{dt}\mu_X(t)$, because of the commutation between the mean square limit and the expectation operator (see [2, Ch. 4]), one solves:

$$\frac{d^2}{dt^2}\mu_X(t) + \mathbb{E}[A(t)]\frac{d}{dt}\mu_X(t) + \mathbb{E}[B(t)]\,\mu_X(t) = \mathbb{E}[C(t)],\quad t\in\mathbb{R},\qquad \mu_X(t_0) = \mathbb{E}[Y_0],\quad \frac{d}{dt}\mu_X(t_0) = \mathbb{E}[Y_1].$$

In our context, the dishonest method will work in cases where ℂov[A(t), Ẋ(t)] and ℂov[B(t), X(t)] are small, but in general, there is no certainty that this holds. Thus, this method is a naive approximation to the true expectation, with no theoretical support, although with a certain use in the literature [27].
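
Anticipating Example 4.1 below, where 𝔼[A] = 0.4, 𝔼[Y0] = 1 and 𝔼[Y1] = 2, the dishonest estimate amounts to solving a single deterministic IVP; a minimal sketch:

    muA = Mean[BetaDistribution[2, 3]];  (* E[A] = 2/5 *)
    dish = First[mu /. NDSolve[{mu''[t] + muA t mu[t] == 0,
         mu[0] == 1, mu'[0] == 2}, mu, {t, 0, 2}]];
    dish[2.]  (* approx. 3.53286, cf. the dishonest column of Table 1 *)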

When possible, the results obtained via adaptive gPC for the expectation and variance will be compared with the random Fröbenius method. The convergence of the random Fröbenius method will be guaranteed by previous studies, see [22, 23].

Several conclusions are drawn from these examples. Adaptive gPC allows for random inputs (2) more general than the random Fröbenius method: A(t), B(t) and C(t) may not be analytic, they may be represented via a truncated Karhunen-Loève expansion, etc. Moreover, with a small basis order p, accurate results are obtained (this is due to the well-known spectral convergence of gPC-based methods). In practical applications, a disadvantage of adaptive gPC is that random parameter inputs cannot have a finite number of point masses (otherwise the space of polynomials evaluated at them would have finite dimension). From a computational standpoint, a large number s of random input parameters may make the computations infeasible, as the size P of the basis grows as P = (p + s)!/(p! s!).
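
The growth of P is easy to tabulate; for a fixed degree p = 3 and the numbers of inputs arising in the examples below (s = 3 in Example 4.1, s = 5 in Example 4.2 and s = 9 in Example 4.3), a quick Mathematica® check gives:

    (* basis size P = (p + s)!/(p! s!) for fixed degree p = 3 *)
    Table[{s, (3 + s)!/(3! s!)}, {s, {3, 5, 9}}]
    (* {{3, 20}, {5, 56}, {9, 220}} *)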

Example 4.1

Airy-type differential equations appear in a variety of applications to Mathematical Physics, such as the description of the solution to the Schrödinger equation for a particle confined within a triangular potential, in the solution for the one-dimensional motion of a quantum particle affected by a constant force, or in the theory of diffraction of radio waves around the Earth’s surface [28]. Airy’s random differential equation is given by [16]:

$$\ddot{X}(t) + A\,t\,X(t) = 0,\quad t\in\mathbb{R},\qquad X(0) = Y_0,\quad \dot{X}(0) = Y_1,\tag{14}$$

where A, Y0 and Y1 are random variables. It is well known that the solution to the deterministic Airy equation is highly oscillatory, hence differences between the methods are expected to become apparent when dealing with its stochastic counterpart.

Existence and uniqueness of sample path solution is guaranteed by Proposition 2.1. Concerning the existence and uniqueness of mean square solution, we refer to [16, 22] or Proposition 2.2, under the assumption that A is a bounded random variable. This assumption on boundedness is not a restriction in practice, as one may truncate the random variable A with a support as large as desired.

In [16], the following distributions for A, Y0 and Y1 are set: A ∼ Beta(2, 3), Y0 ∼ Normal(1, 1) and Y1 ∼ Normal(2, 1). They are assumed to be independent. Approximations for the expectation and variance via the random Fröbenius method and Monte Carlo simulation are obtained in [16]. We use adaptive gPC (independent case) with p = 3 and p = 4, ζ1 = A, ζ2 = Y0 and ζ3 = Y1; here η1 = A, A(t) = 0, C(t) = 0 and b1(t) = t. The results obtained are shown in Table 1 (expectation), Table 2 (variance) and Table 3 (covariance). The order of truncation in the random Fröbenius method is denoted by N. Observe that the gPC expansions have already converged for t ∈ [0, 2] with order p = 3. This rapid convergence shows the potential of this approach.

Table 1

Approximation of 𝔼[X(t)]. Example 4.1, assuming independent random data.

t | gPC p = 3 | gPC p = 4 | Fröb. N = 3 | Fröb. N = 5 | dishonest | MC 50,000 | MC 100,000
0.00 | 1 | 1 | 1 | 1 | 1 | 0.99701 | 1.00138
0.25 | 1.49870 | 1.49870 | 1.49870 | 1.49870 | 1.49870 | 1.49519 | 1.49976
0.50 | 1.98752 | 1.98752 | 1.98752 | 1.98752 | 1.98752 | 1.98353 | 1.98829
0.75 | 2.45108 | 2.45108 | 2.45108 | 2.45108 | 2.45102 | 2.44667 | 2.45160
1.00 | 2.86856 | 2.86856 | 2.86856 | 2.86856 | 2.86818 | 2.86383 | 2.86893
1.25 | 3.21494 | 3.21494 | 3.21494 | 3.21494 | 3.21339 | 3.21008 | 3.21534
1.50 | 3.46310 | 3.46310 | 3.46310 | 3.46310 | 3.45812 | 3.45831 | 3.46376
1.75 | 3.58660 | 3.58660 | 3.58660 | 3.58660 | 3.57340 | 3.58215 | 3.58784
2.00 | 3.56335 | 3.56335 | 3.56336 | 3.56335 | 3.53286 | 3.55948 | 3.56552

Table 2

Approximation of 𝕍[X(t)]. Example 4.1, assuming independent random data.

t | gPC p = 3 | gPC p = 4 | Fröb. N = 3 | Fröb. N = 5 | MC 50,000 | MC 100,000
0.00 | 1 | 1 | 1 | 1 | 0.99610 | 0.99530
0.25 | 1.06035 | 1.06035 | 1.06035 | 1.06035 | 1.05902 | 1.05642
0.50 | 1.23142 | 1.23142 | 1.23142 | 1.23142 | 1.23408 | 1.22793
0.75 | 1.49261 | 1.49261 | 1.49261 | 1.49261 | 1.50041 | 1.48944
1.00 | 1.81392 | 1.81392 | 1.81392 | 1.81392 | 1.82744 | 1.81127
1.25 | 2.15870 | 2.15870 | 2.15870 | 2.15870 | 2.17768 | 2.15721
1.50 | 2.49379 | 2.49379 | 2.49379 | 2.49379 | 2.51690 | 2.49462
1.75 | 2.80560 | 2.80560 | 2.80560 | 2.80560 | 2.83029 | 2.81030
2.00 | 3.11530 | 3.11530 | 3.11530 | 3.11530 | 3.13783 | 3.12559

Table 3

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 3 and p = 4. Example 4.1, assuming independent random data.

t \ s | 0 | 0.25 | 0.5 | 0.75 | 1 | 1.25 | 1.5 | 1.75 | 2
0 | 1. | 0.998959 | 0.991684 | 0.972072 | 0.934436 | 0.873965 | 0.787323 | 0.673299 | 0.533429
0.25 | 0.998959 | 1.06035 | 1.11507 | 1.15586 | 1.17516 | 1.16565 | 1.12103 | 1.03694 | 0.911972
0.5 | 0.991684 | 1.11507 | 1.23142 | 1.33242 | 1.40874 | 1.45067 | 1.44906 | 1.39647 | 1.28856
0.75 | 0.972072 | 1.15586 | 1.33242 | 1.49261 | 1.62561 | 1.7196 | 1.76286 | 1.74509 | 1.65909
1 | 0.934436 | 1.17516 | 1.40874 | 1.62561 | 1.81392 | 1.96032 | 2.05099 | 2.07321 | 2.01713
1.25 | 0.873965 | 1.16565 | 1.45067 | 1.7196 | 1.96032 | 2.1587 | 2.2997 | 2.3688 | 2.35387
1.5 | 0.787323 | 1.12103 | 1.44906 | 1.76286 | 2.05099 | 2.2997 | 2.49379 | 2.61793 | 2.65822
1.75 | 0.673299 | 1.03694 | 1.39647 | 1.74509 | 2.07321 | 2.3688 | 2.61793 | 2.8056 | 2.91699
2 | 0.533429 | 0.911972 | 1.28856 | 1.65909 | 2.01713 | 2.35387 | 2.65822 | 2.91699 | 3.1153

In Figure 1, we focus on the convergence of the gPC expansions. The solid line shows the expectation, while the dashed lines represent confidence intervals constructed with the rule mean ± standard deviation (the standard deviation being the square root of the variance). Observe that, as we move away from t = 0, larger orders p are required to achieve good approximations of the statistics of X(t). Indeed, Galerkin projections deviate from the exact solution after a certain time. Note also that larger orders p are needed to get accurate results for the standard deviation than for the expectation (statistical moments of order 2 are harder to approximate than moments of order 1). For p = 3 and p = 4, the approximate expectations agree up to time t = 7, whereas the standard deviations agree up to t = 4.5. For p = 2 and p = 3, similar means are obtained until t = 6, and similar standard deviations up to t = 4. Notice that the convergence deteriorates for p = 1: the results for p = 1 and p = 2 agree until t = 4 for the expectation, but only up to t = 1.5 for the standard deviation. As p grows, the approximation of the statistics improves for larger t.

Figure 1: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 1, 2, 3, 4. Example 4.1, assuming independent random data.

Following [16], where the random Fröbenius method is used, we also perform an example of Airy's differential equation with dependent random inputs. The vector (A, Y0, Y1) is set to have a multivariate Gaussian distribution, with mean vector and covariance matrix given by

$$\mu = \begin{pmatrix} 0.4 \\ 1 \\ 2 \end{pmatrix},\qquad \Sigma = \begin{pmatrix} 0.04 & 0.0001 & 0.005 \\ 0.0001 & 1 & 0.5 \\ 0.005 & 0.5 & 1 \end{pmatrix},$$

respectively. In Tables 4, 5 and 6, the results obtained via adaptive gPC with p = 3 and p = 4 (dependent case) and via [16] are shown. Adaptive gPC converges for small basis order p.

Table 4

Approximation of 𝔼[X(t)]. Example 4.1, assuming dependent random data.

t | gPC p = 3 | gPC p = 4 | Fröb. N = 4 | Fröb. N = 5 | dishonest | MC 50,000 | MC 100,000
0.00 | 1 | 1 | 1 | 1 | 1 | 1.00188 | 1.00597
0.25 | 1.49870 | 1.49870 | 1.49870 | 1.49870 | 1.49870 | 1.50166 | 1.50581
0.50 | 1.98755 | 1.98755 | 1.98755 | 1.98755 | 1.98752 | 1.99156 | 1.99575
0.75 | 2.45121 | 2.45121 | 2.45121 | 2.45121 | 2.45102 | 2.45622 | 2.46041
1.00 | 2.86895 | 2.86895 | 2.86895 | 2.86895 | 2.86818 | 2.87485 | 2.87900
1.25 | 3.21589 | 3.21589 | 3.21589 | 3.21589 | 3.21339 | 3.22247 | 3.22656
1.50 | 3.46503 | 3.46503 | 3.46503 | 3.46503 | 3.45812 | 3.47198 | 3.47601
1.75 | 3.59010 | 3.59010 | 3.59010 | 3.59010 | 3.57340 | 3.59700 | 3.60101
2.00 | 3.56914 | 3.56914 | 3.56915 | 3.56914 | 3.53286 | 3.57546 | 3.57949

Table 5

Approximation of 𝕍[X(t)]. Example 4.1, assuming dependent random data.

t | gPC p = 3 | gPC p = 4 | Fröb. N = 3 | Fröb. N = 4 | MC 50,000 | MC 100,000
0.00 | 1 | 1 | 1 | 1 | 0.999223 | 0.99992
0.25 | 1.30997 | 1.30997 | 1.30997 | 1.30997 | 1.30713 | 1.30991
0.50 | 1.72535 | 1.72535 | 1.72535 | 1.72535 | 1.71989 | 1.72525
0.75 | 2.21241 | 2.21241 | 2.21241 | 2.21241 | 2.20395 | 2.21230
1.00 | 2.72122 | 2.72122 | 2.72122 | 2.72122 | 2.70957 | 2.72125
1.25 | 3.19236 | 3.19236 | 3.19236 | 3.19236 | 3.17745 | 3.19283
1.50 | 3.57361 | 3.57361 | 3.57361 | 3.57361 | 3.55537 | 3.57484
1.75 | 3.84459 | 3.84459 | 3.84454 | 3.84458 | 3.82262 | 3.84669
2.00 | 4.04087 | 4.04087 | 4.04090 | 4.04086 | 4.01420 | 4.04342

Table 6

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 3 and p = 4. Example 4.1, assuming dependent random data.

t \ s | 0 | 0.25 | 0.5 | 0.75 | 1 | 1.25 | 1.5 | 1.75 | 2
0 | 1. | 1.12389 | 1.24064 | 1.34181 | 1.41793 | 1.45914 | 1.45614 | 1.40144 | 1.29068
0.25 | 1.12389 | 1.30997 | 1.48771 | 1.64683 | 1.77533 | 1.86031 | 1.88919 | 1.85125 | 1.7394
0.5 | 1.24064 | 1.48771 | 1.72535 | 1.94152 | 2.12187 | 2.25061 | 2.31202 | 2.29226 | 2.18163
0.75 | 1.34181 | 1.64683 | 1.94152 | 2.21241 | 2.4431 | 2.61534 | 2.71062 | 2.71235 | 2.60832
1 | 1.41793 | 1.77533 | 2.12187 | 2.4431 | 2.72122 | 2.93619 | 3.06742 | 3.09609 | 3.00785
1.25 | 1.45914 | 1.86031 | 2.25061 | 2.61534 | 2.93619 | 3.19236 | 3.36219 | 3.42552 | 3.36639
1.5 | 1.45614 | 1.88919 | 2.31202 | 2.71062 | 3.06742 | 3.36219 | 3.57361 | 3.68135 | 3.66862
1.75 | 1.40144 | 1.85125 | 2.29226 | 2.71235 | 3.09609 | 3.42552 | 3.68135 | 3.84459 | 3.89856
2 | 1.29068 | 1.7394 | 2.18163 | 2.60832 | 3.00785 | 3.36639 | 3.66862 | 3.89856 | 4.04087

In Figure 2, we analyze the convergence of gPC expansions by depicting the expectation (solid line) and confidence interval (dashed lines) for X(t), where the confidence interval is constructed as mean ± deviation. Analogous comments to those from Figure 1 apply in this case again. For orders p = 3 and p = 4, the expectations agree up to time t = 6, while the standard deviations coincide until t = 4.6. For p = 2 and p = 3, the means are similar until t = 6, whereas the dispersion estimates start separating from t = 3.8. Finally, for p = 1 and p = 2, the approximations for the average statistic coincide till t = 4.5, and for the deviation statistic until t = 2.5.

Figure 2: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 1, 2, 3, 4. Example 4.1, assuming dependent random data.

Example 4.2

Consider the random differential equation

$$\ddot{X}(t) + (\gamma_1+\gamma_2 t)\dot{X}(t) + (\eta_1+t)X(t) = \xi_1\cos(t) + g(t),\quad t\in\mathbb{R},\qquad X(0) = Y_0,\quad \dot{X}(0) = Y_1,\tag{15}$$

where γ1 ∼ Poisson(3), γ2 ∼ Uniform(0, 1), η1 ∼ Gamma(2, 2), Y0 = −1, Y1 ∼ Exponential(4), ξ1 ∼ Uniform(−8, 2) and $g(t) = e^{-1/t}\,\mathbb{1}_{(0,\infty)}(t)$.

Proposition 2.1 ensures the existence and uniqueness of a sample path solution. To apply Proposition 2.2, one would need to truncate the supports of γ1 and η1. These truncations can be taken on intervals as large as desired, so the results are essentially unaffected.

The input random variables ζ1 = γ1, ζ2 = γ2, ζ3 = η1, ζ4 = ξ1 and ζ5 = Y1 are assumed to be independent. The involved functions are a1(t) = 1, a2(t) = t, b0(t) = t, b1(t) = 1, c0(t) = g(t) and c1(t) = cos(t). Notice that C(t) is not an analytic stochastic process, because g(t) is not a real analytic function. The random Fröbenius method is therefore not applicable to the random IVP (15). However, we are going to see that adaptive gPC (independent case) with p = 6 and p = 7 provides reliable approximations of the expectation and covariance of X(t). We will compare the results with Monte Carlo simulation. In Tables 7, 8 and 9 we show the estimates obtained.
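
The obstruction for the Fröbenius method can be checked directly: g is C∞, but all of its derivatives vanish at t = 0, so its Taylor series at the origin is identically zero and does not represent g. A quick illustrative check in Mathematica®:

    g[t_] := If[t > 0, Exp[-1/t], 0];
    (* right-hand limits at 0 of the first derivatives of e^(-1/t) are all zero *)
    Table[Limit[D[Exp[-1/t], {t, n}], t -> 0, Direction -> "FromAbove"], {n, 0, 3}]
    (* {0, 0, 0, 0} *)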

Table 7

Approximation of 𝔼[X(t)]. Example 4.2, assuming independent random data.

t | gPC p = 6 | gPC p = 7 | dishonest | MC 50,000 | MC 100,000
0.00 | −1 | −1 | −1 | −1 | −1
0.25 | −0.930972 | −0.930972 | −0.931372 | −0.931035 | −0.930364
0.50 | −0.855779 | −0.855779 | −0.852372 | −0.855937 | −0.854386
0.75 | −0.780021 | −0.780021 | −0.759103 | −0.780573 | −0.778022
1.00 | −0.700758 | −0.700758 | −0.647653 | −0.702042 | −0.698437
1.25 | −0.609156 | −0.609156 | −0.518169 | −0.611266 | −0.606832
1.50 | −0.496445 | −0.496446 | −0.374486 | −0.499156 | −0.494407
1.75 | −0.359632 | −0.359635 | −0.222874 | −0.362532 | −0.358036
2.00 | −0.203726 | −0.203737 | −0.070806 | −0.206408 | −0.202560

Table 8

Approximation of 𝕍[X(t)]. Example 4.2, assuming independent random data.

t | gPC p = 6 | gPC p = 7 | MC 50,000 | MC 100,000
0.00 | 0 | 0 | 0 | 0
0.25 | 0.0114271 | 0.0114271 | 0.0115378 | 0.0114953
0.50 | 0.0897916 | 0.0897916 | 0.0904703 | 0.090160
0.75 | 0.236135 | 0.236136 | 0.237288 | 0.237066
1.00 | 0.371625 | 0.371639 | 0.372899 | 0.373058
1.25 | 0.426921 | 0.427021 | 0.428690 | 0.428342
1.50 | 0.388485 | 0.388899 | 0.391305 | 0.38978
1.75 | 0.289622 | 0.290720 | 0.293631 | 0.291429
2.00 | 0.182954 | 0.184922 | 0.187906 | 0.185917

Table 9

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 7. Example 4.2, assuming independent random data.

t \ s | 0 | 0.25 | 0.5 | 0.75 | 1 | 1.25 | 1.5 | 1.75 | 2
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
0.25 | 0 | 0.0114271 | 0.0310997 | 0.0486065 | 0.0585142 | 0.0597207 | 0.0537473 | 0.042977 | 0.0295893
0.5 | 0 | 0.0310997 | 0.0897916 | 0.14431 | 0.176903 | 0.182770 | 0.165621 | 0.132554 | 0.0905903
0.75 | 0 | 0.0486065 | 0.14431 | 0.236136 | 0.293929 | 0.307634 | 0.281495 | 0.226547 | 0.154877
1 | 0 | 0.0585142 | 0.176903 | 0.293929 | 0.371639 | 0.394991 | 0.366485 | 0.29839 | 0.206021
1.25 | 0 | 0.0597207 | 0.182770 | 0.307634 | 0.394991 | 0.427021 | 0.403260 | 0.334311 | 0.235733
1.5 | 0 | 0.0537473 | 0.165621 | 0.281495 | 0.366485 | 0.403260 | 0.388899 | 0.330575 | 0.241178
1.75 | 0 | 0.042977 | 0.132554 | 0.226547 | 0.29839 | 0.334311 | 0.330575 | 0.29072 | 0.223109
2 | 0 | 0.0295893 | 0.0905903 | 0.154877 | 0.206021 | 0.235733 | 0.241178 | 0.223109 | 0.184922

In Figure 3, we focus on the convergence of gPC expansions. We depict the estimates of the expectations (solid line) and confidence intervals (dashed lines), with the rule mean ± deviation, for orders p = 4, 5, 6, 7. Note that convergence is achieved for t ∈ [0, 10].

Figure 3: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 4, 5, 6, 7. Example 4.2, assuming independent random data.

Example 4.3

Consider the random differential equation

$$\ddot{X}(t) + B(t)X(t) = C,\quad t\in[0,1],\qquad X(0) = Y_0,\quad \dot{X}(0) = Y_1,\tag{16}$$

where B(t) is a standard Brownian motion on [0, 1], C ∼ Poisson(2), and the initial conditions are distributed as Y0 ∼ Beta(1/2, 1/2) and Y1 = 0. These random inputs are assumed to be independent.

This stochastic system has a unique solution in the sample path sense, by Proposition 2.1. In principle, one cannot ensure the existence of a mean square solution, since the sample paths of Brownian motion are not bounded.

Consider the Karhunen-Loève expansion of Brownian motion [25, p. 216]:

$$B(t) = \sum_{j=1}^{\infty}\frac{\sqrt{2}}{\left(j-\frac{1}{2}\right)\pi}\,\sin\!\left(\left(j-\tfrac{1}{2}\right)\pi t\right)\xi_j,$$

where ξ1, ξ2, … are independent Normal(0, 1) random variables. The series is understood in L²([0, 1] × Ω). We truncate the Karhunen-Loève expansion so that B(t) has the form in (2). If we take dB = 7, we capture more than 97% of the total variance of B. Thus, we take

$$B(t) = \sum_{j=1}^{7}\frac{\sqrt{2}}{\left(j-\frac{1}{2}\right)\pi}\,\sin\!\left(\left(j-\tfrac{1}{2}\right)\pi t\right)\xi_j.$$

The random inputs become ζ1 = ξ1, …, ζ7 = ξ7, ζ8 = C and ζ9 = Y0, with functions $b_j(t) = \frac{\sqrt{2}}{(j-1/2)\pi}\sin((j-1/2)\pi t)$, 1 ≤ j ≤ 7, and c1(t) = 1.
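
The 97% figure can be verified from the Karhunen-Loève eigenvalues λj = ((j − 1/2)π)⁻² of Brownian motion on [0, 1], which sum to ∫₀¹ t dt = 1/2; a one-line illustrative check in Mathematica®:

    lambda[j_] := 1/((j - 1/2)^2 Pi^2);
    N[Sum[lambda[j], {j, 1, 7}]/(1/2)]  (* 0.9711, i.e. more than 97% of the variance *)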

Notice that, if one truncates ξ1, …, ξ7 to a large but bounded support, Proposition 2.2 entails that there exists a solution stochastic process in the mean square sense.

In Table 10, Table 11 and Table 12, we show the results obtained by adaptive gPC with p = 2, p = 3 and Monte Carlo simulation. Similar estimates are obtained for p = 2 and p = 3, which agrees with the convergence of gPC-based representations.

Table 10

Approximation of 𝔼[X(t)]. Example 4.3, assuming independent random data.

t | gPC p = 2 | gPC p = 3 | dishonest | MC 50,000 | MC 100,000
0.00 | 0.5 | 0.5 | 0.5 | 0.499302 | 0.499056
0.25 | 0.562504 | 0.562504 | 0.5625 | 0.561732 | 0.561438
0.50 | 0.75014 | 0.75014 | 0.75 | 0.749196 | 0.748740
0.75 | 1.06365 | 1.06365 | 1.0625 | 1.06245 | 1.06179
1.00 | 1.50536 | 1.50536 | 1.5 | 1.50396 | 1.50311

Table 11

Approximation of 𝕍[X(t)]. Example 4.3, assuming independent random data.

t | gPC p = 2 | gPC p = 3 | MC 50,000 | MC 100,000
0.00 | 0.125 | 0.125 | 0.124849 | 0.124826
0.25 | 0.126974 | 0.126974 | 0.126883 | 0.126841
0.50 | 0.157008 | 0.157008 | 0.157267 | 0.157033
0.75 | 0.290263 | 0.290265 | 0.291811 | 0.290766
1.00 | 0.664551 | 0.664592 | 0.670067 | 0.666466

Table 12

Approximation of ℂov[X(t), X(s)] via adapted gPC with p = 3. Example 4.3, assuming independent random data.

t \ s | 0 | 0.25 | 0.5 | 0.75 | 1
0 | 0.125 | 0.125001 | 0.125033 | 0.125248 | 0.126043
0.25 | 0.125001 | 0.126974 | 0.132949 | 0.143104 | 0.157885
0.5 | 0.125033 | 0.132949 | 0.157008 | 0.197634 | 0.255578
0.75 | 0.125248 | 0.143104 | 0.197634 | 0.290265 | 0.422829
1 | 0.126043 | 0.157885 | 0.255578 | 0.422829 | 0.664592

In Figure 4, we show graphically the convergence of gPC expansions on [0, 1]: we plot the approximate expectations (solid line) and confidence intervals (dashed lines) for X(t), where the confidence interval is constructed as mean ± deviation. For p = 1, 2, 3, no differences in the estimates are observed.

Figure 4: Expectation and confidence interval for the solution stochastic process, for orders of basis p = 1, 2, 3. Example 4.3, assuming independent random data.

5 Conclusions

In this paper, we have quantified computationally the uncertainty of random non-autonomous second-order linear differential equations via adaptive gPC. After reviewing adaptive gPC from the extant literature, we have provided a methodology and an algorithm to approximate computationally the expectation and covariance of the solution stochastic process. The hypotheses of our algorithm allow both independent and dependent random parameter inputs, either absolutely continuous or discrete with infinitely many point masses. The generality of our computational results allows the random input coefficients to be truncated random power series or truncated Karhunen-Loève expansions. The former case permits comparing our methodology with the random Fröbenius method, an approach already used in the literature for particular random second-order linear differential equations, and with Monte Carlo simulation. A wide variety of examples shows that adaptive gPC succeeds in quantifying the uncertainty of random non-autonomous second-order linear differential equations, even when the random Fröbenius method is not applicable.

Acknowledgement

This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. Marc Jornet acknowledges the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València. The authors are grateful for the valuable comments raised by the reviewer, which have improved the final version of the paper.

Conflict of interest: The authors declare that there is no conflict of interest regarding the publication of this article.

References

[1] Strand J.L., Random ordinary differential equations, J. Differ. Equ., 1970, 7, 538–553. doi:10.1016/0022-0396(70)90100-2

[2] Soong T.T., Random Differential Equations in Science and Engineering, 1973, New York: Academic Press.

[3] Xiu D., Numerical Methods for Stochastic Computations. A Spectral Method Approach, 2010, Princeton University Press. doi:10.1515/9781400835348

[4] Xiu D., Karniadakis G.E., The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 2002, 24(2), 619–644. doi:10.1137/S1064827501387826

[5] Williams M.M.R., Polynomial chaos functions and stochastic differential equations, Ann. Nucl. Energy, 2006, 33(9), 774–785. doi:10.1016/j.anucene.2006.04.005

[6] Chen-Charpentier B.M., Stanescu D., Epidemic models with random coefficients, Math. Comput. Model., 2010, 52(7–8), 1004–1010. doi:10.1016/j.mcm.2010.01.014

[7] Chen-Charpentier B.M., Cortés J.C., Licea J.A., Romero J.V., Roselló M.D., Santonja F.J., Villanueva R.J., Constructing adaptive generalized polynomial chaos method to measure the uncertainty in continuous models: A computational approach, Math. Comput. Simulat., 2015, 109, 113–129. doi:10.1016/j.matcom.2014.09.002

[8] Cortés J.C., Romero J.V., Roselló M.D., Villanueva R.J., Improving adaptive generalized polynomial chaos method to solve nonlinear random differential equations by the random variable transformation technique, Commun. Nonlinear Sci. Numer. Simulat., 2017, 50, 1–15. doi:10.1016/j.cnsns.2017.02.011

[9] Cortés J.C., Romero J.V., Roselló M.D., Santonja F.J., Villanueva R.J., Solving continuous models with dependent uncertainty: A computational approach, Abstr. Appl. Anal., 2013. doi:10.1155/2013/983839

[10] Cortés J.C., Navarro-Quiles A., Romero J.V., Roselló M.D., Probabilistic solution of random autonomous first-order linear systems of ordinary differential equations, Rom. Rep. Phys., 2016, 68(4), 1397–1406.

[11] Gottlieb D., Xiu D., Galerkin method for wave equations with uncertain coefficients, Commun. Comput. Phys., 2008, 3(2), 505–518.

[12] Ernst O.G., Mugler A., Starkloff H.J., Ullmann E., On the convergence of generalized polynomial chaos expansions, ESAIM Math. Model. Numer. Anal., 2012, 46(2), 317–339. doi:10.1051/m2an/2011045

[13] Shi W., Zhang C., Error analysis of generalized polynomial chaos for nonlinear random ordinary differential equations, Appl. Numer. Math., 2012, 62(12), 1954–1964. doi:10.1016/j.apnum.2012.08.007

[14] Shi W., Zhang C., Generalized polynomial chaos for nonlinear random delay differential equations, Appl. Numer. Math., 2017, 115, 16–31. doi:10.1016/j.apnum.2016.12.004

[15] Calatayud J., Cortés J.C., Jornet M., On the convergence of adaptive gPC for non-linear random difference equations: Theoretical analysis and some practical recommendations, J. Nonlinear Sci. Appl., 2018, 11(9), 1077–1084. doi:10.22436/jnsa.011.09.06

[16] Cortés J.C., Jódar L., Camacho F., Villafuerte L., Random Airy type differential equations: Mean square exact and numerical solutions, Comput. Math. Appl., 2010, 60, 1237–1244. doi:10.1016/j.camwa.2010.05.046

[17] Calbo G., Cortés J.C., Jódar L., Random Hermite differential equations: Mean square power series solutions and statistical properties, Appl. Math. Comput., 2011, 218(7), 3654–3666. doi:10.1016/j.amc.2011.09.008

[18] Calbo G., Cortés J.C., Jódar L., Villafuerte L., Solving the random Legendre differential equation: Mean square power series solution and its statistical functions, Comput. Math. Appl., 2011, 61(9), 2782–2792. doi:10.1016/j.camwa.2011.03.045

[19] Calatayud J., Cortés J.C., Jornet M., Improving the approximation of the first and second order statistics of the response process to the random Legendre differential equation, 2018, arXiv:1807.03141. doi:10.1007/s00009-019-1338-6

[20] Cortés J.C., Jódar L., Company R., Villafuerte L., Laguerre random polynomials: definition, differential and statistical properties, Util. Math., 2015, 98, 283–295.

[21] Cortés J.C., Jódar L., Villafuerte L., Mean square solution of Bessel differential equation with uncertainties, J. Comput. Appl. Math., 2017, 309(1), 383–395. doi:10.1016/j.cam.2016.01.034

[22] Calatayud J., Cortés J.C., Jornet M., Villafuerte L., Random non-autonomous second order linear differential equations: mean square analytic solutions and their statistical properties, Adv. Differ. Equ., 2018, 392, 1–29. doi:10.1186/s13662-018-1848-8

[23] Calatayud J., Cortés J.C., Jornet M., Some notes to extend the study on random non-autonomous second order linear differential equations appearing in mathematical modeling, Math. Comput. Appl., 2018, 23(4), 76–89. doi:10.3390/mca23040076

[24] Golmankhaneh A.K., Porghoveh N.A., Baleanu D., Mean square solutions of second-order random differential equations by using homotopy analysis method, Rom. Rep. Phys., 2013, 65(2).

[25] Lord G.J., Powell C.E., Shardlow T., An Introduction to Computational Stochastic PDEs, 2014, New York: Cambridge University Press. doi:10.1017/CBO9781139017329

[26] Hale J.K., Ordinary Differential Equations (2nd ed.), 1980, Malabar: Robert E. Krieger Publishing Company.

[27] Henderson D., Plaschko P., Stochastic Differential Equations in Science and Engineering, 2006, Singapore: World Scientific. doi:10.1142/5806

[28] Vallée O., Soares M., Airy Functions and Applications to Physics, 2004, London: Imperial College Press. doi:10.1142/p345

Received: 2018-04-26
Accepted: 2018-12-10
Published Online: 2018-12-31

© 2018 Calatayud et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
