
Multistep variable methods for exact integration of perturbed stiff linear systems

  • José A. Reyes, Mónica Cortés-Molina and Fernando García-Alonso
Published/Copyright: March 14, 2016

Abstract

A family of real analytic functions with values in the ring M(m, ℝ) is introduced, and the solution of perturbed linear systems of differential equations is expressed as a series of Φ-functions. From this series a new multistep method is derived; it is defined for variable step and variable order, it maintains the good properties of the Φ-function series method, and it computes the coefficients of the algorithm by a recurrent algebraic procedure based on the relation between divided differences and the elementary and complete symmetric functions. In addition, under certain hypotheses, the new multistep method calculates the exact solution of the perturbed problem.

The new method is implemented in a computational algorithm which enables us to solve, in a general manner, physics and engineering IVPs modelled by systems of differential equations. The good behaviour and precision of the method are evidenced by contrasting the results with those of other reputed algorithms and even with methods based on Scheifele's G-functions.

MSC 2010: 65L04; 65L05; 65L06

1 Introduction

Perturbed harmonic oscillators are of particular interest in many areas of physics and engineering. They are also of great interest in Astrodynamics, as Newtonian equations of motion can be reduced to harmonic oscillators by means of the Kustaanheimo–Stiefel [19] and Burdet–Ferrándiz [2] transformations. In addition, all natural phenomena that can be modelled using perturbed oscillators can also be modelled using perturbed linear systems of differential equations.

The numerical methods used to solve these kinds of systems should preferably have the property that, if the perturbation terms vanish from an arbitrary value of the independent variable t onwards, the numerical method then integrates the unperturbed system without any discretisation error [19]. Methods with this desirable property are found in [3, 5, 10, 11, 15, 16].

In [4] the Φ-function series method is presented, for the precise integration of perturbed linear differential equations. Compared with the G-function series [16], this method has the advantage of integrating the perturbed problem without any discretisation error, under certain conditions and with only the first two terms of the series. It also maintains the desirable properties of the Scheifele methods.

The Φ-function series method is extremely precise. However, it is difficult to calculate coefficients for each specific case it is applied to, which makes it difficult to implement on a computer.

In order to solve this problem, this article describes a variable-step, variable-order multistep method (VSVO), which maintains the good properties of the Φ-function series method and incorporates an algebraic procedure to compute the coefficients of the algorithm.

This multistep method is obtained by approximating the perturbation function derivatives, which appear in the series method, using divided differences. This allows us to establish a recurrent calculation procedure to compute the coefficients of the algorithm. This recurrent procedure is based on the relationship existing between the divided differences of the perturbation function and the elementary and complete symmetric functions.

Both explicit and implicit multistep VSVO algorithms are constructed and, from these, a predictor-corrector algorithm. A computational algorithm is also designed to implement the method on a computer.

The excellent behaviour of the method can be seen when it is applied to stiff and highly oscillatory problems, comparing the numerical results obtained with those calculated by other well-known integrators and even with methods based on Scheifele’s G-functions.

2 Basic ideas and formulations

Let us consider the following IVP:

(2.1) $x' + Ax = \varepsilon f(x(t),t), \qquad x(0) = x_0, \quad t \in [a,b] = I$

where x : ℝ → ℝ^m, A ∈ M(m, ℝ) and f : ℝ^m × ℝ → ℝ^m. The components of the vector perturbation field f(x(t), t) are f_i(x(t), t), with i = 1, …, m, and the field is continuous, with continuous derivatives up to a certain order, satisfying the conditions for existence and uniqueness of the solution. This type of system is called a perturbed linear system.

Assume that g(t) = f(x(t), t) is analytic in I with respect to t; for this it is sufficient that f be analytic in its arguments. In terms of the linear derivation operator D with respect to the variable t, (2.1) can be written as follows:

(2.2) $(D+A)x = \varepsilon g(t), \qquad x(0) = x_0, \quad t \in [0,T] = I$

for which it is assumed that x(t) is the unique solution in I and that it can be expanded in a power series.

Applying the operator (D + B) to (2.2), where B ∈ M(m, ℝ), and writing L₂ = D² + (A + B)D + BA, the new IVP is obtained:

(2.3) $L_2(x) = (D+B)\,\varepsilon g(t), \qquad x(0) = x_0, \quad x'(0) = -Ax_0 + \varepsilon g(0) = x'_0$

whose exact solution x(t) is the same as that of (2.1) and (2.2).

The idea that leads us to consider this ‘enlarged’ IVP is that of cancelling the perturbation with the operator (D + B).
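For instance (a minimal illustration, not taken from [4]): if the perturbation itself satisfies $g'(t) = -Bg(t)$, as happens for $g(t) = e^{-Bt}c$ with a constant vector $c$, then $(D+B)\,\varepsilon g(t) = 0$ and the right-hand side of the enlarged problem vanishes identically. This is exactly the situation arranged in the numerical examples of Section 5, where B is chosen so that it annihilates the perturbation function.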

Given that g(t) is analytical in its arguments, we can write

(2.4) $g(t) = f(x(t),t) = \sum_{n=0}^{\infty}\frac{g^{(n)}(0)}{n!}\,t^n = \sum_{n=0}^{\infty}\frac{t^n}{n!}\,c_n$

with [4]

(2.5) $L_2(x) = \varepsilon\sum_{n=0}^{\infty}\frac{t^n}{n!}\,(c_{n+1} + Bc_n), \qquad x(0) = x_0, \quad x'(0) = -Ax_0 + \varepsilon g(0) = x'_0.$

The solution of the IVP (2.5) is obtained by adding particular solutions with null initial conditions to the solution of the unperturbed problem with the given initial conditions. The former can be obtained by solving the following specific IVPs:

(2.6) $X_j'' + (A+B)X_j' + BA\,X_j = \frac{t^j}{j!}\,I_m, \quad j = 0, 1, \dots, \qquad X_j(0) = \bar{0}, \quad X_j'(0) = \bar{0}$

where X_j is a real function with values in the ring M(m, ℝ) of square matrices of order m, with I_m and 0̄ being, respectively, the unit and the zero element of this ring. The solutions of (2.6) are the so-called Φ-functions.

Definition 2.1.
(2.7) $\Phi_{j+2}(t) = X_j(t), \qquad j \ge 0, \ j \in \mathbb{N}.$
Proposition 2.1

(law of derivation). The Φ-functions verify:

(2.8) $\Phi'_j(t) = \Phi_{j-1}(t), \qquad j \ge 3, \ j \in \mathbb{N}.$
Proposition 2.2

(law of recurrence). The Φ-functions verify the following recurrence law:

(2.9) $\Phi_{j-2}(t) + (A+B)\,\Phi_{j-1}(t) + BA\,\Phi_j(t) = \frac{t^{j-2}}{(j-2)!}\,I_m, \qquad j \ge 4, \ j \in \mathbb{N}.$

In order to complete the construction of the Φ-functions given in (2.7), Φ₀(t) and Φ₁(t) are defined.

Definition 2.2.

Φ₀(t) and Φ₁(t) are, respectively, the solutions of the following IVPs:

(2.10) $X''(t) + (A+B)X'(t) + BA\,X(t) = \bar{0}, \qquad X(0) = I_m, \quad X'(0) = \bar{0}$
(2.11) $X''(t) + (A+B)X'(t) + BA\,X(t) = \bar{0}, \qquad X(0) = \bar{0}, \quad X'(0) = I_m.$
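As a computational illustration only (this is not the authors' implementation), the Φ-functions can be evaluated directly from their defining IVPs (2.6), (2.10) and (2.11) by a truncated Taylor series: writing $X(t) = \sum_k X_k t^k$ and matching powers of t gives the matrix recurrence $(k+2)(k+1)X_{k+2} = \delta_{k,j}\,I_m/j! - (k+1)(A+B)X_{k+1} - BA\,X_k$. A minimal sketch, assuming only that A and B are given as square arrays:

```python
# Minimal Taylor-series sketch for evaluating Phi_j(h); not the authors' code.
import numpy as np
from math import factorial

def phi(j, h, A, B, terms=60):
    """Return Phi_j(h), j >= 0, by summing `terms` Taylor coefficients of its defining IVP."""
    m = A.shape[0]
    I, AB, BA = np.eye(m), A + B, B @ A
    if j == 0:                                       # IVP (2.10): X(0)=I, X'(0)=0
        X0, X1, rhs_index = I, np.zeros((m, m)), None
    elif j == 1:                                     # IVP (2.11): X(0)=0, X'(0)=I
        X0, X1, rhs_index = np.zeros((m, m)), I, None
    else:                                            # IVP (2.6) with forcing t^(j-2)/(j-2)! I
        X0, X1, rhs_index = np.zeros((m, m)), np.zeros((m, m)), j - 2
    Xk, Xk1 = X0, X1
    value = X0 + X1 * h
    hpow = h
    for k in range(terms):
        rhs = (I / factorial(rhs_index)) if rhs_index == k else np.zeros((m, m))
        # (k+2)(k+1) X_{k+2} = rhs - (k+1)(A+B) X_{k+1} - BA X_k
        Xk2 = (rhs - (k + 1) * (AB @ Xk1) - BA @ Xk) / ((k + 2) * (k + 1))
        hpow *= h                                    # h^{k+2}
        value = value + Xk2 * hpow
        Xk, Xk1 = Xk1, Xk2
    return value
```

For A = B = 0 this reproduces $\Phi_{j+2}(t) = t^{j+2}/(j+2)!$, which follows directly from (2.6).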

The law of derivation presented in Proposition 2.1 is completed by the proposition below.

Proposition 2.3.
$\Phi'_2(t) = \Phi_1(t).$
Theorem 2.1.

The solution of the IVP

(2.12) $L_2(x) = \bar{0}, \qquad x(0) = x_0, \quad x'(0) = -Ax_0 + \varepsilon g(0) = x'_0$

is [4]

(2.13) $\Phi_0(t)\,x_0 + \Phi_1(t)\,x'_0.$
Theorem 2.2.

The solution of the IVP (2.5), in terms of Φ-functions [4], is given by

(2.14) $x(t) = \Phi_0(t)\,x_0 + \Phi_1(t)\,x'_0 + \varepsilon\sum_{n=2}^{\infty}\Phi_n(t)\,(c_{n-1} + Bc_{n-2}).$
Proposition 2.4

(truncation error). Carrying out a truncation with m + 1 Φ-functions, m ≥ 2,

$x_m(t) = \Phi_0(t)\,x_0 + \Phi_1(t)\,x'_0 + \varepsilon\sum_{n=0}^{m-2}\Phi_{n+2}(t)\,(c_{n+1} + Bc_n),$

the truncation error corresponding to x_m(t) is given by [4]

(2.15) $E_m(t) = \varepsilon\sum_{n=m-1}^{\infty}\frac{t^n}{n!}\,(c_{n+1} + Bc_n).$

As a result, the truncation error is small with ε. If ε = 0, that is, if the perturbation disappears at an arbitrary instant of the independent variable t, the Φ-function series integrates (2.5) without discretisation error.

The above results and their proofs are presented in detail in [4].

3 Multistep methods

The series method described in [4] is very precise; however, it is difficult to adapt it to each specific problem. To overcome this difficulty, we describe the conversion of the series method into a multistep method similar to the SMF [10, 11], EIpPC [15] and VSVO [26] methods, which has the advantage of integrating the perturbed problem exactly under certain conditions.

Denoting t_n = t_{n−1} + h_n, with n = 1, 2, …, an approximation to the solution x(t) at the point t = t₁, i.e. x₁ ≈ x(t₁), is given by:

(3.1) $x_1 = \Phi_0(h_1)\,x_0 + \Phi_1(h_1)\,x'_0 + \varepsilon\sum_{n=0}^{m-2}\Phi_{n+2}(h_1)\,(c_{n+1} + Bc_n).$

Let us suppose that we have calculated approximations to the solution x(t) and its derivative x′(t) at the point t = t_n; we shall call these approximations x_n and x′_n, respectively.

As

(3.2) $L_2(x(t)) = (D+B)\,\varepsilon g(t), \qquad x(t_n) = x_n, \quad x'(t_n) = x'_n = -Ax_n + \varepsilon g(t_n)$

to calculate an approximation to the solution at the point t_{n+1}, the change of independent variable t = τ + t_n is made, turning (3.2) into the system:

(3.3) $L_2(x(\tau)) = (D+B)\,\varepsilon g(\tau), \qquad x(0) = x_n, \quad x'(0) = x'_n.$

The expansion coefficients of the perturbation are calculated as

(3.4) $f(x(\tau), \tau + t_n) = g(\tau) = \sum_{k=0}^{\infty}\frac{\tau^k}{k!}\,q_k$

where

(3.5) $q_k = \frac{d^k g(0)}{d\tau^k} = \frac{d^k g(t_n)}{dt^k} = \bigl(g_1^{(k)}(t_n), \dots, g_m^{(k)}(t_n)\bigr)^t.$

The approximation to the solution at the point t_{n+1} = t_n + h_{n+1} is given by

(3.6) $x_{n+1} = \Phi_0(h_{n+1})\,x_n + \Phi_1(h_{n+1})\,x'_n + \varepsilon\sum_{k=0}^{m-2}\Phi_{k+2}(h_{n+1})\,(q_{k+1} + Bq_k).$

3.1 Explicit multistep method MDFSpE, for perturbed systems

In order to obtain an explicit method the derivatives of the perturbation function are substituted by divided differences.

To construct a variable-step explicit multistep method of p steps, the series

$\sum_{k=0}^{\infty}\Phi_{k+2}(t)\,(q_{k+1} + Bq_k)$

is truncated so that the highest order derivative kept is of order (p − 1), i.e. q_{p−1} = g^{(p−1)}(t_n), so that:

(3.7) $x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\Phi_1(h_{n+1})\,q_0 + \varepsilon\sum_{k=0}^{p-2}\Phi_{k+2}(h_{n+1})\,(q_{k+1} + Bq_k) + \varepsilon\Phi_{p+1}(h_{n+1})\,Bq_{p-1}.$

Properly rearranging (3.7), we obtain:

(3.8) $x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{k=0}^{p-1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)q_k.$

Therefore it is necessary to use the divided differences of order (p − 1) of each of the component fields of the function g, at the values t_n, …, t_{n−p+1}, that is, g_i[t_n, …, t_{n−p+1}] with i = 1, …, m.

The divided differences of an arbitrary function g(t) satisfy the identity

(3.9) $g[t_n, \dots, t_{n-i}] = \sum_{j=0}^{\infty} P_j[0, H_1, \dots, H_i]\; g^{(j)}(t_n)$

with $P_k(t) = t^k/k!$ and $H_i = t_n - t_{n-i}$ [15, 25].
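For illustration, the divided differences g[t_n, …, t_{n−k}] of the (vector-valued) perturbation that the method needs can be built with the standard Newton table; the helper below is an assumed sketch, not code from the paper.

```python
# Standard Newton divided-difference table, applied componentwise to vectors g(t_i).
import numpy as np

def divided_differences(ts, gs):
    """ts: nodes t_n, t_{n-1}, ..., t_{n-p+1};  gs: values g(t_i) at those nodes.
    Returns dd with dd[k] = g[t_n, ..., t_{n-k}] (componentwise), k = 0..p-1."""
    table = [np.asarray(g, dtype=float) for g in gs]
    dd = [table[0].copy()]
    for k in range(1, len(ts)):
        for i in range(len(ts) - k):
            # g[t_i, ..., t_{i+k}] = (g[t_i..t_{i+k-1}] - g[t_{i+1}..t_{i+k}]) / (t_i - t_{i+k})
            table[i] = (table[i] - table[i + 1]) / (ts[i] - ts[i + k])
        dd.append(table[0].copy())
    return dd
```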

Denoting by

(3.10) $D_{p,n} = \bigl((j-1)!\; g_i[t_n, \dots, t_{n-j+1}]\bigr)_{\substack{i=1,\dots,m\\ j=1,\dots,p}}$

the matrix of order m × p, and setting H = max{H₁, …, H_{p−1}}, the following identity holds:

(3.11) $D^t_{p,n} = A_p\,\bigl(g_j^{(i-1)}(t_n)\bigr)_{\substack{i=1,\dots,p\\ j=1,\dots,m}} + O_{p\times m}$

where Op×m is a matrix of order p × m, whose i-th row is

$\bigl(O(H^{p-i+1}) \;\;\cdots\;\; O(H^{p-i+1})\bigr)$

and A_p is the nonsingular matrix of order p:

(3.12) $$A_p = \begin{pmatrix}1 & P_1[0] & P_2[0] & \dots & P_{p-1}[0]\\ 0 & 1 & 1!\,P_2[0,H_1] & \dots & 1!\,P_{p-1}[0,H_1]\\ 0 & 0 & 1 & \dots & 2!\,P_{p-1}[0,H_1,H_2]\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \dots & 1\end{pmatrix}_{p\times p}$$

Using a more compact notation

(3.13) $D^t_{p,n} = A_p \times Z_{p\times m} + O_{p\times m}, \qquad Z_{p\times m} = \bigl(g_j^{(i-1)}(t_n)\bigr)_{\substack{i=1,\dots,p\\ j=1,\dots,m}}$

Truncating the expansion obtained and solving for Z_{p×m} results in:

(3.14) $Z_{p\times m} = A_p^{-1} \times \bigl(D^t_{p,n}\bigr)_{p\times m}$

Replacing in (3.8) the derivatives of the component fields of the perturbation function, we can write

(3.15) $x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{k=0}^{p-1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)\,p_{k+1}$

where p_j, with j = 1, …, p, is the j-th column of the matrix $D_{p,n}\times A_p^{-t}$.

Let $(d_{ij})_{\substack{i=1,\dots,p\\ j=1,\dots,p}}$ be the matrix $A_p^{-t}$; then

(3.16) $$p_j = \begin{pmatrix}\sum_{i=1}^{p} g_1[t_n,\dots,t_{n-(i-1)}]\,d_{ij}\,(i-1)!\\ \sum_{i=1}^{p} g_2[t_n,\dots,t_{n-(i-1)}]\,d_{ij}\,(i-1)!\\ \vdots\\ \sum_{i=1}^{p} g_m[t_n,\dots,t_{n-(i-1)}]\,d_{ij}\,(i-1)!\end{pmatrix}$$

which, substituted in $\sum_{k=0}^{p-1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)p_{k+1}$, allows us to write

(3.17) $$\sum_{k=0}^{p-1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)p_{k+1} = \sum_{i=1}^{p}\left(\sum_{k=0}^{p-1}(i-1)!\,d_{i,k+1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)\right)\begin{pmatrix}g_1[t_n,\dots,t_{n-(i-1)}]\\ g_2[t_n,\dots,t_{n-(i-1)}]\\ \vdots\\ g_m[t_n,\dots,t_{n-(i-1)}]\end{pmatrix}$$

Defining

(3.18) $\Lambda_i = \sum_{k=0}^{p-1}(i-1)!\,d_{i,k+1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr), \qquad i = 1, \dots, p$

we obtain

(3.19) $$\sum_{k=0}^{p-1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)p_{k+1} = \sum_{i=1}^{p}\Lambda_i\begin{pmatrix}g_1[t_n,\dots,t_{n-(i-1)}]\\ g_2[t_n,\dots,t_{n-(i-1)}]\\ \vdots\\ g_m[t_n,\dots,t_{n-(i-1)}]\end{pmatrix}$$

Replacing (3.19) in (3.15), we obtain the following formula for an explicit multistep method:

(3.20) $$x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{i=1}^{p}\Lambda_i\begin{pmatrix}g_1[t_n,\dots,t_{n-(i-1)}]\\ g_2[t_n,\dots,t_{n-(i-1)}]\\ \vdots\\ g_m[t_n,\dots,t_{n-(i-1)}]\end{pmatrix}$$

Introducing the following notation, we obtain the next definition.

Definition 3.1.

Let xn be the approximation to the value of the solution at the point tn, and let

$\Lambda_i = \sum_{k=0}^{p-1}(i-1)!\,d_{i,k+1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr), \qquad (d_{ij})_{\substack{i=1,\dots,p\\ j=1,\dots,p}} = A_p^{-t},$
$F^t_{n,i} = \bigl(g_1[t_n,\dots,t_{n-(i-1)}], \dots, g_m[t_n,\dots,t_{n-(i-1)}]\bigr).$

The starting values for x are: x0, x1, x2, x3, …, xp−1.

The explicit method MDFSpE of p steps with variable step size, for perturbed linear systems, is formulated through the following equation:

$x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{i=1}^{p}\Lambda_i\,F_{n,i}, \qquad n \ge p-1.$

3.2 Implicit multistep method MDFSpI, for perturbed systems

In order to obtain an implicit method of p steps, we use the same idea as in the previous section: the series $\sum_{k=0}^{\infty}\Phi_{k+2}(t)\,(q_{k+1} + Bq_k)$ is truncated so that the highest order derivative kept is of order p, i.e. q_p = g^{(p)}(t_n), corresponding to the last divided difference g[t_{n+1}, …, t_{n+1−p}], so that:

(3.21) $x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\Phi_1(h_{n+1})\,q_0 + \varepsilon\sum_{k=0}^{p-1}\Phi_{k+2}(h_{n+1})\,(q_{k+1} + Bq_k) + \varepsilon\Phi_{p+2}(h_{n+1})\,Bq_p$

Rearranging (3.21), we obtain:

(3.22) $x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{k=0}^{p}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)q_k.$

It is necessary to use the divided differences of order p of each of the component fields of the function g, at the values t_{n+1}, …, t_{n+1−p}.

Given that h_{n+1} = t_{n+1} − t_n, it is defined

(3.23) $H_i = t_n - t_{n-i}.$

Denoting by

(3.24) $D_{p,n+1} = \bigl((j-1)!\; g_i[t_{n+1}, \dots, t_{n-j+2}]\bigr)_{\substack{i=1,\dots,m\\ j=1,\dots,p+1}}$

the matrix of order m × (p + 1), and setting H = max{h_{n+1}, H₁, …, H_{p−1}}, the following identity holds:

(3.25) $D^t_{p,n+1} = B_p\,\bigl(g_j^{(i-1)}(t_n)\bigr)_{\substack{i=1,\dots,p+1\\ j=1,\dots,m}} + O_{(p+1)\times m}$

where O(p+1)×m is a matrix of order (p + 1) × m, whose i-th row is

$\bigl(O(H^{p-i+2}) \;\;\cdots\;\; O(H^{p-i+2})\bigr)$

and B_p is the nonsingular matrix of order p + 1:

(3.26) $$B_p = \begin{pmatrix}1 & P_1[h_{n+1}] & P_2[h_{n+1}] & \dots & P_p[h_{n+1}]\\ 0 & 1 & 1!\,P_2[h_{n+1},0] & \dots & 1!\,P_p[h_{n+1},0]\\ 0 & 0 & 1 & \dots & 2!\,P_p[h_{n+1},0,H_1]\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \dots & 1\end{pmatrix}_{(p+1)\times(p+1)}$$

Using a more compact notation:

(3.27) $D^t_{p,n+1} = B_p \times Z_{(p+1)\times m} + O_{(p+1)\times m}, \qquad Z_{(p+1)\times m} = \bigl(g_j^{(i-1)}(t_n)\bigr)_{\substack{i=1,\dots,p+1\\ j=1,\dots,m}}.$

Truncating the expansion obtained and solving for Z(p+1)×m results in:

(3.28) $Z_{(p+1)\times m} = B_p^{-1} \times \bigl(D^t_{p,n+1}\bigr)_{(p+1)\times m}$

Replacing in (3.22) the derivatives of the component fields of the perturbation function, we can write

(3.29) $x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{k=0}^{p}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)\,p_{k+1}$

where p_j, with j = 1, …, p + 1, is the j-th column of the matrix $D_{p,n+1}\times B_p^{-t}$.

Let $(d_{ij})_{\substack{i=1,\dots,p+1\\ j=1,\dots,p+1}}$ be the matrix $B_p^{-t}$; then

(3.30) $$p_j = \begin{pmatrix}\sum_{i=1}^{p+1} g_1[t_{n+1},\dots,t_{n+1-(i-1)}]\,d_{ij}\,(i-1)!\\ \sum_{i=1}^{p+1} g_2[t_{n+1},\dots,t_{n+1-(i-1)}]\,d_{ij}\,(i-1)!\\ \vdots\\ \sum_{i=1}^{p+1} g_m[t_{n+1},\dots,t_{n+1-(i-1)}]\,d_{ij}\,(i-1)!\end{pmatrix}$$

which, substituted in $\sum_{k=0}^{p}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)p_{k+1}$, allows us to write:

(3.31) $$\sum_{k=0}^{p}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)p_{k+1} = \sum_{i=1}^{p+1}\left(\sum_{k=0}^{p}(i-1)!\,d_{i,k+1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)\right)\begin{pmatrix}g_1[t_{n+1},\dots,t_{n+1-(i-1)}]\\ g_2[t_{n+1},\dots,t_{n+1-(i-1)}]\\ \vdots\\ g_m[t_{n+1},\dots,t_{n+1-(i-1)}]\end{pmatrix}$$

Defining

(3.32) $\Gamma_i = \sum_{k=0}^{p}(i-1)!\,d_{i,k+1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr), \qquad i = 1, \dots, p+1$

we obtain

(3.33) $$\sum_{k=0}^{p}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr)p_{k+1} = \sum_{i=1}^{p+1}\Gamma_i\begin{pmatrix}g_1[t_{n+1},\dots,t_{n+1-(i-1)}]\\ g_2[t_{n+1},\dots,t_{n+1-(i-1)}]\\ \vdots\\ g_m[t_{n+1},\dots,t_{n+1-(i-1)}]\end{pmatrix}$$

Replacing (3.33) in (3.29), we obtain the following formula for an implicit multistep method:

(3.34) $$x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{i=1}^{p+1}\Gamma_i\begin{pmatrix}g_1[t_{n+1},\dots,t_{n+1-(i-1)}]\\ g_2[t_{n+1},\dots,t_{n+1-(i-1)}]\\ \vdots\\ g_m[t_{n+1},\dots,t_{n+1-(i-1)}]\end{pmatrix}$$

The next notation is introduced, giving rise to the following definition.

Definition 3.2.

Let xn be the approximation to the value of the solution at the point tn, and

$\Gamma_i = \sum_{k=0}^{p}(i-1)!\,d_{i,k+1}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr), \qquad B_p^{-t} = (d_{ij})_{\substack{i=1,\dots,p+1\\ j=1,\dots,p+1}},$
$T^t_{n,i} = \bigl(g_1[t_{n+1},\dots,t_{n+1-(i-1)}], \dots, g_m[t_{n+1},\dots,t_{n+1-(i-1)}]\bigr).$

The starting values for x are: x₀, x₁, x₂, x₃, …, x_{p−1}.

The implicit method MDFSpI of p steps with variable step size, for perturbed linear systems, is formulated through the following equation:

$x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{i=1}^{p+1}\Gamma_i\,T_{n,i}, \qquad n \ge p-1.$

3.3 Predictor-corrector multistep method MDFSpPC, for perturbed systems

We define the predictor-corrector method with variable step size of p steps, MDFSpPC, for perturbed linear systems, which has MDFSpE as predictor and MDFSpI as corrector.

The predictor-corrector mode used is P(EC)^μE^{1−t} with μ = t = 1.
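Schematically, one step in this P(EC) mode proceeds as below; the callables `predictor`, `corrector` and `g` are hypothetical placeholders standing for the formulas of Definitions 3.1 and 3.2 and the perturbation evaluation, not names taken from the paper.

```python
# Schematic P(EC) driver (mu = t = 1): predict, evaluate once, correct once, no final evaluation.
def pec_step(predictor, corrector, g, history, t_next, h_next):
    x_pred = predictor(history, h_next)            # P: explicit MDFSpE-type formula
    g_pred = g(x_pred, t_next)                     # E: evaluate the perturbation at the prediction
    x_corr = corrector(history, h_next, g_pred)    # C: one implicit MDFSpI-type correction
    return x_corr                                  # t = 1: the corrected value is not re-evaluated
```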

4 Recurrent calculus of the matrices A_p^{-t}, B_p^{-t} and new definition of the multistep methods

The methods described in Definitions 3.1 and 3.2 present the difficulty that the coefficients of the matrices $A_p^{-t} = (d_{ij})_{\substack{i=1,\dots,p\\ j=1,\dots,p}}$ and $B_p^{-t} = (d_{ij})_{\substack{i=1,\dots,p+1\\ j=1,\dots,p+1}}$ are not expressed in a recurrent way, which makes it difficult to code them so as to automate their calculation.

Once this problem is resolved, the methods can be coded and allow one to choose the step size and the number of steps that each execution requires.

The problem is then reduced to finding a recurrent formula that allows us to calculate the elements of the matrices $A_p^{-t}$ and $B_p^{-t}$.

4.1 Recurrent calculus of A_p^{-t} and new definition of the multistep explicit method

To obtain a recurrent calculus of the matrix elements $A_p^{-t} = (d_{ij})_{\substack{i=1,\dots,p\\ j=1,\dots,p}}$ and $B_p^{-t} = (d_{ij})_{\substack{i=1,\dots,p+1\\ j=1,\dots,p+1}}$, allowing us to construct a variable-step, variable-order method (VSVO), we use the elementary symmetric functions e_{n,r} and the complete symmetric functions h_{n,r} [9, 15, 23], defined as:

(4.1) $e_{n,0} = 1, \qquad e_{n,r} = \sum_{i_1 < i_2 < \dots < i_r \le n} t_{i_1}\cdots t_{i_r}, \qquad e_{n,r} = 0 \ \text{for}\ r < 0$
(4.2) $h_{n,r} = \sum_{|\lambda| = r}\sum_{S_\lambda} t^\alpha, \qquad \lambda = (\lambda_1, \dots, \lambda_n) \in \mathbb{N}^n, \quad |\lambda| = \lambda_1 + \dots + \lambda_n$

and $S_\lambda$ = {all the different permutations α = (α₁ … α_n) of λ}, with $t^\alpha = t_1^{\alpha_1}\cdots t_n^{\alpha_n}$.

Particularly hn,0 = 1 and hn,1 = en,1. In the case r < 0, it is defined as hn,r = 0.

Between the divided differences of g(t) = t^m, which we will denote by t^m[t₁, …, t_n], and the complete symmetric polynomials the following relation holds: t^m[t₁, …, t_n] = h_{n,m−n+1}.
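As a quick illustrative check of this identity (not part of the original text), the snippet below verifies symbolically that $t^4[t_1, t_2, t_3] = h_{3,2}$:

```python
# Symbolic check of t^m[t_1,...,t_n] = h_{n, m-n+1} for m = 4, n = 3.
import sympy as sp
from itertools import combinations_with_replacement

def divided_difference(f, nodes):
    if len(nodes) == 1:
        return f(nodes[0])
    return sp.cancel((divided_difference(f, nodes[1:]) - divided_difference(f, nodes[:-1]))
                     / (nodes[-1] - nodes[0]))

def complete_symmetric(nodes, r):
    if r < 0:
        return sp.Integer(0)
    # sum of all monomials of total degree r in the given variables
    return sum(sp.prod(c) for c in combinations_with_replacement(nodes, r))

t1, t2, t3 = sp.symbols('t1 t2 t3')
m, nodes = 4, (t1, t2, t3)
lhs = divided_difference(lambda t: t**m, nodes)          # t^4[t1, t2, t3]
rhs = complete_symmetric(nodes, m - len(nodes) + 1)      # h_{3,2}
assert sp.expand(lhs - rhs) == 0
```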

Considering the point t_n, the complete symmetric functions are defined [15, 23]:

(4.3) $q_{i,j}(n) = t^{j-1}[H_n, \dots, H_{n-(i-1)}], \qquad \sigma_{i,j}(n) = (-1)^{j-i}\, e_{j-1,\,j-i}$

in the values $H_{n-k} = t_{n-k} - t^*$ with k = 0, …, i − 1 and t* ∈ [a, b].

The square matrices of order k,

(4.4) $P_{k,n} = \bigl(q_{i,j}(n)\bigr)_{\substack{i=1,\dots,k\\ j=1,\dots,k}}, \qquad S_{k,n} = \bigl(\sigma_{i,j}(n)\bigr)_{\substack{i=1,\dots,k\\ j=1,\dots,k}}$

are inverse to each other.

As $H_{n-j} = t_{n-j} - t^*$ and $H_j = t_n - t_{n-j}$, we can write $(t_n - t^*) - H_j = H_{n-j}$ with j = 0, …, i − 1. In the particular case t* = t_n, we get $H_{n-j} = -H_j$ with j = 0, …, i − 1.

The divided differences of one function g satisfy the property:

(4.5) $g[t_n, \dots, t_{n-(i-1)}] = \sum_{j=0}^{\infty} q_{i,j+1}(n)\,\frac{1}{j!}\,g^{(j)}(t^*).$

If H = max{|H_n|, …, |H_{n−(i−1)}|}, since q_{i,j}(n) has order j − i in H, due to the last result we can write

(4.6) $g[t_n, t_{n-1}, \dots, t_{n-(i-1)}] = \sum_{j=0}^{p-1} q_{i,j+1}(n)\,\frac{1}{j!}\,g^{(j)}(t^*) + O\bigl(H^{p-(i-1)}\bigr), \qquad i = 1, \dots, p.$

Considering t* = tn and expressing those equalities in a matricial way, we have

(4.7) $$\begin{pmatrix}g[t_n]\\ g[t_n, t_{n-1}]\\ \vdots\\ g[t_n, \dots, t_{n-(p-1)}]\end{pmatrix} = \begin{pmatrix}q_{1,1}(n) & \dots & q_{1,p}(n)\\ q_{2,1}(n) & \dots & q_{2,p}(n)\\ \vdots & & \vdots\\ q_{p,1}(n) & \dots & q_{p,p}(n)\end{pmatrix}\begin{pmatrix}g(t_n)\\ \frac{g'(t_n)}{1!}\\ \vdots\\ \frac{g^{(p-1)}(t_n)}{(p-1)!}\end{pmatrix} + \begin{pmatrix}O(H^p)\\ O(H^{p-1})\\ \vdots\\ O(H)\end{pmatrix}$$

and as $q_{i,j+1}(n) = h_{i,j}$ in the arguments $H_n, \dots, H_{n-(i-1)}$, we can write

(4.8) $$\begin{pmatrix}g[t_n]\\ g[t_n, t_{n-1}]\\ \vdots\\ g[t_n, \dots, t_{n-(p-1)}]\end{pmatrix} = \begin{pmatrix}1 & h_{1,1} & \dots & h_{1,p-1}\\ 0 & 1 & \dots & h_{2,p-1}\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \dots & 1\end{pmatrix}\begin{pmatrix}g(t_n)\\ \frac{g'(t_n)}{1!}\\ \vdots\\ \frac{g^{(p-1)}(t_n)}{(p-1)!}\end{pmatrix} + \begin{pmatrix}O(H^p)\\ O(H^{p-1})\\ \vdots\\ O(H)\end{pmatrix}.$$

As $\sigma_{i,j}(n) = \sigma_{i-1,j-1}(n) - H_{n-j+2}\,\sigma_{i,j-1}(n)$ for i, j ≥ 2 [23], if we consider t* = t_n, then

(4.9) $$S_{p,n} = \bigl(\sigma_{i,j}(n)\bigr)_{\substack{i=1,\dots,p\\ j=1,\dots,p}}, \qquad \begin{cases}\sigma_{1,1}(n) = 1\\ \sigma_{1,j}(n) = 0, & 1 < j \le p\\ \sigma_{i,1}(n) = 0, & 1 < i \le p\\ \sigma_{i,j}(n) = \sigma_{i-1,j-1}(n) - H_{n-j+2}\,\sigma_{i,j-1}(n), & 2 \le i, j \le p.\end{cases}$$
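A minimal sketch (assuming the offsets $H_k = t_n - t_{n-k}$ are given, so that $H_{n-k} = -H_k$ when t* = t_n) of this recurrent construction of S_{p,n}:

```python
# Build S_{p,n} by the recurrence (4.9); illustrative sketch, not the authors' code.
import numpy as np

def S_matrix(H, p):
    """H is a list with H[0] = 0 and H[k] = t_n - t_{n-k} for k = 1..p-1."""
    S = np.zeros((p, p))
    S[0, 0] = 1.0                       # sigma_{1,1} = 1; rest of first row / column stays 0
    for i in range(1, p):               # 0-based (i, j) corresponds to sigma_{i+1, j+1}
        for j in range(1, p):
            # multiplier H_{n-j+2} of (4.9) equals t_{n-(j-2)} - t_n = -H_{j-2} (1-based j),
            # i.e. -H[j-1] with the 0-based j used here.
            S[i, j] = S[i - 1, j - 1] - (-H[j - 1]) * S[i, j - 1]
    return S
```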

The recurrent form of the matrix $A_p^{-t}$ is obtained through:

(4.10) $A_p^{-t} = M_p \times \bigl(P_{p,n}^{-1}\bigr)^t \times N_p = M_p \times S_{p,n}^t \times N_p$

that is to say

(4.11) $d_{i,j} = \frac{(j-1)!\,\sigma_{j,i}(n)}{(i-1)!}, \qquad i, j = 1, \dots, p$

where $M_p = (m_{ij})_{\substack{i=1,\dots,p\\ j=1,\dots,p}}$ is the diagonal matrix whose diagonal entries are 1/i! for i = 0, …, p − 1, and $N_p = M_p^{-1}$.

The expressions (4.10) and (4.11) allow us to compute the matrix $A_p^{-t}$ by recurrence, from the matrix $S_{p,n}^t$.

Substituting (4.11) in Definition 3.1, we obtain the modified explicit method.

Definition 4.1.

Let xn be the approximation to the value of the solution at the point tn, and let

$\Lambda_i = \sum_{k=1}^{p}(k-1)!\,\sigma_{k,i}\bigl(\Phi_k(h_{n+1}) + \Phi_{k+1}(h_{n+1})B\bigr), \qquad i = 1, \dots, p,$
$F^t_{n,i} = \bigl(g_1[t_n,\dots,t_{n-(i-1)}], \dots, g_m[t_n,\dots,t_{n-(i-1)}]\bigr).$

The starting values for x are: x0, x1, x2, x3, …, xp−1.

The explicit method MDFSpE of p steps with variable step size, for perturbed linear systems, is formulated through the following equation:

$x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{i=1}^{p}\Lambda_i\,F_{n,i}, \qquad n \ge p-1.$
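Putting the pieces together, one step of this explicit method could be sketched as below; this assumes the helper routines `phi`, `S_matrix` and `divided_differences` from the earlier sketches and is an illustration of the formula of Definition 4.1, not the authors' implementation.

```python
# One explicit MDFSpE-type step (illustrative sketch under the stated assumptions).
import numpy as np
from math import factorial

def mdfsp_explicit_step(A, B, x_n, ts, gs, h_next, eps=1.0):
    """ts = [t_n, t_{n-1}, ..., t_{n-p+1}];  gs = [g(x_i, t_i)] at those nodes."""
    p = len(ts)
    H = [ts[0] - ts[k] for k in range(p)]               # H_0 = 0, H_k = t_n - t_{n-k}
    sigma = S_matrix(H, p)                               # sigma_{k,i} via recurrence (4.9)
    F = divided_differences(ts, gs)                      # F_{n,i} = g[t_n, ..., t_{n-(i-1)}]
    Phi = [phi(j, h_next, A, B) for j in range(p + 2)]   # Phi_0 ... Phi_{p+1}
    x_next = (Phi[0] - Phi[1] @ A) @ x_n
    for i in range(1, p + 1):
        # Lambda_i = sum_{k=1}^{p} (k-1)! sigma_{k,i} (Phi_k + Phi_{k+1} B)
        Lam = sum(factorial(k - 1) * sigma[k - 1, i - 1] * (Phi[k] + Phi[k + 1] @ B)
                  for k in range(1, p + 1))
        x_next = x_next + eps * Lam @ F[i - 1]
    return x_next
```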

4.2 Recurrent calculus of B_p^{-t} and new definition of the multistep implicit method

As in the case above, we use the elementary symmetric functions e_{n,r} and the complete symmetric functions h_{n,r} to obtain a recurrent calculus of the matrix elements $B_p^{-t} = (d_{ij})_{\substack{i=1,\dots,p+1\\ j=1,\dots,p+1}}$, allowing us to construct a variable-step, variable-order method (VSVO).

Taking $h_{n+1} = t_{n+1} - t^*$, the divided differences of a function g satisfy the property:

(4.12) $g[t_{n+1}, t_n, \dots, t_{n-(i-1)}] = \sum_{j=0}^{\infty} q_{i+1,j+1}(n)\,\frac{1}{j!}\,g^{(j)}(t^*).$

If H = max{|h_{n+1}|, |H_n|, …, |H_{n−(i−1)}|}, since q_{i,j}(n) has order j − i in H, due to the last result we can write

(4.13) $g[t_{n+1}, \dots, t_{n-(i-1)}] = \sum_{j=0}^{p} q_{i+1,j+1}(n)\,\frac{1}{j!}\,g^{(j)}(t^*) + O\bigl(H^{p-(i-1)}\bigr), \qquad i = 1, \dots, p.$

Considering t* = tn and expressing those equalities in a matricial way, we have

(4.14) $$\begin{pmatrix}g[t_{n+1}]\\ g[t_{n+1}, t_n]\\ \vdots\\ g[t_{n+1}, \dots, t_{n-(p-1)}]\end{pmatrix} = \begin{pmatrix}q_{1,1}(n) & \dots & q_{1,p+1}(n)\\ q_{2,1}(n) & \dots & q_{2,p+1}(n)\\ \vdots & & \vdots\\ q_{p+1,1}(n) & \dots & q_{p+1,p+1}(n)\end{pmatrix}\begin{pmatrix}g(t_n)\\ \frac{g'(t_n)}{1!}\\ \vdots\\ \frac{g^{(p)}(t_n)}{p!}\end{pmatrix} + \begin{pmatrix}O(H^{p+1})\\ O(H^p)\\ \vdots\\ O(H)\end{pmatrix}$$

and as $q_{i+1,j+1}(n) = h_{i+1,j-i}$ in the arguments $h_{n+1}, H_n, \dots, H_{n-(i-1)}$, we can write

(4.15) $$\begin{pmatrix}g[t_{n+1}]\\ g[t_{n+1}, t_n]\\ \vdots\\ g[t_{n+1}, \dots, t_{n-(p-1)}]\end{pmatrix} = \begin{pmatrix}1 & h_{1,1} & \dots & h_{1,p}\\ 0 & 1 & \dots & h_{2,p}\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \dots & 1\end{pmatrix}\begin{pmatrix}g(t_n)\\ \frac{g'(t_n)}{1!}\\ \vdots\\ \frac{g^{(p)}(t_n)}{p!}\end{pmatrix} + \begin{pmatrix}O(H^{p+1})\\ O(H^p)\\ \vdots\\ O(H)\end{pmatrix}.$$

As $\sigma_{i,j}(n) = \sigma_{i-1,j-1}(n) - H_{n-j+3}\,\sigma_{i,j-1}(n)$ for i, j ≥ 2 [23], if we consider t* = t_n, then

(4.16) $$S_{p,n+1} = \bigl(\sigma_{i,j}(n)\bigr)_{\substack{i=1,\dots,p+1\\ j=1,\dots,p+1}}, \qquad \begin{cases}\sigma_{1,1}(n) = 1\\ \sigma_{1,2}(n) = -h_{n+1}\\ \sigma_{1,j}(n) = 0, & 2 < j \le p+1\\ \sigma_{i,1}(n) = 0, & 1 < i \le p+1\\ \sigma_{i,j}(n) = \sigma_{i-1,j-1}(n) - H_{n-j+3}\,\sigma_{i,j-1}(n), & 2 \le i, j \le p+1.\end{cases}$$

The recurrent form of the matrix $B_p^{-t}$ is obtained through:

(4.17) $B_p^{-t} = M_{p+1} \times S_{p,n+1}^t \times N_{p+1}$

that is

(4.18) $d_{i,j} = \frac{(j-1)!\,\sigma_{j,i}(n)}{(i-1)!}, \qquad i, j = 1, \dots, p+1.$

In this case $M_{p+1} = (m_{ij})_{\substack{i=1,\dots,p+1\\ j=1,\dots,p+1}}$ is the diagonal matrix whose diagonal entries are 1/i! for i = 0, …, p, and $N_{p+1} = M_{p+1}^{-1}$.

The expressions (4.17) and (4.18) allow us to compute the matrix $B_p^{-t}$ by recurrence, from the matrix $S_{p,n+1}^t$.

Substituting (4.18) in Definition 3.2, we obtain the modified implicit method.

Definition 4.2.

Let xn be the approximation to the value of the solution at the point tn, and let

$\Gamma_i = \sum_{k=0}^{p} k!\,\sigma_{k+1,i}\bigl(\Phi_{k+1}(h_{n+1}) + \Phi_{k+2}(h_{n+1})B\bigr), \qquad i = 1, \dots, p+1,$
$T^t_{n,i} = \bigl(g_1[t_{n+1},\dots,t_{n+1-(i-1)}], \dots, g_m[t_{n+1},\dots,t_{n+1-(i-1)}]\bigr).$

The starting values for x are: x0, x1, x2, x3, …, xp−1.

The implicit method MDFSpI of p steps with variable step size, for perturbed linear systems, is formulated through the following equation:

$x_{n+1} = \bigl(\Phi_0(h_{n+1}) - \Phi_1(h_{n+1})A\bigr)x_n + \varepsilon\sum_{i=1}^{p+1}\Gamma_i\,T_{n,i}, \qquad n \ge p-1.$

4.3 New predictor-corrector multistep method

We define the predictor-corrector method with variable step size of p steps, MDFSpPC, for perturbed linear systems, which has MDFSpE as predictor and MDFSpI as corrector, with the previous definitions.

5 Numerical experiments

In this section we use the MDFSpPC method to solve the test problems proposed in [4], showing the validity of this VSVO method, as we can obtain similar precision to that obtained with the Φ-function series method without having to design the recurrences in each case.

The solutions obtained using the MDFSpPC method for stiff and highly oscillatory problems are compared with those calculated using the best-known codes:

  1. LSODE, which finds a numerical solution using the Livermore Stiff ODE solver.

  2. GEAR, which finds a numerical solution by way of a Bulirsch–Stoer rational extrapolation method.

  3. MGEAR [msteppart] is a multistep method suitable for stiff systems.

For these we use the MAPLE implementations, to ensure that the results are not distorted by deficient programming that would favour the new code.

The results obtained using the new method are also compared with those calculated using the multistep methods based on Scheifele’s G-functions, specifically with the EIpPC [15].

5.1 Problem 1

Let us consider the following stiff problem, which appears in [6, 7, 22]:

(5.1) $$\begin{cases}x_1'(t) = -2x_1(t) + x_2(t) + 2\sin(t)\\ x_2'(t) = -(\beta+2)\,x_1(t) + (\beta+1)\bigl(x_2(t) - \cos(t) + \sin(t)\bigr)\end{cases}$$

with initial conditions x1(0) = 2, x2(0) = 3, and exact solution, independent of β,

(5.2) $x_1(t) = 2e^{-t} + \sin(t), \qquad x_2(t) = 2e^{-t} + \cos(t).$

The eigenvalues of the system are −1 and β which enables its degree of stiffness to be regulated. For the case β = −1000, the following stiff problem is obtained, proposed in [8]:

(5.3) $$\begin{pmatrix}x_1'(t)\\ x_2'(t)\end{pmatrix} + \begin{pmatrix}2 & -1\\ -998 & 999\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix} = \begin{pmatrix}2\sin(t)\\ 999(\cos(t) - \sin(t))\end{pmatrix}, \qquad x_1(0) = 2, \quad x_2(0) = 3.$$

In this article (5.3) has been solved by the method MDFSpPC, based on the series of Φ-functions [4], using the matrix

$B = \begin{pmatrix}-1 & -\frac{2}{999}\\ 999 & 1\end{pmatrix}$

as the matrix annihilating the perturbation function. The expanded IVP, obtained by applying the differential operator (D + B) to (5.3), is expressed as:

(5.4) $$L_2(x) = \begin{pmatrix}x_1''(t)\\ x_2''(t)\end{pmatrix} + \begin{pmatrix}1 & -\frac{1001}{999}\\ 1 & 1000\end{pmatrix}\begin{pmatrix}x_1'(t)\\ x_2'(t)\end{pmatrix} + \begin{pmatrix}-\frac{2}{999} & -1\\ 1000 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix}$$

this being the problem solved by the method MDFSpPC.
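As an illustrative check (using the matrices as written above, not code taken from [4]), one can verify symbolically that this B indeed annihilates the perturbation of (5.3), i.e. (D + B)g(t) = g'(t) + Bg(t) = 0, and recover the coefficient matrices A + B and BA appearing in (5.4):

```python
# Quick symbolic verification for Problem 1 with sympy (illustrative only).
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, -1], [-998, 999]])
B = sp.Matrix([[-1, -sp.Rational(2, 999)], [999, 1]])
g = sp.Matrix([2*sp.sin(t), 999*(sp.cos(t) - sp.sin(t))])

assert (g.diff(t) + B*g).applyfunc(sp.simplify) == sp.zeros(2, 1)   # (D+B) g = 0
print(A + B)   # coefficient of x' in (5.4)
print(B*A)     # coefficient of x  in (5.4)
```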

The results are compared with those obtained by the integrators LSODE, MGEAR and GEAR, implemented in MAPLE, and with the multistep method EIpPC [15], based on the Scheifele G-functions.

Fig. 1 contrasts the decimal logarithm of the modulus of the relative error of the solution x(t), calculated using the Φ-functions method MDFSpPC with step size h = 10⁻³, 40 digits and p = 11, with the numerical integration codes MGEAR[msteppart] with errorper = Float(1, −13), LSODE with a tolerance of 10⁻²⁵, GEAR with errorper = Float(1, −22), and EIpPC with step size h = 10⁻³, 40 digits and p = 11.

Figure 1. Problem 1, relative error.

In Fig. 2 we show an efficiency plot where the Φ-functions multistep method is compared with integrations using well-known general purpose codes. The computation time is represented on the horizontal axis, in logarithmic scale, and the decimal logarithm of the integration error at the last point is shown on the vertical axis. The tolerances used in the standard codes are displayed in the figure in parentheses, marking each time–error point.

Figure 2. Problem 1, efficiency plot for the integration of x(t) at the last point.

We show the results of a few runs in which the number of Φ-functions has been kept fixed at 4 and the number of digits used in the computations, which of course limits the attainable accuracy, has been varied to illustrate the behaviour of the method. The accuracy increases as the number of digits does, with no noticeable additional computational overhead. That number of digits is marked beside the relevant point on the curve, the figure followed by ‘d’. To make the comparisons as fair as we can, the length of the mantissa used by MAPLE is adjusted according to the tolerances required of the integrator: for tolerances 10⁻¹³, 10⁻¹⁵, 10⁻¹⁷, 10⁻²¹ and 10⁻²⁵ in LSODE, and 10⁻¹³, 10⁻¹⁵, 10⁻¹⁷, 10⁻²¹, 10⁻²⁵, 10⁻³¹ and 10⁻³⁶ in GEAR, we use 13 + 4, 15 + 4, 17 + 4, 21 + 4, 25 + 4, 31 + 4 and 36 + 4 digits, respectively, to avoid a spurious increase of the computation times.

5.2 Problem 2

Let’s consider the highly oscillatory problem proposed by Petzold [13] and [12], which contains a harmonic oscillator:

(5.5) $$\begin{pmatrix}x_1'(t)\\ x_2'(t)\end{pmatrix} + \begin{pmatrix}0 & \lambda^2\\ -1 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix} = \begin{pmatrix}a\sin(\lambda t)\\ 0\end{pmatrix}, \qquad \begin{pmatrix}x_1(0)\\ x_2(0)\end{pmatrix} = \begin{pmatrix}-\frac{a}{2\lambda}\\ 1\end{pmatrix}.$$

Although its solution can be calculated exactly by means of analytical procedures, this example has been chosen to illustrate how the MDFSpPC works for highly oscillatory harmonic perturbation functions.

In order to obtain more easily a matrix B which annihilates the perturbation function, the procedure described by Steffensen [20, 21] is applied.

We define a new variable x₃(t) = a sin(λt), which allows the system (5.5) to be expressed as follows:

(5.6) $$\begin{pmatrix}x_1'(t)\\ x_2'(t)\\ x_3'(t)\end{pmatrix} + \begin{pmatrix}0 & \lambda^2 & 0\\ -1 & 0 & 0\\ 0 & 0 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\end{pmatrix} = \begin{pmatrix}a\sin(\lambda t)\\ 0\\ a\lambda\cos(\lambda t)\end{pmatrix}, \qquad \begin{pmatrix}x_1(0)\\ x_2(0)\\ x_3(0)\end{pmatrix} = \begin{pmatrix}-\frac{a}{2\lambda}\\ 1\\ 0\end{pmatrix}$$

and exact solution

(5.7) $$\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\end{pmatrix} = \begin{pmatrix}-\lambda\bigl(1 - \frac{a}{2\lambda}t\bigr)\sin(\lambda t) - \frac{a}{2\lambda}\cos(\lambda t)\\ \bigl(1 - \frac{a}{2\lambda}t\bigr)\cos(\lambda t)\\ a\sin(\lambda t)\end{pmatrix}$$

By applying the operator (D + B) to the system (5.6), we obtain:

(5.8) $$L_2(x) = \begin{pmatrix}x_1''(t)\\ x_2''(t)\\ x_3''(t)\end{pmatrix} + \begin{pmatrix}0 & \lambda^2 & -1\\ -1 & 1 & 0\\ \lambda^2 & 0 & 0\end{pmatrix}\begin{pmatrix}x_1'(t)\\ x_2'(t)\\ x_3'(t)\end{pmatrix} + \begin{pmatrix}0 & 0 & 0\\ -1 & 0 & 0\\ 0 & \lambda^4 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\end{pmatrix} = \begin{pmatrix}0\\ 0\\ 0\end{pmatrix}$$
$$x(0) = \begin{pmatrix}x_1(0)\\ x_2(0)\\ x_3(0)\end{pmatrix} = \begin{pmatrix}-\frac{a}{2\lambda}\\ 1\\ 0\end{pmatrix}, \qquad x'(0) = \begin{pmatrix}x_1'(0)\\ x_2'(0)\\ x_3'(0)\end{pmatrix} = \begin{pmatrix}-\lambda^2\\ -\frac{a}{2\lambda}\\ a\lambda\end{pmatrix}.$$

This problem is solved for λ = 10 by applying the MDFSpPC method.

The results are compared with those obtained by the integrators LSODE, MGEAR and GEAR, implemented in MAPLE, and with the multistep method EIpPC [15], based on the Scheifele G-functions.

Fig. 3 contrasts the decimal logarithm of the modulus of the relative error of the solution x(t), calculated using the Φ-functions method MDFSpPC with step size h = 10⁻³, 40 digits and p = 17, with the numerical integration codes MGEAR[msteppart] with errorper = Float(1, −13), LSODE with a tolerance of 10⁻²⁵, GEAR with errorper = Float(1, −31), and EIpPC with step size h = 10⁻³, 40 digits and p = 17.

Figure 3. Problem 2, relative error.

The results for the integration of the function x(t) are shown in Fig. 4, in which the information is arranged as in Fig. 2.

Figure 4. Problem 2, efficiency plot for the integration of x(t) at the last point.

5.3 Problem 3

Denk [1] proposed the following highly oscillatory problem:

(5.9) $$\begin{pmatrix}x_1'(t)\\ x_2'(t)\end{pmatrix} + \begin{pmatrix}0 & -1\\ \varkappa^2 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix} = \varkappa^2\begin{pmatrix}0\\ t\end{pmatrix}$$

where x₁(0) = 10⁻⁵ and x₂(0) = 1 − 10⁻⁵ϰ cot(ϰ), with ϰ = 314.16.

The system has eigenvalues ±ϰi and exact solution

(5.10) $$\begin{pmatrix}x_1(t)\\ x_2(t)\end{pmatrix} = \begin{pmatrix}t + 10^{-5}\bigl(\cos(\varkappa t) - \cot(\varkappa)\sin(\varkappa t)\bigr)\\ 1 - 10^{-5}\varkappa\bigl(\sin(\varkappa t) + \cot(\varkappa)\cos(\varkappa t)\bigr)\end{pmatrix}.$$

A new variable x₃(t) = −ϰ²t is defined [20, 21], which permits the system (5.9) to be expressed as follows:

(5.11) $$\begin{pmatrix}x_1'(t)\\ x_2'(t)\\ x_3'(t)\end{pmatrix} + \begin{pmatrix}0 & -1 & 0\\ \varkappa^2 & 0 & 0\\ 0 & 0 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\end{pmatrix} = \varkappa^2\begin{pmatrix}0\\ t\\ -1\end{pmatrix}, \qquad \begin{pmatrix}x_1(0)\\ x_2(0)\\ x_3(0)\end{pmatrix} = \begin{pmatrix}10^{-5}\\ 1 - 10^{-5}\varkappa\cot(\varkappa)\\ 0\end{pmatrix}$$

and exact solution

(5.12) $$\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\end{pmatrix} = \begin{pmatrix}t + 10^{-5}\bigl(\cos(\varkappa t) - \cot(\varkappa)\sin(\varkappa t)\bigr)\\ 1 - 10^{-5}\varkappa\bigl(\sin(\varkappa t) + \cot(\varkappa)\cos(\varkappa t)\bigr)\\ -\varkappa^2 t\end{pmatrix}.$$

By applying the operator (D + B) to the system (5.11), with $B = \begin{pmatrix}1 & 0 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0\end{pmatrix}$, the result is:

(5.13) $$L_2(x) = \begin{pmatrix}x_1''(t)\\ x_2''(t)\\ x_3''(t)\end{pmatrix} + \begin{pmatrix}1 & -1 & 0\\ \varkappa^2 & 0 & 1\\ 1 & 0 & 0\end{pmatrix}\begin{pmatrix}x_1'(t)\\ x_2'(t)\\ x_3'(t)\end{pmatrix} + \begin{pmatrix}0 & -1 & 0\\ 0 & 0 & 0\\ 0 & -1 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\end{pmatrix} = \begin{pmatrix}0\\ 0\\ 0\end{pmatrix}$$
$$x(0) = \begin{pmatrix}x_1(0)\\ x_2(0)\\ x_3(0)\end{pmatrix} = \begin{pmatrix}10^{-5}\\ 1 - 10^{-5}\varkappa\cot(\varkappa)\\ 0\end{pmatrix}, \qquad x'(0) = \begin{pmatrix}x_1'(0)\\ x_2'(0)\\ x_3'(0)\end{pmatrix} = \begin{pmatrix}1 - 10^{-5}\varkappa\cot(\varkappa)\\ -10^{-5}\varkappa^2\\ -\varkappa^2\end{pmatrix}.$$

Fig. 5 contrasts the decimal logarithm of the modulus of the relative error of the solution x(t), calculated using the Φ-functions method MDFSpPC with step size h = 10⁻³, 40 digits and p = 2, with the numerical integration codes MGEAR[msteppart] with errorper = Float(1, −10), LSODE with a tolerance of 10⁻²⁵, GEAR with errorper = Float(1, −22), and EIpPC with step size h = 10⁻³, 40 digits and p = 8.

Figure 5. Problem 3, relative error.

The results for the integration of the function x(t) are shown in Fig. 6 in which the information is arranged as in Fig. 2.

Figure 6. Problem 3, efficiency plot for the integration of x(t) at the last point.

5.4 Problem 4

This example shows an application of the MDFSpPC to a problem of quasiperiodic orbits studied in [18], which can also be found in [14], [17], [22], [24], among others.

Let

(5.14) $x''(t) + x(t) = 10^{-3}e^{it}, \qquad x(0) = 1, \quad x'(0) = 0.9995\,i,$

for which the analytical solution is:

(5.15) $x(t) = \bigl(1 - 5\cdot 10^{-4}\,i\,t\bigr)e^{it} = \bigl(\cos(t) + 5\cdot 10^{-4}\,t\sin(t)\bigr) + i\bigl(\sin(t) - 5\cdot 10^{-4}\,t\cos(t)\bigr).$

The solution represents motion on a perturbation of a circular orbit in the complex plane. The problem may be solved either as a single equation in complex arithmetic or a pair of uncoupled equations.

Writing x(t) = u(t) + iv(t) and substituting in (5.14), we get the following second-order system:

(5.16) $$\begin{cases}u''(t) + u(t) = 10^{-3}\cos(t)\\ v''(t) + v(t) = 10^{-3}\sin(t)\end{cases} \qquad u(0) = 1, \quad u'(0) = 0, \quad v(0) = 0, \quad v'(0) = 0.9995.$$

By defining the variables x₁(t) = u(t), x₂(t) = u′(t), x₃(t) = v(t) and x₄(t) = v′(t), the problem becomes the following system of first-order linear equations:

(5.17) $$\begin{pmatrix}x_1'(t)\\ x_2'(t)\\ x_3'(t)\\ x_4'(t)\end{pmatrix} + \begin{pmatrix}0 & -1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & -1\\ 0 & 0 & 1 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\\ x_4(t)\end{pmatrix} = 10^{-3}\begin{pmatrix}0\\ \cos(t)\\ 0\\ \sin(t)\end{pmatrix}, \qquad \begin{pmatrix}x_1(0)\\ x_2(0)\\ x_3(0)\\ x_4(0)\end{pmatrix} = \begin{pmatrix}1\\ 0\\ 0\\ 0.9995\end{pmatrix}$$

with double eigenvalues ±i and exact solution:

(5.18) $$\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\\ x_4(t)\end{pmatrix} = \begin{pmatrix}\cos(t) + 5\cdot 10^{-4}\,t\sin(t)\\ -0.9995\sin(t) + 5\cdot 10^{-4}\,t\cos(t)\\ \sin(t) - 5\cdot 10^{-4}\,t\cos(t)\\ 0.9995\cos(t) + 5\cdot 10^{-4}\,t\sin(t)\end{pmatrix}.$$

The matrix which annihilates the disturbance function is

$B = \begin{pmatrix}1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & -1 & 0 & 0\end{pmatrix}.$

Applying the operator (D + B) to the system (5.17) gives:

(5.19) $$L_2(x) = \begin{pmatrix}x_1''(t)\\ x_2''(t)\\ x_3''(t)\\ x_4''(t)\end{pmatrix} + \begin{pmatrix}1 & -1 & 0 & 0\\ 1 & 0 & 0 & 1\\ 0 & 0 & 1 & -1\\ 0 & -1 & 1 & 0\end{pmatrix}\begin{pmatrix}x_1'(t)\\ x_2'(t)\\ x_3'(t)\\ x_4'(t)\end{pmatrix} + \begin{pmatrix}0 & -1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1\\ -1 & 0 & 0 & 0\end{pmatrix}\begin{pmatrix}x_1(t)\\ x_2(t)\\ x_3(t)\\ x_4(t)\end{pmatrix} = \begin{pmatrix}0\\ 0\\ 0\\ 0\end{pmatrix}$$
$$\begin{pmatrix}x_1(0)\\ x_2(0)\\ x_3(0)\\ x_4(0)\end{pmatrix} = \begin{pmatrix}1\\ 0\\ 0\\ 0.9995\end{pmatrix}, \qquad \begin{pmatrix}x_1'(0)\\ x_2'(0)\\ x_3'(0)\\ x_4'(0)\end{pmatrix} = \begin{pmatrix}0\\ -0.999\\ 0.9995\\ 0\end{pmatrix}.$$

Fig. 7 contrasts the decimal logarithm of the modulus of the relative error of the solution x(t), calculated using the Φ-functions method MDFSpPC with step size h = 10⁻³, 40 digits and p = 10, with the numerical integration codes MGEAR[msteppart] with errorper = Float(1, −13), LSODE with a tolerance of 10⁻²⁵, GEAR with errorper = Float(1, −22), and EIpPC with step size h = 10⁻³, 40 digits and p = 10.

Figure 7. Problem 4, relative error.

6 Conclusions

The multistep method MDFSpPC introduced in this paper is based on the ideas developed by G. Scheifele and is constructed taking the Φ-function series method as its point of departure.

The new method allows a more accurate integration of a wide range of problems, and under certain hypotheses, the multistep method calculates the exact solution of the perturbed problem.

Moreover, the multistep method is defined for variable step and variable order (VSVO), maintains the good properties of the Φ-function series method, and incorporates an algebraic recurrent procedure to compute the coefficients of the algorithm, which facilitates its implementation on a computer.

This new method can successfully compete with well-known general and special-purpose integrators, as shown in the examples, where gains in attainable accuracy and efficiency of several orders of magnitude have been demonstrated for different stiff perturbed problems.

Funding:

This work has been supported by GRE09-13 project of the University of Alicante and the project of the Generalitat Valenciana GV/2011/032.

References

[1] G. Denk, A new numerical method for the integration of highly oscillatory second-order ordinary differential equations, Appl. Numer. Math. 13 (1993), 57–67. DOI: 10.1016/0168-9274(93)90131-A

[2] J. M. Ferrándiz, A general canonical transformation increasing the number of variables with application to the two-body problem, Celestial Mech. 41 (1988), 343–357. DOI: 10.1007/BF01238770

[3] F. Garcia-Alonso, J. A. Reyes, J. M. Ferrándiz, and J. Vigo-Aguiar, Accurate numerical integration of perturbed oscillatory systems in two frequencies, ACM Trans. Math. Software (TOMS) 36 (2009), No. 4, article 21. DOI: 10.1145/1555386.1555390

[4] F. Garcia-Alonso and J. A. Reyes, A new approach for exact integration of some perturbed stiff linear systems of oscillatory type, Appl. Math. Comput. 215 (2009), 2649–2662. DOI: 10.1016/j.amc.2009.09.005

[5] F. Garcia-Alonso, J. A. Reyes, and Y. Villacampa, A new approach for multistep numerical methods in several frequencies for perturbed oscillators, Adv. Engrg. Softw. 45 (2012), 252–260. DOI: 10.1016/j.advengsoft.2011.10.002

[6] L. G. Ixaru, G. Vanden Berghe, and H. De Meyer, Exponentially fitted variable two-step BDF algorithm for first order ODE's, Comp. Phys. Commun. 100 (2003), 56–70. DOI: 10.1016/S0010-4655(96)00147-6

[7] L. G. Ixaru, G. Vanden Berghe, and H. De Meyer, Frequency evaluation in exponential fitting multistep algorithms for ODE's, J. Comp. Appl. Math. 140 (2002), 423–434. DOI: 10.1016/S0377-0427(01)00474-5

[8] J. D. Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley and Sons Ltd., New York, 1991.

[9] I. G. MacDonald, Symmetric Functions and Hall Polynomials, Oxford University Press Inc., New York, 1998. DOI: 10.1090/ulect/012/01

[10] P. Martin and J. M. Ferrándiz, Behaviour of the SMF method for the numerical integration of satellite orbits, Celestial Mech. 63 (1995), 29–40. DOI: 10.1007/BF00691913

[11] P. Martin and J. M. Ferrándiz, Multistep numerical methods based on Scheifele G-functions with application to satellite dynamics, SIAM J. Numer. Anal. 34 (1997), 359–375. DOI: 10.1137/S003614299426505X

[12] M. Palacios, Métodos multirevolución simétricos para propagación de órbitas en intervalos grandes de tiempo, Monografías de la Real Academia de Ciencias de Zaragoza 22 (2003), 55–66.

[13] L. R. Petzold, An efficient numerical method for highly oscillatory ordinary differential equations, SIAM J. Numer. Anal. 18 (1981), No. 3, 455–479. DOI: 10.1137/0718030

[14] J. I. Ramos, Piecewise-linearized methods for initial-value problems with oscillating solutions, Appl. Math. Comput. 181 (2006), 123–146. DOI: 10.1016/j.amc.2006.01.020

[15] J. A. Reyes, F. Garcia-Alonso, J. M. Ferrándiz, and J. Vigo-Aguiar, Numeric multistep variable methods for perturbed linear system integration, Appl. Math. Comput. 190 (2007), 63–79. DOI: 10.1016/j.amc.2007.01.017

[16] G. Scheifele, On numerical integration of perturbed linear oscillating systems, Z. Angew. Math. Phys. 22 (1971), 186–210. DOI: 10.1007/BF01624061

[17] T. E. Simos and J. Vigo-Aguiar, Exponentially fitted symplectic integrator, Phys. Rev. E 67 (2003), 1–7. DOI: 10.1103/PhysRevE.67.016701

[18] E. L. Stiefel and D. G. Bettis, Stabilization of Cowell's method, Numer. Math. 13 (1969), 154–175. DOI: 10.1007/BF02163234

[19] E. L. Stiefel and G. Scheifele, Linear and Regular Celestial Mechanics, Springer, Berlin–Heidelberg–New York, 1971. DOI: 10.1007/978-3-642-65027-7

[20] J. F. Steffensen, On the differential equations of Hill in the theory of the motion of the Moon, Acta Math. 93 (1955), 169–177. DOI: 10.1007/BF02392522

[21] J. F. Steffensen, On the differential equations of Hill in the theory of the motion of the Moon (II), Acta Math. 95 (1956), 25–37. DOI: 10.1007/BF02401096

[22] H. Van de Vyver, Two-step hybrid methods adapted to the numerical integration of perturbed oscillators, arXiv:math/0612637v1 [math.NA], 2006.

[23] J. Vigo-Aguiar, An approach to variable coefficients methods for special differential equations, Int. J. Appl. Math. 1 (1999), No. 8, 911–921.

[24] J. Vigo-Aguiar and J. M. Ferrándiz, A general procedure for the adaptation of multistep algorithms to the integration of oscillatory problems, SIAM J. Numer. Anal. 35 (1998), No. 4, 1684–1708. DOI: 10.1137/S0036142995286763

[25] J. Vigo-Aguiar and J. M. Ferrándiz, Higher-order variable-step algorithms adapted to the accurate numerical integration of perturbed oscillators, Computers in Physics 12 (1998), 467–470. DOI: 10.1063/1.168717

[26] J. Vigo-Aguiar and J. M. Ferrándiz, VSVO adapted multistep methods for the numerical integration of second order differential equations, Appl. Math. Lett. 11 (1998), 83–89. DOI: 10.1016/S0893-9659(98)00037-8

Received: 2013-3-25
Accepted: 2014-12-2
Published Online: 2016-3-14
Published in Print: 2016-3-1

© 2016 by Walter de Gruyter Berlin/Boston
