
A convergence analysis of SOR iterative methods for linear systems with weak H-matrices

Cheng-yi Zhang, Zichen Xue and Shuanghua Luo
Published/Copyright: October 10, 2016

Abstract

It is well known that SOR iterative methods converge for linear systems whose coefficient matrices are strictly or irreducibly diagonally dominant matrices or, more generally, strong H-matrices (whose comparison matrices are nonsingular M-matrices). However, the same need not be true for these iterative methods applied to linear systems with weak H-matrices (whose comparison matrices are singular M-matrices). This paper proposes some necessary and sufficient conditions such that SOR iterative methods are convergent for linear systems with weak H-matrices. Furthermore, some numerical examples are given to demonstrate the convergence results obtained in this paper.

MSC 2010: 15A06; 15A18; 15A42

1 Introduction

In this paper we consider the solution methods for the system of n linear equations

(1) Ax = b,

where A = (aij) ∈ ℂn×n is nonsingular, b, x ∈ ℂn and x is unknown.

In order to solve system (1) by iterative methods, the coefficient matrix A = (aij) ∈ ℂn×n is split into

(2) A = M − N,

where MCn×n is nonsingular and NCn×n. Then, the general form of iterative methods for (1) can be described as follows:

(3) x(i+1) = M−1Nx(i) + M−1b,  i = 0, 1, 2, ....

The matrix H = M−1N is called the iteration matrix of the iteration (3). It is well known that (3) converges for any given x(0) if and only if ρ(H) < 1 (see [1–5]), where ρ(H) denotes the spectral radius of the matrix H. Thus, to establish convergence results for iterative methods, we mainly study the spectral radius of the iteration matrix in (3).
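The splitting iteration (3) and the spectral-radius criterion can be sketched numerically. The following Python fragment is an illustration only: the Jacobi splitting and the toy matrix are assumptions, not taken from the paper.

```python
import numpy as np

# Assumed toy system (not from the paper): a strictly diagonally
# dominant matrix, so the Jacobi splitting below converges.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

# Splitting (2): A = M - N with M = diag(A) (Jacobi choice).
M = np.diag(np.diag(A))
N = M - A

# Iteration matrix H = M^{-1}N and its spectral radius rho(H).
H = np.linalg.solve(M, N)
rho = max(abs(np.linalg.eigvals(H)))

# Iteration (3): x^{(i+1)} = M^{-1}N x^{(i)} + M^{-1} b.
x = np.zeros_like(b)
c = np.linalg.solve(M, b)
for _ in range(200):
    x = H @ x + c

print(rho < 1)                 # True: the iteration converges
print(np.allclose(A @ x, b))   # True: the limit solves Ax = b
```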

For simplicity, we let

(4) A = I − L − U,

where I is the identity matrix, L and U are strictly lower and strictly upper triangular, respectively. According to the standard decomposition (4), the iteration matrices for SOR iterative methods of (1) are listed in the following.

The forward, backward and symmetric SOR methods (FSOR, BSOR and SSOR) iteration matrices are

(5) HFSOR(ω) = (I − ωL)−1[(1 − ω)I + ωU],
(6) HBSOR(ω) = (I − ωU)−1[(1 − ω)I + ωL],

and

(7) HSSOR(ω) = HBSOR(ω)HFSOR(ω),

where ω ∈ (0, 1) is the overrelaxation parameter.
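As a numerical sketch, the iteration matrices (5)-(7) can be assembled directly from the decomposition (4). The function name `sor_matrices` and the test matrix below are assumptions for illustration only.

```python
import numpy as np

def sor_matrices(A, omega):
    """Iteration matrices (5)-(7) for A = I - L - U (unit diagonal)."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)   # strictly lower part of A equals -L
    U = -np.triu(A, 1)    # strictly upper part of A equals -U
    H_f = np.linalg.solve(I - omega * L, (1 - omega) * I + omega * U)
    H_b = np.linalg.solve(I - omega * U, (1 - omega) * I + omega * L)
    return H_f, H_b, H_b @ H_f   # (7): H_SSOR = H_BSOR H_FSOR

# Assumed strictly diagonally dominant test matrix with unit diagonal.
A = np.array([[1.0, -0.3, -0.2],
              [-0.4, 1.0, -0.1],
              [-0.2, -0.3, 1.0]])
radii = [max(abs(np.linalg.eigvals(H))) for H in sor_matrices(A, 0.8)]
print([r < 1 for r in radii])   # [True, True, True]
```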

Recently, the class of strong H-matrices, which includes the strictly or irreducibly diagonally dominant matrices (whose comparison matrices are nonsingular M-matrices), has been extended to the wider set of general H-matrices (whose comparison matrices are M-matrices). A partition of the n × n general H-matrix set Hn was obtained in [6–8]. Here, we give a different partition: the general H-matrix set Hn is partitioned into two mutually exclusive classes: the strong class HnS, where the comparison matrices of all general H-matrices are nonsingular, and the weak class HnW, where the comparison matrices of all general H-matrices are singular and in which singular and nonsingular H-matrices coexist. As is shown in [1–5, 9–27], classical iterative methods such as the Jacobi, Gauss-Seidel, symmetric Gauss-Seidel, JOR and SOR (SSOR) iterative methods for linear systems whose coefficient matrices are strong H-matrices, including strictly or irreducibly diagonally dominant matrices, converge to the unique solution of (1) for any choice of the initial guess x(0). However, the same need not be true for some iterative methods applied to linear systems with weak H-matrices. Let us investigate the following example.

Example 1.1

Assume that either A or B is the coefficient matrix of linear system (1), where

A =
[  2  −1  −1 ]
[ −1   2  −1 ]
[ −1  −1   2 ]
and B =
[ 2  1  1 ]
[ 1  2  1 ]
[ 1  1  2 ].

It is verified that both A and B are weak H-matrices but not strong H-matrices, since their comparison matrices are both singular M-matrices. How can we determine the convergence of the FSOR-, BSOR- and SSOR-iterative methods for linear systems with this class of matrices without direct computations?

In recent years, Zhang et al. [28] and Zhang et al. [29] studied the convergence of the Jacobi, Gauss-Seidel and symmetric Gauss-Seidel iterative methods for linear systems with nonstrictly diagonally dominant matrices and general H-matrices, and established some significant results. In this paper the convergence of the FSOR-, BSOR- and SSOR-iterative methods is studied for linear systems with weak H-matrices, and some necessary and sufficient conditions are proposed such that these iterative methods converge for linear systems with this class of matrices. Some numerical examples are then given to demonstrate the convergence results obtained in this paper.

The paper is organized as follows. Some notations and preliminary results about general H-matrices are given in Section 2. The main results of this paper are given in Section 3, where we propose some necessary and sufficient conditions on the convergence of the FSOR-, BSOR- and SSOR-iterative methods for linear systems with weak H-matrices. In Section 4, some numerical examples are given to demonstrate the convergence results obtained in this paper. Future work is discussed in Section 5.

2 Preliminaries

In this section we give some notations and preliminary results relating to the special matrices that are used in this paper.

ℂm×n (ℝm×n) will be used to denote the set of all m × n complex (real) matrices. ℤ denotes the set of all integers. Let α ⊆ 〈n〉 = {1, 2, ..., n} ⊂ ℤ. For nonempty index sets α, β ⊆ 〈n〉, A(α, β) is the submatrix of A ∈ ℂn×n with row indices in α and column indices in β. The submatrix A(α, α) is abbreviated to A(α). Let A ∈ ℂn×n, α ⊂ 〈n〉 and α′ = 〈n〉 − α. If A(α) is nonsingular, the matrix

(8) A/α = A(α′) − A(α′, α)[A(α)]−1A(α, α′)
is called the Schur complement of A with respect to A(α); the indices in both α and α′ are arranged in increasing order. We shall confine ourselves to nonsingular A(α) as far as A/α is concerned.

Let A = (aij) ∈ ℂm×n and B = (bij) ∈ ℂm×n; A ∘ B = (aijbij) ∈ ℂm×n denotes the Hadamard product of the matrices A and B. A matrix A = (aij) ∈ ℝn×n is called nonnegative if aij ≥ 0 for all i, j ∈ 〈n〉. A matrix A = (aij) ∈ ℝn×n is called a Z-matrix if aij ≤ 0 for all i ≠ j. We will use Zn to denote the set of all n × n Z-matrices. A matrix A = (aij) ∈ Zn is called an M-matrix if A can be expressed in the form A = sI − B, where B ≥ 0 and s ≥ ρ(B), the spectral radius of B. If s > ρ(B), A is called a nonsingular M-matrix; if s = ρ(B), A is called a singular M-matrix. Mn, Mn• and Mn0 will be used to denote the set of all n × n M-matrices, the set of all n × n nonsingular M-matrices and the set of all n × n singular M-matrices, respectively. It is easy to see that

(9) Mn = Mn• ∪ Mn0 and Mn• ∩ Mn0 = ∅.

The comparison matrix of a given matrix A = (aij) ∈ ℂn×n, denoted by μ(A) = (μij), is defined by

(10) μij = |aii| if i = j,  μij = −|aij| if i ≠ j.

It is clear that μ(A) ∈ Zn for a matrix A ∈ ℂn×n. The set of equimodular matrices associated with A is denoted by ω(A) = {B ∈ ℂn×n : μ(B) = μ(A)}. Note that both A and μ(A) are in ω(A). A matrix A = (aij) ∈ ℂn×n is called a general H-matrix if μ(A) ∈ Mn (see [1]). If μ(A) ∈ Mn•, A is called a strong H-matrix; if μ(A) ∈ Mn0, A is called a weak H-matrix. Hn, HnS and HnW will denote the set of all n × n general H-matrices, the set of all n × n strong H-matrices and the set of all n × n weak H-matrices, respectively. Similarly to equalities (9), we have

(11) Hn = HnS ∪ HnW and HnS ∩ HnW = ∅.
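A minimal numerical sketch of the comparison matrix (10) and the strong/weak distinction follows; the 3 × 3 example matrix and the helper name `comparison_matrix` are assumptions for illustration, not from the paper.

```python
import numpy as np

def comparison_matrix(A):
    """mu(A) from (10): |a_ii| on the diagonal, -|a_ij| off it."""
    mu = -np.abs(A)
    np.fill_diagonal(mu, np.abs(np.diag(A)))
    return mu

# Assumed example: a diagonally equipotent matrix.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])
mu = comparison_matrix(A)

# Write mu = sI - B with s = max diagonal entry and B >= 0; for this
# representation mu is a singular M-matrix exactly when s == rho(B).
s = np.diag(mu).max()
B = s * np.eye(3) - mu
rho_B = max(abs(np.linalg.eigvals(B)))
print(np.isclose(s, rho_B))           # True: mu(A) is a singular M-matrix
print(abs(np.linalg.det(A)) > 1e-10)  # True: A itself is nonsingular
```

So A is a weak H-matrix that is nevertheless nonsingular, illustrating that singular and nonsingular matrices coexist in HnW.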

For n ≥ 2, an n × n complex matrix A is reducible if there exists an n × n permutation matrix P such that

(12) PAPT =
[ A11  A12 ]
[  0   A22 ],

where A11 is an r × r submatrix and A22 is an (n − r) × (n − r) submatrix, with 1 ≤ r < n. If no such permutation matrix exists, then A is called irreducible. If A is a 1 × 1 complex matrix, then A is irreducible if its single entry is nonzero, and reducible otherwise.
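Irreducibility as defined above can be tested numerically via strong connectivity of the nonzero pattern. The helper below is an assumed sketch using the standard fact that A is irreducible if and only if (I + |A|)^(n−1) has no zero entry.

```python
import numpy as np

def is_irreducible(A):
    """A is irreducible iff the digraph of its nonzero pattern is
    strongly connected, i.e. (I + |A|)^(n-1) has no zero entry."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0] != 0   # 1 x 1 convention used in the text
    pattern = (np.abs(A) > 0).astype(float)
    reach = np.linalg.matrix_power(np.eye(n) + pattern, n - 1)
    return bool((reach > 0).all())

tridiag = np.diag([2.0] * 4) - np.diag([1.0] * 3, 1) - np.diag([1.0] * 3, -1)
upper = np.array([[1.0, 1.0], [0.0, 1.0]])   # reducible: already of form (12)
print(is_irreducible(tridiag))   # True
print(is_irreducible(upper))     # False
```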

Definition 2.1

A matrix A ∈ ℂn×n is called diagonally dominant by row if

(13) |aii| ≥ ∑_{j=1, j≠i}^{n} |aij|
holds for all i ∈ 〈n〉. If the inequality in (13) holds strictly for all i ∈ 〈n〉, A is called strictly diagonally dominant by row. If A is irreducible and the inequality in (13) holds strictly for at least one i ∈ 〈n〉, A is called irreducibly diagonally dominant by row. If (13) holds with equality for all i ∈ 〈n〉, A is called diagonally equipotent by row.

Dn(SDn, IDn) and DEn will be used to denote the sets of all n × n (strictly, irreducibly) diagonally dominant matrices and the set of all n × n diagonally equipotent matrices, respectively.
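The row-wise classes of Definition 2.1 can be sketched as a small classifier; the helper name and the two test matrices are assumptions for illustration only.

```python
import numpy as np

def classify_dominance(A):
    """Row-wise classification following Definition 2.1."""
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d   # row sums of off-diagonal moduli
    if (d > off).all():
        return "strictly diagonally dominant"
    if np.allclose(d, off):
        return "diagonally equipotent"
    if (d >= off).all():
        return "diagonally dominant"
    return "not diagonally dominant"

print(classify_dominance(np.array([[3.0, 1.0],
                                   [1.0, 2.0]])))       # strictly diagonally dominant
print(classify_dominance(np.array([[2.0, 1.0, 1.0],
                                   [1.0, 2.0, 1.0],
                                   [1.0, 1.0, 2.0]])))  # diagonally equipotent
```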

Definition 2.2

A matrix A ∈ ℂn×n is called generalized diagonally dominant if there exist positive constants αi, i ∈ 〈n〉, such that

(14) αi|aii| ≥ ∑_{j=1, j≠i}^{n} αj|aij|
holds for all i ∈ 〈n〉. If the inequality in (14) holds strictly for all i ∈ 〈n〉, A is called generalized strictly diagonally dominant. If (14) holds with equality for all i ∈ 〈n〉, A is called generalized diagonally equipotent.

We denote the sets of all n × n generalized (strictly) diagonally dominant matrices and the set of all n × n generalized diagonally equipotent matrices by GDn(GSDn) and GDEn, respectively.

Lemma 2.3

(See [30–32]). Let A ∈ Dn (GDn). Then A ∈ HnS if and only if A has no (generalized) diagonally equipotent principal submatrices. Furthermore, if A ∈ Dn ∩ Zn (GDn ∩ Zn), then A ∈ Mn• if and only if A has no (generalized) diagonally equipotent principal submatrices.

Lemma 2.4

(See [31, 32]). DEn ⊂ GDEn ⊂ HnW and SDn ∪ IDn ⊂ HnS = GSDn.

Lemma 2.5

(See [6]). GDn ⊂ Hn.

Lemma 2.6

(See [6]). Let A ∈ ℂn×n be irreducible. Then A ∈ Hn if and only if A ∈ GDn.

More importantly, under the condition of "reducibility", we have the following conclusion.

Lemma 2.7

(See [6, 29]). Let A ∈ ℂn×n be reducible. Then A ∈ Hn if and only if in the Frobenius normal form of A

(15) PAPT =
[ R11  R12  ⋯  R1s ]
[  0   R22  ⋯  R2s ]
[  ⋮    ⋮   ⋱   ⋮  ]
[  0    0   ⋯  Rss ],

each irreducible diagonal square block Rii is generalized diagonally dominant, where P is a permutation matrix, Rii = A(αi) is either a 1 × 1 zero matrix or an irreducible square matrix, and Rij = A(αi, αj) for i ≠ j, i, j = 1, 2, ..., s; further, αi ∩ αj = ∅ for i ≠ j and ∪_{i=1}^{s} αi = 〈n〉.
Lemma 2.8

(See [6, 29]). A matrix A ∈ HnW if and only if in the Frobenius normal form (15) of A, each irreducible diagonal square block Rii is generalized diagonally dominant and at least one Rii has a generalized diagonally equipotent principal submatrix.

The following definitions and lemmas come from [28, 29].

Definition 2.9

Let E = (eiθrs) ∈ ℂn×n, where eiθrs = cos θrs + i sin θrs, i = √−1 and θrs ∈ ℝ for all r, s ∈ 〈n〉. The matrix E = (eiθrs) ∈ ℂn×n is called a π-ray pattern matrix if

1. θrs + θsr = 2kπ holds for all r, s ∈ 〈n〉, r ≠ s, where k ∈ ℤ;

2. θrs − θrt = θts + (2k + 1)π holds for all r, s, t ∈ 〈n〉 and r ≠ s, r ≠ t, t ≠ s, where k ∈ ℤ;

3. θrr = 0 for all r ∈ 〈n〉.

Definition 2.10

Any complex matrix A = (ars) ∈ ℂn×n has the following form:

(16) A = eiη·|A| ∘ Eiθ = (eiη|ars|eiθrs)n×n,

where η ∈ ℝ, |A| = (|ars|) ∈ ℝn×n and Eiθ = (eiθrs) ∈ ℂn×n with θrs ∈ ℝ and θrr = 0 for r, s ∈ 〈n〉. The matrix Eiθ is called a ray pattern matrix of the matrix A. If the ray pattern matrix Eiθ of the matrix A given in (16) is a π-ray pattern matrix, then A is called a π-ray matrix.

Rnπ denotes the set of all n × n π-ray matrices. Obviously, if a matrix A ∈ Rnπ, then ξA ∈ Rnπ for all ξ ∈ ℂ.

Lemma 2.11

Let a matrix A = DA − LA − UA = (ars) ∈ ℂn×n with DA = diag(a11, a22, ..., ann). Then A ∈ Rnπ if and only if there exists an n × n unitary diagonal matrix D such that D−1AD = eiη(|DA| − |LA| − |UA|) for some η ∈ ℝ.
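For real matrices, the characterization in Lemma 2.11 can be probed by brute force over the finitely many real unitary (sign) diagonal matrices D = diag(±1) and η ∈ {0, π}. This restriction to the real case, and both example matrices, are assumptions of the sketch, not part of the lemma.

```python
import itertools
import numpy as np

def in_pi_ray_class_real(A):
    """Brute-force check of Lemma 2.11 restricted to real matrices:
    search D = diag(+-1) and eta in {0, pi} with
    D^{-1} A D = e^{i eta} (|D_A| - |L_A| - |U_A|)."""
    n = A.shape[0]
    target = np.abs(np.diag(np.diag(A))) - np.abs(np.tril(A, -1)) - np.abs(np.triu(A, 1))
    for signs in itertools.product([1.0, -1.0], repeat=n):
        D = np.diag(signs)           # D^{-1} = D for sign matrices
        for phase in (1.0, -1.0):    # e^{i eta} with eta = 0 or pi
            if np.allclose(D @ A @ D, phase * target):
                return True
    return False

laplacian = np.array([[2.0, -1.0, -1.0],
                      [-1.0, 2.0, -1.0],
                      [-1.0, -1.0, 2.0]])   # equals its own target (take D = I)
allpos = np.abs(laplacian)                  # off-diagonal entries all +1
print(in_pi_ray_class_real(laplacian))      # True
print(in_pi_ray_class_real(allpos))         # False
```

Combined with Lemma 2.12, this distinguishes a singular diagonally equipotent matrix (the first) from a nonsingular one (the second).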

Lemma 2.12

A matrix A ∈ HnW is singular if and only if the matrix A has at least either one zero principal submatrix or one irreducible principal submatrix Ak = A(i1, i2, ..., ik), 1 < k ≤ n, such that DAk−1Ak ∈ GDEk ∩ Rkπ, where DAk = diag(ai1i1, ..., aikik).

3 Main results

In numerical linear algebra, the successive overrelaxation iterative method, introduced simultaneously by Frankel (1950) [33] and Young (1950) [34], is a famous iterative method used to solve a linear system of equations. This iterative method is also called the accelerated Liebmann method by Frankel (1950) [33] and many subsequent researchers. Kahan (1958) [35] called it the extrapolated Gauss-Seidel method. It is often called the method of systematic overrelaxation. Frankel showed that, for the numerical solution of the Dirichlet problem for a rectangle, the successive overrelaxation iterative method with a suitably chosen relaxation factor gave substantially larger (by an order of magnitude) asymptotic rates of convergence than those of the point Jacobi and point Gauss-Seidel iterative methods. Young (1950) [34] and Young (1954) [36] showed that these conclusions hold more generally for matrices satisfying his definition of property A, and that these results can be rigorously applied to the iterative solution of matrix equations arising from discrete approximations to a large class of elliptic partial differential equations for general regions.

Later, this iterative method was developed into three iterative methods, i.e., the forward, backward and symmetric successive overrelaxation (FSOR-, BSOR- and SSOR-) iterative methods. Though these iterative methods can be applied to any matrix with nonzero diagonal elements, convergence is only guaranteed if the coefficient matrix is strictly or irreducibly diagonally dominant, Hermitian positive definite, a strong H-matrix, or a consistently ordered p-cyclic matrix. Some classic results on the convergence of SOR iterative methods are as follows:

Theorem 3.1

(See [13–15, 23]). Let A ∈ SDn ∪ IDn. Then ρ(HFSOR) < 1, ρ(HBSOR) < 1 and ρ(HSSOR) < 1, where HFSOR, HBSOR and HSSOR are defined in (5), (6) and (7), respectively, and therefore the sequence {x(i)} generated by the FSOR-, BSOR- and SSOR-scheme (3), respectively, converges to the unique solution of (1) for any choice of the initial guess x(0).

Theorem 3.2

(See [37]). Let A ∈ HnS. Then the sequence {x(i)} generated by the FSOR-, BSOR- and SSOR-scheme (3), respectively, converges to the unique solution of (1) for any choice of the initial guess x(0).

Theorem 3.3

(See [4, 5, 38]). Let A ∈ ℂn×n be a Hermitian positive definite matrix. Then the sequence {x(i)} generated by the FSOR-, BSOR- and SSOR-scheme (3), respectively, converges to the unique solution of (1) for any choice of the initial guess x(0).

In this section, we mainly study convergence of SOR iterative methods for the linear systems with weak H-matrices.

Theorem 3.4

Let A = I − L − U ∈ DEn be irreducible. Then for ω ∈ (0, 1), ρ(HFSOR(ω)) < 1 and ρ(HBSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A ∉ Rnπ.

Proof

The sufficiency can be proved by contradiction. We assume that there exists an eigenvalue λ of HFSOR(ω) such that |λ| ≥ 1. According to equality (5),

(17) det(λI − HFSOR(ω)) = det{λI − (I − ωL)−1[(1 − ω)I + ωU]}
= det[(I − ωL)−1] det{λ(I − ωL) − [(1 − ω)I + ωU]}
= det[(λ − 1 + ω)I − λωL − ωU] / det(I − ωL)
= (λ − 1 + ω)n det[I − (λω/(λ − 1 + ω))L − (ω/(λ − 1 + ω))U] / det(I − ωL) = 0.

Since |λ| ≥ 1 and ω ∈ (0, 1), λ − 1 + ω ≠ 0. Hence, equality (17) yields

(18) det A(λ, ω) = det[I − (λω/(λ − 1 + ω))L − (ω/(λ − 1 + ω))U] = 0,

i.e.

(19) A(λ, ω) = I − (λω/(λ − 1 + ω))L − (ω/(λ − 1 + ω))U

is singular. Set λ = μeiθ with μ ≥ 1 and θ ∈ ℝ. Then 1 − cos θ ≥ 0, μ − 1 ≥ 0 and μ2 − 1 ≥ 0. Again, ω ∈ (0, 1) shows 1 − ω > 0. Therefore, we have

(20) |λ − 1 + ω|2 − |λω|2 = |μeiθ − 1 + ω|2 − |μω|2
= μ2 − 2(1 − ω)μ cos θ + (1 − ω)2 − μ2ω2
= [μ2(1 + ω) − 2μ cos θ + 1 − ω](1 − ω)
= [μ2 + 1 − 2μ cos θ + (μ2 − 1)ω](1 − ω)
≥ [2μ − 2μ cos θ + (μ2 − 1)ω](1 − ω)
= [2μ(1 − cos θ) + (μ2 − 1)ω](1 − ω) ≥ 0

and

(21) |λ − 1 + ω|2 − ω2 = |μeiθ − 1 + ω|2 − ω2
= μ2 − 2(1 − ω)μ cos θ + (1 − ω)2 − ω2
= [μ − (1 − ω)]2 − ω2 + 2μ(1 − ω) − 2μ(1 − ω) cos θ
= (μ − 1 + ω)2 − ω2 + 2μ(1 − ω)(1 − cos θ)
≥ ω2 − ω2 = 0.

This shows that

(22) |λω/(λ − 1 + ω)| ≤ 1,  |ω/(λ − 1 + ω)| ≤ 1.

Since A = I − L − U ∈ DEn is irreducible, both L ≠ 0 and U ≠ 0. As a result, (19) and (22) indicate that A(λ, ω) ∈ Dn and is irreducible. Again, since A(λ, ω) is singular and hence A(λ, ω) ∈ HnW, it follows from Lemma 2.12 that A(λ, ω) ∈ Rnπ ∩ DEn, i.e. there exists a unitary diagonal matrix D such that

(23) D−1A(λ, ω)D = I − (λω/(λ − 1 + ω))D−1LD − (ω/(λ − 1 + ω))D−1UD
= I − |λω/(λ − 1 + ω)||L| − |ω/(λ − 1 + ω)||U| ∈ DEn.

Since A = I − L − U ∈ DEn,

(24) μ(A) = I − |L| − |U| ∈ DEn.

(23) and (24) show

(25) |λω/(λ − 1 + ω)| = 1,  |ω/(λ − 1 + ω)| = 1.

Because |λ| ≥ 1 and ω ∈ (0, 1), the latter equality of (25) implies |λ| = 1. As a result, (25) and (19) show A(λ, ω) = I − L − U = A. From (23), it is easy to see that A(λ, ω) = A ∈ Rnπ. This contradicts A ∉ Rnπ. Thus, ρ(HFSOR(ω)) < 1, i.e. the FSOR method converges.

Next we prove the necessity by contradiction. Assume that A ∈ Rnπ. Then it follows from Lemma 2.11 that there exists an n × n unitary diagonal matrix D such that A = I − L − U = I − D|L|D−1 − D|U|D−1 and

(26) HFSOR(ω) = (I − ωL)−1[(1 − ω)I + ωU] = D(I − ω|L|)−1[(1 − ω)I + ω|U|]D−1.

Hence,

(27) det(I − HFSOR(ω)) = det{I − (I − ωL)−1[(1 − ω)I + ωU]}
= det{I − (I − ω|L|)−1[(1 − ω)I + ω|U|]}
= det[(I − ω|L|) − (1 − ω)I − ω|U|] / det(I − ω|L|)
= ωn det[I − |L| − |U|] / det(I − ω|L|).

Since A = I − L − U ∈ DEn and is irreducible, Lemma 2.4 shows that I − |L| − |U| ∈ HnW ∩ Rnπ and is irreducible. Lemma 2.12 shows that I − |L| − |U| is singular and hence det(I − |L| − |U|) = 0. Therefore, (27) yields det(I − HFSOR(ω)) = 0, which shows that 1 is an eigenvalue of HFSOR(ω). Then ρ(HFSOR(ω)) ≥ 1, i.e. the FSOR method does not converge. This is a contradiction. Thus, the assumption is incorrect and hence A ∉ Rnπ. This completes the necessity.

In the same way, we can prove that for ω ∈ (0, 1) the BSOR method converges, i.e. ρ(HBSOR(ω)) < 1, if and only if A ∉ Rnπ. Here, we finish the proof. □

Theorem 3.5

Let A = I − L − U = (aij) ∈ Dn with aii ≠ 0 for all i ∈ 〈n〉. Then for ω ∈ (0, 1), ρ(HFSOR(ω)) < 1 and ρ(HBSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A is nonsingular.

Proof

The conclusion of this theorem follows readily from Lemma 2.12, Theorem 3.1 and Theorem 3.4. □

In what follows we propose the convergence result for the SSOR iterative method for linear systems with weak H-matrices, including nonstrictly diagonally dominant matrices. First, the following lemma is given for the convenience of the proof.

Lemma 3.6

([32]). Let A = [E U; L F] ∈ ℂ2n×2n, where E, F, L, U ∈ ℂn×n and E is nonsingular. Then A/E is nonsingular if and only if A is nonsingular, where A/E = F − LE−1U is the Schur complement of A with respect to E.
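Lemma 3.6 rests on the determinant identity det A = det E · det(A/E), which is easy to check numerically; the random example below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # safely nonsingular
U = rng.standard_normal((n, n))
L = rng.standard_normal((n, n))
F = rng.standard_normal((n, n))

big = np.block([[E, U], [L, F]])        # the 2n x 2n matrix of the lemma
schur = F - L @ np.linalg.solve(E, U)   # Schur complement with respect to E

# det(big) = det(E) * det(schur), so the two singularity tests agree.
lhs = np.linalg.det(big)
rhs = np.linalg.det(E) * np.linalg.det(schur)
print(np.isclose(lhs, rhs))   # True
```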

Theorem 3.7

Let A = I − L − U ∈ DEn be irreducible. Then for ω ∈ (0, 1), ρ(HSSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the SSOR iterative scheme (7) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A ∉ Rnπ.

Proof

The sufficiency can be proved by contradiction. We assume that there exists an eigenvalue λ of HSSOR such that |λ| ≥ 1. According to equalities (5), (6) and (7),

(28) det(λI − HSSOR(ω)) = det{λI − (I − ωU)−1[(1 − ω)I + ωL](I − ωL)−1[(1 − ω)I + ωU]}
= det{λ(I − ωU) − [(1 − ω)I + ωL](I − ωL)−1[(1 − ω)I + ωU]} / det(I − ωU)
= λn det{(I − ωU) − λ−1[(1 − ω)I + ωL](I − ωL)−1[(1 − ω)I + ωU]} / det(I − ωU) = 0.

Equality (28) gives

(29) det B(λ, ω) = det{(I − ωU) − λ−1[(1 − ω)I + ωL](I − ωL)−1[(1 − ω)I + ωU]} = 0,

i.e.

(30) B(λ, ω) = (I − ωU) − λ−1[(1 − ω)I + ωL](I − ωL)−1[(1 − ω)I + ωU]

is singular. Let R = I − ωL, S = I − ωU, T = −λ−1[(1 − ω)I + ωL], V = −[(1 − ω)I + ωU] and

(31) ℬ = [R V; T S] = [ I − ωL,  −[(1 − ω)I + ωU];  −λ−1[(1 − ω)I + ωL],  I − ωU ].

Then, B(λ, ω) = ℬ/R is the Schur complement of ℬ with respect to the principal submatrix R. Since B(λ, ω) = ℬ/R is singular, Lemma 3.6 shows that ℬ is also singular. Again, since A is irreducible, both L ≠ 0 and U ≠ 0. As a result, ℬ is also irreducible. Since A = I − L − U = (aij) ∈ DEn with unit diagonal entries,

(32) 1 = ∑_{j=1}^{i−1} |aij| + ∑_{j=i+1}^{n} |aij|,  i = 1, 2, ..., n.

Thus, for all ω ∈ (0, 1) and |λ| ≥ 1, we have that both

1 − ω ∑_{j=1}^{i−1} |aij| − (1 − ω) − ω ∑_{j=i+1}^{n} |aij| = ω(1 − ∑_{j=1}^{i−1} |aij| − ∑_{j=i+1}^{n} |aij|) = 0

and

1 − ω ∑_{j=i+1}^{n} |aij| − |λ|−1[(1 − ω) + ω ∑_{j=1}^{i−1} |aij|] ≥ ω(1 − ∑_{j=1}^{i−1} |aij| − ∑_{j=i+1}^{n} |aij|) = 0

hold for all i ∈ 〈n〉 = {1, 2, ..., n}. Immediately, we obtain

(33) 1 = ω ∑_{j=1}^{i−1} |aij| + (1 − ω) + ω ∑_{j=i+1}^{n} |aij|,
1 ≥ ω ∑_{j=i+1}^{n} |aij| + |λ|−1[(1 − ω) + ω ∑_{j=1}^{i−1} |aij|],  i = 1, 2, ..., n.

(33) shows that ℬ ∈ D2n. Again, ℬ is irreducible and singular, and hence Lemma 2.4 shows that ℬ ∈ H2nW. Then, it follows from Lemma 2.12 that ℬ ∈ DE2n ∩ R2nπ. Consequently |λ| = 1. Let λ = eiθ with θ ∈ ℝ. Since ℬ ∈ R2nπ, Lemma 2.11 shows that there exists an n × n unitary diagonal matrix D such that D̃ = diag(D, D) and

(34) D̃−1ℬD̃ = D̃−1[ I − ωL,  −[(1 − ω)I + ωU];  −λ−1[(1 − ω)I + ωL],  I − ωU ]D̃
= D̃−1[ I − ωL,  −[(1 − ω)I + ωU];  −e−iθ[(1 − ω)I + ωL],  I − ωU ]D̃
= [ I − ωD−1LD,  −[(1 − ω)I + ωD−1UD];  −e−iθ[(1 − ω)I + ωD−1LD],  I − ωD−1UD ]
= [ I − ω|L|,  −[(1 − ω)I + ω|U|];  −e−iθ[(1 − ω)I + ω|L|],  I − ω|U| ]
= [ I − ω|L|,  −[(1 − ω)I + ω|U|];  −[(1 − ω)I + ω|L|],  I − ω|U| ].

The last two equalities of (34) indicate that θ = 2kπ, where k is an integer, and thus λ = ei2kπ = 1, and that there exists an n × n unitary diagonal matrix D such that D−1AD = I − |L| − |U|, i.e. A ∈ Rnπ. However, this contradicts A ∉ Rnπ. According to the proof above, we have ρ(HSSOR(ω)) < 1, i.e. the SSOR method converges.

Next we prove the necessity by contradiction. Assume that A ∈ Rnπ. Then there exists an n × n unitary diagonal matrix D such that A = I − L − U = I − D|L|D−1 − D|U|D−1 and hence

(35) HSSOR(ω) = (I − ωU)−1[(1 − ω)I + ωL](I − ωL)−1[(1 − ω)I + ωU]
= D{(I − ω|U|)−1[(1 − ω)I + ω|L|](I − ω|L|)−1[(1 − ω)I + ω|U|]}D−1.

Thus,

(36) det(I − HSSOR(ω)) = det{I − (I − ω|U|)−1[(1 − ω)I + ω|L|](I − ω|L|)−1[(1 − ω)I + ω|U|]}
= det{(I − ω|U|) − [(1 − ω)I + ω|L|](I − ω|L|)−1[(1 − ω)I + ω|U|]} / det(I − ω|U|).

Set C(ω) = (I − ω|U|) − [(1 − ω)I + ω|L|](I − ω|L|)−1[(1 − ω)I + ω|U|] and let R̂ = I − ω|L|, Ŝ = I − ω|U|, T̂ = −[(1 − ω)I + ω|L|], V̂ = −[(1 − ω)I + ω|U|] and

(37) ℬ̂ = [R̂ V̂; T̂ Ŝ] = [ I − ω|L|,  −[(1 − ω)I + ω|U|];  −[(1 − ω)I + ω|L|],  I − ω|U| ].

Then, C(ω) = ℬ̂/R̂ is the Schur complement of ℬ̂ with respect to the principal submatrix R̂. (33) and (37) show that ℬ̂ ∈ H2nW ∩ R2nπ. It follows from Lemma 2.12 that ℬ̂ is singular. Therefore, Lemma 3.6 yields that C(ω) is singular, i.e.

det C(ω) = det{(I − ω|U|) − [(1 − ω)I + ω|L|](I − ω|L|)−1[(1 − ω)I + ω|U|]} = 0.

(36) gives det(I − HSSOR(ω)) = 0, which shows that 1 is an eigenvalue of HSSOR(ω). Then ρ(HSSOR(ω)) ≥ 1, i.e. the SSOR method does not converge. This is a contradiction. Thus, the assumption is incorrect and A ∉ Rnπ. This completes the proof. □

Theorem 3.8

Let A = I − L − U = (aij) ∈ Dn with aii ≠ 0 for all i ∈ 〈n〉. Then for ω ∈ (0, 1), ρ(HSSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the SSOR iterative scheme (7) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A is nonsingular.

Proof

The conclusion of this theorem follows readily from Lemma 2.12, Theorem 3.1 and Theorem 3.7. □

Theorem 3.9

Let A = I − L − U ∈ HnW be irreducible. Then for ω ∈ (0, 1), ρ(HFSOR(ω)) < 1 and ρ(HBSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A ∉ Rnπ.

Proof

Since A = I − L − U ∈ HnW is irreducible, it follows from Lemma 2.3 and Lemma 2.6 that A = I − L − U ∈ GDEn. Then, by Definition 2.2, there exists a positive diagonal matrix D such that Â = D−1AD = I − D−1LD − D−1UD = I − L̂ − Û ∈ DEn and is irreducible, where L̂ = D−1LD and Û = D−1UD. Theorem 3.4 shows that for ω ∈ (0, 1), ρ(ĤFSOR(ω)) < 1 and ρ(ĤBSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if Â ∉ Rnπ. Again, from (5), we have

(38) ĤFSOR(ω) = (I − ωL̂)−1[(1 − ω)I + ωÛ]
= (I − ωD−1LD)−1[(1 − ω)I + ωD−1UD]
= D−1{(I − ωL)−1[(1 − ω)I + ωU]}D
= D−1HFSOR(ω)D.

In the same way, we can get

(39) ĤBSOR(ω) = D−1HBSOR(ω)D

from (6). Since D is a positive diagonal matrix, Definition 2.9 and Definition 2.10 show that Â = D−1AD ∈ Rnπ if and only if A ∈ Rnπ. Therefore, for ω ∈ (0, 1), ρ(HFSOR(ω)) = ρ(ĤFSOR(ω)) < 1 and ρ(HBSOR(ω)) = ρ(ĤBSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A ∉ Rnπ. This completes the proof. □

Theorem 3.10

Let A = I − L − U ∈ HnW be irreducible. Then for ω ∈ (0, 1), ρ(HSSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the SSOR iterative scheme (7) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A ∉ Rnπ.

Proof

From (5), (6), (7), (38) and (39),

(40) ĤSSOR(ω) = D−1[HBSOR(ω)HFSOR(ω)]D = D−1HSSOR(ω)D.

Therefore, arguing as in the proof of Theorem 3.9 and using Theorem 3.7, we have that for ω ∈ (0, 1), ρ(HSSOR(ω)) = ρ(ĤSSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the SSOR iterative scheme (7) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A ∉ Rnπ. □

Theorem 3.11

Let A = I − L − U = (aij) ∈ HnW with aii ≠ 0 for all i ∈ 〈n〉. Then for ω ∈ (0, 1), ρ(HFSOR(ω)) < 1 and ρ(HBSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A is nonsingular.

Proof

If A ∈ HnW is irreducible, the conclusion follows from Theorem 3.9. If A ∈ HnW is reducible, then since A ∈ HnW with aii ≠ 0 for all i ∈ 〈n〉, HFSOR(ω) and HBSOR(ω) exist, and Lemma 2.8 shows that there exists some i such that the diagonal square block Rii in the Frobenius normal form (15) of A is irreducible and generalized diagonally equipotent. Let HFSORRii and HBSORRii denote the FSOR- and BSOR-iteration matrices associated with the diagonal square block Rii. Direct computations give

(41) ρ(HFSOR(ω)) = max_{1≤i≤s} ρ(HFSORRii) and ρ(HBSOR(ω)) = max_{1≤i≤s} ρ(HBSORRii).

Since each such Rii ∈ GDE|αi| is irreducible, Theorem 3.9 shows that for ω ∈ (0, 1), if ρ(HFSOR(ω)) = max_{1≤i≤s} ρ(HFSORRii) < 1 and ρ(HBSOR(ω)) = max_{1≤i≤s} ρ(HBSORRii) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0), then Rii = A(αi) ∉ R|αi|π. Since Rii ∈ GDE|αi| but Rii ∉ R|αi|π, Lemma 2.12 shows that Rii = A(αi) is nonsingular. Moreover, each of the remaining diagonal square blocks Rjj = A(αj) is generalized strictly or irreducibly diagonally dominant and hence nonsingular. Thus, (15) shows that A is nonsingular. This completes the proof of the necessity.

Let us prove the sufficiency. Assume that A is nonsingular; then each diagonal square block Rii in the Frobenius normal form (15) of A is nonsingular. Since A ∈ HnW, Lemma 2.8 shows that some diagonal square block Rii in (15) is irreducible and generalized diagonally equipotent, while each of the other diagonal square blocks Rjj is generalized strictly diagonally dominant or generalized irreducibly diagonally dominant. Again, each irreducible and generalized diagonally equipotent diagonal square block Rii is nonsingular, so Lemma 2.12 yields that Rii = A(αi) ∉ R|αi|π. Then it follows from Theorem 3.1, Theorem 3.2 and Theorem 3.9 that ρ(HFSOR(ω)) = max_{1≤i≤s} ρ(HFSORRii) < 1 and ρ(HBSOR(ω)) = max_{1≤i≤s} ρ(HBSORRii) < 1, i.e. the sequence {x(i)} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess x(0). This completes the proof. □
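Equality (41) reflects the fact that the FSOR iteration matrix of a matrix in Frobenius normal form is block triangular, so its spectrum is the union of the spectra of the diagonal blocks. A small assumed example checks this:

```python
import numpy as np

def h_fsor(A, omega):
    """FSOR iteration matrix (5) for A = I - L - U."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.solve(I - omega * L, (1 - omega) * I + omega * U)

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

# Assumed block upper triangular (Frobenius-form) example.
R11 = np.array([[1.0, -0.5], [-0.5, 1.0]])
R22 = np.array([[1.0, -0.3], [-0.6, 1.0]])
R12 = np.full((2, 2), 0.2)
A = np.block([[R11, R12], [np.zeros((2, 2)), R22]])

omega = 0.7
whole = rho(h_fsor(A, omega))
blocks = max(rho(h_fsor(R11, omega)), rho(h_fsor(R22, omega)))
print(np.isclose(whole, blocks))   # True: (41) holds here
```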

Theorem 3.12

Let A = I − L − U = (aij) ∈ HnW with aii ≠ 0 for all i ∈ 〈n〉. Then for ω ∈ (0, 1), ρ(HSSOR(ω)) < 1, i.e. the sequence {x(i)} generated by the SSOR iterative scheme (7) converges to the unique solution of (1) for any choice of the initial guess x(0), if and only if A is nonsingular.

Proof

Similar to the proof of Theorem 3.11, the conclusion of this theorem follows readily from Theorem 3.1, Theorem 3.2 and Theorem 3.10. □

4 Numerical examples

In this section, some numerical examples are given to demonstrate the convergence results obtained in this paper.

Example 4.1

Let the coefficient matrix A of linear system (1) be given by the following n × n matrix

(42) An =
[ 1 −1  0  ⋯  0  0  0 ]
[ 1  2 −1  ⋯  0  0  0 ]
[ 0  1  2  ⋯  0  0  0 ]
[ ⋮  ⋮  ⋮  ⋱  ⋮  ⋮  ⋮ ]
[ 0  0  0  ⋯  2 −1  0 ]
[ 0  0  0  ⋯  1  2 −1 ]
[ 0  0  0  ⋯  0  1  1 ]

It is easy to see that AnDEn is irreducible. Since

Dn−1AnDn = |DAn| − |LAn| + |UAn|,

where

Dn = diag[1, −1, ..., (−1)k−1, ..., (−1)n−1],

it follows from Lemma 2.11 that An ∉ Rnπ. Then, Lemma 2.12 shows that An is nonsingular. Therefore, Theorem 3.4 and Theorem 3.7 show that for ω ∈ (0, 1),

ρ(HFSOR(ω)) < 1,  ρ(HBSOR(ω)) < 1 and ρ(HSSOR(ω)) < 1,

i.e. the sequence {x(i)} generated by the FSOR-, BSOR- and SSOR-iterative schemes (5), (6) and (7) converges to the unique solution of (1) for any choice of the initial guess x(0).

In what follows, the spectral radii ρ1 = ρ(HFSOR(ω)), ρ2 = ρ(HBSOR(ω)) and ρ3 = ρ(HSSOR(ω)) of the FSOR-, BSOR- and SSOR-iteration matrices for A100 were computed in Matlab 7.0 on a PC to verify that the results above are true. The computational results are shown in Table 1.

Table 1

The comparison of spectral radii of FSOR-, BSOR- and SSOR-iterative matrices with different ω

ω     0     0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
ρ1    1     0.900  0.800  0.702  0.606  0.514  0.430  0.358  0.514  0.890  1.284
ρ2    1     0.900  0.800  0.702  0.606  0.514  0.430  0.358  0.514  0.890  1.284
ρ3    1     0.808  0.624  0.449  0.310  0.119  0.324  0.780  1.586  3.457  1.640

It is shown in Table 1 and Fig. 1 that: (i) the changes of ρ(HFSOR(ω)) and ρ(HBSOR(ω)) are identical as ω increases. They gradually decrease from 1 to 0.358 as ω increases from 0 to 0.7, while they gradually increase from 0.358 to 1.284 as ω increases from 0.7 to 1. This shows the optimal value of ω should be ωopt ∈ (0.50, 0.80) such that the SOR iterative method converges faster to the unique solution of (1) for any choice of the initial guess x(0).

Fig. 1 The change of spectral radii of FSOR-, BSOR- and SSOR-iterative matrices with different ω.

(ii) ρ(HSSOR(ω)) performs better than ρ(HFSOR(ω)) and ρ(HBSOR(ω)). It decreases quickly from 1 to 0.119 as ω increases from 0 to 0.5, while it increases fast from 0.119 to 1.640 as ω increases from 0.5 to 1. The optimal value of ω should be ωopt ∈ (0.40, 0.60) such that the SOR iterative method converges faster to the unique solution of (1) for any choice of the initial guess x(0). It follows from Table 1 and Fig. 1 that the SSOR iterative method is superior to the other two SOR iterative methods.
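The computations behind Table 1 can be sketched in Python/NumPy. The construction of An below is an assumed reading of (42) — unit subdiagonal, diagonal (1, 2, ..., 2, 1), superdiagonal −1 — and the table entries themselves are not re-asserted.

```python
import numpy as np

def build_An(n):
    """Assumed reading of (42): unit subdiagonal, diagonal
    (1, 2, ..., 2, 1), superdiagonal -1."""
    A = np.diag([1.0] + [2.0] * (n - 2) + [1.0])
    A += np.diag(np.ones(n - 1), -1)
    A -= np.diag(np.ones(n - 1), 1)
    return A

def sor_radii(A, omega):
    """rho of the FSOR, BSOR and SSOR iteration matrices for a
    general (not necessarily unit-diagonal) A."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    Hf = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    Hb = np.linalg.solve(D - omega * U, (1 - omega) * D + omega * L)
    return [max(abs(np.linalg.eigvals(H))) for H in (Hf, Hb, Hb @ Hf)]

r1, r2, r3 = sor_radii(build_An(100), 0.5)
print(r1 < 1, r2 < 1, r3 < 1)
```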

Example 4.2

Let the coefficient matrix A of linear system (1) be given by the following 6 × 6 matrix

(43) A =
[ 5 1 1 1 1 1 ]
[ 1 5 1 1 1 1 ]
[ 1 1 5 1 1 1 ]
[ 0 0 0 2 1 1 ]
[ 0 0 0 1 2 1 ]
[ 0 0 0 1 1 2 ]

Although A ∈ DE6 is reducible, there is no principal submatrix Ak (k < 6) of A such that DAk−1Ak ∈ Rkπ, so Lemma 2.12 shows that A is nonsingular. Therefore, Theorem 3.5 and Theorem 3.8 show that for ω ∈ (0, 1),

ρ1 = ρ(HFSOR(ω)) < 1,  ρ2 = ρ(HBSOR(ω)) < 1 and ρ3 = ρ(HSSOR(ω)) < 1,

i.e. the sequence {x(i)} generated by the FSOR-, BSOR- and SSOR-iterative schemes (5), (6) and (7) converges to the unique solution of (1) for any choice of the initial guess x(0).

The computations in Matlab 7.0 on a PC yield some comparison results on the spectral radii of the FSOR-, BSOR- and SSOR-iteration matrices; see Table 2.

Table 2

The comparison of spectral radii of FSOR-, BSOR- and SSOR-iterative matrices with different ω

ω     0     0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
ρ1    1     0.924  0.846  0.765  0.679  0.620  0.573  0.527  0.510  0.788  1.092
ρ2    1     0.924  0.849  0.774  0.700  0.628  0.561  0.498  0.449  0.712  1.000
ρ3    1     0.849  0.697  0.543  0.417  0.332  0.351  0.342  0.443  0.532  0.585

It is shown in Table 2 and Fig. 2 that (i) the change of ρ(HFSOR(ω)) is similar to that of ρ(HBSOR(ω)). They gradually decrease to their minimal values and then gradually increase from their minimal values as ω increases from 0 to 1. This shows the optimal value of ω for the FSOR- and BSOR-iterative methods should be ωopt ∈ (0.70, 0.90). Moreover, ρ(HFSOR(ωopt)) > ρ(HBSOR(ωopt)) shows that the BSOR iterative method converges faster than the FSOR method to the unique solution of (1) for any choice of the initial guess x(0).

Fig. 2 The change of spectral radii of FSOR-, BSOR- and SSOR-iterative matrices with different ω.

(ii) As in Example 4.1, ρ(HSSOR(ω)) performs better than ρ(HFSOR(ω)) and ρ(HBSOR(ω)). It decreases quickly from 1 to 0.332 as ω increases from 0 to 0.5, rises slightly just after ω = 0.5, and then decreases to 0.342 at ω = 0.7. Finally, it increases from 0.342 to 0.585 as ω increases from 0.7 to 1. The optimal value of ω should be ωopt ∈ (0.40, 0.60) such that the SOR iterative method converges faster to the unique solution of (1) for any choice of the initial guess x(0). It follows from Table 2 and Fig. 2 that the SSOR iterative method is superior to the other two SOR iterative methods.
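Example 4.2's computation can likewise be sketched numerically, assuming (as one reading of (43)) that all displayed off-diagonal entries are +1; the table entries are not re-asserted.

```python
import numpy as np

# Assumed reading of (43): all displayed off-diagonal entries are +1.
A = np.array([[5.0, 1, 1, 1, 1, 1],
              [1, 5, 1, 1, 1, 1],
              [1, 1, 5, 1, 1, 1],
              [0, 0, 0, 2, 1, 1],
              [0, 0, 0, 1, 2, 1],
              [0, 0, 0, 1, 1, 2]])

def sor_radii(A, omega):
    """rho of the FSOR, BSOR and SSOR iteration matrices."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    Hf = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    Hb = np.linalg.solve(D - omega * U, (1 - omega) * D + omega * L)
    return [max(abs(np.linalg.eigvals(H))) for H in (Hf, Hb, Hb @ Hf)]

print([r < 1 for r in sor_radii(A, 0.5)])
```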

5 Further work

In this paper some necessary and sufficient conditions are proposed such that the SOR iterative methods, including the FSOR, BSOR and SSOR iterative methods, are convergent for linear systems with weak H-matrices. The class of weak H-matrices with singular comparison matrices is a subclass of general H-matrices [29] and still presents open theoretical problems. In particular, the convergence of AOR iterative methods for this class of matrices is an open problem and a focus of our further work.


Corresponding author: School of Science, Xi’an Polytechnic University, Xi’an Shaanxi 710048, China

Acknowledgement

This work is supported by the National Natural Science Foundation of China (Nos. 11201362, 11601409, 11271297), the Natural Science Foundation of Shaanxi Province of China (No. 2016JM1009) and the Science Foundation of the Education Department of Shaanxi Province of China (No. 14JK1305).

References

[1] Berman A., Plemmons R.J.: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York, 1979. doi:10.1016/B978-0-12-092250-5.50009-6

[2] Demmel J.W.: Applied Numerical Linear Algebra. SIAM Press, 1997. doi:10.1137/1.9781611971446

[3] Golub G.H., Van Loan C.F.: Matrix Computations, third ed. Johns Hopkins University Press, Baltimore, 1996.

[4] Saad Y.: Iterative Methods for Sparse Linear Systems. PWS Publishing Company, Boston, 1996.

[5] Varga R.S.: Matrix Iterative Analysis, second ed. Springer-Verlag, Berlin, Heidelberg, 2000. doi:10.1007/978-3-642-05156-2

[6] Bru R., Corral C., Gimenez I., Mas J.: Classes of general H-matrices. Linear Algebra Appl., 429(2008), 2358-2366. doi:10.1016/j.laa.2007.10.030

[7] Bru R., Corral C., Gimenez I., Mas J.: Schur complement of general H-matrices. Numer. Linear Algebra Appl., 16(2009), 935-974. doi:10.1002/nla.668

[8] Bru R., Gimenez I., Hadjidimos A.: Is A ∈ ℂn,n a general H-matrix? Linear Algebra Appl., 436(2012), 364-380. doi:10.1016/j.laa.2011.03.009

[9] Cvetković L.J., Herceg D.: Convergence theory for AOR method. Journal of Computational Mathematics, 8(1990), 128-134.

[10] Darvishi M.T., Hessari P.: On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices. Applied Mathematics and Computation, 176(2006), 128-133. doi:10.1016/j.amc.2005.09.051

[11] Evans D.J., Martins M.M.: On the convergence of the extrapolated AOR method. Internat. J. Computer Math., 43(1992), 161-171. doi:10.1080/00207169208804083

[12] Gao Z.X., Huang T.Z.: Convergence of AOR method. Applied Mathematics and Computation, 176(2006), 134-140. doi:10.1016/j.amc.2005.09.020

[13] Hadjidimos A.: Accelerated overrelaxation method. Mathematics of Computation, 32(1978), 149-157. doi:10.1090/S0025-5718-1978-0483340-6

[14] James K.R., Riha W.: Convergence criteria for successive overrelaxation. SIAM Journal on Numerical Analysis, 12(1975), 137-143. doi:10.1137/0712013

[15] James K.R.: Convergence of matrix iterations subject to diagonal dominance. SIAM Journal on Numerical Analysis, 10(1973), 478-484. doi:10.1137/0710042

[16] Li W.: On Nekrasov matrices. Linear Algebra Appl., 281(1998), 87-96. doi:10.1016/S0024-3795(98)10031-9

[17] Martins M.M.: On an accelerated overrelaxation iterative method for linear systems with strictly diagonally dominant matrix. Mathematics of Computation, 35(1980), 1269-1273. doi:10.1090/S0025-5718-1980-0583503-4

[18] Ortega J.M., Plemmons R.J.: Extension of the Ostrowski-Reich theorem for SOR iterations. Linear Algebra Appl., 28(1979), 177-191. doi:10.1016/0024-3795(79)90131-9

[19] Plemmons R.J.: M-matrix characterizations I: nonsingular M-matrices. Linear Algebra Appl., 18(1977), 175-188. doi:10.1016/0024-3795(77)90073-8

[20] Song Y.Z.: On the convergence of the MAOR method. Journal of Computational and Applied Mathematics, 79(1997), 299-317. doi:10.1016/S0377-0427(97)00008-3

[21] Song Y.Z.: On the convergence of the generalized AOR method. Linear Algebra and its Applications, 256(1997), 199-218. doi:10.1016/S0024-3795(96)00028-6

[22] Tian G.X., Huang T.Z., Cui S.Y.: Convergence of generalized AOR iterative method for linear systems with strictly diagonally dominant matrices. Journal of Computational and Applied Mathematics, 213(2008), 240-247. doi:10.1016/j.cam.2007.01.016

[23] Varga R.S.: On recurring theorems on diagonal dominance. Linear Algebra Appl., 13(1976), 1-9. doi:10.1016/0024-3795(76)90037-9

[24] Wang X.M.: Convergence for the MSOR iterative method applied to H-matrices. Applied Numerical Mathematics, 21(1996), 469-479. doi:10.1016/S0168-9274(96)00016-5

[25] Wang X.M.: Convergence theory for the general GAOR type iterative method and the MSOR iterative method applied to H-matrices. Linear Algebra and its Applications, 250(1997), 1-19. doi:10.1016/0024-3795(95)00391-6

[26] Xiang S.H., Zhang S.L.: A convergence analysis of block accelerated over-relaxation iterative methods for weak block H-matrices to partition π. Linear Algebra and its Applications, 418(2006), 20-32. doi:10.1016/j.laa.2006.01.013

[27] Young D.M.: Iterative Solution of Large Linear Systems. Academic Press, New York, 1971.

[28] Zhang C.Y., Xu F.M., Xu Z.B., Li J.C.: General H-matrices and their Schur complements. Frontiers of Mathematics in China, 9(2014), 1141-1168. doi:10.1007/s11464-014-0395-1

[29] Zhang C.Y., Ye D., Zhong C.L., Luo S.H.: Convergence on Gauss-Seidel iterative methods for linear systems with general H-matrices. Electronic Journal of Linear Algebra, 30(2015), 843-870. doi:10.13001/1081-3810.1972

[30] Zhang C.Y., Li Y.T.: Diagonal dominant matrices and the determining of H-matrices and M-matrices. Guangxi Sciences, 12(2005), 161-164.

[31] Zhang C.Y., Xu C.X., Li Y.T.: The eigenvalue distribution on Schur complements of H-matrices. Linear Algebra Appl., 422(2007), 250-264. doi:10.1016/j.laa.2006.09.022

[32] Zhang C.Y., Luo S.H., Xu C.X., Jiang H.Y.: Schur complements of generally diagonally dominant matrices and a criterion for irreducibility of matrices. Electronic Journal of Linear Algebra, 18(2009), 69-87. doi:10.13001/1081-3810.1295

[33] Frankel S.P.: Convergence rates of iterative treatments of partial differential equations. Math. Tables Aids Comput., 4(1950), 65-75. doi:10.2307/2002770

[34] Young D.M.: Iterative methods for solving partial differential equations of elliptic type. Doctoral Thesis, Harvard University, Cambridge, MA, 1950.

[35] Kahan W.: Gauss-Seidel methods of solving large systems of linear equations. Doctoral Thesis, University of Toronto, Toronto, Canada, 1958.

[36] Young D.M.: Iterative methods for solving partial differential equations of elliptic type. Trans. Amer. Math. Soc., 76(1954), 92-111. doi:10.1090/S0002-9947-1954-0059635-7

[37] Neumaier A., Varga R.S.: Exact convergence and divergence domains for the symmetric successive overrelaxation iterative (SSOR) method applied to H-matrices. Linear Algebra Appl., 58(1984), 261-272. doi:10.1016/0024-3795(84)90216-7

[38] Meurant G.: Computer Solution of Large Linear Systems. Studies in Mathematics and its Applications, Vol. 28, North-Holland Publishing Co., Amsterdam, 1999.

Received: 2016-6-18
Accepted: 2016-9-2
Published Online: 2016-10-10
Published in Print: 2016-1-1

© 2016 Zhang et al., published by De Gruyter Open

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
