
A note on the parallel GSAOR method for block diagonally dominant matrices

Xuezhong Wang and Cuixia Li
Published/Copyright: March 18, 2016

Abstract

Recently, Liu et al. [Q. B. Liu, G. L. Chen, J. Huang, On the parallel GSAOR method for block diagonally dominant matrices, Appl. Math. Comput. 215 (2009), 707–715] studied the convergence of parallel multisplitting generalized SAOR iterative methods, based on the generalized AOR iterative method, for solving a linear system whose coefficient matrix is a block diagonally dominant matrix or a generalized block diagonally dominant matrix. In this paper, we extend the domain of convergence of the parallel multisplitting generalized SAOR iterative methods from $1 \le \omega_i < 2/(1 + \mu_1(PMQ))$ and $1 \le \omega_i < 2/(1 + \mu_2(PMQ))$ to $0 < \omega_i < 2/(1 + \mu_1(PMQ))$ and $0 < \omega_i < 2/(1 + \mu_2(PMQ))$.

MSC 2010: 65F10; 65F50

1 Introduction

For the linear system

$$Ax = b \qquad (1.1)$$

where $A \in \mathbb{R}^{n \times n}$ is nonsingular and $x, b \in \mathbb{R}^n$ are $n$-dimensional vectors. Let us consider the splitting of the matrix $A$ of the linear system (1.1):

$$A = D - L - U$$

where $D = \operatorname{diag}(A)$, and $-L$ and $-U$ are the strictly lower and strictly upper triangular parts of $A$, respectively. James [2] presented a generalized accelerated overrelaxation (GAOR) method given by

$$x^{m+1} = L_1(r, \Omega)\, x^m + (D - r\Omega L)^{-1} \Omega b, \qquad m = 0, 1, 2, \ldots$$

or

$$x^{m+1} = U_1(r, \Omega)\, x^m + (D - r\Omega U)^{-1} \Omega b, \qquad m = 0, 1, 2, \ldots$$

where

$$L_1(r, \Omega) = (D - r\Omega L)^{-1}\{(I - \Omega)D + (1 - r)\Omega L + \Omega U\}$$

and

$$U_1(r, \Omega) = (D - r\Omega U)^{-1}\{(I - \Omega)D + (1 - r)\Omega U + \Omega L\}$$

are the iteration matrices, $\Omega = \operatorname{diag}(\omega_1, \omega_2, \ldots, \omega_n)$ with $\omega_i \in \mathbb{R}_+$, and $r \in [0, 1]$.

Then the generalized symmetric AOR (GSAOR) method can be defined as follows [8]:

$$x^{m+1} = T x^m + C b, \qquad m = 0, 1, 2, \ldots$$

where $T = U_1(r, \Omega)\, L_1(r, \Omega)$ and $C = U_1(r, \Omega)(D - r\Omega L)^{-1}\Omega + (D - r\Omega U)^{-1}\Omega$.
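To make these formulas concrete, here is a minimal sketch (not from the paper; it assumes NumPy and a small hypothetical test matrix) that assembles $L_1(r,\Omega)$, $U_1(r,\Omega)$, $T$, and $C$ from the point splitting $A = D - L - U$ and runs the GSAOR iteration:

```python
import numpy as np

def gsaor_operators(A, omega, r):
    """Build the GSAOR iteration matrix T = U1(r,W) L1(r,W) and the
    constant-term matrix C from the splitting A = D - L - U."""
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                # -L is the strictly lower part of A
    U = -np.triu(A, 1)                 # -U is the strictly upper part of A
    W = np.diag(omega)                 # Omega = diag(omega_1, ..., omega_n)
    I = np.eye(n)
    ML = D - r * W @ L                 # D - r*Omega*L
    MU = D - r * W @ U                 # D - r*Omega*U
    L1 = np.linalg.solve(ML, (I - W) @ D + (1 - r) * W @ L + W @ U)
    U1 = np.linalg.solve(MU, (I - W) @ D + (1 - r) * W @ U + W @ L)
    T = U1 @ L1
    C = U1 @ np.linalg.solve(ML, W) + np.linalg.solve(MU, W)
    return T, C

# Hypothetical usage on a strictly diagonally dominant test matrix.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
T, C = gsaor_operators(A, omega=np.full(3, 1.1), r=0.5)
x = np.zeros(3)
for _ in range(100):
    x = T @ x + C @ b                  # x^{m+1} = T x^m + C b
print(np.linalg.norm(A @ x - b))       # residual should be near zero
```

With all $\omega_i$ equal to some $\omega$ and $r = \omega$, this forward-then-backward sweep reduces to an SSOR-type iteration.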

In the following, we recall the mathematical descriptions of the block linear system and the BMM introduced in [5, 6].

Let $s (\le n)$ and $n_i (\le n)$, $i = 1, 2, \ldots, s$, be given positive integers satisfying $\sum_{i=1}^{s} n_i = n$, and denote

$$V_n(n_1, \ldots, n_s) = \left\{ x \in \mathbb{R}^n \mid x = (x_1^T, \ldots, x_s^T)^T, \ x_i \in \mathbb{R}^{n_i} \right\},$$
$$\mathbb{L}_n(n_1, \ldots, n_s) = \left\{ A \in \mathbb{R}^{n \times n} \mid A = (A_{ij})_{s \times s}, \ A_{ij} \in \mathbb{R}^{n_i \times n_j} \right\}.$$

For convenience, we will simply use $\mathbb{L}_n$ for $\mathbb{L}_n(n_1, \ldots, n_s)$ and $V_n$ for $V_n(n_1, \ldots, n_s)$. Then, the block linear system to be solved can be expressed in the form

$$Ax = b, \qquad A \in \mathbb{L}_n, \quad x, b \in V_n \qquad (1.2)$$

where $A \in \mathbb{L}_n$ is the nonsingular known coefficient matrix, $b \in V_n$ is the known right-hand side vector, and $x \in V_n$ is the unknown vector.

If block matrices Mk, Nk, Ek ϵ 𝕃n, k = 1, 2, ..., 𝛼, satisfy:

  • 1) $A = M_k - N_k$, $M_k$ nonsingular, $k = 1, 2, \ldots, \alpha$;

  • 2) $E_k = \operatorname{diag}(E_{11}^{(k)}, \ldots, E_{ss}^{(k)})$, $k = 1, 2, \ldots, \alpha$;

  • 3) $\sum_{k=1}^{\alpha} E_{ii}^{(k)} = I$, $i = 1, 2, \ldots, s$;

then the collection of triples $(M_k, N_k, E_k)$, $k = 1, 2, \ldots, \alpha$, is called a block matrix multisplitting (BMM) of the block matrix $A \in \mathbb{L}_n$.

O’Leary and White [4] introduced the matrix multisplitting method in 1985 for solving large sparse linear systems in parallel on multiprocessor systems; it was further studied by many authors [1, 5–14, 17–19].

Suppose that we have a multiprocessor with $\alpha$ processors connected to a host processor, that is, the same number of processors as splittings, and that all processors have the latest update vector $x^m$; then the $k$th processor only computes those entries of the vector

$$M_k^{-1} N_k x^m + M_k^{-1} b$$

which correspond to the block diagonal entries Eii(k) of the block matrix Ek. The processor then scales these entries so as to be able to deliver the vector

$$E_k \left( M_k^{-1} N_k x^m + M_k^{-1} b \right)$$

to the host processor, performing the parallel multisplitting scheme

$$x^{m+1} = \sum_{k=1}^{\alpha} E_k M_k^{-1} N_k x^m + \sum_{k=1}^{\alpha} E_k M_k^{-1} b = H x^m + G b, \qquad m = 0, 1, 2, \ldots$$
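As an illustration of this scheme (a sketch only; the splittings, the weights, and the thread pool standing in for the $\alpha$ processors are all hypothetical), one step of $x^{m+1} = \sum_k E_k(M_k^{-1}N_k x^m + M_k^{-1}b)$ can be written as:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def multisplitting_step(x, splittings, b):
    """One parallel multisplitting step: each triple (M_k, N_k, E_k)
    is processed independently, then the weighted results are summed."""
    def local_update(triple):
        M, N, E = triple
        return E @ np.linalg.solve(M, N @ x + b)   # E_k M_k^{-1}(N_k x + b)
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(local_update, splittings))

# Hypothetical data: two Gauss-Seidel-type splittings A = M_k - N_k of a
# strictly diagonally dominant matrix, with weights satisfying E1 + E2 = I.
A = np.array([[5.0, -1.0, -1.0], [-1.0, 5.0, -1.0], [-1.0, -1.0, 5.0]])
b = np.array([3.0, 3.0, 3.0])
M1, M2 = np.tril(A), np.triu(A)        # forward / backward triangular parts
N1, N2 = M1 - A, M2 - A
E1 = np.diag([1.0, 0.5, 0.0])
E2 = np.eye(3) - E1
x = np.zeros(3)
for _ in range(60):
    x = multisplitting_step(x, [(M1, N1, E1), (M2, N2, E2)], b)
print(np.linalg.norm(A @ x - b))       # converges to the solution of Ax = b
```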

In this paper, we investigate the domain of convergence of block GSAOR multisplitting methods for solving the linear system (1.1) when the coefficient matrix $A$ is a block H-matrix or a block strictly diagonally dominant matrix.

2 Parallel multisplitting GSAOR methods

Given a positive integer $\alpha$ ($\alpha \le s$), we separate the number set $\{1, 2, \ldots, s\}$ into $\alpha$ nonempty subsets $J_k$, $k = 1, 2, \ldots, \alpha$, such that

$$J_k \subseteq \{1, 2, \ldots, s\}, \qquad \bigcup_{k=1}^{\alpha} J_k = \{1, 2, \ldots, s\}.$$

Note that there may be overlaps among the subsets $J_1, J_2, \ldots, J_\alpha$. Corresponding to this separation, we introduce the matrices

$$D = \operatorname{diag}(A_{11}, \ldots, A_{ss}) \in \mathbb{L}_n,$$
$$L_k = (L_{ij}^{(k)}) \in \mathbb{L}_n, \qquad L_{ij}^{(k)} = \begin{cases} -A_{ij}, & i, j \in J_k, \ i > j, \\ 0, & \text{otherwise}, \end{cases}$$
$$U_k = (U_{ij}^{(k)}) \in \mathbb{L}_n, \qquad U_{ij}^{(k)} = \begin{cases} -A_{ij} - L_{ij}^{(k)}, & i \neq j, \\ 0, & \text{otherwise}, \end{cases}$$
$$E_k = \operatorname{diag}(E_{11}^{(k)}, \ldots, E_{ss}^{(k)}) \in \mathbb{L}_n, \qquad E_{ii}^{(k)} = 0 \ \text{for} \ i \notin J_k,$$
for $i, j = 1, 2, \ldots, s$ and $k = 1, 2, \ldots, \alpha$.

Obviously, D is a block diagonal matrix, Lk, k = 1, 2,…, 𝛼, are block strictly lower triangular matrices, Uk, k = 1, 2,…, 𝛼, are general block matrices, and Ek, k = 1, 2,…, 𝛼, are block diagonal matrices. If they satisfy:

  1. D is nonsingular;

  2. $A = D - L_k - U_k$, $k = 1, 2, \ldots, \alpha$;

  3. $\sum_{k=1}^{\alpha} E_k = I$;

then the collections of triples $(D - U_k, L_k, E_k)$ and $(D - L_k, U_k, E_k)$, $k = 1, 2, \ldots, \alpha$, are BMMs of the block matrix $A \in \mathbb{L}_n$. Here, $I$ denotes the identity matrix of order $n$.
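This construction can be carried out mechanically. The following sketch (hypothetical block sizes, index sets, and uniform weights over the overlaps; none of this is prescribed by the paper) builds $D$, $L_k$, $U_k$, $E_k$ from an overlapping separation and checks conditions 2 and 3:

```python
import numpy as np

def build_bmm(A, sizes, subsets):
    """Return D and the triples (L_k, U_k, E_k) for the separation given
    by `subsets`, so that A = D - L_k - U_k and sum_k E_k = I."""
    s = len(sizes)
    off = np.cumsum([0] + list(sizes))
    blk = lambda i: slice(off[i], off[i + 1])
    D = np.zeros_like(A)
    for i in range(s):
        D[blk(i), blk(i)] = A[blk(i), blk(i)]
    counts = [sum(i in J for J in subsets) for i in range(s)]
    triples = []
    for J in subsets:
        L, U, E = (np.zeros_like(A) for _ in range(3))
        for i in range(s):
            for j in range(s):
                if i == j:
                    continue
                if i in J and j in J and i > j:
                    L[blk(i), blk(j)] = -A[blk(i), blk(j)]  # lower part inside J_k
                else:
                    U[blk(i), blk(j)] = -A[blk(i), blk(j)]  # everything else to U_k
            if i in J:   # uniform weights over the overlapping subsets
                E[blk(i), blk(i)] = np.eye(sizes[i]) / counts[i]
        triples.append((L, U, E))
    return D, triples

# Hypothetical example: s = 3 blocks of size 2, two overlapping subsets.
A = 4.0 * np.eye(6) - np.diag(np.ones(5), 1) - np.diag(np.ones(5), -1)
D, triples = build_bmm(A, sizes=[2, 2, 2], subsets=[{0, 1}, {1, 2}])
print(all(np.allclose(A, D - L - U) for L, U, _ in triples))  # condition 2
print(np.allclose(sum(E for *_, E in triples), np.eye(6)))    # condition 3
```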

Let $(M_k, N_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be a BMM of the block matrix $A \in \mathbb{L}_n$. We will define the local parallel multisplitting blockwise relaxation generalized SAOR method (LMBGSAOR) and the global parallel multisplitting blockwise relaxation generalized SAOR method (GMBGSAOR).

Algorithm 2.1 (local parallel multisplitting blockwise relaxation method).

Given an initial vector $x^0$.

For m = 0, 1, 2, ... repeat (I) and (II), until convergence.

  • (I) For $k = 1, 2, \ldots, \alpha$, solve (in parallel) for $y_k$:

    $$M_k y_k = N_k x^m + b.$$
  • (II) Compute

    $$x^{m+1} = \sum_{k=1}^{\alpha} E_k y_k.$$

The LMBGSAOR method associated with Algorithm 2.1 can be written as

$$x^{m+1} = H_{\mathrm{LMBGSAOR}}\, x^m + G_{\mathrm{LMBGSAOR}}\, b, \qquad m = 0, 1, \ldots \qquad (2.1)$$

where

$$H_{\mathrm{LMBGSAOR}} = \sum_{k=1}^{\alpha} E_k\, U^{(k)}(A)\, L^{(k)}(A), \qquad (2.2)$$
$$U^{(k)}(A) = (D - r\Omega U_k)^{-1}\{(I - \Omega)D + (1 - r)\Omega U_k + \Omega L_k\},$$
$$L^{(k)}(A) = (D - r\Omega L_k)^{-1}\{(I - \Omega)D + (1 - r)\Omega L_k + \Omega U_k\},$$
$$G_{\mathrm{LMBGSAOR}} = \sum_{k=1}^{\alpha} E_k \left\{ U^{(k)}(A)(D - r\Omega L_k)^{-1}\Omega + (D - r\Omega U_k)^{-1}\Omega \right\}.$$
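A sketch of how (2.1)–(2.2) can be assembled and run (illustrative only; it reuses the kind of $(L_k, U_k, E_k)$ data constructed above, and the single point-block splitting in the check below is hypothetical):

```python
import numpy as np

def lmbgsaor_operators(D, triples, omega, r):
    """Assemble H_LMBGSAOR and G_LMBGSAOR of (2.2); dense solves stand
    in for the per-processor local work."""
    n = D.shape[0]
    W, I = np.diag(omega), np.eye(n)
    H, G = np.zeros((n, n)), np.zeros((n, n))
    for L, U, E in triples:
        ML, MU = D - r * W @ L, D - r * W @ U
        Lk = np.linalg.solve(ML, (I - W) @ D + (1 - r) * W @ L + W @ U)
        Uk = np.linalg.solve(MU, (I - W) @ D + (1 - r) * W @ U + W @ L)
        H += E @ Uk @ Lk                   # E_k U^(k)(A) L^(k)(A)
        G += E @ (Uk @ np.linalg.solve(ML, W) + np.linalg.solve(MU, W))
    return H, G

# Sanity check with a single splitting (alpha = 1, point blocks), where
# the method reduces to the GSAOR iteration of Section 1:
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
D = np.diag(np.diag(A))
triples = [(-np.tril(A, -1), -np.triu(A, 1), np.eye(3))]
H, G = lmbgsaor_operators(D, triples, omega=np.full(3, 1.1), r=0.5)
x = np.zeros(3)
for _ in range(100):
    x = H @ x + G @ b                      # iteration (2.1)
print(np.linalg.norm(A @ x - b))
```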

By using a suitable positive relaxation parameter $\beta$, we establish the global parallel multisplitting blockwise relaxation GSAOR method, which is based on Algorithm 2.1.

Algorithm 2.2 (global parallel multisplitting blockwise relaxation method).

Given an initial vector $x^0$.

For m = 0, 1, 2, ... repeat (I) and (II), until convergence.

  • (I) For $k = 1, 2, \ldots, \alpha$, solve (in parallel) for $y_k$:

    $$M_k y_k = N_k x^m + b.$$
  • (II) Compute

$$x^{m+1} = \beta \sum_{k=1}^{\alpha} E_k y_k + (1 - \beta)\, x^m.$$

The GMBGSAOR method associated with Algorithm 2.2 can be written as

$$x^{m+1} = H_{\mathrm{GMBGSAOR}}\, x^m + \beta\, G_{\mathrm{GMBGSAOR}}\, b, \qquad m = 0, 1, \ldots \qquad (2.3)$$

where $H_{\mathrm{GMBGSAOR}} = \beta H_{\mathrm{LMBGSAOR}} + (1 - \beta) I$ and $G_{\mathrm{GMBGSAOR}} = G_{\mathrm{LMBGSAOR}}$.
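In code, the global variant only changes the combination step of Algorithm 2.1; a sketch (names hypothetical, with `weights` standing for the $E_k$ and `local_solutions` for the $y_k$ of Algorithm 2.2):

```python
import numpy as np

def gmbgsaor_step(x, weights, local_solutions, beta):
    """x^{m+1} = beta * sum_k E_k y_k + (1 - beta) * x^m, so that the
    iteration matrix is H_GMBGSAOR = beta * H_LMBGSAOR + (1 - beta) * I."""
    combined = sum(E @ y for E, y in zip(weights, local_solutions))
    return beta * combined + (1.0 - beta) * x
```

With $\beta = 1$ this reduces to Algorithm 2.1; Theorems 4.5–4.8 below give ranges of $\beta$ for which the extra relaxation preserves convergence.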

3 Preliminaries

We shall use the following notation and lemmas. A matrix $A = (a_{ij})$ is called a Z-matrix if $a_{ij} \le 0$ for any $i \neq j$. A Z-matrix $A$ is a nonsingular M-matrix if $A$ is nonsingular and $A^{-1} \ge 0$. Additionally, we denote the spectral radius of $A$ by $\rho(A)$. It is well known that if $A \ge 0$ and there exists a vector $x > 0$ such that $Ax < \alpha x$, then $\rho(A) < \alpha$ [3]. Let

$$\mathbb{L}_{n,I}(n_1, \ldots, n_s) = \left\{ A = (A_{ij}) \in \mathbb{R}^{n \times n} \mid A_{ii} \in \mathbb{R}^{n_i \times n_i} \ \text{nonsingular}, \ i = 1, \ldots, s \right\},$$
$$\mathbb{L}_{n,I}^{d}(n_1, \ldots, n_s) = \left\{ A = \operatorname{diag}(A_{11}, A_{22}, \ldots, A_{ss}) \mid A_{ii} \in \mathbb{R}^{n_i \times n_i} \ \text{nonsingular}, \ i = 1, \ldots, s \right\}.$$

We will review the concepts of strictly block diagonally dominant matrix and block H-matrix.

Definition 3.1 ([1, 20]). Let $A \in \mathbb{L}_{n,I}$. The (I) block comparison matrix $\mathcal{M}(A) = ((\mathcal{M}(A))_{ij}) \in \mathbb{R}^{s \times s}$ and the (II) block comparison matrix $\mathcal{N}(A) = ((\mathcal{N}(A))_{ij}) \in \mathbb{R}^{s \times s}$ are defined, respectively, as follows:

$$(\mathcal{M}(A))_{ij} = \begin{cases} \|A_{ii}^{-1}\|^{-1}, & i = j, \\ -\|A_{ij}\|, & i \neq j, \end{cases} \qquad (\mathcal{N}(A))_{ij} = \begin{cases} 1, & i = j, \\ -\|A_{ii}^{-1} A_{ij}\|, & i \neq j, \end{cases} \qquad i, j = 1, \ldots, s,$$

where ∥·∥ is a consistent matrix norm such that ∥I∥ = 1.

For block matrices $A \in \mathbb{L}_{n,I}$, we define $D(A) = \operatorname{diag}(A_{11}, A_{22}, \ldots, A_{ss})$, $B(A) = D(A) - A$, $J(A) = D(A)^{-1} B(A)$, $\mu_1(A) = \rho(J(\mathcal{M}(A)))$, and $\mu_2(A) = \rho(I - \mathcal{N}(A))$. In [1], Liu et al. show that $\mathcal{M}(I - J(M)) = \mathcal{N}(I - J(M))$ and $\mu_2(A) \le \mu_1(A)$.
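These quantities are easy to compute; a sketch (using the infinity norm as the consistent norm with $\|I\| = 1$, and a hypothetical partition):

```python
import numpy as np

def comparison_matrices(A, sizes):
    """Return M(A), N(A), mu1 = rho(J(M(A))), and mu2 = rho(I - N(A))
    for the block partition given by `sizes`."""
    s = len(sizes)
    off = np.cumsum([0] + list(sizes))
    blk = lambda i: slice(off[i], off[i + 1])
    norm = lambda B: np.linalg.norm(B, np.inf)
    M, N = np.zeros((s, s)), np.eye(s)
    for i in range(s):
        Aii_inv = np.linalg.inv(A[blk(i), blk(i)])
        M[i, i] = 1.0 / norm(Aii_inv)
        for j in range(s):
            if j != i:
                M[i, j] = -norm(A[blk(i), blk(j)])
                N[i, j] = -norm(Aii_inv @ A[blk(i), blk(j)])
    rho = lambda B: max(abs(np.linalg.eigvals(B)))
    J = np.diag(1.0 / np.diag(M)) @ (np.diag(np.diag(M)) - M)  # J(M(A))
    return M, N, rho(J), rho(np.eye(s) - N)

# Hypothetical example: a block tridiagonal test matrix, s = 3, n_i = 2.
A = 4.0 * np.eye(6) - np.diag(np.ones(5), 1) - np.diag(np.ones(5), -1)
M, N, mu1, mu2 = comparison_matrices(A, sizes=[2, 2, 2])
print(mu1, mu2, mu2 <= mu1 + 1e-12)    # mu2 <= mu1, as noted above
```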

Definition 3.2 ([15, 16]). Let $A \in \mathbb{L}_{n,I}$. A matrix $A$ is said to be a strictly (I) block diagonally dominant matrix if

$$\|A_{ii}^{-1}\|^{-1} > \sum_{j \neq i} \|A_{ij}\|, \qquad i = 1, 2, \ldots, s.$$

A matrix $A$ is said to be a strictly (II) block diagonally dominant matrix if

$$\sum_{j \neq i} \|A_{ii}^{-1} A_{ij}\| < 1, \qquad i = 1, 2, \ldots, s.$$

Remark 3.1. From Definition 3.2, we know that a strictly (I) block diagonally dominant matrix must be a strictly (II) block diagonally dominant matrix, but not conversely.
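Both tests translate directly into code; the sketch below (infinity norm and partition data again hypothetical) also illustrates Remark 3.1 numerically:

```python
import numpy as np

def is_strictly_block_dd(A, sizes, kind="I"):
    """Check strict (I) or (II) block diagonal dominance, block row by
    block row, in the infinity norm."""
    s = len(sizes)
    off = np.cumsum([0] + list(sizes))
    blk = lambda i: slice(off[i], off[i + 1])
    norm = lambda B: np.linalg.norm(B, np.inf)
    for i in range(s):
        Aii_inv = np.linalg.inv(A[blk(i), blk(i)])
        if kind == "I":    # ||A_ii^{-1}||^{-1} > sum_{j != i} ||A_ij||
            ok = 1.0 / norm(Aii_inv) > sum(
                norm(A[blk(i), blk(j)]) for j in range(s) if j != i)
        else:              # sum_{j != i} ||A_ii^{-1} A_ij|| < 1
            ok = sum(norm(Aii_inv @ A[blk(i), blk(j)])
                     for j in range(s) if j != i) < 1.0
        if not ok:
            return False
    return True

A = 4.0 * np.eye(6) - np.diag(np.ones(5), 1) - np.diag(np.ones(5), -1)
print(is_strictly_block_dd(A, [2, 2, 2], "I"),   # True, and hence ...
      is_strictly_block_dd(A, [2, 2, 2], "II"))  # ... True (Remark 3.1)
```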

Definition 3.3 ([1, 15, 16]). Let $A \in \mathbb{L}_{n,I}$. A matrix $A$ is said to be a (I) block H-matrix (an $H_B^{(I)}(P,Q)$-matrix) with respect to nonsingular matrices $P, Q \in \mathbb{L}_{n,I}^d$ if $\mathcal{M}(PAQ)$ is an M-matrix; $A$ is said to be a (II) block H-matrix (an $H_B^{(II)}(P,Q)$-matrix) with respect to nonsingular matrices $P, Q \in \mathbb{L}_{n,I}^d$ if $\mathcal{N}(PAQ)$ is an M-matrix.

Remark 3.2 ([1]). From Definition 3.3, we see that a (I) block H-matrix (an $H_B^{(I)}(P,Q)$-matrix) must be a (II) block H-matrix (an $H_B^{(II)}(P,Q)$-matrix), but the converse is not true.

Combining Remarks 3.1 and 3.2, we have

Remark 3.3. A strictly (I) block diagonally dominant matrix must be an $H_B^{(I)}(P,Q)$-matrix; a strictly (II) block diagonally dominant matrix must be a (II) block H-matrix (an $H_B^{(II)}(P,Q)$-matrix).

Definition 3.4 ([15]). If there exists a block diagonal matrix $X$ such that $AX$ is a strictly block diagonally dominant matrix, then $A$ is said to be a block H-matrix (an $H_B^{(I)}(P,Q)$-matrix and an $H_B^{(II)}(P,Q)$-matrix).

Definition 3.5 ([1, 5, 6]). Let $A \in \mathbb{L}_n$. We call $[A] = (\|A_{ij}\|) \in \mathbb{R}^{s \times s}$ the block absolute value of the block matrix $A$. The block absolute value $[x] \in \mathbb{R}^s$ of a block vector $x \in V_n$ is defined analogously.

These kinds of block absolute values have the following important properties.

Lemma 3.1 ([1, 5, 6]). Let $L, M \in \mathbb{L}_n$, $x, y \in V_n$, and $r \in \mathbb{R}$. Then

1) |[L] − [M]| ≤ [L + M] ≤ [L] + [M] (|[x] − [y]| ≤ [x + y] ≤ [x] + [y]);

2) [LM] ≤ [L][M] ([xy] ≤ [x][y]);

3) [rM] ≤ |r|[M] ([rx] ≤ |r|[x]);

4) $\rho(M) \le \rho(|M|) \le \rho([M])$ (here, $\|\cdot\|$ is either $\|\cdot\|_\infty$ or $\|\cdot\|_1$).
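Property 4) is the one used repeatedly in Section 4; a quick numerical illustration (a sketch with the infinity norm, an arbitrary random matrix, and a hypothetical partition):

```python
import numpy as np

def block_abs(A, sizes):
    """[A] = (||A_ij||_inf), the block absolute value of Definition 3.5."""
    off = np.cumsum([0] + list(sizes))
    s = len(sizes)
    return np.array([[np.linalg.norm(A[off[i]:off[i+1], off[j]:off[j+1]], np.inf)
                      for j in range(s)] for i in range(s)])

rho = lambda B: max(abs(np.linalg.eigvals(B)))
M = np.random.default_rng(0).standard_normal((6, 6))
# The three spectral radii below come out nondecreasing, as in 4):
print(rho(M), rho(np.abs(M)), rho(block_abs(M, [2, 2, 2])))
```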

Lemma 3.2 ([5, 6]). Let $A \in \mathbb{L}_{n,I}$ be a strictly block diagonally dominant matrix. Then

1) A is nonsingular;

2) $[A^{-1}] \le \mathcal{M}(A)^{-1}$;

3) $\rho(J(\mathcal{M}(A))) < 1$.

Let

$$\Omega_B^{I}(A) = \left\{ F = (F_{ij}) \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s) \mid \|F_{ii}^{-1}\| = \|A_{ii}^{-1}\|, \ \|F_{ij}\| = \|A_{ij}\|, \ i \neq j, \ i, j = 1, 2, \ldots, s \right\},$$
$$\Omega_B^{II}(A) = \left\{ F = (F_{ij}) \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s) \mid \|F_{ii}^{-1} F_{ij}\| = \|A_{ii}^{-1} A_{ij}\|, \ i, j = 1, 2, \ldots, s \right\}$$

denote, respectively, the sets of type (I) and type (II) matrices whose block absolute values agree with the corresponding block absolute values of the matrix $A$.

4 Main results

For the present Algorithms 2.1 and 2.2, we give convergence theorems for block diagonally dominant matrices and block H-matrices.

Theorem 4.1. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be a strictly (I) block diagonally dominant matrix, $A \in \Omega_B^I(PMQ)$, and let the collections of triples $(D - U_k, L_k, E_k)$ and $(D - L_k, U_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha. \qquad (4.1)$$
If
$$0 < \omega_i < \frac{2}{1 + \mu_1(PMQ)}, \qquad i = 1, 2, \ldots, n,$$

then the LMBGSAOR method converges for any initial vector $x^0 \in V_n$.

Proof. By Lemma 3.1, we know

$$\rho(H_{\mathrm{LMBGSAOR}}) \le \rho(|H_{\mathrm{LMBGSAOR}}|) \le \rho([H_{\mathrm{LMBGSAOR}}])$$

and thus the iteration (2.1) converges for any initial vector $x^0 \in V_n$ if and only if

$$\rho([H_{\mathrm{LMBGSAOR}}]) < 1.$$

Since $A \in \Omega_B^I(PMQ)$, we have $\mathcal{M}(A) = \mathcal{M}(PMQ)$. Because $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ is a strictly (I) block diagonally dominant matrix, so is $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$, and thus $\mu_1 = \rho(J(\mathcal{M}(A))) = \rho(J(\mathcal{M}(PMQ))) = \mu_1(PMQ) < 1$, which follows from Lemma 3.2. Let $B = L_k + U_k$; by (4.1), we know that $[B] = [L_k] + [U_k]$, $k = 1, 2, \ldots, \alpha$. Clearly, $D - r\Omega L_k$, $k = 1, 2, \ldots, \alpha$, are strictly (I) block diagonally dominant matrices and $\mathcal{M}(D) - r[\Omega][B]$ is a strictly diagonally dominant matrix for $0 < \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$, and $r \in [0, 1]$, since $A$ is a strictly (I) block diagonally dominant matrix. Since

$$\mathcal{M}(D) - r[\Omega][B] \le \mathcal{M}(D) - r[\Omega][U_k] \le \mathcal{M}(D)$$

for $0 < \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$, $r \in [0, 1]$, $k = 1, 2, \ldots, \alpha$, and $\mathcal{M}(A)$ is a strictly diagonally dominant matrix, we have that $\mathcal{M}(D) - r[\Omega][B]$ and $\mathcal{M}(D)$ are strictly diagonally dominant M-matrices for $0 < \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$, and $r \in [0, 1]$. Therefore, $\mathcal{M}(D) - r[\Omega][U_k]$ are strictly diagonally dominant M-matrices, and then $D - r\Omega U_k$, $k = 1, 2, \ldots, \alpha$, are strictly (I) block diagonally dominant matrices, for $0 < \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$, and $r \in [0, 1]$.

Let $\bar{L}_k = D^{-1} L_k$ and $\bar{U}_k = D^{-1} U_k$; then $I - r\Omega\bar{L}_k$ and $I - r\Omega\bar{U}_k$ are also strictly (I) block diagonally dominant matrices, for $0 < \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$, $r \in [0, 1]$, $k = 1, 2, \ldots, \alpha$. Thus, by Lemma 3.1, we have

$$[(I - r\Omega\bar{L}_k)^{-1}] \le \big(\mathcal{M}(I - r\Omega\bar{L}_k)\big)^{-1} \le (I - r[\Omega][\bar{L}_k])^{-1}, \qquad [(I - r\Omega\bar{U}_k)^{-1}] \le \big(\mathcal{M}(I - r\Omega\bar{U}_k)\big)^{-1} \le (I - r[\Omega][\bar{U}_k])^{-1}.$$

From (4.1), we have

$$\begin{aligned}
[U^{(k)}(A)] &= \big[(D - r\Omega U_k)^{-1}\{(I - \Omega)D + (1 - r)\Omega U_k + \Omega L_k\}\big] \\
&= \big[(I - r\Omega\bar{U}_k)^{-1}\{I - \Omega + (1 - r)\Omega\bar{U}_k + \Omega\bar{L}_k\}\big] \\
&\le \big[(I - r\Omega\bar{U}_k)^{-1}\big]\big\{[I - \Omega] + (1 - r)[\Omega\bar{U}_k] + [\Omega\bar{L}_k]\big\} \\
&\le (I - r[\Omega][\bar{U}_k])^{-1}\big\{[I - \Omega] + (1 - r)[\Omega][\bar{U}_k] + [\Omega][\bar{L}_k]\big\} \\
&= I + (I - r[\Omega][\bar{U}_k])^{-1}\big\{[I - \Omega] - I + [\Omega]([\bar{U}_k] + [\bar{L}_k])\big\}.
\end{aligned}$$

Since $\bar{L}_k = D^{-1} L_k$ and $\bar{U}_k = D^{-1} U_k$, we have $[\bar{L}_k] \le \mathcal{M}(D)^{-1}[L_k]$ and $[\bar{U}_k] \le \mathcal{M}(D)^{-1}[U_k]$, which follow from Lemmas 3.1 and 3.2, and then

$$[\bar{U}_k] + [\bar{L}_k] \le \mathcal{M}(D)^{-1}\big([U_k] + [L_k]\big) = \mathcal{M}(D)^{-1}[B] = J(\mathcal{M}(A)), \qquad k = 1, 2, \ldots, \alpha.$$

Therefore, we have

$$[U^{(k)}(A)] \le I - (I - r[\Omega][\bar{U}_k])^{-1}\big(I - T([\Omega])\big)$$

where $T([\Omega]) = [I - \Omega] + [\Omega] J(\mathcal{M}(A))$. Note that $(I - r[\Omega][\bar{U}_k])^{-1} \ge I$, $k = 1, 2, \ldots, \alpha$, and then

$$[U^{(k)}(A)] \le I - \big(I - T([\Omega])\big) = T([\Omega]).$$

By a similar argument, we have

$$[L^{(k)}(A)] \le I - \big(I - T([\Omega])\big) = T([\Omega]).$$

Let $\vartheta_1 = \max_i\{\omega_i\}$, $\vartheta_2 = \min_i\{\omega_i\}$, and $f(t) = |1 - t| + t\mu_1$ for $t > 0$. Obviously, $f(t)$ is nonincreasing for $0 < t \le 1$ and nondecreasing for $t \ge 1$. Therefore, we have

$$T([\Omega]) \le (1 - \vartheta_2) I + \vartheta_2 J(\mathcal{M}(A)), \qquad 0 < \omega_i \le 1, \quad i = 1, 2, \ldots, n,$$
$$T([\Omega]) \le (\vartheta_1 - 1) I + \vartheta_1 J(\mathcal{M}(A)), \qquad 1 < \omega_i < \frac{2}{1 + \mu_1(PMQ)}, \quad i = 1, 2, \ldots, n.$$

Let $e$ denote the vector $e = (1, 1, \ldots, 1)^T \in \mathbb{R}^s$ and $J_\varepsilon(\mathcal{M}(A)) = J(\mathcal{M}(A)) + \varepsilon ee^T$. Since $J(\mathcal{M}(A))$ is nonnegative, the matrix $J_\varepsilon(\mathcal{M}(A))$ has only positive entries and is irreducible for any $\varepsilon > 0$. By the Perron–Frobenius theorem [3], for any $\varepsilon > 0$ there exists a vector $x_\varepsilon > 0$ such that

$$J_\varepsilon(\mathcal{M}(A))\, x_\varepsilon = \rho\big(J_\varepsilon(\mathcal{M}(A))\big)\, x_\varepsilon = \rho_\varepsilon x_\varepsilon$$

where $\rho_\varepsilon = \rho(J_\varepsilon(\mathcal{M}(A)))$. Moreover, if $\varepsilon > 0$ is small enough, we have $\rho_\varepsilon < 1$ by continuity of the spectral radius. Thus, we get $\vartheta_1 - 1 + \vartheta_1\rho_\varepsilon < 1$ and $1 - \vartheta_2 + \vartheta_2\rho_\varepsilon < 1$.
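Indeed, both inequalities follow directly from the assumed range of the $\omega_i$:

$$\vartheta_1 < \frac{2}{1 + \mu_1(PMQ)} \iff \vartheta_1\big(1 + \mu_1(PMQ)\big) < 2 \iff \vartheta_1 - 1 + \vartheta_1\mu_1(PMQ) < 1,$$

and since $\rho_\varepsilon \to \mu_1(PMQ)$ as $\varepsilon \to 0$, the strict inequality $\vartheta_1 - 1 + \vartheta_1\rho_\varepsilon < 1$ persists for all sufficiently small $\varepsilon > 0$; likewise, $1 - \vartheta_2 + \vartheta_2\rho_\varepsilon < 1$ is equivalent to $\vartheta_2(\rho_\varepsilon - 1) < 0$, which holds because $\vartheta_2 > 0$ and $\rho_\varepsilon < 1$. It follows that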

$$[H_{\mathrm{LMBGSAOR}}]\, x_\varepsilon \le \sum_{k=1}^{\alpha} [E_k][U^{(k)}(A)][L^{(k)}(A)]\, x_\varepsilon \le \sum_{k=1}^{\alpha} [E_k]\, T([\Omega])^2\, x_\varepsilon.$$

Case 1. For $0 < \omega_i \le 1$, $i = 1, 2, \ldots, n$,

$$[H_{\mathrm{LMBGSAOR}}]\, x_\varepsilon \le (1 - \vartheta_2 + \vartheta_2\rho_\varepsilon)^2 x_\varepsilon < x_\varepsilon;$$

Case 2. For $1 < \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$,

$$[H_{\mathrm{LMBGSAOR}}]\, x_\varepsilon \le (\vartheta_1 - 1 + \vartheta_1\rho_\varepsilon)^2 x_\varepsilon < x_\varepsilon.$$

In both cases, $[H_{\mathrm{LMBGSAOR}}]\, x_\varepsilon < x_\varepsilon$, and therefore $\rho([H_{\mathrm{LMBGSAOR}}]) < 1$. □

Theorem 4.2. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be an $H_B^{(I)}(P,Q)$-matrix, $A \in \Omega_B^I(PMQ)$, and let the collections of triples $(D - L_k, U_k, E_k)$ and $(D - U_k, L_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha.$$

If

$$0 < \omega_i < \frac{2}{1 + \mu_1(PMQ)}, \qquad i = 1, 2, \ldots, n,$$

then the LMBGSAOR method converges for any initial vector $x^0 \in V_n$.

Proof. Since $A \in \Omega_B^I(PMQ)$, we have $\mathcal{M}(A) = \mathcal{M}(PMQ)$. Because $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ is an $H_B^{(I)}(P,Q)$-matrix, $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ is an $H_B^{(I)}(I,I)$-matrix. From Definition 3.4, there exists a block diagonal matrix $X$ such that $AX$ is a strictly (I) block diagonally dominant matrix. Let $H_{\mathrm{LMBGSAOR}}(A)$ and $H_{\mathrm{LMBGSAOR}}(AX)$ denote the iteration matrices of the LMBGSAOR method for the block matrices $A$ and $AX$, respectively. By a simple calculation, $H_{\mathrm{LMBGSAOR}}(A)$ and $H_{\mathrm{LMBGSAOR}}(AX)$ are similar. Since similar matrices have the same eigenvalues, it follows that $\rho(H_{\mathrm{LMBGSAOR}}(A)) = \rho(H_{\mathrm{LMBGSAOR}}(AX)) < 1$. □

Theorem 4.3. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be a strictly (II) block diagonally dominant matrix, $A \in \Omega_B^{II}(PMQ)$, and let the collections of triples $(D - U_k, L_k, E_k)$ and $(D - L_k, U_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha.$$

If

$$0 < \omega_i < \frac{2}{1 + \mu_2(PMQ)}, \qquad i = 1, 2, \ldots, n,$$

then the LMBGSAOR method converges for any initial vector $x^0 \in V_n$.

Proof. The proof goes along the same lines as that of Theorem 4.1, except that the strictly (II) block diagonally dominant matrix and $\Omega_B^{II}(PMQ)$ play the roles of the strictly (I) block diagonally dominant matrix and $\Omega_B^{I}(PMQ)$, respectively, which completes the proof. □

Theorem 4.4. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be an $H_B^{(II)}(P,Q)$-matrix, $A \in \Omega_B^{II}(PMQ)$, and let the collections of triples $(D - L_k, U_k, E_k)$ and $(D - U_k, L_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha.$$

If

$$0 < \omega_i < \frac{2}{1 + \mu_2(PMQ)}, \qquad i = 1, 2, \ldots, n,$$

then the LMBGSAOR method converges for any initial vector $x^0 \in V_n$.

Proof. The proof is similar to that of Theorem 4.2 and is omitted. □

Using the GMBGSAOR method, we can also obtain the following convergence results.

Theorem 4.5. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be a strictly (I) block diagonally dominant matrix, $A \in \Omega_B^I(PMQ)$, and let the collections of triples $(D - U_k, L_k, E_k)$ and $(D - L_k, U_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha.$$

(a) if $0 < \omega_i \le 1$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_2^2)$, $h_2 = 1 - \vartheta_2 + \vartheta_2\mu_1(PMQ)$;

(b) if $1 \le \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_1^2)$, $h_1 = \vartheta_1 - 1 + \vartheta_1\mu_1(PMQ)$.

Then the GMBGSAOR method converges for any initial vector $x^0 \in V_n$, where $\vartheta_1 = \max_i\{\omega_i\}$ and $\vartheta_2 = \min_i\{\omega_i\}$.

Proof. Since $\rho(H_{\mathrm{GMBGSAOR}}) \le \rho(|H_{\mathrm{GMBGSAOR}}|) \le \rho([H_{\mathrm{GMBGSAOR}}])$, the iteration (2.3) converges for any initial vector $x^0 \in V_n$ if and only if

$$\rho([H_{\mathrm{GMBGSAOR}}]) < 1.$$

Similar to the proof of Theorem 4.1, we have

$$\mu_1 = \rho(J(\mathcal{M}(A))) = \mu_1(PMQ) < 1$$

and $J_\varepsilon(\mathcal{M}(A)) = J(\mathcal{M}(A)) + \varepsilon ee^T$ has only positive entries and is irreducible for any $\varepsilon > 0$. By the Perron–Frobenius theorem, for any $\varepsilon > 0$ there exists a vector $x_\varepsilon > 0$ such that

$$J_\varepsilon(\mathcal{M}(A))\, x_\varepsilon = \rho\big(J_\varepsilon(\mathcal{M}(A))\big)\, x_\varepsilon = \rho_\varepsilon x_\varepsilon$$

where $\rho_\varepsilon = \rho(J_\varepsilon(\mathcal{M}(A)))$. Moreover, if $\varepsilon > 0$ is small enough, we have $\rho_\varepsilon < 1$ by continuity of the spectral radius. Under the conditions of Theorem 4.5, we not only get $h_1 = \vartheta_1 - 1 + \vartheta_1\rho_\varepsilon < 1$ and $h_2 = 1 - \vartheta_2 + \vartheta_2\rho_\varepsilon < 1$, but also $\beta h_1^2 + |1 - \beta| < 1$ and $\beta h_2^2 + |1 - \beta| < 1$. Then

$$[H_{\mathrm{GMBGSAOR}}]\, x_\varepsilon \le \beta \sum_{k=1}^{\alpha} [E_k][U^{(k)}(A)][L^{(k)}(A)]\, x_\varepsilon + |1 - \beta|\, x_\varepsilon \le \beta \sum_{k=1}^{\alpha} [E_k]\, T([\Omega])^2\, x_\varepsilon + |1 - \beta|\, x_\varepsilon.$$

Case (a). For $0 < \omega_i \le 1$, $i = 1, 2, \ldots, n$,

$$[H_{\mathrm{GMBGSAOR}}]\, x_\varepsilon \le \beta(1 - \vartheta_2 + \vartheta_2\rho_\varepsilon)^2 x_\varepsilon + |1 - \beta|\, x_\varepsilon = (\beta h_2^2 + |1 - \beta|)\, x_\varepsilon < x_\varepsilon.$$

Case (b). For $1 < \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$,

$$[H_{\mathrm{GMBGSAOR}}]\, x_\varepsilon \le \beta(\vartheta_1 - 1 + \vartheta_1\rho_\varepsilon)^2 x_\varepsilon + |1 - \beta|\, x_\varepsilon = (\beta h_1^2 + |1 - \beta|)\, x_\varepsilon < x_\varepsilon.$$

Then $[H_{\mathrm{GMBGSAOR}}]\, x_\varepsilon < x_\varepsilon$ and $\rho([H_{\mathrm{GMBGSAOR}}]) < 1$. □

Theorem 4.6. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be an $H_B^{(I)}(P,Q)$-matrix, $A \in \Omega_B^I(PMQ)$, and let the collections of triples $(D - L_k, U_k, E_k)$ and $(D - U_k, L_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha.$$

(a) if $0 < \omega_i \le 1$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_2^2)$, $h_2 = 1 - \vartheta_2 + \vartheta_2\mu_1(PMQ)$;

(b) if $1 \le \omega_i < 2/(1 + \mu_1(PMQ))$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_1^2)$, $h_1 = \vartheta_1 - 1 + \vartheta_1\mu_1(PMQ)$.

Then the GMBGSAOR method converges for any initial vector $x^0 \in V_n$, where $\vartheta_1 = \max_i\{\omega_i\}$ and $\vartheta_2 = \min_i\{\omega_i\}$.

Theorem 4.7. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be a strictly (II) block diagonally dominant matrix, $A \in \Omega_B^{II}(PMQ)$, and let the collections of triples $(D - U_k, L_k, E_k)$ and $(D - L_k, U_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha.$$

(a) if $0 < \omega_i \le 1$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_2^2)$, $h_2 = 1 - \vartheta_2 + \vartheta_2\mu_2(PMQ)$;

(b) if $1 \le \omega_i < 2/(1 + \mu_2(PMQ))$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_1^2)$, $h_1 = \vartheta_1 - 1 + \vartheta_1\mu_2(PMQ)$.

Then the GMBGSAOR method converges for any initial vector $x^0 \in V_n$, where $\vartheta_1 = \max_i\{\omega_i\}$ and $\vartheta_2 = \min_i\{\omega_i\}$.

Theorem 4.8. Let $M \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$ be an $H_B^{(II)}(P,Q)$-matrix, $A \in \Omega_B^{II}(PMQ)$, and let the collections of triples $(D - L_k, U_k, E_k)$ and $(D - U_k, L_k, E_k)$, $k = 1, 2, \ldots, \alpha$, be BMMs of the block matrix $A \in \mathbb{L}_{n,I}(n_1, n_2, \ldots, n_s)$. Assume that

$$\mathcal{M}(A) = \mathcal{M}(D) - [L_k] - [U_k] = \mathcal{M}(D) - [B], \qquad k = 1, 2, \ldots, \alpha.$$

(a) if $0 < \omega_i \le 1$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_2^2)$, $h_2 = 1 - \vartheta_2 + \vartheta_2\mu_2(PMQ)$;

(b) if $1 \le \omega_i < 2/(1 + \mu_2(PMQ))$, $i = 1, 2, \ldots, n$, and $0 < \beta < 2/(1 + h_1^2)$, $h_1 = \vartheta_1 - 1 + \vartheta_1\mu_2(PMQ)$.

Then the GMBGSAOR method converges for any initial vector $x^0 \in V_n$, where $\vartheta_1 = \max_i\{\omega_i\}$ and $\vartheta_2 = \min_i\{\omega_i\}$.

Proof. Theorems 4.6, 4.7, and 4.8 can be proved in the same way as Theorems 4.2, 4.3, and 4.4, respectively, so the proofs are omitted. □

5 Conclusion

Liu et al. [1] considered the convergence of block parallel multisplitting GSAOR iterative methods for $1 \le \omega_i < 2/(1 + \mu_1(PMQ))$ or $1 \le \omega_i < 2/(1 + \mu_2(PMQ))$, $i = 1, 2, \ldots, n$. In this paper, we extended the interval for $\omega_i$, $i = 1, 2, \ldots, n$, to $(0, 2/(1 + \mu_1(PMQ)))$ or $(0, 2/(1 + \mu_2(PMQ)))$ for the block parallel multisplitting GSAOR iterative methods.

Acknowledgment

We express our thanks to the anonymous referees, who made many useful and detailed suggestions that helped us to correct some minor errors and improve the quality of the paper.

References

[1] Q. B. Liu, G. L. Chen, and J. Huang, On the parallel GSAOR method for block diagonally dominant matrices, Appl. Math. Comput., 215 (2009), 707–715. doi: 10.1016/j.amc.2009.05.055

[2] K. R. James, Convergence of matrix iterations subject to diagonal dominance, SIAM J. Numer. Anal., 10 (1973), 117–132. doi: 10.1137/0710042

[3] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, PA, 1994. doi: 10.1137/1.9781611971262

[4] D. P. O'Leary and R. E. White, Multi-splittings of matrices and parallel solution of linear systems, SIAM J. Alg. Disc. Math., 6 (1985), 630–640. doi: 10.1137/0606062

[5] Z. Z. Bai, Parallel matrix multisplitting block relaxation iteration methods, Math. Numer. Sinica, 17 (1995), 238–252.

[6] Z. Z. Bai, A class of asynchronous parallel multisplitting blockwise relaxation methods, Parallel Computing, 25 (1999), 681–701. doi: 10.1016/S0167-8191(99)00015-0

[7] M. Neumann and R. J. Plemmons, Convergence of parallel multisplitting iterative methods for M-matrices, Lin. Alg. Appl., 88–89 (1987), 559–573. doi: 10.1016/0024-3795(87)90125-X

[8] L. T. Zhang, T. Z. Huang, T. X. Gu, and X. L. Guo, Convergence of relaxed multisplitting USAOR methods for H-matrices linear systems, Appl. Math. Comput., 202 (2008), 121–132. doi: 10.1016/j.amc.2008.01.034

[9] L. Elsner, Comparisons of weak regular splittings and multisplitting methods, Numer. Math., 56 (1989), 283–289. doi: 10.1007/BF01409790

[10] A. Frommer and G. Mayer, Convergence of relaxed parallel multisplitting methods, Lin. Alg. Appl., 119 (1989), 141–152. doi: 10.1016/0024-3795(89)90074-8

[11] R. E. White, Multisplitting of a symmetric positive definite matrix, SIAM J. Matrix Anal. Appl., 11 (1990), 69–82. doi: 10.1137/0611004

[12] R. E. White, Multisplitting with different weighting schemes, SIAM J. Matrix Anal. Appl., 10 (1989), 481–493. doi: 10.1137/0610034

[13] W. Li, Comparison results for parallel multisplitting methods with applications to AOR methods, Lin. Alg. Appl., 206 (2008), 738–747. doi: 10.1016/S0024-3795(01)00276-2

[14] D. R. Wang, On the convergence of the parallel multisplitting AOR algorithm, Lin. Alg. Appl., 154–156 (1991), 473–486. doi: 10.1016/0024-3795(91)90390-I

[15] D. G. Feingold and R. S. Varga, Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem, Pacific J. Math., 12 (1962), 1241–1249. doi: 10.2140/pjm.1962.12.1241

[16] B. Polman, Incomplete blockwise factorizations of (block) H-matrices, Lin. Alg. Appl., 90 (1987), 119–132. doi: 10.1016/0024-3795(87)90310-7

[17] R. Bru, V. Migallón, J. Penadés, and D. B. Szyld, Parallel synchronous and asynchronous two-stage multisplitting methods, Elec. Tran. Num. Anal., 3 (1995), 24–38.

[18] D. B. Szyld, Synchronous and asynchronous two-stage multisplitting methods, in: Proc. 5th SIAM Conference on Applied Linear Algebra (Ed. J. G. Lewis), SIAM, Philadelphia, 1994.

[19] J. Mos and D. B. Szyld, Nonstationary parallel relaxed multisplitting methods, Lin. Alg. Appl., 241–243 (1996), 733–747. doi: 10.1016/0024-3795(95)00583-8

[20] Y. Z. Song, On the convergence of GAOR methods, Math. Numer. Sinica, 4 (1989), 405–412 (in Chinese).

Received: 2014-1-15
Accepted: 2014-5-28
Published Online: 2016-3-18
Published in Print: 2016-3-1

© 2016 by Walter de Gruyter Berlin/Boston
