Article, Open Access

A relaxed block splitting preconditioner for complex symmetric indefinite linear systems

  • Yunying Huang and Guoliang Chen
Published/Copyright: June 7, 2018

Abstract

In this paper, we propose a relaxed block splitting preconditioner for a class of complex symmetric indefinite linear systems to accelerate the convergence rate of Krylov subspace iteration methods; the relaxed preconditioner is much closer to the original block two-by-two coefficient matrix. We study the spectral properties and the eigenvector distribution of the corresponding preconditioned matrix. In addition, the degree of the minimal polynomial of the preconditioned matrix is derived. Finally, some numerical experiments are presented to illustrate the effectiveness of the relaxed splitting preconditioner.

MSC 2010: 65F10; 65F50

1 Introduction

Consider the following large sparse nonsingular complex symmetric linear system

Au = b,  A ∈ ℂn×n and u, b ∈ ℂn,  (1)

where A = W + iT, W, T ∈ ℝn×n are symmetric matrices and i = √−1 denotes the imaginary unit. Complex linear systems of the form (1) arise in many areas of scientific computing and engineering applications, such as FFT-based solution of certain time-dependent PDEs [1], distributed control problems [2], diffuse optical tomography [3], PDE-constrained optimization problems [4], quantum mechanics [5] and so on; see [6,7,8] and the references therein for other applications.

Let u = x + iy and b = f + ig with x, y, f, g ∈ ℝn; then the complex linear system (1) can be equivalently rewritten as the following real block two-by-two linear system

𝓐̃u = [W, −T; T, W][x; y] = [f; g] = b,  (2)

or the following two-by-two block real equivalent formulation

𝓐u = [T, W; W, −T][x; y] = [g; f] = b.  (3)

It can be seen that the linear systems (2) and (3) both avoid complex arithmetic, but the coefficient matrices in (2) and (3) are twice the size of that in (1). Because it is more attractive to use iterative methods rather than direct methods for solving (1) or (2), many efficient iterative methods and their numerical properties have been studied in the literature; see [9,10]. When both W and T are symmetric positive semi-definite with at least one of them positive definite, several efficient iterative methods have been proposed. Recently, based on the Hermitian and skew-Hermitian splitting (HSS) method [11], Bai et al. [12] developed the modified HSS (MHSS) iteration method to solve the complex linear system (1), which is convergent for any positive constant α. To accelerate the convergence rate of the MHSS iteration method, Bai et al. [13] introduced the preconditioned MHSS (PMHSS) method, which is unconditionally convergent and shows h-independent convergence behavior. In [14], Bai further analyzed algebraic and convergence properties of the PMHSS iteration method for solving complex linear systems. Moreover, there are also some other effective iterative methods, such as the lopsided PMHSS (LPMHSS) iteration method, the accelerated PMHSS (APMHSS) iteration method, the preconditioned generalized SOR (PGSOR) iterative method, the skew-normal splitting (SNS) method, the double-step scale splitting (DSS) iteration method [15,16,17,18,19], and so on.
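The equivalence between (1) and the real block form (3) is easy to verify directly; the following is a minimal numerical sketch, with a random symmetric W and a random SPD T as purely illustrative data:

```python
import numpy as np

# Check that the real block form (3) reproduces the complex system (1):
# (W + iT)(x + iy) = f + ig  gives  f = Wx - Ty  and  g = Tx + Wy.
rng = np.random.default_rng(0)
n = 6
W = rng.standard_normal((n, n)); W = (W + W.T) / 2            # symmetric
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)  # SPD

x, y = rng.standard_normal(n), rng.standard_normal(n)
b = (W + 1j * T) @ (x + 1j * y)
f, g = b.real, b.imag

# Real equivalent form (3): [T, W; W, -T][x; y] = [g; f].
A3 = np.block([[T, W], [W, -T]])
err = np.linalg.norm(A3 @ np.concatenate([x, y]) - np.concatenate([g, f]))
print(err)
```

The residual err is at the level of rounding error, confirming the block identification.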

If the matrices W and T are symmetric positive semi-definite, some efficient preconditioners have been presented. Bai [20] proposed the rotated block triangular (RBT) preconditioners, which are based on the PMHSS preconditioning matrix [21], for the linear system (2). Lang and Ren [22] established the inexact RBT (IRBT) preconditioners. For solving the linear system (2), Liang and Zhang [23] developed the symmetric SOR (SSOR) method.

However, when W ∈ ℝn×n is symmetric indefinite and T ∈ ℝn×n is symmetric positive definite, the matrices αI + W, αV + W and αW + T² are indefinite or singular, which may lead to slow convergence of the MHSS method, the PMHSS method and the SNS method. To overcome these problems, the Hermitian normal splitting (HNS) method and its variant, the simplified HNS (SHNS) method, were introduced by Wu in [24] for solving the complex symmetric linear system (1). Zhang and Dai [25] presented a preconditioned SHNS (PSHNS) iteration method and constructed a new preconditioner. In this paper, we solve the linear system (3) with W symmetric indefinite and T symmetric positive definite. For the real formulation (3), Zhang and Dai [26] established a new block preconditioner and analyzed the spectral properties of the corresponding preconditioned matrix. An improved block (IB) splitting preconditioner was proposed by Zhang et al. in [27], who gave the convergence property of the corresponding iteration method under suitable conditions.

In this paper, by using the relaxation technique, we construct a relaxed block splitting preconditioner for the real block two-by-two linear system (3). The relaxed splitting preconditioner is much closer to the original coefficient matrix than the block preconditioner in [26]. We analyze the spectral properties of the corresponding preconditioned matrix and show that it has the eigenvalue 1 with algebraic multiplicity at least n. The structure of the eigenvectors and the dimension of the Krylov subspace for the preconditioned matrix are also derived.

The organization of the paper is as follows. In Section 2, a relaxed block splitting preconditioner is presented. In Section 3, we study the distribution of eigenvalues, the form of the eigenvectors and the dimension of the Krylov subspace of the corresponding preconditioned matrix. In Section 4, some numerical examples are presented to compare the new preconditioner with the PSHNS preconditioner and the HSS preconditioner. Finally, we give some brief concluding remarks in Section 5.

2 A new relaxed splitting preconditioner

In this section, by employing the relaxation techniques, we establish a relaxed block splitting preconditioner for solving the linear system (3).

Pan et al. [28] employed the positive-definite and skew-Hermitian splitting (PSS) iteration method in [29] to develop a PSS preconditioner for saddle point problems. For the linear system (3), the coefficient matrix 𝓐 can be split as follows

𝓐 = J + K,  (4)

where J = [T, 0; 0, 0] and K = [0, W; W, −T]. Recently, based on the PSS preconditioner, Zhang and Dai presented a preconditioner in [26] as follows

𝓟1 = (1/α)(αI + K)(αI + J) = (1/α)[αI, W; W, αI − T][αI + T, 0; 0, αI] = [αI + T, W; W(I + (1/α)T), αI − T].  (5)

Then the difference between the preconditioner 𝓟1 and the coefficient matrix 𝓐 is given as follows

𝓡1 = 𝓟1 − 𝓐 = [αI, 0; (1/α)WT, αI].  (6)

A general criterion for an efficient preconditioner is that it should be as close as possible to the coefficient matrix 𝓐, so that the preconditioned matrix has a clustered spectrum, while remaining easy to implement. Inspired by the idea of the relaxation technique in [30,31,32,33], we give a relaxed splitting preconditioner, which is defined as follows

𝓟2 = (1/α)[αI, W; W, −T][T, 0; 0, αI] = [T, W; (1/α)WT, −T].  (7)

From (3) and (7), we get the difference between the preconditioner 𝓟2 and the coefficient matrix 𝓐

𝓡2 = 𝓟2 − 𝓐 = [0, 0; (1/α)WT − W, 0].  (8)

It is easy to see that the (2,1)-block in (8) differs from that in (6), while the (1,1)- and (2,2)-blocks in (8) now vanish, which indicates that the preconditioner 𝓟2 is much closer to the coefficient matrix 𝓐 than the preconditioner 𝓟1.

From (8), we know that the preconditioner 𝓟2 can be established by the following splitting of the coefficient matrix 𝓐

𝓐 = 𝓟2 − 𝓡2 = [T, W; (1/α)WT, −T] − [0, 0; (1/α)WT − W, 0].  (9)
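The residual matrices (6) and (8) are easy to confirm numerically, block by block; a minimal sketch with a random symmetric W and random SPD T (both illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 6, 0.7
W = rng.standard_normal((n, n)); W = (W + W.T) / 2            # symmetric
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)  # SPD
I, Z = np.eye(n), np.zeros((n, n))

A  = np.block([[T, W], [W, -T]])                                           # (3)
P1 = np.block([[alpha * I + T, W], [W @ (I + T / alpha), alpha * I - T]])  # (5)
P2 = np.block([[T, W], [W @ T / alpha, -T]])                               # (7)

R1, R2 = P1 - A, P2 - A
ok_R1 = np.allclose(R1, np.block([[alpha * I, Z], [W @ T / alpha, alpha * I]]))  # (6)
ok_R2 = np.allclose(R2, np.block([[Z, Z], [W @ T / alpha - W, Z]]))              # (8)
print(ok_R1, ok_R2)
```

In particular, R2 vanishes outside its (2,1)-block, which is the sense in which 𝓟2 is closer to 𝓐 than 𝓟1.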

In the following section, we will analyze the spectral properties of the preconditioned matrix 𝓟2⁻¹𝓐.

3 Spectral analysis of the preconditioned matrix

In this section, we mainly study the spectral properties of the preconditioned matrix 𝓟2⁻¹𝓐 and give an upper bound on the dimension of the Krylov subspace for this preconditioned matrix. We first need the inverse of the matrix 𝓟2; the explicit form of 𝓟2⁻¹ is given in the following lemma, whose proof consists of straightforward calculations and is omitted here.

Lemma 3.1

Let

P̂ = [αI, W; W, −T]  and  P̃ = [T, 0; 0, αI],

so that 𝓟2 = (1/α)P̂P̃. The matrix P̂ has the following block-triangular factorization

P̂ = [I, 0; (1/α)W, I] [αI, 0; 0, −(T + (1/α)W²)] [I, (1/α)W; 0, I].

Then we obtain

𝓟2⁻¹ = αP̃⁻¹P̂⁻¹ = [T⁻¹ − (1/α)T⁻¹W(T + (1/α)W²)⁻¹W, T⁻¹W(T + (1/α)W²)⁻¹; (1/α)(T + (1/α)W²)⁻¹W, −(T + (1/α)W²)⁻¹].  (10)
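The explicit inverse (10) can be checked numerically; a small sketch (random symmetric W and SPD T are illustrative data) verifying that the four blocks indeed form 𝓟2⁻¹:

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 6, 0.7
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)
Ti = np.linalg.inv(T)
S  = T + W @ W / alpha           # S = T + (1/alpha) W^2
Si = np.linalg.inv(S)

# Explicit inverse (10), assembled block by block.
P2inv = np.block([[Ti - Ti @ W @ Si @ W / alpha, Ti @ W @ Si],
                  [Si @ W / alpha,               -Si]])
P2 = np.block([[T, W], [W @ T / alpha, -T]])
err = np.linalg.norm(P2inv @ P2 - np.eye(2 * n))
print(err)
```

The product 𝓟2⁻¹𝓟2 agrees with the identity up to rounding error.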

In the following, we will analyze the eigenvalue distribution of the preconditioned matrix 𝓟2⁻¹𝓐.

Theorem 3.2

Assume that the coefficient matrix 𝓐 is nonsingular, W ∈ ℝn×n is symmetric indefinite and T ∈ ℝn×n is symmetric positive definite. Let α be a positive real constant and U = WT⁻¹W. Let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix 𝓟2⁻¹𝓐 has the eigenvalue 1 with multiplicity at least n, and the remaining eigenvalues are λi, i = 1, 2, ⋯, n, where λi (i = 1, 2, ⋯, n) are the eigenvalues of the matrix αT⁻¹(αI + U)⁻¹(T + U).

Proof

From (9) and Lemma 3.1, we have

𝓟2⁻¹𝓐 = I − 𝓟2⁻¹𝓡2 = I − [T⁻¹W(T + (1/α)W²)⁻¹W((1/α)T − I), 0; −(T + (1/α)W²)⁻¹W((1/α)T − I), 0] = [I − T⁻¹W(T + (1/α)W²)⁻¹W((1/α)T − I), 0; (T + (1/α)W²)⁻¹W((1/α)T − I), I].

It follows from U = WT−1W that

I − T⁻¹W(T + (1/α)W²)⁻¹W((1/α)T − I)
= T⁻¹(T − W(T + (1/α)W²)⁻¹W((1/α)T − I))
= T⁻¹(T − W(W(W⁻¹TW⁻¹ + (1/α)I)W)⁻¹W((1/α)T − I))
= T⁻¹(T − ((1/α)I + W⁻¹TW⁻¹)⁻¹((1/α)T − I))
= T⁻¹(T − ((1/α)U + I)⁻¹U((1/α)T − I))
= αT⁻¹(αI + U)⁻¹(T + U).

Hence, we get

𝓟2⁻¹𝓐 = [αT⁻¹(αI + U)⁻¹(T + U), 0; (T + (1/α)W²)⁻¹W((1/α)T − I), I].  (11)

Then, the results follow directly from (11). □
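Both the block form (11) and the eigenvalue statement of Theorem 3.2 can be confirmed numerically; a sketch with random illustrative data (sizes and matrices are assumptions, with W generically nonsingular):

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 6, 0.7
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)
I, Z = np.eye(n), np.zeros((n, n))

A  = np.block([[T, W], [W, -T]])
P2 = np.block([[T, W], [W @ T / alpha, -T]])
M  = np.linalg.solve(P2, A)                    # preconditioned matrix P2^{-1} A

U   = W @ np.linalg.solve(T, W)                # U = W T^{-1} W
th1 = alpha * np.linalg.solve(T, np.linalg.solve(alpha * I + U, T + U))
th2 = np.linalg.solve(T + W @ W / alpha, W @ (T / alpha - I))
ok_11 = np.allclose(M, np.block([[th1, Z], [th2, I]]))   # block form (11)

eig = np.linalg.eigvals(M)
num_ones = int(np.sum(np.abs(eig - 1) < 1e-6))           # multiplicity of 1
print(ok_11, num_ones)
```

As predicted, at least n of the eigenvalues of 𝓟2⁻¹𝓐 equal 1, and the rest are the eigenvalues of αT⁻¹(αI + U)⁻¹(T + U).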

The detailed implementation of the preconditioning process is described as follows. When the preconditioner 𝓟2 is applied within a Krylov subspace method, the following system has to be solved at each step

𝓟2z = (1/α)[αI, W; W, −T][T, 0; 0, αI]z = r,  (12)

where r = [r1ᵀ, r2ᵀ]ᵀ and z = [z1ᵀ, z2ᵀ]ᵀ are the current and generalized residual vectors, respectively, with z1, z2, r1, r2 ∈ ℝn. Based on Lemma 3.1 and (12), we have

[z1; z2] = α[T⁻¹, 0; 0, (1/α)I] [I, −(1/α)W; 0, I] [(1/α)I, 0; 0, −(T + (1/α)W²)⁻¹] [I, 0; −(1/α)W, I] [r1; r2].  (13)

Then, the implementation of the preconditioner 𝓟2 can be organized as follows.

Algorithm 3.3

(The preconditioner 𝓟2). For a given vector r = [r1ᵀ, r2ᵀ]ᵀ, the vector z = [z1ᵀ, z2ᵀ]ᵀ can be computed by (13) from the following steps:

  1. Compute t1 = (1/α)Wr1 − r2;

  2. Solve (T + (1/α)W²)z2 = t1;

  3. Compute t2 = r1 − Wz2;

  4. Solve Tz1 = t2;

  5. Set the generalized residual vector z = [z1ᵀ, z2ᵀ]ᵀ.
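The five steps amount to a block forward/back substitution through the factorization of 𝓟2; a minimal sketch (random symmetric W and SPD T are illustrative data) checking that the computed z indeed satisfies 𝓟2z = r:

```python
import numpy as np

def apply_P2(r1, r2, W, T, alpha):
    """Apply z = P2^{-1} r via the two SPD solves of Algorithm 3.3."""
    t1 = W @ r1 / alpha - r2
    z2 = np.linalg.solve(T + W @ W / alpha, t1)   # solve (T + (1/a)W^2) z2 = t1
    t2 = r1 - W @ z2
    z1 = np.linalg.solve(T, t2)                   # solve T z1 = t2
    return z1, z2

rng = np.random.default_rng(4)
n, alpha = 6, 0.7
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)
r1, r2 = rng.standard_normal(n), rng.standard_normal(n)

z1, z2 = apply_P2(r1, r2, W, T, alpha)
P2 = np.block([[T, W], [W @ T / alpha, -T]])
err = np.linalg.norm(P2 @ np.concatenate([z1, z2]) - np.concatenate([r1, r2]))
print(err)
```

In practice the two dense solves would be replaced by the sparse Cholesky or (preconditioned) CG solvers discussed below.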

It can be seen from Algorithm 3.3 that we need to solve two sub-linear systems with coefficient matrices T + (1/α)W² and T at each iteration. Under the assumptions on the matrices W and T in Section 1, both T + (1/α)W² and T are symmetric positive definite. Hence, the two sub-systems (T + (1/α)W²)z2 = t1 and Tz1 = t2 can be solved by the sparse Cholesky factorization, the conjugate gradient (CG) method or the preconditioned CG method. In order to demonstrate the effectiveness of the preconditioner 𝓟2, we compare it with the PSHNS preconditioner and the HSS preconditioner, which are given as follows:

𝓟PSHNS = (1/(2α))(αV + iW)W⁻¹(αT + WV⁻¹W)  and  𝓟HSS = (1/α)[αI + T, 0; 0, αI + T][αI, W; −W, αI].

The implementations of the PSHNS preconditioner and the HSS preconditioner within GMRES are then described as follows.

Algorithm 3.4

(The PSHNS Preconditioner). For a given vector r, the vector z can be computed from the following steps:

  1. Solve (αV + iW)d = 2αr;

  2. Solve (αT + WV⁻¹W)z = Wd.

Algorithm 3.5

(The HSS Preconditioner). For a given vector r = [r1ᵀ, r2ᵀ]ᵀ, the vector z = [z1ᵀ, z2ᵀ]ᵀ can be computed from the following steps:

  1. Solve (αI + T)ν1 = r1;

  2. Solve (αI + T)ν2 = r2;

  3. Compute μ1 = Wν1 + αν2;

  4. Solve (αI + (1/α)W²)z2 = μ1;

  5. Compute z1 = ν1 − (1/α)Wz2;

  6. Set the generalized residual vector z = [z1ᵀ, z2ᵀ]ᵀ.

From Algorithm 3.4, we can see that the PSHNS preconditioner involves the complex coefficient matrix αV + iW, whose corresponding linear subsystem may be hard to solve. Meanwhile, it can be seen from Algorithm 3.5 that the implementation of the HSS preconditioner requires solving three symmetric positive definite linear subsystems.
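With H = blkdiag(T, T) and the skew-symmetric block S = [0, W; −W, 0], Algorithm 3.5 applies 𝓟HSS = (1/α)(αI + H)(αI + S) through three SPD solves; a sketch (random W and T are illustrative assumptions) checking that the six steps indeed solve 𝓟HSS z = r:

```python
import numpy as np

def apply_PHSS(r1, r2, W, T, alpha):
    """Apply z = PHSS^{-1} r via the three SPD solves of Algorithm 3.5."""
    n = len(r1); I = np.eye(n)
    v1 = np.linalg.solve(alpha * I + T, r1)
    v2 = np.linalg.solve(alpha * I + T, r2)
    mu1 = W @ v1 + alpha * v2
    z2 = np.linalg.solve(alpha * I + W @ W / alpha, mu1)
    z1 = v1 - W @ z2 / alpha
    return z1, z2

rng = np.random.default_rng(5)
n, alpha = 6, 0.7
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)
I, Z = np.eye(n), np.zeros((n, n))

# PHSS = (1/alpha) (alpha*I + H)(alpha*I + S).
PHSS = np.block([[alpha * I + T, Z], [Z, alpha * I + T]]) \
       @ np.block([[alpha * I, W], [-W, alpha * I]]) / alpha

r1, r2 = rng.standard_normal(n), rng.standard_normal(n)
z1, z2 = apply_PHSS(r1, r2, W, T, alpha)
err = np.linalg.norm(PHSS @ np.concatenate([z1, z2]) - np.concatenate([r1, r2]))
print(err)
```

This makes explicit that one HSS application costs two solves with αI + T plus one with αI + (1/α)W², versus the two solves of Algorithm 3.3.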

In the following, we discuss the eigenvector distribution of the preconditioned matrix 𝓟2⁻¹𝓐 in detail.

Theorem 3.6

Let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix 𝓟2⁻¹𝓐 has n + i + j (0 ≤ i + j ≤ n) linearly independent eigenvectors, which are described as follows:

  1. n eigenvectors [0ᵀ, vkᵀ]ᵀ (k = 1, 2, ⋯, n) that correspond to the eigenvalue 1, where vk (k = 1, 2, ⋯, n) are arbitrary linearly independent vectors.

  2. i (0 ≤ i ≤ n) eigenvectors [uk¹ᵀ, vk¹ᵀ]ᵀ (1 ≤ k ≤ i) that correspond to the eigenvalue 1, where uk¹ ≠ 0, Tuk¹ = αuk¹ and vk¹ are arbitrary vectors.

  3. j (0 ≤ j ≤ n) eigenvectors [uk²ᵀ, vk²ᵀ]ᵀ (1 ≤ k ≤ j) that correspond to the eigenvalues λk ≠ 1, where α(T + U)uk² = λk(αI + U)Tuk², uk² ≠ 0 and vk² = (1/(λk − 1))(T + (1/α)W²)⁻¹((1/α)WT − W)uk².

Proof

Let λ be an eigenvalue of the preconditioned matrix 𝓟2⁻¹𝓐 and [uᵀ, vᵀ]ᵀ be the corresponding eigenvector. Then

[αT⁻¹(αI + U)⁻¹(T + U), 0; (T + (1/α)W²)⁻¹W((1/α)T − I), I][u; v] = λ[u; v].

By simple calculation, we have

α(T + U)u = λ(αI + U)Tu,  (T + (1/α)W²)⁻¹W((1/α)T − I)u = (λ − 1)v.  (14)

If λ = 1, the equations in (14) become

α(T + U)u = (αI + U)Tu,  (T + (1/α)W²)⁻¹W((1/α)T − I)u = 0.  (15)

Then we have the following two situations.

  1. When u = 0, the equations in (15) always hold. Hence, there are n linearly independent eigenvectors [0ᵀ, vkᵀ]ᵀ (k = 1, 2, ⋯, n) corresponding to the eigenvalue 1, where vk (k = 1, 2, ⋯, n) are arbitrary linearly independent vectors.

  2. When u ≠ 0, a simple calculation on (15) yields Tu = αu. Then there are i (0 ≤ i ≤ n) linearly independent eigenvectors [uk¹ᵀ, vk¹ᵀ]ᵀ (1 ≤ k ≤ i) corresponding to the eigenvalue 1, where uk¹ ≠ 0, Tuk¹ = αuk¹ and vk¹ are arbitrary vectors.

Next, we consider the case λ ≠ 1. It can be seen from (14) that the results in case 3 hold.

Finally, we show that the n + i + j eigenvectors are linearly independent. Let c = [c1, c2, ⋯, cn]ᵀ, c¹ = [c1¹, c2¹, ⋯, ci¹]ᵀ and c² = [c1², c2², ⋯, cj²]ᵀ be three vectors with 0 ≤ i, j ≤ n. Then, we have to show that

[0, ⋯, 0; v1, ⋯, vn][c1; ⋮; cn] + [u1¹, ⋯, ui¹; v1¹, ⋯, vi¹][c1¹; ⋮; ci¹] + [u1², ⋯, uj²; v1², ⋯, vj²][c1²; ⋮; cj²] = [0; 0]  (16)

holds if and only if the vectors c, c¹ and c² are all zero vectors, where the first matrix consists of the eigenvectors corresponding to the eigenvalue 1 in case 1, the second matrix consists of those in case 2, and the third matrix consists of the eigenvectors corresponding to λ ≠ 1 in case 3. Multiplying both sides of Eq. (16) on the left by the matrix 𝓟2⁻¹𝓐, we get

[0, ⋯, 0; v1, ⋯, vn][c1; ⋮; cn] + [u1¹, ⋯, ui¹; v1¹, ⋯, vi¹][c1¹; ⋮; ci¹] + [u1², ⋯, uj²; v1², ⋯, vj²][λ1c1²; ⋮; λjcj²] = [0; 0].  (17)

Subtracting (16) from (17), we obtain

[u1², ⋯, uj²; v1², ⋯, vj²][(λ1 − 1)c1²; ⋮; (λj − 1)cj²] = [0; 0].

Because the eigenvalues λk ≠ 1 and the vectors [uk²ᵀ, vk²ᵀ]ᵀ (k = 1, ⋯, j) are linearly independent, we know that ck² = 0 (k = 1, ⋯, j). Thus, Eq. (16) reduces to

[0, ⋯, 0; v1, ⋯, vn][c1; ⋮; cn] + [u1¹, ⋯, ui¹; v1¹, ⋯, vi¹][c1¹; ⋮; ci¹] = [0; 0].

As the vectors uk¹ (k = 1, ⋯, i) are also linearly independent, we have ck¹ = 0 (k = 1, ⋯, i). Hence, the above equation becomes

[0, ⋯, 0; v1, ⋯, vn][c1; ⋮; cn] = [0; 0].

Since vk (k = 1, ⋯, n) are linearly independent, we have ck = 0 (k = 1, ⋯, n). Therefore, the n + i + j eigenvectors are linearly independent. The proof of this theorem is completed. □

Based on Krylov subspace theory, an iterative method with an optimality property (such as the GMRES method [34]) terminates as soon as the degree of the minimal polynomial of the iteration matrix is attained. Next, we give an upper bound on the degree of the minimal polynomial of the preconditioned matrix 𝓟2⁻¹𝓐.

Theorem 3.7

Assume that the conditions of Theorem 3.2 are satisfied and let the preconditioner 𝓟2 be defined as in (7). Then the degree of the minimal polynomial of the preconditioned matrix 𝓟2⁻¹𝓐 is at most n + 1. Thus, the dimension of the Krylov subspace 𝓚(𝓟2⁻¹𝓐, b) is at most n + 1.

Proof

From (11), we know that the preconditioned matrix can be expressed as

𝓟2⁻¹𝓐 = [θ1, 0; θ2, I],

where θ1 = αT⁻¹(αI + U)⁻¹(T + U) and θ2 = (T + (1/α)W²)⁻¹W((1/α)T − I). Let μi (i = 1, ⋯, n) be the eigenvalues of the matrix θ1. The characteristic polynomial of the preconditioned matrix 𝓟2⁻¹𝓐 is

(λ − 1)ⁿ(λ − μ1)(λ − μ2)⋯(λ − μn).

Then

(𝓟2⁻¹𝓐 − I)(𝓟2⁻¹𝓐 − μ1I)⋯(𝓟2⁻¹𝓐 − μnI) = [(θ1 − I)(θ1 − μ1I)⋯(θ1 − μnI), 0; θ2(θ1 − μ1I)⋯(θ1 − μnI), 0].

Because μi (i = 1, ⋯, n) are the eigenvalues of θ1, the Hamilton-Cayley theorem yields

(θ1 − μ1I)(θ1 − μ2I)⋯(θ1 − μnI) = 0.

Therefore, the degree of the minimal polynomial of the preconditioned matrix 𝓟2⁻¹𝓐 is at most n + 1. Since the degree of the minimal polynomial is equal to the dimension of the corresponding Krylov subspace, the dimension of the Krylov subspace 𝓚(𝓟2⁻¹𝓐, b) is also at most n + 1. Thus, the proof of this theorem is completed. □

4 Numerical experiments

In this section, some numerical experiments are presented to illustrate the effectiveness of the preconditioner 𝓟2 for the linear system (3). We compare the performance of the preconditioner 𝓟2 with the PSHNS preconditioner and the HSS preconditioner. We denote the number of iteration steps by "Iter", the elapsed CPU time in seconds by "CPU", and the relative residual norm by "RES". Unless otherwise specified, we use left preconditioning with the GMRES method [34]. All the computations are implemented in MATLAB on a PC with an Intel(R) Core(TM) i5 CPU at 2.50 GHz and 4.00 GB memory.

In our implementations, the zero vector is adopted as the initial vector and the iteration stops when

RES := √(‖g − Txk − Wyk‖₂² + ‖f − Wxk + Tyk‖₂²) / √(‖g‖₂² + ‖f‖₂²) < 10⁻⁶.

The maximum number of iteration steps allowed is set to 3000 for the unpreconditioned GMRES method, and to 500 for the preconditioned GMRES methods. It should be emphasized that the sub-linear systems arising from the application of the preconditioners are solved by direct methods. In MATLAB, these sub-systems of linear equations are solved through sparse Cholesky or LU factorization in combination with AMD or column AMD reordering. The symbol "-" indicates that the method does not reach the required stopping criterion within the maximum number of iterations or runs out of memory. We test three values of the parameter α: α = 0.001, α = 0.01 and α = 1.

Example 1

We consider the following complex symmetric linear system which comes from [13, 24, 26]

[(−ω²M + K) + i(ωCv + CH)]x = b,

where M and K are the inertia and stiffness matrices, Cv and CH are the viscous and hysteretic damping matrices, respectively, and ω is the driving circular frequency. Here K = I⊗Vm + Vm⊗I, Vm = h⁻²tridiag(−1, 2, −1) ∈ ℝm×m is a tridiagonal matrix, ω = 2π, h = 1/(m + 1), Cv = (1/2)M, and CH = μK with μ = 0.02 being a damping coefficient. We choose W = h²(−ω²M + K), T = h²(ωCv + CH) and test M = 10I, 15I, 25I, 35I, 50I. In this example, we set m = 32 and the right-hand side b = 𝓐 * ones(2m², 1).
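For reference, the Example 1 matrices can be assembled as follows. This is a sketch with m = 16 instead of m = 32 to keep it small, and the reading Cv = (1/2)M is our interpretation of the example data and should be treated as an assumption; the check that W is symmetric indefinite and T is SPD does not depend on that choice:

```python
import numpy as np

m = 16                                   # illustrative size (paper uses m = 32)
h = 1.0 / (m + 1)
Vm = (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
      + np.diag(-np.ones(m - 1), -1)) / h**2
I = np.eye(m)
K = np.kron(I, Vm) + np.kron(Vm, I)      # stiffness matrix
omega = 2.0 * np.pi
Mmat = 35.0 * np.eye(m * m)              # inertia matrix, case M = 35I
Cv, CH = Mmat / 2.0, 0.02 * K            # damping matrices (Cv = M/2 assumed)

W = h**2 * (-omega**2 * Mmat + K)        # symmetric indefinite
T = h**2 * (omega * Cv + CH)             # symmetric positive definite
ew = np.linalg.eigvalsh(W)
et = np.linalg.eigvalsh(T)
print(ew[0] < 0 < ew[-1], et[0] > 0)
```

This confirms that the example indeed falls into the indefinite-W setting this paper targets.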

By choosing different α and M, we list the numerical results of the unpreconditioned GMRES method, the 𝓟2 preconditioned GMRES method, the PSHNS preconditioned GMRES method and the HSS preconditioned GMRES method in Table 1. Table 1 indicates that, with suitable parameters, the 𝓟2 preconditioned GMRES method requires fewer iteration steps and less CPU time than the other preconditioned GMRES methods. In Table 1, we also find that the PSHNS preconditioner and the HSS preconditioner are both more sensitive to the parameter α than the preconditioner 𝓟2. Figure 1 shows the eigenvalue distributions of the original coefficient matrix 𝓐, the preconditioned matrix 𝓟HSS⁻¹𝓐 (for α = 0.06), the preconditioned matrix 𝓟PSHNS⁻¹𝓐 (for α = 0.06) and the preconditioned matrix 𝓟2⁻¹𝓐 (for α = 0.06 and α = 0.006) with M = 35I. From Figure 1, we observe that the eigenvalue distribution of the preconditioned matrix 𝓟2⁻¹𝓐 is much better than those of the other preconditioned matrices. Moreover, a smaller parameter α generally makes the eigenvalues of the preconditioned matrix 𝓟2⁻¹𝓐 more clustered.

Fig. 1 The eigenvalue distributions of the unpreconditioned matrix and the preconditioned matrices 𝓟HSS⁻¹𝓐, 𝓟PSHNS⁻¹𝓐 and 𝓟2⁻¹𝓐 for Example 1 with M = 35I.

Table 1

Numerical results of GMRES and preconditioned GMRES methods for Example 1

Preconditioner     α           M       10I        15I        25I        35I        50I
unpreconditioned               Iter    126        155        197        243        305
                               CPU     0.26206    0.36578    0.56839    0.83868    1.34459
𝓟PSHNS            α = 0.001    Iter    71         76         87         93         108
                               CPU     0.16812    0.189197   0.224558   0.241821   0.300449
𝓟HSS              α = 0.001    Iter    22         22         23         26         31
                               CPU     0.052688   0.053715   0.05616    0.060838   0.077092
𝓟2                α = 0.001    Iter    12         11         12         12         13
                               CPU     0.032933   0.039289   0.041518   0.035258   0.037504
𝓟PSHNS            α = 0.01     Iter    71         76         87         92         108
                               CPU     0.04293    0.04455    0.05594    0.06078    0.007809
𝓟HSS              α = 0.01     Iter    15         16         20         21         27
                               CPU     0.00858    0.00885    0.01142    0.01187    0.01464
𝓟2                α = 0.01     Iter    12         11         10         11         11
                               CPU     0.00792    0.00776    0.00714    0.00751    0.00761
𝓟PSHNS            α = 1        Iter    31         34         37         35         27
                               CPU     0.06931    0.0775     0.08297    0.07733    0.05919
𝓟HSS              α = 1        Iter    77         73         54         40         29
                               CPU     0.18827    0.18229    0.13074    0.08941    0.06585
𝓟2                α = 1        Iter    15         15         18         16         14
                               CPU     0.03641    0.03653    0.0418     0.0383     0.03443

Example 2

We consider the following complex symmetric linear system [25, 26]

[(Tm⊗Im + Im⊗Tm − k²h²(Im⊗Im)) + iσ2(Im⊗Im)]x = b,

where Tm = tridiag(−1, 2, −1) is a tridiagonal matrix of order m, k denotes the wavenumber, σ2 = 0.1, h = 1/(m + 1) and b = 𝓐 * ones(2m², 1). Here, we choose W = Tm⊗Im + Im⊗Tm − k²h²(Im⊗Im) and T = σ2(Im⊗Im).
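The Example 2 matrices can be assembled similarly; a sketch with k = 20 and the illustrative size m = 16, checking that W is indeed symmetric indefinite while T is trivially SPD:

```python
import numpy as np

m, k, sigma2 = 16, 20.0, 0.1             # m = 16 is an illustrative size
h = 1.0 / (m + 1)
Tm = (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
      + np.diag(-np.ones(m - 1), -1))
Im = np.eye(m)

W = np.kron(Tm, Im) + np.kron(Im, Tm) - k**2 * h**2 * np.eye(m * m)
T = sigma2 * np.eye(m * m)               # T = sigma2 * (Im (x) Im), SPD
A = np.block([[T, W], [W, -T]])          # coefficient matrix of (3)
b = A @ np.ones(2 * m * m)               # right-hand side b = A * ones(2m^2, 1)

ew = np.linalg.eigvalsh(W)
print(ew[0] < 0 < ew[-1])                # W is symmetric indefinite here
```

The shifted Helmholtz-type term −k²h²I pushes the smallest eigenvalues of the discrete Laplacian below zero, producing the indefiniteness.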

Table 2 shows that all three preconditioned GMRES methods present much better convergence behavior than the unpreconditioned GMRES method. Meanwhile, the 𝓟2 preconditioned GMRES method gives better numerical results than the PSHNS and HSS preconditioned GMRES methods for the different parameters α, in terms of both iteration steps and CPU time. From Figure 2, we find that the eigenvalue distribution of the preconditioned matrix 𝓟2⁻¹𝓐 is much better than those of the other preconditioned matrices, and that the eigenvalues of 𝓟2⁻¹𝓐 become more clustered as α decreases.

Fig. 2 The eigenvalue distributions of the unpreconditioned matrix and the preconditioned matrices 𝓟HSS⁻¹𝓐, 𝓟PSHNS⁻¹𝓐 and 𝓟2⁻¹𝓐 for Example 2 with k = 20, m = 32.

Table 2

Numerical results of GMRES and preconditioned GMRES methods for Example 2

Preconditioner     α           k       10         20         30         40          50
                               m       2·16²      2·32²      2·64²      2·128²      2·256²
unpreconditioned               Iter    47         121        324        443         -
                               CPU     0.02811    0.24058    5.20897    29.3303     -
𝓟PSHNS            α = 0.001    Iter    29         68         123        116         99
                               CPU     0.073258   0.154954   1.349414   3.920827    17.513853
𝓟HSS              α = 0.001    Iter    12         24         45         103         211
                               CPU     0.019048   0.047163   0.463713   7.020476    160.268749
𝓟2                α = 0.001    Iter    5          10         16         30          38
                               CPU     0.013813   0.02835    0.156661   1.394879    11.409131
𝓟PSHNS            α = 0.01     Iter    29         68         123        115         98
                               CPU     0.03101    0.15599    1.33949    4.06219     30.38906
𝓟HSS              α = 0.01     Iter    9          19         30         48          57
                               CPU     0.00913    0.03617    0.23154    1.64472     14.81967
𝓟2                α = 0.01     Iter    6          8          13         18          17
                               CPU     0.00809    0.02318    0.12537    0.68779     3.90741
𝓟PSHNS            α = 1        Iter    14         29         41         39          34
                               CPU     0.060745   0.04931    0.235227   0.859889    4.907534
𝓟HSS              α = 1        Iter    29         52         62         64          65
                               CPU     0.033833   0.127379   0.705647   3.464846    23.253554
𝓟2                α = 1        Iter    10         17         20         21          22
                               CPU     0.019189   0.037429   0.176674   0.842907    4.844297

5 Conclusions

In this paper, we have proposed a relaxed block splitting preconditioner 𝓟2 for the real block two-by-two linear system (3) and analyzed the eigenvalue distributions of the corresponding preconditioned matrix. Meanwhile, the structure of the eigenvectors and an upper bound for the degree of the minimal polynomial of the preconditioned matrix P21A were also derived. Some numerical experiments are given to illustrate the effectiveness of the preconditioner 𝓟2 for solving the linear system (3).

Acknowledgement

The authors would like to thank the referees for their very detailed comments and suggestions which improved the presentation of this paper greatly.

This work is supported by the National Natural Science Foundation of China No. 11471122 and supported in part by Science and Technology Commission of Shanghai Municipality (No. 18dz2271000).

References

[1] Betts J.-T., Kolmanovsky I., Practical methods for optimal control using nonlinear programming, Appl. Mech. Rev., 2002, 55, 68, doi:10.1115/1.1483351

[2] Lass O., Vallejos M., Borzi A., Douglas C. C., Implementation and analysis of multigrid schemes with finite elements for elliptic optimal control problems, Computing, 2009, 84, 27-48, doi:10.1007/s00607-008-0024-5

[3] Arridge S. R., Optical tomography in medical imaging, Inverse Problems, 1999, 15, R41-R93, doi:10.1088/0266-5611/15/2/022

[4] Zheng Z., Zhang G.-F., Zhu M.-Z., A note on preconditioners for complex linear systems arising from PDE-constrained optimization problems, Appl. Math. Lett., 2016, 61, 114-121, doi:10.1016/j.aml.2016.04.013

[5] van Dijk W., Toyama F. M., Accurate numerical solutions of the time-dependent Schrödinger equation, Phys. Rev. E, 2007, 75, 036707, doi:10.1103/PhysRevE.75.036707

[6] Benzi M., Bertaccini D., Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal., 2008, 28, 598-618, doi:10.1093/imanum/drm039

[7] Feriani A., Perotti F., Simoncini V., Iterative system solvers for the frequency analysis of linear mechanical systems, Comput. Methods Appl. Mech. Engrg., 2000, 190, 1719-1739, doi:10.1016/S0045-7825(00)00187-0

[8] Bao G., Sun W.-W., A fast algorithm for the electromagnetic scattering from a large cavity, SIAM J. Sci. Comput., 2005, 27, 553-574, doi:10.1137/S1064827503428539

[9] Benzi M., Golub G. H., Liesen J., Numerical solution of saddle point problems, Acta Numer., 2005, 14, 1-137, doi:10.1017/S0962492904000212

[10] Saad Y., Iterative Methods for Sparse Linear Systems, SIAM, 2003, doi:10.1137/1.9780898718003

[11] Bai Z.-Z., Golub G. H., Ng M. K., Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 2003, 24, 603-626, doi:10.1137/S0895479801395458

[12] Bai Z.-Z., Benzi M., Chen F., Modified HSS iteration methods for a class of complex symmetric linear systems, Computing, 2010, 87, 93-111, doi:10.1007/s00607-010-0077-0

[13] Bai Z.-Z., Benzi M., Chen F., On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms, 2011, 56, 297-317, doi:10.1007/s11075-010-9441-6

[14] Bai Z.-Z., On preconditioned iteration methods for complex linear systems, J. Eng. Math., 2015, 93, 41-60, doi:10.1007/s10665-013-9670-5

[15] Li X., Yang A.-L., Wu Y.-J., Lopsided PMHSS iteration method for a class of complex symmetric linear systems, Numer. Algorithms, 2014, 66, 555-568, doi:10.1007/s11075-013-9748-1

[16] Zheng Q.-Q., Ma C.-F., Accelerated PMHSS iteration methods for complex symmetric linear systems, Numer. Algorithms, 2016, 73, 501-516, doi:10.1007/s11075-016-0105-z

[17] Hezari D., Edalatpour V., Salkuyeh D. K., Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations, Numer. Linear Algebra Appl., 2014, 22, 761-776, doi:10.1002/nla.1987

[18] Bai Z.-Z., Several splittings for non-Hermitian linear systems, Sci. China Ser. A: Math., 2008, 51, 1339-1348, doi:10.1007/s11425-008-0106-z

[19] Zheng Z., Huang F.-L., Peng Y.-C., Double-step scale splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett., 2017, 73, 91-97, doi:10.1016/j.aml.2017.04.017

[20] Bai Z.-Z., Rotated block triangular preconditioning based on PMHSS, Sci. China Math., 2013, 56, 2523-2538, doi:10.1007/s11425-013-4695-9

[21] Bai Z.-Z., Benzi M., Chen F., Wang Z.-Q., Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal., 2013, 33, 343-369, doi:10.1093/imanum/drs001

[22] Lang C., Ren Z.-R., Inexact rotated block triangular preconditioners for a class of block two-by-two matrices, J. Eng. Math., 2015, 93, 87-98, doi:10.1007/s10665-013-9674-1

[23] Liang Z.-Z., Zhang G.-F., On SSOR iteration method for a class of block two-by-two linear systems, Numer. Algorithms, 2016, 71, 655-671, doi:10.1007/s11075-015-0015-5

[24] Wu S.-L., Several variants of the Hermitian and skew-Hermitian splitting method for a class of complex symmetric linear systems, Numer. Linear Algebra Appl., 2015, 22, 338-356, doi:10.1002/nla.1952

[25] Zhang J.-H., Dai H., A new splitting preconditioner for the iterative solution of complex symmetric indefinite linear systems, Appl. Math. Lett., 2015, 49, 100-106, doi:10.1016/j.aml.2015.05.006

[26] Zhang J.-H., Dai H., A new block preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms, 2017, 74, 889-903, doi:10.1007/s11075-016-0175-y

[27] Zhang J.-L., Fan H.-T., Gu C.-Q., An improved block splitting preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms, 2018, 77, 451-478, doi:10.1007/s11075-017-0323-z

[28] Pan J.-Y., Ng M. K., Bai Z.-Z., New preconditioners for saddle point problems, Appl. Math. Comput., 2006, 172, 762-771, doi:10.1016/j.amc.2004.11.016

[29] Bai Z.-Z., Golub G. H., Lu L.-Z., Yin J.-F., Block triangular and skew-Hermitian splitting method for positive definite linear systems, SIAM J. Sci. Comput., 2005, 26, 844-863, doi:10.1137/S1064827503428114

[30] Cao Y., Dong J.-L., Wang Y.-M., A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation, J. Comput. Appl. Math., 2015, 273, 41-60, doi:10.1016/j.cam.2014.06.001

[31] Fan H.-T., Zheng B., Zhu X.-Y., A relaxed positive semi-definite and skew-Hermitian splitting preconditioner for non-Hermitian generalized saddle point problems, Numer. Algor., 2016, 72, 813-834, doi:10.1007/s11075-015-0068-5

[32] Cao Y., Yao L.-Q., Jiang M.-Q., Niu Q., A relaxed HSS preconditioner for saddle point problems from meshfree discretization, J. Comput. Math., 2013, 31, 398-421, doi:10.4208/jcm.1304-m4209

[33] Zhang K., Zhang J.-L., Gu C.-Q., A new relaxed PSS preconditioner for nonsymmetric saddle point problems, Appl. Math. Comput., 2017, 308, 115-129, doi:10.1016/j.amc.2017.03.022

[34] Saad Y., Schultz M. H., GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 1986, 7, 856-869, doi:10.1137/0907058

Received: 2017-10-08
Accepted: 2018-03-29
Published Online: 2018-06-07

© 2018 Huang and Chen, published by De Gruyter

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.

  41. Biderivations of the higher rank Witt algebra without anti-symmetric condition
  42. Some remarks on spectra of nuclear operators
  43. Recursive interpolating sequences
  44. Involutory biquandles and singular knots and links
  45. Constacyclic codes over 𝔽pm[u1, u2,⋯,uk]/〈 ui2 = ui, uiuj = ujui
  46. Topological entropy for positively weak measure expansive shadowable maps
  47. Oscillation and non-oscillation of half-linear differential equations with coeffcients determined by functions having mean values
  48. On 𝓠-regular semigroups
  49. One kind power mean of the hybrid Gauss sums
  50. A reduced space branch and bound algorithm for a class of sum of ratios problems
  51. Some recurrence formulas for the Hermite polynomials and their squares
  52. A relaxed block splitting preconditioner for complex symmetric indefinite linear systems
  53. On f - prime radical in ordered semigroups
  54. Positive solutions of semipositone singular fractional differential systems with a parameter and integral boundary conditions
  55. Disjoint hypercyclicity equals disjoint supercyclicity for families of Taylor-type operators
  56. A stochastic differential game of low carbon technology sharing in collaborative innovation system of superior enterprises and inferior enterprises under uncertain environment
  57. Dynamic behavior analysis of a prey-predator model with ratio-dependent Monod-Haldane functional response
  58. The points and diameters of quantales
  59. Directed colimits of some flatness properties and purity of epimorphisms in S-posets
  60. Super (a, d)-H-antimagic labeling of subdivided graphs
  61. On the power sum problem of Lucas polynomials and its divisible property
  62. Existence of solutions for a shear thickening fluid-particle system with non-Newtonian potential
  63. On generalized P-reducible Finsler manifolds
  64. On Banach and Kuratowski Theorem, K-Lusin sets and strong sequences
  65. On the boundedness of square function generated by the Bessel differential operator in weighted Lebesque Lp,α spaces
  66. On the different kinds of separability of the space of Borel functions
  67. Curves in the Lorentz-Minkowski plane: elasticae, catenaries and grim-reapers
  68. Functional analysis method for the M/G/1 queueing model with single working vacation
  69. Existence of asymptotically periodic solutions for semilinear evolution equations with nonlocal initial conditions
  70. The existence of solutions to certain type of nonlinear difference-differential equations
  71. Domination in 4-regular Knödel graphs
  72. Stepanov-like pseudo almost periodic functions on time scales and applications to dynamic equations with delay
  73. Algebras of right ample semigroups
  74. Random attractors for stochastic retarded reaction-diffusion equations with multiplicative white noise on unbounded domains
  75. Nontrivial periodic solutions to delay difference equations via Morse theory
  76. A note on the three-way generalization of the Jordan canonical form
  77. On some varieties of ai-semirings satisfying xp+1x
  78. Abstract-valued Orlicz spaces of range-varying type
  79. On the recursive properties of one kind hybrid power mean involving two-term exponential sums and Gauss sums
  80. Arithmetic of generalized Dedekind sums and their modularity
  81. Multipreconditioned GMRES for simulating stochastic automata networks
  82. Regularization and error estimates for an inverse heat problem under the conformable derivative
  83. Transitivity of the εm-relation on (m-idempotent) hyperrings
  84. Learning Bayesian networks based on bi-velocity discrete particle swarm optimization with mutation operator
  85. Simultaneous prediction in the generalized linear model
  86. Two asymptotic expansions for gamma function developed by Windschitl’s formula
  87. State maps on semihoops
  88. 𝓜𝓝-convergence and lim-inf𝓜-convergence in partially ordered sets
  89. Stability and convergence of a local discontinuous Galerkin finite element method for the general Lax equation
  90. New topology in residuated lattices
  91. Optimality and duality in set-valued optimization utilizing limit sets
  92. An improved Schwarz Lemma at the boundary
  93. Initial layer problem of the Boussinesq system for Rayleigh-Bénard convection with infinite Prandtl number limit
  94. Toeplitz matrices whose elements are coefficients of Bazilevič functions
  95. Epi-mild normality
  96. Nonlinear elastic beam problems with the parameter near resonance
  97. Orlicz difference bodies
  98. The Picard group of Brauer-Severi varieties
  99. Galoisian and qualitative approaches to linear Polyanin-Zaitsev vector fields
  100. Weak group inverse
  101. Infinite growth of solutions of second order complex differential equation
  102. Semi-Hurewicz-Type properties in ditopological texture spaces
  103. Chaos and bifurcation in the controlled chaotic system
  104. Translatability and translatable semigroups
  105. Sharp bounds for partition dimension of generalized Möbius ladders
  106. Uniqueness theorems for L-functions in the extended Selberg class
  107. An effective algorithm for globally solving quadratic programs using parametric linearization technique
  108. Bounds of Strong EMT Strength for certain Subdivision of Star and Bistar
  109. On categorical aspects of S -quantales
  110. On the algebraicity of coefficients of half-integral weight mock modular forms
  111. Dunkl analogue of Szász-mirakjan operators of blending type
  112. Majorization, “useful” Csiszár divergence and “useful” Zipf-Mandelbrot law
  113. Global stability of a distributed delayed viral model with general incidence rate
  114. Analyzing a generalized pest-natural enemy model with nonlinear impulsive control
  115. Boundary value problems of a discrete generalized beam equation via variational methods
  116. Common fixed point theorem of six self-mappings in Menger spaces using (CLRST) property
  117. Periodic and subharmonic solutions for a 2nth-order p-Laplacian difference equation containing both advances and retardations
  118. Spectrum of free-form Sudoku graphs
  119. Regularity of fuzzy convergence spaces
  120. The well-posedness of solution to a compressible non-Newtonian fluid with self-gravitational potential
  121. On further refinements for Young inequalities
  122. Pretty good state transfer on 1-sum of star graphs
  123. On a conjecture about generalized Q-recurrence
  124. Univariate approximating schemes and their non-tensor product generalization
  125. Multi-term fractional differential equations with nonlocal boundary conditions
  126. Homoclinic and heteroclinic solutions to a hepatitis C evolution model
  127. Regularity of one-sided multilinear fractional maximal functions
  128. Galois connections between sets of paths and closure operators in simple graphs
  129. KGSA: A Gravitational Search Algorithm for Multimodal Optimization based on K-Means Niching Technique and a Novel Elitism Strategy
  130. θ-type Calderón-Zygmund Operators and Commutators in Variable Exponents Herz space
  131. An integral that counts the zeros of a function
  132. On rough sets induced by fuzzy relations approach in semigroups
  133. Computational uncertainty quantification for random non-autonomous second order linear differential equations via adapted gPC: a comparative case study with random Fröbenius method and Monte Carlo simulation
  134. The fourth order strongly noncanonical operators
  135. Topical Issue on Cyber-security Mathematics
  136. Review of Cryptographic Schemes applied to Remote Electronic Voting systems: remaining challenges and the upcoming post-quantum paradigm
  137. Linearity in decimation-based generators: an improved cryptanalysis on the shrinking generator
  138. On dynamic network security: A random decentering algorithm on graphs
Heruntergeladen am 6.9.2025 von https://www.degruyterbrill.com/document/doi/10.1515/math-2018-0051/html?lang=de
Button zum nach oben scrollen