
Rayleigh-Ritz Majorization Error Bounds for the Linear Response Eigenvalue Problem

  • Zhongming Teng and Hong-Xiu Zhong
Published/Copyright: 9 July 2019

Abstract

In the linear response eigenvalue problem arising from computational quantum chemistry and physics, one needs to compute a few of the smallest positive eigenvalues together with the corresponding eigenvectors. For such a task, most efficient algorithms are based on an important notion, the so-called pair of deflating subspaces. If a pair of deflating subspaces is at hand, the computed approximate eigenvalues are a subset of the exact eigenvalues of the linear response eigenvalue problem. When the pair of deflating subspaces is not available and only an approximation to it is, Zhang, Xue and Li, in a recent paper [SIAM J. Matrix Anal. Appl., 35(2), pp. 765-782, 2014], obtained relationships between the accuracy of the eigenvalue approximations and the distances from the exact deflating subspaces to their approximations. In this paper, we establish majorization type results for these relationships. From our majorization results, various bounds are readily available to estimate how accurate the approximate eigenvalues are, based on information about the accuracy of the approximate pair of deflating subspaces. These results provide theoretical foundations for assessing the relative performance of certain iterative methods for the linear response eigenvalue problem.

MSC 2010: 65F15; 65L15

1 Introduction

In computational quantum chemistry and physics, the excitation states and absorption spectra of molecules or surfaces of solids are described by the random phase approximation (RPA) or the Bethe-Salpeter (BS) equation [1, 2]. One important task in the RPA or BS equation is to compute a few eigenpairs associated with the smallest positive eigenvalues of the following eigenvalue problem:

H w = \begin{bmatrix} A & B \\ -B & -A \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix} = λ\begin{bmatrix} u \\ v \end{bmatrix} = λ w, \qquad (1.1)

where A, B ∈ ℝ^{n×n} are both symmetric matrices and \begin{bmatrix} A & B \\ B & A \end{bmatrix} is positive definite [3, 4, 5]. The matrix H in (1.1) is a special Hamiltonian matrix whose eigenvalues are real and come in pairs {λ, −λ}. Therefore, we can order the 2n eigenvalues of H as

−λ_n ≤ ⋯ ≤ −λ_1 < λ_1 ≤ ⋯ ≤ λ_n. \qquad (1.2)

Through a similarity transformation, the eigenvalue problem (1.1) can be equivalently transformed into

H z = \begin{bmatrix} 0 & K \\ M & 0 \end{bmatrix}\begin{bmatrix} y \\ x \end{bmatrix} = λ\begin{bmatrix} y \\ x \end{bmatrix} = λ z, \qquad (1.3)

where K = A − B and M = A + B are both n × n real symmetric positive definite matrices. However, consistent with [3], throughout the rest of the paper we relax the conditions on K and M so that both are symmetric positive semi-definite and one of them is definite, unless explicitly stated otherwise. This means that possibly λ_1 = 0. This eigenvalue problem is still referred to as the linear response eigenvalue problem (LREP) [3, 6], and it will be so in this paper, too. There is immense recent interest in developing new theory and efficient numerical algorithms for LREP and for the associated excitation response calculations of molecules for materials design in energy science [7, 8, 9, 10].

As the dimension n is usually very large, LREP (1.3) is generally solved by iterative methods, such as the Locally Optimal Block Preconditioned 4D Conjugate Gradient Method (LOBP4DCG) [6] and its space-extended variation [11], the block Chebyshev-Davidson method [12], and the generalized Lanczos type methods [13, 14]. These efficient numerical algorithms are all based on the concept of a pair of deflating subspaces, which is a generalization of the invariant subspace in the standard eigenvalue problem. For given k-dimensional subspaces 𝓧 ⊆ ℝn and 𝓨 ⊆ ℝn, {𝓧, 𝓨} is called a pair of deflating subspaces if they satisfy

K𝓧 ⊆ 𝓨 \quad and \quad M𝓨 ⊆ 𝓧.

Whenever such a pair of deflating subspaces is available, LREP (1.3) can be projected onto a much smaller problem of the form (1.3) whose spectrum is part of that of H. That means a pair of deflating subspaces can be used to extract the eigenvalues corresponding to the deflating subspace pair. In practice, such a pair of exact deflating subspaces is usually not available, only an approximate one, and the projection method then computes approximations to the eigenvalues of LREP (1.3). Quantifying how good these approximate eigenvalues are is the objective of this paper.

For the standard symmetric eigenvalue problem, there are existing results that estimate how accurate such approximate eigenvalues may be; see [15, 16, 17, 18, 19]. For LREP (1.3), residual-based error bounds for approximate eigenvalues computed through a pair of approximate deflating subspaces were obtained in [20]. These results bound the eigenvalue approximation errors in terms of certain residuals. Another set of results in [21] also bounds the eigenvalue errors, but in terms of the canonical angles between the approximate deflating subspace pair and the exact pair. In this paper, we put forward two improvements to the Rayleigh-Ritz approximation theory in [21] by using weak majorization. Rayleigh-Ritz majorization type eigenvalue error bounds have been well established for the symmetric eigenvalue problem; see [15, 17]. Compared with typical inequality results, majorization type bounds provide a succinct way to express numerous useful inequalities involving two vectors. The major goals of this paper are two-fold: to establish Rayleigh-Ritz majorization error bounds for LREP, and to extend our results by allowing unequal dimensions between the exact and approximate pairs of deflating subspaces. These improvements are helpful for understanding how approximate eigenvalues move towards the associated exact eigenvalues in iterative methods for LREP.

The rest of the paper is organized as follows. In Section 2, some notations and preliminaries including the concept of majorization and the canonical angles of two subspaces are collected for use later. Section 3 contains our main results on how the vector of differences between the exact eigenvalues and their approximations is weakly majorized by the canonical angles from the exact to approximate pair of deflating subspaces. In Section 4, some numerical examples are presented to support our analysis. Finally, Conclusions are given in Section 5.

2 Preliminaries

2.1 Basic definitions

ℝ^{m×n} is the set of all m × n real matrices, ℝ^n = ℝ^{n×1}, and ℝ = ℝ^1. For 𝓧 ⊆ ℝn, dim(𝓧) is the dimension of 𝓧. For X ∈ ℝ^{m×n}, X^T is its transpose, 𝓡(X) is the column space of X, and the submatrix X_{(:,i:j)} of X consists of column i to column j. I_n is the n × n identity matrix, or simply I if its dimension is clear from the context. For x ∈ ℝn and y ∈ ℝn,

x_{(k:ℓ)} = \begin{cases} [x_k, \ldots, x_n, \underbrace{0, \ldots, 0}_{ℓ-n}]^T, & \text{for } k ≤ n < ℓ, \\ [x_k, \ldots, x_ℓ]^T, & \text{for } k ≤ ℓ ≤ n, \end{cases}

the notation x ≥ y is used to compare x with y componentwise, and x ∘ y = [x_1y_1, …, x_ny_n]^T denotes the Hadamard product of x and y; in particular, x^{∘2} = x ∘ x. For scalars α_i, diag(α_1, α_2, …, α_k) ∈ ℝ^{k×k} is a diagonal matrix with diagonal entries α_1, α_2, …, α_k.

For x = [x_1, x_2, …, x_n]^T ∈ ℝn, x^{↓} = [x_1^{↓}, x_2^{↓}, …, x_n^{↓}]^T is the rearrangement of the entries of x in descending order, i.e., x_1^{↓} ≥ x_2^{↓} ≥ ⋯ ≥ x_n^{↓}, and x^{↑} = [x_1^{↑}, x_2^{↑}, …, x_n^{↑}]^T represents x with its elements rearranged in ascending order. We use λ(A) = λ^{↓}(A) to denote the vector of eigenvalues of a symmetric matrix A arranged in descending order. σ(B) = σ^{↓}(B) = [σ_1(B), σ_2(B), …, σ_n(B)]^T denotes the vector of singular values of B ∈ ℝ^{m×n} arranged in descending order, where σ_i(B) ≥ 0 and m ≥ n.

For vectors x, y ∈ ℝn, the usual inner product and its induced norm are defined by

⟨x, y⟩ = x^T y, \qquad ‖x‖_2 = \sqrt{⟨x, x⟩}.

Consider two subspaces 𝓧 and 𝓨 in ℝn, and suppose

k = dim(𝓧) ≤ dim(𝓨) = ℓ. \qquad (2.1)

Let X ∈ ℝ^{n×k} and Y ∈ ℝ^{n×ℓ} be orthonormal basis matrices of 𝓧 and 𝓨, respectively, i.e.,

X^T X = I_k, \quad 𝓡(X) = 𝓧, \quad and \quad Y^T Y = I_ℓ, \quad 𝓡(Y) = 𝓨.

The vector of cosines of canonical angles from the subspace 𝓧 to the subspace 𝓨 is defined by cosΘ(𝓧, 𝓨) = σ(X^T Y) with

Θ(𝓧, 𝓨) = Θ^{↓}(𝓧, 𝓨) = [θ_1(𝓧, 𝓨), …, θ_k(𝓧, 𝓨)]^T, \qquad (2.2)

i.e., θ_1(𝓧, 𝓨) ≥ … ≥ θ_k(𝓧, 𝓨). It is clear that Θ(𝓧, 𝓨) so defined is invariant with respect to the choice of the orthonormal basis matrices X and Y. Therefore, in what follows we sometimes place a matrix in one or both arguments of θ_i(⋅, ⋅) and Θ(⋅, ⋅), with the understanding that it refers to the subspace spanned by the columns of the matrix argument.
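
The definition is easy to realize numerically: orthonormalize any bases of the two subspaces and take the singular values of their cross product. The following NumPy sketch is our illustration (the function name is not from the paper) of computing Θ(𝓧, 𝓨) in descending order as in (2.2).

```python
import numpy as np

def canonical_angles(X, Y):
    """Canonical angles from span(X) to span(Y), descending order as in (2.2).

    X is n-by-k and Y is n-by-l with k <= l; columns need not be orthonormal."""
    Qx, _ = np.linalg.qr(X)                      # orthonormal basis of span(X)
    Qy, _ = np.linalg.qr(Y)                      # orthonormal basis of span(Y)
    c = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    c = np.clip(c, 0.0, 1.0)                     # guard against rounding above 1
    return np.sort(np.arccos(c))[::-1]           # theta_1 >= ... >= theta_k

# Two planes in R^4 that share exactly one direction: angles are [pi/2, 0].
X = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])
Y = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])
print(canonical_angles(X, Y))
```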

Lemma 2.1

([22, Proposition 2.1 and Proposition 2.4]). Let 𝓧 and 𝓨 be two subspaces inn.

  1. If dim(𝓧) = k ≤ dim(𝓨) = ℓ, for any 𝓨̂ ⊆ 𝓨 with dim(𝓨̂) = dim(𝓧) = k, we have Θ(𝓧, 𝓨̂) ≥ Θ(𝓧, 𝓨). In addition, there exists a k-dimensional subspace 𝓨̂1 ⊆ 𝓨 such that Θ(𝓧, 𝓨̂1) = Θ(𝓧, 𝓨).

  2. If dim(𝓧) = dim(𝓨) = ℓ, for any 𝓧̂ ⊆ 𝓧 with dim(𝓧̂) = k, we have Θ(𝓧, 𝓨)_{(1:k)} ≥ Θ(𝓧̂, 𝓨) ≥ Θ(𝓧, 𝓨)_{(ℓ−k+1:ℓ)}.

Lemma 2.2

([23, Proposition 2.2]). Let 𝓧 and 𝓨 be two subspaces inn satisfying (2.1). Then

θ_i(𝓧, 𝓨) = 0, \quad for k − m_0 + 1 ≤ i ≤ k,

where m0 = dim(𝓧 ∩ 𝓨).

Lemma 2.3

([17, Theorem 4.7]). Let 𝓧 and 𝓨 be two subspaces in ℝn with dim(𝓧) = dim(𝓨) = k. Then, λ(Π_𝓧 − Π_𝓨)_{(1:k)} = sinΘ(𝓧, 𝓨), where Π_𝓧 and Π_𝓨 are the orthogonal projectors onto the subspaces 𝓧 and 𝓨, respectively.

For any given symmetric and positive definite matrix W ∈ ℝn×n, the W-inner product and its induced W-norm are defined by

⟨x, y⟩_W = y^T W x, \qquad ‖x‖_W = \sqrt{⟨x, x⟩_W}.

For two subspaces 𝓧, 𝓨 ⊆ ℝn satisfying (2.1), let X and Y be the W-orthonormal basis matrices of subspaces 𝓧 and 𝓨, respectively, i.e.,

X^T W X = I_k, \quad 𝓡(X) = 𝓧, \quad and \quad Y^T W Y = I_ℓ, \quad 𝓡(Y) = 𝓨.

Similarly to the standard canonical angles (2.2), we define

cosΘ_W(𝓧, 𝓨) = σ(X^T W Y),

where

Θ_W(𝓧, 𝓨) = Θ_W^{↓}(𝓧, 𝓨) = [θ_W^{(1)}(𝓧, 𝓨), …, θ_W^{(k)}(𝓧, 𝓨)]^T

denotes the vector of the W-canonical angles from 𝓧 to 𝓨 in descending order. Let W = CC^T with nonsingular C ∈ ℝ^{n×n}. Then, by [24, Theorem 4.2],

Θ_W(𝓧, 𝓨) = Θ(C^T𝓧, C^T𝓨). \qquad (2.3)

Therefore, Lemmas 2.1 and 2.2 still hold with the standard canonical angles replaced by W-canonical angles.
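
Numerically, (2.3) suggests computing W-canonical angles by factoring W = CC^T (for instance with a Cholesky factorization, one admissible choice of C) and falling back to standard canonical angles. A minimal sketch, reusing the canonical_angles helper above:

```python
import numpy as np

def w_canonical_angles(X, Y, W):
    """W-canonical angles from span(X) to span(Y), computed via (2.3)."""
    C = np.linalg.cholesky(W)            # W = C C^T with C lower triangular
    return canonical_angles(C.T @ X, C.T @ Y)
```

By [24, Theorem 4.2], the computed angles do not depend on which factor C of W = CC^T is used.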

2.2 Majorization and weak majorization

For x, y ∈ ℝn, we say that x is weakly majorized by y, in symbols x ≺_w y, if

∑_{i=1}^{k} x_i^{↓} ≤ ∑_{i=1}^{k} y_i^{↓}, \quad for 1 ≤ k ≤ n.

If in addition,

∑_{i=1}^{n} x_i = ∑_{i=1}^{n} y_i,

we say that the vector x is (strongly) majorized by y, written x ≺ y.
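
These definitions can be verified mechanically by comparing partial sums of the decreasingly sorted vectors; the helpers below (our naming) do exactly that.

```python
import numpy as np

def weakly_majorized(x, y, tol=1e-12):
    """True if x is weakly majorized by y."""
    px = np.cumsum(np.sort(np.asarray(x, dtype=float))[::-1])   # partial sums, descending
    py = np.cumsum(np.sort(np.asarray(y, dtype=float))[::-1])
    return bool(np.all(px <= py + tol))

def majorized(x, y, tol=1e-12):
    """True if x is (strongly) majorized by y: weak majorization plus equal total sums."""
    return weakly_majorized(x, y, tol) and abs(np.sum(x) - np.sum(y)) <= tol

print(weakly_majorized([1, 2, 3], [0, 1, 5]), majorized([1, 2, 3], [0, 1, 5]))  # True True
```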

To facilitate our discussion, we collect several simple and general properties of the majorization and weak majorization in Lemma 2.4, and the reader is referred to [25] for proofs and more.

Lemma 2.4

Let x, y ∈ ℝn and x ≺_w y.

  1. If y ≤ z, then x ≺_w z.

  2. If u ≺_w v, then x + u ≺_w y + v. This also holds with all ≺_w replaced by ≺.

  3. If u ∈ ℝ_+^n, then x ∘ u ≺_w y ∘ u. In particular, if α is a positive real number, then x ≺_w y implies αx ≺_w αy.

The following lemmas on the majorization or weak majorization relations of eigenvalues and singular values are critical to our main results.

Lemma 2.5

([25, Theorem II.3.6]). Let A be an n × n matrix. We haveλ(A)∣≺wσ(A). In particular, if AT = A, we haveλ(A)∣ = σ(A).

Lemma 2.6

([25, Exercise II.1.14]). Let A and B be symmetric matrices. The eigenvalues of A, B and A + B satisfy λ(A + B)≺wλ(A) + λ(B).

Lemma 2.7

([25, Corollary III.4.2]). Let A and B be symmetric matrices. The eigenvalues of A, B and A − B satisfy λ(A) − λ(B) ≺ λ(A − B).

Lemma 2.8

([26, Theorem 1]). For B, D, E ∈ ℝn×n, we have

σ(DBE^T) ≺_w σ(B) ∘ δ,

where δ is any vector which weakly majorizes σ(D) ∘ σ(E).

Lemma 2.8 implies that if B ∈ ℝ^{n×m}, D ∈ ℝ^{k×n} and E ∈ ℝ^{ℓ×m} with k ≤ ℓ ≤ m ≤ n, then

σ(DBE^T) ≺_w σ(B)_{(1:k)} ∘ σ(D)_{(1:k)} ∘ σ(E)_{(1:k)}. \qquad (2.4)

The weak majorization relationship (2.4) is obtained by extending B, D and E with zero blocks to n × n matrices. The extension with zero blocks only appends zero singular values and does not change the ranks.

3 Main results

Many theoretical properties of LREP (1.3) have been established in [3]. In particular, the following lemma presents decompositions of K and M, which are necessary for our later developments.

Lemma 3.1

([3, Theorem 2.3]). The following statements hold for any symmetric matrices K, M ∈ ℝn×n with M being positive definite.

  1. There exists a nonsingular Ψ ∈ ℝn×n such that

    K = ΨΛ^2Ψ^T \quad and \quad M = ΦΦ^T, \qquad (3.1)

    where Λ = diag(λ_1, λ_2, …, λ_n), λ_1^2 ≤ λ_2^2 ≤ ⋯ ≤ λ_n^2 and Φ = Ψ^{−T}.

  2. If K is also definite, then all λi > 0 and H is diagonalizable:

    H\begin{bmatrix} ΨΛ & ΨΛ \\ Φ & −Φ \end{bmatrix} = \begin{bmatrix} ΨΛ & ΨΛ \\ Φ & −Φ \end{bmatrix}\begin{bmatrix} Λ & 0 \\ 0 & −Λ \end{bmatrix}.

    The eigen-decompositions of KM and MK are

    (KM)Ψ = ΨΛ^2 \quad and \quad (MK)Φ = ΦΛ^2,

    respectively.

As we have introduced in Section 1, for two k-dimensional subspaces 𝓧 and 𝓨 in ℝn, we call {𝓧, 𝓨} a pair of deflating subspaces if

K𝓧 ⊆ 𝓨 \quad and \quad M𝓨 ⊆ 𝓧. \qquad (3.2)

Let X, Y ∈ ℝ^{n×k} be basis matrices for 𝓧 and 𝓨, respectively. Then (3.2) implies that there exist K_R ∈ ℝ^{k×k} and M_R ∈ ℝ^{k×k} such that

KX = YK_R \quad and \quad MY = XM_R,

or equivalently,

H\begin{bmatrix} Y & 0 \\ 0 & X \end{bmatrix} = \begin{bmatrix} Y & 0 \\ 0 & X \end{bmatrix}H_R \quad with \quad H_R = \begin{bmatrix} 0 & K_R \\ M_R & 0 \end{bmatrix}.

Furthermore, if

H_R\hat z = \begin{bmatrix} 0 & K_R \\ M_R & 0 \end{bmatrix}\begin{bmatrix} \hat y \\ \hat x \end{bmatrix} = \hat λ\begin{bmatrix} \hat y \\ \hat x \end{bmatrix} = \hat λ\hat z,

then \left(\hat λ, \begin{bmatrix} Y\hat y \\ X\hat x \end{bmatrix}\right) is an eigenpair of H.

Roughly speaking, most efficient algorithms for LREP generate a sequence of approximate deflating subspace pairs that hopefully converge to, or contain subspaces near, the pair of deflating subspaces associated with the first few smallest λ_i. Therefore, in the rest of the paper, we focus on the pair of deflating subspaces {𝓡(Φ_1), 𝓡(Ψ_1)}, where Φ_1 = Φ_{(:,1:k)} and Ψ_1 = Ψ_{(:,1:k)}, and let {𝓤, 𝓥} satisfying

dim(𝓤) = dim(𝓥) = ℓ ≥ k \quad and \quad θ_1(𝓤, 𝓥) < \frac{π}{2} \qquad (3.3)

be a pair of approximate deflating subspaces of {𝓡(Φ_1), 𝓡(Ψ_1)} if ℓ = k, or, if ℓ > k, contain a k-dimensional subspace pair approximating {𝓡(Φ_1), 𝓡(Ψ_1)}. Let U, V ∈ ℝ^{n×ℓ} be any basis matrices for 𝓤 and 𝓥, respectively. Then, the condition θ_1(𝓤, 𝓥) < π/2 implies that U^T V is nonsingular. By [6], the best approximate eigenpairs in the sense of the trace minimization principle are obtained by computing the eigenpairs of

H_{SR} = \begin{bmatrix} 0 & K_{SR} \\ M_{SR} & 0 \end{bmatrix},

where K_{SR} = W_1^{−T}U^T K U W_1^{−1}, M_{SR} = W_2^{−T}V^T M V W_2^{−1}, and W_1, W_2 ∈ ℝ^{ℓ×ℓ} are nonsingular factors obtained from the factorization U^T V = W_1^T W_2. In particular, if U^T V = I_ℓ, then H_{SR} becomes

H_{SR} = \begin{bmatrix} 0 & U^T K U \\ V^T M V & 0 \end{bmatrix}.

Notice that H_{SR} so defined is of LREP type, since both K_{SR} and M_{SR} are symmetric and have the same definiteness properties as the corresponding K and M. Therefore, we can denote its eigenvalues by

−μ_ℓ ≤ ⋯ ≤ −μ_1 ≤ μ_1 ≤ ⋯ ≤ μ_ℓ,

in which some of the μ_i should be good approximations to eigenvalues of H. Moreover, by [3, Theorem 2.9], these eigenvalues of H_{SR} are independent of the basis matrices U and V and of the factorization U^T V = W_1^T W_2, which is not unique. Now, we would like to bound the errors in the μ_i as approximations to λ_i for 1 ≤ i ≤ k in terms of the distances from {𝓡(Φ_1), 𝓡(Ψ_1)} to {𝓤, 𝓥}. For this purpose, Zhang, Xue and Li in [21] established an inequality in the case ℓ = k, i.e.,

∑_{i=1}^{k}(μ_i^2 − λ_i^2) ≤ \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)∑_{i=1}^{k}λ_i^2 + \frac{λ_n^2 − λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)}\,‖\sinΘ_{M^{-1}}(𝓤, Φ_1)‖_2^2. \qquad (3.4)

We first extend this result by replacing the traditional inequality in (3.4) with a weak majorization relation. To prove our main results, we also need the following lemma, which provides special basis matrices for the pair {𝓤, 𝓥} that are used in the proof of [21, Theorem 3.1].

Lemma 3.2

Let 𝓤, 𝓥 ⊆ ℝn satisfy (3.3) with ℓ = k. There exist basis matrices U, V ∈ ℝ^{n×k} of 𝓤 and 𝓥, respectively, and an orthogonal matrix P ∈ ℝ^{n×n} such that

U^T V = I_k, \quad Ũ = Ψ^T U = P^T\begin{bmatrix} Γ^{−1} \\ 0 \end{bmatrix}, \quad Ṽ = Φ^T V = P^T\begin{bmatrix} Γ \\ Σ \end{bmatrix}, \qquad (3.5)

where Γ = diag(γ_1, …, γ_k) with γ_i = cos θ_{M^{-1}}^{(i)}(𝓤, M𝓥) for 1 ≤ i ≤ k, and

  1. for 2k ≤ n,

    Σ = \begin{bmatrix} Σ̃ \\ 0 \end{bmatrix} ∈ ℝ^{(n−k)×k}, \quad Σ̃ = diag\bigl(\sin θ_{M^{-1}}^{(1)}(𝓤, M𝓥), …, \sin θ_{M^{-1}}^{(k)}(𝓤, M𝓥)\bigr) ∈ ℝ^{k×k};
  2. for 2k > n,

    Σ = \begin{bmatrix} Σ̃ & 0 \end{bmatrix} ∈ ℝ^{(n−k)×k}, \quad Σ̃ = diag\bigl(\sin θ_{M^{-1}}^{(1)}(𝓤, M𝓥), …, \sin θ_{M^{-1}}^{(n−k)}(𝓤, M𝓥)\bigr) ∈ ℝ^{(n−k)×(n−k)}.

Theorem 3.1

Let {𝓤, 𝓥} satisfying (3.3) with ℓ = k be an approximation to {𝓡(Φ_1), 𝓡(Ψ_1)}, and set μ = [μ_k, …, μ_1]^T and λ = [λ_k, …, λ_1]^T. Assume that M is definite. We have

0 ≤ μ^{∘2} − λ^{∘2} ≺_w δ ∘ \sin^2Θ_{M^{-1}}(𝓤, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)\,λ^{∘2}, \qquad (3.6)

where δ = \left[\frac{λ_n^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)} − \frac{λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)},\ …,\ \frac{λ_{n−k+1}^2}{\cos^2θ_{M^{-1}}^{(k)}(𝓤, M𝓥)} − \frac{λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)}\right]^T.

Proof

Since the eigenvalues of H_{SR} are unchanged under different choices of the basis matrices for 𝓤 and 𝓥, we choose the basis matrices U and V as in Lemma 3.2. Let Ũ, Ṽ, P, and Γ be as defined in Lemma 3.2. Partition P, Ũ and Λ as

P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}, \quad Ũ = \begin{bmatrix} Ũ_1 \\ Ũ_2 \end{bmatrix}, \quad Λ = \begin{bmatrix} Λ_1 & 0 \\ 0 & Λ_2 \end{bmatrix}, \qquad (3.7)

where P_{11}, Ũ_1 ∈ ℝ^{k×k}, Λ_1 = diag(λ_1, …, λ_k) and Λ_2 = diag(λ_{k+1}, …, λ_n).

Then, it follows by (3.5) that,

Ũ_1 = P_{11}^TΓ^{−1}, \quad Ũ_2 = P_{12}^TΓ^{−1}, \quad Ũ^TṼ = I_k, \quad Ṽ^TṼ = I_k. \qquad (3.8)

If 2k > n, we caution the reader that the facts γ_i = cos θ_{M^{-1}}^{(i)}(𝓤, M𝓥) = 1 for n − k + 1 ≤ i ≤ k, which follow from Lemma 2.2, are used to prove Ṽ^TṼ = I_k. Notice that M^{−1} = ΨΨ^T, 𝓡(Ũ) = 𝓡([P_{11}, P_{12}]^T) and P_{12}P_{12}^T = I_k − P_{11}P_{11}^T. Thus, by (2.3), we have

\cosΘ_{M^{-1}}(𝓤, Φ_1) = \cosΘ(Ũ, I_{(:,1:k)}) = \cosΘ\!\left(\begin{bmatrix} P_{11}^T \\ P_{12}^T \end{bmatrix}, \begin{bmatrix} I_k \\ 0 \end{bmatrix}\right) = σ(P_{11}), \qquad (3.9)
\sin^2Θ_{M^{-1}}(𝓤, Φ_1) = e − σ^{↑}(P_{11}P_{11}^T) = λ(I_k − P_{11}P_{11}^T) = λ(P_{12}P_{12}^T), \qquad (3.10)

where e = [1, …, 1]^T ∈ ℝ^k.

By Lemma 3.1, K = ΨΛ^2Ψ^T, M = ΦΦ^T, μ^{∘2} = λ((U^T K U)(V^T M V)) and λ^{∘2} = λ(Λ_1^2). Therefore, by λ ≤ μ, known from [3, Theorem 4.1], we have

0 ≤ μ^{∘2} − λ^{∘2} = λ\bigl((U^T K U)(V^T M V)\bigr) − λ(Λ_1^2)
= λ\bigl(Ũ^TΛ^2Ũ\,Ṽ^TṼ\bigr) − λ(Λ_1^2)
= λ\bigl(Ũ^TΛ^2Ũ\bigr) − λ(Λ_1^2) \qquad (by (3.8))
= λ\bigl(Ũ_1^TΛ_1^2Ũ_1 + Ũ_2^TΛ_2^2Ũ_2\bigr) − λ(Λ_1^2)
= λ\bigl(Ũ_1^TΛ_1^2Ũ_1 + Ũ_2^TΛ_2^2Ũ_2\bigr) − λ(Ũ_1^TΛ_1^2Ũ_1) + λ(Ũ_1^TΛ_1^2Ũ_1) − λ(Λ_1^2)
≺_w λ(Ũ_2^TΛ_2^2Ũ_2) + λ(Ũ_1^TΛ_1^2Ũ_1) − λ(Λ_1^2). \qquad (3.11)

The last line holds because of Lemmas 2.4(b) and 2.7. Now, we bound separately the two terms in the sum on the right-hand side of the last line. We start with the first term. By (3.10), and Lemmas 2.4(c), 2.5 and 2.8, we get

λ(Ũ_2^TΛ_2^2Ũ_2) = λ(Λ_2^2Ũ_2Ũ_2^T)_{(1:k)} ≺_w σ(Λ_2^2Ũ_2Ũ_2^T)_{(1:k)} ≺_w σ(Λ_2^2)_{(1:k)} ∘ σ(Ũ_2Ũ_2^T)_{(1:k)}
= σ(Λ_2^2)_{(1:k)} ∘ σ(P_{12}^TΓ^{−1}Γ^{−T}P_{12})_{(1:k)}
≺_w σ(Λ_2^2)_{(1:k)} ∘ σ(P_{12}^T)_{(1:k)} ∘ σ(Γ^{−1}Γ^{−T}) ∘ σ(P_{12})_{(1:k)}
= σ(Λ_2^2)_{(1:k)} ∘ σ(Γ^{−2}) ∘ \sin^2Θ_{M^{-1}}(𝓤, Φ_1). \qquad (3.12)

Considering the second term, by (3.8), and Lemmas 2.4 and 2.7, we have

λ(Ũ_1^TΛ_1^2Ũ_1) − λ(Λ_1^2) = λ(Λ_1Ũ_1Ũ_1^TΛ_1) − λ(Λ_1^2)
= λ(Λ_1P_{11}^TΓ^{−1}Γ^{−T}P_{11}Λ_1) − λ(Λ_1^2)
≤ \frac{1}{γ_1^2}λ(Λ_1P_{11}^TP_{11}Λ_1) − λ(Λ_1^2)
= \frac{1}{γ_1^2}λ(Λ_1P_{11}^TP_{11}Λ_1) − \frac{1}{γ_1^2}λ(Λ_1^2) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2)
≺ \frac{1}{γ_1^2}λ\bigl(Λ_1(P_{11}^TP_{11} − I_k)Λ_1\bigr) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2)
= −\frac{1}{γ_1^2}λ^{↑}\bigl((I_k − P_{11}^TP_{11})^{1/2}Λ_1^2(I_k − P_{11}^TP_{11})^{1/2}\bigr) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2)
≤ −\frac{λ_1^2}{γ_1^2}λ^{↑}(I_k − P_{11}^TP_{11}) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2)
= −\frac{λ_1^2}{γ_1^2}\bigl(e − λ(P_{11}^TP_{11})\bigr) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2)
= −\frac{λ_1^2}{γ_1^2}\bigl(e − \cos^2Θ_{M^{-1}}(𝓤, Φ_1)\bigr) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2)
= −\frac{λ_1^2}{γ_1^2}\sin^2Θ_{M^{-1}}(𝓤, Φ_1) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2). \qquad (3.13)

At last, (3.11), (3.12) and (3.13) together give

μ^{∘2} − λ^{∘2} = λ\bigl((U^T K U)(V^T M V)\bigr) − λ(Λ_1^2)
≺_w σ(Λ_2^2)_{(1:k)} ∘ σ(Γ^{−2}) ∘ \sin^2Θ_{M^{-1}}(𝓤, Φ_1) − \frac{λ_1^2}{γ_1^2}\sin^2Θ_{M^{-1}}(𝓤, Φ_1) + \frac{1−γ_1^2}{γ_1^2}λ(Λ_1^2)
= \Bigl(σ(Λ_2^2)_{(1:k)} ∘ σ(Γ^{−2}) − \frac{λ_1^2}{γ_1^2}e\Bigr) ∘ \sin^2Θ_{M^{-1}}(𝓤, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)\,λ(Λ_1^2), \qquad (3.14)

which leads to (3.6).□

Remark 3.1

Listed below are some comments for Theorem 3.1.

  1. Recall (3.6) and (3.14). It is noted that, if 2k > n, σ(Λ_2^2)_{(1:k)} = [λ_n^2, …, λ_{k+1}^2, \underbrace{0, …, 0}_{2k−n}]^T in (3.14) is not equal to the vector [λ_n^2, …, λ_{k+1}^2, \underbrace{λ_k^2, …, λ_{n−k+1}^2}_{2k−n}]^T appearing in δ of (3.6). In fact, in such a case, the last 2k − n entries of Θ_{M^{-1}}(𝓤, Φ_1) are all zero by Lemma 2.2. Thus, the weak majorization relationship (3.6) still holds for the case 2k > n.

  2. The weak majorization bound (3.6) directly implies that, for j = 1, …, k,

    ∑_{i=1}^{j}(μ_i^2 − λ_i^2) ≤ ∑_{i=1}^{j}\left(\frac{λ_{n−i+1}^2}{\cos^2θ_{M^{-1}}^{(i)}(𝓤, M𝓥)} − \frac{λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)}\right)\sin^2θ_{M^{-1}}^{(i)}(𝓤, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)∑_{i=1}^{j}λ_i^2
    ≤ \frac{λ_n^2 − λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)}∑_{i=1}^{j}\sin^2θ_{M^{-1}}^{(i)}(𝓤, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)∑_{i=1}^{j}λ_i^2. \qquad (3.15)

    Two inequalities implied by (3.15) that are often sufficient for numerical purposes are (3.4) and

    \max_{1≤i≤k}(μ_i^2 − λ_i^2) ≤ \frac{λ_n^2 − λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)}\sin^2θ_{M^{-1}}^{(1)}(𝓤, Φ_1) + λ_1^2\tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥).
  3. In some numerical methods for LREP, such as the first Lanczos method [13, 27] and the Chebyshev-Davidson method [12], the subspace 𝓤 is chosen such that 𝓤 = M𝓥, which leads to cos θ_{M^{-1}}^{(i)}(𝓤, M𝓥) = 1 for 1 ≤ i ≤ k and tan θ_{M^{-1}}^{(1)}(𝓤, M𝓥) = 0. Then, the majorization relationship (3.6) reduces to

    0 ≤ μ^{∘2} − λ^{∘2} ≺_w [(λ_n^2 − λ_1^2), …, (λ_{n−k+1}^2 − λ_1^2)]^T ∘ \sin^2Θ_{M^{-1}}(𝓤, Φ_1).
  4. Suppose that K is definite. Then, a simple modification by exchanging the roles of K and M in the above proof leads to the following majorization relationship

    0 ≤ μ^{∘2} − λ^{∘2} ≺_w δ_K ∘ \sin^2Θ_{K^{-1}}(𝓥, Ψ̂_1) + \tan^2θ_{K^{-1}}^{(1)}(𝓥, K𝓤)\,λ^{∘2}, \qquad (3.16)

    where δ_K = \left[\frac{λ_n^2}{\cos^2θ_{K^{-1}}^{(1)}(𝓥, K𝓤)} − \frac{λ_1^2}{\cos^2θ_{K^{-1}}^{(1)}(𝓥, K𝓤)},\ …,\ \frac{λ_{n−k+1}^2}{\cos^2θ_{K^{-1}}^{(k)}(𝓥, K𝓤)} − \frac{λ_1^2}{\cos^2θ_{K^{-1}}^{(1)}(𝓥, K𝓤)}\right]^T and Ψ̂_1 = Ψ̂_{(:,1:k)} comes from the new decompositions M = Φ̂Λ^2Φ̂^T and K = Ψ̂Ψ̂^T used instead of (3.1). In particular, if M is also definite, there is no need to distinguish Ψ from Ψ̂ in (3.16) because 𝓡(Ψ) = 𝓡(Ψ̂).

As stated in [21, Example 3.1], if Θ_{M^{-1}}(𝓤, Φ_1) = 0, then μ^{∘2} − λ^{∘2} = 0. However, Θ_{M^{-1}}(𝓤, M𝓥) is not necessarily 0, because of the possible semi-definiteness of K. This phenomenon is overcome under the assumption that K and M are both definite, and in the following theorem we establish the associated majorization upper bounds for sinΘ_{M^{-1}}(𝓤, M𝓥) and sinΘ_{K^{-1}}(𝓥, K𝓤), respectively, in terms of sinΘ_{M^{-1}}(𝓤, Φ_1) and sinΘ_{K^{-1}}(𝓥, Ψ_1).

Theorem 3.2

Let {𝓤, 𝓥} satisfy (3.3) with ℓ = k, and suppose that both K and M are definite. We have

sinΘ_{M^{-1}}(𝓤, M𝓥) ≺_w sinΘ_{M^{-1}}(𝓤, Φ_1) + κ ∘ sinΘ_{K^{-1}}(𝓥, Ψ_1), \qquad (3.17)
sinΘ_{K^{-1}}(𝓥, K𝓤) ≺_w sinΘ_{K^{-1}}(𝓥, Ψ_1) + κ ∘ sinΘ_{M^{-1}}(𝓤, Φ_1), \qquad (3.18)

where κ = \left[\frac{λ_n}{λ_1}, …, \frac{λ_{n−k+1}}{λ_k}\right]^T.

Proof

Use the same notation and the same basis matrices U and V for 𝓤 and 𝓥, respectively, as in the proof of Theorem 3.1. Partition Ũ, P and Λ as in (3.7), and partition Ṽ as

Ṽ = \begin{bmatrix} Ṽ_1 \\ Ṽ_2 \end{bmatrix}, \qquad Ṽ_1 ∈ ℝ^{k×k},\ Ṽ_2 ∈ ℝ^{(n−k)×k}.

Thus, by (2.3) and (3.9),

sinΘ_{M^{-1}}(𝓤, Φ_1) = sinΘ(Ũ, I_{(:,1:k)}) = sinΘ\!\left(\begin{bmatrix} P_{11}^T \\ P_{12}^T \end{bmatrix}, \begin{bmatrix} I_k \\ 0 \end{bmatrix}\right) = σ(P_{12})_{(1:k)}, \qquad (3.19)
sinΘ_{K^{-1}}(𝓥, Ψ_1) = sinΘ(Λ^{−1}Ṽ, I_{(:,1:k)}) = σ\bigl(Λ_2^{−1}Ṽ_2(Ṽ^TΛ^{−2}Ṽ)^{−1/2}\bigr)_{(1:k)}. \qquad (3.20)

The first equality in (3.20) holds because 𝓡(I(:,1:k)) = 𝓡(Λ−1I(:,1:k)). By Lemmas 2.3 and 2.6,

sinΘ_{M^{-1}}(𝓤, M𝓥) = sinΘ(Ũ, Ṽ) = λ(Π_{𝓡(Ũ)} − Π_{𝓡(Ṽ)})_{(1:k)}
= λ\bigl(Π_{𝓡(Ũ)} − I_{(:,1:k)}I_{(:,1:k)}^T + I_{(:,1:k)}I_{(:,1:k)}^T − Π_{𝓡(Ṽ)}\bigr)_{(1:k)}
≺_w λ\bigl(Π_{𝓡(Ũ)} − I_{(:,1:k)}I_{(:,1:k)}^T\bigr)_{(1:k)} + λ\bigl(I_{(:,1:k)}I_{(:,1:k)}^T − Π_{𝓡(Ṽ)}\bigr)_{(1:k)}
= sinΘ(Ũ, I_{(:,1:k)}) + sinΘ(I_{(:,1:k)}, Ṽ). \qquad (3.21)

Now, we need to relate sinΘ(I_{(:,1:k)}, Ṽ) to sinΘ(Λ^{−1}Ṽ, I_{(:,1:k)}). It follows from Ṽ^TṼ = I_k and Lemma 2.8 that

sinΘ(I_{(:,1:k)}, Ṽ) = σ(Ṽ_2)_{(1:k)}
= σ\bigl(Λ_2\,Λ_2^{−1}Ṽ_2(Ṽ^TΛ^{−2}Ṽ)^{−1/2}\,(Ṽ^TΛ^{−2}Ṽ)^{1/2}\bigr)_{(1:k)}
≺_w σ(Λ_2)_{(1:k)} ∘ σ\bigl(Λ_2^{−1}Ṽ_2(Ṽ^TΛ^{−2}Ṽ)^{−1/2}\bigr)_{(1:k)} ∘ σ\bigl((Ṽ^TΛ^{−2}Ṽ)^{1/2}\bigr)
≤ σ(Λ_2)_{(1:k)} ∘ σ\bigl(Λ_2^{−1}Ṽ_2(Ṽ^TΛ^{−2}Ṽ)^{−1/2}\bigr)_{(1:k)} ∘ σ(Λ_1^{−1})_{(1:k)}
= κ ∘ sinΘ(Λ^{−1}Ṽ, I_{(:,1:k)}). \qquad (3.22)

Then, (3.19), (3.20), (3.21) and (3.22) together yield (3.17). Similarly, to prove (3.18), we consider

sinΘ_{K^{-1}}(𝓥, K𝓤) = sinΘ(Λ^{−1}Φ^T V, Λ^{−1}Φ^T K U) = sinΘ(Λ^{−1}Ṽ, ΛŨ)
= λ(Π_{𝓡(Λ^{−1}Ṽ)} − Π_{𝓡(ΛŨ)})_{(1:k)}
= λ\bigl(Π_{𝓡(Λ^{−1}Ṽ)} − I_{(:,1:k)}I_{(:,1:k)}^T + I_{(:,1:k)}I_{(:,1:k)}^T − Π_{𝓡(ΛŨ)}\bigr)_{(1:k)}
≺_w λ\bigl(Π_{𝓡(Λ^{−1}Ṽ)} − I_{(:,1:k)}I_{(:,1:k)}^T\bigr)_{(1:k)} + λ\bigl(I_{(:,1:k)}I_{(:,1:k)}^T − Π_{𝓡(ΛŨ)}\bigr)_{(1:k)}
= sinΘ(Λ^{−1}Ṽ, I_{(:,1:k)}) + sinΘ(I_{(:,1:k)}, ΛŨ), \qquad (3.23)

and clarify the majorization relationship between sinΘ(I_{(:,1:k)}, ΛŨ) and sinΘ(Ũ, I_{(:,1:k)}), i.e.,

sinΘ(I_{(:,1:k)}, ΛŨ) = sinΘ(I_{(:,1:k)}, ΛP_1) = σ\bigl(Λ_2P_{12}^T(P_1^TΛ^2P_1)^{−1/2}\bigr)_{(1:k)}
≺_w σ(Λ_2)_{(1:k)} ∘ σ(P_{12}^T)_{(1:k)} ∘ σ\bigl((P_1^TΛ^2P_1)^{−1/2}\bigr)
≤ σ(Λ_2)_{(1:k)} ∘ σ(P_{12})_{(1:k)} ∘ σ(Λ_1^{−1})_{(1:k)}
= κ ∘ sinΘ_{M^{-1}}(𝓤, Φ_1), \qquad (3.24)

where P1 = [P11, P12]T. At last, (3.18) is obtained by combining (3.19), (3.20), (3.23) and (3.24).□

Theorems 3.1 and 3.2 are established under the assumption ℓ = k. However, the case ℓ > k is more common in practical eigenvalue computations for LREP. For example, in the first Lanczos type method for LREP, the subspaces 𝓤 and 𝓥 are the Krylov subspaces generated by initial vectors v_0 ∈ ℝn and u_0 = Mv_0, i.e., 𝓤 = 𝓚_ℓ(MK, u_0) and 𝓥 = 𝓚_ℓ(KM, v_0), and usually the pair of Krylov subspaces {𝓤, 𝓥} as a whole is not close to any pair of deflating subspaces; more likely, it contains a subspace pair of lower dimension that is a good approximation to {𝓡(Φ_1), 𝓡(Ψ_1)}. Thus, it is natural to generalize Theorems 3.1 and 3.2 to the case ℓ > k. This gives Theorems 3.3 and 3.4 below. Comments similar to those in Remark 3.1 are also valid for Theorem 3.3.

Theorem 3.3

Let {𝓤, 𝓥} satisfying (3.3) contain a pair of k-dimensional subspaces approximating {𝓡(Φ1), 𝓡(Ψ1)}. Assume that M is definite. Using the notations of Theorem 3.1, we have

0 ≤ μ^{∘2} − λ^{∘2} ≺_w \frac{ω}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)} ∘ \sin^2Θ_{M^{-1}}(𝓤, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)\,λ^{∘2}, \qquad (3.25)

where ω = [(λ_n^2 − λ_1^2), …, (λ_{n−k+1}^2 − λ_1^2)]^T.

Proof

By Lemma 2.1(a), we choose k-dimensional subspaces 𝓤1 ⊂ 𝓤 satisfying

Θ_{M^{-1}}(𝓤_1, Φ_1) = Θ_{M^{-1}}(𝓤, Φ_1), \qquad (3.26)

and 𝓥1 ⊂ 𝓥 such that ΘM−1(𝓤1,M𝓥1) = ΘM−1(𝓤1,M𝓥). Since θ1(𝓤, 𝓥) < π/2, by Lemma 2.1(b), we have

Θ_{M^{-1}}(𝓤_1, M𝓥_1) = Θ_{M^{-1}}(𝓤_1, M𝓥) ≤ Θ_{M^{-1}}(𝓤, M𝓥)_{(1:k)} < \frac{π}{2}. \qquad (3.27)

Let U_1, V_1 ∈ ℝ^{n×k} be basis matrices of 𝓤_1 and 𝓥_1, respectively. Then, (3.27) implies that U_1^T V_1 is nonsingular. We consider the matrix

H̃_{SR} = \begin{bmatrix} 0 & K̃_{SR} \\ M̃_{SR} & 0 \end{bmatrix},

where K̃_{SR} = (V_1^T U_1)^{−T} U_1^T K U_1 (V_1^T U_1)^{−1} and M̃_{SR} = V_1^T M V_1. Since 𝓤_1 ⊂ 𝓤 and 𝓥_1 ⊂ 𝓥, we have λ(K̃_{SR}M̃_{SR}) ≥ μ^{∘2} by [3, Theorem 4.1]. By Theorem 3.1,

0 ≤ μ^{∘2} − λ^{∘2} ≤ λ(K̃_{SR}M̃_{SR}) − λ^{∘2} ≺_w \frac{ω}{\cos^2θ_{M^{-1}}^{(1)}(𝓤_1, M𝓥_1)} ∘ \sin^2Θ_{M^{-1}}(𝓤_1, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤_1, M𝓥_1)\,λ^{∘2}. \qquad (3.28)

At last, combine (3.26), (3.27) and (3.28) to give (3.25).□

Theorem 3.4

Let {𝓤, 𝓥} satisfy (3.3), and both K and M be definite. Using the notations of Theorem 3.2, we have

sinΘ_{M^{-1}}(𝓤, M𝓥)_{(ℓ−k+1:ℓ)} ≺_w sinΘ_{M^{-1}}(𝓤, Φ_1) + κ ∘ sinΘ_{K^{-1}}(𝓥, Ψ_1), \qquad (3.29)
sinΘ_{K^{-1}}(𝓥, K𝓤)_{(ℓ−k+1:ℓ)} ≺_w sinΘ_{K^{-1}}(𝓥, Ψ_1) + κ ∘ sinΘ_{M^{-1}}(𝓤, Φ_1). \qquad (3.30)

Proof

Similarly to the proof of Theorem 3.3, let k-dimensional subspaces 𝓤2 ⊂ 𝓤 and 𝓥2 ⊂ 𝓥 be chosen such that

Θ_{M^{-1}}(𝓤_2, Φ_1) = Θ_{M^{-1}}(𝓤, Φ_1) \quad and \quad Θ_{K^{-1}}(𝓥_2, Ψ_1) = Θ_{K^{-1}}(𝓥, Ψ_1).

Based on the results in Lemma 2.1 and Theorem 3.2, we have

sinΘ_{M^{-1}}(𝓤, M𝓥)_{(ℓ−k+1:ℓ)} ≤ sinΘ_{M^{-1}}(𝓤_2, M𝓥) ≤ sinΘ_{M^{-1}}(𝓤_2, M𝓥_2)
≺_w sinΘ_{M^{-1}}(𝓤_2, Φ_1) + κ ∘ sinΘ_{K^{-1}}(𝓥_2, Ψ_1)
= sinΘ_{M^{-1}}(𝓤, Φ_1) + κ ∘ sinΘ_{K^{-1}}(𝓥, Ψ_1),

which gives (3.29). Similarly we can prove (3.30).□

Theorems 3.2 and 3.4 can be regarded as extensions of [21, Lemma 3.3], in which κ is the scalar λ_n/λ_1. Compared with the results of Theorem 3.2, one may argue that an unsatisfactory part of Theorem 3.4 is that the majorization upper bounds are for the k smallest components of the vectors sinΘ_{M^{-1}}(𝓤, M𝓥) and sinΘ_{K^{-1}}(𝓥, K𝓤), not for the k largest components. In fact, if ℓ > k, sinΘ_{M^{-1}}(𝓤, Φ_1) = sinΘ_{K^{-1}}(𝓥, Ψ_1) = 0 fails to yield sinΘ_{M^{-1}}(𝓤, M𝓥)_{(1:k)} = 0 or sinΘ_{K^{-1}}(𝓥, K𝓤)_{(1:k)} = 0, even if both K and M are definite. For example, we consider an LREP with K = M^{−1} = ΨΨ^T, and let 𝓤 and 𝓥 be spanned by the columns of the M^{−1}-orthonormal and K^{−1}-orthonormal basis matrices U = [Φ_1, u] ∈ ℝ^{n×ℓ} and V = [Ψ_1, v] ∈ ℝ^{n×ℓ}, respectively, where ℓ = k + 1,

u = Φ_{(:,k+1:k+2)}\begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad and \quad v = Ψ_{(:,k+1:k+2)}\begin{bmatrix} \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{bmatrix}.

Notice that σ(U^T V) = σ\!\left(\begin{bmatrix} I_k & 0 \\ 0 & \frac{\sqrt{2}}{2} \end{bmatrix}\right) satisfies the condition of Theorem 3.4, and

\cosΘ_{M^{-1}}(𝓤, Φ_1) = σ(Φ_1^T M^{−1} U) = σ([I_k, 0]).

Therefore, sinΘM−1(𝓤,Φ1) = 0. Similarly, we can check sinΘK−1(𝓥, Ψ1) = 0. However, in this example,

\cosΘ_{M^{-1}}(𝓤, M𝓥) = \cosΘ_{K^{-1}}(𝓥, K𝓤) = σ(U^T V) = σ\!\left(\begin{bmatrix} I_k & 0 \\ 0 & \frac{\sqrt{2}}{2} \end{bmatrix}\right),

which leads to \sin θ_{M^{-1}}^{(1)}(𝓤, M𝓥) = \sin θ_{K^{-1}}^{(1)}(𝓥, K𝓤) = \frac{\sqrt{2}}{2}.

4 Numerical examples

In this section, we present some numerical examples to illustrate the results in Theorems 3.1 and 3.3. In particular, we demonstrate that the terms associated with Θ_{M^{-1}}(𝓤, M𝓥) in the majorization upper bounds (3.6) and (3.25) cannot be neglected.
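
The quantities reported below can be produced with a few lines of NumPy. The routine sketched next carries out the structure-preserving projection described in Section 3; it is only our illustration of that procedure (using the simplest admissible factorization U^T V = W_1^T W_2, namely W_1 = I_ℓ and W_2 = U^T V, which is legitimate since the eigenvalues do not depend on the factorization), not the implementation used for the tables.

```python
import numpy as np

def lrep_rayleigh_ritz(K, M, U, V):
    """Approximate eigenvalues of the LREP H = [[0, K], [M, 0]] from the pair
    of subspaces spanned by the columns of U and V (U^T V must be nonsingular).

    Returns mu_1 <= ... <= mu_l, the positive eigenvalues of the projected H_SR."""
    W = U.T @ V                                   # take W_1 = I, W_2 = U^T V
    Winv = np.linalg.inv(W)
    K_SR = U.T @ K @ U                            # W_1^{-T} U^T K U W_1^{-1}
    M_SR = Winv.T @ (V.T @ M @ V) @ Winv          # W_2^{-T} V^T M V W_2^{-1}
    mu_sq = np.linalg.eigvals(K_SR @ M_SR).real   # eigenvalues of K_SR M_SR are mu_i^2
    return np.sort(np.sqrt(np.clip(mu_sq, 0.0, None)))
```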

Example 4.1

We first examine the majorization upper bounds of Theorem 3.1. For simplicity, we consider diagonal matrices K and M in this example. Take K = M = diag(λ_1, …, λ_n) with n = 100 and λ_i = i/n for 1 ≤ i ≤ n. In such a case, Φ = M^{1/2} and Ψ = M^{−1/2}. Let k = ℓ = 3 and

E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \frac{4}{n+4} & \sin 4 & \tan 4 \\ \vdots & \vdots & \vdots \\ \frac{n}{n+n} & \sin(n) & \tan(n) \end{bmatrix}.

Consider two pairs of approximate deflating subspaces {𝓤_1, 𝓥_1} and {𝓤_2, 𝓥_2} which are spanned by the basis matrices U_1 = Φ_1 + η × E with η = 10^{−5}, V_1 = U_1, and U_2 = U_1, V_2 = M^{−1}U_2, respectively. In this way, the pairs {𝓤_1, 𝓥_1} and {𝓤_2, 𝓥_2} satisfy condition (3.3). We are interested in bounding the differences between the eigenvalues λ_i for 1 ≤ i ≤ k and their approximations μ_i by (3.6). We measure the following errors, for 1 ≤ j ≤ k,

ε_{1,j} = ∑_{i=1}^{j}(μ_i^2 − λ_i^2), \qquad (4.1)
ε_{2,j} = ∑_{i=1}^{j}\left(\frac{λ_{n−i+1}^2}{\cos^2θ_{M^{-1}}^{(i)}(𝓤, M𝓥)} − \frac{λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)}\right)\sin^2θ_{M^{-1}}^{(i)}(𝓤, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)∑_{i=1}^{j}λ_i^2, \qquad (4.2)
ε_{3,j} = ∑_{i=1}^{j}(λ_{n−i+1}^2 − λ_1^2)\sin^2θ_{M^{-1}}^{(i)}(𝓤, Φ_1). \qquad (4.3)

Since 𝓤_2 = M𝓥_2, by Remark 3.1(c), ε_{2,j} = ε_{3,j} for 1 ≤ j ≤ k, and these values are upper bounds on the approximate eigenvalue errors ε_{1,j} associated with {𝓤_2, 𝓥_2}.
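
For instance, with the helpers canonical_angles, w_canonical_angles and lrep_rayleigh_ritz sketched earlier, a check in the spirit of this example can be coded as follows; here a generic random perturbation is used instead of the matrix E above, so the resulting numbers differ from those in Table 4.1.

```python
import numpy as np

n, k = 100, 3
lam = np.arange(1, n + 1) / n                   # lambda_i = i/n
K = M = np.diag(lam)                            # Example 4.1 setting: Lambda = M
Phi1 = np.diag(np.sqrt(lam))[:, :k]             # Phi = M^{1/2}, first k columns
Minv = np.diag(1.0 / lam)

rng = np.random.default_rng(0)
U1 = Phi1 + 1e-5 * rng.standard_normal((n, k))  # generic perturbation (not the E above)
V1 = U1.copy()

mu = lrep_rayleigh_ritz(K, M, U1, V1)
eps1 = np.cumsum(mu**2 - lam[:k]**2)            # epsilon_{1,j} as in (4.1)

th_phi = w_canonical_angles(U1, Phi1, Minv)     # Theta_{M^{-1}}(U, Phi_1)
th_mv = w_canonical_angles(U1, M @ V1, Minv)    # Theta_{M^{-1}}(U, M V)
delta = lam[::-1][:k]**2 / np.cos(th_mv)**2 - lam[0]**2 / np.cos(th_mv[0])**2
eps2 = np.cumsum(delta * np.sin(th_phi)**2) + np.tan(th_mv[0])**2 * np.cumsum(lam[:k]**2)

print(eps1)   # observed partial sums of mu_i^2 - lambda_i^2
print(eps2)   # upper bounds epsilon_{2,j} from (4.2); should dominate eps1
```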

Table 4.1 reports the errors ε_{2,j} and ε_{3,j} as defined in (4.2) and (4.3), together with the eigenvalue approximation errors ε_{1,j} as defined in (4.1), corresponding to {𝓤_1, 𝓥_1} and {𝓤_2, 𝓥_2}, respectively. It follows from Table 4.1 that the upper bounds provided by Theorem 3.1 are rather sharp in this example and are comparable to the observed errors ε_{1,j}. In particular, for the pair of approximate deflating subspaces {𝓤_1, 𝓥_1}, although ε_{3,2} > ε_{1,2} and ε_{3,3} > ε_{1,3}, the quantity ε_{3,1}, which omits the terms cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥) and tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥), is a little smaller than ε_{1,1} in this example. However, for the pair {𝓤_2, 𝓥_2}, the ε_{3,j} for 1 ≤ j ≤ k are always valid bounds for ε_{1,j}.

Table 4.1

Eigenvalue approximations ε1,j corresponding to the approximate deflating subspace pairs {𝓤1, 𝓥1} and {𝓤2, 𝓥2} together with their associated ε2,j and ε3,j in Example 4.1.

Results of {𝓤1, 𝓥1} Results of {𝓤2, 𝓥2}
j ε1,j ε2,j ε3,j ε1,j ε2,j = ε3,j
1 5.7693 × 10−5 2.9445 × 10−4 5.4394 × 10−5 2.6003 × 10−5 5.4394 × 10−5
2 5.9402 × 10−5 4.4015 × 10−4 9.9443 × 10−5 2.6995 × 10−5 9.9443 × 10−5
3 5.9407 × 10−5 4.6531 × 10−4 9.9459 × 10−5 2.6997 × 10−5 9.9459 × 10−5

Example 4.2

In this example, we use a pair of matrices K and M from the linear response analysis of the sodium dimer Na2 [12] with order n = 1862. Let k = 3 < ℓ = 6, U_1 = [Φ_1, E] + η × F, V_1 = U_1, U_2 = U_1 and V_2 = M^{−1}U_2, where η and E are as defined in Example 4.1, and

F = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \frac{6}{n} & \frac{6}{n+6} & \sin 6 & \cos 6 & \tan 6 & \cot 6 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac{n}{n} & \frac{n}{n+n} & \sin(n) & \cos(n) & \tan(n) & \cot(n) \end{bmatrix}.

As in Example 4.1, we again bound the eigenvalue approximation errors based on the approximate deflating subspace pairs {𝓤_1, 𝓥_1} and {𝓤_2, 𝓥_2}, respectively, where 𝓤_1 = 𝓡(U_1), 𝓥_1 = 𝓡(V_1), 𝓤_2 = 𝓡(U_2) and 𝓥_2 = 𝓡(V_2). Since ℓ > k here, by Theorem 3.3, we compute ε_{1,j}, ε̃_{2,j} and ε_{3,j} for 1 ≤ j ≤ k in Table 4.2, where

ε̃_{2,j} = ∑_{i=1}^{j}\frac{λ_{n−i+1}^2 − λ_1^2}{\cos^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)}\sin^2θ_{M^{-1}}^{(i)}(𝓤, Φ_1) + \tan^2θ_{M^{-1}}^{(1)}(𝓤, M𝓥)∑_{i=1}^{j}λ_i^2,

and ε_{1,j}, ε_{3,j} are as defined in (4.1) and (4.3), respectively. Table 4.2 suggests that the bounds ε̃_{2,j} for 1 ≤ j ≤ k are also very sharp in the case ℓ > k, while the ε_{3,j} for 1 ≤ j ≤ k in the numerical results of {𝓤_1, 𝓥_1} severely underestimate ε_{1,j} in this example.

Table 4.2

Approximate eigenvalue errors ε1,j associated with the pairs {𝓤1, 𝓥1} and {𝓤2, 𝓥2} together with their corresponding ε̃2,j and ε3,j of Example 4.2.

Results of {𝓤1, 𝓥1} Results of {𝓤2, 𝓥2}
j ε1,j ε̃2,j (bound for ε1,j) ε3,j ε1,j ε̃2,j = ε3,j
1 0.7247 0.7799 1.0741 × 10−7 2.0972 × 10−9 1.0741 × 10−7
2 1.2544 1.5431 1.1451 × 10−7 3.0659 × 10−9 1.1451 × 10−7
3 1.5944 2.1888 1.1892 × 10−7 3.5445 × 10−9 1.1892 × 10−7

5 Conclusion

The pair of k-dimensional deflating subspaces {𝓡(Φ_1), 𝓡(Ψ_1)}, due to its ability to recover the eigenvalues of interest, plays a vital role in efficient numerical methods for LREP. Given a pair of ℓ-dimensional subspaces {𝓤, 𝓥} with ℓ ≥ k which approximates, or contains a pair of k-dimensional subspaces approximating, {𝓡(Φ_1), 𝓡(Ψ_1)}, Zhang, Xue and Li [21] established Rayleigh-Ritz error bounds for LREP on the differences between the approximate eigenvalues and the eigenvalues of interest in terms of the canonical angles between the exact and approximate pairs of deflating subspaces in the case ℓ = k. There are two major contributions in this paper that improve the existing results in [21]. One is the Rayleigh-Ritz majorization type upper bound for the approximate eigenvalue errors of LREP, i.e., Theorem 3.1. The other is the extension of Theorem 3.1 to Theorem 3.3 for the case ℓ > k. Numerical examples are presented to confirm the sharpness of these upper bounds, and to demonstrate the necessity of the terms involving Θ_{M^{-1}}(𝓤, M𝓥) in Theorems 3.1 and 3.3.

Finally, the generalized linear response eigenvalue problem

H z = \begin{bmatrix} 0 & K \\ M & 0 \end{bmatrix}\begin{bmatrix} y \\ x \end{bmatrix} = λ\begin{bmatrix} E_+ & 0 \\ 0 & E_- \end{bmatrix}\begin{bmatrix} y \\ x \end{bmatrix} = λE z,

where K and M are n × n symmetric positive semi-definite and one of them is definite, and E_± are n × n nonsingular matrices with E_+^T = E_−, can be equivalently transformed into the standard LREP by decomposing E_+ = CD^T [11, 28]. Consequently, Theorems 3.1, 3.2, 3.3 and 3.4 can be made to work for the generalized linear response eigenvalue problem by simple modifications: replacing Θ_{M^{-1}}(𝓤, M𝓥) by Θ_{M^{-1}}(E_−𝓤, M𝓥), Θ_{M^{-1}}(𝓤, Φ_1) by Θ_{M^{-1}}(E_−𝓤, E_−Φ_1), Θ_{K^{-1}}(𝓥, K𝓤) by Θ_{K^{-1}}(E_+𝓥, K𝓤) and Θ_{K^{-1}}(𝓥, Ψ_1) by Θ_{K^{-1}}(E_+𝓥, E_+Ψ_1). We omit the details.

Acknowledgement

The work of the first author is supported in part by National Natural Science Foundation of China NSFC-11601081 and the research fund for distinguished young scholars of Fujian Agriculture and Forestry University No. xjq201727. The work of the second author is supported in part by National Natural Science Foundation of China NSFC-11701225 and NSFC-11471122, the Fundamental Research Funds for the Central Universities No. JUSRP11719, and the Natural Science Foundation of Jiangsu Province No. BK20170173.

References

[1] Saad Y., Chelikowsky J.R., Shontz S.M., Numerical methods for electronic structure calculations of materials, SIAM Rev., 2010, 52(1), 3–54, doi:10.1137/060651653.

[2] Shao M., da Jornada F.H., Lin L., Yang C., Deslippe J., Louie S.G., A structure preserving Lanczos algorithm for computing the optical absorption spectrum, SIAM J. Matrix Anal. Appl., 2018, 39(2), 683–711, doi:10.2172/1398473.

[3] Bai Z., Li R.-C., Minimization principle for linear response eigenvalue problem, I: Theory, SIAM J. Matrix Anal. Appl., 2012, 33(4), 1075–1100, doi:10.1137/110838960.

[4] Li T., Li R.-C., Lin W.-W., A symmetric structure-preserving ΓQR algorithm for linear response eigenvalue problems, Linear Algebra Appl., 2017, 520, 191–214, doi:10.1016/j.laa.2017.01.005.

[5] Wang W.-G., Zhang L.-H., Li R.-C., Error bounds for approximate deflating subspaces for linear response eigenvalue problems, Linear Algebra Appl., 2017, 528, 273–289, doi:10.1016/j.laa.2016.08.023.

[6] Bai Z., Li R.-C., Minimization principle for linear response eigenvalue problem, II: Computation, SIAM J. Matrix Anal. Appl., 2013, 34(2), 392–416, doi:10.1137/110838972.

[7] Rocca D., Lu D., Galli G., Ab initio calculations of optical absorption spectra: solution of the Bethe-Salpeter equation within density matrix perturbation theory, J. Chem. Phys., 2010, 133(16), 164109, doi:10.1063/1.3494540.

[8] Shao M., Felipe H., Yang C., Deslippe J., Louie S.G., Structure preserving parallel algorithms for solving the Bethe-Salpeter eigenvalue problem, Linear Algebra Appl., 2016, 488, 148–167, doi:10.1016/j.laa.2015.09.036.

[9] Vecharynski E., Brabec J., Shao M., Govind N., Yang C., Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory, Comput. Phys. Commun., 2017, 221, 42–52, doi:10.1016/j.cpc.2017.07.017.

[10] Zhong H.-X., Xu H., Weighted Golub-Kahan-Lanczos bidiagonalization algorithms, Electron. Trans. Numer. Anal., 2017, 47, 153–178, doi:10.1553/etna_vol47s153.

[11] Bai Z., Li R.-C., Lin W.-W., Linear response eigenvalue problem solved by extended locally optimal preconditioned conjugate gradient methods, Sci. China Math., 2016, 59(8), 1–18, doi:10.1007/s11425-016-0297-1.

[12] Teng Z., Zhou Y., Li R.-C., A block Chebyshev-Davidson method for linear response eigenvalue problems, Adv. Comput. Math., 2016, 42(5), 1103–1128, doi:10.1007/s10444-016-9455-2.

[13] Teng Z., Li R.-C., Convergence analysis of Lanczos-type methods for the linear response eigenvalue problem, J. Comput. Appl. Math., 2013, 247, 17–33, doi:10.1016/j.cam.2013.01.003.

[14] Tsiper E.V., A classical mechanics technique for quantum linear response, J. Phys. B: At. Mol. Opt. Phys., 2001, 34(12), L401–L407, doi:10.1088/0953-4075/34/12/102.

[15] Argentati M.E., Knyazev A.V., Paige C.C., Panayotov I., Bounds on changes in Ritz values for a perturbed invariant subspace of a Hermitian matrix, SIAM J. Matrix Anal. Appl., 2008, 30(2), 548–559, doi:10.1137/070684628.

[16] Cao Z.-H., Xie J.-J., Li R.-C., A sharp version of Kahan's theorem on clustered eigenvalues, Linear Algebra Appl., 1996, 245, 147–155, doi:10.1016/0024-3795(94)00226-6.

[17] Knyazev A.V., Argentati M.E., Rayleigh-Ritz majorization error bounds with applications to FEM, SIAM J. Matrix Anal. Appl., 2010, 31(3), 1521–1537, doi:10.1137/08072574X.

[18] Li C.-K., Li R.-C., A note on eigenvalues of perturbed Hermitian matrices, Linear Algebra Appl., 2005, 395, 183–190, doi:10.1016/j.laa.2004.08.026.

[19] Ovtchinnikov E., Cluster robust error estimates for the Rayleigh-Ritz approximation II: Estimates for eigenvalues, Linear Algebra Appl., 2006, 415(1), 188–209, doi:10.1016/j.laa.2005.06.041.

[20] Zhang L.-H., Lin W.-W., Li R.-C., Backward perturbation analysis and residual-based error bounds for the linear response eigenvalue problem, BIT Numer. Math., 2015, 55(3), 869–896, doi:10.1007/s10543-014-0519-8.

[21] Zhang L.-H., Xue J., Li R.-C., Rayleigh-Ritz approximation for the linear response eigenvalue problem, SIAM J. Matrix Anal. Appl., 2014, 35(2), 765–782, doi:10.1137/130946563.

[22] Li R.-C., Zhang L.-H., Convergence of the block Lanczos method for eigenvalue clusters, Numer. Math., 2015, 131(1), 83–113, doi:10.1007/s00211-014-0681-6.

[23] Teng Z., Zhang L., Li R.-C., Cluster-robust accuracy bounds for Ritz subspaces, Linear Algebra Appl., 2015, 480, 11–26, doi:10.1016/j.laa.2015.04.016.

[24] Knyazev A.V., Argentati M.E., Principal angles between subspaces in an A-based scalar product: algorithms and perturbation estimates, SIAM J. Sci. Comput., 2002, 23(6), 2008–2040, doi:10.1137/S1064827500377332.

[25] Bhatia R., Matrix Analysis, Graduate Texts in Mathematics, vol. 169, Springer, New York, 1996, doi:10.1007/978-1-4612-0653-8.

[26] Bapat R.B., Majorization and singular values II, SIAM J. Matrix Anal. Appl., 1989, 10(4), 429–434, doi:10.1137/0610030.

[27] Teng Z., Zhang L.-H., A block Lanczos method for the linear response eigenvalue problem, Electron. Trans. Numer. Anal., 2017, 46, 505–523.

[28] Bai Z., Li R.-C., Minimization principles and computation for the generalized linear response eigenvalue problem, BIT Numer. Math., 2014, 54(1), 31–54, doi:10.1007/s10543-014-0472-6.

Received: 2018-10-06
Accepted: 2019-05-05
Published Online: 2019-07-09

© 2019 Teng and Zhong, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 Public License.
