
Left and right inverse eigenpairs problem with a submatrix constraint for the generalized centrosymmetric matrix

  • Fan-Liang Li
Published/Copyright: June 18, 2020

Abstract

The left and right inverse eigenpairs problem is a special inverse eigenvalue problem. Many meaningful results on this problem are available, but few authors have considered it under a submatrix constraint. In this article, we consider the left and right inverse eigenpairs problem with a leading principal submatrix constraint for generalized centrosymmetric matrices, together with its optimal approximation problem. Combining the special properties of left and right eigenpairs with the generalized singular value decomposition, we derive the solvability conditions of the problem and its general solutions. Using the invariance of the Frobenius norm under orthogonal transformations, we obtain the unique solution of the optimal approximation problem. We present an algorithm and a numerical experiment that produce the optimal approximation solution. Our results extend and unify many results on the left and right inverse eigenpairs problem and on the inverse eigenvalue problem of centrosymmetric matrices with a submatrix constraint.

MSC 2010: 65F18; 15A24

1 Introduction

Throughout this article we use the following notation. Let $C^{n\times m}$ be the set of all $n \times m$ complex matrices, $R^{n\times m}$ the set of all $n \times m$ real matrices, $C^n = C^{n\times 1}$, $R^n = R^{n\times 1}$, and let $R$ denote the set of all real numbers and $OR^{n\times n}$ the set of all $n \times n$ orthogonal matrices. $R(A)$, $A^T$, $r(A)$, $\mathrm{tr}(A)$ and $A^{+}$ denote the column space, the transpose, the rank, the trace and the Moore–Penrose generalized inverse of a matrix $A$, respectively. $I_n$ is the identity matrix of order $n$. Let $e_i$ be the $i$th column of $I_n$, and set $J_n = (e_n, \ldots, e_1)$. For $A, B \in R^{n\times m}$, $\langle A, B\rangle = \mathrm{tr}(B^T A)$ denotes the inner product of the matrices $A$ and $B$. The induced matrix norm is the Frobenius norm, i.e. $\|A\| = \langle A, A\rangle^{1/2} = (\mathrm{tr}(A^T A))^{1/2}$, so that $R^{n\times m}$ is a Hilbert inner product space.

Generally, the left and right inverse eigenpairs problem is stated as follows: given partial left and right eigenpairs (eigenvalue and corresponding eigenvector) $(\gamma_j, y_j)$, $j = 1,\ldots,l$, and $(\lambda_i, x_i)$, $i = 1,\ldots,h$, and a special $n \times n$ matrix set $S$, find $A \in S$ such that

(1.1) $Ax_i = \lambda_i x_i, \quad i = 1,\ldots,h; \qquad y_j^T A = \gamma_j y_j^T, \quad j = 1,\ldots,l,$

where hn and ln. If X = (x 1,…,x h ), Λ = diag(λ 1,…,λ h ), Y = (y 1,…,y l ), Γ = diag(γ 1,…,γ l ), then (1.1) is equivalent to

(1.2) $AX = X\Lambda, \quad Y^T A = \Gamma Y^T.$

This problem, which arises mainly in the perturbation analysis of matrix eigenvalues and in recursive inverse eigenvalue problems, has practical applications in economics and scientific computation [1,2,3].

Many important results have been achieved on this problem for various matrix sets. Li et al. [4,5,6,7,8,9] solved the left and right inverse eigenpairs problems for skew-centrosymmetric matrices, generalized centrosymmetric matrices, κ-persymmetric matrices, symmetrizable matrices, orthogonal matrices and κ-Hermitian matrices by using the special properties of the eigenpairs of these matrices. Zhang and Xie [10], Ouyang [11], Liang and Dai [12] and Yin and Huang [13], respectively, solved the left and right inverse eigenvalue problems for real matrices, semipositive subdefinite matrices, generalized reflexive and anti-reflexive matrices and (R,S) symmetric matrices by exploiting the special structure of these matrices.

Arav et al. [2] and Loewy and Mehrmann [3] studied the recursive inverse eigenvalue problem, which arises in the Leontief economic model: given an eigenvalue $\lambda_i$ of $A_i$, where $A_i$ is the $i$th leading principal submatrix of $A$, together with a corresponding left eigenvector $y_i$ and right eigenvector $x_i$, construct a matrix $A \in C^{n\times n}$ such that

$A_i x_i = \lambda_i x_i, \quad y_i^T A_i = \lambda_i y_i^T, \quad i = 1,\ldots,n.$

This recursive inverse eigenvalue problem is a special case of the left and right inverse eigenvalue problem with the leading principal submatrix constraint. Few authors have considered the left and right inverse eigenpairs problem with a submatrix constraint. In this article, we will consider the left and right inverse eigenpairs problem with the leading principal submatrix constraint for the generalized centrosymmetric matrix, which has not been discussed.

Definition 1

Let κ be a fixed product of disjoint transpositions and let J be the associated permutation matrix. Let $A = (a_{ij}) \in R^{n\times n}$. If $a_{ij} = a_{\kappa(i)\kappa(j)}$ (or $a_{ij} = -a_{\kappa(i)\kappa(j)}$), then A is called a generalized centrosymmetric matrix (or a generalized centro-skewsymmetric matrix), and $GCSR^{n\times n}$ (or $GCSSR^{n\times n}$) denotes the set of all generalized centrosymmetric (or generalized centro-skewsymmetric) matrices.

From Definition 1, it is easy to derive the following conclusions.

  1. J T = J and J 2 = I n . Real matrices and centrosymmetric matrices are the special cases of generalized centrosymmetric matrices with κ(i) = i and κ(i) = ni + 1 or J = I n and J = J n , respectively.

  2. A ∈ GCSR n×n if and only if A = JAJ and A ∈ GCSSR n×n if and only if A = −JAJ.

  3. R n×n = GCSR n×n ⊕ GCSSR n×n , where the notation V 1V 2 stands for the orthogonal direct sum of linear subspaces V 1 and V 2.
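The two conclusions (2) and (3) translate directly into simple numerical checks. The following short sketch (my own illustration, not part of the paper; plain NumPy, with J the permutation matrix of a product of disjoint transpositions) verifies conclusion (2) and realizes the orthogonal splitting of conclusion (3):

```python
import numpy as np

def is_gcsr(A, J, tol=1e-12):
    """Conclusion (2): A is generalized centrosymmetric iff A = J A J."""
    return np.linalg.norm(A - J @ A @ J) < tol

def gcsr_gcssr_split(A, J):
    """Conclusion (3): A = A1 + A2 with A1 in GCSR and A2 in GCSSR."""
    A1 = 0.5 * (A + J @ A @ J)   # generalized centrosymmetric part
    A2 = 0.5 * (A - J @ A @ J)   # generalized centro-skewsymmetric part
    return A1, A2

# Classical case J = J_n, i.e. kappa(i) = n - i + 1:
n = 5
J = np.fliplr(np.eye(n))
A = np.random.rand(n, n)
A1, A2 = gcsr_gcssr_split(A, J)
assert is_gcsr(A1, J)
assert np.linalg.norm(A2 + J @ A2 @ J) < 1e-12   # A2 is generalized centro-skewsymmetric
assert abs(np.sum(A1 * A2)) < 1e-10              # <A1, A2> = 0: the direct sum is orthogonal
assert np.allclose(A, A1 + A2)
```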

Centrosymmetry, persymmetry and symmetry are three important symmetric structures of a square matrix and have profound applications in fields such as engineering and statistics [14,15,16]. There are many meaningful results on the inverse problem and the inverse eigenvalue problem of centrosymmetric matrices with a submatrix constraint. Peng et al. [17] and Bai [18] discussed the inverse problem and the inverse eigenvalue problem of centrosymmetric matrices with a principal submatrix constraint, respectively. Zhao et al. [19] studied least squares solutions to AX = B for symmetric centrosymmetric matrices under a central principal submatrix constraint and the corresponding optimal approximation. The matrix inverse problem (or inverse eigenvalue problem) with a submatrix constraint is also called the matrix extension problem. Since de Boor and Golub [20] first posed and studied the Jacobi matrix extension problem in 1978, many authors have studied matrix extension problems and a series of meaningful results have been obtained [17,18,19,21,22,23,24,25,26].

Let $(\lambda_i, x_i)$, $i = 1,\ldots,m$, be right eigenpairs of A and $(\mu_j, y_j)$, $j = 1,\ldots,h$, be left eigenpairs of A. Let $X = (x_1,\ldots,x_m) \in C^{n\times m}$, $\Lambda = \mathrm{diag}(\lambda_1,\ldots,\lambda_m) \in C^{m\times m}$, $Y = (y_1,\ldots,y_h) \in C^{n\times h}$, $\Gamma = \mathrm{diag}(\mu_1,\ldots,\mu_h) \in C^{h\times h}$. The problems studied in this article can be described as follows.

Problem I.

Given X = (x 1,…,x m ) ∈ C n×m , Y = (y 1,…,y h ) ∈ C n×h , Λ = diag(λ 1,…,λ m ) ∈ C m×m , Γ = diag(μ 1,…,μ h ) ∈ C h×h , A 0R p×p , mn, hn, pn, find A ∈ GCSR n×n such that

$AX = X\Lambda, \quad Y^T A = \Gamma Y^T, \quad A[1:p] = A_0,$

where A[1:p] denotes the p × p leading principal submatrix of A.

Problem II.

Given $A^* \in R^{n\times n}$, find $\hat{A} \in S_E$ such that

$\|A^* - \hat{A}\| = \min_{A \in S_E} \|A^* - A\|,$

where S E is the solution set of Problem I.

This article is organized as follows. In Section 2, we first study the special properties of the eigenpairs and the structure of generalized centrosymmetric matrices, and then provide the solvability conditions for Problem I and its general solutions. In Section 3, we first establish the existence and uniqueness theorem for Problem II and then present the unique approximation solution using the orthogonal invariance of the Frobenius norm; finally, we provide an algorithm to compute the unique approximation solution. Some conclusions are given in Section 4.

2 Solvability conditions of Problem I

Definition 2

Let $x \in C^n$. If $Jx = x$ (or $Jx = -x$), then x is called a generalized symmetric (or generalized skew-symmetric) vector. Denote the set of all generalized symmetric (or generalized skew-symmetric) vectors by $GC^n$ (or $GSC^n$).

Denote

$P_1 = \frac{1}{2}(I_n + J), \quad P_2 = \frac{1}{2}(I_n - J).$

Let $(u_1, u_2, \ldots, u_{n-r})$ and $(u_{n-r+1}, u_{n-r+2}, \ldots, u_n)$ be orthonormal bases for $R(P_1)$ and $R(P_2)$, respectively, and set $K_1 = (u_1, u_2, \ldots, u_{n-r})$, $K_2 = (u_{n-r+1}, u_{n-r+2}, \ldots, u_n)$ and $K = (K_1, K_2)$. Combining Definitions 1 and 2, it is easy to derive the following equalities.

(2.1) $P_1 = K_1 K_1^T, \quad P_2 = K_2 K_2^T.$

(2.2) $J = K_1 K_1^T - K_2 K_2^T = K \begin{pmatrix} I_{n-r} & 0 \\ 0 & -I_r \end{pmatrix} K^T.$
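As an aside, the bases $K_1$ and $K_2$ are easy to produce numerically. A minimal sketch (mine, not from the paper) that builds them from J and checks (2.1) and (2.2), assuming `scipy.linalg.orth` for the orthonormal bases of $R(P_1)$ and $R(P_2)$:

```python
import numpy as np
from scipy.linalg import orth

def split_basis(J):
    """Return K1, K2: orthonormal bases of R(P1), R(P2) for P1 = (I+J)/2, P2 = (I-J)/2."""
    n = J.shape[0]
    P1 = 0.5 * (np.eye(n) + J)
    P2 = 0.5 * (np.eye(n) - J)
    return orth(P1), orth(P2)        # n x (n-r) and n x r

n = 6
J = np.fliplr(np.eye(n))             # example: J = J_n
K1, K2 = split_basis(J)
P1, P2 = 0.5 * (np.eye(n) + J), 0.5 * (np.eye(n) - J)
assert np.allclose(K1 @ K1.T, P1) and np.allclose(K2 @ K2.T, P2)   # (2.1)
assert np.allclose(K1 @ K1.T - K2 @ K2.T, J)                        # (2.2)
```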

Combining conclusion (2) of Definition 1, (2.1) and (2.2), it is easy to derive the following lemma.

Lemma 1

AGCSR n×n if and only if

$A = K \begin{pmatrix} A_{11} & 0 \\ 0 & A_{22} \end{pmatrix} K^T,$

where $A_{11} = K_1^T A K_1 \in R^{(n-r)\times(n-r)}$ and $A_{22} = K_2^T A K_2 \in R^{r\times r}$.

If $J = J_n$ and $n = 2k$, then

$K = \frac{1}{\sqrt{2}} \begin{pmatrix} I_k & I_k \\ J_k & -J_k \end{pmatrix}.$

If $J = J_n$ and $n = 2k + 1$, then

$K = \frac{1}{\sqrt{2}} \begin{pmatrix} I_k & 0 & I_k \\ 0 & \sqrt{2} & 0 \\ J_k & 0 & -J_k \end{pmatrix}.$

Similarly, we have the following splitting of centrosymmetric matrices into smaller submatrices using K.

Lemma 2

[27] (1) If ACSR 2k×2k , then A can be written as

$A = \begin{pmatrix} B & C J_k \\ J_k C & J_k B J_k \end{pmatrix} = K \begin{pmatrix} B + C & 0 \\ 0 & B - C \end{pmatrix} K^T, \qquad B, C \in R^{k\times k}.$

(2) If ACSR (2k+1)×(2k+1) , then A can be written as

$A = \begin{pmatrix} B & p & C J_k \\ q^T & d & q^T J_k \\ J_k C & J_k p & J_k B J_k \end{pmatrix} = K \begin{pmatrix} B + C & \sqrt{2}\,p & 0 \\ \sqrt{2}\,q^T & d & 0 \\ 0 & 0 & B - C \end{pmatrix} K^T, \qquad B, C \in R^{k\times k}, \ p, q \in R^k, \ d \in R,$

where $CSR^{n\times n}$ denotes the set of all $n \times n$ centrosymmetric matrices and $k = \lfloor n/2 \rfloor$ is the largest integer not greater than $n/2$. In fact, Lemma 2 is a special case of Lemma 1 with $J = J_n$.
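For $J = J_n$ with $n = 2k$, the matrix K above and the splitting of Lemma 2(1) can be checked numerically; the short sketch below (an illustration of mine, not taken from the paper) builds a generic centrosymmetric A from B and C and confirms that $K^T A K$ is block diagonal with blocks $B + C$ and $B - C$:

```python
import numpy as np

k = 3
Ik, Jk = np.eye(k), np.fliplr(np.eye(k))
K = (1.0 / np.sqrt(2.0)) * np.block([[Ik, Ik], [Jk, -Jk]])      # the K of the text for n = 2k

B, C = np.random.rand(k, k), np.random.rand(k, k)
A = np.block([[B, C @ Jk], [Jk @ C, Jk @ B @ Jk]])              # a generic element of CSR^{2k x 2k}

D = K.T @ A @ K
assert np.allclose(D[:k, k:], 0) and np.allclose(D[k:, :k], 0)  # off-diagonal blocks vanish
assert np.allclose(D[:k, :k], B + C) and np.allclose(D[k:, k:], B - C)
```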

For a real matrix $A \in R^{n\times n}$, the complex right eigenpairs occur in conjugate pairs. That is, if $(a + b\sqrt{-1},\, x + \sqrt{-1}\,y)$ is a right eigenpair of A, then so is $(a - b\sqrt{-1},\, x - \sqrt{-1}\,y)$. This implies $Ax = ax - by$ and $Ay = bx + ay$, i.e.,

$A(x, y) = (x, y) \begin{pmatrix} a & b \\ -b & a \end{pmatrix}.$

Therefore, in Problem I, we can assume that XR n×m and

$\Lambda = \mathrm{diag}(\bar{\lambda}_1, \ldots, \bar{\lambda}_{\bar{m}}) \in R^{m\times m},$

where $\bar{\lambda}_i$, $i = 1, \ldots, \bar{m}$, are real numbers or $2 \times 2$ real matrices and $\bar{m} \le m$; $\bar{m} = m$ holds if and only if all the right eigenvalues of A are real. We can also prove that the complex left eigenpairs of A occur in conjugate pairs. Hence, in Problem I, we can also assume that $Y \in R^{n\times h}$ and

$\Gamma = \mathrm{diag}(\bar{\mu}_1, \ldots, \bar{\mu}_{\bar{h}}) \in R^{h\times h},$

where $\bar{\mu}_i$, $i = 1, \ldots, \bar{h}$, are real numbers or $2 \times 2$ real matrices and $\bar{h} \le h$; $\bar{h} = h$ holds if and only if all the left eigenvalues of A are real.
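A quick numerical illustration of this real packing (a sketch of mine, not from the paper): for a complex eigenpair $\lambda = a + bi$ with eigenvector $x + iy$ of a real matrix A, the pair $(x, y)$ satisfies the $2 \times 2$ real block relation used in Λ.

```python
import numpy as np

A = np.array([[0.0, -2.0],
              [1.0,  0.0]])                # eigenvalues are +/- i*sqrt(2)
w, V = np.linalg.eig(A)
lam, v = w[0], V[:, 0]
a, b = lam.real, lam.imag
x, y = v.real, v.imag

# A (x, y) = (x, y) [[a, b], [-b, a]] -- the real 2x2 block that replaces the conjugate pair
lhs = A @ np.column_stack([x, y])
rhs = np.column_stack([x, y]) @ np.array([[a, b], [-b, a]])
assert np.allclose(lhs, rhs)
```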

Let $A \in GCSR^{n\times n}$. If $Ax = \lambda x$, where λ is a number, $x \in C^n$ and $x \ne 0$, then we have

$JAJ\,Jx = \lambda Jx, \quad A\,Jx = \lambda Jx.$

Hence, we have $A(x \pm Jx) = \lambda(x \pm Jx)$. It is obvious that $x + Jx$ and $x - Jx$ are a generalized symmetric vector and a generalized skew-symmetric vector, respectively. If $(a + b\sqrt{-1},\, x + \sqrt{-1}\,y)$ is a right eigenpair of A, then we have

(2.3) $A(x, y) = (x, y) \begin{pmatrix} a & b \\ -b & a \end{pmatrix}.$

According to conclusion (2) of Definition 1, we have

(2.4) $AJ(x, y) = J(x, y) \begin{pmatrix} a & b \\ -b & a \end{pmatrix}.$

Combining (2.3) and (2.4) implies

$A[(x, y) \pm J(x, y)] = [(x, y) \pm J(x, y)] \begin{pmatrix} a & b \\ -b & a \end{pmatrix}.$

It is easy to see that the columns of $(x, y) + J(x, y)$ (or $(x, y) - J(x, y)$) are generalized symmetric vectors (or generalized skew-symmetric vectors). Hence, the right eigenvectors of a generalized centrosymmetric matrix can be expressed as generalized symmetric vectors or generalized skew-symmetric vectors. The left eigenvectors of a generalized centrosymmetric matrix clearly have the same property. According to the above analysis, in Problem I we may assume the following:

(2.5) $X = (X_1, X_2) \in R^{n\times m}$, $X_1 = JX_1 \in R^{n\times m_1}$, $X_2 = -JX_2 \in R^{n\times(m-m_1)}$, $Y = (Y_1, Y_2) \in R^{n\times h}$, $Y_1 = JY_1 \in R^{n\times h_1}$, $Y_2 = -JY_2 \in R^{n\times(h-h_1)}$,

$\Lambda = \begin{pmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{pmatrix}, \quad \Lambda_1 = \mathrm{diag}(\bar{\lambda}_1, \ldots, \bar{\lambda}_{\bar{m}_1}) \in R^{m_1\times m_1}, \quad \Lambda_2 = \mathrm{diag}(\bar{\lambda}_{\bar{m}_1+1}, \ldots, \bar{\lambda}_{\bar{m}}) \in R^{(m-m_1)\times(m-m_1)},$

$\Gamma = \begin{pmatrix} \Gamma_1 & 0 \\ 0 & \Gamma_2 \end{pmatrix}, \quad \Gamma_1 = \mathrm{diag}(\bar{\mu}_1, \ldots, \bar{\mu}_{\bar{h}_1}) \in R^{h_1\times h_1}, \quad \Gamma_2 = \mathrm{diag}(\bar{\mu}_{\bar{h}_1+1}, \ldots, \bar{\mu}_{\bar{h}}) \in R^{(h-h_1)\times(h-h_1)}.$

Lemma 3

[6] If X, Λ, Y, Γ are given by (2.5), then (AX = XΛ,Y T A = ΓY T ) has a solution in GCSR n×n if and only if

(2.6) $Y_1^T X_1 \Lambda_1 = \Gamma_1 Y_1^T X_1, \quad X_1\Lambda_1 = X_1\Lambda_1 X_1^{+} X_1, \quad Y_1\Gamma_1 = Y_1\Gamma_1 Y_1^{+} Y_1,$

(2.7) $Y_2^T X_2 \Lambda_2 = \Gamma_2 Y_2^T X_2, \quad X_2\Lambda_2 = X_2\Lambda_2 X_2^{+} X_2, \quad Y_2\Gamma_2 = Y_2\Gamma_2 Y_2^{+} Y_2.$

Moreover, its general solution can be expressed as

(2.8) $A = A_{10} + EFG, \quad \forall\, F \in GCSR^{n\times n},$

where

(2.9) $A_{10} = X_1\Lambda_1 X_1^{+} + (Y_1^T)^{+}\Gamma_1 Y_1^T(I_n - X_1X_1^{+}) + X_2\Lambda_2 X_2^{+} + (Y_2^T)^{+}\Gamma_2 Y_2^T(I_n - X_2X_2^{+}) \in GCSR^{n\times n},$

(2.10) $E = I_n - Y_1Y_1^{+} - Y_2Y_2^{+} \in GCSR^{n\times n}, \quad G = I_n - X_1X_1^{+} - X_2X_2^{+} \in GCSR^{n\times n}.$
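Lemma 3 is constructive: the conditions (2.6)–(2.7) and the quantities (2.9)–(2.10) involve only matrix products and Moore–Penrose inverses, so they can be evaluated directly. A hedged NumPy sketch (the helper names are mine) is:

```python
import numpy as np
from numpy.linalg import pinv

def solvable(X1, L1, Y1, Gam1, X2, L2, Y2, Gam2, tol=1e-10):
    """Test the solvability conditions (2.6) and (2.7) of Lemma 3."""
    return all([
        np.allclose(Y1.T @ X1 @ L1, Gam1 @ Y1.T @ X1, atol=tol),
        np.allclose(X1 @ L1, X1 @ L1 @ pinv(X1) @ X1, atol=tol),
        np.allclose(Y1 @ Gam1, Y1 @ Gam1 @ pinv(Y1) @ Y1, atol=tol),
        np.allclose(Y2.T @ X2 @ L2, Gam2 @ Y2.T @ X2, atol=tol),
        np.allclose(X2 @ L2, X2 @ L2 @ pinv(X2) @ X2, atol=tol),
        np.allclose(Y2 @ Gam2, Y2 @ Gam2 @ pinv(Y2) @ Y2, atol=tol),
    ])

def a10_E_G(X1, L1, Y1, Gam1, X2, L2, Y2, Gam2, n):
    """Form A10, E, G of (2.9)-(2.10)."""
    I = np.eye(n)
    A10 = (X1 @ L1 @ pinv(X1) + pinv(Y1.T) @ Gam1 @ Y1.T @ (I - X1 @ pinv(X1))
           + X2 @ L2 @ pinv(X2) + pinv(Y2.T) @ Gam2 @ Y2.T @ (I - X2 @ pinv(X2)))
    E = I - Y1 @ pinv(Y1) - Y2 @ pinv(Y2)
    G = I - X1 @ pinv(X1) - X2 @ pinv(X2)
    return A10, E, G
```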

Combining Lemmas 1 and 3, it is easy to prove that A in (2.8) can be expressed as

(2.11) $A = K \begin{pmatrix} A_{110} + E_1F_1G_1 & 0 \\ 0 & A_{210} + E_2F_2G_2 \end{pmatrix} K^T,$

where $A_{110}$, $A_{210}$ are the diagonal blocks of $A_{10}$, $E_1$, $E_2$ those of $E$, and $G_1$, $G_2$ those of $G$ in the splitting of Lemma 1, with $A_{110}, E_1, G_1 \in R^{(n-k)\times(n-k)}$, $A_{210}, E_2, G_2 \in R^{k\times k}$, and $F_1 \in R^{(n-k)\times(n-k)}$, $F_2 \in R^{k\times k}$ arbitrary.

Denote

(2.12) $(I_p, 0)K_1E_1 = \bar{E}_1, \quad (I_p, 0)K_2E_2 = \bar{E}_2, \quad G_1K_1^T(I_p, 0)^T = \bar{G}_1, \quad G_2K_2^T(I_p, 0)^T = \bar{G}_2, \quad A_0 - (I_p, 0)K_1A_{110}K_1^T(I_p, 0)^T - (I_p, 0)K_2A_{210}K_2^T(I_p, 0)^T = \bar{A}_0.$

Combining (2.11) and (2.12), A[1:p] = A 0 if and only if the following equation holds.

(2.13) $\bar{E}_1F_1\bar{G}_1 + \bar{E}_2F_2\bar{G}_2 = \bar{A}_0.$
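The reduction (2.12)–(2.13) is likewise a direct computation once $K_1$, $K_2$ and the blocks of (2.11) are available; a small sketch (assuming those quantities have already been computed, e.g. with the helpers above) is:

```python
import numpy as np

def reduced_data(K1, K2, E1, E2, G1, G2, A110, A210, A0, p):
    """Form E1bar, E2bar, G1bar, G2bar, A0bar of (2.12)."""
    n = K1.shape[0]
    Ip0 = np.hstack([np.eye(p), np.zeros((p, n - p))])      # (I_p, 0), a p x n matrix
    E1b = Ip0 @ K1 @ E1
    E2b = Ip0 @ K2 @ E2
    G1b = G1 @ K1.T @ Ip0.T
    G2b = G2 @ K2.T @ Ip0.T
    A0b = (A0 - Ip0 @ K1 @ A110 @ K1.T @ Ip0.T
              - Ip0 @ K2 @ A210 @ K2.T @ Ip0.T)
    return E1b, E2b, G1b, G2b, A0b
```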

Suppose that the generalized singular value decomposition (GSVD) of the matrix pair $(\bar{E}_1^T, \bar{E}_2^T)$ is as follows:

(2.14) $\bar{E}_1^T = Q_1\Sigma_1 S^T, \quad \bar{E}_2^T = Q_2\Sigma_2 S^T,$

where $Q_1 \in OR^{(n-k)\times(n-k)}$, $Q_2 \in OR^{k\times k}$, $S \in R^{p\times p}$ is nonsingular, and

(2.15) $\Sigma_1 = \begin{pmatrix} I_{r_1} & 0 & 0 & 0 \\ 0 & \Theta_1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad \Sigma_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \Theta_2 & 0 & 0 \\ 0 & 0 & I_{s_1-r_1-t_1} & 0 \end{pmatrix},$

whose column blocks have $r_1$, $t_1$, $s_1 - r_1 - t_1$ and $p - s_1$ columns, respectively, with $s_1 = r((\bar{E}_1, \bar{E}_2)^T)$, $r_1 = s_1 - r(\bar{E}_2^T)$, $t_1 = r(\bar{E}_1^T) + r(\bar{E}_2^T) - s_1$, $\Theta_1 = \mathrm{diag}(\gamma_1, \ldots, \gamma_{t_1})$, $\Theta_2 = \mathrm{diag}(\delta_1, \ldots, \delta_{t_1})$, $1 > \gamma_{t_1} \ge \cdots \ge \gamma_1 > 0$, $0 < \delta_1 \le \cdots \le \delta_{t_1} < 1$ and $\gamma_i^2 + \delta_i^2 = 1$, $i = 1, \ldots, t_1$.

Suppose that the GSVD of the matrix pair $(\bar{G}_1, \bar{G}_2)$ is

(2.16) $\bar{G}_1 = P_1\Sigma_3 W^T, \quad \bar{G}_2 = P_2\Sigma_4 W^T,$

where $P_1 \in OR^{(n-k)\times(n-k)}$, $P_2 \in OR^{k\times k}$, $W \in R^{p\times p}$ is nonsingular, and

(2.17) $\Sigma_3 = \begin{pmatrix} I_{r_2} & 0 & 0 & 0 \\ 0 & \Theta_3 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad \Sigma_4 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \Theta_4 & 0 & 0 \\ 0 & 0 & I_{s_2-r_2-t_2} & 0 \end{pmatrix},$

with $s_2 = r\!\begin{pmatrix} \bar{G}_1 \\ \bar{G}_2 \end{pmatrix}$, $r_2 = s_2 - r(\bar{G}_2)$, $t_2 = r(\bar{G}_1) + r(\bar{G}_2) - s_2$, $\Theta_3 = \mathrm{diag}(\alpha_1, \ldots, \alpha_{t_2})$, $\Theta_4 = \mathrm{diag}(\beta_1, \ldots, \beta_{t_2})$, $1 > \alpha_{t_2} \ge \cdots \ge \alpha_1 > 0$, $0 < \beta_1 \le \cdots \le \beta_{t_2} < 1$ and $\alpha_i^2 + \beta_i^2 = 1$, $i = 1, \ldots, t_2$. Here $r_1$, $t_1$, $s_1 - r_1 - t_1$, $p - s_1$ and $r_2$, $t_2$, $s_2 - r_2 - t_2$, $p - s_2$ denote the numbers of columns of the sub-blocks of $\Sigma_1$, $\Sigma_2$ and of $\Sigma_3$, $\Sigma_4$, respectively.

Combining (2.14) and (2.16) implies that (2.13) can be written as

(2.18) $\Sigma_1^T Q_1^T F_1 P_1 \Sigma_3 + \Sigma_2^T Q_2^T F_2 P_2 \Sigma_4 = S^{-1}\bar{A}_0 W^{-T}.$

Partition $Q_1^T F_1 P_1$, $Q_2^T F_2 P_2$ and $S^{-1}\bar{A}_0 W^{-T}$ into the following forms:

(2.19) $Q_1^T F_1 P_1 = \begin{pmatrix} F_{111} & F_{112} & F_{113} \\ F_{121} & F_{122} & F_{123} \\ F_{131} & F_{132} & F_{133} \end{pmatrix}, \quad Q_2^T F_2 P_2 = \begin{pmatrix} F_{211} & F_{212} & F_{213} \\ F_{221} & F_{222} & F_{223} \\ F_{231} & F_{232} & F_{233} \end{pmatrix},$

(2.20) $S^{-1}\bar{A}_0 W^{-T} = \begin{pmatrix} A_{011} & A_{012} & A_{013} & A_{014} \\ A_{021} & A_{022} & A_{023} & A_{024} \\ A_{031} & A_{032} & A_{033} & A_{034} \\ A_{041} & A_{042} & A_{043} & A_{044} \end{pmatrix}.$

Combining (2.19) and (2.20) implies that (2.18) can be written as

(2.21) $\begin{pmatrix} F_{111} & F_{112}\Theta_3 & 0 & 0 \\ \Theta_1F_{121} & \Theta_1F_{122}\Theta_3 + \Theta_2F_{222}\Theta_4 & \Theta_2F_{223} & 0 \\ 0 & F_{232}\Theta_4 & F_{233} & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} A_{011} & A_{012} & A_{013} & A_{014} \\ A_{021} & A_{022} & A_{023} & A_{024} \\ A_{031} & A_{032} & A_{033} & A_{034} \\ A_{041} & A_{042} & A_{043} & A_{044} \end{pmatrix}.$

Combining Lemma 3 and (2.11)–(2.21) yields the following theorem.

Theorem 1

Suppose that X, Λ, Y, Γ are given by (2.5) and $A_0 \in R^{p\times p}$ is given. Then Problem I has a solution in $GCSR^{n\times n}$ if and only if (2.6), (2.7) and the following equations hold:

(2.22) $A_{013} = 0, \ A_{014} = 0, \ A_{024} = 0, \ A_{031} = 0, \ A_{034} = 0, \ A_{041} = 0, \ A_{042} = 0, \ A_{043} = 0, \ A_{044} = 0.$

Moreover, the general solution is

(2.23) $A = K \begin{pmatrix} A_{110} + E_1F_1G_1 & 0 \\ 0 & A_{210} + E_2F_2G_2 \end{pmatrix} K^T,$

where $A_{110}$, $E_1$, $G_1$, $A_{210}$, $E_2$, $G_2$ are as in (2.11), and

(2.24) $F_1 = Q_1 \begin{pmatrix} A_{011} & A_{012}\Theta_3^{-1} & F_{113} \\ \Theta_1^{-1}A_{021} & \Theta_1^{-1}(A_{022} - \Theta_2F_{222}\Theta_4)\Theta_3^{-1} & F_{123} \\ F_{131} & F_{132} & F_{133} \end{pmatrix} P_1^T,$

(2.25) $F_2 = Q_2 \begin{pmatrix} F_{211} & F_{212} & F_{213} \\ F_{221} & F_{222} & \Theta_2^{-1}A_{023} \\ F_{231} & A_{032}\Theta_4^{-1} & A_{033} \end{pmatrix} P_2^T,$

where $F_{113}$, $F_{123}$, $F_{131}$, $F_{132}$, $F_{133}$, $F_{211}$, $F_{212}$, $F_{213}$, $F_{221}$, $F_{222}$ and $F_{231}$ are arbitrary matrices of conformal sizes.

3 An expression of the solution of Problem II

From (2.23), it is easy to prove that the solution set S E of Problem I is a nonempty closed convex set if Problem I has a solution in GCSR n×n . We claim that for any given A* ∈ R n×n , there exists a unique optimal approximation for Problem II.

Combining (2.8)–(2.11) and Lemma 1, (2.23) can be written as

(3.1) $A = A_{10} + EFG, \quad F = K \begin{pmatrix} F_1 & 0 \\ 0 & F_2 \end{pmatrix} K^T,$

where E and G are denoted by (2.10), F 1 and F 2 are denoted by (2.24) and (2.25), respectively.

According to conclusion (3) of Definition 1, for any $A^* \in R^{n\times n}$ there exist a unique $A_1^* \in GCSR^{n\times n}$ and a unique $A_2^* \in GCSSR^{n\times n}$ such that

(3.2) $A^* = A_1^* + A_2^*,$

where

(3.3) $A_1^* = \frac{1}{2}(A^* + JA^*J), \quad A_2^* = \frac{1}{2}(A^* - JA^*J).$

According to Lemma 1, $A_1^*$ can be written as

(3.4) $A_1^* = K \begin{pmatrix} A_{11}^* & 0 \\ 0 & A_{22}^* \end{pmatrix} K^T,$

where $A_{11}^* = K_1^T A_1^* K_1 \in R^{(n-k)\times(n-k)}$ and $A_{22}^* = K_2^T A_1^* K_2 \in R^{k\times k}$. Denote

(3.5) $Q_1^T A_{11}^* P_1 = \begin{pmatrix} A_{111} & A_{112} & A_{113} \\ A_{121} & A_{122} & A_{123} \\ A_{131} & A_{132} & A_{133} \end{pmatrix}, \quad Q_2^T A_{22}^* P_2 = \begin{pmatrix} A_{211} & A_{212} & A_{213} \\ A_{221} & A_{222} & A_{223} \\ A_{231} & A_{232} & A_{233} \end{pmatrix}.$

Theorem 2

Let X, Y, Λ, Γ be given by (2.5) and let $A_0$ be such that the conditions of Theorem 1 are satisfied. Then, for any given $A^* \in R^{n\times n}$, Problem II has a unique solution $\hat{A}$. Moreover, $\hat{A}$ can be expressed as

(3.6) $\hat{A} = A_{10} + E\hat{F}G,$

where A 10, E, G are denoted by (2.9) and (2.10) with

(3.7) $\hat{F} = K \begin{pmatrix} \hat{F}_1 & 0 \\ 0 & \hat{F}_2 \end{pmatrix} K^T,$

where

(3.8) $\hat{F}_1 = Q_1 \begin{pmatrix} A_{011} & A_{012}\Theta_3^{-1} & A_{113} \\ \Theta_1^{-1}A_{021} & \Theta_1^{-1}(A_{022} - \Theta_2A_{222}\Theta_4)\Theta_3^{-1} & A_{123} \\ A_{131} & A_{132} & A_{133} \end{pmatrix} P_1^T,$

(3.9) $\hat{F}_2 = Q_2 \begin{pmatrix} A_{211} & A_{212} & A_{213} \\ A_{221} & A_{222} & \Theta_2^{-1}A_{023} \\ A_{231} & A_{032}\Theta_4^{-1} & A_{033} \end{pmatrix} P_2^T.$

Proof

Combining Theorem 1 and (3.2), for any AS E , we have

$\|A^* - A\|^2 = \|A_1^* - A + A_2^*\|^2 = \|A_1^* - A_{10} - EFG\|^2 + \|A_2^*\|^2.$

According to (2.10), it is easy to prove that E and G are orthogonal projection matrices. Hence, there exist orthogonal projection matrices $\bar{E}$, $\bar{G}$ which satisfy

(3.10) $\bar{E} + E = I_n, \quad \bar{E}E = 0; \qquad \bar{G} + G = I_n, \quad \bar{G}G = 0.$

From this, we have

$\begin{aligned} \|A^* - A\|^2 &= \|(\bar{E} + E)(A_1^* - A_{10}) - EFG\|^2 + \|A_2^*\|^2 \\ &= \|\bar{E}(A_1^* - A_{10})\|^2 + \|E(A_1^* - A_{10}) - EFG\|^2 + \|A_2^*\|^2 \\ &= \|\bar{E}(A_1^* - A_{10})\|^2 + \|E(A_1^* - A_{10})(G + \bar{G}) - EFG\|^2 + \|A_2^*\|^2 \\ &= \|\bar{E}(A_1^* - A_{10})\|^2 + \|E(A_1^* - A_{10})G - EFG\|^2 + \|E(A_1^* - A_{10})\bar{G}\|^2 + \|A_2^*\|^2. \end{aligned}$

This implies that

$\min_{A \in S_E} \|A^* - A\| \Longleftrightarrow \min_{F \text{ as in } (3.1)} \|E(A_1^* - A_{10})G - EFG\|.$

According to (2.9) and (2.10), it is easy to prove EA 10 G = 0. Hence, we have

$\min_{A \in S_E} \|A^* - A\| \Longleftrightarrow \min_{F \text{ as in } (3.1)} \|EA_1^*G - EFG\|.$

It is clear that if $\hat{F} = A_1^* + \bar{E}\bar{F}\bar{G}$ for any $\bar{F} \in GCSR^{n\times n}$, then

(3.11) $EA_1^*G - E\hat{F}G = 0.$

Combining Lemma 1, (2.11) and (3.10), we have

$\bar{E} = K \begin{pmatrix} I_{n-k} - E_1 & 0 \\ 0 & I_k - E_2 \end{pmatrix} K^T, \quad \bar{G} = K \begin{pmatrix} I_{n-k} - G_1 & 0 \\ 0 & I_k - G_2 \end{pmatrix} K^T,$

where $E_1$, $E_2$, $G_1$, $G_2$ are as in (2.11). Write

$\bar{F} = K \begin{pmatrix} \bar{F}_1 & 0 \\ 0 & \bar{F}_2 \end{pmatrix} K^T, \quad \bar{F}_1 \in R^{(n-k)\times(n-k)}, \ \bar{F}_2 \in R^{k\times k}.$

Denote

$Q_1^T(I_{n-k} - E_1)\bar{F}_1(I_{n-k} - G_1)P_1 = \begin{pmatrix} \bar{F}_{111} & \bar{F}_{112} & \bar{F}_{113} \\ \bar{F}_{121} & \bar{F}_{122} & \bar{F}_{123} \\ \bar{F}_{131} & \bar{F}_{132} & \bar{F}_{133} \end{pmatrix}, \quad Q_2^T(I_k - E_2)\bar{F}_2(I_k - G_2)P_2 = \begin{pmatrix} \bar{F}_{211} & \bar{F}_{212} & \bar{F}_{213} \\ \bar{F}_{221} & \bar{F}_{222} & \bar{F}_{223} \\ \bar{F}_{231} & \bar{F}_{232} & \bar{F}_{233} \end{pmatrix}.$

Combining (3.1) and (3.5), we have

(3.12) $\begin{pmatrix} A_{011} & A_{012}\Theta_3^{-1} & F_{113} \\ \Theta_1^{-1}A_{021} & \Theta_1^{-1}(A_{022} - \Theta_2F_{222}\Theta_4)\Theta_3^{-1} & F_{123} \\ F_{131} & F_{132} & F_{133} \end{pmatrix} = \begin{pmatrix} A_{111} & A_{112} & A_{113} \\ A_{121} & A_{122} & A_{123} \\ A_{131} & A_{132} & A_{133} \end{pmatrix} + \begin{pmatrix} \bar{F}_{111} & \bar{F}_{112} & \bar{F}_{113} \\ \bar{F}_{121} & \bar{F}_{122} & \bar{F}_{123} \\ \bar{F}_{131} & \bar{F}_{132} & \bar{F}_{133} \end{pmatrix},$

(3.13) $\begin{pmatrix} F_{211} & F_{212} & F_{213} \\ F_{221} & F_{222} & \Theta_2^{-1}A_{023} \\ F_{231} & A_{032}\Theta_4^{-1} & A_{033} \end{pmatrix} = \begin{pmatrix} A_{211} & A_{212} & A_{213} \\ A_{221} & A_{222} & A_{223} \\ A_{231} & A_{232} & A_{233} \end{pmatrix} + \begin{pmatrix} \bar{F}_{211} & \bar{F}_{212} & \bar{F}_{213} \\ \bar{F}_{221} & \bar{F}_{222} & \bar{F}_{223} \\ \bar{F}_{231} & \bar{F}_{232} & \bar{F}_{233} \end{pmatrix}.$

Equations (3.12) and (3.13) imply that if $\bar{F}$ satisfies the following conditions, then (3.11) also holds:

(3.14) $\bar{F}_{111} = A_{011} - A_{111}$, $\bar{F}_{112} = A_{012}\Theta_3^{-1} - A_{112}$, $\bar{F}_{113} = 0$, $\bar{F}_{121} = \Theta_1^{-1}A_{021} - A_{121}$, $\bar{F}_{122} = \Theta_1^{-1}(A_{022} - \Theta_2F_{222}\Theta_4)\Theta_3^{-1} - A_{122}$, $\bar{F}_{123} = 0$, $\bar{F}_{131} = 0$, $\bar{F}_{132} = 0$, $\bar{F}_{133} = 0$; $\bar{F}_{211} = 0$, $\bar{F}_{212} = 0$, $\bar{F}_{213} = 0$, $\bar{F}_{221} = 0$, $\bar{F}_{222} = F_{222} - A_{222}$, $\bar{F}_{223} = \Theta_2^{-1}A_{023} - A_{223}$, $\bar{F}_{231} = 0$, $\bar{F}_{232} = A_{032}\Theta_4^{-1} - A_{232}$, $\bar{F}_{233} = A_{033} - A_{233}$.

According to (3.14), we have

(3.15) $F_{113} = A_{113}$, $F_{123} = A_{123}$, $F_{131} = A_{131}$, $F_{132} = A_{132}$, $F_{133} = A_{133}$, $F_{211} = A_{211}$, $F_{212} = A_{212}$, $F_{213} = A_{213}$, $F_{221} = A_{221}$, $F_{222} = A_{222}$, $F_{231} = A_{231}$.

Substituting (3.15) into (2.24) and (2.25) gives (3.8) and (3.9), which proves Theorem 2.□

Algorithm

1. Input A*, A 0, and input X, Y, Λ, Γ according to (2.5).

2. Compute Y 1 T X 1 Λ 1 , Γ 1 Y 1 T X 1 , X 1 Λ 1 , X 1 Λ 1 X 1 + X 1 , Y 1 Γ 1 , Y 1 Γ 1 Y 1 + Y 1 , Y 2 T X 2 Λ 2 , Γ 2 Y 2 T X 2 , X 2 Λ 2 , X 2 Λ 2 X 2 + X 2 , Y 2 Γ 2 , Y 2 Γ 2 Y 2 + Y 2 . If (2.6) and (2.7) hold, then go to 3; otherwise stop.

3. Compute A 10, E, G according to (2.9) and (2.10), and compute A 110, A 210, E 1, E 2, G 1, G 2 according to (2.11).

4. Compute E ¯ 1 , E ¯ 2 , G ¯ 1 , G ¯ 2 , A ¯ 0 according to (2.12).

5. Compute the GSVD of matrix pairs ( E ¯ 1 T , E ¯ 2 T ) and ( G ¯ 1 , G ¯ 2 ) , respectively.

6. Partition $S^{-1}\bar{A}_0 W^{-T}$ according to (2.20). If (2.22) holds, go to step 7; otherwise stop.

7. Compute $A_1^*$ according to (3.3).

8. Compute $A_{11}^*$, $A_{22}^*$ according to (3.4), and partition $Q_1^T A_{11}^* P_1$, $Q_2^T A_{22}^* P_2$ according to (3.5).

9. Compute F ˆ 1 and F ˆ 2 according to (3.8) and (3.9), respectively.

10. Compute F ˆ according to (3.7).

11. Calculate A ˆ according to (3.6).
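A high-level Python skeleton of this algorithm is sketched below. It reuses the helper functions sketched in Section 2 (`split_basis`, `solvable`, `a10_E_G`, `reduced_data`) and assumes a user-supplied routine `gsvd` returning the factors of (2.14)/(2.16) (for instance, a wrapper around LAPACK's xGGSVD3; NumPy/SciPy do not expose a GSVD directly), as well as a helper `assemble_F_hat` carrying out the block bookkeeping of (2.19)–(2.22) and (3.5)–(3.9). Both `gsvd` and `assemble_F_hat` are assumptions of this sketch, not routines from the paper.

```python
import numpy as np

def solve_problem_II(Astar, A0, X1, L1, X2, L2, Y1, Gam1, Y2, Gam2, J, gsvd, assemble_F_hat):
    # Assumes split_basis, solvable, a10_E_G, reduced_data from the earlier sketches are in scope.
    n, p = Astar.shape[0], A0.shape[0]

    # Steps 1-2: solvability conditions (2.6)-(2.7).
    if not solvable(X1, L1, Y1, Gam1, X2, L2, Y2, Gam2):
        raise ValueError("(2.6)/(2.7) fail: Problem I has no solution")

    # Step 3: A10, E, G of (2.9)-(2.10) and the blocks of (2.11).
    A10, E, G = a10_E_G(X1, L1, Y1, Gam1, X2, L2, Y2, Gam2, n)
    K1, K2 = split_basis(J)
    A110, A210 = K1.T @ A10 @ K1, K2.T @ A10 @ K2
    E1, E2 = K1.T @ E @ K1, K2.T @ E @ K2
    G1, G2 = K1.T @ G @ K1, K2.T @ G @ K2

    # Step 4: reduced data of (2.12).
    E1b, E2b, G1b, G2b, A0b = reduced_data(K1, K2, E1, E2, G1, G2, A110, A210, A0, p)

    # Step 5: GSVDs (2.14) and (2.16) (user-supplied routine).
    Q1, Q2, S, Sig1, Sig2 = gsvd(E1b.T, E2b.T)
    P1, P2, W, Sig3, Sig4 = gsvd(G1b, G2b)

    # Step 6: partition S^{-1} A0bar W^{-T} as in (2.20); (2.22) is checked inside assemble_F_hat.
    A0t = np.linalg.solve(S, A0b) @ np.linalg.inv(W).T

    # Steps 7-10: split A* as in (3.3) and build F_hat from (3.5)-(3.9) (hypothetical helper).
    A1star = 0.5 * (Astar + J @ Astar @ J)
    Fhat = assemble_F_hat(A0t, A1star, K1, K2, Q1, Q2, P1, P2, Sig1, Sig2, Sig3, Sig4)

    # Step 11: the unique optimal approximation (3.6).
    return A10 + E @ Fhat @ G
```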

Example (n = 10, m = 6, h = 2, p = 3).

Take J as follows and choose a random matrix A ∈ GCSR10×10:

$J = \begin{pmatrix} 0&0&0&0&0&0&0&0&1&0 \\ 0&0&0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0&0 \\ 1&0&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0 \end{pmatrix},$

A = ( 0.9129 0.3401 0.3762 0.1782 0.8459 0.2490 0.3496 0.4997 0.8081 0.2734 0.4842 0.8901 0.4871 0.6404 0.3066 0.4359 0.7860 0.2541 0.7008 0.4527 0.4296 0.6955 0.6290 0.3674 0.7706 0.3658 0.6734 0.7211 0.2705 0.9054 0.3787 0.5113 0.2782 0.6369 0.3789 0.6256 0.7456 0.3834 0.6683 0.5673 0.9355 0.4189 0.4208 0.5631 0.7648 0.3268 0.6183 0.2866 0.4675 0.3522 0.4675 0.3522 0.2866 0.6183 0.3268 0.7648 0.5631 0.4208 0.9355 0.4189 0.6683 0.5673 0.3834 0.7456 0.6256 0.3789 0.6369 0.2782 0.3787 0.5113 0.2705 0.9054 0.7211 0.6734 0.3658 0.7706 0.3674 0.6290 0.4296 0.6955 0.8081 0.2734 0.4997 0.3496 0.2490 0.8459 0.1782 0.3762 0.9129 0.3401 0.7008 0.4527 0.2541 0.2541 0.4359 0.3066 0.6404 0.4871 0.4842 0.8901 ) .

Compute the eigenvalues and the right eigenvectors of A, choose partial eigenpairs of A and obtain X 1, X 2, Λ 1, Λ 2 according to (2.5).

X 1 = ( 0.2852 0.2264 0.2136 0.3242 0.0454 0.0978 0.3553 0.5749 0 0.3076 0.0458 0.1363 0.3044 0.2003 0.0148 0.3044 0.2003 0.0148 0.3076 0.0458 0.1363 0.3553 0.5749 0 0.2852 0.2264 0.2136 0.3242 0.0454 0.0978 ) , X 2 = ( 0.3432 0.1016 0.0780 0.0483 0.5103 0 0.3278 0.4345 0.1443 0.2424 0.0393 0.0108 0.4623 0.1051 0.0290 0.4623 0.1051 0.0290 0.2424 0.0393 0.0108 0.3278 0.4345 0.1443 0.3432 0.1016 0.0780 0.0483 0.5103 0 ) , Λ 1 = ( 5.2477 0 0 0 0.7218 0.2883 0 0.2883 0.7128 ) , Λ 2 = ( 0.9026 0 0 0 0.2113 0.0948 0 0.0948 0.2113 ) .

Compute the eigenvalues and the right eigenvectors of A T , choose partial eigenpairs of A T and obtain Y 1, Y 2, Γ 1, Γ 2 according to (2.5).

Y 1 = ( 0.1898 0.4155 0.1347 0.0559 0.5197 0.5197 0.0559 0.1347 0.1898 0.4155 ) , Y 2 = ( 0.3865 0.0860 0.2930 0.4379 0.2560 0.2560 0.4379 0.2930 0.3865 0.0860 ) , Γ 1 = ( 0.2847 ) , Γ 2 = ( 0.4610 ) .

It is clear that (2.6) and (2.7) hold. Input A 0 is

$A_0 = \begin{pmatrix} 0.9129 & 0.3401 & 0.3762 \\ 0.4842 & 0.8901 & 0.4871 \\ 0.4296 & 0.6955 & 0.6290 \end{pmatrix}.$

We can also prove that (2.22) holds. For a given matrix

A = ( 0.4326 0.1867 0.2944 0.3999 1.6041 1.0106 0.0000 0.5689 0.6232 0.3899 1.6656 0.7258 1.3362 0.6900 0.2573 0.6145 0.3179 0.2556 0.7990 0.08808 0.1253 0.5883 0.7143 0.8156 1.0565 0.5077 1.0950 0.3775 0.9409 0.6355 0.2877 2.1832 1.6236 0.7119 1.4151 1.6924 1.8740 0.2959 0.9921 0.5596 1.1465 0.1364 0.6918 1.2902 0.8051 0.5913 0.4282 1.4751 0.2120 0.4437 1.1909 0.1139 0.8580 0.6686 0.5287 0.6436 0.8956 0.2340 0.2379 0.9499 1.1892 1.0668 1.2540 1.1908 0.2193 0.3803 0.7310 0.1184 1.0078 0.7812 0.0376 0.0593 1.5937 1.2025 0.9219 1.0091 0.5779 0.3148 0.7420 0.5690 0.3273 0.0956 1.4410 0.0198 2.1707 0.0195 0.0403 1.4435 1.0823 0.8217 0.1746 0.8323 0.5711 0.1567 0.0592 0.0482 0.6771 0.3510 0.1315 0.2656 ) ,

by the Algorithm, the unique solution of Problem II is

A ˆ = ( 0.9266 0.3432 0.3968 0.0394 0.6860 0.3670 0.6008 0.4813 0.8063 0.2759 0.4930 0.8920 0.4941 0.9728 0.5025 0.2942 0.4168 0.2558 0.6723 0.4423 0.4356 0.6968 0.6337 0.3239 0.7364 0.3935 0.7209 0.7153 0.2671 0.9055 0.1859 0.7142 0.3284 0.3009 0.9766 1.0656 0.0199 0.39999 0.4142 0.7545 0.9811 0.4541 0.4843 0.5740 0.7820 0.4512 0.4709 0.2349 0.3621 0.3539 0.3621 0.3539 0.2349 0.4709 0.4512 0.7820 0.5740 0.4843 0.9811 0.4541 0.4142 0.7545 0.3999 0.0199 0.0656 0.9766 0.3009 0.3284 0.1859 0.7142 0.2671 0.9055 0.7153 0.7209 0.3935 0.7364 0.3239 0.6337 0.4356 0.6968 0.8063 0.2759 0.4813 0.6008 0.3670 0.6860 0.0394 0.3868 0.9266 0.3432 0.6723 0.4423 0.2558 0.4168 0.2942 0.5025 0.9728 0.4941 0.4930 0.8920 ) .
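As a sanity check (not part of the paper), any computed solution can be verified directly against the constraints of Problem I; with the full-precision computed Â one would expect all three tests below to pass.

```python
import numpy as np

def check_problem_I(A_hat, X, Lam, Y, Gam, A0, tol=1e-8):
    p = A0.shape[0]
    ok_right = np.allclose(A_hat @ X, X @ Lam, atol=tol)       # A X = X Lambda
    ok_left  = np.allclose(Y.T @ A_hat, Gam @ Y.T, atol=tol)   # Y^T A = Gamma Y^T
    ok_sub   = np.allclose(A_hat[:p, :p], A0, atol=tol)        # A[1:p] = A0
    return ok_right and ok_left and ok_sub
```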

4 Conclusion

In this article, we have obtained the necessary and sufficient conditions for the solvability of Problem I and its general solutions (Theorem 1). For a given matrix $A^* \in R^{n\times n}$, the unique optimal approximation solution of Problem II has been derived (Theorem 2). Our results extend and unify many results on the left and right inverse eigenpairs problem and on the inverse problem and the inverse eigenvalue problem of centrosymmetric matrices with a submatrix constraint, which is the first motivation of this work. For instance, if Y = 0 in Problem I, then the problem becomes Problem I in [17]; if p = 0 in Problem I, the problem becomes Problem I in [4,5,6,7,8,9,10,11,12,13].

The left and right eigenpairs of a real matrix are not all real, and its complex eigenpairs occur in conjugate pairs. Hence, the assumption made on Problem I in [4,5,6,7,10,11] is not suitable. In this article, we derive a suitable assumption for Problem I (X, Y, Λ, Γ given by (2.5)), which is another motivation of this work.



Acknowledgements

This research was supported by the Natural Science Foundation of Hunan Province (2015JJ4090) and by the Scientific Fund of the Hunan Provincial Education Department of China (Grant no. 13C1139). The author is grateful to the anonymous reviewer and Dr. Justyna Zuk for their valuable comments and careful reading of the original manuscript of this article.

Conflict of interest: The author declares that there is no conflict of interest regarding the publication of this paper.

References

[1] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, Oxford, 1965.

[2] M. Arav, D. Hershkowitz, V. Mehrmann, and H. Schneider, The recursive inverse eigenvalue problem, SIAM J. Matrix Anal. Appl. 22 (2000), no. 2, 392–412, 10.1137/s0895479899354044.

[3] R. Loewy and V. Mehrmann, A note on the symmetric recursive inverse eigenvalue problem, SIAM J. Matrix Anal. Appl. 25 (2003), no. 1, 180–187, 10.1137/s0895479802408839.

[4] F. L. Li, X. Y. Hu, and L. Zhang, Left and right eigenpairs problem of skew-centrosymmetric matrices, Appl. Math. Comput. 177 (2006), no. 1, 105–110, 10.1016/j.amc.2005.10.035.

[5] F. L. Li, X. Y. Hu, and L. Zhang, Left and right inverse eigenpairs problem for κ-persymmetric matrices, Math. Numer. Sin. 29 (2007), no. 4, 337–344.

[6] F. L. Li, X. Y. Hu, and L. Zhang, Left and right inverse eigenpairs problem of generalized centrosymmetric matrices and its optimal approximation problem, Appl. Math. Comput. 212 (2009), no. 2, 481–487, 10.1016/j.amc.2009.02.035.

[7] F. L. Li and K. K. Zhang, Left and right inverse eigenpairs problem for the symmetrizable matrices, Proceedings of the Ninth International Conference on Matrix Theory and its Applications, 2010, vol. 1, pp. 179–182.

[8] F. L. Li, Left and right inverse eigenpairs problem of orthogonal matrices, Appl. Math. 3 (2012), 1972–1976, 10.4236/am.2012.312271.

[9] F. L. Li, X. Y. Hu, and L. Zhang, Left and right inverse eigenpairs problem for κ-Hermitian matrices, J. Appl. Math. 2013 (2013), 230408, 10.1155/2013/230408.

[10] L. Zhang and D. X. Xie, A class of inverse eigenvalue problems, Math. Sci. Acta. 13 (1993), no. 1, 94–99.

[11] B. Y. Ouyang, A class of left and right inverse eigenvalue problem for semipositive subdefinite matrices, Math. Numer. Sin. 20 (1998), no. 4, 345–352.

[12] M. L. Liang and L. F. Dai, The left and right inverse eigenvalue problems of generalized reflexive and anti-reflexive matrices, J. Comput. Appl. Math. 234 (2010), no. 3, 743–749, 10.1016/j.cam.2010.01.014.

[13] F. Yin and G. X. Huang, Left and right inverse eigenvalue problem of (R,S) symmetric matrices and its optimal approximation problem, Appl. Math. Comput. 219 (2013), no. 17, 9261–9269, 10.1016/j.amc.2013.03.059.

[14] A. L. Andrew, Centrosymmetric matrices, SIAM Rev. 40 (1998), no. 3, 697–698, 10.2307/2653241.

[15] R. D. Hill and S. R. Waters, On κ-real and κ-Hermitian matrices, Linear Algebra Appl. 169 (1992), 17–29, 10.1016/0024-3795(92)90167-9.

[16] I. S. Pressman, Matrices with multiple symmetry properties: applications of centrohermitian and perhermitian matrices, Linear Algebra Appl. 284 (1998), no. 1–3, 239–258, 10.1016/s0024-3795(98)10144-1.

[17] Z. Y. Peng, X. Y. Hu, and L. Zhang, The inverse problem of centrosymmetric matrices with a submatrix constraint, J. Comput. Math. 22 (2004), no. 4, 535–544.

[18] Z. J. Bai, The inverse eigenproblem of centrosymmetric matrices with a submatrix constraint and its approximation, SIAM J. Matrix Anal. Appl. 26 (2005), no. 4, 1100–1114, 10.1137/s0895479803434185.

[19] L. J. Zhao, X. Y. Hu, and L. Zhang, Least squares solutions to AX = B for symmetric centrosymmetric matrices under a central principal submatrix constraint and the optimal approximation, Linear Algebra Appl. 428 (2008), 871–880, 10.1016/j.laa.2007.08.019.

[20] C. de Boor and G. H. Golub, The numerically stable reconstruction of a Jacobi matrix from spectral data, Linear Algebra Appl. 21 (1978), no. 3, 245–260, 10.1016/0024-3795(78)90086-1.

[21] L. S. Gong, X. Y. Hu, and L. Zhang, The expansion problem of anti-symmetric matrix under a linear constraint and the optimal approximation, J. Comput. Appl. Math. 197 (2006), no. 1, 44–52, 10.1016/j.cam.2005.10.021.

[22] A. P. Liao and Y. Lei, Least-squares solutions of matrix inverse problem for bi-symmetric matrices with a submatrix constraint, Numer. Linear Algebra Appl. 14 (2007), no. 5, 425–444, 10.1002/nla.530.

[23] Y. X. Yuan and H. Dai, The nearness problems for symmetric matrix with a submatrix constraint, J. Comput. Appl. Math. 213 (2008), no. 1, 224–231, 10.1016/j.cam.2007.01.033.

[24] J. F. Li, X. Y. Hu, and L. Zhang, The nearness problems for symmetric centrosymmetric with a special submatrix constraint, Numer. Algor. 55 (2010), 39–57, 10.1007/s11075-009-9356-2.

[25] Z. H. Peng and H. M. Xin, The reflexive least squares solutions of the general coupled matrix equations with a submatrix constraint, Appl. Math. Comput. 225 (2013), 425–445, 10.1016/j.amc.2013.09.062.

[26] F. Yin, K. Guo, G. X. Huang, and B. Huang, The inverse eigenproblem with a submatrix constraint and the associated approximation problem for (R,S)-symmetric matrices, J. Comput. Appl. Math. 268 (2014), no. 10, 23–33, 10.1016/j.cam.2014.01.038.

[27] A. Collar, On centrosymmetric and centroskew matrices, Quart. J. Mech. Appl. Math. 15 (1962), no. 3, 265–281, 10.1093/qjmam/15.3.265.

Received: 2019-06-20
Accepted: 2020-01-18
Published Online: 2020-06-18

© 2020 Fan-Liang Li, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
