Article, Open Access

W-MPD–N-DMP-solutions of constrained quaternion matrix equations

  • Ivan I. Kyrchei, Dijana Mosić and Predrag Stanimirović
Published/Copyright: January 21, 2023

Abstract

The solvability of several new constrained quaternion matrix equations is investigated, and their unique solutions are presented in terms of the weighted MPD inverse and weighted DMP inverse of suitable matrices. It is interesting to consider some exceptional cases of these new equations and corresponding solutions. Determinantal representations for the solutions of the equations as mentioned earlier are established as sums of appropriate minors. In order to illustrate the obtained results, a numerical example is shown.

MSC 2010: 15A24; 15A09; 15A15; 15B33

1 Introduction

Standardly, we write $\mathbb{C}$ (or $\mathbb{R}$) and $\mathbb{H}$, respectively, for the complex (real) numbers and the quaternion skew field $\mathbb{H} = \{a_0 + a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k} \mid \mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i}\mathbf{j}\mathbf{k} = -1,\ a_0, a_1, a_2, a_3 \in \mathbb{R}\}$. The conjugate and norm of $a = a_0 + a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k} \in \mathbb{H}$ are given by $\bar{a} = a_0 - a_1\mathbf{i} - a_2\mathbf{j} - a_3\mathbf{k}$ and $\|a\| = \sqrt{a\bar{a}} = \sqrt{\bar{a}a} = \sqrt{a_0^2 + a_1^2 + a_2^2 + a_3^2}$, respectively.

Let $A \in \mathbb{H}^{m\times n}$, where $\mathbb{H}^{m\times n}$ is the set of all $m \times n$ matrices over $\mathbb{H}$. Denote by $A^*$ and $\operatorname{rank}(A)$, respectively, the conjugate transpose and the rank of $A$. The right/left range space and the right/left null space of $A$ are defined as

$$\mathcal{N}_r(A) = \{z \in \mathbb{H}^{n\times 1} : Az = 0\}, \qquad \mathcal{C}_r(A) = \{v \in \mathbb{H}^{m\times 1} : v = Az,\ z \in \mathbb{H}^{n\times 1}\},$$
$$\mathcal{N}_l(A) = \{z \in \mathbb{H}^{1\times m} : zA = 0\}, \qquad \mathcal{R}_l(A) = \{v \in \mathbb{H}^{1\times n} : v = zA,\ z \in \mathbb{H}^{1\times m}\}.$$

A quaternion matrix $A \in \mathbb{H}^{n\times n}$ for which $A^* = A$ holds is Hermitian.

Generalized inverses play an essential role in the study of the solvability of various equations and systems of equations [1]. The most well-known generalized inverses are the Moore-Penrose and Drazin inverses. For $A \in \mathbb{H}^{n\times n}$, the Drazin inverse $A^D \coloneqq X$ uniquely solves the equations $X = XAX$, $A^k = XA^{k+1}$, and $XA = AX$, where $k = \operatorname{Ind}(A)$ denotes the index of $A$, that is, the smallest nonnegative integer $k$ such that $\operatorname{rank}(A^k) = \operatorname{rank}(A^{k+1})$. The Moore-Penrose inverse $A^\dagger \coloneqq X$ of $A \in \mathbb{H}^{m\times n}$ is uniquely defined by the equations $A = AXA$, $X = XAX$, $AX = (AX)^*$, and $XA = (XA)^*$.
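The defining equations above are easy to check numerically. The following is a minimal sketch over real matrices with NumPy (the quaternion case would need, e.g., a complex or real block embedding, which is not shown here); the Drazin inverse is computed through the known representation $A^D = A^k (A^{2k+1})^\dagger A^k$, valid for any $k \ge \operatorname{Ind}(A)$.

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, valid for k >= Ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
# rank(A) = 2 and rank(A^2) = rank(A^3) = 1, so Ind(A) = 2
k = 2
AD = drazin(A, k)
assert np.allclose(AD @ A @ AD, AD)                        # X A X = X
assert np.allclose(np.linalg.matrix_power(A, k),
                   AD @ np.linalg.matrix_power(A, k + 1))  # A^k = X A^{k+1}
assert np.allclose(A @ AD, AD @ A)                         # A X = X A

# Moore-Penrose inverse of a rectangular matrix: the four Penrose equations
B = np.array([[1., 2.],
              [2., 4.],
              [0., 1.]])
Bp = np.linalg.pinv(B)
assert np.allclose(B @ Bp @ B, B)        # A X A = A
assert np.allclose(Bp @ B @ Bp, Bp)      # X A X = X
assert np.allclose((B @ Bp).T, B @ Bp)   # (A X)* = A X
assert np.allclose((Bp @ B).T, Bp @ B)   # (X A)* = X A
```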

Next, we use the following notations:

$$\operatorname{Ind}(A, W) = \max\{\operatorname{Ind}(AW), \operatorname{Ind}(WA)\},$$
$$\mathbb{H}^{(m,n)}(k) = \{\{A, W\} \mid A \in \mathbb{H}^{m\times n},\ W \in \mathbb{H}^{n\times m},\ \operatorname{Ind}(A, W) = k\},$$
$$\mathbb{H}_r^{(m,n)}(k) = \{\{A, W\} \mid \{A, W\} \in \mathbb{H}^{(m,n)}(k),\ \operatorname{rank}(A) = r\},$$

$$\binom{A, W}{B, N} \in \mathbb{H}^{\binom{m,n}{p,q}}\!\binom{k}{l} \Leftrightarrow \{A, W\} \in \mathbb{H}^{(m,n)}(k),\ \{B, N\} \in \mathbb{H}^{(p,q)}(l),$$
$$\binom{A, W}{B, N} \in \mathbb{H}_{r,s}^{\binom{m,n}{p,q}}\!\binom{k}{l} \Leftrightarrow \{A, W\} \in \mathbb{H}_r^{(m,n)}(k),\ \{B, N\} \in \mathbb{H}_s^{(p,q)}(l),$$

and also the notation from [2,3]:

$$U \in \mathcal{C}_{r,*}(P) \Leftrightarrow \mathcal{C}_r(U) \subseteq \mathcal{C}_r(P), \qquad U \in \mathcal{R}_{l,*}(Q) \Leftrightarrow \mathcal{R}_l(U) \subseteq \mathcal{R}_l(Q),$$
$$U \in \mathcal{O}(P, Q) \Leftrightarrow \mathcal{C}_r(U) \subseteq \mathcal{C}_r(P)\ \text{and}\ \mathcal{R}_l(U) \subseteq \mathcal{R}_l(Q).$$

The weighted Drazin inverse was defined for rectangular matrices as a generalization of the Drazin inverse [4]. Let $\{A, W\} \in \mathbb{H}^{(m,n)}(k)$. The $W$-weighted Drazin inverse of $A$ is given by the expression

$$A^{D,W} = A[(WA)^D]^2 = [(AW)^D]^2 A.$$
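The equality of the two expressions above can be verified numerically. Below is a hedged sketch over real matrices (a stand-in for the quaternion case), reusing the known Drazin representation $A^D = A^k(A^{2k+1})^\dagger A^k$; the exponent $k = 2$ is an assumption that generically exceeds $\operatorname{Ind}(A, W)$ for the random inputs used.

```python
import numpy as np

def drazin(M, k):
    # M^D = M^k (M^(2k+1))^+ M^k for any k >= Ind(M)
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # A in H^{m x n} (real stand-in)
W = rng.standard_normal((2, 3))   # weight W in H^{n x m}
k = 2                             # assumed >= Ind(A, W)

ADW_left = A @ np.linalg.matrix_power(drazin(W @ A, k), 2)    # A [(WA)^D]^2
ADW_right = np.linalg.matrix_power(drazin(A @ W, k), 2) @ A   # [(AW)^D]^2 A
assert np.allclose(ADW_left, ADW_right)
```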

Significant results about the $W$-weighted Drazin inverse are presented in [5–11].

Recent research on the core inverse [12] and the core-EP inverse [13,14] has revived interest in studying new generalized inverses, in particular those obtained by combining existing ones. The notion of the DMP inverse, which combines the Drazin inverse and the Moore-Penrose inverse, was introduced initially for complex square matrices in [15]. It was then extended as the weighted DMP inverse to rectangular matrices in [16], to bounded linear Hilbert space operators in [17], and to quaternion matrices in [18]. For $\{A, W\} \in \mathbb{H}^{(m,n)}(k)$, the $W$-weighted DMP ($W$-DMP) inverse of $A$ is expressed as follows:

(1.1) $A^{D,\dagger,W} = WA^{D,W}WAA^\dagger$,

and uniquely determined by the equations

$$X = XAX, \qquad XA = WA^{D,W}WA, \qquad (WA)^k X = (WA)^k A^\dagger.$$

As the dual W -DMP inverse, the W -weighted MPD ( W -MPD) inverse of A is defined as follows:

(1.2) $A^{\dagger,D,W} = A^\dagger AWA^{D,W}W$.

Note that the DMP inverse $A^{D,\dagger} = A^D A A^\dagger$ and the MPD inverse $A^{\dagger,D} = A^\dagger A A^D$ are special cases of the $W$-DMP inverse and the $W$-MPD inverse, respectively, when $m = n$ and $W = I_n$.
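Definitions (1.1) and (1.2), together with the defining equations of the $W$-DMP inverse, can be checked in a few lines. The sketch below again works over real matrices as a stand-in for $\mathbb{H}$, with the Drazin inverse computed by the known formula $M^D = M^k(M^{2k+1})^\dagger M^k$ and $k = 2$ assumed to exceed the generic index of the random inputs.

```python
import numpy as np

def drazin(M, k):
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
W = rng.standard_normal((2, 3))
k = 2  # assumed >= Ind(A, W)

ADW = A @ np.linalg.matrix_power(drazin(W @ A, k), 2)  # W-weighted Drazin inverse
Ap = np.linalg.pinv(A)

ADdW = W @ ADW @ W @ A @ Ap    # (1.1): W-DMP inverse A^{D,+,W}
ApDW = Ap @ A @ W @ ADW @ W    # (1.2): W-MPD inverse A^{+,D,W}

# defining equations of the W-DMP inverse
X, WAk = ADdW, np.linalg.matrix_power(W @ A, k)
assert np.allclose(X @ A @ X, X)
assert np.allclose(X @ A, W @ ADW @ W @ A)
assert np.allclose(WAk @ X, WAk @ Ap)
# the W-MPD inverse is likewise an outer inverse of A
assert np.allclose(ApDW @ A @ ApDW, ApDW)
```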

Recently, many authors have considered the properties of DMP and $W$-DMP inverses. In particular, characterizations and representations of DMP and $W$-DMP inverses can be found in [19–23]. Integral representations of the DMP inverse were established in [24], and determinantal ($D$-) representations of DMP and $W$-DMP inverses in [18,25]. An iterative method for computing the DMP inverse is given in [26]. The DMP inverse has also been extended to operators [27], to elements of rings [28], and to tensors [29].

Many problems in engineering, mathematics, and other areas can be converted into linear matrix equations. Many authors have considered constrained quaternion matrix equations (CQMEs), since they have applications in quantum mechanics, quantum physics, computer science, signal and image processing, rigid mechanics, statistics, control theory, field theory, and other fields. The solvability of CQMEs is investigated by various techniques, such as different representations and characterizations of matrices with quaternion entries, the Kronecker product, and the Moore-Penrose inverse and other kinds of generalized inverses. Some interesting results can be found in [30–35].

In [2], the solvability of some CQMEs is studied. It is proved that the unique solutions of these equations can be expressed by the corresponding DMP and MPD inverses together with adequate element-wise $D$-representations. A characterization of the best-approximate solution $X = A^\dagger C B^\dagger$, as well as of the Drazin-inverse solution $X = A^D C B^D$, of the general linear equation $AXB = C$ is also given in [1,2].

2 Preliminaries and detailed motivation

In order to further generalize the best-approximate solutions and the results proved in [2], we study several new CQMEs. The solutions of the new CQMEs are uniquely determined and presented in terms of the weighted MPD inverse and the weighted DMP inverse of corresponding matrices. By considering important particular cases of these CQMEs, the best-approximate solution is recovered as a special case. It is also of interest to develop $D$-representations for the solutions of the examined CQMEs.

The main research directions of this article are briefly described as follows:

  1. The first result is related to solvability of the CQME

    (2.1) $AXB = AWA^{D,W}WC\,NB^{D,N}NB$, $\quad X \in \mathcal{O}(A^\dagger(AW)^k, (NB)^l B^\dagger)$,

    where $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$. It is shown that the solution to (2.1) is uniquely determined by the $W$-MPD inverse of $A$ and the $N$-DMP inverse of $B$.

  2. Furthermore, we study the CQME

    (2.2) $AXB = AWA^{D,W}WAA^\dagger C B^\dagger BNB^{D,N}N$, $\quad X \in \mathcal{O}((WA)^k, (BN)^l)$,

    in which $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$. To this end, we prove that its unique solution is expressed by the $W$-DMP inverse of $A$ and the $N$-MPD inverse of $B$.

  3. Using the W -DMP inverse of A and N -DMP inverse of B , we verify solvability of the two-sided CQME

    (2.3) $AXB = AWA^{D,W}WAA^\dagger C\,NB^{D,N}NB$, $\quad X \in \mathcal{O}((WA)^k, (NB)^l B^\dagger)$,

    where $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$.

  4. Utilizing the W -MPD inverse and N -MPD inverse of A and B , respectively, we solve the equation

    (2.4) $AXB = AWA^{D,W}WC\,B^\dagger BNB^{D,N}NB$, $\quad X \in \mathcal{O}(A^\dagger(AW)^k, (BN)^l)$,

    where $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$.

  5. We investigate special cases of the CQMEs (2.1)–(2.4) as well as equations with the same solutions.

  6. For the CQMEs (2.1)–(2.4) and their special cases, we provide the D -representations of their solutions.

  7. The developed $D$-representations are illustrated by a numerical example.

The content of this research is systemized as follows. Preliminaries, detailed motivation, and explanation of considered problems are presented in Section 2. Solvability of equations (2.1)–(2.4) is studied in Section 3. D -representations of solutions to the aforementioned equations are developed in Section 4. A numerical example is given in Section 5.

3 W -MPD– N -DMP-solutions of constrained equations

Two-sided CQMEs (2.1)–(2.4) are solved in this section. Based on the corresponding weighted MPD and DMP inverses and the matrix C , the solutions of these equations are characterized and represented.

Equation (2.1) is considered in Theorem 3.1.

Theorem 3.1

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, the unique solution of (2.1) is

(3.1) $X = A^{\dagger,D,W} C B^{D,\dagger,N}$.

Proof

For $X$ presented as in (3.1), by (1.2) and (1.1), one can see that

$$AXB = AA^{\dagger,D,W}CB^{D,\dagger,N}B = AA^\dagger AWA^{D,W}WC\,NB^{D,N}NBB^\dagger B = AWA^{D,W}WC\,NB^{D,N}NB.$$

Since

$$\mathcal{C}_r(X) = \mathcal{C}_r(A^{\dagger,D,W}CB^{D,\dagger,N}) \subseteq \mathcal{C}_r(A^{\dagger,D,W}) = \mathcal{C}_r(A^\dagger AWA^{D,W}W) = \mathcal{C}_r(A^\dagger(AW)^k(A^{D,W}W)^k) = \mathcal{C}_r(A^\dagger(AW)^k)$$

and

$$\mathcal{R}_l(X) \subseteq \mathcal{R}_l(B^{D,\dagger,N}) = \mathcal{R}_l(NB^{D,N}NBB^\dagger) = \mathcal{R}_l((NB)^l B^\dagger),$$

we deduce that X solves (2.1).

To prove the uniqueness of the solution (3.1), let $Y$ and $X$ be two solutions of equation (2.1). From $\mathcal{C}_r(Y) \subseteq \mathcal{C}_r(A^\dagger(AW)^k)$, $\mathcal{C}_r(X) \subseteq \mathcal{C}_r(A^\dagger(AW)^k)$, and $A(Y - X)B = 0$, it follows that

$$(Y - X)B \in \mathcal{C}_r(A^\dagger(AW)^k) \cap \mathcal{N}_r(A) \subseteq \mathcal{C}_r(A^{\dagger,D,W}A) \cap \mathcal{N}_r(A^{\dagger,D,W}A) = \{0\}.$$

Based on the previous, $(Y - X)B = 0$ in conjunction with $\mathcal{R}_l(Y) \subseteq \mathcal{R}_l((NB)^l B^\dagger)$ and $\mathcal{R}_l(X) \subseteq \mathcal{R}_l((NB)^l B^\dagger)$ gives

$$Y - X \in \mathcal{R}_l((NB)^l B^\dagger) \cap \mathcal{N}_l(B) \subseteq \mathcal{R}_l(BB^{D,\dagger,N}) \cap \mathcal{N}_l(BB^{D,\dagger,N}) = \{0\}.$$

Thus, equation (2.1) has the unique solution of the form (3.1).□
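Theorem 3.1 can be sanity-checked numerically. The sketch below is a minimal stand-in over real matrices (not quaternions), building $A^{\dagger,D,W}$, $B^{D,\dagger,N}$, and (3.1) from the known Drazin representation $M^D = M^k(M^{2k+1})^\dagger M^k$; the dimensions and the exponent $k = l = 2$ are illustrative assumptions.

```python
import numpy as np

def drazin(M, k):
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

def w_drazin(A, W, k):
    return A @ np.linalg.matrix_power(drazin(W @ A, k), 2)

rng = np.random.default_rng(2)
A, W = rng.standard_normal((4, 3)), rng.standard_normal((3, 4))   # m=4, n=3
B, N = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))   # p=2, q=3
C = rng.standard_normal((4, 3))                                   # C in H^{m x q}
k = l = 2  # assumed >= the true indices

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
ADW, BDN = w_drazin(A, W, k), w_drazin(B, N, l)
ApDW = Ap @ A @ W @ ADW @ W     # W-MPD inverse of A, as in (1.2)
BDpN = N @ BDN @ N @ B @ Bp     # N-DMP inverse of B, as in (1.1)

X = ApDW @ C @ BDpN             # candidate solution (3.1)
# right-hand side of (2.1)
assert np.allclose(A @ X @ B,
                   A @ W @ ADW @ W @ C @ N @ BDN @ N @ B)
```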

Solvability of new restricted equations is obtained under additional restrictions in Theorem 3.1. If $m = n$ and $W = I_n$ (or $p = q$ and $N = I_p$) in Theorem 3.1, then $A^{\dagger,D,W}$ (respectively, $B^{D,\dagger,N}$) becomes $A^{\dagger,D}$ (respectively, $B^{D,\dagger}$), and the corresponding equations can be solved.

When we add the assumption $C \in \mathcal{O}((AW)^k, (NB)^l)$ in Theorem 3.1, we obtain a novel property of the best-approximate solution $X = A^\dagger C B^\dagger$ of the equation $AXB = C$.

Corollary 3.1

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, the expression

(3.2) $X = A^\dagger C B^\dagger$

is the unique solution of the following CQME:

$$AXB = C, \qquad X \in \mathcal{O}(A^\dagger(AW)^k, (NB)^l B^\dagger), \qquad C \in \mathcal{O}((AW)^k, (NB)^l).$$

Proof

Because $C \in \mathcal{C}_{r,*}((AW)^k)$ and $C \in \mathcal{R}_{l,*}((NB)^l)$, we obtain $C = AWA^{D,W}WC = CNB^{D,N}NB$. Using Theorem 3.1 and

$$X = A^{\dagger,D,W}CB^{D,\dagger,N} = A^\dagger AWA^{D,W}WC\,NB^{D,N}NBB^\dagger = A^\dagger C B^\dagger,$$

we complete the proof.□
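Corollary 3.1 can be illustrated as follows: projecting an arbitrary matrix onto the constraint set $C = AWA^{D,W}WC = CNB^{D,N}NB$ produces an admissible $C$, for which $A^\dagger C B^\dagger$ then solves $AXB = C$ exactly. This is a hedged real-matrix sketch (the quaternion case would need an embedding), with illustrative sizes and $k = l = 2$ assumed to exceed the generic indices.

```python
import numpy as np

def drazin(M, k):
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

def w_drazin(A, W, k):
    return A @ np.linalg.matrix_power(drazin(W @ A, k), 2)

rng = np.random.default_rng(4)
A, W = rng.standard_normal((4, 3)), rng.standard_normal((3, 4))
B, N = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))
k = l = 2
ADW, BDN = w_drazin(A, W, k), w_drazin(B, N, l)

# project an arbitrary matrix onto the set C = AW A^{D,W} W C = C N B^{D,N} N B
C0 = A @ W @ ADW @ W @ rng.standard_normal((4, 3)) @ N @ BDN @ N @ B

X = np.linalg.pinv(A) @ C0 @ np.linalg.pinv(B)   # best-approximate solution (3.2)
assert np.allclose(A @ X @ B, C0)
```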

Next, we show that (3.1) is also the solution of one more kind of CQME.

Theorem 3.2

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, the matrix $X$ defined in (3.1) presents the unique solution to

$$(AW)^k AXB(NB)^l = (AW)^k C(NB)^l, \qquad X \in \mathcal{O}(A^\dagger(AW)^k, (NB)^l B^\dagger).$$

Proof

According to Theorem 3.1, $X$ given by (3.1) satisfies $AXB = AWA^{D,W}WC\,NB^{D,N}NB$. Hence,

$$(AW)^k AXB(NB)^l = (AW)^k AWA^{D,W}WC\,NB^{D,N}NB(NB)^l = (AW)^k AW(AW)^D C\,(NB)^D NB(NB)^l = (AW)^k C(NB)^l.$$

The proof can be completed as in the proof of Theorem 3.1.□
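The identity proved in Theorem 3.2 is also easy to verify numerically. As before, this is a hedged real-matrix sketch with assumed sizes and $k = l = 2$ taken large enough to dominate the generic indices.

```python
import numpy as np

def drazin(M, k):
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

def w_drazin(A, W, k):
    return A @ np.linalg.matrix_power(drazin(W @ A, k), 2)

rng = np.random.default_rng(5)
A, W = rng.standard_normal((4, 3)), rng.standard_normal((3, 4))
B, N = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))
C = rng.standard_normal((4, 3))
k = l = 2
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
ADW, BDN = w_drazin(A, W, k), w_drazin(B, N, l)
X = (Ap @ A @ W @ ADW @ W) @ C @ (N @ BDN @ N @ B @ Bp)   # solution (3.1)

AWk = np.linalg.matrix_power(A @ W, k)
NBl = np.linalg.matrix_power(N @ B, l)
# the constrained equation of Theorem 3.2
assert np.allclose(AWk @ A @ X @ B @ NBl, AWk @ C @ NBl)
```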

As a special case of Theorems 3.1 and 3.2, for $m = n$, $p = q$, $W = I_n$, and $N = I_p$, the result [2, Theorem 3.1] is recovered.

We solve the next one-sided equations in the particular choice $p = q$ and $N = I_p = B$ (or $m = n$ and $W = I_n = A$) in Theorems 3.1 and 3.2.

Corollary 3.2

If $\{A, W\} \in \mathbb{H}^{(m,n)}(k)$ and $C \in \mathbb{H}^{m\times n}$, then the expression

(3.3) $X = A^{\dagger,D,W} C$

is the unique solution of the CQMEs

  1. $AX = AWA^{D,W}WC$, $\quad X \in \mathcal{C}_{r,*}(A^\dagger(AW)^k)$;

  2. $(AW)^k AX = (AW)^k C$, $\quad X \in \mathcal{C}_{r,*}(A^\dagger(AW)^k)$.

Corollary 3.3

For $\{B, N\} \in \mathbb{H}^{(p,q)}(l)$ and $C \in \mathbb{H}^{m\times q}$,

(3.4) $X = C B^{D,\dagger,N}$

is the unique solution of the CQMEs

  1. $XB = C\,NB^{D,N}NB$, $\quad X \in \mathcal{R}_{l,*}((NB)^l B^\dagger)$;

  2. $XB(NB)^l = C(NB)^l$, $\quad X \in \mathcal{R}_{l,*}((NB)^l B^\dagger)$.

Applying $A^{D,\dagger,W}$ and $B^{\dagger,D,N}$, the solution to (2.2) is obtained as a continuation of the results verified in Theorem 3.1.

Theorem 3.3

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, the CQME (2.2) is uniquely solvable by

(3.5) $X = A^{D,\dagger,W} C B^{\dagger,D,N}$.

Note that the unique solution to the following equation is also represented by (3.5).

Theorem 3.4

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, the matrix $X$ defined in (3.5) presents the unique solution to

$$(AW)^k AXB(NB)^l = (AW)^k AA^\dagger C B^\dagger B(NB)^l, \qquad X \in \mathcal{O}((WA)^k, (BN)^l).$$

We now state one-sided equations whose solvability arises from Theorems 3.3 and 3.4.

Corollary 3.4

If $\{A, W\} \in \mathbb{H}^{(m,n)}(k)$ and $C \in \mathbb{H}^{m\times n}$, then

(3.6) $X = A^{D,\dagger,W} C$

is the unique solution of the CQMEs

  1. $AX = AWA^{D,W}WAA^\dagger C$, $\quad X \in \mathcal{C}_{r,*}((WA)^k)$;

  2. $(AW)^k AX = (AW)^k AA^\dagger C$, $\quad X \in \mathcal{C}_{r,*}((WA)^k)$.

Corollary 3.5

For $\{B, N\} \in \mathbb{H}^{(p,q)}(l)$ and $C \in \mathbb{H}^{m\times q}$,

(3.7) $X = C B^{\dagger,D,N}$

is the unique solution of the CQMEs

  1. $XB = C B^\dagger B NB^{D,N}N$, $\quad X \in \mathcal{R}_{l,*}((BN)^l)$;

  2. $XB(NB)^l = C B^\dagger B(NB)^l$, $\quad X \in \mathcal{R}_{l,*}((BN)^l)$.

To prove the solvability of (2.3), we use the $W$-DMP inverse of $A$ and the $N$-DMP inverse of $B$.

Theorem 3.5

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, the unique solution to (2.3) is

(3.8) $X = A^{D,\dagger,W} C B^{D,\dagger,N}$.

One more equation can be studied by applying the W -DMP inverse and N -DMP inverse of A and B , respectively.

Theorem 3.6

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, (3.8) presents the unique solution to

$$(AW)^k AXB(NB)^l = (AW)^k AA^\dagger C(NB)^l, \qquad X \in \mathcal{O}((WA)^k, (NB)^l B^\dagger).$$

According to Theorems 3.5 and 3.6, we can obtain solutions of the next equations.

Corollary 3.6

For $\{A, W\} \in \mathbb{H}^{(m,n)}(k)$ and $C \in \mathbb{H}^{m\times n}$,

(3.9) $X = A^{D,\dagger,W} C$

is the unique solution of the CQMEs

  1. $AX = AWA^{D,W}WAA^\dagger C$, $\quad X \in \mathcal{C}_{r,*}((WA)^k)$;

  2. $(AW)^k AX = (AW)^k AA^\dagger C$, $\quad X \in \mathcal{C}_{r,*}((WA)^k)$.

Corollary 3.7

For $\{B, N\} \in \mathbb{H}^{(p,q)}(l)$ and $C \in \mathbb{H}^{m\times q}$, the expression

(3.10) $X = C B^{D,\dagger,N}$

uniquely solves the CQMEs

  1. $XB = C\,NB^{D,N}NB$, $\quad X \in \mathcal{R}_{l,*}((NB)^l B^\dagger)$;

  2. $XB(NB)^l = C(NB)^l$, $\quad X \in \mathcal{R}_{l,*}((NB)^l B^\dagger)$.

The two-sided equation (2.4) can be solved in terms of the W -MPD inverse and N -MPD inverse of A and B , respectively.

Theorem 3.7

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, the unique solution of equation (2.4) is as follows:

(3.11) $X = A^{\dagger,D,W} C B^{\dagger,D,N}$.

The following result can be checked, too.

Theorem 3.8

For $\binom{A,W}{B,N} \in \mathbb{H}^{\binom{m,n}{p,q}}\binom{k}{l}$ and $C \in \mathbb{H}^{m\times q}$, (3.11) presents the unique solution to

$$(AW)^k AXB(NB)^l = (AW)^k C B^\dagger B(NB)^l, \qquad X \in \mathcal{O}(A^\dagger(AW)^k, (BN)^l).$$

Theorems 3.7 and 3.8 imply solvability of the next one-sided equations.

Corollary 3.8

For $\{A, W\} \in \mathbb{H}^{(m,n)}(k)$ and $C \in \mathbb{H}^{m\times n}$,

(3.12) $X = A^{\dagger,D,W} C$

is the unique solution of the CQMEs

  1. $AX = AWA^{D,W}WC$, $\quad X \in \mathcal{C}_{r,*}(A^\dagger(AW)^k)$;

  2. $(AW)^k AX = (AW)^k C$, $\quad X \in \mathcal{C}_{r,*}(A^\dagger(AW)^k)$.

Corollary 3.9

For $\{B, N\} \in \mathbb{H}^{(p,q)}(l)$ and $C \in \mathbb{H}^{m\times q}$,

(3.13) $X = C B^{\dagger,D,N}$

is the unique solution of the CQMEs

  1. $XB = C B^\dagger B NB^{D,N}NB$, $\quad X \in \mathcal{R}_{l,*}((BN)^l)$;

  2. $XB(NB)^l = C B^\dagger B(NB)^l$, $\quad X \in \mathcal{R}_{l,*}((BN)^l)$.

4 Cramer’s representations of obtained solutions

The classical Cramer's rule for solving complex matrix equations is based on the usual determinant, which cannot be used for quaternion matrix equations. In the noncommutative quaternion case, the row ($R$-) and column ($C$-) determinants, recently introduced in [36,37], have demonstrated their usability in $D$-representations of generalized inverses and of solutions to appropriate matrix equations (see, e.g., [38–41]).

The following notation is used to express $D$-representations of generalized inverses. Primarily, $A^{\lambda}_{\chi}$ denotes the submatrix of $A \in \mathbb{H}^{m\times n}$ whose rows are indexed by $\lambda = \{\lambda_1, \ldots, \lambda_k\} \subseteq \{1, \ldots, m\}$ and whose columns are indexed by $\chi = \{\chi_1, \ldots, \chi_k\} \subseteq \{1, \ldots, n\}$. For a Hermitian $A$, the notation $|A|^{\lambda}_{\lambda}$ stands for the corresponding principal minor of $\det A$. Furthermore, $L_{k,n} \coloneqq \{\lambda : \lambda = (\lambda_1, \ldots, \lambda_k),\ 1 \le \lambda_1 < \cdots < \lambda_k \le n\}$ denotes the collection of strictly increasing sequences of $1 \le k \le n$ integers chosen from $\{1, \ldots, n\}$. For fixed $i \in \lambda$ and $j \in \chi$, the standard notation includes $I_{r,m}\{i\} \coloneqq \{\lambda : \lambda \in L_{r,m},\ i \in \lambda\}$ and $J_{r,n}\{j\} \coloneqq \{\chi : \chi \in L_{r,n},\ j \in \chi\}$.

Denote by $a_{.j}$ (resp. $a^*_{.j}$) the $j$th column and by $a_{i.}$ (resp. $a^*_{i.}$) the $i$th row of $A$ (resp. $A^*$). Suppose that $A_{i.}(b)$ and $A_{.j}(c)$ stand for the matrices obtained from $A$ by replacing its $i$th row with the row vector $b$ and its $j$th column with the column vector $c$, respectively.
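The column-replacement notation $A_{.j}(c)$ is exactly the construction that appears in the classical Cramer's rule. As a minimal illustration over complex/real matrices (where the ordinary determinant applies; the quaternion case requires the $\operatorname{rdet}/\operatorname{cdet}$ machinery described below), the helper name `replace_col` is our own:

```python
import numpy as np

def replace_col(A, j, c):
    """A_{.j}(c): a copy of A with its j-th column (0-based) replaced by c."""
    M = A.copy()
    M[:, j] = c
    return M

# classical Cramer's rule for an invertible real system as a sanity check
A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])
x = np.array([np.linalg.det(replace_col(A, j, b)) / np.linalg.det(A)
              for j in range(2)])
assert np.allclose(A @ x, b)
```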

Based on D -representations of the Moore-Penrose inverse [38] and the W -weighted Drazin inverse [42], D -representations of the W -DMP and W -MPD inverses are given in the following lemmas.

Lemma 4.1

[18] Let $\{A, W\} \in \mathbb{H}_r^{(m,n)}(k)$ and $\operatorname{rank}(WA)^k = r_1$. The $D$-representations of the $W$-DMP inverse $A^{D,\dagger,W} = (a^{D,\dagger,W}_{ij})$ are expressed as follows:

(i) If $U = WA$ is an arbitrary matrix, then

(4.1) $a^{D,\dagger,W}_{ij} = \dfrac{\sum_{\lambda \in I_{r,m}\{j\}} \operatorname{rdet}_j \big((AA^*)_{j.}(\tilde{\omega}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\lambda \in I_{r,m}} |AA^*|^{\lambda}_{\lambda} \Big(\sum_{\lambda \in I_{r_1,n}} \big|U^{2k+1}(U^{2k+1})^*\big|^{\lambda}_{\lambda}\Big)^2},$

where $\tilde{\omega}_{i.}$ is the $i$-th row of $\tilde{\Omega} = \Omega U^{k+1} A^*$. The matrix $\Omega = (\omega_{is})$ is given by

(4.2) $\omega_{is} = \sum_{\lambda \in I_{r_1,n}\{s\}} \operatorname{rdet}_s \big((U^{2k+1}(U^{2k+1})^*)_{s.}(\hat{\phi}_{i.})\big)^{\lambda}_{\lambda},$

where $\hat{\phi}_{i.}$ denotes the $i$-th row of $\hat{\Phi} \coloneqq U \Phi U^{2k}(U^{2k+1})^* \in \mathbb{H}^{n\times n}$, and $\Phi = (\phi_{tz}) \in \mathbb{H}^{n\times n}$ is defined by

(4.3) $\phi_{tz} = \sum_{\lambda \in I_{r_1,n}\{z\}} \operatorname{rdet}_z \big((U^{2k+1}(U^{2k+1})^*)_{z.}(\check{u}_{t.})\big)^{\lambda}_{\lambda}.$

Here, $\check{u}_{t.}$ is the $t$-th row of $\check{U} \coloneqq U^{k}(U^{2k+1})^* \in \mathbb{H}^{n\times n}$.

(ii) If $U = WA$ is Hermitian, then

(4.4) $a^{D,\dagger,W}_{ij} = \dfrac{\sum_{\lambda \in I_{r,m}\{j\}} \operatorname{rdet}_j \big((AA^*)_{j.}(\tilde{\omega}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\lambda \in I_{r,m}} |AA^*|^{\lambda}_{\lambda} \Big(\sum_{\lambda \in I_{r_1,n}} |U^{k+2}|^{\lambda}_{\lambda}\Big)^2},$

where $\tilde{\omega}_{i.}$ is the $i$-th row of $\tilde{\Omega} = \Omega U^{k+1} A^*$. The element-wise representation of $\Omega = (\omega_{is})$ is expressed as follows:

(4.5) $\omega_{is} \coloneqq \sum_{\lambda \in I_{r_1,n}\{s\}} \operatorname{rdet}_s \big(U^{k+2}_{s.}(u^{(k+1)}_{i.})\big)^{\lambda}_{\lambda},$

and $u^{(k+1)}_{i.}$ denotes the $i$-th row of $U^{k+1}$.

The weighted MPD inverse is represented similarly.

Lemma 4.2

[18] Let $\{A, W\} \in \mathbb{H}_r^{(m,n)}(k)$ and $\operatorname{rank}(AW)^k = r_1$. Then, the $D$-representations of the $W$-MPD inverse $A^{\dagger,D,W} = (a^{\dagger,D,W}_{ij})$ are defined as follows:

(i) In the case of an arbitrary $V = AW$,

(4.6) $a^{\dagger,D,W}_{ij} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\tilde{\upsilon}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \Big(\sum_{\chi \in J_{r_1,m}} \big|(V^{2k+1})^*V^{2k+1}\big|^{\chi}_{\chi}\Big)^2},$

where $\tilde{\upsilon}_{.j}$ is the $j$th column of $\tilde{\Upsilon} = A^*(AW)^{k+1}\Upsilon$. The elements of $\Upsilon = (\upsilon_{tj})$ are defined as follows:

(4.7) $\upsilon_{tj} \coloneqq \sum_{\chi \in J_{r_1,m}\{t\}} \operatorname{cdet}_t \big(((V^{2k+1})^*V^{2k+1})_{.t}(\hat{\psi}_{.j})\big)^{\chi}_{\chi},$

where $\hat{\psi}_{.j}$ denotes the $j$th column of $\hat{\Psi} \coloneqq (V^{2k+1})^*V^{2k}\Psi V \in \mathbb{H}^{m\times m}$. Then, $\Psi = (\psi_{sl}) \in \mathbb{H}^{m\times m}$ satisfies

(4.8) $\psi_{sl} = \sum_{\chi \in J_{r_1,m}\{s\}} \operatorname{cdet}_s \big(((V^{2k+1})^*V^{2k+1})_{.s}(\check{v}_{.l})\big)^{\chi}_{\chi},$

where $\check{v}_{.l}$ stands for the $l$-th column of $\check{V} \coloneqq (V^{2k+1})^*V^{k} \in \mathbb{H}^{m\times m}$.

(ii) If the matrix $V = AW$ is Hermitian, then

(4.9) $a^{\dagger,D,W}_{ij} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\tilde{\upsilon}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \Big(\sum_{\chi \in J_{r_1,m}} |V^{k+2}|^{\chi}_{\chi}\Big)^2},$

where $\tilde{\upsilon}_{.j}$ represents the $j$-th column of $\tilde{\Upsilon} = A^*V^{k+1}\Upsilon$. Elements of $\Upsilon = (\upsilon_{sj})$ are expressed as follows:

(4.10) $\upsilon_{sj} = \sum_{\chi \in J_{r_1,m}\{s\}} \operatorname{cdet}_s \big(V^{k+2}_{.s}(v^{(k+1)}_{.j})\big)^{\chi}_{\chi},$

with $v^{(k+1)}_{.j}$ representing the $j$-th column of $V^{k+1}$.

Furthermore, we develop the D -representations of solutions to CQMEs from Section 3. We start with the D -representations of (3.1) and its special appearances (3.3), (3.4).

Theorem 4.1

Assume that $\binom{A,W}{B,N} \in \mathbb{H}_{r,s}^{\binom{m,n}{p,q}}\binom{k}{l}$, $\operatorname{rank}(AW)^k = \operatorname{rank}(V^k) = r_1$, $\operatorname{rank}(NB)^l = \operatorname{rank}(U^l) = s_1$, and $C \in \mathbb{H}^{m\times q}$. The matrix $X = [x_{ij}] \in \mathbb{H}^{n\times p}$ of the form (3.1) admits one of the subsequent element-wise representations.

(1)

(4.11) $x_{ij} = \dfrac{\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\omega^{(1)}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \Big(\sum_{\chi \in J_{r_1,m}} |(V^{2k+1})^*V^{2k+1}|^{\chi}_{\chi}\Big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \Big(\sum_{\lambda \in I_{s_1,q}} |U^{2l+1}(U^{2l+1})^*|^{\lambda}_{\lambda}\Big)^2},$

where

(4.12) $\omega^{(1)}_{i.} = \sum_{f=1}^{m}\sum_{g=1}^{q} \sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\tilde{\upsilon}_{.f})\big)^{\chi}_{\chi}\, c_{fg}\, \tilde{\omega}_{g.} = \sum_{g=1}^{q} \sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(c^{(1)}_{.g})\big)^{\chi}_{\chi}\, \tilde{\omega}_{g.} = \sum_{g=1}^{q} \zeta_{ig}\, \tilde{\omega}_{g.},$

and $\tilde{\upsilon}_{.f}$ denotes the $f$th column of $\tilde{\Upsilon} = A^*V^{k+1}\Upsilon$, and $\tilde{\omega}_{g.}$ is the $g$th row of $\tilde{\Omega} = \Omega U^{l+1}B^*$. Here, $\Upsilon = (\upsilon_{tj})$ is determined by (4.7), and $\Omega$ can be found by (4.2) given that $\hat{\Phi} \coloneqq NB\,\Phi\,U^{2l}(U^{2l+1})^* \in \mathbb{H}^{q\times q}$, where $\Phi$ is of appropriate size and index.

(2)

(4.13) $x_{ij} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\upsilon^{(1)}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \Big(\sum_{\chi \in J_{r_1,m}} |(V^{2k+1})^*V^{2k+1}|^{\chi}_{\chi}\Big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \Big(\sum_{\lambda \in I_{s_1,q}} |U^{2l+1}(U^{2l+1})^*|^{\lambda}_{\lambda}\Big)^2},$

where

(4.14) $\upsilon^{(1)}_{.j} = \sum_{f=1}^{m}\sum_{g=1}^{q} \tilde{\upsilon}_{.f}\, c_{fg} \sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\tilde{\omega}_{g.})\big)^{\lambda}_{\lambda} = \sum_{f=1}^{m} \tilde{\upsilon}_{.f} \sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(c^{(2)}_{f.})\big)^{\lambda}_{\lambda} = \sum_{f=1}^{m} \tilde{\upsilon}_{.f}\, \mu_{fj}.$

Proof

According to (3.1) and the representations (4.6) of $A^{\dagger,D,W} = (a^{\dagger,D,W}_{ij})$ and (4.1) of $B^{D,\dagger,N} = (b^{D,\dagger,N}_{ij})$, one obtains

(4.15) $x_{ij} = \sum_{f=1}^{m}\sum_{g=1}^{q} a^{\dagger,D,W}_{if}\, c_{fg}\, b^{D,\dagger,N}_{gj} = \sum_{f=1}^{m}\sum_{g=1}^{q} \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\tilde{\upsilon}_{.f})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |(V^{2k+1})^*V^{2k+1}|^{\chi}_{\chi}\big)^2}\, c_{fg} \times \dfrac{\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\tilde{\omega}_{g.})\big)^{\lambda}_{\lambda}}{\sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{2l+1}(U^{2l+1})^*|^{\lambda}_{\lambda}\big)^2},$

where $\tilde{\upsilon}_{.f}$ stands for the $f$th column of $\tilde{\Upsilon} = A^*(AW)^{k+1}\Upsilon$ and $\tilde{\omega}_{g.}$ is the $g$th row of $\tilde{\Omega} = \Omega(NB)^{l+1}B^*$. The matrix $\Upsilon = (\upsilon_{tj})$ is determined by (4.7). Furthermore, $\Omega$ is defined by (4.2) with $\hat{\Phi} \coloneqq NB\,\Phi\,U^{2l}(U^{2l+1})^* \in \mathbb{H}^{q\times q}$, where $\Phi$ is of appropriate size and index.

To obtain more tractable formulae, we compress (4.15) by convolutions. Since a column vector can be multiplied by scalars from the right and a row vector from the left, the subsequent two cases are used.

(1) If we carry out the following series of successive designations, namely,

$$c^{(1)}_{.g} = \sum_{f=1}^{m} \tilde{\upsilon}_{.f}\, c_{fg}, \qquad \zeta_{ig} \coloneqq \sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(c^{(1)}_{.g})\big)^{\chi}_{\chi}, \qquad \omega^{(1)}_{i.} \coloneqq \sum_{g=1}^{q} \zeta_{ig}\, \tilde{\omega}_{g.},$$

then equality (4.11) follows.

(2) The series of successive designations

$$c^{(2)}_{f.} = \sum_{g=1}^{q} c_{fg}\, \tilde{\omega}_{g.}, \qquad \mu_{fj} \coloneqq \sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(c^{(2)}_{f.})\big)^{\lambda}_{\lambda}, \qquad \upsilon^{(1)}_{.j} \coloneqq \sum_{f=1}^{m} \tilde{\upsilon}_{.f}\, \mu_{fj},$$

yields the equality (4.13).□

If the matrices $V = AW$ and $U = NB$ are Hermitian, the obtained expressions (4.11) and (4.13) can be simplified. In these cases, we use the $D$-representations (4.4) and (4.9) of the $W$-DMP and $W$-MPD inverses, respectively. The following corollaries can be proved similarly to the proof of Theorem 4.1.

Corollary 4.1

Let $\binom{A,W}{B,N} \in \mathbb{H}_{r,s}^{\binom{m,n}{p,q}}\binom{k}{l}$, $\operatorname{rank}(AW)^k = \operatorname{rank}(V^k) = r_1$, $\operatorname{rank}(NB)^l = \operatorname{rank}(U^l) = s_1$, and $C \in \mathbb{H}^{m\times q}$. Suppose that $V = AW$ and $U = NB$ are both Hermitian. The matrix $X = [x_{ij}] \in \mathbb{H}^{n\times p}$ of the form (3.1) is given by one of the subsequent representations:

$$x_{ij} = \dfrac{\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\omega^{(1)}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |V^{k+2}|^{\chi}_{\chi}\big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{l+2}|^{\lambda}_{\lambda}\big)^2} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\upsilon^{(1)}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |V^{k+2}|^{\chi}_{\chi}\big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{l+2}|^{\lambda}_{\lambda}\big)^2}.$$

Here, $\omega^{(1)}_{i.}$ is determined by (4.12) and $\upsilon^{(1)}_{.j}$ by (4.14), where $\tilde{\upsilon}_{.f}$ is the $f$-th column of $\tilde{\Upsilon} = A^*V^{k+1}\Upsilon$ and $\tilde{\omega}_{g.}$ is the $g$-th row of $\tilde{\Omega} = \Omega U^{l+1}B^*$. The matrix $\Upsilon = (\upsilon_{tj})$ is determined by (4.10), and $\Omega$ can be found by (4.5) with an appropriate size and index.

By using Theorem 4.1 and Corollary 4.1, it is easy to obtain the following mixed cases.

Corollary 4.2

Let us suppose that $\binom{A,W}{B,N} \in \mathbb{H}_{r,s}^{\binom{m,n}{p,q}}\binom{k}{l}$, $\operatorname{rank}(AW)^k = \operatorname{rank}(V^k) = r_1$, $\operatorname{rank}(NB)^l = \operatorname{rank}(U^l) = s_1$, and $C \in \mathbb{H}^{m\times q}$. Furthermore, suppose that $V = AW$ is Hermitian and $U = NB$ is an arbitrary matrix. The matrix $X = [x_{ij}] \in \mathbb{H}^{n\times p}$ of the form (3.1) is defined as follows:

$$x_{ij} = \dfrac{\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\omega^{(1)}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |V^{k+2}|^{\chi}_{\chi}\big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{2l+1}(U^{2l+1})^*|^{\lambda}_{\lambda}\big)^2} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\upsilon^{(1)}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |V^{k+2}|^{\chi}_{\chi}\big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{2l+1}(U^{2l+1})^*|^{\lambda}_{\lambda}\big)^2}.$$

Here, $\omega^{(1)}_{i.}$ is determined by (4.12) and $\upsilon^{(1)}_{.j}$ by (4.14), such that $\tilde{\upsilon}_{.f}$ is the $f$-th column of $\tilde{\Upsilon} = A^*V^{k+1}\Upsilon$ and $\tilde{\omega}_{g.}$ is the $g$-th row of $\tilde{\Omega} = \Omega U^{l+1}B^*$. The matrix $\Upsilon = (\upsilon_{tj})$ is determined by (4.10), and $\Omega$ can be found by (4.2) with an appropriate size and index.

Corollary 4.3

Let $\binom{A,W}{B,N} \in \mathbb{H}_{r,s}^{\binom{m,n}{p,q}}\binom{k}{l}$, $\operatorname{rank}(AW)^k = \operatorname{rank}(V^k) = r_1$, $\operatorname{rank}(NB)^l = \operatorname{rank}(U^l) = s_1$, and $C \in \mathbb{H}^{m\times q}$. Suppose that $V = AW$ is an arbitrary matrix and $U = NB$ is Hermitian. The matrix $X = [x_{ij}] \in \mathbb{H}^{n\times p}$ determined in (3.1) is defined by one of the subsequent element-wise representations:

$$x_{ij} = \dfrac{\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\omega^{(1)}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |(V^{2k+1})^*V^{2k+1}|^{\chi}_{\chi}\big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{l+2}|^{\lambda}_{\lambda}\big)^2} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\upsilon^{(1)}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |(V^{2k+1})^*V^{2k+1}|^{\chi}_{\chi}\big)^2 \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{l+2}|^{\lambda}_{\lambda}\big)^2}.$$

Here, $\omega^{(1)}_{i.}$ is determined by (4.12) and $\upsilon^{(1)}_{.j}$ by (4.14), where $\tilde{\upsilon}_{.f}$ is the $f$-th column of $\tilde{\Upsilon} = A^*V^{k+1}\Upsilon$ and $\tilde{\omega}_{g.}$ is the $g$-th row of $\tilde{\Omega} = \Omega U^{l+1}B^*$. The matrix $\Upsilon = (\upsilon_{tj})$ is determined by (4.7), and $\Omega$ can be found by (4.5) with an appropriate size and index.

Theorem 4.1 induces the following algorithm for computing $X \in \mathbb{H}^{n\times p}$ by (3.1).

Algorithm 4.1

  1. Compute the matrices $V = AW$ and $U = NB$ and the values $r = \operatorname{rank}(V)$, $s = \operatorname{rank}(U)$, $k = \operatorname{Ind}(V)$, $l = \operatorname{Ind}(U)$, $r_1 = \operatorname{rank}(V^k)$, and $s_1 = \operatorname{rank}(U^l)$.

  2. Calculate the matrices $\check{V} = (V^{2k+1})^*V^{k}$, $\Psi$ by (4.8), and $\hat{\Psi} = (V^{2k+1})^*V^{2k}\Psi V$; then generate the matrix $\Upsilon$ by (4.7) and $\tilde{\Upsilon} = A^*V^{k+1}\Upsilon$.

  3. Calculate the matrices $\check{U} = U^{l}(U^{2l+1})^*$, $\Phi$ by (4.3), and $\hat{\Phi} = U\Phi U^{2l}(U^{2l+1})^*$; then generate the matrix $\Omega$ by (4.2) and $\tilde{\Omega} = \Omega U^{l+1}B^*$.

  4. Compute the minor sums

    $$\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi}, \qquad \Big(\sum_{\chi \in J_{r_1,m}} |(V^{2k+1})^*V^{2k+1}|^{\chi}_{\chi}\Big)^2, \qquad \sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda}, \qquad \Big(\sum_{\lambda \in I_{s_1,q}} |U^{2l+1}(U^{2l+1})^*|^{\lambda}_{\lambda}\Big)^2.$$

  5. Next, we have two equivalent ways.

    1. For equality (4.11):

      1. Compute the matrix $C_1 = \tilde{\Upsilon}C = (c^{(1)}_{.g})$.

      2. Generate the matrix

        $$[\zeta_{ig}]_{n\times q} = \Big[\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(c^{(1)}_{.g})\big)^{\chi}_{\chi}\Big]_{n\times q}.$$

      3. Compute the row vectors

        $$[\omega^{(1)}_{i.}]_{1\times p} = \Big[\sum_{g=1}^{q} \zeta_{ig}\, \tilde{\omega}_{g.}\Big]_{1\times p}.$$

      4. Compute and return $X$ by (4.11).

      4. Compute and return X by (4.11).

    2. For equality (4.13):

      1. Compute the matrix $C_2 = C\tilde{\Omega} = (c^{(2)}_{f.})$.

      2. Generate the matrix

        $$[\mu_{fj}]_{m\times p} = \Big[\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(c^{(2)}_{f.})\big)^{\lambda}_{\lambda}\Big]_{m\times p}.$$

      3. Compute the column vectors

        $$[\upsilon^{(1)}_{.j}]_{n\times 1} = \Big[\sum_{f=1}^{m} \tilde{\upsilon}_{.f}\, \mu_{fj}\Big]_{n\times 1}.$$

      4. Compute and return $X$ by (4.13).

      4. Compute and return X by (4.13).

Algorithms induced by Corollaries 4.1–4.3 can be proposed in a similar way.

Furthermore, we note that the D -representations for the solution (3.2) were explored completely in [43].

In the particular case $p = q$ and $N = I_p = B$, in accordance with Corollary 3.2, the following holds.

Corollary 4.4

Let $\{A, W\} \in \mathbb{H}^{(m,n)}(k)$, $\operatorname{rank}(AW)^k = \operatorname{rank}(V^k) = r_1$, and $C \in \mathbb{H}^{m\times q}$. Then, $X = [x_{ij}] \in \mathbb{H}^{n\times q}$ defined by (3.3) is expressible element-wise in one of the following two cases:

(1) If $V = AW$ is arbitrary, then

$$x_{ij} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\upsilon^{(1)}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |(V^{2k+1})^*V^{2k+1}|^{\chi}_{\chi}\big)^2},$$

with $\upsilon^{(1)}_{.j}$ denoting the $j$-th column of $\Upsilon_1 = A^*V^{k+1}\Upsilon C$, where $\Upsilon$ is expressed by (4.7).

(2) If $V = AW$ is Hermitian, then

$$x_{ij} = \dfrac{\sum_{\chi \in J_{r,n}\{i\}} \operatorname{cdet}_i \big((A^*A)_{.i}(\upsilon^{(1)}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{r,n}} |A^*A|^{\chi}_{\chi} \big(\sum_{\chi \in J_{r_1,m}} |V^{k+2}|^{\chi}_{\chi}\big)^2}.$$

Here, $\Upsilon = (\upsilon_{tj})$ is defined by (4.10).

In the particular case $m = n$ and $W = I_n = A$, we have the next result.

Corollary 4.5

Let $\{B, N\} \in \mathbb{H}^{(p,q)}(l)$, $\operatorname{rank}(NB)^l = \operatorname{rank}(U^l) = s_1$, and $C \in \mathbb{H}^{n\times q}$. Then, $X = [x_{ij}] \in \mathbb{H}^{n\times p}$ defined by (3.4) can be expressed element-wise in one of the following two cases:

(1) If $U = NB$ is arbitrary, then

$$x_{ij} = \dfrac{\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\omega^{(1)}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{2l+1}(U^{2l+1})^*|^{\lambda}_{\lambda}\big)^2},$$

where $\omega^{(1)}_{i.}$ is the $i$-th row of $\Omega_1 = C\,\Omega\,U^{l+1}B^*$, and $\Omega$ can be found by (4.2) given that $\hat{\Phi} \coloneqq U\Phi U^{2l}(U^{2l+1})^* \in \mathbb{H}^{q\times q}$, where $\Phi$ is of appropriate size and index.

(2) If $U = NB$ is Hermitian, then

$$x_{ij} = \dfrac{\sum_{\lambda \in I_{s,p}\{j\}} \operatorname{rdet}_j \big((BB^*)_{j.}(\omega^{(1)}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\lambda \in I_{s,p}} |BB^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{s_1,q}} |U^{l+2}|^{\lambda}_{\lambda}\big)^2},$$

where $\omega^{(1)}_{i.}$ is the $i$-th row of $\Omega_1 = C\,\Omega\,U^{l+1}B^*$. The matrix $\Omega$ can be found by (4.5) with an appropriate size and index.

Now, we derive $D$-representations of (3.5) and its particular cases (3.6) and (3.7).

Theorem 4.2

Let us assume that $\binom{A,W}{B,N} \in \mathbb{H}_{r,s}^{\binom{m,n}{p,q}}\binom{k}{l}$, $\operatorname{rank}(WA)^k = \operatorname{rank}(U^k) = r_1$, $\operatorname{rank}(BN)^l = \operatorname{rank}(V^l) = s_1$, and $C \in \mathbb{H}^{m\times q}$. The matrix $X = [x_{ij}] \in \mathbb{H}^{n\times p}$ of the form (3.5) is given by one of the following representations:

(1) In the arbitrary case,

(4.16) $x_{ij} = \dfrac{\hat{c}_{ij}}{\sum_{\lambda \in I_{r,m}} |AA^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{r_1,n}} |U^{2k+1}(U^{2k+1})^*|^{\lambda}_{\lambda}\big)^2 \sum_{\chi \in J_{s,q}} |B^*B|^{\chi}_{\chi} \big(\sum_{\chi \in J_{s_1,p}} |(V^{2l+1})^*V^{2l+1}|^{\chi}_{\chi}\big)^2},$

where $\hat{C} = (\hat{c}_{ij}) = \hat{\Omega} C \hat{\Upsilon}$. Here, the matrices $\hat{\Omega} = (\hat{\omega}_{if})$ and $\hat{\Upsilon} = (\hat{\upsilon}_{gj})$ are determined by the following:

(4.17) $\hat{\omega}_{if} = \sum_{\lambda \in I_{r,m}\{f\}} \operatorname{rdet}_f \big((AA^*)_{f.}(\tilde{\omega}_{i.})\big)^{\lambda}_{\lambda},$

(4.18) $\hat{\upsilon}_{gj} = \sum_{\chi \in J_{s,q}\{g\}} \operatorname{cdet}_g \big((B^*B)_{.g}(\tilde{\upsilon}_{.j})\big)^{\chi}_{\chi},$

where $\tilde{\omega}_{i.}$ is the $i$th row of $\tilde{\Omega} = \Omega U^{k+1}A^*$ and $\tilde{\upsilon}_{.j}$ denotes the $j$th column of $\tilde{\Upsilon} = B^*V^{l+1}\Upsilon$. Here, the matrices $\Omega$ and $\Upsilon$ are determined by (4.2) and (4.7), respectively, with appropriate sizes and indices.

(2) When both matrices $U = WA$ and $V = BN$ are Hermitian, then

(4.19) $x_{ij} = \dfrac{\hat{c}_{ij}}{\sum_{\lambda \in I_{r,m}} |AA^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{r_1,n}} |U^{k+2}|^{\lambda}_{\lambda}\big)^2 \sum_{\chi \in J_{s,q}} |B^*B|^{\chi}_{\chi} \big(\sum_{\chi \in J_{s_1,p}} |V^{l+2}|^{\chi}_{\chi}\big)^2},$

where the matrix $\hat{C} = (\hat{c}_{ij}) = \hat{\Omega} C \hat{\Upsilon}$ is defined as in case (1), with the difference that $\Omega$ and $\Upsilon$ are determined by (4.5) and (4.10), respectively, with appropriate sizes and indices.

(3) When the matrix $U = WA$ is Hermitian and $V = BN$ is arbitrary, then

$$x_{ij} = \dfrac{\hat{c}_{ij}}{\sum_{\lambda \in I_{r,m}} |AA^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{r_1,n}} |U^{k+2}|^{\lambda}_{\lambda}\big)^2 \sum_{\chi \in J_{s,q}} |B^*B|^{\chi}_{\chi} \big(\sum_{\chi \in J_{s_1,p}} |(V^{2l+1})^*V^{2l+1}|^{\chi}_{\chi}\big)^2}.$$

Here, the matrices $\Omega$ and $\Upsilon$ are determined by (4.5) and (4.7), respectively, with appropriate sizes and indices.

(4) When the matrix $U = WA$ is arbitrary and $V = BN$ is Hermitian, then

$$x_{ij} = \dfrac{\hat{c}_{ij}}{\sum_{\lambda \in I_{r,m}} |AA^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{r_1,n}} |U^{2k+1}(U^{2k+1})^*|^{\lambda}_{\lambda}\big)^2 \sum_{\chi \in J_{s,q}} |B^*B|^{\chi}_{\chi} \big(\sum_{\chi \in J_{s_1,p}} |V^{l+2}|^{\chi}_{\chi}\big)^2}.$$

Here, the matrices $\Omega$ and $\Upsilon$ are determined by (4.2) and (4.10), respectively, with appropriate sizes and indices.

Proof

(1) According to (3.5) and the representations (4.1) of the $W$-DMP inverse $A^{D,\dagger,W} = (a^{D,\dagger,W}_{ij})$ and (4.6) of the $N$-MPD inverse $B^{\dagger,D,N} = (b^{\dagger,D,N}_{ij})$, one obtains

(4.20) $x_{ij} = \sum_{f=1}^{m}\sum_{g=1}^{q} a^{D,\dagger,W}_{if}\, c_{fg}\, b^{\dagger,D,N}_{gj} = \sum_{f=1}^{m}\sum_{g=1}^{q} \dfrac{\sum_{\lambda \in I_{r,m}\{f\}} \operatorname{rdet}_f \big((AA^*)_{f.}(\tilde{\omega}_{i.})\big)^{\lambda}_{\lambda}}{\sum_{\lambda \in I_{r,m}} |AA^*|^{\lambda}_{\lambda} \big(\sum_{\lambda \in I_{r_1,n}} |U^{2k+1}(U^{2k+1})^*|^{\lambda}_{\lambda}\big)^2}\, c_{fg} \times \dfrac{\sum_{\chi \in J_{s,q}\{g\}} \operatorname{cdet}_g \big((B^*B)_{.g}(\tilde{\upsilon}_{.j})\big)^{\chi}_{\chi}}{\sum_{\chi \in J_{s,q}} |B^*B|^{\chi}_{\chi} \big(\sum_{\chi \in J_{s_1,p}} |(V^{2l+1})^*V^{2l+1}|^{\chi}_{\chi}\big)^2},$

where $\tilde{\omega}_{i.}$ (resp. $\tilde{\upsilon}_{.j}$) denotes the $i$th row of $\tilde{\Omega} = \Omega(WA)^{k+1}A^*$ (resp. the $j$th column of $\tilde{\Upsilon} = B^*(BN)^{l+1}\Upsilon$). The matrices $\Upsilon$ and $\Omega$ are determined by (4.7) and (4.2), respectively, with appropriate sizes and indices. In the equality (4.20), the row determinant cannot be multiplied by $c_{fg}$ from the right, nor the column determinant from the left. So, convolutions similar to those made in Theorem 4.1 cannot be used. Denote

$$\hat{\omega}_{if} = \sum_{\lambda \in I_{r,m}\{f\}} \operatorname{rdet}_f \big((AA^*)_{f.}(\tilde{\omega}_{i.})\big)^{\lambda}_{\lambda}, \qquad \hat{\upsilon}_{gj} = \sum_{\chi \in J_{s,q}\{g\}} \operatorname{cdet}_g \big((B^*B)_{.g}(\tilde{\upsilon}_{.j})\big)^{\chi}_{\chi},$$

and construct the matrices $\hat{\Omega} = (\hat{\omega}_{if})$ and $\hat{\Upsilon} = (\hat{\upsilon}_{gj})$. By putting $\hat{\Omega} C \hat{\Upsilon} = \hat{C} = (\hat{c}_{ij})$, the equality (4.16) holds.

(2) If both matrices U = W A and V = B N are Hermitian, the D -representations of A D , , W and B , D , N are expressed by (4.4) and (4.9), respectively. This yields equation (4.19).

The mixed cases (3) and (4) can be obtained in a similar way.□

From Theorem 4.2, the particular one-sided cases of (3.5) follow directly.

Let p = q and N = I p = B . Then, in accordance with Corollary 3.4, we have the next corollary.

Corollary 4.6

Suppose the conditions { A , W } H ( m , n ) ( k ) , rank ( W A ) k = rank ( U k ) = r 1 , and C H m × n . The matrix X = [ x i j ] H n × n given by (3.6) is defined in one of the following representations:

(1) If U = W A is arbitrary, then

x i j = c ˆ i j ( 1 ) χ I r , m A A λ λ λ I r 1 , n U 2 k + 1 ( U 2 k + 1 ) λ λ 2 ,

where C ^ 1 = ( c ˆ i j ( 1 ) ) = Ω ^ C . Here, the matrix Ω ^ = ( ω ^ i f ) is determined by equality (4.17) with Ω that is defined by (4.2).

(2) When U = W A is Hermitian, then

x i j = c ˆ i j ( 1 ) χ I r , m A A λ λ λ I r 1 , n U k + 2 λ λ 2 .

In this case, Ω is determined by (4.5).

Now, let m = n and W = I n = A . Then, in accordance with Corollary 3.5, the next corollary follows.

Corollary 4.7

Let { B , N } H ( p , q ) ( l ) , rank ( B N ) l = rank ( V l ) = s 1 , and C H n × q . The matrix X = [ x i j ] H n × p of the form (3.6) is determined in one of the following representations:

(1) When V = B N is arbitrary, then

x i j = c ˆ i j ( 2 ) χ J s , q B B χ χ χ J s 1 , p ( V 2 l + 1 ) V 2 l + 1 χ χ 2 ,

where C ^ 2 = ( c ˆ i j ( 2 ) ) = C ϒ ^ . Here, ϒ ^ = ( υ ^ g j ) is determined by (4.18), with ϒ defined by (4.7) with an appropriate size and index.

(2) When V = B N is Hermitian, then

x i j = c ˆ i j ( 2 ) χ J s , q B B χ χ χ J s 1 , p V l + 2 χ χ 2 .

In this case, ϒ is determined by (4.10).

Theorem 4.2 leads to the following algorithm for computing X H n × p by (3.5).

Algorithm 4.2

  1. Calculate the matrices U = W A and V = B N and the values r = rank ( U ) , s = rank ( V ) , k = Ind ( U ) , l = Ind ( V ) , r 1 = rank ( U k ) , and s 1 = rank ( V l ) .

  2. Find the matrices U ˜ = U 2 k ( U 2 k + 1 ) and Φ by (4.3) and Φ ˜ = U Φ U 2 k ( U 2 k + 1 ) , then generate the matrix Ω by (4.2), and Ω ˜ = Ω U k + 1 A .

  3. Find the matrices V ˜ = ( V 2 l + 1 ) V 2 l and Ψ by (4.8) and Ψ ˜ = ( V 2 l + 1 ) V 2 l Ψ V , then generate the matrix ϒ by (4.7), and ϒ ˜ = B V l + 1 ϒ .

  4. Compute values of minor sums χ I r , m A A λ λ , λ I r 1 , n U 2 k + 1 ( U 2 k + 1 ) λ λ 2 , χ J s , q B B χ χ , and χ J s 1 , p ( V 2 l + 1 ) V 2 l + 1 χ χ 2 .

  5. Generate the matrices Ω ^ by (4.17) and ϒ ^ by (4.18).

  6. Find C ^ = ( c ˆ i j ) = Ω ^ C ϒ ^ .

  7. Compute and return X by (4.16).
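Step 1 of Algorithm 4.2 requires only ranks and indices. As a minimal numerical sketch (over the complex field, with quaternion arithmetic omitted, and with the helper name `matrix_index` being ours), the index Ind(A) can be computed as the smallest k with rank(A^k) = rank(A^(k+1)):

```python
import numpy as np

def matrix_index(A, tol=1e-10):
    """Ind(A): the smallest k >= 0 with rank(A^k) == rank(A^(k+1))."""
    k, P = 0, np.eye(A.shape[0])      # P holds A^k, starting from A^0 = I
    while np.linalg.matrix_rank(P, tol) != np.linalg.matrix_rank(P @ A, tol):
        P, k = P @ A, k + 1
    return k

# Ind of a nilpotent 2x2 Jordan block is 2; an invertible matrix has index 0.
N = np.array([[0., 1.], [0., 0.]])
print(matrix_index(N), matrix_index(np.eye(3)))   # 2 0
```

The same stabilization-of-rank criterion yields r 1 = rank ( U k ) and s 1 = rank ( V l ) once k and l are known.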

Algorithms induced by Corollaries 4.1–4.3 can be proposed in a similar way.

Now, we obtain the D -representations for the solution (3.8).

Theorem 4.3

Let { A , W ; B , N } H r , s ( m , n ; p , q ) ( k , l ) and C H m × q . Denote U 1 = W A and U 2 = N B . Suppose that rank ( U 1 k ) = r 1 and rank ( U 2 l ) = s 1 . The matrix X = [ x i j ] H n × p possessing the form (3.8) is expressed element-wise in one of the following representations:

(1) In arbitrary case, it follows

(4.21) x i j = λ I s , p { j } rdet j ( ( B B ) j . ( λ ˜ i . ) ) λ λ λ I r , m A A λ λ λ I r 1 , n U 1 2 k + 1 ( U 1 2 k + 1 ) λ λ 2 λ I s , p B B λ λ λ I s 1 , q U 2 2 l + 1 ( U 2 2 l + 1 ) λ λ 2 ,

where

(4.22) λ ˜ i . = f = 1 m g = 1 q λ I r , m { f } rdet f ( ( A A ) f . ( ω ˜ i . ( 1 ) ) ) λ λ c f g ω ˜ g . ( 2 ) = f = 1 m g = 1 q λ i f c f g ω ˜ g . ( 2 ) .

Here, ω ˜ i . ( 1 ) is the i th row of Ω ˜ 1 = Ω 1 U 1 k + 1 A and ω ˜ g . ( 2 ) is the g th row of Ω ˜ 2 = Ω 2 U 2 l + 1 B . The matrices Ω i , i = 1 , 2 , are determined by (4.2) with appropriate sizes and indices.

(2) When the both matrices U 1 = W A and U 2 = N B are Hermitian, then

(4.23) x i j = λ I s , p { j } rdet j ( ( B B ) j . ( λ ˜ i . ) ) λ λ λ I r , m A A λ λ λ I r 1 , n U 1 k + 2 λ λ 2 λ I s , p B B λ λ λ I s 1 , q U 2 l + 2 λ λ 2 ,

where λ ˜ i . is determined by (4.22), with the matrices Ω i , i = 1 , 2 , determined by (4.5) with appropriate sizes and indices.

(3) When the matrix U 1 = W A is Hermitian and U 2 = N B is arbitrary, then

x i j = λ I s , p { j } rdet j ( ( B B ) j . ( λ ˜ i . ) ) λ λ λ I r , m A A λ λ λ I r 1 , n U 1 k + 2 λ λ 2 λ I s , p B B λ λ λ I s 1 , q U 2 2 l + 1 ( U 2 2 l + 1 ) λ λ 2 .

Here, the matrices Ω 1 and Ω 2 are determined by (4.5) and (4.2), respectively, with appropriate sizes and indices.

(4) When the matrix U 1 = W A is arbitrary and U 2 = N B is Hermitian, then

x i j = λ I s , p { j } rdet j ( ( B B ) j . ( λ ˜ i . ) ) λ λ λ I r , m A A λ λ λ I r 1 , n U 1 2 k + 1 ( U 1 2 k + 1 ) λ λ 2 λ I s , p B B λ λ λ I s 1 , q U 2 l + 2 λ λ 2 .

In this case, the matrices Ω 1 and Ω 2 are determined by (4.2) and (4.5), respectively, with appropriate sizes and indices.

Proof

According to (3.8) and the representations (4.1) of A D , , W = ( a i j D , , W ) and B D , , N = ( b i j D , , N ) , one obtains

(4.24) x i j = f = 1 m g = 1 q a i f D , , W c f g b g j D , , N = f = 1 m g = 1 q λ I r , m { f } rdet f ( ( A A ) f . ( ω ˜ i . ( 1 ) ) ) λ λ λ I r , m A A λ λ λ I r 1 , n U 1 2 k + 1 ( U 1 2 k + 1 ) λ λ 2 c f g × λ I s , p { j } rdet j ( ( B B ) j . ( ω ˜ g . ( 2 ) ) ) λ λ λ I s , p B B λ λ λ I s 1 , q U 2 2 l + 1 ( U 2 2 l + 1 ) λ λ 2 ,

where ω ˜ i . ( 1 ) is the i th row of Ω ˜ 1 = Ω 1 U 1 k + 1 A and ω ˜ g . ( 2 ) is the g th row of Ω ˜ 2 = Ω 2 U 2 l + 1 B . The matrices Ω i , i = 1 , 2 , are determined by (4.2) with appropriate sizes and indices.

To obtain more expressive formulas, we make convolutions of (4.24), using the successive designations

λ i f = λ I r , m { f } rdet f ( ( A A ) f . ( ω ˜ i . ( 1 ) ) ) λ λ , λ ˜ i . = f = 1 m g = 1 q λ i f c f g ω ˜ g . ( 2 ) ,

which give equation (4.21).

(2) If both matrices U 1 = W A and U 2 = N B are Hermitian, then the D -representations of A D , , W and B D , , N are expressed by (4.4). This provides equation (4.23).

The mixed cases (3) and (4) follow in a similar way.□

Theorem 4.3 induces the following algorithm for computing X H n × p by (3.8).

Algorithm 4.3

  1. Compute the matrices U 1 = W A and U 2 = N B , and the values r = rank ( U 1 ) , s = rank ( U 2 ) , k = Ind ( U 1 ) , l = Ind ( U 2 ) , r 1 = rank ( U 1 k ) , and s 1 = rank ( U 2 l ) .

  2. Find the matrices U ˜ 1 = U 1 2 k ( U 1 2 k + 1 ) and U ˜ 2 = U 2 2 l ( U 2 2 l + 1 ) .

  3. By (4.3), generate the matrices Φ i and compute Φ ^ i = U i Φ i U ˜ i for all i = 1 , 2 .

  4. By (4.2), generate the matrices Ω i , i = 1 , 2 , and compute Ω ˜ 1 = Ω 1 U 1 k + 1 A and Ω ˜ 2 = Ω 2 U 2 l + 1 B .

  5. Compute the values of minor sums

    λ I r , m A A λ λ , λ I r 1 , n U 1 2 k + 1 ( U 1 2 k + 1 ) λ λ 2 , λ I s , p B B λ λ , and λ I s 1 , q U 2 2 l + 1 ( U 2 2 l + 1 ) λ λ 2 .

  6. Generate the matrix

    [ λ i f ] n × m = λ I r , m { f } rdet f ( ( A A ) f . ( ω ˜ i . ( 1 ) ) ) λ λ H n × m .

  7. Compute the row vector

    λ ˜ i . = f = 1 m g = 1 q λ i f c f g ω ˜ g . ( 2 ) H 1 × p .

  8. Compute and return X by (4.21).
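The blocks U 1 2 k + 1 ( U 1 2 k + 1 ) appearing above echo the known representation A^D = A^k (A^(2k+1))^+ A^k, valid for any k >= Ind(A). A minimal sketch over the complex field (quaternion arithmetic omitted; function names are ours) computes the Drazin inverse this way and assembles the DMP inverse A^D A A^+ and the MPD inverse A^+ A A^D from it:

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, valid for k >= Ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def dmp(A, k):
    """DMP inverse A^(D,+) = A^D A A^+ (Malik-Thome definition)."""
    return drazin(A, k) @ A @ np.linalg.pinv(A)

def mpd(A, k):
    """MPD inverse A^(+,D) = A^+ A A^D."""
    return np.linalg.pinv(A) @ A @ drazin(A, k)

A = np.array([[2., 1.], [0., 0.]])        # Ind(A) = 1, and A^D = A / 4
AD = drazin(A, 1)
assert np.allclose(AD, A / 4)
assert np.allclose(AD @ A @ AD, AD)       # X A X = X
assert np.allclose(A @ A @ AD, A)         # A^(k+1) X = A^k with k = 1
assert np.allclose(A @ AD, AD @ A)        # A X = X A
```

The assertions check the three defining Drazin equations on a small matrix of index one; the DMP and MPD inverses built from this Drazin inverse are outer inverses of A.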

Finally, we provide D -representations for the solution (3.11).

Theorem 4.4

Suppose { A , W ; B , N } H ( m , n ; p , q ) ( k , l ) and C H m × q . Denote V 1 = A W and V 2 = B N . Suppose that rank ( V 1 k ) = r 1 and rank ( V 2 l ) = s 1 . The matrix X = [ x i j ] H n × p defined as in (3.11) is expressed in one of the following representations:

(1) In arbitrary case, it follows

(4.25) x i j = χ J r , n { i } cdet i ( ( A A ) . i ( μ ˜ . j ) ) χ χ χ J r , n A A χ χ χ J r 1 , m ( V 1 2 k + 1 ) V 1 2 k + 1 χ χ 2 χ J s , q B B χ χ χ J s 1 , p ( V 2 2 l + 1 ) V 2 2 l + 1 χ χ 2 ,

where

μ ˜ . j = f = 1 m g = 1 q υ ˜ . f ( 1 ) c f g χ J s , q { g } cdet g ( ( B B ) . g ( υ ˜ . j ( 2 ) ) ) χ χ = f = 1 m g = 1 q υ ˜ . f ( 1 ) c f g μ g j .

Here, υ ˜ . f ( 1 ) is the f th column of ϒ ˜ 1 = A V 1 k + 1 ϒ 1 and υ ˜ . j ( 2 ) is the j th column of ϒ ˜ 2 = B V 2 l + 1 ϒ 2 . The matrices ϒ z , z = 1 , 2 , are determined by (4.7) with appropriate sizes and indices.

(2) When both matrices V 1 = A W and V 2 = B N are Hermitian, then

(4.26) x i j = χ J r , n { i } cdet i ( ( A A ) . i ( μ ˜ . j ) ) χ χ χ J r , n A A χ χ χ J r 1 , m V 1 k + 2 χ χ 2 χ J s , q B B χ χ χ J s 1 , p V 2 l + 2 χ χ 2 .

In this case, both matrices ϒ z , z = 1 , 2 , are determined by (4.10) with appropriate sizes and indices.

(3) When the matrix V 1 = A W is Hermitian and V 2 = B N is arbitrary, then

x i j = χ J r , n { i } cdet i ( ( A A ) . i ( μ ˜ . j ) ) χ χ χ J r , n A A χ χ χ J r 1 , m V 1 k + 2 χ χ 2 χ J s , q B B χ χ χ J s 1 , p ( V 2 2 l + 1 ) V 2 2 l + 1 χ χ 2 .

Here, ϒ 1 and ϒ 2 are determined by (4.10) and (4.7), respectively, with appropriate sizes and indices.

(4) When the matrix V 1 = A W is arbitrary and V 2 = B N is Hermitian, then

x i j = χ J r , n { i } cdet i ( ( A A ) . i ( μ ˜ . j ) ) χ χ χ J r , n A A χ χ χ J r 1 , m ( V 1 2 k + 1 ) V 1 2 k + 1 χ χ 2 χ J s , q B B χ χ χ J s 1 , p V 2 l + 2 χ χ 2 .

Here, ϒ 1 and ϒ 2 are determined by (4.7) and (4.10), respectively, with appropriate sizes and indices.

Proof

According to (3.11) and the representations (4.6) of the W -MPD inverse A , D , W = ( a i j , D , W ) and the N -MPD inverse B , D , N = ( b i j , D , N ) , one obtains

(4.27) x i j = f = 1 m g = 1 q a i f , D , W c f g b g j , D , N = f = 1 m g = 1 q χ J r , n { i } cdet i ( ( A A ) . i ( υ ˜ . f ( 1 ) ) ) χ χ χ J r , n A A χ χ χ J r 1 , m ( V 1 2 k + 1 ) V 1 2 k + 1 χ χ 2 c f g × χ J s , q { g } cdet g ( ( B B ) . g ( υ ˜ . j ( 2 ) ) ) χ χ χ J s , q B B χ χ χ J s 1 , p ( V 2 2 l + 1 ) V 2 2 l + 1 χ χ 2 ,

where υ ˜ . f ( 1 ) is the f th column of ϒ ˜ 1 = A V 1 k + 1 ϒ 1 and υ ˜ . j ( 2 ) denotes the j th column of ϒ ˜ 2 = B V 2 l + 1 ϒ 2 . Matrices ϒ z , ( z = 1 , 2 ), are determined by (4.7) with appropriate dimensions and indices.

To derive meaningful representations, we make convolutions of (4.27), carrying out the following consecutive designations:

μ g j = χ J s , q { g } cdet g ( ( B B ) . g ( υ ˜ . j ( 2 ) ) ) χ χ , μ ˜ . j = f = 1 m g = 1 q υ ˜ . f ( 1 ) c f g μ g j .

Then equality (4.25) follows.

(2) If both matrices V 1 = A W and V 2 = B N are Hermitian, then the D -representations of A , D , W and B , D , N are expressed by (4.9). This gives equality (4.26).

The mixed cases (3) and (4) can be obtained in a similar way.□

Theorem 4.4 yields an algorithm similar to Algorithm 4.3.

5 Example

Consider

A = 1 i 0 j 0 k , W = k i j 0 0 1 , B = 0 i 0 k 1 i 1 0 0 1 k j , N = k 0 i 0 j k 0 1 0 1 0 k , and C = k j 0 i 0 j .

Since

V = A W = 2 k i i 2 k , W A = 0 j j j k 0 j 0 k ,

and rank ( A W ) 2 = rank ( A W ) = rank ( W A ) 2 = rank ( W A ) = 2 , we have k = Ind ( A , W ) = 1 . Similarly, from

B N = k j 0 i 1 j i + k j 1 + j k 0 i 0 i + k 1 j i i k , U = N B = i j 0 0 k 0 0 0 0 ,

and rank ( B N ) = 3 , rank ( N B ) 2 = rank ( N B ) = 2 , rank ( B N ) 3 = rank ( B N ) 2 = 2 , it follows that l = Ind ( B , N ) = 2 .
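Such ranks and indices over H can be checked numerically through the complex-adjoint representation: writing a quaternion matrix as Q = A1 + A2 j with complex blocks A1 and A2, the map chi(Q) = [[A1, A2], [-conj(A2), conj(A1)]] preserves products, and the rank of Q over H is half the complex rank of chi(Q). A minimal sketch follows (helper names are ours; the test matrix below is illustrative, not one of the matrices of this example):

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint of the quaternion matrix Q = A1 + A2*j."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def quat_rank(A1, A2):
    """Rank over H equals half the complex rank of the adjoint."""
    return np.linalg.matrix_rank(chi(A1, A2)) // 2

# Q = [[1, i], [j, -k]]: the second row is j times the first, so rank 1.
A1 = np.array([[1, 1j], [0, 0]])
A2 = np.array([[0, 0], [1, -1j]])
assert quat_rank(A1, A2) == 1
assert quat_rank(np.eye(2, dtype=complex), np.zeros((2, 2))) == 2
```

Combining this rank function with a rank-stabilization loop over powers recovers indices such as Ind ( B , N ) as well.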

The unique solution X H 3 × 4 of (3.1) is found by following Algorithm 4.1.

  1. Calculate V = A W , U = N B , and

    V ˜ = ( V 3 ) V = 15 12 j 12 j 15 , ( V 3 ) V 3 = 45 36 j 36 j 45 ,

    U ˜ = U 2 ( U 5 ) = 6 i k 1 + j 0 2 + 3 j k 0 0 0 0 , U 5 ( U 5 ) = 14 3 i 2 k 0 3 i + 2 k 1 0 0 0 0 .

  2. By (4.7) and (4.8), generate, respectively, the matrices

    ϒ = 177147 0 0 177147 , Ψ = 243 0 0 243

    and compute

    ϒ ˜ = A V 2 ϒ = 531441 1 j i 0 0 k , Ψ ˜ = ( V 3 ) V 2 Ψ V = 10935 8748 j 8748 j 10935 .

  3. On the other hand, by (4.3), generate the matrix

    Φ = i 2 j 0 0 k 0 0 0 0

    and compute

    Φ ˜ = U Φ U 4 ( U 5 ) = 6 i k 1 + j 0 2 + 3 j k 0 0 0 0 .

    Furthermore, by (4.2) it follows that

    Ω = i 2 j 0 0 k 0 0 0 0 , then Ω ˜ = Ω U 3 B = 0 k 1 1 i 1 0 k 0 0 0 0 0 .

  4. Compute the values of minor sums

    χ J 2 , 3 A A χ χ = 3 , χ J 2 , 2 ( V 3 ) V 3 χ χ 2 = 729 2 = 531441 ,

    λ I 3 , 4 B B λ λ = 2 , λ I 2 , 3 U 5 ( U 5 ) λ λ 2 = 1 .

  5. Compute the matrices

    C 1 = ϒ C = 531441 0 j 1 j k 0 j 0 0 , Λ = 531441 0 j 1 3 j 2 k j 3 j k 2 k .

  6. Find the matrix

    Ω 1 = Λ Ω ˜ = 531441 k j 0 i 2 j 3 i + 2 k 3 j 2 + 3 j j 3 i k 3 j 1 3 j .

  7. Finally, by (4.11), compute and return

    X = 1 3 k 0 0 0 2 j 0 3 j 2 j 0 3 j 0 .

6 Conclusion

The solvability of several novel constrained quaternion matrix equations (CQMEs) is investigated. Constraints are expressed in terms of right/left range space and right/left null space assumptions on solutions relative to the input matrices. It is essential to mention that the presented results remain original even in the domain of complex matrices.

The obtained solutions to the considered CQMEs follow the general patterns X = A , D , W C B D , , N and X = A , D , W C B , D , N , involving the W-MPD/W-DMP inverse of A and the N-DMP/N-MPD inverse of B . Particular choices of the considered solutions lead to the best-approximate solution X = A C B and the Drazin-inverse solution X = A D C B D . The obtained results confirm the theoretical importance of weighted DMP and MPD inverses.
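For the unrestricted equation A X B = C over the complex field with a consistent right-hand side, the best-approximate solution mentioned above reduces to X = A^+ C B^+; a small illustrative numpy sketch (matrices chosen at random, quaternion arithmetic omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))          # generically of full column rank
B = rng.standard_normal((2, 5))          # generically of full row rank
C = A @ rng.standard_normal((3, 2)) @ B  # consistent right-hand side

# Best-approximate (minimum-norm least-squares) solution of A X B = C
X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)
assert np.allclose(A @ X @ B, C)
```

For an inconsistent C, the same expression still minimizes the residual norm of A X B - C among all matrices of minimal norm.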

Determinantal representations of solutions to the novel CQMEs, together with solutions to several particular cases, are developed. An illustrative numerical example is presented.

Acknowledgements

Ivan I. Kyrchei thanks the Erwin Schrödinger Institute for Mathematics and Physics at the University of Vienna for the support given through the Special Research Fellowship Programme for Ukrainian Scientists.

  1. Funding information: Dijana Mosić and Predrag Stanimirović are supported from the Ministry of Education, Science and Technological Development, Republic of Serbia (Grants 451-03-68/2022-14/200124). Predrag Stanimirović is supported by the Science Fund of the Republic of Serbia (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications).

  2. Conflict of interest: The authors declare that there is no conflict of interest.

  3. Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

References

[1] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, 2nd edition, Canadian Mathematical Society, Springer, New York, 2003.

[2] I. I. Kyrchei, D. Mosić, and P. S. Stanimirović, MPD-DMP-solutions to quaternion two-sided restricted matrix equations, Comput. Appl. Math. 40 (2021), 177, doi: 10.1007/s40314-021-01566-8.

[3] I. I. Kyrchei, D. Mosić, and P. S. Stanimirović, MPCEP-∗CEPMP-solutions of some restricted quaternion matrix equations, Adv. Appl. Clifford Algebras 32 (2022), 16, doi: 10.1007/s00006-021-01192-x.

[4] R. E. Cline and T. N. E. Greville, A Drazin inverse for rectangular matrices, Linear Algebra Appl. 29 (1980), 53–62, doi: 10.1016/0024-3795(80)90230-X.

[5] N. Castro-González and J. Y. Velez-Cerrada, The weighted Drazin inverse of perturbed matrices with related support idempotents, Appl. Math. Comput. 187 (2007), no. 2, 756–764, doi: 10.1016/j.amc.2006.08.154.

[6] A. Hernández, M. Lattanzi, and N. Thome, On some new pre-orders defined by weighted Drazin inverses, Appl. Math. Comput. 282 (2016), 108–116, doi: 10.1016/j.amc.2016.01.055.

[7] D. Mosić, Weighted pre-orders in a Banach algebra, Linear Algebra Appl. 533 (2017), 161–185, doi: 10.1016/j.laa.2017.07.027.

[8] D. Mosić and L. Wang, Weighted extended g-Drazin inverse, Aequat. Math. 94 (2020), 151–161, doi: 10.1007/s00010-019-00656-7.

[9] P. S. Stanimirović, V. N. Katsikis, and H. Ma, Representations and properties of the W-weighted Drazin inverse, Linear Multilinear Algebra 65 (2017), 1080–1096, doi: 10.1080/03081087.2016.1228810.

[10] Y. Wei, Integral representation of the W-weighted Drazin inverse, Appl. Math. Comput. 144 (2003), no. 1, 3–10, doi: 10.1016/S0096-3003(02)00386-7.

[11] Y. Wei, C. W. Woo, and T. Lei, A note on the perturbation of the W-weighted Drazin inverse, Appl. Math. Comput. 149 (2004), no. 2, 423–430, doi: 10.1016/S0096-3003(03)00150-4.

[12] O. M. Baksalary and G. Trenkler, Core inverse of matrices, Linear Multilinear Algebra 58 (2010), 681–697, doi: 10.1080/03081080902778222.

[13] K. Manjunatha Prasad and K. S. Mohana, Core-EP inverse, Linear Multilinear Algebra 62 (2014), no. 6, 792–802, doi: 10.1080/03081087.2013.791690.

[14] K. Manjunatha Prasad and M. David Raj, Bordering method to compute Core-EP inverse, Spec. Matrices 6 (2018), 193–200, doi: 10.1515/spma-2018-0016.

[15] S. B. Malik and N. Thome, On a new generalized inverse for matrices of an arbitrary index, Appl. Math. Comput. 226 (2014), 575–580, doi: 10.1016/j.amc.2013.10.060.

[16] L. S. Meng, The DMP inverse for rectangular matrices, Filomat 31 (2017), no. 19, 6015–6019, doi: 10.2298/FIL1719015M.

[17] D. Mosić, Weighted gDMP inverse of operators between Hilbert spaces, Bull. Korean Math. Soc. 55 (2018), 1263–1271.

[18] I. I. Kyrchei, Weighted quaternion core-EP, DMP, MPD, and CMP inverses and their determinantal representations, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 114 (2020), 198, doi: 10.1007/s13398-020-00930-3.

[19] D. E. Ferreyra, F. E. Levis, and N. Thome, Maximal classes of matrices determining generalized inverses, Appl. Math. Comput. 333 (2018), 42–52, doi: 10.1016/j.amc.2018.03.102.

[20] H. Ma, X. Gao, and P. S. Stanimirović, Characterizations, iterative method, sign pattern and perturbation analysis for the DMP inverse with its applications, Appl. Math. Comput. 378 (2020), 125196, doi: 10.1016/j.amc.2020.125196.

[21] D. Mosić, Maximal classes of operators determining some weighted generalized inverses, Linear Multilinear Algebra 68 (2020), no. 11, 2201–2220, doi: 10.1080/03081087.2019.1575328.

[22] F. Pablos Romo, On Drazin-Moore-Penrose inverses of finite potent endomorphisms, Linear Multilinear Algebra 69 (2021), no. 4, 627–647, doi: 10.1080/03081087.2019.1612834.

[23] A. Yu and C. Deng, Characterization of DMP inverse in Hilbert space, Calcolo 53 (2016), 331–341, doi: 10.1007/s10092-015-0151-2.

[24] M. Zhou and J. Chen, Integral representations of two generalized core inverses, Appl. Math. Comput. 333 (2018), 187–193, doi: 10.1016/j.amc.2018.03.085.

[25] I. I. Kyrchei, Determinantal representations of the quaternion core inverse and its generalizations, Adv. Appl. Clifford Algebras 29 (2019), 104, doi: 10.1007/s00006-019-1024-6.

[26] X. Liu and N. Cai, High-order iterative methods for the DMP inverse, J. Math. 2018 (2018), 8175935, doi: 10.1155/2018/8175935.

[27] D. Mosić and D. S. Djordjević, The gDMP inverse of Hilbert space operators, J. Spectr. Theor. 8 (2018), no. 2, 555–573, doi: 10.4171/JST/207.

[28] H. Zhu, On DMP inverses and m-EP elements in rings, Linear Multilinear Algebra 67 (2019), no. 4, 756–766, doi: 10.1080/03081087.2018.1432546.

[29] B. Wang, H. Du, and H. Ma, Perturbation bounds for DMP and CMP inverses of tensors via Einstein product, Comp. Appl. Math. 39 (2020), 28, doi: 10.1007/s40314-019-1007-1.

[30] Z.-H. He and Q.-W. Wang, A real quaternion matrix equation with applications, Linear Multilinear Algebra 61 (2013), no. 6, 725–740, doi: 10.1080/03081087.2012.703192.

[31] Z.-H. He and M. Wang, A quaternion matrix equation with two different restrictions, Adv. Appl. Clifford Algebras 31 (2021), 25, doi: 10.1007/s00006-021-01122-x.

[32] T. Klimchuk and V. V. Sergeichuk, Hermitian and nonnegative definite solutions of linear matrix equations, Spec. Matrices 2 (2014), 180–186, doi: 10.2478/spma-2014-0018.

[33] G.-J. Song and S. Yu, The solution of a generalized Sylvester quaternion matrix equation and its application, Adv. Appl. Clifford Algebras 27 (2017), 2473–2492, doi: 10.1007/s00006-017-0782-2.

[34] Q.-W. Wang and J. Jiang, Extreme ranks of (skew-)Hermitian solutions to a quaternion matrix equation, Electron. J. Linear Algebra 20 (2010), 552–557, doi: 10.13001/1081-3810.1393.

[35] X. Wang, Y. Li, and L. Dai, On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix equation AXB=C, Comput. Math. Appl. 65 (2013), 657–664, doi: 10.1016/j.camwa.2012.11.010.

[36] I. I. Kyrchei, Cramer’s rule for quaternionic systems of linear equations, J. Math. Sci. 155 (2008), no. 6, 839–858, doi: 10.1007/s10958-008-9245-6.

[37] I. I. Kyrchei, The theory of the column and row determinants in a quaternion linear algebra, in: A. R. Baswell (Ed.), Advances in Mathematics Research, vol. 15, Nova Science Publishers, New York, 2012, pp. 301–359.

[38] I. I. Kyrchei, Determinantal representation of the Moore-Penrose inverse matrix over the quaternion skew field, J. Math. Sci. 180 (2012), no. 1, 413–431, doi: 10.1007/s10958-011-0626-x.

[39] I. I. Kyrchei, Determinantal representations of the Drazin inverse over the quaternion skew field with applications to some matrix equations, Appl. Math. Comput. 238 (2014), 193–207, doi: 10.1016/j.amc.2014.03.125.

[40] I. I. Kyrchei, Cramer’s rules of η-(skew-)Hermitian solutions to the quaternion Sylvester-type matrix equations, Adv. Appl. Clifford Algebras 29 (2019), no. 3, 56, doi: 10.1007/s00006-019-0972-1.

[41] I. I. Kyrchei, Determinantal representations of general and (skew-)Hermitian solutions to the generalized Sylvester-type quaternion matrix equation, Abstr. Appl. Anal. 2019 (2019), 5926832, doi: 10.1155/2019/5926832.

[42] I. I. Kyrchei, Determinantal representations of the Drazin and W-weighted Drazin inverses over the quaternion skew field with applications, in: S. Griffin (Ed.), Quaternions: Theory and Applications, Nova Science Publishers, New York, 2017, pp. 201–275.

[43] I. I. Kyrchei, Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations, Linear Algebra Appl. 438 (2013), no. 1, 136–152, doi: 10.1016/j.laa.2012.07.049.

Received: 2022-08-05
Revised: 2022-12-04
Accepted: 2022-12-26
Published Online: 2023-01-21

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
