
Improved modified gradient-based iterative algorithm and its relaxed version for the complex conjugate and transpose Sylvester matrix equations

  • Zhengge Huang and Jingjing Cui
Published/Copyright: November 27, 2024

Abstract

In this article, we present two new algorithms, referred to as the improved modified gradient-based iterative (IMGI) algorithm and its relaxed version (IMRGI), for solving the complex conjugate and transpose (CCT) Sylvester matrix equations, which often arise in control theory, system theory, and so forth. Compared with the gradient-based iterative (GI) (A.-G. Wu, L.-L. Lv, and G.-R. Duan, Iterative algorithms for solving a class of complex conjugate and transpose matrix equations, Appl. Math. Comput. 217 (2011), 8343–8353) and the relaxed GI (RGI) (W.-L. Wang, C.-Q. Song, and S.-P. Ji, Iterative solution to a class of complex matrix equations and its application in time-varying linear system, J. Appl. Math. Comput. 67 (2021), 317–341) algorithms, the proposed ones make full use of the latest information and require fewer computations, which leads to higher computational efficiency. With the real representation of a complex matrix as a tool, we establish necessary and sufficient conditions for the convergence of the IMGI and the IMRGI algorithms. Finally, some numerical examples are given to illustrate the effectiveness and advantages of the proposed algorithms.

MSC 2010: 65F10; 65H10

1 Introduction

Sylvester matrix equations are often encountered in many scientific and engineering fields, such as signal processing, stability theory, observer design, control theory, matrix computation, pole assignment, optimal control, prediction, system theory, eigenstructure assignment, and so on; we refer to the related references [1–7] for details. Studying computational methods for different kinds of Sylvester matrix equations has therefore become an important subject in computational mathematics and control. This motivates us to establish effective algorithms for some types of Sylvester matrix equations in this work.

Because the Sylvester matrix equations appear widely in practical problems and have important applications in many fields, over the past few decades many researchers have devoted themselves to deriving a great variety of methods for solving them, such as Krylov subspace methods, neural network methods, direct methods, and so forth. The Krylov subspace methods are among the classical iterative methods for matrix equations and include the conjugate gradient method, the conjugate direction method, and so on. Besides, unlike numerical methods, artificial neural networks have the properties of high-speed parallel operation and potential hardware implementation. Many effective neural network methods have been developed for matrix equation problems, for example, the recurrent neural network method and the zeroing neural network method. Also, by making use of the matrix straightening operator and the Kronecker product, Sylvester matrix equations can be transformed into linear systems, whose exact solutions can be calculated by direct methods. However, this transformation leads to large-scale problems and consumes far more computing time and memory space as the dimensions increase. Hence, the research on iterative methods for matrix equations has attracted considerable attention from many researchers.

A large number of efficient iterative methods have been developed for solving different kinds of Sylvester matrix equations. For instance, Ding and Chen [8] proposed the gradient-based iterative (GI) algorithm for solving the Sylvester matrix equation $AX+XB=C$. Ding et al. [9] derived an iterative method for the generalized Sylvester matrix equation $AXB+CXD=E$. These methods are constructed by applying the hierarchical identification principle, which decomposes a system into several subsystems and then identifies the unknown parameters of each subsystem successively. Based on this idea, a great deal of work has been devoted to this direction. For example, Xie et al. [10] constructed the gradient-based and least squares-based iterative algorithms for the Sylvester-transpose matrix equation $AXB+CX^TD=F$. Song et al. [11] developed the GI algorithm for the coupled Sylvester-transpose matrix equations $\sum_{\eta=1}^{p}(A_{i\eta}X_{\eta}B_{i\eta}+C_{i\eta}X_{\eta}^{T}D_{i\eta})=F_{i}$ $(i=1,2,\ldots,N)$. By using the hierarchical identification principle, Wu et al. [12] designed an efficient algorithm for solving the extended Sylvester-conjugate matrix equation $AXB+C\bar{X}D=F$. Besides, Wu et al. [13] extended the GI algorithm to solve the coupled Sylvester-conjugate matrix equations $\sum_{\eta=1}^{p}(A_{i\eta}X_{\eta}B_{i\eta}+C_{i\eta}\bar{X}_{\eta}D_{i\eta})=F_{i}$ $(i=1,2,\ldots,N)$ and derived a sufficient condition for the convergence of the GI algorithm. Subsequently, Wu et al. [14] put forward the GI algorithm for solving the complex conjugate and transpose (CCT) matrix equation $\sum_{l=1}^{s_1}A_{l}XB_{l}+\sum_{l=1}^{s_2}C_{l}\bar{X}D_{l}+\sum_{l=1}^{s_3}G_{l}X^{T}H_{l}+\sum_{l=1}^{s_4}M_{l}X^{H}N_{l}=F$. Afterward, Beik et al. [1] proposed the GI algorithm for the generalized coupled Sylvester-transpose and conjugate matrix equations over a group of reflexive (anti-reflexive) matrices. For more iterative methods based on the hierarchical identification principle, we refer to [6,7,15–24] and the references therein. Moreover, the finite iterative methods are a kind of effective algorithms for the Sylvester matrix equations, which can calculate the solutions of many kinds of matrix equations within finitely many steps in the absence of round-off errors. Many finite iterative methods have been established. For instance, Li [25] developed a finite iterative method for solving the coupled Sylvester matrix equations with conjugate transpose. By introducing a real linear operator, Zhang [26] proposed a finite iterative algorithm for solving the complex generalized coupled Sylvester matrix equations. In addition, Yan and Ma [27] designed the biconjugate residual algorithm for computing the reflexive or antireflexive solutions of generalized coupled Sylvester matrix equations, and they also proposed a finite iterative algorithm to solve a class of generalized coupled Sylvester-conjugate matrix equations over the generalized Hamiltonian matrices in [28]. Quite recently, Ma and Yan [29] derived a modified conjugate gradient method to solve the general discrete-time periodic Sylvester matrix equations.

Note that the CCT Sylvester matrix equations are quite general and include several kinds of Sylvester matrix equations, such as the Sylvester matrix equations, the complex conjugate Sylvester matrix equations, the Sylvester-transpose matrix equations, and the conjugate transpose Sylvester matrix equations. This shows that the CCT Sylvester matrix equations appear in many practical problems and applications, and solving them is worth investigating. For example, according to [21], the continuous zeroing dynamics design for a time-varying linear system can be written as $\dot{L}(t)=-\lambda\,\mathrm{JMP}(L(t))$, where $\mathrm{JMP}(\cdot):\mathbb{C}^{n\times n}\rightarrow\mathbb{C}^{n\times n}$ is defined as an array of jmp functions:

$\mathrm{JMP}(L(t))=\begin{bmatrix} \mathrm{jmp}(l_{11}(t)) & \mathrm{jmp}(l_{12}(t)) & \cdots & \mathrm{jmp}(l_{1n}(t))\\ \mathrm{jmp}(l_{21}(t)) & \mathrm{jmp}(l_{22}(t)) & \cdots & \mathrm{jmp}(l_{2n}(t))\\ \vdots & \vdots & & \vdots\\ \mathrm{jmp}(l_{n1}(t)) & \mathrm{jmp}(l_{n2}(t)) & \cdots & \mathrm{jmp}(l_{nn}(t)) \end{bmatrix},$

where $\mathrm{jmp}(l_{ij}(t))=1$ if $l_{ij}(t)>0$ and $\mathrm{jmp}(l_{ij}(t))=0$ if $l_{ij}(t)\le 0$, and the time derivative is

$\dot{L}(t)=\dot{A}_1(t)Y(t)B_1(t)+A_1(t)\dot{Y}(t)B_1(t)+A_1(t)Y(t)\dot{B}_1(t)+\dot{A}_2(t)\bar{Y}(t)B_2(t)+A_2(t)\dot{\bar{Y}}(t)B_2(t)+A_2(t)\bar{Y}(t)\dot{B}_2(t)+\dot{A}_3(t)Y^T(t)B_3(t)+A_3(t)\dot{Y}^T(t)B_3(t)+A_3(t)Y^T(t)\dot{B}_3(t)+\dot{A}_4(t)Y^H(t)B_4(t)+A_4(t)\dot{Y}^H(t)B_4(t)+A_4(t)Y^H(t)\dot{B}_4(t).$

Substituting $\dot{L}(t)=-\lambda\,\mathrm{JMP}(L(t))$ into the aforementioned equation, it can be simplified to the following time-varying linear system:

(1) $A_1(t)\dot{Y}(t)B_1(t)+A_2(t)\dot{\bar{Y}}(t)B_2(t)+A_3(t)\dot{Y}^T(t)B_3(t)+A_4(t)\dot{Y}^H(t)B_4(t)=G(t),$

with $A_i(t),B_i(t),G(t)\in\mathbb{C}^{n\times n}$ being smoothly time-varying matrices and $Y(t)\in\mathbb{C}^{n\times n}$ being the unknown time-varying matrix to be determined. In addition, $t\in[0,t_f]\subseteq[0,+\infty)$, where $t_f$ stands for the final time. Then equation (1) can be rewritten as a CCT Sylvester matrix equation. Based on this fact, in this work we investigate effective and feasible iterative algorithms for the CCT Sylvester matrix equations of the form

(2) $A_1ZB_1+A_2\bar{Z}B_2+A_3Z^TB_3+A_4Z^HB_4=H,$

where $A_i,B_i,H\in\mathbb{C}^{n\times n}$ $(i=1,2,3,4)$ are given matrices and the unknown matrix $Z\in\mathbb{C}^{n\times n}$ is to be computed.

Until now, some different algorithms for the CCT Sylvester matrix equations have been proposed. Apart from the ones in [1,14,26] mentioned earlier, Zhang and Yin [30] offered a new proof of the GI algorithm for the CCT Sylvester matrix equations by making use of the properties of the real representation of a complex matrix, and deduced the necessary and sufficient conditions for the convergence, optimal parameter, and corresponding optimal convergence factor. Then the optimal GI (OGI) algorithm is obtained. Furthermore, on the basis of the relaxation technique used in [15,17], Wang et al. [21] introduced a relaxed factor into the GI algorithm and developed the relaxed GI (RGI) algorithm, and necessary and sufficient conditions for guaranteeing the convergence of the RGI algorithm are provided. Numerical results in [21] revealed that the RGI algorithm performs better than the GI and the OGI ones.

To improve the efficiency of the GI algorithm for the Sylvester matrix equations, many improved and modified versions of the GI algorithm have been established. For instance, Wang et al. [20] proposed the modified GI (MGI) algorithm for solving the Sylvester matrix equations, which uses the latest information to compute the next result. This idea has been extended to the generalized Sylvester-transpose matrix equations and the extended Sylvester-conjugate matrix equations in [19] and [18], respectively. In addition, to reduce the computations and the storage spaces of the GI algorithm, Tian et al. [23] and Hu and Wang [31] derived the Jacobi GI (JGI) algorithm for solving the Sylvester matrix equations by replacing the coefficient matrices by their diagonal parts.

It can be seen from the iterative scheme of the GI algorithm in [14,30] that it does not use the latest calculation results to compute the next iterate. Hence, the convergence speed of the GI algorithm is slow in many cases, even when the optimal parameter is adopted. In addition, when the coefficient matrices of the CCT Sylvester matrix equation are large and dense, the matrix multiplications in the GI algorithm may require a large amount of computation and storage space, which consumes much time and reduces the computational efficiency of the GI algorithm. To overcome these shortcomings and improve the convergence rate of the GI algorithm, we first use the hierarchical identification principle to turn the CCT Sylvester matrix equation into four subsystems. Enlightened by the ideas of the JGI and the MGI algorithms, we apply the updating technique to the GI algorithm, replace the related coefficient matrices by their diagonal parts, and then construct the improved modified GI (IMGI) algorithm for the CCT Sylvester matrix equation (2). Moreover, we apply the relaxation technique to the IMGI algorithm and develop the improved modified relaxed GI (IMRGI) algorithm. We investigate some convergence properties of the IMGI and the IMRGI algorithms, which indicate that the proposed algorithms are convergent under proper restrictions. Numerical experiments are provided to demonstrate the effectiveness and superiority of the IMGI and the IMRGI algorithms.

The rest of this article is organized as follows. In Section 2, some notations, definitions, and results, which will be used in the later parts of this article, are listed. In Section 3, we put forward two new algorithms, referred to as the IMGI and the IMRGI algorithms, for the CCT Sylvester matrix equation (2), and establish the convergence theorems of the two proposed algorithms. Several numerical experiments are provided to validate the effectiveness and advantages of the proposed algorithms in Section 4. Finally, some conclusions are drawn in Section 5 to end this article.

2 Preliminaries

In this section, we review and recall some notations, definitions, and lemmas, which come from References [11,21,30,32,33] and will be used throughout this article.

For a complex number $p$, $\operatorname{Re}(p)$ and $\operatorname{Im}(p)$ denote the real and imaginary parts of $p$, respectively. Let $A$ be a complex matrix. $\lambda(A)$, $A^T$, $\bar{A}$, and $A^H$ represent the spectrum, transpose, conjugate, and conjugate transpose of $A$, respectively, and $|A|=(|a_{ij}|)$ denotes the absolute value of the matrix $A$. If $A$ is a square matrix, then $\operatorname{tr}(A)$ stands for the trace of $A$. The spectral radius, spectral (Euclidean) norm, and Frobenius norm of $A$ are denoted by $\rho(A)$, $\|A\|_2=\sqrt{\lambda_{\max}(A^HA)}$, and $\|A\|=\sqrt{\operatorname{tr}(A^HA)}$, respectively. Let $A=D-C$, where $D$ is the diagonal part of $A$ and $-C$ is its off-diagonal part.

In addition, we present several useful definitions below.

Definition 2.1

[32] Let $A=[a_1,a_2,\ldots,a_n]\in\mathbb{C}^{m\times n}$ with $a_i$ being the $i$th column of $A$. The vector stretching function of $A$ is defined as follows:

$\operatorname{vec}(A)=[a_1^T,a_2^T,\ldots,a_n^T]^T\in\mathbb{C}^{mn}.$

Definition 2.2

[21] The vec-permutation matrix $P(m,n)$ is a square $mn\times mn$ matrix and can be defined as follows:

$P(m,n)=\sum_{i=1}^{m}\sum_{j=1}^{n}E_{ij}\otimes E_{ij}^{T},$

where $E_{ij}=e_ie_j^T$ is an elementary matrix of order $m\times n$.

Definition 2.3

[32] For two matrices $D=(d_{ij})\in\mathbb{C}^{m\times k}$ and $F=(f_{ij})\in\mathbb{C}^{n\times l}$, the Kronecker product of $D$ and $F$ is defined as follows:

$D\otimes F=\begin{bmatrix} d_{11}F & d_{12}F & \cdots & d_{1k}F\\ d_{21}F & d_{22}F & \cdots & d_{2k}F\\ \vdots & \vdots & & \vdots\\ d_{m1}F & d_{m2}F & \cdots & d_{mk}F \end{bmatrix}=[d_{ij}F]\in\mathbb{C}^{mn\times kl}.$

Definition 2.4

[32] Let $L=L_1+\mathrm{i}L_2\in\mathbb{C}^{p\times q}$ with $L_1$ and $L_2$ being the real and imaginary parts of $L$, respectively. Then the real representation of $L$ is defined as follows:

$L_\sigma=\begin{bmatrix} L_1 & L_2\\ L_2 & -L_1 \end{bmatrix}\in\mathbb{R}^{2p\times 2q}.$
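To make these notions concrete, the following minimal NumPy sketch (the function names are ours, introduced only for illustration) implements the vector stretching operator of Definition 2.1, the vec-permutation matrix of Definition 2.2, and the real representation of Definition 2.4; the Kronecker product of Definition 2.3 is available directly as numpy.kron. The final assertion checks Lemma 2.2 numerically.

```python
import numpy as np

def vec(A):
    # Definition 2.1: stack the columns of A into a single long vector.
    return A.reshape(-1, order='F')

def vec_permutation(m, n):
    # Definition 2.2: P(m, n) such that vec(A.T) = P(m, n) @ vec(A) for A in C^{m x n}.
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[i * n + j, j * m + i] = 1.0
    return P

def real_rep(L):
    # Definition 2.4: L_sigma = [[L1, L2], [L2, -L1]] for L = L1 + i*L2.
    L1, L2 = L.real, L.imag
    return np.block([[L1, L2], [L2, -L1]])

# Numerical check of Lemma 2.2 on a random complex matrix:
rng = np.random.default_rng(0)
Y = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
assert np.allclose(vec(Y.T), vec_permutation(3, 4) @ vec(Y))
```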

In the following, we list several useful lemmas.

Lemma 2.1

[13] (The properties of the real representation)

  1. For $G,H\in\mathbb{C}^{m\times n}$, $t\in\mathbb{R}$, it has

    $(G+H)_\sigma=G_\sigma+H_\sigma,\quad (tG)_\sigma=tG_\sigma,\quad P_mG_\sigma P_n=(\bar{G})_\sigma;$

  2. If $S\in\mathbb{C}^{m\times n}$, $T\in\mathbb{C}^{n\times r}$, and $U\in\mathbb{C}^{r\times p}$, then

    $(ST)_\sigma=S_\sigma P_nT_\sigma=S_\sigma(\bar{T})_\sigma P_r,\quad (STU)_\sigma=S_\sigma(\bar{T})_\sigma U_\sigma;$

  3. If $F\in\mathbb{C}^{m\times n}$, then $Q_mF_\sigma Q_n=F_\sigma$;

  4. For $K\in\mathbb{C}^{m\times n}$, it has $((K^T)_\sigma)^T=K_\sigma$, where $P_i$ and $Q_j$ are defined as follows:

    $P_i=\begin{bmatrix} I_i & 0\\ 0 & -I_i \end{bmatrix},\qquad Q_j=\begin{bmatrix} 0 & I_j\\ -I_j & 0 \end{bmatrix},$

    with $I_i$ and $I_j$ being the $i\times i$ and $j\times j$ identity matrices, respectively.

Lemma 2.2

[21] Let Y C m × n , then

vec ( Y T ) = P ( m , n ) vec ( Y ) .

Lemma 2.3

[33] Let A C m × n , B C n × s , and C C s × t , then

vec ( A B C ) = ( C T A ) vec ( B ) .

Lemma 2.4

[5] Let $C=[C_{ij}]$ be a square block matrix and $D=[\|C_{ij}\|]$. Then $\rho(C)\le\rho(|C|)\le\rho(D)$.

Lemma 2.5

[34] Let $T$ be a normal matrix. Then $\rho(T)=\|T\|_2$.

3 The IMGI and the IMRGI algorithms

In this section, we will establish two new algorithms for solving the CCT Sylvester matrix equations. The proposed methods outperform the GI one in [14] and its relaxed version (RGI) [21] from the point of view of computational efficiency, which will be confirmed by the numerical results in Section 4. To this end, we first introduce a lemma as follows.

Lemma 3.1

[9] Consider the matrix equation $AXB=F$, where $A\in\mathbb{R}^{m\times r}$, $B\in\mathbb{R}^{s\times n}$, and $F\in\mathbb{R}^{m\times n}$ are known matrices, and $X\in\mathbb{R}^{r\times s}$ is the matrix to be determined. For this matrix equation, an iterative algorithm is constructed as follows:

(3) $X(k+1)=X(k)+\mu A^T(F-AX(k)B)B^T,$

with

(4) $0<\mu<\dfrac{2}{\|A\|_2^2\|B\|_2^2}.$

If this matrix equation has a unique solution $X^*$, then the iterative solution $X(k)$ converges to the unique solution $X^*$, that is, $\lim_{k\to\infty}X(k)=X^*$.
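As a quick illustration of Lemma 3.1, the following sketch (the function name and the test problem are ours) runs iteration (3) with a step size chosen inside the interval (4); by the lemma, it converges whenever the equation has a unique solution, although possibly slowly for ill-conditioned data.

```python
import numpy as np

def axb_gradient_solve(A, B, F, tol=1e-12, maxit=100000):
    # Iteration (3); mu is placed safely inside the interval (4).
    mu = 1.9 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(maxit):
        R = F - A @ X @ B                       # current residual
        if np.linalg.norm(R) < tol * np.linalg.norm(F):
            break
        X = X + mu * A.T @ R @ B.T              # gradient correction
    return X

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
X_true = rng.standard_normal((4, 4))
X = axb_gradient_solve(A, B, A @ X_true @ B)
print(np.linalg.norm(X - X_true))               # small for nonsingular A, B
```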

Let $\Theta(Z)=A_1ZB_1+A_2\bar{Z}B_2+A_3Z^TB_3+A_4Z^HB_4$. We now review the GI algorithm in [14], whose framework is as follows.

The GI algorithm [14]:

Step 1: Given matrices $A_i,B_i,H\in\mathbb{C}^{n\times n}$ $(i=1,2,3,4)$ and two constants $\varepsilon>0$ and $\mu>0$. Choose the initial matrix $Z(0)$, and set $k=0$;

Step 2: If $\delta_k=\|H-A_1Z(k)B_1-A_2\bar{Z}(k)B_2-A_3Z(k)^TB_3-A_4Z(k)^HB_4\|/\|H\|<\varepsilon$, stop; otherwise, go to Step 3;

Step 3: Update the sequences

$\begin{aligned} Z_1(k+1)&=Z(k)+\mu A_1^H(H-\Theta(Z(k)))B_1^H,\\ Z_2(k+1)&=Z(k)+\mu A_2^T(\overline{H-\Theta(Z(k))})B_2^T,\\ Z_3(k+1)&=Z(k)+\mu\bar{B}_3(H^T-\Theta^T(Z(k)))\bar{A}_3,\\ Z_4(k+1)&=Z(k)+\mu B_4(H^H-\Theta^H(Z(k)))A_4,\\ Z(k+1)&=\frac{Z_1(k+1)+Z_2(k+1)+Z_3(k+1)+Z_4(k+1)}{4}; \end{aligned}$

Step 4: Set $k\leftarrow k+1$ and return to Step 2.

Numerical experiments in [14] showed the effectiveness of the GI algorithm for the CCT Sylvester matrix equations.
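For later comparison, the GI algorithm admits a direct NumPy transcription; the following sketch (our naming and stopping defaults) follows Steps 1–4 literally:

```python
import numpy as np

def theta(Z, A, B):
    # Theta(Z) = A1 Z B1 + A2 conj(Z) B2 + A3 Z^T B3 + A4 Z^H B4.
    return (A[0] @ Z @ B[0] + A[1] @ Z.conj() @ B[1]
            + A[2] @ Z.T @ B[2] + A[3] @ Z.conj().T @ B[3])

def gi(A, B, H, mu, eps=1e-10, maxit=10000):
    # A = (A1, A2, A3, A4), B = (B1, B2, B3, B4); Z(0) = 0 by default.
    Z = np.zeros_like(H)
    for k in range(maxit):
        R = H - theta(Z, A, B)                          # Step 2: residual
        if np.linalg.norm(R) / np.linalg.norm(H) < eps:
            break
        Z1 = Z + mu * A[0].conj().T @ R @ B[0].conj().T
        Z2 = Z + mu * A[1].T @ R.conj() @ B[1].T
        Z3 = Z + mu * B[2].conj() @ R.T @ A[2].conj()
        Z4 = Z + mu * B[3] @ R.conj().T @ A[3]
        Z = (Z1 + Z2 + Z3 + Z4) / 4                     # Step 3: average
    return Z
```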

According to the framework of the aforementioned GI algorithm, Step 3 may consume much time because the matrices $A_i$ and $B_i$ $(i=1,2,3,4)$ may be large and dense. To reduce the computational cost of the GI algorithm, according to Lemma 3.1 and inspired by the ideas of [23,24,31], we replace the matrices $A_i$ and $B_i$ $(i=1,2,3,4)$ by their diagonal parts. This reduces the computing time of each iteration of the GI method, so the total calculation time decreases.

According to the hierarchical identification principle, we first transform the CCT Sylvester matrix equation (2) into the following four equations:

(5) $A_1ZB_1=H_1,\qquad \bar{A}_2Z\bar{B}_2=H_2,\qquad B_3^TZA_3^T=H_3,\qquad B_4^HZA_4^H=H_4,$

where $H_1=H-\Theta(Z)+A_1ZB_1$, $H_2=\overline{H-\Theta(Z)+A_2\bar{Z}B_2}$, $H_3=(H-\Theta(Z)+A_3Z^TB_3)^T$, and $H_4=(H-\Theta(Z)+A_4Z^HB_4)^H$. We then split the matrices $A_i$, $B_i$ $(i=1,2,3,4)$ as $A_i=D_{i1}+C_{i1}$ and $B_i=D_{i2}+C_{i2}$ $(i=1,2,3,4)$, with $D_{i1}$ and $D_{i2}$ being the diagonal parts of $A_i$ and $B_i$, respectively. Then it follows from (5) that

(6) $\begin{aligned} A_1ZB_1=H_1&\;\Longleftrightarrow\;(D_{11}+C_{11})Z(D_{12}+C_{12})=H_1,\\ \bar{A}_2Z\bar{B}_2=H_2&\;\Longleftrightarrow\;\overline{(D_{21}+C_{21})}\,Z\,\overline{(D_{22}+C_{22})}=H_2,\\ B_3^TZA_3^T=H_3&\;\Longleftrightarrow\;(D_{32}+C_{32})^TZ(D_{31}+C_{31})^T=H_3,\\ B_4^HZA_4^H=H_4&\;\Longleftrightarrow\;(D_{42}+C_{42})^HZ(D_{41}+C_{41})^H=H_4, \end{aligned}$

which yields that

(7) $\begin{aligned} D_{11}ZD_{12}&=H_1-D_{11}ZC_{12}-C_{11}ZD_{12}-C_{11}ZC_{12}\triangleq\tilde{H}_1,\\ \bar{D}_{21}Z\bar{D}_{22}&=H_2-\bar{D}_{21}Z\bar{C}_{22}-\bar{C}_{21}Z\bar{D}_{22}-\bar{C}_{21}Z\bar{C}_{22}\triangleq\tilde{H}_2,\\ D_{32}^TZD_{31}^T&=H_3-D_{32}^TZC_{31}^T-C_{32}^TZD_{31}^T-C_{32}^TZC_{31}^T\triangleq\tilde{H}_3,\\ D_{42}^HZD_{41}^H&=H_4-D_{42}^HZC_{41}^H-C_{42}^HZD_{41}^H-C_{42}^HZC_{41}^H\triangleq\tilde{H}_4. \end{aligned}$

Besides, Wang et al. [20] developed the modified GI (MGI) algorithm for the Sylvester equations, in which the information generated in the first half-iteration is fully exploited to compute the result of the second half-iteration, leading to a faster convergence rate. This motivates us to apply the updating technique in [20] to (7) and construct the following IMGI algorithm based on Lemma 3.1.

The IMGI algorithm:

Step 1: Given matrices $A_i,B_i,H\in\mathbb{C}^{n\times n}$ $(i=1,2,3,4)$ and two constants $\varepsilon>0$ and $\mu>0$. Choose the initial matrices $Z(0)$ and $Z^{(i)}(0)$ $(i=1,2,3,4)$, and set $k=0$;

Step 2: If $\delta_k=\|H-A_1Z(k)B_1-A_2\bar{Z}(k)B_2-A_3Z(k)^TB_3-A_4Z(k)^HB_4\|/\|H\|<\varepsilon$, stop; otherwise, go to Step 3;

Step 3: Update the sequences

$\begin{aligned} Z^{(1)}(k+1)&=Z(k)+\mu D_{11}^H(H-\Theta(Z(k)))D_{12}^H,\\ \hat{Z}(k)&=(Z^{(1)}(k+1)+Z^{(2)}(k)+Z^{(3)}(k)+Z^{(4)}(k))/4,\\ Z^{(2)}(k+1)&=\hat{Z}(k)+\mu D_{21}^T(\overline{H-\Theta(\hat{Z}(k))})D_{22}^T,\\ \check{Z}(k)&=(Z^{(1)}(k+1)+Z^{(2)}(k+1)+Z^{(3)}(k)+Z^{(4)}(k))/4,\\ Z^{(3)}(k+1)&=\check{Z}(k)+\mu\bar{D}_{32}(H^T-\Theta^T(\check{Z}(k)))\bar{D}_{31},\\ \acute{Z}(k)&=(Z^{(1)}(k+1)+Z^{(2)}(k+1)+Z^{(3)}(k+1)+Z^{(4)}(k))/4,\\ Z^{(4)}(k+1)&=\acute{Z}(k)+\mu D_{42}(H^H-\Theta^H(\acute{Z}(k)))D_{41},\\ Z(k+1)&=(Z^{(1)}(k+1)+Z^{(2)}(k+1)+Z^{(3)}(k+1)+Z^{(4)}(k+1))/4; \end{aligned}$

Step 4: Set $k\leftarrow k+1$ and return to Step 2.
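The IMGI algorithm can be transcribed in the same way; the sketch below (our naming, reusing theta from the GI sketch above) makes the two differences visible: the diagonal parts replace the full coefficient matrices, and each quarter-step is computed from the freshest averaged iterate.

```python
import numpy as np

def imgi(A, B, H, mu, eps=1e-10, maxit=10000):
    # Diagonal parts D_{i1}, D_{i2} of A_i, B_i replace the full matrices.
    DA = [np.diag(np.diag(M)) for M in A]
    DB = [np.diag(np.diag(M)) for M in B]
    Z = np.zeros_like(H)
    Zs = [np.zeros_like(H) for _ in range(4)]
    for k in range(maxit):
        R = H - theta(Z, A, B)
        if np.linalg.norm(R) / np.linalg.norm(H) < eps:
            break
        Zs[0] = Z + mu * DA[0].conj().T @ R @ DB[0].conj().T
        Zhat = sum(Zs) / 4
        Zs[1] = Zhat + mu * DA[1].T @ (H - theta(Zhat, A, B)).conj() @ DB[1].T
        Zchk = sum(Zs) / 4
        Zs[2] = Zchk + mu * DB[2].conj() @ (H - theta(Zchk, A, B)).T @ DA[2].conj()
        Zacu = sum(Zs) / 4
        Zs[3] = Zacu + mu * DB[3] @ (H - theta(Zacu, A, B)).conj().T @ DA[3]
        Z = sum(Zs) / 4                                 # Line 8 of Step 3
    return Z
```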

Remark 3.1

Comparing the IMGI algorithm with the GI algorithm, we see that the two algorithms differ only in Step 3. The related coefficient matrices in the GI algorithm are replaced by their diagonal parts, and the next iterate is computed from the latest available information. This leads to fewer computations per iteration and faster convergence. So we can expect the proposed IMGI algorithm to perform better than the GI one, which will be verified by the numerical results.

In the sequel, we will establish the convergence condition of the IMGI algorithm. Before that, we start with a lemma as follows.

Lemma 3.2

[21] The CCT Sylvester matrix equation (2) has a unique solution if and only if the matrix $\mathcal{A}$ is nonsingular, in which case the unique solution is given by

(8) $\operatorname{vec}(Z_\sigma)=\mathcal{A}^{-1}\operatorname{vec}(H_\sigma),$

where

(9) $\mathcal{A}=(P_n(B_1)_\sigma)^T\otimes((A_1)_\sigma P_n)+(B_2)_\sigma^T\otimes(A_2)_\sigma+[(P_n(B_3)_\sigma)^T\otimes((A_3)_\sigma P_n)+(B_4)_\sigma^T\otimes(A_4)_\sigma]P(2n,2n).$

Moreover, the corresponding homogeneous matrix equation $A_1ZB_1+A_2\bar{Z}B_2+A_3Z^TB_3+A_4Z^HB_4=0$ has the unique solution $Z=0$.
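For small $n$, Lemma 3.2 also yields a direct solver, which is convenient for producing reference solutions in experiments. The following sketch (our naming) assembles the matrix of (9) using the helpers vec, vec_permutation, and real_rep from the Section 2 snippet; note that solving the $4n^2\times 4n^2$ system costs $O(n^6)$ operations and is only practical for modest $n$.

```python
import numpy as np

def cct_direct_solve(A, B, H):
    # Solve (2) via (8)-(9): vec(Z_sigma) = script_A^{-1} vec(H_sigma).
    n = H.shape[0]
    Pn = np.block([[np.eye(n), np.zeros((n, n))],
                   [np.zeros((n, n)), -np.eye(n)]])
    P2n = vec_permutation(2 * n, 2 * n)
    As = [real_rep(M) for M in A]
    Bs = [real_rep(M) for M in B]
    script_A = (np.kron((Pn @ Bs[0]).T, As[0] @ Pn)      # A1 Z B1 term
                + np.kron(Bs[1].T, As[1])                # A2 conj(Z) B2 term
                + (np.kron((Pn @ Bs[2]).T, As[2] @ Pn)   # A3 Z^T B3 term
                   + np.kron(Bs[3].T, As[3])) @ P2n)     # A4 Z^H B4 term
    zs = np.linalg.solve(script_A, vec(real_rep(H)))
    Zs = zs.reshape(2 * n, 2 * n, order='F')
    return Zs[:n, :n] + 1j * Zs[:n, n:]                  # recover Z from Z_sigma
```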

In the following, by applying the properties of the real representation of a complex matrix and the vector stretching operator, we study the necessary and sufficient condition for the convergence of the IMGI algorithm.

Theorem 3.1

Suppose that the CCT Sylvester matrix equation (2) has a unique solution $Z^*$. Then the IMGI algorithm is convergent if and only if the parameter $\mu$ is selected to satisfy $\rho(ST)<1$, where

(10) $S=\begin{bmatrix} I & 0 & 0 & 0\\ \frac{1}{4}M_2 & I & 0 & 0\\ \frac{1}{16}M_3M_2+\frac{1}{4}M_3 & \frac{1}{4}M_3 & I & 0\\ \frac{1}{4}M_4+\frac{1}{16}M_4M_2+\frac{1}{16}M_4M_3+\frac{1}{64}M_4M_3M_2 & \frac{1}{16}M_4M_3+\frac{1}{4}M_4 & \frac{1}{4}M_4 & I \end{bmatrix},\qquad T=\frac{1}{4}\begin{bmatrix} M_1 & M_1 & M_1 & M_1\\ 0 & M_2 & M_2 & M_2\\ 0 & 0 & M_3 & M_3\\ 0 & 0 & 0 & M_4 \end{bmatrix},$

with

$M_i=I_{4n^2}-\mu W_i\ (i=1,2,3,4),$

$W_1=((B_1D_{12}^H)_\sigma^TP_n)\otimes((D_{11}^HA_1)_\sigma P_n)+(B_2D_{12}^H)_\sigma^T\otimes(D_{11}^HA_2)_\sigma+[((B_3D_{12}^H)_\sigma^TP_n)\otimes((D_{11}^HA_3)_\sigma P_n)+(B_4D_{12}^H)_\sigma^T\otimes(D_{11}^HA_4)_\sigma]P(2n,2n),$

$W_2=(\bar{B}_1D_{22}^T)_\sigma^T\otimes(D_{21}^T\bar{A}_1)_\sigma+[(\bar{B}_3D_{22}^T)_\sigma^T\otimes(D_{21}^T\bar{A}_3)_\sigma+((\bar{B}_4D_{22}^T)_\sigma^TP_n)\otimes((D_{21}^T\bar{A}_4)_\sigma P_n)]P(2n,2n)+((\bar{B}_2D_{22}^T)_\sigma^TP_n)\otimes((D_{21}^T\bar{A}_2)_\sigma P_n),$

$W_3=[((A_1^T\bar{D}_{31})_\sigma^TP_n)\otimes((\bar{D}_{32}B_1^T)_\sigma P_n)+(A_2^T\bar{D}_{31})_\sigma^T\otimes(\bar{D}_{32}B_2^T)_\sigma]P(2n,2n)+((A_3^T\bar{D}_{31})_\sigma^TP_n)\otimes((\bar{D}_{32}B_3^T)_\sigma P_n)+(A_4^T\bar{D}_{31})_\sigma^T\otimes(\bar{D}_{32}B_4^T)_\sigma,$

$W_4=[(A_1^HD_{41})_\sigma^T\otimes(D_{42}B_1^H)_\sigma+((A_2^HD_{41})_\sigma^TP_n)\otimes((D_{42}B_2^H)_\sigma P_n)]P(2n,2n)+(A_3^HD_{41})_\sigma^T\otimes(D_{42}B_3^H)_\sigma+((A_4^HD_{41})_\sigma^TP_n)\otimes((D_{42}B_4^H)_\sigma P_n).$

Proof

Define the following error matrices:

(11) $\begin{aligned}&\tilde{Z}(k)=Z(k)-Z^*,\quad \tilde{\hat{Z}}(k)=\hat{Z}(k)-Z^*,\quad \tilde{\check{Z}}(k)=\check{Z}(k)-Z^*,\quad \tilde{\acute{Z}}(k)=\acute{Z}(k)-Z^*,\\ &\tilde{Z}^{(1)}(k)=Z^{(1)}(k)-Z^*,\quad \tilde{Z}^{(2)}(k)=Z^{(2)}(k)-Z^*,\quad \tilde{Z}^{(3)}(k)=Z^{(3)}(k)-Z^*,\quad \tilde{Z}^{(4)}(k)=Z^{(4)}(k)-Z^*.\end{aligned}$

It follows from Line 1 of Step 3 of the IMGI algorithm that

$\tilde{Z}^{(1)}(k+1)=\tilde{Z}(k)-\mu D_{11}^H(A_1\tilde{Z}(k)B_1+A_2\bar{\tilde{Z}}(k)B_2+A_3\tilde{Z}(k)^TB_3+A_4\tilde{Z}(k)^HB_4)D_{12}^H,$

which together with the real representation and Lemma 2.1 results in

(12) $\begin{aligned}(\tilde{Z}^{(1)}(k+1))_\sigma&=(\tilde{Z}(k))_\sigma-\mu(D_{11}^HA_1\tilde{Z}(k)B_1D_{12}^H+D_{11}^HA_2\bar{\tilde{Z}}(k)B_2D_{12}^H+D_{11}^HA_3\tilde{Z}(k)^TB_3D_{12}^H+D_{11}^HA_4\tilde{Z}(k)^HB_4D_{12}^H)_\sigma\\&=(\tilde{Z}(k))_\sigma-\mu[(D_{11}^HA_1)_\sigma P_n(\tilde{Z}(k))_\sigma P_n(B_1D_{12}^H)_\sigma+(D_{11}^HA_2)_\sigma(\tilde{Z}(k))_\sigma(B_2D_{12}^H)_\sigma\\&\quad+(D_{11}^HA_3)_\sigma P_n(\tilde{Z}(k)^T)_\sigma P_n(B_3D_{12}^H)_\sigma+(D_{11}^HA_4)_\sigma(\tilde{Z}(k)^T)_\sigma(B_4D_{12}^H)_\sigma].\end{aligned}$

Applying the straightening operator to both sides of relation (12) yields that

(13) $\operatorname{vec}[(\tilde{Z}^{(1)}(k+1))_\sigma]=\operatorname{vec}[(\tilde{Z}(k))_\sigma]-\mu\{((B_1D_{12}^H)_\sigma^TP_n)\otimes((D_{11}^HA_1)_\sigma P_n)+(B_2D_{12}^H)_\sigma^T\otimes(D_{11}^HA_2)_\sigma+[((B_3D_{12}^H)_\sigma^TP_n)\otimes((D_{11}^HA_3)_\sigma P_n)+(B_4D_{12}^H)_\sigma^T\otimes(D_{11}^HA_4)_\sigma]P(2n,2n)\}\operatorname{vec}[(\tilde{Z}(k))_\sigma]=[I_{4n^2}-\mu W_1]\operatorname{vec}[(\tilde{Z}(k))_\sigma]\triangleq M_1\operatorname{vec}[(\tilde{Z}(k))_\sigma],$

where $W_1$ is exactly the matrix defined in Theorem 3.1 and $M_1=I_{4n^2}-\mu W_1$.

From Line 3 of Step 3 of the IMGI algorithm, we have

(14) $\begin{aligned}\tilde{Z}^{(2)}(k+1)&=\tilde{\hat{Z}}(k)-\mu D_{21}^T\left(\overline{A_1\tilde{\hat{Z}}(k)B_1+A_2\bar{\tilde{\hat{Z}}}(k)B_2+A_3\tilde{\hat{Z}}(k)^TB_3+A_4\tilde{\hat{Z}}(k)^HB_4}\right)D_{22}^T\\&=\tilde{\hat{Z}}(k)-\mu\left(D_{21}^T\bar{A}_1\bar{\tilde{\hat{Z}}}(k)\bar{B}_1D_{22}^T+D_{21}^T\bar{A}_2\tilde{\hat{Z}}(k)\bar{B}_2D_{22}^T+D_{21}^T\bar{A}_3\bar{\tilde{\hat{Z}}}(k)^T\bar{B}_3D_{22}^T+D_{21}^T\bar{A}_4\bar{\tilde{\hat{Z}}}(k)^H\bar{B}_4D_{22}^T\right).\end{aligned}$

Taking the real representation of both sides of (14) and using Lemma 2.1 result in

(15) $(\tilde{Z}^{(2)}(k+1))_\sigma=(\tilde{\hat{Z}}(k))_\sigma-\mu[(D_{21}^T\bar{A}_1)_\sigma(\tilde{\hat{Z}}(k))_\sigma(\bar{B}_1D_{22}^T)_\sigma+(D_{21}^T\bar{A}_2)_\sigma P_n(\tilde{\hat{Z}}(k))_\sigma P_n(\bar{B}_2D_{22}^T)_\sigma+(D_{21}^T\bar{A}_3)_\sigma(\tilde{\hat{Z}}(k)^T)_\sigma(\bar{B}_3D_{22}^T)_\sigma+(D_{21}^T\bar{A}_4)_\sigma P_n(\tilde{\hat{Z}}(k)^T)_\sigma P_n(\bar{B}_4D_{22}^T)_\sigma].$

Applying the Kronecker product and the vec operator in (15) leads to

(16) $\operatorname{vec}[(\tilde{Z}^{(2)}(k+1))_\sigma]=\operatorname{vec}[(\tilde{\hat{Z}}(k))_\sigma]-\mu\{(\bar{B}_1D_{22}^T)_\sigma^T\otimes(D_{21}^T\bar{A}_1)_\sigma+((\bar{B}_2D_{22}^T)_\sigma^TP_n)\otimes((D_{21}^T\bar{A}_2)_\sigma P_n)+[(\bar{B}_3D_{22}^T)_\sigma^T\otimes(D_{21}^T\bar{A}_3)_\sigma+((\bar{B}_4D_{22}^T)_\sigma^TP_n)\otimes((D_{21}^T\bar{A}_4)_\sigma P_n)]P(2n,2n)\}\operatorname{vec}[(\tilde{\hat{Z}}(k))_\sigma]=[I_{4n^2}-\mu W_2]\operatorname{vec}[(\tilde{\hat{Z}}(k))_\sigma]\triangleq M_2\operatorname{vec}[(\tilde{\hat{Z}}(k))_\sigma],$

with $W_2$ exactly the matrix defined in Theorem 3.1 and $M_2=I_{4n^2}-\mu W_2$.

According to Line 5 of Step 3 of the IMGI algorithm, it has

(17) $\begin{aligned}\tilde{Z}^{(3)}(k+1)&=\tilde{\check{Z}}(k)-\mu\bar{D}_{32}\left(A_1\tilde{\check{Z}}(k)B_1+A_2\bar{\tilde{\check{Z}}}(k)B_2+A_3\tilde{\check{Z}}(k)^TB_3+A_4\tilde{\check{Z}}(k)^HB_4\right)^T\bar{D}_{31}\\&=\tilde{\check{Z}}(k)-\mu\left(\bar{D}_{32}B_1^T\tilde{\check{Z}}(k)^TA_1^T\bar{D}_{31}+\bar{D}_{32}B_2^T\bar{\tilde{\check{Z}}}(k)^TA_2^T\bar{D}_{31}+\bar{D}_{32}B_3^T\tilde{\check{Z}}(k)A_3^T\bar{D}_{31}+\bar{D}_{32}B_4^T\bar{\tilde{\check{Z}}}(k)A_4^T\bar{D}_{31}\right).\end{aligned}$

By making use of the real representation of both sides of (17) and Lemma 2.1, we derive

(18) $(\tilde{Z}^{(3)}(k+1))_\sigma=(\tilde{\check{Z}}(k))_\sigma-\mu[(\bar{D}_{32}B_1^T)_\sigma P_n(\tilde{\check{Z}}(k)^T)_\sigma P_n(A_1^T\bar{D}_{31})_\sigma+(\bar{D}_{32}B_2^T)_\sigma(\tilde{\check{Z}}(k)^T)_\sigma(A_2^T\bar{D}_{31})_\sigma+(\bar{D}_{32}B_3^T)_\sigma P_n(\tilde{\check{Z}}(k))_\sigma P_n(A_3^T\bar{D}_{31})_\sigma+(\bar{D}_{32}B_4^T)_\sigma(\tilde{\check{Z}}(k))_\sigma(A_4^T\bar{D}_{31})_\sigma].$

By utilizing the Kronecker product and the vec operator in (18), we deduce that

(19) $\operatorname{vec}[(\tilde{Z}^{(3)}(k+1))_\sigma]=\operatorname{vec}[(\tilde{\check{Z}}(k))_\sigma]-\mu\{[((A_1^T\bar{D}_{31})_\sigma^TP_n)\otimes((\bar{D}_{32}B_1^T)_\sigma P_n)+(A_2^T\bar{D}_{31})_\sigma^T\otimes(\bar{D}_{32}B_2^T)_\sigma]P(2n,2n)+((A_3^T\bar{D}_{31})_\sigma^TP_n)\otimes((\bar{D}_{32}B_3^T)_\sigma P_n)+(A_4^T\bar{D}_{31})_\sigma^T\otimes(\bar{D}_{32}B_4^T)_\sigma\}\operatorname{vec}[(\tilde{\check{Z}}(k))_\sigma]=[I_{4n^2}-\mu W_3]\operatorname{vec}[(\tilde{\check{Z}}(k))_\sigma]\triangleq M_3\operatorname{vec}[(\tilde{\check{Z}}(k))_\sigma],$

with $W_3$ exactly the matrix defined in Theorem 3.1 and $M_3=I_{4n^2}-\mu W_3$.

In the same manner as in (17)–(19), it follows from Line 7 of Step 3 of the IMGI algorithm that

(20) $\begin{aligned}\tilde{Z}^{(4)}(k+1)&=\tilde{\acute{Z}}(k)-\mu D_{42}\left(A_1\tilde{\acute{Z}}(k)B_1+A_2\bar{\tilde{\acute{Z}}}(k)B_2+A_3\tilde{\acute{Z}}(k)^TB_3+A_4\tilde{\acute{Z}}(k)^HB_4\right)^HD_{41}\\&=\tilde{\acute{Z}}(k)-\mu\left(D_{42}B_1^H\tilde{\acute{Z}}(k)^HA_1^HD_{41}+D_{42}B_2^H\tilde{\acute{Z}}(k)^TA_2^HD_{41}+D_{42}B_3^H\bar{\tilde{\acute{Z}}}(k)A_3^HD_{41}+D_{42}B_4^H\tilde{\acute{Z}}(k)A_4^HD_{41}\right).\end{aligned}$

Using the real representation in (20) and Lemma 2.1 yields that

(21) $(\tilde{Z}^{(4)}(k+1))_\sigma=(\tilde{\acute{Z}}(k))_\sigma-\mu[(D_{42}B_1^H)_\sigma(\tilde{\acute{Z}}(k)^T)_\sigma(A_1^HD_{41})_\sigma+(D_{42}B_2^H)_\sigma P_n(\tilde{\acute{Z}}(k)^T)_\sigma P_n(A_2^HD_{41})_\sigma+(D_{42}B_3^H)_\sigma(\tilde{\acute{Z}}(k))_\sigma(A_3^HD_{41})_\sigma+(D_{42}B_4^H)_\sigma P_n(\tilde{\acute{Z}}(k))_\sigma P_n(A_4^HD_{41})_\sigma].$

By utilizing the Kronecker product and the vec operator in (21), it holds that

(22) $\operatorname{vec}[(\tilde{Z}^{(4)}(k+1))_\sigma]=\operatorname{vec}[(\tilde{\acute{Z}}(k))_\sigma]-\mu\{[(A_1^HD_{41})_\sigma^T\otimes(D_{42}B_1^H)_\sigma+((A_2^HD_{41})_\sigma^TP_n)\otimes((D_{42}B_2^H)_\sigma P_n)]P(2n,2n)+(A_3^HD_{41})_\sigma^T\otimes(D_{42}B_3^H)_\sigma+((A_4^HD_{41})_\sigma^TP_n)\otimes((D_{42}B_4^H)_\sigma P_n)\}\operatorname{vec}[(\tilde{\acute{Z}}(k))_\sigma]=[I_{4n^2}-\mu W_4]\operatorname{vec}[(\tilde{\acute{Z}}(k))_\sigma]\triangleq M_4\operatorname{vec}[(\tilde{\acute{Z}}(k))_\sigma],$

with $W_4$ exactly the matrix defined in Theorem 3.1 and $M_4=I_{4n^2}-\mu W_4$.

Combining (13) with Line 8 of Step 3 of IMGI algorithm leads to

(23) $\operatorname{vec}[(\tilde{Z}^{(1)}(k+1))_\sigma]=M_1\operatorname{vec}[(\tilde{Z}(k))_\sigma]=M_1\operatorname{vec}\left[\left(\tfrac{1}{4}\tilde{Z}^{(1)}(k)+\tfrac{1}{4}\tilde{Z}^{(2)}(k)+\tfrac{1}{4}\tilde{Z}^{(3)}(k)+\tfrac{1}{4}\tilde{Z}^{(4)}(k)\right)_\sigma\right]=\tfrac{1}{4}M_1\operatorname{vec}[(\tilde{Z}^{(1)}(k))_\sigma]+\tfrac{1}{4}M_1\operatorname{vec}[(\tilde{Z}^{(2)}(k))_\sigma]+\tfrac{1}{4}M_1\operatorname{vec}[(\tilde{Z}^{(3)}(k))_\sigma]+\tfrac{1}{4}M_1\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma].$

Besides, according to (16), (23), and Line 2 of Step 3 of IMGI algorithm, we obtain

(24) $\begin{aligned}\operatorname{vec}[(\tilde{Z}^{(2)}(k+1))_\sigma]&=M_2\operatorname{vec}[(\tilde{\hat{Z}}(k))_\sigma]=M_2\operatorname{vec}\left[\left(\tfrac{1}{4}\tilde{Z}^{(1)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(2)}(k)+\tfrac{1}{4}\tilde{Z}^{(3)}(k)+\tfrac{1}{4}\tilde{Z}^{(4)}(k)\right)_\sigma\right]\\&=\tfrac{1}{4}M_2\operatorname{vec}[(\tilde{Z}^{(1)}(k+1))_\sigma]+\tfrac{1}{4}M_2\operatorname{vec}[(\tilde{Z}^{(2)}(k))_\sigma]+\tfrac{1}{4}M_2\operatorname{vec}[(\tilde{Z}^{(3)}(k))_\sigma]+\tfrac{1}{4}M_2\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma]\\&=\tfrac{1}{16}M_2M_1\operatorname{vec}[(\tilde{Z}^{(1)}(k))_\sigma]+\left(\tfrac{1}{16}M_2M_1+\tfrac{1}{4}M_2\right)\operatorname{vec}[(\tilde{Z}^{(2)}(k))_\sigma]\\&\quad+\left(\tfrac{1}{16}M_2M_1+\tfrac{1}{4}M_2\right)\operatorname{vec}[(\tilde{Z}^{(3)}(k))_\sigma]+\left(\tfrac{1}{16}M_2M_1+\tfrac{1}{4}M_2\right)\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma].\end{aligned}$

It follows from (19), (23), (24), and Line 4 of Step 3 of the IMGI algorithm that

(25) $\begin{aligned}\operatorname{vec}[(\tilde{Z}^{(3)}(k+1))_\sigma]&=M_3\operatorname{vec}[(\tilde{\check{Z}}(k))_\sigma]=M_3\operatorname{vec}\left[\left(\tfrac{1}{4}\tilde{Z}^{(1)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(2)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(3)}(k)+\tfrac{1}{4}\tilde{Z}^{(4)}(k)\right)_\sigma\right]\\&=\tfrac{1}{4}M_3\operatorname{vec}[(\tilde{Z}^{(1)}(k+1))_\sigma]+\tfrac{1}{4}M_3\operatorname{vec}[(\tilde{Z}^{(2)}(k+1))_\sigma]+\tfrac{1}{4}M_3\operatorname{vec}[(\tilde{Z}^{(3)}(k))_\sigma]+\tfrac{1}{4}M_3\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma]\\&=\left(\tfrac{1}{16}M_3M_1+\tfrac{1}{64}M_3M_2M_1\right)\operatorname{vec}[(\tilde{Z}^{(1)}(k))_\sigma]+\left(\tfrac{1}{16}M_3M_1+\tfrac{1}{64}M_3M_2M_1+\tfrac{1}{16}M_3M_2\right)\operatorname{vec}[(\tilde{Z}^{(2)}(k))_\sigma]\\&\quad+\left(\tfrac{1}{16}M_3M_1+\tfrac{1}{64}M_3M_2M_1+\tfrac{1}{16}M_3M_2+\tfrac{1}{4}M_3\right)\operatorname{vec}[(\tilde{Z}^{(3)}(k))_\sigma]\\&\quad+\left(\tfrac{1}{16}M_3M_1+\tfrac{1}{64}M_3M_2M_1+\tfrac{1}{16}M_3M_2+\tfrac{1}{4}M_3\right)\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma].\end{aligned}$

Finally, combining (22)–(25) with Line 6 of Step 3 of IMGI algorithm leads to

(26) $\begin{aligned}\operatorname{vec}[(\tilde{Z}^{(4)}(k+1))_\sigma]&=M_4\operatorname{vec}[(\tilde{\acute{Z}}(k))_\sigma]=M_4\operatorname{vec}\left[\left(\tfrac{1}{4}\tilde{Z}^{(1)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(2)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(3)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(4)}(k)\right)_\sigma\right]\\&=\tfrac{1}{4}M_4\operatorname{vec}[(\tilde{Z}^{(1)}(k+1))_\sigma]+\tfrac{1}{4}M_4\operatorname{vec}[(\tilde{Z}^{(2)}(k+1))_\sigma]+\tfrac{1}{4}M_4\operatorname{vec}[(\tilde{Z}^{(3)}(k+1))_\sigma]+\tfrac{1}{4}M_4\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma]\\&=\left(\tfrac{1}{16}M_4M_1+\tfrac{1}{64}M_4M_2M_1+\tfrac{1}{64}M_4M_3M_1+\tfrac{1}{256}M_4M_3M_2M_1\right)\operatorname{vec}[(\tilde{Z}^{(1)}(k))_\sigma]\\&\quad+\left(\tfrac{1}{16}M_4M_1+\tfrac{1}{16}M_4M_2+\tfrac{1}{64}M_4M_2M_1+\tfrac{1}{64}M_4M_3M_1+\tfrac{1}{64}M_4M_3M_2+\tfrac{1}{256}M_4M_3M_2M_1\right)\operatorname{vec}[(\tilde{Z}^{(2)}(k))_\sigma]\\&\quad+\left(\tfrac{1}{16}M_4M_1+\tfrac{1}{16}M_4M_2+\tfrac{1}{16}M_4M_3+\tfrac{1}{64}M_4M_2M_1+\tfrac{1}{64}M_4M_3M_1+\tfrac{1}{64}M_4M_3M_2+\tfrac{1}{256}M_4M_3M_2M_1\right)\operatorname{vec}[(\tilde{Z}^{(3)}(k))_\sigma]\\&\quad+\left(\tfrac{1}{16}M_4M_1+\tfrac{1}{16}M_4M_2+\tfrac{1}{16}M_4M_3+\tfrac{1}{64}M_4M_2M_1+\tfrac{1}{64}M_4M_3M_1+\tfrac{1}{64}M_4M_3M_2+\tfrac{1}{256}M_4M_3M_2M_1+\tfrac{1}{4}M_4\right)\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma].\end{aligned}$

According to (23)–(26) and by some computations, we derive

(27) $\begin{bmatrix}\operatorname{vec}[(\tilde{Z}^{(1)}(k+1))_\sigma]\\\operatorname{vec}[(\tilde{Z}^{(2)}(k+1))_\sigma]\\\operatorname{vec}[(\tilde{Z}^{(3)}(k+1))_\sigma]\\\operatorname{vec}[(\tilde{Z}^{(4)}(k+1))_\sigma]\end{bmatrix}=ST\begin{bmatrix}\operatorname{vec}[(\tilde{Z}^{(1)}(k))_\sigma]\\\operatorname{vec}[(\tilde{Z}^{(2)}(k))_\sigma]\\\operatorname{vec}[(\tilde{Z}^{(3)}(k))_\sigma]\\\operatorname{vec}[(\tilde{Z}^{(4)}(k))_\sigma]\end{bmatrix},$

where the matrices $S$ and $T$ are defined as in (10). Then the matrix $ST$ is the iteration matrix of the IMGI algorithm, and hence, the IMGI algorithm is convergent if and only if $\rho(ST)<1$.□

Note that the order of the matrix $ST$ in Theorem 3.1 is very high if $n$ is large, which makes computing the spectral radius of $ST$ difficult. To overcome this drawback, we deduce the following corollary in terms of Lemma 2.4, whose condition is easier to check than that of Theorem 3.1 because the related matrix has order only 4.

Corollary 3.1

Suppose that the conditions of Theorem 3.1 are satisfied, and partition $ST=[L_{ij}]_{4\times 4}$ into $4\times 4$ blocks. Then the IMGI algorithm is convergent if the parameter $\mu$ is chosen to satisfy

$\rho\left(\begin{bmatrix} \|L_{11}\| & \|L_{12}\| & \|L_{13}\| & \|L_{14}\|\\ \|L_{21}\| & \|L_{22}\| & \|L_{23}\| & \|L_{24}\|\\ \|L_{31}\| & \|L_{32}\| & \|L_{33}\| & \|L_{34}\|\\ \|L_{41}\| & \|L_{42}\| & \|L_{43}\| & \|L_{44}\| \end{bmatrix}\right)<1,$

with $\|\cdot\|$ being a matrix norm.

Although the convergence conditions of the IMGI algorithm are derived in Theorem 3.1 and Corollary 3.1, they do not determine any interval of the step size factor μ . The following theorem proposes the interval of the step size factor μ in the IMGI algorithm under proper conditions.

Theorem 3.2

Let the conditions of Theorem 3.1 be satisfied, and let $S$ and $T$ defined in Theorem 3.1 be normal matrices. Denote by $\xi_i^{(1)}$, $\xi_i^{(2)}$, $\xi_i^{(3)}$, and $\xi_i^{(4)}$ $(i=1,2,\ldots,4n^2)$ the eigenvalues of the matrices $W_1$, $W_2$, $W_3$, and $W_4$, respectively, and assume that $\operatorname{Re}(\xi_i^{(j)})>0$ for $i=1,2,\ldots,4n^2$ and $j=1,2,3,4$. Then the IMGI algorithm is convergent if

(28) $0<\mu<\min_{1\le i\le 4n^2}\left\{\frac{2\operatorname{Re}(\xi_i^{(1)})}{\operatorname{Re}(\xi_i^{(1)})^2+\operatorname{Im}(\xi_i^{(1)})^2},\ \frac{2\operatorname{Re}(\xi_i^{(2)})}{\operatorname{Re}(\xi_i^{(2)})^2+\operatorname{Im}(\xi_i^{(2)})^2},\ \frac{2\operatorname{Re}(\xi_i^{(3)})}{\operatorname{Re}(\xi_i^{(3)})^2+\operatorname{Im}(\xi_i^{(3)})^2},\ \frac{2\operatorname{Re}(\xi_i^{(4)})}{\operatorname{Re}(\xi_i^{(4)})^2+\operatorname{Im}(\xi_i^{(4)})^2}\right\}.$

Proof

Inasmuch as $S$ and $T$ are normal matrices, it follows from Lemma 2.5 that

$\begin{aligned}\rho(ST)&\le\|ST\|_2\le\|S\|_2\|T\|_2=\rho(S)\rho(T)=\rho(T)=\max\left\{\rho\left(\tfrac{1}{4}M_1\right),\rho\left(\tfrac{1}{4}M_2\right),\rho\left(\tfrac{1}{4}M_3\right),\rho\left(\tfrac{1}{4}M_4\right)\right\}\\&=\max_{1\le i\le 4n^2}\left\{\tfrac{1}{4}\left|1-\mu\xi_i^{(1)}\right|,\tfrac{1}{4}\left|1-\mu\xi_i^{(2)}\right|,\tfrac{1}{4}\left|1-\mu\xi_i^{(3)}\right|,\tfrac{1}{4}\left|1-\mu\xi_i^{(4)}\right|\right\}\\&\le\max_{1\le i\le 4n^2}\left\{\left|1-\mu\xi_i^{(1)}\right|,\left|1-\mu\xi_i^{(2)}\right|,\left|1-\mu\xi_i^{(3)}\right|,\left|1-\mu\xi_i^{(4)}\right|\right\}.\end{aligned}$

This implies that $\rho(ST)<1$ holds if

$\left|1-\mu\xi_i^{(1)}\right|<1,\quad \left|1-\mu\xi_i^{(2)}\right|<1,\quad \left|1-\mu\xi_i^{(3)}\right|<1,\quad \left|1-\mu\xi_i^{(4)}\right|<1,\quad i=1,2,\ldots,4n^2,$

which is equivalent to

$\left(1-\mu\operatorname{Re}(\xi_i^{(j)})\right)^2+\mu^2\operatorname{Im}(\xi_i^{(j)})^2<1,\quad i=1,2,\ldots,4n^2,\ j=1,2,3,4.$

Solving the aforementioned inequalities yields that

$0<\mu<\frac{2\operatorname{Re}(\xi_i^{(j)})}{\operatorname{Re}(\xi_i^{(j)})^2+\operatorname{Im}(\xi_i^{(j)})^2},\quad i=1,2,\ldots,4n^2,\ j=1,2,3,4,$

which gives the convergence condition (28) of the IMGI algorithm.□
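Given the matrices $W_1,\ldots,W_4$ assembled numerically (for example, by Kronecker-product code in the style of the Lemma 3.2 sketch), the upper end of the interval (28) is straightforward to evaluate; a sketch with our naming:

```python
import numpy as np

def imgi_mu_interval(Ws):
    # Upper end of (28) from the eigenvalues of W_1..W_4; requires Re(xi) > 0.
    bound = np.inf
    for W in Ws:
        xi = np.linalg.eigvals(W)
        if np.any(xi.real <= 0):
            raise ValueError("Theorem 3.2 requires Re(xi) > 0 for all eigenvalues")
        bound = min(bound, np.min(2 * xi.real / (xi.real**2 + xi.imag**2)))
    return bound   # any mu in (0, bound) satisfies (28)
```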

In the sequel, we establish a different convergence condition for the IMGI algorithm.

Theorem 3.3

Assume that the CCT Sylvester matrix equation (2) has a unique solution Z * . If μ satisfies

(29) $0<\mu<\min\left\{\frac{2}{\|D_{11}\|_2^2\|D_{12}\|_2^2},\ \frac{2}{\|D_{21}\|_2^2\|D_{22}\|_2^2},\ \frac{2}{\|D_{31}\|_2^2\|D_{32}\|_2^2},\ \frac{2}{\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\},$

then the matrix sequence { Z ( k ) } generated by the IMGI algorithm converges to Z * .

Proof

We first define the error matrices

(30) $\tilde{Z}(k)=Z(k)-Z^*,\quad \tilde{\hat{Z}}(k)=\hat{Z}(k)-Z^*,\quad \tilde{\check{Z}}(k)=\check{Z}(k)-Z^*,\quad \tilde{\acute{Z}}(k)=\acute{Z}(k)-Z^*,$

and

(31) $\begin{aligned} &P_1(k)=A_1\tilde{Z}(k)B_1,\quad P_2(k)=A_1\tilde{\hat{Z}}(k)B_1,\quad P_3(k)=A_1\tilde{\check{Z}}(k)B_1,\quad P_4(k)=A_1\tilde{\acute{Z}}(k)B_1,\\ &Q_1(k)=A_2\bar{\tilde{Z}}(k)B_2,\quad Q_2(k)=A_2\bar{\tilde{\hat{Z}}}(k)B_2,\quad Q_3(k)=A_2\bar{\tilde{\check{Z}}}(k)B_2,\quad Q_4(k)=A_2\bar{\tilde{\acute{Z}}}(k)B_2,\\ &U_1(k)=A_3\tilde{Z}^T(k)B_3,\quad U_2(k)=A_3\tilde{\hat{Z}}^T(k)B_3,\quad U_3(k)=A_3\tilde{\check{Z}}^T(k)B_3,\quad U_4(k)=A_3\tilde{\acute{Z}}^T(k)B_3,\\ &V_1(k)=A_4\tilde{Z}^H(k)B_4,\quad V_2(k)=A_4\tilde{\hat{Z}}^H(k)B_4,\quad V_3(k)=A_4\tilde{\check{Z}}^H(k)B_4,\quad V_4(k)=A_4\tilde{\acute{Z}}^H(k)B_4,\\ &W_j(k)=P_j(k)+Q_j(k)+U_j(k)+V_j(k),\quad j=1,2,3,4.\end{aligned}$

Thus,

(32) $\begin{aligned} H-\Theta(Z(k))&=-(A_1\tilde{Z}(k)B_1+A_2\bar{\tilde{Z}}(k)B_2+A_3\tilde{Z}^T(k)B_3+A_4\tilde{Z}^H(k)B_4)=-W_1(k),\\ \overline{H-\Theta(\hat{Z}(k))}&=-\overline{(A_1\tilde{\hat{Z}}(k)B_1+A_2\bar{\tilde{\hat{Z}}}(k)B_2+A_3\tilde{\hat{Z}}^T(k)B_3+A_4\tilde{\hat{Z}}^H(k)B_4)}=-\overline{W_2(k)},\\ H^T-\Theta^T(\check{Z}(k))&=-(A_1\tilde{\check{Z}}(k)B_1+A_2\bar{\tilde{\check{Z}}}(k)B_2+A_3\tilde{\check{Z}}^T(k)B_3+A_4\tilde{\check{Z}}^H(k)B_4)^T=-W_3(k)^T,\\ H^H-\Theta^H(\acute{Z}(k))&=-(A_1\tilde{\acute{Z}}(k)B_1+A_2\bar{\tilde{\acute{Z}}}(k)B_2+A_3\tilde{\acute{Z}}^T(k)B_3+A_4\tilde{\acute{Z}}^H(k)B_4)^H=-W_4(k)^H.\end{aligned}$

Then, it follows from the iteration scheme of the IMGI algorithm and (32) that

(33) $\begin{aligned}\tilde{Z}^{(1)}(k+1)&=\tilde{Z}(k)-\mu D_{11}^HW_1(k)D_{12}^H,\\ \tilde{Z}^{(2)}(k+1)&=\tilde{\hat{Z}}(k)-\mu D_{21}^T\overline{W_2(k)}D_{22}^T,\\ \tilde{Z}^{(3)}(k+1)&=\tilde{\check{Z}}(k)-\mu\bar{D}_{32}W_3(k)^T\bar{D}_{31},\\ \tilde{Z}^{(4)}(k+1)&=\tilde{\acute{Z}}(k)-\mu D_{42}W_4(k)^HD_{41}.\end{aligned}$

By taking the Frobenius norm of each equation in (33) and using the properties of the norm, it holds that

(34) $\begin{aligned}\|\tilde{Z}^{(1)}(k+1)\|^2&=\|\tilde{Z}(k)-\mu D_{11}^HW_1(k)D_{12}^H\|^2\\&=\|\tilde{Z}(k)\|^2-\mu\operatorname{tr}(\tilde{Z}^H(k)D_{11}^HW_1(k)D_{12}^H)-\mu\operatorname{tr}(D_{12}W_1(k)^HD_{11}\tilde{Z}(k))+\mu^2\|D_{11}^HW_1(k)D_{12}^H\|^2\\&\le\|\tilde{Z}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[D_{12}^H\tilde{Z}^H(k)D_{11}^HW_1(k)]+\mu^2\|D_{11}\|_2^2\|D_{12}\|_2^2\|W_1(k)\|^2,\end{aligned}$

(35) $\begin{aligned}\|\tilde{Z}^{(2)}(k+1)\|^2&=\|\tilde{\hat{Z}}(k)-\mu D_{21}^T\overline{W_2(k)}D_{22}^T\|^2\\&=\|\tilde{\hat{Z}}(k)\|^2-\mu\operatorname{tr}(\tilde{\hat{Z}}^H(k)D_{21}^T\overline{W_2(k)}D_{22}^T)-\mu\operatorname{tr}(\bar{D}_{22}W_2(k)^T\bar{D}_{21}\tilde{\hat{Z}}(k))+\mu^2\|D_{21}^T\overline{W_2(k)}D_{22}^T\|^2\\&\le\|\tilde{\hat{Z}}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[D_{22}^T\tilde{\hat{Z}}^H(k)D_{21}^T\overline{W_2(k)}]+\mu^2\|D_{21}\|_2^2\|D_{22}\|_2^2\|W_2(k)\|^2,\end{aligned}$

(36) $\begin{aligned}\|\tilde{Z}^{(3)}(k+1)\|^2&=\|\tilde{\check{Z}}(k)-\mu\bar{D}_{32}W_3(k)^T\bar{D}_{31}\|^2\\&=\|\tilde{\check{Z}}(k)\|^2-\mu\operatorname{tr}(\tilde{\check{Z}}^H(k)\bar{D}_{32}W_3(k)^T\bar{D}_{31})-\mu\operatorname{tr}(D_{31}^T\overline{W_3(k)}D_{32}^T\tilde{\check{Z}}(k))+\mu^2\|\bar{D}_{32}W_3(k)^T\bar{D}_{31}\|^2\\&\le\|\tilde{\check{Z}}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[\bar{D}_{31}\tilde{\check{Z}}^H(k)\bar{D}_{32}W_3(k)^T]+\mu^2\|D_{31}\|_2^2\|D_{32}\|_2^2\|W_3(k)\|^2,\end{aligned}$

and

(37) $\begin{aligned}\|\tilde{Z}^{(4)}(k+1)\|^2&=\|\tilde{\acute{Z}}(k)-\mu D_{42}W_4(k)^HD_{41}\|^2\\&=\|\tilde{\acute{Z}}(k)\|^2-\mu\operatorname{tr}(\tilde{\acute{Z}}^H(k)D_{42}W_4(k)^HD_{41})-\mu\operatorname{tr}(D_{41}^HW_4(k)D_{42}^H\tilde{\acute{Z}}(k))+\mu^2\|D_{42}W_4(k)^HD_{41}\|^2\\&\le\|\tilde{\acute{Z}}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[D_{41}\tilde{\acute{Z}}^H(k)D_{42}W_4(k)^H]+\mu^2\|D_{41}\|_2^2\|D_{42}\|_2^2\|W_4(k)\|^2.\end{aligned}$

In view of (34)–(37) and the Cauchy–Schwarz inequality, it has

$\begin{aligned}\|\tilde{Z}(k+1)\|^2&=\left\|\tfrac{1}{4}\tilde{Z}^{(1)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(2)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(3)}(k+1)+\tfrac{1}{4}\tilde{Z}^{(4)}(k+1)\right\|^2\\&\le 4\left(\left\|\tfrac{1}{4}\tilde{Z}^{(1)}(k+1)\right\|^2+\left\|\tfrac{1}{4}\tilde{Z}^{(2)}(k+1)\right\|^2+\left\|\tfrac{1}{4}\tilde{Z}^{(3)}(k+1)\right\|^2+\left\|\tfrac{1}{4}\tilde{Z}^{(4)}(k+1)\right\|^2\right)\\&=\tfrac{1}{4}\|\tilde{Z}^{(1)}(k+1)\|^2+\tfrac{1}{4}\|\tilde{Z}^{(2)}(k+1)\|^2+\tfrac{1}{4}\|\tilde{Z}^{(3)}(k+1)\|^2+\tfrac{1}{4}\|\tilde{Z}^{(4)}(k+1)\|^2\\&\le\tfrac{1}{4}\left\{\|\tilde{Z}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[D_{12}^H\tilde{Z}^H(k)D_{11}^HW_1(k)]+\mu^2\|D_{11}\|_2^2\|D_{12}\|_2^2\|W_1(k)\|^2\right\}\\&\quad+\tfrac{1}{4}\left\{\|\tilde{\hat{Z}}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[D_{22}^T\tilde{\hat{Z}}^H(k)D_{21}^T\overline{W_2(k)}]+\mu^2\|D_{21}\|_2^2\|D_{22}\|_2^2\|W_2(k)\|^2\right\}\\&\quad+\tfrac{1}{4}\left\{\|\tilde{\check{Z}}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[\bar{D}_{31}\tilde{\check{Z}}^H(k)\bar{D}_{32}W_3(k)^T]+\mu^2\|D_{31}\|_2^2\|D_{32}\|_2^2\|W_3(k)\|^2\right\}\\&\quad+\tfrac{1}{4}\left\{\|\tilde{\acute{Z}}(k)\|^2-2\mu\operatorname{Re}\operatorname{tr}[D_{41}\tilde{\acute{Z}}^H(k)D_{42}W_4(k)^H]+\mu^2\|D_{41}\|_2^2\|D_{42}\|_2^2\|W_4(k)\|^2\right\}\\&\le\tfrac{1}{4}\|\tilde{Z}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{11}\|_2^2\|D_{12}\|_2^2\right)\|W_1(k)\|^2+\tfrac{1}{4}\|\tilde{\hat{Z}}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{21}\|_2^2\|D_{22}\|_2^2\right)\|W_2(k)\|^2\\&\quad+\tfrac{1}{4}\|\tilde{\check{Z}}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{31}\|_2^2\|D_{32}\|_2^2\right)\|W_3(k)\|^2+\tfrac{1}{4}\|\tilde{\acute{Z}}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{41}\|_2^2\|D_{42}\|_2^2\right)\|W_4(k)\|^2,\end{aligned}$

and hence

(38) $\begin{aligned}\|\tilde{Z}(k+1)\|^2&\le\|\tilde{Z}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{11}\|_2^2\|D_{12}\|_2^2\right)\|W_1(k)\|^2+\tfrac{1}{4}\|\tilde{\hat{Z}}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{21}\|_2^2\|D_{22}\|_2^2\right)\|W_2(k)\|^2\\&\quad+\tfrac{1}{4}\|\tilde{\check{Z}}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{31}\|_2^2\|D_{32}\|_2^2\right)\|W_3(k)\|^2+\tfrac{1}{4}\|\tilde{\acute{Z}}(k)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{41}\|_2^2\|D_{42}\|_2^2\right)\|W_4(k)\|^2\\&\le\|\tilde{Z}(0)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{11}\|_2^2\|D_{12}\|_2^2\right)\sum_{i=0}^{k}\|W_1(i)\|^2+\tfrac{1}{4}\sum_{i=0}^{k}\|\tilde{\hat{Z}}(i)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{21}\|_2^2\|D_{22}\|_2^2\right)\sum_{i=0}^{k}\|W_2(i)\|^2\\&\quad+\tfrac{1}{4}\sum_{i=0}^{k}\|\tilde{\check{Z}}(i)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{31}\|_2^2\|D_{32}\|_2^2\right)\sum_{i=0}^{k}\|W_3(i)\|^2+\tfrac{1}{4}\sum_{i=0}^{k}\|\tilde{\acute{Z}}(i)\|^2-\tfrac{\mu}{4}\left(2-\mu\|D_{41}\|_2^2\|D_{42}\|_2^2\right)\sum_{i=0}^{k}\|W_4(i)\|^2.\end{aligned}$

By the analogous methods in [19,20,24], we can obtain

(39) $\frac{1}{4}\sum_{i=0}^{\infty}\|\tilde{\hat{Z}}(i)\|^2<+\infty,\qquad \frac{1}{4}\sum_{i=0}^{\infty}\|\tilde{\check{Z}}(i)\|^2<+\infty,\qquad \frac{1}{4}\sum_{i=0}^{\infty}\|\tilde{\acute{Z}}(i)\|^2<+\infty.$

Thus, if the parameter μ satisfies the following condition

(40) $0<\mu<\min\left\{\frac{2}{\|D_{11}\|_2^2\|D_{12}\|_2^2},\ \frac{2}{\|D_{21}\|_2^2\|D_{22}\|_2^2},\ \frac{2}{\|D_{31}\|_2^2\|D_{32}\|_2^2},\ \frac{2}{\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\},$

then (38) implies that $\sum_{i=0}^{\infty}\|W_1(i)\|^2<+\infty$, which results in $\lim_{i\to+\infty}W_1(i)=0$, that is,

(41) $\lim_{i\to+\infty}W_1(i)=\lim_{i\to+\infty}\left(A_1\tilde{Z}(i)B_1+A_2\bar{\tilde{Z}}(i)B_2+A_3\tilde{Z}^T(i)B_3+A_4\tilde{Z}^H(i)B_4\right)=0.$

Bearing in mind that the CCT Sylvester matrix equation (2) has a unique solution, it follows from (41) and Lemma 3.2 that $\lim_{i\to+\infty}\tilde{Z}(i)=0$, i.e., $\lim_{i\to+\infty}Z(i)=Z^*$. The proof is completed.□
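Condition (29) is particularly cheap to check, since the 2-norm of a diagonal matrix is the largest modulus of its diagonal entries; a sketch with our naming:

```python
import numpy as np

def imgi_mu_bound(A, B):
    # Upper end of the interval (29): min over i of 2 / (||D_{i1}||_2^2 ||D_{i2}||_2^2).
    bound = np.inf
    for Ai, Bi in zip(A, B):
        d1 = np.max(np.abs(np.diag(Ai)))   # ||D_{i1}||_2
        d2 = np.max(np.abs(np.diag(Bi)))   # ||D_{i2}||_2
        bound = min(bound, 2.0 / (d1**2 * d2**2))
    return bound
```

For instance, calling imgi(A, B, H, 0.9 * imgi_mu_bound(A, B)) runs the IMGI sketch above with a step size safely inside the interval (29).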

In [17], Niu et al. introduced a relaxation parameter into the GI algorithm and established the RGI algorithm, and the numerical results of [17] illustrated that the RGI algorithm outperforms the GI one with a proper relaxation parameter. This has motivated many researchers to derive relaxed versions of existing algorithms for several kinds of matrix equations; see [15,16,19] for details. Recently, Wang et al. [21] constructed an RGI algorithm for the CCT Sylvester matrix equation (2). The framework of the RGI algorithm is as follows.

The RGI algorithm [21]:

Step 1: Given matrices $A_i,B_i,H\in\mathbb{C}^{n\times n}$ $(i=1,2,3,4)$, two constants $\varepsilon,\mu>0$, and a relaxed factor $0<\gamma<1$. Choose the initial matrices $Z(0)$ and $Z^{(i)}(0)$ $(i=1,2,3,4)$, and set $m=0$;

Step 2: If $\delta_m=\|H-A_1Z(m)B_1-A_2\bar{Z}(m)B_2-A_3Z(m)^TB_3-A_4Z(m)^HB_4\|/\|H\|<\varepsilon$, stop; otherwise, go to Step 3;

Step 3: Update the sequences

$\begin{aligned} Z^{(1)}(m+1)&=Z^{(1)}(m)+\frac{1}{2}\gamma\mu A_1^H(H-\Theta(Z(m)))B_1^H,\\ Z^{(2)}(m+1)&=Z^{(2)}(m)+\frac{1}{2}\gamma\mu A_2^T(\overline{H-\Theta(Z(m))})B_2^T,\\ Z^{(3)}(m+1)&=Z^{(3)}(m)+\frac{1-\gamma}{2}\mu\bar{B}_3(H^T-\Theta^T(Z(m)))\bar{A}_3,\\ Z^{(4)}(m+1)&=Z^{(4)}(m)+\frac{1-\gamma}{2}\mu B_4(H^H-\Theta^H(Z(m)))A_4,\\ Z(m+1)&=\frac{1}{2}(1-\gamma)Z^{(1)}(m+1)+\frac{1}{2}(1-\gamma)Z^{(2)}(m+1)+\frac{1}{2}\gamma Z^{(3)}(m+1)+\frac{1}{2}\gamma Z^{(4)}(m+1); \end{aligned}$

Step 4: Set $m\leftarrow m+1$ and return to Step 2.

Numerical results in [21] validated that the RGI algorithm with a proper relaxation factor is faster than the GI one in [14]. To further improve the computational efficiency of the IMGI algorithm, by making use of Lemma 3.1 and enlightened by the ideas of [17,21], we propose the following IMRGI algorithm.

The IMRGI algorithm:

Step 1: Given matrices $A_i,B_i,H\in\mathbb{C}^{n\times n}$ $(i=1,2,3,4)$, two constants $\varepsilon,\mu>0$, and a relaxed factor $0<\omega<1$. Choose the initial matrices $Z(0)$ and $Z^{(i)}(0)$ $(i=1,2,3,4)$, and set $k=0$;

Step 2: If $\delta_k=\|H-A_1Z(k)B_1-A_2\bar{Z}(k)B_2-A_3Z(k)^TB_3-A_4Z(k)^HB_4\|/\|H\|<\varepsilon$, stop; otherwise, go to Step 3;

Step 3: Update the sequences

$\begin{aligned} Z^{(1)}(k+1)&=Z(k)+\frac{1}{2}\omega\mu D_{11}^H(H-\Theta(Z(k)))D_{12}^H,\\ \hat{Z}(k)&=\frac{1}{2}(1-\omega)Z^{(1)}(k+1)+\frac{1}{2}(1-\omega)Z^{(2)}(k)+\frac{1}{2}\omega Z^{(3)}(k)+\frac{1}{2}\omega Z^{(4)}(k),\\ Z^{(2)}(k+1)&=\hat{Z}(k)+\frac{1}{2}\omega\mu D_{21}^T(\overline{H-\Theta(\hat{Z}(k))})D_{22}^T,\\ \check{Z}(k)&=\frac{1}{2}(1-\omega)Z^{(1)}(k+1)+\frac{1}{2}(1-\omega)Z^{(2)}(k+1)+\frac{1}{2}\omega Z^{(3)}(k)+\frac{1}{2}\omega Z^{(4)}(k),\\ Z^{(3)}(k+1)&=\check{Z}(k)+\frac{1}{2}(1-\omega)\mu\bar{D}_{32}(H^T-\Theta^T(\check{Z}(k)))\bar{D}_{31},\\ \acute{Z}(k)&=\frac{1}{2}(1-\omega)Z^{(1)}(k+1)+\frac{1}{2}(1-\omega)Z^{(2)}(k+1)+\frac{1}{2}\omega Z^{(3)}(k+1)+\frac{1}{2}\omega Z^{(4)}(k),\\ Z^{(4)}(k+1)&=\acute{Z}(k)+\frac{1}{2}(1-\omega)\mu D_{42}(H^H-\Theta^H(\acute{Z}(k)))D_{41},\\ Z(k+1)&=\frac{1}{2}(1-\omega)Z^{(1)}(k+1)+\frac{1}{2}(1-\omega)Z^{(2)}(k+1)+\frac{1}{2}\omega Z^{(3)}(k+1)+\frac{1}{2}\omega Z^{(4)}(k+1); \end{aligned}$

Step 4: Set $k\leftarrow k+1$ and return to Step 2.
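A NumPy transcription of the IMRGI algorithm differs from the IMGI sketch only in the step sizes and the weighted averages; theta is again the residual helper from the GI sketch, and the naming is ours:

```python
import numpy as np

def imrgi(A, B, H, mu, omega, eps=1e-10, maxit=10000):
    # Relaxation weights (1-omega)/2, (1-omega)/2, omega/2, omega/2.
    DA = [np.diag(np.diag(M)) for M in A]
    DB = [np.diag(np.diag(M)) for M in B]
    w = [(1 - omega) / 2, (1 - omega) / 2, omega / 2, omega / 2]
    Z = np.zeros_like(H)
    Zs = [np.zeros_like(H) for _ in range(4)]

    def mix():
        return w[0] * Zs[0] + w[1] * Zs[1] + w[2] * Zs[2] + w[3] * Zs[3]

    for k in range(maxit):
        R = H - theta(Z, A, B)
        if np.linalg.norm(R) / np.linalg.norm(H) < eps:
            break
        Zs[0] = Z + 0.5 * omega * mu * DA[0].conj().T @ R @ DB[0].conj().T
        Zhat = mix()
        Zs[1] = Zhat + 0.5 * omega * mu * DA[1].T @ (H - theta(Zhat, A, B)).conj() @ DB[1].T
        Zchk = mix()
        Zs[2] = Zchk + 0.5 * (1 - omega) * mu * DB[2].conj() @ (H - theta(Zchk, A, B)).T @ DA[2].conj()
        Zacu = mix()
        Zs[3] = Zacu + 0.5 * (1 - omega) * mu * DB[3] @ (H - theta(Zacu, A, B)).conj().T @ DA[3]
        Z = mix()
    return Z
```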

In what follows, we establish the convergence theorem of the IMRGI algorithm.

Theorem 3.4

Assume that the CCT Sylvester matrix equation (2) has a unique solution $Z^*$. Then the IMRGI algorithm is convergent if and only if the parameters $\mu$ and $\omega$ are chosen to satisfy $\rho(UV)<1$, where

(42) $U=\begin{bmatrix} I & 0 & 0 & 0\\ \frac{1-\omega}{2}G_2 & I & 0 & 0\\ \frac{(1-\omega)^2}{4}G_3G_2+\frac{1-\omega}{2}G_3 & \frac{1-\omega}{2}G_3 & I & 0\\ \frac{1-\omega}{2}G_4+\frac{(1-\omega)^2}{4}G_4G_2+\frac{\omega(1-\omega)}{4}G_4G_3+\frac{\omega(1-\omega)^2}{8}G_4G_3G_2 & \frac{\omega(1-\omega)}{4}G_4G_3+\frac{1-\omega}{2}G_4 & \frac{\omega}{2}G_4 & I \end{bmatrix},$

$V=\begin{bmatrix} \frac{1-\omega}{2}G_1 & \frac{1-\omega}{2}G_1 & \frac{\omega}{2}G_1 & \frac{\omega}{2}G_1\\ 0 & \frac{1-\omega}{2}G_2 & \frac{\omega}{2}G_2 & \frac{\omega}{2}G_2\\ 0 & 0 & \frac{\omega}{2}G_3 & \frac{\omega}{2}G_3\\ 0 & 0 & 0 & \frac{\omega}{2}G_4 \end{bmatrix},$

with

$G_1=I_{4n^2}-\frac{1}{2}\omega\mu W_1,\quad G_2=I_{4n^2}-\frac{1}{2}\omega\mu W_2,\quad G_3=I_{4n^2}-\frac{1}{2}(1-\omega)\mu W_3,\quad G_4=I_{4n^2}-\frac{1}{2}(1-\omega)\mu W_4,$

where $W_1$, $W_2$, $W_3$, and $W_4$ are the matrices defined in Theorem 3.1.

Proof

The proof of this theorem is similar to that of Theorem 3.1, and hence, we omit it here.□

Theorem 3.4 gives the convergence condition of the IMRGI algorithm, but it does not propose intervals for the parameters $\omega$, $\mu$ or their optimal expressions. The reason is that the matrices $G_1$, $G_2$, $G_3$, and $G_4$ in the matrices $U$ and $V$ of Theorem 3.4 contain both parameters $\omega$, $\mu$, which cannot be separated from $U$ and $V$. Hence, it is difficult to obtain an explicit expression of the iteration matrix of the IMRGI algorithm with respect to $\omega$, $\mu$, or the optimal values of these parameters. Nevertheless, we can derive intervals for the parameters $\omega$, $\mu$ of the IMRGI algorithm and a quasi-optimal expression of $\omega$ under suitable conditions.

Theorem 3.5

Let the conditions of Theorem 3.4 be satisfied, and let $U$ and $V$ defined in Theorem 3.4 be normal matrices. Denote by $\lambda_i^{(1)}$, $\lambda_i^{(2)}$, $\lambda_i^{(3)}$, and $\lambda_i^{(4)}$ $(i=1,2,\ldots,4n^2)$ the eigenvalues of the matrices $W_1$, $W_2$, $W_3$, and $W_4$, respectively, and assume that $\operatorname{Re}(\lambda_i^{(j)})>0$ for $i=1,2,\ldots,4n^2$ and $j=1,2,3,4$. Then the IMRGI algorithm is convergent if $0<\omega<1$ and

(43) $0<\mu<\min_{1\le i\le 4n^2}\left\{\frac{4\operatorname{Re}(\lambda_i^{(1)})}{\omega\left(\operatorname{Re}(\lambda_i^{(1)})^2+\operatorname{Im}(\lambda_i^{(1)})^2\right)},\ \frac{4\operatorname{Re}(\lambda_i^{(2)})}{\omega\left(\operatorname{Re}(\lambda_i^{(2)})^2+\operatorname{Im}(\lambda_i^{(2)})^2\right)},\ \frac{4\operatorname{Re}(\lambda_i^{(3)})}{(1-\omega)\left(\operatorname{Re}(\lambda_i^{(3)})^2+\operatorname{Im}(\lambda_i^{(3)})^2\right)},\ \frac{4\operatorname{Re}(\lambda_i^{(4)})}{(1-\omega)\left(\operatorname{Re}(\lambda_i^{(4)})^2+\operatorname{Im}(\lambda_i^{(4)})^2\right)}\right\}.$

Besides, if all the eigenvalues of the matrices $W_1$, $W_2$, $W_3$, and $W_4$ are real, then the quasi-optimal relaxation factor is $\omega^*=\frac{4}{\mu(\bar{\lambda}_{\max}+\bar{\lambda}_{\min})}\triangleq\omega_1^*$ or $\omega^*=1-\frac{4}{\mu(\tilde{\lambda}_{\max}+\tilde{\lambda}_{\min})}\triangleq\omega_2^*$: if $\Phi(\omega_1^*)<\Phi(\omega_2^*)$, then $\omega^*=\omega_1^*$; otherwise, $\omega^*=\omega_2^*$. Here, $\bar{\lambda}_{\max}$ and $\bar{\lambda}_{\min}$ denote the maximum and minimum eigenvalues of the two matrices $W_1$, $W_2$, respectively, and $\tilde{\lambda}_{\max}$ and $\tilde{\lambda}_{\min}$ denote the maximum and minimum eigenvalues of the two matrices $W_3$, $W_4$, respectively.

Proof

Inasmuch as U and V in Theorem 3.4 are normal matrices, according to Lemma 2.5, it has

(44) $\begin{aligned}\rho(UV)&\le\|UV\|_2\le\|U\|_2\|V\|_2=\rho(U)\rho(V)=\rho(V)\\&=\max\left\{\rho\left(\tfrac{1}{2}(1-\omega)G_1\right),\rho\left(\tfrac{1}{2}(1-\omega)G_2\right),\rho\left(\tfrac{1}{2}\omega G_3\right),\rho\left(\tfrac{1}{2}\omega G_4\right)\right\}\\&=\max_{1\le i\le 4n^2}\left\{\tfrac{1-\omega}{2}\left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(1)}\right|,\tfrac{1-\omega}{2}\left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(2)}\right|,\tfrac{\omega}{2}\left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(3)}\right|,\tfrac{\omega}{2}\left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(4)}\right|\right\}\\&\le\max_{1\le i\le 4n^2}\left\{\left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(1)}\right|,\left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(2)}\right|,\left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(3)}\right|,\left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(4)}\right|\right\}\\&=\max\left\{\max_{1\le i\le 4n^2}\left\{\left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(1)}\right|,\left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(2)}\right|\right\},\max_{1\le i\le 4n^2}\left\{\left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(3)}\right|,\left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(4)}\right|\right\}\right\},\end{aligned}$

in view of $0<\omega<1$, $\frac{\omega}{2}<1$, and $\frac{1-\omega}{2}<1$. This means that $\rho(UV)<1$ if

$\left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(1)}\right|<1,\quad \left|1-\tfrac{1}{2}\omega\mu\lambda_i^{(2)}\right|<1,\quad \left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(3)}\right|<1,\quad \left|1-\tfrac{1-\omega}{2}\mu\lambda_i^{(4)}\right|<1,\quad i=1,2,\ldots,4n^2,$

which can be equivalently transformed into the following inequalities

$\left(1-\tfrac{1}{2}\omega\mu\operatorname{Re}(\lambda_i^{(1)})\right)^2+\tfrac{1}{4}\omega^2\mu^2\operatorname{Im}(\lambda_i^{(1)})^2<1,\qquad \left(1-\tfrac{1}{2}\omega\mu\operatorname{Re}(\lambda_i^{(2)})\right)^2+\tfrac{1}{4}\omega^2\mu^2\operatorname{Im}(\lambda_i^{(2)})^2<1,$

$\left(1-\tfrac{1-\omega}{2}\mu\operatorname{Re}(\lambda_i^{(3)})\right)^2+\tfrac{(1-\omega)^2}{4}\mu^2\operatorname{Im}(\lambda_i^{(3)})^2<1,\qquad \left(1-\tfrac{1-\omega}{2}\mu\operatorname{Re}(\lambda_i^{(4)})\right)^2+\tfrac{(1-\omega)^2}{4}\mu^2\operatorname{Im}(\lambda_i^{(4)})^2<1,\quad i=1,2,\ldots,4n^2.$

By solving the above inequalities, it has

(45) $0<\mu<\frac{4\operatorname{Re}(\lambda_i^{(1)})}{\omega\left(\operatorname{Re}(\lambda_i^{(1)})^2+\operatorname{Im}(\lambda_i^{(1)})^2\right)},\quad 0<\mu<\frac{4\operatorname{Re}(\lambda_i^{(2)})}{\omega\left(\operatorname{Re}(\lambda_i^{(2)})^2+\operatorname{Im}(\lambda_i^{(2)})^2\right)},\quad 0<\mu<\frac{4\operatorname{Re}(\lambda_i^{(3)})}{(1-\omega)\left(\operatorname{Re}(\lambda_i^{(3)})^2+\operatorname{Im}(\lambda_i^{(3)})^2\right)},\quad 0<\mu<\frac{4\operatorname{Re}(\lambda_i^{(4)})}{(1-\omega)\left(\operatorname{Re}(\lambda_i^{(4)})^2+\operatorname{Im}(\lambda_i^{(4)})^2\right)}.$

Then we obtain the convergence condition (43) of the IMRGI algorithm in terms of (45).

Besides, if all the eigenvalues of the matrices $W_1$, $W_2$, $W_3$, and $W_4$ are real, then it follows from (44) that

(46) $\rho(UV)\le\max\left\{\left|1-\tfrac{1}{2}\omega\mu\bar{\lambda}_{\max}\right|,\left|1-\tfrac{1}{2}\omega\mu\bar{\lambda}_{\min}\right|,\left|1-\tfrac{1-\omega}{2}\mu\tilde{\lambda}_{\max}\right|,\left|1-\tfrac{1-\omega}{2}\mu\tilde{\lambda}_{\min}\right|\right\}=\max\{\Delta_1(\omega),\Delta_2(\omega)\}\triangleq\Phi(\omega),$

where $\bar{\lambda}_{\max}$ and $\bar{\lambda}_{\min}$ denote the maximum and minimum eigenvalues of the two matrices $W_1$, $W_2$, respectively, $\tilde{\lambda}_{\max}$ and $\tilde{\lambda}_{\min}$ denote the maximum and minimum eigenvalues of the two matrices $W_3$, $W_4$, respectively, and

$\Delta_1(\omega)=\max\left\{\left|1-\tfrac{1}{2}\omega\mu\bar{\lambda}_{\max}\right|,\left|1-\tfrac{1}{2}\omega\mu\bar{\lambda}_{\min}\right|\right\},\qquad \Delta_2(\omega)=\max\left\{\left|1-\tfrac{1-\omega}{2}\mu\tilde{\lambda}_{\max}\right|,\left|1-\tfrac{1-\omega}{2}\mu\tilde{\lambda}_{\min}\right|\right\}.$

To minimize $\Delta_1(\omega)$, the parameter $\omega$ should satisfy $1-\tfrac{1}{2}\omega\mu\bar{\lambda}_{\min}=\tfrac{1}{2}\omega\mu\bar{\lambda}_{\max}-1$, which leads to $\omega_1^*=\frac{4}{\mu(\bar{\lambda}_{\max}+\bar{\lambda}_{\min})}$. In addition, the parameter $\omega$ minimizing $\Delta_2(\omega)$ satisfies $1-\tfrac{1-\omega}{2}\mu\tilde{\lambda}_{\min}=\tfrac{1-\omega}{2}\mu\tilde{\lambda}_{\max}-1$, which results in $\omega_2^*=1-\frac{4}{\mu(\tilde{\lambda}_{\max}+\tilde{\lambda}_{\min})}$. Substituting $\omega_1^*$ and $\omega_2^*$ into $\Phi(\omega)$, if $\Phi(\omega_1^*)<\Phi(\omega_2^*)$, then we take $\omega^*=\omega_1^*$. Otherwise, $\omega^*=\omega_2^*$.□
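Under the all-real-eigenvalue assumption of Theorem 3.5, the two candidate relaxation factors and the bound $\Phi$ can be computed and compared directly; a sketch with our naming:

```python
import numpy as np

def quasi_optimal_omega(Ws, mu):
    # omega_1* = 4/(mu(lbar_max+lbar_min)); omega_2* = 1 - 4/(mu(ltil_max+ltil_min)).
    l12 = np.concatenate([np.linalg.eigvals(Ws[0]).real,
                          np.linalg.eigvals(Ws[1]).real])   # eigenvalues of W1, W2
    l34 = np.concatenate([np.linalg.eigvals(Ws[2]).real,
                          np.linalg.eigvals(Ws[3]).real])   # eigenvalues of W3, W4

    def phi(omega):
        d1 = max(abs(1 - 0.5 * omega * mu * l12.max()),
                 abs(1 - 0.5 * omega * mu * l12.min()))
        d2 = max(abs(1 - 0.5 * (1 - omega) * mu * l34.max()),
                 abs(1 - 0.5 * (1 - omega) * mu * l34.min()))
        return max(d1, d2)                                  # Phi(omega) of (46)

    w1 = 4.0 / (mu * (l12.max() + l12.min()))
    w2 = 1.0 - 4.0 / (mu * (l34.max() + l34.min()))
    return w1 if phi(w1) < phi(w2) else w2
```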

Remark 3.2

Theorem 3.5 gives the convergence conditions for the parameters $\mu$, $\omega$ of the IMRGI algorithm. However, it involves the real and imaginary parts of the eigenvalues of the matrices $W_i$ $(i=1,2,3,4)$, which are not easy to compute. Moreover, the assumptions that the matrices $U$, $V$ are normal and that all the eigenvalues of the matrices $W_i$ $(i=1,2,3,4)$ are real are somewhat strong. Besides, (43) is only a sufficient condition for the convergence of the IMRGI algorithm, and its assumptions are stronger than those of Theorem 3.4. We will therefore study more practical convergence conditions for the IMRGI algorithm in the future.

In addition, we deduce another convergence theorem of the IMRGI algorithm as follows.

Theorem 3.6

Suppose that the CCT Sylvester matrix equation (2) has a unique solution Z * . If 0 < ω < 1 and μ satisfies

(47) $0<\mu<\min\left\{\frac{4}{\omega\|D_{11}\|_2^2\|D_{12}\|_2^2},\ \frac{4}{\omega\|D_{21}\|_2^2\|D_{22}\|_2^2},\ \frac{4}{(1-\omega)\|D_{31}\|_2^2\|D_{32}\|_2^2},\ \frac{4}{(1-\omega)\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\},$

or the parameters $\mu$ and $\omega$ satisfy

(48) $\max\left\{0,\ 1-\min\left\{\frac{4}{\mu\|D_{31}\|_2^2\|D_{32}\|_2^2},\frac{4}{\mu\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\}\right\}<\omega<\min\left\{\frac{4}{\mu\|D_{11}\|_2^2\|D_{12}\|_2^2},\frac{4}{\mu\|D_{21}\|_2^2\|D_{22}\|_2^2},1\right\},$ provided that $\min\left\{\frac{4}{\mu\|D_{11}\|_2^2\|D_{12}\|_2^2},\frac{4}{\mu\|D_{21}\|_2^2\|D_{22}\|_2^2}\right\}+\min\left\{\frac{4}{\mu\|D_{31}\|_2^2\|D_{32}\|_2^2},\frac{4}{\mu\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\}>1,$

then the matrix sequence { Z ( k ) } generated by the IMRGI algorithm converges to Z * .
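Before proceeding to the proof, we note that the parameter conditions (47) and (48) can be checked with a handful of diagonal norms; a sketch with our naming:

```python
import numpy as np

def imrgi_mu_bound(A, B, omega):
    # Upper end of the interval (47) for a fixed relaxation factor 0 < omega < 1.
    d = [np.max(np.abs(np.diag(M))) for M in A]      # ||D_{i1}||_2
    e = [np.max(np.abs(np.diag(M))) for M in B]      # ||D_{i2}||_2
    return min(4 / (omega * d[0]**2 * e[0]**2),
               4 / (omega * d[1]**2 * e[1]**2),
               4 / ((1 - omega) * d[2]**2 * e[2]**2),
               4 / ((1 - omega) * d[3]**2 * e[3]**2))

def imrgi_omega_interval(A, B, mu):
    # Interval (48) for a fixed mu; empty unless the two minima sum to more than 1.
    d = [np.max(np.abs(np.diag(M))) for M in A]
    e = [np.max(np.abs(np.diag(M))) for M in B]
    lo12 = min(4 / (mu * d[0]**2 * e[0]**2), 4 / (mu * d[1]**2 * e[1]**2))
    lo34 = min(4 / (mu * d[2]**2 * e[2]**2), 4 / (mu * d[3]**2 * e[3]**2))
    if lo12 + lo34 <= 1:
        return None
    return max(0.0, 1.0 - lo34), min(lo12, 1.0)
```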

Proof

Again using the notations in (30)–(31), we have

(49) $H-\Theta(Z(k))=-W_1(k),\quad \overline{H-\Theta(\hat{Z}(k))}=-\overline{W_2(k)},\quad H^T-\Theta^T(\check{Z}(k))=-W_3(k)^T,\quad H^H-\Theta^H(\acute{Z}(k))=-W_4(k)^H,$ exactly as in (32).

Then it follows from the IMRGI algorithm and (49) that

(50)
\[
\begin{aligned}
\tilde{Z}^{(1)}(k+1) &= \tilde{Z}(k)-\tfrac{1}{2}\omega\mu\,D_{11}^{H}\big(P_1(k)+Q_1(k)+U_1(k)+V_1(k)\big)D_{12}^{H},\\
\tilde{Z}^{(2)}(k+1) &= \tilde{\hat{Z}}(k)-\tfrac{1}{2}\omega\mu\,D_{21}^{T}\overline{\big(P_2(k)+Q_2(k)+U_2(k)+V_2(k)\big)}D_{22}^{T},\\
\tilde{Z}^{(3)}(k+1) &= \tilde{\check{Z}}(k)-\tfrac{1}{2}(1-\omega)\mu\,\overline{D}_{32}\big(P_3(k)+Q_3(k)+U_3(k)+V_3(k)\big)^{T}\overline{D}_{31},\\
\tilde{Z}^{(4)}(k+1) &= \tilde{\acute{Z}}(k)-\tfrac{1}{2}(1-\omega)\mu\,D_{42}\big(P_4(k)+Q_4(k)+U_4(k)+V_4(k)\big)^{H}D_{41}.
\end{aligned}
\]

For brevity, write $W_i(k)=P_i(k)+Q_i(k)+U_i(k)+V_i(k)$ $(i=1,2,3,4)$ in what follows.

Taking the Frobenius norm in (50) yields that

(51)
\[
\begin{aligned}
\|\tilde{Z}^{(1)}(k+1)\|^2 &= \Big\|\tilde{Z}(k)-\tfrac{1}{2}\omega\mu\,D_{11}^{H}W_1(k)D_{12}^{H}\Big\|^2\\
&= \|\tilde{Z}(k)\|^2-\tfrac{1}{2}\omega\mu\,\mathrm{tr}\big(\tilde{Z}^{H}(k)D_{11}^{H}W_1(k)D_{12}^{H}\big)-\tfrac{1}{2}\omega\mu\,\mathrm{tr}\big(D_{12}W_1^{H}(k)D_{11}\tilde{Z}(k)\big)+\tfrac{1}{4}\omega^2\mu^2\big\|D_{11}^{H}W_1(k)D_{12}^{H}\big\|^2\\
&\le \|\tilde{Z}(k)\|^2-\omega\mu\,\mathrm{Re}\,\mathrm{tr}\big[D_{12}^{H}\tilde{Z}^{H}(k)D_{11}^{H}W_1(k)\big]+\tfrac{1}{4}\omega^2\mu^2\|D_{11}\|_2^2\|D_{12}\|_2^2\|W_1(k)\|^2\\
&\le \|\tilde{Z}(k)\|^2-\omega\mu\|W_1(k)\|^2+\tfrac{1}{4}\omega^2\mu^2\|D_{11}\|_2^2\|D_{12}\|_2^2\|W_1(k)\|^2\\
&= \|\tilde{Z}(k)\|^2-\omega\mu\Big(1-\tfrac{1}{4}\omega\mu\|D_{11}\|_2^2\|D_{12}\|_2^2\Big)\|W_1(k)\|^2,
\end{aligned}
\]

(52)
\[
\begin{aligned}
\|\tilde{Z}^{(2)}(k+1)\|^2 &= \Big\|\tilde{\hat{Z}}(k)-\tfrac{1}{2}\omega\mu\,D_{21}^{T}\overline{W_2(k)}D_{22}^{T}\Big\|^2\\
&= \|\tilde{\hat{Z}}(k)\|^2-\tfrac{1}{2}\omega\mu\,\mathrm{tr}\big(\tilde{\hat{Z}}^{H}(k)D_{21}^{T}\overline{W_2(k)}D_{22}^{T}\big)-\tfrac{1}{2}\omega\mu\,\mathrm{tr}\big(\overline{D}_{22}W_2^{T}(k)\overline{D}_{21}\tilde{\hat{Z}}(k)\big)+\tfrac{1}{4}\omega^2\mu^2\big\|D_{21}^{T}\overline{W_2(k)}D_{22}^{T}\big\|^2\\
&\le \|\tilde{\hat{Z}}(k)\|^2-\omega\mu\,\mathrm{Re}\,\mathrm{tr}\big[D_{22}^{T}\tilde{\hat{Z}}^{H}(k)D_{21}^{T}\overline{W_2(k)}\big]+\tfrac{1}{4}\omega^2\mu^2\|D_{21}\|_2^2\|D_{22}\|_2^2\|W_2(k)\|^2\\
&\le \|\tilde{\hat{Z}}(k)\|^2-\omega\mu\|W_2(k)\|^2+\tfrac{1}{4}\omega^2\mu^2\|D_{21}\|_2^2\|D_{22}\|_2^2\|W_2(k)\|^2\\
&= \|\tilde{\hat{Z}}(k)\|^2-\omega\mu\Big(1-\tfrac{1}{4}\omega\mu\|D_{21}\|_2^2\|D_{22}\|_2^2\Big)\|W_2(k)\|^2,
\end{aligned}
\]

(53)
\[
\begin{aligned}
\|\tilde{Z}^{(3)}(k+1)\|^2 &= \Big\|\tilde{\check{Z}}(k)-\tfrac{1}{2}(1-\omega)\mu\,\overline{D}_{32}W_3^{T}(k)\overline{D}_{31}\Big\|^2\\
&= \|\tilde{\check{Z}}(k)\|^2-\tfrac{1}{2}(1-\omega)\mu\,\mathrm{tr}\big(\tilde{\check{Z}}^{H}(k)\overline{D}_{32}W_3^{T}(k)\overline{D}_{31}\big)-\tfrac{1}{2}(1-\omega)\mu\,\mathrm{tr}\big(D_{31}^{T}\overline{W_3(k)}D_{32}^{T}\tilde{\check{Z}}(k)\big)+\tfrac{1}{4}(1-\omega)^2\mu^2\big\|\overline{D}_{32}W_3^{T}(k)\overline{D}_{31}\big\|^2\\
&\le \|\tilde{\check{Z}}(k)\|^2-(1-\omega)\mu\,\mathrm{Re}\,\mathrm{tr}\big[\overline{D}_{31}\tilde{\check{Z}}^{H}(k)\overline{D}_{32}W_3^{T}(k)\big]+\tfrac{1}{4}(1-\omega)^2\mu^2\|D_{31}\|_2^2\|D_{32}\|_2^2\|W_3(k)\|^2\\
&\le \|\tilde{\check{Z}}(k)\|^2-(1-\omega)\mu\|W_3(k)\|^2+\tfrac{1}{4}(1-\omega)^2\mu^2\|D_{31}\|_2^2\|D_{32}\|_2^2\|W_3(k)\|^2\\
&= \|\tilde{\check{Z}}(k)\|^2-(1-\omega)\mu\Big(1-\tfrac{1}{4}(1-\omega)\mu\|D_{31}\|_2^2\|D_{32}\|_2^2\Big)\|W_3(k)\|^2,
\end{aligned}
\]

and

(54)
\[
\begin{aligned}
\|\tilde{Z}^{(4)}(k+1)\|^2 &= \Big\|\tilde{\acute{Z}}(k)-\tfrac{1}{2}(1-\omega)\mu\,D_{42}W_4^{H}(k)D_{41}\Big\|^2\\
&= \|\tilde{\acute{Z}}(k)\|^2-\tfrac{1}{2}(1-\omega)\mu\,\mathrm{tr}\big(\tilde{\acute{Z}}^{H}(k)D_{42}W_4^{H}(k)D_{41}\big)-\tfrac{1}{2}(1-\omega)\mu\,\mathrm{tr}\big(D_{41}^{H}W_4(k)D_{42}^{H}\tilde{\acute{Z}}(k)\big)+\tfrac{1}{4}(1-\omega)^2\mu^2\big\|D_{42}W_4^{H}(k)D_{41}\big\|^2\\
&\le \|\tilde{\acute{Z}}(k)\|^2-(1-\omega)\mu\,\mathrm{Re}\,\mathrm{tr}\big[D_{41}\tilde{\acute{Z}}^{H}(k)D_{42}W_4^{H}(k)\big]+\tfrac{1}{4}(1-\omega)^2\mu^2\|D_{41}\|_2^2\|D_{42}\|_2^2\|W_4(k)\|^2\\
&\le \|\tilde{\acute{Z}}(k)\|^2-(1-\omega)\mu\|W_4(k)\|^2+\tfrac{1}{4}(1-\omega)^2\mu^2\|D_{41}\|_2^2\|D_{42}\|_2^2\|W_4(k)\|^2\\
&= \|\tilde{\acute{Z}}(k)\|^2-(1-\omega)\mu\Big(1-\tfrac{1}{4}(1-\omega)\mu\|D_{41}\|_2^2\|D_{42}\|_2^2\Big)\|W_4(k)\|^2.
\end{aligned}
\]

By making use of the Cauchy-Schwarz inequality and (51)–(54), we derive

\[
\begin{aligned}
\|\tilde{Z}(k+1)\|^2 &= \Big\|\tfrac{1}{2}(1-\omega)\tilde{Z}^{(1)}(k+1)+\tfrac{1}{2}(1-\omega)\tilde{Z}^{(2)}(k+1)+\tfrac{1}{2}\omega\tilde{Z}^{(3)}(k+1)+\tfrac{1}{2}\omega\tilde{Z}^{(4)}(k+1)\Big\|^2\\
&= \tfrac{1}{4}(1-\omega)^2\|\tilde{Z}^{(1)}(k+1)\|^2+\tfrac{1}{4}(1-\omega)^2\|\tilde{Z}^{(2)}(k+1)\|^2+\tfrac{1}{4}\omega^2\|\tilde{Z}^{(3)}(k+1)\|^2+\tfrac{1}{4}\omega^2\|\tilde{Z}^{(4)}(k+1)\|^2\\
&\quad+\tfrac{1}{2}(1-\omega)^2\,\mathrm{Re}\,\mathrm{tr}\big((\tilde{Z}^{(1)}(k+1))^{H}\tilde{Z}^{(2)}(k+1)\big)+\tfrac{1}{2}\omega(1-\omega)\,\mathrm{Re}\,\mathrm{tr}\big((\tilde{Z}^{(1)}(k+1))^{H}\tilde{Z}^{(3)}(k+1)\big)\\
&\quad+\tfrac{1}{2}\omega(1-\omega)\,\mathrm{Re}\,\mathrm{tr}\big((\tilde{Z}^{(1)}(k+1))^{H}\tilde{Z}^{(4)}(k+1)\big)+\tfrac{1}{2}\omega(1-\omega)\,\mathrm{Re}\,\mathrm{tr}\big((\tilde{Z}^{(2)}(k+1))^{H}\tilde{Z}^{(3)}(k+1)\big)\\
&\quad+\tfrac{1}{2}\omega(1-\omega)\,\mathrm{Re}\,\mathrm{tr}\big((\tilde{Z}^{(2)}(k+1))^{H}\tilde{Z}^{(4)}(k+1)\big)+\tfrac{1}{2}\omega^2\,\mathrm{Re}\,\mathrm{tr}\big((\tilde{Z}^{(3)}(k+1))^{H}\tilde{Z}^{(4)}(k+1)\big).
\end{aligned}
\]

Bounding each cross term by the product of the corresponding Frobenius norms (Cauchy-Schwarz) and then applying $2ab\le a^2+b^2$, all cross terms are absorbed, which yields

\[
\|\tilde{Z}(k+1)\|^2 \le \tfrac{1}{2}(1-\omega)\|\tilde{Z}^{(1)}(k+1)\|^2+\tfrac{1}{2}(1-\omega)\|\tilde{Z}^{(2)}(k+1)\|^2+\tfrac{1}{2}\omega\|\tilde{Z}^{(3)}(k+1)\|^2+\tfrac{1}{2}\omega\|\tilde{Z}^{(4)}(k+1)\|^2.
\]

Substituting (51)–(54) into this inequality, using $\tfrac{1}{2}(1-\omega)\le 1$, and iterating down to the initial guess, we obtain

(55)
\[
\begin{aligned}
\|\tilde{Z}(k+1)\|^2 &\le \|\tilde{Z}(k)\|^2-\tfrac{1}{2}(1-\omega)\omega\mu\Big(1-\tfrac{1}{4}\omega\mu\|D_{11}\|_2^2\|D_{12}\|_2^2\Big)\|W_1(k)\|^2\\
&\quad+\tfrac{1}{2}(1-\omega)\|\tilde{\hat{Z}}(k)\|^2-\tfrac{1}{2}(1-\omega)\omega\mu\Big(1-\tfrac{1}{4}\omega\mu\|D_{21}\|_2^2\|D_{22}\|_2^2\Big)\|W_2(k)\|^2\\
&\quad+\tfrac{1}{2}\omega\|\tilde{\check{Z}}(k)\|^2-\tfrac{1}{2}\omega(1-\omega)\mu\Big(1-\tfrac{1}{4}(1-\omega)\mu\|D_{31}\|_2^2\|D_{32}\|_2^2\Big)\|W_3(k)\|^2\\
&\quad+\tfrac{1}{2}\omega\|\tilde{\acute{Z}}(k)\|^2-\tfrac{1}{2}\omega(1-\omega)\mu\Big(1-\tfrac{1}{4}(1-\omega)\mu\|D_{41}\|_2^2\|D_{42}\|_2^2\Big)\|W_4(k)\|^2\\
&\le \|\tilde{Z}(0)\|^2-\tfrac{1}{2}(1-\omega)\omega\mu\Big(1-\tfrac{1}{4}\omega\mu\|D_{11}\|_2^2\|D_{12}\|_2^2\Big)\sum_{i=0}^{k}\|W_1(i)\|^2\\
&\quad+\tfrac{1}{2}(1-\omega)\sum_{i=0}^{k}\|\tilde{\hat{Z}}(i)\|^2-\tfrac{1}{2}(1-\omega)\omega\mu\Big(1-\tfrac{1}{4}\omega\mu\|D_{21}\|_2^2\|D_{22}\|_2^2\Big)\sum_{i=0}^{k}\|W_2(i)\|^2\\
&\quad+\tfrac{1}{2}\omega\sum_{i=0}^{k}\|\tilde{\check{Z}}(i)\|^2-\tfrac{1}{2}\omega(1-\omega)\mu\Big(1-\tfrac{1}{4}(1-\omega)\mu\|D_{31}\|_2^2\|D_{32}\|_2^2\Big)\sum_{i=0}^{k}\|W_3(i)\|^2\\
&\quad+\tfrac{1}{2}\omega\sum_{i=0}^{k}\|\tilde{\acute{Z}}(i)\|^2-\tfrac{1}{2}\omega(1-\omega)\mu\Big(1-\tfrac{1}{4}(1-\omega)\mu\|D_{41}\|_2^2\|D_{42}\|_2^2\Big)\sum_{i=0}^{k}\|W_4(i)\|^2.
\end{aligned}
\]

According to the methods applied in [19,20,24], it follows that

(56)
\[
\tfrac{1}{2}(1-\omega)\sum_{i=0}^{\infty}\|\tilde{\hat{Z}}(i)\|^2<+\infty,\qquad
\tfrac{1}{2}\omega\sum_{i=0}^{\infty}\|\tilde{\check{Z}}(i)\|^2<+\infty,\qquad
\tfrac{1}{2}\omega\sum_{i=0}^{\infty}\|\tilde{\acute{Z}}(i)\|^2<+\infty.
\]

Hence if

(57)
\[
1-\tfrac{1}{4}\omega\mu\|D_{11}\|_2^2\|D_{12}\|_2^2>0,\qquad
1-\tfrac{1}{4}\omega\mu\|D_{21}\|_2^2\|D_{22}\|_2^2>0,
\]
\[
1-\tfrac{1}{4}(1-\omega)\mu\|D_{31}\|_2^2\|D_{32}\|_2^2>0,\qquad
1-\tfrac{1}{4}(1-\omega)\mu\|D_{41}\|_2^2\|D_{42}\|_2^2>0,
\]

then (55) implies that $\sum_{i=0}^{\infty}\|W_1(i)\|^2<+\infty$. According to the convergence theorem for series, we have $\lim_{i\to+\infty}W_1(i)=0$, i.e.,

(58)
\[
\lim_{i\to+\infty}W_1(i)=\lim_{i\to+\infty}\big(A_1\tilde{Z}(i)B_1+A_2\overline{\tilde{Z}}(i)B_2+A_3\tilde{Z}^{T}(i)B_3+A_4\tilde{Z}^{H}(i)B_4\big)=0.
\]

Inasmuch as the CCT Sylvester matrix equation (2) has a unique solution, it follows from (58) and Lemma 3.2 that $\lim_{k\to+\infty}\tilde{Z}(k)=0$, which means that $\lim_{k\to+\infty}Z(k)=Z^{*}$. From (57), we have

(59)
\[
0<\mu<\min\left\{\frac{4}{\omega\|D_{11}\|_2^2\|D_{12}\|_2^2},\ \frac{4}{\omega\|D_{21}\|_2^2\|D_{22}\|_2^2},\ \frac{4}{(1-\omega)\|D_{31}\|_2^2\|D_{32}\|_2^2},\ \frac{4}{(1-\omega)\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\},
\]

or

\[
0<\omega<\frac{4}{\mu\|D_{11}\|_2^2\|D_{12}\|_2^2},\quad
0<\omega<\frac{4}{\mu\|D_{21}\|_2^2\|D_{22}\|_2^2},\quad
0<1-\omega<\frac{4}{\mu\|D_{31}\|_2^2\|D_{32}\|_2^2},\quad
0<1-\omega<\frac{4}{\mu\|D_{41}\|_2^2\|D_{42}\|_2^2},
\]

which results in

(60)
\[
1-\min\left\{\frac{4}{\mu\|D_{31}\|_2^2\|D_{32}\|_2^2},\frac{4}{\mu\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\}<\omega<\min\left\{\frac{4}{\mu\|D_{11}\|_2^2\|D_{12}\|_2^2},\frac{4}{\mu\|D_{21}\|_2^2\|D_{22}\|_2^2}\right\}.
\]

The interval in (60) is nonempty if

(61)
\[
\min\left\{\frac{4}{\mu\|D_{11}\|_2^2\|D_{12}\|_2^2},\frac{4}{\mu\|D_{21}\|_2^2\|D_{22}\|_2^2}\right\}+\min\left\{\frac{4}{\mu\|D_{31}\|_2^2\|D_{32}\|_2^2},\frac{4}{\mu\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\}>1.
\]

Combining (60) and (61) with $0<\omega<1$ gives (48).□
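As a quick numerical check of Theorem 3.6, the bounds in (47) and (48) can be evaluated directly. The following MATLAB sketch assumes that the blocks D11, D12, ..., D41, D42 from (30)–(31) are available in the workspace; it merely tests the admissibility of a given pair (mu, w) and is not part of the iteration itself.

    % Sketch: feasibility test for the parameter pair (mu, w) in Theorem 3.6.
    s  = @(M) norm(M, 2)^2;               % squared spectral norm
    c1 = s(D11)*s(D12);  c2 = s(D21)*s(D22);
    c3 = s(D31)*s(D32);  c4 = s(D41)*s(D42);
    % Condition (47): admissible step sizes for a fixed relaxation factor w.
    ok47 = (mu > 0) && (mu < min([4/(w*c1), 4/(w*c2), 4/((1-w)*c3), 4/((1-w)*c4)]));
    % Condition (48): admissible relaxation factors for a fixed step size mu.
    a = min(4/(mu*c1), 4/(mu*c2));        % right endpoint ingredient
    b = min(4/(mu*c3), 4/(mu*c4));        % left endpoint ingredient
    ok48 = (a + b > 1) && (w > max(0, 1 - b)) && (w < min(a, 1));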

4 Numerical experiments

This section gives several numerical examples to validate the effectiveness of the IMGI and the IMRGI algorithms, and compare their numerical performances with those of the GI, OGI, RGI, and MGI ones, in terms of the number of iteration steps (IT) and the elapsed time in seconds (CPU).

All computations were performed in MATLAB (version R2016b) on a personal computer with an Intel(R) Pentium(R) G3240T CPU (2.870 GHz), 16.0 GB memory, and the Windows 10 operating system. For all tested algorithms, the initial matrices are taken as $Z(0)=Z_i(0)=I_2\times 10^{-6}$ $(i=1,2,3,4)$.

Example 4.1

[21] We consider the CCT Sylvester matrix equation (2) with the following coefficient matrices:

\[
A_1=\begin{bmatrix}13+2i & 1+2i\\ 2i & 16+8i\end{bmatrix},\quad
B_1=\begin{bmatrix}15+7i & 2+5i\\ 9+7i & 18+10i\end{bmatrix},\quad
A_2=\begin{bmatrix}9+20i & 5+3i\\ 2+2i & 2+9i\end{bmatrix},\quad
B_2=\begin{bmatrix}19+9i & 5+4i\\ 1+5i & 16+16i\end{bmatrix},
\]
\[
A_3=\begin{bmatrix}3+11i & 7+5i\\ 5+10i & 13+19i\end{bmatrix},\quad
B_3=\begin{bmatrix}1+12i & 5-5i\\ 6+2i & 19+18i\end{bmatrix},\quad
A_4=\begin{bmatrix}16+7i & 7+8i\\ 1+7i & 12+13i\end{bmatrix},\quad
B_4=\begin{bmatrix}20+13i & 7+5i\\ 5+2i & 14+10i\end{bmatrix},
\]
\[
H=\begin{bmatrix}706+1397i & 126-2886i\\ 2294-1179i & 426-4404i\end{bmatrix}.
\]

The unique solution of this matrix equation is

\[
Z^{*}=\begin{bmatrix}3+i & 1-i\\ 5+i & 2+3i\end{bmatrix}.
\]

For this example, all runs are terminated once the number of iterations $k$ exceeds 20,000 or $\frac{\|Z(k)-Z^{*}\|}{\|Z^{*}\|}\le\delta$ (denoted as "ERR"), where $\delta$ is a prescribed positive number.
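This stopping rule can be implemented as a simple driver loop. In the MATLAB sketch below, one_step is a hypothetical handle to a single iteration of any of the tested algorithms, Zstar is the exact solution above, and the Frobenius norm is assumed in ERR, matching the norm used throughout the article.

    % Sketch of the common test driver for Example 4.1.
    Z = 1e-6*eye(2);                 % initial guess Z(0) = I_2 x 10^{-6}
    delta = 1e-3;  kmax = 20000;  k = 0;
    ERR = norm(Z - Zstar, 'fro')/norm(Zstar, 'fro');
    while ERR > delta && k < kmax
        Z   = one_step(Z);           % hypothetical one-iteration update
        k   = k + 1;
        ERR = norm(Z - Zstar, 'fro')/norm(Zstar, 'fro');
    end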

In Table 1, we compare the proposed IMGI and IMRGI algorithms with the GI, OGI, RGI, and MGI ones with respect to IT and CPU times. Here, the parameters of the GI, OGI, and RGI algorithms are taken as in [21], and the parameters of the MGI, IMGI, and IMRGI algorithms are as follows:

\[
\mu_{\mathrm{MGI}}=\min\left\{\frac{1}{\|A_1\|_2^2\|B_1\|_2^2},\frac{1}{\|A_2\|_2^2\|B_2\|_2^2},\frac{1}{\|A_3\|_2^2\|B_3\|_2^2},\frac{1}{\|A_4\|_2^2\|B_4\|_2^2}\right\}=1.6317\times10^{-6},
\]
\[
\mu_{\mathrm{IMGI}}=\min\left\{\frac{2}{\|D_{11}\|_2^2\|D_{12}\|_2^2},\frac{2}{\|D_{21}\|_2^2\|D_{22}\|_2^2},\frac{2}{\|D_{31}\|_2^2\|D_{32}\|_2^2},\frac{2}{\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\}=5.5089\times10^{-6},
\]
\[
\omega_{\mathrm{IMRGI}}=\frac{1}{1.8},\qquad
\mu_{\mathrm{IMRGI}}=\min\left\{\frac{4}{\omega\|D_{11}\|_2^2\|D_{12}\|_2^2},\frac{4}{\omega\|D_{21}\|_2^2\|D_{22}\|_2^2},\frac{4}{(1-\omega)\|D_{31}\|_2^2\|D_{32}\|_2^2},\frac{4}{(1-\omega)\|D_{41}\|_2^2\|D_{42}\|_2^2}\right\}=2.4790\times10^{-5}.
\]
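These step sizes can be reproduced mechanically. In the MATLAB sketch below, A1, ..., B4 are the coefficient matrices of this example and D11, ..., D42 are the blocks from (30)–(31); all matrices are assumed to be already in the workspace.

    % Sketch: the step sizes used for MGI, IMGI, and IMRGI in Table 1.
    s = @(M) norm(M, 2)^2;                       % squared spectral norm
    mu_MGI   = min([1/(s(A1)*s(B1)),   1/(s(A2)*s(B2)), ...
                    1/(s(A3)*s(B3)),   1/(s(A4)*s(B4))]);
    mu_IMGI  = min([2/(s(D11)*s(D12)), 2/(s(D21)*s(D22)), ...
                    2/(s(D31)*s(D32)), 2/(s(D41)*s(D42))]);
    w        = 1/1.8;                            % relaxation factor for IMRGI
    mu_IMRGI = min([4/(w*s(D11)*s(D12)),     4/(w*s(D21)*s(D22)), ...
                    4/((1-w)*s(D31)*s(D32)), 4/((1-w)*s(D41)*s(D42))]);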

According to Table 1, all tested methods converge within the maximum number of iterations in all cases, and the IMGI and the IMRGI algorithms outperform the other ones with respect to IT and CPU times. The IT of the IMGI and the IMRGI algorithms also grows more slowly than that of the other algorithms as $\delta$ decreases, which indicates that the proposed algorithms are more stable than the other tested ones. Besides, with a proper relaxation factor $\omega$, the IMRGI algorithm performs better than the IMGI algorithm, which shows that the relaxation technique can improve the efficiency of an algorithm.

Table 1

Numerical results of the tested iterative algorithms for Example 4.1

Method   Metric  δ=0.1    δ=0.01   δ=0.001  δ=0.0001  δ=0.00001
GI       IT      250      819      1389     1959      2529
         CPU     0.0265   0.0957   0.1378   0.2177    0.2299
         ERR     9.97e-2  1.00e-2  9.99e-4  1.00e-4   1.00e-5
OGI      IT      196      634      1075     1516      1957
         CPU     0.0190   0.0604   0.1023   0.1319    0.1698
         ERR     9.95e-2  1.00e-2  9.98e-4  9.99e-5   9.99e-6
RGI      IT      94       238      403      569       735
         CPU     0.0157   0.0261   0.0405   0.0584    0.0714
         ERR     9.86e-2  1.00e-2  9.91e-4  9.89e-5   9.89e-6
MGI      IT      59       149      251      354       457
         CPU     0.0099   0.0166   0.0275   0.0449    0.0493
         ERR     9.96e-2  9.90e-3  9.89e-4  9.85e-5   9.83e-6
IMGI (μ = 5.5089e-6)
         IT      19       42       70       98        127
         CPU     0.0041   0.0083   0.0097   0.0133    0.0142
         ERR     9.48e-2  1.00e-2  9.63e-4  1.00e-4   9.68e-6
IMRGI (ω = 1/1.8, μ = 2.4790e-5)
         IT      17       38       63       91        116
         CPU     0.0019   0.0045   0.0073   0.0089    0.0121
         ERR     7.21e-2  8.20e-3  9.44e-4  8.38e-5   9.33e-6

With the parameters in Table 1, the iterative sequences $\{Z(k)\}$ generated by the IMGI and the IMRGI algorithms are reported in Table 2 as IT increases. We conclude from Table 2 that, as IT increases, the iterates produced by the IMGI and the IMRGI algorithms gradually tend to the exact solution $Z^{*}$. This validates that the proposed algorithms can compute approximate solutions of the CCT Sylvester matrix equations effectively.

Table 2

The iterative solutions of the IMGI (μ = 5.5089e-6) and IMRGI (ω = 1/1.8, μ = 2.4790e-5) algorithms for Example 4.1

Algorithm  k    z11             z12             z21             z22
IMGI       30   3.0034+1.0028i  0.9357-1.0784i  4.8978+0.9072i  2.0080+2.9096i
           60   2.9999+1.0010i  0.9961-1.0066i  4.9937+0.9903i  2.0006+2.9967i
           90   3.0000+1.0001i  0.9997-1.0006i  4.9995+0.9991i  2.0001+2.9998i
           120  3.0000+1.0000i  1.0000-1.0001i  5.0000+0.9999i  2.0000+3.0000i
           150  3.0000+1.0000i  1.0000-1.0000i  5.0000+1.0000i  2.0000+3.0000i
IMRGI      30   3.0018+0.9786i  0.9493-1.0854i  4.9112+0.9117i  1.9906+2.9054i
           60   2.9995+1.0005i  0.9968-1.0050i  4.9961+0.9930i  2.0012+2.9970i
           90   3.0000+1.0001i  0.9998-1.0003i  4.9997+0.9996i  2.0000+2.9999i
           120  3.0000+1.0000i  1.0000-1.0000i  5.0000+1.0000i  2.0000+3.0000i
           150  3.0000+1.0000i  1.0000-1.0000i  5.0000+1.0000i  2.0000+3.0000i
Solution        3+i             1-i             5+i             2+3i

To compare the tested algorithms fairly, we also test them with the same values of $\mu$ in Table 3. We adopt three different values of $\mu$, obtained as weighted combinations of the values used in Table 1, namely $\mu=k_1\mu_{\mathrm{GI}}+k_2\mu_{\mathrm{OGI}}+k_3\mu_{\mathrm{RGI}}+k_4\mu_{\mathrm{MGI}}+k_5\mu_{\mathrm{IMGI}}+k_6\mu_{\mathrm{IMRGI}}$, where $\mu_{\mathrm{GI}}$, $\mu_{\mathrm{OGI}}$, $\mu_{\mathrm{RGI}}$, $\mu_{\mathrm{MGI}}$, $\mu_{\mathrm{IMGI}}$, and $\mu_{\mathrm{IMRGI}}$ are the values of $\mu$ used in the GI, OGI, RGI, MGI, IMGI, and IMRGI algorithms, respectively, and $\sum_{i=1}^{6}k_i=1$ $(k_i\ge 0)$. We take the three groups of weights $(k_1,k_2,k_3,k_4,k_5,k_6)=(\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6})$, $(0.4,0.2,0.1,0.25,0,0.05)$, and $(0.3,0.25,0,0.3,0,0.15)$. By direct computation, this gives $\mu=7.0990\times10^{-6}$, $3.3924\times10^{-6}$, and $5.1411\times10^{-6}$, respectively; see also the sketch after this paragraph. The numerical results of the tested algorithms with these three values of $\mu$ are listed in Table 3, where the relaxation factor $\omega$ used in the RGI and the IMRGI algorithms is the experimentally optimal one. From Table 3, we see that the GI and the OGI algorithms are invalid in all cases. For $\mu=7.0990\times10^{-6}$, no algorithm converges except the IMRGI one. For $\mu=3.3924\times10^{-6}$, the IMGI algorithm performs best, and the IMRGI algorithm outperforms the RGI one in terms of IT and CPU times. In addition, for $\mu=5.1411\times10^{-6}$, the proposed IMGI and IMRGI algorithms are superior to the other ones. Although the IMGI algorithm behaves better than the IMRGI one in some cases, the latter converges in all cases.
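The weighted combinations themselves are plain arithmetic; the following MATLAB sketch reproduces the three tested values of μ, where mu_GI, ..., mu_IMRGI denote the six base step sizes used in Table 1.

    % Sketch: the three weighted step sizes tested in Table 3.
    mus = [mu_GI, mu_OGI, mu_RGI, mu_MGI, mu_IMGI, mu_IMRGI];
    K   = [1/6, 1/6,  1/6, 1/6,  1/6, 1/6;   % equal weights
           0.4, 0.2,  0.1, 0.25, 0,   0.05;
           0.3, 0.25, 0,   0.3,  0,   0.15];
    mu_tested = K*mus.';   % = [7.0990e-6; 3.3924e-6; 5.1411e-6] per the text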

Table 3

Numerical results of the tested iterative algorithms for Example 4.1 with three different values of μ

Method            Metric  δ=0.1      δ=0.01   δ=0.001  δ=0.0001  δ=0.00001
μ = 7.0990e-6
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI               IT      Fail       Fail     Fail     Fail      Fail
MGI               IT      Fail       Fail     Fail     Fail      Fail
IMGI              IT      Fail       Fail     Fail     Fail      Fail
IMRGI (ω = 0.5)   IT      52         113      196      284       372
                  CPU     0.0064     0.0134   0.0224   0.0322    0.0463
μ = 3.3924e-6
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI (ω = 0.5)     IT      178        454      769      1086      1403
                  CPU     0.0203     0.0501   0.0849   0.1224    0.1543
MGI               IT      29         73       122      171       220
                  CPU     0.0035     0.0089   0.0145   0.0203    0.0257
IMGI              IT      29         63       107      153       199
                  CPU     8.0020e-4  0.0039   0.0129   0.0189    0.0226
IMRGI (ω = 0.5)   IT      108        231      403      586       770
                  CPU     0.0124     0.0279   0.0459   0.0671    0.0868
μ = 5.1411e-6
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI (ω = 0.5)     IT      118        300      507      716       924
                  CPU     0.0134     0.0336   0.0558   0.0806    0.1239
MGI               IT      Fail       Fail     Fail     Fail      Fail
IMGI              IT      20         45       74       105       135
                  CPU     0.0025     0.0057   0.0090   0.0131    0.0152
IMRGI (ω = 0.5)   IT      72         154      268      389       510
                  CPU     0.0091     0.0178   0.0324   0.0442    0.0584

The graphs of $\log_{10}(\mathrm{ERR})$ against IT for the tested algorithms with four different values of $\delta$ are plotted in Figures 1 and 2. As shown in Figure 1, the IMGI and the IMRGI algorithms are superior to the other ones because they require fewer iterations to reach the termination criterion. The numerical performances of the IMGI and the IMRGI algorithms are comparable, with the latter slightly better than the former.

Figure 1

The logarithm of the ERR of the tested algorithms for Example 4.1 with δ = 0.1 (left) and δ = 0.01 (right).

Figure 2

The logarithm of the ERR of the tested algorithms for Example 4.1 with δ = 0.001 (left) and δ = 0.0001 (right).

Example 4.2

[30] We consider the CCT Sylvester matrix equation (2) with the following coefficient matrices:

\[
A_1=\begin{bmatrix}13+10i & 6+6i\\ 2+i & 16+18i\end{bmatrix},\quad
B_1=\begin{bmatrix}15+17i & 8+5i\\ 9+7i & 18+10i\end{bmatrix},\quad
A_2=\begin{bmatrix}19+20i & 5+3i\\ 2+2i & 20+19i\end{bmatrix},\quad
B_2=\begin{bmatrix}19+19i & 5+4i\\ 1+5i & 16+16i\end{bmatrix},
\]
\[
A_3=\begin{bmatrix}13+11i & 7+5i\\ 5+10i & 13+19i\end{bmatrix},\quad
B_3=\begin{bmatrix}11+12i & 5+5i\\ 6+2i & 19+18i\end{bmatrix},\quad
A_4=\begin{bmatrix}16+17i & 7+8i\\ 1+7i & 12+13i\end{bmatrix},\quad
B_4=\begin{bmatrix}20+13i & 7+5i\\ 5+2i & 14+10i\end{bmatrix},
\]
\[
H=\begin{bmatrix}633+2558i & 1304-4267i\\ 665-6248i & 556-7565i\end{bmatrix}.
\]

The unique solution of this matrix equation is

\[
Z^{*}=\begin{bmatrix}3 & i\\ 5+i & 2+3i\end{bmatrix}.
\]

In this example, we take

\[
\mathrm{RES}=\frac{\big\|H-A_1Z(k)B_1-A_2\overline{Z}(k)B_2-A_3Z^{T}(k)B_3-A_4Z^{H}(k)B_4\big\|}{\|H\|}\le\tau
\]

as the termination condition, where $\tau>0$ is a prescribed tolerance, and the maximum number of iteration steps $k_{\max}$ is set to 20,000.
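In MATLAB, this residual-based criterion is a direct transcription of the formula above (the Frobenius norm is assumed, matching the norm used throughout):

    % Sketch: relative residual RES for the CCT Sylvester equation (2).
    res = @(Z) norm(H - A1*Z*B1 - A2*conj(Z)*B2 - A3*Z.'*B3 - A4*Z'*B4, 'fro') ...
               / norm(H, 'fro');
    % A run is stopped once res(Z) <= tau or the iteration count reaches 20000.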

Numerical results for different values of $\tau$ are reported in Table 4, which lists the parameters, IT, CPU, and RES of the tested algorithms. The parameters are determined by the same methods as in Example 4.1. Table 4 shows that the GI algorithm and the IMGI algorithm with $\mu=3.6401\times10^{-6}$ diverge in all cases, and the OGI and the RGI algorithms do not converge for $\tau=0.00001$, while the MGI and the IMRGI algorithms converge in all cases. With a proper step size $\mu$, the IMGI algorithm outperforms the GI, OGI, and RGI ones in terms of IT and CPU times. In addition, the IMRGI algorithm performs best among all algorithms, with the exception that the MGI algorithm needs fewer IT than the IMRGI one for $\tau=0.1$. Finally, the IMRGI algorithm is more stable than the MGI one, because the variation in its IT is smaller.

Table 4

Numerical results of the tested iterative algorithms for Example 4.2

Method                          Metric  τ=0.1      τ=0.01   τ=0.001  τ=0.0001  τ=0.00001
GI (μ = 6.9817e-7)              IT      Fail       Fail     Fail     Fail      Fail
OGI (μ = 6.9801e-7)             IT      4363       9450     14537    19624     Fail
                                CPU     0.4973     1.2514   1.7253   2.2924    -
                                RES     1.00e-1    1.00e-2  1.00e-3  1.00e-4   -
RGI (μ = 3.1411e-6)             IT      4363       9450     14537    19624     Fail
                                CPU     0.5793     1.1722   1.6961   2.2689    -
                                RES     1.00e-1    1.00e-2  1.00e-3  1.00e-4   -
MGI (μ = 1.0360e-6)             IT      5          91       198      306       413
                                CPU     0.0103     0.0137   0.0236   0.0343    0.0500
                                RES     9.42e-2    1.00e-2  9.94e-4  9.80e-5   9.95e-6
IMGI (μ = 3.6401e-6)            IT      Fail       Fail     Fail     Fail      Fail
IMGI (μ = 1.8200e-6)            IT      5          100      227      355       482
                                CPU     6.5790e-4  0.0115   0.0256   0.0373    0.0508
                                RES     8.40e-2    9.90e-3  1.00e-3  9.79e-5   9.82e-6
IMRGI (ω = 1/4, μ = 1.4690e-5)  IT      14         77       165      254       343
                                CPU     0.0039     0.0096   0.0185   0.0283    0.0377
                                RES     7.17e-2    1.00e-2  9.88e-4  9.91e-5   9.97e-6

Table 5 lists the same items as Table 2. The results in Table 5 indicate that the iterative solutions generated by the IMGI and the IMRGI algorithms gradually tend to the exact solution $Z^{*}$ as IT increases.

Table 5

The iterative solutions of the IMGI (μ = 1.8200e-6) and IMRGI (ω = 1/4, μ = 1.4690e-5) algorithms for Example 4.2

Algorithm  k    z11             z12             z21             z22
IMGI       100  2.9781-0.0219i  0.1105-1.1006i  4.7773+0.6409i  2.0645+3.1367i
           200  2.9930-0.0661i  0.2367-1.0113i  4.6391+0.9976i  2.0944+2.8720i
           300  2.9910-0.0652i  0.2182-1.0370i  4.6550+0.9220i  2.0933+2.8583i
           400  2.9911-0.0641i  0.2187-1.0338i  4.6554+0.9313i  2.0927+2.8688i
           500  2.9911-0.0644i  0.2190-1.0339i  4.6549+0.9311i  2.0929+2.8667i
IMRGI      100  2.9894-0.0333i  0.1913-1.0051i  4.6948+0.9957i  2.0727+3.1052i
           200  2.9912-0.0663i  0.2197-1.0372i  4.6530+0.9219i  2.0940+2.8504i
           300  2.9911-0.0642i  0.2190-1.0336i  4.6550+0.9318i  2.0928+2.8680i
           400  2.9911-0.0643i  0.2189-1.0340i  4.6550+0.9308i  2.0928+2.8668i
           500  2.9911-0.0643i  0.2190-1.0340i  4.6550+0.9309i  2.0928+2.8669i
Solution        3               i               5+i             2+3i

Furthermore, to further show the advantages of the IMGI and the IMRGI algorithms over the other tested ones, the performances of the tested algorithms with three different values of $\mu$ are exhibited in Table 6, where the three values of $\mu$ are determined in the same manner as in Example 4.1. It is observed from Table 6 that for $\mu=3.6805\times10^{-6}$ the RGI and the IMRGI algorithms converge while the other tested ones are invalid, and the IMRGI algorithm is superior to the RGI one with respect to IT and CPU times. In addition, for $\mu=1.7265\times10^{-6}$, the GI and the OGI algorithms diverge, and the RGI, IMGI, and IMRGI algorithms always outperform the MGI one. The IMRGI algorithm is less efficient than the RGI one except for $\tau=0.1$, whereas the IMGI algorithm performs much better than the RGI one, and its advantage becomes more pronounced as $\tau$ decreases. Finally, the proposed IMGI and IMRGI algorithms have advantages over the other ones for $\mu=2.8983\times10^{-6}$, and the IMGI algorithm is more stable than the IMRGI one in this case.

Table 6

Numerical results of the tested iterative algorithms for Example 4.2 with three different values of μ

Method            Metric  τ=0.1      τ=0.01   τ=0.001  τ=0.0001  τ=0.00001
μ = 3.6805e-6
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI (ω = 0.25)    IT      89         230      481      741       1005
                  CPU     0.0118     0.0299   0.0612   0.0968    0.1284
MGI               IT      Fail       Fail     Fail     Fail      Fail
IMGI              IT      Fail       Fail     Fail     Fail      Fail
IMRGI (ω = 0.5)   IT      4          189      439      689       938
                  CPU     6.2360e-4  0.0251   0.0587   0.0922    0.1005
μ = 1.7265e-6
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI (ω = 0.5)     IT      15         356      769      1186      1608
                  CPU     0.0021     0.0450   0.0994   0.1139    0.2084
MGI               IT      51         803      1998     3161      4356
                  CPU     0.0071     0.1022   0.2652   0.3000    0.5491
IMGI              IT      5          105      239      373       507
                  CPU     8.0020e-4  0.0137   0.0321   0.0388    0.0645
IMRGI (ω = 0.5)   IT      7          394      923      1455      1986
                  CPU     0.0013     0.0549   0.1221   0.1413    0.2534
μ = 2.8983e-6
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI               IT      Fail       Fail     Fail     Fail      Fail
MGI               IT      Fail       Fail     Fail     Fail      Fail
IMGI              IT      14         105      306      397       605
                  CPU     0.0019     0.0107   0.0386   0.0506    0.0765
IMRGI (ω = 0.5)   IT      5          238      554      871       1188
                  CPU     7.1740e-4  0.0245   0.0706   0.1114    0.1514

Figures 3 and 4 display the convergence curves of the GI, OGI, RGI, MGI, IMGI, and IMRGI algorithms for four different values of τ . From these two figures, we observe that the IMRGI algorithm converges faster than the other ones, and the IMGI algorithm outperforms the GI, OGI, and RGI ones with respect to IT.

Figure 3

The logarithm of the RES of the tested algorithms for Example 4.2 with τ = 0.1 (left) and τ = 0.01 (right).

Figure 4

The logarithm of the RES of the tested algorithms for Example 4.2 with τ = 0.001 (left) and τ = 0.0001 (right).

Example 4.3

[30] Consider the CCT Sylvester matrix equation (2) with

\[
A_1=\begin{bmatrix}16 & 2i\\ 3i & 9-2i\end{bmatrix},\quad
B_1=\begin{bmatrix}6 & 2i\\ 2i & 15+3i\end{bmatrix},\quad
A_2=B_2=\begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix},\quad
A_3=B_3=\begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix},
\]
\[
A_4=\begin{bmatrix}6+10i & 1\\ 10i & 5-i\end{bmatrix},\quad
B_4=\begin{bmatrix}16 & 5i\\ 1-3i & 5\end{bmatrix},\quad
H=\begin{bmatrix}585-235i & 1079+318i\\ 401-516i & 453+232i\end{bmatrix},\quad
Z^{*}=\begin{bmatrix}2+5i & 3-i\\ 1 & 3i\end{bmatrix}.
\]

In this example, we adopt the same termination criterion as in Example 4.1, i.e., $\mathrm{ERR}=\frac{\|Z(k)-Z^{*}\|}{\|Z^{*}\|}\le\delta$ with $\delta>0$, or $k$ exceeds the prescribed maximal number of iteration steps 20,000.

The numerical results of the tested algorithms for Example 4.3 are shown in Table 7. It follows from Table 7 that the GI algorithm has the slowest convergence speed, and the IMGI and the IMRGI algorithms have advantages over the other ones except for $\delta=0.1$. The OGI and the RGI algorithms have the same numerical performance in IT, and they outperform the GI one. Also, the MGI algorithm is better than the IMGI and the IMRGI ones when $\delta=0.1$ and $0.01$, whereas the latter ones are superior when $\delta$ becomes small. Moreover, the IMRGI algorithm performs better than the IMGI one, and it is the most stable among all tested algorithms, because the variation in its IT is the smallest.

Table 7

Numerical results of the tested iterative algorithms for Example 4.3

Method                          Metric  δ=0.1    δ=0.01   δ=0.001  δ=0.0001  δ=0.00001
GI (μ = 1.4041e-5)              IT      27       138      489      841       1194
                                CPU     0.0390   0.2016   0.7073   1.3760    1.6753
                                ERR     9.47e-2  1.00e-2  9.97e-4  9.99e-5   9.95e-6
OGI (μ = 5.4000e-5)             IT      24       115      205      296       387
                                CPU     0.0304   0.1610   0.3625   0.4663    0.6565
                                ERR     9.81e-2  9.80e-3  9.95e-4  9.89e-5   9.84e-6
RGI (μ = 2.4300e-4)             IT      24       115      205      296       387
                                CPU     0.0805   0.2041   0.3431   0.4895    0.8332
                                ERR     9.81e-2  9.80e-3  9.95e-4  9.89e-5   9.84e-6
MGI (μ = 2.6145e-5)             IT      11       77       189      306       425
                                CPU     0.0188   0.1115   0.2632   0.4301    0.6852
                                ERR     8.84e-2  9.80e-3  9.81e-4  9.95e-5   9.82e-6
IMGI (μ = 3.3387e-5)            IT      47       99       132      194       221
                                CPU     0.0052   0.0116   0.0141   0.0194    0.0954
                                ERR     9.93e-2  9.80e-3  9.53e-4  9.65e-5   9.51e-6
IMRGI (ω = 1/3, μ = 1.7233e-4)  IT      43       90       130      176       204
                                CPU     0.0054   0.0091   0.0125   0.0163    0.0178
                                ERR     9.80e-2  9.40e-3  9.32e-4  9.88e-5   9.92e-6

In Table 8, we list the same items as in Tables 2 and 5, and similar conclusions can be drawn from Table 8 as from Tables 2 and 5.

Table 8

The iterative solutions of the IMGI (μ = 3.3387e-5) and IMRGI (ω = 1/3, μ = 1.7233e-4) algorithms for Example 4.3

Method     k    x11             x12             x21             x22
IMGI       50   2.2638+5.3661i  2.9773-0.9517i  0.7559+0.1147i  0.0281+3.0737i
           100  1.9741+4.9711i  2.9953-0.9913i  0.9915+0.0320i  0.0200+2.9892i
           150  2.0017+5.0022i  3.0003-1.0000i  0.9987-0.0012i  0.0001+3.0001i
           200  1.9999+4.9998i  2.9999-0.9999i  0.9999+0.0003i  0.0001+3.0000i
           250  2.0000+5.0000i  3.0000-1.0000i  1.0000-0.0000i  0.0000+3.0000i
IMRGI      50   2.1447+5.2224i  2.9300-0.9280i  0.8022+0.3178i  0.0028+3.0750i
           100  1.9908+4.9890i  3.0031-0.9989i  0.9964-0.0069i  0.0093+2.9913i
           150  2.0003+5.0006i  2.9997-0.9997i  0.9994+0.0015i  0.0001+3.0003i
           200  2.0000+5.0000i  3.0000-1.0000i  1.0000-0.0000i  0.0000+3.0000i
           250  2.0000+5.0000i  3.0000-1.0000i  1.0000+0.0000i  0.0000+3.0000i
Solution        2+5i            3-i             1               3i

In addition, Table 9 lists the same items as Tables 3 and 6. As observed in Table 9, apart from the RGI and the IMRGI algorithms, none of the tested ones converges for $\mu=9.0484\times10^{-5}$. Furthermore, the IMRGI algorithm has better convergence behavior than the RGI one for $\delta\le 0.001$, and the variation in its IT is smaller than that of the RGI one. According to the results for $\mu=5.5869\times10^{-5}$, the IMGI algorithm is the best among the tested ones, as it requires the fewest IT and the least CPU time, and the IMRGI algorithm outperforms the RGI one for $\delta\le 0.001$. Finally, comparing the results for $\mu=5.1405\times10^{-5}$, all tested algorithms converge except the MGI one, and the IMGI algorithm is the most efficient for $\delta\le 0.001$. Moreover, the GI and the OGI algorithms need fewer IT and less CPU time than the RGI and the IMRGI ones, and the IMRGI algorithm is superior to the RGI one in most cases.

Table 9

Numerical results of the tested iterative algorithms for Example 4.3 with three different values of μ

Method            Metric  δ=0.1      δ=0.01   δ=0.001  δ=0.0001  δ=0.00001
μ = 9.0484e-5
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI (ω = 0.5)     IT      17         86       304      522       740
                  CPU     0.0068     0.0084   0.0217   0.0474    0.0733
MGI               IT      Fail       Fail     Fail     Fail      Fail
IMGI              IT      Fail       Fail     Fail     Fail      Fail
IMRGI (ω = 0.5)   IT      68         144      205      282       322
                  CPU     0.0122     0.0111   0.0161   0.0223    0.0240
μ = 5.5869e-5
GI                IT      Fail       Fail     Fail     Fail      Fail
OGI               IT      Fail       Fail     Fail     Fail      Fail
RGI (ω = 0.5)     IT      27         139      492      846       1200
                  CPU     0.0028     0.0113   0.0453   0.0621    0.1513
MGI               IT      Fail       Fail     Fail     Fail      Fail
IMGI              IT      30         62       89       120       145
                  CPU     0.0031     0.0073   0.0099   0.0155    0.0347
IMRGI (ω = 0.5)   IT      108        231      328      451       519
                  CPU     0.0111     0.0229   0.0327   0.0464    0.0697
μ = 5.1405e-5
GI                IT      9          39       134      229       324
                  CPU     0.0010     0.0036   0.0114   0.0185    0.0246
OGI               IT      9          39       134      229       324
                  CPU     9.5780e-4  0.0029   0.0103   0.0183    0.0249
RGI (ω = 0.5)     IT      29         150      534      919       1305
                  CPU     0.0027     0.0119   0.0383   0.0682    0.1095
MGI               IT      Fail       Fail     Fail     Fail      Fail
IMGI              IT      32         67       96       130       154
                  CPU     0.0026     0.0057   0.0080   0.0103    0.0113
IMRGI (ω = 0.5)   IT      118        251      355      489       564
                  CPU     0.0110     0.0182   0.0252   0.0405    0.0396

The graphs of $\log_{10}(\mathrm{ERR})$ against IT for the tested algorithms with four different values of $\delta$ are displayed in Figures 5 and 6. The two figures show that although the IMGI and the IMRGI algorithms are less efficient than the other ones when $\delta$ is large, they are more efficient when $\delta$ is small, which is in accordance with the results in Table 7. Besides, as IT increases, the convergence rates of the GI, OGI, RGI, and MGI algorithms are fast at first but slow down later, while the opposite holds for the IMGI and the IMRGI algorithms.

Figure 5

Comparison of convergence curves for Example 4.3 with δ = 0.1 (left) and δ = 0.01 (right).

Figure 6

Comparison of convergence curves for Example 4.3 with δ = 0.001 (left) and δ = 0.0001 (right).

Example 4.4

[18] Consider the CCT Sylvester matrix equation (2) with

\[
A_1=\begin{bmatrix}1+2i & 2i\\ 1-i & 2+3i\end{bmatrix},\quad
B_1=\begin{bmatrix}2-4i & i\\ 1+3i & 2\end{bmatrix},\quad
A_2=\begin{bmatrix}1-i & 3i\\ 0 & 1+2i\end{bmatrix},\quad
B_2=\begin{bmatrix}2 & 1-i\\ 1+i & 1-i\end{bmatrix},
\]
\[
A_3=B_3=\begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix},\quad
A_4=B_4=\begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix},\quad
H=\begin{bmatrix}21+11i & 9+7i\\ 52-22i & 18+i\end{bmatrix},\quad
Z^{*}=\begin{bmatrix}1+2i & -i\\ 2+i & 1+i\end{bmatrix}.
\]

Let $\delta>0$. We take

\[
\mathrm{ERR}=\frac{\|Z(k)-Z^{*}\|}{\|Z^{*}\|}\le\delta
\]

as the termination condition, and the prescribed maximum number of iteration steps is 20,000, as in Examples 4.1–4.3.

For Example 4.4, we compare the IMGI and the IMRGI algorithms with the GI, OGI, RGI, and MGI ones in Table 10. The parameters adopted in the tested algorithms are obtained by the methods in Example 4.1. From the results listed in Table 10, we make the following observations. First, the IT of the OGI and the RGI algorithms are the same, and they perform better than the GI one. Second, the MGI algorithm outperforms the GI, OGI, and RGI ones except for $\delta=0.00001$. Third, the IMGI algorithm converges faster than the GI, OGI, RGI, and MGI ones for $\delta\le 0.001$. Fourth, the IMRGI algorithm performs best in terms of IT and CPU times except for $\delta=0.1$. Finally, the proposed IMGI and IMRGI algorithms are more stable than the other ones, because their IT grows more slowly as $\delta$ decreases, and the IMRGI algorithm is the most stable for this example.

Table 10

Numerical results of the tested iterative algorithms for Example 4.4

Method                       Metric  δ=0.1    δ=0.01   δ=0.001  δ=0.0001  δ=0.00001
GI (μ = 0.0028)              IT      53       255      726      1362      1998
                             CPU     0.0766   0.4319   1.1939   1.9604    2.7920
                             ERR     9.88e-2  9.90e-3  9.97e-4  9.97e-5   9.99e-6
OGI (μ = 0.007607)           IT      46       269      499      729       959
                             CPU     0.0737   0.4681   0.8855   1.1386    1.4615
                             ERR     9.99e-2  1.00e-2  9.97e-4  9.98e-5   9.99e-6
RGI (ω = 0.1, μ = 0.08452)   IT      46       269      499      729       959
                             CPU     0.0702   0.4569   0.8565   1.1687    1.4533
                             ERR     9.99e-2  1.00e-2  9.97e-4  9.98e-5   9.99e-6
MGI (μ = 0.0035)             IT      26       126      360      677       994
                             CPU     0.0433   0.2135   0.6016   1.2261    1.6008
                             ERR     9.84e-2  9.90e-3  9.97e-4  9.96e-5   9.95e-6
IMGI (μ = 0.0077)            IT      56       127      194      262       329
                             CPU     0.0072   0.0158   0.0189   0.0235    0.0332
                             ERR     9.85e-2  9.70e-3  9.90e-4  9.69e-5   9.80e-6
IMRGI (ω = 0.1, μ = 0.1538)  IT      36       80       122      164       205
                             CPU     0.0037   0.0071   0.0129   0.0175    0.0171
                             ERR     9.77e-2  9.60e-3  9.60e-4  9.53e-5   9.98e-6

In Table 11, the iterative solutions $\{Z(k)\}$ generated by the IMGI and the IMRGI algorithms are listed. We can see from Table 11 that, as IT increases, the iterative solutions $\{Z(k)\}$ gradually tend to the exact solution $Z^{*}$.

Table 11

The iterative solutions of the IMGI (μ = 0.0077) and IMRGI (ω = 0.1, μ = 0.1538) algorithms for Example 4.4

Method     k    x11             x12             x21             x22
IMGI       100  1.0155+2.0425i  0.0054-0.9530i  1.9630+0.9980i  1.0365+0.9909i
           200  1.0005+2.0014i  0.0002-0.9984i  1.9987+0.9999i  1.0012+0.9997i
           300  1.0000+2.0000i  0.0000-0.9999i  2.0000+1.0000i  1.0000+1.0000i
           400  1.0000+2.0000i  0.0000-1.0000i  2.0000+1.0000i  1.0000+1.0000i
           500  1.0000+2.0000i  0.0000-1.0000i  2.0000+1.0000i  1.0000+1.0000i
IMRGI      100  1.0021+2.0055i  0.0008-0.9939i  1.9951+0.9997i  1.0048+0.9988i
           200  1.0000+2.0000i  0.0000-1.0000i  2.0000+1.0000i  1.0000+1.0000i
           300  1.0000+2.0000i  0.0000-1.0000i  2.0000+1.0000i  1.0000+1.0000i
           400  1.0000+2.0000i  0.0000-1.0000i  2.0000+1.0000i  1.0000+1.0000i
           500  1.0000+2.0000i  0.0000-1.0000i  2.0000+1.0000i  1.0000+1.0000i
Solution        1+2i            -i              2+i             1+i

Furthermore, the same items as in Tables 3, 6, and 9 are exhibited in Table 12. From the results for $\mu=0.0433$, we conclude that the GI, OGI, RGI, MGI, and IMGI algorithms cannot reach the stopping criterion within the largest admissible number of iteration steps, while the IMRGI one succeeds. Meanwhile, for $\mu=0.0197$, the proposed IMRGI algorithm has better numerical performance than the RGI one when $\delta\le 0.001$, and it is more stable than the RGI one in view of the variation range of IT. Finally, for $\mu=0.0269$, similar conclusions can be obtained as for $\mu=0.0197$. All in all, the techniques utilized in the IMGI and the IMRGI algorithms can improve on the convergence rates of the GI, OGI, RGI, and MGI ones from the point of view of computational efficiency.

Table 12

Numerical results of the tested iterative algorithms for Example 4.4 with three different values of μ

Method            Metric  δ=0.1   δ=0.01  δ=0.001  δ=0.0001  δ=0.00001
μ = 0.0433
GI                IT      Fail    Fail    Fail     Fail      Fail
OGI               IT      Fail    Fail    Fail     Fail      Fail
RGI               IT      Fail    Fail    Fail     Fail      Fail
MGI               IT      Fail    Fail    Fail     Fail      Fail
IMGI              IT      Fail    Fail    Fail     Fail      Fail
IMRGI (ω = 0.5)   IT      40      90      138      186       233
                  CPU     0.0053  0.0169  0.0156   0.0195    0.0242
μ = 0.0197
GI                IT      Fail    Fail    Fail     Fail      Fail
OGI               IT      Fail    Fail    Fail     Fail      Fail
RGI (ω = 0.5)     IT      30      143     408      764       1121
                  CPU     0.0034  0.0156  0.0492   0.0803    0.1694
MGI               IT      Fail    Fail    Fail     Fail      Fail
IMGI              IT      Fail    Fail    Fail     Fail      Fail
IMRGI (ω = 0.5)   IT      87      198     304      410       515
                  CPU     0.0109  0.0204  0.0444   0.0477    0.0750
μ = 0.0269
GI                IT      Fail    Fail    Fail     Fail      Fail
OGI               IT      Fail    Fail    Fail     Fail      Fail
RGI (ω = 0.5)     IT      22      105     298      558       819
                  CPU     0.0024  0.0107  0.0297   0.0555    0.0795
MGI               IT      Fail    Fail    Fail     Fail      Fail
IMGI              IT      Fail    Fail    Fail     Fail      Fail
IMRGI (ω = 0.5)   IT      64      145     222      299       376
                  CPU     0.0069  0.0150  0.0229   0.0312    0.0384

In Figures 7 and 8, we depict the error curves of the tested algorithms for four different values of $\delta$. Figures 7 and 8 clearly show that the convergence rates of the IMRGI and the IMGI algorithms are faster than those of the other ones in most cases. This reveals that the proposed algorithms are efficient and feasible for solving the CCT Sylvester matrix equations.

Figure 7

Comparison of convergence curves for Example 4.4 with δ = 0.1 (left) and δ = 0.01 (right).

Figure 8

Comparison of convergence curves for Example 4.4 with δ = 0.001 (left) and δ = 0.0001 (right).

5 Conclusions

In this article, by replacing the coefficient matrices with their diagonal parts and making use of the updating technique, we first construct the IMGI algorithm for the CCT Sylvester matrix equations. We then apply the relaxation technique to the IMGI algorithm and establish the IMRGI algorithm. By making use of the real representation of a complex matrix and the vector stretching operator, we give convergence conditions for the IMGI and the IMRGI algorithms. Some numerical examples are presented to validate the effectiveness and superiority of the IMGI and the IMRGI algorithms. Compared with the GI, OGI, RGI, and MGI algorithms, the proposed IMGI and IMRGI algorithms achieve higher computational efficiency by adjusting the values of the relaxation factors and the step-size factors. This makes them better suited to solving Sylvester matrix equations arising in pole assignment, control theory, signal processing, prediction and stability, and so forth.

It is noteworthy that only one step-size factor $\mu$ is used in the IMGI and the IMRGI algorithms. We will consider introducing different step-size factors into the IMGI and the IMRGI algorithms and developing the corresponding convergence analyses. Aside from that, we have not yet derived the theoretically optimal parameters of the IMGI and the IMRGI algorithms, which deserves to be investigated in the future.

Acknowledgements

We thank the editor and anonymous reviewers for their careful reading of the manuscript and for the helpful comments.

  1. Funding information: This work was supported by the National Natural Science Foundation of China (No. 12361078), and the Guangxi Natural Science Foundation (No. 2024GXNSFAA010498).

  2. Author contributions: All authors contributed to the study conception and design. All authors performed material preparation, data collection, and analysis. The authors read and approved the final manuscript.

  3. Conflict of interest: The authors declare that they have no competing interests.

  4. Ethical approval: The conducted research is not related to either human or animal use.

  5. Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analyzed during this study.

References

[1] F. P. A. Beik, D. K. Salkuyeh, and M. M. Moghadam, Gradient-based iterative algorithm for solving the generalized coupled Sylvester-transpose and conjugate matrix equations over reflexive (anti-reflexive) matrices, Trans. Inst. Meas. Control 36 (2014), 99–110, DOI: https://doi.org/10.1177/0142331213482485.

[2] B. Zhou, L.-L. Lv, and G.-R. Duan, Parametric pole assignment and robust pole assignment for discrete-time linear periodic systems, SIAM J. Control Optim. 48 (2010), 3975–3996, DOI: https://doi.org/10.1137/080730469.

[3] Y.-F. Cai, J. Qian, and S.-F. Xu, Robust partial pole assignment problem for high order control systems, Automatica 48 (2012), 1462–1466, DOI: https://doi.org/10.1016/j.automatica.2012.05.015.

[4] Z.-B. Chen and X.-S. Chen, Conjugate gradient-based iterative algorithm for solving generalized coupled Sylvester matrix equations, J. Frank. Inst. 359 (2022), 9925–9951, DOI: https://doi.org/10.1016/j.jfranklin.2022.09.049.

[5] Z.-B. Chen and X.-S. Chen, Modification on the convergence results of the Sylvester matrix equation AX+XB=C, J. Frank. Inst. 359 (2022), 3126–3147, DOI: https://doi.org/10.1016/j.jfranklin.2022.02.021.

[6] M. Hajarian, Efficient iterative solutions to general coupled matrix equations, Int. J. Autom. Comput. 10 (2013), 418–486, DOI: https://doi.org/10.1007/s11633-013-0745-6.

[7] M. Hajarian, Solving the general Sylvester discrete-time periodic matrix equations via the gradient-based iterative method, Appl. Math. Lett. 52 (2016), 87–95, DOI: https://doi.org/10.1016/j.aml.2015.08.017.

[8] F. Ding and T.-W. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Automat. Contr. 50 (2005), 1216–1221, DOI: https://doi.org/10.1109/TAC.2005.852558.

[9] F. Ding, P.-X. Liu, and J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Appl. Math. Comput. 197 (2008), 41–50, DOI: https://doi.org/10.1016/j.amc.2007.07.040.

[10] L. Xie, Y.-J. Liu, and H.-Z. Yang, Gradient based and least squares based iterative algorithms for matrix equations AXB+CXTD=F, Appl. Math. Comput. 217 (2010), 2191–2199, DOI: https://doi.org/10.1016/j.amc.2010.07.019.

[11] C.-Q. Song, G.-L. Chen, and L.-L. Zhao, Iterative solutions to coupled Sylvester-transpose matrix equations, Appl. Math. Model. 35 (2011), 4675–4683, DOI: https://doi.org/10.1016/j.apm.2011.03.038.

[12] A.-G. Wu, X. Zeng, G.-R. Duan, and W.-J. Wu, Iterative solutions to the extended Sylvester-conjugate matrix equation, Appl. Math. Comput. 217 (2010), 4427–4438, DOI: https://doi.org/10.1016/j.amc.2010.05.029.

[13] A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, Iterative solutions to coupled Sylvester-conjugate matrix equations, Comput. Math. Appl. 60 (2010), 54–66, DOI: https://doi.org/10.1016/j.camwa.2010.04.029.

[14] A.-G. Wu, L.-L. Lv, and G.-R. Duan, Iterative algorithms for solving a class of complex conjugate and transpose matrix equations, Appl. Math. Comput. 217 (2011), 8343–8353, DOI: https://doi.org/10.1016/j.amc.2011.02.113.

[15] X.-P. Sheng, A relaxed gradient-based algorithm for solving generalized coupled Sylvester matrix equations, J. Frank. Inst. 355 (2018), 4282–4297, DOI: https://doi.org/10.1016/j.jfranklin.2018.04.008.

[16] B.-H. Huang and C.-F. Ma, The relaxed gradient-based iterative algorithms for a class of generalized coupled Sylvester-conjugate matrix equations, J. Frank. Inst. 355 (2018), 3168–3195, DOI: https://doi.org/10.1016/j.jfranklin.2018.02.014.

[17] Q. Niu, X. Wang, and L.-Z. Lu, A relaxed gradient-based algorithm for solving Sylvester equations, Asian J. Control 13 (2011), 461–464, DOI: https://doi.org/10.1002/asjc.328.

[18] M. A. Ramadan and A. M. E. Bayoumi, A modified gradient-based algorithm for solving extended Sylvester-conjugate matrix equations, Asian J. Control 20 (2018), 228–235, DOI: https://doi.org/10.1002/asjc.1574.

[19] Y.-J. Xie and C.-F. Ma, The accelerated gradient-based iterative algorithm for solving a class of generalized Sylvester-transpose matrix equation, Appl. Math. Comput. 273 (2016), 1257–1269, DOI: https://doi.org/10.1016/j.amc.2015.07.022.

[20] X. Wang, L. Dai, and D. Liao, A modified gradient-based algorithm for solving Sylvester equations, Appl. Math. Comput. 218 (2012), 5620–5628, DOI: https://doi.org/10.1016/j.amc.2011.11.055.

[21] W.-L. Wang, C.-Q. Song, and S.-P. Ji, Iterative solution to a class of complex matrix equations and its application in time-varying linear system, J. Appl. Math. Comput. 67 (2021), 317–341, DOI: https://doi.org/10.1007/s12190-020-01486-6.

[22] B.-H. Huang and C.-F. Ma, On the relaxed gradient-based iterative methods for the generalized coupled Sylvester-transpose matrix equations, J. Frank. Inst. 359 (2022), 10688–10725, DOI: https://doi.org/10.1016/j.jfranklin.2022.07.051.

[23] Z.-L. Tian, M.-Y. Tian, C.-Q. Gu, and X.-N. Hao, An accelerated Jacobi-gradient-based iterative algorithm for solving Sylvester matrix equations, Filomat 31 (2017), 2381–2390, DOI: https://doi.org/10.2298/FIL1708381T.

[24] W.-L. Wang and C.-Q. Song, Iterative algorithms for discrete-time periodic Sylvester matrix equations and its application in antilinear periodic system, Appl. Numer. Math. 168 (2021), 251–273, DOI: https://doi.org/10.1016/j.apnum.2021.06.006.

[25] S.-K. Li, A finite iterative method for solving the generalized Hamiltonian solutions of coupled Sylvester matrix equations with conjugate transpose, Int. J. Comput. Math. 94 (2017), 757–773, DOI: https://doi.org/10.1080/00207160.2016.1148810.

[26] H.-M. Zhang, A finite iterative algorithm for solving the complex generalized coupled Sylvester matrix equations by using the linear operators, J. Frank. Inst. 354 (2017), 1856–1874, DOI: https://doi.org/10.1016/j.jfranklin.2016.12.011.

[27] T.-X. Yan and C.-F. Ma, The BCR algorithm for solving the reflexive or anti-reflexive solutions of generalized coupled Sylvester matrix equations, J. Frank. Inst. 357 (2020), 12787–12807, DOI: https://doi.org/10.1016/j.jfranklin.2020.09.030.

[28] T.-X. Yan and C.-F. Ma, An iterative algorithm for generalized Hamiltonian solution of a class of generalized coupled Sylvester-conjugate matrix equations, Appl. Math. Comput. 411 (2021), 126491, DOI: https://doi.org/10.1016/j.amc.2021.126491.

[29] C.-F. Ma and T.-X. Yan, A finite iterative algorithm for the general discrete-time periodic Sylvester matrix equations, J. Frank. Inst. 359 (2022), 4410–4432, DOI: https://doi.org/10.1016/j.jfranklin.2022.03.047.

[30] H.-M. Zhang and H.-C. Yin, New proof of the gradient-based iterative algorithm for a complex conjugate and transpose matrix equation, J. Frank. Inst. 354 (2017), 7585–7603, DOI: https://doi.org/10.1016/j.jfranklin.2017.09.005.

[31] W.-P. Hu and W.-G. Wang, Improved gradient iteration algorithms for solving the coupled Sylvester matrix equation, J. Nanjing Univ. Math. Biq. 33 (2016), 177–192, https://www.cnki.com.cn/Article/CJFDTotal-SXXT201602006.htm.

[32] A.-G. Wu, Y. Zhang, and Y.-Y. Qian, Complex Conjugate Matrix Equations, Science Press, Beijing, 2017.

[33] J.-J. Hu, Y.-F. Ke, and C.-F. Ma, Generalized conjugate direction algorithm for solving generalized coupled Sylvester transpose matrix equations over reflexive or anti-reflexive matrices, J. Frank. Inst. 359 (2022), 6958–6985, DOI: https://doi.org/10.1016/j.jfranklin.2022.07.005.

[34] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, 1994.

Received: 2022-12-17
Revised: 2024-08-18
Accepted: 2024-10-18
Published Online: 2024-11-27

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.

  75. Quantitative estimates for perturbed sampling Kantorovich operators in Orlicz spaces
  76. Review Articles
  77. Penalty method for unilateral contact problem with Coulomb's friction in time-fractional derivatives
  78. Differential sandwich theorems for p-valent analytic functions associated with a generalization of the integral operator
  79. Special Issue on Development of Fuzzy Sets and Their Extensions - Part II
  80. Higher-order circular intuitionistic fuzzy time series forecasting methodology: Application of stock change index
  81. Binary relations applied to the fuzzy substructures of quantales under rough environment
  82. Algorithm selection model based on fuzzy multi-criteria decision in big data information mining
  83. A new machine learning approach based on spatial fuzzy data correlation for recognizing sports activities
  84. Benchmarking the efficiency of distribution warehouses using a four-phase integrated PCA-DEA-improved fuzzy SWARA-CoCoSo model for sustainable distribution
  85. Special Issue on Application of Fractional Calculus: Mathematical Modeling and Control - Part II
  86. A study on a type of degenerate poly-Dedekind sums
  87. Efficient scheme for a category of variable-order optimal control problems based on the sixth-kind Chebyshev polynomials
  88. Special Issue on Mathematics for Artificial intelligence and Artificial intelligence for Mathematics
  89. Toward automated hail disaster weather recognition based on spatio-temporal sequence of radar images
  90. The shortest-path and bee colony optimization algorithms for traffic control at single intersection with NetworkX application
  91. Neural network quaternion-based controller for port-Hamiltonian system
  92. Matching ontologies with kernel principle component analysis and evolutionary algorithm
  93. Survey on machine vision-based intelligent water quality monitoring techniques in water treatment plant: Fish activity behavior recognition-based schemes and applications
  94. Artificial intelligence-driven tone recognition of Guzheng: A linear prediction approach
  95. Transformer learning-based neural network algorithms for identification and detection of electronic bullying in social media
  96. Squirrel search algorithm-support vector machine: Assessing civil engineering budgeting course using an SSA-optimized SVM model
  97. Special Issue on International E-Conference on Mathematical and Statistical Sciences - Part I
  98. Some fixed point results on ultrametric spaces endowed with a graph
  99. On the generalized Mellin integral operators
  100. On existence and multiplicity of solutions for a biharmonic problem with weights via Ricceri's theorem
  101. Approximation process of a positive linear operator of hypergeometric type
  102. On Kantorovich variant of Brass-Stancu operators
  103. A higher-dimensional categorical perspective on 2-crossed modules
  104. Special Issue on Some Integral Inequalities, Integral Equations, and Applications - Part I
  105. On parameterized inequalities for fractional multiplicative integrals
  106. On inverse source term for heat equation with memory term
  107. On Fejér-type inequalities for generalized trigonometrically and hyperbolic k-convex functions
  108. New extensions related to Fejér-type inequalities for GA-convex functions
  109. Derivation of Hermite-Hadamard-Jensen-Mercer conticrete inequalities for Atangana-Baleanu fractional integrals by means of majorization
  110. Some Hardy's inequalities on conformable fractional calculus
  111. The novel quadratic phase Fourier S-transform and associated uncertainty principles in the quaternion setting
  112. Special Issue on Recent Developments in Fixed-Point Theory and Applications - Part I
  113. A novel iterative process for numerical reckoning of fixed points via generalized nonlinear mappings with qualitative study
  114. Some new fixed point theorems of α-partially nonexpansive mappings
  115. Generalized Yosida inclusion problem involving multi-valued operator with XOR operation
  116. Periodic and fixed points for mappings in extended b-gauge spaces equipped with a graph
  117. Convergence of Peaceman-Rachford splitting method with Bregman distance for three-block nonconvex nonseparable optimization
  118. Topological structure of the solution sets to neutral evolution inclusions driven by measures
  119. (α, F)-Geraghty-type generalized F-contractions on non-Archimedean fuzzy metric-unlike spaces
  120. Solvability of infinite system of integral equations of Hammerstein type in three variables in tempering sequence spaces c 0 β and 1 β
  121. Special Issue on Nonlinear Evolution Equations and Their Applications - Part I
  122. Fuzzy fractional delay integro-differential equation with the generalized Atangana-Baleanu fractional derivative
  123. Klein-Gordon potential in characteristic coordinates
  124. Asymptotic analysis of Leray solution for the incompressible NSE with damping
  125. Special Issue on Blow-up Phenomena in Nonlinear Equations of Mathematical Physics - Part I
  126. Long time decay of incompressible convective Brinkman-Forchheimer in L2(ℝ3)
  127. Numerical solution of general order Emden-Fowler-type Pantograph delay differential equations
  128. Global smooth solution to the n-dimensional liquid crystal equations with fractional dissipation
  129. Spectral properties for a system of Dirac equations with nonlinear dependence on the spectral parameter
  130. A memory-type thermoelastic laminated beam with structural damping and microtemperature effects: Well-posedness and general decay
  131. The asymptotic behavior for the Navier-Stokes-Voigt-Brinkman-Forchheimer equations with memory and Tresca friction in a thin domain
  132. Absence of global solutions to wave equations with structural damping and nonlinear memory
  133. Special Issue on Differential Equations and Numerical Analysis - Part I
  134. Vanishing viscosity limit for a one-dimensional viscous conservation law in the presence of two noninteracting shocks
  135. Limiting dynamics for stochastic complex Ginzburg-Landau systems with time-varying delays on unbounded thin domains
  136. A comparison of two nonconforming finite element methods for linear three-field poroelasticity
Downloaded on 13.9.2025 from https://www.degruyterbrill.com/document/doi/10.1515/dema-2024-0083/html
Scroll to top button