
Some non-commuting solutions of the Yang-Baxter-like matrix equation

  • Duan-Mei Zhou and Hong-Quang Vu
Published/Copyright: September 15, 2020

Abstract

Let $A$ be a square matrix satisfying $A^4 = A$. We solve the Yang-Baxter-like matrix equation $AXA = XAX$ to find some solutions, based on an analysis of the characteristic polynomial of $A$ and its eigenvalues. We divide the problem into small cases so that the solutions can be found easily. Finally, two numerical examples are presented to illustrate the results.

MSC 2010: 15A24

1 Introduction

Let $A$ be an $n \times n$ complex matrix. The quadratic matrix equation

(1) $AXA = XAX,$

is often called the Yang-Baxter-like matrix equation, since it has the same format as the classical parameter-free Yang-Baxter equation [1,2,3]. The original Yang-Baxter equation has many applications in statistical mechanics, integrable systems, quantum theory, knot theory, braid group theory, and so on [3,4,5,6]. Although some solutions of the Yang-Baxter equation have been found in quantum group theory, no systematic study of (1) as a purely linear-algebra problem has appeared in the literature [7]. One possible reason is that solving a polynomial system of $n^2$ quadratic equations in $n^2$ unknowns is challenging [7]. In the past several years, special cases of (1) have been solved for various classes of matrices $A$ with different approaches in [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. Because finding general solutions of the Yang-Baxter-like matrix Eq. (1) is difficult, almost all the work so far has been directed toward constructing commuting solutions of the equation; see, e.g., [7,8,9,10,11,12,13,14,15,16,17] and references therein. Commuting solutions are those for which the unknown matrix $X$ satisfies the commutativity condition $AX = XA$. All commuting solutions of (1) have been obtained in [7] when $A$ is a matrix with a general Jordan structure, but finding all non-commuting solutions of (1) is still a challenging task when $A$ is arbitrary.

Up to now, there are only isolated results toward this goal for special classes of the given matrix $A$, e.g., [18,19,20,21,22,23,24,25,26]. All solutions have been constructed for rank-1 matrices $A$ in [23], rank-2 matrices $A$ in [24,25], non-diagonalizable elementary matrices $A$ in [26], idempotent matrices $A$ ($A^2 = A$) in [19], $A^2 = I$ in [18,20], $A^3 = A$ in [21], and diagonalizable matrices $A$ with two distinct eigenvalues in [22]. In this paper, we solve the matrix Eq. (1) to obtain some non-commuting solutions when the given matrix $A$ satisfies $A^4 = A$. This is an important step toward solving the equation for more general matrices.

We first provide some preliminary results. Then we study solutions of the Yang-Baxter-like matrix Eq. (1) when the given matrix $A$ satisfies $A^4 = A$, through an analysis of the characteristic polynomial of $A$ and its eigenvalues. Finally, we give some numerical experiments to illustrate our results.

2 Preliminary results

In this section, we give some results for our further discussion. First, we need the following lemmas.

Lemma 2.1

[21, Lemma 2.1] Let $A = \operatorname{diag}\{M, P\}$ be an $n \times n$ matrix such that $M$ is $m \times m$. Then the solutions of (1) are

$X = \begin{pmatrix} K & C \\ D & Z \end{pmatrix},$

where the sub-matrices $K \in \mathbb{C}^{m \times m}$, $Z \in \mathbb{C}^{(n-m) \times (n-m)}$, $C$, and $D$ satisfy

(2) $MKM = KMK + CPD, \quad MCP = KMC + CPZ, \quad PDM = DMK + ZPD, \quad PZP = DMC + ZPZ.$

In particular, if $P = 0$, then (2) reduces to

$MKM = KMK, \quad KMC = 0, \quad DMK = 0, \quad DMC = 0.$

Lemma 2.2

[27, Theorem 2.1] Let $k \le n$ be two positive integers, and let $W$ and $\Lambda$ be $n \times n$ and $k \times k$ matrices, respectively. If there exists an $n \times k$ matrix $U$ that satisfies $WU = U\Lambda$, then the identity

$p_{W + UV^{T}}(\lambda)\, p_{\Lambda}(\lambda) = p_{W}(\lambda)\, p_{\Lambda + V^{T}U}(\lambda)$

holds for any $n \times k$ matrix $V$, where $p_{M}(\lambda)$ denotes the characteristic polynomial of a square matrix $M$.
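Lemma 2.2 lends itself to a quick numerical spot-check. The sketch below (an illustration, not part of the original proof) builds a pair $(W, \Lambda)$ coupled by $WU = U\Lambda$ from an eigendecomposition and compares both sides of the characteristic-polynomial identity at a few sample points:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3

# Couple W and Lambda through W U = U Lambda: let the columns of U span
# an invariant subspace of W carrying the eigenvalues of Lambda.
d = rng.standard_normal(n)
Lam = np.diag(d[:k])
Q = rng.standard_normal((n, n))
W = Q @ np.diag(d) @ np.linalg.inv(Q)
U = Q[:, :k]                        # W U = U Lam by construction
V = rng.standard_normal((n, k))

assert np.allclose(W @ U, U @ Lam)

def charpoly(M, lam):
    # value of the characteristic polynomial det(lam*I - M)
    return np.linalg.det(lam * np.eye(M.shape[0]) - M)

for lam in rng.standard_normal(4):
    lhs = charpoly(W + U @ V.T, lam) * charpoly(Lam, lam)
    rhs = charpoly(W, lam) * charpoly(Lam + V.T @ U, lam)
    assert np.isclose(lhs, rhs, rtol=1e-7, atol=1e-9)
```

Since two degree-$(n+k)$ polynomials agreeing everywhere must be identical, checking random sample points is a reasonable smoke test of the identity.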

We assume that $A$ is an $n \times n$ complex matrix with $n \ge 4$. Since $A^4 = A$, the polynomial

$p(\lambda) = \lambda(\lambda - 1)\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$

is an annihilator of $A$, that is, $p(A) = 0$. Thus, the eigenvalues of $A$ form a subset of $\left\{0, 1, \frac{-1 + i\sqrt{3}}{2}, \frac{-1 - i\sqrt{3}}{2}\right\}$, and the minimal polynomial $g(\lambda)$ of $A$, which is the unique annihilator of $A$ with minimal degree and leading coefficient 1, is a factor of $p(\lambda)$. Therefore, each eigenvalue of $A$ is semi-simple, and hence $A$ is diagonalizable. So there exists a nonsingular matrix $S$ such that $A = SJS^{-1}$, where $J = \operatorname{diag}(\lambda_1 I_{m_1}, \ldots, \lambda_r I_{m_r})$ with $\lambda_i \in \left\{0, 1, \frac{-1 + i\sqrt{3}}{2}, \frac{-1 - i\sqrt{3}}{2}\right\}$, $i = 1, 2, \ldots, r$. Here, each $I_{m_i}$ ($i = 1, 2, \ldots, r$) denotes the $m_i \times m_i$ identity matrix. Let $Y = S^{-1}XS$; then $X$ is a solution of (1) if and only if $Y$ is a solution of the equation

(3) $JYJ = YJY.$

Moreover, $X$ is a commuting solution if and only if $Y$ is a commuting solution. Thus, in the following we solve (3) to obtain solutions of (1).
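The reduction from (1) to (3) can be verified numerically. The following sketch (illustrative; sizes and eigenvalue choices are arbitrary) builds a random diagonalizable $A$ with $A^4 = A$ and checks that the residual of (1) for $X$ is similar to the residual of (3) for $Y = S^{-1}XS$, so one vanishes exactly when the other does:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
w = (-1 + 1j * np.sqrt(3)) / 2        # primitive cube root of unity

# A = S J S^{-1} with eigenvalues drawn from {0, 1, w, conj(w)}
J = np.diag(np.array([0, 1, w, w.conjugate()]))
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Sinv = np.linalg.inv(S)
A = S @ J @ Sinv
assert np.allclose(np.linalg.matrix_power(A, 4), A)

# X solves A X A = X A X  iff  Y = S^{-1} X S solves J Y J = Y J Y:
# the residuals are similar matrices.
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = Sinv @ X @ S
res1 = A @ X @ A - X @ A @ X
res3 = S @ (J @ Y @ J - Y @ J @ Y) @ Sinv
assert np.allclose(res1, res3)
```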

Now, we consider the Yang-Baxter-like matrix Eq. (1) through several cases of the given matrix $A$. In some trivial cases of the minimal polynomial of $A$, the solutions of the Yang-Baxter-like matrix Eq. (1) are obtained immediately. If $g(\lambda) = \lambda$, then $A = 0$, and all $n \times n$ matrices are solutions. If $g(\lambda) = \lambda - 1$, then $A = I$, and all idempotent matrices are solutions. If $g(\lambda) = \lambda + \frac{1 \pm i\sqrt{3}}{2}$, then $A = -\frac{1 \pm i\sqrt{3}}{2} I$, and equation (1) takes the form $c^2 X = c X^2$ with $c = -\frac{1 \pm i\sqrt{3}}{2}$, whose solutions are exactly $X = cP$ with $P$ idempotent. These cases are similar to the case $A = I$, differing only by a coefficient. So we just need to consider the following remaining nontrivial cases.

Case 1. $g(\lambda) = \lambda\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$.

Case 2. $g(\lambda) = \lambda\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)$.

Case 3. $g(\lambda) = \lambda(\lambda - 1)$.

Case 4. $g(\lambda) = (\lambda - 1)\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)$.

Case 5. $g(\lambda) = (\lambda - 1)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$.

Case 6. $g(\lambda) = \left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$.

Case 7. $g(\lambda) = \lambda(\lambda - 1)\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)$.

Case 8. $g(\lambda) = \lambda(\lambda - 1)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$.

Case 9. $g(\lambda) = \lambda\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$.

Case 10. $g(\lambda) = (\lambda - 1)\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$.

Case 11. $g(\lambda) = \lambda(\lambda - 1)\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$.

In this paper, an analysis of all cases except Cases 10 and 11 is given in the next section. It is too difficult to find all the solutions of the Yang-Baxter-like matrix Eq. (1) when $A$ is a diagonalizable complex matrix with three distinct nonzero eigenvalues. So Cases 10 and 11 remain challenging tasks that are not easy to solve in the short term and will be studied further in the future.
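As a quick numerical sanity check of the trivial cases discussed above, the sketch below takes $A = cI$ with $c = \frac{-1+i\sqrt{3}}{2}$ (a primitive cube root of unity, so $c^3 = 1$) and confirms that $X = cP$ solves (1) for an idempotent $P$; the particular $P$ is an arbitrary choice for illustration:

```python
import numpy as np

c = (-1 + 1j * np.sqrt(3)) / 2      # c^3 = 1, and A = c I
n = 3
A = c * np.eye(n)

# With A = c I, equation (1) reads c^2 X = c X^2, i.e. X^2 = c X,
# so X = c P with P idempotent is a solution.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(P @ P, P)        # P is idempotent

X = c * P
assert np.allclose(A @ X @ A, X @ A @ X)
```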

3 Some solutions of the matrix equation

3.1 Case 1: $A^2 = \frac{-1 + i\sqrt{3}}{2} A$

Under the assumption that $A^2 = \frac{-1 + i\sqrt{3}}{2} A$, $A$ is diagonalizable with the two eigenvalues 0 and $\frac{-1 + i\sqrt{3}}{2}$. There exists a nonsingular matrix $S$ such that $A = SJS^{-1}$ with

$J = \operatorname{diag}\left(\frac{-1 + i\sqrt{3}}{2} I_m, 0\right).$

Partition Y as

$Y = \begin{pmatrix} K & C \\ D & Z \end{pmatrix},$

where $K$ is $m \times m$ and $Z$ is $(n-m) \times (n-m)$. By applying Lemma 2.1, we have the following conclusion.

Theorem 3.1

Let $A$ be an $n \times n$ complex matrix of rank $m$ such that $A^2 = \frac{-1 + i\sqrt{3}}{2} A$. Then all solutions of (1) are

(4) $X = S \begin{pmatrix} K & C \\ D & Z \end{pmatrix} S^{-1}$

for some $n \times n$ nonsingular matrix $S$. Here, $Z$ is an arbitrary $(n-m) \times (n-m)$ matrix. For any $m \times m$ nonsingular matrix $U$ partitioned as

(5) $U = (U_1 \; U_2)$

and its inverse partitioned as

(6) $U^{-1} = \begin{pmatrix} \tilde{U_1} \\ \tilde{U_2} \end{pmatrix},$

where $U_1$ is $m \times s$ and $\tilde{U_1}$ is $s \times m$ ($s \le m$), we set $K = \frac{-1 + i\sqrt{3}}{2} U_1 \tilde{U_1}$, $C = U_2 W_2$, and $D = H_2 \tilde{U_2}$ with an arbitrary $(m-s) \times (n-m)$ matrix $W_2$ and $(n-m) \times (m-s)$ matrix $H_2$ satisfying $H_2 W_2 = 0$. In addition, $X$ is a commuting solution if and only if $W_2 = 0$ and $H_2 = 0$.

Proof

Applying Lemma 2.1, we have

(7) $\frac{-1 - i\sqrt{3}}{2} K = \left(\frac{-1 - i\sqrt{3}}{2} K\right)^{2}, \quad KC = 0, \quad DK = 0, \quad DC = 0.$

So $Z$ is an arbitrary $(n-m) \times (n-m)$ matrix. According to the first condition of (7), we have $K = \frac{-1 + i\sqrt{3}}{2} U \Sigma U^{-1}$ for some $m \times m$ nonsingular matrix $U$, where $\Sigma = \operatorname{diag}(I_s, 0)$ with $s \le m$ the rank of $K$. Applying this to the last three conditions of (7), we get

$\Sigma U^{-1} C = 0, \quad D U \Sigma = 0, \quad D U U^{-1} C = 0.$

Denoting $W = U^{-1}C$ and $H = DU$, we have

$\Sigma W = 0, \quad H \Sigma = 0, \quad H W = 0.$

According to the block structure of $\Sigma$, partition $W$ and $H$ as

$W = \begin{pmatrix} W_1 \\ W_2 \end{pmatrix}, \quad H = (H_1 \; H_2),$

respectively. Then

$\Sigma W = \begin{pmatrix} I_s & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} W_1 \\ W_2 \end{pmatrix} = \begin{pmatrix} W_1 \\ 0 \end{pmatrix} = 0,$

hence $W_1 = 0$. Similarly,

$H \Sigma = (H_1 \; H_2) \begin{pmatrix} I_s & 0 \\ 0 & 0 \end{pmatrix} = (H_1 \; 0) = 0,$

hence $H_1 = 0$. Thus,

$H W = (0 \; H_2) \begin{pmatrix} 0 \\ W_2 \end{pmatrix} = H_2 W_2 = 0.$

Partitioning $U$ as in (5) and its inverse as in (6), all solutions of (1) are given by (4), where $K = \frac{-1 + i\sqrt{3}}{2} U_1 \tilde{U_1}$, $C = U_2 W_2$, and $D = H_2 \tilde{U_2}$ with an arbitrary $(m-s) \times (n-m)$ matrix $W_2$ and $(n-m) \times (m-s)$ matrix $H_2$ satisfying $H_2 W_2 = 0$.

In addition, X is a commuting solution if and only if W 2 = 0 and H 2 = 0 .□
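The construction of Theorem 3.1 can be exercised numerically. The sketch below (with arbitrary illustrative sizes $n = 4$, $m = 3$, $s = 1$, and assuming the reconstructed eigenvalue $\frac{-1+i\sqrt{3}}{2}$) builds $K$, $C$, $D$ from a random $U$ and a pair $H_2 W_2 = 0$, then checks that the resulting $X$ solves (1) without commuting with $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
w = (-1 + 1j * np.sqrt(3)) / 2
n, m, s = 4, 3, 1

J = np.diag([w, w, w, 0])
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Sinv = np.linalg.inv(S)
A = S @ J @ Sinv
assert np.allclose(A @ A, w * A)          # Case 1: A^2 = (-1+i*sqrt(3))/2 * A

U = rng.standard_normal((m, m))
Uinv = np.linalg.inv(U)
U1, U2 = U[:, :s], U[:, s:]
U1t, U2t = Uinv[:s, :], Uinv[s:, :]

W2 = np.array([[1.0], [-1.0]])            # (m-s) x (n-m)
H2 = np.array([[1.0, 1.0]])               # (n-m) x (m-s), with H2 @ W2 = 0
assert np.allclose(H2 @ W2, 0)

K = w * U1 @ U1t
C = U2 @ W2
D = H2 @ U2t
Z = rng.standard_normal((n - m, n - m))
Y = np.block([[K, C], [D, Z]])
X = S @ Y @ Sinv

assert np.allclose(A @ X @ A, X @ A @ X)  # X solves the Yang-Baxter-like equation
assert not np.allclose(A @ X, X @ A)      # and is a non-commuting solution
```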

3.2 Case 2: $A^2 = \frac{-1 - i\sqrt{3}}{2} A$

Suppose $A^2 = \frac{-1 - i\sqrt{3}}{2} A$. Then $A$ is diagonalizable with the two eigenvalues 0 and $\frac{-1 - i\sqrt{3}}{2}$. There exists a nonsingular matrix $S$ such that $A = SJS^{-1}$ with $J = \operatorname{diag}\left(\frac{-1 - i\sqrt{3}}{2} I_m, 0\right)$. Partition $Y$ as

$Y = \begin{pmatrix} K & C \\ D & Z \end{pmatrix},$

where $K$ is $m \times m$ and $Z$ is $(n-m) \times (n-m)$. By applying Lemma 2.1, we have the following conclusion.

Theorem 3.2

Let $A$ be an $n \times n$ complex matrix of rank $m$ such that $A^2 = \frac{-1 - i\sqrt{3}}{2} A$. Then all solutions of (1) are

$X = S \begin{pmatrix} K & C \\ D & Z \end{pmatrix} S^{-1}$

for some $n \times n$ nonsingular matrix $S$. Here, $Z$ is an arbitrary $(n-m) \times (n-m)$ matrix, and the other sub-matrices $K$, $C$, and $D$ are constructed as follows. For any $m \times m$ nonsingular matrix $U$ partitioned as $(U_1 \; U_2)$, with its inverse partitioned as $U^{-1} = \begin{pmatrix} \tilde{U_1} \\ \tilde{U_2} \end{pmatrix}$, where $U_1$ is $m \times s$ and $\tilde{U_1}$ is $s \times m$ ($s \le m$), we set $K = \frac{-1 - i\sqrt{3}}{2} U_1 \tilde{U_1}$, $C = U_2 W$, and $D = H \tilde{U_2}$ with an arbitrary $(m-s) \times (n-m)$ matrix $W$ and $(n-m) \times (m-s)$ matrix $H$ satisfying $HW = 0$. In addition, $X$ is a commuting solution if and only if $W = 0$ and $H = 0$.

Proof

The proof is similar to the proof of Theorem 3.1 and is omitted here.□

3.3 Case 3: $A^2 = A$

In this case, $A$ is idempotent, so $A$ has the two eigenvalues 0 and 1. Let $A = SJS^{-1}$, where $S$ is an $n \times n$ nonsingular matrix and $J = \operatorname{diag}\{I_m, 0\}$. Eq. (1) in this case has been studied previously in [21], but for completeness of the presentation, we summarize the main results of [21] in the following theorem.

Theorem 3.3

Let $A$ be an $n \times n$ idempotent matrix with rank $m$. Then all solutions of (1) are

$X = S \begin{pmatrix} K & C \\ D & Z \end{pmatrix} S^{-1},$

for some $n \times n$ nonsingular matrix $S$. Here, $Z$ is an arbitrary $(n-m) \times (n-m)$ matrix, and the other sub-matrices $K$, $C$, and $D$ are constructed as follows. For any $m \times m$ nonsingular matrix $U$ partitioned as

$(U_1 \; U_2),$

and its inverse partitioned as

$U^{-1} = \begin{pmatrix} \tilde{U_1} \\ \tilde{U_2} \end{pmatrix},$

where $U_1$ is $m \times s$ and $\tilde{U_1}$ is $s \times m$ ($s \le m$), the $m \times m$ matrix $K = U_1 \tilde{U_1}$, the $m \times (n-m)$ matrix $C = U_2 W$, and the $(n-m) \times m$ matrix $D = H \tilde{U_2}$ with an arbitrary $(m-s) \times (n-m)$ matrix $W$ and $(n-m) \times (m-s)$ matrix $H$ satisfying $HW = 0$. In addition, $X$ is a commuting solution if and only if $W = 0$ and $H = 0$.

3.4 Case 4: $A^2 = \frac{1 - i\sqrt{3}}{2} A + \frac{1 + i\sqrt{3}}{2} I_n$

In this case, $A$ is nonsingular with the two eigenvalues 1 and $-\frac{1 + i\sqrt{3}}{2}$. Let $m$ be the multiplicity of the eigenvalue 1, and let

$J = \operatorname{diag}\left(I_m, -\frac{1 + i\sqrt{3}}{2} I_{n-m}\right)$

be the Jordan form of A. Then there exists a nonsingular matrix S such that A = S J S 1 . Let

$Y = \begin{pmatrix} K & C \\ D & Z \end{pmatrix},$

where $K$ is $m \times m$, $Z$ is $(n-m) \times (n-m)$, $C$ is $m \times (n-m)$, and $D$ is $(n-m) \times m$. Applying Lemma 2.1, we obtain all commuting solutions and non-commuting solutions of (1) in the following theorem.

Theorem 3.4

Let $A$ be an $n \times n$ complex matrix such that $A^2 = \frac{1 - i\sqrt{3}}{2} A + \frac{1 + i\sqrt{3}}{2} I_n$, with the eigenvalue 1 of multiplicity $m$.

  1. All the commuting solutions of Eq. (1) are

    $X = S \begin{pmatrix} K & 0 \\ 0 & Z \end{pmatrix} S^{-1},$

    where $K$ and $Z$ satisfy $K = K^2$ and $-\frac{1 - i\sqrt{3}}{2} Z = \left(-\frac{1 - i\sqrt{3}}{2} Z\right)^2$.

  2. All the non-commuting solutions of Eq. (1) are

$X = S \begin{pmatrix} K & C \\ D & Z \end{pmatrix} S^{-1},$

where $K$ is any $m \times m$ diagonalizable matrix and $Z$ is any $(n-m) \times (n-m)$ diagonalizable matrix such that

  1. the nonzero matrices $C$ and $D$ have the same rank $r$ and satisfy

    $CDC = \frac{1 + i\sqrt{3}}{3} C, \quad DCD = \frac{1 + i\sqrt{3}}{3} D;$

  2. $K$ and $Z$ have the eigenvalues $-\frac{i\sqrt{3}}{3}$ and $\frac{3 - i\sqrt{3}}{6}$ of multiplicity $r$, respectively;

  3. the nonzero columns of $C$ and nonzero rows of $D$ are eigenvectors and left eigenvectors of $K$, respectively, associated with the eigenvalue $-\frac{i\sqrt{3}}{3}$, and the nonzero columns of $D$ and nonzero rows of $C$ are eigenvectors and left eigenvectors of $Z$, respectively, associated with the eigenvalue $\frac{3 - i\sqrt{3}}{6}$;

  4. the other eigenvalues of $K$ and $Z$ belong to $\{0, 1\}$ and $\left\{0, -\frac{1 + i\sqrt{3}}{2}\right\}$, respectively.

Proof

By applying Lemma 2.1, the Yang-Baxter-like matrix equation (3) is equivalent to

(8) $K^2 - K = \frac{1 + i\sqrt{3}}{2} CD, \quad \frac{1 + i\sqrt{3}}{2} Z^2 + \frac{-1 + i\sqrt{3}}{2} Z = DC, \quad KC = -\frac{1 + i\sqrt{3}}{2} C + \frac{1 + i\sqrt{3}}{2} CZ, \quad DK = -\frac{1 + i\sqrt{3}}{2} D + \frac{1 + i\sqrt{3}}{2} ZD.$

We first look for all commuting solutions of Eq. (1), which correspond to all commuting solutions of (3). If $JY = YJ$, then

$-\frac{1 + i\sqrt{3}}{2} D = D, \quad -\frac{1 + i\sqrt{3}}{2} C = C,$

so $C = 0$ and $D = 0$. Conversely, if $C = 0$ and $D = 0$, it is easy to verify that $JY = YJ$. Thus, (8) implies that all commuting solutions of (1) are $X = SYS^{-1}$ with $Y = \operatorname{diag}(K, Z)$, where $K$ and $Z$ satisfy $K = K^2$ and $-\frac{1 - i\sqrt{3}}{2} Z = \left(-\frac{1 - i\sqrt{3}}{2} Z\right)^2$, respectively.

We show that there are no solutions of (8) with $C = 0$ and $D \ne 0$, or with $C \ne 0$ and $D = 0$. Suppose $C = 0$ and $D \ne 0$ satisfy (8) for some matrices $K$ and $Z$. From the first two equations of (8), we get $K = K^2$ and $-\frac{1 - i\sqrt{3}}{2} Z = \left(-\frac{1 - i\sqrt{3}}{2} Z\right)^2$. So the possible eigenvalues of $K$ and $Z$ lie in $\{0, 1\}$ and $\left\{0, -\frac{1 + i\sqrt{3}}{2}\right\}$, respectively. According to the last equation of (8), $D \ne 0$ is a solution of the Sylvester equation $D\left(\frac{1 - i\sqrt{3}}{2} K + I\right) = ZD$. This means that $\frac{1 - i\sqrt{3}}{2} K + I$ and $Z$ have a common eigenvalue. But $\frac{1 - i\sqrt{3}}{2} \times 0 + 1 = 1$ and $\frac{1 - i\sqrt{3}}{2} \times 1 + 1 = \frac{3 - i\sqrt{3}}{2}$ are not eigenvalues of $Z$. This is a contradiction. The same conclusion is obtained under the assumption $C \ne 0$ and $D = 0$. Thus, any solution of (8) with $C = 0$ or $D = 0$ is a commuting one, and all non-commuting solutions of (8) must have $C \ne 0$ and $D \ne 0$.

From the first two equations of (8), we have

$(K^2 - K)C = \frac{1 + i\sqrt{3}}{2}\, CDC = \frac{1 + i\sqrt{3}}{2}\, C\left(\frac{1 + i\sqrt{3}}{2} Z^2 + \frac{-1 + i\sqrt{3}}{2} Z\right) = -CZ - \frac{1 - i\sqrt{3}}{2}\, CZ^2.$

From the first and the third equations of (8), we obtain

$(K^2 - K)C = (K - I_m)KC = (K - I_m)\left(-\frac{1 + i\sqrt{3}}{2} C + \frac{1 + i\sqrt{3}}{2} CZ\right) = -\frac{1 - i\sqrt{3}}{2}\, CZ^2 + \frac{1 - 3i\sqrt{3}}{2}\, CZ + i\sqrt{3}\, C.$

Combining these results, we have

(9) $CZ = \frac{3 - i\sqrt{3}}{6} C.$

Similarly,

(10) $ZD = \frac{3 - i\sqrt{3}}{6} D.$

Substituting (9) into the right-hand side of the third equation of (8), we get

(11) $KC = -\frac{i\sqrt{3}}{3} C.$

Substituting (10) into the right-hand side of the fourth equation of (8), we have

(12) $DK = -\frac{i\sqrt{3}}{3} D.$

Multiplying the first equation of (8) by $C$ from the right and using (11), we obtain

$CDC = \frac{1 + i\sqrt{3}}{3} C.$

From this, we have

$r(C) = r(CDC) \le r(CD) \le r(C),$

hence $r(C) = r(CD)$. Multiplying the second equation of (8) by $D$ from the right and using (10), we get

$DCD = \frac{1 + i\sqrt{3}}{3} D.$

So

$r(D) = r(DCD) \le r(CD) \le r(D),$

hence $r(D) = r(CD)$. Therefore,

$r(C) = r(D).$

This means that (i) is true. From (10) and (11), we know that all nonzero columns of $D$ and $C$ are eigenvectors of $Z$ and $K$ associated with the eigenvalues $\frac{3 - i\sqrt{3}}{6}$ and $-\frac{i\sqrt{3}}{3}$, respectively. From (9) and (12), we know that all nonzero rows of $C$ and $D$ are left eigenvectors of $Z$ and $K$ associated with the eigenvalues $\frac{3 - i\sqrt{3}}{6}$ and $-\frac{i\sqrt{3}}{3}$, respectively. So for any non-commuting solution of (8), $-\frac{i\sqrt{3}}{3}$ and $\frac{3 - i\sqrt{3}}{6}$ must be eigenvalues of $K$ and $Z$, respectively. Furthermore, they are semi-simple eigenvalues of $K$ and $Z$, respectively. If the eigenvalue $-\frac{i\sqrt{3}}{3}$ of $K$ were not semi-simple, then there would exist a nonzero vector $v$ satisfying $u = \left(K + \frac{i\sqrt{3}}{3} I\right)v \ne 0$ and $\left(K + \frac{i\sqrt{3}}{3} I\right)u = \left(K + \frac{i\sqrt{3}}{3} I\right)^2 v = 0$. The eigenvector $u$ and the generalized eigenvector $v$ must be linearly independent: indeed, if $au + bv = 0$ for some $a, b$, then multiplying this equality by $K + \frac{i\sqrt{3}}{3} I$ from the left gives $b\left(K + \frac{i\sqrt{3}}{3} I\right)v = bu = 0$, from which $b = 0$ and so $a = 0$.

Since

$\frac{1 + i\sqrt{3}}{2}\, CDv = (K^2 - K)v = \left(K + \frac{i\sqrt{3}}{3} I\right)^2 v - \frac{3 + 2i\sqrt{3}}{3}\left(K + \frac{i\sqrt{3}}{3} I\right)v - \frac{1 - i\sqrt{3}}{3} v = -\frac{3 + 2i\sqrt{3}}{3} u - \frac{1 - i\sqrt{3}}{3} v,$

we get

$u = -\frac{3}{3 + 2i\sqrt{3}}\left(\frac{1 + i\sqrt{3}}{2}\, CDv + \frac{1 - i\sqrt{3}}{3} v\right).$

Since $KC = -\frac{i\sqrt{3}}{3} C$, we have $\left(K + \frac{i\sqrt{3}}{3} I\right)CDv = 0$, and hence

$0 = \left(K + \frac{i\sqrt{3}}{3} I\right)^2 v = \left(K + \frac{i\sqrt{3}}{3} I\right)u = -\frac{3}{3 + 2i\sqrt{3}}\left(K + \frac{i\sqrt{3}}{3} I\right)\left(\frac{1 + i\sqrt{3}}{2}\, CDv + \frac{1 - i\sqrt{3}}{3} v\right) = -\frac{1 - i\sqrt{3}}{3 + 2i\sqrt{3}}\, u \ne 0.$

This is a contradiction. Thus, $-\frac{i\sqrt{3}}{3}$ is a semi-simple eigenvalue of $K$. Similarly, $\frac{3 - i\sqrt{3}}{6}$ is a semi-simple eigenvalue of $Z$. So (ii) and (iii) are true.

Since $KC = -\frac{i\sqrt{3}}{3} C$, we obtain

$\frac{1 - i\sqrt{3}}{2}\, KC = C\left(-\frac{3 + i\sqrt{3}}{6} I_{n-m}\right).$

Applying Lemma 2.2 to the first equation of (8), it follows that

$p_{\frac{1 - i\sqrt{3}}{2} K + CD}(\alpha)\, p_{-\frac{3 + i\sqrt{3}}{6} I_{n-m}}(\alpha) = p_{\frac{1 - i\sqrt{3}}{2} K}(\alpha)\, p_{-\frac{3 + i\sqrt{3}}{6} I_{n-m} + DC}(\alpha).$

According to the second equation of (8) and the fact that the eigenvalues of the square of a matrix are the squares of its eigenvalues, the aforementioned identity can be written as

$\prod_{l=1}^{m}\left(\alpha - \frac{1 - i\sqrt{3}}{2} a_l^2\right)\left(\alpha + \frac{3 + i\sqrt{3}}{6}\right)^{n-m} = \prod_{j=1}^{m}\left(\alpha - \frac{1 - i\sqrt{3}}{2} a_j\right) \prod_{k=1}^{n-m}\left(\alpha + \frac{1 - i\sqrt{3}}{2} b_k - \frac{1 + i\sqrt{3}}{2} b_k^2 + \frac{3 + i\sqrt{3}}{6}\right),$

where $a_1, a_2, \ldots, a_m$ are the eigenvalues of $K$ and $b_1, b_2, \ldots, b_{n-m}$ are the eigenvalues of $Z$, all counted with algebraic multiplicity. Since $K$ and $Z$ have the eigenvalues $-\frac{i\sqrt{3}}{3}$ and $\frac{3 - i\sqrt{3}}{6}$ of multiplicity $r$, respectively, let $a_j = -\frac{i\sqrt{3}}{3}$ and $b_j = \frac{3 - i\sqrt{3}}{6}$ for $j = 1, \ldots, r$. Thus,

$\alpha - \frac{1 - i\sqrt{3}}{2} a_j = \alpha + \frac{3 + i\sqrt{3}}{6}, \quad j = 1, \ldots, r.$

Dividing both sides by $\left(\alpha + \frac{3 + i\sqrt{3}}{6}\right)^r$, we obtain

$\prod_{l=1}^{m}\left(\alpha - \frac{1 - i\sqrt{3}}{2} a_l^2\right)\left(\alpha + \frac{3 + i\sqrt{3}}{6}\right)^{n-m-r} = \prod_{j=r+1}^{m}\left(\alpha - \frac{1 - i\sqrt{3}}{2} a_j\right) \prod_{k=1}^{n-m}\left(\alpha + \frac{1 - i\sqrt{3}}{2} b_k - \frac{1 + i\sqrt{3}}{2} b_k^2 + \frac{3 + i\sqrt{3}}{6}\right).$

Since $\frac{1 - i\sqrt{3}}{2} a_j^2 = -\frac{1 - i\sqrt{3}}{6}$ and $\frac{1 + i\sqrt{3}}{2} b_j^2 - \frac{1 - i\sqrt{3}}{2} b_j - \frac{3 + i\sqrt{3}}{6} = -\frac{1 - i\sqrt{3}}{6}$ for $j = 1, \ldots, r$, dividing both sides by $\left(\alpha + \frac{1 - i\sqrt{3}}{6}\right)^r$, the identity can be further simplified to

$\prod_{l=r+1}^{m}\left(\alpha - \frac{1 - i\sqrt{3}}{2} a_l^2\right)\left(\alpha + \frac{3 + i\sqrt{3}}{6}\right)^{n-m-r} = \prod_{j=r+1}^{m}\left(\alpha - \frac{1 - i\sqrt{3}}{2} a_j\right) \prod_{k=r+1}^{n-m}\left(\alpha + \frac{1 - i\sqrt{3}}{2} b_k - \frac{1 + i\sqrt{3}}{2} b_k^2 + \frac{3 + i\sqrt{3}}{6}\right).$

This implies that

$a_j^2 = a_j, \quad j = r+1, \ldots, m,$

and

$b_k^2 = -\frac{1 + i\sqrt{3}}{2} b_k, \quad k = r+1, \ldots, n-m.$

Consequently, $a_l = 0$ or $1$ for $l = r+1, \ldots, m$, and $b_k = 0$ or $-\frac{1 + i\sqrt{3}}{2}$ for $k = r+1, \ldots, n-m$. Next, we show that these eigenvalues are semi-simple. If 0 were an eigenvalue of $K$ that is not semi-simple, then there would exist a vector $v \ne 0$ such that $u = Kv \ne 0$ and $K^2 v = 0$. Multiplying the first equation of (8) by $v$ from the right, we get

$\frac{1 + i\sqrt{3}}{2}\, CDv = (K^2 - K)v = K^2 v - Kv = -Kv = -u.$

Combining this with (11), we obtain

$0 = K^2 v = Ku = -\frac{1 + i\sqrt{3}}{2}\, KCDv = \frac{1 + i\sqrt{3}}{2} \cdot \frac{i\sqrt{3}}{3}\, CDv = -\frac{i\sqrt{3}}{3}\, u \ne 0.$

This is a contradiction. If 1 were an eigenvalue of $K$ that is not semi-simple, then there would exist a vector $v \ne 0$ such that $u = (K - I)v \ne 0$ and $(K - I)u = (K - I)^2 v = 0$. Multiplying the first equation of (8) by $v$ from the right, we have

$\frac{1 + i\sqrt{3}}{2}\, CDv = (K^2 - K)v = (K - I)^2 v + (K - I)v = u.$

Combining this with (11), we obtain

$0 = (K - I)^2 v = (K - I)u = \frac{1 + i\sqrt{3}}{2}(K - I)CDv = \frac{1 + i\sqrt{3}}{2}(KCDv - CDv) = \frac{1 + i\sqrt{3}}{2}\left(-\frac{i\sqrt{3}}{3} - 1\right)CDv = -\frac{3 + i\sqrt{3}}{3}\, u \ne 0.$

This is another contradiction. Therefore, the eigenvalues 0 and 1 of $K$ are semi-simple. Similarly, the eigenvalues 0 and $-\frac{1 + i\sqrt{3}}{2}$ of $Z$ are semi-simple. Hence, $K$ and $Z$ are diagonalizable.

Conversely, suppose that $K$ is an $m \times m$ diagonalizable matrix, $Z$ is an $(n-m) \times (n-m)$ diagonalizable matrix, $C$ is an $m \times (n-m)$ matrix, and $D$ is an $(n-m) \times m$ matrix such that $(K, C, D, Z)$ satisfies (i)-(iv). We show that it is a solution of (8). According to (iii), we have (9), (10), (11), and (12). Combining (9) and (11) gives the third equality of (8); combining (10) and (12) gives the fourth equality of (8). Then from (11) and (i), we obtain

$(K^2 - K)C = \frac{-1 + i\sqrt{3}}{3} C = \frac{1 + i\sqrt{3}}{2}\, CDC.$

Thus,

$(K^2 - K)\hat{C} = \frac{1 + i\sqrt{3}}{2}\, CD\hat{C},$

where $\hat{C}$ is the $m \times r$ matrix consisting of $r$ linearly independent columns of $C$. Let $\tilde{C} = (\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_{m-r})$ be an $m \times (m-r)$ matrix whose columns $\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_{m-r}$ are linearly independent eigenvectors of $K$ associated with the eigenvalues 0 or 1. If $K\tilde{c}_j = \tilde{c}_j$ for some column of $\tilde{C}$, then from equality (12) we have

$D\tilde{c}_j = i\sqrt{3}\, DK\tilde{c}_j = i\sqrt{3}\, D\tilde{c}_j.$

Therefore, $D\tilde{c}_j = 0$. Similarly, if $K\tilde{c}_j = 0$ for some column of $\tilde{C}$, we also have $D\tilde{c}_j = 0$. Thus, $D\tilde{C} = 0$, and then

$(K^2 - K)\tilde{C} = 0 = \frac{1 + i\sqrt{3}}{2}\, CD\tilde{C}.$

Since the columns of $\hat{C}$ and $\tilde{C}$ form a basis of $\mathbb{C}^m$, it follows that

$K^2 - K = \frac{1 + i\sqrt{3}}{2}\, CD.$

By the same token, under the assumption $DCD = \frac{1 + i\sqrt{3}}{3} D$, we obtain

$\frac{1 + i\sqrt{3}}{2} Z^2 + \frac{-1 + i\sqrt{3}}{2} Z = DC.$

Therefore, ( K , C , D , Z ) is a solution of (8).□
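A minimal numerical instance of the theorem can be built with scalar blocks ($m = 1$, $r = 1$). The sketch below assumes the reconstructed constants $K = -\frac{i\sqrt{3}}{3}$, $Z = \frac{3 - i\sqrt{3}}{6}$, and $cd = \frac{1 + i\sqrt{3}}{3}$ (so that $CDC = \frac{1+i\sqrt{3}}{3}C$), and confirms that the resulting $X$ is a non-commuting solution of (1):

```python
import numpy as np

rng = np.random.default_rng(3)
s3 = np.sqrt(3)
nu = (-1 - 1j * s3) / 2                  # second eigenvalue of A (the first is 1)

J = np.diag([1.0 + 0j, nu])
S = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Sinv = np.linalg.inv(S)
A = S @ J @ Sinv
# Case 4 relation: A^2 = (1-i*sqrt(3))/2 * A + (1+i*sqrt(3))/2 * I
assert np.allclose(A @ A, (1 - 1j * s3) / 2 * A + (1 + 1j * s3) / 2 * np.eye(2))

# Scalar blocks chosen per conditions (i)-(iii) of the theorem
K = -1j * s3 / 3
Z = (3 - 1j * s3) / 6
c = 1.0
d = (1 + 1j * s3) / 3                    # c*d = (1+i*sqrt(3))/3

Y = np.array([[K, c], [d, Z]])
X = S @ Y @ Sinv
assert np.allclose(A @ X @ A, X @ A @ X)
assert not np.allclose(A @ X, X @ A)
```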

3.5 Case 5: $A^2 = \frac{1 + i\sqrt{3}}{2} A + \frac{1 - i\sqrt{3}}{2} I_n$

In this case, $A$ is nonsingular with the two eigenvalues 1 and $-\frac{1 - i\sqrt{3}}{2}$. Let $m$ be the multiplicity of the eigenvalue 1, and let

$J = \operatorname{diag}\left(I_m, -\frac{1 - i\sqrt{3}}{2} I_{n-m}\right)$

be the Jordan form of A. Then there exists a nonsingular matrix S such that A = S J S 1 . In order to solve (1), we partition Y as

$Y = \begin{pmatrix} K & C \\ D & Z \end{pmatrix},$

where $K$ is $m \times m$, $Z$ is $(n-m) \times (n-m)$, $C$ is $m \times (n-m)$, and $D$ is $(n-m) \times m$. Applying Lemma 2.1 and the technique in Theorem 3.4, we summarize the main results in the following theorem.

Theorem 3.5

Let $A$ be an $n \times n$ complex matrix such that $A^2 = \frac{1 + i\sqrt{3}}{2} A + \frac{1 - i\sqrt{3}}{2} I_n$, with the eigenvalue 1 of multiplicity $m$.

  1. All the commuting solutions of (1) are

    $X = S \begin{pmatrix} K & 0 \\ 0 & Z \end{pmatrix} S^{-1},$

    where $K$ and $Z$ satisfy $K = K^2$ and $-\frac{1 + i\sqrt{3}}{2} Z = \left(-\frac{1 + i\sqrt{3}}{2} Z\right)^2$.

  2. All the non-commuting solutions of (1) are

$X = S \begin{pmatrix} K & C \\ D & Z \end{pmatrix} S^{-1},$

where $K$ is any $m \times m$ diagonalizable matrix and $Z$ is any $(n-m) \times (n-m)$ diagonalizable matrix such that

  1. the nonzero matrices $C$ and $D$ have the same rank $r$ and satisfy

    $CDC = \frac{1 - i\sqrt{3}}{3} C, \quad DCD = \frac{1 - i\sqrt{3}}{3} D;$

  2. $K$ and $Z$ have the eigenvalues $\frac{i\sqrt{3}}{3}$ and $\frac{3 + i\sqrt{3}}{6}$ of multiplicity $r$, respectively;

  3. the nonzero columns of $C$ and nonzero rows of $D$ are eigenvectors and left eigenvectors of $K$, respectively, associated with the eigenvalue $\frac{i\sqrt{3}}{3}$, and the nonzero columns of $D$ and nonzero rows of $C$ are eigenvectors and left eigenvectors of $Z$, respectively, associated with the eigenvalue $\frac{3 + i\sqrt{3}}{6}$;

  4. the other eigenvalues of $K$ and $Z$ belong to $\{0, 1\}$ and $\left\{0, -\frac{1 - i\sqrt{3}}{2}\right\}$, respectively.

Proof

The proof is similar to the proof of Theorem 3.4 and is omitted here.□

3.6 Case 6: $A^2 = -A - I_n$

If $A^2 = -A - I_n$, then $A$ is a diagonalizable matrix with the two eigenvalues $\frac{-1 + i\sqrt{3}}{2}$ and $\frac{-1 - i\sqrt{3}}{2}$. Let $m$ and $n - m$ be the multiplicities of the eigenvalues $\frac{-1 + i\sqrt{3}}{2}$ and $\frac{-1 - i\sqrt{3}}{2}$, respectively. Then there exists a nonsingular matrix $S$ such that $A = SJS^{-1}$, where

$J = \operatorname{diag}\left(\frac{-1 + i\sqrt{3}}{2} I_m, \frac{-1 - i\sqrt{3}}{2} I_{n-m}\right).$

Applying Lemma 2.1 and the technique in Theorem 3.4, we obtain all commuting solutions and non-commuting solutions of (1) in the following theorem.

Theorem 3.6

Let $A$ be an $n \times n$ complex matrix such that $A^2 = -A - I_n$, and suppose the multiplicities of the eigenvalues $\frac{-1 + i\sqrt{3}}{2}$ and $\frac{-1 - i\sqrt{3}}{2}$ are $m$ and $n - m$, respectively.

  1. All the commuting solutions of (1) are

    $X = S \begin{pmatrix} K & 0 \\ 0 & Z \end{pmatrix} S^{-1},$

    where $K$ and $Z$ satisfy $\frac{-1 - i\sqrt{3}}{2} K = \left(\frac{-1 - i\sqrt{3}}{2} K\right)^2$ and $\frac{-1 + i\sqrt{3}}{2} Z = \left(\frac{-1 + i\sqrt{3}}{2} Z\right)^2$.

  2. All the non-commuting solutions of (1) are

$X = S \begin{pmatrix} K & C \\ D & Z \end{pmatrix} S^{-1},$

where $K$ is any $m \times m$ diagonalizable matrix and $Z$ is any $(n-m) \times (n-m)$ diagonalizable matrix such that

  1. the nonzero matrices $C$ and $D$ have the same rank $r$ and satisfy

    $CDC = -\frac{2}{3} C, \quad DCD = -\frac{2}{3} D;$

  2. $K$ and $Z$ have the eigenvalues $\frac{-3 - i\sqrt{3}}{6}$ and $\frac{-3 + i\sqrt{3}}{6}$ of multiplicity $r$, respectively;

  3. the nonzero columns of $C$ and nonzero rows of $D$ are eigenvectors and left eigenvectors of $K$, respectively, associated with the eigenvalue $\frac{-3 - i\sqrt{3}}{6}$, and the nonzero columns of $D$ and nonzero rows of $C$ are eigenvectors and left eigenvectors of $Z$, respectively, associated with the eigenvalue $\frac{-3 + i\sqrt{3}}{6}$;

  4. the other eigenvalues of $K$ and $Z$ belong to $\left\{0, \frac{-1 + i\sqrt{3}}{2}\right\}$ and $\left\{0, \frac{-1 - i\sqrt{3}}{2}\right\}$, respectively.

Proof

The proof is similar to the proof of Theorem 3.4 and is omitted here.□
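A scalar-block instance of Theorem 3.6 can again be checked numerically. The sketch below works directly with the Jordan form $J$ (so $S = I$ for simplicity), and assumes the reconstructed special eigenvalues $\frac{-3 \mp i\sqrt{3}}{6}$ and the product $cd = -\frac{2}{3}$:

```python
import numpy as np

s3 = np.sqrt(3)
w = (-1 + 1j * s3) / 2                     # eigenvalues of A are w and conj(w)
J = np.diag([w, w.conjugate()])
assert np.allclose(J @ J, -J - np.eye(2))  # Case 6: A^2 = -A - I_n

# Scalar blocks (m = 1, r = 1) per conditions (i)-(ii) of the theorem
K = (-3 - 1j * s3) / 6
Z = (-3 + 1j * s3) / 6
c, d = 1.0, -2.0 / 3.0                     # c*d = -2/3

Y = np.array([[K, c], [d, Z]])
assert np.allclose(J @ Y @ J, Y @ J @ Y)   # Y solves (3)
assert not np.allclose(J @ Y, Y @ J)       # and is non-commuting
```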

3.7 Case 7: $A^3 = \frac{1 - i\sqrt{3}}{2} A^2 + \frac{1 + i\sqrt{3}}{2} A$

Now we consider the case $A^3 = \frac{1 - i\sqrt{3}}{2} A^2 + \frac{1 + i\sqrt{3}}{2} A$, so the minimal polynomial of $A$ is $g(\lambda) = \lambda(\lambda - 1)\left(\lambda + \frac{1 + i\sqrt{3}}{2}\right)$. In this case, $A$ has three distinct eigenvalues: 0, 1, and $-\frac{1 + i\sqrt{3}}{2}$. Assume that the rank of $A$ is $m$ and the multiplicity of the eigenvalue 1 is $k$. Then there exists a nonsingular matrix $S$ such that $A = SJS^{-1}$, where

$J = \operatorname{diag}\left(I_k, -\frac{1 + i\sqrt{3}}{2} I_{m-k}, 0\right).$

We partition the matrix $Y$ as

$Y = \begin{pmatrix} K & F & C_1 \\ E & T & C_2 \\ D_1 & D_2 & Z \end{pmatrix}$

accordingly. Then we have

$JYJ = \begin{pmatrix} K & -\frac{1 + i\sqrt{3}}{2} F & 0 \\ -\frac{1 + i\sqrt{3}}{2} E & -\frac{1 - i\sqrt{3}}{2} T & 0 \\ 0 & 0 & 0 \end{pmatrix}$

and

$YJY = \begin{pmatrix} K^2 - \frac{1 + i\sqrt{3}}{2} FE & KF - \frac{1 + i\sqrt{3}}{2} FT & KC_1 - \frac{1 + i\sqrt{3}}{2} FC_2 \\ EK - \frac{1 + i\sqrt{3}}{2} TE & EF - \frac{1 + i\sqrt{3}}{2} T^2 & EC_1 - \frac{1 + i\sqrt{3}}{2} TC_2 \\ D_1 K - \frac{1 + i\sqrt{3}}{2} D_2 E & D_1 F - \frac{1 + i\sqrt{3}}{2} D_2 T & D_1 C_1 - \frac{1 + i\sqrt{3}}{2} D_2 C_2 \end{pmatrix}.$

According to Eq. (3), $Z$ is an arbitrary $(n-m) \times (n-m)$ matrix. We first look for all commuting solutions of (3). If $JY = YJ$, then

$\begin{pmatrix} K & F & C_1 \\ -\frac{1 + i\sqrt{3}}{2} E & -\frac{1 + i\sqrt{3}}{2} T & -\frac{1 + i\sqrt{3}}{2} C_2 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} K & -\frac{1 + i\sqrt{3}}{2} F & 0 \\ E & -\frac{1 + i\sqrt{3}}{2} T & 0 \\ D_1 & -\frac{1 + i\sqrt{3}}{2} D_2 & 0 \end{pmatrix}.$

Thus,

$E = 0, \quad F = 0, \quad D_1 = 0, \quad D_2 = 0, \quad C_1 = 0, \quad C_2 = 0.$

So all commuting solutions of Eq. (3) must satisfy

$K^2 = K, \quad \left(-\frac{1 - i\sqrt{3}}{2} T\right)^2 = -\frac{1 - i\sqrt{3}}{2} T.$

Hence, we have the following result.

Theorem 3.7

Let $A$ be an $n \times n$ complex matrix such that $A^3 = \frac{1 - i\sqrt{3}}{2} A^2 + \frac{1 + i\sqrt{3}}{2} A$. Suppose that the rank of $A$ is $m$ and the multiplicity of the eigenvalue 1 is $k$. Then all commuting solutions of Eq. (1) are given by

$X = S \operatorname{diag}\{K, T, Z\} S^{-1},$

where $Z$ is any $(n-m) \times (n-m)$ matrix, $K$ is a $k \times k$ idempotent matrix, and $T$ is an $(m-k) \times (m-k)$ matrix satisfying $-\frac{1 - i\sqrt{3}}{2} T = \left(-\frac{1 - i\sqrt{3}}{2} T\right)^2$.

Next, we find all the non-commuting solutions of (1). Write $J = \operatorname{diag}\{M, 0\}$ with $M = \operatorname{diag}\left(I_k, -\frac{1 + i\sqrt{3}}{2} I_{m-k}\right)$. Partition $Y$ as

$Y = \begin{pmatrix} \hat{K} & C \\ D & Z \end{pmatrix},$

where $\hat{K}$ has the same size as $M$. Then by applying Lemma 2.1, we have

(13) $M\hat{K}M = \hat{K}M\hat{K}, \quad \hat{K}MC = 0, \quad DM\hat{K} = 0, \quad DMC = 0.$

So $Z$ is an arbitrary $(n-m) \times (n-m)$ matrix for all of these solutions. Next, we obtain some results for several special cases as follows.

Theorem 3.8

Let $A$ be an $n \times n$ complex matrix such that $A^3 = \frac{1 - i\sqrt{3}}{2} A^2 + \frac{1 + i\sqrt{3}}{2} A$. Suppose that the rank of $A$ is $m$ and the multiplicity of the eigenvalue 1 is $k$.

  1. If $\hat{K} = 0$, then all solutions of (13) are $(0, C, D, Z)$ such that $DMC = 0$. If in addition $C = 0$ or $D = 0$, then all solutions are $(0, 0, D, Z)$ or $(0, C, 0, Z)$, respectively.

  2. If $C = 0$, then all solutions of (13) are $(\hat{K}, 0, D, Z)$ such that $\hat{K}$ is a solution of the Yang-Baxter-like matrix equation $M\hat{K}M = \hat{K}M\hat{K}$ and all rows of $D$ belong to the left null space of $M\hat{K}$. If in addition $D = 0$, then all solutions are commuting.

  3. If $D = 0$, then all solutions of (13) are $(\hat{K}, C, 0, Z)$ such that $\hat{K}$ is a solution of the Yang-Baxter-like matrix equation $M\hat{K}M = \hat{K}M\hat{K}$ and all columns of $C$ belong to the null space of $\hat{K}M$. If in addition $C = 0$, then all solutions are commuting.

Proof

  1. Clearly, if $\hat{K} = 0$, then (13) reduces to $DMC = 0$. Thus, all solutions of (13) are $(0, C, D, Z)$ with $DMC = 0$. If in addition $C = 0$, then all solutions are $(0, 0, D, Z)$; if instead $D = 0$, then all solutions are $(0, C, 0, Z)$.

  2. If $C = 0$, then (13) reduces to

    $M\hat{K}M = \hat{K}M\hat{K}, \quad DM\hat{K} = 0.$

    Thus, $\hat{K}$ is a solution of the Yang-Baxter-like matrix equation $M\hat{K}M = \hat{K}M\hat{K}$, and all rows of $D$ belong to the left null space of $M\hat{K}$. If in addition $D = 0$, then only the single equation $M\hat{K}M = \hat{K}M\hat{K}$ remains, and all solutions are commuting.

  3. The proof is similar to the proof of (2) and is omitted here.□

Now we solve (13) for all non-commuting solutions of (3). We give the main result for the case $A^3 = \frac{1 - i\sqrt{3}}{2} A^2 + \frac{1 + i\sqrt{3}}{2} A$.

Theorem 3.9

Let $A$ be an $n \times n$ complex matrix such that $A^3 = \frac{1 - i\sqrt{3}}{2} A^2 + \frac{1 + i\sqrt{3}}{2} A$. Suppose that the rank of $A$ is $m$ and the multiplicity of the eigenvalue 1 is $k$. Then all solutions of Eq. (1) are given by

$X = S \begin{pmatrix} K & F & C_1 \\ E & T & C_2 \\ D_1 & D_2 & Z \end{pmatrix} S^{-1},$

where $K$ is any $k \times k$ diagonalizable matrix and $T$ is any $(m-k) \times (m-k)$ diagonalizable matrix such that

  1. the nonzero matrices $F$ and $E$ have the same rank $r$ and satisfy

    $FEF = \frac{1 + i\sqrt{3}}{3} F, \quad EFE = \frac{1 + i\sqrt{3}}{3} E;$

  2. $K$ and $T$ have the eigenvalues $-\frac{i\sqrt{3}}{3}$ and $\frac{3 - i\sqrt{3}}{6}$ of multiplicity $r$, respectively;

  3. the nonzero columns of $F$ and nonzero rows of $E$ are eigenvectors and left eigenvectors of $K$, respectively, associated with the eigenvalue $-\frac{i\sqrt{3}}{3}$, and the nonzero columns of $E$ and nonzero rows of $F$ are eigenvectors and left eigenvectors of $T$, respectively, associated with the eigenvalue $\frac{3 - i\sqrt{3}}{6}$;

  4. the other eigenvalues of $K$ and $T$ belong to $\{0, 1\}$ and $\left\{0, -\frac{1 + i\sqrt{3}}{2}\right\}$, respectively;

  5. $\begin{pmatrix} K & F \\ E & T \end{pmatrix}$ is a commuting solution with $M$ if and only if $E = 0$ and $F = 0$, and in this case $K = K^2$ and $-\frac{1 - i\sqrt{3}}{2} T = \left(-\frac{1 - i\sqrt{3}}{2} T\right)^2$.

Any nonzero column vector $c = (c_1^T \; c_2^T)^T$ of the $m \times (n-m)$ matrix $[C_1^T \; C_2^T]^T$ and any nonzero row vector $d = (d_1 \; d_2)$ of the $(n-m) \times m$ matrix $[D_1 \; D_2]$ are an eigenvector and a left eigenvector, associated with the eigenvalue 0, of the matrices

$\begin{pmatrix} K & -\frac{1 + i\sqrt{3}}{2} F \\ E & -\frac{1 + i\sqrt{3}}{2} T \end{pmatrix}$ and $\begin{pmatrix} K & F \\ -\frac{1 + i\sqrt{3}}{2} E & -\frac{1 + i\sqrt{3}}{2} T \end{pmatrix},$

respectively, such that $d_1 c_1 - \frac{1 + i\sqrt{3}}{2} d_2 c_2 = 0$. $Z$ is an arbitrary $(n-m) \times (n-m)$ matrix.

Proof

The first equation of (13) is just the Yang-Baxter-like matrix equation for the nonsingular matrix $M = \operatorname{diag}\left(I_k, -\frac{1 + i\sqrt{3}}{2} I_{m-k}\right)$, which satisfies $M^2 = \frac{1 - i\sqrt{3}}{2} M + \frac{1 + i\sqrt{3}}{2} I_m$. Its general solution has been constructed in Theorem 3.4: $K$ is any $k \times k$ diagonalizable matrix and $T$ is any $(m-k) \times (m-k)$ diagonalizable matrix such that $(K, F, E, T)$ satisfies conditions (1)-(5) in the statement of the theorem. We solve the remaining three equations of (13) to get $C$ and $D$ for each such solution $(K, F, E, T)$. The last three equations of (13) read

(14) $\begin{pmatrix} K & -\frac{1 + i\sqrt{3}}{2} F \\ E & -\frac{1 + i\sqrt{3}}{2} T \end{pmatrix} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = 0, \quad (D_1 \; D_2) \begin{pmatrix} K & F \\ -\frac{1 + i\sqrt{3}}{2} E & -\frac{1 + i\sqrt{3}}{2} T \end{pmatrix} = 0, \quad (D_1 \; D_2) \begin{pmatrix} C_1 \\ -\frac{1 + i\sqrt{3}}{2} C_2 \end{pmatrix} = 0.$

The first equation of (14) implies that any nonzero column vector $c = [c_1^T \; c_2^T]^T$ of the $m \times (n-m)$ matrix $[C_1^T \; C_2^T]^T$ is an eigenvector, associated with the eigenvalue 0, of the matrix $\begin{pmatrix} K & -\frac{1 + i\sqrt{3}}{2} F \\ E & -\frac{1 + i\sqrt{3}}{2} T \end{pmatrix}$. The second equation of (14) implies that any nonzero row vector $d = [d_1 \; d_2]$ of the $(n-m) \times m$ matrix $[D_1 \; D_2]$ is a left eigenvector, associated with the eigenvalue 0, of the matrix $\begin{pmatrix} K & F \\ -\frac{1 + i\sqrt{3}}{2} E & -\frac{1 + i\sqrt{3}}{2} T \end{pmatrix}$. From the last equation of (14), we get $d_1 c_1 - \frac{1 + i\sqrt{3}}{2} d_2 c_2 = 0$.□
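A small concrete instance for this case can be assembled by embedding the Case 4 scalar data into the upper-left $2 \times 2$ block and taking $C_1 = C_2 = D_1 = D_2 = 0$ with $Z$ arbitrary. The sketch below works directly with the Jordan form $J$ ($k = 1$, $m = 2$, $n = 3$) and assumes the reconstructed constants from Theorem 3.4:

```python
import numpy as np

s3 = np.sqrt(3)
nu = (-1 - 1j * s3) / 2
J = np.diag([1.0 + 0j, nu, 0.0])         # eigenvalues 1, -(1+i*sqrt(3))/2, 0
# Case 7 relation: A^3 = (1-i*sqrt(3))/2 * A^2 + (1+i*sqrt(3))/2 * A
assert np.allclose(J @ J @ J, (1 - 1j * s3) / 2 * (J @ J) + (1 + 1j * s3) / 2 * J)

# Upper-left block: scalar (K, F, E, T) satisfying the Case 4 conditions,
# with F*E = (1+i*sqrt(3))/3; the C- and D-blocks are zero, Z is arbitrary.
K = -1j * s3 / 3
T = (3 - 1j * s3) / 6
F = 1.0
E = (1 + 1j * s3) / 3
z = 5.0

Y = np.array([[K, F, 0], [E, T, 0], [0, 0, z]])
assert np.allclose(J @ Y @ J, Y @ J @ Y)   # Y solves (3)
assert not np.allclose(J @ Y, Y @ J)       # and is non-commuting
```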

3.8 Case 8: $A^3 = \frac{1 + i\sqrt{3}}{2} A^2 + \frac{1 - i\sqrt{3}}{2} A$

Now we consider the case $A^3 = \frac{1 + i\sqrt{3}}{2} A^2 + \frac{1 - i\sqrt{3}}{2} A$, so the minimal polynomial of $A$ is $g(\lambda) = \lambda(\lambda - 1)\left(\lambda + \frac{1 - i\sqrt{3}}{2}\right)$. In this case, $A$ has three distinct eigenvalues: 0, 1, and $-\frac{1 - i\sqrt{3}}{2}$. Assume that the rank of $A$ is $m$ and the multiplicity of the eigenvalue 1 is $k$. Then there exists a nonsingular matrix $S$ such that $A = SJS^{-1}$, where

$J = \operatorname{diag}\left(I_k, -\frac{1 - i\sqrt{3}}{2} I_{m-k}, 0\right).$

We partition the matrix $Y$ as

$Y = \begin{pmatrix} K & F & C_1 \\ E & T & C_2 \\ D_1 & D_2 & Z \end{pmatrix}$

accordingly. We have the following main results in this case.

Theorem 3.10

Let A be an n × n complex matrix such that A 3 = 1 + i 3 2 A 2 + 1 i 3 2 A . Suppose that the rank of matrix A is m and the multiplicity of eigenvalue 1 is k.

  1. All commuting solutions of ( 1 ) are given by

    X = S diag { K , T , Z } S 1 ,

    where Z is any ( n m ) × ( n m ) matrix, K is the k × k idempotent matrix, T is the ( m k ) × ( m k ) matrix satisfying 1 + i 3 2 T = 1 + i 3 2 T 2 .

  2. All non-commuting solutions of (1) are given by

X = S [ K   F   C1
        E   T   C2
        D1  D2  Z ] S⁻¹,

where K is any k × k diagonalizable matrix and T is any (m - k) × (m - k) diagonalizable matrix such that

  1. the nonzero matrices F and E have the same rank r and satisfy

    FEF = ((1 - i√3)/3)F,  EFE = ((1 - i√3)/3)E;

  2. K and T have the eigenvalues i√3/3 and (3 + i√3)/6 of multiplicity r, respectively;

  3. the nonzero columns of F and the nonzero rows of E are eigenvectors and left eigenvectors of K, respectively, associated with the eigenvalue i√3/3, and the nonzero columns of E and the nonzero rows of F are eigenvectors and left eigenvectors of T, respectively, associated with the eigenvalue (3 + i√3)/6;

  4. the other eigenvalues of K and T belong to {0, 1} and {0, (-1 + i√3)/2}, respectively;

  5. [K, F; E, T] is a commuting solution with diag(I_k, ((-1 + i√3)/2)I_{m-k}) if and only if E = 0 and F = 0, and in this case K = K² and ((-1 - i√3)/2)T = (((-1 - i√3)/2)T)².

Any nonzero column vector c = (c1ᵀ c2ᵀ)ᵀ of the m × (n - m) matrix [C1ᵀ C2ᵀ]ᵀ and any nonzero row vector d = (d1 d2) of the (n - m) × m matrix [D1 D2] are an eigenvector and a left eigenvector, associated with the eigenvalue 0, of the matrices

[K, ((-1 + i√3)/2)F; E, ((-1 + i√3)/2)T] and [K, F; ((-1 + i√3)/2)E, ((-1 + i√3)/2)T],

respectively, such that d1c1 + ((-1 + i√3)/2)d2c2 = 0. Z is an arbitrary (n - m) × (n - m) matrix.

Proof

The proof is similar to the proof of Theorems 3.7 and 3.9 and is omitted here.□
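Theorem 3.10 can be spot-checked numerically in the smallest nontrivial setting k = m - k = n - m = 1 and r = 1. The sketch below is our own verification, not part of the paper: K, T, E are the scalars prescribed by the theorem, F = 1 and Z = 2 are arbitrary choices, C = D = 0, and Y then solves the reduced equation JYJ = YJY without commuting with J.

```python
import numpy as np

r3 = np.sqrt(3)
w = (-1 + 1j * r3) / 2            # eigenvalue (-1 + i*sqrt(3))/2 of A
alpha = 1j * r3 / 3               # eigenvalue i*sqrt(3)/3 of K
beta = (3 + 1j * r3) / 6          # eigenvalue (3 + i*sqrt(3))/6 of T
f = 1.0                           # scalar F (our choice)
e = (1 - 1j * r3) / 3             # scalar E, so F E F = ((1 - i*sqrt(3))/3) F

J = np.diag([1, w, 0])
Y = np.array([[alpha, f, 0],
              [e, beta, 0],
              [0, 0, 2.0]])       # Z = 2 is arbitrary
assert np.allclose(J @ Y @ J, Y @ J @ Y)   # Y solves the reduced equation
assert not np.allclose(J @ Y, Y @ J)       # and X = S Y S^{-1} is non-commuting
```

Conjugating Y by any nonsingular S then yields a non-commuting solution X = SYS⁻¹ of (1) for A = SJS⁻¹.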

3.9 Case 9: A³ = -A² - A

If A³ = -A² - A, then the minimal polynomial of A is g(λ) = λ(λ + (1 + i√3)/2)(λ + (1 - i√3)/2). In this case, A has three distinct eigenvalues: 0, (-1 + i√3)/2, and (-1 - i√3)/2. Assume that the rank of A is m and the multiplicity of the eigenvalue (-1 + i√3)/2 is k. Then there exists a nonsingular matrix S such that A = SJS⁻¹, where

J = diag(((-1 + i√3)/2)I_k, ((-1 - i√3)/2)I_{m-k}, 0).

We partition the matrix Y as

Y = [ K   F   C1
      E   T   C2
      D1  D2  Z ]

accordingly. We have the following main results in this case.

Theorem 3.11

Let A be an n × n complex matrix such that A³ = -A² - A. Suppose that the rank of A is m and the multiplicity of the eigenvalue (-1 + i√3)/2 is k.

  1. All commuting solutions of (1) are given by

    X = S diag{K, T, Z} S⁻¹,

    where Z is any (n - m) × (n - m) matrix, K is any k × k matrix satisfying ((-1 - i√3)/2)K = (((-1 - i√3)/2)K)², and T is any (m - k) × (m - k) matrix satisfying ((-1 + i√3)/2)T = (((-1 + i√3)/2)T)².

  2. All non-commuting solutions of (1) are given by

X = S [ K   F   C1
        E   T   C2
        D1  D2  Z ] S⁻¹,

where K is any k × k diagonalizable matrix and T is any (m - k) × (m - k) diagonalizable matrix such that

  1. the nonzero matrices F and E have the same rank r and satisfy

    FEF = -(2/3)F,  EFE = -(2/3)E;

  2. K and T have the eigenvalues (-3 - i√3)/6 and (-3 + i√3)/6 of multiplicity r, respectively;

  3. the nonzero columns of F and the nonzero rows of E are eigenvectors and left eigenvectors of K, respectively, associated with the eigenvalue (-3 - i√3)/6, and the nonzero columns of E and the nonzero rows of F are eigenvectors and left eigenvectors of T, respectively, associated with the eigenvalue (-3 + i√3)/6;

  4. the other eigenvalues of K and T belong to {0, (-1 + i√3)/2} and {0, (-1 - i√3)/2}, respectively;

  5. [K, F; E, T] is a commuting solution with diag(((-1 + i√3)/2)I_k, ((-1 - i√3)/2)I_{m-k}) if and only if E = 0 and F = 0, and in this case ((-1 - i√3)/2)K = (((-1 - i√3)/2)K)² and ((-1 + i√3)/2)T = (((-1 + i√3)/2)T)².

Any nonzero column vector c = (c1ᵀ c2ᵀ)ᵀ of the m × (n - m) matrix [C1ᵀ C2ᵀ]ᵀ and any nonzero row vector d = (d1 d2) of the (n - m) × m matrix [D1 D2] are an eigenvector and a left eigenvector, associated with the eigenvalue 0, of the matrices

[((-1 + i√3)/2)K, ((-1 - i√3)/2)F; ((-1 + i√3)/2)E, ((-1 - i√3)/2)T] and [((-1 + i√3)/2)K, ((-1 + i√3)/2)F; ((-1 - i√3)/2)E, ((-1 - i√3)/2)T],

respectively, such that ((-1 + i√3)/2)d1c1 + ((-1 - i√3)/2)d2c2 = 0. Z is an arbitrary (n - m) × (n - m) matrix.

Proof

The proof is similar to the proof of Theorems 3.7 and 3.9 and is omitted here.□
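The same spot-check works for Theorem 3.11 with k = m - k = n - m = 1. In the sketch below (our own verification, not part of the paper), F = 1 and E = -2/3 are one admissible choice with EF = -2/3, K and T are the scalars prescribed by the theorem, and J = diag(ω, ω², 0) with ω = (-1 + i√3)/2.

```python
import numpy as np

r3 = np.sqrt(3)
w = (-1 + 1j * r3) / 2            # omega; omega**2 = (-1 - i*sqrt(3))/2
alpha = -(3 + 1j * r3) / 6        # eigenvalue (-3 - i*sqrt(3))/6 of K
beta = (-3 + 1j * r3) / 6         # eigenvalue (-3 + i*sqrt(3))/6 of T
f = 1.0                           # scalar F (our choice)
e = -2.0 / 3.0                    # scalar E, so E F = -2/3

J = np.diag([w, w**2, 0])
Y = np.array([[alpha, f, 0],
              [e, beta, 0],
              [0, 0, 1.5]])       # Z = 1.5 is arbitrary
assert np.allclose(J @ Y @ J, Y @ J @ Y)   # Y solves the reduced equation
assert not np.allclose(J @ Y, Y @ J)       # and is non-commuting
```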

4 Numerical examples

We present two numerical examples to illustrate our results.

Example 4.1

Let

A = [ 1  0  0  -1
      0  1  0   0
      0  0  1   0
      0  0  0   0 ].

Then A² = A. This is Case 3. There exists a nonsingular matrix

S = [ 1  0  1  1
      0  1  0  0
      1  0  0  0
      0  0  0  1 ],

such that A = SJS⁻¹ with J = diag{1, 1, 1, 0}. By Theorem 3.3, all solutions of (1) are

X = SYS⁻¹ = S [ K  C
                D  z ] S⁻¹,

where z is an arbitrary number, K is any 3 × 3 matrix satisfying

K = UΣU⁻¹,  Σ = diag(I_s, 0),  s ≤ 3;

and C = U₂W₂, D = H₂Ũ₂ for any 3 × 3 nonsingular matrix U. Let

U = [ u11  u12  u13
      u21  u22  u23
      u31  u32  u33 ],  W = [ w1
                              w2
                              w3 ],  H = (h1, h2, h3).

Denote

U⁻¹ = (1/det U) [ U11  U21  U31
                  U12  U22  U32
                  U13  U23  U33 ],

where det U ≠ 0, U11 = u22u33 - u23u32, U12 = -(u21u33 - u23u31), U13 = u21u32 - u22u31, U21 = -(u12u33 - u13u32), U22 = u11u33 - u13u31, U23 = -(u11u32 - u12u31), U31 = u12u23 - u13u22, U32 = -(u11u23 - u13u21), and U33 = u11u22 - u12u21. Depending on the rank of K, we have the following expressions for K, C, and D.

  1. s = 0: then K = U·0·U⁻¹ = 0, so

    Y = [ 0  0  0  u11w1 + u12w2 + u13w3
          0  0  0  u21w1 + u22w2 + u23w3
          0  0  0  u31w1 + u32w2 + u33w3
          (h1U11 + h2U12 + h3U13)/det U  (h1U21 + h2U22 + h3U23)/det U  (h1U31 + h2U32 + h3U33)/det U  z ],

    for all numbers uij, wi, hj (i, j = 1, 2, 3) such that det U ≠ 0 and h1w1 + h2w2 + h3w3 = 0.

  2. s = 1: then K = UΣU⁻¹ with Σ = diag(I₁, 0), so

    K = (1/det U) [ u11U11  u11U21  u11U31
                    u21U11  u21U21  u21U31
                    u31U11  u31U21  u31U31 ],  C = [ u12w2 + u13w3
                                                     u22w2 + u23w3
                                                     u32w2 + u33w3 ],

    D = (1/det U)(h2U12 + h3U13,  h2U22 + h3U23,  h2U32 + h3U33),

    for all numbers uij (i, j = 1, 2, 3), h2, h3, w2, w3 such that det U ≠ 0 and h2w2 + h3w3 = 0.

  3. s = 2: then K = UΣU⁻¹ with Σ = diag(I₂, 0), so

K = (1/det U) [ u11U11 + u12U12  u11U21 + u12U22  u11U31 + u12U32
                u21U11 + u22U12  u21U21 + u22U22  u21U31 + u22U32
                u31U11 + u32U12  u31U21 + u32U22  u31U31 + u32U32 ].

Since h3w3 = 0, we have either

  • h3 = 0, from which D = 0 and

    C = [ u13w3
          u23w3
          u33w3 ],

    for all numbers uij (i, j = 1, 2, 3) and w3 such that det U ≠ 0; or

  • w3 = 0, from which C = 0 and

    D = (1/det U)(h3U13,  h3U23,  h3U33),

    for all numbers uij (i, j = 1, 2, 3) and h3 such that det U ≠ 0.

  4. s = 3: then K = U I U⁻¹ = I, C = 0, and D = 0, so

Y = diag(1, 1, 1, z),  z ∈ ℂ.
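Example 4.1 can be verified end to end. The sketch below is our own check, not part of the paper: the sign of the (1,4) entry of A is restored so that A = SJS⁻¹ holds, and we take s = 1 with U = I, w2 = 1, h3 = 1, and all other free parameters zero, so that h2w2 + h3w3 = 0 is satisfied.

```python
import numpy as np

# A and S of Example 4.1; the entry -1 in position (1,4) of A
# is what makes A = S J S^{-1} with J = diag(1, 1, 1, 0)
A = np.array([[1, 0, 0, -1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]], dtype=float)
S = np.array([[1, 0, 1, 1],
              [0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1]], dtype=float)
J = np.diag([1.0, 1.0, 1.0, 0.0])
assert np.allclose(A @ A, A)                     # A is idempotent (Case 3)
assert np.allclose(A, S @ J @ np.linalg.inv(S))

# s = 1 with U = I: K = diag(1, 0, 0), C = (0, w2, w3)^T, D = (0, h2, h3),
# subject to h2*w2 + h3*w3 = 0; here w2 = 1, h3 = 1, w3 = h2 = 0, z = 7
Y = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 1, 7]], dtype=float)
X = S @ Y @ np.linalg.inv(S)
assert np.allclose(A @ X @ A, X @ A @ X)         # X solves (1)
assert not np.allclose(A @ X, X @ A)             # and does not commute with A
```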

Example 4.2

Let

A = [ 0  1  0  0
      0  1  0  0
      0  (3 - i√3)/2  (-1 + i√3)/2  0
      1  (1 - i√3)/2  (-3 + i√3)/2  1 ],

then A³ = ((1 + i√3)/2)A² + ((1 - i√3)/2)A. This is Case 8. There exists a nonsingular matrix

S = [ 1  0  0  -1
      1  0  0   0
      1  0  1   0
      0  1  1   1 ],

such that A = SJS⁻¹ with J = diag(1, 1, (-1 + i√3)/2, 0).

By Theorem 3.10, all commuting solutions of (1) are X = S diag{K, t, z} S⁻¹, where K is any 2 × 2 idempotent matrix, t equals 0 or (-1 + i√3)/2, and z is any number.
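These commuting solutions can be checked directly. The sketch below is our own verification (with the signs of A and S restored so that A = SJS⁻¹ holds); it takes the idempotent K = [1, 1; 0, 0], t = (-1 + i√3)/2, and z = 5.

```python
import numpy as np

r3 = np.sqrt(3)
w = (-1 + 1j * r3) / 2
A = np.array([[0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, (3 - 1j*r3)/2, (-1 + 1j*r3)/2, 0],
              [1, (1 - 1j*r3)/2, (-3 + 1j*r3)/2, 1]])
S = np.array([[1, 0, 0, -1],
              [1, 0, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=complex)
J = np.diag([1, 1, w, 0])
assert np.allclose(A, S @ J @ np.linalg.inv(S))
assert np.allclose(A @ A @ A, (1 + 1j*r3)/2 * (A @ A) + (1 - 1j*r3)/2 * A)

# a commuting solution: K idempotent, t in {0, w}, z arbitrary
Y = np.zeros((4, 4), dtype=complex)
Y[:2, :2] = np.array([[1, 1], [0, 0]])   # K^2 = K
Y[2, 2] = w                              # t = (-1 + i*sqrt(3))/2
Y[3, 3] = 5.0                            # z = 5
X = S @ Y @ np.linalg.inv(S)
assert np.allclose(A @ X, X @ A)                 # X commutes with A
assert np.allclose(A @ X @ A, X @ A @ X)         # and solves (1)
```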

All non-commuting solutions of (1) are

X = S [ K   f   c1
        e   (3 + i√3)/6   c2
        d1  d2  z ] S⁻¹,

where K is any 2 × 2 diagonalizable matrix with the simple eigenvalue i√3/3, whose other eigenvalue is either 0 or 1; the nonzero column vector f and the nonzero row vector e are an eigenvector and a left eigenvector of K, respectively, associated with the eigenvalue i√3/3, such that ef = (1 - i√3)/3; and z is an arbitrary number. Any nonzero column vector c = (c1ᵀ c2ᵀ)ᵀ and any nonzero row vector d = (d1 d2) are an eigenvector and a left eigenvector, associated with the eigenvalue 0, of the matrices

[K, ((-1 + i√3)/2)f; e, (-3 + i√3)/6] and [K, f; ((-1 + i√3)/2)e, (-3 + i√3)/6],

respectively, such that d1c1 + ((-1 + i√3)/2)d2c2 = 0.

We write the matrix K as

K = [ s1  s2
      s3  s4 ];

the explicit expressions of K in the two cases, where the eigenvalues of K are {i√3/3, 0} and {i√3/3, 1}, are as follows.

  • If the eigenvalues of K are i√3/3 and 0, we have

    s1s4 - (i√3/3)(s1 + s4) - 1/3 - s2s3 = 0,  s1s4 = s2s3.

    Solving these equations, we obtain

    K = [ i√3/6 ± (1/2)√(-1/3 - 4s2s3),  s2
          s3,  i√3/6 ∓ (1/2)√(-1/3 - 4s2s3) ],  s2, s3 ∈ ℂ.

  • If the eigenvalues of K are i√3/3 and 1, we have

    s1s4 - (i√3/3)(s1 + s4) - 1/3 - s2s3 = 0,  s1s4 - s2s3 - (s1 + s4) + 1 = 0.

    Solving this system, we obtain

    K = [ (3 + i√3)/6 ± (1/2)√((2 - 2i√3)/3 - 4s2s3),  s2
          s3,  (3 + i√3)/6 ∓ (1/2)√((2 - 2i√3)/3 - 4s2s3) ],  s2, s3 ∈ ℂ.
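Finally, a concrete non-commuting solution of Example 4.2 can be assembled and verified. In this sketch (our own choice of data, not part of the paper) K = diag(i√3/3, 0), f = (1, 0)ᵀ, e = ((1 - i√3)/3, 0), t = (3 + i√3)/6, c = (0, 1, 0)ᵀ, d = 0, and z = 2.

```python
import numpy as np

r3 = np.sqrt(3)
A = np.array([[0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, (3 - 1j*r3)/2, (-1 + 1j*r3)/2, 0],
              [1, (1 - 1j*r3)/2, (-3 + 1j*r3)/2, 1]])
S = np.array([[1, 0, 0, -1],
              [1, 0, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=complex)

# block layout of Y: K (2x2), f (2x1), c1 (2x1); e (1x2), t, c2; d, z
Y = np.array([[1j*r3/3, 0, 1, 0],
              [0, 0, 0, 1],
              [(1 - 1j*r3)/3, 0, (3 + 1j*r3)/6, 0],
              [0, 0, 0, 2]])
X = S @ Y @ np.linalg.inv(S)
assert np.allclose(A @ X @ A, X @ A @ X)   # X solves (1)
assert not np.allclose(A @ X, X @ A)       # X does not commute with A
```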

5 Conclusions

In this paper, we have found some solutions of the Yang-Baxter-like matrix Eq. (1) when the given matrix A satisfies A⁴ = A, extending the previous results of [18,19,20,21]. Our approach is to use the Jordan decomposition of A to obtain a simplified Yang-Baxter-like matrix equation in which A is replaced by a simple block diagonal matrix, and then to solve a system of several matrix equations for the smaller sized solution blocks. The same idea and technique can be applied to find all solutions of (1) when A satisfies A⁴ = A, or more generally Aᵏ = A for some k ∈ ℕ. Once we obtain all the solutions of (1) when A is a diagonalizable complex matrix with three distinct nonzero eigenvalues, we can solve Cases 10 and 11; their commuting solutions can be obtained in the same way as in the cases treated above. Finding all the non-commuting solutions of the Yang-Baxter-like matrix Eq. (1) for a general matrix A is a hard task, which will be studied further in the future.

Acknowledgments

The authors are indebted to the anonymous referees for their valuable comments to improve the original version of this paper. This work was supported by the National Natural Science Foundation of China (No. 11861008), the China Postdoctoral Science Foundation (No. 2018M641974), the Natural Science Foundation of Jiangxi Province (No. 20192BAB201008), China Scholarship Council (No. 201909865004), the Research fund of Gannan Normal University (Nos. YJG-2018-11, 18zb04), and the Key disciplines coordinate innovation projects of Gannan Normal University.

References

[1] R. J. Baxter, Partition function of the eight-vertex lattice model, Ann. Phys. 70 (1972), 193–228, doi:10.1142/9789812798336_0003.

[2] C. N. Yang, Some exact results for the many-body problem in one dimension with repulsive delta-function interaction, Phys. Rev. Lett. 19 (1967), 1312–1315, doi:10.1103/PhysRevLett.19.1312.

[3] C. N. Yang and M. Ge, Braid Group, Knot Theory, and Statistical Mechanics, World Scientific, Singapore, 1989.

[4] Y. Akutsu and M. Wadati, Knot invariants and the critical statistical systems, J. Phys. Soc. Jpn. 56 (1987), 839–842, doi:10.1142/9789812798329_0004.

[5] L. Alvarez-Gaume, C. Gomez, and G. Sierra, Hidden quantum symmetries in rational conformal field theories, Nuclear Phys. B 319 (1989), 155–186, doi:10.1016/0550-3213(89)90604-4.

[6] D. Bachiller and F. Cedó, A family of solutions of the Yang-Baxter equation, J. Algebra 412 (2014), 218–229, doi:10.1016/j.jalgebra.2014.05.011.

[7] D. Shen, M. Wei, and Z. Jia, On commuting solutions of the Yang-Baxter-like matrix equation, J. Math. Anal. Appl. 462 (2018), 665–696, doi:10.1016/j.jmaa.2018.02.030.

[8] J. Ding and N. H. Rhee, Spectral solutions of the Yang-Baxter matrix equation, J. Math. Anal. Appl. 402 (2013), 567–573, doi:10.1016/j.jmaa.2013.01.054.

[9] J. Ding and N. H. Rhee, Computing solutions of the Yang-Baxter-like matrix equation for diagonalisable matrices, East Asian J. Appl. Math. 5 (2015), 75–84, doi:10.4208/eajam.230414.311214a.

[10] J. Ding and C. Zhang, On the structure of the spectral solutions of the Yang-Baxter matrix equation, Appl. Math. Lett. 35 (2014), 86–89, doi:10.1016/j.aml.2013.11.007.

[11] J. Ding, C. Zhang, and N. H. Rhee, Further solutions of a Yang-Baxter-like matrix equation, East Asian J. Appl. Math. 3 (2013), 352–362, doi:10.4208/eajam.130713.221113a.

[12] J. Ding, C. Zhang, and N. H. Rhee, Commuting solutions of the Yang-Baxter matrix equation, Appl. Math. Lett. 44 (2015), 1–4, doi:10.1016/j.aml.2014.11.017.

[13] Q. Dong and J. Ding, Complete commuting solutions of the Yang-Baxter-like matrix equation for diagonalizable matrices, Comput. Math. Appl. 72 (2016), 194–201, doi:10.1016/j.camwa.2016.04.047.

[14] Q. Dong, J. Ding, and Q. Huang, Commuting solutions of a quadratic matrix equation for nilpotent matrices, Algebra Colloq. 25 (2018), no. 1, 31–44, doi:10.1142/S1005386718000032.

[15] H. Ren, X. Wang, and T. Wang, Commuting solutions of the Yang-Baxter-like matrix equation for a class of rank-two updated matrices, Comput. Math. Appl. 76 (2018), 1085–1098, doi:10.1016/j.camwa.2018.05.042.

[16] D. Zhou, G. Chen, G. Yu, and J. Zhong, On the projection-based commuting solutions of the Yang-Baxter matrix equation, Appl. Math. Lett. 79 (2018), 155–161, doi:10.1016/j.aml.2017.12.009.

[17] D. Zhou and J. Ding, Solving the Yang-Baxter-like matrix equation for nilpotent matrices of index three, Int. J. Comput. Math. 95 (2018), no. 2, 303–315, doi:10.1080/00207160.2017.1284320.

[18] Q. Huang, M. Saeed Ibrahim Adam, J. Ding, and L. Zhu, All non-commuting solutions of the Yang-Baxter matrix equation for a class of diagonalizable matrices, Oper. Matrices 13 (2019), no. 1, 187–195, doi:10.7153/oam-2019-13-11.

[19] M. Saeed Ibrahim Adam, J. Ding, and Q. Huang, Explicit solutions of the Yang-Baxter-like matrix equation for an idempotent matrix, Appl. Math. Lett. 63 (2017), 71–76, doi:10.1016/j.aml.2016.07.021.

[20] M. Saeed Ibrahim Adam, J. Ding, Q. Huang, and L. Zhu, Solving a class of quadratic matrix equations, Appl. Math. Lett. 82 (2018), 58–63, doi:10.1016/j.aml.2018.02.017.

[21] M. Saeed Ibrahim Adam, J. Ding, Q. Huang, and L. Zhu, All solutions of the Yang-Baxter-like matrix equation when A³ = A, J. Appl. Anal. Comput. 9 (2019), no. 3, 1022–1031.

[22] D. Shen and M. Wei, All solutions of the Yang-Baxter-like matrix equation for diagonalizable coefficient matrix with two different eigenvalues, Appl. Math. Lett. 101 (2020), 106048, doi:10.1016/j.aml.2019.106048.

[23] H. Tian, All solutions of the Yang-Baxter-like matrix equation for rank-one matrices, Appl. Math. Lett. 51 (2016), 55–59, doi:10.1016/j.aml.2015.07.009.

[24] D. Zhou, G. Chen, and J. Ding, Solving the Yang-Baxter-like matrix equation for rank-two matrices, J. Comput. Appl. Math. 313 (2017), 142–151, doi:10.1016/j.cam.2016.09.007.

[25] D. Zhou, G. Chen, and J. Ding, On the Yang-Baxter-like matrix equation for rank-two matrices, Open Math. 15 (2017), 340–353, doi:10.1515/math-2017-0026.

[26] D. Zhou, G. Chen, J. Ding, and H. Tian, Solving the Yang-Baxter-like matrix equation with non-diagonalizable elementary matrices, Commun. Math. Sci. 17 (2019), no. 2, 393–411, doi:10.4310/CMS.2019.v17.n2.a5.

[27] J. Ding and A. Zhou, Characteristic polynomials of some perturbed matrices, Appl. Math. Comput. 199 (2008), 631–636, doi:10.1016/j.amc.2007.10.024.

Received: 2020-01-19
Revised: 2020-05-13
Accepted: 2020-07-01
Published Online: 2020-09-15

© 2020 Duan-Mei Zhou and Hong-Quang Vu, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
