Home Mathematics The hyperbolic CS decomposition of tensors based on the C-product
Article Open Access

The hyperbolic CS decomposition of tensors based on the C-product

  • Hongwei Jin EMAIL logo , Siran Chen and Julio Benítez
Published/Copyright: November 18, 2025

Abstract

This paper studies the issues about the hyperbolic CS decomposition of tensors under the C-product. The aim of this paper is fourfold. Firstly, we establish the CS decomposition of a complex unitary tensor, including the thin version and the standard version. The corresponding numerical algorithm is also given. Next, we define three kinds of tensors, i.e., the strong unitary tensor, the mode-1 strong unitary tensor, the mode-2 strong unitary tensor, and we give the CS decomposition of the last two kind of tensors aforementioned. Numerical algorithms are also obtained to compute the two types of the CS decompositions. Moreover, we give the definition of another three classes of tensors, called the J -orthogonal tensor, the mode-1 strong J -orthogonal tensor and the mode-2 strong J -orthogonal tensor. The corresponding hyperbolic CS decompositions and numerical algorithms are also established. Finally, we give an application to the computation of the C-eigenvalues of the orthogonal tensor. Numerical examples are given to test our results.

MSC 2020: 15A18; 15A23; 15A69

1 Introduction

In recent years, the researches of tensors or multidimensional arrays have become more popular. A complex tensor can be regarded as a multidimensional array of data, which takes the form A = ( A i 1 i p ) C n 1 × n 2 × × n p . The order of a tensor is the number of dimensions which is also called ways or modes. Therefore, the well-known vectors and matrices are first-order tensors and second-order tensors.

Higher-order tensors have been applied in quite a lot of areas, such as psychometrics [1], chemometrics [2], face recognition [3], image and signal processing [4], [5], [6], [7], [8], [9] and so on.

The tensor decomposition has developed very well and found to have good applications in many fields, such as in eigenvalues, genomic signals, data mining, signal processing [10], [11], [12], [13], [14], [15], [16]. Kinds of tensor decompositions via diverse tensor products have been investigated to extend the matrices contents. For example, [17], [18], [19], [20], [21].

The tensor-tensor products include the Einstein product, T-product, C-product and so on. Lots of excellent works had appeared, such as: Sun et al. [22],23] studied the issue about the generalized inverse of tensors based on the Einstein product and a general product of tensors. Miao et al. [24], [25], [26] studied the tensor contents under the T-product. Panigrahy et al. [27] and Behera et al. [28] studied the reverse order law and computation concerning the Einstein product and T-product. Liu et al. [29] studied the issue of the dual core generalized inverse. Cong et al. [30] and Sahoo et al. [31] studied the core-EP inverse of tensors. Jin et al. [32] researched some work on the generalized inverse of tensors based on the T-product. Behera et al. [33],34] and Ji et al. [35] got some results on the Drazin inverse of the Einstein product. Cao et al. [36],37] had a further study on the perturbation and inequalities based on the T-product. Che et al. [38] established an efficient algorithm for computing the approximate t-URV. Chen et al. [39] studied the perturbations of the tensor Schur decomposition and Liu et al. [40] studied the weighted generalized tensor functions. Mo et al. [41],42] studied on the Time-varying generalized tensor eigenanalysis and T-eigenvalues of tensors. Wei et al. [43] studied the Neural network models and Shao et al. [44] studied the nonsymmetric algebraic Riccati equations.

Kernfeld et al. [45] defined a new tensor-tensor product, that is, the Cosine Transform Product, referred to as C-product for short. It has been shown that the C-product can be implemented efficiently using DCT (Discrete Cosine Transform). Moreover, the authors indicated that one can use C-product to conveniently specify a discrete image blurring model and the image restoration model. Bentbib et al. [46] explored good applications of the C-product. They proposed new methods for the problem of the third-order tensor completion in combination with the TV regularization procedure and tensor robust principal component analysis by using the C-product. Xu et al. [47] indicated that the advantages of using DCT are: (1) the complex calculation is not involved in the cosine transform based singular value decomposition, so the computational costs can be saved; (2) the intrinsic reflexive boundary condition along the tubes in the third dimension of tensors is employed, so its performance would be better than that by using the periodic boundary condition in DFT (Discrete Fourier Transform).

The CS decomposition is a very useful tool in the matrix analysis. Benítez et al. [48] studied the spectrum and the rank of a linear combination of two orthogonal projectors. By using the CS decomposition, the authors characterized when this linear combination is EP, diagonalizable, idempotent, tripotent, involutive, nilpotent, generalized projector, and hypergeneralized projector. The Moore-Penrose inverse of a linear combination of two orthogonal projectors in a special case was also derived. Calvetti et al. [49] indicated that the Schur form of the real orthogonal matrix can be got from a full CS decomposition. Based on this fact, the authors derived a CS decomposition based on the orthogonal eigenvalue method. An algorithm for an orthogonal similarity transformation of an orthogonal matrix to a condensed product form and an algorithm for full CS decomposition were also described.

Based on these backgrounds, we will study the theory of the hyperbolic CS decomposition of tensors via the C-product in this paper.

This paper is organized as follows. In Section 2, we give the terms and symbols needed to be used in this work. Then, we introduce the C-product of two tensors. In Section 3, we firstly introduce the thin version of the CS decomposition of a complex unitary tensor. Then, a standard CS decomposition of a complex unitary tensor is obtained. In Section 3, we define three kinds of tensors, i.e., the strong unitary tensor, the mode-1 strong unitary tensor and the mode-2 strong unitary tensor. Then, the CS decomposition and corresponding numerical algorithms of the mode-1 strong unitary tensor and the mode-2 strong unitary tensor are constructed. In the next section, we define another three classes of tensors, i.e., the J -orthogonal tensor, the mode-1 strong J -orthogonal tensor and the mode-2 strong J -orthogonal tensor, which the corresponding hyperbolic CS decompositions and numerical algorithms are established. Finally, we give an application to the computation of the C-eigenvalues of the orthogonal tensor. Numerical examples are given to verify our results.

2 Preliminaries

In this paper, we denote vectors, matrices, three or higher order tensors like a , A , A , respectively. Meanwhile, a i , A ij and A i 1 i 2 i p are the components of the vector a, matrix A and tensor A , respectively. The n × n identity matrix is denoted by I n . The frontal slice of the tensor A is A ( : , : , i ) . For simplicity, we denote the frontal slice as A ( i ) .

We start this section by introducing the following face-wise product between two tensors.

Definition 1.

[45] Let A C n 1 × n 2 × n 3 and B C n 2 × l × n 3 . The face-wise product A B C n 1 × l × n 3 is defined as

( A B ) ( i ) = A ( i ) B ( i ) , i = 1 , , n 3 .

The following example is helpful in understanding this product.

Example 1.

Let A C 3 × 3 × 2 and B C 3 × 2 × 2 with

A ( 1 ) = 1 3 5 2 6 0 0 2 4 , A ( 2 ) = 0 2 2 3 1 1 0 0 3 , B ( 1 ) = 1 1 2 0 2 1 , B ( 2 ) = 1 0 0 4 3 3 .

Then, A B C 3 × 2 × 2 and

( A B ) ( 1 ) = A ( 1 ) B ( 1 ) = 1 3 5 2 6 0 0 2 4 1 1 2 0 2 1 = 17 6 14 2 12 4 ,

( A B ) ( 2 ) = A ( 2 ) B ( 2 ) = 0 2 2 3 1 1 0 0 3 1 0 0 4 3 3 = 6 14 6 7 9 9 .

Now, we will present the C-product of two tensors. Firstly, we give the definition of the operation of mat(⋅).

Definition 2.

[45] Let A C n 1 × n 2 × n 3 . A ( 1 ) , A ( 2 ) , , A ( n 3 ) are its frontal slices. Then we use m a t ( A ) to denote the block Toeplitz-plus-Hankel matrix

(1) m a t ( A ) = A ( 1 ) A ( 2 ) A ( n 3 1 ) A ( n 3 ) A ( 2 ) A ( 1 ) A ( n 3 2 ) A ( n 3 1 ) A ( n 3 1 ) A ( n 3 2 ) A ( 1 ) A ( 2 ) A ( n 3 ) A ( n 3 1 ) A ( 2 ) A ( 1 ) + A ( 2 ) A ( 3 ) A ( n 3 ) O A ( 3 ) A ( 4 ) O A ( n 3 ) A ( n 3 ) O A ( 4 ) A ( 3 ) O A ( n 3 ) A ( 3 ) A ( 2 ) C n 1 n 3 × n 2 n 3 ,

where O is the n 1 × n 2 zero matrix.

Definition 3.

[45] Define ten(⋅) the inverse operation of the mat(⋅), i.e.,

t e n [ m a t ( A ) ] = A .

Now, we can give the C-product of two tensors.

Definition 4.

[45] Let A C n 1 × n 2 × n 3 and B C n 2 × l × n 3 . The cosine transform product, which is called C-product for short, is defined as

A c B = t e n [ m a t ( A ) m a t ( B ) ] .

From the above definition, we can see that it is easy to compute m a t ( A ) m a t ( B ) by using the technical of the matrices product. In order to compute the C-product, we must deal with the operation “ten(⋅)”, which can be realized by using the following algorithm.

Algorithm 2.1: Compute ten(⋅) of a matrix
Input: n 1 n 3 × n 2 n 3 block matrix Z
Output: n 1 × n 2 × n 3 tensor A
 1. Take the bottom left n 1 × n 2 block of Z and Z = A ( n 3 )
 2. for i = n 3 − 1, …, 1
    A ( i ) = [i-th block of first block column of Z] A ( i + 1 )
     end

Notice that the first column of m a t ( A ) defined in (1) is

A ( 1 ) + A ( 2 ) A ( 2 ) + A ( 3 ) A ( n 3 1 ) + A ( n 3 ) A ( n 3 ) .

So, it is easy to get all the frontal slices of A by using the technique of Algorithm 2.1.

Now, we present another way to define the C-product of two tensors by using the face-wise product. Before that, the mode-3 product of a tensor with a matrix is required.

Definition 5.

[20] The mode-3 product of a tensor A C n 1 × n 2 × n 3 with a matrix U C J × n 3 is denoted by A × 3 U . More precise, we have

( A × 3 U ) i 1 i 2 j = i 3 = 1 n 3 A i 1 i 2 i 3 U j i 3 , i 1 = 1 , , n 1 , i 2 = 1 , , n 2 , j = 1 , , J .

Now, we will introduce how to compute the mode-3 product of a tensor with a matrix. Let the frontal slice of A C n 1 × n 2 × n 3 be

A ( 1 ) = A 111 A 121 A 1 n 2 1 A 211 A 221 A 2 n 2 1 A n 1 11 A n 1 21 A n 1 n 2 1 , , A ( n 3 ) = A 11 n 3 A 12 n 3 A 1 n 2 n 3 A 21 n 3 A 22 n 3 A 2 n 2 n 3 A n 1 1 n 3 A n 1 2 n 3 A n 1 n 2 n 3 .

Then, the mode-3 unfolding of A , denoted by A ( 3 ) , is

(2) A ( 3 ) = A 111 A 211 A n 1 11 A 121 A 221 A n 1 21 A 1 n 2 1 A 2 n 2 1 A n 1 n 2 1 A 112 A 212 A n 1 12 A 122 A 222 A n 1 22 A 1 n 2 2 A 2 n 2 2 A n 1 n 2 2 A 11 n 3 A 21 n 3 A n 1 1 n 3 A 12 n 3 A 22 n 3 A n 1 2 n 3 A 1 n 2 n 3 A 2 n 2 n 3 A n 1 n 2 n 3 .

Notice that A × 3 U can be computed using the following matrix-matrix product. See [20] for details.

(3) W = A × 3 U W ( 3 ) = U A ( 3 ) .

The following example shows how to compute the mode-3 product of a tensor with a matrix.

Example 2.

Let A C 3 × 3 × 2 and U C 2 × 2 with

A ( 1 ) = 1 0 2 2 3 0 3 3 0 , A ( 2 ) = 0 1 2 3 2 0 0 0 3 , U = 1 1 2 1 .

Suppose W = A × 3 U . Then,

A ( 3 ) = 1 2 3 0 3 3 2 0 0 0 3 0 1 2 0 2 0 3 .

Hence,

W ( 3 ) = U A ( 3 ) = 1 1 2 1 1 2 3 0 3 3 2 0 0 0 3 0 1 2 0 2 0 3 = 1 5 3 1 5 3 4 0 3 2 7 6 1 8 6 6 0 3 .

Thus,

W ( 1 ) = 1 1 4 5 5 0 3 3 3 , W ( 2 ) = 2 1 6 7 8 0 6 6 3 .

Based on the above preparation work, we can get the alternative expression of the C-product. Observe that

(4) L ( A ) = A × 3 M and L 1 ( A ) = A × 3 M 1 .

Lemma 1.

[45] Let A C n 1 × n 2 × n 3 and B C n 2 × l × n 3 . Then,

(5) A c B = L 1 [ L ( A ) L ( B ) ] = [ ( A × 3 M ) ( B × 3 M ) ] × 3 M 1 ,

where M = W −1 C(I + Z), W = diag(C(:, 1)), C(:, 1) is the first column of C, the matrix Z C n 3 × n 3 is the circulant upshift matrix defined by

Z = d i a g ( o n e s ( n 3 1,1 ) , 1 ) ,

C is the orthogonal DCT matrix of size n 3 × n 3 and its elements are defined as

(6) C i j = 2 δ i j n 3 cos ( i 1 ) ( 2 j 1 ) π 2 n 3 , i , j = 1 , , n 3 ,

δ ij is the Kronecker symbol.

Now, we give an example to show the details to implement the C-product.

Example 3.

Let A C 3 × 3 × 2 and B C 3 × 2 × 2 with

A ( 1 ) = 1 1 2 0 0 3 2 2 2 , A ( 2 ) = 0 2 1 0 3 0 2 2 3 , B ( 1 ) = 2 2 0 0 2 1 , B ( 2 ) = 3 0 0 1 2 2 .

In this case, M is a 2 × 2 matrix and can be computed by using the Matlab. More precisely, M and M −1 are

M = 1 2 1 0 , M 1 = 0 1 0.5 0.5 .

By (4), we can compute L ( A ) = A × 3 M and L ( B ) = B × 3 M , i.e.,

L ( A ) ( 1 ) = 1 5 4 0 6 3 6 6 8 , L ( A ) ( 2 ) = 1 1 2 0 0 3 2 2 2 ,

L ( B ) ( 1 ) = 8 2 0 2 6 5 , L ( B ) ( 2 ) = 2 2 0 0 2 1 .

Then, we can get L ( A ) L ( B ) , that is,

( L ( A ) L ( B ) ) ( 1 ) = 32 32 18 27 96 64 , ( L ( A ) L ( B ) ) ( 2 ) = 6 4 6 3 8 6 .

By the last step, we can get the C-product of A and B , i.e.,

( A c B ) ( 1 ) = 6 4 6 3 8 6 , ( A c B ) ( 2 ) = 13 14 6 12 44 29 .

An algorithm of the C-product of A C n 1 × n 2 × n 3 and B C n 2 × l × n 3 is given below.

Algorithm 2.2: Compute the C-product of two tensors [45]
Input: n 1 × n 2 × n 3 tensor A and n 2 × l × n 3 tensor B
Output: n 1 × l × n 3 tensor C
 1. Compute M = W −1 C(I + Z) as in Lemma 1
 2. Compute A ̂ = L ( A ) = A × 3 M , B ̂ = L ( B ) = B × 3 M 1
 3. for i = 1, …, n 3
    C ̂ ( i ) = A ̂ ( i ) B ̂ ( i )
  end
 4. C = L 1 ( C ̂ )

By Algorithm 2.2, we can get the following lemma.

Lemma 2.

Let A , B and C be tensors with proper sizes. Then, the following statements are true.

  1. C = A + B L ( C ) ( i ) = L ( A ) ( i ) + L ( B ) ( i ) .

  2. C = A c B L ( C ) ( i ) = L ( A ) ( i ) L ( B ) ( i ) .

Let A C n 1 × n 2 × n 3 . The following lemma shows that m a t ( A ) can be block diagonalized.

Lemma 3.

[45] Let A C n 1 × n 2 × n 3 . Then,

( C n 3 I n 1 ) m a t ( A ) C n 3 1 I n 2 = d i a g L ( A ) ( 1 ) , L ( A ) ( 2 ) , , L ( A ) ( n 3 ) ,

where ⊗ is the Kronecker product and C n 3 is the n 3 × n 3 orthogonal DCT matrix.

Definition 6.

[45] Let L ( I ) = I ̂ C n × n × n 3 be such that I ̂ ( i ) = I n , i = 1, 2, …, n 3. Then, I = L 1 ( I ̂ ) is the identity tensor.

Definition 7.

[45] Let A C n × n × n 3 and B C n × n × n 3 . If

A c B = I and B c A = I ,

then A is said to be invertible and B is the inverse of A , which is denoted by A 1 .

We can check that the inverse of a tensor, if exists, is unique. The conjugate transpose of tensors can be defined as follows.

Definition 8.

[45] If A C n 1 × n 2 × n 3 , then the conjugate transpose of A , which is denoted by A H , is such that

L ( A H ) ( i ) = ( L ( A ) ( i ) ) H , i = 1,2 , , n 3 .

Lemma 4.

[45] Let A C n 1 × n 2 × n 3 and B C n 2 × l × n 3 . It holds that

( A c B ) H = B H c A H .

Definition 9.

[45] The tensor Q C n × n × n 3 is said unitary if Q H c Q = Q c Q H = I . The tensor Q C n 1 × n 2 × n 3 is said partially unitary if Q H c Q = I .

Definition 10.

[50] Let A C n 1 × n 2 × n 3 . Then, A is called an F-diagonal/F-upper/F-lower tensor if all frontal slices A ( i ) , i = 1, 2, …, n 3, of A are diagonal/upper triangular/lower triangular matrices.

The following lemma is helpful in establishing the main result of next sections.

Lemma 5.

[50] Let A C n 1 × n 2 × n 3 . Then, A is an F-diagonal/F-upper/F-lower tensor if and only if L ( A ) is an F-diagonal/F-upper/F-lower tensor.

The next lemma explains the operation of two block tensors.

Lemma 6.

[51] Let A C m 1 × n 1 × p , B C m 1 × n 2 × p , C C m 2 × n 1 × p , D C m 2 × n 2 × p , E C n 1 × m 1 × p , F C n 1 × m 2 × p , G C n 2 × m 1 × p and H C n 2 × m 2 × p . Then,

A B C D c E F G H = A c E + B c G A c F + B c H C c E + D c G C c F + D c H .

In the following, we show that the L(⋅) operation of the block tensor has the good character.

Lemma 7.

Let A C m 1 × n 1 × p , B C m 1 × n 2 × p , C C m 2 × n 1 × p and D C m 2 × n 2 × p . Then,

L A B C D = L ( A ) L ( B ) L ( C ) L ( D ) .

Proof.

Let M C p × p . By the definition of L(⋅), one has

L A B C D = A B C D × 3 M L A B C D ( 3 ) = M A B C D ( 3 ) L A B C D ( 3 ) = M A ( 3 ) M C ( 3 ) M B ( 3 ) M D ( 3 ) L A B C D = A × 3 M B × 3 M C × 3 M D × 3 M .

Hence, we claim that

L A B C D = L ( A ) L ( B ) L ( C ) L ( D ) .

3 The CS decomposition of the unitary tensor

In this section, we will study the CS decomposition of the unitary tensor based on C-product. Firstly, we give a thin version of the CS decomposition of the unitary tensor.

Theorem 1.

Let W 1 C m 1 × n 1 × p with m 1n 1, W 2 C m 2 × n 1 × p with m 2n 1 and

W = W 1 W 2 C ( m 1 + m 2 ) × n 1 × p

be a partially unitary tensor. Then, there exist unitary tensors U 1 C m 1 × m 1 × p , U 2 C m 2 × m 2 × p and V C n 1 × n 1 × p such that

(7) W = U 1 O O U 2 c C S c V H ,

where C C m 1 × n 1 × p , S C m 2 × n 1 × p are F-diagonal tensors and

(8) C H c C + S H c S = I .

Proof.

Since W is a partially unitary tensor, then W H c W = I . By Lemma 2, we have

L ( W H ) ( i ) L ( W ) ( i ) = I n 1 , i = 1 , , p .

Hence, L ( W ) ( i ) , i = 1 , , p , are partially unitary matrices. Notice that

W = W 1 W 2 ,

by using Lemma 7, we have

L ( W ) ( i ) = L ( W 1 ) ( i ) L ( W 2 ) ( i ) , i = 1 , , p .

Now, we can get the CS decomposition of L ( W ) ( i ) by using [52], Theorem 2.5.2], that is

(9) L ( W ) ( i ) = L ( U 1 ) ( i ) O O L ( U 2 ) ( i ) L ( C ) ( i ) L ( S ) ( i ) L ( V H ) ( i ) ,

where L ( U 1 ) ( i ) C m 1 × m 1 , L ( U 2 ) ( i ) C m 2 × m 2 , L ( V ) ( i ) C n 1 × n 1 and

L ( C ) ( i ) = d i a g ( c i 1 , c i 2 , , c i n 1 ) C m 1 × n 1 ,

L ( S ) ( i ) = d i a g ( s i 1 , s i 2 , , s i n 1 ) C m 2 × n 1 ,

with

(10) L ( C ) ( i ) H L ( C ) ( i ) + L ( S ) ( i ) H L ( S ) ( i ) = I n 1 , i = 1 , , p .

By Definition 1 and using (9) and (10), we have

(11) L ( W ) = L ( U 1 ) O O L ( U 2 ) L ( C ) L ( S ) L ( V H )

with

(12) L ( C ) H L ( C ) + L ( S ) H L ( S ) = L ( I ) .

Implementing the operation “L −1(⋅) on both sides of the equalities (11) and (12), we have

W = U 1 O O U 2 c C S c V H ,

with

C H c C + S H c S = I .

Since L ( C ) and L ( S ) are F-diagonal tensors, by Lemma 5, we have C and S are F-diagonal tensors. □

In the following, we will establish a more general version of the CS decomposition of tensors.

Theorem 2.

Let W 11 C m 1 × m 1 × p , W 22 C m 2 × m 2 × p and

W = W 11 W 12 W 21 W 22 C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , m 1 m 2

be a unitary tensor. Then, there exist unitary tensors U 1 C m 1 × m 1 × p , U 2 C m 2 × m 2 × p , V 1 C m 1 × m 1 × p and V 2 C m 2 × m 2 × p such that

(13)

where C C m 1 × m 1 × p , S C m 1 × m 1 × p are F-diagonal tensors and

(14) C H c C + S H c S = I .

Proof.

Since W is a unitary tensor, we have W H c W = W c W H = I . Then, by Lemma 2, we have

L ( W H ) ( i ) L ( W ) ( i ) = L ( W ) ( i ) L ( W H ) ( i ) = I m 1 + m 2 , i = 1 , , p ,

which implies that L ( W ) ( i ) , i = 1 , , p , are unitary matrices. Since

W = W 11 W 12 W 21 W 22 ,

by Lemma 7, we get

L ( W ) ( i ) = L ( W 11 ) ( i ) L ( W 12 ) ( i ) L ( W 21 ) ( i ) L ( W 22 ) ( i ) , i = 1 , , p .

By using the CS decomposition [53] of L ( W ) ( i ) , we have

(15)

where L ( U 1 ) ( i ) C m 1 × m 1 , L ( U 2 ) ( i ) C m 2 × m 2 , L ( V 1 ) ( i ) C m 1 × m 1 , L ( V 2 ) ( i ) C m 2 × m 2 and

L ( C ) ( i ) = d i a g ( c i 1 , c i 2 , , c i m 1 ) C m 1 × m 1 ,

L ( S ) ( i ) = d i a g ( s i 1 , s i 2 , , s i m 1 ) C m 1 × m 1 ,

with

(16) L ( C ) ( i ) H L ( C ) ( i ) + L ( S ) ( i ) H L ( S ) ( i ) = I m 1 , i = 1 , , p .

By Definition 1, we have

(17)

with

(18) L ( C ) H L ( C ) + L ( S ) H L ( S ) = L ( I ) .

Utilizing the operation “L −1(⋅) on both sides of the equalities (17) and (18), we have

with

C H c C + S H c S = I .

By Lemma 5, we get C and S are F-diagonal tensors due to L ( C ) and L ( S ) being F-diagonal tensors. □

In the following, we will build an efficient algorithm to compute the CS decomposition of a unitary tensor by Theorem 2.

Algorithm 3.1: Compute the CS decomposition of a unitary tensor
Input: (m 1 + m 2) × (m 1 + m 2) × p unitary tensor W
Output: U 1 C m 1 × m 1 × p , U 2 C m 2 × m 2 × p , V 1 C m 1 × m 1 × p , V 2 C m 2 × m 2 × p , C C m 1 × m 1 × p and S C m 1 × m 1 × p
 1. Compute W ̂ = L ( W ) = W × 3 M , where M is defined in (5)
 2. for i = 1, …, p
    U 1 ̂ ( i ) , U 2 ̂ ( i ) , V 1 ̂ ( i ) , V 2 ̂ ( i ) , C ̂ ( i ) , S ̂ ( i ) = c s d ( W ̂ ( i ) )
  end
 3. U 1 = L 1 ( U 1 ̂ ) , U 2 = L 1 ( U 2 ̂ ) , V 1 = L 1 ( V 1 ̂ ) , V 2 = L 1 ( V 2 ̂ ) , C = L 1 ( C ̂ ) , S = L 1 ( S ̂ )

Example 4.

Let W 11 C 4 × 4 × 2 , W 22 C 6 × 6 × 2 and W = W 11 W 12 W 21 W 22 C 10 × 10 × 2 with

W ( 1 ) = 0.3027 0.3743 0.2779 0.2215 0.3083 0.1542 0.4304 0.1624 0.5042 0.2393 0.3447 0.3034 0.3176 0.5575 0.3394 0.1318 0.2568 0.2835 0.1014 0.2972 0.3050 0.0483 0.2434 0.2416 0.2551 0.1318 0.0886 0.5412 0.5124 0.3760 0.2851 0.4978 0.1574 0.5621 0.2790 0.1894 0.1385 0.1395 0.3759 0.1911 0.2608 0.2768 0.4709 0.2590 0.1477 0.3053 0.1028 0.3940 0.3791 0.3768 0.3014 0.6169 0.0246 0.2510 0.0014 0.3029 0.5232 0.1531 0.1373 0.2392 0.2986 0.0714 0.4863 0.0514 0.0712 0.1829 0.4514 0.5976 0.2002 0.1646 0.3477 0.0083 0.0250 0.2481 0.3741 0.5556 0.4230 0.0955 0.3537 0.2347 0.3539 0.2345 0.3958 0.1973 0.3855 0 0.2236 0.1787 0.0542 0.6250 0.3479 0.0180 0.3488 0.1807 0.5757 0.6152 0.0696 0.0645 0.0309 0.0671 ,

W ( 2 ) = 0.0122 0.0544 0.2396 0.1050 0.0260 0.1035 0.3180 0.0026 0.2706 0.4656 0.0358 0.0575 0.0896 0.2251 0.4898 0.0739 0.0414 0.2913 0.0307 0.3987 0.0350 0.0530 0.1058 0.3458 0.2566 0.1341 0.0966 0.2178 0.5945 0.1192 0.0338 0.3478 0.0776 0.3527 0.3688 0.0882 0.0365 0.3182 0.3337 0.0589 0.0128 0.1435 0.1838 0.0963 0.2405 0.0367 0.2637 0.2248 0.0169 0.2571 0.0105 0.1996 0.1704 0.1723 0.1220 0.5052 0.1060 0.0832 0.1625 0.0215 0.0476 0.0874 0.6697 0.1163 0.0322 0.0029 0.3094 0.3066 0.1967 0.1566 0.0052 0.1740 0.0162 0.2149 0.2823 0.2862 0.0337 0.1679 0.4246 0.0891 0.0069 0.0672 0.2727 0.1791 0.2487 0.0297 0.2126 0.2010 0.0011 0.3846 0.0099 0.2727 0.2803 0.1206 0.2403 0.5181 0.0267 0.1242 0.0747 0.1492 .

A simple computation gives

L ( W ) ( 1 ) = 0.3272 0.4832 0.2014 0.0116 0.2564 0.0528 0.2055 0.1675 0.0369 0.6918 0.2730 0.1885 0.1383 0.1074 0.6401 0.2795 0.1740 0.2990 0.0400 0.5002 0.3749 0.0578 0.0317 0.4500 0.2581 0.1364 0.2818 0.1056 0.6766 0.1377 0.3528 0.1977 0.0023 0.1432 0.4586 0.3657 0.2115 0.4968 0.2916 0.3088 0.2863 0.0102 0.1033 0.4516 0.3334 0.2319 0.6302 0.0557 0.3454 0.1373 0.2804 0.2178 0.3654 0.0935 0.2455 0.7076 0.3112 0.0133 0.1877 0.1962 0.2034 0.2462 0.8531 0.1811 0.1355 0.1772 0.1674 0.0156 0.1932 0.1486 0.3581 0.3563 0.0074 0.1817 0.1904 0.0168 0.4903 0.4314 0.4955 0.0565 0.3401 0.3689 0.1495 0.5556 0.1119 0.0595 0.2016 0.5806 0.0520 0.1442 0.3281 0.5634 0.2119 0.4220 0.0951 0.4209 0.0161 0.3129 0.1186 0.2314 ,

L ( W ) ( 2 ) = 0.3027 0.3743 0.2779 0.2215 0.3083 0.1542 0.4304 0.1624 0.5042 0.2393 0.3447 0.3034 0.3176 0.5575 0.3394 0.1318 0.2568 0.2835 0.1014 0.2972 0.3050 0.0483 0.2434 0.2416 0.2551 0.1318 0.0886 0.5412 0.5124 0.3760 0.2851 0.4978 0.1574 0.5621 0.2790 0.1894 0.1385 0.1395 0.3759 0.1911 0.2608 0.2768 0.4709 0.2590 0.1477 0.3053 0.1028 0.3940 0.3791 0.3768 0.3014 0.6169 0.0246 0.2510 0.0014 0.3029 0.5232 0.1531 0.1373 0.2392 0.2986 0.0714 0.4863 0.0514 0.0712 0.1829 0.4514 0.5976 0.2002 0.1646 0.3477 0.0083 0.0250 0.2481 0.3741 0.5556 0.4230 0.0955 0.3537 0.2347 0.3539 0.2345 0.3958 0.1973 0.3855 0.0000 0.2236 0.1787 0.0542 0.6250 0.3479 0.0180 0.3488 0.1807 0.5757 0.6152 0.0696 0.0645 0.0309 0.0671 .

The first step, we will compute the singular value decomposition of L ( W 11 ) ( 1 ) . That is

L ( W 11 ) ( 1 ) = 0.6441 0.4914 0.5771 0.1035 0.3967 0.1466 0.4581 0.7818 0.5040 0.7902 0.1654 0.3070 0.4169 0.3356 0.6556 0.5327 × 0.8492 0 0 0 0 0.5081 0 0 0 0 0.2427 0 0 0 0 0.1236 × 0.7714 0.1123 0.4348 0.4509 0.5859 0.4536 0.2985 0.6015 0.0705 0.2855 0.7122 0.6374 0.2381 0.8367 0.4633 0.1691 H .

Then,

L ( C ) ( 1 ) = d i a g ( c 1 , c 2 , c 3 , c 4 ) = d i a g ( 0.8492 , 0.5081 , 0.2427 , 0.1236 ) ,

L ( S ) ( 1 ) = d i a g 1 c 1 2 , 1 c 2 2 , 1 c 3 2 , 1 c 4 2 = d i a g ( 0.5281 , 0.8613 , 0.9701 , 0.9923 ) .

Similarly, by computing the singular value decomposition of L ( W 11 ) ( 2 ) , one has

L ( C ) ( 2 ) = d i a g ( 0.9445 , 0.8620 , 0.4675 , 0.0986 ) ,

L ( S ) ( 2 ) = d i a g ( 0.3285 , 0.5070 , 0.8840 , 0.9923 ) .

Hence, there exist orthogonal tensors U and V such that

L U H c W c V ( 1 ) = 0.8492 0 0 0 0.5281 0 0 0 0 0 0 0.5081 0 0 0 0.8613 0 0 0 0 0 0 0.2427 0 0 0 0.9701 0 0 0 0 0 0 0.1236 0 0 0 0.9923 0 0 0.5281 0 0 0 0.8492 0 0 0 0 0 0 0.8613 0 0 0 0.5081 0 0 0 0 0 0 0.9701 0 0 0 0.2427 0 0 0 0 0 0 0.9923 0 0 0 0.1236 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 ,

and

L U H c W c V ( 2 ) = 0.9445 0 0 0 0.3285 0 0 0 0 0 0 0.8620 0 0 0 0.5070 0 0 0 0 0 0 0.4675 0 0 0 0.8840 0 0 0 0 0 0 0.0986 0 0 0 0.9951 0 0 0.3285 0 0 0 0.9445 0 0 0 0 0 0 0.5070 0 0 0 0.8620 0 0 0 0 0 0 0.8840 0 0 0 0.4675 0 0 0 0 0 0 0.9951 0 0 0 0.0986 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 .

Then,

U H c W c V ( 1 ) = 0.9445 0 0 0 0.3285 0 0 0 0 0 0 0.8620 0 0 0 0.5070 0 0 0 0 0 0 0.4675 0 0 0 0.8840 0 0 0 0 0 0 0.0986 0 0 0 0.9951 0 0 0.3285 0 0 0 0.9445 0 0 0 0 0 0 0.5070 0 0 0 0.8620 0 0 0 0 0 0 0.8840 0 0 0 0.4675 0 0 0 0 0 0 0.9951 0 0 0 0.0986 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1

and

U H c W c V ( 2 ) = 0.0477 0 0 0 0.0998 0 0 0 0 0 0 0.1769 0 0 0 0.1771 0 0 0 0 0 0 0.1124 0 0 0 0.0430 0 0 0 0 0 0 0.0125 0 0 0 0.0014 0 0 0.0998 0 0 0 0.047 0 0 0 0 0 0 0.1771 0 0 0 0.1769 0 0 0 0 0 0 0.0430 0 0 0 0.1124 0 0 0 0 0 0 0.0014 0 0 0 0.0125 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 .

Finally, one can check that C H c C + S H c S = I . □

4 The CS decomposition of a strong unitary tensor

In this section, we will study the CS decomposition of a strong unitary tensor. Firstly, let us give the definition of a related tensor.

Definition 11.

Let Q C n × n × n 3 . If

Q H c Q = Q c Q H = D ,

where D is an F-diagonal tensor with D iij > 0 , i = 1, …, n, j = 1, …, n 3, then Q is call a strong unitary tensor.

Two more general versions of the strong unitary tensor is given as follows.

Definition 12.

Let Q C n × n × n 3 . If

Q c Q H = D ,

where D is an F-diagonal tensor with D iij > 0 , i = 1, …, n, j = 1, …, n 3, then Q is call a mode-1 strong unitary tensor. If

Q H c Q = D ,

where D is an F-diagonal tensor with D iij > 0 , i = 1, …, n, j = 1, …, n 3, then Q is call a mode-2 strong unitary tensor.

Now, we will do some researches on the CS decomposition of the mode-1 (mode-2) strong unitary tensor.

Theorem 3.

Let W 11 C m 1 × m 1 × p , W 22 C m 2 × m 2 × p and

W = W 11 W 12 W 21 W 22 C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , m 1 m 2

be a mode-1 strong unitary tensor. Then, there exist a mode-1 strong unitary tensors U C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p and unitary tensors V 1 C m 1 × m 1 × p and V 2 C m 2 × m 2 × p such that

where C C m 1 × m 1 × p , S C m 1 × m 1 × p are F-diagonal tensors and

C H c C + S H c S = I .

Proof.

Since W is a mode-1 strong unitary tensor, we have W c W H = D , a F-diagonal tensor with D iij > 0. Define P C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p as an F-diagonal tensor with P iij = D iij 1 , i = 1, …, m 1 + m 2, j = 1, …, p. Then, we have

P c W c W H c P = I .

Let W 0 = P c W . Then, W 0 is a unitary tensor. By Theorem 2, we have

where C C m 1 × m 1 × p , S C m 1 × m 1 × p are F-diagonal tensors and

C H c C + S H c S = I .

Hence,

Let U = P 1 c U 1 O O U 2 . It is easy to see U c U H = ( P 1 ) 2 , which means U is a mode-1 strong unitary tensors. □

Theorem 4.

Let W 11 C m 1 × m 1 × p , W 22 C m 2 × m 2 × p and

W = W 11 W 12 W 21 W 22 C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , m 1 m 2

be a mode-2 strong unitary tensor. Then, there exist unitary tensors U 1 C m 1 × m 1 × p , U 2 C m 2 × m 2 × p , and mode-1 strong unitary tensors V C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p such that

(19)

where C C m 1 × m 1 × p , S C m 1 × m 1 × p are F-diagonal tensors and

(20) C H c C + S H c S = I .

Proof.

The proof is similar as Theorem 3. □

In the following, we will give an algorithm to compute the CS decomposition of a mode-1 strong unitary tensor.

Algorithm 4.1: Compute the CS decomposition of a mode-1 strong unitary tensor
Input: (m 1 + m 2) × (m 1 + m 2) × p mode-1 strong unitary W
Output: U C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , V 1 C m 1 × m 1 × p , V 2 C m 2 × m 2 × p , C C m 1 × m 1 × p and S C m 1 × m 1 × p
 1. Compute D = W c W H
 2. Construct an F-diagonal tensor P C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p with P iij = D iij 1 , i = 1, …, m 1 + m 2, j = 1, …, p
 3. Compute W 0 = P c W
 4. Compute W 0 ̂ = L ( W 0 ) = W 0 × 3 M , where M is defined in (5)
 5. for i = 1, …, p
    U 1 ̂ ( i ) , U 2 ̂ ( i ) , V 1 ̂ ( i ) , V 2 ̂ ( i ) , C ̂ ( i ) , S ̂ ( i ) = c s d ( W 0 ̂ ( i ) )
  end
 6. U 1 = L 1 ( U 1 ̂ ) , U 2 = L 1 ( U 2 ̂ ) , V 1 = L 1 ( V 1 ̂ ) , V 2 = L 1 ( V 2 ̂ ) , C = L 1 ( C ̂ ) , S = L 1 ( S ̂ )
 7. U = P 1 c d i a g ( U 1 , U 2 )

Example 5.

Let W C 3 × 3 × 3 be a mode-1 strong unitary tensor with

W ( 1 ) = 10 3 0 2 3 0 7 3 + 4 3 i 4 3 + 2 i 4 3 4 3 + 2 i 2 + 4 3 i ,

W ( 2 ) = 2 3 + 1 2 i 0 1 2 i 0 i 1 3 2 i 1 1 2 i 1 3 2 i 1 , W ( 3 ) = 1 6 1 2 i 0 1 6 + i 0 2 3 + 1 3 i 1 3 + 1 2 i 1 3 + 1 2 i 1 3 + 1 2 i 2 3 i .

A simple computation gives

L ( W ) ( 1 ) = 0 0 2 0 1 0 4 0 0 , L ( W ) ( 2 ) = 2 + i 0 1 2 i 0 3 0 2 i 0 1 + 2 i , L ( W ) ( 3 ) = 5 0 0 0 3 + 2 i 2 + 3 i 0 2 + 3 i 3 + 2 i .

Then,

L W c W H ( 1 ) = 4 0 0 0 1 0 0 0 16 , L W c W H ( 2 ) = 10 0 0 0 9 0 0 0 10 , L W c W H ( 3 ) = 25 0 0 0 26 0 0 0 26 ,

which means W is a mode-1 strong unitary tensor. By simple computations, one has

L ( P ) ( 1 ) = 1 2 0 0 0 1 0 0 0 1 4 , L ( P ) ( 2 ) = 1 10 0 0 0 1 3 0 0 0 1 10 , L ( P ) ( 3 ) = 1 5 0 0 0 1 26 0 0 0 1 26 .

Then,

L ( W 0 ) ( 1 ) = 0 0 1 0 1 0 1 0 0 , L ( W 0 ) ( 2 ) = 2 10 + 1 10 i 0 1 10 2 10 i 0 1 0 2 10 1 10 i 0 1 10 2 + 2 10 i ,

L ( W 0 ) ( 3 ) = 1 0 0 0 3 26 + 2 26 i 2 26 + 3 26 i 0 2 26 + 3 26 i 3 26 + 2 26 i .

Moreover,

W 0 ( 1 ) = 2 3 0 1 3 0 1 3 + 2 26 + 4 3 26 i 4 3 26 + 2 26 i 1 3 4 3 26 + 2 26 i 2 26 + 4 3 26 i ,

W 0 ( 2 ) = 1 2 + 1 10 + 1 2 10 i 0 1 2 10 1 10 i 0 1 2 3 2 26 1 26 i 1 26 3 2 26 i 1 10 1 2 10 i 1 26 3 2 26 i 1 2 10 3 2 26 + 1 10 1 26 i ,

W 0 ( 3 ) = 1 6 1 10 1 2 10 i 0 1 3 1 2 10 + 1 10 i 0 1 6 + 1 2 26 + 1 3 26 i 1 3 26 + 1 2 26 i 1 3 1 10 + 1 2 10 i 1 3 26 + 1 2 26 i 1 2 26 1 2 10 + 1 3 26 1 10 i .

Next, it is easy to execute Algorithm 3.1 to get the CS decomposition of the mode-1 strong unitary tensor. □

Similar as Algorithm 4.1, we can get an algorithm to compute the CS decomposition of a mode-2 strong unitary tensor.

5 The hyperbolic CS decomposition of tensors

In this section, we firstly give some definitions of the J -orthogonal tensors. Then, we establish the hyperbolic CS decomposition of a J -orthogonal tensor, mode-1 strong J -orthogonal tensor and orthogonal tensor. Then, there exist unitary tensors J -orthogonal tensor, respectively.

Definition 13.

[54] Let Q R n × n . If

Q H J Q = Q J Q H = J ,

where J = d i a g I p , I q , p + q = n , then Q is called a J-orthogonal matrix.

Definition 14.

Let Q R n × n × n 3 . If

Q H c J c Q = Q c J c Q H = J ,

where J R n × n × n 3 is an F-diagonal tensor with

L ( J ) ( i ) = d i a g I k ( i ) , I n k ( i ) , 0 k n , i = 1 , , n 3 ,

then Q is called a J -orthogonal tensor.

The definition of a J -orthogonal tensor can be generalized as follows.

Definition 15.

Let Q R n × n × n 3 . If

Q H c J c Q = Q c J c Q H = J 0 ,

where J R n × n × n 3 and J 0 R n × n × n 3 are F-diagonal tensors with

L ( J ) ( i ) = d i a g ( I k ( i ) , I n k ( i ) ) and L ( J 0 ) ( i ) = d i a g ( α 1 ( i ) , , α k ( i ) , β k + 1 ( i ) , , β n ( i ) ) ,

where α 1 ( i ) , , α k ( i ) > 0 , β k + 1 ( i ) , , β n ( i ) > 0 , 0 ≤ kn, i = 1, …, n 3, then Q is called a strong J -orthogonal tensor.

Moreover, the definition of the strong J -orthogonal tensors can be extended to the mode-1 strong J -orthogonal tensor and mode-2 strong J -orthogonal tensor.

Definition 16.

Let Q R n × n × n 3 . If

Q c J c Q H = J 0 ,

where J R n × n × n 3 and J 0 R n × n × n 3 are F-diagonal tensors with

L ( J ) ( i ) = d i a g ( I k ( i ) , I n k ( i ) ) and L ( J 0 ) ( i ) = d i a g ( α 1 ( i ) , , α k ( i ) , β k + 1 ( i ) , , β n ( i ) ) ,

where α 1 ( i ) , , α k ( i ) > 0 , β k + 1 ( i ) , , β n ( i ) > 0 , 0 ≤ kn, i = 1, …, n 3, then Q is called a mode-1 strong J -orthogonal tensor.

Definition 17.

Let Q R n × n × n 3 . If

Q H c J c Q = J 0 ,

where J R n × n × n 3 and J 0 R n × n × n 3 are F-diagonal tensors with

L ( J ) ( i ) = d i a g ( I k ( i ) , I n k ( i ) ) and L ( J 0 ) ( i ) = d i a g ( α 1 ( i ) , , α k ( i ) , β k + 1 ( i ) , , β n ( i ) ) ,

where α 1 ( i ) , , α k ( i ) > 0 , β k + 1 ( i ) , , β n ( i ) > 0 , 0 ≤ kn, i = 1, …, n 3, then Q is called a mode-2 strong J -orthogonal tensor.

The following result is the hyperbolic CS decomposition of a J -orthogonal tensor.

Theorem 5.

Let W 11 R m 1 × m 1 × p , W 22 R m 2 × m 2 × p and

W = W 11 W 12 W 21 W 22 R ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , m 1 m 2

be a J -orthogonal tensor. Then, there exist unitary tensors U 1 R m 1 × m 1 × p , U 2 R m 2 × m 2 × p , V 1 R m 1 × m 1 × p and V 2 R m 2 × m 2 × p such that

(21)

where C R m 1 × m 1 × p , S R m 1 × m 1 × p are F-diagonal tensors and

(22) C 2 S 2 = I .

Proof.

Since W is a J -orthogonal tensor, then W H c J c W = J . According to Lemma 2, we get

L ( W H ) ( i ) L ( J H ) ( i ) L ( W ) ( i ) = L ( J H ) ( i ) , i = 1 , , p ,

which means L ( W ) ( i ) , i = 1 , , p are J -orthogonal matrices. Because of

W = W 11 W 12 W 21 W 22 ,

by Lemma 7, we get

L ( W ) ( i ) = L ( W 11 ) ( i ) L ( W 12 ) ( i ) L ( W 21 ) ( i ) L ( W 22 ) ( i ) , i = 1 , , p .

Using Theorem 3.2 of [54], we have

where L ( U 1 ) ( i ) R m 1 × m 1 , L ( U 2 ) ( i ) R m 2 × m 2 , L ( V 1 ) ( i ) R m 1 × m 1 , L ( V 2 ) ( i ) R m 2 × m 2 and

L ( C ) ( i ) = d i a g ( c i 1 , c i 2 , , c i m 1 ) R m 1 × m 1 ,

L ( S ) ( i ) = d i a g ( s i 1 , s i 2 , , s i m 1 ) R m 1 × m 1 ,

with

L ( C ) ( i ) 2 L ( S ) ( i ) 2 = I m 1 , i = 1 , , p .

By Definition 1, we have

(23)

with

(24) L ( C ) H L ( C ) L ( S ) H L ( S ) = L ( I ) .

Implementing the operation “L −1(⋅) on both sides of the equalities (23) and (24), we have

with

C 2 S 2 = I .

Similar as Theorem 1, we get that C and S are F-diagonal tensors. □

Let

W = W 11 W 12 W 21 W 22 R ( m 1 + m 2 ) × ( m 1 + m 2 ) × p .

If W 11 is invertible, denote

(25) e x c ( W ) = W 11 1 W 11 1 W 12 W 21 W 11 1 W 22 W 21 W 11 1 W 12 .

It is easy to check e x c ( e x c ( W ) ) = W . Moreover, by [54], Theorem 2.2], one has that if W is a J -orthogonal tensor, then e x c ( W ) is an orthogonal tensor. Conversely, if W is an orthogonal tensor and W 11 is invertible, then e x c ( W ) is a J -orthogonal tensor.

Now, we can set up an algorithm for the hyperbolic CS decomposition of the J -orthogonal tensor based on Theorem 5.

Algorithm 5.1: Compute the hyperbolic CS decomposition of a J -orthogonal tensor
Input: ( m 1 + m 2 ) × ( m 1 + m 2 ) × p J -orthogonal tensor W
Output: U 1 C m 1 × m 1 × p , U 2 C m 2 × m 2 × p , V 1 C m 1 × m 1 × p , V 2 C m 2 × m 2 × p , C C m 1 × m 1 × p and S C m 1 × m 1 × p
 1. Compute T = e x c ( W ) , where exc(⋅) was defined in (25)
 2. Compute T ̂ = L ( T ) = T × 3 M , where M is defined in (5)
 3. for i = 1, …, p
    U 1 ̂ ( i ) , U 2 ̂ ( i ) , V 1 ̂ ( i ) , V 2 ̂ ( i ) , C 0 ̂ ( i ) , S 0 ̂ ( i ) = c s d ( T ̂ ( i ) )
  end
 4. U 1 = L 1 ( U 1 ̂ ) , U 2 = L 1 ( U 2 ̂ ) , V 1 = L 1 ( V 1 ̂ ) , V 2 = L 1 ( V 2 ̂ ) , C 0 = L 1 ( C 0 ̂ ) , S 0 = L 1 ( S 0 ̂ )
 5. C = C 0 1 = C 0 + S 0 c C 0 1 c S 0 ; S = C 0 1 c S 0

Example 6.

Let W R 6 × 6 × 2 be a J -orthogonal tensor with

W ( 1 ) = 1.4951 3.3435 1.3816 3.1084 0.4438 0.8044 4.0321 3.7284 1.6996 4.8518 0.4706 1.5841 1.1742 0.3878 0.3225 1.0894 0.9100 0.6406 2.3137 3.0639 1.0316 3.4840 0.5632 1.4903 0.7685 1.1109 0.0678 1.6195 0.1573 0.4154 3.1892 3.6481 2.1505 4.2724 0.4984 1.1630 ,

W ( 2 ) = 5.2366 10.8788 9.4134 0.9070 7.6567 6.9803 2.9576 7.0364 7.3158 1.8476 5.8428 4.6961 4.8489 8.9312 6.0322 0.9117 5.3293 5.4867 1.4043 3.4248 4.0822 1.0095 2.8848 2.1003 4.6152 9.2967 7.2715 0.4296 5.9769 5.9621 1.6583 4.3817 5.3849 1.7032 3.9386 3.5379 .

A simple computation gives

L ( W ) ( 1 ) = 11.9683 25.1012 17.4453 1.2945 15.7572 14.7650 9.9473 17.8012 12.9320 1.1565 11.2151 10.9762 8.5235 17.4747 11.7419 0.7340 11.5687 10.3328 5.1222 9.9135 7.1328 1.4650 6.3328 5.6910 9.9989 19.7044 14.4752 0.7603 12.1111 11.5088 6.5058 12.4115 8.6194 0.8661 7.3789 8.2388 ,

L ( W ) ( 2 ) = 1.4951 3.3435 1.3816 3.1084 0.4438 0.8044 4.0321 3.7284 1.6996 4.8518 0.4706 1.5841 1.1742 0.3878 0.3225 1.0894 0.9100 0.6406 2.3137 3.0639 1.0316 3.4840 0.5632 1.4903 0.7685 1.1109 0.0678 1.6195 0.1573 0.4154 3.1892 3.6481 2.1505 4.2724 0.4984 1.1630 .

By using the formula

e x c ( W ) = W 11 1 W 11 1 W 12 W 21 W 11 1 W 22 W 21 W 11 1 W 12 ,

where W 11 R 2 × 2 × 2 , W 12 R 2 × 4 × 2 , W 21 R 4 × 2 × 2 and W 22 R 4 × 4 × 2 , one has

L ( T ) ( 1 ) = e x c ( L ( W ) ( 1 ) ) = 0.4859 0.6851 0.3838 0.1634 0.0277 0.3461 0.2715 0.3267 0.5120 0.0263 0.6145 0.4232 0.6031 0.1312 0.4764 0.1985 0.5937 0.0124 0.2028 0.2709 0.0911 0.8892 0.0987 0.2772 0.4917 0.4136 0.5489 0.3545 0.2750 0.2907 0.2088 0.4028 0.2322 0.1300 0.4286 0.7346 ,

L ( T ) ( 2 ) = e x c ( L ( W ) ( 2 ) ) = 0.4715 0.4228 0.0672 0.5859 0.4082 0.2906 0.5099 0.1891 0.3831 0.6677 0.3153 0.1106 0.3559 0.4232 0.5500 0.1425 0.5529 0.2565 0.4714 0.3990 0.2979 0.0827 0.5418 0.4790 0.2041 0.1149 0.4095 0.4275 0.1208 0.7617 0.3565 0.6588 0.5383 0.0318 0.3466 0.1673 .

It is easy to check that L ( T ) ( 1 ) and L ( T ) ( 2 ) are orthogonal. Next, using the method of Algorithm 3.1, we can get that there exist orthogonal tensors U and V such that

L U H c T c V ( 1 ) = 0.9407 0 0.3391 0 0 0 0 0.0290 0 0.9996 0 0 0.3391 0 0.9407 0 0 0 0 0.9996 0 0.0290 0 0 0 0 0 0 1 0 0 0 0 0 0 1 ,

L U H c T c V ( 2 ) = 0.8204 0 0.5717 0 0 0 0 0.1541 0 0.9880 0 0 0.5717 0 0.8204 0 0 0 0 0.9880 0 0.1541 0 0 0 0 0 0 1 0 0 0 0 0 0 1 .

In the last step, we compute

L ( C ) ( 1 ) = ( L ( C 0 ) ( 1 ) ) 1 = 1.0630 0 0 34.4828 , L ( C ) ( 2 ) = ( L ( C 0 ) ( 2 ) ) 1 = 1.2189 0 0 6.4893 ,

L ( S ) ( 1 ) = ( L ( C 0 ) ( 1 ) ) 1 L ( S 0 ) ( 1 ) = 0.3605 0 0 34.4690 , L ( S ) ( 2 ) = ( L ( C 0 ) ( 2 ) ) 1 L ( S 0 ) ( 2 ) = 0.6969 0 0 6.4114 .

Therefore,

L U H c W c V ( 1 ) = 1.0630 0 0.3605 0 0 0 0 34.4828 0 34.4690 0 0 0.3605 0 1.0630 0 0 0 0 34.4690 0 34.4828 0 0 0 0 0 0 1 0 0 0 0 0 0 1 ,

L U H c W c V ( 2 ) = 1.2189 0 0.6969 0 0 0 0 6.4893 0 6.4114 0 0 0.6969 0 1.2189 0 0 0 0 6.4114 0 6.4893 0 0 0 0 0 0 1 0 0 0 0 0 0 1 .

Hence,

U H c W c V ( 1 ) = 1.2189 0 0.6969 0 0 0 0 6.4893 0 6.4114 0 0 0.6969 0 1.2189 0 0 0 0 6.4114 0 6.4893 0 0 0 0 0 0 1 0 0 0 0 0 0 1 ,

U H c W c V ( 2 ) = 0.0780 0 0.1682 0 0 0 0 13.9967 0 14.0288 0 0 0.1682 0 0.0780 0 0 0 0 14.0288 0 172410 0 0 0 0 0 0 0 0 0 0 0 0 0 0 .

In the following, we give the hyperbolic CS decomposition of a mode-1 strong J -orthogonal tensor.

Theorem 6.

Let W 11 R m 1 × m 1 × p , W 22 R m 2 × m 2 × p and

W = W 11 W 12 W 21 W 22 R ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , m 1 m 2

be a mode-1 strong J -orthogonal tensor. Then, there exist a mode-1 strong unitary tensor U R ( m 1 + m 2 ) × ( m 1 + m 2 ) × p and unitary tensors V 1 R m 1 × m 1 × p and V 2 R m 2 × m 2 × p such that

where C R m 1 × m 1 × p , S R m 1 × m 1 × p are F-diagonal tensors and

C 2 S 2 = I .

Proof.

Since W is a mode-1 strong J -orthogonal tensor, by Definition 16, we have

W c J c W H = J 0 .

Define P C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p as an F-diagonal tensor with

P ( i ) = d i a g ( α 1 ( i ) ) 1 , , ( α p ( i ) ) 1 , ( β p + 1 ( i ) ) 1 , , ( β n ( i ) ) 1 , i = 1 , , p .

Then, we have

P c W c J c W H c P = I .

Let W 0 = P c W . Then, W 0 c J c W 0 H = J , which means that W 0 is a J -orthogonal tensor. By Theorem 5, we have

where C C m 1 × m 1 × p , S C m 1 × m 1 × p are F-diagonal tensors and

C 2 S 2 = I .

Therefore,

Let U = P 1 c U 1 O O U 2 . It is easy to check U c U H = ( P 1 ) 2 , which means that U is a mode-1 strong unitary tensor. □

Similar as Theorem 6, we can get the hyperbolic CS decomposition of a mode-2 strong J -orthogonal tensor as follows.

Theorem 7.

Let W 11 R m 1 × m 1 × p , W 22 R m 2 × m 2 × p and

W = W 11 W 12 W 21 W 22 R ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , m 1 m 2

be a mode-2 strong J -orthogonal tensor. Then, there exist unitary tensors U 1 R m 1 × m 1 × p , U 2 R m 2 × m 2 × p , and a mode-1 strong unitary tensor V R ( m 1 + m 2 ) × ( m 1 + m 2 ) × p such that

where C R m 1 × m 1 × p , S R m 1 × m 1 × p are F-diagonal tensors and

(27) C 2 S 2 = I .

In the following, we will give an algorithm to compute the hyperbolic CS decomposition of the mode-1 strong J -orthogonal tensor based on Theorem 6. One can also analogously get an algorithm to compute the hyperbolic CS decomposition of the mode-2 strong J -orthogonal tensor.

Algorithm 5.2: Compute the hyperbolic CS decomposition of the mode-1 strong J -orthogonal tensor
Input: ( m 1 + m 2 ) × ( m 1 + m 2 ) × p J -orthogonal tensor W
Output: U C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , V 1 C m 1 × m 1 × p , V 2 C m 2 × m 2 × p , C C m 1 × m 1 × p and S C m 1 × m 1 × p
 1. Compute D = W c J c W H
 2. Construct an F-diagonal tensor P C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p with P iij = D iij 1 , i = 1, …, m 1 + m 2, j = 1, …, p
 3. Compute W 0 = P c W
 4. Compute T = e x c ( W 0 ) , where exc(⋅) was defined in (25)
 5. Compute T ̂ = L ( T ) = T × 3 M , where M is defined in (5)
 6. for i = 1, …, p
    U 1 ̂ ( i ) , U 2 ̂ ( i ) , V 1 ̂ ( i ) , V 2 ̂ ( i ) , C 0 ̂ ( i ) , S 0 ̂ ( i ) = c s d ( T ̂ ( i ) )
  end
 7. U 1 = L 1 ( U 1 ̂ ) , U 2 = L 1 ( U 2 ̂ ) , V 1 = L 1 ( V 1 ̂ ) , V 2 = L 1 ( V 2 ̂ ) , C 0 = L 1 ( C 0 ̂ ) , S 0 = L 1 ( S 0 ̂ )
 8. C = C 0 + S 0 c C 0 1 c S 0 ; S = C 0 1 c S 0
 9. U = P 1 c d i a g ( U 1 , U 2 )

6 An application to the computation of the C-eigenvalues of a tensor

Firstly, we will introduce the C-eigenvalue of the tensor A .

Definition 18.

Let A C n × n × p . Suppose that X C n × 1 × p and X O . If

(28) A c X = λ X , λ C ,

then λ is called a C-eigenvalue of A and X is the C-eigenvector of A associated to λ.

Notice that A c X = λ X is equivalent to m a t ( A ) m a t ( X ) = λ m a t ( X ) . Hence, all the C-eigenvalues of A are the eigenvalues of the matrix m a t ( A ) and vice versa.

The following theorem involves a full CS decomposition of a unitary tensor.

Theorem 8.

Let W 11 C m 1 × m 1 × p , W 22 C m 2 × m 2 × p and

W = W 11 W 12 W 21 W 22 C ( m 1 + m 2 ) × ( m 1 + m 2 ) × p , m 1 m 2

be a unitary tensor. Then, there exist unitary tensors U 1 C m 1 × m 1 × p , U 2 C m 2 × m 2 × p , V 1 C m 1 × m 1 × p and V 2 C m 2 × m 2 × p such that

(29) W = U 1 O O U 2 c C S S C c V 1 O O V 2 H ,

where C C m 1 × m 1 × p , S C m 1 × m 1 × p are F-diagonal tensors and

(30) C H c C + S H c S = I .

Proof.

The proof is similar as Theorem 2 and follows by using [52], Theorem 2.6.3]. □

The next lemma is helpful in establish the main result.

Lemma 8.

Let W R 2 n × 2 n × p be an orthogonal tensor. Then, there exists an orthogonal tensor Ω 0 R 2 n × 2 n × p such that

Ω 0 H c W c Ω 0 = H a c H b ,

where

L ( H a ) ( i ) = d i a g α 1 ( i ) β 1 ( i ) β 1 ( i ) α 1 ( i ) , α 3 ( i ) β 3 ( i ) β 3 ( i ) α 3 ( i ) , α 2 n 1 ( i ) β 2 n 1 ( i ) β 2 n 1 ( i ) α 2 n 1 ( i ) ,

L ( H b ) ( i ) = d i a g 1 , α 2 ( i ) β 2 ( i ) β 2 ( i ) α 2 ( i ) , α 4 ( i ) β 4 ( i ) β 4 ( i ) α 4 ( i ) , α 2 n 2 ( i ) β 2 n 2 ( i ) β 2 n 2 ( i ) α 2 n 2 ( i ) , 1 ,

β k ( i ) > 0 , ( α k ( i ) ) 2 + ( β k ( i ) ) 2 = 1 , i = 1, 2, …, p, k = 1, 2, …, 2n − 1.

Proof.

Since W R 2 n × 2 n × p is an orthogonal tensor, one has L ( W ) ( i ) R 2 n × 2 n , i = 1, 2, …, p are orthogonal matrices. By [49], Algorithm 1], there exists an orthogonal matrix L ( Ω 0 ) ( i ) such that

( L ( Ω 0 ) ( i ) ) H L ( W ) ( i ) L ( Ω 0 ) ( i ) = L ( H a ) ( i ) L ( H b ) ( i ) .

Thus,

( L ( Ω 0 ) ) H L ( W ) L ( Ω 0 ) = L ( H a ) L ( H b ) ,

which implies Ω 0 H c W c Ω 0 = H a c H b . □

Define α , β R with β > 0 and α 2 + β 2 = 1. Define a , b R with a, b > 0 by

( a , b ) = β 2 ( 1 + α ) , 2 ( 1 + α ) 2 , α 0 , 2 ( 1 α ) 2 , β 2 ( 1 α ) , α < 0 .

Then, we have

(31) a b b a T α β β α a b b a = 1 0 0 1

and

(32) b a a b T α β β α b a a b = 1 0 0 1 .

Using (31), one has there exists an orthogonal tensor Ω a R 2 n × 2 n × p such that

Ω a H c H a c Ω a = D ,

where

L ( Ω a ) ( i ) = d i a g a 1 ( i ) b 1 ( i ) b 1 ( i ) a 1 ( i ) , a 2 ( i ) b 2 ( i ) b 2 ( i ) a 2 ( i ) , , a n ( i ) b n ( i ) b n ( i ) a n ( i )

and

L ( D ) ( i ) = d i a g ( 1 , 1,1 1 , , 1 , 1 ) .

Similar, using (32), one has that there exists an orthogonal tensor Ω b R 2 n × 2 n × p such that

Ω b H c H b c Ω b = D ,

where

L ( Ω b ) ( i ) = d i a g 1 , c 1 ( i ) d 1 ( i ) d 1 ( i ) c 1 ( i ) , c 2 ( i ) d 2 ( i ) d 2 ( i ) c 2 ( i ) , , c n 1 ( i ) d n 1 ( i ) d n 1 ( i ) c n 1 ( i ) , 1

and

L ( D ) ( i ) = d i a g ( 1 , 1,1 1 , , 1 , 1 ) .

Let I n be the n × n identity matrix and e j be the jth column of I n . Denote the tensor P R 2 n × 2 n × p defined by

L ( P ) ( i ) = [ e 1 , e 3 , , e 2 n 1 , e 2 , e 4 , , e 2 n ] , i = 1,2 , , p

and J R 2 n × 2 n × p by

L ( J ) ( i ) = d i a g ( I n , I n ) , i = 1,2 , , p .

Then, one has

( Ω a c P ) H c H a c ( Ω a c P ) = ( Ω b c P ) H c H b c ( Ω b c P ) = J .

Define

Z = ( Ω a c P ) H c ( Ω b c P ) = P H c Ω a c P H c P H c Ω b c P .

By using

Ω 0 H c W c Ω 0 = H a c H b = ( Ω a c P ) c J c ( Ω a c P ) H c ( Ω b c P ) c J c ( Ω b c P ) H ,

one has

( Ω 0 c Ω a c P ) H c W c ( Ω 0 c Ω a c P ) = J c Z c J c Z H .

Since Z is an orthogonal tensor, we can get its full CS decomposition as

U 1 O O U 2 H c Z c V 1 O O V 2 = C S S C ,

where

L ( C ) ( i ) = d i a g ( c 1 ( i ) , c 2 ( i ) , , c n ( i ) ) , L ( S ) ( i ) = d i a g ( s 1 ( i ) , s 2 ( i ) , , s n ( i ) ) , i = 1,2 , , p

with c k ( i ) , s k ( i ) > 0 and ( c k ( i ) ) 2 + ( s k ( i ) ) 2 = 1 , k = 1, 2, …, n. Finally, if we define

Ω = Ω 0 c Ω a c P c U 1 O O U 2 c P ,

one has

Ω H c W c Ω = Ψ ,

where

L ( Ψ ) ( i ) = d i a g ( c 1 ( i ) ) 2 ( s 1 ( i ) ) 2 2 c 1 ( i ) s 1 ( i ) 2 c 1 ( i ) s 1 ( i ) ( c 1 ( i ) ) 2 ( s 1 ( i ) ) 2 , , ( c n ( i ) ) 2 ( s n ( i ) ) 2 2 c n ( i ) s n ( i ) 2 c n ( i ) s n ( i ) ( c n ( i ) ) 2 ( s n ( i ) ) 2 , i = 1,2 , , p .

Let

K 1 ( i ) = L 1 ( c 1 ( i ) ) 2 ( s 1 ( i ) ) 2 2 c 1 ( i ) s 1 ( i ) 2 c 1 ( i ) s 1 ( i ) ( c 1 ( i ) ) 2 ( s 1 ( i ) ) 2 , i = 1,2 , , p ,

K 2 ( i ) = L 1 ( c 2 ( i ) ) 2 ( s 2 ( i ) ) 2 2 c 2 ( i ) s 2 ( i ) 2 c 2 ( i ) s 2 ( i ) ( c 2 ( i ) ) 2 ( s 2 ( i ) ) 2 , i = 1,2 , , p ,

K n ( i ) = L 1 ( c n ( i ) ) 2 ( s n ( i ) ) 2 2 c n ( i ) s n ( i ) 2 c n ( i ) s n ( i ) ( c n ( i ) ) 2 ( s n ( i ) ) 2 , i = 1,2 , , p .

Therefore, one can compute the eigenvalues of m a t ( K 1 ) , m a t ( K 2 ) , …, m a t ( K n ) , which are the C-eigenvalues of W .

Example 7.

Let W R 6 × 6 × 3 be an orthogonal tensor with

W ( 1 ) = 0.1901 0.0946 0.2032 0.0951 0.0452 0.4262 0.0691 0.0195 0.2722 0.3694 0.0229 0.5662 0.3240 0.1153 0.5152 0.0422 0.5658 0.0094 0.2378 0.3855 0.2456 0.4556 0.1895 0.4274 0.3024 0.0190 0.2085 0.2563 0.4049 0.3437 0.1751 0.3235 0.2424 0.0563 0.0705 0.1519 ,

W ( 2 ) = 0.4995 0.0791 0.5083 0.3609 0.2312 0.4383 0.1977 0.3028 0.2636 0.6572 0.0654 0.5792 0.0612 0.3539 0.6170 0.4071 0.4011 0.1120 0.5046 0.2850 0.1746 0.2857 0.4878 0.4129 0.4837 0.1765 0.2670 0.2843 0.5766 0.1052 0.1808 0.1464 0.3470 0.0962 0.3100 0.1314 ,

W ( 3 ) = 0.1665 0.1256 0.1938 0.1036 0.0162 0.2930 0.0103 0.0826 0.1914 0.3241 0.1113 0.6042 0.0117 0.1712 0.4949 0.0969 0.4144 0.1046 0.2441 0.3227 0.0724 0.2303 0.3967 0.3009 0.1637 0.3281 0.3617 0.2799 0.2872 0.0956 0.0429 0.3570 0.1486 0.0118 0.0828 0.2421 .

A simple computation gives

L ( W ) ( 1 ) = 0.4758 0.0015 0.4257 0.6098 0.4496 0.1358 0.4440 0.4208 0.1279 0.2969 0.3761 0.6163 0.4229 0.2502 0.2709 0.5781 0.5923 0.0053 0.2832 0.4610 0.7395 0.3449 0.0073 0.2035 0.3375 0.2843 0.3978 0.2476 0.1740 0.7453 0.4510 0.6834 0.1544 0.1598 0.5247 0.0695 ,

L ( W ) ( 2 ) = 0.4759 0.2993 0.4989 0.5596 0.1698 0.3051 0.2771 0.3659 0.1828 0.6119 0.0230 0.6172 0.3969 0.4098 0.5967 0.4618 0.2497 0.2071 0.5109 0.2222 0.3478 0.0604 0.6950 0.2864 0.3450 0.5236 0.4202 0.3079 0.4589 0.3533 0.3989 0.5340 0.2532 0.0281 0.4633 0.5255 ,

L ( W ) ( 3 ) = 0.5231 0.1412 0.5177 0.1622 0.2926 0.5714 0.1183 0.2396 0.3444 0.7025 0.1537 0.5411 0.2745 0.2981 0.6373 0.3523 0.5525 0.0168 0.4983 0.3477 0.0014 0.5110 0.2806 0.5394 0.6224 0.1706 0.1138 0.2607 0.6944 0.1429 0.0372 0.8269 0.4408 0.1643 0.1566 0.2626 .

Then, the full CS decompositions of L ( W ) ( 1 ) , L ( W ) ( 2 ) and L ( W ) ( 3 ) are

L U H c W c V ( 1 ) = 0.8305 0 0 0.5571 0 0 0 0.5428 0 0 0.8399 0 0 0 0.3590 0 0 0.9333 0.5571 0 0 0.8305 0 0 0 0.8399 0 0 0.5428 0 0 0 0.9333 0 0 0.3590 ,

L U H c W c V ( 2 ) = 0.9961 0 0 0.0881 0 0 0 0.6783 0 0 0.7348 0 0 0 0.1954 0 0 0.9807 0.0881 0 0 0.9961 0 0 0 0.7348 0 0 0.6783 0 0 0 0.9807 0 0 0.1954 ,

L U H c W c V ( 3 ) = 0.9737 0 0 0.026 0 0 0 0.6078 0 0 0.3153 0 0 0 0.0663 0 0 0.4978 0.026 0 0 0.9737 0 0 0 0.3153 0 0 0.6078 0 0 0 0.4978 0 0 0.0663 .

Then,

K 1 ( 1 ) = 0.7581 0.3422 0.3422 0.7581 , K 1 ( 2 ) = 0.0186 0.0624 0.0624 0.0186 , K 1 ( 3 ) = 0.2079 0.2291 0.2291 0.2079 , K 2 ( 1 ) = 0.0431 0.5594 0.5594 0.0431 , K 2 ( 2 ) = 0.0951 0.3068 0.3068 0.0951 , K 2 ( 3 ) = 0.1318 0.1307 0.1307 0.1318 , K 3 ( 1 ) = 0.4097 0.2674 0.2674 0.4097 , K 3 ( 2 ) = 0.3401 0.1586 0.1586 0.3401 , K 3 ( 3 ) = 0.1738 0.0427 0.0427 0.1738 .

By computing the eigenvalues of

m a t ( K 1 ) = K 1 ( 1 ) + K 1 ( 2 ) K 1 ( 2 ) + K 1 ( 3 ) K 1 ( 3 ) K 1 ( 2 ) + K 1 ( 3 ) K 1 ( 1 ) K 1 ( 2 ) + K 1 ( 3 ) K 1 ( 3 ) K 1 ( 2 ) + K 1 ( 3 ) K 1 ( 1 ) + K 1 ( 2 ) ,

m a t ( K 2 ) = K 2 ( 1 ) + K 2 ( 2 ) K 2 ( 2 ) + K 2 ( 3 ) K 2 ( 3 ) K 2 ( 2 ) + K 2 ( 3 ) K 2 ( 1 ) K 2 ( 2 ) + K 2 ( 3 ) K 2 ( 3 ) K 2 ( 2 ) + K 2 ( 3 ) K 2 ( 1 ) + K 2 ( 2 ) ,

m a t ( K 3 ) = K 3 ( 1 ) + K 3 ( 2 ) K 3 ( 2 ) + K 3 ( 3 ) K 3 ( 3 ) K 3 ( 2 ) + K 3 ( 3 ) K 3 ( 1 ) K 3 ( 2 ) + K 3 ( 3 ) K 3 ( 3 ) K 3 ( 2 ) + K 3 ( 3 ) K 3 ( 1 ) + K 3 ( 2 ) .

one can get the C-eigenvalues of the orthogonal tensor W :

λ 1,2 = 0.3795 ± 0.9252 i , λ 3,4 = 0.9846 ± 0.1755 i , λ 5,6 = 0.9474 ± 0.0507 i ,

λ 7,8 = 0.3336 ± 0.9188 i , λ 9,10 = 0.4099 ± 0.176 i , λ 11,12 = 0.129 ± 0.2142 i ,

λ 13,14 = 0.7423 ± 0.67 i , λ 15,16 = 0.9236 ± 0.3833 i , λ 17,18 = 0.2434 ± 0.0661 i .


Corresponding author: Hongwei Jin, School of Mathematical Sciences, Center for Applied Mathematics of Guangxi, Guangxi Minzu University, Nanning, 530006, China, E-mail:

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments, which have significantly improved the paper.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and consented to its submission to the journal, reviewed all the results, and approved the final version of the manuscript. All authors contributed equally to the manuscript.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflicts of interest.

  6. Research funding: This work was supported by the Special Fund for Science and Technological Bases and Talents of Guangxi (No. GUIKE AA24010005).

  7. Data availability: Some or all data, models, or code generated or used during the study are available from the corresponding author by request.

References

[1] P. Kroonenberg, Three-Mode Principal Component Analysis: Theory and Applications, DSWO Press, Leiden, 1983.Search in Google Scholar

[2] M. Ng, R. Chan, and W. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput. 21 (1999), no. 3, 851–866, https://doi.org/10.1137/S1064827598341384.Search in Google Scholar

[3] N. Hao, M. Kilmer, K. Braman, and R. Hoover, Facial recognition using tensor-tensor decompositions, SIAM J. Imaging Sci. 6 (2013), no. 1, 437–463, https://doi.org/10.1137/110842570.Search in Google Scholar

[4] P. Comon, Tensor decompositions: state of the art and applications, Proceedings of the Institute of Mathematics and its Applications Conference Series, Institute of Mathematics and its Applications, Oxford, 2002, pp. 1–28.10.1093/oso/9780198507345.003.0001Search in Google Scholar

[5] L. De Lathauwer and B. De Moor, From matrix to tensor: Multilinear algebra and signal processing, in: J. McWhirter and I. K. Proudler (eds), Mathematics in Signal Processing IV, Clarendon Press, Oxford, UK, 1998, pp. 1–15.Search in Google Scholar

[6] J. Nagy and M. Kilmer, Kronecker product approximation for preconditioning in three-dimensional imaging applications, IEEE Trans. Image Process. 15 (2006), no. 3, 604–613, https://doi.org/10.1109/TIP.2005.863112.Search in Google Scholar PubMed

[7] N. Sidiropoulos, R. Bro, and G. Giannakis, Parallel factor analysis in sensor array processing, IEEE Trans. Signal Process. 48 (2000), no. 8, 2377–2388, https://doi.org/10.1109/78.852018.Search in Google Scholar

[8] W. Hoge and C. Westin, Identification of translational displacements between N-dimensional data sets using the high order SVD and phase correlation, IEEE Trans. Image Process. 14 (2005), no. 7, 884–889, https://doi.org/10.1109/TIP.2005.849327.Search in Google Scholar

[9] M. Rezghi and L. Eldén, Diagonalization of tensors with circulant structure, Linear Algebra Appl. 435 (2011), no. 3, 422–447, https://doi.org/10.1016/j.laa.2010.03.032.Search in Google Scholar

[10] M. Che, L. Qi, and Y. Wei, The generalized order tensor complementarity problems, Numer. Math. Theor. Meth. Appl. 13 (2020), no. 1, 131–149, https://doi.org/10.4208/nmtma.OA-2018-0117.Search in Google Scholar

[11] A. Cichocki, R. Zdunek, A. H. Phan, and S. I. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation, John Wiley and Sons, Hoboken, 2009.10.1002/9780470747278Search in Google Scholar

[12] L. De Lathauwer, Signal Processing Based on Multilinear Algebra, PhD Thesis, Katholike Universiteit, Leuven, 1997.Search in Google Scholar

[13] Y. Miao, L. Qi, and Y. Wei, M-eigenvalues of the Riemann curvature tensor of conformally flat manifolds, 2018, arXiv:1808.01882, https://doi.org/10.48550/arXiv.1808.01882.

[14] L. Omberg, G. H. Golub, and O. Alter, A tensor higher-order singular value decomposition for integrative analysis of DNA microarray data from different studies, Proc. Natl. Acad. Sci. USA 104 (2007), no. 47, 18371–18376, https://doi.org/10.1073/pnas.0709146104.

[15] L. Xiong and J. Liu, A new C-eigenvalue localisation set for piezoelectric-type tensors, East Asian J. Appl. Math. 10 (2020), no. 1, 123–134, https://doi.org/10.4208/eajam.060119.040619.

[16] X. Wang, M. Che, and Y. Wei, Best rank-one approximation of fourth-order partially symmetric tensors by neural network, Numer. Math. Theor. Meth. Appl. 11 (2018), no. 4, 673–700, https://doi.org/10.4208/nmtma.2018.s01.

[17] J. D. Carroll and J. J. Chang, Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart-Young” decomposition, Psychometrika 35 (1970), no. 3, 283–319, https://doi.org/10.1007/BF02310791.

[18] L. De Lathauwer, B. De Moor, and J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl. 21 (2000), no. 4, 1253–1278, https://doi.org/10.1137/S0895479896305727.

[19] Z. H. He, C. Chen, and X. X. Wang, A simultaneous decomposition for three quaternion tensors with applications in color video signal processing, Anal. Appl. 19 (2021), no. 3, 423–444, https://doi.org/10.1142/S0219530520400084.

[20] T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Rev. 51 (2009), no. 3, 455–500, https://doi.org/10.1137/07070111X.

[21] L. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31 (1966), no. 3, 279–311, https://doi.org/10.1007/BF02289464.

[22] L. Sun, B. Zheng, C. Bu, and Y. Wei, Moore-Penrose inverse of tensors via Einstein product, Linear Multilinear Algebra 64 (2016), no. 4, 686–698, https://doi.org/10.1080/03081087.2015.1083933.

[23] L. Sun, B. Zheng, Y. Wei, and C. Bu, Generalized inverses of tensors via a general product of tensors, Front. Math. China 13 (2018), no. 4, 893–911, https://doi.org/10.1007/s11464-018-0695-y.

[24] Y. Miao, L. Qi, and Y. Wei, Generalized tensor function via the tensor singular value decomposition based on the T-product, Linear Algebra Appl. 590 (2020), 258–303, https://doi.org/10.1016/j.laa.2019.12.035.

[25] Y. Miao, L. Qi, and Y. Wei, T-Jordan canonical form and T-Drazin inverse based on the T-product, Commun. Appl. Math. Comput. 3 (2021), 201–220, https://doi.org/10.1007/s42967-019-00055-4.

[26] Y. Miao, T. Wang, and Y. Wei, Stochastic conditioning of tensor functions based on the tensor-tensor product, Pac. J. Optim. 19 (2023), no. 2, 205–235, https://doi.org/10.1016/j.pjopt.2022.11.002.

[27] K. Panigrahy, R. Behera, and D. Mishra, Reverse order law for the Moore-Penrose inverses of tensors, Linear Multilinear Algebra 68 (2020), no. 2, 246–264, https://doi.org/10.1080/03081087.2018.1502252.

[28] R. Behera, J. Sahoo, R. Mohapatra, and M. Nashed, Computation of generalized inverses of tensors via T-product, Numer. Linear Algebra Appl. 29 (2021), no. 2, e2416, https://doi.org/10.1002/nla.2416.

[29] Y. Liu and H. Ma, Dual core generalized inverse of third-order dual tensor based on the T-product, Comput. Appl. Math. 41 (2022), no. 8, 391, https://doi.org/10.1007/s40314-022-02114-8.

[30] Z. Cong and H. Ma, Characterizations and perturbations of the core-EP inverse of tensors based on the T-product, Numer. Funct. Anal. Optim. 43 (2022), no. 10, 1150–1200, https://doi.org/10.1080/01630563.2022.2087676.

[31] J. Sahoo, R. Behera, P. Stanimirović, V. Katsikis, and H. Ma, Core and core-EP inverses of tensors, Comput. Appl. Math. 39 (2020), no. 9, 9, https://doi.org/10.1007/s40314-019-0983-5.

[32] H. Jin, M. Bai, J. Benítez, and X. Liu, The generalized inverses of tensors and an application to linear models, Comput. Math. Appl. 74 (2017), no. 3, 385–397, https://doi.org/10.1016/j.camwa.2017.04.017.

[33] R. Behera and D. Mishra, Further results on generalized inverses of tensors via the Einstein product, Linear Multilinear Algebra 65 (2017), no. 8, 1662–1682, https://doi.org/10.1080/03081087.2016.1253662.

[34] R. Behera, A. Nandi, and J. Sahoo, Further results on the Drazin inverse of even order tensors, Numer. Linear Algebra Appl. 27 (2020), no. 5, e2317, https://doi.org/10.1002/nla.2317.

[35] J. Ji and Y. Wei, The Drazin inverse of an even-order tensor and its application to singular tensor equations, Comput. Math. Appl. 75 (2018), no. 9, 3402–3413, https://doi.org/10.1016/j.camwa.2018.02.006.

[36] Z. Cao and P. Xie, Perturbation analysis for t-product-based tensor inverse, Moore-Penrose inverse and tensor system, Commun. Appl. Math. Comput. 4 (2022), no. 4, 1441–1456, https://doi.org/10.1007/s42967-022-00186-1.

[37] Z. Cao and P. Xie, On some tensor inequalities based on the T-product, Linear Multilinear Algebra 71 (2023), no. 3, 377–390, https://doi.org/10.1080/03081087.2022.2032567.

[38] M. Che and Y. Wei, An efficient algorithm for computing the approximate t-URV and its applications, J. Sci. Comput. 92 (2022), no. 3, 93, https://doi.org/10.1007/s10915-022-01956-y.

[39] J. Chen, W. Ma, Y. Miao, and Y. Wei, Perturbations of Tensor-Schur decomposition and its applications to multilinear control systems and facial recognitions, Neurocomputing 547 (2023), 126359, https://doi.org/10.1016/j.neucom.2023.126359.

[40] Y. Liu and H. Ma, Weighted generalized tensor functions based on the tensor-product and their applications, Filomat 36 (2022), no. 18, 6403–6426, https://doi.org/10.2298/fil2218403l.

[41] C. Mo, X. Wang, and Y. Wei, Time-varying generalized tensor eigenanalysis via Zhang neural networks, Neurocomputing 407 (2020), 465–479, https://doi.org/10.1016/j.neucom.2020.04.115.

[42] C. Mo, W. Ding, and Y. Wei, Perturbation analysis on T-eigenvalues of third-order tensors, J. Optim. Theory Appl. 202 (2024), no. 2, 668–702, https://doi.org/10.1007/s10957-024-02444-z.

[43] P. Wei, X. Wang, and Y. Wei, Neural network models for time-varying tensor complementarity problems, Neurocomputing 523 (2023), 18–32, https://doi.org/10.1016/j.neucom.2022.12.008.

[44] X. Shao, Y. Wei, and J. Yuan, Nonsymmetric algebraic Riccati equations under the tensor product, Numer. Funct. Anal. Optim. 44 (2023), no. 6, 545–563, https://doi.org/10.1080/01630563.2023.2192593.

[45] E. Kernfeld, M. Kilmer, and S. Aeron, Tensor-tensor products with invertible linear transforms, Linear Algebra Appl. 485 (2015), 545–570, https://doi.org/10.1016/j.laa.2015.07.021.

[46] A. Bentbib, A. El Hachimi, K. Jbilou, and A. Ratnani, Fast multidimensional completion and principal component analysis methods via the cosine product, Calcolo 59 (2022), no. 3, 26, https://doi.org/10.1007/s10092-022-00469-2.

[47] W. Xu, X. Zhao, and M. Ng, A fast algorithm for cosine transform based tensor singular value decomposition, 2019, arXiv:1902.03070, https://doi.org/10.48550/arXiv.1902.03070.

[48] J. Benítez and V. Rakočević, Applications of CS decomposition in linear combinations of two orthogonal projectors, Appl. Math. Comput. 203 (2008), no. 2, 761–769, https://doi.org/10.1016/j.amc.2008.05.053.

[49] D. Calvetti, L. Reichel, and H. Xu, A CS decomposition for orthogonal matrices with application to eigenvalue computation, Linear Algebra Appl. 476 (2015), 197–232, https://doi.org/10.1016/j.laa.2015.03.007.

[50] H. Jin, S. Xu, H. Jiang, and X. Liu, The generalized inverses of tensors via the C-product, 2022, arXiv:2211.02841, https://doi.org/10.48550/arXiv.2211.02841.

[51] H. Jin, M. He, and Y. Wang, The expressions of the generalized inverses of the block tensor via the C-product, Filomat 37 (2023), no. 26, 909–932, https://doi.org/10.2298/FIL2326909J.

[52] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th ed., Johns Hopkins Univ. Press, Baltimore, 2013.

[53] S. Xu, Theory and Methods of Computation of Matrices, Peking University Press, Beijing, 1995.

[54] N. J. Higham, J-orthogonal matrices: properties and generation, SIAM Rev. 45 (2003), no. 3, 504–519, https://doi.org/10.1137/S0036144502414930.

Received: 2025-03-25
Accepted: 2025-08-23
Published Online: 2025-11-18

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
