
Estimation of High-Dimensional Matrix Factor Models with Change Points

  • Lijie Peng, Guchu Zou and Jianhong Wu
Published/Copyright: May 23, 2025

Abstract

In this paper, we focus on high-dimensional matrix factor models with change points. We first consider the matrix factor model with a single change point and propose a least squares estimation method to identify the change point. When the number of change points is unknown, a two-step estimation procedure is developed to identify all the change points. Under some mild conditions, the estimator of the number of change points is shown to be consistent, and the distance between each estimated change point and its true counterpart is shown to be stochastically bounded. A Monte Carlo simulation study and a real data analysis are carried out for illustration.

JEL Classification: C23; C13

Corresponding author: Jianhong Wu, College of Mathematics and Science, Shanghai Normal University, Shanghai 200234, China; and Lab for Educational Big Data and Policymaking, Shanghai 200234, China, E-mail: 

Lijie Peng and Guchu Zou contributed equally to this work.


Funding source: National Natural Science Foundation of China

Award Identifier / Grant number: 72173086

Acknowledgments

We are deeply grateful to Professor Jeremy Piger and two anonymous referees for valuable comments that led to substantial improvement of this paper.

  1. Conflict of interest: The authors state no conflict of interest.

  2. Research funding: This research is supported in part by the National Natural Science Foundation of China (Grant No. 72173086).

Appendix: Technical Details

We first state the notation and lemmas used in the proofs of the theorems. The detailed proofs of most of the lemmas are omitted here to save space and can be obtained from the authors upon request.

Let $\phi_t=\operatorname{vec}(G_t)$, $\tilde J=\tilde J_2^{-1}\otimes\tilde J_1^{-1}$ and $\hat J=\hat J_2^{-1}\otimes\hat J_1^{-1}$. As described in the main text, $\tilde J_1=\frac{1}{Tp_1p_2^2}\sum_{t=1}^{T}G_t\Pi'\hat\Pi\hat\Pi'\Pi G_t'\,\Gamma'\tilde\Gamma\,\tilde\Lambda_1^{-1}$ and $\tilde J_2=\frac{1}{Tp_1^2p_2}\sum_{t=1}^{T}G_t'\Gamma'\hat\Gamma\hat\Gamma'\Gamma G_t\,\Pi'\tilde\Pi\,\tilde\Lambda_2^{-1}$ are the $r_1\times r_1$ and $r_2\times r_2$ rotation matrices, where $\tilde\Lambda_1$ and $\tilde\Lambda_2$ are the diagonal matrices composed of the leading $r_1$ and $r_2$ eigenvalues of $\tilde M_1$ and $\tilde M_2$, respectively, with $\tilde M_1=\frac{1}{Tp_1p_2^2}\sum_{t=1}^{T}X_t\hat\Pi\hat\Pi'X_t'$, $\tilde M_2=\frac{1}{Tp_1^2p_2}\sum_{t=1}^{T}X_t'\hat\Gamma\hat\Gamma'X_t$, and $\hat\Gamma$ and $\hat\Pi$ the initial projection matrices in Yu et al. (2022). Following Proposition 1 of Bai (2003) and the proof of Lemma A.2 in Yu et al. (2022), we have $\tilde J_1\xrightarrow{p}\hat J_1$ and $\tilde J_2\xrightarrow{p}\hat J_2$ as $\min\{T,p_1,p_2\}\to\infty$, where $\hat J_1=\left(\frac{1}{Tp_2^2}\sum_{t=1}^{T}G_t\Pi'\hat\Pi\hat\Pi'\Pi G_t'\right)^{1/2}\Delta_1\Psi_1^{-1/2}$, $\hat J_2=\left(\frac{1}{Tp_1^2}\sum_{t=1}^{T}G_t'\Gamma'\hat\Gamma\hat\Gamma'\Gamma G_t\right)^{1/2}\Delta_2\Psi_2^{-1/2}$, and $\Delta_i$ and $\Psi_i$ denote the matrices of eigenvectors and eigenvalues of $\Sigma_{i,G}$, $i=1,2$.
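The Kronecker-product form of the rotation $\tilde J$ rests on the standard identity $\operatorname{vec}(ABC')=(C\otimes A)\operatorname{vec}(B)$: a rotation of $G_t$ on the left and right acts on $\phi_t=\operatorname{vec}(G_t)$ through the corresponding Kronecker product. A minimal numerical sketch of this identity (NumPy; the dimensions are illustrative only):

```python
import numpy as np

# Check vec(J1 @ G @ J2') = (J2 kron J1) @ vec(G), the identity behind
# letting the rotation pair (J1, J2) act on phi_t = vec(G_t).
rng = np.random.default_rng(0)
r1, r2 = 3, 2
G = rng.standard_normal((r1, r2))
J1 = rng.standard_normal((r1, r1))
J2 = rng.standard_normal((r2, r2))

vec = lambda M: M.reshape(-1, order="F")  # column-stacking vec operator

lhs = vec(J1 @ G @ J2.T)
rhs = np.kron(J2, J1) @ vec(G)
assert np.allclose(lhs, rhs)
```

The same identity explains why estimation error bounds stated for $\tilde G_t$ transfer directly to the vectorized factors $\tilde\phi_t$.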

Lemma 1

Under Assumptions A1–A5, we have
$$\frac{1}{T}\sum_{t=1}^{T}\left\|\tilde\phi_t-\left(\tilde J_2^{-1}\otimes\tilde J_1^{-1}\right)\phi_t\right\|^2=\frac{1}{T}\sum_{t=1}^{T}\left\|\tilde\phi_t-\tilde J\phi_t\right\|^2=O_p\!\left(\frac{1}{T\,\delta_{p_1p_2}^2}+\frac{1}{p_1p_2}\right).$$

Proof.

This lemma is equivalent to Theorem 3.5 in Yu et al. (2022) for the corresponding model. Therefore, we only need to verify Assumptions A–E as outlined in Yu et al. (2022). Assumption A1 implies Assumption A, and Assumption A4 implies Assumption D. Assumptions B, C, and E can be verified using a proof technique similar to that of Lemma 1 in Baltagi, Kao, and Wang (2017).

Lemma 2

Under Assumptions A1–A5, we have $\|\tilde J-\hat J\|=o_p(1)$.

Proof.

Similar to Proposition 1 of Bai (2003), we have

$$\frac{\Gamma'\tilde\Gamma}{p_1}\xrightarrow{p}\left(\frac{1}{Tp_2^2}\sum_{t=1}^{T}G_t\Pi'\hat\Pi\hat\Pi'\Pi G_t'\right)^{1/2}\Delta_1\Psi_1^{-1/2},\qquad \frac{\Pi'\tilde\Pi}{p_2}\xrightarrow{p}\left(\frac{1}{Tp_1^2}\sum_{t=1}^{T}G_t'\Gamma'\hat\Gamma\hat\Gamma'\Gamma G_t\right)^{1/2}\Delta_2\Psi_2^{-1/2}.$$

Then, we can adapt the argument of Lemma 2 in Baltagi, Kao, and Wang (2017) to prove $\|\tilde J-\hat J\|=o_p(1)$.

Lemma 3

Under Assumptions A1–A6, we have

  1. the Hájek–Rényi inequality can be applied to the processes $\{y_t,\,t=1,\dots,\kappa_0\}$, $\{y_t,\,t=\kappa_0,\dots,1\}$, $\{y_t,\,t=\kappa_0+1,\dots,T\}$ and $\{y_t,\,t=T,\dots,\kappa_0+1\}$,

  2. $\sup_{\kappa\le\kappa_0}\frac{1}{\kappa}\left\|\sum_{t=1}^{\kappa}\phi_t\right\|^2=O_p(1)$, $\sup_{\kappa\ge\kappa_0}\frac{1}{T-\kappa}\left\|\sum_{t=\kappa+1}^{T}\phi_t\right\|^2=O_p(1)$, $\sup_{\kappa<\kappa_0}\frac{1}{\kappa_0-\kappa}\left\|\sum_{t=\kappa+1}^{\kappa_0}\phi_t\right\|^2=O_p(1)$, $\sup_{\kappa>\kappa_0}\frac{1}{\kappa-\kappa_0}\left\|\sum_{t=\kappa_0+1}^{\kappa}\phi_t\right\|^2=O_p(1)$.

Proof.

The conclusions above can be obtained by proof methods similar to those of Lemma 3 in Baltagi, Kao, and Wang (2017).
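The Hájek–Rényi-type bounds of Lemma 3 say that scaled squared partial sums of a mean-zero process stay bounded uniformly in the split point. A simulation sketch (a scalar stand-in for $\phi_t$; illustrative only, not part of the proof):

```python
import numpy as np

# Illustration of sup_{k >= m} (1/k) * (sum_{t<=k} phi_t)^2 = O_p(1):
# the supremum does not grow systematically as T increases.
rng = np.random.default_rng(1)

def sup_scaled_partial_sum(T, m=10):
    phi = rng.standard_normal(T)      # mean-zero stand-in for phi_t
    S = np.cumsum(phi)                # partial sums S_k
    k = np.arange(1, T + 1)
    return np.max(S[m - 1:] ** 2 / k[m - 1:])

sups = [sup_scaled_partial_sum(T) for T in (10**3, 10**4, 10**5)]
```

Across the three sample sizes the supremum stays of the same order, which is what the $O_p(1)$ statements in part (2) assert.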

Lemma 4

Suppose that Assumptions A1–A7 hold. Then we have

  1. $\sup_{\kappa\in D,\,\kappa\le\kappa_0}\left\|\frac{1}{\kappa}\sum_{t=1}^{\kappa}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|=O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$,

  2. $\sup_{\kappa\in D^c,\,\kappa\le\kappa_0}\left\|\frac{1}{\kappa}\sum_{t=1}^{\kappa}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|=O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$,

  3. $\sup_{\kappa\in D^c,\,\kappa<\kappa_0}\left\|\frac{1}{\kappa_0-\kappa}\sum_{t=\kappa+1}^{\kappa_0}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|=O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$,

  4. $\sup_{\kappa\in D,\,\kappa<\kappa_0}\left\|\frac{1}{\kappa_0-\kappa}\sum_{t=\kappa+1}^{\kappa_0}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|=O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$,

  5. $\sup_{\kappa\in D,\,\kappa<\kappa_0}\left\|\frac{1}{\kappa_0-\kappa}\sum_{t=\kappa+1}^{\kappa_0}(\tilde\phi_t-\tilde J\phi_t)(\tilde\phi_t-\tilde J\phi_t)'\right\|=o_p(1)$,

  6. $\sup_{\kappa\in D^c,\,\kappa\le\kappa_0}\left\|\frac{1}{\kappa}\sum_{t=1}^{\kappa}(\tilde\phi_t-\tilde J\phi_t)(\tilde\phi_t-\tilde J\phi_t)'\right\|=o_p(1)$,

  7. $\sup_{\kappa\le\kappa_0}\left\|\frac{1}{T-\kappa}\sum_{t=\kappa+1}^{T}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|=O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$.

Proof.

The proofs of parts (1), (3), and (4) can be derived by referring to the proof of part (2), while the proof of part (6) follows from that of part (5). For part (2), by Lemma 1 and Lemma 3, we have $\sup_{\kappa\in D^c,\,\kappa\le\kappa_0}\left\|\frac{1}{\kappa}\sum_{t=1}^{\kappa}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|=O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$. For part (5), by the proof of Theorem 3.5 in Yu et al. (2022), Lemma 3, and the proof of Lemma E.1 in Yu et al. (2022), along with part (1) of Assumption A7 and Theorem 3.1 in Yu et al. (2022), part (5) can be proved using methods similar to those for Lemma 5 in Baltagi, Kao, and Wang (2017). Finally, the proof of part (7) is provided below:

$$\sup_{\kappa\le\kappa_0}\left\|\frac{1}{T-\kappa}\sum_{t=\kappa+1}^{T}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|\le\sup_{\kappa\le\kappa_0}\left\|\frac{1}{\kappa_0-\kappa}\sum_{t=\kappa+1}^{\kappa_0}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|+\left\|\frac{1}{T-\kappa_0}\sum_{t=\kappa_0+1}^{T}(\tilde\phi_t-\tilde J\phi_t)\phi_t'\tilde J'\right\|.$$

Lemma 4(3) and Lemma 4(4) imply that the first term is $O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$. Similar to the proof of Lemma 4(2), we can show that the second term is also $O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{p_1p_2}}+\frac{1}{\sqrt{p_1p_2}}\right)$.

Lemma 5

Suppose that Assumptions A1–A7 hold. Let $z_t=\operatorname{vec}\!\left(\tilde\phi_t\tilde\phi_t'-\hat J\phi_t\phi_t'\hat J'\right)$. Then we have

  1. $\sup_{\kappa\in D^c,\,\kappa\le\kappa_0}\frac{1}{\kappa_0}\cdot\frac{1}{\kappa}\left\|\sum_{t=1}^{\kappa}z_t\right\|^2=o_p(1)$,

  2. $\sup_{\kappa\in D,\,\kappa\le\kappa_0}\frac{1}{\kappa_0}\cdot\frac{1}{\kappa}\left\|\sum_{t=1}^{\kappa}z_t\right\|^2=o_p(1)$,

  3. $\sup_{\kappa\in D^c,\,\kappa\le\kappa_0}\frac{1}{\kappa_0}\left\|\sum_{t=1}^{\kappa}z_t\right\|=o_p(1)$,

  4. $\sup_{\kappa\in D,\,\kappa\le\kappa_0}\frac{1}{\kappa_0}\left\|\sum_{t=1}^{\kappa}z_t\right\|=o_p(1)$,

  5. $\sup_{\kappa\in D^c,\,\kappa<\kappa_0}\frac{1}{\kappa_0-\kappa}\left\|\sum_{t=\kappa+1}^{\kappa_0}z_t\right\|=o_p(1)$,

  6. $\sup_{\kappa\in D,\,\kappa<\kappa_0}\frac{1}{\kappa_0-\kappa}\left\|\sum_{t=\kappa+1}^{\kappa_0}z_t\right\|=o_p(1)$,

  7. $\sup_{\kappa\in D^c,\,\kappa<\kappa_0}\frac{1}{\kappa_0}\cdot\frac{1}{\kappa_0-\kappa}\left\|\sum_{t=\kappa+1}^{\kappa_0}z_t\right\|^2=o_p(1)$,

  8. $\sup_{\kappa\in D,\,\kappa<\kappa_0}\frac{1}{\kappa_0}\cdot\frac{1}{\kappa_0-\kappa}\left\|\sum_{t=\kappa+1}^{\kappa_0}z_t\right\|^2=o_p(1)$,

  9. $\sup_{\kappa\le\kappa_0}\frac{1}{T-\kappa}\left\|\sum_{t=\kappa+1}^{T}z_t\right\|=o_p(1)$.

Proof.

The proofs of parts (2), (4), and (5)–(8) can be obtained by referring to the proof of part (1). First, for part (1), based on Lemma 4(6), Lemma 4(2), Lemma 2, and Lemma 3, and following an approach similar to the proof of Lemma 7 in Baltagi, Kao, and Wang (2017), we have $\sup_{\kappa\in D^c,\,\kappa\le\kappa_0}\frac{1}{\kappa_0}\cdot\frac{1}{\kappa}\left\|\sum_{t=1}^{\kappa}z_t\right\|^2=o_p(1)$. For part (3), using Lemma 1, Lemma 4(2), Lemma 2, and Lemma 3, and applying the same proof technique as in Lemma 7 of Baltagi, Kao, and Wang (2017), we have $\sup_{\kappa\in D^c,\,\kappa\le\kappa_0}\frac{1}{\kappa_0}\left\|\sum_{t=1}^{\kappa}z_t\right\|=o_p(1)$. For part (9), by Lemma 1, Lemma 2, Lemma 4(7) and $\sup_{\kappa\le\kappa_0}\left\|\frac{1}{T-\kappa}\sum_{t=\kappa+1}^{T}\phi_t\phi_t'\right\|=O_p(1)$, we can conclude that $\sup_{\kappa\le\kappa_0}\frac{1}{T-\kappa}\left\|\sum_{t=\kappa+1}^{T}z_t\right\|=o_p(1)$.

Lemma 6

Under Assumptions A1 and A4, Assumptions B1–B6, as min{T, p 1, p 2} → , we have lim P ( k ̃ 1 n = k 1 + q 1 n ) = 1 and lim P ( k ̃ 2 n = k 2 + q 2 n ) = 1 , q 1n  > 0 and q 2n  > 0 for S n S 2 a , q 1n  ≥ 0 and q 2n  ≥ 0 for S n S 2 b S 2 c , with n = 0, 1, …, N.

Proof.

We consider the following four situations: (1) $S_j\in S_1$; (2) $S_j\in S_{2a}$; (3) $S_j\in S_{2b}$; (4) $S_j\in S_{2c}$. In each situation, the numbers of row and column factors are selected using the iterative method (IterER) in Yu et al. (2022). In the following, we show the behavior of $\tilde k_{1j}$ in each case; the behavior of $\tilde k_{2j}$ can be shown in the same manner and is thus omitted.

For $S_j\in S_1$, there exists $l$ such that $S_j\subseteq(\kappa_l,\kappa_{l+1}]$ and $\kappa_l\notin S_j$; it is easy to confirm Assumptions A–E in Yu et al. (2022) and establish the consistency of $\tilde k_{1j}$ and $\tilde k_{2j}$.

For $S_j\in S_{2a}$, there exists $l\ge1$ such that $S_j\subseteq(\kappa_{l-1},\kappa_{l+1}]$ and $\kappa_l\in S_j$, and the model can be rewritten as

$$X_t=\begin{cases}R_{0l}F_tC_{0l}'+E_t, & \text{for } t=\kappa_{l-1}+1,\dots,\kappa_l,\\[2pt] R_{0,l+1}F_tC_{0,l+1}'+E_t, & \text{for } t=\kappa_l+1,\dots,\kappa_{l+1}.\end{cases}$$

Let $\Gamma_l$ and $\Pi_l$ be the $p_1\times(k_1+q_{1l})$ and $p_2\times(k_2+q_{2l})$ matrices consisting of a maximal linearly independent subsystem of the column vectors of $[R_{0l}\ R_{0,l+1}]$ and $[C_{0l}\ C_{0,l+1}]$, respectively. In addition, we have $r([R_{0l}\ R_{0,l+1}])=k_1+q_{1l}>k_1$ and $r([C_{0l}\ C_{0,l+1}])=k_2+q_{2l}>k_2$. Then there exist $(k_1+q_{1l})\times k_1$ matrices $\Phi_{1l}$ and $\Phi_{2l}$, and $(k_2+q_{2l})\times k_2$ matrices $\Omega_{1l}$ and $\Omega_{2l}$, such that

$$X_t=\Gamma_lG_t^j\Pi_l'+E_t,$$

where $G_t^j=\Phi_{1l}F_t\Omega_{1l}'$ for $\kappa_{l-1}+1\le t\le\kappa_l$, and $G_t^j=\Phi_{2l}F_t\Omega_{2l}'$ for $\kappa_l<t\le\kappa_{l+1}$. Therefore, it is sufficient to validate Assumptions A–E of Yu et al. (2022) for the equivalent model, where the row and column factor numbers are given by $k_1+q_{1l}$ and $k_2+q_{2l}$, respectively. Assumption A1 implies Assumption A, and Assumption A4 implies Assumption D. We confirm Assumptions B, C, and E as follows.

Assumption B: Recalling that $G_t^j=\Phi_{1l}F_t\Omega_{1l}'$ for $v_j<t\le\kappa_l$ and $G_t^j=\Phi_{2l}F_t\Omega_{2l}'$ for $\kappa_l<t\le v_{j+1}$, we have

$$E\|G_t^j\|^4\le\max\left\{\|\Phi_{1l}\|^4,\|\Phi_{2l}\|^4\right\}\max\left\{\|\Omega_{1l}\|^4,\|\Omega_{2l}\|^4\right\}E\|F_t\|^4<M<\infty,$$

$$\frac{1}{\omega_j}\sum_{t=v_j+1}^{v_{j+1}}\operatorname{vec}(G_t^j)\operatorname{vec}(G_t^j)'=\frac{\omega_{j,1}}{\omega_j}(\Omega_{1l}\otimes\Phi_{1l})\frac{1}{\omega_{j,1}}\sum_{t=v_j+1}^{\kappa_l}\operatorname{vec}(F_t)\operatorname{vec}(F_t)'(\Omega_{1l}\otimes\Phi_{1l})'+\frac{\omega_{j,2}}{\omega_j}(\Omega_{2l}\otimes\Phi_{2l})\frac{1}{\omega_{j,2}}\sum_{t=\kappa_l+1}^{v_{j+1}}\operatorname{vec}(F_t)\operatorname{vec}(F_t)'(\Omega_{2l}\otimes\Phi_{2l})'\xrightarrow{p}\rho_j(\Omega_{1l}\otimes\Phi_{1l})\Sigma_F(\Omega_{1l}\otimes\Phi_{1l})'+(1-\rho_j)(\Omega_{2l}\otimes\Phi_{2l})\Sigma_F(\Omega_{2l}\otimes\Phi_{2l})',$$

where $\rho_j=\lim\frac{\omega_{j,1}}{\omega_j}\in(0,1)$. Let $\Sigma_G^j=\rho_j(\Omega_{1l}\otimes\Phi_{1l})\Sigma_F(\Omega_{1l}\otimes\Phi_{1l})'+(1-\rho_j)(\Omega_{2l}\otimes\Phi_{2l})\Sigma_F(\Omega_{2l}\otimes\Phi_{2l})'$.

Assumption C: By Assumption B2,

$$\|\Gamma_l\|\le\left(\|R_{0l}\|^2+\|R_{0,l+1}\|^2\right)^{1/2}\le\sqrt{2}\,\bar r<\infty,\qquad \left\|p_1^{-1}\Gamma_l'\Gamma_l-I_{k_1+q_{1l}}\right\|\to0,$$
$$\|\Pi_l\|\le\left(\|C_{0l}\|^2+\|C_{0,l+1}\|^2\right)^{1/2}\le\sqrt{2}\,\bar c<\infty,\qquad \left\|p_2^{-1}\Pi_l'\Pi_l-I_{k_2+q_{2l}}\right\|\to0.$$

Assumption E:

$$E\left\|\frac{1}{\sqrt{\omega_j}}\sum_{t=v_j+1}^{v_{j+1}}G_t^j\,v'E_tw\right\|^2\le2\|\Phi_{1l}\|^2\frac{\omega_{j,1}}{\omega_j}E\left\|\frac{1}{\sqrt{\omega_{j,1}}}\sum_{t=v_j+1}^{\kappa_l}F_t\,v'E_tw\right\|^2\|\Omega_{1l}\|^2+2\|\Phi_{2l}\|^2\frac{\omega_{j,2}}{\omega_j}E\left\|\frac{1}{\sqrt{\omega_{j,2}}}\sum_{t=\kappa_l+1}^{v_{j+1}}F_t\,v'E_tw\right\|^2\|\Omega_{2l}\|^2\le2\rho_jM+2(1-\rho_j)M=2M.$$

Consequently, Assumption E(1) is confirmed, and Assumption E(2) can be verified similarly. Therefore, by Theorem 3.8 in Yu et al. (2022), we have $\lim P(\tilde k_{1j}=k_1+q_{1j})=1$ and $\lim P(\tilde k_{2j}=k_2+q_{2j})=1$, where $q_{1j}>0$ and $q_{2j}>0$.
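The merged representation $X_t=\Gamma_lG_t^j\Pi_l'+E_t$ used above can be checked numerically in the generic case $q_{1l}=k_1$ and $q_{2l}=k_2$, where $\Gamma_l=[R_{0l}\ R_{0,l+1}]$, $\Pi_l=[C_{0l}\ C_{0,l+1}]$, and $\Phi_{il}$, $\Omega_{il}$ reduce to selection matrices. A sketch (NumPy; the dimensions and the generic-rank assumption are illustrative):

```python
import numpy as np

# Two regimes (R1, C1) -> (R2, C2) rewritten as one model with enlarged
# loadings Gamma_l = [R1, R2], Pi_l = [C1, C2] and selection matrices
# Phi_i, Omega_i picking out the active regime's factors.
rng = np.random.default_rng(2)
p1, p2, k1, k2 = 6, 5, 2, 2
R1, R2 = rng.standard_normal((p1, k1)), rng.standard_normal((p1, k1))
C1, C2 = rng.standard_normal((p2, k2)), rng.standard_normal((p2, k2))
F = rng.standard_normal((k1, k2))

Gamma_l = np.hstack([R1, R2])                   # p1 x (k1 + q_1l)
Pi_l = np.hstack([C1, C2])                      # p2 x (k2 + q_2l)
I1, O1 = np.eye(k1), np.zeros((k1, k1))
I2, O2 = np.eye(k2), np.zeros((k2, k2))
Phi1, Phi2 = np.vstack([I1, O1]), np.vstack([O1, I1])
Omega1, Omega2 = np.vstack([I2, O2]), np.vstack([O2, I2])

# pre-break regime: Gamma_l (Phi1 F Omega1') Pi_l' = R1 F C1'
G_pre = Phi1 @ F @ Omega1.T
assert np.allclose(Gamma_l @ G_pre @ Pi_l.T, R1 @ F @ C1.T)
# post-break regime: Gamma_l (Phi2 F Omega2') Pi_l' = R2 F C2'
G_post = Phi2 @ F @ Omega2.T
assert np.allclose(Gamma_l @ G_post @ Pi_l.T, R2 @ F @ C2.T)
```

With independently drawn loadings, $[R_1\ R_2]$ has full column rank almost surely, which is exactly the rank inflation $r([R_{0l}\ R_{0,l+1}])=k_1+q_{1l}>k_1$ exploited in the proof.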

For $S_j\in S_{2b}$, the change point $\kappa_l$ may be located at the left boundary of $S_j$. We need to show that the consistency of the estimated numbers of row and column factors still holds when $S_j\in S_{2b}$, i.e., for any $\epsilon>0$, $P(\tilde k_{1j}\ne k_{1j})<\epsilon$ and $P(\tilde k_{2j}\ne k_{2j})<\epsilon$ as $\min\{T,p_1,p_2\}\to\infty$. Here we consider the consistency of $\tilde k_{1j}$, i.e., $\lim P(\tilde k_{1j}=k_{1j})=1$; the consistency of $\tilde k_{2j}$ can be demonstrated in a similar way.

There exists $M>0$ such that $P(\tilde k_{1j}\ne k_{1j})=P(\tilde k_{1j}\ne k_{1j},\,v_j+1\le\kappa_l\le v_j+M)$, which represents the case in which the row and column factor loadings of the sub-sample undergo a break at $t=\kappa_l\in[v_j+1,v_j+M]$, resulting in an unstable model structure. In this case, we have

$$X_t=R_{0l}F_tC_{0l}'+U_j+E_t,$$

where $U_j$ can be viewed as an extra error term. As suggested by Yu et al. (2022), the data matrix $X_t$ can be projected onto a lower-dimensional space. In view of this, let $f_{t,\cdot j}$ denote the $j$-th column of $F_t$ and $\tilde e_{t,\cdot j}$ the $j$-th column of $\tilde E_t$, where $\tilde E_t=E_tC/p_2$. Define $Y_{t,\cdot j}=R_{2l}f_{t,\cdot j}+\tilde e_{t,\cdot j}=R_{1l}f_{t,\cdot j}+\tilde e_{t,\cdot j}+\mu_{t,\cdot j}=a_{t,\cdot j}+\mu_{t,\cdot j}$, where $a_{t,\cdot j}=R_{1l}f_{t,\cdot j}+\tilde e_{t,\cdot j}$, $\mu_{t,\cdot j}=R_{2l}f_{t,\cdot j}-R_{1l}f_{t,\cdot j}$ for $v_j+1\le t\le\kappa_l$, and $\mu_{t,\cdot j}=0$ for $\kappa_l+1\le t\le v_{j+1}$. In matrix form, we have $Y_t=A_t+W_t$, where $Y_t$, $A_t$ and $W_t$ are all $p_1\times k_{2j}$ matrices. Define $\lambda_j$, $\alpha_j$ and $\beta_j$ as the $j$-th largest eigenvalues of $\frac{1}{\omega_jp_2}\sum_{t=v_j+1}^{v_{j+1}}Y_tY_t'$, $\frac{1}{\omega_jp_2}\sum_{t=v_j+1}^{v_{j+1}}A_tA_t'$ and $\frac{1}{\omega_jp_2}\sum_{t=v_j+1}^{v_{j+1}}W_tW_t'$, respectively. By Weyl's inequality for singular values, the perturbation effect of the extra error matrix $W_t$ on the eigenvalues of $A_t$ is bounded as

$$\sqrt{\alpha_j}-\sqrt{\beta_1}\le\sqrt{\lambda_j}\le\sqrt{\alpha_j}+\sqrt{\beta_1}.$$

Hence, we have $\left(\sqrt{\lambda_j}-\sqrt{\alpha_j}\right)^2\le\beta_1$, and

$$\beta_1\le\operatorname{tr}\left(\frac{1}{\omega_jp_2}\sum_{t=v_j+1}^{v_{j+1}}W_tW_t'\right)=\frac{1}{\omega_jp_2}\sum_{t=v_j+1}^{\kappa_l}\sum_{j=1}^{k_2}w_{t,\cdot j}'w_{t,\cdot j}\le\frac{2}{\omega_jp_2}\sum_{t=v_j+1}^{\kappa_l}\sum_{j=1}^{k_2}\|f_{t,\cdot j}\|^2\left(\|R_{1l}\|^2+\|R_{2l}\|^2\right)\le\frac{8}{\omega_jp_2}\sum_{t=v_j+1}^{v_j+M}\sum_{j=1}^{k_2}\|f_{t,\cdot j}\|^2\,\bar r^2=O_p\!\left(\frac{1}{Tp_2}\right).$$

According to the proof of Theorem 3.8 on the consistency of the factor numbers in Yu et al. (2022), we have $\alpha_j=\nu_j+o_p(1)$ for $j\le k_{1j}$, where $\nu_j$ is the $j$-th largest eigenvalue of $\Sigma_{2,F}^l$ with $\frac{1}{\kappa_{l+1}-\kappa_{l-1}}\sum_{t=\kappa_{l-1}+1}^{\kappa_{l+1}}F_tF_t'\xrightarrow{p}\Sigma_{2,F}^l$, and $\alpha_j=O_p\!\left(\frac{1}{Tp_1}+\frac{1}{Tp_2}+\frac{1}{p_1}\right)$ for $j>k_{1j}$. It follows that $\lambda_j=\alpha_j+2\sqrt{\alpha_j}\,O_p\!\left(\frac{1}{\sqrt{Tp_2}}\right)+O_p\!\left(\frac{1}{Tp_2}\right)=\nu_j+o_p(1)$ for $j\le k_{1j}$, and $\lambda_j=O_p\!\left(\frac{1}{Tp_1}+\frac{1}{Tp_2}+\frac{1}{p_1}\right)+O_p\!\left(\frac{1}{Tp_1}+\frac{1}{Tp_2}+\frac{1}{p_1}\right)^{1/2}O_p\!\left(\frac{1}{\sqrt{Tp_2}}\right)+O_p\!\left(\frac{1}{Tp_2}\right)=o_p(1)$ for $j>k_{1j}$. This implies that the estimator of the numbers of row and column factors obtained by applying the iterative algorithm in Yu et al. (2022) to the sample $Y_t$ remains consistent for $\kappa_l\in[v_j+1,v_j+M]$.
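The Weyl bound invoked above says that the singular values of the second-moment matrix of $Y_t=A_t+W_t$ deviate from those of $A_t$ by at most $\sqrt{\beta_1}$. A numerical sketch of this perturbation bound (NumPy; all dimensions illustrative):

```python
import numpy as np

# Weyl's inequality for singular values applied to second-moment matrices:
# with lambda_j, alpha_j the descending eigenvalues of the (scaled) second
# moments of Y_t = A_t + W_t and A_t, and beta_1 the largest eigenvalue of
# the second moment of W_t, we have |sqrt(lambda_j) - sqrt(alpha_j)| <=
# sqrt(beta_1) for every j.
rng = np.random.default_rng(3)
p1, k, T = 8, 3, 50
A = [rng.standard_normal((p1, k)) for _ in range(T)]
W = [0.1 * rng.standard_normal((p1, k)) for _ in range(T)]

def top_eigs(mats):
    M = sum(m @ m.T for m in mats) / (T * k)
    return np.sort(np.linalg.eigvalsh(M))[::-1]   # descending eigenvalues

lam = top_eigs([a + w for a, w in zip(A, W)])
alpha = top_eigs(A)
beta1 = top_eigs(W)[0]

assert np.all(np.abs(np.sqrt(lam) - np.sqrt(alpha)) <= np.sqrt(beta1) + 1e-12)
```

The bound is exact here because stacking the matrices horizontally turns each scaled second moment into an outer product, so the eigenvalue square roots are singular values of the stacked data.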

For $S_j\in S_{2c}$, the consistency of the estimated numbers of row and column factors can be verified similarly.

Lemma 7

Under Assumptions A1 and A4, Assumptions B1–B6, as min{T, p 1, p 2} → , we have lim P k ̃ 1 n * = k 1 + q 1 n * = 1 and lim P k ̃ 2 n * = k 2 + q 2 n * = 1 , q 1 n * > 0 and q 2 n * > 0 for S n S 2 a S 2 c or S n S 2 a S 2 b , q 1 n * = 0 and q 2 n * = 0 for S n S 1 and S n + 1 S 1 , q 1 n * 0 and q 2 n * 0 for S n S 1 , S n + 1 S 2 c or S n S 2 b , S n + 1 S 1 with n = 0, …, N − 1.

Proof.

Recall that a change point near the left/right end of $S_j$ becomes situated within the interior of the new interval $S_{j-1}^*$/$S_j^*$ obtained by merging the adjacent subintervals. The remaining steps of the proof follow an approach similar to that of Lemma 6 and are thus omitted.

Proof of Theorem 1.

Since the consistency of $\tilde\tau$ can be regarded as an intermediate step toward Theorem 1, we first prove that for any $\epsilon>0$ and $\eta>0$, $P(|\tilde\tau-\tau_0|>\eta)<\epsilon$ as $\min\{T,p_1,p_2\}\to\infty$. Let $D=\{\kappa:(\tau_0-\eta)T\le\kappa\le(\tau_0+\eta)T\}$ and let $D^c$ be the complement of $D$; it is sufficient to prove $P(\tilde\kappa\in D^c)<\epsilon$. Since $\tilde\kappa=\arg\min_\kappa\tilde S(\kappa)$, we have $\tilde S(\tilde\kappa)-\tilde S(\kappa_0)\le0$, and on the event $\{\tilde\kappa\in D^c\}$ it holds that $\min_{\kappa\in D^c}\tilde S(\kappa)-\tilde S(\kappa_0)\le0$, which implies $P(\tilde\kappa\in D^c)\le P\left(\min_{\kappa\in D^c}\tilde S(\kappa)-\tilde S(\kappa_0)\le0\right)$. Hence, for any $\epsilon>0$ and $\eta>0$, it suffices to prove that $P\left(\min_{\kappa\in D^c}\tilde S(\kappa)-\tilde S(\kappa_0)\le0\right)<\epsilon$ as $\min\{T,p_1,p_2\}\to\infty$. Due to symmetry, we consider the case $\kappa<\kappa_0$. Let $y_t=\operatorname{vec}\!\left(\hat J\phi_t\phi_t'\hat J'-\Sigma_1\right)$ for $t\le\kappa_0$ and $y_t=\operatorname{vec}\!\left(\hat J\phi_t\phi_t'\hat J'-\Sigma_2\right)$ for $t>\kappa_0$. Define $a_\kappa=\frac{T-\kappa_0}{T-\kappa}\left(\operatorname{vec}(\Sigma_2)-\operatorname{vec}(\Sigma_1)\right)$, $b_\kappa=\frac{\kappa_0-\kappa}{T-\kappa}\left(\operatorname{vec}(\Sigma_1)-\operatorname{vec}(\Sigma_2)\right)$, $\Sigma_1=\hat J\Sigma_{G,1}\hat J'$ and $\Sigma_2=\hat J\Sigma_{G,2}\hat J'$. Following a proof technique similar to that of part D in Baltagi, Kao, and Wang (2017), together with part (1) of Lemma 3 and parts (1), (3), (5), (7) and (9) of Lemma 5, we have $P\left(\min_{\kappa\in D^c}\tilde S(\kappa)-\tilde S(\kappa_0)\le0\right)<\epsilon$. Next, to obtain the conclusion $\tilde\kappa-\kappa_0=O_p(1)$, we shall prove that for any $\epsilon>0$ there exists $M>0$ such that $P(|\tilde\kappa-\kappa_0|>M)<\epsilon$ as $\min\{T,p_1,p_2\}\to\infty$. Denote $D_M=\{\kappa:(\tau_0-\eta)T\le\kappa\le(\tau_0+\eta)T,\ |\kappa-\kappa_0|>M\}$ for the given $\eta$ and $M$; it follows that $P(|\tilde\kappa-\kappa_0|>M)=P(\tilde\kappa\in D^c)+P(\tilde\kappa\in D_M)$. Therefore, it suffices to prove that for any $\epsilon>0$ and $\eta>0$ there exists $M>0$ such that $P(\tilde\kappa\in D_M)<\epsilon$ as $\min\{T,p_1,p_2\}\to\infty$. By part (1) of Lemma 3 and parts (2), (4), (6), (8) and (9) of Lemma 5, the rest of the proof is similar to the proof of the consistency of $\tilde\tau$ and is thus omitted.
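The least-squares criterion minimized above can be sketched for a single change point by applying a split-sample sum of squared deviations directly to $z_t=\operatorname{vec}(\phi_t\phi_t')$ built from simulated factors. All names, dimensions, and tuning values below are illustrative, not the paper's implementation:

```python
import numpy as np

# Least-squares change-point sketch: kappa_hat = argmin_kappa S(kappa),
# where S(kappa) is the pooled within-segment sum of squared deviations of
# z_t = vec(phi_t phi_t') around its pre- and post-kappa means. A break in
# the factor second moments (here a variance break) shifts the mean of z_t.
rng = np.random.default_rng(4)
T, kappa0, r = 200, 120, 10
phi = np.vstack([rng.standard_normal((kappa0, r)),
                 3.0 * rng.standard_normal((T - kappa0, r))])  # variance break
z = np.stack([np.outer(p, p).ravel() for p in phi])            # vec(phi phi')

def S(kappa):
    pre, post = z[:kappa], z[kappa:]
    return (((pre - pre.mean(0)) ** 2).sum()
            + ((post - post.mean(0)) ** 2).sum())

grid = range(10, T - 10)           # trim the boundaries
kappa_hat = min(grid, key=S)
```

With a pronounced break, the minimizer concentrates near the true change point, in line with the stochastic boundedness of $\tilde\kappa-\kappa_0$ established in the theorem.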

Proof of Theorem 2.

For case (a), we have $\lim P(\tilde k_{i,n+1}=k_i)=1$ and $\lim P(\tilde k_{in}>k_i)=1$ (or $\lim P(\tilde k_{in}=k_i)=1$ and $\lim P(\tilde k_{i,n+1}>k_i)=1$) from Lemma 6. As a result, the probability that the subinterval $S_n^*$ contains a break converges to one. For case (b), we have $\lim P(\tilde k_{in}=\tilde k_{i,n+1}=k_i)=1$ by Lemma 6. However, if there were no change in $S_n^*$, it would follow that $\lim P(\tilde k_{in}^*=k_i)=1$, which contradicts the fact that $\lim P(\tilde k_{in}^*>\tilde k_{in}=\tilde k_{i,n+1}=k_i)=1$. Hence, the probability that the subinterval $S_n^*$ contains a break converges to one. For case (c), we have $\lim P(\tilde k_{in}=\tilde k_{i,n+1}=\tilde k_{in}^*=\tilde k_{i,n-1}^*=k_i)=1$ from Lemma 6 and Lemma 7, $i=1,2$. However, if $S_n$ contained a break, at least one of $\tilde k_{in}$, $\tilde k_{in}^*$ and $\tilde k_{i,n-1}^*$ would be larger than $k_i$; a contradiction would thus arise unless there is no break in $S_n$. Hence, the probability that $S_n$ contains no break converges to one. The proof of Theorem 2 is complete.
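The detection logic of Theorem 2 rests on the mechanism of Lemmas 6–7: a subinterval containing a loading break behaves like a stable factor model with an inflated number of factors. A one-way (vector-valued) sketch with a simple eigenvalue-ratio estimator in the spirit of the IterER method of Yu et al. (2022); all settings are illustrative and deliberately simplified:

```python
import numpy as np

# On a stable window an eigenvalue-ratio rule recovers k factors; on a
# window containing a loading break the data look like a k + q factor
# model (here q = k, since the two loading matrices are independent).
rng = np.random.default_rng(5)
p, T, k = 40, 400, 2
R1 = rng.standard_normal((p, k))
R2 = rng.standard_normal((p, k))        # independent loadings after the break

def simulate(loadings_by_t):
    F = rng.standard_normal((T, k))
    E = 0.1 * rng.standard_normal((T, p))
    return np.stack([loadings_by_t(t) @ F[t] for t in range(T)]) + E

def er_estimate(X, kmax=6):
    vals = np.sort(np.linalg.eigvalsh(X.T @ X / (T * p)))[::-1]
    return int(np.argmax(vals[:kmax] / vals[1:kmax + 1])) + 1

stable = simulate(lambda t: R1)                        # no break
broken = simulate(lambda t: R1 if t < T // 2 else R2)  # loading break at T/2
```

Comparing the estimated factor numbers on the original and merged subintervals, as in cases (a)–(c) above, then reveals whether a given subinterval contains a break.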

Proof of Theorem 3.

The proof technique used for Theorem 1 can obviously be applied to show that $\tilde\kappa_1-\kappa_1=O_p(1)$ in $\tilde S_1$. Regarding $\tilde S_2=(\tilde\kappa_1,t_3]$, there may exist two breaks, by the boundedness property of $\tilde\kappa_1$. If $\tilde\kappa_1>\kappa_1$, it follows that $\tilde\kappa_2-\kappa_2=O_p(1)$. However, in the case $\tilde\kappa_1\le\kappa_1$, the two breaks $\kappa_1$ and $\kappa_2$ would both be present within $\tilde S_2$. As $\min\{T,p_1,p_2\}\to\infty$, since $t_3-\kappa_2>T/N$ and $\kappa_1-\tilde\kappa_1=O_p(1)$, the probability that $\kappa_1-\tilde\kappa_1$ is less than $t_3-\kappa_2$ converges to one. When $\kappa_2-\kappa_1>m_0T$, the performance of the factor model on $\tilde S_2$ is mainly influenced by the segment $(\tilde\kappa_1,t_3]$. Thus, using the same proof technique as for Theorem 1, we obtain $\tilde\kappa_2-\kappa_2=O_p(1)$. The proofs for $\kappa_l$ with $l=3,\dots,L$ can be completed by following an approach similar to that for $\kappa_2$, and are thus omitted.

References

Andrews, D. W. K. 1993. “Tests for Parameter Instability and Structural Change with Unknown Change Point.” Econometrica 61 (4): 821–56. https://doi.org/10.2307/2951764.

Bai, J., and S. Ng. 2002. “Determining the Number of Factors in Approximate Factor Models.” Econometrica 70 (1): 191–221. https://doi.org/10.1111/1468-0262.00273.

Bai, J. 2003. “Inferential Theory for Factor Models of Large Dimensions.” Econometrica 71 (1): 135–71. https://doi.org/10.1111/1468-0262.00392.

Baltagi, B. H., C. Kao, and F. Wang. 2017. “Identification and Estimation of a Large Factor Model with Structural Instability.” Journal of Econometrics 197 (1): 87–100. https://doi.org/10.1016/j.jeconom.2016.10.007.

Chen, B., E. Y. Chen, and R. Chen. 2024. “Time-Varying Matrix Factor Model.” arXiv preprint arXiv:2404.01546. https://doi.org/10.2139/ssrn.4764031.

Chen, E. Y., and R. Chen. 2022. “Modeling Dynamic Transport Network with Matrix Factor Models: An Application to International Trade Flow.” Journal of Data Science 21 (3): 490–507. https://doi.org/10.6339/22-jds1065.

Chen, E. Y., and J. Fan. 2023. “Statistical Inference for High-Dimensional Matrix-Variate Factor Models.” Journal of the American Statistical Association 118 (542): 1038–55. https://doi.org/10.1080/01621459.2021.1970569.

Chen, E. Y., R. S. Tsay, and R. Chen. 2020. “Constrained Factor Models for High-Dimensional Matrix-Variate Time Series.” Journal of the American Statistical Association 115 (530): 775–93. https://doi.org/10.1080/01621459.2019.1584899.

Corradi, V., and N. R. Swanson. 2014. “Testing for Structural Stability of Factor Augmented Forecasting Models.” Journal of Econometrics 182 (1): 100–18. https://doi.org/10.1016/j.jeconom.2014.04.011.

Duan, J., J. Bai, and X. Han. 2023. “Quasi-maximum Likelihood Estimation of Break Point in High-Dimensional Factor Models.” Journal of Econometrics 233 (1): 209–36. https://doi.org/10.1016/j.jeconom.2021.12.011.

Han, X., and A. Inoue. 2015. “Tests for Parameter Instability in Dynamic Factor Models.” Econometric Theory 31 (5): 1117–52. https://doi.org/10.1017/s0266466614000486.

He, Y., X. Kong, L. Trapani, and L. Yu. 2023. “One-way or Two-Way Factor Model for Matrix Sequences?” Journal of Econometrics 235 (2): 1981–2004. https://doi.org/10.1016/j.jeconom.2023.02.008.

He, Y., X. Kong, L. Trapani, and L. Yu. 2024a. “Online Change-point Detection for Matrix-Valued Time Series with Latent Two-Way Factor Structure.” The Annals of Statistics 52 (4): 1646–70. https://doi.org/10.1214/24-aos2410.

He, Y., X. Kong, L. Yu, X. Zhang, and C. Zhao. 2024b. “Matrix Factor Analysis: From Least Squares to Iterative Projection.” Journal of Business & Economic Statistics 42 (1): 322–34. https://doi.org/10.1080/07350015.2023.2191676.

Karavias, Y., P. Narayan, and J. Westerlund. 2022. “Structural Breaks in Interactive Effects Panels and the Stock Market Reaction to COVID-19.” Journal of Business & Economic Statistics 41 (3): 653–66. https://doi.org/10.1080/07350015.2022.2053690.

Lam, C., and Q. Yao. 2012. “Factor Modeling for High-Dimensional Time Series: Inference for the Number of Factors.” The Annals of Statistics 40 (2): 694–726. https://doi.org/10.1214/12-aos970.

Liu, X., and E. Y. Chen. 2019. “Helping Effects against Curse of Dimensionality in Threshold Factor Models for Matrix Time Series.” arXiv preprint arXiv:1904.07383.

Ma, S., and L. Su. 2018. “Estimation of Large Dimensional Factor Models with an Unknown Number of Breaks.” Journal of Econometrics 207 (1): 1–29. https://doi.org/10.1016/j.jeconom.2018.06.019.

Wang, D., X. Liu, and R. Chen. 2019. “Factor Models for Matrix-Valued High-Dimensional Time Series.” Journal of Econometrics 208 (1): 231–48. https://doi.org/10.1016/j.jeconom.2018.09.013.

Wang, L., and J. Wu. 2022. “Estimation of High-Dimensional Factor Models with Multiple Structural Changes.” Economic Modelling 108: 105743. https://doi.org/10.1016/j.econmod.2021.105743.

Yu, L., Y. He, X. Kong, and X. Zhang. 2022. “Projected Estimation for Large-Dimensional Matrix Factor Models.” Journal of Econometrics 229 (1): 201–17. https://doi.org/10.1016/j.jeconom.2021.04.001.

Received: 2024-04-30
Accepted: 2025-04-07
Published Online: 2025-05-23

© 2025 Walter de Gruyter GmbH, Berlin/Boston
