
Periodic INAR(1) model with Bell innovations distribution

Abderrahmen Manaa
Published/Copyright: September 26, 2024

Abstract

In this paper, we introduce a class of periodic integer-valued autoregressive models with Bell innovations, BL-PINAR(1), based on the binomial thinning operator. The basic probabilistic and statistical properties of this class are studied. In particular, the first- and second-moment periodic stationarity conditions are established, and the closed forms of these moments are derived under the obtained conditions. Furthermore, the periodic autocovariance structure is considered, and the closed form of the periodic autocorrelation function is provided. The conditional least squares (CLS), Yule–Walker (YW), weighted conditional least squares (WCLS), and conditional maximum likelihood (CML) methods are applied to estimate the underlying parameters. The asymptotic properties of the CLS and YW estimators are obtained. The performances of these methods are compared through a simulation study. An application to a real data set is provided.

MSC 2020: 62F12; 62M10

A Appendix

Proof of Proposition 3.1

Substituting successively, 𝑚 times, in (2.2), we obtain

\[
y_t = \Bigl(\prod_{i=1}^{m}\varphi_{t-i+1}\Bigr)\circ y_{t-m} + \sum_{j=0}^{m-1}\Bigl(\prod_{i=1}^{j}\varphi_{t-i+1}\Bigr)\circ\varepsilon_{t-j},\quad t\in\mathbb{Z}.
\]

Letting \(t=s+\tau S\), \(s=1,\dots,S\) and \(\tau\in\mathbb{Z}\), while putting \(m=S\), and taking into account the periodicity of the parameter \(\varphi_s\), we obtain

\[
y_{s+\tau S} = \Bigl(\prod_{i=1}^{S}\varphi_{i}\Bigr)\circ y_{s+(\tau-1)S} + \sum_{j=1}^{S}\Bigl(\prod_{i=1}^{j-1}\varphi_{s-i+1}\Bigr)\circ\varepsilon_{s-j+1+\tau S},\quad \tau\in\mathbb{Z}.
\]

Then the mean \(\mu_{y,s}=\mathbb{E}(y_{s+\tau S})\) of the stochastic process \(\{y_{s+\tau S};\,\tau\in\mathbb{Z}\}\) does not depend on \(\tau\) and depends only on the season \(s\), i.e., the process is periodically stationary in the mean, if and only if \(\prod_{i=1}^{S}\varphi_i<1\); hence

\[
\mu_{y,s} = \Bigl(1-\prod_{i=1}^{S}\varphi_i\Bigr)^{-1}\sum_{j=1}^{S}\Bigl(\prod_{i=1}^{j-1}\varphi_{s-i+1}\Bigr)\lambda_{s-j+1}e^{\lambda_{s-j+1}},\quad s=1,\dots,S. \qquad ∎
\]
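
Although the proposition is purely analytic, the closed form is easy to sanity-check numerically. The sketch below uses hypothetical parameter values (chosen only so that \(\prod_{i=1}^{S}\varphi_i<1\); nothing here comes from the paper's data) and iterates the mean recursion \(\mu_{y,s}=\varphi_s\mu_{y,s-1}+\lambda_s e^{\lambda_s}\) to its periodic fixed point, comparing the result with the closed form above.

```python
import math

# Hypothetical parameters for illustration (S = 4); prod(phi) < 1 holds.
S = 4
phi = [0.3, 0.5, 0.4, 0.6]               # thinning probabilities phi_s
lam = [0.8, 1.2, 0.5, 1.0]               # Bell parameters lambda_s
mu_eps = [l * math.exp(l) for l in lam]  # innovation means E(eps_s) = lambda_s e^{lambda_s}

def mu_closed(s):
    """Closed form of Proposition 3.1; seasons are 0-based, indices taken mod S."""
    total = 0.0
    for j in range(1, S + 1):
        p = 1.0
        for i in range(1, j):            # prod_{i=1}^{j-1} phi_{s-i+1}
            p *= phi[(s - i + 1) % S]
        total += p * mu_eps[(s - j + 1) % S]
    return total / (1.0 - math.prod(phi))

# Iterate the mean recursion mu_s = phi_s mu_{s-1} + mu_eps_s to its fixed point.
mu = [0.0] * S
for _ in range(200):
    for s in range(S):
        mu[s] = phi[s] * mu[(s - 1) % S] + mu_eps[s]

assert all(abs(mu[s] - mu_closed(s)) < 1e-10 for s in range(S))
```

The iteration converges geometrically, since each complete cycle contracts by the factor \(\prod_{i=1}^{S}\varphi_i\).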

Proof of Proposition 3.2

The variance \(\gamma_y^{(t)}(0)\) of the periodically correlated process \(y_t=\varphi_t\circ y_{t-1}+\varepsilon_t\), \(t\in\mathbb{Z}\), is given by the following non-homogeneous difference equation:

\[
\gamma_y^{(t)}(0) = \varphi_t^2\,\gamma_y^{(t-1)}(0) + \psi_t,
\]

where \(\psi_t=\varphi_t(1-\varphi_t)\mathbb{E}(y_{t-1})+\lambda_t(1+\lambda_t)e^{\lambda_t}\). Iterating the last equation \(m\) times, we obtain

\[
\gamma_y^{(t)}(0) = \Bigl(\prod_{i=1}^{m}\varphi_{t-i+1}^2\Bigr)\gamma_y^{(t-m)}(0) + \sum_{j=1}^{m}\Bigl(\prod_{i=1}^{j-1}\varphi_{t-i+1}^2\Bigr)\psi_{t-j+1}.
\]

Replacing \(t\) by \(s+\tau S\), \(s=1,\dots,S\), \(\tau\in\mathbb{Z}\), in the previous expression, setting \(m=S\), and taking into account the periodicity of the coefficients, we obtain, after iterating the resulting one-cycle recursion \(\tau\) times,

\[
\gamma_y^{(s+\tau S)}(0) = \Bigl(\prod_{i=1}^{S}\varphi_i^2\Bigr)^{\!\tau}\gamma_y^{(s)}(0) + \sum_{k=0}^{\tau-1}\Bigl(\prod_{i=1}^{S}\varphi_i^2\Bigr)^{\!k}\sum_{j=1}^{S}\Bigl(\prod_{i=1}^{j-1}\varphi_{s-i+1}^2\Bigr)\psi_{s-j+1}.
\]

It is well known that the series \(\sum_{k=0}^{\infty}\bigl(\prod_{i=1}^{S}\varphi_i^2\bigr)^k\) converges if and only if \(\prod_{i=1}^{S}\varphi_i^2<1\) (hence \(\prod_{i=1}^{S}\varphi_i<1\)), and in this case, letting \(\tau\to\infty\), we have

\[
\gamma_y^{(s)}(0) = \Bigl(1-\prod_{i=1}^{S}\varphi_i^2\Bigr)^{-1}\sum_{j=1}^{S}\Bigl(\prod_{i=1}^{j-1}\varphi_{s-i+1}^2\Bigr)\psi_{s-j+1},
\]

with \(\psi_s=\varphi_s(1-\varphi_s)\mu_{y,s-1}+\lambda_s(1+\lambda_s)e^{\lambda_s}\). ∎
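
The same kind of numerical sanity check applies here. With hypothetical parameters (so \(\prod_{i=1}^{S}\varphi_i^2<1\) holds), iterating \(\gamma_y^{(s)}(0)=\varphi_s^2\gamma_y^{(s-1)}(0)+\psi_s\) to its periodic fixed point should reproduce the closed form of Proposition 3.2:

```python
import math

# Hypothetical parameters for illustration (S = 4); prod(phi^2) < 1 holds.
S = 4
phi = [0.3, 0.5, 0.4, 0.6]
lam = [0.8, 1.2, 0.5, 1.0]

# Periodic means mu_{y,s}, from iterating mu_s = phi_s mu_{s-1} + lambda_s e^{lambda_s}.
mu = [0.0] * S
for _ in range(200):
    for s in range(S):
        mu[s] = phi[s] * mu[(s - 1) % S] + lam[s] * math.exp(lam[s])

# psi_s = phi_s (1 - phi_s) mu_{y,s-1} + lambda_s (1 + lambda_s) e^{lambda_s}
psi = [phi[s] * (1 - phi[s]) * mu[(s - 1) % S]
       + lam[s] * (1 + lam[s]) * math.exp(lam[s]) for s in range(S)]

def gamma_closed(s):
    """Closed form of Proposition 3.2; seasons are 0-based, indices taken mod S."""
    total = 0.0
    for j in range(1, S + 1):
        c = 1.0
        for i in range(1, j):            # prod_{i=1}^{j-1} phi_{s-i+1}^2
            c *= phi[(s - i + 1) % S] ** 2
        total += c * psi[(s - j + 1) % S]
    return total / (1.0 - math.prod(p * p for p in phi))

# Fixed point of the variance recursion gamma_s = phi_s^2 gamma_{s-1} + psi_s.
gam = [0.0] * S
for _ in range(200):
    for s in range(S):
        gam[s] = phi[s] ** 2 * gam[(s - 1) % S] + psi[s]

assert all(abs(gam[s] - gamma_closed(s)) < 1e-9 for s in range(S))
```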

Proof of Proposition 3.3

It is possible to deduce from [6, Proposition B] that

\[
\eta := \prod_{i=1}^{S}\varphi_i < 1,
\]

which implies that all roots of the characteristic polynomial of \(A\) lie inside the unit circle. Furthermore, if \(\mathbb{E}\lVert\zeta_\tau\rVert<\infty\), then [9, Theorem 1] guarantees that there exists a strictly periodically stationary BL-PINAR(1) process satisfying (2.2). ∎

Proof of Proposition 4.1

The variances \(\gamma_y^{(s)}(0)\), \(s=1,\dots,S\), were given in Proposition 3.2. Concerning the autocovariance function, we have \(\gamma_y^{(t)}(h)=\mathbb{E}(y_t y_{t-h})-\mu_{y,t}\mu_{y,t-h}\), \(h\ge 1\), which can be calculated as follows:

\[
\gamma_y^{(t)}(h) = \operatorname{Cov}(y_t, y_{t-h}) = \operatorname{Cov}(\varphi_t\circ y_{t-1}+\varepsilon_t,\, y_{t-h}) = \operatorname{Cov}(\varphi_t\circ y_{t-1},\, y_{t-h}) = \varphi_t\,\gamma_y^{(t-1)}(h-1),\quad h\ge 1.
\]

By iteration, we obtain

\[
\gamma_y^{(t)}(h) = \Bigl(\prod_{i=1}^{h}\varphi_{t-i+1}\Bigr)\gamma_y^{(t-h)}(0),\quad h\ge 1.
\]

Letting \(t=s+\tau S\), \(s=1,\dots,S\), \(\tau\in\mathbb{Z}\), and \(h=\nu+kS\), \(\nu=1,\dots,S\), \(k\in\mathbb{N}\), the preceding equality can be rewritten as follows:

\[
\gamma_y^{(s)}(\nu+kS) = \Bigl(\prod_{i=1}^{\nu+kS}\varphi_{s-i+1}\Bigr)\gamma_y^{(s+\tau S-(\nu+kS))}(0) = \Bigl(\prod_{i=1}^{S}\varphi_i\Bigr)^{\!k}\Bigl(\prod_{i=1}^{\nu}\varphi_{s-i+1}\Bigr)\gamma_y^{(s-\nu)}(0);
\]

hence, we have

\[
\rho^{(s)}(\nu+kS) = \Bigl(\prod_{i=1}^{\nu}\varphi_{s-i+1}\Bigr)\Bigl(\prod_{i=1}^{S}\varphi_i\Bigr)^{\!k}\,\frac{\gamma_y^{(s-\nu)}(0)}{\gamma_y^{(s)}(0)}. \qquad ∎
\]
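
The unrolling step used above can be verified mechanically: starting from arbitrary (hypothetical) season variances \(\gamma_y^{(s)}(0)\), the recursion \(\gamma_y^{(t)}(h)=\varphi_t\gamma_y^{(t-1)}(h-1)\) must reproduce the factorized closed form, since the product over a full period collapses to \(\prod_{i=1}^{S}\varphi_i\) by periodicity. A minimal sketch, with invented values:

```python
import math

# Hypothetical illustration values; gamma0[s] stands for gamma_y^{(s)}(0).
S = 3
phi = [0.5, 0.3, 0.8]
gamma0 = [2.0, 1.5, 2.5]

def gamma(s, h):
    """gamma_y^{(s)}(h) via the recursion gamma^{(t)}(h) = phi_t gamma^{(t-1)}(h-1)."""
    if h == 0:
        return gamma0[s % S]
    return phi[s % S] * gamma((s - 1) % S, h - 1)

def gamma_closed(s, nu, k):
    """(prod_{i=1}^{nu} phi_{s-i+1}) (prod_{i=1}^{S} phi_i)^k gamma^{(s-nu)}(0)."""
    p = 1.0
    for i in range(1, nu + 1):
        p *= phi[(s - i + 1) % S]
    return p * math.prod(phi) ** k * gamma0[(s - nu) % S]

for s in range(S):
    for nu in range(1, S + 1):
        for k in range(3):
            assert abs(gamma(s, nu + k * S) - gamma_closed(s, nu, k)) < 1e-12
```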

Proof of Proposition 5.1

Using the well-known property

\[
\mathbb{E}(a\circ X) = a\,\mathbb{E}(X) \quad\text{and}\quad \mathbb{E}[(a\circ X)^2] = a^2\,\mathbb{E}(X^2) + a(1-a)\,\mathbb{E}(X)
\]

(for more properties, see [24]), according to Lemma 2.1, we can obtain the conditional mean

\[
\mathbb{E}(y_t\mid y_{t-1}) = \varphi_t y_{t-1} + \mathbb{E}(\varepsilon_t) = \varphi_t y_{t-1} + \mu_{\varepsilon,t}.
\]

Hence, to estimate the parameters \((\varphi_s,\mu_{\varepsilon,s})\), \(s=1,\dots,S\), we consider, for each season \(s\), a realization of finite size \(N\), \(\bar{y}_s=(y_s,y_{s+S},\dots,y_{s+(N-1)S})\), and the quadratic form

\[
Q(\varphi_t,\mu_{\varepsilon,t}\mid y_1,\dots,y_n) = \sum_{t=1}^{n}\bigl[y_t-\mathbb{E}(y_t\mid y_{t-1})\bigr]^2 = \sum_{t=1}^{n}\bigl[y_t-(\varphi_t y_{t-1}+\mu_{\varepsilon,t})\bigr]^2.
\]

Letting \(t=s+\tau S\), \(s=1,\dots,S\), \(\tau=0,\dots,N-1\), assuming that \(n=NS\), and taking into account the periodicity of the parameters \(\varphi_t\) and \(\mu_{\varepsilon,t}\), we may rewrite the previous expression as follows:

\[
Q(\varphi_s,\mu_{\varepsilon,s}\mid \bar{y}_s,\bar{y}_{s-1}) = \sum_{s=1}^{S}\sum_{\tau=0}^{N-1}\bigl[y_{s+\tau S}-(\varphi_s y_{s-1+\tau S}+\mu_{\varepsilon,s})\bigr]^2;
\]

setting the partial derivatives of \(Q\) with respect to \(\varphi_s\) and \(\mu_{\varepsilon,s}\) equal to zero then yields the normal equations

(A.1) \(\displaystyle\sum_{\tau=0}^{N-1}\bigl[y_{s+\tau S}\,y_{s-1+\tau S}-(\varphi_s y_{s-1+\tau S}^2+\mu_{\varepsilon,s}\,y_{s-1+\tau S})\bigr]=0,\)

(A.2) \(\displaystyle\sum_{\tau=0}^{N-1}\bigl[y_{s+\tau S}-(\varphi_s y_{s-1+\tau S}+\mu_{\varepsilon,s})\bigr]=0.\)
From (A.1) and (A.2), we obtain

\[
\sum_{\tau=0}^{N-1} y_{s+\tau S}\,y_{s-1+\tau S}-\Bigl(\varphi_s\sum_{\tau=0}^{N-1} y_{s-1+\tau S}^2+\mu_{\varepsilon,s}\sum_{\tau=0}^{N-1} y_{s-1+\tau S}\Bigr)=0,\qquad
\sum_{\tau=0}^{N-1} y_{s+\tau S}-\Bigl(\varphi_s\sum_{\tau=0}^{N-1} y_{s-1+\tau S}+N\mu_{\varepsilon,s}\Bigr)=0.
\]

Hence, we have the system

\[
\begin{aligned}
\varphi_s\sum_{\tau=0}^{N-1} y_{s-1+\tau S}^2+\mu_{\varepsilon,s}\sum_{\tau=0}^{N-1} y_{s-1+\tau S} &= \sum_{\tau=0}^{N-1} y_{s+\tau S}\,y_{s-1+\tau S},\\
\varphi_s\sum_{\tau=0}^{N-1} y_{s-1+\tau S}+N\mu_{\varepsilon,s} &= \sum_{\tau=0}^{N-1} y_{s+\tau S},
\end{aligned}
\]

which leads to

\[
\hat{\varphi}_s = \frac{\sum_{\tau=0}^{N-1}\bigl(y_{s+\tau S}-\bar{y}_s\bigr)\bigl(y_{s-1+\tau S}-\bar{y}_{s-1}\bigr)}{\sum_{\tau=0}^{N-1}\bigl(y_{s-1+\tau S}-\bar{y}_{s-1}\bigr)^2},\qquad
\hat{\mu}_{\varepsilon,s} = \frac{\sum_{\tau=0}^{N-1} y_{s+\tau S}-\hat{\varphi}_s\sum_{\tau=0}^{N-1} y_{s-1+\tau S}}{N},\quad s=1,\dots,S,
\]
where \(\bar{y}_s = N^{-1}\sum_{\tau=0}^{N-1} y_{s+\tau S}\).
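
As an illustrative check of these estimators (with hypothetical parameter values, not the paper's data): if a sequence follows the conditional-mean recursion \(y_t=\varphi_s y_{t-1}+\mu_{\varepsilon,s}\) exactly, with no innovation noise, the season-wise CLS formulas recover the parameters up to rounding; on actual BL-PINAR(1) counts the estimates are of course only consistent, as the proposition states.

```python
# Hypothetical sketch: noiseless data generated by the conditional-mean recursion.
S = 2
phi_true = [0.4, 0.7]   # phi_s for the two seasons (residue classes t mod S)
mu_true = [1.0, 2.0]    # mu_{eps,s} for the two seasons

N = 6                   # observed cycles per season
n = N * S
y = [10.0]              # arbitrary starting value y_0
for t in range(1, n + 1):
    s = t % S
    y.append(phi_true[s] * y[t - 1] + mu_true[s])

def cls(s):
    """Season-wise CLS estimates (phi_hat, mu_hat) for residue class s."""
    pairs = [(y[t - 1], y[t]) for t in range(1, n + 1) if t % S == s]
    m = len(pairs)
    xbar = sum(x for x, _ in pairs) / m        # mean of the lagged values
    ybar = sum(v for _, v in pairs) / m        # mean of the current values
    num = sum((v - ybar) * (x - xbar) for x, v in pairs)
    den = sum((x - xbar) ** 2 for x, _ in pairs)
    phi_hat = num / den
    mu_hat = ybar - phi_hat * xbar
    return phi_hat, mu_hat

for s in range(S):
    ph, mh = cls(s)
    assert abs(ph - phi_true[s]) < 1e-8
    assert abs(mh - mu_true[s]) < 1e-8
```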

The conditional means of the CLS estimators \(\hat{\varphi}_s\) and \(\hat{\mu}_{\varepsilon,s}\), \(s=1,\dots,S\), with respect to the \(\sigma\)-field

\[
\mathcal{F}_s = \sigma\{y_s, y_{s+S},\dots,y_{s+(N-1)S}\},
\]

are given by

\[
\mathbb{E}(\hat{\varphi}_s\mid\mathcal{F}_{s-1}) = \frac{\sum_{\tau=0}^{N-1}\mathbb{E}(y_{s+\tau S}\mid y_{s-1+\tau S})\,y_{s-1+\tau S}-\bar{y}_{s-1}\sum_{\tau=0}^{N-1}\mathbb{E}(y_{s+\tau S}\mid y_{s-1+\tau S})}{\sum_{\tau=0}^{N-1}\bigl(y_{s-1+\tau S}-\bar{y}_{s-1}\bigr)^2},
\]

where \(\mathbb{E}(y_{s+\tau S}\mid y_{s-1+\tau S})=\varphi_s y_{s-1+\tau S}+\mu_{\varepsilon,s}\), \(s=1,\dots,S\); hence we have, for \(s=1,\dots,S\),

\[
\mathbb{E}(\hat{\varphi}_s\mid\mathcal{F}_{s-1}) = \frac{\sum_{\tau=0}^{N-1}(\varphi_s y_{s-1+\tau S}+\mu_{\varepsilon,s})\,y_{s-1+\tau S}-\bar{y}_{s-1}\sum_{\tau=0}^{N-1}(\varphi_s y_{s-1+\tau S}+\mu_{\varepsilon,s})}{\sum_{\tau=0}^{N-1}\bigl(y_{s-1+\tau S}-\bar{y}_{s-1}\bigr)^2}.
\]

Hence, we have \(\mathbb{E}(\hat{\varphi}_s\mid\mathcal{F}_{s-1})=\varphi_s\), \(s=1,\dots,S\). The conditional mean of \(\hat{\mu}_{\varepsilon,s}\) is

\[
\mathbb{E}(\hat{\mu}_{\varepsilon,s}\mid\mathcal{F}_{s-1}) = N^{-1}\Bigl(\sum_{\tau=0}^{N-1}\mathbb{E}(y_{s+\tau S}\mid y_{s-1+\tau S})-N\varphi_s\bar{y}_{s-1}\Bigr).
\]

Hence, with simple manipulations, we obtain \(\mathbb{E}(\hat{\mu}_{\varepsilon,s}\mid\mathcal{F}_{s-1})=\mu_{\varepsilon,s}\), \(s=1,\dots,S\). ∎

Proof of Proposition 5.2

It is easy to check that \(\partial g/\partial\theta_{i,s}\) and \(\partial^2 g/\partial\theta_{i,s}\partial\theta_{j,s}\), \(i,j\in\{1,2\}\), \(s=1,\dots,S\), with \(g(\bar{\theta}_t,y_{t-1})=\mathbb{E}(y_t\mid y_{t-1})\), satisfy all the regularity conditions proposed in [15, p. 634]. Consequently, by [15, Theorem 3.1], we conclude that the CLS estimators \(\hat{\bar{\theta}}_{s,\mathrm{CLS}}\) are strongly consistent. It remains to prove that the following three conditions hold.

  (A) \(\mathbb{E}(y_t\mid y_{t-1},y_{t-2},\dots,y_0)=\mathbb{E}(y_t\mid y_{t-1})\), \(t\ge 1\) a.e.

  (B) \(\mathbb{E}\bigl[U_{s+\tau S}^2(\bar{\theta}_s)\,\bigl|(\partial g(\bar{\theta}_s,y_{s-1+\tau S})/\partial\theta_{i,s})(\partial g(\bar{\theta}_s,y_{s-1+\tau S})/\partial\theta_{j,s})\bigr|\bigr]<\infty\), \(i,j=1,2\), where

\[
U_{s+\tau S}(\bar{\theta}_s) = y_{s+\tau S}-g(\bar{\theta}_s,y_{s-1+\tau S}).
\]

  (C) \(\Sigma_s\) is non-singular.

Condition (A) is satisfied since, for fixed \(s\in\{1,\dots,S\}\), \(\{y_{s+\tau S}\}\) is a first-order Markov chain. In order to prove condition (B), we check that the components of the matrix \(\Omega_s\) are all finite, i.e.,

\[
\begin{aligned}
(\Omega_s)_{1,1} &= \mathbb{E}\Bigl[U_{s+\tau S}^2(\bar{\theta}_s)\Bigl(\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\varphi_s}\Bigr)^{\!2}\Bigr] = \mathbb{E}\bigl(y_{s-1+\tau S}^2\,U_{s+\tau S}^2(\bar{\theta}_s)\bigr)\\
&= \mathbb{E}\bigl(y_{s-1+\tau S}^2\,\mathbb{E}(U_{s+\tau S}^2(\bar{\theta}_s)\mid y_{s-1+\tau S})\bigr) = \mathbb{E}\bigl(y_{s-1+\tau S}^2\operatorname{Var}(y_{s+\tau S}\mid y_{s-1+\tau S})\bigr)\\
&= \varphi_s(1-\varphi_s)\mathbb{E}(y_{s-1+\tau S}^3)+\sigma_{\varepsilon,s}^2\mathbb{E}(y_{s-1+\tau S}^2) = \varphi_s(1-\varphi_s)\mu_{y,s-1}^{(3)}+\sigma_{\varepsilon,s}^2\mu_{y,s-1}^{(2)}<\infty.
\end{aligned}
\]

Similarly, we have

\[
\begin{aligned}
(\Omega_s)_{2,2} &= \mathbb{E}\Bigl[U_{s+\tau S}^2(\bar{\theta}_s)\Bigl(\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\mu_{\varepsilon,s}}\Bigr)^{\!2}\Bigr] = \mathbb{E}\bigl(\mathbb{E}(U_{s+\tau S}^2(\bar{\theta}_s)\mid y_{s-1+\tau S})\bigr)\\
&= \mathbb{E}\bigl(\operatorname{Var}(y_{s+\tau S}\mid y_{s-1+\tau S})\bigr) = \varphi_s(1-\varphi_s)\mu_{y,s-1}+\sigma_{\varepsilon,s}^2<\infty,\\
(\Omega_s)_{1,2} &= (\Omega_s)_{2,1} = \mathbb{E}\Bigl[U_{s+\tau S}^2(\bar{\theta}_s)\,\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\varphi_s}\,\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\mu_{\varepsilon,s}}\Bigr]\\
&= \mathbb{E}\bigl(y_{s-1+\tau S}\operatorname{Var}(y_{s+\tau S}\mid y_{s-1+\tau S})\bigr) = \varphi_s(1-\varphi_s)\mu_{y,s-1}^{(2)}+\sigma_{\varepsilon,s}^2\mu_{y,s-1}<\infty.
\end{aligned}
\]
Finally, the matrix Ω s is given as follows:

\[
\Omega_s = \begin{pmatrix}
\varphi_s(1-\varphi_s)\mu_{y,s-1}^{(3)}+\sigma_{\varepsilon,s}^2\mu_{y,s-1}^{(2)} & \varphi_s(1-\varphi_s)\mu_{y,s-1}^{(2)}+\sigma_{\varepsilon,s}^2\mu_{y,s-1}\\[2pt]
\varphi_s(1-\varphi_s)\mu_{y,s-1}^{(2)}+\sigma_{\varepsilon,s}^2\mu_{y,s-1} & \varphi_s(1-\varphi_s)\mu_{y,s-1}+\sigma_{\varepsilon,s}^2
\end{pmatrix}.
\]

Therefore, condition (B) is also satisfied. The elements of the matrix \(\Sigma_s\) are given by

\[
\begin{aligned}
(\Sigma_s)_{1,1} &= \mathbb{E}\Bigl[\Bigl(\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\varphi_s}\Bigr)^{\!2}\Bigr] = \mathbb{E}(y_{s-1+\tau S}^2) = \mu_{y,s-1}^{(2)},\\
(\Sigma_s)_{2,2} &= \mathbb{E}\Bigl[\Bigl(\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\mu_{\varepsilon,s}}\Bigr)^{\!2}\Bigr] = 1,\\
(\Sigma_s)_{1,2} &= (\Sigma_s)_{2,1} = \mathbb{E}\Bigl[\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\varphi_s}\,\frac{\partial g(\bar{\theta}_s,y_{s-1+\tau S})}{\partial\mu_{\varepsilon,s}}\Bigr] = \mathbb{E}(y_{s-1+\tau S}) = \mu_{y,s-1}.
\end{aligned}
\]

Hence, we have

\[
\Sigma_s = \mathbb{E}\begin{pmatrix} y_{s-1+\tau S}^2 & y_{s-1+\tau S}\\ y_{s-1+\tau S} & 1\end{pmatrix} = \begin{pmatrix}\mu_{y,s-1}^{(2)} & \mu_{y,s-1}\\ \mu_{y,s-1} & 1\end{pmatrix}.
\]

Then we have

\[
\Sigma_s^{-1} = \frac{1}{\mu_{y,s-1}^{(2)}-(\mu_{y,s-1})^2}\begin{pmatrix} 1 & -\mu_{y,s-1}\\ -\mu_{y,s-1} & \mu_{y,s-1}^{(2)}\end{pmatrix}.
\]

Indeed, the determinant of the matrix \(\Sigma_s\) is

\[
|\Sigma_s| = \mu_{y,s-1}^{(2)}-(\mu_{y,s-1})^2 = \sigma_{y,s-1}^2 > 0,
\]

which shows that the matrix \(\Sigma_s\) is invertible. Thus, condition (C) is also satisfied. Finally, by Klimko and Nelson [15, Theorem 3.2], the CLS estimators \(\hat{\bar{\theta}}_{s,\mathrm{CLS}}\) are asymptotically normally distributed. This completes the proof. ∎

Proof of Proposition 5.3

The proof follows straightforwardly from Proposition 4.1. ∎

Proof of Proposition 5.4

This follows from straightforward modifications of the proof of [11, Theorem 3]. ∎

Acknowledgements

The author wishes to express gratitude to Professor Karl Sabelfeld for his invaluable support and patience.

References

[1] M. A. Al-Osh and A. A. Alzaid, First-order integer-valued autoregressive (INAR(1)) process, J. Time Ser. Anal. 8 (1987), no. 3, 261–275. doi:10.1111/j.1467-9892.1987.tb00438.x

[2] N. Aries and N. Mamode Khan, On periodic integer-valued moving average (INMA(q)) models, J. Stat. Comput. Simul. 93 (2023), no. 3, 366–396. doi:10.1080/00949655.2022.2108031

[3] M. Bentarzi and N. Aries, On some periodic INARMA(p, q) models, Comm. Statist. Simulation Comput. 51 (2022), no. 10, 5773–5793. doi:10.1080/03610918.2020.1780443

[4] M. Bourguignon, J. Rodrigues and M. Santos-Neto, Extended Poisson INAR(1) processes with equidispersion, underdispersion and overdispersion, J. Appl. Stat. 46 (2019), no. 1, 101–118. doi:10.1080/02664763.2018.1458216

[5] E. T. da Cunha, M. Bourguignon and K. L. P. Vasconcellos, On shifted integer-valued autoregressive model for count time series showing equidispersion, underdispersion or overdispersion, Comm. Statist. Theory Methods 50 (2021), no. 20, 4822–4843. doi:10.1080/03610926.2020.1725822

[6] J.-P. Dion, G. Gauthier and A. Latour, Branching processes with immigration and integer-valued time series, Serdica Math. J. 21 (1995), no. 2, 123–136.

[7] R. Ferland, A. Latour and D. Oraichi, Integer-valued GARCH process, J. Time Ser. Anal. 27 (2006), no. 6, 923–942. doi:10.1111/j.1467-9892.2006.00496.x

[8] K. Fokianos, Count time series models, Handbook of Statistics. Vol. 30, Elsevier, Amsterdam (2012), 315–347. doi:10.1016/B978-0-444-53858-1.00012-0

[9] J. Franke and T. Subba Rao, Multivariate first-order integer-valued autoregressions, Technical Report, Technische Universität Kaiserslautern, 1995.

[10] R. K. Freeland, Statistical analysis of discrete-time series with applications to the analysis of workers compensation claims data, PhD thesis, University of British Columbia, Canada, 1998.

[11] R. K. Freeland and B. McCabe, Asymptotic properties of CLS estimators in the Poisson AR(1) model, Statist. Probab. Lett. 73 (2005), no. 2, 147–153. doi:10.1016/j.spl.2005.03.006

[12] E. G. Gladyšev, Periodically correlated random sequences, Soviet Math. 2 (1961), 385–388.

[13] J. Huang and F. Zhu, A new first-order integer-valued autoregressive model with Bell innovations, Entropy 23 (2021), no. 6, Paper No. 713. doi:10.3390/e23060713

[14] M. A. Jazi, G. Jones and C.-D. Lai, First-order integer valued AR processes with zero inflated Poisson innovations, J. Time Ser. Anal. 33 (2012), no. 6, 954–963. doi:10.1111/j.1467-9892.2012.00809.x

[15] L. A. Klimko and P. I. Nelson, On conditional least squares estimation for stochastic processes, Ann. Statist. 6 (1978), no. 3, 629–642. doi:10.1214/aos/1176344207

[16] A. Latour, The multivariate GINAR(p) process, Adv. in Appl. Probab. 29 (1997), no. 1, 228–248. doi:10.2307/1427868

[17] C. Liu, J. Cheng and D. Wang, Statistical inference for periodic self-exciting threshold integer-valued autoregressive processes, Entropy 23 (2021), no. 6, Paper No. 765. doi:10.3390/e23060765

[18] G. M. Ljung and G. E. P. Box, On a measure of lack of fit in time series models, Biometrika 65 (1978), no. 2, 297–303. doi:10.1093/biomet/65.2.297

[19] A. Manaa and M. Bentarzi, Periodic negative binomial INGARCH(1, 1) model, Comm. Statist. Simulation Comput. 52 (2023), no. 11, 5139–5162. doi:10.1080/03610918.2021.1990329

[20] E. McKenzie, Some simple models for discrete variate time series, J. Amer. Water Res. Assoc. 21 (1985), no. 4, 645–650. doi:10.1111/j.1752-1688.1985.tb05379.x

[21] M. Monteiro, M. G. Scotto and I. Pereira, Integer-valued autoregressive processes with periodic structure, J. Statist. Plann. Inference 140 (2010), no. 6, 1529–1541. doi:10.1016/j.jspi.2009.12.015

[22] D. Moriña, P. Puig, J. Ríos, A. Vilella and A. Trilla, A statistical model for hospital admissions caused by seasonal diseases, Stat. Med. 30 (2011), no. 26, 3125–3136. doi:10.1002/sim.4336

[23] S. Schweer and C. H. Weiß, Compound Poisson INAR(1) processes: Stochastic properties and testing for overdispersion, Comput. Statist. Data Anal. 77 (2014), 267–284. doi:10.1016/j.csda.2014.03.005

[24] M. D. Silva and V. L. Oliveira, Difference equations for the higher-order moments and cumulants of the INAR(1) model, J. Time Ser. Anal. 25 (2004), no. 3, 317–333. doi:10.1111/j.1467-9892.2004.01685.x

[25] R. Souakri and B. Mohamed, On periodic generalized Poisson INAR(p) models, Comm. Statist. Simulation Comput. (2022). doi:10.1080/03610918.2022.2155305

[26] F. W. Steutel and K. van Harn, Discrete analogues of self-decomposability and stability, Ann. Probab. 7 (1979), no. 5, 893–899. doi:10.1214/aop/1176994950

[27] C. H. Weiß, Controlling correlated processes of Poisson counts, Qual. Reliab. Eng. Int. 23 (2007), no. 6, 741–754. doi:10.1002/qre.875

[28] F. Zhu, Q. Li and D. Wang, A mixture integer-valued ARCH model, J. Statist. Plann. Inference 140 (2010), no. 7, 2025–2036. doi:10.1016/j.jspi.2010.01.037

[29] R. Zhu and H. Joe, Modelling count data time series with Markov processes based on binomial thinning, J. Time Ser. Anal. 27 (2006), no. 5, 725–738. doi:10.1111/j.1467-9892.2006.00485.x

Received: 2024-04-04
Revised: 2024-09-09
Accepted: 2024-09-10
Published Online: 2024-09-26
Published in Print: 2024-12-01

© 2024 Walter de Gruyter GmbH, Berlin/Boston
