
Product of bi-dimensional VAR(1) model components. An application to the cost of electricity load prediction errors

Joanna Janczura, Andrzej Puć, Łukasz Bielak and Agnieszka Wyłomańska
Published/Copyright: October 25, 2023

Abstract

The multi-dimensional vector autoregressive (VAR) time series is often used to model the impulse-response functions of macroeconomic variables. However, in some economic applications, the variable of main interest is a product of time series describing market variables, such as the cost, which is the product of price and volume. In this paper, we analyze the product of the components of a bi-dimensional VAR(1) model. For the introduced time series, we derive general formulas for the autocovariance function and study its properties for different cases of cross-dependence between the VAR(1) model components. The theoretical results are then illustrated in a simulation study for two types of bivariate distributions of the residual series, namely the Gaussian and Student’s t. The obtained results are applied to an electricity market case study, in which we show that the additional cost of balancing load prediction errors prior to delivery can be well described by a time series that is the product of the VAR(1) model components with a bivariate normal inverse Gaussian distribution.

MSC 2010: 62M10; 62H99; 91B84

Funding source: Narodowe Centrum Nauki

Award Identifier / Grant number: 2019/35/D/HS4/00369

Award Identifier / Grant number: 2020/37/B/HS4/00120

Funding statement: J. Janczura and A. Puć acknowledge financial support of the National Science Centre, Poland under Sonata Grant No. 2019/35/D/HS4/00369. The work of A. Wyłomańska was supported by the National Science Centre, Poland under Opus Grant No. 2020/37/B/HS4/00120 “Market risk model identification and validation using novel statistical, probabilistic, and machine learning tools”.

A Appendix

Special cases analysis – Case 2

The formula for $E(Y(t))$ given in equation (4.5) follows directly from equation (3.3). Using equation (3.4), we can calculate the ACVF of $\{Y(t)\}$ for $h=0,1,\ldots$. Indeed, we have

$$
\operatorname{ACVF}_Y(h) = \sum_{j,i=0}^{\infty}\sum_{m,p=-h}^{\infty} \phi_{11}^{j+m+h}\,\phi_{22}^{i+p+h}\, E\bigl[Z_1(t-j)Z_2(t-i)Z_1(t-m)Z_2(t-p)\bigr] - \frac{\rho_Z^{2}\,\sigma_{Z,1}^{2}\,\sigma_{Z,2}^{2}}{(1-\phi_{11}\phi_{22})^{2}}.
$$

Now, we can calculate the value

$$
r_{1,2}(t,j,m,i,p) = E\bigl[Z_1(t-j)Z_1(t-m)Z_2(t-i)Z_2(t-p)\bigr]
$$

for all $t\in\mathbb{Z}$ and $i,j=0,1,\ldots$, $m,p=-h,-h+1,\ldots$. Using the fact that, for each $t\in\mathbb{Z}$, the bi-dimensional residual series $\mathbf{Z}(t)$ is a zero-mean vector and, for $t\neq s$, $\mathbf{Z}(t)$ is independent of $\mathbf{Z}(s)$, one obtains the following:

$$
r_{1,2}(t,j,m,i,p) =
\begin{cases}
E\bigl[Z_1^{2}(t-j)Z_2^{2}(t-j)\bigr] & \text{if } i=j=p=m,\\
E\bigl[Z_1^{2}(t-j)\bigr]E\bigl[Z_2^{2}(t-i)\bigr] & \text{if } j=m,\ i=p,\ j\neq i,\\
E\bigl[Z_1(t-j)Z_2(t-j)\bigr]E\bigl[Z_1(t-m)Z_2(t-m)\bigr] & \text{if } j=i,\ m=p,\ j\neq m,\\
E\bigl[Z_1(t-j)Z_2(t-j)\bigr]E\bigl[Z_1(t-m)Z_2(t-m)\bigr] & \text{if } j=p,\ i=m,\ j\neq i.
\end{cases}
$$

Thus, we have

$$
r_{1,2}(t,j,m,i,p) =
\begin{cases}
m_Z & \text{if } i=j=p=m,\ i,j,m,p=0,1,2,\ldots,\\
\sigma_{Z,1}^{2}\sigma_{Z,2}^{2} & \text{if } j=m,\ i=p,\ j\neq i,\ i,j,m,p=0,1,2,\ldots,\\
\rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2} & \text{if } j=i,\ m=p,\ j\neq m,\ i,j=0,1,2,\ldots,\ m,p=-h,-h+1,\ldots,\\
\rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2} & \text{if } j=p,\ i=m,\ j\neq i,\ i,j,m,p=0,1,2,\ldots,
\end{cases}
$$

where the value $m_Z = E[Z_1^{2}(t)Z_2^{2}(t)]$ is independent of $t$; in all remaining index configurations $r_{1,2}(t,j,m,i,p)=0$. Therefore, we have

$$
\begin{aligned}
\operatorname{ACVF}_Y(h) ={}& m_Z\sum_{j=0}^{\infty}(\phi_{11}\phi_{22})^{2j+h}
+ \sigma_{Z,1}^{2}\sigma_{Z,2}^{2}\sum_{j=0}^{\infty}\phi_{11}^{2j+h}\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{2i+h}-\phi_{22}^{2j+h}\Bigr]\\
&+ \rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}\sum_{j=0}^{\infty}\phi_{11}^{j+h}\Bigl[\phi_{22}^{j+h}\sum_{m=0}^{\infty}(\phi_{11}\phi_{22})^{m}-\phi_{22}^{j+h}(\phi_{11}\phi_{22})^{j}\Bigr]\\
&+ \rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}\sum_{j=0}^{\infty}\phi_{11}^{j+h}\Bigl[\phi_{22}^{j+h}\sum_{m=-h}^{\infty}(\phi_{11}\phi_{22})^{m}-\phi_{22}^{j+h}(\phi_{11}\phi_{22})^{j}\Bigr]
- \frac{\rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}}{(1-\phi_{11}\phi_{22})^{2}}\\
={}& m_Z\frac{(\phi_{11}\phi_{22})^{h}}{1-(\phi_{11}\phi_{22})^{2}}
+ \sigma_{Z,1}^{2}\sigma_{Z,2}^{2}(\phi_{11}\phi_{22})^{h}\Bigl(\frac{1}{(1-\phi_{11}^{2})(1-\phi_{22}^{2})}-\frac{1}{1-(\phi_{11}\phi_{22})^{2}}\Bigr)\\
&+ \rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}(\phi_{11}\phi_{22})^{h}\Bigl(\frac{1}{(1-\phi_{11}\phi_{22})^{2}}-\frac{1}{1-(\phi_{11}\phi_{22})^{2}}\Bigr)\\
&+ \rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}(\phi_{11}\phi_{22})^{h}\Bigl(\frac{(\phi_{11}\phi_{22})^{-h}}{(1-\phi_{11}\phi_{22})^{2}}-\frac{1}{1-(\phi_{11}\phi_{22})^{2}}\Bigr)
- \frac{\rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}}{(1-\phi_{11}\phi_{22})^{2}}\\
={}& (\phi_{11}\phi_{22})^{h}\Bigl[\frac{m_Z-\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}-2\rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}}{1-(\phi_{11}\phi_{22})^{2}}
+ \frac{\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}}{(1-\phi_{11}^{2})(1-\phi_{22}^{2})}
+ \frac{\rho_Z^{2}\,\sigma_{Z,1}^{2}\sigma_{Z,2}^{2}}{(1-\phi_{11}\phi_{22})^{2}}\Bigr].
\end{aligned}
$$

Finally, taking $h=0$, one obtains $\operatorname{Var}(Y(t))$.
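The closed-form expression above can be checked numerically against a Monte Carlo simulation. The sketch below (with illustrative, hypothetical parameter values) simulates the diagonal VAR(1) of Case 2 with correlated Gaussian residuals, estimates $m_Z$ from the simulated noise, and compares the empirical autocovariance of $Y(t)=X_1(t)X_2(t)$ with the formula derived above.

```python
import numpy as np

# Illustrative (hypothetical) parameters for Case 2: diagonal Phi, correlated Gaussian noise.
phi11, phi22 = 0.6, 0.4
s1, s2, rho = 1.0, 1.5, 0.3
n, burn = 200_000, 1_000

rng = np.random.default_rng(0)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
Z = rng.multivariate_normal([0.0, 0.0], cov, size=n + burn)

# Simulate X1(t) = phi11 X1(t-1) + Z1(t), X2(t) = phi22 X2(t-1) + Z2(t) and Y = X1 * X2.
X = np.zeros((n + burn, 2))
for t in range(1, n + burn):
    X[t, 0] = phi11 * X[t - 1, 0] + Z[t, 0]
    X[t, 1] = phi22 * X[t - 1, 1] + Z[t, 1]
Y = X[burn:, 0] * X[burn:, 1]

def acvf_empirical(y, h):
    yc = y - y.mean()
    return np.mean(yc[: len(yc) - h] * yc[h:])

def acvf_theoretical(h, mZ):
    a = phi11 * phi22
    v = s1**2 * s2**2
    return a**h * ((mZ - v - 2 * rho**2 * v) / (1 - a**2)
                   + v / ((1 - phi11**2) * (1 - phi22**2))
                   + rho**2 * v / (1 - a)**2)

mZ = np.mean(Z[:, 0]**2 * Z[:, 1]**2)  # sample estimate of E[Z1^2 Z2^2]
for h in range(4):
    print(h, round(acvf_empirical(Y, h), 4), round(acvf_theoretical(h, mZ), 4))
```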

Special cases analysis – Case 3

One can show that, in this case, we have

$$
\phi_{11}(j)=\phi_{11}^{j},\qquad \phi_{22}(j)=\phi_{22}^{j},\qquad \phi_{21}(j)=0,\qquad j=0,1,\ldots,
$$

and the eigenvalues of the matrix $\Phi$ are equal to $\nu_1=\phi_{11}$ and $\nu_2=\phi_{22}$. Thus, according to equations (2.5) and (2.6), the following is fulfilled for $j=1,2,\ldots$:

$$
\phi_{12}(j)=
\begin{cases}
\dfrac{\phi_{22}^{j}-\phi_{11}^{j}}{\phi_{22}-\phi_{11}}\,\phi_{12} & \text{if } \phi_{11}\neq\phi_{22},\\[2ex]
j\,\phi_{11}^{j-1}\phi_{12} & \text{if } \phi_{11}=\phi_{22},
\end{cases}
$$

while, for $j=0$, $\phi_{12}(j)=0$. In order to fulfill condition (2.3), we assume that $|\phi_{11}|<1$ and $|\phi_{22}|<1$.
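As a quick sanity check of the closed form for $\phi_{12}(j)$, i.e. the $(1,2)$ entry of $\Phi^{j}$, one can compare it with a direct matrix power. A minimal sketch with hypothetical coefficient values (and $\phi_{21}=0$, as in this case):

```python
import numpy as np

# Hypothetical coefficients for illustration; Case 3 assumes phi21 = 0.
phi11, phi12, phi22 = 0.5, 0.8, 0.3
Phi = np.array([[phi11, phi12], [0.0, phi22]])

def phi12_closed(j):
    """Closed form for the (1,2) entry of Phi**j derived above."""
    if j == 0:
        return 0.0
    if phi11 != phi22:
        return (phi22**j - phi11**j) / (phi22 - phi11) * phi12
    return j * phi11**(j - 1) * phi12

for j in range(6):
    print(j, np.linalg.matrix_power(Phi, j)[0, 1], phi12_closed(j))
```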

Using equation (2.8), one obtains

$$
\sigma_{X,1}^{2} = \frac{\sigma_{Z,1}^{2}}{1-\phi_{11}^{2}} + \sigma_{Z,2}^{2}\sum_{j=0}^{\infty}\bigl(\phi_{12}(j)\bigr)^{2},
\qquad
\sigma_{X,2}^{2} = \frac{\sigma_{Z,2}^{2}}{1-\phi_{22}^{2}}.
$$

Thus, we have

$$
\sigma_{X,1}^{2} =
\begin{cases}
\dfrac{\sigma_{Z,1}^{2}}{1-\phi_{11}^{2}} + \dfrac{\sigma_{Z,2}^{2}\phi_{12}^{2}}{(\phi_{22}-\phi_{11})^{2}}\Bigl[\dfrac{\phi_{22}^{2}}{1-\phi_{22}^{2}} - \dfrac{2\phi_{11}\phi_{22}}{1-\phi_{11}\phi_{22}} + \dfrac{\phi_{11}^{2}}{1-\phi_{11}^{2}}\Bigr] & \text{if } \phi_{11}\neq\phi_{22},\\[2.5ex]
\dfrac{\sigma_{Z,1}^{2}}{1-\phi_{11}^{2}} + \dfrac{\sigma_{Z,2}^{2}\phi_{12}^{2}(1+\phi_{11}^{2})}{(1-\phi_{11}^{2})^{3}} & \text{if } \phi_{11}=\phi_{22}.
\end{cases}
$$

Moreover, from equation (2.9), we have

$$
\gamma_{X,1,2} = \sum_{j=1}^{\infty}\sigma_{Z,2}^{2}\,\phi_{12}(j)\,\phi_{22}^{j}.
$$

Thus, from the above, we obtain the formula for the expected value of the random variable $Y(t)$ for each $t\in\mathbb{Z}$; see equation (4.7).
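The variance and cross-covariance formulas above can be verified by truncating the defining series. A short sketch with hypothetical parameter values (assuming $\phi_{11}\neq\phi_{22}$):

```python
# Hypothetical values for illustration (Case 3: phi21 = 0, phi11 != phi22).
phi11, phi12, phi22 = 0.5, 0.8, 0.3
sZ1, sZ2 = 1.0, 1.2
J = 500  # truncation level for the infinite series

def phi12_j(j):
    return 0.0 if j == 0 else (phi22**j - phi11**j) / (phi22 - phi11) * phi12

# sigma_{X,1}^2: series definition vs. the closed form derived above
series = sZ1**2 / (1 - phi11**2) + sZ2**2 * sum(phi12_j(j)**2 for j in range(J))
closed = (sZ1**2 / (1 - phi11**2)
          + sZ2**2 * phi12**2 / (phi22 - phi11)**2
          * (phi22**2 / (1 - phi22**2)
             - 2 * phi11 * phi22 / (1 - phi11 * phi22)
             + phi11**2 / (1 - phi11**2)))
print(series, closed)

# cross-covariance gamma_{X,1,2} evaluated from its series representation
gamma12 = sZ2**2 * sum(phi12_j(j) * phi22**j for j in range(1, J))
print(gamma12)
```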

To obtain the explicit formula for $\operatorname{ACVF}_Y(h)$, we will use equation (3.4). For $h=0,1,2,\ldots$, we have

$$
\begin{aligned}
\operatorname{ACVF}_Y(h) &= \sum_{j,i=0}^{\infty}\sum_{m,p=-h}^{\infty}\sum_{k,l,n,r=1}^{2}\phi_{1k}(j)\phi_{2l}(i)\phi_{1n}(m+h)\phi_{2r}(p+h)\, E\bigl[Z_k(t-j)Z_l(t-i)Z_n(t-m)Z_r(t-p)\bigr] - \gamma_{X,1,2}^{2}\\
&= \sum_{j,i=0}^{\infty}\sum_{m,p=-h}^{\infty}\sum_{k,n=1}^{2}\phi_{1k}(j)\phi_{22}(i)\phi_{1n}(m+h)\phi_{22}(p+h)\, E\bigl[Z_k(t-j)Z_2(t-i)Z_n(t-m)Z_2(t-p)\bigr] - \gamma_{X,1,2}^{2}\\
&= \sum_{j,i=0}^{\infty}\sum_{m,p=-h}^{\infty}\phi_{22}^{i+p+h}\sum_{k,n=1}^{2}\phi_{1k}(j)\phi_{1n}(m+h)\, E\bigl[Z_k(t-j)Z_2(t-i)Z_n(t-m)Z_2(t-p)\bigr] - \gamma_{X,1,2}^{2}\\
&= \sum_{j,i=0}^{\infty}\sum_{m,p=-h}^{\infty}\phi_{22}^{i+p+h}\phi_{12}(j)\phi_{12}(m+h)\, E\bigl[Z_2(t-j)Z_2(t-i)Z_2(t-m)Z_2(t-p)\bigr]\\
&\quad + \sum_{j,i=0}^{\infty}\sum_{m,p=-h}^{\infty}\phi_{22}^{i+p+h}\phi_{11}^{j+m+h}\, E\bigl[Z_1(t-j)Z_2(t-i)Z_1(t-m)Z_2(t-p)\bigr] - \gamma_{X,1,2}^{2}.
\end{aligned}
$$

Moreover, the value

$$
r_{2,2}(t,j,m,i,p) = E\bigl[Z_2(t-j)Z_2(t-i)Z_2(t-m)Z_2(t-p)\bigr]
$$

is given by

$$
r_{2,2}(t,j,m,i,p) =
\begin{cases}
\kappa_Z & \text{if } i=j=m=p,\ i,j,m,p=0,1,\ldots,\\
\sigma_{Z,2}^{4} & \text{if } i=j,\ m=p,\ m\neq j,\ i,j=0,1,\ldots,\ m,p=-h,-h+1,\ldots,\\
\sigma_{Z,2}^{4} & \text{if } j=m,\ i=p,\ m\neq i,\ i,j,m,p=0,1,\ldots,\\
\sigma_{Z,2}^{4} & \text{if } j=p,\ i=m,\ m\neq j,\ i,j,m,p=0,1,\ldots,
\end{cases}
$$

where $\kappa_Z = E[Z_2^{4}(t)] < \infty$ is independent of $t$. Additionally, one can show that

$$
E\bigl[Z_1(t-j)Z_2(t-i)Z_1(t-m)Z_2(t-p)\bigr] = \sigma_{Z,1}^{2}\sigma_{Z,2}^{2}\quad \text{if } j=m,\ i=p,\ i,j,p,m=0,1,2,\ldots,
$$

and this expectation vanishes otherwise. Consequently,

$$
\begin{aligned}
\operatorname{ACVF}_Y(h) ={}& \kappa_Z\sum_{j=0}^{\infty}\phi_{12}(j)\,\phi_{22}^{2j+h}\,\phi_{12}(j+h)
+ \sigma_{Z,2}^{4}\sum_{j=0}^{\infty}\phi_{12}(j)\phi_{12}(j+h)\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{2i+h}-\phi_{22}^{2j+h}\Bigr]\\
&+ \sigma_{Z,2}^{4}\sum_{j=0}^{\infty}\phi_{22}^{j+h}\phi_{12}(j)\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{i}\phi_{12}(i+h)-\phi_{22}^{j}\phi_{12}(j+h)\Bigr]\\
&+ \sigma_{Z,2}^{4}\sum_{j=0}^{\infty}\phi_{22}^{j+h}\phi_{12}(j)\Bigl[\sum_{m=-h}^{\infty}\phi_{22}^{m}\phi_{12}(m+h)-\phi_{22}^{j}\phi_{12}(j+h)\Bigr]\\
&+ \sigma_{Z,1}^{2}\sigma_{Z,2}^{2}\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}\phi_{11}^{2j+h}\phi_{22}^{2i+h} - \gamma_{X,1,2}^{2}.
\end{aligned}
$$
Let us observe that, for $h=0,1,2,\ldots$, the following holds:

$$
\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}\phi_{11}^{2j+h}\phi_{22}^{2i+h} = \frac{(\phi_{11}\phi_{22})^{h}}{(1-\phi_{11}^{2})(1-\phi_{22}^{2})}.
$$

Now, to make the calculations simpler, let us assume that $\phi_{11}=0$ and $\phi_{22}\neq 0$. In this case, the matrix $\Phi$ (see equation (2.2)) has two different eigenvalues and $\phi_{12}(j)=\phi_{12}\,\phi_{22}^{j-1}$, $j=1,2,\ldots$. Clearly, $\phi_{11}(j)=0$ for $j=1,2,\ldots$, while $\phi_{11}(0)=1$. We have the following:

$$
\sum_{j=0}^{\infty}\phi_{12}(j)\,\phi_{22}^{2j+h}\,\phi_{12}(j+h) = \phi_{12}^{2}\,\phi_{22}^{2h}\sum_{j=1}^{\infty}\phi_{22}^{4j-2} = \phi_{12}^{2}\,\phi_{22}^{2h}\,\frac{\phi_{22}^{2}}{1-\phi_{22}^{4}},\qquad h=0,1,2,\ldots.
$$

Let us first consider the case $h=0$. We have

$$
\begin{aligned}
\sum_{j=0}^{\infty}\phi_{12}(j)\,\phi_{12}(j)\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{2i}-\phi_{22}^{2j}\Bigr]
&= \phi_{12}^{2}\sum_{j=1}^{\infty}\phi_{22}^{2j-2}\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{2i}-\phi_{22}^{2j}\Bigr]
= \phi_{12}^{2}\Bigl[\frac{1}{(1-\phi_{22}^{2})^{2}}-\frac{\phi_{22}^{2}}{1-\phi_{22}^{4}}\Bigr],\\
\sum_{j=0}^{\infty}\phi_{22}^{j}\,\phi_{12}(j)\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{i}\phi_{12}(i)-\phi_{22}^{j}\phi_{12}(j)\Bigr]
&= \frac{\phi_{12}^{2}}{\phi_{22}^{2}}\sum_{j=1}^{\infty}\phi_{22}^{2j}\Bigl[\sum_{i=1}^{\infty}\phi_{22}^{2i}-\phi_{22}^{2j}\Bigr]
= \phi_{12}^{2}\Bigl[\frac{\phi_{22}^{2}}{(1-\phi_{22}^{2})^{2}}-\frac{\phi_{22}^{2}}{1-\phi_{22}^{4}}\Bigr].
\end{aligned}
$$

On the other hand, for $h>0$, we have

$$
\begin{aligned}
\sum_{j=0}^{\infty}\phi_{12}(j)\,\phi_{12}(j+h)\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{2i+h}-\phi_{22}^{2j+h}\Bigr]
&= \phi_{12}^{2}\,\phi_{22}^{2h}\sum_{j=1}^{\infty}\phi_{22}^{2j-2}\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{2i}-\phi_{22}^{2j}\Bigr]
= \phi_{12}^{2}\,\phi_{22}^{2h}\Bigl[\frac{1}{(1-\phi_{22}^{2})^{2}}-\frac{\phi_{22}^{2}}{1-\phi_{22}^{4}}\Bigr],\\
\sum_{j=0}^{\infty}\phi_{22}^{j+h}\,\phi_{12}(j)\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{i}\phi_{12}(i+h)-\phi_{22}^{j}\phi_{12}(j+h)\Bigr]
&= \phi_{12}^{2}\,\phi_{22}^{2h}\sum_{j=1}^{\infty}\phi_{22}^{2j-1}\Bigl[\sum_{i=0}^{\infty}\phi_{22}^{2i-1}-\phi_{22}^{2j-1}\Bigr]
= \phi_{12}^{2}\,\phi_{22}^{2h}\Bigl[\frac{1}{(1-\phi_{22}^{2})^{2}}-\frac{\phi_{22}^{2}}{1-\phi_{22}^{4}}\Bigr],\\
\sum_{j=0}^{\infty}\phi_{22}^{j+h}\,\phi_{12}(j)\Bigl[\sum_{m=-h}^{\infty}\phi_{22}^{m}\phi_{12}(m+h)-\phi_{22}^{j}\phi_{12}(j+h)\Bigr]
&= \phi_{12}^{2}\sum_{j=1}^{\infty}\phi_{22}^{2j+h-1}\Bigl[\sum_{m=-h+1}^{\infty}\phi_{22}^{2m+h-1}-\phi_{22}^{2j+h-1}\Bigr]
= \phi_{12}^{2}\,\phi_{22}^{2h}\Bigl[\frac{\phi_{22}^{-2h}\,\phi_{22}^{2}}{(1-\phi_{22}^{2})^{2}}-\frac{\phi_{22}^{2}}{1-\phi_{22}^{4}}\Bigr].
\end{aligned}
$$

Finally, assuming $\phi_{11}=0$, we obtain the formulas for $\operatorname{Var}(Y(t))$ and $\operatorname{ACVF}_Y(h)$ given in equations (4.8) and (4.9), respectively.
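The partial-sum identities used above for $h>0$ can be verified numerically by truncating the series. A minimal sketch with hypothetical values of $\phi_{12}$, $\phi_{22}$ and the lag $h$ (recall that here $\phi_{11}=0$, so $\phi_{12}(j)=\phi_{12}\phi_{22}^{j-1}$ for $j\geq 1$):

```python
# Hypothetical values for illustration; phi11 = 0, so phi12(j) = phi12 * phi22**(j - 1).
phi12, phi22 = 0.7, 0.5
h, J = 3, 300  # lag and truncation level

def p12(j):
    return 0.0 if j == 0 else phi12 * phi22**(j - 1)

# first sum and its closed form
inner_a = sum(phi22**(2 * i + h) for i in range(J))
s_a = sum(p12(j) * p12(j + h) * (inner_a - phi22**(2 * j + h)) for j in range(J))
c_a = phi12**2 * phi22**(2 * h) * (1 / (1 - phi22**2)**2 - phi22**2 / (1 - phi22**4))

# second sum; it shares the same closed form as the first one
inner_b = sum(phi22**i * p12(i + h) for i in range(J))
s_b = sum(phi22**(j + h) * p12(j) * (inner_b - phi22**j * p12(j + h)) for j in range(J))

# third sum, with the inner index starting at m = -h
inner_c = sum(phi22**m * p12(m + h) for m in range(-h, J))
s_c = sum(phi22**(j + h) * p12(j) * (inner_c - phi22**j * p12(j + h)) for j in range(J))
c_c = phi12**2 * phi22**(2 * h) * (phi22**(-2 * h) * phi22**2 / (1 - phi22**2)**2
                                   - phi22**2 / (1 - phi22**4))

print(s_a, c_a)
print(s_b, c_a)
print(s_c, c_c)
```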

Bivariate Gaussian distribution

The bivariate Gaussian distributed random vector $(Z_1,Z_2)$ has the following PDF [36]:

$$
\text{(A.1)}\quad
f_{Z_1,Z_2}(z_1,z_2) = \frac{\exp\Bigl\{-\dfrac{1}{2(1-\rho^{2})}\Bigl[\dfrac{(z_1-\mu_{Z,1})^{2}}{\sigma_{Z,1}^{2}} - 2\rho\Bigl(\dfrac{z_1-\mu_{Z,1}}{\sigma_{Z,1}}\Bigr)\Bigl(\dfrac{z_2-\mu_{Z,2}}{\sigma_{Z,2}}\Bigr) + \dfrac{(z_2-\mu_{Z,2})^{2}}{\sigma_{Z,2}^{2}}\Bigr]\Bigr\}}{2\pi\sigma_{Z,1}\sigma_{Z,2}\sqrt{1-\rho^{2}}},\qquad z_1,z_2\in\mathbb{R},
$$

where $\rho\in(-1,1)$ is the correlation coefficient between the random variables $Z_1$ and $Z_2$ (denoted in the main text as $\rho_Z$); $\mu_{Z,1},\mu_{Z,2}\in\mathbb{R}$ are the corresponding expected values, while $\sigma_{Z,1}^{2},\sigma_{Z,2}^{2}>0$ are the corresponding variances. When $\rho=0$, the PDF of the random vector $(Z_1,Z_2)$ is just the product of the PDFs of the Gaussian distributed random variables.
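As a quick numerical cross-check of equation (A.1), the explicit expression can be compared with a standard library implementation of the bivariate normal density. A sketch with hypothetical parameter values, using scipy.stats.multivariate_normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical parameters for illustration.
mu1, mu2 = 0.2, -0.1
s1, s2, rho = 1.0, 1.5, 0.4

def pdf_explicit(z1, z2):
    """Equation (A.1) written out directly."""
    q = ((z1 - mu1)**2 / s1**2
         - 2 * rho * (z1 - mu1) / s1 * (z2 - mu2) / s2
         + (z2 - mu2)**2 / s2**2)
    return np.exp(-q / (2 * (1 - rho**2))) / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))

cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
print(pdf_explicit(0.5, 1.0))
print(multivariate_normal(mean=[mu1, mu2], cov=cov).pdf([0.5, 1.0]))
```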

Bivariate Student’s t distribution

The bivariate Student’s t distributed random vector $(Z_1,Z_2)$ is constructed as follows. Let us assume that $(N_1,N_2)$ is the bivariate Gaussian vector defined by the PDF in equation (A.1) with expected values equal to zero, unit variances and $\rho\in(-1,1)$ being its correlation coefficient. Moreover, let $\chi^{2}$ be the one-dimensional random variable with chi-square distribution with $\eta>0$ degrees of freedom and assume that $(N_1,N_2)$ and $\chi^{2}$ are independent. Then the random vector defined as

$$
(Z_1,Z_2) = \frac{(N_1,N_2)}{\sqrt{\chi^{2}/\eta}}
$$

has a bivariate Student’s t distribution with $\eta$ degrees of freedom and its PDF is given by (see [6])

$$
\text{(A.2)}\quad
f_{Z_1,Z_2}(z_1,z_2) = \frac{1}{2\pi\sqrt{1-\rho^{2}}}\Bigl[1 + \frac{z_1^{2} - 2\rho z_1 z_2 + z_2^{2}}{\eta(1-\rho^{2})}\Bigr]^{-\frac{\eta+2}{2}},\qquad z_1,z_2\in\mathbb{R}.
$$

The marginal random variables $Z_1$ and $Z_2$ have the one-dimensional Student’s t distribution defined by the following PDF [11]:

$$
\text{(A.3)}\quad
f_{Z_1}(z_1) = \frac{\Gamma((\eta+1)/2)}{\sqrt{\eta\pi}\,\Gamma(\eta/2)}\Bigl(1+\frac{z_1^{2}}{\eta}\Bigr)^{-\frac{\eta+1}{2}},\qquad z_1\in\mathbb{R},
$$

where $\Gamma(\cdot)$ is the gamma function, i.e. $\Gamma(\alpha)=\int_{0}^{\infty}t^{\alpha-1}e^{-t}\,dt$ for $\alpha$ such that $\operatorname{Re}(\alpha)>0$. Note that the number of degrees of freedom, $\eta$, is equal for both marginal variables. It is worth highlighting that the correlation $\rho_Z$ between the random variables $Z_1$ and $Z_2$ is equal to the parameter $\rho$. However, its zero value is not equivalent to the independence of the random variables $Z_1$ and $Z_2$ since, in that case, the PDF of the random vector $(Z_1,Z_2)$ (see equation (A.2)) is not a product of the PDFs of the marginal distributions; see equation (A.3). Hence, if $Z_1$ and $Z_2$ are independent, the PDF of the random vector is given by

$$
f_{Z_1,Z_2}(z_1,z_2) = \frac{\Gamma((\eta_{Z,1}+1)/2)\,\Gamma((\eta_{Z,2}+1)/2)}{\sqrt{\eta_{Z,1}\eta_{Z,2}}\,\pi\,\Gamma(\eta_{Z,1}/2)\,\Gamma(\eta_{Z,2}/2)}\Bigl(1+\frac{z_1^{2}}{\eta_{Z,1}}\Bigr)^{-\frac{\eta_{Z,1}+1}{2}}\Bigl(1+\frac{z_2^{2}}{\eta_{Z,2}}\Bigr)^{-\frac{\eta_{Z,2}+1}{2}},\qquad z_1,z_2\in\mathbb{R},
$$

where $\eta_{Z,1}>0$, $\eta_{Z,2}>0$ are the degrees of freedom parameters of $Z_1$ and $Z_2$, respectively.

The Student’s t distribution defined in (A.3) has zero mean and variance equal to $\sigma_{Z,1}^{2}=\frac{\eta}{\eta-2}$ (finite for $\eta>2$). It can be generalized to the Student’s t location-scale distribution by applying the transformation $Z(\mu,\lambda):=\mu+\lambda Z$, where $Z$ is Student’s t distributed. This yields a three-parameter $(\mu,\lambda,\eta)$ distribution, with $\mu$ being the shift parameter, $\lambda>0$ the scale parameter and $\eta>0$ the degrees of freedom. The variance of the Student’s t location-scale random variable is equal to $\sigma_{Z(\mu,\lambda)}^{2}=\lambda^{2}\frac{\eta}{\eta-2}$.
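The mixture construction at the beginning of this subsection translates directly into a sampling scheme. A brief sketch with hypothetical values of $\rho$ and $\eta$; the sample correlation should be close to $\rho$ and the marginal variance close to $\eta/(\eta-2)$:

```python
import numpy as np

# Hypothetical parameters for illustration.
rho, eta = 0.4, 5.0
n = 500_000
rng = np.random.default_rng(1)

# (N1, N2): standard bivariate Gaussian with correlation rho
N = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
chi2 = rng.chisquare(eta, size=n)

# mixture construction of the bivariate Student's t vector
Z = N / np.sqrt(chi2 / eta)[:, None]

print(np.corrcoef(Z[:, 0], Z[:, 1])[0, 1])  # should be close to rho
print(np.var(Z[:, 0]), eta / (eta - 2))     # marginal variance vs. eta / (eta - 2)
```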

Bivariate normal inverse Gaussian distribution

A bivariate normal inverse Gaussian distribution is constructed as a variance-mean mixture of a two-dimensional Gaussian random vector with a univariate inverse Gaussian distributed mixing variable [29]. The density of a bivariate NIG variable $\mathbf{Z}=(Z_1,Z_2)$ is given explicitly by (see [3])

$$
f_{(Z_1,Z_2)}(\mathbf{z};\boldsymbol{\mu},\alpha,\boldsymbol{\beta},\delta) = \frac{\delta}{2^{(d-1)/2}}\Bigl[\frac{\alpha}{\pi q(\mathbf{z})}\Bigr]^{(d+1)/2}\exp\bigl[p(\mathbf{z})\bigr]\,K_{(d+1)/2}\bigl[\alpha q(\mathbf{z})\bigr],
$$

where

$$
q(\mathbf{z}) = \sqrt{\delta^{2}+(\mathbf{z}-\boldsymbol{\mu})^{T}\Sigma^{-1}(\mathbf{z}-\boldsymbol{\mu})},\qquad
p(\mathbf{z}) = \delta\sqrt{\alpha^{2}-\boldsymbol{\beta}^{T}\Sigma\boldsymbol{\beta}} + \boldsymbol{\beta}^{T}(\mathbf{z}-\boldsymbol{\mu}),
$$

$K_d(x)$ is the modified Bessel function of the second kind with index $d$, the dimension of the random vector is $d=2$, and $\delta>0$, $\alpha^{2}>\boldsymbol{\beta}^{T}\Sigma\boldsymbol{\beta}$, $\boldsymbol{\mu}\in\mathbb{R}^{2}$, $\boldsymbol{\beta}\in\mathbb{R}^{2}$, $\Sigma\in\mathbb{R}^{2\times 2}$ are the parameters of the distribution. Parameter $\alpha$ controls the shape of the density, $\delta$ is the scale parameter, $\boldsymbol{\beta}$ is the skewness parameter, $\boldsymbol{\mu}$ is the translation parameter and the matrix $\Sigma$ describes the degree of correlation between the components of $\mathbf{Z}$. For more details on the NIG distribution, see e.g. [7, 29, 3, 34].
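For reference, the density above can be evaluated directly. A minimal sketch with hypothetical parameter values, using scipy.special.kv for the modified Bessel function of the second kind; here $\Sigma$ is chosen with determinant 1, the normalization under which this form of the density is typically stated in [3, 34]:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

# Hypothetical parameters for illustration; they must satisfy alpha**2 > beta' Sigma beta.
d = 2
alpha, delta = 2.0, 1.0
mu = np.array([0.0, 0.0])
beta = np.array([0.3, -0.2])
Sigma = np.array([[1.25, 0.5], [0.5, 1.0]])  # chosen with det(Sigma) = 1

def nig_pdf(z):
    """Bivariate NIG density as written above."""
    z = np.asarray(z, dtype=float)
    q = np.sqrt(delta**2 + (z - mu) @ np.linalg.inv(Sigma) @ (z - mu))
    p = delta * np.sqrt(alpha**2 - beta @ Sigma @ beta) + beta @ (z - mu)
    return (delta / 2**((d - 1) / 2)
            * (alpha / (np.pi * q))**((d + 1) / 2)
            * np.exp(p) * kv((d + 1) / 2, alpha * q))

print(nig_pdf([0.5, -0.3]))
```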

References

[1] J. Adamska, Ł. Bielak, J. Janczura and A. Wyłomańska, From multi- to univariate: A product random variable with an application to electricity market transactions. Pareto and Student’s t distribution case, Mathematics 10 (2022), no. 18, Article ID 3371. 10.3390/math10183371

[2] M. Ahsanullah, B. M. G. Kibria and M. Shakil, Normal and Student’s t Distributions and Their Applications, Atlantis Stud. Probab. Stat. 4, Atlantis Press, Paris, 2014. 10.2991/978-94-6239-061-4

[3] A. Andresen, S. Koekebakker and S. Westgaard, Modeling electricity forward prices using the multivariate normal inverse Gaussian distribution, J. Energy Markets 3 (2010), no. 3, 1–23. 10.21314/JEM.2010.051

[4] S. Ankargren, M. Unosson and Y. Yang, A flexible mixed-frequency vector autoregression with a steady-state prior, J. Time Ser. Econom. 12 (2020), no. 2, Article ID 20180034. 10.1515/jtse-2018-0034

[5] L. A. Aroian, V. S. Taneja and L. W. Cornwell, Mathematical forms of the distribution of the product of two normal variables, Comm. Statist. Theory Methods 7 (1978), no. 2, 165–172. 10.1080/03610927808827610

[6] N. Balakrishnan and C.-D. Lai, Continuous Bivariate Distributions, Springer, New York, 2009. 10.1007/b101765_6

[7] O. E. Barndorff-Nielsen, Normal inverse Gaussian distributions and stochastic volatility modelling, Scand. J. Stat. 24 (1997), no. 1, 1–13. 10.1111/1467-9469.t01-1-00045

[8] N. Bhargav, C. R. Nogueira da Silva, Y. J. Chun, E. J. Leonardo, S. L. Cotton and M. D. Yacoub, On the product of two κ-μ random variables and its application to double and composite fading channels, IEEE Trans. Wireless Commun. 17 (2018), no. 4, 2457–2470. 10.1109/TWC.2018.2796562

[9] Ł. Bielak, A. Grzesiek, J. Janczura and A. Wyłomańska, Market risk factors analysis for an international mining company. Multi-dimensional heavy-tailed-based modelling, Res. Policy 74 (2021), Article ID 102308. 10.1016/j.resourpol.2021.102308

[10] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer Texts Statist., Springer, Cham, 2016. 10.1007/978-3-319-29854-2

[11] W. G. Cochran, The distribution of quadratic forms in a normal system, with applications to the analysis of covariance, Math. Proc. Cambridge Philos. Soc. 30 (1934), no. 2, 178–191. 10.1017/S0305004100016595

[12] P. Di Tella and C. Geiss, Product and moment formulas for iterated stochastic integrals (associated with Lévy processes), Stochastics 92 (2020), no. 6, 969–1004. 10.1080/17442508.2019.1680677

[13] R. F. Engle, Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50 (1982), no. 4, 987–1007. 10.2307/1912773

[14] C. Fezzi and D. Bunn, Structural analysis of electricity demand and supply interactions, Oxford Bull. Econ. Statist. 72 (2010), no. 6, 827–856. 10.1111/j.1468-0084.2010.00596.x

[15] J. Galambos and I. Simonelli, Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions, CRC Press, Boca Raton, 2004. 10.1201/9781482276633

[16] A. Grzesiek, P. Giri, S. Sundar and A. Wyłomańska, Measures of cross-dependence for bidimensional periodic AR(1) model with α-stable distribution, J. Time Series Anal. 41 (2020), no. 6, 785–807. 10.1111/jtsa.12548

[17] A. Grzesiek, G. Sikora, M. Teuerle and A. Wyłomańska, Spatio-temporal dependence measures for bivariate AR(1) models with α-stable noise, J. Time Series Anal. 41 (2020), no. 3, 454–475. 10.1111/jtsa.12517

[18] A. Grzesiek, S. Sundar and A. Wyłomańska, Fractional lower order covariance-based estimator for bidimensional AR(1) model with stable distribution, Int. J. Adv. Eng. Sci. Appl. Math. 11 (2019), no. 3, 217–229. 10.1007/s12572-019-00250-9

[19] A. Grzesiek, M. Teuerle and A. Wyłomańska, Cross-codifference for bidimensional VAR(1) time series with infinite variance, Comm. Statist. Simulation Comput. 51 (2022), no. 3, 1355–1380. 10.1080/03610918.2019.1670840

[20] P. R. Hansen, Structural changes in the cointegrated vector autoregressive model, J. Econometrics 114 (2003), no. 2, 261–295. 10.1016/S0304-4076(03)00085-X

[21] S. Johansen, Modelling of cointegration in the vector autoregressive model, Econ. Model. 17 (2000), no. 3, 359–373. 10.1016/S0264-9993(99)00043-7

[22] H. Kerem Cigizoglu and M. Bayazit, A generalized seasonal model for flow duration curve, Hydrological Process. 14 (2000), no. 6, 1053–1067. 10.1002/(SICI)1099-1085(20000430)14:6<1053::AID-HYP996>3.0.CO;2-B

[23] Y.-J. Lee and H.-H. Shih, The product formula of multiple Lévy–Itô integrals, Bull. Inst. Math. Acad. Sinica 32 (2004), no. 2, 71–95.

[24] Y. Li, Q. He and R. S. Blum, On the product of two correlated complex Gaussian random variables, IEEE Signal Process. Lett. 27 (2020), 16–20. 10.1109/LSP.2019.2953634

[25] J. Lips, Do they still matter?—Impact of fossil fuels on electricity prices in the light of increased renewable generation, J. Time Ser. Econom. 9 (2017), no. 2, Article ID 20160018. 10.1515/jtse-2016-0018

[26] H. Lütkepohl, Comparison of criteria for estimating the order of a vector autoregressive process, J. Time Ser. Anal. 6 (1985), no. 1, 35–52. 10.1111/j.1467-9892.1985.tb00396.x

[27] S. Ly, K.-H. Pho, S. Ly and W.-K. Wong, Determining distribution for the product of random variables by using copulas, Risks 7 (2019). 10.3390/risks7010023

[28] K. Maciejowska, Fundamental and speculative shocks, what drives electricity prices?, 11th International Conference on the European Energy Market (EEM14), IEEE Press, Piscataway (2014), 1–5. 10.1109/EEM.2014.6861289

[29] A. J. McNeil, R. Frey and P. Embrechts, Quantitative Risk Management: Concepts, Techniques and Tools, Chapter 3, Princeton Ser. Finance, Princeton University, Princeton, 2005.

[30] S. Nadarajah and D. K. Dey, On the product and ratio of t random variables, Appl. Math. Lett. 19 (2006), no. 1, 45–55. 10.1016/j.aml.2005.01.004

[31] S. Nadarajah and S. Kotz, A note on the product of normal and Laplace random variables, Braz. J. Probab. Stat. 19 (2005), no. 1, 33–38.

[32] S. Nadarajah and S. Kotz, On the product and ratio of gamma and Weibull random variables, Econometric Theory 22 (2006), no. 2, 338–344. 10.1017/S0266466606060154

[33] S. Nadarajah and S. Kotz, On the linear combination, product and ratio of normal and Laplace random variables, J. Franklin Inst. 348 (2011), no. 4, 810–822. 10.1016/j.jfranklin.2011.01.005

[34] T. A. Øigård, A. Hanssen and R. E. Hansen, The multivariate normal inverse Gaussian distribution: EM-estimation and analysis of synthetic aperture sonar data, 12th European Signal Processing Conference, IEEE Press, Vienna (2004), 1433–1436.

[35] H. Podolski, The distribution of a product of n independent random variables with generalized gamma distribution, Demonstr. Math. 4 (1972), 119–123. 10.1515/dema-1972-0205

[36] G. G. Roussas, Joint and conditional p.d.f.’s, conditional expectation and variance, moment generating function, covariance, and correlation coefficient, An Introduction to Probability and Statistical Inference, Academic Press, New York (2015), 135–186. 10.1016/B978-0-12-800114-1.00004-4

[37] F. Russo and P. Vallois, Product of two multiple stochastic integrals with respect to a normal martingale, Stochastic Process. Appl. 73 (1998), no. 1, 47–68. 10.1016/S0304-4149(97)00101-4

[38] P. Saikkonen and H. Lütkepohl, Trend adjustment prior to testing for the cointegrating rank of a vector autoregressive process, J. Time Ser. Anal. 21 (2000), no. 4, 435–456. 10.1111/1467-9892.00192

[39] J. Salo, H. M. El-Sallabi and P. Vainikainen, The distribution of the product of independent Rayleigh random variables, IEEE Trans. Antennas Propagation 54 (2006), no. 2, 639–643. 10.1109/TAP.2005.863087

[40] A. Seijas-Macías and A. Oliveira, An approach to distribution of the product of two normal variables, Discuss. Math. Probab. Stat. 32 (2012), no. 1–2, 87–99. 10.7151/dmps.1146

[41] D. J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, 5th ed., Chapman and Hall/CRC, Boca Raton, 2011.

[42] W. E. Wecker, A note on the time series which is the product of two stationary time series, Stochastic Process. Appl. 8 (1978), no. 2, 153–157. 10.1016/0304-4149(78)90004-2

[43] R. Weron, Modeling and Forecasting Electricity Loads and Prices: A Statistical Approach, Wiley Finance Ser., John Wiley & Sons, Chichester, 2006. 10.1002/9781118673362

[44] H. White and C. W. J. Granger, Consideration of trends in time series, J. Time Ser. Econom. 3 (2011), no. 1, Article ID 2. 10.2202/1941-1928.1092

[45] K. S. Williams, The nth power of a 2 × 2 matrix, Math. Mag. 65 (1992), no. 5, 336. 10.1080/0025570X.1992.11996049

[46] P. S. Wilson and R. Toumi, A fundamental probability distribution for heavy rainfall, Geophys. Res. Lett. 32 (2005). 10.1029/2005GL022465

[47] Y. Yang and Y. Wang, Tail behavior of the product of two dependent random variables with applications to risk theory, Extremes 16 (2013), no. 1, 55–74. 10.1007/s10687-012-0153-2

[48] E. Zivot and J. Wang, Vector autoregressive models for multivariate time series, Modeling Financial Time Series with S-Plus, Springer, New York (2003), 385–429. 10.1007/978-0-387-21763-5

[49] European association for the cooperation of transmission system operators (TSOs) for electricity, 2021, https://transparency.entsoe.eu/.

Received: 2022-06-03
Revised: 2023-08-10
Accepted: 2023-09-13
Published Online: 2023-10-25
Published in Print: 2024-01-01

© 2023 Walter de Gruyter GmbH, Berlin/Boston
