
Extreme value techniques for stress scenario selection under elliptical symmetry and beyond

Menglin Zhou and Natalia Nolde
Published/Copyright: February 6, 2025

Abstract

The paper considers the problem of stress scenario selection, known as reverse stress testing, in the context of portfolios of financial assets. Stress scenarios are loosely defined as the most probable values of changes in risk factors for a given portfolio that lead to extreme portfolio losses. We extend the estimator of stress scenarios proposed in [P. Glasserman, C. Kang and W. Kang, Stress scenario selection by empirical likelihood, Quant. Finance 15 (2015), 1, 25–41] under elliptical symmetry to address the issue of data sparsity in the tail regions by incorporating extreme value techniques. The resulting estimator is shown to be consistent, asymptotically normally distributed and computationally efficient. The paper also proposes an alternative estimator that can be used when the joint distribution of risk factor changes is not elliptical but comes from the family of skew-elliptical distributions. We investigate the finite-sample performance of the two estimators in simulation studies and apply them to two financial portfolios.

MSC 2020: 60G70; 62P99

Funding statement: The authors acknowledge the financial support of the UBC-Scotiabank Risk Analytics Initiative and the Natural Sciences and Engineering Research Council of Canada.

A Proofs – Section 3

A.1 Proof of Lemma 3.4

From (2.5), $X = R\,Q^\top S$, where $Q^\top Q = \Sigma$, and thus $Y = c^\top X = R\,(Qc)^\top S$. Let
\[
\Theta = (\Theta_1,\dots,\Theta_{d-1}) \in [0,\pi)^{d-2}\times[0,2\pi) =: D.
\]
In spherical coordinates, we have
\[
S = \bigl(\cos\Theta_1,\ \sin\Theta_1\cos\Theta_2,\ \dots,\ \sin\Theta_1\cdots\sin\Theta_{d-1}\bigr).
\]
Let $I(\Theta) = (Qc)^\top S$ and $D_I = \{\theta\in D : I(\theta) > 0\}$. For any $x > 0$,

(A.1)
\[
\frac{P(Y\ge tx)}{P(R\ge t)} = \frac{P(R\,I(\Theta)\ge tx)}{P(R\ge t)} = \frac{\int_{D_I} f_\Theta(\theta)\int_{tx/I(\theta)}^{\infty} f_R(r)\,dr\,d\theta}{P(R\ge t)} = \int_{D_I} f_\Theta(\theta)\,\frac{\bar F_R(tx/I(\theta))}{\bar F_R(t)}\,d\theta \to x^{-1/\gamma}\int_{D_I} f_\Theta(\theta)\,(I(\theta))^{1/\gamma}\,d\theta, \quad t\to\infty.
\]

The above limit indicates that $P(Y\ge tx)/P(Y\ge t) \to x^{-1/\gamma}$ as $t\to\infty$, which means that $Y$ is regularly varying with index $1/\gamma$. Let
\[
\varpi(\gamma,\rho) = \int_{D_I} f_\Theta(\theta)\,(I(\theta))^{(1-\rho)/\gamma}\,d\theta \in (0,\infty).
\]

From (A.1), we have

\begin{align*}
\frac{P(Y\ge tx)/P(Y\ge t) - x^{-1/\gamma}}{\tilde A_R(t)}
&= \frac{P(Y\ge tx)/P(R\ge t) - x^{-1/\gamma}\,P(Y\ge t)/P(R\ge t)}{\tilde A_R(t)\cdot P(Y\ge t)/P(R\ge t)}\\
&= \frac{P(R\ge t)}{P(Y\ge t)}\Biggl[\int_{D_I}\frac{f_\Theta(\theta)}{\tilde A_R(t)}\,\frac{\bar F_R(tx/I(\theta))}{\bar F_R(t)}\,d\theta - x^{-1/\gamma}\int_{D_I}\frac{f_\Theta(\theta)}{\tilde A_R(t)}\,\frac{\bar F_R(t/I(\theta))}{\bar F_R(t)}\,d\theta\Biggr]\\
&= \frac{P(R\ge t)}{P(Y\ge t)}\int_{D_I}\frac{f_\Theta(\theta)}{\tilde A_R(t)}\Biggl[\frac{\bar F_R(tx/I(\theta))}{\bar F_R(t)} - \Bigl(\frac{x}{I(\theta)}\Bigr)^{-1/\gamma}\Biggr]d\theta - \frac{P(R\ge t)}{P(Y\ge t)}\,x^{-1/\gamma}\int_{D_I}\frac{f_\Theta(\theta)}{\tilde A_R(t)}\Biggl[\frac{\bar F_R(t/I(\theta))}{\bar F_R(t)} - \Bigl(\frac{1}{I(\theta)}\Bigr)^{-1/\gamma}\Biggr]d\theta\\
&\to \frac{1}{\varpi(\gamma,0)}\int_{D_I} f_\Theta(\theta)\Bigl(\frac{x}{I(\theta)}\Bigr)^{-1/\gamma}\frac{(x/I(\theta))^{\rho/\gamma}-1}{\rho\gamma}\,d\theta - \frac{1}{\varpi(\gamma,0)}\,x^{-1/\gamma}\int_{D_I} f_\Theta(\theta)\Bigl(\frac{1}{I(\theta)}\Bigr)^{-1/\gamma}\frac{(1/I(\theta))^{\rho/\gamma}-1}{\rho\gamma}\,d\theta\\
&= \frac{1}{\varpi(\gamma,0)\,\rho\gamma}\bigl[x^{(\rho-1)/\gamma}\,\varpi(\gamma,\rho) - x^{-1/\gamma}\,\varpi(\gamma,0) - x^{-1/\gamma}\,\varpi(\gamma,\rho) + x^{-1/\gamma}\,\varpi(\gamma,0)\bigr]\\
&= \frac{\varpi(\gamma,\rho)}{\varpi(\gamma,0)}\,x^{-1/\gamma}\,\frac{x^{\rho/\gamma}-1}{\rho\gamma}, \quad t\to\infty.
\end{align*}
Letting $\tilde A_Y(t) = \frac{\varpi(\gamma,\rho)}{\varpi(\gamma,0)}\,\tilde A_R(t)$ completes the proof.
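As an aside not in the original text, the limit in (A.1) can be illustrated numerically in dimension $d = 2$ with a Pareto radial part $\bar F_R(r) = r^{-1/\gamma}$, $r\ge 1$; all concrete choices below, including $Qc = (0.8, 0.6)^\top$, are hypothetical:

```python
import numpy as np

gamma = 0.25                                   # tail parameter, so 1/gamma = 4
alpha = 1.0 / gamma
rng = np.random.default_rng(7)
N = 5_000_000

R = (1.0 - rng.random(N)) ** (-gamma)          # Pareto radius: P(R >= r) = r^{-alpha}, r >= 1
Theta = rng.uniform(0.0, 2.0 * np.pi, N)       # uniform angle on the circle (d = 2)
I = 0.8 * np.cos(Theta) + 0.6 * np.sin(Theta)  # I(Theta) = (Qc)^T S with ||Qc|| = 1
Y = R * I                                      # Y = R I(Theta), as in the proof

t, x = 3.0, 1.5
lhs = np.mean(Y >= t * x) / np.mean(R >= t)    # P(Y >= t x) / P(R >= t)

# limiting value x^{-1/gamma} * int_{D_I} f_Theta(theta) I(theta)^{1/gamma} d theta,
# estimated as a Monte Carlo average over the uniform angle (clip keeps only D_I)
rhs = x ** (-alpha) * np.mean(np.clip(I, 0.0, None) ** alpha)
print(lhs, rhs)                                # the two values should be close
```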

A.2 Proof of Lemma 3.5

We first state a result from [17], which will be used several times in the proofs below.

Lemma A.1

Let $Y$ be a random variable and let $T:\mathbb{R}\to\mathbb{R}$ be an absolutely continuous function. Then, for any $y\in\mathbb{R}$, we have
\[
E\bigl(T(Y)\,\mathbf{1}\{Y\ge y\}\bigr) = T(y)\,P(Y\ge y) + \int_y^{\infty} T'(x)\,P(Y\ge x)\,dx.
\]
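As a numerical sanity check (our addition, not part of the paper), the identity can be verified by Monte Carlo for the hypothetical choice $T(x) = x^2$ and $Y\sim\operatorname{Exp}(1)$, for which the right-hand side is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
y = 0.5                                            # threshold in the identity
samples = rng.exponential(1.0, size=2_000_000)     # Y ~ Exp(1)

# left-hand side: E( T(Y) 1{Y >= y} ) with T(x) = x^2, by Monte Carlo
lhs = np.mean(samples ** 2 * (samples >= y))

# right-hand side: T(y) P(Y >= y) + int_y^inf T'(x) P(Y >= x) dx;
# for Exp(1), P(Y >= x) = exp(-x), so the integral equals 2 (y + 1) exp(-y)
rhs = y ** 2 * np.exp(-y) + 2.0 * (y + 1.0) * np.exp(-y)
print(lhs, rhs)                                    # should agree to about two decimals
```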

We prove Lemma 3.5 assuming $L = g(X) = w^\top X$ for some weight vector $w$. The proof is similar for the situation where $(X,L)$ is jointly elliptically distributed with a positive definite scale matrix. Let $e_j$ be the vector whose $j$-th component is 1 and all other components are 0 ($j\in\{1,\dots,d\}$). Using spherical coordinates, we can write
\[
(X_j, L)^\top = (e_j, w)^\top X = R\,(e_j, w)^\top Q^\top S =: R\,(I_1(\Theta), I_2(\Theta))^\top, \quad \Theta\in D.
\]

Let $D_{I_2} = \{\theta\in D : I_2(\theta) > 0\}$. From Lemma A.1, for any $y>0$ and $q\in[0,1/\gamma)$,
\begin{align*}
E\bigl(X_j^q\,\mathbf{1}\{L\ge ty\}\bigr) &= E\bigl[R^q\,I_1^q(\Theta)\,\mathbf{1}\{R\,I_2(\Theta)\ge ty\}\bigr] = \int_{D_{I_2}} I_1^q(\theta)\int_0^{\infty} r^q\,\mathbf{1}\{r\ge ty/I_2(\theta)\}\,f_R(r)\,f_\Theta(\theta)\,dr\,d\theta\\
&= \int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\,E\Bigl(R^q\,\mathbf{1}\Bigl\{R\ge\frac{ty}{I_2(\theta)}\Bigr\}\Bigr)d\theta\\
&= \int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\,\frac{t^q y^q}{I_2^q(\theta)}\,P\Bigl(R\ge\frac{ty}{I_2(\theta)}\Bigr)d\theta + \int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\int_{y/I_2(\theta)}^{\infty} t^q\,q\,x^{q-1}\,P(R\ge tx)\,dx\,d\theta.
\end{align*}

Thus

(A.2)
\begin{align*}
\frac{E(X_j^q\,\mathbf{1}\{L\ge ty\})}{t^q\,P(R\ge t)} &= \int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\,\frac{y^q}{I_2^q(\theta)}\,\frac{P(R\ge ty/I_2(\theta))}{P(R\ge t)}\,d\theta + \int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\int_{y/I_2(\theta)}^{\infty} q\,x^{q-1}\,\frac{P(R\ge tx)}{P(R\ge t)}\,dx\,d\theta\\
&\xrightarrow[t\to\infty]{} y^{q-1/\gamma}\int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\,I_2^{1/\gamma-q}(\theta)\,d\theta + \int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\int_{y/I_2(\theta)}^{\infty} q\,x^{q-1-1/\gamma}\,dx\,d\theta\\
&= y^{q-1/\gamma}\,\frac{1/\gamma}{1/\gamma-q}\int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\,I_2^{1/\gamma-q}(\theta)\,d\theta =: y^{q-1/\gamma}\,r(\gamma,q),
\end{align*}
where $r(\gamma,q)\in\mathbb{R}$ for $q\in[0,1/\gamma)$. Letting $y=1$, we obtain

(A.3)
\[
\frac{E(X_j^q\mid L\ge t)}{t^q} = \frac{E(X_j^q\,\mathbf{1}\{L\ge t\})/t^q}{E(\mathbf{1}\{L\ge t\})} \to \frac{r(\gamma,q)}{r(\gamma,0)} =: \kappa_{j,q} < \infty.
\]

Furthermore,

(A.4)
\begin{align*}
&a(t)\Bigl[\frac{E(X_j^q\,\mathbf{1}\{L\ge ty\})}{t^q\,P(R\ge t)} - y^{q-1/\gamma}\,r(\gamma,q)\Bigr]\\
&\quad= a(t)\,y^q\int_{D_{I_2}} f_\Theta(\theta)\,\frac{I_1^q(\theta)}{I_2^q(\theta)}\Bigl(\frac{P(R\ge ty/I_2(\theta))}{P(R\ge t)} - y^{-1/\gamma}\,I_2^{-1/\gamma}(\theta)\Bigr)d\theta + a(t)\int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\int_{y/I_2(\theta)}^{\infty} q\,x^{q-1}\Bigl(\frac{P(R\ge tx)}{P(R\ge t)} - x^{-1/\gamma}\Bigr)dx\,d\theta\\
&\quad\sim a(t)\,\tilde A_R(t)\,y^q\int_{D_{I_2}} f_\Theta(\theta)\,\frac{I_1^q(\theta)}{I_2^q(\theta)}\,y^{-1/\gamma}\,I_2^{-1/\gamma}(\theta)\,\frac{(y/I_2(\theta))^{\rho/\gamma}-1}{\rho\gamma}\,d\theta + a(t)\,\tilde A_R(t)\int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\int_{y/I_2(\theta)}^{\infty} q\,x^{q-1-1/\gamma}\,\frac{x^{\rho/\gamma}-1}{\rho\gamma}\,dx\,d\theta\\
&\quad= \frac{a(t)\,\tilde A_R(t)}{\rho\gamma}\,\frac{1/\gamma-\rho/\gamma}{1/\gamma-q-\rho/\gamma}\,y^{q-1/\gamma+\rho/\gamma}\int_{D_{I_2}} f_\Theta(\theta)\,I_1^q(\theta)\,(I_2(\theta))^{1/\gamma-q-\rho/\gamma}\,d\theta - \frac{a(t)\,\tilde A_R(t)}{\rho\gamma}\,y^{q-1/\gamma}\,r(\gamma,q)\\
&\quad\to 0, \quad t\to\infty,
\end{align*}
since $a(t)\,\tilde A_R(t)\to 0$ and $\rho<0$. Combining (A.3) and (A.4), we prove
\[
a(t)\,\Bigl|\frac{E(X_j^q\mid L\ge t)}{t^q} - \frac{r(\gamma,q)}{r(\gamma,0)}\Bigr| \to 0, \quad t\to\infty.
\]

A.3 Proof of Lemma 3.6

We prove Lemma 3.6 in the case where $L = g(X) = w^\top X$ for some weight vector $w$. The proof is similar when $(X,L)$ is jointly elliptically distributed with a positive definite scale matrix. From [17, Appendix A.1], we have
\[
\frac{\tilde\mu_j(t)}{\tilde m_j(t)} = \frac{E(Z_1\mid Z_1\ge t)}{t},
\]
where $Z_1 \overset{d}{=} \|Qw\|\,e_1^\top R S$. From Lemma A.1, we have
\[
\frac{E(Z_1\mid Z_1\ge t)}{t} = \frac{E(Z_1\,\mathbf{1}\{Z_1\ge t\})}{t\,P(Z_1\ge t)} = \frac{t\,P(Z_1\ge t) + t\int_1^{\infty} P(Z_1\ge tx)\,dx}{t\,P(Z_1\ge t)} = 1 + \int_1^{\infty}\frac{P(Z_1\ge tx)}{P(Z_1\ge t)}\,dx,
\]

which indicates that

(A.5)
\[
\sqrt{k}\,\Bigl(\frac{\tilde\mu_j(p)}{\tilde m_j(p)} - \frac{1}{1-\gamma}\Bigr) = \sqrt{k}\,\Bigl(\frac{E(Z_1\mid Z_1\ge U_L(1/p))}{U_L(1/p)} - \frac{1}{1-\gamma}\Bigr) = \sqrt{k}\int_1^{\infty}\Bigl(\frac{P(Z_1\ge U_L(1/p)\,x)}{P(Z_1\ge U_L(1/p))} - x^{-1/\gamma}\Bigr)dx.
\]
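For intuition (an illustration we add, not from the paper): if $Z_1$ were exactly Pareto, $P(Z_1\ge z) = z^{-1/\gamma}$ for $z\ge 1$ with $\gamma<1$, then the integrand above vanishes and $E(Z_1\mid Z_1\ge t)/t = 1 + \int_1^{\infty} x^{-1/\gamma}\,dx = 1/(1-\gamma)$ exactly, which is easy to confirm by simulation:

```python
import numpy as np

gamma = 0.25                                       # so 1/(1 - gamma) = 4/3
rng = np.random.default_rng(1)
z = (1.0 - rng.random(4_000_000)) ** (-gamma)      # Pareto: P(Z >= z) = z^{-1/gamma}, z >= 1

t = 5.0
ratio = z[z >= t].mean() / t                       # Monte Carlo estimate of E(Z | Z >= t) / t
print(ratio, 1.0 / (1.0 - gamma))                  # both should be close to 4/3
```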

Furthermore, according to Lemma 3.4, there exists a function $\tilde A_{Z_1}(t) \propto \tilde A_R(t)$ such that

(A.6)
\[
\lim_{t\to\infty}\frac{P(Z_1\ge tx)/P(Z_1\ge t) - x^{-1/\gamma}}{\tilde A_{Z_1}(t)} = x^{-1/\gamma}\,\frac{x^{\rho/\gamma}-1}{\rho\gamma}.
\]

Combining (A.5) and (A.6), we have, as $p\to 0$,
\[
\sqrt{k}\,\Bigl(\frac{\tilde\mu_j(p)}{\tilde m_j(p)} - \frac{1}{1-\gamma}\Bigr) \sim \sqrt{k}\,\tilde A_{Z_1}(U_L(1/p))\int_1^{\infty} x^{-1/\gamma}\,\frac{x^{\rho/\gamma}-1}{\rho\gamma}\,dx.
\]
The integral in the above asymptotic equality is finite since $\rho<0$ and $\gamma>0$. Note that, from (A.1), we have $\lim_{t\to\infty} P(L\ge t)/P(R\ge t) = c\in(0,\infty)$. Hence

(A.7)
\[
\sqrt{k}\,\tilde A_R(U_L(1/p)) = \sqrt{k}\,A_R\Bigl(\frac{1}{P(R\ge U_L(1/p))}\Bigr) \sim \sqrt{k}\,A_R(c/p) \sim \sqrt{k}\,c^{\rho}\,A_R(1/p) \to 0, \quad n\to\infty,\ p\to 0,
\]
from Condition 3 (b) and the fact that $k/(np)\to\infty$. Thus
\[
\sqrt{k}\,\tilde A_{Z_1}(U_L(1/p)) \propto \sqrt{k}\,\tilde A_R(U_L(1/p)) \to 0.
\]
As a consequence, we obtain $\sqrt{k}\,\bigl(\frac{\tilde\mu_j(p)}{\tilde m_j(p)} - \frac{1}{1-\gamma}\bigr) \to 0$.

A.4 Proof of Lemma 3.7

Let $X$ be a regularly varying random variable satisfying the second-order condition with the auxiliary function $A_X$ as in (3.10) and $\tilde A_X$ as in (3.11). Recall that the two functions are connected via $\tilde A_X(x) = A_X(1/P(X\ge x))$.

Let $d_n = (k/(np))^{\gamma}$. We can write
\begin{align*}
\sqrt{k}\,\Bigl[\frac{\tilde\mu_j(k/n)}{\tilde\mu_j(p)}\Bigl(\frac{k}{np}\Bigr)^{\gamma} - 1\Bigr] &= \sqrt{k}\,\Bigl(\frac{U_L(n/k)\,\kappa_{j,1}}{\tilde\mu_j(p)}\,d_n - 1\Bigr) + \frac{U_L(n/k)}{\tilde\mu_j(p)}\,d_n\cdot\sqrt{k}\,\Bigl(\frac{\tilde\mu_j(k/n)}{U_L(n/k)} - \kappa_{j,1}\Bigr)\\
&= \sqrt{k}\,\Bigl(\frac{U_L(n/k)}{U_L(1/p)}\,d_n - 1\Bigr) + \frac{\sqrt{k}\,U_L(n/k)\,d_n}{\tilde\mu_j(p)}\Bigl(\kappa_{j,1} - \frac{\tilde\mu_j(p)}{U_L(1/p)}\Bigr) + \frac{U_L(n/k)}{\tilde\mu_j(p)}\,d_n\cdot\sqrt{k}\,\Bigl(\frac{\tilde\mu_j(k/n)}{U_L(n/k)} - \kappa_{j,1}\Bigr).
\end{align*}

Following arguments similar to (A.7), we can prove that $\sqrt{k}\,\tilde A_R(U_L(n/k))\to 0$ as $n\to\infty$. Combined with Lemma 3.5 and the fact that $k/(np)\to\infty$, this shows that the last two terms converge to 0.

For the first term, Lemma 3.4 indicates that there exists a function $A_L(t)\propto A_R(t)$ such that

(A.8)
\[
\lim_{t\to\infty}\frac{U_L(tx)/U_L(t) - x^{\gamma}}{A_L(t)} = x^{\gamma}\,\frac{x^{\rho}-1}{\rho}.
\]
Based on [12, Theorems 2.3.6 and 2.3.9], (A.8) implies that, for any $\epsilon,\delta>0$, there exists $t_0 = t_0(\epsilon,\delta) > 1$ such that, for all $t, tx\ge t_0$,
\[
\Bigl|\frac{U_L(tx)/U_L(t) - x^{\gamma}}{A_0(t)} - x^{\gamma}\,\frac{x^{\rho}-1}{\rho}\Bigr| \le \epsilon\,x^{\gamma+\rho}\max(x^{\delta}, x^{-\delta}),
\]
where
\[
A_0(t) = -\rho\,\Bigl(1 - \lim_{s\to\infty}\frac{s^{-\gamma}\,U_L(s)}{t^{-\gamma}\,U_L(t)}\Bigr).
\]

Letting $t = n/k\to\infty$ and $x = k/(np)\to\infty$, we have

(A.9)
\[
\Bigl|\frac{U_L(1/p)/(U_L(n/k)\,d_n) - 1}{A_0(n/k)} + \frac{d_n^{\rho/\gamma}-1}{\rho}\Bigr| \le \epsilon\,d_n^{(\rho+\delta)/\gamma}.
\]
Let $v_n = U_L(1/p)/U_L(n/k)$. As $k/(np)\to\infty$, we have $v_n\to\infty$. From (A.9), we have
\begin{align*}
\sqrt{k}\,\Bigl(\frac{U_L(n/k)}{U_L(1/p)}\,d_n - 1\Bigr) &= \sqrt{k}\,\frac{d_n}{v_n}\Bigl(1 - \frac{v_n}{d_n}\Bigr) \le \epsilon\,\sqrt{k}\,A_0(n/k)\,\frac{d_n}{v_n}\,d_n^{(\rho+\delta)/\gamma} + \sqrt{k}\,A_0(n/k)\,\frac{d_n}{v_n}\,\frac{d_n^{\rho/\gamma}-1}{\rho},\\
\sqrt{k}\,\Bigl(\frac{U_L(n/k)}{U_L(1/p)}\,d_n - 1\Bigr) &\ge -\epsilon\,\sqrt{k}\,A_0(n/k)\,\frac{d_n}{v_n}\,d_n^{(\rho+\delta)/\gamma} + \sqrt{k}\,A_0(n/k)\,\frac{d_n}{v_n}\,\frac{d_n^{\rho/\gamma}-1}{\rho}.
\end{align*}
Note that $d_n/v_n\to 1$ and $\sqrt{k}\,A_0(n/k) = O(\sqrt{k}\,A_L(n/k))\to 0$ since $A_L(\cdot)\propto A_R(\cdot)$. Choosing $\delta<-\rho$, we prove convergence of the first term to 0.

A.5 Proof of Theorem 3.8

For simplicity, let $u_n = U_L(n/k)$ and define $\tilde L_n = L_{n-k,n}/u_n$. From [12, Theorem 2.2.1] and Karamata's theorem, we have, as $n\to\infty$, $\tilde L_n\overset{P}{\to}1$ and

(A.10)
\[
\sqrt{k}\,(\tilde L_n - 1) \overset{d}{\to} N(0,\gamma^2).
\]
First, we establish two lemmas that will be used to prove Theorem 3.8.

Lemma A.2

Let Conditions 1–3 hold. If, furthermore, $1/\gamma>1$, then, for any $j\in\{1,\dots,d\}$, as $n\to\infty$,

(A.11)
\[
\sqrt{k}\,\Bigl(\frac{E(X_j\,\mathbf{1}\{L\ge\tilde L_n u_n\})}{u_n\,P(L\ge u_n)} - \tilde L_n^{1-1/\gamma}\,\kappa_{j,1}\Bigr) \overset{P}{\to} 0.
\]

Proof

Using notation similar to Section A.2, we have $\kappa_{j,1} = r(\gamma,1)/r(\gamma,0)$. For any $y>0$, write
\begin{align*}
&\sqrt{k}\,\Bigl|\frac{E(X_j\,\mathbf{1}\{L\ge y u_n\})}{u_n\,P(L\ge u_n)} - y^{1-1/\gamma}\,\frac{r(\gamma,1)}{r(\gamma,0)}\Bigr|\\
&\quad\le \frac{P(R\ge u_n)}{E(\mathbf{1}\{L\ge u_n\})}\,\sqrt{k}\,\Bigl|\frac{E(X_j\,\mathbf{1}\{L\ge y u_n\})}{u_n\,P(R\ge u_n)} - y^{1-1/\gamma}\,r(\gamma,1)\Bigr| + y^{1-1/\gamma}\,r(\gamma,1)\,\sqrt{k}\,\Bigl|\frac{P(R\ge u_n)}{E(\mathbf{1}\{L\ge u_n\})} - \frac{1}{r(\gamma,0)}\Bigr|.
\end{align*}

We first prove that the first term converges to 0. From (A.4), we have
\[
\lim_{n\to\infty}\frac{1}{\tilde A_R(u_n)}\Bigl[\frac{E(X_j\,\mathbf{1}\{L\ge y u_n\})}{u_n\,P(R\ge u_n)} - y^{1-1/\gamma}\,r(\gamma,1)\Bigr] = T_j(y),
\]
where
\[
T_j(y) = \frac{y^{1-1/\gamma}}{\rho\gamma}\Bigl[\frac{1-\rho}{1-\rho-\gamma}\,y^{\rho/\gamma}\int_{D_{I_2}} f_\Theta(\theta)\,I_1(\theta)\,(I_2(\theta))^{1/\gamma-1-\rho/\gamma}\,d\theta - r(\gamma,1)\Bigr].
\]

Thus
\[
\sqrt{k}\,\Bigl|\frac{E(X_j\,\mathbf{1}\{L\ge y u_n\})}{u_n\,P(R\ge u_n)} - y^{1-1/\gamma}\,r(\gamma,1)\Bigr| \le \sqrt{k}\,\tilde A_R(u_n)\,\Bigl|\frac{1}{\tilde A_R(u_n)}\Bigl[\frac{E(X_j\,\mathbf{1}\{L\ge y u_n\})}{u_n\,P(R\ge u_n)} - y^{1-1/\gamma}\,r(\gamma,1)\Bigr] - T_j(y)\Bigr| + \sqrt{k}\,\tilde A_R(u_n)\,|T_j(y)|.
\]
Note that $T_j(y)$ is decreasing and continuous. Thus, for any closed interval $[a,b]\subset(0,\infty)$,
\[
\sup_{y\in[a,b]}\Bigl|\frac{1}{\tilde A_R(u_n)}\Bigl[\frac{E(X_j\,\mathbf{1}\{L\ge y u_n\})}{u_n\,P(R\ge u_n)} - y^{1-1/\gamma}\,r(\gamma,1)\Bigr] - T_j(y)\Bigr| \to 0, \quad n\to\infty,
\]
which also indicates that
\[
\sup_{y\in[a,b]}\sqrt{k}\,\Bigl|\frac{E(X_j\,\mathbf{1}\{L\ge y u_n\})}{u_n\,P(L\ge u_n)} - y^{1-1/\gamma}\,\frac{r(\gamma,1)}{r(\gamma,0)}\Bigr| \to 0, \quad n\to\infty.
\]

The local uniform convergence on closed intervals implies (A.11) since $\tilde L_n\overset{P}{\to}1$. ∎

Lemma A.3

Let Conditions 1–3 hold. Define
\[
D_n(x) = \sqrt{k}\,\Bigl(\frac{1}{k}\sum_{i=1}^{n}\frac{X_i}{u_n}\,\mathbf{1}\{L_i\ge u_n x\} - \frac{n\,E(X\,\mathbf{1}\{L\ge u_n x\})}{u_n\,k}\Bigr),
\]
where $\{(X_1,L_1),\dots,(X_n,L_n)\}$ are i.i.d. copies of the random vector $(X,L)$. Let $D_{n,j}(x)$, $j\in\{1,\dots,d\}$, denote the $j$-th component of $D_n(x)$. If $1/\gamma>2$, then, as $n\to\infty$,

(A.12)
\[
D_{n,j}(\tilde L_n) - D_{n,j}(1) \overset{P}{\to} 0.
\]

Proof

For any $x\in\mathbb{R}$, $E(D_{n,j}(x)) = 0$. To prove (A.12), we aim to show that, for any $\varepsilon>0$,
\[
P\bigl(|D_{n,j}(\tilde L_n) - D_{n,j}(1)| > \varepsilon\bigr) \to 0.
\]
For any $\varepsilon'>0$, we have
\[
P\bigl(|D_{n,j}(\tilde L_n) - D_{n,j}(1)| > \varepsilon\bigr) \le P\bigl(|\tilde L_n - 1| > \varepsilon'\bigr) + P\bigl(|D_{n,j}(\tilde L_n) - D_{n,j}(1)| > \varepsilon,\ |\tilde L_n - 1| \le \varepsilon'\bigr).
\]
Since $\tilde L_n\overset{P}{\to}1$, we have $P(|\tilde L_n - 1| > \varepsilon')\to 0$. Meanwhile, for any integer $n$, we can find $\xi_n\in[1-\varepsilon', 1+\varepsilon']$ such that $\sup_{|x-1|\le\varepsilon'}|D_{n,j}(x) - D_{n,j}(1)| \le |D_{n,j}(\xi_n) - D_{n,j}(1)|$. Let $X_{i,j}$, with $i\in\{1,\dots,n\}$ and $j\in\{1,\dots,d\}$, be the $j$-th component of $X_i$. We have

\begin{align*}
&P\bigl(|D_{n,j}(\tilde L_n) - D_{n,j}(1)| > \varepsilon,\ |\tilde L_n - 1| \le \varepsilon'\bigr) \le P\bigl(|D_{n,j}(\xi_n) - D_{n,j}(1)| > \varepsilon\bigr) \le \varepsilon^{-2}\,\operatorname{Var}\bigl[D_{n,j}(\xi_n) - D_{n,j}(1)\bigr]\\
&\quad= \varepsilon^{-2}\,\operatorname{Var}\Bigl(\frac{1}{\sqrt{k}}\sum_{i=1}^{n}\frac{X_{i,j}}{u_n}\,\mathbf{1}\Bigl\{\frac{L_i}{u_n}\ge\xi_n\Bigr\} - \frac{1}{\sqrt{k}}\sum_{i=1}^{n}\frac{X_{i,j}}{u_n}\,\mathbf{1}\Bigl\{\frac{L_i}{u_n}\ge 1\Bigr\}\Bigr) = \varepsilon^{-2}\,\frac{n}{k}\,\operatorname{Var}\Bigl(\frac{X_{1,j}}{u_n}\,\mathbf{1}\Bigl\{\min\{\xi_n,1\}\le\frac{L_1}{u_n}\le\max\{\xi_n,1\}\Bigr\}\Bigr)\\
&\quad\le \varepsilon^{-2}\,\frac{n}{k}\,E\Bigl(\frac{X_{1,j}^2}{u_n^2}\,\mathbf{1}\Bigl\{1-\varepsilon'\le\frac{L_1}{u_n}\le 1+\varepsilon'\Bigr\}\Bigr) = \frac{\varepsilon^{-2}}{E(\mathbf{1}\{L_1\ge u_n\})}\Bigl[E\Bigl(\frac{X_{1,j}^2}{u_n^2}\,\mathbf{1}\{L_1\ge(1-\varepsilon')u_n\}\Bigr) - E\Bigl(\frac{X_{1,j}^2}{u_n^2}\,\mathbf{1}\{L_1\ge(1+\varepsilon')u_n\}\Bigr)\Bigr]\\
&\quad\to \varepsilon^{-2}\,\bigl[(1-\varepsilon')^{2-1/\gamma} - (1+\varepsilon')^{2-1/\gamma}\bigr]\,\frac{r(\gamma,2)}{r(\gamma,0)}
\end{align*}
by (A.2) with $t = u_n$. Letting $\varepsilon'\to 0$, we prove (A.12). ∎

With these two lemmas, we are ready to prove Theorem 3.8. Write
\[
\sqrt{k}\,\Bigl(\frac{\bar X(L_{n-k,n})}{\tilde\mu(k/n)} - \mathbf{1}\Bigr) = \sqrt{k}\,\Bigl(\frac{\bar X(L_{n-k,n})}{u_n\,\kappa_1} - \mathbf{1}\Bigr) + \frac{\bar X(L_{n-k,n})}{u_n}\,\sqrt{k}\,\Bigl(\frac{u_n\,\mathbf{1}}{\tilde\mu(k/n)} - \frac{\mathbf{1}}{\kappa_1}\Bigr) =: G_1 + G_2,
\]
with all operations on vectors understood componentwise. For $G_1$, with $D_n(\tilde L_n) = (D_{n,1}(\tilde L_n), D_{n,2}(\tilde L_n),\dots, D_{n,d}(\tilde L_n))^\top$ (see Lemma A.3 for the notation), write
\[
\sqrt{k}\,\Bigl(\frac{\bar X(L_{n-k,n})}{u_n\,\kappa_1} - \mathbf{1}\Bigr) = \frac{D_n(\tilde L_n)}{\kappa_1} + \sqrt{k}\,\Bigl(\frac{E(X\,\mathbf{1}\{L\ge\tilde L_n u_n\})}{u_n\,P(L\ge u_n)\,\kappa_1} - \tilde L_n^{1-1/\gamma}\,\mathbf{1}\Bigr) + \sqrt{k}\,\bigl(\tilde L_n^{1-1/\gamma} - 1\bigr)\,\mathbf{1}.
\]

The second term converges to $\mathbf{0}$ in probability by (A.11). Using the delta method, we have
\[
\sqrt{k}\,\bigl(\tilde L_n^{1-1/\gamma} - 1\bigr)\,\mathbf{1} \overset{d}{\to} Z_2
\]
from (A.10). According to (A.12), $D_n(\tilde L_n)$ has the same limit distribution as $D_n(1)$, where
\[
D_n(1) = \sqrt{k}\,\Bigl(\frac{1}{k}\sum_{i=1}^{n}\frac{X_i}{u_n}\,\mathbf{1}\{L_i\ge u_n\} - \frac{\tilde\mu(k/n)}{u_n}\Bigr) = \sum_{i=1}^{n}\Bigl(\frac{1}{\sqrt{k}}\,\frac{X_i}{u_n}\,\mathbf{1}\{L_i\ge u_n\} - \frac{\sqrt{k}}{n}\,\frac{\tilde\mu(k/n)}{u_n}\Bigr) =: \sum_{i=1}^{n} Y_i.
\]

We have $E(D_n(1)) = \mathbf{0}$. Furthermore, for any $j_1\in\{1,\dots,d\}$,
\[
\operatorname{Var}(Y_{i,j_1}) = \operatorname{Var}\Bigl(\frac{1}{\sqrt{k}}\,\frac{X_{i,j_1}}{u_n}\,\mathbf{1}\{L_i\ge u_n\}\Bigr) = \frac{1}{k}\,E\Bigl(\frac{X_{i,j_1}^2}{u_n^2}\,\mathbf{1}\{L_i\ge u_n\}\Bigr) - \frac{k}{n^2}\,\frac{\tilde\mu_{j_1}^2(k/n)}{u_n^2}.
\]
Thus
\[
\operatorname{Var}\Bigl(\sum_{i=1}^{n} Y_{i,j_1}\Bigr) = \frac{n}{k}\,E\Bigl(\frac{X_{i,j_1}^2}{u_n^2}\,\mathbf{1}\{L_i\ge u_n\}\Bigr) - \frac{k}{n}\,\frac{\tilde\mu_{j_1}^2(k/n)}{u_n^2} \sim \frac{E(X_{i,j_1}^2\mid L_i\ge u_n)}{u_n^2} \to \kappa_{j_1,2}.
\]

Similarly, for $j_1\ne j_2\in\{1,\dots,d\}$,
\[
\operatorname{Cov}\Bigl(\sum_{i=1}^{n} Y_{i,j_1},\ \sum_{i=1}^{n} Y_{i,j_2}\Bigr) = \sum_{i=1}^{n}\operatorname{Cov}(Y_{i,j_1}, Y_{i,j_2}) \to \tau_{j_1,j_2}.
\]
Following a procedure similar to Section A.2, we can prove that $\tau_{j_1,j_2}<\infty$. Furthermore, we can also find some $\varsigma>0$ such that $2+\varsigma<1/\gamma$ and, as $n\to\infty$,
\[
\sum_{i=1}^{n} E\bigl(|Y_i|^{2+\varsigma}\bigr) \sim \frac{n}{k^{1+\varsigma/2}\,u_n^{2+\varsigma}}\,E\bigl(|X|^{2+\varsigma}\,\mathbf{1}\{L\ge u_n\}\bigr) = \frac{E(|X|^{2+\varsigma}\mid L\ge u_n)}{k^{\varsigma/2}\,u_n^{2+\varsigma}} \to 0.
\]

Thus, by the Lyapunov central limit theorem (CLT) [5], we have
\[
D_n(1) = \sum_{i=1}^{n} Y_i \overset{d}{\to} Z_1, \quad n\to\infty.
\]
According to [22], this convergence holds jointly with that of $Z_2$. Combining the above results, we have $G_1\overset{d}{\to} Z_1/\kappa_1 + Z_2$. From Lemma 3.5 and the convergence of $G_1$, we have $G_2\overset{P}{\to}\mathbf{0}$. As a consequence, we obtain
\[
\sqrt{k}\,\Bigl(\frac{\bar X(L_{n-k,n})}{\tilde\mu(k/n)} - \mathbf{1}\Bigr) \overset{d}{\to} \frac{Z_1}{\kappa_1} + Z_2, \quad n\to\infty.
\]

Remark A.4

The fact that $\bar X(L_{n-k,n})/\tilde\mu(k/n)\overset{P}{\to}\mathbf{1}$ can be proved directly by first showing that
\[
\frac{\bar X(L_{n-k,n})}{u_n} - \frac{1}{k}\sum_{i=1}^{n}\frac{X_i}{u_n}\,\mathbf{1}\{L_i\ge u_n\} \overset{P}{\to} \mathbf{0}, \quad n\to\infty,
\]
via the Markov inequality. Next, Chebyshev's inequality gives the convergence
\[
\frac{1}{k}\sum_{i=1}^{n}\frac{X_i}{u_n}\,\mathbf{1}\{L_i\ge u_n\} - \frac{\tilde\mu(k/n)}{u_n} \overset{P}{\to} \mathbf{0}, \quad n\to\infty.
\]

A.6 Proof of Theorem 3.9

Note that $m(\ell) = \tilde m(p)$. Write
\[
\frac{\hat m(\ell)}{m(\ell)} = \frac{1-\hat\gamma}{\tilde m(p)}\Bigl(\frac{k}{n\hat p}\Bigr)^{\hat\gamma}\,\bar X(L_{n-k,n}) = \frac{(1-\gamma)\,\tilde\mu(p)}{\tilde m(p)} \times \frac{\tilde\mu(k/n)}{\tilde\mu(p)}\Bigl(\frac{k}{np}\Bigr)^{\gamma} \times \frac{\bar X(L_{n-k,n})}{\tilde\mu(k/n)} \times \frac{1-\hat\gamma}{1-\gamma}\,\frac{(k/(n\hat p))^{\hat\gamma}}{(k/(np))^{\gamma}}\cdot\mathbf{1} =: K_1\times K_2\times K_3\times K_4.
\]

Consistency

From Proposition 2.9, we have $K_1\to 1$. Furthermore, (3.3) and Theorem 3.8 give $K_2\to 1$ and $K_3\overset{P}{\to}\mathbf{1}$, respectively. Corollary 4.4.4 in [12] implies $\hat p/p\overset{P}{\to}1$. Combining this with the consistency of the Hill estimator $\hat\gamma$, we obtain $K_4\overset{P}{\to}\mathbf{1}$.

Asymptotic normality

Lemma 3.6 and Lemma 3.7, respectively, imply
\[
K_1 = 1 + o\Bigl(\frac{1}{\sqrt{k}}\Bigr) \quad\text{and}\quad K_2 = 1 + o\Bigl(\frac{1}{\sqrt{k}}\Bigr).
\]
Furthermore, from Theorem 3.8, we have
\[
K_3 = \mathbf{1} + \frac{Z_1 + \kappa_1 Z_2}{\sqrt{k}\,\kappa_1} + o_p\Bigl(\frac{1}{\sqrt{k}}\Bigr).
\]

For $K_4$, we write
\[
K_4 = \Bigl[\frac{1-\hat\gamma}{1-\gamma}\times\Bigl(\frac{k}{np}\Bigr)^{\hat\gamma-\gamma}\times\Bigl(\frac{p}{\hat p}\Bigr)^{\hat\gamma}\Bigr]\cdot\mathbf{1},
\]
which leads to
\[
\log K_4 = \bigl[\log(1-\hat\gamma) - \log(1-\gamma)\bigr]\cdot\mathbf{1} + \log\Bigl(\frac{k}{np}\Bigr)(\hat\gamma-\gamma)\cdot\mathbf{1} + \hat\gamma\,(\log p - \log\hat p)\cdot\mathbf{1}.
\]
Theorem 3.2.5 in [12] gives $\sqrt{\tilde k}\,(\hat\gamma-\gamma)\overset{d}{\to}N(0,\gamma^2)$, which implies
\[
\sqrt{\tilde k}\,\bigl[\log(1-\hat\gamma) - \log(1-\gamma)\bigr] \overset{d}{\to} \frac{1}{1-\gamma}\,N(0,\gamma^2) \quad\text{and}\quad \frac{\sqrt{\tilde k}}{\log(k/(np))}\,\Bigl[\log\Bigl(\frac{k}{np}\Bigr)(\hat\gamma-\gamma)\Bigr] \overset{d}{\to} N(0,\gamma^2).
\]

From [12, Theorem 4.4.7], we have
\[
\frac{\sqrt{\tilde k}}{\log(\tilde k/(np))}\Bigl(\frac{\hat p}{p} - 1\Bigr) \overset{d}{\to} N(0,1),
\]
which indicates that
\[
\frac{\sqrt{\tilde k}\,\hat\gamma}{\log(\tilde k/(np))}\,(\log p - \log\hat p) \overset{d}{\to} N(0,\gamma^2)
\]
by Slutsky's theorem. Combining the results above, we have
\[
\frac{\sqrt{\tilde k}}{b_n}\,\log K_4 \overset{d}{\to} Z_3,
\]
where $b_n = \log\bigl(\frac{k}{np}\bigr)$ if $\tilde k = o(k)$ and $b_n = \log\bigl(\frac{\tilde k}{np}\bigr)$ if $k = o(\tilde k)$. This indicates that
\[
K_4 = \mathbf{1} + \frac{b_n}{\sqrt{\tilde k}}\,Z_3 + o_p\Bigl(\frac{b_n}{\sqrt{\tilde k}}\Bigr)\cdot\mathbf{1}.
\]

Combining the results for $K_1$, $K_2$, $K_3$ and $K_4$, we have
\begin{align*}
\frac{\hat m(\ell)}{m(\ell)} - \mathbf{1} &= \Bigl[1 + o\Bigl(\frac{1}{\sqrt{k}}\Bigr)\Bigr]\times\Bigl[\mathbf{1} + \frac{Z_1 + \kappa_1 Z_2}{\sqrt{k}\,\kappa_1} + o_p\Bigl(\frac{1}{\sqrt{k}}\Bigr)\Bigr]\times\Bigl[\mathbf{1} + \frac{b_n}{\sqrt{\tilde k}}\,Z_3 + o_p\Bigl(\frac{b_n}{\sqrt{\tilde k}}\Bigr)\cdot\mathbf{1}\Bigr] - \mathbf{1}\\
&= \frac{Z_1 + \kappa_1 Z_2}{\sqrt{k}\,\kappa_1} + \frac{b_n}{\sqrt{\tilde k}}\,Z_3 + o_p\Bigl(\frac{1}{\sqrt{k}}\Bigr) + o_p\Bigl(\frac{b_n}{\sqrt{\tilde k}}\Bigr)\cdot\mathbf{1}.
\end{align*}
We obtain
\begin{align*}
\frac{\sqrt{\tilde k}}{\log(k/(np))}\Bigl[\frac{\hat m(\ell)}{m(\ell)} - \mathbf{1}\Bigr] &\overset{d}{\to} Z_3 &&\text{when } \tilde k = o(k),\\
\frac{\sqrt{\tilde k}}{\log(\tilde k/(np))}\Bigl[\frac{\hat m(\ell)}{m(\ell)} - \mathbf{1}\Bigr] &\overset{d}{\to} Z_3 &&\text{when } k = o(\tilde k) \text{ and } \eta = \infty,\\
\sqrt{k}\,\Bigl[\frac{\hat m(\ell)}{m(\ell)} - \mathbf{1}\Bigr] &\overset{d}{\to} \frac{Z_1}{\kappa_1} + Z_2 + \eta Z_3 &&\text{when } k = o(\tilde k) \text{ and } \eta < \infty.
\end{align*}

B Proofs – Section 4

We begin with a proposition that gives sufficient conditions for a skew-elliptically distributed random vector $X = \xi + R\,Q^\top S'$ to be multivariate regularly varying. It extends [25, Proposition 3.1] from the bivariate to the $d$-dimensional case.

Proposition B.1

Consider a $d$-dimensional random vector $Y\sim\operatorname{SE}_d(0,\Omega,\tilde f_d, G\circ w)$. Suppose the density generator satisfies $\tilde f_d\in\operatorname{RV}_{-(\nu+d)/2}$ for some $\nu>0$. If, for all $y\in\mathbb{R}^d$,
\[
\lim_{t\to\infty} w(ty) =: w_\infty(y)\in\mathbb{R}, \qquad \lim_{t\to\infty}\sup_{y\in\mathbb{S}^{d-1}}\bigl|G(w(ty)) - G(w_\infty(y))\bigr| = 0,
\]
then $Y$ is multivariate regularly varying with index $\nu$. The corresponding density function of $\Psi$ in (2.3) is given by
\[
\psi(w) = 2\,|\Omega|^{-1/2}\,G(w_\infty(w))\,(w^\top\Omega^{-1}w)^{-(\nu+d)/2}\Bigl[\int_{D_B}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\,B(\theta)^{\nu/2}\,d\theta\Bigr]^{-1}, \quad w\in\mathbb{S}^{d-1},
\]
where
\[
\theta = (\theta_1,\theta_2,\dots,\theta_{d-1})^\top, \qquad B(\theta) = s_\theta^\top Q Q^\top s_\theta, \qquad D_B = \bigl\{\theta\in[0,\pi)^{d-2}\times[0,2\pi) : B(\theta) > 0\bigr\},
\]
\[
s_\theta = (\cos\theta_1,\ \sin\theta_1\cos\theta_2,\ \dots,\ \sin\theta_1\cdots\sin\theta_{d-1})^\top,
\]
and $Q$ is the upper triangular matrix from the Cholesky decomposition of $\Omega$.

Remark B.2

From [25, Lemma B.1], the statement $\tilde f_d\in\operatorname{RV}_{-(\nu+d)/2}$ is equivalent to each of the following: (1) the scalar random variable $R$ of $X$ is regularly varying with index $\nu$; (2) $\|X\|$ is regularly varying with index $\nu$. In the case of elliptical distributions, $G(w(x)) = 1/2$ for all $x\in\mathbb{R}^d$. Thus an elliptically distributed random vector is multivariate regularly varying if the scalar random variable $R$ is regularly varying.

Proof

From [2, Proposition 2], we have $\|Y\|\overset{d}{=}\|\tilde Y\|$, where $\tilde Y = R\,Q^\top S\sim\operatorname{Ell}_d(0,\Omega,\tilde f_d)$. Let $f_\Theta$ be the density function of $S$ in spherical coordinates. It is expressed as
\[
f_\Theta(\theta) = \frac{\Gamma(d/2)}{2\pi^{d/2}}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}, \quad \theta\in[0,\pi)^{d-2}\times[0,2\pi).
\]

Using the stochastic representation of $\tilde Y$, we have
\begin{align*}
P(\|Y\| > t) = P(\|\tilde Y\| > t) &= P\bigl(R\,(S^\top Q Q^\top S)^{1/2} > t\bigr) = \int_{D_B}\int_{t/\sqrt{B(\theta)}}^{\infty} f_R(r)\,f_\Theta(\theta)\,dr\,d\theta\\
&= \int_{D_B}\int_{t/\sqrt{B(\theta)}}^{\infty} c_d\,\tilde f_d(r^2)\,r^{d-1}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\,dr\,d\theta = \int_{D_B}\int_{1/\sqrt{B(\theta)}}^{\infty} c_d\,t^d\,\tilde f_d(t^2 r^2)\,r^{d-1}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\,dr\,d\theta,
\end{align*}

where $f_R$ is the density function of the scalar random variable $R$. Let $f_Y$ be the density function of $Y$ and
\[
V(t) = P(\|Y\| > t).
\]
Potter bounds for regularly varying functions (see, e.g., [27, Proposition 2.6]) and the dominated convergence theorem give

(B.1)
\begin{align*}
\upsilon(y) = \lim_{t\to\infty}\frac{f_Y(ty)}{t^{-d}\,V(t)} &= \lim_{t\to\infty}\frac{2\,c_d\,|\Omega|^{-1/2}\,\tilde f_d(t^2\,y^\top\Omega^{-1}y)\,G(w(ty))}{\int_{D_B}\int_{1/\sqrt{B(\theta)}}^{\infty} c_d\,\tilde f_d(t^2 r^2)\,r^{d-1}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\,dr\,d\theta}\\
&= 2\,|\Omega|^{-1/2}\,G(w_\infty(y))\cdot\lim_{t\to\infty}\frac{\tilde f_d(t^2\,y^\top\Omega^{-1}y)/\tilde f_d(t^2)}{\int_{D_B}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\int_{1/\sqrt{B(\theta)}}^{\infty}\bigl(\tilde f_d(t^2 r^2)/\tilde f_d(t^2)\bigr)\,r^{d-1}\,dr\,d\theta}\\
&= 2\,|\Omega|^{-1/2}\,G(w_\infty(y))\,(y^\top\Omega^{-1}y)^{-(\nu+d)/2}\Bigl[\int_{D_B}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\int_{1/\sqrt{B(\theta)}}^{\infty} r^{-\nu-1}\,dr\,d\theta\Bigr]^{-1}\\
&= 2\,\nu\,|\Omega|^{-1/2}\,G(w_\infty(y))\,(y^\top\Omega^{-1}y)^{-(\nu+d)/2}\Bigl[\int_{D_B}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\,B(\theta)^{\nu/2}\,d\theta\Bigr]^{-1}.
\end{align*}

We can see that $\upsilon(y)$ is homogeneous of order $-\nu-d$. Following steps similar to [25], we can also prove that
\[
\lim_{t\to\infty}\sup_{y\in\mathbb{S}^{d-1}}\Bigl|\frac{f_Y(ty)}{t^{-d}\,V(t)} - \upsilon(y)\Bigr| = 0,
\]
which yields multivariate regular variation of $Y$. Thus we have
\[
\psi(w) = \nu^{-1}\,\upsilon(w) = \frac{2\,|\Omega|^{-1/2}\,G(w_\infty(w))\,(w^\top\Omega^{-1}w)^{-(\nu+d)/2}}{\int_{D_B}\prod_{i=1}^{d-1}(\sin\theta_i)^{d-1-i}\,B(\theta)^{\nu/2}\,d\theta},
\]
where $w\in\mathbb{S}^{d-1}$. ∎
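As a numerical aside (our addition, not part of the paper), one can check that the spherical density $f_\Theta$ used in the proof above integrates to 1; for $d = 3$ it reduces to $\frac{1}{4\pi}\sin\theta_1$ on $[0,\pi)\times[0,2\pi)$:

```python
import math
import numpy as np

d = 3
const = math.gamma(d / 2) / (2.0 * math.pi ** (d / 2))  # Gamma(d/2) / (2 pi^{d/2}) = 1/(4 pi)

# midpoint rule over theta_1 in [0, pi); for d = 3 the density is const * sin(theta_1),
# free of theta_2, so theta_2 in [0, 2 pi) contributes just a factor 2 pi
n = 200_000
t1 = (np.arange(n) + 0.5) * math.pi / n
total = const * np.sin(t1).sum() * (math.pi / n) * (2.0 * math.pi)
print(total)                                            # approximately 1.0
```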

B.1 Proof of Theorem 4.1

Let $f_Y$ be the density function of $Y$ and let $f_Z$ be the density function of $Z := Y/t$. Due to the homogeneity of $h$, we have
\[
\frac{\operatorname*{argmax}_{y\in\mathbb{R}^d} f(y\mid h(Y)\ge t)}{t} = \operatorname*{argmax}_{z\in\mathbb{R}^d}\frac{f_Z(z)\,\mathbf{1}\{h(z)\ge 1\}}{P(h(Z)\ge 1)} = \operatorname*{argmax}_{z\in\mathbb{R}^d}\frac{t^d\,f_Y(tz)\,\mathbf{1}\{h(z)\ge 1\}}{P(h(Y)\ge t)}.
\]

First, we show the tail equivalence between $h(Y)$ and $\|Y\|$. From Proposition B.1, we know that $Y$ is multivariate regularly varying with index $\nu$ and $V(t) = P(\|Y\| > t)\in\operatorname{RV}_{-\nu}$. Thus there exists a Radon measure $\tilde\mu$ such that
\[
\lim_{t\to\infty}\frac{P(Y/t\in A)}{V(t)} = \tilde\mu(A)
\]
for every relatively compact set $A$ in $[-\infty,\infty]^d\setminus\{0\}$ with $\tilde\mu(\partial A) = 0$. The limit measure $\tilde\mu(\cdot)$ is homogeneous of order $-\nu$. Define the transformation $T(y) = \bigl(h(y), \frac{y}{|h(y)|}\bigr)$ and let $\aleph = \{y/|h(y)| : y\in[-\infty,\infty]^d\setminus\{0\}\}$. Note that the transformation $T$ is continuous and bijective from $[-\infty,\infty]^d\setminus\{0\}$ to $\{(-\infty,0)\cup(0,\infty)\}\times\aleph$. From [27, Proposition 5.5], for every Borel set $B$ in $\aleph$ and $x>0$, we have
\begin{align*}
\lim_{t\to\infty}\frac{1}{V(t)}\,P\Bigl(\frac{h(Y)}{t}\ge x,\ \frac{Y}{|h(Y)|}\in B\Bigr) &= \tilde\mu\Bigl\{y\in[-\infty,\infty]^d\setminus\{0\} : h(y)\ge x,\ \frac{y}{|h(y)|}\in B\Bigr\}\\
&= \tilde\mu\Bigl\{y\in[-\infty,\infty]^d\setminus\{0\} : h(x^{-1}y)\ge 1,\ \frac{x^{-1}y}{|h(x^{-1}y)|}\in B\Bigr\}\\
&= x^{-\nu}\cdot\tilde\mu\Bigl\{z\in[-\infty,\infty]^d\setminus\{0\} : h(z)\ge 1,\ \frac{z}{|h(z)|}\in B\Bigr\}.
\end{align*}

Hence
\[
\lim_{t\to\infty}\frac{P(h(Y)\ge t)}{V(t)} = \tilde\mu\{z : h(z)\ge 1\} = \int_{\{z : h(z)\ge 1\}}\upsilon(z)\,dz =: c_h\in(0,\infty),
\]
where $\upsilon(z)$ is given in (B.1). Furthermore, from the proof of Proposition B.1, we have
\[
\lim_{t\to\infty}\frac{f_Y(tz)}{t^{-d}\,V(t)} = \upsilon(z).
\]

Combining the results above, we have
\[
\lim_{t\to\infty}\frac{t^d\,f_Y(tz)\,\mathbf{1}\{h(z)\ge 1\}}{P(h(Y)\ge t)} = \frac{1}{c_h}\,\upsilon(z)\,\mathbf{1}\{h(z)\ge 1\}.
\]
Next we prove that the convergence above is uniform:
\begin{align*}
\sup_{z\in\mathbb{R}^d}\Bigl|\frac{t^d\,f_Y(tz)\,\mathbf{1}\{h(z)\ge 1\}}{P(h(Y)\ge t)} - \frac{1}{c_h}\,\upsilon(z)\,\mathbf{1}\{h(z)\ge 1\}\Bigr| &= \sup_{\{z : h(z)\ge 1\}}\Bigl|\frac{t^d\,f_Y(tz)}{P(h(Y)\ge t)} - \frac{1}{c_h}\,\upsilon(z)\Bigr|\\
&\le \frac{V(t)}{P(h(Y)\ge t)}\cdot\sup_{\{z : h(z)\ge 1\}}\Bigl|\frac{t^d\,f_Y(tz)}{V(t)} - \upsilon(z)\Bigr| + \Bigl|\frac{V(t)}{P(h(Y)\ge t)} - \frac{1}{c_h}\Bigr|\cdot\sup_{z\ne 0}\upsilon(z).
\end{align*}
The first term converges to 0 by Theorem 2.5, and the second term converges to 0 since $\sup_{z\ne 0}\upsilon(z) < \infty$. With the uniform convergence, [28, Theorem 7.33] gives (4.1).

C The GKK estimator: Lack of location invariance

This section reports the results of a simulation study showing that the GKK stress scenario estimator in (2.8) lacks location invariance.

Figure 7

The sampling densities of stress scenario estimates of $m(\ell)$ for data generated from a bivariate $t$ distribution with mean vector $\mu$, with $\ell$ set to the 0.99-quantile of the portfolio loss $L$. The solid black lines correspond to the original stress scenario estimator from [17]; the dotted black lines correspond to the modified version of the GKK estimator in (2.10). The vertical red lines indicate the true values of the stress scenario $m(\ell)$.

(a) $\mu = (0,0)^\top$ (b) $\mu = (2,1)^\top$ (c) $\mu = (4,3)^\top$ (d) $\mu = (10,80)^\top$

We simulate 500 random samples of size n = 3000 from a bivariate t distribution with mean vector μ, covariance matrix Σ = ((1, 0.5), (0.5, 1)) and ν = 4 degrees of freedom, where μ⊤ ∈ {(0, 0), (2, 1), (4, 3), (10, 80)}. These samples are treated as realizations of risk factor changes X = (X₁, X₂)⊤. The portfolio loss is set as L = 0.7 X₁ + 0.3 X₂. We estimate stress scenarios m(ℓ) with ℓ set to the 0.99-quantile of L. Both the limit of the scaling factor and the empirical estimator of the conditional mean at the lower threshold were found to perform well.
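The simulation design above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's estimator: it simulates one sample from the bivariate t model (via the standard normal/chi-square representation), forms the portfolio loss, and computes a naive empirical stress scenario as the mean of the risk factor changes among observations whose loss exceeds the 0.99-quantile. The names `rmvt` and `m_hat` are ours; the data sparsity of this naive tail average is exactly what motivates the extreme value refinements studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def rmvt(n, mu, Sigma, nu, rng):
    """Draw n samples from a multivariate t with location mu,
    scale matrix Sigma and nu degrees of freedom, using the
    representation X = mu + Z / sqrt(W / nu), Z ~ N(0, Sigma),
    W ~ chi-square(nu)."""
    d = len(mu)
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    W = rng.chisquare(nu, size=n) / nu
    return mu + Z / np.sqrt(W)[:, None]

mu = np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
X = rmvt(3000, mu, Sigma, nu=4, rng=rng)

# Portfolio loss L = 0.7 X1 + 0.3 X2 and its empirical 0.99-quantile.
w = np.array([0.7, 0.3])
L = X @ w
ell = np.quantile(L, 0.99)

# Naive empirical stress scenario: average risk factor change over the
# roughly 30 observations whose loss exceeds ell.
m_hat = X[L >= ell].mean(axis=0)
```

By construction, the naive scenario is consistent with the loss level: `m_hat @ w` equals the average loss over the exceedances, which is at least `ell`.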

The sampling densities of the stress scenario estimates for different values of the location parameter are shown in Figure 7. When μ = (0, 0)⊤, both the original GKK estimator in (2.8) and the modified estimator in (2.10) perform well, and the two sampling densities overlap. However, when μ ≠ (0, 0)⊤, the original estimator tends to underestimate the stress scenario, and both its bias and variance grow as μ moves away from the origin. The modified estimator, by contrast, maintains good performance across all values of μ.

References

[1] C. Adcock, M. Eling and N. Loperfido, Skewed distributions in finance and actuarial science: A review, European J. Finance 21 (2015), 1253–1281. 10.1080/1351847X.2012.720269

[2] A. Azzalini and A. Capitanio, Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t-distribution, J. R. Stat. Soc. Ser. B Stat. Methodol. 65 (2003), no. 2, 367–389. 10.1111/1467-9868.00391

[3] S. Babić, L. Gelbgras, M. Hallin and C. Ley, Optimal tests for elliptical symmetry: Specified and unspecified location, Bernoulli 27 (2021), no. 4, 2189–2216. 10.3150/20-BEJ1305

[4] B. Basrak, R. A. Davis and T. Mikosch, A characterization of multivariate regular variation, Ann. Appl. Probab. 12 (2002), no. 3, 908–920. 10.1214/aoap/1031863174

[5] P. Billingsley, Probability and Measure, Wiley Ser. Probab. Stat., John Wiley & Sons, Hoboken, 1995.

[6] T. Breuer and I. Csiszár, Systematic stress tests with entropic plausibility constraints, J. Banking Finance 37 (2013), 1552–1559. 10.1016/j.jbankfin.2012.04.013

[7] T. Breuer, M. Jandacka, K. Rheinberger and M. Summer, How to find plausible, severe, and useful stress scenarios, Int. J. Central Banking 5 (2009), 205–224.

[8] J.-J. Cai, V. Chavez-Demoulin and A. Guillou, Modified marginal expected shortfall under asymptotic dependence, Biometrika 104 (2017), no. 1, 243–249. 10.1093/biomet/asx005

[9] J.-J. Cai, J. H. J. Einmahl, L. de Haan and C. Zhou, Estimation of the marginal expected shortfall: The mean when a related variable is extreme, J. R. Stat. Soc. Ser. B. Stat. Methodol. 77 (2015), no. 2, 417–442. 10.1111/rssb.12069

[10] J.-J. Cai and E. Musta, Estimation of the marginal expected shortfall under asymptotic independence, Scand. J. Stat. 47 (2020), no. 1, 56–83. 10.1111/sjos.12397

[11] B. Y. Chang, P. Christoffersen and K. Jacobs, Market skewness risk and the cross section of stock returns, J. Financ. Econ. 107 (2013), 46–68. 10.1016/j.jfineco.2012.07.002

[12] L. de Haan and A. Ferreira, Extreme Value Theory, Springer Ser. Oper. Res. Financ. Eng., Springer, New York, 2006. 10.1007/0-387-34471-3

[13] L. de Haan and S. Resnick, On regular variation of probability densities, Stochastic Process. Appl. 25 (1987), no. 1, 83–93. 10.1016/0304-4149(87)90191-8

[14] J. H. Einmahl, F. Yang and C. Zhou, Testing the multivariate regular variation model, J. Bus. Econom. Statist. 39 (2021), 907–919. 10.1080/07350015.2020.1737533

[15] P. Embrechts, C. Klüppelberg and T. Mikosch, Modelling Extremal Events, Appl. Math. (New York) 33, Springer, Berlin, 1997. 10.1007/978-3-642-33483-2

[16] K. T. Fang, S. Kotz and K. W. Ng, Symmetric Multivariate and Related Distributions, Monogr. Statist. Appl. Probab. 36, Chapman and Hall, London, 1990. 10.1007/978-1-4899-2937-2

[17] P. Glasserman, C. Kang and W. Kang, Stress scenario selection by empirical likelihood, Quant. Finance 15 (2015), no. 1, 25–41. 10.1080/14697688.2014.926019

[18] B. M. Hill, A simple general approach to inference about the tail of a distribution, Ann. Statist. 3 (1975), no. 5, 1163–1174. 10.1214/aos/1176343247

[19] F. W. Huffer and C. Park, A test for elliptical symmetry, J. Multivariate Anal. 98 (2007), no. 2, 256–281. 10.1016/j.jmva.2005.09.011

[20] H. Hult and F. Lindskog, Multivariate extremes, aggregation and dependence in elliptical distributions, Adv. Appl. Probab. 34 (2002), no. 3, 587–608. 10.1239/aap/1033662167

[21] R. Huisman, K. G. Koedijk, C. J. M. Kool and F. Palm, Tail-index estimates in small samples, J. Bus. Econom. Statist. 19 (2001), 208–216. 10.1198/073500101316970421

[22] R. Kulik and Z. Tong, Estimation of the expected shortfall given an extreme component under conditional extreme value model, Extremes 22 (2019), no. 1, 29–70. 10.1007/s10687-018-0333-9

[23] A. J. McNeil, R. Frey and P. Embrechts, Quantitative Risk Management, Princeton Ser. Finance, Princeton University, Princeton, 2015.

[24] A. J. McNeil and A. D. Smith, Multivariate stress scenarios and solvency, Insurance Math. Econom. 50 (2012), no. 3, 299–308. 10.1016/j.insmatheco.2011.12.005

[25] N. Nolde and J. Zhang, Conditional extremes in asymmetric financial markets, J. Bus. Econom. Statist. 38 (2020), 201–213. 10.1080/07350015.2018.1476248

[26] D. N. Politis and J. P. Romano, The stationary bootstrap, J. Amer. Statist. Assoc. 89 (1994), no. 428, 1303–1313. 10.1080/01621459.1994.10476870

[27] S. I. Resnick, Heavy-Tail Phenomena, Springer Ser. Oper. Res. Financ. Eng., Springer, New York, 2007.

[28] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, Berlin, 2009.

[29] P. Rousseeuw, Multivariate estimation with high breakdown point, Mathematical Statistics and Applications, Reidel, Dordrecht (1985), 283–297. 10.1007/978-94-009-5438-0_20

[30] Basel Committee on Banking Supervision, International convergence of capital measurement and capital standards: A revised framework, 2005.

[31] Basel Committee on Banking Supervision, Principles for sound stress testing practices and supervision, 2009.

[32] Committee of European Banking Supervision, Guidelines on stress testing (CP32), European Banking Authority, 2009.

[33] Financial Services Authority, Stress and scenario testing: Feedback on CP08/24 and final rules, Policy statement 09/20, 2009.

Received: 2024-08-01
Accepted: 2025-01-07
Published Online: 2025-02-06
Published in Print: 2025-05-01

Β© 2025 Walter de Gruyter GmbH, Berlin/Boston
