Abel-Gontcharoff polynomials, parking trajectories and ruin probabilities

Article, Open Access

Claude Lefèvre and Philippe Picard
Published/Copyright: November 29, 2023

Abstract

The central mathematical tool discussed is a non-standard family of polynomials, univariate and bivariate, called Abel-Gontcharoff polynomials. First, we briefly summarize the main properties of this family of polynomials obtained in previous work. Then, we extend the remarkable links existing between these polynomials and the parking functions, a classic object in combinatorics and computer science. Finally, we use the polynomials to determine the non-ruin probabilities over a finite horizon for a bivariate risk process, in discrete and continuous time, assuming that claim amounts are dependent via a partial Schur-constancy property.

MSC 2010: 26C05; 05A99; 91B30

1 Introduction

Goncharoff [19] introduced a new family of univariate polynomials in order to solve some interpolation problems in numerical analysis. Their interest in probability was first shown by Daniels [12] and Picard [32] and then developed by Lefèvre and Picard [25] and Picard and Lefèvre [33] for the study of epidemic processes. In these two articles, a multivariate version of the polynomials was also constructed to be able to deal with heterogeneous populations. Since then, these polynomials have been widely used, in particular to study first crossing problems, epidemic models and insurance risk processes. Among numerous articles, we refer to the studies by Picard and Lefèvre [36], Lefèvre and Picard [27], Ball [4] and Britton and Pardoux (editors) [6].

In the following, we give these polynomials the name Abel-Gontcharoff (A-G) polynomials because of the key role played by Abel-type expansions with respect to the A-G polynomials. The basic elements of the theory have already been presented in previous works (see, e.g. Lefèvre and Picard [27,28]). A brief updated summary is provided in Section 2.

Quite unexpectedly, Kung and Yan [23] and Khare et al. [21] independently encountered and studied this family of A-G polynomials, univariate and multivariate, in a purely mathematical framework. They notably proved that the polynomials make it possible to count so-called parking functions, an object with multiple applications in combinatorics and computer science. In parallel, we will show in Section 3 how the A-G polynomials lead to naturally defining a new concept of stochastic parking trajectories that takes into account ordered sequences of independent uniform random variables on ( 0 , 1 ) .

In the initial theory of insurance risk, the amounts of claims are assumed to be independent and identically distributed, which may prove restrictive in reality. Recently, dependent claims modelling has been integrated in various ways for univariate risk processes (e.g. Albrecher et al. [2], Constantinescu et al. [10], and Lefèvre and Simon [29]). For the multivariate case, the literature on the quantities linked to ruin remains limited (see the study by Albrecher et al. [3] and the references therein). In particular, results over a finite horizon are quite scarce (e.g. Picard et al. [37], Dimitrova and Kaishev [16], and Castañer et al. [7]). In Section 4, we will consider a bivariate risk process, in discrete and continuous time, where the claim amounts for the two risks are dependent on each other through a partial Schur-constancy property (see the studies by Castañer et al. [8] and Lefèvre [24]). A short presentation of this form of dependency is recalled in the Appendix. By using the A-G polynomials, we can then derive compact formulas for the non-ruin probabilities over a finite horizon that exhibit the hidden underlying algebraic structure.

It is worth mentioning that the family of A-G polynomials can be generalized to a family of functions that we have called A-G pseudopolynomials. For the theory with applications in applied probability, we refer to Picard and Lefèvre [34] and Lefèvre and Picard [26].

2 Abel-Gontcharoff polynomials

The concepts and results that we briefly present below essentially come from the studies by Lefèvre and Picard [25,27,28]. Further details, including proofs, can be found in these articles.

2.1 Univariate A-G polynomials

We first deal with the construction of univariate A-G polynomials. To do this, we begin with a family U of reals, which will play the role of parameters:

(1) U = {u_i, i ∈ N_0},

where N_0 = {0, 1, 2, …}. We also introduce the shift operators E^k, k ∈ N_0, which shift the parameters of U to start with the k-th one, i.e.

E^k U = {u_{k+i}, i ∈ N_0}.

The associated family of A-G polynomials of degree n in the argument x, denoted {G_n(x | U), n ∈ N_0}, is then defined as follows.

Definition 2.1

Starting from G_0(x | U) = 1, the polynomials G_n(x | U) are constructed recursively by using the differential equations:

(2) (d/dx) G_n(x | U) = G_{n−1}(x | E U), n ≥ 1,

with the boundary conditions

(3) G_n(u_0 | U) = 0, n ≥ 1.

So, for n = 1, as G_0 = 1, (2) gives G_1(x | U) = g_1 + x, where g_1 is a constant such that, by (3), g_1 + u_0 = 0, and hence g_1 = −u_0. Applying the recursion with n = 2 and 3 then yields the following formulas for illustration.

Examples 2.2

G_1(x | U) = −u_0 + x,
G_2(x | U) = (−u_0²/2 + u_0 u_1) − u_1 x + x²/2,
G_3(x | U) = (−u_0³/6 + u_0² u_2/2 + u_0 u_1²/2 − u_0 u_1 u_2) + (−u_1²/2 + u_1 u_2) x − u_2 x²/2 + x³/6.

Therefore, G_n, n ≥ 1, depends only on the first n parameters u_0, …, u_{n−1} of the family U. The insertion of U in the notation is justified to take into account the whole family of polynomials. Note that in Definition 2.1, a factor n sometimes multiplies the right-hand side of (2).

We easily see that an affine transformation of U has a simple effect on G_n:

(4) G_n(x | a + bU) = b^n G_n((x − a)/b | U), n ∈ N_0.

Below, we state two alternative definitions, (5) and (7), of the A-G polynomials. As usual, f^{(k)}(a) denotes the k-th derivative of a function f(x) evaluated at the point x = a. The first is directly related to the original interpolation problem.

Proposition 2.3

Each G_n is the unique polynomial of degree n satisfying the biorthogonality condition

(5) G_n^{(k)}(u_k | U) = δ_{n,k}, k ∈ N_0.

The second is a consequence of a key property of the A-G polynomials, namely, that they allow one to write an Abel-type expansion, rather than a Taylor expansion, for any polynomial in x (among others).

Proposition 2.4

A polynomial R_n(x) of degree n in x can be expanded as follows:

(6) R_n(x) = Σ_{k=0}^{n} R_n^{(k)}(u_k) G_k(x | U).

So, taking R_n(x) = x^n/n! in (6) gives

(7) G_n(x | U) = x^n/n! − Σ_{k=0}^{n−1} [u_k^{n−k}/(n−k)!] G_k(x | U), n ≥ 1,

hence a simple recursion for the polynomials G n starting from G 0 = 1 .
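As a concrete check, the recursion (7) can be implemented in a few lines; the sketch below (exact rational arithmetic, function name is ours) reproduces the closed forms of Examples 2.2:

```python
from fractions import Fraction
from math import factorial

def ag(n, x, u):
    """Univariate Abel-Gontcharoff polynomial G_n(x | u_0, ..., u_{n-1}),
    computed with the recursion (7): G_n = x^n/n! - sum_k u_k^{n-k}/(n-k)! G_k."""
    G = [Fraction(1)]  # G_0 = 1
    for m in range(1, n + 1):
        v = Fraction(x) ** m / factorial(m)
        for k in range(m):
            v -= Fraction(u[k]) ** (m - k) / factorial(m - k) * G[k]
        G.append(v)
    return G[n]

# Compare with the closed forms of Examples 2.2 at arbitrary rational values.
x, u0, u1, u2 = Fraction(1, 3), Fraction(1, 2), Fraction(2, 3), Fraction(3, 4)
assert ag(1, x, [u0]) == -u0 + x
assert ag(2, x, [u0, u1]) == (-u0**2 / 2 + u0 * u1) - u1 * x + x**2 / 2
assert ag(3, x, [u0, u1, u2]) == (-u0**3 / 6 + u0**2 * u2 / 2 + u0 * u1**2 / 2
                                  - u0 * u1 * u2) + (-u1**2 / 2 + u1 * u2) * x \
                                 - u2 * x**2 / 2 + x**3 / 6
```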

A determinantal formula exists for G n but is cumbersome to apply. In general, a recursive method is more efficient. An explicit expression is available when the u i are of affine form.

Special case 2.5

If u_i ≡ u, i ∈ N_0, are constant parameters,

G_n(x | U) = (x − u)^n/n!, n ∈ N_0,

and (6) reduces to the Taylor expansion of R_n(x).

More generally, if u_i = u i is a linear homogeneous function of i ∈ N_0,

(8) G_n(x | U) = x (x − u n)^{n−1}/n!, n ∈ N_0,

and (6) becomes the Abel expansion of R_n(x).
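Both closed forms can be verified against the recursion (7); a self-contained sketch (with arbitrarily chosen rational values) checks the constant (Taylor) case and the linear (Abel) case:

```python
from fractions import Fraction
from math import factorial

def ag(n, x, u):
    # Recursion (7): G_n = x^n/n! - sum_{k<n} u_k^{n-k}/(n-k)! G_k
    G = [Fraction(1)]
    for m in range(1, n + 1):
        v = Fraction(x) ** m / factorial(m)
        for k in range(m):
            v -= Fraction(u[k]) ** (m - k) / factorial(m - k) * G[k]
        G.append(v)
    return G[n]

x, u = Fraction(7, 2), Fraction(1, 5)
for n in range(6):
    # constant parameters u_i = u: G_n = (x - u)^n / n!
    assert ag(n, x, [u] * n) == (x - u) ** n / factorial(n)
    # linear parameters u_i = u*i: G_n = x (x - u n)^{n-1} / n!  (Abel case, n >= 1)
    if n >= 1:
        assert ag(n, x, [u * i for i in range(n)]) == x * (x - u * n) ** (n - 1) / factorial(n)
```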

In some cases, it may be useful to consider parameters which are no longer real numbers but random variables. The elegant identity below has various applications and will be used here in Section 4.2.

Proposition 2.6

Let {Y_i, i ≥ 1} be a sequence of exchangeable random variables, with partial sums W_i = Y_1 + … + Y_i, i ≥ 1. Then, for any integer l ≥ n,

(9) E[G_n(x | W_n, …, W_1) | W_l] = [x^{n−1}/(n−1)!] (x/n − W_l/l), a.s.
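For n = l = 2, identity (9) can be checked directly: given W_2, exchangeability makes the two orderings of (Y_1, Y_2) equally likely, so the conditional expectation is the average over the two orderings. A small sketch with arbitrary values (note that in (9) the parameter family is (W_n, …, W_1), so u_0 = W_2 and u_1 = W_1):

```python
from fractions import Fraction

def G2(x, u0, u1):
    # G_2(x | u_0, u_1) from Examples 2.2
    return (-u0 ** 2 / 2 + u0 * u1) - u1 * x + x ** 2 / 2

x = Fraction(5)
y1, y2 = Fraction(1), Fraction(3)          # a realization of (Y_1, Y_2)
W2 = y1 + y2
# E[G_2(x | W_2, W_1) | W_2]: average over the two exchangeable orderings W_1 = y1 or y2
lhs = (G2(x, W2, y1) + G2(x, W2, y2)) / 2
# right-hand side of (9) with n = l = 2: x^{n-1}/(n-1)! (x/n - W_l/l)
rhs = x * (x / 2 - W2 / 2)
assert lhs == rhs
```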

2.2 Bivariate A-G polynomials

The generalization to polynomials with several arguments is fairly easy to perform. We limit ourselves to the bivariate case for simplicity. This time, we consider two families U ( 1 ) and U ( 2 ) of real parameters with double indices:

(10) U^(1) = {u^(1)_{i1,i2}, i1, i2 ∈ N_0}, and U^(2) = {u^(2)_{i1,i2}, i1, i2 ∈ N_0}.

Here too, we introduce shift operators E^{k1,k2}, for k1, k2 ∈ N_0, such that

E^{k1,k2}(U^(1), U^(2)) = {(u^(1)_{k1+i1, k2+i2}, u^(2)_{k1+i1, k2+i2}), i1, i2 ∈ N_0}.

The associated bivariate A-G polynomials of degrees n1 in x1 and n2 in x2 form a family denoted {G_{n1,n2}(x1, x2 | U^(1), U^(2)), n1, n2 ∈ N_0}.

Definition 2.7

Starting from G_{0,0}(x1, x2 | U^(1), U^(2)) = 1, the bivariate A-G polynomials are constructed recursively by using the partial differential equations

(11) (∂/∂x1) G_{n1,n2}(x1, x2 | U^(1), U^(2)) = G_{n1−1,n2}(x1, x2 | E^{1,0}(U^(1), U^(2))), n1 ≥ 1, n2 ≥ 0,
(∂/∂x2) G_{n1,n2}(x1, x2 | U^(1), U^(2)) = G_{n1,n2−1}(x1, x2 | E^{0,1}(U^(1), U^(2))), n1 ≥ 0, n2 ≥ 1,

with the boundary conditions

(12) G_{n1,n2}(u^(1)_{0,0}, u^(2)_{0,0} | U^(1), U^(2)) = 0, n1 + n2 ≥ 1.

Obviously, when n1 = 0, G_{0,n2}(x1, x2 | U^(1), U^(2)) becomes the univariate polynomial G_{n2}(x2 | U^(2)); similarly when n2 = 0. Considering n1 = n2 = 1, we obtain from (11)

(∂/∂x1) G_{1,1}(x1, x2 | U^(1), U^(2)) = G_{0,1}(x1, x2 | E^{1,0}(U^(1), U^(2))) = −u^(2)_{1,0} + x2,
(∂/∂x2) G_{1,1}(x1, x2 | U^(1), U^(2)) = G_{1,0}(x1, x2 | E^{0,1}(U^(1), U^(2))) = −u^(1)_{0,1} + x1,

which implies that

G_{1,1}(x1, x2 | U^(1), U^(2)) = g_{1,1} − u^(2)_{1,0} x1 − u^(1)_{0,1} x2 + x1 x2, where g_{1,1} is a constant such that G_{1,1}(u^(1)_{0,0}, u^(2)_{0,0} | U^(1), U^(2)) = 0.

By way of illustration, here are the first bivariate A-G polynomials.

Example 2.8

G_{1,0}(x1, x2 | U^(1), U^(2)) = −u^(1)_{0,0} + x1,
G_{0,1}(x1, x2 | U^(1), U^(2)) = −u^(2)_{0,0} + x2,
G_{2,0}(x1, x2 | U^(1), U^(2)) = −(u^(1)_{0,0})²/2 + u^(1)_{1,0} u^(1)_{0,0} − u^(1)_{1,0} x1 + x1²/2,
G_{0,2}(x1, x2 | U^(1), U^(2)) = −(u^(2)_{0,0})²/2 + u^(2)_{0,1} u^(2)_{0,0} − u^(2)_{0,1} x2 + x2²/2,
G_{1,1}(x1, x2 | U^(1), U^(2)) = u^(1)_{0,1} u^(2)_{0,0} + u^(1)_{0,0} u^(2)_{1,0} − u^(1)_{0,0} u^(2)_{0,0} − u^(2)_{1,0} x1 − u^(1)_{0,1} x2 + x1 x2,
G_{2,1}(x1, x2 | U^(1), U^(2)) = g_{2,1} + (u^(1)_{1,1} u^(2)_{1,0} + u^(1)_{1,0} u^(2)_{2,0} − u^(1)_{1,0} u^(2)_{1,0}) x1 − u^(2)_{2,0} x1²/2 + [−(u^(1)_{0,1})²/2 + u^(1)_{0,1} u^(1)_{1,1}] x2 − u^(1)_{1,1} x1 x2 + x1² x2/2,

where g_{2,1} is a constant such that G_{2,1}(u^(1)_{0,0}, u^(2)_{0,0} | U^(1), U^(2)) = 0.

Note that for n1 + n2 ≥ 1, each G_{n1,n2} depends on the family U^(1) only through the terms u^(1)_{i1,i2} with 0 ≤ i1 ≤ n1 − 1, 0 ≤ i2 ≤ n2, and on the family U^(2) only through the terms u^(2)_{i1,i2} with 0 ≤ i1 ≤ n1, 0 ≤ i2 ≤ n2 − 1.

As in (4), an affine transformation of (U^(1), U^(2)) has a simple effect on G_{n1,n2}:

(13) G_{n1,n2}(x1, x2 | a1 + b1 U^(1), a2 + b2 U^(2)) = b1^{n1} b2^{n2} G_{n1,n2}((x1 − a1)/b1, (x2 − a2)/b2 | U^(1), U^(2)), n1, n2 ∈ N_0.

The two equivalent definitions (5) and (7) generalize to (14) and (16). For a function f(x1, x2), let f^{(k1,k2)}(a1, a2) denote the partial derivative of order k1 in x1 and k2 in x2 evaluated at the point (a1, a2).

Proposition 2.9

For k1, k2 ∈ N_0,

(14) G_{n1,n2}^{(k1,k2)}(u^(1)_{k1,k2}, u^(2)_{k1,k2} | U^(1), U^(2)) = δ_{n1,k1} δ_{n2,k2}, n1, n2 ∈ N_0,

and this property characterizes the bivariate A-G polynomials.

Proposition 2.10

A polynomial R_{n1,n2}(x1, x2) of degree n1 in x1 and n2 in x2 admits a bivariate Abel-type expansion

(15) R_{n1,n2}(x1, x2) = Σ_{k1=0}^{n1} Σ_{k2=0}^{n2} R_{n1,n2}^{(k1,k2)}(u^(1)_{k1,k2}, u^(2)_{k1,k2}) G_{k1,k2}(x1, x2 | U^(1), U^(2)).

So, taking R_{n1,n2}(x1, x2) = x1^{n1} x2^{n2}/(n1! n2!) in (15) yields

(16) [x1^{n1}/n1!] [x2^{n2}/n2!] = Σ_{k1=0}^{n1} Σ_{k2=0}^{n2} [(u^(1)_{k1,k2})^{n1−k1}/(n1−k1)!] [(u^(2)_{k1,k2})^{n2−k2}/(n2−k2)!] G_{k1,k2}(x1, x2 | U^(1), U^(2)), n1 + n2 ≥ 1,

hence a simple recursion for the polynomials G n 1 , n 2 starting from G 0 , 0 = 1 .
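The recursion (16) is easy to implement; the sketch below (exact arithmetic, storing the parameters as nested lists, names are ours) recovers the closed form of G_{1,1} given in Example 2.8:

```python
from fractions import Fraction
from math import factorial

def ag2(n1, n2, x1, x2, u1, u2):
    """Bivariate A-G polynomial G_{n1,n2}(x1, x2 | U^(1), U^(2)) via recursion (16).
    u1[i1][i2] and u2[i1][i2] hold the parameters u^(1)_{i1,i2} and u^(2)_{i1,i2}."""
    G = {}
    for m1 in range(n1 + 1):
        for m2 in range(n2 + 1):
            if (m1, m2) == (0, 0):
                G[0, 0] = Fraction(1)
                continue
            v = Fraction(x1) ** m1 * Fraction(x2) ** m2 / (factorial(m1) * factorial(m2))
            for k1 in range(m1 + 1):
                for k2 in range(m2 + 1):
                    if (k1, k2) == (m1, m2):
                        continue
                    c = (Fraction(u1[k1][k2]) ** (m1 - k1) / factorial(m1 - k1)
                         * Fraction(u2[k1][k2]) ** (m2 - k2) / factorial(m2 - k2))
                    v -= c * G[k1, k2]
            G[m1, m2] = v
    return G[n1, n2]

# Check G_{1,1} against the closed form of Example 2.8.
u1 = [[Fraction(1, 7), Fraction(2, 7)], [Fraction(3, 7), Fraction(4, 7)]]
u2 = [[Fraction(1, 5), Fraction(2, 5)], [Fraction(3, 5), Fraction(4, 5)]]
x1, x2 = Fraction(2), Fraction(3)
expected = (u1[0][1] * u2[0][0] + u1[0][0] * u2[1][0] - u1[0][0] * u2[0][0]
            - u2[1][0] * x1 - u1[0][1] * x2 + x1 * x2)
assert ag2(1, 1, x1, x2, u1, u2) == expected
```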

An explicit expression is available when the u^(1)_{i1,i2} and u^(2)_{i1,i2} are again of affine form.

Special case 2.11

If for j = 1, 2, u^(j)_{i1,i2} ≡ u^(j), i1, i2 ∈ N_0, are constant parameters,

G_{n1,n2}(x1, x2 | U^(1), U^(2)) = [(x1 − u^(1))^{n1}/n1!] [(x2 − u^(2))^{n2}/n2!], n1, n2 ∈ N_0.

If for j = 1, 2, u^(j)_{i1,i2} = u^(j)_{i_j}, i_j ∈ N_0, are one-dimensional parameters,

G_{n1,n2}(x1, x2 | U^(1), U^(2)) = G_{n1}(x1 | U^(1)) G_{n2}(x2 | U^(2)), n1, n2 ∈ N_0,

where G_{n1} and G_{n2} are univariate A-G polynomials.

If for j = 1, 2, u^(j)_{i1,i2} = u^(j)_1 i1 + u^(j)_2 i2 are linear homogeneous functions of i1, i2 ∈ N_0,

(17) G_{n1,n2}(x1, x2 | U^(1), U^(2)) = [(x1 − u^(1)_{n1,n2})^{n1−1}/n1!] [(x2 − u^(2)_{n1,n2})^{n2−1}/n2!] (x1 x2 − x1 u^(2)_{n1,0} − x2 u^(1)_{0,n2}), n1, n2 ≥ 1.
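The closed form (17) can be cross-checked against the recursion (16) for, say, (n1, n2) = (2, 1); a self-contained sketch with arbitrary rational slopes:

```python
from fractions import Fraction
from math import factorial

def ag2(n1, n2, x1, x2, u1, u2):
    # Bivariate A-G recursion (16); u1[i1][i2] = u^(1)_{i1,i2}, u2 likewise.
    G = {(0, 0): Fraction(1)}
    for m1 in range(n1 + 1):
        for m2 in range(n2 + 1):
            if (m1, m2) == (0, 0):
                continue
            v = Fraction(x1) ** m1 * Fraction(x2) ** m2 / (factorial(m1) * factorial(m2))
            for k1 in range(m1 + 1):
                for k2 in range(m2 + 1):
                    if (k1, k2) != (m1, m2):
                        v -= (u1[k1][k2] ** (m1 - k1) / factorial(m1 - k1)
                              * u2[k1][k2] ** (m2 - k2) / factorial(m2 - k2)) * G[k1, k2]
            G[m1, m2] = v
    return G[n1, n2]

# Linear homogeneous parameters u^(j)_{i1,i2} = a_j i1 + b_j i2.
a1, b1 = Fraction(1, 3), Fraction(1, 4)
a2, b2 = Fraction(1, 5), Fraction(1, 6)
n1, n2 = 2, 1
u1 = [[a1 * i1 + b1 * i2 for i2 in range(n2 + 1)] for i1 in range(n1 + 1)]
u2 = [[a2 * i1 + b2 * i2 for i2 in range(n2 + 1)] for i1 in range(n1 + 1)]
x1, x2 = Fraction(3), Fraction(4)
# Closed form (17): note u1[0][n2] = u^(1)_{0,n2} and u2[n1][0] = u^(2)_{n1,0}.
closed = ((x1 - u1[n1][n2]) ** (n1 - 1) / factorial(n1)
          * (x2 - u2[n1][n2]) ** (n2 - 1) / factorial(n2)
          * (x1 * x2 - x1 * u2[n1][0] - x2 * u1[0][n2]))
assert ag2(n1, n2, x1, x2, u1, u2) == closed
```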

3 Ordered uniforms and parking trajectories

Parking functions are a classical object in mathematics with applications in combinatorics, group theory and computer science. They are traditionally presented in the following way.

Suppose that there are n cars and n parking spots on a one-way street. Each car i = 1, …, n has a preferred parking spot a_i ∈ {1, …, n}. Car 1 first enters the street and goes to its preferred spot a_1. Car 2 then goes to its preferred spot a_2: it parks there if this spot is empty, otherwise it takes the first available spot to the right of a_2. The next cars try to park following the same rule. If for any car i, there is no available spot at a_i or further right, the parking process stops.

A sequence (a_1, …, a_n) in {1, …, n} is said to be a parking function if every car succeeds in parking. It is easily seen that (a_1, …, a_n) is a parking function iff its rearrangement in ascending order, denoted (a_{1:n}, …, a_{n:n}), satisfies the constraints a_{i:n} ≤ i, i = 1, …, n.

Parking functions were originally introduced by Konheim and Weiss [22] when investigating a storage device based on hashing. They proved, inter alia, that there are (n + 1)^{n−1} parking functions. Since then, parking functions have appeared in many mathematical contexts. A comprehensive survey was given by Yan [41]; see, e.g. Stanley [40] for supplements. Moreover, Diaconis and Hicks [14] studied links with probability, which led them to uncover new combinatorial structures.

3.1 Unidimensional trajectories

Various extensions of parking functions have been proposed and discussed in the literature. In particular, Kung and Yan [23] have shown that the univariate A-G polynomials make it possible to count the parking functions whose order statistics are upper bounded by any non-decreasing sequence of positive integers u_{i−1} (instead of just i), i = 1, …, n. Specifically, they proved that the number of such parking functions, denoted PK_n(u_0, …, u_{n−1}), is given by

(18) PK_n(u_0, …, u_{n−1}) = (−1)^n n! G_n(0 | {u_0, …, u_{n−1}}),

where G_n is a univariate A-G polynomial as in Section 2.1, the factor n! on the right-hand side being due to our definition (2).

Of course, the combinatorial result (18) requires that the upper bounds u i are all (non-decreasing) positive integers. This assumption may seem somewhat surprising because the polynomials G n are defined from arbitrary real parameters.
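Formula (18) can be confronted with a brute-force enumeration for small n; the sketch below reuses the recursion (7) of Section 2.1 and recovers, in particular, the classical count (n + 1)^{n−1} for the bounds u_{i−1} = i:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def ag(n, x, u):
    # Univariate A-G recursion (7).
    G = [Fraction(1)]
    for m in range(1, n + 1):
        v = Fraction(x) ** m / factorial(m)
        for k in range(m):
            v -= Fraction(u[k]) ** (m - k) / factorial(m - k) * G[k]
        G.append(v)
    return G[n]

def pk_count(bounds):
    # Brute force: sequences whose i-th order statistic is <= bounds[i-1].
    n = len(bounds)
    return sum(all(b <= ub for b, ub in zip(sorted(a), bounds))
               for a in product(range(1, max(bounds) + 1), repeat=n))

def pk_formula(bounds):
    # Right-hand side of (18): (-1)^n n! G_n(0 | bounds).
    n = len(bounds)
    return (-1) ** n * factorial(n) * ag(n, 0, bounds)

assert pk_count([1, 2, 3]) == pk_formula([1, 2, 3]) == 16   # classical: (n+1)^{n-1}
assert pk_count([1, 2, 4]) == pk_formula([1, 2, 4]) == 25
```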

We are going to derive a different interpretation of the right-hand side of (18) when the non-decreasing upper bounds u_i are this time reals in (0, 1). So, let U be the family (1) of these u_i, and consider the associated A-G polynomials G_n(x | U), where x ≤ u_0. From (2) and (3), we directly see that G_n admits the following integral representation:

(19) G_n(x | U) = ∫_{u_0}^{x} dy_0 ∫_{u_1}^{y_0} dy_1 ⋯ ∫_{u_{n−1}}^{y_{n−2}} dy_{n−1}, n ≥ 1.

Now, we introduce a sample ( U 1 , , U n ) of n independent ( 0 , 1 ) -uniform random variables. Denote by ( U 1 : n , , U n : n ) the corresponding order statistics. By using (19), we directly obtain the following representation of G n .

Proposition 3.1

For 0 ≤ x ≤ u_0,

(20) P(U_{1:n} ≤ u_0, …, U_{n:n} ≤ u_{n−1}, and U_{1:n} ≥ x) = (−1)^n n! G_n(x | {u_0, …, u_{n−1}}).

We observe that the right-hand side of (18) and that of (20) when x = 0 are identical, but the u_i are no longer necessarily integers here. By mimicry with the parking functions, we will say that a sample (U_1, …, U_n) of independent (0, 1)-uniforms is a parking trajectory if its order statistics (U_{1:n}, …, U_{n:n}) satisfy the constraints U_{i:n} ≤ u_{i−1}, i = 1, …, n. Therefore, formula (20) gives, for x = 0, the probability that such a parking trajectory does exist.

Note that (20) can presumably be derived from formula (18) of P K n by using an appropriate limit argument. However, such a method would be unnecessarily complicated.
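Proposition 3.1 lends itself to a quick Monte Carlo check; the sketch below (seeded, with arbitrarily chosen bounds) compares the empirical frequency of a parking trajectory with the right-hand side of (20):

```python
import random
from math import factorial

def ag(n, x, u):
    # Univariate A-G recursion (7), floating-point version.
    G = [1.0]
    for m in range(1, n + 1):
        v = x ** m / factorial(m)
        for k in range(m):
            v -= u[k] ** (m - k) / factorial(m - k) * G[k]
        G.append(v)
    return G[n]

random.seed(1)
n, x, u = 3, 0.1, [0.3, 0.5, 0.8]
exact = (-1) ** n * factorial(n) * ag(n, x, u)   # right-hand side of (20)
trials = 200_000
hits = 0
for _ in range(trials):
    s = sorted(random.random() for _ in range(n))
    # parking-trajectory constraints U_{i:n} <= u_{i-1}, together with U_{1:n} >= x
    if s[0] >= x and all(s[i] <= u[i] for i in range(n)):
        hits += 1
assert abs(hits / trials - exact) < 0.01
```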

3.2 Bidimensional trajectories

As noted in Section 1, Khare et al. [21] have recently introduced, independently, the family of bivariate A-G polynomials (see also the study by Adeniran et al. [1]). Their motivation was mainly combinatorial and a key result concerns bivariate parking functions, which are defined, as in the univariate case, from sequences of non-decreasing positive integer parameters.

In fact, they generalized the result (18) by proving that the number of bivariate parking functions is given by an appropriate bivariate A-G polynomial. Their starting point is a two-dimensional lattice {(i1, i2), i1 = 0, …, n1, i2 = 0, …, n2}. Consider all possible paths going from (0, 0) to (n1, n2) by successive steps which are either horizontal (i.e. of the type (i1, i2) → (i1 + 1, i2)) or vertical (i.e. of the type (i1, i2) → (i1, i2 + 1)). To this two-dimensional lattice are attached two sets of upper bounds, {u^(1)_{i1,i2}} and {u^(2)_{i1,i2}}, which must be satisfied at each horizontal and vertical step, respectively. These bounds are positive integers, non-decreasing in i1 and i2 (i.e. u^(j)_{i1,i2} ≤ u^(j)_{i1′,i2′} when i1 ≤ i1′ and i2 ≤ i2′, for j = 1, 2).

A pair of integer sequences [(a^(1)_1, …, a^(1)_{n1}), (a^(2)_1, …, a^(2)_{n2})] is a bivariate parking function iff its rearrangement in ascending order, denoted [(a^(1)_{1:n1}, …, a^(1)_{n1:n1}), (a^(2)_{1:n2}, …, a^(2)_{n2:n2})], satisfies the corresponding upper bound constraints. Let us denote

U^(1) = {u^(1)_{i1,i2}, 0 ≤ i1 ≤ n1 − 1, 0 ≤ i2 ≤ n2}, U^(2) = {u^(2)_{i1,i2}, 0 ≤ i1 ≤ n1, 0 ≤ i2 ≤ n2 − 1}.

Khare et al. [21] proved that the number of bivariate parking functions, PK_{n1,n2}(U^(1), U^(2)), is given by

(21) PK_{n1,n2}(U^(1), U^(2)) = (−1)^{n1+n2} n1! n2! G_{n1,n2}(0, 0 | U^(1), U^(2)),

where G n 1 , n 2 is a bivariate A-G polynomial as in Section 2.2, the factor n 1 ! n 2 ! on the right-hand side being due to our definition (11).

We are going to show that here too, a formula like (21) holds in a probabilistic context, for sequences of parameters which are (non-decreasing) reals in (0, 1). This result was proved in the study by Lefèvre and Picard [28], but in a sometimes imprecise way; a detailed argument is given below.

Let us look at the same two-dimensional lattice as mentioned earlier, and consider all the paths going from (0, 0) to (n1, n2) by successive steps, either vertical or horizontal. The lattice is again completed by upper bounds to be satisfied at each step: u^(1)_{i1,i2} for a horizontal step (i1, i2) → (i1 + 1, i2) and u^(2)_{i1,i2} for a vertical step (i1, i2) → (i1, i2 + 1). These bounds are always non-decreasing in i1 and i2 but are now reals in (0, 1).

We now introduce a pair of independent samples of n1 and n2 independent (0, 1)-uniform random variables, [(U^(1)_1, …, U^(1)_{n1}), (U^(2)_1, …, U^(2)_{n2})]. The question we ask is whether the order statistics of these two samples, denoted [(U^(1)_{1:n1}, …, U^(1)_{n1:n1}), (U^(2)_{1:n2}, …, U^(2)_{n2:n2})], make it possible to generate a path to (n1, n2) which satisfies the upper bound constraints. A pair of samples which does so is called a bivariate parking trajectory.

As an illustration, suppose (n1, n2) = (3, 2), and consider the associated lattice for which the fixed upper bounds are given in the left part of Figure 1. Assume the two ordered uniform samples, (U^(1)_{1:3}, U^(1)_{2:3}, U^(1)_{3:3}) and (U^(2)_{1:2}, U^(2)_{2:2}), generate the path represented in the right part of Figure 1. Then this path will be a bivariate parking trajectory iff

(U^(1)_{1:3} ≤ u^(1)_{0,0}, U^(1)_{2:3} ≤ u^(1)_{1,1}, U^(1)_{3:3} ≤ u^(1)_{2,1}), and (U^(2)_{1:2} ≤ u^(2)_{1,0}, U^(2)_{2:2} ≤ u^(2)_{3,1}).

Figure 1

Lattice from ( 0 , 0 ) to ( n 1 , n 2 ) = ( 3 , 2 ) : the set of upper bounds to satisfy (on the left side), and a path of two ordered uniform samples (on the right side).

We will prove that the probability that a bivariate parking trajectory does exist is given by formula (22), where ( x 1 , x 2 ) = ( 0 , 0 ) . Note that the right-hand side has exactly the same expression as (21).

Proposition 3.2

For 0 ≤ x1 ≤ u^(1)_{0,0} and 0 ≤ x2 ≤ u^(2)_{0,0},

(22) P(a bivariate parking trajectory exists to (n1, n2), and U^(1)_{1:n1} ≥ x1, U^(2)_{1:n2} ≥ x2) = (−1)^{n1+n2} n1! n2! G_{n1,n2}(x1, x2 | U^(1), U^(2)).

Proof

Denote by B_{n1,n2}(x1, x2) the probability P(⋅) defined in the right-hand side of (22). Its increment in x1 is given by

(23) B_{n1,n2}(x1 + dx1, x2) − B_{n1,n2}(x1, x2) = −P(a parking trajectory, and x1 ≤ U^(1)_{1:n1} ≤ x1 + dx1, U^(2)_{1:n2} ≥ x2).

As U^(1)_{1:n1} ≤ U^(1)_{2:n1}, the constraint U^(1)_{2:n1} ≥ x1 may be added in the probability on the right-hand side of (23), which so becomes

(24) P(a parking trajectory, and x1 ≤ U^(1)_{1:n1} ≤ x1 + dx1, U^(1)_{2:n1} ≥ x1, U^(2)_{1:n2} ≥ x2).

Moreover, the event (x1 ≤ U^(1)_{1:n1} ≤ U^(1)_{2:n1} ≤ x1 + dx1) has probability o(dx1), implying that the constraint U^(1)_{2:n1} ≥ x1 can be reinforced as U^(1)_{2:n1} ≥ x1 + dx1. Thus, at first order, (24) is equivalent to

(25) P(a parking trajectory, and x1 ≤ U^(1)_{1:n1} ≤ x1 + dx1, U^(1)_{2:n1} ≥ x1 + dx1, U^(2)_{1:n2} ≥ x2).

Among the uniforms U^(1)_1, …, U^(1)_{n1}, only one can take a value in (x1, x1 + dx1), each with probability dx1. Suppose, for example, it is U^(1)_1. The remaining variables then form a set of n1 − 1 independent uniforms independent of U^(1)_1, denoted (Ũ^(1)_1, …, Ũ^(1)_{n1−1}). Substituting Ũ^(1)_{1:n1−1} for U^(1)_{2:n1} in (25) then yields

(26) n1 dx1 P(a parking trajectory, and Ũ^(1)_{1:n1−1} ≥ x1 + dx1, U^(2)_{1:n2} ≥ x2).

After division by dx1 → 0, we obtain from (23) and (26)

(27) (∂/∂x1) B_{n1,n2}(x1, x2) = −n1 P(a parking trajectory, and Ũ^(1)_{1:n1−1} ≥ x1, U^(2)_{1:n2} ≥ x2).

Now, let us look at the probability term P(⋅) on the right-hand side of (27). This time, the existence of a parking trajectory concerns the two independent ordered sequences of uniforms, [(Ũ^(1)_{1:n1−1}, …, Ũ^(1)_{n1−1:n1−1}), (U^(2)_{1:n2}, …, U^(2)_{n2:n2})]. By definition, they are required to reach the point (n1, n2) while satisfying the upper bound constraints given by the lattice. We note, however, that these constraints no longer apply from the origin (0, 0) but from the point (1, 0), because U^(1)_1 = x1 ≤ u^(1)_{0,0} and the upper bounds are non-decreasing. Therefore, it suffices to consider the paths from (1, 0) to (n1, n2) or, equivalently, from (0, 0) to (n1 − 1, n2) after applying the shift operator E^{1,0} to (U^(1), U^(2)).

With the notation of B_{n1,n2} completed accordingly, we obtain from (27) that

(28) (∂/∂x1) B_{n1,n2}(x1, x2 | U^(1), U^(2)) = −n1 B_{n1−1,n2}(x1, x2 | E^{1,0}(U^(1), U^(2))), n1 ≥ 1.

Writing

(29) B_{n1,n2}(x1, x2 | U^(1), U^(2)) = (−1)^{n1+n2} n1! n2! H_{n1,n2}(x1, x2 | U^(1), U^(2)),

(28) leads to

(30) (∂/∂x1) H_{n1,n2}(x1, x2 | U^(1), U^(2)) = H_{n1−1,n2}(x1, x2 | E^{1,0}(U^(1), U^(2))), n1 ≥ 1.

Similarly, we find that

(31) (∂/∂x2) H_{n1,n2}(x1, x2 | U^(1), U^(2)) = H_{n1,n2−1}(x1, x2 | E^{0,1}(U^(1), U^(2))), n2 ≥ 1.

Moreover, from (29) and the definition of B_{n1,n2}, we have

(32) H_{0,0} = 1, and H_{n1,n2}(u^(1)_{0,0}, u^(2)_{0,0} | U^(1), U^(2)) = 0, n1 + n2 ≥ 1.

We therefore deduce from (30)–(32) that each H n 1 , n 2 corresponds precisely to the A-G polynomial G n 1 , n 2 .□

Here too, (22) could be obtained from the formula (21) for P K n 1 , n 2 , but this is superfluous.

4 Ruin in a bidimensional risk model

The A-G polynomials are well known to highlight the hidden algebraic structure in first-crossing problems and epidemic processes. Our objective in this section is to show that they are also a valuable tool to determine non-ruin probabilities over a finite horizon in bidimensional risk processes when claim amounts exhibit a particular form of dependence. For greater clarity, we will only discuss here the case of univariate A-G polynomials because they arise in a very simple and natural way.

The notion of dependence used is that, called partial Schur-constancy, which was studied by Lefèvre [24] in the continuous case (see the study by Castañer et al. [8] for the discrete case). A short presentation is given in the Appendix. This form of dependence has the advantage of incorporating a large class of partially exchangeable dependencies while leading to compact expressions for the non-ruin probabilities in finite time.

Let us mention that other particular families of polynomials have proven useful in the mathematical theory of insurance. The reader is referred to the studies by Picard and Lefèvre [35], Goffard et al. [17], Dimitrova et al. [15], and Albrecher et al. [3].

4.1 A discrete-time process

To begin, we examine a bidimensional risk process that is formulated on a discrete-time scale N 0 . For a single risk, the study of a discrete-time model is fairly standard and well documented (see e.g. the review by Li et al. [30]). For two risks, the discrete-time model below has a structure similar to the one considered in the study by Castañer et al. [7].

Risk model. Specifically, the insurance concerns two risk processes in parallel which are observed on a horizon of length n , say, at equidistant instants i = 0 , 1 , , n . Let c 0 ( 1 ) (resp. c 0 ( 2 ) ) be the initial reserves of the company for the risk labelled 1 (resp. 2). The premium received during period ( i 1 , i ) , 1 i n , is a constant c i ( 1 ) (resp. c i ( 2 ) ). Note that these premiums are allowed to vary deterministically over time.

The successive claim amounts for period (i − 1, i), 1 ≤ i ≤ n, are random variables X^(1)_i (resp. X^(2)_i). In general, these claim amounts depend on each other in a more or less complex way. In the sequel, we will assume that the n claim amounts per unit of time for risk (1) and those for risk (2) form two subvectors as defined in (A2) of the Appendix, i.e.

(33) [(X^(1)_1, …, X^(1)_n), (X^(2)_1, …, X^(2)_n)]

constitutes an absolutely continuous vector which is partially Schur-constant. Note that here, n1 = n2 = n.

So, for each risk j = 1 , 2 , the surplus process at time i is given by

(34) R^(j)_i = h^(j)_i − S^(j)_i, 0 ≤ i ≤ n,

where h^(j)_i is the total premium received up to time i, including the initial reserve, and S^(j)_i is the total claim amount incurred up to time i, i.e.

h^(j)_i = c^(j)_0 + c^(j)_1 + … + c^(j)_i, and S^(j)_i = X^(j)_1 + … + X^(j)_i.

Non-ruin probabilities. There are several possible definitions of ruin in multirisk management. Here are the two most standard definitions.

(i) Ruin occurs at time τ_or as soon as one of the two surpluses becomes negative. In other words, τ_or = min(τ_1, τ_2), where τ_j is the ruin time for risk j. Thus, the event (τ_or > n) means that both surpluses remain non-negative up to time n.

(ii) Ruin occurs at time τ_and as soon as the two surpluses have both become negative (not necessarily at the same time). Thus, τ_and = max(τ_1, τ_2), and the event (τ_and > n) means that at least one of the two surpluses remains non-negative up to time n.

The corresponding probabilities of non-ruin until time n are

ϕ_or(n) = P(τ_or > n), and ϕ_and(n) = P(τ_and > n).

Of course, they are linked by the identity

(35) ϕ_or(n) = ϕ_1(n) + ϕ_2(n) − ϕ_and(n),

where ϕ_j(n) is the probability of non-ruin until time n for the single risk j.
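Identity (35) actually holds path by path, since 1(τ_or > n) = 1(τ_1 > n) 1(τ_2 > n) and 1(τ_and > n) = 1(τ_1 > n) + 1(τ_2 > n) − 1(τ_1 > n) 1(τ_2 > n). The sketch below illustrates this on simulated surplus paths (with illustrative i.i.d. exponential claims and constant premiums, chosen by us; this is not the Schur-constant model studied below):

```python
import random

random.seed(2)
n = 10                       # horizon
c0, c = 2.0, 1.0             # illustrative initial reserve and unit premium, same for both risks
trials = 20_000
n_or = n1_ = n2_ = n_and = 0
for _ in range(trials):
    ok = [True, True]        # "no ruin so far" for risks 1 and 2
    s = [0.0, 0.0]           # aggregate claims S_i^{(j)}
    for i in range(1, n + 1):
        for j in range(2):
            s[j] += random.expovariate(1.0)   # claim X_i^{(j)}, illustrative i.i.d. case
            if s[j] > c0 + c * i:             # surplus R_i^{(j)} = h_i^{(j)} - S_i^{(j)} < 0
                ok[j] = False
    n1_ += ok[0]; n2_ += ok[1]
    n_or += ok[0] and ok[1]                   # both survive
    n_and += ok[0] or ok[1]                   # at least one survives
# phi_or = phi_1 + phi_2 - phi_and holds exactly on counts (inclusion-exclusion)
assert n_or == n1_ + n2_ - n_and
```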

In practice, it is convenient to deal with the non-ruin probability ϕ_or(n), which will also give ϕ_j(n), j = 1, 2, and thus ϕ_and(n) using (35). Let 1(A) be the indicator of any event A. We will show that ϕ_or(n) can be expressed in terms of two A-G polynomials, which are univariate (see Section 2.1).

Proposition 4.1

If the vector [(X^(j)_1, …, X^(j)_n), j = 1, 2] is partially Schur-constant,

(36) ϕ_or(n) = [(n−1)!]² G_{n−1}(0 | U^(1)) G_{n−1}(0 | U^(2)) E{[1/(S^(1)_n S^(2)_n)^{n−1}] 1(S^(1)_n ≤ h^(1)_n, S^(2)_n ≤ h^(2)_n)},

where U^(j), j = 1, 2, is the family of parameters

(37) u^(j)_i = h^(j)_{i+1}, 0 ≤ i ≤ n − 2,

and G_{n−1}(0 | U^(j)) is the associated univariate A-G polynomial of degree n − 1 at x = 0.

Proof

There is no ruin for either risk up to time n if the two surpluses remain non-negative over the entire horizon. Thus,

$\phi_{or}(n) = P\big[(S_1^{(1)} \le h_1^{(1)}, \ldots, S_n^{(1)} \le h_n^{(1)}),\ (S_1^{(2)} \le h_1^{(2)}, \ldots, S_n^{(2)} \le h_n^{(2)})\big].$

Conditioning on the vector $(S_n^{(1)}, S_n^{(2)})$, with joint density $f_{n,n}$ (see (A8)), we obtain

(38) $\phi_{or}(n) = \int_{s_1=0}^{h_n^{(1)}} \int_{s_2=0}^{h_n^{(2)}} P\Big[\bigcap_{i=1}^{n-1}\big(S_i^{(1)} \le h_i^{(1)},\ S_i^{(2)} \le h_i^{(2)}\big) \,\Big|\, (S_n^{(1)}, S_n^{(2)}) = (s_1, s_2)\Big]\, f_{n,n}(s_1, s_2)\, ds_1\, ds_2.$

For each risk $j = 1, 2$, let us divide inside (38) each inequality $S_i^{(j)} \le h_i^{(j)}$, $0 \le i \le n$, by $S_n^{(j)}$. The claim amount vector being partially Schur-constant, Proposition A.3 with (A6), (A7) applies, so that (38) can be rewritten as follows:

(39) $\phi_{or}(n) = \int_{s_1=0}^{h_n^{(1)}} \int_{s_2=0}^{h_n^{(2)}} P\Big[\bigcap_{i=1}^{n-1}\big(U_{i:n-1}^{(1)} \le h_i^{(1)}/s_1\big)\Big]\, P\Big[\bigcap_{i=1}^{n-1}\big(U_{i:n-1}^{(2)} \le h_i^{(2)}/s_2\big)\Big]\, f_{n,n}(s_1, s_2)\, ds_1\, ds_2.$

By virtue of (20), the probabilities in the integrand of (39) are given by univariate A-G polynomials, namely, for $j = 1, 2$,

(40) $P\Big[\bigcap_{i=1}^{n-1}\big(U_{i:n-1}^{(j)} \le h_i^{(j)}/s_j\big)\Big] = (n-1)!\, (-1)^{n-1}\, G_{n-1}(0 \mid U^{(j)}/s_j),$

where $U^{(j)}$ is the family of reals $\{u_i^{(j)}\}$ defined in (37). From the relation (4), we have

$G_{n-1}(0 \mid U^{(j)}/s_j) = (1/s_j)^{n-1}\, G_{n-1}(0 \mid U^{(j)}).$

Thus, the announced formula (36) follows from (39) and (40).□
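The boundary-crossing identity (40) can be probed numerically. In the sketch below, the A-G polynomial is built from the integral recursion $G_m(x \mid u_0, \ldots, u_{m-1}) = \int_{u_0}^x G_{m-1}(s \mid u_1, \ldots, u_{m-1})\, ds$ with $G_0 \equiv 1$; this recursive form and the parameter values are our own illustrative choices, not taken from the paper.

```python
import random

def ag_coeffs(u):
    """Ascending coefficients of G_m(x | u_0..u_{m-1}), m = len(u),
    built from G_0 = 1 by integrating G_{k-1} from the next parameter to x."""
    c = [1.0]
    for a in reversed(u):
        ci = [0.0] + [c[k] / (k + 1) for k in range(len(c))]
        ci[0] = -sum(ci[k] * a ** k for k in range(1, len(ci)))
        c = ci
    return c

def ag_at_zero(u):
    return ag_coeffs(u)[0]

# P[U_{1:2} <= u_0, U_{2:2} <= u_1] for uniform order statistics:
u = (0.3, 0.7)
exact = 2 * u[0] * u[1] - u[0] ** 2      # direct calculation for m = 2
poly = 2 * ag_at_zero(list(u))           # m! * (-1)^m * G_m(0 | u) with m = 2
assert abs(poly - exact) < 1e-9

# Monte Carlo check of the same probability
rng = random.Random(0)
trials, hits = 100000, 0
for _ in range(trials):
    a, b = sorted((rng.random(), rng.random()))
    hits += (a <= u[0]) and (b <= u[1])
assert abs(hits / trials - poly) < 0.01
```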

Remember that the density $f_{n,n}(s_1, s_2)$ of $(S_n^{(1)}, S_n^{(2)})$ is given by (A8), so the expectation in (36) is indeed computable as follows:

$E\left\{ \frac{1\big(S_n^{(1)} \le h_n^{(1)},\ S_n^{(2)} \le h_n^{(2)}\big)}{(S_n^{(1)} S_n^{(2)})^{n-1}} \right\} = \frac{1}{[(n-1)!]^2} \int_{s_1=0}^{h_n^{(1)}} \int_{s_2=0}^{h_n^{(2)}} g^{(n,n)}(s_1, s_2)\, ds_1\, ds_2.$

Moreover, for a single risk $j$, $(X_1^{(j)}, \ldots, X_n^{(j)})$ is Schur-constant and (36) reduces to

$\phi_j(n) = (n-1)!\, (-1)^{n-1}\, G_{n-1}(0 \mid U^{(j)})\, E\left\{ \frac{1\big(S_n^{(j)} \le h_n^{(j)}\big)}{(S_n^{(j)})^{n-1}} \right\},$

which can also be evaluated, hence $\phi_{and}(n)$ using (35).

As a special case, suppose that for each risk j = 1 , 2 , the premium per unit of time is constant over time and equal to c ( j ) , starting from an initial capital v ( j ) . This simplifying assumption is found in most of the literature.

Corollary 4.2

If, in addition, $h_{i+1}^{(j)} = v^{(j)} + c^{(j)}(i+1)$, $i \in \mathbb{N}_0$, then

(41) $\phi_{or}(n) = (v^{(1)} + c^{(1)})(v^{(2)} + c^{(2)})\, \big[(v^{(1)} + c^{(1)} n)(v^{(2)} + c^{(2)} n)\big]^{n-2}\, E\left\{ \frac{1\big(S_n^{(1)} \le v^{(1)} + c^{(1)} n,\ S_n^{(2)} \le v^{(2)} + c^{(2)} n\big)}{(S_n^{(1)} S_n^{(2)})^{n-1}} \right\}.$

Proof

As the function $h_{i+1}^{(j)}$ is affine in $i \in \mathbb{N}_0$, the two A-G polynomials $G_{n-1}(0 \mid U^{(j)})$ in (36) reduce to Abel polynomials. Specifically, by using (4) and (8), we obtain

$G_{n-1}\big(0 \mid \{v^{(j)} + c^{(j)}(i+1),\ i \in \mathbb{N}_0\}\big) = G_{n-1}\big({-v^{(j)} - c^{(j)}} \mid \{c^{(j)} i,\ i \in \mathbb{N}_0\}\big) = \frac{(-v^{(j)} - c^{(j)})\,\big({-v^{(j)} - c^{(j)} - c^{(j)}(n-1)}\big)^{n-2}}{(n-1)!} = (-1)^{n-1}\, \frac{(v^{(j)} + c^{(j)})\,(v^{(j)} + c^{(j)} n)^{n-2}}{(n-1)!}.$

After substitution in (36), we obtain formula (41).□
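The Abel reduction used in this proof can be checked numerically: the integral recursion defining the A-G family must reproduce the closed form $(-1)^{n-1}(v+c)(v+cn)^{n-2}/(n-1)!$ for affine parameters. A minimal sketch, with hypothetical values of $n$, $v$ and $c$:

```python
from math import factorial

def ag_at_zero(u):
    """G_m(0 | u_0..u_{m-1}) via the recursion G_k(x) = integral of G_{k-1}
    from the next parameter to x, starting from G_0 = 1."""
    c = [1.0]
    for a in reversed(u):
        ci = [0.0] + [c[k] / (k + 1) for k in range(len(c))]
        ci[0] = -sum(ci[k] * a ** k for k in range(1, len(ci)))
        c = ci
    return c[0]

# Sanity check on a hand-computable Abel case: G_3(0 | 1, 2, 3) = -8/3
assert abs(ag_at_zero([1.0, 2.0, 3.0]) + 8.0 / 3.0) < 1e-9

n, v, cst = 5, 2.0, 1.0                          # hypothetical horizon, reserve, premium
u = [v + cst * (i + 1) for i in range(n - 1)]    # u_i = v + c(i + 1), 0 <= i <= n - 2
closed = (-1) ** (n - 1) * (v + cst) * (v + cst * n) ** (n - 2) / factorial(n - 1)
assert abs(ag_at_zero(u) - closed) < 1e-9
```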

4.2 A continuous-time process

We continue by examining a continuous-time version of this risk model. A process of this type has been proposed previously by a number of researchers, e.g. Dang et al. [11] and Gong et al. [18]. We now reason on a continuous-time scale $t \in \mathbb{R}_+$, and we want to determine the probabilities of non-ruin up to any real time $t$.

Claim arrivals. A major change is that we have to explicitly introduce the counting processes $\{N_t^{(j)},\ t \ge 0\}$, $j = 1, 2$, which generate the claim arrivals for each risk. To this end, we will consider two processes which are marginally Poisson but may be correlated. More precisely, we assume that if at time $t$, $(N_t^{(1)}, N_t^{(2)}) = (n_1, n_2)$, then the vectors of claim arrival times for each risk $j$, denoted by $(T_1^{(j)}, \ldots, T_{n_j}^{(j)})$, are independent of each other and distributed as the order statistics of a sample of $n_j$ independent $(0, t)$-uniform random variables.

For example, one may imagine that claims of each type can occur or not during any small time interval $(t, t + dt)$ with the probabilities

$P(N_{t+dt}^{(1)} = k_1 + 1,\ N_{t+dt}^{(2)} = k_2 \mid N_t^{(1)} = k_1,\ N_t^{(2)} = k_2) = (\lambda_1 - \lambda_{1,2})\, dt + o(dt),$
$P(N_{t+dt}^{(1)} = k_1,\ N_{t+dt}^{(2)} = k_2 + 1 \mid N_t^{(1)} = k_1,\ N_t^{(2)} = k_2) = (\lambda_2 - \lambda_{1,2})\, dt + o(dt),$
$P(N_{t+dt}^{(1)} = k_1 + 1,\ N_{t+dt}^{(2)} = k_2 + 1 \mid N_t^{(1)} = k_1,\ N_t^{(2)} = k_2) = \lambda_{1,2}\, dt + o(dt),$
$P(N_{t+dt}^{(1)} = k_1,\ N_{t+dt}^{(2)} = k_2 \mid N_t^{(1)} = k_1,\ N_t^{(2)} = k_2) = 1 - (\lambda_1 + \lambda_2 - \lambda_{1,2})\, dt + o(dt),$

with $\lambda_1, \lambda_2 > \lambda_{1,2}$. In this case, each process $\{N_t^{(j)},\ t \ge 0\}$ is marginally Poisson with parameter $\lambda_j$, and at any time $t$, the vector $(N_t^{(1)}, N_t^{(2)})$ has a standard bivariate Poisson distribution (see the study by Johnson et al. [20]). In other words, this vector has the representation

$N_t^{(1)} = M_t^{(1)} + M_t^{(1,2)}, \quad N_t^{(2)} = M_t^{(2)} + M_t^{(1,2)},$

where $M_t^{(1)}, M_t^{(2)}, M_t^{(1,2)}$ are independent Poisson variables with parameters $(\lambda_1 - \lambda_{1,2})t$, $(\lambda_2 - \lambda_{1,2})t$, $\lambda_{1,2} t$, respectively. Note that the correlation coefficient between $N_t^{(1)}$ and $N_t^{(2)}$ is equal to $\lambda_{1,2}(\lambda_1 \lambda_2)^{-1/2}$ and is thus always positive, which can be restrictive in certain situations.
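This common-shock construction is easy to simulate. The sketch below is our own illustration with arbitrary intensities: it samples $(N_t^{(1)}, N_t^{(2)})$ from three independent Poisson variables and checks that the empirical correlation is close to $\lambda_{1,2}(\lambda_1 \lambda_2)^{-1/2}$.

```python
import math
import random

def poisson(mu, rng):
    """Knuth's multiplication method (adequate for moderate mu)."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def bivariate_poisson(t, lam1, lam2, lam12, rng):
    """N1 = M1 + M12, N2 = M2 + M12 with independent Poisson M's."""
    m12 = poisson(lam12 * t, rng)
    return (poisson((lam1 - lam12) * t, rng) + m12,
            poisson((lam2 - lam12) * t, rng) + m12)

rng = random.Random(42)
t, lam1, lam2, lam12 = 1.0, 2.0, 3.0, 1.0     # hypothetical intensities
sample = [bivariate_poisson(t, lam1, lam2, lam12, rng) for _ in range(40000)]
m1 = sum(a for a, _ in sample) / len(sample)
m2 = sum(b for _, b in sample) / len(sample)
cov = sum((a - m1) * (b - m2) for a, b in sample) / len(sample)
v1 = sum((a - m1) ** 2 for a, _ in sample) / len(sample)
v2 = sum((b - m2) ** 2 for _, b in sample) / len(sample)
rho = cov / math.sqrt(v1 * v2)
assert abs(rho - lam12 / math.sqrt(lam1 * lam2)) < 0.05
```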

Different methods have been developed to overcome this difficulty (see the study by Pfeifer and Neslehová [38] and the references therein). In particular, these authors propose to use copulas while keeping univariate Poisson margins, which is achievable thanks to Sklar’s key theorem. However, a stochastic interpretation as given earlier may no longer exist. We refer the reader to Bäuerle and Grübel [5] for further discussion of this issue.

Risk model. As is often done, we assume here too that for each risk $j = 1, 2$, the successive claim amounts $X_k^{(j)}$, $k \ge 1$, are independent of the claim arrival processes. In addition, as the numbers of claims on $(0, t)$ are integer-valued random variables, it is now the sequence

(42) $[(X_{k_1}^{(1)},\ k_1 \ge 1),\ (X_{k_2}^{(2)},\ k_2 \ge 1)],$

which is assumed to be absolutely continuous and partially Schur-constant. The generator $g(x_1, x_2)$ defined through (A2) must then be an infinitely monotone function in $(x_1, x_2)$ and thus corresponds to a Laplace transform as in (A5). Of course, any pair of subvectors of lengths, say, $(n_1, n_2)$,

$[(X_{k_1}^{(1)},\ 1 \le k_1 \le n_1),\ (X_{k_2}^{(2)},\ 1 \le k_2 \le n_2)],$

then forms a partially Schur-constant vector.

For each risk $j$, the premiums received up to time $t$ are given by a function $h_t^{(j)}$, including an initial capital. Therefore, the surplus process at time $t$ for risk $j$ is given by

(43) $R_t^{(j)} = h_t^{(j)} - S_t^{(j)}, \quad t \ge 0,$

where $S_t^{(j)}$ is the total claim amount up to time $t$, i.e.

$S_t^{(j)} = \sum_{k=1}^{N_t^{(j)}} X_k^{(j)}.$

Non-ruin probabilities. Ruin is defined as in Section 4.1, and our objective is to determine the non-ruin probability $\phi_{or}(t)$ up to any time $t$. We will see that, as expected, $\phi_{or}(t)$ can again be expressed in terms of two univariate A-G polynomials, which are defined this time from randomized parameters.

Proposition 4.3

If the sequence $[(X_{k_j}^{(j)},\ k_j \ge 1),\ j = 1, 2]$ is partially Schur-constant, then

(44) $\phi_{or}(t) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} P(N_t^{(1)} = n_1,\ N_t^{(2)} = n_2)\, E_{(n_1,n_2)}(t),$

where $E_{(n_1,n_2)}(t)$ denotes the expectation, taken with respect to $(S_{n_1}^{(1)}, S_{n_2}^{(2)})$ and $(T_1^{(j)}, \ldots, T_{n_j}^{(j)})$ for $j = 1, 2$, defined by

(45) $E_{(n_1,n_2)}(t) = (-1)^{n_1+n_2}\, (n_1-1)!\, (n_2-1)!\, E\left\{ G_{n_1-1}(0 \mid U^{(1)})\, G_{n_2-1}(0 \mid U^{(2)})\, \frac{1\big(T_{n_1}^{(1)}, T_{n_2}^{(2)} < t,\ S_{n_1}^{(1)} \le h^{(1)}_{T_{n_1}^{(1)}},\ S_{n_2}^{(2)} \le h^{(2)}_{T_{n_2}^{(2)}}\big)}{(S_{n_1}^{(1)})^{n_1-1} (S_{n_2}^{(2)})^{n_2-1}} \right\},$

and $U^{(j)}$, $j = 1, 2$, is the family of randomized parameters

(46) $U_i^{(j)} = h^{(j)}_{T_{i+1}^{(j)}}, \quad 0 \le i \le n_j - 2.$

Proof

For each risk $j$, fix the total number of claims that have occurred up to time $t$ and their successive times of arrival. So, we set $N_t^{(j)} = n_j$ and $(T_1^{(j)}, \ldots, T_{n_j}^{(j)}) = (t_1^{(j)}, \ldots, t_{n_j}^{(j)})$ with $t_{n_j}^{(j)} \le t$, and we collect all these values under the label $A_t$. Denote by $\phi_{or}(t \mid A_t)$ the probability of non-ruin until time $t$ conditioned on $A_t$. Clearly, we have

$\phi_{or}(t \mid A_t) = P\big[(S_1^{(1)} \le h^{(1)}_{t_1^{(1)}}, \ldots, S_{n_1}^{(1)} \le h^{(1)}_{t_{n_1}^{(1)}}),\ (S_1^{(2)} \le h^{(2)}_{t_1^{(2)}}, \ldots, S_{n_2}^{(2)} \le h^{(2)}_{t_{n_2}^{(2)}}) \,\big|\, A_t\big].$

The two claim subvectors being partially Schur-constant for any dimension, we can reason exactly as for Proposition 4.1 to obtain ϕ or ( t A t ) . After elimination of the conditioning on A t , we directly deduce the results (44)–(46).□

Recall that $E_{(n_1,n_2)}(t)$ in (44) is indeed computable because $(S_{n_1}^{(1)}, S_{n_2}^{(2)})$ has density (A8) and, as stated previously, $(T_1^{(j)}, \ldots, T_{n_j}^{(j)})$, $j = 1, 2$, are two independent vectors, each distributed as $(t U_{1:n_j}^{(j)}, \ldots, t U_{n_j:n_j}^{(j)})$.
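The order-statistics property just invoked — given $N_t^{(j)} = n_j$, the arrival times are distributed as $t$ times uniform order statistics — is straightforward to exploit by simulation. A minimal sketch (the parameter values are our own illustrative choices):

```python
import random

rng = random.Random(7)
t, n, trials = 5.0, 4, 20000
mean_T = [0.0] * n
for _ in range(trials):
    # Given N_t = n, arrival times are distributed as the order statistics
    # of n independent (0, t)-uniform random variables
    T = sorted(rng.uniform(0.0, t) for _ in range(n))
    for k in range(n):
        mean_T[k] += T[k] / trials
# E[T_{k:n}] = t * k / (n + 1) for k = 1, ..., n
assert abs(mean_T[0] - t * 1 / (n + 1)) < 0.1
assert abs(mean_T[n - 1] - t * n / (n + 1)) < 0.1
```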

Let us examine the particular case where premiums accumulate at a constant rate c ( j ) , from an initial capital v ( j ) . This situation therefore resembles that discussed in Corollary 4.2.

Corollary 4.4

If, in addition, $h_t^{(j)} = v^{(j)} + c^{(j)} t$, $t \ge 0$, then formula (44) applies with $E_{(n_1,n_2)}(t)$ simplified to the expectation, taken only with respect to $(S_{n_1}^{(1)}, S_{n_2}^{(2)})$ and $(T_{n_1}^{(1)}, T_{n_2}^{(2)})$, defined by

(47) $E_{(n_1,n_2)}(t) = E\left\{ \prod_{j=1}^{2} \left[ \frac{(v^{(j)} + c^{(j)} T_{n_j}^{(j)})^{n_j-2}\, \big(v^{(j)} + c^{(j)} T_{n_j}^{(j)}/n_j\big)}{(S_{n_j}^{(j)})^{n_j-1}} \right] 1\big(T_{n_1}^{(1)}, T_{n_2}^{(2)} < t,\ S_{n_1}^{(1)} \le v^{(1)} + c^{(1)} T_{n_1}^{(1)},\ S_{n_2}^{(2)} \le v^{(2)} + c^{(2)} T_{n_2}^{(2)}\big) \right\}.$

Proof

We first condition on the numbers $(n_1, n_2)$ of claims up to $t$. Given the form of $h_t^{(j)}$, the two polynomials involved in (45) become

(48) $G_{n_j-1}\big(0 \mid \{v^{(j)} + c^{(j)} T_i^{(j)},\ 1 \le i \le n_j - 1\}\big), \quad j = 1, 2.$

Of course, each claim arrival time $T_i^{(j)}$ is given by

(49) $T_i^{(j)} = Y_1^{(j)} + \cdots + Y_i^{(j)}, \quad i \ge 1,$

where $Y_i^{(j)}$ represents the time between the $(i-1)$-th and $i$-th claim arrivals. By construction, the two vectors $(Y_1^{(j)}, \ldots, Y_{n_j}^{(j)})$ are conditionally independent, each of them being composed of exchangeable random variables.

Now, by using (4), we can rewrite the polynomial $G_{n_j-1}$ of (48) as follows:

(50) $(c^{(j)})^{n_j-1}\, G_{n_j-1}\big({-v^{(j)}/c^{(j)}} \mid \{T_i^{(j)},\ 1 \le i \le n_j-1\}\big) = (c^{(j)})^{n_j-1}\, G_{n_j-1}\big({-v^{(j)}/c^{(j)} - T_{n_j}^{(j)}} \mid \{T_i^{(j)} - T_{n_j}^{(j)},\ 1 \le i \le n_j-1\}\big) = (-c^{(j)})^{n_j-1}\, G_{n_j-1}\big(v^{(j)}/c^{(j)} + T_{n_j}^{(j)} \mid \{T_{n_j}^{(j)} - T_i^{(j)},\ 1 \le i \le n_j-1\}\big).$

From (49) (with $T_0^{(j)} \equiv 0$), we have

$T_{n_j}^{(j)} - T_i^{(j)} = Y_{i+1}^{(j)} + \cdots + Y_{n_j}^{(j)}, \quad 0 \le i \le n_j - 1,$

and, because of the exchangeability of $(Y_1^{(j)}, \ldots, Y_{n_j}^{(j)})$,

(51) $T_{n_j}^{(j)} - T_i^{(j)}$ is distributed as $Y_1^{(j)} + \cdots + Y_{n_j-i}^{(j)} \equiv W_{n_j-i}^{(j)}, \quad 0 \le i \le n_j - 1,$

where we have adopted the same notation as in Proposition 2.6. By combining (50) and (51), we then obtain for the expectation (45)

(52) $E_{(n_1,n_2)}(t) = (n_1-1)!\, (n_2-1)!\, (c^{(1)})^{n_1-1} (c^{(2)})^{n_2-1}\, E\Big\{ \frac{1}{(S_{n_1}^{(1)})^{n_1-1} (S_{n_2}^{(2)})^{n_2-1}} \times G_{n_1-1}\big(v^{(1)}/c^{(1)} + W_{n_1}^{(1)} \mid \{W_{n_1-1}^{(1)}, \ldots, W_1^{(1)}\}\big)\, G_{n_2-1}\big(v^{(2)}/c^{(2)} + W_{n_2}^{(2)} \mid \{W_{n_2-1}^{(2)}, \ldots, W_1^{(2)}\}\big)\, 1\big(W_{n_1}^{(1)}, W_{n_2}^{(2)} < t,\ S_{n_1}^{(1)} \le v^{(1)} + c^{(1)} W_{n_1}^{(1)},\ S_{n_2}^{(2)} \le v^{(2)} + c^{(2)} W_{n_2}^{(2)}\big) \Big\}.$

Finally, we also condition on the event $(W_{n_1}^{(1)}, W_{n_2}^{(2)}) = (t_{n_1}^{(1)}, t_{n_2}^{(2)})$. By the tower rule, the expectation $E\{\cdot\}$ in (52) becomes

(53) $E\Big\{ \frac{1\big(t_{n_1}^{(1)}, t_{n_2}^{(2)} < t,\ S_{n_1}^{(1)} \le v^{(1)} + c^{(1)} t_{n_1}^{(1)},\ S_{n_2}^{(2)} \le v^{(2)} + c^{(2)} t_{n_2}^{(2)}\big)}{(S_{n_1}^{(1)})^{n_1-1} (S_{n_2}^{(2)})^{n_2-1}}\, E\Big[ \prod_{j=1}^{2} G_{n_j-1}\big(v^{(j)}/c^{(j)} + t_{n_j}^{(j)} \mid \{W_{n_j-1}^{(j)}, \ldots, W_1^{(j)}\}\big) \,\Big|\, (W_{n_1}^{(1)}, W_{n_2}^{(2)}) = (t_{n_1}^{(1)}, t_{n_2}^{(2)}) \Big] \Big\}.$

Since the vectors $(Y_1^{(j)}, \ldots, Y_{n_j}^{(j)})$ are independent and each exchangeable, the identity (9) of Proposition 2.6 applies to both factors inside the conditional expectation of (53). This implies (in obvious notation) that

(54) $E\Big[ \prod_{j=1}^{2} G_{n_j-1}(\cdot) \,\Big|\, (t_{n_1}^{(1)}, t_{n_2}^{(2)}) \Big] = \prod_{j=1}^{2} \frac{\big(v^{(j)}/c^{(j)} + t_{n_j}^{(j)}\big)^{n_j-2}}{(n_j-2)!} \left( \frac{v^{(j)}/c^{(j)} + t_{n_j}^{(j)}}{n_j - 1} - \frac{t_{n_j}^{(j)}}{n_j} \right).$

Inserting (53) with (54) into (52) and then removing the conditioning, we deduce the desired formula (47).□
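The randomized Abel identity behind (54) can be tested by simulation in the exponential case, where, conditionally on $W_n = w$, the partial sums $(W_1, \ldots, W_{n-1})$ are distributed as $w$ times uniform order statistics. The sketch below (illustrative values of $n$, $x$, $w$, chosen by us) compares a Monte Carlo estimate of $E[G_{n-1}(x \mid W_{n-1}, \ldots, W_1) \mid W_n = w]$ with the closed form appearing in (54):

```python
import random
from math import factorial

def ag_eval(x, u):
    """Evaluate G_m(x | u_0..u_{m-1}) via the integral recursion, m = len(u)."""
    c = [1.0]
    for a in reversed(u):
        ci = [0.0] + [c[k] / (k + 1) for k in range(len(c))]
        ci[0] = -sum(ci[k] * a ** k for k in range(1, len(ci)))
        c = ci
    return sum(ck * x ** k for k, ck in enumerate(c))

rng = random.Random(5)
n, w, x, trials = 3, 2.0, 3.5, 40000        # hypothetical values
acc = 0.0
for _ in range(trials):
    # Given W_n = w (exponential increments), (W_1,...,W_{n-1}) = w * order stats
    W = [w * v for v in sorted(rng.random() for _ in range(n - 1))]
    acc += ag_eval(x, list(reversed(W))) / trials   # parameters (W_{n-1},...,W_1)
claimed = x ** (n - 2) / factorial(n - 2) * (x / (n - 1) - w / n)
assert abs(acc - claimed) < 0.05
```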

Acknowledgement

We thank the editor and the referees for valuable comments and suggestions. The work of C. Lefèvre was carried out within the DIALog Research Chair under the aegis of the Risk Foundation, an initiative of CNP Assurances.

Conflict of interest: The authors state no conflict of interest.

Appendix A Partial Schur-constancy

We summarize here the key elements of the notion of partial Schur-constancy introduced in the study by Lefèvre [24]. Consider a random vector partitioned into two groups (for example) of absolutely continuous variables on $\mathbb{R}_+$, of respective sizes $n_1$ and $n_2$, denoted

(A1) $[(X_1^{(1)}, \ldots, X_{n_1}^{(1)}),\ (X_1^{(2)}, \ldots, X_{n_2}^{(2)})].$

Definition A.1

The vector (A1) is partially Schur-constant if there exists a bivariate function $g(x_1, x_2): \mathbb{R}_+^2 \to (0, 1)$, called the generator, such that the joint survival function of (A1) can be expressed in the special form

(A2) $P\big[(X_1^{(1)} \ge x_{1,1}, \ldots, X_{n_1}^{(1)} \ge x_{1,n_1}),\ (X_1^{(2)} \ge x_{2,1}, \ldots, X_{n_2}^{(2)} \ge x_{2,n_2})\big] = g(x_{1,1} + \cdots + x_{1,n_1},\ x_{2,1} + \cdots + x_{2,n_2}),$ for all $x_{j,i_j} \in \mathbb{R}_+$, $1 \le i_j \le n_j$, $j = 1, 2$.

So, the vector (A1) is partially exchangeable in the sense of de Finetti [13], but with a specific form of dependence. Of course, partial Schur-constancy implies simple Schur-constancy for each vector $(X_1^{(j)}, \ldots, X_{n_j}^{(j)})$, $j = 1, 2$. This last property is defined similarly but from a univariate generator $g_j(x_j): \mathbb{R}_+ \to (0, 1)$ (see, e.g. Nelsen [31], Chi et al. [9] and Lefèvre and Simon [29]). In terms of the bivariate generator $g(x_1, x_2)$, we then have $g_1(x_1) = g(x_1, 0)$ and $g_2(x_2) = g(0, x_2)$.
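For intuition, the simplest Schur-constant example is a vector of i.i.d. exponential variables: their joint survival function is $e^{-\zeta(x_1 + \cdots + x_n)}$, which depends on the $x_i$ only through their sum. The sketch below is our own illustrative check of this for $n = 2$ and $\zeta = 1$ (not an example taken from the paper):

```python
import math
import random

rng = random.Random(3)
zeta, trials = 1.0, 200000

def surv(a, b):
    """Monte Carlo estimate of P(X1 >= a, X2 >= b) for i.i.d. Exp(zeta)."""
    hits = sum(1 for _ in range(trials)
               if rng.expovariate(zeta) >= a and rng.expovariate(zeta) >= b)
    return hits / trials

# Schur-constancy: the survival function depends on (a, b) only through a + b
p1, p2 = surv(1.2, 0.3), surv(0.5, 1.0)
assert abs(p1 - p2) < 0.01
assert abs(p1 - math.exp(-zeta * 1.5)) < 0.01    # generator g(x) = exp(-zeta * x)
```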

To actually exist, the generator of the survival function (A2) must satisfy certain conditions. The result below is stated in the study by Lefèvre [24] (see the study by Ressel [39] for a detailed analysis).

Proposition A.2

A function $g(x_1, x_2)$ may generate a partially Schur-constant vector (A2) if and only if it is an $(n_1, n_2)$-monotone function on $\mathbb{R}_+^2$. In other words, for all $x_1, x_2 \in \mathbb{R}_+$,

(A3) $(-1)^{k_1+k_2}\, g^{(k_1,k_2)}(x_1, x_2) \ge 0, \quad 0 \le k_j \le n_j,\ j = 1, 2,$

provided that g ( x 1 , x 2 ) is sufficiently differentiable.

A family of survival copulas is also defined in the study by Lefèvre [24]. They are called partially Archimedean because they generalize Archimedean copulas by assuming partial exchangeability of the uniform vector involved. It is proved that a partially Schur-constant vector has a partially Archimedean survival copula with the same generator, and vice versa.

Different possible generators, which are thus $(n_1, n_2)$-monotone functions (by Proposition A.2 with (A3)), are proposed in the study by Lefèvre [24]. Here are two simple examples, among others.

(a) Consider for $g(x_1, x_2)$ the bivariate exponential Gumbel distribution, i.e.

(A4) $g(x_1, x_2) = e^{-\zeta_1 x_1 - \zeta_2 x_2 - \zeta_3 x_1 x_2}, \quad x_1, x_2 \ge 0,$

where $\zeta_1, \zeta_2, \zeta_3$ are positive parameters with $\zeta_3 \le \zeta_1 \zeta_2$.

Then, (A4) is $(1, n)$-monotone if and only if $\zeta_3 \le (1/n)\, \zeta_1 \zeta_2$. To be $(2, n)$-monotone, a sufficient condition is $\zeta_3 \le [1 - (1 - 1/n)^{1/2}]\, \zeta_1 \zeta_2$, for example. To be $(3, n)$-monotone, it suffices that $\zeta_3 \le \{1 - [1 - 1/(n(n-1)(n-2))]^{1/3}\}\, \zeta_1 \zeta_2$.

(b) Consider for $g(x_1, x_2)$ the Laplace transform of some random vector $(\Lambda_1, \Lambda_2)$, i.e.

(A5) $g(x_1, x_2) = E\big(e^{-\Lambda_1 x_1 - \Lambda_2 x_2}\big), \quad x_1, x_2 \ge 0.$

Then, (A5) is infinitely monotone in $(x_1, x_2)$. In fact, the multivariate Bernstein-Widder theorem asserts that the reverse implication also holds.

For illustration, when $(\Lambda_1, \Lambda_2)$ has a bivariate gamma distribution,

$g(x_1, x_2) = \frac{1}{(1 + \zeta_1 x_1 + \zeta_2 x_2 + \zeta_3 x_1 x_2)^{\alpha}}, \quad x_1, x_2 \ge 0,$

where $\alpha, \zeta_1, \zeta_2, \zeta_3$ are positive parameters satisfying the condition $\zeta_3 \le \zeta_1 \zeta_2$.

As expected, partial Schur-constancy can be characterized by means of several equivalent representations. The following one is easily proved and plays an important role in Section 4. Denote the partial sums associated with the subvectors in (A1) by

$S_{i_j}^{(j)} = X_1^{(j)} + \cdots + X_{i_j}^{(j)}, \quad 1 \le i_j \le n_j,\ j = 1, 2.$

Proposition A.3

The vector (A1) is partially Schur-constant if and only if each subvector $(X_1^{(j)}/S_{n_j}^{(j)}, \ldots, X_{n_j}^{(j)}/S_{n_j}^{(j)})$ is independent of the variable $S_{n_j}^{(j)}$, and the vector

(A6) $\big[(S_1^{(1)}/S_{n_1}^{(1)}, \ldots, S_{n_1-1}^{(1)}/S_{n_1}^{(1)}),\ (S_1^{(2)}/S_{n_2}^{(2)}, \ldots, S_{n_2-1}^{(2)}/S_{n_2}^{(2)})\big]$

is distributed as the vector

(A7) $\big[(U_{1:n_1-1}^{(1)}, \ldots, U_{n_1-1:n_1-1}^{(1)}),\ (U_{1:n_2-1}^{(2)}, \ldots, U_{n_2-1:n_2-1}^{(2)})\big],$

where each subvector $(U_{1:n_j-1}^{(j)}, \ldots, U_{n_j-1:n_j-1}^{(j)})$ corresponds to the order statistics of a sample of $n_j - 1$ independent $(0, 1)$-uniform random variables, the two subvectors being independent of each other and of the vector $(S_{n_1}^{(1)}, S_{n_2}^{(2)})$.

Moreover, the density function of $(S_{n_1}^{(1)}, S_{n_2}^{(2)})$ is known explicitly as follows:

(A8) $f_{n_1,n_2}(s_1, s_2) = g^{(n_1,n_2)}(s_1, s_2)\, \frac{s_1^{n_1-1}}{(n_1-1)!}\, \frac{s_2^{n_2-1}}{(n_2-1)!}, \quad s_1, s_2 \in \mathbb{R}_+.$
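Proposition A.3 can be probed numerically in the single-group case: for i.i.d. exponential claims (a Schur-constant vector), the normalized partial sums $S_i/S_n$ should behave like the uniform order statistics $U_{i:n-1}$, whose means are $i/n$. An illustrative Python sketch (the choice of exponentials and of $n$ is ours):

```python
import random

rng = random.Random(11)
n, trials = 4, 50000
means = [0.0] * (n - 1)
for _ in range(trials):
    x = [rng.expovariate(1.0) for _ in range(n)]   # Schur-constant: i.i.d. exponentials
    tot = sum(x)
    s = 0.0
    for i in range(n - 1):
        s += x[i]
        means[i] += (s / tot) / trials
# S_i / S_n distributed as U_{i:n-1}, with mean i/n
assert abs(means[0] - 1 / 4) < 0.02
assert abs(means[1] - 2 / 4) < 0.02
assert abs(means[2] - 3 / 4) < 0.02
```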

Note that the study by Lefèvre [24] briefly discusses an extension of partial Schur-constancy to the modelling of nested and multi-level dependencies.

References

[1] Adeniran, A., Snider, L., & Yan, C. (2021). Multivariate difference Gončarov polynomials. Integers, Ron Graham Memorial, 21A, 1–21. doi:10.1515/9783110754216-001

[2] Albrecher, H., Constantinescu, C., & Loisel, S. (2011). Explicit ruin formulas for models with dependence among risks. Insurance: Mathematics and Economics, 48, 265–270. doi:10.1016/j.insmatheco.2010.11.007

[3] Albrecher, H., Cheung, E. C. K., Liu, H., & Woo, J.-K. (2022). A bivariate Laguerre expansions approach for joint ruin probabilities in a two-dimensional insurance risk process. Insurance: Mathematics and Economics, 103, 96–118. doi:10.1016/j.insmatheco.2022.01.004

[4] Ball, F. (2019). Susceptibility sets and the final outcome of collective Reed-Frost epidemics. Methodology and Computing in Applied Probability, 21, 401–421. doi:10.1007/s11009-018-9631-6

[5] Bäuerle, N., & Grübel, R. (2005). Multivariate counting processes: Copulas and beyond. Astin Bulletin, 35, 379–408. doi:10.1017/S0515036100014306

[6] Britton, T., & Pardoux, E. (Eds.) (2019). Stochastic Epidemic Models with Inference. Lecture Notes in Mathematics 2255. Cham: Springer. doi:10.1007/978-3-030-30900-8

[7] Castañer, A., Claramunt, M. M., & Lefèvre, C. (2013). Survival probabilities in bivariate risk models, with application to reinsurance. Insurance: Mathematics and Economics, 53, 632–642. doi:10.1016/j.insmatheco.2013.09.001

[8] Castañer, A., Claramunt, M. M., Lefèvre, C., & Loisel, S. (2019). Partially Schur-constant models. Journal of Multivariate Analysis, 172, 47–58. doi:10.1016/j.jmva.2019.01.007

[9] Chi, Y., Yang, J., & Qi, Y. (2009). Decomposition of a Schur-constant model and its applications. Insurance: Mathematics and Economics, 44, 398–408. doi:10.1016/j.insmatheco.2008.11.010

[10] Constantinescu, C., Hashorva, E., & Ji, L. (2011). Archimedean copulas in finite and infinite dimensions - with application to ruin problems. Insurance: Mathematics and Economics, 49, 487–495. doi:10.1016/j.insmatheco.2011.08.006

[11] Dang, L., Zhu, N., & Zhang, H. (2009). Survival probability for a two-dimensional risk model. Insurance: Mathematics and Economics, 44, 491–496. doi:10.1016/j.insmatheco.2009.02.001

[12] Daniels, H. E. (1967). The distribution of the total size of an epidemic. Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, 4, 281–293.

[13] de Finetti, B. (1938). Sur la condition d'équivalence partielle. Actualités Scientifiques et Industrielles, 739, 5–18.

[14] Diaconis, P., & Hicks, A. (2017). Probabilizing parking functions. Advances in Applied Mathematics, 89, 125–155. doi:10.1016/j.aam.2017.05.004

[15] Dimitrova, D. S., Ignatov, Z. G., & Kaishev, V. K. (2019). Ruin and deficit under claim arrivals with the order statistics property. Methodology and Computing in Applied Probability, 21, 511–530. doi:10.1007/s11009-018-9669-5

[16] Dimitrova, D. S., & Kaishev, V. K. (2010). Optimal joint survival reinsurance: An efficient frontier approach. Insurance: Mathematics and Economics, 47, 27–35. doi:10.1016/j.insmatheco.2010.03.006

[17] Goffard, P.-O., Loisel, S., & Pommeret, D. (2016). A polynomial expansion to approximate the ultimate ruin probability in the compound Poisson ruin model. Journal of Computational and Applied Mathematics, 296, 499–511. doi:10.1016/j.cam.2015.06.003

[18] Gong, L., Badescu, A. L., & Cheung, E. C. K. (2012). Recursive methods for a multi-dimensional risk process with common shocks. Insurance: Mathematics and Economics, 50, 109–120. doi:10.1016/j.insmatheco.2011.10.007

[19] Gontcharoff, W. (1937). Détermination des Fonctions Entières par Interpolation. Paris: Hermann.

[20] Johnson, N. L., Kotz, S., & Balakrishnan, N. (1997). Discrete Multivariate Distributions. New York: Wiley.

[21] Khare, N., Lorentz, R., & Yan, C. (2014). Bivariate Gontcharov polynomials and integer sequences. Science China, Mathematics, 57, 1561–1578. doi:10.1007/s11425-014-4827-x

[22] Konheim, A. G., & Weiss, B. (1966). An occupancy discipline and applications. SIAM Journal on Applied Mathematics, 14, 1266–1274. doi:10.1137/0114101

[23] Kung, J., & Yan, C. (2003). Gončarov polynomials and parking functions. Journal of Combinatorial Theory, Series A, 102, 16–37. doi:10.1016/S0097-3165(03)00009-8

[24] Lefèvre, C. (2021). On partially Schur-constant models and their associated copulas. Dependence Modeling, 9, 225–242. doi:10.1515/demo-2021-0111

[25] Lefèvre, C., & Picard, P. (1990). A non-standard family of polynomials and the final size distribution of Reed-Frost epidemic processes. Advances in Applied Probability, 22, 25–48. doi:10.2307/1427595

[26] Lefèvre, C., & Picard, P. (1996). Abelian-type expansions and non-linear death processes (II). Advances in Applied Probability, 28, 877–894. doi:10.2307/1428185

[27] Lefèvre, C., & Picard, P. (2015). Risk models in insurance and epidemics: A bridge through randomized polynomials. Probability in the Engineering and Informational Sciences, 29, 399–420. doi:10.1017/S0269964815000066

[28] Lefèvre, C., & Picard, P. (2016). Polynomials, random walks and risk processes: A multivariate framework. Stochastics: An International Journal of Probability and Stochastic Processes, 88, 1147–1172. doi:10.1080/17442508.2016.1215449

[29] Lefèvre, C., & Simon, M. (2021). Schur-constant and related dependence models, with application to ruin probabilities. Methodology and Computing in Applied Probability, 23, 317–339. doi:10.1007/s11009-019-09744-2

[30] Li, S., Lu, Y., & Garrido, J. (2009). A review of discrete-time risk models. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas, 103, 321–337. doi:10.1007/BF03191910

[31] Nelsen, R. B. (2005). Some properties of Schur-constant survival models and their copulas. Brazilian Journal of Probability and Statistics, 19, 179–190.

[32] Picard, P. (1980). Applications of martingale theory to some epidemic models. Journal of Applied Probability, 17, 583–599. doi:10.2307/3212953

[33] Picard, P., & Lefèvre, C. (1990). A unified analysis of the final state and severity distribution in collective Reed-Frost epidemic processes. Advances in Applied Probability, 22, 269–294. doi:10.2307/1427536

[34] Picard, P., & Lefèvre, C. (1996). First crossing of basic counting processes with lower non-linear boundaries: A unified approach through pseudopolynomials (I). Advances in Applied Probability, 28, 853–876. doi:10.2307/1428184

[35] Picard, P., & Lefèvre, C. (1997). The probability of ruin in finite time with discrete claim size distribution. Scandinavian Actuarial Journal, 1, 58–69. doi:10.1080/03461238.1997.10413978

[36] Picard, P., & Lefèvre, C. (2003). On the first meeting or crossing of two independent trajectories for some counting processes. Stochastic Processes and their Applications, 104, 217–242. doi:10.1016/S0304-4149(02)00240-5

[37] Picard, P., Lefèvre, C., & Coulibaly, I. (2003). Multirisks model and finite-time ruin probabilities. Methodology and Computing in Applied Probability, 5, 337–353. doi:10.1023/A:1026287204089

[38] Pfeifer, D., & Neslehová, J. (2004). Modeling and generating dependent risk processes for IRM and DFA. Astin Bulletin, 34, 333–360. doi:10.1017/S0515036100013726

[39] Ressel, P. (2018). A multivariate version of Williamson's theorem, l1-symmetric survival functions, and generalized Archimedean copulas. Dependence Modeling, 6, 356–368. doi:10.1515/demo-2018-0020

[40] Stanley, R. P. (1999). Enumerative Combinatorics (Vol. 2). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511609589

[41] Yan, C. (2015). Parking functions. In Handbook of Enumerative Combinatorics (pp. 835–893). Boca Raton: CRC Press.

Received: 2023-05-29
Revised: 2023-10-06
Accepted: 2023-11-07
Published Online: 2023-11-29

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
