Article Open Access

Critical Galton–Watson Processes with Overlapping Generations

  • Serik Sagitov
Published/Copyright: November 6, 2021

Abstract

A properly scaled critical Galton–Watson process converges to a continuous state critical branching process $\xi(\cdot)$ as the number of initial individuals tends to infinity. We extend this classical result by allowing for overlapping generations and considering a wide class of population counts. The main result of the paper establishes convergence of the finite-dimensional distributions for a scaled vector of multiple population counts. The set of limiting distributions is conveniently represented in terms of the integrals $\bigl(\int_0^y \xi(y-u)\,du^\gamma,\ y\ge 0\bigr)$ with a pertinent $\gamma \ge 0$.

MSC 2010: 60J80

1 Introduction

One of the basic stochastic population models of a self-reproducing system is built upon the following two assumptions:

  1. different individuals live independently from each other according to the same individual life law described in (B);

  2. an individual dies at age one and at the moment of death gives birth to a random number 𝑁 of offspring.

Within this model, the numbers of individuals $Z_0, Z_1, \dots$ born at times $t = 0, 1, \dots$ form a Markov chain whose transition probabilities are fully described by the distribution of the offspring number 𝑁. The Markov chain $\{Z_t,\ t \ge 0\}$ is usually called a Galton–Watson process, or GW-process for short. A GW-process is classified as subcritical, critical, or supercritical, depending on whether the mean offspring number $E(N)$ is less than, equal to, or larger than the critical value 1.
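As a quick illustration (not part of the paper's argument), a critical GW-process can be simulated directly from this definition. The sketch below assumes, for concreteness, a Geometric(1/2) offspring law on $\{0,1,2,\dots\}$, for which $E(N) = 1$ and $\operatorname{Var}(N) = 2$, so that $b = 1$ in the notation of (1.1); the function names are ours.

```python
import random

def geometric_offspring(rng):
    # Geometric(1/2) on {0, 1, 2, ...}: P(N = k) = 2^-(k+1),
    # so E(N) = 1 (criticality) and Var(N) = 2, i.e. b = 1 in (1.1).
    k = 0
    while rng.random() < 0.5:
        k += 1
    return k

def simulate_gw(z0, horizon, rng):
    # Trajectory Z_0, Z_1, ..., Z_horizon of the Markov chain:
    # each of the Z_t individuals is replaced by an iid offspring count.
    traj = [z0]
    z = z0
    for _ in range(horizon):
        z = sum(geometric_offspring(rng) for _ in range(z))
        traj.append(z)
    return traj
```

Criticality shows up as $E(Z_t) = Z_0$ for every 𝑡, while individual trajectories still die out with probability one.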

It is known that, in the critical case, with

(1.1) $E(N) = 1, \qquad \operatorname{Var}(N) = 2b, \qquad b < \infty,$

the finite-dimensional distributions (fdds) of a properly scaled GW-process converge,

(1.2) $\{n^{-1} Z_{nu},\ u \ge 0 \mid Z_0 = n\} \xrightarrow{\mathrm{fdd}} \{\xi(u),\ u \ge 0 \mid \xi(0) = 1\}, \quad n \to \infty,$

and the limiting fdds are represented by a continuous state branching process ξ ( ) , which is a continuous time Markov process with a transition law determined by

(1.3) $E\bigl(e^{-\lambda \xi(v+u)} \mid \xi(v) = x\bigr) = \exp\Bigl\{-\frac{\lambda x}{1 + \lambda b u}\Bigr\}, \quad v, u, x, \lambda \ge 0.$

Note how the parameter 𝑏 acts as a time scale: the larger the variance of 𝑁, the faster the change of the population size.
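The transition law (1.3) can be checked for internal consistency: conditioning at an intermediate time must reproduce the same formula, which amounts to the maps $\lambda \mapsto \lambda/(1 + \lambda b u)$ forming a semigroup under composition. A minimal numerical check of this Chapman–Kolmogorov property (with $b = 1$ chosen purely for illustration):

```python
def phi(lam, u, b=1.0):
    # Laplace exponent in (1.3):
    # E(exp(-lam*xi(v+u)) | xi(v) = x) = exp(-x * phi(lam, u)).
    return lam / (1.0 + lam * b * u)

def composed(lam, u1, u2, b=1.0):
    # Running the process for time u1 and then for time u2
    # should match running it for time u1 + u2.
    return phi(phi(lam, u1, b), u2, b)
```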

In this paper, we study { Z ( t ) , t 0 } , a Galton–Watson process with overlapping generations, or GWO-process for short, where Z ( t ) is the number of individuals alive at time 𝑡 in a reproduction system satisfying the following two assumptions:

  1. different individuals live independently from each other according to the same individual life law described in (B*);

  2. an individual lives 𝐿 units of time and gives 𝑁 births at random ages $\tau_1, \dots, \tau_N$, satisfying

    (1.4) $1 \le \tau_1 \le \dots \le \tau_N \le L.$

Assumption (B*) allows for overlapping generations, when mothers may coexist with their daughters. We focus on the critical case (1.1) and aim at an extension of (1.2) to the GWO-processes.

The process $\{Z(t),\ t \ge 0\}$, being non-Markov in general, is studied with the help of an associated renewal process, introduced in Section 2. The mean inter-arrival time

(1.5) $a := E(\tau_1 + \dots + \tau_N)$

of this renewal process gives the average generation length. It is important to distinguish between the average generation length 𝑎, which in this paper is assumed finite, and the average life length $\mu := E(L)$, which is allowed to be infinite.

With the more sophisticated reproduction mechanism (1.4), there are many interesting population counts to study alongside the number of newborns $Z_t$ and the number of individuals $Z(t)$ alive at time 𝑡. Observe that $Z_t$ is the total number of daughters produced at time 𝑡 by the $Z(t-1)$ individuals alive at time $t-1$. In particular, in the GW setting, $a = 1$ and $Z(t) \equiv Z_t$, since all individuals alive at a given time are newborn.
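The distinction between $Z_t$ and $Z(t)$ is easy to see in a simulation. The sketch below (ours, for illustration only) assumes a fixed life length $L = 2$, a Geometric(1/2) number of daughters (so $E(N) = 1$, criticality), and birth ages uniform on $\{1, 2\}$; under these assumptions $a = 3/2$ and $\mu = 2$.

```python
import random
from collections import defaultdict

def simulate_gwo(n, horizon, rng, life=2):
    # GWO sketch: every individual lives `life` = 2 time units and bears
    # a Geometric(1/2) number of daughters (E(N) = 1, criticality),
    # each at age 1 or 2 chosen uniformly, so a = 1.5 and mu = 2.
    births = defaultdict(int)     # births[t] = Z_t, the newborns at time t
    births[0] = n
    for t in range(horizon + 1):
        for _ in range(births[t]):
            k = 0
            while rng.random() < 0.5:   # Geometric(1/2) offspring number
                k += 1
            for _ in range(k):
                age = rng.choice(range(1, life + 1))
                if t + age <= horizon:
                    births[t + age] += 1
    # Z(t): individuals born at j and still alive at t, i.e. j <= t < j + life
    alive = [sum(births[j] for j in range(max(0, t - life + 1), t + 1))
             for t in range(horizon + 1)]
    return [births[t] for t in range(horizon + 1)], alive
```

Averaged over many runs, the number alive approaches $n\mu a^{-1}$, in line with Corollary 1 and the "degree of generation overlap" discussed in Section 2.3.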

An interesting case of population counts is treated by Theorem 4 dealing with decomposable multitype GW-processes. Theorem 4 is obtained as an application of the main results of the paper, Theorems 1, 2, 3, stated and proven in Section 5. The following three statements are straightforward corollaries of Theorems 1, 2, and 3 respectively. In these theorems, it is always assumed that the GWO-process stems from a large number Z 0 = n of progenitors born at time zero.

Corollary 1

Consider a GWO-process satisfying (1.1) and $a < \infty$. If $\mu < \infty$, then

$\{n^{-1} Z(nu),\ u > 0 \mid Z_0 = n\} \xrightarrow{\mathrm{fdd}} \{\mu a^{-1} \xi(u a^{-1}),\ u > 0 \mid \xi(0) = 1\}, \quad n \to \infty.$

Corollary 2

Consider a GWO-process satisfying (1.1) and $a < \infty$. If $\mu = \infty$ and, for some function $L(\cdot)$ slowly varying at infinity,

(1.6) $\sum_{j=0}^{t} P(L > j) = t^{\gamma} L(t), \quad 0 \le \gamma \le 1, \quad t \to \infty,$

then, as $n \to \infty$,

$\{n^{-1-\gamma} L^{-1}(n) Z(nu),\ u > 0 \mid Z_0 = n\} \xrightarrow{\mathrm{fdd}} \{a^{\gamma-1} \xi_{\gamma}(u a^{-1}),\ u > 0 \mid \xi(0) = 1\}.$

Corollary 3

Consider a GWO-process satisfying (1.1), $a < \infty$, and (1.6). Then, as $n \to \infty$,

$\{(n^{-1-\gamma} L^{-1}(n) Z(nu),\ n^{-1} Z_{nu}),\ u > 0 \mid Z_0 = n\} \xrightarrow{\mathrm{fdd}} \{(a^{\gamma-1} \xi_{\gamma}(u a^{-1}),\ a^{-1} \xi(u a^{-1})),\ u > 0 \mid \xi(0) = 1\}.$

Notice that condition (1.6) holds even in the case $\mu < \infty$, with $\gamma = 0$ and $L(t) \to \mu$ as $t \to \infty$. The family of processes $\{\xi_{\gamma}(\cdot)\}_{\gamma \ge 0}$ emerging in our limit theorems can be expressed in the integral form

(1.7) $\xi_0(u) := \xi(u)$ for $\gamma = 0$, and $\xi_{\gamma}(u) := \int_0^u \xi(u - v)\,dv^{\gamma}$ for $\gamma > 0$, $u \ge 0$,

which is treated as a convenient representation of the limiting fdds; see Section 4.

The following remarks comment on relevant literature and mention an interesting open problem.

  1. The GW-process is a basic model of the biologically motivated theory of branching processes; see [1, 7, 17]. The critical GW-process can be viewed as a stochastic model of a sustainable reproduction, when a mother produces on average one daughter; see [12]. On the convergence result (1.2) for the critical GW-processes, see [1, 2, 10, 13].

  2. The GWO-process is a discrete time version of the so-called general branching process, often called the Crump–Mode–Jagers process; see [7, 10, 11, 18]. An early mention of a discrete time branching process with overlapping generations can be found in [21].

  3. The fruitful concept of population counts, allowing for a variety of individual scores (see Section 2) was first introduced in [9]. The interested reader may find several demographical examples of population counts in [9, 10].

  4. The above-mentioned Theorem 4 deals with the decomposable critical multitype GW-processes. In a more general setting, such processes were studied in [5], addressing related issues by applying a different approach.

  5. Compared to earlier attempts (see [19, 20] and especially [15]), the current treatment of critical age-dependent branching processes is made more accessible by restricting the analysis to the case of finite Var ( N ) and 𝑎, as well as focusing on the discrete time setting.

  6. Our proofs do not use (1.2) as a known fact (unlike for example [8], addressing a related problem). Therefore, convergence (1.2) can be derived from the above-mentioned Corollary 1.

  7. The branching renewal approach, introduced in Section 3, takes its origin in [6].

  8. The idea of studying branching processes starting from a large number of individuals is quite old; see [16] and especially [13]. For a most recent paper in the continuous time setting, see [14].

  9. The definitions and basic properties of slowly and regularly varying functions used in this paper can be found in [4]. We apply some basic facts of the renewal theory from [3].

  10. Our limit theorems are stated in terms of the fdd-convergence. Finding simple conditions on the individual scores, ensuring weak convergence in the Skorokhod sense, is an open problem.

Notational Agreements

  1. To avoid confusion, we set apart discrete and continuous variables:

    $i, j, k, l, n, p, q, s, t \in \mathbb{Z} = \{0, \pm 1, \pm 2, \dots\}, \qquad u, v, x, y, z, \lambda \in [0, \infty).$

    Mixed products are treated as integer numbers, so that $nu$ stands for $\lfloor nu \rfloor$. As a result, $\lfloor nu \rfloor n^{-1}$ is not always equal to 𝑢.

  2. We distinguish between a stronger and a weaker form of uniform convergence,

    $f^{(n)}(y) \xRightarrow{y} f(y), \qquad f^{(n)}(y) \xrightarrow{y} f(y), \quad n \to \infty,$

    which respectively require the relations

    $\sup_{0 \le y \le y_1} |f^{(n)}(y) - f(y)| \to 0, \qquad \sup_{y_0 \le y \le y_1} |f^{(n)}(y) - f(y)| \to 0, \quad n \to \infty,$

    to hold for any $0 < y_0 < y_1 < \infty$.

  3. We will write

    $E_n(\cdot) := E(\cdot \mid Z_0 = n)$

    to say that the expected value is computed under the assumption that the GWO-process starts from 𝑛 individuals born at time 0. With little risk of confusion, we will also write

    $E_x(\cdot) := E(\cdot \mid \xi(0) = x)$

    when the expectation deals with the finite-dimensional distributions of the continuous state branching process $\xi(\cdot)$.

  4. We will often use the following two shortenings:

    $e_1\{x\} := 1 - e^{-x}, \qquad e_2\{x\} := x - e_1\{x\} = e^{-x} - 1 + x.$

    Note that both these functions are increasing and, for $0 \le x \le y$,

    (1.8) $0 \le e_1\{y\} - e_1\{x\} \le y - x, \qquad 0 \le e_2\{x\} \le \min(x, \tfrac{1}{2}x^2),$
    (1.9) $e_1\{x + y\} = e_1\{x\} + e_1\{y\} - e_1\{x\} e_1\{y\}, \qquad e_2\{x + y\} = e_2\{x\} + e_2\{y\} + e_1\{x\} e_1\{y\}.$

  5. In different formulas, the symbols C , C 1 , C 2 , c , c 1 , c 2 represent different positive constants.
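
The shorthand functions $e_1\{x\}$, $e_2\{x\}$ and the identities (1.8)-(1.9) are elementary; a small script (ours, purely for illustration) confirms them numerically:

```python
import math

def e1(x):
    # e1{x} := 1 - exp(-x)
    return 1.0 - math.exp(-x)

def e2(x):
    # e2{x} := x - e1{x} = exp(-x) - 1 + x
    return math.exp(-x) - 1.0 + x
```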

2 Population Counts

The number of individuals alive at time 𝑡 can be counted as the sum of individual scores

$Z(t) = \sum_{j=0}^{t} \sum_{k=1}^{Z_j} 1\{j \le t < j + L_{jk}\} = \sum_{j=0}^{t} \sum_{k=1}^{Z_j} \zeta_{jk}(t - j),$

where $L_{jk}$ is the life length of the 𝑘-th individual born at time 𝑗 (according to an arbitrary labelling of the $Z_j$ individuals born at time 𝑗) and $\zeta_{jk}(t) = 1\{0 \le t < L_{jk}\}$ is its individual score. Here the individual score is 1 if the individual is alive at time 𝑡, and 0 otherwise. This representation leads to the following definition of a population count.

Definition 2.1

For a progenitor of the GWO-process, define its individual score as a vector $(\chi(t))_{t \in \mathbb{Z}}$ with non-negative, possibly dependent components such that $\chi(t) = 0$ for all $t < 0$. This random vector is allowed to depend on the individual characteristics (1.4), but it is assumed to be independent of such characteristics of other individuals.

Define a population count $X(t) = X[\chi](t)$ as the sum of time-shifted individual scores

(2.1) $X(t) := \sum_{j=0}^{t} \sum_{k=1}^{Z_j} \chi_{jk}(t - j), \quad t \in \mathbb{Z},$

assuming that the individual scores $(\chi_{jk}(t))_{t \in \mathbb{Z}}$ are independent copies of $(\chi(t))_{t \in \mathbb{Z}}$.

2.1 The Litter Sizes

In terms of (1.4), the litter sizes of a generic individual are defined by $\nu(t) := \sum_{j=1}^{N} 1\{\tau_j = t\}$, $t \ge 1$, so that $\nu(1) + \dots + \nu(L) = N$. On the other hand, given the random infinite-dimensional vector

$(L, \nu(1), \nu(2), \dots), \quad L \ge 1, \quad \nu(t) \ge 0, \quad t \ge 1,$

where $\nu(t)$ is treated as the litter size at age 𝑡 for an individual with life length 𝐿, the consecutive ages at childbearing can be recovered as

$\tau_j = \sum_{t=1}^{L} t\, 1\{N(t-1) < j \le N(t)\}, \qquad N(t) := (\nu(1) + \dots + \nu(t))\, 1\{L \ge t\},$

where $N(t)$ is the number of daughters produced by a mother by age 𝑡.

In the critical case, the probabilities

$A(t) := E(\nu(t) 1\{L \ge t\}), \quad t \ge 1,$

sum up to one, since $\sum_{t \ge 1} A(t) = E(\nu(1) + \dots + \nu(L)) = E(N) = 1$. A renewal process with inter-arrival times distributed according to $A(1), A(2), \dots$ plays a crucial role in the analysis of critical GWO-processes. Observe that the corresponding mean inter-arrival time is indeed given by (1.5):

$\sum_{t=1}^{\infty} t A(t) = E\Bigl(\sum_{t=1}^{\infty} t \nu(t) 1\{L \ge t\}\Bigr) = E\Bigl(\sum_{t=1}^{\infty} t \sum_{j=1}^{N} 1\{\tau_j = t\}\Bigr) = E\Bigl(\sum_{j=1}^{N} \sum_{t=1}^{\infty} t\, 1\{\tau_j = t\}\Bigr) = E(\tau_1 + \dots + \tau_N) = a.$

To avoid possible confusion, we emphasise at this point that $\nu(t) = 0$ and $A(t) = 0$ for $t \le 0$.

2.2 Associated Renewal Process

In the GWO setting with $Z_0 = 1$, the process $Z_t$ conditioned on $\{N(t) = k\}$, where $N(t)$ is the birth count of the founder, can be viewed as the sum of 𝑘 independent daughter copies, $Z_t = Z^{(1)}_{t - \tau_1} + \dots + Z^{(k)}_{t - \tau_k}$. This branching property implies that the expected number of newborns $U(t) := E_1(Z_t)$ satisfies the recursive relation

$U(t) = E\Bigl(\sum_{j=1}^{N(t)} U(t - \tau_j)\Bigr) = E\Bigl(\sum_{k=1}^{t} U(t - k) \nu(k) 1\{L \ge k\}\Bigr) = U * A(t), \quad t \ge 1,$

where the ∗ symbol stands for the discrete convolution

$A_1 * A_2(t) := \sum_{j=-\infty}^{\infty} A_1(t - j) A_2(j), \quad t \in \mathbb{Z}.$

Solving the obtained recursion $U(t) = 1\{t = 0\} + U * A(t)$, we find a familiar expression for the renewal function:

$U(t) = 1\{t = 0\} + \sum_{k=1}^{t} A^{*k}(t), \qquad A^{*1}(t) := A(t), \qquad A^{*(k+1)}(t) := A^{*k} * A(t),$

so that, by the elementary renewal theorem,

(2.2) $U(t) \to \frac{1}{a}, \quad t \to \infty.$

This says that, in the long run, the underlying reproduction process produces one birth per 𝑎 units of time. In this sense, 𝑎 can be treated as the average generation length.
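The recursion $U(t) = 1\{t=0\} + U * A(t)$ is easy to evaluate numerically. The sketch below (ours) uses the illustrative inter-arrival law $A(1) = A(2) = 1/2$, for which $a = 3/2$, and exhibits the elementary renewal limit (2.2):

```python
def renewal_function(A, horizon):
    # Solve U(t) = 1{t=0} + sum_{j=1}^{t} A(j) U(t-j), t = 0..horizon,
    # for a lattice inter-arrival distribution A given as a dict {j: A(j)}.
    U = [1.0]
    for t in range(1, horizon + 1):
        U.append(sum(A.get(j, 0.0) * U[t - j] for j in range(1, t + 1)))
    return U
```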

Later on, we will need the following facts concerning the distribution of $W_t$, the waiting time to the next renewal event:

$R_t(j) := P(W_t = j), \quad j \ge 1, \quad t \ge 0.$

These probabilities satisfy the renewal equation $R_t(j) = A(t + j) + \sum_{k=1}^{t} R_{t-k}(j) A(k)$, which yields

(2.3) $R_t(j) = \sum_{k=0}^{t} A(t + j - k) U(k), \quad j \ge 1, \quad t \ge 0.$

By the key renewal theorem, the residual time $W_t$ has a stable limit distribution, in that

$R_t(j) \to R(j), \quad t \to \infty, \qquad R(j) := a^{-1} \sum_{k=j}^{\infty} A(k), \quad j \ge 1.$

Lemma 2.2

Assume (1.1) and $a < \infty$, and suppose that a family of non-negative functions $r^{(n)}(t)$ satisfies

$\sup_{n \ge 1,\, t \ge 1} r^{(n)}(t) < \infty, \qquad r^{(n)}(ny) \xrightarrow{y} r(y), \quad n \to \infty.$

If $r(y) \to r(0)$ as $y \to 0$, then

$\sum_{t=1}^{\infty} r^{(n)}(t) R_{ny}(t) \xrightarrow{y} r(0), \quad n \to \infty.$

Proof

Observe that

t = 1 r ( n ) ( t ) R n y ( t ) - r ( 0 ) = t = 1 t 0 ( r ( n ) ( t ) - r ( 0 ) ) R n y ( t ) + t = t 0 + 1 ( r ( n ) ( t ) - r ( 0 ) ) R n y ( t )

for any t 0 > 0 . From

t = 1 t 0 ( r ( n ) ( t ) - r ( 0 ) ) R n y ( t ) = t = 1 t 0 ( r ( n ) ( t ) - r ( t n - 1 ) ) R n y ( t ) + t = 1 t 0 ( r ( t n - 1 ) - r ( 0 ) ) R n y ( t ) ,

we deduce

t = 1 t 0 ( r ( n ) ( t ) - r ( 0 ) ) R n y ( t ) y 0 , n ,

using the assumptions on r ( n ) ( ) and r ( ) . It remains to notice that

t = t 0 + 1 | r ( n ) ( t ) - r ( 0 ) | R n y ( t ) C t = t 0 + 1 R n y ( t ) ,

and

t = t 0 + 1 R n y ( t ) y t = t 0 + 1 R ( t ) 0

as first t and then t 0 . ∎

2.3 Expected Population Counts

If $Z_0 = 1$, then $X(t)$, defined by (2.1), can be represented as

(2.4) $X(t) = \chi(t) + \sum_{j=1}^{N(t)} X^{(j)}(t - \tau_j)$

in terms of the independent daughter processes $X^{(j)}(\cdot)$, where $N(t)$ is the birth count of the founder. Taking expectations, we arrive at the recursion

$M(t) = m(t) + E\Bigl(\sum_{j=1}^{N(t)} M(t - \tau_j)\Bigr) = m(t) + \sum_{j=1}^{t} M(t - j) A(j),$

where $M(t) := E_1(X(t))$ and $m(t) := E(\chi(t))$. This renewal equation, $M(t) = m(t) + M * A(t)$, yields

$M(t) = m * U(t) = \sum_{j=0}^{t} m(t - j) U(j),$

and applying the key renewal theorem, we conclude

(2.5) $E_1(X(t)) \to m_{\chi}, \quad t \to \infty, \qquad m_{\chi} := a^{-1} \sum_{t=0}^{\infty} E(\chi(t)).$

The parameter $m_{\chi}$ obtained here can be viewed as the average 𝜒-score of the population with overlapping generations. The next result goes further than (2.5) by giving a useful asymptotic relation in the case $m_{\chi} = \infty$.

Proposition 2.3

Consider a critical GWO-process with $a < \infty$. If, for some function $L(\cdot)$ slowly varying at infinity,

(2.6) $\sum_{j=0}^{t} E(\chi(j)) = t^{\gamma} L(t), \quad t \to \infty, \quad 0 \le \gamma < \infty,$

then $E_1(X(t)) \sim a^{-1} t^{\gamma} L(t)$ as $t \to \infty$.

Proof

We have to show that (2.6) implies M ( t ) - a - 1 M t = o ( M t ) as t , where M t := j = 0 t m ( j ) . To this end, observe that the difference

M ( t ) - a - 1 j = 0 t m ( t - j ) = j = 0 t m ( t - j ) ( U ( j ) - a - 1 )

is estimated from above by

j = 0 t m ( t - j ) | U ( j ) - a - 1 | C j = 0 t ϵ - 1 m ( t - j ) + ϵ j = t ϵ t m ( t - j ) C ( M t - M t - t ϵ ) + ϵ M t , t t ϵ ,

for an arbitrarily small ϵ > 0 and some finite constants 𝐶, t ϵ . It remains to apply the property of the regularly varying function M t , saying that M t - M t - c = o ( M t ) as t for any fixed c 0 . ∎

Turning to $X(t) = Z(t)$, the number of individuals alive at time 𝑡, observe that, with $\chi(t) = 1\{0 \le t < L\}$,

$\sum_{t \ge 0} E(\chi(t)) = \sum_{t \ge 0} P(L > t) = \mu.$

Therefore, given $E(N) = 1$,

$E_1(Z(t)) \to \mu a^{-1}, \quad t \to \infty.$

In this case, the parameter $m_{\chi} = \mu a^{-1}$ can be treated as the degree of generation overlap. For example, $m_{\chi} = 2$ means that, on average, the life length 𝐿 covers two generation lengths.

3 Branching Renewal Equations

A useful extension of Definition 2.1 broadens the range of individual scores by replacing (2.1) with

(3.1) $X(t) := \sum_{j=0}^{\infty} \sum_{k=1}^{Z_j} \chi_{jk}(t - j), \quad t \in \mathbb{Z}.$

Relation (3.1) takes into account even those individuals who are born after time 𝑡, allowing $\chi(t) > 0$ for $t < 0$. In this paper, we use this extension only to deal with the finite-dimensional distributions of the population counts defined by (2.1); see Lemma 3.2 below.

Definition 3.1

For the population count $X(t) = X[\chi](t)$ given by (3.1), the log-Laplace transform $\Lambda(t) = \Lambda[\chi](t)$ is given by

$e^{-\Lambda(t)} := E_1(e^{-X(t)}), \quad t \in \mathbb{Z}.$

The purpose of this section is to introduce a branching renewal equation for Λ ( ) and establish Proposition 3.5, which will play a key role in the proofs of the main results of this paper.

Lemma 3.2

For a given vector $(t_1, \dots, t_p)$ with non-negative integer components, consider the log-Laplace transform

$\Lambda(t) = -\ln E_1\Bigl(\exp\Bigl\{-\sum_{i=1}^{p} \lambda_i X(t_i + t)\Bigr\}\Bigr)$

of the 𝑝-dimensional distribution of the population count $X(\cdot)$ defined by (2.1). Then, in accordance with Definition 3.1,

$\Lambda(t) = \Lambda[\psi](t), \qquad \psi(t) := \sum_{i=1}^{p} \lambda_i \chi(t_i + t), \quad t \in \mathbb{Z}.$

Proof

It suffices to observe that

$\sum_{i=1}^{p} \lambda_i X(t_i + t) \overset{(2.1)}{=} \sum_{i=1}^{p} \sum_{j=0}^{t_i + t} \sum_{k=1}^{Z_j} \lambda_i \chi_{jk}(t_i + t - j) = \sum_{j=0}^{\infty} \sum_{k=1}^{Z_j} \psi_{jk}(t - j) \overset{(3.1)}{=} X[\psi](t).$

3.1 Derivation of the Branching Renewal Equation

Here we show that Definition 3.1 leads to what we call a branching renewal equation,

(3.2) $\Lambda(t) = B(t) - \Psi[\Lambda] * U(t), \quad t \ge 0,$

where the operator

(3.3) $\Psi[f](t) := E\Bigl(\prod_{j=1}^{L} e^{-\nu(j) f(t-j)}\Bigr) - \sum_{j=1}^{\infty} e^{-f(t-j)} A(j), \quad t \ge 0,$

is defined on the set of non-negative sequences $(f(t))_{t \in \mathbb{Z}}$; see more on it in Section 3.2. The convolution term $\Psi[\Lambda] * U(t)$ represents the non-linear part of the branching renewal equation. The seemingly free term $B(\cdot)$ of equation (3.2) is a non-negative function specified below by (3.4) and (3.5). It also depends on the function $\Lambda(\cdot)$ in a non-linear way; however, asymptotically it acts as a truly free term.

The derivation of (3.2) is based on the following extended version of decomposition (2.4):

$X(t) = \chi(t) + \sum_{j=1}^{N} X^{(j)}(t - \tau_j), \quad t \in \mathbb{Z},$

where $X^{(j)}(\cdot)$ are independent daughter copies of $(X(\cdot) \mid Z_0 = 1)$. It entails $e^{\chi(t) - X(t)} = \prod_{j=1}^{N} e^{-X^{(j)}(t - \tau_j)}$, and taking expectations, we obtain

$E_1(e^{\chi(t) - X(t)}) = E\bigl(e^{-\sum_{j=1}^{N} \Lambda(t - \tau_j)}\bigr) = E\bigl(e^{-\sum_{j=1}^{L} \nu(j) \Lambda(t - j)}\bigr).$

On the other hand (recall $e_1\{x\} := 1 - e^{-x}$),

$E_1(e^{\chi(t) - X(t)}) - e^{-\Lambda(t)} = E_1(e^{\chi(t) - X(t)} - e^{-X(t)}) = E_1\bigl(e_1\{\chi(t)\}\, e^{\chi(t) - X(t)}\bigr).$

Denoting the last expectation by $D(t)$, we can write

(3.4) $D(t) = E\bigl(e_1\{\chi(t)\}\, e^{-\sum_{j=1}^{L} \nu(j) \Lambda(t - j)}\bigr),$

due to the independence between the progenitor's score $\chi(t)$ and the GWO-processes stemming from the progenitor's daughters. Combining the previous relations, we find

$e^{-\Lambda(t)} = E\bigl(e^{-\sum_{j=1}^{L} \nu(j) \Lambda(t - j)}\bigr) - D(t),$

which, after introducing a term involving operator (3.3), brings

$e^{-\Lambda(t)} = \sum_{j=1}^{\infty} e^{-\Lambda(t - j)} A(j) + \Psi[\Lambda](t) - D(t).$

Subtracting both sides from 1 yields

$e_1\{\Lambda(t)\} = \sum_{j=1}^{\infty} e_1\{\Lambda(t - j)\} A(j) - \Psi[\Lambda](t) + D(t),$

which can be rewritten in the form of a renewal equation,

$e_1\{\Lambda(t)\} = \sum_{j=1}^{t} e_1\{\Lambda(t - j)\} A(j) + \sum_{j=t+1}^{\infty} e_1\{\Lambda(t - j)\} A(j) - \Psi[\Lambda](t) + D(t).$

Formally solving this renewal equation, we get

$e_1\{\Lambda(t)\} = \sum_{j=1}^{\infty} e_1\{\Lambda(-j)\} R_t(j) - \Psi[\Lambda] * U(t) + D * U(t),$

where $R_t(j)$ is given by (2.3). Here we used

$\sum_{k=0}^{t} \sum_{j=t-k+1}^{\infty} e_1\{\Lambda(t - k - j)\} A(j) U(k) = \sum_{k=0}^{t} U(k) \sum_{j=1}^{\infty} e_1\{\Lambda(-j)\} A(j + t - k) = \sum_{j=1}^{\infty} e_1\{\Lambda(-j)\} R_t(j).$

Since $e_1\{\Lambda(t)\} = \Lambda(t) - e_2\{\Lambda(t)\}$, we conclude that relation (3.2) holds with

(3.5) $B(t) = e_2\{\Lambda(t)\} + \sum_{j=1}^{\infty} e_1\{\Lambda(-j)\} R_t(j) + D * U(t).$

3.2 Laplace Transform of the Reproduction Law

The Laplace transform of the reproduction law, $E(e^{-f(\tau_1) - \dots - f(\tau_N)})$, is a positive functional defined on the set of non-negative sequences $(f(t))_{t \ge 1}$. The moments of order higher than one of the joint distribution of $(\tau_1, \dots, \tau_N)$ are characterised by the non-linear functional

(3.6) $\Psi(f) := E\Bigl(\prod_{j=1}^{N} e^{-f(\tau_j)} - \sum_{j=1}^{N} e^{-f(\tau_j)}\Bigr).$

This functional is monotone in view of the elementary identity

(3.7) $\sum_{j=1}^{k} (a_j - b_j) - \prod_{j=1}^{k} a_j + \prod_{j=1}^{k} b_j = \sum_{j=1}^{k} (a_j - b_j)\bigl(1 - a_1 \cdots a_{j-1} b_{j+1} \cdots b_k\bigr),$

in that if $f(t) \ge g(t)$ for all $t \ge 1$, then $\Psi(f) \ge \Psi(g)$. In particular, with $g(t) \equiv 0$, we get $\Psi(g) = E(1 - N) = 0$ due to our standing assumption $E(N) = 1$, which implies that $\Psi(f) \ge 0$ for all eligible $f(\cdot)$.

The operator (3.3) introduced earlier is obtained from the functional (3.6) through the connection

$\Psi[f](t) = \Psi(f_t), \qquad f_t(j) := f(t - j) 1\{1 \le j \le t\},$

which is verified by

$\Psi(f_t) \overset{(3.6)}{=} E\Bigl(\prod_{j=1}^{N} e^{-f_t(\tau_j)} - \sum_{j=1}^{N} e^{-f_t(\tau_j)}\Bigr) = E\Bigl(\prod_{k=1}^{L} e^{-f_t(k) \nu(k)} - \sum_{k=1}^{L} e^{-f_t(k)} \nu(k)\Bigr) = E\Bigl(\prod_{k=1}^{L} e^{-f(t-k) \nu(k)}\Bigr) - \sum_{k=1}^{\infty} e^{-f(t-k)} A(k) \overset{(3.3)}{=} \Psi[f](t).$

Lemma 3.3

Consider the constant function $f(t) \equiv z$, $t \in \mathbb{Z}$. If (1.1) holds, then

$\Psi[f](t) = \Psi(z) = E(e^{-zN}) - e^{-z}, \quad t \ge 0,$

and $z^{-2} \Psi(z) \to b$ as $z \to 0$.

Proof

The first assertion follows from the relation connecting $\Psi[f](t)$ and $\Psi(f)$. The second assertion follows from L'Hôpital's rule. ∎
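Lemma 3.3 can be illustrated concretely. For a Geometric(1/2) offspring law on $\{0, 1, 2, \dots\}$ (mean 1, variance 2, hence $b = 1$), the generating function gives $E(e^{-zN}) = 1/(2 - e^{-z})$. A numerical check (ours) of $\Psi(z) \ge 0$ and of the limit $z^{-2} \Psi(z) \to b$:

```python
import math

def psi(z):
    # Psi(z) = E(exp(-z*N)) - exp(-z) for Geometric(1/2) offspring:
    # E(exp(-z*N)) = sum_k 2^-(k+1) * exp(-z*k) = 1 / (2 - exp(-z)).
    return 1.0 / (2.0 - math.exp(-z)) - math.exp(-z)
```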

Lemma 3.4

If (1.1) holds and

$n r_n(ny) \xrightarrow{y} r(y), \quad n \to \infty,$

where $r : [0, \infty) \to [0, \infty)$ is a continuous function, then

$n^2 \Psi[r_n](ny) \xrightarrow{y} b r^2(y), \quad n \to \infty.$

Proof

Observe that (3.7) implies

Ψ [ f ] ( t ) - Ψ [ g ] ( t ) = E ( j = 1 N ( e - g ( t - τ j ) - e - f ( t - τ j ) ) ( 1 - i = 1 j - 1 e - f ( t - τ i ) i = j + 1 N e - g ( t - τ i ) ) ) ,

which in turn gives, for arbitrary 1 t 1 t ,

| Ψ [ f ] ( t ) - Ψ [ g ] ( t ) | E ( j = 1 N ( t 1 ) | f ( t - τ j ) - g ( t - τ j ) | I j + f g j = N ( t 1 ) + 1 N I j ) ,

where f := sup t 1 | f ( t ) | and

I j := ( 1 - i = 1 j - 1 e - f ( t - τ i ) i = j + 1 N e - g ( t - τ i ) ) i = 1 j - 1 f ( t - τ i ) + i = j + 1 N g ( t - τ i ) f g ( N - 1 ) .

Using E ( N ( N - 1 ) ) = 2 b , we therefore obtain

| Ψ [ f ] ( t ) - Ψ [ g ] ( t ) | 2 b f g max 1 j t 1 | f ( t - j ) - g ( t - j ) | + f g 2 E ( ( N ( t ) - N ( t 1 ) ) N ) .

This implies that

(3.8) | Ψ [ f ] ( t ) - Ψ [ g ] ( t ) | 2 b f g max 1 j t 1 | f ( t - j ) - g ( t - j ) | + f g 2 δ ( t 1 ) ,

where δ ( t ) := E ( ( N - N ( t ) ) N ) 0 as t .

Applying (3.8) with t 1 = n ϵ , t = n y , and

f ( j ) := r n ( j ) , g ( j ) := z n , j 1 , z n := n - 1 r ( y ) ,

we get

| n 2 Ψ [ r n ] ( n y ) - n 2 Ψ [ z n ] ( n y ) | C sup 0 x ϵ | n r n ( n ( y - x ) ) - r ( y ) | + C 1 δ ( n ϵ ) .

Thus, under the imposed conditions,

lim ϵ 0 sup 0 y y 0 ( n 2 Ψ [ r n ] ( n y ) - n 2 Ψ [ z n ] ( n y ) ) 0 , n ,

for any y 0 > 0 . It remains to observe that n 2 Ψ [ z n ] ( n y ) y b r 2 ( y ) as n , according to Lemma 3.3. ∎

3.3 Basic Convergence Result

If $\Lambda(t)$ is given by Definition 3.1, then

(3.9) $E_n(e^{-X(t)}) = e^{-n \Lambda(t)}.$

This observation explains the importance of the next result.

Proposition 3.5

Assume (1.1) and $a < \infty$, and consider a sequence of positive functions $\Lambda_n(\cdot)$ satisfying

(3.10) $\Lambda_n(t) = B_n(t) - \Psi[\Lambda_n] * U(t), \quad t \ge 0, \quad n \ge 1.$

If the non-negative functions $B_n(t)$ are such that

(3.11) $n B_n(ny) \xrightarrow{y} B(y), \quad n \to \infty,$

where $B(y)$ is a continuous function, then

$n \Lambda_n(ny) \xrightarrow{y} r(y), \quad n \to \infty,$

where $r(y)$ is the continuous function uniquely defined by

(3.12) $r(y) = B(y) - b a^{-1} \int_0^y r^2(u)\, du.$

Proof

We will prove this statement in three steps. Firstly, we will show

(3.13) r ( y ) = n B n ( n y ) - n t = 0 n y Ψ [ n - 1 r n ] ( n y - t ) U ( t ) + δ n ( y ) ,

where δ n ( y ) stands for a function (different in different formulas) such that δ n ( y ) y 0 as n . Secondly, putting Δ n ( y ) := n Λ n ( n y ) - r ( y ) , we will find a y * > 0 such that

(3.14) sup y 0 u y 1 | Δ n ( u ) | 0 , n , 0 < y 0 y 1 y * .

Thirdly, we will demonstrate that

(3.15) Δ n ( y ) y 0 , n .

Proof of (3.13). Rewriting (3.12) as

r ( y ) = B ( y ) - b 0 y r 2 ( y - u ) a - 1 d u

and using (2.2), (3.11), we obtain

r ( y ) = n B n ( n y ) - b n - 1 t = 0 n y r 2 ( y - t n - 1 ) U ( t ) + δ n ( y ) .

This and Lemma 3.4 imply (3.13).

Proof of (3.14). Relations (3.10) and (3.13) yield

(3.16) Δ n ( y ) = n t = 0 n y ( Ψ [ Λ n ] ( t ) - Ψ [ n - 1 r n ] ( t ) ) U ( n y - t ) + δ n ( y ) .

Under the current assumptions, the inequality n Λ n ( n y ) n B n ( n y ) implies that the sequence of functions n Λ n ( n y ) is uniformly bounded over any finite interval 0 y y 1 . Therefore, putting t 1 := t ϵ into (3.8) gives

n 2 | Ψ [ Λ n ] ( t ) - Ψ [ n - 1 r n ] ( t ) | C 1 sup ( 1 - ϵ ) t j t | Δ n ( j n - 1 ) | + C 2 δ ( t ϵ )

for any fixed 0 < ϵ < 1 . Combining this with (3.16) entails

(3.17) | Δ n ( y ) | C n - 1 t = n ϵ n y U ( n y - t ) sup ( 1 - ϵ ) t j t | Δ n ( j n - 1 ) | + C 1 n - 1 t = 0 n ϵ U ( n y - t ) + δ n ( y )

so that, for some positive constant c * independent of ( n , ϵ , y ) ,

| Δ n ( y ) | c * y sup ϵ ( 1 - ϵ ) u y | Δ n ( u ) | + C ϵ + δ n ( y ) .

It follows that

sup ϵ ( 1 - ϵ ) y v | Δ n ( y ) | c * v sup ϵ ( 1 - ϵ ) u v | Δ n ( u ) | + C ϵ + sup ϵ ( 1 - ϵ ) y v δ n ( y ) .

Replacing here 𝑣 by y * := ( 2 c * ) - 1 , we derive

lim sup n sup ϵ ( 1 - ϵ ) u y * | Δ n ( u ) | C ϵ ,

which, after letting ϵ 0 , results in (3.14).

Proof of (3.15). It suffices to demonstrate that the convergence interval in (3.14) can be consecutively expanded from ( 0 , y * ] to ( 0 , 2 y * ] , from ( 0 , 2 y * ] to ( 0 , 3 y * ] , and so forth. Suppose we have established that, for some k 1 ,

sup y 0 u y 1 | Δ n ( u ) | 0 , n , 0 < y 0 y 1 k y * .

Then, for k y * < y ( k + 1 ) y * , by (3.17),

| Δ n ( y ) | C n - 1 t = n k y * n y U ( n y - t ) sup ( 1 - ϵ ) t j t | Δ n ( j n - 1 ) | + C ϵ + δ n ( y ) ,

yielding

sup k y * y ( k + 1 ) y * | Δ n ( y ) | c * y * sup k y * u ( k + 1 ) y * | Δ n ( u ) | + C ϵ + sup k y * u ( k + 1 ) y * δ n ( y ) .

Since c * y * < 1 , we may conclude that

sup k y * u ( k + 1 ) y * | Δ n ( u ) | 0 , n ,

thereby completing the proof of (3.15). ∎

4 Continuous State Critical Branching Process

In this section, among other things, we clarify the meaning of $\xi_{\gamma}(\cdot)$ given by (1.7) in terms of the log-Laplace transforms of the fdds of the process $\xi(\cdot)$. From now on, we consistently use the following shorthand notation:

$G_p(\bar u, \bar\lambda) := G_p(u_1, \dots, u_p; \lambda_1, \dots, \lambda_p),$
$G_p(c_1 \bar u + y, c_2 \bar\lambda) := G_p(c_1 u_1 + y, \dots, c_1 u_p + y; c_2 \lambda_1, \dots, c_2 \lambda_p),$
$H_{p,q}(\bar u, \bar\lambda) := H_{p,q}(u_1, \dots, u_p; \lambda_{11}, \dots, \lambda_{p1}; \dots; \lambda_{1q}, \dots, \lambda_{pq}).$

4.1 Laplace Transforms for ξ ( )

The set of functions

(4.1) $G_p(\bar u, \bar\lambda) := -\ln E_1\bigl(e^{-\lambda_1 \xi(u_1) - \dots - \lambda_p \xi(u_p)}\bigr), \quad p \ge 1,$

with $u_i, \lambda_i \ge 0$, determines the fdds of the process $\xi(\cdot)$.

Lemma 4.1

For non-negative $x, y, u_1, u_2, \dots, \lambda_1, \lambda_2, \dots$,

$E\bigl(e^{-\lambda_1 \xi(u_1 + y) - \dots - \lambda_p \xi(u_p + y)} \mid \xi(y) = x\bigr) = e^{-x G_p(\bar u, \bar\lambda)}.$

Proof

This result is obtained by induction, using (1.3) and the Markov property of ξ ( ) . To illustrate the argument, take p = 2 and non-negative y , y 1 , y 2 . We have

E ( e - λ 1 ξ ( y + y 1 + y 2 ) - λ 2 ξ ( y + y 2 ) ξ ( y ) = x ) = E ( e - λ 2 ξ ( y + y 2 ) E ( e - λ 1 ξ ( y + y 1 + y 2 ) ξ ( y + y 2 ) ) ξ ( y ) = x ) = ( 1.3 ) E ( exp { - ( λ 2 + λ 1 1 + b λ 1 y 1 ) ξ ( y + y 2 ) } | ξ ( y ) = x ) = ( 1.3 ) exp { - ( λ 1 + λ 2 + b λ 1 λ 2 y 1 ) x 1 + b λ 1 ( y 1 + y 2 ) + b λ 2 y 2 + b 2 λ 1 λ 2 y 1 y 2 } .

With $u_2 = y_2$ and $u_1 = y_1 + y_2$, this gives the explicit expression

$G_2(\bar u, \bar\lambda) = \frac{\lambda_1 + \lambda_2 + b \lambda_1 \lambda_2 (u_1 - u_2)}{1 + b \lambda_1 u_1 + b \lambda_2 u_2 + b^2 \lambda_1 \lambda_2 (u_1 - u_2) u_2}$

for the asserted relation $E(e^{-\lambda_1 \xi(u_1 + y) - \lambda_2 \xi(u_2 + y)} \mid \xi(y) = x) = e^{-x G_2(\bar u, \bar\lambda)}$ in the case $p = 2$. ∎

Lemma 4.2

If

(4.2) $u_1 > \dots > u_p = 0, \quad \lambda_1 \ge 0, \dots, \lambda_p \ge 0,$

then, for all $y \ge 0$ and with the convention $G_0(\bar u, \bar\lambda) := 0$, the following two relations hold:

$G_p(\bar u + y, \bar\lambda) = \bigl(b y + (G_{p-1}(\bar u, \bar\lambda) + \lambda_p)^{-1}\bigr)^{-1},$
(4.3) $G_p(\bar u + y, \bar\lambda) = G_{p-1}(\bar u, \bar\lambda) + \lambda_p - b \int_0^y G_p^2(\bar u + v, \bar\lambda)\, dv.$

Proof

With $u_p = 0$, relation (4.1) gives

$G_p(\bar u + y, \bar\lambda) = -\ln E_1\bigl(e^{-\lambda_p \xi(y)}\, E(e^{-\lambda_1 \xi(u_1 + y) - \dots - \lambda_{p-1} \xi(u_{p-1} + y)} \mid \xi(y))\bigr).$

Applying Lemma 4.1 and (1.3), we get the first statement:

$G_p(\bar u + y, \bar\lambda) = -\ln E_1\bigl(e^{-\lambda_p \xi(y)} e^{-G_{p-1}(\bar u, \bar\lambda) \xi(y)}\bigr) = \bigl(b y + (G_{p-1}(\bar u, \bar\lambda) + \lambda_p)^{-1}\bigr)^{-1}.$

This implies the second statement, since a function of the form $H(y) = (b y + H_0^{-1})^{-1}$ satisfies the integral equation

(4.4) $H(y) = H_0 - b \int_0^y H^2(v)\, dv.$

4.2 Riccati Integral Equations

Equation (4.3) has the form of the Riccati integral equation (4.4), associated with the simple Riccati differential equation $H'(y) = -b H^2(y)$, $H(0) = H_0$. Our limit theorems require a more general equation of this type,

(4.5) $H(y) = F(y) - b \int_0^y H^2(v)\, dv.$

Lemma 4.3

Let the function $F : [0, \infty) \to [0, \infty)$ be non-decreasing, with $F(0) \ge 0$. For a given $n \ge 1$, consider the step function

$F^{(n)}(y) := \sum_{k=0}^{\infty} F\bigl(\tfrac{k}{n}\bigr) 1\bigl\{\tfrac{k}{n} \le y < \tfrac{k+1}{n}\bigr\}, \quad y \ge 0,$

and put

$e^{-H^{(n)}(y)} := E_1\bigl(\exp\bigl\{-\xi_{F^{(n)}}\bigl(\tfrac{\lfloor ny \rfloor}{n}\bigr)\bigr\}\bigr), \quad y \ge 0,$

where

$\xi_{F^{(n)}}\bigl(\tfrac{k}{n}\bigr) := \xi\bigl(\tfrac{k}{n}\bigr) F(0) + \sum_{i=1}^{k} \xi\bigl(\tfrac{k-i}{n}\bigr) \bigl(F\bigl(\tfrac{i}{n}\bigr) - F\bigl(\tfrac{i-1}{n}\bigr)\bigr).$

Then the function $H^{(n)}(\cdot)$ satisfies the recursion

$H^{(n)}\bigl(\tfrac{k}{n}\bigr) = F\bigl(\tfrac{k}{n}\bigr) - F\bigl(\tfrac{k-1}{n}\bigr) + H^{(n)}\bigl(\tfrac{k-1}{n}\bigr) \bigl(1 + \tfrac{b}{n} H^{(n)}\bigl(\tfrac{k-1}{n}\bigr)\bigr)^{-1}, \quad k \ge 1,$

with $H^{(n)}(0) = F(0)$.

Proof

Putting f k := F ( k n ) and f - 1 := 0 , we get

H ( n ) ( k n ) = - ln E 1 ( exp { - i = 0 k ξ ( k - i n ) ( f i - f i - 1 ) } ) = f k - f k - 1 - ln E 1 ( exp { - i = 0 k - 1 ξ ( k - i n ) ( f i - f i - 1 ) } ) ,

and by Lemma 4.1,

H ( n ) ( k n ) = f k - f k - 1 + G k ( u ¯ + 1 n , λ ¯ ) ,

with u i := k - i n and λ i = f i - 1 - f i - 2 for i 1 . Since, by Lemma 4.2,

G k ( u ¯ + 1 n , λ ¯ ) = ( b n + ( G k - 1 ( u ¯ , λ ¯ ) + λ k ) - 1 ) - 1 ,

we conclude

H ( n ) ( k n ) = f k - f k - 1 + ( b n + ( H n ( n ) ( k - 1 n ) ) - 1 ) - 1 = f k - f k - 1 + H ( n ) ( k - 1 n ) ( 1 + b n H ( n ) ( k - 1 n ) ) - 1 .
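For constant $F \equiv H_0$ the recursion of Lemma 4.3 can be solved in closed form: taking reciprocals gives $1/H^{(n)}(k/n) = 1/H_0 + kb/n$, i.e. exactly the function $(by + H_0^{-1})^{-1}$ of Lemma 4.2 on the grid points $y = k/n$. A short sketch (ours, for illustration) iterating the recursion and checking this:

```python
def riccati_recursion(F_vals, b, n):
    # Iterate H(k/n) = F(k/n) - F((k-1)/n)
    #               + H((k-1)/n) / (1 + (b/n) H((k-1)/n)),  H(0) = F(0),
    # where F_vals[k] = F(k/n) on the grid k = 0, 1, 2, ...
    H = [F_vals[0]]
    for k in range(1, len(F_vals)):
        prev = H[-1]
        H.append(F_vals[k] - F_vals[k - 1] + prev / (1.0 + (b / n) * prev))
    return H
```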

Proposition 4.4

Let the function $F(\cdot)$ have a continuous derivative $F' : [0, \infty) \to [0, \infty)$, and let $F(0) \ge 0$. The functions $H^{(n)}(\cdot)$ defined in Lemma 4.3 converge,

$H^{(n)}(y) \to H(y), \quad y \ge 0, \quad n \to \infty,$

to the solution of the Riccati equation (4.5).

Proof

Applying a Taylor expansion to the recursion stated by Lemma 4.3, we obtain

H ( n ) ( k n ) = f k - f k - 1 + H ( n ) ( k - 1 n ) - b n ( H ( n ) ( k - 1 n ) ) 2 + ϵ n ( k ) , ϵ n ( k ) = H ( n ) ( k - 1 n ) ( ( 1 + b n H ( n ) ( k - 1 n ) ) - 1 - 1 + b n H ( n ) ( k - 1 n ) ) = ( b n ) 2 ( H ( n ) ( k - 1 n ) ) 3 1 + b n H ( n ) ( k - 1 n ) .

By summing this recursion, we get

(4.6) H ( n ) ( k n ) = f k - b n i = 0 k - 1 ( H ( n ) ( i n ) ) 2 + i = 1 k ϵ n ( i ) .

To prove the proposition, it suffices to verify that

(4.7) Δ n ( k ) := H ( n ) ( k n ) - H ( k n ) y 0 , n ,

where H ( n ) ( ) satisfies (4.6), with f i = F ( i n ) . To this end, note that

i = 0 k ξ ( k - i n ) ( f i - f i - 1 ) = f k ξ ( 0 ) + i = 0 k - 1 ( ξ ( k - i n ) - ξ ( k - i - 1 n ) ) f i f k ξ ( k n )

implies an upper bound

H ( n ) ( k n ) - ln E 1 ( e - f k ξ ( k n ) ) = ( 1.3 ) f k 1 + b f k k n ,

that ensures H ( n ) ( k n ) C ( y ) , provided f k C 1 ( y ) for all k n y , so that i = 1 n y ϵ n ( i ) y 0 as n .

This and (4.6) entail

$$\Delta_n(k) = -\tfrac{b}{n}\sum_{i=0}^{k-1}\Delta_n(i)\Bigl(H^{(n)}\bigl(\tfrac{i}{n}\bigr)+H\bigl(\tfrac{i}{n}\bigr)\Bigr)+\delta_n(k),$$

where $\delta_n(\lfloor ny\rfloor)\stackrel{y}{\to}0$ as $n\to\infty$. In view of this relation, we can find a sufficiently small $y^*>0$ such that

$$\sup_{0\le y\le y^*}|\Delta_n(\lfloor ny\rfloor)| \to 0,\qquad n\to\infty.$$

It follows that

$$\Delta_n(k) = -\tfrac{b}{n}\sum_{i=\lfloor ny^*\rfloor}^{k-1}\Delta_n(i)\Bigl(H^{(n)}\bigl(\tfrac{i}{n}\bigr)+H\bigl(\tfrac{i}{n}\bigr)\Bigr)+\tilde\delta_n(k),$$

where $\tilde\delta_n(\lfloor ny\rfloor)\stackrel{y}{\to}0$ as $n\to\infty$. This, in turn, gives

$$\sup_{0\le y\le 2y^*}|\Delta_n(\lfloor ny\rfloor)| \to 0,\qquad n\to\infty,$$

and proceeding in the same manner, we arrive at (4.7). ∎

4.3 Laplace Transforms for $\xi_F(\cdot)$

Notice that the Riemann–Stieltjes integrals appearing in this paper are understood as

$$\int_0^t f(u)\,dF(u) := F(0)f(0) + \int_{(0,t]} f(u)\,dF(u).$$

Referring to Proposition 4.4, we treat the Riemann–Stieltjes integral

$$\xi_F(y) = \int_0^y \xi(y-v)\,dF(v)$$

as a random variable satisfying $\mathbf{E}_1(e^{-\xi_F(y)})=e^{-H(y)}$. This interpretation will be extended to the fdds of $\xi_F(\cdot)$ in terms of the log-Laplace transforms

$$(4.8)\qquad H_p(\bar u,\bar\lambda) := -\ln \mathbf{E}_1\bigl(e^{-\lambda_1\xi_F(u_1)-\cdots-\lambda_p\xi_F(u_p)}\bigr).$$

Lemma 4.5

Under the assumptions of Proposition 4.4, given (4.2), the function (4.8) satisfies

$$(4.9)\qquad H_p(\bar u+y,\bar\lambda) = H_{p-1}(\bar u,\bar\lambda) + F_p(y) - b\int_0^y H_p^2(\bar u+v,\bar\lambda)\,dv,$$

where $F_p(y) := \sum_{i=1}^{p}\lambda_i\bigl(F(u_i+y)-F(u_i)\bigr)$ for $y>0$, and $F_p(0) := \lambda_p F(0)$.

Proof

The proof of Lemma 4.5 uses an argument similar to that of Lemma 4.3 and Proposition 4.4, the main idea being to demonstrate that the step function version of (4.8), defined by

$$e^{-H_p^{(n)}(\bar u+y,\bar\lambda)} := \mathbf{E}_1\Bigl(\exp\Bigl\{-\sum_{i=1}^{p}\lambda_i\,\xi_F^{(n)}\Bigl(\tfrac{\lfloor nu_i\rfloor}{n}+\tfrac{\lfloor ny\rfloor}{n}\Bigr)\Bigr\}\Bigr),$$

converges, as $n\to\infty$, to the solution of (4.9), i.e., $H_p^{(n)}(\bar u+y,\bar\lambda)\to H_p(\bar u+y,\bar\lambda)$. Instead of giving tedious details in terms of the discrete version of (4.8), we indicate below the key new argument in terms of the continuous version of the integral $\xi_F(\cdot)$.

Due to (4.8), we have

$$e^{-H_p(\bar u,\bar\lambda)} = \mathbf{E}_1\Bigl(\exp\Bigl\{-\sum_{i=1}^{p}\lambda_i\,\xi_F(u_i)\Bigr\}\Bigr),$$

which, in view of (1.3) and (4.8), yields

$$e^{-H_p(\bar u+y,\bar\lambda)} = \mathbf{E}_1\Bigl(\exp\Bigl\{-\sum_{i=1}^{p}\lambda_i\int_0^{u_i+y}\xi(u_i+y-v)\,dF(v)\Bigr\}\Bigr).$$

Splitting each of the integrals into two parts, $\int_0^{u_i+y} = \int_0^{u_i}+\int_{u_i}^{u_i+y}$, we find

$$\sum_{i=1}^{p}\lambda_i\int_0^{u_i+y}\xi(u_i+y-v)\,dF(v) = \sum_{i=1}^{p-1}\lambda_i\int_0^{u_i}\xi(u_i+y-v)\,dF(v) + \int_0^{y}\xi(y-v)\,dF_p(v),$$

and then, using the Markov property of the process $\xi(\cdot)$,

$$\mathbf{E}\Bigl(\exp\Bigl\{-\sum_{i=1}^{p-1}\lambda_i\int_0^{u_i}\xi(u_i+y-v)\,dF(v)\Bigr\}\Bigm|\ \xi(u),\,0\le u\le y\Bigr) = e^{-\xi(y)H_{p-1}(\bar u,\bar\lambda)},$$

we obtain

$$e^{-H_p(\bar u+y,\bar\lambda)} = \mathbf{E}_1\bigl(\exp\{-\xi(y)H_{p-1}(\bar u,\bar\lambda)-\xi_{F_p}(y)\}\bigr) = \mathbf{E}_1\bigl(e^{-\xi_{\widetilde F_p}(y)}\bigr),$$

where $\widetilde F_p(y) := H_{p-1}(\bar u,\bar\lambda)+F_p(y)$. After this, it remains to apply Proposition 4.4. ∎

5 Main Results

The aim of this chapter is to establish an fdd-convergence result for the vector $(X_1(\cdot),\dots,X_q(\cdot))$ composed of the population counts corresponding to different individual scores $\chi_1(\cdot),\dots,\chi_q(\cdot)$, which may depend on each other.

5.1 Limit Theorems

Theorem 1

Consider a population count defined by (2.1). If (1.1) holds, $a<\infty$, and $m_\chi<\infty$ (see (2.5)), then

$$(5.1)\qquad \{n^{-1}X(nu),\,u>0\mid Z_0=n\} \stackrel{\mathrm{fdd}}{\longrightarrow} \{m_\chi\,\xi(ua^{-1}),\,u>0\mid \xi(0)=1\},\qquad n\to\infty,$$

where $\xi(\cdot)$ is the continuous state branching process satisfying (1.3).

There are three new features in the limiting process of (5.1) compared to that of (1.2):

  • the continuous time parameter 𝑢 does not include zero, reflecting the fact that it may take some time for the distribution of ages of coexisting individuals to stabilise;

  • the time scale $a^{-1}$ corresponds to the scaling by the average generation length in the presence of overlapping generations;

  • the factor $m_\chi$ accounts for the average 𝜒-score in a population with overlapping generations.
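Before turning to the overlapping-generations setting, the classical limit (1.2) that Theorem 1 extends can be probed by a small Monte Carlo sketch. Everything below is an illustration under assumptions of my choosing: a binary offspring law $N\in\{0,2\}$ with equal probabilities, so that $\mathbf{E}(N)=1$ and $b=\mathrm{Var}(N)/2=1/2$; generations do not overlap, and only the first moment $\mathbf{E}(\xi(u))=1$ of the limit is checked.

```python
import random

def gw_path(n, steps, rng):
    """Critical GW-process with offspring N in {0, 2}: E(N) = 1, Var(N) = 1.
    Each of the z current individuals leaves 2 children with probability 1/2."""
    z = n
    for _ in range(steps):
        z = 2 * sum(1 for _ in range(z) if rng.random() < 0.5)
    return z

rng = random.Random(1)
n = 100
# sample n^{-1} Z_n given Z_0 = n, i.e. u = 1 on the time scale of (1.2)
samples = [gw_path(n, n, rng) / n for _ in range(200)]
mean = sum(samples) / len(samples)
print(mean)  # should be near 1, since the scaled process is a martingale
```

The sample mean hovers around 1 while individual paths fluctuate strongly (and a positive fraction of them die out), which is exactly the behaviour of the limiting continuous state branching process.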

Theorem 2

Consider a population count defined by (2.1). Assume (1.1), $a<\infty$, and (2.6); in the case $m_\chi=\infty$, assume additionally

$$(5.2)\qquad \mathbf{E}(\chi^2(t)) = o\bigl(t^{2\gamma}L^2(t)\bigr),\qquad t\to\infty.$$

Then

$$\{n^{-1-\gamma}L^{-1}(n)X(nu),\,u>0\mid Z_0=n\} \stackrel{\mathrm{fdd}}{\longrightarrow} \{a^{\gamma-1}\xi_\gamma(ua^{-1}),\,u>0\mid \xi(0)=1\},\qquad n\to\infty,$$

where $\xi_\gamma(\cdot)$ is given by (1.7), understood according to the previous chapter.

The next result extends Theorems 1 and 2 to the case of several population counts.

Theorem 3

Consider $q\ge 1$ population counts $X_1(t),\dots,X_q(t)$, each defined by Definition 2.1 in terms of different individual scores $\chi_1(t),\dots,\chi_q(t)$. Assume (1.1), $a<\infty$, and (2.6), with $\gamma=\gamma_j$ and $L=L_j$ for the $\chi_j$-score, $j=1,\dots,q$. If $m_{\chi_j}=\infty$, assume additionally condition (5.2) for the $\chi_j$-score.

Then, as n ,

$$\Bigl(\frac{X_1(nu)}{n^{1+\gamma_1}L_1(n)},\dots,\frac{X_q(nu)}{n^{1+\gamma_q}L_q(n)}\Bigm|\ Z_0=n\Bigr)_{u>0} \stackrel{\mathrm{fdd}}{\longrightarrow} \bigl(a^{\gamma_1-1}\xi_{\gamma_1}(ua^{-1}),\dots,a^{\gamma_q-1}\xi_{\gamma_q}(ua^{-1})\bigm|\ \xi(0)=1\bigr)_{u>0}.$$

To illustrate the utility of Theorem 3, we consider a multitype GW-process

$$\{(Z_t^1,Z_t^2,\dots,Z_t^q),\,t\ge 0\mid Z_0^1=n\},$$

where $Z_t^i$ is the number of type 𝑖 individuals born at time 𝑡, for $i=1,\dots,q$. Each individual of type 𝑖 is assumed to live one unit of time and then be replaced by $N_{ij}$ individuals of type 𝑗, $j=1,\dots,q$. Denoting $m_{ij} := \mathbf{E}(N_{ij})$, assume that the multitype GW-process is decomposable in that

$$(5.3)\qquad m_{ij} = 0,\qquad 1\le j<i\le q.$$

The next result deals with a decomposable critical GW-process satisfying

$$(5.4)\qquad m_{jj}=1,\ 1\le j\le q,\qquad m_{j-1,j}\in(0,\infty),\ 2\le j\le q,\qquad m_{ij}<\infty,\ 1\le i\le j\le q.$$

To put this process into the GWO-framework, we treat as GWO-individuals only the type 1 individuals, while the other types are addressed by respective population counts. Clearly, the numbers of GWO-individuals form a single-type GW-process, and (1.2), derived from Corollary 1, describes the limit behaviour of the scaled process $(Z_t^1,\,t\ge 0\mid Z_0^1=n)$. Since, during 𝑛 units of time, the process $\{Z_0^1,\dots,Z_{n-1}^1\mid Z_0^1=n\}$ produces type 2 individuals at a rate of order 𝑛 new individuals per unit of time, one would expect, in view of Theorem 3, a typical number of type 2 individuals at time 𝑛 to be of order $n^2$. An extrapolation of this reasoning suggests scaling by $n^j$ for the number of type 𝑗 individuals, $j=1,\dots,q$.
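The $n^j$ heuristic can be checked at the level of expectations: with $Z_0^1=n$, the vector of mean counts at time 𝑡 equals $n$ times the first row of $m^t$, and for a mean matrix satisfying (5.3)-(5.4) the entry $(m^t)_{1j}$ grows like $t^{j-1}$. The sketch below uses a hypothetical mean matrix for $q=3$ chosen purely for illustration (the value of $m_{13}$ is irrelevant to the leading order):

```python
import numpy as np

# Hypothetical mean matrix for q = 3 types: m[i][j] = E(N_{ij}); upper triangular
# with ones on the diagonal, as required by (5.3)-(5.4).
m = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])

def mean_counts(n, t):
    """E(Z_t^j | Z_0^1 = n) = n * (m^t)_{1j}, by the branching property."""
    return n * np.linalg.matrix_power(m, t)[0]

n = 500
# divide the mean counts at time n by n^1, n^2, n^3 respectively
scaled = mean_counts(n, n) / np.array([float(n), float(n)**2, float(n)**3])
print(scaled)  # approaches (1, m12/1!, m12*m23/2!) = (1, 2, 3) as n grows
```

The limiting constants $1/(j-1)!\,m_{1,2}\cdots m_{j-1,j}$ are exactly the factors $\alpha_{j-1}$ appearing in Theorem 4.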

Theorem 4

Consider a decomposable multitype GW-process $(Z_t^1,Z_t^2,\dots,Z_t^q)$ starting with 𝑛 individuals of type 1. Assume (5.3) and (5.4). If, furthermore, $\mathrm{Var}(N_{jj})<\infty$ for all $1\le j\le q$, and $\mathrm{Var}(N_{11})=2b$, then

$$\{(n^{-1}Z^1_{\lfloor ny\rfloor},\,n^{-2}Z^2_{\lfloor ny\rfloor},\dots,n^{-q}Z^q_{\lfloor ny\rfloor}),\,y\ge 0\mid Z_0^1=n\} \stackrel{\mathrm{fdd}}{\longrightarrow} \{(\xi(y),\,\alpha_1\xi_1(y),\dots,\alpha_{q-1}\xi_{q-1}(y)),\,y\ge 0\mid \xi(0)=1\}$$

as $n\to\infty$, with $\alpha_j := \frac{1}{j!}\,m_{1,2}\cdots m_{j,j+1}$, $j=1,\dots,q-1$.

Here the limiting process $\xi(\cdot)$ is the same as in (1.2), and $\xi_j(y) = \int_0^y \xi(y-u)\,du^j$; see (1.7). Notice that the only source of randomness in the 𝑞-dimensional limit process is the randomly fluctuating number of type 1 individuals. Observe also that only the means $m_{j,j+1}$ appear in the limit, and not the other means such as $m_{1,3}$. This fact reflects the following phenomenon of the reproduction system under consideration: in a large population, the number of type 3 individuals stemming directly from type 1 individuals is negligible compared to the number of type 3 individuals stemming from type 2 individuals.

5.2 Proof of Theorem 1

Assuming (4.2), put

$$\Lambda_{n,p}(t) := -\ln \mathbf{E}_1\Bigl(\exp\Bigl\{-n^{-1}\sum_{i=1}^{p}\lambda_i X(nu_i+t)\Bigr\}\Bigr),\qquad t\in\mathbb{Z}.$$

Due to (3.9), the Laplace transforms of the 𝑝-dimensional distributions of the scaled $X(\cdot)$ are given by

$$\mathbf{E}_n\Bigl(\exp\Bigl\{-n^{-1}\sum_{i=1}^{p}\lambda_i X(n(u_i+y))\Bigr\}\Bigr) = e^{-n\Lambda_{n,p}(\lfloor ny\rfloor)},\qquad y\ge 0.$$

We prove Theorem 1 by showing that

$$(5.5)\qquad n\Lambda_{n,p}(\lfloor ny\rfloor) \stackrel{y}{\to} r_p(y),\qquad n\to\infty,$$

where the function $r_p(y) := G_p\bigl(a^{-1}(\bar u+y),\,m_\chi\bar\lambda\bigr)$ determines the limiting fdds of Theorem 1 by Lemma 4.1. Our proof of (5.5) consists of several steps, summarised in the next flow chart:

$$(5.6)\qquad (5.5) \Leftarrow (5.9) \Leftarrow \begin{cases} (5.14) \Leftarrow (5.17),\\ (5.15),\\ (5.16) \Leftarrow (5.18),\,(5.19),\,(5.20).\end{cases}$$

Due to Lemma 3.2, we have

$$(5.7)\qquad \Lambda_{n,p}(t) = \Lambda^{[\psi_{n,p}]}(t),$$

with

$$(5.8)\qquad \psi_{n,p}(t) = n^{-1}\sum_{i=1}^{p}\lambda_i\,\chi(nu_i+t).$$

On the other hand, according to (4.3), the limit function $r_p(\cdot)$ satisfies

$$r_p(y) = r_{p-1}(0) + \lambda_p m_\chi - ba^{-1}\int_0^y r_p^2(v)\,dv.$$

Thus, we can prove relation (5.5) using Proposition 3.5 and induction over 𝑝 by verifying that

$$(5.9)\qquad nB_n(\lfloor ny\rfloor) \stackrel{y}{\to} r_{p-1}(0)+\lambda_p m_\chi,$$

where, in accordance with (3.4) and (3.5),

$$(5.10)\qquad B_n(t) = e_2\Lambda_{n,p}(t) + \sum_{s=1}^{\infty} e_1\Lambda_{n,p}(-s)\,R_t(s) + (D_n * U)(t),$$
$$(5.11)\qquad D_n(t) = \mathbf{E}_1\Bigl(e_1\psi_{n,p}(t)\,e^{-\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)}\Bigr).$$

The initial induction step, with $p=0$, becomes trivial if we set $r_0(y) := 0$ for all 𝑦. To state a relevant induction assumption, denote

$$(5.12)\qquad \Lambda_{n,p-1}(t) := -\ln \mathbf{E}_1\Bigl(\exp\Bigl\{-n^{-1}\sum_{i=1}^{p-1}\lambda_i X(nu_i'+t)\Bigr\}\Bigr),\qquad t\in\mathbb{Z},$$

where $u_1' > u_2' > \dots > u_{p-1}'$ and $\lambda_1\ge 0,\dots,\lambda_{p-1}\ge 0$. Then the induction hypothesis claims

$$(5.13)\qquad n\Lambda_{n,p-1}(\lfloor ny\rfloor) \stackrel{y}{\to} G_{p-1}\bigl(a^{-1}(\bar u'+y),\,m_\chi\bar\lambda\bigr),\qquad n\to\infty.$$

We establish the uniform convergence (5.9), under assumption (5.13), in three steps:

$$(5.14)\qquad n\,e_2\Lambda_{n,p}(\lfloor ny\rfloor) \stackrel{y}{\to} 0,\qquad n\to\infty,$$
$$(5.15)\qquad n\sum_{t=1}^{\infty} e_1\Lambda_{n,p}(-t)\,R_{\lfloor ny\rfloor}(t) \stackrel{y}{\to} r_{p-1}(0),\qquad n\to\infty,$$
$$(5.16)\qquad n\sum_{t=1}^{\lfloor ny\rfloor} D_n(\lfloor ny\rfloor-t)\,U(t) \stackrel{y}{\to} \lambda_p m_\chi,\qquad n\to\infty.$$

Proof of (5.14)

The upper bound

$$n\Lambda_{n,p}(t) \le n\,\mathbf{E}_1\bigl(X^{[\psi_{n,p}]}(t)\bigr) = \sum_{i=1}^{p}\lambda_i\,\mathbf{E}_1(X(nu_i+t)),$$

under the assumption $m_\chi<\infty$, implies

$$(5.17)\qquad \sup_{n\ge 1}\ \sup_{-\infty<t\le ny} n\Lambda_{n,p}(t) < \infty\quad\text{for any } y>0.$$

This and a corollary of (1.8), $n\,e_2\Lambda_{n,p}(\lfloor ny\rfloor) \le n\Lambda_{n,p}^2(\lfloor ny\rfloor)$, entail (5.14). ∎

Proof of (5.15)

Setting $u_i' := u_i-u_{p-1}$, recall (5.12). Notice that, since, by (4.2), $u_p=0$, we get, for $t>0$,

$$\Lambda_{n,p}(-t) = -\ln \mathbf{E}_1\Bigl(\exp\Bigl\{-n^{-1}\sum_{i=1}^{p-1}\lambda_i X(nu_i-t)\Bigr\}\Bigr) = -\ln \mathbf{E}_1\Bigl(\exp\Bigl\{-n^{-1}\sum_{i=1}^{p-1}\lambda_i X\bigl(n(u_i'+u_{p-1})-t\bigr)\Bigr\}\Bigr) = \Lambda_{n,p-1}(nu_{p-1}-t).$$

By the induction assumption (5.13), the function

$$r^{(n)}(t) := n\,e_1\Lambda_{n,p-1}(nu_{p-1}-t)\,\mathbf{1}_{\{1\le t\le nu_{p-1}/2\}}$$

satisfies

$$r^{(n)}(\lfloor ny\rfloor) \stackrel{y}{\to} r(y),\quad n\to\infty,\qquad r(y) := G_{p-1}\bigl(a^{-1}(\bar u-y),\,m_\chi\bar\lambda\bigr)\,\mathbf{1}_{\{0\le y\le u_{p-1}/2\}}.$$

Moreover, due to (5.17), we have $0\le r^{(n)}(t)\le C$ for all $n,t\ge 1$. Since $r(0)=r_{p-1}(0)$, relation (5.15) now follows from Lemma 2.2. ∎

Proof of (5.16)

In view of

$$D_n(t) = \mathbf{E}(\psi_{n,p}(t)) - \mathbf{E}(e_2\psi_{n,p}(t)) - \mathbf{E}\Bigl(e_1\psi_{n,p}(t)\,e_1\Bigl(\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\Bigr)\Bigr),$$

relation (5.16) follows from (2.2) and the next three relations:

$$(5.18)\qquad n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}(\lfloor ny\rfloor-t))\,U(t) \stackrel{y}{\to} \lambda_p m_\chi,\qquad n\to\infty,$$
$$(5.19)\qquad n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(e_2\psi_{n,p}(t)) \stackrel{y}{\to} 0,\qquad n\to\infty,$$
$$(5.20)\qquad n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\Bigl(e_1\psi_{n,p}(t)\,e_1\Bigl(\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\Bigr)\Bigr) \stackrel{y}{\to} 0,\qquad n\to\infty.$$

To prove (5.18), notice that (5.8) gives $n\psi_{n,p}(t) = \lambda_p\chi(t) + n\psi_{n,p-1}(t)$. Since

$$\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\chi(\lfloor ny\rfloor-t))\,U(t) \stackrel{y}{\to} m_\chi,\qquad n\to\infty,$$

it suffices to check that

$$(5.21)\qquad n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p-1}(t)) \stackrel{y}{\to} 0,\qquad n\to\infty.$$

This follows from the fact that, for any positive 𝑢,

$$\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\chi(nu+t)) \le \sum_{t>nu}\mathbf{E}(\chi(t)),$$

with the right-hand side going to 0 as $n\to\infty$ under the assumption $m_\chi<\infty$.

Turning to (5.19), we split its left-hand side in three parts using (1.9), and then produce an upper bound as a sum of three terms involving an arbitrary $k\ge 1$:

$$n\,e_2\psi_{n,p}(t) = n\,e_2\bigl(n^{-1}\lambda_p\chi(t)\bigr) + n\,e_2\psi_{n,p-1}(t) + n\,e_1\bigl(n^{-1}\lambda_p\chi(t)\bigr)\,e_1\psi_{n,p-1}(t) \le n^{-1}\lambda_p^2\chi^2(t)\mathbf{1}_{\{\chi(t)\le k\}} + \lambda_p\chi(t)\mathbf{1}_{\{\chi(t)>k\}} + 2n\psi_{n,p-1}(t).$$

The third term is handled by (5.21). The first term is further estimated from above by

$$n^{-1}\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(\chi^2(t)\mathbf{1}_{\{\chi(t)\le k\}}\bigr) \le n^{-1}k\sum_{t=1}^{\infty}\mathbf{E}(\chi(t)),$$

where the right-hand side converges to zero for any fixed 𝑘. Finally, in view of

$$\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(\chi(t)\mathbf{1}_{\{\chi(t)>k\}}\bigr) \le \sum_{t=1}^{\infty}\mathbf{E}\bigl(\chi(t)\mathbf{1}_{\{\chi(t)>k\}}\bigr),$$

the proof of (5.19) is finished by applying Fatou's lemma as $k\to\infty$.

To prove convergence (5.20), we use the bound

$$e_1\psi_{n,p}(t) \le n^{-1}\lambda_p\chi(t) + \psi_{n,p-1}(t),$$

and, referring to (5.21), reduce the task to

$$\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\Bigl(\chi(t)\,e_1\Bigl(\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\Bigr)\Bigr) \stackrel{y}{\to} 0,\qquad n\to\infty.$$

The last relation follows from the upper bound

$$\sum_{t=1}^{\infty}\mathbf{E}\Bigl(\chi(t)\,e_1\Bigl(\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\Bigr)\Bigr) \le \sum_{t=k}^{\infty}\mathbf{E}(\chi(t)) + \sum_{t=1}^{k}\mathbf{E}\bigl(\chi(t)\mathbf{1}_{\{\chi(t)>k_1\}}\bigr) + k_1\sum_{t=1}^{k}\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)A(j),$$

because the third term tends to 0 as $n\to\infty$, thanks to (5.17), and the first two terms in the right-hand side vanish as $k\to\infty$ and $k_1\to\infty$ due to the assumption $m_\chi<\infty$. ∎

5.3 Proof of Theorem 2

The main idea of the proof of Theorem 2 is the same as that of Theorem 1, and here we mainly focus on the new argument addressing the case $m_\chi=\infty$. We want to prove (5.5) with the modified notation

$$\Lambda_{n,p}(t) := -\ln \mathbf{E}_1\Bigl(\exp\Bigl\{-n^{-1-\gamma}L^{-1}(n)\sum_{i=1}^{p}\lambda_i X(nu_i+t)\Bigr\}\Bigr),\qquad r_p(y) := H_p\bigl(a^{-1}(\bar u+y),\,a^{\gamma-1}\bar\lambda\bigr),$$

where $H_p(\bar u,\bar\lambda)$ is defined by (4.8), with $F(y) := y^\gamma$. In this case, relation (5.7) holds with

$$\psi_{n,p}(t) := \sum_{i=1}^{p}\lambda_{n,i}\,\chi(nu_i+t),\qquad \lambda_{n,i} := \lambda_i\,n^{-1-\gamma}L^{-1}(n),$$

and, according to (4.9), the right-hand side of (5.5) satisfies

$$r_p(y) = r_{p-1}(0) + a^{-1}\sum_{i=1}^{p}\lambda_i\bigl((u_i+y)^\gamma-u_i^\gamma\bigr) - ba^{-1}\int_0^y r_p^2(v)\,dv.$$

Thus, under the conditions of Theorem 2, relation (5.5) will follow from Proposition 3.5 after we show

$$nB_n(\lfloor ny\rfloor) \stackrel{y}{\to} r_{p-1}(0) + a^{-1}\sum_{i=1}^{p}\lambda_i\bigl((u_i+y)^\gamma-u_i^\gamma\bigr),\qquad n\to\infty,$$

where $B_n(t)$ is defined by (5.10) and (5.11). Its counterpart (5.9) was proven in the case $m_\chi<\infty$ according to flow chart (5.6). In the rest of the proof, we follow the same flow chart and comment on the necessary changes in the case $m_\chi=\infty$.

The counterparts of (5.14), (5.17), and (5.15) in the case $m_\chi=\infty$ are verified in a way similar to the case $m_\chi<\infty$, now using Proposition 2.3. The counterpart of (5.16) takes the form

$$n\sum_{t=1}^{\lfloor ny\rfloor} D_n(\lfloor ny\rfloor-t)\,U(t) \stackrel{y}{\to} a^{-1}\sum_{i=1}^{p}\lambda_i\bigl((u_i+y)^\gamma-u_i^\gamma\bigr),\qquad n\to\infty,$$

as Proposition 2.3 yields the following counterpart of (5.18):

$$n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}(\lfloor ny\rfloor-t))\,U(t) \stackrel{y}{\to} a^{-1}\sum_{i=1}^{p}\lambda_i\bigl((u_i+y)^\gamma-u_i^\gamma\bigr),\qquad n\to\infty.$$

To verify (5.19) in the case $m_\chi=\infty$, we check that

$$(5.22)\qquad n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}^2(t)) \stackrel{y}{\to} 0,\qquad n\to\infty,$$

by putting to use condition (5.2) to handle the terms

$$n\sum_{i=1}^{p}\sum_{t=1}^{\lfloor ny\rfloor}\lambda_{n,i}^2\,\mathbf{E}(\chi^2(nu_i+t)) + 2n\sum_{i=1}^{p}\sum_{j=i+1}^{p}\sum_{t=1}^{\lfloor ny\rfloor}\lambda_{n,i}\lambda_{n,j}\,\mathbf{E}\bigl(\chi(nu_i+t)\chi(nu_j+t)\bigr),$$

after applying the Cauchy–Schwarz inequality for expectations,

$$\mathbf{E}\bigl(\chi(nu_i+t)\chi(nu_j+t)\bigr) \le \sqrt{\mathbf{E}(\chi^2(nu_i+t))}\,\sqrt{\mathbf{E}(\chi^2(nu_j+t))}.$$

To prove the counterpart of (5.20) in the case $m_\chi=\infty$, we use a sequence of upper bounds

$$n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\Bigl(e_1\psi_{n,p}(t)\,e_1\Bigl(\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\Bigr)\Bigr) \le n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(\psi_{n,p}(t)\mathbf{1}_{\{N>n\epsilon\}}\bigr) + n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\Bigl(\psi_{n,p}(t)\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\,\mathbf{1}_{\{N\le n\epsilon\}}\Bigr)$$
$$\le n\sum_{t=1}^{\lfloor ny\rfloor}\sqrt{\mathbf{E}(\psi_{n,p}^2(t))}\,\sqrt{\mathbf{P}(N>n\epsilon)} + \sup_{t\le ny}\bigl(n\Lambda_{n,p}(t)\bigr)\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(\psi_{n,p}(t)\,N\mathbf{1}_{\{N\le n\epsilon\}}\bigr) \le C_1\epsilon^{-1}\sum_{t=1}^{\lfloor ny\rfloor}\sqrt{\mathbf{E}(\psi_{n,p}^2(t))} + C_2\,n\epsilon\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}(t)),$$

where we applied the Cauchy–Schwarz and Markov inequalities together with (5.17). By the Cauchy–Schwarz inequality for the dot product,

$$\Bigl(\sum_{t=1}^{\lfloor ny\rfloor} 1\cdot\sqrt{\mathbf{E}(\psi_{n,p}^2(t))}\Bigr)^2 \le \lfloor ny\rfloor\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}^2(t)),$$

which together with (5.22) yields $\sum_{t=1}^{\lfloor ny\rfloor}\sqrt{\mathbf{E}(\psi_{n,p}^2(t))} \stackrel{y}{\to} 0$ as $n\to\infty$. On the other hand, in view of Proposition 2.3, the upper bound

$$n\epsilon\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}(t)) < \epsilon\,C(y_1),\qquad n\ge 1,\ 0\le y\le y_1,$$

holds for an arbitrary $\epsilon>0$. Sending $\epsilon\to 0$ ends the proof of (5.20) and thereby of Theorem 2.

5.4 Proof of Theorem 3

Lemma 5.1

Put

$$H_{p,q}(\bar u,\bar\lambda) := -\ln \mathbf{E}_1\Bigl(\exp\Bigl\{-\sum_{i=1}^{p}\sum_{j=1}^{q}\lambda_{ij}\,\xi_{\gamma_j}(u_i)\Bigr\}\Bigr),$$

assuming

$$(5.23)\qquad 0 = \gamma_1 = \dots = \gamma_s < \gamma_{s+1} \le \dots \le \gamma_q,\qquad 0\le s\le q.$$

Then, for $u_1 > \dots > u_p = 0$, the following integral equation holds:

$$(5.24)\qquad H_{p,q}(\bar u+y,\bar\lambda) = H_{p-1,q}(\bar u,\bar\lambda) + F_{p,q}(y) - b\int_0^y H_{p,q}^2(\bar u+v,\bar\lambda)\,dv,$$
$$F_{p,q}(y) := \lambda_{p1}+\dots+\lambda_{ps} + \sum_{i=1}^{p}\sum_{j=s+1}^{q}\lambda_{ij}\bigl((u_i+y)^{\gamma_j}-u_i^{\gamma_j}\bigr).$$

Proof

The lemma is proven similarly to Lemma 4.5. ∎

Theorem 3 is obtained by combining the proofs of Theorems 1 and 2. The aim is to prove (5.5) with

$$\psi_{n,p}(t) := \sum_{i=1}^{p}\sum_{j=1}^{q}\lambda_{n,ij}\,\chi_j(nu_i+t),\qquad \lambda_{n,ij} := \lambda_{ij}\,n^{-1-\gamma_j}L_j^{-1}(n),\qquad r_p(y) := H_{p,q}\bigl(a^{-1}(\bar u+y),\,a^{\gamma_1-1}\bar\lambda_1,\dots,a^{\gamma_q-1}\bar\lambda_q\bigr),$$

assuming $u_1 > \dots > u_{p-1} > u_p = 0$ and $\lambda_{ij}\ge 0$. Without loss of generality, we assume (5.23) and that, for some $0\le s'\le s$,

$$m_{\chi_j}<\infty,\ j=1,\dots,s',\qquad m_{\chi_j}=\infty,\ j=s'+1,\dots,q.$$

According to (5.24), the limit function in (5.5) satisfies the integral equation

$$r_p(y) = r_{p-1}(0) + a^{-1}F_{p,q}(y) - ba^{-1}\int_0^y r_p^2(v)\,dv.$$

Therefore, to apply Proposition 3.5, we have to prove, for the updated version of (5.10), that

$$nB_n(\lfloor ny\rfloor) \stackrel{y}{\to} r_{p-1}(0) + a^{-1}F_{p,q}(y),\qquad n\to\infty,$$

which, once again, is done according to flow chart (5.6). Even in this more general setting, the counterparts of (5.14) and (5.15) are valid, and the task boils down to verifying the counterpart of (5.16),

$$n\sum_{t=1}^{\lfloor ny\rfloor} D_n(\lfloor ny\rfloor-t)\,U(t) \stackrel{y}{\to} a^{-1}F_{p,q}(y),\qquad n\to\infty,$$

where the limit is obtained using Proposition 2.3 for the counterpart of (5.18),

$$n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}(\lfloor ny\rfloor-t))\,U(t) \stackrel{y}{\to} a^{-1}F_{p,q}(y),\qquad n\to\infty.$$

It remains to verify the counterparts of (5.19), (5.20).

Proof of (5.19)

Observe that $\psi_{n,p}(t) = \psi_{n,p}'(t) + \psi_{n,p}''(t)$, where

$$\psi_{n,p}'(t) := \sum_{j=1}^{s'}\sum_{i=1}^{p}\lambda_{n,ij}\,\chi_j(nu_i+t),\qquad \psi_{n,p}''(t) := \sum_{j=s'+1}^{q}\sum_{i=1}^{p}\lambda_{n,ij}\,\chi_j(nu_i+t).$$

Using (1.9), we can split the left-hand side of (5.19) into the sum of three terms,

$$n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(e_2\psi_{n,p}(t)) = n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(e_2\psi_{n,p}'(t)) + n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(e_2\psi_{n,p}''(t)) + n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(e_1\psi_{n,p}'(t)\,e_1\psi_{n,p}''(t)\bigr).$$

The first and the second terms are handled using the arguments of the proofs of Theorems 1 and 2 respectively.

The third term requires special attention. It is estimated from above by

$$n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(e_1\psi_{n,p}'(t)\,e_1\psi_{n,p}''(t)\bigr) \le n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\Bigl(\Bigl(e_1\sum_{j=1}^{s'}\lambda_{n,pj}\chi_j(t) + e_1\psi_{n,p-1}'(t)\Bigr)\,e_1\psi_{n,p}''(t)\Bigr) \le C\sum_{j=1}^{s'}\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(\chi_j(t)\,e_1\psi_{n,p}''(t)\bigr) + n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p-1}'(t)).$$

The last term is tackled in a way similar to (5.21), and it remains to show that, for each $j\le s'$,

$$\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(\chi_j(t)\,e_1\psi_{n,p}''(t)\bigr) \stackrel{y}{\to} 0,\qquad n\to\infty.$$

To this end, observe that, for an arbitrary $k\ge 1$,

$$\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\bigl(\chi_j(t)\,e_1\psi_{n,p}''(t)\bigr) \le k\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}(\psi_{n,p}''(t)) + \sum_{t=1}^{\infty}\mathbf{E}\bigl(\chi_j(t)\mathbf{1}_{\{\chi_j(t)>k\}}\bigr).$$

The first term is taken care of by (5.21), while the second term vanishes as k since m χ j < . ∎

Proof of (5.20)

Using $\psi_{n,p}(t) = \psi_{n,p}'(t)+\psi_{n,p}''(t)$, we get $e_1\psi_{n,p}(t) \le e_1\psi_{n,p}'(t) + e_1\psi_{n,p}''(t)$, which allows us to replace (5.20) by the following two relations:

$$n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\Bigl(e_1\psi_{n,p}'(t)\,e_1\Bigl(\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\Bigr)\Bigr) \stackrel{y}{\to} 0,\qquad n\to\infty,$$
$$n\sum_{t=1}^{\lfloor ny\rfloor}\mathbf{E}\Bigl(e_1\psi_{n,p}''(t)\,e_1\Bigl(\sum_{j=1}^{\infty}\Lambda_{n,p}(t-j)\nu(j)\Bigr)\Bigr) \stackrel{y}{\to} 0,\qquad n\to\infty.$$

The first relation is proven in the same way as (5.20) was proven for Theorem 1, and the second relation is proven in the same way as (5.20) was proven for Theorem 2. ∎

5.5 Proof of Theorem 4

Adapting the setting of Theorem 4 to Theorem 3, we treat the process $(Z_t^1,\dots,Z_t^q)$ as a vector of population counts for a single-type GW-process. This is achieved by focusing on the type 1 individuals and introducing 𝑞 individual scores for a generic individual of type 1 born at time 0 by setting

  • $\chi_1(t) := \mathbf{1}_{\{t=0\}}$,

  • $\chi_j(t) :=$ the number of descendants of the generic individual which (a) have no other intermediate ancestors of type 1, (b) are born at time 𝑡, and (c) have type 𝑗,

for $j=2,\dots,q$. Having this, our task is to check that conditions (2.6) and (5.2) hold with $\gamma := j-1$ and $L(t) := \frac{1}{(j-1)!}\,m_{1,2}\cdots m_{j-1,j}$ for all $t\ge 1$ and $j=2,\dots,q$.

To check condition (2.6) with $\chi(\cdot) := \chi_j(\cdot)$ for a given $j=2,\dots,q$, we use the representation

$$(5.25)\qquad \chi_j(t+1) = \sum_{i=2}^{j}\sum_{k=1}^{N_{1i}} Z_t^i(j,k),$$

where $Z_t^i(j,k) \stackrel{d}{=} Z_t^i(j)$ stands for the number of type 𝑗 individuals born at time $t+1$ and descending from a type 𝑖 individual born at time 1. This gives

$$\mathbf{E}(\chi_j(t+1)) = \sum_{i=2}^{j} m_{1i}\,M_{ij}(t),\qquad M_{ij}(t) := \mathbf{E}\bigl(Z_t^i(j)\mid Z_0^i=1\bigr).$$

Furthermore, due to the decomposable branching property, we have

$$(5.26)\qquad Z_{t+1}^i(j) = \sum_{l=i}^{j}\sum_{k=1}^{N_{il}} Z_t^l(j,k),$$

implying the recursion

$$M_{ij}(t+1) = \sum_{l=i}^{j} m_{il}\,M_{lj}(t) = M_{ij}(t) + \sum_{l=i+1}^{j-1} m_{il}\,M_{lj}(t) + m_{ij}.$$

Putting here $j=i+1$, we get $M_{i,i+1}(t) = t\,m_{i,i+1}$. From

$$M_{i,i+2}(t+1) = M_{i,i+2}(t) + m_{i,i+1}M_{i+1,i+2}(t) + m_{i,i+2} = M_{i,i+2}(t) + t\,m_{i,i+1}m_{i+1,i+2} + m_{i,i+2},$$

we find $M_{i,i+2}(t) \sim \tfrac{1}{2}t^2\,m_{i,i+1}m_{i+1,i+2}$ as $t\to\infty$. Thus, by iteration, we derive

$$M_{ij}(t) \sim \frac{1}{(j-i)!}\,t^{j-i}\,m_{i,i+1}\cdots m_{j-1,j},\qquad t\to\infty,$$

which allows us to conclude that condition (2.6) holds in the desired form, because

$$\mathbf{E}(\chi_j(t)) \sim \frac{1}{(j-2)!}\,t^{j-2}\,m_{1,2}\cdots m_{j-1,j},\qquad t\to\infty.$$
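The recursion for $M_{ij}(t)$ and the asymptotics just derived are easy to verify numerically. The sketch below uses hypothetical mean values, including a nonzero $m_{2,4}$, to illustrate that only the near-diagonal means $m_{l,l+1}$ enter the leading term:

```python
from math import factorial

# Hypothetical means, including a nonzero m_{2,4} that does not affect the limit.
q = 4
m = {(i, i): 1.0 for i in range(2, q + 1)}
m.update({(2, 3): 2.0, (3, 4): 3.0, (2, 4): 5.0})

def M(i, j, t):
    """M_ij(t) via the recursion M_ij(t+1) = sum_{l=i}^{j} m_il * M_lj(t),
    with M_ij(0) = 1 if i == j and 0 otherwise."""
    vals = {l: 1.0 if l == j else 0.0 for l in range(i, j + 1)}
    for _ in range(t):
        vals = {l: sum(m.get((l, r), 0.0) * vals[r] for r in range(l, j + 1))
                for l in range(i, j + 1)}
    return vals[i]

t = 2000
ratio = M(2, 4, t) * factorial(4 - 2) / t ** (4 - 2)
print(ratio)  # tends to m_{2,3} * m_{3,4} = 6, regardless of m_{2,4}
```

Here $M_{2,4}(t) = 3t(t-1) + 5t$, so the direct $2\to 4$ mean only contributes a lower-order term, in line with the remark after Theorem 4.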

Finally, to verify condition (5.2) with $\gamma = j-1$, it suffices to show that

$$(5.27)\qquad \mathbf{E}(\chi_j^2(t)) \le C\,t^{2j-3},\qquad j=2,\dots,q,$$

using the following corollary of (5.25):

$$\mathbf{E}(\chi_j^2(t+1)) = \mathbf{E}\Bigl(\Bigl(\sum_{i=2}^{j}\sum_{k=1}^{N_{1i}} Z_t^i(j,k)\Bigr)^2\Bigr) = \sum_{i=2}^{j} m_{1i}\,V_{ij}(t) + \sum_{i=2}^{j}\mathbf{E}\bigl(N_{1i}(N_{1i}-1)\bigr)M_{ij}^2(t) + 2\sum_{2\le i<l\le j}\mathbf{E}(N_{1i}N_{1l})\,M_{ij}(t)M_{lj}(t),$$

where $V_{ij}(t) := \mathbf{E}\bigl((Z_t^i(j))^2\mid Z_0^i=1\bigr)$. From here, relation (5.27) is obtained from

$$\sum_{i=2}^{j}\mathbf{E}\bigl(N_{1i}(N_{1i}-1)\bigr)M_{ij}^2(t) + 2\sum_{2\le i<l\le j}\mathbf{E}(N_{1i}N_{1l})\,M_{ij}(t)M_{lj}(t) \le C_1\sum_{2\le i\le l\le j} t^{j-i}\,t^{j-l} \le C_2\,t^{2j-4}$$

and the upper bound $V_{ij}(t) \le C\,t^{2j-2i+1}$, derived next. Using (5.26) and applying similar estimates, we find

$$V_{ij}(t+1) = \mathbf{E}\Bigl(\Bigl(\sum_{l=i}^{j}\sum_{k=1}^{N_{il}} Z_t^l(j,k)\Bigr)^2\Bigr) \le \sum_{l=i}^{j} m_{il}\,V_{lj}(t) + C\,t^{2j-2i}.$$

In particular, $V_{jj}(t+1) \le V_{jj}(t) + C$ implies $V_{jj}(t) \le C\,t$. This, in turn, gives

$$V_{j-1,j}(t+1) \le V_{j-1,j}(t) + C_1\,t + C_2\,t^2,$$

and $V_{j-1,j}(t) \le C\,t^3$. Reiterating this argument, we find $V_{ij}(t) \le C\,t^{2j-2i+1}$, which ends the proof of (5.27) and of Theorem 4 as a whole.

Acknowledgements

The author is grateful to an anonymous reviewer for a close reading of the manuscript and valuable comments.


Received: 2021-07-26
Revised: 2021-10-08
Accepted: 2021-10-22
Published Online: 2021-11-06
Published in Print: 2022-01-01

© 2021 Sagitov, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
