Article Open Access

Regularity of models associated with Markov jump processes

Wissem Jedidi
Published/Copyright: September 5, 2022

Abstract

We consider a jump Markov process $X = (X_t)_{t\ge 0}$ with values in a state space $(E, \mathcal{E})$. We suppose that the corresponding infinitesimal generator $\pi_\theta(x,dy)$, $x\in E$, and hence the law $\mathbb{P}_{x,\theta}$ of $X$, depend on a parameter $\theta\in\Theta$. We prove that several models (filtered or not) associated with $X$ are linked by their regularity, according to a certain scheme. In particular, we show that the regularity of the model $(\pi_\theta(x,dy))_{\theta\in\Theta}$ is equivalent to the local regularity of $(\mathbb{P}_{x,\theta})_{\theta\in\Theta}$.

MSC 2010: 65C20; 62M20

1 Introduction and main results

Jump Markov processes have found applications in Bayesian statistics, chemistry, economics, information theory, finance, physics, population dynamics, speech processing, signal processing, statistical mechanics, traffic modeling, thermodynamics, and many other fields [1]. Regularity plays a significant role in classical asymptotic statistics for parametric statistical models of jump Markov processes; see [2,3,4] for recent developments. Asymptotic normality or Bernstein-von Mises-type theorems impose several regularity conditions so that their results hold rigorously. In this article, we focus on the regularity conditions of several statistical models associated with a jump Markov process $X$ with values in an arbitrary state space $E$ endowed with a $\sigma$-field $\mathcal{E}$. Let $\Omega$ be the canonical space of piecewise constant functions $\omega : \mathbb{R}_+ \to E$, right continuous for the discrete topology. Let $X = (X_t)_{t\ge 0}$ be the canonical process, $(\mathcal{F}_t)_{t\ge 0}$ the canonical filtration, and $\mathcal{F} = \bigvee_{t\ge 0}\mathcal{F}_t$. Let $T_0 = 0$ and $(T_n)_{n\ge 0}$ be the sequence of jump times of $X$, which are almost surely increasing to $\infty$. To each $\theta\in\Theta\subset\mathbb{R}^d$ and $x\in E$, we associate

$$\mu_\theta : E \to (0,\infty), \ \text{an $\mathcal{E}$-measurable function}, \qquad Q_\theta(x,dy), \ \text{a Markov kernel (also called a transition probability) on } E.$$

We assume that, under $\mathbb{P}_{x,\theta}$, the process $(X_t)_{t\ge 0}$ is Markovian, starts from $x\in E$, is non-exploding, and admits the infinitesimal generator

$$\pi_\theta(x,dy) = \mu_\theta(x)\, Q_\theta(x,dy).$$

The existence of the probabilities $\mathbb{P}_{x,\theta}$ is guaranteed, for instance, by the boundedness of the function $\mu_\theta$. The following facts clarify our focus on the different statistical models that will be presented later on:

  • under $\mathbb{P}_{x,\theta}$, and conditionally on $\mathcal{F}_{T_{n-1}}$, the distribution of $T_n - T_{n-1}$ is exponential with parameter $\mu_\theta(X_{T_{n-1}})$;

  • $Q_\theta(x,dy) = \mathbb{P}_{x,\theta}(X_{T_1}\in dy)$ is the transition probability of the embedded Markov chain $(X_{T_n})_{n\ge 0}$;

  • $\bar Q_\theta^k(x,dy,dt)$, $k\in\mathbb{N}$, is the distribution of $(X_{T_k}, T_k)$ under $\mathbb{P}_{x,\theta}$;

  • The associated sub-Markovian transition kernels $(P_t^\theta)_{t\ge 0}$ satisfy the backward Kolmogorov equations:

    $$(1)\qquad \frac{\partial}{\partial t} P_t^\theta(x,A) = \int_E \big( P_t^\theta(y,A) - P_t^\theta(x,A) \big)\, \pi_\theta(x,dy), \qquad t\ge 0,\ x\in E,\ A\in\mathcal{E}.$$

    The Markov process $X$ is simple if $\pi_\theta(x,dy)$ is a Markov kernel, i.e., for every $x\in E$, $\pi_\theta(x,dy)$ is a probability measure on $(E,\mathcal{E})$. In this case, the transition functions are also Markov and satisfy the Chapman-Kolmogorov equation

    $$P_{s+t}^\theta(x,A) = \int_E P_t^\theta(y,A)\, P_s^\theta(x,dy), \qquad x\in E,\ A\in\mathcal{E},$$

    and

    $$\mathbb{P}_{x,\theta}(X_{s+t}\in A \mid \mathcal{F}_t) = P_s^\theta(X_t, A), \qquad \mathbb{P}_{x,\theta}\text{-almost surely},$$

    cf. [5] for a more detailed account.

  • The multivariate point process associated with the process $(X_t)_{t\ge 0}$ is

    $$\lambda(\cdot, dt, dy) = \sum_{k\ge 1} \varepsilon_{(T_k, X_{T_k})}(dt, dy),$$

    and its compensator, under $\mathbb{P}_{x,\theta}$, is

    $$\nu_\theta(\cdot, dt, dy) = \pi_\theta(X_{t^-}, dy)\, 1_{\mathbb{R}_+}(t)\, dt.$$

Cf. Höpfner et al. [6] for instance. Note that in our study, we will use neither the transition functions nor the multivariate point process and its compensator. In fact, we aim to show that the regularity of each of the following statistical models is linked to that of the others, according to a certain scheme:

$$(2)\qquad (\Omega,\ \mathcal{F},\ (\mathcal{F}_t)_{t\ge 0},\ (\mathbb{P}_{x,\theta})_{\theta\in\Theta}) = \text{the filtered model associated with } (X_t)_{t\ge 0},$$

$$(3)\qquad \mathcal{E}_x = (E,\ \mathcal{E},\ (\pi_\theta(x,dy))_{\theta\in\Theta}) = \text{the model associated with the generator of } (X_t)_{t\ge 0},$$

$$(4)\qquad \mathcal{E}'_x = (E,\ \mathcal{E},\ (Q_\theta(x,dy))_{\theta\in\Theta}) = \text{the model associated with the observation of } X_{T_1},$$

$$(5)\qquad \bar{\mathcal{E}}_x^k = (E\times\mathbb{R}_+,\ \mathcal{E}\otimes\mathcal{B}(\mathbb{R}_+),\ (\bar Q_\theta^k(x,dy,dt))_{\theta\in\Theta}) = \text{the model associated with the observation of } (X_{T_k}, T_k).$$
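Before turning to regularity, it may help to keep in mind how a path of $X$ and the observations behind models (4) and (5) are generated. The following minimal sketch (Python, with an illustrative two-state choice of $\mu_\theta$ and $Q_\theta$ that is not taken from the article) simulates a trajectory under $\mathbb{P}_{x,\theta}$ using exponential holding times with parameter $\mu_\theta(x)$ and jumps drawn from $Q_\theta(x,dy)$, and records $(X_{T_1}, T_1)$:

```python
import numpy as np

# Hypothetical two-state example (E = {0, 1}); mu_theta and Q_theta below are
# illustrative choices only, not the ones of the article.
def mu_theta(x, theta):
    # jump intensity at state x, depending smoothly on theta
    return 1.0 + 0.5 * x + theta * (1.0 if x == 0 else -0.5)

def Q_theta(x, theta, rng):
    # embedded-chain transition kernel: with two states we simply jump to the
    # other state (a theta-independent Q, kept trivial for illustration)
    return 1 - x

def simulate_path(x0, theta, t_max, rng):
    """Simulate the jump times T_k and states X_{T_k} up to t_max under P_{x0,theta}."""
    times, states = [0.0], [x0]
    t, x = 0.0, x0
    while True:
        t += rng.exponential(1.0 / mu_theta(x, theta))  # holding time ~ Exp(mu_theta(x))
        if t > t_max:
            break
        x = Q_theta(x, theta, rng)                      # jump target ~ Q_theta(x, dy)
        times.append(t)
        states.append(x)
    return times, states

rng = np.random.default_rng(0)
times, states = simulate_path(x0=0, theta=0.1, t_max=10.0, rng=rng)
# (X_{T_1}, T_1): the observation underlying the one-jump models (4) and (5)
print("first jump:", (states[1], times[1]) if len(times) > 1 else None)
```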

The model $\mathcal{E}_x$ is not a proper statistical model, since $\pi_\theta(x,dy)$ is not a probability measure. Nevertheless, the extension of the notion of regularity to models associated with families of finite positive measures is feasible and is described as follows. Let $(R_\theta)_{\theta\in\Theta}$ be a family of finite positive measures on $(E,\mathcal{E})$. For $\theta,\xi\in\Theta$, we denote by $\Pi^{\theta,\xi}$ a measure that dominates $R_0$, $R_\theta$, and $R_\xi$, by $z^{\theta,\xi}$ and $z^{\xi,\theta}$ the Radon-Nikodym derivatives of $R_\theta$ and $R_\xi$, respectively, with respect to $\Pi^{\theta,\xi}$, and by $Z^{\theta,\xi}$ that of $R_\theta$ with respect to $R_\xi$. The Lebesgue decomposition of $R_\theta$ with respect to $R_\xi$ is given by the pair $(N^{\theta,\xi}, Z^{\theta,\xi})$,

$$N^{\theta,\xi} = \{ u : z^{\xi,\theta}(u) = 0 \}, \qquad Z^{\theta,\xi} = \begin{cases} \dfrac{z^{\theta,\xi}}{z^{\xi,\theta}}, & \text{outside } N^{\theta,\xi},\\[4pt] 0, & \text{on } N^{\theta,\xi}. \end{cases}$$

We start by recalling the notion of “error functions” which was introduced in [6] as follows.

Definition 1

A function $f : [0,\infty)\to[0,\infty)$ is called an error function if $\lim_{u\downarrow 0} f(u) = 0$. More generally, an error function is any positive function $f : E\times(0,\infty)\to[0,\infty)$ such that

$$f(\cdot,u) \text{ is } \mathcal{E}\text{-measurable for every } u>0, \qquad\text{and}\qquad \lim_{u\downarrow 0} f(x,u) = 0 \text{ for every } x\in E.$$

Definition 2

(Regularity of non-filtered models) The model $(E,\mathcal{E},(R_\theta)_{\theta\in\Theta})$ is regular at $\theta=0$ if the random function

$$\Theta \to L^2(R_0), \qquad \theta \mapsto \sqrt{Z^{\theta,0}},$$

is differentiable at $\theta=0$, i.e., there exist a random vector $V = (V^i)_{1\le i\le d}$ and an error function $f : [0,\infty)\to[0,\infty)$ such that

$$(6)\qquad R_\theta(N^{\theta,0}) + \Big\| \sqrt{Z^{\theta,0}} - 1 - \tfrac12\, V^\top\theta \Big\|_{L^2(R_0)}^2 \le |\theta|^2\, f(|\theta|).$$

Note that if the model $(E,\mathcal{E},(R_\theta)_{\theta\in\Theta})$ is regular, then $V$ is necessarily $R_0$-square-integrable. Furthermore, if $(R_\theta)_{\theta\in\Theta}$ is a family of probability measures, then $E_{R_0}(V) = 0$. The Hellinger integral of order $\tfrac12$ between the measures $R_\theta$ and $R_\xi$ is defined by

$$H^{\theta,\xi} \coloneqq \Pi^{\theta,\xi}\big( \sqrt{z^{\theta,\xi}\, z^{\xi,\theta}} \big)$$

and is independent of the dominating measure $\Pi^{\theta,\xi}$. The regularity of the model $(E,\mathcal{E},(R_\theta)_{\theta\in\Theta})$ is equivalent to each of the following two assertions:

  1. There exist an error function $f_1$ and a random vector $V$ (the same as before) such that

    $$(7)\qquad \Big\| \sqrt{z^{\theta,0}} - \sqrt{z^{0,\theta}} - \tfrac12 \sqrt{z^{0,\theta}}\; \theta^\top V \Big\|_{L^2(\Pi^{\theta,0})} \le |\theta|\, f_1(|\theta|).$$

  2. There exist an error function $f_2$ and a matrix $I = [I^{ij}]_{1\le i,j\le d}$ such that

    $$\Big| H^{0,0} + H^{\theta,\xi} - H^{0,\theta} - H^{0,\xi} - \tfrac14\, \theta^\top I\, \xi \Big| \le |\theta|\,|\xi|\, f_2(|\theta|\vee|\xi|).$$

    The matrix $I$ is positive definite and is called the Fisher information matrix of the model at $\theta=0$. It is linked to the vector $V$ by

    $$I^{ij} = R_0(V^i V^j),$$

    cf. [6,7].
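To see assertion 2 at work on the simplest possible example, consider the Gaussian location family $N(\theta,1)$ ($d=1$), for which $I=1$ and $H^{\theta,\xi} = e^{-(\theta-\xi)^2/8}$ in closed form; this example is only illustrative and lies outside the jump-process setting of the article. The sketch below checks numerically that the quantity inside the absolute value is indeed $o(|\theta|\,|\xi|)$:

```python
import numpy as np

def hellinger_gaussian(theta, xi):
    # Hellinger integral of order 1/2 between N(theta,1) and N(xi,1):
    # H = int sqrt(p_theta * p_xi) = exp(-(theta - xi)**2 / 8)
    return np.exp(-(theta - xi) ** 2 / 8.0)

I = 1.0  # Fisher information of N(theta, 1) at theta = 0

for eps in [0.5, 0.1, 0.02]:
    theta, xi = eps, -0.7 * eps
    remainder = (hellinger_gaussian(0, 0) + hellinger_gaussian(theta, xi)
                 - hellinger_gaussian(0, theta) - hellinger_gaussian(0, xi)
                 - 0.25 * theta * I * xi)
    # the ratio below should tend to 0 as eps -> 0, i.e. the remainder is o(|theta||xi|)
    print(eps, remainder / (abs(theta) * abs(xi)))
```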

Let $(\Omega,\mathcal{F})$ be a sample space endowed with a filtration $(\mathcal{F}_t)_{t\ge 0}$ and a family of probability measures $(P_\theta)_{\theta\in\Theta}$ coinciding on $\mathcal{F}_0$. The regularity of the filtered statistical model

$$(8)\qquad (\Omega,\ \mathcal{F},\ (\mathcal{F}_t)_{t\ge 0},\ (P_\theta)_{\theta\in\Theta})$$

mimics the one in Definition 2 and is expressed in terms of likelihood processes [8,7]. For a clear presentation, we need to introduce the likelihood process of $P_\theta$ with respect to $P_\xi$, $\theta,\xi\in\Theta$, defined in Jacod and Shiryaev's book [9] by

$$Z_t^{\theta,\xi} = \frac{d(P_\theta|_{\mathcal{F}_t})}{d(P_\xi|_{\mathcal{F}_t})}, \qquad t\ge 0.$$

The process $Z^{\theta,\xi} = (Z_t^{\theta,\xi})_{t\ge 0}$ is a positive $(P_\xi,\mathcal{F}_t)$-supermartingale, and it is a martingale if

$$P_\theta \overset{\mathrm{loc}}{\ll} P_\xi \quad \big(\text{i.e., if } P_\theta|_{\mathcal{F}_t} \ll P_\xi|_{\mathcal{F}_t} \text{ for all } t\ge 0\big).$$

For any probability measure $K^{\theta,\xi}$ locally dominating $P_\theta$ and $P_\xi$, the $(K^{\theta,\xi},\mathcal{F}_t)$-martingales

$$z_t^{\theta,\xi} = \frac{d(P_\theta|_{\mathcal{F}_t})}{d(K^{\theta,\xi}|_{\mathcal{F}_t})}, \qquad z_t^{\xi,\theta} = \frac{d(P_\xi|_{\mathcal{F}_t})}{d(K^{\theta,\xi}|_{\mathcal{F}_t})},$$

and the stopping times

$$\tau^{\theta,\xi} = \inf\{ t\ge 0 : z_t^{\theta,\xi} = 0 \}, \qquad \tau^{\xi,\theta} = \inf\{ t\ge 0 : z_t^{\xi,\theta} = 0 \}$$

provide the following version of $Z^{\theta,\xi}$:

$$Z_t^{\theta,\xi} = \begin{cases} \dfrac{z_t^{\theta,\xi}}{z_t^{\xi,\theta}}, & \text{if } t < \tau^{\theta,\xi}\wedge\tau^{\xi,\theta},\\[4pt] 0, & \text{if } t \ge \tau^{\theta,\xi}\wedge\tau^{\xi,\theta}. \end{cases}$$

As for non-filtered models, we have the following definition.

Definition 3

(Regularity of filtered models) Let $T$ be a stopping time relative to $(\mathcal{F}_t)_{t\ge 0}$. The model $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},(P_\theta)_{\theta\in\Theta})$ is said to be regular (or differentiable) at time $T$ and at $\theta=0$ if the model $(\Omega,\mathcal{F}_T,(P_\theta)_{\theta\in\Theta})$ is regular in the sense of Definition 2. This means that there exist an $\mathcal{F}_T$-measurable, $P_0$-square-integrable, and centered random vector $V_T = [V_T^i]_{1\le i\le d}$ and two error functions $f_{1,T}$, $f_{2,T}$ such that

$$(9)\qquad E_{P_0}\big[ 1 - Z_T^{\theta,0} \big] \le |\theta|^2 f_{1,T}(|\theta|)$$

and

$$(10)\qquad E_{P_0}\Big[ \Big( \sqrt{Z_T^{\theta,0}} - 1 - \tfrac12\, \theta^\top V_T \Big)^2 \Big] \le |\theta|^2 f_{2,T}(|\theta|).$$

As in Definition 2, and according to [7, point 3.12], the regularity of the model is equivalent to the existence of a positive definite $d\times d$ matrix $J_T$ and of an error function $f_T$ such that

$$\Big| H_T^{0,0} + H_T^{\theta,\xi} - H_T^{0,\theta} - H_T^{0,\xi} - \tfrac14\, \theta^\top J_T\, \xi \Big| \le |\theta|\,|\xi|\, f_T(|\theta|\vee|\xi|),$$

where

$$H_T^{\theta,\xi} = E_{K^{\theta,\xi}}\big[ \sqrt{z_T^{\theta,\xi}\, z_T^{\xi,\theta}} \big] = E_{P_0}\big[ \sqrt{Z_T^{\theta,0}\, Z_T^{\xi,0}}\; 1_{(T < \tau^\theta\wedge\tau^\xi)} \big]$$

is the Hellinger integral of order $\tfrac12$ at time $T$, which is independent of the choice of the dominating probability measure. The Fisher information matrix of the model is then

$$J_T = \big[ E_{P_0}[V_T^i V_T^j] \big]_{1\le i,j\le d}.$$

It is worth noting that the regularity at a time $T$ implies the regularity at any stopping time $S\le T$. In particular, if the regularity holds along a sequence $S_p$, $p\in\mathbb{N}$, increasing to infinity, then there exists a locally square-integrable local martingale $(V_t)_{t\ge 0}$, null at zero, such that if $T\le S_p$ for some $p$, then $V_T$ is a version of the random variable in (10). In particular, if (9) and (10) are satisfied for all $t\ge 0$, then $(V_t)_{t\ge 0}$ is a square-integrable martingale, null at 0, cf. [7, Corollary 3.16].

We are now able to introduce the notion of local regularity, which is less restrictive than the preceding one.

Definition 4

(Local regularity of filtered models) A sequence $(S_p)_{p\in\mathbb{N}}$ of stopping times is called a localizing sequence if it is $P_0$-almost surely increasing to $\infty$. A localizing family is a family of pairs $(S_p, S_{n,p})_{p\in\mathbb{N},\, n\ge 1}$, where $(S_p)_{p\in\mathbb{N}}$ is a localizing sequence and $(S_{n,p})_{n\ge 1}$ is a sequence of stopping times satisfying

$$(11)\qquad S_{n,p} \le S_p \qquad\text{and}\qquad \lim_{n\to\infty} P_0(S_{n,p} < S_p) = 0.$$

The model (8) is said to be locally regular (or locally differentiable) at $\theta=0$ if there exists a right-continuous process with left limits $(V_t)_{t\ge 0}$, with values in $\mathbb{R}^d$, such that, for all $(\theta_n,\theta)$ satisfying

$$(12)\qquad \lim_{n\to\infty} \theta_n = 0 \qquad\text{and}\qquad \lim_{n\to\infty} \frac{\theta_n}{|\theta_n|} = \theta,$$

there exists a localizing family $(S_p, S_{n,p})_{p\in\mathbb{N},\, n\ge 1}$ satisfying

$$(13)\qquad \lim_{n\to\infty} \frac{E_{P_0}\big[ 1 - Z_{S_{n,p}}^{\theta_n,0} \big]}{|\theta_n|^2} = 0, \qquad \forall p\in\mathbb{N},$$

and

$$(14)\qquad \frac{\sqrt{Z_{t\wedge S_{n,p}}^{\theta_n,0}} - 1}{|\theta_n|} \xrightarrow{\; L^2(P_0)\;} \tfrac12\, \theta^\top V_{t\wedge S_p}, \quad\text{as } n\to+\infty, \qquad \forall p\in\mathbb{N},\ \forall t\ge 0.$$

Note that if the model is regular along a localizing sequence, then it is necessarily locally regular. By [7, Theorem 4.6], the process $(V_t)_{t\ge 0}$ is a locally square-integrable $(P_0,\mathcal{F}_t)$-local martingale, and the Fisher information process $(I_t)_{t\ge 0}$ at $\theta=0$ is defined as the predictable quadratic covariation of $(V_t)_{t\ge 0}$:

$$I_t \coloneqq [I_t^{ij}]_{1\le i,j\le d} = \big[ \langle V^i, V^j\rangle_t \big]_{1\le i,j\le d}.$$

The local regularity does not guarantee the integrability of $I$; however, it is the minimal condition we require in order to obtain the property of local asymptotic normality (LAN) for statistical models. In this case, the Fisher information quantities provide the lower bound for the variance of any estimator of the unknown parameters appearing in the models; see [10,11,12] for instance.
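For orientation, we recall in a generic form what the LAN property asserts at $\theta=0$ (the normalization $\delta_n$, the central sequence $\Delta_n$, and the limiting information $\mathcal{I}$ used below are standard LAN notation, not notation of this paper):

$$\log Z_{t_n}^{\delta_n u,\, 0} = u^\top \Delta_n - \tfrac12\, u^\top \mathcal{I}\, u + o_{P_0}(1), \qquad \Delta_n \xrightarrow{\;\mathcal{L}(P_0)\;} \mathcal{N}(0,\mathcal{I}), \qquad u\in\mathbb{R}^d,$$

so that the localized experiments converge to a Gaussian shift experiment, in which $\mathcal{I}$ governs the attainable precision of estimators of the local parameter $u$.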

According to [7, Theorem 6.2], the local regularity is equivalent to the following two conditions:

$$(15)\qquad \frac{1}{|\theta|\,|\xi|}\, \operatorname{Var}\Big\{ h^{0,0} + h^{\theta,\xi} - h^{0,\theta} - h^{0,\xi} + \tfrac14\, \theta^\top I\, \xi \Big\}_t \xrightarrow{\; P_0\;} 0, \quad\text{as } \theta,\xi\to 0, \qquad \forall t\ge 0,$$

and, for all $t\ge 0$,

$$(16)\qquad \frac{A_t^\theta}{|\theta|^2} \xrightarrow{\; P_0\;} 0, \quad\text{as } \theta\to 0,$$

where

  • $(\operatorname{Var}\{\cdot\}_t)_{t\ge 0}$ is the variation process of $\{\cdot\}$;

  • $(h_t^{\theta,\xi})_{t\ge 0}$ is a version of the Hellinger process of order $\tfrac12$ between $P_\theta$ and $P_\xi$, i.e., a predictable nondecreasing process, null at zero, such that

    $$(17)\qquad \sqrt{z^\theta z^\xi} + \big( \sqrt{z_-^\theta z_-^\xi} \big)\cdot h^{\theta,\xi} \ \text{ is a } (K^{\theta,\xi},\mathcal{F}_t)\text{-martingale};$$

  • $(A_t^\theta)_{t\ge 0}$ is the predictable nondecreasing process appearing in the Doob-Meyer decomposition of the supermartingale $(Z_t^\theta)_{t\ge 0}$. Since $P_\theta$ and $P_0$ coincide on $\mathcal{F}_0$, necessarily $Z_0^\theta = 1$, and there exists a $(P_0,\mathcal{F}_t)$-local martingale $(M_t^\theta)_{t\ge 0}$ such that

    $$Z^\theta = 1 + M^\theta - A^\theta.$$

The results that we obtain complement those of Höpfner et al. [6], who proved that if $(\pi_\theta(x,dy))_{\theta\in\Theta}$ is regular and if the process $X$ satisfies a condition of positive recurrence (resp. null recurrence), then the model $(\mathbb{P}_{x,\theta})_{\theta\in\Theta}$, localized around the parameter $\theta=0$, is locally asymptotically normal (resp. locally asymptotically mixed normal). The main result is as follows.

Theorem 5

The filtered model (2), with starting point $x$, is locally regular for all $x\in E$ if and only if $\mathcal{E}_y$ is regular for all $y\in E$.

Models (2)-(5) are described in depth in Section 2. We also provide a full scheme linking them by their regularity; see Theorems 6-8 and 10. The proofs are given in Section 3.

2 Additional regularity properties

Our notation and the computation of the Hellinger integrals and of the likelihood processes are borrowed from Höpfner [13] and Höpfner et al. [6]. For $x\in E$ and $\theta,\xi\in\Theta$, the following measures will be used in the sequel.

  1. $\Pi_x^{\theta,\xi}(dy)$ is a measure dominating $\pi_\theta(x,dy)$, $\pi_\xi(x,dy)$, and $\pi_0(x,dy)$. Thus, $\Pi_x^{\theta,\xi}(dy)$ also dominates $Q_\theta(x,dy)$, $Q_\xi(x,dy)$, and $Q_0(x,dy)$;

  2. $Q_x^{\theta,\xi}(dy,dt)$ is a transition probability on $E\times\mathbb{R}_+$ dominating $\bar Q_\theta(x,dy,dt)$, $\bar Q_\xi(x,dy,dt)$, and $\bar Q_0(x,dy,dt)$;

  3. $K_x^{\theta,\xi}$ is a probability measure locally dominating $\mathbb{P}_{x,\theta}$, $\mathbb{P}_{x,\xi}$, and $\mathbb{P}_{x,0}$;

  4. $\Pi_x^\theta(dy) = \Pi_x^{\theta,0}(dy)$, $Q_x^\theta(dy,dt) = Q_x^{\theta,0}(dy,dt)$, and $K_x^\theta = K_x^{\theta,0}$.

The Radon-Nikodym derivatives are denoted by

$$\chi^{\theta,\xi}(x,\cdot) = \frac{d\pi_\theta(x,\cdot)}{d\Pi_x^{\theta,\xi}(\cdot)}, \qquad \rho^{\theta,\xi}(x,\cdot) = \frac{dQ_\theta(x,\cdot)}{d\Pi_x^{\theta,\xi}(\cdot)}, \qquad \rho_1^{\theta,\xi}(x,\cdot,\cdot) = \frac{d\bar Q_\theta(x,\cdot,\cdot)}{dQ_x^{\theta,\xi}(\cdot,\cdot)},$$
$$\chi^\theta(x,\cdot) = \chi^{\theta,0}(x,\cdot), \qquad \rho^\theta(x,\cdot) = \rho^{\theta,0}(x,\cdot), \qquad \rho_1^\theta(x,\cdot,\cdot) = \rho_1^{\theta,0}(x,\cdot,\cdot),$$
$$\chi^0(x,\cdot) = \chi^{0,\theta}(x,\cdot), \qquad \rho^0(x,\cdot) = \rho^{0,\theta}(x,\cdot), \qquad \rho_1^0(x,\cdot,\cdot) = \rho_1^{0,\theta}(x,\cdot,\cdot).$$

If we choose

$$\Pi_x^{\theta,\xi}(dy) = \pi_\theta(x,dy) + \pi_\xi(x,dy) + \pi_0(x,dy),$$

and if $K_x^{\theta,\xi}$ is the probability under which the canonical process $(X_t)_{t\ge 0}$ starts from $x$ and has the infinitesimal generator $\Pi_x^{\theta,\xi}(dy)$, then we have

$$\mathbb{P}_{x,0} \overset{\mathrm{loc}}{\ll} K_x^{\theta,\xi}, \qquad \mathbb{P}_{x,\theta} \overset{\mathrm{loc}}{\ll} K_x^{\theta,\xi}, \qquad\text{and}\qquad \mathbb{P}_{x,\xi} \overset{\mathrm{loc}}{\ll} K_x^{\theta,\xi}.$$

With the convention $\prod_{j=1}^{0} = 1$, a version of the likelihood process of $\mathbb{P}_{x,\theta}$ with respect to $K_x^{\theta,\xi}$, relative to $(\mathcal{F}_t)_{t\ge 0}$, is given in [6] by

$$z_t^{\theta,\xi} = \frac{d(\mathbb{P}_{x,\theta}|_{\mathcal{F}_t})}{d(K_x^{\theta,\xi}|_{\mathcal{F}_t})} = \Bigg\{ \prod_{j\ge 1 :\, T_j\le t} \chi^{\theta,\xi}(X_{T_{j-1}}, X_{T_j}) \Bigg\} \exp\Bigg( \int_0^t\!\int_E \big( 1 - \chi^{\theta,\xi} \big)(X_s,y)\, \Pi_{X_s}^{\theta,\xi}(dy)\, ds \Bigg).$$

With the notations

$$z_t^\theta \coloneqq z_t^{\theta,0}, \qquad z_t^0 \coloneqq z_t^{0,\theta}, \qquad\text{and}\qquad \tau^\theta \coloneqq \inf\{ t\ge 0 : z_t^\theta = 0 \},$$

a version of the likelihood process of $\mathbb{P}_{x,\theta}$ relative to $\mathbb{P}_{x,0}$ and $(\mathcal{F}_t)_{t\ge 0}$ is explicitly given by

$$Z_t^\theta = \frac{z_t^\theta}{z_t^0} = \begin{cases} \exp\Big( \displaystyle\int_0^t (\mu_0 - \mu_\theta)(X_s)\, ds \Big) \displaystyle\prod_{j\ge 1 :\, T_j\le t} \frac{\chi^\theta}{\chi^0}(X_{T_{j-1}}, X_{T_j}), & \text{if } t < \tau^0\wedge\tau^\theta,\\[6pt] 0, & \text{if } t \ge \tau^0\wedge\tau^\theta. \end{cases}$$
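As a quick illustration of this formula, the following sketch (Python; a hypothetical pure-intensity example in which $Q_\theta = Q$ does not depend on $\theta$, so that $\chi^\theta/\chi^0(x,y) = \mu_\theta(x)/\mu_0(x)$) evaluates $\log Z_t^\theta$ along an observed sequence of jump times and states:

```python
import numpy as np

def log_likelihood_ratio(times, states, t, mu_theta, mu_0):
    """log Z_t^theta for a pure-intensity model (Q_theta = Q independent of theta),
    so that chi^theta/chi^0(x, y) = mu_theta(x)/mu_0(x).
    `times` and `states` list the jump times T_k and the states X_{T_k} (T_0 = 0)."""
    log_z = 0.0
    for k in range(len(times)):
        t_k = times[k]
        if t_k > t:
            break
        t_next = min(times[k + 1], t) if k + 1 < len(times) else t
        x = states[k]
        # integral term: (mu_0 - mu_theta)(X_s) over the sojourn [T_k, T_{k+1}) cut at t
        log_z += (mu_0(x) - mu_theta(x)) * (t_next - t_k)
        # jump term at T_{k+1}, evaluated at the pre-jump state, if the jump occurs before t
        if k + 1 < len(times) and times[k + 1] <= t:
            log_z += np.log(mu_theta(x) / mu_0(x))
    return log_z

# toy usage on a hand-made path with jumps at T_1 = 0.8 and T_2 = 2.3
times, states = [0.0, 0.8, 2.3], [0, 1, 0]
print(log_likelihood_ratio(times, states, t=3.0,
                           mu_theta=lambda x: 1.2 if x == 0 else 1.5,
                           mu_0=lambda x: 1.0 if x == 0 else 1.5))
```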

The Hellinger integral of order $\tfrac12$ between $\pi_\theta(x,dy)$ and $\pi_\xi(x,dy)$ is then

$$H^{\theta,\xi}(x) = \int_E \sqrt{\chi^{\theta,\xi}(x,y)\, \chi^{\xi,\theta}(x,y)}\; \Pi_x^{\theta,\xi}(dy),$$

and the Hellinger integral of order $\tfrac12$ at time $t$ between $\mathbb{P}_{x,\theta}$ and $\mathbb{P}_{x,\xi}$, relative to the filtration $(\mathcal{F}_t)_{t\ge 0}$, is expressed by

$$(18)\qquad H_t^{\theta,\xi}(x) = E_{K_x^{\theta,\xi}}\big[ \sqrt{z_t^{\theta,\xi}\, z_t^{\xi,\theta}} \big].$$

We also consider the quantities

$$\bar H^{\theta,\xi}(x) = \frac{\mu_\theta(x) + \mu_\xi(x)}{2} - H^{\theta,\xi}(x),$$

which are used to define the Hellinger process $(h_t^{\theta,\xi})_{t\ge 0}$ of order $\tfrac12$ between $\mathbb{P}_{x,\theta}$ and $\mathbb{P}_{x,\xi}$, relative to $(\mathcal{F}_t)_{t\ge 0}$. It is expressed by

$$(19)\qquad h_t^{\theta,\xi} = \int_0^t \bar H^{\theta,\xi}(X_s)\, ds.$$
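Under the simplifying assumption that $Q_\theta = Q$ does not depend on $\theta$ (an assumption made here only for illustration, not in the article), these quantities take a completely explicit form:

$$H^{\theta,\xi}(x) = \sqrt{\mu_\theta(x)\,\mu_\xi(x)}, \qquad \bar H^{\theta,\xi}(x) = \frac{\mu_\theta(x)+\mu_\xi(x)}{2} - \sqrt{\mu_\theta(x)\,\mu_\xi(x)} = \tfrac12\big( \sqrt{\mu_\theta(x)} - \sqrt{\mu_\xi(x)} \big)^2,$$

so that $h_t^{\theta,\xi} = \tfrac12\int_0^t \big( \sqrt{\mu_\theta} - \sqrt{\mu_\xi} \big)^2(X_s)\, ds$, the classical Hellinger process of a pure intensity model.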

Finally, we define the function

$$(20)\qquad g(x,\theta,\xi) \coloneqq \frac{1}{|\theta|\,|\xi|}\Big( \bar H^{0,\theta}(x) + \bar H^{0,\xi}(x) - \bar H^{\theta,\xi}(x) - \tfrac14\, \theta^\top I(x)\, \xi \Big),$$

where $I(x)$ is the Fisher information matrix of the model $\mathcal{E}_x$ at $\theta=0$, whenever it is regular. Consequently, $\bar H^{0,\theta}(x)$ is expressed by

$$(21)\qquad \bar H^{0,\theta}(x) = \tfrac18\, \theta^\top I(x)\, \theta + \tfrac12\, |\theta|^2\, g(x,\theta,\theta).$$

Observe that the function $g$ in (20) is such that the function

$$f(x,u) \coloneqq \sup_{|\theta|\vee|\xi|\le u} |g(x,\theta,\xi)|, \qquad x\in E,$$

is nondecreasing in $u$ and satisfies $|g(x,\theta,\xi)| \le f(x,|\theta|\vee|\xi|)$. Thus, $f$ is a natural candidate to be an error function.

We can now state a first technical but intuitive result.

Theorem 6

Let $x\in E$. Then the following assertions are equivalent.

  1. $\mathcal{E}_x$ is regular;

  2. $\mathcal{E}'_x$ is regular and $\theta\mapsto\mu_\theta(x)$ is differentiable at $\theta=0$;

  3. $\bar{\mathcal{E}}_x^1$ is regular;

  4. the filtered model (2), with starting point $x$, is regular at time $T_1$.

In the following three theorems, we complete our results by studying the regularity of the filtered model (2) at fixed times $t>0$ or at the jump times $T_k$, $k\in\mathbb{N}$. In this direction, we obtain only partial results, which require some additional integrability conditions.

Theorem 7

Let $t>0$. For all $x\in E$, assume the following condition.

Condition $A_t(x)$. There exist $u_t>0$, an error function $f_1$, and a measurable function $f_2 : E\to[0,\infty)$ satisfying the following:

$$\bar H^{\theta,\xi}(x) \le |\theta-\xi|^2\, f_2(x), \qquad\text{if } |\theta|,|\xi|\le u_t,$$

and

$$\int_0^t E_{K_x^\theta}\big[ f_1(X_s,u_t)^2 \big]\, ds < +\infty, \qquad \int_0^t E_{K_x^{\theta,\xi}}\big[ f_2(X_s)^2 \big]\, ds < +\infty.$$

Then the filtered model (2), with starting point $y$, is regular at time $t$, for all $y\in E$.

Theorem 8

For all $x\in E$, assume the following: the model $\mathcal{E}_x$ is regular, and

Condition $B(x)$. The error function $f$ in (7), associated with the model $\bar{\mathcal{E}}_x^1$, satisfies the following: there exists $r>0$ such that

$$Q_x^\theta[f(\cdot,r)] = \int_{E\times\mathbb{R}_+} f(y,r)\, Q_x^\theta(dy,dt) < +\infty, \qquad\text{if } |\theta|\le r.$$

Then the model $\bar{\mathcal{E}}_y^k$ is regular for all $y\in E$ and all $k\in\mathbb{N}$.

Remark 9

  1. The control provided by the first integral in condition $A_t(x)$ is exactly the condition required for $\mathcal{E}_x$ to be regular. The finiteness of the second integral will ensure the integrability conditions needed in the proof of Theorem 7.

  2. Equivalently, we could replace the error function in condition $B(x)$ by the one in (6). The integrability condition becomes

    $$\bar Q_0[f(\cdot,r)](x) = \int_{E\times\mathbb{R}_+} f(y,r)\, \bar Q_0(x,dy,dt) < +\infty,$$

    and the only difference is that we would have to check two inequalities instead of one.

We conclude with our last result.

Theorem 10

1. Let $x\in E$. If the filtered model (2), with starting point $x$, is regular at a time $t>0$, then $\mathcal{E}_x$ is regular.

2. Furthermore, if the filtered model (2), with starting point $x$, is regular at a time $t>0$ for all $x\in E$, then the filtered model (2), with starting point $y$, is locally regular for all $y\in E$.

3 Proofs of the theorems

We will sometimes use the notion of isomorphism between two statistical models. Referring to Strasser's book [14], we say that two models $\mathcal{G} = (A,\mathcal{A},(P_\theta)_{\theta\in\Theta})$ and $\mathcal{H} = (B,\mathcal{B},(Q_\theta)_{\theta\in\Theta})$ are isomorphic if each is a randomization of the other. To illustrate this notion, assume for instance that $\mathcal{G}$ and $\mathcal{H}$ are dominated by $P$ and $Q$, respectively. Then the model $\mathcal{H}$ is a randomization of $\mathcal{G}$ if there exists a Markovian operator $M : L^1(A,\mathcal{A},P) \to L^1(B,\mathcal{B},Q)$ such that

$$\frac{dQ_\theta}{dQ} = M\Big( \frac{dP_\theta}{dP} \Big), \qquad \theta\in\Theta.$$

The models $\mathcal{G}$ and $\mathcal{H}$ are randomizations of each other if they are mutually exhaustive, which is always the case in our study whenever an isomorphism holds, cf. [14, Lemma 23.5 and Theorem 24.11]. When computing expectations, these isomorphisms allow us to work, at our convenience, with either of the two likelihoods of the models $\mathcal{G}$ and $\mathcal{H}$; this is justified by the fact that the two likelihoods have the same law under the respective dominating probabilities.

Proof of Theorem 6

$(1)\Rightarrow(2)$: (a) By (7), the regularity of $\mathcal{E}_x$ at $\theta=0$ is equivalent to the existence of a random vector $V(x,\cdot)\in L^2(\pi_0(x,dy))$ and of an error function $f_\chi$ such that

$$(22)\qquad h(x,\theta) \coloneqq \int_E \Big( \sqrt{\chi^\theta(x,y)} - \sqrt{\chi^0(x,y)} - \tfrac12 \sqrt{\chi^0(x,y)}\; \theta^\top V(x,y) \Big)^2 \Pi_x^\theta(dy) \le |\theta|^2 f_\chi(x,|\theta|).$$

The latter implies

(23) E ( χ θ ( x , y ) χ 0 ( x , y ) ) 2 Π x θ ( d y ) θ 2 2 f χ ( x , θ ) + 1 2 E V ( x , z ) 2 π 0 ( x , d z ) = θ f χ ( x , θ ) and f χ is an error function .

(b) The implication "$\mathcal{E}_x$ is regular at $\theta=0$ $\Rightarrow$ $\theta\mapsto\mu_\theta(x)$ is differentiable at $\theta=0$" is shown in [6], using the fact that the differentiability of $\theta\mapsto\sqrt{\chi^\theta(x,\cdot)}$ in $L^2$ implies the differentiability of $\theta\mapsto\chi^\theta(x,\cdot)$ in $L^1$. Furthermore, the derivative at $\theta=0$ of $\theta\mapsto\mu_\theta(x)$ is

$$\int_E V(x,z)\, \pi_0(x,dz) = \mu_0(x)\int_E V(x,z)\, Q_0(x,dz),$$

which gives

$$(24)\qquad \sqrt{\frac{\mu_0(x)}{\mu_\theta(x)}} = 1 - \tfrac12\, \theta^\top\!\int_E V(x,z)\, Q_0(x,dz) + |\theta|\, F_\mu(x,\theta),$$

where

$$f_\mu(x,u) \coloneqq \sup_{|\theta|\le u} |F_\mu(x,\theta)| \quad\text{is an error function.}$$

(c) Let us define

h ( x , θ ) E ρ θ ( x , y ) ρ 0 ( x , y ) 1 2 ρ 0 ( x , y ) θ V ( x , y ) 2 Π x θ ( d y ) ,

where the function

(25) V ( x , y ) V ( x , y ) E V ( x , z ) Q 0 ( x , d z ) L 2 ( Q 0 ( x , d y ) )

satisfies

E V ( x , y ) Q 0 ( x , d y ) = 0 .

Then, we can write

h ( x , θ ) = E χ θ ( x , y ) μ θ ( x ) χ 0 ( x , y ) μ 0 ( x ) 1 2 χ 0 ( x , y ) μ 0 ( x ) θ V ( x , y ) 2 Π x θ ( d y ) ,

and use (24) and (25) to obtain

h ( x , θ ) = 1 μ 0 ( x ) E χ θ ( x , y ) χ 0 ( x , y ) 1 2 χ 0 ( x , y ) θ V ( x , y ) 1 2 θ E V ( x , z ) Q 0 ( x , d z ) { χ θ ( x , y ) χ 0 ( x , y ) } + θ F μ ( x , θ ) χ θ ( x , y ) 2 Π x θ ( d y ) 3 μ 0 ( x ) E χ θ ( x , y ) χ 0 ( x , y ) 1 2 χ 0 ( x , y ) θ V ( x , y ) 2 + 1 4 θ E V ( x , z ) Q 0 ( x , d z ) 2 × χ θ ( x , y ) χ 0 ( x , y ) 2 + θ 2 ( f μ ( x , θ ) ) 2 χ θ ( x , y ) Π x θ ( d y ) .

Finally, according to (22) and (23), we have

h ( x , θ ) 3 μ 0 ( x ) θ 2 f χ ( x , θ ) + 1 4 θ E V ( x , z ) Q 0 ( x , d z ) 2 f χ ( x , θ ) + μ θ ( x ) θ 2 ( f μ ( x , θ ) ) 2 θ 2 f ρ ( x , θ ) , where f ρ is an error function .

( 2 ) ( 1 ) : (a) Under the condition of differentiability of μ . ( x ) at 0, we obtain

(26) μ θ ( x ) = μ 0 ( x ) 1 + 1 2 θ μ 0 ( x ) μ 0 ( x ) + θ F μ ( x , θ ) ,

where

f μ ( x , u ) = sup θ u F μ ( x , θ ) is an error function .

(b) According to (7), the regularity of E x , at θ = 0 , is equivalent to the existence of a centered vector V ( x , ) L 2 ( Q 0 ( x , d y ) ) , and of an error function f ρ such that

h ( x , θ ) = E ρ θ ( x , y ) ρ 0 ( x , y ) 1 2 ρ 0 ( x , y ) θ V ( x , y ) 2 Π x θ ( d y ) θ 2 f ρ ( x , θ ) .

The vector V ( x , ) is defined by

V ( x , y ) V ( x , y ) + μ 0 ( x ) μ 0 ( x ) ,

which belongs to L 2 ( π 0 ( x , d y ) ) and satisfies

μ 0 ( x ) E V ( x , z ) π 0 ( x , d z ) .

(c) Let us define

(27) h ( x , θ ) = E χ θ ( x , y ) χ 0 ( x , y ) 1 2 χ 0 ( x , y ) θ V ( x , y ) 2 Π x θ ( d y ) .

By (26), we have

h ( x , θ ) = E 1 + 1 2 θ E V ( x , z ) Q 0 ( x , d z ) + θ F μ ( x , θ ) μ 0 ( x ) ρ θ ( x , y ) μ 0 ( x ) ρ 0 ( x , y ) 1 2 μ 0 ( x ) ρ 0 ( x , y ) θ V ( x , y ) 2 Π x θ ( d y ) = μ 0 ( x ) E ρ θ ( x , y ) ρ 0 ( x , y ) 1 2 ρ 0 ( x , y ) θ V ( x , y ) + 1 2 θ E V ( x , z ) Q 0 ( x , d z ) { ρ θ ( x , y ) ρ 0 ( x , y ) } + θ F μ ( x , θ ) ρ θ ( x , y ) 2 Π x θ ( d y ) .

With the same arguments as in ( 1 ) ( 2 ) (c), we retrieve

h ( x , θ ) 3 μ 0 ( x ) θ 2 f ρ ( x , θ ) + 1 4 θ E V ( x , z ) Q 0 ( x , d z ) 2 × 2 θ 2 f ρ ( x , θ ) + 1 2 E θ V ( x , z ) 2 Q 0 ( x , d z ) + θ 2 μ θ ( x ) f μ ( x , θ ) .

Consequently, there exists an error function f χ such that

h ( x , θ ) θ 2 f χ ( x , θ ) .

$(2)\Rightarrow(3)$: Let $x\in E$. Since

$$\bar Q_\theta(x,dy,dt) = Q_\theta(x,dy)\otimes \mu_\theta(x)\, e^{-\mu_\theta(x)t}\, 1_{\mathbb{R}_+}(t)\, dt$$

is the tensor product of two probability measures, $\bar{\mathcal{E}}_x^1$ is statistically isomorphic to the product model $\mathcal{E}'_x\times\mathcal{E}''_x$, where

$$(28)\qquad \mathcal{E}''_x = \big( \mathbb{R}_+,\ \mathcal{B}(\mathbb{R}_+),\ (\mu_\theta(x)\, e^{-\mu_\theta(x)t}\, 1_{\mathbb{R}_+}(t)\, dt)_{\theta\in\Theta} \big).$$

The differentiability of $\theta\mapsto\mu_\theta(x)$ at $\theta=0$ is equivalent to the differentiability (regularity) of the model $\mathcal{E}''_x$. The assertion is then a consequence of [15, Corollary I.7.1] in Ibragimov and Has'minskii's book.
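For concreteness, when $\theta\mapsto\mu_\theta(x)$ is differentiable at $\theta=0$ with gradient $\dot\mu_0(x)$, the exponential model $\mathcal{E}''_x$ admits the familiar score and Fisher information (a standard exponential-family computation, written here with the ad hoc notation $I''(x)$ for the information of $\mathcal{E}''_x$):

$$\frac{\partial}{\partial\theta}\bigg|_{\theta=0} \log\big( \mu_\theta(x)\, e^{-\mu_\theta(x)t} \big) = \dot\mu_0(x)\Big( \frac{1}{\mu_0(x)} - t \Big), \qquad I''(x) = E\Big[ \dot\mu_0(x)\,\dot\mu_0(x)^\top \Big( \tfrac{1}{\mu_0(x)} - T \Big)^2 \Big] = \frac{\dot\mu_0(x)\,\dot\mu_0(x)^\top}{\mu_0(x)^2}, \qquad T\sim\mathrm{Exp}(\mu_0(x)),$$

so that the information carried by $\bar{\mathcal{E}}_x^1 \cong \mathcal{E}'_x\times\mathcal{E}''_x$ is the sum of the information of the jump target and that of the waiting time.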

$(3)\Rightarrow(2)$: As in the preceding implication, observe that $\bar{\mathcal{E}}_x^1$ is statistically isomorphic to $\mathcal{E}'_x\times\mathcal{E}''_x$, and the result becomes a simple consequence of [15, Theorem I.7.2].

$(3)\Leftrightarrow(4)$: This equivalence is deduced from the fact that $\bar Q_\theta(x,dy,dt)$ is the distribution of $(X_{T_1},T_1)$ under $\mathbb{P}_{x,\theta}$, so that $\bar Q_\theta(x,dy,dt)$ is identified with $\mathbb{P}_{x,\theta}$ restricted to the $\sigma$-field $\mathcal{F}_{T_1}$. Thus, $\bar{\mathcal{E}}_x^1$ and $(\Omega,\mathcal{F}_{T_1},(\mathbb{P}_{x,\theta})_{\theta\in\Theta})$ are statistically isomorphic.□

Proof of Theorem 5

(1) For the necessity condition, we will check (15) and (16), as was done for Markov chains in [7]. For fixed $x\in E$, we choose as dominating probability $K_x^{\theta,\xi}$ the one for which the process $(X_t)_{t\ge 0}$ has the generator

$$\Pi_x^{\theta,\xi}(dy) = \pi_\theta(x,dy) + \pi_\xi(x,dy) + \pi_0(x,dy).$$

(1)(a) Using the function $g$ in (20), we have

$$h_t^{0,\theta} + h_t^{0,\xi} - h_t^{\theta,\xi} - \tfrac14 \int_0^t \theta^\top I(X_s)\, \xi\, ds = |\theta|\,|\xi| \int_0^t g(X_s,\theta,\xi)\, ds.$$

Then the convergence (15) holds if $A_t(y)$ is true for all $y\in E$; by Remark 9, the corresponding control is equivalent to the regularity of $\mathcal{E}_y$, which holds for all $y\in E$.

(1)(b) The Doob-Meyer decomposition of the supermartingale $Z^\theta$ asserts that

$$Z^\theta = 1 + M^\theta - A^\theta,$$

where $M^\theta$ is a local martingale and $A^\theta$ is a predictable nondecreasing process. Since the jump times of the process $(X_t)_{t\ge 0}$ are totally inaccessible, $Z^\theta$ is quasi-left continuous and $A^\theta$ necessarily has continuous paths, cf. [16, Theorem 14]. From the decomposition of the additive functional $\log Z^\theta$ on the event $(t<\tau^0\wedge\tau^\theta)$ into a local martingale $N^\theta$ and a process with finite variation $B^\theta$ (see [5, p. 40]), we may write $Z^\theta$ in the form

$$Z_t^\theta = \prod_{i\ge 1,\, T_i\le t} \frac{\chi^\theta}{\chi^0}(X_{T_{i-1}}, X_{T_i})\, \exp\bigg( \int_0^t (\mu_0 - \mu_\theta)(X_s)\, ds \bigg) = e^{N_t^\theta + B_t^\theta},$$

where

$$N_t^\theta = \sum_{s\le t,\, X_{s^-}\neq X_s} \log\frac{\chi^\theta}{\chi^0}(X_{s^-}, X_s) - \int_0^t\!\int_E \log\frac{\chi^\theta}{\chi^0}(X_s,y)\, \pi_0(X_s,dy)\, ds,$$

$$B_t^\theta = \int_0^t\!\int_E \Big( 1 - \frac{\mu_\theta}{\mu_0}(X_s) + \log\frac{\chi^\theta}{\chi^0}(X_s,y) \Big)\, \pi_0(X_s,dy)\, ds.$$

Applying Itô's formula to the semimartingale $Z^\theta$, we obtain

$$Z_t^\theta = 1 + \int_0^t Z_{s^-}^\theta\, dN_s^\theta + \int_0^t Z_{s^-}^\theta\, dB_s^\theta + \sum_{s\le t} Z_{s^-}^\theta\big( e^{\Delta N_s^\theta} - 1 - \Delta N_s^\theta \big) = 1 + \int_0^t Z_{s^-}^\theta\, dN_s^\theta + \int_0^t Z_{s^-}^\theta\, dB_s^\theta + \sum_{i\ge 1,\, T_i\le t} Z_{T_i^-}^\theta\Big( \frac{\chi^\theta}{\chi^0}(X_{T_{i-1}}, X_{T_i}) - 1 - \log\frac{\chi^\theta}{\chi^0}(X_{T_{i-1}}, X_{T_i}) \Big) = 1 + M_t^\theta - A_t^\theta,$$

where

$$M_t^\theta = \int_0^t Z_{s^-}^\theta\, dN_s^\theta + \int_0^t\!\int_E Z_{s^-}^\theta\Big( \frac{\chi^\theta}{\chi^0}(X_{s^-},y) - 1 - \log\frac{\chi^\theta}{\chi^0}(X_{s^-},y) \Big)\, (\lambda-\nu_0)(\cdot,ds,dy),$$

$$A_t^\theta = -\int_0^t Z_{s^-}^\theta\, dB_s^\theta - \int_0^t\!\int_E Z_{s^-}^\theta\Big( \frac{\chi^\theta}{\chi^0}(X_{s^-},y) - 1 - \log\frac{\chi^\theta}{\chi^0}(X_{s^-},y) \Big)\, \nu_0(\cdot,ds,dy).$$

Then, we may write

$$A_t^\theta = \int_0^t Z_{s^-}^\theta\, \mu_\theta(X_s) \int_E \Big( 1 - \frac{\rho^\theta}{\rho^0} \Big)(X_s,y)\, Q_0(X_s,dy)\, ds.$$

Using [7, point 7.5], we retrieve that there exists an error function $f(z,\cdot)$, $z\in E$, such that

$$0 \le \int_E \Big( 1 - \frac{\rho^\theta}{\rho^0} \Big)(z,y)\, Q_0(z,dy) \le |\theta|^2 f(z,|\theta|),$$

and therefore

$$(29)\qquad A_t^\theta \le |\theta|^2 F_t^\theta, \qquad F_t^\theta \coloneqq Y_t^\theta\, \sup_{s\le t} Z_s^\theta, \qquad Y_t^\theta \coloneqq \sum_{k\ge 0} (T_{k+1}\wedge t - T_k\wedge t)\, \mu_\theta(X_{T_k})\, f(X_{T_k},|\theta|).$$

Then, observing that $Y_t^\theta(\omega)$, $\omega\in\Omega$, is a finite sum, that $\theta\mapsto\mu_\theta(\cdot)$ is continuous at $\theta=0$, and using the fact that $f$ is an error function, we deduce that for all $t\ge 0$,

$$(30)\qquad Y_t^\theta \to 0, \quad \mathbb{P}_{x,0}\text{-a.s., as } \theta\to 0.$$

On the other hand, the Doob inequality for positive supermartingales yields

$$\mathbb{P}_{x,0}\Big( \sup_{s\le t} Z_s^\theta \ge A \Big) \le \frac{E_{x,0}[Z_0^\theta]}{A} = \frac{1}{A}, \qquad\text{for all } t\ge 0,\ A>0,\ \text{and } \theta\in\Theta.$$

We deduce that if $\theta_n$ is a sequence going to 0 and if $\varepsilon>0$, then

$$\mathbb{P}_{x,0}(F_t^{\theta_n}>\varepsilon) = \mathbb{P}_{x,0}\Big( \sup_{s\le t} Z_s^{\theta_n}\, Y_t^{\theta_n} > \varepsilon,\ \sup_{s\le t} Z_s^{\theta_n} > A \Big) + \mathbb{P}_{x,0}\Big( \sup_{s\le t} Z_s^{\theta_n}\, Y_t^{\theta_n} > \varepsilon,\ \sup_{s\le t} Z_s^{\theta_n} \le A \Big) \le \mathbb{P}_{x,0}\Big( \sup_{s\le t} Z_s^{\theta_n} > A \Big) + \mathbb{P}_{x,0}\Big( Y_t^{\theta_n} > \frac{\varepsilon}{A} \Big) \le \frac{1}{A} + \mathbb{P}_{x,0}\Big( Y_t^{\theta_n} > \frac{\varepsilon}{A} \Big).$$

Since $A$ may be chosen arbitrarily large, the latter and (30) show that, for all $t\ge 0$,

$$F_t^{\theta_n} \xrightarrow{\; \mathbb{P}_{x,0}\;} 0, \quad\text{as } n\to+\infty,$$

which, by (29), gives (16).

(2) For the sufficient condition, we write the conditions of local regularity and then express them at the time $T_1$.

(2)(a) The local regularity of the filtered model (2) at $\theta=0$ implies that there exists a $(\mathbb{P}_{x,0},\mathcal{F}_t)$-local martingale $(V_t)_{t\ge 0}$, locally square-integrable, null at zero, satisfying (14) and represented by

$$(31)\qquad V_t = \int_0^t\!\int_E \upsilon(s,y)\, (\lambda-\nu_0)(\cdot,ds,dy),$$

where the function $\upsilon : \Omega\times\mathbb{R}_+\times E \to \mathbb{R}^d$ is predictable and satisfies

$$\int_0^t\!\int_E |\upsilon(s,y)|\, (\lambda+\nu_0)(\cdot,ds,dy) < +\infty, \qquad \mathbb{P}_{x,0}\text{-a.s., for all } t\ge 0.$$

Let $(\theta_n,\theta)_n$ satisfy (12), and let $(S_p, S_{n,p})_{p\in\mathbb{N},\, n\ge 1}$ be the corresponding localizing family. By [7, Theorem 4.6], we obtain that for all $p\in\mathbb{N}$:

$$\lim_{n\to+\infty} E_{x,0}\Bigg[ \sup_{t\ge 0} \bigg| \frac{\sqrt{Z_{t\wedge S_{n,p}}^{\theta_n,0}} - 1}{|\theta_n|} - \tfrac12\, \theta^\top V_{t\wedge S_p} \bigg|^2 \Bigg] = 0.$$

Furthermore, we can choose $(S_p)_{p\in\mathbb{N}}$ independent of $(\theta_n)_{n\in\mathbb{N}}$ and such that $S_p\le p$ (this is what we will do in the sequel). We deduce that

$$\lim_{n\to+\infty} E_{x,0}\Bigg[ \bigg| \frac{\sqrt{Z_{T_1\wedge S_{n,p}}^{\theta_n,0}} - 1}{|\theta_n|} - \tfrac12\, \theta^\top V_{T_1\wedge S_p} \bigg|^2 \Bigg] = 0,$$

and then, using [7, Lemma 3.17], we obtain

$$(32)\qquad \lim_{n\to+\infty} E_{x,0}\Bigg[ \bigg| \frac{\sqrt{E_{x,0}\big[Z_{T_1\wedge S_{n,p}}^{\theta_n,0} \mid \sigma(X_{T_1\wedge S_{n,p}})\big]} - 1}{|\theta_n|} - \tfrac12\, \theta^\top E_{x,0}\big[ V_{T_1\wedge S_p} \mid \sigma(X_{T_1\wedge S_{n,p}}) \big] \bigg|^2 \Bigg] = 0.$$

(2)(b) By [5], we may write

$$T_1\wedge S_{n,p} = T_1\wedge R_{n,p}, \qquad T_1\wedge S_p = T_1\wedge R_p,$$

where $R_{n,p} = r_{n,p}(X_0)$, $R_p = r_p(X_0)$, the sequence $(R_p)_{p\in\mathbb{N}}$ does not depend on $(\theta_n)_{n\in\mathbb{N}}$, and the functions $r_{n,p}, r_p : E\to(0,p]$ are $\mathcal{E}$-measurable. Moreover, by (11), we deduce the following inequalities and inclusions:

$$\text{(i) } T_1\wedge R_{n,p} \le T_1\wedge R_p \ \text{ and } \ R_p\le p; \qquad \text{(ii) } (S_{n,p}\ge T_1) = (R_{n,p}\ge T_1) \subset (S_p\ge T_1) = (R_p\ge T_1);$$
$$\text{(iii) } (S_p<T_1,\, S_p=S_{n,p}) = (R_p<T_1,\, R_p=R_{n,p}); \qquad \text{(iv) } \lim_{n\to+\infty} \mathbb{P}_{x,0}(R_p<T_1,\, R_p=R_{n,p}) = \mathbb{P}_{x,0}(R_p<T_1) > 0;$$
$$\text{(v) } \lim_{n\to+\infty} \mathbb{P}_{x,0}(R_{n,p}\ge T_1) = \lim_{n\to+\infty} \mathbb{P}_{x,0}(R_p\ge T_1,\, R_p=R_{n,p}) = \mathbb{P}_{x,0}(R_p\ge T_1) > 0.$$

(2)(c) Let us define the quantities

k x , p ( n ) E x , 0 E x , 0 [ Z T 1 R n , p θ n , 0 σ ( X T 1 R n , p ) ] 1 θ n 1 2 θ E x , 0 [ V T 1 R p σ ( X T 1 R n , p ) ] 2 1 ( R p < T 1 , R p = R n , p ) = E x , 0 e 1 2 ( μ 0 μ θ n ) ( X 0 ) r p ( X 0 ) 1 θ n 1 2 θ E x , 0 [ V r p ( X 0 ) σ ( X 0 ) ] 2 1 ( R p < T 1 , R p = R n , p ) , l x , p ( n ) E x , 0 E x , 0 [ Z T 1 R n , p θ n , 0 σ ( X T 1 R n , p ) ] 1 θ n 1 2 θ E x , 0 [ V T 1 R p σ ( X T 1 R n , p ) ] 2 1 ( R p T 1 , R p = R n , p ) = E x , 0 E x , 0 [ Z T 1 θ n , 0 σ ( X T 1 ) ] 1 θ n 1 2 θ E x , 0 [ V T 1 σ ( X T 1 ) ] 2 1 ( R p T 1 , R p = R n , p ) ,

and the -measurable function w 1 , p : E R d given by

w 1 , p ( x ) = E x , 0 [ V r p ( x ) ] .

There exists an -measurable function w 2 : E × E R d , such that

(33) E x , 0 [ V T 1 σ ( X T 1 ) ] = w 2 ( x , X T 1 ) .

(2)(d) Using (iv) in (2)(b), and by (32), we obtain that for all $p\in\mathbb{N}$,

$$\lim_{n\to+\infty} k_{x,p}(n) = \lim_{n\to+\infty} \bigg| \frac{e^{\frac12(\mu_0-\mu_{\theta_n})(x)\, r_p(x)} - 1}{|\theta_n|} - \tfrac12\, \theta^\top w_{1,p}(x) \bigg|^2\, \mathbb{P}_{x,0}(R_p<T_1) = 0.$$

Since $R_p\le p$, we have $\mathbb{P}_{x,0}(R_p<T_1)>0$. We deduce from the latter that $\theta\mapsto\mu_\theta(x)$ is differentiable at 0 and that its derivative

$$\dot\mu_0(x) = -\frac{w_{1,p}(x)}{r_p(x)}$$

is independent of $p$ and also of the functions $r_p$.

(2)(e) Similarly, by (v) in (2)(b), and by (32), for all $p\in\mathbb{N}$, we have

$$\lim_{n\to+\infty} l_{x,p}(n) = \lim_{n\to+\infty} E_{x,0}\Bigg[ \bigg| \frac{\sqrt{\dfrac{\rho^{\theta_n}}{\rho^0}(x,X_{T_1})} - 1}{|\theta_n|} - \tfrac12\, \theta^\top w_2(x,X_{T_1}) \bigg|^2 \Bigg]\, \mathbb{P}_{x,0}(R_p\ge T_1) = 0,$$

and since

$$\mathbb{P}_{x,0}(R_p\ge T_1) = \mathbb{P}_{x,0}(r_p(x)\ge T_1) > 0,$$

we obtain the differentiability in $L^2(Q_0(x,dy))$, at $\theta=0$, of $\theta\mapsto\sqrt{\frac{\rho^\theta}{\rho^0}(x,\cdot)}$.

(2)(f) To prove the regularity of the model $\mathcal{E}'_x$, it remains to show that

$$(34)\qquad \lim_{n\to+\infty} \frac{1}{|\theta_n|^2}\, E_{x,0}\Big[ 1 - \frac{\rho^{\theta_n}}{\rho^0}(x,X_{T_1}) \Big] = 0.$$

Observe that, for all p N , we have

E x , 0 [ 1 Z T 1 R n , p θ n ] = E x , 0 [ E x , 0 [ 1 Z T 1 θ n σ ( X T 1 ) ] 1 ( R n , p T 1 ) ] + E x , 0 [ E x , 0 [ 1 Z R n , p θ n σ ( X R n , p ) ] 1 ( R n , p < T 1 ) ] = E x , 0 1 ρ θ n ρ 0 ( x , X T 1 ) 1 ( R n , p T 1 ) + [ 1 e ( μ 0 μ θ n ) ( x ) r n , p ( x ) ] P x , 0 ( R n , p < T 1 ) .

On the other hand, by (13), by (iv) in (2)(b), and by the fact that $\theta\mapsto\mu_\theta(x)$ is differentiable at 0 (which is equivalent to the regularity of the model $\mathcal{E}''_x$ in (28)), we obtain

$$0 \le \lim_{n\to+\infty} \frac{E_{x,0}\big[ 1 - Z_{T_1\wedge R_{n,p}}^{\theta_n} \big]}{|\theta_n|^2} \le \lim_{n\to+\infty} \frac{E_{x,0}\big[ 1 - Z_{S_{n,p}}^{\theta_n} \big]}{|\theta_n|^2} = 0$$

and

$$\lim_{n\to+\infty} \frac{1 - e^{(\mu_0-\mu_{\theta_n})(x)\, r_{n,p}(x)}}{|\theta_n|^2}\; \mathbb{P}_{x,0}(R_{n,p}<T_1) = 0.$$

The latter gives

$$\lim_{n\to+\infty} \frac{E_{x,0}\Big[ 1 - \dfrac{\rho^{\theta_n}}{\rho^0}(x,X_{T_1}) \Big]}{|\theta_n|^2}\; \mathbb{P}_{x,0}(R_p\ge T_1) = 0.$$

(2)(g) The regularity of the model $\mathcal{E}_x$ is deduced from steps (2)(d), (2)(e), and (2)(f) and from Theorem 6.□

Proof of Theorem 7

Fix $x\in E$ and $t>0$. Assume that $A_t(y)$ is satisfied for all $y\in E$ and that $|\theta|,|\xi|<u_t$. The dominating probability measure $K_x^{\theta,\xi}$ is

$$K_x^{\theta,\xi} = \tfrac13\big( \mathbb{P}_{x,0} + \mathbb{P}_{x,\theta} + \mathbb{P}_{x,\xi} \big).$$

In this proof, we simplify some notations as follows:

$$(35)\qquad z_s^\alpha = \frac{d(\mathbb{P}_{x,\alpha}|_{\mathcal{F}_s})}{d(K_x^{\theta,\xi}|_{\mathcal{F}_s})}, \qquad\text{for } \alpha = 0,\theta,\xi, \quad |\theta|,|\xi|<u_t.$$

Due to the choice of $K_x^{\theta,\xi}$, we have

$$z_s^0 + z_s^\theta + z_s^\xi = 3.$$

(1) First note that the regularity of $\mathcal{E}_y$ is equivalent to

$$|g(y,\theta,\xi)| = \frac{1}{|\theta|\,|\xi|}\Big| \bar H^{0,\theta}(y) + \bar H^{0,\xi}(y) - \bar H^{\theta,\xi}(y) - \tfrac14\, \theta^\top I(y)\, \xi \Big| \le f_1(y, |\theta|\vee|\xi|),$$

where $f_1$ is the error function in condition $A_t(y)$. Moreover, we have

$$(36)\qquad \bigg( \int_0^t E_{K_x^{\theta,\xi}}\big[ g(X_s,\theta,\xi)^2 \big]\, ds \bigg)^{1/2} \le F_{1,t}(x, |\theta|\vee|\xi|),$$

where the error function $F_{1,t}$ is

$$F_{1,t}(x, |\theta|\vee|\xi|) \coloneqq \bigg( \int_0^t E_{K_x^{\theta,\xi}}\big[ f_1(X_s, |\theta|\vee|\xi|)^2 \big]\, ds \bigg)^{1/2}.$$

On the other hand, (21) and condition $A_t(y)$ yield

$$(37)\qquad \sup_{|\theta|\vee|\xi|\le u_t} \int_0^t E_{K_x^{\theta,\xi}}\big[ \|I(X_s)\|^2 \big]\, ds < \infty.$$

(2) We will show the existence of an error function $F_t$ for which

$$L_t(x,\theta,\xi) \coloneqq 1 + H_t^{\theta,\xi} - H_t^{0,\theta} - H_t^{0,\xi} - \tfrac14\, E_{x,0}\bigg[ \int_0^t \theta^\top I(X_s)\, \xi\, ds \bigg], \qquad x\in E,$$

satisfies

$$(38)\qquad |L_t(x,\theta,\xi)| \le |\theta|\,|\xi|\, F_t(x, |\theta|\vee|\xi|).$$

Using (17), (19), and the fact that $(z_t^0)_{t\ge 0}$ is a $(K_x^{\theta,\xi},\mathcal{F}_t)$-martingale, we decompose $L_t(x,\theta,\xi)$ into

$$(39)\qquad L_t(x,\theta,\xi) = E_{K_x^{\theta,\xi}}\bigg[ -\int_0^t \sqrt{z_s^\theta z_s^\xi}\, dh_s^{\theta,\xi} + \int_0^t \sqrt{z_s^0 z_s^\theta}\, dh_s^{0,\theta} + \int_0^t \sqrt{z_s^0 z_s^\xi}\, dh_s^{0,\xi} - \tfrac14\, z_t^0 \int_0^t \theta^\top I(X_s)\, \xi\, ds \bigg] = \int_0^t E_{K_x^{\theta,\xi}}\big[ A_s + B_s + C_s + D_s \big]\, ds,$$

where

$$A_s = \Big( \bar H^{0,\theta}(X_s) + \bar H^{0,\xi}(X_s) - \bar H^{\theta,\xi}(X_s) - \tfrac14\, \theta^\top I(X_s)\, \xi \Big)\sqrt{z_s^\theta z_s^\xi}, \qquad B_s = \bar H^{0,\theta}(X_s)\big( \sqrt{z_s^\theta z_s^0} - \sqrt{z_s^\theta z_s^\xi} \big),$$

$$C_s = \bar H^{0,\xi}(X_s)\big( \sqrt{z_s^\xi z_s^0} - \sqrt{z_s^\theta z_s^\xi} \big), \qquad D_s = \tfrac14\, \theta^\top I(X_s)\, \xi\, \big( \sqrt{z_s^\theta z_s^\xi} - z_s^0 \big).$$

(2)(a) By (35) and (36), we obtain

(40) 0 t E K x θ , ξ [ A s ] d s 3 θ ξ F 1 , t ( x , θ ξ ) .

(2)(b) Applying Cauchy-Schwarz’s inequality twice, and using (35), we obtain

(41) 0 t E K x θ , ξ [ B s ] d s 3 0 t E K x θ , ξ [ H ¯ 0 , θ ( X s ) 2 ] d s 1 2 × 0 t E K x θ , ξ [ ( z s 0 z s ξ ) 2 ] d s 1 2 .

Then, the condition A t implies

0 t E K x θ , ξ [ H ¯ 0 , θ ( X s ) 2 ] d s θ 4 0 t E K x θ , ξ [ f 2 ( X s ) 2 ] d s .

By (17), we have

E K x θ , ξ [ ( z s 0 z s ξ ) 2 ] = 2 E K x θ , ξ 0 s z r 0 z r ξ d h r 0 , ξ 6 E K x θ , ξ [ h s 0 , ξ ] ,

hence,

0 t E K x θ , ξ [ ( z s 0 z s ξ ) 2 ] d s 6 0 t 0 s E K x θ , ξ [ H ¯ 0 , ξ ( X r ) ] d r d s 6 t ξ 2 0 t E K x θ , ξ [ f 2 ( X s ) ] d s .

Finally, condition A t , implies that

F 2 , t ( x , u ) = 3 u 2 t sup θ , ξ u t 0 t E K x θ , ξ [ f 2 ( X s ) 2 ] d s 3 4

is an error function. Then, by (41) and (3) we obtain

0 t E K x θ , ξ [ B s ] d s θ ξ F 2 , t ( x , θ ξ ) .

(2)(c) As in (4), there exists an error function F 3 , t such that

0 t E K x θ , ξ [ C s ] d s θ ξ F 3 , t ( x , θ ξ ) .

(2)(d) For the control of the fourth integral in (39), it suffices to observe that the inequality

z s θ z s ξ z s 0 z s 0 × z s θ z s 0 + z s θ × z s ξ z s 0

implies

E K x θ , ξ [ ( z s θ z s ξ z s 0 ) 2 ] 6 { E K x θ , ξ [ ( z s 0 z s θ ) 2 ] + E K x θ , ξ [ ( z s 0 z s ξ ) 2 ] } .

Then, using (3) one obtains

0 t E K x θ , ξ [ ( z s θ z s ξ z s 0 ) 2 ] d s 1 2 6 t ( θ ξ ) 0 t E K x θ , ξ [ f 2 ( X s ) ] d s 1 2 .

By (37) and by condition A t conclude that

F 4 , t ( x , u ) = 6 u t sup θ , ξ u t 0 t E K x θ , ξ [ I ( X s ) 2 ] d s × [ 0 t E K x θ , ξ [ f 2 ( X s ) ] d s ] 1 2

is an error function satisfying

0 t E K x θ , ξ [ D s ] d s θ ξ F 4 , t ( x , θ ξ ) .

(2)(e) The control (38) is obtained with F t = 3 F 1 , t + F 2 , t + F 3 , t + F 4 , t .□

For the proof of Theorem 8, we need a lemma which generalizes [15, Corollary I.7.1] and hence covers the situation of Theorem 8. Let $(F,\mathcal{B})$ be an arbitrary state space, let $R_\theta(x,dy)$, $S_\theta(x,dy)$, $\theta\in\Theta$, $x\in F$, be two Markovian kernels, and let $\Pi_x^\theta(dy)$ be a kernel dominating

$$R_\theta(x,dy), \quad R_0(x,dy), \quad S_\theta(x,dy), \quad\text{and}\quad S_0(x,dy).$$

We consider the statistical models:

$$\mathcal{F}_x = (F,\mathcal{B},(R_\theta(x,dy))_{\theta\in\Theta}), \qquad \mathcal{G}_x = (F,\mathcal{B},(S_\theta(x,dy))_{\theta\in\Theta}),$$
$$\mathcal{H}_x = (F,\mathcal{B},(R_\theta S_\theta(x,dy))_{\theta\in\Theta}), \qquad \bar{\mathcal{H}}_x \coloneqq (F^2,\mathcal{B}^{\otimes 2},(R_\theta(x,dy_1)\, S_\theta(y_1,dy_2))_{\theta\in\Theta}),$$

where the product $R_\theta S_\theta$ is the Markovian product of the kernels $R_\theta$ and $S_\theta$, i.e.,

$$R_\theta S_\theta(x,A) = \int_F R_\theta(x,dy)\, S_\theta(y,A), \qquad A\in\mathcal{B}.$$

The Radon-Nikodym densities associated with the models $\mathcal{F}_x$ and $\mathcal{G}_x$, relative to $\Pi_x^\theta(dy)$, are

$$\alpha_\theta(x,\cdot) = \frac{dR_\theta(x,\cdot)}{d\Pi_x^\theta(\cdot)}, \qquad \beta_\theta(x,\cdot) = \frac{dS_\theta(x,\cdot)}{d\Pi_x^\theta(\cdot)}, \qquad \alpha_0(x,\cdot) = \frac{dR_0(x,\cdot)}{d\Pi_x^\theta(\cdot)}, \qquad \beta_0(x,\cdot) = \frac{dS_0(x,\cdot)}{d\Pi_x^\theta(\cdot)}.$$

Choosing

$$\Pi_x^\theta(\cdot) = \tfrac14\big\{ R_\theta(x,\cdot) + R_0(x,\cdot) + S_\theta(x,\cdot) + S_0(x,\cdot) \big\},$$

we have $\alpha_\cdot, \beta_\cdot \le 4$. We introduce realizations of the last kernels as follows. Let $Y_1$ and $Y_2$ be two random variables on a probability space $(\Omega,\mathcal{A})$ with values in the state space $(F,\mathcal{B})$. For $\theta\in\Theta$ and $x\in F$, let us define the probability measure $P_{x,\theta}$ on $\Omega$ such that

$$(42)\qquad P_{x,\theta}(Y_1\in dy) = R_\theta(x,dy) \qquad\text{and}\qquad P_{x,\theta}(Y_2\in dz \mid Y_1=y) = S_\theta(y,dz),$$

hence

$$P_{x,\theta}(Y_2\in dy) = R_\theta S_\theta(x,dy).$$

We also define the probability measure $K_x^\theta$ on $\Omega$ enjoying the same properties as in (42), when replacing $P_{x,\theta}$ by $K_x^\theta$ (respectively, $R_\theta(x,dy)$ and $S_\theta(x,dy)$ by $\Pi_x^\theta(dy)$). With these choices, and by [9, Theorem IV.4.16], we see that

$$P_{x,\theta} \ll K_x^\theta \qquad\text{and}\qquad P_{x,0} \ll K_x^\theta.$$

We can now state that

$\mathcal{F}_x$ is statistically isomorphic to $(\Omega, \sigma(Y_1), (P_{x,\theta})_{\theta\in\Theta})$, $\mathcal{H}_x$ is statistically isomorphic to $(\Omega, \sigma(Y_2), (P_{x,\theta})_{\theta\in\Theta})$, and $\bar{\mathcal{H}}_x$ is statistically isomorphic to $(\Omega, \sigma(Y_1,Y_2), (P_{x,\theta})_{\theta\in\Theta})$.

Consequently, the Radon-Nikodym densities of the models generated by $\sigma(Y_1)$ and by $\sigma(Y_1,Y_2)$, with respect to $K_x^\theta$, are expressed by

$$z^\theta = \frac{d(P_{x,\theta}|_{\sigma(Y_1)})}{d(K_x^\theta|_{\sigma(Y_1)})} = \alpha_\theta(x,Y_1) \qquad\text{and}\qquad \bar z^\theta = \frac{d(P_{x,\theta}|_{\sigma(Y_1,Y_2)})}{d(K_x^\theta|_{\sigma(Y_1,Y_2)})} = \alpha_\theta(x,Y_1)\, \beta_\theta(Y_1,Y_2).$$

The regularity of the models $\mathcal{F}_x$ and $\mathcal{G}_x$ is equivalent to the following: there exist two error functions $f_\alpha$ and $f_\beta$, an $R_0(x,dy)$-centered random vector $V_\alpha(x,\cdot)\in L^2(R_0(x,dy))$, and an $S_0(x,dy)$-centered random vector $V_\beta(x,\cdot)\in L^2(S_0(x,dy))$, such that

$$(43)\qquad a(x,\theta) \coloneqq \int_F \Big( \sqrt{\alpha_\theta(x,y)} - \sqrt{\alpha_0(x,y)} - \tfrac12\sqrt{\alpha_0(x,y)}\, \theta^\top V_\alpha(x,y) \Big)^2 \Pi_x^\theta(dy) = E_{K_x^\theta}\Big[ \Big( \sqrt{\alpha_\theta(x,Y_1)} - \sqrt{\alpha_0(x,Y_1)} - \tfrac12\sqrt{\alpha_0(x,Y_1)}\, \theta^\top V_\alpha(x,Y_1) \Big)^2 \Big] \le |\theta|^2 f_\alpha(x,|\theta|)$$

and

$$(44)\qquad b(x,\theta) \coloneqq \int_F \Big( \sqrt{\beta_\theta(x,y)} - \sqrt{\beta_0(x,y)} - \tfrac12\sqrt{\beta_0(x,y)}\, \theta^\top V_\beta(x,y) \Big)^2 \Pi_x^\theta(dy) \le |\theta|^2 f_\beta(x,|\theta|).$$

We are now able to state the fundamental lemma.

Lemma 11

Let $x\in F$. Assume that $\mathcal{F}_x$ is regular and that $\mathcal{G}_y$ is regular for all $y\in F$. Assume also that there exists $r>0$ such that the error function $f_\beta$ in (44) satisfies

$$\Pi_x^\theta[f_\beta(\cdot,r)] = \int_F f_\beta(y,r)\, \Pi_x^\theta(dy) < +\infty, \qquad\text{if } |\theta|<r.$$

Then the model $\bar{\mathcal{H}}_x$ is regular, and so is $\mathcal{H}_x$ (as a sub-model of $\bar{\mathcal{H}}_x$).

Proof

We need to show that there exists an error function $f_\gamma$ such that

$$(45)\qquad c(x,\theta) \coloneqq E_{K_x^\theta}\Big[ \Big( \sqrt{\bar z^\theta} - \sqrt{\bar z^0} - \tfrac12\sqrt{\bar z^0}\; \theta^\top\big( V_\alpha(x,Y_1) + V_\beta(Y_1,Y_2) \big) \Big)^2 \Big] \le |\theta|^2 f_\gamma(x,|\theta|),$$

for all $\theta$ satisfying $|\theta|<r$. To this end, we split $c(x,\theta)$ as follows:

c ( x , θ ) = E K x θ α θ ( x , Y 1 ) β θ ( Y 1 , Y 2 ) α 0 ( x , Y 1 ) β 0 ( Y 1 , Y 2 ) 1 2 1 2 α 0 ( x , Y 1 ) β 0 ( Y 1 , Y 2 ) θ ( V α ( x , Y 1 ) + V β ( Y 1 , Y 2 ) ) 2 = E K x θ α θ ( x , Y 1 ) α 0 ( x , Y 1 ) 1 2 α 0 ( x , Y 1 ) θ V α ( x , Y 1 ) β θ ( Y 1 , Y 2 ) + β θ ( Y 1 , Y 2 ) β 0 ( Y 1 , Y 2 ) 1 2 β 0 ( Y 1 , Y 2 ) θ V β ( Y 1 , Y 2 ) α 0 ( x , Y 1 ) + 1 2 α 0 ( x , Y 1 ) θ V α ( x , Y 1 ) ( β θ ( Y 1 , Y 2 ) β 0 ( Y 1 , Y 2 ) ) 2 .

Then, using the fact that α . , β . 4 , we obtain

c ( x , θ ) 12 E K x θ α θ ( x , Y 1 ) α 0 ( x , Y 1 ) 1 2 α 0 ( x , Y 1 ) θ V α ( x , Y 1 ) 2 + E K x θ β θ ( Y 1 , Y 2 ) β 0 ( Y 1 , Y 2 ) 1 2 β 0 ( Y 1 , Y 2 ) θ V β ( Y 1 , Y 2 ) 2 + θ 2 16 E x , 0 [ V α ( x , Y 1 ) 2 ] 1 2 E K x θ [ β θ ( Y 1 , Y 2 ) β 0 ( Y 1 , Y 2 ) 2 ] 1 2 .

Using inequalities (43), (44) and the fact that Π x θ [ f β ( , r ) ] ( x ) < + , we obtain

E x , 0 [ V α ( x , Y 1 ) 2 ] = R 0 [ V α 2 ] ( x ) < F × F β 0 ( y , z ) V β ( y , z ) 2 Π y θ ( d z ) Π x θ ( d y ) < .

Thus,

g γ ( x , θ ) θ 2 2 Π x θ [ f β ( , θ ) ] ( x ) + F × F V β ( y , z ) 2 Π y θ ( d z ) Π x θ ( d y ) 0 , as θ 0 .

Finally, since

E K x θ [ β θ ( Y 1 , Y 2 ) β 0 ( Y 1 , Y 2 ) 2 ] = F × F β θ ( y , z ) β 0 ( y , z ) 2 Π y θ ( d z ) Π x θ ( d y ) ,

then, (45) holds with the error function

f γ ( x , θ ) 12 f α ( x , θ ) + 12 R 0 [ f β ( . , θ ) ] ( x ) + 3 4 E x , 0 [ V α ( x , Y 1 ) 2 ] g γ ( x , θ ) .

Proof of Theorem 8

The proof is a simple application of Lemma 11, by taking

$$F = E\times\mathbb{R}_+, \qquad \mathcal{B} = \mathcal{E}\otimes\mathcal{B}(\mathbb{R}_+), \qquad R_\theta = S_\theta = \bar Q_\theta,$$

and by an induction on the index $k$, using the same integrability condition on the error function.□

Proof of Theorem 10

  1. First, we note that

    $$(\Omega,\mathcal{F}_t,(\mathbb{P}_{x,\theta})_{\theta\in\Theta}) \text{ regular} \;\Longrightarrow\; (\Omega,\mathcal{F}_{t\wedge T_1},(\mathbb{P}_{x,\theta})_{\theta\in\Theta}) \text{ regular} \;\Longrightarrow\; (\Omega,\sigma(X_{t\wedge T_1}),(\mathbb{P}_{x,\theta})_{\theta\in\Theta}) \text{ regular}.$$

    Using the Bayes theorem, we express the likelihood of the last model by

    $$E_{x,0}\big[ Z_{t\wedge T_1}^\theta \mid \sigma(X_{t\wedge T_1}) \big].$$

    Since the filtered model (2) is regular at each time $s\in[0,t]$, the derivative $(V_s)_{0\le s\le t}$ of the model $(\Omega,\mathcal{F}_s,(\mathbb{P}_{x,\theta})_{\theta\in\Theta})$ is given by (31). Using [7, Lemma 3.13], we obtain the derivative at $\theta=0$ of the likelihood $E_{x,0}[Z_{t\wedge T_1}^\theta \mid \sigma(X_{t\wedge T_1})]$ in the form

    $$E_{x,0}\big[ V_{t\wedge T_1} \mid \sigma(X_{t\wedge T_1}) \big].$$

    (1)(a) There exists then an error function F 1 , t , such that

    k ( t , x , θ ) E x , 0 E x , 0 [ Z t T 1 θ σ ( X t T 1 ) ] 1 1 2 θ E x , 0 [ V t T 1 σ ( X t T 1 ) ] 2 = E x , 0 E x , 0 [ Z t θ σ ( X t ) ] 1 1 2 θ E x , 0 [ V t σ ( X t ) ] 2 1 ( t < T 1 ) + E x , 0 E x , 0 [ Z T 1 θ σ ( X T 1 ) ] 1 1 2 θ E x , 0 [ V T 1 σ ( X T 1 ) ] 2 1 ( t T 1 ) = k 1 ( t , x , θ ) + k 2 ( t , x , θ ) θ 2 F 1 , t ( x , θ ) .

    (1)(b) As in the last point, we see that there exists an error function F 2 , t such that

    (46) l ( t , x , θ ) E x , 0 1 ρ θ ρ 0 ( x , X T 1 ) P x , 0 ( t T 1 ) E x , 0 [ 1 Z T 1 ] F 2 , t ( x , θ ) .

    (1)(c) We use the same arguments as in the proof of Theorem 5, taking $S_{n,p} = R_{n,p} = S_p = R_p = t$, and we obtain that:

    • $k_1(t,x,\theta) \le |\theta|^2 F_{1,t}(x,|\theta|)$ expresses the differentiability of $\theta\mapsto\mu_\theta(x)$ at $\theta=0$;

    • $k_2(t,x,\theta) \le |\theta|^2 F_{1,t}(x,|\theta|)$ and (46) express the regularity of the model $\mathcal{E}'_x$.

    By virtue of Theorem 6, the latter is equivalent to the regularity of $\mathcal{E}_x$.

  2. The second assertion is an immediate consequence of Theorem 6.□

Acknowledgements

The author is grateful to Jean Jacod who introduced him to the topic of statistics of Lévy processes and to the referees for their valuable comments and recommendations that improved the content of the paper. The work of the author was supported by the “Research Supporting Project number (RSP-2021/162), King Saud University, Riyadh, Saudi Arabia.”

  1. Funding information: This study was funded by Research Supporting Project number (RSP-2021/162), King Saud University, Riyadh, Saudi Arabia.

  2. Conflict of interest: The author declares no conflict of interest.

References

[1] S. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, Second edition, Cambridge University Press, Cambridge, 2009, https://doi.org/10.1017/CBO9780511626630.

[2] N. Lazrieva, T. Sharia, and T. Toronjadze, Semimartingale stochastic approximation procedure and recursive estimation, J. Math. Sci. (N.Y.) 153 (2008), no. 3, 211-261, https://doi.org/10.1007/s10958-008-9127-y.

[3] T. Ogihara and Y. Uehara, Local asymptotic mixed normality via transition density approximation and an application to ergodic jump-diffusion processes, 2021, arXiv: https://arxiv.org/abs/2105.00284.

[4] V. Panov (ed.), Modern Problems of Stochastic Analysis and Statistics: Selected Contributions in Honor of Valentin Konakov, Springer Proceedings in Mathematics, Springer, Cham, Switzerland, 2017, https://doi.org/10.1007/978-3-319-65313-6.

[5] J. Jacod and A. V. Skorohod, Jumping Markov processes, Ann. Inst. Henri Poincaré Probab. Stat. 32 (1996), no. 1, 11-67, http://eudml.org/doc/77529.

[6] R. Höpfner, J. Jacod, and L. Ladelli, Local asymptotic normality and mixed normality for Markov statistical models, Probab. Theory Related Fields 86 (1990), 105-129, https://doi.org/10.1007/BF01207516.

[7] J. Jacod, Une application de la topologie d'Emery: le processus information d'un modèle statistique filtré, in: J. Azéma, M. Yor, and P. A. Meyer (eds), Séminaire de Probabilités XXIII, Lecture Notes in Mathematics 1372 (1989), 448-474, https://doi.org/10.1007/BFb0083993.

[8] K. Dzhaparidze and E. Valkeila, On the Hellinger type distances for filtered experiments, Probab. Theory Related Fields 85 (1990), 105-117, https://doi.org/10.1007/BF01377632.

[9] J. Jacod and A. N. Shiryaev, Limit Theorems for Stochastic Processes, Springer, Berlin, Heidelberg, New York, 1987, https://doi.org/10.1007/978-3-662-02514-7.

[10] W. Jedidi, Local asymptotic normality complexity arising in a parametric statistical Lévy model, Complexity 2021 (2021), 3143324, 1-18, https://doi.org/10.1155/2021/3143324.

[11] H. Rammeh, Problèmes d'estimation et d'estimation adaptative pour les processus de Cauchy et les processus stables symétriques, PhD thesis, University of Paris, 1994.

[12] H. Luschgy, Local asymptotic mixed normality for semimartingale experiments, Probab. Theory Related Fields 92 (1992), 151-176, https://doi.org/10.1007/BF01194919.

[13] R. Höpfner, On statistics of Markov step processes, representation of the loglikelihood ratio process in filtered local models, Probab. Theory Related Fields 94 (1993), 375-398, https://doi.org/10.1007/BF01199249.

[14] H. Strasser, Mathematical Theory of Statistics, Statistical Experiment and Asymptotic Decision Theory, Walter de Gruyter, Berlin, New York, 1985, https://doi.org/10.1515/9783110850826.

[15] I. A. Ibragimov and R. Z. Has'minskii, Statistical Estimation, Asymptotic Theory, Springer, Berlin, Heidelberg, New York, 1981, https://doi.org/10.1007/978-1-4899-0027-2.

[16] C. Dellacherie and P. A. Meyer, Probabilités et Potentiel, Chapitres V à VIII, Hermann, Paris, 1980.

Received: 2022-01-18
Revised: 2022-06-29
Accepted: 2022-07-01
Published Online: 2022-09-05

© 2022 Wissem Jedidi, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
