
Spike detection for calcium activity

  • Hermine Biermé, Camille Constant, Anne Duittoz and Christine Georgelin
Published/Copyright: September 30, 2021

Abstract

We present in this paper a global methodology for spike detection in a biological context of fluorescence recordings of the calcium activity of GnRH neurons. For this purpose, we first propose a simple stochastic model that mimics experimental time series by considering an autoregressive AR(1) process with a linear trend and specific innovations involving spiking times. Estimators of the parameters, with asymptotic normality, are established and used to set up a statistical test on the estimated innovations in order to detect spikes. We compare several procedures and illustrate the performance of our procedure on biological data.

2010 Mathematics Subject Classification: Primary 62M10, 62F12, 62F03; Secondary 92B25.

Corresponding author: Hermine Biermé, LMA UMR CNRS 7348, Université de Poitiers Bât. H3 - Site du Futuroscope, TSA 61125, 11 bd Marie et Pierre Curie, 86073 Poitiers Cedex 9, France, E-mail:

Acknowledgments

We would like to sincerely thank the reviewers for their valuable comments and challenging remarks, which helped us to substantially improve this paper.

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

Appendix A

A.1 From continuous time to discrete time

We assume that we observe a process $X$ over a time interval $[0, T]$, $T \in \mathbb{R}_+$, satisfying

$$\mathrm{d}X_t = \frac{1}{\tau}\left(\mu(t) - X_t\right)\mathrm{d}t + \lambda\,\mathrm{d}N_t + \sigma\,\mathrm{d}W_t,$$

for a linear trend $\mu(t) = c + \frac{d}{T}\,t$, $c \in \mathbb{R}_+$, $d \in \mathbb{R}$, $\tau, \sigma, \lambda \in (0, +\infty)$, and $W$ a Brownian motion independent from $N$, a Poisson process of parameter $\nu \in (0, 1]$. It follows, writing $\gamma = 1/\tau$, that

(18) $$\mathrm{d}\left(e^{\gamma t} X_t\right) = \gamma e^{\gamma t}\left(c + \frac{d}{T}\,t\right)\mathrm{d}t + \sigma e^{\gamma t}\,\mathrm{d}W_t + \lambda e^{\gamma t}\,\mathrm{d}N_t.$$

Now, we assume that we have $(n+1)$ observations uniformly distributed on $[0, T]$: $X_0, X_{T/n}, \ldots, X_{kT/n}, \ldots, X_T$. For $k \in \{0, \ldots, n\}$, we will write $X_{n,k}$ for $X_{kT/n}$. Moreover, we introduce $\phi_n = e^{-\gamma T/n}$. By integrating (18) over $[kT/n, (k+1)T/n]$, we have:

$$\phi_n^{-(k+1)} X_{n,k+1} - \phi_n^{-k} X_{n,k} = c\left(\phi_n^{-(k+1)} - \phi_n^{-k}\right) + \frac{d}{n}\left((k+1)\,\phi_n^{-(k+1)} - k\,\phi_n^{-k}\right) - \frac{d}{\gamma T}\left(\phi_n^{-(k+1)} - \phi_n^{-k}\right) + \sigma \int_{kT/n}^{(k+1)T/n} e^{\gamma t}\,\mathrm{d}W_t + \lambda \int_{kT/n}^{(k+1)T/n} e^{\gamma t}\,\mathrm{d}N_t.$$

Or equivalently:

$$X_{n,k+1} = \phi_n X_{n,k} + c\left(1 - \phi_n\right) + d\,\frac{k}{n}\left(1 - \phi_n\right) + \frac{d}{n} - \frac{d}{\gamma T}\left(1 - \phi_n\right) + \sigma \int_{kT/n}^{(k+1)T/n} e^{\gamma\left(t - (k+1)T/n\right)}\,\mathrm{d}W_t + \lambda \sum_{j=N(kT/n)+1}^{N\left((k+1)T/n\right)} e^{\gamma\left(s_j - (k+1)T/n\right)},$$

where $(s_j)_j$ are the points of the Poisson process $N$. Assuming $n$ is large enough:

$$\frac{d}{n} - \frac{d}{\gamma T}\left(1 - \phi_n\right) = \frac{d}{n} - \frac{d}{\gamma T}\left(1 - e^{-\gamma T/n}\right) = \frac{d}{n} - \frac{d}{\gamma T}\left(\frac{\gamma T}{n} + o\left(\frac{1}{n}\right)\right) = o\left(\frac{1}{n}\right);$$

$$\int_{kT/n}^{(k+1)T/n} e^{\gamma\left(t - (k+1)T/n\right)}\,\mathrm{d}W_t = W_{(k+1)T/n} - W_{kT/n} + o\left(\frac{1}{n}\right),$$

with $W_{(k+1)T/n} - W_{kT/n} \sim \mathcal{N}\left(0, \frac{T}{n}\right)$ and $N_{(k+1)T/n} - N_{kT/n} \sim \mathcal{P}\left(\nu\,\frac{T}{n}\right)$. Then

$$\mathbb{P}\left(N_{(k+1)T/n} - N_{kT/n} = 1\right) = \nu\,\frac{T}{n} + o\left(\frac{1}{n}\right) \quad\text{and}\quad \mathbb{P}\left(N_{(k+1)T/n} - N_{kT/n} = 0\right) = 1 - \nu\,\frac{T}{n} + o\left(\frac{1}{n}\right),$$

so we can consider the following approximation

$$X_{n,k+1} = \phi_n X_{n,k} + c\left(1 - \phi_n\right) + d\,\frac{k}{n}\left(1 - \phi_n\right) + \sigma\,\epsilon_{k+1} + \lambda\,U_{k+1},$$

where for all $k \in \{0, \ldots, n\}$, $\epsilon_k \sim \mathcal{N}\left(0, \frac{T}{n}\right)$ and $U_k \sim \mathcal{B}\left(\nu\,\frac{T}{n}\right)$ are independent variables. In this paper we first consider this approximation for a fixed time lapse $T/n = 1$.
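To make the discretized dynamics concrete, here is a minimal simulation sketch of this approximation with $T/n = 1$, written in Python; the parameter values and the initial condition are illustrative choices, not the ones prescribed by the paper.

```python
import numpy as np

def simulate_path(n=1000, phi=0.9, c=15.0, d=-10.0, sigma=0.2, lam=1.0, nu=0.3, seed=0):
    """Simulate X_{k+1} = phi*X_k + c(1-phi) + d(k/n)(1-phi) + sigma*eps_{k+1} + lam*U_{k+1}."""
    rng = np.random.default_rng(seed)
    X = np.empty(n + 1)
    X[0] = c                      # illustrative starting point on the trend line
    eps = rng.standard_normal(n)  # Gaussian innovations, N(0, 1) since T/n = 1
    U = rng.random(n) < nu        # Bernoulli(nu) spike indicators
    for k in range(n):
        trend = c * (1.0 - phi) + d * (k / n) * (1.0 - phi)
        X[k + 1] = phi * X[k] + trend + sigma * eps[k] + lam * U[k]
    return X, U

X, spikes = simulate_path()
print(f"{spikes.sum()} spikes over {len(X) - 1} steps")
```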

A.2 Proof of Theorem 1

By the Cramér–Wold device, it is sufficient to prove that for all $u, v, w$ and $x$ in $\mathbb{R}$:

$$\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1} \Delta_{n,k} \xrightarrow[n \to +\infty]{d} \mathcal{N}(0, l),$$

where we denote $\Delta_{n,k} = \left(u + v\frac{k}{n} + w Y_k + x Z_{k+1}\right) Y_k - w\,\rho_Y(0)$ for all $n \geq 1$ and $k \in \{0, \ldots, n\}$ and

$$l = \begin{pmatrix} u & v & w & x \end{pmatrix} \Sigma_1 \begin{pmatrix} u & v & w & x \end{pmatrix}^t.$$

As usual for proving asymptotic normality in time series, we consider an $(m_n)$-dependent approximation of $(Y_k)$ defined through $Y_k^{(m_n)} = \sum_{j=0}^{m_n} \phi^j Z_{k-j}$ and denote $\Delta_{n,k}^{(m_n)} = \left(u + v\frac{k}{n} + w Y_k^{(m_n)} + x Z_{k+1}\right) Y_k^{(m_n)} - w\,\rho_Y^{(m_n)}(0)$, where $\rho_Y^{(m_n)}(0) = \mathrm{Var}\left(Y_k^{(m_n)}\right)$. Let us remark that we may write

$$\Delta_{n,k} = \Delta_{n,k}^{(m_n)} + \left(u + v\frac{k}{n} + x Z_{k+1}\right) R_{m_n,k} + w\,R_{m_n,k}^2 + 2 w\,Y_k^{(m_n)} R_{m_n,k} + w\left(\rho_Y^{(m_n)}(0) - \rho_Y(0)\right),$$

where $R_{m_n,k} = \sum_{j=m_n+1}^{+\infty} \phi^j Z_{k-j} = \phi^{m_n+1} Y_{k-m_n-1}$. We will show that we can find $c > 0$ such that

(19) $$\left\|\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\left(\Delta_{n,k} - \Delta_{n,k}^{(m_n)}\right)\right\|_{L^2} \leq c\,|\phi|^{m_n}.$$

Then, since $|\phi| < 1$, as soon as $m_n \to +\infty$ it is sufficient, by Slutsky's theorem, to prove that

$$\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1} \Delta_{n,k}^{(m_n)} \xrightarrow[n \to +\infty]{d} \mathcal{N}(0, l).$$

For that, since $\left(\Delta_{n,k}^{(m_n)}\right)$ is now a sequence of $m_n$-dependent random variables, it is enough to check the three points of Theorem 2 from [32] concerning the central limit theorem for $m_n$-dependent centered random variables. Namely, in our setting, considering $U_{n,k} = \Delta_{n,k}^{(m_n)}/\sqrt{n}$, it follows from:

  1. $\mathrm{Var}\left(\sum_{k=0}^{n-1} U_{n,k}\right) = \frac{1}{n}\,\mathrm{Var}\left(\sum_{k=0}^{n-1} \Delta_{n,k}^{(m_n)}\right) \xrightarrow[n \to +\infty]{} l$;

  2. there exists $C > 0$ such that $\sum_{k=0}^{n-1} \mathbb{E}(U_{n,k}^2) \leq C$, for all $n \geq 1$;

  3. for all $\varepsilon > 0$, $L_n(\varepsilon) = m_n^2 \sum_{k=0}^{n-1} \mathbb{E}\left(U_{n,k}^2\,\mathbb{1}_{|U_{n,k}| \geq \varepsilon/m_n^2}\right) \xrightarrow[n \to +\infty]{} 0$.

In order to prove these points and to compute the covariance matrix explicitly, we need the following result.

Lemma 1

Let $(Z_k)_k$ be a sequence of i.i.d. centered random variables with $\mathbb{E}(Z_0^4) < +\infty$ and $(Y_k)$ its associated AR(1) process with autoregression coefficient $\phi \in (-1, 1)$. Then, for all $n \geq 1$ and $k, l \in \{0, \ldots, n-1\}$ we have

$$\begin{aligned}
(1)\quad & \mathrm{Cov}\left(Y_k, Y_l^2\right) = \frac{\mathbb{E}(Z_0^3)}{1-\phi^3}\,\phi^{\frac{1}{2}(l-k)+\frac{3}{2}|l-k|}, \\
(2)\quad & \mathrm{Cov}\left(Y_k, Y_l Z_{l+1}\right) = 0, \\
(3)\quad & \mathrm{Cov}\left(Y_k Z_{k+1}, Y_l Z_{l+1}\right) = \frac{\sigma_Z^2}{1-\phi^2}\,\mathbb{E}(Z_0^2)\,\mathbb{1}_{\{k=l\}}, \\
(4)\quad & \mathrm{Cov}\left(Y_k^2, Y_l Z_{l+1}\right) = \frac{2\sigma_Z^4}{1-\phi^2}\,\phi^{2(k-l)-1}\,\mathbb{1}_{\{k \geq l+1\}}, \\
(5)\quad & \mathrm{Cov}\left(Y_k^2, Y_l^2\right) = \frac{\sigma_Z^4}{1-\phi^2}\,\phi^{2|l-k|}\left(\frac{\mathbb{E}(Z_0^4) - 3\sigma_Z^4}{\sigma_Z^4(1+\phi^2)} + \frac{2}{1-\phi^2}\right).
\end{aligned}$$

It follows that one can find a constant c > 0 such that

$$\left|\mathrm{Cov}\left(\Delta_{n,k}, \Delta_{n,l}\right)\right| \leq c\,|\phi|^{|k-l|}.$$
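These closed forms are easy to check numerically. The following sketch verifies formulas (3) and (5) by Monte Carlo for Gaussian innovations (so that $\mathbb{E}(Z_0^3) = 0$ and $\mathbb{E}(Z_0^4) = 3\sigma_Z^4$); the values of $\phi$, $\sigma_Z$, $k$ and $l$ are arbitrary illustrative choices, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma_Z, L, reps = 0.7, 1.0, 20, 200_000
k, l = 12, 15

Z = rng.normal(0.0, sigma_Z, size=(reps, L + 1))
Y = np.zeros((reps, L + 1))
Y[:, 0] = rng.normal(0.0, sigma_Z / np.sqrt(1 - phi**2), size=reps)  # stationary start
for t in range(1, L + 1):                      # AR(1): Y_t = phi * Y_{t-1} + Z_t
    Y[:, t] = phi * Y[:, t - 1] + Z[:, t]

def cov(a, b):
    return np.mean(a * b) - np.mean(a) * np.mean(b)

# (3): Cov(Y_k Z_{k+1}, Y_l Z_{l+1}) = sigma_Z^4/(1 - phi^2) * 1_{k = l}
print(cov(Y[:, k] * Z[:, k + 1], Y[:, l] * Z[:, l + 1]), "vs", 0.0)
print(cov(Y[:, k] * Z[:, k + 1], Y[:, k] * Z[:, k + 1]), "vs", sigma_Z**4 / (1 - phi**2))

# (5) with Gaussian Z: Cov(Y_k^2, Y_l^2) = sigma_Z^4/(1 - phi^2) * phi^(2|l-k|) * 2/(1 - phi^2)
print(cov(Y[:, k] ** 2, Y[:, l] ** 2), "vs",
      sigma_Z**4 / (1 - phi**2) * phi ** (2 * abs(l - k)) * 2 / (1 - phi**2))
```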

Proof

We first consider $k$ and $l$ such that $0 \leq k \leq l \leq n-1$:

$$(1)\quad \mathrm{Cov}\left(Y_k, Y_l^2\right) = \sum_{m,n,p=0}^{+\infty} \phi^{m+n+p}\,\mathbb{E}\left(Z_{k-m} Z_{l-n} Z_{l-p}\right).$$

Since the centered variables $Z_k$ are i.i.d., the terms of this sum are equal to zero except if the indices $k-m$, $l-n$ and $l-p$ are equal, that is if $p = n = l - k + m$. So we have:

$$\mathrm{Cov}\left(Y_k, Y_l^2\right) = \sum_{m=0}^{+\infty} \phi^{3m+2(l-k)}\,\mathbb{E}\left(Z_0^3\right) = \frac{\mathbb{E}(Z_0^3)}{1-\phi^3}\,\phi^{2(l-k)}.$$

(2) Since $Z_{l+1}$ is independent of $Y_k Y_l$, we get $\mathrm{Cov}(Y_k, Y_l Z_{l+1}) = 0$.

(3) Since $Z_{l+1}$ is independent of $Y_k Y_l Z_{k+1}$ if $k < l$, we have

$$\mathrm{Cov}\left(Y_k Z_{k+1}, Y_l Z_{l+1}\right) = \mathbb{E}\left(Y_k Z_{k+1} Y_l Z_{l+1}\right)\mathbb{1}_{\{k=l\}} = \mathbb{E}\left(Y_k^2 Z_{k+1}^2\right)\mathbb{1}_{\{k=l\}} = \mathbb{E}\left(Y_k^2\right)\mathbb{E}\left(Z_{k+1}^2\right)\mathbb{1}_{\{k=l\}} = \frac{\sigma_Z^4}{1-\phi^2}\,\mathbb{1}_{\{k=l\}}.$$

(4) Since $Z_{l+1}$ is independent of $Y_k^2 Y_l$, we obtain $\mathrm{Cov}\left(Y_k^2, Y_l Z_{l+1}\right) = 0$.

(5) We have

$$\mathrm{Cov}\left(Y_k^2, Y_l^2\right) = \sum_{m,n,p,q=0}^{+\infty} \phi^{m+n+p+q}\,\mathbb{E}\left(Z_{k-m} Z_{k-n} Z_{l-p} Z_{l-q}\right) - \rho_Y(0)^2.$$

The terms of this sum are equal to zero except if the indices $k-m$, $k-n$, $l-p$, $l-q$ are all equal or equal in pairs, and we deduce that

$$\mathrm{Cov}\left(Y_k^2, Y_l^2\right) = \frac{\sigma_Z^4}{1-\phi^2}\,\phi^{2(l-k)}\left(\frac{\mathbb{E}(Z_0^4) - 3\sigma_Z^4}{\sigma_Z^4(1+\phi^2)} + \frac{2}{1-\phi^2}\right).$$

Now if we have $0 \leq l < k \leq n-1$, we can check the equalities (3) and (5) by symmetry. For the other ones:

  • (1)

    $$\mathrm{Cov}\left(Y_k, Y_l^2\right) = \sum_{m,n,p=0}^{+\infty} \phi^{m+n+p}\,\mathbb{E}\left(Z_{k-m} Z_{l-n} Z_{l-p}\right).$$

    The terms of this sum are equal to zero except if the indices $k-m$, $l-n$ and $l-p$ are equal, that is if $p = n$ and $m = n + k - l$. Hence

    $$\mathrm{Cov}\left(Y_k, Y_l^2\right) = \sum_{n=0}^{+\infty} \phi^{3n+k-l}\,\mathbb{E}\left(Z_0^3\right) = \frac{\mathbb{E}(Z_0^3)}{1-\phi^3}\,\phi^{k-l}.$$

  • (2)

    $$\mathrm{Cov}\left(Y_k, Y_l Z_{l+1}\right) = \sum_{m,n=0}^{+\infty} \phi^{m+n}\,\mathbb{E}\left(Z_{k-m} Z_{l-n} Z_{l+1}\right) = 0,$$

    since the centered variables $Z_{k-m}$, $Z_{l-n}$ and $Z_{l+1}$ cannot all be equal.

  • (4) We can prove by induction that

    $$\mathrm{Cov}\left(Y_k^2, Y_l Z_{l+1}\right) = \frac{2\sigma_Z^4}{1-\phi^2}\,\phi^{2(k-l)-1}.$$

    Actually, for $k = l+1$,

    $$\mathrm{Cov}\left(Y_{l+1}^2, Y_l Z_{l+1}\right) = \mathrm{Cov}\left(\left(\phi Y_l + Z_{l+1}\right)^2, Y_l Z_{l+1}\right) = \phi^2\,\mathrm{Cov}\left(Y_l^2, Y_l Z_{l+1}\right) + \mathrm{Cov}\left(Z_{l+1}^2, Y_l Z_{l+1}\right) + 2\phi\,\mathrm{Var}\left(Y_l Z_{l+1}\right) = 0 + 0 + 2\phi\,\mathrm{Var}(Y_l)\,\mathrm{Var}(Z_{l+1}) = \frac{2\phi}{1-\phi^2}\,\sigma_Z^4,$$

    which proves the property for $k = l+1$. Now, assuming that the result holds for some $k > l$, we get

    $$\mathrm{Cov}\left(Y_{k+1}^2, Y_l Z_{l+1}\right) = \mathrm{Cov}\left(\left(\phi Y_k + Z_{k+1}\right)^2, Y_l Z_{l+1}\right) = \phi^2\,\mathrm{Cov}\left(Y_k^2, Y_l Z_{l+1}\right) + \mathrm{Cov}\left(Z_{k+1}^2, Y_l Z_{l+1}\right) + 2\phi\,\mathrm{Cov}\left(Y_k Z_{k+1}, Y_l Z_{l+1}\right).$$

    As $k > l$, the last two terms are equal to zero and

    $$\mathrm{Cov}\left(Y_{k+1}^2, Y_l Z_{l+1}\right) = \phi^2\,\mathrm{Cov}\left(Y_k^2, Y_l Z_{l+1}\right) = \frac{2\sigma_Z^4}{1-\phi^2}\,\phi^{2(k+1-l)-1},$$

    which proves by induction that (4) holds for any $k \geq l+1$.

Hence, since $|\phi| < 1$, one can find a constant $c > 0$ such that

$$\left|\mathrm{Cov}\left(\Delta_{n,k}, \Delta_{n,l}\right)\right| \leq c\,|\phi|^{|k-l|}.$$

Since $Y_k^{(m_n)} = Y_k - \phi^{m_n+1} Y_{k-m_n-1}$, we can check that there exists $c > 0$ such that

$$\left|\mathrm{Cov}\left(\Delta_{n,k} - \Delta_{n,k}^{(m_n)}, \Delta_{n,l} - \Delta_{n,l}^{(m_n)}\right)\right| \leq c\,|\phi|^{2m_n}|\phi|^{|k-l|} \quad\text{and}\quad \left|\mathrm{Cov}\left(\Delta_{n,k}^{(m_n)}, \Delta_{n,l}^{(m_n)}\right)\right| \leq c\,|\phi|^{|k-l|}.$$

Hence

$$\mathrm{Var}\left(\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\left(\Delta_{n,k} - \Delta_{n,k}^{(m_n)}\right)\right) \leq c\,\frac{|\phi|^{2m_n}}{1-|\phi|},$$

which proves (19), and (i) will follow from the fact that $\frac{1}{n}\,\mathrm{Var}\left(\sum_{k=0}^{n-1}\Delta_{n,k}\right) \xrightarrow[n \to +\infty]{} l$. Hence, we compute

$$\mathrm{Var}\left(\sum_{k=0}^{n-1}\Delta_{n,k}\right) = \mathrm{Var}\left(\sum_{k=0}^{n-1}\left(u + v\frac{k}{n} + w Y_k + x Z_{k+1}\right) Y_k\right) = \sum_{k,l=0}^{n-1} \mathrm{Cov}\left(\left(u + v\frac{k}{n} + w Y_k + x Z_{k+1}\right) Y_k,\ \left(u + v\frac{l}{n} + w Y_l + x Z_{l+1}\right) Y_l\right)$$

$$\begin{aligned}
&= \frac{1}{n}\sum_{k,l=0}^{n-1}\left[\left(u^2 n + uv(k+l) + \frac{v^2 kl}{n}\right)\mathrm{Cov}(Y_k, Y_l) + (uwn + vwk)\,\mathrm{Cov}\left(Y_k, Y_l^2\right) + (uwn + vwl)\,\mathrm{Cov}\left(Y_k^2, Y_l\right)\right] \\
&\quad + \frac{1}{n}\sum_{k,l=0}^{n-1}\left[(uxn + vkx)\,\mathrm{Cov}\left(Y_k, Y_l Z_{l+1}\right) + (uxn + vlx)\,\mathrm{Cov}\left(Y_k Z_{k+1}, Y_l\right) + n x^2\,\mathrm{Cov}\left(Y_k Z_{k+1}, Y_l Z_{l+1}\right)\right] \\
&\quad + \frac{1}{n}\sum_{k,l=0}^{n-1}\left[xwn\,\mathrm{Cov}\left(Y_k^2, Y_l Z_{l+1}\right) + xwn\,\mathrm{Cov}\left(Y_k Z_{k+1}, Y_l^2\right) + n w^2\,\mathrm{Cov}\left(Y_k^2, Y_l^2\right)\right].
\end{aligned}$$

Now in order to get an explicit asymptotic variance we need the following computations.

Lemma 2

For $|\phi| < 1$, we have the following behaviors as $n$ tends to $+\infty$:

  1. $\sum_{i=0}^{n}\sum_{j=0}^{n} \phi^{|j-i|} = n\,\frac{1+\phi}{1-\phi} + o(n)$;

  2. $\sum_{i=0}^{n}\sum_{j=0}^{n} i\,\phi^{|j-i|} = \frac{n^2}{2}\,\frac{1+\phi}{1-\phi} + o(n^2)$;

  3. $\sum_{i=0}^{n}\sum_{j=0}^{n} ij\,\phi^{|j-i|} = \frac{n^3}{3}\,\frac{1+\phi}{1-\phi} + o(n^3)$;

  4. $\sum_{i=0}^{n}\sum_{j=0}^{n} \phi^{\frac{1}{2}(j-i)+\frac{3}{2}|j-i|} = n\,\frac{1+\phi+\phi^2}{1-\phi^2} + o(n)$;

  5. $\sum_{i=0}^{n}\sum_{j=i+1}^{n} \phi^{2(j-i)-1} = n\,\frac{\phi}{1-\phi^2} + o(n)$;

  6. $\sum_{i=0}^{n}\sum_{j=0}^{n} i\,\phi^{\frac{1}{2}(j-i)+\frac{3}{2}|j-i|} = \frac{n^2}{2}\,\frac{1+\phi+\phi^2}{1-\phi^2} + o(n^2)$.
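These asymptotic equivalences can also be checked directly; a quick numerical sanity check of items 1 and 5, with illustrative values of $\phi$ and $n$, could read as follows.

```python
import numpy as np

phi, n = 0.8, 2000
idx = np.arange(n + 1)

# Item 1: (1/n) * sum_{i,j} phi^{|j-i|}  ->  (1 + phi)/(1 - phi)
D = np.abs(idx[:, None] - idx[None, :])
print((phi ** D).sum() / n, "vs", (1 + phi) / (1 - phi))

# Item 5: (1/n) * sum_{i < j} phi^{2(j-i)-1}  ->  phi/(1 - phi^2)
G = idx[None, :] - idx[:, None]          # j - i
print((phi ** (2.0 * G[G > 0] - 1)).sum() / n, "vs", phi / (1 - phi**2))
```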

Then we obtain that $\frac{1}{n}\,\mathrm{Var}\left(\sum_{k=0}^{n-1}\Delta_{n,k}\right) \xrightarrow[n \to +\infty]{} l$, where

$$\begin{aligned}
l &= \frac{\sigma_Z^2}{1-\phi^2}\,\frac{1+\phi}{1-\phi}\left(u^2 + uv + \frac{v^2}{3}\right) + (2uw + vw)\,\frac{\mathbb{E}(Z_0^3)}{1-\phi^3}\,\frac{1+\phi+\phi^2}{1-\phi^2} \\
&\quad + w^2\,\frac{\sigma_Z^4}{1-\phi^2}\,\frac{1+\phi^2}{1-\phi^2}\left(\frac{\mathbb{E}(Z_0^4)-3\sigma_Z^4}{\sigma_Z^4(1+\phi^2)} + \frac{2}{1-\phi^2}\right) + 4wx\,\frac{\sigma_Z^4}{1-\phi^2}\,\frac{\phi}{1-\phi^2} + x^2\,\frac{\sigma_Z^4}{1-\phi^2} \\
&= \frac{\sigma_Z^2}{(1-\phi)^2}\left[u^2 + uv + \frac{v^2}{3} + (2uw + vw)\,\frac{\mathbb{E}(Z_0^3)}{\sigma_Z^2(1+\phi)} + w^2\left(\frac{\mathbb{E}(Z_0^4)-3\sigma_Z^4}{\sigma_Z^2(1+\phi)^2} + \frac{2\sigma_Z^2(1+\phi^2)}{(1-\phi)(1+\phi)^3}\right) + 4wx\,\frac{\sigma_Z^2\,\phi}{(1+\phi)^2} + x^2\,\frac{\sigma_Z^2(1-\phi)}{1+\phi}\right] \\
&= \begin{pmatrix} u & v & w & x \end{pmatrix}\Sigma_1\begin{pmatrix} u & v & w & x \end{pmatrix}^t \geq 0.
\end{aligned}$$

For (ii) note that for $k < n$,

$$|U_{n,k}| = \frac{1}{\sqrt{n}}\left|\Delta_{n,k}^{(m_n)}\right| \leq \frac{1}{\sqrt{n}}\,V_k,$$

where $V_k = |u| + |v|\,W_k + |w|\left(W_k^2 + \rho_Y(0)\right) + |x Z_{k+1}|\,W_k$ with $W_k = \sum_{j=0}^{+\infty} |\phi|^j |Z_{k-j}|$. Since $\mathbb{E}(Z_0^4) < +\infty$, we have for all $\alpha \in (0, 4]$

$$\|W_k\|_{L^\alpha} \leq \|Z_0\|_{L^\alpha}\,\frac{1}{1-|\phi|}.$$

Hence, by the Cauchy–Schwarz inequality,

$$\|V_k\|_{L^2} \leq |u| + |v|\,\|W_k\|_{L^2} + |w|\left(\|W_k\|_{L^4}^2 + \rho_Y(0)\right) + |x|\,\|Z_{k+1}\|_{L^4}\,\|W_k\|_{L^4} \leq C,$$

where

$$C = |u| + |v|\,\|Z_0\|_{L^2}\,\frac{1}{1-|\phi|} + |w|\left(\|Z_0\|_{L^4}^2\,\frac{1}{(1-|\phi|)^2} + \rho_Y(0)\right) + |x|\,\|Z_0\|_{L^4}^2\,\frac{1}{1-|\phi|},$$

and (ii) holds since $\sum_{k=0}^{n-1} \mathbb{E}(U_{n,k}^2) \leq \frac{1}{n}\sum_{k=0}^{n-1} \|V_k\|_{L^2}^2 \leq C^2$.

For (iii), we remark that since $\mathbb{E}(V_0^2) < +\infty$ one has

$$\mathbb{E}\left(V_k^2\,\mathbb{1}_{V_k > n^{1/8}}\right) = \mathbb{E}\left(V_0^2\,\mathbb{1}_{V_0 > n^{1/8}}\right) \xrightarrow[n \to +\infty]{} 0,$$

and we can choose $m_n = \min\left(\mathbb{E}\left(V_0^2\,\mathbb{1}_{V_0 > n^{1/8}}\right)^{-1/4},\ n^{1/8}\right) \xrightarrow[n \to +\infty]{} +\infty$, in such a way that for $\varepsilon > 0$,

$$L_n(\varepsilon) = m_n^2 \sum_{k=0}^{n-1} \mathbb{E}\left(U_{n,k}^2\,\mathbb{1}_{m_n^2 |U_{n,k}| > \varepsilon}\right) \leq m_n^2 \sum_{k=0}^{n-1} \frac{1}{n}\,\mathbb{E}\left(V_k^2\,\mathbb{1}_{\frac{m_n^2}{\sqrt{n}} |V_k| > \varepsilon}\right) \leq m_n^2\,\mathbb{E}\left(V_0^2\,\mathbb{1}_{V_0 > \varepsilon n^{1/4}}\right),$$

where we have used the fact that $m_n^2 \leq n^{1/4}$ in the last inequality. Since $\varepsilon n^{1/4} > n^{1/8}$ for $n$ sufficiently large, we obtain

$$L_n(\varepsilon) \leq m_n^{-2},$$

so that (iii) holds.

A.3 Proof of Theorem 2

Let us write

$$H_n(X) = A_n(X)^t A_n(X) = \begin{pmatrix} n & \sum_{k=0}^{n-1} \frac{k}{n} & \sum_{k=0}^{n-1} X_{n,k} \\ \sum_{k=0}^{n-1} \frac{k}{n} & \sum_{k=0}^{n-1} \left(\frac{k}{n}\right)^2 & \sum_{k=0}^{n-1} \frac{k}{n} X_{n,k} \\ \sum_{k=0}^{n-1} X_{n,k} & \sum_{k=0}^{n-1} \frac{k}{n} X_{n,k} & \sum_{k=0}^{n-1} X_{n,k}^2 \end{pmatrix}.$$

In view of (3), if $(X_{n,k})$ satisfies (2) for some $\theta = (m, b, \phi) \in \mathbb{R}^2 \times (-1, 1)$, we can write $X_{n,k} = c_n + d\,\frac{k}{n} + Y_k$ with $Y$ a stationary centered solution of the AR(1) Eq. (4), namely

$$Y_{k+1} = \phi Y_k + Z_{k+1},$$

and $c_n$, $d$ are given by (5), i.e. $c_n = c - \frac{b}{n(1-\phi)^2}$, $d = \frac{b}{1-\phi}$ and $c = \frac{m}{1-\phi}$.

Hence, according to Corollary 1 and Lemma 1 above, the following convergences hold a.s. and in $L^2$:

$$\begin{aligned}
\frac{1}{n}\sum_{k=0}^{n-1} X_{n,k} &= c_n + d\,\frac{1}{n}\sum_{k=0}^{n-1}\frac{k}{n} + \frac{1}{n}\sum_{k=0}^{n-1} Y_k \xrightarrow[n \to +\infty]{} c + d/2, \\
\frac{1}{n}\sum_{k=0}^{n-1} \frac{k}{n} X_{n,k} &= c_n\,\frac{1}{n}\sum_{k=0}^{n-1}\frac{k}{n} + d\,\frac{1}{n}\sum_{k=0}^{n-1}\left(\frac{k}{n}\right)^2 + \frac{1}{n}\sum_{k=0}^{n-1}\frac{k}{n} Y_k \xrightarrow[n \to +\infty]{} c/2 + d/3, \\
\frac{1}{n}\sum_{k=0}^{n-1} X_{n,k}^2 &= c_n^2 + 2 c_n d\,\frac{1}{n}\sum_{k=0}^{n-1}\frac{k}{n} + d^2\,\frac{1}{n}\sum_{k=0}^{n-1}\left(\frac{k}{n}\right)^2 + \frac{1}{n}\sum_{k=0}^{n-1} Y_k^2 + 2 c_n\,\frac{1}{n}\sum_{k=0}^{n-1} Y_k + 2 d\,\frac{1}{n}\sum_{k=0}^{n-1}\frac{k}{n} Y_k \xrightarrow[n \to +\infty]{} c^2 + d^2/3 + cd + \rho_Y(0),
\end{aligned}$$

with $\rho_Y(0) = \sigma_Z^2/(1-\phi^2)$ for the stationary solution of the AR(1) equation. Therefore

$$\frac{1}{n} H_n(X) \xrightarrow[n \to +\infty]{} H = \begin{pmatrix} 1 & 1/2 & c + d/2 \\ 1/2 & 1/3 & c/2 + d/3 \\ c + d/2 & c/2 + d/3 & c^2 + d^2/3 + cd + \rho_Y(0) \end{pmatrix} \quad\text{a.s. and in } L^2.$$

Note that for all $(u, v, w) \in \mathbb{R}^3$,

$$\begin{pmatrix} u & v & w \end{pmatrix} H \begin{pmatrix} u & v & w \end{pmatrix}^t = (u + cw)^2 + \frac{1}{3}(v + dw)^2 + w^2 \rho_Y(0) + (u + cw)(v + dw) = \left((u + cw) + \frac{1}{2}(v + dw)\right)^2 + \frac{1}{12}(v + dw)^2 + w^2 \rho_Y(0),$$

and therefore the matrix H is positive definite.

Now let us consider, for $\tilde{\theta} = (\tilde{m}, \tilde{b}, \tilde{\phi}) \in \mathbb{R}^3$, the contrast function

$$M_n(\tilde{\theta}) = \left(X_n - A_n(X)\tilde{\theta}\right)^t\left(X_n - A_n(X)\tilde{\theta}\right) = \sum_{k=0}^{n-1}\left(X_{n,k+1} - \tilde{\phi}\,X_{n,k} - \tilde{m} - \tilde{b}\,\frac{k}{n}\right)^2,$$

where $X_n = (X_{n,k+1})_{0 \leq k \leq n-1}$. Let us write $\theta = (m, b, \phi)$ for the true parameter, such that

$$X_{n,k+1} - \phi\,X_{n,k} - m - b\,\frac{k}{n} = Z_{k+1},$$

meaning that $X_n - A_n(X)\theta = Z$ for $Z = (Z_{k+1})_{0 \leq k \leq n-1}$. Then $\hat{\theta}_n = \arg\min_{\tilde{\theta} \in \mathbb{R}^3} M_n(\tilde{\theta})$ satisfies $J_n(\hat{\theta}_n) = 0$ for $J_n = \nabla M_n$. But $J_n(\tilde{\theta}) = -2\,{}^t\!A_n(X)\left(X_n - A_n(X)\tilde{\theta}\right)$ and $J_n(\theta) = J_n(\theta) - J_n(\hat{\theta}_n) = 2 H_n(X)\,(\theta - \hat{\theta}_n)$. On the other hand, since $X_n - A_n(X)\theta = Z$, we get

$$J_n(\theta) = -2\begin{pmatrix} \sum_{k=0}^{n-1} Z_{k+1} \\ \sum_{k=0}^{n-1} \frac{k}{n}\,Z_{k+1} \\ \sum_{k=0}^{n-1} X_{n,k}\,Z_{k+1} \end{pmatrix}.$$
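In practice $\hat{\theta}_n$ is simply the ordinary least-squares fit of $X_{n,k+1}$ on $(1, k/n, X_{n,k})$; a minimal sketch of this contrast minimization (our own illustrative code, not the authors' implementation) is given below.

```python
import numpy as np

def estimate_theta(X):
    """Least-squares estimate of theta = (m, b, phi) from an observed path (X_0, ..., X_n)."""
    n = len(X) - 1
    k = np.arange(n)
    A = np.column_stack([np.ones(n), k / n, X[:-1]])   # design matrix A_n(X)
    theta_hat, *_ = np.linalg.lstsq(A, X[1:], rcond=None)
    return theta_hat                                   # (m_hat, b_hat, phi_hat)
```

Applied to a path produced by the simulation sketch above, `estimate_theta(X)` returns the triple $(\hat{m}_n, \hat{b}_n, \hat{\phi}_n)$, from which $\hat{c}_n = \hat{m}_n/(1-\hat{\phi}_n)$ and $\hat{d}_n = \hat{b}_n/(1-\hat{\phi}_n)$ can be recovered.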

We will prove that $\frac{1}{2n} J_n(\theta) \xrightarrow[n \to +\infty]{} 0$ a.s. and in $L^2$, with $\frac{1}{2\sqrt{n}} J_n(\theta) \xrightarrow[n \to +\infty]{d} \mathcal{N}(0, \Sigma)$, where $\Sigma = \sigma_Z^2 H$. Since $(X_{n,k})$ satisfies (3) for the true parameter $\theta$, we can write

$$\frac{1}{n}\sum_{k=0}^{n-1} X_{n,k} Z_{k+1} = c_n \times \frac{1}{n}\sum_{k=0}^{n-1} Z_{k+1} + d \times \frac{1}{n}\sum_{k=0}^{n-1} \frac{k}{n}\,Z_{k+1} + \frac{1}{n}\sum_{k=0}^{n-1} Y_k Z_{k+1}.$$

Each empirical mean tends to 0 a.s. and in $L^2$ according to Corollary 1 (choosing $\phi = 0$ for (i)). This allows us to conclude for the first point, using the fact that $(c_n)$ is a bounded sequence. Using the fact that $(\theta - \hat{\theta}_n) = \left(\frac{1}{n} H_n\right)^{-1}\frac{1}{2n} J_n(\theta)$, we can conclude for the a.s. convergence. Now, to prove the convergence in distribution, we use again the Cramér–Wold device (see Proposition 6.3.1 of [10] for instance) and consider, for $(u, v, w) \in \mathbb{R}^3 \setminus \{0\}$,

$$\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\left(u + v\frac{k}{n} + w X_{n,k}\right) Z_{k+1} = \frac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\left((u + c_n w) + (v + dw)\frac{k}{n} + w Y_k\right) Z_{k+1}.$$

The convergence will follow from a Lindeberg condition for triangular arrays of martingales [33]. Actually, let us write $\Delta_{n,k+1} = \left((u + c_n w) + (v + dw)\frac{k}{n} + w Y_k\right) Z_{k+1}$ and $S_{n,l} = \sum_{k=0}^{l}\Delta_{n,k+1}$. We may consider the filtration $\mathcal{F}_{n,l} = \sigma(Z_k, k \leq l) = \mathcal{F}_l$. It follows that

$$\mathbb{E}\left(\Delta_{n,k+1} \mid \mathcal{F}_{n,k}\right) = \mathbb{E}\left(\Delta_{n,k+1} \mid \mathcal{F}_k\right) = \left((u + c_n w) + (v + dw)\frac{k}{n} + w Y_k\right)\mathbb{E}(Z_{k+1}) = 0 \quad\text{a.s.},$$

since $Z_{k+1}$ is centered and independent from $\mathcal{F}_k$, and $(S_{n,l})$ is a triangular array of martingales. Then let us write $S_n = S_{n,n-1}$ and $s_n^2 = \mathrm{Var}(S_{n,n-1})$. Hence, according to Theorem 2 of [33] or [34], if

(20) $$\frac{1}{s_n^2}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2 \mid \mathcal{F}_k\right) \xrightarrow[n \to +\infty]{\mathbb{P}} 1,$$

(21) $$\text{and}\quad \frac{1}{s_n^2}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2\,\mathbb{1}_{|\Delta_{n,k+1}| > \varepsilon s_n} \mid \mathcal{F}_k\right) \xrightarrow[n \to +\infty]{\mathbb{P}} 0, \quad\text{for all } \varepsilon > 0,$$

we have $\frac{S_n}{s_n} \xrightarrow[n \to +\infty]{d} \mathcal{N}(0, 1)$.

So let us first compute the asymptotic variance. We write $\bar{S}_n = \sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2 \mid \mathcal{F}_k\right)$ and note that $s_n^2 = \mathbb{E}(\bar{S}_n)$. In our setting, it is clear that

$$\mathbb{E}\left(\Delta_{n,k+1}^2 \mid \mathcal{F}_k\right) = \sigma_Z^2\left((u + c_n w) + (v + dw)\frac{k}{n} + w Y_k\right)^2.$$

By Corollary 1 we have $\frac{1}{n}\sum_{k=0}^{n-1} Y_k \to 0$, $\frac{1}{n}\sum_{k=0}^{n-1}\frac{k}{n} Y_k \to 0$ and $\frac{1}{n}\sum_{k=0}^{n-1} Y_k^2 \to \rho_Y(0)$ a.s., and therefore

$$\frac{1}{n}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2 \mid \mathcal{F}_k\right) \xrightarrow[n \to +\infty]{} s^2 = \sigma_Z^2\left((u + cw)^2 + \frac{1}{3}(v + dw)^2 + w^2\rho_Y(0) + (u + cw)(v + dw)\right) \quad\text{a.s.}$$

Note that we may deduce from these lines that the asymptotic covariance matrix is given by

$$\Sigma = \sigma_Z^2\begin{pmatrix} 1 & \frac{1}{2} & c + \frac{d}{2} \\ \frac{1}{2} & \frac{1}{3} & \frac{c}{2} + \frac{d}{3} \\ c + \frac{d}{2} & \frac{c}{2} + \frac{d}{3} & c^2 + \frac{d^2}{3} + \rho_Y(0) + cd \end{pmatrix} = \sigma_Z^2 H.$$

Moreover, since $(u, v, w) \neq 0$ and $H$ is positive definite, we check that $s^2 > 0$. Then,

$$\frac{s_n^2}{n} = \mathbb{E}\left(\frac{1}{n}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2 \mid \mathcal{F}_k\right)\right) \xrightarrow[n \to +\infty]{} s^2,$$

and (20) follows by writing

$$\frac{1}{s_n^2}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2 \mid \mathcal{F}_k\right) = \frac{n}{s_n^2} \times \frac{1}{n}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2 \mid \mathcal{F}_k\right).$$

Now it remains to prove the Lindeberg condition (21). So let us fix $N \in \mathbb{N}^*$ large enough such that for all $n \geq 2N$ we have $s_n > s_N$, choose $C > 0$ such that $|\Delta_{n,k+1}| \leq C(1 + |Y_k|)|Z_{k+1}|$, and remark that for $n \geq 2N$ we have

$$\frac{1}{n}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2\,\mathbb{1}_{|\Delta_{n,k+1}| > \varepsilon s_n} \mid \mathcal{F}_k\right) \leq \frac{1}{n}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2\,\mathbb{1}_{|\Delta_{n,k+1}| > \varepsilon s_N} \mid \mathcal{F}_k\right) \leq \frac{C^2}{n}\sum_{k=0}^{n-1}\mathbb{E}\left((1 + |Y_k|)^2 Z_{k+1}^2\,\mathbb{1}_{C(1 + |Y_k|)|Z_{k+1}| > \varepsilon s_N} \mid \mathcal{F}_k\right).$$

Hence, by stationarity, we get

$$\mathbb{E}\left(\frac{1}{n}\sum_{k=0}^{n-1}\mathbb{E}\left(\Delta_{n,k+1}^2\,\mathbb{1}_{|\Delta_{n,k+1}| > \varepsilon s_n} \mid \mathcal{F}_k\right)\right) \leq C^2\,\mathbb{E}\left((1 + |Y_0|)^2 Z_1^2\,\mathbb{1}_{(1 + |Y_0|)|Z_1| > \varepsilon s_N/C}\right) \xrightarrow[N \to +\infty]{} 0,$$

since $\mathbb{E}\left((1 + |Y_0|)^2 Z_1^2\right) < +\infty$, which allows us to get (21). We have therefore $\frac{S_n}{s_n} \xrightarrow[n \to +\infty]{d} \mathcal{N}(0, 1)$ and consequently, by Slutsky's theorem, $\frac{1}{2\sqrt{n}} J_n(\theta) = \frac{s_n}{\sqrt{n}}\,\frac{S_n}{s_n} \xrightarrow[n \to +\infty]{d} \mathcal{N}(0, s^2)$. But $\frac{1}{2\sqrt{n}} J_n(\theta) = \frac{1}{n} H_n(X)\,\sqrt{n}\,(\theta - \hat{\theta}_n)$, and we can write

$$\sqrt{n}\,(\theta - \hat{\theta}_n) = \left(\frac{1}{n} H_n(X)\right)^{-1}\frac{1}{2\sqrt{n}} J_n(\theta).$$

Again, by Slutsky’s theorem, we may deduce that

$$\sqrt{n}\,(\theta - \hat{\theta}_n) \xrightarrow[n \to +\infty]{d} H^{-1}\,\mathcal{N}\left(0, \Sigma\right) = \mathcal{N}\left(0, H^{-1}\,\Sigma\,{}^t\!H^{-1}\right) = \mathcal{N}\left(0, \sigma_Z^2 H^{-1}\right).$$

Note that $\det(H) = \frac{\rho_Y(0)}{12}$, so

$$\Sigma_2 = \sigma_Z^2 H^{-1} = (1 - \phi^2)\begin{pmatrix} c^2 + 4\rho_Y(0) & cd - 6\rho_Y(0) & -c \\ cd - 6\rho_Y(0) & d^2 + 12\rho_Y(0) & -d \\ -c & -d & 1 \end{pmatrix},$$

with $c = \frac{m}{1-\phi}$, $d = \frac{b}{1-\phi}$ and $\rho_Y(0) = \frac{\sigma_Z^2}{1-\phi^2}$.
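For confidence intervals, this closed form can be evaluated with plug-in estimates; the following sketch (our own helper, with hypothetical argument names) builds $\Sigma_2$ from estimated values of $m$, $b$, $\phi$ and $\sigma_Z^2$.

```python
import numpy as np

def sigma2_matrix(m_hat, b_hat, phi_hat, sigma2_hat):
    """Plug-in version of Sigma_2 = sigma_Z^2 H^{-1}, expressed through c, d and rho_Y(0)."""
    c = m_hat / (1.0 - phi_hat)
    d = b_hat / (1.0 - phi_hat)
    rho0 = sigma2_hat / (1.0 - phi_hat**2)              # rho_Y(0)
    return (1.0 - phi_hat**2) * np.array([
        [c * c + 4.0 * rho0,  c * d - 6.0 * rho0, -c],
        [c * d - 6.0 * rho0,  d * d + 12.0 * rho0, -d],
        [-c,                  -d,                  1.0],
    ])

# Standard errors of (m_hat, b_hat, phi_hat): np.sqrt(np.diag(sigma2_matrix(...)) / n).
```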

A.4 Numerical results

We first present some histograms obtained with 1000 simulations, ν = 0.3, λ = 1, σ = 0.2, and n = 1000. The red lines correspond to the theoretical Gaussian distribution with variance computed according to Theorem 2.
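As an indication only, the Monte Carlo loop behind such histograms can be sketched as follows, reusing the `simulate_path` and `estimate_theta` helpers sketched in the previous sections; apart from ν, λ, σ and n, which follow the text, the parameter values are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

n, n_sim = 1000, 1000
phi, c, d, sigma, lam, nu = 0.5, 15.0, -10.0, 0.2, 1.0, 0.3

phi_err = np.empty(n_sim)
for s in range(n_sim):
    X, _ = simulate_path(n=n, phi=phi, c=c, d=d, sigma=sigma, lam=lam, nu=nu, seed=s)
    m_hat, b_hat, phi_hat = estimate_theta(X)
    phi_err[s] = phi_hat - phi

plt.hist(phi_err, bins=40, density=True)                 # empirical distribution of phi_hat - phi
plt.title(r"$\hat{\phi}_n - \phi$ over %d simulations" % n_sim)
plt.show()
```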

Figure 21: Histograms for a = 5, b = −5. First row: $(\hat{m}_{n}^{(1)}-m)$. Second row: $(\hat{b}_{n}^{(1)}-b)$. Third row: $(\hat{\phi}_{n}^{(1)}-\phi)$.

Figure 22: Histograms for c = 5, d = −5, n = 1000. First row: $(\hat{c}_{n}^{(1)}-c)$. Second row: $(\hat{d}_{n}^{(1)}-d)$.

Figure 23: Test over one simulation for c = 15, d = −10, b = d(1 − ϕ), a = c(1 − ϕ) − λν, X_0 = c_n, with c_n = c − d/(n(1 − ϕ)). First line: σ = 0.1, second line: σ = 0.3, third line: σ = 0.5.

Figure 24: Test over one simulation for a = 5, b = −5, d = b/(1 − ϕ), c_n = (a + λν)/(1 − ϕ) − b/(n(1 − ϕ)²), X_0 = c_n. First line: σ = 0.1, second line: σ = 0.3, third line: σ = 0.5.

Figure 25: Means and standard deviations of the estimators of ϕ, c_n and d, according to Theorem 2 (first column) or linear regression (second column), over 100 simulations of $(X_{n,k})_{0\le k\le n}$, for n ∈ {100, 200, …, 5000} with ϕ = 0.9, ν = 0.3, λ = 1, c = 15, d = −10, X_0 = c_n, with c_n = c − d/(n(1 − ϕ)).

Table 2:

TPR for tests over one simulation for ν = 0.3, λ = 1, c = 15, d = −10, b = d(1 − ϕ), a = c(1 − ϕ) − λν, X 0 = c n , with c n = cd/(n(1 − ϕ)) and n = 100, as shown in Figure 23 with tolerance α = β = 0.01.

σ = 0.1 | σ = 0.3 | σ = 0.5
ϕ: 0.1 0.5 0.9 | 0.1 0.5 0.9 | 0.1 0.5 0.9
$T^{(1)}$ 1 1 1 | 0.9032 0.9677 0.8387 | 0.8710 0.5484 0.3226
$T^{(2)}$ 1 1 1 | 0.9032 0.9677 0.8387 | 0.8710 0.5484 0.2903
$\tilde{T}^{(1)}$ 1 1 1 | 0.7419 0.9355 0.8065 | 0.6452 0.2258 0.0645
$\tilde{T}^{(2)}$ 1 1 1 | 0.7097 0.9355 0.8065 | 0.6452 0.2258 0.0645
$\tilde{T}_c^{(1)}$ 1 1 1 | 0.6219 0.8065 0.7419 | 0.6129 0.2258 0.1613
$\tilde{T}_c^{(2)}$ 1 1 1 | 0.6219 0.7742 0.7419 | 0.6129 0.2903 0.0645
$T_c^{(1)}$ 1 1 1 | 0.5484 0.7419 0.6774 | 0.6129 0.1613 0.0645
$T_c^{(2)}$ 1 1 1 | 0.5484 0.7419 0.6774 | 0.6129 0.1613 0.0645
  1. The number of jumps is 31.

Table 3:

FPR for tests over one simulation for ν = 0.3, λ = 1, c = 15, d = −10, b = d(1 − ϕ), a = c(1 − ϕ) − λν, X 0 = c n , with c n = cd/(n(1 − ϕ)) and n = 100, as shown in Figure 23 with tolerance α = β = 0.01.

σ = 0.1 | σ = 0.3 | σ = 0.5
ϕ: 0.1 0.5 0.9 | 0.1 0.5 0.9 | 0.1 0.5 0.9
$T^{(1)}$ 0.0145 0.0145 0 | 0.0145 0.0290 0.0290 | 0.1159 0.0145 0
$T^{(2)}$ 0.0145 0 0 | 0.0145 0.0290 0.0290 | 0.1159 0.0145 0
$\tilde{T}^{(1)}$ 0 0 0 | 0.0145 0 0.0145 | 0.0435 0 0
$\tilde{T}^{(2)}$ 0 0 0 | 0.0145 0 0.0145 | 0.0435 0 0
$\tilde{T}_c^{(1)}$ 0 0 0 | 0.0145 0 0 | 0.0145 0 0
$\tilde{T}_c^{(2)}$ 0 0 0 | 0.0145 0 0 | 0.0145 0 0
$T_c^{(1)}$ 0 0 0 | 0 0 0 | 0.0145 0 0
$T_c^{(2)}$ 0 0 0 | 0 0 0 | 0.0145 0 0
  1. The number of jumps is 31.

Table 4:

TPR for tests over one simulation for ν = 0.3, λ = 1, a = 5, b = −5, d = b/(1 − ϕ), c_n = (a + λν)/(1 − ϕ) − b/(n(1 − ϕ)²), X_0 = c_n and n = 100, as shown in Figure 24 with tolerance α = β = 0.01.

σ = 0.1 | σ = 0.3 | σ = 0.5
ϕ: 0.1 0.5 0.9 | 0.1 0.5 0.9 | 0.1 0.5 0.9
$T^{(1)}$ 1 1 1 | 0.9 0.9 0.7333 | 0.5 0.6 0.3
$T^{(2)}$ 1 1 1 | 0.9 0.9 0.7333 | 0.5333 0.6 0.2667
$\tilde{T}^{(1)}$ 1 1 1 | 0.7333 0.8 0.6333 | 0.2 0.4667 0
$\tilde{T}^{(2)}$ 1 1 1 | 0.7333 0.8 0.6333 | 0.2 0.4667 0
$\tilde{T}_c^{(1)}$ 1 1 0.9667 | 0.6667 0.6333 0.6333 | 0.2333 0.4 0
$\tilde{T}_c^{(2)}$ 1 1 0.9667 | 0.6667 0.6667 0.6333 | 0.2333 0.4333 0
$T_c^{(1)}$ 1 1 0.9667 | 0.5333 0.6333 0.5667 | 0.2 0.2333 0
$T_c^{(2)}$ 1 1 0.9667 | 0.5333 0.6333 0.5667 | 0.2 0.6667 0
  1. The number of jumps is 30.

Table 5:

FPR for tests over one simulation for ν = 0.3, λ = 1, a = 5, b = −5, d = b/(1 − ϕ), c_n = (a + λν)/(1 − ϕ) − b/(n(1 − ϕ)²), X_0 = c_n and n = 100, as shown in Figure 24 with tolerance α = β = 0.01.

σ = 0.1 | σ = 0.3 | σ = 0.5
ϕ: 0.1 0.5 0.9 | 0.1 0.5 0.9 | 0.1 0.5 0.9
$T^{(1)}$ 0 0.0286 0 | 0 0 0.0286 | 0.0143 0.0571 0
$T^{(2)}$ 0 0.0286 0 | 0 0 0.0286 | 0.0143 0.0571 0
$\tilde{T}^{(1)}$ 0 0.0143 0 | 0 0 0.0143 | 0 0.0143 0
$\tilde{T}^{(2)}$ 0 0.0143 0 | 0 0 0.0143 | 0 0.0143 0
$\tilde{T}_c^{(1)}$ 0 0 0 | 0 0 0.0143 | 0 0.0143 0
$\tilde{T}_c^{(2)}$ 0 0 0 | 0 0 0.0143 | 0 0.0143 0
$T_c^{(1)}$ 0 0 0 | 0 0 0.0143 | 0 0 0
$T_c^{(2)}$ 0 0 0 | 0 0 0 | 0.0143 0 0
  1. The number of jumps is 30.

Now we present typical realizations of trajectories for several choices of parameters, but with the same jumps (red stars), in order to illustrate the effect of the parameters on shapes and tests. We set ν = 0.3, λ = 1, n = 100 and consider the tests: blue crosses mark the jumps detected with $T^{(1)}$ at level α = 0.01, green crosses the jumps detected with the multiple test correction $\tilde{T}^{(1)}$ at level β = 0.01, black pluses the jumps detected with the multiple test correction $\tilde{T}_c^{(1)}$ at level β = 0.01, and magenta pluses the jumps detected with $T_c^{(1)}$ at level α = 0.01. We add circles for the corresponding jumps detected using $T^{(2)}$ in blue, $\tilde{T}^{(2)}$ in green, $\tilde{T}_c^{(2)}$ in black and $T_c^{(2)}$ in magenta. The blue line, respectively the green line, is the line estimated with the first, respectively the second, estimator. The dotted red line corresponds to the straight line with parameters (c_n, d) induced by our choices of values. The corresponding true and false positive rates are given in the tables. We first present results for c = 15, d = −10 in Figure 23 and Tables 2 and 3.
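The tests $T^{(1)}$, $T^{(2)}$ and their corrected versions are defined in the body of the paper; as a rough, hedged stand-in for the detection step, the sketch below flags a spike at time $k+1$ when the estimated innovation is implausibly large for a centered Gaussian, either at a fixed per-test level or after a Benjamini–Hochberg correction [30]. The robust scale estimate and all function names here are our own illustrative choices, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def detect_spikes(X, level=0.01, bh=False):
    """Flag indices k+1 whose estimated innovation is abnormally large (upward spikes)."""
    n = len(X) - 1
    k = np.arange(n)
    A = np.column_stack([np.ones(n), k / n, X[:-1]])
    coef = np.linalg.lstsq(A, X[1:], rcond=None)[0]                 # (m_hat, b_hat, phi_hat)
    resid = X[1:] - A @ coef                                        # estimated innovations
    scale = np.median(np.abs(resid - np.median(resid))) / 0.6745    # robust sigma (our choice)
    pval = stats.norm.sf(resid / scale)                             # one-sided p-values
    if bh:                                                          # Benjamini-Hochberg at `level`
        order = np.argsort(pval)
        passed = pval[order] <= level * np.arange(1, n + 1) / n
        detected = np.zeros(n, dtype=bool)
        if passed.any():
            detected[order[: passed.nonzero()[0].max() + 1]] = True
    else:
        detected = pval < level
    return np.flatnonzero(detected) + 1                             # times k+1 of detected spikes

def rates(detected, true_spikes, n):
    """True and false positive rates of a detection against the true spike times."""
    det, tru = set(map(int, detected)), set(map(int, true_spikes))
    tpr = len(det & tru) / max(len(tru), 1)
    fpr = len(det - tru) / max(n - len(tru), 1)
    return tpr, fpr
```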

References

1. Levine, JE, Pau, KY, Ramirez, VD, Jackson, GL. Simultaneous measurement of luteinizing hormone-releasing hormone and luteinizing hormone release in unanesthetized, ovariectomized sheep. Endocrinology 1982;111:1449–55. https://doi.org/10.1210/endo-111-5-1449.

2. Clarke, IJ, Cummins, JT. The temporal relationship between gonadotropin releasing hormone (GnRH) and luteinizing (LH) secretion in ovariectomized ewes. Endocrinology 1982;111:1737–9. https://doi.org/10.1210/endo-111-5-1737.

3. Caraty, A, Orgeur, P, Thiery, J-C. Demonstration of the pulsatile secretion of LH-RH into hypophysial portal blood of ewes using an original technic for multiple samples. Comptes rendus des séances de l'Académie des sciences. Série III, Sciences de la vie 1982;295:10.

4. Moenter, SM. GnRH neuron electrophysiology: a decade of study. Brain Res 2010;1364:10–24. https://doi.org/10.1016/j.brainres.2010.09.066.

5. Wray, S. From nose to brain: development of gonadotrophin-releasing hormone-1 neurones. J Neuroendocrinol 2010;22:743–53. https://doi.org/10.1111/j.1365-2826.2010.02034.x.

6. Constantin, S, Caraty, A, Wray, S, Duittoz, A. Development of gonadotropin-releasing hormone-1 secretion in mouse nasal explants. Endocrinology 2009;150:3221–7. https://doi.org/10.1210/en.2008-1711.

7. Georgelin, C, Constant, C, Biermé, H, Chevrot, G, Piégu, B, Fleurot, R, et al. GnRH paracrine/autocrine action induced a non-stochastic behaviour and episodic synchronisation of GnRH neurons activity: in vitro and in silico study. In preparation.

8. Burkitt, AN. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol Cybern 2006;95:1–19. https://doi.org/10.1007/s00422-006-0068-6.

9. Gerstner, W, Kistler, W. Spiking neuron models: an introduction. New York, NY, USA: Cambridge University Press; 2002. https://doi.org/10.1017/CBO9780511815706.

10. Brockwell, PJ, Davis, RA. Time series: theory and methods. In: Springer series in statistics. New York: Springer; 2006 [Reprint of the second (1991) edition].

11. Perron, P, Yabu, T. Testing for trend in the presence of autoregressive error: a comment. J Am Stat Assoc 2012;107:844. https://doi.org/10.1080/01621459.2012.668638.

12. Roy, A, Falk, B, Fuller, WA. Testing for trend in the presence of autoregressive error. J Am Stat Assoc 2004;99:1082–91. https://doi.org/10.1198/016214504000000520.

13. Qiu, D, Shao, Q, Yang, L. Efficient inference for autoregressive coefficients in the presence of trends. J Multivariate Anal 2013;114:40–53. https://doi.org/10.1016/j.jmva.2012.07.016.

14. McLachlan, G, Peel, D. Finite mixture models. In: Wiley series in probability and statistics. Wiley; 2000. https://doi.org/10.1002/0471721182.

15. Fraley, C, Raftery, AE, Murphy, TB, Scrucca, L. MCLUST version 4 for R: normal mixture modeling for model-based clustering, classification, and density estimation. Technical report No. 597; 2012.

16. Jahn, P, Berg, RW, Hounsgaard, J, Ditlevsen, S. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process. J Comput Neurosci 2011;31:563–79. https://doi.org/10.1007/s10827-011-0326-z.

17. Ditlevsen, S, Samson, A. Parameter estimation in neuronal stochastic differential equation models from intracellular recordings of membrane potentials in single neurons: a review. J SFdS 2016;157:6–21.

18. Ali, F, Kwan, AC. Interpreting in vivo calcium signals from neuronal cell bodies, axons, and dendrites: a review. Neurophotonics 2019;7:1–12. https://doi.org/10.1117/1.NPh.7.1.011402.

19. Athreya, KB, Pantula, SG. A note on strong mixing of ARMA processes. Stat Probab Lett 1986;4:187–90. https://doi.org/10.1016/0167-7152(86)90064-7.

20. Doukhan, P. Stochastic models for time series. In: Volume 80 of Mathématiques & Applications (Berlin). Cham: Springer; 2018. https://doi.org/10.1007/978-3-319-76938-7.

21. van der Vaart, AW. Asymptotic statistics. In: Volume 3 of Cambridge series in statistical and probabilistic mathematics. Cambridge: Cambridge University Press; 1998. https://doi.org/10.1017/CBO9780511802256.

22. Kulperger, RJ. On the residuals of autoregressive processes and polynomial regression. Stochastic Process Appl 1985;21:107–18. https://doi.org/10.1016/0304-4149(85)90380-1.

23. Dick, NP, Bowden, DC. Maximum likelihood estimation for mixtures of two normal distributions. Biometrics 1973;29:781–90. https://doi.org/10.2307/2529143.

24. Kikawa, C, Shatalov, M, Kloppers, P, Mkolesia, A. Parameter estimation for a mixture of two univariate Gaussian distributions: a comparative analysis of the proposed and maximum likelihood methods. J Adv Math Comput Sci 2015;12:1–8. https://doi.org/10.9734/BJMCS/2016/16617.

25. Behboodian, J. Information matrix for a mixture of two normal distributions. J Stat Comput Simulat 1972;1:295–314. https://doi.org/10.1080/00949657208810024.

26. Ng, SK, Krishnan, T, McLachlan, GJ. The EM algorithm. In: Springer handbook of computational statistics. Heidelberg: Springer; 2012. https://doi.org/10.1007/978-3-642-21551-3_6.

27. Jeff Wu, CF. On the convergence properties of the EM algorithm. Ann Stat 1983;11:95–103. https://doi.org/10.1214/aos/1176346060.

28. Macmillan, NA, Creelman, CD. Detection theory: a user's guide, 2nd ed. Psychology Press; 2004. https://doi.org/10.4324/9781410611147.

29. Green, DM, Swets, JA. Signal detection theory and psychophysics. John Wiley & Sons Ltd; 1966.

30. Benjamini, Y, Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Roy Stat Soc B 1995;57:289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x.

31. Müller, P, Parmigiani, G, Robert, C, Rousseau, J. Optimal sample size for multiple testing: the case of gene expression microarrays. J Am Stat Assoc 2004;99:990–1001. https://doi.org/10.1198/016214504000001646.

32. Heinrich, L. Asymptotic behaviour of an empirical nearest-neighbour distance function for stationary Poisson cluster processes. Math Nachr 1988;136:131–48. https://doi.org/10.1002/mana.19881360109.

33. Brown, BM. Martingale central limit theorems. Ann Math Stat 1971;42:59–66. https://doi.org/10.1214/aoms/1177693494.

34. Gaenssler, P, Strobel, J, Stute, W. On central limit theorems for martingale triangular arrays. Acta Math Acad Sci Hungar 1978;31:205–16. https://doi.org/10.1007/bf01901971.

Received: 2020-04-03
Revised: 2021-03-17
Accepted: 2021-07-31
Published Online: 2021-09-30

© 2021 Walter de Gruyter GmbH, Berlin/Boston
