
Complete consistency for the estimator of nonparametric regression model based on m-END errors

  • Shui-Li Zhang, Tiantian Hou and Cong Qu
Published/Copyright: November 18, 2021

Abstract

In this paper, we study the complete consistency of the estimator of a nonparametric regression model based on m-END errors and obtain the convergence rates of the complete consistency under rather general conditions. Finally, some simulations are presented to verify the validity of our results.

MSC 2010: 62G05; 62G08; 60E15

1 Introduction

In many statistical models and applications, the random variables are usually assumed to be independent. However, the independence assumption is not always plausible in practice. Many statisticians have therefore extended this condition to various dependence and mixing structures, such as positively associated (PA) random variables, negatively associated (NA) random variables, negatively orthant dependent (NOD) random variables, extended negatively dependent (END) random variables, $\rho$-mixing random variables, $\varphi$-mixing random variables, and so on.

First, let us recall the concept of END random variables, which was introduced by Liu [1] as follows.

Definition 1.1

A finite collection of random variables $X_1, X_2, \ldots, X_n$ is said to be END if there exists a constant $M > 0$ such that

(1.1) $P(X_1 > x_1, X_2 > x_2, \ldots, X_n > x_n) \le M \prod_{i=1}^{n} P(X_i > x_i)$

and

(1.2) $P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n) \le M \prod_{i=1}^{n} P(X_i \le x_i)$

hold for each $n \ge 1$ and all real numbers $x_1, x_2, \ldots, x_n$.

An infinite sequence $\{X_n, n \ge 1\}$ is said to be END if every finite subcollection is END. An array of random variables $\{X_{ni}, 1 \le i \le n, n \ge 1\}$ is called row-wise END if, for each $n \ge 1$, $\{X_{ni}, 1 \le i \le n\}$ is a sequence of END random variables.

Recently, inspired by the definition of $m$-NA random variables, Wang et al. [2] extended the concept of END random variables to the case of $m$-END random variables as follows.

Definition 1.2

Let $m \ge 1$ be a fixed integer. A sequence of random variables $\{X_n, n \ge 1\}$ is said to be $m$-extended negatively dependent ($m$-END) if, for any $n \ge 2$ and any $i_1, i_2, \ldots, i_n$ such that $|i_k - i_j| \ge m$ for all $1 \le k \ne j \le n$, the random variables $X_{i_1}, X_{i_2}, \ldots, X_{i_n}$ are END.

The END random variables form a very general dependence structure, which includes independent random variables, NA random variables, and NOD random variables as special cases. It is easily seen that END random variables are a special case of $m$-END random variables (take $m = 1$). Hence, the study of the limiting behavior of $m$-END random variables is of considerable interest. Since the concept of $m$-END random variables was introduced by Wang et al. [2], many authors have been devoted to studying its limit theory. Wang et al. [2] gave the Kolmogorov exponential inequality for $m$-END random variables and, by using it, obtained the complete convergence for arrays of $m$-END random variables and the complete consistency for the estimator of nonparametric regression models. Wang et al. [3] studied the complete convergence and complete moment convergence for arrays of row-wise $m$-END random variables and, as an application of the complete convergence, obtained the strong consistency of the least squares estimator in EV regression models.

Consider the following nonparametric regression model

(1.3) $Y_{ni} = f(x_{ni}) + \varepsilon_{ni}, \quad 1 \le i \le n, \ n \ge 1,$

where $\{x_{ni}\}$ are known fixed design points from $A \subset \mathbb{R}^q$, a given compact set for some $q \ge 1$, $f(\cdot)$ is an unknown regression function on $A$, and $\{\varepsilon_{ni}, 1 \le i \le n\}$ are random errors with $E\varepsilon_{ni} = 0$. As an estimator of $f(x)$, the following linear weighted estimator was given:

(1.4) $f_n(x) = \sum_{i=1}^{n} W_{ni}(x) Y_{ni},$

where $W_{ni}(x) = W_{ni}(x; x_{n1}, \ldots, x_{nn})$, $i = 1, 2, \ldots, n$, are the weight functions.
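To make the estimator (1.4) concrete, here is a minimal numerical sketch in Python. The Gaussian-kernel weight choice and the bandwidth `h` are purely illustrative assumptions of ours; the paper works with abstract weights satisfying the conditions stated in Section 2.

```python
import numpy as np

def linear_weighted_estimate(x, x_design, y, weights_fn):
    """Evaluate the linear weighted estimator f_n(x) = sum_i W_ni(x) * Y_ni of (1.4)."""
    w = weights_fn(x, x_design)        # weights W_ni(x), i = 1, ..., n
    return float(np.sum(w * y))

def gaussian_weights(x, x_design, h=0.1):
    """One hypothetical weight choice (a Nadaraya-Watson form with a Gaussian
    kernel and bandwidth h); this particular choice is an illustration, not
    the paper's construction."""
    k = np.exp(-0.5 * ((x_design - x) / h) ** 2)
    return k / k.sum()                 # normalized, so the weights sum to 1

# Toy instance of model (1.3): Y_ni = f(x_ni) + eps_ni with zero-mean errors.
rng = np.random.default_rng(0)
n = 500
x_design = np.arange(1, n + 1) / n     # fixed design points on [0, 1]
y = np.exp(x_design) + rng.normal(0.0, 0.3, n)
est = linear_weighted_estimate(0.5, x_design, y, gaussian_weights)
```

With a reasonable bandwidth the estimate lands close to the true value $f(0.5) = e^{0.5} \approx 1.65$ already at this sample size.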

Since the weighted estimator $f_n(x)$ was proposed by Stone [4], many authors have studied the consistency and the asymptotic normality of $f_n(x)$ when the errors $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ are independent or dependent random variables. In the case of independent samples, the study goes back to Gasser and Müller [5], Georgiev and Greblicki [6], Müller [7], and others. In the case of dependent samples, Wang et al. [8] studied complete moment convergence for arrays of row-wise NOD random variables and obtained the complete consistency for the estimators of nonparametric and semiparametric regression models. Zhang et al. [9] studied the complete consistency for the weighted estimator of a nonparametric regression model based on widely orthant-dependent random variables, provided a numerical simulation, and also considered the asymptotic distribution of the estimator. Shen et al. [10] gave the asymptotic normality of the linear weighted estimator $f_n(x)$ for $\rho$-mixing errors.

Let us recall the concept of complete convergence of a sequence of random variables, which was introduced by Hsu and Robbins [11] as follows. A sequence $\{U_n, n \ge 1\}$ is said to converge completely to a constant $C$ if

$\sum_{n=1}^{\infty} P(|U_n - C| > \varepsilon) < \infty, \quad \text{for all } \varepsilon > 0.$

In view of the Borel-Cantelli lemma, this implies that $U_n \to C$ almost surely.
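A classical illustration of the strength of this notion, due to Hsu and Robbins [11] and Erdős (a standard fact, not specific to the dependent setting of this paper): for i.i.d. random variables, complete convergence of the sample mean is equivalent to a second-moment condition.

```latex
% Hsu--Robbins--Erd\H{o}s theorem, i.i.d. case:
\sum_{n=1}^{\infty} P\!\left( \left| \frac{1}{n}\sum_{i=1}^{n} X_i \right| > \varepsilon \right) < \infty
\ \ \text{for all } \varepsilon > 0
\quad \Longleftrightarrow \quad
E X_1 = 0 \ \text{and} \ E X_1^{2} < \infty .
```

By contrast, the strong law of large numbers needs only $E|X_1| < \infty$, which shows that complete convergence is strictly stronger than almost sure convergence.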

The array of random variables $\{X_{ni}, 1 \le i \le n, n \ge 1\}$ is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that

$P(|X_{ni}| > x) \le C P(|X| > x),$

for all $x \ge 0$, $1 \le i \le n$, $n \ge 1$.

The main purpose of the present paper is to study the complete consistency of the estimator $f_n(x)$ based on $m$-END errors and to obtain the convergence rate of the complete consistency. The remainder of this paper is organized as follows. Section 2 states our main results. The proofs of the main results and the simulations are given in Sections 3 and 4, respectively. Throughout this paper, the symbols $C, C_1, C_2, C_3, C_4$ represent positive constants whose values may change from one place to another, $I(A)$ is the indicator function of the set $A$, and $\lfloor x \rfloor$ denotes the integer part of $x$.

2 Main results

2.1 Some assumptions for our results

For any function $f(x)$, we use $c(f)$ to denote the set of all continuity points of the function $f$ on $A$. The norm $\|x\|$ is the Euclidean norm. For any fixed design point $x \in A$, we state the following assumptions on the weight functions $W_{ni}(x)$, which will be used to support our main results.

(A1) $\sum_{i=1}^{n} W_{ni}(x) \to 1$ as $n \to \infty$;

(A2) $\sum_{i=1}^{n} |W_{ni}(x)| \le C$ for all $n \ge 1$, and $\max_{1 \le i \le n} |W_{ni}(x)| = O(n^{-1/(2r)} \log^{-s} n)$, $s > 1$, $r > 1/2$;

(A3) $\sum_{i=1}^{n} |W_{ni}(x)| \, |f(x_{ni}) - f(x)| \, I(\|x_{ni} - x\| > n^{-1/(2r)}) \to 0$ as $n \to \infty$;

(A4) $\left|\sum_{i=1}^{n} W_{ni}(x) - 1\right| = O(n^{-1/(2r)})$;

(A5) $\sum_{i=1}^{n} |W_{ni}(x)| \le C$ for all $n \ge 1$, and $\max_{1 \le i \le n} |W_{ni}(x)| = O(n^{-1/r} \log^{-s} n)$, $s > 1$, $r > 1/2$;

(A6) $\sum_{i=1}^{n} |W_{ni}(x)| \, |f(x_{ni}) - f(x)| \, I(\|x_{ni} - x\| > n^{-1/(2r)}) = O(n^{-1/(2r)})$.

The assumptions (A1)–(A6) are general, and they can be found in Wang and Si [12], Chen et al. [13], and so on.

2.2 Main results

In this subsection, we state the main results and some remarks.

Theorem 2.1

Let $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of row-wise $m$-END random variables, which is stochastically dominated by a random variable $X$. Assume that the conditions (A1)–(A3) hold. If $E|X|^{1+2r} < \infty$, then for any $x \in c(f)$, we have

(2.1) $f_n(x) \to f(x)$, completely, as $n \to \infty$.

Remark 2.1

Under the moment condition $E|X|^p < \infty$ for some $2 < p < 4$, the assumptions (A1), (A3) and

(2.2) $\sum_{i=1}^{n} |W_{ni}(x)| \le C, \quad \max_{1 \le i \le n} |W_{ni}(x)| = O(n^{-\gamma}), \quad 2/p < \gamma < 1,$

Wang et al. [8] established the same result based on NOD random variables. Letting $\gamma = 1/(2r)$, we can see that the conditions (2.2) are weaker than the assumption (A2) in this paper. However, note that $2/p < \gamma < 1$ and $2 < p < 4$, which means that $1 + 2r < 1 + p/2 < p$, so our moment assumptions in Theorem 2.1 are weaker than those in Wang et al. [8]. At the same time, the $m$-END random variables include the NOD random variables as a special case; hence, the results in Theorem 2.1 complement and extend the work of Wang et al. [8].

The END random variables and the $m$-NA random variables are both $m$-END random variables, so we can obtain the following corollaries.

Corollary 2.1

Let $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of row-wise END random variables, which is stochastically dominated by a random variable $X$. Assume that the conditions (A1)–(A3) hold. If $E|X|^{1+2r} < \infty$, then for any $x \in c(f)$, we have

(2.3) $f_n(x) \to f(x)$, completely, as $n \to \infty$.

Remark 2.2

Under the moment condition $E|X|^{2p} < \infty$ for some $p \ge 1$, the assumptions (A1), (A3) and

(2.4) $\sum_{i=1}^{n} |W_{ni}(x)| \le C, \quad \max_{1 \le i \le n} |W_{ni}(x)| = O(n^{-1/p}),$

Wang et al. [14] established the same result based on END random variables. Letting $p = 2r$, we can see that the conditions (2.4) are weaker than the assumption (A2). However, note that $r > 1/2$ implies $1 + 2r < 4r = 2p$, so our moment assumptions in Corollary 2.1 are weaker than those in Wang et al. [14]. Hence, the results in Corollary 2.1 complement the work of Wang et al. [14].

Corollary 2.2

Let $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of row-wise $m$-NA random variables, which is stochastically dominated by a random variable $X$. Assume that the conditions (A1)–(A3) hold. If $E|X|^{1+2r} < \infty$, then for any $x \in c(f)$, we have

(2.5) $f_n(x) \to f(x)$, completely, as $n \to \infty$.

As an application of Theorem 2.1, we give the complete consistency of the nearest neighbor estimator of $f(x)$. Without loss of generality, let $A = [0, 1]$ and take $x_{ni} = i/n$, $i = 1, 2, \ldots, n$, $n \ge 1$. For any $x \in A$, we rearrange $|x_{n1} - x|, |x_{n2} - x|, \ldots, |x_{nn} - x|$ in non-decreasing order as follows:

$|x^{(n)}_{R_1(x)} - x| \le |x^{(n)}_{R_2(x)} - x| \le \cdots \le |x^{(n)}_{R_n(x)} - x|.$

Let $1 \le k_n \le n$. The nearest neighbor estimator of $f(x)$ in model (1.3) is defined as follows:

$\tilde{f}_n(x) = \sum_{i=1}^{n} \tilde{W}_{ni}(x) Y_{ni},$

where

(2.6) $\tilde{W}_{ni}(x) = \begin{cases} 1/k_n, & \text{if } |x_{ni} - x| \le |x^{(n)}_{R_{k_n}(x)} - x|, \\ 0, & \text{otherwise}. \end{cases}$
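The weights (2.6) are easy to implement: the estimator simply averages the $k_n$ responses whose design points are closest to $x$. A short Python sketch (ties in distance are broken by index here, a minor simplification of the threshold form in (2.6)):

```python
import numpy as np

def nn_estimate(x, x_design, y, k_n):
    """Nearest neighbor estimator: average the k_n responses whose design
    points are closest to x, i.e. sum of (1/k_n) * Y_ni over the k_n nearest."""
    order = np.argsort(np.abs(x_design - x), kind="stable")  # ranks R_1(x), R_2(x), ...
    return float(y[order[:k_n]].mean())

# Sanity check on a noiseless model, Y_ni = f(x_ni):
n = 400
x_design = np.arange(1, n + 1) / n
y = np.sin(2 * np.pi * x_design)
k_n = int(np.sqrt(n))                  # k_n = floor(n^{1/2}), as in Corollary 2.3
est = nn_estimate(0.25, x_design, y, k_n)
```

At $x = 0.25$ the true value is $\sin(\pi/2) = 1$, and the local average over the $k_n$ nearest design points differs from it only by a small smoothing bias.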

Based on the aforementioned notations, we can get the following Corollary 2.3.

Corollary 2.3

Let $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of row-wise $m$-END random variables, which is stochastically dominated by a random variable $X$, let $f(x)$ be a continuous function on $A = [0, 1]$, and take $k_n = \lfloor n^{1/2} \rfloor$. If $E|X|^4 < \infty$, then for any $x \in [0, 1]$, we have

(2.7) $\tilde{f}_n(x) \to f(x)$, completely, as $n \to \infty$.

Theorem 2.2

Let $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of row-wise $m$-END random variables, which is stochastically dominated by a random variable $X$. Assume that the conditions (A4)–(A6) hold. If $E|X|^{2+2r} < \infty$, then for any $x \in c(f)$, we have

(2.8) $|f_n(x) - f(x)| = O(n^{-1/(2r)})$, completely, as $n \to \infty$.

Similar to Corollaries 2.1 and 2.2, we can get the following results.

Corollary 2.4

Let $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of row-wise END random variables, which is stochastically dominated by a random variable $X$. Assume that the conditions (A4)–(A6) hold. If $E|X|^{2+2r} < \infty$, then for any $x \in c(f)$, we have

(2.9) $|f_n(x) - f(x)| = O(n^{-1/(2r)})$, completely, as $n \to \infty$.

Remark 2.3

Let $r > 1$, and

(2.10) $\left|\sum_{i=1}^{n} W_{ni}(x) - 1\right| = O(n^{-1/(2r)} \log n)$; $\sum_{i=1}^{n} |W_{ni}(x)| \le C$ for all $n \ge 1$ and $\max_{1 \le i \le n} |W_{ni}(x)| = O(n^{-1/r})$; $\sum_{i=1}^{n} |W_{ni}(x)| \, |f(x_{ni}) - f(x)| \, I(\|x_{ni} - x\| > n^{-1/(2r)} \log n) = O(n^{-1/(2r)} \log n)$.

For the fixed design regression model (1.3) with END errors, under the assumptions (2.10) and the moment condition $E|X|^{2+2r} < \infty$, Yang et al. [15] obtained the complete convergence rate for the estimator as follows:

(2.11) $|f_n(x) - f(x)| = O(n^{-1/(2r)} \log n)$, completely, as $n \to \infty$.

Compared with the results of Yang et al. [15], the assumptions on the weight functions $W_{ni}(x)$ are different, and so is the complete convergence rate for the estimator. Hence, our results complement and extend the work of Yang et al. [15].

Corollary 2.5

Let $\{\varepsilon_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of row-wise $m$-NA random variables, which is stochastically dominated by a random variable $X$. Assume that the conditions (A4)–(A6) hold. If $E|X|^{2+2r} < \infty$, then for any $x \in c(f)$, we have

(2.12) $|f_n(x) - f(x)| = O(n^{-1/(2r)})$, completely, as $n \to \infty$.

3 Proofs of main results

Before giving the proofs of our results, we need to state some useful lemmas.

3.1 Some useful lemmas

Lemma 3.1

(Kuczmaszewska [16]) Let $\{X_n, n \ge 1\}$ be a sequence of random variables, which is stochastically dominated by a random variable $X$. If $E|X|^p < \infty$ for some $p > 0$, then for any $t > 0$ and $n \ge 1$, the following statements hold:

(3.1) $E|X_n|^p \le C E|X|^p,$

(3.2) $E|X_n|^p I(|X_n| \le t) \le C \left[E|X|^p I(|X| \le t) + t^p P(|X| > t)\right],$

and

(3.3) $E|X_n|^p I(|X_n| > t) \le C E|X|^p I(|X| > t).$

Lemma 3.2

(Wang et al. [2]) Let $\{X_n, n \ge 1\}$ be a sequence of $m$-END random variables. If $\{g_n(\cdot), n \ge 1\}$ are all non-decreasing (or all non-increasing) functions, then $\{g_n(X_n), n \ge 1\}$ are still $m$-END random variables.

Lemma 3.3

(Wang et al. [3]) Let $1 \le p \le 2$ and let $\{X_n, n \ge 1\}$ be an $m$-END random variable sequence with $EX_n = 0$ and $E|X_n|^p < \infty$. Then there exists a constant $C$ depending only on $p$ and $m$ such that

(3.4) $E\left|\sum_{i=1}^{n} X_i\right|^p \le C \sum_{i=1}^{n} E|X_i|^p.$

Lemma 3.4

(Yang et al. [15]) Let $\{X_n, n \ge 1\}$ be an END random variable sequence with $EX_n = 0$ and $|X_n| \le d_n$ a.s., $n \ge 1$, where $\{d_n, n \ge 1\}$ is a sequence of positive constants. Denote $b_n = \max_{1 \le i \le n} d_i$ and $\Delta_n^2 = \sum_{i=1}^{n} EX_i^2$ for all $n \ge 1$. Then for any $\varepsilon > 0$, there exists a constant $M > 0$ such that

(3.5) $P\left(\left|\sum_{i=1}^{n} X_i\right| \ge \varepsilon\right) \le 2M \exp\left\{-\frac{\varepsilon^2}{2(2\Delta_n^2 + b_n\varepsilon)}\right\}.$

From the definition of m -END random variables and Lemma 3.4, we can get the following result.

Lemma 3.5

Let $\{X_n, n \ge 1\}$ be a sequence of $m$-END random variables with $EX_n = 0$, and let $\{d_n, n \ge 1\}$ be a sequence of nonnegative constants. If $|X_n| \le d_n$ a.s. for all $n \ge 1$, then for any $r > 0$, we have

(3.6) $P\left(\left|\sum_{k=1}^{n} X_k\right| \ge r\right) \le 2mM \exp\left\{-\frac{r^2}{2(2m^2\Delta_n^2 + mrb_n)}\right\},$

where $\Delta_n^2 = \sum_{i=1}^{n} EX_i^2$, $b_n = \max_{1 \le i \le n} d_i$, $n \ge 1$, and $M$ is the constant in Lemma 3.4.
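The reduction of Lemma 3.5 to Lemma 3.4 follows the usual route of splitting the sum into $m$ subsequences whose indices are at mutual distance at least $m$; a sketch of the standard argument:

```latex
% split into m subsequences with indices at mutual distance >= m:
\sum_{k=1}^{n} X_k = \sum_{j=1}^{m} T_j , \qquad
T_j = \sum_{1 \le k \le n,\; k \equiv j \,(\mathrm{mod}\ m)} X_k .
% Each T_j is a sum of END variables by Definition 1.2, so Lemma 3.4 yields
P\!\left( \Big| \sum_{k=1}^{n} X_k \Big| \ge r \right)
\le \sum_{j=1}^{m} P\!\left( |T_j| \ge \frac{r}{m} \right)
\le 2 m M \exp\!\left\{ - \frac{(r/m)^2}{2\left( 2 \Delta_n^2 + b_n r/m \right)} \right\}
= 2 m M \exp\!\left\{ - \frac{r^2}{2\left( 2 m^2 \Delta_n^2 + m r b_n \right)} \right\} .
```

Here each $T_j$ has variance sum at most $\Delta_n^2$, which is why $\Delta_n^2$ can be used uniformly over $j$ in the exponential bound.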

3.2 Proofs of main theorems

Proof of Theorem 2.1

First, it is easy to see that

(3.7) $|f_n(x) - f(x)| \le |f_n(x) - Ef_n(x)| + |Ef_n(x) - f(x)|.$

For $x \in c(f)$ and $a > 0$, we have

(3.8) $|Ef_n(x) - f(x)| \le \sum_{i=1}^{n} |W_{ni}(x)| \, |f(x_{ni}) - f(x)| \, I(\|x_{ni} - x\| \le a) + \sum_{i=1}^{n} |W_{ni}(x)| \, |f(x_{ni}) - f(x)| \, I(\|x_{ni} - x\| > a) + |f(x)| \left|\sum_{i=1}^{n} W_{ni}(x) - 1\right|.$

Since $x \in c(f)$, for any $\varepsilon > 0$ there exists a $\delta > 0$ such that for all $y$ satisfying $\|y - x\| < \delta$, we have

$|f(y) - f(x)| < \varepsilon.$

For $n$ large enough, we have $0 < n^{-1/(2r)} < \delta$, so

(3.9) $|Ef_n(x) - f(x)| \le \varepsilon \sum_{i=1}^{n} |W_{ni}(x)| + |f(x)| \left|\sum_{i=1}^{n} W_{ni}(x) - 1\right| + \sum_{i=1}^{n} |W_{ni}(x)| \, |f(x_{ni}) - f(x)| \, I(\|x_{ni} - x\| > n^{-1/(2r)}).$

From conditions (A1)–(A3), we can get

(3.10) $|Ef_n(x) - f(x)| \to 0$, for $x \in c(f)$.

From (3.7)–(3.10), it is enough to show that

(3.11) $f_n(x) - Ef_n(x) = \sum_{i=1}^{n} W_{ni}(x)\varepsilon_{ni} \to 0$, completely, as $n \to \infty$.

Without loss of generality, we assume that $W_{ni}(x) \ge 0$. For $1 \le i \le n$, $n \ge 1$, define

$\varepsilon_{ni}^{(1)} = -n^{1/(2r)} I(\varepsilon_{ni} < -n^{1/(2r)}) + \varepsilon_{ni} I(|\varepsilon_{ni}| \le n^{1/(2r)}) + n^{1/(2r)} I(\varepsilon_{ni} > n^{1/(2r)}),$
$\varepsilon_{ni}^{(2)} = (\varepsilon_{ni} + n^{1/(2r)}) I(\varepsilon_{ni} < -n^{1/(2r)}) + (\varepsilon_{ni} - n^{1/(2r)}) I(\varepsilon_{ni} > n^{1/(2r)}).$

Note that $\varepsilon_{ni} = \varepsilon_{ni}^{(1)} + \varepsilon_{ni}^{(2)}$ and $E\varepsilon_{ni} = 0$; then

(3.12) $f_n(x) - Ef_n(x) = \sum_{i=1}^{n} W_{ni}(x)\varepsilon_{ni} = \sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)}) + \sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(2)} - E\varepsilon_{ni}^{(2)}) \equiv S_{n1} + S_{n2}.$

Since $r > 1/2$, $E|X|^{1+2r} < \infty$ implies $EX^2 < \infty$. From (A2) and Lemma 3.1, we have

(3.13) $b_{n1} = \max_{1 \le i \le n} |W_{ni}(x)(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})| \le C n^{1/(2r)} \max_{1 \le i \le n} W_{ni}(x) \le C_1 \log^{-s} n,$

and

$\Delta_{n1}^2 = \sum_{i=1}^{n} E[W_{ni}(x)(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})]^2 = \sum_{i=1}^{n} W_{ni}^2(x) E(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})^2 \le C \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} W_{ni}(x) EX^2 \le C_2 n^{-1/(2r)} \log^{-s} n.$

By Lemma 3.2, we can get that $\{\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)}, 1 \le i \le n, n \ge 1\}$ is an array of row-wise $m$-END random variables. By Lemma 3.5, for sufficiently large $K > 0$, we have

(3.14) $\sum_{n=1}^{\infty} P(|S_{n1}| \ge K\varepsilon) = \sum_{n=1}^{\infty} P\left(\left|\sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})\right| \ge K\varepsilon\right) \le C \sum_{n=1}^{\infty} \exp\left\{-\frac{K^2\varepsilon^2}{2(2m^2\Delta_{n1}^2 + mKb_{n1}\varepsilon)}\right\} \le C \sum_{n=1}^{\infty} \exp\left\{-\frac{K^2\varepsilon^2}{2(2C_2m^2 n^{-1/(2r)}\log^{-s} n + C_1Km\varepsilon\log^{-s} n)}\right\} \le C \sum_{n=1}^{\infty} \exp\left\{-\frac{K\varepsilon\log^{s} n}{4C_1m}\right\} \le C \sum_{n=1}^{\infty} n^{-2} < \infty.$

By the Markov inequality, Lemma 3.1 and (A2), we have

(3.15) $\sum_{n=1}^{\infty} P(|S_{n2}| \ge K\varepsilon) = \sum_{n=1}^{\infty} P\left(\left|\sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(2)} - E\varepsilon_{ni}^{(2)})\right| \ge K\varepsilon\right) \le C \sum_{n=1}^{\infty} E\left|\sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(2)} - E\varepsilon_{ni}^{(2)})\right| \le C \sum_{n=1}^{\infty} \sum_{i=1}^{n} W_{ni}(x) E(|\varepsilon_{ni}| I(|\varepsilon_{ni}| > n^{1/(2r)})) \le C \sum_{n=1}^{\infty} E(|X| I(|X| > n^{1/(2r)})) \le C \sum_{k=1}^{\infty} E(|X| I(k^{1/(2r)} \le |X| < (k+1)^{1/(2r)})) \sum_{n=1}^{k} 1 \le C \sum_{k=1}^{\infty} k E(|X| I(k^{1/(2r)} \le |X| < (k+1)^{1/(2r)})) \le C E|X|^{1+2r} < \infty.$

From (3.12), (3.14), and (3.15), we have

(3.16) $\sum_{n=1}^{\infty} P(|f_n(x) - Ef_n(x)| \ge 2K\varepsilon) = \sum_{n=1}^{\infty} P(|S_{n1} + S_{n2}| \ge 2K\varepsilon) \le \sum_{n=1}^{\infty} P(|S_{n1}| \ge K\varepsilon) + \sum_{n=1}^{\infty} P(|S_{n2}| \ge K\varepsilon) < \infty.$

This completes the proof.□

Proof of Corollary 2.3

It suffices to show that the conditions (A1)–(A3) are satisfied. By the definition of the nearest neighbor estimator and $k_n = \lfloor n^{1/2} \rfloor$, we can get

(3.17) $\sum_{i=1}^{n} \tilde{W}_{ni}(x) = \sum_{i=1}^{k_n} \frac{1}{k_n} = 1,$

(3.18) $\max_{1 \le i \le n} \tilde{W}_{ni}(x) = \frac{1}{k_n} = \frac{1}{\lfloor n^{1/2} \rfloor} = O(n^{-1/3} \log^{-s} n),$

(3.19) $\sum_{i=1}^{n} \tilde{W}_{ni}(x) |f(x_{ni}) - f(x)| I(|x_{ni} - x| > n^{-1/3}) \le C \sum_{i=1}^{k_n} \frac{1}{k_n} |x^{(n)}_{R_i(x)} - x|^2 n^{2/3} \le C \sum_{i=1}^{k_n} \frac{1}{k_n} \left(\frac{i}{n}\right)^2 n^{2/3} \le C \frac{1}{k_n} \cdot \frac{n^{2/3}}{n^2} \sum_{i=1}^{k_n} i^2 \le C k_n^2 n^{-4/3} \le C n^{-1/3}.$

From (3.17)–(3.19), we can see that (A1)–(A3) hold for $r = 3/2$.□

Proof of Theorem 2.2

In order to prove that (2.8) holds, we only need to show that

(3.20) $|Ef_n(x) - f(x)| = O(n^{-1/(2r)})$, as $n \to \infty$,

and

(3.21) $|f_n(x) - Ef_n(x)| = O(n^{-1/(2r)})$, completely, as $n \to \infty$.

Note that, by assumptions (A4)–(A6), (3.20) follows from (3.9). Since $r > 1/2$, $E|X|^{2+2r} < \infty$ implies $EX^2 < \infty$. From (A5), we have

(3.22) $b_{n2} = \max_{1 \le i \le n} |W_{ni}(x)(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})| \le C n^{1/(2r)} \max_{1 \le i \le n} W_{ni}(x) \le C_3 n^{-1/(2r)} \log^{-s} n,$

and

$\Delta_{n2}^2 = \sum_{i=1}^{n} E[W_{ni}(x)(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})]^2 = \sum_{i=1}^{n} W_{ni}^2(x) E(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})^2 \le C \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} W_{ni}(x) EX^2 \le C_4 n^{-1/r} \log^{-s} n.$

By Lemma 3.5 and $s > 1$, we have

(3.23) $\sum_{n=1}^{\infty} P(|S_{n1}| \ge \varepsilon n^{-1/(2r)}) = \sum_{n=1}^{\infty} P\left(\left|\sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(1)} - E\varepsilon_{ni}^{(1)})\right| \ge \varepsilon n^{-1/(2r)}\right) \le C \sum_{n=1}^{\infty} \exp\left\{-\frac{\varepsilon^2 n^{-1/r}}{2(2m^2\Delta_{n2}^2 + mb_{n2}\varepsilon n^{-1/(2r)})}\right\} \le C \sum_{n=1}^{\infty} \exp\left\{-\frac{\varepsilon^2 n^{-1/r}}{2(2C_4m^2 n^{-1/r}\log^{-s} n + C_3m\varepsilon n^{-1/r}\log^{-s} n)}\right\} \le C \sum_{n=1}^{\infty} \exp\left\{-\frac{\varepsilon\log^{s} n}{4C_3m}\right\} \le C \sum_{n=1}^{\infty} n^{-2} < \infty.$

By the Markov inequality and Lemma 3.1, we have

(3.24) $\sum_{n=1}^{\infty} P(|S_{n2}| \ge \varepsilon n^{-1/(2r)}) = \sum_{n=1}^{\infty} P\left(\left|\sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(2)} - E\varepsilon_{ni}^{(2)})\right| \ge \varepsilon n^{-1/(2r)}\right) \le C \sum_{n=1}^{\infty} n^{1/(2r)} E\left|\sum_{i=1}^{n} W_{ni}(x)(\varepsilon_{ni}^{(2)} - E\varepsilon_{ni}^{(2)})\right| \le C \sum_{n=1}^{\infty} n^{1/(2r)} \sum_{i=1}^{n} W_{ni}(x) E(|\varepsilon_{ni}| I(|\varepsilon_{ni}| > n^{1/(2r)})) \le C \sum_{n=1}^{\infty} n^{1/(2r)} E(|X| I(|X| > n^{1/(2r)})) = C \sum_{n=1}^{\infty} n^{1/(2r)} \sum_{k=n}^{\infty} E(|X| I(k^{1/(2r)} < |X| \le (k+1)^{1/(2r)})) = C \sum_{k=1}^{\infty} E(|X| I(k^{1/(2r)} < |X| \le (k+1)^{1/(2r)})) \sum_{n=1}^{k} n^{1/(2r)} \le C \sum_{k=1}^{\infty} k^{1+1/(2r)} E(|X| I(k^{1/(2r)} < |X| \le (k+1)^{1/(2r)})) \le C E|X|^{2r+2} < \infty.$

From (3.12), (3.23), and (3.24), we can see that (3.21) holds.□

4 Numerical simulation

In this section, we present some simulations to show the finite sample performance of the nonparametric estimator $\tilde{f}_n(x)$ by the results of Corollary 2.3. The data are generated from model (1.3). For any $n \ge 3$, let $(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n) \sim N_n(\mathbf{0}, \Sigma)$, where $\mathbf{0}$ represents the zero vector and

$\Sigma = \begin{pmatrix} 1.16 & -0.4 & 0 & \cdots & 0 & 0 & 0 \\ -0.4 & 1.16 & -0.4 & \cdots & 0 & 0 & 0 \\ 0 & -0.4 & 1.16 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1.16 & -0.4 & 0 \\ 0 & 0 & 0 & \cdots & -0.4 & 1.16 & -0.4 \\ 0 & 0 & 0 & \cdots & 0 & -0.4 & 1.16 \end{pmatrix}.$

From Joag-Dev and Proschan [17], we can get that $(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n)$ is an NA vector, hence a sequence of $m$-END random variables. Let $k_n = \lfloor n^{1/2} \rfloor$. Taking the sample size $n$ as $n = 100$, 500, 1000, respectively, and the points $x = 0.01, 0.02, 0.03, \ldots, 0.98, 0.99, 1$, we use the R software to compute the estimator $\tilde{f}_n(x)$ of $f(x)$ with $f(x) = \exp(x)$ and $f(x) = \sin(2\pi x)$ 1000 times, and take the mean of the 1000 replications as the final estimate of $\tilde{f}_n(x)$ at each point. The comparisons of $\tilde{f}_n(x)$ and $f(x)$ are shown in Figures 1, 2, 3, 4, 5 and 6. At the same time, we obtained the MSE of $\tilde{f}_n(x)$ with $f(x) = \exp(x)$ and $f(x) = \sin(2\pi x)$ at $x = 0.25, 0.45, 0.65, 0.85$ (Table 1).
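The simulation design above can be sketched in Python as well (the paper used R). This is an illustrative reconstruction, not the authors' code: the function names are ours, the off-diagonal covariance is taken to be $-0.4$ (non-positive correlation is what makes the normal vector NA), and far fewer replications than the paper's 1000 are used.

```python
import numpy as np

def na_error_sampler(n, rng):
    """Return a function drawing (eps_1,...,eps_n) ~ N_n(0, Sigma) with the
    tridiagonal Sigma of this section: 1.16 on the diagonal, -0.4 on the
    first off-diagonals.  A normal vector with non-positive correlations is
    NA (Joag-Dev and Proschan [17]), hence m-END."""
    sigma = 1.16 * np.eye(n) - 0.4 * (np.eye(n, k=1) + np.eye(n, k=-1))
    chol = np.linalg.cholesky(sigma)       # factor once, reuse per replication
    return lambda: chol @ rng.standard_normal(n)

def nn_estimate(x, x_design, y, k_n):
    """Nearest neighbor estimator of Corollary 2.3."""
    order = np.argsort(np.abs(x_design - x), kind="stable")
    return float(y[order[:k_n]].mean())

def mse_at(f, x0, n, reps, rng):
    """Monte Carlo MSE of the nearest neighbor estimator at the point x0."""
    x_design = np.arange(1, n + 1) / n
    k_n = int(np.sqrt(n))                  # k_n = floor(n^{1/2})
    draw = na_error_sampler(n, rng)
    errs = [nn_estimate(x0, x_design, f(x_design) + draw(), k_n) - f(x0)
            for _ in range(reps)]
    return float(np.mean(np.square(errs)))

rng = np.random.default_rng(2021)
mse_100 = mse_at(np.exp, 0.25, 100, 200, rng)
mse_1000 = mse_at(np.exp, 0.25, 1000, 200, rng)
```

As in Table 1, the MSE shrinks as the sample size grows, consistent with the complete consistency established in Corollary 2.3.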

Figure 1: Comparison of $\tilde{f}_n(x)$ and $f(x) = \exp(x)$ with $n = 100$.

Figure 2: Comparison of $\tilde{f}_n(x)$ and $f(x) = \exp(x)$ with $n = 500$.

Figure 3: Comparison of $\tilde{f}_n(x)$ and $f(x) = \exp(x)$ with $n = 1000$.

Figure 4: Comparison of $\tilde{f}_n(x)$ and $f(x) = \sin(2\pi x)$ with $n = 100$.

Figure 5: Comparison of $\tilde{f}_n(x)$ and $f(x) = \sin(2\pi x)$ with $n = 500$.

Figure 6: Comparison of $\tilde{f}_n(x)$ and $f(x) = \sin(2\pi x)$ with $n = 1000$.

Table 1

The MSE of f ˜ n ( x )

f(x)            x      n = 100   n = 500   n = 1000
exp(x)          0.25   0.0504    0.0143    0.0087
                0.45   0.0617    0.0132    0.0089
                0.65   0.0375    0.0203    0.0085
                0.85   0.0353    0.0256    0.0091
sin(2πx)        0.25   0.0445    0.0134    0.0093
                0.45   0.0372    0.0136    0.0093
                0.65   0.0342    0.0155    0.0085
                0.85   0.0371    0.0154    0.0089

Figures 1, 2, and 3 compare f ˜ n ( x ) with f ( x ) = exp ( x ) for n = 100, 500, and 1000, and Figures 4, 5, and 6 compare f ˜ n ( x ) with f ( x ) = sin ( 2 π x ) for the same sample sizes. In every case the estimator approaches the true function as the sample size n increases, and the MSE values in Table 1 decrease accordingly. These results provide numerical support for the main theorems.
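The paper's exact weight function and error-generating mechanism are not reproduced in this excerpt. As a minimal sketch of the simulation setup behind Table 1, the snippet below uses a standard Gaussian-kernel weighted estimator on a fixed design, with i.i.d. Gaussian noise standing in for the m-END errors; the bandwidth h = 0.05 and the noise level 0.5 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kernel_estimator(x_eval, x_obs, y_obs, h):
    # Gaussian-kernel weighted average of the observations
    # (a Nadaraya-Watson-type estimate of f at each point of x_eval).
    w = np.exp(-0.5 * ((x_eval[:, None] - x_obs[None, :]) / h) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2021)
n = 500
x = np.linspace(0.0, 1.0, n)                  # fixed design points on [0, 1]
f_true = np.sin(2.0 * np.pi * x)
y = f_true + rng.normal(scale=0.5, size=n)    # i.i.d. errors as a stand-in for m-END errors
points = np.array([0.25, 0.45, 0.65, 0.85])   # the evaluation points used in Table 1
f_hat = kernel_estimator(points, x, y, h=0.05)
mse = float(np.mean((f_hat - np.sin(2.0 * np.pi * points)) ** 2))
```

Repeating this with n = 100, 500, 1000 reproduces the qualitative pattern of Table 1: the empirical MSE at the four points shrinks as n grows, consistent with complete consistency of the estimator.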

Acknowledgments

The authors would like to express their gratitude to the editors and the reviewers for their thoughtful comments and valuable suggestions.

Conflict of interest: The authors state no conflict of interest.

References

[1] L. Liu, Precise large deviations for dependent random variables with heavy tails, Statist. Probab. Lett. 79 (2009), no. 9, 1290–1298, https://doi.org/10.1016/j.spl.2009.02.001.

[2] X. J. Wang, Y. Wu, and S. H. Hu, Exponential probability inequality for m-END random variables and its applications, Metrika 79 (2016), no. 2, 127–147, https://doi.org/10.1007/s00184-015-0547-7.

[3] Z. J. Wang, Y. Wu, M. G. Wang, and X. J. Wang, Complete and complete moment convergence with applications to the EV regression models, Statistics 53 (2019), no. 2, 261–282, https://doi.org/10.1080/02331888.2019.1570197.

[4] C. J. Stone, Consistent nonparametric regression, Ann. Statist. 5 (1977), no. 4, 595–645, https://doi.org/10.1214/aos/1176343886.

[5] T. Gasser and H. G. Müller, Kernel estimation of regression functions, in: Smoothing Techniques for Curve Estimation, Lecture Notes in Mathematics, vol. 757, Springer, Berlin, Heidelberg, 1979, pp. 23–68, https://doi.org/10.1007/BFb0098489.

[6] A. A. Georgiev and W. Greblicki, Nonparametric function recovering from noisy observations, J. Statist. Plann. Inference 13 (1986), no. 1, 1–14, https://doi.org/10.1016/0378-3758(86)90114-X.

[7] H. G. Müller, Weak and universal consistency of moving weighted averages, Period. Math. Hungar. 18 (1987), no. 3, 241–250, https://doi.org/10.1007/BF01848087.

[8] X. J. Wang, Y. Wu, S. H. Hu, and N. X. Ling, Complete moment convergence for negatively orthant dependent random variables and its applications in statistical models, Statist. Papers 61 (2020), 1147–1180, https://doi.org/10.1007/s00362-018-0983-3.

[9] R. Zhang, Y. Wu, W. F. Xu, and X. J. Wang, On complete consistency for the weighted estimator of nonparametric regression model, RACSAM 113 (2019), 2319–2333, https://doi.org/10.1007/s13398-018-00621-0.

[10] A. T. Shen, M. M. Ning, and C. Q. Wu, The asymptotic normality of the linear weighted estimator in nonparametric regression models, Comm. Statist. Theory Methods 48 (2019), no. 6, 1367–1376, https://doi.org/10.1080/03610926.2018.1429633.

[11] P. L. Hsu and H. Robbins, Complete convergence and the law of large numbers, Proc. Nat. Acad. Sci. U.S.A. 33 (1947), no. 2, 25–31, https://doi.org/10.1073/pnas.33.2.25.

[12] X. J. Wang and Z. Y. Si, Complete consistency for the estimator of nonparametric regression model under ND sequence, Statist. Papers 56 (2015), no. 3, 585–596, https://doi.org/10.1007/s00362-014-0598-2.

[13] Z. Y. Chen, H. B. Wang, and X. J. Wang, The consistency for the estimator of nonparametric regression model based on martingale difference errors, Statist. Papers 57 (2016), no. 2, 451–469, https://doi.org/10.1007/s00362-015-0662-6.

[14] X. J. Wang, L. L. Zheng, C. Xu, and S. H. Hu, Complete consistency for the weighted estimator of nonparametric regression model based on extended negatively dependent errors, Statistics 49 (2015), no. 2, 396–407, https://doi.org/10.1080/02331888.2014.888431.

[15] W. Z. Yang, H. Y. Xu, L. Chen, and S. H. Hu, Complete consistency of estimators for regression model based on extended negatively dependent errors, Statist. Papers 59 (2018), 449–465, https://doi.org/10.1007/s00362-016-0771-x.

[16] A. Kuczmaszewska, On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables, Acta Math. Hungar. 128 (2010), no. 1, 116–130, https://doi.org/10.1007/s10474-009-9166-y.

[17] K. Joag-Dev and F. Proschan, Negative association of random variables with applications, Ann. Statist. 11 (1983), no. 1, 286–295, https://doi.org/10.1214/aos/1176346079.

Received: 2021-03-03
Revised: 2021-06-09
Accepted: 2021-07-05
Published Online: 2021-11-18

© 2021 Shui-Li Zhang et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
