
Nonparametric expectile shortfall regression for functional data

  • Ibrahim M. Almanjahie, Hanan Abood, Salim Bouzebda, Fatimah Alshahrani and Ali Laksaci
Published/Copyright: April 30, 2025

Abstract

This work addresses the issue of financial risk analysis by introducing a novel expected shortfall (ES) regression model, which employs expectile regression to define the shortfall threshold in financial risk management. We develop a nonparametric estimator for this model and provide mathematical support by proving both pointwise and uniform complete convergence of the estimator. These asymptotic results are derived under traditional assumptions and include precise convergence rates, emphasizing the impact of the regressor’s dimensionality on the estimation approach. A key feature of our contribution is the straightforward implementation of the estimator, demonstrated through applications on both simulated and real data. Our findings indicate that the new ES-expectile model is more effective than the standard model based on quantile regression, offering improved relevance in financial risk management.

MSC 2010: 62G05; 62G08; 62R10; 62R20

1 Introduction

Risk analysis is a pivotal element for financial investors. Traditionally, value at risk (VaR) has been the standard function for modeling financial risk, a model officially endorsed by the Basel Committee in 1996 and 2006. Despite its popularity, VaR has notable weaknesses, particularly its insensitivity to outliers or extreme risks. To address these limitations, the Basel Committee in 2014 introduced the expected shortfall (ES) function [1], which controls the expected loss or gain beyond a given threshold, usually defined by VaR. In this study, we consider the ES function using the expectile model instead. The expectile model, introduced by Newey and Powell [2], is highly sensitive to outliers, enhancing its ability to fit financial risk better than the VaR function. This sensitivity has led to significant attention in the risk analysis literature [3-6]. Additionally, the expectile function has been applied in heteroscedasticity analysis (e.g., [7,8]) and outlier detection [9]. Expectile regression has been explored using vectorial regressors (e.g., [10]) and more recently in the context of functional statistics [11-22]. The concept of shortfall risk was introduced by Artzner et al. [23]. Based on its coherency property, the ES function has gained popularity in the last decade. A comparative study between VaR and shortfall was performed by Artzner et al. [23], showing that VaR is inappropriate when profit-loss distributions are non-Gaussian. ES models can be estimated using parametric, semi-parametric, and nonparametric approaches (e.g., [24-26]). In this article, we focus on the nonparametric approach, building on the work of researchers like Scaillet [27], who constructed an estimator using the kernel method. Using similar techniques, Cai and Wang [28] established the asymptotic distribution of the constructed estimator, while [29] used the Bahadur representation to construct an estimator for the ES function.
Regarding nonparametric functional data analysis (NFDA), the literature is limited. Only two works have addressed the nonparametric estimation of functional ES regression. The first contribution was developed by Ferraty and Quintela-del Río [30], who studied the functional version of the kernel estimator of ES regression under the mixing assumption. Using the same estimator, Ait-Hennani et al. [31] established almost complete consistency under quasi-associated dependency.

In this work, we aim to develop a new risk model that combines the advantages of both the expectile and ES models. Specifically, contrary to existing works, we define ES using the expectile threshold. This new risk model accumulates the advantages of both approaches. The expectile is coherent and elicitable and is very sensitive to the magnitude of the lower tail, unlike VaR, which is not influenced by this aspect. On the other hand, the ES model fulfills all conditions of spectral risk measures [32]. Thus, the ES model based on expectile substantially improves risk management in practice. A key challenge in ES estimation is the absence of an associated backtesting measure. Our contribution constructs a computational kernel estimator that reduces the computational time of this risk tool. Mathematically, we establish the asymptotic properties of the estimator, including the convergence rate in the almost complete mode for both pointwise and uniform versions. The practical implementation of this risk model is evaluated using empirical analysis. To our knowledge, no attempt has been made so far to estimate the functional ES regression based on expectile regression functions.

This article is organized as follows: we present our risk model and its estimator in Section 2. Section 3 introduces the necessary assumptions. The pointwise convergence of the constructed estimator is shown in Section 4, while uniform consistency is given in Section 5. Section 6 discusses the computational ability of the estimator over simulated and real data applications. Finally, the proofs of the auxiliary results are given in the Appendix.

2 Model and estimator

Consider n independent random pairs $(X_1, Y_1), \ldots, (X_n, Y_n)$ residing in $\mathcal{F} \times \mathbb{R}$, each identically distributed as $(X, Y)$. Throughout this study, we posit that the functional space $\mathcal{F}$ is a semi-metric space, endowed with the semi-metric $d(\cdot, \cdot)$. Furthermore, assume the existence of a regular version of the conditional distribution of $Y$ given $X$. In probability theory, it remains an open problem whether a regular version of conditional probability exists in every topological setting. Nevertheless, such a version has been established in certain frameworks [33,34]. Generally, the validity of this assumption hinges on various topological properties, such as completeness, separability, and compactness. For the sake of generality, we therefore assume the existence of a regular version without imposing additional topological constraints.

Now, the conventional ES regression is defined, for $s \in \mathcal{F}$, as

$$\mathrm{CES}_p(s) = \mathbb{E}\big[\, Y \mid Y > \mathrm{CVaR}_p(s),\ X = s \,\big],$$

where $\mathrm{CVaR}_\alpha$ denotes the conditional VaR at level $1 - \alpha$. In this article, we introduce an alternative ES-regression framework that employs the expectile instead of $\mathrm{CVaR}_\alpha$. Specifically, we define the ES-regression as

$$\mathrm{CEX}_p(s) = \mathbb{E}\big[\, Y \mid Y > \mathrm{CExP}_p(s),\ X = s \,\big],$$

where $\mathbb{1}_A$ denotes the indicator function of a set $A$, and $\mathrm{CExP}_p$ is the expectile regression defined by

$$\mathrm{CExP}_p(s) = \arg\min_{t \in \mathbb{R}} \Big\{ \mathbb{E}\big[\, p\,(Y - t)^2\, \mathbb{1}_{\{Y - t > 0\}} \mid X = s \,\big] + \mathbb{E}\big[\, (1 - p)\,(Y - t)^2\, \mathbb{1}_{\{Y - t \le 0\}} \mid X = s \,\big] \Big\}.$$

Replacing $\mathrm{CVaR}_\alpha$ with $\mathrm{CExP}_p$ enhances the regression's sensitivity to extreme values, addressing the inherent limitation of VaR in capturing tail risks. This improvement is particularly crucial in practical applications where catastrophic losses or extreme events are of primary concern.

Throughout this article, we assume that $F(\cdot)$ is a known measurable function and that $a = a_n$ is a positive sequence of real numbers converging to zero as $n \to \infty$. Under these assumptions, the ES-regression estimator is given by

(1) $$\widehat{\mathrm{CEX}}_p(s) = \frac{\displaystyle\sum_{i=1}^{n} F\big(a^{-1} d(s, X_i)\big)\, Y_i\, \mathbb{1}_{\{Y_i > \widehat{\mathrm{CExP}}_p(s)\}}}{\displaystyle\sum_{i=1}^{n} F\big(a^{-1} d(s, X_i)\big)},$$

where $\widehat{\mathrm{CExP}}_p(s)$ is the kernel estimator of $\mathrm{CExP}_p(s)$, defined as the solution of

$$\widehat{G}\big(\widehat{\mathrm{CExP}}_p(s); s\big) = \frac{p}{1 - p},$$

with

$$\widehat{G}(t; s) = \frac{\displaystyle\sum_{i=1}^{n} F_{ni}(s)\,(t - Y_i)\, \mathbb{1}_{\{Y_i - t \le 0\}}}{\displaystyle\sum_{i=1}^{n} F_{ni}(s)\,(Y_i - t)\, \mathbb{1}_{\{Y_i - t > 0\}}}, \quad \text{for } t \in \mathbb{R},$$

where

$$F_{ni}(s) = \frac{F_i(s)}{\sum_{j=1}^{n} F_j(s)} \quad \text{and} \quad F_i(s) = F\big(a^{-1} d(s, X_i)\big).$$

This formulation ensures that the ES-regression is both theoretically robust and practically sensitive to extreme outcomes, thereby enhancing its applicability in risk-sensitive analyses.
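To make the construction above concrete, the following is a minimal Python sketch of the plug-in estimator (1): the conditional expectile is obtained by solving the weighted asymmetric least-squares first-order condition by bisection, and the solution is plugged into the weighted tail mean. The quadratic kernel and all function names are our own illustrative choices, not code from the article.

```python
import numpy as np

def quad_kernel(t):
    """Quadratic kernel supported on [0, 1), as allowed by assumption (P4)."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 1), 0.75 * (1.0 - t ** 2), 0.0)

def kernel_expectile(dist, Y, a, p, tol=1e-10):
    """Kernel expectile estimator: solves the weighted first-order condition
    p * sum w_i (Y_i - t)_+ = (1 - p) * sum w_i (t - Y_i)_+ by bisection."""
    w = quad_kernel(dist / a)
    if w.sum() == 0:
        raise ValueError("no observation falls in the ball B(s, a)")
    lo, hi = float(Y.min()), float(Y.max())
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        h = p * np.sum(w * np.clip(Y - t, 0, None)) \
            - (1 - p) * np.sum(w * np.clip(t - Y, 0, None))
        if h > 0:      # t is below the expectile: move up
            lo = t
        else:
            hi = t
    return 0.5 * (lo + hi)

def es_expectile(dist, Y, a, p):
    """Plug-in ES-expectile estimator (1): kernel-weighted mean of the
    responses exceeding the estimated conditional expectile, normalized
    by the total weight sum as in (1)."""
    w = quad_kernel(dist / a)
    t_hat = kernel_expectile(dist, Y, a, p)
    return np.sum(w * Y * (Y > t_hat)) / np.sum(w)
```

For p = 0.5 and uniform weights, `kernel_expectile` reduces to the sample mean of the local responses, which matches the interpretation of the expectile as a generalization of the mean.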

Remark 2.1

In this article, we explore the concept of conditional expectiles, which are derived from the method of least asymmetrically weighted squares estimation – a fundamental technique in statistical applications borrowed from the econometrics literature. This approach frequently employs the notion of expectiles as introduced by Newey and Powell [2], serving as the least-squares counterparts to traditional quantiles. Expectiles are so named because they resemble the quantiles of a random variable; however, unlike quantiles, they are based on a quadratic loss function analogous to that used for expectations. For further details, refer to [35,36] and the forthcoming section. The expectile model has gained substantial traction in the financial literature, as evidenced by studies such as [37] and [5]. This popularity is largely due to expectiles being the only elicitable coherent risk measures, as discussed in [38] and references therein. Additionally, expectile regression has been applied to heteroscedasticity analysis [39]. Further motivations for employing the expectile model can be found in recent works by Mohammedi et al. [11] and Almanjahie et al. [18]. For a comprehensive overview of expectile curves in regression analysis, see [10], along with the extensive discussions therein. Specifically, Eilers [40] provided an appraisal of expectiles, while Koenker [41] offered a critical perspective. Although quantiles and expectiles differ in their construction, they share similar properties. As demonstrated by Jones [42], this similarity arises because expectiles are effectively quantiles applied to a transformed version of the original distribution. Both quantiles and expectiles encapsulate information about the entire distribution of a random variable and can be viewed as extensions of the median and mean, respectively. Expectiles present several advantages over quantiles in various applications. 
Notably, expectiles are more sensitive to the magnitude of infrequent catastrophic losses and depend on both the tail realizations of the predictor and their probabilities, whereas quantiles depend solely on the frequency of tail realizations [43]. This heightened sensitivity to tail behavior facilitates more prudent and responsive risk management. Moreover, quantiles can be criticized for their computational complexity due to the non-differentiability of their associated loss function. In contrast, expectiles offer greater computational efficiency, although they lack a direct interpretation in terms of relative frequency [11,18]. Furthermore, expectile regression can be utilized in other statistical contexts, such as predictive modeling using neural networks [44], and in addressing heteroscedasticity issues in regression analysis [8].

3 Pointwise convergence

Before stating the asymptotic properties of the estimator $\widehat{\mathrm{CEX}}_p$, we need to introduce some notations and assumptions. First, we denote by $C_s$ or $C$ strictly positive generic constants, $N_s$ is a given neighborhood of $s$, and, for all $t \in \mathbb{R}$, we define

$$\mathrm{ES}(t, s) = \mathbb{E}\big[\, Y\, \mathbb{1}_{\{Y > t\}} \mid X = s \,\big].$$

To present our main findings, we now introduce and rely on the following hypotheses:

(P1) For all $r > 0$,

$$\mathbb{P}\big(X \in B(s, r)\big) = \varphi(s, r) > 0,$$

where

$$B(s, r) = \{\, x \in \mathcal{F} : d(x, s) < r \,\}.$$

(P2) There exists $\delta > 0$ such that, for all

$$(t_1, t_2) \in [\mathrm{CExP}_p(s) - \delta,\ \mathrm{CExP}_p(s) + \delta]^2, \quad \text{and for all } (s_1, s_2) \in N_s^2,$$

we have

$$\big|\mathrm{ES}(t_1, s_1) - \mathrm{ES}(t_2, s_2)\big| \le C_s \big(d^{b}(s_1, s_2) + |t_1 - t_2|\big), \quad b > 0.$$

(P3) For all $m > 2$,

$$\mathbb{E}\big[\, |Y|^m \mid X = s \,\big] \le C < \infty, \quad \text{a.s.}$$

(P4) The kernel function $F(\cdot)$ is supported on $(0, 1)$ and satisfies

$$C'\, \mathbb{1}_{[0,1]}(t) < F(t) < C\, \mathbb{1}_{[0,1]}(t).$$

(P5) The parameter $a$ satisfies

$$\lim_{n \to \infty} \frac{n\, \varphi(s, a)}{\ln n} = \infty.$$

3.1 Comments on the hypotheses

The five assumptions considered are standard in functional data analysis and closely resemble those used by Ferraty and Vieu [45]. The first assumption (P1) connects the probabilistic and topological structures of the functional variable. This condition is fulfilled by a broad class of functional random variables arising from continuous-time processes. As an illustrative example, when $X$ is derived from a fractional Brownian motion with parameter $\delta \in (0, 2)$, assumption (P1) holds in the $L^\infty$-norm with a small-ball function of the form $\varphi(s, r) \sim C_s \exp(-r^{-2/\delta})$. For further details and additional examples of functional variables satisfying (P1), see Theorems 3.1 and 4.6 in [46]. The nonparametric approach is particularly motivated by the typically unknown distribution of financial movements, making the regularity assumption (P2) crucial for defining the functional space of the model. The moment condition (P3) on the response variable is mild and commonly used in regression analysis. Assumptions (P4) and (P5) specify the kernel function and the smoothing parameter, respectively. These technical assumptions are essential for precisely determining the convergence rate of the estimator. In particular, assumption (P4) is satisfied by several types of kernels, including the quadratic kernel and the β-kernel. Further examples of kernels fulfilling (P4) are discussed in the study by Ferraty and Vieu [45].

Now, we state the following result.

Theorem 3.1

Under conditions (P1)–(P5), we have

(2) $$\widehat{\mathrm{CEX}}_p(s) - \mathrm{CEX}_p(s) = O\!\left(a^{b} + \sqrt{\frac{\ln n}{n\, \varphi(s, a)}}\right), \quad \text{a.co.}$$

Remark 3.2

It is worth noting that (P3) can be replaced by more general assumptions on the moments of Y , as outlined in [47] and further discussed in [48]. Specifically, consider the following condition:

  1. Let $\{\theta(x) : x \ge 0\}$ be a continuous, nonnegative, and increasing function. Assume that there exists some $q > 2$ such that, as $x \to \infty$, the following two properties hold:

    1. $x^{-q}\, \theta(x)$ is non-decreasing ($\nearrow$);

    2. $x^{-1} \log(\theta(x))$ is non-increasing ($\searrow$).

Moreover, for every $t \ge \theta(0)$, define $\theta^{\mathrm{inv}}(t)$ to be the unique nonnegative solution of

(3) $$\theta\big(\theta^{\mathrm{inv}}(t)\big) = t.$$

Particularly interesting choices for $\theta(\cdot)$ include

  1. $\theta(x) = x^{p}$ for some $p > 2$;

  2. $\theta(x) = \exp(s x)$ for some $s > 0$.
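For both canonical choices, the generalized inverse in (3) has a closed form. The following small sketch illustrates this; the name $\theta$ and the parameter values are our own notation for the unnamed increasing function of the remark.

```python
import math

def theta_power(x, p=3.0):
    """theta(x) = x**p with p > 2."""
    return x ** p

def theta_power_inv(t, p=3.0):
    """Unique nonnegative solution of theta(x) = t, namely t**(1/p)."""
    return t ** (1.0 / p)

def theta_exp(x, s=0.5):
    """theta(x) = exp(s*x) with s > 0."""
    return math.exp(s * x)

def theta_exp_inv(t, s=0.5):
    """Inverse log(t)/s, defined for t >= theta(0) = 1."""
    return math.log(t) / s
```

In both cases one checks directly that $\theta(\theta^{\mathrm{inv}}(t)) = t$, as required by (3).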

3.2 Proof of Theorem 3.1

For $t \in \mathbb{R}$, we define

$$\widehat{\mathrm{ES}}(t, s) = \frac{\displaystyle\sum_{i=1}^{n} F\big(a^{-1} d(s, X_i)\big)\, Y_i\, \mathbb{1}_{\{Y_i > t\}}}{\displaystyle\sum_{i=1}^{n} F\big(a^{-1} d(s, X_i)\big)}.$$

So, we have

$$\widehat{\mathrm{ES}}\big(\widehat{\mathrm{CExP}}_p(s), s\big) = \widehat{\mathrm{CEX}}_p(s), \quad \text{and} \quad \mathrm{ES}\big(\mathrm{CExP}_p(s), s\big) = \mathrm{CEX}_p(s).$$

Then, we obtain

$$\widehat{\mathrm{CEX}}_p(s) - \mathrm{CEX}_p(s) = \Big[\widehat{\mathrm{ES}}\big(\widehat{\mathrm{CExP}}_p(s), s\big) - \mathrm{ES}\big(\widehat{\mathrm{CExP}}_p(s), s\big)\Big] + \Big[\mathrm{ES}\big(\widehat{\mathrm{CExP}}_p(s), s\big) - \mathrm{ES}\big(\mathrm{CExP}_p(s), s\big)\Big].$$

Hence, we infer

$$\big|\widehat{\mathrm{CEX}}_p(s) - \mathrm{CEX}_p(s)\big| \le \sup_{t \in [\mathrm{CExP}_p(s) - \delta,\ \mathrm{CExP}_p(s) + \delta]} \big|\widehat{\mathrm{ES}}(t, s) - \mathrm{ES}(t, s)\big| + C\, \big|\widehat{\mathrm{CExP}}_p(s) - \mathrm{CExP}_p(s)\big|.$$

So, Theorem 3.1 results from

(4) $$\sup_{t \in [\mathrm{CExP}_p(s) - \delta,\ \mathrm{CExP}_p(s) + \delta]} \big|\widehat{\mathrm{ES}}(t, s) - \mathrm{ES}(t, s)\big| = O(a^{b}) + O\!\left(\left(\frac{\ln n}{n\, \varphi(s, a)}\right)^{1/2}\right), \quad \text{a.co.},$$

and

(5) $$\big|\widehat{\mathrm{CExP}}_p(s) - \mathrm{CExP}_p(s)\big| = O(a^{b}) + O\!\left(\left(\frac{\ln n}{n\, \varphi(s, a)}\right)^{1/2}\right), \quad \text{a.co.}$$

The result (5) is proved in the study by Mohammedi et al. [11], so we concentrate only on (4). We define

$$\widehat{\mathrm{ES}}_D(s) = \frac{1}{n} \sum_{i=1}^{n} \tilde{K}_i \quad \text{and} \quad \widehat{\mathrm{ES}}_N(t, s) = \frac{1}{n\, \mathbb{E}[F_1(s)]} \sum_{i=1}^{n} F_i(s)\, Y_i\, \mathbb{1}_{\{Y_i > t\}},$$

where $\tilde{K}_i$ is given by

$$\tilde{K}_i = \frac{F_i(s)}{\mathbb{E}[F_1(s)]},$$

so that $\widehat{\mathrm{ES}}(t, s) = \widehat{\mathrm{ES}}_N(t, s) / \widehat{\mathrm{ES}}_D(s)$. Observe that $F_1(s) = F_1(s)\, \mathbb{1}_{B(s, a)}(X_1)$. We write

$$\big|\mathrm{ES}(t, s) - \mathbb{E}\big[\widehat{\mathrm{ES}}_N(t, s)\big]\big| \le \frac{1}{\mathbb{E}[F_1(s)]}\, \mathbb{E}\Big[F_1(s)\, \mathbb{1}_{B(s, a)}(X_1)\, \big|\mathrm{ES}(t, s) - \mathrm{ES}(t, X_1)\big|\Big].$$

Indeed, as $\mathbb{E}\big[\widehat{\mathrm{ES}}_D(s)\big] = 1$, for $t \in \mathbb{R}$, we have

$$\widehat{\mathrm{ES}}(t, s) - \mathrm{ES}(t, s) = \frac{1}{\widehat{\mathrm{ES}}_D(s)} \Big[\big(\widehat{\mathrm{ES}}_N(t, s) - \mathbb{E}[\widehat{\mathrm{ES}}_N(t, s)]\big) - \big(\mathrm{ES}(t, s) - \mathbb{E}[\widehat{\mathrm{ES}}_N(t, s)]\big)\Big] - \frac{\mathrm{ES}(t, s)}{\widehat{\mathrm{ES}}_D(s)} \Big[\widehat{\mathrm{ES}}_D(s) - \mathbb{E}[\widehat{\mathrm{ES}}_D(s)]\Big].$$

The proof is carried out through Lemmas 3.33.5.

Lemma 3.3

Under conditions (P1) and (P3)–(P5), we have

$$\widehat{\mathrm{ES}}_D(s) - \mathbb{E}\big[\widehat{\mathrm{ES}}_D(s)\big] = O\!\left(\left(\frac{\ln n}{n\, \varphi(s, a)}\right)^{1/2}\right), \quad \text{a.co.}$$

Moreover, we infer

$$\sum_{n=1}^{\infty} \mathbb{P}\left(\widehat{\mathrm{ES}}_D(s) < \frac{1}{2}\right) < \infty.$$

Lemma 3.4

Under conditions (P1)–(P2) and (P4)–(P5), we have

$$\sup_{t \in [\mathrm{CExP}_p(s) - \delta,\ \mathrm{CExP}_p(s) + \delta]} \big|\mathrm{ES}(t, s) - \mathbb{E}[\widehat{\mathrm{ES}}_N(t, s)]\big| = O(a^{b}).$$

Lemma 3.5

Under conditions (P1)–(P5), we have

$$\sup_{t \in [\mathrm{CExP}_p(s) - \delta,\ \mathrm{CExP}_p(s) + \delta]} \big|\widehat{\mathrm{ES}}_N(t, s) - \mathbb{E}[\widehat{\mathrm{ES}}_N(t, s)]\big| = O\!\left(\left(\frac{\ln n}{n\, \varphi(s, a)}\right)^{1/2}\right), \quad \text{a.co.}$$

4 Uniform consistency

To strengthen the mathematical support of $\widehat{\mathrm{ES}}(t, s)$, we derive its uniform almost complete convergence over a subset $S$ of $\mathcal{F}$. To this end, we denote by $C$ and $C'$ generic constants in $\mathbb{R}^{*}_{+}$.

Definition 4.1

Let $S$ be a subset of a semi-metric space $\mathcal{F}$, and let $\varepsilon > 0$ be given. A finite set of points $x_1, x_2, \ldots, x_{n_0}$ in $\mathcal{F}$ is called an $\varepsilon$-net for $S$ if

$$S \subset \bigcup_{k=1}^{n_0} B(x_k, \varepsilon).$$

The quantity $\psi_S(\varepsilon) = \log(N_\varepsilon)$, where $N_\varepsilon$ is the minimal number of open balls in $\mathcal{F}$ of radius $\varepsilon$ needed to cover $S$, is called the Kolmogorov $\varepsilon$-entropy of the set $S$.

This concept was introduced by Kolmogorov and Tikhomirov [49]. It represents a measure of the complexity of a set, in the sense that high entropy indicates that a significant amount of information is required to describe an element with an accuracy of ε . Therefore, the choice of the topological structure (in other words, the choice of the semi-metric) plays a crucial role when examining uniform asymptotic results over S . For more examples, refer to [50,51].
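As a toy illustration of the definition, the covering number of a real interval is explicit: a closed ball of radius $\varepsilon$ in $\mathbb{R}$ is an interval of length $2\varepsilon$, so an interval of length $L$ needs $N_\varepsilon = \lceil L / (2\varepsilon) \rceil$ balls, and its entropy is $\log N_\varepsilon$. The following sketch is our own illustration, not code from the article.

```python
import math

def covering_number_interval(length, eps):
    """Minimal number of closed balls (intervals of radius eps) needed to
    cover an interval of the given length: each ball covers length 2*eps."""
    return math.ceil(length / (2 * eps))

def kolmogorov_entropy_interval(length, eps):
    """Kolmogorov eps-entropy: logarithm of the covering number."""
    return math.log(covering_number_interval(length, eps))
```

For $S = [0, 1]$ and $\varepsilon = 0.1$ this gives $N_\varepsilon = 5$ and entropy $\log 5$; the entropy of an interval grows only like $\log(1/\varepsilon)$, much more slowly than the rates quoted later for balls in functional spaces.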

In our analysis, we will impose the following conditions:

(U1) For all $x \in S$, we have

$$0 < C\, \phi(a) \le \mathbb{P}\big(X \in B(x, a)\big) \le C'\, \phi(a) < \infty.$$

(U2) There exists $\delta > 0$ such that, for all $(t_1, t_2) \in \mathbb{R}^2$ and all $(x_1, x_2) \in S^2$,

$$\big|\mathrm{ES}(t_1, x_1) - \mathrm{ES}(t_2, x_2)\big| \le C \big(d^{b}(x_1, x_2) + |t_1 - t_2|\big), \quad b > 0.$$

(U3) For all $m > 2$,

$$\mathbb{E}\big[\, |Y|^m \mid X \,\big] \le C_m < \infty, \quad \text{a.s.}$$

(U4) The kernel function $F(\cdot)$ is a Lipschitz continuous function supported in $(0, 1)$ and satisfies

$$C'\, \mathbb{1}_{[0,1]}(t) < F(t) < C\, \mathbb{1}_{[0,1]}(t).$$

(U5) There exists $\alpha > 0$ such that

$$\lim_{n \to \infty} n^{\alpha}\, a\, \phi(a) = \infty \quad \text{and} \quad \lim_{n \to \infty} \frac{\psi_S\big(n^{-\alpha - 1/2}\big)}{n\, \phi(a)} = 0,$$

where $\psi_S$ is the Kolmogorov $\varepsilon$-entropy of $S$.

Assumptions (U1)–(U5) are the uniform versions of (P1)–(P5). These assumptions allow us to express the uniform convergence rate in terms of the Kolmogorov entropy of the subset $S$. This consideration is important in the functional context, where compactness is not standard. It is well known that the Kolmogorov entropy can be characterized in many standard functional settings. In particular, when the underlying functions are driven by a standard Brownian motion, the Kolmogorov $\varepsilon$-entropy of the unit ball in a Sobolev space is of order $C\varepsilon^{-1}$. This finding was originally established in [52], to which we refer for further examples. Additionally, assumptions (U4) and (U5) are technical in nature and are used to simplify the proofs. We point out that (U4) is satisfied by the β-kernel, while assumption (U5) can be verified for many classes of sets, such as closed balls in Sobolev spaces or the unit ball of the Cameron-Martin space.

We obtain the following result.

Theorem 4.2

Under conditions (U1)–(U5), and if

$$\sum_{n=1}^{\infty} n^{1/2} \exp\big\{(1 - \beta)\, \psi_S\big(n^{-\alpha - 1/2}\big)\big\} < \infty, \quad \textit{for some } \beta > 1,$$

we have

(6) $$\sup_{x \in S} \big|\widehat{\mathrm{ES}}(t, x) - \mathrm{ES}(t, x)\big| = O(a^{b}) + O\!\left(\sqrt{\frac{\psi_S\big(n^{-\alpha - 1/2}\big)}{n\, \phi(a)}}\right), \quad \text{a.co.}$$

Remark 4.3

Theorem 4.2 represents the first result addressing the ES regression model that utilizes expectile regression to establish the shortfall threshold in financial risk management. The topological structure of the space directly affects the convergence rate in Theorem 4.2 through the quantities $\phi(\cdot)$ and $\psi_S(\cdot)$. Additionally, the regularity condition (U2) directly influences the bias terms.

Remark 4.4

We first consider the rescaled Gaussian process $\big(W^c(t) = W(t/c)\big)_{t \ge 0}$, where $W(t)$ itself is a Gaussian process with spectral measure $\mu$. Suppose there exists $a > 0$ such that

$$\int e^{a|\lambda|}\, \mu(d\lambda) < \infty.$$

Under this condition, the Kolmogorov $\varepsilon$-entropy of $\mathcal{C}$, taken to be the unit ball of the reproducing kernel Hilbert space corresponding to $W^c$, viewed as a subset of $(C([0, 1]), \|\cdot\|_\infty)$, is of order

$$\frac{1}{c}\left(\log\frac{1}{\varepsilon}\right)^{2};$$

(see [53] for details).

As a second example, let C be the closed unit ball of the Cameron-Martin space associated with the covariance operator of the standard stationary Ornstein-Uhlenbeck process, defined by

$$\mathrm{Cov}(s, t) = \exp(-a|s - t|), \quad a > 0.$$

When this set is endowed with the norm of the Sobolev space W 1,2 ( [ 0,1 ] ) , its Kolmogorov ε -entropy has the order

$$\frac{2a}{\pi\, \varepsilon}$$

(cf. [53]).

Finally, [54] shows that any closed ball in W 1,1 ( [ 0 , T ] ) , when equipped with the L 1 ( [ 0 , T ] ) norm, has a Kolmogorov ε -entropy on the order of

$$\frac{1}{\varepsilon}\, \log\frac{1}{\varepsilon}.$$

4.1 Proof of Theorem 4.2

We employ the same strategy as in the first theorem to deduce that uniform consistency is based on the uniform version of the preceding lemmas.

Lemma 4.5

Under the conditions of Theorem 4.2, we have

$$\sup_{x \in S} \big|\widehat{\mathrm{ES}}_D(x) - \mathbb{E}[\widehat{\mathrm{ES}}_D(x)]\big| = O\!\left(\left(\frac{\ln n}{n\, \phi(a)}\right)^{1/2}\right), \quad \text{a.co.}$$

Moreover,

$$\sum_{n=1}^{\infty} \mathbb{P}\Big(\inf_{x \in S} \widehat{\mathrm{ES}}_D(x) < C'\Big) < \infty.$$

Lemma 4.6

Under conditions (U1)–(U2) and (U4)–(U5), we have

$$\sup_{x \in S}\ \sup_{t \in [\mathrm{CExP}_p(x) - \delta,\ \mathrm{CExP}_p(x) + \delta]} \big|\mathrm{ES}(t, x) - \mathbb{E}[\widehat{\mathrm{ES}}_N(t, x)]\big| = O(a^{b}).$$

Lemma 4.7

Under the conditions of Theorem 4.2, we have

$$\sup_{x \in S}\ \sup_{t \in [\mathrm{CExP}_p(x) - \delta,\ \mathrm{CExP}_p(x) + \delta]} \big|\widehat{\mathrm{ES}}_N(t, x) - \mathbb{E}[\widehat{\mathrm{ES}}_N(t, x)]\big| = O\!\left(\left(\frac{\ln n}{n\, \phi(a)}\right)^{1/2}\right), \quad \text{a.co.}$$

5 Empirical analysis

This section is devoted to examining the practical use of the model studied in this work. It is divided into three subsections. In the first, we discuss the selection of the smoothing parameter, the pivotal tuning parameter of our estimation; its choice is therefore essential for the computational aspect. We then examine the usefulness of the estimator through two examples: the first concerns artificial data, while the second treats real financial data from the Dow Jones index.

5.1 Smoothing parameter selection: cross-validation (CV)

As mentioned earlier, the choice of the smoothing parameter $a$ is crucial in this nonparametric framework. As our estimation procedure is based on expectile regression, the appropriate CV rule is based on the mean squared error (MSE), which is usual in nonparametric functional data analysis:

(7) $$a_{\mathrm{CV}}^{\mathrm{opt}} = \arg\min_{a} \sum_{i=1}^{n} \big(Y_i - \widehat{\mathrm{CExP}}_{0.5}(X_i)\big)^2.$$

This rule is motivated by the fact that the conditional mean E [ Y X ] is associated with CExP ^ p with p = 0.5 . The popularity of this approach comes from its easy implementation. However, we can employ a more accurate rule that is a generalization of (7). It is explicitly expressed by

(8) $$a^{\mathrm{opt}} = \arg\min_{a} \sum_{i=1}^{n} \rho_p\big(Y_i - \widehat{\mathrm{CExP}}_p(X_i)\big),$$

where $\rho_p$ is the asymmetric scoring function defining $\mathrm{CExP}_p$. The main advantage of this last rule is its dependence on the threshold $p$, which is very beneficial in financial risk analysis: there, we are interested in the tail, corresponding to small or large values of $p$. Observe that the challenging issue in ES estimation is the absence of a scoring function or backtesting measure. Thus, using the optimal smoothing parameter associated with the expectile regression is adequate for the model $\widehat{\mathrm{CEX}}_p$. Moreover, this choice reduces the computational cost of the ES-expectile regression.
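Rule (7) can be sketched with a leave-one-out search over a finite bandwidth grid, using the Nadaraya-Watson estimator (the $p = 0.5$ expectile). The quadratic kernel, the grid, and all function names below are our own illustrative assumptions, not the article's code.

```python
import numpy as np

def nw_fit(dist, Y, a):
    """Nadaraya-Watson estimate, i.e. the conditional expectile at p = 0.5."""
    u = dist / a
    w = np.where(u < 1, 0.75 * (1.0 - u ** 2), 0.0)  # quadratic kernel
    s = w.sum()
    return np.nan if s == 0 else np.sum(w * Y) / s

def cv_bandwidth(D, Y, grid):
    """Rule (7): leave-one-out cross-validation on squared error.
    D is the n x n matrix of semi-metric distances d(X_i, X_j)."""
    n = len(Y)
    best_a, best = None, np.inf
    for a in grid:
        total = 0.0
        valid = True
        for i in range(n):
            mask = np.arange(n) != i            # leave observation i out
            fit = nw_fit(D[i, mask], Y[mask], a)
            if np.isnan(fit):                   # empty ball: discard bandwidth
                valid = False
                break
            total += (Y[i] - fit) ** 2
        if valid and total < best:
            best_a, best = a, total
    return best_a
```

Rule (8) is obtained from this sketch by replacing the squared error with the asymmetric score $\rho_p(u) = |p - \mathbb{1}_{\{u < 0\}}|\, u^2$ and the $p = 0.5$ fit with the $p$-expectile fit.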

5.2 Simulated data

In this section, we examine the behavior of the ES-expectile function over finite samples. More precisely, we evaluate how the estimator performs under conditions that fully match the standard theoretical assumptions and then contrast these results with what happens when those assumptions are partially relaxed. In particular, we focus on three main aspects: the role of the independence condition, the selection of an appropriate smoothing parameter, and the choice of kernel. By comparing these scenarios, we illustrate how each factor influences the overall behavior of the estimator. For this study, we compare our model to alternative financial models, namely the VaR and the standard ES based on percentile (quantile) regression. For the empirical analysis, we generate the functional variable $X(\cdot)$ using the R package rockchalk and the routine dgp.fiid. Such curves represent a continuous trajectory of a financial asset, and the response variable $Y$ represents a future characteristic of this trajectory. For this reason, for all $i$, we take $Y_i$ to be the initial value of $X_{i+1}$. In this empirical analysis, we compare the independent case to the dependent case. In Figure 1, we plot the curves of the independent case versus the dependent case.

Figure 1: Shape of the curves.

As discussed above, the principal object of this computational part is to conduct a comparison between the ES-expectile regression $\widehat{\mathrm{CEX}}_p$ and the ES based on the VaR regression, defined through the percentile regression

$$V_p(s) = \inf\{\, t \in \mathbb{R} : F(t \mid s) \ge p \,\},$$

where $F(\cdot \mid s)$ is the conditional cumulative distribution function of $Y$ given $X = s$. The latter is estimated by the kernel estimator

$$\widehat{F}(t \mid s) = \frac{\displaystyle\sum_{i=1}^{n} \mathbb{1}_{\{Y_i \le t\}}\, K\!\left(\frac{d(s, X_i)}{h_n}\right)}{\displaystyle\sum_{i=1}^{n} K\!\left(\frac{d(s, X_i)}{h_n}\right)}.$$

Thereafter, we estimate the VaR function by

$$\widehat{V}_p(s) = \inf\{\, t \in \mathbb{R} : \widehat{F}(t \mid s) \ge p \,\}.$$
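The generalized inverse above can be evaluated directly on the sample: compute the estimated conditional cdf at the order statistics of $Y$ and return the first one crossing level $p$. A small self-contained Python sketch follows; the quadratic kernel and the function names are our illustrative choices.

```python
import numpy as np

def cond_cdf(dist, Y, t, h):
    """Kernel estimator of the conditional cdf F(t | s); `dist` holds the
    semi-metric distances d(s, X_i) to the conditioning curve s."""
    u = dist / h
    w = np.where(u < 1, 0.75 * (1.0 - u ** 2), 0.0)  # quadratic kernel
    return np.sum(w * (Y <= t)) / np.sum(w)

def var_estimate(dist, Y, p, h):
    """VaR estimate: smallest order statistic t with F_hat(t | s) >= p."""
    for t in np.sort(Y):
        if cond_cdf(dist, Y, t, h) >= p:
            return t
    return np.max(Y)
```

Since $\widehat{F}(\cdot \mid s)$ is a step function jumping only at observed responses, scanning the sorted sample is sufficient to compute the infimum exactly.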

Recall that, in this context, the ES regression estimator is expressed by

$$\widetilde{\mathrm{CES}}_p(s) = \frac{\displaystyle\sum_{i=1}^{n} K\big(a^{-1} d(s, X_i)\big)\Big(a\, G\big(a^{-1}(\widehat{V}_p(s) - Y_i)\big) + Y_i\Big(1 - H\big(a^{-1}(\widehat{V}_p(s) - Y_i)\big)\Big)\Big)}{p \displaystyle\sum_{i=1}^{n} K\big(a^{-1} d(s, X_i)\big)},$$

where

$$G(t) = \int_{t}^{\infty} u\, K(u)\, \mathrm{d}u, \qquad H(t) = \int_{-\infty}^{t} K(u)\, \mathrm{d}u.$$

The two estimators $\widehat{\mathrm{CEX}}_p$ and $\widetilde{\mathrm{CES}}_p$ are calculated using the β-kernel, the $L^2$ metric, and the smoothing parameter given by the CV rule (7), in the sense that

$$a_{\mathrm{CV}}^{\mathrm{opt}}(s) = \arg\min_{a \in H_n(s)} \sum_{i=1}^{n} \big(Y_i - \widehat{\mathrm{CExP}}_{0.5}(X_i)\big)^2,$$

where $H_n(s)$ is the set of positive reals $a(s)$ such that the ball centered at $s$ with radius $a(s)$ contains exactly $k$ neighbors of $s$. This smoothing selection method is usual in nonparametric functional statistics (see Ferraty and Vieu [45] for a deeper discussion). In addition, we compare our CV procedure with the bootstrap algorithm presented in [16,55]. To illustrate the influence of the chosen kernel and the associated bandwidth, we also compute our estimators using a Gaussian kernel and an arbitrarily chosen bandwidth $a \notin \{a_{\mathrm{CV}}^{\mathrm{opt}}, a_{\mathrm{boot}}^{\mathrm{opt}}\}$, where $a_{\mathrm{boot}}^{\mathrm{opt}}$ is the optimal bandwidth determined by the bootstrap selector. We evaluate the performance of these estimators via the following backtesting metrics:

$$\mathrm{Mse} = \frac{1}{n} \sum_{i=1}^{n} \big(Y_i - \widehat{\vartheta}_p(X_i)\big)^2\, \mathbb{1}_{\{Y_i > \widehat{\mathrm{CExP}}_p(X_i)\}}, \qquad \mathrm{Msp} = \frac{1}{n} \sum_{i=1}^{n} \big(Y_i - \widehat{\vartheta}_p(X_i)\big)^2\, \mathbb{1}_{\{Y_i > \widehat{V}_p(X_i)\}},$$

where $\widehat{\vartheta}_p$ denotes the ES estimator under evaluation.

These error measures are examined for p = 0.9 , 0.5, 0.1, 0.05, and 0.01. The corresponding results are presented in the following tables.

Error results of the ES-expectile

  Case               Estimation                         p=0.9   p=0.5   p=0.1   p=0.05  p=0.01
  Independent case   Estimation with CV-rule            0.16    0.14    0.13    0.098   0.096
  Independent case   Estimation with bootstrap-rule     0.19    0.16    0.11    0.089   0.077
  Independent case   Estimation with arbitrary choice   0.45    0.53    0.48    0.31    0.32
  Dependent case     Estimation with CV-rule            0.21    0.29    0.22    0.24    0.27
  Dependent case     Estimation with bootstrap-rule     0.25    0.19    0.23    0.25    0.21
  Dependent case     Estimation with arbitrary choice   0.68    0.72    0.87    0.56    0.69

Error results of the ES based on VaR

  Case               Estimation                         p=0.9   p=0.5   p=0.1   p=0.05  p=0.01
  Independent case   Estimation with CV-rule            0.32    0.23    0.37    0.34    0.29
  Independent case   Estimation with bootstrap-rule     0.27    0.23    0.38    0.20    0.19
  Independent case   Estimation with arbitrary choice   0.91    0.87    0.82    0.98    0.89
  Dependent case     Estimation with CV-rule            0.61    0.59    0.42    0.74    0.57
  Dependent case     Estimation with bootstrap-rule     0.78    0.79    0.81    0.87    0.82
  Dependent case     Estimation with arbitrary choice   1.036   1.13    0.97    1.11    1.05

It is evident that both estimators are strongly influenced by the choice of backtesting measure and the parameters involved in their construction, including the independence assumption and the selection of the smoothing parameter. Nevertheless, the ES-expectile model is more efficient than the ES based on VaR, and it is also more stable: a simple comparison of the variability of $\widehat{\mathrm{CEX}}_p$ and $\widetilde{\mathrm{CES}}_p$ under the CV rule shows error values for the ES-expectile in the range (0.096, 0.29) versus (0.23, 0.74) for the ES based on VaR.

5.3 Real data application

The last part of this empirical analysis concerns the applicability of our model to real data. Specifically, we evaluate the efficiency of the ES-expectile model on financial data associated with the Dow Jones index. The latter is an old market index, dating back to May 26, 1896. The index value is calculated by dividing the sum of the stock prices of 30 companies by a fixed divisor. In this experiment, we study the period between 11/10/1971 and 12/10/2022, which constitutes more than 25,000 days. The parent data are displayed in Figure 2; they are available at https://fred.stlouisfed.org/series/DJIA (accessed on 24 April 2024).

Figure 2: Shape of the curves.

Now, to explore the functional path of the data, we cut the initial series into pieces of 30 days. These pieces constitute the functional regressors $X_i$; they are plotted in Figure 3.

Figure 3: Initial data.

Next, similar to the artificial case, we choose the response variable $Y$ as the index of the first day, one month ahead, in the sense that $Y_i = X_{i+1}(d_1)$; to preserve the independence structure assumed in this work, we select distanced observations. Once again, we compare the two estimators $\widehat{\mathrm{CEX}}_p$ and $\widetilde{\mathrm{CES}}_p$ using the real data $(X_i, Y_i)_{i=1,\ldots,120}$. These estimators are computed using the same algorithm as for the artificial data: we use the same kernel, select the smoothing parameters by rule (7), and employ the $L^2$ metric obtained by principal component analysis (we refer to Ferraty and Vieu [45] for more details on the mathematical formulation of these metrics). The comparison results are given in Figures 4 and 5, where we plot the true values $(Y_i)$ versus the estimators $\widehat{\mathrm{CEX}}_p(X_i)$ and $\widetilde{\mathrm{CES}}_p(X_i)$ for the two values $p = 0.1$ and $p = 0.95$. The graphs show the superiority of the ES-expectile regression over the ES-quantile model. This superiority is confirmed by computing the expectile mean square error (Mse) against the percentile mean square error (Msp), which are, respectively, 0.147 and 0.318, showing that the ES-expectile outperforms the ES based on quantile regression.
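The slicing of the daily series into monthly curves and one-day-ahead responses described above can be sketched as follows; the array layout and function names are our own illustrative choices.

```python
import numpy as np

def make_functional_sample(series, block=30):
    """Cut a daily series into consecutive blocks of `block` days (the
    functional regressors X_i) and pair each block with the first value
    of the following block (the response Y_i)."""
    series = np.asarray(series, dtype=float)
    n_blocks = len(series) // block
    X = np.stack([series[i * block:(i + 1) * block]
                  for i in range(n_blocks - 1)])
    Y = np.array([series[(i + 1) * block] for i in range(n_blocks - 1)])
    return X, Y
```

The last (incomplete) block is discarded, and the final complete block is used only as the source of the last response value.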

Figure 4: Comparison between ES-expectile and ES-quantile (VaR) for p = 0.1.

Figure 5: Comparison between ES-expectile and ES-quantile (VaR) for p = 0.95.

6 Conclusion and future works

In this work, we have focused on the estimation of the ES-expectile regression model. We developed an estimator using kernel smoothing and examined both the theoretical and practical aspects of this model within the context of functional data analysis. On the theoretical side, we established convergence results using the Borel-Cantelli lemma for both the pointwise and uniform cases. These findings provide strong mathematical support for the proposed model. The asymptotic results were derived under standard conditions, with precise convergence rates specified. Practically, our estimator is straightforward to implement and has shown superior performance. Additionally, our contribution paves the way for future research in several directions.

One critical aspect that remains unexplored in this article is the optimal selection of smoothing parameters aimed at minimizing the MSEs of the proposed kernel estimators. This topic holds significant relevance within the broader context of kernel-based nonparametric estimation and warrants focused research efforts. Although this issue is essential for refining the proposed method, we have chosen to defer a comprehensive investigation to future work. In subsequent studies, we intend to thoroughly address the selection of smoothing parameters, potentially employing cross-validation (CV) or other optimal selection techniques (for instance, see [48]). An area of particular interest is establishing the uniform convergence of the estimator with respect to the bandwidth, which is crucial for data-driven smoothing parameter determination.
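To make the bandwidth-selection problem concrete, here is a leave-one-out cross-validation sketch for a kernel regression bandwidth. This is a hypothetical illustration of what a data-driven rule could look like, not the selection procedure studied or recommended in the paper; the quadratic kernel and the candidate grid are arbitrary choices.

```python
import numpy as np

def loocv_bandwidth(X, Y, metric, h_grid):
    # Leave-one-out CV: for each candidate bandwidth h, predict each Y_i
    # from the remaining observations with a kernel-weighted average and
    # keep the h minimizing the mean squared prediction error.
    Y = np.asarray(Y, dtype=float)
    n = len(Y)
    D = np.array([[metric(X[i], X[j]) for j in range(n)] for i in range(n)])
    best_h, best_cv = None, np.inf
    for h in h_grid:
        K = np.maximum(1.0 - (D / h) ** 2, 0.0)  # quadratic kernel weights
        np.fill_diagonal(K, 0.0)                 # leave the i-th point out
        denom = K.sum(axis=1)
        ok = denom > 0                           # points with at least one neighbor
        pred = np.where(ok, (K @ Y) / np.where(ok, denom, 1.0), np.nan)
        cv = np.mean((Y[ok] - pred[ok]) ** 2)
        if cv < best_cv:
            best_cv, best_h = cv, h
    return best_h, best_cv
```

The same scheme applies verbatim to functional covariates: only the `metric` argument (e.g. an L2 or PCA-based semi-metric between curves) changes.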

Several avenues are available for further development of our method. A central challenge in kernel smoothing is the presence of bias error, which remains the primary limitation of standard kernel estimators. Although local linear fitting provides a theoretical approach to mitigating this bias, the problem continues to demand further attention in the field of NFDA. Indeed, the bias issue is especially pertinent when dealing with complex functional data, where simple kernel smoothing methods may yield suboptimal results. The computational results presented in this article demonstrate that implementing a local linear bias correction (CB) substantially enhances the performance of the estimation method, reducing the overall error. However, the use of any CB process, particularly in a nonparametric setting, requires robust mathematical support to ensure control over its asymptotic behavior. This remains a significant gap in the current literature. To the best of our knowledge, no comprehensive analysis has been undertaken in NFDA regarding the asymptotics of CB methods. Therefore, we believe this represents an exciting and unexplored area for future research.

Furthermore, it would be highly beneficial to explore how the proposed method can be extended or combined with established CB algorithms. Notably, we see great potential in incorporating the well-known CB algorithms, such as those developed by Yao [56] and Karlsson [57], which have been shown to improve estimation accuracy in related contexts. By integrating local linear smoothing with these CB algorithms, we anticipate a significant improvement in estimation quality, particularly for complex functional data. This integration could offer a more robust approach, balancing both bias reduction and variance control, which is crucial in NFDA.

Another natural question that arises is how to adapt our results to other popular nonparametric estimation techniques, such as wavelet-based estimators [5860], delta sequence estimators [61], k -nearest neighbor ( k NN) estimators [62], and other local linear estimators [63]. These methods, widely used in various fields of functional data analysis, could potentially benefit from the CB techniques presented in this article. By investigating the extension of our approach to these estimators, we may uncover new synergies between kernel-based methods and other estimation frameworks, thereby advancing the state of the art in NFDA.
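As a sketch of the kNN direction mentioned above (illustrative only: the locally adaptive bandwidth rule and the quadratic kernel are our assumptions, not a construction taken from the cited works), the kernel estimator becomes a kNN-type estimator once the bandwidth is replaced by the distance to the k-th nearest curve:

```python
import numpy as np

def knn_functional_regression(X, Y, x0, k, metric):
    # kNN analogue of the kernel estimator: the bandwidth becomes the
    # (random) distance from x0 to its k-th nearest curve in the sample.
    Y = np.asarray(Y, dtype=float)
    d = np.array([metric(xi, x0) for xi in X])
    h = np.sort(d)[k - 1] + 1e-12          # data-driven local bandwidth
    w = np.maximum(1.0 - (d / h) ** 2, 0.0)
    if w.sum() == 0:
        return np.nan
    return np.average(Y, weights=w)
```

The appeal of this form is that the effective bandwidth adapts to the local design density, which is exactly the feature that makes the uniform-in-number-of-neighbors consistency results cited above relevant.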

Finally, as a future direction, we propose relaxing the stationarity assumption employed in the present study. While stationarity is a common assumption in many functional data models, it may not always be realistic, particularly in dynamic or time-varying settings [6466]. Therefore, extending our work to accommodate locally stationary functional ergodic processes could yield significant insights. In this more general setting, the conditional quantile estimator would be allowed to vary smoothly over time, thus accommodating nonstationary or evolving data structures. Investigating the uniform limit theorems for such processes would be a natural and fruitful extension of the current work, contributing to a deeper understanding of the asymptotic properties of kernel estimators in more flexible, nonstationary contexts. Furthermore, we aim to explore alternative estimation approaches using single index structures or additive modeling.

Acknowledgements

The authors extend their sincere gratitude to the Editor-in-Chief, the Associate Editor, and the three referees for their invaluable feedback and for pointing out a number of oversights in the version initially submitted. Their insightful comments have greatly refined and focused the original work, resulting in a markedly improved presentation.

  1. Funding information: The authors thank and extend their appreciation to the funders of this work: this research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and the Deanship of Scientific Research and Graduate Studies at King Khalid University through the Research Groups Program under Grant Number R.G.P. 2/338/45.

  2. Author contributions: The authors contributed approximately equally to this work. All authors have read and agreed to the final version of the manuscript. Formal analysis, I. M. Almanjahie; validation, H. Abood and F. Alshahrani; writing – review and editing, S. Bouzebda and A. Laksaci.

  3. Conflict of interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.

  4. Ethical approval: The conducted research is not related to either human or animal use.

  5. Data availability statement: The data used in this study are available through the link https://fred.stlouisfed.org/series/DJIA (accessed on 24 April 2024).

Appendix

This section is devoted to the proof of our results. The notation mentioned earlier is also used in what follows.

Proof of Lemma 3.3

Combining (P1) and (P4), we obtain the inequalities
\[
|\tilde K_i| \le \frac{C}{\varphi(s,a)} \quad\text{and}\quad \mathbb{E}[\tilde K_i^2] \le \frac{C}{\varphi(s,a)}.
\]
Applying Bernstein's inequality, we obtain, for any $\eta > 0$,
\[
\mathbb{P}\left( \left| \widehat{ES}_D(s) - \mathbb{E}[\widehat{ES}_D(s)] \right| > \eta\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right) \le C\, n^{-C\eta^2}.
\]
Consequently, for $C\eta^2 > 1$,
\[
\sum_{n\ge 1} \mathbb{P}\left( \left| \widehat{ES}_D(s) - \mathbb{E}[\widehat{ES}_D(s)] \right| > \eta\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right) < \infty.
\]
Furthermore, since $\mathbb{E}[\widehat{ES}_D(s)] = 1$,
\[
\sum_{n\ge 1} \mathbb{P}\left( \widehat{ES}_D(s) \le \frac{1}{2} \right) \le \sum_{n\ge 1} \mathbb{P}\left( \left| \widehat{ES}_D(s) - \mathbb{E}[\widehat{ES}_D(s)] \right| > \frac{1}{2} \right) < \infty.
\]
Thus, the proof is complete.□
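For reference, the exponential inequality invoked here and in the subsequent lemmas is of the following Bernstein type, stated for i.i.d. centered variables (the constants are indicative, not the sharp ones):

```latex
% Bernstein-type inequality: Z_1,\dots,Z_n i.i.d., \mathbb{E}[Z_1]=0,
% |Z_1| \le M, \operatorname{Var}(Z_1) \le \sigma^2. Then, for all \varepsilon > 0,
\[
  \mathbb{P}\left( \left| \frac{1}{n}\sum_{i=1}^{n} Z_i \right| > \varepsilon \right)
  \le 2 \exp\left( - \frac{n \varepsilon^2}{2\sigma^2 + \tfrac{2}{3} M \varepsilon} \right).
\]
% With M \le C/\varphi(s,a), \sigma^2 \le C/\varphi(s,a), and
% \varepsilon = \eta \sqrt{\ln n / (n \varphi(s,a))}, the exponent is of order
% -C\eta^2 \ln n, which gives the polynomial bound C\, n^{-C\eta^2} used above.
```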

Proof of Lemma 3.4

Condition (P2) implies that
\[
\mathbb{1}_{B(s,a)}(X_1)\left| ES(t,s) - ES(t,X_1) \right| \le C a^b.
\]
This inequality holds uniformly in $t$. Thus,
\[
\sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| ES(t,s) - \mathbb{E}[\widehat{ES}_N(t,s)] \right| \le C a^b,
\]
and we deduce that
\[
\sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| ES(t,s) - \mathbb{E}[\widehat{ES}_N(t,s)] \right| = O(a^b).
\]

Hence, the proof is complete.□

Proof of Lemma 3.5

The compactness of $[CExP_p(s)-\delta,\,CExP_p(s)+\delta]$ implies that
\[
(A1)\qquad [CExP_p(s)-\delta,\,CExP_p(s)+\delta] \subset \bigcup_{j=1}^{l_n} (y_j - d_n,\, y_j + d_n),
\]
with $d_n = O(n^{-1/2})$ and $l_n = O(n^{1/2})$. Both functions $\mathbb{E}[\widehat{ES}_N(\cdot,s)]$ and $\widehat{ES}_N(\cdot,s)$ are increasing. Thus, for $1\le j\le l_n$,
\[
(A2)\qquad
\begin{aligned}
\mathbb{E}[\widehat{ES}_N(y_j - d_n, s)] &\le \sup_{t\in(y_j-d_n,\,y_j+d_n)} \mathbb{E}[\widehat{ES}_N(t,s)] \le \mathbb{E}[\widehat{ES}_N(y_j + d_n, s)],\\
\widehat{ES}_N(y_j - d_n, s) &\le \sup_{t\in(y_j-d_n,\,y_j+d_n)} \widehat{ES}_N(t,s) \le \widehat{ES}_N(y_j + d_n, s).
\end{aligned}
\]
Now, by (P2), we obtain, for all $t_1, t_2 \in [CExP_p(s)-\delta,\,CExP_p(s)+\delta]$,
\[
\left| \mathbb{E}[\widehat{ES}_N(t_1,s)] - \mathbb{E}[\widehat{ES}_N(t_2,s)] \right| \le C |t_1 - t_2|.
\]
Hence,
\[
\sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| \widehat{ES}_N(t,s) - \mathbb{E}[\widehat{ES}_N(t,s)] \right|
\le \max_{1\le j\le l_n}\; \max_{z\in\{y_j-d_n,\,y_j+d_n\}} \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| + C d_n.
\]
Observe that
\[
d_n = n^{-1/2} = o\left( \sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right).
\]
Therefore, it remains to demonstrate that
\[
\max_{1\le j\le l_n}\; \max_{z\in\{y_j-d_n,\,y_j+d_n\}} \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| = O\left( \sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right), \quad \text{a.co.}
\]

For this, we write, for any $\eta > 0$,
\[
\mathbb{P}\left( \max_{1\le j\le l_n}\; \max_{z\in\{y_j-d_n,\,y_j+d_n\}} \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| > \eta\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right)
\le 2 l_n \max_{1\le j\le l_n}\; \max_{z\in\{y_j-d_n,\,y_j+d_n\}} \mathbb{P}\left( \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| > \eta\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right).
\]
It therefore suffices to assess
\[
\mathbb{P}\left( \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| > \eta\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right).
\]

For this, we define
\[
\tilde F_i(s) = \frac{1}{\mathbb{E}[F_1(s)]}\left[ F_i(s)\, Y_i\, \mathbb{1}_{\{Y_i \le z\}} - \mathbb{E}\left[ F_i(s)\, Y_i\, \mathbb{1}_{\{Y_i \le z\}} \right] \right],
\]
so that, for all $\varepsilon > 0$,
\[
\mathbb{P}\left( \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| > \varepsilon \right) = \mathbb{P}\left( \left| \frac{1}{n}\sum_{i=1}^{n} \tilde F_i(s) \right| > \varepsilon \right).
\]
Next, in order to use Bernstein's inequality, we consider the $m$-th order moment of $\tilde F_i(s)$. Observe that, for any integer $r$,
\[
\mathbb{E}\left[ \left| F_1(s)\, Y_1\, \mathbb{1}_{\{Y_1\le z\}} \right|^r \right] \le \mathbb{E}\left[ F_1^r(s)\, |Y_1|^r \right] = \mathbb{E}\left[ \mathbb{E}\left[ |Y_1|^r \mid X = s \right] F_1^r(s) \right] \le C\, \mathbb{E}[F_1^r(s)],
\]
whence
\[
\forall r\in\mathbb{N},\quad \mathbb{E}\left[ \left| F_1(s)\, Y_1\, \mathbb{1}_{\{Y_1\le z\}} \right|^r \right] \le C\,\varphi(s,a).
\]
On the other hand, by the binomial expansion,
\[
\begin{aligned}
\frac{1}{\mathbb{E}^m[F_1(s)]}\,\mathbb{E}\left[ \left| Y_1 F_1(s) - \mathbb{E}[Y_1 F_1(s)] \right|^m \right]
&\le \frac{1}{\mathbb{E}^m[F_1(s)]} \sum_{r=0}^{m} \binom{m}{r} \mathbb{E}\left[ |Y_1 F_1(s)|^{r} \right] \left| \mathbb{E}[Y_1 F_1(s)] \right|^{m-r} \\
&\le \frac{C}{\varphi^m(s,a)} \sum_{r=0}^{m} \binom{m}{r} \varphi(s,a)^{\,1+(m-r)}
\le C\,\varphi(s,a)^{-m+1}.
\end{aligned}
\]
We obtain
\[
(A3)\qquad \mathbb{E}\left[ \left| \tilde F_i(s) \right|^m \right] = O\left( \varphi(s,a)^{-m+1} \right).
\]

Thus, applying Bernstein's inequality with $a_n^2 = \varphi(s,a)^{-1}$, we obtain, for all $\tau > 0$,
\[
\mathbb{P}\left( \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| > \tau\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right)
= \mathbb{P}\left( \left| \sum_{i=1}^{n} \tilde F_i(s) \right| > n\tau\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right)
\le 2 \exp\left( - \frac{\tau^2 \ln n}{2\left( 1 + \tau\sqrt{\ln n/(n\,\varphi(s,a))} \right)} \right).
\]
Therefore,
\[
\sum_{n\ge 1} \mathbb{P}\left( \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| > \tau\sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right) \le 2 \sum_{n\ge 1} \exp\left( -C\tau^2 \ln n \right),
\]
which is finite for $\tau^2 > 1/C$, so that
\[
(A4)\qquad \left| \widehat{ES}_N(z,s) - \mathbb{E}[\widehat{ES}_N(z,s)] \right| = O_{\text{a.co.}}\left( \sqrt{\frac{\ln n}{n\,\varphi(s,a)}} \right).
\]

Hence, the proof is complete.□

Proof of Lemma 4.5

Let $\varepsilon = n^{\alpha - 1/2}$ and let $x_1,\dots,x_{N_\varepsilon(S)}$ be an $\varepsilon$-net of $S$. For any $s\in S$, define
\[
k(s) = \arg\min_{k\in\{1,2,\dots,N_\varepsilon(S)\}} d(s, x_k).
\]
We use the decomposition
\[
\sup_{s\in S} \left| \widehat{ES}_D(s) - \mathbb{E}[\widehat{ES}_D(s)] \right|
\le \underbrace{\sup_{s\in S} \left| \widehat{ES}_D(s) - \widehat{ES}_D(x_{k(s)}) \right|}_{F_1}
+ \underbrace{\sup_{s\in S} \left| \widehat{ES}_D(x_{k(s)}) - \mathbb{E}\left[\widehat{ES}_D(x_{k(s)})\right] \right|}_{F_2}
+ \underbrace{\sup_{s\in S} \left| \mathbb{E}\left[\widehat{ES}_D(x_{k(s)})\right] - \mathbb{E}\left[\widehat{ES}_D(s)\right] \right|}_{F_3}.
\]

For $F_1$, we have
\[
\sup_{s\in S} \left| \widehat{ES}_D(s) - \widehat{ES}_D(x_{k(s)}) \right|
\le \sup_{s\in S} \frac{1}{n} \sum_{i=1}^{n} \left| \frac{F_i(s)}{\mathbb{E}[F_1(s)]} - \frac{F_i(x_{k(s)})}{\mathbb{E}[F_1(x_{k(s)})]} \right|
\le \frac{C}{\varphi(a)} \sup_{s\in S} \frac{d(s, x_{k(s)})}{a}
\le \frac{C\varepsilon}{a\,\varphi(a)}.
\]
Given that $\varepsilon = n^{\alpha-1/2}$ and $n^{-\alpha} \le a\,\varphi(a)$, we can write
\[
\frac{\varepsilon}{a\,\varphi(a)} = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right).
\]
Hence,
\[
\sup_{s\in S} \left| \widehat{ES}_D(s) - \widehat{ES}_D(x_{k(s)}) \right| = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right), \quad \text{a.co.}
\]

For $F_2$, we have, for all $\eta > 0$,
\[
\begin{aligned}
&\mathbb{P}\left( \sup_{s\in S} \left| \widehat{ES}_D(x_{k(s)}) - \mathbb{E}\left[\widehat{ES}_D(x_{k(s)})\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right)\\
&\qquad = \mathbb{P}\left( \max_{k\in\{1,\dots,N_\varepsilon(S)\}} \left| \widehat{ES}_D(x_k) - \mathbb{E}\left[\widehat{ES}_D(x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right)\\
&\qquad \le N_\varepsilon(S) \max_{k\in\{1,\dots,N_\varepsilon(S)\}} \mathbb{P}\left( \left| \widehat{ES}_D(x_k) - \mathbb{E}\left[\widehat{ES}_D(x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right).
\end{aligned}
\]
Let
\[
\tilde\Delta_{ki} = \frac{1}{\mathbb{E}[F_1(x_k)]}\left( F_i(x_k) - \mathbb{E}[F_i(x_k)] \right).
\]
We show that, under (U1) and (U4),
\[
|\tilde\Delta_{ki}| \le \frac{C}{\varphi(a)} \quad\text{and}\quad \operatorname{Var}(\tilde\Delta_{ki}) \le \frac{1}{\mathbb{E}^2[F_1(x_k)]}\, \mathbb{E}\left[F_1^2(x_k)\right] \le \frac{C}{\varphi(a)}.
\]
Applying Bernstein's inequality, we obtain
\[
\mathbb{P}\left( \left| \widehat{ES}_D(x_k) - \mathbb{E}\left[\widehat{ES}_D(x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right) \le 2\exp\left( -C\eta^2\, \psi_S(n^{\alpha-1/2}) \right).
\]
Thus, using the fact that $\psi_S(n^{\alpha-1/2}) = \ln N_\varepsilon(S)$ and choosing $\eta$ such that $C\eta^2 = \beta$, we have
\[
N_\varepsilon(S) \max_{k\in\{1,\dots,N_\varepsilon(S)\}} \mathbb{P}\left( \left| \widehat{ES}_D(x_k) - \mathbb{E}\left[\widehat{ES}_D(x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right) \le C\, N_\varepsilon(S)^{1-\beta}.
\]
Since
\[
\sum_{n=1}^{\infty} N_\varepsilon(S)^{1-\beta} < \infty,
\]
we obtain
\[
\sum_{n=1}^{\infty} \mathbb{P}\left( \sup_{s\in S} \left| \widehat{ES}_D(x_{k(s)}) - \mathbb{E}\left[\widehat{ES}_D(x_{k(s)})\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right) < \infty,
\]
which allows us to write
\[
\sup_{s\in S} \left| \widehat{ES}_D(x_{k(s)}) - \mathbb{E}\left[\widehat{ES}_D(x_{k(s)})\right] \right| = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right), \quad \text{a.co.}
\]

For $F_3$,
\[
\sup_{s\in S} \left| \mathbb{E}\left[\widehat{ES}_D(s)\right] - \mathbb{E}\left[\widehat{ES}_D(x_{k(s)})\right] \right| \le \mathbb{E}\left[ \sup_{s\in S} \left| \widehat{ES}_D(s) - \widehat{ES}_D(x_{k(s)}) \right| \right] \le \frac{C\varepsilon}{a\,\varphi(a)}.
\]
Since $\varepsilon = n^{\alpha-1/2}$, we deduce that
\[
\sup_{s\in S} \left| \mathbb{E}\left[\widehat{ES}_D(s)\right] - \mathbb{E}\left[\widehat{ES}_D(x_{k(s)})\right] \right| = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right).
\]
Finally, for the last required result, note that
\[
\inf_{s\in S} \widehat{ES}_D(s) \le \frac{1}{2} \;\Longrightarrow\; \exists\, s\in S \text{ such that } 1 - \widehat{ES}_D(s) > \frac{1}{2} \;\Longrightarrow\; \sup_{s\in S} \left| 1 - \widehat{ES}_D(s) \right| > \frac{1}{2}.
\]
We deduce from the previous bounds that
\[
\mathbb{P}\left( \inf_{s\in S} \widehat{ES}_D(s) \le \frac{1}{2} \right) \le \mathbb{P}\left( \sup_{s\in S} \left| 1 - \widehat{ES}_D(s) \right| > \frac{1}{2} \right),
\]
and consequently
\[
\sum_{n=1}^{\infty} \mathbb{P}\left( \inf_{s\in S} \widehat{ES}_D(s) < \frac{1}{2} \right) < \infty.
\]

Hence, the proof is complete.□

Proof of Lemma 4.6

The equi-distribution of the pairs $(X_i, Y_i)$ shows that
\[
\left| ES(t,s) - \mathbb{E}\left[\widehat{ES}_N(t,s)\right] \right| = \frac{1}{\mathbb{E}[F_1(s)]}\left| \mathbb{E}\left[ F_1(s)\, \mathbb{1}_{B(s,a)}(X_1)\left( ES(t,s) - ES(t,X_1) \right) \right] \right|.
\]
Observe that assumption (U2) allows us to write, for each $s\in S$,
\[
\mathbb{1}_{B(s,a)}(X_1)\left| ES(t,s) - ES(t,X_1) \right| \le C a^b.
\]
Because this inequality is uniform in $t$ and $s$,
\[
\sup_{s\in S}\; \sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| ES(t,s) - \mathbb{E}\left[\widehat{ES}_N(t,s)\right] \right| \le C a^b,
\]
and finally,
\[
\sup_{s\in S}\; \sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| ES(t,s) - \mathbb{E}\left[\widehat{ES}_N(t,s)\right] \right| = O(a^b).
\]

Hence, the proof is complete.□

Proof of Lemma 4.7

This proof follows the same steps as the proof of Lemma 4.5. We keep the same notation and use the decomposition
\[
\begin{aligned}
\sup_{s\in S}\; \sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| \widehat{ES}_N(t,s) - \mathbb{E}\left[\widehat{ES}_N(t,s)\right] \right|
&\le \underbrace{\sup_{s\in S}\; \sup_{t} \left| \widehat{ES}_N(t,s) - \widehat{ES}_N(t,x_{k(s)}) \right|}_{G_1}\\
&\quad + \underbrace{\sup_{s\in S}\; \sup_{t} \left| \widehat{ES}_N(t,x_{k(s)}) - \mathbb{E}\left[\widehat{ES}_N(t,x_{k(s)})\right] \right|}_{G_2}\\
&\quad + \underbrace{\sup_{s\in S}\; \sup_{t} \left| \mathbb{E}\left[\widehat{ES}_N(t,x_{k(s)})\right] - \mathbb{E}\left[\widehat{ES}_N(t,s)\right] \right|}_{G_3},
\end{aligned}
\]
where the inner suprema are over $t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]$.

Starting with $G_1$: the Lipschitz condition on the kernel $F$ gives, for all $s\in S$ and $t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]$,
\[
\begin{aligned}
\left| \widehat{ES}_N(t,s) - \widehat{ES}_N(t,x_{k(s)}) \right|
&\le \frac{C}{n\,\varphi(a)} \sum_{i=1}^{n} |Y_i|\, \mathbb{1}_{\{Y_i\le t\}} \left| F\left( a^{-1} d(x_{k(s)}, X_i) \right) - F\left( a^{-1} d(s, X_i) \right) \right|\\
&\le C\, \frac{d(s, x_{k(s)})}{a\,\varphi(a)}\, \frac{1}{n} \sum_{i=1}^{n} |Y_i|\, \mathbb{1}_{\{Y_i\le t\}}
\le \frac{C\varepsilon}{a\,\varphi(a)}\, \frac{1}{n} \sum_{i=1}^{n} |Y_i|.
\end{aligned}
\]
As $\varepsilon = n^{\alpha-1/2}$ and
\[
\frac{1}{n}\sum_{i=1}^{n} |Y_i| = O(1) \quad \text{a.co.},
\]
we obtain
\[
(A5)\qquad \sup_{s\in S}\; \sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| \widehat{ES}_N(t,s) - \widehat{ES}_N(t,x_{k(s)}) \right| = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right), \quad \text{a.co.}
\]
Taking expectations in the same bound gives
\[
(A6)\qquad \sup_{s\in S}\; \sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| \mathbb{E}\left[\widehat{ES}_N(t,x_{k(s)})\right] - \mathbb{E}\left[\widehat{ES}_N(t,s)\right] \right| = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right).
\]

Concerning $G_2$, we use the same argument as for $F_2$, combined with the covering of Lemma 3.5, to prove that
\[
\sup_{s\in S}\; \sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| \widehat{ES}_N(t,x_{k(s)}) - \mathbb{E}\left[\widehat{ES}_N(t,x_{k(s)})\right] \right|
\le \max_{k\in\{1,\dots,N_\varepsilon(S)\}}\; \max_{1\le j\le l_n}\; \max_{z\in\{y_j-d_n,\,y_j+d_n\}} \left| \widehat{ES}_N(z,x_k) - \mathbb{E}\left[\widehat{ES}_N(z,x_k)\right] \right| + C d_n.
\]
Thus, it remains to show that
\[
(A7)\qquad \max_{k\in\{1,\dots,N_\varepsilon(S)\}}\; \max_{1\le j\le l_n}\; \max_{z\in\{y_j-d_n,\,y_j+d_n\}} \left| \widehat{ES}_N(z,x_k) - \mathbb{E}\left[\widehat{ES}_N(z,x_k)\right] \right| = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right), \quad \text{a.co.}
\]
Indeed, we have, for all $\eta > 0$,
\[
\mathbb{P}\left( \max_{k}\; \max_{j}\; \max_{z} \left| \widehat{ES}_N(z,x_k) - \mathbb{E}\left[\widehat{ES}_N(z,x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right)
\le 2\, l_n\, N_\varepsilon(S) \max_{k\in\{1,\dots,N_\varepsilon(S)\}}\; \max_{1\le j\le l_n}\; \max_{z\in\{y_j-d_n,\,y_j+d_n\}} \mathbb{P}\left( \left| \widehat{ES}_N(z,x_k) - \mathbb{E}\left[\widehat{ES}_N(z,x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right).
\]
We apply Bernstein's exponential inequality to obtain, for each $j\le l_n$,
\[
\mathbb{P}\left( \left| \widehat{ES}_N(z,x_k) - \mathbb{E}\left[\widehat{ES}_N(z,x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right) \le 2\exp\left\{ -C\eta^2\, \psi_S(n^{\alpha-1/2}) \right\}.
\]
Therefore, using the fact that $\psi_S(n^{\alpha-1/2}) = \ln N_\varepsilon(S)$ and $l_n \sim n^{1/2}$, we have
\[
l_n\, N_\varepsilon(S) \max_{1\le j\le l_n}\; \max_{k\in\{1,\dots,N_\varepsilon(S)\}} \mathbb{P}\left( \left| \widehat{ES}_N(z,x_k) - \mathbb{E}\left[\widehat{ES}_N(z,x_k)\right] \right| > \eta\sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right) \le C\, l_n\, N_\varepsilon(S)^{1-C\eta^2}.
\]
By choosing $\eta$ such that $C\eta^2 = \beta$, we obtain
\[
(A8)\qquad \sup_{s\in S}\; \sup_{t\in[CExP_p(s)-\delta,\,CExP_p(s)+\delta]} \left| \widehat{ES}_N(t,x_{k(s)}) - \mathbb{E}\left[\widehat{ES}_N(t,x_{k(s)})\right] \right| = O\left( \sqrt{\frac{\psi_S(n^{\alpha-1/2})}{n\,\varphi(a)}} \right), \quad \text{a.co.}
\]
This completes the proof of Lemma 4.7.□

References

[1] Basel Committee on Banking Supervision, Fundamental review of the trading book: A revised market risk framework, 2014, http://www.bis.org/publ/bcbs265.pdf.

[2] W. K. Newey and J. L. Powell, Asymmetric least squares estimation and testing, Econometrica 55 (1987), no. 4, 819–847, DOI: https://doi.org/10.2307/1911031.

[3] L. Schulze Waltrup, F. Sobotka, T. Kneib, and G. Kauermann, Expectile and quantile regression - David and Goliath?, Stat. Model. 15 (2015), no. 5, 433–456, DOI: https://doi.org/10.1177/1471082X14561155.

[4] F. Bellini and E. Di Bernardino, Risk management with expectiles, Eur. J. Finance 23 (2017), no. 6, 487–506, DOI: https://doi.org/10.1080/1351847X.2015.1052150.

[5] M. Farooq and I. Steinwart, Learning rates for kernel-based expectile regression, Mach. Learn. 108 (2019), no. 2, 203–227, DOI: https://doi.org/10.1007/s10994-018-5762-9.

[6] F. Bellini, I. Negri, and M. Pyatkova, Backtesting VaR and expectiles with realized scores, Stat. Methods Appl. 28 (2019), no. 1, 119–142, DOI: https://doi.org/10.1007/s10260-018-00434-w.

[7] Y. Gu and H. Zou, High-dimensional generalizations of asymmetric least squares regression and their applications, Ann. Statist. 44 (2016), no. 6, 2661–2694, DOI: https://doi.org/10.1214/15-AOS1431.

[8] J. Zhao, Y. Chen, and Y. Zhang, Expectile regression for analyzing heteroscedasticity in high dimension, Statist. Probab. Lett. 137 (2018), 304–311, DOI: https://doi.org/10.1016/j.spl.2018.02.006.

[9] S. Chakroborty, R. Iyer, and A. Alexandre Trindade, On the use of the M-quantiles for outlier detection in multivariate data, 2024, https://arxiv.org/abs/2401.01628.

[10] T. Kneib, Beyond mean regression, Stat. Model. 13 (2013), no. 4, 275–303, DOI: https://doi.org/10.1177/1471082X13494159.

[11] M. Mohammedi, S. Bouzebda, and A. Laksaci, The consistency and asymptotic normality of the kernel type expectile regression estimator for functional data, J. Multivariate Anal. 181 (2021), Paper No. 104673, DOI: https://doi.org/10.1016/j.jmva.2020.104673.

[12] S. Girard, G. Stupfler, and A. Usseglio-Carleve, Functional estimation of extreme conditional expectiles, Econom. Stat. 21 (2022), 131–158, DOI: https://doi.org/10.1016/j.ecosta.2021.05.006.

[13] A. Goia and P. Vieu, An introduction to recent advances in high/infinite dimensional statistics, J. Multivariate Anal. 146 (2016), 1–6, DOI: https://doi.org/10.1016/j.jmva.2015.12.001.

[14] G. Aneiros, R. Cao, R. Fraiman, C. Genest, and P. Vieu, Recent advances in functional data analysis and high-dimensional statistics, J. Multivariate Anal. 170 (2019), 3–9, DOI: https://doi.org/10.1016/j.jmva.2018.11.007.

[15] Z. Chikr Elmezouar, F. Alshahrani, I. M. Almanjahie, S. Bouzebda, Z. Kaid, and A. Laksaci, Strong consistency rate in functional single index expectile model for spatial data, AIMS Math. 9 (2024), no. 3, 5550–5581, DOI: https://doi.org/10.3934/math.2024269.

[16] I. M. Almanjahie, S. Bouzebda, Z. Kaid, and A. Laksaci, Nonparametric estimation of expectile regression in functional dependent data, J. Nonparametr. Stat. 34 (2022), no. 1, 250–281, DOI: https://doi.org/10.1080/10485252.2022.2027412.

[17] I. M. Almanjahie, S. Bouzebda, Z. C. Elmezouar, and A. Laksaci, The functional kNN estimator of the conditional expectile: uniform consistency in number of neighbors, Stat. Risk Model. 38 (2022), no. 3–4, 47–63, DOI: https://doi.org/10.1515/strm-2019-0029.

[18] I. M. Almanjahie, S. Bouzebda, Z. Kaid, and A. Laksaci, The local linear functional kNN estimator of the conditional expectile: uniform consistency in number of neighbors, Metrika 87 (2024), no. 8, 1007–1035, DOI: https://doi.org/10.1007/s00184-023-00942-0.

[19] O. Litimein, A. Laksaci, L. Ait-Hennani, B. Mechab, and M. Rachdi, Asymptotic normality of the local linear estimator of the functional expectile regression, J. Multivariate Anal. 202 (2024), Paper No. 105281, DOI: https://doi.org/10.1016/j.jmva.2023.105281.

[20] D. Yu, M. Pietrosanu, I. Mizera, B. Jiang, L. Kong, and W. Tu, Functional linear partial quantile regression with guaranteed convergence for neuroimaging data analysis, Stat. Biosci. 17 (2024), 174–190, DOI: https://doi.org/10.1007/s12561-023-09412-7.

[21] E. Di Bernardino, T. Laloë, and C. Pakzad, Estimation of extreme multivariate expectiles with functional covariates, J. Multivariate Anal. 202 (2024), Paper No. 105292, DOI: https://doi.org/10.1016/j.jmva.2023.105292.

[22] A. Laksaci, S. Bouzebda, F. Alshahrani, O. Litimein, and B. Mechab, Spatio-functional local linear asymmetric least square regression estimation: Application for spatial prediction of COVID-19 propagation, Symmetry 15 (2023), no. 12, 2108, DOI: https://doi.org/10.3390/sym15122108.

[23] P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath, Coherent measures of risk, Math. Finance 9 (1999), no. 3, 203–228, DOI: https://doi.org/10.1111/1467-9965.00068.

[24] M. Brutti Righi and P. Sergio Ceretta, A comparison of expected shortfall estimation models, J. Econ. Business 78 (2015), 14–47, DOI: https://doi.org/10.1016/j.jeconbus.2014.11.002.

[25] K. Moutanabbir and M. Bouaddi, A new non-parametric estimation of the expected shortfall for dependent financial losses, J. Statist. Plann. Inference 232 (2024), Paper No. 106151, DOI: https://doi.org/10.1016/j.jspi.2024.106151.

[26] E. Lazar, J. Pan, and S. Wang, On the estimation of value-at-risk and expected shortfall at extreme levels, J. Commodity Markets 34 (2024), 100391, DOI: https://doi.org/10.1016/j.jcomm.2024.100391.

[27] O. Scaillet, Nonparametric estimation and sensitivity analysis of expected shortfall, Math. Finance 14 (2004), no. 1, 115–129, DOI: https://doi.org/10.1111/j.0960-1627.2004.00184.x.

[28] Z. Cai and X. Wang, Nonparametric estimation of conditional VaR and expected shortfall, J. Econometrics 147 (2008), no. 1, 120–130, DOI: https://doi.org/10.1016/j.jeconom.2008.09.005.

[29] Y. Wu, W. Yu, N. Balakrishnan, and X. Wang, Nonparametric estimation of expected shortfall via Bahadur-type representation and Berry-Esséen bounds, J. Stat. Comput. Simul. 92 (2022), no. 3, 544–566, DOI: https://doi.org/10.1080/00949655.2021.1966791.

[30] F. Ferraty and A. Quintela-del Río, Conditional VAR and expected shortfall: a new functional approach, Econometric Rev. 35 (2016), no. 2, 263–292, DOI: https://doi.org/10.1080/07474938.2013.807107.

[31] L. Ait-Hennani, Z. Kaid, A. Laksaci, and M. Rachdi, Nonparametric estimation of the expected shortfall regression for quasi-associated functional data, Mathematics 10 (2022), no. 23, 4508, DOI: https://doi.org/10.3390/math10234508.

[32] S. Fuchs, R. Schlotter, and K. D. Schmidt, A review and some complements on quantile risk measures and their domain, Risks 5 (2017), no. 4, 1–16, DOI: https://doi.org/10.3390/risks5040059.

[33] D. Leão, M. D. Fragoso, and P. R. C. Ruffino, Characterizations of Radon spaces, Statist. Probab. Lett. 42 (1999), no. 4, 409–413, DOI: https://doi.org/10.1016/S0167-7152(98)00237-5.

[34] D. Leão, M. Fragoso, and P. Ruffino, Regular conditional probability, disintegration of probability and Radon spaces, Proyecciones 23 (2004), no. 1, 15–29, DOI: https://doi.org/10.4067/S0716-09172004000100002.

[35] D. J. Aigner, T. Amemiya, and D. J. Poirier, On the estimation of production frontiers: maximum likelihood estimation of the parameters of a discontinuous density function, Int. Econom. Rev. 17 (1976), no. 2, 377–396, DOI: https://doi.org/10.2307/2525708.

[36] S. Bouzebda and M. Chaouch, Uniform limit theorems for a class of conditional Z-estimators when covariates are functions, J. Multivariate Anal. 189 (2022), Paper No. 104872, DOI: https://doi.org/10.1016/j.jmva.2021.104872.

[37] M. Pratesi, M. Giovanna Ranalli, and N. Salvati, Nonparametric M-quantile regression using penalised splines, J. Nonparametr. Stat. 21 (2009), no. 3, 287–304, DOI: https://doi.org/10.1080/10485250802638290.

[38] F. Bellini, V. Bignozzi, and G. Puccetti, Conditional expectiles, time consistency and mixture convexity properties, Insurance Math. Econom. 82 (2018), 117–123, DOI: https://doi.org/10.1016/j.insmatheco.2018.07.001.

[39] C. Chen, S. Guo, and X. Qiao, Functional linear regression: Dependence and error contamination, J. Bus. Econom. Statist. 40 (2022), no. 1, 444–457, DOI: https://doi.org/10.1080/07350015.2020.1832503.

[40] P. H. C. Eilers, Discussion: the beauty of expectiles, Stat. Model. 13 (2013), no. 4, 317–322, DOI: https://doi.org/10.1177/1471082X13494313.

[41] R. Koenker, Discussion: living beyond our means, Stat. Model. 13 (2013), no. 4, 323–333, DOI: https://doi.org/10.1177/1471082X13494314.

[42] M. C. Jones, Expectiles and M-quantiles are quantiles, Statist. Probab. Lett. 20 (1994), no. 2, 149–153, DOI: https://doi.org/10.1016/0167-7152(94)90031-0.

[43] C.-M. Kuan, J.-H. Yeh, and Y.-C. Hsu, Assessing value at risk with CARE, the conditional autoregressive expectile models, J. Econometrics 150 (2009), no. 2, 261–270, DOI: https://doi.org/10.1016/j.jeconom.2008.12.002.

[44] J. Lin, A neural networks based method with genetic data analysis of complex diseases, Ph.D. thesis, Michigan State University, ProQuest LLC, Ann Arbor, MI, 2021, http://gateway.proquest.com.accesdistant.sorbonne-universite.fr/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28644379.

[45] F. Ferraty and P. Vieu, Nonparametric Functional Data Analysis: Theory and Practice, Springer Series in Statistics, Springer, New York, 2006.

[46] W. V. Li and Q.-M. Shao, Gaussian processes: inequalities, small ball probabilities and applications, in: Stochastic Processes: Theory and Methods, Handbook of Statistics, vol. 19, North-Holland, Amsterdam, 2001, pp. 533–597, DOI: https://doi.org/10.1016/S0169-7161(01)19019-X.

[47] P. Deheuvels, One bootstrap suffices to generate sharp uniform bounds in functional estimation, Kybernetika (Prague) 47 (2011), no. 6, 855–865.

[48] S. Bouzebda and N. Taachouche, On the variable bandwidth kernel estimation of conditional U-statistics at optimal rates in sup-norm, Phys. A 625 (2023), Paper No. 129000, DOI: https://doi.org/10.1016/j.physa.2023.129000.

[49] A. N. Kolmogorov and V. M. Tihomirov, ε-entropy and ε-capacity of sets in function spaces, Uspehi Mat. Nauk 14 (1959), no. 2(86), 3–86.

[50] F. Ferraty, A. Laksaci, A. Tadj, and P. Vieu, Rate of uniform consistency for nonparametric estimates with functional variables, J. Statist. Plann. Inference 140 (2010), no. 2, 335–352, DOI: https://doi.org/10.1016/j.jspi.2009.07.019.

[51] O. Litimein, F. Alshahrani, S. Bouzebda, A. Laksaci, and B. Mechab, Kolmogorov entropy for convergence rate in incomplete functional time series: application to percentile and cumulative estimation in high dimensional data, Entropy 25 (2023), no. 7, Paper No. 1108, DOI: https://doi.org/10.3390/e25071108.

[52] H. Luschgy and G. Pagès, Sharp asymptotics of the Kolmogorov entropy for Gaussian measures, J. Funct. Anal. 212 (2004), no. 1, 89–120, DOI: https://doi.org/10.1016/j.jfa.2003.09.004.

[53] A. van der Vaart and H. van Zanten, Bayesian inference with rescaled Gaussian process priors, Electron. J. Stat. 1 (2007), 433–448, DOI: https://doi.org/10.1214/07-EJS098.

[54] A. Shirikyan, Euler equations are not exactly controllable by a finite-dimensional external force, Phys. D 237 (2008), no. 10–12, 1317–1323, DOI: https://doi.org/10.1016/j.physd.2008.03.021.

[55] I. Soukarieh and S. Bouzebda, Renewal type bootstrap for increasing degree U-process of a Markov chain, J. Multivariate Anal. 195 (2023), Paper No. 105143, DOI: https://doi.org/10.1016/j.jmva.2022.105143.

[56] W. Yao, A bias corrected nonparametric regression estimator, Statist. Probab. Lett. 82 (2012), no. 2, 274–282, DOI: https://doi.org/10.1016/j.spl.2011.10.006.

[57] A. Karlsson, Bootstrap methods for bias correction and confidence interval estimation for nonlinear quantile regression of longitudinal data, J. Stat. Comput. Simul. 79 (2009), no. 9–10, 1205–1218, DOI: https://doi.org/10.1080/00949650802221180.

[58] S. Didi and S. Bouzebda, Wavelet density and regression estimators for continuous time functional stationary and ergodic processes, Mathematics 10 (2022), no. 22, 4356, DOI: https://doi.org/10.3390/math10224356.

[59] S. Didi, A. Al Harby, and S. Bouzebda, Wavelet density and regression estimators for functional stationary and ergodic data: Discrete time, Mathematics 10 (2022), no. 19, 3433, DOI: https://doi.org/10.3390/math10193433.

[60] S. Bouzebda and N. Taachouche, Oracle inequalities and upper bounds for kernel conditional U-statistics estimators on manifolds and more general metric spaces associated with operators, Stochastics 96 (2024), no. 8, 2135–2198, DOI: https://doi.org/10.1080/17442508.2024.2391898.

[61] S. Bouzebda and A. Nezzal, Asymptotic properties of conditional U-statistics using delta sequences, Comm. Statist. Theory Methods 53 (2024), no. 13, 4602–4657, DOI: https://doi.org/10.1080/03610926.2023.2179887.

[62] S. Bouzebda and A. Nezzal, Uniform in number of neighbors consistency and weak convergence of kNN empirical conditional processes and kNN conditional U-processes involving functional mixing data, AIMS Math. 9 (2024), no. 2, 4427–4550, DOI: https://doi.org/10.3934/math.2024218.

[63] M.-Y. Cheng and H.-T. Wu, Local linear regression on manifolds and its geometric interpretation, J. Amer. Statist. Assoc. 108 (2013), no. 504, 1421–1434, DOI: https://doi.org/10.1080/01621459.2013.827984.

[64] B. M. Agua and S. Bouzebda, Single index regression for locally stationary functional time series, AIMS Math. 9 (2024), no. 12, 36202–36258, DOI: https://doi.org/10.3934/math.20241719.

[65] S. Bouzebda, Weak convergence of the conditional single index U-statistics for locally stationary functional time series, AIMS Math. 9 (2024), no. 6, 14807–14898, DOI: https://doi.org/10.3934/math.2024720.

[66] I. Soukarieh and S. Bouzebda, Weak convergence of the conditional U-statistics for locally stationary functional time series, Stat. Inference Stoch. Process. 27 (2024), no. 2, 227–304, DOI: https://doi.org/10.1007/s11203-023-09305-y.

Received: 2024-07-04
Revised: 2025-02-09
Accepted: 2025-03-28
Published Online: 2025-04-30

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.