Article Open Access

A matrix approach to compare BLUEs under a linear regression model and its two competing restricted models with applications

Published/Copyright: July 29, 2025

Abstract

Suppose that a weakly singular linear regression model $M$ and its two competing restricted models $M_1$ and $M_2$ are given. We first compare the superiority of the best linear unbiased estimators (BLUEs) under $M$ and $M_1$ with respect to the covariance matrix criterion and derive some necessary and sufficient conditions. Based on these conditions and the same criterion, we then compare the superiority of the BLUEs under $M_1$ and $M_2$ and obtain several results. We finally apply these results to measuring the influence of an observation on the BLUE.

MSC 2010: 15A09; 62H12; 62J05

1 Introduction

Throughout this article, $M'$, $M^+$, $r(M)$, and $C(M)$ denote the transpose, the Moore-Penrose inverse, the rank, and the column space of a matrix $M$, respectively, and $\mathbb{R}^{m\times n}$ stands for the collection of all $m\times n$ real matrices. Define $i_+(M)$ and $i_-(M)$ as the numbers of positive and negative eigenvalues of a symmetric matrix $M$ counted with multiplicities, respectively, and put $P_M = MM^+$, $E_M = I - MM^+$, and $F_M = I - M^+M$. Moreover, for symmetric matrices $M, N \in \mathbb{R}^{n\times n}$, $M \succ N$ ($M \succeq N$, $M \prec N$, $M \preceq N$) in the Löwner partial ordering means that $i_+(M-N) = n$ ($i_-(M-N) = 0$, $i_-(M-N) = n$, $i_+(M-N) = 0$). For brevity, we use $M = \{y, X\beta, \Sigma\}$ to denote the regression model

(1.1) $y = X\beta + \varepsilon, \quad E(\varepsilon) = 0, \quad \operatorname{Cov}(\varepsilon) = \Sigma \succeq 0,$

where $y \in \mathbb{R}^{n\times 1}$ is an observable random vector, $\beta \in \mathbb{R}^{p\times 1}$ is a parameter vector, $\varepsilon \in \mathbb{R}^{n\times 1}$ is a vector of unobservable disturbances, and $X \in \mathbb{R}^{n\times p}$ and $\Sigma$ are two known matrices without any rank restrictions.

Let two competing stochastic restrictions be imposed on the regression parameter $\beta$, written as

(1.2) $r_j = A_j\beta + e_j, \quad E(e_j) = 0, \quad \operatorname{Cov}(e_j) = V_j, \quad j = 1, 2,$

where $e_j \in \mathbb{R}^{m_j\times 1}$ is a random error vector such that $\operatorname{Cov}(\varepsilon, e_j) = 0$, and $A_j \in \mathbb{R}^{m_j\times p}$, $V_j \in \mathbb{R}^{m_j\times m_j}$, and $r_j \in \mathbb{R}^{m_j\times 1}$ are known with arbitrary ranks. The vector $r_j$ may be approached as a random vector with $E(r_j) = A_j\beta$, $j = 1, 2$ [1, p. 252].

Linear stochastic restrictions may come from former studies or some external source, and they have numerous applications, including deletion diagnostics in regression analysis [2,3], bioassay in biometric research [4], and economic relations [5,6], among others. Alternatively, linear stochastic restrictions can be applied to the estimation of linear regression models with missing data [7]. Furthermore, Arashi and Tabatabaey [8] employed linear stochastic restrictions to construct Stein-type improved estimators in regression models with multivariate Student-$t$ errors. Additionally, integrating linear stochastic restrictions with Zellner's [9] "seemingly unrelated" estimators makes it possible to reveal good properties of estimators under a mean squared error criterion. For the impact of linear stochastic restrictions on estimators and predictors, see [10–14]; for a comprehensive review of linear stochastic restrictions and their applications, see [5].

As first proposed by Theil and Goldberger [15], combining the regression model (1.1) with the stochastic restrictions (1.2) yields the following mixed model:

(1.3) $M_j = \left\{ \begin{pmatrix} y \\ r_j \end{pmatrix}, \begin{pmatrix} X \\ A_j \end{pmatrix}\beta, \begin{pmatrix} \Sigma & 0 \\ 0 & V_j \end{pmatrix} \right\}, \quad j = 1, 2.$

When $r(X) = p$, $\Sigma \succ 0$, and $V_j \succ 0$, $j = 1, 2$, it is well known that the best linear unbiased estimator (BLUE) of $\beta$ in (1.3) can be written as

$$\operatorname{BLUE}(\beta \mid M_j) = (S + A_j'V_j^{-1}A_j)^{-1}(X'\Sigma^{-1}y + A_j'V_j^{-1}r_j) = \operatorname{BLUE}(\beta \mid M) + S^{-1}A_j'(V_j + A_jS^{-1}A_j')^{-1}(r_j - A_j\operatorname{BLUE}(\beta \mid M)), \quad j = 1, 2,$$

where $S = X'\Sigma^{-1}X$ and $\operatorname{BLUE}(\beta \mid M) = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$ is the BLUE of $\beta$ in (1.1). Straightforward operations then give

$$\operatorname{Cov}(\operatorname{BLUE}(\beta \mid M)) \succeq \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M_j)), \quad j = 1, 2.$$
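Under the full-rank assumptions above, both closed forms of $\operatorname{BLUE}(\beta \mid M_j)$ and the covariance dominance can be checked numerically. The following NumPy sketch uses arbitrary simulated data (dimensions, seed, and variable names are illustrative only, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 8, 3, 2
X = rng.standard_normal((n, p))      # r(X) = p almost surely
Sigma = np.eye(n)                    # nonsingular Sigma
V = np.eye(m)                        # nonsingular V_j
A = rng.standard_normal((m, p))
y = rng.standard_normal(n)
r = rng.standard_normal(m)

Si = np.linalg.inv(Sigma)
Vi = np.linalg.inv(V)
S = X.T @ Si @ X

# first closed form of BLUE(beta | M_j)
b_mixed = np.linalg.solve(S + A.T @ Vi @ A, X.T @ Si @ y + A.T @ Vi @ r)

# second (update) form, built from BLUE(beta | M)
b0 = np.linalg.solve(S, X.T @ Si @ y)
Sinv = np.linalg.inv(S)
b_upd = b0 + Sinv @ A.T @ np.linalg.solve(V + A @ Sinv @ A.T, r - A @ b0)
assert np.allclose(b_mixed, b_upd)

# Cov(BLUE(beta|M)) - Cov(BLUE(beta|M_j)) = S^{-1} - (S + A'V^{-1}A)^{-1} >= 0
diff = Sinv - np.linalg.inv(S + A.T @ Vi @ A)
assert np.linalg.eigvalsh(diff).min() > -1e-10
print("mixed-estimator identity and covariance dominance verified")
```

The equality of the two forms is the Sherman-Morrison-Woodbury identity applied to $(S + A'V^{-1}A)^{-1}$.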

In 2007, Xu and Yang [11] generalized the above result to a certain extent. Further research shows that their conclusion can be strengthened; this is done in the next section. Assuming that $X$ has full column rank, i.e., $r(X) = p$, that $\Sigma = I_n$, and that $r(A_j) = m_j$, $j = 1, 2$, in (1.3), Baksalary [16] considered the problem of comparing the BLUEs of the parameter $\beta$ under $M_1$ and $M_2$ by the covariance matrix criterion, as did [17,18]. Necessary and sufficient conditions were derived for one of the estimators to outperform the other. In many cases, however, these assumptions are not reasonable, as in (3.6) below. To obtain a more general result, we extend the linear regression models in [16–23] to a weakly singular linear regression model, i.e., the model (1.1) satisfying $C(X) \subseteq C(\Sigma)$. The main contribution of this work is to establish exact formulas for comparing the BLUEs of $X\beta$ under the models $M_1$ and $M_2$ with $C(X) \subseteq C(\Sigma)$ relative to their covariance matrices, utilizing the matrix rank and inertia methods, as well as for comparing the BLUEs of $\beta$. With these exact formulas it is quite convenient to compare two BLUEs, since the rank and inertia of a matrix can be readily computed in practice. Our approach to treating the two competing restriction equations (1.2) is also available to research in biological assay; for instance, in slope ratio assays it is necessary to consider imposing the stochastic restrictions (1.2) on the regression parameters [4]. Moreover, another reason for considering the restrictions (1.2) is that specialists can make adjustment decisions based on the comparison of estimators under $M_1$ and $M_2$, for example, to discard one of the two restrictions in (1.2).

Now let us recall that if $Gy$ is an unbiased estimator of $X\beta$, then

(1.4) $Gy = \operatorname{BLUE}(X\beta \mid M) \Longleftrightarrow \operatorname{Cov}(Gy) \preceq \operatorname{Cov}(Hy)\ \text{for all}\ H\ \text{with}\ HX = X \Longleftrightarrow G(X, \Sigma E_X) = (X, 0),$

see [19]. The general solution to (1.4) is

(1.5) $G = (X, 0)(X, \Sigma E_X)^+ + UE_{(X, \Sigma E_X)},$

where $U \in \mathbb{R}^{n\times n}$ is an arbitrary matrix.

Hence, in view of (1.4) and (1.5), we obtain

(1.6) $\operatorname{BLUE}(X\beta \mid M_j) = G_j\begin{pmatrix} y \\ r_j \end{pmatrix}, \quad j = 1, 2,$

where

(1.7) $G_j = (X, 0)(\hat X_j, \hat\Sigma_jE_{\hat X_j})^+ + U_jE_{(\hat X_j, \hat\Sigma_jE_{\hat X_j})},$

in which $\hat X_j = \begin{pmatrix} X \\ A_j \end{pmatrix}$, $\hat\Sigma_j = \begin{pmatrix} \Sigma & 0 \\ 0 & V_j \end{pmatrix}$, and $U_j \in \mathbb{R}^{n\times(n+m_j)}$ is an arbitrary matrix.

To simplify our conclusions involving Moore-Penrose inverses of matrices, we collect the following three results furnished by [20].

Lemma 1.1

Let $Q_1 \in \mathbb{R}^{n\times n}$ and $Q_2 \in \mathbb{R}^{m\times m}$ be symmetric matrices and let $Q_3 \in \mathbb{R}^{n\times m}$. Then

(1.8) $i_\pm(P'Q_1P) \le i_\pm(Q_1)$ for any matrix $P \in \mathbb{R}^{n\times m}$,

(1.9) $i_\pm(P'Q_1P) = i_\pm(Q_1)$ if $P \in \mathbb{R}^{n\times n}$ is nonsingular,

(1.10) $i_\pm\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} = i_\pm(Q_1) + i_\pm(Q_2),$

(1.11) $i_\pm\begin{pmatrix} 0 & Q_3 \\ Q_3' & 0 \end{pmatrix} = r(Q_3), \quad i_\pm(-Q_1) = i_\mp(Q_1).$

Lemma 1.2

Let $Q_1 \in \mathbb{R}^{n\times n}$ and $Q_3 \in \mathbb{R}^{m\times m}$ be symmetric matrices and let $Q_2 \in \mathbb{R}^{n\times m}$. Then

(1.12) $i_\pm\begin{pmatrix} Q_1 & Q_2 \\ Q_2' & 0 \end{pmatrix} = r(Q_2) + i_\pm(E_{Q_2}Q_1E_{Q_2}),$

(1.13) $i_+\begin{pmatrix} Q_1 & Q_2 \\ Q_2' & 0 \end{pmatrix} = r(Q_1, Q_2)$ and $i_-\begin{pmatrix} Q_1 & Q_2 \\ Q_2' & 0 \end{pmatrix} = r(Q_2)$ if $Q_1 \succeq 0$,

(1.14) $i_+\begin{pmatrix} Q_1 & Q_2 \\ Q_2' & 0 \end{pmatrix} = r(Q_2)$ and $i_-\begin{pmatrix} Q_1 & Q_2 \\ Q_2' & 0 \end{pmatrix} = r(Q_1, Q_2)$ if $Q_1 \preceq 0$,

(1.15) $i_\pm\begin{pmatrix} Q_1 & Q_2 \\ Q_2' & Q_3 \end{pmatrix} \ge i_\pm(Q_1).$
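The inertia identities (1.12) and (1.13) lend themselves to a quick numerical check. A small sketch follows (NumPy assumed; the `inertia` helper and the test matrices are mine, chosen to include a rank-deficient $Q_2$):

```python
import numpy as np

def inertia(M, tol=1e-8):
    # (i_+, i_-): numbers of positive/negative eigenvalues, with multiplicities
    w = np.linalg.eigvalsh((M + M.T) / 2)
    return int((w > tol).sum()), int((w < -tol).sum())

rng = np.random.default_rng(1)
n, m = 5, 3
Q1 = rng.standard_normal((n, n))
Q1 = Q1 + Q1.T                                                   # symmetric, indefinite
Q2 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, m))   # rank-deficient

r2 = np.linalg.matrix_rank(Q2)
E2 = np.eye(n) - Q2 @ np.linalg.pinv(Q2)                         # E_{Q2} = I - Q2 Q2^+
blk = np.block([[Q1, Q2], [Q2.T, np.zeros((m, m))]])
assert inertia(blk) == tuple(r2 + s for s in inertia(E2 @ Q1 @ E2))   # (1.12)

C = rng.standard_normal((n, n))
Q1psd = C @ C.T                                                  # Q1 >= 0 case of (1.13)
blk = np.block([[Q1psd, Q2], [Q2.T, np.zeros((m, m))]])
ip, im = inertia(blk)
assert ip == np.linalg.matrix_rank(np.hstack([Q1psd, Q2])) and im == r2
print("inertia identities (1.12) and (1.13) verified")
```

The sign counting is done on eigenvalues with a small tolerance, which is adequate here because the simulated matrices have well-separated nonzero eigenvalues.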

Lemma 1.3

Assume that $Q_j \in \mathbb{R}^{n_j\times n_j}$ is a symmetric matrix and let $O_j \in \mathbb{R}^{l_j\times m}$ and $P_j \in \mathbb{R}^{l_j\times n_j}$, $j = 1, 2$. If $C(Q_j) \subseteq C(P_j')$ and $C(O_j) \subseteq C(P_j)$, $j = 1, 2$, then

(1.16) $i_\pm[O_1'(P_1^+)'Q_1P_1^+O_1 - O_2'(P_2^+)'Q_2P_2^+O_2] = i_\pm\begin{pmatrix} Q_1 & 0 & P_1' & 0 & 0 \\ 0 & -Q_2 & 0 & P_2' & 0 \\ P_1 & 0 & 0 & 0 & O_1 \\ 0 & P_2 & 0 & 0 & O_2 \\ 0 & 0 & O_1' & O_2' & 0 \end{pmatrix} - r(P_1) - r(P_2).$

We also need the following lemma supplied by Marsaglia and Styan [21].

Lemma 1.4

Let $Q_1 \in \mathbb{R}^{m\times n}$, $Q_2 \in \mathbb{R}^{m\times k}$, and $Q_3 \in \mathbb{R}^{l\times n}$. Then

(1.17) $r(Q_1, Q_2) = r(Q_1) + r(E_{Q_1}Q_2) = r(Q_2) + r(E_{Q_2}Q_1),$

(1.18) $r\begin{pmatrix} Q_1 \\ Q_3 \end{pmatrix} = r(Q_1) + r(Q_3F_{Q_1}) = r(Q_3) + r(Q_1F_{Q_3}),$

(1.19) $r\begin{pmatrix} Q_1 & Q_2 \\ Q_3 & 0 \end{pmatrix} = r(Q_2) + r(Q_3) + r(E_{Q_2}Q_1F_{Q_3}).$
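These rank identities can likewise be checked numerically. In the sketch below, $E_M = I - MM^+$ and $F_M = I - M^+M$ are implemented with NumPy's pseudoinverse; the test matrices are arbitrary simulated ones, including a rank-deficient $Q_1$:

```python
import numpy as np

rng = np.random.default_rng(2)
rk = np.linalg.matrix_rank
m, n, k, l = 6, 5, 4, 3
Q1 = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # rank-deficient
Q2 = rng.standard_normal((m, k))
Q3 = rng.standard_normal((l, n))

E = lambda M: np.eye(M.shape[0]) - M @ np.linalg.pinv(M)   # E_M = I - M M^+
F = lambda M: np.eye(M.shape[1]) - np.linalg.pinv(M) @ M   # F_M = I - M^+ M

# (1.17)
assert rk(np.hstack([Q1, Q2])) == rk(Q1) + rk(E(Q1) @ Q2) == rk(Q2) + rk(E(Q2) @ Q1)
# (1.18)
assert rk(np.vstack([Q1, Q3])) == rk(Q1) + rk(Q3 @ F(Q1)) == rk(Q3) + rk(Q1 @ F(Q3))
# (1.19)
blk = np.block([[Q1, Q2], [Q3, np.zeros((l, k))]])
assert rk(blk) == rk(Q2) + rk(Q3) + rk(E(Q2) @ Q1 @ F(Q3))
print("rank identities (1.17)-(1.19) verified")
```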

2 Comparing the BLUEs

In this section, we first compare the covariance matrices of the BLUEs under $M$ and $M_1$. On the basis of the resulting conclusion, we then compare the covariance matrices of the BLUEs under $M_1$ and $M_2$. We observe from (1.4)–(1.7) that

(2.1) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M)) = G\Sigma G' = (X, 0)(X, \Sigma E_X)^+\Sigma[(X, \Sigma E_X)^+]'(X, 0)',$

(2.2) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_j)) = G_j\hat\Sigma_jG_j' = (X, 0)(\hat X_j, \hat\Sigma_jE_{\hat X_j})^+\hat\Sigma_j[(\hat X_j, \hat\Sigma_jE_{\hat X_j})^+]'(X, 0)', \quad j = 1, 2.$

Assumption 2.1

Suppose that $M$ is a weakly singular linear regression model, namely, $C(X) \subseteq C(\Sigma)$, as described in the literature.

Theorem 2.1

Consider the models $M$ and $M_1$ under Assumption 2.1, and let $\operatorname{BLUE}(\beta \mid M)$ and $\operatorname{BLUE}(\beta \mid M_1)$ be the BLUEs of $\beta$ under $M$ and $M_1$, respectively. Then,

(a) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \preceq \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M))$.

(b) $\operatorname{Cov}(\operatorname{BLUE}(\beta \mid M_1)) \preceq \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M))$.

(c) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \prec \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M))$ if and only if $r(A_1) + r(X) - r\begin{pmatrix} X \\ A_1 \end{pmatrix} = n$.

(d) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) = \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M))$ if and only if $r(A_1) + r(X) = r\begin{pmatrix} X \\ A_1 \end{pmatrix}$, or equivalently, $C(A_1') \cap C(X') = \{0\}$.

(e) $\operatorname{Cov}(\operatorname{BLUE}(\beta \mid M_1)) \prec \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M))$ if and only if $r(A_1) = p$.

(f) $\operatorname{Cov}(\operatorname{BLUE}(\beta \mid M_1)) = \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M))$ if and only if $A_1 = 0$.
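Statement (a) can be checked numerically. The following NumPy sketch (data, dimensions, and seed are illustrative, not from the paper) builds a weakly singular model with $C(X) \subseteq C(\Sigma)$, evaluates the covariance formulas (2.1) and (2.2) via Moore-Penrose inverses, and verifies $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \preceq \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M))$:

```python
import numpy as np

rng = np.random.default_rng(3)
pinv = np.linalg.pinv
n, p, m = 6, 2, 2

B = rng.standard_normal((n, 4))
Sigma = B @ B.T                                 # singular: r(Sigma) = 4 < n
X = Sigma @ rng.standard_normal((n, p))         # C(X) in C(Sigma): weakly singular
A1 = rng.standard_normal((m, p))
V1 = np.eye(m)

# Cov(BLUE(X beta | M)) via formula (2.1)
E_X = np.eye(n) - X @ pinv(X)
P0 = np.hstack([X, Sigma @ E_X])
G0 = np.hstack([X, np.zeros((n, n))]) @ pinv(P0)
Cov_M = G0 @ Sigma @ G0.T

# Cov(BLUE(X beta | M_1)) via formula (2.2)
Xh = np.vstack([X, A1])
Sh = np.block([[Sigma, np.zeros((n, m))], [np.zeros((m, n)), V1]])
E_Xh = np.eye(n + m) - Xh @ pinv(Xh)
P1 = np.hstack([Xh, Sh @ E_Xh])
G1 = np.hstack([X, np.zeros((n, n + m))]) @ pinv(P1)
Cov_M1 = G1 @ Sh @ G1.T

# Theorem 2.1(a): Cov(BLUE(X beta | M_1)) is dominated by Cov(BLUE(X beta | M))
assert np.linalg.eigvalsh(Cov_M - Cov_M1).min() > -1e-8
print("Theorem 2.1(a) verified on a weakly singular model")
```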

Proof

Applying Lemma 1.3 to the difference between (2.2) with $j = 1$ and (2.1) leads to

(2.3)
$$\begin{aligned}
&i_\pm(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M)))\\
&\quad= i_\pm\big((X,0)(\hat X_1, \hat\Sigma_1E_{\hat X_1})^+\hat\Sigma_1[(\hat X_1, \hat\Sigma_1E_{\hat X_1})^+]'(X,0)' - (X,0)(X, \Sigma E_X)^+\Sigma[(X, \Sigma E_X)^+]'(X,0)'\big)\\
&\quad= i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & \hat\Sigma_1E_{\hat X_1} & 0 & 0 & 0\\
0 & -\Sigma & 0 & 0 & X & \Sigma E_X & 0\\
\hat X_1' & 0 & 0 & 0 & 0 & 0 & X'\\
E_{\hat X_1}\hat\Sigma_1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & X' & 0 & 0 & 0 & 0 & X'\\
0 & E_X\Sigma & 0 & 0 & 0 & 0 & 0\\
0 & 0 & X & 0 & X & 0 & 0
\end{pmatrix} - r(\hat X_1, \hat\Sigma_1E_{\hat X_1}) - r(X, \Sigma E_X).
\end{aligned}$$

Noticing $r(X, \Sigma E_X) = r(X, \Sigma)$ and $r(\hat X_1, \hat\Sigma_1E_{\hat X_1}) = r(\hat X_1, \hat\Sigma_1)$ and utilizing (1.9) and (1.10), (2.3) can be equivalently written as

(2.4)
$$\begin{aligned}
&i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & 0 & 0 & 0 & 0\\
0 & -\Sigma & 0 & 0 & X & 0 & 0\\
\hat X_1' & 0 & 0 & 0 & 0 & 0 & X'\\
0 & 0 & 0 & -E_{\hat X_1}\hat\Sigma_1E_{\hat X_1} & 0 & 0 & 0\\
0 & X' & 0 & 0 & 0 & 0 & X'\\
0 & 0 & 0 & 0 & 0 & E_X\Sigma E_X & 0\\
0 & 0 & X & 0 & X & 0 & 0
\end{pmatrix} - r(\hat X_1, \hat\Sigma_1) - r(X, \Sigma)\\
&\quad= i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & 0 & 0\\
0 & -\Sigma & 0 & X & 0\\
\hat X_1' & 0 & 0 & 0 & X'\\
0 & X' & 0 & 0 & X'\\
0 & 0 & X & X & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_X\Sigma E_X) - r(\hat X_1, \hat\Sigma_1) - r(X, \Sigma)\\
&\quad= i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & 0 & 0\\
0 & -\Sigma & -X & 0 & 0\\
\hat X_1' & -X' & 0 & 0 & 0\\
0 & 0 & 0 & 0 & X'\\
0 & 0 & 0 & X & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_X\Sigma E_X) - r(\hat X_1, \hat\Sigma_1) - r(X, \Sigma).
\end{aligned}$$

Applying (1.10) and (1.11) first and then using Assumption 2.1, we deduce that (2.4) equals

$$\begin{aligned}
&i_\pm\begin{pmatrix}
\Sigma & 0 & 0 & X\\
0 & V_1 & 0 & A_1\\
0 & 0 & -\Sigma & -X\\
X' & A_1' & -X' & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_X\Sigma E_X) - r(\hat X_1, \hat\Sigma_1) - r(X, \Sigma) + r(X)\\
&\quad= i_\pm\begin{pmatrix}
\Sigma & 0 & 0 & 0\\
0 & V_1 & 0 & A_1\\
0 & 0 & -\Sigma & 0\\
0 & A_1' & 0 & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_X\Sigma E_X) - r(\hat X_1, \hat\Sigma_1) - r(X, \Sigma) + r(X)\\
&\quad= i_\pm\begin{pmatrix}
V_1 & A_1\\
A_1' & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_X\Sigma E_X) - r(\hat X_1, \hat\Sigma_1) - r(X, \Sigma) + r(X) + r(\Sigma).
\end{aligned}$$

In consideration of

$$C(X) \subseteq C(\Sigma), \quad r(\hat X_1, \hat\Sigma_1E_{\hat X_1}) = r(\hat X_1) + r(\hat\Sigma_1E_{\hat X_1}), \quad r(X, \Sigma E_X) = r(X) + r(\Sigma E_X),$$

and (1.13), we have

(2.5) $i_+(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M))) = 0,$

(2.6) $i_-(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M))) = r(X) + r(A_1) - r\begin{pmatrix} X \\ A_1 \end{pmatrix},$

which gives rise to (a). If $\beta$ is estimable under $M$, then $r(X) = p$ by the definition of estimability, and hence there is a nonsingular matrix $Q \in \mathbb{R}^{n\times n}$ satisfying

(2.7) $QX = \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix},$

where $Q_1 \in \mathbb{R}^{p\times p}$ is a nonsingular matrix. Put

(2.8) $\Omega = \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M)).$

We have

(2.9) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M)) = X\Omega X'.$

Utilizing (2.7) and applying (1.8), (1.9), and (1.15) yields

(2.10) $i_\pm(\Omega) \ge i_\pm(X\Omega X') = i_\pm(QX\Omega X'Q') = i_\pm\begin{pmatrix} Q_1\Omega Q_1' & Q_1\Omega Q_2' \\ Q_2\Omega Q_1' & Q_2\Omega Q_2' \end{pmatrix} \ge i_\pm(Q_1\Omega Q_1') = i_\pm(\Omega),$

i.e.,

(2.11) $i_\pm(\Omega) = i_\pm(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M))),$

which, by (2.5), implies (b). The statements (c)–(f) follow from (2.5), (2.6), and (2.11).□

Theorem 2.1(b) is a generalized form of Theorem 3.1 in [11]. It is noteworthy that (1.6) involves the covariance matrix $V_1$, while the results in Theorem 2.1 do not depend on it. Conclusions (a) and (b) of Theorem 2.1 are quite intuitive: the more information that is added about the unknown regression parameters, the higher the accuracy of the corresponding estimators. In addition, Ren [13] showed that

$$\operatorname{BLUE}(X\beta \mid M_1) = \operatorname{BLUE}(X\beta \mid M)$$

with probability 1 if and only if

$$C(A_1') \cap C(X') = \{0\}$$

under Assumption 2.1, which can be compared with (d) in Theorem 2.1.

For the restricted models $M_1$ and $M_2$ with $A_1 = A_2$, some authors explored the superiority of one BLUE over the other with respect to the covariance matrix criterion under additional assumptions on the two restricted models, and showed that there is a one-to-one correspondence between the covariance matrix of the random error vector in (1.2) and the BLUE under the restricted model in (1.3) [16]. Besides the case $A_1 = A_2$, some literature also treated the comparison of the BLUEs under $M_1$ and $M_2$ with respect to the covariance matrix criterion when $A_1 \ne A_2$; see [17,18]. However, when many generalized inverses of matrices are involved in the BLUEs under $M_1$ and $M_2$, no proper methods were available to handle them, and consequently no more general conclusion on this problem was given in the existing literature. Fortunately, with the development of the matrix rank and inertia methods, this problem can now be addressed. For more details on the matrix inertia method, see [20]; for more details on the matrix rank method, see [21]; for applications of the matrix rank and inertia methods in statistical inference, see [22–31]. To compare the BLUEs under $M_1$ and $M_2$ in the sense of the covariance matrix criterion, the following assumption, suggested by conclusion (d) of Theorem 2.1, is introduced.

Assumption 2.2

$C(A_j') \subseteq C(X')$, or equivalently, $r\begin{pmatrix} X \\ A_j \end{pmatrix} = r(X)$, $j = 1, 2$.

Theorem 2.2

Consider the two models $M_1$ and $M_2$ appearing in (1.3) under Assumptions 2.1 and 2.2. Then,

(a) The inequality $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \preceq \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))$ holds if and only if

(2.12) $i_+\begin{pmatrix} V_1 & 0 & A_1 \\ 0 & -V_2 & -A_2 \\ A_1' & -A_2' & 0 \end{pmatrix} = r(V_1, A_1).$

(b) The inequality $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \prec \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))$ holds if and only if

(2.13) $i_-\begin{pmatrix} V_1 & 0 & A_1 \\ 0 & -V_2 & -A_2 \\ A_1' & -A_2' & 0 \end{pmatrix} = r(V_2, A_2) + n.$

(c) The equality $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) = \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))$ holds if and only if

(2.14) $r\begin{pmatrix} V_1 & 0 & A_1 \\ 0 & -V_2 & -A_2 \\ A_1' & -A_2' & 0 \end{pmatrix} = r(V_1, A_1) + r(V_2, A_2).$

Proof

The application of Lemma 1.3 to the difference of the covariance matrices of the BLUEs of $X\beta$ under $M_1$ and $M_2$ yields

$$\begin{aligned}
&i_\pm(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2)))\\
&\quad= i_\pm\big((X,0)(\hat X_1, \hat\Sigma_1E_{\hat X_1})^+\hat\Sigma_1[(\hat X_1, \hat\Sigma_1E_{\hat X_1})^+]'(X,0)' - (X,0)(\hat X_2, \hat\Sigma_2E_{\hat X_2})^+\hat\Sigma_2[(\hat X_2, \hat\Sigma_2E_{\hat X_2})^+]'(X,0)'\big)\\
&\quad= i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & \hat\Sigma_1E_{\hat X_1} & 0 & 0 & 0\\
0 & -\hat\Sigma_2 & 0 & 0 & \hat X_2 & \hat\Sigma_2E_{\hat X_2} & 0\\
\hat X_1' & 0 & 0 & 0 & 0 & 0 & X'\\
E_{\hat X_1}\hat\Sigma_1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & \hat X_2' & 0 & 0 & 0 & 0 & X'\\
0 & E_{\hat X_2}\hat\Sigma_2 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & X & 0 & X & 0 & 0
\end{pmatrix} - r(\hat X_1, \hat\Sigma_1E_{\hat X_1}) - r(\hat X_2, \hat\Sigma_2E_{\hat X_2}),
\end{aligned}$$

which, simplified by (1.9) and $r(\hat X_j, \hat\Sigma_jE_{\hat X_j}) = r(\hat X_j, \hat\Sigma_j)$, $j = 1, 2$, results in

(2.15) $i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & 0 & 0 & 0 & 0\\
0 & -\hat\Sigma_2 & 0 & 0 & \hat X_2 & 0 & 0\\
\hat X_1' & 0 & 0 & 0 & 0 & 0 & X'\\
0 & 0 & 0 & -E_{\hat X_1}\hat\Sigma_1E_{\hat X_1} & 0 & 0 & 0\\
0 & \hat X_2' & 0 & 0 & 0 & 0 & X'\\
0 & 0 & 0 & 0 & 0 & E_{\hat X_2}\hat\Sigma_2E_{\hat X_2} & 0\\
0 & 0 & X & 0 & X & 0 & 0
\end{pmatrix} - r(\hat X_1, \hat\Sigma_1) - r(\hat X_2, \hat\Sigma_2).$

Applying (1.10) to (2.15) and simplifying by (1.11) lead to

(2.16)
$$\begin{aligned}
&i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & 0 & 0\\
0 & -\hat\Sigma_2 & 0 & \hat X_2 & 0\\
\hat X_1' & 0 & 0 & 0 & X'\\
0 & \hat X_2' & 0 & 0 & X'\\
0 & 0 & X & X & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_{\hat X_2}\hat\Sigma_2E_{\hat X_2}) - r(\hat X_1, \hat\Sigma_1) - r(\hat X_2, \hat\Sigma_2)\\
&\quad= i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1 & 0 & 0\\
0 & -\hat\Sigma_2 & -\hat X_2 & 0 & 0\\
\hat X_1' & -\hat X_2' & 0 & 0 & 0\\
0 & 0 & 0 & 0 & X'\\
0 & 0 & 0 & X & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_{\hat X_2}\hat\Sigma_2E_{\hat X_2}) - r(\hat X_1, \hat\Sigma_1) - r(\hat X_2, \hat\Sigma_2)\\
&\quad= i_\pm\begin{pmatrix}
\hat\Sigma_1 & 0 & \hat X_1\\
0 & -\hat\Sigma_2 & -\hat X_2\\
\hat X_1' & -\hat X_2' & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_{\hat X_2}\hat\Sigma_2E_{\hat X_2}) - r(\hat X_1, \hat\Sigma_1) - r(\hat X_2, \hat\Sigma_2) + r(X)\\
&\quad= i_\pm\begin{pmatrix}
\Sigma & 0 & 0 & 0 & X\\
0 & V_1 & 0 & 0 & A_1\\
0 & 0 & -\Sigma & 0 & -X\\
0 & 0 & 0 & -V_2 & -A_2\\
X' & A_1' & -X' & -A_2' & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_{\hat X_2}\hat\Sigma_2E_{\hat X_2}) - r(\hat X_1, \hat\Sigma_1) - r(\hat X_2, \hat\Sigma_2) + r(X).
\end{aligned}$$

In light of Assumption 2.1, we obtain

(2.17)
$$\begin{aligned}
&i_\pm\begin{pmatrix}
\Sigma & 0 & 0 & 0 & 0\\
0 & V_1 & 0 & 0 & A_1\\
0 & 0 & -\Sigma & 0 & 0\\
0 & 0 & 0 & -V_2 & -A_2\\
0 & A_1' & 0 & -A_2' & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_{\hat X_2}\hat\Sigma_2E_{\hat X_2}) - r(\hat X_1, \hat\Sigma_1) - r(\hat X_2, \hat\Sigma_2) + r(X)\\
&\quad= i_\pm\begin{pmatrix}
V_1 & 0 & A_1\\
0 & -V_2 & -A_2\\
A_1' & -A_2' & 0
\end{pmatrix} + i_\mp(E_{\hat X_1}\hat\Sigma_1E_{\hat X_1}) + i_\pm(E_{\hat X_2}\hat\Sigma_2E_{\hat X_2}) - r(\hat X_1, \hat\Sigma_1) - r(\hat X_2, \hat\Sigma_2) + r(X) + r(\Sigma).
\end{aligned}$$

Therefore, by Assumption 2.2, we obtain

$$\begin{aligned}
i_+(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))) &= i_+\begin{pmatrix} V_1 & 0 & A_1 \\ 0 & -V_2 & -A_2 \\ A_1' & -A_2' & 0 \end{pmatrix} - r(V_1, A_1),\\
i_-(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))) &= i_-\begin{pmatrix} V_1 & 0 & A_1 \\ 0 & -V_2 & -A_2 \\ A_1' & -A_2' & 0 \end{pmatrix} - r(V_2, A_2),
\end{aligned}$$

establishing the desired results (a)–(c).□

Due to the nonnegative definiteness of $V_j$, i.e., $V_j \succeq 0$, there exists a nonsingular matrix $P_j$ such that

$$P_jV_jP_j' = \begin{pmatrix} I_{l_j} & 0 \\ 0 & 0 \end{pmatrix}$$

with $r(V_j) = l_j$, $j = 1, 2$. Apparently,

$$r_j = A_j\beta + e_j \Longleftrightarrow \tilde r_j = \tilde A_j\beta + \tilde e_j,$$

where $\tilde r_j = P_jr_j$, $\tilde A_j = P_jA_j$, and $\tilde e_j = P_je_j$, $j = 1, 2$. In view of

$$\operatorname{Cov}(\tilde e_j) = \begin{pmatrix} I_{l_j} & 0 \\ 0 & 0 \end{pmatrix}, \quad j = 1, 2,$$

the following statement may be postulated.

Assumption 2.3

$V_j = \begin{pmatrix} I_{l_j} & 0 \\ 0 & 0 \end{pmatrix}$ and $A_j = \begin{pmatrix} A_{j1} \\ A_{j2} \end{pmatrix}$, $0 \le l_j \le m_j$, $j = 1, 2$, where $l_j$ and $m_j - l_j$ stand for the numbers of rows of $A_{j1}$ and $A_{j2}$, respectively.
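A matrix $P_j$ with $P_jV_jP_j' = \operatorname{diag}(I_{l_j}, 0)$ can be constructed from an eigendecomposition. The helper below is a sketch of one such construction for a nonnegative definite $V$ (the function name `whiten_psd`, the tolerance, and the test data are mine):

```python
import numpy as np

def whiten_psd(V, tol=1e-10):
    """Nonsingular P with P V P' = diag(I_l, 0), where l = r(V), for V >= 0."""
    w, Q = np.linalg.eigh(V)                 # V = Q diag(w) Q'
    order = np.argsort(w)[::-1]              # positive eigenvalues first
    w, Q = w[order], Q[:, order]
    d = np.where(w > tol, np.sqrt(np.maximum(w, tol)), 1.0)
    P = np.diag(1.0 / d) @ Q.T
    return P, int((w > tol).sum())

rng = np.random.default_rng(4)
C = rng.standard_normal((4, 2))
V = C @ C.T                                  # a rank-2 nonnegative definite matrix
P, l = whiten_psd(V)
target = np.diag([1.0] * l + [0.0] * (V.shape[0] - l))
assert l == 2 and np.allclose(P @ V @ P.T, target, atol=1e-8)
print("P V P' = diag(I_l, 0) with l =", l)
```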

Under Assumption 2.3, Theorem 2.2 can be reduced to the following statement.

Corollary 2.1

Consider the two models $M_1$ and $M_2$ appearing in (1.3) under Assumptions 2.1 and 2.3. Then,

(a) The inequality $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \preceq \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))$ holds if and only if

(2.18) $i_+\begin{pmatrix} A_{21}'A_{21} - A_{11}'A_{11} & A_{12}' & A_{22}' \\ A_{12} & 0 & 0 \\ A_{22} & 0 & 0 \end{pmatrix} = r(A_{12}),$

or equivalently,

(2.19) $C(A_{22}') \subseteq C(A_{12}')$ and $F_{A_{12}}(A_{21}'A_{21} - A_{11}'A_{11})F_{A_{12}} \preceq 0$.

(b) The inequality $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \prec \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))$ holds if and only if

(2.20) $i_-\begin{pmatrix} A_{21}'A_{21} - A_{11}'A_{11} & A_{12}' & A_{22}' \\ A_{12} & 0 & 0 \\ A_{22} & 0 & 0 \end{pmatrix} = r(A_{22}) + n.$

(c) The equality $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) = \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))$ holds if and only if

(2.21) $r\begin{pmatrix} A_{21}'A_{21} - A_{11}'A_{11} & A_{12}' & A_{22}' \\ A_{12} & 0 & 0 \\ A_{22} & 0 & 0 \end{pmatrix} = r(A_{12}) + r(A_{22}),$

or equivalently,

(2.22) $C(A_{22}') = C(A_{12}')$ and $F_{A_{12}}(A_{21}'A_{21} - A_{11}'A_{11})F_{A_{12}} = 0$.

Proof

Note from Assumption 2.3 that

(2.23) $r(V_1, A_1) = r(A_{12}) + l_1$

and

(2.24) $i_+\begin{pmatrix} V_1 & 0 & A_1 \\ 0 & -V_2 & -A_2 \\ A_1' & -A_2' & 0 \end{pmatrix} = i_+\begin{pmatrix} A_{21}'A_{21} - A_{11}'A_{11} & A_{12}' & A_{22}' \\ A_{12} & 0 & 0 \\ A_{22} & 0 & 0 \end{pmatrix} + l_1.$

Therefore, (2.12) is precisely (2.18). Also, observe from Assumption 2.3 that

(2.25) $r(V_2, A_2) = r(A_{22}) + l_2$

and

(2.26) $i_-\begin{pmatrix} V_1 & 0 & A_1 \\ 0 & -V_2 & -A_2 \\ A_1' & -A_2' & 0 \end{pmatrix} = i_-\begin{pmatrix} A_{21}'A_{21} - A_{11}'A_{11} & A_{12}' & A_{22}' \\ A_{12} & 0 & 0 \\ A_{22} & 0 & 0 \end{pmatrix} + l_2,$

which implies that (2.13) is precisely (2.20). Substituting (2.23)–(2.26) into (2.14) leads to (2.21). Note that $E_{A_{12}'} = F_{A_{12}}$ and, since $C(A_{22}') \subseteq C(A_{12}')$, consequently $E_{(A_{12}', A_{22}')} = F_{A_{12}}$. Hence, by (1.12), the equivalence (2.18) $\Longleftrightarrow$ (2.19) is established. Likewise, we can obtain (2.21) $\Longleftrightarrow$ (2.22).□

Let $\beta$ be estimable under $M_1$ and $M_2$, that is,

(2.27) $r\begin{pmatrix} X \\ A_1 \end{pmatrix} = r\begin{pmatrix} X \\ A_2 \end{pmatrix} = p.$

Applying Assumption 2.2 to (2.27) gives $r(X) = p$. Analogous to (2.7)–(2.11) in the proof of Theorem 2.1, it can be shown that

$$i_\pm(\Omega) = i_\pm(\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))),$$

where $\Omega = \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M_1)) - \operatorname{Cov}(\operatorname{BLUE}(\beta \mid M_2))$; hence, necessary and sufficient conditions for the superiority of one BLUE of $\beta$ over the other under $M_1$ and $M_2$ relative to the covariance matrix criterion can be obtained correspondingly. Assume that both restrictions in (1.2) are exact linear restrictions, i.e., $V_j = 0$, $j = 1, 2$. In this case, the comparison of the covariance matrices of the BLUEs of $\beta$ under $M_1$ and $M_2$ was first investigated by Trenkler [32] and then extended by Ren and Zhou [33]; this is a special situation of the above conclusion. It should be emphasized that simpler conditions than those in Corollary 2.1 can be found once $l_1$ and $l_2$ are specified. For instance, if $l_1 = m_1$ and $l_2 = m_2$ in Assumption 2.3, then Corollary 2.1 simplifies as follows.

Corollary 2.2

Consider the models $M_1$ and $M_2$ in (1.3) under Assumptions 2.1 and 2.3, and let $l_1 = m_1$ and $l_2 = m_2$ in Assumption 2.3. Then,

(a) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \preceq \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2)) \Longleftrightarrow A_2'A_2 - A_1'A_1 \preceq 0.$

(b) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \prec \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2)) \Longleftrightarrow i_-(A_2'A_2 - A_1'A_1) = n.$

(c) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) = \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2)) \Longleftrightarrow A_2'A_2 - A_1'A_1 = 0.$
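Part (a) can be illustrated numerically: with $V_j = I$ and $A_2$ chosen so that $A_2'A_2 \preceq A_1'A_1$, the covariance matrices computed from formula (2.2) come out ordered accordingly. A NumPy sketch (the data, the seed, and the helper name `cov_blue` are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
pinv = np.linalg.pinv
n, p, m = 5, 3, 3

B = rng.standard_normal((n, 4))
Sigma = B @ B.T                      # weakly singular base model, r(Sigma) = 4 < n
X = Sigma @ rng.standard_normal((n, p))
A1 = rng.standard_normal((m, p))     # r(X) = p, so C(A_j') in C(X') holds trivially
A2 = 0.5 * A1                        # then A2'A2 - A1'A1 = -0.75 A1'A1 <= 0

def cov_blue(A):
    # Cov(BLUE(X beta | M_j)) via formula (2.2), with V_j = I (Assumption 2.3)
    Xh = np.vstack([X, A])
    Sh = np.block([[Sigma, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])
    E = np.eye(n + m) - Xh @ pinv(Xh)
    G = np.hstack([X, np.zeros((n, n + m))]) @ pinv(np.hstack([Xh, Sh @ E]))
    return G @ Sh @ G.T

C1, C2 = cov_blue(A1), cov_blue(A2)

# Corollary 2.2(a): A2'A2 - A1'A1 <= 0 implies Cov(M1) dominated by Cov(M2)
assert np.linalg.eigvalsh(A2.T @ A2 - A1.T @ A1).max() < 1e-10
assert np.linalg.eigvalsh(C2 - C1).min() > -1e-8
print("Corollary 2.2(a) verified")
```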

3 Applications

There are many ways in the literature to measure the influence of an observation on estimators; see, e.g., [34] for Cook's distance, [35] for the Welsch-Kuh distance, and [36] for the Andrews-Pregibon statistic. Set

(3.1) $\hat X_j'\hat X_j \succ 0, \quad j = 1, 2,$

and $m_1 = m_2 = 1$ in (1.2), and then regard (1.2) as two additional observations. In the model with $n + 2$ observations, Belsley et al. [3] compared the influences of these two observations in (1.2) on the ordinary least squares estimator (OLSE) by means of the ratio of determinants of the covariance matrices of the estimators.

In this section, we apply the above results to compare the effects of the observations (1.2) on the BLUE under the model with $n + 2$ observations according to the covariance matrix criterion, without the restriction (3.1) on the matrices $\hat X_j$, $j = 1, 2$. With $m_1 = m_2 = 1$, we write (1.2) as

(3.2) $r_j = \alpha_j\beta + e_j, \quad E(e_j) = 0, \quad \operatorname{Cov}(e_j) = \sigma_j^2 > 0, \quad j = 1, 2.$

By Corollary 2.2, the following results can be given.

Corollary 3.1

Under Assumptions 2.1 and 2.3, let $l_1 = l_2 = m_1 = m_2 = 1$ in Assumption 2.3. Then,

(a) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \preceq \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2)) \Longleftrightarrow \alpha_2'\alpha_2 \preceq \alpha_1'\alpha_1.$

(b) The inequality $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) \prec \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2))$ can hold only when $\alpha_2\alpha_2' < \alpha_1\alpha_1'$ and $n = 1$.

(c) $\operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_1)) = \operatorname{Cov}(\operatorname{BLUE}(X\beta \mid M_2)) \Longleftrightarrow \alpha_2'\alpha_2 = \alpha_1'\alpha_1.$

For $\sigma_1^2 = 0$ or $\sigma_2^2 = 0$ in (3.2), similar conclusions can also be drawn; we omit them here. If we would like to delete the observation $r_1$ from the model $M_1$ with $m_1 = 1$, then considering the model $M_1$ after deleting the observation $r_1$ is equivalent to considering the new model

$$\begin{pmatrix} y \\ r_1 \end{pmatrix} = \begin{pmatrix} X \\ \alpha_1 \end{pmatrix}\beta + \hat e_{n+1}\alpha + \begin{pmatrix} \varepsilon \\ e_1 \end{pmatrix},$$

where $\hat e_{n+1} = (0, \ldots, 0, 1)' \in \mathbb{R}^{(n+1)\times 1}$ and $\alpha$ is an additional mean-shift parameter, by using Theorem 2.1, because of the fact that

$$C\begin{pmatrix} \alpha_1' \\ 1 \end{pmatrix} \cap C\begin{pmatrix} X' \\ 0 \end{pmatrix} = \{0\}.$$

This has an important application to deletion diagnostics [37].

To illustrate the theoretical conclusions, we consider an example on piglet fattening taken from Section 1.3 of [38]. This example can be described by a linear regression model

(3.3) $y = X\beta + \varepsilon,$

where

(3.4) $y = \begin{pmatrix} y_{11} \\ y_{12} \\ y_{13} \\ y_{21} \\ y_{22} \\ y_{23} \end{pmatrix}, \quad X = \begin{pmatrix} 1 & 1 & 0 & 5.0 \\ 1 & 1 & 0 & 5.0 \\ 1 & 1 & 0 & 5.2 \\ 1 & 0 & 1 & 5.0 \\ 1 & 0 & 1 & 5.3 \\ 1 & 0 & 1 & 5.4 \end{pmatrix}, \quad \beta = \begin{pmatrix} \mu \\ \beta_1 \\ \beta_2 \\ \gamma \end{pmatrix}, \quad \varepsilon = \begin{pmatrix} \varepsilon_{11} \\ \varepsilon_{12} \\ \varepsilon_{13} \\ \varepsilon_{21} \\ \varepsilon_{22} \\ \varepsilon_{23} \end{pmatrix}.$

Set

(3.5) $\operatorname{Cov}(\varepsilon) = \Sigma = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$

These data originate from experimental research on piglet fattening, in which several kinds of feed are given to various piglets. The experiment is usually implemented to compare the fattening effects of different feeds in terms of the amount of weight gained. Here, $y_{ij}$ is the weight gained by the $j$th piglet receiving the $i$th feed, $i = 1, 2$, $j = 1, 2, 3$; $\mu$ is an overall effect included in all the observations; $\beta_i$ denotes the effect of the $i$th feed, $i = 1, 2$; and $\gamma$ represents the regression coefficient associated with the covariate in the last column of $X$. Put

(3.6) $M_j = \left\{ \begin{pmatrix} y_1 \\ r_j \end{pmatrix}, \begin{pmatrix} X_1 \\ \alpha_j \end{pmatrix}\beta, \begin{pmatrix} \Sigma_1 & 0 \\ 0 & 1 \end{pmatrix} \right\}, \quad j = 1, 2,$

where

$$y_1 = \begin{pmatrix} y_{11} \\ y_{12} \\ y_{13} \\ y_{21} \end{pmatrix}, \quad X_1 = \begin{pmatrix} 1 & 1 & 0 & 5.0 \\ 1 & 1 & 0 & 5.0 \\ 1 & 1 & 0 & 5.2 \\ 1 & 0 & 1 & 5.0 \end{pmatrix}, \quad \Sigma_1 = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \alpha_1 = (1, 0, 1, 5.3),$$

$\alpha_2 = (1, 0, 1, 5.4)$, $r_1 = y_{22}$, and $r_2 = y_{23}$.

Note that

$$C(\alpha_1') \subseteq C(X_1'), \quad C(\alpha_2') \subseteq C(X_1'), \quad C(X_1) \subseteq C(\Sigma_1),$$

that is, Assumptions 2.1–2.3 of Section 2 hold; hence, the conditions of Corollary 3.1 are satisfied. Since

$$\alpha_1\alpha_1' = 1^2 + 0^2 + 1^2 + 5.3^2 < 1^2 + 0^2 + 1^2 + 5.4^2 = \alpha_2\alpha_2',$$

we obtain

$$\operatorname{Cov}(\operatorname{BLUE}(X_1\beta \mid M_2)) \preceq \operatorname{Cov}(\operatorname{BLUE}(X_1\beta \mid M_1))$$

in view of Corollary 3.1, i.e., the observation $y_{22}$ has a bigger effect than the observation $y_{23}$ on the BLUE according to the covariance matrix criterion. Trivially,

$$\begin{pmatrix} X_1 \\ \alpha_j \end{pmatrix}'\begin{pmatrix} X_1 \\ \alpha_j \end{pmatrix} \succ 0, \quad j = 1, 2,$$

does not hold, which renders many methods of comparing the impact of observations on the BLUE ineffective, such as Cook's distance, the Welsch-Kuh distance, and the Andrews-Pregibon statistic.
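The assumptions used in this example, as well as the failure of the full-rank condition (3.1), are easy to confirm numerically. A short NumPy check (variable names are mine):

```python
import numpy as np

rk = np.linalg.matrix_rank
X1 = np.array([[1, 1, 0, 5.0],
               [1, 1, 0, 5.0],
               [1, 1, 0, 5.2],
               [1, 0, 1, 5.0]])
Sigma1 = np.array([[1., 1., 1., 0.],
                   [1., 1., 1., 0.],
                   [1., 1., 2., 0.],
                   [0., 0., 0., 1.]])
a1 = np.array([[1, 0, 1, 5.3]])
a2 = np.array([[1, 0, 1, 5.4]])

# Assumption 2.1: C(X1) contained in C(Sigma1)
assert rk(np.hstack([Sigma1, X1])) == rk(Sigma1)
# Assumption 2.2: C(a_j') contained in C(X1') for j = 1, 2
for a in (a1, a2):
    assert rk(np.vstack([X1, a])) == rk(X1)
# condition (3.1) fails: the matrices X_hat_j' X_hat_j are singular
for a in (a1, a2):
    Xh = np.vstack([X1, a])
    assert rk(Xh.T @ Xh) < Xh.shape[1]
print("r(X1) =", rk(X1))   # 3 < 4, so the classical diagnostics do not apply
```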

4 Conclusion

The author compared the covariance matrices of the BLUEs under $M$ and $M_1$ in (1.3), showing that stochastic linear restrictions can improve the corresponding BLUE with respect to the covariance matrix criterion. Based on the results derived in Theorem 2.1, necessary and sufficient conditions were obtained for a variety of equalities and inequalities between the covariance matrices of the BLUEs under $M_1$ and $M_2$ in (1.3) to hold. These comparison findings give clear answers on how to evaluate the effects of different stochastic linear restrictions on the BLUE. It is worth noting that if $V_1$, $V_2$, and $\Sigma$ are unknown, Theorems 2.1 and 2.2 cannot be applied directly, since they involve $V_1$, $V_2$, and $\Sigma$. In this situation, these matrices can be replaced by corresponding estimators; such work is not discussed in this article. The OLSE, defined by an optimality criterion different from that of the BLUE, is frequently employed in the literature due to its excellent properties. Therefore, under Assumption 2.1, it will be interesting to investigate how to compare the covariance matrices of the OLSEs under $M_1$ and $M_2$ in (1.3). In addition, comparing the covariance matrices of OLSEs and BLUEs under $M_1$ and $M_2$ in (1.3) is also challenging.

Acknowledgments

The author would like to thank the handling editor and anonymous reviewers for their insightful comments and suggestions.

  1. Funding information: This research was supported by the Natural Science Foundation of Anhui Provincial Education Department (2024AH050299), and the key discipline construction project of Anhui Science and Technology University (XK-XJGY002).

  2. Author contribution: The author has accepted responsibility for the whole content of the manuscript and approved its submission.

  3. Conflict of interest: The author declares no conflict of interest.

  4. Ethical approval: Not applicable.

  5. Data availability statement: Not applicable.

References

[1] C. R. Rao, H. Toutenburg, Shalabh, and C. Heumann, Linear Models and Generalizations: Least Squares and Alternatives, Springer, New York, 2008.

[2] S. Chatterjee and A. S. Hadi, Sensitivity Analysis in Linear Regression, Wiley, New York, 1988, DOI: https://doi.org/10.1002/9780470316764.

[3] D. A. Belsley, E. Kuh, and R. E. Welsch, Regression Diagnostics, Wiley, New York, 1980, DOI: https://doi.org/10.1002/0471725153.

[4] D. J. Finney, Statistical Method in Biological Assay, Charles Griffin, London, 1978.

[5] G. G. Judge and M. E. Bock, The Statistical Implications of Pre-Test and Stein-Rule Estimators in Econometrics, North-Holland, New York, 1978.

[6] J. K. Sengupta, G. Tintner, and B. Morrison, Stochastic linear programming with applications to economic models, Economica 30 (1963), no. 119, 262–276, DOI: https://doi.org/10.2307/2601546.

[7] H. Toutenburg and Shalabh, Estimation of linear regression models with missing data: the role of stochastic linear constraints, Comm. Statist. Theory Methods 34 (2005), no. 2, 375–387, DOI: https://doi.org/10.1080/03610920509342427.

[8] M. Arashi and S. Tabatabaey, Stein-type improvement under stochastic constraints: use of multivariate Student-t model in regression, Statist. Probab. Lett. 78 (2008), no. 14, 2142–2153, DOI: https://doi.org/10.1016/j.spl.2008.02.003.

[9] A. Zellner, An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias, J. Amer. Statist. Assoc. 57 (1962), no. 298, 348–368, DOI: https://doi.org/10.1080/01621459.1962.10480664.

[10] N. Güler and M. E. Büyükkaya, Statistical inference of a stochastically restricted linear mixed model, AIMS Math. 8 (2023), no. 10, 24401–24417, DOI: https://doi.org/10.3934/math.20231244.

[11] J. Xu and H. Yang, Estimation in singular linear models with stochastic linear restrictions, Comm. Statist. Theory Methods 36 (2007), no. 10, 1945–1951, DOI: https://doi.org/10.1080/03610920601126530.

[12] S. J. Haslett and S. Puntanen, Equality of BLUEs or BLUPs under two linear models using stochastic restrictions, Statist. Papers 51 (2010), no. 1, 465–475, DOI: https://doi.org/10.1007/s00362-009-0219-7.

[13] X. Ren, The equalities of estimations under a general partitioned linear model and its stochastically restricted model, Comm. Statist. Theory Methods 45 (2016), no. 22, 1100–1123, DOI: https://doi.org/10.1080/03610926.2014.960587.

[14] N. Varathan and P. Wijekoon, Logistic Liu estimator under stochastic linear restrictions, Statist. Papers 60 (2019), no. 1, 945–962, DOI: https://doi.org/10.1007/s00362-016-0856-6.

[15] H. Theil and A. S. Goldberger, On pure and mixed statistical estimation in economics, Henri Theil's Contributions to Economics and Econometrics 2 (1961), no. 4, 65–78, DOI: https://doi.org/10.1007/978-94-011-2546-8_18.

[16] J. K. Baksalary, Comparing stochastically restricted estimators in a linear regression model, Biom. J. 26 (1984), no. 5, 555–557, DOI: https://doi.org/10.1002/bimj.4710260512.

[17] E. P. Liski, Comparing stochastically restricted linear estimators in a regression model, Biom. J. 31 (1989), no. 3, 313–316, DOI: https://doi.org/10.1002/bimj.4710310309.

[18] G. Trenkler, A note on comparing stochastically restricted linear estimators in a regression model, Biom. J. 35 (1993), no. 1, 125–128, DOI: https://doi.org/10.1002/bimj.4710350112.

[19] C. R. Rao, Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix, J. Multivariate Anal. 3 (1973), no. 3, 276–292, DOI: https://doi.org/10.1016/0047-259X(73)90042-0.

[20] Y. Tian, Equalities and inequalities for inertias of Hermitian matrices with applications, Linear Algebra Appl. 433 (2010), no. 1, 263–296, DOI: https://doi.org/10.1016/j.laa.2010.02.018.

[21] G. Marsaglia and G. P. H. Styan, Equalities and inequalities for ranks of matrices, Linear Multilinear Algebra 2 (1974), no. 3, 269–292, DOI: https://doi.org/10.1080/03081087408817070.

[22] Y. Tian, Some equalities and inequalities for covariance matrices of estimators under linear model, Statist. Papers 58 (2017), no. 1, 467–484, DOI: https://doi.org/10.1007/s00362-015-0707-x.

[23] Y. Tian and W. Guo, On comparison of dispersion matrices of estimators under a constrained linear model, Stat. Methods Appl. 25 (2016), no. 4, 623–649, DOI: https://doi.org/10.1007/s10260-016-0350-2.

[24] Y. Tian and J. Wang, Some remarks on fundamental formulas and facts in the statistical analysis of a constrained general linear model, Comm. Statist. Theory Methods 49 (2020), no. 5, 1201–1216, DOI: https://doi.org/10.1080/03610926.2018.1554138.

[25] R. Ma and Y. Tian, A matrix approach to a general partitioned linear model with partial parameter restrictions, Linear Multilinear Algebra 70 (2022), no. 13, 2513–2532, DOI: https://doi.org/10.1080/03081087.2020.1804521.

[26] N. Güler, On relations between BLUPs under two transformed linear random-effects models, Comm. Statist. Simulation Comput. 51 (2022), no. 9, 5099–5125, DOI: https://doi.org/10.1080/03610918.2020.1757709.

[27] N. Güler and M. E. Büyükkaya, Notes on comparison of covariance matrices of BLUPs under linear random-effects model with its two subsample models, Iran. J. Sci. Technol. A 43 (2019), no. 6, 2993–3002, DOI: https://doi.org/10.1007/s40995-019-00785-3.

[28] N. Güler and M. E. Büyükkaya, Rank and inertia formulas for covariance matrices of BLUPs in general linear mixed models, Comm. Statist. Theory Methods 50 (2020), no. 21, 4997–5012, DOI: https://doi.org/10.1080/03610926.2019.1599950.

[29] N. Güler and M. E. Büyükkaya, Inertia and rank approach in transformed linear mixed models for comparison of BLUPs, Comm. Statist. Theory Methods 52 (2023), no. 9, 3108–3123, DOI: https://doi.org/10.1080/03610926.2021.1967397.

[30] X. Ren, Estimation in singular linear models with stepwise inclusion of linear restrictions, J. Multivariate Anal. 148 (2016), no. 3, 60–72, DOI: https://doi.org/10.1016/j.jmva.2016.02.018.

[31] X. Ren and L. Lin, Some remarks on BLUP under the general linear model with linear equality restrictions, J. Math. Res. Appl. 38 (2018), no. 5, 496–508, DOI: https://doi.org/10.3770/j.issn:2095-2651.2018.05.008.

[32] G. Trenkler, Mean square error matrix comparisons among restricted least squares estimators, Sankhya A 49 (1987), no. 1, 96–104. Search in Google Scholar

[33] X. Ren and Q. Zhou, Dispersion matrix comparisons among estimators under two competing restricted linear regression models, Indian J. Pure Appl. Math. 56 (2025), no. 2, 601–614, DOI: https://doi.org/10.1007/s13226-023-00505-z. 10.1007/s13226-023-00505-zSearch in Google Scholar

[34] R. D. Cook, Detection of influential observations in linear regression, Technometrics 19 (1977), no. 1, 15–18, DOI: https://doi.org/10.1080/00401706.1977.10489493. 10.1080/00401706.1977.10489493Search in Google Scholar

[35] R. E. Welsch and E. Kuh, Linear regression diagnostics, Technical Report 923, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, 1977. 10.3386/w0173Search in Google Scholar

[36] D. F. Andrews and D. Pregibon, Finding outliers that matter, J. R. Stat. Soc. Ser. B. Stat. Methodol. 40 (1978), no. 1, 85–93, DOI: https://doi.org/10.1111/j.2517-6161.1978.tb01652.x. 10.1111/j.2517-6161.1978.tb01652.xSearch in Google Scholar

[37] R. J. Beckman and H. J. Trussel, The distribution of an arbitrary studentized residual and the effects of updating in multiple regression, J. Amer. Statist. Assoc. 69 (1974), no. 345, 199–201, DOI: https://doi.org/10.1080/01621459.1974.10480152. 10.1080/01621459.1974.10480152Search in Google Scholar

[38] S. G. Wang, J. H. Shi, S. J. Yin, and M. X. Wu, An Introduction to Linear Models, Sci. Press, Beijing, 2004. Search in Google Scholar

Received: 2024-10-30
Revised: 2025-04-12
Accepted: 2025-05-26
Published Online: 2025-07-29

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
