Article Open Access

A matrix approach to determine optimal predictors in a constrained linear mixed model

  • Nesrin Güler and Melek Eriş Büyükkaya
Published/Copyright: December 26, 2024

Abstract

For a general vector comprising all unknown vectors in a constrained linear mixed model (CLMM), this study compares the dispersion matrix of the best linear unbiased predictor (BLUP) with any symmetric matrix in order to determine the optimality of predictors. Using the methodology of block matrix inertias and ranks as well as elementary block matrix operations, various equalities and inequalities are derived for these comparisons. Additionally, the comparison results for the CLMM are specialized to particular instances of the general vector of all unknown vectors. We also provide comparison results between the CLMM and its unconstrained form. Finally, we give a numerical example to illustrate our theoretical findings.

MSC 2010: 15A03; 62H12; 62J05

1 Introduction

Linear mixed models (LMMs), which allow the use of both fixed and random effects in the same analysis, are a type of linear regression model. These models provide practical and useful approaches for statistical inference by accounting for the variability of model parameters influenced by the numerous factors affecting the response variable. For the analysis of longitudinal and correlated data, which are encountered in many fields within and beyond statistics, LMMs and some of their special versions, such as hierarchical, nested, multilevel, and linear random effects models, are frequently utilized.

In statistical research, relevant extraneous information may exist in the form of exact or stochastic restrictions on the unknown parameters of linear models. Classical restrictions, mixed estimation restrictions, adding-up restrictions, and stochastic response restrictions are some examples of such restrictions [1]. If the extraneous information comes from theory or from prior knowledge of the relationships among the unknown parameters, restrictions on the unknown parameters, i.e., classical restrictions, arise; see, e.g., [2]. Usually, these restrictions are incorporated into the model assumptions. In such cases, by adding certain restrictions on the unknown parameters to the LMM assumptions, LMMs become constrained linear mixed models (CLMMs). The reparameterization of models is one of the most widely used approaches for incorporating exact linear restrictions into LMMs. Through this approach, a CLMM is transformed into a reparameterized LMM by substituting the general solution of the linear restriction equation into the model, i.e., an unconstrained LMM for the new parameter vector is derived from the CLMM; see, e.g., [3,4]. In the present study, we consider an LMM with an exact linear restriction on its parameters, i.e., we consider a CLMM, and turn this model into its reparameterized form. We derive the best linear unbiased predictors (BLUPs) of all unknown vectors in the models using quadratic matrix optimization methods based on matrix ranks and inertias, together with some essential properties of the BLUPs.

The development of various potential predictors (or estimators) of unknown parameter vectors in linear regression models and their special cases is one of the main objectives of theoretical and applied studies. Various types of optimality criteria have been used to define predictors of the unknown parameter vectors in a model. Unbiasedness of predictors with respect to the parameter space of the model is one of the crucial properties among them. When there are numerous unbiased predictors for the same parameter space, it makes sense to look for the unbiased predictor with the smallest dispersion matrix. By definition, the BLUP satisfies both requirements. Thus, owing to the minimality of its dispersion matrix, the BLUP plays a significant role in statistical inference theory and is frequently used as the benchmark for evaluating the performance of other predictors. Comparing an arbitrary predictor with the BLUP of a unified form of all unknown parameters under a linear regression model, in terms of the dispersion matrix of the BLUP, makes it possible to determine the optimality of the compared predictor. The expressions arising in such comparison problems involve various complicated matrix operations. Some of the methods for simplifying expressions composed of matrices and their Moore-Penrose generalized inverses are based on conventional matrix operations, as well as on some known expressions for inverses or Moore-Penrose generalized inverses of matrices. The effectiveness of these methods is quite limited, and the processes involved are quite tedious. Statisticians have therefore widely used the Löwner partial ordering (LPO) of symmetric matrices to obtain satisfactory results in problems of comparing different predictors.
On the other hand, in recent years, many new expansion formulas for calculating inertias and ranks of matrices have been established, and these formulas can be used to characterize various types of equalities and inequalities between the dispersion matrices of BLUPs and those of other predictors. In other words, the methodology of block matrix inertias and ranks, together with elementary block matrix operations (EBMOs), is quite effective at eliminating complicated matrix operations from matrix expressions and equalities, while the results obtained can be represented as simple inertia and rank equalities.

In this study, we deal with the problem of comparing predictors in order to determine the optimal predictors among them. We construct numerous equality and inequality relations between the dispersion matrix of the BLUP and any symmetric matrix. The main purpose of this study is to describe how to establish exact formulas for comparing any predictor with the BLUP of the same general vector of all unknown vectors under a CLMM by using the methodology of block matrix inertias and ranks as well as EBMOs. In this way, this article offers new theoretical techniques and approaches for the comparison of two predictors, one of which is the BLUP, by establishing various equalities and inequalities under the CLMM with the matrix inertia and rank methodology under general assumptions. As mentioned earlier, the reason for using the dispersion matrix of the BLUP as the comparison criterion for obtaining optimal predictors is the definition of the BLUP itself, which requires the minimum covariance matrix among unbiased predictors of the unknown vectors. In addition, the reason for preferring the methodology of block matrix inertias and ranks with EBMOs is that it converts the complicated matrix expressions encountered in this setting into simpler forms. In other words, the approaches used here reduce the problem of comparing any predictor with the BLUP, for determining the optimality of predictors, to a problem of comparing scalar quantities, since the inertia and the rank of a matrix are simple quantities to understand and calculate.

In order to derive the set of formulas based on the inertias of the differences between any symmetric matrix and the dispersion matrix of the BLUP of all unknown vectors in the CLMM, first, a CLMM with its reparameterized form is introduced, and then the classical concepts of predictability and estimability and some fundamental results on the BLUP are reviewed. After the comparison formulas are constructed, the results obtained are reduced, as an application, to the corresponding results under the restricted and unrestricted models, and finally, a numerical example is given to illustrate the theoretical results for both a non-singular and a singular dispersion matrix. LMMs have been widely studied in the statistical literature from various aspects. We may refer to the following studies for LMMs with restrictions: [5–7]; for linear random effects models with restrictions: [8,9]; and for linear models with restrictions: [10–14], among others. Comparisons of dispersion matrices under different types of models with or without restrictions have been presented in, e.g., [15–17].

Before moving on, we first describe the notation used in this work, and then we present, in the following two lemmas, a number of formulas for calculating the ranks and inertias of block matrices, which are used to characterize equalities and inequalities of matrices; see [18]. ℝ^{k×t} denotes the set of all k × t real matrices. (·)′ denotes the transpose of a matrix, r(·) is used for the rank of a matrix, and 𝒞(·) denotes the column space of a matrix. Λ^⊥ = I_k − ΛΛ^+ and F_Λ = I_t − Λ^+Λ represent the orthogonal projectors associated with Λ ∈ ℝ^{k×t}, where Λ^+ is the Moore-Penrose generalized inverse of Λ, and I_k ∈ ℝ^{k×k} and I_t ∈ ℝ^{t×t} are identity matrices. i₊(·) and i₋(·) represent the positive and negative inertias of a symmetric matrix, while i_±(·) and i_∓(·) are used to denote both signs of inertia jointly. When comparing dispersion matrices, the notations ⪰ and ⪯ are employed; namely, for two symmetric matrices Λ₁ and Λ₂ of the same size, the inequalities Λ₁ − Λ₂ ⪯ 0 (Λ₁ ⪯ Λ₂) and Λ₁ − Λ₂ ⪰ 0 (Λ₁ ⪰ Λ₂) mean that the difference Λ₁ − Λ₂ is a negative semi-definite and a positive semi-definite (psd) matrix in the LPO, respectively. In addition, E(·) is used for the expectation of a random vector, and D(·) and cov(·,·) denote the dispersion and covariance matrices of random vectors, respectively.
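Since these Löwner order checks recur throughout the article, it may help to see one computed numerically. The following minimal NumPy sketch (an illustration added here, not part of the original development; the helper `loewner_geq` is our own name) tests Λ₁ ⪰ Λ₂ by inspecting the eigenvalues of the difference:

```python
import numpy as np

def loewner_geq(L1, L2, tol=1e-10):
    """L1 >= L2 in the Loewner partial ordering iff L1 - L2 is
    positive semi-definite, i.e. all of its eigenvalues are >= 0."""
    return bool(np.all(np.linalg.eigvalsh(L1 - L2) >= -tol))

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
L2 = M @ M.T                   # a psd matrix
E = rng.standard_normal((4, 2))
L1 = L2 + E @ E.T              # L2 plus a psd perturbation of rank 2

print(loewner_geq(L1, L2))     # True: L1 - L2 = E E' is psd
print(loewner_geq(L2, L1))     # False: L2 - L1 has negative eigenvalues
```

The same eigenvalue test underlies all of the semi-definiteness statements derived below.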

Lemma 1.1

Let Λ, ϒ ∈ ℝ^{k×t}, or let Λ = Λ′, ϒ = ϒ′ ∈ ℝ^{k×k}. Then,

(1) r(Λ − ϒ) = 0 ⇔ Λ = ϒ,

(2) i₋(Λ − ϒ) = 0 ⇔ Λ ⪰ ϒ and i₊(Λ − ϒ) = 0 ⇔ Λ ⪯ ϒ.

Lemma 1.2

Let Λ = Λ′ ∈ ℝ^{k×k}, Γ = Γ′ ∈ ℝ^{t×t}, ϒ ∈ ℝ^{k×t}, and l ∈ ℝ. Then,

(3) r(Λ) = i₊(Λ) + i₋(Λ),

(4) i_±(lΛ) = i_±(Λ) if l > 0, and i_±(lΛ) = i_∓(Λ) if l < 0,

(5) $$i_{\pm}\begin{bmatrix}\Lambda & \Upsilon\\ \Upsilon' & \Gamma\end{bmatrix} = i_{\pm}\begin{bmatrix}\Lambda & -\Upsilon\\ -\Upsilon' & \Gamma\end{bmatrix} = i_{\mp}\begin{bmatrix}-\Lambda & -\Upsilon\\ -\Upsilon' & -\Gamma\end{bmatrix},$$

(6) $$i_{\pm}\begin{bmatrix}\Lambda & 0\\ 0 & \Gamma\end{bmatrix} = i_{\pm}(\Lambda) + i_{\pm}(\Gamma) \quad\text{and}\quad i_{+}\begin{bmatrix}0 & \Lambda\\ \Lambda' & 0\end{bmatrix} = i_{-}\begin{bmatrix}0 & \Lambda\\ \Lambda' & 0\end{bmatrix} = r(\Lambda),$$

(7) $$i_{\pm}\begin{bmatrix}\Lambda & \Upsilon\\ \Upsilon' & 0\end{bmatrix} = r(\Upsilon) + i_{\pm}(\Upsilon^{\perp}\Lambda\,\Upsilon^{\perp}),$$

(8) $$i_{+}\begin{bmatrix}\Lambda & \Upsilon\\ \Upsilon' & 0\end{bmatrix} = r[\Lambda,\ \Upsilon] \quad\text{and}\quad i_{-}\begin{bmatrix}\Lambda & \Upsilon\\ \Upsilon' & 0\end{bmatrix} = r(\Upsilon), \quad\text{if } \Lambda \succeq 0,$$

(9) $$i_{\pm}\begin{bmatrix}\Lambda & \Upsilon\\ \Upsilon' & \Gamma\end{bmatrix} = i_{\pm}(\Lambda) + i_{\pm}(\Gamma - \Upsilon'\Lambda^{+}\Upsilon), \quad\text{if } \mathscr{C}(\Upsilon) \subseteq \mathscr{C}(\Lambda).$$
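The simplest of these inertia and rank formulas are easy to verify numerically for random symmetric matrices. A small NumPy sketch (illustrative only; the `inertia` helper is our own, not from the paper) checks (3) and the block-diagonal additivity in (6):

```python
import numpy as np

def inertia(S, tol=1e-10):
    """Return (i_plus, i_minus): the numbers of positive and
    negative eigenvalues of the symmetric matrix S."""
    w = np.linalg.eigvalsh(S)
    return int(np.sum(w > tol)), int(np.sum(w < -tol))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = A + A.T   # random symmetric 4x4
G = rng.standard_normal((3, 3)); G = G + G.T   # random symmetric 3x3

ipA, imA = inertia(A)
ipG, imG = inertia(G)

# Formula (3): r(A) = i_+(A) + i_-(A)
assert np.linalg.matrix_rank(A) == ipA + imA

# Formula (6): inertias are additive on a block-diagonal matrix
D = np.block([[A, np.zeros((4, 3))],
              [np.zeros((3, 4)), G]])
assert inertia(D) == (ipA + ipG, imA + imG)
```

This eigenvalue-counting routine is also how the inertias of the matrices G₁ and H₁ are obtained in the numerical example of Section 5.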

2 CLMM with its reparameterized form and BLUPs

This section is devoted to the introduction of the model setup with its reparameterized form, the BLUPs under the reparameterized model, and the corresponding results.

Let us consider a CLMM as follows:

(10) $$\mathcal{C}: y = X\alpha + Z\gamma + \varepsilon, \quad C\alpha = c, \quad\text{with}\quad E\begin{bmatrix}\gamma\\ \varepsilon\end{bmatrix} = 0 \quad\text{and}\quad D\begin{bmatrix}\gamma\\ \varepsilon\end{bmatrix} = \operatorname{cov}\left(\begin{bmatrix}\gamma\\ \varepsilon\end{bmatrix}, \begin{bmatrix}\gamma\\ \varepsilon\end{bmatrix}\right) = \begin{bmatrix}D(\gamma) & \operatorname{cov}(\gamma,\varepsilon)\\ \operatorname{cov}(\varepsilon,\gamma) & D(\varepsilon)\end{bmatrix} = \sigma^{2}\begin{bmatrix}V_1 & V_2\\ V_2' & V_3\end{bmatrix} \triangleq \sigma^{2} V,$$

together with the following general vector, under which the simultaneous prediction of all unknown vectors in the model can be considered:

(11) $$\psi = P\alpha + R\gamma + S\varepsilon = P\alpha + [R,\ S]\begin{bmatrix}\gamma\\ \varepsilon\end{bmatrix},$$

where y ∈ ℝ^{n×1} is a vector of responses, X ∈ ℝ^{n×k}, Z ∈ ℝ^{n×p}, and C ∈ ℝ^{m×k} are known matrices of arbitrary ranks, α ∈ ℝ^{k×1} is a parameter vector of fixed effects, γ ∈ ℝ^{p×1} is a vector of random effects, ε ∈ ℝ^{n×1} is a vector of random errors, c ∈ ℝ^{m×1} is a known vector, V ∈ ℝ^{(n+p)×(n+p)} is a known psd matrix of arbitrary rank, σ² is a positive unknown parameter, and P ∈ ℝ^{s×k}, R ∈ ℝ^{s×p}, and S ∈ ℝ^{s×n} are given matrices of arbitrary ranks. Furthermore, Cα = c is assumed to be a consistent linear matrix equation. According to the assumptions in (10),

$$D(y) = \sigma^{2}\,[Z,\ I_n]\,V\,[Z,\ I_n]' \triangleq \sigma^{2}\,\Lambda V \Lambda',$$

$$D(\psi) = \sigma^{2}\,[R,\ S]\,V\,[R,\ S]' \triangleq \sigma^{2}\,\Gamma V \Gamma', \quad\text{and}\quad \operatorname{cov}(\psi, y) = \sigma^{2}\,\Gamma V \Lambda'$$

are obtained, where Λ = [Z, I_n] and Γ = [R, S].
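For concreteness, these quantities can be assembled directly from Λ, Γ, and V. The sketch below (with small arbitrary dimensions and randomly generated placeholder matrices, not the paper's data) forms D(y), D(ψ), and cov(ψ, y) and confirms that the two dispersion matrices are psd:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, s = 5, 2, 3
Z = rng.standard_normal((n, p))        # random-effects design matrix
R = rng.standard_normal((s, p))
S = rng.standard_normal((s, n))

A = rng.standard_normal((n + p, n + p))
V = A @ A.T                            # psd matrix V = D([gamma; eps]) / sigma^2
sigma2 = 1.0

Lam = np.hstack([Z, np.eye(n)])        # Lambda = [Z, I_n]
Gam = np.hstack([R, S])                # Gamma  = [R, S]

D_y = sigma2 * Lam @ V @ Lam.T         # D(y)        = sigma^2 Lam V Lam'
D_psi = sigma2 * Gam @ V @ Gam.T       # D(psi)      = sigma^2 Gam V Gam'
cov_psi_y = sigma2 * Gam @ V @ Lam.T   # cov(psi, y) = sigma^2 Gam V Lam'

# Both dispersion matrices are psd by construction
assert np.all(np.linalg.eigvalsh(D_y) >= -1e-9)
assert np.all(np.linalg.eigvalsh(D_psi) >= -1e-9)
```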

To draw statistical conclusions from (10), it is first necessary to convert (10) into an implicitly unconstrained form. To do this, we substitute the vector α = C^+c + F_C u, the general solution of the matrix equation Cα = c, into the model equation in (10), where u ∈ ℝ^{k×1} is an arbitrary vector arising from the reparameterization; this yields the following reparameterized LMM:

(12) ℛ: r = XF_C u + Zγ + ε,

where r = y − XC^+c; thus, predictions under 𝒞 can be derived from ℛ. Correspondingly, ψ in (11) becomes the following reparameterized vector:

(13) $$\phi = PF_C u + R\gamma + S\varepsilon = PF_C u + [R,\ S]\begin{bmatrix}\gamma\\ \varepsilon\end{bmatrix},$$

where ϕ = ψ − PC^+c. In this study, we suppose that the models 𝒞 and ℛ are consistent, i.e., y ∈ 𝒞[X, ΛVΛ′] and r ∈ 𝒞[XF_C, ΛVΛ′] hold with probability 1, respectively; see, e.g., [19]. According to these consistency requirements, clearly, if ℛ is consistent, then 𝒞 is consistent.
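The reparameterization step can be checked directly: for any choice of u, the vector α = C⁺c + F_C u satisfies the restriction. A minimal sketch with a hypothetical restriction matrix (chosen only for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical consistent restriction C alpha = c (illustrative values)
C = np.array([[1.0, -1.0, 0.0]])
c = np.array([0.0])

C_plus = np.linalg.pinv(C)        # Moore-Penrose inverse C^+
F_C = np.eye(3) - C_plus @ C      # projector F_C = I_k - C^+ C

rng = np.random.default_rng(3)
for _ in range(5):
    u = rng.standard_normal(3)    # arbitrary reparameterized vector
    alpha = (C_plus @ c) + F_C @ u
    # every choice of u satisfies the restriction C alpha = c
    assert np.allclose(C @ alpha, c)
```

Because C(F_C u) = 0 for every u, the constraint is absorbed into the design matrix XF_C of the reparameterized model ℛ.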

The definitions of predictability and of the BLUPs under the models are given as follows; see, e.g., [20–22].

Definition 2.1

Let 𝒞, ψ, ℛ, and ϕ be as given in (10)–(13), respectively.

  1. ψ is said to be predictable under 𝒞 if there exist K ∈ ℝ^{s×n} and k ∈ ℝ^{s×1} such that E(Ky + k − ψ) = 0 holds, i.e., 𝒞(P′) ⊆ 𝒞[X′, C′]; in this case, Pα is also estimable under 𝒞.

  2. Let ψ be predictable under 𝒞. If D(Ky + k − ψ) = min s.t. E(Ky + k − ψ) = 0 holds in the LPO, then Ky + k is defined to be the BLUP of ψ under 𝒞 and is denoted by

    Ky + k = BLUP_𝒞(ψ) = BLUP_𝒞(Pα + Rγ + Sε).

    Moreover, Ky + k corresponds to the best linear unbiased estimator (BLUE) of Pα, denoted by BLUE_𝒞(Pα), when R = 0 and S = 0 in ψ.

  3. ϕ is said to be predictable under ℛ if there exists K ∈ ℝ^{s×n} such that E(Kr − ϕ) = 0 holds, i.e., 𝒞(F_C P′) ⊆ 𝒞(F_C X′); in this case, PF_C u is also estimable under ℛ.

  4. Let ϕ be predictable under ℛ. If D(Kr − ϕ) = min s.t. E(Kr − ϕ) = 0 holds in the LPO, then Kr is defined to be the BLUP of ϕ under ℛ and is written as

    Kr = BLUP_ℛ(ϕ) = BLUP_ℛ(PF_C u + Rγ + Sε).

    Moreover, Kr = BLUE_ℛ(PF_C u) when R = 0 and S = 0 in ϕ.

Note that ψ is predictable under 𝒞 ⇔ ϕ is predictable under ℛ; see [23, Theorem 2.4]. It is easily seen from Definition 2.1 that γ, ε, Rγ, Sε, Rγ + Sε, and XF_C u + Zγ + ε are always predictable, and XF_C u is always estimable, under ℛ. Furthermore, u is estimable under ℛ ⇔ r(XF_C) = k. It is also noted that

D(Ky + k − ψ) = min s.t. E(Ky + k − ψ) = 0 ⇔ D(Kr − ϕ) = min s.t. E(Kr − ϕ) = 0,

i.e., the following equality

(14) BLUP_𝒞(ψ) = PC^+c + BLUP_ℛ(ϕ)

holds, see [23], and thereby,

(15) ψ − BLUP_𝒞(ψ) = ψ − PC^+c − BLUP_ℛ(ϕ) = ϕ − BLUP_ℛ(ϕ), i.e., D[ψ − BLUP_𝒞(ψ)] = D[ϕ − BLUP_ℛ(ϕ)].

It is clearly seen from (14) that the BLUP of ψ under 𝒞 is obtained from the BLUP of ϕ under ℛ; in other words, the BLUPs of all unknown vectors under CLMMs can be derived by well-known standard procedures developed for unconstrained LMMs. It is also seen from (15) that the problem of comparing the dispersion matrix of the BLUP of ψ under 𝒞 corresponds to the problem of comparing the dispersion matrix of the BLUP of ϕ under ℛ.

The existence and representations of the BLUPs of the unknown vectors ψ and ϕ under the models, as well as some properties of these predictors, are presented below; see also [24], and for different approaches, see, e.g., [19,22].

Lemma 2.1

Consider the vectors ψ and ϕ given in (11) and (13), respectively. Suppose that ϕ is predictable under ℛ, and consider the vectors Kr and Tr that are unbiased linear predictors of ϕ. Then,

(16) $$\max_{E(Tr-\phi)=0} i_{+}\{D(Kr-\phi) - D(Tr-\phi)\} = r\left([K,\ -I_s]\begin{bmatrix}\Lambda V \Lambda' & \Lambda V \Gamma'\\ \Gamma V \Lambda' & \Gamma V \Gamma'\end{bmatrix}\begin{bmatrix}X F_C\\ P F_C\end{bmatrix}^{\perp}\right)$$

is the maximal positive inertia of D(Kr − ϕ) − D(Tr − ϕ) subject to TXF_C = PF_C. Hence,

(17) D(Kr − ϕ) = min s.t. E(Kr − ϕ) = 0 ⇔ Kr = BLUP_ℛ(ϕ) ⇔ K[XF_C, ΛVΛ′(XF_C)^⊥] = [PF_C, ΓVΛ′(XF_C)^⊥].

This matrix equation is consistent. According to the general solution of the equation,

(18) BLUP_ℛ(ϕ) = Kr = ([PF_C, ΓVΛ′(XF_C)^⊥] J_r^+ + U J_r^⊥) r

is written, where J_r = [XF_C, ΛVΛ′(XF_C)^⊥] and U ∈ ℝ^{s×n} is an arbitrary matrix. In particular, K is unique ⇔ r(J_r) = n, and BLUP_ℛ(ϕ) is unique ⇔ ℛ is consistent. Additionally,

(19) r(J_r) = r[XF_C, ΛVΛ′] = r[XF_C, (XF_C)^⊥ΛVΛ′], 𝒞(J_r) = 𝒞[XF_C, ΛVΛ′] = 𝒞[XF_C, (XF_C)^⊥ΛVΛ′].

Correspondingly, the BLUP of ψ under 𝒞 can be written as

(20) BLUP_𝒞(ψ) = Ky + k = PC^+c + ([PF_C, ΓVΛ′(XF_C)^⊥] J_r^+ + U J_r^⊥) r,

and the BLUE of Pα under 𝒞 can be written as

(21) BLUE_𝒞(Pα) = Ky + k = PC^+c + ([PF_C, 0] J_r^+ + U J_r^⊥) r.

Lemma 2.2

Consider the vectors ψ and ϕ given in (11) and (13), respectively, and suppose that ϕ is predictable under ℛ. Then,

$$D[\mathrm{BLUP}_{\mathcal{R}}(\phi)] = \sigma^{2} D_r J_r^{+} \Lambda V \Lambda' (J_r^{+})' D_r',$$

$$\operatorname{cov}\{\mathrm{BLUP}_{\mathcal{R}}(\phi), \phi\} = \sigma^{2} D_r J_r^{+} \Lambda V \Gamma',$$

(22) $$D[\phi - \mathrm{BLUP}_{\mathcal{R}}(\phi)] = D[\psi - \mathrm{BLUP}_{\mathcal{C}}(\psi)] = \sigma^{2} (D_r J_r^{+} \Lambda - \Gamma) V (D_r J_r^{+} \Lambda - \Gamma)',$$

where D_r = [PF_C, ΓVΛ′(XF_C)^⊥] and J_r = [XF_C, ΛVΛ′(XF_C)^⊥]. Moreover, when R = 0 and S = 0, (22) corresponds to

(23) $$D[\mathrm{BLUE}_{\mathcal{R}}(PF_C u)] = D[\mathrm{BLUE}_{\mathcal{C}}(P\alpha)] = \sigma^{2}\,([PF_C,\ 0] J_r^{+} \Lambda)\, V\, ([PF_C,\ 0] J_r^{+} \Lambda)'.$$
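To see the mechanics of the projector-based formulas above in the simplest possible setting, the sketch below evaluates the analogous unconstrained BLUE formula with R = S = 0 (so Γ = 0, and the second block of the D-type matrix vanishes) and D(y) = σ²I. In that special case the BLUE of the mean vector reduces to the ordinary least-squares fit P_X y, which the code verifies. The matrices are randomly generated placeholders, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 6, 3
X = rng.standard_normal((n, k))              # full-column-rank design (a.s.)
y = rng.standard_normal(n)

X_perp = np.eye(n) - X @ np.linalg.pinv(X)   # orthogonal projector X^perp
Sigma = np.eye(n)                            # D(y) = sigma^2 I for this check

# J = [X, Sigma X^perp]; with Gamma = 0 the predictor matrix is [X, 0],
# mirroring the [P F_C, 0] J_r^+ structure of (21) and (23)
J = np.hstack([X, Sigma @ X_perp])
blue = np.hstack([X, np.zeros((n, n))]) @ np.linalg.pinv(J) @ y

# With Sigma = I the BLUE coincides with the OLS projection P_X y
P_X = X @ np.linalg.pinv(X)
assert np.allclose(blue, P_X @ y)
```

Replacing `Sigma` by a general psd ΛVΛ′ gives the general weighted case; only the block matrix J changes.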

3 Characterizing relationships of BLUPs’ dispersion matrices under a CLMM

In this section, various comparisons of predictors under 𝒞 are made by means of ℛ, and conclusions for special cases are drawn using rank and inertia formulas for block matrices together with EBMOs.

Theorem 3.1

Let 𝒞, ψ, ℛ, and ϕ be as given in (10)–(13), respectively. Suppose that ϕ is predictable under ℛ. Let Q ∈ ℝ^{s×s} be any symmetric matrix and denote

$$G = \begin{bmatrix} \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ \Gamma V \Lambda' & \Gamma V \Gamma' - \sigma^{-2} Q & P F_C\\ F_C X' & F_C P' & 0 \end{bmatrix} = \begin{bmatrix} \sigma^{-2} D(y) & \sigma^{-2} \operatorname{cov}(y, \phi) & X F_C\\ \sigma^{-2} \operatorname{cov}(\phi, y) & \sigma^{-2} D(\phi) - \sigma^{-2} Q & P F_C\\ F_C X' & F_C P' & 0 \end{bmatrix}.$$

Then,

(24) i₊(Q − D[ϕ − BLUP_ℛ(ϕ)]) = i₋(G) − r(XF_C),

(25) i₋(Q − D[ϕ − BLUP_ℛ(ϕ)]) = i₊(G) − r[XF_C, ΛVΛ′],

(26) r(Q − D[ϕ − BLUP_ℛ(ϕ)]) = r(G) − r(XF_C) − r[XF_C, ΛVΛ′].

As a result, the outcomes listed below are obtained.

  1. Q ⪰ D[ϕ − BLUP_ℛ(ϕ)] ⇔ Q ⪰ D[ψ − BLUP_𝒞(ψ)] ⇔ i₊(G) = r[XF_C, ΛVΛ′],

  2. Q ⪯ D[ϕ − BLUP_ℛ(ϕ)] ⇔ Q ⪯ D[ψ − BLUP_𝒞(ψ)] ⇔ i₋(G) = r(XF_C),

  3. Q = D[ϕ − BLUP_ℛ(ϕ)] ⇔ Q = D[ψ − BLUP_𝒞(ψ)] ⇔ r(G) = r[XF_C, ΛVΛ′] + r(XF_C).

Proof

Let us consider D[ϕ − BLUP_ℛ(ϕ)] in (22) and apply (9) to the difference between a symmetric matrix Q and D[ϕ − BLUP_ℛ(ϕ)]. Then, we obtain

(27) $$\begin{aligned} i_{\pm}(Q - D[\phi - \mathrm{BLUP}_{\mathcal{R}}(\phi)]) &= i_{\pm}(Q - \sigma^{2} (D_r J_r^{+}\Lambda - \Gamma) V (D_r J_r^{+}\Lambda - \Gamma)')\\ &= i_{\pm}\begin{bmatrix} V & V\Lambda'(J_r^{+})'D_r' - V\Gamma'\\ D_r J_r^{+}\Lambda V - \Gamma V & \sigma^{-2} Q \end{bmatrix} - i_{\pm}(V)\\ &= i_{\pm}\left(\begin{bmatrix} V & -V\Gamma'\\ -\Gamma V & \sigma^{-2} Q \end{bmatrix} + \begin{bmatrix} V\Lambda' & 0\\ 0 & D_r \end{bmatrix}\begin{bmatrix} 0 & J_r\\ J_r' & 0 \end{bmatrix}^{+}\begin{bmatrix} \Lambda V & 0\\ 0 & D_r' \end{bmatrix}\right) - i_{\pm}(V), \end{aligned}$$

where D_r = [PF_C, ΓVΛ′(XF_C)^⊥] and J_r = [XF_C, ΛVΛ′(XF_C)^⊥]. Note that the following column space inclusions hold:

  1. 𝒞(ΛV) = 𝒞(ΛVΛ′) ⊆ 𝒞[XF_C, ΛVΛ′] = 𝒞(J_r) (by (19)),

  2. 𝒞(F_C P′) ⊆ 𝒞(F_C X′) (from Definition 2.1) and 𝒞(ΛVΓ′) ⊆ 𝒞(ΛV) = 𝒞(ΛVΛ′), and thereby, 𝒞(D_r′) ⊆ 𝒞(J_r′).

In this case, reapplying (9) to (27) and also using (4) and (6) with the EBMOs, the expression in (27) becomes

(28) $$\begin{aligned} &i_{\pm}\begin{bmatrix} 0 & J_r & \Lambda V & 0\\ J_r' & 0 & 0 & D_r'\\ V\Lambda' & 0 & V & -V\Gamma'\\ 0 & D_r & -\Gamma V & \sigma^{-2} Q \end{bmatrix} - i_{\mp}\begin{bmatrix} 0 & J_r\\ J_r' & 0 \end{bmatrix} - i_{\pm}(V)\\ &\quad = i_{\pm}\begin{bmatrix} -\Lambda V \Lambda' & J_r & \Lambda V \Gamma'\\ J_r' & 0 & D_r'\\ \Gamma V \Lambda' & D_r & \sigma^{-2} Q - \Gamma V \Gamma' \end{bmatrix} - r(J_r). \end{aligned}$$

By substituting D_r and J_r into (28) and also using (5)–(7) and (19) with the EBMOs,

$$i_{\pm}\begin{bmatrix} -\Lambda V \Lambda' & X F_C & \Lambda V \Lambda'(X F_C)^{\perp} & \Lambda V \Gamma'\\ F_C X' & 0 & 0 & F_C P'\\ (X F_C)^{\perp}\Lambda V \Lambda' & 0 & 0 & (X F_C)^{\perp}\Lambda V \Gamma'\\ \Gamma V \Lambda' & P F_C & \Gamma V \Lambda'(X F_C)^{\perp} & \sigma^{-2} Q - \Gamma V \Gamma' \end{bmatrix} - r[X F_C,\ \Lambda V \Lambda']$$

(29) $$\begin{aligned} &= i_{\pm}\begin{bmatrix} -\Lambda V \Lambda' & X F_C & \Lambda V \Gamma'\\ F_C X' & 0 & F_C P'\\ \Gamma V \Lambda' & P F_C & \sigma^{-2} Q - \Gamma V \Gamma' \end{bmatrix} + i_{\pm}((X F_C)^{\perp}\Lambda V \Lambda'(X F_C)^{\perp}) - r[X F_C,\ \Lambda V \Lambda']\\ &= i_{\mp}\begin{bmatrix} \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ \Gamma V \Lambda' & \Gamma V \Gamma' - \sigma^{-2} Q & P F_C\\ F_C X' & F_C P' & 0 \end{bmatrix} + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda'] \end{aligned}$$

is obtained. From (8),

$$i_{+}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} = r[X F_C,\ \Lambda V \Lambda'] \quad\text{and}\quad i_{-}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} = r(X F_C)$$

are written, and then (24) and (25) are obtained from (29). By (3), (26) is the result of summing the equalities in (24) and (25). Lemma 1.1 applied to (24)–(26) gives items (1)–(3).□

Corollary 3.1

Let 𝒞 and ℛ be as given in (10) and (12), respectively, and denote

$$G_1 = \begin{bmatrix} \sigma^{-2} D(y) & 0 & X F_C\\ 0 & -\sigma^{-2} Q & X F_C\\ F_C X' & F_C X' & 0 \end{bmatrix}, \quad G_2 = \begin{bmatrix} \sigma^{-2} D(y) & 0 & X F_C\\ 0 & -\sigma^{-2} Q & I_k\\ F_C X' & I_k & 0 \end{bmatrix},$$

$$G_3 = \begin{bmatrix} \sigma^{-2} D(y) & \sigma^{-2}[Z D(\gamma) + \operatorname{cov}(\varepsilon,\gamma)] & X F_C\\ \sigma^{-2}[D(\gamma) Z' + \operatorname{cov}(\gamma,\varepsilon)] & \sigma^{-2}[D(\gamma) - Q] & 0\\ F_C X' & 0 & 0 \end{bmatrix},$$

$$G_4 = \begin{bmatrix} \sigma^{-2} D(y) & \sigma^{-2}[D(\varepsilon) + Z \operatorname{cov}(\gamma,\varepsilon)] & X F_C\\ \sigma^{-2}[D(\varepsilon) + \operatorname{cov}(\varepsilon,\gamma) Z'] & \sigma^{-2}[D(\varepsilon) - Q] & 0\\ F_C X' & 0 & 0 \end{bmatrix}.$$

  1. XF_C u is always estimable under ℛ, and then,

    1. Q ⪰ D[BLUE_ℛ(XF_C u)] ⇔ Q ⪰ D[BLUE_𝒞(Xα)] ⇔ i₊(G₁) = r[XF_C, ΛVΛ′],

    2. Q ⪯ D[BLUE_ℛ(XF_C u)] ⇔ Q ⪯ D[BLUE_𝒞(Xα)] ⇔ i₋(G₁) = r(XF_C),

    3. Q = D[BLUE_ℛ(XF_C u)] ⇔ Q = D[BLUE_𝒞(Xα)] ⇔ r(G₁) = r[XF_C, ΛVΛ′] + r(XF_C).

  2. Suppose that u is estimable under ℛ, i.e., r(XF_C) = k holds. Then,

    1. Q ⪰ D[BLUE_ℛ(u)] ⇔ Q ⪰ D[BLUE_𝒞(α)] ⇔ i₊(G₂) = r[XF_C, ΛVΛ′],

    2. Q ⪯ D[BLUE_ℛ(u)] ⇔ Q ⪯ D[BLUE_𝒞(α)] ⇔ i₋(G₂) = k,

    3. Q = D[BLUE_ℛ(u)] ⇔ Q = D[BLUE_𝒞(α)] ⇔ r(G₂) = r[XF_C, ΛVΛ′] + k.

  3. γ and ε are always predictable under ℛ, and then,

    1. Q ⪰ D[γ − BLUP_ℛ(γ)] ⇔ Q ⪰ D[γ − BLUP_𝒞(γ)] ⇔ i₊(G₃) = r[XF_C, ΛVΛ′],

    2. Q ⪯ D[γ − BLUP_ℛ(γ)] ⇔ Q ⪯ D[γ − BLUP_𝒞(γ)] ⇔ i₋(G₃) = r(XF_C),

    3. Q = D[γ − BLUP_ℛ(γ)] ⇔ Q = D[γ − BLUP_𝒞(γ)] ⇔ r(G₃) = r[XF_C, ΛVΛ′] + r(XF_C),

    4. Q ⪰ D[ε − BLUP_ℛ(ε)] ⇔ Q ⪰ D[ε − BLUP_𝒞(ε)] ⇔ i₊(G₄) = r[XF_C, ΛVΛ′],

    5. Q ⪯ D[ε − BLUP_ℛ(ε)] ⇔ Q ⪯ D[ε − BLUP_𝒞(ε)] ⇔ i₋(G₄) = r(XF_C),

    6. Q = D[ε − BLUP_ℛ(ε)] ⇔ Q = D[ε − BLUP_𝒞(ε)] ⇔ r(G₄) = r[XF_C, ΛVΛ′] + r(XF_C).

4 Application to a CLMM and its unconstrained form

Various new results can be derived by choosing particular matrices in place of the matrix Q in Theorem 3.1. The results of comparing the dispersion matrices of the BLUPs under a CLMM and under its unconstrained form are presented in this section.

Let us consider the following unconstrained LMM, obtained by considering the model 𝒞 in (10) without the restriction on the parameter vector α:

(30) ℳ: y = Xα + Zγ + ε,

with the same assumptions as in (10). It is obvious that

(31) ψ is predictable under ℳ ⇔ 𝒞(P′) ⊆ 𝒞(X′).

In this case, ψ is also predictable under 𝒞.

Suppose that ψ is predictable under ℳ. Then there exists K ∈ ℝ^{s×n} such that

D(Ky − ψ) = min s.t. E(Ky − ψ) = 0 ⇔ Ky = BLUP_ℳ(ψ) ⇔ K[X, ΛVΛ′X^⊥] = [P, ΓVΛ′X^⊥],

and then, BLUP_ℳ(ψ) = Ky = ([P, ΓVΛ′X^⊥] J_m^+ + U J_m^⊥) y, where U ∈ ℝ^{s×n} is an arbitrary matrix. In this case,

(32) D[ψ − BLUP_ℳ(ψ)] = σ²(D_m J_m^+ Λ − Γ)V(D_m J_m^+ Λ − Γ)′,

where D_m = [P, ΓVΛ′X^⊥] and J_m = [X, ΛVΛ′X^⊥]. Furthermore,

(33) r(J_m) = r[X, ΛVΛ′] = r[X, X^⊥ΛVΛ′], 𝒞(J_m) = 𝒞[X, ΛVΛ′] = 𝒞[X, X^⊥ΛVΛ′].

Also, when R = 0 and S = 0, (32) corresponds to

(34) D[BLUE_ℳ(Pα)] = σ²([P, 0]J_m^+Λ)V([P, 0]J_m^+Λ)′.

Theorem 4.1

Let 𝒞, ψ, and ℳ be as given in (10), (11), and (30), respectively, and suppose that ψ is predictable under ℳ. Denote

$$H = \begin{bmatrix} \Lambda V \Lambda' & \Lambda V \Lambda' & \Lambda V \Gamma' & 0 & X F_C\\ \Lambda V \Lambda' & 0 & 0 & X & 0\\ \Gamma V \Lambda' & 0 & 0 & P & 0\\ 0 & X' & P' & 0 & 0\\ F_C X' & 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} \sigma^{-2} D(y) & \sigma^{-2} D(y) & \sigma^{-2}\operatorname{cov}(y,\psi) & 0 & X F_C\\ \sigma^{-2} D(y) & 0 & 0 & X & 0\\ \sigma^{-2}\operatorname{cov}(\psi,y) & 0 & 0 & P & 0\\ 0 & X' & P' & 0 & 0\\ F_C X' & 0 & 0 & 0 & 0 \end{bmatrix}.$$

Then,

(35) i₊(D[ψ − BLUP_ℳ(ψ)] − D[ψ − BLUP_𝒞(ψ)]) = i₊(H) − r[X, ΛVΛ′] − r(XF_C),

(36) i₋(D[ψ − BLUP_ℳ(ψ)] − D[ψ − BLUP_𝒞(ψ)]) = i₋(H) − r[XF_C, ΛVΛ′] − r(X),

(37) r(D[ψ − BLUP_ℳ(ψ)] − D[ψ − BLUP_𝒞(ψ)]) = r(H) − r[X, ΛVΛ′] − r(XF_C) − r[XF_C, ΛVΛ′] − r(X).

As a result, the outcomes listed below are obtained:

  1. D[ψ − BLUP_ℳ(ψ)] ⪰ D[ψ − BLUP_𝒞(ψ)] ⇔ i₋(H) = r[XF_C, ΛVΛ′] + r(X),

  2. D[ψ − BLUP_ℳ(ψ)] ⪯ D[ψ − BLUP_𝒞(ψ)] ⇔ i₊(H) = r[X, ΛVΛ′] + r(XF_C),

  3. D[ψ − BLUP_ℳ(ψ)] = D[ψ − BLUP_𝒞(ψ)] ⇔ r(H) = r[X, ΛVΛ′] + r(XF_C) + r[XF_C, ΛVΛ′] + r(X).

Proof

By considering D[ψ − BLUP_ℳ(ψ)] instead of the matrix Q in (29),

(38) $$\begin{aligned} &i_{\pm}(D[\psi - \mathrm{BLUP}_{\mathcal{M}}(\psi)] - D[\psi - \mathrm{BLUP}_{\mathcal{C}}(\psi)])\\ &\quad = i_{\mp}\left(\begin{bmatrix} \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ \Gamma V \Lambda' & \Gamma V \Gamma' & P F_C\\ F_C X' & F_C P' & 0 \end{bmatrix} - \sigma^{-2}\begin{bmatrix} 0\\ I_s\\ 0 \end{bmatrix} D[\psi - \mathrm{BLUP}_{\mathcal{M}}(\psi)]\begin{bmatrix} 0\\ I_s\\ 0 \end{bmatrix}'\right) + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda'] \end{aligned}$$

is obtained. Substituting D[ψ − BLUP_ℳ(ψ)] from (32) into (38) and then applying (9) to (38), the expression in (38) becomes

(39) $$\begin{aligned} &i_{\mp}\left(\begin{bmatrix} V & 0 & -V\Gamma' & 0\\ 0 & \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ -\Gamma V & \Gamma V \Lambda' & \Gamma V \Gamma' & P F_C\\ 0 & F_C X' & F_C P' & 0 \end{bmatrix} + \begin{bmatrix} V\Lambda' & 0\\ 0 & 0\\ 0 & D_m\\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & J_m\\ J_m' & 0 \end{bmatrix}^{+}\begin{bmatrix} \Lambda V & 0 & 0 & 0\\ 0 & 0 & D_m' & 0 \end{bmatrix}\right) - i_{\mp}(V)\\ &\quad + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda'], \end{aligned}$$

where D_m = [P, ΓVΛ′X^⊥] and J_m = [X, ΛVΛ′X^⊥]. Note that the column space inclusions 𝒞(ΛV) = 𝒞(ΛVΛ′) ⊆ 𝒞[X, ΛVΛ′] = 𝒞(J_m) (by (33)), 𝒞(P′) ⊆ 𝒞(X′) (by (31)), and 𝒞(ΛVΓ′) ⊆ 𝒞(ΛV) = 𝒞(ΛVΛ′), and thereby 𝒞(D_m′) ⊆ 𝒞(J_m′), hold. In this case, reapplying (9) to (39) and also using (4) and (6) with the EBMOs, the expression in (39) is equivalently written as follows:

(40) $$\begin{aligned} &i_{\mp}\begin{bmatrix} 0 & J_m & \Lambda V & 0 & 0 & 0\\ J_m' & 0 & 0 & 0 & D_m' & 0\\ V\Lambda' & 0 & V & 0 & -V\Gamma' & 0\\ 0 & 0 & 0 & \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ 0 & D_m & -\Gamma V & \Gamma V \Lambda' & \Gamma V \Gamma' & P F_C\\ 0 & 0 & 0 & F_C X' & F_C P' & 0 \end{bmatrix} - i_{\pm}\begin{bmatrix} 0 & J_m\\ J_m' & 0 \end{bmatrix} - i_{\mp}(V) + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda']. \end{aligned}$$

By substituting D_m and J_m into (40) and also using (5)–(7) and (33) with the EBMOs, (40) is equivalently written as

$$\begin{aligned} &i_{\mp}\begin{bmatrix} 0 & X & \Lambda V \Lambda' X^{\perp} & \Lambda V & 0 & 0 & 0\\ X' & 0 & 0 & 0 & 0 & P' & 0\\ X^{\perp}\Lambda V \Lambda' & 0 & 0 & 0 & 0 & X^{\perp}\Lambda V \Gamma' & 0\\ V\Lambda' & 0 & 0 & V & 0 & -V\Gamma' & 0\\ 0 & 0 & 0 & 0 & \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ 0 & P & \Gamma V \Lambda' X^{\perp} & -\Gamma V & \Gamma V \Lambda' & \Gamma V \Gamma' & P F_C\\ 0 & 0 & 0 & 0 & F_C X' & F_C P' & 0 \end{bmatrix} - r[X,\ \Lambda V \Lambda'] - i_{\mp}(V) + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda']\\ &= i_{\mp}\begin{bmatrix} -\Lambda V \Lambda' & X & \Lambda V \Lambda' X^{\perp} & 0 & \Lambda V \Gamma' & 0\\ X' & 0 & 0 & 0 & P' & 0\\ X^{\perp}\Lambda V \Lambda' & 0 & 0 & 0 & X^{\perp}\Lambda V \Gamma' & 0\\ 0 & 0 & 0 & \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ \Gamma V \Lambda' & P & \Gamma V \Lambda' X^{\perp} & \Gamma V \Lambda' & 0 & P F_C\\ 0 & 0 & 0 & F_C X' & F_C P' & 0 \end{bmatrix} - r[X,\ \Lambda V \Lambda'] + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda'] \end{aligned}$$

(41) $$\begin{aligned} &= i_{\mp}\begin{bmatrix} -\Lambda V \Lambda' & X & 0 & \Lambda V \Gamma' & 0\\ X' & 0 & 0 & P' & 0\\ 0 & 0 & \Lambda V \Lambda' & \Lambda V \Gamma' & X F_C\\ \Gamma V \Lambda' & P & \Gamma V \Lambda' & 0 & P F_C\\ 0 & 0 & F_C X' & F_C P' & 0 \end{bmatrix} + i_{\mp}(X^{\perp}\Lambda V \Lambda' X^{\perp}) - r[X,\ \Lambda V \Lambda'] + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda']\\ &= i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & \Lambda V \Lambda' & \Lambda V \Gamma' & 0 & X F_C\\ \Lambda V \Lambda' & 0 & 0 & X & 0\\ \Gamma V \Lambda' & 0 & 0 & P & 0\\ 0 & X' & P' & 0 & 0\\ F_C X' & 0 & 0 & 0 & 0 \end{bmatrix} + i_{\mp}\begin{bmatrix} \Lambda V \Lambda' & X\\ X' & 0 \end{bmatrix} - r(X) - r[X,\ \Lambda V \Lambda'] + i_{\pm}\begin{bmatrix} \Lambda V \Lambda' & X F_C\\ F_C X' & 0 \end{bmatrix} - r(X F_C) - r[X F_C,\ \Lambda V \Lambda']. \end{aligned}$$

Using (8), (35) and (36) are obtained from (41), and (37) follows from (35) and (36) by (3). Lemma 1.1 applied to (35)–(37) gives items (1)–(3).□

For various choices of the matrices P, R, and S in ψ, many further results can be drawn from Theorem 4.1. In particular, setting R = 0, S = 0, and P = X in ψ yields Corollary 4.1.

Corollary 4.1

Let 𝒞 and ℳ be as given in (10) and (30), respectively; Xα is always estimable under 𝒞 and ℳ. Denote

$$H_1 = \begin{bmatrix} \Lambda V \Lambda' & \Lambda V \Lambda' & 0 & 0 & X F_C\\ \Lambda V \Lambda' & 0 & 0 & X & 0\\ 0 & 0 & 0 & X & 0\\ 0 & X' & X' & 0 & 0\\ F_C X' & 0 & 0 & 0 & 0 \end{bmatrix}.$$

  1. D[BLUE_ℳ(Xα)] ⪰ D[BLUE_𝒞(Xα)] ⇔ i₋(H₁) = r[XF_C, ΛVΛ′] + r(X),

  2. D[BLUE_ℳ(Xα)] ⪯ D[BLUE_𝒞(Xα)] ⇔ i₊(H₁) = r[X, ΛVΛ′] + r(XF_C),

  3. D[BLUE_ℳ(Xα)] = D[BLUE_𝒞(Xα)] ⇔ r(H₁) = r[X, ΛVΛ′] + r(XF_C) + r[XF_C, ΛVΛ′] + r(X).

5 Numerical example

In this section, we use a numerical example to illustrate the theoretical results presented in Sections 3 and 4. We focus on an example involving real data used in [25]. The data have also been studied by many authors, such as [26, p. 77] and [27]. The data comprise first lactation yields of dairy cows together with herd effects and sire additive genetic merits. The yields are assigned as the response (y), the herd effects are treated as fixed effects represented by h_i, i = 1, 2, 3, where h_i is the environmental effect of the i-th herd, and the sire additive genetic merits are treated as random effects represented by s_j, j = 1, 2, 3, 4, corresponding to sires A, B, C, and D, where s_j is the effect of the j-th sire on the lactation yield of its daughter. Therefore, we fit the model y = Xα + Zγ + ε with

$$X = \begin{bmatrix} 1 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1 \end{bmatrix}, \quad Z = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad \alpha = \begin{bmatrix} h_1\\ h_2\\ h_3 \end{bmatrix}, \quad\text{and}\quad \gamma = \begin{bmatrix} s_1\\ s_2\\ s_3\\ s_4 \end{bmatrix},$$

to the data. Let us consider the restriction Cα = c with C = [1  1  0]. Such a restriction may be a fact known from theory or from experiments. The matrix V₁ is assumed to be 0.1 times the identity matrix, and V₃ is taken to be the identity matrix, as in [25].

As is widely recognized, two frequently employed estimators within the framework of general linear models are the BLUEs and the ordinary least-squares estimators (OLSEs). Both of these estimators exhibit a range of straightforward yet noteworthy properties. Given that these estimators are characterized by distinct optimality criteria, their expressions and properties are not necessarily the same. Therefore, it is natural to explore potential connections between them. We use the numerical example above to illustrate the comparison results obtained in Section 3 by considering the BLUE and the OLSE as two estimators of XF_C α under ℛ. Note that XF_C α is estimable under ℛ. The relationship between the BLUE and the OLSE of XF_C α, the latter denoted OLSE(XF_C α), under the model ℛ is presented as follows. Using the NumPy library in Python and substituting the considerations and findings above into the matrix G₁ in Corollary 3.1, we find the nonzero eigenvalues of the matrix G₁ to be

−3.074, −2.475, −1.100, −1.100, 1.100, 1.100, 1.100, 1.053, 1.074, 1.100, 1.413, 2.478, 3.081,

the remaining eigenvalues being zero.

Thus, we see that i₊(G₁) = 9 and i₋(G₁) = 4, and thereby r(G₁) = 13. We also obtain r[XF_C, ΛVΛ′] = 9 and r(XF_C) = 2. Therefore, i₊(G₁) = 9 = r[XF_C, ΛVΛ′] ⇔ D[BLUE_ℛ(XF_C α)] ⪯ D[OLSE(XF_C α)] holds. This is an expected result: the inequality is a well-known property in statistical theory, following from the definitions of the BLUE and the OLSE. Since r(XF_C) = 2, i.e., i₋(G₁) ≠ r(XF_C), the reverse relation D[OLSE(XF_C α)] ⪯ D[BLUE_ℛ(XF_C α)] does not hold.

Now, we use the same numerical example again to illustrate the comparison results obtained in Section 4 by considering the BLUEs of Xα under the models ℳ and 𝒞. Direct calculations show that

$$D[\mathrm{BLUE}_{\mathcal{M}}(X\alpha)] = \begin{bmatrix} 0.55 & 0.55 & 0.03 & 0.03 & 0.03 & 0.02 & 0.02 & 0.02 & 0.02\\ 0.55 & 0.55 & 0.03 & 0.03 & 0.03 & 0.02 & 0.02 & 0.02 & 0.02\\ 0.03 & 0.03 & 0.38 & 0.38 & 0.38 & 0.03 & 0.03 & 0.03 & 0.03\\ 0.03 & 0.03 & 0.38 & 0.38 & 0.38 & 0.03 & 0.03 & 0.03 & 0.03\\ 0.03 & 0.03 & 0.38 & 0.38 & 0.38 & 0.03 & 0.03 & 0.03 & 0.03\\ 0.02 & 0.02 & 0.03 & 0.03 & 0.03 & 0.30 & 0.30 & 0.30 & 0.30\\ 0.02 & 0.02 & 0.03 & 0.03 & 0.03 & 0.30 & 0.30 & 0.30 & 0.30\\ 0.02 & 0.02 & 0.03 & 0.03 & 0.03 & 0.30 & 0.30 & 0.30 & 0.30\\ 0.02 & 0.02 & 0.03 & 0.03 & 0.03 & 0.30 & 0.30 & 0.30 & 0.30 \end{bmatrix}$$

and

$$D[\mathrm{BLUE}_{\mathcal{C}}(X\alpha)] = \begin{bmatrix} 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.01 & 0.01 & 0.01 & 0.01\\ 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.01 & 0.01 & 0.01 & 0.01\\ 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.01 & 0.01 & 0.01 & 0.01\\ 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.01 & 0.01 & 0.01 & 0.01\\ 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.01 & 0.01 & 0.01 & 0.01\\ 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.29 & 0.29 & 0.29 & 0.29\\ 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.29 & 0.29 & 0.29 & 0.29\\ 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.29 & 0.29 & 0.29 & 0.29\\ 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.29 & 0.29 & 0.29 & 0.29 \end{bmatrix}.$$

Using the NumPy library in Python and substituting the considerations and findings above into the matrix H₁ in Corollary 4.1, we find the nonzero eigenvalues of the matrix H₁ to be

−3.403, −2.883, −2.522, −2.385, −2.102, −1.881, −1.742, −1.702, −1.618, −1.618, −1.618, −1.326,

0.416, 0.618, 0.618, 0.618, 0.652, 0.669, 0.813, 1.233, 1.584, 2.131, 2.573, 2.974,

the remaining eigenvalues being zero.

Thus, we see that i₊(H₁) = 12 and i₋(H₁) = 12, and thereby r(H₁) = 24. We also obtain r[X, ΛVΛ′] = 9 and r(X) = 3. Therefore, i₋(H₁) = 12 = r(X) + r[XF_C, ΛVΛ′], i.e., D[BLUE_ℳ(Xα)] ⪰ D[BLUE_𝒞(Xα)] = D[BLUE_ℛ(XF_C α)] holds. Furthermore, D[BLUE_ℳ(Xα)] ⪯ D[BLUE_𝒞(Xα)] does not hold, since i₊(H₁) ≠ r[X, ΛVΛ′] + r(XF_C). We also mention that, as is easily seen from the calculations, the difference D[BLUE_ℳ(Xα)] − D[BLUE_𝒞(Xα)] = D[BLUE_ℳ(Xα)] − D[BLUE_ℛ(XF_C α)] is psd, so we observe directly that the inequality D[BLUE_ℳ(Xα)] ⪰ D[BLUE_𝒞(Xα)] = D[BLUE_ℛ(XF_C α)] indeed holds.

Finally, we add further calculations to illustrate the comparison results by assuming that the dispersion matrices in the aforementioned numerical example are singular. Consider the following singular matrices for the dispersion matrices V 1 and V 3 , respectively,

$$
V_1 = \begin{pmatrix}
3 & 2 & 2 & 1 \\
2 & 3 & 1 & 2 \\
2 & 1 & 3 & 2 \\
1 & 2 & 2 & 3
\end{pmatrix}
\quad\text{and}\quad
V_3 = \begin{pmatrix}
8 & 4 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\
4 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 1 & 2 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 2 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 2 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2
\end{pmatrix}
$$

with $\sigma^2 = 1$. Using the NumPy library in Python and substituting these singular dispersion matrices into the matrix $H_1$ of Corollary 4.1, we find the eigenvalues of $H_1$ to be

0.191, 0.443, 0.545, 0.868, 1.211, 1.395, 1.599, 2.047, 2.683, 4.167, 6.98, 16.02,

-41.548, -17.719, -9.732, -5.67, -3.797, -2.363, -2.126, -1.916, -1.532, -1.324, -1.057, -0.366.

Then, $i_+(H_1) = i_-(H_1) = 12$, and thereby $r(H_1) = 24$. Moreover, $r[X, \Lambda V \Lambda'] = r[XF_c, \Lambda V \Lambda'] = 9$, $r(X) = 3$, and $r(XF_c) = 2$. Therefore, $i_-(H_1) = 12 = r(X) + r[XF_c, \Lambda V \Lambda']$, i.e., $D[\mathrm{BLUE}(X\alpha)] - D[\mathrm{BLUE}_{C}(X\alpha)] = D[\mathrm{BLUE}(XF_c\alpha)]$ holds. Furthermore, $D[\mathrm{BLUE}(X\alpha)] \preccurlyeq D[\mathrm{BLUE}_{C}(X\alpha)]$ does not hold, since $i_+(H_1) \neq r[X, \Lambda V \Lambda'] + r(XF_c)$. As before, since the calculations show directly that the differences $D[\mathrm{BLUE}(X\alpha)] - D[\mathrm{BLUE}_{C}(X\alpha)]$ and $D[\mathrm{BLUE}(X\alpha)] - D[\mathrm{BLUE}(XF_c\alpha)]$ are positive semidefinite, the inequalities $D[\mathrm{BLUE}(X\alpha)] \succcurlyeq D[\mathrm{BLUE}_{C}(X\alpha)]$ and $D[\mathrm{BLUE}(X\alpha)] \succcurlyeq D[\mathrm{BLUE}(XF_c\alpha)]$ already hold.
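The singularity of the two matrices above is easy to verify directly; a short NumPy check (the variable names `V1` and `V3` follow the example, while the construction code itself is ours):

```python
import numpy as np

# V1 from the example: rows 1+4 and rows 2+3 both sum to (4, 4, 4, 4),
# so the rows are linearly dependent and V1 is singular.
V1 = np.array([[3, 2, 2, 1],
               [2, 3, 1, 2],
               [2, 1, 3, 2],
               [1, 2, 2, 3]], dtype=float)

# V3 from the example: a (1, 2, 1) tridiagonal pattern with a modified
# leading corner; its first row equals twice its second row, so V3 is singular.
V3 = np.diag(np.full(9, 2.0)) + np.diag(np.ones(8), 1) + np.diag(np.ones(8), -1)
V3[0, 0] = 8.0
V3[0, 1] = V3[1, 0] = 4.0
V3[0, 2] = V3[2, 0] = 2.0

assert np.linalg.matrix_rank(V1) < 4     # rank deficient
assert np.linalg.matrix_rank(V3) < 9     # rank deficient
assert np.allclose(V3[0], 2 * V3[1])     # the explicit row dependency
```

Such rank checks are a quick sanity test before substituting singular dispersion matrices into the inertia computations.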

6 Concluding remarks

This article conducts comparisons of the dispersion matrices of the BLUPs, in the context of a CLMM, for a general vector comprising all unknown vectors. The goal is to assess the optimality of these predictors and to compare their dispersion matrices with any symmetric matrix. The methodology employed includes block matrix inertias and ranks, along with elementary block matrix operations (EBMOs). By applying these techniques, the article derives various equalities and inequalities for evaluating and comparing the performance of predictors within the CLMM. Furthermore, the comparison results obtained for the CLMM extend to special instances of the general vector of all unknown vectors, providing a more comprehensive understanding of how these predictors perform in different scenarios. Additionally, the article explores the relationship between the CLMM and its unconstrained counterpart, shedding light on the impact of constraints on prediction and estimation within these models.

In this study, the approach of reparameterizing LMMs subject to exact linear restrictions is used to derive the inference results. Another popular approach to handling a CLMM is to construct a new combined model by merging its two given parts, i.e., the model part and the constraint part, into a single form. Under this approach, the explicitly constrained model $C$ is converted into the following implicitly constrained LMM:

$$
(42)\qquad \hat{C}:\ \begin{pmatrix} y \\ c \end{pmatrix} = \begin{pmatrix} X \\ C \end{pmatrix}\alpha + \begin{pmatrix} Z \\ 0 \end{pmatrix}\gamma + \begin{pmatrix} \varepsilon \\ 0 \end{pmatrix}, \quad\text{with}\quad D\begin{pmatrix} y \\ c \end{pmatrix} = \sigma^2 \begin{pmatrix} \Lambda V \Lambda' & 0 \\ 0 & 0 \end{pmatrix}.
$$

Then, by taking the model $\hat{C}$ in (42) into consideration, similar conclusions can be derived for the BLUP of $\psi$.
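As a rough sketch of this combining step, consider small, purely illustrative matrices (the sizes and entries below are our own placeholders, not the article's data): the observation vector and the restriction vector are stacked, as are the corresponding coefficient matrices, and the constraint rows carry neither random effects nor errors.

```python
import numpy as np

# Hypothetical small instances of the model part (y = X a + Z g + e)
# and the constraint part (C a = c); sizes are illustrative only.
n, p, q, m = 6, 3, 2, 1
rng = np.random.default_rng(1)
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, q))
C = np.array([[1.0, -1.0, 0.0]])        # one exact linear restriction on alpha
y = rng.standard_normal(n)
c = np.zeros(m)

# Combined (implicitly constrained) model as in (42): stack the two parts.
y_c = np.concatenate([y, c])            # (y', c')'
X_c = np.vstack([X, C])                 # (X', C')'
Z_c = np.vstack([Z, np.zeros((m, q))])  # (Z', 0')': constraint rows have no random effects
# The dispersion of (y', c')' is sigma^2 * blockdiag(Lambda V Lambda', 0),
# i.e., the appended constraint rows are error-free.
assert X_c.shape == (n + m, p) and Z_c.shape == (n + m, q)
```

The stacked design then feeds into the same BLUP machinery as an unconstrained LMM, with the zero dispersion block encoding the exactness of the restriction.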

Overall, the article’s findings contribute to the field of statistical modeling and analysis by offering insights into the optimality and performance of predictors in the context of CLMM s and their comparisons with other matrices. The additional comparison findings that reduce to special instances of the general vector of all unknown vectors indicate that the insights gained from the CLMM can be applied to specific cases or scenarios within this broader model. This reduction allows for a more targeted understanding of how the CLMM and its predictors perform under particular conditions, potentially providing practical guidance for model selection and estimation. Furthermore, this article provides comparisons between the CLMM and its unconstrained form. This comparison is valuable for understanding the impact of constraints on the model’s performance. It can reveal how imposing certain restrictions on the model’s parameters, as is done in the CLMM , affects the quality of predictions and estimations compared to the unconstrained version of the model. In summary, these comparisons not only offer insights into the CLMM but also allow for the application of these insights in specific scenarios and provide a basis for evaluating the trade-offs between constrained and unconstrained modeling approaches.

Acknowledgement

The authors are grateful to the anonymous referees for their careful reading, valuable suggestions, and helpful comments on the original version of this article.

  1. Funding information: This work was funded by Sakarya University.

  2. Author contributions: All authors contributed equally to the manuscript and read and approved the final manuscript.

  3. Conflict of interest: The authors state no conflicts of interest.

References

[1] H. Haupt and W. Oberhofer, Stochastic response restrictions, J. Multivariate Anal. 95 (2005), no. 1, 66–75, DOI: https://doi.org/10.1016/j.jmva.2004.08.006.

[2] X. Ren, Corrigendum to “On the equivalence of the BLUEs under a general linear model and its restricted and stochastically restricted models” [Statist. Probab. Lett. 90 (2014) 1–10], Statist. Probab. Lett. 104 (2015), 181–185, DOI: https://doi.org/10.1016/j.spl.2015.05.004.

[3] A. R. Gallant and T. M. Gerig, Computations for constrained linear models, J. Econometrics 12 (1980), no. 1, 59–84, DOI: https://doi.org/10.1016/0304-4076(80)90053-6.

[4] C. R. Hallum, T. O. Lewis, and T. L. Boullion, Estimation in the restricted general linear model with a positive semidefinite covariance matrix, Comm. Statist. 1 (1973), no. 2, 157–166, DOI: https://doi.org/10.1080/03610927308827014.

[5] N. Güler and M. E. Büyükkaya, Statistical inference of a stochastically restricted linear mixed model, AIMS Math. 8 (2023), no. 10, 24401–24417, DOI: https://doi.org/10.3934/math.20231244.

[6] C. A. McGilchrist and C. W. Aisbett, Restricted BLUP for mixed linear models, Biom. J. 33 (1991), no. 2, 131–141, DOI: https://doi.org/10.1002/bimj.4710330202.

[7] M. Satoh, An alternative derivation method of mixed model equations from best linear unbiased prediction (BLUP) and restricted BLUP of breeding values not using maximum likelihood, Anim. Sci. J. 89 (2018), no. 6, 876–879, DOI: https://doi.org/10.1111/asj.13016.

[8] B. Jiang and Y. Tian, On best linear unbiased estimation and prediction under a constrained linear random-effects model, J. Ind. Manag. Optim. 19 (2023), no. 2, 852–867, DOI: https://doi.org/10.3934/jimo.2021209.

[9] Y. Sun, B. Jiang, and H. Jiang, Computations of predictors/estimators under a linear random-effects model with parameter restrictions, Comm. Statist. Theory Methods 48 (2019), no. 14, 3482–3497, DOI: https://doi.org/10.1080/03610926.2018.1476714.

[10] J. K. Baksalary and R. Kala, Best linear unbiased estimation in the restricted general linear model, Series Statistics 10 (1979), no. 1, 27–35, DOI: https://doi.org/10.1080/02331887908801464.

[11] J. S. Chipman and M. M. Rao, The treatment of linear restrictions in regression analysis, Econometrica 32 (1964), no. 1/2, 198–209, DOI: https://doi.org/10.2307/1913745.

[12] H. Jiang, J. Qian, and Y. Sun, Best linear unbiased predictors and estimators under a pair of constrained seemingly unrelated regression models, Statist. Probab. Lett. 158 (2020), 108669, DOI: https://doi.org/10.1016/j.spl.2019.108669.

[13] W. Li, Y. Tian, and R. Yuan, Statistical analysis of a linear regression model with restrictions and superfluous variables, J. Ind. Manag. Optim. 19 (2023), no. 5, 3107–3127, DOI: https://doi.org/10.3934/jimo.2022079.

[14] T. Mathew, A note on best linear unbiased estimation in the restricted general linear model, Series Statistics 14 (1983), no. 1, 3–6, DOI: https://doi.org/10.1080/02331888308801679.

[15] N. Güler and M. E. Büyükkaya, Some remarks on comparison of predictors in seemingly unrelated linear mixed models, Appl. Math. 67 (2022), 525–542, DOI: https://doi.org/10.21136/AM.2021.0366-20.

[16] Y. Tian, Some equalities and inequalities for covariance matrices of estimators under linear model, Statist. Papers 58 (2017), 467–484, DOI: https://doi.org/10.1007/s00362-015-0707-x.

[17] Y. Tian and W. Guo, On comparison of dispersion matrices of estimators under a constrained linear model, Stat. Methods Appl. 25 (2016), 623–649, DOI: https://doi.org/10.1007/s10260-016-0350-2.

[18] Y. Tian, Equalities and inequalities for inertias of Hermitian matrices with applications, Linear Algebra Appl. 433 (2010), no. 1, 263–296, DOI: https://doi.org/10.1016/j.laa.2010.02.018.

[19] C. R. Rao, Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix, J. Multivariate Anal. 3 (1973), no. 3, 276–292, DOI: https://doi.org/10.1016/0047-259X(73)90042-0.

[20] I. S. Alalouf and G. P. H. Styan, Characterizations of estimability in the general linear model, Ann. Statist. 7 (1979), no. 1, 194–200, DOI: https://doi.org/10.1214/aos/1176344564.

[21] A. S. Goldberger, Best linear unbiased prediction in the generalized linear regression model, J. Amer. Statist. Assoc. 57 (1962), no. 298, 369–375, DOI: https://doi.org/10.2307/2281645.

[22] S. Puntanen, G. P. H. Styan, and J. Isotalo, Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty, 1st edn., Springer, Berlin, Heidelberg, 2011.

[23] Y. Tian and J. Wang, Some remarks on fundamental formulas and facts in the statistical analysis of a constrained general linear model, Comm. Statist. Theory Methods 49 (2020), no. 5, 1201–1216, DOI: https://doi.org/10.1080/03610926.2018.1554138.

[24] N. Güler and M. E. Büyükkaya, Inertia and rank approach in transformed linear mixed models for comparison of BLUPs, Comm. Statist. Theory Methods 52 (2023), no. 9, 3108–3123, DOI: https://doi.org/10.1080/03610926.2021.1967397.

[25] G. K. Robinson, That BLUP is a good thing: The estimation of random effects, Statist. Sci. 6 (1991), no. 1, 15–32, DOI: https://doi.org/10.1214/ss/1177011926.

[26] J. Jiang, Linear and Generalized Linear Mixed Models and Their Applications, Springer Series in Statistics, 1st edn., Springer, New York, NY, 2007.

[27] H. Yang, H. Ye, and K. Xue, A further study of predictions in linear mixed models, Comm. Statist. Theory Methods 43 (2014), no. 20, 4241–4252, DOI: https://doi.org/10.1080/03610926.2012.725497.

Received: 2023-12-21
Revised: 2024-11-09
Accepted: 2024-12-03
Published Online: 2024-12-26

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
