Article Open Access

On decompositions of estimators under a general linear model with partial parameter restrictions

  • Bo Jiang, Yongge Tian and Xuan Zhang
Published/Copyright: December 2, 2017

Abstract

A general linear model can be given in certain multiple partitioned forms, and there exist submodels associated with the given full model. In this situation, we can make statistical inferences from the full model and its submodels, respectively. It has been realized that there exist links between inference results obtained from the full model and its submodels, and it is therefore of interest to establish certain links among estimators of parameter spaces under these models. In this approach, the methodology of additive matrix decompositions plays an important role in obtaining satisfactory conclusions. In this paper, we consider the problem of establishing additive decompositions of estimators in the context of a general linear model with partial parameter restrictions. We demonstrate how to decompose best linear unbiased estimators (BLUEs) under the constrained general linear model (CGLM) as sums of estimators under submodels with parameter restrictions by using a variety of effective tools in matrix analysis. The derivation of our main results is based on heavy algebraic operations of the given matrices and their generalized inverses in the CGLM, while the whole contribution illustrates various skillful uses of state-of-the-art matrix analysis techniques in the statistical inference of linear regression models.

MSC 2010: 15A03; 15A09; 62F10; 62F30

1 Introduction

Consider a partitioned linear model with partial parameter restrictions

$$\mathscr{M}: \; y = X\beta + \varepsilon = X_1\beta_1 + \cdots + X_k\beta_k + \varepsilon, \quad A_1\beta_1 = b_1, \ldots, A_k\beta_k = b_k, \quad E(\varepsilon) = 0, \; D(\varepsilon) = \sigma^2\Sigma, \tag{1}$$

where

y is an n × 1 vector of observable response variables,

X = [X1, …, Xk] is an n × p matrix of arbitrary rank,

X1, …, Xk are known n × p1, …, n × pk matrices, respectively, with p = p1 + ⋯ + pk,

β = [β1′, …, βk′]′, where β1, …, βk are p1 × 1, …, pk × 1 vectors of fixed but unknown parameters; ε is an n × 1 vector of randomly distributed error terms,

E(⋅) and D(⋅) denote expectation and dispersion matrix,

Σ is an n × n known nonnegative definite matrix of arbitrary rank,

σ2 is an arbitrary positive scaling factor,

A1, …, Ak are given m1 × p1, …, mk × pk matrices, respectively, with m = m1 + … + mk,

b1, …, bk are m1 × 1, …, mk × 1 known vectors, respectively.

The system of linear equations in 𝓜 is often available as extraneous information that the unknown parameter vector β must satisfy. It is an integral part of the constrained general linear model (CGLM), and thus should ideally be utilized in any estimation procedure for the parameter space in (1). Associated with 𝓜 are the following k submodels

$$\mathscr{M}_i: \; y = X_i\beta_i + \varepsilon_i, \quad A_i\beta_i = b_i, \quad E(\varepsilon_i) = 0, \; D(\varepsilon_i) = \sigma^2\Sigma, \quad i = 1, \ldots, k. \tag{2}$$

Obviously, these models can be considered as reduced versions of 𝓜 by deleting k − 1 regressors except Xiβi, i = 1, …, k. It has been realized that estimators of the unknown parameters in 𝓜 and 𝓜i have some intrinsic connections, and people are interested in establishing certain additive decomposition of estimators under the partitioned model and its submodels.

For convenience of representation, denote

$$\hat{y} = \begin{bmatrix} y \\ b \end{bmatrix}, \quad \hat{X} = \begin{bmatrix} X \\ A \end{bmatrix}, \quad \hat{\varepsilon} = \begin{bmatrix} \varepsilon \\ 0 \end{bmatrix}, \quad \hat{\Sigma} = \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix}, \quad A = \operatorname{diag}(A_1, \ldots, A_k), \quad b = \begin{bmatrix} b_1 \\ \vdots \\ b_k \end{bmatrix}, \tag{3}$$

$$\hat{y}_i = \begin{bmatrix} y \\ 0 \\ \vdots \\ b_i \\ \vdots \\ 0 \end{bmatrix} = \hat{I}_{1i}\hat{y}, \quad \hat{X}_i = \begin{bmatrix} X_i \\ 0 \\ \vdots \\ A_i \\ \vdots \\ 0 \end{bmatrix}, \quad \hat{\varepsilon}_i = \begin{bmatrix} \varepsilon_i \\ 0 \end{bmatrix}, \quad \hat{I}_{1i} = \operatorname{diag}(I_n, 0, \ldots, I_{m_i}, \ldots, 0), \tag{4}$$

$$Y_i = [0, \ldots, 0, X_i, 0, \ldots, 0], \quad Z_i = [X_1, \ldots, X_{i-1}, 0, X_{i+1}, \ldots, X_k], \tag{5}$$

$$\hat{Y}_i = [0, \ldots, 0, \hat{X}_i, 0, \ldots, 0], \quad \hat{Z}_i = [\hat{X}_1, \ldots, \hat{X}_{i-1}, 0, \hat{X}_{i+1}, \ldots, \hat{X}_k] \tag{6}$$

for i = 1, …, k. In this setting,

$$X = Y_i + Z_i = Y_1 + \cdots + Y_k, \quad \hat{X} = \hat{Y}_i + \hat{Z}_i = \hat{Y}_1 + \cdots + \hat{Y}_k, \tag{7}$$

$$X_i\beta_i = Y_i\beta, \quad \hat{X}_i\beta_i = \hat{Y}_i\beta \tag{8}$$

for i = 1, …, k.
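As a concrete illustration of this notation, the following small numerical sketch (with assumed toy dimensions k = 2, n = 4, p1 = p2 = m1 = m2 = 1; all variable names are ours, not from the paper) builds ŷ, X̂, and Σ̂ of (3) and checks the first identity in (7):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X1, X2 = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
X = np.hstack([X1, X2])                       # X = [X1, X2]
A1, A2 = np.array([[1.0]]), np.array([[2.0]])
A = np.block([[A1, np.zeros((1, 1))],
              [np.zeros((1, 1)), A2]])        # A = diag(A1, A2)
b = np.array([1.0, 0.5])                      # b = [b1', b2']'
y = rng.standard_normal(n)

y_hat = np.concatenate([y, b])                # y^ = [y', b']' as in (3)
X_hat = np.vstack([X, A])                     # X^ = [X', A']' as in (3)
Sigma_hat = np.zeros((n + 2, n + 2))
Sigma_hat[:n, :n] = np.eye(n)                 # Sigma^ = diag(Sigma, 0), here Sigma = I_n

Y1 = np.hstack([X1, np.zeros((n, 1))])        # Y_i of (5): only block i is kept
Z1 = np.hstack([np.zeros((n, 1)), X2])        # Z_i of (5): block i is zeroed
assert np.allclose(X, Y1 + Z1)                # first identity in (7)
```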

CGLMs are usually handled by transforming them into certain implicitly constrained models. The most popular transformations are based on model reduction, Lagrangian multipliers, or general solutions of matrix equations through generalized inverses of matrices. A well-known method of incorporating equality constraints in CGLMs is to merge the equations in 𝓜 into an implicitly restricted model

$$\hat{\mathscr{M}}: \; \hat{y} = \hat{X}\beta + \hat{\varepsilon} = \hat{X}_1\beta_1 + \cdots + \hat{X}_k\beta_k + \hat{\varepsilon}, \quad E(\hat{\varepsilon}) = 0, \; D(\hat{\varepsilon}) = \sigma^2\hat{\Sigma}. \tag{9}$$

Also, merging the equations in 𝓜i yields

$$\hat{\mathscr{M}}_i: \; \hat{y}_i = \hat{X}_i\beta_i + \hat{\varepsilon}_i, \quad E(\hat{\varepsilon}_i) = 0, \; D(\hat{\varepsilon}_i) = \sigma^2\hat{\Sigma}, \quad i = 1, \ldots, k. \tag{10}$$

Linear regression analysis is one of the most widely used statistical methods, and linear models were the first type of regression models to be studied extensively in regression analysis; they have had a profound impact and play a central role in both theoretical and applied statistical science, see e.g. [1, 2, 3]. It has a long history in regression analysis to rewrite linear models in certain partitioned forms, and then to make estimation and statistical inference under the partitioned linear models. One of the main objectives in the statistical inference of linear models is to establish various estimators of the parameter spaces in the models and to characterize the mathematical and statistical properties of these estimators under various model assumptions. In this approach statisticians are often interested in the connections between different estimators, and especially in establishing possible equalities between estimators. There have been various attempts to establish additive decomposition equalities for estimators under linear models. Under the assumptions in (9) and (10), it is natural to consider relations among the best linear unbiased estimators (BLUEs) of X̂β in (9) and of X̂iβi in (9) and (10). In this paper, we first prove that under the assumption that X1β1, …, Xkβk and X̂1β1, …, X̂kβk are estimable in (9), the BLUEs of Xβ and X̂β in M̂ admit the following two additive decomposition identities

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(X\beta) = \mathrm{BLUE}_{\hat{\mathscr{M}}}(X_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}}(X_k\beta_k), \tag{11}$$

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}\beta) = \mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_k\beta_k). \tag{12}$$

In view of the above observations, we propose the following two additive decomposition equalities for the BLUEs of Xβ and X^β in M^:

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(X\beta) = \mathrm{BLUE}_{\hat{\mathscr{M}}_1}(X_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}_k}(X_k\beta_k), \tag{13}$$

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}\beta) = \mathrm{BLUE}_{\hat{\mathscr{M}}_1}(\hat{X}_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}_k}(\hat{X}_k\beta_k), \tag{14}$$

and then derive identifying conditions for the equalities to hold, respectively. These estimator decomposition identities have many different statistical interpretations and are not uncommon in the statistical analysis of CGLMs. The problem of additive decompositions of BLUEs under general linear models was approached in [4, 5]. Zhang and Tian [6] recently investigated the above two decomposition identities for k = 2 by using some effective algebraic methods for dealing with additive decompositions of matrix expressions and ranks/ranges of matrices.

Before proceeding, we introduce the notation used in this paper. ℝm×n stands for the collection of all m × n real matrices. The symbols A′, r(A), and 𝓡(A) stand for the transpose, the rank, and the range (column space) of a matrix A ∈ ℝm×n, respectively; Im denotes the identity matrix of order m. The Moore–Penrose inverse of A, denoted by A⁺, is defined to be the unique solution G satisfying the four matrix equations AGA = A, GAG = G, (AG)′ = AG, and (GA)′ = GA. Further, let PA, EA, and FA stand for the three orthogonal projectors (symmetric idempotent matrices) PA = AA⁺, EA = A⊥ = Im − AA⁺, and FA = In − A⁺A. Two symmetric matrices A and B of the same size are said to satisfy the inequality A ≽ B in the Löwner partial ordering if A − B is nonnegative definite. Further information about the orthogonal projectors PA, EA, and FA and their applications in linear statistical models can be found in [7, 8, 9]. It is well known that the Löwner partial ordering is a surprisingly strong and useful property between two symmetric matrices; for more results about the Löwner partial ordering of symmetric matrices and its applications in statistical analysis see, e.g., [8]. Generalized inverses of matrices are common tools for dealing with singular matrices; they are now a fruitful and core part of current matrix theory and have a profound impact in the field of statistics.
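Since the Moore–Penrose inverse and the projectors PA, EA, and FA are used throughout, a brief numerical sanity check may help; the following sketch (our own illustration, using numpy's pinv) verifies the four Penrose equations and the projector properties for a rank-deficient A:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # 5 x 4, rank 2
G = np.linalg.pinv(A)                                          # A^+

# the four Penrose equations defining A^+
assert np.allclose(A @ G @ A, A)
assert np.allclose(G @ A @ G, G)
assert np.allclose((A @ G).T, A @ G)
assert np.allclose((G @ A).T, G @ A)

P_A = A @ G                        # orthogonal projector onto R(A)
E_A = np.eye(5) - P_A              # projector onto the orthogonal complement of R(A)
F_A = np.eye(4) - G @ A
assert np.allclose(P_A @ P_A, P_A) and np.allclose(E_A @ A, 0)
assert np.allclose(A @ F_A, 0)
```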

2 Some preliminaries in linear algebra

Statistical inference for linear models, as is well known, is entirely based on computations with the given vectors and matrices in the models, and formulas and algebraic tricks for handling matrices in linear algebra and matrix theory play an important role in the derivation of estimators and the characterization of their performance. Because BLUEs of parameter spaces in linear models are calculated from the given matrices and vectors in the models and are often represented by certain formulas composed of these matrices and vectors, the approach we take to the above problems is in fact to establish and characterize matrix equalities composed of matrices and their generalized inverses; thus we need many influential and effective mathematical tools in order to characterize the above equalities of estimators and their covariance matrices under CGLMs. As remarked in [10], a good starting point for the entry of matrices into statistics was in the 1930s, while it is now a routine procedure to use the given vectors, matrices, and their generalized inverses in statistical models to formulate various estimators of parameter spaces in linear models and to make the corresponding statistical inferences.

As the study of additive decompositions of estimators in the context of linear regression models requires more effective mathematical analysis tools, it leads to algebraic questions that overlap with the precise description and characterization of matrix decomposition identities in linear algebra. The scope of this section is to introduce various formulas for ranks of matrices suitable for establishing and characterizing various possible equalities for estimators under CGLMs. Recall that the rank of a matrix is a conceptual foundation of matrix theory and is the most significant finite nonnegative integer reflecting intrinsic properties of matrices, while the mathematical prerequisites for understanding the rank of a matrix are minimal and do not go beyond elementary linear algebra. The intriguing connections between generalized inverses of matrices and matrix rank formulas were recognized in the 1970s, and a seminal work on establishing formulas for calculating ranks of matrices and their generalized inverses was presented in [11]. It has been known that matrix rank formulas are direct and effective tools for simplifying matrix expressions and equalities. The whole work in this paper is based on the effective use of the matrix rank methodology (MRM), which is a set of quantitative description techniques that encompass:

  1. establishing non-trivial analytical formulas for calculating the maximum and minimum ranks of a matrix expression, and using the ranks to determine the singularity and nonsingularity of the matrix expression, the rank invariance of the matrix expression, the dimension of the row/column space of the matrix expression;

  2. establishing formulas for calculating the rank of the difference of two matrix expressions, and using them to derive necessary and sufficient conditions for the two matrix expressions to be equal, i.e., proving matrix equality by matrix rank formulas;

  3. characterizing relations between two linear subspaces, or two matrix sets by matrix rank formulas.

The above assertions show that there are important and peculiar consequences of establishing various formulas for calculating ranks of matrices from a theoretical point of view. Thus, the MRM in fact provides us with a specified algebraic framework for tackling matrix expressions and matrix equalities, and gives a glimpse into a very broad and interesting field of matrix mathematics. But it was not until a few decades ago that the MRM was essentially recognized as an effective and influential tool in the field of mathematics and was extensively applied in matrix theory and its applications. Because matrices are common objects in linear regression analysis, the MRM has greatly extended from the domain of matrix theory into statistical areas; some seminal work on the fundamental theory of the MRM and its applications in statistics can be found, e.g., in [11, 12, 13]. Some recent work on the MRM in the analysis of additive decompositions of BLUEs under linear models was presented in [4, 5, 6], while some contributions on the MRM in the statistical analysis of CGLMs can be found in [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24].

In order to establish and characterize various possible equalities for estimators in the context of linear models and to simplify various matrix equalities composed by Moore–Penrose inverses of matrices, we will need the following well-known rank formulas involving Moore–Penrose inverses to make the paper self-contained.

Lemma 2.1

([11]). Let A ∈ ℝm×n, B ∈ ℝm×k, C ∈ ℝl×n, and D ∈ ℝl×k. Then

$$r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A), \tag{15}$$

$$r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) + r(CF_A) = r(C) + r(AF_C), \tag{16}$$

$$r\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} = r(B) + r(C) + r(E_B A F_C), \tag{17}$$

$$r\begin{bmatrix} AA' & B \\ B' & 0 \end{bmatrix} = r[A, B] + r(B). \tag{18}$$

If 𝓡(B) ⊆ 𝓡(A) and 𝓡(C) ⊆ 𝓡(A), then

$$r\begin{bmatrix} A & B \\ C & D \end{bmatrix} = r(A) + r(D - CA^{+}B). \tag{19}$$

Furthermore, the following results hold.

  1. r[A, B] = r(A) ⇔ 𝓡(B) ⊆ 𝓡(A) ⇔ AA⁺B = B ⇔ EAB = 0.

  2. $r\begin{bmatrix} A \\ C \end{bmatrix} = r(A)$ ⇔ 𝓡(C′) ⊆ 𝓡(A′) ⇔ CA⁺A = C ⇔ CFA = 0.

  3. r[A, B] = r(A) + r(B) ⇔ 𝓡(A) ∩ 𝓡(B) = {0} ⇔ 𝓡[(EAB)′] = 𝓡(B′) ⇔ 𝓡[(EBA)′] = 𝓡(A′).

  4. $r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) + r(C)$ ⇔ 𝓡(A′) ∩ 𝓡(C′) = {0} ⇔ 𝓡(CFA) = 𝓡(C) ⇔ 𝓡(AFC) = 𝓡(A).

Lemma 2.2

([25]). Suppose that 𝓡(A) ⊆ 𝓡(B1), 𝓡(C2) ⊆ 𝓡(C1), 𝓡(A′) ⊆ 𝓡(C1′), and 𝓡(B2′) ⊆ 𝓡(B1′). Then

$$r(B_2 B_1^{+} A C_1^{+} C_2) = r\begin{bmatrix} A & B_1 & 0 \\ C_1 & 0 & C_2 \\ 0 & B_2 & 0 \end{bmatrix} - r(B_1) - r(C_1). \tag{20}$$

Lemma 2.3

([26]). The linear matrix equation AX = B is consistent if and only if r[A, B] = r(A), or equivalently, AA⁺B = B. In this case, the general solution of the equation can be written as X = A⁺B + (I − A⁺A)U, where U is an arbitrary matrix.
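Lemma 2.3 can likewise be illustrated numerically; in the sketch below (our own toy example) B is built inside 𝓡(A) so that AX = B is consistent, and a solution of the stated general form is verified:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # singular 5 x 4
Ap = np.linalg.pinv(A)
B = A @ rng.standard_normal((4, 3))          # forces R(B) inside R(A)
assert np.allclose(A @ Ap @ B, B)            # consistency criterion of Lemma 2.3

U = rng.standard_normal((4, 3))              # arbitrary matrix
X = Ap @ B + (np.eye(4) - Ap @ A) @ U        # general-form solution
assert np.allclose(A @ X, B)
```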

With the support of the formulas in Lemmas 2.1–2.3, we are able to convert the problems in (11)–(14) into certain algebraic problems of characterizing matrix equalities composed of the given matrices in the models and their generalized inverses, and to derive analytical solutions of the problems by using the methods of matrix equations, matrix rank formulas, and various skillful partitioned matrix calculations.

3 Estimability of parameter spaces under CGLMs

We take σ2 = 1 in (1)–(10) for convenience of presentation below, because it plays no role in the main results of this paper. In what follows, we assume that the model in (9) is consistent, i.e.,

$$\hat{y} \in \mathscr{R}[\hat{X}, \hat{\Sigma}] \;\text{ holds with probability 1;} \tag{21}$$

see [27, 28]. We next introduce the definitions of the estimability of parameter spaces in CGLMs.

Definition 3.1

Let M̂ be as given in (9) and let K ∈ ℝk×p be given. Then the vector Kβ of unknown parameters is said to be estimable under M̂ if there exists a linear statistic Lŷ, where L ∈ ℝk×(n+m), such that E(Lŷ) = LX̂β = Kβ holds under M̂.

It is well known in statistical theory that the unbiasedness of linear statistics with respect to given parameter spaces in linear models is an important property. Considerable literature exists on the estimability of parameter spaces in linear models; see e.g. [29, 30, 31, 32, 33, 34, 35, 36, 37, 38] for some excellent expositions. We next present some classic and new results on the estimability of the parameter space in (9) and give their proofs.

Lemma 3.2

([29]). Let M^ be as given in (9) and let K ∈ ℝk×p be given. Then, the following results hold.

  1. Kβ is estimable under M̂ ⇔ 𝓡(K′) ⊆ 𝓡(X̂′) ⇔ $r\begin{bmatrix} \hat{X} \\ K \end{bmatrix} = r(\hat{X})$.

  2. The following statements are equivalent:

    1. Xiβi = Yiβ is estimable under M^ , i = 1, …, k.

    2. X^iβi=Y^iβ is estimable under M^ , i = 1, …, k.

    3. 𝓡(Yi′) ⊆ 𝓡(X̂′), i = 1, …, k.

    4. 𝓡(Ŷi′) ⊆ 𝓡(X̂′), i = 1, …, k.

    5. 𝓡(Ŷi) ∩ 𝓡(Ẑi) = {0}, i = 1, …, k.

    6. r(X̂) = r(Ŷi) + r(Ẑi), i = 1, …, k.

Lemma 3.3

Let M^ be as given in (9). Then, the following statements are equivalent:

  1. All X1β1, …, Xkβk are estimable under M^ .

  2. All X^1β1,,X^kβk are estimable under M^ .

  3. r(X̂) = r(X̂1) + ⋯ + r(X̂k).

Proof

It is obvious from (7) that

$$r(\hat{X}) \le r(\hat{Y}_i) + r(\hat{Z}_i) \le r(\hat{X}_1) + \cdots + r(\hat{X}_k), \quad i = 1, \ldots, k. \tag{22}$$

Hence if (c) holds, we obtain from (22) that

$$r(\hat{X}) = r(\hat{Y}_1) + r(\hat{Z}_1) = \cdots = r(\hat{Y}_k) + r(\hat{Z}_k), \tag{23}$$

which means that (a) and (b) hold by Lemma 3.2. The equivalence of (c) and (23) can be proved by induction; we leave it to the reader.□

Lemma 3.3(c) is easily verifiable for a given model matrix. In particular, it is satisfied under the condition r(X̂) = p.
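For instance, the following small numerical sketch (assumed toy blocks, k = 2, all names ours) confirms Lemma 3.3(c) in a case where r(X̂) = p:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
X1, X2 = rng.standard_normal((n, 2)), rng.standard_normal((n, 1))
A1, A2 = np.array([[1.0, -1.0]]), np.array([[1.0]])

X_hat1 = np.vstack([X1, A1, np.zeros((1, 2))])   # X^_1 = [X1', A1', 0]' as in (4)
X_hat2 = np.vstack([X2, np.zeros((1, 1)), A2])   # X^_2 = [X2', 0, A2']'
X_hat = np.hstack([X_hat1, X_hat2])              # X^ = [X^_1, X^_2]

r = np.linalg.matrix_rank
assert r(X_hat) == X_hat.shape[1]                # r(X^) = p = 3 here
assert r(X_hat) == r(X_hat1) + r(X_hat2)         # Lemma 3.3(c)
```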

4 BLUEs’ computations

Theoretical and applied research on a CGLM seeks to develop various possible estimators of the parameter space in the CGLM. When there exist unbiased estimators for a given parameter space, there are usually many of them. Thus, it is natural to seek the unbiased estimator that has the smallest dispersion matrix among all unbiased estimators; that is to say, unbiasedness and a smallest dispersion matrix are the most intrinsic requirements on estimators in statistical analysis and inference. The concepts of BLUEs of parameter spaces in the context of (1)–(10) are given below.

Definition 4.1

Let M^ be as given in (9), and assume that Kβ is estimable under M^ for K ∈ ℝk×p. If there exists an L ∈ ℝk×(m+n) such that

$$E(L\hat{y} - K\beta) = 0 \quad \text{and} \quad D(L\hat{y} - K\beta) = \min \tag{24}$$

hold in the Löwner partial ordering, the linear statistic L y^ is defined to be the BLUE of Kβ under M^ , and is denoted by

$$L\hat{y} = \mathrm{BLUE}_{\hat{\mathscr{M}}}(K\beta). \tag{25}$$

If K = X or X̂ in (24), then the Lŷ satisfying (24) are the BLUEs of Xβ and X̂β under M̂, respectively, and are denoted by Lŷ = BLUEM̂(Xβ) and Lŷ = BLUEM̂(X̂β), respectively.

Estimators of the parameter spaces in linear models are usually formulated from mathematical operations of the observed response vectors, the given model matrices, and the covariance matrices of the error terms in the models. Hence, the standard inference theory of linear statistical models can be established from the exact algebraic expressions of estimators, which is easily acceptable from both mathematical and statistical points of view. In fact, linear statistical models are the only type of statistical models that have complete and solid support from linear algebra and matrix theory. Observing that (9) is a special case of GLMs, the following lemma follows from the well-known results on the BLUEs under linear models; see e.g. [28, p. 282] and [39, p. 55].

Lemma 4.2

Let M^ be as given in (9), assume that Kβ is estimable under M^ for K ∈ ℝk×p, and denote t = n + m. Then, the following results hold.

  1. The following implication

$$E(L\hat{y} - K\beta) = 0 \;\text{ and }\; D(L\hat{y} - K\beta) = \min \;\Longleftrightarrow\; L[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] = [K, 0] \tag{26}$$

    holds. The matrix equation on the right-hand side of (26) is consistent, i.e.,

$$[K, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] = [K, 0] \tag{27}$$

    holds under 𝓡(K′) ⊆ 𝓡(X̂′), while the general solution of the matrix equation, denoted by PK;X̂;Σ̂, and the corresponding BLUE of Kβ under M̂ can be written as

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(K\beta) = P_{K;\hat{X};\hat{\Sigma}}\,\hat{y} = \big([K, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+} + U[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{\perp}\big)\hat{y}, \tag{28}$$

    where U ∈ ℝk×t is arbitrary.

  2. Xβ is always estimable under M^ , and the general expression of BLUE of Xβ under M^ can be written as

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(X\beta) = P_{X;\hat{X};\hat{\Sigma}}\,\hat{y} = \big([X, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+} + U[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{\perp}\big)\hat{y}, \tag{29}$$

    where U ∈ ℝn×t is arbitrary.

  3. X̂β is always estimable under M̂, and the general expression of the BLUE of X̂β under M̂ can be written as

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}\beta) = P_{\hat{X};\hat{\Sigma}}\,\hat{y} = \big([\hat{X}, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+} + V[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{\perp}\big)\hat{y}, \tag{30}$$

    where V ∈ ℝt×t is arbitrary.

  4. [8, p. 123] 𝓡[X̂, Σ̂X̂⊥] = 𝓡[X̂, Σ̂], r[X̂, Σ̂X̂⊥] = r[X̂, Σ̂], and 𝓡(X̂) ∩ 𝓡(Σ̂X̂⊥) = {0}.

  5. PK;X^;Σ^ is unique if and only if r[X^,Σ^]=t.

  6. BLUEM^ (Kβ) is unique with probability 1 if and only if M^ is consistent.

Note that BLUEs of unknown parameters in linear models are defined by the joint requirements of unbiasedness and smallest covariance matrix among linear statistics. In order to reveal deeper and more fundamental properties of BLUEs of the unknown parameters in (1), observe that the whole and partial mean vectors Xβ, X̂β, Xiβi, and X̂iβi are the natural components in (1)–(10), while the BLUEs of Xβ and X̂β always exist under (9), as demonstrated in Lemma 4.2.
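To make Lemma 4.2 concrete, the sketch below (an assumed full-rank toy example, with U = 0 in (29) and X̂⊥ computed as I − X̂X̂⁺; all names are ours) forms the matrix [X̂, Σ̂X̂⊥] and checks that L = [X, 0][X̂, Σ̂X̂⊥]⁺ satisfies the defining equation on the right-hand side of (26):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, m = 6, 3, 1
X = rng.standard_normal((n, p))
A = np.array([[1.0, 1.0, 1.0]])              # one restriction A*beta = b
Sigma = np.eye(n)
X_hat = np.vstack([X, A])
t = n + m
Sigma_hat = np.zeros((t, t)); Sigma_hat[:n, :n] = Sigma

X_perp = np.eye(t) - X_hat @ np.linalg.pinv(X_hat)   # X^perp = E_{X^}
W = np.hstack([X_hat, Sigma_hat @ X_perp])           # [X^, Sigma^ X^perp]
L = np.hstack([X, np.zeros((n, t))]) @ np.linalg.pinv(W)

assert np.allclose(L @ X_hat, X)                     # unbiasedness part of (26)
assert np.allclose(L @ (Sigma_hat @ X_perp), 0)      # the "0" part of (26)
```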

5 Direct additive decompositions of BLUEs under a CGLM

In this section, we give the analytical expressions of the BLUEs of Xiβi and X^ iβi of interest, and present some of their statistical properties.

Theorem 5.1

Let M^ be as given in (9), and assume that Xiβi and X^ iβi are estimable under M^ , i = 1, …, k. Then, the following results hold.

  1. The BLUEs of Xiβi under M^ can be written as

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(X_i\beta_i) = P_{Y_i;\hat{X};\hat{\Sigma}}\,\hat{y} = \big([Y_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+} + U_i[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{\perp}\big)\hat{y} \tag{31}$$

    with

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}}(X_i\beta_i)\big] = X_i\beta_i, \tag{32}$$

$$\mathrm{Cov}\big[\mathrm{BLUE}_{\hat{\mathscr{M}}}(X_i\beta_i)\big] = \big([Y_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)\hat{\Sigma}\big([Y_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)', \tag{33}$$

$$\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}}(X_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}}(X_j\beta_j)\big\} = \big([Y_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)\hat{\Sigma}\big([Y_j, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)', \tag{34}$$

    where Ui ∈ ℝn×t is arbitrary, i, j = 1, …, k.

  2. The BLUEs of X^ iβi under M^ can be written as

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_i\beta_i) = P_{\hat{Y}_i;\hat{X};\hat{\Sigma}}\,\hat{y} = \big([\hat{Y}_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+} + V_i[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{\perp}\big)\hat{y} \tag{35}$$

    with

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_i\beta_i)\big] = \hat{X}_i\beta_i, \tag{36}$$

$$\mathrm{Cov}\big[\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_i\beta_i)\big] = \big([\hat{Y}_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)\hat{\Sigma}\big([\hat{Y}_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)', \tag{37}$$

$$\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_j\beta_j)\big\} = \big([\hat{Y}_i, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)\hat{\Sigma}\big([\hat{Y}_j, 0][\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]^{+}\big)', \tag{38}$$

    where Vi ∈ ℝt×t is arbitrary, i, j = 1, …, k.

  3. The following two decomposition identities hold

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(X\beta) = \mathrm{BLUE}_{\hat{\mathscr{M}}}(X_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}}(X_k\beta_k), \tag{39}$$

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}\beta) = \mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_k\beta_k). \tag{40}$$

Proof

Results (a) and (b) follow directly from (8) and (28) by letting K = Yi and K = Ŷi, respectively. Result (c) follows directly from (7), (29), and (30).□

In what follows, we use { BLUEM^ (Kβ)} to denote the collection of all BLUEM^ (Kβ) in (28).
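The decomposition (39) can also be observed numerically; in the sketch below (assumed toy data with k = 2, taking Ui = 0 in (31); all names ours) the coefficient matrices of the BLUEs of X1β1 and X2β2 under M̂ sum exactly to that of the BLUE of Xβ, since X = Y1 + Y2 and (31) is linear in [Yi, 0]:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 6, 2
X1, X2 = rng.standard_normal((n, 2)), rng.standard_normal((n, 1))
X = np.hstack([X1, X2])
A = np.block([[np.array([[1.0, -1.0]]), np.zeros((1, 1))],
              [np.zeros((1, 2)), np.array([[1.0]])]])   # A = diag(A1, A2)
X_hat = np.vstack([X, A]); t = n + m
Sigma_hat = np.zeros((t, t)); Sigma_hat[:n, :n] = np.eye(n)
X_perp = np.eye(t) - X_hat @ np.linalg.pinv(X_hat)
Wp = np.linalg.pinv(np.hstack([X_hat, Sigma_hat @ X_perp]))

Y1 = np.hstack([X1, np.zeros((n, 1))])       # Y_i as in (5)
Y2 = np.hstack([np.zeros((n, 2)), X2])
P  = np.hstack([X,  np.zeros((n, t))]) @ Wp  # coefficient of BLUE(X beta), (29), U = 0
P1 = np.hstack([Y1, np.zeros((n, t))]) @ Wp  # coefficient of BLUE(X_1 beta_1), (31)
P2 = np.hstack([Y2, np.zeros((n, t))]) @ Wp
assert np.allclose(P, P1 + P2)               # the decomposition (39)
```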

6 Additive decompositions of BLUEs under a full CGLM and its submodels

For convenience of representation, we adopt the notation in this section.

$$\tilde{X} = \operatorname{diag}(X_1, \ldots, X_k), \quad \tilde{\Sigma} = \operatorname{diag}(\Sigma, \ldots, \Sigma), \quad \hat{I}_n = [I_n, 0, \ldots, 0]', \tag{41}$$

$$Z = \begin{bmatrix} 0 & X_2 & \cdots & X_k \\ X_1 & 0 & \cdots & X_k \\ \vdots & \vdots & \ddots & \vdots \\ X_1 & X_2 & \cdots & 0 \end{bmatrix}, \quad S_i = \begin{bmatrix} \Sigma & X_i \\ 0 & A_i \\ Z_i' & 0 \end{bmatrix}, \quad T_i = \begin{bmatrix} \Sigma & X_i & Z_i \\ 0 & A_i & 0 \end{bmatrix}, \quad i = 1, \ldots, k, \tag{42}$$

$$S = \begin{bmatrix} \tilde{\Sigma} & \tilde{X} \\ 0 & A \\ Z' & 0 \end{bmatrix}, \quad T = \begin{bmatrix} \tilde{\Sigma} & \tilde{X} & Z \\ 0 & A & 0 \end{bmatrix}, \quad V_{ij} = \begin{bmatrix} \Sigma & 0 & X_i \\ 0 & 0 & A_i \\ X_j' & A_j' & 0 \end{bmatrix}, \quad i \ne j, \; i, j = 1, \ldots, k. \tag{43}$$

The misspecified BLUEs under the submodels in (10) are given below.

Lemma 6.1

Let M^ i be as given in (10), i = 1, …, k. Then, the misspecified BLUEs of Xiβi and X^ iβi under the k submodels in M^ i are

$$\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i) = P_{X_i;\hat{X}_i;\hat{\Sigma}}\,\hat{y}_i = P_{X_i;\hat{X}_i;\hat{\Sigma}}\,\hat{I}_{1i}\hat{y}, \tag{44}$$

$$\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i) = P_{\hat{X}_i;\hat{\Sigma}}\,\hat{y}_i = P_{\hat{X}_i;\hat{\Sigma}}\,\hat{I}_{1i}\hat{y}, \tag{45}$$

where

$$P_{X_i;\hat{X}_i;\hat{\Sigma}} = [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+} + H_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}, \quad P_{\hat{X}_i;\hat{\Sigma}} = [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+} + G_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp},$$

Hi ∈ ℝn×t and Gi ∈ ℝt×t are arbitrary matrices, i = 1, …, k.

It should be pointed out that under the assumptions in (9), the k submodels in (10) are misspecified versions of (9). Consequently, the estimators in (44) and (45), although BLUEs under the submodels in (10), are not true BLUEs of Xiβi and X̂iβi under (9); that is to say, they are neither unbiased for Xiβi and X̂iβi under (9), nor do they have the smallest covariance matrices in the Löwner sense. The sums of these BLUEs may, however, be the BLUEs of Xβ and X̂β under some conditions. In this section, we derive some algebraic and statistical properties of the BLUEs under (9) and (10), and then give necessary and sufficient conditions for the equalities in (13) and (14) to hold. Although the results in the last section present exact formulas of BLUEs under various assumptions, we have to pay more attention to the mathematical manipulations hidden behind the BLUE formulas in order to establish the connections among the BLUEs. During this process, many skillful calculations of matrix ranks and elementary block matrix operations will be conducted in establishing and simplifying matrix equalities and expressions.
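The bias phenomenon described above is easy to reproduce numerically; in the following sketch (an assumed toy example with k = 2 and H1 = 0 in (44); all names ours) the bias factor PX1;X̂1;Σ̂ÎnZ1 appearing in (46) below is nonzero:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m1, m2 = 6, 1, 1
t = n + m1 + m2
X1, X2 = rng.standard_normal((n, 2)), rng.standard_normal((n, 1))
A1 = np.array([[1.0, -1.0]])
Sigma_hat = np.zeros((t, t)); Sigma_hat[:n, :n] = np.eye(n)

X_hat1 = np.vstack([X1, A1, np.zeros((m2, 2))])      # X^_1 as in (4)
X1_perp = np.eye(t) - X_hat1 @ np.linalg.pinv(X_hat1)
W1 = np.hstack([X_hat1, Sigma_hat @ X1_perp])
P1 = np.hstack([X1, np.zeros((n, t))]) @ np.linalg.pinv(W1)  # (44) with H_1 = 0

I_n_hat = np.vstack([np.eye(n), np.zeros((m1 + m2, n))])     # I^_n of (41)
Z1 = np.hstack([np.zeros((n, 2)), X2])                       # Z_1 as in (5)
bias_factor = P1 @ I_n_hat @ Z1              # the factor multiplying beta in (46)
assert np.linalg.norm(bias_factor) > 1e-8    # generically nonzero: biased under (9)
```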

The following theorem gives a variety of properties of the two estimators in Lemma 6.1.

Theorem 6.2

Let M^ i be as given in (10), i = 1, …, k, and let BLUEM^i (Xiβi) and BLUEM^i(X^iβi) be as given in (44) and (45), respectively, i = 1, …, k. Then, the following results hold.

  1. Under (9), the expectations of BLUEM^i (Xiβi) and their sum are given by

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i)\big] = X_i\beta_i + P_{X_i;\hat{X}_i;\hat{\Sigma}}\,\hat{I}_n Z_i\beta, \quad i = 1, \ldots, k, \tag{46}$$

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_1}(X_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}_k}(X_k\beta_k)\big] = X\beta + \big[P_{X_1;\hat{X}_1;\hat{\Sigma}}\hat{I}_n, \ldots, P_{X_k;\hat{X}_k;\hat{\Sigma}}\hat{I}_n\big]Z\beta. \tag{47}$$
  2. Under (9), the expectations of BLUEM^i(X^iβi) and their sum are

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i)\big] = \hat{X}_i\beta_i + P_{\hat{X}_i;\hat{\Sigma}}\,\hat{I}_n Z_i\beta, \quad i = 1, \ldots, k, \tag{48}$$

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_1}(\hat{X}_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}_k}(\hat{X}_k\beta_k)\big] = \hat{X}\beta + \big[P_{\hat{X}_1;\hat{\Sigma}}\hat{I}_n, \ldots, P_{\hat{X}_k;\hat{\Sigma}}\hat{I}_n\big]Z\beta. \tag{49}$$
  3. The following statements are equivalent:

    1. There exists a BLUEM^i (Xiβi) such that

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i)\big] = X_i\beta_i, \quad i = 1, \ldots, k. \tag{50}$$
    2. There exists a BLUEM^i(X^iβi) such that

$$E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i)\big] = \hat{X}_i\beta_i, \quad i = 1, \ldots, k. \tag{51}$$
    3. r(Si) = r(Ti), i = 1, …, k.

  4. The covariance matrix between BLUEM^i (Xiβi) and BLUEM^j (Xjβj) is

$$\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}_j}(X_j\beta_j)\big\} = [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{\Sigma}\big([X_j, 0][\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}]^{+}\big)', \tag{52}$$

$$r\,\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}_j}(X_j\beta_j)\big\} = r(V_{ij}) - r[\hat{X}_i, \hat{\Sigma}] - r[\hat{X}_j, \hat{\Sigma}] + r(\Sigma) \tag{53}$$

    for i ≠ j, i, j = 1, …, k.

  5. The covariance matrix between BLUEM^i(X^iβi)andBLUEM^j(X^jβj) is

$$\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}_j}(\hat{X}_j\beta_j)\big\} = [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{\Sigma}\big([\hat{X}_j, 0][\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}]^{+}\big)', \tag{54}$$

$$r\,\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}_j}(\hat{X}_j\beta_j)\big\} = r(V_{ij}) - r[\hat{X}_i, \hat{\Sigma}] - r[\hat{X}_j, \hat{\Sigma}] + r(\Sigma) \tag{55}$$

    for i ≠ j, i, j = 1, …, k.

  6. The following statements are equivalent:

    1. Some/any pair of BLUEM^i(Xiβi)andBLUEM^j(Xjβj) are uncorrelated, ij, i, j = 1, …, k.

    2. Some/any pair of BLUEM^i(X^iβi)andBLUEM^j(X^jβj) are uncorrelated, ij, i, j = 1, …, k.

    3. r(Vij) = r[X̂i, Σ̂] + r[X̂j, Σ̂] − r(Σ), i ≠ j, i, j = 1, …, k.

  7. Under the assumption that Σ is positive definite, the following statements are equivalent:

    1. Some/any pair of BLUEM^i(Xiβi)andBLUEM^j(Xjβj) are uncorrelated, ij, i, j = 1, …, k.

    2. Some/any pair of BLUEM^i(X^iβi)andBLUEM^j(X^jβj) are uncorrelated, ij, i, j = 1, …, k.

    3. $r\begin{bmatrix} X_j'\Sigma^{-1}X_i & A_j' \\ A_i & 0 \end{bmatrix} = r(A_i) + r(A_j)$, i ≠ j, i, j = 1, …, k.

Proof

It can be derived from (44) that

\begin{align*}
E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i)\big] &= P_{X_i;\hat{X}_i;\hat{\Sigma}}\,\hat{I}_{1i}E(\hat{y}) = P_{X_i;\hat{X}_i;\hat{\Sigma}}\begin{bmatrix} X_1\beta_1 + \cdots + X_k\beta_k \\ 0 \\ \vdots \\ b_i \\ \vdots \\ 0 \end{bmatrix} = P_{X_i;\hat{X}_i;\hat{\Sigma}}\begin{bmatrix} X_1\beta_1 + \cdots + X_k\beta_k \\ 0 \\ \vdots \\ A_i\beta_i \\ \vdots \\ 0 \end{bmatrix} \\
&= P_{X_i;\hat{X}_i;\hat{\Sigma}}\big(\hat{I}_n X_1\beta_1 + \cdots + \hat{X}_i\beta_i + \cdots + \hat{I}_n X_k\beta_k\big) \\
&= P_{X_i;\hat{X}_i;\hat{\Sigma}}\hat{I}_n X_1\beta_1 + \cdots + X_i\beta_i + \cdots + P_{X_i;\hat{X}_i;\hat{\Sigma}}\hat{I}_n X_k\beta_k \\
&= X_i\beta_i + P_{X_i;\hat{X}_i;\hat{\Sigma}}\,\hat{I}_n Z_i\beta, \quad i = 1, \ldots, k,
\end{align*}

thus establishing (46) and (47). From (46), E[BLUEM̂i(Xiβi)] = Xiβi holds if and only if

$$P_{X_i;\hat{X}_i;\hat{\Sigma}}\,\hat{I}_n Z_i = 0, \quad i = 1, \ldots, k. \tag{56}$$

Substituting $P_{X_i;\hat{X}_i;\hat{\Sigma}} = [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+} + H_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}$ into (56) gives

$$[X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_n Z_i + H_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i = 0, \quad i = 1, \ldots, k. \tag{57}$$

It follows from Lemma 2.3 that there exists an Hi such that (57) holds if and only if

$$r\begin{bmatrix} [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_n Z_i \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i \end{bmatrix} = r\big([\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i\big), \quad i = 1, \ldots, k. \tag{58}$$

Applying (15), (16) and simplifying by elementary block matrix operations gives

$$r\big([\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i\big) = r[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}, \hat{I}_n Z_i] - r[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] = r[\hat{X}_i, \hat{\Sigma}, \hat{I}_n Z_i] - r[\hat{X}_i, \hat{\Sigma}] = r\begin{bmatrix} \Sigma & X_i & Z_i \\ 0 & A_i & 0 \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}] = r(T_i) - r[\hat{X}_i, \hat{\Sigma}], \quad i = 1, \ldots, k,$$

and

$$r\begin{bmatrix} [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_n Z_i \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i \end{bmatrix} = r\begin{bmatrix} 0 & [X_i, 0] \\ \hat{I}_n Z_i & [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] = r\begin{bmatrix} \Sigma & X_i \\ 0 & A_i \\ Z_i' & 0 \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}] = r(S_i) - r[\hat{X}_i, \hat{\Sigma}], \quad i = 1, \ldots, k.$$

Substituting these two equalities into (58) leads to the equivalence of (i) and (iii) in (c).

It follows from (45) that

\begin{align*}
E\big[\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i)\big] &= P_{\hat{X}_i;\hat{\Sigma}}\,\hat{I}_{1i}E(\hat{y}) = P_{\hat{X}_i;\hat{\Sigma}}\begin{bmatrix} X_1\beta_1 + \cdots + X_k\beta_k \\ 0 \\ \vdots \\ A_i\beta_i \\ \vdots \\ 0 \end{bmatrix} = P_{\hat{X}_i;\hat{\Sigma}}\big(\hat{I}_n X_1\beta_1 + \cdots + \hat{X}_i\beta_i + \cdots + \hat{I}_n X_k\beta_k\big) \\
&= P_{\hat{X}_i;\hat{\Sigma}}\hat{I}_n X_1\beta_1 + \cdots + \hat{X}_i\beta_i + \cdots + P_{\hat{X}_i;\hat{\Sigma}}\hat{I}_n X_k\beta_k = \hat{X}_i\beta_i + P_{\hat{X}_i;\hat{\Sigma}}\,\hat{I}_n Z_i\beta, \quad i = 1, \ldots, k,
\end{align*}

thus establishing (48) and (49). From (48), E[BLUEM^i(X^iβi)]=X^iβi holds if and only if

$$P_{\hat{X}_i;\hat{\Sigma}}\,\hat{I}_n Z_i = 0, \quad i = 1, \ldots, k. \tag{59}$$

Substituting $P_{\hat{X}_i;\hat{\Sigma}} = [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+} + G_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}$ into (59) gives

$$[\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_n Z_i + G_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i = 0, \quad i = 1, \ldots, k. \tag{60}$$

It follows from Lemma 2.3 that there exists a Gi such that (60) holds if and only if

$$r\begin{bmatrix} [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_n Z_i \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i \end{bmatrix} = r\big([\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i\big), \quad i = 1, \ldots, k. \tag{61}$$

Applying (15), (16), and simplifying, we obtain

$$r\big([\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i\big) = r[\hat{X}_i, \hat{\Sigma}, \hat{I}_n Z_i] - r[\hat{X}_i, \hat{\Sigma}] = r\begin{bmatrix} \Sigma & X_i & Z_i \\ 0 & A_i & 0 \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}] = r(T_i) - r[\hat{X}_i, \hat{\Sigma}], \quad i = 1, \ldots, k,$$

and

$$r\begin{bmatrix} [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_n Z_i \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_n Z_i \end{bmatrix} = r\begin{bmatrix} 0 & [\hat{X}_i, 0] \\ \hat{I}_n Z_i & [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] = r\begin{bmatrix} \Sigma & X_i \\ 0 & A_i \\ Z_i' & 0 \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}] = r(S_i) - r[\hat{X}_i, \hat{\Sigma}], \quad i = 1, \ldots, k.$$

Substituting these two equalities into (61) leads to the equivalence of (ii) and (iii) in (c).

It can be derived from (44) that

\begin{align*}
\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}_j}(X_j\beta_j)\big\} &= P_{X_i;\hat{X}_i;\hat{\Sigma}}\,\hat{\Sigma}\,P_{X_j;\hat{X}_j;\hat{\Sigma}}' \\
&= [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{\Sigma}\big([X_j, 0][\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}]^{+}\big)', \quad i \ne j, \; i, j = 1, \ldots, k,
\end{align*}

as required for (52). Also note that 𝓡([Xi, 0]′) ⊆ 𝓡([X̂i, Σ̂X̂i⊥]′), 𝓡([Xj, 0]′) ⊆ 𝓡([X̂j, Σ̂X̂j⊥]′), 𝓡(Σ̂) ⊆ 𝓡[X̂i, Σ̂X̂i⊥], and 𝓡(Σ̂) ⊆ 𝓡[X̂j, Σ̂X̂j⊥], i ≠ j, i, j = 1, …, k. Applying (20) and simplifying by elementary block matrix operations gives

\begin{align*}
&r\Big([X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{\Sigma}\big([X_j, 0][\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}]^{+}\big)'\Big) \\
&\quad = r\begin{bmatrix} \hat{\Sigma} & [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] & 0 \\ [\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}]' & 0 & [X_j, 0]' \\ 0 & [X_i, 0] & 0 \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] - r[\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}] \\
&\quad = r(V_{ij}) - r[\hat{X}_i, \hat{\Sigma}] - r[\hat{X}_j, \hat{\Sigma}] + r(\Sigma), \quad i \ne j, \; i, j = 1, \ldots, k,
\end{align*}

thus establishing (53). Also from (45),

\begin{align*}
\mathrm{Cov}\big\{\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i), \mathrm{BLUE}_{\hat{\mathscr{M}}_j}(\hat{X}_j\beta_j)\big\} &= P_{\hat{X}_i;\hat{\Sigma}}\,\hat{\Sigma}\,P_{\hat{X}_j;\hat{\Sigma}}' \\
&= [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{\Sigma}\big([\hat{X}_j, 0][\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}]^{+}\big)', \quad i \ne j, \; i, j = 1, \ldots, k,
\end{align*}

as required for (54). Also note that 𝓡([X̂i, 0]′) ⊆ 𝓡([X̂i, Σ̂X̂i⊥]′), 𝓡([X̂j, 0]′) ⊆ 𝓡([X̂j, Σ̂X̂j⊥]′), 𝓡(Σ̂) ⊆ 𝓡[X̂i, Σ̂X̂i⊥], and 𝓡(Σ̂) ⊆ 𝓡[X̂j, Σ̂X̂j⊥], i ≠ j, i, j = 1, …, k. Applying (20) and simplifying by elementary block matrix operations gives

\begin{align*}
&r\Big([\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{\Sigma}\big([\hat{X}_j, 0][\hat{X}_j, \hat{\Sigma}\hat{X}_j^{\perp}]^{+}\big)'\Big) \\
&\quad = r\begin{bmatrix} \hat{\Sigma} & \hat{X}_i \\ \hat{X}_j' & 0 \end{bmatrix} + r(\Sigma) - r[\hat{X}_i, \hat{\Sigma}] - r[\hat{X}_j, \hat{\Sigma}] = r(V_{ij}) - r[\hat{X}_i, \hat{\Sigma}] - r[\hat{X}_j, \hat{\Sigma}] + r(\Sigma), \quad i \ne j, \; i, j = 1, \ldots, k,
\end{align*}

thus establishing (55). Results (f) and (g) are direct consequences of (d) and (e). □
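Condition (iii) of part (g) can be illustrated numerically; the sketch below (assumed toy blocks with orthonormal regressors, so that X2′Σ⁻¹X1 = 0 when Σ = In; all names ours) checks the stated rank identity for the pair (i, j) = (1, 2):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, 3)))
X1, X2 = Q[:, :2], Q[:, 2:3]                 # orthonormal columns, so X2' X1 = 0
A1 = np.array([[1.0, -1.0]])                 # restriction on beta_1
A2 = np.array([[1.0]])                       # restriction on beta_2
Sigma = np.eye(n)

top = np.hstack([X2.T @ np.linalg.inv(Sigma) @ X1, A2.T])
bot = np.hstack([A1, np.zeros((1, 1))])
M = np.vstack([top, bot])                    # [X_j' Sigma^{-1} X_i, A_j'; A_i, 0]

r = np.linalg.matrix_rank
assert r(M) == r(A1) + r(A2)                 # Theorem 6.2(g)(iii)
```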

Concerning the relations between BLUEM̂(Xiβi) and BLUEM̂i(Xiβi), and between BLUEM̂(X̂iβi) and BLUEM̂i(X̂iβi), i = 1, …, k, we have the following conclusions.

Theorem 6.3

Let M^ be as given in (9), and assume that Xiβi and X^i βi are estimable under M^ , and let BLUEM^(Xiβi),BLUEM^(X^iβi),BLUEM^i(Xiβi),andBLUEM^i(X^iβi) be as given in (31), (35), (44), and (45), respectively, i = 1, …, k. Then, the following statements are equivalent:

  1. There exist BLUEM̂(Xiβi) and BLUEM̂i(Xiβi) such that

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(X_i\beta_i) = \mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i), \quad i = 1, \ldots, k. \tag{62}$$
  2. There exist BLUEM̂(X̂iβi) and BLUEM̂i(X̂iβi) such that

$$\mathrm{BLUE}_{\hat{\mathscr{M}}}(\hat{X}_i\beta_i) = \mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i), \quad i = 1, \ldots, k. \tag{63}$$
  3. r(Si) = r(Ti), i = 1, …, k.

Proof

Under the condition that Xiβi is estimable under (9), we see from (26) and (44) that there exist BLUEM̂(Xiβi) and BLUEM̂i(Xiβi) such that (62) holds if and only if PXi;X̂i;Σ̂Î1i in (44) satisfies (26), that is, the matrix equation

$$\big([X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+} + H_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\big)\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] = [Y_i, 0] \tag{64}$$

is solvable for Hi, i = 1, …, k. By Lemma 2.3, (64) is solvable for Hi if and only if

$$r\begin{bmatrix} [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] - [Y_i, 0] \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] \end{bmatrix} = r\big([\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]\big), \quad i = 1, \ldots, k, \tag{65}$$

where

by (15)–(18) and a series of elementary block matrix operations,

$$r\begin{bmatrix} [X_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] - [Y_i, 0] \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] \end{bmatrix} = r\begin{bmatrix} [Y_i, 0] & [X_i, 0] \\ \hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] & [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}] = r(S_i) - r[\hat{X}_i, \hat{\Sigma}], \quad i = 1, \ldots, k,$$

and

$$r\big([\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]\big) = r[\hat{I}_{1i}\hat{X}, \hat{I}_{1i}\hat{\Sigma}\hat{X}^{\perp}, \hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] - r[\hat{X}_i, \hat{\Sigma}] = r\begin{bmatrix} \Sigma & X_i & Z_i \\ 0 & A_i & 0 \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}] \;\; \text{(by (18))} = r(T_i) - r[\hat{X}_i, \hat{\Sigma}], \quad i = 1, \ldots, k.$$

Hence, (65) is equivalent to (c).

Under the condition that X̂iβi is estimable under (9), we see from (26) and (45) that there exist BLUEM̂(X̂iβi) and BLUEM̂i(X̂iβi) such that (63) holds if and only if PX̂i;Σ̂Î1i in (45) satisfies (26), that is, the matrix equation

$$\big([\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+} + G_i[\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\big)\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] = [\hat{Y}_i, 0] \tag{66}$$

is solvable for Gi, i, = 1, …, k. By Lemma 2.3, (66) is solvable for Gi if and only if

$$r\begin{bmatrix} [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] - [\hat{Y}_i, 0] \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] \end{bmatrix} = r\big([\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}]\big), \quad i = 1, \ldots, k, \tag{67}$$

where

by a series of elementary block matrix operations,

$$r\begin{bmatrix} [\hat{X}_i, 0][\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{+}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] - [\hat{Y}_i, 0] \\ [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}]^{\perp}\hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] \end{bmatrix} = r\begin{bmatrix} [\hat{Y}_i, 0] & [\hat{X}_i, 0] \\ \hat{I}_{1i}[\hat{X}, \hat{\Sigma}\hat{X}^{\perp}] & [\hat{X}_i, \hat{\Sigma}\hat{X}_i^{\perp}] \end{bmatrix} - r[\hat{X}_i, \hat{\Sigma}] = r(S_i) - r[\hat{X}_i, \hat{\Sigma}], \quad i = 1, \ldots, k.$$

Hence, (67) is equivalent to (c). □
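Both solvability arguments above rest on the same matrix-rank criterion from Lemma 2.3: an equation of the form H M = D is solvable for H exactly when the rows of D lie in the row space of M, i.e. r([D; M]) = r(M). A minimal numerical sketch of this criterion (plain numpy; `M` and `D` are small illustrative matrices, not the paper's block matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

def solvable_for_H(D, M):
    """H M = D is solvable for H iff the rows of D lie in the
    row space of M, i.e. rank([D; M]) == rank(M)."""
    return np.linalg.matrix_rank(np.vstack([D, M])) == np.linalg.matrix_rank(M)

M = rng.standard_normal((3, 5))
D_good = rng.standard_normal((2, 3)) @ M  # rows constructed inside the row space of M
D_bad = rng.standard_normal((2, 5))       # generic rows: not in the row space of M

print(solvable_for_H(D_good, M), solvable_for_H(D_bad, M))  # True False

# When solvable, H = D M^+ is one explicit solution (Moore-Penrose inverse).
H = D_good @ np.linalg.pinv(M)
print(np.allclose(H @ M, D_good))  # True
```

The matrices in the proofs above carry the same logic, with `M` and `D` replaced by the partitioned expressions in (65) and (67).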

It can be seen from (47) and (49) that, under (1), neither the sum $\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)$ is necessarily unbiased for $X\beta$, nor is the sum $\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)$ necessarily unbiased for $\hat X\beta$. Concerning the unbiasedness of the two sums and the corresponding BLUE decompositions, we have the following general conclusions.

Theorem 6.4

Let $\mathrm{BLUE}_{\hat{\mathscr M}_i}(X_i\beta_i)$ and $\mathrm{BLUE}_{\hat{\mathscr M}_i}(\hat X_i\beta_i)$, i = 1, …, k, be as given in (44) and (45), respectively. Then, the following statements are equivalent:

  1. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(X_i\beta_i)$, i = 1, …, k, such that

    $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)\big)=X\beta.$ (68)
  2. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(\hat X_i\beta_i)$, i = 1, …, k, such that

    $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)\big)=\hat X\beta.$ (69)
  3. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(X_i\beta_i)$, i = 1, …, k, such that

    $\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(X\beta).$ (70)
  4. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(\hat X_i\beta_i)$, i = 1, …, k, such that

    $\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(\hat X\beta).$ (71)
  5. $r(S)-r(T)=r(\hat X_1)+\cdots+r(\hat X_k)-r(\hat X)$.

  6. If all X1β1, …, Xkβk are estimable under (9), then the following statements are equivalent:

    1. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(X_i\beta_i)$, i = 1, …, k, such that $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)\big)=X\beta$.

    2. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(\hat X_i\beta_i)$, i = 1, …, k, such that $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)\big)=\hat X\beta$.

    3. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(X_i\beta_i)$, i = 1, …, k, such that $\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(X\beta)$.

    4. There exist $\mathrm{BLUE}_{\hat{\mathscr M}_i}(\hat X_i\beta_i)$, i = 1, …, k, such that $\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(\hat X\beta)$.

    5. r(S) = r(T).

Proof

It can be derived from (47) that the equality in (a) holds if and only if

$[P_{X_1;\hat X_1;\hat\Sigma}\hat I_n,\ \ldots,\ P_{X_k;\hat X_k;\hat\Sigma}\hat I_n]\,Z=0.$ (72)

Substituting $P_{X_i;\hat X_i;\hat\Sigma}=[X_i,\,0][\hat X_i,\ \hat\Sigma\hat X_i^{\perp}]^{+}+H_i[\hat X_i,\ \hat\Sigma\hat X_i^{\perp}]^{\perp}$ in (44) into (72) gives

$\begin{aligned}[P_{X_1;\hat X_1;\hat\Sigma}\hat I_n,\ \ldots,\ P_{X_k;\hat X_k;\hat\Sigma}\hat I_n]Z&=\big[[X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_n+H_1[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_n+H_k[\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big]Z\\ &=\big[[X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_n,\ \ldots,\ [X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_n\big]Z\\ &\quad+[H_1,\ \ldots,\ H_k]\,\mathrm{diag}\big([\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big)Z=0.\end{aligned}$ (73)

This matrix equation is solvable for [H1, …, Hk] if and only if

$r\!\begin{bmatrix}\big[[X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_n,\ \ldots,\ [X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_n\big]Z\\ \mathrm{diag}\big([\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big)Z\end{bmatrix}=r\!\left(\mathrm{diag}\big([\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big)Z\right),$ (74)

where

r[X1,0][X^1,Σ^X^1]+I^n,,[Xk,0][X^k,Σ^X^k]+I^nZdiag[X^1,Σ^X^1]I^n,,[X^k,Σ^X^k]I^nZ=r[X1,0][X^1,Σ^X^1]+I^n,,[Xk,0][X^k,Σ^X^k]+I^nZ0diagI^n,,I^nZdiag[X^1,Σ^X^1],,[X^k,Σ^X^k]rdiag[X^1,Σ^X^1],,[X^k,Σ^X^k]=r0[X1,0],,[Xk,0]diagI^n,,I^nZdiag[X^1,Σ^X^1],,[X^k,Σ^X^k]rdiag[X^1,Σ^],,[X^k,Σ^]=rΣ~0X~Z00A0X~A0000X0r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]=rΣ~00Z00A0X~A0000X0r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]=rΣ~0ZX~A0+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]=r(S)+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^], (75)

and

rdiag[X^1,Σ^X^1]I^n,,[X^k,Σ^X^k]I^nZ=rdiagI^n,,I^nZ,diag[X^1,Σ^X^1],,[X^k,Σ^X^k]rdiag[X^1,Σ^X^1],,[X^k,Σ^X^k]=rΣ~0X~Z00A0X~A00r(X^1)r(X^k)rdiag[X^1,Σ^],,[X^k,Σ^]=rΣ~X~Z0A0+r[X~,A]r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^](by (18))=r(T)r[X^1,Σ^]r[X^k,Σ^]. (76)

Substituting (75) and (76) into (74) yields the rank equality in (e).

It can be derived from (49) that the equality in (b) holds if and only if

$[P_{\hat X_1;\hat\Sigma}\hat I_n,\ \ldots,\ P_{\hat X_k;\hat\Sigma}\hat I_n]\,Z=0.$ (77)

Substituting $P_{\hat X_i;\hat\Sigma}=[\hat X_i,\,0][\hat X_i,\ \hat\Sigma\hat X_i^{\perp}]^{+}+G_i[\hat X_i,\ \hat\Sigma\hat X_i^{\perp}]^{\perp}$ in (45) into (77) gives

$\begin{aligned}[P_{\hat X_1;\hat\Sigma}\hat I_n,\ \ldots,\ P_{\hat X_k;\hat\Sigma}\hat I_n]Z&=\big[[\hat X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_n+G_1[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [\hat X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_n+G_k[\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big]Z\\ &=\big[[\hat X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_n,\ \ldots,\ [\hat X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_n\big]Z\\ &\quad+[G_1,\ \ldots,\ G_k]\,\mathrm{diag}\big([\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big)Z=0.\end{aligned}$ (78)

This matrix equation is solvable for [G1, …, Gk] if and only if

$r\!\begin{bmatrix}\big[[\hat X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_n,\ \ldots,\ [\hat X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_n\big]Z\\ \mathrm{diag}\big([\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big)Z\end{bmatrix}=r\!\left(\mathrm{diag}\big([\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_n,\ \ldots,\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_n\big)Z\right),$ (79)

where

r[X^1,0][X^1,Σ^X^1]+I^n,,[X^k,0][X^k,Σ^X^k]+I^nZdiag[X^1,Σ^X^1]I^n,,[X^k,Σ^X^k]I^nZ=r[X^1,0][X^1,Σ^X^1]+I^n,,[X^k,0][X^k,Σ^X^k]+I^nZ0diagI^n,,I^nZdiag[X^1,Σ^X^1],,[X^k,Σ^X^k]rdiag[X^1,Σ^X^1],,[X^k,Σ^X^k]=r0[X^1,0],,[X^k,0]diagI^n,,I^nZdiag[X^1,Σ^X^1],,[X^k,Σ^X^k]r[X^1,Σ^]r[X^k,Σ^]=rΣ~X~0AZ0+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]=r(S)+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]. (80)

Substituting (76) and (80) into (79) gives the equivalence of (b) and (e).

It can be seen from Lemma 4.2 that the equality in (c) holds if and only if there exist $P_{X_i;\hat X_i;\hat\Sigma}$ such that

$\big(P_{X_1;\hat X_1;\hat\Sigma}\hat I_{11}+\cdots+P_{X_k;\hat X_k;\hat\Sigma}\hat I_{1k}\big)[\hat X,\ \hat\Sigma\hat X^{\perp}]=[X,\,0].$ (81)

Substituting $P_{X_i;\hat X_i;\hat\Sigma}$ in (44) into (81) gives the following matrix equation

$[H_1,\ \ldots,\ H_k]\begin{bmatrix}[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}=D,$ (82)

where

$D=[X,\,0]-[X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]-\cdots-[X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}].$

By Lemma 2.3, there exists a matrix [H1, …, Hk] such that (82) holds if and only if

$r\!\begin{bmatrix}D\\ [\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}=r\!\begin{bmatrix}[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}.$ (83)

Applying (15) and (16) to both sides of (83) and simplifying, we obtain

rD[X^1,Σ^X^1]I^11[X^,Σ^X^][X^k,Σ^X^k]I^1k[X^,Σ^X^]=rD00I^11[X^,Σ^X^][X^1,Σ^X^1]0I^1k[X^,Σ^X^]0[X^k,Σ^X^k]r[X^1,Σ^X^1]r[X^k,Σ^X^k]=r[X,0][X1,0][Xk,0]I^11[X^,Σ^X^]X^1,Σ^X^10I^1k[X^,Σ^X^]0X^k,Σ^X^kr[X^1,Σ^]r[X^k,Σ^]=r[X,0][X1,0][Xk,0]I^11[X^,0]X^1,Σ^X^10I^1k[X^,0]0X^k,Σ^X^kr[X^1,Σ^]r[X^k,Σ^]=r0X00Z0Σ~00A0000X~Ar(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]=rΣ~X~0AZ0+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]=r(S)+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^], (84)

and

r[X^1,Σ^X^1]I^11[X^,Σ^X^][X^k,Σ^X^k]I^1k[X^,Σ^X^]=rI^11[X^,Σ^X^]X^1,Σ^X^10I^1k[X^,Σ^X^]0X^k,Σ^X^kr[X^1,Σ^X^1]r[X^k,Σ^X^k]=rI^11X^X^1,Σ^0I^1kX^0X^k,Σ^r[X^1,Σ^]r[X^k,Σ^]=rZ10[X^1,Σ^]0Zk00[X^k,Σ^]r[X^1,Σ^]r[X^k,Σ^]=rΣ~X~Z0A0r[X^1,Σ^]r[X^k,Σ^]=r(T)r[X^1,Σ^]r[X^k,Σ^]. (85)

Substituting (84) and (85) into (83) yields the rank equality in (e).

By Lemma 4.2, the equality in (d) holds if and only if there exist $P_{\hat X_i;\hat\Sigma}$, i = 1, …, k, such that

$\big(P_{\hat X_1;\hat\Sigma}\hat I_{11}+\cdots+P_{\hat X_k;\hat\Sigma}\hat I_{1k}\big)[\hat X,\ \hat\Sigma\hat X^{\perp}]=[\hat X,\,0].$ (86)

Substituting $P_{\hat X_i;\hat\Sigma}$, i = 1, …, k, in (45) into (86) gives the following matrix equation

$[G_1,\ \ldots,\ G_k]\begin{bmatrix}[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}=D,$ (87)

where

$D=[\hat X,\,0]-[\hat X_1,\,0][\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{+}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]-\cdots-[\hat X_k,\,0][\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{+}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}].$

By Lemma 2.3, there exists a matrix [G1, …, Gk] such that (87) holds if and only if

$r\!\begin{bmatrix}D\\ [\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}=r\!\begin{bmatrix}[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}.$ (88)

Applying (15) and (16) to the left-hand side of (88) and simplifying gives

rD[X^1,Σ^X^1]I^11[X^,Σ^X^][X^k,Σ^X^k]I^1k[X^,Σ^X^]=rD00I^11[X^,Σ^X^][X^1,Σ^X^1]0I^1k[X^,Σ^X^]0[X^k,Σ^X^k]r[X^1,Σ^X^1]r[X^k,Σ^X^k]=r[X^,0][X^1,0][X^k,0]I^11[X^,Σ^X^][X^1,Σ^X^1]0I^1k[X^,Σ^X^]0[X^k,Σ^X^k]r[X^1,Σ^]r[X^k,Σ^]=r[X^,0][X^1,0][X^k,0]I^11[X^,0][X^1,Σ^X^1]0I^1k[X^,0]0[X^k,Σ^X^k]r[X^1,Σ^]r[X^k,Σ^]=rX^000[0,Σ^X^1]I^11[X^k,0]0I^1k[X^1,0][0,Σ^X^k]r[X^1,Σ^]r[X^k,Σ^]=r[0,Σ^X^1]I^11[X^k,0]I^1k[X^1,0][0,Σ^X^k]+r(X^)r[X^1,Σ^]r[X^k,Σ^]=rΣ~X~0AZ0+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]=r(S)+r(X^)r(X^1)r(X^k)r[X^1,Σ^]r[X^k,Σ^]. (89)

Substituting (85) and (89) into (88) yields the rank equality in (e). □

Theorem 6.5

Let $\mathrm{BLUE}_{\hat{\mathscr M}_i}(X_i\beta_i)$ and $\mathrm{BLUE}_{\hat{\mathscr M}_i}(\hat X_i\beta_i)$, i = 1, …, k, be as given in (44) and (45), respectively. Then, the following statements are equivalent:

  1. $\{\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)\}\subseteq\{\mathrm{BLUE}_{\hat{\mathscr M}}(X\beta)\}$.

  2. $\{\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)\}\subseteq\{\mathrm{BLUE}_{\hat{\mathscr M}}(\hat X\beta)\}$.

  3. $r(S)+r(\hat X)-r(\hat X_1)-\cdots-r(\hat X_k)=r(T)=r[\hat X_1,\ \hat\Sigma]+\cdots+r[\hat X_k,\ \hat\Sigma]$.

  4. If all X1β1, …, Xkβk are estimable under (9), then the following statements are equivalent:

    1. $\{\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)\}\subseteq\{\mathrm{BLUE}_{\hat{\mathscr M}}(X\beta)\}$.

    2. $\{\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)\}\subseteq\{\mathrm{BLUE}_{\hat{\mathscr M}}(\hat X\beta)\}$.

    3. $r(S)=r(T)=r[\hat X_1,\ \hat\Sigma]+\cdots+r[\hat X_k,\ \hat\Sigma]$.

Proof

It can be seen from (82) that the set inclusion in (a) holds if and only if (82) holds for all [H1, …, Hk], which is equivalent to the following equalities

$\begin{bmatrix}[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}=0,\qquad D=0.$ (90)

From (85), the first equality in (90) is equivalent to

$r\!\begin{bmatrix}\widetilde\Sigma&\widetilde X\\ Z&0\\ A&0\end{bmatrix}=r[\hat X_1,\ \hat\Sigma]+\cdots+r[\hat X_k,\ \hat\Sigma].$ (91)

In this case, applying (19) to D in (82) and simplifying by $\hat I_{1i}\hat\Sigma=\hat\Sigma$ and $\mathscr R(\hat X_i)\subseteq\mathscr R(\hat X)$ gives

r(D)=r[X,0][X1,0][X^1,Σ^X^1]+I^11[X^,Σ^X^][Xk,0][X^k,Σ^X^k]+I^1k[X^,Σ^X^]=r[X^1,Σ^X^1]0I^11[X^,Σ^X^]0[X^k,Σ^X^k]I^1k[X^,Σ^X^][X1,0][Xk,0][X,0]r[X^1,Σ^]r[X^k,Σ^]=r[X^1,Σ^X^1]0I^11X^0[X^k,Σ^X^k]I^1kX^[X1,0][Xk,0]Xr[X^1,Σ^]r[X^k,Σ^]=r(S)+r(X^)r(X^1)r(X^k)r[X^1,Σ^][X^k,Σ^](by (84)).

Hence, D = 0 is equivalent to

$r(S)=r(\hat X_1)+\cdots+r(\hat X_k)-r(\hat X)+r[\hat X_1,\ \hat\Sigma]+\cdots+r[\hat X_k,\ \hat\Sigma].$ (92)

Combining (91) and (92) yields (c).

It can be seen from (87) that the set inclusion in (b) holds if and only if (87) holds for all [G1, …, Gk], which is equivalent to the following equalities

$\begin{bmatrix}[\hat X_1,\ \hat\Sigma\hat X_1^{\perp}]^{\perp}\hat I_{11}[\hat X,\ \hat\Sigma\hat X^{\perp}]\\ \vdots\\ [\hat X_k,\ \hat\Sigma\hat X_k^{\perp}]^{\perp}\hat I_{1k}[\hat X,\ \hat\Sigma\hat X^{\perp}]\end{bmatrix}=0,\qquad D=0.$ (93)

Applying (19) to D in (87) and simplifying, we obtain

r(D)=r[X^,0][X^1,0][X^1,Σ^X^1]+I^11[X^,Σ^X^][X^k,0][X^k,Σ^X^k]+I^1k[X^,Σ^X^]=r[X^1,Σ^X^1]0I^11[X^,Σ^X^]0[X^k,Σ^X^k]I^1k[X^,Σ^X^][X^1,0][X^k,0][X^,0]r[X^1,Σ^]r[X^k,Σ^]=r[X^1,Σ^X^1]0I^11X^0[X^k,Σ^X^k]I^1kX^[X^1,0][X^k,0]X^r[X^1,Σ^]r[X^k,Σ^]=r(S)+r(X^)r(X^1)r(X^k)r[X^1,Σ^][X^k,Σ^](by (89)). (94)

Combining (91) and (94), we see that (b) is also equivalent to (c). Results (i), (ii), and (iii) in (d) follow from Lemma 3.3. □

Theorem 6.6

Assume that Σ in (1) is positive definite. Then $\mathrm{BLUE}_{\hat{\mathscr M}_i}(X_i\beta_i)$ and $\mathrm{BLUE}_{\hat{\mathscr M}_i}(\hat X_i\beta_i)$ are all unique, i = 1, …, k, and the following statements are equivalent:

  1. $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)\big)=X\beta$.

  2. $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)\big)=\hat X\beta$.

  3. $\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(X\beta)$.

  4. $\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(\hat X\beta)$.

  5. $r\!\begin{bmatrix}Z\widetilde\Sigma^{-1}\widetilde X\\ A\end{bmatrix}=r(\hat X_1)+\cdots+r(\hat X_k)-r(\hat X)+r(A)$.

  6. If all X1β1, …, Xkβk are estimable under (9), then the following statements are equivalent:

    1. $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)\big)=X\beta$.

    2. $E\big(\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)\big)=\hat X\beta$.

    3. $\mathrm{BLUE}_{\hat{\mathscr M}_1}(X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(X\beta)$.

    4. $\mathrm{BLUE}_{\hat{\mathscr M}_1}(\hat X_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(\hat X_k\beta_k)=\mathrm{BLUE}_{\hat{\mathscr M}}(\hat X\beta)$.

    5. $r\!\begin{bmatrix}Z\widetilde\Sigma^{-1}\widetilde X\\ A\end{bmatrix}=r(A)$.

    6. $\mathscr R\big[(Z\widetilde\Sigma^{-1}\widetilde X)'\big]\subseteq\mathscr R(A')$.

Observing that Z in (42) is a skew-diagonal block matrix when k = 2, we then obtain that

$r(S)=r(S_1)+r(S_2),\quad r(T)=r(T_1)+r(T_2),\quad r(S_1)\geq r(T_1),\quad r(S_2)\geq r(T_2)$

hold under (42) and (43), while the rank formulas in Theorems 6.2–6.6 reduce to certain simple and separated forms, as those given in [6]. When A1 = 0, …, Ak = 0 in (1) and (2), Theorems 6.2–6.6 reduce to the results given in [5].
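To make the simplest unconstrained special case concrete (k = 2, A1 = A2 = 0, Σ = I, so the BLUE of Xβ is the OLS fit), here is a small numerical illustration in plain numpy: when the two column blocks are mutually orthogonal, the full-model fit decomposes exactly into the two sub-model fits.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
X1 = rng.standard_normal((n, 2))
P1 = X1 @ np.linalg.pinv(X1)                         # orthogonal projector onto R(X1)
X2 = (np.eye(n) - P1) @ rng.standard_normal((n, 2))  # force X1' X2 = 0
X = np.hstack([X1, X2])
y = rng.standard_normal(n)

P = X @ np.linalg.pinv(X)     # full-model projector: BLUE of X beta is P y when Sigma = I
P2 = X2 @ np.linalg.pinv(X2)  # sub-model projector onto R(X2)

# Orthogonality of the blocks makes the projector, hence the BLUE, additive.
print(np.allclose(P, P1 + P2))               # True
print(np.allclose(P @ y, P1 @ y + P2 @ y))   # True
```

For non-orthogonal blocks, singular Σ, or nonzero restrictions A1, A2, the additivity fails in general, and the rank conditions of Theorems 6.2–6.6 characterize exactly when it survives.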

7 Summary

We have established some fundamental additive decomposition equalities for BLUEs under a full CGLM and its submodels with parameter restrictions by using the methods of matrix equations, matrix rank formulas, and various skillful and transparent partitioned matrix calculations. The paper thus provides comprehensive coverage of topics on additive decompositions of BLUEs under general model assumptions, while the decomposition identities obtained reveal many valuable mathematical and statistical properties of BLUEs, so that they can serve as useful references in the statistical analysis of CGLMs. This contribution also shows that algebraic tools in matrix theory play essential roles in the establishment and development of statistical analysis. In fact, linear models are among the best representatives of statistical models that attract linear algebraists to consider possible applications of their matrix contributions in statistical theory.

Notice, furthermore, that the two decompositions of BLUEs in (13) and (14) are special cases of the following general decomposition identity

$\mathrm{BLUE}_{\hat{\mathscr M}}(K\beta)=\mathrm{BLUE}_{\hat{\mathscr M}_1}(K_1\beta_1)+\cdots+\mathrm{BLUE}_{\hat{\mathscr M}_k}(K_k\beta_k),$

where Kβ = K1β1 + ⋯ + Kkβk is assumed to be estimable under (9). Thus, it would be of interest to consider this general decomposition and derive identifying conditions for the decomposition identity to hold. As demonstrated in Theorems 6.2–6.6, this is in fact a challenging algebraic problem in matrix theory, because the given matrices X, K, A, Σ, Xi, Ki, and Ai in (1), i = 1, …, k, and their generalized inverses all occur in the calculations associated with this decomposition identity of estimators.
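A natural first step toward such conditions is the estimability check itself. Under an unrestricted model, Kβ is estimable exactly when the rows of K lie in the row space of X, i.e. r([X; K]) = r(X); the restricted version in (9) augments X with the constraint matrix. A small sketch of this rank test with hypothetical X and K (illustrative matrices, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
# Rank-deficient design: the fourth column is identically zero.
X = rng.standard_normal((10, 4)) @ np.diag([1.0, 1.0, 1.0, 0.0])

def estimable(K, X):
    """K beta is estimable iff each row of K lies in the row space of X."""
    rX = np.linalg.matrix_rank(X)
    return np.linalg.matrix_rank(np.vstack([X, K])) == rX

K_good = rng.standard_normal((2, 10)) @ X  # rows built inside the row space of X
K_bad = np.eye(4)                          # e_4 is not in the row space here

print(estimable(K_good, X))  # True
print(estimable(K_bad, X))   # False
```

The identifying conditions for the decomposition identity then refine this basic test into the rank equalities on S and T of the kind derived in Section 6.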

It should be pointed out that many specialized and ingenious methods for establishing and simplifying matrix expressions and matrix equalities have been developed in linear algebra and matrix theory, and these have greatly benefited both mathematics and its applications. In particular, such methodologies have found essential applications in statistical analysis, for instance in establishing various intriguing and sophisticated formulas, equalities, and inequalities associated with estimators under linear statistical models.

Acknowledgement

The authors are grateful to anonymous referees for their helpful comments and constructive suggestions that improved the presentation of the paper.

References

[1] Graybill F.A., An Introduction to Linear Statistical Models, Vol. I, McGraw-Hill, New York, 1961

[2] Rao C.R., Toutenburg H., Shalabh, Heumann C., Linear Models and Generalizations: Least Squares and Alternatives, 3rd ed., Springer, Berlin Heidelberg, 2008

[3] Searle S.R., Linear Models, Wiley, New York, 1971

[4] Tian Y., Some decompositions of OLSEs and BLUEs under a partitioned linear model, Internat. Statist. Rev., 2007, 75, 224–248. DOI: 10.1111/j.1751-5823.2007.00018.x

[5] Tian Y., On an additive decomposition of the BLUE in a multiple partitioned linear model, J. Multivariate Anal., 2009, 100, 767–776. DOI: 10.1016/j.jmva.2008.08.006

[6] Zhang X., Tian Y., On decompositions of BLUEs under a partitioned linear model with restrictions, Stat. Papers, 2016, 57, 345–364. DOI: 10.1007/s00362-014-0654-y

[7] Markiewicz A., Puntanen S., All about the ⊥ with its applications in the linear statistical models, Open Math., 2015, 13, 33–50. DOI: 10.1515/math-2015-0005

[8] Puntanen S., Styan G.P.H., Isotalo J., Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty, Springer, Heidelberg, 2011. DOI: 10.1007/978-3-642-10473-2

[9] Rao C.R., Mitra S.K., Generalized Inverse of Matrices and Its Applications, Wiley, New York, 1971

[10] Searle S.R., The infusion of matrices into statistics, Bull. Internat. Lin. Alg. Soc., 2000, 24, 25–32

[11] Marsaglia G., Styan G.P.H., Equalities and inequalities for ranks of matrices, Linear Multilinear Algebra, 1974, 2, 269–292. DOI: 10.1080/03081087408817070

[12] Baksalary J.K., Styan G.P.H., Around a formula for the rank of a matrix product with some statistical applications, in: Graphs, Matrices, and Designs, Lecture Notes in Pure and Appl. Math., 139, Dekker, New York, 1993, pp. 1–18. DOI: 10.1201/9780203719916-1

[13] Marsaglia G., Styan G.P.H., Rank conditions for generalized inverses of partitioned matrices, Sankhyā Ser. A, 1974, 36, 437–442

[14] Jiang B., Sun Y., On the equality of estimators under a general partitioned linear model with parameter restrictions, Stat. Papers, 2017. DOI: 10.1007/s00362-016-0837-9

[15] Jiang B., Tian Y., Decomposition approaches of a constrained general linear model with fixed parameters, Electron. J. Linear Algebra, 2017, 32, 232–253. DOI: 10.13001/1081-3810.3428

[16] Jiang B., Tian Y., On equivalence of predictors/estimators under a multivariate general linear model with augmentation, J. Korean Stat. Soc., 2017, 46, 551–561. DOI: 10.1016/j.jkss.2017.04.001

[17] Tian Y., On equalities of estimations of parametric functions under a general linear model and its restricted models, Metrika, 2010, 72, 313–330. DOI: 10.1007/s00184-009-0255-2

[18] Tian Y., Characterizing relationships between estimations under a general linear model with explicit and implicit restrictions by rank of matrix, Comm. Statist. Theory Methods, 2012, 41, 2588–2601. DOI: 10.1080/03610926.2011.594537

[19] Tian Y., Matrix rank and inertia formulas in the analysis of general linear models, Open Math., 2017, 15, 126–150. DOI: 10.1515/math-2017-0013

[20] Tian Y., Some equalities and inequalities for covariance matrices of estimators under linear model, Stat. Papers, 2017, 58, 467–484. DOI: 10.1007/s00362-015-0707-x

[21] Tian Y., Jiang B., Equalities for estimators of partial parameters under linear model with restrictions, J. Multivariate Anal., 2016, 143, 299–313. DOI: 10.1016/j.jmva.2015.09.007

[22] Tian Y., Puntanen S., On the equivalence of estimations under a general linear model and its transformed models, Linear Algebra Appl., 2009, 430, 2622–2641. DOI: 10.1016/j.laa.2008.09.016

[23] Tian Y., Takane Y., On sum decompositions of weighted least-squares estimators for the partitioned linear model, Comm. Statist. Theory Methods, 2008, 37, 55–69. DOI: 10.1080/03610920701648862

[24] Tian Y., Tian Z., On additive and block decompositions of WLSEs under a multiple partitioned regression model, Statistics, 2010, 44, 361–379. DOI: 10.1080/02331880903189109

[25] Tian Y., More on maximal and minimal ranks of Schur complements with applications, Appl. Math. Comput., 2004, 152, 675–692. DOI: 10.1016/S0096-3003(03)00585-X

[26] Penrose R., A generalized inverse for matrices, Proc. Cambridge Phil. Soc., 1955, 51, 406–413. DOI: 10.1017/S0305004100030401

[27] Rao C.R., Unified theory of linear estimation, Sankhyā Ser. A, 1971, 33, 371–394

[28] Rao C.R., Representations of best linear unbiased estimators in the Gauss–Markoff model with a singular dispersion matrix, J. Multivariate Anal., 1973, 3, 276–292. DOI: 10.1016/0047-259X(73)90042-0

[29] Alalouf I.S., Styan G.P.H., Characterizations of estimability in the general linear model, Ann. Stat., 1979, 7, 194–200. DOI: 10.1214/aos/1176344564

[30] Bunke H., Bunke O., Identifiability and estimability, Statistics, 1974, 5, 223–233. DOI: 10.1080/02331937408842188

[31] Majumdar D., Mitra S.K., Statistical analysis of nonestimable functionals, in: W. Klonecki et al. (eds.), Mathematical Statistics and Probability Theory, Springer, New York, 1980, pp. 288–316. DOI: 10.1007/978-1-4615-7397-5_20

[32] Milliken G.A., New criteria for estimability for linear models, Ann. Math. Statist., 1971, 42, 1588–1594. DOI: 10.1214/aoms/1177693157

[33] Searle S.R., Additional results concerning estimable functions and generalized inverse matrices, J. Roy. Statist. Soc. Ser. B, 1965, 27, 486–490. DOI: 10.1111/j.2517-6161.1965.tb00608.x

[34] Seely J., Linear spaces and unbiased estimation, Ann. Math. Statist., 1970, 41, 1725–1734. DOI: 10.1214/aoms/1177696817

[35] Seely J., Estimability and linear hypotheses, Amer. Statist., 1977, 31, 121–123. DOI: 10.1080/00031305.1977.10479216

[36] Seely J., Birkes D., Estimability in partitioned linear models, Ann. Statist., 1980, 8, 399–406. DOI: 10.1214/aos/1176344960

[37] Stewart I., Wynn H.P., The estimability structure of linear models and submodels, J. Roy. Stat. Soc. Ser. B, 1981, 43, 197–207. DOI: 10.1111/j.2517-6161.1981.tb01171.x

[38] Tian Y., Beisiegel M., Dagenais E., Haines C., On the natural restrictions in the singular Gauss–Markov model, Stat. Papers, 2008, 49, 553–564. DOI: 10.1007/s00362-006-0032-5

[39] Drygas H., The Coordinate-free Approach to Gauss–Markov Estimation, Springer, Heidelberg, 1970. DOI: 10.1007/978-3-642-65148-9

Received: 2017-2-23
Accepted: 2017-9-26
Published Online: 2017-12-2

© 2017 Jiang et al.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.

  267. Special Issue on Recent Developments in Differential Equations
  268. Multiplicity solutions of a class fractional Schrödinger equations
  269. Special Issue on Recent Developments in Differential Equations
  270. Determining of right-hand side of higher order ultraparabolic equation
  271. Special Issue on Recent Developments in Differential Equations
  272. Asymptotic approximation for the solution to a semi-linear elliptic problem in a thin aneurysm-type domain
  273. Topical Issue on Metaheuristics - Methods and Applications
  274. Learnheuristics: hybridizing metaheuristics with machine learning for optimization with dynamic inputs
  275. Topical Issue on Metaheuristics - Methods and Applications
  276. Nature–inspired metaheuristic algorithms to find near–OGR sequences for WDM channel allocation and their performance comparison
  277. Topical Issue on Cyber-security Mathematics
  278. Monomial codes seen as invariant subspaces
  279. Topical Issue on Cyber-security Mathematics
  280. Expert knowledge and data analysis for detecting advanced persistent threats
  281. Topical Issue on Cyber-security Mathematics
  282. Feedback equivalence of convolutional codes over finite rings
Downloaded on 19.9.2025 from https://www.degruyterbrill.com/document/doi/10.1515/math-2017-0109/html