
Integrative analysis with a system of semiparametric projection non-linear regression models

  • Ao Yuan, Tianmin Wu, Hong-Bin Fang and Ming T. Tan
Published/Copyright: September 16, 2020

Abstract

In integrative analysis, parametric or nonparametric methods are often used. The former are easier to interpret but not robust, while the latter are robust but make it harder to interpret the relationships among the different types of variables. To combine the advantages of both approaches, and for flexibility, a system of semiparametric projection non-linear regression models is proposed for integrative analysis, to model the innate coordinate structure of these different types of data, and a diagnostic tool is constructed to classify new subjects into the case or control group. Simulation studies are conducted to evaluate the performance of the proposed method and show promising results. The method is then applied to analyze real omics data from The Cancer Genome Atlas study; the results are compared with those from similarity network fusion, another integrative analysis method, and the results from our method are more reasonable.


Corresponding authors: Ao Yuan and Ming T. Tan, Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University, Washington DC 20057, USA.

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

Appendix

Proof of Theorem 1

The conditional density of Y given X is

$$p(y|x,\theta,f)\propto\exp\big\{-(y-\alpha\circ f(x'\beta))'\,\Omega^{-1}(y-\alpha\circ f(x'\beta))\big\}.$$

Let $g(x)$ be the density of $X$; the joint density of $(Y,X)$ is $p(y,x|\theta,f)=p(y|x,\theta,f)g(x)$. Let

$$q(y,x|\theta,f)=\log\frac{p(y,x|\theta,f)}{p(y,x|\theta_0,f_0)}$$

be the log-likelihood ratio. Let $P$ be the probability measure with density $p(y,x|\theta_0,f_0)$, let $Pq(Y,X|\theta,f)=\int q(y,x|\theta,f)\,dP(y,x)$ be the true mean of $q$, and let $P_nq(Y,X|\theta,f)=(1/n)\sum_{i=1}^n q(y_i,x_i|\theta,f)$ be the empirical mean of $q$ based on the data $\{(y_i,x_i): i=1,\dots,n\}$.
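To make the objects $Pq$ and $P_nq$ concrete, here is a small numerical sketch in a toy one-dimensional Gaussian location model (purely illustrative, not the paper's model; the values of $\theta$, $n$ and the seed are arbitrary choices): the empirical mean $P_nq$ converges to the true mean $Pq$, which is the negative Kullback–Leibler divergence.

```python
import numpy as np

# Toy illustration (not the paper's model): for Y ~ N(theta0, 1) with
# theta0 = 0, the log-likelihood ratio against theta is
#   q(y|theta) = log p(y|theta)/p(y|theta0) = theta*y - theta**2/2,
# whose true mean is Pq = -theta**2/2 (the negative KL divergence);
# the empirical mean P_n q converges to it by the law of large numbers.
rng = np.random.default_rng(0)
theta, n = 0.7, 200_000
y = rng.normal(0.0, 1.0, size=n)   # data drawn under theta0 = 0
q = theta * y - theta**2 / 2       # q(y_i | theta)
Pn_q = q.mean()                    # empirical mean P_n q
P_q = -theta**2 / 2                # true mean Pq (negative KL divergence)
print(Pn_q, P_q)                   # the two values are close for large n
```
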

Note that $Pq(Y,X|\theta,f)$ is the negative Kullback–Leibler divergence of $p(y,x|\theta,f)$ from $p(y,x|\theta_0,f_0)$; as a function of $(\theta,f)$ it is always non-positive, attaining its maximum value of 0 at $(\theta,f)=(\theta_0,f_0)$. Since $(\hat\theta,\hat f)$ is the MLE of $(\theta_0,f_0)$,
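The non-positivity of $Pq$ is a one-line consequence of Jensen's inequality; spelling out this standard step for completeness:

```latex
Pq(Y,X|\theta,f)
  = \int \log\frac{p(y,x|\theta,f)}{p(y,x|\theta_0,f_0)}\,dP(y,x)
  \;\le\; \log \int \frac{p(y,x|\theta,f)}{p(y,x|\theta_0,f_0)}\,dP(y,x)
  \;=\; \log \int p(y,x|\theta,f)\,dy\,dx \;=\; \log 1 \;=\; 0,
```

with equality if and only if $p(\cdot|\theta,f)=p(\cdot|\theta_0,f_0)$ $P$-almost surely, which under identifiability forces $(\theta,f)=(\theta_0,f_0)$.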

$$(\theta_0,f_0)=\arg\max_{(\theta,f)\in(\Theta,\mathcal F^3)}Pq(Y,X|\theta,f),\quad\text{and}\quad(\hat\theta,\hat f)=\arg\max_{(\theta,f)\in(\Theta,\mathcal F^3)}P_nq(Y,X|\theta,f).$$

Denote $d(\theta,f;\theta_0,f_0)=\|\theta-\theta_0\|+\sum_{j=1}^3\sup_{s\in D_j}|f_j(s)-f_{j0}(s)|$. By our model specification, $Pq(Y,X|\theta,f)$ is continuous with respect to $(\theta,f)$; also, the model is identifiable, so $(\theta_0,f_0)$ is the unique maximizer of $Pq(Y,X|\theta,f)$. Thus for all $\eta>0$,

$$\sup_{(\theta,f)\in(\Theta,\mathcal F^3):\,d(\theta,f;\theta_0,f_0)>\eta}Pq(Y,X|\theta,f)<Pq(Y,X|\theta_0,f_0)\quad\text{and}\quad P_nq(Y,X|\hat\theta,\hat f)\ge P_nq(Y,X|\theta_0,f_0),$$

so if we show that $\mathcal A=\{q(y,x|\theta,f):(\theta,f)\in(\Theta,\mathcal F^3)\}$ is a $P$-Glivenko–Cantelli class, then by Theorem 5.8 in van der Vaart [28],

$$d(\hat\theta,\hat f;\theta_0,f_0)\xrightarrow{a.s.}0,$$

which is the desired result.

Now we show that $\mathcal A$ is $P$-Glivenko–Cantelli. For any function $g$, let $\|g\|_{L_1(P)}=\int|g(y,x)|\,dP(y,x)$, and let $N_{[\,]}(\epsilon,\mathcal A,L_1(P))$ be the minimum number of $\epsilon$-brackets needed to cover $\mathcal A$ under the $L_1(P)$ norm. We first show that $N_{[\,]}(\epsilon,\mathcal A,L_1(P))$ is finite for all $\epsilon>0$.

By our specification of $p(y,x|\theta,f)$ and with (C1), it can be checked that $q(y,x|\theta,f)$ is boundedly differentiable with respect to $\theta$, and boundedly Gâteaux differentiable with respect to $f$. In fact, let $\ell(\theta,f|y,x)$ be the log-likelihood, let $\dot\ell_\theta(\theta,f|y,x)=\partial\ell(\theta,f|y,x)/\partial\theta$ be the score for $\theta$, and let $\dot\ell_{f_j}(\theta,f|y,x)[h]=\partial\ell(\theta,f_j+\lambda h|y,x)/\partial\lambda|_{\lambda=0}$ be the Gâteaux derivative of $\ell(\theta,f|y,x)$ with respect to $f_j$ in the direction $h$. Then for some $(\theta,f)$ lying between $(\theta_1,f_1)$ and $(\theta_2,f_2)$,

$$q(y,x|\theta_2,f_2)-q(y,x|\theta_1,f_1)=\ell(\theta_2,f_2|y,x)-\ell(\theta_1,f_1|y,x)=\dot\ell'_\theta(\theta,f|y,x)(\theta_2-\theta_1)+\sum_{j=1}^3\dot\ell_{f_j}(\theta,f|y,x)[f_{2j}-f_{1j}].$$

By our model specification and (C1)–(C4) (note that (C1) together with $\|\beta\|=1$ implies $\Theta$ is bounded), both $\dot\ell_\theta(\theta,f|y,x)$ and $\dot\ell_{f_j}(\theta,f|y,x)[h]$ have finite second moments. Consequently, by Taylor expansion and Hölder's inequality, there are constants $0<C_j<\infty$ $(j=1,\dots,4)$ such that

$$\|q(Y,X|\theta_1,f_1)-q(Y,X|\theta_2,f_2)\|_{L_1(P)}\le C_4\|\theta_1-\theta_2\|+\sum_{j=1}^3 C_j\|f_{1j}-f_{2j}\|_{L_2(P)},$$
$$\forall(\theta_1,f_1),(\theta_2,f_2)\in(\Theta,\mathcal F^3).$$

So,

$$N_{[\,]}(\epsilon,\mathcal A,L_1(P))\le N_{[\,]}\Big(\frac{\epsilon}{4C_4},\Theta,\|\cdot\|\Big)\times\prod_{j=1}^3 N_{[\,]}\Big(\frac{\epsilon}{4C_j},\mathcal F,L_2(P)\Big),\quad\forall\epsilon>0.$$

By (C1), $N_{[\,]}(\epsilon/(4C_4),\Theta,\|\cdot\|)=O(1/\epsilon^d)$, with $d=\dim(\Theta)$. Since $\mathcal F$ is a collection of bounded monotone functions, by Theorem 2.7.5 in van der Vaart and Wellner [29], for some constant $0<C<\infty$ and for all $r\ge1$,

N[ ](ϵ,,Lr(P))exp{Cϵ}, ϵ>0.

Thus, for some generic constant $0<C<\infty$,

$$N_{[\,]}(\epsilon,\mathcal A,L_1(P))\le N_{[\,]}\Big(\frac{\epsilon}{4C_4},\Theta,\|\cdot\|\Big)\prod_{j=1}^3 N_{[\,]}\Big(\frac{\epsilon}{4C_j},\mathcal F,L_2(P)\Big)\le\frac{C}{\epsilon^d}\exp\{C/\epsilon\}<\infty,\quad\forall\epsilon>0,$$

and so by Theorem 2.4.1 in van der Vaart and Wellner [29], 𝒜 is a Glivenko-Cantelli class with respect to P.

Proof of Theorem 2

We only give an outline of the proof; the details are similar to those of Theorem 2 in Yuan, Yin and Tan [30] and are omitted.

Below we identify the asymptotic covariance matrices of $\sqrt n(\hat\beta_j-\beta_{j0})$ and $\sqrt n(\hat\alpha_j-\alpha_{j0})$ $(j=1,2,3)$.

We first compute the efficient scores for $\beta$ and $\alpha$. For a function $g(\cdot)$, denote by $\dot g(\cdot)$ its derivative, and let $\dot\ell_\beta(\theta,f|y,x)=\partial\ell(\theta,f|y,x)/\partial\beta$ and $\dot\ell_\alpha(\theta,f|y,x)=\partial\ell(\theta,f|y,x)/\partial\alpha$. Denote $\dot f(x'\beta)x=(\dot f_1(x'\beta_1)x,\dot f_2(x'\beta_2)x,\dot f_3(x'\beta_3)x)'$. We have that

$$\dot\ell_\beta(\theta,f|y,x)=\Omega^{-1}\big(y-\alpha\circ f(x'\beta)\big)\dot f(x'\beta)x,\qquad \dot\ell_\alpha(\theta,f|y,x)=\Omega^{-1}\big(y-\alpha\circ f(x'\beta)\big)(g,p,m)'$$

are the scores for $\beta$ and $\alpha$. We assume $\dim(\beta)>1$. The score operator for $f$ in direction $g=(g_1,g_2,g_3)$ is, with $g(x'\beta)=(g_1(x'\beta_1),g_2(x'\beta_2),g_3(x'\beta_3))'$,

$$\dot\ell_f(\theta,f|y,x)[g]=\Omega^{-1}\big(y-\alpha\circ f(x'\beta)\big)g(x'\beta).$$

Note that $\dot\ell^*_\theta(\theta_0,f_0|y,x)=\dot\ell_\theta(\theta_0,f_0|y,x)-\dot\ell_f(\theta_0,f_0|y,x)[g^*]$ is the efficient score for $\theta$, where $g^*=(g_1^*,\dots,g_d^*)$ $(d=\dim(\theta))$ is the least favorable direction, $\dot\ell_f(\theta_0,f_0|y,x)[g^*]=(\dot\ell_f(\theta_0,f_0|y,x)[g_1^*],\dots,\dot\ell_f(\theta_0,f_0|y,x)[g_d^*])'$, and $g^*$ is determined by

$$\text{(A.1)}\qquad 0=E\Big(\big(\dot\ell_\theta(\theta_0,f_0|Y,X)-\dot\ell_f(\theta_0,f_0|Y,X)[g^*]\big)\dot\ell_f(\theta_0,f_0|Y,X)[g]\Big),\quad\forall g.$$

Let $P_n$ and $P$ be as defined in the proof of Theorem 1, let $Z=(Y,X)$, and let $\tilde{\dot\ell}^*_\theta(\theta_0,f_0|Z)$ be given later, with $\sqrt n(P_n-P)\tilde{\dot\ell}^*_\theta(\theta_0,f_0|Z)\xrightarrow{D}N(0,I^*(\theta_0|f_0))$. Below we will show that

$$\sqrt n(\hat\theta-\theta_0)=J^{*-}(\theta_0|f_0)\sqrt n(P_n-P)\tilde{\dot\ell}^*_\theta(\theta_0,f_0|Z)+o_p(1).$$

This will complete the proof.

Since we have proved the asymptotic equivalence of $(\hat\theta,\hat f)$ and $(\tilde\theta,\tilde f)$, we now derive the asymptotic distribution of $\hat\theta$ as if it were obtained along with a version of $\hat f$ that has adequate smoothness; in particular, one with a consistent derivative $\hat{\dot f}$ satisfying $\|\hat{\dot f}-\dot f_0\|\xrightarrow{P}0$. $\hat{\dot f}$ will appear in $\tilde{\dot\ell}_\theta(\hat\theta,\hat f|z)$, which will be used in (A.2) later. Because of the constraints $\|\beta\|=1$ and $\|\alpha\|=1$, we use Lagrange's method. Let

$$\tilde\ell(\theta,f,\eta,\zeta|z)=\ell(\theta,f|z)-\eta(\|\beta\|^2-1)-\zeta(\|\alpha\|^2-1);$$
then $\tilde{\dot\ell}_\beta(\theta,f,\eta,\zeta|z)=\partial\tilde\ell(\theta,f,\eta,\zeta|z)/\partial\beta=\dot\ell_\beta(\theta,f|z)-2\eta\beta$ is the constrained score for $\beta$, and $\tilde{\dot\ell}_\alpha(\theta,f,\eta,\zeta|z)=\partial\tilde\ell(\theta,f,\eta,\zeta|z)/\partial\alpha=\dot\ell_\alpha(\theta,f|z)-2\zeta\alpha$ is the constrained score for $\alpha$. Then $(\hat\theta,\hat f)$ satisfies
$$P_n\tilde{\dot\ell}_\beta(\theta,f,\eta,\zeta|Z)=P_n\dot\ell_\beta(\theta,f|Z)-2\eta\beta=0$$

and

$$P_n\tilde{\dot\ell}_\alpha(\theta,f,\eta,\zeta|Z)=P_n\dot\ell_\alpha(\theta,f|Z)-2\zeta\alpha=0.$$

Applying the constraints $\|\beta\|^2=1$ and $\|\alpha\|^2=1$ gives $\eta=P_n\{\beta'\dot\ell_\beta(\theta,f|Z)\}/2$ and $\zeta=P_n\{\alpha'\dot\ell_\alpha(\theta,f|Z)\}/2$, and then $P_n\tilde{\dot\ell}_\beta(\theta,f,\eta,\zeta|Z)=P_n\tilde{\dot\ell}_\beta(\theta,f|Z)$ and $P_n\tilde{\dot\ell}_\alpha(\theta,f,\eta,\zeta|Z)=P_n\tilde{\dot\ell}_\alpha(\theta,f|Z)$, with

$$\tilde{\dot\ell}_\beta(\theta,f|z)=\dot\ell_\beta(\theta,f|z)-\beta'\dot\ell_\beta(\theta,f|z)\,\beta\quad\text{and}\quad\tilde{\dot\ell}_\alpha(\theta,f|z)=\dot\ell_\alpha(\theta,f|z)-\alpha'\dot\ell_\alpha(\theta,f|z)\,\alpha.$$

Denote $\tilde{\dot\ell}_\theta(\theta,f|z)=(\tilde{\dot\ell}'_\beta(\theta,f|z),\tilde{\dot\ell}'_\alpha(\theta,f|z),\dot\ell'_\Omega(\theta,f|z))'$.
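The constrained scores above are simply the raw scores projected off the constraint direction, so they move only along directions that preserve the unit-norm constraint to first order. A quick numerical sanity check of this projection identity (the vectors below are arbitrary stand-ins, not quantities from the paper):

```python
import numpy as np

# Illustrative check: for any stand-in score vector s and unit vector b
# (playing the role of beta with ||beta|| = 1), the constrained score
#   s_tilde = s - (b's) b
# is orthogonal to b, i.e. it lies in the tangent space of the unit sphere.
rng = np.random.default_rng(1)
b = rng.normal(size=5)
b /= np.linalg.norm(b)        # unit vector, like beta with ||beta|| = 1
s = rng.normal(size=5)        # stand-in for the raw score l_beta
s_tilde = s - (b @ s) * b     # constrained score: projection off b
print(abs(b @ s_tilde))       # ~0: orthogonal to the constraint direction
```
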

Since $(\hat\theta,\hat f)$ is the MLE, $P_n\tilde{\dot\ell}_\theta(\hat\theta,\hat f|Z)=0$ and $P_n\dot\ell_f(\hat\theta,\hat f|Z)[g^*]=0$. Also, it is apparent that $P\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)=0$ and $P\dot\ell_f(\theta_0,f_0|Z)[g^*]=0$. Let $\mathcal M_1=\{\tilde{\dot\ell}_\theta(\theta,f|z):\theta\in\Theta,f\in\mathcal F^3\}$ and $\mathcal M_2=\{\dot\ell_f(\theta,f|z)[g^*]:\theta\in\Theta,f\in\mathcal F^3\}$. Using the entropy calculation in the proof of Theorem 1, it can be checked that $\mathcal M_1$ and $\mathcal M_2$ are Donsker classes. By Theorem 1, $\|\hat\theta-\theta_0\|+\sum_{j=1}^3\|\hat f_j-f_{j0}\|\to0$ (a.s.); also, conditions (C1)–(C4) and the given model imply that $\tilde{\dot\ell}_\theta(\theta,f|z)$ and $\dot\ell_f(\theta,f|z)[g^*]$ are continuous in $(\theta,f)$ with finite second moments. Thus by Corollary 2.3.12 (ii) in van der Vaart and Wellner [29], we have

$$\big|\sqrt n(P_n-P)\tilde{\dot\ell}_\theta(\hat\theta,\hat f|Z)-\sqrt n(P_n-P)\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)\big|=o_p(1),$$

where the op(1) is in the vector sense, and

$$\big|\sqrt n(P_n-P)\dot\ell_f(\hat\theta,\hat f|Z)[g^*]-\sqrt n(P_n-P)\dot\ell_f(\theta_0,f_0|Z)[g^*]\big|=o_p(1).$$

From these facts we get

$$\text{(A.2)}\qquad P_n\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)=-P\tilde{\dot\ell}_\theta(\hat\theta,\hat f|Z)+o_p(n^{-1/2})$$

and

$$\text{(A.3)}\qquad P_n\dot\ell_f(\theta_0,f_0|Z)[g^*]=-P\dot\ell_f(\hat\theta,\hat f|Z)[g^*]+o_p(n^{-1/2}).$$

Denote $\ddot\ell_{\theta\theta}(\theta,f|z)=\partial^2\ell(\theta,f|z)/(\partial\theta\partial\theta')$ and $\ddot\ell_{\theta f_j}(\theta,f|z)[g]=\partial\dot\ell_\theta(\theta,f_j+\lambda g|z)/\partial\lambda|_{\lambda=0}$, and define $\ddot\ell_{f_j\theta}(\theta,f|z)[g]$, $\ddot\ell_{f_if_j}(\theta,f|z)[g_1,g_2]$, and $\tilde{\ddot\ell}_{\theta\theta}(\theta,f|z)$, $\tilde{\ddot\ell}_{\theta f_j}(\theta,f|z)[g]$ similarly. In particular, $\tilde{\ddot\ell}_{\theta f_j}(\theta,f|z)[g]=\ddot\ell_{\theta f_j}(\theta,f|z)[g]$, and $\tilde{\ddot\ell}_{\theta\theta}(\theta,f|z)=(\ell_{ij})_{1\le i,j\le3}$, with $\ell_{11}=\ddot\ell_{\beta\beta}(\theta,f|z)-\beta'\dot\ell_\beta(\theta,f|z)I_1-\beta\{\beta'\ddot\ell_{\beta\beta}(\theta,f|z)+\dot\ell'_\beta(\theta,f|z)\}$, where $I_1$ is the $\dim(\beta)$-dimensional identity matrix; $\ell_{12}=\ell'_{21}=\ddot\ell_{\beta\alpha}(\theta,f|z)-\beta\beta'\ddot\ell_{\beta\alpha}(\theta,f|z)$, $\ell_{13}=\ell'_{31}=\ddot\ell_{\beta\Omega}(\theta,f|z)-\beta\beta'\ddot\ell_{\beta\Omega}(\theta,f|z)$, $\ell_{22}=\ddot\ell_{\alpha\alpha}(\theta,f|z)-\alpha'\dot\ell_\alpha(\theta,f|z)I_2-\alpha\{\alpha'\ddot\ell_{\alpha\alpha}(\theta,f|z)+\dot\ell'_\alpha(\theta,f|z)\}$, $\ell_{23}=\ell'_{32}=\ddot\ell_{\alpha\Omega}(\theta,f|z)-\alpha\alpha'\ddot\ell_{\alpha\Omega}(\theta,f|z)$, and $\ell_{33}=\ddot\ell_{\Omega\Omega}(\theta,f|z)$; here we treat $\Omega$ as a vector when computing the partial derivatives.

Using Taylor expansion, and noting that $\|\hat\theta-\theta_0\|^2+\sum_{j=1}^3\|\hat f_j-f_{j0}\|^2$ is bounded, by the Lemma and dominated convergence $E\big(\|\hat\theta-\theta_0\|^2+\sum_{j=1}^3\|\hat f_j-f_{j0}\|^2\big)=O(n^{-2/3})$; also $P\tilde{\ddot\ell}_{\theta f_j}(\theta_0,f_0|Z)[\hat f_j-f_{j0}]=P\ddot\ell_{\theta f_j}(\theta_0,f_0|Z)[\hat f_j-f_{j0}]$, and so

$$P\tilde{\dot\ell}_\theta(\hat\theta,\hat f|Z)-P\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)-P\tilde{\ddot\ell}_{\theta\theta}(\theta_0,f_0|Z)(\hat\theta-\theta_0)-\sum_{j=1}^3P\ddot\ell_{\theta f_j}(\theta_0,f_0|Z)[\hat f_j-f_{j0}]$$
$$=O\Big[E\Big(\|\hat\theta-\theta_0\|^2+\sum_{j=1}^3\|\hat f_j-f_{j0}\|^2\Big)\Big]=O(n^{-2/3}).$$

The above and (A.2) give

$$\text{(A.4)}\qquad P_n\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)+P\tilde{\ddot\ell}_{\theta\theta}(\theta_0,f_0|Z)(\hat\theta-\theta_0)+\sum_{j=1}^3P\ddot\ell_{\theta f_j}(\theta_0,f_0|Z)[\hat f_j-f_{j0}]=o_p(n^{-1/2}).$$

Similarly,

$$P\dot\ell_f(\hat\theta,\hat f|Z)[g^*]-P\dot\ell_f(\theta_0,f_0|Z)[g^*]-P\ddot\ell_{f\theta}(\theta_0,f_0|Z)[g^*](\hat\theta-\theta_0)-P\ddot\ell_{ff}(\theta_0,f_0|Z)[g^*,\hat f-f_0]$$
$$=O_p\Big(\|\hat\theta-\theta_0\|^2+\sum_{j=1}^3\|\hat f_j-f_{j0}\|^2\Big)=O_p(n^{-2/3}).$$

The above and (A.3) give

$$\text{(A.5)}\qquad P_n\dot\ell_f(\theta_0,f_0|Z)[g^*]+P\ddot\ell_{f\theta}(\theta_0,f_0|Z)[g^*](\hat\theta-\theta_0)+P\ddot\ell_{ff}(\theta_0,f_0|Z)[g^*,\hat f-f_0]=o_p(n^{-1/2}).$$

Now, subtracting (A.5) from (A.4) and multiplying by $\sqrt n$, we get

$$\text{(A.6)}\qquad \sqrt n\,P\big(\tilde{\ddot\ell}_{\theta\theta}(\theta_0,f_0|Z)-\ddot\ell_{f\theta}(\theta_0,f_0|Z)[g^*]\big)(\hat\theta-\theta_0)+\sqrt n\Big(\sum_{j=1}^3P\ddot\ell_{\theta f_j}(\theta_0,f_0|Z)[\hat f_j-f_{j0}]-P\ddot\ell_{ff}(\theta_0,f_0|Z)[g^*,\hat f-f_0]\Big)$$
$$=-\sqrt n\,P_n\big(\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)-\dot\ell_f(\theta_0,f_0|Z)[g^*]\big)+o_p(1)=-\sqrt n(P_n-P)\big(\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)-\dot\ell_f(\theta_0,f_0|Z)[g^*]\big)+o_p(1).$$

It is known that $P\big(\dot\ell_\theta(\theta_0,f_0|Z)\dot\ell'_\theta(\theta_0,f_0|Z)\big)=-P\ddot\ell_{\theta\theta}(\theta_0,f_0|Z)$. In the same way, $P\ddot\ell_{\theta f}(\theta_0,f_0|Z)[g]=-P\big(\dot\ell_\theta(\theta_0,f_0|Z)\dot\ell_f(\theta_0,f_0|Z)[g]\big)$ for all $g$, and $P\big(\ddot\ell_{ff}(\theta_0,f_0|Z)[g_1,g_2]\big)=-P\big(\dot\ell_f(\theta_0,f_0|Z)[g_1]\dot\ell_f(\theta_0,f_0|Z)[g_2]\big)$ for all $g_1,g_2$. Note that (A.1) holds for any $g$. Thus,

$$\sum_{j=1}^3P\ddot\ell_{\theta f_j}(\theta_0,f_0|Z)[\hat f_j-f_{j0}]-P\ddot\ell_{ff}(\theta_0,f_0|Z)[g^*,\hat f-f_0]$$
$$=-P\big(\dot\ell_\theta(\theta_0,f_0|Z)\dot\ell_f(\theta_0,f_0|Z)[\hat f-f_0]\big)+P\big(\dot\ell_f(\theta_0,f_0|Z)[g^*]\dot\ell_f(\theta_0,f_0|Z)[\hat f-f_0]\big)$$
$$=-P\Big(\big(\dot\ell_\theta(\theta_0,f_0|Z)-\dot\ell_f(\theta_0,f_0|Z)[g^*]\big)\dot\ell_f(\theta_0,f_0|Z)[\hat f-f_0]\Big)=0.$$

So (A.6) is

$$P\big(\tilde{\ddot\ell}_{\theta\theta}(\theta_0,f_0|Z)-\ddot\ell_{f\theta}(\theta_0,f_0|Z)[g^*]\big)\sqrt n(\hat\theta-\theta_0)=-\sqrt n(P_n-P)\big(\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)-\dot\ell_f(\theta_0,f_0|Z)[g^*]\big)+o_p(1).$$

Denote $J^*(\theta_0|f_0)=-P\big(\tilde{\ddot\ell}_{\theta\theta}(\theta_0,f_0|Z)-\ddot\ell_{f\theta}(\theta_0,f_0|Z)[g^*]\big)$, $\tilde{\dot\ell}^*_\theta(\theta_0,f_0|Z)=\tilde{\dot\ell}_\theta(\theta_0,f_0|Z)-\dot\ell_f(\theta_0,f_0|Z)[g^*]$, and $I^*(\theta_0|f_0)=E_{(\theta_0,f_0)}\big[\tilde{\dot\ell}^*_\theta(\theta_0,f_0|Z)\tilde{\dot\ell}^{*\,\prime}_\theta(\theta_0,f_0|Z)\big]$. Since $\|\hat\beta\|=1$, $J^*$ is a $d$-dimensional matrix; let $J^{*-}$ be its Moore–Penrose inverse. Then the above can be written as

$$\sqrt n(\hat\theta-\theta_0)=J^{*-}(\theta_0|f_0)\sqrt n(P_n-P)\tilde{\dot\ell}^*_\theta(\theta_0,f_0|Z)+o_p(1),$$

which gives the desired result.

Proof of Theorem 3

We only give the proof for $\hat f_1$; those for $\hat f_2$ and $\hat f_3$ are the same. Let

$$\hat S_{1i}=(\hat\omega_2,\hat\omega_3)\,(\hat\Omega_{23})^{-1}\big(p_i-\hat\alpha_2\hat f_2(x_i'\hat\beta_2),\;m_i-\hat\alpha_3\hat f_3(x_i'\hat\beta_3)\big)',$$

and

$$S_{1i}=(\omega_{2,0},\omega_{3,0})\,(\Omega_{23,0})^{-1}\big(p_i-\alpha_{2,0}f_{2,0}(x_i'\beta_{2,0}),\;m_i-\alpha_{3,0}f_{3,0}(x_i'\beta_{3,0})\big)'.$$

Then

$$\hat f_1=\arg\min_{f_1\in\mathcal F}\frac1n\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-\hat S_{1i}\big)^2.$$

The average on the right-hand side above is

$$\frac1n\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-S_{1i}-(\hat S_{1i}-S_{1i})\big)^2=\frac1n\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-S_{1i}\big)^2-\frac2n\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-S_{1i}\big)(\hat S_{1i}-S_{1i})+\frac1n\sum_{i=1}^n(\hat S_{1i}-S_{1i})^2.$$

By the Lemma in Yuan, Yin and Tan [30], the last term on the right-hand side above is $O_p(n^{-2/3})$, and the second term is $\frac2n\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-S_{1i}\big)O_p(n^{-1/3})=O_p(n^{-1/2})O_p(n^{-1/3})$. Thus

$$\hat f_1=\arg\min_{f_1\in\mathcal F}\Big(\frac1n\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-S_{1i}\big)^2+o_p(n^{-1/3})\Big),$$

and so, asymptotically, we can write $\hat f_1$ as

$$\hat f_1=\arg\min_{f_1\in\mathcal F}\frac1n\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-S_{1i}\big)^2=\arg\min_{f_1\in\mathcal F}\sum_{i=1}^n\big(f_1(x_i'\hat\beta_1)-S_{1i}\big)^2.$$
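Since $\mathcal F$ consists of monotone functions, this least-squares problem over $f_1$ is an isotonic regression of the pseudo-responses $S_{1i}$ on the estimated index $x_i'\hat\beta_1$, solvable by the pooled-adjacent-violators algorithm (PAVA; cf. Robertson et al. [26], Best and Chakravarti [27]). A minimal sketch with simulated stand-in data (the data-generating choices and variable names are illustrative only, not from the paper):

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: isotonic (non-decreasing) least-squares fit."""
    means, weights, sizes = [], [], []   # blocks of (mean, weight, size)
    for v in y:
        means.append(float(v)); weights.append(1.0); sizes.append(1)
        # pool adjacent blocks while monotonicity is violated
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (weights[-2] * means[-2] + weights[-1] * means[-1]) / w
            means[-2:] = [m]; weights[-2:] = [w]
            sizes[-2:] = [sizes[-2] + sizes[-1]]
    return np.repeat(means, sizes)

# Sketch of the profile step for f1_hat (simulated stand-in data):
# sort by the estimated index u_i = x_i' beta1_hat, then fit a monotone
# step function to the pseudo-responses S_1i by isotonic least squares.
rng = np.random.default_rng(2)
n = 300
u = np.sort(rng.normal(size=n))              # plays the role of x_i' beta1_hat
S1 = np.tanh(u) + 0.3 * rng.normal(size=n)   # pseudo-responses S_1i (noisy monotone signal)
f1_hat = pava(S1)                            # monotone estimate at the sorted u_i
assert np.all(np.diff(f1_hat) >= 0)          # non-decreasing by construction
```

In practice the same fit can be obtained with `sklearn.isotonic.IsotonicRegression`; the hand-rolled PAVA above just keeps the sketch self-contained.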

The rest of the proof is similar to that of Theorem 3 in Yuan, Yin and Tan [30] and is omitted.

References

1. Ramaswamy, V, Chanin, ML, Angell, J, Barnett, J, Gaffen, D, Gelman, M, et al. Stratospheric temperature trends: observations and model simulations. Rev Geophys 2001;39:71–122. https://doi.org/10.1029/1999RG000065.

2. Mikeska, T, Alsop, K, Australian Ovarian Cancer Study Group, Mitchell, G, Bowtell, DD, Dobrovic, A. No evidence for PALB2 methylation in high-grade serous ovarian cancer. J Ovarian Res 2013;6:26. https://doi.org/10.1186/1757-2215-6-26.

3. Curran, PJ, Hussong, AM. Integrative data analysis: the simultaneous analysis of multiple data sets. Psychol Methods 2009;14:81–100. https://doi.org/10.1037/a0015914.

4. Gao, J, Aksoy, BA, Dogrusoz, U, Dresdner, G, Gross, B, Sumer, SO, et al. Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal. Sci Signal 2013;6:pl1. https://doi.org/10.1126/scisignal.2004088.

5. Roadmap Epigenomics Consortium, Kundaje, A, Meuleman, W, Ernst, J, Bilenky, M, Yen, A, et al. Integrative analysis of 111 reference human epigenomes. Nature 2015;518:317–30. https://doi.org/10.1038/nature14248.

6. Li, W, Zhou, H, Abujarour, R, Zhu, S, Joo, JY, Lin, T, et al. Generation of human-induced pluripotent stem cells in the absence of exogenous Sox2. Stem Cell 2009;27:2992–3000. https://doi.org/10.1002/stem.240.

7. Cacchiarelli, D, Trapnell, C, Ziller, MJ, Soumillon, M, Cesana, M, Karnik, R, et al. Integrative analyses of human reprogramming reveal dynamic nature of induced pluripotency. Cell 2015;162:412–24. https://doi.org/10.1016/j.cell.2015.06.016.

8. Castro, FG, Kellison, JG, Boyd, SJ, Kopak, A. A methodology for conducting integrative mixed methods research and data analyses. J Mix Methods Res 2010;4:342–60. https://doi.org/10.1177/1558689810382916.

9. Zhao, Q, Shi, X, Huang, J, Liu, J, Li, Y, Ma, S. Integrative analysis of ‘-Omics’ data using penalty functions. Wiley Interdiscip Rev Comput Stat 2015;7:99–108. https://doi.org/10.1002/wics.1322.

10. Fang, H, Huang, H, Yuan, A, Fan, R, Tan, MT. Structural equation modelling for cancer early detection with integrative data; 2019. (Submitted).

11. Shen, R, Wang, S, Mo, Q. Sparse integrative clustering of multiple omics data sets. Ann Appl Stat 2013;7:269–94. https://doi.org/10.1214/12-AOAS578.

12. Lock, EF, Dunson, DB. Bayesian consensus clustering. Bioinformatics 2013;29:2610–6. https://doi.org/10.1093/bioinformatics/btt425.

13. Wang, B, Mezlini, AM, Demir, F, Fiume, M, Tu, Z, Brudno, M, et al. Similarity network fusion for aggregating data types on a genomic scale. Nat Methods 2014;11:333–7. https://doi.org/10.1038/nmeth.2810.

14. Zhang, S, Liu, CC, Li, W, Shen, H, Laird, PW, Zhou, XJ. Discovery of multi-dimensional modules by integrative analysis of cancer genomic data. Nucleic Acids Res 2012;40:9379–91. https://doi.org/10.1093/nar/gks725.

15. Wei, Y. Integrative analyses of cancer data: a review from a statistical perspective. Cancer Inform 2015;14:173–81. https://doi.org/10.4137/CIN.S17303.

16. Klein, RW, Spady, RH. An efficient semiparametric estimator for binary response models. Econometrica 1993;61:387. https://doi.org/10.2307/2951556.

17. Cox, DR. Regression models and life-tables. J Roy Stat Soc B 1972;34:187–202. https://doi.org/10.1111/j.2517-6161.1972.tb00899.x.

18. Qin, J, Garcia, TP, Ma, Y, Tang, MX, Marder, K, Wang, Y. Combining isotonic regression and EM algorithm to predict genetic risk under monotonicity constraint. Ann Appl Stat 2014;8:1182–208. https://doi.org/10.1214/14-AOAS730.

19. Yuan, A, Chen, X, Zhou, Y, Tan, MT. Subgroup analysis with semiparametric models toward precision medicine. Stat Med 2018;37:1830–45. https://doi.org/10.1002/sim.7638.

20. Tibshirani, R. Regression shrinkage and selection via the lasso. J Roy Stat Soc B 1996;58:267–88. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x.

21. Tibshirani, R. The lasso method for variable selection in the Cox model. Stat Med 1997;16:385–95. https://doi.org/10.1002/(SICI)1097-0258(19970228)16:4<385::AID-SIM380>3.0.CO;2-3.

22. Edwards, D. An introduction to graphical modelling, 2nd ed. New York: Springer Verlag; 2000.

23. Anandkumar, A, Tan, VYF, Huang, F, Willsky, A. High-dimensional Gaussian graphical model selection: walk summability and local separation criterion. J Mach Learn Res 2012;13:2293–337.

24. Yuan, M, Lin, Y. Model selection and estimation in the Gaussian graphical model. Biometrika 2007;94:19–35. https://doi.org/10.1093/biomet/asm018.

25. Friedman, J, Hastie, T, Tibshirani, R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 2008;9:432–41. https://doi.org/10.1093/biostatistics/kxm045.

26. Robertson, T, Wright, FT, Dykstra, R. Order restricted statistical inference. Chichester, New York, Brisbane, Toronto, Singapore: John Wiley & Sons; 1988.

27. Best, MJ, Chakravarti, N. Active set algorithms for isotonic regression; a unifying framework. Math Program 1990;47:425–39. https://doi.org/10.1007/BF01580873.

28. van der Vaart, A. Semiparametric statistics. In: Lectures on probability theory and statistics, Part III. Berlin: Springer; 2002.

29. van der Vaart, A, Wellner, J. Weak convergence and empirical processes. New York: Springer; 1996.

30. Yuan, A, Yin, A, Tan, MT. Semiparametric subgroup causal inference on treatment difference; 2019. (Submitted).

Received: 2019-06-17
Accepted: 2020-07-18
Published Online: 2020-09-16

© 2020 Walter de Gruyter GmbH, Berlin/Boston
