
Variable selection for bivariate interval-censored failure time data under linear transformation models

Rong Liu, Mingyue Du and Jianguo Sun
Published/Copyright: June 3, 2022

Abstract

Variable selection is needed and performed in almost every field, and a large literature on it has been established, especially in the context of linear models or for complete data. Many authors have also investigated the variable selection problem for incomplete data such as right-censored failure time data. In this paper, we discuss variable selection when one faces bivariate interval-censored failure time data arising from a linear transformation model, a situation for which no established procedure seems to exist. For the problem, a penalized maximum likelihood approach is proposed, and in particular, a novel Poisson-based EM algorithm is developed for its implementation. The oracle property of the proposed method is established, and numerical studies suggest that the method works well in practical situations.


Corresponding author: Mingyue Du, Center for Applied Statistical Research, School of Mathematics, Jilin University, Changchun 130012, China, E-mail:

Funding source: National Natural Science Foundation of China http://dx.doi.org/10.13039/501100001809

Award Identifier / Grant number: 12101522

Acknowledgements

The authors wish to thank an Associate Editor and two reviewers for their helpful and insightful comments and suggestions that greatly improved the paper. An R package implementing the proposed approach is available from the first author.

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: This research was partly supported by National Natural Science Foundation of China (12101522).

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

Appendix A: Proof of Theorem 1

To prove Theorem 1, let $g(\eta_i;\gamma) = (2\pi)^{-d/2}|\gamma|^{-1}\exp\left(-\eta_i^{\mathrm T}\gamma^{-2}\eta_i/2\right)$ and assume that the regularity conditions needed for the consistency of the maximum likelihood estimators of $\Lambda_{0j}$ given in Zeng et al. [16] hold. We also need the following regularity conditions.

Condition 1

There exists a compact neighborhood $B_0$ of the true value $\beta_0$ such that

$$\sup_{\beta\in B_0}\left\| n^{-1}\Omega_n(\beta) - I(\beta_0)\right\| \xrightarrow{a.s.} 0,$$

where $I(\beta_0)$ is a positive definite matrix and $n^{-1}\Omega_n(\beta) = \partial^2 H(\beta\mid\tilde\Lambda_{01},\tilde\Lambda_{02})/\partial\beta\,\partial\beta^{\mathrm T}$.

Condition 2

There exists a constant $c > 1$ such that $c^{-1} < \mathrm{eigen}_{\min}\left(n^{-1}\Omega_n\right) \le \mathrm{eigen}_{\max}\left(n^{-1}\Omega_n\right) < c$ for sufficiently large $n$, where $\mathrm{eigen}_{\min}(\cdot)$ and $\mathrm{eigen}_{\max}(\cdot)$ stand for the smallest and largest eigenvalues of a matrix. There exist positive constants $a_0$ and $a_1$ such that $a_0 \le |\beta_{0j}| \le a_1$, $1\le j\le q_n$.

Condition 3

As $n\to\infty$, $p_nq_n/n\to 0$, $\lambda_n/\sqrt{n}\to 0$, $\lambda_n\sqrt{q_n/n}\to 0$ and $\lambda_n^2/(p_n n)\to\infty$.

Condition 4

The initial estimator $\hat\beta^{(0)}$ satisfies $\left\|\hat\beta^{(0)} - \beta_0\right\| = O_p\left(\sqrt{p_n/n}\right)$.

Conditions 1 and 2 assume that $n^{-1}\Omega_n(\beta)$ is positive definite almost surely with eigenvalues bounded away from zero and infinity, and that the nonzero coefficients are uniformly bounded away from zero and infinity. Condition 3 gives some sufficient, but not necessary, conditions needed to prove the numerical convergence and asymptotic properties of the BAR estimator. Condition 4 is used to establish the oracle property of $\hat\beta$.

In addition, we need the notation and two lemmas below. Define

(A.1) $\begin{pmatrix}\alpha^*(\beta)\\ \gamma^*(\beta)\end{pmatrix} \equiv g(\beta) = \left(\Omega_n + \lambda_n D(\beta)\right)^{-1} v_n$

and partition the matrix $\left(n^{-1}\Omega_n\right)^{-1}$ into

$$\left(n^{-1}\Omega_n\right)^{-1} = \begin{pmatrix} A & B\\ B^{\mathrm T} & G\end{pmatrix},$$

where $A$ is a $q_n\times q_n$ matrix. Since $\Omega_n$ is nonsingular, multiplying both sides of (A.1) by $\Omega_n^{-1}\left(\Omega_n + \lambda_n D(\beta)\right)$ and subtracting $\beta_0$ gives

(A.2) $\begin{pmatrix}\alpha^* - \beta_{01}\\ \gamma^*\end{pmatrix} + \dfrac{\lambda_n}{n}\begin{pmatrix} AD_1(\beta_1)\alpha^* + BD_2(\beta_2)\gamma^*\\ B^{\mathrm T}D_1(\beta_1)\alpha^* + GD_2(\beta_2)\gamma^*\end{pmatrix} = \hat b - \beta_0,$

where $\hat b = \Omega_n^{-1}v_n$, $D_1(\beta_1) = \mathrm{diag}\left(\beta_1^{-2},\ldots,\beta_{q_n}^{-2}\right)$ and $D_2(\beta_2) = \mathrm{diag}\left(\beta_{q_n+1}^{-2},\ldots,\beta_{p_n}^{-2}\right)$.

Lemma 1

Let $\delta_n$ be a sequence of positive real numbers satisfying $\delta_n\to\infty$ and $\delta_n^2p_n/\lambda_n\to 0$. Define $H_n(\beta) = \left\{(\beta_1,\beta_2): \beta_1\in[1/K_0,K_0]^{q_n},\ \|\beta_2\|\le\delta_n\sqrt{p_n/n}\right\}$, where $K_0>1$ is a constant such that $\beta_{01}\in[1/K_0,K_0]^{q_n}$. Then under the regularity conditions (C1)–(C4) given above and with probability tending to 1, we have

  1. $\sup_{\beta\in H_n}\|\gamma^*(\beta)\|/\|\beta_2\| < 1/c_0$ for some constant $c_0 > 1$;

  2. $g(\cdot)$ is a mapping from $H_n$ to itself.

Proof

First, from formula (3.11), it is easy to see that $\hat b$ equals $g(\beta)$ with $\lambda_n = 0$, which is the maximizer of the log-likelihood function. By using arguments similar to those in Zhang et al. [29] together with the results of Zeng et al. [16], one can obtain that $\|\hat b - \beta_0\| = O_p\left(\sqrt{p_n/n}\right)$. Hence it follows from (A.2) that

$$\sup_{\beta\in H_n}\left\|\gamma^* + \frac{\lambda_n}{n}\left(B^{\mathrm T}D_1(\beta_1)\alpha^* + GD_2(\beta_2)\gamma^*\right)\right\| = O_p\left(\sqrt{p_n/n}\right).$$

By condition (C2) and the fact that

$$\|B\|^2 \le \left\|BB^{\mathrm T} + A^2\right\| \le \left\|\left(n^{-1}\Omega_n\right)^{-1}\right\|^2 < c^2,$$

we can derive $\|B\| < c$. Furthermore, based on conditions (C2) and (C3) and noting that $\beta_1\in[1/K_0,K_0]^{q_n}$ and $\|\alpha^*\| \le \|g(\beta)\| \le \|\hat b\| = O_p\left(\sqrt{p_n}\right)$, we have

(A.3) $\sup_{\beta\in H_n}\left\|\dfrac{\lambda_n}{n}B^{\mathrm T}D_1(\beta_1)\alpha^*\right\| = o_p\left(\sqrt{p_n/n}\right).$

Since $\lambda_{\min}(G) > c^{-1}$, it follows from (A.2) that, with probability tending to 1,

(A.4) $\dfrac{c^{-1}\lambda_n}{n}\left\|D_2(\beta_2)\gamma^*\right\| - \|\gamma^*\| \le \sup_{\beta\in H_n}\left\|\gamma^* + \dfrac{\lambda_n}{n}GD_2(\beta_2)\gamma^*\right\| = O_p\left(\sqrt{\dfrac{p_n}{n}}\right) \le \delta_n\sqrt{\dfrac{p_n}{n}}.$

Let $m(\gamma^*/\beta_2) = \left(\gamma_1^*/\beta_{q_n+1},\ \gamma_2^*/\beta_{q_n+2},\ \ldots,\ \gamma_{p_n-q_n}^*/\beta_{p_n}\right)^{\mathrm T}$. It then follows from the Cauchy–Schwarz inequality and the assumption $\|\beta_2\|\le\delta_n\sqrt{p_n/n}$ that

$$\left\|m(\gamma^*/\beta_2)\right\| \le \left\|D_2(\beta_2)\gamma^*\right\|\,\delta_n\sqrt{p_n/n},$$

and

(A.5) $\|\gamma^*\| = \left\|D_2(\beta_2)^{-1/2}m(\gamma^*/\beta_2)\right\| \le \left\|m(\gamma^*/\beta_2)\right\|\,\|\beta_2\| \le \left\|m(\gamma^*/\beta_2)\right\|\,\delta_n\sqrt{p_n/n}$

for all large n. Thus, from Eqs. (A.4) and (A.5), we have the following inequality:

$$\frac{\lambda_n}{c\,\delta_n\sqrt{np_n}}\left\|m(\gamma^*/\beta_2)\right\| - \left\|m(\gamma^*/\beta_2)\right\|\,\delta_n\sqrt{\frac{p_n}{n}} \le \delta_n\sqrt{\frac{p_n}{n}}.$$

Since $p_n\delta_n^2/\lambda_n\to 0$, it follows immediately that

(A.6) $\left\|m(\gamma^*/\beta_2)\right\| \le \left(\dfrac{\lambda_n}{c\,p_n\delta_n^2} - 1\right)^{-1} < \dfrac{1}{c_0}$ for some $c_0 > 1$

with probability tending to one. Hence, it follows from Eqs. (A.5) and (A.6) that

(A.7) $\|\gamma^*\| < \|\beta_2\| \le \delta_n\sqrt{p_n/n} \to 0 \text{ as } n\to\infty,$

which implies that conclusion (i) holds.

To prove (ii), we only need to verify that $\alpha^*\in[1/K_0,K_0]^{q_n}$ with probability tending to 1, since (A.7) has shown that $\|\gamma^*\|\le\delta_n\sqrt{p_n/n}$ with probability tending to 1. Analogously, given condition (C2), $\beta_1\in[1/K_0,K_0]^{q_n}$ and $\|\alpha^*\| = O_p\left(\sqrt{p_n}\right)$, we have

$$\sup_{\beta\in H_n}\left\|\frac{\lambda_n}{n}AD_1(\beta_1)\alpha^*\right\| = o_p\left(\sqrt{p_n/n}\right).$$

Then from (A.2), we have

(A.8) $\sup_{\beta\in H_n}\left\|\alpha^* - \beta_{01} + \dfrac{\lambda_n}{n}BD_2(\beta_2)\gamma^*\right\| = O_p\left(\sqrt{p_n/n}\right) \le \delta_n\sqrt{p_n/n},$

and according to (A.4) and (A.7), we have $\dfrac{\lambda_n}{n}\left\|D_2(\beta_2)\gamma^*\right\| \le 2c\,\delta_n\sqrt{p_n/n}$. Hence, based on condition (C2), we know that as $n\to\infty$ and with probability tending to one,

(A.9) $\sup_{\beta\in H_n}\left\|\dfrac{\lambda_n}{n}BD_2(\beta_2)\gamma^*\right\| \le \|B\|\,\sup_{\beta\in H_n}\dfrac{\lambda_n}{n}\left\|D_2(\beta_2)\gamma^*\right\| \le 2c^2\delta_n\sqrt{\dfrac{p_n}{n}}.$

Therefore, from (A.8) and (A.9), we can get

$$\sup_{\beta\in H_n}\left\|\alpha^* - \beta_{01}\right\| \le \left(2c^2 + 1\right)\delta_n\sqrt{\frac{p_n}{n}} \to 0$$

with probability tending to one, which implies that for any $\epsilon > 0$, $P\left(\|\alpha^* - \beta_{01}\|\le\epsilon\right)\to 1$. Thus, it follows from $\beta_{01}\in[1/K_0,K_0]^{q_n}$ that $\alpha^*\in[1/K_0,K_0]^{q_n}$ holds for large $n$, which implies that conclusion (ii) holds. This completes the proof.

Lemma 2

Under the regularity conditions (C1)–(C4) given above and with probability tending to 1, the equation $\alpha = \left(\Omega_n^{(1)} + \lambda_nD_1(\alpha)\right)^{-1}v_n^{(1)}$ has a unique fixed point $\hat\alpha^*$ in the domain $[1/K_0,K_0]^{q_n}$.

Proof

Define

(A.10) $f(\alpha) = \left(f_1(\alpha), f_2(\alpha),\ldots,f_{q_n}(\alpha)\right)^{\mathrm T} \equiv \left(\Omega_n^{(1)} + \lambda_nD_1(\alpha)\right)^{-1}v_n^{(1)},$

where $\alpha = \left(\alpha_1,\ldots,\alpha_{q_n}\right)^{\mathrm T}$. Multiplying both sides of (A.10) by $\left(\Omega_n^{(1)}\right)^{-1}\left(\Omega_n^{(1)} + \lambda_nD_1(\alpha)\right)$ and then subtracting $\beta_{01}$, we have

$$f(\alpha) - \beta_{01} + \lambda_n\left(\Omega_n^{(1)}\right)^{-1}D_1(\alpha)f(\alpha) = \left(\Omega_n^{(1)}\right)^{-1}v_n^{(1)} - \beta_{01} = \left(X_1^{\mathrm T}X_1\right)^{-1}X_1^{\mathrm T}\epsilon,$$

where $X_1$ consists of the first $q_n$ columns of $\hat X$ and $\epsilon = y - \hat X\beta$. Therefore,

$$\sup_{\alpha\in[1/K_0,K_0]^{q_n}}\left\|f(\alpha) - \beta_{01} + \lambda_n\left(\Omega_n^{(1)}\right)^{-1}D_1(\alpha)f(\alpha)\right\| = O_p\left(\sqrt{q_n/n}\right).$$

Similar to (A.3), it can be shown that

$$\sup_{\alpha\in[1/K_0,K_0]^{q_n}}\left\|\frac{\lambda_n}{n}\left(n^{-1}\Omega_n^{(1)}\right)^{-1}D_1(\alpha)f(\alpha)\right\| = o_p\left(\sqrt{q_n/n}\right).$$

Thus

$$\sup_{\alpha\in[1/K_0,K_0]^{q_n}}\left\|f(\alpha) - \beta_{01}\right\| \le \delta_n\sqrt{q_n/n} \to 0,$$

which implies that $f(\alpha)\in[1/K_0,K_0]^{q_n}$ with probability tending to one. That is, $f(\alpha)$ is a mapping from $[1/K_0,K_0]^{q_n}$ to itself.

Also, multiplying both sides of (A.10) by $\left(\Omega_n^{(1)} + \lambda_nD_1(\alpha)\right)$ and taking derivatives with respect to $\alpha$, we have

$$\left(\frac1n\Omega_n^{(1)} + \frac{\lambda_n}{n}D_1(\alpha)\right)\dot f(\alpha) + \frac{\lambda_n}{n}\,\mathrm{diag}\left(-2f_1(\alpha)\alpha_1^{-3},\ldots,-2f_{q_n}(\alpha)\alpha_{q_n}^{-3}\right) = 0,$$

where $\dot f(\alpha) = \partial f(\alpha)/\partial\alpha^{\mathrm T}$. Then

$$\sup_{\alpha\in[1/K_0,K_0]^{q_n}}\left\|\left(\frac1n\Omega_n^{(1)} + \frac{\lambda_n}{n}D_1(\alpha)\right)\dot f(\alpha)\right\| = \sup_{\alpha\in[1/K_0,K_0]^{q_n}}\left\|\frac{2\lambda_n}{n}\,\mathrm{diag}\left(f_1(\alpha)\alpha_1^{-3},\ldots,f_{q_n}(\alpha)\alpha_{q_n}^{-3}\right)\right\| = o_p(1).$$

According to condition (C3) and the fact that $\alpha\in[1/K_0,K_0]^{q_n}$, we can derive

$$\left\|\left(\frac1n\Omega_n^{(1)} + \frac{\lambda_n}{n}D_1(\alpha)\right)\dot f(\alpha)\right\| \ge \left\|\frac1n\Omega_n^{(1)}\dot f(\alpha)\right\| - \frac{\lambda_n}{n}\left\|D_1(\alpha)\dot f(\alpha)\right\| \ge \left(\frac1c - \frac{\lambda_n}{n}K_0^2\right)\left\|\dot f(\alpha)\right\|.$$

Thus we have $\sup_{\alpha\in[1/K_0,K_0]^{q_n}}\|\dot f(\alpha)\|\to 0$, which implies that $f(\cdot)$ is a contraction mapping from $[1/K_0,K_0]^{q_n}$ to itself with probability tending to one. Hence, by the contraction mapping theorem, there exists a unique fixed point $\hat\alpha^*\in[1/K_0,K_0]^{q_n}$ such that

(A.11) $\hat\alpha^* = \left(\Omega_n^{(1)} + \lambda_nD_1(\hat\alpha^*)\right)^{-1}v_n^{(1)}.$
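To see the behavior that Lemmas 1 and 2 formalize — iterating a ridge-type map whose fixed point keeps the large coefficients and drives the small ones to zero — the following sketch runs the analogous broken adaptive ridge iteration on a toy linear model. Here $\Omega_n = X^{\mathrm T}X$ and $v_n = X^{\mathrm T}y$ are linear-model stand-ins for the paper's likelihood-based quantities, and the dimensions, data and $\lambda_n$ are illustrative only, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 1.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

Omega = X.T @ X            # stand-in for Omega_n
v = X.T @ y                # stand-in for v_n
lam = 2.0 * np.sqrt(n)     # lambda_n, chosen so lambda_n / sqrt(n) stays bounded

# ridge-type initial estimator, as in Condition 4
beta = np.linalg.solve(Omega + np.eye(p), v)
for _ in range(100):
    # D(beta) = diag(beta_j^{-2}); floor beta^2 to avoid division by zero
    D = np.diag(1.0 / np.maximum(beta**2, 1e-12))
    beta_new = np.linalg.solve(Omega + lam * D, v)   # the map g(beta)
    if np.max(np.abs(beta_new - beta)) < 1e-10:      # fixed point reached
        beta = beta_new
        break
    beta = beta_new

# the first three entries stay near their true values (with some shrinkage),
# while the last three are driven to numerical zero
print(np.round(beta, 3))
```

The iteration converges in a handful of steps: the truly zero coefficients shrink roughly quadratically toward zero, matching the contraction argument of Lemma 2.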

Proof of Theorem 1

First, according to the definitions of $\hat\beta^*$ and $\hat\beta_2^{(k)}$, it follows from (A.7) that

(A.12) $\hat\beta_2^* \equiv \lim_{k\to\infty}\hat\beta_2^{(k)} = 0$

with probability tending to 1; that is, conclusion (1) holds. Next we show that $P\left(\hat\beta_1^* = \hat\alpha^*\right)\to 1$. For this, consider (A.2) and define $\gamma^* = 0$ if $\beta_2 = 0$. Note that for any fixed large $n$, from (A.2), we have

$$\lim_{\beta_2\to 0}\gamma^*(\beta) = 0.$$

Furthermore, by multiplying Ω n + λ n D ( β ) on both sides of (A.1), we can get

(A.13) $\lim_{\beta_2\to 0}\alpha^*(\beta) = \left(\Omega_n^{(1)} + \lambda_nD_1(\beta_1)\right)^{-1}v_n^{(1)} = f(\beta_1).$

By combining Eqs. (A.12) and (A.13), it follows that

(A.14) $\eta_k \equiv \sup_{\beta_1\in[1/K_0,K_0]^{q_n}}\left\|f(\beta_1) - \alpha^*\left(\beta_1,\hat\beta_2^{(k)}\right)\right\| \to 0 \text{ as } k\to\infty,$

since $f(\cdot)$ is a contraction mapping. Hence (A.11) yields

(A.15) $\left\|f\left(\hat\beta_1^{(k)}\right) - \hat\alpha^*\right\| = \left\|f\left(\hat\beta_1^{(k)}\right) - f\left(\hat\alpha^*\right)\right\| \le \dfrac1c\left\|\hat\beta_1^{(k)} - \hat\alpha^*\right\|, \quad (c > 1).$

Let $h_k = \left\|\hat\beta_1^{(k)} - \hat\alpha^*\right\|$. It then follows from (A.14) and (A.15) that

$$h_{k+1} = \left\|\alpha^*\left(\hat\beta^{(k)}\right) - \hat\alpha^*\right\| \le \left\|\alpha^*\left(\hat\beta^{(k)}\right) - f\left(\hat\beta_1^{(k)}\right)\right\| + \left\|f\left(\hat\beta_1^{(k)}\right) - \hat\alpha^*\right\| \le \eta_k + \frac1c h_k.$$

From (A.14), for any $\epsilon > 0$, there exists $N > 0$ such that $\eta_k < \epsilon$ whenever $k > N$. A recursive calculation then gives $h_k\to 0$ as $k\to\infty$. Hence, with probability tending to one, we have

$$\left\|\hat\beta_1^{(k)} - \hat\alpha^*\right\| \to 0 \text{ as } k\to\infty.$$

Since $\hat\beta_1 \equiv \lim_{k\to\infty}\hat\beta_1^{(k)}$, it follows from the uniqueness of the fixed point that

$$P\left(\hat\beta_1 = \hat\alpha^*\right) \to 1.$$

Finally, based on (A.11), we have $\sqrt n\left(\hat\alpha^* - \beta_{01}\right) = \Pi_1 + \Pi_2$, where

$$\Pi_1 \equiv \sqrt n\left[\left(\Omega_n^{(1)} + \lambda_nD_1(\hat\alpha^*)\right)^{-1}\Omega_n^{(1)} - I_{q_n}\right]\beta_{01}$$

and

$$\Pi_2 \equiv \sqrt n\left(\Omega_n^{(1)} + \lambda_nD_1(\hat\alpha^*)\right)^{-1}\left(v_n^{(1)} - \Omega_n^{(1)}\beta_{01}\right).$$

It follows from the first-order resolvent expansion formula that

(A.16) $\left(\Omega_n^{(1)} + \lambda_nD_1(\hat\alpha^*)\right)^{-1} = \left(\Omega_n^{(1)}\right)^{-1} - \lambda_n\left(\Omega_n^{(1)}\right)^{-1}D_1(\hat\alpha^*)\left(\Omega_n^{(1)} + \lambda_nD_1(\hat\alpha^*)\right)^{-1}.$

This yields that

$$\Pi_1 = -\frac{\lambda_n}{\sqrt n}\left(\frac1n\Omega_n^{(1)}\right)^{-1}D_1(\hat\alpha^*)\left(\frac1n\Omega_n^{(1)} + \frac{\lambda_n}{n}D_1(\hat\alpha^*)\right)^{-1}\left(\frac1n\Omega_n^{(1)}\right)\beta_{01}.$$

By the assumption (C2) and (C3), we have

$$\left\|\Pi_1\right\| = O_p\left(\lambda_n\sqrt{q_n/n}\right) \to 0.$$

Furthermore, it follows from (A.16) and the assumption λ n / n 0 that

$$\Pi_2 = \sqrt n\left\{\left(\frac1n\Omega_n^{(1)}\right)^{-1} - o_p\left(1/\sqrt n\right)\right\}\left(\frac1n v_n^{(1)} - \frac1n\Omega_n^{(1)}\beta_{01}\right).$$

Denote $l_n(\beta\mid\Lambda_0) = -nH(\beta\mid\Lambda_0)$ with $\Lambda_0 = (\Lambda_{01},\Lambda_{02})$. Then we have $n^{-1/2}\left(v_n^{(1)} - \Omega_n^{(1)}\beta_{01}\right) = n^{-1/2}\,\dot l_n^{(1)}\left(\hat\beta\mid\hat\Lambda_0\right) + o_p(1)$, with $\dot l_n^{(1)}\left(\hat\beta\mid\hat\Lambda_0\right)$ denoting the first $q_n$ components of $\dot l_n\left(\hat\beta\mid\hat\Lambda_0\right)$. Let $I(\beta) = -E\left\{\ddot l_n\left(\beta\mid\hat\Lambda_0\right)\right\}$ be the Fisher information matrix, where $\ddot l_n(\beta\mid\Lambda_0)$ is the Hessian matrix of $l_n$ with respect to $\beta$. Since $n^{-1/2}\,\dot l_n\left(\hat\beta\mid\hat\Lambda_0\right) \to N\left(0, n^{-1}I(\beta_0)\right)$, we have $\sqrt n\left(\hat\alpha^* - \beta_{01}\right) \to N_{q_n}(0,\Sigma)$ with $\Sigma = n\left(\Omega_n^{(1)}(\beta_0)\right)^{-1}I^{(1)}(\beta_0)\left(\Omega_n^{(1)}(\beta_0)\right)^{-1}$, where $I^{(1)}(\beta_0)$ is the leading $q_n\times q_n$ submatrix of $I(\beta_0)$. This completes the proof.

Appendix B: Expressions of conditional expectations

In this appendix, we provide the expressions of several conditional expectations used in the E-step of the algorithm. More specifically,

(6.1) $E\left(Z_{ijq}\right) = E_{\eta_i}\left\{\dfrac{c_{jq}\exp\left(X_i^{\mathrm T}\beta + \eta_i\right)}{\exp\left\{-G_j(S_{ij1})\right\} - \exp\left\{-G_j(S_{ij2})\right\}}\displaystyle\int_{\xi_{ij}}\xi_{ij}\,\dfrac{\exp\left(-\xi_{ij}S_{ij1}\right) - \exp\left(-\xi_{ij}S_{ij2}\right)}{1 - \exp\left\{-\xi_{ij}\left(S_{ij2} - S_{ij1}\right)\right\}}\,f\left(\xi_{ij}\mid r_j\right)d\xi_{ij}\right\} I\left(L_i < t_{jq}\le R_i\right) + c_{jq}\exp\left(X_i^{\mathrm T}\beta\right)E\left\{\xi_{ij}\exp(\eta_i)\right\} I\left(t_{jq} > R_i\right)I\left(R_i < \infty\right) + c_{jq}\exp\left(X_i^{\mathrm T}\beta\right)E\left\{\xi_{ij}\exp(\eta_i)\right\} I\left(t_{jq} > L_i\right)I\left(R_i = \infty\right),$

(6.2) $E\left\{\xi_{ij}\exp(\eta_i)\right\} = E_{\eta_i}\left\{\exp(\eta_i)\,\dfrac{\exp\left\{-G_j(S_{ij1})\right\}G_j'(S_{ij1}) - \exp\left\{-G_j(S_{ij2})\right\}G_j'(S_{ij2})}{\exp\left\{-G_j(S_{ij1})\right\} - \exp\left\{-G_j(S_{ij2})\right\}}\right\} I\left(R_i < \infty\right) + E_{\eta_i}\left\{\exp(\eta_i)\,G_j'(S_{ij1})\right\} I\left(R_i = \infty\right),$

(6.3) $E\left\{h(\eta_i)\right\} = \dfrac{\displaystyle\int_{\eta_i} h(\eta_i)\prod_{j=1}^2\left[\exp\left\{-G_j(S_{ij1})\right\} - \exp\left\{-G_j(S_{ij2})\right\}\right]^{\delta_{ij}}\left[\exp\left\{-G_j(S_{ij1})\right\}\right]^{1-\delta_{ij}}g\left(\eta_i\mid\gamma\right)d\eta_i}{\displaystyle\int_{\eta_i}\prod_{j=1}^2\left[\exp\left\{-G_j(S_{ij1})\right\} - \exp\left\{-G_j(S_{ij2})\right\}\right]^{\delta_{ij}}\left[\exp\left\{-G_j(S_{ij1})\right\}\right]^{1-\delta_{ij}}g\left(\eta_i\mid\gamma\right)d\eta_i}.$

In the above, $h(\eta_i)$ denotes an arbitrary function of $\eta_i$, $S_{ij1} = \sum_{t_{jq}\le L_{ij}} c_{jq}\exp\left(X_i^{\mathrm T}\beta + \eta_i\right)$, and $S_{ij2} = \sum_{t_{jq}\le R_{ij}} c_{jq}\exp\left(X_i^{\mathrm T}\beta + \eta_i\right)$. For the determination of $E\{h(\eta_i)\}$, we suggest employing the probability integral transformation to transform $\eta_i$ into a standard normal random variable and then applying the Gauss–Hermite quadrature method. Note that

(6.4) $\int_{\xi_{ij}}\xi_{ij}\,e^{-x\xi_{ij}}f\left(\xi_{ij}\mid r_j\right)d\xi_{ij} = \left(r_jx + 1\right)^{-r_j^{-1}-1}$

when $f\left(\xi_{ij}\mid r_j\right)$ denotes the gamma density function with known parameter $r_j$. Furthermore, we propose employing the Gauss–Laguerre quadrature technique to calculate the following integral, which has no closed form:

$$\int_{\xi_{ij}}\xi_{ij}\,\frac{\exp\left(-\xi_{ij}S_{ij1}\right) - \exp\left(-\xi_{ij}S_{ij2}\right)}{1 - \exp\left\{-\xi_{ij}\left(S_{ij2} - S_{ij1}\right)\right\}}\,f\left(\xi_{ij}\mid r_j\right)d\xi_{ij}.$$
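The two quadrature steps suggested above can be sketched as follows. The sketch assumes $f(\xi\mid r)$ is the Gamma$(1/r, 1/r)$ density (mean one, variance $r$), the usual frailty choice behind a logarithmic transformation $G_j$; the values of $r$, $x$, $S_{ij1}$, $S_{ij2}$ and the Gauss–Hermite weight function are illustrative stand-ins, not quantities from the paper.

```python
import numpy as np
from math import gamma as gamma_fn
from numpy.polynomial.laguerre import laggauss
from numpy.polynomial.hermite import hermgauss

# --- Gauss-Laguerre for the xi-integrals --------------------------------
r = 0.5
a = 1.0 / r                      # gamma shape (assumed Gamma(1/r, 1/r) frailty)
b = 1.0 / r                      # gamma rate

u, w = laggauss(40)              # rule for int_0^inf e^{-u} phi(u) du

def expect_xi(phi):
    # E{phi(xi)}: substituting xi = u / b turns the e^{-b xi} factor of the
    # gamma density into the Laguerre weight e^{-u}
    xi = u / b
    return float(np.sum(w * phi(xi) * u ** (a - 1.0)) / gamma_fn(a))

# check the closed form (6.4): E{xi e^{-x xi}} = (r x + 1)^(-1/r - 1)
x = 0.7
lhs = expect_xi(lambda xi: xi * np.exp(-x * xi))
rhs = (r * x + 1.0) ** (-1.0 / r - 1.0)

# the integral stated to have no closed form, for illustrative S_ij1 < S_ij2
S1, S2 = 0.4, 1.1
item = expect_xi(lambda xi: xi * (np.exp(-xi * S1) - np.exp(-xi * S2))
                 / (1.0 - np.exp(-xi * (S2 - S1))))

# --- Gauss-Hermite for E{h(eta_i)} --------------------------------------
# one-dimensional illustration of the ratio (6.3); weight(eta) is a
# hypothetical stand-in for the bracketed likelihood factor
sigma = 0.8                                        # assumed frailty sd
weight = lambda eta: np.exp(-0.5 * np.exp(eta))    # e.g. exp(-S * e^eta)
h = lambda eta: np.exp(eta)

t, wh = hermgauss(30)            # rule for int phi(t) e^{-t^2} dt
eta = np.sqrt(2.0) * sigma * t   # change of variable to N(0, sigma^2)
gh = np.sum(wh * h(eta) * weight(eta)) / np.sum(wh * weight(eta))

# brute-force check on a uniform grid (the grid spacing cancels in the ratio)
g = np.linspace(-8.0, 8.0, 20001)
dens = np.exp(-g ** 2 / (2.0 * sigma ** 2))
ref = np.sum(h(g) * weight(g) * dens) / np.sum(weight(g) * dens)

print(lhs, rhs, item, gh, ref)
```

With 40 Laguerre nodes the quadrature reproduces the closed form (6.4) to high accuracy, and the 30-node Gauss–Hermite ratio agrees closely with the brute-force grid evaluation.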

References

1. Dai, L, Chen, K, Sun, Z, Liu, Z, Li, G. Broken adaptive ridge regression and its asymptotic properties. J Multivariate Anal 2018;168:334–51. https://doi.org/10.1016/j.jmva.2018.08.007.

2. Fan, J, Li, R. Variable selection via nonconcave penalized likelihood and its oracle property. J Am Stat Assoc 2001;96:1348–60. https://doi.org/10.1198/016214501753382273.

3. Liu, Z, Li, G. Efficient regularized regression with L0 penalty for variable selection and network construction. Comput Math Methods Med 2016:3456153. https://doi.org/10.1155/2016/3456153.

4. Tibshirani, R. Regression shrinkage and selection via the lasso. J Roy Stat Soc B 1996;58:267–88. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x.

5. Zou, H. The adaptive lasso and its oracle properties. J Am Stat Assoc 2006;101:1418–29. https://doi.org/10.1198/016214506000000735.

6. Dicker, L, Huang, B, Lin, X. Variable selection and estimation with the seamless-L0 penalty. Stat Sin 2013;23:929–62. https://doi.org/10.5705/ss.2011.074.

7. Cai, J, Fan, J, Li, R, Zhou, H. Variable selection for multivariate failure time data. Biometrika 2005;92:303–16. https://doi.org/10.1093/biomet/92.2.303.

8. Tibshirani, R. The lasso method for variable selection in the Cox model. Stat Med 1997;16:385–95. https://doi.org/10.1002/(sici)1097-0258(19970228)16:4<385::aid-sim380>3.0.co;2-3.

9. Zhang, H, Lu, W. Adaptive lasso for Cox's proportional hazards model. Biometrika 2007;94:691–703. https://doi.org/10.1093/biomet/asm037.

10. Zhao, H, Wu, Q, Li, G, Sun, J. Simultaneous estimation and variable selection for interval-censored data with broken adaptive ridge regression. J Am Stat Assoc 2020. https://doi.org/10.1080/01621459.2018.1537922.

11. Finkelstein, DM. A proportional hazards model for interval-censored failure time data. Biometrics 1986;42:845–54. https://doi.org/10.2307/2530698.

12. Sun, J. The statistical analysis of interval-censored failure time data. New York: Springer; 2006.

13. Wang, P, Zhao, H, Du, M, Sun, J. Inference on semiparametric transformation model with general interval-censored failure time data. J Nonparametr Stat 2018;30:758–73. https://doi.org/10.1080/10485252.2018.1478091.

14. Wang, L, McMahan, CS, Hudgens, MG, Qureshi, ZP. A flexible, computationally efficient method for fitting the proportional hazards model to interval-censored data. Biometrics 2016;72:222–31. https://doi.org/10.1111/biom.12389.

15. Wang, L, Sun, J, Tong, X. Regression analysis of case II interval-censored failure time data with the additive hazards model. Stat Sin 2010;20:1709.

16. Zeng, D, Gao, F, Lin, DY. Maximum likelihood estimation for semiparametric regression models with multivariate interval-censored data. Biometrika 2017;104:505–25. https://doi.org/10.1093/biomet/asx029.

17. Sun, T, Ding, Y. Copula-based semiparametric regression method for bivariate data under general interval censoring. Biostatistics 2019;10:1–16. https://doi.org/10.1093/biostatistics/kxz032.

18. Zhou, Q, Hu, T, Sun, J. A sieve semiparametric maximum likelihood approach for regression analysis of bivariate interval-censored failure time data. J Am Stat Assoc 2017;112:664–72. https://doi.org/10.1080/01621459.2016.1158113.

19. Cafri, G, Calhoun, P, Fan, J. High-dimensional variable selection with clustered data: an application of random multivariate survival forests for detection of outlier medical device components. J Stat Comput Simulat 2019;89:1410–22. https://doi.org/10.1080/00949655.2019.1584198.

20. Liu, J, Zhang, R, Zhao, W, Lv, Y. Variable selection in semiparametric hazard regression for multivariate survival data. J Multivariate Anal 2015;142:26–40. https://doi.org/10.1016/j.jmva.2015.07.015.

21. Li, S, Wu, Q, Sun, J. Penalized estimation of semiparametric transformation models with interval-censored data and application to Alzheimer's disease. Stat Methods Med Res 2020;29:2151–66. https://doi.org/10.1177/0962280219884720.

22. Sun, L, Li, S, Wang, L, Song, X. Variable selection in semiparametric nonmixture cure model with interval-censored failure time data: application to the prostate cancer screening study. Stat Med 2019;38:3026–39. https://doi.org/10.1002/sim.8165.

23. Wu, Q, Zhao, H, Zhu, L, Sun, J. Variable selection for high-dimensional partly linear additive Cox model with application to Alzheimer's disease. Stat Med 2020;39:3120–34. https://doi.org/10.1002/sim.8594.

24. Gamage, PW, McMahan, CS, Wang, L, Tu, W. A gamma-frailty proportional hazards model for bivariate interval-censored data. Comput Stat Data Anal 2018;128:354–66. https://doi.org/10.1016/j.csda.2018.07.016.

25. Li, S, Hu, T, Zhao, S, Sun, J. Regression analysis of multivariate current status data with semiparametric transformation frailty models. Stat Sin 2020;30:1117–34. https://doi.org/10.5705/ss.202017.0156.

26. Chen, K, Jin, Z, Ying, Z. Semiparametric analysis of transformation model with censored data. Biometrika 2002;89:659–68. https://doi.org/10.1093/biomet/89.3.659.

27. Chen, K, Sun, L, Tong, X. Analysis of cohort survival data with transformation model. Stat Sin 2012;22:489–509. https://doi.org/10.5705/ss.2010.228.

28. Li, K, Chan, W, Doody, RS, Quinn, J, Luo, S. Prediction of conversion to Alzheimer's disease with longitudinal measures and time-to-event data. J Alzheimers Dis 2017;58:361–71. https://doi.org/10.3233/jad-161201.

29. Zhang, Y, Hua, L, Huang, J. A spline-based semiparametric maximum likelihood estimation method for the Cox model with interval-censored data. Scand Stat Theory Appl 2010;37:338–54. https://doi.org/10.1111/j.1467-9469.2009.00680.x.

Received: 2021-04-03
Revised: 2022-02-21
Accepted: 2022-04-20
Published Online: 2022-06-03

© 2022 Walter de Gruyter GmbH, Berlin/Boston
