Corrigendum to: Targeted Learning of the Mean Outcome under an Optimal Dynamic Treatment Rule [J Causal Inference DOI: 10.1515/jci-2013-0022]
There was a coding error in the simulations of van der Laan and Luedtke [1]. This error has been corrected and the simulation section has been updated to reflect this change. The TMLE and estimating equation approaches now perform similarly in all of our simulation settings. This is consistent with prior work showing that the two often perform similarly at large sample sizes when there are no near positivity violations (see, e.g., Gruber and van der Laan [2]).
We have included amended versions of all figures from van der Laan and Luedtke [1] in this corrigendum (Figures 1–3). In Figure 2(b) we include results both for the same ensemble estimator of the optimal rule and for the case where the recursive partitioning candidate is omitted from the library of candidate estimators. These figures show that the coverage of the non-cross-validated methods improves considerably when this candidate is omitted, while the coverage of the cross-validated methods is unchanged. This is in line with the general theoretical arguments given in van der Laan and Luedtke [1] that the cross-validated approaches place no restrictions on the data adaptivity of the initial estimators, i.e. they do not require any entropy conditions on the estimator of the optimal rule.
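To make the sample-splitting idea concrete, the following is a minimal Python sketch of a cross-validated value estimate for a data-adaptively estimated rule: the rule is fit on the training folds and its value is estimated on the held-out fold, which is why no entropy conditions on the rule estimator are needed. It is illustrative only; the blip-function learner, the known randomization probability g1 = 0.5, and the plain inverse-probability-weighted value estimate on the validation folds are assumptions for the sketch, not the TMLE or DR-IPCW estimators of the original paper.

```python
# A minimal sketch (not the estimators of the original paper) of the cross-validated
# evaluation of a data-adaptively estimated treatment rule.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor  # stands in for a recursive partitioning candidate


def fit_rule(W, A, Y, learner=DecisionTreeRegressor):
    """Crude blip estimate E[Y | A=1, W] - E[Y | A=0, W]; the rule treats when the blip is positive."""
    m1 = learner().fit(W[A == 1], Y[A == 1])
    m0 = learner().fit(W[A == 0], Y[A == 0])
    return lambda w: (m1.predict(w) - m0.predict(w) > 0).astype(float)


def cv_rule_value(W, A, Y, g1=0.5, n_splits=10):
    """Cross-validated IPW estimate of the value of a data-adaptively estimated rule."""
    vals = []
    for train, valid in KFold(n_splits=n_splits, shuffle=True, random_state=1).split(W):
        rule = fit_rule(W[train], A[train], Y[train])      # rule fit on training folds only
        follows = (A[valid] == rule(W[valid]))              # held-out subjects who followed the rule
        gA = np.where(A[valid] == 1, g1, 1.0 - g1)          # known randomized treatment mechanism
        vals.append(np.mean(follows / gA * Y[valid]))       # IPW value on the validation fold
    return float(np.mean(vals))


# Toy usage on simulated data from a randomized trial with g1 = 0.5.
rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 2))
A = rng.binomial(1, 0.5, size=n)
Y = rng.normal(loc=0.2 * A * W[:, 0], scale=1.0)
print(cv_rule_value(W, A, Y))
```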

Figure 1: Relative efficiency of the TMLE and DR-IPCW methods compared with both the optimal value and the data adaptive target parameter. Results are given both for the case where the outcome regression estimate is correctly specified and for the case where this estimate is incorrectly specified as the constant function.

Figure 2: Coverage of 95% confidence intervals from the TMLE and DR-IPCW methods with respect to both the optimal value and the data adaptive target parameter. Results are given both for the case where the outcome regression estimate is correctly specified and for the case where this estimate is incorrectly specified as the constant function.

Figure 3: (a) Relative efficiency of the TMLE and DR-IPCW methods compared with both the optimal value and the data adaptive target parameter. (b) Coverage of 95% confidence intervals from the TMLE and DR-IPCW methods with respect to both the optimal value and the data adaptive target parameter, including coverage estimates which use a reduced super-learner library (generalized additive model and stepwise regression) to estimate the optimal rule, to show sensitivity to data adaptive estimators. Both (a) and (b) give results both for the case where the outcome regression estimates are correctly specified and for the case where these estimates are incorrectly specified as the constant function.
In Figure 3(b) we show results for the original ensemble estimator of the optimal rule, as well as for an ensemble estimator whose library only includes a stepwise regression for the blip function and a generalized additive model with a weighted classification loss. Again we see that the non-cross-validated approaches are highly sensitive to the data adaptiveness of the initial estimators. Coverage improved when the sample size with the full library was increased to 10,000, though the rate of improvement appears slow for this simulation: coverage for the optimal value was approximately 87% for all estimation methods considered. This slow rate of convergence may in part be caused by the nonsmooth first time point blip function.
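For readers reproducing summaries of the kind shown in Figures 1–3, the following hedged Python sketch computes the coverage proportion of Wald-type 95% intervals and a scaled mean squared error against a chosen target (either the optimal value or the data adaptive parameter). The function name mc_summary, the optional reference variance ref_var, and the toy data are illustrative assumptions; the precise definition of relative efficiency used in the figures is the one given in the original paper.

```python
# A hedged sketch of Monte Carlo summaries of the kind reported in Figures 1-3;
# mc_summary, ref_var and the toy data below are illustrative, not the original code.
import numpy as np


def mc_summary(estimates, ses, truth, n, ref_var=None, z=1.96):
    """Scaled MSE (and, optionally, a relative efficiency against a reference variance
    such as an efficiency bound) plus coverage of Wald-type 95% confidence intervals,
    where `truth` is the chosen target: the optimal value or the data adaptive parameter."""
    estimates, ses = np.asarray(estimates), np.asarray(ses)
    scaled_mse = n * np.mean((estimates - truth) ** 2)              # n times MSE against the target
    rel_eff = scaled_mse / ref_var if ref_var is not None else None
    covered = (estimates - z * ses <= truth) & (truth <= estimates + z * ses)
    return {"n_times_mse": scaled_mse,
            "relative_efficiency": rel_eff,
            "coverage_95": float(np.mean(covered))}                 # share of CIs covering the target


# Toy usage: 1,000 simulated draws of a hypothetical estimator of a target value of 0.5.
rng = np.random.default_rng(1)
n = 1000
est = rng.normal(loc=0.5, scale=0.02, size=1000)
print(mc_summary(est, ses=np.full(1000, 0.02), truth=0.5, n=n, ref_var=n * 0.02 ** 2))
```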
It is fairly easy to show that the cross-validated methods will typically have minimal bias for the corresponding data adaptive parameter and negative bias for the optimal value. It is also easy to show that the non-cross-validated approaches will typically have positive bias for the optimal value when a data adaptive approach is used to estimate the optimal rule. Under the conditions given in the original paper, both of these biases are asymptotically negligible. As the data adaptive parameter is strictly less than the optimal value, the positive bias of the non-cross-validated methods for the optimal value implies an even larger bias for the data adaptive parameter, which explains why the TMLE and DR-IPCW methods have lower coverage for the data adaptive parameter than for the optimal value in all of our simulations. Nonetheless, as we showed in the original paper, these approaches still give valid asymptotic coverage for the data adaptive parameter under weaker conditions than for the optimal value: if the optimal rule estimator satisfies entropy conditions, then the TMLE has valid asymptotic coverage for the data adaptive parameter even if the estimator is not consistent for the optimal rule.
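In symbols (a notational sketch consistent with the original paper, with $d_0$ the optimal rule, $d_n$ its estimate, $\Psi_0(d)$ the true value of a rule $d$, and $\psi_n$ a non-cross-validated estimator targeting the optimal value $\Psi_0(d_0)$), since $\Psi_0(d_n) \le \Psi_0(d_0)$,
\[
E_0[\psi_n] - \Psi_0(d_n) \;\ge\; E_0[\psi_n] - \Psi_0(d_0) \;>\; 0 ,
\]
so a positive bias for the optimal value implies an at least as large positive bias for the data adaptive parameter.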
The results in Appendix C of the original work were based on the erroneously coded estimator and are therefore incorrect. The estimators presented in Appendix B of the original work are the correct TMLEs, analogous to those presented in van der Laan and Gruber [3].
We thank Jeremy Coyle for bringing this error to our attention.
References
1. van der Laan MJ, Luedtke AR. Targeted learning of the mean outcome under an optimal dynamic treatment rule. J Causal Inference 2014;3(1):61–95. doi:10.1515/jci-2013-0022.
2. Gruber S, van der Laan MJ. A targeted maximum likelihood estimator of a causal effect on a bounded continuous outcome. Int J Biostat 2010;6:Article 26. doi:10.2202/1557-4679.1260.
3. van der Laan MJ, Gruber S. Targeted minimum loss based estimation of causal effects of multiple time point interventions. Int J Biostat 2012;8:Article 9. doi:10.1515/1557-4679.1370.
©2015 by De Gruyter