
Sharp bounds for causal effects based on Ding and VanderWeele's sensitivity parameters

  • Arvid Sjölander
Published/Copyright: May 23, 2024

Abstract

In a seminal article, Ding and VanderWeele proposed a method of constructing bounds for causal effects that has become widely recognized in causal inference. This method requires the analyst to provide guesses of certain “sensitivity parameters,” loosely defined as the maximal strength of association that an unmeasured confounder may have with the exposure and with the outcome. Ding and VanderWeele stated that their bounds are sharp, but without defining this term. Using a common definition of sharpness, Sjölander (A note on a sensitivity analysis for unmeasured confounding, and the related E-value. J Causal Inference. 2020;8(1):229–48) showed that Ding and VanderWeele’s bounds are sharp in some regions of the sensitivity parameters, but are non-sharp in other regions. In this note, we follow up that work by deriving bounds that are guaranteed to be sharp in all regions of Ding and VanderWeele’s sensitivity parameters. We illustrate the discrepancy between Ding and VanderWeele’s bounds and the sharp bounds with a real data example on vitamin D insufficiency and urine incontinence in pregnant women.

MSC 2010: 62D20; 62J12

1 Introduction

Unmeasured confounding is an important obstacle when estimating causal effects from observational data. In the presence of unmeasured confounding, causal effects cannot be point-identified; however, it is often possible to construct bounds for them [1–5]. In a seminal article, Ding and VanderWeele (DV) [6] proposed a method of constructing such bounds that has become widely recognized in causal inference and related fields; as of 2023-04-11, DV’s article [6] has more than 400 citations according to Google Scholar. Briefly, DV’s method requires the analyst to provide guesses of certain “sensitivity parameters,” loosely defined as the maximal strength of association that an unmeasured confounder may have with the exposure and with the outcome. Given these parameters, DV derived bounds for the causal risk ratio and the causal risk difference.

DV stated that their bounds are sharp, but without defining this term. In the causal inference literature, bounds are usually said to be sharp if all values inside the bounds are logically compatible with the observed data distribution and with any auxiliary information, such as the specified values of the sensitivity parameters [7–9]. Sjölander [10] showed that, under this definition, DV’s bounds are sharp in some regions of the sensitivity parameters, but are non-sharp in other regions. He characterized certain regions where DV’s bounds are guaranteed to be sharp, but he neither proved that these are the only regions where the bounds are sharp, nor derived bounds that are narrower than DV’s bounds in regions where the latter are non-sharp.

Given the wide recognition of DV’s bounds, it is important to establish in which regions they are guaranteed to be sharp, in which regions they are guaranteed to be non-sharp, and to derive sharp bounds in the latter regions, given DV’s sensitivity parameters. In this note, we accomplish these tasks. We start by laying down notation, definitions, and assumptions. We then briefly review DV’s bounds, and present bounds that are guaranteed to be sharp in all regions of DV’s sensitivity parameters. Finally, we illustrate the discrepancy between DV’s bounds and the sharp bounds with a real data example on vitamin D insufficiency and urine incontinence in pregnant women. Like DV, we focus on scenarios where both the exposure and the outcome are binary. However, whereas DV only derived bounds for the causal risk ratio and risk difference, our bounds are applicable to any measure of causal effect that can be written as a contrast between two counterfactual outcome probabilities.

2 Notation, definitions, and assumptions

DV developed their theory conditional on measured covariates, and we do the same. For brevity, we keep the conditioning on measured covariates implicit in all probability expressions below.

Let $E$ and $D$ denote the binary exposure and outcome, respectively. Let $p(E=e)$ denote the marginal probability of $E=e$, and let $p(D=d \mid E=e)$ denote the conditional probability of $D=d$, given $E=e$, for $d, e \in \{0,1\}$. The exposure–outcome association is defined as some contrast between $p(D=1 \mid E=1)$ and $p(D=1 \mid E=0)$, for instance, the risk ratio $\mathrm{RR}_{ED} = p(D=1 \mid E=1)/p(D=1 \mid E=0)$ or the risk difference $\mathrm{RD}_{ED} = p(D=1 \mid E=1) - p(D=1 \mid E=0)$.

Let $D(e)$ be the potential outcome [11,12] for a given subject, had the exposure been set to $E=e$ for that subject. Similarly, let $p\{D(e)=1\}$ be the counterfactual probability of the outcome, had the exposure been set to $E=e$ for all subjects. The causal effect of the exposure on the outcome is defined as some contrast between $p\{D(1)=1\}$ and $p\{D(0)=1\}$, for instance, the causal risk ratio $\mathrm{CRR}_{ED} = p\{D(1)=1\}/p\{D(0)=1\}$ or the causal risk difference $\mathrm{CRD}_{ED} = p\{D(1)=1\} - p\{D(0)=1\}$. We assume consistency

(1) $E=e \Rightarrow D(e)=D$

and the existence of a set of (unmeasured) confounders $U$ sufficient for confounding control. In terms of potential outcomes, we require conditional exchangeability, given $U$:

(2) $\{D(0), D(1)\} \perp E \mid U$.

Under assumptions (1) and (2), the potential outcome probability $p\{D(e)=1\}$ can be expressed as a function of $p(D,E,U)$:

(3) $p\{D(e)=1\} = E[p\{D(e)=1 \mid U\}] = E[p\{D(e)=1 \mid E=e, U\}] = E\{p(D=1 \mid E=e, U)\}$,

where the expectation is taken over the marginal distribution of $U$; the first equality follows from the law of total probability, the second from assumption (2), and the third from assumption (1).

3 DV’s bounds

The bounds proposed by DV use two sensitivity parameters, informally defined as the maximal strength of association between $E$ and $U$, and between $U$ and $D$, respectively. Formally, the parameter $\mathrm{RR}_{UD}$ is defined as

$\mathrm{RR}_{UD} = \max_e \frac{\max_u p(D=1 \mid E=e, U=u)}{\min_u p(D=1 \mid E=e, U=u)},$

and the parameter $\mathrm{RR}_{EeU}$ is defined as

$\mathrm{RR}_{EeU} = \max_u \frac{p(U=u \mid E=e)}{p(U=u \mid E=1-e)}, \quad \text{for } e \in \{0,1\}.$

DV defined the bounding factor

$\mathrm{BF}_e = \frac{\mathrm{RR}_{EeU} \times \mathrm{RR}_{UD}}{\mathrm{RR}_{EeU} + \mathrm{RR}_{UD} - 1}, \quad \text{for } e \in \{0,1\}.$

They showed that, given $\{\mathrm{BF}_0, \mathrm{BF}_1\}$, $\mathrm{CRR}_{ED}$ is bounded by

(4) $\mathrm{RR}_{ED}/\mathrm{BF}_1 \le \mathrm{CRR}_{ED} \le \mathrm{RR}_{ED} \times \mathrm{BF}_0$

and $\mathrm{CRD}_{ED}$ is bounded by

(5) $\mathrm{RD}_{ED} - \{p(E=0)p(D=1 \mid E=1)(1 - 1/\mathrm{BF}_1) + p(E=1)p(D=1 \mid E=0)(\mathrm{BF}_1 - 1)\} \le \mathrm{CRD}_{ED} \le \mathrm{RD}_{ED} + \{p(E=1)p(D=1 \mid E=0)(1 - 1/\mathrm{BF}_0) + p(E=0)p(D=1 \mid E=1)(\mathrm{BF}_0 - 1)\}.$
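To make the formulas concrete, the bounding factor and DV’s bounds (4) and (5) can be evaluated as in the following sketch. This is our own illustration, not code from DV; the sensitivity-parameter values $\mathrm{RR}_{EeU} = \mathrm{RR}_{UD} = 2$ are hypothetical, and the plug-in summaries are borrowed from Section 5.

```python
def bounding_factor(rr_eu, rr_ud):
    """DV's bounding factor BF = RR_EU * RR_UD / (RR_EU + RR_UD - 1)."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1)

def dv_bounds_crr(rr_ed, bf0, bf1):
    """DV's bounds (4): RR_ED / BF_1 <= CRR_ED <= RR_ED * BF_0."""
    return rr_ed / bf1, rr_ed * bf0

def dv_bounds_crd(p_e1, p_d1_e0, p_d1_e1, bf0, bf1):
    """DV's bounds (5) for the causal risk difference."""
    p_e0 = 1 - p_e1
    rd = p_d1_e1 - p_d1_e0
    lower = rd - (p_e0 * p_d1_e1 * (1 - 1 / bf1) + p_e1 * p_d1_e0 * (bf1 - 1))
    upper = rd + (p_e1 * p_d1_e0 * (1 - 1 / bf0) + p_e0 * p_d1_e1 * (bf0 - 1))
    return lower, upper

# Hypothetical example: RR_EU = RR_UD = 2 for both e gives BF_0 = BF_1 = 4/3.
bf = bounding_factor(2, 2)
crr_lo, crr_hi = dv_bounds_crr(0.49 / 0.38, bf, bf)
crd_lo, crd_hi = dv_bounds_crd(0.27, 0.38, 0.49, bf, bf)
```

With these inputs, the observed risk ratio of about 1.29 is compatible with a causal risk ratio anywhere between roughly 0.97 and 1.72 under this amount of unmeasured confounding.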

Sjölander [10] showed that DV’s lower bounds are sharp if $\mathrm{BF}_1 \le 1/p(D=1 \mid E=0)$ and that DV’s upper bounds are sharp if $\mathrm{BF}_0 \le 1/p(D=1 \mid E=1)$, but that DV’s bounds are not necessarily sharp outside these regions. However, he neither proved that these are the only regions where the bounds are sharp, nor derived bounds that are narrower than DV’s bounds in regions where the latter are non-sharp.

4 Sharp bounds

Define

$l_e = p(D=1 \mid E=e)\{p(E=e) + p(E=1-e)/\mathrm{BF}_e\}$

and

$u_e = p(D=1 \mid E=e)[p(E=e) + p(E=1-e)\min\{\mathrm{BF}_{(1-e)}, 1/p(D=1 \mid E=e)\}],$

and consider the following bounds for $p\{D(e)=1\}$:

(6) $l_e \le p\{D(e)=1\} \le u_e.$

In the Appendix, we show that the bounds in (6) have two important properties, which we summarize in a theorem.

Theorem 1

Validity and simultaneous sharpness of the proposed bounds.

  • The bounds $(l_e, u_e)$ are valid, in the sense that the inequalities in (6) hold for all distributions $p(D,E,U)$.

  • The bounds $(l_1, u_0)$ are simultaneously sharp, in the sense that, for any specific distribution $p^*(D,E)$ and bounding factor $\mathrm{BF}_1^*$, there exists a distribution $p(D,E,U)$ such that $p(D,E) = p^*(D,E)$, $\mathrm{BF}_1 = \mathrm{BF}_1^*$, $p\{D(1)=1\} = l_1$, and $p\{D(0)=1\} = u_0$.

  • The bounds $(l_0, u_1)$ are simultaneously sharp, in the sense that, for any specific distribution $p^*(D,E)$ and bounding factor $\mathrm{BF}_0^*$, there exists a distribution $p(D,E,U)$ such that $p(D,E) = p^*(D,E)$, $\mathrm{BF}_0 = \mathrm{BF}_0^*$, $p\{D(1)=1\} = u_1$, and $p\{D(0)=1\} = l_0$.

An important corollary of Theorem 1 is that one can obtain a sharp lower bound for any contrast between $p\{D(1)=1\}$ and $p\{D(0)=1\}$ by contrasting the minimal value of $p\{D(1)=1\}$ with the maximal value of $p\{D(0)=1\}$, within the range in (6). Similarly, one can obtain a sharp upper bound for any contrast between $p\{D(1)=1\}$ and $p\{D(0)=1\}$ by contrasting the maximal value of $p\{D(1)=1\}$ with the minimal value of $p\{D(0)=1\}$, within the range in (6). For instance, we obtain sharp bounds for the causal risk ratio as

(7) $\mathrm{RR}_{ED}\,\frac{p(E=1) + p(E=0)/\mathrm{BF}_1}{p(E=0) + p(E=1)\min\{\mathrm{BF}_1, 1/p(D=1 \mid E=0)\}} \le \mathrm{CRR}_{ED} \le \mathrm{RR}_{ED}\,\frac{p(E=1) + p(E=0)\min\{\mathrm{BF}_0, 1/p(D=1 \mid E=1)\}}{p(E=0) + p(E=1)/\mathrm{BF}_0}$

and sharp bounds for the causal risk difference as

(8) $\mathrm{RD}_{ED} - \{p(E=0)p(D=1 \mid E=1)(1 - 1/\mathrm{BF}_1) + p(E=1)p(D=1 \mid E=0)[\min\{\mathrm{BF}_1, 1/p(D=1 \mid E=0)\} - 1]\} \le \mathrm{CRD}_{ED} \le \mathrm{RD}_{ED} + \{p(E=1)p(D=1 \mid E=0)(1 - 1/\mathrm{BF}_0) + p(E=0)p(D=1 \mid E=1)[\min\{\mathrm{BF}_0, 1/p(D=1 \mid E=1)\} - 1]\}.$

One can easily obtain sharp bounds for other contrasts as well, such as the odds ratio or odds difference.
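The corollary above translates directly into a small computation: bound each potential outcome probability by (6), then contrast the extremes. The following sketch is our own illustration (the input values $\mathrm{BF}_0 = \mathrm{BF}_1 = 4/3$ and the Section 5 estimates are example inputs, not prescribed by the text):

```python
def sharp_po_bounds(e, p_e1, p_d1_e0, p_d1_e1, bf0, bf1):
    """Sharp bounds (6) for p{D(e)=1}, given DV's bounding factors BF_0, BF_1."""
    p_e = (1 - p_e1, p_e1)        # p(E=0), p(E=1)
    p_d1 = (p_d1_e0, p_d1_e1)     # p(D=1|E=0), p(D=1|E=1)
    bf = (bf0, bf1)
    l = p_d1[e] * (p_e[e] + p_e[1 - e] / bf[e])
    u = p_d1[e] * (p_e[e] + p_e[1 - e] * min(bf[1 - e], 1 / p_d1[e]))
    return l, u

def sharp_bounds(contrast, p_e1, p_d1_e0, p_d1_e1, bf0, bf1):
    """Sharp bounds for any contrast(p{D(1)=1}, p{D(0)=1}) that is
    increasing in its first and decreasing in its second argument."""
    l0, u0 = sharp_po_bounds(0, p_e1, p_d1_e0, p_d1_e1, bf0, bf1)
    l1, u1 = sharp_po_bounds(1, p_e1, p_d1_e0, p_d1_e1, bf0, bf1)
    return contrast(l1, u0), contrast(u1, l0)

# Bounds (7) and (8) with hypothetical inputs BF_0 = BF_1 = 4/3:
crr = sharp_bounds(lambda a, b: a / b, 0.27, 0.38, 0.49, 4/3, 4/3)
crd = sharp_bounds(lambda a, b: a - b, 0.27, 0.38, 0.49, 4/3, 4/3)
```

Because $4/3$ is smaller than both $1/p(D=1 \mid E=0)$ and $1/p(D=1 \mid E=1)$, these example inputs lie in the region where the sharp bounds coincide with DV’s bounds; for larger bounding factors the $\min$ terms become binding and the sharp bounds are strictly narrower.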

DV’s bounds in (4) and (5) coincide with the sharp bounds in (7) and (8), respectively, in the regions where Sjölander [10] proved that DV’s bounds are sharp. Specifically, DV’s lower bounds in (4) and (5) coincide with the sharp lower bounds in (7) and (8) when $\min\{\mathrm{BF}_1, 1/p(D=1 \mid E=0)\} = \mathrm{BF}_1$, and DV’s upper bounds in (4) and (5) coincide with the sharp upper bounds in (7) and (8) when $\min\{\mathrm{BF}_0, 1/p(D=1 \mid E=1)\} = \mathrm{BF}_0$. However, outside these regions, DV’s bounds are wider than the sharp bounds.

The reason why DV’s bounds are not generally sharp is that, by replacing the term $\min\{\mathrm{BF}_{(1-e)}, 1/p(D=1 \mid E=e)\}$ in the upper bound $u_e$ with $\mathrm{BF}_{(1-e)}$, as DV effectively did, one ignores the restriction $E\{p(D=1 \mid E=e, U) \mid E=1-e\} \le 1$ on the underlying distribution $p(D,E,U)$. In the Appendix, we show where this restriction enters in the derivation of the bounds. Ignoring this restriction has consequences when $\mathrm{BF}_{(1-e)}$ is large, i.e., when there is a substantial degree of unmeasured confounding.

When $\mathrm{BF}_0$ and $\mathrm{BF}_1$ go to infinity, the bounds for $p\{D(e)=1\}$ in (6) converge to

$p(D=1 \mid E=e)p(E=e) \le p\{D(e)=1\} \le p(D=1 \mid E=e)p(E=e) + p(E=1-e),$

which were previously derived by Robins [1]. These bounds are assumption-free, in the sense that they are guaranteed to include the true value of $p\{D(e)=1\}$, irrespective of the values of $\{\mathrm{BF}_0, \mathrm{BF}_1\}$.
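A minimal sketch of these assumption-free bounds (our own illustration; the plug-in values are the Section 5 estimates):

```python
def robins_bounds(e, p_e1, p_d1_e0, p_d1_e1):
    """Robins' assumption-free bounds: p(D=1|E=e)p(E=e) <= p{D(e)=1}
    <= p(D=1|E=e)p(E=e) + p(E=1-e)."""
    p_e = (1 - p_e1, p_e1)        # p(E=0), p(E=1)
    p_d1 = (p_d1_e0, p_d1_e1)     # p(D=1|E=0), p(D=1|E=1)
    lower = p_d1[e] * p_e[e]
    return lower, lower + p_e[1 - e]

# With p(E=1) = 0.27, p(D=1|E=0) = 0.38, p(D=1|E=1) = 0.49:
b0 = robins_bounds(0, 0.27, 0.38, 0.49)  # about (0.28, 0.55)
b1 = robins_bounds(1, 0.27, 0.38, 0.49)  # about (0.13, 0.86)
```

The width of each interval is exactly $p(E=1-e)$, the probability mass for which the outcome under $E=e$ is never observed.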

5 Illustration

Stafne et al. [13] carried out a cross-sectional study to estimate the causal effect of vitamin D insufficiency, defined as having low levels of circulating 25-hydroxyvitamin D (25(OH)D), on the risk of urine incontinence in pregnant women. The study included 851 women in mid-pregnancy (gestational weeks 18–22), who were generally healthy and above 18 years of age. Levels of 25(OH)D were measured from blood samples, whereas urine incontinence was self-reported and categorized as stress incontinence or urge incontinence.

Stafne et al. [13] carried out various analyses, using different cutoffs for 25(OH)D levels in their exposure definition, different combinations of stress/urge incontinence in their outcome definition, and adjusting for different sets of potential confounders. Here, we focus on one of these analyses, in which they defined the exposure ($E=1$) as 25(OH)D < 50 nmol/l, the outcome ($D=1$) as either stress or urge incontinence, and did not adjust for any confounders.

Table 1 shows the crude data under these exposure and outcome definitions. Based on these data, we have that $p(E=1) = 0.27$, $p(D=1 \mid E=0) = 0.38$, and $p(D=1 \mid E=1) = 0.49$. The risk ratio and risk difference are equal to 1.27 and 0.10, respectively, and a $\chi^2$-test gives a $p$-value of 0.01. Hence, there is strong evidence for a statistical association between vitamin D insufficiency and urine incontinence.

Table 1

Data from Stafne et al. [13] on vitamin D insufficiency ( E ) and urine incontinence ( D )

        E = 0   E = 1
D = 0     382     118
D = 1     239     112
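The quoted summaries can be reproduced from the counts in Table 1. The following sketch is our own (it uses the identity $P(\chi^2_1 > t) = \mathrm{erfc}(\sqrt{t/2})$ to obtain the 1-degree-of-freedom $p$-value without external libraries):

```python
import math

# Table 1 counts, indexed as n[(d, e)].
n = {(0, 0): 382, (0, 1): 118, (1, 0): 239, (1, 1): 112}
total = sum(n.values())                          # 851 women

p_e1 = (n[(0, 1)] + n[(1, 1)]) / total           # p(E=1)      ~ 0.27
p_d1_e0 = n[(1, 0)] / (n[(0, 0)] + n[(1, 0)])    # p(D=1|E=0)  ~ 0.38
p_d1_e1 = n[(1, 1)] / (n[(0, 1)] + n[(1, 1)])    # p(D=1|E=1)  ~ 0.49

rr = p_d1_e1 / p_d1_e0                           # risk ratio       ~ 1.27
rd = p_d1_e1 - p_d1_e0                           # risk difference  ~ 0.10

# Pearson chi-square statistic for the 2x2 table (1 degree of freedom).
row = (n[(0, 0)] + n[(0, 1)], n[(1, 0)] + n[(1, 1)])   # D margins
col = (n[(0, 0)] + n[(1, 0)], n[(0, 1)] + n[(1, 1)])   # E margins
chi2 = sum((n[(d, e)] - row[d] * col[e] / total) ** 2 / (row[d] * col[e] / total)
           for d in (0, 1) for e in (0, 1))
p_value = math.erfc(math.sqrt(chi2 / 2))         # ~ 0.007, i.e. 0.01 rounded
```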

Figure 1 shows the sharp bounds (solid lines) and the assumption-free bounds (dotted lines) for $p\{D(0)=1\}$ (left panel) and $p\{D(1)=1\}$ (right panel) as functions of $\mathrm{BF}_e$, assuming that $\mathrm{BF}_0 = \mathrm{BF}_1$.

Figure 1

Sharp bounds (solid lines) and the assumption-free bounds (dotted lines) for $p\{D(0)=1\}$ (left panel) and $p\{D(1)=1\}$ (right panel), for the data in Table 1.

Figure 2 shows the bounds for several contrasts between $p\{D(1)=1\}$ and $p\{D(0)=1\}$. The top panels of Figure 2 show the sharp bounds (solid lines) and DV’s bounds (dashed lines) for the causal log risk ratio (top-left panel) and causal risk difference (top-right panel) as functions of $\mathrm{BF}_e$, assuming that $\mathrm{BF}_0 = \mathrm{BF}_1$. The vertical dotted lines in Figure 2 indicate the values $\mathrm{BF}_e = 1/p(D=1 \mid E=0)$ and $\mathrm{BF}_e = 1/p(D=1 \mid E=1)$. Up to these points, DV’s lower and upper bounds agree with the sharp lower and upper bounds, respectively, but are wider than the sharp bounds thereafter. The horizontal dotted lines indicate the assumption-free lower and upper bounds. The sharp bounds are confined within the assumption-free bounds, whereas DV’s bounds exceed the assumption-free bounds for large values of $\mathrm{BF}_e$. Furthermore, for large values of $\mathrm{BF}_e$, DV’s bounds exceed the logical limits $-1$ and $1$ for the causal risk difference.

Figure 2

Sharp bounds (solid lines), DV’s bounds (dashed lines), and the assumption-free bounds (dotted lines) for the causal log risk ratio (top-left panel), causal risk difference (top-right panel), causal log odds ratio (bottom-left panel), and causal odds difference (bottom-right panel), for the data in Table 1.

The bottom panels of Figure 2 show the sharp bounds (solid lines) for the causal log odds ratio (bottom-left panel) and causal odds difference (bottom-right panel), together with the corresponding assumption-free bounds (dotted lines). We note that DV did not derive any bounds for these parameters based on the bounding factors $\{\mathrm{BF}_0, \mathrm{BF}_1\}$. However, the sharp bounds for these, or any other contrasts between $p\{D(1)=1\}$ and $p\{D(0)=1\}$, are easily obtained from the bounds in (6).

6 Discussion

In this note, we have derived sharp bounds for causal effects based on DV’s sensitivity parameters. We have shown that the bounds previously derived by DV are equal to the sharp bounds in the regions where Sjölander [10] proved that DV’s bounds are sharp, but that DV’s bounds are wider than the sharp bounds outside these regions.

The sharp bounds are clearly of strong theoretical interest, but they may also have important practical relevance, since they may sometimes be substantially narrower than both DV’s bounds and the assumption-free bounds. As an example, Figure 3 shows the same bounds as Figure 2, for $p(E=1) = 0.5$, $p(D=1 \mid E=0) = 0.6$, and $p(D=1 \mid E=1) = 0.7$.

Figure 3

Sharp bounds (solid lines), DV’s bounds (dashed lines), and the assumption-free bounds (dotted lines) for the causal log risk ratio (top-left panel), causal risk difference (top-right panel), causal log odds ratio (bottom-left panel), and causal odds difference (bottom-right panel), for $p(E=1) = 0.5$, $p(D=1 \mid E=0) = 0.6$, and $p(D=1 \mid E=1) = 0.7$.

A reader may wonder if our developments have any implications for the E-value. The answer is no. To see this, note that, for an observed risk ratio larger than 1, the E-value is defined as the common value of the sensitivity parameters at which the lower bound for the causal risk ratio equals 1. Suppose that DV’s lower bound for the causal risk ratio were not sharp when equal to 1. Then the E-value would be conservative, in the sense that, even if the sensitivity parameters were as large as the E-value, the observed association could still not be explained away by unmeasured confounding. However, Sjölander [10] showed that DV’s lower bound for the causal risk ratio is always sharp when it is equal to 1; hence, the observed association can, indeed, be explained away by unmeasured confounding if the sensitivity parameters are as large as the E-value.
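To make this concrete: with a common sensitivity-parameter value $\mathrm{RR}_{EU} = \mathrm{RR}_{UD} = k$, the bounding factor is $k^2/(2k-1)$, and solving $\mathrm{RR}_{ED}/\mathrm{BF} = 1$ for $k$ gives the closed-form E-value $\mathrm{RR}_{ED} + \sqrt{\mathrm{RR}_{ED}(\mathrm{RR}_{ED}-1)}$. The following sketch is our own illustration (the closed form is VanderWeele and Ding’s, not derived in the text above):

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio rr > 1: the common value
    k = RR_EU = RR_UD at which DV's lower bound rr / BF equals 1."""
    return rr + math.sqrt(rr * (rr - 1))

def bounding_factor(k):
    """DV's bounding factor with RR_EU = RR_UD = k."""
    return k * k / (2 * k - 1)

rr = 1.27            # observed risk ratio from Section 5
k = e_value(rr)      # about 1.86
# At the E-value, the lower bound rr / BF is exactly 1, so unmeasured
# confounding of this strength can just explain the association away.
lower_bound = rr / bounding_factor(k)
```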

Given the wide recognition of DV’s work, we hope that this note will make an important contribution to the literature on sensitivity analysis and bounds for causal effects in the presence of unmeasured confounding.

  1. Funding information: The author gratefully acknowledges funding from the Swedish Research Council, grant number 2020-01188.

  2. Author contribution: The author confirms the sole responsibility for the conception of the study, presented results and manuscript preparation.

  3. Conflict of interest: The author has no conflict of interest.

Appendix A Proof of Theorem 1

A.1 Proof of validity

From (3), we have that $p\{D(e)=1\} = E\{p(D=1 \mid E=e, U)\}$. Using the law of total probability, we further have that

$E\{p(D=1 \mid E=e, U)\} = \sum_{e' \in \{0,1\}} E\{p(D=1 \mid E=e, U) \mid E=e'\} p(E=e') = p(D=1 \mid E=e)p(E=e) + E\{p(D=1 \mid E=e, U) \mid E=1-e\} p(E=1-e) = p(D=1 \mid E=e)\{p(E=e) + \mathrm{CRR}_{EeD}\, p(E=1-e)\},$

where we have defined

$\mathrm{CRR}_{EeD} = \frac{E\{p(D=1 \mid E=e, U) \mid E=1-e\}}{p(D=1 \mid E=e)}.$

Our parameters $\mathrm{CRR}_{E0D}$ and $\mathrm{CRR}_{E1D}$ correspond to the parameters $\mathrm{CRR}_{ED}^{+}$ and $1/\mathrm{CRR}_{ED}^{-}$, defined in Section 2.2 of DV’s eAppendix. In that section of the eAppendix, DV showed that

$1/\mathrm{CRR}_{E1D} \le \frac{\mathrm{RR}_{E1U} \times \mathrm{RR}_{UD}^{1}}{\mathrm{RR}_{E1U} + \mathrm{RR}_{UD}^{1} - 1}$

and

$\mathrm{CRR}_{E0D} \le \frac{\mathrm{RR}_{E1U} \times \mathrm{RR}_{UD}^{0}}{\mathrm{RR}_{E1U} + \mathrm{RR}_{UD}^{0} - 1},$

where

$\mathrm{RR}_{UD}^{e} = \frac{\max_u p(D=1 \mid E=e, U=u)}{\min_u p(D=1 \mid E=e, U=u)}.$

Since $\mathrm{RR}_{UD}^{e} \le \mathrm{RR}_{UD}$ and $xy/(x+y-1)$ is monotonically increasing in $y$, it follows that $1/\mathrm{CRR}_{E1D} \le \mathrm{BF}_1$ and $\mathrm{CRR}_{E0D} \le \mathrm{BF}_1$, and by symmetry also that $1/\mathrm{CRR}_{E0D} \le \mathrm{BF}_0$ and $\mathrm{CRR}_{E1D} \le \mathrm{BF}_0$. In short, we have that

$1/\mathrm{BF}_e \le \mathrm{CRR}_{EeD} \le \mathrm{BF}_{(1-e)}.$

However, we also have that

$E\{p(D=1 \mid E=e, U) \mid E=1-e\} \le 1,$

so that

$1/\mathrm{BF}_e \le \mathrm{CRR}_{EeD} \le \min\{\mathrm{BF}_{(1-e)}, 1/p(D=1 \mid E=e)\},$

which gives the bounds in (6).

A.2 Proof of sharpness

We prove that $(l_1, u_0)$ are simultaneously sharp; that $(l_0, u_1)$ are simultaneously sharp follows by symmetry. Thus, we prove that it is possible to find a distribution $p(D,E,U)$ that is consistent with any given $\{p^*(D,E), \mathrm{BF}_1^*\}$, and is such that $p\{D(1)=1\} = l_1$ and $p\{D(0)=1\} = u_0$. We only consider the case $\mathrm{BF}_1^* > 1/p^*(D=1 \mid E=0)$, since sharpness was proven by Sjölander [10] for the opposite case. We construct the distribution $p(D,E,U)$ in the following steps:

  1. Let $p(E) = p^*(E)$.

  2. Let $U$ be binary, with

    $p(U=1 \mid E=1) = 1, \quad p(U=1 \mid E=0) = 1/x,$

    where $x$ is an arbitrary number such that $x > \mathrm{BF}_1^*$. We have that $0 < p(U=1 \mid E=e) \le 1$ for $e \in \{0,1\}$. We also have that $p(U=1 \mid E=1)/p(U=1 \mid E=0) = x > 1$ and $p(U=0 \mid E=1)/p(U=0 \mid E=0) = 0$, so that $\mathrm{RR}_{E1U} = p(U=1 \mid E=1)/p(U=1 \mid E=0) = x$.

  3. Let

    $p(D=1 \mid E=0, U=0) = \frac{p^*(D=1 \mid E=0) - 1/x}{1 - 1/x}, \quad p(D=1 \mid E=0, U=1) = 1,$

    $p(D=1 \mid E=1, U=0) = p^*(D=1 \mid E=1)\,\frac{1/\mathrm{BF}_1^* - 1/x}{1 - 1/x}, \quad p(D=1 \mid E=1, U=1) = p^*(D=1 \mid E=1).$

    We have that $0 \le p(D=1 \mid E=e, U=u) \le 1$ for $e, u \in \{0,1\}$. We further have that

    $p(D=1 \mid E=0) = p(D=1 \mid E=0, U=1)p(U=1 \mid E=0) + p(D=1 \mid E=0, U=0)p(U=0 \mid E=0) = 1 \times \frac{1}{x} + \frac{p^*(D=1 \mid E=0) - 1/x}{1 - 1/x} \times \left(1 - \frac{1}{x}\right) = p^*(D=1 \mid E=0)$

    and

    $p(D=1 \mid E=1) = p(D=1 \mid E=1, U=1)p(U=1 \mid E=1) + p(D=1 \mid E=1, U=0)p(U=0 \mid E=1) = p^*(D=1 \mid E=1) \times 1 + p^*(D=1 \mid E=1)\,\frac{1/\mathrm{BF}_1^* - 1/x}{1 - 1/x} \times 0 = p^*(D=1 \mid E=1).$

    We further have that

    $\frac{p(D=1 \mid E=1, U=1)}{p(D=1 \mid E=1, U=0)} = \frac{1 - 1/x}{1/\mathrm{BF}_1^* - 1/x} > \frac{p(D=1 \mid E=0, U=1)}{p(D=1 \mid E=0, U=0)} = \frac{1 - 1/x}{p^*(D=1 \mid E=0) - 1/x},$

    so that

    $\mathrm{RR}_{UD} = \frac{p(D=1 \mid E=1, U=1)}{p(D=1 \mid E=1, U=0)} = \frac{1 - 1/x}{1/\mathrm{BF}_1^* - 1/x} = \frac{1 - 1/\mathrm{RR}_{E1U}}{1/\mathrm{BF}_1^* - 1/\mathrm{RR}_{E1U}} = \frac{\mathrm{BF}_1^*(\mathrm{RR}_{E1U} - 1)}{\mathrm{RR}_{E1U} - \mathrm{BF}_1^*}.$

    We now have that

    $\mathrm{BF}_1 = \frac{\mathrm{RR}_{E1U} \times \mathrm{RR}_{UD}}{\mathrm{RR}_{E1U} + \mathrm{RR}_{UD} - 1} = \frac{\mathrm{BF}_1^*(\mathrm{RR}_{E1U} - 1)\mathrm{RR}_{E1U}}{\mathrm{RR}_{E1U}(\mathrm{RR}_{E1U} - \mathrm{BF}_1^*) + \mathrm{BF}_1^*(\mathrm{RR}_{E1U} - 1) - (\mathrm{RR}_{E1U} - \mathrm{BF}_1^*)} = \mathrm{BF}_1^*.$

    Finally, we have that

    $E\{p(D=1 \mid E=1, U) \mid E=0\} = p(D=1 \mid E=1, U=1)p(U=1 \mid E=0) + p(D=1 \mid E=1, U=0)p(U=0 \mid E=0) = p^*(D=1 \mid E=1) \times \frac{1}{x} + p^*(D=1 \mid E=1)\,\frac{1/\mathrm{BF}_1^* - 1/x}{1 - 1/x} \times \left(1 - \frac{1}{x}\right) = \frac{p^*(D=1 \mid E=1)}{\mathrm{BF}_1^*} = \frac{p(D=1 \mid E=1)}{\mathrm{BF}_1}$

    and

    $E\{p(D=1 \mid E=0, U) \mid E=1\} = p(D=1 \mid E=0, U=1)p(U=1 \mid E=1) + p(D=1 \mid E=0, U=0)p(U=0 \mid E=1) = 1 \times 1 + \frac{p^*(D=1 \mid E=0) - 1/x}{1 - 1/x} \times 0 = 1,$

    so that

    $p\{D(1)=1\} = E\{p(D=1 \mid E=1, U)\} = p(D=1 \mid E=1)p(E=1) + E\{p(D=1 \mid E=1, U) \mid E=0\}p(E=0) = p(D=1 \mid E=1)\{p(E=1) + p(E=0)/\mathrm{BF}_1\} = l_1$

    and

    $p\{D(0)=1\} = E\{p(D=1 \mid E=0, U)\} = p(D=1 \mid E=0)p(E=0) + E\{p(D=1 \mid E=0, U) \mid E=1\}p(E=1) = p(D=1 \mid E=0)\{p(E=0) + p(E=1)/p(D=1 \mid E=0)\} = u_0.$

We end the proof with a technical remark. For the distribution we have constructed, $p(U=0 \mid E=1) = 0$, so that, strictly speaking, $p(D=1 \mid E=1, U=0)$ is undefined. To overcome this technical obstacle, one can modify the proof and let $p(U=0 \mid E=1) = \varepsilon$ approach 0 in such a way that $\{p(D,E), \mathrm{BF}_1\}$ converges to $\{p^*(D,E), \mathrm{BF}_1^*\}$, and $p\{D(1)=1\}$ and $p\{D(0)=1\}$ converge to the lower and upper bounds.
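The construction in steps 1–3 is easy to verify numerically. The following sketch is our own check; the inputs $p^*(E=1) = 0.27$, $p^*(D=1 \mid E=0) = 0.38$, $p^*(D=1 \mid E=1) = 0.49$, $\mathrm{BF}_1^* = 4$, and $x = 8$ are hypothetical, chosen so that $\mathrm{BF}_1^* > 1/p^*(D=1 \mid E=0)$ and $x > \mathrm{BF}_1^*$. Here $p(D=1 \mid E=1, U=0)$ is taken at its specified value even though $p(U=0 \mid E=1) = 0$; the limiting argument in the technical remark above justifies this.

```python
# Numeric check of the construction in A.2 (our own sketch, hypothetical inputs).
p_e1, p0, p1, bf1_star, x = 0.27, 0.38, 0.49, 4.0, 8.0

# Step 2: p(U=1|E=1) = 1 and p(U=1|E=0) = 1/x.
pu1_e1, pu1_e0 = 1.0, 1 / x

# Step 3: the four conditional outcome probabilities.
pd_e0_u0 = (p0 - 1 / x) / (1 - 1 / x)
pd_e0_u1 = 1.0
pd_e1_u0 = p1 * (1 / bf1_star - 1 / x) / (1 - 1 / x)
pd_e1_u1 = p1

# The construction reproduces the observed conditionals ...
assert abs(pd_e0_u1 * pu1_e0 + pd_e0_u0 * (1 - pu1_e0) - p0) < 1e-12
assert abs(pd_e1_u1 * pu1_e1 + pd_e1_u0 * (1 - pu1_e1) - p1) < 1e-12

# ... yields BF1 = BF1* ...
rr_e1u = x                              # max_u p(U=u|E=1)/p(U=u|E=0)
rr_ud = pd_e1_u1 / pd_e1_u0             # equals BF1*(x - 1)/(x - BF1*)
bf1 = rr_e1u * rr_ud / (rr_e1u + rr_ud - 1)

# ... and attains l1 and u0 simultaneously.
pu1 = pu1_e1 * p_e1 + pu1_e0 * (1 - p_e1)        # marginal p(U=1)
pd1_1 = pd_e1_u1 * pu1 + pd_e1_u0 * (1 - pu1)    # p{D(1)=1}
pd0_1 = pd_e0_u1 * pu1 + pd_e0_u0 * (1 - pu1)    # p{D(0)=1}
l1 = p1 * (p_e1 + (1 - p_e1) / bf1_star)
u0 = p0 * ((1 - p_e1) + p_e1 * min(bf1_star, 1 / p0))
```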

References

[1] Robins JM. The analysis of randomized and non-randomized AIDS treatment trials using a new approach to causal inference in longitudinal studies. In: Sechrest L, Freeman H, Mulley A, editors. Health service research methodology: a focus on AIDS. US Public Health Service, National Center for Health Services Research; 1989. p. 113–59.

[2] Balke A, Pearl J. Bounds on treatment effects from studies with imperfect compliance. J Amer Stat Assoc. 1997;92(439):1171–6. doi:10.1080/01621459.1997.10474074.

[3] Zhang JL, Rubin DB. Estimation of causal effects via principal stratification when some outcomes are truncated by death. J Educat Behav Stat. 2003;28(4):353–68. doi:10.3102/10769986028004353.

[4] Cai Z, Kuroki M, Pearl J, Tian J. Bounds on direct effects in the presence of confounded intermediate variables. Biometrics. 2008;64(3):695–701. doi:10.1111/j.1541-0420.2007.00949.x.

[5] Sjölander A. Bounds on natural direct effects in the presence of confounded intermediate variables. Stat Med. 2009;28(4):558–71. doi:10.1002/sim.3493.

[6] Ding P, VanderWeele TJ. Sensitivity analysis without assumptions. Epidemiology. 2016;27(3):368–77. doi:10.1097/EDE.0000000000000457.

[7] Tian J, Pearl J. Probabilities of causation: bounds and identification. Ann Math Artif Intell. 2000;28(1–4):287–313. doi:10.1023/A:1018912507879.

[8] Imai K. Sharp bounds on the causal effects in randomized experiments with truncation-by-death. Stat Probability Lett. 2008;78(2):144–9. doi:10.1016/j.spl.2007.05.015.

[9] Huber M, Mellace G. Sharp bounds on causal effects under sample selection. Oxford Bulletin Econ Stat. 2015;77(1):129–51. doi:10.1111/obes.12056.

[10] Sjölander A. A note on a sensitivity analysis for unmeasured confounding, and the related E-value. J Causal Inference. 2020;8(1):229–48. doi:10.1515/jci-2020-0012.

[11] Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. J Educat Psychol. 1974;66(5):688–701. doi:10.1037/h0037350.

[12] Pearl J. Causality: models, reasoning, and inference. 2nd ed. New York: Cambridge University Press; 2009. doi:10.1017/CBO9780511803161.

[13] Stafne SN, Mørkved S, Gustafsson MK, Syversen U, Stunes AK, Salvesen KÅ, et al. Vitamin D and stress urinary incontinence in pregnancy: a cross-sectional study. Int J Obstetr Gynaecol. 2020;127(13):1704–11. doi:10.1111/1471-0528.16340.

Received: 2023-04-12
Revised: 2023-12-06
Accepted: 2024-02-19
Published Online: 2024-05-23

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
