 Research article
 Open Access
Combining directed acyclic graphs and the change-in-estimate procedure as a novel approach to adjustment-variable selection in epidemiology
BMC Medical Research Methodology volume 12, Article number: 156 (2012)
Abstract
Background
Directed acyclic graphs (DAGs) are an effective means of presenting expert-knowledge assumptions when selecting adjustment variables in epidemiology, whereas the change-in-estimate procedure is a common statistics-based approach. As DAGs imply specific empirical relationships which can be explored by the change-in-estimate procedure, it should be possible to combine the two approaches. This paper proposes such an approach, which aims to produce well-adjusted estimates for a given research question based on plausible DAGs consistent with the data at hand, combining prior knowledge and standard regression methods.
Methods
Based on the relationships laid out in a DAG, researchers can predict how a collapsible estimator (e.g. risk ratio or risk difference) for an effect of interest should change when adjusted on different variable sets. Implied and observed patterns can then be compared to detect inconsistencies and so guide adjustment-variable selection.
Results
The proposed approach involves i. drawing up a set of plausible background-knowledge DAGs; ii. starting with one of these DAGs as a working DAG and identifying a minimal variable set, S, sufficient to control for bias on the effect of interest; iii. estimating a collapsible estimator adjusted on S, then adjusted on S plus each variable not in S in turn (“add-one pattern”), and then adjusted on S minus each of its variables in turn (“minus-one pattern”); iv. checking the observed add-one and minus-one patterns against the patterns implied by the working DAG and the other prior DAGs; v. reviewing the DAGs, if needed; and vi. presenting the initial and all final DAGs with estimates.
Conclusion
This approach to adjustment-variable selection combines background-knowledge and statistics-based approaches using methods already common in epidemiology and communicates assumptions and uncertainties in a standardized graphical format. It is probably best suited to areas where there is considerable background knowledge about plausible variable relationships. Researchers may use this approach as an additional tool for selecting adjustment variables when analyzing epidemiological data.
Background
Approaches to adjustment-variable selection in epidemiology can be broadly grouped into background-knowledge-based and statistics-based approaches. Directed acyclic graphs (DAGs) have become a core tool in the background-knowledge approach, as they allow researchers to present assumed relationships between variables graphically and, based on these assumptions, to identify variables to adjust for confounding and other biases [1–3]. There is, however, no guarantee that the assumptions in such a prior DAG align with the patterns in the data. Stepwise selection based on p-values and the change-in-estimate procedure are common statistics-based approaches [4]. In contrast to the background-knowledge approach, these allow patterns in the data to decide the final adjustment variables, but the risks of such data-driven approaches have been highlighted [5].
To our knowledge, only one methodological article in epidemiology to date has explicitly looked at combining background knowledge in DAGs with a statistical selection procedure for variable selection [6]. However, this article only considered stepwise deletion from an adjustment set defined from a prior DAG, without checking whether the data supported the starting adjustment set. DAG-discovery algorithms, such as the PC and other algorithms in the TETRAD suite [7], combine background knowledge with statistical selection rules to discover DAG structures, but they have proven controversial [8] and have not yet crossed over into epidemiological research. In fact, empirical articles [9–15] reporting DAGs for variable selection usually report only using prior DAGs, sometimes with subsequent stepwise deletion, but apparently without checking the starting assumptions against the data. Since the performance of these approaches depends on the appropriateness of the starting assumptions, a simple method for checking DAGs against the data may be valuable.
In this article, we propose an approach to adjustment-variable selection which aims to produce well-adjusted estimates for a given research question based on plausible DAGs which are also consistent with the data at hand, and to clearly communicate the assumptions and uncertainties underlying the estimates in DAG format. It asks researchers to lay out prior assumptions about variable relationships in one or more prior DAGs, uses the change-in-estimate patterns in the data to refine and revise these DAGs, and presents the prior and final DAGs with corresponding estimates. The approach is based on recent theoretical results regarding confounding equivalence (c-equivalence) [16] and work on the collapsibility of estimates over different DAG structures [17]. To be pragmatic, the approach focuses on an exposure-outcome relationship of interest and uses regression models and the change-in-estimate procedure familiar to epidemiologists.
Methods
DAGs and minimally sufficient adjustment variable sets
In this article, we assume that the reader is familiar with the terminology of and rules for reading DAGs. There are now many introductions to DAGs for epidemiologists [[1, 2, 17–20], annexe in [21]], including applications to specific areas of epidemiology [20, 22]. DAGs are a graphical description of the joint probability distribution of a set of random variables, showing marginal and conditional (in)dependencies between variables [3, 7, 23, 24]. We follow standard practice in epidemiology and give the arrows causal meaning, thereby interpreting a DAG as a causal diagram. We only address total associations in this article but the approach can be extended to direct and indirect effects based on graphical criteria for their identification [25–27].
DAGs allow the identification of the variable set or sets sufficient to adjust for confounding and other biases, based on the variable relationships shown. Greenland et al. [1] give conditions for this: a variable set is sufficient if i. there is no unblocked backdoor path joining the two variables which does not contain a variable in the set, and ii. there is no unblocked path joining the two variables induced by adjustment on the set which does not contain a variable in the set. This second condition means that if a collider is in the set and if adjusting on the collider unblocks the path between the two variables, then another variable on the path has also to be in the set to ensure that the path remains blocked. No variable in the set can be a descendant of the exposure or outcome [1]. (See [28] for a more recent formalization.) In practice, these conditions mean that the only unblocked paths joining exposure and outcome after conditioning on the adjustment variables can be mediating paths. A minimally sufficient adjustment set is a sufficient adjustment set which would no longer be sufficient if any variable were removed [2, 29]. Minimally sufficient adjustment sets can be identified by manual [1, 18] or computer [30, 31] algorithms but a visual inspection is frequently sufficient.
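These blocking conditions can be checked mechanically once a DAG is encoded. The sketch below is our own Python illustration (the paper's sample code, in Additional file 1, is R): it tests d-separation via the standard ancestral moral graph construction, and back-door sufficiency of a candidate set Z is then checked by applying it to the graph with the arrows out of the exposure removed. The DAG encoding and function names are hypothetical.

```python
def ancestors(dag, nodes):
    """dag maps each node to the list of its parents."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, x, y, z):
    """True if conditioning on the set z blocks every path between x and y,
    using the ancestral moral graph construction."""
    keep = ancestors(dag, {x, y} | set(z))
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in dag.get(v, []) if p in keep]
        for p in ps:                        # undirected parent-child edges
            adj[v].add(p); adj[p].add(v)
        for i in range(len(ps)):            # 'marry' co-parents (moralize)
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    stack, seen = [x], {x}                  # connectivity avoiding z
    while stack:
        v = stack.pop()
        if v == y:
            return False
        for w in adj[v]:
            if w not in seen and w not in z:
                seen.add(w); stack.append(w)
    return True

# back-door check for A -> Y with confounder A <- C -> Y:
# delete the arrow out of A, then ask whether {C} d-separates A and Y
backdoor = {'A': ['C'], 'Y': ['C']}
# d_separated(backdoor, 'A', 'Y', {'C'}) -> True: {C} is sufficient
```

Conditioning on a collider behaves as the text describes: in a graph where A and Y share only a common child S, A and Y are d-separated marginally but not after conditioning on S.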
Drawing up prior DAGs
The first step is preparing a set of DAGs which encode prior, expert knowledge about variable relationships and show the major prior uncertainties. These DAGs should include

1. all measured variables considered relevant, including those routinely used for adjustment in the research area (e.g. sex), even if not thought a priori to be associated with other variables on the graph;
2. plausible proxy and measurement-error relations;
3. plausible unmeasured parents with two or more children in the DAG; and
4. participation or selection variables conditioned upon during data collection, including voluntary participation by subjects and restriction of the study to particular groups, such as hospitalized patients.
In most cases, more than one prior DAG will be needed to show the main uncertainties in variable relationships, including the presence or absence of arrows between variables, arrow direction, and the presence of unmeasured variables.
It is important to consider the source population of the data when preparing the prior DAG or DAGs. As much prior knowledge will come from research in other contexts, there will be cases where a researcher judges that an association between variables found in other studies does not apply in his or her dataset. For example, socioeconomic status may have an association with access to healthcare in systems with large out-of-pocket payments but not in well-functioning nationalized systems. In this case, the researcher needs to explain why he or she has chosen not to connect two variables which other researchers would connect, based on knowledge about the source populations. Possible differences in source populations should also be borne in mind when revising the DAG, as discussed below.
Using minimally sufficient adjustment sets to compare a DAG with data
For any given DAG, a researcher can identify the minimally sufficient adjustment set or sets for the effect of interest. Once done, he or she can identify the changes expected in this estimate when adjusting on different variable sets according to the DAG. To do this, we need to assume compatibility, faithfulness [32], and correct model specification. We also need to use a collapsible estimator (e.g. the risk ratio (RR) or risk difference (RD)), as non-collapsible estimators (e.g. the conditional odds ratio) can change upon adjusting on a variable which is strongly related to the outcome but is not, in fact, a confounder [33–35]. The RR and RD are therefore recommended and can now be readily estimated by regression [36–39].
Given the above, a collapsible effect estimate conditional on a minimally sufficient adjustment set will not change when estimated on this set plus the variables excluded from the set, provided that the excluded variables are not mediators (or ancestors or descendants of mediators) lying on an open path, or colliders (or descendants of colliders) which, if conditioned upon, would open the path on which they lie. Conversely, a collapsible effect estimate conditional on a minimally sufficient adjustment set should change when estimated on this set minus any variable in the set. This allows a researcher to identify the change-in-estimate pattern implied by the DAG and so compare it with the observed pattern from the data.
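This implication can be seen numerically. The following Python sketch (our own illustration; the article's sample code is R) simulates a single binary confounder C and a variable W independent of everything else, and estimates the RD by direct standardization: adding W to the sufficient set {C} leaves the collapsible RD essentially unchanged, while dropping C shifts it markedly. The data-generating values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000
C = rng.binomial(1, 0.5, n)                    # confounder: C -> A, C -> Y
W = rng.binomial(1, 0.5, n)                    # on no path between A and Y
A = rng.binomial(1, 0.2 + 0.5 * C)             # exposure
Y = rng.binomial(1, 0.1 + 0.2 * A + 0.3 * C)   # outcome; true RD = 0.2

def rd_standardized(Y, A, strata=()):
    """Collapsible risk difference, standardized over the joint strata of
    the (binary) adjustment variables; an empty tuple gives the crude RD."""
    if not strata:
        return Y[A == 1].mean() - Y[A == 0].mean()
    key = np.zeros(len(Y), dtype=int)
    for s in strata:
        key = key * 2 + s
    rd = 0.0
    for u in np.unique(key):
        m = key == u
        rd += m.mean() * (Y[m & (A == 1)].mean() - Y[m & (A == 0)].mean())
    return rd

rd_S     = rd_standardized(Y, A, (C,))     # adjusted on the sufficient set {C}
rd_addW  = rd_standardized(Y, A, (C, W))   # add-one: no meaningful change
rd_crude = rd_standardized(Y, A)           # minus-one: confounded, changes
```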
Practically, we propose the following steps for this. Sample R code is in Additional file 1 (web appendix):

1. Draw up the DAGs encoding prior, expert knowledge and the main prior uncertainties as described above and select an initial working DAG from this set (the most plausible DAG);
2. From the working DAG, identify a minimally sufficient adjustment set, S, for the effect of interest (A→Y);
3. Using a collapsible estimator, estimate A→Y conditional on S;
4. Re-estimate A→Y conditional on S plus each of the variables not included in S in turn (“add-one pattern”);
5. Plot each estimate on a single graph, thereby showing differences in the estimates between the models;
6. Repeat steps 4 and 5 but deleting each variable in turn from S (“minus-one pattern”);
7. Determine whether the add-one and minus-one patterns found are consistent with the working DAG;
8. If the patterns are consistent with the working DAG, check to see if any of the other prior DAGs give the same expected patterns. Take all prior DAGs with consistent patterns as the revised working DAGs and move to step 11;
9. If the patterns are not consistent with the working DAG, check to see if any of the other prior DAGs imply the patterns as observed. Take all such consistent prior DAGs as the revised working DAGs and move to step 11;
10. If the patterns are not consistent with the working DAG or with any of the other prior DAGs, undertake an ad hoc revision (see web appendix) to create a new working DAG;
11. Repeat steps 2 to 11 for each revised working DAG, moving to step 12 when there are no inconsistent add-one and minus-one patterns;
12. Present the prior and all final DAGs with corresponding effect estimates.
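The estimation loop in steps 3 to 6 might be sketched as follows, here in Python rather than the R of Additional file 1; rd_standardized, the dict-of-arrays data layout, and the function names are our own simplified stand-ins, with the RD standardized over strata of binary adjustment variables.

```python
import numpy as np

def rd_standardized(df, S):
    """Collapsible RD standardized over the joint strata of the binary
    adjustment variables named in S; df is a dict of numpy arrays."""
    A, Y = df['A'], df['Y']
    if not S:
        return Y[A == 1].mean() - Y[A == 0].mean()
    key = np.zeros(len(Y), dtype=int)
    for v in S:
        key = key * 2 + df[v]
    rd = 0.0
    for u in np.unique(key):
        m = key == u
        rd += m.mean() * (Y[m & (A == 1)].mean() - Y[m & (A == 0)].mean())
    return rd

def add_one_minus_one(df, S, covars, threshold=0.01):
    """Steps 3-6: estimate on S, on S plus each excluded variable
    (add-one) and on S minus each included variable (minus-one),
    flagging changes beyond the predefined meaningful threshold."""
    base = rd_standardized(df, S)
    out = {'base': base, 'add': {}, 'minus': {}}
    for v in covars:
        if v not in S:
            est = rd_standardized(df, S + [v])
            out['add'][v] = (est, bool(abs(est - base) > threshold))
    for v in S:
        est = rd_standardized(df, [c for c in S if c != v])
        out['minus'][v] = (est, bool(abs(est - base) > threshold))
    return out
```

The flags in the returned dictionary correspond to the points lying outside the dotted threshold lines in the plots of step 5.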
The key to step 7 is recognizing when the observed patterns are consistent with the patterns implied by the DAG. If S is minimally sufficient, the add-one pattern is consistent if the only meaningful changes arise when conditioning on mediators lying on open paths from A to Y or when conditioning on colliders which open a path from A to Y. All variables in S should show meaningful minus-one changes, but this may not always be the case in practice because of incidental cancellations (see Discussion). Once familiar with the rules of DAGs, it is straightforward for a researcher to identify the expected changes for any adjustment set for a given DAG: for example, if adjusting on {C_{1},C_{3}} in Figure 1, the implied add-one pattern is no change for C_{2} and a change for C_{4} and C_{5}. The implied minus-one pattern is a change for C_{1} and C_{3}.
Importantly, DAGs will commonly have more than one minimally sufficient adjustment set. In this case, the researcher should also compare the effects estimated on each minimally sufficient set in steps 8 and 9 above. These adjusted effect estimates should not differ, meaning that any observed differences can help distinguish between the different working DAGs in these steps.
Defining a meaningful change
A key decision is defining the change in the estimate sufficient to warrant reviewing the DAG. The first issue here is the size of the change. For this, a researcher could choose to follow (and defend) the commonly used threshold of a 10% relative difference in the starting estimate [4, 40]. Although standard practice in epidemiology, the relative nature of this rule means that the chance of declaring a change meaningful will differ with the magnitude of the starting estimate (see the empirical example below). An alternative is therefore to use an absolute change, which, given arguments that the absolute RD is particularly relevant to decision-making [37], also has the benefit of allowing a researcher to set the threshold based on judgements of clinical or public-health relevance [36]. For example, the threshold could be the difference in mortality or in non-persistence to a prescribed treatment which would warrant a clinical or public-health reaction. If no consensus threshold is available for a given question, the researcher will need to propose (and defend) a reasonable value. Although arbitrary, this approach has the benefit of transparently communicating the decision rule and its rationale to other researchers, who can adopt or challenge it. The choice of estimator and of the meaningful-change threshold therefore clearly depend on the research question but should be defined and justified before analysis.
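The contrast between the two rules can be made concrete with a small helper (our own illustration, mirroring the −0.07 vs. −0.02 starting RDs of the empirical example below): the same 0.005 shift in the RD is flagged under the 10% relative rule only when the starting estimate is small.

```python
def meaningful_change(base_rd, new_rd, rule='absolute', cutoff=0.01):
    """Flag a meaningful change in the RD under an absolute cutoff
    (e.g. +/-0.01) or a relative one (cutoff=0.10 for the 10% rule)."""
    if rule == 'absolute':
        return abs(new_rd - base_rd) > cutoff
    return abs(new_rd - base_rd) > abs(base_rd) * cutoff

# a 0.005 shift against different starting RDs under the 10% relative rule:
meaningful_change(-0.07, -0.075, rule='relative', cutoff=0.10)  # window 0.007
meaningful_change(-0.02, -0.025, rule='relative', cutoff=0.10)  # window 0.002
```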
The second issue here is variability in the change in estimate because of sampling error or other problems such as unstable models. In this case, a researcher may inappropriately revise (or fail to revise) a prior DAG because the observed patterns have, by chance, failed to align with the patterns in the source population. We note, however, that this is also true of the change-in-estimate procedure as currently practised, which uses only the change in the point estimate to guide covariable selection.
To incorporate variability into the proposed approach, we suggest estimating the expected proportion of times the add-one and minus-one patterns would lead to a revision of the DAG under resampling and using this information in a sensitivity analysis. This can be done by bootstrap, calculating the proportion of resampled estimates lying beyond the meaningful-change threshold for each variable during the add-one and minus-one steps. The researcher should report these proportions for the prior working and final DAGs. We also suggest undertaking a sensitivity analysis by revising the prior working DAG considering only variables with >50% of resampled add-one changes outside the meaningful threshold as showing meaningful changes. Although this will mean presenting several final DAGs, it has the merit of communicating uncertainty in the assumptions used for the final models. In contrast, for the minus-one step we suggest only reporting the proportion of resampled estimates, without undertaking the sensitivity analysis, for the reasons outlined in the Discussion.
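The bootstrap proportion for a single minus-one comparison might be computed as in this Python sketch (our own simplified stand-in for the R appendix code, for the one-confounder set S = {C} with a standardized RD):

```python
import numpy as np

def minus_one_boot_proportion(A, Y, C, n_boot=500, cutoff=0.01, seed=0):
    """Proportion of bootstrap resamples in which dropping the single
    adjustment variable C (minus-one step for S = {C}) moves the
    standardized RD by more than the meaningful-change cutoff."""
    rng = np.random.default_rng(seed)
    n = len(Y)

    def rd(a, y, c=None):
        if c is None:
            return y[a == 1].mean() - y[a == 0].mean()
        out = 0.0
        for v in (0, 1):
            m = c == v
            out += m.mean() * (y[m & (a == 1)].mean() - y[m & (a == 0)].mean())
        return out

    hits = 0
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        if abs(rd(A[i], Y[i], C[i]) - rd(A[i], Y[i])) > cutoff:
            hits += 1
    return hits / n_boot
```

For a genuine confounder this proportion should be near 1; for a variable off all open paths it should be near 0, with intermediate values signalling the instability the sensitivity analysis is designed to capture.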
There are two important caveats here. First, the proposed 50% cutoff for the addone changes is arbitrary and further studies should explore the performance of different cutoff values. Second, inflated variance estimates because of unstable regression models (e.g. small sample size, collinearity) would also lead to a high estimated variability of the changes, highlighting the importance of routine model checking in the approach.
Reviewing the DAG
An important issue in reviewing the working DAG (steps 7 to 10 above) is that, as numerous DAGs can be constructed around the same variables, there is a risk of revision a posteriori to fit the observed empirical pattern. To mitigate this, we suggest first addressing the prior uncertainties as represented by the set of alternative, prior DAGs. If these DAGs do not include a graph consistent with the observed patterns, the researcher will need to consider other possible misspecification of confounding, mediating, and collision pathways, measurement error, and bias amplification as outlined in the Results. A structured approach to working through these possibilities is in Additional file 1 (web appendix). However, given the risk of post hoc fitting the DAG to the data at this stage, the researcher should state that none of the prior DAGs was consistent with the observed patterns. Note that model misspecification, another reason to consider, is not addressed in this article for reasons of space. As noted, usual methods for model checking clearly apply.
Results
We now run through a theoretical example to illustrate the approach before presenting an empirical example from clinical epidemiology.
Confounding, mediation, collision
Take the (as yet unknown) best working DAG in Figure 1, the prior DAG in Figure 2 as the preferred initial working DAG, and the DAGs in Figures 1, 3, and 4 as prior alternative DAGs. These figures are also available in Additional file 2 in slide format, to follow the changes by flicking back and forth between figures. From Figure 2, a researcher identifies a putative minimally sufficient adjustment set of {C_{1}}. The implied add-one pattern for Figure 2 when adjusting on {C_{1}} is a change for C_{4} and C_{5} and no change for C_{2} or C_{3}; the implied minus-one pattern is a change for C_{1}. He or she estimates the A→Y effect adjusted on {C_{1}} and the add-one and minus-one patterns. Graphing this (step 5 above) gives a pattern as in Figure 5, where the dotted horizontal lines represent the predefined threshold for a meaningful change. The changes on adding C_{4} and C_{5} and on removing C_{1} are consistent with Figure 2. In contrast, the changes on adding C_{2} and C_{3} are not consistent with Figure 2, flagging the need to reconsider them.
During preparation of the prior DAGs, our researcher flagged the possible confounding pathways in Figures 1 or 3 and C_{2} as a collider in Figure 4. Both Figures 1 and 3 have the same implied add-one and minus-one patterns when adjusting on C_{1} only, namely add-one changes for C_{2}, C_{3}, C_{4}, and C_{5} and a minus-one change for C_{1}. These are consistent with the observed patterns in Figure 5. The implied patterns for Figure 4 when adjusting on C_{1} only are add-one changes for C_{2}, C_{4}, and C_{5}; no add-one change for C_{3}; and a minus-one change for C_{1}. These do not correspond to those observed in Figure 5 (the add-one pattern should show no change for C_{3}). Consequently, the researcher can discount the DAG in Figure 4 and focus on Figures 1 and 3.
The researcher should reapply the above steps to each of Figures 1 and 3. In Figure 3, the minimally sufficient adjustment set is {C_{1},C_{2},C_{3}}. The implied patterns when adjusting on this set are an add-one change for C_{4} and C_{5} and a minus-one change for C_{1}, C_{2}, and C_{3}. As Figure 1 is the still-unknown best working DAG, the observed pattern will show no minus-one change for C_{2} and C_{3}. In contrast, rerunning the steps on Figure 1 will obviously give consistent add-one and minus-one patterns. This favours Figure 1. The researcher can go further, noting that both {C_{1},C_{2}} and {C_{1},C_{3}} are minimally sufficient adjustment sets in Figure 1. The effect estimate adjusted on each of these sets does not change, consistent with Figure 1 as the final working DAG based on these prior starting DAGs.
Alternatively, the researcher may have pre-identified uncertain mediation paths involving C_{2} and C_{3}, for example a single mediating path (A→C_{2}→C_{3}→Y) or two separate mediating paths (A→C_{2}→Y and A→C_{3}→Y) (not shown but easily constructed by replacing A←C_{2} with A→C_{2} in Figures 1 and 3 and A←C_{3} with A→C_{3} in Figure 3). The same approach as for the confounding scenarios will help distinguish between these, although, as discussed below, background knowledge is required to decide on the confounding vs. mediating direction of the arrows.
Measurement error
Measurement error can also cause an estimate to change when adding or deleting variables to or from the adjustment set, even though this would not be the case had the variables been measured perfectly. To see why, consider Figure 6, which is Figure 1 with measurement error of C_{2} and C_{3}. Following [41], we define C* as the measured variable, and U_{C} as representing all factors affecting measurement of C. Adjusting on C_{2}* only partially blocks A←C_{2}→C_{3}→Y at C_{2}; similarly, adjusting on C_{3}* only partially blocks this pathway at C_{3}; consequently the estimate adjusted on {C_{1},C_{2}*} will not equal that adjusted on {C_{1},C_{2}*,C_{3}*} even though they would have been the same if we could have adjusted on {C_{1},C_{2}} and {C_{1},C_{2},C_{3}}.
To see how measurement error fits into the proposed approach, consider the case of Figure 6 as the (unknown) best working DAG, Figure 1 as a researcher’s initial working prior DAG, and measurement error of C_{2} and C_{3} in Figure 6 as an alternative prior DAG. Running through the above steps on Figure 1 using a minimally sufficient adjustment set of {C_{1},C_{2}} will give add-one and minus-one patterns as in Figure 7. These are inconsistent for C_{3} in Figure 1, since adding C_{3} to the {C_{1},C_{2}} adjustment set should not change the estimate. In contrast, this pattern is consistent with the measurement error in Figure 6. Although, intuitively, the “best” adjustment set is expected to be {C_{1},C_{2}*,C_{3}*}, adjusting on a mismeasured confounder may increase bias under certain conditions [42, 43], such as the presence of a qualitative interaction between exposure and confounder if the confounder is binary [43]. Even in conditions for which adjustment on {C_{1},C_{2}*,C_{3}*} will be bias-reducing, arguably common in epidemiological research [43–45], this will not be a sufficient adjustment set as it only partially blocks the A←C_{2}→C_{3}→Y pathway. Regardless of the direction of the bias, the proposed change-in-estimate approach should flag the need to review the associations involving the mismeasured variables in the DAG.
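The partial blocking can be shown in a simplified simulation (our own Python illustration with a single misclassified confounder, rather than the two-confounder structure of Figure 6): adjusting on the mismeasured C* removes only part of the confounding, leaving the RD between the crude and the correctly adjusted values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400_000
C = rng.binomial(1, 0.5, n)                    # true confounder
C_star = C ^ rng.binomial(1, 0.2, n)           # measured with 20% misclassification
A = rng.binomial(1, 0.2 + 0.5 * C)
Y = rng.binomial(1, 0.1 + 0.2 * A + 0.3 * C)   # true RD = 0.2

def rd(adj=None):
    """RD, crude or standardized over one binary adjustment variable."""
    if adj is None:
        return Y[A == 1].mean() - Y[A == 0].mean()
    out = 0.0
    for v in (0, 1):
        m = adj == v
        out += m.mean() * (Y[m & (A == 1)].mean() - Y[m & (A == 0)].mean())
    return out

rd_crude, rd_true, rd_star = rd(), rd(C), rd(C_star)
# adjusting on C* removes only part of the (here positive) confounding:
# rd_true (~0.20) < rd_star < rd_crude (~0.35)
```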
Bias amplification
Recent work has shown that residual bias can be amplified by adjustment on instrument-like variables [46, 47], a finding which, although its quantitative relevance is still under debate [48, 49], has potentially major implications for adjustment-variable selection in epidemiology. Such bias amplification can also lead to a change in the effect estimate when adjusting on different variable sets, so researchers should consider it when reviewing a DAG based on the add-one and minus-one patterns. Note that “instrument-like” refers to variables which are strong predictors of the exposure but can also be associated with the outcome (see [46] for detailed discussion and an estimate of the ratio of the two associations). Confounders can therefore be instrument-like, depending on the relative strength of their relationships with the exposure and the outcome. This is not to be confused with standard instrumental variables which, by definition, are associated only with the exposure and which have bias-reducing properties in appropriate analyses (see [50] for this) and bias-amplifying effects in other analyses [46].
Consider Figure 1 as a prior DAG, Figure 8 as the unknown best working DAG, and major residual confounding, shown by the pathway A←Z_{U}→Y in Figure 8, as a prior uncertainty. In the absence of residual confounding (Figure 1), a collapsible estimate adjusted on {C_{1},C_{2}}, {C_{1},C_{3}}, and {C_{1},C_{2},C_{3}} should not differ. However, with residual confounding (Figure 8), these estimates will differ because C_{2} and C_{3} have different “instrument strengths” (i.e. relative to C_{3}, C_{2} is more strongly associated with the exposure A) and so amplify the residual bias differently [16]. Consequently, a researcher starting with a minimally sufficient adjustment set of {C_{1},C_{2}} (based on Figure 1) will find addone and minusone patterns similar to those shown in Figure 7. These patterns are inconsistent with Figure 1 but are consistent with the alternative DAG in Figure 8. The question again becomes which adjustment set to choose to minimize bias. Until further theoretical and simulation work is available on bias amplification, a conservative strategy is to adjust on {C_{1},C_{3}}, as C_{3} should be a weaker instrument than C_{2}, but also to present the estimate adjusted on {C_{1},C_{2}} and {C_{1},C_{2},C_{3}}.
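The amplification effect itself is easy to reproduce in a linear-Gaussian simulation (our own illustration, not the structure of Figure 8): with a null exposure effect, an unmeasured confounder U, and an instrument Z, the bias of the coefficient for A is larger after adjusting on Z, because conditioning on Z removes exposure variance unrelated to U while leaving the U-pathway intact.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300_000
U = rng.normal(size=n)                   # unmeasured confounder: A <- U -> Y
Z = rng.normal(size=n)                   # instrument-like: strong cause of A only
A = 1.5 * Z + U + rng.normal(size=n)
Y = U + rng.normal(size=n)               # true effect of A on Y is zero

def slope(y, x, covs=()):
    """OLS coefficient of x from a regression of y on x and covs."""
    X = np.column_stack([np.ones(len(y)), x, *covs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

bias_unadj = slope(Y, A)        # residual confounding bias (~1/4.25 = 0.235)
bias_adj_Z = slope(Y, A, (Z,))  # conditioning on Z amplifies it (~1/2 = 0.5)
```

The closed-form values follow from Cov(A,Y)/Var(A) = 1/4.25 unadjusted versus 1/(Var(A) − 1.5²Var(Z)) = 1/2 after partialling out Z.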
Presenting more than one final DAG
In many instances, the researcher will need to present more than one final DAG with implied add-one and minus-one patterns consistent with the patterns observed. Sometimes the adjusted estimate will be the same because the DAGs imply the same minimally sufficient adjustment set. An example is removing the C_{5}→Y arrow and adding a C_{5}←C_{3} arrow in Figure 2. This DAG has similar implied patterns to the current Figure 2 and so, if matching the observed patterns, both would need to be presented amongst the final DAGs. The minimally sufficient adjustment set in both is {C_{1}}, and so the adjusted effect estimate will be the same. However, in some cases the minimally sufficient adjustment sets will differ, so that an estimate for each DAG will need to be presented. One example of this involves the confounding vs. mediating pathways mentioned above, if both types of relationship were identified as plausible during the preparation of the prior DAGs (e.g. the DAG in Figure 4 and the DAG created by replacing A←C_{2}→Y with A→C_{2}→Y in Figure 4).
Empirical example
We now consider an empirical example to illustrate the approach. We compare mortality 5 years after peritoneal-dialysis (PD) initiation amongst patients with polycystic kidney disease (PKD) versus other nephropathies, using data from the French Language Peritoneal Dialysis Registry (RDPLF) (details in Additional file 1 (web appendix); see also [51] for background). We estimate the RD by linear regression with robust standard errors [52] and take a ±0.01 absolute change in the point estimate of the RD as meaningful, considering that a difference of this magnitude in the cumulative incidence of death would warrant attention from clinical or public-health decision-makers. To compare the absolute with the relative scale, we also show a ±10% change in the RD. We calculated the proportion of estimates lying outside the ±0.01 absolute-change threshold on resampling using 2000 non-parametric bootstrap samples.
The DAG in Figure 9 illustrates prior assumptions regarding variable relationships. Type of peritoneal dialysis refers to the two modalities of treatment, namely continuous ambulatory peritoneal dialysis and automated peritoneal dialysis. The other variables are self-explanatory. Figure 9 shows, for example, that we assume that Type of peritoneal dialysis and Sex have no direct association with Death and that both PKD vs. other nephropathies and Comorbidity index are associated with the Peritoneal dialysis vs. haemodialysis participation variable. The square around this latter variable shows that it has been conditioned upon during data collection, since only PD patients are included in the registry. Our prior uncertainties are absence of the Type of assistance→Death arrow (Figure 10), absence of the Sex→Type of assistance arrow (Figure 11), and whether Comorbidity index and Type of assistance are better considered as proxies for two unmeasured variables, Major concurrent illnesses and Frailty, respectively (Figure 12). In this last case, we consider Frailty also to be associated with the Peritoneal dialysis vs. haemodialysis collider and with Death.
There is only one minimally sufficient adjustment set in the prior DAG (Figure 9), simply {Age, Comorbidity index}. Figure 13 shows the add-one and minus-one patterns for this adjustment set. The dotted lines are the ±0.01 threshold; the dashed lines are the 10% relative change in the RD. The add-one pattern shows a meaningful change for Type of assistance (i.e. it lies outside of the dotted lines in Figure 13), inconsistent with the implied pattern from Figure 9, whereas the minus-one pattern shows a meaningful change for both variables in the set, consistent with Figure 9. The proportions of bootstrapped estimates lying outside of the meaningful threshold are in Table 1: only Type of assistance had >50% of the add-one estimates outside of the meaningful threshold.
We therefore need to review the DAG, focusing on Type of assistance. Looking at the prior uncertainties, dropping the Type of assistance→Death arrow (Figure 10) or the Sex→Type of assistance arrow (Figure 11) does not change the implied patterns compared with Figure 9. In contrast, specifying the proxy relations in Figure 12 changes the adjustment set. (Note that there is no sufficient adjustment set (of measured variables) according to this DAG, as the paths PKD vs. other nephropathies←Major concurrent illnesses→Death, PKD vs. other nephropathies←Major concurrent illnesses→Frailty→Death, PKD vs. other nephropathies←Major concurrent illnesses→Peritoneal dialysis vs. haemodialysis←Frailty→Death, and PKD vs. other nephropathies→Peritoneal dialysis vs. haemodialysis←Frailty→Death remain partially open at Major concurrent illnesses and Frailty.) The implied add-one pattern for a starting adjustment set of {Age, Comorbidity index} in Figure 12 is therefore a meaningful change for Type of assistance, Sex, and Type of peritoneal dialysis.
Now using Figure 12 as our revised working DAG, the best adjustment set is {Age, Comorbidity index, Type of assistance, Sex}. The last three variables are included as descending or ascending proxies of the two unmeasured variables. We did not include Type of peritoneal dialysis in this set as its net bias-reducing effect is not clear: it would contribute to partially conditioning on the unmeasured Frailty variable but would also open biasing pathways, e.g. PKD vs. other nephropathies→Type of peritoneal dialysis←Frailty→Death. The RD adjusted on the final set did not show a meaningful change in the add-one pattern (proportions of bootstrapped estimates outside of the threshold <50%, shown in Table 1), and the minus-one pattern showed a meaningful change for all adjustment variables except Age (Figure 14). Age also had <50% of bootstrapped estimates lying outside of the meaningful threshold (Table 1). We maintain Age in the adjustment set as this pattern is coherent with the DAG, since the other adjustment variables, Comorbidity index and Type of assistance, may already condition effectively on Age owing to a strong correlation. However, we note that Age may be dropped if this improves the efficiency of the estimate (see [6]). We would therefore present our prior working DAG (Figure 9) with an RD of −0.07 (95% CI: −0.14, 0.00) and our final working DAG (Figure 12) with an RD of −0.02 (95% CI: −0.10, 0.05).
As an aside, Figures 13 and 14 show the difference between using relative and absolute scales as the threshold for a meaningful change. In Figure 13, the starting RD is −0.07 and so the width of the relative change (dashed lines) is close to that of the absolute change (dotted lines). In Figure 14, the starting RD is considerably smaller, at −0.02, and so the width of the relative change is much smaller than that of the absolute change.
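The scale dependence can be made concrete. Assuming (hypothetically) a relative threshold of ±10% of the starting estimate versus an absolute threshold of ±0.01, the half-widths of the "meaningful change" bands compare as follows:

```python
def threshold_half_widths(start_rd, rel=0.10, absolute=0.01):
    """Half-widths of the 'meaningful change' band around a starting RD,
    under a hypothetical 10% relative rule vs. a 0.01 absolute rule."""
    return rel * abs(start_rd), absolute

# Starting RD of -0.07 (as in Figure 13): the two bands are comparable
# (relative ~0.007 vs. absolute 0.01).
print(threshold_half_widths(-0.07))
# Starting RD of -0.02 (as in Figure 14): the relative band is much
# narrower (relative ~0.002 vs. absolute 0.01).
print(threshold_half_widths(-0.02))
```

The specific 10% and 0.01 values are illustrative only; the point is that the relative band shrinks with the starting estimate while the absolute band does not.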
Discussion
We have presented an approach to selecting adjustment variables which combines prior knowledge expressed in a DAG with results from analysis of the data. The approach is pragmatic in that it focuses only on the effect of interest (also emphasized by others [5]); uses regression models and the change-in-estimate procedure familiar to epidemiologists; and can incorporate real-data problems such as measurement error and residual bias. It aims at producing a plausible, best working DAG or set of DAGs for a given research question, given the data at hand, and at communicating the assumptions underlying variable selection in the initial and final models using a standardized, graphical form [3]. The approach also communicates the uncertainties in the assumptions in the final models by presenting all the DAGs identified by the researcher which are consistent with the observed change-in-estimate patterns. This aims to help other research teams to focus on the areas of uncertainty and corroborate or refute the DAGs, based on the analysis of different datasets in an iterative way.
The approach depends on recent theoretical work on c-equivalence (confounding equivalence) [16] and collapsibility of estimates over different DAG structures [17]. Pearl and Paz [16] have developed conditions for c-equivalence which apply to any subsets of the variables in a DAG. Our approach uses two of their results: that all sufficient adjustment sets are c-equivalent and that failure to find c-equivalence of putative sufficient adjustment sets rules out a DAG implying such c-equivalence [3]. The approach also uses Pearl and Paz's insights into bias amplification, in which they note that bias amplification will lead to changes in associations conditional on different variables even if the variables block the same path. In a recent, detailed review of collapsibility (i.e. equivalence) of different estimators over different DAGs [17], Greenland and Pearl noted that regression coefficients may be used to check collapsibility over different covariable sets, an approach which we develop here for applied work.
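The first of these results can be illustrated with a small simulation (ours, not from [16]). In a hypothetical DAG with arrows C1→C2→X, C1→Y, and X→Y, the single backdoor path X←C2←C1→Y can be blocked either at C1 or at C2, so {C1} and {C2} are both sufficient adjustment sets and should be c-equivalent: a collapsible estimator such as the RD should agree under the two adjustments.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Hypothetical DAG: C1 -> C2 -> X, C1 -> Y, and X -> Y.
# The backdoor path X <- C2 <- C1 -> Y is blocked by {C1} or by {C2}.
c1 = rng.binomial(1, 0.5, n)
c2 = rng.binomial(1, 0.2 + 0.5 * c1, n)
x = rng.binomial(1, 0.2 + 0.3 * c2, n)
y = rng.binomial(1, 0.2 + 0.1 * x + 0.2 * c1, n)  # true RD for X is 0.10

def rd(y, x, covars):
    """Risk difference for X from a linear-probability model."""
    design = np.column_stack([np.ones(len(y)), x] + covars)
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

rd_c1, rd_c2 = rd(y, x, [c1]), rd(y, x, [c2])
print(rd_c1, rd_c2)  # both should be close to the true RD of 0.10
```

A meaningful disagreement between the two adjusted estimates would be evidence against a DAG in which both sets are sufficient, which is the rule the approach exploits.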
To our knowledge, only one other article in the epidemiology literature to date has looked at adjustment-variable selection by explicitly combining DAGs and a statistical selection procedure [6]. This article addressed deletion of variables from an adjustment set defined from a prior DAG using the change-in-estimate procedure, but considered only odds ratios from simulations of case–control studies and explicitly excluded colliders. Our approach is therefore broader as it addresses whether the data support the initial DAG which defines the starting adjustment set, applies to any collapsible estimator, and covers the range of possible relationships between variables. Interestingly, this article found largest bias (using simulated data) when including covariables associated only with the outcome in the adjustment set and suggested that non-collapsibility of the odds ratio may have been involved [6]. This reinforces our insistence on collapsible estimators.
The proposed approach has some potential advantages over other variable-selection methods. It can reduce the "black-box" nature of using the p-value or the change-in-estimate alone to select variables, as it lays out the rationale for adjustment-variable choice graphically. It will also frequently lead to a more parsimonious model than selection based on p-values since it chooses variables by relevance to the exposure-outcome association, rather than the association with the outcome alone. The approach also extends background-knowledge methods by checking starting assumptions against the data and requiring researchers to justify mismatches or adapt assumptions appropriately. The approach complements the recently proposed method of adjusting on all assumed parents of exposure and outcome [21] as it can incorporate adjustment decisions when parent variables are measured with error and can achieve a more parsimonious model by excluding parent variables which do not lie on biasing pathways. Of course, sensitivity analyses to explore the impact of possible unmeasured confounding [53] remain important.
An important point concerns the possibility of incidental cancellations and small effects. Finding a meaningful difference in the add-one pattern for a variable when no difference is implied by the DAG indicates the need to review the variable's relationships. However, finding no meaningful difference in the add-one or minus-one patterns when a difference is implied is not, strictly speaking, inconsistent with the DAG. This is because of the possibilities of incidental cancellations across pathways and of changes which simply do not exceed the predefined meaningful threshold. For this reason, we suggest that the researcher maintain such arrows (thereby assuming "weak faithfulness" rather than faithfulness; see [32] p.190), but label these arrows for other research teams to examine with different datasets.
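An incidental cancellation is easy to manufacture in simulation (a hypothetical example of ours, not data from the paper): two confounders whose biases run in opposite directions can leave the crude and fully adjusted estimates nearly equal, so the add-one change for the pair looks null even though the DAG implies confounding.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical DAG with two confounders whose biases cancel incidentally:
# C1 raises both X and Y (positive bias); C2 raises X but lowers Y
# (negative bias of the same magnitude).
c1 = rng.binomial(1, 0.5, n)
c2 = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.3 + 0.2 * c1 + 0.2 * c2, n)
y = rng.binomial(1, 0.30 + 0.10 * x + 0.15 * c1 - 0.15 * c2, n)  # true RD 0.10

def rd(y, x, covars):
    design = np.column_stack([np.ones(len(y)), x] + covars)
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

crude = rd(y, x, [])
partial = rd(y, x, [c1])       # removes the positive bias, leaves the negative
full = rd(y, x, [c1, c2])
print(crude, partial, full)
# crude and full are both near 0.10 (the two biases cancel), while the
# half-adjusted estimate differs: the apparently null change between the
# crude and fully adjusted estimates does not, by itself, rule out confounding.
```

This is why a missing implied change should lead to labelling the arrow for further examination rather than deleting it.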
A potential criticism of the approach is that it does not eliminate background knowledge from adjustment-variable selection. Indeed, the examples include instances of needing background knowledge to distinguish between DAGs giving the same add-one and minus-one patterns (e.g. confounding vs. mediating-pathway examples, measurement-error vs. bias-amplification examples). It is well known that different DAGs can imply the same statistical relationships [3, 7, 54], making an appeal to background knowledge unavoidable when using DAGs in applied work. We do not consider this a limitation, however, seeing background knowledge as valid information which should rarely be overruled by any single dataset but, rather, reviewed in light of the patterns in the data. This is particularly appropriate in clinical epidemiology, where we frequently know quite a lot about likely relationships between variables. In contrast, the approach is unlikely to be well adapted to datasets for which researchers have very little background knowledge, when alternative approaches such as DAG-discovery algorithms (below) may be used.
Another potential criticism is that the approach only addresses variable relationships relevant to the effect of interest, remaining agnostic about other regions of the DAG. This aims to focus on the research question at hand and to minimize the risk of "getting lost" in trying to explore all possible associations in the DAG, many of which do not directly impact on the selected exposure-outcome estimate. A researcher wishing to explore the full DAG could apply a DAG-discovery algorithm (e.g. the PC, GES, or FCI algorithms; see the TETRAD project's website and [7]). Such algorithmic approaches use statistical tests or scoring rules to identify edges between variables and can incorporate background knowledge such as the temporal ordering of variables or the forced inclusion or exclusion of arrows. However, they have proven controversial [8] and have not yet crossed over into applied epidemiologic research. Nonetheless, recent applications of these algorithms in the biomedical literature for data with many variables and little background knowledge have been interesting [55]. In the approach proposed in this article, a researcher could use these algorithms to explore additional prior starting DAGs. In our experience, however, there are challenges to using these algorithms currently, including handling datasets with mixed continuous and categorical variables and dealing with issues such as measurement error and bias amplification.
We wish to highlight several additional limitations of the proposed approach. Like the change-in-estimate procedure, the approach is ad hoc and informal as it depends on arbitrary thresholds and is not founded on well-defined statistical tests with appropriate theoretical properties. In addition, as discussed above, different DAG structures can give the same implied add-one and minus-one patterns and so more than one DAG will be consistent with the observed patterns. For this reason, the researcher should present all identified DAGs with implied patterns consistent with those observed; further, researchers should always remember that other DAGs (not identified) will also be consistent with the patterns.
Several extensions to the approach are possible, should it appeal to epidemiologists working on applied questions. These include investigating how best to address sampling variability in the patterns, for example by comparing the performance of different rules based on the proportion of bootstrap samples which fall outside the meaningful threshold. Another potential extension concerns precision in choosing the adjustment set: a researcher may wish to adjust on additional variables to improve precision [56] or to delete variables from the final adjustment set based on the precision of estimates, as concluded in [6]. Researchers should of course bear in mind that, as with any a posteriori variable selection, estimates from a revised DAG will tend to be over-precise. Finally, it may be possible to extend the approach to include recent advances in DAG theory, including selection variables to encode differences between populations (and so uncertainty about arrows) [57], signed DAGs which specify assumptions about the positive or negative direction of paths [58], and interactions using sufficient-causation DAGs [59].
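One such bootstrap rule can be sketched as follows (an illustrative implementation of ours, with hypothetical simulated data and an arbitrary 0.01 threshold, not a prescription): resample the data, recompute the add-one change in the RD in each replicate, and report the proportion of replicates in which the change exceeds the threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# Hypothetical data: C confounds X and Y; Z is pure noise.
c = rng.binomial(1, 0.5, n)
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.3 + 0.3 * c, n)
y = rng.binomial(1, 0.2 + 0.1 * x + 0.25 * c, n)

def rd(y, x, covars):
    design = np.column_stack([np.ones(len(y)), x] + covars)
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

def prop_outside(y, x, base, extra, threshold=0.01, n_boot=200):
    """Proportion of bootstrap replicates in which adding `extra` to the
    adjustment set moves the RD by more than `threshold` (illustrative rule)."""
    hits = 0
    for _ in range(n_boot):
        i = rng.integers(0, len(y), len(y))
        change = (rd(y[i], x[i], [v[i] for v in base + extra])
                  - rd(y[i], x[i], [v[i] for v in base]))
        hits += abs(change) > threshold
    return hits / n_boot

p_conf = prop_outside(y, x, [], [c])    # confounder: proportion near 1
p_noise = prop_outside(y, x, [c], [z])  # noise variable: proportion near 0
print(p_conf, p_noise)
```

How the threshold, the number of replicates, and the cut-off proportion (e.g. the 50% used in the example above) should be chosen is exactly the open performance question this extension would address.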
Conclusions
In summary, we have proposed a novel approach to adjustment-variable selection in epidemiology which combines existing knowledge-based and statistics-based methods. It requires a researcher to present background-knowledge assumptions in a DAG, to compare these against patterns in the data, and to review assumptions accordingly. It also ensures clear communication of assumptions and uncertainties to other researchers and readers in a standardized graphical format. As the approach requires background knowledge, it is probably best suited to areas such as clinical epidemiology where researchers know quite a lot about a priori plausible variable relationships. Researchers can use this approach as an additional tool for selecting adjustment variables when analyzing epidemiological data.
References
 1.
Greenland S, Pearl J, Robins JM: Causal diagrams for epidemiologic research. Epidemiology. 1999, 10: 37-48. 10.1097/00001648-199901000-00008.
 2.
Glymour M, Greenland S: Causal diagrams. Modern epidemiology. 2008, Philadelphia, PA: Lippincott Williams & Wilkins, 183-209. 3rd edition
 3.
Pearl J: Causality: models, reasoning, and inference. 2009, Cambridge: Cambridge University Press, 2nd edition
 4.
Greenland S: Modeling and variable selection in epidemiologic analysis. Am J Public Health. 1989, 79: 340-349. 10.2105/AJPH.79.3.340.
 5.
Vansteelandt S, Bekaert M, Claeskens G: On model selection and model misspecification in causal inference. Stat Methods Med Res. 2012, 21: 7-30. 10.1177/0962280210387717.
 6.
Weng HY, Hsueh YH, Messam LLM, Hertz-Picciotto I: Methods of covariate selection: directed acyclic graphs and the change-in-estimate procedure. Am J Epidemiol. 2009, 169: 1182-1190. 10.1093/aje/kwp035.
 7.
Spirtes P, Glymour C, Scheines R: Causation, prediction, and search. 2001, Cambridge: The MIT Press, 2nd edition
 8.
Rejoinder to Glymour and Spirtes. Computation, causation, and discovery. Edited by: Glymour C, Cooper G. 1999, Cambridge MA: AAAI Press/The MIT Press, 333-342.
 9.
Leiss JK: Management practices and risk of occupational blood exposure in U.S. paramedics: non-intact skin exposure. Ann Epidemiol. 2009, 19: 884-890. 10.1016/j.annepidem.2009.08.006.
 10.
Nyitray AG, Smith D, Villa L, Lazcano Ponce E, Abrahamsen M, Papenfuss M, Giuliano AR: Prevalence of and risk factors for anal human papillomavirus infection in men who have sex with women: a cross-national study. J Infect Dis. 2010, 201: 1498-1508. 10.1086/652187.
 11.
Rod NH, Vahtera J, Westerlund H, Kivimaki M, Zins M, Goldberg M, Lange T: Sleep disturbances and cause-specific mortality: results from the GAZEL cohort study. Am J Epidemiol. 2010, 173: 300-309.
 12.
Edmonds A, Yotebieng M, Lusiama J, Matumona Y, Kitetele F, Napravnik S, Cole SR, Van Rie A, Behets F: The effect of highly active antiretroviral therapy on the survival of HIV-infected children in a resource-deprived setting: a cohort study. PLoS Med. 2011, 8: e1001044. 10.1371/journal.pmed.1001044.
 13.
Leval A, Sundström K, Ploner A, Arnheim Dahlström L, Widmark C, Sparén P: Assessing perceived risk and STI prevention behavior: a national population-based study with special reference to HPV. PLoS One. 2011, 6: e20624. 10.1371/journal.pone.0020624.
 14.
Gaskins AJ, Mumford SL, Rovner AJ, Zhang C, Chen L, Wactawski-Wende J, Perkins NJ, Schisterman EF, for the BioCycle Study Group: Whole grains are associated with serum concentrations of high-sensitivity C-reactive protein among premenopausal women. J Nutr. 2010, 140: 1669-1676. 10.3945/jn.110.124164.
 15.
Gaskins AJ, Mumford SL, Zhang CL, Wactawski-Wende J, Hovey KM, Whitcomb BW, Howards PP, Perkins NJ, Yeung E, Schisterman EF: Effect of daily fiber intake on reproductive function: the BioCycle study. Am J Clin Nutr. 2009, 90: 1061-1069. 10.3945/ajcn.2009.27990.
 16.
Pearl J, Paz A: Confounding equivalence in observational studies (or, when are two measurements equally valuable for effect estimation?). Proceedings of the twenty-sixth conference on uncertainty in artificial intelligence. 2010, Corvallis: AUAI, 433-441.
 17.
Greenland S, Pearl J: Adjustments and their consequences - collapsibility analysis using graphical models. Int Stat Rev. 2011, 79: 401-426. 10.1111/j.1751-5823.2011.00158.x.
 18.
Shrier I, Platt RW: Reducing bias through directed acyclic graphs. BMC Med Res Methodol. 2008, 8: 70. 10.1186/1471-2288-8-70.
 19.
Fleischer NL, Diez Roux AV: Using directed acyclic graphs to guide analyses of neighbourhood health effects: an introduction. J Epidemiol Community Health. 2008, 62: 842-846. 10.1136/jech.2007.067371.
 20.
Richiardi L, Barone-Adesi F, Merletti F, Pearce N: Using directed acyclic graphs to consider adjustment for socioeconomic status in occupational cancer studies. J Epidemiol Community Health. 2008, 62: e14. 10.1136/jech.2007.065581.
 21.
Vanderweele TJ, Shpitser I: A new criterion for confounder selection. Biometrics. 2011, 67: 1406-1413. 10.1111/j.1541-0420.2011.01619.x.
 22.
Tsai CL, Camargo CA: Methodological considerations, such as directed acyclic graphs, for studying "acute on chronic" disease epidemiology: chronic obstructive pulmonary disease example. J Clin Epidemiol. 2009, 62: 982-990. 10.1016/j.jclinepi.2008.10.005.
 23.
Dawid AP: Beware of the DAG!. Journal of Machine Learning Research Workshop and Conference Proceedings. 2010, 6: 59-86.
 24.
Dawid AP: Influence diagrams for causal modelling and inference. Int Stat Rev. 2002, 70: 161-189.
 25.
Petersen ML, Sinisi SE, van der Laan MJ: Estimation of direct causal effects. Epidemiology. 2006, 17: 276-284. 10.1097/01.ede.0000208475.99429.2d.
 26.
Robins JM, Greenland S: Identifiability and exchangeability for direct and indirect effects. Epidemiology. 1992, 3: 143-155. 10.1097/00001648-199203000-00013.
 27.
Shpitser I, Vanderweele TJ: A complete graphical criterion for the adjustment formula in mediation analysis. Int J Biostat. 2011, 7: 16. 10.2202/1557-4679.1297.
 28.
Shpitser I, VanderWeele TJ, Robins JM: On the validity of covariate adjustment for estimating causal effects. Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence (UAI-10). 2010, Corvallis: AUAI, 527-536.
 29.
Greenland S, Robins JM, Pearl J: Confounding and collapsibility in causal inference. Stat Sci. 1999, 14: 29-46. 10.1214/ss/1009211805.
 30.
Breitling L: A suite of R functions for directed acyclic graphs. Epidemiology. 2010, 21: 586-587. 10.1097/EDE.0b013e3181e09112.
 31.
Knueppel S, Stang A: DAG program: identifying minimal sufficient adjustment sets. Epidemiology. 2010, 21: 159.
 32.
Rothman K, Greenland S, Lash T: Modern epidemiology. 2008, Philadelphia: Lippincott Williams & Wilkins, 3rd edition
 33.
Kaufman JS: Marginalia: comparing adjusted effect measures. Epidemiology. 2010, 21: 490-493. 10.1097/EDE.0b013e3181e00730.
 34.
Greenland S: Absence of confounding does not correspond to collapsibility of the rate ratio or rate difference. Epidemiology. 1996, 7: 498-501. 10.1097/00001648-199609000-00007.
 35.
Miettinen OS, Cook EF: Confounding - essence and detection. Am J Epidemiol. 1981, 114: 593-603.
 36.
Austin PC, Laupacis A: A tutorial on methods to estimating clinically and policy-meaningful measures of treatment effects in prospective observational studies: a review. Int J Biostat. 2011, 7 (1): 6.
 37.
Austin PC: Absolute risk reductions, relative risks, relative risk reductions, and numbers needed to treat can be obtained from a logistic regression model. J Clin Epidemiol. 2010, 63: 2-6. 10.1016/j.jclinepi.2008.11.004.
 38.
Gehrmann U, Kuss O, Wellmann J, Bender R: Logistic regression was preferred to estimate risk differences and numbers needed to be exposed adjusted for covariates. J Clin Epidemiol. 2010, 63: 1223-1231. 10.1016/j.jclinepi.2010.01.011.
 39.
McNutt LA, Wu C, Xue X, Hafner JP: Estimating the relative risk in cohort studies and clinical trials of common outcomes. Am J Epidemiol. 2003, 157: 940-943. 10.1093/aje/kwg074.
 40.
Maldonado G, Greenland S: Simulation study of confounder-selection strategies. Am J Epidemiol. 1993, 138: 923-936.
 41.
Hernán MA, Cole SR: Invited commentary: causal diagrams and measurement bias. Am J Epidemiol. 2009, 170: 959-962. 10.1093/aje/kwp293.
 42.
Brenner H: Bias due to non-differential misclassification of polytomous confounders. J Clin Epidemiol. 1993, 46: 57-63. 10.1016/0895-4356(93)90009-P.
 43.
Ogburn EL, VanderWeele TJ: On the nondifferential misclassification of a binary confounder. Epidemiology. 2012, 23: 433-439. 10.1097/EDE.0b013e31824d1f63.
 44.
Greenland S: Intuitions, simulations, theorems: the role and limits of methodology. Epidemiology. 2012, 23: 440-442. 10.1097/EDE.0b013e31824e278d.
 45.
Vanderweele TJ, Ogburn EL: Theorems, proofs, examples, and rules in the practice of epidemiology. Epidemiology. 2012, 23: 443-445. 10.1097/EDE.0b013e31824e2d4e.
 46.
Pearl J: On a class of bias-amplifying covariates that endanger effect estimates. Proceedings of the twenty-sixth conference on uncertainty in artificial intelligence. 2010, Corvallis: AUAI, 417-424. Technical report (R-356)
 47.
Wooldridge J: Should instrumental variables be used as matching variables?. Technical report, Michigan State University. 2006
 48.
Myers JA, Rassen JA, Gagne JJ, Huybrechts KF, Schneeweiss S, Rothman KJ, Joffe MM, Glynn RJ: Effects of adjusting for instrumental variables on bias and precision of effect estimates. Am J Epidemiol. 2011, 174: 1213-1222. 10.1093/aje/kwr364.
 49.
Pearl J: Invited commentary: understanding bias amplification. Am J Epidemiol. 2011, 174: 1223-1227. 10.1093/aje/kwr352.
 50.
Rassen JA, Brookhart MA, Glynn RJ, Mittleman MA, Schneeweiss S: Instrumental variables I: instrumental variables exploit natural variation in nonexperimental data to estimate causal relationships. J Clin Epidemiol. 2009, 62: 1226-1232. 10.1016/j.jclinepi.2008.12.005.
 51.
Lobbedez T, Touam M, Evans D, Ryckelynck JP, Knebelman B, Verger C: Peritoneal dialysis in polycystic kidney disease patients. Report from the French peritoneal dialysis registry (RDPLF). Nephrol Dial Transplant. 2011, 26: 2332-2339. 10.1093/ndt/gfq712.
 52.
Cheung YB: A modified least-squares regression approach to the estimation of risk difference. Am J Epidemiol. 2007, 166: 1337-1344. 10.1093/aje/kwm223.
 53.
Groenwold RHH, Hak E, Hoes AW: Quantitative assessment of unobserved confounding is mandatory in nonrandomized intervention studies. J Clin Epidemiol. 2009, 62: 22-28. 10.1016/j.jclinepi.2008.02.011.
 54.
Robins JM: Data, design, and background knowledge in etiologic inference. Epidemiology. 2001, 12: 313-320. 10.1097/00001648-200105000-00011.
 55.
Kalisch M, Fellinghauer BAG, Grill E, Maathuis MH, Mansmann U, Bühlmann P, Stucki G: Understanding human functioning using graphical models. BMC Med Res Methodol. 2010, 10: 14. 10.1186/1471-2288-10-14.
 56.
Robinson LD, Jewell NP: Some surprising results about covariate adjustment in logistic-regression models. Int Stat Rev. 1991, 59: 227-240. 10.2307/1403444.
 57.
Pearl J, Bareinboim E: Transportability across studies: a formal approach. Technical report R-372. 2011
 58.
VanderWeele TJ, Robins JM: Signed directed acyclic graphs for causal inference. Journal of the Royal Statistical Society Series B (Statistical Methodology). 2009, 72: 111-127.
 59.
VanderWeele TJ, Robins JM: Directed acyclic graphs, sufficient causes, and the properties of conditioning on a common effect. Am J Epidemiol. 2007, 166: 1096-1104. 10.1093/aje/kwm179.
Prepublication history
The prepublication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/12/156/prepub
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
DE, BC, and AF conceived the idea through their interests in confounder selection and directed acyclic graphs. CV and TL were responsible for the peritoneal dialysis data and contributed to the development and interpretation of the empirical example. DE did the analyses and drafted the manuscript. All authors critically reviewed the drafts and approved the final version.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Evans, D., Chaix, B., Lobbedez, T. et al. Combining directed acyclic graphs and the change-in-estimate procedure as a novel approach to adjustment-variable selection in epidemiology. BMC Med Res Methodol 12, 156 (2012) doi:10.1186/1471-2288-12-156
Keywords
 Directed acyclic graph
 Adjustment-variable selection
 Change-in-estimate
 Peritoneal dialysis