
A scoping review of indirect comparison methods and applications using individual patient data



Several indirect comparison methods, including network meta-analyses (NMAs), using individual patient data (IPD) have been developed to synthesize evidence from a network of trials. Although IPD indirect comparisons are published with increasing frequency in health care literature, there is no guidance on selecting the appropriate methodology and on reporting the methods and results.


In this paper we examine the methods and reporting of indirect comparison methods using IPD. We searched MEDLINE, Embase, the Cochrane Library, and CINAHL from inception until October 2014. We included published and unpublished studies reporting a method, application, or review of indirect comparisons using IPD and at least three interventions.


We identified 37 papers, including a total of 33 empirical networks. Of these, only 9 (27 %) IPD-NMAs reported the existence of a study protocol, whereas 3 (9 %) studies mentioned that protocols existed without providing a reference. The 33 empirical networks included 24 (73 %) IPD-NMAs and 9 (27 %) matching adjusted indirect comparisons (MAICs). Of the 21 (64 %) networks with at least one closed loop, 19 (90 %) were IPD-NMAs, 13 (68 %) of which evaluated the prerequisite consistency assumption, and only 5 (38 %) of the 13 IPD-NMAs used statistical approaches. The median number of trials included per network was 10 (IQR 4–19) (IPD-NMA: 15 [IQR 8–20]; MAIC: 2 [IQR 3–5]), and the median number of IPD trials included in a network was 3 (IQR 1–9) (IPD-NMA: 6 [IQR 2–11]; MAIC: 2 [IQR 1–2]). Half of the networks (17; 52 %) applied Bayesian hierarchical models (14 one-stage, 1 two-stage, 1 used IPD as an informative prior, 1 unclear-stage), including either IPD alone or with aggregated data (AD). Models for dichotomous and continuous outcomes were available (IPD alone or combined with AD), as were models for time-to-event data (IPD combined with AD).


One in three indirect comparison methods modeling IPD adjusted results from different trials to estimate effects as if they had come from the same randomized population. Key methodological and reporting elements (e.g., evaluation of consistency, existence of a study protocol) were often missing from indirect comparison papers.



Systematic reviews and meta-analyses using individual patient data (IPD) aim to obtain, verify, and synthesize original research data for each participant from all studies that compare the same two treatments to address a specified clinical question. Although IPD meta-analyses may be more time-consuming and expensive than conventional meta-analyses using aggregated data, they are considered the gold standard approach for systematic reviews of interventions and are being published with increasing frequency [1, 2]. They can improve clinical practice guidelines [3] because they offer advantages over conventional meta-analyses with respect to data quality and the type of analyses that can be conducted. For example, in contrast to aggregated data, the use of IPD allows investigation of patient-level moderators, intention-to-treat analysis (when data are available for all patients in randomized studies), and application of appropriate multiple imputation techniques to overcome issues related to missing data.

Network meta-analysis (NMA) allows the simultaneous comparison of many relevant interventions, and there has been an exponential increase in the number of NMAs published in recent years [4]. Although NMA is commonly performed with aggregated data, the inclusion of IPD can increase confidence in the results [5, 6], identify interactions that are otherwise undetectable [1, 7–9], and reduce variation in treatment effects both between studies within pairwise comparisons (heterogeneity) and between pairwise comparisons (inconsistency) by adjusting trial results for factors that may cause this variation [6]. The use of IPD may also allow estimation of subgroup effects, which in turn allows tailoring of results to patient characteristics. Several investigators have recognized that the use of IPD in NMAs may generate the most trustworthy evidence to inform clinical decision making, and hence they have been developing statistical methods to enhance IPD-NMAs [5, 6, 10, 11]. The objective of this study is to conduct a comprehensive scoping review of the methods used to perform indirect comparisons with IPD or IPD combined with aggregated data. We also aim to review applications of indirect comparisons with IPD and summarize network, methodological, and reporting characteristics.


This review was guided by two research questions: “What methodologies exist for applying an IPD-NMA or an indirect comparison using IPD?” and “What are the characteristics of the empirical networks that include IPD (e.g., number of trials, patients, and treatments)?” This scoping review was conducted based on the framework outlined by Arksey and O’Malley [12] and the Joanna Briggs Institute methods manual [13]. We described the methods in detail in our protocol publication [14].

Identifying relevant studies: data sources and search strategy

We searched MEDLINE, Embase, the Cochrane Library, and CINAHL from inception until the end of October 2014. No limits were placed on date of publication, language, population, intervention, or outcome. The search was carried out by an experienced librarian (Ms Becky Skidmore), and a second librarian (Ms Heather MacDonald) peer-reviewed the MEDLINE electronic search strategy (see Additional file 1: Appendix 1) using the Peer Review of Electronic Search Strategies (PRESS) checklist [15]. Modified search strategies for remaining databases are available upon request from the authors. Grey literature sources (Google, Agency for Healthcare Research and Quality, Canadian Medical Libraries List, Medical Research Council, and National Health Service) were searched, and references from included studies were scanned.

Eligibility criteria

We included published papers, protocols, and abstracts, as well as unpublished studies, that reported on a method, application, or review of IPD indirect comparison methods involving studies of any design. Application studies were eligible if they compared the clinical effectiveness or safety of three or more interventions and applied any type of indirect comparison, including adjusted indirect comparison, unadjusted indirect comparison, matching adjusted indirect comparison (MAIC), simulated treatment comparison (STC), mixed comparison, and NMA. Studies including only narrative comparisons were excluded.

Several approaches have been suggested to conduct an indirect comparison using IPD only or in combination with aggregated data. The different types of IPD indirect comparison methods identified in this scoping review are outlined in Table 1. The adjusted indirect comparison, mixed comparison, and NMA approaches modeling IPD can be categorized as one-stage and two-stage approaches. In one-stage methods, the IPD from all eligible studies are analyzed within the same (usually linear) model simultaneously, accounting for clustering of participants within each study. Two-stage methods are used to reduce IPD to aggregated data and then synthesize the aggregated data from each study using an adjusted indirect comparison, mixed comparison, or NMA model [16].
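To make the one-stage versus two-stage distinction concrete, the sketch below is a minimal, hypothetical illustration of a two-stage analysis for a continuous outcome (all data and function names are invented for illustration): stage one reduces each study's IPD to a mean difference and its variance, and stage two pools those study-level aggregates by fixed-effect inverse-variance weighting.

```python
import math

def study_effect(ipd):
    """Stage 1: reduce one study's IPD (list of (treatment, outcome) pairs,
    treatment coded 0/1) to a mean difference and its variance."""
    t = [y for trt, y in ipd if trt == 1]
    c = [y for trt, y in ipd if trt == 0]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    md = mean(t) - mean(c)                 # mean difference
    v = var(t) / len(t) + var(c) / len(c)  # variance of the mean difference
    return md, v

def pool_fixed(effects):
    """Stage 2: fixed-effect inverse-variance pooling of study estimates."""
    w = [1.0 / v for _, v in effects]
    est = sum(wi * e for wi, (e, _) in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se

# Hypothetical IPD from two small trials
study1 = [(1, 5.0), (1, 6.0), (1, 7.0), (0, 4.0), (0, 5.0), (0, 3.0)]
study2 = [(1, 6.5), (1, 5.5), (0, 4.5), (0, 5.5)]
effects = [study_effect(study1), study_effect(study2)]
pooled, se = pool_fixed(effects)
```

A one-stage analysis would instead fit a single (hierarchical) regression model to all patients at once, with study-level clustering; the two approaches coincide only under specific modeling assumptions.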

Table 1 Individual patient data indirect comparison methods

Study selection and data abstraction

Following a calibration exercise, two reviewers (AAV and CS or MJE) independently screened each title and abstract of the literature search results (level 1) and the full-text of potentially relevant articles (level 2) using Synthesi.SR [17]. Conflicts were resolved by discussion. The final inter-rater agreement (across levels 1 and 2) between reviewers was 85 %. The same process was followed for data extraction. When multiple publications were identified for the same study, we abstracted data from the most recent study (when the literature search differed across studies) and considered the remaining publications as companion reports, which were used for supplementary material only. Details on the data abstraction process can be found in Additional file 1: Appendix 2.


Quantitative data from the retrieved networks with IPD (e.g., number of patients, studies, and treatments in the network) were summarized in terms of medians and interquartile ranges (IQRs), and categorical data (e.g., effect measures, outcome data type, reference treatment type) by frequencies and percentages. We compared continuous network characteristics between different methods using the Wilcoxon-Mann-Whitney test. All tests were two-sided with a significance level of 0.05.
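As a rough sketch of the comparison test used above (not the authors' actual analysis), the code below implements a Wilcoxon-Mann-Whitney test by hand with a simple normal approximation, no tie or continuity correction, and hypothetical per-network trial counts:

```python
import math

def mann_whitney_u(x, y):
    """U = number of (x_i, y_j) pairs with x_i > y_j; ties count 0.5."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def mann_whitney_p(x, y):
    """Two-sided p-value from the normal approximation to U
    (crude for very small samples; illustrative only)."""
    n1, n2 = len(x), len(y)
    u = mann_whitney_u(x, y)
    mu = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sd
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# Hypothetical numbers of trials per network for two method families
trials_ipd_nma = [15.0, 8.0, 20.0, 10.0]
trials_maic = [2.0, 3.0, 5.0]
p = mann_whitney_p(trials_ipd_nma, trials_maic)
```

In practice a library routine such as `scipy.stats.mannwhitneyu` would be used, which also handles ties and exact small-sample p-values.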


The literature search yielded 201 potentially relevant citations, of which 91 unique citations met the eligibility criteria based on title and abstract. Following review of the corresponding full-text articles, 37 papers were eligible for this review and included, along with 10 companion reports (Fig. 1). All excluded citations and reasons for exclusion are available in Additional file 1: Appendix 3.

Fig. 1

PRISMA flow chart for study selection. IPD-NMA = individual patient data network meta-analysis

General characteristics of identified networks

We identified 23 (62 %) application articles [18–40], 11 (30 %) methodological articles [6, 41–49], 2 (5 %) reviews of methods [50, 51], and 1 (3 %) protocol [52] for an application article that has not yet been published (Additional file 1: Appendix 4). The number of studies with indirect comparison methods using IPD has increased steeply since 2007 (Fig. 2). The IPD indirect comparison methods were published in a wide variety of journals, and most of the networks (17; 46 %) were industry-sponsored. Further details can be found in Additional file 1: Appendix 5.

Fig. 2

Bar plot of the indirect methods using individual patient data (IPD) by year, method, and type of network. The frequencies of the identified methods (n = 33) were 17 (52 %) Bayesian hierarchical models†, 2 (6 %) Bucher methods‡, 8 (24 %) matching adjusted indirect comparisons (MAIC)#, 1 (3 %) extended MAIC#, 4 (12 %) meta-regression models*, 1 (3 %) mixed comparison**.

†Bayesian hierarchical models are multi-level models presented as a generalization of regression methods. Different levels account for the variation in patients between and within studies, forming the hierarchy of the model. Network meta-analyses conducted in a Bayesian framework express the observed treatment effects via their ‘true’ underlying treatment effects. ‡The Bucher method (or adjusted indirect comparison) is the statistical approach used to derive an indirect treatment effect estimate for two competing treatments that have each been compared with a common intervention [68]. #Matching-adjusted indirect comparisons are indirect comparisons that use IPD from the active treatment trial(s) and aggregate data (AD) from the comparator treatment trial(s). The patient characteristics from the IPD trial(s) are weighted a priori and matched with the characteristics of the population in the AD trial(s) so that the baseline characteristics are similar between the two treatment groups. A recent extension of the method accounts for differences in endpoint definitions and missing data [46]. *A linear (or meta-regression) model with dummy variables reflecting the basic parameters (comparisons of all treatments vs. a common comparator) and with the NMA treatment effect estimates as regression coefficients [69]. Under the consistency assumption, all treatment comparisons can be written as functions of the basic parameters. **A mixed comparison between two treatments is the weighted average of the direct and indirect estimates for that treatment comparison, with weights equal to the inverse variances of the estimated effects [69]
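The Bucher and mixed-comparison calculations defined in the footnotes above reduce to a few lines of arithmetic. The sketch below (hypothetical log odds ratios and standard errors) derives an indirect B-vs-C estimate through a common comparator A, then combines it with a direct B-vs-C estimate by inverse-variance weighting:

```python
import math

def bucher_indirect(d_ab, se_ab, d_ac, se_ac):
    """Adjusted indirect B-vs-C effect via common comparator A:
    d_BC = d_AC - d_AB, with the variances adding."""
    return d_ac - d_ab, math.sqrt(se_ab ** 2 + se_ac ** 2)

def mixed_estimate(d_dir, se_dir, d_ind, se_ind):
    """Mixed comparison: inverse-variance weighted average of the
    direct and indirect estimates of the same comparison."""
    w_dir, w_ind = 1.0 / se_dir ** 2, 1.0 / se_ind ** 2
    d = (w_dir * d_dir + w_ind * d_ind) / (w_dir + w_ind)
    return d, math.sqrt(1.0 / (w_dir + w_ind))

# Hypothetical log odds ratios from A-vs-B and A-vs-C trials
d_ind, se_ind = bucher_indirect(0.5, 0.2, 0.8, 0.15)    # indirect B vs C
d_mix, se_mix = mixed_estimate(0.4, 0.25, d_ind, se_ind)  # combined with direct
```

Both formulas assume the trials being bridged are sufficiently similar (the transitivity/consistency assumption discussed throughout this review).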

Characteristics of identified methodologies

Summary of indirect comparison methodologies using IPD

A variety of indirect comparison methods using IPD were identified (Table 2). Twenty-four IPD-NMA (73 %) and 9 MAIC (27 %) approaches were applied in total in the empirical studies. The first IPD-NMA study, published in 2007, applied a meta-regression model for time-to-event data [19]. About half of the networks (17; 52 %) applied a Bayesian hierarchical model, whereas the second most frequently used method was the MAIC approach (8; 24 %) (Fig. 3).

Table 2 Properties of methods to derive indirect and network meta-analysis estimates using individual patient data
Fig. 3

Bubble plot of indirect methods using individual patient data by year of publication and discipline. The size of each bubble is proportional to the number of studies published in the corresponding year and discipline. Light grey bubbles represent publications using the matching adjusted indirect comparison (MAIC) and simulated treatment comparison (STC) methods, white bubbles represent publications using an individual patient data network meta-analysis (IPD-NMA) method, and dark grey bubbles represent publications using both IPD-NMAs and MAIC/STC methods

Most IPD-NMAs involved one- or two-stage approaches (see Additional file 1: Appendix 4 and Additional file 2). Several one-stage Bayesian hierarchical models were discussed across the methodological papers, including either IPD alone [6, 41–43] or a mixture of IPD and aggregated data [41, 42, 44, 45] (see Table 3). For IPD alone, three studies [6, 10, 41] presented models for dichotomous outcome data using the odds ratio, and a fourth study [43] proposed a model for multiple continuous outcomes using the mean difference. For combining IPD with aggregated data, three studies [41, 42] presented models for dichotomous outcome data using the odds ratio, a fourth study [44] proposed a model for time-to-event data using the hazard ratio, and a fifth study [45] suggested a model for continuous data using the mean difference. All of the aforementioned models were developed to model randomized clinical trials (RCTs), apart from the models suggested by Saramago and colleagues [10], which can combine cluster- and patient-randomized trials, and the approach proposed by Thom and colleagues [45], which models RCTs and single-arm observational trials.

Table 3 Bayesian hierarchical IPD-NMA models described in the identified methodological articles

The majority (15; 63 %) of the 24 empirical IPD-NMAs used a one-stage analysis; two-stage analysis was the second most frequent approach (7; 29 %), one study (4 %) used IPD as an informative prior [32], and one study (4 %) [33] was unclear about the analysis format. Among the 33 networks, 17 (52 %) implemented indirect comparison methods modeling IPD in Bayesian statistics software (JAGS: 1 [3 %] [53]; OpenBUGS: 2 [6 %] [54]; WinBUGS: 14 [43 %] [55]) (Table 4). Of the 37 papers, only three (8 %) IPD-NMAs [10, 44, 45] provided their code in the manuscript, whereas one (3 %) reported that the code was available upon request [31]. Of the 24 empirical IPD-NMAs, 9 (38 %) used IPD only, 13 (54 %) used a mixture of IPD and aggregated data, and two (8 %) applied a combination of methods using both IPD alone and a mixture of IPD and aggregated data. All MAICs used a mixture of IPD and aggregated data. All of the empirical networks included only RCTs, except for three studies (9 %) that also included non-randomized data [10, 31, 45]. Reported reasons for choosing IPD alone versus IPD combined with aggregated data included: having (or not having) access to IPD, not contacting authors outside the collaborative research group, using IPD as a prior distribution in the analysis, assessing the benefits of acquiring IPD for a subset of trials, comparing IPD-NMA models with aggregated data NMA models, and applying a MAIC (Additional file 2).

Table 4 Methodological characteristics of identified empirical networks, including unpublished data provided by study authors. Figures are no. (%) of studies

Key methodological components of indirect comparison methods with IPD

Of the 22 empirical IPD-NMAs that reported whether a fixed-effect or random-effects model was selected, 10 (45 %) employed a random-effects model, 7 (32 %) applied a fixed-effect model, and 5 (23 %) used both approaches. All but two of the Bayesian random-effects IPD-NMA models [10, 32] used a non-informative prior for the between-study variance parameter. Many networks applied several modeling approaches, which were most frequently compared using the deviance information criterion (13; 40 %). The rank order of treatments by effectiveness or safety was assessed in 11 (33 %) empirical studies using the probability of being the best. When both IPD methods and aggregated data approaches were applied, several authors identified differences in the results, such as differences in the consistency evaluation, the precision of treatment effects, and the significance of treatment effect modifiers (Additional file 2).

The majority (26; 79 %) of the 33 empirical studies did not report whether an approach had been applied to handle missing data. The approach most commonly applied to follow the intention-to-treat principle in the identified indirect comparison methods was last observation carried forward (4; 12 %), in which missing values are replaced with the last observed measurement. Thirteen (68 %) of the 19 full IPD-NMAs assessed consistency, but only 5 (38 %) of these used statistical approaches for this evaluation. One of the full networks comprised a single closed loop formed by multi-arm studies, so consistency could not be evaluated because of the inherent correlations [27]. Of the 13 IPD-NMAs that assessed the consistency assumption, 5 (38 %) detected inconsistency in their network and used IPD to adjust for differences in effect modifiers across treatment comparisons. Among the nine networks that included different treatment doses, the relationship between treatment and dose was ignored, either by lumping doses together as a single treatment (5; 56 %) or by splitting them as if they were distinct treatments (4; 44 %).

Methods used to report results in the identified networks

The methods used to report the summary estimates from the analyses varied across the papers. Almost half of the empirical studies (15; 45 %) included a network diagram in the results section or in supplementary material. Tables (14; 42 %) and forest plots (27; 82 %) were the most common methods of reporting the results of indirect comparison methods (Additional file 1: Appendix 6).

Characteristics of empirical studies

Protocol and rationale for using IPD

The 33 studies with empirical indirect comparison methods using IPD included 23 application articles [18–40], 8 methodological articles with empirical examples [6, 10, 42–46, 48], 1 review [51], and 1 protocol [52] (Additional file 1: Appendix 6). Of these 33 studies, 9 (27 %) IPD-NMAs reported the existence of a study protocol; an additional 3 (9 %) studies (two IPD-NMAs and one MAIC) mentioned that protocols existed [20, 33, 44], but references were not provided and we were unable to locate them. None of the eight methodological articles cited a study protocol, but four provided a reference to the original publication of the empirical dataset, which in turn cited a protocol. Around 3 to 4 years elapsed between publication of a protocol and publication of the final IPD review (Additional file 2). We identified 22 (67 %) studies in which investigators had access to IPD through a collaborative research group, whereas 9 (27 %) systematic reviews used several methods to contact the original authors and collect IPD. Six studies reported the proportion of contacted authors who provided IPD; the median proportion of studies for which IPD were obtained was 68 % (IQR 58–78 %). No IPD review reported reasons why IPD could not be obtained for particular studies. Our response rate to requests for additional information for 29 papers was 82 % (14/17 authors; some authors were contacted for more than one paper).

Many of the papers reported the rationale for using IPD instead of aggregated data (26; 79 %); these reasons included adjusting for potential confounding factors [4, 6, 21, 23, 29, 30, 32, 34, 42, 48, 56], exploring reasons for heterogeneity and/or inconsistency [6, 10, 20, 23, 31, 42], increasing power to detect treatment effect modifiers [10, 19, 45], overcoming bias (e.g., aggregation bias) [10, 43], producing more precise estimates of treatment effect (even in the absence of treatment-by-covariate interactions) [19, 44], adjusting for differences in patient-level characteristics even when a small number of studies (<10) was available [35, 37, 10], increasing power due to rare events [18], and matching differences in baseline characteristics [35–38, 57]. One of the identified simulation studies evaluated the advantages of including IPD in NMA [5]. In that study, Jansen [5] evaluated the performance of tree-shaped triangular IPD-NMAs modeling a combination of IPD and aggregated data compared with NMAs using aggregated data and showed that an IPD-NMA can considerably reduce bias and increase precision of treatment effect estimates when there is an imbalance in patient-level treatment effect modifiers across comparisons.

Primary outcome and competing treatments

The primary outcome was an effectiveness outcome in 31 (94 %) studies and was categorized as objective in 26 (79 %) networks. The median number of outcomes assessed in the eligible networks was one (IQR 1–3) (Additional file 1: Appendix 4 and Appendix 6). About half of the networks (17; 52 %) reported a dichotomous primary outcome, and nine (27 %) included a continuous primary outcome (see Additional file 1: Appendix 6). The empirical networks evaluated a wide range of interventions, pharmacological versus placebo or control being the most common type of intervention comparison (17; 52 %). The median number of participants in the empirical networks was 899 (IQR 310–1735) (for IPD-NMAs, 1342 [IQR 493–2567]; for MAICs, 329 [IQR 221–601]; P = 0.024).

Size and geometry of the identified networks

We identified 33 empirical networks: 21 (64 %) full networks and 12 (36 %) tree-shaped networks. In Additional file 1: Appendix 7 and Appendix 8 we present the distribution of trials, treatment groups, and patients for each network, shown separately for IPD-NMA and MAIC approaches. The median number of interventions assessed per network was 5 (IQR 3–6) (for IPD-NMAs, 6 [IQR 5–7]; for MAICs, 3 [IQR 3–4]; P = 0.003), and the median number of closed loops in full networks was 1 (IQR 0–4) (for IPD-NMAs, 2 [IQR 1–5]; for MAICs, 0 [IQR 0–0]; P = 0.002). Most IPD-NMAs (19; 79 %) were applied to full networks (including 13 Bayesian hierarchical models, four meta-regression models, one adjusted indirect comparison, one mixed comparison), whereas most MAICs (7; 78 %) were used for tree-shaped networks.

The median number of trials included per network was 10 (IQR 4–19) (for IPD-NMAs, 15 [IQR 8–20]; for MAICs, 2 [IQR 3–5]; P <0.001), and the median number of IPD trials included in a network was 3 (IQR 1–9) (for IPD-NMAs, 6 [IQR 2–11]; for MAICs, 2 [IQR 1–2]; P = 0.007). Full networks had a median number of multi-arm studies of 0 [IQR 0–2] (for IPD-NMAs, 0 [IQR 0–3]; for MAICs, 0 [IQR 0–0]; P = 0.251). The median number of patients in a network was 3874 (IQR 1162–9830) (for IPD-NMAs, 5310 [IQR 3290–14750]; for MAICs, 997 [IQR 520–1264]; P <0.001), and the median number of patients in IPD trials was 1790 (IQR 599–5110) (for IPD-NMAs, 3848 [IQR 1444–5643]; for MAICs, 541 [IQR 350–625]; P = 0.007). No application papers using the STC method were identified.


Recommendations to authors

This study is the first scoping review to provide a comprehensive overview of the methods for completing indirect comparison analyses using IPD. It also describes the methodological and reporting characteristics of empirical networks in healthcare, which will help not only in the design of future simulation studies, but also in refining the preferred reporting items for systematic reviews and meta-analyses (PRISMA) using IPD [58] and developing the PRISMA for IPD-NMAs. This review showed that essential methodological and reporting items suggested to be included by PRISMA-IPD [58] and PRISMA-NMA [59], such as evaluation of the consistency assumption, existence of a study protocol, and methods used to request, collect, and manage IPD, were poorly reported in IPD indirect comparisons. An IPD indirect comparison review should be clearly reported in line with the International Society for Pharmacoeconomics and Outcomes Research (ISPOR), PRISMA-IPD and PRISMA-NMA tools [5860]. However, given that these guidelines are not specific to IPD indirect comparison methods, we outline some additional information that we suggest be reported in IPD indirect comparisons to improve transparency in Table 5 [5860]. For example, the rationale for the choice of IPD indirect comparison method should be provided, since different approaches are associated with different properties, and hence they may lead to different and potentially conflicting results.

Table 5 Suggested information to report in an individual patient data indirect comparison to supplement ISPOR, PRISMA-IPD and PRISMA-NMA

Comparison with existing evidence

IPD indirect comparisons represent only a small minority of all indirect comparisons, just as IPD meta-analyses do relative to aggregated data meta-analyses [2]. Our review showed that a variety of methods are used to synthesize evidence from networks of trials, including both IPD-NMA and MAIC approaches. Indirect comparison methods using IPD have been used in a wide range of clinical disciplines, as have NMAs modeling aggregated data [61, 62]. The majority of the IPD networks applied Bayesian hierarchical models, which are also preferred in NMAs with aggregated data [4, 63]. As in IPD meta-analyses [2], one-stage analyses dominated among the statistical approaches. For IPD alone or in combination with aggregated data, models have been developed for dichotomous and continuous outcomes, whereas for the combination of IPD with aggregated data, models also exist for time-to-event data. However, the statistical code is only rarely available to the reader, as Sobieraj et al. [61] also observed in NMAs with aggregated data. In agreement with aggregated data NMAs [4, 62], most IPD networks included at least one closed loop. Although the identified IPD-NMAs have been published recently, and although IPD can be used to assess and adjust for differences in effect modifiers across treatment comparisons, avoiding aggregation bias, our findings on consistency agree with those for aggregated data NMAs [4, 64, 65]. For a review of methods to assess the consistency assumption, with an application to an empirical IPD-NMA, we encourage readers to consult Donegan et al. [66].

Consistent with aggregated NMAs [62], almost half of the 33 empirical IPD indirect comparisons included a network diagram. Among the 33 identified empirical networks, the typical IPD network had a dichotomous, objective primary outcome, compared pharmacological and placebo/control interventions, and involved five interventions and ten trials. Nikolakopoulou et al. [4] indicated in their scoping review that the typical network with aggregated data had a dichotomous, semi-objective primary outcome, compared pharmacological and placebo/control interventions, involved six interventions, and was informed by 21 trials. This difference may be because the conduct of an IPD indirect comparison is resource-intensive and because IPD allows the assessment of more targeted clinical questions, where fewer studies are available. In the retrieved IPD indirect comparisons, no study reported reasons for missing or incomplete IPD, which was also underreported in IPD reviews for meta-analyses [2]. In contrast to NMAs modeling aggregated data, nearly half of the IPD studies were industry-sponsored (46 % vs. 27 %) [61].

One in three empirical approaches used the MAIC method to model IPD. In contrast to IPD-NMAs, both MAIC and STC provide more targeted comparison results, and consider the outcomes observed in the treatments of interest directly. As such, these methods produce a comparison of outcomes based on two specific arms of the available trials reflecting what may have been observed if the treatments had come from the same randomized trial, whereas the remaining treatment comparators involved in the network of trials are analyzed alongside the selected treatments of interest. The advantage of MAIC and STC methods is that they may be used when NMA is impossible, serving as an alternative approach to NMA. However, caution is needed, as these methods are based on the assumption that the studies should have the same clinical characteristics and they do not account for reasons for potential differences across trials examining the treatments of interest.
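As a minimal sketch of the matching step that underlies MAIC (single covariate, invented data; real applications match several covariates using propensity-score-style logistic weights), the code below solves the method-of-moments condition that the weighted mean of the IPD covariate equals the mean reported for the aggregate-data trial:

```python
import math

def maic_weights(x_ipd, x_ad_mean, iters=50):
    """Method-of-moments MAIC weights for one covariate:
    w_i = exp(a * z_i), z_i centered at the AD mean, with a
    solved by Newton's method so the weighted mean matches."""
    z = [xi - x_ad_mean for xi in x_ipd]
    a = 0.0
    for _ in range(iters):
        f = sum(zi * math.exp(a * zi) for zi in z)        # moment condition
        fp = sum(zi * zi * math.exp(a * zi) for zi in z)  # its derivative
        a -= f / fp
    return [math.exp(a * zi) for zi in z]

# Hypothetical ages of IPD-trial patients; AD trial reports mean age 60
x_ipd = [40.0, 50.0, 60.0, 70.0]
w = maic_weights(x_ipd, x_ad_mean=60.0)
wmean = sum(wi * xi for wi, xi in zip(w, x_ipd)) / sum(w)  # re-weighted mean
```

The weighted IPD arm is then compared against the aggregate-data arm directly, which is why MAIC does not require the full network that an NMA does.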


One limitation of our study is our focus on the presentation and description of methods, characteristics, and reporting of indirect comparison methods with IPD without assessment of the quality of included papers or the methods themselves. However, scoping reviews typically do not include assessment of the risk of bias [13]. Another limitation is our reliance on information reported in the identified articles; as such, we may have missed important methods that were omitted from the authors’ reports, even if these were appropriately applied in their studies. For example, in the 33 empirical networks we included eight methodological articles and one review with empirical examples, where key reporting items may be missing due to space constraints. An additional limitation is that we may not have retrieved all indirect comparison methods with IPD, as some studies may not have been indexed using the search terms we used. However, we believe that our sample is representative of the indirect comparison methods applied in the medical literature, and most of our results are comparable with previous reviews of NMAs using aggregated data, as well as with the results of scoping reviews on IPD meta-analyses.

Previous scoping reviews of NMAs have also shown inadequate reporting [4, 61, 64, 67]. Hence, it is imperative that guidelines are developed to improve the quality of reporting in IPD-NMAs. Further research is also needed to assess the properties and performance of the various indirect comparison methods modeling IPD.


This is the first scoping review that we are aware of focusing on methods for performing indirect comparisons with IPD, describing also the methodological and reporting characteristics of empirical networks in healthcare. To date, one in three approaches used to model IPD in connected networks of evidence disregarded patient randomization and between-study heterogeneity, considering only information from treatments of interest as if they had come from the same randomized trial. Key methodological and reporting elements (e.g., evaluation of the consistency assumption, existence of a study protocol) were frequently missing, even for networks of trials published in high impact journals. The impact of failing to consider and report important methodological aspects may result in erroneous clinical decisions. It is of paramount importance that reporting of IPD-NMAs is improved and that investigators are aware of the properties of the various indirect methods using IPD before applying them.


Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Availability of data and materials

The data supporting our findings are available at Additional file 2. This is the data abstraction file we used to extract information from all eligible studies with individual patient data indirect comparison methods.



Abbreviations

IPD: individual patient data

IQR: interquartile range

ISPOR: International Society for Pharmacoeconomics and Outcomes Research

MAIC: matching adjusted indirect comparison

NMA: network meta-analysis

PRESS: peer review of electronic search strategies

PRISMA: preferred reporting items for systematic reviews and meta-analyses

RCT: randomized clinical trial

STC: simulated treatment comparison


References

1. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ. 2010;340:c221.
2. Simmonds M, Stewart G, Stewart L. A decade of individual participant data meta-analyses: a review of current practice. Contemp Clin Trials. 2015;45(Pt A):76–83.
3. Vale CL, Rydzewska LH, Rovers MM, Emberson JR, Gueyffier F, Stewart LA, Cochrane IPD Meta-analysis Methods Group. Uptake of systematic reviews and meta-analyses based on individual participant data in clinical practice guidelines: descriptive study. BMJ. 2015;350:h1088.
4. Nikolakopoulou A, Chaimani A, Veroniki AA, Vasiliadis HS, Schmid CH, Salanti G. Characteristics of networks of interventions: a description of a database of 186 published networks. PLoS One. 2014;9(1):e86754.
5. Jansen JP. Network meta-analysis of individual and aggregate level data. Res Synth Methods. 2012;3(2):14.
6. Donegan S, Williamson P, D’Alessandro U, Smith CT. Assessing the consistency assumption by exploring treatment by covariate interactions in mixed treatment comparison meta-analysis: individual patient-level covariates versus aggregate trial-level covariates. Stat Med. 2012;31(29):3840–57.
7. Higgins J, Whitehead A, Turner RM, Omar RZ, Thompson SG. Meta-analysis of continuous outcome data from individual patients. Stat Med. 2001;20:2219–41.
8. Berlin JA, Santanna J, Schmid CH, Szczech LA, Feldman HI, Anti-Lymphocyte Antibody Induction Therapy Study Group. Individual patient- versus group-level data meta-regressions for the investigation of treatment effect modifiers: ecological bias rears its ugly head. Stat Med. 2002;21(3):371–87.
9. Cooper H, Patall EA. The relative benefits of meta-analysis conducted with individual participant data versus aggregated data. Psychol Methods. 2009;14(2):165–76.
10. Saramago P, Sutton AJ, Cooper NJ, Manca A. Mixed treatment comparisons using aggregate and individual participant level data. Stat Med. 2012;31(28):3516–36.
11. Johnson B, Scott-Sheldon LA, Snyder LB, Noar SM, Huedo-Medina TB. Contemporary approaches to meta-analysis of communication research. In: Slater MD, Hayes A, Snyder LB, editors. The Sage guide to advanced data analysis methods for communication research. Thousand Oaks: Sage; 2008. p. 311–47.
12. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:14.
13. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13(3):141–6.
14. Veroniki AA, Soobiah C, Tricco AC, Elliott MJ, Straus SE. Methods and characteristics of published network meta-analyses using individual patient data: protocol for a scoping review. BMJ Open. 2015;5(4):e007103.
15. Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009;62(9):944–52.
16. Simmonds MC, Higgins JP, Stewart LA, Tierney JF, Clarke MJ, Thompson SG. Meta-analysis of individual patient data from randomized trials: a review of methods used in practice. Clin Trials. 2005;2(3):209–17.
17. Synthesi.SR []. Accessed 13 Apr 2016.
18. Palmerini T, Sangiorgi D, Valgimigli M, Biondi-Zoccai G, Feres F, Abizaid A, Costa RA, Hong MK, Kim BK, Jang Y, et al. Short- versus long-term dual antiplatelet therapy after drug-eluting stent implantation: an individual patient data pairwise and network meta-analysis. J Am Coll Cardiol. 2015;65(11):1092–102.
19. Tudur Smith C, Marson AG, Chadwick DW, Williamson PR. Multiple treatment comparisons in epilepsy monotherapy trials. Trials. 2007;8:34.
20. Pignon JP, Maitre A, Maillard E, Bourhis J. Meta-analysis of chemotherapy in head and neck cancer (MACH-NC): an update on 93 randomised trials and 17,346 patients. Radiother Oncol. 2009;92(1):4–14.
21. Middleton LJ, Champaneria R, Daniels JP, Bhattacharya S, Cooper KG, Hilken NH, O’Donovan P, Gannon M, Gray R, Khan KS. Hysterectomy, endometrial destruction, and levonorgestrel releasing intrauterine system (Mirena) for heavy menstrual bleeding: systematic review and meta-analysis of data from individual patients. BMJ. 2010;341(7769):379.
22. Blanchard P, Hill C, Guihenneuc-Jouyaux C, Baey C, Bourhis J, Pignon JP. Mixed treatment comparison meta-analysis of altered fractionated radiotherapy and chemotherapy in head and neck cancer. J Clin Epidemiol. 2011;64(9):985–92.
23. Cope S, Zhang J, Williams J, Jansen JP. Efficacy of once-daily indacaterol 75 μg relative to alternative bronchodilators in COPD: a study level and a patient level network meta-analysis. BMC Pulm Med. 2012;12:29.
24. Cope S, Capkun-Niggli G, Gale R, Lassen C, Owen R, Ouwens MJNM, Bergman G, Jansen JP. Efficacy of once-daily indacaterol relative to alternative bronchodilators in COPD: a patient-level mixed treatment comparison. Value Health. 2012;15(3):524–33.
25. Daniels JP, Middleton LJ, Champaneria R, Khan KS, Cooper K, Mol BW, Bhattacharya S, International Heavy Menstrual Bleeding IPD Meta-analysis Collaborative Group. Second generation endometrial ablation techniques for heavy menstrual bleeding: network meta-analysis. BMJ. 2012;344:e2564.
26. Szegedi A, Verweij P, Van Duijnhoven W, Mackle M, Cazorla P, Karson C, Fennema H. Efficacy of asenapine for schizophrenia: comparison with placebo and comparative efficacy of all atypical antipsychotics using all available head-to-head randomized trials using meta-analytical techniques. Neuropsychopharmacology. 2012;35:S105.
27. Whegang Youdom S, Samson A, Basco LK, Thalabard JC. Multiple treatment comparisons in a series of anti-malarial trials with an ordinal primary outcome and repeated treatment evaluations. Malar J. 2012;11:147.
28. Coxib and traditional NSAID Trialists’ (CNT) Collaboration, Bhala N, Emberson J, Merhi A, Abramson S, Arber N, Baron JA, Bombardier C, Cannon C, et al. Vascular and upper gastrointestinal effects of non-steroidal anti-inflammatory drugs: meta-analyses of individual participant data from randomised trials. Lancet. 2013;382(9894):769–79.
29. Ellis AG, Reginster JY, Luo X, Cappelleri JC, Chines A, Sutradhar S, Jansen JP. Bazedoxifene versus oral bisphosphonates for the prevention of nonvertebral fractures in postmenopausal women with osteoporosis at higher risk of fracture: a network meta-analysis. Value Health. 2014;17(4):424–32.
30. Ellis AG, Reginster JY, Luo X, Bushmakin AG, Williams R, Sutradhar S, Mirkin S, Jansen JP. Indirect comparison of bazedoxifene vs oral bisphosphonates for the prevention of vertebral fractures in postmenopausal osteoporotic women. Curr Med Res Opin. 2014;30(8):1617–26.
31. Goodacre S. Pre-hospital non-invasive ventilation for acute respiratory failure: a systematic review and network meta-analysis. Emerg Med J. 2014;31(9):778.
32. Mealing S, Ghement I, Hawkins N, Scott DA, Lescrauwaet B, Watt M, Thursz M, Lampertico P, Mantovani L, Morais E, et al. The importance of baseline viral load when assessing relative efficacy in treatment-naive HBeAg-positive chronic hepatitis B: a systematic review and network meta-analysis. Syst Rev. 2014;3:21.
33. Mills EJ, Lester R, Thorlund K, Lorenzi M, Muldoon K, Kanters S, Linnemayr S, Gross R, Calderon Y, Amico KR, et al. Interventions to promote adherence to antiretroviral therapy in Africa: a network meta-analysis. Lancet HIV. 2014;1(3):e104–11.
34. Signorovitch J, Erder MH, Xie J, Sikirica V, Lu M, Hodgkins PS, Wu EQ. Comparative effectiveness research using matching-adjusted indirect comparison: an application to treatment with guanfacine extended release or atomoxetine in children with attention-deficit/hyperactivity disorder and comorbid oppositional defiant disorder. Pharmacoepidemiol Drug Saf. 2012;21 Suppl 2:130–7.
35. Signorovitch J, Swallow E, Kantor E, Wang X, Klimovsky J, Haas T, Devine B, Metrakos P. Everolimus and sunitinib for advanced pancreatic neuroendocrine tumors: a matching-adjusted indirect comparison. Exp Hematol Oncol. 2013;2(1):32.
36. Signorovitch JE, Wu EQ, Betts KA, Parikh K, Kantor E, Guo A, Bollu VK, Williams D, Wei LJ, DeAngelo DJ. Comparative efficacy of nilotinib and dasatinib in newly diagnosed chronic myeloid leukemia: a matching-adjusted indirect comparison of randomized trials. Curr Med Res Opin. 2011;27(6):1263–71.
37. Signorovitch JE, Wu EQ, Swallow E, Kantor E, Fan L, Gruenberger JB. Comparative efficacy of vildagliptin and sitagliptin in Japanese patients with type 2 diabetes mellitus: a matching-adjusted indirect comparison of randomized trials. Clin Drug Investig. 2011;31(9):665–74.
38. Sikirica V, Findling RL, Signorovitch J, Erder MH, Dammerman R, Hodgkins P, Lu M, Xie J, Wu EQ. Comparative efficacy of guanfacine extended release versus atomoxetine for the treatment of attention-deficit/hyperactivity disorder in children and adolescents: applying matching-adjusted indirect comparison methodology. CNS Drugs. 2013;27(11):943–53.
39. Bergvall N, Nixon R, Tomic D, Sfikas N, Cutter G, Giovannoni G. Efficacy of oral fingolimod versus dimethyl fumarate on measures of freedom from disease activity in patients with multiple sclerosis, based on indirect comparisons of phase 3 trials. Mult Scler. 2013;1):519.
40. Xie J, Juday T, Swallow E, Du X, Uy J, Hebden T, Signorovitch J. Comparative efficacy at 48 weeks of atazanavir/ritonavir versus darunavir/ritonavir in treatment-naive HIV-1 patients: a matching adjusted indirect comparison of randomized trials. Value Health. 2012;15(4):A10.
41. Jansen JP, Cope S. Network meta-analysis of individual and aggregate level data. Value Health. 2012;15(4):A159.
42. Donegan S, Williamson P, D’Alessandro U, Garner P, Smith CT. Combining individual patient data and aggregate data in mixed treatment comparison meta-analysis: individual patient data may be beneficial if only for a subset of trials. Stat Med. 2013;32(6):914–30.
43. Hong H, Fu H, Price KL, Carlin BP. Incorporation of individual-patient data in network meta-analysis for multiple continuous endpoints, with application to diabetes treatment. Stat Med. 2015;34(20):2794–819.
44. Saramago P, Chuang LH, Soares MO. Network meta-analysis of (individual patient) time to event data alongside (aggregate) count data. BMC Med Res Methodol. 2014;14:105.
45. Thom HH, Capkun G, Cerulli A, Nixon RM, Howard LS. Network meta-analysis combining individual patient and aggregate data from a mixture of study designs with an application to pulmonary arterial hypertension. BMC Med Res Methodol. 2015;15:34.
46. Nixon R, Bergvall N, Tomic D, Sfikas N, Cutter G, Giovannoni G. No evidence of disease activity: indirect comparisons of oral therapies for the treatment of relapsing-remitting multiple sclerosis. Adv Ther. 2014;31(11):1134–54.
47. Signorovitch J, Ayyagari R, Cheng D, Wu EQ. Matching-adjusted indirect comparisons: a simulation study of statistical performance. Value Health. 2013;16(3):A48.
48. Signorovitch JE, Wu EQ, Yu AP, Gerrits CM, Kantor E, Bao Y, Gupta SR, Mulani PM. Comparative effectiveness without head-to-head trials: a method for matching-adjusted indirect comparisons applied to psoriasis treatment with adalimumab or etanercept. Pharmacoeconomics. 2010;28(10):935–45.
49. Caro JJ, Ishak KJ. No head-to-head trial? Simulate the missing arms. Pharmacoeconomics. 2010;28(10):957–67.
50. Ishak KJ, Proskorovsky I, Benedict A. Simulation and matching-based approaches for indirect comparison of treatments. Pharmacoeconomics. 2015;33(6):537–49.
51. Veroniki AA, Huedo-Medina TB, Fountoulakis KN. Moving from study-level to patient-level data: individual patient network meta-analysis. In: Network meta-analysis: evidence synthesis with mixed treatment comparison. NY: Nova; 2014.
52. Ruifrok AE, Rogozinska E, van Poppel MN, Rayanagoudar G, Kerry S, de Groot CJ, Yeo S, Molyneaux E, McAuliffe FM, Poston L, et al. Study protocol: differential effects of diet and physical activity based interventions in pregnancy on maternal and fetal outcomes: individual patient data (IPD) meta-analysis and health economic evaluation. Syst Rev. 2014;3:131.
53. Plummer M. JAGS: a program for analysis of Bayesian graphical models using Gibbs sampling. In: Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003). Vienna; 2003.
54. OpenBUGS Overview []. Accessed 13 Apr 2016.
55. Lunn DJ, Thomas A, Best N, Spiegelhalter D. WinBUGS: a Bayesian modelling framework: concepts, structure, and extensibility. Stat Comput. 2000;10:13.
56. Boucher R, Abrams KR, Crowther MJ, Lambert PC, Wailoo AJ, Latimer NR. Adjusting for treatment switching in clinical trials when only summary data are available: an evaluation of potential methods. Value Health. 2013;16(7):A610–1.
57. Bergvall N, Rathi H, Nixon RM, Thom HHZ, Alsop J, Dunsire L. Modeling the impact of disease modifying treatment on time to disability health states in multiple sclerosis: an evaluation of oral therapies through indirect comparisons of 6-month confirmed disability progression. Value Health. 2013;16(7):A619.
58. Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, Tierney JF, PRISMA-IPD Development Group. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD statement. JAMA. 2015;313(16):1657–65.
59. Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JP, Straus S, Thorlund K, Jansen JP, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777–84.
60. Jansen JP, Trikalinos T, Cappelleri JC, Daw J, Andes S, Eldessouki R, Salanti G. Indirect treatment comparison/network meta-analysis study questionnaire to assess relevance and credibility to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):157–73.
61. Sobieraj DM, Cappelleri JC, Baker WL, Phung OJ, White CM, Coleman CI. Methods used to conduct and report Bayesian mixed treatment comparisons published in the medical literature: a systematic review. BMJ Open. 2013;3(7):e003111.
62. Lee AW. Review of mixed treatment comparisons in published systematic reviews shows marked increase since 2009. J Clin Epidemiol. 2014;67(2):138–43.
63. Chambers JD, Naci H, Wouters OJ, Pyo J, Gunjal S, Kennedy IR, Hoey MG, Winn A, Neumann PJ. An assessment of the methodological quality of published network meta-analyses: a systematic review. PLoS One. 2015;10(4):e0121715.
64. Bafeta A, Trinquart L, Seror R, Ravaud P. Analysis of the systematic reviews process in reports of network meta-analyses: methodological systematic review. BMJ. 2013;347:f3675.
65. Donegan S, Williamson P, Gamble C, Tudur-Smith C. Indirect comparisons: a review of reporting and methodological quality. PLoS One. 2010;5(11):e11054.
66. Donegan S, Williamson P, D’Alessandro U, Tudur Smith C. Assessing key assumptions of network meta-analysis: a review of methods. Res Synth Methods. 2013;4(4):291–323.
67. Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews. BMJ. 2009;338:b1147.
68. Bucher HC, Guyatt GH, Griffith LE, Walter SD. The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol. 1997;50(6):683–91.
69. Salanti G. Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Res Synth Methods. 2012;3(2):80–97.
70. Lumley T. Network meta-analysis for indirect treatment comparisons. Stat Med. 2002;21(16):2313–24.
71. Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med. 2004;23(20):3105–24.
72. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7(3):177–88.
73. Raudenbush SW. Analyzing effect sizes: random-effects models. In: Cooper H, Hedges LV, Valentine JC, editors. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2009. p. 295–315.
74. Spiegelhalter DJ, Best NG, Carlin BP, van der Linde A. Bayesian measures of model complexity and fit. J R Stat Soc B. 2002;64(4):57.
75. Akaike H. A new look at the statistical model identification. IEEE Trans Autom Control. 1974;19(6):8.
76. Hosmer DW, Lemeshow S. Applied logistic regression. New York: Wiley; 2000.
77. Veroniki AA, Vasiliadis HS, Higgins JP, Salanti G. Evaluation of inconsistency in networks of interventions. Int J Epidemiol. 2013;42(1):332–45.
78. Dias S, Welton NJ, Caldwell DM, Ades AE. Checking consistency in mixed treatment comparison meta-analysis. Stat Med. 2010;29(7-8):932–44.
79. Lu G, Ades AE. Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc. 2006;101:13.
80. SAS Institute Inc. SAS Software. Cary, NC: SAS Institute Inc.; 2003.
81. R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2015.
82. StataCorp. Stata Statistical Software. College Station: StataCorp LP; 2013.


Acknowledgements

We would like to thank L Citrome, JP Daniels, P Frith, S Goodacre, J Ishak, J Jeroen, EJ Mills, RM Nixon, JP Pignon (and his colleagues P Blanchard, L Majed, and S Michiels), E Salazar Lindo, J Signorovitch, H Hong, P Saramago (and his colleagues D Kendrick and NJ Cooper), C Tudur Smith (and her colleague S Nolan), HHZ Thom, and SW Youdom for providing additional data.

We thank Ms Becky Skidmore for conducting the literature search, Ms Heather MacDonald for peer reviewing the search strategy, and Ms Alissa Epworth for conducting the grey literature search and obtaining full text of identified articles. We would like to thank Inthuja Selvaratnam and Jaimie Ann Adams for formatting the tables and the manuscript and Peggy Robinson for copyediting the manuscript. We would also like to thank the peer reviewers of our paper Drs. Sarah Donegan and Mark Simmonds for their valuable and insightful comments, which have improved our manuscript substantially.


Funding

AAV is funded by the Canadian Institutes of Health Research Banting Postdoctoral Fellowship Program. SES is funded by a Tier 1 Canada Research Chair in Knowledge Translation. MJE is supported by an Alberta Innovates - Health Solutions Clinician Fellowship. ACT is funded by a Drug Safety and Effectiveness Network/Canadian Institutes of Health Research New Investigator Award in Knowledge Synthesis.

Author information



Corresponding author

Correspondence to Andrea C. Tricco.

Additional information

Competing interests

ACT is an associate editor for this journal but was not involved with the peer review process or decision to publish. All other authors declare that they have no competing interests.

Authors’ contributions

AAV, ACT, and SES conceived and designed the study and helped to draft the manuscript. CS and MJE helped to design the study and edited the manuscript. All authors read and approved the final manuscript. ACT is the guarantor.

Additional files

Additional file 1:

Appendix 1. Literature search for MEDLINE. Appendix 2. Data abstraction process from identified studies. Appendix 3. Studies excluded during the screening process. Appendix 4. Characteristics of the identified indirect comparisons using individual patient data. Appendix 5. Epidemiological and descriptive statistics of the identified networks. Appendix 6. Reporting characteristics of the identified empirical networks, including unpublished data provided by study authors. Appendix 7. Distribution of the number of trials and treatment groups in a network, as well of number of outcomes assessed in indirect comparison methods with individual patient data. Appendix 8. Distribution of the number of patients in a network. Appendix 9. Included IPD indirect comparison studies. References in Additional file 1. (DOCX 219 kb)

Additional file 2:

Data abstraction file used to extract information from all eligible studies with individual patient data indirect comparison methods. (XLSX 52 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Veroniki, A.A., Straus, S.E., Soobiah, C. et al. A scoping review of indirect comparison methods and applications using individual patient data. BMC Med Res Methodol 16, 47 (2016).



Keywords

  • Network meta-analysis
  • Individual participant data
  • Patient-level data
  • Multiple treatments meta-analysis
  • Knowledge synthesis
  • Research methods
  • Scoping review