
Reasons for missing evidence in rehabilitation meta-analyses: a cross-sectional meta-research study

Abstract

Background

Systematic reviews of randomized controlled trials provide the best evidence on intervention effectiveness. Their results, however, can be biased when evidence is omitted from the quantitative analyses. We aimed to assess the proportion of randomized controlled trials omitted from meta-analyses in the rehabilitation field and to explore the related reasons.

Methods

This is a cross-sectional meta-research study. For each systematic review included in a previously published sample in the rehabilitation field, we identified an index meta-analysis on the primary outcome and the main comparison. We then examined all the studies considered eligible for the chosen comparison in the systematic review and identified the trials that had been omitted (i.e., not included) from the index meta-analysis. Reasons for omission were collected based on an eight-reason classification. We used descriptive statistics to describe the proportion of omitted trials overall and for each reason.

Results

Starting from a cohort of 827 systematic reviews, 131 index meta-analyses comprising a total of 1761 eligible trials were selected. Only 16 index meta-analyses included all eligible studies, while 15 omitted studies without providing references. From the remaining 100 index meta-analyses, 717 trials (40.7%) were omitted overall. Specific reasons for omission were: "unable to distinguish between selective reporting and inadequate planning" (39.3%, N = 282), "inadequate planning" (17%, N = 122), "justified to be not included" (15.1%, N = 108), "incomplete reporting" (8.4%, N = 60), "selective reporting" (3.3%, N = 24) and other situations (e.g., outcome present but no reason given for omission) (5.2%, N = 37). A further 11.7% (N = 84) of omitted trials were not assessed because they were not in English or the full text was not available.

Conclusions

Almost half of the eligible trials were omitted from their index meta-analyses. Better reporting, protocol registration, and the definition and adoption of core outcome sets are needed to prevent the omission of evidence in systematic reviews.


Introduction

Systematic reviews (SRs) of randomized controlled trials (RCTs) provide the best evidence for assessing interventions. In an SR, clinical effectiveness and safety are evaluated by calculating a weighted pooled estimate of the intervention effect on a specific outcome through meta-analysis. The pooled effect estimates, however, can be biased when a meta-analysis fails to include all the published and unpublished studies on a topic that address that outcome. Indeed, the validity of systematic reviews may be compromised not only when they selectively include trials, outcomes and results, but also when the results of some eligible trials are unavailable for inclusion.

This problem is known as “non-reporting bias”, which can occur in different ways in RCTs [1,2,3]. For instance, it comprises both "selective reporting", when results are selected for reporting based on their nature, and "incomplete reporting", when results are reported in a way that cannot be used in meta-analysis [2, 4]. An example is the lack of reporting or selective reporting of harms in published clinical trials, which can give a false impression of safety and misinform clinical and policy decisions [5]. It has been shown that statistically significant results are more likely to be published or reported completely than non-significant ones [6,7,8]: including only such results may overestimate the effects of an intervention or underestimate its undesirable effects, leading to the uptake of interventions that are actually ineffective or harmful. Similarly, incomplete reporting can prevent a study's outcome data from being included in meta-analyses, resulting in an analysis of a subset of data that is a biased representation of all recorded outcomes [9,10,11]. RCTs that fail to plan and measure important outcomes may also be seen as a missed opportunity and a waste of research [10], impairing the reliability of the meta-analysis.

Several studies have investigated selective outcome reporting [1, 12,13,14,15,16,17,18], incomplete reporting [19,20,21,22] and waste of research due to lack of planning [23,24,25,26,27] across biomedical fields, showing an under-recognised problem that affects the conclusions of a substantial proportion of systematic reviews.

In rehabilitation, the quality of reporting and conduct of RCTs is still suboptimal in various fields (e.g., orthopedics, rheumatology, neurology) [28,29,30], affecting the validity of the effect estimates of rehabilitation interventions. To the best of our knowledge, the impact of evidence omission in meta-analyses has not been assessed in this specific field.

Objectives

Starting from a recent meta-research study including 827 SRs in the rehabilitation field [31], the primary aim of this cross-sectional meta-research study was to assess the proportion of RCTs omitted from the index meta-analyses of Cochrane (CSRs) and non-Cochrane systematic reviews (nCSRs) for outcome-related issues. Secondly, we aimed to compare this proportion between CSRs and nCSRs.

Materials and methods

Study design

This is a cross-sectional meta-research study [32, 33]. The protocol was registered on Open Science Framework (OSF) (https://osf.io/p25zy/). Since the reporting checklist for methods research studies is currently under development [34], we adapted items from the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) for reporting meta-research studies [35].

Selection and characteristics of systematic reviews

We started from the sample of Gianola et al. [31], which collected 827 SRs published in 2020 in the rehabilitation field, encompassing different areas such as orthopedics, neurology, and geriatrics. We selected SRs of interventions that included only RCTs and reported at least one pairwise meta-analysis (of at least two RCTs). Eligible SRs had to have a previously registered protocol, available from a repository (i.e., PROSPERO, OSF) or published in a peer-reviewed journal. Empty reviews (i.e., reviews with no included studies) [36], reviews with no meta-analysis, reviews with dose–response meta-analyses, network meta-analyses, and meta-analyses of individual participant data were excluded.

General characteristics were extracted from the SR sample: country of the corresponding author, number of studies included in the SR and in the index meta-analysis, source of funding, source of protocol registration, reporting of a list of excluded studies, and whether the SR excluded studies because they did not report any outcome of interest. Furthermore, we explored whether the SRs searched for unpublished literature and whether they planned to assess, assessed or discussed non-reporting bias of an entire study (i.e., publication bias) or of a planned outcome within a study (i.e., selective reporting).

Identification of index meta-analysis

To perform the assessment, we selected an index meta-analysis (IMA) from each SR as our unit of analysis. We identified as IMA the meta-analysis on the primary outcome of the main comparison, as defined by the SR’s authors. In case of multiple meta-analyses on the primary outcome, or unclear definition of primary outcomes and comparisons, we considered as IMA the first meta-analysis reported in the results section. For each IMA, we then collected the outcome and the number of RCTs included.

Identification of the primary studies omitted from the index meta-analysis

We first identified all the RCTs considered eligible for the chosen comparison, either from the list of included studies of the SRs or, when available, from the list of excluded studies. In particular, we looked at studies that had been excluded from the SRs because they did not report any outcome of interest, since such outcomes may, for example, have been selectively not reported in the primary study. For this reason, we also retrieved and assessed the full texts and protocols of these trials, when available. We finally identified all RCTs omitted (i.e., not included) from the IMA.

The process of selection from the SR to the IMA and the omitted studies is represented in Fig. 1.

Fig. 1 Process of IMA and omitted RCTs selection. Legend: IMA Index meta-analysis, MA Meta-analysis, RCTs Randomized controlled trials, SR Systematic review

Assessment of the reasons for omission

For each omitted RCT, we extracted the presence of a registered protocol, whether the trial was retrieved from the SR list of included or excluded studies and if the outcome identified by the IMA was planned and/or reported (either completely or incompletely) by each trial. To collect this information, we read each trial's full text and, if available, the registered protocol and/or statistical analysis plan, looking for discrepancies between them and between different sections of the published trial (i.e., abstract, methods, and results).

We then assessed the reason for omission, following the adapted classification of Yordanov et al. [10] (Table 1): a) inadequate planning; b) selective reporting; c) incomplete reporting; d) unable to distinguish between selective reporting and inadequate planning; e) other situations; f) justified to be not included. Non-English trials or trials with full text not available were not assessed, although still representing omitted trials.

Table 1 Reasons for omission, adapted from Yordanov et al. [10]

We adapted the "inadequate planning" category of Yordanov et al. [10] to cover cases in which the omitted outcome had to be planned or was likely to have had to be planned.
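To make the classification above concrete, the sketch below encodes the listed reasons as a simple decision helper. It is a minimal illustration only, assuming a simplified set of extracted flags (protocol availability, whether the outcome was planned, reported, and usable, and whether the omission was justified); the authors' actual decision rules are those of Table 1 and Appendix 3.

```python
# Illustrative sketch: an assumed simplification of the adapted classification
# of Yordanov et al. described above, not the authors' actual rules.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OmittedTrial:
    protocol_available: bool         # was a registered protocol retrievable?
    outcome_planned: Optional[bool]  # was the IMA outcome planned? (None if unknown)
    outcome_reported: bool           # was the IMA outcome reported in the trial report?
    reported_usable: bool            # if reported, was it usable for meta-analysis?
    omission_justified: bool         # did the SR give a legitimate reason for omission?

def classify(trial: OmittedTrial) -> str:
    """Map a trial's extracted flags to an approximate omission reason."""
    if trial.omission_justified:
        return "justified to be not included"
    if trial.outcome_reported:
        return "other situations" if trial.reported_usable else "incomplete reporting"
    if not trial.protocol_available or trial.outcome_planned is None:
        return "unable to distinguish between selective reporting and inadequate planning"
    if trial.outcome_planned:
        return "selective reporting"   # planned in the protocol but results not reported
    return "inadequate planning"       # outcome (likely) should have been planned but was not
```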

For each omitted RCT we further extracted the publication year. Allowing one year for the 2013 SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statement [37] to be disseminated in the scientific community, we used 2014 as the cut-off year and investigated post hoc whether the publication year of the RCTs affected the classification of the omitted studies.

Before starting the assessment, a calibration phase was performed by two reviewers (SGL, MSY) piloting a small sample of 20 SRs (randomly selected and not equally distributed among CSRs and nCSRs). Disagreements were discussed with a third reviewer (SG). The final assessment was performed by one assessor. As a quality assessment measure, 30% of the sample was cross-checked by the second assessor and disagreements were discussed with a third reviewer (SG).

Statistical analysis

We used descriptive statistics to assess the proportion of omitted RCTs overall (i.e., any reason for omission) and for each reason. We also assessed the proportion of omitted RCTs considering only those with a registered protocol. Moreover, we assessed the proportion of IMAs with at least one omitted trial for each reason. We then descriptively compared these proportions between CSRs and nCSRs.
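As a minimal sketch of this descriptive analysis, the snippet below computes the overall and per-reason proportions from a flat table of omitted trials. The file name and column names ("ima_id", "reason", "sr_type", "registered") are hypothetical placeholders, not the study's actual dataset.

```python
import pandas as pd

# Hypothetical input: one row per omitted RCT, with its IMA, reason for omission,
# review type (CSR/nCSR) and whether a registered protocol was found.
omitted = pd.read_csv("omitted_rcts.csv")
TOTAL_ELIGIBLE = 1761  # eligible RCTs across the index meta-analyses

overall = len(omitted) / TOTAL_ELIGIBLE                           # any reason for omission
by_reason = omitted["reason"].value_counts(normalize=True)        # share of omitted RCTs per reason
imas_per_reason = omitted.groupby("reason")["ima_id"].nunique()   # IMAs with >=1 omission per reason
registered_only = omitted.loc[omitted["registered"], "reason"].value_counts(normalize=True)
by_review_type = omitted.groupby("sr_type")["reason"].value_counts(normalize=True)

print(f"Omitted overall: {overall:.1%}")
print(by_reason.round(3))
```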

Results

Selection and general characteristics of the included systematic reviews

Starting from a cohort of 827 SRs identified by Gianola et al. [31], 131 SRs and IMAs were selected for assessment (references in Appendix 1). The general characteristics of the included SRs are summarised in Supplementary Table 1 and reported in Appendix 2. Of these, 16.8% (N = 22) were CSRs and 83.2% (N = 109) were nCSRs.

Of the included SRs, 85.5% (N = 112) mentioned or assessed non-reporting biases (assessment of selective reporting with the Cochrane Risk of Bias tool, or assessment or planned assessment of publication bias through visual inspection of funnel plots or appropriate statistical tests).

Seventy-seven SRs (58.8%) excluded studies because they did not report any outcome of interest ("no relevant outcome data" or similar reasons), and sixty of these (45.8% of the whole sample) did not provide a list of excluded studies with detailed exclusion reasons for each study either (Supplementary Table 2).

Characteristics of the index meta-analyses

Overall, the 131 IMAs included a median of 6 RCTs (interquartile range [IQR] 3–10.5) and a total of 1044 RCTs. Of these IMAs, 16 included all the eligible studies, whereas 15 reported the exclusion of RCTs because they did not report any outcome of interest but could not be assessed because the bibliographic references were not provided. The remaining 100 IMAs omitted a total of 717 RCTs (median 3; IQR 1–7). Of these, 87.7% (N = 629) were retrieved from the list of included studies, while the remaining 12.3% (N = 88) were retrieved from the list of studies excluded because they did not report any outcome of interest. The characteristics of the IMAs are reported in Supplementary Table 3.

Assessment of the reasons for omission

Overall, 717 out of 1761 total eligible RCTs (40.7%) were omitted from the corresponding IMAs. The proportions of omitted RCTs for each of the reasons for omission are reported in Table 2.

Table 2 Proportion of omitted RCTs for each reason for omission

The assessments of all primary studies, with their classification and rationale, are included in Appendix 3. The quality assessment on 30% of the assessed RCTs showed almost perfect agreement between the two assessors (Cohen’s κ = 0.82).
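For reference, Cohen's kappa on a cross-checked subset can be computed as below; the two label lists are invented toy data standing in for the assessors' classifications, not the study's records.

```python
# Toy example of the inter-rater agreement check; labels are placeholders.
from sklearn.metrics import cohen_kappa_score

assessor_1 = ["inadequate planning", "selective reporting", "justified", "incomplete reporting"]
assessor_2 = ["inadequate planning", "selective reporting", "justified", "selective reporting"]

kappa = cohen_kappa_score(assessor_1, assessor_2)
print(f"Cohen's kappa = {kappa:.2f}")  # the study reports 0.82 on its 30% cross-checked sample
```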

Overall, only 29.3% (N = 210) of the assessed RCTs were registered (Supplementary Table 4). Table 3 reports the reasons for omission of registered RCTs only.

Table 3 Proportion of omitted registered RCTs for each reason for omission

Assessment of the reasons for omission according to the publication year

According to the publication year cut-off, the proportion of studies omitted from the IMAs changed across reasons: those assessed as "unable to distinguish between selective reporting and inadequate planning" were considerably reduced, whereas those assessed as "inadequate planning" more than doubled (Supplementary Table 5).

Comparison between CSRs and nCSRs

Comparing CSRs and nCSRs, trial omission occurred in 59.2% (231 out of 390 eligible RCTs) and 35.4% (486 out of 1371 eligible RCTs) of the eligible studies, respectively (Supplementary Table 3). Figure 2 shows the comparison of the proportion of studies omitted for each reason between CSRs and nCSRs, in descending order. The reasons with the greatest differences are "justified to be not included" (Δ% −15.9%), "unable to distinguish between selective reporting and inadequate planning" (Δ% 10.3%) and "other situations" (Δ% 8.3%).

Fig. 2 Comparison of the proportion of omitted RCTs for each reason for omission between CSRs and nCSRs. Abbreviations: CSRs Cochrane Systematic Reviews, nCSRs Non-Cochrane Systematic Reviews
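The proportions quoted above follow directly from the counts in Supplementary Table 3; a quick check is shown below. The Δ% convention is assumed here to be the CSR share minus the nCSR share of omitted RCTs for a given reason, and the helper function is illustrative rather than the authors' script.

```python
# Worked check of the CSR vs nCSR omission proportions quoted in the text.
csr_omitted, csr_eligible = 231, 390
ncsr_omitted, ncsr_eligible = 486, 1371

print(f"CSRs:  {csr_omitted / csr_eligible:.1%}")    # -> 59.2%
print(f"nCSRs: {ncsr_omitted / ncsr_eligible:.1%}")  # -> 35.4%

def delta_pct(n_reason_csr: int, n_reason_ncsr: int) -> float:
    """Assumed Δ% definition: CSR share minus nCSR share of omitted RCTs for one reason."""
    return 100 * (n_reason_csr / csr_omitted - n_reason_ncsr / ncsr_omitted)
```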

Discussion

Considering 131 SRs and their corresponding IMAs, which included a total of 1044 RCTs, our results show that omission of evidence occurred in more than 40% of eligible studies (40.7%; 717/1761) in 100 IMAs, comprising both studies already included in the SRs and studies retrieved from the list of studies excluded because they did not report any outcome of interest. Only 16 IMAs included all eligible studies, whereas 15 IMAs omitted studies excluded because they did not report any outcome of interest without providing their references.

At the SR level, almost 60% (77/131) of the selected SRs excluded studies because they did not report any outcome of interest. Furthermore, almost four out of five of these (60/77) did not provide a list of excluded studies either. These were in all cases nCSRs, as CSRs have a systematic ad hoc format for collecting and reporting the characteristics of excluded studies [38]. This should be acknowledged, as it prevented us from achieving a complete assessment; consequently, the proportion of omitted trials has been underestimated.

At the IMA level, some choices of the SR authors prevented the possible inclusion of trials that would otherwise have been included. For example, RCTs may have been omitted because they measured the outcome in a way that differed from what was planned by the SR (e.g., a different outcome measure than the one(s) identified by the SR, a dichotomized outcome instead of a continuous outcome or vice versa, or a different time frame than the one(s) selected by the SR).

At the RCT level, 3.3% (24/717) and 8.4% (60/717) of trials were omitted due to selective reporting and incomplete reporting, respectively. It is important to evaluate the results of these meta-analyses carefully, considering that certain RCTs may have been omitted due to the presumed negative or unfavourable nature of their results based on the magnitude or direction of the effect. Had these trials been included, they could potentially have shifted the effect estimate from positive to null or negative. Considering registered RCTs only, the proportions of studies omitted due to selective reporting and incomplete reporting increase to 7.6% and 10.5%, respectively. To help overcome these issues, the results of registered RCTs in the rehabilitation field should be made publicly available at ClinicalTrials.gov or any other registry within one year, as is already required for trials on drugs and devices [39], since results posted at ClinicalTrials.gov seem to be more completely reported than those in published reports [40].

Almost 40% (39.3%, 282/717) of the omitted RCTs were classified as "unable to distinguish between selective reporting and inadequate planning", and these did not contribute to 68% (N = 68) of the meta-analyses. This is a direct consequence of trial non-registration, which occurred in 59% (423/717) of omitted RCTs, even though the Declaration of Helsinki states that “studies involving human beings must be registered” [41]. Nevertheless, a large number of studies are still not registered in advance, although it should be acknowledged that this phenomenon has been improving since 2014, as shown in Supplementary Table 4.

Planning problems were observed in 17% (122/717) of the omitted RCTs, with missing contributions to 40% (N = 40) of the meta-analyses. The creation and implementation of core outcome sets will help reduce research waste and make it possible to judge when a study has truly failed to plan and measure an important outcome [42, 43]. When considering registered RCTs only, planning issues represent the main reason for omission.

Studies that were legitimately omitted from the IMAs (i.e., "justified to be not included") accounted for 15.1% (108/717) of the omitted RCTs, including those reporting results in a different modality (i.e., mean change and standard deviations, repeated measures time × treatment) (2%, 14/717) and those that provided results as medians due to the non-normal distribution of the data (0.6%, 4/717). Other RCTs were legitimately omitted because they used a different outcome measure than the one(s) selected by the SR authors (7.5%, 54/717), because they measured the outcome at a different time frame (2%, 14/717), because they were secondary analyses or follow-ups of studies included in the IMA or of studies omitted from the IMA (and consequently already assessed) (1.3%, 9/717), or for other reasons (1.8%, 13/717).

Comparing CSRs and nCSRs, the former presented a higher proportion of omitted RCTs (59.2% vs. 35.4%, respectively). This may be a consequence of the fact that CSRs had more comprehensive searches (i.e., published and unpublished sources) [44], more often included sources with incomplete or hard-to-access data (i.e., congress abstracts, theses) and provided a list of excluded studies, being more methodologically rigorous and showing better reporting and higher quality [45, 46] than nCSRs. Conversely, the majority of nCSRs (94/109) did not provide a list of excluded studies, thus reducing the number of RCTs assessed from nCSRs and underestimating the proportion of RCTs omitted from these sources. Among RCTs omitted from CSRs and nCSRs, differences may exist, but they are probably related to the specific outcomes/comparisons addressed by the individual SRs rather than to a real difference between CSRs and nCSRs.

Comparison with previous studies

Our results differ slightly from those obtained by Yordanov et al. [10], who showed that, in a sample of CSRs from different medical fields, 78% of included RCTs did not contribute to meta-analyses of the most important outcomes, revealing a waste of research that was in large part avoidable. Specifically, they reported a higher proportion of studies omitted due to inadequate planning and incomplete reporting. However, in Yordanov et al.: a) only CSRs in different fields of medicine published between 2011 and 2014 were used to identify studies to be assessed, whereas we included both CSRs and nCSRs published in 2020 in the rehabilitation field; b) all the meta-analyses contributing to the Summary of Findings were evaluated, whereas we focused on IMAs only; c) only RCTs published after 2010 were assessed, to maximise the possibility of identifying study registrations or protocols, whereas we considered RCTs irrespective of the publication year; d) studies excluded from the SRs because they did not report any outcome of interest were not assessed, whereas we assessed them. Furthermore, studies that were omitted because they reported data differently from what was planned by the SR were classified as incomplete reporting by Yordanov et al., whereas in the present work an additional reason ("justified") was added.

Strengths and limitations

To the best of our knowledge, this is the first study in the rehabilitation field to focus on omitted studies from IMAs and investigate the reasons behind this. We assessed a high number of SRs, including both CSRs and nCSRs on several interventions and clinical conditions and more than 700 RCTs omitted from their IMAs.

The present study has some limitations: i) some studies may have been improperly assessed or, conversely, not assessed when they should have been, because of poor reporting by SRs on included and excluded studies and a lack of exclusion reason(s); ii) the identification of omitted RCTs was based solely on the studies included (or not) in each SR, so it was not possible to quantify trials omitted due to inaccuracies in study selection by the SRs; iii) we read the RCTs' protocols only when the reference and/or registration number was reported by the authors, and we did not search registries or contact authors for clarification; iv) RCTs that were included in the meta-analysis were not assessed; v) SRs with a meta-analysis of one study only, SRs with no meta-analysis and empty SRs were not included, although omission of trials might occur in these cases as well (particularly among excluded studies for the empty ones); vi) non-English studies were not assessed. These considerations suggest that the phenomenon may have been underestimated in this work.

Finally, due to the paucity of core outcome sets available in the literature and the wide range of health conditions addressed by our sample, we did not consider core outcome sets when assessing the "inadequate planning" omission reason, but limited our assessment to the lack of planning of the outcome in the registered protocol, when available.

Clinical and research implications

From a clinical point of view, our results warn clinicians, consumers and policy makers about the reliability of the effect estimates provided by meta-analyses in the rehabilitation field, irrespective of the quality of the reviews. Since missing results might systematically differ from the included ones [2], failing to include some RCTs may substantially bias the results of the meta-analyses. The impact of such missing results might be particularly serious when omission is caused by the nature of the results, based on the magnitude or direction of the effect.

From a research point of view, our results may serve as a call for researchers to improve reporting in RCTs, to register clinical trials in international registries, such as ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform, and to plan and measure outcomes that are relevant to consumers and to public health [42]. Additionally, our work recommends that systematic reviewers systematically check RCT protocols, even for trials that do not report any outcome of interest, to identify whether an outcome was planned or likely to have been measured but not reported in the publication, and to transparently justify the reasons for exclusion [47].

Conclusion

Almost half of the eligible RCTs were omitted from the index meta-analyses of CSRs and nCSRs for outcome-related reasons, representing a missed opportunity to include evidence in rehabilitation research. Compared with nCSRs, CSRs omitted a higher proportion of eligible studies. Our results highlight the urgent need for better reporting and for core outcome sets for each clinical condition to be developed and used in the design of clinical studies in rehabilitation. Likewise, prospective registration of every clinical study should be performed systematically.

Availability of data and materials

All data generated or analysed during this study are included in this published article and stored at https://osf.io/p25zy/.

Abbreviations

CSR:

Cochrane systematic review

IMA:

Index meta-analysis

IQR:

Interquartile range

nCSR:

Non-Cochrane systematic review

OSF:

Open Science Framework

PRISMA:

Preferred Reporting Items for Systematic Reviews and Meta-analyses

RCT:

Randomized controlled trial

SPIRIT:

Standard Protocol Items: Recommendations for Interventional Trials

SR:

Systematic review

References

  1. Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ. 2010;340: c365.


  2. Page MJ, Higgins JPT, Sterne JAC. Chapter 13: Assessing risk of bias due to missing results in a synthesis. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.

  3. Page MJ, McKenzie JE, Kirkham J, Dwan K, Kramer S, Green S, et al. Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions. Cochrane Database Syst Rev. 2014;2014(10):Mr000035.


  4. Boutron I, Page MJ, Higgins JPT, Altman DG, Lundh A, Hróbjartsson A. Chapter 7: Considering bias and conflicts of interest among the included studies. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.

  5. Junqueira DR, Phillips R, Zorzela L, Golder S, Loke Y, Moher D, et al. Time to improve the reporting of harms in randomized controlled trials. J Clin Epidemiol. 2021;136:216–20.


  6. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE. 2008;3(8):e3081.


  7. Dwan K, Gamble C, Williamson PR, Kirkham JJ; Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS ONE. 2013;8(7):e66844.


  8. Schmucker C, Schell LK, Portalupi S, Oeller P, Cabrera L, Bassler D, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS ONE. 2014;9(12):e114023.


  9. Page MJ, McKenzie JE, Higgins JPT. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review. BMJ Open. 2018;8(3): e019703.


  10. Yordanov Y, Dechartres A, Atal I, Tran VT, Boutron I, Crequit P, et al. Avoidable waste of research related to outcome planning and reporting in clinical trials. BMC Med. 2018;16(1):87.


  11. Williamson PR, Gamble C. Identification and impact of outcome selection bias in meta-analysis. Stat Med. 2005;24(10):1547–61.


  12. Komukai K, Sugita S, Fujimoto S. Publication Bias and Selective Outcome Reporting in Randomized Controlled Trials Related to Rehabilitation: A Literature Review. Arch Phys Med Rehabil. 2023;S0003-9993(23):00362–3.


  13. Zhang S, Liang F, Li W. Comparison between publicly accessible publications, registries, and protocols of phase III trials indicated persistence of selective outcome reporting. J Clin Epidemiol. 2017;91:87–94.


  14. Jones CW, Keil LG, Holland WC, Caughey MC, Platts-Mills TF. Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC Med. 2015;13:282.


  15. Wayant C, Scheckel C, Hicks C, Nissen T, Leduc L, Som M, et al. Evidence of selective reporting bias in hematology journals: A systematic review. PLoS ONE. 2017;12(6): e0178379.


  16. Saini P, Loke YK, Gamble C, Altman DG, Williamson PR, Kirkham JJ. Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews. BMJ. 2014;349:g6501.


  17. Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, et al. A systematic review of comparisons between protocols or registrations and full reports in primary biomedical research. BMC Med Res Methodol. 2018;18(1):9.


  18. Smyth RM, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ. 2011;342:c7153.


  19. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457–65.


  20. Chan AW, Krleza-Jeric K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004;171(7):735–40.


  21. Wang A, Menon R, Li T, Harris L, Harris IA, Naylor J, et al. Has the degree of outcome reporting bias in surgical randomized trials changed? A meta-regression analysis. ANZ J Surg. 2023;93(1–2):76–82.


  22. Miranda JS, Deonizio AP, Abbade JF, Miot HA, Mbuagbaw L, Thabane L, et al. Quality of reporting of outcomes in trials of therapeutic interventions for pressure injuries in adults: a systematic methodological survey. Int Wound J. 2021;18(2):147–57.


  23. James A, Ravaud P, Riveros C, Raux M, Tran VT. Completeness and Mismatch of Patient-Important Outcomes After Trauma. Ann Surg Open. 2022;3(4): e211.


  24. Gandhi GY, Murad MH, Fujiyoshi A, Mullan RJ, Flynn DN, Elamin MB, et al. Patient-important outcomes in registered diabetes trials. JAMA. 2008;299(21):2543–9.


  25. González-Díaz SN, García-Campa M, Noyola-Pérez A, Guzmán-Avilán RI, de Lira-Quezada CE, Álvarez-Villalobos N, et al. Patient-important outcomes in clinical trials of atopic diseases and asthma in the last decade: A systematic review. World Allergy Organ J. 2023;16(4):100769.


  26. Rahimi K, Malhotra A, Banning AP, Jenkinson C. Outcome selection and role of patient reported outcomes in contemporary cardiovascular trials: systematic review. BMJ. 2010;341:c5707.


  27. Treweek S, Miyakoda V, Burke D, Shiely F. Getting it wrong most of the time? Comparing trialists’ choice of primary outcome with what patients and health professionals want. Trials. 2022;23(1):537.


  28. Gianola S, Gasparini M, Agostini M, Castellini G, Corbetta D, Gozzer P, et al. Survey of the reporting characteristics of systematic reviews in rehabilitation. Phys Ther. 2013;93(11):1456–66.


  29. Innocenti T, Giagio S, Salvioli S, Feller D, Minnucci S, Brindisino F, et al. Completeness of Reporting Is Suboptimal in Randomized Controlled Trials Published in Rehabilitation Journals, With Trials With Low Risk of Bias Displaying Better Reporting: A Meta-research Study. Arch Phys Med Rehabil. 2022;103(9):1839–47.


  30. Arienti C, Armijo-Olivo S, Minozzi S, Tjosvold L, Lazzarini SG, Patrini M, et al. Methodological Issues in Rehabilitation Research: A Scoping Review. Arch Phys Med Rehabil. 2021;102(8):1614-22 e14.


  31. Gianola S, Bargeri S, Nembrini G, Varvello A, Lunny C, Castellini G. One-Third of Systematic Reviews in Rehabilitation Applied the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) System to Evaluate Certainty of Evidence: A Meta-Research Study. Arch Phys Med Rehabil. 2023;104(3):410–7.


  32. Puljak L. Methodological research: open questions, the need for “research on research” and its implications for evidence-based health care and reducing research waste. Int J Evid Based Healthc. 2019;17(3):145–6.


  33. Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020;9(7):497–508.


  34. Lawson DO, Puljak L, Pieper D, Schandelmaier S, Collins GS, Brignardello-Petersen R, et al. Reporting of methodological studies in health research: a protocol for the development of the MethodologIcal STudy reportIng Checklist (MISTIC). BMJ Open. 2020;10(12):e040478.


  35. Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evid Based Med. 2017;22(4):139–42.


  36. Yaffe J, Montgomery P, Hopewell S, Shepard LD. Empty reviews: a description and consideration of Cochrane systematic reviews with no included studies. PLoS ONE. 2012;7(5):e36626.


  37. Chan AW, Tetzlaff JM, Altman DG, Laupacis A, Gotzsche PC, Krleza-Jeric K, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–7.


  38. Cumpston M, Lasserson T, Chandler J, Page MJ. Chapter III: Reporting the review. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.

  39. Tse T, Williams RJ, Zarin DA. Reporting “basic results” in ClinicalTrials.gov. Chest. 2009;136(1):295–303.


  40. Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med. 2013;10(12):e1001566 (discussion e).


  41. World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013;310(20):2191–4.


  42. Williamson PR, Altman DG, Bagley H, Barnes KL, Blazeby JM, Brookes ST, et al. The COMET Handbook: version 1.0. Trials. 2017;18(Suppl 3):280.


  43. Clarke M, Williamson PR. Core outcome sets and systematic reviews. Syst Rev. 2016;5:11.


  44. Biocic M, Fidahic M, Cikes K, Puljak L. Comparison of information sources used in Cochrane and non-Cochrane systematic reviews: A case study in the field of anesthesiology and pain. Res Synth Methods. 2019;10(4):597–605.


  45. Useem J, Brennan A, LaValley M, Vickery M, Ameli O, Reinen N, et al. Systematic Differences between Cochrane and Non-Cochrane Meta-Analyses on the Same Topic: A Matched Pair Analysis. PLoS ONE. 2015;10(12): e0144980.


  46. Moseley AM, Elkins MR, Herbert RD, Maher CG, Sherrington C. Cochrane reviews used more rigorous methods than non-Cochrane reviews: survey of systematic reviews in physiotherapy. J Clin Epidemiol. 2009;62(10):1021–30.


  47. McKenzie JE, Brennan SE, Ryan RE, Thomson HJ, Johnston RV, Thomas J. Chapter 3: Defining the criteria for including studies and how they will be grouped for the synthesis. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.


Acknowledgements

Not applicable.

Funding

This study was supported and funded by the Italian Ministry of Health. The APC was funded by Italian Ministry of Health - Ricerca Corrente. The funder had no role in study design, data collection and analysis, decision to publish, or manuscript preparation.

Author information

Authors and Affiliations

Authors

Contributions

SGL, SB, GC and SG contributed to the conception and design of the work. SGL and MSY contributed to the acquisition and analysis of the data. SGL, MSY, SB, GC and SG contributed to the interpretation of the data. SGL, SB, GC and SG have drafted the work or substantively revised it. All authors reviewed the manuscript for important intellectual content. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Stefano Giuseppe Lazzarini.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix 1.

 References of included systematic reviews.

Additional file 2: 

Appendix 2. Characteristics of included systematic reviews.

Additional file 3: 

Appendix 3. Assessment of the reason for omission.

Additional file 4.

PRISMA checklist.

Additional file 5: Supplementary Table 1. 

Characteristics of included reviews.

Additional file 6: Supplementary Table 2. 

Absolute frequencies, relative frequencies and column percentages obtained by cross-referencing the information on the list of excluded studies with detailed exclusion reasons for each study and the exclusion of studies because they do not report any outcome of interest.

Additional file 7: Supplementary Table 3. 

Characteristics of the index meta–analyses.

Additional file 8: Supplementary Table 4. 

Information concerning a registered protocol.

Additional file 9: Supplementary Table 5. 

Comparison of the proportion of omitted RCTs for each reason for omission according to year of publication.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Lazzarini, S.G., Stella Yousif, M., Bargeri, S. et al. Reasons for missing evidence in rehabilitation meta-analyses: a cross-sectional meta-research study. BMC Med Res Methodol 23, 245 (2023). https://doi.org/10.1186/s12874-023-02064-7

