
Scoping implementation science for the beginner: locating yourself on the “subway line” of translational research

Abstract

Background

Beginners to the discipline of implementation science often struggle to determine whether their research questions “count” as implementation science.

Main text

In this paper, three implementation scientists share a heuristic tool to help investigators determine where their research questions fall in the translational research continuum. They use a “subway model” that envisions a journey to implementation research with stops along the way at efficacy and effectiveness research.

Conclusions

A series of structured questions about intervention efficacy, effectiveness, and implementation can help guide researchers to select research questions and appropriate study designs along the spectrum of translational research.


Introduction

Given evidence that it may take 17 years for research findings to be taken up into practice [1], there is a growing urgency in health services research to address the seemingly intractable research-to-practice gap. This urgency has fueled the development of implementation science, defined as the “scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and hence, to improve the quality and effectiveness of health services and care” [2]. The term “implementation science” is used in the United States, but this discipline is alternatively known as “dissemination and implementation research” and “knowledge translation” [3]. The growth of implementation science is evidenced by an increasing number of established frameworks [4] and recognized implementation outcomes [5]. We acknowledge that implementation science draws from and is related to numerous disciplines, including public health, psychology, organizational theory, human factors engineering, and others [2]. However, the similarities and differences between implementation science, dissemination and implementation research, knowledge translation, and other terms for the enterprise of facilitating the uptake of evidence into practice are beyond the scope of this commentary. Interested readers are referred to existing literature addressing these distinctions [6, 7].

Helping researchers distinguish between implementation science and related disciplines

There are growing efforts to build capacity for a cadre of implementation science researchers, both from federal funders such as the National Institutes of Health [8] and from academic institutions [9]. We have been part of efforts to train researchers at our respective institutions and in regional, national, and international training programs. These programs include conventional graduate-level courses, intensive 3-day immersion experiences, and informal and formal mentorship across a range of training stages, from undergraduate to graduate and postgraduate trainees. We engage trainees through a variety of mechanisms, including both experiential and didactic approaches. Trainees across the training continuum (e.g., undergraduates to seasoned researchers making a lateral move into implementation science) often ask fundamental questions such as the following: What kinds of research questions fall under the umbrella of implementation science? How do I know if my research focus is ready to be examined with an implementation science lens? How do I know when my intervention is ready for implementation? To answer these questions, we draw on a subway metaphor (Fig. 1), iteratively created with input from our trainees, that we have found to be a useful teaching aid. The goal of this brief commentary is to share our thinking and subway metaphor with the broader research community in the hope that it proves useful to others.

Fig. 1 “Subway” schematic to guide researchers contemplating implementation studies of evidence-based interventions

We first encourage our trainees to identify the practice that they want to implement, a “practice of interest,” or POI. We use this terminology because there are myriad interventions, procedures, guidelines, tools, and practices that individuals or organizations might seek to implement [10]. Although our trainees tend to be interested in improving health care delivery and outcomes, we note that a “practice of interest” could refer to practices deployed in educational or community settings.

The next step is to evaluate the evidence in support of the POI. Although this step is second nature to implementation scientists, we find that other types of investigators may make their way to this discipline having identified an implementation strategy (e.g., audit and feedback, education) without having scrutinized the degree to which the practice of interest has a strong basis in evidence. Occasionally, students present an “evidence-based” concept or bundle whose particular manifestations or components have varying levels of evidence; one example is the use of advance care planning in patients with advanced illness [11].

Therefore, the first branch point after identifying the practice of interest is to consider whether the POI has been shown to be efficacious. That is, does the POI improve clinical outcomes of interest when deployed under tightly controlled, ideally randomized, conditions? In the development of new drugs, this stage of research often occurs in Phase 2 or Phase 3 testing [12]. For non-drug interventions meant to improve health, efficacy testing may take many forms, including high-fidelity simulation or randomized controlled trials [13]. If the practice of interest has not yet shown efficacy, efficacy studies are needed. In this case, we encourage students to consider future implementation in the development and refinement of the intervention; we call this “designing for dissemination and implementation” [12].

If the POI has shown efficacy, the next question to consider is whether it has also shown effectiveness. We define effectiveness as “real-world efficacy,” or evidence of benefit outside the realm of randomized controlled trials with strict inclusion and exclusion criteria [13]. Effectiveness studies should reflect real-world practice, with heterogeneous settings, clinicians, service providers, and patients or clients. If effectiveness studies showing benefit in the target population are lacking, we submit that two courses of action are reasonable. If no real-world studies of the evidence-based practice have been conducted at all, such studies are needed to address the question of whether the POI works in the real world. Often, however, effectiveness studies exist, but the setting or population of interest to the researcher is not adequately represented in the scientific literature. In this case, it may be prudent to test both effectiveness and implementation research questions using an effectiveness-implementation hybrid design [14]. For action-oriented researchers, this family of study designs is highly appealing because it allows simultaneous study of effectiveness and implementation outcomes, thereby expediting the translation of research findings into practice. However, hybrid studies are often resource intensive and may be unfamiliar to readers and reviewers outside implementation science. For these reasons, hybrid trials should only be undertaken by research teams with the expertise to design, execute, analyze, and disseminate them.

In the event that the POI has shown both efficacy and effectiveness, it is appropriate to proceed with studies focused on implementation [15]. Depending on what is already known about the implementation context and potential strategies, these studies might focus on contextual inquiry of implementation determinants (i.e., barriers and facilitators), the development and/or selection of implementation strategies [16, 17], and/or comparative effectiveness studies of different implementation strategies. In our experience, clinical audiences and some funders may demand measurement of effectiveness outcomes even when the existing effectiveness evidence is strong. Later-stage hybrid designs (Type 2 or Type 3) [14] represent one approach that focuses on implementation while satisfying the desire to measure effectiveness outcomes. Irrespective of the implementation focus, mixed methods designs are commonly used in implementation research [18].
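For readers who think procedurally, the branch points above can be summarized as a small decision function. The sketch below is illustrative only and is not part of the published schematic; the function and parameter names (next_step_for_poi, shown_efficacy, and so on) are hypothetical, and the logic is a deliberate simplification (it omits, for example, the caveat that hybrid trials require an experienced team).

```python
from enum import Enum, auto


class NextStep(Enum):
    """Study types corresponding to the arms of the subway schematic."""

    EFFICACY_STUDY = auto()        # POI not yet shown to work under controlled conditions
    EFFECTIVENESS_STUDY = auto()   # efficacious, but untested in the real world
    HYBRID_STUDY = auto()          # effectiveness-implementation hybrid (e.g., Type 1)
    IMPLEMENTATION_STUDY = auto()  # efficacy and effectiveness both established


def next_step_for_poi(shown_efficacy: bool,
                      effective_in_target_population: bool,
                      any_effectiveness_studies_exist: bool) -> NextStep:
    """Walk the branch points of the schematic for a practice of interest (POI)."""
    if not shown_efficacy:
        # No controlled evidence of benefit yet: efficacy studies come first,
        # ideally while "designing for dissemination and implementation".
        return NextStep.EFFICACY_STUDY
    if not effective_in_target_population:
        if any_effectiveness_studies_exist:
            # Effectiveness shown elsewhere, but not in this setting or population:
            # a hybrid design can test effectiveness and implementation together.
            return NextStep.HYBRID_STUDY
        # No real-world studies at all: effectiveness studies are needed.
        return NextStep.EFFECTIVENESS_STUDY
    # Green arm of the schematic: implementation studies are warranted.
    return NextStep.IMPLEMENTATION_STUDY
```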

Hypothetical case studies to test the implementation science schematic

In our color-coded schematic, only those arms in green (studies of implementation or hybrid studies) fall under the umbrella of “implementation science.” The following hypothetical research questions illustrate the utility of this schematic.

Research question 1

Does a new clinical protocol for sepsis improve patient outcomes? The POI is the clinical protocol. It has been pilot tested in a non-randomized fashion with promising results. Without randomized controlled trial data, we would say that evidence of efficacy is missing, so implementation studies are not yet warranted. The protocol should undergo efficacy testing, and the intervention developers should consider future implementation during the refinement and testing of the intervention (e.g., is it too complicated to work in routine clinical practice?).

Research question 2

Multiple randomized clinical trials have shown that Drug X improves outpatient blood pressure control in hypertensive patients. Does Drug X work in heterogeneous patient populations? The POI is Drug X. Although there is evidence of efficacy, we do not yet know whether Drug X works in routine clinical practice. Efficacious interventions can fall flat in the real world once the realities of dosing, side effects, and interactions with other conditions and medications are considered. Drug X is not yet ready for studies of implementation, so effectiveness studies should occur next. However, effectiveness studies may yield observational data that will inform future implementation efforts.

Research question 3

Care coordination pathway Y improves outcomes for heart failure patients in both efficacy and effectiveness studies. Will it work for patients with diabetes? The POI is the care coordination pathway. It would be reasonable to study Pathway Y in effectiveness studies focused on patients with diabetes. Alternatively, a hybrid Type 1 trial would maintain a focus on effectiveness while either prospectively or retrospectively collecting information to inform future implementation efforts. Given that the existing effectiveness data arise from another population and that the intervention itself is complex and involves clinical process and workflow changes, a hybrid Type 1 study could be warranted. Relevant implementation outcomes for this early implementation evaluation include acceptability (i.e., how palatable or agreeable a POI is from the perspective of stakeholders), appropriateness (i.e., the perceived fit of the POI for a given setting, clinician, or patient), feasibility (i.e., the extent to which a POI can be successfully deployed in a given setting), and fidelity (i.e., the degree to which a POI is implemented as intended) [5]. These implementation outcomes and their relationship to more commonly studied clinical effectiveness outcomes are explained in detail in seminal papers by Proctor and colleagues from 2009 [19] and 2011 [5].

Research question 4

Colon cancer screening leads to earlier cancer detection and improved patient outcomes [20]. What strategies can be used to increase colon cancer screening? The POI is colon cancer screening. Given the robust evidence base supporting it, this practice is ripe for studies of implementation. The focus of such studies will depend on what is known about the context to be studied; potential study designs range from observational contextual inquiry to randomized controlled trials of implementation strategies.
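To tie the case studies back to the illustrative next_step_for_poi() sketch given earlier (again, a hypothetical construct rather than part of the published schematic), the four research questions map through its branch points as follows.

```python
# Inputs encode each hypothetical case: (shown efficacy, effective in the
# target population, any effectiveness studies exist at all).
cases = {
    "RQ1: sepsis protocol": (False, False, False),       # -> EFFICACY_STUDY
    "RQ2: Drug X": (True, False, False),                 # -> EFFECTIVENESS_STUDY
    "RQ3: pathway Y for diabetes": (True, False, True),  # -> HYBRID_STUDY (effectiveness alone also reasonable)
    "RQ4: colon cancer screening": (True, True, True),   # -> IMPLEMENTATION_STUDY
}
for question, evidence in cases.items():
    print(f"{question}: {next_step_for_poi(*evidence).name}")
```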

Discussion

We have outlined a series of structured questions useful in helping newcomers to implementation research answer the question, “Is this implementation science?” As with any heuristic tool, there are limitations. First, the schematic may be difficult to apply prospectively to questions of de-implementation, or retrospectively, as in program evaluation. Second, the schematic places a great deal of emphasis on randomized controlled trials as the defining feature of research. Many research questions are not amenable to testing with randomized designs, either because the unit of analysis is an organizational unit (e.g., a hospital ward) or because randomization is not acceptable in the population or clinical settings to be studied [21]. Nevertheless, we have found this work useful in guiding discussions about implementation science research and knowledge translation, and we hope that it is useful to others both in assisting training efforts and in supporting the evaluation of published research. The schematic is likely to remain dynamic, and we will continue to build upon it in the future.

Conclusions

A series of structured questions about intervention efficacy, effectiveness, and implementation can help guide researchers to select research questions and appropriate study designs along the spectrum of translational research.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Abbreviations

POI: Practice of interest

References

  1. Balas EA, Boren SA. Managing clinical knowledge for health care improvement. Yearb Med Inform. 2000;(1):65–70.

  2. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1:1.

  3. Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102(7):1274–81.

  4. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

  5. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.

  6. Meissner HI, Glasgow RE, Vinson CA, et al. The U.S. training institute for dissemination and implementation research in health. Implement Sci. 2013;8(1):12.

  7. Moore JE, Rashid S, Park JS, Khan S, Straus SE. Longitudinal evaluation of a course to build core competencies in implementation practice. Implement Sci. 2018;13(1):106.

  8. Wensing M, Grol R. Knowledge translation in health: how implementation science could contribute more. BMC Med. 2019;17(1):88.

  9. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.

  10. Jordan Z, Lockwood C, Munn Z, Aromataris E. The updated Joanna Briggs Institute model of evidence-based healthcare. Int J Evid Based Healthc. 2019;17(1):58–71.

  11. Conroy S, Fade P, Fraser A, Schiff R. Advance care planning: concise evidence-based guidelines. Clin Med. 2009;9(1):76–9.

  12. Food and Drug Administration. Step 3: clinical research. 2018. https://www.fda.gov/ForPatients/Approvals/Drugs/ucm405622.htm. Accessed 26 Jan 2019.

  13. Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Prev Med. 1986;15(5):451–74.

  14. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.

  15. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Republished research: implementation research: what it is and how to do it. Br J Sports Med. 2014;48(8):731–6.

  16. Powell BJ, Beidas RS, Lewis CC, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94.

  17. Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

  18. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Admin Pol Ment Health. 2011;38(1):44–53.

  19. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Admin Pol Ment Health. 2009;36(1):24–34.

  20. Steinwachs D, Allen JD, Barlow WE, et al. National Institutes of Health state-of-the-science conference statement: enhancing use and quality of colorectal cancer screening. Ann Intern Med. 2010;152(10):663–7.

  21. Handley MA, Lyles CR, McCulloch C, Cattamanchi A. Selecting and improving quasi-experimental designs in effectiveness and implementation research. Annu Rev Public Health. 2018;39:5–25.


Acknowledgements

The authors gratefully acknowledge the insights of their students, whose questions inspired the work presented.

Funding

Dr. Beidas receives salary support from the National Institutes of Health P50MH113840 [Beidas, Mandell, Volpp/Buttenheim]. The funding body had no role in generating the idea for the manuscript or in writing, revising, or approving the manuscript. Study design, data collection, and analysis are not applicable to this manuscript, which does not describe empirical research.

Author information

Authors and Affiliations

Authors

Contributions

MBLF conceived of the manuscript concept, drafted and revised the manuscript, approved the final manuscript, and agrees to be personally accountable for the accuracy and integrity of the work. GMC critically and substantively revised the manuscript, approved the final manuscript, and agrees to be personally accountable for the accuracy and integrity of the work. RSB conceived of the manuscript concept, drafted and revised the manuscript, approved the final manuscript, and agrees to be personally accountable for the accuracy and integrity of the work.

Authors’ information

MBLF is Assistant Professor of Anesthesiology and Critical Care, Co-Director of the Penn Center for Perioperative Outcomes Research and Transformation, and Senior Fellow of the Leonard Davis Institute of Health Economics at the University of Pennsylvania. GMC is Professor of Pharmacy Practice and Psychiatry and Director of the Center for Implementation Research at the University of Arkansas for Medical Sciences and Research Health Scientist, Central Arkansas Veterans Healthcare System. RSB is Associate Professor of Psychiatry and of Medical Ethics and Health Policy at the University of Pennsylvania; Founding Director of the Penn Implementation Science Center at the Leonard Davis Institute of Health Economics (PISCE@LDI) at the University of Pennsylvania and Senior Fellow of the Leonard Davis Institute of Health Economics at the University of Pennsylvania.

Corresponding author

Correspondence to Meghan B. Lane-Fall.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Lane-Fall, M.B., Curran, G.M. & Beidas, R.S. Scoping implementation science for the beginner: locating yourself on the “subway line” of translational research. BMC Med Res Methodol 19, 133 (2019). https://doi.org/10.1186/s12874-019-0783-z

