10 Shattuck St, Boston MA 02115 | (617) 432-2136
Copyright © 2020 President and Fellows of Harvard College. All rights reserved.
Knowledge synthesis is a term used to describe the method of synthesizing results from individual studies and interpreting them within the larger body of knowledge on the topic. It requires highly structured, transparent, and reproducible methods using quantitative and/or qualitative evidence. Systematic reviews, meta-analyses, scoping reviews, rapid reviews, narrative syntheses, and practice guidelines, among others, are all forms of knowledge synthesis. For more information on types of reviews, visit the "Types of Reviews" tab on the left.
A systematic review differs from an ordinary literature review in that it uses a comprehensive, methodical, transparent, and reproducible search strategy to ensure conclusions are as unbiased and as close to the truth as possible. The Cochrane Handbook for Systematic Reviews of Interventions defines a systematic review as follows:
"A systematic review attempts to identify, appraise and synthesize all the empirical evidence that meets pre-specified eligibility criteria to answer a given research question. Researchers conducting systematic reviews use explicit methods aimed at minimizing bias, in order to produce more reliable findings that can be used to inform decision making [...] This involves: the a priori specification of a research question; clarity on the scope of the review and which studies are eligible for inclusion; making every effort to find all relevant research and to ensure that issues of bias in included studies are accounted for; and analysing the included studies in order to draw conclusions based on all the identified research in an impartial and objective way." (Chapter 1: Starting a review)
Video: "What are systematic reviews?" from Cochrane on YouTube.
A systematic review is a "high-level overview of primary research on a focused question" that draws on high-quality research evidence. Source: Kysh, Lynn (2013). Difference between a systematic review and a literature review. [figshare].
Depending on your learning style, please explore the resources in various formats on the tabs above.
For additional tutorials, visit the SR Workshop Videos from UNC at Chapel Hill outlining each stage of the systematic review process.
Know the difference! Systematic review vs. literature review
It is common to confuse systematic and literature reviews, as both provide a summary of the existing literature or research on a specific topic. Despite this common ground, the two types differ significantly. Please review the following chart (and its corresponding poster linked below) for a detailed explanation of each type and the differences between them. Source: Kysh, L. (2013). What's in a name? The difference between a systematic review and a literature review and why it matters. [Poster].
Types of literature reviews along with associated methodologies
JBI Manual for Evidence Synthesis. Find definitions and methodological guidance.
- Systematic Reviews - Chapters 1-7
- Mixed Methods Systematic Reviews - Chapter 8
- Diagnostic Test Accuracy Systematic Reviews - Chapter 9
- Umbrella Reviews - Chapter 10
- Scoping Reviews - Chapter 11
- Systematic Reviews of Measurement Properties - Chapter 12
Systematic reviews vs. scoping reviews
Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal , 26 (2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
Gough, D., Thomas, J., & Oliver, S. (2012). Clarifying differences between review designs and methods. Systematic Reviews, 1 (28). https://doi.org/10.1186/2046-4053-1-28
Munn, Z., Peters, M., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18 (1), 143. https://doi.org/10.1186/s12874-018-0611-x. Also, check out the LibGuide from Weill Cornell Medicine for the differences between a systematic review and a scoping review and when to embark on either one of them.
Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: Exploring review types and associated information retrieval requirements . Health Information & Libraries Journal , 36 (3), 202–222. https://doi.org/10.1111/hir.12276
Temple University. Review Types. This guide provides useful descriptions of some of the types of reviews listed in the above article.
UMD Health Sciences and Human Services Library. Review Types. Guide describing Literature Reviews, Scoping Reviews, and Rapid Reviews.
Whittemore, R., Chao, A., Jang, M., Minges, K. E., & Park, C. (2014). Methods for knowledge synthesis: An overview. Heart & Lung: The Journal of Acute and Critical Care, 43 (5), 453–461. https://doi.org/10.1016/j.hrtlng.2014.05.014
Differences between a systematic review and other types of reviews
Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). 'Scoping the scope' of a Cochrane review. Journal of Public Health, 33 (1), 147–150. https://doi.org/10.1093/pubmed/fdr015
Kowalczyk, N., & Truluck, C. (2013). Literature reviews and systematic reviews: What is the difference? Radiologic Technology , 85 (2), 219–222.
White, H., Albers, B., Gaarder, M., Kornør, H., Littell, J., Marshall, Z., Matthew, C., Pigott, T., Snilstveit, B., Waddington, H., & Welch, V. (2020). Guidance for producing a Campbell evidence and gap map. Campbell Systematic Reviews, 16 (4), e1125. https://doi.org/10.1002/cl2.1125. Check also this comparison between evidence and gap maps and systematic reviews.
Rapid Reviews Tutorials
Rapid Review Guidebook by the National Collaborating Centre for Methods and Tools (NCCMT)
Hamel, C., Michaud, A., Thuku, M., Skidmore, B., Stevens, A., Nussbaumer-Streit, B., & Garritty, C. (2021). Defining Rapid Reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. Journal of clinical epidemiology , 129 , 74–85. https://doi.org/10.1016/j.jclinepi.2020.09.041
Videos on systematic reviews
This video lecture explains in detail the steps necessary to conduct a systematic review (44 min.). Here is a brief introduction to how to evaluate systematic reviews (16 min.).
Systematic Reviews: What are they? Are they right for my research? - 47 min. video recording with a closed caption option.
More training videos on systematic reviews:
- Videos from Yale University (approximately 5-10 minutes each)
- Videos with Margaret Foster (approximately 55 min. each)
Books on Systematic Reviews
Books on Meta-analysis
Guidelines for a systematic review as part of the dissertation
Further readings on experiences of PhD students and doctoral programs with systematic reviews
Puljak, L., & Sapunar, D. (2017). Acceptance of a systematic review as a thesis: Survey of biomedical doctoral programs in Europe . Systematic Reviews , 6 (1), 253. https://doi.org/10.1186/s13643-017-0653-x
Perry, A., & Hammond, N. (2002). Systematic reviews: The experiences of a PhD Student . Psychology Learning & Teaching , 2 (1), 32–35. https://doi.org/10.2304/plat.2002.2.1.32
Daigneault, P.-M., Jacob, S., & Ouimet, M. (2014). Using systematic review methods within a Ph.D. dissertation in political science: Challenges and lessons learned from practice . International Journal of Social Research Methodology , 17 (3), 267–283. https://doi.org/10.1080/13645579.2012.730704
UMD Doctor of Philosophy Degree Policies
Before you embark on a systematic review research project, check the UMD PhD Policies to make sure you are on the right path. Systematic reviews require a team of at least two reviewers and an information specialist or a librarian. Discuss with your advisor the authorship roles of the involved team members. Keep in mind that the UMD Doctor of Philosophy Degree Policies (scroll down to the section, Inclusion of one's own previously published materials in a dissertation ) outline such cases, specifically the following:
"It is recognized that a graduate student may co-author work with faculty members and colleagues that should be included in a dissertation. In such an event, a letter should be sent to the Dean of the Graduate School certifying that the student's examining committee has determined that the student made a substantial contribution to that work. This letter should also note that the inclusion of the work has the approval of the dissertation advisor and the program chair or Graduate Director. The letter should be included with the dissertation at the time of submission. The format of such inclusions must conform to the standard dissertation format. A foreword to the dissertation, as approved by the Dissertation Committee, must state that the student made substantial contributions to the relevant aspects of the jointly authored work included in the dissertation."
Bioinformatics
Environmental Sciences
Collaboration for Environmental Evidence. 2018. Guidelines and Standards for Evidence synthesis in Environmental Management. Version 5.0 (AS Pullin, GK Frampton, B Livoreil & G Petrokofsky, Eds) www.environmentalevidence.org/information-for-authors .
Pullin, A. S., & Stewart, G. B. (2006). Guidelines for systematic review in conservation and environmental management. Conservation Biology, 20 (6), 1647–1656. https://doi.org/10.1111/j.1523-1739.2006.00485.x
Engineering Education
Public Health
Social Sciences
Resources for your writing
A systematic review is an evidence synthesis that uses explicit, reproducible methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses appropriate statistical techniques to combine these valid studies.
A meta-analysis is a systematic review that uses quantitative methods to synthesize and summarize the pooled data from included studies.
A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman 1992, Oxman 1993). The key characteristics of a systematic review are:
a clearly stated set of objectives with pre-defined eligibility criteria for studies;
an explicit, reproducible methodology;
a systematic search that attempts to identify all studies that would meet the eligibility criteria;
an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias; and
a systematic presentation, and synthesis, of the characteristics and findings of the included studies.
Many systematic reviews contain meta-analyses. Meta-analysis is the use of statistical methods to summarize the results of independent studies (Glass 1976). By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review (see Chapter 9, Section 9.1.3). They also facilitate investigations of the consistency of evidence across studies, and the exploration of differences across studies.
Annual Review of Psychology, Volume 70, 2019. Review article: How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses.
Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.
The following presentation is a recording of the Getting Started with Systematic Reviews workshop (4/2022), offered by the Duke Medical Center Library & Archives. A NetID/pw is required to access the tutorial via Warpwire.
A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a predefined search strategy to identify and appraise all available published literature on a specific topic. The meticulous nature of the systematic review research methodology differentiates a systematic review from a narrative review (literature review or authoritative review). This paper provides a brief step-by-step summary of how to conduct a systematic review, which may be of interest to clinicians and researchers.
Keywords: research; research design; systematic review.
© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).
Rapid Reviews, Scoping Reviews
Definition : A systematic review is a summary of research results (evidence) that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize research on a specific issue. It synthesizes the results of multiple related primary studies using strategies that reduce biases and errors.
When to use : If you want to identify, appraise, and synthesize all available research that is relevant to a particular question with reproducible search methods.
Limitations : It requires extensive time and a team.
Definition : Rapid reviews are a form of evidence synthesis that may provide more timely information for decision making compared with standard systematic reviews.
When to use : When you want to evaluate new or emerging research topics using some systematic review methods at a faster pace.
Limitations : It is not as rigorous or as thorough as a systematic review and therefore may be more likely to be biased.
Definition : Scoping reviews are often used to categorize or group existing literature in a given field in terms of its nature, features, and volume.
When to use : To label a body of literature with relevance to time, location (e.g. country or context), source (e.g. peer-reviewed or grey literature), and origin (e.g. healthcare discipline or academic field). It is also used to clarify working definitions and conceptual boundaries of a topic or field, or to identify gaps in the existing literature/research.
Limitations : More citations to screen; takes as long as or longer than a systematic review. Larger teams may be required because of the larger volume of literature. Uses different screening criteria and processes than a systematic review.
What is a systematic review?
We are very grateful to Duke Libraries for allowing us to use their guide to systematic reviews as a template for our own.
One of the most familiar types of evidence synthesis is a systematic review. A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. Its key characteristics are a clearly stated set of objectives with pre-defined eligibility criteria; an explicit, reproducible methodology; a systematic search for all eligible studies; an assessment of the validity of the findings of the included studies; and a systematic presentation and synthesis of their characteristics and findings.
There are many types of evidence synthesis projects, of which the systematic review is only one. The selection of review type depends entirely on the research question; not all research questions are well suited to systematic reviews.
The table below summarizes various review types and associated methodologies. Librarians can also help your team determine which review type might be most appropriate for your project.
Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108. doi:10.1111/j.1471-1842.2009.00848.x
| Review type | Description | Search | Appraisal | Synthesis | Analysis |
| --- | --- | --- | --- | --- | --- |
| Critical review | Aims to demonstrate writer has extensively researched literature and critically evaluated its quality. Goes beyond mere description to include degree of analysis and conceptual innovation. Typically results in hypothesis or model | Seeks to identify most significant items in the field | No formal quality assessment. Attempts to evaluate according to contribution | Typically narrative, perhaps conceptual or chronological | Significant component: seeks to identify conceptual contribution to embody existing or derive new theory |
| Literature review | Generic term: published materials that provide examination of recent or current literature. Can cover wide range of subjects at various levels of completeness and comprehensiveness. May include research findings | May or may not include comprehensive searching | May or may not include quality assessment | Typically narrative | Analysis may be chronological, conceptual, thematic, etc. |
| Mapping review/systematic map | Map out and categorize existing literature from which to commission further reviews and/or primary research by identifying gaps in research literature | Completeness of searching determined by time/scope constraints | No formal quality assessment | May be graphical and tabular | Characterizes quantity and quality of literature, perhaps by study design and other key features. May identify need for primary or secondary research |
| Meta-analysis | Technique that statistically combines the results of quantitative studies to provide a more precise effect of the results | Aims for exhaustive, comprehensive searching. May use funnel plot to assess completeness | Quality assessment may determine inclusion/exclusion and/or sensitivity analyses | Graphical and tabular with narrative commentary | Numerical analysis of measures of effect assuming absence of heterogeneity |
| Mixed studies review/mixed methods review | Refers to any combination of methods where one significant component is a literature review (usually systematic). Within a review context it refers to a combination of review approaches, for example combining quantitative with qualitative research or outcome with process studies | Requires either very sensitive search to retrieve all studies or separately conceived quantitative and qualitative strategies | Requires either a generic appraisal instrument or separate appraisal processes with corresponding checklists | Typically both components will be presented as narrative and in tables. May also employ graphical means of integrating quantitative and qualitative studies | Analysis may characterize both literatures and look for correlations between characteristics or use gap analysis to identify aspects absent in one literature but present in the other |
| Overview | Generic term: summary of the [medical] literature that attempts to survey the literature and describe its characteristics | May or may not include comprehensive searching (depends whether systematic overview or not) | May or may not include quality assessment (depends whether systematic overview or not) | Synthesis depends on whether systematic or not. Typically narrative but may include tabular features | Analysis may be chronological, conceptual, thematic, etc. |
| Qualitative systematic review/qualitative evidence synthesis | Method for integrating or comparing the findings from qualitative studies. It looks for 'themes' or 'constructs' that lie in or across individual qualitative studies | May employ selective or purposive sampling | Quality assessment typically used to mediate messages, not for inclusion/exclusion | Qualitative, narrative synthesis | Thematic analysis, may include conceptual models |
| Rapid review | Assessment of what is already known about a policy or practice issue, by using systematic review methods to search and critically appraise existing research | Completeness of searching determined by time constraints | Time-limited formal quality assessment | Typically narrative and tabular | Quantities of literature and overall quality/direction of effect of literature |
| Scoping review | Preliminary assessment of potential size and scope of available research literature. Aims to identify nature and extent of research evidence (usually including ongoing research) | Completeness of searching determined by time/scope constraints. May include research in progress | No formal quality assessment | Typically tabular with some narrative commentary | Characterizes quantity and quality of literature, perhaps by study design and other key features. Attempts to specify a viable review |
| State-of-the-art review | Tends to address more current matters in contrast to other combined retrospective and current approaches. May offer new perspectives | Aims for comprehensive searching of current literature | No formal quality assessment | Typically narrative, may have tabular accompaniment | Current state of knowledge and priorities for future investigation and research |
| Systematic review | Seeks to systematically search for, appraise and synthesize research evidence, often adhering to guidelines on the conduct of a review | Aims for exhaustive, comprehensive searching | Quality assessment may determine inclusion/exclusion | Typically narrative with tabular accompaniment | What is known; recommendations for practice. What remains unknown; uncertainty around findings, recommendations for future research |
| Systematic search and review | Combines strengths of critical review with a comprehensive search process. Typically addresses broad questions to produce 'best evidence synthesis' | Aims for exhaustive, comprehensive searching | May or may not include quality assessment | Minimal narrative, tabular summary of studies | What is known; recommendations for practice. Limitations |
| Systematized review | Attempts to include elements of systematic review process while stopping short of systematic review. Typically conducted as postgraduate student assignment | May or may not include comprehensive searching | May or may not include quality assessment | Typically narrative with tabular accompaniment | What is known; uncertainty around findings; limitations of methodology |
| Umbrella review | Specifically refers to review compiling evidence from multiple reviews into one accessible and usable document. Focuses on broad condition or problem for which there are competing interventions and highlights reviews that address these interventions and their results | Identification of component reviews, but no search for primary studies | Quality assessment of studies within component reviews and/or of reviews themselves | Graphical and tabular with narrative commentary | What is known; recommendations for practice. What remains unknown; recommendations for future research |
Implementation Science, volume 19, article number 43 (2024)
Studies of implementation strategies range in rigor, design, and evaluated outcomes, presenting interpretation challenges for practitioners and researchers. This systematic review aimed to describe the body of research evidence testing implementation strategies across diverse settings and domains, using the Expert Recommendations for Implementing Change (ERIC) taxonomy to classify strategies and the Reach Effectiveness Adoption Implementation and Maintenance (RE-AIM) framework to classify outcomes.
We conducted a systematic review of studies examining implementation strategies from 2010-2022, registered with PROSPERO (CRD42021235592). We searched databases using the terms "implementation strategy", "intervention", "bundle", "support", and their variants. We also solicited study recommendations from implementation science experts and mined existing systematic reviews. We included studies that quantitatively assessed the impact of at least one implementation strategy to improve health or health care using an outcome that could be mapped to the five evaluation dimensions of RE-AIM. Only studies meeting prespecified methodologic standards were included. We described the characteristics of studies and the frequency of implementation strategy use across study arms. We also examined common strategy pairings and co-occurrence with significant outcomes.
Our search identified 16,605 studies; 129 met inclusion criteria. Studies tested an average of 6.73 strategies (range 0-20). The most frequently assessed outcomes were Effectiveness (n = 82; 64%) and Implementation (n = 73; 56%). The implementation strategies occurring most frequently in the experimental arm were Distribute Educational Materials (n = 99), Conduct Educational Meetings (n = 96), Audit and Provide Feedback (n = 76), and External Facilitation (n = 59). These strategies were often used in combination. Nineteen implementation strategies were frequently tested and associated with significantly improved outcomes. However, many strategies were not tested sufficiently to draw conclusions.
This review of 129 methodologically rigorous studies built upon prior implementation science data syntheses to identify implementation strategies that had been experimentally tested and summarized their impact on outcomes across diverse outcomes and clinical settings. We present recommendations for improving future similar efforts.
While many implementation strategies exist, it has been challenging to compare their effectiveness across a wide range of trial designs and practice settings.
This systematic review provides a transdisciplinary evaluation of implementation strategies across population, practice setting, and evidence-based interventions using a standardized taxonomy of strategies and outcomes.
Educational strategies were employed ubiquitously; nineteen other commonly used implementation strategies, including External Facilitation and Audit and Provide Feedback, were associated with positive outcomes in these experimental trials.
This review offers guidance for scholars and practitioners alike in selecting implementation strategies and suggests a roadmap for future evidence generation.
Implementation strategies are “methods or techniques used to enhance the adoption, implementation, and sustainment of evidence-based practices or programs” (EBPs) [ 1 ]. In 2015, the Expert Recommendations for Implementing Change (ERIC) study organized a panel of implementation scientists to compile a standardized set of implementation strategy terms and definitions [ 2 , 3 , 4 ]. These 73 strategies were then organized into nine “clusters” [ 5 ]. The ERIC taxonomy has been widely adopted and further refined [ 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 ]. However, much of the evidence for individual or groups of ERIC strategies remains narrowly focused. Prior systematic reviews and meta-analyses have assessed strategy effectiveness, but have generally focused on a specific strategy, (e.g., Audit and Provide Feedback) [ 14 , 15 , 16 ], subpopulation, disease (e.g., individuals living with dementia) [ 16 ], outcome [ 15 ], service setting (e.g., primary care clinics) [ 17 , 18 , 19 ] or geography [ 20 ]. Given that these strategies are intended to have broad applicability, there remains a need to understand how well implementation strategies work across EBPs and settings and the extent to which implementation knowledge is generalizable.
There are challenges in assessing the evidence of implementation strategies across many EBPs, populations, and settings. Heterogeneity in population characteristics, study designs, methods, and outcomes have made it difficult to quantitatively compare which strategies work and under which conditions [ 21 ]. Moreover, there remains significant variability in how researchers operationalize, apply, and report strategies (individually or in combination) and outcomes [ 21 , 22 ]. Still, synthesizing data related to using individual strategies would help researchers replicate findings and better understand possible mediating factors including the cost, timing, and delivery by specific types of health providers or key partners [ 23 , 24 , 25 ]. Such an evidence base would also aid practitioners with implementation planning such as when and how to deploy a strategy for optimal impact.
Building upon previous efforts, we therefore conducted a systematic review to evaluate the level of evidence supporting the ERIC implementation strategies across a broad array of health and human service settings and outcomes, as organized by the evaluation framework RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) [ 26 , 27 , 28 ]. A secondary aim of this work was to identify patterns in the scientific reporting of strategy use that could inform not only reporting standards for strategies but also the methods employed in future studies. The current study was guided by the following research questions:
What implementation strategies have been most commonly and rigorously tested in health and human service settings?
Which implementation strategies were commonly paired?
What is the evidence supporting commonly tested implementation strategies?
We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA-P) model [ 29 , 30 , 31 ] to develop and report on the methods for this systematic review (Additional File 1). This study was considered to be non-human subjects research by the RAND institutional review board.
The protocol was registered with PROSPERO (PROSPERO 2021 CRD42021235592).
This review sought to synthesize evidence for implementation strategies from research studies conducted across a wide range of health-related settings and populations. Inclusion criteria required that studies: 1) be available in English; 2) be published between January 1, 2010 and September 20, 2022; 3) be based on experimental research (excluding protocols, commentaries, conference abstracts, and proposed frameworks); 4) be set in a health or human service context (described below); 5) test at least one quantitative outcome that could be mapped to the RE-AIM evaluation framework [ 26 , 27 , 28 ]; and 6) evaluate the impact of an implementation strategy that could be classified using the ERIC taxonomy [ 2 , 32 ]. We defined health and human service setting broadly, including inpatient and outpatient healthcare settings, specialty clinics, mental health treatment centers, long-term care facilities, group homes, correctional facilities, child welfare or youth services, aging services, and schools, and required that the focus be on a health outcome. We excluded hybrid type I trials that primarily focused on establishing EBP effectiveness, qualitative studies, studies that described implementation barriers and facilitators without assessing implementation strategy impact on an outcome, and studies not meeting standardized rigor criteria defined below.
Our three-pronged search strategy included searching academic databases (i.e., CINAHL, PubMed, and Web of Science for replicability and transparency), seeking recommendations from expert implementation scientists, and assessing existing, relevant systematic reviews and meta-analyses.
Search terms included “implementation strateg*” OR “implementation intervention*” OR “implementation bundl*” OR “implementation support*.” The search, conducted on September 20, 2022, was limited to English language and publication between 2010 and 2022, similar to other recent implementation science reviews [ 22 ]. This timeframe was selected to coincide with the advent of Implementation Science and when the term “implementation strategy” became conventionally used [ 2 , 4 , 33 ]. A full search strategy can be found in Additional File 2.
Each study’s title and abstract were read by two reviewers, who dichotomously scored studies on each of the six eligibility criteria described above (yes=1 or no=0), resulting in a total score ranging from 0 to 6. Abstracts receiving a six from both reviewers were included in the full text review. Those with only one score of six were adjudicated by a senior member of the team (MJC, SSR, DEG). The study team held weekly meetings to troubleshoot and resolve any ongoing issues noted through the abstract screening process.
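The dual-reviewer scoring rule above can be sketched as follows. This is a minimal illustration, not the authors' actual screening tooling; the criterion names and dictionary representation are assumptions.

```python
# Hypothetical sketch of the dual-reviewer abstract screening logic.
# Criterion names are illustrative placeholders for the six eligibility criteria.

CRITERIA = [
    "english", "published_2010_2022", "experimental",
    "health_setting", "reaim_outcome", "eric_strategy",
]

def screen(scores: dict) -> int:
    """Sum dichotomous (yes=1 / no=0) scores across the six criteria."""
    return sum(scores[c] for c in CRITERIA)

def decision(reviewer1: dict, reviewer2: dict) -> str:
    s1, s2 = screen(reviewer1), screen(reviewer2)
    if s1 == 6 and s2 == 6:
        return "include"      # both reviewers judged all six criteria met
    if s1 == 6 or s2 == 6:
        return "adjudicate"   # sent to a senior team member
    return "exclude"
```

Under this rule, only abstracts scored six by both reviewers proceed directly to full text review; a single score of six triggers senior adjudication.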
During the full text screening process, we reviewed, in pairs, each article that had progressed through abstract screening. Conflicts between reviewers were adjudicated by a senior member of the team for a final inclusion decision (MJC, SSR, DEG).
After reviewing published rigor screening tools [ 34 , 35 , 36 ], we developed an assessment of study rigor that was appropriate for the broad range of reviewed implementation studies. Reviewers evaluated studies on the following: 1) presence of a concurrent comparison or control group (=2 for traditional randomized controlled trial or stepped wedge cluster randomized trial and =1 for pseudo-randomized and other studies with concurrent control); 2) EBP standardization by protocol or manual (=1 if present); 3) EBP fidelity tracking (=1 if present); 4) implementation strategy standardization by operational description, standard training, or manual (=1 if present); 5) length of follow-up from full implementation of intervention (=2 for twelve months or longer, =1 for six to eleven months, or =0 for less than six months); and 6) number of sites (=1 for more than one site). Rigor scores ranged from 0 to 8, with 8 indicating the most rigorous. Articles were included if they 1) included a concurrent control group, 2) had an experimental design, and 3) received a score of 7 or 8 from two independent reviewers.
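The rigor rubric above can be expressed as a simple scoring function. This is a sketch under assumed field names, not the authors' instrument.

```python
# Illustrative computation of the 0-8 study rigor score described above.
# Field names and the dictionary representation are assumptions of this sketch.

def rigor_score(study: dict) -> int:
    score = 0
    # 1) concurrent comparison or control group
    if study["design"] in ("rct", "stepped_wedge"):
        score += 2
    elif study["design"] == "concurrent_control":  # pseudo-randomized, etc.
        score += 1
    # 2-4) one point each for EBP standardization, EBP fidelity tracking,
    # and implementation strategy standardization
    score += study["ebp_protocol"]
    score += study["ebp_fidelity"]
    score += study["strategy_standardized"]
    # 5) length of follow-up from full implementation
    if study["followup_months"] >= 12:
        score += 2
    elif study["followup_months"] >= 6:
        score += 1
    # 6) more than one site
    if study["n_sites"] > 1:
        score += 1
    return score
```

A multi-site RCT with both the EBP and the strategy standardized, fidelity tracked, and twelve months of follow-up reaches the maximum of 8; inclusion required a score of 7 or 8 from two independent reviewers.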
We contacted 37 global implementation science experts who were recognized by our study team as leaders in the field or who were commonly represented among first or senior authors in the included abstracts. We asked each expert for recommendations of publications meeting study inclusion criteria (i.e., quantitatively evaluating the effectiveness of an implementation strategy). Recommendations were recorded and compared to the full abstract list.
Eighty-four systematic reviews were identified through the initial search strategy (See Additional File 3). Systematic reviews that examined the effectiveness of implementation strategies were reviewed in pairs for studies that were not found through our initial literature search.
Data from the full text review were abstracted in pairs, with conflicts resolved by senior team members (DEG, MJC) using a standard Qualtrics abstraction form. The form captured the setting, number of sites and participants studied, evidence-based practice/program of focus, outcomes assessed (based on RE-AIM), strategies used in each study arm, whether the study took place in the U.S. or outside of the U.S., and the findings (i.e., was there significant improvement in the outcome(s)?). We coded implementation strategies used in the Control and Experimental Arms. We defined the Control Arm as receiving the lowest number of strategies (which could mean zero strategies or care as usual) and the Experimental Arm as the most intensive arm (i.e., receiving the highest number of strategies). When studies included multiple Experimental Arms, the Experimental Arm with the least intensive implementation strategy(ies) was classified as “Control” and the Experimental Arm with the most intensive implementation strategy(ies) was classified as the “Experimental” Arm.
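The arm-classification rule above (fewest strategies coded as Control, most intensive as Experimental) can be sketched as follows; the data representation is an assumption for illustration.

```python
# Hedged sketch of the arm-classification rule: the arm with the fewest
# strategies (possibly zero, i.e., care as usual) is coded "Control" and the
# arm with the most strategies is coded "Experimental".

def classify_arms(arms: dict) -> tuple:
    """arms maps an arm label to the list of ERIC strategies used in that arm;
    returns (control_label, experimental_label)."""
    ordered = sorted(arms, key=lambda label: len(arms[label]))
    return ordered[0], ordered[-1]
```

For a study with one arm receiving only Distribute Educational Materials and another receiving that plus Audit and Provide Feedback and External Facilitation, the first arm would be coded Control and the second Experimental.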
Implementation strategies were classified using standard definitions (MJC, SSR, DEG), based on minor modifications to the ERIC taxonomy [ 2 , 3 , 4 ]. Modifications resulted in 70 named strategies and were made to decrease redundancy and improve clarity. These modifications were based on input from experts, cognitive interview data, and team consensus [ 37 ] (See Additional File 4). Outcomes were then coded into RE-AIM outcome domains following best practices as recommended by framework experts [ 26 , 27 , 28 ]. We coded the RE-AIM domain of Effectiveness as either an assessment of the effectiveness of the EBP or the implementation strategy. We did not assess implementation strategy fidelity or effects on health disparities as these are recently adopted reporting standards [ 27 , 28 ] and not yet widely implemented in current publications. Further, we did not include implementation costs as an outcome because reporting guidelines have not been standardized [ 38 , 39 ].
Assessment and minimization of bias is an important component of high-quality systematic reviews. The Cochrane Collaboration guidance for conducting high-quality systematic reviews recommends including a specific assessment of bias for individual studies by assessing the domains of randomization, deviations of intended intervention, missing data, measurement of the outcome, and selection of the reported results (e.g., following a pre-specified analysis plan) [ 40 , 41 ]. One way we addressed bias was by consolidating multiple publications from the same study into a single finding (i.e., N =1), so as to avoid inflating estimates due to multiple publications on different aspects of a single trial. We also included only high-quality studies, as described above. However, it was not feasible to consistently apply an assessment of bias tool due to implementation science’s broad scope and the heterogeneity of study design, context, outcomes, and variable measurement. For example, most implementation studies reviewed had many outcomes across the RE-AIM framework, with no one outcome designated as primary, precluding assignment of a single score across studies.
We used descriptive statistics to present the distribution of health or healthcare area, settings, outcomes, and the median number of included patients and sites per study, overall and by country (classified as U.S. vs. non-U.S.). Implementation strategies were described individually, using descriptive statistics to summarize the frequency of strategy use “overall” (in any study arm), and the mean number of strategies reported in the Control and Experimental Arms. We additionally described the strategies that were only in the experimental (and not control) arm, defining these as strategies that were “tested” and may have accounted for differences in outcomes between arms.
We described frequencies of pair-wise combinations of implementation strategies in the Experimental Arm. To assess the strength of the evidence supporting implementation strategies used in the Experimental Arm, study outcomes were categorized by RE-AIM and coded based on whether use of the strategies was associated with a significantly positive effect (yes=1; no=0). We then created an indicator variable for whether at least one RE-AIM outcome in the study was significantly positive (yes=1; no=0). We plotted strategies on a graph with quadrants defined by the median number of studies in which a strategy appeared and the median percent of studies in which a strategy was associated with at least one positive RE-AIM outcome. The upper right quadrant—higher number of studies overall and higher percent of studies with a significant RE-AIM outcome—represents a superior level of evidence. For implementation strategies in the upper right quadrant, we describe each RE-AIM outcome and the proportion of studies with a significant outcome.
We identified 14,646 articles through the initial literature search, 17 articles through expert recommendation (three of which were not included in the initial search), and 1,942 articles through reviewing prior systematic reviews (Fig. 1 ). After removing duplicates, 9,399 articles were included in the initial abstract screening. Of those, 48% of abstracts ( n =4,075) were reviewed in pairs for inclusion. Articles with a score of five or six were reviewed a second time ( n =2,859). One quarter of abstracts that scored lower than five were reviewed a second time at random. We screened the full text of 1,426 articles in pairs. Common reasons for exclusion were 1) study rigor, including no clear delineation between the EBP and implementation strategy, 2) not testing an implementation strategy, and 3) article type that did not meet inclusion criteria (e.g., commentary, protocol, etc.). Six hundred seventeen articles were reviewed for study rigor, with 385 excluded for reasons related to study design and rigor, and 86 removed for other reasons (e.g., not a research article). Among the three additional expert-recommended articles, one met inclusion criteria and was added to the analysis. The final number of studies abstracted was 129, representing 143 publications.
Expanded PRISMA Flow Diagram
The expanded PRISMA flow diagram provides a description of each step in the review and abstraction process for the systematic review
Of 129 included studies (Table 1 ; see also Additional File 5 for Summary of Included Studies), 103 (79%) were conducted in a healthcare setting. Health care settings varied and included primary care ( n =46; 36%), specialty care ( n =27; 21%), mental health ( n =11; 9%), and public health ( n =30; 23%), with 64 studies (50%) occurring in an outpatient health care setting. Studies included a median of 29 sites and 1,419 members of the target population (e.g., patients or students). The number of strategies varied widely across studies, with Control Arms averaging approximately two strategies (Range = 0-20, including studies with no strategy in the comparison group) and Experimental Arms averaging eight strategies (Range = 1-21). Non-U.S. studies ( n =73) included more sites and larger target populations, with a median of 32 sites and 1,531 patients assessed per study.
Organized by RE-AIM, the most evaluated outcomes were Effectiveness ( n = 82, 64%) and Implementation ( n = 73, 56%); followed by Maintenance ( n =40; 31%), Adoption ( n =33; 26%), and Reach ( n =31; 24%). Most studies ( n = 98, 76%) reported at least one significantly positive outcome. Adoption and Implementation outcomes showed positive change in three-quarters of studies ( n =78), while Reach ( n =18; 58%), Effectiveness ( n =44; 54%), and Maintenance ( n =23; 58%) outcomes evidenced positive change in approximately half of studies.
The following describes the results for each research question.
Table 2 shows the frequency of studies within which an implementation strategy was used in the Control Arm, Experimental Arm(s), and tested strategies (those used exclusively in the Experimental Arm) grouped by strategy type, as specified by previous ERIC reports [ 2 , 6 ].
In about half the studies (53%; n =69), the Control Arms were “active controls” that included at least one strategy, with an average of 1.64 (and up to 20) strategies reported in control arms. The two most common strategies used in Control Arms were: Distribute Educational Materials ( n =52) and Conduct Educational Meetings ( n =30).
Experimental conditions included an average of 8.33 implementation strategies per study (Range = 1-21). Figure 2 shows a heat map of the strategies that were used in the Experimental Arms in each study. The most common strategies in the Experimental Arm were Distribute Educational Materials ( n =99), Conduct Educational Meetings ( n =96), Audit and Provide Feedback ( n =76), and External Facilitation ( n =59).
Implementation strategies used in the Experimental Arm of included studies. Explore more here: https://public.tableau.com/views/Figure2_16947070561090/Figure2?:language=en-US&:display_count=n&:origin=viz_share_link
The average number of implementation strategies included in the Experimental Arm only (and not in the Control Arm) was 6.73 (Range = 0-20). Footnote 2 Overall, the top 10% of tested strategies included Conduct Educational Meetings ( n =68), Audit and Provide Feedback ( n =63), External Facilitation ( n =54), Distribute Educational Materials ( n =49), Tailor Strategies ( n =41), Assess for Readiness and Identify Barriers and Facilitators ( n =38), and Organize Clinician Implementation Team Meetings ( n =37). Few studies tested a single strategy ( n =9). These strategies included Audit and Provide Feedback, Conduct Educational Meetings, Conduct Ongoing Training, Create a Learning Collaborative, External Facilitation ( n =2), Facilitate Relay of Clinical Data to Providers, Prepare Patients/Consumers to be Active Participants, and Use Other Payment Schemes. Three implementation strategies were included in the Control or Experimental Arms but were never tested: Use Mass Media, Stage Implementation Scale Up, and Fund and Contract for the Clinical Innovation.
Table 3 shows the five most used strategies in Experimental Arms with their top ten most frequent pairings, excluding Distribute Educational Materials and Conduct Educational Meetings, as these strategies were included in almost all Experimental and half of Control Arms. The five most used strategies in the Experimental Arm included Audit and Provide Feedback ( n =76), External Facilitation ( n =59), Tailor Strategies ( n =43), Assess for Readiness and Identify Barriers and Facilitators ( n =43), and Organize Implementation Teams ( n =42).
Strategies frequently paired with these five strategies included two educational strategies: Distribute Educational Materials and Conduct Educational Meetings. Other commonly paired strategies included Develop a Formal Implementation Blueprint, Promote Adaptability, Conduct Ongoing Training, Purposefully Reexamine the Implementation, and Develop and Implement Tools for Quality Monitoring.
We classified the strength of evidence for each strategy by evaluating both the number of studies in which each strategy appeared in the Experimental Arm and the percentage of times there was at least one significantly positive RE-AIM outcome. Using these factors, Fig. 3 shows the number of studies in which individual strategies were evaluated (on the y axis) compared to the percentage of times that studies including those strategies had at least one positive outcome (on the x axis). Due to the non-normal distribution of both factors, we used the median (rather than the mean) to create four quadrants. Strategies in the lower left quadrant were tested in fewer than the median number of studies (8.5) and were less frequently associated with a significant RE-AIM outcome (75%). The upper right quadrant included strategies that occurred in more than the median number of studies (8.5) and had more than the median percent of studies with a significant RE-AIM outcome (75%); thus those 19 strategies were viewed as having stronger evidence. Of those 19 implementation strategies, Conduct Educational Meetings, Distribute Educational Materials, External Facilitation, and Audit and Provide Feedback continued to occur frequently, appearing in 59-99 studies.
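The quadrant assignment in Fig. 3 amounts to a median split on both axes. A minimal sketch, with the thresholds taken from the text and the function signature assumed for illustration:

```python
# Median split used to classify strategies into the four quadrants of Fig. 3.
# Thresholds come from the text: median number of Experimental-Arm studies = 8.5,
# median percent of studies with >= 1 significant RE-AIM outcome = 75%.

N_STUDIES_MEDIAN = 8.5
PCT_POSITIVE_MEDIAN = 75.0

def quadrant(n_studies: float, pct_positive: float) -> str:
    vertical = "upper" if n_studies > N_STUDIES_MEDIAN else "lower"
    horizontal = "right" if pct_positive > PCT_POSITIVE_MEDIAN else "left"
    return f"{vertical}-{horizontal}"

# A strategy appearing in 76 studies with 80% positive results lands in the
# upper-right (stronger evidence) quadrant.
```

The medians were used instead of means because both factors were non-normally distributed; the "upper-right" label corresponds to the 19 strategies the review treats as having stronger evidence.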
Experimental Arm Implementation Strategies with significant RE-AIM outcome. Explore more here: https://public.tableau.com/views/Figure3_16947017936500/Figure3?:language=en-US&publish=yes&:display_count=n&:origin=viz_share_link
Figure 4 graphically illustrates the proportion of significant outcomes for each RE-AIM outcome for the 19 commonly used and evidence-based implementation strategies in the upper right quadrant. These findings again show the widespread use of Conduct Educational Meetings and Distribute Educational Materials. Implementation and Effectiveness outcomes were assessed most frequently, with Implementation being the most commonly reported significantly positive outcome.
RE-AIM outcomes for the 19 Top-Right Quadrant Implementation Strategies. The y-axis is the number of studies and the x-axis is a stacked bar chart for each RE-AIM outcome with R=Reach, E=Effectiveness, A=Adoption, I=Implementation, M=Maintenance. Blue denotes studies with at least one significant RE-AIM outcome; light blue denotes studies which used the given implementation strategy and did not have a significant RE-AIM outcome. Explore more here: https://public.tableau.com/views/Figure4_16947017112150/Figure4?:language=en-US&publish=yes&:display_count=n&:origin=viz_share_link
This systematic review identified 129 experimental studies examining the effectiveness of implementation strategies across a broad range of health and human service settings. Overall, we found that evidence is lacking for most ERIC implementation strategies, that most studies employed combinations of strategies, and that implementation outcomes, categorized by RE-AIM dimensions, have not been universally defined or applied. Accordingly, other researchers have described the need for universal outcome definitions and descriptions across implementation research studies [ 28 , 42 ]. Our findings have important implications not only for the current state of the field but also for creating guidance to help investigators determine which strategies, and in what context, to examine.
The four most evaluated strategies were Distribute Educational Materials, Conduct Educational Meetings, External Facilitation, and Audit and Provide Feedback. Conduct Educational Meetings and Distribute Educational Materials were, surprisingly, the most common. This may reflect the fact that education strategies are generally considered to be “necessary but not sufficient” for successful implementation [ 43 , 44 ]. Because education is often embedded in interventions, it is critical to define the boundary between the innovation and the implementation strategies used to support the innovation. Further specification as to when these strategies are EBP core components or implementation strategies (e.g., booster trainings or remediation) is needed [ 45 , 46 ].
We identified 19 implementation strategies that were tested in at least 8 studies (more than the median) and were associated with positive results at least 75% of the time. These strategies can be further categorized by whether they are used early (pre-implementation) versus later in implementation. Pre-implementation strategies with strong evidence included educational activities (Meetings, Materials, Outreach Visits, Train for Leadership, Use Train-the-Trainer Strategies) and site diagnostic activities (Assess for Readiness and Identify Barriers and Facilitators, Conduct Local Needs Assessment, Identify and Prepare Champions, and Assess and Redesign Workflows). Strategies that target the implementation phase include those that provide coaching and support (External and Internal Facilitation), involve additional key partners (Intervene with Patients to Enhance Uptake and Adherence), and engage in quality improvement activities (Audit and Provide Feedback, Facilitate the Relay of Clinical Data to Providers, Purposefully Reexamine the Implementation, Conduct Cyclical Small Tests of Change, Develop and Implement Tools for Quality Monitoring).
There were many ERIC strategies that were not represented in the reviewed studies, specifically the financial and policy strategies. Ten strategies were not used in any studies, including: Alter Patient/Consumer Fees, Change Liability Laws, Change Service Sites, Develop Disincentives, Develop Resource Sharing Agreements, Identify Early Adopters, Make Billing Easier, Start a Dissemination Organization, Use Capitated Payments, and Use Data Experts. One of the limitations of this investigation was that not all individual strategies or combinations were investigated. Reasons for the absence of these strategies in our review may include challenges with testing certain strategies experimentally (e.g., changing liability laws), limitations in our search terms, and the relative paucity of implementation strategy trials compared to clinical trials. Many “untested” strategies require large-scale structural changes with leadership support (see [ 47 ] for a policy experiment example). Recent preliminary work has assessed the feasibility of applying policy strategies and described the challenges with doing so [ 48 , 49 , 50 ]. While not impossible in large systems like the VA (for example, the randomized evaluation of the VA Stratification Tool for Opioid Risk Management), the large size, structure, and organizational imperative make these initiatives challenging to experimentally evaluate. Likewise, the absence of these ten strategies may have been the result of our inclusion criteria, which required an experimental design. Thus, creative study designs may be needed to test high-level policy or financial strategies experimentally.
Some strategies that were likely under-represented in our search strategy included electronic medical record reminders and clinical decision support tools and systems. These are often considered “interventions” when used by clinical trialists and may not be indexed as studies involving ‘implementation strategies’ (these tools have been reviewed elsewhere [ 51 , 52 , 53 ]). Thus, strategies that are also considered interventions in the literature (e.g., education interventions) were not sought or captured. Our findings do not imply that these strategies are ineffective, rather that more study is needed. Consistent with prior investigations [ 54 ], few studies meeting inclusion criteria tested financial strategies. Accordingly, there are increasing calls to track and monitor the effects of financial strategies within implementation science to understand their effectiveness in practice [ 55 , 56 ]. However, experts have noted that the study of financial strategies can be a challenge given that they are typically implemented at the system-level and necessitate research designs for studying policy-effects (e.g., quasi-experimental methods, systems-science modeling methods) [ 57 ]. Yet, there have been some recent efforts to use financial strategies to support EBPs that appear promising [ 58 ] and could be a model for the field moving forward.
The relationship between the number of strategies used and improved outcomes has been described inconsistently in the literature. While some studies have found improved outcomes with a bundle of strategies that were uniquely combined or a standardized package of strategies (e.g., Replicating Effective Programs [ 59 , 60 ] and Getting To Outcomes [ 61 , 62 ]), others have found that “more is not always better” [ 63 , 64 , 65 ]. For example, Rogal and colleagues documented that VA hospitals implementing a new evidence-based hepatitis C treatment chose >20 strategies, when multiple years of data linking strategies to outcomes showed that 1-3 specific strategies would have yielded the same outcome [ 39 ]. Considering that most studies employed multiple or multifaceted strategies, it seems that there is a benefit to using a targeted bundle of strategies that purposefully aligns with site/clinic/population norms, rather than simply adding more strategies [ 66 ].
It is difficult to assess the effectiveness of any one implementation strategy in bundles where multiple strategies are used simultaneously. Even a ‘single’ strategy like External Facilitation is, in actuality, a bundle of narrowly constructed strategies (e.g., Conduct Educational Meetings, Identify and Prepare Champions, and Develop a Formal Implementation Blueprint). Thus, studying External Facilitation does not allow for a test of the individual strategies that comprise it, potentially masking the effectiveness of any individual strategy. While we cannot easily disaggregate the effects of multifaceted strategies, doing so may not yield meaningful results. Because strategies often synergize, disaggregated results could either underestimate the true impact of individual strategies or conversely, actually undermine their effectiveness (i.e., when their effectiveness comes from their combination with other strategies). The complexity of health and human service settings, imperative to improve public health outcomes, and engagement with community partners often requires the use of multiple strategies simultaneously. Therefore, the need to improve real-world implementation may outweigh the theoretical need to identify individual strategy effectiveness. In situations where it would be useful to isolate the impact of single strategies, we suggest that the same methods for documenting and analyzing the critical components (or core functions) of complex interventions [ 67 , 68 , 69 , 70 ] may help to identify core components of multifaceted implementation strategies [ 71 , 72 , 73 , 74 ].
In addition, to truly assess the impacts of strategies on outcomes, it may be necessary to track fidelity to implementation strategies (not just the EBPs they support). While this can be challenging, without some degree of tracking and fidelity checks, one cannot determine whether a strategy’s apparent failure to work was because it 1) was ineffective or 2) was not applied well. To facilitate this tracking there are pragmatic tools to support researchers. For example, the Longitudinal Implementation Strategy Tracking System (LISTS) offers a pragmatic and feasible means to assess fidelity to and adaptations of strategies [ 75 ].
Based on our findings, we offer four recommended “best practices” for implementation studies.
Prespecify strategies using standard nomenclature. This study reaffirmed the need to apply not only a standard naming convention (e.g., ERIC) but also standard reporting for implementation strategies. While reporting systems like those by Proctor [ 1 ] or Pinnock [ 75 ] would optimize learning across studies, few manuscripts specify strategies as recommended [ 76 , 77 ]. Pre-specification allows planners and evaluators to assess the feasibility and acceptability of strategies with partners and community members [ 24 , 78 , 79 ] and allows evaluators and implementers to monitor and measure the fidelity, dose, and adaptations to strategies delivered over the course of implementation [ 27 ]. In turn, these data can be used to assess the costs of strategies, analyze their effectiveness [ 38 , 80 , 81 ], and ensure more accurate reporting [ 82 , 83 , 84 , 85 ]. This specification should include, among other data, the intensity, stage of implementation, and justification for the selection. Information regarding why strategies were selected for specific settings would further the field and be of great use to practitioners [ 63 , 65 , 69 , 79 , 86 ].
Ensure that standards for measuring and reporting implementation outcomes are consistently applied and account for the complexity of implementation studies. Part of improving standardized reporting must include clearly defining outcomes and linking each outcome to particular implementation strategies. It was challenging in the present review to disentangle the impact of the intervention(s) (i.e., the EBP) versus the impact of the implementation strategy(ies) for each RE-AIM dimension. For example, often fidelity to the EBP was reported but not for the implementation strategies. Similarly, Reach and Adoption of the intervention would be reported for the Experimental Arm but not for the Control Arm, prohibiting statistical comparisons of strategies on the relative impact of the EBP between study arms. Moreover, there were many studies evaluating numerous outcomes, risking data dredging. Further, the significant heterogeneity in the ways in which implementation outcomes are operationalized and reported is a substantial barrier to conducting large-scale meta-analytic approaches to synthesizing evidence for implementation strategies [ 67 ]. The field could look to others in the social and health sciences for examples in how to test, validate, and promote a common set of outcome measures to aid in bringing consistency across studies and real-world practice (e.g., the NIH-funded Patient-Reported Outcomes Measurement Information System [PROMIS], https://www.healthmeasures.net/explore-measurement-systems/promis ).
Develop infrastructure to learn cross-study lessons in implementation science. Data repositories, like those developed by NCI for rare diseases, U.S. HIV Implementation Science Coordination Initiative [ 87 ], and the Behavior Change Technique Ontology [ 88 ], could allow implementation scientists to report their findings in a more standardized manner, which would promote ease of communication and contextualization of findings across studies. For example, the HIV Implementation Science Coordination Initiative requested all implementation projects use common frameworks, developed user friendly databases to enable practitioners to match strategies to determinants, and developed a dashboard of studies that assessed implementation determinants [ 89 , 90 , 91 , 92 , 93 , 94 ].
Develop and apply methods to rigorously study common strategies and bundles. These findings support prior recommendations for improved empirical rigor in implementation studies [ 46 , 95 ]. Many studies were excluded from our review for not meeting methodological rigor standards. Understanding the effectiveness of discrete strategies deployed alone or in combination requires reliable and low-burden tracking methods to collect information about strategy use and outcomes. For example, frameworks like the Implementation Replication Framework [ 96 ] could help interpret findings across studies using the same strategy bundle. Other tracking approaches may leverage technology (e.g., cell phones, tablets, EMR templates) [ 78 , 97 ] or find novel, pragmatic approaches to collect recommended strategy specifications over time (e.g., dose, deliverer, and mechanism) [ 1 , 9 , 27 , 98 , 99 ]. Rigorous reporting standards could inform more robust analyses and conclusions (e.g., moving toward the goal of understanding causality, microcosting efforts) [ 24 , 38 , 100 , 101 ]. Such detailed tracking is also required to understand how site-level factors moderate implementation strategy effects [ 102 ]. In some cases, adaptive trial designs like sequential multiple assignment randomized trials (SMARTs) and just-in-time adaptive interventions (JITAIs) can be helpful for planning strategy escalation.
Despite the strengths of this review, there were notable limitations. First, we only included experimental studies, omitting many informative observational investigations that cover the range of implementation strategies. Second, our study period began with the creation of the journal Implementation Science rather than with the publication of the ERIC taxonomy (which came later), which standardized and operationalized implementation strategies. This, in conjunction with latency in funding cycles and the reporting of study results, means that the employed taxonomy was not applied in earlier studies. To address this limitation, we retroactively mapped strategies to ERIC, but it is possible that some studies were missed. Additionally, indexing approaches used by academic databases may have missed relevant studies. We addressed this concern by reviewing other systematic reviews of implementation strategies and soliciting recommendations from global implementation science experts.
Another potential limitation comes from the ERIC taxonomy itself: strategy compilations like ERIC are only useful when they are widely adopted and used in conjunction with guidelines for specifying and reporting strategies [ 1 ] in protocol and outcome papers. Although the ERIC paper has been widely cited (over three thousand citations and roughly 186 thousand accesses), it is still not universally applied, making it more difficult to track the impact of specific strategies. However, our experience with this review suggested that ERIC’s use was increasing over time. Some have also commented that ERIC strategies can be unclear and that the taxonomy is missing key domains. In response, researchers are making definitions clearer for lay users [ 37 , 103 ], increasing the number of discrete strategies for specific domains like HIV treatment, adding strategies for new functions (e.g., de-implementation [ 104 ], local capacity building), accounting for phases of implementation (dissemination, sustainment [ 13 ], scale-up), addressing settings [ 12 , 20 ] and actors’ roles in the process, and making mechanism-based strategy selection more user-friendly through searchable databases [ 9 , 10 , 54 , 73 , 104 , 105 , 106 ]. In sum, we found that the utility of the ERIC taxonomy outweighs its current limitations.
As with all reviews, the search terms influenced our findings. Broad terms for implementation strategies (e.g., “evidence-based interventions” [ 7 ] or “behavior change techniques” [ 107 ]) may have led to inadvertent omission of studies of specific strategies. For example, the search terms may not have captured tests of policies, financial strategies, community health promotion initiatives, or electronic medical record reminders, owing to differences in terminology across the corresponding subfields of research (e.g., health economics, business, health information technology, and health policy). To manage this, we asked experts to identify any studies they would include and cross-checked their lists against those identified through our search terms, which yielded very few additional studies. Standardized coding with the ERIC taxonomy was a strength, but future work should consider including the additional strategies that have been recommended to augment ERIC, around sustainment [ 13 , 79 , 106 , 108 ], community and public health research [ 12 , 109 , 110 , 111 ], consumer or service user engagement [ 112 ], de-implementation [ 104 , 113 , 114 , 115 , 116 , 117 ], and related terms [ 118 ].
We were unable to assess the risk of bias of individual studies due to non-standard reporting across papers and the heterogeneity of study designs, measurement of implementation strategies and outcomes, and analytic approaches. This could have led us to over- or underestimate the effects summarized in our synthesis. We addressed this limitation by reporting findings cautiously, particularly in identifying “effective” implementation strategies. Further, we were not able to gather primary data to evaluate effect sizes across studies and thereby systematically evaluate bias, which would be a fruitful direction for future study.
This novel review of 129 studies summarized the body of evidence supporting the use of ERIC-defined implementation strategies to improve health or healthcare. We identified commonly occurring implementation strategies, frequently used bundles, and the strategies with the highest degree of supportive evidence, while simultaneously identifying gaps in the literature. Additionally, we identified several key areas for future growth and operationalization across the field of implementation science with the goal of improved reporting and assessment of implementation strategies and related outcomes.
All data for this study are included in this published article and its supplementary information files.
We modestly revised the following research questions from our PROSPERO registration after reading the articles and better understanding the nature of the literature: 1) What is the available evidence regarding the effectiveness of implementation strategies in supporting the uptake and sustainment of evidence intended to improve health and healthcare outcomes? 2) What are the current gaps in the literature (i.e., implementation strategies that do not have sufficient evidence of effectiveness) that require further exploration?
Tested strategies are those that exist in the Experimental Arm but not in the Control Arm. Comparative effectiveness or time-staggered trials may not have any unique strategies in the Experimental Arm and would therefore have no Tested Strategies in our analysis.
CDC: Centers for Disease Control
CINAHL: Cumulated Index to Nursing and Allied Health Literature
D&I: Dissemination and Implementation
EBPs: Evidence-based practices or programs
ERIC: Expert Recommendations for Implementing Change
MOST: Multiphase Optimization Strategy
NCI: National Cancer Institute
NIH: National Institutes of Health
Pitt DISC: The Pittsburgh Dissemination and Implementation Science Collaborative
SMART: Sequential Multiple Assignment Randomized Trial
US: United States
VA: Department of Veterans Affairs
Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.
Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.
Waltz TJ, Powell BJ, Chinman MJ, Smith JL, Matthieu MM, Proctor EK, et al. Expert recommendations for implementing change (ERIC): protocol for a mixed methods study. Implement Sci IS. 2014;9:39.
Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A Compilation of Strategies for Implementing Clinical Innovations in Health and Mental Health. Med Care Res Rev. 2012;69:123–57.
Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, Chinman MJ, Smith JL, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:109.
Perry CK, Damschroder LJ, Hemler JR, Woodson TT, Ono SS, Cohen DJ. Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory. Implement Sci. 2019;14(1):32.
Community Preventive Services Task Force. Community Preventive Services Task Force: All Active Findings June 2023 [Internet]. 2023 [cited 2023 Aug 7]. Available from: https://www.thecommunityguide.org/media/pdf/CPSTF-All-Findings-508.pdf
Solberg LI, Kuzel A, Parchman ML, Shelley DR, Dickinson WP, Walunas TL, et al. A Taxonomy for External Support for Practice Transformation. J Am Board Fam Med JABFM. 2021;34:32–9.
Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12:1–9.
Leeman J, Calancie L, Hartman MA, Escoffery CT, Herrmann AK, Tague LE, et al. What strategies are used to build practitioners’ capacity to implement community-based interventions and are they effective?: a systematic review. Implement Sci. 2015;10:1–15.
Nathan N, Shelton RC, Laur CV, Hailemariam M, Hall A. Editorial: Sustaining the implementation of evidence-based interventions in clinical and community settings. Front Health Serv. 2023;3:1176023.
Balis LE, Houghtaling B, Harden SM. Using implementation strategies in community settings: an introduction to the Expert Recommendations for Implementing Change (ERIC) compilation and future directions. Transl Behav Med. 2022;12:965–78.
Nathan N, Powell BJ, Shelton RC, Laur CV, Wolfenden L, Hailemariam M, et al. Do the Expert Recommendations for Implementing Change (ERIC) strategies adequately address sustainment? Front Health Serv. 2022;2:905909.
Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
Moore L, Guertin JR, Tardif P-A, Ivers NM, Hoch J, Conombo B, et al. Economic evaluations of audit and feedback interventions: a systematic review. BMJ Qual Saf. 2022;31:754–67.
Sykes MJ, McAnuff J, Kolehmainen N. When is audit and feedback effective in dementia care? A systematic review. Int J Nurs Stud. 2018;79:27–35.
Barnes C, McCrabb S, Stacey F, Nathan N, Yoong SL, Grady A, et al. Improving implementation of school-based healthy eating and physical activity policies, practices, and programs: a systematic review. Transl Behav Med. 2021;11:1365–410.
Tomasone JR, Kauffeldt KD, Chaudhary R, Brouwers MC. Effectiveness of guideline dissemination and implementation strategies on health care professionals’ behaviour and patient outcomes in the cancer care context: a systematic review. Implement Sci. 2020;15:1–18.
Seda V, Moles RJ, Carter SR, Schneider CR. Assessing the comparative effectiveness of implementation strategies for professional services to community pharmacy: A systematic review. Res Soc Adm Pharm. 2022;18:3469–83.
Lovero KL, Kemp CG, Wagenaar BH, Giusto A, Greene MC, Powell BJ, et al. Application of the Expert Recommendations for Implementing Change (ERIC) compilation of strategies to health intervention implementation in low- and middle-income countries: a systematic review. Implement Sci. 2023;18:56.
Chapman A, Rankin NM, Jongebloed H, Yoong SL, White V, Livingston PM, et al. Overcoming challenges in conducting systematic reviews in implementation science: a methods commentary. Syst Rev. 2023;12:1–6.
Proctor EK, Bunger AC, Lengnick-Hall R, Gerke DR, Martin JK, Phillips RJ, et al. Ten years of implementation outcomes research: a scoping review. Implement Sci. 2023;18:1–19.
Michaud TL, Pereira E, Porter G, Golden C, Hill J, Kim J, et al. Scoping review of costs of implementation strategies in community, public health and healthcare settings. BMJ Open. 2022;12:e060785.
Sohn H, Tucker A, Ferguson O, Gomes I, Dowdy D. Costing the implementation of public health interventions in resource-limited settings: a conceptual framework. Implement Sci. 2020;15:1–8.
Peek C, Glasgow RE, Stange KC, Klesges LM, Purcell EP, Kessler RS. The 5 R’s: an emerging bold standard for conducting relevant research in a changing world. Ann Fam Med. 2014;12:447–55.
Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.
Shelton RC, Chambers DA, Glasgow RE. An Extension of RE-AIM to Enhance Sustainability: Addressing Dynamic Context and Promoting Health Equity Over Time. Front Public Health. 2020;8:134.
Holtrop JS, Estabrooks PA, Gaglio B, Harden SM, Kessler RS, King DK, et al. Understanding and applying the RE-AIM framework: Clarifications and resources. J Clin Transl Sci. 2021;5:e126.
Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.
Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647.
Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. Available from: https://www.bmj.com/content/372/bmj.n71
Rabin BA, Brownson RC, Haire-Joshu D, Kreuter MW, Weaver NL. A Glossary for Dissemination and Implementation Research in Health. J Public Health Manag Pract. 2008;14:117–23.
Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1.
Miller WR, Wilbourne PL. Mesa Grande: a methodological analysis of clinical trials of treatments for alcohol use disorders. Addict Abingdon Engl. 2002;97:265–77.
Miller WR, Brown JM, Simpson TL, Handmaker NS, Bien TH, Luckie LF, et al. What works? A methodological analysis of the alcohol treatment outcome literature. In: Handbook of alcoholism treatment approaches: effective alternatives. 2nd ed. Needham Heights, MA: Allyn & Bacon; 1995. p. 12–44.
Wells S, Tamir O, Gray J, Naidoo D, Bekhit M, Goldmann D. Are quality improvement collaboratives effective? A systematic review. BMJ Qual Saf. 2018;27:226–40.
Yakovchenko V, Chinman MJ, Lamorte C, Powell BJ, Waltz TJ, Merante M, et al. Refining Expert Recommendations for Implementing Change (ERIC) strategy surveys using cognitive interviews with frontline providers. Implement Sci Commun. 2023;4:1–14.
Wagner TH, Yoon J, Jacobs JC, So A, Kilbourne AM, Yu W, et al. Estimating costs of an implementation intervention. Med Decis Making. 2020;40:959–67.
Gold HT, McDermott C, Hoomans T, Wagner TH. Cost data in implementation science: categories and approaches to costing. Implement Sci. 2022;17:11.
Boutron I, Page MJ, Higgins JP, Altman DG, Lundh A, Hróbjartsson A. Considering bias and conflicts of interest among the included studies. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions. 2019. https://doi.org/10.1002/9781119536604.ch7 .
Higgins JP, Savović J, Page MJ, Elbers RG, Sterne J. Assessing risk of bias in a randomized trial. Cochrane Handb Syst Rev Interv. 2019;6:205–28.
Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, Contrasting, and Integrating Dissemination and Implementation Outcomes Included in the RE-AIM and Implementation Outcomes Frameworks. Front Public Health. 2020;8. Available from: https://doi.org/10.3389/fpubh.2020.00430
Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess Winch Engl. 2004;8:iii–iv 1-72.
Beidas RS, Kendall PC. Training Therapists in Evidence-Based Practice: A Critical Review of Studies From a Systems-Contextual Perspective. Clin Psychol Publ Div Clin Psychol Am Psychol Assoc. 2010;17:1–30.
Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to Improve the Selection and Tailoring of Implementation Strategies. J Behav Health Serv Res. 2017;44:177–94.
Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda. Front Public Health. 2019;7. Available from: https://doi.org/10.3389/fpubh.2019.00003
Frakt AB, Prentice JC, Pizer SD, Elwy AR, Garrido MM, Kilbourne AM, et al. Overcoming Challenges to Evidence-Based Policy Development in a Large, Integrated Delivery System. Health Serv Res. 2018;53:4789–807.
Crable EL, Lengnick-Hall R, Stadnick NA, Moullin JC, Aarons GA. Where is “policy” in dissemination and implementation science? Recommendations to advance theories, models, and frameworks: EPIS as a case example. Implement Sci. 2022;17:80.
Crable EL, Grogan CM, Purtle J, Roesch SC, Aarons GA. Tailoring dissemination strategies to increase evidence-informed policymaking for opioid use disorder treatment: study protocol. Implement Sci Commun. 2023;4:16.
Bond GR. Evidence-based policy strategies: A typology. Clin Psychol Sci Pract. 2018;25:e12267.
Loo TS, Davis RB, Lipsitz LA, Irish J, Bates CK, Agarwal K, et al. Electronic Medical Record Reminders and Panel Management to Improve Primary Care of Elderly Patients. Arch Intern Med. 2011;171:1552–8.
Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ Can Med Assoc J. 2010;182:E216-25.
Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A Randomized Trial of Electronic Clinical Reminders to Improve Quality of Care for Diabetes and Coronary Artery Disease. J Am Med Inform Assoc JAMIA. 2005;12:431–7.
Dopp AR, Kerns SEU, Panattoni L, Ringel JS, Eisenberg D, Powell BJ, et al. Translating economic evaluations into financing strategies for implementing evidence-based practices. Implement Sci. 2021;16:1–12.
Dopp AR, Hunter SB, Godley MD, Pham C, Han B, Smart R, et al. Comparing two federal financing strategies on penetration and sustainment of the adolescent community reinforcement approach for substance use disorders: protocol for a mixed-method study. Implement Sci Commun. 2022;3:51.
Proctor EK, Toker E, Tabak R, McKay VR, Hooley C, Evanoff B. Market viability: a neglected concept in implementation science. Implement Sci. 2021;16:98.
Dopp AR, Narcisse M-R, Mundey P, Silovsky JF, Smith AB, Mandell D, et al. A scoping review of strategies for financing the implementation of evidence-based practices in behavioral health systems: State of the literature and future directions. Implement Res Pract. 2020;1:2633489520939980.
Dopp AR, Kerns SEU, Panattoni L, Ringel JS, Eisenberg D, Powell BJ, et al. Translating economic evaluations into financing strategies for implementing evidence-based practices. Implement Sci IS. 2021;16:66.
Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, Stall R. Implementing evidence-based interventions in health care: application of the replicating effective programs framework. Implement Sci. 2007;2:42–51.
Kegeles SM, Rebchook GM, Hays RB, Terry MA, O’Donnell L, Leonard NR, et al. From science to application: the development of an intervention package. AIDS Educ Prev Off Publ Int Soc AIDS Educ. 2000;12:62–74.
Wandersman A, Imm P, Chinman M, Kaftarian S. Getting to outcomes: a results-based approach to accountability. Eval Program Plann. 2000;23:389–95.
Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: Tools, training, technical assistance, and quality assurance/quality improvement. Am J Community Psychol. 2012;50:445–59.
Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Kirchner JE, Proctor EK, et al. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample. Implement Sci. 2017;12:1–13.
Smith SN, Almirall D, Prenovost K, Liebrecht C, Kyle J, Eisenberg D, et al. Change in patient outcomes after augmenting a low-level implementation strategy in community practices that are slow to adopt a collaborative chronic care model: a cluster randomized implementation trial. Med Care. 2019;57:503.
Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Gonzalez R, Park A, et al. Longitudinal assessment of the association between implementation strategy use and the uptake of hepatitis C treatment: Year 2. Implement Sci. 2019;14:1–12.
Harvey G, Kitson A. Translating evidence into healthcare policy and practice: Single versus multi-faceted implementation strategies – is there a simple answer to a complex question? Int J Health Policy Manag. 2015;4:123–6.
Engell T, Stadnick NA, Aarons GA, Barnett ML. Common Elements Approaches to Implementation Research and Practice: Methods and Integration with Intervention Science. Glob Implement Res Appl. 2023;3:1–15.
Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci IS. 2009;4:40.
Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci IS. 2020;15:84.
Perez Jolles M, Lengnick-Hall R, Mittman BS. Core Functions and Forms of Complex Health Interventions: a Patient-Centered Medical Home Illustration. JGIM J Gen Intern Med. 2019;34:1032–8.
Schroeck FR, Ould Ismail AA, Haggstrom DA, Sanchez SL, Walker DR, Zubkoff L. Data-driven approach to implementation mapping for the selection of implementation strategies: a case example for risk-aligned bladder cancer surveillance. Implement Sci IS. 2022;17:58.
Frank HE, Kemp J, Benito KG, Freeman JB. Precision Implementation: An Approach to Mechanism Testing in Implementation Research. Adm Policy Ment Health. 2022;49:1084–94.
Lewis CC, Klasnja P, Lyon AR, Powell BJ, Lengnick-Hall R, Buchanan G, et al. The mechanics of implementation strategies and measures: advancing the study of implementation mechanisms. Implement Sci Commun. 2022;3:114.
Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19:e1003918.
Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795.
Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38:65–76.
Hooley C, Amano T, Markovitz L, Yaeger L, Proctor E. Assessing implementation strategy reporting in the mental health literature: a narrative review. Adm Policy Ment Health Ment Health Serv Res. 2020;47:19–35.
Proctor E, Ramsey AT, Saldana L, Maddox TM, Chambers DA, Brownson RC. FAST: a framework to assess speed of translation of health innovations to practice and policy. Glob Implement Res Appl. 2022;2:107–19.
Cullen L, Hanrahan K, Edmonds SW, Reisinger HS, Wagner M. Iowa Implementation for Sustainability Framework. Implement Sci IS. 2022;17:1.
Saldana L, Ritzwoller DP, Campbell M, Block EP. Using economic evaluations in implementation science to increase transparency in costs and outcomes for organizational decision-makers. Implement Sci Commun. 2022;3:40.
Eisman AB, Kilbourne AM, Dopp AR, Saldana L, Eisenberg D. Economic evaluation in implementation science: making the business case for implementation strategies. Psychiatry Res. 2020;283:112433.
Akiba CF, Powell BJ, Pence BW, Nguyen MX, Golin C, Go V. The case for prioritizing implementation strategy fidelity measurement: benefits and challenges. Transl Behav Med. 2022;12:335–42.
Akiba CF, Powell BJ, Pence BW, Muessig K, Golin CE, Go V. “We start where we are”: a qualitative study of barriers and pragmatic solutions to the assessment and reporting of implementation strategy fidelity. Implement Sci Commun. 2022;3:117.
Rudd BN, Davis M, Doupnik S, Ordorica C, Marcus SC, Beidas RS. Implementation strategies used and reported in brief suicide prevention intervention studies. JAMA Psychiatry. 2022;79:829–31.
Painter JT, Raciborski RA, Matthieu MM, Oliver CM, Adkins DA, Garner KK. Engaging stakeholders to retrospectively discern implementation strategies to support program evaluation: Proposed method and case study. Eval Program Plann. 2024;103:102398.
Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Res Policy Syst. 2017;15:1–12.
Mustanski B, Smith JD, Keiser B, Li DH, Benbow N. Supporting the growth of domestic HIV implementation research in the United States through coordination, consultation, and collaboration: how we got here and where we are headed. JAIDS J Acquir Immune Defic Syndr. 2022;90:S1-8.
Marques MM, Wright AJ, Corker E, Johnston M, West R, Hastings J, et al. The Behaviour Change Technique Ontology: Transforming the Behaviour Change Technique Taxonomy v1. Wellcome Open Res. 2023;8:308.
Merle JL, Li D, Keiser B, Zamantakis A, Queiroz A, Gallo CG, et al. Categorising implementation determinants and strategies within the US HIV implementation literature: a systematic review protocol. BMJ Open. 2023;13:e070216.
Glenshaw MT, Gaist P, Wilson A, Cregg RC, Holtz TH, Goodenow MM. Role of NIH in the Ending the HIV Epidemic in the US Initiative: Research Improving Practice. J Acquir Immune Defic Syndr. 2022;90:S9–16.
Purcell DW, Namkung Lee A, Dempsey A, Gordon C. Enhanced Federal Collaborations in Implementation Science and Research of HIV Prevention and Treatment. J Acquir Immune Defic Syndr. 2022;90:S17–22.
Queiroz A, Mongrella M, Keiser B, Li DH, Benbow N, Mustanski B. Profile of the Portfolio of NIH-Funded HIV Implementation Research Projects to Inform Ending the HIV Epidemic Strategies. J Acquir Immune Defic Syndr. 2022;90:S23–31.
Zamantakis A, Li DH, Benbow N, Smith JD, Mustanski B. Determinants of Pre-exposure Prophylaxis (PrEP) Implementation in Transgender Populations: A Qualitative Scoping Review. AIDS Behav. 2023;27:1600–18.
Li DH, Benbow N, Keiser B, Mongrella M, Ortiz K, Villamar J, et al. Determinants of Implementation for HIV Pre-exposure Prophylaxis Based on an Updated Consolidated Framework for Implementation Research: A Systematic Review. J Acquir Immune Defic Syndr. 2022;90:S235–46.
Chambers DA, Emmons KM. Navigating the field of implementation science towards maturity: challenges and opportunities. Implement Sci. 2024;19:26.
Chinman M, Acosta J, Ebener P, Shearer A. “What we have here, is a failure to [replicate]”: Ways to solve a replication crisis in implementation science. Prev Sci. 2022;23:739–50.
Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.
Lengnick-Hall R, Gerke DR, Proctor EK, Bunger AC, Phillips RJ, Martin JK, et al. Six practical recommendations for improved implementation outcomes reporting. Implement Sci. 2022;17:16.
Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci IS. 2021;16:36.
Xu X, Lazar CM, Ruger JP. Micro-costing in health and medicine: a critical appraisal. Health Econ Rev. 2021;11:1.
Barnett ML, Dopp AR, Klein C, Ettner SL, Powell BJ, Saldana L. Collaborating with health economists to advance implementation science: a qualitative study. Implement Sci Commun. 2020;1:82.
Lengnick-Hall R, Williams NJ, Ehrhart MG, Willging CE, Bunger AC, Beidas RS, et al. Eight characteristics of rigorous multilevel implementation research: a step-by-step guide. Implement Sci. 2023;18:52.
Riley-Gibson E, Hall A, Shoesmith A, Wolfenden L, Shelton RC, Doherty E, et al. A systematic review to determine the effect of strategies to sustain chronic disease prevention interventions in clinical and community settings: study protocol. Res Sq [Internet]. 2023 [cited 2024 Apr 19]; Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10312971/
Ingvarsson S, Hasson H, von Thiele Schwarz U, Nilsen P, Powell BJ, Lindberg C, et al. Strategies for de-implementation of low-value care—a scoping review. Implement Sci IS. 2022;17:73.
Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11:e053474.
Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14.
Michie S, Atkins L, West R. The Behaviour Change Wheel: A Guide to Designing Interventions. 1st ed. Great Britain: Silverback Publishing; 2014.
Birken SA, Haines ER, Hwang S, Chambers DA, Bunger AC, Nilsen P. Advancing understanding and identifying strategies for sustaining evidence-based practices: a review of reviews. Implement Sci IS. 2020;15:88.
Metz A, Jensen T, Farley A, Boaz A, Bartley L, Villodas M. Building trusting relationships to support implementation: A proposed theoretical model. Front Health Serv. 2022;2:894599.
Rabin BA, Cain KL, Watson P, Oswald W, Laurent LC, Meadows AR, et al. Scaling and sustaining COVID-19 vaccination through meaningful community engagement and care coordination for underserved communities: hybrid type 3 effectiveness-implementation sequential multiple assignment randomized trial. Implement Sci IS. 2023;18:28.
Gyamfi J, Iwelunmor J, Patel S, Irazola V, Aifah A, Rakhra A, et al. Implementation outcomes and strategies for delivering evidence-based hypertension interventions in lower-middle-income countries: Evidence from a multi-country consortium for hypertension control. PLOS ONE. 2023;18:e0286204.
Woodward EN, Ball IA, Willging C, Singh RS, Scanlon C, Cluck D, et al. Increasing consumer engagement: tools to engage service users in quality improvement or implementation efforts. Front Health Serv. 2023;3:1124290.
Norton WE, Chambers DA. Unpacking the complexities of de-implementing inappropriate health interventions. Implement Sci IS. 2020;15:2.
Norton WE, McCaskill-Stevens W, Chambers DA, Stella PJ, Brawley OW, Kramer BS. DeImplementing Ineffective and Low-Value Clinical Practices: Research and Practice Opportunities in Community Oncology Settings. JNCI Cancer Spectr. 2021;5:pkab020.
McKay VR, Proctor EK, Morshed AB, Brownson RC, Prusaczyk B. Letting Go: Conceptualizing Intervention De-implementation in Public Health and Social Service Settings. Am J Community Psychol. 2018;62:189–202.
Patey AM, Grimshaw JM, Francis JJ. Changing behaviour, ‘more or less’: do implementation and de-implementation interventions include different behaviour change techniques? Implement Sci IS. 2021;16:20.
Rodriguez Weno E, Allen P, Mazzucca S, Farah Saliba L, Padek M, Moreland-Russell S, et al. Approaches for Ending Ineffective Programs: Strategies From State Public Health Practitioners. Front Public Health. 2021;9:727005.
Gnjidic D, Elshaug AG. De-adoption and its 43 related terms: harmonizing low-value care terminology. BMC Med. 2015;13:273.
The authors would like to acknowledge the early contributions of the Pittsburgh Dissemination and Implementation Science Collaborative (Pitt DISC). LEA would like to thank Dr. Billie Davis for analytical support. The authors would like to acknowledge the implementation science experts who recommended articles for our review, including Greg Aarons, Mark Bauer, Rinad Beidas, Geoffrey Curran, Laura Damschroder, Rani Elwy, Amy Kilbourne, JoAnn Kirchner, Jennifer Leeman, Cara Lewis, Dennis Li, Aaron Lyon, Gila Neta, and Borsika Rabin.
Dr. Rogal’s time was funded in part by a University of Pittsburgh K award (K23-DA048182) and by a VA Health Services Research and Development grant (PEC 19-207). Drs. Bachrach and Quinn were supported by VA HSR Career Development Awards (CDA 20-057, PI: Bachrach; CDA 20-224, PI: Quinn). Dr. Scheunemann’s time was funded by the US Agency for Healthcare Research and Quality (K08HS027210). Drs. Hero, Chinman, Goodrich, Ernecoff, and Mr. Qureshi were funded by the Patient-Centered Outcomes Research Institute (PCORI) AOSEPP2 Task Order 12 to conduct a landscape review of US studies on the effectiveness of implementation strategies with results reported here ( https://www.pcori.org/sites/default/files/PCORI-Implementation-Strategies-for-Evidence-Based-Practice-in-Health-and-Health-Care-A-Review-of-the-Evidence-Full-Report.pdf and https://www.pcori.org/sites/default/files/PCORI-Implementation-Strategies-for-Evidence-Based-Practice-in-Health-and-Health-Care-Brief-Report-Summary.pdf ). Dr. Ashcraft and Ms. Phares were funded by the Center for Health Equity Research and Promotion (CIN 13-405). The funders had no involvement in this study.
Shari S. Rogal and Matthew J. Chinman are co-senior authors.
Center for Health Equity Research and Promotion, Corporal Michael Crescenz VA Medical Center, Philadelphia, PA, USA
Laura Ellen Ashcraft
Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
Center for Health Equity Research and Promotion, VA Pittsburgh Healthcare System, Pittsburgh, PA, USA
David E. Goodrich, Angela Phares, Deirdre A. Quinn, Shari S. Rogal & Matthew J. Chinman
Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
David E. Goodrich, Deirdre A. Quinn & Matthew J. Chinman
Clinical & Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA
David E. Goodrich & Lisa G. Lederer
RAND Corporation, Pittsburgh, PA, USA
Joachim Hero, Nabeel Qureshi, Natalie C. Ernecoff & Matthew J. Chinman
Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, Michigan, USA
Rachel L. Bachrach
Department of Psychiatry, University of Michigan Medical School, Ann Arbor, MI, USA
Division of Geriatric Medicine, University of Pittsburgh, Department of Medicine, Pittsburgh, PA, USA
Leslie Page Scheunemann
Division of Pulmonary, Allergy, Critical Care, and Sleep Medicine, University of Pittsburgh, Department of Medicine, Pittsburgh, PA, USA
Departments of Medicine and Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
Shari S. Rogal
LEA, SSR, and MJC conceptualized the study. LEA, SSR, MJC, and JOH developed the study design. LEA and JOH acquired the data. LEA, DEG, AP, RLB, DAQ, LGL, LPS, SSR, NQ, and MJC conducted the abstract, full text review, and rigor assessment. LEA, DEG, JOH, AP, RLB, DAQ, NQ, NCE, SSR, and MJC conducted the data abstraction. DEG, SSR, and MJC adjudicated conflicts. LEA and SSR analyzed the data. LEA, SSR, JOH, and MJC interpreted the data. LEA, SSR, and MJC drafted the work. All authors substantially revised the work. All authors approved the submitted version and agreed to be personally accountable for their contributions and the integrity of the work.
Correspondence to Laura Ellen Ashcraft .
Ethics approval and consent to participate
Not applicable.
The manuscript does not contain any individual person’s data.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary materials 1–8
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Ashcraft, L.E., Goodrich, D.E., Hero, J. et al. A systematic review of experimentally tested implementation strategies across health and human service settings: evidence from 2010-2022. Implementation Sci 19, 43 (2024). https://doi.org/10.1186/s13012-024-01369-5
Received : 09 November 2023
Accepted : 27 May 2024
Published : 24 June 2024
DOI : https://doi.org/10.1186/s13012-024-01369-5
ISSN: 1748-5908
Our search ended in July 2022, and investigators were contacted to confirm their data accuracy in February 2023. The Figure includes 4 planned platform trials and the planned year of initiation.
eAppendix 1. Search Strategy
eAppendix 2. Baseline Characteristics
eFigure 1. Detailed Flow Chart and Reasons for Exclusion
eTable 1. Report Labels and Reasons for Exclusion of Reports in Literature and Registry Screening
eTable 2. Other Baseline Characteristics
eTable 3. Baseline Characteristics by COVID and Non-COVID Platform Trials
eTable 4. Specific Platform Trial Characteristics in COVID and Non-COVID Trials
eTable 5. Specific Platform Trial Characteristics for Platform Trials With Full Available Master Protocol
eTable 6. Platform Trial Progression and Output of COVID and Non-COVID Trials
eTable 7. Status of Platform Trial Arms and Trial Arm Results in COVID and Non-COVID Trials
eTable 8. How Were Results Made Available for Arms?
eTable 9. Survey Response Rates
eAppendix 3. Example of eMail Template and Report Sent to Platform Trial Teams
eTable 10. List of Randomized Platform Trials
Data Sharing Statement
Griessbach A, Schönenberger CM, Taji Heravi A, et al. Characteristics, Progression, and Output of Randomized Platform Trials: A Systematic Review. JAMA Netw Open. 2024;7(3):e243109. doi:10.1001/jamanetworkopen.2024.3109
Question What are the characteristics, progression, and output of randomized platform trials?
Findings In this systematic review of 127 platform trials with a total of 823 arms, primarily in the fields of oncology and COVID-19, the adaptive features of the trials were often poorly reported and used in only 49.6% of all trials; results were available for only 65.2% of completed trial arms.
Meaning The planning and reporting of platform features and the availability of results were insufficient in randomized platform trials.
Importance Platform trials have become increasingly common, and evidence is needed to determine how this trial design is actually applied in current research practice.
Objective To determine the characteristics, progression, and output of randomized platform trials.
Evidence Review In this systematic review of randomized platform trials, Medline, Embase, Scopus, trial registries, gray literature, and preprint servers were searched, and citation tracking was performed in July 2022. Investigators were contacted in February 2023 to confirm data accuracy and to provide updated information on the status of platform trial arms. Randomized platform trials were eligible if they explicitly planned to add or drop arms. Data were extracted in duplicate from protocols, publications, websites, and registry entries. For each platform trial, design features such as the use of a common control arm, use of nonconcurrent control data, statistical framework, adjustment for multiplicity, and use of additional adaptive design features were collected. Progression and output of each platform trial were determined by the recruitment status of individual arms, the number of arms added or dropped, and the availability of results for each intervention arm.
Findings The search identified 127 randomized platform trials with a total of 823 arms; most trials were conducted in the fields of oncology (57 [44.9%]) and COVID-19 (45 [35.4%]). After a more than twofold increase in the initiation of new platform trials at the beginning of the COVID-19 pandemic, the number of platform trials has since declined. Platform trial features were often not reported (not reported: nonconcurrent control, 61 of 127 [48.0%]; multiplicity adjustment for arms, 98 of 127 [77.2%]; statistical framework, 37 of 127 [29.1%]). Adaptive design features were used by only half the studies (63 of 127 [49.6%]). Results were available for 65.2% of closed arms (230 of 353). Premature closure of platform trial arms due to recruitment problems was infrequent (5 of 353 [1.4%]).
Conclusions and Relevance This systematic review found that platform trials were initiated most frequently during the COVID-19 pandemic and declined thereafter. The reporting of platform features and the availability of results were insufficient. Premature arm closure for poor recruitment was rare.
Randomized clinical trials (RCTs) are the criterion standard for evaluating health care interventions. However, RCTs are criticized for being slow, inflexible, inefficient, and costly. 1 - 6 The platform trial design 7 may overcome some of the challenges associated with traditional RCTs. 5 , 8
In the literature, the definition of platform trials is inconsistent. 7 , 9 - 16 Common characteristics of platform trials include the simultaneous assessment of multiple interventions, as well as the ability to drop ineffective interventions or add promising new interventions (arms). 10 , 13 , 17 - 20 Platform trial planning and conduct require consideration of their unique design features, methodological framework, and level of sophistication. This planning includes the potential use of a common control arm, nonconcurrent control data, the statistical framework (bayesian and/or frequentist), in silico trials (simulations), and the use of additional adaptive design features, such as response adaptive randomization (RAR; the change of the randomization ratio based on data collected during the trial), sample size reassessment, seamless design (seamless study phase transition), and adaptive enrichment (modification of eligibility criteria). 9 , 11 , 16 Platform trials are posited to be more time and cost efficient and to increase trial output, benefiting both patients and researchers. 8 , 9 , 17 Further potential benefits include the use of regulatory documentation (master protocol) and contracts beyond 1 trial and its respective duration, 8 quick initiation of new sites and intervention arms, 21 reuse of established infrastructure, 22 and quick study phase transition. 22
Empirical evidence about platform trials is needed to gain insight into the actual application of this design in clinical research practice and to learn about its benefits and pitfalls, so that the planning and conduct of platform trials can be further improved. Previous systematic reviews on platform trials are outdated 13 , 14 ; are restricted to the late-phase, multiarm, multistage design or COVID-19 trials 23 , 24 ; only investigated a small number of distinct platform trial features 23 ; or did not consider the output of platform trials in terms of completed, prematurely closed, and published trial arms. 25 A comprehensive overview is currently lacking. We specifically wondered whether the incidence of platform trials continued to increase despite a fading pandemic, the extent to which distinctive features were actually used, whether recruitment failures were rare, and whether results from platform trials were consistently made available. We, therefore, conducted a systematic review of all available randomized platform trials to empirically determine (1) their incidence over time, (2) the actual frequencies of various distinctive platform trial characteristics (eg, common control arm, use of nonconcurrent control data, and RAR), (3) the incidence of added and dropped arms over time, (4) the prevalence of discontinued trials due to poor participant recruitment, and (5) the availability of results for closed trial arms.
This systematic review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses ( PRISMA ) reporting guideline. 26 A detailed protocol was prospectively registered on Open Science Framework (OSF). 27
The systematic search (including registries) was conducted on January 12, 2021, and was updated on July 28, 2022. Data were extracted until December 2022. Investigators were contacted for verification of the data in February 2023. We performed a systematic search of Medline (OVID), Embase (OVID), Scopus, and several trial registries (Clinicaltrials.gov, European Union Drug Regulating Authorities Clinical Trials Database, and International Standard Randomized Controlled Trial Number registry). To increase the sensitivity of the search, we included gray literature servers (OSF and Zenodo) and preprint servers (Europe PubMed Central) (search date: July 21, 2022). The detailed search strategy is available in eAppendix 1 in Supplement 1 . An information specialist helped us design and review our search strategy. Trials were included if they were RCTs and planned to add or drop arms.
Screening of titles and abstracts, trial registries, and full text were performed in duplicate. Discrepancies were resolved by discussion or by involving a third reviewer (B.S. or M.B.). For each included report, we continued with forward and backward citation tracking (using Scopus). Citation tracking, gray literature, and preprint server screening was conducted by only 1 reviewer (A.G. or C.M.S.). If multiple reports were available for 1 platform trial, these reports were organized and consolidated by registry numbers, acronyms, and the title of the trial. Once a platform trial was included, we determined if an official trial website was available (by screening the literature and registries and searching via Google). For each platform trial and each of their recorded arms, we searched in duplicate (registry, website, Google Scholar, and Google) for the master protocol, subprotocols, and results publications, if not previously found in the literature search.
The variables for this systematic review were chosen based on discussions with methodologists and statisticians of platform trials, previous reviews on the topic, and the critical appraisal checklists by Park et al. 20 , 28 All relevant data were extracted in duplicate (by different researchers). Differences were consolidated by a third reviewer. All authors worked in teams of 2 to extract data from trial protocols (master and subprotocols), results publications, trial registries, and the official trial websites into a REDCap data sheet. 29 , 30 We documented the different labels used in study records (eg, “platform trial,” “trial platform,” “platform study,” “platform design,” or “platform protocol”) to explore the general use of the term platform trial . We extracted baseline characteristics for each included platform trial and each of their individual arms (see list of all baseline characteristics in eAppendix 2 in Supplement 1 ). Furthermore, distinct platform trial features were recorded. These features included the use of a common control arm and, if the common control arm could be updated during the trial, the use of nonconcurrent control data, adaptive design elements (eg, RAR, adaptive enrichment, seamless design, sample size readjustment), a statistical framework (bayesian, frequentist, or both), multiplicity adjustments (to multiple arms and for interim analyses), and feasibility studies (in silico trials or simulations or pilot trials). We determined the progression and output of the platform trial by the starting number of arms, the total number of arms, the number of arms added, the number of arms dropped (including the reason), and the status and availability of the results for each intervention arm (output of platform trial). Further features of interest included the use of biomarker stratification or subpopulations, integration of nonrandomized arms, interim analysis (reporting of frequency, outcome, and trigger), or the use of a factorial design.
The format of the master protocol and the results publications were also recorded (as peer-reviewed publication, preprint, and full protocol on website or registry). Furthermore, we calculated the ratio of available results publications to the number of closed arms. The ratio was calculated twice, once including and once excluding results available as abstracts only. We contacted all principal investigators with a report detailing the most important information extracted from their platform trial. Principal investigators were asked to approve the accuracy of extracted data and to clarify missing or unclear information (eAppendix 3 in Supplement 1 ).
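These two ratios can be checked against the aggregate counts reported later in this review (353 closed arms; 169 with full results publications; 230 when abstract-only results are counted). A quick sanity-check calculation, using only those published totals:

```python
# Reproduce the two results-availability ratios described above, using
# the aggregate counts reported in this review (not the raw dataset).
closed_arms = 353      # closed platform trial arms
full_results = 169     # arms with full results publications
with_abstracts = 230   # arms with any results, counting abstract-only

ratio_excluding_abstracts = full_results / closed_arms
ratio_including_abstracts = with_abstracts / closed_arms

print(f"excluding abstracts: {ratio_excluding_abstracts:.1%}")  # 47.9%
print(f"including abstracts: {ratio_including_abstracts:.1%}")  # 65.2%
```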
We summarized the characteristics of the included platform trials using the median and IQR for continuous variables and numbers and percentages for categorical variables. Baseline characteristics were stratified by sponsorship (industry vs not industry sponsored) and COVID-19 indication. Previous research has identified differences in the discontinuation rate, reporting quality, and transparency between industry-sponsored and non–industry-sponsored traditional RCTs 31 , 32 ; as such, we stratified platform trial characteristics by sponsorship. Because it was expected that platform trial features are often recorded in the master protocol, we conducted a sensitivity analysis including only trials with an available master protocol. Data cleaning and analysis were conducted with R, version 1.4.1103 (R Project for Statistical Computing).
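The descriptive approach (median and IQR for continuous variables, n and % for categorical variables, stratified by sponsorship) can be sketched as follows. The five "trials" here are invented for illustration, and the review itself used R rather than Python/pandas:

```python
# Hypothetical illustration of the summary statistics used in this review:
# median (IQR) for continuous variables and n (%) for categorical ones,
# stratified by sponsorship. The data below are made up.
import pandas as pd

trials = pd.DataFrame({
    "sponsor": ["industry", "industry", "other", "other", "other"],
    "total_arms": [6, 8, 5, 4, 7],              # continuous variable
    "covid": [True, False, True, True, False],  # categorical variable
})

# Continuous: 25th percentile, median, and 75th percentile per stratum
arms = trials.groupby("sponsor")["total_arms"].quantile([0.25, 0.5, 0.75]).unstack()

# Categorical: count and percentage per stratum
covid = trials.groupby("sponsor")["covid"].agg(n="sum", total="count")
covid["pct"] = 100 * covid["n"] / covid["total"]

print(arms)
print(covid)
```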
A total of 9155 records were identified. We determined 431 eligible records, resulting in 127 unique randomized platform trials included in our sample (the list of all included platform trials can be found in eTable 10 in Supplement 1 ). Labels such as “platform trial” and “platform study” were often used in a non–clinical trial context (see detailed list of all excluded reports using such terms in eTable 1 in Supplement 1 ). Platform trials were excluded if not randomized or if they did not allow for the adding and dropping of new arms (eFigure 1 in Supplement 1 ).
Most platform trials were conducted in the fields of oncology (57 of 127 [44.9%]) and COVID-19 (45 of 127 [35.4%]), were multicenter and international (74 of 127 [58.3%]), tested drugs (108 of 127 [85.0%]), and were not industry sponsored (90 of 127 [70.9%]) ( Table 1 ). All platform trials were registered. A master protocol was publicly available for 59.8% of all platform trials (76 of 127), and 16.5% (21 of 127) had also made older versions of protocols (amendments) available. A website existed for 51.2% of platform trials (65 of 127), with a higher prevalence observed in non–industry-sponsored trials than in industry-sponsored trials (55 of 90 [61.1%] vs 10 of 37 [27.0%]). Additional platform trial characteristics (eg, use of blinding, interim analyses, factorial design, nonrandomized arms, biomarker stratification, and number of subpopulations) and a stratification by COVID-19 and non–COVID-19 trials are presented in eTable 2, eTable 3, eTable 4, eTable 6, and eTable 7 in Supplement 1 . A total of 38 platform trials (29.9%) were initiated in 2020, the highest reported incidence of newly started platform trials in 1 year thus far. This number has since decreased (25 of 127 [19.7%] in 2021) ( Figure ).
A common control arm was reported to be used in 73.2% of all platform trials (93 of 127); 7.9% of trials (10 of 127) planned to use nonconcurrent control data for their statistical analysis (not reported for 61 of 127 trials [48.0%]) ( Table 2 ). Adaptive design elements were integrated in approximately half the platform trials (63 of 127 [49.6%]), and 17.3% of trials (22 of 127) implemented more than 1 adaptive design element. A correction for multiple testing for multiple arms was typically not reported (98 of 127 [77.2%]) or not considered (21 of 127 [16.5%]). The statistical framework was not reported by 37 studies (29.1%). Seamless designs, combining early- and late-phase trials, were used in 18.1% of trials (23 of 127). Characteristics stratified by COVID-19 vs non–COVID-19 trials can be found in eTable 4 in Supplement 1 .
Most randomized platform trials were ongoing (86 of 127 [67.7%]) or completed (26 of 127 [20.5%]), 4 of 127 (3.1%) were in planning, and 10 of 127 (7.9%) were discontinued ( Table 3 ). Reasons for discontinuation included change in treatment landscape (3 of 10), low event rates (3 of 10), insufficient funding (2 of 10), and safety concerns (1 of 10), and, for 1 platform trial, the reason for discontinuation remained unclear. The number of arms at the start of the platform trial and the total number of arms was typically higher in industry-sponsored trials (median number of arms at start, 4 [IQR, 2-5]; median total number of arms, 6 [IQR, 4-8]) than in non–industry-sponsored trials (median number of arms at start, 3 [IQR, 2-4]; median total number of arms, 5 [IQR, 4-7]) ( Table 3 ). Overall, 58.3% of platform trials (74 of 127) added at least 1 arm, and 62.2% (79 of 127) dropped at least 1 arm during their progression; although planned, 21.3% of platform trials (27 of 127) neither added nor dropped an arm. Of the 85 platform trials that added or dropped an arm during the trial, the corresponding registry entry was not updated for 19 trials (22.4%). Half of all platform trials (64 of 127 [50.4%]) made results available from at least 1 comparison. Data on progression and output stratified by COVID-19 vs non–COVID-19 trials can be found in eTable 6 in Supplement 1 .
The 127 platform trials had a total of 823 arms, including 206 control arms ( Table 4 ). Of the 823 arms, 385 (46.8%) were ongoing, 34 (4.1%) were in the planning phase, and 353 (42.9%) were closed. Of the 353 closed arms, 189 (53.5%) were completed, 56 (15.9%) were stopped for futility, 20 (5.7%) were stopped due to new external evidence, 9 (2.5%) were stopped for safety concerns, and 26 (7.4%) were stopped for practical reasons, including poor recruitment (5 [1.4%]). Fewer than half of the closed arms (169 of 353 [47.9%]) made full results available. Making results available was more common and faster for non–industry-sponsored trials compared with industry-sponsored trials (150 of 277 [54.2%] vs 19 of 76 [25.0%]); however, there is evidence for confounding because COVID-19 trial results were available substantially faster than results for non–COVID-19 trials ( Table 4 ). The detailed status of platform trial arms stratified by COVID-19 vs non–COVID-19 trials can be found in eTable 7 in Supplement 1 . The form of results availability (as peer review, preprint, abstract, and on registry) is available in eTable 8 in Supplement 1 . We contacted investigators of platform trials to verify the extracted data and achieved a high response rate (active agreement, 46.5% [59 of 127]; tacit agreement, 15.7% [20 of 127]; no response, 37.8% [48 of 127]) (eTable 9 in Supplement 1 ).
Existing platform trials predominantly focus on evaluating drugs and tend to cluster in medical areas, such as oncology, COVID-19, and other infectious diseases. After the peak in 2020 with the arrival of the COVID-19 pandemic, the initiation of new platform trials has decreased. However, there has been a noticeable diversification of medical fields and interventions of platform trials over the past 5 years. This diversification encompasses areas such as neurology, dermatology, and general surgery, as well as the testing of behavioral, surgical, or dietary interventions.
Among the observed platform trials, 49.6% incorporated at least 1 additional adaptive design feature. A total of 58.3% of platform trials added at least 1 arm, and 62.2% dropped at least 1 arm (21.3% did neither, although planned). Consequently, the approximately 40% of trials that never added an arm may have incurred higher planning and setup costs compared with traditional RCTs without benefiting from the cost savings of additional arms. 33 A common control arm was used in only 73.2% of platform trials, which is lower than one would expect for a major platform trial advantage (increased efficiency) and is below the percentage previously reported. 23 This finding may underline the belief of many stakeholders that the establishment of collective trial infrastructures (including communication networks, overall data management and monitoring plans, and standardized documents across arms) is reason enough to justify the use of the platform trial design. 22 Nevertheless, the benefits of only submitting an amendment instead of a new application for each added arm, and the quicker activation of sites, compared with new traditional RCTs, need to be balanced with the substantial operational, statistical, and legal complexities of platform trials. 21 , 34
Many statistical features of platform trials are actively debated in the literature, yet they form the foundation of the platform trial design and underpin the validity of the trial results. 12 , 16 , 22 , 35 - 37 A bayesian design was frequently used because this statistical framework fits well with the adaptive nature of platform trials 25 , 35 ; however, bayesian trial designs may be less commonly understood by a general medical and scientific readership, posing challenges for interpretation and uptake of results. In addition, the use of features such as RAR and nonconcurrent controls should be considered carefully. Response adaptive randomization, for instance, requires a well-planned run-in phase, may inflate type I error, typically requires a higher sample size, and can be associated with slow accrual of outcome data. 38 About 8% of platform trials considered nonconcurrent control data in an attempt to further increase statistical power; however, this approach carries a high risk for bias. 22 , 37 , 39 Regulators criticize the use of nonconcurrent controls in confirmatory trials because statistical modeling can only partially address the potential bias. 37 , 38
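As a concrete illustration of RAR in a bayesian framework: after each interim look, the allocation probability for each arm can be set to the posterior probability that the arm has the best response rate. The sketch below is a deliberately minimal, hypothetical example (beta-binomial model, two arms, invented interim data); a real platform trial would add a run-in phase, allocation-probability caps, and type I error control, for the reasons discussed above.

```python
# Minimal sketch of response adaptive randomization (RAR): allocation
# weights are Monte Carlo posterior probabilities that each arm is best,
# under independent Beta(1 + successes, 1 + failures) posteriors.
# Hypothetical interim data; not a complete trial design.
import random

random.seed(0)  # reproducible illustration

def rar_weights(successes, failures, draws=20_000):
    """Posterior probability that each arm has the highest response rate."""
    wins = [0] * len(successes)
    for _ in range(draws):
        samples = [random.betavariate(1 + s, 1 + f)
                   for s, f in zip(successes, failures)]
        wins[samples.index(max(samples))] += 1
    return [w / draws for w in wins]

# Interim look: arm A has 10/30 responders, arm B has 18/30.
weights = rar_weights(successes=[10, 18], failures=[20, 12])
print(weights)  # arm B receives most of the allocation probability
```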
Almost 80% of platform trial protocols were publicly available in some format, much higher than previously determined for traditional RCTs. 24 , 25 However, reporting of essential features, such as adjustment for multiplicity, use of nonconcurrent control data, and criteria for dropping and adding new arms, was often unsatisfactory. Full results publications were available for 47.9% of closed arms. Premature closure of platform trial arms due to recruitment problems was infrequent, occurring for only 1.4% of closed arms (5 of 353), which is in contrast to traditional RCTs (discontinuation rate due to poor recruitment in RCTs, 10%-15%). 31 , 32 However, it is possible that this proportion will increase due to recruitment hurdles and the increasing scarcity of eligible patients for COVID-19 trials toward the end of the pandemic. Publication of full results for closed arms (47.9%) was lower than what is generally seen for traditional RCTs (78.5% at 10-year follow-up). 32 Availability of full results publications and overall transparency were generally better in non–industry-sponsored platform trials.
Overall, industry-sponsored platform trials accounted for approximately one-third of the total and predominantly focused on early-phase investigations, while late-phase trials were mostly not sponsored by industry. Seamless designs, combining early- and late-phase trials, although still a minority (18.1%), are becoming increasingly common. 14
Our study has some strengths. To our knowledge, it is the first study investigating key platform trial features, protocol and results availability, and the status of individual arms. An additional strength of our study was that we contacted investigators of platform trials to verify the extracted data and achieved a high response rate (active agreement, 46.5% [59 of 127]; tacit agreement, 15.7% [20 of 127]; no response, 37.8% [48 of 127]) (eTable 9 in Supplement 1 ); responses typically confirmed the accuracy of gathered data, and only minor adjustments were necessary.
Our study has the following limitations. First, available information was sometimes limited, especially if only a registry entry was available. We have, therefore, conducted sensitivity analyses showing how the proportion of certain variables changed if only platform trials with an available master protocol (n = 76 [59.8%]) were considered (eTable 5 in Supplement 1 ). Second, the reporting was not always consistent across different sources. We handled these discrepancies by creating an information hierarchy, giving priority to peer-reviewed manuscripts and the feedback received by investigators (followed by preprints, websites, and then other sources). Third, we did not consider resource use and costs of platform trials in this review, although such an analysis would be highly desirable. Evidence from a hypothetical costing study suggested that the increased costs associated with the planning and setup of platform trials compared with traditional RCTs are due to the complex protocols and longer setup times. 33 These increased costs were mitigated when more arms were added to the trial, which was less time intensive and reduced costs long term. 40 , 41 Fourth, a comparison of platform trials with traditional parallel-arm RCTs was possible only on an indirect level. However, a direct comparison of platform trials with traditional RCTs with the same research question is planned in a future project, as described in our study protocol. 27 Fifth, this systematic review provides only a snapshot of the current platform trial landscape. Two-thirds of identified platform trials are still ongoing, and the COVID-19 pandemic may have had an influence on the progression and output of our sample. Furthermore, methodological background and reporting guidelines for platform trials were lacking at the start of this project and are currently still evolving.
Therefore, regular updates of this systematic review are necessary to gain further insights into progression patterns and output from randomized platform trials and to determine the most appropriate application of this design in the future.
In this systematic review, we found that platform trials were initiated most frequently during the beginning of the COVID-19 pandemic and appeared to decrease thereafter, with a trend toward more diversified medical fields and interventions. Despite the potential for complexity, most made use of only 1 adaptive feature, or none. Forty percent of platform trials did not add an arm and, thereby, may have missed efficiency gains and incurred higher planning and setup costs compared with traditional RCTs. 33 Premature arm closure for poor recruitment was rare. The reporting of platform features, the status of trial arms, and the results of closed arms needs to be improved. Guidance and infrastructure are needed so that the status and results of individual trial arms can be reported in a timely manner (eg, adaptations of trial registries for platform trials) and so that decisions about the need for a platform design and its planning are optimized.
Accepted for Publication: January 24, 2024.
Published: March 20, 2024. doi:10.1001/jamanetworkopen.2024.3109
Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2024 Griessbach A et al. JAMA Network Open .
Corresponding Author: Alexandra Griessbach, MSc, CLEAR Methods Center, Division of Clinical Epidemiology, Department of Clinical Research, University Hospital Basel, Totengaesslein 3, 4031 Basel, Switzerland ( [email protected] ).
Author Contributions: Ms Griessbach and Dr Briel had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs Speich and Briel shared last authorship.
Concept and design: Griessbach, Speich, Briel.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Griessbach, Covino, Mall, Briel.
Critical review of the manuscript for important intellectual content: Griessbach, Schönenberger, Taji Heravi, Gloy, Agarwal, Hallenberger, Schandelmaier, Janiaud, Amstutz, Speich, Briel.
Statistical analysis: Griessbach.
Obtained funding: Griessbach.
Administrative, technical, or material support: Griessbach, Gloy.
Supervision: Griessbach, Amstutz, Speich, Briel.
Conflict of Interest Disclosures: Drs Schönenberger and Hallenberger reported receiving grants from the Swiss National Science Foundation outside the submitted work. Dr Speich reported receiving grants from Moderna outside the submitted work. No other disclosures were reported.
Meeting Presentation: This study was presented at the Sixth International Clinical Trials Methodology Conference; October 3, 2022; Harrogate, England; and at the Australian Clinical Trials Alliance–Adaptive Platform Trials Operations Meeting; August 10, 2023; virtual meeting.
Data Sharing Statement: See Supplement 2 .
Additional Contributions: We thank Hannah Ewald, PhD, University of Basel, for reviewing our search strategy; she was compensated for her contribution.
July 2, 2024
by Michigan State University
Randomized controlled trials, or RCTs, are believed to be the best way to study the safety and efficacy of new treatments in clinical research. However, a recent study from Michigan State University found that people of color and white women are significantly underrepresented in RCTs due to systematic biases.
The study, published in the Journal of Ethnicity in Substance Abuse, reviewed 18 RCTs conducted over the last 15 years that tested treatments for post-traumatic stress and alcohol use disorder. The researchers found that despite women having double the rates of post-traumatic stress and alcohol use disorder compared with men, and people of color having worse chronicity than white people, most participants were white (59.5%) and male (about 78%).
"Because RCTs are the gold standard for treatment studies and drug trials, we rarely ask the important questions about their limitations and failings," said NiCole Buchanan, co-author of the study and professor in MSU's Department of Psychology.
"For RCTs to meet their full potential, investigators need to fix barriers to inclusion. Increasing representation in RCTs is not simply an issue for equity, but it is also essential to enhancing the quality of our science and meeting the needs of the public that funds these studies through their hard-earned tax dollars."
The researchers found that the design and implementation of the randomized controlled trials contributed to the lack of representation of people of color and women. This happened because trials were conducted in areas where white men were the majority demographic group and study samples almost always reflected the demographic makeup where studies occurred.
Additionally, those designing the studies seldom acknowledged race or gender differences, meaning they did not intentionally recruit diverse samples.
Furthermore, the journals publishing these studies did not have regulations requiring sample diversity, equity or inclusion as appropriate to the conditions under investigation.
"Marginalized groups have unique experiences from privileged groups, and when marginalized groups are poorly included in research, we remain in the dark about their experiences, insights, needs and strengths," said Mallet Reid, co-author of the study and doctoral candidate in MSU's Department of Psychology.
"This means that clinicians and researchers may unknowingly remain ignorant to how to attend to the trauma and addiction challenges facing marginalized groups and may unwittingly perpetuate microaggressions against marginalized groups in clinical settings or fail to meet their needs."
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
InformedHealth.org [Internet]. Cologne, Germany: Institute for Quality and Efficiency in Health Care (IQWiG); 2006-.
In brief: What are systematic reviews and meta-analyses?
Last Update: September 8, 2016; Next update: 2024.
Individual studies are often not big and powerful enough to provide reliable answers on their own. Or several studies on the effects of a treatment might come to different conclusions. In order to find reliable answers to research questions, you therefore have to look at all of the studies and analyze their results together.
Systematic reviews summarize the results of all the studies on a medical treatment and assess the quality of the studies. The analysis is done following a specific, methodologically sound process. In a way, it’s a “study of studies.” Good systematic reviews can provide a reliable overview of the current knowledge in a certain area.
They are normally done by teams of authors working together. The authors are usually specialists with backgrounds in medicine, epidemiology, medical statistics and research.
Systematic reviews can only provide reliable answers if the studies they are based on are searched for and selected very carefully. The individual steps needed before they can be published are usually quite complex.
Sometimes the results of all of the studies found and included in a systematic review can be summarized and expressed as an overall result. This is known as a meta-analysis. The overall outcome of the studies is often more conclusive than the results of individual studies.
But it only makes sense to do a meta-analysis if the results of the individual studies are fairly similar (homogeneous). If there are big differences between the results, there are likely to be important differences between the studies. These should be looked at more closely. It is then sometimes possible to split the participants into smaller subgroups and summarize the results separately for each subgroup.
IQWiG health information is written with the aim of helping people understand the advantages and disadvantages of the main treatment options and health care services.
Because IQWiG is a German institute, some of the information provided here is specific to the German health care system. The suitability of any of the described options in an individual case can be determined by talking to a doctor. informedhealth.org can provide support for talks with doctors and other medical professionals, but cannot replace them. We do not offer individual consultations.
Our information is based on the results of good-quality studies. It is written by a team of health care professionals, scientists and editors, and reviewed by external experts. You can find a detailed description of how our health information is produced and updated in our methods.
Background Despite restoration of epicardial blood flow in acute ST-elevation myocardial infarction (STEMI), inadequate microcirculatory perfusion is common and portends a poor prognosis. Intracoronary (IC) thrombolytic therapy can reduce microvascular thrombotic burden; however, contemporary studies have produced conflicting outcomes.
Objectives This meta-analysis aims to evaluate the efficacy and safety of adjunctive IC thrombolytic therapy at the time of primary percutaneous coronary intervention (PCI) among patients with STEMI.
Methods Comprehensive literature search of six electronic databases identified relevant randomised controlled trials. The primary outcome was major adverse cardiac events (MACE). The pooled risk ratio (RR) and weighted mean difference (WMD) with a 95% CI were calculated.
Results 12 studies with 1915 patients were included. IC thrombolysis was associated with a significantly lower incidence of MACE (RR=0.65, 95% CI 0.51 to 0.82, I²=0%, p<0.0004) and improved left ventricular ejection fraction (WMD=1.87; 95% CI 1.07 to 2.67; I²=25%; p<0.0001). Subgroup analysis demonstrated a significant reduction in MACE for trials using non-fibrin-specific (RR=0.39, 95% CI 0.20 to 0.78, I²=0%, p=0.007) and moderately fibrin-specific thrombolytic agents (RR=0.62, 95% CI 0.47 to 0.83, I²=0%, p=0.001). No significant reduction was observed in studies using highly fibrin-specific thrombolytic agents (RR=1.10, 95% CI 0.62 to 1.96, I²=0%, p=0.75). Furthermore, there were no significant differences in mortality (RR=0.91; 95% CI 0.48 to 1.71; I²=0%; p=0.77) or bleeding events (major bleeding, RR=1.24; 95% CI 0.47 to 3.28; I²=0%; p=0.67; minor bleeding, RR=1.47; 95% CI 0.90 to 2.40; I²=0%; p=0.12).
Conclusion Adjunctive IC thrombolysis at the time of primary PCI in patients with STEMI improves clinical and myocardial perfusion parameters without an increased rate of bleeding. Further research is needed to optimise the selection of thrombolytic agents and treatment protocols.
All data relevant to the study are included in the article or uploaded as supplemental information.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .
https://doi.org/10.1136/heartjnl-2024-324078
ST-elevation myocardial infarction (STEMI) is a significant cause of morbidity and mortality worldwide. Microvascular obstruction affects about half of patients with STEMI, leading to adverse outcomes. Previous studies on adjunctive intracoronary thrombolysis have shown inconsistent results.
This meta-analysis demonstrates that adjunctive intracoronary thrombolysis during primary percutaneous coronary intervention (PCI) significantly reduces major adverse cardiac events and improves left ventricular ejection fraction. Furthermore, it significantly improves myocardial perfusion parameters without increasing bleeding risk.
Adjunctive intracoronary thrombolysis in patients with STEMI undergoing primary PCI shows promise for clinical benefit. Future studies should identify high-risk patients for microcirculatory dysfunction to optimise treatment strategies and clinical outcomes.
Ischaemic heart disease remains a leading cause of morbidity and mortality worldwide. 1 2 ST-elevation myocardial infarction (STEMI) occurs due to coronary vessel occlusion causing transmural myocardial ischaemia and subsequent necrosis. 3 The cornerstone of contemporary management involves prompt reopening of the occluded coronary artery with percutaneous coronary intervention (PCI). 4 5 Despite restoring epicardial blood flow, roughly 50% of patients fail to achieve adequate microvascular perfusion. 6 This phenomenon, known as microvascular obstruction (MVO), is predictive of a poor cardiac prognosis driven by left ventricular remodelling and larger infarct size. 7–9
In patients with STEMI, MVO is characterised by distal embolisation of atherothrombotic debris and fibrin-rich microvascular thrombi. 10 A growing body of evidence supports the efficacy of adjunctive low-dose intracoronary (IC) thrombolysis in this population. Sezer et al performed the first randomised controlled trial (RCT), demonstrating an improvement in myocardial perfusion with low-dose IC streptokinase post-PCI. 11 Subsequent studies focused on newer fibrin-specific agents with a lower propensity for systemic bleeding. 12 Despite encouraging results, many studies were inadequately powered and yielded conflicting outcomes. This meta-analysis aims to evaluate the efficacy and safety of adjunctive IC thrombolytic therapy at the time of primary PCI in patients with STEMI.
The present study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. 13
Electronic searches were performed using PubMed, Ovid Medline, Cochrane Library, ProQuest, ACP Journal Club and Google Scholar from their dates of inception to January 2022. The search terms “STEMI” AND “intracoronary” AND (“thrombolysis” OR “tenecteplase” OR “alteplase” OR “prourokinase” OR “urokinase” OR “streptokinase”) were combined as both keywords and Medical Subject Headings terms, with filters for RCTs. This was supplemented by hand searching the bibliographies of review articles and all potentially relevant studies.
Two reviewers (RR and SV) independently screened the title and abstracts of articles identified in the search. Full-text publications were subsequently reviewed separately if either reviewer considered the manuscript as being potentially eligible. Any disagreements regarding final study inclusion were resolved by discussion and consensus with a third reviewer (CCYW).
Studies were included if they met the following inclusion criteria: (1) RCT design, (2) STEMI population, (3) IC thrombolysis given to the treatment group with comparison with a control group (CG) receiving no thrombolytic therapy and (4) major adverse cardiovascular event (MACE) reported as an outcome.
All publications were limited to those involving human subjects and no restrictions were based on language. Reviews, meta-analyses, abstracts, case reports, conference presentations, editorials and expert opinions were excluded. When institutions published duplicate studies with accumulating numbers of patients or increased lengths of follow-up, only the most complete reports were included for assessment.
Two investigators (RR and SV) independently extracted data from text, tables and figures. Any discrepancies were resolved by discussion and consensus with a third reviewer (CCYW). For each of the included trials, the following data were extracted: publication year, number of patients, baseline characteristics of participants, treatment details (including specific agents administered), follow-up duration and endpoints.
Study quality and risk of bias were critically appraised using the updated Cochrane Collaboration Risk-of-Bias Tool V.2. 14 Five domains of bias were evaluated: (1) randomisation process, (2) deviations from study protocol, (3) missing outcome data, (4) outcome measurement and (5) selective reporting of results.
The predetermined primary endpoint was MACE, which represented a composite outcome as defined by each individual study. While the individual components of MACE were generally consistent across studies, minor discrepancies existed ( online supplemental table 1 ). Secondary outcomes included clinical endpoints (mortality, heart failure (HF), major and minor bleeding), myocardial perfusion endpoints (thrombolysis in myocardial infarction (TIMI) flow grade 3, TIMI myocardial perfusion grade (TMPG), corrected TIMI frame count (CTFC), ST-resolution (STR)) and echocardiographic parameters (left ventricular ejection fraction (LVEF)). Subgroup analysis for MACE was conducted based on fibrin specificity of the thrombolytic agent. This classification comprised non-fibrin-specific agents (streptokinase and urokinase), moderately fibrin-specific agents (prourokinase) and highly fibrin-specific agents (alteplase and tenecteplase). Clinical outcomes were assessed at the end of the follow-up period, which ranged from 1 to 12 months, while echocardiographic parameters were evaluated within a time frame of 1–6 months.
Statistical analysis.
The mean difference (MD) or risk ratio (RR) was used as the summary statistic and reported with 95% CIs. Meta-analyses were performed using random-effects models to take into account the anticipated clinical and methodological diversity between studies. The I² statistic was used to estimate the percentage of total variation across studies due to heterogeneity rather than chance, with values exceeding 50% indicating considerable heterogeneity. For meta-analysis of continuous data, values presented as median and IQR were converted to mean and SD using the quantile method previously described by Wan et al. 15 For subgroup analyses, a standard test of heterogeneity was used to assess for a significant difference between subgroups, with p<0.05 considered statistically significant.
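The random-effects pooling, I² computation and median/IQR conversion described above can be sketched as follows. This is a minimal illustration of the DerSimonian-Laird method and the Wan et al large-sample approximation; the event counts and quartiles below are invented for the example and are not data from the included trials.

```python
import math

# Hypothetical per-study data: (events, total) in treatment and control arms.
studies = [
    (8, 100, 15, 100),
    (5, 60, 9, 58),
    (12, 220, 20, 215),
]

# Per-study log risk ratio and its large-sample variance.
y, v = [], []
for a, n1, c, n2 in studies:
    y.append(math.log((a / n1) / (c / n2)))
    v.append(1/a - 1/n1 + 1/c - 1/n2)

# Inverse-variance (fixed-effect) quantities needed by DerSimonian-Laird.
w = [1/vi for vi in v]
sw = sum(w)
ybar = sum(wi*yi for wi, yi in zip(w, y)) / sw
Q = sum(wi*(yi - ybar)**2 for wi, yi in zip(w, y))
df = len(studies) - 1

# Between-study variance tau^2 and the I^2 heterogeneity statistic.
C = sw - sum(wi**2 for wi in w)/sw
tau2 = max(0.0, (Q - df) / C)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects pooled risk ratio with 95% CI.
w_re = [1/(vi + tau2) for vi in v]
mu = sum(wi*yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1/sum(w_re))
rr, lo, hi = math.exp(mu), math.exp(mu - 1.96*se), math.exp(mu + 1.96*se)
print(f"RR={rr:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2={I2:.0f}%")

# Wan et al approximation (large n): median/IQR to mean and SD,
# with hypothetical quartile values.
med, q1, q3 = 4.0, 2.0, 7.0
mean_approx = (q1 + med + q3) / 3
sd_approx = (q3 - q1) / 1.35
```

When the study effects are similar, Q falls below its degrees of freedom, tau² truncates to zero and the random-effects result coincides with the fixed-effect one, as happens with the illustrative data here.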
Meta-regression analyses were performed to explore potential heterogeneity with the following moderator variables individually assessed for significance: publication year, mean age, proportion of male participants, percentage of left anterior descending artery infarcts, proportion of smokers, as well as baseline prevalence of diabetes, hypertension and dyslipidaemia.
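A meta-regression of this kind amounts to an inverse-variance weighted regression of study effects on a moderator. The sketch below uses a fixed-effect approximation (residual heterogeneity ignored) and invented study values for a single hypothetical moderator such as mean age; it is not the software or data used in the review.

```python
import math

# Hypothetical data: per-study log risk ratio, its variance, and one
# moderator value (e.g. mean participant age); illustrative only.
y   = [-0.63, -0.41, -0.55, -0.20, -0.70, -0.35]
v   = [ 0.16,  0.06,  0.09,  0.02,  0.20,  0.04]
mod = [ 58.0,  61.0,  59.5,  63.0,  57.0,  62.0]

# Weighted least squares with inverse-variance weights: does the
# moderator explain variation in study effects?
w = [1/vi for vi in v]
sw = sum(w)
xbar = sum(wi*xi for wi, xi in zip(w, mod)) / sw
ybar = sum(wi*yi for wi, yi in zip(w, y)) / sw
sxx = sum(wi*(xi - xbar)**2 for wi, xi in zip(w, mod))
sxy = sum(wi*(xi - xbar)*(yi - ybar) for wi, xi, yi in zip(w, mod, y))
beta = sxy / sxx              # change in log RR per unit of moderator
se_beta = math.sqrt(1/sxx)    # fixed-effect approximation of the SE
z = beta / se_beta            # Wald statistic for the moderator
print(f"slope={beta:.3f}, z={z:.2f}")
```

A dedicated package (e.g. mixed-effects meta-regression) would additionally estimate residual between-study variance rather than assume it is zero.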
Publication bias was assessed for the primary endpoint of MACE using funnel plots comparing log of point estimates with their SE. Egger’s linear regression method and Begg’s rank correlation test were used to detect funnel plot asymmetry. 16 17 Statistical analysis was conducted with Review Manager V.5.3.5 (Cochrane Collaboration, Oxford, UK) and Comprehensive Meta-Analysis V.3.0 (Biostat, Englewood, New Jersey, USA). All p values were two sided, and values <0.05 were considered statistically significant.
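Egger's method regresses each study's standardized effect on its precision; a nonzero intercept signals funnel-plot asymmetry. The following sketch uses invented effect sizes and standard errors purely to show the mechanics of the test.

```python
import math

# Hypothetical study effects (log risk ratios) and standard errors.
effects = [-0.63, -0.41, -0.55, -0.20, -0.70, -0.35]
ses     = [ 0.40,  0.25,  0.30,  0.15,  0.45,  0.20]

# Egger's regression: standardized effect (y/SE) on precision (1/SE).
# The intercept estimates small-study asymmetry.
x = [1/s for s in ses]
z = [e/s for e, s in zip(effects, ses)]

n = len(x)
xbar, zbar = sum(x)/n, sum(z)/n
sxx = sum((xi - xbar)**2 for xi in x)
sxz = sum((xi - xbar)*(zi - zbar) for xi, zi in zip(x, z))
slope = sxz / sxx
intercept = zbar - slope*xbar

# t-statistic for the intercept (ordinary least squares, n-2 df);
# a large |t| suggests funnel plot asymmetry.
resid = [zi - (intercept + slope*xi) for xi, zi in zip(x, z)]
s2 = sum(r*r for r in resid) / (n - 2)
se_int = math.sqrt(s2 * (1/n + xbar**2/sxx))
t_int = intercept / se_int
print(f"Egger intercept={intercept:.2f}, t={t_int:.2f}")
```

Begg's rank correlation test, the complementary method cited in the text, instead correlates standardized effects with their variances using Kendall's tau.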
A total of 245 unique records were identified through electronic searches using six online databases, from which 85 duplicates were removed. Of these, 120 were excluded based on title and abstract alone. After screening the full text of the remaining 40 articles, 12 studies 18–29 were found to meet the inclusion criteria, as summarised on the PRISMA flow chart in figure 1 .
Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow chart of literature search and study selection.
IC thrombolysis was examined in 12 studies (n=1030 received IC thrombolysis and 885 no IC thrombolysis). Included studies used non-fibrin-specific (streptokinase, urokinase), moderately fibrin-specific (prourokinase) and highly fibrin-specific thrombolytic (alteplase, tenecteplase) agents. The timing and delivery of IC thrombolytic therapy varied between studies. A complete summary of study characteristics and baseline participant characteristics is presented in tables 1 and 2 , respectively. Primary and secondary outcomes are summarised in online supplemental table 2 . According to the revised Cochrane tool, the overall risk of bias assessment for procedural measures was judged to be ‘low risk’ in two studies, ‘some concerns’ in eight studies and ‘high risk’ in two studies ( online supplemental figure 1 ).
Summary of studies investigating intracoronary thrombolysis for patients with STEMI
Summary of baseline patient characteristics in studies investigating intracoronary thrombolysis for patients with STEMI
All 12 RCTs reported the incidence of MACE. Compared with the CG, IC thrombolysis significantly reduced the occurrence of MACE at the end of follow-up (RR=0.65, 95% CI 0.51 to 0.82, I²=0%, p<0.0004; figure 2 ). Subgroup analysis demonstrated a significant reduction in MACE for trials using non-fibrin-specific (RR=0.39, 95% CI 0.20 to 0.78, I²=0%, p=0.007) and moderately fibrin-specific thrombolysis (RR=0.62, 95% CI 0.47 to 0.83, I²=0%, p=0.001). MACE was observed at a similar rate in studies using highly fibrin-specific thrombolysis (RR=1.10, 95% CI 0.62 to 1.96, I²=0%, p=0.75). The test for subgroup difference was not significant (p=0.07). Furthermore, IC thrombolysis was associated with an improvement in LVEF (weighted MD (WMD)=1.87; 95% CI 1.07 to 2.67; I²=25%; p<0.0001; online supplemental figure 2 ). There was a trend towards a lower incidence of HF hospitalisation (RR=0.66; 95% CI 0.42 to 1.05; I²=0%; p=0.08; online supplemental figure 3 ), though not statistically significant. No significant differences were observed in mortality (RR=0.95; 95% CI 0.50 to 1.81; I²=0%; p=0.88; online supplemental figure 4 ), major bleeding (RR=1.24; 95% CI 0.47 to 3.28; I²=0%; p=0.67; online supplemental figure 5 ) and minor bleeding events (RR=1.47; 95% CI 0.90 to 2.40; I²=0%; p=0.12; online supplemental figure 6 ) between the two groups.
Forest plot displaying relative risk for major adverse cardiovascular events with intracoronary (IC) thrombolysis (stratified by fibrin-specific and non-fibrin-specific agents) or placebo in ST-elevation myocardial infarction. Squares and diamonds=risk ratios. Lines=95% CIs.
In patients with STEMI, IC thrombolysis significantly improved TIMI flow grade 3 (RR=1.09; 95% CI 1.02 to 1.15; I²=63%; p=0.006), TMPG (RR=1.38; 95% CI 1.13 to 1.68; I²=54%; p=0.001), complete STR (RR=1.20; 95% CI 1.10 to 1.31; I²=51%; p<0.0001) and CTFC (WMD=−4.58; 95% CI −6.23 to −2.72; I²=41%; p<0.0001) when compared with the CG ( figure 3 ).
Forest plots of myocardial perfusion outcomes with intracoronary (IC) thrombolysis or placebo in ST-elevation myocardial infarction. (A) Thrombolysis in myocardial infarction (TIMI) flow grade 3. (B) TIMI myocardial perfusion grade 3. (C) ST-segment resolution. (D) Corrected TIMI frame count. Squares and diamonds=risk ratios/weighted mean difference. Lines=95% CIs.
For the primary endpoint of MACE, meta-regression analyses did not identify the following moderator variables as significant effect modifiers: publication year (p=0.97), proportion of male participants (p=0.23), prevalence of diabetes (p=0.44), proportion of smokers (p=0.68), prevalence of dyslipidaemia (p=0.44) and prevalence of hypertension (p=0.21).
Both Egger’s linear regression method (p=0.73) and Begg’s rank correlation test (p=0.63) suggested that publication bias was not an influencing factor when MACE was selected as the primary endpoint.
The present meta-analysis examined 12 RCTs that included 1915 patients with STEMI undergoing primary PCI. All trials evaluated the efficacy and safety of IC thrombolytic agents compared with a CG. The main findings were that patients administered IC thrombolysis had: (1) a significantly lower incidence of MACE, (2) an improvement in LVEF and (3) superior myocardial perfusion parameters (TIMI flow grade 3, TMPG, CTFC and complete STR). Notably, there were no significant differences observed in mortality and bleeding events between the two groups.
Mortality rates following STEMI remain high, with 30-day mortality rates ranging from 5.4% to 14% and 1-year mortality rates ranging from 6.6% to 17.5%. 30 Despite the increased availability of primary PCI facilities and advancements in reperfusion strategies, there has been limited improvement in STEMI mortality rates. 31 Moreover, complications such as HF, arrhythmia, repeat revascularisation and reinfarction continue to be prevalent. 32–34 Despite restoring epicardial blood flow through PCI, MVO is evident in almost half of patients with STEMI. 6 It is characterised by distal embolisation of atherothrombotic debris, de novo microvascular thrombosis formation and plugging of circulating blood cells. 35 Furthermore, the upregulation of inflammatory mediators leads to intramyocardial haemorrhage and further microvascular necrosis. 36 37 These mechanistic pathways contribute to a larger infarct size, adverse myocardial remodelling and worse prognosis. 7 8 38
Thrombolytic therapy is an effective treatment for acute coronary thrombosis. 39 It inhibits red blood cell aggregation and dissolves thrombi to facilitate adequate microvascular perfusion. 40 41 Thrombolytic agents are commonly classified based on their affinity for fibrin. Streptokinase and urokinase lack fibrin specificity, indiscriminately activating both circulating and clot-bound plasminogen. Prourokinase has moderate fibrin specificity with a propensity for activation on fibrin surfaces, although systemic fibrinogen degradation has been observed. Alteplase and tenecteplase are highly fibrin specific, activating fibrin-bound plasminogen with minimal impact on circulating free plasminogen.
Utilisation of a facilitated PCI strategy with adjunctive intravenous thrombolysis improves coronary flow acutely 42 ; however, it causes paradoxical activation of thrombin, leading to increased bleeding. 43 44 As a result, clinicians considered the administration of IC thrombolytic therapy. Encouraging results from an open-chest animal model 45 led to the first randomised trial using adjunctive IC streptokinase in 41 patients with STEMI undergoing primary PCI. 11 In the IC streptokinase group, patients demonstrated improved coronary flow reserve, index of microcirculatory resistance (IMR) and CTFC 2 days after primary PCI. 11 Further RCTs with moderately fibrin-specific thrombolytic agents (prourokinase) demonstrated similar results with improved myocardial perfusion parameters. 19 20 22 23 26–28 Notably, the T-TIME Study, a large RCT of 440 patients comparing a highly fibrin-specific thrombolytic agent (alteplase) against placebo, reported different outcomes. At 3-month follow-up, there were no significant differences in rates of death or HF hospitalisation between groups. In addition, microvascular obstruction (% left ventricular mass) on cardiac magnetic resonance (CMR) at 2–7 days did not differ between groups. The ICE T-TIMI trial, which also used a highly fibrin-specific thrombolytic agent (tenecteplase), investigated its efficacy in 40 patients. This small study administered two fixed doses of 4 mg of IC tenecteplase and evaluated the primary endpoint of culprit lesion per cent diameter stenosis after the first bolus of tenecteplase or placebo. The results indicated no significant difference in the primary endpoint between the two groups.
In an initial meta-analysis of six RCTs investigating the use of IC thrombolysis in patients with STEMI compared with placebo, findings revealed a reduction in MVO but no impact on MACE. 46 Subsequent analyses, including studies with larger sample sizes or focusing on specific thrombolytic agents, have since been conducted with varied results. 47 48 Our meta-analysis, which is the largest to date, demonstrates that adjunctive IC thrombolysis in patients with STEMI improves both clinical and microcirculation outcomes. Although bleeding events did not significantly increase, it is plausible that a tradeoff may exist for reducing MACE. Notably, subgroup analysis for MACE demonstrated no significant benefit for highly fibrin-specific agents ( figure 2 ).
Intuitively, fibrin-specific thrombolytics are presumed to offer inherent advantages over their less fibrin-specific counterparts. In vivo studies have revealed that administration of alteplase in patients with STEMI induced shorter periods of thrombin and kallikrein activation, less reduction in fibrinogen, and a decrease in D-dimer and plasmin–antiplasmin complexes compared with streptokinase. 49 In this regard, tenecteplase demonstrates superior performance relative to alteplase with almost no paradoxical procoagulant effect due to reduced activation of thrombin and the kallikrein–factor XII system. 50
Nonetheless, other variables may diminish the significance of fibrin specificity. It has been argued that administration of IC alteplase, a short-acting thrombolytic with a half-life of 4–6 min, before flow optimisation with stenting may have contributed to the negative results seen in T-TIME. Although prourokinase has a similarly short half-life and was also given before stenting in multiple studies, it was associated with better results. 19 20 22 23 26–28 The therapeutic efficacy of prourokinase predominantly relies on its conversion to urokinase, a non-fibrin-specific direct plasminogen activator, potentially resulting in a prolonged duration of action. Furthermore, inducing a systemic fibrinolytic state with a non-selective agent may be paradoxically desirable in patients receiving adjunctive IC thrombolytics during primary PCI. This approach can potentially prevent further thrombus reaccumulation and embolisation to the microcirculation, especially in a highly thrombogenic environment. In contrast, fibrin-specific agents may heighten the risk of rethrombosis and reocclusion due to their limited impact on systemic fibrinogen depletion. Nevertheless, such varied outcomes across these studies could be attributed to the heterogeneous methodologies used.
Despite encouraging results, future studies targeting patients at the highest risk of MVO with appropriately powered sample sizes are required. The ongoing RESTORE-MI (Restoring Microcirculatory Perfusion in STEMI) trial ( NCT03998319 ) has a unique approach in which all study participants will undergo assessment of microvascular integrity after primary PCI prior to inclusion. Only patients with objective evidence of microvascular dysfunction (IMR value >32) following reperfusion will be randomised to treatment with IC tenecteplase or placebo. The primary endpoint measured will be cardiovascular mortality and rehospitalisation for HF at 24 months, in addition to infarct size on CMR at 6 months post-PCI. This study may potentially support a novel therapeutic approach towards treating MVO in patients with STEMI in the future.
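As a rough illustration of what "appropriately powered" entails for a binary composite endpoint, the sketch below computes a per-arm sample size for a two-sided comparison of two proportions using the standard normal approximation. The event rates are hypothetical and are not taken from RESTORE-MI or any trial in this review.

```python
import math
from statistics import NormalDist

def per_arm_n(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sided test of two proportions
    (normal approximation, pooled variance under the null for the alpha term)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical example: detecting a reduction in a composite endpoint
# from 15% to 10% with 80% power at a two-sided alpha of 0.05.
print(per_arm_n(0.15, 0.10))
```

Small absolute risk differences of this kind drive the large enrolments needed for clinical endpoints, which is why enriching for high-risk patients (e.g., IMR > 32, as in RESTORE-MI) can make a trial feasible at a smaller size.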
Several key limitations should be considered when interpreting the findings of the present meta-analysis. First, several studies were subject to bias arising from issues with randomisation and blinding, increasing the chance of type 1 (false-positive) error. In addition, the sample size of individual studies, except for the T-TIME trial, was relatively small. Second, the duration of follow-up and the definitions of clinical outcomes, such as MACE, were inconsistent among the studies. Third, interventional protocols varied between RCTs. For example, IC thrombolytic therapy differed in agent, dosage, timing and route of administration. Initial studies used non-fibrin-specific agents, while contemporary studies moved towards newer fibrin-specific therapy. Apart from Sezer et al , 25 all other studies administered IC thrombolytic therapy prior to stent implantation. 18–24 26–29 Within the latter group, some delivered the agent before flow restoration, 19 21 29 though most did so after balloon dilation or thrombus aspiration. 18 20 22–24 26–28 Similarly, the method of IC administration ranged from non-selective delivery through guiding catheters 24 25 to selective delivery via IC catheters. 18–24 26–29 Furthermore, antiplatelet, anticoagulant and glycoprotein IIb/IIIa inhibitor (GPI) regimens also differed ( table 1 ). Finally, patient selection criteria differed between studies. Although regression analysis did not detect any significant effect modifiers, total ischaemic time was omitted owing to significant heterogeneity in reporting.
Impaired myocardial perfusion remains a clinical challenge in patients with STEMI. Despite its limitations, this meta-analysis favours the use of IC thrombolytic therapy during primary PCI. Overall, IC thrombolysis reduced the incidence of MACE and improved myocardial perfusion markers without increasing the risk of bleeding. Future clinical trials should be appropriately powered for clinical outcomes and focus on patients at high risk of microcirculatory dysfunction.
Patient consent for publication.
Not applicable.
Supplementary data.
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
X @RajanRehan23
Contributors RR—conceptualisation, methodology, data analysis, writing (original draft preparation), reviewing and editing the final manuscript. SV—methodology, data analysis. CCYW—conceptualisation, methodology, data analysis. FP—supervision, writing (reviewing and editing). JL—supervision, writing (reviewing and editing). AK—supervision, writing (reviewing and editing). AY—conceptualisation, methodology, writing (reviewing and editing). HDW—conceptualisation, methodology, writing (reviewing and editing). WF—conceptualisation, methodology, writing (reviewing and editing). MN—conceptualisation, methodology, supervision, writing (reviewing and editing), guarantor.
Funding This study is funded by the National Health and Medical Research Council (2022150).
Competing interests JL has received minor honoraria from Abbott Vascular, Boehringer Ingelheim and Bayer. AY has received minor honoraria and research support from Abbott Vascular and Philips Healthcare. WF has received research support from Abbott Vascular and Medtronic; and has minor stock options with HeartFlow. MN has received research support from Abbott Vascular. HDW has received grant support paid to the institution and fees for serving on Steering Committees of the ODYSSEY trial from Sanofi and Regeneron Pharmaceuticals, the ISCHEMIA and MINT Study from the National Institutes of Health, the STRENGTH trial from Omthera Pharmaceuticals, the HEART-FID Study from American Regent, the DAL-GENE Study from DalCor Pharma UK, the AEGIS-II Study from CSL Behring, the CLEAR OUTCOMES Study from Esperion Therapeutics, and the SOLOIST-WHF and SCORED trials from Sanofi Aventis Australia. The remaining authors have nothing to disclose.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.