
Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

As a running example, this article refers to a systematic review by Boyle and colleagues. They answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

What is a systematic review?

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic review vs. meta-analysis

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
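To make the pooling idea concrete, here is a minimal sketch in Python of the fixed-effect, inverse-variance weighting that underlies many meta-analyses. This is an illustration, not the method of any particular review; the effect sizes and standard errors are invented numbers.

    # Fixed-effect, inverse-variance meta-analysis (illustrative sketch).
    # The per-study effect sizes and standard errors below are invented.
    effects = [0.32, 0.18, 0.45]
    standard_errors = [0.10, 0.15, 0.20]

    weights = [1 / se**2 for se in standard_errors]  # weight = 1 / variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")

Each study is weighted by the inverse of its variance, so more precise studies pull the summary estimate harder. Real meta-analyses add refinements such as random-effects models and heterogeneity statistics.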

Systematic review vs. literature review

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Systematic review vs. scoping review

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

When to conduct a systematic review

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Pros and cons of systematic reviews

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

Step-by-step example of a systematic review

The seven steps for conducting a systematic review are explained below, illustrated with the example review by Boyle and colleagues.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

The example review by Boyle and colleagues had these five components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.
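As a rough sketch, you can draft these components in a structured form from the start. The Python structure below is purely illustrative: the field names are hypothetical, and the values echo the running probiotics example rather than any real protocol.

    # A hypothetical, minimal outline of a systematic review protocol.
    # Field names are illustrative, not a required or standard schema.
    protocol = {
        "background": "Why the effectiveness of probiotics for eczema matters.",
        "objective": "Assess probiotics vs. no treatment, placebo, or "
                     "non-probiotic treatment for eczema symptoms.",
        "selection_criteria": {
            "include": ["randomized controlled trials", "patients with eczema"],
            "exclude": ["studies without a comparison group"],
        },
        "search_strategy": ["databases", "handsearching", "gray literature",
                            "contacting experts"],
        "analysis": "Narrative synthesis plus meta-analysis where data allow.",
    }

Writing the plan down in a fixed structure like this makes it easier to register the protocol and to check, later, that the review did what the plan said.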

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (see the example query after this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
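For instance, a database search string for the running probiotics example might combine synonyms with OR and distinct concepts with AND. This query is illustrative only, not the strategy Boyle and colleagues actually ran:

    (probiotic* OR lactobacillus OR bifidobacterium)
    AND (eczema OR "atopic dermatitis")
    AND (randomized OR randomised OR "controlled trial")

Exact field tags and truncation syntax vary by database, so a query like this usually has to be adapted for each source you search.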

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
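One common way to quantify inter-rater reliability is Cohen’s kappa, which corrects raw agreement for chance. Here is a minimal sketch in Python; the include/exclude decisions are invented for illustration, and nothing here comes from the example review:

    # Measuring agreement between two screeners with Cohen's kappa.
    # The include/exclude decisions below are invented for illustration.
    from sklearn.metrics import cohen_kappa_score

    screener_a = ["include", "exclude", "exclude", "include", "exclude"]
    screener_b = ["include", "exclude", "include", "include", "exclude"]

    kappa = cohen_kappa_score(screener_a, screener_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level

A low kappa after a pilot round of screening is a signal to revisit and tighten the selection criteria before screening the full set.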

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.
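Even a simple tally of decisions, kept as you screen, yields the counts a PRISMA flow diagram reports. A minimal sketch in Python; the record IDs and exclusion reasons are hypothetical:

    # Tallying screening decisions for a PRISMA flow diagram (sketch).
    # Record IDs and reasons are invented for illustration.
    from collections import Counter

    decisions = [
        ("rec_001", "excluded: wrong population"),
        ("rec_002", "included"),
        ("rec_003", "excluded: not a randomized trial"),
        ("rec_004", "excluded: wrong population"),
        ("rec_005", "included"),
    ]

    counts = Counter(reason for _, reason in decisions)
    for reason, n in sorted(counts.items()):
        print(f"{reason}: {n}")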

In the example review, after the title and abstract screening, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group.
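In practice, each completed form maps naturally to one structured record per study. The sketch below is hypothetical: the field names and values are illustrative, and your protocol defines the real ones.

    # A hypothetical data-extraction record for one included study.
    # Field names and values are illustrative, not a standard schema.
    record = {
        "study_id": "trial_03",          # invented identifier
        "year": 2005,                    # invented value
        "design": "randomized controlled trial",
        "sample_size": 120,              # invented value
        "outcome": "parent-rated eczema symptoms",
        "effect_size": None,             # filled in during extraction
        "risk_of_bias": "unclear: randomization method not reported",
    }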

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about systematic reviews

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.



Systematic Reviews and Meta-Analysis


Systematic review Q & A

What is a systematic review?

A systematic review is guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting, and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis, and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single intervention or a small set of related interventions, exposures, or outcomes will simplify the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine, or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so have to screen all the articles about awareness and acceptability. In the second, there is no agreed-on set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague, and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review. This framework allows iterative searching over a reduced number of data sources and imposes no requirement to assess individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't help you limit the number of records you'll need to screen (broad questions lead to large results sets) but may give you a means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Between creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work that can span several months. Moreover, a systematic review requires subject expertise, statistical support, and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. Moreover, all guidelines for carrying out systematic reviews recommend that at least two subject experts screen the studies identified in the search. The first round of screening can consume 1 hour per screener for every 100-200 records, so screening 3,000 records, for example, takes roughly 15 to 30 hours per screener. A systematic review is a labor-intensive team effort.

How can I know if my topic has been reviewed already?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
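If you would rather run these checks from a script, the same queries can be sent to PubMed programmatically. Here is a minimal sketch using Biopython's Entrez module, with the subset query shown earlier; the email address is a placeholder that NCBI asks you to set:

    # Querying PubMed for existing systematic reviews via NCBI Entrez.
    # Requires the biopython package; the email is a required placeholder.
    from Bio import Entrez

    Entrez.email = "you@example.org"  # NCBI asks for a contact address

    handle = Entrez.esearch(
        db="pubmed",
        term='"neoadjuvant chemotherapy" AND systematic[sb]',
        retmax=20,
    )
    result = Entrez.read(handle)
    handle.close()

    print(result["Count"])   # total number of matching records
    print(result["IdList"])  # PMIDs of the first 20 matches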

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO, a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library.


Introduction to Systematic Reviews


What is a Systematic Review?

Knowledge synthesis is a term used to describe the method of synthesizing results from individual studies and interpreting these results within the larger body of knowledge on the topic. It requires highly structured, transparent, and reproducible methods using quantitative and/or qualitative evidence. Systematic reviews, meta-analyses, scoping reviews, rapid reviews, narrative syntheses, and practice guidelines, among others, are all forms of knowledge synthesis.

A systematic review differs from an ordinary literature review in that it uses a comprehensive, methodical, transparent, and reproducible search strategy to ensure conclusions are as unbiased and as close to the truth as possible. The Cochrane Handbook for Systematic Reviews of Interventions defines a systematic review as:

"A systematic review attempts to identify, appraise and synthesize all the empirical evidence that meets pre-specified eligibility criteria to answer a given research question. Researchers conducting systematic reviews use explicit methods aimed at minimizing bias, in order to produce more reliable findings that can be used to inform decision making [...] This involves: the a priori specification of a research question; clarity on the scope of the review and which studies are eligible for inclusion; making every effort to find all relevant research and to ensure that issues of bias in included studies are accounted for; and analysing the included studies in order to draw conclusions based on all the identified research in an impartial and objective way." ( Chapter 1: Starting a review )

Video: "What are systematic reviews?" from Cochrane on YouTube.



Systematic Review


Introduction to Systematic Review


A "high-level overview of primary research on a focused question" utilizing high-quality research evidence through:

Source: Kysh, Lynn (2013): Difference between a systematic review and a literature review. [figshare]. Available at:  


For additional tutorials, visit the SR Workshop Videos from UNC at Chapel Hill outlining each stage of the systematic review process.

Know the difference! Systematic review vs. literature review

It is common to confuse systematic and literature reviews, as both are used to provide a summary of the existing literature or research on a specific topic. Even with this common ground, both types vary significantly. Please review the following chart (and its corresponding poster linked below) for a detailed explanation of each as well as the differences between each type of review.

Source: Kysh, L. (2013). What’s in a name? The difference between a systematic review and a literature review and why it matters. [Poster].


Types of literature reviews along with associated methodologies

JBI Manual for Evidence Synthesis. Find definitions and methodological guidance.

- Systematic Reviews - Chapters 1-7
- Mixed Methods Systematic Reviews - Chapter 8
- Diagnostic Test Accuracy Systematic Reviews - Chapter 9
- Umbrella Reviews - Chapter 10
- Scoping Reviews - Chapter 11
- Systematic Reviews of Measurement Properties - Chapter 12

Systematic reviews vs scoping reviews:

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Gough, D., Thomas, J., & Oliver, S. (2012). Clarifying differences between review designs and methods. Systematic Reviews, 1(28). https://doi.org/10.1186/2046-4053-1-28

Munn, Z., Peters, M., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x. Also, check out the Libguide from Weill Cornell Medicine for the differences between a systematic review and a scoping review and when to embark on either one of them.

Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: Exploring review types and associated information retrieval requirements. Health Information & Libraries Journal, 36(3), 202–222. https://doi.org/10.1111/hir.12276

Temple University. Review Types. - This guide provides useful descriptions of some of the types of reviews listed in the above article.

UMD Health Sciences and Human Services Library. Review Types. - Guide describing Literature Reviews, Scoping Reviews, and Rapid Reviews.

Whittemore, R., Chao, A., Jang, M., Minges, K. E., & Park, C. (2014). Methods for knowledge synthesis: An overview. Heart & Lung: The Journal of Acute and Critical Care, 43(5), 453–461. https://doi.org/10.1016/j.hrtlng.2014.05.014

Differences between a systematic review and other types of reviews

Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). ‘Scoping the scope’ of a Cochrane review. Journal of Public Health, 33(1), 147–150. https://doi.org/10.1093/pubmed/fdr015

Kowalczyk, N., & Truluck, C. (2013). Literature reviews and systematic reviews: What is the difference? Radiologic Technology, 85(2), 219–222.

White, H., Albers, B., Gaarder, M., Kornør, H., Littell, J., Marshall, Z., Matthew, C., Pigott, T., Snilstveit, B., Waddington, H., & Welch, V. (2020). Guidance for producing a Campbell evidence and gap map. Campbell Systematic Reviews, 16(4), e1125. https://doi.org/10.1002/cl2.1125. Check also this comparison between evidence and gap maps and systematic reviews.

Rapid Reviews Tutorials

Rapid Review Guidebook by the National Collaborating Centre for Methods and Tools (NCCMT)

Hamel, C., Michaud, A., Thuku, M., Skidmore, B., Stevens, A., Nussbaumer-Streit, B., & Garritty, C. (2021). Defining rapid reviews: A systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. Journal of Clinical Epidemiology, 129, 74–85. https://doi.org/10.1016/j.jclinepi.2020.09.041


Systematic Review

  • Müller, C., Lautenschläger, S., Meyer, G., & Stephan, A. (2017). Interventions to support people with dementia and their caregivers during the transition from home care to nursing home care: A systematic review. International Journal of Nursing Studies, 71, 139–152. https://doi.org/10.1016/j.ijnurstu.2017.03.013
  • Bhui, K. S., Aslam, R. W., Palinski, A., McCabe, R., Johnson, M. R. D., Weich, S., … Szczepura, A. (2015). Interventions to improve therapeutic communications between Black and minority ethnic patients and professionals in psychiatric services: Systematic review. The British Journal of Psychiatry, 207(2), 95–103. https://doi.org/10.1192/bjp.bp.114.158899
  • Rosen, L. J., Noach, M. B., Winickoff, J. P., & Hovell, M. F. (2012). Parental smoking cessation to protect young children: A systematic review and meta-analysis. Pediatrics, 129(1), 141–152. https://doi.org/10.1542/peds.2010-3209

Scoping Review

  • Hyshka, E., Karekezi, K., Tan, B., Slater, L. G., Jahrig, J., & Wild, T. C. (2017). The role of consumer perspectives in estimating population need for substance use services: A scoping review. BMC Health Services Research, 17(1), 1–14. https://doi.org/10.1186/s12913-017-2153-z
  • Olson, K., Hewit, J., Slater, L. G., Chambers, T., Hicks, D., Farmer, A., & ... Kolb, B. (2016). Assessing cognitive function in adults during or following chemotherapy: A scoping review. Supportive Care in Cancer, 24(7), 3223–3234. https://doi.org/10.1007/s00520-016-3215-1
  • Pham, M. T., Rajić, A., Greig, J. D., Sargeant, J. M., Papadopoulos, A., & McEwen, S. A. (2014). A scoping review of scoping reviews: Advancing the approach and enhancing the consistency. Research Synthesis Methods, 5(4), 371–385. https://doi.org/10.1002/jrsm.1123
  • Scoping Review Tutorial from UNC at Chapel Hill

Qualitative Systematic Review/Meta-Synthesis

  • Lee, H., Tamminen, K. A., Clark, A. M., Slater, L., Spence, J. C., & Holt, N. L. (2015). A meta-study of qualitative research examining determinants of children's independent active free play. International Journal of Behavioral Nutrition & Physical Activity, 12(5), 1–12. https://doi.org/10.1186/s12966-015-0165-9

Videos on systematic reviews

This video lecture explains in detail the steps necessary to conduct a systematic review (44 min.). Here's a brief introduction to how to evaluate systematic reviews (16 min.).

Systematic Reviews: What are they? Are they right for my research? - 47 min. video recording with a closed caption option.

More training videos on systematic reviews:

  • from Yale University (approximately 5-10 minutes each)
  • with Margaret Foster (approximately 55 min each)

Other Libguides

  • University of Toronto Libraries - very detailed with good tips on the sensitivity and specificity of searches.
  • Monash University - includes an interactive case study tutorial.
  • Dalhousie University Libraries - a comprehensive How-To Guide on conducting a systematic review.

Guidelines for a systematic review as part of the dissertation

  • Guidelines for Systematic Reviews in the Context of Doctoral Education Background by University of Victoria (PDF)
  • Can I conduct a Systematic Review as my Master’s dissertation or PhD thesis? Yes, It Depends! by Farhad (blog)
  • What is a Systematic Review Dissertation Like? by the University of Edinburgh (50 min video)

Further readings on experiences of PhD students and doctoral programs with systematic reviews

Puljak, L., & Sapunar, D. (2017). Acceptance of a systematic review as a thesis: Survey of biomedical doctoral programs in Europe. Systematic Reviews, 6(1), 253. https://doi.org/10.1186/s13643-017-0653-x

Perry, A., & Hammond, N. (2002). Systematic reviews: The experiences of a PhD student. Psychology Learning & Teaching, 2(1), 32–35. https://doi.org/10.2304/plat.2002.2.1.32

Daigneault, P.-M., Jacob, S., & Ouimet, M. (2014). Using systematic review methods within a Ph.D. dissertation in political science: Challenges and lessons learned from practice. International Journal of Social Research Methodology, 17(3), 267–283. https://doi.org/10.1080/13645579.2012.730704

UMD Doctor of Philosophy Degree Policies

Before you embark on a systematic review research project, check the UMD PhD Policies to make sure you are on the right path. Systematic reviews require a team of at least two reviewers and an information specialist or a librarian. Discuss with your advisor the authorship roles of the involved team members. Keep in mind that the UMD Doctor of Philosophy Degree Policies (scroll down to the section, Inclusion of one's own previously published materials in a dissertation) outline such cases, specifically the following:

" It is recognized that a graduate student may co-author work with faculty members and colleagues that should be included in a dissertation . In such an event, a letter should be sent to the Dean of the Graduate School certifying that the student's examining committee has determined that the student made a substantial contribution to that work. This letter should also note that the inclusion of the work has the approval of the dissertation advisor and the program chair or Graduate Director. The letter should be included with the dissertation at the time of submission.  The format of such inclusions must conform to the standard dissertation format. A foreword to the dissertation, as approved by the Dissertation Committee, must state that the student made substantial contributions to the relevant aspects of the jointly authored work included in the dissertation."


  • Cochrane Handbook for Systematic Reviews of Interventions - See Part 2: General methods for Cochrane reviews
  • Systematic Searches - Yale library video tutorial series 
  • Using PubMed's Clinical Queries to Find Systematic Reviews  - From the U.S. National Library of Medicine
  • Systematic reviews and meta-analyses: A step-by-step guide - From the University of Edinburgh, Centre for Cognitive Ageing and Cognitive Epidemiology

Tutorials, Guidelines, and Examples from Non-Medical Disciplines

Bioinformatics

  • Mariano, D. C., Leite, C., Santos, L. H., Rocha, R. E., & de Melo-Minardi, R. C. (2017). A guide to performing systematic literature reviews in bioinformatics. arXiv preprint arXiv:1707.05813.

Environmental Sciences

Collaboration for Environmental Evidence. (2018). Guidelines and Standards for Evidence Synthesis in Environmental Management. Version 5.0 (AS Pullin, GK Frampton, B Livoreil & G Petrokofsky, Eds). www.environmentalevidence.org/information-for-authors

Pullin, A. S., & Stewart, G. B. (2006). Guidelines for systematic review in conservation and environmental management. Conservation Biology, 20(6), 1647–1656. https://doi.org/10.1111/j.1523-1739.2006.00485.x

Engineering Education

  • Borrego, M., Foster, M. J., & Froyd, J. E. (2014). Systematic literature reviews in engineering education and other developing interdisciplinary fields. Journal of Engineering Education, 103(1), 45–76. https://doi.org/10.1002/jee.20038

Public Health

  • Hannes, K., & Claes, L. (2007). Learn to read and write systematic reviews: The Belgian Campbell Group. Research on Social Work Practice, 17(6), 748–753. https://doi.org/10.1177/1049731507303106
  • McLeroy, K. R., Northridge, M. E., Balcazar, H., Greenberg, M. R., & Landers, S. J. (2012). Reporting guidelines and the American Journal of Public Health’s adoption of preferred reporting items for systematic reviews and meta-analyses. American Journal of Public Health, 102(5), 780–784. https://doi.org/10.2105/AJPH.2011.300630
  • Pollock, A., & Berge, E. (2018). How to do a systematic review. International Journal of Stroke, 13(2), 138–156. https://doi.org/10.1177/1747493017743796
  • Institute of Medicine. (2011). Finding what works in health care: Standards for systematic reviews. https://doi.org/10.17226/13059
  • Wanden-Berghe, C., & Sanz-Valero, J. (2012). Systematic reviews in nutrition: Standardized methodology. The British Journal of Nutrition, 107(Suppl 2), S3–7. https://doi.org/10.1017/S0007114512001432

Social Sciences

  • Bronson, D., & Davis, T. (2012). Finding and evaluating evidence: Systematic reviews and evidence-based practice (Pocket guides to social work research methods). Oxford: Oxford University Press.
  • Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Pub.

Software Engineering

  • Cornell University Library Guide - Systematic literature reviews in engineering: Example: Software Engineering
  • Biolchini, J., Mian, P. G., Natali, A. C. C., & Travassos, G. H. (2005). Systematic review in software engineering. System Engineering and Computer Science Department COPPE/UFRJ, Technical Report ES, 679(05), 45.
  • Biolchini, J. C., Mian, P. G., Natali, A. C. C., Conte, T. U., & Travassos, G. H. (2007). Scientific research ontology to support systematic review in software engineering. Advanced Engineering Informatics, 21(2), 133–151.
  • Kitchenham, B. (2007). Guidelines for performing systematic literature reviews in software engineering. [Technical Report]. Keele, UK, Keele University, 33(2004), 1–26.
  • Weidt, F., & Silva, R. (2016). Systematic literature review in computer science: A practical guide. Relatórios Técnicos do DCC/UFJF, 1.

Resources for your writing

  • Academic Phrasebank - Get some inspiration and find some terms and phrases for writing your research paper
  • Oxford English Dictionary - Use to locate word variants and proper spelling

Systematic Reviews

  • What is a Systematic Review?

A systematic review is an evidence synthesis that uses explicit, reproducible methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses appropriate statistical techniques to combine these valid studies.

Key Characteristics of a Systematic Review:

Generally, systematic reviews must have:

  • a clearly stated set of objectives with pre-defined eligibility criteria for studies
  • an explicit, reproducible methodology
  • a systematic search that attempts to identify all studies that would meet the eligibility criteria
  • an assessment of the validity of the findings of the included studies, for example through the assessment of the risk of bias
  • a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

A meta-analysis is a systematic review that uses quantitative methods to synthesize and summarize the pooled data from included studies.

Additional Information

  • How-to Books
  • Beyond Health Sciences


  • Cochrane Handbook For Systematic Reviews of Interventions Provides guidance to authors for the preparation of Cochrane Intervention reviews. Chapter 6 covers searching for reviews.
  • Systematic Reviews: CRD’s Guidance for Undertaking Reviews in Health Care From The University of York Centre for Reviews and Dissemination: Provides practical guidance for undertaking evidence synthesis based on a thorough understanding of systematic review methodology. It presents the core principles of systematic reviewing, and in complementary chapters, highlights issues that are specific to reviews of clinical tests, public health interventions, adverse effects, and economic evaluations.
  • Cornell, Systematic Reviews and Evidence Synthesis Beyond the Health Sciences: A video series geared toward librarians but very informative about searching outside medicine.

1.2.2  What is a systematic review?

A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman 1992, Oxman 1993). The key characteristics of a systematic review are:

  • a clearly stated set of objectives with pre-defined eligibility criteria for studies;
  • an explicit, reproducible methodology;
  • a systematic search that attempts to identify all studies that would meet the eligibility criteria;
  • an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias; and
  • a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

Many systematic reviews contain meta-analyses. Meta-analysis is the use of statistical methods to summarize the results of independent studies (Glass 1976). By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review (see Chapter 9, Section 9.1.3 ). They also facilitate investigations of the consistency of evidence across studies, and the exploration of differences across studies.


Annual Review of Psychology

Volume 70, 2019, Review Article

How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

  • Andy P. Siddaway 1, Alex M. Wood 2, and Larry V. Hedges 3
  • Affiliations: 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom; 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA
  • Vol. 70:747–770 (Volume publication date January 2019). https://doi.org/10.1146/annurev-psych-010418-102803
  • First published as a Review in Advance on August 8, 2018

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.



What is a Systematic Review?


A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. The key characteristics of a systematic review are:

  • a clearly defined question with inclusion and exclusion criteria;
  • a rigorous and systematic search of the literature;
  • two phases of screening (blinded, at least two independent screeners);
  • data extraction and management;
  • analysis and interpretation of results;
  • risk of bias assessment of included studies;
  • and report for publication.




Easy guide to conducting a systematic review

Affiliations:

  • 1 Discipline of Child and Adolescent Health, University of Sydney, Sydney, New South Wales, Australia.
  • 2 Department of Nephrology, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • 3 Education Department, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • PMID: 32364273
  • DOI: 10.1111/jpc.14853

A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a predefined search strategy to identify and appraise all available published literature on a specific topic. The meticulous nature of the systematic review research methodology differentiates a systematic review from a narrative review (literature review or authoritative review). This paper provides a brief step by step summary of how to conduct a systematic review, which may be of interest for clinicians and researchers.

Keywords: research; research design; systematic review.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).


Evidence Synthesis and Systematic Reviews


Systematic Reviews


Definition: A systematic review is a summary of research results (evidence) that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize research on a specific issue. It synthesizes the results of multiple primary studies related to each other by using strategies that reduce biases and errors.

When to use: If you want to identify, appraise, and synthesize all available research that is relevant to a particular question with reproducible search methods.

Limitations: It requires extensive time and a team.

Resources :

  • Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare
  • The 8 stages of a systematic review
  • Determining the scope of the review and the questions it will address
  • Reporting the review

Rapid Reviews

Definition: Rapid reviews are a form of evidence synthesis that may provide more timely information for decision making compared with standard systematic reviews.

When to use: When you want to evaluate new or emerging research topics using some systematic review methods at a faster pace.

Limitations: It is not as rigorous or as thorough as a systematic review, and therefore may be more likely to be biased.

  • Cochrane guidance for rapid reviews
  • Steps for conducting a rapid review
  • Expediting systematic reviews: methods and implications of rapid reviews

Scoping Reviews

Definition: Scoping reviews are often used to categorize or group existing literature in a given field in terms of its nature, features, and volume.

When to use: To label a body of literature with relevance to time, location (e.g., country or context), source (e.g., peer-reviewed or grey literature), and origin (e.g., healthcare discipline or academic field); also to clarify working definitions and conceptual boundaries of a topic or field, or to identify gaps in existing literature/research.

Limitations: There are more citations to screen, and a scoping review takes as long as or longer than a systematic review. Larger teams may be required because of the larger volume of literature, and the screening criteria and process differ from those of a systematic review.

  • PRISMA-ScR for scoping reviews
  • JBI Updated methodological guidance for the conduct of scoping reviews
  • JBI Manual: Scoping Reviews (2020)
  • Equator Network-Current Best Practices for the Conduct of Scoping Reviews


Systematic Reviews: How-To in Detail

What is a Systematic Review?


Guide Credit

We are very grateful to Duke Libraries for allowing us to use their guide to systematic reviews as a template for our own.

One of the most familiar types of evidence synthesis is a systematic review. A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. The key characteristics of a systematic review are:

  • a clearly defined question with inclusion and exclusion criteria;
  • a rigorous and systematic search of the literature;
  • two phases of screening (blinded, at least two independent screeners);
  • data extraction and management;
  • analysis and interpretation of results;
  • risk of bias assessment of included studies;
  • and report for publication.

Other Types of Evidence Synthesis Reviews

There are many types of evidence synthesis projects, of which the systematic review is only one. The selection of review type is wholly dependent on the research question: not all research questions are well-suited for systematic reviews.

  • What Type of Review is Right For You? (Flowchart/Decision Tree) From Cornell University Library  

The typology below summarizes various review types and associated methodologies across five dimensions: description, search, appraisal, synthesis, and analysis. Librarians can also help your team determine which review type might be most appropriate for your project.

Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108.  doi:10.1111/j.1471-1842.2009.00848.x

Critical review
Description: Aims to demonstrate that the writer has extensively researched the literature and critically evaluated its quality. Goes beyond mere description to include a degree of analysis and conceptual innovation. Typically results in a hypothesis or model. Search: Seeks to identify the most significant items in the field. Appraisal: No formal quality assessment; attempts to evaluate according to contribution. Synthesis: Typically narrative, perhaps conceptual or chronological. Analysis: Significant component; seeks to identify conceptual contribution to embody existing or derive new theory.

Literature review
Description: Generic term: published materials that provide an examination of recent or current literature. Can cover a wide range of subjects at various levels of completeness and comprehensiveness. May include research findings. Search: May or may not include comprehensive searching. Appraisal: May or may not include quality assessment. Synthesis: Typically narrative. Analysis: May be chronological, conceptual, thematic, etc.

Mapping review / systematic map
Description: Maps out and categorizes existing literature, from which to commission further reviews and/or primary research by identifying gaps in the research literature. Search: Completeness of searching determined by time/scope constraints. Appraisal: No formal quality assessment. Synthesis: May be graphical and tabular. Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features. May identify the need for primary or secondary research.

Meta-analysis
Description: Technique that statistically combines the results of quantitative studies to provide a more precise estimate of the effects. Search: Aims for exhaustive, comprehensive searching. May use funnel plot to assess completeness. Appraisal: Quality assessment may determine inclusion/exclusion and/or sensitivity analyses. Synthesis: Graphical and tabular with narrative commentary. Analysis: Numerical analysis of measures of effect, assuming absence of heterogeneity.

Mixed studies review / mixed methods review
Description: Refers to any combination of methods where one significant component is a literature review (usually systematic). Within a review context it refers to a combination of review approaches, for example combining quantitative with qualitative research or outcome with process studies. Search: Requires either a very sensitive search to retrieve all studies or separately conceived quantitative and qualitative strategies. Appraisal: Requires either a generic appraisal instrument or separate appraisal processes with corresponding checklists. Synthesis: Typically both components will be presented as narrative and in tables. May also employ graphical means of integrating quantitative and qualitative studies. Analysis: May characterize both literatures and look for correlations between characteristics, or use gap analysis to identify aspects present in one literature but missing in the other.

Overview
Description: Generic term: summary of the [medical] literature that attempts to survey the literature and describe its characteristics. Search: May or may not include comprehensive searching (depends whether systematic overview or not). Appraisal: May or may not include quality assessment (depends whether systematic overview or not). Synthesis: Depends on whether systematic or not; typically narrative but may include tabular features. Analysis: May be chronological, conceptual, thematic, etc.

Qualitative systematic review / qualitative evidence synthesis
Description: Method for integrating or comparing the findings from qualitative studies. Looks for ‘themes’ or ‘constructs’ that lie in or across individual qualitative studies. Search: May employ selective or purposive sampling. Appraisal: Quality assessment typically used to mediate messages, not for inclusion/exclusion. Synthesis: Qualitative, narrative synthesis. Analysis: Thematic analysis; may include conceptual models.

Rapid review
Description: Assessment of what is already known about a policy or practice issue, using systematic review methods to search and critically appraise existing research. Search: Completeness of searching determined by time constraints. Appraisal: Time-limited formal quality assessment. Synthesis: Typically narrative and tabular. Analysis: Quantities of literature and overall quality/direction of effect of literature.

Scoping review
Description: Preliminary assessment of the potential size and scope of available research literature. Aims to identify the nature and extent of research evidence (usually including ongoing research). Search: Completeness of searching determined by time/scope constraints. May include research in progress. Appraisal: No formal quality assessment. Synthesis: Typically tabular with some narrative commentary. Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features. Attempts to specify a viable review.

State-of-the-art review
Description: Tends to address more current matters, in contrast to other combined retrospective and current approaches. May offer new perspectives. Search: Aims for comprehensive searching of current literature. Appraisal: No formal quality assessment. Synthesis: Typically narrative, may have tabular accompaniment. Analysis: Current state of knowledge and priorities for future investigation and research.

Systematic review
Description: Seeks to systematically search for, appraise, and synthesize research evidence, often adhering to guidelines on the conduct of a review. Search: Aims for exhaustive, comprehensive searching. Appraisal: Quality assessment may determine inclusion/exclusion. Synthesis: Typically narrative with tabular accompaniment. Analysis: What is known and recommendations for practice; what remains unknown, uncertainty around findings, and recommendations for future research.

Systematic search and review
Description: Combines the strengths of a critical review with a comprehensive search process. Typically addresses broad questions to produce a ‘best evidence synthesis’. Search: Aims for exhaustive, comprehensive searching. Appraisal: May or may not include quality assessment. Synthesis: Minimal narrative; tabular summary of studies. Analysis: What is known, recommendations for practice, and limitations.

Systematized review
Description: Attempts to include elements of the systematic review process while stopping short of a systematic review. Typically conducted as a postgraduate student assignment. Search: May or may not include comprehensive searching. Appraisal: May or may not include quality assessment. Synthesis: Typically narrative with tabular accompaniment. Analysis: What is known, uncertainty around findings, and limitations of methodology.

Umbrella review
Description: Specifically refers to a review compiling evidence from multiple reviews into one accessible and usable document. Focuses on a broad condition or problem for which there are competing interventions, and highlights reviews that address these interventions and their results. Search: Identification of component reviews, but no search for primary studies. Appraisal: Quality assessment of studies within component reviews and/or of the reviews themselves. Synthesis: Graphical and tabular with narrative commentary. Analysis: What is known and recommendations for practice; what remains unknown and recommendations for future research.


Systematic review | Open access | Published: 24 June 2024

A systematic review of experimentally tested implementation strategies across health and human service settings: evidence from 2010-2022

  • Laura Ellen Ashcraft   ORCID: orcid.org/0000-0001-9957-0617 1 , 2 ,
  • David E. Goodrich 3 , 4 , 5 ,
  • Joachim Hero 6 ,
  • Angela Phares 3 ,
  • Rachel L. Bachrach 7 , 8 ,
  • Deirdre A. Quinn 3 , 4 ,
  • Nabeel Qureshi 6 ,
  • Natalie C. Ernecoff 6 ,
  • Lisa G. Lederer 5 ,
  • Leslie Page Scheunemann 9 , 10 ,
  • Shari S. Rogal 3, 11 &
  • Matthew J. Chinman 3, 4, 6

Implementation Science, volume 19, Article number: 43 (2024)

Abstract

Background: Studies of implementation strategies range in rigor, design, and evaluated outcomes, presenting interpretation challenges for practitioners and researchers. This systematic review aimed to describe the body of research evidence testing implementation strategies across diverse settings and domains, using the Expert Recommendations for Implementing Change (ERIC) taxonomy to classify strategies and the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework to classify outcomes.

Methods: We conducted a systematic review of studies examining implementation strategies from 2010-2022, registered with PROSPERO (CRD42021235592). We searched databases using the terms “implementation strategy”, “intervention”, “bundle”, “support”, and their variants. We also solicited study recommendations from implementation science experts and mined existing systematic reviews. We included studies that quantitatively assessed the impact of at least one implementation strategy to improve health or health care using an outcome that could be mapped to the five evaluation dimensions of RE-AIM. Only studies meeting prespecified methodologic standards were included. We described the characteristics of studies and the frequency of implementation strategy use across study arms. We also examined common strategy pairings and co-occurrence with significant outcomes.

Results: Our search resulted in 16,605 studies; 129 met inclusion criteria. Studies tested an average of 6.73 strategies (range 0-20). The most assessed outcomes were Effectiveness (n=82; 64%) and Implementation (n=73; 56%). The implementation strategies most frequently occurring in the experimental arm were Distribute Educational Materials (n=99), Conduct Educational Meetings (n=96), Audit and Provide Feedback (n=76), and External Facilitation (n=59). These strategies were often used in combination. Nineteen implementation strategies were frequently tested and associated with significantly improved outcomes. However, many strategies were not tested sufficiently to draw conclusions.

Conclusions: This review of 129 methodologically rigorous studies built upon prior implementation science data syntheses to identify implementation strategies that had been experimentally tested and to summarize their impact across diverse outcomes and clinical settings. We present recommendations for improving future similar efforts.


Contributions to the literature

While many implementation strategies exist, it has been challenging to compare their effectiveness across a wide range of trial designs and practice settings.

This systematic review provides a transdisciplinary evaluation of implementation strategies across population, practice setting, and evidence-based interventions using a standardized taxonomy of strategies and outcomes.

Educational strategies were employed ubiquitously; nineteen other commonly used implementation strategies, including External Facilitation and Audit and Provide Feedback, were associated with positive outcomes in these experimental trials.

This review offers guidance for scholars and practitioners alike in selecting implementation strategies and suggests a roadmap for future evidence generation.

Background

Implementation strategies are “methods or techniques used to enhance the adoption, implementation, and sustainment of evidence-based practices or programs” (EBPs) [1]. In 2015, the Expert Recommendations for Implementing Change (ERIC) study organized a panel of implementation scientists to compile a standardized set of implementation strategy terms and definitions [2, 3, 4]. These 73 strategies were then organized into nine “clusters” [5]. The ERIC taxonomy has been widely adopted and further refined [6, 7, 8, 9, 10, 11, 12, 13]. However, much of the evidence for individual or groups of ERIC strategies remains narrowly focused. Prior systematic reviews and meta-analyses have assessed strategy effectiveness, but have generally focused on a specific strategy (e.g., Audit and Provide Feedback) [14, 15, 16], subpopulation, disease (e.g., individuals living with dementia) [16], outcome [15], service setting (e.g., primary care clinics) [17, 18, 19], or geography [20]. Given that these strategies are intended to have broad applicability, there remains a need to understand how well implementation strategies work across EBPs and settings, and the extent to which implementation knowledge is generalizable.

There are challenges in assessing the evidence of implementation strategies across many EBPs, populations, and settings. Heterogeneity in population characteristics, study designs, methods, and outcomes has made it difficult to quantitatively compare which strategies work and under which conditions [21]. Moreover, there remains significant variability in how researchers operationalize, apply, and report strategies (individually or in combination) and outcomes [21, 22]. Still, synthesizing data related to the use of individual strategies would help researchers replicate findings and better understand possible mediating factors, including the cost, timing, and delivery by specific types of health providers or key partners [23, 24, 25]. Such an evidence base would also aid practitioners with implementation planning, such as when and how to deploy a strategy for optimal impact.

Building upon previous efforts, we therefore conducted a systematic review to evaluate the level of evidence supporting the ERIC implementation strategies across a broad array of health and human service settings and outcomes, as organized by the evaluation framework RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) [26, 27, 28]. A secondary aim of this work was to identify patterns in the scientific reporting of strategy use that could inform not only reporting standards for strategies but also the methods employed in future studies. The current study was guided by the following research questions:

What implementation strategies have been most commonly and rigorously tested in health and human service settings?

Which implementation strategies were commonly paired?

What is the evidence supporting commonly tested implementation strategies?

Methods

We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA-P) model [29, 30, 31] to develop and report on the methods for this systematic review (Additional File 1). This study was considered to be non-human subjects research by the RAND institutional review board.

Registration

The protocol was registered with PROSPERO (PROSPERO 2021 CRD42021235592).

Eligibility criteria

This review sought to synthesize evidence for implementation strategies from research studies conducted across a wide range of health-related settings and populations. Inclusion criteria required studies to: 1) be available in English; 2) be published between January 1, 2010 and September 20, 2022; 3) be based on experimental research (excluding protocols, commentaries, conference abstracts, or proposed frameworks); 4) be set in a health or human service context (described below); 5) test at least one quantitative outcome that could be mapped to the RE-AIM evaluation framework [26, 27, 28]; and 6) evaluate the impact of an implementation strategy that could be classified using the ERIC taxonomy [2, 32]. We defined health and human service settings broadly, including inpatient and outpatient healthcare settings, specialty clinics, mental health treatment centers, long-term care facilities, group homes, correctional facilities, child welfare or youth services, aging services, and schools, and required that the focus be on a health outcome. We excluded hybrid type I trials that primarily focused on establishing EBP effectiveness, qualitative studies, studies that described implementation barriers and facilitators without assessing implementation strategy impact on an outcome, and studies not meeting the standardized rigor criteria defined below.

Information sources

Our three-pronged search strategy included searching academic databases (i.e., CINAHL, PubMed, and Web of Science for replicability and transparency), seeking recommendations from expert implementation scientists, and assessing existing, relevant systematic reviews and meta-analyses.

Search strategy

Search terms included “implementation strateg*” OR “implementation intervention*” OR “implementation bundl*” OR “implementation support*.” The search, conducted on September 20, 2022, was limited to English-language publications from 2010 to 2022, similar to other recent implementation science reviews [22]. This timeframe was selected to coincide with the advent of Implementation Science and the period when the term “implementation strategy” became conventionally used [2, 4, 33]. A full search strategy can be found in Additional File 2.
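To make the search concrete, the following is a minimal sketch of how such a Boolean query could be assembled and run against PubMed through NCBI's public E-utilities esearch endpoint. The date window mirrors the one reported above, while the English-language filter, retmax cap, and JSON handling are illustrative choices rather than details taken from the review; CINAHL and Web of Science would each need their own interfaces.

```python
import requests

# Boolean query mirroring the reported search terms; "*" is a truncation
# operator (strateg* matches strategy/strategies). PubMed's handling of
# truncation inside quoted phrases has varied over time, so treat this
# query string as illustrative rather than a validated search.
QUERY = (
    '"implementation strateg*" OR "implementation intervention*" OR '
    '"implementation bundl*" OR "implementation support*"'
)

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": QUERY + " AND english[lang]",  # English-language limit
    "datetype": "pdat",                    # filter on publication date
    "mindate": "2010/01/01",
    "maxdate": "2022/09/20",               # date the search was conducted
    "retmax": 10000,                       # illustrative cap on returned PMIDs
    "retmode": "json",
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print(f"Total hits: {result['count']}; PMIDs retrieved: {len(result['idlist'])}")
```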

Title and abstract screening process

Each study’s title and abstract were read by two reviewers, who dichotomously scored studies on each of the six eligibility criteria described above as yes=1 or no=0, resulting in a total score ranging from 0 to 6. Abstracts receiving a six from both reviewers were included in the full text review. Those with only one score of six were adjudicated by a senior member of the team (MJC, SSR, DEG). The study team held weekly meetings to troubleshoot and resolve any ongoing issues noted through the abstract screening process.
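In code, the dual-review decision rule might look like the sketch below; the criterion keys and function names are hypothetical labels, and real screening would typically be managed in a review platform rather than a script.

```python
# Hypothetical encoding of the six eligibility criteria as yes/no ratings.
CRITERIA = (
    "english_language",
    "published_2010_2022",
    "experimental_research",
    "health_or_human_service_setting",
    "quantitative_reaim_outcome",
    "eric_classifiable_strategy",
)

def score(ratings: dict) -> int:
    """Sum yes=1/no=0 over the six criteria (0-6)."""
    return sum(int(ratings[c]) for c in CRITERIA)

def triage(reviewer_a: dict, reviewer_b: dict) -> str:
    """Both sixes -> full text review; exactly one six -> senior
    adjudication; otherwise exclude."""
    scores = {score(reviewer_a), score(reviewer_b)}
    if scores == {6}:
        return "full_text_review"
    if 6 in scores:
        return "senior_adjudication"
    return "exclude"

# Example: reviewers disagree on one criterion, so the abstract is adjudicated.
a = {c: True for c in CRITERIA}
b = dict(a, experimental_research=False)
print(triage(a, b))  # -> senior_adjudication
```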

Full text screening

During the full text screening process, we reviewed, in pairs, each article that had progressed through abstract screening. Conflicts between reviewers were adjudicated by a senior member of the team for a final inclusion decision (MJC, SSR, DEG).

Review of study rigor

After reviewing published rigor screening tools [34, 35, 36], we developed an assessment of study rigor appropriate for the broad range of reviewed implementation studies. Reviewers evaluated studies on the following: 1) presence of a concurrent comparison or control group (=2 for a traditional randomized controlled trial or stepped wedge cluster randomized trial; =1 for pseudo-randomized and other studies with a concurrent control); 2) EBP standardization by protocol or manual (=1 if present); 3) EBP fidelity tracking (=1 if present); 4) implementation strategy standardization by operational description, standard training, or manual (=1 if present); 5) length of follow-up from full implementation of the intervention (=2 for twelve months or longer; =1 for six to eleven months; =0 for less than six months); and 6) number of sites (=1 for more than one site). Rigor scores ranged from 0 to 8, with 8 indicating the most rigorous. Articles were included if they 1) included a concurrent control group, 2) had an experimental design, and 3) received a score of 7 or 8 from two independent reviewers.
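The rubric translates naturally into a small scoring function. The sketch below encodes the point values listed above; the field and category names are hypothetical labels chosen for illustration, and the published rubric, not this code, is authoritative.

```python
from dataclasses import dataclass

@dataclass
class StudyRigor:
    # Hypothetical field names for the rubric's six elements.
    design: str                  # "rct", "stepped_wedge", "pseudo_randomized",
                                 # "other_concurrent", or "no_concurrent_control"
    ebp_standardized: bool       # EBP defined by protocol or manual
    fidelity_tracked: bool       # EBP fidelity tracking present
    strategy_standardized: bool  # strategy described/trained/manualized
    followup_months: int         # follow-up from full implementation
    multisite: bool              # more than one site

def rigor_score(s: StudyRigor) -> int:
    """Compute the 0-8 rigor score described above."""
    pts = 0
    if s.design in ("rct", "stepped_wedge"):
        pts += 2                 # randomized concurrent comparison
    elif s.design in ("pseudo_randomized", "other_concurrent"):
        pts += 1                 # non-randomized concurrent control
    pts += s.ebp_standardized + s.fidelity_tracked + s.strategy_standardized
    if s.followup_months >= 12:
        pts += 2
    elif s.followup_months >= 6:
        pts += 1
    pts += s.multisite
    return pts

def include(s: StudyRigor) -> bool:
    """Concurrent control, experimental design, and a score of 7 or 8
    (single-reviewer sketch; the review required two independent scores)."""
    return s.design != "no_concurrent_control" and rigor_score(s) >= 7
```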

Outside expert consultation

We contacted 37 global implementation science experts who were recognized by our study team as leaders in the field or who were commonly represented among first or senior authors in the included abstracts. We asked each expert for recommendations of publications meeting study inclusion criteria (i.e., quantitatively evaluating the effectiveness of an implementation strategy). Recommendations were recorded and compared to the full abstract list.

Systematic reviews

Eighty-four systematic reviews were identified through the initial search strategy (See Additional File 3). Systematic reviews that examined the effectiveness of implementation strategies were reviewed in pairs for studies that were not found through our initial literature search.

Data abstraction and coding

Data from the full text review were abstracted in pairs, with conflicts resolved by senior team members (DEG, MJC) using a standard Qualtrics abstraction form. The form captured the setting, number of sites and participants studied, evidence-based practice/program of focus, outcomes assessed (based on RE-AIM), strategies used in each study arm, whether the study took place in the U.S. or outside of the U.S., and the findings (i.e., was there significant improvement in the outcome(s)?). We coded implementation strategies used in the Control and Experimental Arms. We defined the Control Arm as receiving the lowest number of strategies (which could mean zero strategies or care as usual) and the Experimental Arm as the most intensive arm (i.e., receiving the highest number of strategies). When studies included multiple Experimental Arms, the Experimental Arm with the least intensive implementation strategy(ies) was classified as “Control” and the Experimental Arm with the most intensive implementation strategy(ies) was classified as the “Experimental” Arm.
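The arm-labeling convention can be expressed as a short routine. In this sketch, the arm with the fewest strategies is coded Control and the arm with the most is coded Experimental, matching the rule above; the function names, example trial, and strategy sets are invented for illustration.

```python
def label_arms(arms: dict) -> tuple:
    """Given {arm_name: set_of_strategies}, code the arm with the fewest
    strategies (possibly zero, i.e., care as usual) as Control and the
    arm with the most strategies as Experimental."""
    ranked = sorted(arms, key=lambda name: len(arms[name]))
    return ranked[0], ranked[-1]

def tested(arms: dict, control: str, experimental: str) -> set:
    """'Tested' strategies appear only in the Experimental Arm."""
    return arms[experimental] - arms[control]

# Invented three-arm example.
arms = {
    "usual_care": set(),
    "education": {"Distribute Educational Materials"},
    "facilitation": {"Distribute Educational Materials",
                     "Conduct Educational Meetings",
                     "External Facilitation"},
}
control, experimental = label_arms(arms)
print(control, experimental)  # -> usual_care facilitation
print(tested(arms, control, experimental))
# -> {'Conduct Educational Meetings', 'External Facilitation'}
```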

Implementation strategies were classified using standard definitions (MJC, SSR, DEG), based on minor modifications to the ERIC taxonomy [2, 3, 4]. Modifications resulted in 70 named strategies and were made to decrease redundancy and improve clarity. These modifications were based on input from experts, cognitive interview data, and team consensus [37] (see Additional File 4). Outcomes were then coded into RE-AIM outcome domains following best practices as recommended by framework experts [26, 27, 28]. We coded the RE-AIM domain of Effectiveness as either an assessment of the effectiveness of the EBP or of the implementation strategy. We did not assess implementation strategy fidelity or effects on health disparities, as these are recently adopted reporting standards [27, 28] and not yet widely implemented in current publications. Further, we did not include implementation costs as an outcome because reporting guidelines have not been standardized [38, 39].

Assessment and minimization of bias

Assessment and minimization of bias is an important component of high-quality systematic reviews. The Cochrane Collaboration guidance for conducting high-quality systematic reviews recommends a specific assessment of bias for individual studies, covering the domains of randomization, deviations from the intended intervention, missing data, measurement of the outcome, and selection of the reported results (e.g., following a pre-specified analysis plan) [40, 41]. One way we addressed bias was by consolidating multiple publications from the same study into a single finding (i.e., N=1), so as to avoid inflating estimates due to multiple publications on different aspects of a single trial. We also included only high-quality studies, as described above. However, it was not feasible to consistently apply an assessment-of-bias tool due to implementation science’s broad scope and the heterogeneity of study designs, contexts, outcomes, and variable measurement. For example, most implementation studies reviewed had many outcomes across the RE-AIM framework, with no one outcome designated as primary, precluding assignment of a single score across studies.

Analysis

We used descriptive statistics to present the distribution of health or healthcare areas, settings, outcomes, and the median number of included patients and sites per study, overall and by country (classified as U.S. vs. non-U.S.). Implementation strategies were described individually, using descriptive statistics to summarize the frequency of strategy use “overall” (in any study arm) and the mean number of strategies reported in the Control and Experimental Arms. We additionally described the strategies present only in the Experimental (and not Control) Arm, defining these as strategies that were “tested” and may have accounted for differences in outcomes between arms.

We described frequencies of pair-wise combinations of implementation strategies in the Experimental Arm. To assess the strength of the evidence supporting implementation strategies used in the Experimental Arm, study outcomes were categorized by RE-AIM and coded based on whether use of the strategies resulted in a significantly positive effect (yes=1; no=0). We then created an indicator variable for whether at least one RE-AIM outcome in the study was significantly positive (yes=1; no=0). We plotted strategies on a graph with quadrants based on the combination of the median number of studies in which a strategy appeared and the median percent of studies in which a strategy was associated with at least one positive RE-AIM outcome. The upper right quadrant (a higher number of studies overall and a higher percent of studies with a significant RE-AIM outcome) represents a superior level of evidence. For implementation strategies in the upper right quadrant, we describe each RE-AIM outcome and the proportion of studies with a significant outcome.
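A minimal matplotlib sketch of such a quadrant plot follows; the strategy names and counts are invented placeholders rather than the review's data, with dashed median lines marking the quadrant boundaries.

```python
import matplotlib.pyplot as plt
from statistics import median

# Invented placeholder data: strategy -> (number of studies testing it,
# % of those studies with >=1 significantly positive RE-AIM outcome).
evidence = {
    "Strategy A": (68, 81.0),
    "Strategy B": (12, 42.0),
    "Strategy C": (54, 76.0),
    "Strategy D": (9, 88.0),
    "Strategy E": (37, 65.0),
}

xs = [n for n, _ in evidence.values()]
ys = [p for _, p in evidence.values()]

fig, ax = plt.subplots()
ax.scatter(xs, ys)
for name, (x, y) in evidence.items():
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(4, 4))

# Dashed median lines split the plane into quadrants; the upper-right
# quadrant (many studies, high percent positive) marks stronger evidence.
ax.axvline(median(xs), linestyle="--")
ax.axhline(median(ys), linestyle="--")
ax.set_xlabel("Number of studies testing the strategy")
ax.set_ylabel("% of studies with a positive RE-AIM outcome")
plt.show()
```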

Search results

We identified 14,646 articles through the initial literature search, 17 articles through expert recommendation (three of which were not included in the initial search), and 1,942 articles through reviewing prior systematic reviews (Fig. 1). After removing duplicates, 9,399 articles were included in the initial abstract screening. Of those, 48% (n=4,075) of abstracts were reviewed in pairs for inclusion. Articles with a score of five or six were reviewed a second time (n=2,859), and one quarter of abstracts that scored lower than five were re-reviewed at random. We screened the full text of 1,426 articles in pairs. Common reasons for exclusion were 1) study rigor, including no clear delineation between the EBP and the implementation strategy, 2) not testing an implementation strategy, and 3) article type not meeting inclusion criteria (e.g., commentary, protocol). Six hundred seventeen articles were reviewed for study rigor, with 385 excluded for reasons related to study design and rigor and 86 removed for other reasons (e.g., not a research article). Among the three additional expert-recommended articles, one met inclusion criteria and was added to the analysis. The final number of studies abstracted was 129, representing 143 publications.

Fig. 1 Expanded PRISMA flow diagram, describing each step in the review and abstraction process for the systematic review

Descriptive results

Of 129 included studies (Table 1; see also Additional File 5 for a summary of included studies), 103 (79%) were conducted in a healthcare setting. EBP settings varied and included primary care (n=46; 36%), specialty care (n=27; 21%), mental health (n=11; 9%), and public health (n=30; 23%), with 64 studies (50%) occurring in an outpatient healthcare setting. Studies included a median of 29 sites and 1,419 members of the target population (e.g., patients or students). The number of strategies varied widely across studies, with Control Arms averaging approximately two strategies (Range = 0-20, including studies with no strategy in the comparison group) and Experimental Arms averaging eight strategies (Range = 1-21). Non-U.S. studies (n=73) tended to include more sites and larger target populations, with a median of 32 sites and 1,531 patients assessed per study.

Organized by RE-AIM, the most evaluated outcomes were Effectiveness (n=82; 64%) and Implementation (n=73; 56%), followed by Maintenance (n=40; 31%), Adoption (n=33; 26%), and Reach (n=31; 24%). Most studies (n=98; 76%) reported at least one significantly positive outcome. Adoption and Implementation outcomes showed positive change in three-quarters of studies (n=78), while Reach (n=18; 58%), Effectiveness (n=44; 54%), and Maintenance (n=23; 58%) outcomes showed positive change in just over half of studies.

The following describes the results for each research question.

Table 2 shows the number of studies in which each implementation strategy was used in the Control Arm, in the Experimental Arm(s), and as a tested strategy (used exclusively in the Experimental Arm), grouped by strategy type as specified in previous ERIC reports [2, 6].

Control arm

In about half of the studies (53%; n=69), the Control Arms were "active controls" that included at least one strategy, with an average of 1.64 strategies (and up to 20) reported. The two most common strategies used in Control Arms were Distribute Educational Materials (n=52) and Conduct Educational Meetings (n=30).

Experimental arm

Experimental conditions included an average of 8.33 implementation strategies per study (Range = 1-21). Figure 2 shows a heat map of the strategies that were used in the Experimental Arms in each study. The most common strategies in the Experimental Arm were Distribute Educational Materials ( n =99), Conduct Educational Meetings ( n =96), Audit and Provide Feedback ( n =76), and External Facilitation ( n =59).

Fig. 2 Implementation strategies used in the Experimental Arm of included studies. Explore more here: https://public.tableau.com/views/Figure2_16947070561090/Figure2?:language=en-US&:display_count=n&:origin=viz_share_link

Tested strategies

The average number of implementation strategies included in the Experimental Arm only (and not in the Control Arm) was 6.73 (Range = 0-20; see Footnote 2). Overall, the top 10% of tested strategies included Conduct Educational Meetings (n=68), Audit and Provide Feedback (n=63), External Facilitation (n=54), Distribute Educational Materials (n=49), Tailor Strategies (n=41), Assess for Readiness and Identify Barriers and Facilitators (n=38), and Organize Clinician Implementation Team Meetings (n=37). Few studies tested a single strategy (n=9); these strategies were Audit and Provide Feedback, Conduct Educational Meetings, Conduct Ongoing Training, Create a Learning Collaborative, External Facilitation (n=2), Facilitate Relay of Clinical Data to Providers, Prepare Patients/Consumers to be Active Participants, and Use Other Payment Schemes. Three implementation strategies were included in the Control or Experimental Arms but were never tested: Use Mass Media, Stage Implementation Scale Up, and Fund and Contract for the Clinical Innovation.
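Operationally, identifying "tested" strategies amounts to a set difference between arms, as in this minimal sketch; the arm contents are invented, and the names follow the ERIC taxonomy.

```python
# Strategies reported in each arm of one hypothetical study
control = {"Distribute Educational Materials", "Conduct Educational Meetings"}
experimental = {
    "Distribute Educational Materials",
    "Conduct Educational Meetings",
    "Audit and Provide Feedback",
    "External Facilitation",
}

# Tested strategies appear in the Experimental Arm but not the Control Arm
tested = experimental - control
print(tested)  # {'Audit and Provide Feedback', 'External Facilitation'}
```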

Table 3  shows the five most used strategies in Experimental Arms with their top ten most frequent pairings, excluding Distribute Educational Materials and Conduct Educational Meetings, as these strategies were included in almost all Experimental and half of Control Arms. The five most used strategies in the Experimental Arm included Audit and Provide Feedback ( n =76), External Facilitation ( n =59), Tailor Strategies ( n =43), Assess for Readiness and Identify Barriers and Facilitators ( n =43), and Organize Implementation Teams ( n =42).

Strategies frequently paired with these five strategies included two educational strategies: Distribute Educational Materials and Conduct Educational Meetings. Other commonly paired strategies included Develop a Formal Implementation Blueprint, Promote Adaptability, Conduct Ongoing Training, Purposefully Reexamine the Implementation, and Develop and Implement Tools for Quality Monitoring.
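Counting pairwise strategy combinations across studies can be done with a simple counter over two-element combinations, as in this sketch with invented study data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical Experimental Arm strategy sets, one per study
study_strategies = [
    {"Audit and Provide Feedback", "External Facilitation", "Tailor Strategies"},
    {"Audit and Provide Feedback", "External Facilitation"},
    {"External Facilitation", "Tailor Strategies"},
]

pair_counts = Counter()
for strategies in study_strategies:
    # sorting gives every unordered pair a single canonical tuple key
    pair_counts.update(combinations(sorted(strategies), 2))

for pair, n in pair_counts.most_common():
    print(n, pair)
```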

We classified the strength of evidence for each strategy by evaluating both the number of studies in which the strategy appeared in the Experimental Arm and the percentage of those studies with at least one significantly positive RE-AIM outcome. Using these factors, Fig. 3 shows the number of studies in which individual strategies were evaluated (on the y-axis) against the percentage of studies including those strategies that had at least one positive outcome (on the x-axis). Due to the non-normal distribution of both factors, we used the median (rather than the mean) to create four quadrants. Strategies in the lower left quadrant were tested in fewer than the median number of studies (8.5) and were less frequently associated with a significant RE-AIM outcome (<75% of studies). The upper right quadrant included strategies that occurred in more than the median number of studies (8.5) and exceeded the median percent of studies with a significant RE-AIM outcome (75%); these 19 strategies were viewed as having stronger evidence. Of the 19, Conduct Educational Meetings, Distribute Educational Materials, External Facilitation, and Audit and Provide Feedback continued to occur frequently, appearing in 59-99 studies.
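A minimal sketch of the median-split classification follows, assuming a hypothetical strategy-level summary table; the values are invented (in the review itself, the cut points were 8.5 studies and 75%).

```python
import pandas as pd

# Hypothetical strategy-level summary: number of Experimental Arm studies and
# percent of those studies with at least one significant RE-AIM outcome
summary = pd.DataFrame({
    "strategy": ["Audit and Provide Feedback", "External Facilitation",
                 "Use Mass Media", "Tailor Strategies"],
    "n_studies": [76, 59, 2, 43],                  # plotted on the y-axis
    "pct_sig_outcome": [82.0, 80.0, 50.0, 74.0],   # plotted on the x-axis
})

n_cut = summary["n_studies"].median()
pct_cut = summary["pct_sig_outcome"].median()

# Upper right quadrant: above the median on both factors (stronger evidence)
summary["upper_right"] = (
    (summary["n_studies"] > n_cut) & (summary["pct_sig_outcome"] > pct_cut)
)
print(summary)
```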

Fig. 3 Experimental Arm implementation strategies with significant RE-AIM outcomes. Explore more here: https://public.tableau.com/views/Figure3_16947017936500/Figure3?:language=en-US&publish=yes&:display_count=n&:origin=viz_share_link

Figure 4 illustrates the proportion of significant outcomes for each RE-AIM dimension for the 19 commonly used, evidence-supported implementation strategies in the upper right quadrant. These findings again show the widespread use of Conduct Educational Meetings and Distribute Educational Materials. Implementation and Effectiveness outcomes were assessed most frequently, with Implementation being the most commonly reported significantly positive outcome.

Fig. 4 RE-AIM outcomes for the 19 top-right-quadrant implementation strategies. The y-axis is the number of studies; the x-axis is a stacked bar chart for each RE-AIM outcome (R=Reach, E=Effectiveness, A=Adoption, I=Implementation, M=Maintenance). Blue denotes at least one significant RE-AIM outcome; light blue denotes studies that used the given implementation strategy and did not have a significant RE-AIM outcome. Explore more here: https://public.tableau.com/views/Figure4_16947017112150/Figure4?:language=en-US&publish=yes&:display_count=n&:origin=viz_share_link

Discussion

This systematic review identified 129 experimental studies examining the effectiveness of implementation strategies across a broad range of health and human service settings. Overall, we found that evidence is lacking for most ERIC implementation strategies, that most studies employed combinations of strategies, and that implementation outcomes, categorized by RE-AIM dimensions, have not been universally defined or applied. Accordingly, other researchers have described the need for universal outcome definitions and descriptions across implementation research studies [28, 42]. Our findings have important implications not only for the current state of the field but also for creating guidance to help investigators determine which strategies to examine and in what contexts.

The four most evaluated strategies were Distribute Educational Materials, Conduct Educational Meetings, External Facilitation, and Audit and Provide Feedback. That Conduct Educational Meetings and Distribute Educational Materials were the most common is perhaps unsurprising, as education strategies are generally considered "necessary but not sufficient" for successful implementation [43, 44]. Because education is often embedded in interventions, it is critical to define the boundary between the innovation and the implementation strategies used to support it. Further specification of when these strategies are EBP core components versus implementation strategies (e.g., booster trainings or remediation) is needed [45, 46].

We identified 19 implementation strategies that were tested in at least 8 studies (more than the median) and were associated with positive results at least 75% of the time. These strategies can be further categorized by whether they are used early (pre-implementation) or later in implementation. Pre-implementation (preparatory) strategies with strong evidence included educational activities (Conduct Educational Meetings, Distribute Educational Materials, Conduct Educational Outreach Visits, Train for Leadership, Use Train-the-Trainer Strategies) and site diagnostic activities (Assess for Readiness and Identify Barriers and Facilitators, Conduct Local Needs Assessment, Identify and Prepare Champions, and Assess and Redesign Workflows). Strategies targeting the implementation phase included those that provide coaching and support (External and Internal Facilitation), involve additional key partners (Intervene with Patients to Enhance Uptake and Adherence), and engage in quality improvement activities (Audit and Provide Feedback, Facilitate the Relay of Clinical Data to Providers, Purposefully Reexamine the Implementation, Conduct Cyclical Small Tests of Change, Develop and Implement Tools for Quality Monitoring).

Many ERIC strategies were not represented in the reviewed studies, particularly the financial and policy strategies. Ten strategies were not used in any studies: Alter Patient/Consumer Fees, Change Liability Laws, Change Service Sites, Develop Disincentives, Develop Resource Sharing Agreements, Identify Early Adopters, Make Billing Easier, Start a Dissemination Organization, Use Capitated Payments, and Use Data Experts. One limitation of this investigation was that not all individual strategies or combinations were investigated. Reasons for the absence of these strategies may include challenges with testing certain strategies experimentally (e.g., changing liability laws), limitations in our search terms, and the relative paucity of implementation strategy trials compared with clinical trials. Many "untested" strategies require large-scale structural changes with leadership support (see [47] for a policy experiment example). Recent preliminary work has assessed the feasibility of applying policy strategies and described the associated challenges [48, 49, 50]. While not impossible in large systems like the VA (for example, the randomized evaluation of the VA Stratification Tool for Opioid Risk Management), the large size, structure, and organizational imperatives involved make such initiatives challenging to evaluate experimentally. Likewise, the absence of these ten strategies may have resulted from our inclusion criteria, which required an experimental design. Thus, creative study designs may be needed to test high-level policy or financial strategies experimentally.

Some strategies were likely under-represented in our search, including electronic medical record reminders and clinical decision support tools and systems. These are often considered "interventions" by clinical trialists and may not be indexed as studies involving "implementation strategies" (such tools have been reviewed elsewhere [51, 52, 53]). Thus, strategies that are also considered interventions in the literature (e.g., education interventions) were not sought or captured. Our findings do not imply that these strategies are ineffective, but rather that more study is needed. Consistent with prior investigations [54], few studies meeting inclusion criteria tested financial strategies. Accordingly, there are increasing calls to track and monitor the effects of financial strategies within implementation science to understand their effectiveness in practice [55, 56]. However, experts have noted that studying financial strategies can be challenging, given that they are typically implemented at the system level and necessitate research designs suited to studying policy effects (e.g., quasi-experimental methods, systems-science modeling) [57]. Yet there have been recent, promising efforts to use financial strategies to support EBPs [58] that could be a model for the field moving forward.

The relationship between the number of strategies used and improved outcomes has been described inconsistently in the literature. While some studies have found improved outcomes with a uniquely combined bundle of strategies or a standardized package of strategies (e.g., Replicating Effective Programs [59, 60] and Getting To Outcomes [61, 62]), others have found that "more is not always better" [63, 64, 65]. For example, Rogal and colleagues documented that VA hospitals implementing a new evidence-based hepatitis C treatment chose >20 strategies, when multiple years of data linking strategies to outcomes showed that 1-3 specific strategies would have yielded the same outcome [39]. Considering that most studies employed multiple or multifaceted strategies, there appears to be a benefit to using a targeted bundle of strategies purposefully aligned with site, clinic, and population norms, rather than simply adding more strategies [66].

It is difficult to assess the effectiveness of any one implementation strategy in bundles where multiple strategies are used simultaneously. Even a "single" strategy like External Facilitation is, in actuality, a bundle of more narrowly constructed strategies (e.g., Conduct Educational Meetings, Identify and Prepare Champions, and Develop a Formal Implementation Blueprint). Thus, studying External Facilitation does not allow a test of the individual strategies that comprise it, potentially masking the effectiveness of any one of them. Even where we could disaggregate the effects of multifaceted strategies, doing so may not yield meaningful results: because strategies often synergize, disaggregated results could either underestimate the true impact of individual strategies or, conversely, undermine their effectiveness when that effectiveness comes from combination with other strategies. The complexity of health and human service settings, the imperative to improve public health outcomes, and engagement with community partners often require multiple strategies simultaneously. Therefore, the need to improve real-world implementation may outweigh the theoretical need to identify individual strategy effectiveness. Where isolating the impact of single strategies would be useful, we suggest that the same methods used to document and analyze the critical components (or core functions) of complex interventions [67, 68, 69, 70] may help identify core components of multifaceted implementation strategies [71, 72, 73, 74].

In addition, truly assessing the impact of strategies on outcomes may require tracking fidelity to implementation strategies (not just to the EBPs they support). While this can be challenging, without some degree of tracking and fidelity checks one cannot determine whether a strategy's apparent failure occurred because it 1) was ineffective or 2) was not applied well. Pragmatic tools exist to support researchers in this tracking. For example, the Longitudinal Implementation Strategy Tracking System (LISTS) offers a pragmatic and feasible means of assessing fidelity to, and adaptations of, strategies [75].
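A minimal sketch of what such strategy-level tracking could look like follows, loosely inspired by longitudinal trackers such as LISTS; the record fields, fidelity flag, and example values are illustrative assumptions, not the instrument's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class StrategyEvent:
    """One delivery of an implementation strategy at one site."""
    strategy: str                # ERIC strategy name
    site: str
    event_date: date
    delivered_as_planned: bool   # crude fidelity flag
    adaptation: Optional[str] = None

log = [
    StrategyEvent("Audit and Provide Feedback", "Clinic A",
                  date(2022, 3, 1), True),
    StrategyEvent("Audit and Provide Feedback", "Clinic A",
                  date(2022, 6, 1), False,
                  adaptation="feedback delivered quarterly, not monthly"),
]

# Share of deliveries matching the plan: a first-pass fidelity estimate
fidelity = sum(e.delivered_as_planned for e in log) / len(log)
print(f"Fidelity to planned delivery: {fidelity:.0%}")
```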

Implications for implementation science: four recommendations

Based on our findings, we offer four recommended “best practices” for implementation studies.

Prespecify strategies using standard nomenclature. This study reaffirmed the need to apply not only a standard naming convention (e.g., ERIC) but also standard reporting for implementation strategies. While reporting systems like those by Proctor [1] or Pinnock [75] would optimize learning across studies, few manuscripts specify strategies as recommended [76, 77]. Pre-specification allows planners and evaluators to assess the feasibility and acceptability of strategies with partners and community members [24, 78, 79] and allows evaluators and implementers to monitor and measure the fidelity, dose, and adaptations of strategies delivered over the course of implementation [27]. In turn, these data can be used to assess costs, analyze effectiveness [38, 80, 81], and ensure more accurate reporting [82, 83, 84, 85]. This specification should include, among other data, the intensity, stage of implementation, and justification for the selection (a minimal coded template is sketched after these recommendations). Information on why strategies were selected for specific settings would further the field and be of great use to practitioners [63, 65, 69, 79, 86].

Ensure that standards for measuring and reporting implementation outcomes are consistently applied and account for the complexity of implementation studies. Part of improving standardized reporting must include clearly defining outcomes and linking each outcome to particular implementation strategies. It was challenging in the present review to disentangle the impact of the intervention(s) (i.e., the EBP) from the impact of the implementation strategy(ies) for each RE-AIM dimension. For example, fidelity to the EBP was often reported, but fidelity to the implementation strategies was not. Similarly, Reach and Adoption of the intervention would be reported for the Experimental Arm but not for the Control Arm, prohibiting statistical comparison between study arms of the strategies' relative impact on the EBP. Moreover, many studies evaluated numerous outcomes, risking data dredging. Further, the substantial heterogeneity in how implementation outcomes are operationalized and reported is a major barrier to conducting large-scale meta-analytic syntheses of evidence for implementation strategies [67]. The field could look to others in the social and health sciences for examples of how to test, validate, and promote a common set of outcome measures to bring consistency across studies and real-world practice (e.g., the NIH-funded Patient-Reported Outcomes Measurement Information System [PROMIS], https://www.healthmeasures.net/explore-measurement-systems/promis ).

Develop infrastructure to learn cross-study lessons in implementation science. Data repositories, like those developed by NCI for rare diseases, the U.S. HIV Implementation Science Coordination Initiative [87], and the Behavior Change Technique Ontology [88], could allow implementation scientists to report their findings in a more standardized manner, promoting ease of communication and contextualization of findings across studies. For example, the HIV Implementation Science Coordination Initiative asked all implementation projects to use common frameworks, developed user-friendly databases enabling practitioners to match strategies to determinants, and built a dashboard of studies assessing implementation determinants [89, 90, 91, 92, 93, 94].

Develop and apply methods to rigorously study common strategies and bundles. These findings support prior recommendations for improved empirical rigor in implementation studies [46, 95]. Many studies were excluded from our review for not meeting methodological rigor standards. Understanding the effectiveness of discrete strategies deployed alone or in combination requires reliable, low-burden tracking methods to collect information about strategy use and outcomes. For example, frameworks like the Implementation Replication Framework [96] could help interpret findings across studies using the same strategy bundle. Other tracking approaches may leverage technology (e.g., cell phones, tablets, EMR templates) [78, 97] or find novel, pragmatic ways to collect recommended strategy specifications over time (e.g., dose, deliverer, and mechanism) [1, 9, 27, 98, 99]. Rigorous reporting standards could inform more robust analyses and conclusions (e.g., moving toward the goals of understanding causality and microcosting) [24, 38, 100, 101]. Such detailed tracking is also required to understand how site-level factors moderate implementation strategy effects [102]. In some cases, adaptive trial designs like sequential multiple assignment randomized trials (SMARTs) and just-in-time adaptive interventions (JITAIs) can be helpful for planning strategy escalation.
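Returning to the first recommendation, a pre-specification record can be as simple as a structured template covering the reporting dimensions recommended by Proctor and colleagues [1] (actor, action, action target, temporality, dose, implementation outcome affected, and justification). The sketch below is an illustrative assumption about how such a template might be coded; the example values are invented.

```python
from dataclasses import dataclass

@dataclass
class StrategySpecification:
    """One pre-specified strategy, following Proctor et al.'s dimensions [1]."""
    name: str              # ERIC strategy name
    actor: str             # who delivers the strategy
    action: str            # what is done
    action_target: str     # toward whom or what
    temporality: str       # when, and at which implementation stage
    dose: str              # frequency and intensity
    outcome_affected: str  # implementation outcome it should move
    justification: str     # rationale for selecting it in this setting

spec = StrategySpecification(
    name="Audit and Provide Feedback",
    actor="External facilitator",
    action="Deliver site-level performance reports with peer benchmarking",
    action_target="Primary care teams",
    temporality="Implementation phase, months 1-12",
    dose="Monthly",
    outcome_affected="Fidelity to the EBP",
    justification="Low baseline performance found in the needs assessment",
)
print(spec)
```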

Limitations

Despite the strengths of this review, there were certain notable limitations. First, we only included experimental studies, omitting many informative observational investigations covering the range of implementation strategies. Second, our study period was anchored to the creation of the journal Implementation Science rather than to the later publication of the ERIC taxonomy, which standardized and operationalized implementation strategies. This, together with latency from funding cycles and the reporting of study results, means the taxonomy was not applied in earlier studies. To address this limitation, we retroactively mapped strategies to ERIC, but it is possible that some studies were missed. Additionally, indexing approaches used by academic databases may have missed relevant studies. We addressed this concern by reviewing other systematic reviews of implementation strategies and soliciting recommendations from global implementation science experts.

Another potential limitation comes from the ERIC taxonomy itself: strategy listings like ERIC are only useful when they are widely adopted and used in conjunction with guidelines for specifying and reporting strategies [1] in protocol and outcome papers. Although the ERIC paper has been widely cited (over 3,000 times, with approximately 186,000 accesses), it is still not universally applied, making it more difficult to track the impact of specific strategies. However, our experience with this review suggests that ERIC's use is increasing over time. Some have also commented that ERIC strategies can be unclear and miss key domains. Researchers are therefore making definitions clearer for lay users [37, 103], increasing the number of discrete strategies for specific domains like HIV treatment, acknowledging strategies for new functions (e.g., de-implementation [104], local capacity building), accounting for phases of implementation (dissemination, sustainment [13], scale-up), addressing settings [12, 20] and actors' roles in the process, and making mechanism-based selection of strategies more user-friendly through searchable databases [9, 10, 54, 73, 104, 105, 106]. In sum, we found that the utility of the ERIC taxonomy outweighs its current limitations.

As with all reviews, the search terms influenced our findings. The broad terms used for implementation strategies (e.g., "evidence-based interventions" [7] or "behavior change techniques" [107]) may have led to inadvertent omission of studies of specific strategies. For example, the search terms may not have captured tests of policies, financial strategies, community health promotion initiatives, or electronic medical record reminders, owing to terminology differences in the corresponding subfields (e.g., health economics, business, health information technology, and health policy). To manage this, we asked experts to identify any studies they would include and cross-checked their lists against our search results, which yielded very few additional studies. We applied standard coding using the ERIC taxonomy, which was a strength, but future work should consider the additional strategies recommended to augment ERIC around sustainment [13, 79, 106, 108], community and public health research [12, 109, 110, 111], consumer or service user engagement [112], de-implementation [104, 113, 114, 115, 116, 117], and related terms [118].

We were unable to assess the bias of individual studies due to non-standard reporting across papers and the heterogeneity of study designs, measurement of implementation strategies and outcomes, and analytic approaches. This could have led us to over- or underestimate results in our synthesis. We addressed this limitation by being cautious in reporting findings, specifically in identifying "effective" implementation strategies. Further, we were not able to gather primary data to evaluate effect sizes across studies and thereby systematically evaluate bias; doing so would be fruitful for future study.

Conclusions

This novel review of 129 studies summarized the body of evidence supporting the use of ERIC-defined implementation strategies to improve health or healthcare. We identified commonly occurring implementation strategies, frequently used bundles, and the strategies with the highest degree of supportive evidence, while simultaneously identifying gaps in the literature. Additionally, we identified several key areas for future growth and operationalization across the field of implementation science with the goal of improved reporting and assessment of implementation strategies and related outcomes.

Availability of data and materials

All data for this study are included in this published article and its supplementary information files.

Notes

Footnote 1. We modestly revised the following research questions from our PROSPERO registration after reading the articles and better understanding the nature of the literature: 1) What is the available evidence regarding the effectiveness of implementation strategies in supporting the uptake and sustainment of evidence intended to improve health and healthcare outcomes? 2) What are the current gaps in the literature (i.e., implementation strategies that do not have sufficient evidence of effectiveness) that require further exploration?

Footnote 2. Tested strategies are those present in the Experimental Arm but not in the Control Arm. Comparative effectiveness or time-staggered trials may not have any unique strategies in the Experimental Arm and therefore, in our analysis, would have no tested strategies.

Abbreviations

CDC: Centers for Disease Control
CINAHL: Cumulated Index to Nursing and Allied Health Literature
D&I: Dissemination and Implementation
EBP: Evidence-based practices or programs
ERIC: Expert Recommendations for Implementing Change
MOST: Multiphase Optimization Strategy
NCI: National Cancer Institute
NIH: National Institutes of Health
Pitt DISC: The Pittsburgh Dissemination and Implementation Science Collaborative
SMART: Sequential Multiple Assignment Randomized Trial
US: United States
VA: Department of Veterans Affairs

References

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.


Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

Waltz TJ, Powell BJ, Chinman MJ, Smith JL, Matthieu MM, Proctor EK, et al. Expert recommendations for implementing change (ERIC): protocol for a mixed methods study. Implement Sci IS. 2014;9:39.


Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A Compilation of Strategies for Implementing Clinical Innovations in Health and Mental Health. Med Care Res Rev. 2012;69:123–57.

Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, Chinman MJ, Smith JL, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:109.

Perry CK, Damschroder LJ, Hemler JR, Woodson TT, Ono SS, Cohen DJ. Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory. Implement Sci. 2019;14(1):32.

Community Preventive Services Task Force. Community Preventive Services Task Force: All Active Findings June 2023 [Internet]. 2023 [cited 2023 Aug 7]. Available from: https://www.thecommunityguide.org/media/pdf/CPSTF-All-Findings-508.pdf

Solberg LI, Kuzel A, Parchman ML, Shelley DR, Dickinson WP, Walunas TL, et al. A Taxonomy for External Support for Practice Transformation. J Am Board Fam Med JABFM. 2021;34:32–9.

Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12:1–9.


Leeman J, Calancie L, Hartman MA, Escoffery CT, Herrmann AK, Tague LE, et al. What strategies are used to build practitioners’ capacity to implement community-based interventions and are they effective?: a systematic review. Implement Sci. 2015;10:1–15.

Nathan N, Shelton RC, Laur CV, Hailemariam M, Hall A. Editorial: Sustaining the implementation of evidence-based interventions in clinical and community settings. Front Health Serv. 2023;3:1176023.

Balis LE, Houghtaling B, Harden SM. Using implementation strategies in community settings: an introduction to the Expert Recommendations for Implementing Change (ERIC) compilation and future directions. Transl Behav Med. 2022;12:965–78.

Nathan N, Powell BJ, Shelton RC, Laur CV, Wolfenden L, Hailemariam M, et al. Do the Expert Recommendations for Implementing Change (ERIC) strategies adequately address sustainment? Front Health Serv. 2022;2:905909.

Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.


Moore L, Guertin JR, Tardif P-A, Ivers NM, Hoch J, Conombo B, et al. Economic evaluations of audit and feedback interventions: a systematic review. BMJ Qual Saf. 2022;31:754–67.

Sykes MJ, McAnuff J, Kolehmainen N. When is audit and feedback effective in dementia care? A systematic review. Int J Nurs Stud. 2018;79:27–35.

Barnes C, McCrabb S, Stacey F, Nathan N, Yoong SL, Grady A, et al. Improving implementation of school-based healthy eating and physical activity policies, practices, and programs: a systematic review. Transl Behav Med. 2021;11:1365–410.

Tomasone JR, Kauffeldt KD, Chaudhary R, Brouwers MC. Effectiveness of guideline dissemination and implementation strategies on health care professionals’ behaviour and patient outcomes in the cancer care context: a systematic review. Implement Sci. 2020;15:1–18.

Seda V, Moles RJ, Carter SR, Schneider CR. Assessing the comparative effectiveness of implementation strategies for professional services to community pharmacy: A systematic review. Res Soc Adm Pharm. 2022;18:3469–83.

Lovero KL, Kemp CG, Wagenaar BH, Giusto A, Greene MC, Powell BJ, et al. Application of the Expert Recommendations for Implementing Change (ERIC) compilation of strategies to health intervention implementation in low- and middle-income countries: a systematic review. Implement Sci. 2023;18:56.

Chapman A, Rankin NM, Jongebloed H, Yoong SL, White V, Livingston PM, et al. Overcoming challenges in conducting systematic reviews in implementation science: a methods commentary. Syst Rev. 2023;12:1–6.


Proctor EK, Bunger AC, Lengnick-Hall R, Gerke DR, Martin JK, Phillips RJ, et al. Ten years of implementation outcomes research: a scoping review. Implement Sci. 2023;18:1–19.

Michaud TL, Pereira E, Porter G, Golden C, Hill J, Kim J, et al. Scoping review of costs of implementation strategies in community, public health and healthcare settings. BMJ Open. 2022;12:e060785.

Sohn H, Tucker A, Ferguson O, Gomes I, Dowdy D. Costing the implementation of public health interventions in resource-limited settings: a conceptual framework. Implement Sci. 2020;15:1–8.

Peek C, Glasgow RE, Stange KC, Klesges LM, Purcell EP, Kessler RS. The 5 R’s: an emerging bold standard for conducting relevant research in a changing world. Ann Fam Med. 2014;12:447–55.


Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

Shelton RC, Chambers DA, Glasgow RE. An Extension of RE-AIM to Enhance Sustainability: Addressing Dynamic Context and Promoting Health Equity Over Time. Front Public Health. 2020;8:134.

Holtrop JS, Estabrooks PA, Gaglio B, Harden SM, Kessler RS, King DK, et al. Understanding and applying the RE-AIM framework: Clarifications and resources. J Clin Transl Sci. 2021;5:e126.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ [Internet]. 2021;372. Available from: https://www.bmj.com/content/372/bmj.n71

Rabin BA, Brownson RC, Haire-Joshu D, Kreuter MW, Weaver NL. A Glossary for Dissemination and Implementation Research in Health. J Public Health Manag Pract. 2008;14:117–23.

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1.


Miller WR, Wilbourne PL. Mesa Grande: a methodological analysis of clinical trials of treatments for alcohol use disorders. Addict Abingdon Engl. 2002;97:265–77.

Miller WR, Brown JM, Simpson TL, Handmaker NS, Bien TH, Luckie LF, et al. What works? A methodological analysis of the alcohol treatment outcome literature. In: Handbook of alcohol treatment approaches: effective alternatives. 2nd ed. Needham Heights, MA: Allyn & Bacon; 1995. p. 12–44.

Wells S, Tamir O, Gray J, Naidoo D, Bekhit M, Goldmann D. Are quality improvement collaboratives effective? A systematic review. BMJ Qual Saf. 2018;27:226–40.

Yakovchenko V, Chinman MJ, Lamorte C, Powell BJ, Waltz TJ, Merante M, et al. Refining Expert Recommendations for Implementing Change (ERIC) strategy surveys using cognitive interviews with frontline providers. Implement Sci Commun. 2023;4:1–14.

Wagner TH, Yoon J, Jacobs JC, So A, Kilbourne AM, Yu W, et al. Estimating costs of an implementation intervention. Med Decis Making. 2020;40:959–67.

Gold HT, McDermott C, Hoomans T, Wagner TH. Cost data in implementation science: categories and approaches to costing. Implement Sci. 2022;17:11.

Boutron I, Page MJ, Higgins JP, Altman DG, Lundh A, Hróbjartsson A. Considering bias and conflicts of interest among the included studies. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions. 2019. https://doi.org/10.1002/9781119536604.ch7 . 

Higgins JP, Savović J, Page MJ, Elbers RG, Sterne J. Assessing risk of bias in a randomized trial. Cochrane Handb Syst Rev Interv. 2019;6:205–28.

Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, Contrasting, and Integrating Dissemination and Implementation Outcomes Included in the RE-AIM and Implementation Outcomes Frameworks. Front Public Health. 2020;8:430. Available from: https://doi.org/10.3389/fpubh.2020.00430

Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess Winch Engl. 2004;8:iii–iv 1-72.


Beidas RS, Kendall PC. Training Therapists in Evidence-Based Practice: A Critical Review of Studies From a Systems-Contextual Perspective. Clin Psychol Publ Div Clin Psychol Am Psychol Assoc. 2010;17:1–30.

Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to Improve the Selection and Tailoring of Implementation Strategies. J Behav Health Serv Res. 2017;44:177–94.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda. Front Public Health. 2019;7:3. Available from: https://doi.org/10.3389/fpubh.2019.00003

Frakt AB, Prentice JC, Pizer SD, Elwy AR, Garrido MM, Kilbourne AM, et al. Overcoming Challenges to Evidence-Based Policy Development in a Large, Integrated Delivery System. Health Serv Res. 2018;53:4789–807.


Crable EL, Lengnick-Hall R, Stadnick NA, Moullin JC, Aarons GA. Where is “policy” in dissemination and implementation science? Recommendations to advance theories, models, and frameworks: EPIS as a case example. Implement Sci. 2022;17:80.

Crable EL, Grogan CM, Purtle J, Roesch SC, Aarons GA. Tailoring dissemination strategies to increase evidence-informed policymaking for opioid use disorder treatment: study protocol. Implement Sci Commun. 2023;4:16.

Bond GR. Evidence-based policy strategies: A typology. Clin Psychol Sci Pract. 2018;25:e12267.

Loo TS, Davis RB, Lipsitz LA, Irish J, Bates CK, Agarwal K, et al. Electronic Medical Record Reminders and Panel Management to Improve Primary Care of Elderly Patients. Arch Intern Med. 2011;171:1552–8.

Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ Can Med Assoc J. 2010;182:E216-25.

Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A Randomized Trial of Electronic Clinical Reminders to Improve Quality of Care for Diabetes and Coronary Artery Disease. J Am Med Inform Assoc JAMIA. 2005;12:431–7.

Dopp AR, Kerns SEU, Panattoni L, Ringel JS, Eisenberg D, Powell BJ, et al. Translating economic evaluations into financing strategies for implementing evidence-based practices. Implement Sci. 2021;16:1–12.

Dopp AR, Hunter SB, Godley MD, Pham C, Han B, Smart R, et al. Comparing two federal financing strategies on penetration and sustainment of the adolescent community reinforcement approach for substance use disorders: protocol for a mixed-method study. Implement Sci Commun. 2022;3:51.

Proctor EK, Toker E, Tabak R, McKay VR, Hooley C, Evanoff B. Market viability: a neglected concept in implementation science. Implement Sci. 2021;16:98.

Dopp AR, Narcisse M-R, Mundey P, Silovsky JF, Smith AB, Mandell D, et al. A scoping review of strategies for financing the implementation of evidence-based practices in behavioral health systems: State of the literature and future directions. Implement Res Pract. 2020;1:2633489520939980.


Dopp AR, Kerns SEU, Panattoni L, Ringel JS, Eisenberg D, Powell BJ, et al. Translating economic evaluations into financing strategies for implementing evidence-based practices. Implement Sci IS. 2021;16:66.

Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, Stall R. Implementing evidence-based interventions in health care: application of the replicating effective programs framework. Implement Sci. 2007;2:42–51.

Kegeles SM, Rebchook GM, Hays RB, Terry MA, O’Donnell L, Leonard NR, et al. From science to application: the development of an intervention package. AIDS Educ Prev Off Publ Int Soc AIDS Educ. 2000;12:62–74.

Wandersman A, Imm P, Chinman M, Kaftarian S. Getting to outcomes: a results-based approach to accountability. Eval Program Plann. 2000;23:389–95.

Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: Tools, training, technical assistance, and quality assurance/quality improvement. Am J Community Psychol. 2012;50:445–59.

Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Kirchner JE, Proctor EK, et al. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample. Implement Sci. 2017;12:1–13.

Smith SN, Almirall D, Prenovost K, Liebrecht C, Kyle J, Eisenberg D, et al. Change in patient outcomes after augmenting a low-level implementation strategy in community practices that are slow to adopt a collaborative chronic care model: a cluster randomized implementation trial. Med Care. 2019;57:503.

Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Gonzalez R, Park A, et al. Longitudinal assessment of the association between implementation strategy use and the uptake of hepatitis C treatment: Year 2. Implement Sci. 2019;14:1–12.

Harvey G, Kitson A. Translating evidence into healthcare policy and practice: Single versus multi-faceted implementation strategies – is there a simple answer to a complex question? Int J Health Policy Manag. 2015;4:123–6.

Engell T, Stadnick NA, Aarons GA, Barnett ML. Common Elements Approaches to Implementation Research and Practice: Methods and Integration with Intervention Science. Glob Implement Res Appl. 2023;3:1–15.

Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci IS. 2009;4:40.

Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci IS. 2020;15:84.

Perez Jolles M, Lengnick-Hall R, Mittman BS. Core Functions and Forms of Complex Health Interventions: a Patient-Centered Medical Home Illustration. JGIM J Gen Intern Med. 2019;34:1032–8.

Schroeck FR, Ould Ismail AA, Haggstrom DA, Sanchez SL, Walker DR, Zubkoff L. Data-driven approach to implementation mapping for the selection of implementation strategies: a case example for risk-aligned bladder cancer surveillance. Implement Sci IS. 2022;17:58.

Frank HE, Kemp J, Benito KG, Freeman JB. Precision Implementation: An Approach to Mechanism Testing in Implementation Research. Adm Policy Ment Health. 2022;49:1084–94.

Lewis CC, Klasnja P, Lyon AR, Powell BJ, Lengnick-Hall R, Buchanan G, et al. The mechanics of implementation strategies and measures: advancing the study of implementation mechanisms. Implement Sci Commun. 2022;3:114.

Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19:e1003918.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38:65–76.

Hooley C, Amano T, Markovitz L, Yaeger L, Proctor E. Assessing implementation strategy reporting in the mental health literature: a narrative review. Adm Policy Ment Health Ment Health Serv Res. 2020;47:19–35.

Proctor E, Ramsey AT, Saldana L, Maddox TM, Chambers DA, Brownson RC. FAST: a framework to assess speed of translation of health innovations to practice and policy. Glob Implement Res Appl. 2022;2:107–19.

Cullen L, Hanrahan K, Edmonds SW, Reisinger HS, Wagner M. Iowa Implementation for Sustainability Framework. Implement Sci IS. 2022;17:1.

Saldana L, Ritzwoller DP, Campbell M, Block EP. Using economic evaluations in implementation science to increase transparency in costs and outcomes for organizational decision-makers. Implement Sci Commun. 2022;3:40.

Eisman AB, Kilbourne AM, Dopp AR, Saldana L, Eisenberg D. Economic evaluation in implementation science: making the business case for implementation strategies. Psychiatry Res. 2020;283:112433.

Akiba CF, Powell BJ, Pence BW, Nguyen MX, Golin C, Go V. The case for prioritizing implementation strategy fidelity measurement: benefits and challenges. Transl Behav Med. 2022;12:335–42.

Akiba CF, Powell BJ, Pence BW, Muessig K, Golin CE, Go V. “We start where we are”: a qualitative study of barriers and pragmatic solutions to the assessment and reporting of implementation strategy fidelity. Implement Sci Commun. 2022;3:117.

Rudd BN, Davis M, Doupnik S, Ordorica C, Marcus SC, Beidas RS. Implementation strategies used and reported in brief suicide prevention intervention studies. JAMA Psychiatry. 2022;79:829–31.

Painter JT, Raciborski RA, Matthieu MM, Oliver CM, Adkins DA, Garner KK. Engaging stakeholders to retrospectively discern implementation strategies to support program evaluation: Proposed method and case study. Eval Program Plann. 2024;103:102398.

Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Res Policy Syst. 2017;15:1–12.

Mustanski B, Smith JD, Keiser B, Li DH, Benbow N. Supporting the growth of domestic HIV implementation research in the united states through coordination, consultation, and collaboration: how we got here and where we are headed. JAIDS J Acquir Immune Defic Syndr. 2022;90:S1-8.

Marques MM, Wright AJ, Corker E, Johnston M, West R, Hastings J, et al. The Behaviour Change Technique Ontology: Transforming the Behaviour Change Technique Taxonomy v1. Wellcome Open Res. 2023;8:308.

Merle JL, Li D, Keiser B, Zamantakis A, Queiroz A, Gallo CG, et al. Categorising implementation determinants and strategies within the US HIV implementation literature: a systematic review protocol. BMJ Open. 2023;13:e070216.

Glenshaw MT, Gaist P, Wilson A, Cregg RC, Holtz TH, Goodenow MM. Role of NIH in the Ending the HIV Epidemic in the US Initiative: Research Improving Practice. J Acquir Immune Defic Syndr. 2022;90:S9-16.

Purcell DW, Namkung Lee A, Dempsey A, Gordon C. Enhanced Federal Collaborations in Implementation Science and Research of HIV Prevention and Treatment. J Acquir Immune Defic Syndr. 2022;90:S17-22.

Queiroz A, Mongrella M, Keiser B, Li DH, Benbow N, Mustanski B. Profile of the Portfolio of NIH-Funded HIV Implementation Research Projects to Inform Ending the HIV Epidemic Strategies. J Acquir Immune Defic Syndr. 2022;90:S23-31.

Zamantakis A, Li DH, Benbow N, Smith JD, Mustanski B. Determinants of Pre-exposure Prophylaxis (PrEP) Implementation in Transgender Populations: A Qualitative Scoping Review. AIDS Behav. 2023;27:1600–18.

Li DH, Benbow N, Keiser B, Mongrella M, Ortiz K, Villamar J, et al. Determinants of Implementation for HIV Pre-exposure Prophylaxis Based on an Updated Consolidated Framework for Implementation Research: A Systematic Review. J Acquir Immune Defic Syndr. 2022;90:S235-46.

Chambers DA, Emmons KM. Navigating the field of implementation science towards maturity: challenges and opportunities. Implement Sci. 2024;19:26.

Chinman M, Acosta J, Ebener P, Shearer A. “What we have here, is a failure to [replicate]”: Ways to solve a replication crisis in implementation science. Prev Sci. 2022;23:739–50.

Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.

Lengnick-Hall R, Gerke DR, Proctor EK, Bunger AC, Phillips RJ, Martin JK, et al. Six practical recommendations for improved implementation outcomes reporting. Implement Sci. 2022;17:16.

Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci IS. 2021;16:36.

Xu X, Lazar CM, Ruger JP. Micro-costing in health and medicine: a critical appraisal. Health Econ Rev. 2021;11:1.

Barnett ML, Dopp AR, Klein C, Ettner SL, Powell BJ, Saldana L. Collaborating with health economists to advance implementation science: a qualitative study. Implement Sci Commun. 2020;1:82.

Lengnick-Hall R, Williams NJ, Ehrhart MG, Willging CE, Bunger AC, Beidas RS, et al. Eight characteristics of rigorous multilevel implementation research: a step-by-step guide. Implement Sci. 2023;18:52.

Riley-Gibson E, Hall A, Shoesmith A, Wolfenden L, Shelton RC, Doherty E, et al. A systematic review to determine the effect of strategies to sustain chronic disease prevention interventions in clinical and community settings: study protocol. Res Sq [Internet]. 2023 [cited 2024 Apr 19]; Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10312971/

Ingvarsson S, Hasson H, von Thiele Schwarz U, Nilsen P, Powell BJ, Lindberg C, et al. Strategies for de-implementation of low-value care—a scoping review. Implement Sci IS. 2022;17:73.

Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11:e053474.

Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14:57.

Michie S, Atkins L, West R. The behaviour change wheel: a guide to designing interventions. 1st ed. Great Britain: Silverback Publishing; 2014.

Birken SA, Haines ER, Hwang S, Chambers DA, Bunger AC, Nilsen P. Advancing understanding and identifying strategies for sustaining evidence-based practices: a review of reviews. Implement Sci IS. 2020;15:88.

Metz A, Jensen T, Farley A, Boaz A, Bartley L, Villodas M. Building trusting relationships to support implementation: A proposed theoretical model. Front Health Serv. 2022;2:894599.

Rabin BA, Cain KL, Watson P, Oswald W, Laurent LC, Meadows AR, et al. Scaling and sustaining COVID-19 vaccination through meaningful community engagement and care coordination for underserved communities: hybrid type 3 effectiveness-implementation sequential multiple assignment randomized trial. Implement Sci IS. 2023;18:28.

Gyamfi J, Iwelunmor J, Patel S, Irazola V, Aifah A, Rakhra A, et al. Implementation outcomes and strategies for delivering evidence-based hypertension interventions in lower-middle-income countries: Evidence from a multi-country consortium for hypertension control. PLOS ONE. 2023;18:e0286204.

Woodward EN, Ball IA, Willging C, Singh RS, Scanlon C, Cluck D, et al. Increasing consumer engagement: tools to engage service users in quality improvement or implementation efforts. Front Health Serv. 2023;3:1124290.

Norton WE, Chambers DA. Unpacking the complexities of de-implementing inappropriate health interventions. Implement Sci IS. 2020;15:2.

Norton WE, McCaskill-Stevens W, Chambers DA, Stella PJ, Brawley OW, Kramer BS. DeImplementing Ineffective and Low-Value Clinical Practices: Research and Practice Opportunities in Community Oncology Settings. JNCI Cancer Spectr. 2021;5:pkab020.

McKay VR, Proctor EK, Morshed AB, Brownson RC, Prusaczyk B. Letting Go: Conceptualizing Intervention De-implementation in Public Health and Social Service Settings. Am J Community Psychol. 2018;62:189–202.

Patey AM, Grimshaw JM, Francis JJ. Changing behaviour, ‘more or less’: do implementation and de-implementation interventions include different behaviour change techniques? Implement Sci IS. 2021;16:20.

Rodriguez Weno E, Allen P, Mazzucca S, Farah Saliba L, Padek M, Moreland-Russell S, et al. Approaches for Ending Ineffective Programs: Strategies From State Public Health Practitioners. Front Public Health. 2021;9:727005.

Gnjidic D, Elshaug AG. De-adoption and its 43 related terms: harmonizing low-value care terminology. BMC Med. 2015;13:273.


Acknowledgements

The authors would like to acknowledge the early contributions of the Pittsburgh Dissemination and Implementation Science Collaborative (Pitt DISC). LEA would like to thank Dr. Billie Davis for analytical support. The authors would like to acknowledge the implementation science experts who recommended articles for our review, including Greg Aarons, Mark Bauer, Rinad Beidas, Geoffrey Curran, Laura Damschroder, Rani Elwy, Amy Kilbourne, JoAnn Kirchner, Jennifer Leeman, Cara Lewis, Dennis Li, Aaron Lyon, Gila Neta, and Borsika Rabin.

Funding

Dr. Rogal's time was funded in part by a University of Pittsburgh K award (K23-DA048182) and by a VA Health Services Research and Development grant (PEC 19-207). Drs. Bachrach and Quinn were supported by VA HSR Career Development Awards (CDA 20-057, PI: Bachrach; CDA 20-224, PI: Quinn). Dr. Scheunemann's time was funded by the US Agency for Healthcare Research and Quality (K08HS027210). Drs. Hero, Chinman, Goodrich, Ernecoff, and Mr. Qureshi were funded by the Patient-Centered Outcomes Research Institute (PCORI) AOSEPP2 Task Order 12 to conduct a landscape review of US studies on the effectiveness of implementation strategies, with results reported here ( https://www.pcori.org/sites/default/files/PCORI-Implementation-Strategies-for-Evidence-Based-Practice-in-Health-and-Health-Care-A-Review-of-the-Evidence-Full-Report.pdf and https://www.pcori.org/sites/default/files/PCORI-Implementation-Strategies-for-Evidence-Based-Practice-in-Health-and-Health-Care-Brief-Report-Summary.pdf ). Dr. Ashcraft and Ms. Phares were funded by the Center for Health Equity Research and Promotion (CIN 13-405). The funders had no involvement in this study.

Author information

Shari S. Rogal and Matthew J. Chinman are co-senior authors.

Authors and Affiliations

Center for Health Equity Research and Promotion, Corporal Michael Crescenz VA Medical Center, Philadelphia, PA, USA

Laura Ellen Ashcraft

Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA

Center for Health Equity Research and Promotion, VA Pittsburgh Healthcare System, Pittsburgh, PA, USA

David E. Goodrich, Angela Phares, Deirdre A. Quinn, Shari S. Rogal & Matthew J. Chinman

Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA

David E. Goodrich, Deirdre A. Quinn & Matthew J. Chinman

Clinical & Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA

David E. Goodrich & Lisa G. Lederer

RAND Corporation, Pittsburgh, PA, USA

Joachim Hero, Nabeel Qureshi, Natalie C. Ernecoff & Matthew J. Chinman

Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, Michigan, USA

Rachel L. Bachrach

Department of Psychiatry, University of Michigan Medical School, Ann Arbor, MI, USA

Division of Geriatric Medicine, University of Pittsburgh, Department of Medicine, Pittsburgh, PA, USA

Leslie Page Scheunemann

Division of Pulmonary, Allergy, Critical Care, and Sleep Medicine, University of Pittsburgh, Department of Medicine, Pittsburgh, PA, USA

Departments of Medicine and Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA

Shari S. Rogal


Contributions

LEA, SSR, and MJC conceptualized the study. LEA, SSR, MJC, and JOH developed the study design. LEA and JOH acquired the data. LEA, DEG, AP, RLB, DAQ, LGL, LPS, SSR, NQ, and MJC conducted the abstract review, full-text review, and rigor assessment. LEA, DEG, JOH, AP, RLB, DAQ, NQ, NCE, SSR, and MJC conducted the data abstraction. DEG, SSR, and MJC adjudicated conflicts. LEA and SSR analyzed the data. LEA, SSR, JOH, and MJC interpreted the data. LEA, SSR, and MJC drafted the work. All authors substantially revised the work. All authors approved the submitted version and agreed to be personally accountable for their contributions and the integrity of the work.

Corresponding author

Correspondence to Laura Ellen Ashcraft.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

The manuscript does not contain any individual person’s data.

Competing interests

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.
Supplementary material 2.
Supplementary material 3.
Supplementary material 4.
Supplementary material 5.
Supplementary material 6.
Supplementary material 7.
Supplementary material 8.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Ashcraft, L.E., Goodrich, D.E., Hero, J. et al. A systematic review of experimentally tested implementation strategies across health and human service settings: evidence from 2010-2022. Implementation Sci 19, 43 (2024). https://doi.org/10.1186/s13012-024-01369-5


Received : 09 November 2023

Accepted : 27 May 2024

Published : 24 June 2024

DOI : https://doi.org/10.1186/s13012-024-01369-5


  • Implementation strategy
  • Health-related outcomes

Implementation Science

ISSN: 1748-5908



Our search ended in July 2022, and investigators were contacted to confirm their data accuracy in February 2023. The Figure includes 4 planned platform trials and the planned year of initiation.

eAppendix 1. Search Strategy

eAppendix 2. Baseline Characteristics

eFigure 1. Detailed Flow Chart and Reasons for Exclusion

eTable 1. Report Labels and Reasons for Exclusion of Reports in Literature and Registry Screening

eTable 2. Other Baseline Characteristics

eTable 3. Baseline Characteristics by COVID and Non-COVID Platform Trials

eTable 4. Specific Platform Trial Characteristics in COVID and Non-COVID Trials

eTable 5. Specific Platform Trial Characteristics for Platform Trials With Full Available Master Protocol

eTable 6. Platform Trial Progression and Output of COVID and Non-COVID Trials

eTable 7. Status of Platform Trial Arms and Trial Arm Results in COVID and Non-COVID Trials

eTable 8. How Were Results Made Available for Arms?

eTable 9. Survey Response Rates

eAppendix 3. Example of eMail Template and Report Sent to Platform Trial Teams

eTable 10. List of Randomized Platform Trials

Data Sharing Statement


Griessbach A, Schönenberger CM, Taji Heravi A, et al. Characteristics, Progression, and Output of Randomized Platform Trials: A Systematic Review. JAMA Netw Open. 2024;7(3):e243109. doi:10.1001/jamanetworkopen.2024.3109


Characteristics, Progression, and Output of Randomized Platform Trials: A Systematic Review

  • 1 CLEAR Methods Center, Division of Clinical Epidemiology, Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland
  • 2 Department of Medicine, McMaster University, Hamilton, Ontario, Canada
  • 3 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
  • 4 Department of Neurosurgery, University Hospital Basel, Basel, Switzerland
  • 5 Pragmatic Evidence Lab, Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland

Question   What are the characteristics, progression, and output of randomized platform trials?

Findings   In this systematic review of 127 platform trials with a total of 823 arms, primarily in the fields of oncology and COVID-19, the adaptive features of the trials were often poorly reported and were used in only 49.6% of all trials; results were available for only 65.2% of completed trial arms.

Meaning   The planning and reporting of platform features and the availability of results were insufficient in randomized platform trials.

Importance   Platform trials have become increasingly common, and evidence is needed to determine how this trial design is actually applied in current research practice.

Objective   To determine the characteristics, progression, and output of randomized platform trials.

Evidence Review   In this systematic review of randomized platform trials, Medline, Embase, Scopus, trial registries, gray literature, and preprint servers were searched, and citation tracking was performed in July 2022. Investigators were contacted in February 2023 to confirm data accuracy and to provide updated information on the status of platform trial arms. Randomized platform trials were eligible if they explicitly planned to add or drop arms. Data were extracted in duplicate from protocols, publications, websites, and registry entries. For each platform trial, design features such as the use of a common control arm, use of nonconcurrent control data, statistical framework, adjustment for multiplicity, and use of additional adaptive design features were collected. Progression and output of each platform trial were determined by the recruitment status of individual arms, the number of arms added or dropped, and the availability of results for each intervention arm.

Findings   The search identified 127 randomized platform trials with a total of 823 arms; most trials were conducted in the field of oncology (57 [44.9%]) and COVID-19 (45 [35.4%]). After a more than twofold increase in the initiation of new platform trials at the beginning of the COVID-19 pandemic, the number of platform trials has since declined. Platform trial features were often not reported (not reported: nonconcurrent control, 61 of 127 [48.0%]; multiplicity adjustment for arms, 98 of 127 [77.2%]; statistical framework, 37 of 127 [29.1%]). Adaptive design features were only used by half the studies (63 of 127 [49.6%]). Results were available for 65.2% of closed arms (230 of 353). Premature closure of platform trial arms due to recruitment problems was infrequent (5 of 353 [1.4%]).

Conclusions and Relevance   This systematic review found that platform trials were initiated most frequently during the COVID-19 pandemic and declined thereafter. The reporting of platform features and the availability of results were insufficient. Premature arm closure for poor recruitment was rare.

Randomized clinical trials (RCTs) are the criterion standard for evaluating health care interventions. However, RCTs are criticized for being slow, inflexible, inefficient, and costly. 1 - 6 The platform trial design 7 may overcome some of the challenges associated with traditional RCTs. 5 , 8

In the literature, the definition of platform trials is inconsistent. 7 , 9 - 16 Common characteristics of platform trials include the simultaneous assessment of multiple interventions, as well as the ability to drop ineffective interventions or add promising new interventions (arms). 10 , 13 , 17 - 20 Platform trial planning and conduct require consideration of their unique design features, methodological framework, and level of sophistication. This planning includes the potential use of a common control arm, nonconcurrent control data, the statistical framework (bayesian and/or frequentist), in silico trials (simulations), and the use of additional adaptive design features, such as response adaptive randomization (RAR; the change of the randomization ratio based on data collected during the trial), sample size reassessment, seamless design (seamless study phase transition), and adaptive enrichment (modification of eligibility criteria). 9 , 11 , 16 Platform trials are purported to be more time efficient and cost efficient and to increase trial output, benefiting both patients and researchers. 8 , 9 , 17 Further potential benefits include the use of regulatory documentation (master protocol) and contracts beyond 1 trial and its respective duration, 8 quick initiation of new sites and intervention arms, 21 reuse of established infrastructure, 22 and quick study phase transition. 22
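
The mechanics of RAR are easier to see in a toy simulation. The sketch below is a hypothetical illustration only, not a method taken from any trial in this review: it assumes a simple Beta-Bernoulli (Thompson sampling) model, and the arm names, patient numbers, and response rates are invented.

```python
import random

# Hypothetical sketch of response adaptive randomization (RAR) using
# Beta-Bernoulli Thompson sampling: allocation drifts toward arms with
# better observed outcomes. Arm names and rates are invented.
arms = {"control": [1, 1], "drug_A": [1, 1], "drug_B": [1, 1]}  # [alpha, beta] of a Beta posterior

def assign_arm():
    # Draw a plausible response rate from each arm's posterior and
    # allocate the next patient to the arm with the highest draw.
    draws = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
    return max(draws, key=draws.get)

def record_outcome(arm, success):
    # Update the arm's posterior with the observed binary outcome.
    arms[arm][0 if success else 1] += 1

# Simulate 300 patients under invented "true" response rates.
true_rates = {"control": 0.30, "drug_A": 0.45, "drug_B": 0.30}
for _ in range(300):
    arm = assign_arm()
    record_outcome(arm, random.random() < true_rates[arm])

for name, (a, b) in arms.items():
    print(f"{name}: n={a + b - 2}, posterior mean response rate={a / (a + b):.2f}")
```

Early draws in such a scheme rest on very little data, which is one reason the discussion later in this article notes that RAR requires a well-planned run-in phase.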

Empirical evidence about platform trials is needed to gain insight into the actual application of this design in clinical research practice and to learn about its benefits and pitfalls, so that the planning and conduct of platform trials can be further improved. Previous systematic reviews on platform trials are outdated 13 , 14 ; are restricted to the late-phase, multiarm, multistage design or COVID-19 trials 23 , 24 ; only investigated a small number of distinct platform trial features 23 ; or did not consider the output of platform trials in terms of completed, prematurely closed, and published trial arms. 25 A comprehensive overview is currently lacking. We specifically wondered whether the incidence of platform trials continued to increase despite a fading pandemic, the extent to which distinctive features were actually used, whether recruitment failures were rare, and whether results from platform trials were consistently made available. We, therefore, conducted a systematic review of all available randomized platform trials to empirically determine (1) their incidence over time, (2) the actual frequencies of various distinctive platform trial characteristics (eg, common control arm, use of nonconcurrent control data, and RAR), (3) the incidence of added and dropped arms over time, (4) the prevalence of discontinued trials due to poor participant recruitment, and (5) the availability of results for closed trial arms.

This systematic review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses ( PRISMA ) reporting guideline. 26 A detailed protocol was prospectively registered on Open Science Framework (OSF). 27

The systematic search (including registries) was conducted on January 12, 2021, and was updated on July 28, 2022. Data were extracted until December 2022. Investigators were contacted for verification of the data in February 2023. We performed a systematic search of Medline (OVID), Embase (OVID), Scopus, and several trial registries (Clinicaltrials.gov, European Union Drug Regulating Authorities Clinical Trials Database, and International Standard Randomized Controlled Trial Number registry). To increase the sensitivity of the search, we included gray literature servers (OSF and Zenodo) and preprint servers (Europe PubMed Central) (search date: July 21, 2022). The detailed search strategy is available in eAppendix 1 in Supplement 1 . An information specialist helped us design and review our search strategy. Trials were included if they were RCTs and planned to add or drop arms.
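
The review searched OVID, Scopus, and registry interfaces. Purely as a hypothetical illustration of what a scripted, repeatable literature search can look like, the sketch below queries PubMed through NCBI's public E-utilities API; the search term is invented and far cruder than a real search strategy.

```python
import json
import urllib.parse
import urllib.request

# Illustration only: the authors searched Medline via OVID, Embase, Scopus,
# and trial registries. This sketch instead uses NCBI's public E-utilities
# API to show the idea of a scripted search with a made-up term.
term = '("platform trial" OR "platform study") AND randomized'
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urllib.parse.urlencode(
    {"db": "pubmed", "term": term, "retmax": 20, "retmode": "json"}
)

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

# The hit count and PubMed IDs can be logged, deduplicated against other
# databases, and fed into title/abstract screening.
print("total hits:", result["count"])
print("first PubMed IDs:", result["idlist"])
```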

Screening of titles and abstracts, trial registries, and full text was performed in duplicate. Discrepancies were resolved by discussion or by involving a third reviewer (B.S. or M.B.). For each included report, we continued with forward and backward citation tracking (using Scopus). Citation tracking, gray literature screening, and preprint server screening were conducted by only 1 reviewer (A.G. or C.M.S.). If multiple reports were available for 1 platform trial, these reports were organized and consolidated by registry numbers, acronyms, and the title of the trial. Once a platform trial was included, we determined if an official trial website was available (by screening the literature and registries and searching via Google). For each platform trial and each of its recorded arms, we searched in duplicate (registry, website, Google Scholar, and Google) for the master protocol, subprotocols, and results publications, if not previously found in the literature search.

The variables for this systematic review were chosen based on discussions with methodologists and statisticians of platform trials, previous reviews on the topic, and the critical appraisal checklists by Park et al. 20 , 28 All relevant data were extracted in duplicate (by different researchers). Differences were consolidated by a third reviewer. All authors worked in teams of 2, extracting data from trial protocols (master and subprotocols), results publications, trial registries, and the official trial websites into a REDCap data sheet. 29 , 30 We documented the different labels used in study records (eg, “platform trial,” “trial platform,” “platform study,” “platform design,” or “platform protocol”) to explore the general use of the term platform trial. We extracted baseline characteristics for each included platform trial and each of its individual arms (see list of all baseline characteristics in eAppendix 2 in Supplement 1). Furthermore, distinct platform trial features were recorded. These features included the use of a common control arm and, if the common control arm could be updated during the trial, the use of nonconcurrent control data, adaptive design elements (eg, RAR, adaptive enrichment, seamless design, sample size readjustment), a statistical framework (bayesian, frequentist, or both), multiplicity adjustments (to multiple arms and for interim analyses), and feasibility studies (in silico trials or simulations or pilot trials). We determined the progression and output of the platform trial by the starting number of arms, the total number of arms, the number of arms added, the number of arms dropped (including the reason), and the status and availability of the results for each intervention arm (output of platform trial). Further features of interest included the use of biomarker stratification or subpopulations, integration of nonrandomized arms, interim analysis (reporting of frequency, outcome, and trigger), or the use of a factorial design. The format of the master protocol and the results publications were also recorded (as peer-reviewed publication, preprint, and full protocol on website or registry). Furthermore, we calculated the ratio of available results publications to the number of closed arms. The ratio was calculated twice, once including and once excluding results available as abstracts only. We contacted all principal investigators with a report detailing the most important information extracted from their platform trial. Principal investigators were asked to approve the accuracy of extracted data and to clarify missing or unclear information (eAppendix 3 in Supplement 1).

We summarized the characteristics of the included platform trials using the median and IQR for continuous variables and numbers and percentages for categorical variables. Baseline characteristics were stratified by sponsorship (industry vs not industry sponsored) and COVID-19 indication. Previous research has identified differences in the discontinuation rate, reporting quality, and transparency between industry-sponsored and non–industry-sponsored traditional RCTs 31 , 32 ; as such, we stratified platform trial characteristics by sponsorship. Because it was expected that platform trial features are often recorded in the master protocol, we conducted a sensitivity analysis including only trials with an available master protocol. Data cleaning and analysis were conducted with R, version 1.4.1103 (R Project for Statistical Computing).
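
The analysis itself was done in R; purely as an illustration of the kind of stratified descriptive summary described here, this Python/pandas sketch computes a median with IQR and a percentage by sponsorship stratum. All column names and values are invented.

```python
import pandas as pd

# Invented extraction records; the columns mimic the kinds of variables
# described above (sponsorship stratum, arm counts, a design feature).
trials = pd.DataFrame({
    "sponsor": ["industry", "industry", "non-industry", "non-industry", "non-industry"],
    "total_arms": [6, 8, 5, 4, 7],
    "common_control": [True, True, True, False, True],
})

# Continuous variable: median and IQR, stratified by sponsorship.
print(trials.groupby("sponsor")["total_arms"].quantile([0.25, 0.5, 0.75]).unstack())

# Categorical variable: counts and percentages per stratum.
counts = trials.groupby("sponsor")["common_control"].agg(n_true="sum", n="count")
counts["percent"] = (100 * counts["n_true"] / counts["n"]).round(1)
print(counts)
```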

A total of 9155 records were identified. We determined 431 eligible records, resulting in 127 unique randomized platform trials included in our sample (the list of all included platform trials can be found in eTable 10 in Supplement 1 ). Labels such as “platform trial” and “platform study” were often used in a non–clinical trial context (see detailed list of all excluded reports using such terms in eTable 1 in Supplement 1 ). Platform trials were excluded if not randomized or if they did not allow for the adding and dropping of new arms (eFigure in Supplement 1 ).

Most platform trials were conducted in the fields of oncology (57 of 127 [44.9%]) and COVID-19 (45 of 127 [35.4%]), were multicenter and international (74 of 127 [58.3%]), tested drugs (108 of 127 [85.0%]), and were not industry sponsored (90 of 127 [70.9%]) ( Table 1 ). All platform trials were registered. A master protocol was publicly available for 59.8% of all platform trials (76 of 127), and 16.5% (21 of 127) had also made older versions of protocols (amendments) available. A website existed for 51.2% of platform trials (65 of 127), with a higher prevalence observed in non–industry-sponsored trials than in industry-sponsored trials (55 of 90 [61.1%] vs 10 of 37 [27.0%]). Additional platform trial characteristics (eg, use of blinding, interim analyses, factorial design, nonrandomized arms, biomarker stratification, and number of subpopulations) and a stratification by COVID-19 and non–COVID-19 trials are presented in eTable 2, eTable 3, eTable 4, eTable 6, and eTable 7 in Supplement 1 . A total of 38 platform trials (29.9%) were initiated in 2020, the highest reported incidence of newly started platform trials in 1 year thus far. This number has since decreased (25 of 127 [19.7%] in 2021) ( Figure ).

A common control arm was reported to be used in 73.2% of all platform trials (93 of 127); 7.9% of trials (10 of 127) planned to use nonconcurrent control data for their statistical analysis (not reported for 61 of 127 trials [48.0%]) ( Table 2 ). Adaptive design elements were integrated in approximately half the platform trials (63 of 127 [49.6%]), and 17.3% of trials (22 of 127) implemented more than 1 adaptive design element. A correction for multiple testing for multiple arms was typically not reported (98 of 127 [77.2%]) or not considered (21 of 127 [16.5%]). The statistical framework was not reported by 37 studies (29.1%). Seamless designs, combining early- and late-phase trials, were used in 18.1% of trials (23 of 127). Characteristics stratified by COVID-19 vs non–COVID-19 trials can be found in eTable 4 in Supplement 1.

Most randomized platform trials were ongoing (86 of 127 [67.7%]) or completed (26 of 127 [20.5%]), 4 of 127 (3.1%) were in planning, and 10 of 127 (7.9%) were discontinued ( Table 3 ). Reasons for discontinuation included change in treatment landscape (3 of 10), low event rates (3 of 10), insufficient funding (2 of 10), and safety concerns (1 of 10), and, for 1 platform trial, the reason for discontinuation remained unclear. The number of arms at the start of the platform trial and the total number of arms were typically higher in industry-sponsored trials (median number of arms at start, 4 [IQR, 2-5]; median total number of arms, 6 [IQR, 4-8]) than in non–industry-sponsored trials (median number of arms at start, 3 [IQR, 2-4]; median total number of arms, 5 [IQR, 4-7]) ( Table 3 ). Overall, 58.3% of platform trials (74 of 127) added at least 1 arm, and 62.2% (79 of 127) dropped at least 1 arm during their progression; although planned, 21.3% of platform trials (27 of 127) neither added nor dropped an arm. Of the 85 platform trials that added or dropped an arm during the trial, the corresponding registry entry was not updated for 19 trials (22.4%). Half of all platform trials (64 of 127 [50.4%]) made results available from at least 1 comparison. Data on progression and output stratified by COVID-19 vs non–COVID-19 trials can be found in eTable 6 in Supplement 1.

The 127 platform trials had a total of 823 arms, including 206 control arms ( Table 4 ). Of the 823 arms, 385 (46.8%) were ongoing, 34 (4.1%) were in the planning phase, and 353 (42.9%) were closed. Of the 353 closed arms, 189 (53.5%) were completed, 56 (15.9%) were stopped for futility, 20 (5.7%) were stopped due to new external evidence, 9 (2.5%) were stopped for safety concerns, and 26 (7.4%) were stopped for practical reasons, including poor recruitment (5 [1.4%]). Less than half of the closed arms (169 of 353 [47.9%]) made full results available. Making results available was more common and faster for non–industry-sponsored trials compared with industry-sponsored trials (150 of 277 [54.2%] vs 19 of 76 [25.0%]); however, there is evidence for confounding because COVID-19 trial results were available substantially faster than results for non–COVID-19 trials ( Table 4 ). The detailed status of platform trial arms stratified by COVID-19 vs non–COVID-19 trials can be found in eTable 7 in Supplement 1 . The form of results availability (as peer review, preprint, abstract, and on registry) is available in eTable 8 in Supplement 1 . We contacted investigators of platform trials to verify the extracted data and achieved a high response rate (active agreement, 46.5% [59 of 127]; taciturn agreement, 15.7% [20 of 127]; no response, 37.8% [48 of 127]) (eTable 9 in Supplement 1 ).

Existing platform trials predominantly focus on evaluating drugs and tend to cluster in medical areas, such as oncology, COVID-19, and other infectious diseases. After the peak in 2020 with the arrival of the COVID-19 pandemic, the initiation of new platform trials has decreased. However, there has been a noticeable diversification of medical fields and interventions of platform trials over the past 5 years. This diversification encompasses areas such as neurology, dermatology, and general surgery, as well as the testing of behavioral, surgical, or dietary interventions.

Among the observed platform trials, 49.6% incorporated at least 1 additional adaptive design feature. A total of 58.3% of platform trials added at least 1 arm, and 62.2% dropped at least 1 arm (21.3% did neither, although planned). Consequently, the approximately 40% of trials that never added an arm may have incurred higher planning and setup costs compared with traditional RCTs without benefiting from the cost savings of additional arms. 33 A common control arm was used in only 73.2% of platform trials, which is lower than one would expect for a major platform trial advantage (increased efficiency) and is below the percentage previously reported. 23 This finding may underline the belief of many stakeholders that the establishment of collective trial infrastructures (including communication networks, overall data management and monitoring plans, and standardized documents across arms) is reason enough to justify the use of the platform trial design. 22 Nevertheless, the benefits of only submitting an amendment instead of a new application for each added arm, and the quicker activation of sites, compared with new traditional RCTs, need to be balanced with substantial operational, statistical, and legal complexities of platform trials. 21 , 34

Many statistical features of platform trials are currently debated in the literature, form the foundation of the platform trial design, and affirm the validity of the trial results. 12 , 16 , 22 , 35 - 37 A bayesian design was frequently used because this statistical framework fits well with the adaptive nature of platform trials 25 , 35 ; however, bayesian trial designs may be less commonly understood by a general medical and scientific readership, posing challenges for interpretation and uptake of results. In addition, the use of features such as RAR and nonconcurrent controls should be considered carefully. Response adaptive randomization, for instance, requires a well-planned run-in phase, may inflate type I error, typically requires a higher sample size, and can be associated with slow accrual of outcome data. 38 About 8% of platform trials considered nonconcurrent control data in an attempt to further increase statistical power; however, this approach carries a high risk for bias. 22 , 37 , 39 Regulators criticize the use of nonconcurrent controls in confirmatory trials because statistical modeling can only partially address the potential bias. 37 , 38

Almost 80% of platform trial protocols were publicly available in some format, much higher than previously determined for traditional RCTs. 24 , 25 However, reporting of essential features, such as adjustment for multiplicity, use of nonconcurrent control data, and criteria for dropping and adding new arms, was often unsatisfactory. Full results publications were available for 47.9% of closed arms. Premature closure of platform trial arms due to recruitment problems was infrequent, occurring for only 1.4% of closed arms, which is in contrast to traditional RCTs (discontinuation rate due to poor recruitment in RCTs, 10%-15%). 31 , 32 However, it is possible that this proportion will increase due to recruitment hurdles and the increasing scarcity of eligible patients for COVID-19 trials toward the end of the pandemic. Publication of full results for closed arms (47.9%) was lower than what is generally seen for traditional RCTs (78.5% at 10-year follow-up). 32 Availability of full results publications and overall transparency were generally better in non–industry-sponsored platform trials.

Overall, industry-sponsored platform trials accounted for approximately one-third of the total and predominantly focused on early-phase investigations, while late-phase trials were mostly not sponsored by industry. Seamless designs, combining early- and late-phase trials, although still a minority (18.1%), are becoming increasingly more common. 14

Our study has some strengths. To our knowledge, it is the first study investigating key platform trial features, protocol and results availability, and the status of individual arms. An additional strength of our study was that we contacted investigators of platform trials to verify the extracted data and achieved a high response rate (active agreement, 46.5% [59 of 127]; taciturn agreement, 15.7% [20 of 127]; no response, 37.8% [48 of 127]) (eTable 9 in Supplement 1 ); responses typically confirmed the accuracy of gathered data, and only minor adjustments were necessary.

Our study has the following limitations. First, available information was sometimes limited, especially if only a registry entry was available. We have, therefore, conducted sensitivity analyses showing how the proportion of certain variables changed if only platform trials with an available master protocol (n = 76 [59.8%]) were considered (eTable 5 in Supplement 1). Second, the reporting was not always consistent across different sources. We handled these discrepancies by creating an information hierarchy, giving priority to peer-reviewed manuscripts and the feedback received from investigators (followed by preprints, websites, and then other sources). Third, although highly desirable, we did not consider resource use and costs of platform trials in this review. Evidence from a hypothetical costing study suggested that the increased costs associated with the planning and setup of platform trials compared with traditional RCTs are due to the complex protocols and longer setup times. 33 These increased costs were mitigated when more arms were added to the trial, which was less time intensive and reduced costs long term. 40 , 41 Fourth, a comparison of platform trials with traditional parallel-arm RCTs was possible only on an indirect level. However, a direct comparison of platform trials with traditional RCTs with the same research question is planned in a future project, as described in our study protocol. 27 Fifth, this systematic review provides only a snapshot of the current platform trial landscape. Two-thirds of identified platform trials are still ongoing, and the COVID-19 pandemic may have had an influence on the progression and output of our sample. Furthermore, methodological background and reporting guidelines for platform trials were lacking at the start of this project and are currently still evolving. Therefore, regular updates of this systematic review are necessary to gain further insights into progression patterns and output from randomized platform trials and to determine the most appropriate application of this design in the future.

In this systematic review, we found that platform trials were initiated most frequently during the beginning of the COVID-19 pandemic and appeared to decrease thereafter, with a trend toward more diversified medical fields and interventions. Despite the potential for complexity, most made use of only 1 adaptive feature, or none. Forty percent of platform trials did not add an arm and, thereby, may have missed efficiency gains and incurred higher planning and setup costs compared with traditional RCTs. 33 Premature arm closure for poor recruitment was rare. The reporting of platform features, the status of trial arms, and the results of closed arms needs to be improved. Guidance and infrastructure are needed so that the status and results of individual trial arms can be reported in a timely manner (eg, adaptations of trial registries for platform trials) and so that decisions about the need for a platform design and its planning are optimized.

Accepted for Publication: January 24, 2024.

Published: March 20, 2024. doi:10.1001/jamanetworkopen.2024.3109

Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2024 Griessbach A et al. JAMA Network Open .

Corresponding Author: Alexandra Griessbach, MSc, CLEAR Methods Center, Division of Clinical Epidemiology, Department of Clinical Research, University Hospital Basel, Totengaesslein 3, 4031 Basel, Switzerland ( [email protected] ).

Author Contributions: Ms Griessbach and Dr Briel had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs Speich and Briel shared last authorship.

Concept and design: Griessbach, Speich, Briel.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Griessbach, Covino, Mall, Briel.

Critical review of the manuscript for important intellectual content: Griessbach, Schönenberger, Taji Heravi, Gloy, Agarwal, Hallenberger, Schandelmaier, Janiaud, Amstutz, Speich, Briel.

Statistical analysis: Griessbach.

Obtained funding: Griessbach.

Administrative, technical, or material support: Griessbach, Gloy.

Supervision: Griessbach, Amstutz, Speich, Briel.

Conflict of Interest Disclosures: Drs Schönenberger and Hallenberger reported receiving grants from the Swiss National Science Foundation outside the submitted work. Dr Speich reported receiving grants from Moderna outside the submitted work. No other disclosures were reported.

Meeting Presentation: This study was presented at the Sixth International Clinical Trials Methodology Conference; October 3, 2022; Harrogate, England; and at the Australian Clinical Trials Alliance–Adaptive Platform Trials Operations Meeting; August 10, 2023; virtual meeting.

Data Sharing Statement: See Supplement 2 .

Additional Contributions: We thank Hannah Ewald, PhD, University Basel, for reviewing our search strategy; she was compensated for her contribution.



July 2, 2024


New study finds systematic biases at play in clinical trials

by Michigan State University


Randomized controlled trials, or RCTs, are believed to be the best way to study the safety and efficacy of new treatments in clinical research. However, a recent study from Michigan State University found that people of color and white women are significantly underrepresented in RCTs due to systematic biases.

The study, published in the Journal of Ethnicity in Substance Abuse, reviewed 18 RCTs conducted over the last 15 years that tested treatments for post-traumatic stress and alcohol use disorder. The researchers found that despite women having double the rates of post-traumatic stress and alcohol use disorder compared with men, and people of color having worse chronicity than white people, most participants were white (59.5%) and male (about 78%).

"Because RCTs are the gold standard for treatment studies and drug trials, we rarely ask the important questions about their limitations and failings," said NiCole Buchanan, co-author of the study and professor in MSU's Department of Psychology.

"For RCTs to meet their full potential, investigators need to fix barriers to inclusion. Increasing representation in RCTs is not simply an issue for equity, but it is also essential to enhancing the quality of our science and meeting the needs of the public that funds these studies through their hard-earned tax dollars."

The researchers found that the design and implementation of the randomized controlled trials contributed to the lack of representation of people of color and women. This happened because trials were conducted in areas where white men were the majority demographic group and study samples almost always reflected the demographic makeup where studies occurred.

Additionally, those designing the studies seldom acknowledged race or gender differences, meaning they did not intentionally recruit diverse samples.

Furthermore, the journals publishing these studies did not have regulations requiring sample diversity, equity or inclusion as appropriate to the conditions under investigation.

"Marginalized groups have unique experiences from privileged groups, and when marginalized groups are poorly included in research, we remain in the dark about their experiences, insights, needs and strengths," said Mallet Reid, co-author of the study and doctoral candidate in MSU's Department of Psychology.

"This means that clinicians and researchers may unknowingly remain ignorant to how to attend to the trauma and addiction challenges facing marginalized groups and may unwittingly perpetuate microaggressions against marginalized groups in clinical settings or fail to meet their needs."



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

InformedHealth.org [Internet]. Cologne, Germany: Institute for Quality and Efficiency in Health Care (IQWiG); 2006-.


In brief: What are systematic reviews and meta-analyses?

Last Update: September 8, 2016; Next update: 2024.

Individual studies are often not big and powerful enough to provide reliable answers on their own. Or several studies on the effects of a treatment might come to different conclusions. In order to find reliable answers to research questions, you therefore have to look at all of the studies and analyze their results together.

Systematic reviews summarize the results of all the studies on a medical treatment and assess the quality of the studies. The analysis is done following a specific, methodologically sound process. In a way, it’s a “study of studies.” Good systematic reviews can provide a reliable overview of the current knowledge in a certain area.

They are normally done by teams of authors working together. The authors are usually specialists with backgrounds in medicine, epidemiology, medical statistics and research.

How are systematic reviews performed?

Systematic reviews can only provide reliable answers if the studies they are based on are searched for and selected very carefully. The individual steps needed before they can be published are usually quite complex.

  • Research question: First of all, the researchers have to decide exactly what question they want to find the answer to. Which treatment should be looked at in which group of people, and what should it be compared with? What should be measured? This set of key questions is also referred to as the PICO framework. PICO stands for Population (patient group), Intervention (the treatment or diagnostic test under investigation), Control (comparison group), and Outcome (variable to be measured). The research question also determines which criteria to use when selecting studies to include in the review – for instance, only certain types of studies. (A minimal structured sketch of a PICO record appears after this list.)
  • Research: Once they know what they are looking for, the researchers have to search as thoroughly and comprehensively as possible for all the studies that might help answer the question. This can easily add up to as many as several hundred studies. Searches for studies are usually done in international databases. Most study results are published online and in English. The relevant information is filtered out using sophisticated methods. The researchers often try to find any unpublished data by contacting and asking other scientists, looking through lists of sources used in other publications, and sometimes even by looking at conference transcripts. One big problem is that some studies are never published. Compared to studies in which treatments are found to have positive outcomes, studies that don’t find any benefits are often published later or never published at all. As a result, the studies that are found and included in reviews might make a treatment seem better than it really is. This kind of systematic bias is also known as “publication bias.”
  • Selection: The suitability of every study that is found has to be checked using very specific pre-defined criteria. Studies that do not fulfill the criteria are not included in the review. The suitability of a study is usually assessed by at least two researchers who go through all the studies separately and then compare and discuss their conclusions. This is done in order to try to avoid including unsuitable studies in the review.
  • Assessment: The studies that fulfill all the inclusion criteria are carefully assessed. The analysis should provide a comprehensive overview of what is known, and what isn’t known, about the topic in question.
  • Peer review: The researchers provide a detailed report of the steps they took, their research methods and what they found. A draft version is critically assessed and commented on by experts. This is called "peer reviewing."
  • Publication: If the systematic review “passes” the peer review, it can be published in scientific journals and relevant databases. One important source of systematic reviews is the “Cochrane Library” database. It is run by the Cochrane Collaboration – an international network of researchers who have specialized in producing systematic reviews.
  • Keeping the information up-to-date: In order to stay up-to-date, systematic reviews must be updated regularly.
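
To make the PICO step above concrete, here is a minimal, hypothetical sketch of a review question recorded as data and used as a screening filter. The field values loosely paraphrase the intracoronary thrombolysis review excerpted further down this page; the record structure and the helper function are invented.

```python
# Hypothetical PICO record; the values paraphrase the intracoronary
# thrombolysis review cited later on this page, and the structure is invented.
pico = {
    "population": "patients with ST-elevation myocardial infarction undergoing primary PCI",
    "intervention": "adjunctive intracoronary thrombolysis",
    "control": "primary PCI without intracoronary thrombolysis",
    "outcome": "major adverse cardiac events (MACE)",
    "eligible_designs": {"randomised controlled trial"},
}

def passes_screening(study: dict) -> bool:
    # Pre-defined criteria are applied identically to every candidate study;
    # in practice two reviewers do this independently and compare decisions.
    return study["design"] in pico["eligible_designs"] and study["matches_population"]

candidate = {"design": "cohort study", "matches_population": True}
print(passes_screening(candidate))  # False: wrong study design
```

Real protocols specify many more criteria (languages, date ranges, settings), but the principle is the same: the criteria are fixed before screening begins.
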
What is a meta-analysis?

Sometimes the results of all of the studies found and included in a systematic review can be summarized and expressed as an overall result. This is known as a meta-analysis. The overall outcome of the studies is often more conclusive than the results of individual studies.

But it only makes sense to do a meta-analysis if the results of the individual studies are fairly similar (homogeneous). If there are big differences between the results, there are likely to be important differences between the studies. These should be looked at more closely. It is then sometimes possible to split the participants into smaller subgroups and summarize the results separately for each subgroup.
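
As a worked illustration of pooling, the sketch below combines three invented risk ratios with a fixed-effect (inverse-variance) model and computes Cochran's Q and I² as one standard way of checking the homogeneity just described. None of the numbers come from a real review.

```python
import math

# Hypothetical illustration of fixed-effect (inverse-variance) pooling of
# risk ratios from three invented studies. Real meta-analyses also consider
# random-effects models; every number here is made up.
studies = [
    # (log risk ratio, standard error of the log risk ratio)
    (math.log(0.80), 0.20),
    (math.log(0.70), 0.25),
    (math.log(0.90), 0.15),
]

weights = [1 / se**2 for _, se in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled_log_rr)
low = math.exp(pooled_log_rr - 1.96 * pooled_se)
high = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR {rr:.2f} (95% CI {low:.2f} to {high:.2f})")

# Cochran's Q and I^2 quantify how consistent the study results are,
# the "fairly similar (homogeneous)" check described above.
q = sum(w * (lrr - pooled_log_rr) ** 2 for (lrr, _), w in zip(studies, weights))
i_squared = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f}, I² = {i_squared:.0f}%")
```

When I² is large, summarizing everything in a single number is questionable, which is why reviewers turn to subgroup analyses as described above.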



Cite this page: InformedHealth.org [Internet]. Cologne, Germany: Institute for Quality and Efficiency in Health Care (IQWiG); 2006-. In brief: What are systematic reviews and meta-analyses? [Updated 2016 Sep 8].


Intracoronary thrombolysis in ST-elevation myocardial infarction: a systematic review and meta-analysis

  • http://orcid.org/0000-0002-3678-2855 Rajan Rehan 1 , 2 ,
  • http://orcid.org/0000-0002-7235-3919 Sohaib Virk 3 ,
  • http://orcid.org/0000-0002-0180-0072 Christopher C Y Wong 4 , 5 ,
  • Freda Passam 6 ,
  • Jamie Layland 7 ,
  • Anthony Keech 8 ,
  • Andy Yong 4 ,
  • http://orcid.org/0000-0001-7712-6750 Harvey D White 9 ,
  • William Fearon 10 ,
  • Martin Ng 1 , 11
  • 1 Royal Prince Alfred Hospital , Camperdown , New South Wales , Australia
  • 2 Concord Hospital , Concord , New South Wales , Australia
  • 3 Systematic Reviews , CORE Group , Sydney , New South Wales , Australia
  • 4 Cardiology , Concord Repatriation General Hospital , Concord , New South Wales , Australia
  • 5 Stanford Hospital , Stanford , California , USA
  • 6 Department of Hematology , University of Sydney , Sydney , New South Wales , Australia
  • 7 Monash University , Melbourne , Victoria , Australia
  • 8 NHMRC Clinical Trials Centre , The University of Sydney , Sydney , New South Wales , Australia
  • 9 Cardiology Department , Green Lane Cardiovascular Service and Green Lane Cardiovascular Research Unit, Auckland City Hospital , Auckland , New Zealand
  • 10 Stanford University , Stanford , California , USA
  • 11 Department of Cardiology , The University of Sydney , Sydney , New South Wales , Australia
  • Correspondence to Professor Martin Ng, Department of Cardiology, The University of Sydney, Sydney, New South Wales, Australia; Martin.ng@sydney.edu.au

Background Despite restoration of epicardial blood flow in acute ST-elevation myocardial infarction (STEMI), inadequate microcirculatory perfusion is common and portends a poor prognosis. Intracoronary (IC) thrombolytic therapy can reduce microvascular thrombotic burden; however, contemporary studies have produced conflicting outcomes.

Objectives This meta-analysis aims to evaluate the efficacy and safety of adjunctive IC thrombolytic therapy at the time of primary percutaneous coronary intervention (PCI) among patients with STEMI.

Methods Comprehensive literature search of six electronic databases identified relevant randomised controlled trials. The primary outcome was major adverse cardiac events (MACE). The pooled risk ratio (RR) and weighted mean difference (WMD) with a 95% CI were calculated.

Results 12 studies with 1915 patients were included. IC thrombolysis was associated with a significantly lower incidence of MACE (RR=0.65, 95% CI 0.51 to 0.82, I²=0%, p=0.0004) and improved left ventricular ejection fraction (WMD=1.87; 95% CI 1.07 to 2.67; I²=25%; p<0.0001). Subgroup analysis demonstrated a significant reduction in MACE for trials using non-fibrin-specific (RR=0.39, 95% CI 0.20 to 0.78, I²=0%, p=0.007) and moderately fibrin-specific thrombolytic agents (RR=0.62, 95% CI 0.47 to 0.83, I²=0%, p=0.001). No significant reduction was observed in studies using highly fibrin-specific thrombolytic agents (RR=1.10, 95% CI 0.62 to 1.96, I²=0%, p=0.75). Furthermore, there were no significant differences in mortality (RR=0.91; 95% CI 0.48 to 1.71; I²=0%; p=0.77) or bleeding events (major bleeding, RR=1.24; 95% CI 0.47 to 3.28; I²=0%; p=0.67; minor bleeding, RR=1.47; 95% CI 0.90 to 2.40; I²=0%; p=0.12).

Conclusion Adjunctive IC thrombolysis at the time of primary PCI in patients with STEMI improves clinical and myocardial perfusion parameters without an increased rate of bleeding. Further research is needed to optimise the selection of thrombolytic agents and treatment protocols.

  • Acute Coronary Syndrome
  • Myocardial Infarction
  • Meta-Analysis
  • Atherosclerosis

Data availability statement

All data relevant to the study are included in the article or uploaded as supplemental information.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/heartjnl-2024-324078


WHAT IS ALREADY KNOWN ON THIS TOPIC

ST-elevation myocardial infarction (STEMI) is a significant cause of morbidity and mortality worldwide. Microvascular obstruction affects about half of patients with STEMI, leading to adverse outcomes. Previous studies on adjunctive intracoronary thrombolysis have shown inconsistent results.

WHAT THIS STUDY ADDS

This meta-analysis demonstrates that adjunctive intracoronary thrombolysis during primary percutaneous coronary intervention (PCI) significantly reduces major adverse cardiac events and improves left ventricular ejection fraction. Furthermore, it significantly improves myocardial perfusion parameters without increasing bleeding risk.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

Adjunctive intracoronary thrombolysis in patients with STEMI undergoing primary PCI shows promise for clinical benefit. Future studies should identify high-risk patients for microcirculatory dysfunction to optimise treatment strategies and clinical outcomes.

Introduction

Ischaemic heart disease remains a leading cause of morbidity and mortality worldwide. 1 2 ST-elevation myocardial infarction (STEMI) occurs due to coronary vessel occlusion causing transmural myocardial ischaemia and subsequent necrosis. 3 The cornerstone of contemporary management involves prompt reopening of the occluded coronary artery with percutaneous coronary intervention (PCI). 4 5 Despite restoring epicardial blood flow, roughly 50% of patients fail to achieve adequate microvascular perfusion. 6 This phenomenon, known as microvascular obstruction (MVO), is predictive of a poor cardiac prognosis driven by left ventricular remodelling and larger infarct size. 7–9

In patients with STEMI, MVO is characterised by distal embolisation of atherothrombotic debris and fibrin-rich microvascular thrombi. 10 A growing body of evidence supports the efficacy of adjunctive low-dose intracoronary (IC) thrombolysis in this population. Sezer et al performed the first randomised controlled trial (RCT), demonstrating an improvement in myocardial perfusion with low-dose IC streptokinase post-PCI. 11 Subsequent studies focused on newer fibrin-specific agents with a lower propensity for systemic bleeding. 12 Despite encouraging results, many studies were inadequately powered and yielded conflicting outcomes. This meta-analysis aims to evaluate the efficacy and safety of adjunctive IC thrombolytic therapy at the time of primary PCI in patients with STEMI.

Methods

The present study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. 13

Search strategy and study selection

Electronic searches were performed using PubMed, Ovid Medline, Cochrane Library, ProQuest, ACP Journal Club and Google Scholar from their dates of inception to January 2022. The search terms “STEMI” AND “intracoronary” AND (“thrombolysis” OR “tenecteplase” OR “alteplase” OR “prourokinase” OR “urokinase” OR “streptokinase”) were combined as both keywords and Medical Subject Headings terms, with filters for RCTs. This was supplemented by hand searching the bibliographies of review articles and all potentially relevant studies.
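
For readers who wish to reproduce this kind of search programmatically, a minimal sketch is given below using Biopython's Entrez wrapper around NCBI's E-utilities. The query string follows the terms stated above; the contact email and the retmax cap are illustrative placeholders, not details taken from the study.

```python
# Minimal sketch: issuing the stated boolean query against PubMed via
# NCBI E-utilities (Biopython). Email and retmax are placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

query = (
    '"STEMI" AND "intracoronary" AND '
    '("thrombolysis" OR "tenecteplase" OR "alteplase" OR '
    '"prourokinase" OR "urokinase" OR "streptokinase") '
    'AND randomized controlled trial[pt]'  # publication-type filter for RCTs
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records; first PMIDs:", record["IdList"][:5])
```

A full systematic search would repeat this across each named database using platform-specific syntax; PubMed alone is shown here for brevity.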

Two reviewers (RR and SV) independently screened the titles and abstracts of articles identified in the search. Full-text publications were subsequently reviewed separately if either reviewer considered the manuscript potentially eligible. Any disagreements regarding final study inclusion were resolved by discussion and consensus with a third reviewer (CCYW).

Eligibility criteria

Studies were included if they met the following inclusion criteria: (1) RCT design, (2) STEMI population, (3) IC thrombolysis given to the treatment group, compared with a control group (CG) receiving no thrombolytic therapy, and (4) major adverse cardiovascular events (MACE) reported as an outcome.

Publications were limited to those involving human subjects, with no restrictions on language. Reviews, meta-analyses, abstracts, case reports, conference presentations, editorials and expert opinions were excluded. When institutions published duplicate studies with accumulating numbers of patients or increased lengths of follow-up, only the most complete reports were included for assessment.

Data extraction and quality assessment

Two investigators (RR and SV) independently extracted data from text, tables and figures. Any discrepancies were resolved by discussion and consensus with a third reviewer (CCYW). For each of the included trials, the following data were extracted: publication year, number of patients, baseline characteristics of participants, treatment details (including specific agents administered), follow-up duration and endpoints.

Study quality and risk of bias were critically appraised using the updated Cochrane Collaboration Risk-of-Bias Tool V.2. 14 Five domains of bias were evaluated: (1) randomisation process, (2) deviations from study protocol, (3) missing outcome data, (4) outcome measurement and (5) selective reporting of results.

The predetermined primary endpoint was MACE, which represented a composite outcome as defined by each individual study. While the individual components of MACE were generally consistent across studies, minor discrepancies existed ( online supplemental table 1 ). Secondary outcomes included clinical endpoints (mortality, heart failure (HF), major and minor bleeding), myocardial perfusion endpoints (thrombolysis in myocardial infarction (TIMI) flow grade 3, TIMI myocardial perfusion grade (TMPG), corrected TIMI frame count (CTFC) and ST-segment resolution (STR)) and echocardiographic parameters (left ventricular ejection fraction (LVEF)). Subgroup analysis for MACE was conducted based on the fibrin specificity of the thrombolytic agent, as encoded in the sketch below. This classification comprised non-fibrin-specific agents (streptokinase and urokinase), moderately fibrin-specific agents (prourokinase) and highly fibrin-specific agents (alteplase and tenecteplase). Clinical outcomes were assessed at the end of the follow-up period, which ranged from 1 to 12 months, while echocardiographic parameters were evaluated within a time frame of 1–6 months.
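
To make the subgroup assignment explicit, the classification just described can be encoded as a simple lookup; the variable and function names below are illustrative, with the agent groupings taken directly from the text.

```python
# Illustrative encoding of the fibrin-specificity subgroups described above.
FIBRIN_SPECIFICITY = {
    "streptokinase": "non-fibrin-specific",
    "urokinase": "non-fibrin-specific",
    "prourokinase": "moderately fibrin-specific",
    "alteplase": "highly fibrin-specific",
    "tenecteplase": "highly fibrin-specific",
}

def mace_subgroup(agent: str) -> str:
    """Return the MACE subgroup for a trial's thrombolytic agent."""
    return FIBRIN_SPECIFICITY[agent.strip().lower()]

assert mace_subgroup("Prourokinase") == "moderately fibrin-specific"
```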


Statistical analysis

The mean difference (MD) and relative risk (RR) were used as summary statistics and reported with 95% CIs. Meta-analyses were performed using random-effects models to account for the anticipated clinical and methodological diversity between studies. The I² statistic was used to estimate the percentage of total variation across studies due to heterogeneity rather than chance, with values exceeding 50% indicative of considerable heterogeneity. For meta-analysis of continuous data, values presented as median and IQR were converted to mean and SD using the quantile method previously described by Wan et al. 15 For subgroup analyses, a standard test of heterogeneity was used to assess for significant differences between subgroups, with p<0.05 considered statistically significant.
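
The pooling and conversion steps named here have standard textbook forms. The sketch below is a generic implementation of DerSimonian-Laird random-effects pooling of risk ratios (with Cochran's Q and I²) and of the Wan et al. quantile formulas for converting a median and IQR to an approximate mean and SD; it is not the authors' code, and the study itself used dedicated meta-analysis software, as noted below.

```python
import numpy as np
from scipy.stats import norm

def pool_risk_ratios(e_t, n_t, e_c, n_c):
    """DerSimonian-Laird random-effects pooling of per-study risk ratios.

    e_t/n_t: events and sample sizes in the treatment arms (arrays);
    e_c/n_c: the same for the control arms.
    Returns the pooled RR, its 95% CI and the I^2 heterogeneity statistic.
    """
    e_t, n_t, e_c, n_c = map(np.asarray, (e_t, n_t, e_c, n_c))
    yi = np.log((e_t / n_t) / (e_c / n_c))           # per-study log risk ratios
    vi = 1/e_t - 1/n_t + 1/e_c - 1/n_c               # large-sample variances
    w = 1 / vi                                       # fixed-effect weights
    y_fe = np.sum(w * yi) / np.sum(w)
    Q = np.sum(w * (yi - y_fe) ** 2)                 # Cochran's Q
    df = len(yi) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                    # between-study variance
    i2 = 100 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    w_re = 1 / (vi + tau2)                           # random-effects weights
    y = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = (np.exp(y - 1.96 * se), np.exp(y + 1.96 * se))
    return np.exp(y), ci, i2

def wan_mean_sd(q1, median, q3, n):
    """Wan et al. conversion of a median and IQR to approximate mean and SD."""
    mean = (q1 + median + q3) / 3
    sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd
```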

Meta-regression analyses were performed to explore potential heterogeneity with the following moderator variables individually assessed for significance: publication year, mean age, proportion of male participants, percentage of left anterior descending artery infarcts, proportion of smokers, as well as baseline prevalence of diabetes, hypertension and dyslipidaemia.
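
A single-moderator meta-regression of this kind can be sketched as a weighted least-squares fit of the per-study log risk ratios on the moderator, weighting each study by the inverse of its variance plus the between-study variance. The function below is a generic illustration, not the study's code; tau² would come from the pooling step sketched above.

```python
import numpy as np
import statsmodels.api as sm

def meta_regression(yi, vi, tau2, moderator):
    """Weighted least-squares meta-regression of log risk ratios on one
    study-level moderator, weighting each study by 1/(v_i + tau^2)."""
    X = sm.add_constant(np.asarray(moderator))
    weights = 1 / (np.asarray(vi) + tau2)
    fit = sm.WLS(np.asarray(yi), X, weights=weights).fit()
    return fit.params[1], fit.pvalues[1]  # moderator slope and its p value
```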

Publication bias was assessed for the primary endpoint of MACE using funnel plots comparing log of point estimates with their SE. Egger’s linear regression method and Begg’s rank correlation test were used to detect funnel plot asymmetry. 16 17 Statistical analysis was conducted with Review Manager V.5.3.5 (Cochrane Collaboration, Oxford, UK) and Comprehensive Meta-Analysis V.3.0 (Biostat, Englewood, New Jersey, USA). All p values were two sided, and values <0.05 were considered statistically significant.
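
Both asymmetry tests have compact forms: Egger's method regresses each study's standardised effect on its precision and tests whether the intercept differs from zero, while Begg's method computes a Kendall rank correlation between standardised deviates of the effects and their variances. A generic sketch, assuming per-study log risk ratios yi and variances vi:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import kendalltau

def eggers_test(yi, vi):
    """Egger's regression: standardised effects vs precision; a non-zero
    intercept suggests funnel-plot asymmetry."""
    yi, se = np.asarray(yi), np.sqrt(np.asarray(vi))
    fit = sm.OLS(yi / se, sm.add_constant(1 / se)).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p value

def beggs_test(yi, vi):
    """Begg and Mazumdar's rank correlation between standardised deviates
    of the effects and their variances."""
    yi, vi = np.asarray(yi), np.asarray(vi)
    w = 1 / vi
    pooled = np.sum(w * yi) / np.sum(w)    # fixed-effect pooled estimate
    deviates = (yi - pooled) / np.sqrt(vi - 1 / np.sum(w))
    return kendalltau(deviates, vi)        # (tau, p value)
```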

Results

A total of 245 unique records were identified through electronic searches of six online databases, from which 85 duplicates were removed. Of the remaining 160 records, 120 were excluded based on title and abstract alone. After screening the full text of the remaining 40 articles, 12 studies 18–29 were found to meet the inclusion criteria, as summarised in the PRISMA flow chart in figure 1 .

Figure 1 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow chart of literature search and study selection.

IC thrombolysis was examined in 12 studies (1030 patients received IC thrombolysis and 885 did not). Included studies used non-fibrin-specific (streptokinase, urokinase), moderately fibrin-specific (prourokinase) and highly fibrin-specific (alteplase, tenecteplase) thrombolytic agents. The timing and delivery of IC thrombolytic therapy varied between studies. A complete summary of study characteristics and baseline participant characteristics is presented in tables 1 and 2 , respectively. Primary and secondary outcomes are summarised in online supplemental table 2 . According to the revised Cochrane tool, the overall risk of bias was judged to be 'low risk' in two studies, 'some concerns' in eight studies and 'high risk' in two studies ( online supplemental figure 1 ).

Table 1 Summary of studies investigating intracoronary thrombolysis for patients with STEMI

Table 2 Summary of baseline patient characteristics in studies investigating intracoronary thrombolysis for patients with STEMI

Clinical outcomes

All 12 RCTs reported the incidence of MACE. Compared with the CG, IC thrombolysis significantly reduced the occurrence of MACE at the end of follow-up (RR=0.65, 95% CI 0.51 to 0.82, I²=0%, p=0.0004; figure 2 ). Subgroup analysis demonstrated a significant reduction in MACE for trials using non-fibrin-specific (RR=0.39, 95% CI 0.20 to 0.78, I²=0%, p=0.007) and moderately fibrin-specific thrombolysis (RR=0.62, 95% CI 0.47 to 0.83, I²=0%, p=0.001). MACE occurred at a similar rate in studies using highly fibrin-specific thrombolysis (RR=1.10, 95% CI 0.62 to 1.96, I²=0%, p=0.75). The test for subgroup differences was not significant (p=0.07); a sketch of this test follows the figure legend below. Furthermore, IC thrombolysis was associated with an improvement in LVEF (weighted MD (WMD)=1.87; 95% CI 1.07 to 2.67; I²=25%; p<0.0001; online supplemental figure 2 ). There was a trend towards a lower incidence of HF hospitalisation (RR=0.66; 95% CI 0.42 to 1.05; I²=0%; p=0.08; online supplemental figure 3 ), though this was not statistically significant. No significant differences were observed in mortality (RR=0.95; 95% CI 0.50 to 1.81; I²=0%; p=0.88; online supplemental figure 4 ), major bleeding (RR=1.24; 95% CI 0.47 to 3.28; I²=0%; p=0.67; online supplemental figure 5 ) or minor bleeding events (RR=1.47; 95% CI 0.90 to 2.40; I²=0%; p=0.12; online supplemental figure 6 ) between the two groups.

Figure 2 Forest plot displaying relative risk for major adverse cardiovascular events with intracoronary (IC) thrombolysis (stratified by fibrin-specific and non-fibrin-specific agents) or placebo in ST-elevation myocardial infarction. Squares and diamonds=risk ratios. Lines=95% CIs.
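
The test for subgroup differences reported above compares the pooled estimate of each subgroup against a common estimate using a chi-squared statistic on (number of subgroups − 1) degrees of freedom. A generic sketch, taking each subgroup's pooled log risk ratio and standard error as inputs:

```python
import numpy as np
from scipy.stats import chi2

def subgroup_difference_test(log_rrs, ses):
    """Q-test for differences between subgroup pooled estimates."""
    log_rrs, ses = np.asarray(log_rrs), np.asarray(ses)
    w = 1 / ses**2                              # inverse-variance weights
    common = np.sum(w * log_rrs) / np.sum(w)    # estimate ignoring subgroups
    Q = np.sum(w * (log_rrs - common) ** 2)     # between-subgroup heterogeneity
    return Q, chi2.sf(Q, df=len(log_rrs) - 1)   # statistic and p value
```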

Myocardial perfusion outcomes

In patients with STEMI, IC thrombolysis significantly improved TIMI flow grade 3 (RR=1.09; 95% CI 1.02 to 1.15; I²=63%; p=0.006), TMPG (RR=1.38; 95% CI 1.13 to 1.68; I²=54%; p=0.001), complete STR (RR=1.20; 95% CI 1.10 to 1.31; I²=51%; p<0.0001) and CTFC (WMD=−4.58; 95% CI −6.23 to −2.72; I²=41%; p<0.0001) when compared with the CG ( figure 3 ).

Figure 3 Forest plots of myocardial perfusion outcomes with intracoronary (IC) thrombolysis or placebo in ST-elevation myocardial infarction. (A) Thrombolysis in myocardial infarction (TIMI) flow grade 3. (B) TIMI myocardial perfusion grade 3. (C) ST-segment resolution. (D) Corrected TIMI frame count. Squares and diamonds=risk ratios/weighted mean difference. Lines=95% CIs.

Meta-regression results

For the primary endpoint of MACE, meta-regression analyses did not identify any of the following moderator variables as significant effect modifiers: publication year (p=0.97), proportion of male participants (p=0.23), prevalence of diabetes (p=0.44), proportion of smokers (p=0.68), prevalence of dyslipidaemia (p=0.44) and prevalence of hypertension (p=0.21).

Publication bias

Both Egger’s linear regression method (p=0.73) and Begg’s rank correlation test (p=0.63) suggested that publication bias was not an influencing factor when MACE was selected as the primary endpoint.

Discussion

The present meta-analysis examined 12 RCTs that included 1915 patients with STEMI undergoing primary PCI. All trials evaluated the efficacy and safety of IC thrombolytic agents compared with a CG. The main findings were that patients administered IC thrombolysis had: (1) a significantly lower incidence of MACE, (2) improvement in LVEF and (3) superior myocardial perfusion parameters (TIMI flow grade 3, TMPG, CTFC and complete STR). Notably, there were no significant differences in mortality or bleeding events between the two groups.

Mortality rates following STEMI remain high, with 30-day mortality rates ranging from 5.4% to 14% and 1-year mortality rates ranging from 6.6% to 17.5%. 30 Despite the increased availability of primary PCI facilities and advancements in reperfusion strategies, there has been limited improvement in STEMI mortality rates. 31 Moreover, complications such as HF, arrhythmia, repeat revascularisation and reinfarction continue to be prevalent. 32–34 Despite restoring epicardial blood flow through PCI, MVO is evident in almost half of patients with STEMI. 6 It is characterised by distal embolisation of atherothrombotic debris, de novo microvascular thrombosis formation and plugging of circulating blood cells. 35 Furthermore, the upregulation of inflammatory mediators leads to intramyocardial haemorrhage and further microvascular necrosis. 36 37 These mechanistic pathways contribute to a larger infarct size, adverse myocardial remodelling and worse prognosis. 7 8 38

Thrombolytic therapy is an effective treatment for acute coronary thrombosis. 39 It inhibits red blood cell aggregation and dissolves thrombi to facilitate adequate microvascular perfusion. 40 41 Thrombolytic agents are commonly classified based on their affinity for fibrin. Streptokinase and urokinase lack fibrin specificity, indiscriminately activating both circulating and clot-bound plasminogen. Prourokinase has moderate fibrin specificity with a propensity for activation on fibrin surfaces, although systemic fibrinogen degradation has been observed. Alteplase and tenecteplase are highly fibrin specific, activating fibrin-bound plasminogen with minimal impact on circulating free plasminogen.

Utilisation of a facilitated PCI strategy with adjunctive intravenous thrombolysis improves coronary flow acutely; 42 however, it causes paradoxical activation of thrombin, leading to increased bleeding. 43 44 As a result, clinicians considered the administration of IC thrombolytic therapy. Encouraging results from an open-chest animal model 45 led to the first randomised trial using adjunctive IC streptokinase in 41 patients with STEMI undergoing primary PCI. 11 In the IC streptokinase group, patients demonstrated improved coronary flow reserve, index of microcirculatory resistance (IMR) and CTFC 2 days after primary PCI. 11 Further RCTs with moderately fibrin-specific thrombolytic agents (prourokinase) demonstrated similar results with improved myocardial perfusion parameters. 19 20 22 23 26–28 Notably, the T-TIME study, a large RCT of 440 patients comparing a highly fibrin-specific thrombolytic agent (alteplase) against placebo, reported different outcomes. At 3-month follow-up, there were no significant differences in rates of death or HF hospitalisation between groups. In addition, microvascular obstruction (% of left ventricular mass) on cardiac magnetic resonance (CMR) at 2–7 days did not differ between groups. The ICE T-TIMI trial, which also used a highly fibrin-specific thrombolytic agent (tenecteplase), investigated its efficacy in 40 patients. This small study administered two fixed doses of 4 mg of IC tenecteplase and evaluated the primary endpoint of culprit lesion per cent diameter stenosis after the first bolus of tenecteplase or placebo. The results indicated no significant difference in the primary endpoint between the two groups.

In an initial meta-analysis of six RCTs investigating the use of IC thrombolysis in patients with STEMI compared with placebo, findings revealed a reduction in MVO but no impact on MACE. 46 Subsequent analyses, including studies with larger sample sizes or focusing on specific thrombolytic agents, have since been conducted with varied results. 47 48 Our meta-analysis, which is the largest to date, demonstrates that adjunctive IC thrombolysis in patients with STEMI improves both clinical and microcirculation outcomes. Although bleeding events did not significantly increase, it is plausible that a trade-off between bleeding risk and MACE reduction may still exist. Notably, subgroup analysis for MACE demonstrated no significant benefit for highly fibrin-specific agents ( figure 2 ).

Intuitively, fibrin-specific thrombolytics are presumed to offer inherent advantages over their less fibrin-specific counterparts. In vivo studies have revealed that administration of alteplase in patients with STEMI induced shorter periods of thrombin and kallikrein activation, less reduction in fibrinogen, and a decrease in D-dimer and plasmin–antiplasmin complexes compared with streptokinase. 49 In this regard, tenecteplase demonstrates superior performance relative to alteplase with almost no paradoxical procoagulant effect due to reduced activation of thrombin and the kallikrein–factor XII system. 50

Nonetheless, other variables may diminish the significance of fibrin specificity. It has been argued that administration of IC alteplase, a short-acting thrombolytic with a half-life of 4–6 min, before flow optimisation with stenting may have contributed to the negative results seen in T-TIME. Although prourokinase has a similarly short half-life and was also given before stenting in multiple studies, it was associated with better results. 19 20 22 23 26–28 The therapeutic efficacy of prourokinase predominantly relies on its conversion to urokinase, a non-fibrin-specific direct plasminogen activator, potentially resulting in a prolonged duration of action. Furthermore, inducing a systemic fibrinolytic state with a non-selective agent may be paradoxically desirable in patients receiving adjunctive IC thrombolytics during primary PCI. This approach can potentially prevent further thrombus reaccumulation and embolisation to the microcirculation, especially in a highly thrombogenic environment. In contrast, fibrin-specific agents may heighten the risk of rethrombosis and reocclusion due to their limited impact on systemic fibrinogen depletion. Nevertheless, such varied outcomes across these studies could be attributed to the heterogeneous methodologies used.

Despite encouraging results, future studies targeting patients at the highest risk of MVO with appropriately powered sample sizes are required. The ongoing RESTORE-MI (Restoring Microcirculatory Perfusion in STEMI) trial ( NCT03998319 ) has a unique approach in which all study participants will undergo assessment of microvascular integrity after primary PCI prior to inclusion. Only patients with objective evidence of microvascular dysfunction (IMR value >32) following reperfusion will be randomised to treatment with IC tenecteplase or placebo. The primary endpoint measured will be cardiovascular mortality and rehospitalisation for HF at 24 months, in addition to infarct size on CMR at 6 months post-PCI. This study may potentially support a novel therapeutic approach towards treating MVO in patients with STEMI in the future.

Limitations

Several key limitations should be considered when interpreting the findings of the present meta-analysis. First, several studies were subject to bias due to issues in randomisation and blinding, leading to an increased chance of type 1 (false-positive) error. In addition, the sample size of individual studies, except for the T-TIME trial, was relatively small. Second, inconsistencies in the duration of follow-up and the definition of clinical outcomes, such as MACE, were observed among the studies. Third, interventional protocols varied between RCTs. For example, IC thrombolytic therapy differed in agent, dosage, timing and route of administration. Initial studies used non-fibrin-specific agents, while contemporary studies moved towards newer fibrin-specific therapy. Besides Sezer et al, 25 all other studies administered IC thrombolysis prior to stent implantation. 18–24 26–29 Within the latter group, some delivered the agent before flow restoration, 19 21 29 though most did so after balloon dilation or thrombus aspiration. 18 20 22–24 26–28 Similarly, the methods of IC administration varied from non-selective delivery through guiding catheters 24 25 to selective delivery via IC catheters. 18–24 26–29 Furthermore, antiplatelet, anticoagulant and glycoprotein IIb/IIIa inhibitor (GPI) regimens also differed ( table 1 ). Finally, patient selection was diverse between studies. Though regression analysis did not detect any significant effect modifiers, total ischaemic time was omitted from the analysis due to significant heterogeneity in reporting.

Conclusion

Impaired myocardial perfusion remains a clinical challenge in patients with STEMI. Despite its limitations, this meta-analysis favours the use of IC thrombolytic therapy during primary PCI. Overall, IC thrombolysis reduced the incidence of MACE and improved myocardial perfusion markers without increasing the risk of bleeding. Future clinical trials should be appropriately powered for clinical outcomes and focus on patients at high risk of microcirculatory dysfunction.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval

References

  • Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial infarction: ISIS-2. ISIS-2 (Second International Study of Infarct Survival) Collaborative Group. Lancet 1988;2:349–60. doi:10.1016/S0140-6736(88)92833-4
  • Primary versus tenecteplase-facilitated percutaneous coronary intervention in patients with ST-segment elevation acute myocardial infarction (ASSENT-4 PCI): randomised trial. Lancet 2006;367:569–78. doi:10.1016/S0140-6736(06)68147-6

Supplementary materials

Supplementary data

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

X @RajanRehan23

Contributors RR—conceptualisation, methodology, data analysis, writing (original draft preparation), reviewing and editing the final manuscript. SV—methodology, data analysis. CCYW—conceptualisation, methodology, data analysis. FP—supervision, writing (reviewing and editing). JL—supervision, writing (reviewing and editing). AK—supervision, writing (reviewing and editing). AY—conceptualisation, methodology, writing (reviewing and editing). HDW—conceptualisation, methodology, writing (reviewing and editing). WF—conceptualisation, methodology, writing (reviewing and editing). MN—conceptualisation, methodology, supervision, writing (reviewing and editing), guarantor.

Funding This study is funded by the National Health and Medical Research Council (2022150).

Competing interests JL has received minor honoraria from Abbott Vascular, Boehringer Ingelheim and Bayer. AY has received minor honoraria and research support from Abbott Vascular and Philips Healthcare. WF has received research support from Abbott Vascular and Medtronic; and has minor stock options with HeartFlow. MN has received research support from Abbott Vascular. HDW has received grant support paid to the institution and fees for serving on Steering Committees of the ODYSSEY trial from Sanofi and Regeneron Pharmaceuticals, the ISCHEMIA and MINT Study from the National Institutes of Health, the STRENGTH trial from Omthera Pharmaceuticals, the HEART-FID Study from American Regent, the DAL-GENE Study from DalCor Pharma UK, the AEGIS-II Study from CSL Behring, the CLEAR OUTCOMES Study from Esperion Therapeutics, and the SOLOIST-WHF and SCORED trials from Sanofi Aventis Australia. The remaining authors have nothing to disclose.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
