
Causal research: definition, examples and how to use it.

Causal research enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. Discover how this research can reduce employee turnover and increase customer success for your business.

What is causal research?

Causal research, also known as explanatory research or causal-comparative research, identifies the extent and nature of cause-and-effect relationships between two or more variables.

It’s often used by companies to determine the impact of changes in products, features, services, or processes on critical company metrics. Some examples:

  • How does rebranding of a product influence intent to purchase?
  • How would expansion to a new market segment affect projected sales?
  • What would be the impact of a price increase or decrease on customer loyalty?

To maintain the accuracy of causal research, ‘confounding variables’ (outside influences that could distort the results) are controlled. This is done either by holding them constant during data collection or by accounting for them with statistical methods. These variables are identified before the start of the research experiment.
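As a minimal sketch of the “keeping them constant” approach, with invented data, you could stratify observations by a suspected confounder (here, region) and compare outcomes within each stratum, so the confounder cannot vary inside any single comparison:

```python
from statistics import mean

# Invented records for illustration: (price_tier, region, loyalty_score),
# where 'region' is the suspected confounding variable.
records = [
    ("low", "north", 8.1), ("low", "north", 7.9),
    ("high", "north", 6.8), ("high", "north", 7.0),
    ("low", "south", 9.0), ("low", "south", 8.8),
    ("high", "south", 8.2), ("high", "south", 8.0),
]

def stratified_means(rows):
    """Average loyalty score per (region, price_tier) stratum."""
    strata = {}
    for tier, region, score in rows:
        strata.setdefault((region, tier), []).append(score)
    return {key: mean(values) for key, values in strata.items()}

# Comparing 'low' vs 'high' within each region holds the confounder constant.
print(stratified_means(records))
```

Within each region, the low-price and high-price groups can now be compared directly; any remaining difference cannot be explained by region.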

As well as the above, research teams will outline several other variables and principles in causal research:

  • Independent variables

The variables that may cause direct changes in another variable. For example, in a study of how class attendance affects a student’s grade point average, the independent variable is class attendance.

  • Control variables

These are the components that remain unchanged during the experiment so researchers can better understand what conditions create a cause-and-effect relationship.

  • Causation

This describes the cause-and-effect relationship itself. When researchers find causation (or the cause), they’ve conducted all the processes necessary to prove it exists.

  • Correlation

Any relationship between two variables in the experiment. It’s important to note that correlation doesn’t automatically mean causation. Researchers will typically establish correlation before proving cause-and-effect.

  • Experimental design

Researchers use experimental design to define the parameters of the experiment — e.g. categorizing participants into different groups.

  • Dependent variables

These are measurable variables that may change or are influenced by the independent variable. For example, in an experiment about whether or not terrain influences running speed, the dependent variable is running speed, because it’s the outcome being measured.

Why is causal research useful?

It’s useful because it enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. This allows businesses to create plans that benefit the company. It’s also a great research method because researchers can immediately see how variables affect each other and under what circumstances.

Also, once the first experiment has been completed, researchers can use the learnings from the analysis to repeat the experiment or apply the findings to other scenarios. Because of this, it’s widely used to help understand the impact of changes in internal or commercial strategy to the business bottom line.

Some examples include:

  • Understanding how overall training levels are improved by introducing new courses
  • Examining which variations in wording make potential customers more interested in buying a product
  • Testing a market’s response to a brand-new line of products and/or services

So, how does causal research compare and differ from other research types?

Well, there are a few research types that are used to find answers to some of the examples above:

1. Exploratory research

As its name suggests, exploratory research involves assessing a situation (or situations) where the problem isn’t clear. Through this approach, researchers can test different avenues and ideas to establish facts and gain a better understanding.

Researchers can also use it to first navigate a topic and identify which variables are important. Because no area is off-limits, the research is flexible and adapts to the investigations as it progresses.

Finally, this approach is unstructured and often involves gathering qualitative data, giving the researcher freedom to progress the research according to their thoughts and assessment. However, this may make results susceptible to researcher bias and may limit the extent to which a topic is explored.

2. Descriptive research

Descriptive research is all about describing the characteristics of the population, phenomenon or scenario studied. It focuses more on the “what” of the research subject than the “why”.

For example, a clothing brand wants to understand the fashion purchasing trends amongst buyers in California — so they conduct a demographic survey of the region, gather population data and then run descriptive research. The study will help them to uncover purchasing patterns amongst fashion buyers in California, but not necessarily why those patterns exist.

As the research happens in a natural setting, variables can cross-contaminate other variables, making it harder to isolate cause and effect relationships. Therefore, further research will be required if more causal information is needed.


How is causal research different from the other two methods above?

Well, causal research looks at what variables are involved in a problem and ‘why’ they act a certain way. As the experiment takes place in a controlled setting (thanks to controlled variables) it’s easier to identify cause-and-effect amongst variables.

Furthermore, researchers can carry out causal research at any stage in the process, though it’s usually carried out in the later stages once more is known about a particular topic or situation.

Finally, compared to the other two methods, causal research is more structured, and researchers can combine it with exploratory and descriptive research to assist with research goals.

Summary of three research types

| | Causal research | Exploratory research | Descriptive research |
|---|---|---|---|
| Main research statement | Research hypotheses | Research question | Research question |
| Amount of uncertainty characterizing decision situation | Clearly defined | Highly ambiguous | Partially defined |
| Research approach | Highly structured | Unstructured | Structured |
| When you conduct it | Later stages of decision-making | Early stages of decision-making | Later stages of decision-making |

What are the advantages of causal research?

  • Improve experiences

By understanding which variables have positive impacts on target variables (like sales revenue or customer loyalty), businesses can improve their processes, return on investment, and the experiences they offer customers and employees.

  • Help companies improve internally

By conducting causal research, management can make informed decisions about improving their employee experience and internal operations. For example, understanding which variables led to an increase in staff turnover.

  • Repeat experiments to enhance reliability and accuracy of results

When variables are identified, researchers can replicate cause-and-effect with ease, providing them with reliable data and results to draw insights from.

  • Test out new theories or ideas

If causal research is able to pinpoint the exact outcome of mixing together different variables, research teams have the ability to test out ideas in the same way to create viable proof of concepts.

  • Fix issues quickly

Once an undesirable effect’s cause is identified, researchers and management can take action to reduce the impact of it or remove it entirely, resulting in better outcomes.

What are the disadvantages of causal research?

  • Provides information to competitors

If you plan to publish your research, it provides information about your plans to your competitors. For example, they might use your research outcomes to identify what you are up to and enter the market before you.

  • Difficult to administer

Causal research is often difficult to administer because it’s rarely possible to control for every extraneous variable.

  • Time and money constraints

Budgetary and time constraints can make this type of research expensive to conduct and repeat. Also, if an initial attempt doesn’t provide a cause and effect relationship, the ROI is wasted and could impact the appetite for future repeat experiments.

  • Requires additional research to ensure validity

You can’t rely on the outcomes of causal research alone, as they aren’t always accurate. It’s best to conduct other types of research alongside it to confirm its output.

  • Trouble establishing cause and effect

Researchers might identify that two variables are connected, but struggle to determine which is the cause and which variable is the effect.

  • Risk of contamination

There’s always the risk that people outside your market or area of study could affect the results of your research. For example, in a retail store study, shoppers outside your ‘test parameters’ might shop at your store and skew the results.

How can you use causal research effectively?

To better highlight how you can use causal research across functions or markets, here are a few examples:

Market and advertising research

A company might want to know if their new advertising campaign or marketing campaign is having a positive impact. So, their research team can carry out a causal research project to see which variables cause a positive or negative effect on the campaign.

For example, a cold-weather apparel company in a winter ski-resort town may see an increase in sales generated after a targeted campaign to skiers. To see if one caused the other, the research team could set up a duplicate experiment to see if the same campaign would generate sales from non-skiers. If the results reduce or change, then it’s likely that the campaign had a direct effect on skiers to encourage them to purchase products.
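The duplicate-experiment idea above can be sketched as a simple difference-in-means check; the weekly sales figures below are invented for illustration:

```python
from statistics import mean

# Invented weekly unit sales generated by the same campaign when shown
# to the original skier audience vs. a non-skier comparison group.
skier_sales = [120, 135, 128, 141, 133]
non_skier_sales = [62, 58, 71, 65, 60]

lift = mean(skier_sales) - mean(non_skier_sales)
print(f"Average lift among skiers: {lift:.1f} units/week")
```

If the campaign produces a clearly smaller effect outside the skier group, that supports (though does not by itself prove) a causal link between the targeted campaign and skier purchases.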

Improving customer experiences and loyalty levels

Customers enjoy shopping with brands that align with their own values, and they’re more likely to buy and present the brand positively to other potential shoppers as a result. So, it’s in your best interest to deliver great experiences and retain your customers.

For example, the Harvard Business Review found that increasing customer retention rates by 5% increased profits by 25% to 95%. But say you want to increase your own retention rate: how can you identify which variables contribute to it? Using causal research, you can test hypotheses about which processes, strategies or changes influence customer retention. For example, is it the streamlined checkout? What about the personalized product suggestions? Or maybe it was a new solution that solved their problem? Causal research will help you find out.

Improving problematic employee turnover rates

If your company has a high attrition rate, causal research can help you narrow down the variables or reasons which have the greatest impact on people leaving. This allows you to prioritize your efforts on tackling the issues in the right order, for the best positive outcomes.

For example, through causal research, you might find that employee dissatisfaction due to a lack of communication and transparency from upper management leads to poor morale, which in turn influences employee retention.

To rectify the problem, you could implement a routine feedback loop or session that enables your people to talk to your company’s C-level executives so that they feel heard and understood.

How to conduct causal research

The first steps to getting started are:

1. Define the purpose of your research

What questions do you have? What do you expect to come out of your research? Think about which variables you need to test out the theory.

2. Pick a random sampling if participants are needed

Using a technology solution to support your sampling, like a database, can help you define who you want your target audience to be, and how random or representative they should be.
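A random draw of participants can be sketched with Python’s standard library; the pool size and group sizes here are arbitrary:

```python
import random

# Hypothetical pool of participant IDs pulled from a customer database.
pool = [f"participant_{i:03d}" for i in range(500)]

random.seed(42)  # fixed seed so the assignment is reproducible
treatment = random.sample(pool, k=50)  # group that receives the change
remaining = [p for p in pool if p not in treatment]
control = random.sample(remaining, k=50)  # group that does not

# The two groups never overlap, so any outcome difference reflects the
# change being tested (plus noise), not shared membership.
assert not set(treatment) & set(control)
```

Random assignment like this reduces the chance that pre-existing differences between the groups, rather than your change, drive the results.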

3. Set up the controlled experiment

Once you’ve defined which variables you’d like to measure to see if they interact, think about how best to set up the experiment. This could be in-person or in-house via interviews, or it could be done remotely using online surveys.

4. Carry out the experiment

Make sure to keep all irrelevant variables the same, and only change the causal variable (the one that causes the effect) to gather the correct data. Depending on your method, you could be collecting qualitative or quantitative data, so make sure you note your findings across each regularly.

5. Analyze your findings

Either manually or using technology, analyze your data to see if any trends, patterns or correlations emerge. By looking at the data, you’ll be able to see what changes you might need to do next time, or if there are questions that require further research.
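As a minimal sketch of this step, you could compute a Pearson correlation between two measured variables by hand; the ad-spend and sign-up figures below are invented, and a high correlation is only a starting point, not proof of causation:

```python
from statistics import mean, stdev

# Invented paired observations: weekly ad spend (in $1,000s) vs. sign-ups.
spend = [1.0, 2.0, 3.0, 4.0, 5.0]
signups = [11, 19, 32, 41, 48]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson(spend, signups)
print(f"r = {r:.3f}")  # close to +1: strong positive correlation
```

A strong correlation here would justify a follow-up controlled experiment, since correlation alone can’t rule out confounding.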

6. Verify your findings

Your first attempt gives you the baseline figures to compare the new results to. You can then run another experiment to verify your findings.

7. Do follow-up or supplemental research

You can supplement your original findings by carrying out research that goes deeper into causes or explores the topic in more detail. One of the best ways to do this is to use a survey. See ‘Use surveys to help your experiment’.

Identifying causal relationships between variables

To verify if a causal relationship exists, you have to satisfy the following criteria:

  • Nonspurious association

A clear correlation exists between the cause and the effect. In other words, no third variable that relates to both the cause and the effect should exist.

  • Temporal sequence

The cause occurs before the effect. For example, increased ad spend on product marketing would contribute to higher product sales.

  • Concomitant variation

The variation between the two variables is systematic. For example, if a company doesn’t change its IT policies and technology stack, then changes in employee productivity were not caused by IT policies or technology.

How can surveys help your causal research experiments?

There are some surveys that are perfect for assisting researchers with understanding cause and effect. These include:

  • Employee Satisfaction Survey – An introductory employee satisfaction survey that provides you with an overview of your current employee experience.
  • Manager Feedback Survey – An introductory manager feedback survey geared toward improving your skills as a leader with valuable feedback from your team.
  • Net Promoter Score (NPS) Survey – Measure customer loyalty and understand how your customers feel about your product or service using one of the world’s best-recognized metrics.
  • Employee Engagement Survey – An entry-level employee engagement survey that provides you with an overview of your current employee experience.
  • Customer Satisfaction Survey – Evaluate how satisfied your customers are with your company, including the products and services you provide and how they are treated when they buy from you.
  • Employee Exit Interview Survey – Understand why your employees are leaving and how they’ll speak about your company once they’re gone.
  • Product Research Survey – Evaluate your consumers’ reaction to a new product or product feature across every stage of the product development journey.
  • Brand Awareness Survey – Track the level of brand awareness in your target market, including current and potential future customers.
  • Online Purchase Feedback Survey – Find out how well your online shopping experience performs against customer needs and expectations.

That covers the fundamentals of causal research and should give you a foundation for ongoing studies to assess opportunities, problems, and risks across your market, product, customer, and employee segments.

If you want to transform your research, empower your teams and get insights on tap to get ahead of the competition, maybe it’s time to leverage Qualtrics CoreXM.

Qualtrics CoreXM provides a single platform for data collection and analysis across every part of your business — from customer feedback to product concept testing. What’s more, you can integrate it with your existing tools and services thanks to a flexible API.

Qualtrics CoreXM offers you as much or as little power and complexity as you need, so whether you’re running simple surveys or more advanced forms of research, it can deliver every time.



What is causal research design?

Last updated

14 May 2023


Causal research design examines cause-and-effect relationships between two or more variables. Examining these relationships gives researchers valuable insights into the mechanisms that drive the phenomena they are investigating.

Organizations primarily use causal research design to identify, determine, and explore the impact of changes within an organization and the market. You can use a causal research design to evaluate the effects of certain changes on existing procedures, norms, and more.

This article explores causal research design, including its elements, advantages, and disadvantages.


Components of causal research

You can demonstrate the existence of cause-and-effect relationships between two factors or variables using specific causal information, allowing you to produce more meaningful results and research implications.

These are the key inputs for causal research:

The timeline of events

Ideally, the cause must occur before the effect. You should review the timeline of two or more separate events to distinguish the independent variable (cause) from the dependent variable (effect) before developing a hypothesis.

If the cause occurs before the effect, you can link the two and develop a hypothesis.

For instance, an organization may notice a sales increase. Determining the cause would help them reproduce these results. 

Upon review, the business realizes that the sales boost occurred right after an advertising campaign. The business can leverage this time-based data to determine whether the advertising campaign is the independent variable that caused a change in sales. 
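The before-and-after review described here can be sketched as a simple comparison around the campaign date; the launch date and daily sales figures are invented:

```python
from datetime import date
from statistics import mean

launch = date(2023, 3, 1)  # hypothetical campaign launch date

# Invented daily sales (units) in the days around the launch.
daily_sales = {
    date(2023, 2, 26): 40, date(2023, 2, 27): 42, date(2023, 2, 28): 39,
    date(2023, 3, 2): 55, date(2023, 3, 3): 58, date(2023, 3, 4): 61,
}

before = mean(v for d, v in daily_sales.items() if d < launch)
after = mean(v for d, v in daily_sales.items() if d >= launch)
print(f"mean sales before launch: {before:.1f}, after: {after:.1f}")
```

If the jump appears only after the launch, the temporal sequence is consistent with the campaign being the cause, which makes it a hypothesis worth testing properly.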

Evaluation of confounding variables

In most cases, you need to pinpoint the variables that comprise a cause-and-effect relationship when using a causal research design. This uncovers a more accurate conclusion. 

Covariation between the cause and effect must be genuine, and no third factor should relate to both.

Observing changes

Variation links between two variables must be clear. A quantitative change in effect must happen solely due to a quantitative change in the cause. 

You can test whether the independent variable changes the dependent variable to evaluate the validity of a cause-and-effect relationship. A steady change between the two variables must occur to back up your hypothesis of a genuine causal effect. 
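One simple way to check for a steady, systematic change is a least-squares slope estimate over paired measurements; the numbers below are hypothetical:

```python
from statistics import mean

# Invented paired measurements: x is the independent variable,
# y the dependent variable.
x = [1, 2, 3, 4, 5]
y = [3.1, 5.0, 7.2, 8.9, 11.1]

mx, my = mean(x), mean(y)
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
print(f"estimated change in y per unit of x: {slope:.2f}")
```

A stable, clearly nonzero slope across repeated runs backs up the hypothesis of a genuine causal effect; a slope near zero undermines it.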

Why is causal research useful?

Causal research allows market researchers to predict hypothetical occurrences and outcomes while enhancing existing strategies. Organizations can use this concept to develop beneficial plans. 

Causal research is also useful as market researchers can immediately deduce the effect of the variables on each other under real-world conditions. 

Once researchers complete their first experiment, they can use their findings. Applying them to alternative scenarios or repeating the experiment to confirm its validity can produce further insights. 

Businesses widely use causal research to identify and comprehend the effect of strategic changes on their profits. 

How does causal research compare and differ from other research types?

Other research types that identify relationships between variables include exploratory and descriptive research.

Here’s how they compare and differ from causal research designs:

Exploratory research

An exploratory research design evaluates situations where a problem or opportunity's boundaries are unclear. You can use this research type to test various hypotheses and assumptions to establish facts and understand a situation more clearly.

You can also use exploratory research design to navigate a topic and discover the relevant variables. This research type allows flexibility and adaptability as the experiment progresses, particularly since no area is off-limits.

It’s worth noting that exploratory research is unstructured and typically involves collecting qualitative data . This provides the freedom to tweak and amend the research approach according to your ongoing thoughts and assessments. 

Unfortunately, this exposes the findings to the risk of bias and may limit the extent to which a researcher can explore a topic. 

This table compares the key characteristics of causal and exploratory research:

| | Causal research | Exploratory research |
|---|---|---|
| Main research statement | Research hypotheses | Research question |
| Amount of uncertainty characterizing decision situation | Clearly defined | Highly ambiguous |
| Research approach | Highly structured | Unstructured |
| When you conduct it | Later stages of decision-making | Early stages of decision-making |

Descriptive research

This research design involves capturing and describing the traits of a population, situation, or phenomenon. Descriptive research focuses more on the “what” of the research subject and less on the “why.”

Since descriptive research typically happens in a real-world setting, variables can cross-contaminate others. This increases the challenge of isolating cause-and-effect relationships. 

You may require further research if you need more causal links. 

This table compares the key characteristics of causal and descriptive research:

| | Causal research | Descriptive research |
|---|---|---|
| Main research statement | Research hypotheses | Research question |
| Amount of uncertainty characterizing decision situation | Clearly defined | Partially defined |
| Research approach | Highly structured | Structured |
| When you conduct it | Later stages of decision-making | Later stages of decision-making |

Causal research examines a research question’s variables and how they interact. It’s easier to pinpoint cause and effect since the experiment often happens in a controlled setting. 

Researchers can conduct causal research at any stage, but they typically use it once they know more about the topic.

Causal research also tends to be more structured than the other two types, and you can combine it with exploratory and descriptive research to help you attain your research goals.

How can you use causal research effectively?

Here are common ways that market researchers leverage causal research effectively:

Market and advertising research

Do you want to know if your new marketing campaign is affecting your organization positively? You can use causal research to determine the variables causing negative or positive impacts on your campaign. 

Improving customer experiences and loyalty levels

Consumers generally enjoy purchasing from brands aligned with their values. They’re more likely to purchase from such brands and positively represent them to others. 

You can use causal research to identify the variables contributing to increased or reduced customer acquisition and retention rates. 

Could the cause of increased customer retention rates be streamlined checkout? 

Perhaps you introduced a new solution geared towards directly solving their immediate problem. 

Whatever the reason, causal research can help you identify the cause-and-effect relationship. You can use this to enhance your customer experiences and loyalty levels.

Improving problematic employee turnover rates

Is your organization experiencing skyrocketing attrition rates? 

You can leverage the features and benefits of causal research to narrow down the possible explanations or variables with significant effects on employees quitting. 

This way, you can prioritize interventions, focusing on the highest priority causal influences, and begin to tackle high employee turnover rates. 

Advantages of causal research

The main benefits of causal research include the following:

Effectively test new ideas

If causal research can pinpoint the precise outcome through combinations of different variables, researchers can test ideas in the same manner to form viable proof of concepts.

Achieve more objective results

Market researchers typically use random sampling techniques to choose experiment participants or subjects in causal research. This reduces the possibility of exterior, sample, or demography-based influences, generating more objective results. 

Improved business processes

Causal research helps businesses understand which variables positively impact target variables, such as customer loyalty or sales revenues. This helps them improve their processes, ROI, and customer and employee experiences.

Guarantee reliable and accurate results

Upon identifying the correct variables, researchers can replicate cause and effect effortlessly. This creates reliable data and results to draw insights from. 

Internal organization improvements

Businesses that conduct causal research can make informed decisions about improving their internal operations and enhancing employee experiences. 

Disadvantages of causal research

Like any other research method, causal research has its drawbacks, including:

Extra research to ensure validity

Researchers can't simply rely on the outcomes of causal research since it isn't always accurate. There may be a need to conduct other research types alongside it to ensure accurate output.

Coincidence

Coincidence tends to be the most significant error in causal research. Researchers often misinterpret a coincidental link between a cause and effect as a direct causal link. 

Administration challenges

Causal research can be challenging to administer since it’s rarely possible to control the impact of every extraneous variable.

Giving away your competitive advantage

If you intend to publish your research, it exposes your information to the competition. 

Competitors may use your research outcomes to identify your plans and strategies to enter the market before you. 

Causal research examples

Causal research serves different purposes across multiple fields, such as the following:

Customer loyalty research

Organizations and employees can use causal research to determine the best customer attraction and retention approaches. 

They monitor interactions between customers and employees to identify cause-and-effect patterns. That could be a product demonstration technique resulting in higher or lower sales from the same customers. 

Example: Business X introduces a new individual marketing strategy for a small customer group and notices a measurable increase in monthly subscriptions. 

Upon getting identical results from different groups, the business concludes that the individual marketing strategy resulted in the intended causal relationship.

Advertising research

Businesses can also use causal research to implement and assess advertising campaigns. 

Example: Business X notices a 7% increase in sales revenue a few months after introducing a new advertisement in a certain region. The business can run the same ad in other randomly selected regions and compare sales data over the same period.

This will help the company determine whether the ad caused the sales increase. If sales increase in these randomly selected regions, the business could conclude that advertising campaigns and sales share a cause-and-effect relationship. 

Educational research

Academics, teachers, and learners can use causal research to explore the impact of politics on learners and pinpoint learner behavior trends. 

Example: College X notices that more IT students drop out of their program in their second year, at a rate 8% higher than in any other year.

The college administration can interview a random group of IT students to identify factors leading to this situation, including personal factors and influences. 

With the help of in-depth statistical analysis, the institution's researchers can uncover the main factors causing dropout. They can create immediate solutions to address the problem.
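As an illustrative first pass at that analysis, coded interview responses (invented here) can be tallied to surface the most frequently cited dropout factors:

```python
from collections import Counter

# Invented coded responses: each interviewed student names the main
# factor behind considering dropping out.
responses = [
    "workload", "finances", "workload", "course_fit",
    "workload", "finances", "workload",
]

factor_counts = Counter(responses)
for factor, count in factor_counts.most_common():
    print(f"{factor}: {count}")
```

Frequency counts like this only rank candidate causes; establishing that a factor actually drives dropout still requires the controlled comparisons described earlier.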

Is a causal variable dependent or independent?

When two variables have a cause-and-effect relationship, the cause is often called the independent variable. As such, the effect variable is dependent, i.e., it depends on the independent causal variable. An independent variable is only causal under experimental conditions. 

What are the three criteria for causality?

The three conditions for causality are:

Temporality/temporal precedence: The cause must precede the effect.

Rationality: One event predicts the other with an explanation, and the effect must vary in proportion to changes in the cause.

Control for extraneous variables: The observed covariation must not be produced by a third, extraneous variable.

Is causal research experimental?

Causal research is mostly explanatory. Causal studies focus on analyzing a situation to explore and explain the patterns of relationships between variables. 

Further, experiments are the primary data collection methods in studies with causal research design. However, as a research design, causal research isn't entirely experimental.

What is the difference between experimental and causal research design?

One of the main differences between causal and experimental research is that in causal research, the research subjects are already in groups since the event has already happened. 

On the other hand, researchers randomly choose subjects in experimental research before manipulating the variables.



cause and effect type of research

Users report unexpectedly high data usage, especially during streaming sessions.

cause and effect type of research

Users find it hard to navigate from the home page to relevant playlists in the app.

cause and effect type of research

It would be great to have a sleep timer feature, especially for bedtime listening.

cause and effect type of research

I need better filters to find the songs or artists I’m looking for.

Log in or sign up

Get started for free

Research-Methodology

Causal Research (Explanatory research)

Causal research, also known as explanatory research, is conducted in order to identify the extent and nature of cause-and-effect relationships. It can be used to assess the impacts of specific changes on existing norms, various processes and so on.

Causal studies focus on an analysis of a situation or a specific problem to explain the patterns of relationships between variables. Experiments are the most popular primary data collection methods in studies with causal research design.

The presence of cause-and-effect relationships can be confirmed only if specific causal evidence exists. Causal evidence has three important components:

1. Temporal sequence. The cause must occur before the effect. For example, it would not be appropriate to credit an increase in sales to rebranding efforts if the increase had started before the rebranding.

2. Concomitant variation. The two variables must vary together systematically. For example, if a company does not change its employee training and development practices, then changes in customer satisfaction cannot be attributed to those practices.

3. Nonspurious association. Any covariation between a cause and an effect must be genuine and not simply due to another variable. In other words, there should be no 'third' factor that relates to both the cause and the effect.

The table below compares the main characteristics of causal research to exploratory and descriptive research designs: [1]

Causal research: the decision situation is clearly defined; the key research statement is a research hypothesis; it is conducted at later stages of decision making; the usual research approach is highly structured. Examples: 'Will consumers buy more products in a blue package?'; 'Which of two advertising campaigns will be more effective?'

Exploratory research: the decision situation is highly ambiguous; the key research statement is a research question; it is conducted at an early stage of decision making; the usual research approach is unstructured. Examples: 'Our sales are declining for no apparent reason'; 'What kinds of new products are fast-food consumers interested in?'

Descriptive research: the decision situation is partially defined; the key research statement is a research question; it is conducted at later stages of decision making; the usual research approach is structured. Examples: 'What kind of people patronize our stores compared to our primary competitor?'; 'What product features are the most important to our customers?'

Main characteristics of research designs

Examples of Causal Research (Explanatory Research)

The following are examples of research objectives for causal research design:

  • To assess the impacts of foreign direct investment on the levels of economic growth in Taiwan
  • To analyse the effects of re-branding initiatives on the levels of customer loyalty
  • To identify the nature of the impact of work process re-engineering on the levels of employee motivation

Advantages of Causal Research (Explanatory Research)

  • Causal studies may play an instrumental role in identifying the reasons behind a wide range of processes, as well as in assessing the impacts of changes on existing norms, processes etc.
  • Causal studies usually offer the advantage of replication if the need arises
  • These studies are associated with greater levels of internal validity due to the systematic selection of subjects

Disadvantages of Causal Research (Explanatory Research)

  • Coincidences in events may be perceived as cause-and-effect relationships. For example, Punxsutawney Phil correctly forecast the duration of winter for five consecutive years; nevertheless, it is just a rodent without intellect or forecasting powers, i.e. the streak was a coincidence.
  • It can be difficult to reach appropriate conclusions on the basis of causal research findings, due to the impact of a wide range of factors and variables in the social environment. In other words, while causality can be inferred, it cannot be proved with a high level of certainty.
  • In certain cases, a correlation between two variables can be effectively established, yet identifying which variable is the cause and which is the effect can be a difficult task to accomplish.

My e-book, The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step assistance, contains discussions of theory and application of research designs. The e-book also explains all stages of the research process, starting from the selection of the research area to writing personal reflection. Important elements of dissertations such as research philosophy, research approach, methods of data collection, data analysis and sampling are explained in this e-book in simple words.

John Dudovskiy


[1] Source: Zikmund, W.G., Babin, J., Carr, J. & Griffin, M. (2012) “Business Research Methods: with Qualtrics Printed Access Card” Cengage Learning

Causal Research: Definition, Design, Tips, Examples

Appinio Research · 21.02.2024 · 34min read


Ever wondered why certain events lead to specific outcomes? Understanding causality—the relationship between cause and effect—is crucial for unraveling the mysteries of the world around us. In this guide on causal research, we delve into the methods, techniques, and principles behind identifying and establishing cause-and-effect relationships between variables. Whether you're a seasoned researcher or new to the field, this guide will equip you with the knowledge and tools to conduct rigorous causal research and draw meaningful conclusions that can inform decision-making and drive positive change.

What is Causal Research?

Causal research is a methodological approach used in scientific inquiry to investigate cause-and-effect relationships between variables. Unlike correlational or descriptive research, which merely examine associations or describe phenomena, causal research aims to determine whether changes in one variable cause changes in another variable.

Importance of Causal Research

Understanding the importance of causal research is crucial for appreciating its role in advancing knowledge and informing decision-making across various fields. Here are key reasons why causal research is significant:

  • Establishing Causality:  Causal research enables researchers to determine whether changes in one variable directly cause changes in another variable. This helps identify effective interventions, predict outcomes, and inform evidence-based practices.
  • Guiding Policy and Practice:  By identifying causal relationships, causal research provides empirical evidence to support policy decisions, program interventions, and business strategies. Decision-makers can use causal findings to allocate resources effectively and address societal challenges.
  • Informing Predictive Modeling:  Causal research contributes to the development of predictive models by elucidating causal mechanisms underlying observed phenomena. Predictive models based on causal relationships can accurately forecast future outcomes and trends.
  • Advancing Scientific Knowledge:  Causal research contributes to the cumulative body of scientific knowledge by testing hypotheses, refining theories, and uncovering underlying mechanisms of phenomena. It fosters a deeper understanding of complex systems and phenomena.
  • Mitigating Confounding Factors:  Understanding causal relationships allows researchers to control for confounding variables and reduce bias in their studies. By isolating the effects of specific variables, researchers can draw more valid and reliable conclusions.

Causal Research Distinction from Other Research

Understanding the distinctions between causal research and other types of research methodologies is essential for researchers to choose the most appropriate approach for their study objectives. Let's explore the differences and similarities between causal research and descriptive, exploratory, and correlational research methodologies.

Descriptive vs. Causal Research

Descriptive research  focuses on describing characteristics, behaviors, or phenomena without manipulating variables or establishing causal relationships. It provides a snapshot of the current state of affairs but does not attempt to explain why certain phenomena occur.

Causal research , on the other hand, seeks to identify cause-and-effect relationships between variables by systematically manipulating independent variables and observing their effects on dependent variables. Unlike descriptive research, causal research aims to determine whether changes in one variable directly cause changes in another variable.

Similarities:

  • Both descriptive and causal research involve empirical observation and data collection.
  • Both types of research contribute to the scientific understanding of phenomena, albeit through different approaches.

Differences:

  • Descriptive research focuses on describing phenomena, while causal research aims to explain why phenomena occur by identifying causal relationships.
  • Descriptive research typically uses observational methods, while causal research often involves experimental designs or causal inference techniques to establish causality.

Exploratory vs. Causal Research

Exploratory research  aims to explore new topics, generate hypotheses, or gain initial insights into phenomena. It is often conducted when little is known about a subject and seeks to generate ideas for further investigation.

Causal research , on the other hand, is concerned with testing hypotheses and establishing cause-and-effect relationships between variables. It builds on existing knowledge and seeks to confirm or refute causal hypotheses through systematic investigation.

Similarities:

  • Both exploratory and causal research contribute to the generation of knowledge and theory development.
  • Both types of research involve systematic inquiry and data analysis to answer research questions.

Differences:

  • Exploratory research focuses on generating hypotheses and exploring new areas of inquiry, while causal research aims to test hypotheses and establish causal relationships.
  • Exploratory research is more flexible and open-ended, while causal research follows a more structured and hypothesis-driven approach.

Correlational vs. Causal Research

Correlational research  examines the relationship between variables without implying causation. It identifies patterns of association or co-occurrence between variables but does not establish the direction or causality of the relationship.

Causal research , on the other hand, seeks to establish cause-and-effect relationships between variables by systematically manipulating independent variables and observing their effects on dependent variables. It goes beyond mere association to determine whether changes in one variable directly cause changes in another variable.

Similarities:

  • Both correlational and causal research involve analyzing relationships between variables.
  • Both types of research contribute to understanding the nature of associations between variables.

Differences:

  • Correlational research focuses on identifying patterns of association, while causal research aims to establish causal relationships.
  • Correlational research does not manipulate variables, while causal research involves systematically manipulating independent variables to observe their effects on dependent variables.

How to Formulate Causal Research Hypotheses?

Crafting research questions and hypotheses is the foundational step in any research endeavor. Defining your variables clearly and articulating the causal relationship you aim to investigate is essential. Let's explore this process further.

1. Identify Variables

Identifying variables involves recognizing the key factors you will manipulate or measure in your study. These variables can be classified into independent, dependent, and confounding variables.

  • Independent Variable (IV):  This is the variable you manipulate or control in your study. It is the presumed cause that you want to test.
  • Dependent Variable (DV):  The dependent variable is the outcome or response you measure. It is affected by changes in the independent variable.
  • Confounding Variables:  These are extraneous factors that may influence the relationship between the independent and dependent variables, leading to spurious correlations or erroneous causal inferences. Identifying and controlling for confounding variables is crucial for establishing valid causal relationships.

2. Establish Causality

Establishing causality requires meeting specific criteria outlined by scientific methodology. While correlation between variables may suggest a relationship, it does not imply causation. To establish causality, researchers must demonstrate the following:

  • Temporal Precedence:  The cause must precede the effect in time. In other words, changes in the independent variable must occur before changes in the dependent variable.
  • Covariation of Cause and Effect:  Changes in the independent variable should be accompanied by corresponding changes in the dependent variable. This demonstrates a consistent pattern of association between the two variables.
  • Elimination of Alternative Explanations:  Researchers must rule out other possible explanations for the observed relationship between variables. This involves controlling for confounding variables and conducting rigorous experimental designs to isolate the effects of the independent variable.
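To see why covariation alone cannot establish causation, consider a minimal pure-Python simulation (the numbers are made up for illustration) in which a confounder drives both variables: X and Y correlate strongly even though X has no causal effect on Y.

```python
import random

random.seed(0)

# Hypothetical setup: a confounder Z causes both X and Y;
# X itself has zero causal effect on Y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]        # unobserved confounder
x = [zi + random.gauss(0, 1) for zi in z]         # X is driven by Z
y = [2.0 * zi + random.gauss(0, 1) for zi in z]   # Y is driven by Z only

def corr(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# X and Y covary strongly, yet intervening on X would not move Y at all;
# only by accounting for Z is the alternative explanation eliminated.
print(f"corr(x, y) = {corr(x, y):.2f}")
```

This is exactly the situation the third criterion guards against: temporal precedence and covariation can both hold while the relationship remains spurious.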

3. Write Clear and Testable Hypotheses

Hypotheses serve as tentative explanations for the relationship between variables and provide a framework for empirical testing. A well-formulated hypothesis should be:

  • Specific:  Clearly state the expected relationship between the independent and dependent variables.
  • Testable:  The hypothesis should be capable of being empirically tested through observation or experimentation.
  • Falsifiable:  There should be a possibility of proving the hypothesis false through empirical evidence.

For example, a hypothesis in a study examining the effect of exercise on weight loss could be: "Increasing levels of physical activity (IV) will lead to greater weight loss (DV) among participants (compared to those with lower levels of physical activity)."

By formulating clear hypotheses and operationalizing variables, researchers can systematically investigate causal relationships and contribute to the advancement of scientific knowledge.
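As an illustration of how such a hypothesis might be tested, here is a small pure-Python sketch using simulated, hypothetical weight-loss data and a hand-computed Welch's t statistic to compare the two activity groups:

```python
import random
import statistics

random.seed(1)

# Hypothetical data: weight loss in kg for high- and low-activity groups.
high_activity = [random.gauss(5.0, 2.0) for _ in range(200)]
low_activity = [random.gauss(3.0, 2.0) for _ in range(200)]

def welch_t(a, b):
    """Welch's t statistic for the difference between two sample means."""
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(high_activity, low_activity)
# With ~400 observations, |t| > 1.96 rejects the null hypothesis of
# "no difference" at roughly the 5% level.
print(f"t = {t:.2f}, reject H0: {abs(t) > 1.96}")
```

Because the hypothesis was stated directionally (more activity leads to more weight loss), a rejection with a positive t is evidence consistent with it; in practice you would also report the effect size, not just significance.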

Causal Research Design

Designing your research study involves making critical decisions about how you will collect and analyze data to investigate causal relationships.

Experimental vs. Observational Designs

One of the first decisions you'll make when designing a study is whether to employ an experimental or observational design. Each approach has its strengths and limitations, and the choice depends on factors such as the research question, feasibility, and ethical considerations.

  • Experimental Design: In experimental designs, researchers manipulate the independent variable and observe its effects on the dependent variable while controlling for confounding variables. Random assignment to experimental conditions allows for causal inferences to be drawn. Example: A study testing the effectiveness of a new teaching method on student performance by randomly assigning students to either the experimental group (receiving the new teaching method) or the control group (receiving the traditional method).
  • Observational Design: Observational designs involve observing and measuring variables without intervention. Researchers may still examine relationships between variables but cannot establish causality as definitively as in experimental designs. Example: A study observing the association between socioeconomic status and health outcomes by collecting data on income, education level, and health indicators from a sample of participants.

Control and Randomization

Control and randomization are crucial aspects of experimental design that help ensure the validity of causal inferences.

  • Control: Controlling for extraneous variables involves holding constant factors that could influence the dependent variable, except for the independent variable under investigation. This helps isolate the effects of the independent variable. Example: In a medication trial, controlling for factors such as age, gender, and pre-existing health conditions ensures that any observed differences in outcomes can be attributed to the medication rather than other variables.
  • Randomization: Random assignment of participants to experimental conditions helps distribute potential confounders evenly across groups, reducing the likelihood of systematic biases and allowing for causal conclusions. Example: Randomly assigning patients to treatment and control groups in a clinical trial ensures that both groups are comparable in terms of baseline characteristics, minimizing the influence of extraneous variables on treatment outcomes.
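The mechanics of random assignment are simple enough to sketch in a few lines of Python (the participant pool and the age covariate here are invented for illustration):

```python
import random
import statistics

random.seed(42)

# Hypothetical participant pool with one observed covariate (age).
participants = [{"id": i, "age": random.randint(18, 70)} for i in range(500)]

# Random assignment: shuffle the pool, then split it in half.
random.shuffle(participants)
treatment, control = participants[:250], participants[250:]

# Randomization tends to balance covariates (observed and unobserved alike)
# across groups, which is what licenses a causal reading of the comparison.
mean_t = statistics.mean(p["age"] for p in treatment)
mean_c = statistics.mean(p["age"] for p in control)
print(f"mean age: treatment {mean_t:.1f}, control {mean_c:.1f}")
```

The key property is that the balancing works in expectation for every covariate at once, including ones the researcher never measured.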

Internal and External Validity

Two key concepts in research design are internal validity and external validity, which relate to the credibility and generalizability of study findings, respectively.

  • Internal Validity: Internal validity refers to the extent to which the observed effects can be attributed to the manipulation of the independent variable rather than confounding factors. Experimental designs typically have higher internal validity due to their control over extraneous variables. Example: A study examining the impact of a training program on employee productivity would have high internal validity if it could confidently attribute changes in productivity to the training intervention.
  • External Validity: External validity concerns the extent to which study findings can be generalized to other populations, settings, or contexts. While experimental designs prioritize internal validity, they may sacrifice external validity by using highly controlled conditions that do not reflect real-world scenarios. Example: Findings from a laboratory study on memory retention may have limited external validity if the experimental tasks and conditions differ significantly from real-life learning environments.

Types of Experimental Designs

Several types of experimental designs are commonly used in causal research, each with its own strengths and applications.

  • Randomized Control Trials (RCTs): RCTs are considered the gold standard for assessing causality in research. Participants are randomly assigned to experimental and control groups, allowing researchers to make causal inferences. Example: A pharmaceutical company testing a new drug's efficacy would use an RCT to compare outcomes between participants receiving the drug and those receiving a placebo.
  • Quasi-Experimental Designs: Quasi-experimental designs lack random assignment but still attempt to establish causality by controlling for confounding variables through design or statistical analysis. Example: A study evaluating the effectiveness of a smoking cessation program might compare outcomes between participants who voluntarily enroll in the program and a matched control group of non-enrollees.

By carefully selecting an appropriate research design and addressing considerations such as control, randomization, and validity, researchers can conduct studies that yield credible evidence of causal relationships and contribute valuable insights to their field of inquiry.

Causal Research Data Collection

Collecting data is a critical step in any research study, and the quality of the data directly impacts the validity and reliability of your findings.

Choosing Measurement Instruments

Selecting appropriate measurement instruments is essential for accurately capturing the variables of interest in your study. The choice of measurement instrument depends on factors such as the nature of the variables, the target population, and the research objectives.

  • Surveys:  Surveys are commonly used to collect self-reported data on attitudes, opinions, behaviors, and demographics. They can be administered through various methods, including paper-and-pencil surveys, online surveys, and telephone interviews.
  • Observations:  Observational methods involve systematically recording behaviors, events, or phenomena as they occur in natural settings. Observations can be structured (following a predetermined checklist) or unstructured (allowing for flexible data collection).
  • Psychological Tests:  Psychological tests are standardized instruments designed to measure specific psychological constructs, such as intelligence, personality traits, or emotional functioning. These tests often have established reliability and validity.
  • Physiological Measures:  Physiological measures, such as heart rate, blood pressure, or brain activity, provide objective data on bodily processes. They are commonly used in health-related research but require specialized equipment and expertise.
  • Existing Databases:  Researchers may also utilize existing datasets, such as government surveys, public health records, or organizational databases, to answer research questions. Secondary data analysis can be cost-effective and time-saving but may be limited by the availability and quality of data.

Ensuring accurate data collection is the cornerstone of any successful research endeavor. With the right tools in place, you can unlock invaluable insights to drive your causal research forward. From surveys to tests, each instrument offers a unique lens through which to explore your variables of interest.


Sampling Techniques

Sampling involves selecting a subset of individuals or units from a larger population to participate in the study. The goal of sampling is to obtain a representative sample that accurately reflects the characteristics of the population of interest.

  • Probability Sampling:  Probability sampling methods involve randomly selecting participants from the population, ensuring that each member of the population has an equal chance of being included in the sample. Common probability sampling techniques include simple random sampling, stratified sampling, and cluster sampling.
  • Non-Probability Sampling:  Non-probability sampling methods do not involve random selection and may introduce biases into the sample. Examples of non-probability sampling techniques include convenience sampling, purposive sampling, and snowball sampling.

The choice of sampling technique depends on factors such as the research objectives, population characteristics, resources available, and practical constraints. Researchers should strive to minimize sampling bias and maximize the representativeness of the sample to enhance the generalizability of their findings.
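As a concrete sketch of one probability sampling technique, here is a proportionate stratified sample in pure Python (the population, the "region" variable, and the 10% sampling fraction are all hypothetical):

```python
import random
from collections import defaultdict

random.seed(7)

# Hypothetical population of 3,000 units with one stratification variable.
population = [{"id": i, "region": random.choice(["north", "south", "west"])}
              for i in range(3000)]

def stratified_sample(pop, key, frac):
    """Proportionate stratified sampling: draw the same fraction from each stratum."""
    strata = defaultdict(list)
    for unit in pop:
        strata[unit[key]].append(unit)
    sample = []
    for units in strata.values():
        sample.extend(random.sample(units, round(len(units) * frac)))
    return sample

sample = stratified_sample(population, "region", 0.10)
# Each region is represented in proportion to its share of the population,
# a guarantee that convenience sampling does not provide.
print(f"sample size: {len(sample)}")
```

Stratifying is most useful when the outcome of interest varies across strata, since it reduces sampling variance relative to a simple random sample of the same size.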

Ethical Considerations

Ethical considerations are paramount in research and involve ensuring the rights, dignity, and well-being of research participants. Researchers must adhere to ethical principles and guidelines established by professional associations and institutional review boards (IRBs).

  • Informed Consent:  Participants should be fully informed about the nature and purpose of the study, potential risks and benefits, their rights as participants, and any confidentiality measures in place. Informed consent should be obtained voluntarily and without coercion.
  • Privacy and Confidentiality:  Researchers should take steps to protect the privacy and confidentiality of participants' personal information. This may involve anonymizing data, securing data storage, and limiting access to identifiable information.
  • Minimizing Harm:  Researchers should mitigate any potential physical, psychological, or social harm to participants. This may involve conducting risk assessments, providing appropriate support services, and debriefing participants after the study.
  • Respect for Participants:  Researchers should respect participants' autonomy, diversity, and cultural values. They should seek to foster a trusting and respectful relationship with participants throughout the research process.
  • Publication and Dissemination:  Researchers have a responsibility to accurately report their findings and acknowledge contributions from participants and collaborators. They should adhere to principles of academic integrity and transparency in disseminating research results.

By addressing ethical considerations in research design and conduct, researchers can uphold the integrity of their work, maintain trust with participants and the broader community, and contribute to the responsible advancement of knowledge in their field.

Causal Research Data Analysis

Once data is collected, it must be analyzed to draw meaningful conclusions and assess causal relationships.

Causal Inference Methods

Causal inference methods are statistical techniques used to identify and quantify causal relationships between variables in observational data. While experimental designs provide the most robust evidence for causality, observational studies often require more sophisticated methods to account for confounding factors.

  • Difference-in-Differences (DiD):  DiD compares changes in outcomes before and after an intervention between a treatment group and a control group, controlling for pre-existing trends. It estimates the average treatment effect by differencing the changes in outcomes between the two groups over time.
  • Instrumental Variables (IV):  IV analysis relies on instrumental variables—variables that affect the treatment variable but not the outcome—to estimate causal effects in the presence of endogeneity. IVs should be correlated with the treatment but uncorrelated with the error term in the outcome equation.
  • Regression Discontinuity (RD):  RD designs exploit naturally occurring thresholds or cutoff points to estimate causal effects near the threshold. Participants just above and below the threshold are compared, assuming that they are similar except for their proximity to the threshold.
  • Propensity Score Matching (PSM):  PSM matches individuals or units based on their propensity scores—the likelihood of receiving the treatment—creating comparable groups with similar observed characteristics. Matching reduces selection bias and allows for causal inference in observational studies.
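The difference-in-differences logic above can be made concrete with a short pure-Python simulation (all means, trends, and the +2.0 treatment effect are invented for illustration): differencing out a time trend shared by both groups recovers the treatment effect.

```python
import random
import statistics

random.seed(3)

def outcomes(mean, n=1000):
    """Simulate n noisy outcome measurements around a given mean."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Hypothetical panel: both groups share a +1.0 time trend; only the treated
# group additionally receives a +2.0 treatment effect after the intervention.
treated_before = outcomes(10.0)
treated_after = outcomes(10.0 + 1.0 + 2.0)
control_before = outcomes(8.0)
control_after = outcomes(8.0 + 1.0)

# DiD: (change in treated group) minus (change in control group).
did = ((statistics.mean(treated_after) - statistics.mean(treated_before))
       - (statistics.mean(control_after) - statistics.mean(control_before)))
print(f"DiD estimate: {did:.2f}")  # close to the true effect of 2.0
```

Note that the estimator leans entirely on the parallel-trends assumption: if the control group's trend had differed from the treated group's counterfactual trend, the estimate would be biased.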

Assessing Causality Strength

Assessing the strength of causality involves determining the magnitude and direction of causal effects between variables. While statistical significance indicates whether an observed relationship is unlikely to occur by chance, it does not necessarily imply a strong or meaningful effect.

  • Effect Size:  Effect size measures the magnitude of the relationship between variables, providing information about the practical significance of the results. Standard effect size measures include Cohen's d for mean differences and odds ratios for categorical outcomes.
  • Confidence Intervals:  Confidence intervals provide a range of values within which the actual effect size is likely to lie with a certain degree of certainty. Narrow confidence intervals indicate greater precision in estimating the true effect size.
  • Practical Significance:  Practical significance considers whether the observed effect is meaningful or relevant in real-world terms. Researchers should interpret results in the context of their field and the implications for stakeholders.
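Cohen's d, mentioned above, is straightforward to compute by hand; here is a pure-Python sketch with made-up group scores:

```python
import statistics

# Hypothetical test scores for two groups (the numbers are made up).
group_a = [72, 75, 78, 80, 83, 85, 88, 90]
group_b = [65, 68, 70, 73, 75, 77, 79, 81]

def cohens_d(a, b):
    """Cohen's d: standardized mean difference using a pooled SD."""
    pooled_var = (((len(a) - 1) * statistics.variance(a)
                   + (len(b) - 1) * statistics.variance(b))
                  / (len(a) + len(b) - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

d = cohens_d(group_a, group_b)
# By convention, d around 0.2 is small, 0.5 medium, and 0.8 or more large.
print(f"Cohen's d = {d:.2f}")
```

Because d is expressed in standard-deviation units, it lets you compare effect magnitudes across studies that used different measurement scales.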

Handling Confounding Variables

Confounding variables are extraneous factors that may distort the observed relationship between the independent and dependent variables, leading to spurious or biased conclusions. Addressing confounding variables is essential for establishing valid causal inferences.

  • Statistical Control:  Statistical control involves including confounding variables as covariates in regression models to partial out their effects on the outcome variable. Controlling for confounders reduces bias and strengthens the validity of causal inferences.
  • Matching:  Matching participants or units based on observed characteristics helps create comparable groups with similar distributions of confounding variables. Matching reduces selection bias and mimics the randomization process in experimental designs.
  • Sensitivity Analysis:  Sensitivity analysis assesses the robustness of study findings to changes in model specifications or assumptions. By varying analytical choices and examining their impact on results, researchers can identify potential sources of bias and evaluate the stability of causal estimates.
  • Subgroup Analysis:  Subgroup analysis explores whether the relationship between variables differs across subgroups defined by specific characteristics. Identifying effect modifiers helps understand the conditions under which causal effects may vary.
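A minimal pure-Python illustration of statistical control (the data-generating process is entirely made up): a binary confounder Z raises both the chance of treatment and the outcome, so the naive treated-vs-untreated contrast overstates the true effect, while stratifying on Z recovers it.

```python
import random
import statistics

random.seed(5)

# Hypothetical data: Z confounds treatment and outcome; true effect is +1.0.
rows = []
for _ in range(20_000):
    z = random.random() < 0.5                        # binary confounder
    treated = random.random() < (0.8 if z else 0.2)  # Z influences treatment
    y = 3.0 * z + 1.0 * treated + random.gauss(0, 1)
    rows.append((z, treated, y))

def mean_y(z=None, treated=None):
    """Mean outcome, optionally restricted by confounder and/or treatment."""
    return statistics.mean(
        y for (zz, t, y) in rows
        if (z is None or zz == z) and (treated is None or t == treated))

# Naive contrast: inflated because treated units disproportionately have Z.
naive = mean_y(treated=True) - mean_y(treated=False)

# Adjusted: compare treated vs. untreated within each stratum of Z, then average.
adjusted = statistics.mean(
    mean_y(z=s, treated=True) - mean_y(z=s, treated=False)
    for s in (False, True))

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Stratification is the simplest form of the same idea that regression adjustment and matching implement at scale; all of them can only remove bias from confounders that were actually measured.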

By employing rigorous causal inference methods, assessing the strength of causality, and addressing confounding variables, researchers can confidently draw valid conclusions about causal relationships in their studies, advancing scientific knowledge and informing evidence-based decision-making.

Causal Research Examples

Examples play a crucial role in understanding the application of causal research methods and their impact across various domains. Let's explore some detailed examples to illustrate how causal research is conducted and its real-world implications:

Example 1: Software as a Service (SaaS) User Retention Analysis

Suppose a SaaS company wants to understand the factors influencing user retention and engagement with their platform. The company conducts a longitudinal observational study, collecting data on user interactions, feature usage, and demographic information over several months.

  • Design:  The company employs an observational cohort study design, tracking cohorts of users over time to observe changes in retention and engagement metrics. They use analytics tools to collect data on user behavior, such as logins, feature usage, session duration, and customer support interactions.
  • Data Collection:  Data is collected from the company's platform logs, customer relationship management (CRM) system, and user surveys. Key metrics include user churn rates, active user counts, feature adoption rates, and Net Promoter Scores (NPS).
  • Analysis:  Using statistical techniques like survival analysis and regression modeling, the company identifies factors associated with user retention, such as feature usage patterns, onboarding experiences, customer support interactions, and subscription plan types.
  • Findings: The analysis reveals that users who engage with specific features early in their lifecycle have higher retention rates, while those who encounter usability issues or lack personalized onboarding experiences are more likely to churn. The company uses these insights to optimize product features, improve onboarding processes, and enhance customer support strategies to increase user retention and satisfaction.
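A toy version of the churn comparison described above, with invented user records, might look like this. It only contrasts raw churn rates between early feature adopters and everyone else; a real analysis would control for confounders (plan type, tenure) with survival or regression models.

```python
# Hypothetical user records: "early_feature_use" flags engagement with a
# key feature during onboarding. All values are invented for the sketch.
users = [
    {"early_feature_use": True,  "churned": False},
    {"early_feature_use": True,  "churned": False},
    {"early_feature_use": True,  "churned": True},
    {"early_feature_use": False, "churned": True},
    {"early_feature_use": False, "churned": True},
    {"early_feature_use": False, "churned": False},
]

def churn_rate(group):
    # True counts as 1 when summed, so this is churned / total
    return sum(u["churned"] for u in group) / len(group)

adopters = [u for u in users if u["early_feature_use"]]
others = [u for u in users if not u["early_feature_use"]]

print(f"churn, early adopters: {churn_rate(adopters):.2f}")  # 0.33
print(f"churn, others:         {churn_rate(others):.2f}")    # 0.67
```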

Example 2: Business Impact of Digital Marketing Campaign

Consider a technology startup launching a digital marketing campaign to promote its new product offering. The company conducts an experimental study to evaluate the effectiveness of different marketing channels in driving website traffic, lead generation, and sales conversions.

  • Design:  The company implements an A/B testing design, randomly assigning website visitors to different marketing treatment conditions, such as Google Ads, social media ads, email campaigns, or content marketing efforts. They track user interactions and conversion events using web analytics tools and marketing automation platforms.
  • Data Collection:  Data is collected on website traffic, click-through rates, conversion rates, lead generation, and sales revenue. The company also gathers demographic information and user feedback through surveys and customer interviews to understand the impact of marketing messages and campaign creatives.
  • Analysis:  Utilizing statistical methods like hypothesis testing and multivariate analysis, the company compares key performance metrics across different marketing channels to assess their effectiveness in driving user engagement and conversion outcomes. They calculate return on investment (ROI) metrics to evaluate the cost-effectiveness of each marketing channel.
  • Findings:  The analysis reveals that social media ads outperform other marketing channels in generating website traffic and lead conversions, while email campaigns are more effective in nurturing leads and driving sales conversions. Armed with these insights, the company allocates marketing budgets strategically, focusing on channels that yield the highest ROI and adjusting messaging and targeting strategies to optimize campaign performance.
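The hypothesis test mentioned in the analysis step can be sketched with a two-proportion z-test. The conversion counts below are invented for illustration: 120 of 1,000 visitors converting on channel A versus 90 of 1,000 on channel B.

```python
import math

# Two-proportion z-test: is the difference in conversion rates between
# two channels larger than chance alone would suggest? Counts are invented.

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z = two_proportion_z(conv_a=120, n_a=1000, conv_b=90, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the two-sided 5% level
```

Here z comes out just above 1.96, so the 12% vs. 9% difference would be declared statistically significant at the 5% level, though, as the article stresses later, statistical significance alone does not establish practical importance.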

These examples demonstrate the diverse applications of causal research methods in addressing important questions, informing policy decisions, and improving outcomes in various fields. By carefully designing studies, collecting relevant data, employing appropriate analysis techniques, and interpreting findings rigorously, researchers can generate valuable insights into causal relationships and contribute to positive social change.

How to Interpret Causal Research Results?

Interpreting and reporting research findings is a crucial step in the scientific process, ensuring that results are accurately communicated and understood by stakeholders.

Interpreting Statistical Significance

Statistical significance indicates whether the observed results are unlikely to occur by chance alone, but it does not necessarily imply practical or substantive importance. Interpreting statistical significance involves understanding the meaning of p-values and confidence intervals and considering their implications for the research findings.

  • P-values:  A p-value represents the probability of obtaining the observed results (or more extreme results) if the null hypothesis is true. A p-value below a predetermined threshold (typically 0.05) suggests that the observed results are statistically significant, indicating that the null hypothesis can be rejected in favor of the alternative hypothesis.
  • Confidence Intervals:  Confidence intervals provide a range of values within which the true population parameter is likely to lie with a certain degree of confidence (e.g., 95%). If the confidence interval does not include the null value, it suggests that the observed effect is statistically significant at the specified confidence level.

Interpreting statistical significance requires considering factors such as sample size, effect size, and the practical relevance of the results rather than relying solely on p-values to draw conclusions.
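The confidence-interval logic above can be made concrete with the simplest large-sample (Wald) interval for a proportion. The numbers are invented: 130 conversions out of 1,000 trials, checked against a hypothetical null value of a 10% rate.

```python
import math

# 95% Wald confidence interval for a proportion: p +/- 1.96 * sqrt(p(1-p)/n).
# A rough large-sample approximation; successes, n, and the 0.10 null value
# are invented for illustration.

def wald_ci(successes, n, z=1.96):
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

low, high = wald_ci(successes=130, n=1000)
print(f"95% CI: ({low:.3f}, {high:.3f})")
print("interval excludes 0.10:", not (low <= 0.10 <= high))
```

Because the interval lies entirely above 0.10, the observed rate is statistically distinguishable from the null value at the 5% level, which matches the rule of thumb in the bullet above.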

Discussing Practical Significance

While statistical significance indicates whether an effect exists, practical significance evaluates the magnitude and meaningfulness of the effect in real-world terms. Discussing practical significance involves considering the relevance of the results to stakeholders and assessing their impact on decision-making and practice.

  • Effect Size:  Effect size measures the magnitude of the observed effect, providing information about its practical importance. Researchers should interpret effect sizes in the context of their field and the scale of measurement (e.g., small, medium, or large effect sizes).
  • Contextual Relevance:  Consider the implications of the results for stakeholders, policymakers, and practitioners. Are the observed effects meaningful in the context of existing knowledge, theory, or practical applications? How do the findings contribute to addressing real-world problems or informing decision-making?

Discussing practical significance helps contextualize research findings and guide their interpretation and application in practice, beyond statistical significance alone.
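One common effect-size measure for a difference between two group means is Cohen's d: the mean difference divided by the pooled standard deviation. The scores below are invented to illustrate the computation.

```python
import math
import statistics

# Cohen's d for two independent groups, using the pooled standard deviation.
# The treatment and control scores are invented for the sketch.

def cohens_d(a, b):
    n_a, n_b = len(a), len(b)
    pooled_sd = math.sqrt(
        ((n_a - 1) * statistics.variance(a) + (n_b - 1) * statistics.variance(b))
        / (n_a + n_b - 2)
    )
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

treatment = [78, 82, 85, 88, 90]
control = [70, 75, 77, 80, 83]

d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")  # conventionally: ~0.2 small, ~0.5 medium, ~0.8 large
```

As the text notes, these conventional cutoffs are only a starting point; what counts as a meaningful effect depends on the field and the scale of measurement.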

Addressing Limitations and Assumptions

No study is without limitations, and researchers should transparently acknowledge and address potential biases, constraints, and uncertainties in their research design and findings.

  • Methodological Limitations:  Identify any limitations in study design, data collection, or analysis that may affect the validity or generalizability of the results. For example, sampling biases, measurement errors, or confounding variables.
  • Assumptions:  Discuss any assumptions made in the research process and their implications for the interpretation of results. Assumptions may relate to statistical models, causal inference methods, or theoretical frameworks underlying the study.
  • Alternative Explanations:  Consider alternative explanations for the observed results and discuss their potential impact on the validity of causal inferences. How robust are the findings to different interpretations or competing hypotheses?

Addressing limitations and assumptions demonstrates transparency and rigor in the research process, allowing readers to critically evaluate the validity and reliability of the findings.

Communicating Findings Clearly

Effectively communicating research findings is essential for disseminating knowledge, informing decision-making, and fostering collaboration and dialogue within the scientific community.

  • Clarity and Accessibility:  Present findings in a clear, concise, and accessible manner, using plain language and avoiding jargon or technical terminology. Organize information logically and use visual aids (e.g., tables, charts, graphs) to enhance understanding.
  • Contextualization:  Provide context for the results by summarizing key findings, highlighting their significance, and relating them to existing literature or theoretical frameworks. Discuss the implications of the findings for theory, practice, and future research directions.
  • Transparency:  Be transparent about the research process, including data collection procedures, analytical methods, and any limitations or uncertainties associated with the findings. Clearly state any conflicts of interest or funding sources that may influence interpretation.

By communicating findings clearly and transparently, researchers can facilitate knowledge exchange, foster trust and credibility, and contribute to evidence-based decision-making.

Causal Research Tips

When conducting causal research, it's essential to approach your study with careful planning, attention to detail, and methodological rigor. Here are some tips to help you navigate the complexities of causal research effectively:

  • Define Clear Research Questions:  Start by clearly defining your research questions and hypotheses. Articulate the causal relationship you aim to investigate and identify the variables involved.
  • Consider Alternative Explanations:  Be mindful of potential confounding variables and alternative explanations for the observed relationships. Take steps to control for confounders and address alternative hypotheses in your analysis.
  • Prioritize Internal Validity:  While external validity is important for generalizability, prioritize internal validity in your study design to ensure that observed effects can be attributed to the manipulation of the independent variable.
  • Use Randomization When Possible:  If feasible, employ randomization in experimental designs to distribute potential confounders evenly across experimental conditions and enhance the validity of causal inferences.
  • Be Transparent About Methods:  Provide detailed descriptions of your research methods, including data collection procedures, analytical techniques, and any assumptions or limitations associated with your study.
  • Utilize Multiple Methods:  Consider using a combination of experimental and observational methods to triangulate findings and strengthen the validity of causal inferences.
  • Be Mindful of Sample Size:  Ensure that your sample size is adequate to detect meaningful effects and minimize the risk of Type I and Type II errors. Conduct power analyses to determine the sample size needed to achieve sufficient statistical power.
  • Validate Measurement Instruments:  Validate your measurement instruments to ensure that they are reliable and valid for assessing the variables of interest in your study. Pilot test your instruments if necessary.
  • Seek Feedback from Peers:  Collaborate with colleagues or seek feedback from peer reviewers to solicit constructive criticism and improve the quality of your research design and analysis.
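The sample-size tip above can be sketched with the standard large-sample approximation for detecting a difference between two proportions. The z-values assume a two-sided 5% alpha (1.96) and 80% power (0.84); the 10% baseline and 13% target rates are invented.

```python
import math

# Per-group sample size for a two-proportion comparison:
# n = (z_alpha + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2
# All rates below are invented; z defaults give 5% alpha and 80% power.

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_group(p1=0.10, p2=0.13))
```

Note how quickly the requirement grows for small effects: halving the detectable difference roughly quadruples the required sample, which is why underpowered studies are such a common pitfall.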

Conclusion for Causal Research

Mastering causal research empowers researchers to unlock the secrets of cause and effect, shedding light on the intricate relationships between variables in diverse fields. By employing rigorous methods such as experimental designs, causal inference techniques, and careful data analysis, you can uncover causal mechanisms, predict outcomes, and inform evidence-based practices. Through the lens of causal research, complex phenomena become more understandable, and interventions become more effective in addressing societal challenges and driving progress. In a world where understanding the reasons behind events is paramount, causal research serves as a beacon of clarity and insight. Armed with the knowledge and techniques outlined in this guide, you can navigate the complexities of causality with confidence, advancing scientific knowledge, guiding policy decisions, and ultimately making meaningful contributions to our understanding of the world.

How to Conduct Causal Research in Minutes?

Introducing Appinio , your gateway to lightning-fast causal research. As a real-time market research platform, we're revolutionizing how companies gain consumer insights to drive data-driven decisions. With Appinio, conducting your own market research is not only easy but also thrilling. Experience the excitement of market research with Appinio, where fast, intuitive, and impactful insights are just a click away.

Here's why you'll love Appinio:

  • Instant Insights:  Say goodbye to waiting days for research results. With our platform, you'll go from questions to insights in minutes, empowering you to make decisions at the speed of business.
  • User-Friendly Interface:  No need for a research degree here! Our intuitive platform is designed for anyone to use, making complex research tasks simple and accessible.
  • Global Reach:  Reach your target audience wherever they are. With access to over 90 countries and the ability to define precise target groups from 1200+ characteristics, you'll gather comprehensive data to inform your decisions.




Introduction to Research Methods in Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.

There are several different research methods in psychology , each of which can help researchers learn more about the way people think, feel, and behave. If you're a psychology student or just want to know the types of research in psychology, here are the main ones as well as how they work.

Three Main Types of Research in Psychology

Psychology research can usually be classified as one of three major types.

1. Causal or Experimental Research

When most people think of scientific experimentation, research on cause and effect is most often brought to mind. Experiments on causal relationships investigate the effect of one or more variables on one or more outcome variables. This type of research also determines if one variable causes another variable to occur or change.

An example of this type of research in psychology would be changing the length of a specific mental health treatment and measuring the effect on study participants.

2. Descriptive Research

Descriptive research seeks to depict what already exists in a group or population. Three types of psychology research utilizing this method are:

  • Case studies
  • Surveys
  • Observational studies

An example of this psychology research method would be an opinion poll to determine which presidential candidate people plan to vote for in the next election. Descriptive studies don't try to measure the effect of a variable; they seek only to describe it.

3. Relational or Correlational Research

A study that investigates the connection between two or more variables is considered relational research. The variables compared are generally already present in the group or population.

For example, a study that looks at the proportion of males and females that would purchase either a classical CD or a jazz CD would be studying the relationship between gender and music preference.

Theory vs. Hypothesis in Psychology Research

People often confuse the terms theory and hypothesis or are not quite sure of the distinctions between the two concepts. If you're a psychology student, it's essential to understand what each term means, how they differ, and how they're used in psychology research.

A theory is a well-established principle that has been developed to explain some aspect of the natural world. A theory arises from repeated observation and testing and incorporates facts, laws, predictions, and tested hypotheses that are widely accepted.

A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, an experiment designed to look at the relationship between study habits and test anxiety might have a hypothesis that states, "We predict that students with better study habits will suffer less test anxiety." Unless your study is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your experiment or research.

While the terms are sometimes used interchangeably in everyday use, the difference between a theory and a hypothesis is important when studying experimental design.

Some other important distinctions to note include:

  • A theory predicts events in general terms, while a hypothesis makes a specific prediction about a specified set of circumstances.
  • A theory has been extensively tested and is generally accepted, while a hypothesis is a speculative guess that has yet to be tested.

The Effect of Time on Research Methods in Psychology

There are two types of time dimensions that can be used in designing a research study:

  • Cross-sectional research takes place at a single point in time. All tests, measures, or variables are administered to participants on one occasion. This type of research seeks to gather data on present conditions instead of looking at the effects of a variable over a period of time.
  • Longitudinal research is a study that takes place over a period of time. Data is first collected at the beginning of the study, and may then be gathered repeatedly throughout the length of the study. Some longitudinal studies may occur over a short period of time, such as a few days, while others may take place over a period of months, years, or even decades.

The effects of aging are often investigated using longitudinal research.

Causal Relationships Between Psychology Research Variables

What do we mean when we talk about a “relationship” between variables? In psychological research, we're referring to a connection between two or more factors that we can measure or systematically vary.

One of the most important distinctions to make when discussing the relationship between variables is the meaning of causation.

A causal relationship is when one variable causes a change in another variable. These types of relationships are investigated by experimental research to determine if changes in one variable actually result in changes in another variable.

Correlational Relationships Between Psychology Research Variables

A correlation is the measurement of the relationship between two variables. These variables already occur in the group or population and are not controlled by the experimenter.

  • A positive correlation is a direct relationship where, as the amount of one variable increases, the amount of a second variable also increases.
  • In a negative correlation , as the amount of one variable goes up, the levels of another variable go down.

In both types of correlation, there is no evidence or proof that changes in one variable cause changes in the other variable. A correlation simply indicates that there is a relationship between the two variables.

The most important concept is that correlation does not equal causation. Many popular media sources make the mistake of assuming that simply because two variables are related, a causal relationship exists.
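A small computation makes the point concrete. The data below are invented (study hours and test scores); the Pearson coefficient summarizes how strongly the variables move together, but even an r close to 1 says nothing about which variable, if either, causes the other.

```python
import math

# Pearson correlation coefficient computed from scratch:
# r = cov(x, y) / (sd(x) * sd(y)). Data is invented for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [2, 4, 5, 7, 9]
test_scores = [60, 70, 72, 80, 88]

r = pearson_r(hours_studied, test_scores)
print(f"r = {r:.3f}")
```

The correlation here is nearly perfect, yet the data alone cannot rule out, say, a third variable (motivation) driving both study hours and scores.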

Experimental Method In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into controlled and experimental groups .

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective. The researcher's views and opinions should not affect a study's results. This is good as it makes the data more valid and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and  Loftus and Palmer’s car crash study .

  • Strength : It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength : They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation : The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation : Demand characteristics or experimenter effects may bias the results and become confounding variables .

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling's hospital study on obedience.

  • Strength : behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation : There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who have been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.

  • Strength : behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength : It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress .
  • Limitation : They may be more expensive and time-consuming than lab experiments.
  • Limitation : There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead participants to think they know what the researcher is looking for (e.g., the experimenter's body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables that are not the independent variable but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
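The allocation procedure described above can be sketched very simply: shuffle the participant pool, then split it in half so each participant has an equal chance of landing in either condition. The participant IDs and the fixed seed below are invented, the seed only making the example reproducible.

```python
import random

# Simple random allocation into two equal-sized conditions.
# Participant IDs and the seed are arbitrary, for illustration only.

def randomly_allocate(participants, seed=None):
    pool = list(participants)
    random.Random(seed).shuffle(pool)  # every ordering equally likely
    half = len(pool) // 2
    return pool[:half], pool[half:]

group_a, group_b = randomly_allocate(range(1, 21), seed=42)
print(len(group_a), len(group_b))  # 10 10
```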

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.


Published: April 2010

Cause and effect

Nature Methods volume  7 ,  page 243 ( 2010 ) Cite this article

The experimental tractability of biological systems makes it possible to explore the idea that causal relationships can be estimated from observational data.

“Happy is he who is able to know the causes of things.”— Virgil

The idea that one needs to do an experiment—a controlled perturbation of a single variable—to assign cause and effect is deeply embedded in traditional thinking about the way scientific knowledge is obtained, but it is largely absent from everyday life. One knows, without doing an experiment, that the street is wet on a rainy day because the rain has fallen. To be sure, this form of causal reasoning requires prior knowledge. One has seen the co-occurrence of rain and the wet street many times and been taught that rain causes wetness. And although such relationships are, in the strict sense, merely very good correlations, human beings routinely, and necessarily, use them to assign cause and effect.

As discussed on this page a year ago, this form of thinking, at least as a starting point for hypothesis-making, is in practice not uncommon in scientific research as well. Even before our data-driven age, a testable idea often began with an observation. When the structure of a voltage-gated potassium channel was first solved, for instance, the physical basis for potassium selectivity was suggested from observing the disposition of the residues known to allow potassium, but not sodium, ions to pass. In another example a century or so earlier, Ramón y Cajal famously predicted many features of the operation of the nervous system, including the directionality of neuronal signaling, based on his observations of the organization of neurons in the brain. Experiments had to be designed to test these ideas, but the hypotheses about cause and effect were generated at least in part by observation.

Many areas of contemporary biology seek to learn causal relationships from biological data. In systems biology, for instance, researchers use measurements of gene expression, cellular protein amounts or metabolite levels, among other types of data, to assign causal or regulatory relationships in models describing the cell. In the context of large-scale systems data, it is usually not possible to assign such relationships just by looking at the data by eye. Statistical and visualization tools are needed, when, for example, one is looking at lists of expression data of thousands of genes and trying to determine which genes regulate what other genes. The methods used to assign causal arrows typically involve perturbation experiments. When unperturbed data are used, additional information such as change over time or prior biological knowledge has been used to order the data.

A Correspondence by Maathuis and colleagues published in this issue (p. 247), in contrast, explores the notion that it might be possible to estimate causal relationships simply by observing random variation in unperturbed data, with no other information added. Making use of gene expression data obtained either from single gene knockouts in yeast—a classical perturbation experiment—or from parallel control measurements on wild-type yeast, an unperturbed system in which there is presumably only random variation, the authors report that, under some assumptions, statistical analysis can be used to predict the strongest causal effects from the control data alone.

The idea that such prediction is theoretically possible is not in itself new and has received some interest in, among others, the social scientific, economic and medical spheres. But it is an idea that is not easy to test in a real-world setting. In a sense, then, the study in this issue exploits the unique properties of biological systems—their complexity, the availability of good tools for precise and ethical system manipulation, and the well-developed technology for acquiring large-scale unbiased data—to test an idea that could have interest and value outside the biological realm as well.

It is worth noting that the assumptions made—in its current iteration, the approach by Maathuis and colleagues provides no allowance for feedback and does not incorporate change over time—could pose serious obstacles for understanding biological as well as other systems. What is more, statistical inference will clearly not replace perturbation experiments in systems that are amenable to manipulation.

Nonetheless, causal inference from purely observed data could have practical value in the prioritization and design of perturbation experiments. Perturbations can be impossible (for instance, when the available tools are not specific enough), unethical (as in many human studies), or simply unfeasible owing to cost or impracticality. Observational data could be used to identify candidate causal relationships, which could then be the basis for the design of targeted perturbations or for further analysis.

Cause and effect. Nat Methods 7, 243 (2010). https://doi.org/10.1038/nmeth0410-243

Child Care and Early Education Research Connections

Causal Study Design

Researchers conduct experiments to study cause and effect relationships and to estimate the impact of child care and early childhood programs on children and their families. There are two basic types of experiments:

Randomized experiments

Quasi-experiments

An experiment is a study in which the researcher manipulates the treatment, or intervention, and then measures the outcome. It addresses the question "if we change X (the treatment or intervention), what happens to Y (the outcome)?" Conducted both in the laboratory and in real-life situations, experiments are powerful techniques for evaluating cause-and-effect relationships. The researcher may manipulate whether research subjects receive a treatment (e.g., attendance in a Head Start program: yes or no) or the level of treatment (e.g., hours per day in the program).

Suppose, for example, a group of researchers was interested in the effect of government-funded child care subsidies on maternal employment. They might hypothesize that the provision of government-subsidized child care would promote such employment. They could then design an experiment in which some mothers would be provided the option of government-funded child care subsidies and others would not. The researchers might also manipulate the value of the child care subsidies in order to determine if higher subsidy values might result in different levels of maternal employment.

The group of participants that receives the intervention or treatment is known as the "treatment group," and the group that does not is known as the “control group” in randomized experiments and “comparison group” in quasi-experiments.

The key distinction between the two is that in a randomized experiment, participants are randomly assigned to either the treatment or the control group, whereas in a quasi-experiment they are not.

Randomized Experiments

Random assignment ensures that all participants have the same chance of being placed in a given experimental condition. Randomized experiments (also known as randomized controlled trials, or RCTs) are considered the most rigorous approach to identifying causal effects, the "gold standard," because randomization theoretically eliminates all preexisting differences between the treatment and control groups. Some differences may still arise by chance, however, so in practice researchers often control for observed characteristics that might differ between individuals in the treatment and control groups when estimating treatment effects. The use of control variables improves the precision of treatment effect estimates.
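The logic above can be sketched with simulated data (all numbers are invented for illustration): a known treatment effect is estimated once as a raw difference in group means and once as a regression coefficient that controls for a baseline covariate. Both estimates are unbiased under randomization; the adjusted one is less noisy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated RCT: random assignment makes treatment independent of
# everything that existed before the experiment.
treat = rng.integers(0, 2, size=n)              # 0 = control, 1 = treatment
baseline = rng.normal(50.0, 10.0, size=n)       # observed pre-treatment covariate
true_effect = 5.0
outcome = baseline + true_effect * treat + rng.normal(0.0, 5.0, size=n)

# Unadjusted estimate: a simple difference in group means.
unadjusted = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Adjusted estimate: regress the outcome on treatment and the baseline
# covariate; the treatment coefficient is a more precise estimate.
X = np.column_stack([np.ones(n), treat, baseline])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = coef[1]

print(round(unadjusted, 2), round(adjusted, 2))  # both near the true 5.0
```

The adjusted estimate varies far less across repeated samples because the baseline covariate absorbs much of the outcome variance, which is exactly why researchers add control variables even when randomization already guarantees unbiasedness.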

Cluster-randomized experiments

Despite being the "gold standard" in causal study design, randomized experiments are uncommon in social science research because it is often impossible or unethical to randomize individuals to experimental conditions. Cluster-randomized experiments, in which groups (e.g., schools or classes) rather than individuals are randomized, tend to raise fewer ethical objections and are therefore more feasible in practice. They also prevent treatment spillover to the control group. For example, if students in the same class were randomly assigned to either the treatment or control group, with the treatment being a new curriculum, teachers might introduce features of the new curriculum when working with students in the control group in ways that could affect the outcomes.

One drawback of cluster-randomized experiments is a reduction in statistical power. That is, the likelihood that a true effect is detected is reduced with this design.

Quasi-Experiments

Quasi-experiments are characterized by the lack of randomized assignment. They may or may not have comparison groups. When there are both comparison and treatment groups in a quasi-experiment, the groups differ not only in terms of the experimental treatment they receive, but also in other, often unknown or unknowable, ways. As a result, there may be several "rival hypotheses" competing with the experimental manipulation as explanations for observed results.

There are a variety of quasi-experimental designs. Below are some of the most common types in social and policy research, arranged from weakest to strongest in their ability to address threats to the claim that the relationship between the treatment and the outcome of interest is causal.

One group only

One-group pretest-posttest: A single group that receives the treatment is observed at two time points, one before the treatment and one after the treatment. Changes in the outcome of interest are presumed to be the effect of the treatment. For example, a new fourth grade math curriculum is introduced and students' math achievement is assessed in the fall and spring of the school year. Improved scores on the assessment are attributed to the curriculum. The biggest weakness of this design is that a number of events can happen around the time of the treatment and influence the outcome. There can be multiple plausible alternative explanations for the observed results.

Interrupted time series: A single group that receives the treatment is observed at multiple time points both before and after the treatment. A change in the trend around the time of the treatment is presumed to be the treatment effect. For example, individuals participating in an exercise program might be weighed each week before and after a new exercise routine is introduced. A downward trend in their weight around the time the new routine was introduced would be seen as evidence of the effectiveness of the treatment. This design is stronger than the one-group pretest-posttest because it shows the trend in the outcome variable both before and after the treatment rather than a simple two-point-in-time comparison. However, it still suffers from the same weakness: other events can occur at the time of the treatment and serve as alternative causes of the observed outcome.

Static-group comparison: A group that has experienced some treatment is compared with one that has not. Observed differences between the two groups are assumed to be the result of the treatment. For example, fourth graders in some classrooms in a school district are introduced to a new math curriculum while fourth graders in other classrooms in the district are not. Differences in the math scores of the two groups, assessed in the spring of the school year only, are assumed to be the result of the new curriculum. The weakness of this design is that the treatment and comparison groups may not be truly comparable: because participants are not randomly assigned, there may be important differences in the characteristics and experiences of the groups, only some of which may be known. If the two groups differ in ways that affect the outcome of interest, the causal claim cannot be supported.

Difference-in-differences: Both the treatment and comparison groups are measured before and after the treatment, and the difference between the two before-after changes is presumed to be the treatment effect. This design improves on the static-group comparison because it compares changes measured before and after the treatment is introduced rather than two post-treatment outcomes. For example, the fourth graders in the prior example are assessed in both the fall (pre-treatment) and the spring (post-treatment), and differences in the fall-to-spring score changes between the two fourth grade groups are seen as evidence of the effect of the curriculum. Because pre-existing differences are differenced out, the treatment and comparison groups do not have to be perfectly comparable. The biggest challenge for the researcher is to defend the parallel trend assumption, namely that in the absence of the treatment, the change in the treatment group would have been the same as the change in the comparison group.
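The difference-in-differences arithmetic can be made concrete with the fourth-grade example (the scores below are hypothetical, invented purely for illustration):

```python
# Hypothetical mean math scores (illustrative numbers only).
treat_pre, treat_post = 62.0, 74.0   # classrooms using the new curriculum
comp_pre, comp_post = 65.0, 71.0     # classrooms using the old curriculum

# Each group's own fall-to-spring change, then the difference
# between the two changes.
treat_change = treat_post - treat_pre    # 12.0
comp_change = comp_post - comp_pre       # 6.0
did_estimate = treat_change - comp_change

print(did_estimate)  # 6.0
```

Under the parallel trend assumption, the comparison group's 6-point change stands in for what the treatment group would have gained anyway, so the remaining 6 points are attributed to the curriculum.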

Regression discontinuity: Participants are assigned to experimental conditions based on whether their scores are above or below a cut point on a quantitative variable. For example, students who score below 75 on a math test are assigned to the treatment group, with the treatment being an intensive tutoring program. Those who score at or above 75 are assigned to the comparison group. The students who score just above or below the cut point are considered to be on average identical because their score differences are most likely due to chance. These students therefore act as if they were randomly assigned. The difference in the outcome of interest (e.g., math ability as measured by a different test after the treatment) between the students right around the cut point is presumed to be the treatment effect.

Regression discontinuity is an alternative to randomized experiments when randomization is not possible. It is the only quasi-experimental design recognized by the Institute of Education Sciences standards as capable of establishing causal effects. Although considered a strong quasi-experimental design, it must meet certain conditions, for example, that participants cannot precisely manipulate their scores around the cut point.
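A minimal sketch of the regression discontinuity logic, using noiseless made-up data so the jump at the cut point is exact: outcomes improve smoothly with the pretest score, tutoring (given below a cut of 75) adds a fixed amount, and fitting a line on each side of the cut recovers that amount as the gap between the two fitted values at the cut point.

```python
import numpy as np

cut = 75
scores = np.arange(50, 100, dtype=float)   # running variable: pretest score
treated = scores < cut                      # tutoring only below the cut
true_jump = 8.0

# Outcome rises smoothly with ability; tutoring adds a discontinuous jump.
outcome = 0.5 * scores + true_jump * treated

def fit_at_cut(x, y):
    """Fit a straight line and evaluate it at the cut point."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope * cut + intercept

# Local linear fits within a small bandwidth on each side of the cut.
below = (scores >= cut - 5) & (scores < cut)
above = (scores >= cut) & (scores <= cut + 5)
estimate = fit_at_cut(scores[below], outcome[below]) - fit_at_cut(scores[above], outcome[above])

print(round(estimate, 2))  # 8.0
```

With real, noisy data the estimate would carry sampling error and the choice of bandwidth would matter, but the identifying idea is the same: the smooth trend is removed by the side-specific fits, leaving only the discontinuity.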

See the following for additional information on randomized and quasi-experimental designs.

  • The Core Analytics of Randomized Experiments for Social Research  (PDF)
  • Experimental and Quasi-Experimental Designs for Research  (PDF)
  • Experimental and Quasi-Experimental Designs for Generalized Causal Inference  (PDF)

Instrumental Variables (IV) Approach

An instrumental variable is a variable that is correlated with the independent variable of interest and only affects the dependent variable through that independent variable. The IV approach can be used in both randomized experiments and quasi-experiments.

In randomized experiments, the IV approach is used to estimate the effect of treatment receipt, which is different from treatment offer. Many social programs can only offer participants the treatment, or intervention, but cannot mandate that they use it. For example, parents are randomly assigned by way of lottery to a school voucher program. Those in the treatment group are offered vouchers to help pay for private school, but ultimately it is up to the parents to decide whether or not they will use the vouchers. If the researcher is interested in estimating the impact of voucher usage, namely the effect of treatment receipt, the IV approach is one way to do so. In this case, the IV is the treatment assignment status (e.g., a dummy variable with 1 being in the treatment group and 0 being in the control group), which is used to predict the probability of a parent using the voucher; that predicted probability is in turn used as the independent variable of interest to estimate the effect of voucher usage.

In quasi-experiments, the IV approach is used to address the issue of endogeneity, namely that the treatment status is determined by participants themselves (self-selection) or by criteria established by the program designer (treatment selection). Endogeneity plagues quasi-experiments and is often a source of threats to the causal claim. The IV approach can be used to tease out the causal impact of an endogenous variable on the outcome. For example, researchers used cigarette taxes as an instrumental variable to estimate the effect of maternal smoking on birth outcomes (Evans and Ringel, 1999). Cigarette taxes affect how much pregnant mothers smoke but do not affect birth outcomes directly. They therefore meet the conditions for an IV: they correlate with the independent variable/treatment (i.e., maternal smoking) and affect the dependent variable (i.e., birth outcomes) only through that independent variable. The estimated effect is, strictly speaking, a local average treatment effect, namely the effect of the treatment (maternal smoking) among those mothers affected by the IV (cigarette taxes). It does not cover mothers whose smoking is not affected by the price of cigarettes (e.g., chain smokers who may be addicted to nicotine).

An instrumental variable needs to meet certain conditions (relevance and the exclusion restriction) to provide a consistent estimate of a causal effect.
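The IV logic can be sketched with simulated data (all numbers invented). A hypothetical lottery Z randomly offers vouchers; actual usage depends on both the offer and an unobserved confounder; and the Wald estimator, the reduced-form difference divided by the first-stage difference, recovers the causal effect of usage where a naive comparison of users and non-users is biased.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Z: random voucher offer (the instrument). U: unobserved motivation,
# which drives both voucher use and the outcome (the endogeneity problem).
Z = rng.integers(0, 2, size=n)
U = rng.normal(0.0, 1.0, size=n)
use = ((0.8 * Z + 0.3 * U + rng.normal(0.0, 1.0, size=n)) > 0.5).astype(float)

true_effect = 2.0
Y = true_effect * use + 1.5 * U + rng.normal(0.0, 1.0, size=n)

# Naive comparison of users vs. non-users is biased upward by U.
naive = Y[use == 1].mean() - Y[use == 0].mean()

# Wald/IV estimate: effect of the offer on Y (reduced form), scaled by
# the effect of the offer on usage (the first stage).
first_stage = use[Z == 1].mean() - use[Z == 0].mean()
reduced_form = Y[Z == 1].mean() - Y[Z == 0].mean()
iv_estimate = reduced_form / first_stage

print(round(naive, 2), round(iv_estimate, 2))  # naive overstates; IV near 2.0
```

Because the offer is randomized and influences the outcome only through usage, it satisfies relevance and the exclusion restriction; the IV estimate is then a local average treatment effect for those whose usage responds to the offer.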

See the following for additional information on instrumental variables.

  • An introduction to instrumental variable assumptions, validation and estimation
  • An Introduction to Instrumental Variables  (PDF)

Validity of Results from Causal Designs

Experiments are judged on two types of validity, internal and external, and it is often difficult to achieve both in social science research.

  • Internal validity refers to the strength of evidence of a causal relationship between the treatment (e.g., child care subsidies) and the outcome (e.g., maternal employment).
  • When subjects are randomly assigned to treatment or control groups, we can assume that the treatment caused the observed outcomes because the two groups should not have differed from one another at the start of the experiment.
  • For example, take the child care subsidy example above. Since research subjects were randomly assigned to the treatment (child care subsidies available) and control (no child care subsidies available) groups, the two groups should not have differed at the outset of the study. If, after the intervention, mothers in the treatment group were more likely to be working, we can assume that the availability of child care subsidies promoted maternal employment.

One potential threat to internal validity in experiments arises when participants drop out of or refuse to participate in the study. If individuals with particular characteristics drop out or refuse more often than individuals with other characteristics, this is called differential attrition. For example, suppose an experiment was conducted to assess the effects of a new reading curriculum on the reading achievement of 10th graders. Schools were randomly assigned to use the new curriculum in all classrooms (treatment schools) or to continue using their current curriculum (control schools). If many of the slowest readers in treatment schools left the study before it was completed (e.g., dropped out of school or transferred to a school in another state), schools with the new curriculum would show an increase in average reading scores. The increase, however, would occur because the weakest readers left those schools, not because the new curriculum improved students' reading skills. The effect of the curriculum on the achievement of 10th graders might therefore be overestimated if the control schools did not experience the same type of attrition.
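The bias from differential attrition can be illustrated with a small simulation (all numbers invented): every treated student genuinely gains 3 points, but when the weakest treated readers leave before the final assessment, the observed gap between groups overstates that effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

true_effect = 3.0
ability = rng.normal(100.0, 15.0, size=n)    # treatment-school students
control = rng.normal(100.0, 15.0, size=n)    # control-school students

# Every treated student genuinely gains 3 points from the curriculum.
treated_scores = ability + true_effect

# Differential attrition: the weakest 10% of treated readers leave
# before the spring assessment, so only the stayers are observed.
stayers = treated_scores > np.quantile(treated_scores, 0.10)
observed_gap = treated_scores[stayers].mean() - control.mean()

print(round(observed_gap, 1))  # well above the true 3-point effect
```

Dropping the bottom tail of one group mechanically raises that group's observed mean, so the naive comparison attributes to the curriculum a gain that is partly a selection artifact.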

  • External validity, or generalizability, is also of particular concern in social science experiments.
  • It can be very difficult to generalize experimental results to groups that were not included in the study.
  • Studies that randomly select participants from the most diverse and representative populations are more likely to have external validity.

For example, a study shows that a new curriculum improved reading comprehension of third-grade children in Iowa. To assess the study's external validity, the researcher would consider whether this new curriculum would also be effective with third graders in New York or with children in other elementary grades.

Advantages and Disadvantages of Experimental and Quasi-Experimental Designs

Randomized experiments

Advantages:

  • Yield the most accurate assessment of cause and effect.
  • Typically have strong internal validity.
  • Ensure that the treatment and control groups are truly comparable and that treatment status is not determined by participant characteristics that might influence the outcome.

Disadvantages:

  • In social policy research, it can be impractical or unethical to conduct randomized experiments.
  • They typically have limited external validity because they often rely on volunteers and are implemented in a somewhat artificial experimental setting with a small number of participants.
  • Despite being the "gold standard" for identifying causal impacts, they can still face threats to internal validity such as attrition, contamination, cross-overs, and Hawthorne effects.

Quasi-experiments

Advantages:

  • Often have stronger external validity than randomized experiments because they are typically implemented in real-world settings and on a larger scale.
  • May be more feasible because they have fewer of the time and logistical constraints often associated with randomized experiments.
  • Avoid the ethical concerns associated with random assignment.
  • Are often less expensive than randomized experiments.

Disadvantages:

  • They often have weaker internal validity than randomized experiments.
  • The lack of randomized assignment means that the treatment and comparison groups may not be comparable and that treatment status may be driven by participant characteristics or other experiences that might influence the outcome.
  • Conclusions about causality are less definitive than in randomized experiments owing to the lack of randomization and reduced internal validity.
  • Despite their weaker internal validity, they are often the best option available when it is impractical or unethical to conduct a randomized experiment.
A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences

3 Causes-of-Effects versus Effects-of-Causes

Published: September 2012

This chapter examines two approaches used in social science research: the “causes-of-effects” approach and the “effects-of-causes” approach. The quantitative and qualitative cultures differ in the extent to which and the ways in which they address causes-of-effects and effects-of-causes questions. Quantitative scholars, who favor the effects-of-causes approach, focus on estimating the average effects of particular variables within populations or samples. By contrast, qualitative scholars employ individual case analysis to explain outcomes as well as the effects of particular causal factors. The chapter first considers the type of research question addressed by both quantitative and qualitative researchers before discussing the use of within-case analysis by the latter to investigate individual cases versus cross-case analysis by the former to elucidate central tendencies in populations. It also describes the complementarities between qualitative and quantitative research that make mixed-method research possible.


How to Use Surveys in Cause-Effect Research

Summary: To understand cause and effect, we need to conduct experiments. Experiments may include surveys as a data collection method, but surveys by themselves can't provide the answer.

3 minutes to read. By author Michaela Mora on July 29, 2019 Topics: Analysis Techniques , Market Research , Survey Design

Cause-effect research requires special research designs. Yet many assume that surveys can uncover cause-and-effect links, with little consideration of the design requirements.

Not long ago I got a call from a potential client asking for research to determine why a recent marketing campaign failed to increase sales, despite a significant increase in awareness.

He had conducted an advertising awareness survey, and the results showed that many in the target audience had noted the advertising and gave it high ratings, but didn’t make a purchase.

He couldn't pinpoint any particular cause; all possible explanations were mere speculation.

The main problem was that he looked for evidence of a cause-effect link, but the research design was not appropriate for that.

Cause-Effect Research = Experiment

The main method for cause-effect research is experimentation. In experiment-based research, we manipulate the causal or independent variables in a relatively controlled environment. This means that we control and monitor, as much as possible, the other variables affecting the dependent variable (e.g., sales).

In this case, the client had conducted the survey and analyzed the data without taking into account the effectiveness of different marketing collaterals, market penetration, competitor activity, and some characteristics of the purchase decision-makers.

After doing some digging around, we uncovered that in some markets, competitors had launched high-frequency advertising campaigns. This helped the client indirectly by increasing category awareness, but not his sales.

Moreover, the program targeted recent buyers who probably didn’t have a need for his products at that particular moment.

Marketing Experiments

Surveys that are not part of an experimental approach may show correlations, but not causality.

To really connect the dots between cause and effect, we needed to create an experiment. This would include different renditions of the marketing collaterals, different markets, customers at different stages in the purchase cycle, and actions taken by competitors.

Experimentation in marketing has traditionally taken the form of standard test markets. In this approach, you launch controlled advertising in designated markets and sell the product through regular distribution channels.

However, these tests can be time-consuming, are often expensive, and may be difficult to administer.

Market Simulations

Simulated test markets are a more affordable solution. In this approach, we expose individuals to the product or concept (e.g. via actual marketing collaterals), and give them the opportunity to buy it. If they buy it, we ask them to evaluate the product and state their repeat purchase intent.

We can then combine trial and repeat estimates with data about promotions, distribution levels, competitor activity, and other relevant pieces of information.
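The trial-and-repeat arithmetic behind a simulated test market can be sketched in a few lines of Python. The function name and every input below are hypothetical illustrations of the decomposition, not figures or a model from any actual study.

```python
# Hypothetical sketch of a simulated-test-market volume forecast.
# The decomposition (awareness x distribution x trial, plus repeat
# purchases) is generic; all numbers below are made up for illustration.

def forecast_first_year_volume(population, awareness, distribution,
                               trial_rate, repeat_rate, repeats_per_repeater):
    """Combine trial and repeat estimates into a volume forecast."""
    triers = population * awareness * distribution * trial_rate
    trial_volume = triers  # assume one unit bought per trier
    repeat_volume = triers * repeat_rate * repeats_per_repeater
    return trial_volume + repeat_volume

# 1M target buyers, 40% aware, 70% distribution, 30% of those try,
# and half of the triers buy 3 more units each.
volume = forecast_first_year_volume(1_000_000, 0.40, 0.70, 0.30, 0.50, 3)
print(round(volume))  # 84,000 trial units + 126,000 repeat units = 210,000
```

In practice each input would come from the study itself (trial and repeat-intent measures) or from market data (distribution, planned awareness levels), which is exactly the combination step described above.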

User Research (UX)

Another experimentation channel is the popular freemium model, which mimics this process to some extent. The basic principle is to let people try the product and observe what decision they make.

After this, we can follow up with research to understand what drove their decision while controlling for other variables that may affect the outcome. This approach goes deeper into user research.

Experiments are the Answer

In short, if you want to understand cause and effect, you need to conduct experiments.

Experiments may include surveys as a data collection method, but surveys in themselves can’t provide the answer. It is the experimental design that will lead you to it.

(An earlier version of this article was originally published on January 18, 2012. The article was last updated and revised on July 29, 2019.)

Establishing Cause and Effect

A central goal of most research is the identification of causal relationships, or demonstrating that a particular independent variable (the cause) has an effect on the dependent variable of interest (the effect).  The three criteria for establishing cause and effect – association, time ordering (or temporal precedence), and non-spuriousness – are familiar to most researchers from courses in research methods or statistics.  While the classic examples used to illustrate these criteria may imply that establishing cause and effect is straightforward, it is often one of the most challenging aspects of designing research studies for implementation in real world conditions.

The first step in establishing causality is demonstrating association: simply put, is there a relationship between the independent variable and the dependent variable?  If both variables are numeric, this can be established by looking at the correlation between the two to determine if they appear to covary.  A common example is the relationship between education and income: in general, individuals with more years of education also tend to earn higher incomes.  Cross tabulation, which cross-classifies the distributions of two categorical variables, can also be used to examine association.  For example, we may observe that 60% of Protestants support the death penalty while only 35% of Catholics do so, establishing an association between denomination and attitudes toward capital punishment.  There is ongoing debate about just how closely associated variables must be to support a causal claim, but in general researchers are more concerned with the statistical significance of an association (whether it is likely to exist in the population) than with its actual strength.
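As a concrete illustration of both techniques, here is a short pure-Python sketch: a Pearson correlation for two numeric variables and a cross tabulation for two categorical ones. The data are invented to mirror the education/income and denomination examples above.

```python
# Checking for association two ways (toy data invented for illustration).

import math
from collections import Counter

def pearson_r(x, y):
    """Pearson correlation coefficient between two numeric variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Numeric variables: years of education vs. income ($1000s).
years = [10, 12, 12, 14, 16, 16, 18, 20]
income = [28, 34, 31, 45, 60, 58, 75, 90]
print(f"r = {pearson_r(years, income):.2f}")  # close to +1: strong association

# Categorical variables: cross-tabulate denomination by death-penalty view.
rows = ([("Protestant", "support")] * 60 + [("Protestant", "oppose")] * 40
        + [("Catholic", "support")] * 35 + [("Catholic", "oppose")] * 65)
table = Counter(rows)
for (group, view), count in sorted(table.items()):
    print(f"{group:>10} {view:>8}: {count}")
```

A researcher would typically follow the cross tabulation with a chi-square test to judge whether the association is statistically significant rather than a quirk of the sample.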

Once an association has been established, our attention turns to determining the time order of the variables of interest.  In order for the independent variable to cause the dependent variable, logic dictates that the independent variable must occur first in time; in short, the cause must come before the effect.  This time ordering is easy to ensure in an experimental design where the researcher carefully controls exposure to the treatment (which would be the independent variable) and then measures the outcome of interest (the dependent variable).  In cross-sectional designs the time ordering can be much more difficult to determine, especially when the relationship between variables could reasonably go in the opposite direction.  For example, although education usually precedes income, it is possible that individuals who are making a good living may finally have the money necessary to return to school.  Determining time ordering thus may involve using logic, existing research, and common sense when a controlled experimental design is not possible.  In any case, researchers must be very careful about specifying the hypothesized direction of the relationship between the variables and provide evidence (either theoretical or empirical) to support their claim.

The third criterion for causality is also the most troublesome, as it requires that alternative explanations for the observed relationship between two variables be ruled out.  This is termed non-spuriousness, which simply means “not false.”  A spurious or false relationship exists when what appears to be an association between the two variables is actually caused by a third extraneous variable.  Classic examples of spuriousness include the relationship between children’s shoe sizes and their academic knowledge: as shoe size increases so does knowledge, but of course both are also strongly related to age.  Another well-known example is the relationship between the number of fire fighters that respond to a fire and the amount of damage that results – clearly, the size of the fire determines both, so it is inaccurate to say that more fire fighters cause greater damage.  Though these examples seem straightforward, researchers in the fields of psychology, education, and the social sciences often face much greater challenges in ruling out spurious relationships simply because there are so many other factors that might influence the relationship between two variables.  Appropriate study design (using experimental procedures whenever possible), careful data collection and use of statistical controls, and triangulation of many data sources are all essential when seeking to establish non-spurious relationships between variables.
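The shoe-size example lends itself to a small simulation. In this sketch (synthetic data, generated purely for demonstration), shoe size and test score both depend on age, so they correlate strongly overall; stratifying by age, a simple form of statistical control, makes the association all but disappear.

```python
# Demonstrating a spurious correlation, then controlling for the third
# variable by stratification. All data are synthetic.

import math
import random

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
ages = [random.choice([6, 8, 10, 12]) for _ in range(400)]
shoe = [a * 1.5 + random.gauss(0, 1) for a in ages]   # driven by age
score = [a * 5.0 + random.gauss(0, 5) for a in ages]  # driven by age

overall = pearson_r(shoe, score)

within = []
for g in (6, 8, 10, 12):  # hold age constant within each stratum
    xs = [s for s, a in zip(shoe, ages) if a == g]
    ys = [s for s, a in zip(score, ages) if a == g]
    within.append(pearson_r(xs, ys))

print(f"overall r = {overall:.2f}")                    # strongly positive
print("within-age r:", [round(r, 2) for r in within])  # each near zero
```

The same logic underlies regression-based statistical controls: once the extraneous variable is held fixed, a spurious association should shrink toward zero.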

Ch 2: Psychological Research Methods

Figure: Children sit in front of a bank of television screens; a sign on the wall reads, "Some content may not be suitable for children."

Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

The topic of violence in the media today is contentious. Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of quickly changing technologies, questions about the effects of media continue to emerge. Is it okay to talk on a cell phone while driving? Are headphones good to use in a car? What impact does text messaging have on reaction time while driving? These are types of questions that psychologist David Strayer asks in his lab.

Watch this short video to see how Strayer utilizes the scientific method to reach important conclusions regarding technology and driving safety.

You can view the transcript for "Understanding driver distraction" here (opens in new window).

How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.

Introduction to the Scientific Method

Learning Objectives

  • Explain the steps of the scientific method
  • Describe why the scientific method is important to psychology
  • Summarize the processes of informed consent and debriefing
  • Explain how research involving humans or animals is regulated

Figure: A photograph of the word "research" in a dictionary, with a pen pointing at the word.

Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives. In this section, you’ll see how psychologists use the scientific method to study and understand behavior.

The Scientific Process

Figure: A skull with a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see the behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Process of Scientific Research

Figure: Flowchart of the scientific method. It begins with making an observation, then asking a question, forming a hypothesis that answers the question, making a prediction based on the hypothesis, doing an experiment to test the prediction, analyzing the results, determining whether the hypothesis is supported, and reporting the results.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the hypothesis is true, find more evidence or find counter-evidence
  • If the hypothesis is false, create a new hypothesis or try again
  • Draw conclusions and repeat: the scientific method is never-ending, and no result is ever considered final

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.

Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests.

Figure: A diagram with four boxes: the top is labeled "theory," the right "hypothesis," the bottom "research," and the left "observation." Arrows flow clockwise from top to right to bottom to left and back to the top. The top right arrow is labeled "use the hypothesis to form a theory," the bottom right arrow "design a study to test the hypothesis," the bottom left arrow "perform the research," and the top left arrow "create or modify the theory."

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can, in principle, be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or run an experiment that would show the hypothesis to be unsupported. Falsifiability does not require that a hypothesis actually be shown false, only that it could be tested in a way that might fail; a hypothesis that survives such testing may still be disproved by future tests.

To determine whether a hypothesis is supported or not, psychological researchers conduct hypothesis testing using statistics. Hypothesis testing is a set of statistical procedures for assessing whether an observed result would be likely to arise by chance alone. If hypothesis testing reveals that results were "statistically significant," the hypothesis is supported and the researchers can be reasonably confident that the result was not due to random chance. If the results are not statistically significant, the hypothesis was not supported.
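One simple way to see what a significance test does is a permutation test: if the treatment had no effect, how often would shuffling the group labels produce a difference as large as the one observed? This is a generic sketch with invented scores, not any specific study's analysis.

```python
# Minimal permutation test of a difference in group means (toy data).

import random

def permutation_p_value(group_a, group_b, n_perms=10_000, seed=42):
    """Share of label shuffles with a mean difference >= the observed one."""
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_perms

treated = [88, 92, 85, 91, 87, 90, 93, 86]  # hypothetical test scores
control = [80, 78, 83, 79, 81, 77, 84, 82]
p = permutation_p_value(treated, control)
print(f"p = {p:.4f}")  # far below 0.05: unlikely to be chance alone
```

A small p-value is exactly what "statistically significant" summarizes: random relabeling almost never reproduces a difference this large.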

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had lots of interesting ideas to explain various human behaviors (Figure 5). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and they informed many later forms of therapy.

Figure: (a) A photograph of Freud holding a cigar. (b) The mind’s conscious and unconscious states illustrated as an iceberg floating in water: beneath the surface, in the "unconscious" area, are the id, ego, and superego; just below the surface is the "preconscious"; above the surface is the "conscious."

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Why the Scientific Method Is Important for Psychology

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in "APA style") are structured around the scientific method. These papers include an Introduction, which presents the background information and outlines the hypotheses; a Methods section, which details how the experiment was conducted to test the hypothesis; a Results section, which presents the statistics used to test the hypothesis and states whether it was supported; and a Discussion and Conclusion, which spell out the implications of finding support, or no support, for the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.

Ethics in Research

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the Tuskegee Syphilis Study, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB). The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members (Figure 6). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned above in mind, and generally, approval from the IRB is required in order for the experiment to proceed.

Figure: A group of people seated around tables in a meeting room.

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, the informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Dig Deeper: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 7). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals that tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and subsequently their children born from their wives) and eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a function of this research?

Figure: A person administering an injection.

Learn more about the Tuskegee Syphilis Study on the CDC website.

Research Involving Animal Subjects

Figure: A photograph of a rat.

This does not mean that animal researchers are immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, proposals for animal experiments are reviewed by an Institutional Animal Care and Use Committee (IACUC). An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all approved protocols provide for the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that research protocols are being followed. No animal research project can proceed without the committee’s approval.

Introduction to Approaches to Research

  • Differentiate between descriptive, correlational, and experimental research
  • Explain the strengths and weaknesses of case studies, naturalistic observation, and surveys
  • Describe the strengths and weaknesses of archival research
  • Compare longitudinal and cross-sectional approaches to research
  • Explain what a correlation coefficient tells us about the relationship between variables
  • Describe why correlation does not mean causation
  • Describe the experimental process, including ways to control for bias
  • Identify and differentiate between independent and dependent variables

Three researchers review data while talking around a microscope.

Psychologists use descriptive, experimental, and correlational methods to conduct research. Descriptive, or qualitative, methods include the case study, naturalistic observation, surveys, archival research, longitudinal research, and cross-sectional research.

Experiments are conducted in order to determine cause-and-effect relationships. In ideal experimental design, the only difference between the experimental and control groups is whether participants are exposed to the experimental manipulation. Each group goes through all phases of the experiment, but each group will experience a different level of the independent variable: the experimental group is exposed to the experimental manipulation, and the control group is not exposed to the experimental manipulation. The researcher then measures the changes that are produced in the dependent variable in each group. Once data is collected from both groups, it is analyzed statistically to determine if there are meaningful differences between the groups.

When scientists passively observe and measure phenomena it is called correlational research. Here, psychologists do not intervene and change behavior, as they do in experiments. In correlational research, they identify patterns of relationships, but usually cannot infer what causes what. Importantly, a single correlation describes the relationship between just two variables at a time.

Watch It: More on Research

If you enjoy learning through lectures and want an interesting and comprehensive summary of this section, then click on the YouTube link to watch a lecture given by MIT Professor John Gabrieli. Start at the 30:45 mark and watch through the end to hear examples of actual psychological studies and how they were analyzed. Listen for references to independent and dependent variables, experimenter bias, and double-blind studies. In the lecture, you’ll learn about breaking social norms, “WEIRD” research, why expectations matter, how a warm cup of coffee might make you nicer, why you should change your answer on a multiple choice test, and why praise for intelligence won’t make you any smarter.

You can view the transcript for “Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011” here.

Descriptive Research

There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Other approaches involve interactions between the researcher and the individuals who are being studied—ranging from a series of simple questions to extensive, in-depth interviews—to well-controlled experiments.

The three main categories of psychological research are descriptive, correlational, and experimental research. Research studies that do not test specific relationships between variables are called descriptive, or qualitative, studies. These studies are used to describe general or specific behaviors and attributes that are observed and measured. In the early stages of research it might be difficult to form a hypothesis, especially when there is not any existing literature in the area. In these situations designing an experiment would be premature, as the question of interest is not yet clearly defined as a hypothesis. Often a researcher will begin with a non-experimental approach, such as a descriptive study, to gather more information about the topic before designing an experiment or correlational study to address a specific hypothesis. Descriptive research is distinct from correlational research, in which psychologists formally test whether a relationship exists between two or more variables. Experimental research goes a step further beyond descriptive and correlational research and randomly assigns people to different conditions, using hypothesis testing to make inferences about how these conditions affect behavior. It aims to determine if one variable directly impacts and causes another. Correlational and experimental research both typically use hypothesis testing, whereas descriptive research does not.

Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data was collected.

Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, which will be discussed later in the text, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in very artificial settings. This calls into question the validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical concerns.

The three main types of descriptive studies are naturalistic observation, case studies, and surveys.

Naturalistic Observation

If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. However, people might change their behavior in unexpected ways if they know they are being observed. How do researchers obtain accurate information when people tend to hide their natural behavior? As an example, imagine that your professor asks everyone in your class to raise their hand if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?

This is very similar to the phenomenon mentioned earlier in this module: many individuals do not feel comfortable answering a question honestly. But if we are committed to finding out the facts about hand washing, we have other options available to us.

Suppose we send a classmate into the restroom to actually watch whether everyone washes their hands after using the restroom. Will our observer blend into the restroom environment by wearing a white lab coat, sitting with a clipboard, and staring at the sinks? We want our researcher to be inconspicuous—perhaps standing at one of the sinks pretending to put in contact lenses while secretly recording the relevant information. This type of observational study is called naturalistic observation : observing behavior in its natural setting. To better understand peer exclusion, Suzanne Fanger collaborated with colleagues at the University of Texas to observe the behavior of preschool children on a playground. How did the observers remain inconspicuous over the duration of the study? They equipped a few of the children with wireless microphones (which the children quickly forgot about) and observed while taking notes from a distance. Also, the children in that particular preschool (a “laboratory preschool”) were accustomed to having observers on the playground (Fanger, Frankel, & Hazen, 2012).

A photograph shows two police cars driving, one with its lights flashing.

It is critical that the observer be as unobtrusive and as inconspicuous as possible: when people know they are being watched, they are less likely to behave naturally. If you have any doubt about this, ask yourself how your driving behavior might differ in two situations: In the first situation, you are driving down a deserted highway during the middle of the day; in the second situation, you are being followed by a police car down the same deserted highway (Figure 9).

It should be pointed out that naturalistic observation is not limited to research involving humans. Indeed, some of the best-known examples of naturalistic observation involve researchers going into the field to observe various kinds of animals in their own environments. As with human studies, the researchers maintain their distance and avoid interfering with the animal subjects so as not to influence their natural behaviors. Scientists have used this technique to study social hierarchies and interactions among animals ranging from ground squirrels to gorillas. The information provided by these studies is invaluable in understanding how those animals organize socially and communicate with one another. The primatologist Jane Goodall, for example, spent nearly five decades observing the behavior of chimpanzees in Africa (Figure 10). As an illustration of the types of concerns that a researcher might encounter in naturalistic observation, some scientists criticized Goodall for giving the chimps names instead of referring to them by numbers—using names was thought to undermine the emotional detachment required for the objectivity of the study (McKie, 2010).

(a) A photograph shows Jane Goodall speaking from a lectern. (b) A photograph shows a chimpanzee’s face.

The greatest benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means that we have a higher degree of ecological validity, or realism, than we might achieve with other research approaches. Therefore, our ability to generalize the findings of the research to real-world situations is enhanced. If done correctly, we need not worry about people or animals modifying their behavior simply because they are being observed. Sometimes, people may assume that reality programs give us a glimpse into authentic human behavior. However, the principle of inconspicuous observation is violated as reality stars are followed by camera crews and are interviewed on camera for personal confessionals. Given that environment, we must doubt how natural and realistic their behaviors are.

The major downside of naturalistic observation is that such studies are often difficult to set up and control. In our restroom study, what if you stood in the restroom all day prepared to record people’s hand washing behavior and no one came in? Or, what if you have been closely observing a troop of gorillas for weeks only to find that they migrated to a new place while you were sleeping in your tent? The benefit of realistic data comes at a cost. As a researcher you have no control over when (or if) you have behavior to observe. In addition, this type of observational research often requires significant investments of time, money, and a good dose of luck.

Sometimes studies involve structured observation. In these cases, people are observed while engaging in set, specific tasks. An excellent example of structured observation comes from the Strange Situation, developed by Mary Ainsworth (you will read more about this in the module on lifespan development). The Strange Situation is a procedure used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant upon being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.

Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.

Case Studies

In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.

The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.

These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).

In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.

If clinical or case studies provide so much information, why are they not used more often by researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.

Surveys

Often, psychologists develop surveys as a means of gathering data. Surveys are lists of questions to be answered by research participants, and can be delivered as paper-and-pencil questionnaires, administered electronically, or conducted verbally (Figure 11). Generally, the survey itself can be completed in a short time, and the ease of administering a survey makes it easy to collect data from a large number of people.

Surveys allow researchers to gather data from larger samples than may be afforded by other research methods. A sample is a subset of individuals selected from a population, which is the overall group of individuals that the researchers are interested in. Researchers study the sample and seek to generalize their findings to the population.

A sample online survey reads, “Dear visitor, your opinion is important to us. We would like to invite you to participate in a short survey to gather your opinions and feedback on your news consumption habits. The survey will take approximately 10-15 minutes. Simply click the “Yes” button below to launch the survey. Would you like to participate?” Two buttons are labeled “yes” and “no.”

Surveys have both strengths and weaknesses in comparison to case studies. By using surveys, we can collect information from a larger sample of people. A larger sample is better able to reflect the actual diversity of the population, thus allowing better generalizability. Therefore, if our sample is sufficiently large and diverse, we can assume that the data we collect from the survey can be generalized to the larger population with more certainty than the information collected through a case study. However, given the greater number of people involved, we are not able to collect the same depth of information on each person that would be collected in a case study.

Another potential weakness of surveys is something we touched on earlier in this chapter: people don’t always give accurate responses. They may lie, misremember, or answer questions in a way that they think makes them look good. For example, people may report drinking less alcohol than is actually the case.

Any number of research questions can be answered through the use of surveys. One real-world example is the research conducted by Jenkins, Ruppel, Kizer, Yehl, and Griffin (2012) about the backlash against the US Arab-American community following the terrorist attacks of September 11, 2001. Jenkins and colleagues wanted to determine to what extent these negative attitudes toward Arab-Americans still existed nearly a decade after the attacks occurred. In one study, 140 research participants filled out a survey with 10 questions, including questions asking directly about the participant’s overt prejudicial attitudes toward people of various ethnicities. The survey also asked indirect questions about how likely the participant would be to interact with a person of a given ethnicity in a variety of settings (such as, “How likely do you think it is that you would introduce yourself to a person of Arab-American descent?”). The results of the research suggested that participants were unwilling to report prejudicial attitudes toward any ethnic group. However, there were significant differences between their pattern of responses to questions about social interaction with Arab-Americans compared to other ethnic groups: they indicated less willingness for social interaction with Arab-Americans compared to the other ethnic groups. This suggested that the participants harbored subtle forms of prejudice against Arab-Americans, despite their assertions that this was not the case (Jenkins et al., 2012).

Archival Research

(a) A photograph shows stacks of paper files on shelves. (b) A photograph shows a computer.

In archival research, researchers examine existing records rather than collecting new data directly from participants. In comparing archival research to other research methods, there are several important distinctions. For one, the researcher employing archival research never directly interacts with research participants. Therefore, the investment of time and money to collect data is considerably less with archival research. Additionally, researchers have no control over what information was originally collected. Therefore, research questions have to be tailored so they can be answered within the structure of the existing data sets. There is also no guarantee of consistency between the records from one source to another, which might make comparing and contrasting different data sets problematic.

Longitudinal and Cross-Sectional Research

Sometimes we want to see how people change over time, as in studies of human development and lifespan. When we test the same group of individuals repeatedly over an extended period of time, we are conducting longitudinal research. Longitudinal research is a research design in which data are gathered from the same participants repeatedly over an extended period of time. For example, we may survey a group of individuals about their dietary habits at age 20, retest them a decade later at age 30, and then again at age 40.

Another approach is cross-sectional research. In cross-sectional research, a researcher compares multiple segments of the population at the same time. Using the dietary habits example above, the researcher might directly compare different groups of people by age. Instead of observing a group of people for 20 years to see how their dietary habits changed from decade to decade, the researcher would study a group of 20-year-old individuals and compare them to a group of 30-year-old individuals and a group of 40-year-old individuals. While cross-sectional research requires a shorter-term investment, it is also limited by differences that exist between the different generations (or cohorts) that have nothing to do with age per se, but rather reflect the social and cultural experiences of different generations of individuals that make them different from one another.

To illustrate this concept, consider the following survey findings. In recent years there has been significant growth in the popular support of same-sex marriage. Many studies on this topic break down survey participants into different age groups. In general, younger people are more supportive of same-sex marriage than are those who are older (Jones, 2013). Does this mean that as we age we become less open to the idea of same-sex marriage, or does this mean that older individuals have different perspectives because of the social climates in which they grew up? Longitudinal research is a powerful approach because the same individuals are involved in the research project over time, which means that the researchers need to be less concerned with differences among cohorts affecting the results of their study.

Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might cause or prevent the development of cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over 20 years to determine which of them develop cancer and which do not.

Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of cancer and smoking (American Cancer Society, n.d.) (Figure 13).

A photograph shows pack of cigarettes and cigarettes in an ashtray. The pack of cigarettes reads, “Surgeon general’s warning: smoking causes lung cancer, heart disease, emphysema, and may complicate pregnancy.”

As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. In addition to the time demands, these studies also require a substantial financial investment. Many researchers are unable to commit the resources necessary to see a longitudinal project through to the end.

Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get married and take new names, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. As a result, attrition rates, or reductions in the number of research participants due to dropouts, in longitudinal studies are quite high and increase over the course of a project. For this reason, researchers using this approach typically recruit many participants fully expecting that a substantial number will drop out before the end. As the study progresses, they continually check whether the sample still represents the larger population, and make adjustments as necessary.

Correlational Research

Did you know that as sales in ice cream increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing a crime, do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur.

It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream. How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is attributable to coincidence or causation?

Three scatterplots are shown. Scatterplot (a) is labeled “positive correlation” and shows scattered dots forming a rough line from the bottom left to the top right; the x-axis is labeled “weight” and the y-axis is labeled “height.” Scatterplot (b) is labeled “negative correlation” and shows scattered dots forming a rough line from the top left to the bottom right; the x-axis is labeled “tiredness” and the y-axis is labeled “hours of sleep.” Scatterplot (c) is labeled “no correlation” and shows scattered dots having no pattern; the x-axis is labeled “shoe size” and the y-axis is labeled “hours of sleep.”
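The relationships pictured in these scatterplots are summarized numerically by a correlation coefficient (Pearson's r), which ranges from -1 (perfect negative relationship) through 0 (no linear relationship) to +1 (perfect positive relationship). The following is a minimal sketch in Python; the data values and the helper name `pearson_r` are invented purely for illustration.

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson correlation coefficient for two equal-length lists:
    # covariance of x and y divided by the product of their spreads.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data mirroring the scatterplots above.
weight = [120, 140, 155, 170, 185, 200]   # pounds
height = [60, 63, 65, 68, 70, 73]         # inches; rises with weight
tiredness = [1, 2, 3, 4, 5, 6]            # self-rated
sleep = [9, 8.5, 7.5, 7, 6, 5]            # hours; falls as tiredness rises

print(round(pearson_r(weight, height), 2))    # close to +1: positive correlation
print(round(pearson_r(tiredness, sleep), 2))  # close to -1: negative correlation
```

Unpatterned data, such as shoe size plotted against hours of sleep, would yield a coefficient near 0.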

Correlation Does Not Indicate Causation

Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.
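The ice cream/crime confound can be made concrete with a small simulation. In the sketch below, temperature drives both variables and neither has any causal effect on the other, yet the two still end up strongly correlated. All of the numbers, coefficients, and the `pearson_r` helper are invented for illustration.

```python
import random
from math import sqrt

def pearson_r(xs, ys):
    # Pearson correlation coefficient for two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)  # reproducible fake data

# Temperature is the confounding variable: it feeds into BOTH outcomes.
temps = [random.uniform(0, 35) for _ in range(200)]             # degrees C
ice_cream = [10 + 2.0 * t + random.gauss(0, 5) for t in temps]  # daily sales
crime = [5 + 0.8 * t + random.gauss(0, 4) for t in temps]       # daily incidents

# Strongly positive, despite no causal link between sales and crime.
print(round(pearson_r(ice_cream, crime), 2))
```

The only way to break the confound is to hold temperature constant (statistically or experimentally), which is exactly what correlational data alone cannot do.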

Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption.

A photograph shows a bowl of cereal.

Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantzen, Treviño, Echon, Garcia-Dominic, & DiMarco, 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations? For example, someone at a healthy weight may be more likely to regularly eat a healthy breakfast than someone who is obese or someone who avoids meals in an attempt to diet (Figure 15). While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.

Watch this clip from Freakonomics for an example of how correlation does not indicate causation.

You can view the transcript for “Correlation vs. Causality: Freakonomics Movie” here.

Illusory Correlations

The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (Figure 16).

A photograph shows the moon.

There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.

Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias. Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).

We all have a tendency to make illusory correlations from time to time. Try to think of an illusory correlation that is held by you, a family member, or a close friend. How do you think this illusory correlation came about, and what can be done in the future to combat it?

Experiments


In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have younger relatives who watch cartoons featuring characters using martial arts to save the world from evildoers, with an impressive array of punching, kicking, and defensive postures. You notice that after watching these programs for a while, your young relatives mimic the fighting behavior of the characters portrayed in the cartoon (Figure 17).

A photograph shows a child pointing a toy gun.

These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to rigorously test our hypothesis. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.

Designing an Experiment

The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference: the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since the experimental manipulation is the only difference between the experimental and control groups, we can be confident that any differences between the two are due to the experimental manipulation rather than chance.

In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. We measure the violent behavior in our control group after they watch nonviolent television programming for the same amount of time. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation. Therefore, we have the control group watch non-violent television programming for the same amount of time as the experimental group.

We also need to precisely define, or operationalize, what is considered violent and nonviolent. An operational definition is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we may also choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids people’s ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.

Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts is recorded.

Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By being blind to which child is in which group, we protect against those biases. This situation describes a single-blind study, meaning that one of the groups (the participants) is unaware of which group they are in (experimental or control), while the researcher who developed the experiment knows which participants are in each group.

A photograph shows three glass bottles of pills labeled as placebos.

In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase placebo effect, you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.

The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.

Why is that? Imagine that you are a participant in this study, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.

To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations (Figure 18).

Independent and Dependent Variables

In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study (Figure 19). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.

A box labeled “independent variable: type of television programming viewed” contains a photograph of a person shooting an automatic weapon. An arrow labeled “influences change in the…” leads to a second box. The second box is labeled “dependent variable: violent behavior displayed” and has a photograph of a child pointing a toy gun.

We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what effect does watching a half hour of violent television programming or nonviolent television programming have on the number of incidents of physical aggression displayed on the playground?

Selecting and Assigning Experimental Participants

Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants so we need to determine who to include. Participants  are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.

Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (Figure 20). If possible, we should use a random sample (there are other types of samples, but for the purposes of this section, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.
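A random sample of this kind is easy to sketch in code. The snippet below is a minimal illustration in Python, assuming a hypothetical population of 5,000 fourth graders identified by ID number (the population size and IDs are invented for the example):

```python
import random

# Hypothetical population: ID numbers for every fourth grader in the city
# (5,000 is an assumed size, purely for illustration).
population = list(range(1, 5001))

random.seed(42)  # fixed seed so the sketch is reproducible
# random.sample draws without replacement, so every student has an
# equal chance of being selected and no one is picked twice.
sample = random.sample(population, k=200)

print(len(sample))       # 200 participants
print(len(set(sample)))  # 200 distinct students
```

Because `random.sample` draws without replacement, each of the 5,000 students has the same chance of ending up among the 200 participants, which is exactly the property that makes the sample random.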

In our example, let’s say we decide our population of interest is fourth graders. But all fourth graders is a very large population, so we need to be more specific; instead we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders who we want to participate in our experiment.

In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.

(a) A photograph shows an aerial view of crowds on a street. (b) A photograph shows a small group of children.

Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment, all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the fourth graders in the sample to either the experimental or the control group.

Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.
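What such statistical software does amounts to shuffling the sample and splitting it in half. This is a minimal sketch, assuming the 200 hypothetical participant IDs from our sample:

```python
import random

# Hypothetical sample of 200 participant IDs (assumed to have been
# drawn already from the population of fourth graders).
sample = list(range(1, 201))

random.seed(7)          # fixed seed so the sketch is reproducible
random.shuffle(sample)  # every ordering of participants is equally likely

experimental_group = sample[:100]  # will view violent programming
control_group = sample[100:]       # will view nonviolent programming

# The groups are disjoint and together cover the whole sample.
print(len(experimental_group), len(control_group))  # 100 100
```

Because every ordering after the shuffle is equally likely, each participant has the same 50% chance of landing in either group, which is what prevents systematic differences between the groups.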

Issues to Consider

While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.

Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.

Introduction to Statistical Thinking

Psychologists use statistics to assist them in analyzing data, and also to give more precise measurements to describe whether something is statistically significant. Analyzing data using statistics enables researchers to find patterns, make claims, and share their results with others. In this section, you’ll learn about some of the tools that psychologists use in statistical analysis.

  • Define reliability and validity
  • Describe the importance of distributional thinking and the role of p-values in statistical inference
  • Describe the role of random sampling and random assignment in drawing cause-and-effect conclusions
  • Describe the basic structure of a psychological research article

Interpreting Experimental Findings

Once data are collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely it is that any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. Stated another way, if chance alone were operating, a difference this large would be expected in fewer than 5 out of every 100 repetitions of the experiment.

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.

Reporting Research

When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.

A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.

Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.

Dig Deeper: The Vaccine-Autism Myth and the Retraction of Published Studies

Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, several peer-reviewed publications published research making these claims. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.

A published piece of work can be rescinded when data is called into question because of falsification, fabrication, or serious research design problems. Once rescinded, the scientific community is informed that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest in which the leading researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated (Figure 21). For more information about how the vaccine/autism story unfolded, as well as the repercussions of this story, take a look at Paul Offit’s book, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure.

A photograph shows a child being given an oral vaccine.

Reliability and Validity

Dig Deeper: Everyday Connection: How Valid Is the SAT?

Standardized tests like the SAT are supposed to measure an individual’s aptitude for a college education, but how reliable and valid are such tests? Research conducted by the College Board suggests that scores on the SAT have high predictive validity for first-year college students’ GPA (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008). In this context, predictive validity refers to the test’s ability to effectively predict the GPA of college freshmen. Given that many institutions of higher education require the SAT for admission, this high degree of predictive validity might be comforting.

However, the emphasis placed on SAT scores in college admissions has generated some controversy on a number of fronts. For one, some researchers assert that the SAT is a biased test that places minority students at a disadvantage and unfairly reduces the likelihood of being admitted into a college (Santelices & Wilson, 2010). Additionally, some research has suggested that the SAT’s predictive validity is grossly exaggerated and that the test predicts the GPA of first-year college students far less well than reported. In fact, it has been suggested that the SAT’s predictive validity may be overestimated by as much as 150% (Rothstein, 2004). Many institutions of higher education are beginning to consider de-emphasizing the significance of SAT scores in making admission decisions (Rimer, 2008).

In 2014, College Board president David Coleman expressed his awareness of these problems, recognizing that college success is more accurately predicted by high school grades than by SAT scores. To address these concerns, he has called for significant changes to the SAT exam (Lewin, 2014).

Statistical Significance

Coffee cup with heart shaped cream inside.

Does drinking coffee actually increase your life expectancy? A recent study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day also had a 10% lower chance of dying (women’s chances were 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit? We will explore these results in more depth in the next section about drawing conclusions from statistics. Modern society has become awash in studies such as this; you can read about several such studies in the news every day.

Conducting such a study well, and interpreting its results, requires understanding the basic ideas of statistics, the science of gaining insight from data. The key components of a statistical investigation are:

  • Planning the study: Start by asking a testable research question and deciding how to collect data. For example, how long was the study period of the coffee study? How many people were recruited for the study, how were they recruited, and from where? How old were they? What other variables were recorded about the individuals? Were changes made to the participants’ coffee habits during the course of the study?
  • Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal? What patterns do you see in the data? Are there any individual observations that deviate from the overall pattern, and what do they reveal? For example, in the coffee study, did the proportions differ when we compared the smokers to the non-smokers?
  • Inferring from the data: What are valid statistical methods for drawing inferences “beyond” the data you collected? In the coffee study, is the 10%–15% reduction in risk of death something that could have happened just by chance?
  • Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? (Were the people in the coffee study older? Healthy? Living in cities?) Can you draw a cause-and-effect conclusion about your treatments? (Are scientists now saying that the coffee drinking is the cause of the decreased risk of death?)

Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of the overall statistical investigation. In this section, you will see how we can answer some of these questions and learn what questions you should be asking about any statistical investigation you read about.

Distributional Thinking

When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. Let’s take a look at an example.

Example 1: Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Figure 23.

Table showing patients’ reading levels and pamphlets’ reading levels.

This example illustrates two fundamental aspects of statistical thinking:

  • Data vary. More specifically, values of a variable (such as the reading level of a cancer patient or the readability level of a cancer pamphlet) vary.
  • Analyzing the pattern of variation, called the distribution of the variable, often reveals insights.

Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 24.

Bar graph showing that the reading level of pamphlets is typically higher than the reading level of the patients.

Figure 24 makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables.
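The pitfall of comparing only the centers is easy to demonstrate with toy numbers. The values below are illustrative stand-ins, not the study’s data, chosen so that the medians match while the distributions differ:

```python
from statistics import median

# Illustrative reading levels in grade units -- NOT the study's data,
# just toy numbers chosen so the medians agree.
patient_levels  = [4, 5, 6, 7, 9, 9, 9, 10, 11, 12]
pamphlet_levels = [8, 8, 9, 9, 9, 9, 10, 11, 12, 13]

# The centers agree: both medians are grade 9...
print(median(patient_levels), median(pamphlet_levels))

# ...but the full distributions do not: several patients read below
# the level of even the most readable pamphlet.
easiest = min(pamphlet_levels)
below = sum(1 for level in patient_levels if level < easiest)
print(f"{below} of {len(patient_levels)} patients read below the easiest pamphlet")
```

Here 4 of the 10 hypothetical patients fall below the easiest pamphlet even though the medians are identical, which is the same mismatch the full distributions in Figure 24 reveal.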

Finding Significance in Data

Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1°F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether the patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population? Let’s take a look at another example.

Example 2: In a study reported in the November 2007 issue of Nature, researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with.

The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy. One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand?

Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or, maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control for as many of the variables that might affect the responses as possible. It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process.

Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor.

If the infants were equally likely to pick between the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value is the probability of obtaining results at least as extreme as those observed, assuming that random chance alone is at work. Within psychology, the most common standard for p-values is “p < .05”. What this means is that if chance alone were operating, results this extreme would occur less than 5% of the time, so we judge them unlikely to be a fluke. We call this statistical significance.
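One way to see how unlikely this outcome is under chance alone is a quick simulation. This sketch is purely illustrative and not part of the original study:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
TRIALS = 100_000

# Simulate 16 infants each choosing purely at random (a fair coin each),
# and count how often 14 or more happen to pick the helper toy.
extreme = 0
for _ in range(TRIALS):
    heads = sum(random.random() < 0.5 for _ in range(16))
    if heads >= 14:
        extreme += 1

print(extreme / TRIALS)  # close to 0.002
```

The simulated proportion settles near 0.002, illustrating that 14 or more helper-toy choices out of 16 would be a rare event if the infants had no preference at all.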

So, in the study above, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy.
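The reported probability can also be reproduced exactly with a short binomial calculation over the 2^16 equally likely sequences of choices:

```python
from math import comb

# P(X >= 14) for X ~ Binomial(n = 16, p = 0.5): count the coin-toss
# outcomes with 14 or more heads among the 2**16 equally likely ones.
n = 16
favorable = sum(comb(n, k) for k in range(14, n + 1))  # 120 + 16 + 1 = 137
p_value = favorable / 2**n  # 137 / 65,536

print(round(p_value, 4))  # 0.0021
```

This matches the 0.0021 reported above: only 137 of the 65,536 possible sequences of sixteen 50/50 choices produce 14 or more helper-toy picks.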

If we compare the p-value to some cut-off value, like 0.05, we see that the p-value is smaller. Because the p-value falls below that cut-off, we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior.

Drawing Conclusions from Statistics

Generalizability.

Photo of a diverse group of college-aged students.

One limitation to the study mentioned previously about the babies choosing the “helper” toy is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample) from a much larger group of individuals (the population) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.

Example 3 : The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.
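The numbering-and-selecting scheme just described can be sketched with Python’s standard library. The population size here is made up purely for illustration:

```python
import random

# Hypothetical population: members numbered 1 through 10,000
population = range(1, 10_001)

# A simple random sample of 2,000 drawn without replacement:
# every member has an equal chance of being selected
sample = random.sample(population, k=2000)
print(len(sample), len(set(sample)))  # 2000 members, all distinct
```

Because `random.sample` draws without replacement, no member can appear twice, which is what "numbering every member and selecting a subset" amounts to in practice.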

In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size), the margin of error. A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
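The margin-of-error arithmetic can be checked directly. A small sketch follows; note that the text rounds the margin to 3 percentage points, while the 1/√n shortcut gives about 3.2:

```python
from math import sqrt

respondents = 977
rushed = 817

p_hat = rushed / respondents     # sample proportion, about 0.836
margin = 1 / sqrt(respondents)   # rough 95% margin of error, about 0.032

low, high = p_hat - margin, p_hat + margin
print(f"{p_hat:.1%} +/- {margin:.1%} -> ({low:.1%}, {high:.1%})")
```

The 1/√n rule is a conservative shortcut for the usual 95% confidence interval for a proportion; it is widest when the true proportion is near 50%.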

The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.

Cause and Effect

In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how were the groups formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?

Example 4 : A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic (internal) or extrinsic (external) motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 26, where higher scores indicate more creativity.

Image showing a dot for creativity scores, which vary between 5 and 27, and the types of motivation each person was given as a motivator, either extrinsic or intrinsic.

In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?

Figure 26 reveals that both motivation groups saw considerable variability in creativity scores, and these scores have considerable overlap between the groups. In other words, it’s certainly not always the case that those with extrinsic motivations have higher creativity than those with intrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)

The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores in the groups. We can measure variability with statistics using, for instance, the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large.

We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment  tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group.

But does this always work? No. Sometimes, by “luck of the draw,” the groups may be a little different prior to answering the motivation survey. So then the question is: is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem would have received the same creativity score no matter which group they were assigned to; that is, the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points?

We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity scores on an index card, shuffling up the index cards, and then dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment leads to a difference in means at least as large as 4.14. Figure 27 shows the results from 1,000 such hypothetical random assignments for these scores.
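The card-shuffling procedure described above is a randomization (permutation) test, and it is straightforward to simulate. The scores below are invented for illustration only; the study’s raw data are not reproduced in the text:

```python
import random
from statistics import mean

# Hypothetical creativity scores -- NOT the study's actual data
extrinsic = [12, 15, 10, 18, 14, 16, 13, 17, 11, 19, 15, 14,
             16, 12, 18, 13, 15, 17, 14, 16, 12, 15, 20]        # 23 subjects
intrinsic = [20, 18, 22, 17, 21, 19, 23, 16, 20, 24, 18, 21,
             19, 22, 17, 20, 23, 18, 21, 19, 22, 20, 17, 24]    # 24 subjects

observed = mean(intrinsic) - mean(extrinsic)

random.seed(1)                      # fixed seed so the run is repeatable
combined = extrinsic + intrinsic
reps, count = 1000, 0
for _ in range(reps):
    random.shuffle(combined)        # "shuffle the index cards"
    new_ext, new_int = combined[:23], combined[23:]
    if mean(new_int) - mean(new_ext) >= observed:
        count += 1

p_value = count / reps              # share of shuffles at least as extreme
print(f"observed difference: {observed:.2f}, approximate p-value: {p_value}")
```

Each pass re-deals the same 47 scores into groups of 23 and 24, so the simulation shows how large a difference in means random assignment alone can produce when motivation has no effect.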

Standard distribution in a typical bell curve.

Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.

Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.

Close-up photo of mathematical equations.

Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.

So where does this leave us with regard to the coffee study mentioned previously (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012), which found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (15% lower for women) than those who drank none? We can answer many of the questions:

  • This was a 14-year study conducted by researchers at the National Cancer Institute.
  • The results were published in the June issue of the New England Journal of Medicine , a respected, peer-reviewed journal.
  • The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study.
  • About 52,000 people died during the course of the study.
  • People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups.
  • The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%).
  • Whether coffee was caffeinated or decaffeinated did not appear to affect the results.
  • This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee.

This study needs to be reviewed in the larger context of similar studies and the consistency of results across studies, with the constant caution that this was not a randomized experiment. Although a statistical analysis can “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.

Explore these outside resources to learn more about applied statistics:

  • Video about p-values:  P-Value Extravaganza
  • Interactive web applets for teaching and learning statistics
  • Inter-university Consortium for Political and Social Research  where you can find and analyze data.
  • The Consortium for the Advancement of Undergraduate Statistics
  • Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor or against the research question? Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented?
  • Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not.

How to Read Research

In this course and throughout your academic career, you’ll be reading journal articles (meaning they were published by experts in a peer-reviewed journal) and reports that explain psychological research. It’s important to understand the format of these articles so that you can read them strategically and understand the information presented. Scientific articles vary in content or structure, depending on the type of journal to which they will be submitted. Psychological articles and many papers in the social sciences follow the writing guidelines and format dictated by the American Psychological Association (APA). In general, the structure follows: abstract, introduction, methods, results, discussion, and references.

  • Abstract : the abstract is a concise summary of the article. It summarizes the most important features of the manuscript, providing the reader with a global first impression of the article. It is generally just one paragraph that explains the experiment and gives a short synopsis of the results.
  • Introduction : this section provides background information about the origin and purpose of performing the experiment or study. It reviews previous research and presents existing theories on the topic.
  • Method : this section covers the methodologies used to investigate the research question, including the identification of participants and materials as well as a description of the actual procedure. It should be sufficiently detailed to allow for replication.
  • Results : the results section presents key findings of the research, including reference to indicators of statistical significance.
  • Discussion : this section provides an interpretation of the findings, states their significance for current research, and derives implications for theory and practice. Alternative interpretations for the findings are also provided, particularly when it is not possible to determine the directionality of the effects. In the discussion, authors also acknowledge the strengths and limitations/weaknesses of the study and offer concrete directions for future research.

Watch this 3-minute video for an explanation of how to read scholarly articles. Look closely at the example article shared just before the two-minute mark.

https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/

Practice identifying these key components in the following experiment: Food-Induced Emotional Resonance Improves Emotion Recognition.

In this chapter, you learned to

  • define and apply the scientific method to psychology
  • describe the strengths and weaknesses of descriptive, experimental, and correlational research
  • define the basic elements of a statistical investigation

Putting It Together: Psychological Research

Psychologists use the scientific method to examine human behavior and mental processes. Some of the methods you learned about include descriptive, experimental, and correlational research designs.

Watch the CrashCourse video to review the material you learned, then read through the following examples and see if you can come up with your own design for each type of study.

You can view the transcript for “Psychological Research: Crash Course Psychology #2” here (opens in new window).

Case Study: a detailed analysis of a particular person, group, business, event, etc. This approach is commonly used to learn more about rare examples with the goal of describing that particular thing.

  • Ted Bundy was one of America’s most notorious serial killers who murdered at least 30 women and was executed in 1989. Dr. Al Carlisle evaluated Bundy when he was first arrested and conducted a psychological analysis of Bundy’s development of his sexual fantasies merging into reality (Ramsland, 2012). Carlisle believes that there was a gradual evolution of three processes that guided his actions: fantasy, dissociation, and compartmentalization (Ramsland, 2012). Read   Imagining Ted Bundy  (http://goo.gl/rGqcUv) for more information on this case study.

Naturalistic Observation : a researcher unobtrusively collects information without the participant’s awareness.

  • Drain and Engelhardt (2013) observed the evoked and spontaneous communicative acts of six nonverbal children with autism. Each of the children attended a school for children with autism and was in a different class. They were observed for 30 minutes of each school day. By observing these children without their knowledge, the researchers were able to see true communicative acts without any external influences.

Survey : participants are asked to provide information or responses to questions on a survey or structured assessment.

  • Educational psychologists can ask students to report their grade point average and what, if anything, they eat for breakfast on an average day. A healthy breakfast has been associated with better academic performance (DiGangi, 1999).
  • Anderson (1987) examined the relationship between uncomfortably hot temperatures and aggressive behavior in two studies of violent and nonviolent crime. Based on previous research by Anderson and Anderson (1984), it was predicted that violent crimes would be more prevalent during the hotter times of year and in hotter years in general. The study confirmed this prediction.

Longitudinal Study: researchers recruit a sample of participants and track them for an extended period of time.

  • In a study of a representative sample of 856 children, Eron and his colleagues (1972) found that a boy’s exposure to media violence at age eight was significantly related to his aggressive behavior ten years later, after he graduated from high school.

Cross-Sectional Study:  researchers gather participants from different groups (commonly different ages) and look for differences between the groups.

  • In 1996, Russell surveyed people of varying age groups and found that people in their 20s tend to report being more lonely than people in their 70s.

Correlational Design:  two different variables are measured to determine whether there is a relationship between them.

  • Thornhill et al. (2003) had people rate how physically attractive they found other people to be. They then had them separately smell t-shirts those people had worn (without knowing which clothes belonged to whom) and rate how good or bad their body odor was. They found that the more attractive someone was, the more pleasant their body odor was rated to be.
  • Clinical psychologists can test a new pharmaceutical treatment for depression by giving some patients the new pill and others an already-tested one to see which is the more effective treatment.

CC licensed content, Original

  • Psychological Research Methods. Provided by : Karenna Malavanti. License : CC BY-SA: Attribution ShareAlike

CC licensed content, Shared previously

  • Psychological Research. Provided by : OpenStax College. License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-introduction .
  • Why It Matters: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/introduction-15/
  • Introduction to The Scientific Method. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-the-scientific-method/
  • Research picture. Authored by : Mediterranean Center of Medical Sciences. Provided by : Flickr. License : CC BY: Attribution   Located at : https://www.flickr.com/photos/mcmscience/17664002728 .
  • The Scientific Process. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-the-scientific-process/
  • Ethics in Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/ethics/
  • Ethics. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-4-ethics . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction .
  • Introduction to Approaches to Research. Provided by : Lumen Learning. License : CC BY-NC-SA: Attribution NonCommercial ShareAlike   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-approaches-to-research/
  • Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011. Authored by : John Gabrieli. Provided by : MIT OpenCourseWare. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : https://www.youtube.com/watch?v=syXplPKQb_o .
  • Paragraph on correlation. Authored by : Christie Napa Scollon. Provided by : Singapore Management University. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : http://nobaproject.com/modules/research-designs?r=MTc0ODYsMjMzNjQ%3D . Project : The Noba Project.
  • Descriptive Research. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-clinical-or-case-studies/
  • Approaches to Research. Authored by : OpenStax College.  License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-2-approaches-to-research
  • Analyzing Findings. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-3-analyzing-findings . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction.
  • Experiments. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Research Review. Authored by : Jessica Traylor for Lumen Learning. License : CC BY: Attribution Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Introduction to Statistics. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-statistical-thinking/
  • histogram. Authored by : Fisher’s Iris flower data set. Provided by : Wikipedia. License : CC BY-SA: Attribution-ShareAlike Located at : https://en.wikipedia.org/wiki/Wikipedia:Meetup/DC/Statistics_Edit-a-thon#/media/File:Fisher_iris_versicolor_sepalwidth.svg .
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman. Provided by : California Polytechnic State University, San Luis Obispo. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike. License Terms : http://nobaproject.com/license-agreement Located at : http://nobaproject.com/modules/statistical-thinking . Project : The Noba Project.
  • Drawing Conclusions from Statistics. Authored by: Pat Carroll and Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-drawing-conclusions-from-statistics/
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman, California Polytechnic State University, San Luis Obispo. Provided by : Noba. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : http://nobaproject.com/modules/statistical-thinking .
  • The Replication Crisis. Authored by : Colin Thomas William. Provided by : Ivy Tech Community College. License: CC BY: Attribution
  • How to Read Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/how-to-read-research/
  • What is a Scholarly Article? Kimbel Library First Year Experience Instructional Videos. 9. Authored by:  Joshua Vossler, John Watts, and Tim Hodge.  Provided by : Coastal Carolina University  License :  CC BY NC ND:  Attribution-NonCommercial-NoDerivatives Located at :  https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/
  • Putting It Together: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/putting-it-together-psychological-research/
  • Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:

All rights reserved content

  • Understanding Driver Distraction. Provided by : American Psychological Association. License : Other. License Terms: Standard YouTube License Located at : https://www.youtube.com/watch?v=XToWVxS_9lA&list=PLxf85IzktYWJ9MrXwt5GGX3W-16XgrwPW&index=9 .
  • Correlation vs. Causality: Freakonomics Movie. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=lbODqslc4Tg.
  • Psychological Research – Crash Course Psychology #2. Authored by : Hank Green. Provided by : Crash Course. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=hFV71QPvX2I .

Public domain content

  • Researchers review documents. Authored by : National Cancer Institute. Provided by : Wikimedia. Located at : https://commons.wikimedia.org/wiki/File:Researchers_review_documents.jpg . License : Public Domain: No Known Copyright

grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing

well-developed set of ideas that propose an explanation for observed phenomena

(plural: hypotheses) tentative and testable statement about the relationship between two or more variables

an experiment must be replicable by another researcher

implies that a theory should enable us to make predictions about future events

able to be disproven by experimental results

implies that all data must be considered when evaluating a hypothesis

committee of administrators, scientists, and community members that reviews proposals for research involving human participants

process of informing a research participant about what to expect during an experiment, any risks involved, and the implications of the research, and then obtaining the person’s consent to participate

purposely misleading experiment participants in order to maintain the integrity of the experiment

when an experiment involves deception, participants are told complete and truthful information about the experiment at its conclusion

committee of administrators, scientists, veterinarians, and community members that reviews proposals for research involving non-human animals

research studies that do not test specific relationships between variables

research investigating the relationship between two or more variables

research method that uses hypothesis testing to make inferences about how one variable impacts and causes another

observation of behavior in its natural setting

inferring that the results for a sample apply to the larger population

when observations may be skewed to align with observer expectations

measure of agreement among observers on how they record and classify a particular event

observational research study focusing on one or a few people

list of questions to be answered by research participants—given as paper-and-pencil questionnaires, administered electronically, or conducted verbally—allowing researchers to collect data from a large number of people

subset of individuals selected from the larger population

overall group of individuals that the researchers are interested in

method of research using past records or data sets to answer various research questions, or to search for interesting patterns or relationships

studies in which the same group of individuals is surveyed or measured repeatedly over an extended period of time

compares multiple segments of a population at a single time

reduction in number of research participants as some drop out of the study over time

relationship between two or more variables; when two variables are correlated, one variable changes as the other does

number from -1 to +1, indicating the strength and direction of the relationship between variables, and usually represented by r

two variables change in the same direction, both becoming either larger or smaller

two variables change in different directions, with one becoming larger as the other becomes smaller; a negative correlation is not the same thing as no correlation

changes in one variable cause the changes in the other variable; can be determined only through an experimental research design

unanticipated outside factor that affects both variables of interest, often giving the false impression that changes in one variable cause changes in the other variable, when, in actuality, the outside factor causes changes in both variables

seeing relationships between two things when in reality no such relationship exists

tendency to ignore evidence that disproves ideas or beliefs

group designed to answer the research question; experimental manipulation is the only difference between the experimental and control groups, so any differences between the two are due to experimental manipulation rather than chance

serves as a basis for comparison and controls for chance factors that might influence the results of the study—by holding such factors constant across groups so that the experimental manipulation is the only difference between groups

description of what actions and operations will be used to measure the dependent variables and manipulate the independent variables

researcher expectations skew the results of the study

experiment in which the researcher knows which participants are in the experimental group and which are in the control group

experiment in which both the researchers and the participants are blind to group assignments

people's expectations or beliefs influencing or determining their experience in a given situation

variable that is influenced or controlled by the experimenter; in a sound experimental study, the independent variable is the only important difference between the experimental and control group

variable that the researcher measures to see how much effect the independent variable had

subjects of psychological research

subset of a larger population in which every member of the population has an equal chance of being selected

method of experimental group assignment in which all participants have an equal chance of being assigned to either group

consistency and reproducibility of a given result

accuracy of a given result in measuring what it is designed to measure

determines how likely any difference between experimental groups is due to chance

statistical probability that represents the likelihood that experimental results happened by chance

Psychological Science is the scientific study of mind, brain, and behavior. We will explore what it means to be human in this class. It has never been more important for us to understand what makes people tick, how to evaluate information critically, and the importance of history. Psychology can also help you in your future career; indeed, there are very few jobs out there with no human interaction!

Because psychology is a science, we analyze human behavior through the scientific method. There are several ways to investigate human phenomena, such as observation, experiments, and more. We will discuss the basics, pros and cons of each! We will also dig deeper into the important ethical guidelines that psychologists must follow in order to do research. Lastly, we will briefly introduce ourselves to statistics, the language of scientific research. While reading the content in these chapters, try to find examples of material that can fit with the themes of the course.

To get us started:

  • The study of the mind moved away from introspection toward reaction-time studies as we learned more about empiricism
  • Psychologists work in careers outside of the typical "clinician" role. We advise in human factors, education, policy, and more!
  • While completing an observational study, psychologists work to aggregate common themes to explain the behavior of the group (sample) as a whole. In doing so, we still allow for normal variation within the group!
  • The IRB and IACUC are important in ensuring ethics are maintained for both human and animal subjects

Psychological Science: Understanding Human Behavior Copyright © by Karenna Malavanti is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

Causal Research: What it is, Tips & Examples

Causal research examines if there's a cause-and-effect relationship between two separate events. Learn everything you need to know about it.

Causal research is classified as conclusive research, since it attempts to establish a cause-and-effect link between two variables. This research is mainly used to determine the cause of a particular behavior. We can use it to determine what changes occur in a dependent variable due to a change in an independent variable.

It can assist you in evaluating marketing activities, improving internal procedures, and developing more effective business plans. Understanding how one circumstance affects another may help you determine the most effective methods for satisfying your business needs.

LEARN ABOUT: Behavioral Research

This post will explain causal research, define its essential components, describe its benefits and limitations, and provide some important tips.

Content Index

  • What is causal research?
  • Components of causal research: temporal sequence, non-spurious association, concomitant variation
  • Advantages and disadvantages
  • Causal research examples
  • Causal research tips

What is causal research?

Causal research is also known as explanatory research . It’s a type of research that examines if there’s a cause-and-effect relationship between two separate events. This would occur when there is a change in one of the independent variables, which is causing changes in the dependent variable.

You can use causal research to evaluate the effects of particular changes on existing norms, procedures, and so on. This type of research examines a condition or a research problem to explain the patterns of interactions between variables.

LEARN ABOUT: Research Process Steps

Components of causal research

Only specific causal information can demonstrate the existence of cause-and-effect linkages. The three key components of causal research are as follows:

Causal Research Components

Temporal sequence

The cause must occur before the effect; cause and effect can only be linked if the cause precedes the effect. For example, if a profit increase occurred before a new advertisement aired, the profit cannot be attributed to the increased advertising spending.

Non-spurious association

Linked fluctuations between two variables can only be treated as causal if no other variable is related to both the cause and the effect. For example, a notebook manufacturer has discovered a correlation between notebook sales and the autumn season: during this season, more people buy notebooks because students are stocking up for the upcoming semester.

During the summer, the company launched an advertising campaign for notebooks. To test their assumption, they can examine the campaign data to see whether the increase in notebook sales was due to the students' natural rhythm of buying notebooks or to the advertisement.

Concomitant variation

Concomitant variation is defined as a quantitative change in the effect that happens solely as a result of a quantitative change in the cause. This means that the two variables must change together in a consistent way. You can examine the validity of a cause-and-effect connection by seeing whether the independent variable causes a change in the dependent variable.

For example, if a company has made no attempt to enhance sales by hiring skilled employees or offering them training, then an increase in sales cannot be credited to experienced hires; other factors must have contributed to the increase.
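As a rough illustration of concomitant variation, the sketch below (in Python, with invented monthly figures) checks whether month-to-month changes in a hypothetical cause, ad spend, move in step with changes in the effect, sales:

```python
# Illustrative only: invented monthly figures for a hypothetical campaign.
ad_spend = [10, 12, 15, 14, 18, 20]        # ad spend, in thousands
sales = [100, 110, 130, 125, 150, 160]     # units sold

def changes(series):
    """Month-over-month differences."""
    return [b - a for a, b in zip(series, series[1:])]

# Concomitant variation: the effect should move in step with the cause.
matching = [
    (dc > 0) == (de > 0)
    for dc, de in zip(changes(ad_spend), changes(sales))
]
agreement = sum(matching) / len(matching)
# agreement of 1.0 means every rise or fall in spend was mirrored in sales
```

Perfect agreement alone does not establish causation: temporal sequence and the absence of a third variable still have to be checked.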

Causal Research Advantages and Disadvantages

Causal or explanatory research has various advantages for both academics and businesses. As with any other research method, it has a few disadvantages that researchers should be aware of. Let’s look at some of the advantages and disadvantages of this research design .

Advantages:

  • Helps identify the causes of system processes, allowing the researcher to take the required steps to resolve issues or improve outcomes.
  • Provides replication if it is required.
  • Assists in determining the effects of changing procedures and methods.
  • Subjects are chosen in a methodical manner, which is beneficial for improving internal validity .
  • Enables analysis of the effects of changes on existing events, processes, phenomena, and so on.
  • Finds the sources of variable correlations, bridging the gap left by correlational research .

Disadvantages:

  • It is not always possible to monitor the effects of all external factors, which makes causal research challenging to carry out.
  • It is time-consuming and can be costly to execute.
  • The large range of factors and variables present in a particular setting makes it difficult to draw firm conclusions.
  • The most common error in this research is mistaking coincidence for causality: a chance co-occurrence of a cause and an effect can be misread as a causal relationship.
  • To corroborate the findings of explanatory research , you must undertake additional types of research; you can't draw conclusions from a causal study alone.
  • It is often easy for a researcher to see that two variables are related, but difficult to determine which variable is the cause and which is the effect.

Since different industries and fields can carry out causal comparative research , it can serve many different purposes. Let’s discuss 3 examples of causal research:

Advertising Research

Companies can use causal research to enact and study advertising campaigns. For example, six months after a business debuts a new ad in a region, it sees a 5% increase in sales revenue.

To assess whether the ad has caused the lift, they run the same ad in randomly selected regions so they can compare sales data across regions over another six months. When sales pick up again in these regions, they can conclude that the ad and sales have a valuable cause-and-effect relationship.
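A minimal sketch of how such a regional comparison might be quantified (all sales figures are invented; in a real study you would use a formal significance test rather than this crude ratio):

```python
from statistics import mean, stdev

# Invented monthly sales (in $k) over six months, for illustration only.
ad_regions = [105, 110, 108, 112, 109, 111]      # regions that ran the ad
control_regions = [100, 101, 99, 102, 100, 101]  # regions that did not

lift = mean(ad_regions) - mean(control_regions)

# Crude effect-size check: is the lift large relative to the variability?
pooled_spread = (stdev(ad_regions) + stdev(control_regions)) / 2
effect = lift / pooled_spread
# A large effect suggests the difference is unlikely to be chance alone.
```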

LEARN ABOUT: Ad Testing

Customer Loyalty Research

Businesses can use causal research to determine the best customer retention strategies. They monitor interactions between associates and customers to identify patterns of cause and effect, such as a product demonstration technique leading to increased or decreased sales from the same customers.

For example, a company implements a new one-to-one marketing strategy for a small group of customers and sees a measurable increase in monthly subscriptions. After receiving identical results from several groups, they conclude that the one-to-one marketing strategy has the causal effect they intended.

Educational Research

Learning specialists, academics, and teachers use causal research to learn more about how policies affect students and to identify possible trends in student behavior. For example, a university administration notices that more science students drop out of their program in their third year, a rate 7% higher than in any other year.

They interview a random group of science students and discover several factors that could contribute to these circumstances, including non-university factors. Through in-depth statistical analysis, researchers uncover the top three factors, and management creates a committee to address them in the future.

Causal research is frequently the last type of research done during the research process and is considered definitive. As a result, it is critical to plan the research with specific parameters and goals in mind. Here are some tips for conducting causal research successfully:

1. Understand the parameters of your research

Identify any design strategies that change the way you understand your data. Determine how you acquired data and whether your conclusions are more applicable in practice in some cases than others.

2. Pick a random sampling strategy

Choosing a technique that works best for you when you have participants or subjects is critical. You can use a database to generate a random list, select random selections from sorted categories, or conduct a survey.
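The sampling and assignment steps described above can be sketched in a few lines of Python (the participant pool and group sizes are invented for illustration):

```python
import random

# Invented participant pool for illustration.
population = [f"participant_{i}" for i in range(1000)]

random.seed(42)  # fixed seed so the draw is reproducible

# Simple random sample: every member has an equal chance of selection.
sample = random.sample(population, k=50)

# Random assignment: split the sample into two groups purely by chance.
shuffled = list(sample)
random.shuffle(shuffled)
experimental, control = shuffled[:25], shuffled[25:]
```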

3. Determine all possible relations

Examine the different relationships between your independent and dependent variables to build more sophisticated insights and conclusions.

To summarize, causal or explanatory research helps organizations understand how their current activities and behaviors will impact them in the future. This is incredibly useful in a wide range of business scenarios. This research can ensure the outcome of various marketing activities, campaigns, and collaterals. Using the findings of this research program, you will be able to design more successful business strategies that take advantage of every business opportunity.

At QuestionPro, we offer researchers all the tools they need to carry out their projects. Our platform can help you get the most out of your data by guiding you through the process.


Correlation vs. Causation | Difference, Designs & Examples

Published on July 12, 2021 by Pritha Bhandari . Revised on June 22, 2023.

Correlation means there is a statistical association between variables. Causation means that a change in one variable causes a change in another variable.

In research, you might have come across the phrase “correlation doesn’t imply causation.” Correlation and causation are two related ideas, but understanding their differences will help you critically evaluate sources and interpret scientific research.

Table of contents

  • What’s the difference?
  • Why doesn’t correlation mean causation?
  • Correlational research
  • Third variable problem
  • Regression to the mean
  • Spurious correlations
  • Directionality problem
  • Causal research
  • Other interesting articles
  • Frequently asked questions about correlation and causation

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables. These variables change together: they covary. But this covariation isn’t necessarily due to a direct or indirect causal link.

Causation means that changes in one variable bring about changes in the other; there is a cause-and-effect relationship between the variables. The two variables are correlated with each other, and there is also a causal link between them.


There are two main reasons why correlation isn’t causation. These problems are important to identify for drawing sound scientific conclusions from research.

The third variable problem means that a confounding variable affects both variables of interest, making them seem causally related when they are not. For example, ice cream sales and violent crime rates are closely correlated, but they are not causally linked. Instead, hot temperatures, a third variable, affect both variables separately. Failing to account for third variables can allow research biases to creep into your work.
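The ice cream and crime example can be reproduced in a short simulation. The toy model below contains no causal link between the two variables, yet they come out strongly correlated because temperature drives both (all coefficients and noise levels are invented):

```python
import random

random.seed(0)

# Simulated confounder: temperature drives both variables in this toy model.
temps = [random.uniform(0, 35) for _ in range(365)]          # daily highs, °C
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]    # sales
crime = [1.5 * t + random.gauss(0, 5) for t in temps]        # incidents

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson_r(ice_cream, crime)
# r comes out strongly positive even though neither variable causes the other.
```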

The directionality problem occurs when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other. For example, vitamin D levels are correlated with depression, but it’s not clear whether low vitamin D causes depression, or whether depression causes reduced vitamin D intake.

You’ll need to use an appropriate research design to distinguish between correlational and causal relationships:

  • Correlational research designs can only demonstrate correlational links between variables.
  • Experimental designs can test causation.

In a correlational research design, you collect data on your variables without manipulating them.

Correlational research is usually high in external validity , so you can generalize your findings to real life settings. But these studies are low in internal validity , which makes it difficult to causally connect changes in one variable to changes in the other.

These research designs are commonly used when it’s unethical, too costly, or too difficult to perform controlled experiments. They are also used to study relationships that aren’t expected to be causal.

Without controlled experiments, it’s hard to say whether it was the variable you’re interested in that caused changes in another variable. Extraneous variables are any third variable or omitted variable other than your variables of interest that could affect your results.

Limited control in correlational research means that extraneous or confounding variables serve as alternative explanations for the results. Confounding variables can make it seem as though a correlational relationship is causal when it isn’t.

When two variables are correlated, all you can say is that changes in one variable occur alongside changes in the other.


Regression to the mean (RTM) is observed when values that are extremely high or extremely low on the first measurement move closer to the average on the second measurement. Particularly in research that intentionally focuses on the most extreme cases or events, RTM should always be considered as a possible cause of an observed change.

Players or teams featured on the cover of Sports Illustrated (SI) have earned their place by performing exceptionally well. But athletic success is a mix of skill and luck, and even the best players don’t always win.

Chances are that good luck will not continue indefinitely, and neither can exceptional success.
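Regression to the mean is easy to demonstrate in a simulation. In the toy model below, performance is stable skill plus season-to-season luck; the top performers of season one score lower, on average, in season two (all parameters are invented):

```python
import random

random.seed(1)

# Toy model: performance = stable skill + luck that is redrawn each season.
skill = [random.gauss(100, 10) for _ in range(10_000)]
season1 = [s + random.gauss(0, 10) for s in skill]
season2 = [s + random.gauss(0, 10) for s in skill]

# Select the "cover stars": the top 1% of season-one performers.
cutoff = sorted(season1)[-100]
stars = [i for i, v in enumerate(season1) if v >= cutoff]

avg_season1 = sum(season1[i] for i in stars) / len(stars)
avg_season2 = sum(season2[i] for i in stars) / len(stars)
# avg_season2 is lower than avg_season1: part of an extreme result is luck,
# and luck does not repeat, so extreme performers drift back toward average.
```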

A spurious correlation is when two variables appear to be related through hidden third variables or simply by coincidence.

The satirical “Theory of the Stork” draws a simple causal link between stork populations and human birth rates to argue that storks physically deliver babies. It shows why you can’t conclude causation from correlational research alone.

When you analyze correlations in a large dataset with many variables, the chances of finding at least one statistically significant result are high. In this case, you’re more likely to make a type I error . This means erroneously concluding there is a true correlation between variables in the population based on skewed sample data.
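This multiple-comparisons effect can be demonstrated with purely random data. In the sketch below, 20 unrelated variables produce several "significant" correlations at the 5% level just by chance (0.361 is approximately the two-tailed 5% critical value of r for 30 observations):

```python
import random
from itertools import combinations

random.seed(2)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# 20 purely random variables with 30 observations each: no true relationships.
data = [[random.gauss(0, 1) for _ in range(30)] for _ in range(20)]

# Approximate two-tailed 5% critical value of r for n = 30.
CRITICAL_R = 0.361

false_positives = sum(
    1 for a, b in combinations(data, 2)
    if abs(pearson_r(a, b)) > CRITICAL_R
)
# 190 pairs are tested, so roughly 190 * 0.05 (about 9 or 10) pairs look
# "significant" by chance alone; every one of them is a type I error.
```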

To demonstrate causation, you need to show a directional relationship with no alternative explanations. This relationship can be unidirectional, with one variable impacting the other, or bidirectional, where both variables impact each other.

A correlational design won’t be able to distinguish between any of these possibilities, but an experimental design can test each possible direction, one at a time.

  • Physical activity may affect self esteem
  • Self esteem may affect physical activity
  • Physical activity and self esteem may both affect each other

In correlational research, the directionality of a relationship is unclear because there is limited researcher control. You might risk concluding reverse causality, the wrong direction of the relationship.

Causal links between variables can only be truly demonstrated with controlled experiments . Experiments test formal predictions, called hypotheses , to establish causality in one direction at a time.

Experiments are high in internal validity , so cause-and-effect relationships can be demonstrated with reasonable confidence.

You can establish directionality in one direction because you manipulate an independent variable before measuring the change in a dependent variable.

In a controlled experiment, you can also eliminate the influence of third variables by using random assignment and control groups.

Random assignment helps distribute participant characteristics evenly between groups so that they’re similar and comparable. A control group lets you compare the experimental manipulation to a similar treatment or no treatment (or a placebo, to control for the placebo effect ).

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to false cause fallacy .

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

InformedHealth.org [Internet]. Cologne, Germany: Institute for Quality and Efficiency in Health Care (IQWiG); 2006-.


In brief: What types of studies are there?

Last Update: September 8, 2016; Next update: 2024.

There are various types of scientific studies such as experiments and comparative analyses, observational studies, surveys, or interviews. The choice of study type will mainly depend on the research question being asked.

When making decisions, patients and doctors need reliable answers to a number of questions. Depending on the medical condition and patient's personal situation, the following questions may be asked:

  • What is the cause of the condition?
  • What is the natural course of the disease if left untreated?
  • What will change because of the treatment?
  • How many other people have the same condition?
  • How do other people cope with it?

Each of these questions can best be answered by a different type of study.

In order to get reliable results, a study has to be carefully planned right from the start. One especially important consideration is which type of study is best suited to the research question. A study protocol should be written, and the study's process should be fully documented, so that other scientists can reproduce and check the results afterwards.

The main types of studies are randomized controlled trials (RCTs), cohort studies, case-control studies and qualitative studies.

  • Randomized controlled trials

If you want to know how effective a treatment or diagnostic test is, randomized trials provide the most reliable answers. Because the effect of the treatment is often compared with "no treatment" (or a different treatment), they can also show what happens if you opt to not have the treatment or diagnostic test.

When planning this type of study, a research question is stipulated first. This involves deciding what exactly should be tested and in what group of people. In order to be able to reliably assess how effective the treatment is, the following things also need to be determined before the study is started:

  • How long the study should last
  • How many participants are needed
  • How the effect of the treatment should be measured

For instance, a medication used to treat menopause symptoms needs to be tested on a different group of people than a flu medicine. And a study on treatment for a stuffy nose may be much shorter than a study on a drug taken to prevent strokes.

“Randomized” means divided into groups by chance. In RCTs participants are randomly assigned to one of two or more groups. Then one group receives the new drug A, for example, while the other group receives the conventional drug B or a placebo (dummy drug). Things like the appearance and taste of the drug and the placebo should be as similar as possible. Ideally, the assignment to the various groups is done "double blinded," meaning that neither the participants nor their doctors know who is in which group.

The assignment to groups has to be random in order to make sure that only the effects of the medications are compared, and no other factors influence the results. If doctors decided themselves which patients should receive which treatment, they might – for instance – give the more promising drug to patients who have better chances of recovery. This would distort the results. Random allocation ensures that differences between the results of the two groups at the end of the study are actually due to the treatment and not something else.
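As a sketch of what random allocation looks like in practice, the snippet below shuffles a list of hypothetical participants and deals them into two equal-sized groups. All names and numbers are invented for illustration:

```python
import random

def randomize(participants, n_groups=2, seed=None):
    """Randomly assign participants to n_groups of (near-)equal size."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    # Deal the shuffled participants round-robin into the groups.
    return [shuffled[i::n_groups] for i in range(n_groups)]

patients = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical patients
drug_a, placebo = randomize(patients, n_groups=2, seed=42)
print(len(drug_a), len(placebo))  # 10 10
```

Because the assignment depends only on the shuffle, no characteristic of a patient can influence which group they end up in.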

Randomized controlled trials provide the best results when trying to find out if there is a cause-and-effect relationship. RCTs can answer questions such as these:

  • Is the new drug A better than the standard treatment for medical condition X?
  • Does regular physical activity speed up recovery after a slipped disk when compared to passive waiting?
  • Cohort studies

A cohort is a group of people who are observed frequently over a period of many years – for instance, to determine how often a certain disease occurs. In a cohort study, two (or more) groups that are exposed to different things are compared with each other: For example, one group might smoke while the other doesn't. Or one group may be exposed to a hazardous substance at work, while the comparison group isn't. The researchers then observe how the health of the people in both groups develops over the course of several years, whether they become ill, and how many of them pass away. Cohort studies often include people who are healthy at the start of the study. Cohort studies can have a prospective (forward-looking) design or a retrospective (backward-looking) design. In a prospective study, the result that the researchers are interested in (such as a specific illness) has not yet occurred by the time the study starts. But the outcomes that they want to measure and other possible influential factors can be precisely defined beforehand. In a retrospective study, the result (the illness) has already occurred before the study starts, and the researchers look at the patient's history to find risk factors.

Cohort studies are especially useful if you want to find out how common a medical condition is and which factors increase the risk of developing it. They can answer questions such as:

  • How does high blood pressure affect heart health?
  • Does smoking increase your risk of lung cancer?

For example, one famous long-term cohort study observed a group of 40,000 British doctors, many of whom smoked. It tracked how many doctors died over the years, and what they died of. The study showed that smoking caused a lot of deaths, and that people who smoked more were more likely to get ill and die.
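The key quantity in a cohort study like this is the risk ratio: how much more often the outcome occurs in the exposed group than in the unexposed group. The counts below are made up for illustration and are not the British Doctors Study's actual data:

```python
# Hypothetical 2x2 cohort counts (illustrative only):
#                 developed lung cancer   did not
# smokers                  90               910
# non-smokers              10               990

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: incidence in the exposed group / incidence in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

rr = relative_risk(90, 1000, 10, 1000)
print(rr)  # 9.0 -> in this made-up example, smokers' risk is 9 times higher
```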

  • Case-control studies

Case-control studies compare people who have a certain medical condition with people who do not have the medical condition, but who are otherwise as similar as possible, for example in terms of their sex and age. Then the two groups are interviewed, or their medical files are analyzed, to identify anything that might be a risk factor for the disease. So case-control studies are generally retrospective.

Case-control studies are one way to gain knowledge about rare diseases. They are also not as expensive or time-consuming as RCTs or cohort studies. But it is often difficult to tell which people are the most similar to each other and should therefore be compared with each other. Because the researchers usually ask about past events, they are dependent on the participants’ memories. But the people they interview might no longer remember whether they were, for instance, exposed to certain risk factors in the past.
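Because a case-control study starts from the outcome rather than the exposure, the usual effect measure is the odds ratio rather than the risk ratio. A minimal sketch, with invented counts:

```python
def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio from a retrospective case-control table:
    (odds of exposure among cases) / (odds of exposure among controls)."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Hypothetical counts: 40 of 50 cases were exposed vs. 20 of 50 controls.
print(odds_ratio(40, 10, 20, 30))  # 6.0
```

An odds ratio above 1 suggests an association between exposure and disease, but, as noted above, the reliability of the comparison depends on how well the controls match the cases.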

Still, case-control studies can help to investigate the causes of a specific disease, and answer questions like these:

  • Do HPV infections increase the risk of cervical cancer?
  • Is the risk of sudden infant death syndrome (“cot death”) increased by parents smoking at home?

Cohort studies and case-control studies are types of "observational studies."

  • Cross-sectional studies

Many people will be familiar with this kind of study. The classic type of cross-sectional study is the survey: A representative group of people – usually a random sample – are interviewed or examined in order to find out their opinions or facts. Because this data is collected only once, cross-sectional studies are relatively quick and inexpensive. They can provide information on things like the prevalence of a particular disease (how common it is). But they can't tell us anything about the cause of a disease or what the best treatment might be.

Cross-sectional studies can answer questions such as these:

  • How tall are German men and women at age 20?
  • How many people go for cancer screening?
  • Qualitative studies

This type of study helps us understand, for instance, what it is like for people to live with a certain disease. Unlike other kinds of research, qualitative research does not rely on numbers and data. Instead, it is based on information collected by talking to people who have a particular medical condition and people close to them. Written documents and observations are used too. The information that is obtained is then analyzed and interpreted using a number of methods.

Qualitative studies can answer questions such as these:

  • How do women experience a Cesarean section?
  • What aspects of treatment are especially important to men who have prostate cancer?
  • How reliable are the different types of studies?

Each type of study has its advantages and disadvantages. It is always important to find out the following: Did the researchers select a study type that will actually allow them to find the answers they are looking for? You can’t use a survey to find out what is causing a particular disease, for instance.

It is really only possible to draw reliable conclusions about cause and effect by using randomized controlled trials. Other types of studies usually only allow us to establish correlations (relationships where it isn’t clear whether one thing is causing the other). For instance, data from a cohort study may show that people who eat more red meat develop bowel cancer more often than people who don't. This might suggest that eating red meat can increase your risk of getting bowel cancer. But people who eat a lot of red meat might also smoke more, drink more alcohol, or tend to be overweight. The influence of these and other possible risk factors can only be determined by comparing two equal-sized groups made up of randomly assigned participants.

That is why randomized controlled trials are usually the only suitable way to find out how effective a treatment is. Systematic reviews, which summarize multiple RCTs, are even better. In order to be good-quality, though, all studies and systematic reviews need to be designed properly and eliminate as many potential sources of error as possible.





Experimental Design – Types, Methods, Guide


Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
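A randomized block assignment can be sketched in a few lines: group participants by the blocking characteristic, then randomize within each block. The names and the blocking variable below are hypothetical:

```python
import random
from collections import defaultdict

def block_randomize(participants, block_key, treatments, seed=None):
    """Group participants into blocks by a characteristic, then randomize within each block."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for p in participants:
        blocks[block_key(p)].append(p)
    assignment = {}
    for block in blocks.values():
        rng.shuffle(block)
        for i, p in enumerate(block):
            # Alternate treatments within the shuffled block for balanced group sizes.
            assignment[p] = treatments[i % len(treatments)]
    return assignment

# Hypothetical participants blocked by sex, so each treatment gets equal numbers of each.
people = [("Ana", "F"), ("Ben", "M"), ("Cara", "F"), ("Dan", "M"),
          ("Eve", "F"), ("Finn", "M"), ("Gia", "F"), ("Hal", "M")]
assignment = block_randomize(people, block_key=lambda p: p[1],
                             treatments=["treatment", "control"], seed=7)
```

Blocking guarantees the treatment groups are balanced on the blocking characteristic, something a completely randomized design only achieves on average.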

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.
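The conditions of a factorial design are simply the Cartesian product of the factor levels. A sketch with two hypothetical factors:

```python
from itertools import product

# Hypothetical 2x3 factorial design: every combination of dose and schedule
# becomes one experimental condition.
dose = ["low", "high"]
schedule = ["daily", "weekly", "monthly"]
conditions = list(product(dose, schedule))
print(len(conditions))  # 6
```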

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.
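One simple counterbalancing scheme cycles participants through every possible ordering of the conditions, so each order is used equally often. A minimal sketch with invented participant labels:

```python
from itertools import cycle, permutations

def counterbalance(participants, conditions):
    """Assign each participant one ordering of the conditions, cycling through
    all possible orderings so each order is used (roughly) equally often."""
    orders = cycle(permutations(conditions))
    return {p: next(orders) for p in participants}

# With 6 participants and 3 conditions (3! = 6 orderings), every order appears once.
schedule = counterbalance([f"P{i}" for i in range(6)], ["A", "B", "C"])
```

With full counterbalancing, any effect of condition order is spread evenly across the sample rather than confounded with one treatment.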

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Manipulation

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
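All of these summary measures are available in Python's standard library. The scores below are hypothetical:

```python
import statistics

scores = [72, 85, 90, 85, 78, 95, 88]  # hypothetical post-test scores

print("mean:  ", statistics.mean(scores))
print("median:", statistics.median(scores))   # 85
print("mode:  ", statistics.mode(scores))     # 85
print("range: ", max(scores) - min(scores))   # 23
print("stdev: ", round(statistics.stdev(scores), 2))
```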

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
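The one-way ANOVA F statistic is the ratio of between-group variance to within-group variance, and can be computed from scratch to show where it comes from. The group data below are invented:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square / within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k = len(groups)              # number of groups
    n = len(all_values)          # total number of observations
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = one_way_anova_f([4, 5, 6], [7, 8, 9], [10, 11, 12])
print(f)  # 27.0
```

A large F indicates that the groups differ far more than chance variation within groups would explain; in practice the statistic is compared against an F distribution to obtain a p value (e.g. via `scipy.stats.f_oneway`).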

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
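For simple linear regression, the least-squares slope and intercept follow directly from the data. The hours/score numbers below are hypothetical:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5]
score = [52, 55, 61, 64, 68]
slope, intercept = linear_fit(hours, score)
print(slope, intercept)  # slope 4.1: about 4 extra points per hour in this toy data
```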

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question : Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment : Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment : Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, it is retained; if they do not, it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.
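The steps above can be sketched end to end as a minimal between-subjects experiment. Everything here is simulated with assumed numbers (group sizes, means, and spreads are invented):

```python
import random
from math import sqrt
from statistics import mean, stdev

rng = random.Random(0)

# Steps 1-2: question and hypothesis -- does a (hypothetical) intervention raise scores?
# Steps 3-5: between-subjects design with random assignment of 40 participants.
participants = list(range(40))
rng.shuffle(participants)
treatment_ids, control_ids = participants[:20], participants[20:]

# Step 6: "conduct" the experiment by simulating outcome scores for each group.
treatment = [rng.gauss(75, 10) for _ in treatment_ids]
control = [rng.gauss(70, 10) for _ in control_ids]

# Step 7: analyze -- Welch's t statistic for the difference in group means.
se = sqrt(stdev(treatment) ** 2 / len(treatment) + stdev(control) ** 2 / len(control))
t_stat = (mean(treatment) - mean(control)) / se

# Step 8: draw a (rough) conclusion; a real analysis would also compute a p value.
print(f"difference in means: {mean(treatment) - mean(control):.2f}, t = {t_stat:.2f}")
```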

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication : Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision : Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability : If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.


Organizing Your Social Sciences Research Paper: Types of Research Designs


Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby ensuring you will effectively address the research problem; it constitutes the blueprint for the collection, measurement, and analysis of data. Note that your research problem determines the type of design you should use, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base . 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations far too early, before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing research designs in your paper can vary considerably, but any well-developed design will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the data which will be necessary for an adequate testing of the hypotheses and explain how such data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction and varies in length depending on the type of design you are using. However, you can get a sense of what to do by reviewing the literature of studies that have utilized the same research design. This can provide an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.


Causal Design

Definition and Purpose

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
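The nonspuriousness condition can be illustrated with a small, entirely hypothetical simulation. Here two variables X and Y are both driven by a third variable Z, so they correlate strongly overall; once Z is held (roughly) constant, the apparent association largely disappears, revealing it as spurious:

```python
import random

# Simulate a spurious relationship: X and Y share a common cause Z.
rng = random.Random(1)
data = []
for _ in range(5000):
    z = rng.gauss(0, 1)            # the confounding third variable
    x = z + rng.gauss(0, 0.3)      # "independent" variable, driven by Z
    y = z + rng.gauss(0, 0.3)      # "dependent" variable, also driven by Z
    data.append((x, y, z))

def corr(pairs):
    """Pearson correlation between the two members of each pair."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in pairs) / n
    sx = (sum((p[0] - mx) ** 2 for p in pairs) / n) ** 0.5
    sy = (sum((p[1] - my) ** 2 for p in pairs) / n) ** 0.5
    return cov / (sx * sy)

overall = corr([(x, y) for x, y, _ in data])
# Hold Z roughly constant by stratifying on a narrow band of Z values
stratum = [(x, y) for x, y, z in data if -0.1 < z < 0.1]
within = corr(stratum)
print(round(overall, 2), round(within, 2))  # strong overall, weak within the stratum
```

Stratification is one simple analogue of "controlling for" a third variable; the same logic underlies partial correlation and regression adjustment.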

What do these studies tell you?

  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.

What don't these studies tell you?

  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • In a causal relationship, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice. Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kuhn. “Causal-Comparative Design.” In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction. Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population from which the subject or representative member comes, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined simply by being part of the study in question (and being monitored for the outcome). Dates of entry into and exit from the study are individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its internal validity is lower than that of study designs where the researcher randomly assigns participants.
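The rate-based data available from an open cohort can be sketched as a person-time calculation: each participant contributes follow-up time from their individual entry to their individual exit. The participants, follow-up durations, and outcomes below are hypothetical:

```python
# Each tuple: (years of follow-up contributed, developed the outcome?)
cohort = [
    (2.0, False),
    (5.5, True),
    (1.0, False),
    (4.0, True),
    (3.5, False),
]

# Total person-time: the denominator for an incidence rate
person_years = sum(years for years, _ in cohort)
# Number of new cases observed during follow-up
cases = sum(1 for _, event in cohort if event)
incidence_rate = cases / person_years  # cases per person-year
print(f"{cases} cases over {person_years} person-years "
      f"= {incidence_rate:.3f} per person-year")
```

Because the denominator is person-time rather than a fixed head count, the rate remains well defined even as participants enter and leave the open cohort at different dates.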

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101 . Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study . Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow-up to the findings.
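The "snapshot" character of a cross-sectional design can be sketched as a prevalence estimate from a single survey wave. The respondents, groups, and outcome below are hypothetical:

```python
# One survey, one point in time: every respondent is measured once.
respondents = [
    {"age_group": "18-34", "has_outcome": False},
    {"age_group": "18-34", "has_outcome": True},
    {"age_group": "35-54", "has_outcome": True},
    {"age_group": "35-54", "has_outcome": False},
    {"age_group": "55+",   "has_outcome": True},
    {"age_group": "55+",   "has_outcome": True},
]

# Point prevalence: share of the sample with the outcome right now.
prevalence = sum(r["has_outcome"] for r in respondents) / len(respondents)
print(f"Point prevalence: {prevalence:.0%}")
```

Note what the design cannot do: with only one measurement per person there is no sequence of events, so nothing here distinguishes cause from effect or shows change over time.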

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design, Application, Strengths and Weaknesses of Cross-Sectional Studies . Healthknowledge, 2009. Cross-Sectional Study . Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies . Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design , September 26, 2008. Explorable.com website.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs . School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research . Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design . Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research . Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation or undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; they provide insight but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research . Wikipedia.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • A large sample size and accurate sampling are needed to achieve representativeness.
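The repeated-measures logic of a panel study can be sketched in a few lines: because the same (hypothetical) individuals are measured at each wave, change is computed within individuals over time rather than between different samples:

```python
# Two waves of measurement on the same hypothetical panel members.
waves = {
    "wave_1": {"ann": 62, "ben": 70, "cam": 55},
    "wave_2": {"ann": 68, "ben": 69, "cam": 61},
}

# Within-person change between waves: the core longitudinal quantity.
changes = {
    person: waves["wave_2"][person] - waves["wave_1"][person]
    for person in waves["wave_1"]
}
mean_change = sum(changes.values()) / len(changes)
print(changes, round(mean_change, 2))
```

Tracking individuals across waves is what lets the design describe the direction and magnitude of change; it also shows why attrition hurts, since a person missing from a later wave drops out of every within-person difference.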

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study . Wikipedia.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation. Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research. Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice. New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws conclusions by comparing subjects against a control group in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study can provide useful insight into a phenomenon while avoiding the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is time consuming, and observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research. The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

  • Last Updated: Sep 8, 2023 12:19 PM
  • URL: https://guides.library.txstate.edu/socialscienceresearch

A step-by-step guide to causal study design using real-world data

  • Open access
  • Published: 19 June 2024




  • Sarah Ruth Hoffman 1 ,
  • Nilesh Gangan 1 ,
  • Xiaoxue Chen 2 ,
  • Joseph L. Smith 1 ,
  • Arlene Tave 1 ,
  • Yiling Yang 1 ,
  • Christopher L. Crowe 1 ,
  • Susan dosReis 3 &
  • Michael Grabner 1  



Due to the need for generalizable and rapidly delivered evidence to inform healthcare decision-making, real-world data have grown increasingly important for answering causal questions. However, causal inference using observational data poses numerous challenges, and the relevant methodological literature is vast. We endeavored to identify underlying unifying themes of causal inference using real-world healthcare data and connect them into a single schema to aid in observational study design, and to demonstrate this schema using a previously published research example. A multidisciplinary team (epidemiology, biostatistics, health economics) reviewed the literature related to causal inference and observational data to identify key concepts. A visual guide to causal study design was developed to concisely and clearly illustrate how the concepts are related to one another. A case study was selected to demonstrate an application of the guide. An eight-step guide to causal study design was created, integrating essential concepts from the literature and anchored into conceptual groupings according to natural steps in the study design process. The steps include defining the causal research question and the estimand; creating a directed acyclic graph; identifying biases, along with design and analytic techniques to mitigate their effects; and techniques to examine the robustness of findings. The cardiovascular case study demonstrates the applicability of the steps to developing a research plan. This paper used an existing study to demonstrate the relevance of the guide. We encourage researchers to incorporate this guide at the study design stage in order to elevate the quality of future real-world evidence.


1 Introduction

Approximately 50 new drugs are approved each year in the United States (Mullard 2022 ). For all new drugs, randomized controlled trials (RCTs) are the gold-standard by which potential effectiveness (“efficacy”) and safety are established. However, RCTs cannot guarantee how a drug will perform in a less controlled context. For this reason, regulators frequently require observational, post-approval studies using “real-world” data, sometimes even as a condition of drug approval. The “real-world” data requested by regulators is often derived from insurance claims databases and/or healthcare records. Importantly, these data are recorded during routine clinical care without concern for potential use in research. Yet, in recent years, there has been increasing use of such data for causal inference and regulatory decision making, presenting a variety of methodologic challenges for researchers and stakeholders to consider (Arlett et al. 2022 ; Berger et al. 2017 ; Concato and ElZarrad 2022 ; Cox et al. 2009 ; European Medicines Agency 2023 ; Franklin and Schneeweiss 2017 ; Girman et al. 2014 ; Hernán and Robins 2016 ; International Society for Pharmacoeconomics and Outcomes Research (ISPOR) 2022 ; International Society for Pharmacoepidemiology (ISPE) 2020 ; Stuart et al. 2013 ; U.S. Food and Drug Administration 2018 ; Velentgas et al. 2013 ).

Current guidance for causal inference using observational healthcare data articulates the need for careful study design (Berger et al. 2017 ; Cox et al. 2009 ; European Medicines Agency 2023 ; Girman et al. 2014 ; Hernán and Robins 2016 ; Stuart et al. 2013 ; Velentgas et al. 2013 ). In 2009, Cox et al. described common sources of bias in observational data and recommended specific strategies to mitigate these biases (Cox et al. 2009 ). In 2013, Stuart et al. emphasized counterfactual theory and trial emulation, offered several approaches to address unmeasured confounding, and provided guidance on the use of propensity scores to balance confounding covariates (Stuart et al. 2013 ). In 2013, the Agency for Healthcare Research and Quality (AHRQ) released an extensive, 200-page guide to developing a protocol for comparative effectiveness research using observational data (Velentgas et al. 2013 ). The guide emphasized development of the research question, with additional chapters on study design, comparator selection, sensitivity analyses, and directed acyclic graphs (Velentgas et al. 2013 ). In 2014, Girman et al. provided a clear set of steps for assessing study feasibility including examination of the appropriateness of the data for the research question (i.e., ‘fit-for-purpose’), empirical equipoise, and interpretability, stating that comparative effectiveness research using observational data “should be designed with the goal of drawing a causal inference” (Girman et al. 2014 ). In 2017 , Berger et al. described aspects of “study hygiene,” focusing on procedural practices to enhance confidence in, and credibility of, real-world data studies (Berger et al. 2017 ). 
Currently, the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP) maintains a guide on methodological standards in pharmacoepidemiology which discusses causal inference using observational data and includes an overview of study designs, a chapter on methods to address bias and confounding, and guidance on writing statistical analysis plans (European Medicines Agency 2023 ). In addition to these resources, the “target trial framework” provides a structured approach to planning studies for causal inferences from observational databases (Hernán and Robins 2016 ; Wang et al. 2023b ). This framework, published in 2016, encourages researchers to first imagine a clinical trial for the study question of interest and then to subsequently design the observational study to reflect the hypothetical trial (Hernán and Robins 2016 ).

While the literature addresses critical issues collectively, there remains a need for a framework that puts key components, including the target trial approach, into a simple, overarching schema (Loveless 2022 ) so they can be more easily remembered, and communicated to all stakeholders including (new) researchers, peer-reviewers, and other users of the research findings (e.g., practicing providers, professional clinical societies, regulators). For this reason, we created a step-by-step guide for causal inference using administrative health data, which aims to integrate these various best practices at a high level and complements existing, more specific guidance, including those from the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the International Society for Pharmacoepidemiology (ISPE) (Berger et al. 2017 ; Cox et al. 2009 ; Girman et al. 2014 ). We demonstrate the application of this schema using a previously published paper in cardiovascular research.

This work involved a formative phase and an implementation phase to evaluate the utility of the causal guide. In the formative phase, a multidisciplinary team with research expertise in epidemiology, biostatistics, and health economics reviewed selected literature (peer-reviewed publications, including those mentioned in the introduction, as well as graduate-level textbooks) related to causal inference and observational healthcare data from the pharmacoepidemiologic and pharmacoeconomic perspectives. The potential outcomes framework served as the foundation for our conception of causal inference (Rubin 2005 ). Information was grouped into the following four concepts: (1) Defining the Research Question; (2) Defining the Estimand; (3) Identifying and Mitigating Biases; (4) Sensitivity Analysis. A step-by-step guide to causal study design was developed to distill the essential elements of each concept, organizing them into a single schema so that the concepts are clearly related to one another. References for each step of the schema are included in the Supplemental Table.

In the implementation phase we tested the application of the causal guide to previously published work (Dondo et al. 2017 ). The previously published work utilized data from the Myocardial Ischaemia National Audit Project (MINAP), the United Kingdom’s national heart attack register. The goal of the study was to assess the effect of β-blockers on all-cause mortality among patients hospitalized for acute myocardial infarction without heart failure or left ventricular systolic dysfunction. We selected this paper for the case study because of its clear descriptions of the research goal and methods, and the explicit and methodical consideration of potential biases and use of sensitivity analyses to examine the robustness of the main findings.

3.1 Overview of the eight steps

The step-by-step guide to causal inference comprises eight distinct steps (Fig.  1 ) across the four concepts. As scientific inquiry and study design are iterative processes, the various steps may be completed in a different order than shown, and steps may be revisited.

figure 1

A step-by-step guide for causal study design

Abbreviations: GEE: generalized estimating equations; IPC/TW: inverse probability of censoring/treatment weighting; ITR: individual treatment response; MSM: marginal structural model; TE: treatment effect

Please refer to the Supplemental Table for references providing more in-depth information.

1. Ensure that the exposure and outcome are well-defined based on literature and expert opinion.

2. More specifically, measures of association are not affected by issues such as confounding and selection bias because they do not intend to isolate and quantify a single causal pathway. However, information bias (e.g., variable misclassification) can negatively affect association estimates, and association estimates remain subject to random variability (and are hence reported with confidence intervals).

3. This list is not exhaustive; it focuses on frequently encountered biases.

4. To assess bias in a nonrandomized study following the target trial framework, use of the ROBINS-I tool is recommended (https://www.bmj.com/content/355/bmj.i4919).

5. Only a selection of the most popular approaches is presented here. Other methods exist; e.g., g-computation and g-estimation for both time-invariant and time-varying analysis; instrumental variables; and doubly-robust estimation methods. There are also program evaluation methods (e.g., difference-in-differences, regression discontinuities) that can be applied to pharmacoepidemiologic questions. Conventional outcome regression analysis is not recommended for causal estimation due to issues determining covariate balance, correct model specification, and interpretability of effect estimates.

6. Online tools include, among others, an E-value calculator for unmeasured confounding (https://www.evalue-calculator.com/) and the P95 outcome misclassification estimator (http://apps.p-95.com/ISPE/).

3.2 Defining the Research question (step 1)

The process of designing a study begins with defining the research question. Research questions typically center on whether a causal relationship exists between an exposure and an outcome. This contrasts with associative questions, which, by their nature, do not require causal study design elements because they do not attempt to isolate a causal pathway from a single exposure to an outcome under study. It is important to note that the phrasing of the question itself should clarify whether an association or a causal relationship is of interest. The study question “Does statin use reduce the risk of future cardiovascular events?” is explicitly causal and requires that the study design addresses biases such as confounding. In contrast, the study question “Is statin use associated with a reduced risk of future cardiovascular events?” can be answered without control of confounding since the word “association” implies correlation. Too often, however, researchers use the word “association” to describe their findings when their methods were created to address explicitly causal questions (Hernán 2018 ). For example, a study that uses propensity score-based methods to balance risk factors between treatment groups is explicitly attempting to isolate a causal pathway by removing confounding factors. This is different from a study that intends only to measure an association. In fact, some journals may require that the word “association” be used when causal language would be more appropriate; however, this is beginning to change (Flanagin et al. 2024 ).

3.3 Defining the estimand (steps 2, 3, 4)

The estimand is the causal effect of research interest and is described in terms of required design elements: the target population for the counterfactual contrast, the kind of effect, and the effect/outcome measure.

In Step 2, the study team determines the target population of interest, which depends on the research question. For example, we may want to estimate the effect of the treatment in the entire study population, i.e., the hypothetical contrast between all study patients taking the drug of interest versus all study patients taking the comparator (the average treatment effect; ATE). Other effects can be examined, including the average treatment effect in the treated or untreated (ATT or ATU). When covariate distributions are the same across the treated and untreated populations and there is no effect modification by covariates, these effects are generally the same (Wang et al. 2017). In RCTs, this occurs naturally due to randomization, but in non-randomized data, careful study design and statistical methods must be used to mitigate confounding bias.
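The ATE/ATT distinction maps directly onto the inverse probability weights used later in the analysis. As an illustrative sketch (the propensity scores and treatment indicators below are invented; in practice they would come from a fitted propensity model):

```python
import numpy as np

# Hypothetical estimated propensity scores P(treated | covariates)
ps = np.array([0.2, 0.8, 0.5, 0.4])
# Hypothetical treatment indicators (1 = treated, 0 = comparator)
z = np.array([0, 1, 1, 0])

# ATE weights: treated weighted by 1/ps, comparators by 1/(1 - ps),
# so both groups are reweighted to resemble the full study population.
w_ate = np.where(z == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# ATT weights: treated keep weight 1, comparators get ps/(1 - ps),
# so the comparator group is reweighted to resemble the treated.
w_att = np.where(z == 1, 1.0, ps / (1.0 - ps))
```

Effect estimates are then obtained as weighted contrasts of the outcome between groups; the choice of weights determines which estimand is targeted.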

In Step 3, the study team decides whether to measure the intention-to-treat (ITT), per-protocol, or as-treated effect. The ITT approach is also known as “first-treatment-carried-forward” in the observational literature (Lund et al. 2015 ). In trials, the ITT measures the effect of treatment assignment rather than the treatment itself, and in observational data the ITT can be conceptualized as measuring the effect of treatment as started . To compute the ITT effect from observational data, patients are placed into the exposure group corresponding to the treatment that they initiate, and treatment switching or discontinuation are purposely ignored in the analysis. Alternatively, a per-protocol effect can be measured from observational data by classifying patients according to the treatment that they initiated but censoring them when they stop, switch, or otherwise change treatment (Danaei et al. 2013 ; Yang et al. 2014 ). Finally, “as-treated” effects are estimated from observational data by classifying patients according to their actual treatment exposure during follow-up, for example by using multiple time windows to measure exposure changes (Danaei et al. 2013 ; Yang et al. 2014 ).
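The operational difference between the ITT and per-protocol effects comes down to how follow-up ends for patients who change treatment. A minimal sketch, with an invented helper and hypothetical dates (not the classification code of any specific study):

```python
from datetime import date

def follow_up_end(treatment_change, study_end, approach):
    """Illustrative follow-up rule.
    - 'itt': treatment as started; switching/discontinuation is ignored.
    - 'per-protocol': censor at the first switch or stop, if any."""
    if approach == "itt" or treatment_change is None:
        return study_end
    return min(treatment_change, study_end)

switch = date(2020, 6, 1)    # hypothetical date the patient switched drugs
end = date(2020, 12, 31)     # administrative end of study

itt_end = follow_up_end(switch, end, "itt")           # follows to study end
pp_end = follow_up_end(switch, end, "per-protocol")   # censored at switch
```

An "as-treated" analysis would instead re-measure exposure in successive time windows, reassigning person-time as actual treatment changes.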

Step 4 is the final step in specifying the estimand in which the research team determines the effect measure of interest. Answering this question has two parts. First, the team must consider how the outcome of interest will be measured. Risks, rates, hazards, odds, and costs are common ways of measuring outcomes, but each measure may be best suited to a particular scenario. For example, risks assume patients across comparison groups have equal follow-up time, while rates allow for variable follow-up time (Rothman et al. 2008 ). Costs may be of interest in studies focused on economic outcomes, including as inputs to cost-effectiveness analyses. After deciding how the outcome will be measured, it is necessary to consider whether the resulting quantity will be compared across groups using a ratio or a difference. Ratios convey the effect of exposure in a way that is easy to understand, but they do not provide an estimate of how many patients will be affected. On the other hand, differences provide a clearer estimate of the potential public health impact of exposure; for example, by allowing the calculation of the number of patients that must be treated to cause or prevent one instance of the outcome of interest (Tripepi et al. 2007 ).
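The ratio-versus-difference trade-off can be made concrete with a small numeric sketch (the 8% and 12% risks are invented for illustration):

```python
def effect_measures(risk_treated, risk_control):
    """Contrast two risks on the ratio and difference scales.
    NNT = 1/|risk difference|; risks assume equal follow-up time."""
    rr = risk_treated / risk_control          # risk ratio
    rd = risk_treated - risk_control          # risk difference
    nnt = float("inf") if rd == 0 else 1.0 / abs(rd)
    return rr, rd, nnt

# Hypothetical 1-year risks: 8% under treatment, 12% under comparator.
rr, rd, nnt = effect_measures(0.08, 0.12)
# rr ~ 0.67 (a third lower risk), rd = -0.04, nnt ~ 25:
# roughly 25 patients treated to prevent one outcome event.
```

The same ratio (0.67) would arise from risks of 0.0008 versus 0.0012, but the difference scale would then imply treating thousands of patients per event prevented, which is why both scales are worth reporting.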

3.4 Identifying and mitigating biases (steps 5, 6, 7)

Observational, real-world studies can be subject to multiple potential sources of bias, which can be grouped into confounding, selection, measurement, and time-related biases (Prada-Ramallal et al. 2019 ).

In Step 5, as a practical first approach in developing strategies to address threats to causal inference, researchers should create a visual mapping of factors that may be related to the exposure, outcome, or both (also called a directed acyclic graph or DAG) (Pearl 1995 ). While creating a high-quality DAG can be challenging, guidance is increasingly available to facilitate the process (Ferguson et al. 2020 ; Gatto et al. 2022 ; Hernán and Robins 2020 ; Rodrigues et al. 2022 ; Sauer 2013 ). The types of inter-variable relationships depicted by DAGs include confounders, colliders, and mediators. Confounders are variables that affect both exposure and outcome, and it is necessary to control for them in order to isolate the causal pathway of interest. Colliders represent variables affected by two other variables, such as exposure and outcome (Griffith et al. 2020 ). Colliders should not be conditioned on since by doing so, the association between exposure and outcome will become distorted. Mediators are variables that are affected by the exposure and go on to affect the outcome. As such, mediators are on the causal pathway between exposure and outcome and should also not be conditioned on, otherwise a path between exposure and outcome will be closed and the total effect of the exposure on the outcome cannot be estimated. Mediation analysis is a separate type of analysis aiming to distinguish between direct and indirect (mediated) effects between exposure and outcome and may be applied in certain cases (Richiardi et al. 2013 ). Overall, the process of creating a DAG can create valuable insights about the nature of the hypothesized underlying data generating process and the biases that are likely to be encountered (Digitale et al. 2022 ). Finally, an extension to DAGs which incorporates counterfactual theory is available in the form of Single World Intervention Graphs (SWIGs) as described in a 2013 primer (Richardson and Robins 2013 ).
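The caution against conditioning on colliders can be demonstrated with a small simulation (illustrative only; the variables are synthetic): two independent variables become spuriously correlated once analysis is restricted by their common effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
exposure = rng.normal(size=n)              # X
outcome = rng.normal(size=n)               # Y, truly independent of X
# Collider C is caused by both X and Y, e.g., selection into the sample.
selected = exposure + outcome + rng.normal(size=n) > 0

r_overall = np.corrcoef(exposure, outcome)[0, 1]
r_selected = np.corrcoef(exposure[selected], outcome[selected])[0, 1]
# r_overall is ~0, but conditioning on the collider (analyzing only the
# selected subset) induces a spurious negative X-Y association.
```

This is the mechanism behind many selection biases: restricting a study to, say, hospitalized patients conditions on a variable influenced by both exposure and outcome.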

In Step 6, researchers comprehensively assess the possibility of different types of bias in their study, above and beyond what the creation of the DAG reveals. Many potential biases have been identified and summarized in the literature (Berger et al. 2017 ; Cox et al. 2009 ; European Medicines Agency 2023 ; Girman et al. 2014 ; Stuart et al. 2013 ; Velentgas et al. 2013 ). Every study can be subject to one or more biases, each of which can be addressed using one or more methods. The study team should thoroughly and explicitly identify all possible biases with consideration for the specifics of the available data and the nuances of the population and health care system(s) from which the data arise. Once the potential biases are identified and listed, the team can consider potential solutions using a variety of study design and analytic techniques.

In Step 7, the study team considers solutions to the biases identified in Step 6. “Target trial” thinking serves as the basis for many of these solutions by requiring researchers to consider how observational studies can be designed to ensure comparison groups are similar and produce valid inferences by emulating RCTs (Labrecque and Swanson 2017; Wang et al. 2023b). Designing studies to include only new users of a drug and an active comparator group is one way of increasing the similarity of patients across both groups, particularly in terms of treatment history. Careful consideration must be paid to the specification of the time periods and their relationship to inclusion/exclusion criteria (Suissa and Dell’Aniello 2020). For instance, if a drug is used intermittently, a longer wash-out period is needed to ensure adequate capture of prior use in order to avoid bias (Riis et al. 2015). The study team should consider how to approach confounding adjustment, and whether both time-invariant and time-varying confounding may be present. Many potential biases exist, and many methods have been developed to address them in order to improve causal estimation from observational data. Many of these methods, such as propensity score estimation, can be enhanced by machine learning (Athey and Imbens 2019; Belthangady et al. 2021; Mai et al. 2022; Onasanya et al. 2024; Schuler and Rose 2017; Westreich et al. 2010). Machine learning has many potential applications in the causal inference discipline, and like other tools, must be used with careful planning and intentionality. To aid in the assessment of potential biases, especially time-related ones, and the development of a plan to address them, the study design should be visualized (Gatto et al. 2022; Schneeweiss et al. 2019). Additionally, we note the opportunity for collaboration across research disciplines (e.g., the application of difference-in-differences methods (Zhou et al. 2016) to the estimation of comparative drug effectiveness and safety).

3.5 Quality Control & sensitivity analyses (step 8)

Causal study design concludes with Step 8, which includes planning quality control and sensitivity analyses to improve the internal validity of the study. Quality control begins with reviewing study output for prima facie validity. Patient characteristics (e.g., distributions of age, sex, region) should align with expected values from the researchers’ intuition and the literature, and researchers should assess reasons for any discrepancies. Sensitivity analyses should be conducted to determine the robustness of study findings. Researchers can test the stability of study estimates using a different estimand or type of model than was used in the primary analysis. Sensitivity analysis estimates that are similar to those of the primary analysis might confirm that the primary analysis estimates are appropriate. The research team may be interested in how changes to study inclusion/exclusion criteria may affect study findings or wish to address uncertainties related to measuring the exposure or outcome in the administrative data by modifying the algorithms used to identify exposure or outcome (e.g., requiring hospitalization with a diagnosis code in a principal position rather than counting any claim with the diagnosis code in any position). As feasible, existing validation studies for the exposure and outcome should be referenced, or new validation efforts undertaken. The results of such validation studies can inform study estimates via quantitative bias analyses (Lanes and Beachler 2023 ). The study team may also consider biases arising from unmeasured confounding and plan quantitative bias analyses to explore how unmeasured confounding may impact estimates. Quantitative bias analysis can assess the directionality, magnitude, and uncertainty of errors arising from a variety of limitations (Brenner and Gefeller 1993 ; Lash et al. 2009 , 2014 ; Leahy et al. 2022 ).
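One widely used quantitative bias analysis for unmeasured confounding is the E-value (VanderWeele and Ding), referenced in the figure footnotes via the online calculator. A minimal sketch of the point-estimate formula:

```python
import math

def e_value(rr):
    """E-value for a risk ratio point estimate: the minimum strength of
    association (on the risk ratio scale) that an unmeasured confounder
    would need with both exposure and outcome, beyond measured
    covariates, to fully explain away the observed estimate."""
    rr = rr if rr >= 1.0 else 1.0 / rr   # invert protective estimates
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed RR of 2.0 yields an E-value of about 3.41: only a fairly
# strong unmeasured confounder could account for the whole effect.
ev = e_value(2.0)
```

Analogous formulas exist for the confidence interval limit closer to the null, which is often the more informative quantity to report.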

3.6 Illustration using a previously published research study

In order to demonstrate how the guide can be used to plan a research study utilizing causal methods, we turn to a previously published study (Dondo et al. 2017) that assessed the causal relationship between the use of β-blockers and mortality after acute myocardial infarction in patients without heart failure or left ventricular systolic dysfunction. The investigators sought to answer a causal research question (Step 1). Considering treatment for whom (Step 2), both the ATE and the ATT were evaluated. Use (or no use) of β-blockers was determined after discharge without taking into consideration discontinuation or future treatment changes (i.e., intention-to-treat; Step 3). Since survival was the primary outcome, an absolute difference in survival time was chosen as the effect measure (Step 4). While no explicit directed acyclic graph was provided, the investigators specified a list of confounders (Step 5).

Robust methodologies were established by considering possible sources of bias and addressing them with viable solutions (Steps 6 and 7). Table 1 offers a list of the identified potential biases and their corresponding solutions as implemented. For example, to minimize potential biases including prevalent-user bias and selection bias, the sample was restricted to patients with no previous use of β-blockers, no contraindication for β-blockers, and no prescription of loop diuretics. To improve balance across the comparator groups in terms of baseline confounders, i.e., those that could influence both exposure (β-blocker use) and outcome (mortality), propensity score-based inverse probability of treatment weighting (IPTW) was employed. However, we noted that the baseline look-back period used to assess measured covariates was not explicitly listed in the paper.

Quality control and sensitivity analyses (Step 8) are described extensively. The overlap of propensity score distributions between comparator groups was tested and confounder balance was assessed. Since observations in the tails of the propensity score distribution may violate the positivity assumption (Crump et al. 2009), a sensitivity analysis was conducted including only cases within 0.1 to 0.9 of the propensity score distribution. While not mentioned by the authors, the PS tails can be influenced by unmeasured confounders (Sturmer et al. 2021); the findings were robust with and without trimming. An assessment of extreme IPTW weights, while not included, would have further increased confidence in the robustness of the analysis. An instrumental variable approach was employed to assess potential selection bias due to unmeasured confounding, using hospital rates of guideline-indicated prescribing as the instrument. Additionally, potential bias caused by missing data was attenuated through the use of multiple imputation, and separate models were built for complete cases only and imputed/complete cases.
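The trimming sensitivity analysis described above can be sketched on synthetic data (a toy simulation, not the study's actual analysis; the data-generating process and effect size of 1.0 are invented): an IPTW estimate is recomputed after restricting to propensity scores between 0.1 and 0.9, and agreement between the two estimates supports robustness.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)                        # confounder
ps = 1.0 / (1.0 + np.exp(-x))                 # true propensity score
z = rng.binomial(1, ps)                       # treatment assignment
y = 1.0 * z + x + rng.normal(size=n)          # outcome; true effect = 1.0

# ATE weights from the (here, known) propensity scores
w = np.where(z == 1, 1.0 / ps, 1.0 / (1.0 - ps))

def iptw_effect(keep):
    """Weighted difference in mean outcome within the kept subset."""
    t, c = keep & (z == 1), keep & (z == 0)
    return (np.average(y[t], weights=w[t])
            - np.average(y[c], weights=w[c]))

naive = y[z == 1].mean() - y[z == 0].mean()   # confounded, well above 1.0
main_est = iptw_effect(np.ones(n, dtype=bool))
trimmed_est = iptw_effect((ps >= 0.1) & (ps <= 0.9))
# main_est and trimmed_est both land near the true effect of 1.0
```

In real analyses the propensity scores are estimated, trimming changes the target population, and extreme-weight diagnostics should accompany any such comparison.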

4 Discussion

We have described a conceptual schema for designing observational real-world studies to estimate causal effects. The application of this schema to a previously published study illuminates the methodologic structure of the study, revealing how each structural element is related to a potential bias which it is meant to address. Real-world evidence is increasingly accepted by healthcare stakeholders, including the FDA (Concato and Corrigan-Curay 2022 ; Concato and ElZarrad 2022 ), and its use for comparative effectiveness and safety assessments requires appropriate causal study design; our guide is meant to facilitate this design process and complement existing, more specific, guidance.

Existing guidance for causal inference using observational data includes components that can be clearly mapped onto the schema that we have developed. For example, in 2009 Cox et al. described common sources of bias in observational data and recommended specific strategies to mitigate these biases, corresponding to steps 6–8 of our step-by-step guide (Cox et al. 2009). In 2013, the AHRQ emphasized development of the research question, corresponding to steps 1–4 of our guide, with additional chapters on study design, comparator selection, sensitivity analyses, and directed acyclic graphs, which correspond to steps 7 and 5, respectively (Velentgas et al. 2013). Much of Girman et al.’s manuscript (Girman et al. 2014) corresponds with steps 1–4 of our guide, and the matters of equipoise and interpretability specifically correspond to steps 3 and 7–8. The current ENCePP guide on methodological standards in pharmacoepidemiology contains a section on formulating a meaningful research question, corresponding to step 1, and describes strategies to mitigate specific sources of bias, corresponding to steps 6–8 (European Medicines Agency 2023). Recent works by the FDA Sentinel Innovation Center (Desai et al. 2024) and the Joint Initiative for Causal Inference (Dang et al. 2023) provide more advanced exposition of many of the steps in our guide. The target trial framework contains guidance on developing seven components of the study protocol, including eligibility criteria, treatment strategies, assignment procedures, follow-up period, outcome, causal contrast of interest, and analysis plan (Hernán and Robins 2016). Our work places the target trial framework into a larger context, illustrating its relationship with other important study planning considerations, including the creation of a directed acyclic graph and the incorporation of prespecified sensitivity and quantitative bias analyses.

Ultimately, the feasibility of estimating causal effects relies on the capabilities of the available data. Real-world data sources are complex, and the investigator must carefully consider whether the data on hand are sufficient to answer the research question. For example, a study that relies solely on claims data for outcome ascertainment may suffer from outcome misclassification bias (Lanes and Beachler 2023 ). This bias can be addressed through medical record validation for a random subset of patients, followed by quantitative bias analysis (Lanes and Beachler 2023 ). If instead, the investigator wishes to apply a previously published, claims-based algorithm validated in a different database, they must carefully consider the transportability of that algorithm to their own study population. In this way, causal inference from real-world data requires the ability to think creatively and resourcefully about how various data sources and elements can be leveraged, with consideration for the strengths and limitations of each source. The heart of causal inference is in the pairing of humility and creativity: the humility to acknowledge what the data cannot do, and the creativity to address those limitations as best as one can at the time.
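The validation-plus-quantitative-bias-analysis approach mentioned above can be sketched with a simple positive-predictive-value correction (cf. Brenner and Gefeller 1993, listed in the references). All counts and PPV values below are invented for illustration; the correction assumes the claims algorithm misses no true events (perfect sensitivity), which a real analysis would need to justify or relax.

```python
def ppv_corrected_risk(flagged_events: int, n: int, ppv: float) -> float:
    """Scale the observed risk by the PPV to remove false positives
    (assumes no false negatives, i.e., no missed true events)."""
    return flagged_events * ppv / n

# Hypothetical claims-based cohort study with record-review PPVs
# estimated separately in the exposed and unexposed groups.
risk_exposed = ppv_corrected_risk(flagged_events=120, n=1_000, ppv=0.80)
risk_unexposed = ppv_corrected_risk(flagged_events=150, n=1_000, ppv=0.95)

naive_rr = (120 / 1_000) / (150 / 1_000)
corrected_rr = risk_exposed / risk_unexposed
print(f"naive RR: {naive_rr:.2f}, PPV-corrected RR: {corrected_rr:.2f}")
```

Because the PPV differs between groups, the corrected relative risk (about 0.67) moves away from the naive estimate (0.80), showing how differential outcome misclassification can distort a claims-only analysis.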

4.1 Limitations

As with any attempt to synthesize a broad array of information into a single, simplified schema, our work has several limitations. Space and usability constraints necessitated simplification of the complex source material and selection among many available methodologies, and information about the relative importance of each step is not currently included. Additionally, it is important to consider the context of our work. This step-by-step guide emphasizes analytic techniques (e.g., propensity scores) that are used most frequently within our own research environment and may not include less familiar study designs and analytic techniques. However, one strength of the guide is that additional designs, techniques, or concepts can easily be incorporated into the existing schema. The benefit of a schema is that new information can be added and is more readily accessed because of its association with previously sorted information (Loveless 2022). It is also important to note that we approached causal inference as a broad, overarching concept defined by the totality of the research, from start to finish, rather than focusing on a particular analytic technique; we view this as a strength rather than a limitation.

Finally, the focus of this guide was on the methodologic aspects of study planning. As a result, we did not include steps for drafting or registering the study protocol in a public database or for communicating results. We strongly encourage researchers to register their study protocols and communicate their findings with transparency. A protocol template endorsed by ISPOR and ISPE for studies using real-world data to evaluate treatment effects is available (Wang et al. 2023a ). Additionally, the steps described above are intended to illustrate an order of thinking in the study planning process, and these steps are often iterative. The guide is not intended to reflect the order of study execution; specifically, quality control procedures and sensitivity analyses should also be formulated up-front at the protocol stage.

5 Conclusion

We outlined steps and described key conceptual issues of importance in designing real-world studies to answer causal questions, and created a visually appealing, user-friendly resource to help researchers clearly define and navigate these issues. We hope this guide serves to enhance the quality, and thus the impact, of real-world evidence.

Data availability

No datasets were generated or analysed during the current study.

Arlett, P., Kjaer, J., Broich, K., Cooke, E.: Real-world evidence in EU Medicines Regulation: Enabling Use and establishing value. Clin. Pharmacol. Ther. 111 (1), 21–23 (2022)

Athey, S., Imbens, G.W.: Machine learning methods that economists should know about. Annu. Rev. Econ. 11, 685–725 (2019)

Belthangady, C., Stedden, W., Norgeot, B.: Minimizing bias in massive multi-arm observational studies with BCAUS: Balancing covariates automatically using supervision. BMC Med. Res. Methodol. 21 (1), 190 (2021)

Berger, M.L., Sox, H., Willke, R.J., Brixner, D.L., Eichler, H.G., Goettsch, W., Madigan, D., Makady, A., Schneeweiss, S., Tarricone, R., Wang, S.V., Watkins, J., Mullins, C.D.: Good practices for real-world data studies of treatment and/or comparative effectiveness: Recommendations from the joint ISPOR-ISPE Special Task Force on real-world evidence in health care decision making. Pharmacoepidemiol Drug Saf. 26 (9), 1033–1039 (2017)

Brenner, H., Gefeller, O.: Use of the positive predictive value to correct for disease misclassification in epidemiologic studies. Am. J. Epidemiol. 138 (11), 1007–1015 (1993)

Concato, J., Corrigan-Curay, J.: Real-world evidence - where are we now? N Engl. J. Med. 386 (18), 1680–1682 (2022)

Concato, J., ElZarrad, M.: FDA Issues Draft Guidances on Real-World Evidence, Prepares to Publish More in Future [accessed on 2022]. (2022). https://www.fda.gov/drugs/news-events-human-drugs/fda-issues-draft-guidances-real-world-evidence-prepares-publish-more-future

Cox, E., Martin, B.C., Van Staa, T., Garbe, E., Siebert, U., Johnson, M.L.: Good research practices for comparative effectiveness research: Approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: The International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report–Part II. Value Health. 12 (8), 1053–1061 (2009)

Crump, R.K., Hotz, V.J., Imbens, G.W., Mitnik, O.A.: Dealing with limited overlap in estimation of average treatment effects. Biometrika. 96 (1), 187–199 (2009)

Danaei, G., Rodriguez, L.A., Cantero, O.F., Logan, R., Hernan, M.A.: Observational data for comparative effectiveness research: An emulation of randomised trials of statins and primary prevention of coronary heart disease. Stat. Methods Med. Res. 22 (1), 70–96 (2013)

Dang, L.E., Gruber, S., Lee, H., Dahabreh, I.J., Stuart, E.A., Williamson, B.D., Wyss, R., Diaz, I., Ghosh, D., Kiciman, E., Alemayehu, D., Hoffman, K.L., Vossen, C.Y., Huml, R.A., Ravn, H., Kvist, K., Pratley, R., Shih, M.C., Pennello, G., Martin, D., Waddy, S.P., Barr, C.E., Akacha, M., Buse, J.B., van der Laan, M., Petersen, M.: A causal roadmap for generating high-quality real-world evidence. J. Clin. Transl Sci. 7 (1), e212 (2023)

Desai, R.J., Wang, S.V., Sreedhara, S.K., Zabotka, L., Khosrow-Khavar, F., Nelson, J.C., Shi, X., Toh, S., Wyss, R., Patorno, E., Dutcher, S., Li, J., Lee, H., Ball, R., Dal Pan, G., Segal, J.B., Suissa, S., Rothman, K.J., Greenland, S., Hernan, M.A., Heagerty, P.J., Schneeweiss, S.: Process guide for inferential studies using healthcare data from routine clinical practice to evaluate causal effects of drugs (PRINCIPLED): Considerations from the FDA Sentinel Innovation Center. BMJ. 384 , e076460 (2024)

Digitale, J.C., Martin, J.N., Glymour, M.M.: Tutorial on directed acyclic graphs. J. Clin. Epidemiol. 142 , 264–267 (2022)

Dondo, T.B., Hall, M., West, R.M., Jernberg, T., Lindahl, B., Bueno, H., Danchin, N., Deanfield, J.E., Hemingway, H., Fox, K.A.A., Timmis, A.D., Gale, C.P.: beta-blockers and Mortality after Acute myocardial infarction in patients without heart failure or ventricular dysfunction. J. Am. Coll. Cardiol. 69 (22), 2710–2720 (2017)

European Medicines Agency: ENCePP Guide on Methodological Standards in Pharmacoepidemiology [accessed on 2023]. (2023). https://www.encepp.eu/standards_and_guidances/methodologicalGuide.shtml

Ferguson, K.D., McCann, M., Katikireddi, S.V., Thomson, H., Green, M.J., Smith, D.J., Lewsey, J.D.: Evidence synthesis for constructing directed acyclic graphs (ESC-DAGs): A novel and systematic method for building directed acyclic graphs. Int. J. Epidemiol. 49 (1), 322–329 (2020)

Flanagin, A., Lewis, R.J., Muth, C.C., Curfman, G.: What does the proposed causal inference Framework for Observational studies Mean for JAMA and the JAMA Network Journals? JAMA (2024)

U.S. Food and Drug Administration: Framework for FDA’s Real-World Evidence Program [accessed on 2018]. (2018). https://www.fda.gov/media/120060/download

Franklin, J.M., Schneeweiss, S.: When and how can Real World Data analyses substitute for randomized controlled trials? Clin. Pharmacol. Ther. 102 (6), 924–933 (2017)

Gatto, N.M., Wang, S.V., Murk, W., Mattox, P., Brookhart, M.A., Bate, A., Schneeweiss, S., Rassen, J.A.: Visualizations throughout pharmacoepidemiology study planning, implementation, and reporting. Pharmacoepidemiol Drug Saf. 31 (11), 1140–1152 (2022)

Girman, C.J., Faries, D., Ryan, P., Rotelli, M., Belger, M., Binkowitz, B., O’Neill, R., Drug Information Association CER Scientific Working Group: Pre-study feasibility and identifying sensitivity analyses for protocol pre-specification in comparative effectiveness research. J. Comp. Eff. Res. 3 (3), 259–270 (2014)

Griffith, G.J., Morris, T.T., Tudball, M.J., Herbert, A., Mancano, G., Pike, L., Sharp, G.C., Sterne, J., Palmer, T.M., Davey Smith, G., Tilling, K., Zuccolo, L., Davies, N.M., Hemani, G.: Collider bias undermines our understanding of COVID-19 disease risk and severity. Nat. Commun. 11 (1), 5749 (2020)

Hernán, M.A.: The C-Word: Scientific euphemisms do not improve causal inference from Observational Data. Am. J. Public Health. 108 (5), 616–619 (2018)

Hernán, M.A., Robins, J.M.: Using Big Data to emulate a target Trial when a Randomized Trial is not available. Am. J. Epidemiol. 183 (8), 758–764 (2016)

Hernán, M., Robins, J.: Causal Inference: What if. Chapman & Hall/CRC, Boca Raton (2020)

International Society for Pharmacoeconomics and Outcomes Research (ISPOR): Strategic Initiatives: Real-World Evidence [accessed on 2022]. (2022). https://www.ispor.org/strategic-initiatives/real-world-evidence

International Society for Pharmacoepidemiology (ISPE): Position on Real-World Evidence [accessed on 2020]. (2020). https://pharmacoepi.org/pub/?id=136DECF1-C559-BA4F-92C4-CF6E3ED16BB6

Labrecque, J.A., Swanson, S.A.: Target trial emulation: Teaching epidemiology and beyond. Eur. J. Epidemiol. 32 (6), 473–475 (2017)

Lanes, S., Beachler, D.C.: Validation to correct for outcome misclassification bias. Pharmacoepidemiol Drug Saf. (2023)

Lash, T.L., Fox, M.P., Fink, A.K.: Applying Quantitative bias Analysis to Epidemiologic data. Springer (2009)

Lash, T.L., Fox, M.P., MacLehose, R.F., Maldonado, G., McCandless, L.C., Greenland, S.: Good practices for quantitative bias analysis. Int. J. Epidemiol. 43 (6), 1969–1985 (2014)

Leahy, T.P., Kent, S., Sammon, C., Groenwold, R.H., Grieve, R., Ramagopalan, S., Gomes, M.: Unmeasured confounding in nonrandomized studies: Quantitative bias analysis in health technology assessment. J. Comp. Eff. Res. 11 (12), 851–859 (2022)

Loveless, B.: A Complete Guide to Schema Theory and its Role in Education [accessed on 2022]. (2022). https://www.educationcorner.com/schema-theory/

Lund, J.L., Richardson, D.B., Sturmer, T.: The active comparator, new user study design in pharmacoepidemiology: Historical foundations and contemporary application. Curr. Epidemiol. Rep. 2 (4), 221–228 (2015)

Mai, X., Teng, C., Gao, Y., Governor, S., He, X., Kalloo, G., Hoffman, S., Mbiydzenyuy, D., Beachler, D.: A pragmatic comparison of logistic regression versus machine learning methods for propensity score estimation. Abstract, 38th International Conference on Pharmacoepidemiology, August 26–28, 2022, Copenhagen, Denmark. Pharmacoepidemiol Drug Saf. 31 (S2) (2022)

Mullard, A.: 2021 FDA approvals. Nat. Rev. Drug Discov. 21 (2), 83–88 (2022)

Onasanya, O., Hoffman, S., Harris, K., Dixon, R., Grabner, M.: Current applications of machine learning for causal inference in healthcare research using observational data. International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Atlanta, GA. (2024)

Pearl, J.: Causal diagrams for empirical research. Biometrika. 82 (4), 669–688 (1995)

Prada-Ramallal, G., Takkouche, B., Figueiras, A.: Bias in pharmacoepidemiologic studies using secondary health care databases: A scoping review. BMC Med. Res. Methodol. 19 (1), 53 (2019)

Richardson, T.S., Robins, J.M.: Single World Intervention Graphs: A Primer [accessed on 2013]. (2013). https://www.stats.ox.ac.uk/~evans/uai13/Richardson.pdf

Richiardi, L., Bellocco, R., Zugna, D.: Mediation analysis in epidemiology: Methods, interpretation and bias. Int. J. Epidemiol. 42 (5), 1511–1519 (2013)

Riis, A.H., Johansen, M.B., Jacobsen, J.B., Brookhart, M.A., Sturmer, T., Stovring, H.: Short look-back periods in pharmacoepidemiologic studies of new users of antibiotics and asthma medications introduce severe misclassification. Pharmacoepidemiol Drug Saf. 24 (5), 478–485 (2015)

Rodrigues, D., Kreif, N., Lawrence-Jones, A., Barahona, M., Mayer, E.: Reflection on modern methods: Constructing directed acyclic graphs (DAGs) with domain experts for health services research. Int. J. Epidemiol. 51 (4), 1339–1348 (2022)

Rothman, K.J., Greenland, S., Lash, T.L.: Modern Epidemiology. Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia (2008)

Rubin, D.B.: Causal inference using potential outcomes. J. Am. Stat. Assoc. 100 (469), 322–331 (2005)

Sauer, B., VanderWeele, T.J.: Use of directed acyclic graphs. In: Velentgas, P., Dreyer, N., Nourjah, P. (eds.) Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide. Agency for Healthcare Research and Quality (US) (2013)

Schneeweiss, S., Rassen, J.A., Brown, J.S., Rothman, K.J., Happe, L., Arlett, P., Dal Pan, G., Goettsch, W., Murk, W., Wang, S.V.: Graphical depiction of longitudinal study designs in Health Care databases. Ann. Intern. Med. 170 (6), 398–406 (2019)

Schuler, M.S., Rose, S.: Targeted maximum likelihood estimation for causal inference in Observational studies. Am. J. Epidemiol. 185 (1), 65–73 (2017)

Stuart, E.A., DuGoff, E., Abrams, M., Salkever, D., Steinwachs, D.: Estimating causal effects in observational studies using Electronic Health data: Challenges and (some) solutions. EGEMS (Wash DC) 1 (3). (2013)

Sturmer, T., Webster-Clark, M., Lund, J.L., Wyss, R., Ellis, A.R., Lunt, M., Rothman, K.J., Glynn, R.J.: Propensity score weighting and trimming strategies for reducing Variance and Bias of Treatment Effect estimates: A Simulation Study. Am. J. Epidemiol. 190 (8), 1659–1670 (2021)

Suissa, S., Dell’Aniello, S.: Time-related biases in pharmacoepidemiology. Pharmacoepidemiol Drug Saf. 29 (9), 1101–1110 (2020)

Tripepi, G., Jager, K.J., Dekker, F.W., Wanner, C., Zoccali, C.: Measures of effect: Relative risks, odds ratios, risk difference, and ‘number needed to treat’. Kidney Int. 72 (7), 789–791 (2007)

Velentgas, P., Dreyer, N., Nourjah, P., Smith, S., Torchia, M.: Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide. Agency for Healthcare Research and Quality (AHRQ) Publication 12(13). (2013)

Wang, A., Nianogo, R.A., Arah, O.A.: G-computation of average treatment effects on the treated and the untreated. BMC Med. Res. Methodol. 17 (1), 3 (2017)

Wang, S.V., Pottegard, A., Crown, W., Arlett, P., Ashcroft, D.M., Benchimol, E.I., Berger, M.L., Crane, G., Goettsch, W., Hua, W., Kabadi, S., Kern, D.M., Kurz, X., Langan, S., Nonaka, T., Orsini, L., Perez-Gutthann, S., Pinheiro, S., Pratt, N., Schneeweiss, S., Toussi, M., Williams, R.J.: HARmonized Protocol Template to enhance reproducibility of hypothesis evaluating real-world evidence studies on treatment effects: A good practices report of a joint ISPE/ISPOR task force. Pharmacoepidemiol Drug Saf. 32 (1), 44–55 (2023a)

Wang, S.V., Schneeweiss, S., RCT-DUPLICATE Initiative, Franklin, J.M., Desai, R.J., Feldman, W., Garry, E.M., Glynn, R.J., Lin, K.J., Paik, J., Patorno, E., Suissa, S., D’Andrea, E., Jawaid, D., Lee, H., Pawar, A., Sreedhara, S.K., Tesfaye, H., Bessette, L.G., Zabotka, L., Lee, S.B., Gautam, N., York, C., Zakoul, H., Concato, J., Martin, D., Paraoan, D., Quinto, K.: Emulation of randomized clinical trials with nonrandomized database analyses: Results of 32 clinical trials. JAMA 329 (16), 1376–1385 (2023b)

Westreich, D., Lessler, J., Funk, M.J.: Propensity score estimation: Neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression. J. Clin. Epidemiol. 63 (8), 826–833 (2010)

Yang, S., Eaton, C.B., Lu, J., Lapane, K.L.: Application of marginal structural models in pharmacoepidemiologic studies: A systematic review. Pharmacoepidemiol Drug Saf. 23 (6), 560–571 (2014)

Zhou, H., Taber, C., Arcona, S., Li, Y.: Difference-in-differences method in comparative Effectiveness Research: Utility with unbalanced groups. Appl. Health Econ. Health Policy. 14 (4), 419–429 (2016)

The authors received no financial support for this research.

Author information

Authors and Affiliations

Carelon Research, Wilmington, DE, USA

Sarah Ruth Hoffman, Nilesh Gangan, Joseph L. Smith, Arlene Tave, Yiling Yang, Christopher L. Crowe & Michael Grabner

Elevance Health, Indianapolis, IN, USA

Xiaoxue Chen

University of Maryland School of Pharmacy, Baltimore, MD, USA

Susan dosReis

Contributions

SH, NG, JS, AT, CC, MG are employees of Carelon Research, a wholly owned subsidiary of Elevance Health, which conducts health outcomes research with both internal and external funding, including a variety of private and public entities. XC was an employee of Elevance Health at the time of study conduct. YY was an employee of Carelon Research at the time of study conduct. SH, MG, and JLS are shareholders of Elevance Health. SdR receives funding from GlaxoSmithKline for a project unrelated to the content of this manuscript and conducts research that is funded by state and federal agencies.

Corresponding author

Correspondence to Sarah Ruth Hoffman.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Hoffman, S.R., Gangan, N., Chen, X. et al. A step-by-step guide to causal study design using real-world data. Health Serv Outcomes Res Method (2024). https://doi.org/10.1007/s10742-024-00333-6

Received : 07 December 2023

Revised : 31 May 2024

Accepted : 10 June 2024

Published : 19 June 2024

DOI : https://doi.org/10.1007/s10742-024-00333-6

Keywords

  • Causal inference
  • Real-world data
  • Confounding
  • Non-randomized data
  • Bias in pharmacoepidemiology

Cause and Effect Essay Outline: Types, Examples and Writing Tips

20 June, 2020

9 minutes read

Author:  Tomas White

This is a complete guide on writing cause and effect essays. Find a link to our essay sample at the end. Let's get started!

Cause and Effect

What is a Cause and Effect Essay?

A cause and effect essay is a type of paper in which the author analyzes the causes and effects of a particular action or event. A curriculum usually includes this type of exercise to test your ability to understand the logic of certain events or actions.

If you can see the logic behind cause and effect in the world around you, you will encounter fewer problems when writing. If not, writing this kind of paper will give you the chance to improve your skillset and your brain’s ability to reason.

“Shallow men believe in luck or in circumstance. Strong men believe in cause and effect.” ― Ralph Waldo Emerson

In this article, the Handmade Writing team will show you how to create an outline for your cause and effect essay – the key to successful essay writing.

Types of the Cause and Effect Essay

Before writing this kind of essay, you need to draft the structure. A good structure will result in a good paper, so it’s important to have a plan before you start. But remember, there’s no need to reinvent the wheel: just about every type of structure has already been formulated by someone.

If you are still unsure about the definition of an essay, you can take a look at our guide:  What is an Essay?

Generally speaking, there are three types of cause and effect essays. We usually differentiate them by the number of and relationships between the different causes and the effects. Let’s take a quick look at these three different cases:

1. Many causes, one effect

This kind of essay illustrates how different causes can lead to one effect. The idea here is to try and examine a variety of causes, preferably ones that come from different fields, and prove how they contributed to a particular effect. If you are writing about World War I, for example, mention the political, cultural, and historical factors that led to the great war.

By examining a range of fundamental causes, you will be able to demonstrate your knowledge about the topic.

Here is how to structure this type of essay:

  • Introduction
  • Cause #1
  • Cause #2
  • Cause #3 (and so on…)
  • The effect of the causes
  • Conclusion

2. One cause, many effects

This type of cause and effect essay is constructed to show the various effects of a particular event, problem, or decision. Once again, you will have to demonstrate your comprehensive knowledge and analytical mastery of the field. There is no need to persuade the reader or present your argument. When writing this kind of essay, in-depth knowledge of the problem or event’s roots will be of great benefit. If you know why it happened, it will be much easier to write about its effects.

Here is the structure for this kind of essay:

  • Introduction
  • The cause
  • Effect #1
  • Effect #2
  • Effect #3 (and so on…)
  • Conclusion

3. Chain of causes and effects

This is the most challenging type. You need to maintain a chain of logic that demonstrates a sequence of actions and consequences, leading to the end of the chain. Although this is usually the most interesting kind of cause and effect essay, it can also be the most difficult to write.

Here is the outline structure:

  • Introduction
  • Cause #1
  • Effect #1 = Cause #2
  • Effect #2 = Cause #3
  • Effect #3 = Cause #4 (and so on…)
  • Conclusion

Cause and Effect Essay Outline Example

Let’s take a look at an example. Below, you will find an outline for the topic “The causes of obesity” (Type 1) :

[Image: sample outline for the topic “The causes of obesity”]

As you can see, we used a blended strategy here. When writing about the ever-increasing consumption of unhealthy food, it is logical to talk about the marketing strategies that encourage people to buy fast food. If you are discussing fitness trainers, it is important to mention that people need to be checked by a doctor more often, etc.

In case you face some issues with writing your cause and effect essay, you can always count on our Essay Writers!

How do I start writing once I have drafted the structure?

If you start by structuring each paragraph and collecting suitable examples, the writing process will be much simpler. The final essay might not come out as a classic five-paragraph essay – it all depends on the cause-effect chain and the number of statements in your essay.

In the Introduction, try to give the reader a general idea of what the cause and effect essay will contain. For an experienced reader, a thesis statement will be an indication that you know what you are writing about. It is also important to emphasize how and why this problem is relevant to modern life. If you ever need to write about the Cuban Missile Crisis, for instance, state that the effects of the Cold War are still apparent in contemporary global politics.

Related Post: How to write an Essay introduction | How to write a Thesis statement

In the Body, provide plenty of details about what causes led to the effects. Once again, if you have already assembled all the causes and effects with their relevant examples when writing your plan, you shouldn’t have any problems. But, there are some things to which you must pay particular attention. To begin with, try to make each paragraph the same length: it looks better visually. Then, try to avoid weak or unconvincing causes. This is a common mistake, and the reader will quickly realize that you are just trying to write enough characters to reach the required word count.

Moreover, you need to make sure that your causes are actually linked to their effects. This is particularly important when you write a “chained” cause and effect essay (type 3). You need to be able to demonstrate that each cause was actually relevant to the final result. As mentioned before, writing the Body without first preparing a thorough and logical outline is a common mistake.

The Conclusion must be a summary of the thesis statement that you proposed in the Introduction. An effective Conclusion means that you have a well-developed understanding of the subject. Notably, writing the Conclusion can be one of the most challenging parts of this kind of project. You typically write the Conclusion once you have finished the Body, but in practice, you will sometimes find that a well-written conclusion will reveal a few mistakes of logic in the body!

Cause and Effect Essay Sample

Be sure to check the sample essay completed by our writers. Use it as an example to write your own cause and effect essay. Link: Cause and effect essay sample: Advertising ethic issues.

Tips and Common Mistakes from Our Expert Writers

Check out Handmadewriting paper writing Guide to learn more about academic writing!

Best Climate for Arthritis Patients: Humidity's Impact on Your Joints

How does climate impact people living with arthritis? Learn the best climate for arthritis and how humidity and other weather patterns can affect your joints. 

There’s no denying it: weather and climate can have a significant effect on arthritis and painful joints. Many report that humidity, along with other factors such as temperature and changing weather patterns, increases joint pain or triggers arthritis flares. For some, the effect of humidity and weather on their joints is so bothersome that they seek relief by moving to drier, temperate climates.

But will a change of climate really help joint pain? And if so, what is the best climate for people with arthritis? What weather is safest for joints? Before you start packing, consider what the research has to say about the effects of weather and climate on arthritis.

What the Research Says

While the weather’s effects on arthritis have long troubled people with the disease and intrigued researchers who study it, the connection between weather and joint pain is not well understood. Yet studies, while conflicting in some cases, offer important clues. One of the most recent and largest is a 2019 British study in which more than 2,600 participants entered symptom information into their smartphones in real time over a 15-month period. The phones’ GPS allowed scientists to collect precise weather data based on participants’ locations.

Analysis of that data showed a modest, but significant, correlation between pain and three weather components — relative humidity, air pressure and wind speed. Temperature, however, did not have a significant association with pain.

In a handful of earlier, smaller studies, however, temperature was shown to have an effect on arthritis pain. For example, a study published in 2015 in the Journal of Rheumatology found that among 810 participants with osteoarthritis (OA) of the knee, hand and/or hip, daily average humidity and temperature had a significant effect on joint pain. The effect of humidity on pain was stronger in relatively cold weather conditions. In a separate 2007 study of 200 people with knee OA, pain increased with every 10-degree drop in temperature.

Lower temperatures have been shown to have a similar effect on patients with rheumatoid arthritis (RA). A 2013 Spanish study of 245 RA patients who visited the emergency room 306 times due to RA-related complaints found that patients were 16% more likely to present with a flare when mean temperatures were lower. A 2021 Chinese study, which analyzed hospital admission data from January 1, 2015 to December 31, 2019, found a significant association between low temperature and admission for RA.

Conversely, warmer temperatures have been associated with the worsening of gout and some lupus symptoms. A 2014 study in the American Journal of Epidemiology found that among 632 participants with gout, there was a significant dose-response relationship between mean temperature in the prior 48 hours and the risk of subsequent gout attack. Higher temperatures were associated with approximately 40% higher risk of gout attack compared with moderate temperatures. A study published in 2020 in Arthritis & Rheumatology found that an increase in temperatures was associated with joint complaints, rashes and inflammation of the membrane surrounding the heart and lungs in people with lupus.

Studies have also found correlations between seasonal fluctuations and arthritis symptoms. In one systematic review and meta-analysis, gout was found to develop significantly more in spring, between March and July, when temperatures were rising. Another study looked at a database of rheumatoid arthritis patients and found that RA activity was higher in the spring and lower in the fall. Neither of these determined, however, what climatologic changes led to the increase and decrease in disease activity and symptoms.

How Weather Might Affect You

If weather does in fact affect arthritis, the studies show the connection is not always clear and may not be direct.

Possible explanations include:

  • Lower temperatures may lead to thickening of the synovial fluid, which lubricates the joints. This thickening could lead to joint pain and stiffness.
  • Bones and connective tissue in our bodies, like structures in our homes, expand and contract in response to changes in barometric pressure. Cadaver studies have shown that barometric pressure can influence pressure in the joints.
  • Alternatively, stretches of cloudy or rainy days may lead to low mood, which may cause people to focus more on their pain.
  • On cold, rainy days, patients may be less likely to be out and active. Lack of physical activity is known to worsen joint pain and stiffness.

The effect may differ from person to person and, as the research suggests, across different forms of arthritis.

What’s the Best Climate – and Should You Move?

Just as the effects of weather vary, the best climate may not be the same for all people. But based on research, it appears that for most people with arthritis, a warmer, drier climate may be optimal, such as that in parts of Texas, Arizona, Nevada and the Eastern Sierra region of California.

But obviously there are no absolutes and no guarantees that moving to a different climate would help your arthritis.

If you suspect a certain locale or climate is better for your arthritis and are considering a move, try visiting at different times of the year before moving there to see if you really notice a difference. And even if you do, consider what you will be giving up by moving. Unless a move also means getting a better job or being closer to family, the benefits of staying put (for example, friendships, jobs, schools, access to medical care and established social supports) may exceed the benefits of moving.

If you decide it’s best to stay put in a climate that is less than optimal, there are things you can do to minimize weather’s effects on your arthritis. First, check the weather forecast. If you notice patterns or temperatures that cause you pain, be prepared with tools you have found to relieve it.

When cold weather comes, dressing for warmth can help you weather problems like achy joints and hand pain related to Raynaud’s syndrome. Also, be mindful of other steps to staying healthy as the weather changes — getting your flu shot in fall, upping your vitamin D in winter and applying sunscreen and other sun protection, particularly if you have lupus or take medicines that make you more sun sensitive in the summer months.

With a little preparation and planning you can be more comfortable whatever the weather brings.

Check your local arthritis weather index so you can better prepare for how your climate may affect your joints.


What Happens When You Don't Ejaculate (Release Sperm) for a Long Time?

Wendy Wisner is a freelance journalist and International Board Certified Lactation Consultant (IBCLC). She has written about all things pregnancy, maternal/child health, parenting, and general health and wellness.



Ejaculation refers to semen being released from the penis during orgasm. There are many reasons why someone may not ejaculate for a long time. Some people do it intentionally for personal or religious reasons. Some people abstain to increase their sperm count for fertility purposes. Other people may have a health condition that makes ejaculation difficult or impossible.

The effects of not ejaculating haven’t been studied extensively, but there is no evidence that doing so—even for extensive periods—is harmful. There are no known negative side effects of not ejaculating. That said, if you are unable to ejaculate, or are having trouble ejaculating, it’s important to see a healthcare provider to find out if you have an underlying health condition that may be causing this.

Reasons You're Not Ejaculating

There are several reasons why someone may not be ejaculating. Sometimes the reasons are intentional, and sometimes they are not.

Intentionally Abstaining

Whether or not to engage in sexual activity or masturbation is a personal choice. Some people make mindful choices not to ejaculate for specific periods or extended periods. For example:

  • Many religions advise abstention from masturbation or sexual activity.
  • Someone may also choose not to ejaculate for personal reasons or as a part of a spiritual journey.
  • Healthcare providers might recommend abstaining from ejaculating for several days while trying to conceive or before fertility treatments .

Delayed Ejaculation

A sexual disorder called delayed ejaculation can be the cause of not ejaculating. Delayed ejaculation is defined as either a delay in the ability to ejaculate or a complete inability to ejaculate.

Delayed ejaculation is not common and experts aren’t sure what causes it. It used to be believed that relationship issues or psychological issues cause delayed ejaculation. It might be caused by endocrine, genetic, or neurobiological conditions, or it might be a medication side effect. Endocrine conditions are hormone-related, such as low testosterone.

Retrograde Ejaculation

Retrograde ejaculation is when semen is not expelled through the penis during orgasm, but enters the bladder instead. This is often referred to as a “dry orgasm” because you experience an orgasm, but you see a very low volume of semen or no semen at all.

Conditions like diabetes, previous pelvic surgeries, neurological conditions, and bladder malformations can cause retrograde ejaculation. It may also be a side effect of certain medications.

Anejaculation

Anejaculation is when you don’t ejaculate at all during sexual activity. A person with anejaculation experiences erections without ejaculation. However, they may ejaculate during nocturnal emissions (wet dreams) or while masturbating.

There are various potential causes of anejaculation, including health conditions like spinal cord injuries, diabetes, and multiple sclerosis. Potential psychological causes include a lack of body awareness, guilt or shame about sex, and performance anxiety .

People with male reproductive organs can ejaculate and produce sperm for their entire lives—there isn’t a particular age where this ability goes away. However, similarly to people with female reproductive organs, reproductive capacity decreases as they age. As such, it can be more difficult to experience erections and orgasms/ejaculations as they get older.

Side Effects of Not Ejaculating for a Long Time

There is nothing inherently harmful about not ejaculating for a long time. There are no known dangerous physical or psychological side effects. However, some general side effects are possible for certain individuals.

Physical Effects

The testes constantly produce sperm. If you don’t ejaculate, the sperm is reabsorbed into the body. Some people worry about getting “blue balls” if they don’t ejaculate — pain due to sexual arousal that doesn’t end in orgasm. However, there are no known medical problems associated with this phenomenon, and any discomfort resolves without intervention.

Psychological Effects

The mental health effects of not ejaculating or abstaining from ejaculating aren’t well-researched at this time. However, many people report different emotions when they haven’t ejaculated for a long period of time. Some people might experience clarity or peace of mind , while others may report feeling more irritated or distressed.

People who experience ejaculation-related health problems, such as delayed ejaculation or anejaculation, may experience relationship stress or anxiety surrounding sexual contact and sexual desire.

There are no reported benefits of not ejaculating, and the benefits of this practice have not been studied. Nevertheless, people who intentionally refrain may report benefits, such as mental and emotional balance, decreased fixation on sex, increased energy, and stress relief.

While many people abstain from ejaculating for several days while trying to conceive, or going through fertility treatments, the effectiveness of this practice is not clear. Research has found that abstaining from ejaculating for several days increases sperm count and semen volume. It’s less clear if this practice is helpful for other sperm aspects, such as motility (movement speed), vitality, and morphology (sperm shape).

Again, the mental health benefits of ejaculating or not ejaculating are not well studied. Still, there are some immediate benefits to experiencing orgasm, including reduced stress, improved mood, and even pain relief.

There is some evidence that ejaculation frequency might be protective against developing prostate cancer . For example, one 2016 study found that participants who reported higher rates of ejaculation were less likely to be diagnosed with prostate cancer.

Side Effects of Ejaculating Too Frequently

It’s normal to ejaculate frequently, and ejaculating daily or even several times a day has no known negative side effects. Ejaculating frequently may cause certain side effects, such as chafing of the skin (usually from masturbation specifically) or fatigue.

Some people may be concerned that frequent ejaculation may cause sex addiction or other sexual disorders. While the exact causes of sex addiction haven’t been identified yet, it is not thought to be caused by excessive masturbation or sexual activity. On the other hand, excessive masturbation or sexual activity may be a symptom of a sex addiction.

Masturbating frequently might affect sexual function, leading to issues like sexual desensitization, where you become less sensitive to sexual stimulation. Some people who masturbate frequently experience trouble getting erections or reaching orgasm through other forms of sexual activity.

It’s normal for some people to not ejaculate for a long time. In most cases, it will not cause physical or psychological harm.

See a healthcare provider if you have any concerns about your ejaculation patterns. If you are intentionally not masturbating because of guilt or shame about sex or masturbation, you may want to speak to a therapist about your feelings and concerns.

Conditions like diabetes, multiple sclerosis, and sexual disorders can result in an inability to ejaculate. Endocrine disorders, neurological disorders, and medication side effects may also cause these symptoms. A healthcare provider can evaluate you for any underlying medical conditions and discuss treatment plans.

A Quick Review

Not ejaculating for several days, weeks, or even longer, is not damaging to your health. Some people abstain from ejaculating for religious reasons, personal reasons, or to increase sperm count while trying to conceive.

If you are unable to ejaculate, you may have an underlying medical condition causing these symptoms. It's important to visit your healthcare provider for an evaluation.

Albobali Y, Madi MY. Masturbatory guilt leading to severe depression. Cureus. 2021;13(3):e13626. doi:10.7759/cureus.13626

Hanson BM, Aston KI, Jenkins TG, et al. The impact of ejaculatory abstinence on semen analysis parameters: A systematic review. J Assist Reprod Genet. 2018;35(2):213-220. doi:10.1007/s10815-017-1086-0

Gopalakrishnan R, Thangadurai P, Kuruvilla A, et al. Situational psychogenic anejaculation: A case study. Indian J Psychol Med. 2014;36(3):329-331. doi:10.4103/0253-7176.135393

Abdel-Hamid IA, Ali OI. Delayed ejaculation: Pathophysiology, diagnosis, and treatment. World J Mens Health. 2018;36(1):22-40. doi:10.5534/wjmh.17051

Society for Male Reproduction and Urology. Treatment options for patients with ejaculatory dysfunction.

Gunes S, Hekim GN, Arslan MA, et al. Effects of aging on the male reproductive system. J Assist Reprod Genet. 2016;33(4):441-454. doi:10.1007/s10815-016-0663-y

MedlinePlus. Sperm release pathway.

Levang S, Henkelman M, Neish R, et al. “Blue balls” and sexual coercion: A survey study of genitopelvic pain after sexual arousal without orgasm and its implications for sexual advances. Sex Med. 2023;11(2):qfad016. doi:10.1093/sexmed/qfad016

Mascherek A, Reidick MC, Gallinat J, et al. Is ejaculation frequency in men related to general and mental health? Looking back and looking forward. Front Psychol. 2021;12:693121. doi:10.3389/fpsyg.2021.693121

MedlinePlus. Delayed ejaculation.

Gianotten WL. The health benefits of sexual expression. In: Geuens S, Polona Mivšek A, Gianotten W, eds. Midwifery and Sexuality. doi:10.1007/978-3-031-18432-1_4

Rider JR, Wilson KM, Sinnott JA, et al. Ejaculation frequency and risk of prostate cancer: Updated results with an additional decade of follow-up. Eur Urol. 2016;70(6):974-982. doi:10.1016/j.eururo.2016.03.027

Fong TW. Understanding and managing compulsive sexual behaviors. Psychiatry (Edgmont). 2006;3(11):51-58.

Huang S, Niu C, Santtila P. Masturbation frequency and sexual function in individuals with and without sexual partners. Sexes. 2022;3(2):229-243. doi:10.3390/sexes3020018


  • Open access
  • Published: 26 June 2024

Association of MAFLD and MASLD with all-cause and cause-specific dementia: a prospective cohort study

  • Xue Bao,
  • Lina Kang,
  • Songjiang Yin,
  • Gunnar Engström,
  • Lian Wang,
  • Biao Xu,
  • Xiaowen Zhang &
  • Xinlin Zhang (ORCID: orcid.org/0000-0002-7149-1033)

Alzheimer's Research & Therapy, volume 16, Article number: 136 (2024)


Background

Liver disease and dementia are both highly prevalent and share common pathological mechanisms. We aimed to investigate the associations between metabolic dysfunction-associated fatty liver disease (MAFLD), metabolic dysfunction-associated steatotic liver disease (MASLD) and the risk of all-cause and cause-specific dementia.

Methods

We conducted a prospective study with 403,506 participants from the UK Biobank. Outcomes included all-cause dementia, Alzheimer’s disease, and vascular dementia. Multivariable Cox proportional hazards models were used for analyses.

Results

A total of 155,068 (38.4%) participants had MAFLD, and 111,938 (27.7%) had MASLD at baseline. During a median follow-up of 13.7 years, 5,732 participants developed dementia (2,355 Alzheimer’s disease and 1,274 vascular dementia). MAFLD was associated with an increased risk of vascular dementia (HR 1.32 [95% CI 1.18–1.48]) but a reduced risk of Alzheimer’s disease (0.92 [0.84–1.0]). Differing risks emerged among MAFLD subtypes, with the diabetes subtype increasing risk of all-cause dementia (1.8 [1.65–1.96]), vascular dementia (2.95 [2.53–3.45]) and Alzheimer’s disease (1.46 [1.26–1.69]), the lean metabolic disorder subtype only increasing vascular dementia risk (2.01 [1.25–3.22]), and the overweight/obesity subtype decreasing risk of Alzheimer’s disease (0.83 [0.75–0.91]) and all-cause dementia (0.9 [0.84–0.95]). MASLD was associated with an increased risk of vascular dementia (1.24 [1.1–1.39]) but not Alzheimer’s disease (1.0 [0.91–1.09]). The effect of MAFLD on vascular dementia was consistent regardless of MASLD presence, whereas associations with Alzheimer’s disease were only present in those without MASLD (0.78 [0.67–0.91]).

Conclusions

MAFLD and MASLD are associated with an increased risk of vascular dementia, with subtype-specific variations observed in dementia risks. Further research is needed to refine MAFLD and SLD subtyping and explore the underlying mechanisms contributing to dementia risk.

Liver disease accounts for over 2 million global deaths per year, representing approximately 3.5–4% of total worldwide mortality, as estimated in 2015 [ 1 , 2 ]. Among liver diseases, nonalcoholic fatty liver disease (NAFLD) is a prominent contributor [ 2 , 3 ]. The term “nonalcoholic” is widely used, but it fails to accurately reflect the disease’s true origins, as there is considerable overlap between NAFLD and alcohol-related liver disease (ALD).

In 2020, a new nomenclature, metabolic dysfunction-associated fatty liver disease (MAFLD), was introduced as an alternative to NAFLD [ 4 ]. This updated terminology aims to shift the focus towards recognizing the primary factors driving NAFLD, rather than merely excluding other potential causes [ 4 ]. However, MAFLD has not gained universal acceptance, as concerns persist about mixing etiologies and overlooking a significant proportion of NAFLD patients with lean and normal body mass index (BMI) [ 5 ]. In 2023, a Delphi consensus statement introduced metabolic dysfunction-associated steatotic liver disease (MASLD) as a new term [ 6 ]. Unlike MAFLD, MASLD considers varying levels of alcohol consumption and avoids clinically challenging criteria and biological measurements, such as insulin resistance, which can be difficult to assess in routine clinical practice.

Dementia ranks as the fifth leading cause of death worldwide, with its prevalence on the rise [ 7 ]. Despite extensive efforts, the mechanisms underlying dementia remain largely unknown, and effective treatments are lacking. Importantly, there are shared risk factors between dementia and liver diseases. Emerging evidence suggests that the pathological pathways triggered by NAFLD, including insulin resistance, neuroinflammation, hyperammonemia, gut dysbiosis, and cerebrovascular dysfunction, may contribute to dementia development [ 8 ].

Some studies have demonstrated an association between NAFLD and reduced brain volume in healthy adults [ 9 , 10 ], poorer cognitive function across multiple domains [ 11 ] and accelerated aging in patients with advanced fibrosis caused by NAFLD [ 12 ]. However, a post-hoc analysis of two large cardiovascular trials did not find a significant association between chronic liver disease and brain imaging markers [ 13 ]. Several observational studies investigating the link between NAFLD and dementia have produced conflicting results. While a Swedish [ 14 ] and a South Korean cohort study [ 15 ] suggested a modestly increased risk of vascular dementia in NAFLD individuals, others did not report an association [ 16 , 17 , 18 ], or even reported an inverse one [ 16 , 19 ]. Most of these association studies have limitations, including cross-sectional designs, insufficient adjustments for covariates, small-to-moderate sample sizes, and limited follow-up durations. Moreover, to our knowledge, no prior study has examined or compared the association between the two newly proposed nomenclatures, MAFLD and MASLD, and the risk of dementia.

This study aims to address these gaps by conducting a prospective investigation in a large population-based UK cohort. The objective is to thoroughly explore the independent longitudinal association of MAFLD and/or MASLD with all-cause and cause-specific dementia. Additionally, the study aims to investigate dementia outcomes based on subtypes defined by the MAFLD criteria, as well as subtypes of steatotic liver diseases (SLD) based on the Delphi consensus.

Study designs and participants

The UK Biobank is a large prospective cohort comprising over 500,000 participants aged 38 to 72 years at recruitment between 2006 and 2010 at one of the 22 assessment centers across England, Scotland, and Wales [ 20 ]. The study received ethical approval from the North West Multicenter Research Ethics Committee, and all participants provided written informed consent. Participants with an available fatty liver index (FLI) at enrollment were included, and those with pre-existing dementia were excluded. The study was reported according to the STROBE guidelines (Supplementary materials).

Hepatic steatosis was assessed using the FLI, as specific imaging or histological data related to fatty liver were not available in the UK Biobank. The FLI is based on BMI, waist circumference, triglycerides, and gamma-glutamyltransferase (GGT) and has demonstrated reliability as an alternative to imaging techniques such as ultrasonography and transient elastography, showing good diagnostic performance with an area under the receiver operating characteristic curve (AUROC) of 0.85 [ 21 ]. An FLI ≥ 60 indicated hepatic steatosis [ 22 ].
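The paragraph above describes the FLI only by its inputs. As a concrete sketch, the widely cited FLI formula (Bedogni et al., 2006) combines the four measurements through a logistic transform; the coefficients below are from that published formula, not from this paper, and should be verified against the original source before any real use:

```python
import math

def fatty_liver_index(triglycerides_mg_dl: float, bmi: float,
                      ggt_u_l: float, waist_cm: float) -> float:
    """Fatty Liver Index (FLI) on a 0-100 scale.

    Coefficients are the commonly cited Bedogni et al. (2006) values,
    shown here for illustration only.
    """
    x = (0.953 * math.log(triglycerides_mg_dl)
         + 0.139 * bmi
         + 0.718 * math.log(ggt_u_l)
         + 0.053 * waist_cm
         - 15.745)
    # Logistic transform maps the linear score onto 0-100.
    return math.exp(x) / (1 + math.exp(x)) * 100

def has_hepatic_steatosis(fli: float, cutoff: float = 60.0) -> bool:
    """FLI >= 60 indicates hepatic steatosis, per the cutoff used in the study."""
    return fli >= cutoff
```

For example, a participant with elevated triglycerides, BMI, GGT and waist circumference scores well above the 60-point cutoff, while typical lean values score far below it.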

MAFLD diagnosis relied on hepatic steatosis evidence meeting any of three criteria: overweight/obesity, type 2 diabetes, or at least two metabolic abnormalities. MAFLD had three subtypes: diabetes subtype, overweight/obesity subtype, and lean metabolic disorder subtype [ 23 ]. MASLD was defined as the presence of fatty liver along with at least one of the five specified criteria, excluding secondary liver steatosis causes [ 6 ]. Subtypes of SLD included MASLD, MetALD (MASLD with higher alcohol intake or other combination etiology), cryptogenic SLD, and SLD with specific etiologies.

We gathered disease diagnosis information and diagnosis dates from hospital inpatient records and death registry records. Our primary outcomes were incident all-cause dementia, Alzheimer’s disease and vascular dementia. Due to the limited number of incident cases available for obtaining reliable associations, the results for frontotemporal dementia were analyzed but only presented in the supplementary materials. We identified these diagnoses using International Classification of Disease-10 (ICD-10) and ICD-9 codes, with the diagnosis date determined by the earliest date of either primary or secondary diagnosis. The ICD codes for outcomes are shown in supplementary Table 1 . The UK Biobank Outcome Adjudication Group conducted outcome adjudication for incident dementia.

In our full-model analyses, we considered the following covariates: age, sex, race, education, smoking and drinking status, Townsend Deprivation Index (TDI), annual household income, physical activity, cardiovascular disease (CVD), and APOE ε4 status. Alcohol intake was categorized as daily or almost daily, 3–4 times per week, 1–2 times per week, occasionally, or never. The TDI is a composite measure reflecting socioeconomic status and categorized into quartiles, with higher scores indicating lower socioeconomic status. Self-reported physical activity was categorized as high, moderate, or low, based on the validated International Physical Activity Questionnaire (IPAQ). CVD was defined as a composite of ischemic heart disease, stroke, and heart failure [ 14 ]. APOE ε4 genotype was defined by two SNPs, rs429358 and rs7412, and categorized as noncarriers (−/−), heterozygotes (+/−), and homozygotes (+/+).

Statistical analysis

The participants’ baseline characteristics were presented as means ± standard deviation (SD) for continuous variables and as numbers (percentages [%]) for categorical variables. We used Cox proportional hazards regression models to assess the associations of MAFLD or MASLD with the time to incident dementia. In these models, reference groups included participants without MAFLD, without MASLD, without both MAFLD and MASLD, or without hepatic steatosis, as specified. Missing data were coded separately for categorical variables. We examined the proportional hazards assumption by testing Schoenfeld residuals. Follow-up time was calculated from attendance date until the first dementia diagnosis, death, or the censoring date (October 31, 2022), whichever occurred first.
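The follow-up construction described above (time from attendance until the first dementia diagnosis, death, or the October 31, 2022 censoring date, whichever occurs first) can be sketched as a small helper. This is an illustrative reconstruction, not the study's code; the parameter names and return convention are assumptions:

```python
from datetime import date
from typing import Optional, Tuple

# Administrative censoring date stated in the paper.
CENSOR_DATE = date(2022, 10, 31)

def follow_up(attendance: date,
              dementia_dx: Optional[date] = None,
              death: Optional[date] = None) -> Tuple[float, bool]:
    """Return (follow-up time in years, event indicator) for one participant.

    Follow-up runs from the baseline attendance date to the earliest of:
    first dementia diagnosis, death, or the censoring date.
    """
    candidates = [d for d in (dementia_dx, death, CENSOR_DATE) if d is not None]
    end = min(candidates)
    # The event indicator is True only if follow-up ended at the diagnosis date.
    event = dementia_dx is not None and end == dementia_dx
    years = (end - attendance).days / 365.25
    return years, event
```

For instance, a participant recruited on June 1, 2008 and diagnosed on June 1, 2020 contributes about 12 years with an event; one who is never diagnosed and survives is censored at roughly 14.4 years.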

In addition to univariable analysis, we conducted adjusted analyses. Model 1 adjusted for age, sex, race or ethnicity, education, TDI, annual household income, smoking status, alcohol intake, and physical activity. Model 2 further adjusted for CVD and APOE status, with covariate selection based on previous literature, clinical relevance, and data availability. Notably, certain covariates such as waist circumference, BMI, diabetes, hypertension, and dyslipidemia were not adjusted for, as they were already incorporated into the definitions of MAFLD and MASLD to avoid overadjustment [ 24 , 25 ].

We examined the potential interactions between each covariate and MAFLD or MASLD on risk of dementia by including one interaction term at a time in the multivariate models. We performed post-hoc subgroup analyses based on age categories (< 65 years and ≥ 65 years). We also analyzed the association of different MAFLD and SLD subtypes with dementia.

To ensure robustness, we conducted sensitivity analyses. Firstly, we excluded participants who experienced dementia events within the first 2 and 5 years of follow-up to address potential reverse causality. Secondly, to address the potential confounding effect of mixed dementia (involving both Alzheimer’s and vascular dementia), we excluded participants who were diagnosed with vascular dementia before or concurrently with Alzheimer’s dementia during the follow-up period when using Alzheimer’s dementia as the outcome, or vice versa. Lastly, to account for the potential effect modification of death before dementia events, we conducted competing risk analyses using the Fine-Gray proportional subhazards model, treating death from other causes as a competing event [ 26 ].

Statistical analyses were performed using SAS software, version 9.4 (SAS Institute Inc, Cary, NC). A two-tailed p-value less than 0.05 was considered statistically significant.

Baseline characteristics

A total of 403,506 participants were included in the analyses (Fig.  1 ). At baseline, the participants had a mean age of 56.6 ± 8.1 years, with 53.8% being females. The baseline characteristics of participants are presented in Table  1 . Among the entire cohort, 155,068 (38.4%) participants had MAFLD and 111,938 (27.7%) had MASLD. Among the 155,520 individuals with FLD, 99.7% could be classified as MAFLD, while 72.0% as MASLD; 71.8% were classified as MAFLD + MASLD+, 27.9% MAFLD + MASLD–, and only 0.15% MAFLD–MASLD+.

figure 1

Study population flow chart

Compared to participants without MASLD, those with MASLD tended to be older, more likely to be male, physically inactive, and non-White. They also had higher BMI, lower education levels, lower household income, and lower alcohol consumption. The prevalence of hypertension, diabetes, and CVD was higher in participants with MASLD compared to those without MASLD. Most characteristics between the MAFLD + and MASLD + groups were similar, with the exception of a higher percentage of men and greater alcohol consumption in the MAFLD group (Table  1 ).

MAFLD, its subtypes and dementia

Over a median follow-up of 13.7 years (interquartile range 12.9–14.4), there were 5,732 new dementia events recorded, including 2,355 cases of Alzheimer’s disease and 1,274 cases of vascular dementia. The proportional hazards assumption was assessed using Schoenfeld residuals, and no violations were found. In the fully adjusted model, MAFLD was associated with a higher risk of vascular dementia (1.32 [1.18–1.48]) but a lower risk of Alzheimer’s disease (0.92 [0.84–1.0]). The risk of all-cause dementia was not statistically different (1.03 [0.98–1.09]) (Table  2 ).

A total of 18,345 individuals were classified as having the MAFLD diabetes subtype, 133,927 as the overweight/obesity subtype, and 2,796 as the lean metabolic disorder subtype. In the full adjustment model, individuals with the MAFLD diabetes subtype had a higher risk of all-cause dementia (1.8 [1.65–1.96]), Alzheimer’s disease (1.46 [1.26–1.69]), and vascular dementia (2.95 [2.52–3.45]) compared to those without hepatic steatosis. Conversely, the MAFLD overweight/obese subtype was associated with a lower risk of all-cause dementia (0.9 [0.84–0.95]), primarily driven by Alzheimer’s disease (0.83 [0.75–0.91]); the risk of vascular dementia did not differ (1.0 [0.88–1.13]). In the MAFLD lean metabolic disorder subtype, there was a higher risk of vascular dementia (2.01 [1.25–3.22]), while the risks of all-cause dementia (1.21 [0.92–1.59]) and Alzheimer’s disease (0.87 [0.53–1.42]) were not significantly different, although the number of dementia events was limited (Table  2 ).

MASLD, SLD subtypes and dementia

In the fully adjusted model, MASLD was associated with a higher risk of vascular dementia (1.24 [1.1–1.39]), but the risk of Alzheimer’s disease (1.0 [0.91–1.09]) and all-cause dementia (1.05 [0.99–1.11]) did not differ significantly (Table  3 ).

A total of 111,938 individuals had MASLD, 43,528 had MetALD, 30 had cryptogenic SLD, and 24 had other specific etiology SLD. After full adjustment, individuals with MASLD had a higher risk of vascular dementia (1.32 [1.16–1.49]) compared to those without hepatic steatosis, but the risk of other types of dementia was not statistically different. MetALD was associated with a lower risk of Alzheimer’s disease (0.79 [0.67–0.92]), but a higher risk of vascular dementia (1.33 [1.12–1.59]), while the risk of all-cause dementia was similar. The number of participants with cryptogenic SLD and other specific etiology SLD was small (Table  3 ).

MAFLD, MASLD combinations and dementia

After full adjustment, compared to MAFLD−/MASLD−, MAFLD+/MASLD− was associated with a higher risk of vascular dementia (1.33 [1.12–1.59]) but a decreased risk of Alzheimer’s disease (0.78 [0.67–0.91]). Similarly, MAFLD+/MASLD+ was also associated with a higher risk of vascular dementia (1.31 [1.16–1.49]), while the risk of Alzheimer’s disease (0.96 [0.88–1.06]) was similar. The risk of all-cause dementia did not differ significantly between MAFLD+/MASLD− and MAFLD−/MASLD−, or between MAFLD+/MASLD+ and MAFLD−/MASLD− (Table 4).

Subgroup and sensitivity analyses

No significant association was detected between MAFLD, MASLD, their combinations, or their subgroups and frontotemporal dementia (supplementary Table 2). The overall results remained consistent when the analyses were stratified by various factors. A significant interaction was found between MAFLD and age in relation to the risk of all-cause and vascular dementia, and between MASLD and age regarding vascular dementia. Therefore, we conducted post-hoc subgroup analyses based on age categories (<65 years and ≥65 years). In general, associations with dementia were more prominent in the younger age group (supplementary Tables 3–5). Consistent results were observed when considering only dementia events that occurred at least 2 or 5 years after baseline (supplementary Tables 6–11). Accounting for the competing risk of death from other causes (supplementary Tables 12–14) or excluding participants with mixed dementia (supplementary Tables 15–17) also yielded comparable findings.
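The competing-risk analysis mentioned above treats death from other causes as a competing event rather than as ordinary censoring (the authors cite a subdistribution hazards regression macro [26]). As a minimal, self-contained illustration of why this matters — a sketch of the nonparametric Aalen–Johansen cumulative incidence estimator, not the authors' actual code — consider:

```python
def cumulative_incidence(times, events, cause, horizon):
    """Aalen-Johansen cumulative incidence of `cause` by `horizon`.

    `events`: 0 = censored, otherwise an integer cause code (e.g. 1 = dementia,
    2 = death from other causes). Treating deaths as plain censoring would
    overstate dementia risk; this estimator does not.
    """
    at_risk = len(times)
    surv = 1.0   # overall (all-cause) Kaplan-Meier survival S(t-)
    cif = 0.0
    for t in sorted(set(times)):
        if t > horizon:
            break
        here = [e for tt, e in zip(times, events) if tt == t]
        d_all = sum(1 for e in here if e != 0)        # events of any cause at t
        d_cause = sum(1 for e in here if e == cause)  # events of this cause at t
        cif += surv * d_cause / at_risk               # increment: S(t-) * d_k / n
        surv *= 1 - d_all / at_risk
        at_risk -= len(here)                          # events and censorings leave
    return cif

# Toy data: 0 = censored, 1 = dementia, 2 = death without dementia
times = [1, 2, 3, 4]
events = [1, 2, 0, 1]
print(cumulative_incidence(times, events, cause=1, horizon=4))  # 0.75
```

On this toy data, naively censoring the death at time 2 would push the apparent dementia incidence to 1.0, whereas the estimator above yields 0.75 for dementia and 0.25 for death, which correctly sum to at most 1.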

Discussion

In this ∼13-year follow-up study of 403,506 participants from the UK Biobank, MAFLD was associated with a higher risk of vascular dementia but a lower risk of Alzheimer’s disease. The increased risk of vascular dementia was primarily driven by the diabetes and lean metabolic disorder subtypes within the MAFLD group. Conversely, the lower risk of Alzheimer’s disease was mainly attributed to the protective association seen in the overweight/obesity subtype of MAFLD. MASLD was also associated with a higher risk of vascular dementia but did not affect the risk of Alzheimer’s disease. The impact of MAFLD on vascular dementia was consistent regardless of the presence of MASLD, but the association with Alzheimer’s disease was only evident in individuals without MASLD. Neither MAFLD, MASLD, nor their combinations had a significant impact on the risk of all-cause dementia, except that the MAFLD diabetes subtype was associated with a higher risk of all-cause dementia.

The prevalence of MAFLD in the UK Biobank population was 38.4%, and the prevalence of MASLD was 27.7%. These rates were higher than those reported in NHANES III (20.4% and 14.9%, respectively) [24], but similar to rates in a national cohort in Korea (37.3% for MAFLD) [25]. It is noteworthy that almost all MASLD subjects (99.8%) fell within the MAFLD diagnosis, whereas only 72.0% of MAFLD cases fell within MASLD. MAFLD+/MASLD− individuals have hepatic steatosis with concurrent liver disease or exhibit only two components of metabolic syndrome (insulin resistance assessed by HOMA and inflammation assessed by CRP levels) [23]. This relatively high percentage underscores the diverse etiologies underlying MAFLD. On the other hand, MAFLD−/MASLD+ individuals have hepatic steatosis with only one component of the specific cardiometabolic criteria (prediabetes, hypertension, increased waist circumference, increased triglycerides, or decreased HDL cholesterol) [6]. The notably lower percentage of this group suggests that individuals with a single cardiometabolic abnormality in the UK population often have other coexisting cardiometabolic abnormalities.

Our study represents the first longitudinal investigation into the association of MAFLD and MASLD with all-cause and cause-specific dementia. We found a positive link between MAFLD and vascular dementia, primarily driven by the diabetes and lean metabolic disorder subtypes. Similarly, MASLD and MetALD were also associated with vascular dementia, indicating that hepatic steatosis, diabetes, lean metabolic syndrome, and excessive alcohol consumption collectively contribute to the risk of vascular dementia. However, the overweight/obesity MAFLD subtype did not follow this pattern. A study utilizing the NHANES III database similarly found that the overweight/obese MAFLD subtype, unlike the other two subtypes, was not associated with an increased risk of all-cause mortality [27]. These varying prognostic effects of the MAFLD subtypes on mortality [27], myocardial infarction [28], and dementia highlight the heterogeneity within the MAFLD definition and emphasize the importance of further subclassification to guide tailored therapeutic interventions.

The inverse relationship between MAFLD and Alzheimer’s disease was predominantly observed in the overweight/obesity subtype. Our sensitivity analyses, focusing on dementia events occurring after 5 years of follow-up, provided consistent results and minimized the likelihood of reverse causation between obesity and dementia. The relationship between BMI and dementia is complex and has produced conflicting findings in prior research. Some studies suggest that being overweight in mid-life increases the risk of dementia later in life, whereas in later life being overweight may be associated with a reduced dementia risk [29], exemplifying the obesity paradox [30]. Nevertheless, other studies present divergent results. For example, a large cohort study involving 1,958,191 UK participants and 45,507 dementia events found that dementia incidence decreased with increasing BMI categories, even after excluding events within the first 15 years of follow-up [31]. Among participants aged 60 years and older, a higher BMI was associated with a reduced risk of Alzheimer’s disease among those with the same genetic risk [32]. Furthermore, declining BMI has been associated with an increased risk of incident Alzheimer’s disease [33], while weight gain is associated with reduced dementia-related mortality [34].

MetALD, a subtype of SLD within the new consensus, was also associated with a reduced risk of Alzheimer’s disease. This suggests that MetALD, which is largely included in the MAFLD definition but not separately classified there, contributes substantially to the reduced risk of Alzheimer’s disease associated with MAFLD, alongside the overweight/obesity subtype. While MASLD was not associated with a decreased risk of Alzheimer’s disease, the effect observed with MetALD may stem from alcohol consumption or the interaction between alcohol and metabolic dysfunction. The lower risk of Alzheimer’s disease in MAFLD+/MASLD− individuals compared to MAFLD−/MASLD− individuals supports this speculation, as the MAFLD+/MASLD− group largely represents individuals with concomitant alcohol-related liver disease and metabolic abnormalities.

The relationship between alcohol consumption and dementia is complex and inconclusive. Studies have reported a 22% reduction in the risk of Alzheimer’s disease among mild-to-moderate alcohol consumers compared to non-consumers [35, 36], which aligns with our findings. This protective effect may be attributed to mechanisms such as the promotion of pro-survival pathways and a reduction in neuroinflammation [37]. Conversely, sustained heavy drinking is associated with a significantly increased risk of Alzheimer’s disease [36].

Research often demonstrates a J-shaped or U-shaped association between alcohol consumption and all-cause dementia risk, but the threshold at which risk increases remains uncertain [36, 38]. Establishing a definitive cause-and-effect association between alcohol and Alzheimer’s disease is challenging due to methodological differences across studies, particularly in how alcohol consumption is measured and how control groups are defined.

In our study, we did not investigate the amount or type of alcohol consumed. Given the differing effects of MAFLD+/MASLD− and MAFLD+/MASLD+ on Alzheimer’s disease, it is important for future studies to distinguish MAFLD with concomitant liver disease from MAFLD without it. Although the newly established Delphi consensus statement has addressed this issue to some extent, there is still room for refinement in predicting dementia risk. A more nuanced approach, similar to the subcategorization used in the MAFLD definition, holds promise for improving the accuracy of dementia risk prediction in clinical settings.

This study has several strengths, including its prospective design, large sample size, long-term follow-up, robust adjustment for potential confounders including genetic background, and the use of multiple sources to identify incident dementia cases. The dementia outcome has been validated previously, demonstrating a sensitivity of 78% and a specificity of 92% for dementia diagnosis recording in general hospitals, using secondary mental health care diagnostic status as the gold standard [39].

However, it is important to acknowledge some limitations. First, despite using multiple sources to identify incident dementia cases, it is possible that early-stage or milder cases of dementia recorded only in primary care were missed. Nevertheless, the overall accuracy of dementia diagnosis showed good agreement with primary care records. Second, hepatic steatosis was defined using the FLI rather than liver biopsy or imaging. However, the FLI has demonstrated a strong correlation with ultrasound diagnosis of NAFLD in multiple studies [21, 40]. Third, while we controlled for a wide range of confounders, residual confounding cannot be completely ruled out given the observational nature of the study. Fourth, MAFLD and MASLD were assessed only at baseline, and data on exposure durations and changes during follow-up are lacking. Fifth, the findings are observational, and therefore causality cannot be established. Finally, the majority of participants in the UK Biobank study were of White ethnicity, so generalizing the findings to other ethnic groups should be done with caution.
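Regarding the second limitation: the FLI is an algorithmic score combining triglycerides, BMI, gamma-glutamyltransferase (GGT), and waist circumference. The sketch below implements the published Bedogni et al. (2006) formula; the FLI ≥ 60 / < 30 thresholds in the comment are the commonly cited cutoffs, and it is an assumption here that this study applied them, as the methods are not part of this excerpt.

```python
import math

def fatty_liver_index(tg_mg_dl, bmi, ggt_u_l, waist_cm):
    """Fatty liver index (Bedogni et al., 2006), on a 0-100 scale.

    tg_mg_dl: triglycerides in mg/dL; ggt_u_l: GGT in U/L;
    waist_cm: waist circumference in cm.
    """
    x = (0.953 * math.log(tg_mg_dl)
         + 0.139 * bmi
         + 0.718 * math.log(ggt_u_l)
         + 0.053 * waist_cm
         - 15.745)
    return math.exp(x) / (1 + math.exp(x)) * 100  # logistic transform to 0-100

# Example: TG 150 mg/dL, BMI 30, GGT 40 U/L, waist 100 cm
fli = fatty_liver_index(150, 30, 40, 100)
# Commonly cited cutoffs: FLI >= 60 suggests hepatic steatosis,
# FLI < 30 rules it out.
```

Because the score is a logistic transform of a linear predictor, it is bounded between 0 and 100 by construction, which is what makes fixed cutoffs workable across cohorts.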

In conclusion, MAFLD and MASLD are associated with an increased risk of vascular dementia, with subtype-specific variations in dementia risk. The MAFLD diabetes subtype elevates the risk of all-cause dementia, vascular dementia, and Alzheimer’s disease, while the lean metabolic disorder subtype increases only the risk of vascular dementia. By contrast, the overweight/obesity subtype is associated with a reduced risk of Alzheimer’s disease and all-cause dementia. MASLD does not influence the risk of Alzheimer’s disease, but MetALD is associated with a lower risk of Alzheimer’s disease. Further research is needed to refine MAFLD and SLD subtyping and to explore the underlying mechanisms contributing to dementia risk.

Data availability

Data are available in a public, open access repository. Data from the UK Biobank ( https://www.ukbiobank.ac.uk/ ) are available to researchers on application.

Abbreviations

ALD: Alcohol-related liver disease

BMI: Body mass index

CVD: Cardiovascular diseases

FLI: Fatty liver index

HR: Hazard ratio

IPAQ: International Physical Activity Questionnaire

MAFLD: Metabolic dysfunction-associated fatty liver disease

MASLD: Metabolic dysfunction-associated steatotic liver disease

NAFLD: Nonalcoholic fatty liver disease

SLD: Steatotic liver disease

TDI: Townsend Deprivation Index

1. Asrani SK, Devarbhavi H, Eaton J, Kamath PS. Burden of liver diseases in the world. J Hepatol. 2019;70:151–71.

2. Devarbhavi H, Asrani SK, Arab JP, Nartey YA, Pose E, Kamath PS. Global burden of liver disease: 2023 update. J Hepatol. 2023;79:516–37.

3. Riazi K, Azhari H, Charette JH, Underwood FE, King JA, Afshar EE, Swain MG, Congly SE, Kaplan GG, Shaheen AA. The prevalence and incidence of NAFLD worldwide: a systematic review and meta-analysis. Lancet Gastroenterol Hepatol. 2022;7:851–61.

4. Eslam M, Newsome PN, Sarin SK, Anstee QM, Targher G, Romero-Gomez M, Zelber-Sagi S, Wai-Sun Wong V, Dufour JF, Schattenberg JM, et al. A new definition for metabolic dysfunction-associated fatty liver disease: an international expert consensus statement. J Hepatol. 2020;73:202–9.

5. De A, Ahmad N, Mehta M, Singh P, Duseja A. NAFLD vs. MAFLD – it is not the name but the disease that decides the outcome in fatty liver. J Hepatol. 2022;76:475–7.

6. Rinella ME, Lazarus JV, Ratziu V, Francque SM, Sanyal AJ, Kanwal F, Romero D, Abdelmalek MF, Anstee QM, Arab JP, et al. A multi-society Delphi consensus statement on new fatty liver disease nomenclature. J Hepatol. 2023.

7. GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019. Lancet Public Health. 2022;7:e105–25.

8. Cheon SY, Song J. Novel insights into non-alcoholic fatty liver disease and dementia: insulin resistance, hyperammonemia, gut dysbiosis, vascular impairment, and inflammation. Cell Biosci. 2022;12:99.

9. Weinstein G, Zelber-Sagi S, Preis SR, Beiser AS, DeCarli C, Speliotes EK, Satizabal CL, Vasan RS, Seshadri S. Association of nonalcoholic fatty liver disease with lower brain volume in healthy middle-aged adults in the Framingham Study. JAMA Neurol. 2018;75:97–104.

10. Weinstein G, O’Donnell A, Frenzel S, Xiao T, Yaqub A, Yilmaz P, de Knegt RJ, Maestre GE, van Melo D, Long M, et al. Nonalcoholic fatty liver disease, liver fibrosis, and structural brain imaging: the cross-cohort collaboration. Eur J Neurol. 2024;31:e16048.

11. George ES, Sood S, Daly RM, Tan SY. Is there an association between non-alcoholic fatty liver disease and cognitive function? A systematic review. BMC Geriatr. 2022;22:47.

12. Loomba R, Gindin Y, Jiang Z, Lawitz E, Caldwell S, Djedjos CS, Xu R, Chung C, Myers RP, Subramanian GM, et al. DNA methylation signatures reflect aging in patients with nonalcoholic steatohepatitis. JCI Insight. 2018;3.

13. Basu E, Mehta M, Zhang C, Zhao C, Rosenblatt R, Tapper EB, Parikh NS. Association of chronic liver disease with cognition and brain volumes in two randomized controlled trial populations. J Neurol Sci. 2022;434:120117.

14. Shang Y, Widman L, Hagstrom H. Nonalcoholic fatty liver disease and risk of dementia: a population-based cohort study. Neurology. 2022;99:e574–82.

15. Kim GA, Oh CH, Kim JW, Jeong SJ, Oh IH, Lee JS, Park KC, Shim JJ. Association between non-alcoholic fatty liver disease and the risk of dementia: a nationwide cohort study. Liver Int. 2022;42:1027–36.

16. Xiao T, van Kleef LA, Ikram MK, de Knegt RJ, Ikram MA. Association of nonalcoholic fatty liver disease and fibrosis with incident dementia and cognition: the Rotterdam Study. Neurology. 2022;99:e565–73.

17. Labenz C, Kostev K, Kaps L, Galle PR, Schattenberg JM. Incident dementia in elderly patients with nonalcoholic fatty liver disease in Germany. Dig Dis Sci. 2021;66:3179–85.

18. Huang H, Liu Z, Xie J, Xu C. NAFLD does not increase the risk of incident dementia: a prospective study and meta-analysis. J Psychiatr Res. 2023;161:435–40.

19. Liu Z, Suo C, Fan H, Zhang T, Jin L, Chen X. Dissecting causal relationships between nonalcoholic fatty liver disease proxied by chronically elevated alanine transaminase levels and 34 extrahepatic diseases. Metabolism. 2022;135:155270.

20. Sudlow C, Gallacher J, Allen N, Beral V, Burton P, Danesh J, Downey P, Elliott P, Green J, Landray M, et al. UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 2015;12:e1001779.

21. Jones GS, Alvarez CS, Graubard BI, McGlynn KA. Agreement between the prevalence of nonalcoholic fatty liver disease determined by transient elastography and fatty liver indices. Clin Gastroenterol Hepatol. 2022;20:227–9.e2.

22. European Association for the Study of the Liver (EASL), European Association for the Study of Diabetes (EASD), European Association for the Study of Obesity (EASO). EASL-EASD-EASO Clinical Practice Guidelines for the management of non-alcoholic fatty liver disease. J Hepatol. 2016;64:1388–402.

23. Wang TY, Wang RF, Bu ZY, Targher G, Byrne CD, Sun DQ, Zheng MH. Association of metabolic dysfunction-associated fatty liver disease with kidney disease. Nat Rev Nephrol. 2022;18:259–68.

24. Zhao Q, Deng Y. Comparison of mortality outcomes in individuals with MASLD and/or MAFLD. J Hepatol. 2023.

25. Lee H, Lee YH, Kim SU, Kim HC. Metabolic dysfunction-associated fatty liver disease and incident cardiovascular disease risk: a nationwide cohort study. Clin Gastroenterol Hepatol. 2021;19:2138–47.e10.

26. Kohl M, Plischke M, Leffondre K, Heinze G. PSHREG: a SAS macro for proportional and nonproportional subdistribution hazards regression. Comput Methods Programs Biomed. 2015;118:218–33.

27. Chen X, Chen S, Pang J, Tang Y, Ling W. Are the different MAFLD subtypes based on the inclusion criteria correlated with all-cause mortality? J Hepatol. 2021;75:987–9.

28. Chen S, Xue H, Huang R, Chen K, Zhang H, Chen X. Associations of MAFLD and MAFLD subtypes with the risk of the incident myocardial infarction and stroke. Diabetes Metab. 2023;49:101468.

29. Whitmer RA, Gustafson DR, Barrett-Connor E, Haan MN, Gunderson EP, Yaffe K. Central obesity and increased risk of dementia more than three decades later. Neurology. 2008;71:1057–64.

30. Emmerzaal TL, Kiliaan AJ, Gustafson DR. 2003–2013: a decade of body mass index, Alzheimer’s disease, and dementia. J Alzheimers Dis. 2015;43:739–55.

31. Qizilbash N, Gregson J, Johnson ME, Pearce N, Douglas I, Wing K, Evans SJW, Pocock SJ. BMI and risk of dementia in two million people over two decades: a retrospective cohort study. Lancet Diabetes Endocrinol. 2015;3:431–6.

32. Yuan S, Wu W, Ma W, Huang X, Huang T, Peng M, Xu A, Lyu J. Body mass index, genetic susceptibility, and Alzheimer’s disease: a longitudinal study based on 475,813 participants from the UK Biobank. J Transl Med. 2022;20:417.

33. Buchman AS, Wilson RS, Bienias JL, Shah RC, Evans DA, Bennett DA. Change in body mass index and risk of incident Alzheimer disease. Neurology. 2005;65:892–7.

34. Strand BH, Wills AK, Langballe EM, Rosness TA, Engedal K, Bjertness E. Weight change in midlife and risk of mortality from dementia up to 35 years later. J Gerontol A Biol Sci Med Sci. 2017;72:855–60.

35. Hendriks HFJ. Alcohol and human health: what is the evidence? Annu Rev Food Sci Technol. 2020;11:1–21.

36. Jeon KH, Han K, Jeong SM, Park J, Yoo JE, Yoo J, Lee J, Kim S, Shin DW. Changes in alcohol consumption and risk of dementia in a nationwide cohort in South Korea. JAMA Netw Open. 2023;6:e2254771.

37. Collins MA, Neafsey EJ, Wang K, Achille NJ, Mitchell RM, Sivaswamy S. Moderate ethanol preconditioning of rat brain cultures engenders neuroprotection against dementia-inducing neuroinflammatory proteins: possible signaling mechanisms. Mol Neurobiol. 2010;41:420–5.

38. Xu W, Wang H, Wan Y, Tan C, Li J, Tan L, Yu JT. Alcohol consumption and dementia risk: a dose-response meta-analysis of prospective studies. Eur J Epidemiol. 2017;32:31–42.

39. Sommerlad A, Perera G, Singh-Manoux A, Lewis G, Stewart R, Livingston G. Accuracy of general hospital dementia diagnoses in England: sensitivity, specificity, and predictors of diagnostic accuracy 2008–2016. Alzheimers Dement. 2018;14:933–43.

40. Koehler EM, Schouten JN, Hansen BE, Hofman A, Stricker BH, Janssen HL. External validation of the fatty liver index for identifying nonalcoholic fatty liver disease in a population-based study. Clin Gastroenterol Hepatol. 2013;11:1201–4.

Acknowledgements

This research has been conducted using the UK Biobank Resource (application No. 91907). Permission to use the UK Biobank Resource was approved by the access subcommittee of the UK Biobank Board. We thank the UK Biobank volunteers for their contribution.

This work was partly supported by the National Natural Science Foundation of China (82100478 and 82104893) and by clinical trial funding from the Affiliated Drum Tower Hospital, Nanjing University School of Medicine (2022-LCYJ-PY-06 and 2022-YXZX-NFM-02). The funders had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.

Author information

Xue Bao and Lina Kang contributed equally as co-first authors. Correspondence to Xinlin Zhang, Xiaowen Zhang, and Biao Xu.

Authors and Affiliations

Department of Cardiology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, 321 Zhongshan Road, Nanjing, 210008, China

Xue Bao, Lina Kang, Lian Wang, Wei Xu, Biao Xu & Xinlin Zhang

Department of Clinical Sciences, Lund University, Malmö, Sweden

Xue Bao & Gunnar Engström

Department of Orthopedics, Jiangsu Province Hospital of Chinese Medicine, the Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, China

Songjiang Yin

Department of Endocrinology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, China

Xiaowen Zhang

Endocrine and Metabolic Disease Medical Center, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, China

Contributions

XLZ, XB, XWZ, and BX conceived the study. XB and XLZ did statistical analyses. XWZ, LK, and XLZ drafted the first manuscript. XWZ, SY, GE, LW, and WX contributed to interpretation of the data. XLZ and XB have accessed and verified the underlying data. XLZ attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. All authors had access to the data and accept responsibility for the decision to submit for publication.

Corresponding authors

Correspondence to Biao Xu , Xiaowen Zhang or Xinlin Zhang .

Ethics declarations

Ethics approval

The study received ethical approval from the North West Multicenter Research Ethics Committee. The UK Biobank study was conducted in accordance with the Declaration of Helsinki.

Consent to participate

All participants provided written consent.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Bao, X., Kang, L., Yin, S. et al. Association of MAFLD and MASLD with all-cause and cause-specific dementia: a prospective cohort study. Alz Res Therapy 16 , 136 (2024). https://doi.org/10.1186/s13195-024-01498-5

Download citation

Received : 15 February 2024

Accepted : 12 June 2024

Published : 26 June 2024

DOI : https://doi.org/10.1186/s13195-024-01498-5

Keywords

  • Liver disease
  • Prospective

Alzheimer's Research & Therapy

ISSN: 1758-9193

Research Problem

A research problem is a specific issue or gap in knowledge that a researcher aims to address through systematic investigation. It forms the foundation of a study, guiding the research question, research design, and potential outcomes. Identifying a clear research problem is crucial, as it often emerges from existing literature, theoretical frameworks, and practical considerations. In a student case study, the research question and hypothesis stem from the identified research problem.

What is a Research Problem?

A research problem is a specific issue, difficulty, contradiction, or gap in knowledge that a researcher aims to address through systematic investigation. It forms the basis of a study, guiding the research question, research design, and the formulation of a hypothesis.

Examples of Research Problem

  • Impact of Social Media on Adolescent Mental Health : Investigating how social media usage affects the mental health and well-being of teenagers.
  • Climate Change and Agricultural Productivity : Examining the effects of climate change on crop yields and farming practices.
  • Online Learning and Student Engagement : Assessing the effectiveness of online learning platforms in maintaining student engagement and academic performance.
  • Healthcare Access in Rural Areas : Exploring the barriers to healthcare access in rural communities and potential solutions.
  • Workplace Diversity and Employee Performance : Analyzing how workplace diversity influences team dynamics and employee productivity.
  • Renewable Energy Adoption : Studying the factors that influence the adoption of renewable energy sources in urban versus rural areas.
  • AI in Healthcare Diagnostics : Evaluating the accuracy and reliability of artificial intelligence in medical diagnostics.
  • Gender Disparities in STEM Education : Investigating the causes and consequences of gender disparities in STEM education and careers.
  • Urbanization and Housing Affordability : Exploring the impact of rapid urbanization on housing affordability and availability in major cities.
  • Public Transportation Efficiency : Assessing the efficiency and effectiveness of public transportation systems in reducing urban traffic congestion.

Research Problem Examples for Students

  • The Impact of Homework on Academic Achievement in High School Students
  • The Relationship Between Sleep Patterns and Academic Performance in College Students
  • The Effects of Extracurricular Activities on Social Skills Development
  • Influence of Parental Involvement on Students’ Attitudes Toward Learning
  • The Role of Technology in Enhancing Classroom Learning
  • Factors Contributing to Student Anxiety During Exams
  • The Effectiveness of Peer Tutoring in Improving Reading Skills
  • Challenges Faced by International Students in Adapting to New Educational Systems
  • Impact of Nutrition on Concentration and Academic Performance
  • The Role of Socioeconomic Status in Access to Higher Education Opportunities

Research Problems Examples in Education

  • Effect of Class Size on Student Learning Outcomes
  • Impact of Technology Integration in Classroom Instruction
  • Influence of Teacher Professional Development on Student Achievement
  • Challenges in Implementing Inclusive Education for Students with Disabilities
  • Effectiveness of Bilingual Education Programs on Language Proficiency
  • Role of Parental Involvement in Enhancing Academic Performance
  • Impact of School Leadership on Teacher Retention and Job Satisfaction
  • Assessment of Remote Learning Efficacy During the COVID-19 Pandemic
  • Barriers to STEM Education Participation Among Female Students
  • Effect of Socioeconomic Status on Access to Quality Education

Research Problems Examples in Business

  • Impact of Employee Engagement on Productivity and Retention
  • Effectiveness of Social Media Marketing Strategies on Consumer Behavior
  • Challenges in Implementing Sustainable Business Practices
  • Influence of Leadership Styles on Organizational Performance
  • Role of Corporate Culture in Driving Innovation
  • Impact of Remote Work on Team Collaboration and Communication
  • Strategies for Managing Supply Chain Disruptions
  • Effect of Customer Feedback on Product Development
  • Challenges in Expanding into International Markets
  • Influence of Brand Loyalty on Customer Retention

Basic Research Problem Examples

  • Effect of Sleep on Cognitive Function
  • Impact of Exercise on Mental Health
  • Influence of Diet on Academic Performance
  • Role of Social Support in Stress Management
  • Impact of Screen Time on Children’s Behavior
  • Effects of Pollution on Public Health
  • Influence of Music on Mood and Productivity
  • Role of Genetics in Disease Susceptibility
  • Impact of Advertising on Consumer Choices
  • Effects of Climate Change on Local Wildlife

Research Problem in Research Methodology

A research problem in research methodology refers to an issue or gap in the process of conducting research that requires a solution. Examples include:

  • Validity and Reliability of Measurement Tools : Ensuring that instruments used for data collection consistently produce accurate results.
  • Selection of Appropriate Sampling Techniques : Determining the best sampling method to ensure the sample represents the population accurately.
  • Bias in Data Collection and Analysis : Identifying and minimizing biases that can affect the validity of research findings.
  • Ethical Considerations in Research : Addressing ethical issues related to participant consent, confidentiality, and data protection.
  • Generalizability of Research Findings : Ensuring that research results are applicable to broader populations beyond the study sample.
  • Mixed Methods Research Design : Effectively integrating qualitative and quantitative approaches in a single study.
  • Data Interpretation and Reporting : Developing accurate and unbiased interpretations and reports of research findings.
  • Longitudinal Study Challenges : Managing the complexities of conducting studies over extended periods.
  • Control of Extraneous Variables : Identifying and controlling variables that can affect the dependent variable outside the study’s primary focus.
  • Developing Theoretical Frameworks : Constructing robust frameworks that guide the research process and support hypothesis development.

Characteristics of a Research Problem

  • Clarity : The research problem should be clearly defined, unambiguous, and understandable to all stakeholders.
  • Specificity : It should be specific and narrow enough to be addressed comprehensively within the scope of the research.
  • Relevance : The problem should be significant and relevant to the field of study, contributing to the advancement of knowledge or practice.
  • Feasibility : It should be practical and manageable, considering the resources, time, and capabilities available to the researcher.
  • Novelty : The research problem should address an original question or gap in the existing literature, providing new insights or perspectives.
  • Researchability : The problem should be researchable using scientific methods, including data collection, analysis, and interpretation.
  • Ethical Considerations : The research problem should be ethically sound, ensuring no harm to participants or the environment.
  • Alignment with Objectives : The problem should align with the research objectives and goals, guiding the direction and purpose of the study.
  • Measurability : It should be possible to measure and evaluate the outcomes related to the problem using appropriate metrics and methodologies.
  • Contextualization : The problem should be placed within a broader context, considering theoretical frameworks, existing literature, and practical applications.

Types of Research Problems

  • Descriptive Problems : Aim: To describe the characteristics of a specific phenomenon or population. Example: “What are the key features of successful online education programs?”
  • Comparative Problems : Aim: To compare two or more groups, variables, or phenomena. Example: “How does employee satisfaction differ between remote and on-site workers?”
  • Causal Problems : Aim: To determine cause-and-effect relationships between variables. Example: “What is the impact of leadership style on employee productivity?”
  • Correlational Problems : Aim: To examine the relationship between two or more variables. Example: “What is the relationship between social media usage and self-esteem among teenagers?”
  • Exploratory Problems : Aim: To explore a new or under-researched area where little information is available. Example: “What are the emerging trends in consumer behavior post-pandemic?”
  • Applied (Practical) Problems : Aim: To solve a specific, practical problem faced by an organization or society. Example: “How can small businesses improve their cybersecurity measures?”
  • Theoretical Problems : Aim: To expand existing theories or develop new theoretical frameworks. Example: “How can existing theories of motivation be integrated to better understand employee behavior?”
  • Policy-Oriented Problems : Aim: To evaluate the effects of policies or suggest improvements. Example: “What are the effects of the new minimum wage laws on small businesses?”
  • Ethical Problems : Aim: To investigate ethical issues within a field or practice. Example: “What are the ethical implications of AI in decision-making processes?”
  • Interdisciplinary Problems : Aim: To address issues that span multiple disciplines or fields of study. Example: “How can principles of environmental science and economics be combined to develop sustainable business practices?”

How to Define a Research Problem

Defining a research problem involves several key steps that help in identifying and articulating a specific issue that needs investigation. Here’s a structured approach:

  • Select a Broad Topic : Choose a general area of interest or field relevant to your expertise or curiosity. This can be broad initially and will be narrowed down through the next steps.
  • Conduct a Literature Review : Review existing research to understand what has already been studied. This helps in identifying gaps, inconsistencies, or areas that need further exploration.
  • Narrow the Focus : Based on your literature review, refine your broad topic to a more specific issue or aspect that has not been adequately addressed.
  • Establish Significance : Ensure the problem is significant and relevant to the field. It should address a real-world issue or theoretical gap that contributes to advancing knowledge or solving practical problems.
  • Write a Problem Statement : Clearly articulate the problem in a concise and precise manner. This statement should explain what the problem is, why it is important, and how it impacts the field.
  • Formulate Research Questions : Develop specific research questions that your study will answer. These questions should be directly related to your problem statement and guide the direction of your research.
  • Set Objectives and Hypotheses : Establish clear research objectives that outline what you aim to achieve. Formulate hypotheses if applicable, which are testable predictions related to your research questions.
  • Assess Feasibility : Consider the resources, time, and scope of your study. Ensure that the research problem you have defined is feasible to investigate within the constraints you have.
  • Seek Feedback : Discuss your defined research problem with peers, mentors, or experts in the field. Feedback can help refine and improve your problem statement.

Importance of Research Problem

The research problem is crucial as it forms the foundation of any research study, guiding the direction and focus of the investigation. It helps in:

  • Defining Objectives : Clarifies the purpose and objectives of the research, ensuring the study remains focused and relevant.
  • Guiding Research Design : Determines the methodology and approach, including data collection and analysis techniques.
  • Identifying Significance : Highlights the importance and relevance of the study, demonstrating its potential impact on the field.
  • Focusing Efforts : Helps researchers concentrate their efforts on addressing specific issues, leading to more precise and meaningful results.
  • Resource Allocation : Assists in the efficient allocation of resources, including time, funding, and manpower, by prioritizing critical aspects of the research.

FAQs

Why is defining a research problem important?

Defining a research problem is crucial because it guides the research process, helps focus on specific objectives, and determines the direction of the study.

How do you identify a research problem?

Identify a research problem by reviewing existing literature, considering real-world issues, discussing with experts, and reflecting on personal experiences and observations.

What is the difference between a research problem and a research question?

A research problem identifies the issue to be addressed, while a research question is a specific query the research aims to answer.

Can a research problem change during the study?

Yes, a research problem can evolve as new data and insights emerge, requiring refinement or redefinition to better align with findings.

How do you formulate a research problem?

Formulate a research problem by clearly stating the issue, outlining its significance, and specifying the context and scope of the problem.

What is the role of literature review in identifying a research problem?

A literature review helps identify gaps, inconsistencies, and unresolved issues in existing research, which can guide the formulation of a research problem.

How does a research problem impact the research design?

The research problem shapes the research design by determining the methodology, data collection techniques, and analysis strategies needed to address the issue.

What are common sources of research problems?

Common sources include academic literature, practical experiences, societal issues, technological advancements, and gaps identified in previous research.

How specific should a research problem be?

A research problem should be specific enough to guide focused research but broad enough to allow comprehensive investigation and meaningful results.

How do research objectives relate to the research problem?

Research objectives are specific goals derived from the research problem, detailing what the study aims to achieve and how it plans to address the problem.


Beyond Ozempic: New GLP-1 drugs promise weight loss and health benefits

Photo Illustration: An abstraction of semaglutide injectors

The next wave of obesity drugs is coming soon.

Drug companies are racing to develop GLP-1 drugs following the blockbuster success of Novo Nordisk’s Ozempic and Wegovy and Eli Lilly’s Mounjaro and Zepbound.

Some of the experimental drugs may go beyond diabetes and weight loss, improving liver and heart function while reducing side effects such as muscle loss common to the existing medications. At the 2024 American Diabetes Association conference in Orlando, Florida, researchers are expected to present data on 27 GLP-1 drugs in development.

“We’ve heard about Ozempic and Mounjaro and so on, but now we’re seeing lots and lots of different drug candidates in the pipeline, from very early-stage preclinical all the way through late-stage clinical,” said Dr. Marlon Pragnell, ADA’s vice president of research and science. “It’s very exciting to see so much right now.”

A large portion of the data presented comes from animal studies or early-stage human trials. However, some presentations include mid- to late-stage trials, according to a list shared by the organization.

Approval by the Food and Drug Administration is likely years away for most, though some of the drugs showcased could be available for prescription in the U.S. within the next few years.

“We’ve witnessed an unprecedented acceleration in the development of GLP drugs,” said Dr. Christopher McGowan, a gastroenterologist who runs a weight loss clinic in Cary, North Carolina. “We are now firmly entrenched in the era of the GLP.”

While the existing drugs are highly effective, new drugs that are more affordable and have fewer side effects are needed, McGowan added.

There aren’t just GLP-1 drugs in the pipeline. On Thursday, ahead of the diabetes conference, Denmark-based biotech firm Zealand Pharma released data that showed a high dose of its experimental weight loss drug petrelintide helped reduce body weight by an average of 8.6% at 16 weeks.

The weekly injectable medication is unique because it mimics the hormone amylin, which helps control blood sugar. The hope is patients will experience fewer side effects like nausea commonly associated with GLP-1 drugs such as Wegovy and Zepbound.

Can glucagon hormone help with weight loss?

GLP-1 medications work, in part, by slowing down how quickly food passes through the stomach, leading people to feel fuller longer. In several of the upcoming weight loss drugs, a different hormone called glucagon is in the spotlight. Glucagon is a key blood-sugar-regulating hormone that can mimic the effects of exercise.

One of the drugs featured at the conference on Sunday is called pemvidutide, from Maryland-based biotech firm Altimmune.

The drug contains the GLP-1 hormone, a key ingredient in Ozempic and Wegovy, in addition to glucagon.

Altimmune released data from a phase 2 trial of 391 adults who had obesity or were overweight with at least one weight-related comorbidity such as high blood pressure. Patients were randomized to get either one of three doses of pemvidutide or a placebo for 48 weeks.

Researchers found that patients who got the highest dose of the drug lost on average 15.6% of their body weight after 48 weeks, compared to the 2.2% body weight loss seen in patients who got a placebo. In similar trials, semaglutide was shown to reduce body weight by around 15% after 68 weeks.

These are not direct comparisons because the drugs weren’t compared in a head-to-head clinical trial.
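
As a rough aside, the placebo-adjusted effect implied by the figures above can be sketched in a few lines. This subtraction is only an illustration of how such adjusted figures are typically reported, not the trial's actual statistical analysis:

```python
# Minimal sketch (not from the article) of a placebo-adjusted effect,
# using the trial figures quoted in the text above.
treatment_loss = 15.6  # mean % body weight lost, highest pemvidutide dose, 48 weeks
placebo_loss = 2.2     # mean % body weight lost, placebo arm, 48 weeks

# Placebo-adjusted effect: change on treatment minus change on placebo.
adjusted = treatment_loss - placebo_loss
print(f"Placebo-adjusted reduction: {adjusted:.1f} percentage points")
# → Placebo-adjusted reduction: 13.4 percentage points
```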

Dr. Scott Harris, Altimmune’s chief medical officer, said the drug has been shown to help people lose weight, as well as provide health benefits to the liver and heart. What’s more, the drug has shown benefits in preserving lean body mass. Some studies have suggested that semaglutide, the active ingredient in Ozempic and Wegovy, can cause muscle loss.

“If people take the drugs long term, what’s going to be their long-term health? What’s going to be the long-term effects on their body composition, their muscle, their ability to function?” he said.

Harris said that people who got pemvidutide lost on average 21% of their lean body mass, which is lower than the around 25% of lean body mass people typically lose with diet and exercise.
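
To make those composition percentages concrete, here is a hypothetical sketch; the 30 lb total loss is an invented example, while the 21% and 25% lean-mass fractions come from the text:

```python
# Hypothetical illustration of lean- vs fat-mass composition of weight lost.
# The 30 lb total loss is assumed for illustration; percentages are from the text.
total_lost_lb = 30.0        # assumed total weight lost by a patient
lean_frac_drug = 0.21       # ~21% of loss is lean mass on pemvidutide
lean_frac_diet = 0.25       # ~25% of loss is lean mass with diet and exercise

lean_drug = total_lost_lb * lean_frac_drug
lean_diet = total_lost_lb * lean_frac_diet
print(f"Lean mass lost: {lean_drug:.1f} lb (drug) vs {lean_diet:.1f} lb (diet/exercise)")
# → Lean mass lost: 6.3 lb (drug) vs 7.5 lb (diet/exercise)
```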

“We’re the next wave of obesity drugs,” Altimmune President and CEO Vipin Garg said. “The first wave of mechanisms was all driven by appetite suppression. We are adding another component.”

Altimmune expects to begin a phase 3 trial soon. The company hopes the drug will be available in the U.S. sometime in 2028.

Competition could drive down costs

Expanding the number of weight loss drugs available is important for several reasons, experts say.

More options could help alleviate the shortages seen in the U.S. with Novo Nordisk’s and Lilly’s weight loss drugs.

Latest news on weight loss medications

  • Amid shortages, WHO warns about safety risks from fake versions of Wegovy and Zepbound.
  • How one state is trying to make weight loss drugs cheaper.
  • Weight loss drugs like Wegovy are meant for long-term use. What happens if you stop taking them?

Increased competition could drive down the high cost of the drugs over time. A month’s supply of Wegovy or Zepbound can cost more than $1,000, often financially untenable for many patients, experts say.

Patients can also respond differently to treatments, said Dr. Fatima Cody Stanford, an associate professor of medicine and pediatrics at Harvard Medical School. In fact, some have found the existing GLP-1 options ineffective.

“Different GLP-1 drugs may have varying levels of efficacy and potency,” she said. “Some patients may respond better to one drug over another, depending on how their body metabolizes and responds to the medication.”

Since starting Ozempic in June 2022, Danielle Griffin has not seen the results her doctor predicted. “She really expected to see a huge difference in my weight, and I just never saw it,” said the 38-year-old from Elida, New Mexico. Griffin weighed about 300 pounds and has lost only about 10 pounds in two years. She said her “expectations were pretty much shattered from that.”

Amid insurance battles and shortages, she has also tried Wegovy and Mounjaro, but didn’t see a difference in her weight.

“I don’t feel like there are options, especially for myself, for someone who the medications [are] not working for.”

The prospect of new medications on the horizon excites Griffin. “I would be willing to try it,” she said, adding that “it could be life changing, honestly, and you know that alone gives me something to look forward to.”

More drugs in the pipeline

Eli Lilly, which makes Zepbound and the diabetes version Mounjaro, has two more GLP-1 drugs in development.

On Sunday, Lilly released new data about retatrutide, an injectable drug that combines GLP-1 and glucagon, plus another hormone called GIP. GIP is thought to improve how the body breaks down sugar.

In an earlier trial, retatrutide helped people lose, on average, about 24% of their body weight, the equivalent of about 58 pounds — greater weight loss than any other drug on the market.

New findings showed the weekly medication also significantly reduced blood sugar levels in people with Type 2 diabetes.

On Saturday, there were also new findings on the experimental mazdutide, which Lilly has licensed to biotech firm Innovent Biologics to develop in China. The drug combines GLP-1 and glucagon.

In a phase 3 study of adults in China who were overweight or had obesity, researchers found that after 48 weeks, a 6-milligram dose of the drug led to an average body weight reduction of 14.4%.

The drug also led to a reduction in serum uric acid — a chemical that can build up in the bloodstream, causing health problems, and has been associated with obesity, according to Dr. Linong Ji, director of the Peking University Diabetes Center, who presented the findings.

That was “quite unique and never reported for other GLP-1-based therapies,” he said in an interview.

The drug could be approved in China in 2025, Ji said.

Improving metabolic conditions

An estimated 75% of people with obesity have nonalcoholic fatty liver disease and 34% have MASH, or metabolic dysfunction-associated steatohepatitis, according to researchers with the German drugmaker Boehringer Ingelheim. Fatty liver disease occurs when the body begins to store fat in the liver. It can progress to MASH, when fat buildup causes inflammation and scarring.

In a phase 2 trial of people who were overweight or had obesity, Boehringer Ingelheim’s survodutide, which uses both GLP-1 and glucagon, led to weight loss of 19% at 46 weeks. Another phase 2 study in people with MASH and fibrosis found that 83% of participants also showed improvement in MASH.

Survodutide “has significant potential to make a meaningful difference to people living with cardiovascular, renal and metabolic conditions,” said Dr. Waheed Jamal, Boehringer Ingelheim’s corporate vice president and head of cardiometabolic medicine.

On Friday, the company released two studies on the drug. One, in hamsters, found that weight loss was associated with improvements in insulin and cholesterol. The second, in people with Type 2 diabetes or obesity, found the drug helped improve blood sugar levels.

The company is looking to begin a phase 3 trial.

CLARIFICATION (June 24, 2024, 2:31 p.m. ET): Innovent Biologics has entered into an exclusive licensing agreement with Eli Lilly for the development of mazdutide in China, not a partnership.

Berkeley Lovelace Jr. is a health and medical reporter for NBC News. He covers the Food and Drug Administration, with a special focus on Covid vaccines, prescription drug pricing and health care. He previously covered the biotech and pharmaceutical industry with CNBC.
