
Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design. Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design. New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle: initially, an exploratory stance is adopted, in which an understanding of the problem is developed and plans are made for some form of intervention strategy. The intervention is then carried out [the "action" in action research], during which time pertinent observations are collected in various forms. New intervention strategies are then carried out, and this cyclic process repeats until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data are often in the form of stories or observations].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research. Thousand Oaks, CA: Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide. New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction. Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research. Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research. London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice. Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges. Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research. Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of testing for a causal link between variables and eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equivalency of the groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are causally related, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which is the actual cause and which is the actual effect.
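The caveat above that an empirical association does not by itself establish causality can be illustrated with a short sketch. The monthly figures below are invented for illustration only; the point is that two series can correlate strongly while both are driven by a third, confounding variable (here, an imagined seasonal factor behind both):

```python
# Hypothetical illustration: ice cream sales and drowning incidents
# (invented monthly figures) correlate strongly, yet neither causes the
# other -- both rise in summer. Association is necessary but not
# sufficient for causality.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ice_cream = [20, 25, 40, 60, 80, 75]   # invented monthly sales
drownings = [1, 2, 4, 6, 8, 7]         # invented monthly incidents
r = pearson_r(ice_cream, drownings)
print(round(r, 3))                     # strong association, no causal link
```

A nonspurious relationship would require showing that the association survives after the confounding variable (seasonality) is held constant.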

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice. Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kuhn. “Causal-Comparative Design.” In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction. Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity relevant to the research problem being investigated. Using a quantitative framework, a cohort study makes note of statistical occurrence within this specialized subgroup rather than within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined simply by the state of being a part of the study in question (and being monitored for the outcome). Dates of entry and exit from the study are individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
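The rate-based data mentioned above for open cohorts reduce to simple arithmetic: an incidence rate is the number of new cases divided by the total person-time at risk, which accommodates participants entering and leaving the study at different times. A minimal sketch, using invented follow-up data:

```python
# Invented open-cohort data: each participant contributes a different
# amount of follow-up time (in years) and either became a case or not.
def incidence_rate(participants):
    """participants: list of (years_followed, became_case) tuples.

    Returns new cases per person-year at risk.
    """
    person_years = sum(years for years, _ in participants)
    cases = sum(1 for _, became_case in participants if became_case)
    return cases / person_years

cohort = [(2.0, False), (5.0, True), (1.5, False), (4.0, True), (2.5, False)]
rate = incidence_rate(cohort)   # 2 cases over 15 person-years
print(rate)                     # cases per person-year
```

Because each participant's person-time is counted individually, the denominator stays meaningful even though the size of an open cohort is never constant.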

Healy, P. and D. Devane. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D., editor. Cohort Analysis. 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. “Study Design IV: Cohort Studies.” Evidence-Based Dentistry 7 (2003): 51-52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods. Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
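The prevalence estimate noted above follows directly from the snapshot nature of the design: with a single point-in-time sample there is no person-time to divide by, so the natural measure is existing cases over sample size. A minimal sketch, using invented survey counts and the common normal-approximation confidence interval:

```python
# Invented cross-sectional survey: 120 existing cases found among 800
# respondents at one point in time. Prevalence (not incidence -- no
# follow-up exists) is estimated with a normal-approximation 95% CI.
from math import sqrt

def prevalence_with_ci(cases, sample_size, z=1.96):
    p = cases / sample_size
    se = sqrt(p * (1 - p) / sample_size)   # normal-approximation standard error
    return p, (p - z * se, p + z * se)

p, (low, high) = prevalence_with_ci(cases=120, sample_size=800)
print(f"prevalence = {p:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```

The interval quantifies sampling uncertainty at that one moment only; a different time frame could yield a different estimate, which is precisely the design's time-bound limitation.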

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences. Herman J. Adèr and Gideon J. Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-143; Bourque, Linda B. “Cross-Sectional Design.” In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods. Paul J. Lavrakas, editor. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Barratt, Helen and Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009; Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect, whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics. Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. PowerPoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis. (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; Swatzell, K. and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities. London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods. Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences. 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. SlideShare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation or undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings; they provide insight, but not conclusive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing to gather information, including, for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation; therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58; Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • A large sample size and accurate sampling are needed to achieve representativeness.
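The design's core operation, taking repeated measurements on the same sample and computing within-person change, can be sketched as follows. All respondent ids and scores below are hypothetical illustration data, not drawn from any actual panel study.

```python
# Minimal sketch of the core longitudinal (panel) operation: the same
# respondents are measured at two waves and change is computed within
# person. All ids and scores are hypothetical.

from statistics import mean

# Hypothetical panel: respondent id -> score at each wave
wave_1 = {"r01": 52, "r02": 61, "r03": 47, "r04": 70}
wave_2 = {"r01": 58, "r02": 60, "r03": 55, "r04": 74}

def change_scores(before, after):
    """Within-person change for respondents present in both waves.

    Restricting to matched ids makes attrition explicit: anyone
    missing from a later wave drops out of the change analysis.
    """
    matched = sorted(set(before) & set(after))
    return {rid: after[rid] - before[rid] for rid in matched}

deltas = change_scores(wave_1, wave_2)   # per-respondent change
avg_change = mean(deltas.values())       # average change across the panel
```

The matched-id step reflects the limitation noted above about maintaining the integrity of the original sample: respondents lost between waves must be identified and handled explicitly rather than silently averaged away.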

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the researcher's ability to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of the findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to findings that are difficult to interpret or meaningless.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
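The central computation of a quantitative meta-analysis, pooling effect estimates across studies so that more precise studies carry more weight, can be sketched with fixed-effect inverse-variance weighting. The effect sizes and variances below are hypothetical numbers chosen for illustration only.

```python
# Minimal sketch of fixed-effect meta-analysis via inverse-variance
# weighting: each study's effect estimate is weighted by 1/variance,
# so larger (more precise) studies count more toward the pooled effect.
# The study effects and variances are hypothetical.

import math

# (effect estimate, variance of that estimate) for each included study
studies = [(0.30, 0.04), (0.10, 0.09), (0.25, 0.02)]

def pooled_effect(studies):
    """Pool study effects, weighting each by the inverse of its variance."""
    weights = [1.0 / var for _, var in studies]
    total_w = sum(weights)
    pooled = sum(w * eff for w, (eff, _) in zip(weights, studies)) / total_w
    se = math.sqrt(1.0 / total_w)  # standard error of the pooled estimate
    return pooled, se

pooled, se = pooled_effect(studies)
```

Because the pooled standard error shrinks as studies are added, the sketch illustrates the point above that meta-analysis increases the precision with which effects are estimated. A full analysis would also quantify heterogeneity among the studies before trusting a fixed-effect pooling.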

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors occur over and over again can be time consuming, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has virtually limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses to use a sample size large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses; recognize and avoid hidden problems in prior studies; and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, and internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.

  • Last Updated: May 25, 2024 4:09 PM
  • URL: https://libguides.usc.edu/writingguide

Chapter 5 Research Design

Research design is a comprehensive plan for data collection in an empirical research project. It is a “blueprint” for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: (1) the data collection process, (2) the instrument development process, and (3) the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process (which is often loosely called “research design”) is introduced in this chapter and is described in further detail in Chapters 9-12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods , such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected (quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth) and analyzed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, joint use of qualitative and quantitative data may help generate unique insights into a complex social phenomenon that are not available from either type of data alone; hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key Attributes of a Research Design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity , also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in the hypothesized independent variable, and not by variables extraneous to the research context. Causality requires three conditions: (1) covariation of cause and effect (i.e., if the cause happens, then the effect also happens; and if the cause does not happen, the effect does not happen), (2) temporal precedence: the cause must precede the effect in time, and (3) no plausible alternative explanation (or spurious correlation). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect might have influenced the expected cause rather than the reverse. Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats to internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalizability refers to whether the observed associations can be generalized from the sample to the population (population validity), or to other people, organizations, contexts, or time (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalized to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalizability than laboratory experiments, where artificially contrived treatments and strong control over extraneous variables render the findings less generalizable to real-life settings where treatments and extraneous variables cannot be controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.


Figure 5.1. Internal and external validity.

Some researchers claim that there is a tradeoff between internal and external validity: higher external validity can come only at the cost of internal validity and vice-versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validities. Personally, I prefer research designs that have reasonable degrees of both internal and external validities, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of designs is ultimately a matter of their personal preference and competence, and the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale is measuring the theoretical construct that it is expected to measure. Many constructs used in social science research such as empathy, resistance to change, and organizational learning are difficult to define, much less measure. For instance, construct validity must assure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypotheses testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable to such analysis. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.


Figure 5.2. Different Types of Validity in Scientific Research

Improving Internal and External Validity

The best research designs are those that can assure high levels of internal and external validity. Such designs would guard against spurious correlations, inspire greater faith in hypotheses testing, and ensure that the results drawn from a small sample are generalizable to the population at large. Controls are required to assure the internal validity (causality) of research designs, and can be accomplished in five ways: (1) manipulation, (2) elimination, (3) inclusion, (4) statistical control, and (5) randomization.

In manipulation, the researcher manipulates the independent variables at one or more levels (called “treatments”), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosages of a drug (for treating a medical condition), a teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not differ, and manipulation would fail.

The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socio-economic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). This technique allows for greater generalizability but also requires substantially larger samples. In statistical control, extraneous variables are measured and used as covariates during the statistical testing process.

Finally, the randomization technique is aimed at canceling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature. Two types of randomization are: (1) random selection , where a sample is selected randomly from a population, and (2) random assignment , where subjects selected in a non-random manner are randomly assigned to treatment groups.
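The distinction between the two types of randomization can be made concrete in a few lines of Python. This is only an illustrative sketch: the subject IDs, sample size, and group sizes are invented.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population of 100 subject IDs.
population = list(range(100))

# (1) Random selection: draw a sample of 20 subjects from the population.
sample = random.sample(population, 20)

# (2) Random assignment: split the sample (which could equally have been
# obtained non-randomly) into treatment and control groups at random.
shuffled = sample[:]
random.shuffle(shuffled)
treatment_group = shuffled[:10]
control_group = shuffled[10:]
```

A true experimental design would use both steps; a quasi-experimental design typically lacks the second.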

Randomization also assures external validity, allowing inferences drawn from the sample to be generalized to the population from which the sample is drawn. Note that random assignment is mandatory when random selection is not possible because of resource or access constraints. However, generalizability across populations is harder to ascertain, since populations may differ on multiple dimensions and you can only control for a few of those dimensions.

Popular Research Designs

As noted earlier, research designs can be classified into two categories – positivist and interpretive – depending on their goal in scientific research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalized patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9-12.

Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the “treatment group”) but not to another group (the “control group”), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, and give subjects in the control group only a placebo (e.g., a sugar pill with no medicinal value). More complex designs may include multiple treatment groups, such as low versus high dosage of the drug, or multiple treatments, such as combining drug administration with dietary interventions. In a true experimental design, subjects must be randomly assigned between each group. If random assignment is not followed, then the design becomes quasi-experimental. Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organization where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analyzed using quantitative statistical techniques.
The primary strength of the experimental design is its strong internal validity due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalizability since real life is often more complex (i.e., involve more extraneous variables) than contrived lab settings. Furthermore, if the research does not identify ex ante relevant extraneous variables and control for such variables, such lack of controls may hurt internal validity and may lead to spurious correlations.

Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview. In cross-sectional field surveys, independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys, dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a “socially desirable” response rather than their true response), which further hurts internal validity.

Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by country from the United Nations Development Program, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs, where collecting primary data is part of the researcher’s job.

Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher’s questions. The limitations of this design are that the data might not have been collected in a systematic or scientific manner and may hence be unsuitable for scientific research; that, because the data was collected for a presumably different purpose, it may not adequately address the research questions of interest to the researcher; and that internal validity is problematic if the temporal precedence between cause and effect is unclear.

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypotheses testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualized and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalized to other case sites. Generalizability can be improved by replicating and comparing the analysis in other case sites in a multiple case design .

Focus group research is a type of research that involves bringing in a small group of subjects (typically 6 to 10 people) at one location, and having them discuss a phenomenon of interest for a period of 1.5 to 2 hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences.

Internal validity cannot be established due to lack of controls and the findings may not be generalized to other settings because of small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research.

Action research assumes that complex social phenomena are best understood by introducing interventions or “actions” into those phenomena and observing the effects of those actions. In this method, the researcher is usually a consultant or an organizational member embedded within a social context such as an organization, who initiates an action such as new organizational procedures or new technologies, in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalizability of findings is often restricted to the context where the study was conducted.

Ethnography is an interpretive research design inspired by anthropology that emphasizes that a research phenomenon must be studied within the context of its culture. The researcher is deeply immersed in a certain culture over an extended period of time (8 months to 2 years), and during that period, engages, observes, and records the daily life of the studied culture, and theorizes about the evolution and behaviors in that culture. Data is collected primarily via observational techniques, formal and informal interaction with participants in that culture, and personal field notes, while data analysis involves “sense-making”. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to the context, the rich and nuanced understanding it generates, and minimal respondent bias. However, this is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalizable to other cultures.

Selecting Research Designs

Given the above multitude of research designs, which design should researchers choose for their research? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally, the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organizational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain, but finds that there are no good theories to explain the phenomenon of interest and wants to build a theory to fill that gap, interpretive designs such as case research or ethnography may be useful. If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate.

Regardless of the specific research design chosen, the researcher should strive to collect quantitative and qualitative data using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire intended to collect quantitative data, the researcher may leave some room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential and value of collecting quantitative data should not be ignored. As an example, in a study of organizational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organizational decisions, how many people were involved in that decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees’ narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible that can help generate the best possible insights about the phenomenon of interest.

  • Social Science Research: Principles, Methods, and Practices. Authored by : Anol Bhattacherjee. Provided by : University of South Florida. Located at : http://scholarcommons.usf.edu/oa_textbooks/3/ . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

  • Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
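As a tiny sketch of what operationalisation looks like in practice, an abstract construct can be scored from a set of concrete indicators. The construct, item names, and rating scale below are invented purely for illustration:

```python
# Hypothetical operationalisation of "social anxiety" as the mean of three
# self-rating items on a 1-5 scale (1 = never, 5 = always).
responses = {
    "avoids_crowded_places": 4,
    "fears_negative_judgment": 5,
    "physical_symptoms_in_groups": 3,
}

# The construct score is the mean of the indicator scores.
social_anxiety_score = sum(responses.values()) / len(responses)
```

A validated instrument would define the items, scale anchors, and scoring rule in advance, and its reliability and validity would be checked (for example, in a pilot study) before use.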

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)
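The three kinds of summary above can be computed directly with Python's standard library. The test scores here are invented for illustration:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical test scores from a sample of ten students.
scores = [70, 75, 75, 80, 80, 80, 85, 85, 90, 95]

distribution = Counter(scores)   # frequency of each score
central_tendency = mean(scores)  # the average score
variability = stdev(scores)      # how spread out the scores are
```

Here `distribution[80]` is 3 (the score 80 occurs three times) and `central_tendency` is 81.5.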

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.
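To make the comparison-test idea concrete, here is a minimal standard-library sketch of Welch's t statistic for the difference between two group means. The outcome scores are invented, and a real analysis would use a statistics package rather than hand-rolled formulas:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic: the difference in group means divided by the
    standard error of that difference (no equal-variance assumption)."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical outcome scores for a treatment group and a control group.
treatment = [5.1, 4.8, 5.6, 5.2, 4.9, 5.4]
control = [4.2, 4.5, 4.1, 4.7, 4.3, 4.4]

t = welch_t(treatment, control)  # large |t| suggests a real group difference
```

In practice the statistic is compared against a t distribution (with Welch-adjusted degrees of freedom) to obtain a p value.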

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 27 May 2024, from https://www.scribbr.co.uk/research-methods/research-design/


Shona McCombes


Open, rigorous and reproducible research: A practitioner’s handbook

1 Study Design Phase

It is always exciting to start a new research project. By the time you actually roll up your sleeves and get your hands dirty, you have probably been pondering many things related to your new project: what kind of questions would I like to answer? How should I formalize the question? How do I get to the answer? Should I conduct an experiment to gather some data, or should I explore existing datasets? What does the data look like anyway? How do I make sure my answer is not wildly wrong?

Then you find yourself buried in those questions, not knowing where to start… Well, firstly, you are definitely on the right track thinking about all those questions! Congratulations, you have just taken the first step! Secondly, do not be intimidated by this big fuzzy ball of thoughts. You may not know where to start, how to start, or whether you have thoroughly considered everything needed to start your research project. This is totally normal! I am here to propose a general framework to get your work started. Let’s call it “the research project starter kit”!

In the following four chapters, I will be introducing concepts, tools, and easy-to-follow frameworks as part of this precious starter kit. I will also share insights on study designs from experienced researchers in different scientific fields. Chapter 1 will discuss how to define the research question, a clear and effective one that we can actually act upon. Chapter 2 will introduce study design techniques that can answer the research question effectively. Chapter 3 will touch on how to create a realistic analytic plan. Finally, Chapter 4 will provide tools for documenting all the research planning steps. [add a paragraph] How does this best practice relate to open science? Without a well-planned study design, open science might simply be unfeasible, as you cannot go back in time to retrieve missing information.

1.1 Define the research question

1.1.1 Start with a Question in Mind

A well-defined research question reflects the researcher’s careful thinking about the problem they are trying to tackle. Specifying a good research question also serves the researcher well in the long run:

  • Provides clear aims of what to achieve through the study
  • Sets reasonable expectations and future goals
  • Helps select appropriate methodology moving forward
  • Gives a better chance to practice open science

At this point, you may think, “oh, come on, man! A seasoned researcher like me, of course, knows how to come up with a good research question!” Well, I would say, “Man, think twice!” So what does a well-defined research question look like anyway? To answer this question, we shall consider the following two aspects: scientific contribution to the field and operational feasibility.

To evaluate the scientific contribution:

  • Does the question have a solid scientific base?
  • Is the question novel?
  • If the question is sufficiently answered, what will it add to the current knowledge?

To evaluate the operational feasibility:

  • What is the study unit to answer the question? (Individuals? Microbial colonies? Countries? Planetary or celestial systems?)
  • Does the question include an explicit intervention / treatment / exposure?
  • Does the question imply a comparison group?
  • Is there an anticipated outcome?

Note: Since academic fields vary, your question does not have to fulfill all the points mentioned above. However, we do encourage you to go through these questions while you are contemplating your research question.


Figure 1.1: Figure credit: https://simplystatistics.org/2019/04/17/tukey-design-thinking-and-better-questions/

1.1.2 Classification of different questions

Another helpful practice is to carefully scrutinize what kind of question you are asking. This does not mean that some types of questions are absolutely superior to others. Thinking through the type of question allows us to be true to ourselves and honest with our audience when making inferences from our research. Now I will go through two general classifications of research questions.

Confirmatory vs. Exploratory

Confirmatory questions typically involve a set of hypotheses: a null hypothesis with one or more alternatives. They often require the researcher to develop a concrete research design to test these hypotheses. The question is often answered via inductive inference. Such inference is often considered “strong inference”, and is deemed to make a “scientific step forward” (Platt 1964). Some examples of confirmatory questions include… add examples! add reading on inductive and deductive reasoning (maybe the one from Steve’s class)

Exploratory questions, unlike confirmatory questions, are not explicitly hypothesis driven. Rather, these questions are often considered hypothesis-generating. Hence, exploratory questions are not meant to achieve “strong inference”. Results from exploratory research cannot be over-interpreted as confirmatory, and often yield a higher false positive rate. However, exploratory questions are meaningful and necessary for new discoveries and unexplored topics. Before we make any “strong inference”, we should always attempt to validate the results from exploratory research in a confirmatory setting. Some examples of exploratory questions include… add examples!

In summary, after you carefully think through your research question, it will become clearer whether your question is hypothesis driven or hypothesis generating. Either way may make an interesting research topic, as long as you are making the right amount of inference from the results. Please be as true to yourself as the Knights of the Round Table were to King Arthur! When you ask an exploratory question, please do not pretend it is confirmatory, no matter how hard you want to believe it is. When you think you are asking a confirmatory question, make sure it really is confirmatory, not an exploratory question dressed in fancy confirmatory wording. Believe me, your reviewers will be able to tell!

Causal vs. Non-Causal

Another lens for classifying the research question is to examine whether it is trying to draw a conclusion about a causal relationship between the indexed exposures and outcomes. As a typical graduate student conducting research, I often find myself either busy establishing an association, or busy determining whether the association I find involves a cause and its effect on the outcome. Although even the famous American writer John Barth once said, “The world is richer in associations than meanings, and it is the part of wisdom to differentiate the two,” this is rather a post-hoc strategy. As stressed multiple times, forward thinking is really the key to high-quality research. When you have a research question in mind, while admiring how brilliant your idea is, please also go through the following items to see whether your question is causal or not. A typical causal question includes the following components:

A well-defined cause (what can be qualified as a cause? Still debatable, expand).

A well-defined outcome.

A scientifically plausible effect on the outcome that can be attributed to the cause.

Provide a better list of components.

Moreover, the English epidemiologist and statistician Austin Bradford Hill carefully summarized the common characteristics of causal relationships in his 1965 paper [The Environment and Disease: Association or Causation?](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1898525/?page=1). Although this paper was later referred to as the “Hill Causal Criteria”, Hill himself actually suggested that we should take all these characteristics with a grain of salt. Most causal questions contain the “C” word (C for cause) or the “W” word (W for why). However, a lot of the time, causal questions do not explicitly contain the word “cause”; rather, they describe the effect that one factor A has on another factor B. For example:

What causes urban residential segregation?

Why are rural people more politically conservative than urban people in the United States?

How effective is hydroxychloroquine in treating COVID-19 patients?

How does the concentration of silver nitrate affect the formation of silver crystals?

The two typical classes of non-causal questions we encounter are descriptive questions and observational/associational questions. The former primarily describes the objective existence of certain phenomena (e.g., what is the ductility of bronze, silver, and gold?). The latter concerns the relationship between two factors without considering the underlying causal mechanisms (e.g., how does a metal’s ductility relate to its melting point?).

Design an infographic guide for people to go through confirmatory / exploratory and causal / non-causal questions.

Note: The takeaway here is that knowing the type of questions that you are asking comes quite handy when determining the downstream analytical methodology and guarding the proper inference!

1.1.3 Not All Research Questions Are Hypotheses

“Every Science begins as philosophy and ends as art; it arises in hypothesis and flows into achievement.”

Will Durant

In this section, I would like to further emphasize the characteristics of research hypotheses, as they are the driving force for confirmatory studies and oftentimes the “products” of exploratory studies. As mentioned previously, research questions, as a more general concept, can take on myriad forms with fewer requirements and restrictions; whereas hypotheses, as a subset of research questions, are often phrased in a more specific way, with at least one a priori belief and one or more alternative(s). A hypothesis usually does not take the form of a question; rather, it is a statement: an educated, testable prediction about the results. The main criteria of a good research hypothesis include:

Written in clear and simple language that clearly defines both the dependent and independent variables.

States the a priori belief about the relationship between the dependent and independent variables.

Variables are defined without scientific ambiguity and are feasible to measure using either a quantitative or qualitative approach.

The hypothesis must be testable using scientific methods. (While constructing the hypothesis, one shall try to think of different methods that might be applicable to test your hypothesis).

The hypothesis is also feasible with respect to the current resources and time frame.

Here are some examples of hypotheses:

Farmed salmon in Norway are more likely to have a higher prevalence of parasitic diseases than wild salmon. (Good!)

  • My comment: The dependent variable is the prevalence of parasitic diseases. The independent variable is the farmed or wild status. The predicted effect here is that farmed salmon have a higher prevalence of parasitic disease. The effect is measured as prevalence, which is unambiguous. This is reasonably straightforward to test as well.

The extinction of honey bees will lead to the mass extinction of other species, including humans. (Poor!)

  • Now try to apply the main criteria of a good hypothesis to critique this one: why does it sound plausible, yet make for such a poor hypothesis?
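To show how the “good” salmon hypothesis could actually be tested, here is a sketch in Python using scipy’s Fisher exact test; all the counts below are invented purely for illustration:

```python
# Hypothetical survey: of 200 farmed and 200 wild salmon, how many
# carry parasites? Rows are groups, columns are (parasites, no parasites).
from scipy.stats import fisher_exact

farmed = [60, 140]  # 30% prevalence (invented)
wild = [30, 170]    # 15% prevalence (invented)

# One-sided test of the directional prediction: farmed > wild prevalence.
odds_ratio, p_value = fisher_exact([farmed, wild], alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4f}")
```

Notice how the hypothesis’s clearly defined variables (prevalence; farmed vs. wild status) map directly onto a 2×2 table and a standard test — exactly what the “poor” hypothesis above would not allow.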

Some fields have their own guidelines on how to generate “tight” hypotheses. For example, the P.I.C.O. framework is commonly used in evidence-based medicine to formulate high-quality clinical questions. The following summarizes each P.I.C.O. component. Although this framework was designed for one particular field, it can be applicable to other scientific disciplines as well. If your research question can be formulated in such a comparison setting, please think through these four components.

  • P — Patient / Population / Problem: who or what is being studied?
  • I — Intervention / Exposure: what is being done or observed?
  • C — Comparison: against what alternative is the intervention judged?
  • O — Outcome: what effect is being measured?

Protip: Use study reporting guidelines to navigate your research question formulation process. When starting a new research project, despite our enthusiasm and motivation, we may still feel quite clueless, especially us young researchers. Firstly, if you feel this way, fear no more: you are not alone. Secondly, there might be some good news for you. Depending on your study design (coming up in the next chapter), there are corresponding protocols to guide researchers through the study reporting phase. These reporting protocols provide a list of the information needed to ensure the transparency of the reported study, and are often developed by a panel of experts within the research field. They can be used to spur high-quality research question/hypothesis generation even at the early stage of a study! Here are several examples of such reporting guidelines:

Consolidated Standards of Reporting Trials (CONSORT) for randomized controlled trials

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for systematic reviews and meta-analyses

Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) for observational studies, such as cohort, case-control, and cross-sectional studies, and conference abstracts for these studies

Case Report Guidelines (CARE) for case reports

Consolidated Criteria for Reporting Qualitative Research (COREQ) for qualitative research

Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines for research involving experimental animals

Consolidated Health Economic Evaluation Reporting Standards (CHEERS) for economic evaluations

Protip: Conduct a literature review prior to formulating or finalizing the research question or hypothesis. In Star Trek: Voyager, Captain Janeway led her crew to explore the uncharted Delta Quadrant. (They came from the Alpha Quadrant.) One obvious reason that they constantly got into trouble (so the show could last for seven seasons) was that they lacked knowledge of all the planets and new alien species they were dealing with. They called themselves explorers. And their trip back home was full of treacherous adventures. Trust me, my friend, you don’t want your research to be anything like an adventure! A sufficient literature review prior to formulating or finalizing your research question/hypothesis will provide you with a map of the scientific field that you are interested in exploring.

1.1.4 Thinking Forward vs. Thinking Backward

“By failing to prepare, you are preparing to fail.”

Benjamin Franklin

By now, you may have noticed that the main idea we would like to drive home is “Think ahead of time! Plan ahead of time! Prepare ahead of time!”. Forward thinking in research conduct allows the researchers not only to better define and understand the research topics themselves, but also anticipate potential contingencies and prepare to tackle the expected “unexpected”.

Backward thinking in scientific research is not uncommon. Think about the following scenario: you worked hard to generate some seemingly meaningful results. Because you didn’t form any plausible research question, and you were too lazy to conduct a literature review, now you do not know how to interpret the results. “Hey, it’s not too late to start some literature review!”, you say to yourself. Then you put the results in the Google search bar and add a question mark at the end. Then you naively think, “Voila, problem solved!”, and continue on to writing the discussion section of your paper… Please don’t feel ashamed if this situation sounds familiar; however, you must know by now that this kind of practice is just horrible, period! Backward thinking often leads to HARKing, which stands for Hypothesizing After the Results are Known. HARKing commonly increases the risk of type I error, and further contributes to the reproducibility crisis. ( Click this Link to Read more about HARKing. )

1.1.5 Fun Readings and Additional Resources

  • Tukey, Design Thinking, and Better Questions - a neat blog article on research questions written by Roger Peng from Johns Hopkins University.

1.2 Choose Your Study Design

1.2.1 But First, Know Your Data

Where does data come from anyway? In general there are three main sources of data for your research:

Data gathered by the researchers themselves. In this case, the researchers are more likely to have a better understanding of the data used in their study, as they are actively involved in the data collection procedure. Some examples of this kind of “shoe-leather” work include epidemiologists gathering patient-level data from a clinical trial; anthropologists recording interviews with residents of a certain tribe; political scientists conducting polling surveys online; earth science researchers collecting information on soil and weather conditions; electrical engineers simulating a target signal of interest; etc.

Single-source pre-existing data. In this case, the data has already been gathered, mostly for either a more general purpose or a different purpose. But all the information has been consolidated into one dataset, and it is ready to be repurposed for the researchers’ new study. Some examples include bioinformaticians using a subset of UK Biobank data for genetic analysis; nutritionists using a pre-established cohort, such as the Stanford Well Living Laboratory, to investigate alcohol consumption; econometricians employing Uber driver data to evaluate service quality; etc.

Multi-source pre-existing data. In this scenario, researchers need to pull data from multiple existing data sources. The original data sources can be passively collected, such as insurance claims or Facebook user information, or actively collected with a study design in place. Researchers have to harmonize the data from different sources to fit the goal of the current study. Some examples are using health insurance claims, hospital registries, and surgeon records to evaluate healthcare quality; or employing satellite images and other meteorological measurements to study crops’ growing patterns.

Once we obtain the research data, I don’t know about you, but I get extremely excited! However, before we load everything into any analytic software while humming our favorite tunes, there are several sanity checks we should go over together:

How are the study units selected? (Sampling scheme applied? Administrative information?…)

How is the data collected? (Survey? Interview? Wearable Devices?…) Are there any underlying data structures that we should be aware of? (Correlated study units? Repeated measurements? …)

Carefully going through these questions will help us anticipate potential biases, choose appropriate analytical methodologies, draw coherent inferences and conclusions, and make planned extrapolations. The following section will point out all the concepts that we need to answer the above questions. In this manual, I will only scratch the surface of these concepts, but provide links to further readings for the keeners!

1.3 Study Designs

“We must never make experiments to confirm our ideas, but simply to control them.”

Claude Bernard

1.3.1 What is study design anyway? - An Important Mind Map

Now, let’s think about planning an epic expedition to Yosemite. You want to know the general terrain and geology you will set your feet on, choose the route that takes you to Half Dome, and bring the right gear to boost your performance. And if you have a bit of extra time, you might also want to test the gear during a shorter hike. Once you reach the top, you take breathtaking photos. Then you come back and tell all your family and friends, in great detail over beer, about how you made it to the top. If this sounds familiar, congratulations: you are a natural at study design.

Similar to planning an epic hiking trip, the broader concept of study design is the process of planning the execution protocol and analytic method of your study. By the time you have a clearly defined research question, its nature should hint you towards certain study designs, or at least help you toss some of the designs into the dumpster. The following section will provide some ideas on how to use the type of question you are trying to answer to guide you through choosing an appropriate study design.

For now, let’s first go through the components of study design:

  • Execution Protocol is the core of study design. Most of the time, when people talk about study design, they refer to this study execution protocol. Depending on the stage of the research, the protocol may include an array of documentation: the detailed plan for fieldwork to gather population data (human populations, microbe populations, animals, plants, …), including the sampling scheme, recruitment strategy, quality control, etc. If the research involves wet-lab work, then the protocol shall document all the steps performed in the experiments, including the equipment used to process these steps. If human participants are involved, one should also compose the Institutional Review Board (IRB) protocol. If you are using multi-source data, the protocol shall elaborate on the harmonization of data from different sources. Even if you are using a single-source dataset, your inclusion and exclusion criteria could differ from those of the original data collection protocol; hence, in your protocol, you shall document any inclusion or exclusion of records. This will make sure that when you rerun your study, or others try to reproduce it, you will all be on the same page prior to conducting any analysis. To sum up: whatever you do to gather your data and to massage it prior to the analysis, document all your steps in the execution protocol.

Note: Sampling is an art in itself. The study design should also govern how the study units are sampled, whether directly via fieldwork or bench work, or from existing data sources. We will lightly touch on this topic when introducing the types of study designs.

  • Analytic Method should also naturally stem from the kind of question you are asking. Moreover, a lot of the time the execution protocol implies a proper set of analytical methods. You may “play around” with your data to get familiarized with it; we all do that. However, once you decide to take it seriously, plan your analysis prior to generating any results. For example, if your study involves a hypothesis, you shall predetermine how many alternative hypotheses you would like to test. If you are doing a confirmatory study and would like to interpret the effect sizes of independent variables on the dependent variable, you may consider a method that can actually obtain the effect estimator, instead of a non-parametric or black-box method where the effect sizes are not explicit. If you are asking a causal question, the analytical method should fit into the causal inference framework.

Protip: Another great practice, when you think through all the candidate models, is to picture the following figure, where the x-axis is the model complexity and the y-axis is the number of assumptions the model makes about the data. The fuzziness of the ball indicates the interpretability of the model. Most of the time, there is no perfect model to answer your question; you end up evaluating the tradeoffs among the available methods. This is where knowing your data comes in quite handy.


Figure 1.2: Figure credit: Yan Min

Protip: If your study involves human subjects, you will need to draft an IRB protocol for approval. Although the purpose of the IRB is not to govern the study quality and rigor but to guarantee the ethical conduct of the research, some of the IRB frameworks can be used as a guiding tool to think through the study execution. Here is the link to the Stanford IRB office ; you may find many more resources on their website.

1.3.2 Types of Study Designs

The concept of study design exists in almost every academic field. Some study designs are more common in one field than in others; some are widely used across disciplines but under esoteric names in certain fields. To avoid confusion, I will first discuss the general study design patterns with respect to the question types, then introduce an additional factor: temporality. I will mention specific study design names commonly used in my discipline (epidemiology) in several examples, but I will provide the definition of each study design type.

Note: When do you need a study design? If you don’t want to turn your research project into an unwanted surprise, you will always be in need of a study design!

1.4 Recognize Different Sources of Errors/Uncertainties in Estimation

“The mistake is thinking that there can be an antidote to the uncertainty”

David Levithan

As long as you conduct scientific research, it is absolutely inevitable that you will deal with errors and uncertainties of some sort. However, don’t panic! Although there is no antidote to the uncertainties, anticipating and knowing what kind of error or uncertainty you are dealing with is necessary, so that you can guard your research from being shipwrecked. Different study designs and data types have different intrinsic errors that can reduce the validity and/or accuracy of your study. The following section will, first, introduce different types of errors; then, show how different errors can compromise a study; further, provide examples of intrinsic errors for different study designs and data sources; and finally, briefly touch on different methods we can use to evaluate the errors/uncertainties.

1.4.1 Types of Errors

Generally speaking, there are three types of errors, namely random error , systematic error , and blunders . The following diagram indicates the classification of errors.


Figure 1.3: Figure credit: Yan Min

Random error (a.k.a. unsystematic error) often refers to random noise or random fluctuations that are impossible to remove (irreducible) when taking measurements. Such errors are usually unpredictable, and the exact error cannot be replicated by repeating the measurement.

Systematic error (a.k.a. systematic bias) usually results from flaws in the execution of a study, such as the selection of study units or the procedure for gathering relevant information. The error is consistent over time if the flaw in the study is not corrected or adjusted.

Blunders are straight mistakes in the process of research conduct. This is unfortunate; however, as long as these mistakes are spotted, they are often easy to correct. We will not spend additional effort discussing blunders, as the previous two types of errors are the “boat-sinkers” more than 99% of the time.

Among the three types of errors, systematic error is the most complicated and can completely nullify the study results, so let’s take a further look at it. As shown in the diagram above, systematic error can be further classified into selection bias, information bias, and confounding bias.

Selection Bias occurs when study units are systematically included in or excluded from the study. Such bias may relate to the exposure/treatment of interest, and therefore tends to distort the study results away from the true result. There are also many different kinds of selection bias, such as self-selection bias, differential losses to follow-up, Berksonian bias, etc.

Information Bias is caused by erroneous measurement of the needed information on the exposure/treatment/independent variable, the outcome/dependent variable, or other relevant covariates. Information bias often results in misclassification of the exposure and/or the outcome. Such misclassification can be differential or nondifferential. Differential misclassification occurs when the chance of being misclassified differs across study groups (exposed/unexposed groups, treated/untreated groups); the direction and magnitude of the resulting bias need to be further scrutinized. Nondifferential misclassification occurs when the misclassification is independent of the other studied variables (exposure, outcome); it usually biases the results towards the null.

Confounding Bias is caused by confounding variables, which have independent associations with both the exposure/treatment/independent variable and the outcome/dependent variable. Confounding leads to a noncausal association that is likely a distorted version of the true (causal) relationship. Adjusting for the confounders can usually explain away the observed noncausal association. Confounding is a rather complicated concept, and most studies will suffer from some sort of confounding.
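Confounding is easier to see in a toy simulation. The sketch below (pure Python, with all probabilities invented for illustration) lets age drive both coffee drinking and disease, creating a spurious coffee–disease association that vanishes once we stratify by age:

```python
import random

random.seed(0)
people = []
for _ in range(100_000):
    old = random.random() < 0.5                         # confounder
    coffee = random.random() < (0.7 if old else 0.3)    # exposure depends on age
    disease = random.random() < (0.2 if old else 0.05)  # outcome depends on age ONLY
    people.append((old, coffee, disease))

def risk(rows):
    return sum(d for _, _, d in rows) / len(rows)

# Crude comparison: coffee drinkers look like they have more disease.
crude_gap = risk([p for p in people if p[1]]) - risk([p for p in people if not p[1]])

# Stratify by the confounder: within each age group, the gap vanishes.
stratum_gaps = []
for old in (True, False):
    sub = [p for p in people if p[0] == old]
    gap = risk([p for p in sub if p[1]]) - risk([p for p in sub if not p[1]])
    stratum_gaps.append(gap)

print(f"crude risk difference: {crude_gap:.3f}")                  # clearly positive
print(f"within-age differences: {[round(g, 3) for g in stratum_gaps]}")  # near zero
```

Since coffee has no effect on disease in this simulated world, the crude association is pure confounding by age, and adjusting (here, stratifying) explains it away, just as described above.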

Note: We are only scratching the surface of all the possible biases that we might encounter in a study. To read more about biases, I find this Catalog of Bias from the University of Oxford quite helpful!

1.4.2 How Do Errors Affect Our Study Result?

When you show your study results to your advisor or reviewers, which features of the results will they care about most? Most likely, they will ask you a series of questions regarding precision and validity.

Precision concerns how close the measurements are to each other. As indicated in the figure above, random errors affect the precision of one measurement.

Validity concerns how close the estimate is to the true effect size. Validity includes internal validity and external validity; the latter is also called generalizability. Internal validity evaluates whether the estimate is close to the true effect size in the study subjects, and it is often affected by systematic errors. External validity concerns whether the estimate generalizes, i.e., whether the estimate from the current study is close to the true effect size in subjects not included in the current study. For an excellent discussion of validity in relation to measurement and fairness, please refer to this paper by Jacobs and Wallach.

As shown in the following figure, random error alone usually affects the study precision, while systematic error affects the study validity. For example, a noisy measurement of the key exposure variable, a random error by definition, will result in much weaker inference about the exposure; in some cases, this shows up as wide confidence intervals. Whereas selecting only healthy participants to test a COVID-19 vaccine, a systematic error, could produce a false conclusion that the vaccine is effective. However, the direction and magnitude of the bias caused by a systematic error must be discussed based on the study-specific conditions. When both random error and systematic error occur in one study, the results can be quite questionable… But you don’t want to wait until you see this happen in your study. Therefore, based on what you know about the source of your data and your study design, you should try to anticipate the potential errors you may encounter, and address such concerns in the design phase to reduce the chance of these anticipated errors occurring.


Figure 1.4: Figure credit: Yan Min

1.5 Typical Biases in Different Data Sources and Study Designs

Note: Here, I would like to give a case study as an example of how to put everything we have discussed into practice. [add case study example]

1.6 How to Reduce and Estimate Different Types of Errors

Random Errors. Again, random error is unpredictable and non-systematic, and it affects the precision of the study estimate. There are several ways to reduce such error:

Provide sufficient, uniform training to all personnel taking the measurements, and maintain good experimental technique.

When taking measurements, plan to take repeated measurements and use the average. E.g., when measuring a patient's blood pressure, measure three times to average out the random variability in the patient's blood pressure. When measuring the concentration of a chemical solution, also measure more than once to account for the intra-rater variability of the researcher.

Increase the sample size. The standard error is inversely proportional to the square root of the sample size N (recall "the square-root law").

To effectively estimate the random error, we need to consider both the measurement error and the sampling error.
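The square-root law is easy to check empirically. A small simulation, assuming Gaussian measurements with an arbitrary spread of 10:

```python
import random
import statistics

random.seed(1)

def empirical_se(n, sigma=10.0, reps=2000):
    """Empirical standard error of the sample mean at sample size n."""
    means = [statistics.mean(random.gauss(0, sigma) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

# Square-root law: SE = sigma / sqrt(N), so quadrupling N halves the SE.
se_100 = empirical_se(100)   # theory: 10 / sqrt(100) = 1.0
se_400 = empirical_se(400)   # theory: 10 / sqrt(400) = 0.5
print(f"SE(N=100) = {se_100:.2f}, SE(N=400) = {se_400:.2f}")
```

Quadrupling the sample size buys you only a halving of the standard error, which is why precision gets expensive quickly.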

Systematic Error.

Systematic error, a.k.a. systematic bias, comes in different forms. There is no universal panacea that reduces all systematic errors. The best way to cope with them is to anticipate the sources and causes of these biases based on your study design.

Note that it is possible to correct for systematic bias if you can adequately characterize it. For example, in this study, the authors were able to accurately measure public opinion by polling people on the Xbox. Xbox users are, unsurprisingly, a heavily biased subset of the population. Nevertheless, because the authors had information about the respondents, they were able to correct for this distortion.
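The logic of such a correction can be illustrated with a toy version of post-stratification, using made-up numbers. This is a sketch of the general idea only, not the method used in the cited paper, which employed a more sophisticated model-based adjustment:

```python
# Hypothetical polling data: support for some policy, by age group.
# The sample over-represents young respondents, as a game-console poll would.
sample = {            # cell -> (respondents, supporters)
    "18-29": (700, 420),
    "30-59": (250, 100),
    "60+":   (50,  10),
}
population_share = {"18-29": 0.20, "30-59": 0.50, "60+": 0.30}  # assumed census shares

# Naive estimate: raw proportion of supporters in the biased sample.
n_total = sum(n for n, _ in sample.values())
naive = sum(s for _, s in sample.values()) / n_total

# Post-stratified estimate: weight each cell's support rate by its
# true population share instead of its (distorted) sample share.
corrected = sum(population_share[cell] * (s / n)
                for cell, (n, s) in sample.items())

print(f"naive: {naive:.3f}, post-stratified: {corrected:.3f}")
```

Because we know each cell's true population share, the distortion introduced by the skewed sample composition can be undone by reweighting.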

Bias analysis is a very active methodological research field. The most common approach is to conduct various types of sensitivity analyses to evaluate how tolerant the estimated results are to bias.

1.7 Create Your Analytic Plan

1.7.1 Choose Your Weapon

1.7.2 Mind Your Power

“Nearly all men can stand adversity, but if you want to test a man’s character, give him power.”

Abraham Lincoln

Inspired by Lincoln's quote, I am thinking "nearly all research results can stand a long, baffling discussion section in the researcher's own paper, but if you want to test a result's true scientific plausibility, give the research power!" In statistics, power is defined as the probability of avoiding a Type II error. In other words, power is the chance that a test of significance will detect an effect, given that the effect actually exists. In general, power is related to the prespecified significance level, the study sample size, the number of studied factors/interventions/exposures/treatments, and the anticipated effect sizes. This section aims to motivate young researchers to think about power calculation whenever it is applicable to their studies. For more detailed information on power calculation, I find this simple tutorial from Boston University very clear; it can be a good starting point.
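As a concrete illustration, under a normal approximation (a simplification of the exact t-test calculation that standard software performs), the power of a two-sided, two-sample comparison can be computed with nothing but the standard library. The function and its numbers below are a sketch, not a replacement for validated power software:

```python
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    effect_size: standardized mean difference (Cohen's d).
    Assumes equal arms and known variance (normal approximation).
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    # Under the alternative, the test statistic is centered at d * sqrt(n / 2).
    shift = effect_size * (n_per_group / 2) ** 0.5
    return 1 - z.cdf(z_crit - shift)

# A medium effect (d = 0.5) with 64 participants per arm gives roughly 80%
# power; halving the effect size collapses the power dramatically.
print(f"d=0.50, n=64: power = {two_sample_power(0.5, 64):.2f}")
print(f"d=0.25, n=64: power = {two_sample_power(0.25, 64):.2f}")
```

Note how power depends on exactly the ingredients listed above: the significance level, the sample size, and the anticipated effect size.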

So when do we need to conduct a power calculation? To answer this question, I tend to think about the following criteria:

Do I need to collect the data?

Does my research question contain a hypothesis for me to test?

Is power calculation required because it is part of a grant application?

Figure credit: Yan Min

Power calculation is commonly conducted when the research question contains a hypothesis to test and the researchers need to design an experiment to collect the first-hand data to test the hypothesis.

For example, consider a randomized controlled trial testing the effectiveness of a new drug for treating systemic lupus. With limited time and resources, researchers need to determine how many people to enroll in each arm of the study. Assume the study has two arms, a new-drug arm and an old-drug arm. The researchers must decide whether participants are matched across arms and, if so, at what ratio: 1 treated to 2 controls? Or 1 treated to 5 controls? They then specify a significance level, conventionally set at 0.05, and an anticipated effect size for alleviating the disease symptoms. Once this is in place, the researchers can calculate the sample size needed, finding the sweet spot of a reasonable sample size that achieves a reasonably high power. When researchers skip the calculation and just wing it, two things can happen. If the sample size turns out to be much larger than actually needed to detect the effect, good! They will merely spend more money and time conducting the trial. But if the sample size is insufficient, they are in trouble: after spending all the money and time to complete the entire trial, they will be unable to draw any concrete conclusion about the drug being tested. Hence, always do a power calculation if it is applicable to your research setting. Although power calculation is not a panacea for under-powered studies, it at least provides reasonable guidance on where the results might be heading before you actually have the data.
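Inverting the same normal approximation used for power gives a back-of-the-envelope per-arm sample size. Again, this is a hedged sketch under the equal-arms, known-variance assumption; real trials should use validated software:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, power=0.80, alpha=0.05):
    """Per-arm sample size for a two-sided, two-sample z-test (normal approx.)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    # Invert power = Phi(d * sqrt(n / 2) - z_alpha) for n.
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Smaller anticipated effects demand much larger trials.
for d in (0.8, 0.5, 0.2):
    print(f"d={d}: {n_per_arm(d)} participants per arm")
```

The required n grows with the inverse square of the effect size, which is why an honest, conservative guess at the anticipated effect matters so much at the design stage.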

Although power analysis is most commonly used in human subjects experiments, it is also relevant to other areas of data science, including user studies and benchmark comparisons. Just as you need a certain number of people in your psychology study to be able to have a decent chance of being able to detect the true effect as significant, you need a sufficient number of raters to be able to test whether people actually prefer your system. Similarly, to provide convincing evidence that your new model is better than the previous state of the art, the test set needs to be big enough to be able to have a good chance of detecting the improvement as significant. For more details on this connection, have a look at <Dallas’ paper that will be on arxiv in two weeks>.

Note: More and more articles point out that 0.05 is a fairly low bar for meaningful scientific discovery. Even Fisher himself, who first proposed 0.05 as the cutoff, mentioned in his original writing that this threshold only means the result is worth further exploration; it has nothing to do with confirming a scientific finding.

Note: How high a power is high enough?

1.7.3 Need a Pilot?

1.7.3.1 What Is a Pilot Study?

A pilot study is a small-scale experiment used to test the procedures and methods that could then be expanded to a larger scale. It is like a mini replica of your actual study. Depending on the needs of the particular study design, the main goals of a pilot study include:

Testing the feasibility of the recruitment protocol and the informed consent process. This informs the researchers about the feasibility of reaching the target population and the likely response rate. Researchers will also learn how interested the target population is in the proposed research topic and intervention.

Assessing the randomization procedure. If the study calls for randomization, the randomization procedure itself should be tested prior to the actual study, since a failed randomization process will jeopardize the overall validity of the study.

Testing the “flow” of the study protocol. The study protocol often involves multiple steps for the researchers and participants to complete, and the integrity of this “flow” is crucial. For example, after “filtering” participants using the prespecified inclusion/exclusion criteria, the participants are invited to a lab for a blood draw, and then the blood sample is prepared and transported, etc. All these steps need to connect smoothly, so that the process feels less hectic for the participants and the sample quality is guaranteed.

Evaluating metrics used for data collection. Studies using de novo questionnaires or other data collection tools need to pre-test these instruments in the target population so that researchers can catch any glitches in the tools before it is too late! Also, if there is more than one candidate tool, researchers may use the pilot to compare which tool is best in terms of ease of administering, recording, and analyzing.

Gaining insights into the quality and feasibility of gathering data on the study outcomes. Oftentimes, researchers think of multiple possible outcomes for evaluating the effects of the exposure/intervention/treatment. The gold-standard version of the outcome may be best in theory, but in practice it might not be. For example, when measuring fat mass in the target population, we can use body mass index (BMI), which only requires measuring weight and height; or use dual x-ray absorptiometry (DXA) to measure body composition, which tells us the volume of fat mass, lean mass, and bone mass; a third option is computed tomography (CT), which can also differentiate visceral fat from subcutaneous fat. If we do not know which one is most feasible, both cost-wise and time-wise, the insights gained from the pilot study can guide our decision making.

Familiarizing the research team with the study protocol. A pilot study is also an opportunity for team members to become familiar with every single step of the study; it can be viewed as part of team training. What kind of error(s) does this step reduce? Random errors or blunders!

1.7.4 So, Do You Need a Pilot?

People say you never test the depth of the water with both feet. Hence, my answer to this question is "Yes, more or less!" Whenever you have concerns or questions about the execution of the study, a pilot study may provide exactly the insight you need. Because its focus is on execution rather than the actual analytical results, and because it is small-scale, a pilot study can amplify the "signal" from each glitch in the research execution. However, we need to be mindful of the kinds of concerns that a pilot study cannot address:

A pilot study cannot be used to evaluate adverse effects on the studied population. A specific trial design is needed to evaluate the toxicity, safety, and tolerability of an exposure/treatment/intervention.

It also cannot inform the effect sizes used in the sample size calculation for the main study. Again, the goal of a pilot study is to evaluate the process, not to inform the results; the small sample size of a pilot study usually yields a very unstable estimate of the effect size.

Following the second point, a pilot study also cannot be used to inform the results of hypothesis testing. Please do not use a pilot study to test multiple alternative hypotheses and pick the significant one to be tested in the main study. First, this increases the chance of a Type I error (false positive); second, something that is significant in a pilot study might not be significant in the main study anyway. (Conditional on the null hypothesis being true, p-values follow a uniform distribution.)
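The parenthetical claim about p-values is easy to verify by simulation: generate data under the null many times and look at the resulting p-values. The z-test on Gaussian draws below is purely illustrative:

```python
import random
from statistics import NormalDist, mean

random.seed(2)
norm = NormalDist()

def null_p_value(n=30):
    """Two-sided p-value of a z-test for mean 0 when the null is actually true."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    z = mean(xs) * n ** 0.5          # sample mean has SD 1/sqrt(n) under the null
    return 2 * (1 - norm.cdf(abs(z)))

p_values = [null_p_value() for _ in range(5000)]

# Uniform p-values: about 5% fall below 0.05, and the average sits near 0.5.
frac_below_05 = sum(p < 0.05 for p in p_values) / len(p_values)
print(f"fraction below 0.05: {frac_below_05:.3f}")
print(f"mean p-value: {mean(p_values):.3f}")
```

This is exactly why fishing through many pilot hypotheses for a "significant" one is dangerous: even with no real effects, roughly one in twenty tests will clear the 0.05 bar by chance.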

1.7.5 Fun Readings and Additional Resources

1.7.5.1 On Pilot Studies

Design and analysis of pilot studies: recommendations for good practice.

Pilot Studies: Common Uses and Misuses.

Guidelines for Reporting Non-randomised Pilot and Feasibility Studies

Internal Pilots for Observational Studies

1.8 Start Documenting Your Grand Study

“Documentation is a love letter that you write to your future self.”

Damian Conway

Okay, let’s be honest. Documentation is far less exciting than the previous three parts, but it is a crucial part of every research project. This chapter serves only a motivational purpose on this topic; the following two sections of this manual discuss it in great detail. Simply put, documentation makes your life easier. Well-documented studies are more likely to be reproduced, first by yourself, then by researchers from other groups. Imagine your reviewers asking you the following questions:

How has the data been quality-controlled?

Which participants were excluded from the final analysis? And why?

Why did you choose 387 as your sample size?

Could you please change the colors of the figures to make them color-blind friendly?

Before running the blood assays, how were the blood samples prepared?

Why, in the questionnaire you designed, does question A come after question B? What are your ordering test results?

These are the kinds of questions that keep me awake at night. To provide the answers, we usually need to go back into the nitty-gritty details of how the research was conducted. What if we did not document all of this; should we just count on our vague memories? Trust me, although we are still young, for a majority of us our memories are not that trustworthy. Of course, the motivation for documenting our research projects well should not be to answer questions from reviewers. What really matters is that we, as primary researchers, know what we have done to our study, step by step, in great detail. This is a commitment to moving away from sloppy, hand-wavy science! It is also a way to show love to our future selves by reducing ambiguity about what has actually been done. As mentioned earlier, another purpose of documentation is to serve the greater research community and, together, to conduct high-quality scientific research, so that the field is populated by high-quality, reproducible advances instead of spotty significant results here and there. To serve these two purposes, the documentation plan can be divided into internal documentation and external documentation. The former is more self-serving and is mostly done on an internal platform; the latter serves the greater good of the research community and the scientific field and is mostly done on an external platform.

1.8.1 Internal Documentation

Sampling and recruiting protocols and any justified modifications.

Experiment/wet lab routines, procedures, and personnel.

Protocol and results of the pilot study.

Development of any instruments/metrics/questionnaires used in the project, and their validation procedures.

Device calibration process.

Data quality control protocols, variable definitions, any kind of data transformations applied prior to the analysis.

All the analyses that have been performed (not just the one you “like” the most).

Code (or software settings) used to generate figures and tables.

FLEET LIBRARY | Research Guides

Rhode Island School of Design: Create a Research Plan


A research plan is a framework that shows how you intend to approach your topic. The plan can take many forms: a written outline, a narrative, a visual/concept map, or a timeline. It's a document that will change and develop as you conduct your research.

Components of a research plan

1. Research conceptualization - introduces your research question

2. Research methodology - describes your approach to the research question

3. Literature review, critical evaluation and synthesis - systematic approach to locating, reviewing and evaluating the work (text, exhibitions, critiques, etc.) relating to your topic

4. Communication - geared toward an intended audience, shows evidence of your inquiry

Research conceptualization refers to the ability to identify specific research questions, problems or opportunities that are worthy of inquiry. Research conceptualization also includes the skills and discipline that go beyond the initial moment of conception, and which enable the researcher to formulate and develop an idea into something researchable (Newbury 373).

Research methodology refers to the knowledge and skills required to select and apply appropriate methods to carry through the research project (Newbury 374).

Method describes a single mode of proceeding; methodology describes the overall process.

Method - a way of doing anything especially according to a defined and regular plan; a mode of procedure in any activity

Methodology - the study of the direction and implications of empirical research, or the sustainability of techniques employed in it; a method or body of methods used in a particular field of study or activity. Browse a list of research methodology books or this guide on Art & Design Research.

Literature Review, critical evaluation & synthesis

A literature review is a systematic approach to locating, reviewing, and evaluating the published work and work in progress of scholars, researchers, and practitioners on a given topic.

Critical evaluation and synthesis is the ability to handle (or process) existing sources. It includes knowledge of the sources of literature and the contextual research field within which the person is working (Newbury 373).

Literature reviews are done for many reasons and in many situations.

Sources to consult while conducting a literature review:

Online catalogs of local, regional, national, and special libraries

meta-catalogs such as WorldCat, Art Discovery Group, Europeana, World Digital Library, or RIBA

subject-specific online article databases (such as the Avery Index, JSTOR, Project Muse)

digital institutional repositories such as Digital Commons @RISD; see Registry of Open Access Repositories

Open Access Resources recommended by RISD Research Librarians

works cited in scholarly books and articles

print bibliographies

the internet - locate major nonprofit, research institute, museum, university, and government websites

search Google Scholar to locate grey literature & referenced citations

trade and scholarly publishers

fellow scholars and peers

Communication

Communication refers to the ability to

  • structure a coherent line of inquiry
  • communicate your findings to your intended audience
  • make skilled use of visual material to express ideas for presentations, writing, and the creation of exhibitions (Newbury 374)

Research plan framework: Newbury, Darren. "Research Training in the Creative Arts and Design." The Routledge Companion to Research in the Arts. Ed. Michael Biggs and Henrik Karlsson. New York: Routledge, 2010. 368-87. Print.



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

InformedHealth.org [Internet]. Cologne, Germany: Institute for Quality and Efficiency in Health Care (IQWiG); 2006-.


In brief: What types of studies are there?

Last Update: September 8, 2016; Next update: 2024.

There are various types of scientific studies such as experiments and comparative analyses, observational studies, surveys, or interviews. The choice of study type will mainly depend on the research question being asked.

When making decisions, patients and doctors need reliable answers to a number of questions. Depending on the medical condition and patient's personal situation, the following questions may be asked:

  • What is the cause of the condition?
  • What is the natural course of the disease if left untreated?
  • What will change because of the treatment?
  • How many other people have the same condition?
  • How do other people cope with it?

Each of these questions can best be answered by a different type of study.

In order to get reliable results, a study has to be carefully planned right from the start. One thing that is especially important to consider is which type of study is best suited to the research question. A study protocol should be written and complete documentation of the study's process should also be done. This is vital in order for other scientists to be able to reproduce and check the results afterwards.

The main types of studies are randomized controlled trials (RCTs), cohort studies, case-control studies and qualitative studies.

  • Randomized controlled trials

If you want to know how effective a treatment or diagnostic test is, randomized trials provide the most reliable answers. Because the effect of the treatment is often compared with "no treatment" (or a different treatment), they can also show what happens if you opt to not have the treatment or diagnostic test.

When planning this type of study, a research question is stipulated first. This involves deciding what exactly should be tested and in what group of people. In order to be able to reliably assess how effective the treatment is, the following things also need to be determined before the study is started:

  • How long the study should last
  • How many participants are needed
  • How the effect of the treatment should be measured

For instance, a medication used to treat menopause symptoms needs to be tested on a different group of people than a flu medicine. And a study on treatment for a stuffy nose may be much shorter than a study on a drug taken to prevent strokes.

“Randomized” means divided into groups by chance. In RCTs participants are randomly assigned to one of two or more groups. Then one group receives the new drug A, for example, while the other group receives the conventional drug B or a placebo (dummy drug). Things like the appearance and taste of the drug and the placebo should be as similar as possible. Ideally, the assignment to the various groups is done "double blinded," meaning that neither the participants nor their doctors know who is in which group.

The assignment to groups has to be random in order to make sure that only the effects of the medications are compared, and no other factors influence the results. If doctors decided themselves which patients should receive which treatment, they might – for instance – give the more promising drug to patients who have better chances of recovery. This would distort the results. Random allocation ensures that differences between the results of the two groups at the end of the study are actually due to the treatment and not something else.
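Random allocation itself is simple to implement. A minimal sketch is below; in practice, trials use pre-generated, concealed randomization lists rather than ad-hoc code, but the principle of assignment by chance is the same. The arm names and participant IDs are hypothetical:

```python
import random

random.seed(3)

def randomize(participants, arms=("new drug A", "conventional drug B")):
    """Allocate participants to equal-sized arms purely by chance."""
    ids = list(participants)
    random.shuffle(ids)                      # chance, not the doctor's judgment
    k = len(arms)
    return {arm: ids[i::k] for i, arm in enumerate(arms)}

allocation = randomize(range(1, 21))         # 20 hypothetical participant IDs
for arm, members in allocation.items():
    print(arm, sorted(members))
```

Because the shuffle, not a clinician, decides who gets which treatment, any difference between the arms at the end of the study can be attributed to the treatment rather than to how patients were picked.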

Randomized controlled trials provide the best results when trying to find out if there is a cause-and-effect relationship. RCTs can answer questions such as these:

  • Is the new drug A better than the standard treatment for medical condition X?
  • Does regular physical activity speed up recovery after a slipped disk when compared to passive waiting?
  • Cohort studies

A cohort is a group of people who are observed frequently over a period of many years – for instance, to determine how often a certain disease occurs. In a cohort study, two (or more) groups that are exposed to different things are compared with each other: For example, one group might smoke while the other doesn't. Or one group may be exposed to a hazardous substance at work, while the comparison group isn't. The researchers then observe how the health of the people in both groups develops over the course of several years, whether they become ill, and how many of them pass away. Cohort studies often include people who are healthy at the start of the study. Cohort studies can have a prospective (forward-looking) design or a retrospective (backward-looking) design. In a prospective study, the result that the researchers are interested in (such as a specific illness) has not yet occurred by the time the study starts. But the outcomes that they want to measure and other possible influential factors can be precisely defined beforehand. In a retrospective study, the result (the illness) has already occurred before the study starts, and the researchers look at the patient's history to find risk factors.

Cohort studies are especially useful if you want to find out how common a medical condition is and which factors increase the risk of developing it. They can answer questions such as:

  • How does high blood pressure affect heart health?
  • Does smoking increase your risk of lung cancer?

For example, one famous long-term cohort study observed a group of 40,000 British doctors, many of whom smoked. It tracked how many doctors died over the years, and what they died of. The study showed that smoking caused a lot of deaths, and that people who smoked more were more likely to get ill and die.

  • Case-control studies

Case-control studies compare people who have a certain medical condition with people who do not have the medical condition, but who are otherwise as similar as possible, for example in terms of their sex and age. Then the two groups are interviewed, or their medical files are analyzed, to find anything that might be risk factors for the disease. So case-control studies are generally retrospective.

Case-control studies are one way to gain knowledge about rare diseases. They are also not as expensive or time-consuming as RCTs or cohort studies. But it is often difficult to tell which people are the most similar to each other and should therefore be compared with each other. Because the researchers usually ask about past events, they are dependent on the participants’ memories. But the people they interview might no longer remember whether they were, for instance, exposed to certain risk factors in the past.

Still, case-control studies can help to investigate the causes of a specific disease, and answer questions like these:

  • Do HPV infections increase the risk of cervical cancer?
  • Is the risk of sudden infant death syndrome (“cot death”) increased by parents smoking at home?

Cohort studies and case-control studies are types of "observational studies."

  • Cross-sectional studies

Many people will be familiar with this kind of study. The classic type of cross-sectional study is the survey: A representative group of people – usually a random sample – are interviewed or examined in order to find out their opinions or facts. Because this data is collected only once, cross-sectional studies are relatively quick and inexpensive. They can provide information on things like the prevalence of a particular disease (how common it is). But they can't tell us anything about the cause of a disease or what the best treatment might be.

Cross-sectional studies can answer questions such as these:

  • How tall are German men and women at age 20?
  • How many people have cancer screening?
  • Qualitative studies

This type of study helps us understand, for instance, what it is like for people to live with a certain disease. Unlike other kinds of research, qualitative research does not rely on numbers and data. Instead, it is based on information collected by talking to people who have a particular medical condition and people close to them. Written documents and observations are used too. The information that is obtained is then analyzed and interpreted using a number of methods.

Qualitative studies can answer questions such as these:

  • How do women experience a Cesarean section?
  • What aspects of treatment are especially important to men who have prostate cancer ?
  • How reliable are the different types of studies?

Each type of study has its advantages and disadvantages. It is always important to find out the following: Did the researchers select a study type that will actually allow them to find the answers they are looking for? You can’t use a survey to find out what is causing a particular disease, for instance.

It is really only possible to draw reliable conclusions about cause and effect by using randomized controlled trials. Other types of studies usually only allow us to establish correlations (relationships where it isn’t clear whether one thing is causing the other). For instance, data from a cohort study may show that people who eat more red meat develop bowel cancer more often than people who don't. This might suggest that eating red meat can increase your risk of getting bowel cancer. But people who eat a lot of red meat might also smoke more, drink more alcohol, or tend to be overweight. The influence of these and other possible risk factors can only be determined by comparing two equal-sized groups made up of randomly assigned participants.

That is why randomized controlled trials are usually the only suitable way to find out how effective a treatment is. Systematic reviews, which summarize multiple RCTs , are even better. In order to be good-quality, though, all studies and systematic reviews need to be designed properly and eliminate as many potential sources of error as possible.




Developing a Research Plan

  • First Online: 20 September 2022


  • Habeeb Adewale Ajimotokan

Part of the book series: SpringerBriefs in Applied Sciences and Technology ((BRIEFSAPPLSCIENCES))


The objectives of this chapter are to

Describe the terms research proposal and research protocol;

Specify and discuss the elements of a research proposal;

Specify the goals of research protocol;

Outline a preferable sequence for the different section headings of a research protocol and discuss their contents; and

Discuss the basic engineering research tools and techniques.



Author information

Authors and affiliations.

Department of Mechanical Engineering, University of Ilorin, Ilorin, Nigeria

Habeeb Adewale Ajimotokan



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Ajimotokan, H.A. (2023). Developing a Research Plan. In: Research Techniques. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-13109-7_4

Published: 20 September 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-13108-0

Online ISBN: 978-3-031-13109-7


Pilot Study in Research: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is currently studying for a Master's degree in Counseling for Mental Health and Wellness, which she began in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A pilot study, also known as a feasibility study, is a small-scale preliminary study conducted before the main research to check the feasibility or improve the research design.

Pilot studies can be very important before conducting a full-scale research project, helping design the research methods and protocol.

How Does it Work?

Pilot studies are a fundamental stage of the research process. They can help identify design issues and evaluate a study’s feasibility, practicality, resources, time, and cost before the main research is conducted.

A pilot study involves trying out the study on a small number of participants. Identifying flaws in the researcher's procedures at this stage can save time and, in some cases, money.

A pilot study can help the researcher spot ambiguities (for example, unclear wording), confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect: none of the participants can score well or complete the task, so all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling.”
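
As a minimal sketch of how pilot scores could be screened for these two effects: the `detect_effects` helper and its 15% flag threshold are illustrative assumptions, not a standard from the literature.

```python
def detect_effects(scores, lo, hi, threshold=0.15):
    """Flag possible floor/ceiling effects in pilot-task scores.

    A floor (ceiling) effect is suspected when more than `threshold`
    of participants score at the scale minimum (maximum).
    """
    n = len(scores)
    at_floor = sum(1 for s in scores if s == lo) / n
    at_ceiling = sum(1 for s in scores if s == hi) / n
    return {
        "floor_effect": at_floor > threshold,
        "ceiling_effect": at_ceiling > threshold,
        "pct_at_floor": at_floor,
        "pct_at_ceiling": at_ceiling,
    }

# Pilot scores on a 0-10 task where most participants cannot score:
print(detect_effects([0, 0, 0, 1, 0, 2, 0, 0], lo=0, hi=10))
```

If either flag is raised in the pilot, the task difficulty can be adjusted before the main study is run.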

This enables researchers to predict an appropriate sample size, budget accordingly, and improve the study design before performing a full-scale project.

Pilot studies also provide researchers with preliminary data to gain insight into the potential results of their proposed experiment.

However, pilot studies should not be used to test hypotheses since the appropriate power and sample size are not calculated. Rather, pilot studies should be used to assess the feasibility of participant recruitment or study design.
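
As a rough illustration of the sample-size step, the common normal-approximation formula n = 2((z₁₋α/₂ + z_power)/d)² per group can be computed with the standard library. The function name is hypothetical, and the result slightly underestimates what a t-test-based calculation would give.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison of means,
    using the normal approximation with d = Cohen's d (e.g., estimated
    cautiously from pilot data)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = z.inv_cdf(power)            # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A pilot suggesting a medium effect (d = 0.5) implies roughly:
print(sample_size_per_group(0.5))  # 63 per group
```

Because pilot effect-size estimates are imprecise, a figure like this is a planning aid rather than a definitive power calculation.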

By conducting a pilot study, researchers will be better prepared to face the challenges that might arise in the larger study. They will be more confident with the instruments they will use for data collection.

Multiple pilot studies may be needed in some studies, and qualitative and/or quantitative methods may be used.

To avoid bias, pilot studies are usually carried out on individuals who are as similar as possible to the target population but not on those who will be a part of the final sample.

Feedback from participants in the pilot study can be used to improve the experience for participants in the main study. This might include reducing the burden on participants, improving instructions, or identifying potential ethical issues.

Experiment Pilot Study

In a pilot study with an experimental design, you would want to ensure that your measures of the study variables are reliable and valid.

You would also want to check that you can effectively manipulate your independent variables and that you can control for potential confounding variables.

A pilot study allows the research team to gain experience and training, which can be particularly beneficial if new experimental techniques or procedures are used.

Questionnaire Pilot Study

It is important to conduct a questionnaire pilot study for the following reasons:
  • Check that respondents understand the terminology used in the questionnaire.
  • Check that emotive questions are not used, as they make people defensive and could invalidate their answers.
  • Check that leading questions have not been used as they could bias the respondent’s answer.
  • Ensure that the questionnaire can be completed in a reasonable amount of time. If it’s too long, respondents may lose interest or not have enough time to complete it, which could affect the response rate and the data quality.

By identifying and addressing issues in the pilot study, researchers can reduce errors and risks in the main study. This increases the reliability and validity of the main study’s results.

Advantages

  • Assessing the practicality and feasibility of the main study
  • Testing the efficacy of research instruments
  • Identifying and addressing any weaknesses or logistical problems
  • Collecting preliminary data
  • Estimating the time and costs required for the project
  • Determining what resources are needed for the study
  • Identifying the necessity to modify procedures that do not elicit useful data
  • Adding credibility and dependability to the study
  • Pretesting the interview format
  • Enabling researchers to develop consistent practices and familiarize themselves with the procedures in the protocol
  • Addressing safety issues and management problems

Limitations

  • Require extra costs, time, and resources.
  • Do not guarantee the success of the main study.
  • Contamination (i.e., if data from the pilot study or pilot participants are included in the main study results).
  • Funding bodies may be reluctant to fund a further study if the pilot study results are published.
  • Do not have the power to assess treatment effects due to small sample size.

  • Viscocanalostomy: A Pilot Study (Carassa, Bettin, Fiori, & Brancato, 1998)
  • WHO International Pilot Study of Schizophrenia (Sartorius, Shapiro, Kimura, & Barrett, 1972)
  • Stephen LaBerge of Stanford University ran a series of experiments in the 1980s that investigated lucid dreaming. In 1985, he performed a pilot study that demonstrated that time perception during lucid dreaming is the same as during wakefulness. Specifically, he had participants enter a state of lucid dreaming and count out ten seconds, signaling the start and end with pre-determined eye movements measured with the EOG.
  • Negative Word-of-Mouth by Dissatisfied Consumers: A Pilot Study (Richins, 1983)
  • A pilot study and randomized controlled trial of the mindful self‐compassion program (Neff & Germer, 2013)
  • Pilot study of secondary prevention of posttraumatic stress disorder with propranolol (Pitman et al., 2002)
  • In unstructured observations, the researcher records all relevant behavior without a system. There may be too much to record, and the behaviors recorded may not necessarily be the most important, so the approach is usually used as a pilot study to see what type of behaviors would be recorded.
  • Perspectives of the use of smartphones in travel behavior studies: Findings from a literature review and a pilot study (Gadziński, 2018)

Further Information

  • Lancaster, G. A., Dodd, S., & Williamson, P. R. (2004). Design and analysis of pilot studies: recommendations for good practice. Journal of evaluation in clinical practice, 10 (2), 307-312.
  • Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., … & Goldsmith, C. H. (2010). A tutorial on pilot studies: the what, why and how. BMC Medical Research Methodology, 10 (1), 1-10.
  • Moore, C. G., Carter, R. E., Nietert, P. J., & Stewart, P. W. (2011). Recommendations for planning pilot studies in clinical and translational research. Clinical and translational science, 4 (5), 332-337.

Carassa, R. G., Bettin, P., Fiori, M., & Brancato, R. (1998). Viscocanalostomy: a pilot study. European journal of ophthalmology, 8 (2), 57-61.

Gadziński, J. (2018). Perspectives of the use of smartphones in travel behaviour studies: Findings from a literature review and a pilot study. Transportation Research Part C: Emerging Technologies, 88 , 74-86.

In, J. (2017). Introduction of a pilot study. Korean Journal of Anesthesiology, 70 (6), 601–605. https://doi.org/10.4097/kjae.2017.70.6.601

LaBerge, S., LaMarca, K., & Baird, B. (2018). Pre-sleep treatment with galantamine stimulates lucid dreaming: A double-blind, placebo-controlled, crossover study. PLoS One, 13 (8), e0201246.

Leon, A. C., Davis, L. L., & Kraemer, H. C. (2011). The role and interpretation of pilot studies in clinical research. Journal of psychiatric research, 45 (5), 626–629. https://doi.org/10.1016/j.jpsychires.2010.10.008

Malmqvist, J., Hellberg, K., Möllås, G., Rose, R., & Shevlin, M. (2019). Conducting the Pilot Study: A Neglected Part of the Research Process? Methodological Findings Supporting the Importance of Piloting in Qualitative Research Studies. International Journal of Qualitative Methods. https://doi.org/10.1177/1609406919878341

Neff, K. D., & Germer, C. K. (2013). A pilot study and randomized controlled trial of the mindful self‐compassion program. Journal of Clinical Psychology, 69 (1), 28-44.

Pitman, R. K., Sanders, K. M., Zusman, R. M., Healy, A. R., Cheema, F., Lasko, N. B., … & Orr, S. P. (2002). Pilot study of secondary prevention of posttraumatic stress disorder with propranolol. Biological psychiatry, 51 (2), 189-192.

Richins, M. L. (1983). Negative word-of-mouth by dissatisfied consumers: A pilot study. Journal of Marketing, 47 (1), 68-78.

Sartorius, N., Shapiro, R., Kimura, M., & Barrett, K. (1972). WHO International Pilot Study of Schizophrenia. Psychological medicine, 2 (4), 422-425.

van Teijlingen, E. R., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, (35).




20 May 2024

China's Yangtze fish-rescue plan is a failure, study says

  • Xiaoying You

Xiaoying You is a writer based in London.



A tank of captive-bred Chinese sturgeons about to be released to the Yangtze River. Credit: Xiao Yijiu/Xinhua via Alamy

Five fish species, including the iconic Chinese sturgeon, have gone extinct, or will soon be extinct, because of dams on the Yangtze River in China, according to a paper released on 10 May in Science Advances 1 . The findings have reignited a long-running debate among Chinese scientists about the best way to rescue the species in the Yangtze, with some saying that the analysis is flawed.

The Yangtze River is a mighty 6,300-kilometre-long waterway and a global biodiversity hotspot that runs through 11 Chinese provinces. But over the past 50 years, six major hydropower dams and more than 24,000 smaller hydropower stations have been built in the river’s main stream and branches — with even more on the drawing board.

The dams were built to help generate electricity, provide flood protection and make the river easier to navigate. But dams can block migratory fishes and damage their habitat. To mitigate the effects of the dams, fish-rescue programmes have been in place in various forms since 1982, when the first dam was being constructed.

Huang Zhenli, the deputy engineer-in-chief at the China Institute of Water Resources and Hydropower Research in Beijing, and his colleague Li Haiying developed an analytical tool that models the impact of the Yangtze River dams on its fish populations.

They focused on five iconic species: the Chinese sturgeon ( Acipenser sinensis ), the Yangtze sturgeon ( Acipenser dabryanus ), the Chinese paddlefish ( Psephurus gladius ), the Chinese sucker ( Myxocyprinus asiaticus ) and the largemouth bronze gudgeon ( Coreius guichenoti ).

By the time of the analysis, the paddlefish was already extinct. The Yangtze sturgeons are being kept alive only through captive-breeding programmes. The Chinese sturgeon is critically endangered. The International Union for Conservation of Nature lists the sucker as vulnerable, and the gudgeon as endangered.

The researchers’ modelling found that all five species will be entirely extinct or extinct in the wild by 2030.

David Dudgeon, a retired freshwater ecologist at the University of Hong Kong, says that the study is helpful in identifying the effect of the dams on the five species, particularly the understudied Chinese sucker. “There is nothing much that surprises me about the conclusions of the study,” he says. “It is good to see a well-integrated investigation of these five species.”

However, not all researchers are convinced by the study. Wei Qiwei, a conservation researcher at the Yangtze River Fisheries Research Institute, Chinese Academy of Fishery Sciences, in Wuhan, says that the authors’ work “deserves to be encouraged”, but disagrees with their conclusions.

Wei — who co-authored a 2020 paper 2 that declared the Chinese paddlefish extinct — says the predictions that all species will be extinct or near extinct by 2030 can’t be relied on because the parameters in the analysis are uncertain and difficult to quantify.

Xie Ping, a freshwater ecologist at the Institute of Hydrobiology (IHB) of the Chinese Academy of Sciences in Wuhan, agrees that it might be too soon to draw definitive conclusions from the models’ findings. “More needs to be done to cover more fish species in more geographic regions, so as to validate the effectiveness of the models and to optimize their parameters,” Xie says.

‘Six misjudgements’

The authors blame the five species’ collapse on the dams, and on the lack of specialized passageways, known as fish ladders, that allow migratory fish to bypass them.

“To prevent more migratory fishes from going extinct in China, [its] dam-related fish-rescue programmes must undergo fundamental changes,” Huang says.

As fish numbers continued to decline from the 1980s onwards, China stepped up its efforts to safeguard the ecology and environment of the Yangtze.

In 2021, it commenced a ten-year fishing ban and increased its restocking of the river with young, captive-bred fish.


The Wudongde Hydropower Station on the Jinsha River, an upper stretch of the Yangtze, became operational in 2020 — after the Chinese paddlefish was declared extinct. Credit: CFOTO/Future Publishing via Getty

However, the authors say that it was not enough. They describe “six misjudgements” of these fish-conservation campaigns, including that overfishing is the primary cause of the population declines; and that restocking is a “viable strategy” for mitigating the effects of the dams.

Wei and his team lead the scientific research behind the current conservation plan. He says that the dams’ impacts on fishes exist, but “one cannot ignore other factors”, such as overfishing.

“I believe if the 10-year fishing ban had been introduced to the Yangtze River 30 years earlier, the Chinese paddlefish would not be extinct. Nor would the Chinese sturgeon, the Yangtze sturgeon and the Chinese sucker get so close to extinction,” Wei notes.

As for restocking from captive-bred populations, he describes it as “the most important protection and restoration task” for the Chinese sturgeon and Yangtze sturgeon.

A 2023 study led by IHB researchers 3 found that a 2017 pilot fishing ban introduced to the Chishui River — an upstream tributary of the Yangtze — was “an effective measure to facilitate fish resources recovery”.

Steven Cooke, a biologist specializing in fish ecology and conservation at Carleton University in Ottawa, says that science-based restocking can work “quite well” in cases such as the white sturgeon ( Acipenser transmontanus ) in North America. “But if the habitat is degraded and fish can’t complete their life cycles, then stocked fish may not survive,” Cooke says.

Dudgeon, meanwhile, regards the paper’s criticism of restocking of the Yangtze as being “well-founded”.

“There is absolutely no evidence that sturgeon restocking has enhanced wild populations, despite the release of millions of cultured juveniles … [and] the fact that the practice has continued for many years,” he says.

Fishway or highway

Xie highlights that, for large and long-lived species such as sturgeons, conservation work is “very hard”.

Chinese sturgeons feed and grow near the sea when they are young and migrate more than 3,200 kilometres up the Yangtze to reproduce. “They spent at least 10 to 20 million years adapting to such a cycle,” Xie says, “They cannot adapt to the huge changes caused by humans within these few decades.”

Xie says that fish ladders might not be enough to save the sturgeons. “Fish passages in Europe and North America are mainly designed for relatively small-sized fishes, such as salmon. But sturgeons are mostly large and need a lot of space to swim in rivers,” Xie says. “Less than 2% of sturgeons are able to successfully navigate through the fish passages in dams,” he says.

Dudgeon says that, even when fish ladders work, the stillness of the water in the dam might not provide adequate cues to guide the fish upstream to complete their migration.

On the downstream journey, both adult and juvenile fish have to find a way to navigate the dam, locate the fish ladder and make a safe descent, he adds.


Some countries, such as the United States, France and the United Kingdom, have started to dismantle dams to re-establish migration corridors. When removal is not feasible, or fish ladders are ineffective, Xie and his colleagues suggested in a 2023 paper 4 that building river-like side channels around hydropower dams is “the best way” to restore sturgeon migration routes and provide alternative habitats. Such channels have been used successfully in Russia, Canada and the United States, they noted.

Dudgeon says that, with so many complications, improving the situation for fishes in the Yangtze “will be challenging”.

doi: https://doi.org/10.1038/d41586-024-01444-3

Huang, Z. & Li, H. Sci. Adv. 10, eadi6580 (2024).

Zhang, H. et al. Sci. Total Environ. 710, 136242 (2020).

Liu, F., Wang, Z., Xiz, Z., Wang, J. & Liu, H. Ecol. Process. 12, 51 (2023).

Zhang, L. et al. Proc. Natl Acad. Sci. U.S.A. 120, e2217386120 (2023).



Lifelong cognitive reserve helps maintain late-life cognitive health, 15-year follow-up study suggests

The brain's flexibility and ability to cope with loss of neurons or other lesions is called cognitive reserve. In a 15-year follow-up study, researchers at the Aging Research Center (ARC), Karolinska Institutet, suggest that lifelong cognitive reserve helps maintain late-life cognitive health by delaying cognitive transitions in the preclinical stages of dementia. The findings were recently published in Alzheimer's & Dementia .

"We found evidence that lifelong greater cognitive reserve was linked with reduced risks of late-life transitions from normal cognition to mild cognitive impairment and death, but not with the transition from mild cognitive impairment to dementia," says Chengxuan Qiu, Senior lecturer, and senior author of the study.

Impact of lifelong cognitive reserve

Most previous studies have examined the association of individual indicators for cognitive reserve (e.g., education and leisure activities) with static cognitive conditions such as mild cognitive impairment and dementia.

"Our study suggests that great cognitive reserve could help maintain cognitive health, especially in the preclinical phase of dementia and that cognitive reserve could also benefit survival in older people with cognitive impairment," says Qiu.

These findings could help develop preventive interventions to promote cognitive health and healthy longevity in old age.

The study included 2,631 older residents who were free from dementia and living in central Stockholm. At the beginning of the study, the researchers collected data on various indicators for cognitive reserve (e.g., early-life education, midlife work complexity, and late-life leisure activities).

The participants were then regularly examined over 15 years to determine their cognitive states (e.g., normal function, mild cognitive impairment, and dementia) and survival.

"We used multistate models to investigate the composite measure of cognitive reserve in association with the risk of transitions across different cognitive states and death while considering impact of other factors," explains Serhiy Dekhtyar, Associate Professor, and co-author.

"We plan to assess the impact of cognitive reserve-enhancing measures on maintaining cognitive function within the randomized controlled trials," says Qiu. "We can do this study by using data from our ongoing randomized controlled intervention studies within the Worldwide FINGERS Network (e.g., FINGER and MIND-China trials), a global network for risk reduction and prevention of dementia."

In these intervention studies, social activity, physical activity, and cognitive training, which could enhance cognitive reserve, are part of the intervention measures and cognitive function and dementia are the primary outcomes. In addition, the researchers would like to further explore the mechanisms linking cognitive reserve with cognitive transitions by using blood and imaging biomarkers for brain lesions in their projects.

More information: Yuanjing Li et al, Association of cognitive reserve with transitions across cognitive states and death in older adults: A 15‐year follow‐up study, Alzheimer's & Dementia (2024). DOI: 10.1002/alz.13910

Provided by Karolinska Institutet


Dogs play a key role in veterinary college’s brain cancer trial

  • Marjorielee Christianson

21 May 2024


Group photo of Lucy and the clinical trials team.

Lucy, with her boundless puppy-like energy even at 12 years old, is more than just a pet to Susan Ketcham. She's now part of a research project that could transform the way we treat brain cancer – in both dogs and humans.

This study at Virginia Tech's Virginia-Maryland College of Veterinary Medicine explores an innovative therapy called histotripsy. It's a leap forward from traditional cancer treatments, harnessing the power of focused ultrasound to break down tumors with precision. 

When Lucy began experiencing seizures last July, Ketcham, a clinical nurse specialist, knew something more serious was the cause. The diagnosis of a brain tumor was devastating, but Ketcham was determined to explore all treatment options available. She discovered the histotripsy trial during her search and quickly reached out.

"Being in human medicine myself," said Ketcham. "I work in operating rooms and am very familiar with focused ultrasound, so I was eager to learn more."

Collaborative mission, translational impact

The trial is led by John Rossmeisl , the Dr. and Mrs. Dorsey Taylor Mahin Professor of Neurology and Neurosurgery, and Rell Parker , an iTHRIV scholar and assistant professor of neurology and neurosurgery.

Also on the team is Lauren Ruger , a postdoctoral associate in Eli Vlaisavljevich’s lab in the Department of Biomedical Engineering and Mechanics where histotripsy is extensively researched. She's adapting the equipment used in the study to make it safe and effective for their canine patients.

“I had wanted to be a veterinarian when I was younger before deciding to become an engineer,” said Ruger. “So I love having the opportunity to use my skills as an engineer to influence animal health.”

This trial offers an essential stepping stone in developing less invasive treatment options for brain cancer and is supported by the Focused Ultrasound Foundation and the Canine Health Foundation, highlighting the widespread commitment to results across species.

Lauren Ruger posing with medical equipment.

Hope for histotripsy

“Histotripsy uses acoustic energy, or sound waves, to modify tissue,” said Rossmeisl. “The intent is to cause a mechanical disruption of the tissue – killing cancer cells." 

The technology was developed by researchers at the University of Michigan in the early 2000s. 

The advantage is precision. Unlike traditional surgery, histotripsy can focus its impact on the tumor itself. "We could potentially treat these hard-to-reach brain tumors we normally can’t access with traditional surgery,” said Parker.

"We really don't have great ways to treat brain cancers in patients,” said Rossmeisl. “Even when you do surgery, radiation, or chemotherapy alone or in combination, usually, you're not creating a cure." 

However, there is hope that histotripsy could activate the body's immune response to attack cancer cells, a phenomenon called the abscopal effect. Clinicians also see fewer side effects compared with traditional treatment options.

About the procedure

For now, the study still involves surgery to access brain tumors, which remains the gold standard of care for this type of diagnosis. This allows for direct targeting of the tumor with the histotripsy transducer, delivering focused sound waves for precise treatment.

“When we do the surgery, we can see the tumor via ultrasound,” Parker said. “We can see that we're treating the appropriate cells, and then we also do an MRI to ensure that we've targeted the right area.” 

After the histotripsy treatment, surgeons carefully remove the treated tumor. This tissue provides crucial insights into the technique's effect on cancer cells, helping researchers refine the technology for future applications.

“It gives us the advantage of being able to look at the tissue that's been broken down to ensure that we're getting the desired effect from the histotripsy therapy,” said Ruger.

While the science is complex, the stories of patients like Lucy are reminders of why this work matters. "The recovery was quick, the incision was small," Ketcham said. "She's back to her playful self, and knowing she's helped advance science and technology is amazing."

Parker added: "We’re happy to say that the procedure has been safe for our patients, and we've been able to treat them appropriately."

Looking toward the future of treatment

A long-term goal of this study is to develop a completely noninvasive treatment that would eliminate the need for surgery. The team is in the early stages of exploring this possibility and cites several challenges to making such a treatment widely available.

“Transmitting ultrasound through the bones of the skull is very difficult,” said Ruger. “And then accurately focusing it in only the areas you want to treat with histotripsy adds another layer of complexity.”

If the technique could be successfully applied through the skin, however, explained Rossmeisl, "That would be a paradigm changer. We would make surgically treating tumors a lot more widely available."

"Even though I love neurosurgery," he added, "anytime I can do something that doesn't require putting the patient through a complex and invasive procedure and get them home quicker, that's always a good thing."

To learn more about eligibility criteria or to enroll your dog in the trial, contact John Rossmeisl at [email protected] or 540-231-4621, or Mindy Quigley, clinical trials coordinator, at [email protected] or 540-231-1363.

Andrew Mann

540-231-9005

  • Biomedical Engineering and Mechanics
  • Blacksburg, Va.
  • Brain Cancer
  • Cancer Research
  • College of Engineering
  • Faculty Excellence
  • Good Health and Well-Being
  • Industry, Innovation, and Infrastructure
  • Office of Postdoctoral Affairs
  • Small Animal Clinical Sciences
  • Success Story
  • Top News - Virginia-Maryland College of Veterinary Medicine
  • University Distinguished Professor
  • Veterinary Clinical Research Office
  • Veterinary Teaching Hospital
  • Virginia-Maryland College of Veterinary Medicine

Express & Star

New immunotherapy could treat cancer in the bone, study suggests

Developed by UCL researchers, the treatment has shown promising results against a bone cancer called osteosarcoma.

A new type of immunotherapy could help to treat bone cancer, new research suggests.

Osteosarcoma is relatively rare, with around 160 new cases each year in the UK, but is the most common bone cancer in teenagers.

More than 150,000 people suffer from cancer that has spread to the bones.

The study in mice found that using a small subset of immune cells, called gamma-delta T cells, could provide an efficient and cost-effective solution to the cancer – which is often resistant to chemotherapy.

These cells are a less well-known type of immune cell that can be made from healthy donor immune cells.

They can safely be given from one person to another, without the risk of potentially life-threatening graft-versus-host disease.

In order to manufacture the cells, blood is taken from a healthy donor, and the cells are then engineered to release tumour targeting antibodies and immune stimulating chemicals, before being injected into the patient with cancer in the bone.

This new treatment delivery platform is called OPS-gamma-delta T.

Lead author, Dr Jonathan Fisher, UCL Great Ormond Street Institute of Child Health and UCLH, said: “Current immunotherapies such as CAR-T cells (another type of immunotherapy using genetically modified immune cells) use the patient’s own immune cells and engineer them to improve their cancer-killing properties.

“However, this is expensive and takes time, during which a patient’s disease can get worse.

"And, while it is an effective treatment for leukaemia, it has been found to be less effective against solid cancers.

“An alternative is to use an ‘off the shelf’ treatment made from healthy donor immune cells, but in order to do this care must be taken to avoid graft-versus-host disease, where the donor immune cells attack the patient’s body.

“The Fisher Lab discovered a way of engineering the previously under-utilised gamma-delta T cells, which have been clinically proven to be safe when made from unrelated donor blood.

“This offers a more cost-effective alternative to current per-patient manufacturing.”

The researchers tested the treatment on mouse models with bone cancer and found that OPS-gamma-delta T cells were better than conventional immunotherapy at controlling osteosarcoma growth.

The OPS-gamma-delta T cells were most effective when partnered with a bone sensitising drug – which has previously been used on its own to strengthen weak bones in patients with cancer.

This treatment prevented the tumours from growing in the mice that received it – leaving them healthy three months later, the study found.

Dr Fisher said: “Thousands upon thousands of people have cancer that spreads to the bones.

“There is currently very little that can be done to cure these patients. However, this is an exciting step forward in finding a potential new treatment.

“Our hope is that not only will this treatment work for osteosarcoma but also other adult cancers.”

Immunotherapy uses the immune system to fight cancer by helping the immune system recognise and attack cancer cells.

Because cancer that starts in or spreads to the bones is so hard to treat, it is a leading cause of cancer-related death.

The findings are published in the Science Translational Medicine journal.

ScienceDaily

Health and economic benefits of breastfeeding quantified

Among half a million Scottish infants, those exclusively breastfed were less likely to use healthcare services and incurred lower costs to the healthcare system.

Breastmilk can promote equitable child health and save healthcare costs by reducing childhood illnesses and healthcare utilization in the early years, according to a new study published this week in the open-access journal PLOS ONE by Tomi Ajetunmobi of the Glasgow Centre for Population Health, Scotland, and colleagues.

Breastfeeding has previously been found to promote development and prevent disease among infants. In Scotland -- as well as other developed countries -- low rates of breastfeeding in more economically deprived areas are thought to contribute to inequalities in early childhood health. However, government policies to promote child health have made little progress and more evidence on the effectiveness of interventions may be needed.

In the new study, researchers used administrative datasets on 502,948 babies born in Scotland between 1997 and 2009. Data were available on whether or not infants were breastfed during the first 6-8 weeks, the occurrence of ten common childhood conditions from birth to 27 months, and the details of hospital admissions, primary care consultations and prescriptions.

Among all infants included in the study, 27% were exclusively breastfed, 9% mixed fed and 64% formula fed during the first 6-8 weeks of life. The rates of exclusively breastfed infants ranged from 45% in the least deprived areas to 13% in the most deprived areas.

The researchers found that, within each quintile of deprivation, exclusively breastfed infants used fewer healthcare services and incurred lower costs than infants fed any formula milk. On average, breastfed infants had lower hospital-care costs per admission (£42 vs. £79 for formula-fed infants) in the first six months of life, and fewer GP consultations (1.72, 95% CI: 1.66–1.79) than formula-fed infants (1.92, 95% CI: 1.88–1.94). At least £10 million of healthcare costs could have been avoided if all formula-fed infants had instead been exclusively breastfed for the first 6-8 weeks of life, the researchers calculated.
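The shape of that avoided-cost estimate can be sketched with the figures quoted above. Note this is a toy illustration only: the cohort size, feeding shares, and per-admission costs come from the article, but the admissions-per-infant figure is a hypothetical assumption, so the total is illustrative rather than the study's actual calculation.

```python
# Toy sketch of the avoided-cost arithmetic. Figures from the article,
# except admissions_per_infant, which is a HYPOTHETICAL assumption.

cohort = 502_948            # infants born in Scotland, 1997-2009
formula_share = 0.64        # share formula fed in the first 6-8 weeks
cost_breastfed = 42         # £ per hospital admission, first six months
cost_formula = 79           # £ per hospital admission, first six months
admissions_per_infant = 1.0 # hypothetical illustrative figure

formula_fed = cohort * formula_share
saving_per_admission = cost_formula - cost_breastfed  # £37
avoided = formula_fed * admissions_per_infant * saving_per_admission

print(f"£{avoided / 1e6:.1f} million")  # prints £11.9 million under these assumptions
```

Under this crude assumption the total lands in the same range as the study's "at least £10 million" figure, though the researchers' cost analysis is of course far more detailed.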

The authors conclude that breastfeeding has a significant health and economic benefit and that increasing breastfeeding rates in the most deprived areas could contribute to the narrowing of inequalities in the early years.

  • Breastfeeding
  • Infant's Health
  • Today's Healthcare
  • Health Policy
  • Public Health
  • Disaster Plan
  • Poverty and Learning
  • Education and Employment
  • Early childhood education
  • Upper respiratory tract infection
  • Social inclusion
  • Health science
  • Evidence-based medicine

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

Journal Reference:

  • Omotomilola Ajetunmobi, Emma McIntosh, Diane Stockton, David Tappin, Bruce Whyte. Levelling up health in the early years: A cost-analysis of infant feeding and healthcare. PLOS ONE, 2024; 19(5): e0300267. DOI: 10.1371/journal.pone.0300267

Named for a fire god, radioactive element at ORNL could now 'rewrite chemistry textbooks'

Nearly 80 years after scientists at Oak Ridge National Laboratory discovered an extremely rare radioactive element called promethium, a team at the lab published a landmark study on the subject that ORNL said could "rewrite chemistry textbooks."

Research published in Nature on May 22 marks the first time scientists have uncovered key characteristics of the element, though the study could have implications far beyond promethium (No. 61 on the periodic table).

One of the most critical discoveries from the research is the bond length between promethium and surrounding atoms, a previously unknown measurement that unlocks some of the element's properties.

At any given time, only about one pound of promethium exists on Earth. Promethium is used mostly for research, but also in nuclear batteries used for pacemakers and space exploration.

The new research could help scientists expand these applications and potentially discover new ones for an element that's still relatively unexplored.

ORNL is the only producer of promethium-147 in the U.S. Its unique capabilities come from the High Flux Isotope Reactor, one of the world's most powerful research nuclear reactors. The reactor bombards target materials with a concentrated beam of neutrons to create unique isotopes.

Among those materials are plutonium-238, produced for generators on NASA space missions. There's also californium-252, used for starting up nuclear reactors.

The High Flux Isotope Reactor, operational for nearly 60 years, is one of the few facilities in the world that can create manmade elements heavier than uranium.

Promethium was kept an ORNL secret until after Manhattan Project

Promethium was first produced as a byproduct of uranium fission at the lab's Graphite Reactor in 1945 by Charles Coryell, Jacob A. Marinsky and Lawrence E. Glendenin.

The scientists named the new element for Prometheus, a Titan and the god of fire in Greek mythology who disobeyed the gods of Olympus by bringing fire to humans. The scientists kept the discovery of promethium secret until years after World War II ended and Oak Ridge's scientific mission moved beyond the Manhattan Project .

Their discovery of promethium filled a gap in the periodic table. Every other element in the group known as lanthanides had already been discovered and studied.

Lanthanides are the 15 elements from No. 57 lanthanum to No. 71 lutetium. They are rare earth elements that are essential to modern technologies such as smartphones, laptops, car batteries, lasers and some cancer treatments.

ORNL research increases efficiency with hard-to-study promethium

For years, studies on lanthanides have not included promethium, in part because of how rare and unstable it is.

The isotope produced by ORNL researchers, promethium-147, has a half-life of just 2.6 years. That means by the time scientists have actually produced the radioactive material, it has already started to decay into a different element.

"It is quite an undertaking to prepare to make a reasonable amount of promethium, especially in a chemically pure form," Ilja Popovs, a staff scientist who co-led the study, told Knox News. "Producing and handling sufficient quantities of any isotope of promethium is fairly challenging and requires special facilities and definitely expertise."

It took scientists using multiple world-leading facilities four months to isolate and purify the sample of promethium.
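The practical effect of that short half-life on a four-month purification campaign follows from the textbook radioactive-decay law. The half-life and the four-month figure come from the article; the formula itself is standard physics, not part of the study:

```python
# Fraction of a promethium-147 sample remaining after the four months
# it took to isolate and purify it, using the standard decay law
# N(t) = N0 * 2^(-t / half_life).

half_life_years = 2.6    # promethium-147 half-life (from the article)
elapsed_years = 4 / 12   # four months of isolation and purification

remaining = 2 ** (-elapsed_years / half_life_years)

print(f"{remaining:.3f}")  # prints 0.915
```

So even before any experiments begin, roughly 8–9 percent of the sample has already decayed away, which is part of why working with the element is so difficult.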

Popovs, along with Alex Ivanov and Santa Jansone-Popova, led a team of 18 authors on the study. The group used ORNL's High Flux Isotope Reactor and hot cells to protect them from radiation. The lab's Summit supercomputer, one of the top 10 fastest computers in the world, also was used in the research.

New promethium discoveries spill into tech

The scientists made new discoveries about lanthanide contraction, a phenomenon in which the elements' atoms get smaller as their atomic number increases, changing their properties.

The team uncovered that the shrinking slows down considerably along the lanthanide series after promethium.

This new discovery could increase efficiency in separating lanthanides, a critical process for using the elements in modern devices.

"Figuring out new and better ways that allow more efficient separation of lanthanides is extremely important, and quite a few scientists and research groups are working in that field," Popovs said. "We hope that we're gonna add an additional piece of information that will allow us to design better processes."

ORNL has legacy of discovering elements

ORNL is credited with the discovery of three elements: promethium in 1945, moscovium in 2003 and tennessine in 2010. Moscovium and tennessine, developed in partnership with a Russian lab, were verified as new elements by the International Union of Pure and Applied Chemistry in 2015.

Overall, the lab has played a critical role in the discovery of nine elements. The other six are rutherfordium, dubnium, seaborgium, flerovium, livermorium and oganesson, the last element on the current periodic table.

For Ivanov, one of the scientists who led the study, carrying on the lab's long legacy as a leader in scientific innovation is among the most rewarding parts of the research. ORNL, managed by UT-Battelle, is the Department of Energy's largest science and technology lab.

Daniel Dassow is a growth and development reporter focused on technology and energy. Phone 423-637-0878. Email  [email protected] .
