
Chapter 26: Rigour

Darshini Ayton

Learning outcomes

Upon completion of this chapter, you should be able to:

  • Understand the concepts of rigour and trustworthiness in qualitative research.
  • Describe strategies for dependability, credibility, confirmability and transferability in qualitative research.
  • Define reflexivity and describe types of reflexivity.

What is rigour?

In qualitative research, rigour, or trustworthiness, refers to how researchers demonstrate the quality of their research. 1, 2 Rigour is an umbrella term for the strategies and approaches that recognise the multiple realities influencing qualitative research: those of the researcher during data collection and analysis, and those of the participants. The research process is also shaped by multiple elements, including the researcher's skills, the social and research environment and the community setting. 2

Research is considered rigorous or trustworthy when members of the research community are confident in the study’s methods, the data and its interpretation. 3 As mentioned in Chapters 1 and 2, quantitative and qualitative research are founded on different research paradigms; hence, quality cannot be addressed in the same way for both types of study. Table 26.1 compares the approaches of quantitative and qualitative research to ensuring quality.

Table 26.1: Comparison of quantitative and qualitative approaches to ensuring quality in research

Dependability (qualitative): Consistency in the research and the ability for another researcher to achieve the same results with the same research process. Dependability is demonstrated through detailing the changes and context of the research setting, including any changes that occurred and an explanation of how these changes may have affected the research process.
Reliability (quantitative counterpart): The extent to which results are consistent over time and an accurate representation of the study population, and an assessment of whether the results of a study can be reproduced under a similar methodology.

Credibility (qualitative): Confidence in the truth of the findings.
Validity (quantitative counterpart): An assessment of whether the research measures what it was meant to measure, or how truthful the results are.

Confirmability (qualitative): The extent to which the findings of a study are shaped by the respondents and not by researcher bias, motivation or interest.
Objectivity (quantitative counterpart): Strategies to reduce bias in research.

Transferability (qualitative): The provision of sufficient information about the context and process of the research to enable another person to determine whether their context is similar and, therefore, whether the findings can be applied to their setting.
Generalisability (quantitative counterpart): The extent to which the findings from the research sample can be applied to the broader population.

Authenticity (qualitative): Demonstration of the range of participant realities, with rich and detailed descriptions of these realities using quotes and narratives. (There is no direct quantitative counterpart.)

Below is an overview of the main approaches to rigour in qualitative research. For each of the approaches, examples of how rigour was demonstrated are provided from the author’s PhD thesis.

Approaches to dependability

Dependability requires the researcher to provide an account of changes to the research process and setting. 3 The main approach to dependability is an audit trail.

  • Audit trail – the researcher records or takes notes on the conduct of the research and the process of reaching conclusions from the data. The audit trail includes information on the data collection and data analysis, including decision-making and interpretations of the data that influence the study’s results. 8, 9
The interview questions for this study evolved as the study progressed, and accordingly, the process was iterative. I spent 12 months collecting data, and as my understanding and responsiveness to my participants and to the culture and ethos of the various churches developed, so did my line of questioning. For example, in the early interviews for phase 2, I included questions regarding the qualifications a church leader might look for in hiring someone to undertake health promotion activities. This question was dropped after the first couple of interviews, as it was clear that church leaders did not necessarily view their activities as health promoting and therefore did not perceive the relevance of this question. By ‘being church’, they were health promoting, and therefore activities that were health promoting were not easily separated from other activities that were part of the core mission of the church 10 (pp93–4)

Approaches to credibility

Credibility requires the researcher to demonstrate the truth or confidence in the findings. The main approaches to credibility include triangulation, prolonged engagement, persistent observation, negative case analysis and member checking. 3

  • Triangulation – the assembly of data and interpretations from multiple methods (methods triangulation), researchers (researcher triangulation), theories (theory triangulation) and data sources (different participant groups). 9 Refer to Chapter 28 for a detailed discussion of this process.
  • Prolonged engagement – the requirement for researchers to spend sufficient time with participants and/or within the research context to familiarise themselves with the research setting, to build trust and rapport with participants and to recognise and correct any misinformation. 9
Prolonged engagement with churches was also achieved through the case study phase as the ten case study churches were involved in more than one phase of data collection. These ten churches were the case studies in which significant time was spent conducting interviews and focus groups, and attending activities and programs. Subsequently, there were many instances where I interacted with the same people on more than one occasion, thereby facilitating the development of interactive and deeper relationships with participants 10 (pp94–5)
  • Persistent observation – the identification of characteristics and elements that are most relevant to the problem or issue under study, and upon which the research will focus in detail. 9
In the following chapters, I present my analysis of the world of churches in which I was immersed as I conducted fieldwork. I describe the processes of church practice and action, and explore how this can be conceptualised into health promotion action 10 (p97)
  • Negative case analysis – the process of finding and discussing data that contradicts the study’s main findings. Negative case analysis demonstrates that nuance and granularity in perspectives of both shared and divergent opinions have been examined, enhancing the quality of the interpretation of the data.
Although I did not use negative case selection, the Catholic churches in this study acted as examples of the ‘low engagement’ 10 (p97)
  • Member checking – the presentation of data analysis, interpretations and conclusions of the research to members of the participant groups. This enables participants or people with shared identity with the participants to provide their perspectives on the research. 9
Throughout my candidature – during data collection and analysis, and in the construction of my results chapters – I engaged with a number of Christians, both paid church staff members and volunteers, to test my thoughts and concepts. These people were not participants in the study, but they were embedded in the cultural and social context of churches in Victoria. They were able to challenge and also affirm my thinking and so contributed to a process of member checking 10 (p96)

Approaches to confirmability

Confirmability is demonstrated by grounding the results in the data from participants. 3 This can be achieved through the use of quotes, specifying the number of participants and data sources and providing details of the data collection.

  • Quotes from participants are used to demonstrate that the themes are generated from the data. The results section of the thesis chapters commences with a story based on the field notes or recordings, with extensive quotes from participants presented throughout. 10
  • The number of participants in the study provides the context for where the data for the results and interpretation is ‘sourced’ from. Table 26.2, reproduced with permission from the author’s thesis, details the data sources for the project. This also helps to establish how triangulation across data sources and methods was achieved.
  • Details of data collection – Table 26.2 provides detailed information about the processes of data collection, including dates and locations, although the duration of each research encounter was not specified.

Table 26.2: Data sources for the author's PhD research project

Phase 1 – Exploration (April–October 2009; January–March 2010)
Data sources: documents; qualitative interviews.
Data collection: annual reports of funding agencies, local government councils and church-affiliated organisations, and strategic plans of primary care partnerships; in-depth interviews with local church leaders and individuals from church-affiliated organisations in Victoria.
Participants: 5 participants from local churches; 5 participants from church-affiliated organisations.

Phase 2 – Description (April–June 2010)
Data source: qualitative telephone interviews.
Data collection: qualitative semi-structured telephone interviews with church leaders of 25 Victorian churches.
Participants: 25 church ministers.

Phase 3 – Case studies (July–December 2010)
Data sources: qualitative interviews; focus groups; observation; document analysis.
Data collection: face-to-face qualitative in-depth interviews with the church staff and/or key volunteers of the 10 case study churches; focus groups with church volunteers; direct observation of case study churches in their conduct of health promotion activities; annual reports and/or church newsletters.
Participants and data: 37 interview participants; 10 focus groups; 17 direct observations; 12 document analyses.

Approaches to transferability

To enable the transferability of qualitative research, researchers need to provide information about the context and the setting. A key approach for transferability is thick description. 6

  • Thick description – detailed explanations and descriptions of the research questions are provided, including about the research setting, contextual factors and changes to the research setting. 9
I chose to include the Catholic Church because it is the largest Christian group in Australia and is an example of a traditional church. The Protestant group was represented through the Uniting, Anglican, Baptist and Church of Christ denominations. The Uniting Church denomination is unique to Australia and was formed in 1977 through the merging of the Methodist, Presbyterian and Congregationalist denominations. The Church of Christ denomination was chosen to represent a contemporary, less hierarchical denomination in comparison to the other Protestant denominations. The last group, the Salvation Army, was chosen because of its high profile in social justice and social welfare, therefore offering different perspectives on the role and activities of the church in health promotion 10 (pp82–3)

What is reflexivity?

Reflexivity is the process in which researchers engage to explore and explain how their subjectivity (or bias) has influenced the research. 12 Researchers engage in reflexive practices to ensure and demonstrate rigour, quality and, ultimately, trustworthiness in their research. 13 The researcher is the instrument of data collection and data analysis; hence, awareness of what has influenced their approach to and conduct of the research – and being able to articulate these influences – is vital in the creation of knowledge. One important element is researcher positionality (see Chapter 27), which acknowledges the characteristics, interests, beliefs and personal experiences of the researcher and how these influence the research process. Table 26.3 outlines different types of reflexivity, with examples from the author’s thesis.

Table 26.3: Types of reflexivity

Reflexivity type Examples from the author’s thesis

Personal – reflections on the researcher's personal expectations, assumptions, biases and reactions to the research contexts, participants and data. ‘It was with hesitant steps that I entered the field for my research. I was known in some of these church communities, and my background and experience in churches was what drove me to do this research. As mentioned above, I identified as an insider to this research as I shared experiences, religious affiliation and language with the research participants. In undertaking this research, I was required to be true in what I captured and interpreted, and reflexive in acknowledging my own biases that may have coloured my approach and interpretations.’
Interpersonal – reflections on how relationships influence the research process. ‘My time in the field was peppered with statements such as “Oh, you know this person?” or “I don’t need to explain this church terminology to you.” I identified myself as an insider and by positioning myself in this way my participants treated me as someone who shared their beliefs.’
Methodological – reflections on how decisions were made regarding the study’s methods and methodological approach, and the implications of these. ‘I sought to understand what it meant to “be church” and how this played out in health promoting practices in their local community...The church is the social context. The aim of the inquiry is to understand and re-examine the constructions that the participants and I, as the researcher, hold in relation to the local church as a setting and partner for health promotion.’
Contextual – reflections on how the research context shapes and influences the research process. ‘This experience from the field involved a shared experience of a church service, however during this process the participants and I became acutely aware of our differences in social position. I was attending my own church service afterwards and had dressed according to the middle class norms of this service. The attendees at Redgum Church of Christ were experiencing poverty and health issues and therefore their dress and manner reflected their circumstances in life. Despite being an insider in some aspects (religious background, familiarity with church culture and practices), there were social facets that were not shared with my participants including generational, socio-economic and ethnic differences.’

The quality of qualitative research is measured through the rigour or trustworthiness of the research, demonstrated through a range of strategies in the processes of data collection, analysis, reporting and reflexivity.

  1. Chowdhury IA. Issue of quality in qualitative research: an overview. Innovative Issues and Approaches in Social Sciences. 2015;8(1):142-162. doi:10.12959/issn.1855-0541.IIASS-2015-no1-art09
  2. Cypress BS. Rigor or reliability and validity in qualitative research: perspectives, strategies, reconceptualization, and recommendations. Dimens Crit Care Nurs. 2017;36(4):253-263. doi:10.1097/DCC.0000000000000253
  3. Connelly LM. Trustworthiness in qualitative research. Medsurg Nurs. 2016;25(6):435-436.
  4. Golafshani N. Understanding reliability and validity in qualitative research. Qual Rep. 2003;8(4):597-607. Accessed September 18, 2023. https://nsuworks.nova.edu/tqr/vol8/iss4/6/
  5. Yilmaz K. Comparison of quantitative and qualitative research traditions: epistemological, theoretical, and methodological differences. Eur J Educ. 2013;48(2):311-325. doi:10.1111/ejed.12014
  6. Shenton AK. Strategies for ensuring trustworthiness in qualitative research projects. Education for Information. 2004;22:63-75. Accessed September 18, 2023. https://content.iospress.com/articles/education-for-information/efi00778
  7. Varpio L, O’Brien B, Rees CE, Monrouxe L, Ajjawi R, Paradis E. The applicability of generalisability and bias to health professions education’s research. Med Educ. 2021;55(2):167-173. doi:10.1111/medu.14348
  8. Carcary M. The research audit trail: methodological guidance for application in practice. Electronic Journal of Business Research Methods. 2020;18(2):166-177. doi:10.34190/JBRM.18.2.008
  9. Korstjens I, Moser A. Series: practical guidance to qualitative research. Part 4: trustworthiness and publishing. Eur J Gen Pract. 2018;24(1):120-124. doi:10.1080/13814788.2017.1375092
  10. Ayton D. ‘From places of despair to spaces of hope’ – the local church and health promotion in Victoria. PhD thesis. Monash University; 2013. https://figshare.com/articles/thesis/_From_places_of_despair_to_spaces_of_hope_-_the_local_church_and_health_promotion_in_Victoria/4628308/1
  11. Hanson A. Negative case analysis. In: Matthes J, ed. The International Encyclopedia of Communication Research Methods. John Wiley & Sons, Inc.; 2017. doi:10.1002/9781118901731.iecrm0165
  12. Olmos-Vega FM. A practical guide to reflexivity in qualitative research: AMEE Guide No. 149. Med Teach. 2023;45(3):241-251. doi:10.1080/0142159X.2022.2057287
  13. Dodgson JE. Reflexivity in qualitative research. J Hum Lact. 2019;35(2):220-222. doi:10.1177/08903344198309

Qualitative Research – a practical guide for health and social care researchers and practitioners Copyright © 2023 by Darshini Ayton is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.


20.1 Introduction to qualitative rigor

We hear a lot about fake news these days. Fake news concerns the quality of the journalism we consume, and it raises questions such as: Does it contain misinformation? Is it skewed or biased in how it portrays stories? Does it leave out certain facts while inflating others? If we take this news at face value, our opinions and actions may be intentionally manipulated by poor-quality information. So how do we avoid or challenge this? The oversimplified answer is that we find ways to check for quality. While this isn’t a chapter dedicated to fake news, it does offer an important comparison for the focus of this chapter: rigor in qualitative research. Rigor is concerned with the quality of the research we are designing and consuming. While I devote a considerable amount of time in my clinical class to the importance of adopting a non-judgmental stance in practice, that is not the case here; I want you to be judgmental, critical thinkers about research! As a social worker who will hopefully be producing research (we need you!) and will definitely be consuming research, you need to be able to differentiate good science from rubbish science. Rigor will help you do this.

This chapter will introduce you to the concept of rigor and, specifically, what it looks like in qualitative research. We will begin by considering how rigor relates to issues of ethics and how thoughtfully involving community partners in our research can add additional dimensions in planning for rigor. Next, we will look at rigor in how we capture and manage qualitative data, essentially helping to ensure that we have quality raw data to work with for our study. We will then devote time to discussing how researchers, as human instruments, need to maintain accountability throughout the research process, and finally we will examine tools that encourage this accountability and how they can be integrated into your research design. Our hope is that by the end of this chapter you will be able to identify some of the hallmarks of quality in qualitative research and, if you are designing a qualitative research proposal, that you will consider how to build these into your design.

19.1 Introduction to qualitative rigor

Learning objectives.

Learners will be able to…

  • Identify the role of rigor in qualitative research and important concepts related to qualitative rigor
  • Discuss why rigor is an important consideration when conducting, critiquing and consuming qualitative research
  • Differentiate between quality in quantitative and qualitative research studies

In Chapter 11 we talked about quality in quantitative studies, building our discussion around concepts like reliability and validity. With qualitative studies, we generally think about quality in terms of the concept of rigor. The difference between quality in quantitative research and qualitative research extends beyond the type of data (numbers vs. words/sounds/images). If you sneak a peek all the way back to Chapter 7, you'll recall that we discussed the idea of different paradigms, or fundamental frameworks for how we can think about the world. These frameworks value different kinds of knowledge, arrive at knowledge in different ways, and evaluate the quality of knowledge with different criteria. These differences are essential in differentiating qualitative and quantitative work.

Quantitative research generally falls under a positivist paradigm, seeking to uncover knowledge that holds true across larger groups of people. To accomplish this, we need to have tools like reliability and validity to help produce internally consistent and externally generalizable findings (i.e. was our study design dependable and do our findings hold true across our population).

In contrast, qualitative research is generally considered to fall into an alternative paradigm (other than positivist), such as the interpretive paradigm which is focused on the subjective experiences of individuals and their unique perspectives. To accomplish this, we are often asking participants to expand on their ideas and interpretations. A positivist tradition requires the information collected to be very focused and discretely defined (i.e. closed questions with prescribed categories). With qualitative studies, we need to look across unique experiences reflected in the data and determine how these experiences develop a richer understanding of the phenomenon we are studying, often across numerous perspectives.

Rigor is a concept that reflects the quality of the process used in capturing, managing, and analyzing our data as we develop this rich understanding. Rigor helps to establish standards through which qualitative research is critiqued and judged, both by the scientific community and by the practitioner community.

For the scientific community, people who review qualitative research studies submitted for publication in scientific journals or for presentations at conferences will specifically look for indications of rigor, such as the tools we will discuss in this chapter. This confirms for them that the researcher(s) put safeguards in place to ensure that the research took place systematically and that consumers can be relatively confident that the findings are not fabricated and can be directly connected back to the primary sources of data that was gathered or the secondary data that was analyzed.

As a note, as we critique the research of others or develop our own studies, we also need to recognize the limitations of rigor. No research design is flawless, and every researcher faces limitations and constraints. We aren’t looking for a researcher to adopt every tool we discuss below in their design. In fact, one of my mentors speaks explicitly about “misplaced rigor”: using techniques to support rigor that don’t really fit what you are trying to accomplish with your research design. Suffice it to say that we can go overboard in the area of rigor, and it might not serve our study’s best interest. As a consumer or evaluator of research, you want to look for steps taken to reflect quality and transparency throughout the research process, but they should fit within the overall framework of the study and what it is trying to accomplish.

From the perspective of a practitioner, we also need to be acutely concerned with the quality of research. Social work has made a commitment, outlined in our Code of Ethics (NASW, 2017), to competent practice in service to our clients based on “empirically based knowledge” (subsection 4.01). When I think about my own care providers, I want them to be using “good” research—research that we can be confident was conducted in a credible way and whose findings are honestly and clearly represented. Don’t our clients deserve the same from us?

As providers, we will be looking to qualitative research studies to provide us with information that helps us better understand our clients, their experiences, and the problems they encounter. As such, we need to look for research that accurately represents:

  • Who is participating in the study
  • Under what circumstances the study is being conducted
  • What the research is attempting to determine

Further, we want to ensure that:

  • Findings are presented accurately and reflect what was shared by participants (raw data)
  • A reasonably good explanation is presented of how the researcher got from the raw data to their findings
  • The researcher adequately considered and accounted for their potential influence on the research process

As we talk about different tools we can use to help establish qualitative rigor, I will try to point out tips for what to look for as you read qualitative studies. While rigor can’t “prove” quality, it can demonstrate steps taken that reflect thoughtfulness and attention on the part of the researcher(s). The American Psychological Association has published guidance on reviewing qualitative research manuscripts. It’s a bit beyond the level of critiquing that I would expect from a beginning qualitative research student; however, it does provide a really nice overview of this process. Even if you aren’t familiar with all the terms, I think it can be helpful in giving an overview of the general thought process that should take place.

To begin breaking down how to think about rigor, I find it helpful to have a framework for understanding the different concepts that support or are associated with rigor. Lincoln and Guba (1985) have suggested such a framework for thinking about qualitative rigor, one that has widely contributed to the standards often employed for qualitative projects. The overarching concept around which this framework is centered is trustworthiness. Trustworthiness reflects how much stock we should put in a given qualitative study—is it really worth our time, headspace, and intellectual curiosity? A study that isn’t trustworthy suggests poor quality resulting from inadequate forethought, planning, and attention to detail in how the study was carried out, and we should have little confidence in the findings of such a study.

According to Lincoln and Guba (1985),[1] trustworthiness is grounded in four key ideas, each with related questions to help you conceptualize how they relate to your study. Each of these concepts is discussed below, with some considerations to help you compare and contrast these ideas with more positivist or quantitative constructs of research quality.

Truth value

You have already been introduced to the concept of internal validity. As a reminder, establishing internal validity is a way to ensure that the change we observe in the dependent variable is the result of the variation in our independent variable—did we actually design a study that truly tests our hypothesis? In most qualitative studies we don’t have hypotheses or independent and dependent variables, but we do still want to design a study where our audience (and we ourselves) can be relatively sure that we, as researchers, arrived at our findings through a systematic and scientific process, and that those findings can be clearly linked back to the data we used and not to some fabrication or falsification of that data; in other words, the truth value of the research process and its findings. We want to give our readers confidence that we didn’t just make up our findings or “see what we wanted to see”.

Applicability

Applicability (or transferability) concerns the potential for research findings to apply to other situations or people beyond the study itself. For readers to judge this, we need to clearly describe:

  • who we were studying
  • how we went about studying them
  • what we found

Consistency

Consistency (or dependability) is the idea that we use a systematic (and potentially repeatable) process when conducting our research.

These concepts reflect a set of standards that help to determine the integrity of qualitative studies. At the end of this chapter you will be introduced to a range of tools to help support or reflect these various standards in qualitative research. Because different qualitative designs (e.g. phenomenology, narrative, ethnography), which you will learn more about in Chapter 22, emphasize or prioritize different aspects of quality, certain tools will be more appropriate for certain designs. Since this chapter is intended to give you a general overview of rigor in qualitative studies, exploring additional resources will be necessary to best understand which of these concepts are prioritized in each type of design and which tools best support them.

Key Takeaways

  • Qualitative research is generally conducted within an interpretivist paradigm. This differs from the post-positivist paradigm in which most quantitative research originates. This fundamental difference means that the overarching aims of these approaches to knowledge building differ and, consequently, so do our standards for judging the quality of research within these paradigms.
  • Assessing the quality of qualitative research is important, both from a researcher and a practitioner perspective. On behalf of our clients and our profession, we are called to be critical consumers of research. To accomplish this, we need strategies for assessing the scientific rigor with which research is conducted.
  • Trustworthiness and associated concepts, including credibility, transferability, dependability and confirmability, provide a framework for assessing rigor or quality in qualitative research.
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Rigor is the process through which we demonstrate, to the best of our ability, that our research is empirically sound and reflects a scientific approach to knowledge building.

The degree to which an instrument reflects the true score rather than error. In statistical terms, reliability is the portion of observed variability in the sample that is accounted for by true variability, not by error. Note: reliability is necessary, but not sufficient, for measurement validity.

The extent to which the scores from a measure represent the variable they are intended to measure.

a paradigm guided by the principles of objectivity, knowability, and deductive logic

Findings from a research study that apply to a larger group of people beyond the sample. Producing generalizable findings requires starting with a representative sample.

a paradigm based on the idea that social context and interaction frame our realities

in a literature review, a source that describes primary data collected and analyzed by the author, rather than only reviewing what other researchers have found

Data someone else has collected that you have permission to use in your research.

unprocessed data that researchers can analyze using quantitative and qualitative methods (e.g., responses to a survey or interview transcripts)

Trustworthiness is a quality of qualitative research that is conducted in a credible way, one that should produce confidence in its findings.

The ability to say that one variable "causes" a change in another variable. This is especially important to assess in studies that examine causation, such as experimental or quasi-experimental designs.

The level of confidence that research is conducted through a systematic and scientific process and that findings can be clearly connected to the data they are based on (and not to some fabrication or falsification of that data).

The ability to apply research findings beyond the study sample to some broader population.

A synonym for generalizability: the ability to apply the findings of a study beyond the sample to a broader population.

The potential for qualitative research findings to be applicable to other situations or with other people outside of the research study itself.

Consistency is the idea that we use a systematic (and potentially repeatable) process when conducting our research.

A single truth, observed without bias, that is universally applicable.

One truth among many, bound within a social and cultural context.

The idea that qualitative researchers attempt to limit, or at the very least account for, their own biases, motivations, interests and opinions during the research process.

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


Checklists for improving rigour in qualitative research


Never mind the tail (checklist), check out the dog (research)

  • Robert Power , senior lecturer in medical sociology ([email protected])
  • Royal Free and University College Medical School, London WC1E 6AU
  • University of Dundee, Dundee DD1 4HN

EDITOR—Barbour's article is tantalising and mystifying in equal measure. 1 She is right to counsel qualitative researchers against shielding behind a protective wall of checklists and quasi-paradigmatic research techniques—although the same should be levelled at epidemiologists, statisticians, and health economists, with all researchers being charged with the responsibility of ensuring that the research tools and analysis fit the question to be addressed. Yet, and this is where the tantalising becomes mystifying, she twice (once in the second paragraph and again in the last) tells us that our research strategies need to be informed by the epistemology of qualitative research, without giving us an inkling as to what she believes this to be. Although she rightly espouses the importance of context for qualitative researchers, she denies us the context in which to assess her own critique.



Appraisal of Qualitative Studies

  • Reference work entry
  • First Online: 13 January 2019
  • pp 1013–1026


Camilla S. Hanson, Angela Ju & Allison Tong


The appraisal of health research is an essential skill required of readers in order to determine the extent to which the findings may inform evidence-based policy and practice. The appraisal of qualitative research remains highly contentious, and there is a lack of consensus regarding a standard approach to appraising qualitative studies. Different guides and tools are available for the critical appraisal of qualitative research. While these guides propose different criteria for assessment, overarching principles of rigor have been widely adopted, and these include credibility, dependability, confirmability, transferability, and reflexivity. This chapter will discuss the importance of appraising qualitative research, the principles and techniques for establishing rigor, and future directions regarding the use of guidelines to appraise qualitative research.



Author information

Authors and Affiliations

Sydney School of Public Health, The University of Sydney, Sydney, NSW, Australia

Camilla S. Hanson, Angela Ju & Allison Tong

Centre for Kidney Research, The Children’s Hospital at Westmead, Westmead, NSW, Australia

Camilla S. Hanson, Angela Ju & Allison Tong


Corresponding author

Correspondence to Camilla S. Hanson .

Editor information

Editors and Affiliations

School of Science and Health, Western Sydney University, Penrith, NSW, Australia

Pranee Liamputtong


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this entry

Cite this entry:

Hanson, C.S., Ju, A., Tong, A. (2019). Appraisal of Qualitative Studies. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_119


Print ISBN: 978-981-10-5250-7

Online ISBN: 978-981-10-5251-4



Framework for advancing rigorous research

Walter J Koroshetz

1 National Institute of Neurological Disorders and Stroke, Bethesda, United States

Shannon Behrman

2 iBiology, San Francisco, United States

Cynthia J Brame

3 Center for Teaching and Department of Biological Sciences, Vanderbilt University, Nashville, United States

Janet L Branchaw

4 Department of Kinesiology and Wisconsin Institute for Science Education and Community Engagement, University of Wisconsin - Madison, Madison, United States

Emery N Brown

5 Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, United States

6 Department of Brain and Cognitive Science, Institute of Medical Engineering and Sciences, the Picower Institute for Learning and Memory, and the Institute for Data Systems and Society, Massachusetts Institute of Technology, Boston, United States

Erin A Clark

7 Department of Biology and Program in Neuroscience, Brandeis University, Waltham, United States

David Dockterman

8 Harvard Graduate School of Education, Harvard University, Cambridge, United States

Jordan J Elm

9 Department of Public Health Sciences, Medical University of South Carolina, Charleston, United States

Pamela L Gay

10 Planetary Science Institute, Tucson, United States

Katelyn M Green

11 Cellular and Molecular Biology Graduate Program, University of Michigan, Ann Arbor, United States

12 The Concord Consortium, Emeryville, United States

Michael G Kaplitt

13 Department of Neurological Surgery, Weill Cornell Medical College, New York, United States

Benedict J Kolber

14 Department of Biological Sciences, Duquesne University, Pittsburgh, United States

Alex L Kolodkin

15 Solomon H. Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, United States

Diane Lipscombe

16 Carney Institute for Brain Science, Department of Neuroscience, Brown University, Providence, United States

Malcolm R MacLeod

17 Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom

Caleb C McKinney

18 Biomedical Graduate Education, Georgetown University Medical Center, Washington, United States

Marcus R Munafò

19 MRC Integrative Epidemiology Unit, School of Psychological Science, University of Bristol, Bristol, United Kingdom

Barbara Oakley

20 Oakland University, Rochester, United States

Jeffrey T Olimpo

21 Department of Biological Sciences, The University of Texas at El Paso, El Paso, United States

Nathalie Percie du Sert

22 National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), London, United Kingdom

Indira M Raman

23 Department of Neurobiology, Northwestern University, Evanston, United States

24 Complexly, Missoula, United States

Amy L Shelton

25 Center for Talented Youth and School of Education, Johns Hopkins University, Baltimore, United States

Stephen Miles Uzzo

26 New York Hall of Science, Flushing Meadows Corona Park, United States

Devon C Crawford

Shai D Silberberg

There is a pressing need to increase the rigor of research in the life and biomedical sciences. To address this issue, we propose that communities of 'rigor champions' be established to campaign for reforms of the research culture that has led to shortcomings in rigor. These communities of rigor champions would also assist in the development and adoption of a comprehensive educational platform that would teach the principles of rigorous science to researchers at all career stages.

The scientific enterprise relies on mentors teaching their students and trainees how to design and conduct studies that produce reliable scientific knowledge. A crucial part of this is teaching students and trainees how to minimize the risks that chance observations, subconscious biases, or other factors might lead to incorrect or inflated claims. However, as the demands on mentors increase, some of them unintentionally overlook this crucial aspect of scientific investigation, meaning that students and trainees are not taught how to distinguish between high- and low-quality evidence when working on their own studies and when reading about other studies ( Ioannidis et al., 2014 ; Bosch and Casadevall, 2017 ; Landis et al., 2012 ).

Additional complications stem from the welcome rise in team-based science and a greater sophistication and range of experimental techniques ( National Research Council, 2015 ), which may, in part, be driven by a feeling that only exciting and complete stories will appeal to journals and funders ( Nosek et al., 2012 ; Casadevall et al., 2016 ). Increasingly, an individual scientist cannot be an expert in all the techniques used in a research project.

Taken together, these developments suggest that enhanced training in the fundamental principles of rigorous research common to most, if not all, experimental practices is needed to ensure that the outputs of scientific research remain reliable and robust. Such principles include strong reasoning and inference based on valid assertions, which requires the proper interpretation of uncertainty and a motivation to identify inconsistencies ( Bosch and Casadevall, 2017 ; Casadevall and Fang, 2016 ; Munafò and Davey Smith, 2018 ; Wasserstein et al., 2019 ). For studies that test hypotheses, researchers should: clearly define interventions; identify and disclose possible confounding factors; transparently report project workflows, experimental plans, methods, data analyses, and any divergence from pre-planned procedures; and fully report their competing interests (see https://www.equator-network.org/ for reporting guidelines). The requirements for studies intended to generate hypotheses will be different but should be equally described ( Dirnagl, 2019 ).

Before formulating solutions to these issues, we assessed current training practices at the graduate and postdoctoral levels by surveying all 41 institutions in the United States that held at least one training grant from the National Institute of Neurological Disorders and Stroke (NINDS) in May 2018. Only 5 of the 37 institutions that responded to the survey reported providing a course predominantly dedicated to principles of rigorous research, with others using a range of approaches – such as seminars, lectures within other coursework, workshops, and informal mentoring – to teach good research practices. However, few if any of the institutions covered the full range of principles that need to be learned and understood. Although the sample in our survey was small, the responses reinforced the common belief that formal training in rigorous research needs to be enhanced ( Ioannidis et al., 2014 ; Munafò et al., 2017 ).

While numerous training materials related to rigorous research are available online, finding suitable materials and assembling them into a cohesive course is challenging. Having access to a free, organized suite of educational resources could greatly reduce the energy barrier for institutions and scientists to implement enhanced training at all levels, from undergraduate education to faculty professional development.

Towards this end NINDS convened a workshop attended by a range of stakeholders: basic, translational, and clinical neuroscientists; scholars of education and science communication; educational platform developers; and trainees. Although neuroscience served as a focal point, the four outcomes of the discussions apply widely across the biomedical sciences: i) there is a clear need for a platform that teaches the principles of rigorous research and covers the needs of scientists at all career stages; ii) effective educational interventions should lead to measurable behavioral change; iii) academic institutions need to play a proactive role in promoting rigorous research practices; iv) progress in this area will require cultural change at academic institutions, funders, and publishers ( Casadevall et al., 2016 ; Munafò et al., 2017 ; Collins and Tabak, 2014 ; Begley et al., 2015 ; Casadevall and Fang, 2012 ).

Building communities of rigor champions

To unleash the motivation for a cultural change evident in discussions between the authors and early-career researchers and others, and to provide momentum for change across different sectors, we propose the establishment of inter- and intra-institutional communities of 'rigor champions' who are committed to promoting rigor and transparency in research. We know there are many such individuals working at different levels of seniority in different types of organizations (such as universities, funders, publishers, and scientific societies), but they often feel isolated and under-resourced. To seed this effort and to help like-minded individuals in different organizations to find each other and join forces, NINDS has created a website for researchers, educators, trainees, organizational leaders and others who are passionate about the issues discussed here. This website includes currently available resources for making science more rigorous and transparently reporting results, as well as instructions for identifying yourself as a rigor champion.

More information about the different activities that these communities could undertake is given in Table 1. Researchers, educators and trainees are best placed to collaborate on new tools, share best practices, and promote rigorous research in their local scientific communities. Societies are in a position to advocate for widespread policy changes, while funders and journals have important gatekeeping roles ( Collins and Tabak, 2014 ; McNutt, 2014 ; Cressey, 2015 ; PLOS Biology, 2018 ). The recently established UK Reproducibility Network ( Munafò et al., 2020 ) and the PREMIER project ( Dirnagl et al., 2018 ), both of which aim to improve scientific practices, may serve as models for these communities.

Table 1. Community | Intra-organizational activities | Inter-organizational activities
• Promote transparency and other rigorous practices among colleagues and mentors
• Advocate for resources to facilitate rigorous research practices
• Share institutional resources and practices in education and training
• Call for changes in institutional culture and policies
• Transparently report all experiments, including neutral outcomes
• Promote rigorous practices among colleagues and trainees
• Call for changes to institutional culture, policies, and infrastructure
• Share effective training practices and useful laboratory resources
• Coordinate with the broader scientific community to promote better incentive structures
• Suggest improvements to available resources that address rigor
• Integrate rigorous research principles into all coursework
• Share resources and educational best practices
• Share effective learning evaluation methods
• Enact policies and support infrastructure to incentivize transparency and other rigorous research practices
• Explicitly incorporate mentoring, collaboration, and rigorous research practices into promotion procedures
• Initiate and share outcomes from piloted educational resources
• Support and promote communities of rigor champions
• Disseminate policy changes, new initiatives, educational successes, and implementation strategies
• Develop tangible outcome measures to evaluate impact
• Promote thorough review of research practices in publications
• Explicitly support research transparency and neutral outcomes
• Educate reviewers on which scientific practices are valued by the journal
• Collaborate to implement best practices consistently across different publishers
• Support the founding of communities of rigor champions
• Compile and encourage best practices used by the scientific community
• Host workshops and educational materials for members
• Promote and maintain communities of rigor champions
• Encourage institutional policies that promote research quality and effective education
• Emphasize attention to rigor in peer review
• Reward rigorous research practices and outstanding mentorship
• Support infrastructure for transparent and rigorous science
• Support educational resources and initiatives
• Support and promote communities of rigor champions
• Share best practices for incentivizing rigorous research and educating scientists
• Develop partnerships to support better training and facilitate cultural changes

NINDS, for example, has proactively sought effective approaches to support greater transparency in reporting. An NINDS meeting with publishers led to changes in journal policies regarding transparency of reporting ( Nature, 2013 ; Kelner, 2013 ). Recommendations for greater transparency at scientific meetings stemmed from an NINDS roundtable with conference organizing bodies ( Silberberg et al., 2017 ) and are being piloted by the Federation of American Societies for Experimental Biology (FASEB). To recognize outstanding mentors, NINDS established the Landis Mentoring Award, and by providing greater stability to meritorious scientists through the NINDS R35 Program, it is anticipated that the pressures to rush studies to publication will be mitigated.

In particular we hope that leaders at academic institutions – such as department chairs, deans, and vice-presidents of research – will become involved because they are uniquely placed to shape the culture and social norms of institutions ( Begley et al., 2015 ). For example, faculty evaluation criteria should be modified to place greater emphasis on data sharing, methods transparency, demonstrated rigor, collaboration, and mentoring, with less emphasis on the number of publications and journal impact factors ( Casadevall and Fang, 2012 ; Moher et al., 2018 ; Bertuzzi and Jamaleddine, 2016 ; Lundwall, 2019 ; Strech et al., 2020 ; Casci and Adams, 2020 ; see also https://sfdora.org/read ). When publications are being evaluated, rigorously obtained null results should be valued as highly as positive findings. Institutional leaders are also uniquely placed to ensure that scientific rigor is properly taught to trainees and incorporated into day-to-day lab work ( Casadevall et al., 2016 ; Begley et al., 2015 ; Bosch, 2018 ; Button et al., 2020 ). Moreover, evaluations of trainees should emphasize experimental and analytic skills rather than where papers are published.

Building an educational resource for rigorous research

The establishment of communities of rigor champions will set the stage for the creation of an educational platform designed by the scientific community to communicate the principles of rigorous research. Given the rapid evolution of technologies and learning practices, it is difficult to predict what resource formats will be most effective in the future, so the platform will need to be open and freely available, easily discoverable, engaging, modular, adaptable, and upgradable. It will also need to be available during coursework and beyond so that scientists can use it to answer questions when they are doing research or as part of life-long learning ( Figure 1 ). This means that the platform will have to embody a number of principles of effective teaching and mentoring (see Table 2 ).

Figure 1. We envision a comprehensive resource that can be used by scientists at all stages of their career to explore the principles of rigorous research at various levels of detail. We envision modules on a range of topics (such as reducing cognitive biases), each of which contains a number of topics (such as blinding), each of which contains a number of lessons (such as practical examples).

Table 2. Key element | Teaching and learning principle
Define the learning objectives upfront, identify ways to measure achievement of these objectives, and then design activities to support learning.
Encourage students to pose their own questions, apply commonly used tools and methods to actively explore their questions, and provide evidence when explaining phenomena.
Provide feedback on real-world experiments, whether in the classroom or the laboratory, as a way to demonstrate relevance and stimulate interest. Opportunities for personalized application and discussion in the local setting with the help of a facilitator’s guide are particularly critical, as adults typically learn most effectively when given the opportunity for immediate personal utility and value. Emphasize the ability to contribute to a larger purpose or gain social standing.
Include a range of approaches to teaching and learning to accommodate different levels of knowledge and skills, motivations, and senses of self-efficacy.
Allow individuals to gain self-efficacy by experiencing a feeling of progress, being challenged in low-stakes environments, and working through confusing concepts successfully. This is more effective when the person feels psychologically safe to take risks and fail in front of their local scientific community.
Facilitate learning, foster collaboration, and recognize diverse perspectives in order to encourage learners to gain agency and forge a connection with the intellectual community.
Include complexity and inconsistencies in training examples rather than simplification for the sake of a persuasive story. This counteracts the drive to smooth over inconvenient but potentially important details and highlights the importance of confounding variables, potential artefactual influences, reproducibility, and robustness of the findings.
Nurture positive behaviors, like acknowledging and learning from mistakes, rather than penalize imperfect practices. Mentors at all career stages are encouraged to model these positive behaviors and to share their own failures, the drudgery and frustrations of science, and their approaches to coping emotionally and growing intellectually while maintaining rigorous research practices.
Measure success via gains in learner competency and changes to their real-world approaches to research. Changes in laboratory practice could be assessed by user self-reports, by analysis of research presented at meetings and in publications, or by querying scientists on whether discussions with their mentors and colleagues led to changes in laboratory and institutional culture. Collaborate from the beginning with individuals who specialize in assessment design in higher education settings.

We envision the platform being developed via a hub-and-spoke approach as discussed at a recent National Advisory Neurological Disorders and Stroke Council meeting. A centralized mechanism (the 'hub') will provide financial and infrastructural support and guidance (possibly via a steering committee) and facilitate sharing and coordination between groups, while rigor champions will come together to design specific modules (spokes) for the platform by using existing resources or designing new ones from scratch as needed. We envision worldwide teams of experts collaborating on building and testing the resource. Rigor champions with experience in defining clear learning objectives, building curricula, and evaluating success, for example, will collaborate with content experts to design topics needed in the resource. Importantly, potential users will be involved from the beginning of the development stage, and onwards through the design and implementation stages, to provide feedback about effectiveness and usability.

Given the importance of being able to measure the effectiveness (or otherwise) of the platform ( Table 2 ), individual components should be released publicly as they are completed to allow educators and users to iteratively test and improve the resource as it unfolds. As with science itself, the developers will need to experiment with content and delivery. If the resource does not improve the comprehension and research practice of individuals, or add value to the research community, rigorous approaches should be applied to improve it.

Once a functioning and effective resource has been built, it will be essential to promote its use and adoption. One approach would be to host 'train-the-trainer' programs (Spencer et al., 2018; Pfund et al., 2006): those involved in building the resource share it with small groups of mentors, who are then better equipped to use the resource with their own mentees and to encourage their colleagues to use it. This form of dissemination also creates buy-in from mentors, who need to model the behaviors they are teaching. Rigor champions, meanwhile, can encourage their institutions and colleagues to adopt and use the resource.

Setting up and supporting communities of rigor champions and developing educational resources on rigorous research will be complex and likely require multiple sources of support. However, with the participation of all sectors of the scientific enterprise, the actions proposed herein should, within a decade, lead to improvements in the culture of science as well as improvements in the design, conduct, analysis, and reporting of biomedical research. The result will be a healthier and more effective scientific community.

The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government.

Biographies

Walter J Koroshetz is at the National Institute of Neurological Disorders and Stroke, Rockville, MD, United States

Shannon Behrman is at iBiology, San Francisco, CA, United States

Cynthia J Brame is at the Center for Teaching and Department of Biological Sciences, Vanderbilt University, Nashville, TN, United States

Janet L Branchaw is in the Department of Kinesiology and Wisconsin Institute for Science Education and Community Engagement, University of Wisconsin - Madison, Madison, WI, United States

Emery N Brown is in the Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, and the Department of Brain and Cognitive Science, Institute of Medical Engineering and Sciences, the Picower Institute for Learning and Memory, and the Institute for Data Systems and Society, Massachusetts Institute of Technology, Cambridge, MA, United States

Erin A Clark is in the Department of Biology and Program in Neuroscience, Brandeis University, Waltham, MA, United States

David Dockterman is at the Harvard Graduate School of Education, Harvard University, Cambridge, MA, United States

Jordan J Elm is in the Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, United States

Pamela L Gay is at the Planetary Science Institute, Tucson, AZ, United States

Katelyn M Green is in the Cellular and Molecular Biology Graduate Program, University of Michigan, Ann Arbor, MI, United States

Sherry Hsi is with The Concord Consortium, Emeryville, CA, United States

Michael G Kaplitt is in the Department of Neurological Surgery, Weill Cornell Medical College, New York, NY, United States

Benedict J Kolber is in the Department of Biological Sciences, Duquesne University, Pittsburgh, PA, United States

Alex L Kolodkin is in the Solomon H. Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, MD, United States

Diane Lipscombe is in the Carney Institute for Brain Science, Department of Neuroscience, Brown University, Providence, RI, United States

Malcolm R MacLeod is in the Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom

Caleb C McKinney is in Biomedical Graduate Education, Georgetown University Medical Center, Washington, DC, United States

Marcus R Munafò is in the MRC Integrative Epidemiology Unit, School of Psychological Science, University of Bristol, Bristol, United Kingdom

Barbara Oakley is at Oakland University, Rochester, MI, United States

Jeffrey T Olimpo is in the Department of Biological Sciences, The University of Texas at El Paso, El Paso, TX, United States

Nathalie Percie du Sert is in the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), London, United Kingdom

Indira M Raman is in the Department of Neurobiology, Northwestern University, Evanston, IL, United States

Ceri Riley is with Complexly, Missoula, MT, United States

Amy L Shelton is at the Center for Talented Youth and School of Education, Johns Hopkins University, Baltimore, MD, United States

Stephen Miles Uzzo is at the New York Hall of Science, Flushing Meadows Corona Park, NY, United States

Devon C Crawford is at the National Institute of Neurological Disorders and Stroke, Rockville, MD, United States

Shai D Silberberg is at the National Institute of Neurological Disorders and Stroke, Rockville, MD, United States

Funding Statement

Funded by the National Institute of Neurological Disorders and Stroke (NINDS).

Competing interests

No competing interests declared.

Author contributions

DCC and SDS wrote the manuscript; all authors provided intellectual input and contributed to the editing of the manuscript.

  • Alberts B, Cicerone RJ, Fienberg SE, Kamb A, McNutt M, Nerem RM, Schekman R, Shiffrin R, Stodden V, Suresh S, Zuber MT, Pope BK, Jamieson KH. Self-correction in science at work. Science. 2015;348:1420–1422. doi:10.1126/science.aab3847.
  • Begley CG, Buchan AM, Dirnagl U. Robust Research: Institutions must do their part for reproducibility. Nature. 2015;525:25–27. doi:10.1038/525025a.
  • Bertuzzi S, Jamaleddine Z. Capturing the value of biomedical research. Cell. 2016;165:9–12. doi:10.1016/j.cell.2016.03.004.
  • Bjork RA, Dunlosky J, Kornell N. Self-regulated learning: beliefs, techniques, and illusions. Annual Review of Psychology. 2013;64:417–444. doi:10.1146/annurev-psych-113011-143823.
  • Bosch G. Train PhD students to be thinkers not just specialists. Nature. 2018;554:277. doi:10.1038/d41586-018-01853-1.
  • Bosch G, Casadevall A. Graduate biomedical science education needs a new philosophy. mBio. 2017;8:17. doi:10.1128/mBio.01539-17.
  • Bradforth SE, Miller ER, Dichtel WR, Leibovich AK, Feig AL, Martin JD, Bjorkman KS, Schultz ZD, Smith TL. University Learning: Improve undergraduate science education. Nature. 2015;523:282–284. doi:10.1038/523282a.
  • Brown JS, Adler RP. Minds on fire: open education, the long tail, and learning 2.0. EDUCAUSE Review. 2008;43:16–32.
  • Button KS, Chambers CD, Lawrence N, Munafò MR. Grassroots training for reproducible science: a consortium-based approach to the empirical dissertation. Psychology Learning & Teaching. 2020;19:77–90. doi:10.1177/1475725719857659.
  • Casadevall A, Ellis LM, Davies EW, McFall-Ngai M, Fang FC. A framework for improving the quality of research in the biological sciences. mBio. 2016;7:e01256. doi:10.1128/mBio.01256-16.
  • Casadevall A, Fang FC. Reforming science: methodological and cultural reforms. Infection and Immunity. 2012;80:891–896. doi:10.1128/IAI.06183-11.
  • Casadevall A, Fang FC. Rigorous Science: A how-to guide. mBio. 2016;7:e01902. doi:10.1128/mBio.01902-16.
  • Casci T, Adams E. Setting the right tone. eLife. 2020;9:e55543. doi:10.7554/eLife.55543.
  • Coleman B. Science writing: Too good to be true? New York Times. 1987. https://www.nytimes.com/1987/09/27/books/sceince-writing-too-good-to-be-true.html [accessed February 29, 2020].
  • Collins FS, Tabak LA. NIH plans to enhance reproducibility. Nature. 2014;505:612–613. doi:10.1038/505612a.
  • Corwin LA, Graham MJ, Dolan EL. Modeling course-based undergraduate research experiences: an agenda for future research and evaluation. CBE—Life Sciences Education. 2015;14:es1. doi:10.1187/cbe.14-10-0167.
  • Cressey D. UK funders demand strong statistics for animal studies. Nature. 2015;520:271–272. doi:10.1038/520271a.
  • Dirnagl U, Kurreck C, Castaños-Vélez E, Bernard R. Quality management for academic laboratories: burden or boon? EMBO Reports. 2018;19:e47143. doi:10.15252/embr.201847143.
  • Dirnagl U. Resolving the tension between exploration and confirmation in preclinical biomedical research. In: Handbook of Experimental Pharmacology. Berlin, Heidelberg: Springer; 2019.
  • D’Mello S, Lehman B, Pekrun R, Graesser A. Confusion can be beneficial for learning. Learning and Instruction. 2014;29:153–170. doi:10.1016/j.learninstruc.2012.05.003.
  • Handelsman J, Ebert-May D, Beichner R, Bruns P, Chang A, DeHaan R, Gentile J, Lauffer S, Stewart J, Tilghman SM, Wood WB. Scientific teaching. Science. 2004;304:521–522. doi:10.1126/science.1096022.
  • Howitt SM, Wilson AN. Revisiting “Is the scientific paper a fraud?” EMBO Reports. 2014;15:481–484. doi:10.1002/embr.201338302.
  • Ioannidis JPA, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. The Lancet. 2014;383:166–175. doi:10.1016/S0140-6736(13)62227-8.
  • Kelner KL. Playing our part. Science Translational Medicine. 2013;5:190ed7. doi:10.1126/scitranslmed.3006661.
  • Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, Crystal RG, Darnell RB, Ferrante RJ, Fillit H, Finkelstein R, Fisher M, Gendelman HE, Golub RM, Goudreau JL, Gross RA, Gubitz AK, Hesterlee SE, Howells DW, Huguenard J, Kelner K, Koroshetz W, Krainc D, Lazic SE, Levine MS, Macleod MR, McCall JM, Moxley RT, Narasimhan K, Noble LJ, Perrin S, Porter JD, Steward O, Unger E, Utz U, Silberberg SD. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490:187–191. doi:10.1038/nature11556.
  • Lundwall RA. Changing institutional incentives to foster sound scientific practices: one department. Infant Behavior and Development. 2019;55:69–76. doi:10.1016/j.infbeh.2019.03.006.
  • MacLeod MR, Lawson McLean A, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, Hirst T, Hemblade R, Bahor Z, Nunes-Fonseca C, Potluru A, Thomson A, Baginskaite J, Baginskitae J, Egan K, Vesterinen H, Currie GL, Churilov L, Howells DW, Sena ES. Risk of bias in reports of in vivo research: a focus for improvement. PLOS Biology. 2015;13:e1002273. doi:10.1371/journal.pbio.1002273.
  • McNutt M. Journals unite for reproducibility. Science. 2014;346:679. doi:10.1126/science.aaa1724.
  • Minner DD, Levy AJ, Century J. Inquiry-based science instruction—what is it and does it matter? Results from a research synthesis years 1984 to 2002. Journal of Research in Science Teaching. 2010;47:474–496. doi:10.1002/tea.20347.
  • Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN. Assessing scientists for hiring, promotion, and tenure. PLOS Biology. 2018;16:e2004089. doi:10.1371/journal.pbio.2004089.
  • Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, Simonsohn U, Wagenmakers E-J, Ware JJ, Ioannidis JPA. A manifesto for reproducible science. Nature Human Behaviour. 2017;1:0021. doi:10.1038/s41562-016-0021.
  • Munafò MR, Chambers CD, Collins AM, Fortunato L, Macleod MR. Research culture and reproducibility. Trends in Cognitive Sciences. 2020;24:91–93. doi:10.1016/j.tics.2019.12.002.
  • Munafò MR, Davey Smith G. Robust research needs many lines of evidence. Nature. 2018;553:399–401. doi:10.1038/d41586-018-01023-3.
  • National Research Council. Enhancing the Effectiveness of Team Science. The National Academies Press; 2015.
  • Nature. Reducing our irreproducibility. Nature. 2013;496:398. doi:10.1038/496398a.
  • Nosek BA, Spies JR, Motyl M. Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science. 2012;7:615–631. doi:10.1177/1745691612459058.
  • Pfund C, Maidl Pribbenow C, Branchaw J, Miller Lauffer S, Handelsman J. The merits of training mentors. Science. 2006;311:473–474. doi:10.1126/science.1123806.
  • PLOS Biology. Fifteen years in, what next for PLOS Biology? PLOS Biology. 2018;16:e3000049. doi:10.1371/journal.pbio.3000049.
  • Raman IM. How to be a graduate advisee. Neuron. 2014;81:9–11. doi:10.1016/j.neuron.2013.12.030.
  • Silberberg SD, Crawford DC, Finkelstein R, Koroshetz WJ, Blank RD, Freeze HH, Garrison HH, Seger YR. Shake up conferences. Nature. 2017;548:153–154. doi:10.1038/548153a.
  • Spencer KC, McDaniels M, Utzerath E, Rogers JG, Sorkness CA, Asquith P, Pfund C. Building a sustainable national infrastructure to expand research mentor training. CBE—Life Sciences Education. 2018;17:ar48. doi:10.1187/cbe.18-03-0034.
  • Strech D, Weissgerber T, Dirnagl U, QUEST Group. Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative. PLOS Biology. 2020;18:e3000576. doi:10.1371/journal.pbio.3000576.
  • Walkington C, Bernacki ML. Personalization of instruction: design dimensions and implications for cognition. The Journal of Experimental Education. 2018;86:50–68. doi:10.1080/00220973.2017.1380590.
  • Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond "p < 0.05". The American Statistician. 2019;73:1–19. doi:10.1080/00031305.2019.1583913.
  • Yeager DS, Henderson MD, Paunesku D, Walton GM, D'Mello S, Spitzer BJ, Duckworth AL. Boring but important: a self-transcendent purpose for learning fosters academic self-regulation. Journal of Personality and Social Psychology. 2014;107:559–580. doi:10.1037/a0037637.


Using the TACT Framework to Learn the Principles of Rigour in Qualitative Research

Ben Daniel

Electronic Journal of Business Research Methods

Assessing the quality of qualitative research to ensure rigour in the findings is critical, especially if findings are to contribute to theory and be utilised in practice. However, teaching students concepts of rigour and how to apply them to their research is challenging. This article presents a generic framework of rigour with four critical dimensions—Trustworthiness, Auditability, Credibility and Transferability (TACT)—intended to teach issues of rigour to postgraduate students and those new to qualitative research methodology. The framework enables them to explore the key dimensions necessary for assessing the rigour of qualitative research studies, with checklist questions against each of the dimensions. TACT was offered through 10 workshops, attended by 64 participants. Participants positively evaluated the workshops and reported that the workshops enabled them to learn the principles of qualitative research and better understand issues of rigour. Work presented in the article...

A Review of the Quality Indicators of Rigor in Qualitative Research

  • Jessica L. Johnson, PharmD, William Carey University School of Pharmacy, Biloxi, Mississippi (corresponding author: 19640 Hwy 67, Biloxi, MS 39574; Tel: 228-702-1897)
  • Donna Adkins, PharmD, William Carey University School of Pharmacy, Biloxi, Mississippi
  • Sheila Chauvin, PhD, Louisiana State University School of Medicine, New Orleans, Louisiana

Keywords: qualitative research design; standards of rigor; best practices

INTRODUCTION


BEST PRACTICES: STEP-WISE APPROACH

Step 1: Identifying a Research Topic


Step 2: Qualitative Study Design


Step 3: Data Analysis

Step 4: Drawing Valid Conclusions


Step 5: Reporting Research Results



DOI: https://doi.org/10.5688/ajpe7120



3.7 Quantitative Rigour

Rigour refers to the extent to which researchers strive to ensure the quality of their study. In quantitative research, rigour is achieved by assessing validity and reliability. 55 These concepts affect the quality of the findings and their applicability to broader populations.

Validity refers to the accuracy of a measure. It is the extent to which a study or test accurately measures what it sets out to measure. There are three main types of validity – content, construct and criterion validity.

  • Content validity: Content validity examines whether the instrument adequately covers all aspects of the content it should, with respect to the variable under investigation. 56 This type of validity can be assessed through expert judgement and by examining the coverage of items or questions in the measure. 56 Face validity is a subset of content validity in which experts are consulted to determine whether a measurement tool appears to capture what it is supposed to measure. 56 There are two common methods for testing content validity: the content validity index (CVI) and the content validity ratio (CVR). CVI is calculated as the number of experts giving a rating of “very relevant” for each item divided by the total number of experts. Values range from 0 to 1: items with a CVI above 0.79 are considered relevant, items between 0.70 and 0.79 need revision, and items below 0.70 are eliminated. 57 CVR varies between −1 and 1, with a higher score indicating greater agreement among panel members. CVR is calculated as (Ne − N/2)/(N/2), where Ne is the number of panellists rating an item as “essential” and N is the total number of panellists. 57 A study by Mousazadeh et al. (2017) investigated the content validity, face validity and reliability of the Sociocultural Attitudes Towards Appearance Questionnaire-3 (SATAQ-3) among female adolescents. 58 To ensure face validity, the questionnaire was given to 25 female adolescents, a psychologist and three nurses, who were asked to evaluate the items for problems with ambiguity, relevance, proper terms and grammar, and understandability. For content validity, 15 experts in psychology and nursing assessed the qualitative content validity, and the content validity index and content validity ratio were calculated to determine the quantitative content validity. 58
  • Construct validity: A construct is an idea or theoretical concept, based on empirical observations, that is not directly measurable; examples include physical functioning and social anxiety. Construct validity therefore determines whether an instrument measures the underlying construct of interest and discriminates it from other related constructs. 55 It expresses the confidence that a particular construct is being measured validly. 55 This type of validity can be assessed using factor analysis or other statistical techniques. For example, Pinar (2005) evaluated the reliability and construct validity of the SF-36 in Turkish cancer patients. 59 The SF-36 is widely used to measure quality of life or health status in sick and healthy populations. Principal components factor analysis with varimax rotation confirmed the presence of seven domains in the SF-36: physical functioning, role limitations due to physical and emotional problems, mental health, general health perception, bodily pain, social functioning, and vitality. It was concluded that the Turkish version of the SF-36 is a suitable instrument for cancer research in Turkey. 59
  • Criterion validity: Criterion validity is the relationship between an instrument score and some external criterion. This criterion, considered the “gold standard”, has to be a widely accepted measure that shares the same characteristics as the assessment tool. 55 Determining the validity of a new diagnostic test requires two principal factors: sensitivity and specificity. 60 Sensitivity refers to the probability of the test detecting those with the disease, while specificity refers to the probability of the test correctly identifying those without the disease. 60 For example, the reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard for COVID-19 testing, but its results become available, at the earliest, several hours to days after testing. Rapid antigen tests are diagnostic tools that can be used at the point of care, with results available within 30 minutes. 61, 62 The validity of these rapid antigen tests was therefore determined against the gold standard. 61, 62 Two published articles that assessed the validity of rapid antigen tests reported sensitivities of 71.43% and 78.3% and specificities of 99.68% and 99.5%, respectively, 61, 62 indicating that the tests were less effective in identifying those who have the disease but highly effective in identifying those who do not. While it is important to assess the accuracy of the instruments used, it is also imperative to determine whether the measure and the findings are reliable.
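The validity indices described in the list above are simple arithmetic, and a minimal sketch can make the definitions concrete. The Python below uses hypothetical expert ratings and test counts (not the data from the cited studies) to compute a CVI, a CVR, and the sensitivity and specificity of a diagnostic test against a gold standard:

```python
def content_validity_index(very_relevant: int, total_experts: int) -> float:
    """CVI = experts rating the item 'very relevant' / total experts (0 to 1)."""
    return very_relevant / total_experts

def content_validity_ratio(essential: int, total_experts: int) -> float:
    """CVR = (Ne - N/2) / (N/2), ranging from -1 to 1."""
    return (essential - total_experts / 2) / (total_experts / 2)

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Probability that the test detects those who have the disease."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Probability that the test correctly identifies those without the disease."""
    return true_negatives / (true_negatives + false_positives)

# 12 of 15 experts rate an item 'very relevant': CVI = 0.8 (> 0.79, so retained)
print(round(content_validity_index(12, 15), 2))   # 0.8
# 11 of 15 panellists call the item 'essential': CVR = (11 - 7.5) / 7.5
print(round(content_validity_ratio(11, 15), 2))   # 0.47
# A rapid test detects 78 of 100 true cases and clears 995 of 1000 non-cases
print(sensitivity(78, 22))    # 0.78
print(specificity(995, 5))    # 0.995
```

The hypothetical rapid-test counts were chosen so the output mirrors the pattern reported above: modest sensitivity alongside very high specificity.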

Reliability

Reliability refers to the consistency of a measure. It is the ability of a measure or test to reproduce consistent results over time and across different observers. 55 A reliable measurement tool produces consistent results even when different observers administer the test or when the test is conducted on different occasions. 55, 56 Reliability can be assessed by examining test-retest reliability, inter-rater reliability and internal consistency.

  • Test-retest reliability: Test-retest reliability refers to the degree of consistency between the outcomes of the same test or measure taken by the same participants at different times. It estimates the consistency of repeated measurement. The intraclass correlation coefficient (ICC) is often used to determine test-retest reliability. 56 For example, a study evaluating the reliability of a new tool for measuring pain might administer the tool to a group of patients at two different time points and compare the results. If the results are consistent across the two time points, this would indicate that the tool has good test-retest reliability. However, it is important to note that reliability estimates decline when the interval between test administrations is too long; an adequate time span between tests ranges from 10 to 14 days. 56 Pinar (2005) demonstrated this by assessing test-retest stability using the intraclass correlation coefficient (ICC). The retest was conducted two weeks after the first test, as two weeks was considered the optimum retest interval: 59 long enough for participants to forget their initial responses, but not so long that most health domains would change. 59
  • Inter-observer (between observers) reliability: Also known as inter-rater reliability, this is the level of agreement between two or more observers on the results of an instrument or test, and it is the most popular method of determining whether two ratings are equivalent. 55, 56 For example, a study evaluating the reliability of a new tool for measuring depression might have two different raters or observers independently score the same patient on the tool and compare the results. If the results are consistent across the two raters, this would indicate that the tool has excellent inter-rater reliability. The Kappa coefficient is a measure used to assess the agreement between raters. 56 It has a maximum value of 1.00; the higher the value, the greater the concordance between the raters. 56
  • Internal consistency: Internal consistency refers to the extent to which the different items or questions in a test or questionnaire are consistent with one another. It is also known as homogeneity, which indicates whether each component of an instrument measures the same characteristic. 55 This type of reliability can be assessed by calculating Cronbach’s alpha (α) coefficient, which measures the correlation between different items or questions. Cronbach’s α is expressed as a number between 0 and 1, and a reliability score of 0.7 or above is considered acceptable. 55 For example, Pinar (2005) reported that reliability evaluations of the SF-36 were based on the internal consistency test (Cronbach’s α coefficient). The results showed that Cronbach’s α coefficient for the eight subscales of the SF-36 ranged between 0.79 and 0.90, confirming the internal consistency of the subscales. 59
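Two of the reliability coefficients listed above can also be sketched in a few lines of plain Python. This is a toy illustration with made-up item scores and ratings (not the SF-36 or any cited data), using population variances in the alpha formula and simple counting for kappa:

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
    `items` holds k columns of scores, one list per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

def cohen_kappa(rater_a: list, rater_b: list) -> float:
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (observed - chance) / (1 - chance)

# Four respondents answer three questionnaire items on a 1-5 scale
item_scores = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(item_scores), 2))   # 0.82 (above the 0.7 threshold)

# Two observers rate six patients as depressed (1) or not depressed (0)
print(round(cohen_kappa([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]), 2))   # 0.67
```

In practice, established statistical packages would be used rather than hand-rolled functions, but the sketch shows why an alpha near 1 means the items vary together and why kappa discounts the agreement two raters would reach by chance.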

Now that you have an understanding of quantitative methodology, use the Padlet below to write a research question that can be answered quantitatively.

An Introduction to Research Methods for Undergraduate Health Profession Students Copyright © 2023 by Faith Alele and Bunmi Malau-Aduli is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.
