| Actions to promote rigor | Actions to champion rigor in the community |
|---|---|
|  | • Advocate for resources to facilitate rigorous research practices • Share institutional resources and practices in education and training • Call for changes in institutional culture and policies |
| • Transparently report all experiments, including neutral outcomes • Promote rigorous practices among colleagues and trainees • Call for changes to institutional culture, policies, and infrastructure | • Share effective training practices and useful laboratory resources • Coordinate with the broader scientific community to promote better incentive structures |
| • Suggest improvements to available resources that address rigor • Integrate rigorous research principles into all coursework | • Share resources and educational best practices • Share effective learning evaluation methods |
| • Enact policies and support infrastructure to incentivize transparency and other rigorous research practices • Explicitly incorporate mentoring, collaboration, and rigorous research practices into promotion procedures • Initiate and share outcomes from piloted educational resources | • Support and promote communities of rigor champions • Disseminate policy changes, new initiatives, educational successes, and implementation strategies • Develop tangible outcome measures to evaluate impact |
| • Promote thorough review of research practices in publications • Explicitly support research transparency and neutral outcomes • Educate reviewers on which scientific practices are valued by the journal | • Collaborate to implement best practices consistently across different publishers |
| • Support the founding of communities of rigor champions • Compile and encourage best practices used by the scientific community • Host workshops and educational materials for members | • Promote and maintain communities of rigor champions • Encourage institutional policies that promote research quality and effective education |
| • Emphasize attention to rigor in peer review • Reward rigorous research practices and outstanding mentorship • Support infrastructure for transparent and rigorous science • Support educational resources and initiatives | • Support and promote communities of rigor champions • Share best practices for incentivizing rigorous research and educating scientists • Develop partnerships to support better training and facilitate cultural changes |
NINDS, for example, has proactively sought effective approaches to support greater transparency in reporting. An NINDS meeting with publishers led to changes in journal policies regarding transparency of reporting at various journals (Nature, 2013; Kelner, 2013). Recommendations for greater transparency at scientific meetings stemmed from an NINDS roundtable with conference organizing bodies (Silberberg et al., 2017) and are being piloted by the Federation of American Societies for Experimental Biology (FASEB). To recognize outstanding mentors, NINDS established the Landis Mentoring Award, and by providing greater stability to meritorious scientists through the NINDS R35 Program, it is anticipated that the pressures to rush studies to publication will be mitigated.
In particular, we hope that leaders at academic institutions – such as department chairs, deans, and vice-presidents of research – will become involved because they are uniquely placed to shape the culture and social norms of institutions (Begley et al., 2015). For example, faculty evaluation criteria should be modified to place greater emphasis on data sharing, methods transparency, demonstrated rigor, collaboration, and mentoring, and less emphasis on the number of publications and journal impact factors (Casadevall and Fang, 2012; Moher et al., 2018; Bertuzzi and Jamaleddine, 2016; Lundwall, 2019; Strech et al., 2020; Casci and Adams, 2020; see also https://sfdora.org/read). When publications are being evaluated, rigorously obtained null results should be valued as highly as positive findings. Institutional leaders are also uniquely placed to ensure that scientific rigor is properly taught to trainees and incorporated into day-to-day lab work (Casadevall et al., 2016; Begley et al., 2015; Bosch, 2018; Button et al., 2020). Moreover, evaluations of trainees should emphasize experimental and analytic skills rather than where papers are published.
Building an educational resource for rigorous research
The establishment of communities of rigor champions will set the stage for the creation of an educational platform designed by the scientific community to communicate the principles of rigorous research. Given the rapid evolution of technologies and learning practices, it is difficult to predict what resource formats will be most effective in the future, so the platform will need to be open and freely available, easily discoverable, engaging, modular, adaptable, and upgradable. It will also need to be available during coursework and beyond so that scientists can use it to answer questions when they are doing research or as part of life-long learning (Figure 1). This means that the platform will have to embody a number of principles of effective teaching and mentoring (see Table 2).
We envision a comprehensive resource that can be used by scientists at all stages of their careers to explore the principles of rigorous research at various levels of detail. The resource would comprise modules on a range of subjects (such as reducing cognitive biases), each containing a number of topics (such as blinding), each in turn containing a number of lessons (such as practical examples).
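The nested module → topic → lesson structure described above could be represented as, for example, the following sketch (purely illustrative; the class and field names are ours, not a specification of the envisioned platform):

```python
# Hypothetical sketch of the module -> topic -> lesson hierarchy described
# in the text; names and example content are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Lesson:
    title: str  # e.g. a practical example walked through in detail


@dataclass
class Topic:
    title: str  # e.g. "Blinding"
    lessons: List[Lesson] = field(default_factory=list)


@dataclass
class Module:
    title: str  # e.g. "Reducing cognitive biases"
    topics: List[Topic] = field(default_factory=list)


# One module, containing one topic, containing one lesson:
biases = Module(
    title="Reducing cognitive biases",
    topics=[Topic(title="Blinding",
                  lessons=[Lesson(title="Practical examples")])],
)
```

Such a hierarchy would let learners enter at any level of detail, which is the modularity the text calls for.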
| Key element | Teaching and learning principle |
|---|---|
|  | Define the learning objectives upfront, identify ways to measure achievement of these objectives, and then design activities to support learning. |
|  | Encourage students to pose their own questions, apply commonly used tools and methods to actively explore their questions, and provide evidence when explaining phenomena. |
|  | Provide feedback on real-world experiments, whether in the classroom or the laboratory, as a way to demonstrate relevance and stimulate interest. Opportunities for personalized application and discussion in the local setting with the help of a facilitator’s guide are particularly critical, as adults typically learn most effectively when given the opportunity for immediate personal utility and value. Emphasize the ability to contribute to a larger purpose or gain social standing. |
|  | Include a range of approaches to teaching and learning to accommodate different levels of knowledge and skills, motivations, and senses of self-efficacy. |
|  | Allow individuals to gain self-efficacy by experiencing a feeling of progress, being challenged in low-stakes environments, and working through confusing concepts successfully. This is more effective when the person feels psychologically safe to take risks and fail in front of their local scientific community. |
|  | Facilitate learning, foster collaboration, and recognize diverse perspectives in order to encourage learners to gain agency and forge a connection with the intellectual community. |
|  | Include complexity and inconsistencies in training examples rather than simplifying for the sake of a persuasive story. This counteracts the drive to smooth over inconvenient but potentially important details and highlights the importance of confounding variables, potential artefactual influences, reproducibility, and robustness of the findings. |
|  | Nurture positive behaviors, like acknowledging and learning from mistakes, rather than penalizing imperfect practices. Mentors at all career stages are encouraged to model these positive behaviors and to share their own failures, the drudgery and frustrations of science, and their approaches to coping emotionally and growing intellectually while maintaining rigorous research practices. |
|  | Measure success via gains in learner competency and changes to their real-world approaches to research. Changes in laboratory practice could be assessed by user self-reports, by analysis of research presented at meetings and in publications, or by querying scientists on whether discussions with their mentors and colleagues led to changes in laboratory and institutional culture. Collaborate from the beginning with individuals who specialize in assessment design in higher education settings. |
We envision the platform being developed via a hub-and-spoke approach, as discussed at a recent National Advisory Neurological Disorders and Stroke Council meeting. A centralized mechanism (the 'hub') will provide financial and infrastructural support and guidance (possibly via a steering committee) and facilitate sharing and coordination between groups, while rigor champions will come together to design specific modules (the 'spokes') for the platform, using existing resources or designing new ones from scratch as needed. We envision worldwide teams of experts collaborating on building and testing the resource. Rigor champions with experience in defining clear learning objectives, building curricula, and evaluating success, for example, will collaborate with content experts to design the topics needed in the resource. Importantly, potential users will be involved from the beginning of the development stage, and onwards through the design and implementation stages, to provide feedback about effectiveness and usability.
Given the importance of being able to measure the effectiveness (or otherwise) of the platform (Table 2), individual components should be released publicly as they are completed to allow educators and users to iteratively test and improve the resource as it unfolds. As with science itself, the developers will need to experiment with content and delivery. If the resource does not improve the comprehension and research practice of individuals, or add value to the research community, rigorous approaches should be applied to improve it.
Once a functioning and effective resource has been built, it will be essential to promote its use and adoption. One approach would be to host 'train-the-trainer' programs (Spencer et al., 2018; Pfund et al., 2006): those involved in building the resource share it with small groups of mentors, who are then better equipped to use the resource with their own mentees and to encourage their colleagues to use it. This form of dissemination also creates buy-in from mentors who need to model the behaviors they are teaching. Rigor champions, meanwhile, can encourage their institutions and colleagues to adopt and use the resource.
Setting up and supporting communities of rigor champions and developing educational resources on rigorous research will be complex and likely require multiple sources of support. However, with the participation of all sectors of the scientific enterprise, the actions proposed herein should, within a decade, lead to improvements in the culture of science as well as improvements in the design, conduct, analysis, and reporting of biomedical research. The result will be a healthier and more effective scientific community.
The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government.
Biographies
Walter J Koroshetz is at the National Institute of Neurological Disorders and Stroke, Rockville, MD, United States
Shannon Behrman is at iBiology, San Francisco, CA, United States
Cynthia J Brame is at the Center for Teaching and Department of Biological Sciences, Vanderbilt University, Nashville, TN, United States
Janet L Branchaw is in the Department of Kinesiology and Wisconsin Institute for Science Education and Community Engagement, University of Wisconsin - Madison, Madison, WI, United States
Emery N Brown is in the Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, and the Department of Brain and Cognitive Science, Institute of Medical Engineering and Sciences, the Picower Institute for Learning and Memory, and the Institute for Data Systems and Society, Massachusetts Institute of Technology, Cambridge, MA, United States
Erin A Clark is in the Department of Biology and Program in Neuroscience, Brandeis University, Waltham, MA, United States
David Dockterman is at the Harvard Graduate School of Education, Harvard University, Cambridge, MA, United States
Jordan J Elm is in the Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, United States
Pamela L Gay is at the Planetary Science Institute, Tucson, AZ, United States
Katelyn M Green is in the Cellular and Molecular Biology Graduate Program, University of Michigan, Ann Arbor, MI, United States
Sherry Hsi is with The Concord Consortium, Emeryville, CA, United States
Michael G Kaplitt is in the Department of Neurological Surgery, Weill Cornell Medical College, New York, NY, United States
Benedict J Kolber is in the Department of Biological Sciences, Duquesne University, Pittsburgh, PA, United States
Alex L Kolodkin is in the Solomon H. Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, MD, United States
Diane Lipscombe is in the Carney Institute for Brain Science, Department of Neuroscience, Brown University, Providence, RI, United States
Malcolm R MacLeod is in the Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
Caleb C McKinney is in Biomedical Graduate Education, Georgetown University Medical Center, Washington, DC, United States
Marcus R Munafò is in the MRC Integrative Epidemiology Unit, School of Psychological Science, University of Bristol, Bristol, United Kingdom
Barbara Oakley is at Oakland University, Rochester, MI, United States
Jeffrey T Olimpo is in the Department of Biological Sciences, The University of Texas at El Paso, El Paso, TX, United States
Nathalie Percie du Sert is in the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), London, United Kingdom
Indira M Raman is in the Department of Neurobiology, Northwestern University, Evanston, IL, United States
Ceri Riley is with Complexly, Missoula, MT, United States
Amy L Shelton is at the Center for Talented Youth and School of Education, Johns Hopkins University, Baltimore, MD, United States
Stephen Miles Uzzo is at the New York Hall of Science, Flushing Meadows Corona Park, NY, United States
Devon C Crawford is at the National Institute of Neurological Disorders and Stroke, Rockville, MD, United States
Shai D Silberberg is at the National Institute of Neurological Disorders and Stroke, Rockville, MD, United States
Funding Statement
Funded by the National Institute of Neurological Disorders and Stroke (NINDS).
Competing interests
No competing interests declared.
Author contributions
Conceptualization, Writing - review and editing.
Conceptualization, Writing - original draft, Writing - review and editing. DCC and SDS wrote the manuscript; all authors provided intellectual input and contributed to the editing of the manuscript.
- Alberts B, Cicerone RJ, Fienberg SE, Kamb A, McNutt M, Nerem RM, Schekman R, Shiffrin R, Stodden V, Suresh S, Zuber MT, Pope BK, Jamieson KH. Self-correction in science at work. Science. 2015;348:1420–1422. doi:10.1126/science.aab3847
- Begley CG, Buchan AM, Dirnagl U. Robust research: institutions must do their part for reproducibility. Nature. 2015;525:25–27. doi:10.1038/525025a
- Bertuzzi S, Jamaleddine Z. Capturing the value of biomedical research. Cell. 2016;165:9–12. doi:10.1016/j.cell.2016.03.004
- Bjork RA, Dunlosky J, Kornell N. Self-regulated learning: beliefs, techniques, and illusions. Annual Review of Psychology. 2013;64:417–444. doi:10.1146/annurev-psych-113011-143823
- Bosch G. Train PhD students to be thinkers not just specialists. Nature. 2018;554:277. doi:10.1038/d41586-018-01853-1
- Bosch G, Casadevall A. Graduate biomedical science education needs a new philosophy. mBio. 2017;8:e01539-17. doi:10.1128/mBio.01539-17
- Bradforth SE, Miller ER, Dichtel WR, Leibovich AK, Feig AL, Martin JD, Bjorkman KS, Schultz ZD, Smith TL. University learning: improve undergraduate science education. Nature. 2015;523:282–284. doi:10.1038/523282a
- Brown JS, Adler RP. Minds on fire: open education, the long tail, and learning 2.0. EDUCAUSE Review. 2008;43:16–32.
- Button KS, Chambers CD, Lawrence N, Munafò MR. Grassroots training for reproducible science: a consortium-based approach to the empirical dissertation. Psychology Learning & Teaching. 2020;19:77–90. doi:10.1177/1475725719857659
- Casadevall A, Ellis LM, Davies EW, McFall-Ngai M, Fang FC. A framework for improving the quality of research in the biological sciences. mBio. 2016;7:e01256. doi:10.1128/mBio.01256-16
- Casadevall A, Fang FC. Reforming science: methodological and cultural reforms. Infection and Immunity. 2012;80:891–896. doi:10.1128/IAI.06183-11
- Casadevall A, Fang FC. Rigorous science: a how-to guide. mBio. 2016;7:e01902. doi:10.1128/mBio.01902-16
- Casci T, Adams E. Setting the right tone. eLife. 2020;9:e55543. doi:10.7554/eLife.55543
- Coleman B. Science writing: too good to be true? New York Times. 1987. https://www.nytimes.com/1987/09/27/books/sceince-writing-too-good-to-be-true.html (accessed February 29, 2020)
- Collins FS, Tabak LA. NIH plans to enhance reproducibility. Nature. 2014;505:612–613. doi:10.1038/505612a
- Corwin LA, Graham MJ, Dolan EL. Modeling course-based undergraduate research experiences: an agenda for future research and evaluation. CBE—Life Sciences Education. 2015;14:es1. doi:10.1187/cbe.14-10-0167
- Cressey D. UK funders demand strong statistics for animal studies. Nature. 2015;520:271–272. doi:10.1038/520271a
- Dirnagl U, Kurreck C, Castaños-Vélez E, Bernard R. Quality management for academic laboratories: burden or boon? EMBO Reports. 2018;19:e47143. doi:10.15252/embr.201847143
- Dirnagl U. Resolving the tension between exploration and confirmation in preclinical biomedical research. In: Handbook of Experimental Pharmacology. Berlin, Heidelberg: Springer; 2019.
- D’Mello S, Lehman B, Pekrun R, Graesser A. Confusion can be beneficial for learning. Learning and Instruction. 2014;29:153–170. doi:10.1016/j.learninstruc.2012.05.003
- Handelsman J, Ebert-May D, Beichner R, Bruns P, Chang A, DeHaan R, Gentile J, Lauffer S, Stewart J, Tilghman SM, Wood WB. Scientific teaching. Science. 2004;304:521–522. doi:10.1126/science.1096022
- Howitt SM, Wilson AN. Revisiting “Is the scientific paper a fraud?” EMBO Reports. 2014;15:481–484. doi:10.1002/embr.201338302
- Ioannidis JPA, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. The Lancet. 2014;383:166–175. doi:10.1016/S0140-6736(13)62227-8
- Kelner KL. Playing our part. Science Translational Medicine. 2013;5:190ed7. doi:10.1126/scitranslmed.3006661
- Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, Crystal RG, Darnell RB, Ferrante RJ, Fillit H, Finkelstein R, Fisher M, Gendelman HE, Golub RM, Goudreau JL, Gross RA, Gubitz AK, Hesterlee SE, Howells DW, Huguenard J, Kelner K, Koroshetz W, Krainc D, Lazic SE, Levine MS, Macleod MR, McCall JM, Moxley RT, Narasimhan K, Noble LJ, Perrin S, Porter JD, Steward O, Unger E, Utz U, Silberberg SD. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490:187–191. doi:10.1038/nature11556
- Lundwall RA. Changing institutional incentives to foster sound scientific practices: one department. Infant Behavior and Development. 2019;55:69–76. doi:10.1016/j.infbeh.2019.03.006
- MacLeod MR, Lawson McLean A, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, Hirst T, Hemblade R, Bahor Z, Nunes-Fonseca C, Potluru A, Thomson A, Baginskaite J, Baginskitae J, Egan K, Vesterinen H, Currie GL, Churilov L, Howells DW, Sena ES. Risk of bias in reports of in vivo research: a focus for improvement. PLOS Biology. 2015;13:e1002273. doi:10.1371/journal.pbio.1002273
- McNutt M. Journals unite for reproducibility. Science. 2014;346:679. doi:10.1126/science.aaa1724
- Minner DD, Levy AJ, Century J. Inquiry-based science instruction—what is it and does it matter? Results from a research synthesis years 1984 to 2002. Journal of Research in Science Teaching. 2010;47:474–496. doi:10.1002/tea.20347
- Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN. Assessing scientists for hiring, promotion, and tenure. PLOS Biology. 2018;16:e2004089. doi:10.1371/journal.pbio.2004089
- Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, Simonsohn U, Wagenmakers E-J, Ware JJ, Ioannidis JPA. A manifesto for reproducible science. Nature Human Behaviour. 2017;1:0021. doi:10.1038/s41562-016-0021
- Munafò MR, Chambers CD, Collins AM, Fortunato L, Macleod MR. Research culture and reproducibility. Trends in Cognitive Sciences. 2020;24:91–93. doi:10.1016/j.tics.2019.12.002
- Munafò MR, Davey Smith G. Robust research needs many lines of evidence. Nature. 2018;553:399–401. doi:10.1038/d41586-018-01023-3
- National Research Council. Enhancing the Effectiveness of Team Science. The National Academies Press; 2015.
- Nature. Reducing our irreproducibility. Nature. 2013;496:398. doi:10.1038/496398a
- Nosek BA, Spies JR, Motyl M. Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science. 2012;7:615–631. doi:10.1177/1745691612459058
- Pfund C, Maidl Pribbenow C, Branchaw J, Miller Lauffer S, Handelsman J. The merits of training mentors. Science. 2006;311:473–474. doi:10.1126/science.1123806
- PLOS Biology. Fifteen years in, what next for PLOS Biology? PLOS Biology. 2018;16:e3000049. doi:10.1371/journal.pbio.3000049
- Raman IM. How to be a graduate advisee. Neuron. 2014;81:9–11. doi:10.1016/j.neuron.2013.12.030
- Silberberg SD, Crawford DC, Finkelstein R, Koroshetz WJ, Blank RD, Freeze HH, Garrison HH, Seger YR. Shake up conferences. Nature. 2017;548:153–154. doi:10.1038/548153a
- Spencer KC, McDaniels M, Utzerath E, Rogers JG, Sorkness CA, Asquith P, Pfund C. Building a sustainable national infrastructure to expand research mentor training. CBE—Life Sciences Education. 2018;17:ar48. doi:10.1187/cbe.18-03-0034
- Strech D, Weissgerber T, Dirnagl U, QUEST Group. Improving the trustworthiness, usefulness, and ethics of biomedical research through an innovative and comprehensive institutional initiative. PLOS Biology. 2020;18:e3000576. doi:10.1371/journal.pbio.3000576
- Walkington C, Bernacki ML. Personalization of instruction: design dimensions and implications for cognition. The Journal of Experimental Education. 2018;86:50–68. doi:10.1080/00220973.2017.1380590
- Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond "p < 0.05". The American Statistician. 2019;73:1–19. doi:10.1080/00031305.2019.1583913
- Yeager DS, Henderson MD, Paunesku D, Walton GM, D'Mello S, Spitzer BJ, Duckworth AL. Boring but important: a self-transcendent purpose for learning fosters academic self-regulation. Journal of Personality and Social Psychology. 2014;107:559–580. doi:10.1037/a0037637
Using the TACT Framework to Learn the Principles of Rigour in Qualitative Research
Electronic Journal of Business Research Methods
Assessing the quality of qualitative research to ensure rigour in the findings is critical, especially if findings are to contribute to theory and be utilised in practice. However, teaching students concepts of rigour and how to apply them to their research is challenging. This article presents a generic framework of rigour with four critical dimensions—Trustworthiness, Auditability, Credibility and Transferability (TACT)—intended to teach issues of rigour to postgraduate students and those new to qualitative research methodology. The framework enables them to explore the key dimensions necessary for assessing the rigour of qualitative research studies, with checklist questions against each of the dimensions. TACT was offered through 10 workshops, attended by 64 participants. Participants positively evaluated the workshops and reported that the workshops enabled them to learn the principles of qualitative research and better understand issues of rigour. Work presented in the article...
A Review of the Quality Indicators of Rigor in Qualitative Research

- Jessica L. Johnson, PharmD, William Carey University School of Pharmacy, Biloxi, Mississippi (corresponding author: 19640 Hwy 67, Biloxi, MS 39574; tel: 228-702-1897)
- Donna Adkins, PharmD, William Carey University School of Pharmacy, Biloxi, Mississippi
- Sheila Chauvin, PhD, Louisiana State University, School of Medicine, New Orleans, Louisiana
- qualitative research design
- standards of rigor
- best practices
INTRODUCTION
BEST PRACTICES: STEP-WISE APPROACH
Step 1: Identifying a Research Topic
Step 2: Qualitative Study Design
Step 3: Data Analysis
Step 4: Drawing Valid Conclusions
Step 5: Reporting Research Results
Article info
DOI: https://doi.org/10.5688/ajpe7120
3.7 Quantitative Rigour
Rigour refers to the extent to which researchers strive to enhance the quality of their study. In quantitative research, rigour is demonstrated by assessing validity and reliability. 55 These concepts affect the quality of findings and their applicability to broader populations.
Validity refers to the accuracy of a measure. It is the extent to which a study or test accurately measures what it sets out to measure. There are three main types of validity – content, construct and criterion validity.
- Content validity: Content validity examines whether the instrument adequately covers all aspects of the content it should with respect to the variable under investigation. 56 This type of validity can be assessed through expert judgment and by examining the coverage of items or questions in the measure. 56 Face validity is a subset of content validity in which experts are consulted to determine whether a measurement tool appears to capture what it is supposed to measure. 56 Two common methods for testing content validity are the content validity index (CVI) and the content validity ratio (CVR). The CVI for an item is calculated as the number of experts rating that item "very relevant" divided by the total number of experts. Values range from 0 to 1: items with a CVI above 0.79 are considered relevant, items scoring between 0.70 and 0.79 need revision, and items below 0.70 are eliminated. 57 The CVR varies between −1 and 1, with a higher score indicating greater agreement among panellists. It is calculated as (Ne − N/2)/(N/2), where Ne is the number of panellists rating an item as "essential" and N is the total number of panellists. 57 A study by Mousazadeh et al. (2017) investigated the content validity, face validity, and reliability of the Sociocultural Attitudes Towards Appearance Questionnaire-3 (SATAQ-3) among female adolescents. 58 To ensure face validity, the questionnaire was given to 25 female adolescents, a psychologist, and three nurses, who evaluated the items for problems, ambiguity, relativity, proper terms and grammar, and understandability. For content validity, 15 experts in psychology and nursing assessed the qualitative content validity, and the content validity index and content validity ratio were calculated to determine the quantitative content validity. 58
- Construct validity: A construct is an idea or theoretical concept, based on empirical observations, that is not directly measurable; examples include physical functioning and social anxiety. Construct validity therefore determines whether an instrument measures the underlying construct of interest and discriminates it from other related constructs, and establishing it expresses confidence that a particular construct is valid. 55 This type of validity can be assessed using factor analysis or other statistical techniques. For example, Pinar (2005) evaluated the reliability and construct validity of the SF-36, which is widely used to measure quality of life or health status in sick and healthy populations, in Turkish cancer patients. 59 Principal components factor analysis with varimax rotation confirmed the presence of seven domains in the SF-36: physical functioning, role limitations due to physical and emotional problems, mental health, general health perception, bodily pain, social functioning, and vitality. It was concluded that the Turkish version of the SF-36 is a suitable instrument for cancer research in Turkey. 59
- Criterion validity: Criterion validity is the relationship between an instrument's score and some external criterion. This criterion, considered the "gold standard", must be a widely accepted measure that shares the same characteristics as the assessment tool. 55 Determining the validity of a new diagnostic test requires two principal factors: sensitivity and specificity. 60 Sensitivity is the probability that the test detects those who have the disease, while specificity is the probability that the test correctly identifies those who do not. 60 For example, reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard for COVID-19 testing, but its results become available only several hours to days after testing, whereas rapid antigen tests can be used at the point of care, with results obtained within 30 minutes. 61, 62 The validity of these rapid antigen tests was therefore determined against the gold standard. 61, 62 Two published articles assessing rapid antigen tests reported sensitivities of 71.43% and 78.3% and specificities of 99.68% and 99.5%, respectively, 61, 62 indicating that the tests were less effective at identifying those who have the disease but highly effective at identifying those who do not. While it is important to assess the accuracy of the instruments used, it is also imperative to determine whether the measures and findings are reliable.
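Each of the validity statistics above reduces to simple arithmetic. The following Python sketch is purely illustrative: the expert ratings and the 2x2 test counts are invented, not drawn from the studies cited. It computes a per-item CVI and CVR, and the sensitivity and specificity of a diagnostic test:

```python
def content_validity_index(ratings, relevant_label="very relevant"):
    """CVI for one item: share of experts rating it 'very relevant' (0 to 1)."""
    return sum(1 for r in ratings if r == relevant_label) / len(ratings)

def content_validity_ratio(n_essential, n_experts):
    """CVR = (Ne - N/2) / (N/2); ranges from -1 to 1."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def sensitivity(true_pos, false_neg):
    """Probability that the test detects those who have the disease."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Probability that the test clears those who do not have the disease."""
    return true_neg / (true_neg + false_pos)

# Hypothetical panel of 10 experts rating one questionnaire item.
ratings = ["very relevant"] * 8 + ["somewhat relevant"] * 2
print(content_validity_index(ratings))         # 0.8 -> above 0.79, item retained
print(content_validity_ratio(7, 10))           # 0.4 -> modest agreement on "essential"

# Hypothetical 2x2 table for a rapid test validated against a gold standard.
print(sensitivity(true_pos=78, false_neg=22))  # 0.78
print(specificity(true_neg=995, false_pos=5))  # 0.995
```

Like the published rapid antigen results quoted above, this hypothetical test misses more true cases (sensitivity 0.78) than it falsely flags (specificity 0.995).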
Reliability
Reliability refers to the consistency of a measure: the ability of a measure or test to reproduce a consistent result over time and across different observers. 55 A reliable measurement tool produces consistent results even when different observers administer the test or when the test is conducted on different occasions. 55, 56 Reliability can be assessed by examining test-retest reliability, inter-rater reliability, and internal consistency.
- Test-retest reliability: Test-retest reliability is the degree of consistency between the outcomes of the same test or measure taken by the same participants at different times; it estimates how repeatable the measurement is. The intraclass correlation coefficient (ICC) is often used to determine test-retest reliability. 56 For example, a study evaluating the reliability of a new tool for measuring pain might administer the tool to a group of patients at two different time points and compare the results. If the results are consistent across the two time points, the tool has good test-retest reliability. However, reliability falls when the interval between administrations of the test is too long; an adequate time span between tests ranges from 10 to 14 days. 56 Pinar (2005) demonstrated this by assessing test-retest stability using the ICC. The retest was conducted two weeks after the first test, two weeks being considered the optimal retest interval: long enough for participants to forget their initial responses, but not so long that most health domains would change. 59
- Inter-observer (between observers) reliability: Also known as inter-rater reliability, this is the level of agreement between two or more observers on the results of an instrument or test, and it is the most common method of determining whether two sets of ratings are equivalent. 55, 56 For example, a study evaluating the reliability of a new tool for measuring depression might have two different raters independently score the same patient on the tool and compare the results. If the results are consistent across the two raters, the tool has excellent inter-rater reliability. The kappa coefficient is a measure used to assess agreement between raters. 56 It has a maximum value of 1.00; the higher the value, the greater the concordance between the raters. 56
- Internal consistency: Internal consistency refers to the extent to which the different items or questions in a test or questionnaire are consistent with one another. It is also known as homogeneity, which indicates whether each component of an instrument measures the same characteristic. 55 This type of reliability can be assessed by calculating Cronbach's alpha (α) coefficient, which measures the correlation between different items or questions. Cronbach's α is expressed as a number between 0 and 1, and a reliability score of 0.7 or above is considered acceptable. 55 For example, Pinar (2005) reported that reliability evaluations of the SF-36 were based on the internal consistency test (Cronbach's α coefficient); the coefficients for the eight subscales of the SF-36 ranged between 0.79 and 0.90, confirming the internal consistency of the subscales. 59
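The three reliability statistics can likewise be sketched in a few lines of Python. This is a minimal illustration, not a validated implementation: it uses the one-way random-effects form of the ICC (published analyses, such as Pinar (2005), may use other ICC forms), a two-rater Cohen's kappa, and population variances in Cronbach's α, and every dataset below is invented:

```python
from collections import Counter

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for test-retest data.
    scores: one row of k repeated measurements per subject."""
    n, k = len(scores), len(scores[0])
    row_means = [sum(row) / k for row in scores]
    grand = sum(row_means) / n
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # between-subject mean square
    msw = sum((x - m) ** 2 for row, m in zip(scores, row_means)
              for x in row) / (n * (k - 1))                       # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (observed - expected) / (1 - expected)

def cronbach_alpha(items):
    """Cronbach's alpha; items: one list of respondent scores per item."""
    k, n = len(items), len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Test-retest: hypothetical pain scores for five patients, 14 days apart.
pain = [[4, 5], [7, 7], [2, 3], [8, 8], [5, 5]]
print(round(icc_oneway(pain), 2))       # 0.96 -> excellent stability

# Inter-rater: two raters classifying ten patients as depressed (1) or not (0).
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))     # 0.6 -> agreement well beyond chance

# Internal consistency: a hypothetical 4-item scale answered by five respondents.
items = [[3, 4, 2, 5, 4], [3, 5, 2, 4, 4], [2, 4, 3, 5, 5], [3, 4, 2, 5, 3]]
print(round(cronbach_alpha(items), 2))  # 0.91 -> above the 0.7 threshold
```

Each function returns 1.0 under perfect agreement and falls toward (or below) 0 as consistency degrades, which is why all three are read against conventional cut-offs rather than in absolute terms.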
Now that you have an understanding of quantitative methodology, try writing a research question that can be answered quantitatively.
An Introduction to Research Methods for Undergraduate Health Profession Students Copyright © 2023 by Faith Alele and Bunmi Malau-Aduli is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.
Abstract. Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework ...
What is rigour? In qualitative research, rigour, or trustworthiness, refers to how researchers demonstrate the quality of their research. 1, 2 Rigour is an umbrella term for several strategies and approaches that recognise the influence on qualitative research by multiple realities; for example, of the researcher during data collection and analysis, and of the participant.
Rigor is demonstrated by this depth of engagement that enables the designer "to reach through to the concealed plums" (Cross, 2001, p. 53). Demonstrating Rigor in Research. It is important to clarify that the requirements for demonstrating rigor in design research and in grounded theory qualitative analysis vary from those required in ...
1967: 244). When conducting qualitative analysis, we generally identify categories in our data. These categories are generally described as codes or groupings of codes, such as the first- and second-order codes and overarching categories often described in classical grounded theory (Strauss & Corbin, 1990).
Qualitative saturation is a technique commonly referenced in inductive research to demonstrate that the dataset is robust in terms of capturing the important variability that exists around the phenomenon of ... Data Collection Protocols and Procedures. In deductive research, constructs and relationships are articulated prior to analysis, and ...
Rigor is a concept that reflects the quality of the process used in capturing, managing, and analyzing our data as we develop this rich understanding. Rigor helps to establish standards through which qualitative research is critiqued and judged, both by the scientific community and by the practitioner community.
This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then ...
inferences through applying systematic inquiry procedures (King, Keohane, & Verba, 1994). Thus, a basic demand for all qualitative research has been for it to be systematic and rigorous, although conceptions of rigor are rooted in, and therefore differ across, paradigms of qualitative research (Hammersley, 2007).
Peer review, another common standard of rigor, is a process by which researchers invite an independent third-party researcher to analyze a detailed audit trail maintained by the study author. The audit trail methodically describes the step-by-step processes and decision-making throughout the study.
Various strategies are available within qualitative research to protect against bias and enhance the reliability of findings. This paper gives examples of the principal approaches and summarises them into a methodological checklist to help readers of reports of qualitative projects to assess the quality of the research. In the health field--with its strong tradition of biomedical research ...
qualitative research is a scientific process that has a valued contribution to make to the advancement of knowledge. Rigour is the means by which we demonstrate integrity and competence (Aroni et al. 1999), a way of demonstrating the legitimacy of the research process. Without rigour, there is a danger that research may become fictional ...
EDITOR—Barbour's article is tantalising and mystifying in equal measure. 1 She is right to counsel qualitative researchers against shielding behind a protective wall of checklists and quasi-paradigmatic research techniques—although the same should be levelled at epidemiologists, statisticians, and health economists, with all researchers being ...
Addressing the complexity of rigour from an Indigenous research methodology may mean thinking outside the box. As noted by Given, Rigor in research involving humans surely means producing results that faithfully reflect lived reality that has validity or truth value for both the Indigenous and scholarly communities.
The traditional criteria for rigor in the quantitative paradigm are well known. These include trustworthiness (internal validity), generalizability (external validity), consistency (reliability), and objectivity (Mays and Pope 2007). Different terminology and criteria for rigor have been proposed for qualitative research, which draw on these same core principles (Mays and Pope 2007).
Reducing qualitative research to a list of technical procedures (such as purposive sampling, grounded theory, multiple coding, triangulation, and respondent validation) is overly prescriptive and results in "the tail wagging the dog". None of these "technical fixes" in itself confers rigour; they can strengthen the rigour of qualitative ...
Abstract. There is a pressing need to increase the rigor of research in the life and biomedical sciences. To address this issue, we propose that communities of 'rigor champions' be established to campaign for reforms of the research culture that has led to shortcomings in rigor. These communities of rigor champions would also assist in the ...
The concept of 'trustworthiness' portrays quality in qualitative research and underpins both rigour in the research process and the relevance, and confidence in the research outcome (Baillie, 2015; Finlay 2006). Also, it is a proxy for establishing the authenticity of the research outcome, and truthfulness of findings (Cypress, 2017).
The challenge of representation is twofold. First, representation requires researchers to find ways to present the process of data analysis in textual and/or visual form in order to publicly disclose the research process and to demonstrate the rigor of the analysis (Anfara et al., 2002; Harry et al., 2005).
Rigor of the Prior Research A careful assessment of the rigor of the prior research that serves as the key support for a proposed project will help applicants identify any weaknesses or gaps in the line of research. Describe the strengths and weaknesses in the rigor of the prior research (both
Keywords. Research, rigour, discipline, systematic, truth, alternative epistemology. The interconnected notions of rigour, discipline and systematic inquiry play a central role in the discourse of research, including educational research. For some (see the argument that follows this introduction) they almost define what form of inquiry will ...
Guided by the research cooperative, RCs in the current project collaborated on many aspects of the qualitative data activities (e.g., codebook development and coding activities); however, pre-analysis procedures, such as organizing and managing resources, were primarily managed at the RC level.