2.1 Why Is Research Important?

Learning objectives.

By the end of this section, you will be able to:

  • Explain how scientific research addresses questions about behavior
  • Discuss how scientific research guides public policy
  • Appreciate how scientific research can be important in making personal decisions

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession ( Figure 2.2 ). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Use of Research Information

Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the explosion in our use of technology has led researchers to question whether this ultimately helps or hinders us. The use and implementation of technology in educational settings has become widespread over the last few decades. Researchers are coming to different conclusions regarding the use of technology. To illustrate this point, a study investigating a smartphone app targeting surgery residents (graduate students in surgery training) found that the use of this app can increase student engagement and raise test scores (Shaw & Tan, 2015). Conversely, another study found that the use of technology in undergraduate student populations had negative impacts on sleep, communication, and time management skills (Massimini & Peterson, 2009). Until sufficient amounts of research have been conducted, there will be no clear consensus on the effects that technology has on a student's acquisition of knowledge, study skills, and mental health.

In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.

We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding early intervention programs. These programs are designed to help children who come from low-income backgrounds, have special needs, or face other disadvantages. These programs may involve providing a wide variety of services to maximize the children's development and position them for optimal levels of success in school and later in life (Blann, 2005). While such programs sound appealing, you would want to be sure that they also proved effective before investing additional money in these programs. Fortunately, psychologists and other scientists have conducted vast amounts of research on such programs and, in general, the programs are found to be effective (Neil & Christensen, 2009; Peters-Scheffer, Didden, Korzilius, & Sturmey, 2011). While not all programs are equally effective, and the short-term effects of many such programs are more pronounced, there is reason to believe that many of these programs produce long-term benefits for participants (Barnett, 2011). If you are committed to being a good steward of taxpayer money, you would want to look at research. Which programs are most effective? What characteristics of these programs make them effective? Which programs promote the best outcomes? After examining the research, you would be best equipped to make decisions about which programs to fund.

Link to Learning

Watch this video about early childhood program effectiveness to learn how scientists evaluate effectiveness and how best to invest money into programs that are most effective.

Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine that your sister, Maria, expresses concern about her two-year-old child, Umberto. Umberto does not speak as much or as clearly as the other children in his daycare or others in the family. Umberto's pediatrician undertakes some screening and recommends an evaluation by a speech pathologist, but does not refer Maria to any other specialists. Maria is concerned that Umberto's speech delays are signs of a developmental disorder, but Umberto's pediatrician does not; she sees indications of differences in Umberto's jaw and facial muscles. Hearing this, you do some internet searches, but you are overwhelmed by the breadth of information and the wide array of sources. You see blog posts, top-ten lists, advertisements from healthcare providers, and recommendations from several advocacy organizations. Why are there so many sites? Which are based in research, and which are not?

In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.

NOTABLE RESEARCHERS

Psychological research has a long history involving important figures from diverse backgrounds. While the introductory chapter discussed several researchers who made significant contributions to the discipline, there are many more individuals who deserve attention in considering how psychology has advanced as a science through their work ( Figure 2.3 ). For instance, Margaret Floy Washburn (1871–1939) was the first woman to earn a PhD in psychology. Her research focused on animal behavior and cognition (Margaret Floy Washburn, PhD, n.d.). Mary Whiton Calkins (1863–1930) was a preeminent first-generation American psychologist who opposed the behaviorist movement, conducted significant research into memory, and established one of the earliest experimental psychology labs in the United States (Mary Whiton Calkins, n.d.).

Francis Sumner (1895–1954) was the first African American to receive a PhD in psychology in 1920. His dissertation focused on issues related to psychoanalysis. Sumner also had research interests in racial bias and educational justice. Sumner was one of the founders of Howard University’s department of psychology, and because of his accomplishments, he is sometimes referred to as the “Father of Black Psychology.” Thirteen years later, Inez Beverly Prosser (1895–1934) became the first African American woman to receive a PhD in psychology. Prosser’s research highlighted issues related to education in segregated versus integrated schools, and ultimately, her work was very influential in the hallmark Brown v. Board of Education Supreme Court ruling that segregation of public schools was unconstitutional (Ethnicity and Health in America Series: Featured Psychologists, n.d.).

Although the establishment of psychology’s scientific roots occurred first in Europe and the United States, it did not take much time until researchers from around the world began to establish their own laboratories and research programs. For example, some of the first experimental psychology laboratories in South America were founded by Horatio Piñero (1869–1919) at two institutions in Buenos Aires, Argentina (Godoy & Brussino, 2010). In India, Gunamudian David Boaz (1908–1965) and Narendra Nath Sen Gupta (1889–1944) established the first independent departments of psychology at the University of Madras and the University of Calcutta, respectively. These developments provided an opportunity for Indian researchers to make important contributions to the field (Gunamudian David Boaz, n.d.; Narendra Nath Sen Gupta, n.d.).

When the American Psychological Association (APA) was first founded in 1892, all of the members were White males (Women and Minorities in Psychology, n.d.). However, by 1905, Mary Whiton Calkins was elected as the first female president of the APA, and by 1946, nearly one-quarter of American psychologists were female. Psychology became a popular degree option for students enrolled in the nation’s historically Black higher education institutions, increasing the number of Black Americans who went on to become psychologists. Given demographic shifts occurring in the United States and increased access to higher educational opportunities among historically underrepresented populations, there is reason to hope that the diversity of the field will increasingly match the larger population, and that the research contributions made by the psychologists of the future will better serve people of all backgrounds (Women and Minorities in Psychology, n.d.).

The Process of Scientific Research

Scientific knowledge is advanced through a process known as the scientific method . Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In deductive reasoning , ideas are tested in the real world; in inductive reasoning , real-world observations lead to new ideas ( Figure 2.4 ). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.

In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive.

Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favorite fruits—apples, bananas, and oranges—all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwi demonstrate that this generalization is not correct despite it being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes.

For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.

We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.

A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests Figure 2.5 .

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

A scientific hypothesis is also falsifiable , or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviors ( Figure 2.6 ). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and these remain the root of all modern forms of therapy.

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.

As an Amazon Associate we earn from qualifying purchases.

This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction
  • Authors: Rose M. Spielman, William J. Jenkins, Marilyn D. Lovett
  • Publisher/website: OpenStax
  • Book title: Psychology 2e
  • Publication date: Apr 22, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/psychology-2e/pages/1-introduction
  • Section URL: https://openstax.org/books/psychology-2e/pages/2-1-why-is-research-important

© Jan 6, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Front Res Metr Anal

Logo of frontrma

The Use of Research Methods in Psychological Research: A Systematised Review

Salomé elizabeth scholtz.

1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa

Werner de Klerk

Leon t. de beer.

2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as educating young researchers, however, the application thereof is unclear which can be detrimental to the field of psychology. Therefore, this systematised review aimed to determine what research methods are being used, how these methods are being used and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted in 10 topics via predominantly quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the conducted methodology is described. It was also found that articles lacked rigour and transparency in the used methodology which has implications for replicability. In conclusion this article, provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research should utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigation into the use of research methods. Results should prompt the following future research into: a lack or rigour and its implication on replication, the use of certain methods above others, publication bias and choice of sampling method.

Introduction

Psychology is an ever-growing and popular field (Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth and the need for science-based research to base health decisions on (Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011 ; Aanstoos, 2014 ). Research methods are therefore viewed as important tools used by researchers to collect data (Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed method and multi method (Maree, 2016 ). Additionally, researchers also employ various types of literature reviews to address research questions (Grant and Booth, 2009 ). According to literature, what research method is used and why a certain research method is used is complex as it depends on various factors that may include paradigm (O'Neil and Koekemoer, 2016 ), research question (Grix, 2002 ), or the skill and exposure of the researcher (Nind et al., 2015 ). How these research methods are employed is also difficult to discern as research methods are often depicted as having fixed boundaries that are continuously crossed in research (Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies (Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design (Truscott et al., 2010 ).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills (Scott Jones and Goldring, 2015 ), how theories are developed (Ngulube, 2013 ), and the credibility of research results (Levitt et al., 2017 ). This, in turn, can be detrimental to the field (Nind et al., 2015 ), journal publication (Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research (Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing (Earp and Trafimow, 2015 ; Hengartner, 2018 ).

Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. ( 2011 ) identify reviewing articles as the opportunity to examine the development, growth and progress of a research area and overall quality of a journal. Studies such as Lee et al. ( 1999 ) as well as Bluhm et al. ( 2011 ) review of qualitative methods has attempted to synthesis the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology, for example, in the field of Industrial and Organisational psychology Coetzee and Van Zyl ( 2014 ) found that South African publications tend to consist of cross-sectional quantitative research methods with underrepresented longitudinal studies. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer ( 2016 ). Other methods in health psychology, such as Mixed methods research have also been reportedly growing in popularity (O'Cathain, 2009 ).

A broad overview of the use of research methods in the field of psychology as a whole is however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used and for what topics in practice (i.e., journal publications) in order to provide a general perspective of method used in psychology publication. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018 )], method [data-gathering tools (Nieuwenhuis, 2016 )], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009 )], data collection [techniques and research strategy (Maree, 2016 )], and data analysis [discovering information by examining bodies of data (Ktepi, 2016 )]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth ( 2009 ) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009 ).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014 ; Pericall and Taylor, 2014 ; Barr-Walker, 2017 ). With no clear parameters identified in the literature (see Grant and Booth, 2009 ), the sample size of this study was determined by the purpose of the sample (Strydom, 2011 ), and time and cost constraints (Maree and Pietersen, 2016 ). Thus, a non-probability purposive sample (Ritchie et al., 2009 ) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee ( 2015 ) American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017 ). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b ) by means of the Scimago Journal Rank (SJR) indicator developed by Scimago from the algorithm Google PageRank™ (Scimago Journal & Country Rank, 2017 ). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a ). Reasons for the development of the Scimago Journal and Country Rank list was to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017 ), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general with no preference to specific research methods and have full-text access to articles.

The following list of top five journals in 2018 fell within the abovementioned inclusion criteria (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and lastly the (5) Journal of Psychology Applied and Interdisciplinary.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. ( 2016 ) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually (Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

An external file that holds a picture, illustration, etc.
Object name is frma-05-00001-g0001.jpg

Systematised review procedure.

According to Johnston et al. ( 2019 ), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to full search and data produced in tabular form (Grant and Booth, 2009 ). The rigorous application of the systematic review is, therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data (Johnston et al., 2019 ). This protocol was similar to that of Ferreira et al. ( 2016 ) and approved by three research committees/stakeholders and the researchers (Johnston et al., 2019 ). The eligibility criteria for article inclusion was based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015 ; Johnston et al., 2019 ). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015 ). Screening for appropriate articles for inclusion forms an integral part of a systematic review process (Johnston et al., 2019 ). This step was applied to two aspects of this research study: the choice of eligible journals and articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim, i.e., interview articles or discussions etc., were excluded.

To ensure rigourous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019 ). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019 ). Data was categorised according to the codes of interest, along with article identifiers for audit trails such as authors, title and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019 ) and synthesised in tabular form under methods used, how these methods were used, and for what topics in the field of psychology.

The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).

An external file that holds a picture, illustration, etc.
Object name is frma-05-00001-g0002.jpg

Journal article frequency.

Codes used to form themes (research topics).

Social Psychology31Aggression SP, Attitude SP, Belief SP, Child abuse SP, Conflict SP, Culture SP, Discrimination SP, Economic, Family illness, Family, Group, Help, Immigration, Intergeneration, Judgement, Law, Leadership, Marriage SP, Media, Optimism, Organisational and Social justice, Parenting SP, Politics, Prejudice, Relationships, Religion, Romantic Relationships SP, Sex and attraction, Stereotype, Violence, Work
Experimental Psychology17Anxiety, stress and PTSD, Coping, Depression, Emotion, Empathy, Facial research, Fear and threat, Happiness, Humor, Mindfulness, Mortality, Motivation and Achievement, Perception, Rumination, Self, Self-efficacy
Cognitive Psychology12Attention, Cognition, Decision making, Impulse, Intelligence, Language, Math, Memory, Mental, Number, Problem solving, Reading
Health Psychology7Addiction, Body, Burnout, Health, Illness (Health Psychology), Sleep (Health Psychology), Suicide and Self-harm
Physiological Psychology6Gender, Health (Physiological psychology), Illness (Physiological psychology), Mood disorders, Sleep (Physiological psychology), Visual research
Developmental Psychology3Attachment, Development, Old age
Personality3Machiavellian, Narcissism, Personality
Psychological Psychology3Programme, Psychology practice, Theory
Education and Learning1Education and Learning
Psychometrics1Measure
Code Total84

These 10 themes represent the topic section of our research question ( Figure 3 ). All these topics except, for the final one, psychological practice , were found to concur with the research areas in psychology as identified by Weiten ( 2010 ). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten ( 2010 ). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology; but to broader fields that may encompass sub-interests of these disciplines.

An external file that holds a picture, illustration, etc.
Object name is frma-05-00001-g0003.jpg

Topic frequency (international sample).

In the case of developmental psychology , researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology , on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology is not the only theme that uses experimental research and focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010 ). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set of results represents a broad overview of the results from the perspective of the methodology employed. The second set of results are discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of methods use in psychology (across all themes), while the table format provides readers with in-depth insight into methods used in the individual themes identified. We believe that presenting the data from both perspectives allow readers a broad understanding of the results. Due a large amount of information that made up our results, we followed Cichocka and Jost ( 2014 ) in simplifying our results. Please note that the numbers indicated in the table in terms of methodology differ from the total number of articles. Some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows is the results for what methods are used, how these methods are used, and which topics in psychology they are applied to . Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.

Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) compared to all other research methods. Qualitative research was the second most common research method but only made up about 4.79% of the general method usage. Reviews occurred almost as much as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4 .

Research methods in psychology.

Quantitative4011626960525248283813
Qualitative28410523501
Review115203411301
Mixed Methods7000101100
Multi-method0000000010
Total4471717260615853473915

An external file that holds a picture, illustration, etc.
Object name is frma-05-00001-g0004.jpg

Research method frequency in topics.

Secondly, in the case of how these research methods are employed , our study indicated the following.

Sampling −78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included broad categorisation of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for sampling methods employed in each topic.

Sampling use in the field of psychology.

Not stated3311534557494343383114
Convenience sampling558101689261
Random sampling15391220211
Snowball sampling14441200300
Purposive sampling6020020310
Cluster sampling8120020000
Stratified sampling4120110000
Non-probability sampling4010000010
Probability sampling3100000000
Quota sampling1010000000
Criterion sampling1000000000
Self-selection sampling1000000000
Unsystematic sampling0100000000
Total4431727660605852484016

An external file that holds a picture, illustration, etc.
Object name is frma-05-00001-g0005.jpg

Sampling method frequency in topics.

Designs were categorised based on the articles' statement thereof. Therefore, it is important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to a lack of experiments and any other indication of design, which, according to Laher ( 2016 ), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of data, as it could include the use of correlational/cross-sectional designs, which were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

From the 36 identified designs the most popular designs were cross-sectional (23.17%) and experimental (25.64%), which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, was used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), as well as narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review designs being systematic reviews (0.57%). The few mixed method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see how these designs were employed in each specific topic in Table 4 , Figure 6 .

Design use in the field of psychology.

Experimental design828236010128643
Non-experimental design1153051013171313143
Cross-sectional design123311211917215132
Correlational design5612301022042
Not stated377304241413
Longitudinal design21621122023
Quasi-experimental design4100002100
Systematic review3000110100
Cross-cultural design3001000100
Descriptive design2000003000
Ethnography4000000000
Literature review1100110000
Interpretative Phenomenological Analysis (IPA)2000100000
Narrative design1000001100
Case-control research design0000020000
Concurrent data collection design1000100000
Grounded Theory1000100000
Narrative review0100010000
Auto-ethnography1000000000
Case series evaluation0000000100
Case study1000000000
Comprehensive review0100000000
Descriptive-inferential0000000010
Explanatory sequential design1000000000
Exploratory mixed-method0000100100
Grounded ethnographic design0100000000
Historical cohort design0100000000
Historical research0000000100
interpretivist approach0000000100
Meta-review1000000100
Prospective design1000000000
Qualitative review0000000100
Qualitative systematic review0000010000
Short-term prospective design0100000000
Total4611757463635856483916

An external file that holds a picture, illustration, etc.
Object name is frma-05-00001-g0006.jpg

Design frequency in topics.

Data collection and analysis —data collection included 30 methods, with the data collection method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method, which included established or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis categorised into ±188 various data analysis techniques shown in Table 6 and Figures 1 – 7 . Descriptive statistics were the most commonly used (23.49%) along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or different forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.

Data collection in the field of psychology.

Questionnaire3641136542405139243711
Experimental task68663529511551
Cognitive ability test957112615110
Physiological measure31216253010
Interview19301302201
Online scholarly literature104003401000
Open-ended questions15301312300
Semi-structured interviews10300321201
Observation10100000020
Documents5110000120
Focus group6120100000
Not stated2110001401
Public data6100000201
Drawing task0201110200
In-depth interview6000100000
Structured interview0200120010
Writing task1000400100
Questionnaire interviews1010201000
Non-experimental task4000000000
Tests2200000000
Group accounts2000000100
Open-ended prompts1100000100
Field notes2000000000
Open-ended interview2000000000
Qualitative questions0000010001
Social media1000000010
Assessment procedure0001000000
Closed-ended questions0000000100
Open discussions1000000000
Qualitative descriptions1000000000
Total55127375116797365605017

An external file that holds a picture, illustration, etc.
Object name is frma-05-00001-g0007.jpg

Data collection frequency in topics.

Data analysis in the field of psychology.

Not stated5120011501
Actor-Partner Interdependence Model (APIM)4000000000
Analysis of Covariance (ANCOVA)17813421001
Analysis of Variance (ANOVA)112601629151715653
Auto-regressive path coefficients0010000000
Average variance extracted (AVE)1000010000
Bartholomew's classification system1000000000
Bayesian analysis3000100000
Bibliometric analysis1100000100
Binary logistic regression1100141000
Binary multilevel regression0001000000
Binomial and Bernoulli regression models2000000000
Binomial mixed effects model1000000000
Bivariate Correlations321030435111
Bivariate logistic correlations1000010000
Bootstrapping391623516121
Canonical correlations0000000020
Cartesian diagram1000000000
Case-wise diagnostics0100001000
Casual network analysis0001000000
Categorisation5200110400
Categorisation of responses2000000000
Category codes3100010000
Cattell's scree-test0010000000
Chi-square tests52201756118743
Classic Parallel Analysis (PA)0010010010
Cluster analysis7000111101
Coded15312111210
Cohen d effect size14521323101
Common method variance (CMV)5010000000
Comprehensive Meta-Analysis (CMA)0000000010
Confidence Interval (CI)2000010000
Confirmatory Factor Analysis (CFA)5713400247131
Content analysis9100210100
Convergent validity1000000000
Cook's distance0100100000
Correlated-trait-correlated-method minus one model1000000000
Correlational analysis2598544182731348338
Covariance matrix3010000000
Covariance modelling0110000000
Covariance structure analyses2000000000
Cronbach's alpha61141865108375
Cross-validation0020000001
Cross-lagged analyses1210001000
Dependent t-test1200110100
Descriptive statistics3241324349414336282910
Differentiated analysis0000001000
Discriminate analysis1020000001
Discursive psychology1000000000
Dominance analysis1000000000
Expectation maximisation2100000100
Exploratory data Analysis1100110000
Exploratory Factor Analysis (EFA)145240114040
Exploratory structural equation modelling (ESEM)0010000010
Factor analysis124160215020
Measurement invariance testing0000000000
Four-way mixed ANOVA0101000000
Frequency rate20142122200
Friedman test1000000000
Games-Howell 2200010000
General linear model analysis1200001100
Greenhouse-Geisser correction2500001111
Grounded theory method0000000001
Grounded theory methodology using open and axial coding1000000000
Guttman split-half0010000000
Harman's one-factor test13200012000
Herman's criteria of experience categorisation0000000100
Hierarchical CFA (HCFA)0010000000
Hierarchical cluster analysis1000000000
Hierarchical Linear Modelling (HLM)762223767441
Huynh-Felt correction1000000000
Identified themes3000100000
Independent samples t-test38944483311
Inductive open coding1000000000
Inferential statistics2000001000
Interclass correlation3010000000
Internal consistency3120000000
Interpreted and defined0000100000
Interpretive Phenomenological Analysis (IPA)2100100000
Item fit analysis1050000000
K-means clustering0000000100
Kaiser-meyer-Olkin measure of sampling adequacy2080002020
Kendall's coefficients3100000000
Kolmogorov-Smirnov test1211220010
Lagged-effects multilevel modelling1100000000
Latent class differentiation (LCD)1000000000
Latent cluster analysis0000010000
Latent growth curve modelling (LGCM)1000000110
Latent means1000000000
Latent Profile Analysis (LPA)1100000000
Linear regressions691941031253130
Linguistic Inquiry and Word Count0000100000
Listwise deletion method0000010000
Log-likelihood ratios0000010000
Logistic mixed-effects model1000000000
Logistic regression analyses17010421001
Loglinear Model2000000000
Mahalanobis distances0200010000
Mann-Whitney U tests6421202400
Mauchly's test0102000101
Maximum likelihood method11390132310
Maximum-likelihood factor analysis with promax rotation0100000000
Measurement invariance testing4110100000
Mediation analysis29712435030
Meta-analysis3010000100
Microanalysis1000000000
Minimum significant difference (MSD) comparison0100000000
Mixed ANOVAs196010121410
Mixed linear model0001001000
Mixed-design ANCOVA1100000000
Mixed-effects multiple regression models1000000000
Moderated hierarchical regression model1000000000
Moderated regression analysis8400101010
Monte Carlo Markov Chains2010000000
Multi-group analysis3000000000
Multidimensional Random Coefficient Multinomial Logit (MRCML)0010000000
Multidimensional Scaling2000000000
Multiple-Group Confirmatory Factor Analysis (MGCFA)3000020000
Multilevel latent class analysis1000010000
Multilevel modelling7211100110
Multilevel Structural Equation Modelling (MSEM)2000000000
Multinominal logistic regression (MLR)1000000000
Multinominal regression analysis1000020000
Multiple Indicators Multiple Causes (MIMIC)0000110000
Multiple mediation analysis2600221000
Multiple regression341530345072
Multivariate analysis of co-variance (MANCOVA)12211011010
Multivariate Analysis of Variance (MANOVA)38845569112
Multivariate hierarchical linear regression1100000000
Multivariate linear regression0100001000
Multivariate logistic regression analyses1000000000
Multivariate regressions2100001000
Nagelkerke's R square0000010000
Narrative analysis1000001000
Negative binominal regression with log link0000010000
Newman-Keuls0100010000
Nomological Validity Analysis0010000000
One sample t-test81017464010
Ordinary Least-Square regression (OLS)2201000000
Pairwise deletion method0000010000
Pairwise parameter comparison4000002000
Parametric Analysis0001000000
Partial Least Squares regression method (PLS)1100000000
Path analysis21901245120
Path-analytic model test1000000000
Phenomenological analysis0010000100
Polynomial regression analyses1000000000
Fisher LSD0100000000
Principal axis factoring2140001000
Principal component analysis (PCA)81121103251
Pseudo-panel regression1000000000
Quantitative content analysis0000100000
Receiver operating characteristic (ROC) curve analysis2001000000
Relative weight analysis1000000000
Repeated measures analyses of variances (rANOVA)182217521111
Ryan-Einot-Gabriel-Welsch multiple F test1000000000
Satorra-Bentler scaled chi-square statistic0030000000
Scheffe's test3000010000
Sequential multiple mediation analysis1000000000
Shapiro-Wilk test2302100000
Sobel Test13501024000
Squared multiple correlations1000000000
Squared semi-partial correlations (sr2)2000000000
Stepwise regression analysis3200100020
Structural Equation Modelling (SEM)562233355053
Structure analysis0000001000
Subsequent t-test0000100000
Systematic coding- Gemeinschaft-oriented1000100000
Task analysis2000000000
Thematic analysis11200302200
Three (condition)-way ANOVA0400101000
Three-way hierarchical loglinear analysis0200000000
Tukey-Kramer corrections0001010000
Two-paired sample t-test7611031101
Two-tailed related t-test0110100000
Unadjusted Logistic regression analysis0100000000
Univariate generalized linear models (GLM)2000000000
Variance inflation factor (VIF)3100000010
Variance-covariance matrix1000000100
Wald test1100000000
Ward's hierarchical cluster method0000000001
Weighted least squares with corrections to means and variances (WLSMV)2000000000
Welch and Brown-Forsythe F-ratios0100010000
Wilcoxon signed-rank test3302000201
Wilks' Lamba6000001000
Word analysis0000000100
Word Association Analysis1000000000
scores5610110100
Total173863532919219823722511715255

Results of the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. The remainder of the topics only occurred in 4.0–7.0% of the articles considered. A list of the included 999 articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/ . This website was created by Scholtz et al. ( 2019 ) to visually present a research framework based on this Article's results.

This systematised review categorised full-length articles from five international journals across the span of 5 years to provide insight into the use of research methods in the field of psychology. Results indicated what methods are used how these methods are being used and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and by no means a comprehensive representation of the aforementioned aim due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008 ).

With regard to the methods used, our data stayed true to literature, finding only common research methods (Grant and Booth, 2009 ; Maree, 2016 ) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by literature (Breen and Darlaston-Jones, 2010 ; Counsell and Harlow, 2017 ) and previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014 ). Its long history as the first research method (Leech et al., 2007 ) in the field of psychology as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010 ) might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015 ; Smith and McGannon, 2018 ), quantitative research remains the first choice for article publication in these journals. Despite the included journals indicating openness to articles that apply any research methods. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011 ) or reviewers' standards being higher for qualitative studies (Bluhm et al., 2011 ). Future research is encouraged into the possible biasness in publication of research methods, additionally further investigation with a different sample into the proclaimed growth of qualitative research may also provide different results.

Review studies were found to surpass that of multi-method and mixed method studies. To this effect Grant and Booth ( 2009 ), state that the increased awareness, journal contribution calls as well as its efficiency in procuring research funds all promote the popularity of reviews. The low frequency of mixed method studies contradicts the view in literature that it's the third most utilised research method (Tashakkori and Teddlie's, 2003 ). Its' low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015 ) or that authors prefer publishing in mixed method journals, when using this method, or its relative novelty (Ivankova et al., 2016 ). Despite its low occurrence, the application of the mixed methods design in articles was methodologically clear in all cases which were not the case for the remainder of research methods.

Additionally, a substantial number of studies used a combination of methodologies that are not mixed or multi-method studies. Perceived fixed boundaries are according to literature often set aside, as confirmed by this result, in order to investigate the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015 ). According to Toomela ( 2010 ), this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design) for example. Based on this result, further research into this phenomenon as well as its implications for research methods such as multi and mixed methods is recommended.

Discerning how these research methods were applied, presented some difficulty. In the case of sampling, most studies—regardless of method—did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to literature and debates among academics (Peterson and Merunka, 2014 ; Laher, 2016 ). Samples of convenience and students as participants especially raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014 ). This is because attention to sampling is important as inappropriate sampling can debilitate the legitimacy of interpretations (Onwuegbuzie and Collins, 2017 ). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology as well as the reason for this use could provide interesting insight, and is encouraged by this study.

Additionally, and this is indicated in Table 6 , articles seldom report the research designs used, which highlights the pressing aspect of the lack of rigour in the included sample. Rigour with regards to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020 ). Omitting parts of the research process in publication when it could have been used to inform others' research skills should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs (Fonseca, 2013 ; Laher, 2016 ), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010 , p. 804). The lack of thorough description could be explained by the requirements of certain journals to only report on certain aspects of a research process, especially with regard to the applied design (Laher, 20). However, naming aspects such as sampling and designs, is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018 ). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to only reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires. Researchers would apply a questionnaire in various ways, for example in questionnaire interviews, online surveys, and written questionnaires across most research methods. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field named “psychological practice.” This result may show the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003 ), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learning societies, as testaments of the institutional support and richness social psychology has in the field of psychology (Chryssochoou, 2015 ). The APA's perspective on 2018 trends in psychology also identifies an increased amount of psychology focus on how social determinants are influencing people's health (Deangelis, 2017 ).

This study was not without limitations and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study, despite general journal aims (as stated on journal websites), this inclusion signified a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015 ). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or whether the methods stated adhered to the criteria for the methods used. Thus, a large number of articles that did not clearly indicate a research method or design could influence the results of this review. However, this in itself was also a noteworthy result. Future research could review research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors also encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our research study presented the use of research methods for published articles in the field of psychology as well as recommendations for future research based on these results. Insight into the complex questions identified in literature, regarding what methods are used how these methods are being used and for what topics (why) was gained. This sample preferred quantitative methods, used convenience sampling and presented a lack of rigorous accounts for the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to allow researchers insight into the general use of methods and not only the most frequently used methods. The lack of rigorous account of research methods in articles was represented in-depth for each step in the research process and can be of vital importance to address the current replication crisis within the field of psychology. Recommendations for future research aimed to motivate research into the practical implications of the results for psychology, for example, publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • Aanstoos C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid$=$1&hid$=$4113&bdata$=$JnNpdGU9ZWRzL~WxpdmU%3d#AN$=$93871882&db$=$ers
  • American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/
  • Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report . Am. Psychol. 73 :3. 10.1037/amp0000191 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bandara W., Furtmueller E., Gorbacheva E., Miskon S., Beekhuyzen J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support . Commun. Ass. Inform. Syst. 37 , 154–204. 10.17705/1CAIS.03708 [ CrossRef ] [ Google Scholar ]
  • Barr-Walker J. (2017). Evidence-based information needs of public health workers: a systematized review . J. Med. Libr. Assoc. 105 , 69–79. 10.5195/JMLA.2017.109 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bittermann A., Fischer A. (2018). How to identify hot topics in psychology using topic modeling . Z. Psychol. 226 , 3–13. 10.1027/2151-2604/a000318 [ CrossRef ] [ Google Scholar ]
  • Bluhm D. J., Harman W., Lee T. W., Mitchell T. R. (2011). Qualitative research in management: a decade of progress . J. Manage. Stud. 48 , 1866–1891. 10.1111/j.1467-6486.2010.00972.x [ CrossRef ] [ Google Scholar ]
  • Breen L. J., Darlaston-Jones D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia . Aust. Psychol. 45 , 67–76. 10.1080/00050060903127481 [ CrossRef ] [ Google Scholar ]
  • Burman E., Whelan P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill. [ Google Scholar ]
  • Chaichanasakul A., He Y., Chen H., Allen G. E. K., Khairallah T. S., Ramos K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007) . J. Career. Dev. 38 , 440–455. 10.1177/0894845310380223 [ CrossRef ] [ Google Scholar ]
  • Chryssochoou X. (2015). Social Psychology . Inter. Encycl. Soc. Behav. Sci. 22 , 532–537. 10.1016/B978-0-08-097086-8.24095-6 [ CrossRef ] [ Google Scholar ]
  • Cichocka A., Jost J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies . Inter. J. Psychol. 49 , 6–29. 10.1002/ijop.12011 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Clay R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular
  • Coetzee M., Van Zyl L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology . SA. J. Psychol . 40 , 1–16. 10.4102/sajip.v40i1.1227 [ CrossRef ] [ Google Scholar ]
  • Counsell A., Harlow L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology . Can. Psychol. 58 , 140–147. 10.1037/cap0000074 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Deangelis T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
  • Demuth C. (2015). New directions in qualitative research in psychology . Integr. Psychol. Behav. Sci. 49 , 125–133. 10.1007/s12124-015-9303-9 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Denzin N. K., Lincoln Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage. [ Google Scholar ]
  • Drotar D. (2010). A call for replications of research in pediatric psychology and guidance for authors . J. Pediatr. Psychol. 35 , 801–805. 10.1093/jpepsy/jsq049 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dweck C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe . Perspect. Psychol. Sci. 12 , 656–659. 10.1177/1745691616687747 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Earp B. D., Trafimow D. (2015). Replication, falsification, and the crisis of confidence in social psychology . Front. Psychol. 6 :621. 10.3389/fpsyg.2015.00621 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ezeh A. C., Izugbara C. O., Kabiru C. W., Fonn S., Kahn K., Manderson L., et al.. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model . Glob. Health Action 3 :5693. 10.3402/gha.v3i0.5693 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ferreira A. L. L., Bessa M. M. M., Drezett J., De Abreu L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review . Reprod. Clim. 31 , 48–54. 10.1016/j.recli.2015.12.002 [ CrossRef ] [ Google Scholar ]
  • Fonseca M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
  • Gough B., Lyons A. (2016). The future of qualitative research in psychology: accentuating the positive . Integr. Psychol. Behav. Sci. 50 , 234–243. 10.1007/s12124-015-9320-8 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grant M. J., Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info. Libr. J. 26 , 91–108. 10.1111/j.1471-1842.2009.00848.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grix J. (2002). Introducing students to the generic terminology of social research . Politics 22 , 175–186. 10.1111/1467-9256.00173 [ CrossRef ] [ Google Scholar ]
  • Gunasekare U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review . Int. J. Sci. Res. 4 , 361–368. Available online at: https://ssrn.com/abstract=2735996 [ Google Scholar ]
  • Hengartner M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9 :256. 10.3389/fpsyg.2018.00256 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Holloway W. (2008). Doing intellectual disagreement differently . Psychoanal. Cult. Soc. 13 , 385–396. 10.1057/pcs.2008.29 [ CrossRef ] [ Google Scholar ]
  • Ivankova N. V., Creswell J. W., Plano Clark V. L. (2016). Foundations and Approaches to mixed methods research , in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers; ), 306–335. [ Google Scholar ]
  • Johnson M., Long T., White A. (2001). Arguments for British pluralism in qualitative health research . J. Adv. Nurs. 33 , 243–249. 10.1046/j.1365-2648.2001.01659.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Johnston A., Kelly S. E., Hsieh S. C., Skidmore B., Wells G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide . J. Clin. Epidemiol. 108 , 64–72. 10.1016/j.jclinepi.2018.11.030 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ketchen D. J., Jr., Boyd B. K., Bergh D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges . Organ. Res. Methods 11 , 643–658. 10.1177/1094428108319843 [ CrossRef ] [ Google Scholar ]
  • Ktepi B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
  • Laher S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research . S. Afr. J. Psychol. 46 , 316–327. 10.1177/0081246316649121 [ CrossRef ] [ Google Scholar ]
  • Lee C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/
  • Lee T. W., Mitchell T. R., Sablynski C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999 . J. Vocat. Behav. 55 , 161–187. 10.1006/jvbe.1999.1707 [ CrossRef ] [ Google Scholar ]
  • Leech N. L., Anthony J., Onwuegbuzie A. J. (2007). A typology of mixed methods research designs . Sci. Bus. Media B. V Qual. Quant 43 , 265–275. 10.1007/s11135-007-9105-3 [ CrossRef ] [ Google Scholar ]
  • Levitt H. M., Motulsky S. L., Wertz F. J., Morrow S. L., Ponterotto J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity . Qual. Psychol. 4 , 2–22. 10.1037/qup0000082 [ CrossRef ] [ Google Scholar ]
  • Lowe S. M., Moore S. (2014). Social networks and female reproductive choices in the developing world: a systematized review . Rep. Health 11 :85. 10.1186/1742-4755-11-85 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Maree K. (2016). Planning a research proposal , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 49–70. [ Google Scholar ]
  • Maree K., Pietersen J. (2016). Sampling , in First Steps in Research, 2nd Edn , ed Maree K. (Pretoria: Van Schaik Publishers; ), 191–202. [ Google Scholar ]
  • Ngulube P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa . ESARBICA J. 32 , 10–23. Available online at: http://hdl.handle.net/10500/22397 . [ Google Scholar ]
  • Nieuwenhuis J. (2016). Qualitative research designs and data-gathering techniques , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 71–102. [ Google Scholar ]
  • Nind M., Kilburn D., Wiles R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods . Int. J. Soc. Res. Methodol. 18 , 561–576. 10.1080/13645579.2015.1062628 [ CrossRef ] [ Google Scholar ]
  • O'Cathain A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution . J. Mix. Methods 3 , 1–6. 10.1177/1558689808326272 [ CrossRef ] [ Google Scholar ]
  • O'Neil S., Koekemoer E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review . SA J. Indust. Psychol. 42 , 1–16. 10.4102/sajip.v42i1.1350 [ CrossRef ] [ Google Scholar ]
  • Onwuegbuzie A. J., Collins K. M. (2017). The role of sampling in mixed methods research enhancing inference quality . Köln Z Soziol. 2 , 133–156. 10.1007/s11577-017-0455-0 [ CrossRef ] [ Google Scholar ]
  • Perestelo-Pérez L. (2013). Standards on how to develop and report systematic reviews in psychology and health . Int. J. Clin. Health Psychol. 13 , 49–57. 10.1016/S1697-2600(13)70007-3 [ CrossRef ] [ Google Scholar ]
  • Pericall L. M. T., Taylor E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review . Dev. Med. Child Neurol. 56 , 19–30. 10.1111/dmcn.12237 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peterson R. A., Merunka D. R. (2014). Convenience samples of college students and research reproducibility . J. Bus. Res. 67 , 1035–1041. 10.1016/j.jbusres.2013.08.010 [ CrossRef ] [ Google Scholar ]
  • Ritchie J., Lewis J., Elam G. (2009). Designing and selecting samples , in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed Ritchie J., Lewis J. (London: Sage; ), 1–23. [ Google Scholar ]
  • Sandelowski M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis . Res. Nurs. Health 34 , 342–352. 10.1002/nur.20437 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sandelowski M., Voils C. I., Knafl G. (2009). On quantitizing . J. Mix. Methods Res. 3 , 208–222. 10.1177/1558689809334210 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scholtz S. E., De Klerk W., De Beer L. T. (2019). A data generated research framework for conducting research methods in psychological research .
  • Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
  • Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scott Jones J., Goldring J. E. (2015). ‘I' m not a quants person'; key strategies in building competence and confidence in staff who teach quantitative research methods . Int. J. Soc. Res. Methodol. 18 , 479–494. 10.1080/13645579.2015.1062623 [ CrossRef ] [ Google Scholar ]
  • Smith B., McGannon K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology . Int. Rev. Sport Exerc. Psychol. 11 , 101–121. 10.1080/1750984X.2017.1317357 [ CrossRef ] [ Google Scholar ]
  • Stangor C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
  • Strydom H. (2011). Sampling in the quantitative paradigm , in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds de Vos A. S., Strydom H., Fouché C. B., Delport C. S. L. (Pretoria: Van Schaik Publishers; ), 221–234. [ Google Scholar ]
  • Tashakkori A., Teddlie C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications. [ Google Scholar ]
  • Toomela A. (2010). Quantitative methods in psychology: inevitable and useless . Front. Psychol. 1 :29. 10.3389/fpsyg.2010.00029 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Truscott D. M., Swars S., Smith S., Thornton-Reid F., Zhao Y., Dooley C., et al.. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005 . Int. J. Soc. Res. Methodol. 13 , 317–328. 10.1080/13645570903097950 [ CrossRef ] [ Google Scholar ]
  • Weiten W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth. [ Google Scholar ]

Logo for British Columbia/Yukon Open Authoring Platform

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

2. Psychological Research

Why is Research Important?

Learning objectives.

By the end of this section you will be able to:

  • Explain how scientific research addresses questions about behaviour
  • Discuss how scientific research guides public policy
  • Appreciate how scientific research can be important in making personal decisions

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession (see Figure 2). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

Some of our ancestors, across the world and over the centuries, believed that trephination—the practice of making a hole in the skull, as shown here—allowed evil spirits to leave the body, thus curing mental illness and other disorders. (credit: “taiproject”/Flickr)

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behaviour, as well as the cognitive (mental) and physiological (body) processes that underlie behaviour. In contrast to other methods that people use to understand the behaviour of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behaviour is observable, the mind is not. If someone is crying, we can see behaviour. However, the reason for the behaviour is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behaviour by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behaviour. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Use of Research Information

Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the hypothesized link between exposure to media violence and subsequent aggression has been debated in the scientific community for roughly 60 years. Even today, we will find detractors, but a consensus is building. Several professional organizations view media violence exposure as a risk factor for actual violence, including the American Medical Association, the American Psychiatric Association, and the American Psychological Association (American Academy of Pediatrics, American Academy of Child & Adolescent Psychiatry, American Psychological Association, American Medical Association, American Academy of Family Physicians, American Psychiatric Association, 2000).

In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.

We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding the D.A.R.E. (Drug Abuse Resistance Education) program in public schools (see Figure 3). This program typically involves police officers coming into the classroom to educate students about the dangers of becoming involved with alcohol and other drugs. According to the D.A.R.E. website (www.dare.org), this program has been very popular since its inception in 1983, and it is currently operating in 75% of school districts in the United States and in more than 40 countries worldwide. According to D.A.R.E. BC, since its inception, more than 100,000 school children in British Columbia have gone through the program (http://darebc.com/) . Sounds like an easy decision, right? However, on closer review, you discover that the vast majority of research into this program consistently suggests that participation has little, if any, effect on whether or not someone uses alcohol or other drugs (Clayton, Cattarello, & Johnstone, 1996; Ennett, Tobler, Ringwalt, & Flewelling, 1994; Lynam et al., 1999; Ringwalt, Ennett, & Holt, 1991). If you are committed to being a good steward of taxpayer money, will you fund this particular program, or will you try to find other programs that research has consistently demonstrated to be effective?

Figure 1.2 The D.A.R.E. program continues to be popular in schools around the world despite research suggesting that it is ineffective.

Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that a close friend has breast cancer or that one of your young relatives has recently been diagnosed with autism. In either case, you want to know which treatment options are most successful with the fewest side effects. How would you find that out? You would probably talk with your doctor and personally review the research that has been done on various treatment options—always with a critical eye to ensure that you are as informed as possible.

In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.

The Process of Scientific Research

Scientific knowledge is advanced through a process known as the scientific method . Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In deductive reasoning, ideas are tested against the empirical world; in inductive reasoning, empirical observations lead to new ideas (see Figure 4). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.

Figure 1.3 Psychological research relies on both inductive and deductive reasoning.

In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive.

Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favourite fruits—apples, bananas, and oranges—all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwi demonstrate that this generalization is not correct despite it being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes.

For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.

We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.

A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests (see Figure 5).

Figure 1.4 The scientific method of research includes proposing hypotheses, conducting research, and creating or modifying theories based on results.

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

A scientific hypothesis is also falsifiable, or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviours (see Figure 6). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and these remain the root of all modern forms of therapy.

Figure 1.5 Many of the specifics of (a) Freud's theories, such as (b) his division of the mind into id, ego, and superego, have fallen out of favor in recent decades because they are not falsifiable. In broader strokes, his views set the stage for much of psychological thinking today, such as the unconscious nature of the majority of psychological processes.

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.

Activities: Watch a Video

OpenStax , Psychology. OpenStax CNX. Download for free at http://cnx.org/contents/[email protected]

DARE BC.  http://darebc.com/

Introduction to Psychology I Copyright © 2017 by Rajiv Jhangiani, Ph.D. is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

8 Why Is Research Important?

[latexpage]

Learning Objectives

By the end of this section, you will be able to:

  • Explain how scientific research addresses questions about behavior
  • Discuss how scientific research guides public policy
  • Appreciate how scientific research can be important in making personal decisions

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession ( [link] ). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

A skull has a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

USE OF RESEARCH INFORMATION

Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the hypothesized link between exposure to media violence and subsequent aggression has been debated in the scientific community for roughly 60 years. Even today, we will find detractors, but a consensus is building. Several professional organizations view media violence exposure as a risk factor for actual violence, including the American Medical Association, the American Psychiatric Association, and the American Psychological Association (American Academy of Pediatrics, American Academy of Child & Adolescent Psychiatry, American Psychological Association, American Medical Association, American Academy of Family Physicians, American Psychiatric Association, 2000).

In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.

We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding the D.A.R.E. (Drug Abuse Resistance Education) program in public schools ( [link] ). This program typically involves police officers coming into the classroom to educate students about the dangers of becoming involved with alcohol and other drugs. According to the D.A.R.E. website (www.dare.org), this program has been very popular since its inception in 1983, and it is currently operating in 75% of school districts in the United States and in more than 40 countries worldwide. Sounds like an easy decision, right? However, on closer review, you discover that the vast majority of research into this program consistently suggests that participation has little, if any, effect on whether or not someone uses alcohol or other drugs (Clayton, Cattarello, & Johnstone, 1996; Ennett, Tobler, Ringwalt, & Flewelling, 1994; Lynam et al., 1999; Ringwalt, Ennett, & Holt, 1991). If you are committed to being a good steward of taxpayer money, will you fund this particular program, or will you try to find other programs that research has consistently demonstrated to be effective?

A D.A.R.E. poster reads “D.A.R.E. to resist drugs and violence.”

Watch this news report to learn more about some of the controversial issues surrounding the D.A.R.E. program.

Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that a close friend has breast cancer or that one of your young relatives has recently been diagnosed with autism. In either case, you want to know which treatment options are most successful with the fewest side effects. How would you find that out? You would probably talk with your doctor and personally review the research that has been done on various treatment options—always with a critical eye to ensure that you are as informed as possible.

In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.

THE PROCESS OF SCIENTIFIC RESEARCH

Scientific knowledge is advanced through a process known as the scientific method . Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In deductive reasoning , ideas are tested against the empirical world; in inductive reasoning , empirical observations lead to new ideas ( [link] ). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.

A diagram has a box at the top labeled “hypothesis or general premise” and a box at the bottom labeled “empirical observations.” On the left, an arrow labeled “inductive reasoning” goes from the bottom to top box. On the right, an arrow labeled “deductive reasoning” goes from the top to the bottom box.

In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive.

Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favorite fruits—apples, bananas, and oranges—all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwi demonstrate that this generalization is not correct despite it being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes.

For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.

Play this “Deal Me In” interactive card game to practice using inductive reasoning.

We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.

A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests [link] .

A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow in the direction from top to right to bottom to left and back to the top, clockwise. The top right arrow is labeled “use the hypothesis to form a theory,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

A scientific hypothesis is also falsifiable , or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviors ( [link] ). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and these remain the root of all modern forms of therapy.

(a)A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.

Visit this website to apply the scientific method and practice its steps by using them to solve a murder mystery, determine why a student is in trouble, and design an experiment to test house paint.

Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives.

Review Questions

Scientific hypotheses are ________ and falsifiable.

________ are defined as observable realities.

Scientific knowledge is ________.

A major criticism of Freud’s early theories involves the fact that his theories ________.

  • were too limited in scope
  • were too outrageous
  • were too broad
  • were not testable

Critical Thinking Questions

In this section, the D.A.R.E. program was described as an incredibly popular program in schools across the United States despite the fact that research consistently suggests that this program is largely ineffective. How might one explain this discrepancy?

There is probably tremendous political pressure to appear to be hard on drugs. Therefore, even though D.A.R.E. might be ineffective, it is a well-known program with which voters are familiar.

The scientific method is often described as self-correcting and cyclical. Briefly describe your understanding of the scientific method with regard to these concepts.

This cyclical, self-correcting process is primarily a function of the empirical nature of science. Theories are generated as explanations of real-world phenomena. From theories, specific hypotheses are developed and tested. As a function of this testing, theories will be revisited and modified or refined to generate new hypotheses that are again tested. This cyclical process ultimately allows for more and more precise (and presumably accurate) information to be collected.

Personal Application Questions

Healthcare professionals cite an enormous number of health problems related to obesity, and many people have an understandable desire to attain a healthy weight. There are many diet programs, services, and products on the market to aid those who wish to lose weight. If a close friend was considering purchasing or participating in one of these products, programs, or services, how would you make sure your friend was fully aware of the potential consequences of this decision? What sort of information would you want to review before making such an investment or lifestyle change yourself?

Creative Commons License

Share This Book

  • Increase Font Size

Logo for Digital Editions

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

10 Why is Research Important?

Learning Objectives

  • Explain how scientific research addresses questions about behaviour
  • Discuss how scientific research guides public policy
  • Appreciate how scientific research can be important in making personal decisions

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession ( Figure PR.2 ). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

A skull has a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behaviour, as well as the cognitive (mental) and physiological (body) processes that underlie behaviour. In contrast to other methods that people use to understand the behaviour of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is  empirical : it is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behaviour is observable, the mind is not. If someone is crying, we can see behaviour. However, the reason for the behaviour is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behaviour by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behaviour. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Use of Research Information

Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the explosion in our use of technology has led researchers to question whether this ultimately helps or hinders us. The use and implementation of technology in educational settings has become widespread over the last few decades. Researchers are coming to different conclusions regarding the use of technology. To illustrate this point, a study investigating a smartphone app targeting surgery residents (graduate students in surgery training) found that the use of this app can increase student engagement and raise test scores (Shaw & Tan, 2015). Conversely, another study found that the use of technology in undergraduate student populations had negative impacts on sleep, communication, and time management skills (Massimini & Peterson, 2009). Until sufficient amounts of research have been conducted, there will be no clear consensus on the effects that technology has on a student’s acquisition of knowledge, study skills, and mental health.

In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.

We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the Premier of your province. One of your responsibilities is to manage the provincial budget and determine how to best spend your constituents’ tax dollars. As the new Premier, you need to decide whether to continue funding early intervention programs. These programs are designed to help children who come from low-income backgrounds, have special needs, or face other disadvantages. These programs may involve providing a wide variety of services to maximize the children’s development and position them for optimal levels of success in school and later in life (Blann, 2005). While such programs sound appealing, you would want to be sure that they also proved effective before investing additional money in these programs. Fortunately, psychologists and other scientists have conducted vast amounts of research on such programs and, in general, the programs are found to be effective (Neil & Christensen, 2009; Peters-Scheffer, Didden, Korzilius, & Sturmey, 2011). While not all programs are equally effective, and the short-term effects of many such programs are more pronounced, there is reason to believe that many of these programs produce long-term benefits for participants (Barnett, 2011). If you are committed to being a good steward of taxpayer money, you would want to look at research. Which programs are most effective? What characteristics of these programs make them effective? Which programs promote the best outcomes? After examining the research, you would be best equipped to make decisions about which programs to fund.

LINK TO LEARNING

Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that a close friend has breast cancer or that one of your young relatives has recently been diagnosed with autism. In either case, you want to know which treatment options are most successful with the fewest side effects. How would you find that out? You would probably talk with your doctor and personally review the research that has been done on various treatment options—always with a critical eye to ensure that you are as informed as possible.

In the end, research is what makes the difference between facts and opinions.  Facts  are observable realities, and  opinions  are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.

The Process of Scientific Research

Scientific knowledge is advanced through a process known as the  scientific method . Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In  deductive reasoning , ideas are tested in the real world; in  inductive reasoning , real-world observations lead to new ideas ( Figure PR.3 ). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.

A diagram has a box at the top labeled “hypothesis or general premise” and a box at the bottom labeled “empirical observations.” On the left, an arrow labeled “inductive reasoning” goes from the bottom to top box. On the right, an arrow labeled “deductive reasoning” goes from the top to the bottom box.

In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is supported, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive.

Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favourite fruits—apples, bananas, and oranges—all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwi demonstrate that this generalization is not correct despite it being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes.

For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.

We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A  theory   is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.

A  hypothesis  is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests  Figure PR.4 .

A diagram has seven labeled boxes with arrows to show the progression in the flow chart. The chart starts at “Theory” and moves to “Generate hypothesis,” “Collect data,” “Analyze data,” and “Summarize data and report findings.” There are two arrows coming from “Summarize data and report findings” to show two options. The first arrow points to “Confirm theory.” The second arrow points to “Modify theory,” which has an arrow that points back to “Generate hypothesis.”

Introduction to Psychology & Neuroscience Copyright © 2020 by Edited by Leanne Stevens is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Share This Book

PSYCH101: Introduction to Psychology

Why research is important.

Read this text, which introduces the scientific method, which involves making a hypothesis or general premise, deductive reasoning, making empirical observations, and inductive reasoning,

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people's authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth's continents did not move, and that mental illness was caused by possession (Figure 2.2). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

A skull has a large hole bored through the forehead.

Figure 2.2 Some of our ancestors, across the world and over the centuries, believed that trephination - the practice of making a hole in the skull, as shown here - allowed evil spirits to leave the body, thus curing mental illness and other disorders.

Use of Research Information

Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the explosion in our use of technology has led researchers to question whether this ultimately helps or hinders us. The use and implementation of technology in educational settings has become widespread over the last few decades.

Researchers are coming to different conclusions regarding the use of technology. To illustrate this point, a study investigating a smartphone app targeting surgery residents (graduate students in surgery training) found that the use of this app can increase student engagement and raise test scores. Conversely, another study found that the use of technology in undergraduate student populations had negative impacts on sleep, communication, and time management skills. Until sufficient amounts of research have been conducted, there will be no clear consensus on the effects that technology has on a student's acquisition of knowledge, study skills, and mental health. In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on "scientific evidence" when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives. We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents' tax dollars. As the new governor, you need to decide whether to continue funding early intervention programs. These programs are designed to help children who come from low-income backgrounds, have special needs, or face other disadvantages. These programs may involve providing a wide variety of services to maximize the children's development and position them for optimal levels of success in school and later in life.

While such programs sound appealing, you would want to be sure that they also proved effective before investing additional money in these programs. Fortunately, psychologists and other scientists have conducted vast amounts of research on such programs and, in general, the programs are found to be effective. While not all programs are equally effective, and the short-term effects of many such programs are more pronounced, there is reason to believe that many of these programs produce long-term benefits for participants. If you are committed to being a good steward of taxpayer money, you would want to look at research. Which programs are most effective? What characteristics of these programs make them effective? Which programs promote the best outcomes? After examining the research, you would be best equipped to make decisions about which programs to fund. Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that your sister Maria's child, Umberto, was recently diagnosed with autism. There are many treatments for autism that help decrease the negative impact of autism on the individual. Some examples of treatments for autism are applied behavior analysis (ABA), social communication groups, social skills groups, occupational therapy, and even medication options. If Maria asked you for advice or guidance, what would you do? You would likely want to review the research and learn about the efficacy of each treatment so you could best advise your sister. In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.

Notable Researchers

Psychological research has a long history involving important figures from diverse backgrounds. While the introductory chapter discussed several researchers who made significant contributions to the discipline, there are many more individuals who deserve attention in considering how psychology has advanced as a science through their work (Figure 2.3). For instance, Margaret Floy Washburn (1871–1939) was the first woman to earn a PhD in psychology. Her research focused on animal behavior and cognition. Mary Whiton Calkins (1863–1930) was a preeminent first-generation American psychologist who opposed the behaviorist movement, conducted significant research into memory, and established one of the earliest experimental psychology labs in the United States. Francis Sumner (1895–1954) was the first African American to receive a PhD in psychology in 1920. His dissertation focused on issues related to psychoanalysis. Sumner also had research interests in racial bias and educational justice. Sumner was one of the founders of Howard University's department of psychology, and because of his accomplishments, he is sometimes referred to as the "Father of Black Psychology". Thirteen years later, Inez Beverly Prosser (1895–1934) became the first African American woman to receive a PhD in psychology. Prosser's research highlighted issues related to education in segregated versus integrated schools, and ultimately, her work was very influential in the hallmark Brown v. Board of Education Supreme Court ruling that segregation of public schools was unconstitutional.

Figure a is a portrait of Margaret Floy Washburn. Figure b is the front page of the Implementation Decree from the Supreme Co

Figure 2.3 (a) Margaret Floy Washburn was the first woman to earn a doctorate degree in psychology. (b) The outcome of Brown v. Board of Education was influenced by the research of psychologist Inez Beverly Prosser, who was the first African American woman to earn a PhD in psychology.

The Process of Scientific Research

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In deductive reasoning , ideas are tested in the real world; in inductive reasoning , real-world observations lead to new ideas (Figure 2.4). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.

A diagram has a box at the top labeled "hypothesis or general premise" and a box at the bottom labeled "empirical observation

In the scientific context, deductive reasoning begins with a generalization - one hypothesis - that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive. Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favorite fruits - apples, bananas, and oranges - all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwi demonstrate that this generalization is not correct despite it being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes. For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning. We've stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory. A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests Figure 2.5.

A diagram has seven labeled boxes with arrows to show the progression in the flow chart. The chart starts at "Theory" and mov

Figure 2.5 The scientific method involves deriving hypotheses from theories and then testing those hypotheses. If the results are consistent with the theory, then the theory is supported. If the results are not consistent, then the theory should be modified and new hypotheses will be generated.

(a)A photograph shows Freud holding a cigar. (b) The mind's conscious and unconscious states are illustrated as an iceberg fl

Figure 2.6 Many of the specifics of (a) Freud's theories, such as (b) his division of the mind into id, ego, and superego, have fallen out of favor in recent decades because they are not falsifiable. In broader strokes, his views set the stage for much of psychological thinking today, such as the unconscious nature of the majority of psychological processes.

Creative Commons License

Psychology Zone

Exploring the Nature and Importance of Psychological Research

important of research in psychology

Table of Contents

Have you ever wondered why we behave the way we do, or how our minds work? The quest for these answers lies at the heart of psychological research , a field as fascinating as it is fundamental to our understanding of the human and animal psyche. This blog post will take you on a journey through the intricate nature of psychological research, illuminating its methods, applications, and profound impact on various aspects of society.

What is psychological research?

At its core, psychological research is an intricate tapestry of empirical studies and theoretical constructs that seek to decode the vast complexities of behavior and mental processes. Whether it’s the way we learn new information, what motivates us to act, or how we remember past events, psychological research aims to uncover the how and why of these phenomena, setting the stage for a deeper understanding of ourselves and the world around us.

The empirical nature of psychological studies

Psychological research is grounded in empirical evidence , meaning that it relies on observable and measurable data collected through scientific methods. This approach ensures that findings are not just theoretical concepts but are backed by rigorous testing and analysis, lending credibility and reliability to the conclusions drawn.

Theoretical frameworks in psychology

Theories in psychology are not merely abstract ideas but are carefully constructed frameworks that explain and predict behaviors and mental processes. These theories are constantly tested and refined through research, forming the bedrock of our psychological knowledge.

Intersecting psychological research with diverse fields

Psychological research doesn’t exist in a vacuum. Its tentacles reach far into various domains, influencing and being influenced by different fields of study and practice.

Organizational behavior and psychology

In the realm of business and organizational behavior , psychological research helps us understand how to create better work environments, enhance employee motivation, and improve leadership styles. These insights lead to more productive and harmonious workplaces.

Psychology’s role in medical sciences

Medical sciences benefit from psychological research as it offers crucial insights into patient behavior, mental health, and the interplay between psychological well-being and physical health. This multidisciplinary approach enables more holistic healthcare.

Education shaped by psychological findings

Educators and policymakers turn to psychological research to craft curricula that align with how we learn best. From the effectiveness of different teaching methods to the psychological impacts of standardized testing, research informs educational strategies and policies.

The practical applications of psychological research

Understanding the theory is one thing, but seeing it in action is where the true power of psychological research shines. Let’s explore some of the practical ways in which this research shapes our everyday lives and solves real-world problems.

Solving real-world problems

Psychological research has real-world applications that affect every aspect of society. For instance, it can help address social issues such as prejudice and discrimination , improve mental health treatment, and even aid in disaster response strategies by understanding human behavior in crisis situations.

Improving mental health services

The insights gained from psychological studies are crucial in developing effective therapies and interventions for mental health disorders . By understanding the underlying mechanisms of these conditions, psychologists can tailor treatments to better serve those in need.

Enhancing learning and memory

Research into learning and memory has revolutionized educational practices, making them more inclusive and effective. By applying psychological principles, teachers are able to foster environments where students of all backgrounds and abilities can thrive.

Uncovering psychological facts , laws, and theories

Psychological research is a relentless pursuit of knowledge, aiming to establish facts, laws, and theories that explain the inner workings of our minds and behaviors. These foundational elements are critical in building a structured and reliable understanding of psychology.

The quest for psychological facts

A psychological fact is a scientifically verified piece of information about behaviors or mental processes. Through meticulous research, psychologists can determine these facts, which then serve as building blocks for broader laws and theories.

Establishing psychological laws

Laws in psychology are generalizations about behaviors that are consistent and predictable. For example, the law of effect, which states that behaviors followed by positive outcomes are likely to be repeated, is a principle that has stood the test of time and research.

Formulating psychological theories

Theories are comprehensive explanations that connect and make sense of various psychological facts and laws. They are the culmination of extensive research and provide a framework for understanding complex behaviors and mental processes.

This exploration of psychological research underscores its significance in not only advancing our comprehension of human and animal behavior but also in applying this understanding to better our lives and society. It’s an ever-evolving discipline that continues to challenge our perceptions and drive innovation across countless domains.

How do you see psychological research impacting your daily life? And can you think of a problem in your community where psychological insights might provide a solution?

How useful was this post?

Click on a star to rate it!

Average rating 0 / 5. Vote count: 0

No votes so far! Be the first to rate this post.

We are sorry that this post was not useful for you!

Let us improve this post!

Tell us how we can improve this post?

Submit a Comment Cancel reply

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

Submit Comment

Research Methods in Psychology

1 Introduction to Psychological Research – Objectives and Goals, Problems, Hypothesis and Variables

  • Nature of Psychological Research
  • The Context of Discovery
  • Context of Justification
  • Characteristics of Psychological Research
  • Goals and Objectives of Psychological Research

2 Introduction to Psychological Experiments and Tests

  • Independent and Dependent Variables
  • Extraneous Variables
  • Experimental and Control Groups
  • Introduction of Test
  • Types of Psychological Test
  • Uses of Psychological Tests

3 Steps in Research

  • Research Process
  • Identification of the Problem
  • Review of Literature
  • Formulating a Hypothesis
  • Identifying Manipulating and Controlling Variables
  • Formulating a Research Design
  • Constructing Devices for Observation and Measurement
  • Sample Selection and Data Collection
  • Data Analysis and Interpretation
  • Hypothesis Testing
  • Drawing Conclusion

4 Types of Research and Methods of Research

  • Historical Research
  • Descriptive Research
  • Correlational Research
  • Qualitative Research
  • Ex-Post Facto Research
  • True Experimental Research
  • Quasi-Experimental Research

5 Definition and Description Research Design, Quality of Research Design

  • Research Design
  • Purpose of Research Design
  • Design Selection
  • Criteria of Research Design
  • Qualities of Research Design

6 Experimental Design (Control Group Design and Two Factor Design)

  • Experimental Design
  • Control Group Design
  • Two Factor Design

7 Survey Design

  • Survey Research Designs
  • Steps in Survey Design
  • Structuring and Designing the Questionnaire
  • Interviewing Methodology
  • Data Analysis
  • Final Report

8 Single Subject Design

  • Single Subject Design: Definition and Meaning
  • Phases Within Single Subject Design
  • Requirements of Single Subject Design
  • Characteristics of Single Subject Design
  • Types of Single Subject Design
  • Advantages of Single Subject Design
  • Disadvantages of Single Subject Design

9 Observation Method

  • Definition and Meaning of Observation
  • Characteristics of Observation
  • Types of Observation
  • Advantages and Disadvantages of Observation
  • Guides for Observation Method

10 Interview and Interviewing

  • Definition of Interview
  • Types of Interview
  • Aspects of Qualitative Research Interviews
  • Interview Questions
  • Convergent Interviewing as Action Research
  • Research Team

11 Questionnaire Method

  • Definition and Description of Questionnaires
  • Types of Questionnaires
  • Purpose of Questionnaire Studies
  • Designing Research Questionnaires
  • The Methods to Make a Questionnaire Efficient
  • The Types of Questionnaire to be Included in the Questionnaire
  • Advantages and Disadvantages of Questionnaire
  • When to Use a Questionnaire?

12 Case Study

  • Definition and Description of Case Study Method
  • Historical Account of Case Study Method
  • Designing Case Study
  • Requirements for Case Studies
  • Guideline to Follow in Case Study Method
  • Other Important Measures in Case Study Method
  • Case Reports

13 Report Writing

  • Purpose of a Report
  • Writing Style of the Report
  • Report Writing – the Do’s and the Don’ts
  • Format for Report in Psychology Area
  • Major Sections in a Report

14 Review of Literature

  • Purposes of Review of Literature
  • Sources of Review of Literature
  • Types of Literature
  • Writing Process of the Review of Literature
  • Preparation of Index Card for Reviewing and Abstracting

15 Methodology

  • Definition and Purpose of Methodology
  • Participants (Sample)
  • Apparatus and Materials

16 Result, Analysis and Discussion of the Data

  • Definition and Description of Results
  • Statistical Presentation
  • Tables and Figures

17 Summary and Conclusion

  • Summary Definition and Description
  • Guidelines for Writing a Summary
  • Writing the Summary and Choosing Words
  • A Process for Paraphrasing and Summarising
  • Summary of a Report
  • Writing Conclusions

18 References in Research Report

  • Reference List (the Format)
  • References (Process of Writing)
  • Reference List and Print Sources
  • Electronic Sources
  • Book on CD Tape and Movie
  • Reference Specifications
  • General Guidelines to Write References

Share on Mastodon

REVIEW article

The use of research methods in psychological research: a systematised review.

\nSalom Elizabeth Scholtz

  • 1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa
  • 2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as educating young researchers, however, the application thereof is unclear which can be detrimental to the field of psychology. Therefore, this systematised review aimed to determine what research methods are being used, how these methods are being used and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted in 10 topics via predominantly quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the conducted methodology is described. It was also found that articles lacked rigour and transparency in the used methodology which has implications for replicability. In conclusion this article, provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research should utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigation into the use of research methods. Results should prompt the following future research into: a lack or rigour and its implication on replication, the use of certain methods above others, publication bias and choice of sampling method.

Introduction

Psychology is an ever-growing and popular field ( Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth and the need for science-based research to base health decisions on ( Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation ( Stangor, 2011 ; Aanstoos, 2014 ). Research methods are therefore viewed as important tools used by researchers to collect data ( Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed method and multi method ( Maree, 2016 ). Additionally, researchers also employ various types of literature reviews to address research questions ( Grant and Booth, 2009 ). According to literature, what research method is used and why a certain research method is used is complex as it depends on various factors that may include paradigm ( O'Neil and Koekemoer, 2016 ), research question ( Grix, 2002 ), or the skill and exposure of the researcher ( Nind et al., 2015 ). How these research methods are employed is also difficult to discern as research methods are often depicted as having fixed boundaries that are continuously crossed in research ( Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies ( Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design ( Truscott et al., 2010 ).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills ( Scott Jones and Goldring, 2015 ), how theories are developed ( Ngulube, 2013 ), and the credibility of research results ( Levitt et al., 2017 ). This, in turn, can be detrimental to the field ( Nind et al., 2015 ), journal publication ( Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research ( Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing ( Earp and Trafimow, 2015 ; Hengartner, 2018 ).

Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as the opportunity to examine the development, growth and progress of a research area and overall quality of a journal. Studies such as Lee et al. (1999) as well as Bluhm et al. (2011) review of qualitative methods has attempted to synthesis the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology, for example, in the field of Industrial and Organisational psychology Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods with underrepresented longitudinal studies. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016) . Other methods in health psychology, such as Mixed methods research have also been reportedly growing in popularity ( O'Cathain, 2009 ).

A broad overview of the use of research methods in the field of psychology as a whole is however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used and for what topics in practice (i.e., journal publications) in order to provide a general perspective of method used in psychology publication. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population ( Bittermann and Fischer, 2018 )], method [data-gathering tools ( Nieuwenhuis, 2016 )], sampling [elements chosen from a population to partake in research ( Ritchie et al., 2009 )], data collection [techniques and research strategy ( Maree, 2016 )], and data analysis [discovering information by examining bodies of data ( Ktepi, 2016 )]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form ( Grant and Booth, 2009 ).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014 ; Pericall and Taylor, 2014 ; Barr-Walker, 2017 ). With no clear parameters identified in the literature (see Grant and Booth, 2009 ), the sample size of this study was determined by the purpose of the sample ( Strydom, 2011 ), and time and cost constraints ( Maree and Pietersen, 2016 ). Thus, a non-probability purposive sample ( Ritchie et al., 2009 ) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee (2015) American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank ( Scimago Journal & Country Rank, 2017 ). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus ® database ( Scopus, 2017b ) by means of the Scimago Journal Rank (SJR) indicator developed by Scimago from the algorithm Google PageRank™ ( Scimago Journal & Country Rank, 2017 ). Scopus is the largest global database of abstracts and citations from peer-reviewed journals ( Scopus, 2017a ). Reasons for the development of the Scimago Journal and Country Rank list was to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals ( Scimago Journal & Country Rank, 2017 ), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general with no preference to specific research methods and have full-text access to articles.

The following list of top five journals in 2018 fell within the abovementioned inclusion criteria (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and lastly the (5) Journal of Psychology Applied and Interdisciplinary.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually ( Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

www.frontiersin.org

Figure 1 . Systematised review procedure.

According to Johnston et al. (2019) , “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to full search and data produced in tabular form ( Grant and Booth, 2009 ). The rigorous application of the systematic review is, therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data ( Johnston et al., 2019 ). This protocol was similar to that of Ferreira et al. (2016) and approved by three research committees/stakeholders and the researchers ( Johnston et al., 2019 ). The eligibility criteria for article inclusion was based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail ( Bandara et al., 2015 ; Johnston et al., 2019 ). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process ( Bandara et al., 2015 ). Screening for appropriate articles for inclusion forms an integral part of a systematic review process ( Johnston et al., 2019 ). This step was applied to two aspects of this research study: the choice of eligible journals and articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim, i.e., interview articles or discussions etc., were excluded.

To ensure rigourous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy ( Johnston et al., 2019 ). The research question served as a guide for efficient, organised data extraction ( Johnston et al., 2019 ). Data was categorised according to the codes of interest, along with article identifiers for audit trails such as authors, title and aims of articles. The categorised data was based on the aim of the review ( Johnston et al., 2019 ) and synthesised in tabular form under methods used, how these methods were used, and for what topics in the field of psychology.

The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).

www.frontiersin.org

Figure 2 . Journal article frequency.

www.frontiersin.org

Table 1 . Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question ( Figure 3 ). All these topics except, for the final one, psychological practice , were found to concur with the research areas in psychology as identified by Weiten (2010) . These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010) . It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology; but to broader fields that may encompass sub-interests of these disciplines.

www.frontiersin.org

Figure 3 . Topic frequency (international sample).

In the case of developmental psychology , researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology , on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology is not the only theme that uses experimental research and focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour ( Weiten, 2010 ). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set of results represents a broad overview of the results from the perspective of the methodology employed. The second set of results are discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of methods use in psychology (across all themes), while the table format provides readers with in-depth insight into methods used in the individual themes identified. We believe that presenting the data from both perspectives allow readers a broad understanding of the results. Due a large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the table in terms of methodology differ from the total number of articles. Some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows is the results for what methods are used, how these methods are used, and which topics in psychology they are applied to . Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.

Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) compared to all other research methods. Qualitative research was the second most common research method but only made up about 4.79% of the general method usage. Reviews occurred almost as much as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4 .

www.frontiersin.org

Table 2 . Research methods in psychology.

www.frontiersin.org

Figure 4 . Research method frequency in topics.

Secondly, in the case of how these research methods are employed , our study indicated the following.

Sampling −78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included broad categorisation of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for sampling methods employed in each topic.

www.frontiersin.org

Table 3 . Sampling use in the field of psychology.

www.frontiersin.org

Figure 5 . Sampling method frequency in topics.

Designs were categorised based on the articles' statement thereof. Therefore, it is important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to a lack of experiments and any other indication of design, which, according to Laher (2016) , is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of data, as it could include the use of correlational/cross-sectional designs, which were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

From the 36 identified designs the most popular designs were cross-sectional (23.17%) and experimental (25.64%), which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, was used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), as well as narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review designs being systematic reviews (0.57%). The few mixed method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see how these designs were employed in each specific topic in Table 4 , Figure 6 .

www.frontiersin.org

Table 4 . Design use in the field of psychology.

www.frontiersin.org

Figure 6 . Design frequency in topics.

Data collection and analysis —data collection included 30 methods, with the data collection method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method, which included established or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis categorised into ±188 various data analysis techniques shown in Table 6 and Figures 1 – 7 . Descriptive statistics were the most commonly used (23.49%) along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or different forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.

www.frontiersin.org

Table 5 . Data collection in the field of psychology.

www.frontiersin.org

Figure 7 . Data collection frequency in topics.

www.frontiersin.org

Table 6 . Data analysis in the field of psychology.

Results of the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. The remainder of the topics only occurred in 4.0–7.0% of the articles considered. A list of the included 999 articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/ . This website was created by Scholtz et al. (2019) to visually present a research framework based on this Article's results.

This systematised review categorised full-length articles from five international journals across the span of 5 years to provide insight into the use of research methods in the field of psychology. Results indicated what methods are used how these methods are being used and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and by no means a comprehensive representation of the aforementioned aim due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia ( Holloway, 2008 ).

With regard to the methods used, our data stayed true to literature, finding only common research methods ( Grant and Booth, 2009 ; Maree, 2016 ) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by literature ( Breen and Darlaston-Jones, 2010 ; Counsell and Harlow, 2017 ) and previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014 ). Its long history as the first research method ( Leech et al., 2007 ) in the field of psychology as well as researchers' current application of mathematical approaches in their studies ( Toomela, 2010 ) might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research ( Demuth, 2015 ; Smith and McGannon, 2018 ), quantitative research remains the first choice for article publication in these journals. Despite the included journals indicating openness to articles that apply any research methods. This finding may be due to qualitative research still being seen as a new method ( Burman and Whelan, 2011 ) or reviewers' standards being higher for qualitative studies ( Bluhm et al., 2011 ). Future research is encouraged into the possible biasness in publication of research methods, additionally further investigation with a different sample into the proclaimed growth of qualitative research may also provide different results.

Review studies were found to surpass that of multi-method and mixed method studies. To this effect Grant and Booth (2009) , state that the increased awareness, journal contribution calls as well as its efficiency in procuring research funds all promote the popularity of reviews. The low frequency of mixed method studies contradicts the view in literature that it's the third most utilised research method ( Tashakkori and Teddlie's, 2003 ). Its' low occurrence in this sample could be due to opposing views on mixing methods ( Gunasekare, 2015 ) or that authors prefer publishing in mixed method journals, when using this method, or its relative novelty ( Ivankova et al., 2016 ). Despite its low occurrence, the application of the mixed methods design in articles was methodologically clear in all cases which were not the case for the remainder of research methods.

Additionally, a substantial number of studies used a combination of methodologies that are not mixed or multi-method studies. Perceived fixed boundaries are according to literature often set aside, as confirmed by this result, in order to investigate the aim of a study, which could create a new and helpful way of understanding the world ( Gunasekare, 2015 ). According to Toomela (2010) , this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design) for example. Based on this result, further research into this phenomenon as well as its implications for research methods such as multi and mixed methods is recommended.

Discerning how these research methods were applied, presented some difficulty. In the case of sampling, most studies—regardless of method—did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to literature and debates among academics ( Peterson and Merunka, 2014 ; Laher, 2016 ). Samples of convenience and students as participants especially raise questions about the generalisability and applicability of results ( Peterson and Merunka, 2014 ). This is because attention to sampling is important as inappropriate sampling can debilitate the legitimacy of interpretations ( Onwuegbuzie and Collins, 2017 ). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology as well as the reason for this use could provide interesting insight, and is encouraged by this study.

Additionally, and this is indicated in Table 6 , articles seldom report the research designs used, which highlights the pressing aspect of the lack of rigour in the included sample. Rigour with regards to the applied empirical method is imperative in promoting psychology as a science ( American Psychological Association, 2020 ). Omitting parts of the research process in publication when it could have been used to inform others' research skills should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs ( Fonseca, 2013 ; Laher, 2016 ), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used ( Drotar, 2010 , p. 804). The lack of thorough description could be explained by the requirements of certain journals to only report on certain aspects of a research process, especially with regard to the applied design (Laher, 20). However, naming aspects such as sampling and designs, is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) ( Appelbaum et al., 2018 ). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to only reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires. Researchers would apply a questionnaire in various ways, for example in questionnaire interviews, online surveys, and written questionnaires across most research methods. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field named “psychological practice.” This result may show the growing consciousness of researchers as part of the research process ( Denzin and Lincoln, 2003 ), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learning societies, as testaments of the institutional support and richness social psychology has in the field of psychology ( Chryssochoou, 2015 ). The APA's perspective on 2018 trends in psychology also identifies an increased amount of psychology focus on how social determinants are influencing people's health ( Deangelis, 2017 ).

This study was not without limitations and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study, despite general journal aims (as stated on journal websites), this inclusion signified a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error ( Bandara et al., 2015 ). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or whether the methods stated adhered to the criteria for the methods used. Thus, a large number of articles that did not clearly indicate a research method or design could influence the results of this review. However, this in itself was also a noteworthy result. Future research could review research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors also encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our research study presented the use of research methods for published articles in the field of psychology as well as recommendations for future research based on these results. Insight into the complex questions identified in literature, regarding what methods are used how these methods are being used and for what topics (why) was gained. This sample preferred quantitative methods, used convenience sampling and presented a lack of rigorous accounts for the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to allow researchers insight into the general use of methods and not only the most frequently used methods. The lack of rigorous account of research methods in articles was represented in-depth for each step in the research process and can be of vital importance to address the current replication crisis within the field of psychology. Recommendations for future research aimed to motivate research into the practical implications of the results for psychology, for example, publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Aanstoos, C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid$=$1&hid$=$4113&bdata$=$JnNpdGU9ZWRzL~WxpdmU%3d#AN$=$93871882&db$=$ers

Google Scholar

American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., and Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. doi: 10.1037/amp0000191

PubMed Abstract | CrossRef Full Text | Google Scholar

Bandara, W., Furtmueller, E., Gorbacheva, E., Miskon, S., and Beekhuyzen, J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. doi: 10.17705/1CAIS.03708

CrossRef Full Text | Google Scholar

Barr-Walker, J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. doi: 10.5195/JMLA.2017.109

Bittermann, A., and Fischer, A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. doi: 10.1027/2151-2604/a000318

Bluhm, D. J., Harman, W., Lee, T. W., and Mitchell, T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. doi: 10.1111/j.1467-6486.2010.00972.x

Breen, L. J., and Darlaston-Jones, D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. doi: 10.1080/00050060903127481

Burman, E., and Whelan, P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill.

Chaichanasakul, A., He, Y., Chen, H., Allen, G. E. K., Khairallah, T. S., and Ramos, K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. doi: 10.1177/0894845310380223

Chryssochoou, X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. doi: 10.1016/B978-0-08-097086-8.24095-6

Cichocka, A., and Jost, J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. doi: 10.1002/ijop.12011

Clay, R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular

Coetzee, M., and Van Zyl, L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol . 40, 1–16. doi: 10.4102/sajip.v40i1.1227

Counsell, A., and Harlow, L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. doi: 10.1037/cap0000074

Deangelis, T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors

Demuth, C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. doi: 10.1007/s12124-015-9303-9

Denzin, N. K., and Lincoln, Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage.

Drotar, D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. doi: 10.1093/jpepsy/jsq049

Dweck, C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. doi: 10.1177/1745691616687747

Earp, B. D., and Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. doi: 10.3389/fpsyg.2015.00621

Ezeh, A. C., Izugbara, C. O., Kabiru, C. W., Fonn, S., Kahn, K., Manderson, L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. doi: 10.3402/gha.v3i0.5693

Ferreira, A. L. L., Bessa, M. M. M., Drezett, J., and De Abreu, L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. doi: 10.1016/j.recli.2015.12.002

Fonseca, M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections

Gough, B., and Lyons, A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. doi: 10.1007/s12124-015-9320-8

Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x

Grix, J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. doi: 10.1111/1467-9256.00173

Gunasekare, U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996

Hengartner, M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. doi: 10.3389/fpsyg.2018.00256

Holloway, W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. doi: 10.1057/pcs.2008.29

Ivankova, N. V., Creswell, J. W., and Plano Clark, V. L. (2016). “Foundations and Approaches to mixed methods research,” in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers), 306–335.

Johnson, M., Long, T., and White, A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. doi: 10.1046/j.1365-2648.2001.01659.x

Johnston, A., Kelly, S. E., Hsieh, S. C., Skidmore, B., and Wells, G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. doi: 10.1016/j.jclinepi.2018.11.030

Ketchen, D. J. Jr., Boyd, B. K., and Bergh, D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. doi: 10.1177/1094428108319843

Ktepi, B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers

Laher, S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. doi: 10.1177/0081246316649121

Lee, C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/

Lee, T. W., Mitchell, T. R., and Sablynski, C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. doi: 10.1006/jvbe.1999.1707

Leech, N. L., Anthony, J., and Onwuegbuzie, A. J. (2007). A typology of mixed methods research designs. Sci. Bus. Media B. V Qual. Quant 43, 265–275. doi: 10.1007/s11135-007-9105-3

Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., and Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. doi: 10.1037/qup0000082

Lowe, S. M., and Moore, S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. doi: 10.1186/1742-4755-11-85

Maree, K. (2016). “Planning a research proposal,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 49–70.

Maree, K., and Pietersen, J. (2016). “Sampling,” in First Steps in Research, 2nd Edn , ed K. Maree (Pretoria: Van Schaik Publishers), 191–202.

Ngulube, P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397 .

Nieuwenhuis, J. (2016). “Qualitative research designs and data-gathering techniques,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 71–102.

Nind, M., Kilburn, D., and Wiles, R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. doi: 10.1080/13645579.2015.1062628

O'Cathain, A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. doi: 10.1177/1558689808326272

O'Neil, S., and Koekemoer, E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. doi: 10.4102/sajip.v42i1.1350

Onwuegbuzie, A. J., and Collins, K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. doi: 10.1007/s11577-017-0455-0

Perestelo-Pérez, L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. doi: 10.1016/S1697-2600(13)70007-3

Pericall, L. M. T., and Taylor, E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. doi: 10.1111/dmcn.12237

Peterson, R. A., and Merunka, D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. doi: 10.1016/j.jbusres.2013.08.010

Ritchie, J., Lewis, J., and Elam, G. (2009). “Designing and selecting samples,” in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed J. Ritchie and J. Lewis (London: Sage), 1–23.

Sandelowski, M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. doi: 10.1002/nur.20437

Sandelowski, M., Voils, C. I., and Knafl, G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. doi: 10.1177/1558689809334210

Scholtz, S. E., De Klerk, W., and De Beer, L. T. (2019). A data generated research framework for conducting research methods in psychological research.

Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015

Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scott Jones, J., and Goldring, J. E. (2015). ‘I' m not a quants person'; key strategies in building competence and confidence in staff who teach quantitative research methods. Int. J. Soc. Res. Methodol. 18, 479–494. doi: 10.1080/13645579.2015.1062623

Smith, B., and McGannon, K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 11, 101–121. doi: 10.1080/1750984X.2017.1317357

Stangor, C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/

Strydom, H. (2011). “Sampling in the quantitative paradigm,” in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds A. S. de Vos, H. Strydom, C. B. Fouché, and C. S. L. Delport (Pretoria: Van Schaik Publishers), 221–234.

Tashakkori, A., and Teddlie, C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications.

Toomela, A. (2010). Quantitative methods in psychology: inevitable and useless. Front. Psychol. 1:29. doi: 10.3389/fpsyg.2010.00029

Truscott, D. M., Swars, S., Smith, S., Thornton-Reid, F., Zhao, Y., Dooley, C., et al. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005. Int. J. Soc. Res. Methodol. 13, 317–328. doi: 10.1080/13645570903097950

Weiten, W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth.

Keywords: research methods, research approach, research trends, psychological research, systematised review, research designs, research topic

Citation: Scholtz SE, de Klerk W and de Beer LT (2020) The Use of Research Methods in Psychological Research: A Systematised Review. Front. Res. Metr. Anal. 5:1. doi: 10.3389/frma.2020.00001

Received: 30 December 2019; Accepted: 28 February 2020; Published: 20 March 2020.

Reviewed by:

Copyright © 2020 Scholtz, de Klerk and de Beer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Salomé Elizabeth Scholtz, 22308563@nwu.ac.za

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

  • Bipolar Disorder
  • Therapy Center
  • When To See a Therapist
  • Types of Therapy
  • Best Online Therapy
  • Best Couples Therapy
  • Best Family Therapy
  • Managing Stress
  • Sleep and Dreaming
  • Understanding Emotions
  • Self-Improvement
  • Healthy Relationships
  • Student Resources
  • Personality Types
  • Guided Meditations
  • Verywell Mind Insights
  • 2024 Verywell Mind 25
  • Mental Health in the Classroom
  • Editorial Process
  • Meet Our Review Board
  • Crisis Support

Introduction to Research Methods in Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

important of research in psychology

Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.

important of research in psychology

There are several different research methods in psychology , each of which can help researchers learn more about the way people think, feel, and behave. If you're a psychology student or just want to know the types of research in psychology, here are the main ones as well as how they work.

Three Main Types of Research in Psychology

stevecoleimages/Getty Images

Psychology research can usually be classified as one of three major types.

1. Causal or Experimental Research

When most people think of scientific experimentation, research on cause and effect is most often brought to mind. Experiments on causal relationships investigate the effect of one or more variables on one or more outcome variables. This type of research also determines if one variable causes another variable to occur or change.

An example of this type of research in psychology would be changing the length of a specific mental health treatment and measuring the effect on study participants.

2. Descriptive Research

Descriptive research seeks to depict what already exists in a group or population. Three types of psychology research utilizing this method are:

  • Case studies
  • Observational studies

An example of this psychology research method would be an opinion poll to determine which presidential candidate people plan to vote for in the next election. Descriptive studies don't try to measure the effect of a variable; they seek only to describe it.

3. Relational or Correlational Research

A study that investigates the connection between two or more variables is considered relational research. The variables compared are generally already present in the group or population.

For example, a study that looks at the proportion of males and females that would purchase either a classical CD or a jazz CD would be studying the relationship between gender and music preference.

Theory vs. Hypothesis in Psychology Research

People often confuse the terms theory and hypothesis or are not quite sure of the distinctions between the two concepts. If you're a psychology student, it's essential to understand what each term means, how they differ, and how they're used in psychology research.

A theory is a well-established principle that has been developed to explain some aspect of the natural world. A theory arises from repeated observation and testing and incorporates facts, laws, predictions, and tested hypotheses that are widely accepted.

A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, an experiment designed to look at the relationship between study habits and test anxiety might have a hypothesis that states, "We predict that students with better study habits will suffer less test anxiety." Unless your study is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your experiment or research.

While the terms are sometimes used interchangeably in everyday use, the difference between a theory and a hypothesis is important when studying experimental design.

Some other important distinctions to note include:

  • A theory predicts events in general terms, while a hypothesis makes a specific prediction about a specified set of circumstances.
  • A theory has been extensively tested and is generally accepted, while a hypothesis is a speculative guess that has yet to be tested.

The Effect of Time on Research Methods in Psychology

There are two types of time dimensions that can be used in designing a research study:

  • Cross-sectional research takes place at a single point in time. All tests, measures, or variables are administered to participants on one occasion. This type of research seeks to gather data on present conditions instead of looking at the effects of a variable over a period of time.
  • Longitudinal research is a study that takes place over a period of time. Data is first collected at the beginning of the study, and may then be gathered repeatedly throughout the length of the study. Some longitudinal studies may occur over a short period of time, such as a few days, while others may take place over a period of months, years, or even decades.

The effects of aging are often investigated using longitudinal research.

Causal Relationships Between Psychology Research Variables

What do we mean when we talk about a “relationship” between variables? In psychological research, we're referring to a connection between two or more factors that we can measure or systematically vary.

One of the most important distinctions to make when discussing the relationship between variables is the meaning of causation.

A causal relationship is when one variable causes a change in another variable. These types of relationships are investigated by experimental research to determine if changes in one variable actually result in changes in another variable.

Correlational Relationships Between Psychology Research Variables

A correlation is the measurement of the relationship between two variables. These variables already occur in the group or population and are not controlled by the experimenter.

  • A positive correlation is a direct relationship where, as the amount of one variable increases, the amount of a second variable also increases.
  • In a negative correlation , as the amount of one variable goes up, the levels of another variable go down.

In both types of correlation, there is no evidence or proof that changes in one variable cause changes in the other variable. A correlation simply indicates that there is a relationship between the two variables.

The most important concept is that correlation does not equal causation. Many popular media sources make the mistake of assuming that simply because two variables are related, a causal relationship exists.

Psychologists use descriptive, correlational, and experimental research designs to understand behavior . In:  Introduction to Psychology . Minneapolis, MN: University of Minnesota Libraries Publishing; 2010.

Caruana EJ, Roman M, Herandez-Sanchez J, Solli P. Longitudinal studies . Journal of Thoracic Disease. 2015;7(11):E537-E540. doi:10.3978/j.issn.2072-1439.2015.10.63

University of Berkeley. Science at multiple levels . Understanding Science 101 . Published 2012.

By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

American Psychological Association Logo

Psychology research is front and center

Though the COVID-19 pandemic has disrupted research, it has also highlighted the importance of psychology

Vol. 52 No. 1 Print version: Page 53

graphic representation of research data behind silhouettes

Physical distancing requirements around the COVID-19 pandemic have created undeniable difficulties for many psychology research projects that relied on in-person interactions, forcing academics to be flexible and creative.

In response, many researchers are moving as much work as possible online. Meanwhile, funding agencies are supporting accommodations on existing grants where possible and will likely be turning an eye toward research that could help prepare for the next pandemic.

“The pandemic has illustrated the importance of social and behavioral research, especially since our mitigation strategies and their impacts are predominately social and behavioral in nature,” says William Riley, PhD, the director of the Office of Behavioral and Social Sciences Research at the National Institutes of Health (NIH). “I believe it also increases the burden on social and behavioral scientists applying for grant funding to make a strong case for the public health impact of their research moving forward.”

The impacts of the pandemic on research have varied widely, even within labs. At the Rice University psychoneuroimmunology lab of Chris Fagundes, PhD, some graduate students were able to pivot immediately to analyzing existing data from home. Others who were in the process of conducting in-person experiments will see their degrees delayed by at least six months to a year. The biggest challenge has been avoiding laying off lab staff whose salaries are paid by stalled grants, Fagundes says. Fortunately, his lab’s emphasis on stress and the immune system made it possible to apply for supplemental grants through NIH to focus on pandemic-related outcomes. It’s a strategy that can both benefit lab employees and inform public health.

Post-lockdown, some researchers have been able to resume in-person activities with precautions and protective equipment. Others remain in limbo. Many scientists who work with rodents had to euthanize animals during lockdowns because animal care technicians could not work. Some of these scientists, leery of future shutdowns, have delayed expanding their colonies again. Socially distanced in-person research also moves slowly, says BJ Casey, PhD, a psychologist and collaborator on the Adolescent Brain Cognitive Development™ study at Yale University. Many families are reluctant to come in for brain imaging during a pandemic, Casey says. “All of us were far too optimistic that once we began to scan children again, we would be able to catch up fairly rapidly,” she says.

The sudden shift to virtual activities has occasionally been positive. At Penn State University, psychologist Daryl Cameron, PhD, was forced to move his “Expanding Empathy” symposia online, but he was pleasantly surprised that the change allowed panelists to participate in a conversation about empathy and COVID-19. “Getting everyone together like that for the panel webinar wouldn’t have happened in the in-person iteration of events,” Cameron says.

Families in Casey’s study have appreciated doing psychological assessments online rather than having to travel to her lab. Other researchers report more time to prepare papers for publication, a freedom reflected in the 25% increase in submissions to APA journals between January and September 2020 compared with January to September 2019. Data in some fields, however, suggests a gender gap in submissions, with women submitting fewer papers for publication (Kibbe, M. R., JAMA Surgery , Vol. 155, No. 9, 2020 ). APA is working to analyze data on submissions by gender to its journals, but the data were not available at press time.

Funding agencies have made efforts to support researchers during the pandemic upheaval. Like NIH, the National Science Foundation (NSF) has also offered COVID-19 opportunities: Its Directorate for Social, Behavioral, and Economic Sciences (SBE) had funded 240 RAPID Awards for a total of $32.4 million as of October 2020. That same month, NIH announced the Rapid Acceleration of Diagnostics Underserved Populations (RADx-UP) initiative, a $500 million program aimed at improving COVID-19 testing in vulnerable populations. Awardees include psychologists such as Leslie Leve, PhD, of the University of Oregon, whose project will implement an outreach and testing program for Oregon’s Latinx community, and Mary Cwik, PhD, a clinical psychologist at the Jo hns Hopkins Bloomberg School of Public Health, whose project will test interventions to expand testing access to American Indian communities.

While future federal funding depends on the decisions of a new Congress, NSF and NIH officials do say that the pandemic has brought the importance of psychology research to the forefront.

“If you are doing psychology research and you have the ability to tie your research agenda to the amazing public impact you could have at a moment like this, this agency would be receptive to those proposals,” says Arthur Lupia, PhD, the head of SBE. “We’re living in a society that desperately needs that kind of insight.”

Recommended Reading

Contact apa, you may also like.

CALL TO BOOK

PsychMed

  • Why Choose Us
  • About The Artwork
  • Vision and Mission Statement
  • Referral Pathways
  • Fees, Rebates & Services
  • Your Rights
  • Carer Rights
  • Mental Health Adelaide Help
  • SA Intensive Gambling Help Service
  • Trauma Services
  • Suicide Prevention
  • The Matrix Program
  • The RPG Program
  • Disability (NDIS)
  • Rural and Remote Help
  • Psychological Assessments and Reports
  • Employee Assistance Programs
  • Therapy for Internet Gaming Disorder (IGD)
  • Remedial Massage
  • Game Therapy (Lego® / Minecraft)
  • Trauma-Informed Care
  • Adelaide Clinic (City)
  • Gilles Plains Clinic
  • Seaview Downs Clinic
  • Sunshine Coast

The role of research in the practice of psychology

important of research in psychology

‘I’ve done my research,’ is a phrase that seems to be spoken more and more often. As information continues to become easier to produce and access, doing research is likely to become more relevant for everybody. In psychology, research plays an essential part in understanding human behaviour, and in the assessment, diagnosis and treatment of psychological disorders.

As a science, the body of knowledge under the heading ‘psychology’ concerns our knowledge of human behaviour that has been acquired through scientific research. Behaviour can be researched through an array of techniques and study designs, which gives individual studies unique qualities that affect the conclusions that can be drawn from them. This means a single piece of research will rarely provide a comprehensive understanding of a particular problem, and that research needs to be ongoing.

In Australia, registration as a psychologist requires university study involving both training in research and conducting research itself. This enables psychologists to do their own research, and also to understand, critique and apply others’ research to reach their own conclusions. Conducting high-quality research requires critical thinking, rigour, logic, and objectivity, which can be applied to assessing the quality of studies they use to inform their practice.

A key area of application for research in psychology is in developing, administering and interpreting psychological assessments. To provide real world meaning of the results, development of psychological assessments requires research in actual populations. The research that is used to develop an assessment has a substantial effect on determining if it is appropriate to use, how the test should be administered, and how the results should be interpreted. Because of this, understanding the research behind an assessment is important in psychological practice. It enables psychologists to better explain what results ‘mean’.

Research is also conducted in psychology to develop treatments for psychological disorders, determine whether they are effective, and use them in clinical practice. As psychologists are required to follow evidence-based practice, treatments used by psychologists have been demonstrated under scientific conditions to produce results. Research into the efficacy of treatments enables psychologists to better understand the variables involved and ensures treatments are applied in the most effective way. As an example, a study conducted by PsychMed on rates of remission for methamphetamine addiction showing that people who gave up tobacco and methamphetamine had higher rates of remission than those who quit methamphetamine alone, has helped us to advise on issues around co-substance use.  

Finally, research plays a role in measurement-based treatment/care. Measurement-based treatment is a systematic approach to mental health care that involves using standardised assessments to track a patient’s progress and adjust their treatment plan as needed. This approach is based on the idea that regular monitoring and assessment can help identify changes in a patient’s symptoms or functioning, and allow for timely adjustments to their treatment plan to address any changes that may be occurring in different aspects of a patient’s mental health, including their symptoms, functioning, and overall well-being.

 One of the main advantages of measurement-based treatment is that it provides a systematic and objective way to monitor a patient’s progress over time. This can be particularly helpful for patients with chronic mental health conditions, as it can allow for more precise tracking of their symptoms and functioning and can help ensure that their treatment plan is appropriate and effective. In addition to tracking a patient’s progress, measurement-based treatment may also involve setting specific treatment goals and working with the patient to develop strategies to achieve those goals.

In summary, research is crucial in understanding human behaviour, as well as in the assessment, diagnosis, and treatment of psychological disorders. Psychological assessments and interventions are developed through rigorous research, and psychologists being trained in research enables them to understand, critique, and apply this research in their own practice. Measurement-based treatment, which involves using standardised assessments to track a patient’s progress and adjust their treatment plan as needed, can be seen as research on the smallest, but also most relevant scale. Understanding the research behind these tools and approaches is vital for psychologists to provide the most effective care for their patients.

Why Is Research Important?

Please log in to save materials. Log in

  • EPUB 3 Student View
  • PDF Student View
  • Thin Common Cartridge
  • Thin Common Cartridge Student View
  • SCORM Package
  • SCORM Package Student View
  • 2 - USE OF RESEARCH INFORMATION
  • 3 - THE PROCESS OF SCIENTIFIC RESEARCH
  • 4 - Summary
  • 5 - Review Questions
  • 6 - Critical Thinking Questions
  • 7 - Personal Application Questions
  • View all as one page

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession ( Figure ). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.

A skull has a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Ethical Considerations In Psychology Research

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

On This Page:

Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

However important the issue under investigation, psychologists must remember that they have a duty to respect the rights and dignity of research participants. This means that they must abide by certain moral principles and rules of conduct.

What are Ethical Guidelines?

In Britain, ethical guidelines for research are published by the British Psychological Society, and in America, by the American Psychological Association. The purpose of these codes of conduct is to protect research participants, the reputation of psychology, and psychologists themselves.

Moral issues rarely yield a simple, unambiguous, right or wrong answer. It is, therefore, often a matter of judgment whether the research is justified or not.

For example, it might be that a study causes psychological or physical discomfort to participants; maybe they suffer pain or perhaps even come to serious harm.

On the other hand, the investigation could lead to discoveries that benefit the participants themselves or even have the potential to increase the sum of human happiness.

Rosenthal and Rosnow (1984) also discuss the potential costs of failing to carry out certain research. Who is to weigh up these costs and benefits? Who is to judge whether the ends justify the means?

Finally, if you are ever in doubt as to whether research is ethical or not, it is worthwhile remembering that if there is a conflict of interest between the participants and the researcher, it is the interests of the subjects that should take priority.

Studies must now undergo an extensive review by an institutional review board (US) or ethics committee (UK) before they are implemented. All UK research requires ethical approval by one or more of the following:

  • Department Ethics Committee (DEC) : for most routine research.
  • Institutional Ethics Committee (IEC) : for non-routine research.
  • External Ethics Committee (EEC) : for research that s externally regulated (e.g., NHS research).

Committees review proposals to assess if the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These committees may request researchers make changes to the study’s design or procedure or, in extreme cases, deny approval of the study altogether.

The British Psychological Society (BPS) and American Psychological Association (APA) have issued a code of ethics in psychology that provides guidelines for conducting research.  Some of the more important ethical issues are as follows:

Informed Consent

Before the study begins, the researcher must outline to the participants what the research is about and then ask for their consent (i.e., permission) to participate.

An adult (18 years +) capable of being permitted to participate in a study can provide consent. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to get potential participants to say “Yes.”

They also need to know what it is that they agree to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.

Informed consent must be informed, voluntary, and rational. Participants must be given relevant details to make an informed decision, including the purpose, procedures, risks, and benefits. Consent must be given voluntarily without undue coercion. And participants must have the capacity to rationally weigh the decision.

Components of informed consent include clearly explaining the risks and expected benefits, addressing potential therapeutic misconceptions about experimental treatments, allowing participants to ask questions, and describing methods to minimize risks like emotional distress.

Investigators should tailor the consent language and process appropriately for the study population. Obtaining meaningful informed consent is an ethical imperative for human subjects research.

The voluntary nature of participation should not be compromised through coercion or undue influence. Inducements should be fair and not excessive/inappropriate.

However, it is not always possible to gain informed consent.  Where the researcher can’t ask the actual participants, a similar group of people can be asked how they would feel about participating.

If they think it would be OK, then it can be assumed that the real participants will also find it acceptable. This is known as presumptive consent.

However, a problem with this method is that there might be a mismatch between how people think they would feel/behave and how they actually feel and behave during a study.

In order for consent to be ‘informed,’ consent forms may need to be accompanied by an information sheet for participants’ setting out information about the proposed study (in lay terms), along with details about the investigators and how they can be contacted.

Special considerations exist when obtaining consent from vulnerable populations with decisional impairments, such as psychiatric patients, intellectually disabled persons, and children/adolescents. Capacity can vary widely so should be assessed individually, but interventions to improve comprehension may help. Legally authorized representatives usually must provide consent for children.

Participants must be given information relating to the following:

  • A statement that participation is voluntary and that refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive.
  • Purpose of the research.
  • All foreseeable risks and discomforts to the participant (if there are any). These include not only physical injury but also possible psychological.
  • Procedures involved in the research.
  • Benefits of the research to society and possibly to the individual human subject.
  • Length of time the subject is expected to participate.
  • Person to contact for answers to questions or in the event of injury or emergency.
  • Subjects” right to confidentiality and the right to withdraw from the study at any time without any consequences.
Debriefing after a study involves informing participants about the purpose, providing an opportunity to ask questions, and addressing any harm from participation. Debriefing serves an educational function and allows researchers to correct misconceptions. It is an ethical imperative.

After the research is over, the participant should be able to discuss the procedure and the findings with the psychologist. They must be given a general idea of what the researcher was investigating and why, and their part in the research should be explained.

Participants must be told if they have been deceived and given reasons why. They must be asked if they have any questions, which should be answered honestly and as fully as possible.

Debriefing should occur as soon as possible and be as full as possible; experimenters should take reasonable steps to ensure that participants understand debriefing.

“The purpose of debriefing is to remove any misconceptions and anxieties that the participants have about the research and to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (Harris, 1998).

The debriefing aims to provide information and help the participant leave the experimental situation in a similar frame of mind as when he/she entered it (Aronson, 1988).

Exceptions may exist if debriefing seriously compromises study validity or causes harm itself, like negative emotions in children. Consultation with an institutional review board guides exceptions.

Debriefing indicates investigators’ commitment to participant welfare. Harms may not be raised in the debriefing itself, so responsibility continues after data collection. Following up demonstrates respect and protects persons in human subjects research.

Protection of Participants

Researchers must ensure that those participating in research will not be caused distress. They must be protected from physical and mental harm. This means you must not embarrass, frighten, offend or harm participants.

Normally, the risk of harm must be no greater than in ordinary life, i.e., participants should not be exposed to risks greater than or additional to those encountered in their normal lifestyles.

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, ensure their participation is brief as they get tired easily and have a limited attention span.

Researchers are not always accurately able to predict the risks of taking part in a study, and in some cases, a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoners/guards study ).

Deception research involves purposely misleading participants or withholding information that could influence their participation decision. This method is controversial because it limits informed consent and autonomy, but can provide otherwise unobtainable valuable knowledge.

Types of deception include (i) deliberate misleading, e.g. using confederates, staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g., failure to disclose full information about the study, or creating ambiguity.

The researcher should avoid deceiving participants about the nature of the research unless there is no alternative – and even then, this would need to be judged acceptable by an independent expert. However, some types of research cannot be carried out without at least some element of deception.

For example, in Milgram’s study of obedience , the participants thought they were giving electric shocks to a learner when they answered a question wrongly. In reality, no shocks were given, and the learners were confederates of Milgram.

This is sometimes necessary to avoid demand characteristics (i.e., the clues in an experiment that lead participants to think they know what the researcher is looking for).

Another common example is when a stooge or confederate of the experimenter is used (this was the case in both the experiments carried out by Asch ).

According to ethics codes, deception must have strong scientific justification, and non-deceptive alternatives should not be feasible. Deception that causes significant harm is prohibited. Investigators should carefully weigh whether deception is necessary and ethical for their research.

However, participants must be deceived as little as possible, and any deception must not cause distress.  Researchers can determine whether participants are likely distressed when deception is disclosed by consulting culturally relevant groups.

Participants should immediately be informed of the deception without compromising the study’s integrity. Reactions to learning of deception can range from understanding to anger. Debriefing should explain the scientific rationale and social benefits to minimize negative reactions.

If the participant is likely to object or be distressed once they discover the true nature of the research at debriefing, then the study is unacceptable.

If you have gained participants’ informed consent by deception, then they will have agreed to take part without actually knowing what they were consenting to.  The true nature of the research should be revealed at the earliest possible opportunity or at least during debriefing.

Some researchers argue that deception can never be justified and object to this practice as it (i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which to build a discipline; and (iii) leads to distrust of psychology in the community.

Confidentiality

Protecting participant confidentiality is an ethical imperative that demonstrates respect, ensures honest participation, and prevents harms like embarrassment or legal issues. Methods like data encryption, coding systems, and secure storage should match the research methodology.

Participants and the data gained from them must be kept anonymous unless they give their full consent.  No names must be used in a lab report .

Researchers must clearly describe to participants the limits of confidentiality and methods to protect privacy. With internet research, threats exist like third-party data access; security measures like encryption should be explained. For non-internet research, other protections should be noted too, like coding systems and restricted data access.

High-profile data breaches have eroded public trust. Methods that minimize identifiable information can further guard confidentiality. For example, researchers can consider whether birthdates are necessary or just ages.

Generally, reducing personal details collected and limiting accessibility safeguards participants. Following strong confidentiality protections demonstrates respect for persons in human subjects research.

What do we do if we discover something that should be disclosed (e.g., a criminal act)? Researchers have no legal obligation to disclose criminal acts and must determine the most important consideration: their duty to the participant vs. their duty to the wider community.

Ultimately, decisions to disclose information must be set in the context of the research aims.

Withdrawal from an Investigation

Participants should be able to leave a study anytime if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw.

They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).

Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits; they may worry they won’t get this if they withdraw.

Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.

Ethical Issues in Psychology & Socially Sensitive Research

There has been an assumption over the years by many psychologists that provided they follow the BPS or APA guidelines when using human participants and that all leave in a similar state of mind to how they turned up, not having been deceived or humiliated, given a debrief, and not having had their confidentiality breached, that there are no ethical concerns with their research.

But consider the following examples:

a) Caughy et al. 1994 found that middle-class children in daycare at an early age generally score less on cognitive tests than children from similar families reared in the home.

Assuming all guidelines were followed, neither the parents nor the children participating would have been unduly affected by this research. Nobody would have been deceived, consent would have been obtained, and no harm would have been caused.

However, consider the wider implications of this study when the results are published, particularly for parents of middle-class infants who are considering placing their young children in daycare or those who recently have!

b)  IQ tests administered to black Americans show that they typically score 15 points below the average white score.

When black Americans are given these tests, they presumably complete them willingly and are not harmed as individuals. However, when published, findings of this sort seek to reinforce racial stereotypes and are used to discriminate against the black population in the job market, etc.

Sieber & Stanley (1988) (the main names for Socially Sensitive Research (SSR) outline 4 groups that may be affected by psychological research: It is the first group of people that we are most concerned with!
  • Members of the social group being studied, such as racial or ethnic group. For example, early research on IQ was used to discriminate against US Blacks.
  • Friends and relatives of those participating in the study, particularly in case studies, where individuals may become famous or infamous. Cases that spring to mind would include Genie’s mother.
  • The research team. There are examples of researchers being intimidated because of the line of research they are in.
  • The institution in which the research is conducted.
salso suggest there are 4 main ethical concerns when conducting SSR:
  • The research question or hypothesis.
  • The treatment of individual participants.
  • The institutional context.
  • How the findings of the research are interpreted and applied.

Ethical Guidelines For Carrying Out SSR

Sieber and Stanley suggest the following ethical guidelines for carrying out SSR. There is some overlap between these and research on human participants in general.

Privacy : This refers to people rather than data. Asking people questions of a personal nature (e.g., about sexuality) could offend.

Confidentiality: This refers to data. Information (e.g., about H.I.V. status) leaked to others may affect the participant’s life.

Sound & valid methodology : This is even more vital when the research topic is socially sensitive. Academics can detect flaws in methods, but the lay public and the media often don’t.

When research findings are publicized, people are likely to consider them fact, and policies may be based on them. Examples are Bowlby’s maternal deprivation studies and intelligence testing.

Deception : Causing the wider public to believe something, which isn’t true by the findings, you report (e.g., that parents are responsible for how their children turn out).

Informed consent : Participants should be made aware of how participating in the research may affect them.

Justice & equitable treatment : Examples of unjust treatment are (i) publicizing an idea, which creates a prejudice against a group, & (ii) withholding a treatment, which you believe is beneficial, from some participants so that you can use them as controls.

Scientific freedom : Science should not be censored, but there should be some monitoring of sensitive research. The researcher should weigh their responsibilities against their rights to do the research.

Ownership of data : When research findings could be used to make social policies, which affect people’s lives, should they be publicly accessible? Sometimes, a party commissions research with their interests in mind (e.g., an industry, an advertising agency, a political party, or the military).

Some people argue that scientists should be compelled to disclose their results so that other scientists can re-analyze them. If this had happened in Burt’s day, there might not have been such widespread belief in the genetic transmission of intelligence. George Miller (Miller’s Magic 7) famously argued that we should give psychology away.

The values of social scientists : Psychologists can be divided into two main groups: those who advocate a humanistic approach (individuals are important and worthy of study, quality of life is important, intuition is useful) and those advocating a scientific approach (rigorous methodology, objective data).

The researcher’s values may conflict with those of the participant/institution. For example, if someone with a scientific approach was evaluating a counseling technique based on a humanistic approach, they would judge it on criteria that those giving & receiving the therapy may not consider important.

Cost/benefit analysis : It is unethical if the costs outweigh the potential/actual benefits. However, it isn’t easy to assess costs & benefits accurately & the participants themselves rarely benefit from research.

Sieber & Stanley advise that researchers should not avoid researching socially sensitive issues. Scientists have a responsibility to society to find useful knowledge.

  • They need to take more care over consent, debriefing, etc. when the issue is sensitive.
  • They should be aware of how their findings may be interpreted & used by others.
  • They should make explicit the assumptions underlying their research so that the public can consider whether they agree with these.
  • They should make the limitations of their research explicit (e.g., ‘the study was only carried out on white middle-class American male students,’ ‘the study is based on questionnaire data, which may be inaccurate,’ etc.
  • They should be careful how they communicate with the media and policymakers.
  • They should be aware of the balance between their obligations to participants and those to society (e.g. if the participant tells them something which they feel they should tell the police/social services).
  • They should be aware of their own values and biases and those of the participants.

Arguments for SSR

  • Psychologists have devised methods to resolve the issues raised.
  • SSR is the most scrutinized research in psychology. Ethical committees reject more SSR than any other form of research.
  • By gaining a better understanding of issues such as gender, race, and sexuality, we are able to gain greater acceptance and reduce prejudice.
  • SSR has been of benefit to society, for example, EWT. This has made us aware that EWT can be flawed and should not be used without corroboration. It has also made us aware that the EWT of children is every bit as reliable as that of adults.
  • Most research is still on white middle-class Americans (about 90% of research is quoted in texts!). SSR is helping to redress the balance and make us more aware of other cultures and outlooks.

Arguments against SSR

  • Flawed research has been used to dictate social policy and put certain groups at a disadvantage.
  • Research has been used to discriminate against groups in society, such as the sterilization of people in the USA between 1910 and 1920 because they were of low intelligence, criminal, or suffered from psychological illness.
  • The guidelines used by psychologists to control SSR lack power and, as a result, are unable to prevent indefensible research from being carried out.

American Psychological Association. (2002). American Psychological Association ethical principles of psychologists and code of conduct. www.apa.org/ethics/code2002.html

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s” Behavioral study of obedience.”.  American Psychologist ,  19 (6), 421.

Caughy, M. O. B., DiPietro, J. A., & Strobino, D. M. (1994). Day‐care participation as a protective factor in the cognitive development of low‐income children.  Child development ,  65 (2), 457-471.

Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188-212). New York: Oxford University Press.

Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet’s question to the ethical conduct of research: A conceptual addendum. American Psychologist, 39(5) , 561.

Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research.  American psychologist ,  43 (1), 49.

The British Psychological Society. (2010). Code of Human Research Ethics. www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf

Further Information

  • MIT Psychology Ethics Lecture Slides

BPS Documents

  • Code of Ethics and Conduct (2018)
  • Good Practice Guidelines for the Conduct of Psychological Research within the NHS
  • Guidelines for Psychologists Working with Animals
  • Guidelines for ethical practice in psychological research online

APA Documents

APA Ethical Principles of Psychologists and Code of Conduct

Print Friendly, PDF & Email

Related Articles

Discourse Analysis

Research Methodology

Discourse Analysis

Phenomenology In Qualitative Research

Phenomenology In Qualitative Research

Ethnography In Qualitative Research

Ethnography In Qualitative Research

Narrative Analysis In Qualitative Research

Narrative Analysis In Qualitative Research

Thematic Analysis: A Step by Step Guide

Thematic Analysis: A Step by Step Guide

Metasynthesis Of Qualitative Research

Metasynthesis Of Qualitative Research

APS

Presidential Column

The importance of psychological research at nichd.

  • Biological/Neuroscience
  • Developmental Psychology
  • Environmental Effects
  • Health Psychology
  • Language Development
  • Mental Health
  • Nancy Eisenberg Columns
  • National Institute of Child Health & Human Development (NICHD)

Eisenberg_Nancy_board_web

Nancy Eisenberg

Many psychological scientists who have conducted research for some time have a home or favorite funding agency. Mine is the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). Although NICHD supports considerable medical research, for decades it has been a strong funder of basic psychological research relevant to human development, broadly defined. I got my first R01 (major individual principal investigator) grant and my first Research Scientist Development award (also a grant) at NICHD. The first review panel I served on was housed at NICHD (in the old days, review panels tended to be within NIH institutes, rather than reviewing grants across institutes). When the National Institute of Mental Health (NIMH), which used to fund large proportions of developmental, social, and personality research, decided to no longer support research on normal samples (unless, perhaps, the work was highly biological/neural) or research examining externalizing, internalizing, and related symptoms rather than diagnosed mental illnesses, many of us who had been funded by NIMH flocked to NICHD. NICHD funds a broad array of topics and approaches and has been very supportive of young scholars and scholars from diverse disciplines and subdisciplines of psychology.

Our field is very lucky that Dr. Alan Guttmacher, a pediatrician and geneticist who became director of NICHD in 2009, has an understanding and appreciation of the contributions that psychological science has made, and can make in the future, to an understanding of human development and the promotion of healthy development. When Dr. Guttmacher assumed the leadership of NICHD, he had a series of meetings to chart the future directions of NICHD and included many psychologists in the discussions. Under his leadership, NICHD continues to be a major source of support and direction for quality psychological research. Thus, I was very pleased when Dr. Guttmacher agreed to write a presidential column for the Observer . -Nancy Eisenberg

Guttmacher_Alan_web

Alan Guttmacher

Many thoughtful people draw a distinction between “biomedical” and “psychological” research. While I understand the basis for this distinction, I think it does a disservice. Psychological research is a key component of biomedical research. Human well-being, health, and disease are shaped by the complex interweaving of biological, environmental (using that term quite broadly, encompassing everything from the built environment to culture), and psychological factors. Biomedical research, at its best, not only examines each of these factors, but attempts to explore that complex interweaving of them.

Thus, it is unsurprising that the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) views psychological research as critical to achieving its mission. For instance, NICHD works to enable all children to have the chance to achieve their full potential for healthy and productive lives, free from disease or disability. Psychological research helps us reach this goal by providing an understanding of children’s typical and atypical journeys across the developmental trajectory.      

Psychological science plays a large part in understanding early childhood development and in informing efforts to optimize it. Researchers in NICHD’s intramural program study a variety of family situations to understand how sociodemographic factors influence development. For example, NICHD researchers study biological families and adoptive families to understand factors that influence the parent–child relationships that develop in each. Understanding similarities and differences between these situations helps practitioners understand the long-term effects of adoption on child development. This work is important because the findings can help professionals understand what might put a family at risk or provide a protective factor against risk and enhance interventions that improve family relationships.

NICHD’s extramural Child Development and Behavior Branch develops scientific initiatives and supports research and research training relevant to the psychological, psychobiological, language, behavioral, and educational development and health of children. For example, NICHD-supported researchers study at-risk populations to understand how health disparities and socioeconomic factors influence how different populations cope with stress. Researchers also study the long-term effects of adversity due to discrimination or socioeconomic factors to understand how stress plays a role in the development of chronic diseases such as heart disease and type 2 diabetes, while also looking at biomarkers of metabolic syndrome and pro-inflammatory tendencies, which are linked to chronic diseases of aging. Such research provides an important understanding of the relationship among stress, coping, and health.

NICHD’s Pediatric Trauma and Critical Illness Branch supports research and research training in pediatric trauma, injury, and critical illness. Throughout the course of development, children may experience any number of traumatic events — including child abuse or neglect, separation due to a parent’s military service, or natural disasters. Psychological research helps us to understand how traumatic experiences affect children’s psychosocial states, including how these stressful experiences during childhood may have effects on lifelong health. Studying these areas can also offer insight about protective factors or coping mechanisms that contribute to resiliency — an area of health and well-being that is too often overlooked.

Obesity is an obvious example of how biological, environmental, and psychological factors all play key roles in well-being, health, and disease — and, therefore, that research into each of these factors is crucial. Reversing the obesity epidemic will require better understanding of biology, environment, and psychology. Research focused on one factor can improve research into another. For instance, research at NIH’s clinical center in Bethesda, Maryland, which examines the roles of genes and the environment in obesity, can inform psychological research, with the end goal of utilizing all three factors to design effective interventions to tackle this national epidemic.

NICHD researchers have made progress in the area of teenage driving, through research that has identified a number of the underlying causes of risk for drivers, including teenage passengers and smartphone use and other distractions. We now understand the effectiveness of parental supervision and involvement, including setting limits on driving privileges that can reduce risky behaviors when their teens are behind the wheel. Research like this promises to bridge the divide between basic and applied population health science to create safer driving experiences for teenage drivers.

The understanding and insight gained from psychological science is extremely important to achieving the NICHD mission. The examples cited here are a very small sampling of the ways that psychological research has always been, and will continue to be, at the heart of what NICHD is about.

APS regularly opens certain online articles for discussion on our website. Effective February 2021, you must be a logged-in APS member to post comments. By posting a comment, you agree to our Community Guidelines and the display of your profile information, including your name and affiliation. Any opinions, findings, conclusions, or recommendations present in article comments are those of the writers and do not necessarily reflect the views of APS or the article’s author. For more information, please see our Community Guidelines .

Please login with your APS account to comment.

About the Author

Alan Guttmacher is the Director of the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the focal point at the National Institutes of Health for research in pediatric health and development, maternal health, reproductive health, intellectual and developmental disabilities, and rehabilitation medicine.

important of research in psychology

Teaching: Ethical Research to Help Romania’s Abandoned Children 

An early intervention experiment in Bucharest can introduce students to the importance of responsive caregiving during human development.

important of research in psychology

Silver Linings in the Demographic Revolution 

Podcast: In her final column as APS President, Alison Gopnik makes the case for more effectively and creatively caring for vulnerable humans at either end of life.

important of research in psychology

Communicating Psychological Science: The Lifelong Consequences of Early Language Skills

“When families are informed about the importance of conversational interaction and are provided training, they become active communicators and directly contribute to reducing the word gap (Leung et al., 2020).”

Privacy Overview

CookieDurationDescription
__cf_bm30 minutesThis cookie, set by Cloudflare, is used to support Cloudflare Bot Management.
CookieDurationDescription
AWSELBCORS5 minutesThis cookie is used by Elastic Load Balancing from Amazon Web Services to effectively balance load on the servers.
CookieDurationDescription
at-randneverAddThis sets this cookie to track page visits, sources of traffic and share counts.
CONSENT2 yearsYouTube sets this cookie via embedded youtube-videos and registers anonymous statistical data.
uvc1 year 27 daysSet by addthis.com to determine the usage of addthis.com service.
_ga2 yearsThe _ga cookie, installed by Google Analytics, calculates visitor, session and campaign data and also keeps track of site usage for the site's analytics report. The cookie stores information anonymously and assigns a randomly generated number to recognize unique visitors.
_gat_gtag_UA_3507334_11 minuteSet by Google to distinguish users.
_gid1 dayInstalled by Google Analytics, _gid cookie stores information on how visitors use a website, while also creating an analytics report of the website's performance. Some of the data that are collected include the number of visitors, their source, and the pages they visit anonymously.
CookieDurationDescription
loc1 year 27 daysAddThis sets this geolocation cookie to help understand the location of users who share the information.
VISITOR_INFO1_LIVE5 months 27 daysA cookie set by YouTube to measure bandwidth that determines whether the user gets the new or old player interface.
YSCsessionYSC cookie is set by Youtube and is used to track the views of embedded videos on Youtube pages.
yt-remote-connected-devicesneverYouTube sets this cookie to store the video preferences of the user using embedded YouTube video.
yt-remote-device-idneverYouTube sets this cookie to store the video preferences of the user using embedded YouTube video.
yt.innertube::nextIdneverThis cookie, set by YouTube, registers a unique ID to store data on what videos from YouTube the user has seen.
yt.innertube::requestsneverThis cookie, set by YouTube, registers a unique ID to store data on what videos from YouTube the user has seen.
  • Department of Psychology
  • Hall of Fame
  • Research Focus
  • Recent Research
  • Past Honours Research Projects
  • Studying Psychology at UCT
  • Applying for Undergraduate Studies in Psychology
  • Bachelor of Social Science (Honours) specialising in Psychology
  • How to apply for Honours in Psychology
  • MA in Clinical Psychology
  • MA in Psychological Research
  • MA in Neuropsychology
  • MSocSci in Psychology
  • PhD in Psychology
  • What can I do with my Psychology degree?
  • Ethical Approval
  • Library Guide
  • Plagiarism Policy
  • News archive
  • Contact details for the Child Guidance Clinic

Master of Arts in Neuropsychology

Programme requirements.

1. All applicants must be in possession of an Honours degree in psychology from a South African university (or an equivalent qualification recognised by the University of Cape Town and the Professional Board of Psychology). Students applying from outside of South Africa (e.g., neighbouring countries) should apply to the  South African Qualifications Authority  to have their degree evaluated. SAQA LEVEL 8 is the minimum requirement for your application. Please visit the SAQA website for further information regarding the evaluation of foreign qualifications. Please note that a Psychology Honours degree from UCT is not a requirement. 

2. An overall average mark of at least 70% for a Psychology Honours program.

3. A minimum of 70% for Neuropsychology (or equivalent) at Honours level.** 

If you do not have a final mark for your full Honours degree or for your Neuropsychology (or equivalent) course, you must include a letter from your course convenor/s confirming the partial marks you currently have for your:

  • Honours overall coursework mark to date
  • Honours Neuropsychology (or equivalent) course mark 

**Please note that the Psychology Honours programme at UCT is a single course that includes all content modules and a research component.  It is not possible to register for the Honours Neuropsychology module on its own .  Selection into the UCT Honours programme is also highly competitive.

Please note that the research dissertation comprises a substantial proportion (50%) of the degree mark, so appropriate training in Psychological research is also necessary.

Please see the UCT Humanities Postgraduate Handbook for more information. 

Application Procedure

NOTE! In order to apply for an M.A. in Neuropsychology at UCT, all candidates MUST complete BOTH UCT's application form (step 1 below)  AND complete the internal departmental application form (step 2 below). Failure to complete one or both of these will mean that your application is incomplete and will not be considered). 

Please ensure that you complete all the steps outlined below by the closing date: 31 October 2024

1.  Make an online application for study through the central UCT Admissions Office by no later than 31 October 2024 . Applications open from 2 April 2024. 

2.  Complete the Departmental application form . In this application you will need to submit a motivational letter, academic transcript(s), degree certificate(s), progress report (if currently completing honours), proof of payment (see below), and the names and contact details of two referees (see below). Click here to access the departmental application form . This must be completed and submitted by 31 October 2024 . No late applications will be considered.

3.  In addition, two referee reports must be submitted (one must be an academic referee, preferably your honours supervisor). Please provide your two chosen referees with your  UCT Student Number  as they will need to include this information when submitting a report for you. If you are new to UCT, you will be given a student number once you have completed the UCT application (see point [1] above). Please send the following link to your two chosen referees: https://forms.gle/hJMg8zACFr6N8FZQ8 . Referee reports are due when internal applications close on  31 October 2024 . It is your responsibility to ensure that your referees have submitted their reports by the deadline. No late referee reports will be accepted.   

4. Proof of EFT payment of the departmental application fee. The fee is  R150  for South African applicants and  R175  for international applicants. You will need to submit these documents with your application (in point [2] above). If we do not receive the proof of payment with your application, we have no record of the transaction. Banking details are provided below. Please note that this is an additional fee to UCT's general application fee. 

To apply for funding, please see the Postgraduate Funding Office .

Please note that incomplete applications will not be considered.

See more in the Faculty of Humanities Postgraduate Handbook .

Banking Details:

STANDARD BANK
UCT- Sundries
Account number: 071503854
Business Current Account
Rondebosch branch code: 025009

 (if you can fit in NEU)

Selection Procedure

Selection into this program is highly competitive , as we get many more applicants than we can accommodate.  There are only 6 places available each year.  When making the selection we take into consideration academic record (especially at Honours level, but also overall; and appropriate academic background in Neuropsychology and cognate areas), personal suitability for clinical work, and a letter of motivation.  We also conform to UCT policy on equity. 

Applicants will be short-listed and will be required to attend an in-person interview if chosen. If you are not contacted for an interview, it means that your application was unsuccessful. Interviews typically take place in December. Interview dates will be relayed as soon as they become available. 

Important Notice

The HPCSA has now opened the Neuropsychology register.  Many of our graduates have taken the Board exam and are now registered.  The uncertainty around the Neuropsychology qualification therefore no longer pertains. Successful completion of UCT’s accredited MA Neuropsychology degree, an HPCSA approved internship, and their Board Exam, should qualify you for registration as a Neuropsychologist. However, ultimate authority to register an individual rests with the HPCSA and not with UCT.

Frequently Asked Questions

Q: I am doing Course X at University Y but have not completed Neuro-Psychology at Honours level.  Is my course equivalent?**

A: Some Honours level courses that cover brain and behaviour, physiological psychology, or human neuroscience topics may be considered equivalent to Neuro-Psychology at Honours level.  This depends on the particular course’s content and the level at which the course is taught.  Decisions regarding such courses will be made during the application and selection process each year. Please provide details in your letter of application.

Q: How is clinical suitability determined?

A: We use information from various parts of the full application and we interview short-listed candidates.

Q: Is it possible to do the course part-time?

A:  This is a full-time clinical training program. It is not possible to take it on a part-time basis. Students are required to be in Cape Town to complete their training. 

For any additional queries please contact Mia Karriem via email: [email protected] .

Jamie Cannon MS, LPC

3 Signs Someone Is Using Guilt to Manipulate You

Guilt can become a tool in the hands of a manipulator..

Updated June 19, 2024 | Reviewed by Ray Parker

  • Coping With Guilt
  • Find a therapist near me

Guilt is a natural, commonly occurring human emotion —one that usually sparks intense self-reflection and, at times, can be the catalyst for behavioral change . Guilt can be an important emotion to pay attention to and allows humans to contemplate the impact of their actions on others. However, there are instances when guilt can become toxic, a chronic emotion that becomes out of proportion to the situation at hand.

Research on the Aftereffects of Chronic Guilt

Recent research has explored how effective guilt really is in changing behaviors, and though evidence suggests the emotion does have positive prosocial results, it also indicates the potential for negative effects as well. Chronic or toxic guilt can lead to anxiety , depression , and even a compromised immune system. Those aftereffects are the opposite of what guilt should produce. In its truest form, guilt should promote personal responsibility—and empathy—rather than an unstable mental state.

When it comes to emotions, manipulators are skilled at recognizing and using them to their advantage—a necessary skill to make others bend to your will. Positive or negative emotions can become deadly in the hands of a chronic manipulator, and guilt is often one of their most widely exploited feelings.

Manipulators and Guilt

Guilt is a tool that can be used to elicit compliance in others, and while that may have its place, manipulators are experts at twisting this aspect of guilt to their advantage. Manipulators who use guilt to get what they want from others are engaging in emotional blackmail, a tactic that can disrupt relationships and result in significant damage to self-esteem . Protect yourself against those tactics by recognizing some common signs that a manipulator may be using guilt to manipulate you:

1. Their “poor me” mentality takes center stage. Manipulators rarely take responsibility for the true motivations behind their actions, and much of their time is spent convincing others they have been victimized in some way. When it comes to using guilt as a tool, these individuals excel at persuading others they have been hurt—and that compliance on the part of whoever hurt them is the best compensation. That can look like punishing the people they believe injured them in some way, believing they have the right to exert their will onto those people, or even coercing those people into behaviors they would normally refuse, all out of a sense of guilt.

When it comes to manipulators, they quickly grasp your feelings of guilt and turn them against you, subtly suggesting that those very emotions are the reason you should allow yourself to be treated however they want. If they convince you that they are the true victim and prey on your empathy in the process, their next step is using that win to ensure you submit to their wishes.

2. They try to appear perfect while highlighting your deficiencies. Individuals who remind you of all they’ve done for you—a laundry list of their superiority—could be using that as manipulation, a reminder that you should feel guilty for not measuring up. Remarks like “I’m the one who cares the most” or “I’ve always done that for you” are leading statements meant to provoke negative feelings of guilt in others—and get them to give in.

Manipulators have a long memory when it comes to their successes and a short one when it comes to yours. If someone consistently focuses on everything you’ve done wrong while spotlighting only what they have done right, it should be a warning signal that guilt is being used against you.

3. They hint that you “owe” them. Manipulators keep a checklist of the favors they believe others owe them and never engage in positive behaviors without ulterior motives. If a chronic manipulator has offered to help you in some way, you can be certain they will call that in as a favor down the road. In fact, many times, they help without being asked—or even after being asked not to help—just to gain a sense of one-upmanship in the relationship.

Manipulators care deeply about how they appear to others, and guilting those around them into believing the balance in their relationship is unequal is a common coercive tool. When you hear subtle reminders that someone expects payback because they helped you in some way, it’s time to reevaluate that relationship. Guilt should not be the impetus to do something nice or go out of your way for someone else—that will just leave you and the relationship feeling empty.

important of research in psychology

Spot the Manipulation Before It's Too Late

Manipulators are skilled at using emotions against you because they have to be; emotions are the driving force behind our behaviors and the best place to start when trying to convince someone to acquiesce. Recognizing that emotions are the basis is a starting point, and understanding the role guilt plays in that process is crucial. Guilt being used to manipulate can be challenging to spot simply due to the qualities that make guilt such a unique emotion.

Guilt is powerful because of its intensity—and the difficulty that comes with trying to resolve it. Both of those characteristics make guilt the perfect breeding ground for a chronic manipulator to turn your emotions against you. If you allow guilt to become toxic and a tool in others’ hands to control you, it can cause serious long-term impacts on your physical health, mental well-being, and future relationships.

Aurélien G, Melody M. A Theory of Guilt Appeals: A Review Showing the Importance of Investigating Cognitive Processes as Mediators between Emotion and Behavior. Behav Sci (Basel). 2019 Nov 20;9(12):117. doi: 10.3390/bs9120117. PMID: 31756909; PMCID: PMC6960572.

Jamie Cannon MS, LPC

Jamie Cannon, MS, LPC, specializes in the treatment of trauma, anxiety, and grief with populations ranging from children and families to victims of domestic violence.

  • Find a Therapist
  • Find a Treatment Center
  • Find a Psychiatrist
  • Find a Support Group
  • Find Online Therapy
  • United States
  • Brooklyn, NY
  • Chicago, IL
  • Houston, TX
  • Los Angeles, CA
  • New York, NY
  • Portland, OR
  • San Diego, CA
  • San Francisco, CA
  • Seattle, WA
  • Washington, DC
  • Asperger's
  • Bipolar Disorder
  • Chronic Pain
  • Eating Disorders
  • Passive Aggression
  • Personality
  • Goal Setting
  • Positive Psychology
  • Stopping Smoking
  • Low Sexual Desire
  • Relationships
  • Child Development
  • Self Tests NEW
  • Therapy Center
  • Diagnosis Dictionary
  • Types of Therapy

May 2024 magazine cover

At any moment, someone’s aggravating behavior or our own bad luck can set us off on an emotional spiral that threatens to derail our entire day. Here’s how we can face our triggers with less reactivity so that we can get on with our lives.

  • Emotional Intelligence
  • Gaslighting
  • Affective Forecasting
  • Neuroscience

Cart

  • SUGGESTED TOPICS
  • The Magazine
  • Newsletters
  • Managing Yourself
  • Managing Teams
  • Work-life Balance
  • The Big Idea
  • Data & Visuals
  • Reading Lists
  • Case Selections
  • HBR Learning
  • Topic Feeds
  • Account Settings
  • Email Preferences

Research: Using AI at Work Makes Us Lonelier and Less Healthy

  • David De Cremer
  • Joel Koopman

important of research in psychology

Employees who use AI as a core part of their jobs report feeling more isolated, drinking more, and sleeping less than employees who don’t.

The promise of AI is alluring — optimized productivity, lightning-fast data analysis, and freedom from mundane tasks — and both companies and workers alike are fascinated (and more than a little dumbfounded) by how these tools allow them to do more and better work faster than ever before. Yet in fervor to keep pace with competitors and reap the efficiency gains associated with deploying AI, many organizations have lost sight of their most important asset: the humans whose jobs are being fragmented into tasks that are increasingly becoming automated. Across four studies, employees who use it as a core part of their jobs reported feeling lonelier, drinking more, and suffering from insomnia more than employees who don’t.

Imagine this: Jia, a marketing analyst, arrives at work, logs into her computer, and is greeted by an AI assistant that has already sorted through her emails, prioritized her tasks for the day, and generated first drafts of reports that used to take hours to write. Jia (like everyone who has spent time working with these tools) marvels at how much time she can save by using AI. Inspired by the efficiency-enhancing effects of AI, Jia feels that she can be so much more productive than before. As a result, she gets focused on completing as many tasks as possible in conjunction with her AI assistant.

  • David De Cremer is a professor of management and technology at Northeastern University and the Dunton Family Dean of its D’Amore-McKim School of Business. His website is daviddecremer.com .
  • JK Joel Koopman is the TJ Barlow Professor of Business Administration at the Mays Business School of Texas A&M University. His research interests include prosocial behavior, organizational justice, motivational processes, and research methodology. He has won multiple awards from Academy of Management’s HR Division (Early Career Achievement Award and David P. Lepak Service Award) along with the 2022 SIOP Distinguished Early Career Contributions award, and currently serves on the Leadership Committee for the HR Division of the Academy of Management .

Partner Center

IMAGES

  1. The role of research in the practice of psychology

    important of research in psychology

  2. Why is Research Important?

    important of research in psychology

  3. Research in Psychology

    important of research in psychology

  4. RESEARCH IN PSYCHOLOGY & RESEARCH CAREERS

    important of research in psychology

  5. The Importance of Research in Psychology: Exploring Facts and

    important of research in psychology

  6. How Applied Research Is Used in Psychology

    important of research in psychology

VIDEO

  1. PSYCHOLOGY FACT

  2. Effective Facts

  3. Research in psychology

  4. Educational psychologist

  5. Importance of Research

  6. Importance of Research

COMMENTS

  1. 2.1 Why is Research Important

    Discuss how scientific research guides public policy. Appreciate how scientific research can be important in making personal decisions. Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people's authority, and blind luck.

  2. 2.1 Why Is Research Important?

    Appreciate how scientific research can be important in making personal decisions; Scientific research is a critical tool for successfully navigating our complex world. ... (1871-1939) was the first woman to earn a PhD in psychology. Her research focused on animal behavior and cognition (Margaret Floy Washburn, PhD, n.d.). Mary Whiton Calkins ...

  3. The Use of Research Methods in Psychological Research: A Systematised

    Introduction. Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017).Due to this growth and the need for science-based research to base health decisions on (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014).Research methods are therefore viewed as important ...

  4. Why is Research Important?

    Appreciate how scientific research can be important in making personal decisions. Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people's authority, and blind luck. While many of us feel confident in our abilities to decipher and interact ...

  5. Why Is Research Important?

    Appreciate how scientific research can be important in making personal decisions. Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people's authority, and blind luck. While many of us feel confident in our abilities to decipher and interact ...

  6. 2.1: Why Is Research Important?

    Discuss how scientific research guides public policy. Appreciate how scientific research can be important in making personal decisions. Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people's authority, and blind luck.

  7. Why is Research Important?

    Appreciate how scientific research can be important in making personal decisions. Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people's authority, and blind luck. While many of us feel confident in our abilities to decipher and interact ...

  8. PDF APA Handbook of Research Methods in Psychology

    Research Methods in Psychology AP A Han dbook s in Psychology VOLUME Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological SECOND EDITION Harris Cooper, Editor-in-Chief Marc N. Coutanche, Linda M. McMullen, A. T. Panter, sychological Association. Not for further distribution.

  9. Research in Psychology: Methods You Should Know

    Research in psychology focuses on a variety of topics, ranging from the development of infants to the behavior of social groups. Psychologists use the scientific method to investigate questions both systematically and empirically. Research in psychology is important because it provides us with valuable information that helps to improve human lives.

  10. PDF Why research is important

    Why research is important 3 concepts or constructs. A piece of research is embedded in a frame-work or way of seeing the world. Second, research involves the application of a method, which has been designed to achieve knowledge that is as valid and truthful as possible. 4 The products of research are propositions or statements. There is a

  11. PSYCH101: Why Research Is Important

    Why Research Is Important. Read this text, which introduces the scientific method, which involves making a hypothesis or general premise, deductive reasoning, making empirical observations, and inductive reasoning, Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely ...

  12. Exploring the Nature and Importance of Psychological Research

    This section emphasizes the importance of psychological research across various fields such as organizational behavior, medical sciences, and education. Psychological research, characterized by its empirical and theoretical methodologies, aims to understand human and animal behavior through the study of areas like learning, motivation, and memory. It outlines the quest for psychological facts ...

  13. Research Methods In Psychology

    Olivia Guy-Evans, MSc. Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.

  14. Science of Psychology

    The science of psychology is pervasive. Psychologists work in some of the nation's most prominent companies and organizations. From Google, Boeing and NASA to the federal government, national health care organizations and research groups to Cirque du Soleil, Disney and NASCAR — psychologists are there, playing important roles.

  15. Frontiers

    Introduction. Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017).Due to this growth and the need for science-based research to base health decisions on (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014).Research methods are therefore viewed as important ...

  16. Overview of the Types of Research in Psychology

    Psychology research can usually be classified as one of three major types. 1. Causal or Experimental Research. When most people think of scientific experimentation, research on cause and effect is most often brought to mind. Experiments on causal relationships investigate the effect of one or more variables on one or more outcome variables.

  17. Psychology research is front and center

    Physical distancing requirements around the COVID-19 pandemic have created undeniable difficulties for many psychology research projects that relied on in-person interactions, but NSF and NIH officials say the pandemic has brought the importance of psychology research to the forefront.

  18. The role of research in the practice of psychology

    Because of this, understanding the research behind an assessment is important in psychological practice. It enables psychologists to better explain what results 'mean'. Research is also conducted in psychology to develop treatments for psychological disorders, determine whether they are effective, and use them in clinical practice.

  19. Psychology, Psychological Research, Why Is Research Important?

    Why Is Research Important? Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people's authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with ...

  20. Ethical Considerations in Psychology Research

    The research team. There are examples of researchers being intimidated because of the line of research they are in. The institution in which the research is conducted. salso suggest there are 4 main ethical concerns when conducting SSR: The research question or hypothesis. The treatment of individual participants.

  21. The Importance of Psychological Research at NICHD

    Such research provides an important understanding of the relationship among stress, coping, and health. ... Reversing the obesity epidemic will require better understanding of biology, environment, and psychology. Research focused on one factor can improve research into another. For instance, research at NIH's clinical center in Bethesda ...

  22. The Science of Happiness

    The idea of the midlife crisis is firmly entrenched in popular psychology despite its lack of research support. New research shows another reason it needs to disappear. Excavating Joy in Relationships

  23. <em>British Journal of Educational Psychology</em>

    British Journal of Educational Psychology is an international journal publishing psychological research aiming to improve the understanding of all aspects of education. Abstract Background The importance of parent-teacher relationships has been well-discussed in Western contexts. It's still unclear whether and how parent-teacher relationships ...

  24. PDF WrITINg CeNTer BrIeF gUIde SerIeS A Brief Guide to Writing the

    Common Types of Psychology Papers Research psychologists engage in a variety of kinds of writing, including grant proposals, research applications and renewals, review articles, research articles, and ... The citation of sources is very important in psychology. For all papers you will write for courses, you will use APA style. The best way to ...

  25. Master of Arts in Neuropsychology

    Programme Requirements. 1. All applicants must be in possession of an Honours degree in psychology from a South African university (or an equivalent qualification recognised by the University of Cape Town and the Professional Board of Psychology). Students applying from outside of South Africa (e.g., neighbouring countries) should apply to the South African Qualifications Authority to have ...

  26. 2.8: Why Is Research Important?

    By the end of this section, you will be able to: Explain how scientific research addresses questions about behavior. Discuss how scientific research guides public policy. Appreciate how scientific research can be important in making personal decisions. Scientific research is a critical tool for successfully navigating our complex world.

  27. The Link Between Borderline Personality Disorder and Anger

    Research has also shown that there is a strong association between having an insecure attachment style and borderline personality disorder (Critchfield, Levy, Clarking, et. al., 2007). And when ...

  28. 3 Signs Someone Is Using Guilt to Manipulate You

    A Theory of Guilt Appeals: A Review Showing the Importance of Investigating Cognitive Processes as Mediators between Emotion and Behavior. Behav Sci (Basel). 2019 Nov 20;9(12):117. doi: 10.3390 ...

  29. Research: Using AI at Work Makes Us Lonelier and Less Healthy

    Joel Koopman is the TJ Barlow Professor of Business Administration at the Mays Business School of Texas A&M University. His research interests include prosocial behavior, organizational justice ...

  30. Unlocking the entrepreneurial brain: New perspectives on cognitive

    Pioneering research highlights the importance of combining neuroscience with traditional entrepreneurial studies to gain a comprehensive understanding of what makes successful entrepreneurs ...