
Chapter 11: Presenting Your Research

Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report, an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title Page and Abstract

An APA-style research report begins with a title page. The title is centred in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioural Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behaviour?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract. The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening, which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behaviour (not about researchers or their research; Bem, 2003[1]). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century. (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can. (p. 191)

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humour and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review, which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describe two or more competing theories of the phenomenon, and finally present a hypothesis to test one or more of those theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favourite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968)[2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behaviour during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

Method

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends, with the heading “Method” (not “Methods”) centred on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1. Three ways of organizing an APA-style method section. A long description is provided below.

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

Results

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
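Although APA style says nothing about how these quantities are computed, it can help to see the preliminary steps in concrete form. The following short Python sketch (with invented data and variable names, purely for illustration) computes the three kinds of preliminary quantities just described: a per-participant mean rating, a percentage-correct recall score, and Cronbach’s α from its standard formula:

import numpy as np

# Hypothetical data: 5 participants each rated 4 stimulus people on a 1-7 scale.
ratings = np.array([[5, 6, 4, 5],
                    [3, 4, 3, 2],
                    [6, 7, 6, 5],
                    [4, 4, 5, 4],
                    [2, 3, 2, 3]])

# Combining multiple responses: each participant's mean attractiveness rating.
mean_ratings = ratings.mean(axis=1)

# Scoring recall: number of words recalled from a 20-word study list,
# expressed as the percentage correctly recalled.
words_recalled = np.array([12, 9, 15, 11, 8])
percent_recalled = 100 * words_recalled / 20

# Reliability: Cronbach's alpha across the four rating items,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
k = ratings.shape[1]
item_variances = ratings.var(axis=0, ddof=1).sum()
total_variance = ratings.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)

print(mean_ratings)      # per-participant means, e.g., 5.0, 3.0, 6.0, 4.25, 2.5
print(percent_recalled)  # e.g., 60%, 45%, 75%, 55%, 40%
print(round(alpha, 2))

In the manuscript itself, only the resulting statistics are reported; the computations belong in your analysis records, not in the results section.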

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions first and then proceed to more specific ones. Another would be to answer the main question first and then turn to secondary ones. Regardless, Bem (2003)[3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.
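For example, a single simple result might be reported like this (the study and numbers here are invented, purely to illustrate the structure): “Recall that we expected participants in the divided-attention condition to recall fewer words than participants in the full-attention condition. This is what we found: Divided-attention participants recalled fewer words (M = 9.10, SD = 2.30) than full-attention participants (M = 12.40, SD = 2.10), t(58) = 5.80, p < .001, although the difference was somewhat smaller for shorter lists. Thus dividing participants’ attention substantially impaired their recall.” Each of Bem’s five steps appears in this one short passage, and a reader who skips the statistics still learns what was found.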

Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968)[4], for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

References

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
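For example, the following three entries (with invented authors and years, and titles omitted) are in correct order. The first two share a first author, so they are alphabetized by their second authors; the last two have identical authors, so they are ordered chronologically:

Albrecht, K., & Carter, S. (2009). …

Albrecht, K., & Nguyen, T. (2007). …

Albrecht, K., & Nguyen, T. (2013). …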

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.
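To illustrate the difference in capitalization (with invented content), a table title might read:

Table 1
Mean Recall Scores by Condition and List Length

whereas the corresponding figure caption might read:

Figure 1. Mean recall scores by condition and list length. Error bars represent standard errors.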

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

""

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
Exercises

  • Practice: Look through an issue of a general-interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different colour each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Long Descriptions

Figure 11.1 long description: Table showing three ways of organizing an APA-style method section.

In the simple method, there are two subheadings: “Participants” (which might begin “The participants were…”) and “Design and procedure” (which might begin “There were three conditions…”).

In the typical method, there are three subheadings: “Participants” (“The participants were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

In the complex method, there are four subheadings: “Participants” (“The participants were…”), “Materials” (“The stimuli were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4, 377–383.

Glossary

Empirical research report: A type of research article that describes one or more new empirical studies conducted by the authors.

Title page: The page at the beginning of an APA-style research report containing the title of the article, the authors’ names, and their institutional affiliation.

Abstract: A summary of a research study.

Introduction: The section of the manuscript, beginning on the third page, that presents the research question, reviews the relevant literature, and comments on how the research question will be answered.

Opening: An introduction to the research question and an explanation of why that question is interesting.

Literature review: A description of relevant previous research on the topic being discussed and an argument for why the research question is worth addressing.

Closing: The end of the introduction, where the research question is reiterated and the method is commented upon.

Method section: The section of a research report where the method used to conduct the study is described.

Results section: The section of a research report where the main results of the study, including the results of the statistical analyses, are presented.

Discussion: The section of a research report that summarizes the study’s results and interprets them by referring back to the study’s theoretical background.

Appendix: A part of a research report that contains supplemental material.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.






Reporting Standards for Research in Psychology

In anticipation of the impending revision of the Publication Manual of the American Psychological Association , APA’s Publications and Communications Board formed the Working Group on Journal Article Reporting Standards (JARS) and charged it to provide the board with background and recommendations on information that should be included in manuscripts submitted to APA journals that report (a) new data collections and (b) meta-analyses. The JARS Group reviewed efforts in related fields to develop standards and sought input from other knowledgeable groups. The resulting recommendations contain (a) standards for all journal articles, (b) more specific standards for reports of studies with experimental manipulations or evaluations of interventions using research designs involving random or nonrandom assignment, and (c) standards for articles reporting meta-analyses. The JARS Group anticipated that standards for reporting other research designs (e.g., observational studies, longitudinal studies) would emerge over time. This report also (a) examines societal developments that have encouraged researchers to provide more details when reporting their studies, (b) notes important differences between requirements, standards, and recommendations for reporting, and (c) examines benefits and obstacles to the development and implementation of reporting standards.

The American Psychological Association (APA) Working Group on Journal Article Reporting Standards (the JARS Group) arose out of a request for information from the APA Publications and Communications Board. The Publications and Communications Board had previously allowed any APA journal editor to require that a submission labeled by an author as describing a randomized clinical trial conform to the CONSORT (Consolidated Standards of Reporting Trials) reporting guidelines ( Altman et al., 2001 ; Moher, Schulz, & Altman, 2001 ). In this context, and recognizing that APA was about to initiate a revision of its Publication Manual ( American Psychological Association, 2001 ), the Publications and Communications Board formed the JARS Group to provide itself with input on how the newly developed reporting standards related to the material currently in its Publication Manual and to propose some related recommendations for the new edition.

The JARS Group was formed of five current and previous editors of APA journals. It divided its work into six stages:

  • establishing the need for more well-defined reporting standards,
  • gathering the standards developed by other related groups and professional organizations relating to both new data collections and meta-analyses,
  • drafting a set of standards for APA journals,
  • sharing the drafted standards with cognizant others,
  • refining the standards yet again, and
  • addressing additional and unresolved issues.

This article is the report of the JARS Group’s findings and recommendations. It was approved by the Publications and Communications Board in the summer of 2007 and again in the spring of 2008 and was transmitted to the task force charged with revising the Publication Manual for consideration as it did its work. The content of the report roughly follows the stages of the group’s work. Those wishing to move directly to the reporting standards can go to the sections titled Information for Inclusion in Manuscripts That Report New Data Collections and Information for Inclusion in Manuscripts That Report Meta-Analyses.

Why Are More Well-Defined Reporting Standards Needed?

The JARS Group members began their work by sharing with each other documents they knew of that related to reporting standards. The group found that the past decade had witnessed two developments in the social, behavioral, and medical sciences that encouraged researchers to provide more details when they reported their investigations. The first impetus for more detail came from the worlds of policy and practice. In these realms, the call for use of “evidence-based” decision making had placed a new emphasis on the importance of understanding how research was conducted and what it found. For example, in 2006, the APA Presidential Task Force on Evidence-Based Practice defined the term evidence-based practice to mean “the integration of the best available research with clinical expertise” (p. 273; italics added). The report went on to say that “evidence-based practice requires that psychologists recognize the strengths and limitations of evidence obtained from different types of research” (p. 275).

In medicine, the movement toward evidence-based practice is now so pervasive (see Sackett, Rosenberg, Muir Grey, Hayes & Richardson, 1996 ) that there exists an international consortium of researchers (the Cochrane Collaboration; http://www.cochrane.org/index.htm ) producing thousands of papers examining the cumulative evidence on everything from public health initiatives to surgical procedures. Another example of accountability in medicine, and the importance of relating medical practice to solid medical science, comes from the member journals of the International Committee of Medical Journal Editors (2007) , who adopted a policy requiring registration of all clinical trials in a public trials registry as a condition of consideration for publication.

In education, the No Child Left Behind Act of 2001 (2002) required that the policies and practices adopted by schools and school districts be “scientifically based,” a term that appears over 100 times in the legislation. In public policy, a consortium similar to that in medicine now exists (the Campbell Collaboration; http://www.campbellcollaboration.org ), as do organizations meant to promote government policymaking based on rigorous evidence of program effectiveness (e.g., the Coalition for Evidence-Based Policy; http://www.excelgov.org/index.php?keyword=a432fbc34d71c7 ). Each of these efforts operates with a definition of what constitutes sound scientific evidence. The developers of previous reporting standards argued that new transparency in reporting is needed so that judgments can be made by users of evidence about the appropriate inferences and applications derivable from research findings.

The second impetus for more detail in research reporting has come from within the social and behavioral science disciplines. As evidence about specific hypotheses and theories accumulates, greater reliance is being placed on syntheses of research, especially meta-analyses ( Cooper, 2009 ; Cooper, Hedges, & Valentine, 2009 ), to tell us what we know about the workings of the mind and the laws of behavior. Different findings relating to a specific question examined with various research designs are now mined by second users of the data for clues to the mediation of basic psychological, behavioral, and social processes. These clues emerge by clustering studies based on distinctions in their methods and then comparing their results. This synthesis-based evidence is then used to guide the next generation of problems and hypotheses studied in new data collections. Without complete reporting of methods and results, the utility of studies for purposes of research synthesis and meta-analysis is diminished.

The JARS Group viewed both of these stimulants to action as positive developments for the psychological sciences. The first provides an unprecedented opportunity for psychological research to play an important role in public and health policy. The second promises a sounder evidence base for explanations of psychological phenomena and a next generation of research that is more focused on resolving critical issues.

The Current State of the Art

Next, the JARS Group collected the efforts of other social and health organizations that had recently developed reporting standards. Three recent efforts quickly came to the group’s attention. Two had been undertaken in the medical and health sciences to improve the quality of reporting of primary studies and to make reports more useful for the next users of the data. The first effort is called CONSORT (Consolidated Standards of Reporting Trials; Altman et al., 2001; Moher et al., 2001). The CONSORT standards were developed by an ad hoc group primarily composed of biostatisticians and medical researchers. CONSORT relates to the reporting of studies that carried out random assignment of participants to conditions. It comprises a checklist of study characteristics that should be included in research reports and a flow diagram that shows readers the number of participants as they progress through the study (and, by implication, the number who drop out) from the time they are deemed eligible for inclusion until the end of the investigation. These guidelines are now required by the top-tier medical journals and many other biomedical journals, and some APA journals also use them.

The second effort is called TREND (Transparent Reporting of Evaluations with Nonrandomized Designs; Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004). TREND was developed under the initiative of the Centers for Disease Control, which brought together a group of editors of journals related to public health, including several journals in psychology. TREND contains a 22-item checklist, similar to CONSORT’s, but with a specific focus on reporting standards for studies that use quasi-experimental designs, that is, group comparisons in which the groups were established using procedures other than random assignment of participants to conditions.

In the social sciences, the American Educational Research Association (2006) recently published “Standards for Reporting on Empirical Social Science Research in AERA Publications.” These standards encompass a broad range of research designs, including both quantitative and qualitative approaches, and are divided into eight general areas: problem formulation; design and logic of the study; sources of evidence; measurement and classification; analysis and interpretation; generalization; ethics in reporting; and title, abstract, and headings. They contain about two dozen general prescriptions for the reporting of studies as well as separate prescriptions for quantitative and qualitative studies.

Relation to the APA Publication Manual

The JARS Group also examined previous editions of the APA Publication Manual and discovered that for the last half century it has played an important role in the establishment of reporting standards. The first edition of the APA Publication Manual , published in 1952 as a supplement to Psychological Bulletin ( American Psychological Association, Council of Editors, 1952 ), was 61 pages long, printed on 6-in. by 9-in. paper, and cost $1. The principal divisions of manuscripts were titled Problem, Method, Results, Discussion, and Summary (now the Abstract). According to the first Publication Manual, the section titled Problem was to include the questions asked and the reasons for asking them. When experiments were theory-driven, the theoretical propositions that generated the hypotheses were to be given, along with the logic of the derivation and a summary of the relevant arguments. The method was to be “described in enough detail to permit the reader to repeat the experiment unless portions of it have been described in other reports which can be cited” (p. 9). This section was to describe the design and the logic of relating the empirical data to theoretical propositions, the subjects, sampling and control devices, techniques of measurement, and any apparatus used. Interestingly, the 1952 Manual also stated, “Sometimes space limitations dictate that the method be described synoptically in a journal, and a more detailed description be given in auxiliary publication” (p. 25). The Results section was to include enough data to justify the conclusions, with special attention given to tests of statistical significance and the logic of inference and generalization. The Discussion section was to point out limitations of the conclusions, relate them to other findings and widely accepted points of view, and give implications for theory or practice. Negative or unexpected results were not to be accompanied by extended discussions; the editors wrote, “Long ‘alibis,’ unsupported by evidence or sound theory, add nothing to the usefulness of the report” (p. 9). Also, authors were encouraged to use good grammar and to avoid jargon, as “some writing in psychology gives the impression that long words and obscure expressions are regarded as evidence of scientific status” (pp. 25–26).

Through the following editions, the recommendations became more detailed and specific. Of special note was the Report of the Task Force on Statistical Inference (Wilkinson & the Task Force on Statistical Inference, 1999), which presented guidelines for statistical reporting in APA journals that informed the content of the 5th edition of the Publication Manual. Although the 5th edition of the Manual does not contain a clearly delineated set of reporting standards, this does not mean the Manual is devoid of standards. Instead, recommendations, standards, and requirements for reporting are embedded in various sections of the text. Most notably, statements regarding the method and results that should be included in a research report (as well as how this information should be reported) appear in the Manual’s description of the parts of a manuscript (pp. 10–29). For example, when discussing who participated in a study, the Manual states, “When humans participated as the subjects of the study, report the procedures for selecting and assigning them and the agreements and payments made” (p. 18). With regard to the Results section, the Manual states, “Mention all relevant results, including those that run counter to the hypothesis” (p. 20), and it provides descriptions of “sufficient statistics” (p. 23) that need to be reported.

Thus, although reporting standards and requirements are not highlighted in the most recent edition of the Manual, they appear nonetheless. In that context, the proposals offered by the JARS Group can be viewed not as breaking new ground for psychological research but rather as a systematization, clarification, and (to a lesser extent than might at first appear) an expansion of standards that already exist. The intended contribution of the current effort thus becomes as much one of increased emphasis as of increased content.

Drafting, Vetting, and Refinement of the JARS

Next, the JARS Group canvassed the APA Council of Editors to ascertain the degree to which the CONSORT and TREND standards were already in use by APA journals and to identify other reporting standards the editors were aware of. The JARS Group also requested from the APA Publications Office data on the use of auxiliary websites by authors of APA journal articles. With this information in hand, the JARS Group compared the CONSORT, TREND, and AERA standards to one another and developed a combined list of nonredundant elements contained in any or all of the three sets of standards. The JARS Group then examined the combined list, rewrote some items for clarity and ease of comprehension by an audience of psychologists and other social and behavioral scientists, and added a few suggestions of its own.

This combined list was then shared with the APA Council of Editors, the APA Publication Manual Revision Task Force, and the Publications and Communications Board, and these groups were asked to react to it. After receiving these reactions, along with anonymous reactions from reviewers chosen by the American Psychologist, the JARS Group revised its report and arrived at the list of recommendations contained in Tables 1, 2, and 3 and Figure 1. The report was then approved again by the Publications and Communications Board.

[Figure 1. Flowchart showing the flow of participants through each stage of an experiment or quasi-experiment.]

Note. This flowchart is an adaptation of the flowchart offered by the CONSORT Group (Altman et al., 2001; Moher, Schulz, & Altman, 2001). Journals publishing the original CONSORT flowchart have waived copyright protection.

Table 1. Journal Article Reporting Standards (JARS): Information Recommended for Inclusion in Manuscripts That Report New Data Collections Regardless of Research Design

Paper section and topic: Description
Title and title page: Identify variables and theoretical issues under investigation and the relationship between them
Author note contains acknowledgment of special circumstances:
 Use of data also appearing in previous publications, dissertations, or conference papers
 Sources of funding or other support
 Relationships that may be perceived as conflicts of interest
Abstract: Problem under investigation
Participants or subjects; specifying pertinent characteristics; in animal research, include genus and species
Study method, including:
 Sample size
 Any apparatus used
 Outcome measures
 Data-gathering procedures
 Research design (e.g., experiment, observational study)
Findings, including effect sizes and confidence intervals and/or statistical significance levels
Conclusions and the implications or applications
Introduction: The importance of the problem, including:
 Theoretical or practical implications
Review of relevant scholarship:
 Relation to previous work
 If other aspects of this study have been reported on previously, how the current report differs from these earlier reports
Specific hypotheses and objectives:
 Theories or other means used to derive hypotheses
 Primary and secondary hypotheses, other planned analyses
How hypotheses and research design relate to one another
Method
 Participant characteristics: Eligibility and exclusion criteria, including any restrictions based on demographic characteristics
Major demographic characteristics as well as important topic-specific characteristics (e.g., achievement level in studies of educational interventions), or in the case of animal research, genus and species
 Sampling procedures: Procedures for selecting participants, including:
 The sampling method if a systematic sampling plan was implemented
 Percentage of sample approached that participated
 Self-selection (either by individuals or units, such as schools or clinics)
Settings and locations where data were collected
Agreements and payments made to participants
Institutional review board agreements, ethical standards met, safety monitoring
 Sample size, power, and precision: Intended sample size
Actual sample size, if different from intended sample size
How sample size was determined:
 Power analysis, or methods used to determine precision of parameter estimates
 Explanation of any interim analyses and stopping rules
 Measures and covariates: Definitions of all primary and secondary measures and covariates:
 Include measures collected but not included in this report
Methods used to collect data
Methods used to enhance the quality of measurements:
 Training and reliability of data collectors
 Use of multiple observations
Information on validated or ad hoc instruments created for individual studies, for example, psychometric and biometric properties
 Research design: Whether conditions were manipulated or naturally observed
Type of research design; provided in Table 3 are modules for:
 Randomized experiments (Module A1)
 Quasi-experiments (Module A2)
Other designs would have different reporting needs associated with them
Results
 Participant flow: Total number of participants
Flow of participants through each stage of the study
 Recruitment: Dates defining the periods of recruitment and repeated measurements or follow-up
 Statistics and data analysis: Information concerning problems with statistical assumptions and/or data distributions that could affect the validity of findings
Missing data:
 Frequency or percentages of missing data
 Empirical evidence and/or theoretical arguments for the causes of data that are missing, for example, missing completely at random (MCAR), missing at random (MAR), or missing not at random (MNAR)
 Methods for addressing missing data, if used
For each primary and secondary outcome and for each subgroup, a summary of:
 Cases deleted from each analysis
 Subgroup or cell sample sizes, cell means, standard deviations, or other estimates of precision, and other descriptive statistics
 Effect sizes and confidence intervals
For inferential statistics (null hypothesis significance testing), information about:
 The a priori Type I error rate adopted
 Direction, magnitude, degrees of freedom, and exact level, even if no significant effect is reported
For multivariable analytic systems (e.g., multivariate analyses of variance, regression analyses, structural equation modeling analyses, and hierarchical linear modeling) also include the associated variance–covariance (or correlation) matrix or matrices
Estimation problems (e.g., failure to converge, bad solution spaces), anomalous data points
Statistical software program, if specialized procedures were used
Report any other analyses performed, including adjusted analyses, indicating those that were prespecified and those that were exploratory (though not necessarily in level of detail of primary analyses)
 Ancillary analyses: Discussion of implications of ancillary analyses for statistical error rates
Discussion: Statement of support or nonsupport for all original hypotheses:
 Distinguished by primary and secondary hypotheses
 Post hoc explanations
Similarities and differences between results and work of others
Interpretation of the results, taking into account:
 Sources of potential bias and other threats to internal validity
 Imprecision of measures
 The overall number of tests or overlap among tests, and
 Other limitations or weaknesses of the study
Generalizability (external validity) of the findings, taking into account:
 The target population
 Other contextual issues
Discussion of implications for future research, program, or policy

Table 2. Module A: Reporting Standards for Studies With an Experimental Manipulation or Intervention (in Addition to Material Presented in Table 1)

Paper section and topic: Description
Method
 Experimental manipulations or interventions: Details of the interventions or experimental manipulations intended for each study condition, including control groups, and how and when manipulations or interventions were actually administered, specifically including:
 Content of the interventions or specific experimental manipulations
  Summary or paraphrasing of instructions, unless they are unusual or compose the experimental manipulation, in which case they may be presented verbatim
 Method of intervention or manipulation delivery
  Description of apparatus and materials used and their function in the experiment
   Specialized equipment by model and supplier
 Deliverer: who delivered the manipulations or interventions
  Level of professional training
  Level of training in specific interventions or manipulations
  Number of deliverers and, in the case of interventions, the M, SD, and range of the number of individuals/units treated by each
 Setting: where the manipulations or interventions occurred
 Exposure quantity and duration: how many sessions, episodes, or events were intended to be delivered, how long they were intended to last
 Time span: how long it took to deliver the intervention or manipulation to each unit
 Activities to increase compliance or adherence (e.g., incentives)
 Use of language other than English and the translation method
 Units of delivery and analysis: Unit of delivery (how participants were grouped during delivery)
 Description of the smallest unit that was analyzed (and in the case of experiments, that was randomly assigned to conditions) to assess manipulation or intervention effects (e.g., individuals, work groups, classes)
 If the unit of analysis differed from the unit of delivery, description of the analytical method used to account for this (e.g., adjusting the standard error estimates by the design effect or using multilevel analysis)
Results
 Participant flow: Total number of groups (if intervention was administered at the group level) and the number of participants assigned to each group:
 Number of participants who did not complete the experiment or crossed over to other conditions, with reasons why
 Number of participants used in primary analyses
 Flow of participants through each stage of the study (see Figure 1)
 Treatment fidelity: Evidence on whether the treatment was delivered as intended
 Baseline data: Baseline demographic and clinical characteristics of each group
 Statistics and data analysis: Whether the analysis was by intent-to-treat, complier average causal effect, or other or multiple ways
 Adverse events and side effects: All important adverse events or side effects in each intervention group
Discussion: Discussion of results taking into account the mechanism by which the manipulation or intervention was intended to work (causal pathways) or alternative mechanisms
If an intervention is involved, discussion of the success of and barriers to implementing the intervention, fidelity of implementation
Generalizability (external validity) of the findings, taking into account:
 The characteristics of the intervention
 How and what outcomes were measured
 Length of follow-up
 Incentives
 Compliance rates
The “clinical or practical significance” of outcomes and the basis for these interpretations

Table 3. Reporting Standards for Studies Using Random and Nonrandom Assignment of Participants to Experimental Groups

Paper section and topic: Description
Module A1: Studies using random assignment
Method
 Random assignment method: Procedure used to generate the random assignment sequence, including details of any restriction (e.g., blocking, stratification)
 Random assignment concealment: Whether the sequence was concealed until interventions were assigned
 Random assignment implementation: Who generated the assignment sequence
Who enrolled participants
Who assigned participants to groups
 Masking: Whether participants, those administering the interventions, and those assessing the outcomes were unaware of condition assignments
If masking took place, statement regarding how it was accomplished and how the success of masking was evaluated
 Statistical methods: Statistical methods used to compare groups on primary outcome(s)
Statistical methods used for additional analyses, such as subgroup analyses and adjusted analysis
Statistical methods used for mediation analyses
Module A2: Studies using nonrandom assignment
Method
 Assignment method: Unit of assignment (the unit being assigned to study conditions, e.g., individual, group, community)
Method used to assign units to study conditions, including details of any restriction (e.g., blocking, stratification, minimization)
Procedures employed to help minimize potential bias due to nonrandomization (e.g., matching, propensity score matching)
 Masking: Whether participants, those administering the interventions, and those assessing the outcomes were unaware of condition assignments
If masking took place, statement regarding how it was accomplished and how the success of masking was evaluated
 Statistical methods: Statistical methods used to compare study groups on primary outcome(s), including complex methods for correlated data
Statistical methods used for additional analyses, such as subgroup analyses and adjusted analysis (e.g., methods for modeling pretest differences and adjusting for them)
Statistical methods used for mediation analyses

Information for Inclusion in Manuscripts That Report New Data Collections

The entries in Tables 1 through 3 and Figure 1 divide the reporting standards into three parts. First, Table 1 presents information recommended for inclusion in all reports submitted for publication in APA journals. Note that these recommendations contain only a brief entry regarding the type of research design. Along with these general standards, then, the JARS Group also recommended that specific standards be developed for different types of research designs. Thus, Table 2 provides standards for research designs involving experimental manipulations or evaluations of interventions (Module A). Next, Table 3 provides standards for reporting either (a) a study involving random assignment of participants to experimental or intervention conditions (Module A1) or (b) quasi-experiments, in which different groups of participants receive different experimental manipulations or interventions but the groups are formed (and perhaps equated) using a procedure other than random assignment (Module A2). Using this modular approach, the JARS Group was able to incorporate the general recommendations from the current APA Publication Manual and both the CONSORT and TREND standards into a single set of standards. This approach also makes it possible for other research designs (e.g., observational studies, longitudinal designs) to be added to the standards by adding new modules.

The standards are categorized into the sections of a research report used by APA journals. To illustrate how the tables would be used, note that the Method section in Table 1 is divided into subsections regarding participant characteristics, sampling procedures, sample size, measures and covariates, and an overall categorization of the research design. Then, if the design being described involved an experimental manipulation or intervention, Table 2 presents additional information about the research design that should be reported, including a description of the manipulation or intervention itself and the units of delivery and analysis. Next, Table 3 presents two separate sets of reporting standards to be used depending on whether the participants in the study were assigned to conditions using a random or nonrandom procedure. Figure 1, an adaptation of the chart recommended in the CONSORT guidelines, presents a flowchart for displaying the flow of participants through the stages of either an experiment or a quasi-experiment. It details the amount and cause of participant attrition at each stage of the research.

In the future, new modules and flowcharts regarding other research designs could be added to the standards to be used in conjunction with Table 1. For example, tables could be constructed to replace Table 2 for the reporting of observational studies (e.g., studies with no manipulations as part of the data collection), longitudinal studies, structural equation models, regression discontinuity designs, single-case designs, or real-time data capture designs (Stone & Shiffman, 2002), to name just a few.

Additional standards could be adopted for any of the parts of a report. For example, the Evidence-Based Behavioral Medicine Committee (Davidson et al., 2003) examined each of the 22 items on the CONSORT checklist and described special considerations for each when reporting research on behavioral medicine interventions. This group also proposed five additional items, not included in the CONSORT list, that it felt should be included in reports on behavioral medicine interventions: (a) training of treatment providers, (b) supervision of treatment providers, (c) patient and provider treatment allegiance, (d) manner of testing and success of treatment delivery by the provider, and (e) treatment adherence. The JARS Group encourages other authoritative groups of interested researchers, practitioners, and journal editorial teams to use Table 1 as a similar starting point in their efforts, adding and deleting items and modules to fit the information needs dictated by research designs that are prominent in specific subdisciplines and topic areas. These revisions could then be incorporated into future iterations of the JARS.

Information for Inclusion in Manuscripts That Report Meta-Analyses

The same pressures that have led to proposals for reporting standards for manuscripts that report new data collections have led to similar efforts to establish standards for the reporting of other types of research. Particular attention has been focused on the reporting of meta-analyses.

With regard to reporting standards for meta-analysis, the JARS Group began by contacting the members of the Society for Research Synthesis Methodology and asking them to share what they felt were the critical aspects of meta-analysis conceptualization, methodology, and results that need to be reported so that readers (and manuscript reviewers) can make informed, critical judgments about the appropriateness of the methods used for the inferences drawn. This query led to the identification of four other efforts to establish reporting standards for meta-analysis: the QUOROM Statement (Quality of Reporting of Meta-analysis; Moher et al., 1999) and its revision, PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; Moher, Liberati, Tetzlaff, Altman, & the PRISMA Group, 2008); MOOSE (Meta-analysis of Observational Studies in Epidemiology; Stroup et al., 2000); and the Potsdam Consultation on Meta-Analysis (Cook, Sackett, & Spitzer, 1995).

Next, the JARS Group compared the content of each of the four sets of standards with the others and developed a combined list of nonredundant elements contained in any or all of them. The JARS Group then examined the combined list, rewrote some items for clarity and ease of comprehension by an audience of psychologists, and added a few suggestions of its own. The resulting recommendations were then shared with a subgroup of members of the Society for Research Synthesis Methodology who had experience writing and reviewing research syntheses in the discipline of psychology. After these suggestions were incorporated into the list, it was shared with members of the Publications and Communications Board, who were asked to react to it. After receiving these reactions, the JARS Group arrived at the list of recommendations contained in Table 4, titled Meta-Analysis Reporting Standards (MARS). These were then approved by the Publications and Communications Board.

Table 4. Meta-Analysis Reporting Standards (MARS): Information Recommended for Inclusion in Manuscripts Reporting Meta-Analyses

Paper section and topic: Description
Title: Make it clear that the report describes a research synthesis and include “meta-analysis,” if applicable
Footnote funding source(s)
Abstract: The problem or relation(s) under investigation
Study eligibility criteria
Type(s) of participants included in primary studies
Meta-analysis methods (indicating whether a fixed or random model was used)
Main results (including the more important effect sizes and any important moderators of these effect sizes)
Conclusions (including limitations)
Implications for theory, policy, and/or practice
Introduction: Clear statement of the question or relation(s) under investigation, including:
 Historical background
 Theoretical, policy, and/or practical issues related to the question or relation(s) of interest
 Rationale for the selection and coding of potential moderators and mediators of results
 Types of study designs used in the primary research, their strengths and weaknesses
 Types of predictor and outcome measures used, their psychometric characteristics
 Populations to which the question or relation is relevant
 Hypotheses, if any
Method
 Inclusion and exclusion criteria: Operational characteristics of independent (predictor) and dependent (outcome) variable(s)
Eligible participant populations
Eligible research design features (e.g., random assignment only, minimal sample size)
Time period in which studies needed to be conducted
Geographical and/or cultural restrictions
 Moderator and mediator analyses: Definition of all coding categories used to test moderators or mediators of the relation(s) of interest
 Search strategies: Reference and citation databases searched
Registries (including prospective registries) searched:
 Keywords used to enter databases and registries
 Search software used and version
Time period in which studies needed to be conducted, if applicable
Other efforts to retrieve all available studies:
 Listservs queried
 Contacts made with authors (and how authors were chosen)
 Reference lists of reports examined
Method of addressing reports in languages other than English
Process for determining study eligibility:
 Aspects of reports examined (i.e., title, abstract, and/or full text)
 Number and qualifications of relevance judges
 Indication of agreement
  How disagreements were resolved
Treatment of unpublished studies
 Coding procedures: Number and qualifications of coders (e.g., level of expertise in the area, training)
Intercoder reliability or agreement
Whether each report was coded by more than one coder and, if so, how disagreements were resolved
Assessment of study quality:
 If a quality scale was employed, a description of criteria and the procedures for application
 If study design features were coded, what these were
How missing data were handled
 Statistical methods: Effect size metric(s):
 Effect size calculating formulas (e.g., Ms and SDs, use of univariate F to r transform)
 Corrections made to effect sizes (e.g., small sample bias, correction for unequal ns)
Effect size averaging and/or weighting method(s)
How effect size confidence intervals (or standard errors) were calculated
How effect size credibility intervals were calculated, if used
How studies with more than one effect size were handled
Whether fixed and/or random effects models were used and the model choice justification
How heterogeneity in effect sizes was assessed or estimated
Ms and SDs for measurement artifacts, if construct-level relationships were the focus
Tests and any adjustments for data censoring (e.g., publication bias, selective reporting)
Tests for statistical outliers
Statistical power of the meta-analysis
Statistical programs or software packages used to conduct statistical analyses
Results: Number of citations examined for relevance
List of citations included in the synthesis
Number of citations that were relevant on many but not all inclusion criteria and were excluded from the meta-analysis
Number of exclusions for each exclusion criterion (e.g., effect size could not be calculated), with examples
Table giving descriptive information for each included study, including effect size and sample size
Assessment of study quality, if any
Tables and/or graphic summaries:
 Overall characteristics of the database (e.g., number of studies with different research designs)
 Overall effect size estimates, including measures of uncertainty (e.g., confidence and/or credibility intervals)
Results of moderator and mediator analyses (analyses of subsets of studies):
 Number of studies and total sample sizes for each moderator analysis
 Assessment of interrelations among variables used for moderator and mediator analyses
Assessment of bias including possible data censoring
Discussion: Statement of major findings
Consideration of alternative explanations for observed results:
 Impact of data censoring
Generalizability of conclusions:
 Relevant populations
 Treatment variations
 Dependent (outcome) variables
 Research designs
General limitations (including assessment of the quality of studies included)
Implications and interpretation for theory, policy, or practice
Guidelines for future research

Other Issues Related to Reporting Standards

A definition of “reporting standards”.

The JARS Group recognized that there are three related terms that need definition when one speaks about journal article reporting standards: recommendations, standards, and requirements. According to Merriam-Webster’s Online Dictionary (n.d.), to recommend is “to present as worthy of acceptance or trial … to endorse as fit, worthy, or competent.” In contrast, a standard is more specific and should carry more influence: “something set up and established by authority as a rule for the measure of quantity, weight, extent, value, or quality.” And finally, a requirement goes further still by dictating a course of action (“something wanted or needed”), and to require is “to claim or ask for by right and authority … to call for as suitable or appropriate … to demand as necessary or essential.”

With these definitions in mind, the JARS Group felt it was providing recommendations regarding what information should be reported in the write-up of a psychological investigation and that these recommendations could also be viewed as standards or at least as a beginning effort at developing standards. The JARS Group felt this characterization was appropriate because the information it was proposing for inclusion in reports was based on an integration of efforts by authoritative groups of researchers and editors. However, the proposed standards are not offered as requirements. The methods used in the subdisciplines of psychology are so varied that the critical information needed to assess the quality of research and to integrate it successfully with other related studies varies considerably from method to method in the context of the topic under consideration. By not calling them “requirements,” the JARS Group felt the standards would be given the weight of authority while retaining for authors and editors the flexibility to use the standards in the most efficacious fashion (see below).

The Tension Between Complete Reporting and Space Limitations

There is an innate tension between transparency in reporting and the space limitations imposed by the print medium. As descriptions of research expand, so does the space needed to report them. However, recent improvements in the capacity of and access to electronic storage of information suggest that this trade-off could someday disappear. For example, the journals of the APA, among others, now make available to authors auxiliary websites that can be used to store supplemental materials associated with the articles that appear in print. Similarly, it is possible for electronic journals to contain short reports of research with hot links to websites containing supplementary files.

The JARS Group recommends an increased use and standardization of supplemental websites by APA journals and authors. Some of the information contained in the reporting standards might not appear in the published article itself but rather in a supplemental website. For example, if the instructions in an investigation are lengthy but critical to understanding what was done, they may be presented verbatim in a supplemental website. Supplemental materials might include the flowchart of participants through the study. It might include oversized tables of results (especially those associated with meta-analyses involving many studies), audio or video clips, computer programs, and even primary or supplementary data sets. Of course, all such supplemental materials should be subject to peer review and should be submitted with the initial manuscript. Editors and reviewers can assist authors in determining what material is supplemental and what needs to be presented in the article proper.

Other Benefits of Reporting Standards

The general principle that guided the establishment of the JARS for psychological research was the promotion of sufficient and transparent descriptions of how a study was conducted and what the researcher(s) found. Complete reporting allows clearer determination of the strengths and weaknesses of a study. This permits the users of the evidence to judge more accurately the appropriate inferences and applications derivable from research findings.

Related to quality assessments, it could be argued as well that the existence of reporting standards will have a salutary effect on the way research is conducted. For example, by setting a standard that rates of loss of participants should be reported (see Figure 1 ), researchers may begin considering more concretely what acceptable levels of attrition are and may come to employ more effective procedures meant to maximize the number of participants who complete a study. Or standards that specify reporting a confidence interval along with an effect size might motivate researchers to plan their studies so as to ensure that the confidence intervals surrounding point estimates will be appropriately narrow.

Also, as noted above, reporting standards can improve secondary use of data by making studies more useful for meta-analysis. More broadly, if standards are similar across disciplines, a consistency in reporting could promote interdisciplinary dialogue by making it clearer to researchers how their efforts relate to one another.

And finally, reporting standards can make it easier for other researchers to design and conduct replications and related studies by providing more complete descriptions of what has been done before. Without complete reporting of the critical aspects of design and results, the value of the next generation of research may be compromised.

Possible Disadvantages of Standards

It is important to point out that reporting standards also can lead to excessive standardization with negative implications. For example, standardized reporting could fill articles with details of methods and results that are inconsequential to interpretation. The critical facts about a study can get lost in an excess of minutiae. Further, a forced consistency can lead to ignoring important uniqueness. Reporting standards that appear comprehensive might lead researchers to believe that “If it’s not asked for or does not conform to criteria specified in the standards, it’s not necessary to report.” In rare instances, then, the setting of reporting standards might lead to the omission of information critical to understanding what was done in a study and what was found.

Also, as noted above, different methods are required for studying different psychological phenomena. What needs to be reported in order to evaluate the correspondence between methods and inferences is highly dependent on the research question and empirical approach. Inferences about the effectiveness of psychotherapy, for example, require attention to aspects of research design and analysis that are different from those important for inferences in the neuroscience of text processing. This context dependency pertains not only to topic-specific considerations but also to research designs. Thus, an experimental study of the determinants of well-being analyzed via analysis of variance engenders different reporting needs than a study on the same topic that employs a passive longitudinal design and structural equation modeling. Indeed, the variations in substantive topics and research designs are factorial in this regard. So experiments in psychotherapy and neuroscience could share some reporting standards, even though studies employing structural equation models investigating well-being would have little in common with experiments in neuroscience.

Obstacles to Developing Standards

One obstacle to developing reporting standards encountered by the JARS Group was that differing taxonomies of research approaches exist and different terms are used within different subdisciplines to describe the same operational research variations. As simple examples, researchers in health psychology typically refer to studies that use experimental manipulations of treatments conducted in naturalistic settings as randomized clinical trials, whereas similar designs are referred to as randomized field trials in educational psychology. Some research areas refer to the use of random assignment of participants, whereas others use the term random allocation. Another example involves the terms multilevel model, hierarchical linear model, and mixed effects model, all of which are used to identify a similar approach to data analysis. There have been, from time to time, calls for standardized terminology to describe commonly but inconsistently used scientific terms, such as Kraemer et al.’s (1997) distinctions among words commonly used to denote risk. To address this problem, the JARS Group attempted to use the simplest descriptions possible and to avoid jargon and recommended that the new Publication Manual include some explanatory text.

A second obstacle was that certain research topics and methods will reveal different levels of consensus regarding what is and is not important to report. Generally, the newer and more complex the technique, the less agreement there will be about reporting standards. For example, although there are many benefits to reporting effect sizes, there are certain situations (e.g., multilevel designs) where no clear consensus exists on how best to conceptualize and/or calculate effect size measures. In a related vein, reporting a confidence interval with an effect size is sound advice, but calculating confidence intervals for effect sizes is often difficult given the current state of software. For this reason, the JARS Group avoided developing reporting standards for research designs about which a professional consensus had not yet emerged. As consensus emerges, the JARS can be expanded by adding modules.

Finally, the rapid pace of developments in methodology dictates that any standards would have to be updated frequently in order to retain currency. For example, the state of the art for reporting various analytic techniques is in a constant state of flux. Although some general principles (e.g., reporting the estimation procedure used in a structural equation model) can incorporate new developments easily, other developments can involve fundamentally new types of data for which standards must, by necessity, evolve rapidly. Nascent and emerging areas, such as functional neuroimaging and molecular genetics, may require developers of standards to be on constant vigil to ensure that new research areas are appropriately covered.

Questions for the Future

It has been mentioned several times that the setting of standards for reporting of research in psychology involves both general considerations and considerations specific to separate subdisciplines. And, as the brief history of standards in the APA Publication Manual suggests, standards evolve over time. The JARS Group expects refinements to the contents of its tables. Further, in the spirit of evidence-based decision making that is one impetus for the renewed emphasis on reporting standards, we encourage the empirical examination of the effects that standards have on reporting practices. Not unlike the issues many psychologists study, the proposal and adoption of reporting standards is itself an intervention. It can be studied for its effects on the contents of research reports and, most important, its impact on the uses of psychological research by decision makers in various spheres of public and health policy and by scholars seeking to understand the human mind and behavior.

The Working Group on Journal Article Reporting Standards was composed of Mark Appelbaum, Harris Cooper (Chair), Scott Maxwell, Arthur Stone, and Kenneth J. Sher. The working group wishes to thank members of the American Psychological Association’s (APA’s) Publications and Communications Board, the APA Council of Editors, and the Society for Research Synthesis Methodology for comments on this report and the standards contained herein.

  • Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gotzsche PC, Lang T. The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine. 2001. pp. 663–694. Retrieved April 20, 2007, from http://www.consort-statement.org/
  • American Educational Research Association. Standards for reporting on empirical social science research in AERA publications. Educational Researcher. 2006;35(6):33–40.
  • American Psychological Association. Publication manual of the American Psychological Association. 5th ed. Washington, DC: Author; 2001.
  • American Psychological Association, Council of Editors. Publication manual of the American Psychological Association. Psychological Bulletin. 1952;49(Suppl., Pt. 2).
  • APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61:271–283.
  • Cook DJ, Sackett DL, Spitzer WO. Methodologic guidelines for systematic reviews of randomized control trials in health care from the Potsdam Consultation on Meta-Analysis. Journal of Clinical Epidemiology. 1995;48:167–171.
  • Cooper H. Research synthesis and meta-analysis: A step-by-step approach. 4th ed. Thousand Oaks, CA: Sage; 2009.
  • Cooper H, Hedges LV, Valentine JC, editors. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation; 2009.
  • Davidson KW, Goldstein M, Kaplan RM, Kaufmann PG, Knatterud GL, Orleans TC, et al. Evidence-based behavioral medicine: What is it and how do we achieve it? Annals of Behavioral Medicine. 2003;26:161–171.
  • Des Jarlais DC, Lyles C, Crepaz N, the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health. 2004. pp. 361–366. Retrieved April 20, 2007, from http://www.trend-statement.org/asp/documents/statements/AJPH_Mar2004_Trendstatement.pdf
  • International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals: Writing and editing for biomedical publication. 2007. Retrieved April 9, 2008, from http://www.icmje.org/#clin_trials
  • Kraemer HC, Kazdin AE, Offord DR, Kessler RC, Jensen PS, Kupfer DJ. Coming to terms with the terms of risk. Archives of General Psychiatry. 1997;54:337–343.
  • Merriam-Webster’s online dictionary. (n.d.). Retrieved April 20, 2007, from http://www.m-w.com/dictionary/
  • Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup D, for the QUOROM Group. Improving the quality of reporting of meta-analysis of randomized controlled trials: The QUOROM statement. Lancet. 1999;354:1896–1900.
  • Moher D, Schulz KF, Altman DG. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Annals of Internal Medicine. 2001. pp. 657–662. Retrieved April 20, 2007, from http://www.consort-statement.org
  • Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred reporting items for systematic reviews and meta-analysis: The PRISMA statement. 2008. Manuscript submitted for publication.
  • No Child Left Behind Act of 2001, Pub. L. 107–110, 115 Stat. 1425 (2002, January 8).
  • Sackett DL, Rosenberg WMC, Muir Grey JA, Hayes RB, Richardson WS. Evidence based medicine: What it is and what it isn’t. British Medical Journal. 1996;312:71–72.
  • Stone AA, Shiffman S. Capturing momentary, self-report data: A proposal for reporting guidelines. Annals of Behavioral Medicine. 2002;24:236–243.
  • Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology. Journal of the American Medical Association. 2000;283:2008–2012.
  • Wilkinson L, the Task Force on Statistical Inference. Statistical methods in psychology journals: Guidelines and explanations. American Psychologist. 1999;54:594–604.

Research Methods in Psychology

By Saul McLeod, PhD, and Olivia Guy-Evans, MSc (Simply Psychology).

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements that predict the results of a study and can be verified or disproved by investigation.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to take, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of that difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research has been conducted and the results analysed, psychologists must decide between the two hypotheses.

So, if a significant difference is found, the psychologist rejects the null hypothesis and accepts the alternative hypothesis. If no difference is found, the null hypothesis is retained.
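To make this decision rule concrete, here is a minimal sketch in Python using the scipy library; the scores for the two conditions and the 0.05 significance level are invented for the example and are not part of the original text:

# A minimal sketch: deciding between the null and alternative hypotheses
# with a two-tailed independent-samples t-test. All data are invented.
from scipy import stats

condition_a = [12, 15, 14, 10, 13, 16, 12, 14]  # e.g., recall scores, group A
condition_b = [9, 11, 8, 12, 10, 9, 11, 10]     # e.g., recall scores, group B

# H0: no difference between conditions; H1: there is a difference (two-tailed)
t_stat, p_value = stats.ttest_ind(condition_a, condition_b)

alpha = 0.05  # conventional Type I error rate
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0, accept H1")
else:
    print(f"p = {p_value:.3f} >= {alpha}: retain H0")

Here the alternative hypothesis is two-tailed, because the test asks only whether a difference exists, not in which direction it lies.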

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representativeness means the extent to which a sample mirrors the researcher’s target population and reflects its characteristics.

Generalisability means the extent to which findings from a sample can be applied to the larger population from which the sample was drawn.

  • Volunteer sample: participants put themselves forward, for example through newspaper adverts, noticeboards, or online.
  • Opportunity sampling: also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling: every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: a system is used to select participants, such as picking every Nth person from all possible participants, where N = the number of people in the research population divided by the number of people needed for the sample.
  • Stratified sampling: the researcher identifies the subgroups in the population and selects participants from each in proportion to their occurrence.
  • Snowball sampling: researchers find a few participants and then ask them to find further participants themselves, and so on.
  • Quota sampling: researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, with 30 of them being unemployed.
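To illustrate how three of these techniques differ in practice, here is a minimal Python sketch; the population, sample size, and subgroup labels are invented for the example:

# A minimal sketch of random, systematic, and stratified sampling.
import random

population = [f"person_{i}" for i in range(100)]
sample_size = 10

# Random sampling: every member has an equal chance of being selected.
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person,
# where N = population size / sample size.
n = len(population) // sample_size
systematic_sample = population[::n][:sample_size]

# Stratified sampling: sample each subgroup in proportion to its
# occurrence in the population (here, 70% employed, 30% unemployed).
strata = {"employed": population[:70], "unemployed": population[70:]}
stratified_sample = []
for group in strata.values():
    share = round(sample_size * len(group) / len(population))
    stratified_sample.extend(random.sample(group, share))

print(random_sample)
print(systematic_sample)
print(stratified_sample)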

Experiments always have an independent variable and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.
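Continuing the smiling example above, here is a minimal sketch of what an operationalized measure could look like in code; the function and the recorded times are hypothetical illustrations, not a standard instrument:

# A minimal sketch: operationalizing "happiness" as smiles per hour.
# The timestamps (minutes from the start of observation) are invented.
smile_times = [3, 17, 25, 48, 52, 80, 95, 110]  # observed smiles
session_minutes = 120                           # two-hour observation window

def smiles_per_hour(times, duration_minutes):
    """Return the operationalized measure: smile rate per hour."""
    return len(times) / (duration_minutes / 60)

print(smiles_per_hour(smile_times, session_minutes))  # 4.0 smiles per hour

Because the measure is defined this precisely, another researcher can apply exactly the same rule and check whether the findings are reliable.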

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

They can be natural characteristics of the participant, such as intelligence, gender, or age, or situational features of the environment, such as lighting or noise.

Demand characteristics are a type of extraneous variable that arises when participants work out the aims of the research study and begin to behave in line with what they think is expected of them.

For example, in Milgram’s research, critics argued that participants worked out that the shocks were not real and administered them because they thought this was what was required of them.

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g., ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that exactly the same participants are in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Participants’ experiences during the experiment may change them in various ways.
  • They may perform better in the second condition because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment; it involves ensuring that each condition is equally likely to be used first and second by the participants (see the sketch below).
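As a concrete illustration of counterbalancing, here is a minimal Python sketch that alternates the order of two conditions across participants so that each order is used equally often; the participant names are invented:

# A minimal sketch of counterbalancing two conditions (A and B):
# half the participants complete A then B, the other half B then A,
# so order effects are balanced across conditions.
participants = ["p1", "p2", "p3", "p4", "p5", "p6"]
orders = [("A", "B"), ("B", "A")]

schedule = {p: orders[i % 2] for i, p in enumerate(participants)}
for participant, order in schedule.items():
    print(participant, "->", " then ".join(order))

With an even number of participants, exactly half experience each order, so any practice or fatigue effects should affect both conditions equally.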

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a single person, group, event, or community. They draw on information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology, and some of the best-known were carried out by Sigmund Freud. He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

[Figure: scatterplots illustrating positive, negative, and no correlation]

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient. This is a value between −1 and +1: the closer the score is to −1 or +1, the stronger the relationship between the variables. The coefficient can be positive (e.g., 0.63) or negative (e.g., −0.63).
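As an illustration, assuming SciPy is available, both Spearman's rho and Pearson's r can be computed from two made-up score lists (the data below are invented for this example):

```python
from scipy import stats

hours_slept = [8, 7, 6, 5, 9, 4, 7, 6]
reaction_ms = [210, 230, 250, 270, 205, 290, 225, 255]  # made-up data

rho, p_rho = stats.spearmanr(hours_slept, reaction_ms)
r, p_r = stats.pearsonr(hours_slept, reaction_ms)
print(f"Spearman's rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Pearson's r    = {r:.2f} (p = {p_r:.3f})")
```

Here a negative coefficient would indicate that more sleep tends to go with faster (lower) reaction times.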

[Figure: scatterplots showing strong, weak, and perfect positive and negative correlations, and no correlation]

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics they feel are relevant, and the interviewer asks follow-up questions in response to the participant's answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. There could be ethical problems of deception and consent with this particular observation method.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. Participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect: none of the participants can complete the task or score well, so all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period. In cohort studies, the participants must share a common factor or characteristic, such as age, demographic, or occupation.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as being reliable.

  • Test-retest reliability : assessing the same person on two different occasions; this shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
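A minimal sketch of both reliability checks, assuming SciPy is available: Pearson's r for test-retest reliability and a simple percentage agreement for inter-observer reliability. All scores and behaviour codes below are made up for illustration.

```python
from scipy.stats import pearsonr

# Test-retest: the same ten people assessed on two occasions (made-up scores)
time1 = [12, 15, 11, 18, 14, 16, 13, 17, 10, 15]
time2 = [13, 14, 12, 17, 15, 16, 12, 18, 11, 14]
r, _ = pearsonr(time1, time2)
print(f"Test-retest reliability: r = {r:.2f}")

# Inter-observer: two observers code the same ten behaviours
obs1 = ["hit", "play", "play", "hit", "play", "hit", "play", "play", "hit", "play"]
obs2 = ["hit", "play", "hit", "hit", "play", "hit", "play", "play", "hit", "play"]
agreement = sum(a == b for a, b in zip(obs1, obs2)) / len(obs1)
print(f"Inter-observer agreement: {agreement:.0%}")
```

In practice, chance-corrected statistics such as Cohen's kappa are often preferred to raw percentage agreement, but the simple proportion shows the basic idea.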

Meta-Analysis

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: increases the validity of the conclusions, as they are based on a wider range of studies and participants.

Weaknesses: the research designs of the included studies can vary, so they are not always truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewers determines whether the article is accepted. The article may be accepted as it is, accepted with revisions, sent back to the author to revise and re-submit, or rejected without the possibility of re-submission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers’ comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing their work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online in which everyone has a chance to offer opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g., reaction time or number of mistakes. It represents how much, how long, or how many of something there are. Tallies of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : does the test measure what it’s supposed to measure ‘on the face of it’? This is assessed by ‘eyeballing’ the measuring instrument or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we conventionally use p < 0.05, as it strikes a balance between making a Type I and a Type II error, but a stricter level such as p < 0.01 is used when an error could cause harm, for example when testing a new drug.
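For illustration, here is how the decision rule looks in practice with an independent-samples t-test from SciPy on made-up group scores; the 0.05 cutoff is the conventional alpha level described above.

```python
from scipy import stats

# Made-up scores for two independent groups
treatment = [23, 27, 21, 30, 25, 28, 24, 26]
control   = [20, 22, 19, 24, 21, 18, 23, 20]

t, p = stats.ttest_ind(treatment, control)
alpha = 0.05  # the conventional significance level
if p < alpha:
    print(f"p = {p:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p:.3f} >= {alpha}: retain the null hypothesis")
```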

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, revealing the full aims may cause them to guess the aims of the study and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that the participants would understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • The right to withdraw can cause bias, as those who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would spoil the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but instead use numbers or false names, though full confidentiality is not always possible, as it is sometimes possible to work out who the participants were.


12.3 Expressing Your Results

Learning Objectives

  • Write out simple descriptive statistics in American Psychological Association (APA) style.
  • Interpret and create simple APA-style graphs—including bar graphs, line graphs, and scatterplots.
  • Interpret and create simple APA-style tables—including tables of group or condition means and correlation matrixes.

Once you have conducted your descriptive statistical analyses, you will need to present them to others. In this section, we focus on presenting descriptive statistical results in writing, in graphs, and in tables—following American Psychological Association (APA) guidelines for written research reports. These principles can be adapted easily to other presentation formats such as posters and slide show presentations.

Presenting Descriptive Statistics in Writing

When you have a small number of results to report, it is often most efficient to write them out. There are a few important APA style guidelines here. First, statistical results are always presented in the form of numerals rather than words and are usually rounded to two decimal places (e.g., “2.00” rather than “two” or “2”). They can be presented either in the narrative description of the results or parenthetically—much like reference citations. Here are some examples:

The mean age of the participants was 22.43 years with a standard deviation of 2.34.
Among the low self-esteem participants, those in a negative mood expressed stronger intentions to have unprotected sex ( M = 4.05, SD = 2.32) than those in a positive mood ( M = 2.15, SD = 2.27).
The treatment group had a mean of 23.40 ( SD = 9.33), while the control group had a mean of 20.87 ( SD = 8.45).
The test-retest correlation was .96.
There was a moderate negative correlation between the alphabetical position of respondents’ last names and their response time ( r = −.27).

Notice that when presented in the narrative, the terms mean and standard deviation are written out, but when presented parenthetically, the symbols M and SD are used instead. Notice also that it is especially important to use parallel construction to express similar or comparable results in similar ways. The third example is much better than the following nonparallel alternative:

The treatment group had a mean of 23.40 ( SD = 9.33), while 20.87 was the mean of the control group, which had a standard deviation of 8.45.
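If results sentences are generated from analysis output, a small helper can enforce the numerals-and-two-decimals convention. This is only an illustrative sketch; the apa_mean_sd helper is our own invention, reusing the example values above.

```python
def apa_mean_sd(mean, sd):
    """Format a mean and standard deviation parenthetically, APA style."""
    return f"(M = {mean:.2f}, SD = {sd:.2f})"

print(f"The treatment group had a higher mean {apa_mean_sd(23.4, 9.33)} "
      f"than the control group {apa_mean_sd(20.87, 8.45)}.")
# -> ... (M = 23.40, SD = 9.33) ... (M = 20.87, SD = 8.45)
```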

Presenting Descriptive Statistics in Graphs

When you have a large number of results to report, you can often do it more clearly and efficiently with a graph. When you prepare graphs for an APA-style research report, there are some general guidelines that you should keep in mind. First, the graph should always add important information rather than repeat information that already appears in the text or in a table. (If a graph presents information more clearly or efficiently, then you should keep the graph and eliminate the text or table.) Second, graphs should be as simple as possible. For example, the Publication Manual discourages the use of color unless it is absolutely necessary (although color can still be an effective element in posters, slide show presentations, or textbooks.) Third, graphs should be interpretable on their own. A reader should be able to understand the basic result based only on the graph and its caption and should not have to refer to the text for an explanation.

There are also several more technical guidelines for graphs that include the following:

  • The graph should be slightly wider than it is tall.
  • The independent variable should be plotted on the x- axis and the dependent variable on the y- axis.
  • Values should increase from left to right on the x- axis and from bottom to top on the y- axis.

Axis Labels and Legends

  • Axis labels should be clear and concise and include the units of measurement if they do not appear in the caption.
  • Axis labels should be parallel to the axis.
  • Legends should appear within the boundaries of the graph.
  • Text should be in the same simple font throughout and differ by no more than four points.
  • Captions should briefly describe the figure, explain any abbreviations, and include the units of measurement if they do not appear in the axis labels.
  • Captions in an APA manuscript should be typed on a separate page that appears at the end of the manuscript. See Chapter 11 “Presenting Your Research” for more information.

As we have seen throughout this book, bar graphs are generally used to present and compare the mean scores for two or more groups or conditions. The bar graph in Figure 12.12 “Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues” is an APA-style version of Figure 12.5 “Bar Graph Showing Mean Clinician Phobia Ratings for Children in Two Treatment Conditions” . Notice that it conforms to all the guidelines listed. A new element in Figure 12.12 “Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues” is the smaller vertical bars that extend both upward and downward from the top of each main bar. These are error bars , and they represent the variability in each group or condition. Although they sometimes extend one standard deviation in each direction, they are more likely to extend one standard error in each direction (as in Figure 12.12 “Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues” ). The standard error is the standard deviation of the group divided by the square root of the sample size of the group. The standard error is used because, in general, a difference between group means that is greater than two standard errors is statistically significant. Thus one can “see” whether a difference is statistically significant based on a bar graph with error bars.

Figure 12.12 Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues

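As a hedged sketch of such a graph, assuming Matplotlib is available, the standard error can be computed as the SD divided by the square root of n and passed as the error-bar height. The means and SDs below come from the mood example earlier in this section; the sample sizes are made up.

```python
import math
import matplotlib.pyplot as plt

means = [4.05, 2.15]  # group means (from the example above)
sds   = [2.32, 2.27]
ns    = [30, 30]      # made-up sample sizes
sems  = [sd / math.sqrt(n) for sd, n in zip(sds, ns)]  # SE = SD / sqrt(n)

fig, ax = plt.subplots(figsize=(6, 4))  # slightly wider than tall
ax.bar(["Negative mood", "Positive mood"], means, yerr=sems, capsize=4,
       color="lightgray", edgecolor="black")
ax.set_xlabel("Mood condition")
ax.set_ylabel("Intention to have unprotected sex")
plt.show()
```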

Line Graphs

Line graphs are used to present correlations between quantitative variables when the independent variable has, or is organized into, a relatively small number of distinct levels. Each point in a line graph represents the mean score on the dependent variable for participants at one level of the independent variable. Figure 12.13 “Sample APA-Style Line Graph Based on Research by Carlson and Conard” is an APA-style version of the results of Carlson and Conard. Notice that it includes error bars representing the standard error and conforms to all the stated guidelines.

Figure 12.13 Sample APA-Style Line Graph Based on Research by Carlson and Conard


In most cases, the information in a line graph could just as easily be presented in a bar graph. In Figure 12.13 “Sample APA-Style Line Graph Based on Research by Carlson and Conard” , for example, one could replace each point with a bar that reaches up to the same level and leave the error bars right where they are. This emphasizes the fundamental similarity of the two types of statistical relationship. Both are differences in the average score on one variable across levels of another. The convention followed by most researchers, however, is to use a bar graph when the variable plotted on the x- axis is categorical and a line graph when it is quantitative.

Scatterplots

Scatterplots are used to present relationships between quantitative variables when the variable on the x- axis (typically the independent variable) has a large number of levels. Each point in a scatterplot represents an individual rather than the mean for a group of individuals, and there are no lines connecting the points. The graph in Figure 12.14 “Sample APA-Style Scatterplot” is an APA-style version of Figure 12.8 “Statistical Relationship Between Several College Students’ Scores on the Rosenberg Self-Esteem Scale Given on Two Occasions a Week Apart” , which illustrates a few additional points. First, when the variables on the x- axis and y -axis are conceptually similar and measured on the same scale—as here, where they are measures of the same variable on two different occasions—this can be emphasized by making the axes the same length. Second, when two or more individuals fall at exactly the same point on the graph, one way this can be indicated is by offsetting the points slightly along the x- axis. Other ways are by displaying the number of individuals in parentheses next to the point or by making the point larger or darker in proportion to the number of individuals. Finally, the straight line that best fits the points in the scatterplot, which is called the regression line, can also be included.

Figure 12.14 Sample APA-Style Scatterplot

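A minimal Matplotlib sketch of such a scatterplot, using simulated test-retest scores and a least-squares regression line from np.polyfit; all data here are made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
time1 = rng.normal(20, 4, 25)           # made-up self-esteem scores, week 1
time2 = time1 + rng.normal(0, 1.5, 25)  # retest a week later

slope, intercept = np.polyfit(time1, time2, 1)  # least-squares regression line
xs = np.linspace(time1.min(), time1.max(), 100)

fig, ax = plt.subplots(figsize=(5, 5))  # equal axes: the same variable twice
ax.scatter(time1, time2, color="black")
ax.plot(xs, slope * xs + intercept, color="gray")
ax.set_xlabel("Score at Time 1")
ax.set_ylabel("Score at Time 2")
plt.show()
```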

Expressing Descriptive Statistics in Tables

Like graphs, tables can be used to present large amounts of information clearly and efficiently. The same general principles apply to tables as apply to graphs. They should add important information to the presentation of your results, be as simple as possible, and be interpretable on their own. Again, we focus here on tables for an APA-style manuscript.

The most common use of tables is to present several means and standard deviations—usually for complex research designs with multiple independent and dependent variables. Figure 12.15 “Sample APA-Style Table Presenting Means and Standard Deviations”, for example, shows the results of a hypothetical study similar to the one by MacDonald and Martineau (2002) discussed in Chapter 5 “Psychological Measurement”. (The means in Figure 12.15 “Sample APA-Style Table Presenting Means and Standard Deviations” are the means reported by MacDonald and Martineau, but the standard deviations are not.) Recall that these researchers categorized participants as having low or high self-esteem, put them into a negative or positive mood, and measured their intentions to have unprotected sex. Although not mentioned in Chapter 5 “Psychological Measurement”, they also measured participants’ attitudes toward unprotected sex. Notice that the table includes horizontal lines spanning the entire table at the top and bottom, and just beneath the column headings. Furthermore, every column has a heading—including the leftmost column—and there are additional headings that span two or more columns that help to organize the information and present it more efficiently. Finally, notice that APA-style tables are numbered consecutively starting at 1 (Table 1, Table 2, and so on) and given a brief but clear and descriptive title.

Figure 12.15 Sample APA-Style Table Presenting Means and Standard Deviations

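With tabular raw data, a table of means and standard deviations like this one can be produced with a pandas group-by. The tiny data set below is hypothetical and only illustrates the mechanics, not MacDonald and Martineau's actual data.

```python
import pandas as pd

# Hypothetical raw data: self-esteem group, mood condition, and an intentions score
df = pd.DataFrame({
    "self_esteem": ["low", "low", "low", "low", "high", "high", "high", "high"],
    "mood":        ["neg", "neg", "pos", "pos", "neg", "neg", "pos", "pos"],
    "intentions":  [4.1, 4.0, 2.2, 2.1, 2.9, 3.1, 3.0, 3.2],
})

table = df.groupby(["self_esteem", "mood"])["intentions"].agg(["mean", "std"]).round(2)
print(table)
```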

Another common use of tables is to present correlations—usually measured by Pearson’s r —among several variables. This is called a correlation matrix . Figure 12.16 “Sample APA-Style Table (Correlation Matrix) Based on Research by McCabe and Colleagues” is a correlation matrix based on a study by David McCabe and colleagues (McCabe, Roediger, McDaniel, Balota, & Hambrick, 2010). They were interested in the relationships between working memory and several other variables. We can see from the table that the correlation between working memory and executive function, for example, was an extremely strong .96, that the correlation between working memory and vocabulary was a medium .27, and that all the measures except vocabulary tend to decline with age. Notice here that only half the table is filled in because the other half would have identical values. For example, the Pearson’s r value in the upper right corner (working memory and age) would be the same as the one in the lower left corner (age and working memory). The correlation of a variable with itself is always 1.00, so these values are replaced by dashes to make the table easier to read.

Figure 12.16 Sample APA-Style Table (Correlation Matrix) Based on Research by McCabe and Colleagues

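A correlation matrix can likewise be produced directly from a pandas DataFrame. The simulated variables below merely mimic the pattern described; they are not McCabe and colleagues' data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 100
wm = rng.normal(0, 1, n)  # simulated working-memory scores
df = pd.DataFrame({
    "Working memory":     wm,
    "Executive function": 0.9 * wm + rng.normal(0, 0.3, n),
    "Vocabulary":         0.3 * wm + rng.normal(0, 1.0, n),
    "Age":                -0.5 * wm + rng.normal(0, 1.0, n),
})

print(df.corr().round(2))  # Pearson correlation matrix
```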

As with graphs, precise statistical results that appear in a table do not need to be repeated in the text. Instead, the writer can note major trends and alert the reader to details (e.g., specific correlations) that are of particular interest.

Key Takeaways

  • In an APA-style article, simple results are most efficiently presented in the text, while more complex results are most efficiently presented in graphs or tables.
  • APA style includes several rules for presenting numerical results in the text. These include using words only for numbers less than 10 that do not represent precise statistical results, rounding results to two decimal places, and using words (e.g., “mean”) in the text but symbols (e.g., “ M ”) in parentheses.
  • APA style includes several rules for presenting results in graphs and tables. Graphs and tables should add information rather than repeating information, be as simple as possible, and be interpretable on their own with a descriptive caption (for graphs) or a descriptive title (for tables).
  • Practice: In a classic study, men and women rated the importance of physical attractiveness in both a short-term mate and a long-term mate (Buss & Schmitt, 1993). The means and standard deviations are as follows. Men / Short Term: M = 5.67, SD = 2.34; Men / Long Term: M = 4.43, SD = 2.11; Women / Short Term: M = 5.67, SD = 2.48; Women / Long Term: M = 4.22, SD = 1.98. Present these results (a) in writing, (b) in a graph, and (c) in a table.
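For part (b) of the practice exercise, one possible starting point is a grouped bar graph. This Matplotlib sketch uses only the means given above; error bars based on the SDs would also require the sample sizes, which are not given.

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["Short-term mate", "Long-term mate"]
men    = [5.67, 4.43]  # means given in the exercise
women  = [5.67, 4.22]
x = np.arange(len(labels))
width = 0.35

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(x - width / 2, men, width, label="Men", color="gray", edgecolor="black")
ax.bar(x + width / 2, women, width, label="Women", color="white", edgecolor="black")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.set_ylabel("Rated importance of physical attractiveness")
ax.legend()
plt.show()
```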

Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: A contextual evolutionary analysis of human mating. Psychological Review, 100 , 204–232.

MacDonald, T. K., & Martineau, A. M. (2002). Self-esteem, mood, and intentions to use condoms: When does low self-esteem lead to risky health behaviors? Journal of Experimental Social Psychology, 38 , 299–306.

McCabe, D. P., Roediger, H. L., McDaniel, M. A., Balota, D. A., & Hambrick, D. Z. (2010). The relationship between working memory capacity and executive functioning. Neuropsychology, 24 (2), 222–243.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

A scoping review on effective measurements of emotional responses in teamwork contexts

  • Published: 27 June 2024


Xiaoshan Huang (ORCID: orcid.org/0000-0002-2853-7219) and Susanne P. Lajoie


Effective collaboration within teams relies significantly on emotion regulation, a process vital for managing and navigating emotional responses. Various methods have been employed to measure emotional responses in team contexts, including self-report questionnaires, behavioral coding, and physiological measures. This review paper aims to summarize studies conducted in teamwork contexts that measured team members' emotional responses, with a particular focus on the methods used. The findings from these studies can lead to the identification of emotion regulation strategies and to effective interventions that improve team performance in the future. The core question guiding this review is: What are effective measures for capturing individuals' emotional responses in team dynamics? Using a scoping review, the study aims to answer three research questions (RQs): 1: What was the distribution over time of the studies that examined team members' emotional responses and/or regulation of emotions in team dynamics? 2: What type(s) of data were collected, and what theories were used in these studies? 3: What are the advantages and challenges of each type of measurement of emotional responses in team dynamics? The synthesis of the findings suggests that multimodal data, combining measures such as physiological data, observations, and self-reports, offer a promising approach to capturing emotions in teamwork contexts. Furthermore, combining multimodal data can help capture individual and interpersonal regulation, including self-, co-, and social emotion regulation in teamwork. This paper highlights the importance of integrating multiple measurement methods and provides insights into the advantages and challenges associated with each approach.


Data availability

All data generated or analysed during this study are included in this published article.


Acknowledgements

We would like to express our sincere gratitude to Dr. Jason M. Harley and Dr. Adam K. Dubé for their invaluable contributions and insightful feedback during the development of the first draft of this article.

This work is supported by the Fonds de recherche du Québec – Société et culture (FRQSC), awarded to Xiaoshan Huang, and the Social Sciences and Humanities Research Council of Canada (SSHRC) under grant number 895–2011-1006. Any opinions, findings, and conclusions or recommendations expressed in this paper, however, are those of the authors and do not necessarily reflect the views of the FRQSC and the SSHRC.

Author information

Authors and affiliations

Department of Educational and Counselling Psychology, McGill University, Room B148, Education Building, 3700 McTavish Street, Montreal, Quebec H3A 1Y2, Canada

Xiaoshan Huang & Susanne P. Lajoie


Corresponding author

Correspondence to Xiaoshan Huang .

Ethics declarations

Conflict of interest

The author(s) declared no potential conflicts of interest concerning the research, authorship, and/or publication of this article.

Current themes of research

Xiaoshan Huang is a PhD candidate in the Department of Educational and Counselling Psychology (ECP) at McGill University and a member of the ATLAS (Advanced Technologies for Learning in Authentic Settings) Lab. Her research interests include investigating learners’ cognition, motivation, and emotion regulation in both academia and the workplace using intelligent tutoring systems, as well as socially shared regulation in collaborative learning.

Most relevant publications

Huang, X., Wu, H., Liu, X., & Lajoie, S. (2024, May). Examining the Role of Peer Acknowledgements on Social Annotations: Unraveling the Psychological Underpinnings. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–9.   https://doi.org/10.1145/3613904.3641906

Huang, X., Li, S., Wang, T., Pan, Z., & Lajoie, S. P. (2023). Exploring the co‐occurrence of students' learning behaviours and reasoning processes in an intelligent tutoring system: An epistemic network analysis.  Journal of Computer Assisted Learning , 39 (5), 1701–1713. https://doi.org/10.1111/jcal.12827

Huang, X., Li, S., & Lajoie, S. P. (2023, May). The Relative Importance of Cognitive and Behavioral Engagement to Task Performance in Self-regulated Learning with an Intelligent Tutoring System. In  International Conference on Intelligent Tutoring Systems  (pp. 430–441). Cham: Springer Nature Switzerland.

Huang, X., & Lajoie, S. P. (2023). Social emotional interaction in collaborative learning: why it matters and how can we measure it?  Social Sciences & Humanities Open ,  7 (1), 100447. https://doi.org/10.1016/j.ssaho.2023.100447

Huang, X., Huang, L., & Lajoie, S. P. (2022). Exploring teachers’ emotional experience in a TPACK development task.  Educational technology research and development ,  70 (4), 1283–1303.



About this article

Huang, X., Lajoie, S.P. A scoping review on effective measurements of emotional responses in teamwork contexts. Curr Psychol (2024). https://doi.org/10.1007/s12144-024-06235-7


Accepted: 03 June 2024

Published: 27 June 2024

DOI: https://doi.org/10.1007/s12144-024-06235-7


Keywords: Emotional responses, Team processes, Collaboration effectiveness, Measurements, Emotion regulation


The Heart of Loyalty: 2024 Consumer Research Report

Jun 26, 2024

The Hidden Consumer Motivators Behind Loyalty Program Success

The Heart of Loyalty: 2024 Consumer Research Report Reveals the Behavioral Psychology that Underpins Consumer Loyalty and What Strategies Brands Can Activate

ST. PETERSBURG, FLA., June 26, 2024 — Kobie, a global leader in loyalty marketing technology and services, today released The Heart of Loyalty: 2024 Consumer Research Report, unveiling a uniquely academic approach to consumer motivations.

Kobie’s research team, composed of PhDs who specialize in the intersection of human psychology and consumer perceptions, fielded the research study with more than 4,000 consumers in industries like retail, financial services, travel, hospitality, quick serve restaurants and more. As one of the most academically rigorous reports in the field, the findings reveal key questions brands should be asking about how consumers perceive their loyalty program, and what strategies can be activated to increase member engagement and drive deeper emotional loyalty. Key learnings for brands looking to improve their loyalty program health include:

1. The lifecycle of disengagement: Understand why members disengage at different points and how to address the underlying reasons.
2. The power of choice and co-creation: Learn how to leverage optionality and co-creating the experience as a powerful motivator that shifts mindsets towards cash back and other benefits.
3. Perfecting personalization: Discover how and why consumers react when personalization falls short, and what you can do to prevent disengagement.
4. Gamification & engagement: Explore how loyalty strategies and game science can drive engagement during the moments in between transactions.
5. The duality of tiers: Understand the dual desires of consumers for both endowing status and seeing progress, and what the implications are for tiered loyalty programs.
6. Navigating newness: Address the challenge of consumer resistance to new or unfamiliar concepts, and how age and emotional loyalty influence adoption.

“This research gives loyalty marketers information they can use immediately,” says Dr. JR Slubowski, Kobie’s AVP of Strategic Consulting and head of their Research Center of Excellence. “Our study differs in that it ties actionable program insights to what makes us tick as human beings – what motivates us – in terms of what drives deeper emotional connections.”

Slubowski received his doctorate in 2022 with a dual emphasis in Marketing and Management and has worked in the customer experience and loyalty fields for more than a decade. He adds, “Loyalty practitioners, whether new to loyalty or tenured, will find this study especially useful as they aim to understand and shift consumers’ perception of their program to be more positive, which, in turn, builds emotional loyalty and ultimately grows enterprise value for brands.” Recently named the only Leader in Services and Technology by Forrester, Slubowski and the Kobie team are equipped to help readers put the report findings into action.

About the report:  Kobie fielded their 2024 research study with over 4,000 consumers across the U.S. and Canada, drilling into a variety of topics like appeal of features and benefits, personalization, engagement, recognition and more. As the only provider who sees emotional loyalty as an input vs. an output of loyalty, Kobie use this lens and a proprietary Emotional Loyalty Scoring® (ELS) methodology to tie the findings together and provide actionable insights for brands. To access the full research report, visit:  https://kobie.com/2024-consumer-research/

About Kobie:  As a trusted partner for more than 30 years, Kobie delivers market-leading, end-to-end loyalty solutions designed to enable customer experiences for the world’s most successful brands. With a strategy-led, technology-enabled approach, Kobie is consistently named an industry leader by Forrester with a mission of growing enterprise value through loyalty for clients.​ ​ Reaching more than 330 million consumers through loyalty, Kobie’s solutions are robust, but our philosophy is simple. The thoughtful design of proven solutions coupled with extensible, scalable, and configurable technology leads to a seamless customer experience. We bring strategic tools and frameworks to design programs that deliver results, and leverage our proprietary technology, Kobie Alchemy® Loyalty Cloud, to deliver and measure loyalty experiences. To learn more about partnering with Kobie, visit www.kobie.com.





The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract. The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening, which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003)[1]. Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review, which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question and hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968)[2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

Method

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned to conditions, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on. The materials subsection is also a good place to refer to the reliability and/or validity of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items and that they accurately measure what they are intended to measure.
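Reliability statistics like these are simple functions of the item scores, so they can be computed directly from the data. The following Python sketch is a minimal illustration of Cronbach's α for a participants-by-items score matrix; the variable names and sample data are invented. Test-retest reliability would instead be the correlation between total scores from two administrations.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a participants-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 participants x 4 questionnaire items
scores = np.array([[4, 5, 4, 5],
                   [2, 3, 2, 2],
                   [5, 4, 5, 4],
                   [3, 3, 4, 3],
                   [1, 2, 1, 2]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```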

Results

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
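To make these preliminary computations concrete, here is a short Python sketch of the two examples just mentioned (mean ratings and percentage recalled); all of the numbers are hypothetical.

```python
import numpy as np

# Hypothetical data: 3 participants each rated 20 stimulus people (1-7 scale)
ratings = np.array([[5, 6, 4, 7, 5] * 4,
                    [3, 2, 4, 3, 2] * 4,
                    [6, 7, 6, 5, 7] * 4])

# One primary variable per participant: the mean attractiveness rating
mean_attractiveness = ratings.mean(axis=1)

# Hypothetical recall data: words correctly recalled from a 20-word list
n_correct = np.array([12, 17, 9])
percent_recalled = 100 * n_correct / 20

print(mean_attractiveness)   # [5.4 2.8 6.2]
print(percent_recalled)      # [60. 85. 45.]
```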

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003)[3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.
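As an illustration of Step 3, the statistical sentence can be generated straight from the analysis output. The Python sketch below formats a hypothetical independent-samples t test in APA style; the data are invented, and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two conditions
control = np.array([4.2, 3.9, 4.6, 4.1, 4.8, 3.7])
treatment = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])

t, p = stats.ttest_ind(treatment, control)  # equal-variance t test
df = len(treatment) + len(control) - 2      # degrees of freedom

# APA style drops the leading zero on p, e.g., t(10) = 4.05, p = .002
p_str = f"{p:.3f}".lstrip("0")
print(f"t({df}) = {t:.2f}, p = {p_str}")
```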

Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968)[4], for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end by returning to the problem or issue introduced in your opening paragraph and clearly stating how your research has addressed that issue or problem.

References

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
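These ordering rules amount to a multi-level sort: compare author surnames in order, and fall back to the year only when the authors are identical. A minimal Python sketch with invented entries shows the logic:

```python
# Hypothetical reference records: (list of author surnames, year, title)
refs = [
    (["Smith", "Jones"], 2010, "Second study"),
    (["Smith", "Adams"], 2012, "Third study"),
    (["Brown"], 2008, "First study"),
    (["Smith", "Adams"], 2005, "Earlier study"),
]

# Sort by successive author surnames, then by year of publication
refs.sort(key=lambda r: (r[0], r[1]))

for authors, year, title in refs:
    print(", ".join(authors), f"({year})", title)
```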

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to display graphs, illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.
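A results figure can be produced directly from the condition means. The following Python sketch uses matplotlib to draw a simple bar graph with error bars and save it at print resolution; the numbers are invented (loosely echoing a bystander-style result), and the file name is just a placeholder.

```python
import matplotlib.pyplot as plt

# Hypothetical condition means and standard errors for a results figure
conditions = ["Alone", "One other", "Four others"]
means = [52, 31, 19]   # e.g., percentage of participants helping
errors = [4, 3, 3]

fig, ax = plt.subplots()
ax.bar(conditions, means, yerr=errors, color="gray", edgecolor="black")
ax.set_xlabel("Number of bystanders")
ax.set_ylabel("Participants helping (%)")
fig.savefig("figure1.png", dpi=300)  # each figure appears on its own page
```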

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.3 Introduction and Method. Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.4 Results and Discussion. The discussion begins immediately after the results section ends.

Figure 11.5 References and Figure. If there were appendices or tables, they would come before the figure.

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
Exercises

  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different color each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Notes

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8(4), 377–383.



Logo for Texas State University Pressbooks

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

Presenting Your Research

49 Writing a Research Report in American Psychological Association (APA) Style

Learning objectives.

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a title page . The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioral Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behavior?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s  Soooo  Cute!  How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal Psychological Science .

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that they enjoy smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question and hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The  method section  is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned to conditions, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Three Ways of Organizing an APA-Style Method. Image description available.

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on. The materials subsection is also a good place to refer to the reliability and/or validity of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items and that they accurately measure what they are intended to measure.

The  results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Many journals encourage the open sharing of raw data online, and some now require open data and materials before publication.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4], for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end by returning to the problem or issue introduced in your opening paragraph and clearly stating how your research has addressed that issue or problem.

References

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
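
These ordering rules amount to a simple lexicographic sort. The following Python sketch is hypothetical — the entries are invented — but its sort key mirrors the rule exactly: first author, then second author, then year of publication.

```python
# Hypothetical sketch of the APA reference-ordering rule described above:
# alphabetical by first author, then second author, then year of publication.
references = [
    ("Smith", "Jones", 2019, "Study B"),
    ("Brown", "",      2015, "Study A"),  # single author; the empty string makes
                                          # it sort first ("nothing precedes
                                          # something"), a deliberate simplification
    ("Smith", "Adams", 2021, "Study C"),
    ("Smith", "Jones", 2012, "Study D"),
]

# Sort by (first author, second author, year), mirroring the rules in the text
references.sort(key=lambda ref: (ref[0], ref[1], ref[2]))

for first, second, year, title in references:
    print(first, second, year, title)
```

Running this prints Brown (2015) first, then Smith and Adams (2021), then the two Smith and Jones papers in chronological order, which is exactly the order the reference list should take.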

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to display graphs, illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.
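
For authors who prepare their figures programmatically, here is a minimal sketch, assuming matplotlib, of a simple bar graph with a brief caption in sentence case. The condition labels and values are invented, and the layout is only approximate; it is not a substitute for the detailed APA figure guidelines mentioned above.

```python
# Hypothetical sketch: a simple bar graph with a brief explanatory caption.
# The condition names, means, and error values are invented for illustration.
import matplotlib.pyplot as plt

conditions = ["Control", "Low dose", "High dose"]
means = [4.2, 5.1, 6.3]
errors = [0.4, 0.5, 0.4]  # e.g., standard errors of the mean

fig, ax = plt.subplots()
ax.bar(conditions, means, yerr=errors, color="gray", edgecolor="black")
ax.set_xlabel("Condition")
ax.set_ylabel("Mean rating")

# Caption below the figure: only the first word of each sentence capitalized,
# per the convention described in the text
fig.text(0.1, -0.02,
         "Figure 1. Mean ratings by condition. Error bars show standard errors.",
         ha="left")
fig.savefig("figure1.png", bbox_inches="tight")
```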

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

[Figures 11.2 to 11.5: sample pages from the student manuscript appear here.]

Image Description

Figure 11.1 image description:  Table showing three ways of organizing an APA-style method section.

In the simple method, there are two subheadings: “Participants” (which might begin “The participants were…”) and “Design and procedure” (which might begin “There were three conditions…”).

In the typical method, there are three subheadings: “Participants” (“The participants were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

In the complex method, there are four subheadings: “Participants” (“The participants were…”), “Materials” (“The stimuli were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).  [Return to Figure 11.1]

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8, 377–383.


Research Methods in Psychology Copyright © 2023 by William L. Kelemen, Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
