
Social media harms teens’ mental health, mounting evidence shows. What now?

Understanding what is going on in teens’ minds is necessary for targeted policy suggestions

A teen scrolls through social media alone on her phone.

Most teens use social media, often for hours on end. Some social scientists are confident that such use is harming their mental health. Now they want to pinpoint what explains the link.

Carol Yepes/Getty Images


By Sujata Gupta

February 20, 2024 at 7:30 am

In January, Mark Zuckerberg, CEO of Facebook’s parent company Meta, appeared at a congressional hearing to answer questions about how social media potentially harms children. Zuckerberg opened by saying: “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health.”

But many social scientists would disagree with that statement. In recent years, studies have started to show a causal link between teen social media use and reduced well-being or mood disorders, chiefly depression and anxiety.

Ironically, one of the most cited studies into this link focused on Facebook.

Researchers delved into whether the platform’s introduction across college campuses in the mid 2000s increased symptoms associated with depression and anxiety. The answer was a clear yes, says MIT economist Alexey Makarin, a coauthor of the study, which appeared in the November 2022 American Economic Review. “There is still a lot to be explored,” Makarin says, but “[to say] there is no causal evidence that social media causes mental health issues, to that I definitely object.”

The concern, and the studies, come from statistics showing that social media use in teens ages 13 to 17 is now almost ubiquitous. Two-thirds of teens report using TikTok, and some 60 percent of teens report using Instagram or Snapchat, a 2022 survey found. (Only 30 percent said they used Facebook.) Another survey showed that girls, on average, allot roughly 3.4 hours per day to TikTok, Instagram and Facebook, compared with roughly 2.1 hours among boys. At the same time, more teens are showing signs of depression than ever, especially girls (SN: 6/30/23).

As more studies show a strong link between these phenomena, some researchers are starting to shift their attention to possible mechanisms. Why does social media use seem to trigger mental health problems? Why are those effects unevenly distributed among different groups, such as girls or young adults? And can the positives of social media be teased out from the negatives to provide more targeted guidance to teens, their caregivers and policymakers?

“You can’t design good public policy if you don’t know why things are happening,” says Scott Cunningham, an economist at Baylor University in Waco, Texas.

Increasing rigor

Concerns over the effects of social media use in children have been circulating for years, resulting in a massive body of scientific literature. But those mostly correlational studies could not show if teen social media use was harming mental health or if teens with mental health problems were using more social media.

Moreover, the findings from such studies were often inconclusive, or the effects on mental health so small as to be inconsequential. In one study that received considerable media attention, psychologists Amy Orben and Andrew Przybylski combined data from three surveys to see if they could find a link between technology use, including social media, and reduced well-being. The duo gauged the well-being of over 355,000 teenagers by focusing on questions around depression, suicidal thinking and self-esteem.

Digital technology use was associated with a slight decrease in adolescent well-being, Orben, now of the University of Cambridge, and Przybylski, of the University of Oxford, reported in 2019 in Nature Human Behaviour. But the duo downplayed that finding, noting that researchers have observed similar drops in adolescent well-being associated with drinking milk, going to the movies or eating potatoes.

Holes have begun to appear in that narrative thanks to newer, more rigorous studies.

In one longitudinal study, researchers — including Orben and Przybylski — used survey data on social media use and well-being from over 17,400 teens and young adults to look at how individuals’ responses to a question gauging life satisfaction changed between 2011 and 2018. And they dug into how the responses varied by gender, age and time spent on social media.

Social media use was associated with a drop in well-being among teens during certain developmental periods, chiefly puberty and young adulthood, the team reported in 2022 in Nature Communications. That translated to lower well-being scores around ages 11 to 13 for girls and ages 14 to 15 for boys. Both groups also reported a drop in well-being around age 19. Moreover, among the older teens, the team found evidence for the Goldilocks Hypothesis: the idea that both too much and too little time spent on social media can harm mental health.

“There’s hardly any effect if you look over everybody. But if you look at specific age groups, at particularly what [Orben] calls ‘windows of sensitivity’ … you see these clear effects,” says L.J. Shrum, a consumer psychologist at HEC Paris who was not involved with this research. His review of studies related to teen social media use and mental health is forthcoming in the Journal of the Association for Consumer Research.

Cause and effect

That longitudinal study hints at causation, researchers say. But one of the clearest ways to pin down cause and effect is through natural or quasi-experiments. For these in-the-wild experiments, researchers must identify situations where the rollout of a societal “treatment” is staggered across space and time. They can then compare outcomes among members of the group who received the treatment to those still in the queue — the control group.
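The arithmetic at the heart of such designs is a difference-in-differences: the change in the treated group’s outcome minus the change in the control group’s. Below is a minimal sketch of that calculation, with entirely made-up numbers and hypothetical column names; real staggered-rollout studies use many units, multiple adoption dates and regression adjustments.

```python
import pandas as pd

# Hypothetical survey data: one row per respondent, recording whether
# the respondent's campus had received the "treatment" (e.g., platform
# access), the survey period, and a well-being score.
df = pd.DataFrame({
    "treated": [True, True, False, False, True, True, False, False],
    "period":  ["pre", "pre", "pre", "pre", "post", "post", "post", "post"],
    "score":   [3.1, 3.3, 3.2, 3.0, 2.6, 2.8, 3.1, 3.1],
})

# Difference-in-differences: subtracting the control group's change
# nets out trends that would have happened anyway, assuming the two
# groups would otherwise have moved in parallel.
means = df.groupby(["treated", "period"])["score"].mean()
effect = (means[True, "post"] - means[True, "pre"]) - (
          means[False, "post"] - means[False, "pre"])
print(f"Estimated treatment effect on well-being: {effect:+.2f}")
```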

That was the approach Makarin and his team used in their study of Facebook. The researchers homed in on the staggered rollout of Facebook across 775 college campuses from 2004 to 2006. They combined that rollout data with student responses to the National College Health Assessment, a widely used survey of college students’ mental and physical health.

The team then sought to understand if those survey questions captured diagnosable mental health problems. Specifically, they had roughly 500 undergraduate students respond to questions both in the National College Health Assessment and in validated screening tools for depression and anxiety. They found that mental health scores on the assessment predicted scores on the screenings. That suggested that a drop in well-being on the college survey was a good proxy for a corresponding increase in diagnosable mental health disorders. 
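That validation step boils down to showing that the two instruments move together. A toy version with simulated scores, not the study’s data, using SciPy:

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated paired scores for 500 respondents: a general well-being
# item from a broad health survey, and a validated depression screener
# (higher = more symptoms). Both series are invented for illustration.
rng = np.random.default_rng(0)
wellbeing = rng.normal(50, 10, size=500)
screener = 27 - 0.3 * wellbeing + rng.normal(0, 3, size=500)

# A strong correlation (negative here, since lower well-being should
# mean more symptoms) supports using the broad survey measure as a
# stand-in for diagnosable conditions in large samples.
r, p = pearsonr(wellbeing, screener)
print(f"r = {r:.2f}, p = {p:.1e}")
```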

Compared with campuses that had not yet gained access to Facebook, college campuses with Facebook experienced a 2 percentage point increase in the number of students who met the diagnostic criteria for anxiety or depression, the team found.

When it comes to showing a causal link between social media use in teens and worse mental health, “that study really is the crown jewel right now,” says Cunningham, who was not involved in that research.

A need for nuance

The social media landscape today is vastly different from the landscape of 20 years ago. Facebook is now optimized for maximum addiction, Shrum says, and other newer platforms, such as Snapchat, Instagram and TikTok, have since copied and built on those features. Paired with the ubiquity of social media in general, the negative effects on mental health may well be larger now.

Moreover, social media research tends to focus on young adults — an easier cohort to study than minors. That needs to change, Cunningham says. “Most of us are worried about our high school kids and younger.” 

And so, researchers must pivot accordingly. Crucially, simple comparisons of social media users and nonusers no longer make sense. As Orben and Przybylski’s 2022 work suggested, a teen not on social media might well feel worse than one who briefly logs on. 

Researchers must also dig into why, and under what circumstances, social media use can harm mental health, Cunningham says. Explanations for this link abound. For instance, social media is thought to crowd out other activities or increase people’s likelihood of comparing themselves unfavorably with others. But big data studies, with their reliance on existing surveys and statistical analyses, cannot address those deeper questions. “These kinds of papers, there’s nothing you can really ask … to find these plausible mechanisms,” Cunningham says.

One ongoing effort to understand social media use from this more nuanced vantage point is the SMART Schools project out of the University of Birmingham in England. Pedagogical expert Victoria Goodyear and her team are comparing mental and physical health outcomes among children who attend schools that have restricted cell phone use with outcomes among children attending schools without such a policy. The researchers described the protocol of that study of 30 schools and over 1,000 students in the July issue of BMJ Open.

Goodyear and colleagues are also combining that natural experiment with qualitative research. They met with 36 five-person focus groups, each made up entirely of students, parents or educators, at six of those schools. The team hopes to learn how students use their phones during the day, how usage practices make students feel, and what the various parties think of restrictions on cell phone use during the school day.

Talking to teens and those in their orbit is the best way to get at the mechanisms by which social media influences well-being — for better or worse, Goodyear says. Moving beyond big data to this more personal approach, however, takes considerable time and effort. “Social media has increased in pace and momentum very, very quickly,” she says. “And research takes a long time to catch up with that process.”

Until that catch-up occurs, though, researchers cannot dole out much advice. “What guidance could we provide to young people, parents and schools to help maintain the positives of social media use?” Goodyear asks. “There’s not concrete evidence yet.”


Misinformation, manipulation, and abuse on social media in the era of COVID-19

Published: 22 November 2020
Volume 3, pages 271–277 (2020)


Emilio Ferrara, Stefano Cresci & Luca Luceri


The COVID-19 pandemic represented an unprecedented setting for the spread of online misinformation, manipulation, and abuse, with the potential to cause dramatic real-world consequences. The aim of this special issue was to collect contributions investigating issues such as the emergence of infodemics, misinformation, conspiracy theories, automation, and online harassment at the onset of the coronavirus outbreak. Articles in this collection adopt a diverse range of methods and techniques, and focus on the study of the narratives that fueled conspiracy theories, on the diffusion patterns of COVID-19 misinformation, on global news sentiment, on hate speech and social bot interference, and on multimodal Chinese propaganda. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of the issues. In turn, these crucial endeavors might anticipate a growing trend of studies in which diverse theories, models, and techniques are combined to tackle the different aspects of online misinformation, manipulation, and abuse.


Introduction

Malicious and abusive behaviors on social media have elicited massive concerns for the negative repercussions that online activity can have on personal and collective life. The spread of false information [8, 14, 19] and propaganda [10], the rise of AI-manipulated multimedia [3], the presence of AI-powered automated accounts [9, 12], and the emergence of various forms of harmful content are just a few of the several perils that social media users can—even unconsciously—encounter in the online ecosystem. In times of crisis, these issues can only get more pressing, with increased threats for everyday social media users [20]. The ongoing COVID-19 pandemic is no exception and, due to dramatically increased information needs, represents the ideal setting for the emergence of infodemics—situations characterized by the undisciplined spread of information, including a multitude of low-credibility, fake, misleading, and unverified information [24]. In addition, malicious actors thrive in these wild situations and aim to take advantage of the resulting chaos. In such high-stakes scenarios, the downstream effects of misinformation exposure or information landscape manipulation can manifest in attitudes and behaviors with potentially dramatic public health consequences [4, 21].

By affecting the very fabric of our socio-technical systems, these problems are intrinsically interdisciplinary and require joint efforts to investigate and address both the technical aspects (e.g., how to thwart automated accounts and the spread of low-quality information, how to develop algorithms for detecting deception, automation, and manipulation) and the socio-cultural ones (e.g., why people believe in and share false news, how interference campaigns evolve over time) [7, 15]. Fortunately, in the case of COVID-19, several open datasets were promptly made available to foster research on these matters [1, 2, 6, 16]. Such assets bootstrapped the first wave of studies on the interplay between a global pandemic and online deception, manipulation, and automation.

Contributions

In light of the previous considerations, the purpose of this special issue was to collect contributions proposing models, methods, empirical findings, and intervention strategies to investigate and tackle the abuse of social media along several dimensions that include (but are not limited to) infodemics, misinformation, automation, online harassment, false information, and conspiracy theories about the COVID-19 outbreak. In particular, to protect the integrity of online discussions on social media, we aimed to stimulate contributions along two interlaced lines. On one hand, we solicited contributions to enhance the understanding of how health misinformation spreads, of the role of the social media actors that play a pivotal part in the diffusion of inaccurate information, and of the impact of their interactions with organic users. On the other hand, we sought to stimulate research on the downstream effects of misinformation and manipulation on users’ perception of, and reaction to, the wave of questionable information they are exposed to, and on possible strategies to curb the spread of false narratives. From ten submissions, we selected seven high-quality articles that provide important contributions toward curbing the spread of misinformation, manipulation, and abuse on social media. In the following, we briefly summarize each of the accepted articles.

The COVID-19 pandemic has been plagued by the pervasive spread of a large number of rumors and conspiracy theories, some of which have even led to dramatic real-world consequences. “Conspiracy in the Time of Corona: Automatic Detection of Emerging COVID-19 Conspiracy Theories in Social Media and the News” by Shahsavari, Holur, Wang, Tangherlini, and Roychowdhury uses a machine learning approach to automatically discover and investigate the narrative frameworks supporting such rumors and conspiracy theories [17]. The authors uncover how the various narrative frameworks rely on the alignment of otherwise disparate domains of knowledge, and how they attach to the broader reporting on the pandemic. These alignments and attachments are useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Moreover, identifying the narrative frameworks that provide the generative basis for these stories may also help devise methods for disrupting their spread.

The widespread diffusion of rumors and conspiracy theories during the outbreak is also analyzed in “Partisan Public Health: How Does Political Ideology Influence Support for COVID-19 Related Misinformation?” by Nicholas Havey. The author investigates how political leaning influences participation in the discourse around six COVID-19 misinformation narratives: 5G activating the virus, Bill Gates using the virus to implement a global surveillance project, the “Deep State” causing the virus, bleach and other disinfectants serving as ingestible protection against the virus, hydroxychloroquine being a valid treatment for the virus, and the Chinese Communist Party intentionally creating the virus [13]. Results show that conservative users dominated most of these discussions and pushed diverse conspiracy theories. The study further highlights how political and informational polarization might affect adherence to health recommendations and can, thus, have dire consequences for public health.

Figure 1: Network of web-page URLs shared on Twitter from January 16, 2020 to April 15, 2020 [18]. Each node represents a web-page URL, while connections indicate links among web pages. Purple nodes represent traditional news sources, orange nodes indicate low-quality and misinformation news sources, and green nodes represent authoritative health sources. Edges take the color of the source node, and node size scales with degree.

“Understanding High and Low Quality URL Sharing on COVID-19 Twitter Streams” by Singh, Bode, Budak, Kawintiranon, Padden, and Vraga investigates URL sharing patterns during the pandemic for different categories of websites [18]. Specifically, the authors categorize URLs as related to traditional news outlets, authoritative health sources, or low-quality and misinformation news sources. Then, they build networks of shared URLs (see Fig. 1). They find that both authoritative health sources and low-quality/misinformation ones are shared much less than traditional news sources. However, COVID-19 misinformation is shared at a higher rate than news from authoritative health sources. Moreover, the COVID-19 misinformation network appears to be dense (i.e., tightly connected) and disassortative. These results can pave the way for future intervention strategies aimed at fragmenting the networks responsible for the spread of misinformation.
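Both properties are straightforward to compute once the URL network is assembled. A small sketch with invented pages and links, not the paper’s data, using NetworkX:

```python
import networkx as nx

# Toy undirected network in the spirit of the URL-sharing analysis:
# nodes are web pages, edges are links between them. All domains here
# are placeholders.
G = nx.Graph()
G.add_edges_from([
    ("misinfo-a.example", "misinfo-b.example"),
    ("misinfo-a.example", "misinfo-c.example"),
    ("misinfo-b.example", "misinfo-c.example"),
    ("news-1.example", "misinfo-a.example"),
    ("news-1.example", "health-1.example"),
])

# Density: fraction of all possible edges that actually exist
# (1.0 would be a fully connected network).
print("density:", nx.density(G))

# Degree assortativity: a negative coefficient means high-degree nodes
# tend to connect to low-degree nodes, i.e., the network is disassortative.
print("assortativity:", nx.degree_assortativity_coefficient(G))
```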

The relationship between news sentiment and real-world events is a long-studied matter with serious repercussions for agenda setting and (mis-)information spreading. In “Around the world in 60 days: An exploratory study of impact of COVID-19 on online global news sentiment”, Chakraborty and Bose explore this relationship for a large set of worldwide news articles published during the COVID-19 pandemic [5]. They apply unsupervised and transfer learning-based sentiment analysis techniques and explore correlations between news sentiment scores and the global and local numbers of infected people and deaths. Case studies are also conducted for specific countries, such as China, the US, Italy, and India. The results of the study help identify the key drivers of negative news sentiment during an infodemic, as well as the communication strategies that were used to curb negative sentiment.

Farrell, Gorrell, and Bontcheva investigate one of the most damaging sides of online malicious content: online abuse and hate speech. In “Vindication, Virtue and Vitriol: A study of online engagement and abuse toward British MPs during the COVID-19 Pandemic”, they adopt a mixed-methods approach to analyze citizen engagement with British MPs’ online communications during the pandemic [11]. Among their findings is that certain pressing topics, such as financial concerns, attract the highest levels of engagement, though not necessarily negative engagement. Other topics, such as criticism of authorities and subjects like racism and inequality, instead tend to attract higher levels of abuse, depending on factors such as ideology, authority, and affect.

Yet another aspect of online manipulation—automation and social bot interference—is tackled by Uyheng and Carley in their article “Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines” [22]. Using a combination of machine learning and network science, the authors investigate the interplay between the use of social media automation and the spread of hateful messages. They find that social bot activity has a greater effect when it targets dense and isolated communities. While the majority of extant literature frames hate speech as a linguistic phenomenon and, similarly, social bots as an algorithmic one, Uyheng and Carley adopt a more holistic approach, proposing a unified framework that accounts for disinformation, automation, and hate speech as interlinked processes and generating insights by examining their interplay. The study also reflects on the value of taking a global approach to computational social science, particularly in the context of a worldwide pandemic and infodemic, with its universal yet also distinct and unequal impacts on societies.

It has now become clear that text is not the only vehicle for online misinformation and propaganda [10]. Images, such as those used in memes, are increasingly being weaponized for this purpose. Based on this evidence, Wang, Lee, Wu, and Shen investigate US-targeted Chinese COVID-19 propaganda, which relies heavily on text images [23]. In their article “Influencing Overseas Chinese by Tweets: Text-Images as the Key Tactic of Chinese Propaganda”, they tracked thousands of Twitter accounts involved in the #USAVirus propaganda campaign. A large percentage (roughly 38%) of those accounts were later suspended by Twitter as part of its efforts to counter information operations (footnote 1). The authors studied the behavior and content production of the suspended accounts. They also experimented with different statistical and machine learning models to understand which account characteristics most determined suspension by Twitter, finding that the repeated use of text images played a crucial part.
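One plausible shape for that last analysis is a logistic regression of suspension status on account-level features. The sketch below is illustrative only: the feature names, the simulated effect of text-image use and the data are all hypothetical, not the paper’s model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical account features: share of an account's tweets that are
# text images, and its posting rate.
text_image_frac = rng.uniform(0, 1, n)
tweets_per_day = rng.exponential(20, n)

# Simulated labels in which heavy text-image use raises suspension odds.
logits = 4 * text_image_frac + 0.02 * tweets_per_day - 3
suspended = rng.uniform(size=n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([text_image_frac, tweets_per_day])
model = LogisticRegression().fit(X, suspended)

# A large positive coefficient on the text-image feature would mirror
# the paper's finding that repeated text-image use predicted suspension.
print(dict(zip(["text_image_frac", "tweets_per_day"], model.coef_[0])))
```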

Overall, the great interest in the COVID-19 infodemic and, more broadly, in research themes such as online manipulation, automation, and abuse, combined with the growing risks of future infodemics, makes this special issue a timely endeavor that will contribute to the future development of this crucial area. Given the recent advances and breadth of the topic, as well as the level of interest in related events that followed this special issue—such as dedicated panels, webinars, conferences, workshops, and other special issues in journals—we are confident that the articles selected in this collection will be both highly informative and thought provoking for readers. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of the issues, which demand renewed and joint efforts from different computer science fields, as well as from related disciplines such as the social, political, and psychological sciences. In this regard, the articles in this collection attest to and anticipate a growing trend of interdisciplinary studies in which diverse theories, models, and techniques are combined to tackle the different aspects at the core of online misinformation, manipulation, and abuse.

Footnote 1: https://blog.twitter.com/en_us/topics/company/2020/information-operations-june-2020.html

Alqurashi, S., Alhindi, A., & Alanazi, E. (2020). Large Arabic Twitter dataset on COVID-19. arXiv preprint arXiv:2004.04315 .

Banda, J.M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., & Chowell, G. (2020). A large-scale COVID-19 Twitter chatter dataset for open scientific research—An international collaboration. arXiv preprint arXiv:2004.03688 .

Boneh, D., Grotto, A. J., McDaniel, P., & Papernot, N. (2019). How relevant is the Turing test in the age of sophisbots? IEEE Security & Privacy, 17 (6), 64–71.


Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., et al. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108 (10), 1378–1384.

Chakraborty, A., & Bose, S. (2020). Around the world in sixty days: An exploratory study of impact of COVID-19 on online global news sentiment. Journal of Computational Social Science .

Chen, E., Lerman, K., & Ferrara, E. (2020). Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance, 6 (2), e19273.

Ciampaglia, G. L. (2018). Fighting fake news: A role for computational social science in the fight against digital misinformation. Journal of Computational Social Science, 1 (1), 147–153.

Cinelli, M., Cresci, S., Galeazzi, A., Quattrociocchi, W., & Tesconi, M. (2020). The limited reach of fake news on Twitter during 2019 European elections. PLoS One, 15 (6), e0234689.

Cresci, S. (2020). A decade of social bot detection. Communications of the ACM, 63 (10), 61–72.

Da San Martino, G., Cresci, S., Barrón-Cedeño, A., Yu, S., Di Pietro, R., & Nakov, P. (2020). A survey on computational propaganda detection. In: The 29th International Joint Conference on Artificial Intelligence (IJCAI’20), pp. 4826–4832.

Farrell, T., Gorrell, G., & Bontcheva, K. (2020). Vindication, virtue and vitriol: A study of online engagement and abuse toward British MPs during the COVID-19 Pandemic. Journal of Computational Social Science .

Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59 (7), 96–104.

Havey, N. (2020). Partisan public health: How does political ideology influence support for COVID-19 related misinformation?. Journal of Computational Social Science .

Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., et al. (2018). The science of fake news. Science, 359 (6380), 1094–1096.

Luceri, L., Deb, A., Giordano, S., & Ferrara, E. (2019). Evolution of bot and human behavior during elections. First Monday, 24(9).


Qazi, U., Imran, M., & Ofli, F. (2020). GeoCoV19: a dataset of hundreds of millions of multilingual COVID-19 tweets with location information. ACM SIGSPATIAL Special, 12 (1), 6–15.

Shahsavari, S., Holur, P., Wang, T., Tangherlini, T. R., & Roychowdhury, V. (2020). Conspiracy in the time of corona: Automatic detection of emerging COVID-19 conspiracy theories in social media and the news. Journal of Computational Social Science .

Singh, L., Bode, L., Budak, C., Kawintiranon, K., Padden, C., & Vraga, E. (2020). Understanding high and low quality URL sharing on COVID-19 Twitter streams. Journal of Computational Social Science .

Starbird, K. (2019). Disinformation’s spread: Bots, trolls and all of us. Nature, 571 (7766), 449–450.

Starbird, K., Dailey, D., Mohamed, O., Lee, G., & Spiro, E.S. (2018). Engage early, correct more: How journalists participate in false rumors online during crisis events. In: Proceedings of the 2018 ACM CHI Conference on Human Factors in Computing Systems (CHI’18), pp. 1–12. ACM.

Swire-Thompson, B., & Lazer, D. (2020). Public health and online misinformation: challenges and recommendations. Annual Review of Public Health, 41 , 433–451.

Uyheng, J., & Carley, K. M. (2020). Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines. Journal of Computational Social Science .

Wang, A. H. E., Lee, M. C., Wu, M. H., & Shen, P. (2020). Influencing overseas Chinese by tweets: Text-Images as the key tactic of Chinese propaganda. Journal of Computational Social Science .

Zarocostas, J. (2020). How to fight an infodemic. The Lancet, 395 (10225), 676.


Author information

Authors and Affiliations

University of Southern California, Los Angeles, CA, 90007, USA

Emilio Ferrara

Institute of Informatics and Telematics, National Research Council (IIT-CNR), 56124, Pisa, Italy

Stefano Cresci

University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Manno, Switzerland

Luca Luceri


Corresponding author

Correspondence to Emilio Ferrara.


About this article

Ferrara, E., Cresci, S., & Luceri, L. (2020). Misinformation, manipulation, and abuse on social media in the era of COVID-19. Journal of Computational Social Science, 3, 271–277. https://doi.org/10.1007/s42001-020-00094-5


Keywords: Misinformation · Social bots · Social media

Social Media, Freedom of Speech, and the Future of our Democracy

Regulating Harmful Speech on Social Media: The Current Legal Landscape and Policy Proposals

Published: August 2022

Social media platforms have transformed how we communicate with one another. They allow us to talk easily and directly to countless others at lightning speed, with no filter and essentially no barriers to transmission. With their enormous user bases and proprietary algorithms that are designed both to promote popular content and to display information based on user preferences, they far surpass any historical antecedents in their scope and power to spread information and ideas.

The benefits of social media platforms are obvious and enormous. They foster political and public discourse and civic engagement in the United States and around the world.[1] Social media platforms give voice to marginalized individuals and groups, allowing them to organize, offer support, and hold powerful people accountable.[2] And they allow individuals to communicate with and form communities with others who share their interests but might otherwise have remained disconnected from one another.

At the same time, social media platforms, with their directness, immediacy, and lack of a filter, enable harmful speech to flourish—including wild conspiracy theories, deliberately false information, foreign propaganda, and hateful rhetoric. The platforms’ algorithms and massive user bases allow such “harmful speech” to be disseminated to millions of users at once and then shared by those users at an exponential rate. This widespread and frictionless transmission of harmful speech has real-world consequences. Conspiracy theories and false information spread on social media have helped sow widespread rejection of COVID-19 public-health measures[3] and fueled the lies about the 2020 US presidential election and its result.[4] Violent, racist, and anti-Semitic content on social media has played a role in multiple mass shootings.[5] Social media have also facilitated speech targeted at specific individuals, including doxing (the dissemination of private information, such as home addresses, for malevolent purposes) and other forms of harassment, including revenge porn and cyberbullying.


Why AI Struggles To Recognize Toxic Speech on Social Media

Automated speech police can score highly on technical tests but miss the mark with people, new research shows.

Facebook says its artificial intelligence models identified and pulled down 27 million pieces of hate speech in the final three months of 2020. In 97 percent of the cases, the systems took action before humans had even flagged the posts.

That’s a huge advance, and all the other major social media platforms are using AI-powered systems in similar ways. Given that people post hundreds of millions of items every day, from comments and memes to articles, there’s no real alternative. No army of human moderators could keep up on its own.

But a team of human-computer interaction and AI researchers at Stanford sheds new light on why automated speech police can score highly on technical tests yet provoke a lot of dissatisfaction from humans with their decisions. The main problem: there is a huge difference between evaluating more traditional AI tasks, like recognizing spoken language, and the much messier task of identifying hate speech, harassment, or misinformation, especially in today’s polarized environment.

Read the study:  The Disagreement Deconvolution: Bringing Machine Learning Performance Metrics In Line With Reality

“It appears as if the models are getting almost perfect scores, so some people think they can use them as a sort of black box to test for toxicity,’’ says Mitchell Gordon, a PhD candidate in computer science who worked on the project. “But that’s not the case. They’re evaluating these models with approaches that work well when the answers are fairly clear, like recognizing whether ‘java’ means coffee or the computer language, but these are tasks where the answers are not clear.”

The team hopes their study will illuminate the gulf between what developers think they’re achieving and the reality — and perhaps help them develop systems that grapple more thoughtfully with the inherent disagreements around toxic speech.

Too Much Disagreement

There are no simple solutions, because there will never be unanimous agreement on highly contested issues. Making matters more complicated, people are often ambivalent and inconsistent about how they react to a particular piece of content.

In one study, for example, human annotators rarely reached agreement when they were asked to label tweets that contained words from a lexicon of hate speech. Only 5 percent of the tweets were acknowledged by a majority as hate speech, while only 1.3 percent received unanimous verdicts. In a study on recognizing misinformation, in which people were given statements about purportedly true events, only 70 percent agreed on whether most of the events had or had not occurred.

Despite this challenge for human moderators, conventional AI models achieve high scores on recognizing toxic speech: .95 ROCAUC, a popular metric for evaluating AI models in which 0.5 means pure guessing and 1.0 means perfect performance. But the Stanford team found that the real score is much lower, at most .73, if you factor in the disagreement among human annotators.
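
To see how disagreement caps those scores, here is a minimal sketch in Python (using scikit-learn, with synthetic illustrative data rather than the study's): a model that looks near-perfect against majority-vote labels scores far lower when judged against individual annotators who genuinely disagree.

```python
# Illustrative sketch only: synthetic data, not the study's.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000  # posts

# For each post, the (unobserved) share of annotators who would call it
# toxic. Beta(0.5, 0.5) makes most posts lean one way but few unanimous.
p_toxic = rng.beta(0.5, 0.5, size=n)

# Conventional evaluation target: the majority-vote label.
majority = (p_toxic > 0.5).astype(int)

# A model whose score tracks the underlying rate looks near-perfect
# against the majority vote.
score = p_toxic + rng.normal(0, 0.05, size=n)
print("AUC vs. majority vote:", round(roc_auc_score(majority, score), 3))

# Scored against individual annotators' (disagreeing) labels, the very
# same model cannot reach that ceiling.
k = 5  # annotators per post
labels = (rng.random((n, k)) < p_toxic[:, None]).astype(int)
print("AUC vs. individuals:",
      round(roc_auc_score(labels.ravel(), np.repeat(score, k)), 3))
```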

Reassessing the Models

In a new study, the Stanford team reassesses the performance of today’s AI models by getting a more accurate measure of what people truly believe and how much they disagree among themselves.

The study was overseen by  Michael Bernstein  and  Tatsunori Hashimoto , associate and assistant professors of computer science and faculty members of the  Stanford Institute for Human-Centered Artificial Intelligence  (HAI). In addition to Gordon, Bernstein, and Hashimoto, the paper’s co-authors include Kaitlyn Zhou, a PhD candidate in computer science, and Kayur Patel, a researcher at Apple Inc.

To get a better measure of real-world views, the researchers developed an algorithm to filter out the “noise” (ambivalence, inconsistency, and misunderstanding) from how people label things like toxicity, leaving an estimate of the amount of true disagreement. They focused on how consistently each annotator labeled the same kind of language in the same way. The most consistent or dominant responses became what the researchers call “primary labels,” which they then used to build a more precise dataset capturing more of the true range of opinions about potentially toxic content.
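
The team's published deconvolution procedure is more involved than this, but a toy sketch conveys the core intuition behind those primary labels: keep each annotator's most consistent (modal) response to repeated labeling and treat the rest as noise. All data below are invented for illustration.

```python
# Toy sketch of the "primary label" idea; names and data are illustrative,
# not the paper's actual procedure.
from collections import Counter

# (annotator, item) -> repeated labels from that annotator (1 = toxic)
repeated = {
    ("ann_1", "post_7"): [1, 1, 0, 1],  # mostly "toxic", one slip
    ("ann_2", "post_7"): [0, 0, 0],     # consistently "not toxic"
    ("ann_3", "post_7"): [1, 0, 1],
}

def primary_label(labels):
    """The annotator's modal response: their most consistent answer."""
    return Counter(labels).most_common(1)[0][0]

primaries = {key: primary_label(v) for key, v in repeated.items()}
print(primaries)
# {('ann_1', 'post_7'): 1, ('ann_2', 'post_7'): 0, ('ann_3', 'post_7'): 1}

# The distribution of primary labels per item estimates true disagreement:
per_item = [lab for (ann, item), lab in primaries.items() if item == "post_7"]
print("share calling post_7 toxic:", sum(per_item) / len(per_item))  # ~0.67
```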

The team then used that approach to refine datasets that are widely used to train AI models to spot toxicity, misinformation, and pornography. Applying existing AI metrics to these new “disagreement-adjusted” datasets revealed far weaker performance in each category. Instead of getting nearly perfect scores on all fronts, the AI models achieved only .73 ROCAUC in classifying toxicity and 62 percent accuracy in labeling misinformation. Even for pornography, as in “I know it when I see it,” the accuracy was only .79.

Someone Will Always Be Unhappy. The Question Is Who?

Gordon says AI models, which must ultimately make a single decision, will never assess hate speech or cyberbullying to everybody’s satisfaction. There will always be vehement disagreement. Giving human annotators more precise definitions of hate speech may not solve the problem either, because people end up suppressing their real views in order to provide the “right” answer.

But if social media platforms have a more accurate picture of what people really believe, as well as which groups hold particular views, they can design systems that make more informed and intentional decisions.

In the end, Gordon suggests, annotators as well as social media executives will have to make value judgments with the knowledge that many decisions will always be controversial.

“Is this going to resolve disagreements in society? No,” says Gordon. “The question is what can you do to make people less unhappy. Given that you will have to make some people unhappy, is there a better way to think about whom you are making unhappy?”


Why AI Struggles To Recognize Toxic Speech on Social Media, by Edmund L. Andrews, Human-Centered Artificial Intelligence, July 13, 2021. Via: hai.stanford.edu

Hate speech in social media: How platforms can do better

  • Morgan Sherburne

With all of the resources, power and influence they possess, social media platforms could and should do more to detect hate speech, says a University of Michigan researcher.


In a report from the Anti-Defamation League, Libby Hemphill, an associate research professor at U-M’s Institute for Social Research and an ADL Belfer Fellow, explores social media platforms’ shortcomings when it comes to white supremacist speech and how it differs from general or nonextremist speech, and recommends ways to improve automated hate speech identification methods.

“We also sought to determine whether and how white supremacists adapt their speech to avoid detection,” said Hemphill, who is also a professor at U-M’s School of Information. “We found that platforms often miss discussions of conspiracy theories about white genocide and Jewish power and malicious grievances against Jews and people of color. Platforms also let decorous but defamatory speech persist.”

How platforms can do better

White supremacist speech is readily detectable, Hemphill says, detailing the ways it is distinguishable from commonplace speech in social media, including:

  • Frequently referencing racial and ethnic groups using plural noun forms (whites, etc.)
  • Appending “white” to otherwise unmarked terms (e.g., power)
  • Using less profanity than is common in social media to elude detection based on “offensive” language
  • Being congruent on both extremist and mainstream platforms
  • Keeping complaints and messaging consistent from year to year
  • Describing Jews in racial, rather than religious, terms

“Given the identifiable linguistic markers and consistency across platforms, social media companies should be able to recognize white supremacist speech and distinguish it from general, nontoxic speech,” Hemphill said.

The research team used commonly available computing resources, existing algorithms from machine learning and dynamic topic modeling to conduct the study.

“We needed data from both extremist and mainstream platforms,” said Hemphill, noting that mainstream user data comes from Reddit and extremist website user data comes from Stormfront.

What should happen next?

Even though the research team found that white supremacist speech is identifiable and consistent, social media platforms, even with more sophisticated computing capabilities and additional data, still miss a lot of it and struggle to distinguish nonprofane, hateful speech from profane, innocuous speech.

“Leveraging more specific training datasets, and reducing their emphasis on profanity can improve platforms’ performance,” Hemphill said.
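One plausible reading of that recommendation, sketched here with scikit-learn: drop profanity from the classifier's vocabulary so the model is forced to learn from substantive markers like those listed above. The tiny word list and training examples are placeholders, not the report's data or method.

```python
# Sketch only: placeholder data and word list, not the report's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

PROFANITY = ["damn", "hell"]  # stand-in list; real lexicons are far longer

texts = [
    "they are destroying our people, stand up whites",  # hateful, no profanity
    "damn this traffic, what a hell of a commute",      # profane, innocuous
    "defend white power against them",                  # hateful, no profanity
    "lovely weather today",                             # innocuous
]
labels = [1, 0, 1, 0]  # 1 = hateful

# Treating profanity as stop words removes it from the vocabulary, so the
# model must rely on substantive markers (plural group nouns, "white" +
# otherwise unmarked terms) rather than on "offensive" language.
model = make_pipeline(
    TfidfVectorizer(stop_words=PROFANITY),
    LogisticRegression(),
)
model.fit(texts, labels)

# Profanity without hateful content: likely predicts [0] (not hateful).
print(model.predict(["hell of a damn game last night"]))
```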

The report recommends that social media platforms: 1) enforce their own rules; 2) use data from extremist sites to create detection models; 3) look for specific linguistic markers; 4) deemphasize profanity in toxicity detection; and 5) train moderators and algorithms to recognize that white supremacists’ conversations are dangerous and hateful.

“Social media platforms can enable social support, political dialogue and productive collective action. But the companies behind them have civic responsibilities to combat abuse and prevent hateful users and groups from harming others,” Hemphill said. “We hope these findings and recommendations help platforms fulfill these responsibilities now and in the future.”

More information:

  • Report: Very Fine People: What Social Media Platforms Miss About White Supremacist Speech
  • Related: Video: ISR Insights Speaker Series: Detecting white supremacist speech on social media
  • Podcast: Data Brunch Live! Extremism in Social Media


How should social media platforms combat misinformation and hate speech?

Niam Yaraghi, Nonresident Senior Fellow, Governance Studies, Center for Technology Innovation (@niamyaraghi)

April 9, 2019

Social media companies are under increased scrutiny for their mishandling of hateful speech and fake news on their platforms. There are two ways to consider a social media platform: on one hand, we can view platforms as technologies that merely enable individuals to publish and share content, a figurative blank sheet of paper on which anyone can write anything. On the other hand, one can argue that social media platforms have now evolved into curators of content. I argue that these companies should take some responsibility for the content published on their platforms, and I suggest a set of strategies to help them deal with fake news and hate speech.

Artificial and Human Intelligence together

At the outset, social media companies positioned themselves as bearing no accountability for the content published on their platforms. In the intervening years, they have set up a mix of automated and human-driven editorial processes to promote or filter certain types of content. In addition, their users increasingly treat these platforms as their primary source of news. Twitter Moments, in which you can see a brief snapshot of the daily news, is a prime example of how Twitter is getting closer to becoming a news medium. As social media practically become news media, their level of responsibility over the content they distribute should increase accordingly.

While I believe it is naïve to consider social media as merely neutral content-sharing technologies with no responsibility, I also do not believe we should hold social media to the same editorial expectations as traditional news media.

The sheer volume of content shared on social media makes it impossible to establish a comprehensive editorial system. Take Twitter as an example: it is estimated that 500 million tweets are sent per day. Assuming that each tweet contains 20 words on average, the volume of content published on Twitter in a single day is equivalent to that of the New York Times over 182 years. The terminology and focus of hate speech change over time, and most fake news articles contain some kernel of truth. Therefore, social media companies cannot rely solely on artificial intelligence or on humans to monitor and edit their content. They should instead develop approaches that combine artificial and human intelligence.
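
That comparison implicitly assumes a daily New York Times issue runs to roughly 150,000 words, a figure the article does not state. A quick back-of-the-envelope check under that assumption:

```python
# Back-of-the-envelope check of the Twitter-vs-NYT comparison. The NYT
# figure of ~150,000 words per daily issue is our assumption, not the text's.
tweets_per_day = 500_000_000
words_per_tweet = 20
nyt_words_per_day = 150_000  # assumed

twitter_words_per_day = tweets_per_day * words_per_tweet  # 10 billion
nyt_days = twitter_words_per_day / nyt_words_per_day      # ~66,667 issues
print(f"{nyt_days / 365:.0f} years of the New York Times")
# ~183 years, consistent with the article's figure of 182
```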

Finding the needle in a haystack

To overcome the editorial challenges of so much content, I suggest that the companies focus on a limited number of topics deemed important and carrying significant consequences. The anti-vaccination movement and those who believe in flat-earth theory are both spreading anti-scientific and fake content. However, the consequences of believing that vaccines cause harm are far more dangerous than those of believing that the earth is flat. The former creates serious public health problems; the latter makes for a good laugh at a bar. Social media companies should convene groups of experts in various domains to constantly monitor the major topics in which fake news or hate speech may cause serious harm.

It is also important to consider how recommendation algorithms on social media platforms may inadvertently promote fake and hateful speech. At their core, these recommendation systems group users based on their shared interests and then promote the same type of content to all users within each group. If most of the users in one group have interests in, say, flat-earth theory and anti-vaccination hoaxes, then the algorithm will promote the anti-vaccination content to users in the same group who may only be interested in flat-earth theory. Over time, exposure to such promoted content could persuade users who initially believed in vaccines to become skeptical of them. Once the major areas of focus for combating fake and hateful speech are determined, social media companies can tweak their recommendation systems fairly easily so that they do not nudge users toward harmful content.
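
A toy sketch of the group-based logic described above, with the proposed tweak applied: shared interests drive cross-promotion, and topics on a curated harmful list are excluded from recommendations. All names and topics are invented for illustration.

```python
# Toy sketch: group-based recommendation with harmful topics filtered out.
# All names are illustrative.
HARMFUL_TOPICS = {"anti-vaccination"}  # curated by the expert panels above

users = {
    "alice": {"flat-earth"},
    "bob":   {"flat-earth", "anti-vaccination"},
    "carol": {"gardening"},
}

def group_of(user):
    """Users sharing at least one interest form a group (crude clustering)."""
    mine = users[user]
    return [u for u, topics in users.items() if u != user and mine & topics]

def recommend(user):
    """Promote groupmates' topics, minus anything on the harmful list."""
    suggestions = set()
    for peer in group_of(user):
        suggestions |= users[peer]
    return (suggestions - users[user]) - HARMFUL_TOPICS

# bob's anti-vaccination interest is never cross-promoted to alice:
print(recommend("alice"))  # -> set()
```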

Once that limited set of topics is identified, social media companies should decide how to fight the spread of such content. In rare instances, the most appropriate response is to censor and ban the content without hesitation. Examples include posts that incite violence or invite others to commit crimes. The recent New Zealand incident, in which the shooter live-streamed his heinous crimes on Facebook, is a prime example of content that should never have been allowed to be posted and shared on the platform.

Facebook currently relies on its community of users to flag such content and then uses an army of human reviewers, who assess flagged items within 24 hours to determine whether they actually violate its terms of use. Live content is monitored by humans once it reaches a certain level of popularity. While it is easier to use artificial intelligence to monitor textual content in real time, our technologies for analyzing images and videos are quickly advancing. For example, Yahoo! has recently made public its algorithms for detecting offensive and adult images. Facebook’s AI algorithms are getting smart enough to detect and flag non-consensual intimate images.

Fight misinformation with information

Currently, social media companies have adopted two approaches to fight misinformation. The first is to block such content outright. For example, Pinterest bans anti-vaccination content and Facebook bans white supremacist content. The other is to provide alternative information alongside the fake content so that users are exposed to the truth and correct information. This approach, implemented by YouTube, encourages users to click on links with verified and vetted information that debunks the misguided claims made in fake or hateful content. If you search “Vaccines cause autism” on YouTube, you can still view the videos posted by anti-vaxxers, but you will also be presented with a link to the Wikipedia page on the MMR vaccine that debunks such beliefs.
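
In effect, this second approach attaches vetted counter-information to known misleading queries rather than blocking them. A minimal sketch, with a hypothetical lookup table:

```python
# Minimal sketch of "fight misinformation with information": known misleading
# queries get a vetted reference attached rather than being blocked.
# The lookup table is hypothetical.
DEBUNK_LINKS = {
    "vaccines cause autism": "https://en.wikipedia.org/wiki/MMR_vaccine",
    "flat earth proof": "https://en.wikipedia.org/wiki/Spherical_Earth",
}

def search_results(query, results):
    """Return results unchanged, prepending a fact panel when one exists."""
    panel = DEBUNK_LINKS.get(query.lower().strip())
    if panel:
        results = [f"[Fact check] {panel}"] + results
    return results

print(search_results("Vaccines cause autism", ["video_1", "video_2"]))
```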

While we have yet to empirically examine and compare the effectiveness of these alternative approaches, I prefer to present users with the real information, exposing them to reliable sources so that they can become informed and willingly abandon their misguided beliefs. Even if some ideas have only a short-lived impact, a diversity of ideas will ultimately move us forward by enriching our discussions. Social media companies may be able to censor content online, but they cannot control how ideas spread offline. Unless individuals are presented with counterarguments, falsehoods and hateful ideas will spread easily, as they did in the past, before social media existed.


Kate has faced years of abuse on social media. She says it's time platforms did something about it


Kate doesn't feel comfortable including her real name in this article – and that's telling. 

Since her early teens, the 23-year-old Queensland woman has experienced so much harassment and abuse online, she's become very wary of what she'd face for speaking out about it.

On digital platforms including Facebook, Instagram, Tinder and Snapchat, Kate has received unwanted and unsolicited naked photos and videos, persistent direct messages from individuals and verbal abuse from male platform users.

"Physical distance doesn't protect you online," she says.

"I don't feel safe anymore ever posting or commenting publicly about things … I'm always worried."

Kate's not alone. A 2020 Plan International survey showed that in Australia, 65 per cent of girls and young women reported being harassed or abused online.

And it's not just the harassment targeted at Kate that makes her uncomfortable.

She says there's a "constant build-up" of "misogynistic [or] violent" content on social platforms, including in the comments sections of other posts.

"You scroll through your newsfeed and everything you see is degrading," she says.

"There is a harm in constantly having to see terrible content."

Kate wants social media platforms to do more to protect their users.

Alice Marwick, associate professor at the University of North Carolina's Center for Information, Technology and Public Life, says platforms have a "huge … social responsibility" to do so.


Dr Marwick says it's incumbent upon companies who create a space for users to congregate and communicate online to "understand the harms and impacts of their features" and to better deal with them.

And she says, not only do platforms need to take better action when abuse occurs, they must take measures to prevent it happening in the first place.

How much are platforms doing to protect users?

For years, platforms have been under pressure from many different quarters to do more to tackle abuse perpetrated through their networks. 

Technology companies, with some exceptions, are generally not legally liable for their users' content and behaviour.

But Dr Marwick, who specialises in the social implications of social media technologies, says platforms still have a "moral and ethical responsibility" to their users.

Rachel Haas, vice president of member safety at the dating app Bumble, agrees.

In what she says is an industry first, her organisation has this year partnered with global gender-based violence support service Chayn to offer online trauma support to Bumble users who experience sexual assault or relationship abuse through connections made using the app.

Ms Haas says the partnership means users who experience harm will have access to therapists and, "in very sensitive cases", up to six therapy sessions.


Just a few months in, the impact of the partnership on Bumble users is unclear. But it demonstrates a social platform making moves to address the needs of abuse victims and survivors.

Dr Marwick says Twitter, where harassment of users has historically been a significant issue, has also implemented new safety features, including greater control over who replies to your tweets, and muting and blocking certain content more easily.

Kara Hinesley, public policy director at Twitter Australia and New Zealand, says that 65 per cent of abusive content that Twitter takes action on today is identified "proactively using technology instead of relying solely on reports from people using Twitter".

Ms Hinesley also points to new tools like Safety Mode, which automatically blocks accounts for seven days for using potentially harmful language or sending repetitive and uninvited replies or mentions. It is currently being tested.

The platform still has "a ways to go", Dr Marwick says, but it has made significant progress, particularly relative to other platforms.

"Sites like Telegram, Gab, Reddit, TikTok – they're kind of wild west-y right now," Dr Marwick says.

"A lot of these spaces just don't have robust mechanisms put into place."

Challenges in identifying abuse online

Understanding what constitutes online abuse can be difficult.


When, several years ago, Kate was being inundated with daily late-night messages and voice recordings from a man on Facebook, she was hit with a dilemma.

"I didn't know how to deal with it at the time. It made me [feel] unsafe, but it sort of seemed innocuous, so I didn't really say anything about it," she says.

Kate argues that social platforms don't make it any easier.

She says some online behaviour that has made her uncomfortable, such as when a man accessed and shared photos of her exercising from one of her social media accounts, felt "really gross", but she thought "technically [the man] hadn't done anything that would violate the platform's rules".

Unlike explicit harassment, the photo-sharing or late-night messages were not clearly harmful.

This is part of what can make harassment and abuse difficult to identify. Harmful interactions can sometimes resemble healthy communication between friends or loved ones. That's why many of the behaviours that make women feel unsafe may go undetected.

Abusive speech can also be difficult to identify, in part because it's become so normalised, Dr Marwick says.

"Misogynistic speech is so much of a daily part of life that women just learn to accept it to be on the platform," she says.

There are additional challenges in stamping out abuse online.

Dr Marwick acknowledges that the enormous scale of content social media platforms contend with makes oversight difficult.

"These just are not human-scaled platforms at this point," she says. "Furthermore, they cover hundreds of languages and cultures," she says.

Protecting users from harm and abuse is therefore "complicated and expensive".

But that doesn't mean platforms can throw up their hands.

"Platforms have to be willing to invest in harassment resources in many different languages in many different spaces, and to understand how these things differ from country to country, from culture to culture," Dr Marwick says.

The funds exist for many companies. "These are incredibly profitable companies. These aren't companies that are barely scraping by," Dr Marwick says.

"Mark Zuckerberg isn't bootstrapping, living in his parents' basement. These are some of the most profitable [and] successful companies in the world that pay undergrads right out of college [around] $200,000 a year.

"There needs to be more money spent on human content moderation, and especially human content moderation in non-English speaking context," she says.

Solutions needed 'from the ground up'

Moderating existing content isn't enough to target abuse online, Dr Marwick says.

She'd like to see platforms incorporate the means to proactively mitigate harassment into their design.

"We need to build ways to deal with harassment into the platform from the ground up, rather than trying to implement it after the fact," she says.

She says any new platform being built today should be thinking about harassment as they build.

Kate hopes that's the case, and that more digital platforms start engaging with women's safety experts and users with diverse experiences.

"I think [platforms] just don't understand how women experience harm," she says.

"You can't possibly fathom how these platforms are going to be harmful in different ways without speaking to people who are impacted by them," she says.

Dr Rosalie Gillett is a postdoctoral research fellow at the Queensland University of Technology and an ABC Top 5 Humanities scholar for 2021. 


Evaluate Two Ways Through Which Social Media Can Be Abused Or Manipulated

Social media is now a crucial part of our daily lives, linking billions of people worldwide. It allows us to share our thoughts, stay informed, and communicate instantly. However, not everything on social media is positive. It can also be abused or manipulated in harmful ways. In this blog, we’ll evaluate two ways through which social media can be abused or manipulated: spreading misinformation and disinformation, and online harassment and cyberbullying.

How Is Social Media Being Abused?


Social media is being abused in various ways, including:

  • Spreading Misinformation and Disinformation: False or misleading information is intentionally shared to deceive or manipulate others, often for political, financial, or social reasons.
  • Online Harassment and Cyberbullying: Individuals use social media platforms to harass, intimidate, or threaten others, leading to emotional distress, mental health issues, and sometimes even physical harm.
  • Manipulating Public Opinion: Social media can be used to sway public opinion on various issues through the dissemination of biased or misleading content, affecting perceptions and behaviors.
  • Privacy Violations: Personal information shared on social media can be exploited for malicious purposes, such as identity theft, stalking, or targeted advertising without consent.
  • Promoting Hate Speech and Extremism: Extremist groups and individuals use social media to spread hate speech, incite violence, and recruit followers, contributing to societal division and conflict.

Spread of Misinformation and Disinformation

Let’s start by understanding what misinformation and disinformation are. Misinformation is false or inaccurate information that is spread, regardless of whether there is intent to deceive.

Disinformation, on the other hand, is false information deliberately created and spread to mislead people.

Methods of Dissemination

  • Fake News Websites: These are websites that look like legitimate news sources but publish false stories. They often have sensational headlines to attract clicks and shares.
  • Bots and Automated Accounts: Bots are automated programs that can spread information quickly across social media platforms. They can be used to amplify certain messages or create the illusion of widespread agreement.
  • Deepfakes and Doctored Images/Videos: Deepfakes use artificial intelligence to create realistic but fake videos or images. Doctored media are altered to mislead viewers about what really happened.

Case Studies/Examples

  • Political Misinformation Campaigns: During elections, false information can be spread to influence voters. For example, false claims about a candidate’s actions or policies can sway public opinion.
  • COVID-19 Related Disinformation: During the pandemic, false information about treatments and the virus spread rapidly. This led to confusion and sometimes dangerous behaviors, like avoiding vaccines.

Impact on Society

  • Public Opinion and Polarization: Misinformation can divide society by creating or deepening existing divides. People start to believe in completely different versions of reality.
  • Threats to Democratic Processes: When misinformation affects elections, it undermines democracy. People may make decisions based on false information.
  • Public Health and Safety: False information about health can lead to serious consequences. For example, if people believe in false cures or ignore real medical advice, their health could be at risk.

Online Harassment and Cyberbullying

Another major issue on social media is online harassment and cyberbullying. This involves using digital platforms to bully, harass, or intimidate someone.

Methods and Platforms

  • Trolling and Hate Speech: Trolls deliberately post provocative or offensive messages to upset people. Hate speech targets people based on their race, religion, gender, or other characteristics.
  • Doxxing and Swatting: Doxxing involves sharing someone’s private information online without their permission. Swatting is a dangerous prank where someone makes a false report to emergency services to get them to respond to someone else’s address.
  • Revenge Adult Content and Non-Consensual Sharing of Private Information: Sharing intimate images or information without someone’s consent is a severe violation of privacy and can have devastating effects.
  • High-Profile Cases of Cyberbullying: Celebrities and public figures often face intense online harassment. For example, some have received threats or have been subjected to coordinated attacks by trolls.
  • Harassment Campaigns Targeting Activists or Public Figures: Activists who speak out on social issues often become targets of harassment. This can deter them from continuing their work and silence important voices.

Impact on Individuals

  • Psychological and Emotional Effects: Being harassed or bullied online can lead to anxiety, depression, and other mental health issues. The constant negativity can be overwhelming.
  • Impact on Mental Health and Well-being: Victims of cyberbullying might experience severe stress and trauma, affecting their overall well-being.
  • Consequences for Personal and Professional Lives: Online harassment can damage someone’s reputation, making it hard for them to find jobs or maintain relationships.

Comparison and Evaluation: Two Forms of Abuse

Similarities Between the Two Forms of Abuse

  • Anonymity and Lack of Accountability: Both misinformation and cyberbullying often occur because people feel anonymous online. They might think they can say or do anything without facing consequences.
  • Rapid and Widespread Dissemination: Social media allows information and messages to spread quickly to a large audience, whether it’s false information or harmful messages.

Differences in Impact and Scope

  • Societal vs. Individual Impact: Misinformation often affects society as a whole, while cyberbullying primarily targets individuals. However, both can have wide-reaching effects.
  • Long-term vs. Immediate Consequences: Misinformation can have long-term consequences, such as altering public opinion over time. Cyberbullying often has immediate, severe impacts on the victim.

Challenges in Addressing and Mitigating These Abuses

  • Technological Challenges: Detecting false information and abusive behavior can be difficult, especially as perpetrators use more sophisticated methods.
  • Legal and Regulatory Hurdles: Laws and regulations often lag behind technological advances. Creating effective laws that protect users without infringing on free speech is a delicate balance.
  • Ethical Considerations: Ensuring that measures to combat abuse don’t overstep and infringe on privacy or free expression is a significant ethical challenge.

Strategies for Mitigation

Role of Social Media Platforms

  • Algorithms and AI for Detecting Abuse: Platforms can use advanced technology to identify and remove false information or abusive content.
  • User Reporting and Moderation Systems: Allowing users to report harmful content and having a team to review these reports can help manage abuse. (A minimal sketch combining these two ideas follows this list.)
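
A minimal sketch of how those two mechanisms might work together: an upstream classifier score triages content automatically, while accumulated user reports route borderline items to human moderators. The thresholds, fields, and classifier score are invented for illustration.

```python
# Sketch of automated triage plus user-report escalation; all values are
# placeholders, not any platform's actual policy.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    toxicity_score: float  # from an upstream model, in [0, 1]
    user_reports: int = 0
    status: str = "visible"

REMOVE_AT = 0.95  # high-confidence automated removal
REVIEW_AT = 0.70  # above this (or heavily reported): human review

def moderate(post, review_queue):
    """Triage one post: auto-remove, queue for humans, or leave visible."""
    if post.toxicity_score >= REMOVE_AT:
        post.status = "removed"
    elif post.toxicity_score >= REVIEW_AT or post.user_reports >= 3:
        post.status = "pending_review"
        review_queue.append(post)

queue = []
post = Post("borderline insult", toxicity_score=0.72)
moderate(post, queue)
print(post.status, len(queue))  # -> pending_review 1
```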

Legal and Policy Frameworks

  • International Cooperation and Regulation: Because the internet is global, countries need to work together to create regulations that address these issues.
  • National Laws and Enforcement: Stronger national laws and better enforcement can help protect users and hold perpetrators accountable.

Public Awareness and Education

  • Media Literacy Programs: Teaching people how to critically evaluate the information they see online can help reduce the impact of misinformation.
  • Educational Campaigns on Safe Online Behavior: Educating users about the dangers of cyberbullying and how to protect themselves can help create a safer online environment.

Social media has the power to connect us, but it also has the potential to be abused and manipulated in harmful ways.

By understanding how misinformation and disinformation spread, and by recognizing the impact of online harassment and cyberbullying, we can take steps to mitigate these issues.

It’s important for individuals, platforms, and governments to work together to create a safer and more truthful online space.

Remember, staying informed and vigilant is our best defense against the abuse and manipulation of social media.



Principles for Social Media Use by Law Enforcement

Unfettered social media surveillance by police imperils constitutional rights and marginalized communities. Our best practices help mitigate these risks.

Rachel Levinson-Waldman


Introduction

Social media is a powerful tool for connection and civic involvement, serving myriad purposes. It facilitates community-building, connecting like-minded people and fostering alliance development, including on sensitive or controversial topics; it helps grassroots movements find financial and other support; it promotes political education; it assists civic organizations in organizing and magnifying the reach of offline efforts; it elevates nonmainstream narratives; it encourages artistic expression; and more. [1: See, e.g., Marcia Mundt, Karen Ross, and Charla M. Burnett, “Scaling Social Movements Through Social Media: The Case of Black Lives Matter,” Social Media + Society 4, no. 4 (October–December 2018): 1–14, https://journals.sagepub.com/doi/epub/10.1177/2056305118807911; Jane Hu, “The Second Act of Social-Media Activism,” New Yorker, August 3, 2020, https://www.newyorker.com/culture/cultural-comment/the-second-act-of-social-media-activism; and Shira Ovide, “How Social Media Has Changed Civil Rights Protests,” New York Times, updated December 17, 2020, https://www.nytimes.com/2020/06/18/technology/social-media-protests.html.]

Users of color benefit especially from social media’s wide-ranging applications. Black and Hispanic users of X (formerly Twitter) have leveraged that platform to spur political engagement and give voice to underrepresented groups. [2: Black Twitter and Hispanic Twitter (like Asian American Twitter and Feminist Twitter) are monikers coined for the space on Twitter (now X) in which people “discuss issues of concern to themselves and their communities — issues they say either are not covered by mainstream media, or are not covered with the appropriate cultural context.” Deen Freelon et al., How Black Twitter and Other Social Media Communities Interact with Mainstream News, Knight Foundation, February 27, 2018, 38–39, https://knightfoundation.org/wp-content/uploads/2018/02/Marginalized-Twitter-v5.pdf. See also Brooke Auxier, “Social Media Continue to Be Important Political Outlets for Black Americans,” Pew Research Center, December 11, 2020, https://www.pewresearch.org/short-reads/2020/12/11/social-media-continue-to-be-important-political-outlets-for-black-americans.] College students of color have used social media to share stories about inequitable or traumatic treatment at predominantly white colleges and universities. [3: Christian Peña, “How Social Media Is Helping Students of Color Speak Out About Racism on Campus,” PBS NewsHour, September 8, 2020, https://www.pbs.org/newshour/education/how-social-media-is-helping-students-of-color-speak-out-about-racism-on-campus; and Dominique Skye McDaniel, “As Digital Activists, Teens of Color Turn to Social Media to Fight for a More Just World,” Conversation, April 20, 2023, https://theconversation.com/as-digital-activists-teens-of-color-turn-to-social-media-to-fight-for-a-more-just-world-201841.] And “Black Twitter” in particular has a record of pushing the national media to cover overlooked stories. [4: Freelon et al., How Black Twitter and Other Social Media Communities Interact, 46–47. See also Auxier, “Social Media Continue to Be Important Political Outlets”; Brooke Auxier, “Activism on Social Media Varies by Race and Ethnicity, Age, Political Party,” Pew Research Center, July 13, 2020, https://www.pewresearch.org/short-reads/2020/07/13/activism-on-social-media-varies-by-race-and-ethnicity-age-political-party; and University of Kansas, “Social Media Use Increases Latino Political Participation,” news release, November 5, 2018, https://news.ku.edu/2018/11/02/social-media-use-increases-latino-political-participation.] Research shows that young people of color are the demographic group most likely to turn to social media both to consume news and to amplify their own political involvement. [5: Matthew D. Luttig and Cathy J. Cohen, “How Social Media Helps Young People — Especially Minorities and the Poor — Get Politically Engaged,” Washington Post, September 9, 2016, https://www.washingtonpost.com/news/monkey-cage/wp/2016/09/09/how-social-media-helps-young-people-especially-minorities-and-the-poor-get-politically-engaged; Auxier, “Social Media Continue to Be Important Political Outlets”; and Auxier, “Activism on Social Media Varies by Race and Ethnicity, Age, Political Party.”] As the Supreme Court has recognized, online platforms “can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.” [6: Packingham v. North Carolina, 582 U.S. 98, 107 (2017).]

This far-reaching use makes social media an attractive source of information and intelligence for law enforcement. Officers can easily view publicly available information online and follow individuals and hashtags, often without even needing an account. They can also create undercover accounts to join online groups, monitor activity anonymously, or connect directly with individuals — with the attendant risks described below. Social media can provide evidence of criminal activity, from white-collar crime to inciting violence to drug and firearm offenses. It can also be used to commit crimes, such as stalking, harassment, and child sexual exploitation. [7: See LexisNexis Risk Solutions, Social Media Use in Law Enforcement: Crime Prevention and Investigative Activities Continue to Drive Usage, November 2014, https://centerforimprovinginvestigations.org/wp-content/uploads/2018/11/2014-social-media-use-in-law-enforcement-pdf.pdf (documenting multiple use cases of police use of social media).]

Many law enforcement agencies contract with software vendors that offer proprietary, opaque computer algorithms to collect and analyze massive amounts of data. These algorithms supply agencies with running reports of social media posts on topics, groups, and individuals of interest, allowing law enforcement to analyze associations and even discern viewpoints. Such tools facilitate the monitoring, collection, and analysis of data far more quickly and cheaply than any individual officer could accomplish, implicating the Supreme Court’s recognition that a “central aim” of the Constitution’s drafters was “to place obstacles in the way of a too permeating police surveillance.” [8: Carpenter v. U.S., 138 S. Ct. 2206, 2214 (2018) (quoting U.S. v. Di Re, 332 U.S. 581, 595 (1948)) (internal quotation marks omitted). See also Rachel Levinson-Waldman, “Government Access to and Manipulation of Social Media: Legal and Policy Challenges,” Howard Law Journal 61, no. 3 (2018): 523–62, https://www.brennancenter.org/sites/default/files/publications/images/RLW_HowardLJ_Article.pdf.]

In some circumstances, targeted social media use can be both appropriate and productive. As the Brennan Center has noted in the context of the FBI, “public social media posts that express specific and credible threats, when brought to the FBI’s attention, can by themselves be all the evidence necessary to justify opening a preliminary or full investigation, as would any other source of information indicating that a crime is taking place or in the works. Likewise, once the FBI opens a properly predicated investigation, agents may logically conclude that monitoring and recording public or private social media posts would be a fruitful investigative step to gather the evidence necessary for a prosecution.” [9: Michael German and Kaylana Mueller-Hsia, Focusing the FBI: A Proposal for Reform, Brennan Center for Justice, July 28, 2022, 7, https://www.brennancenter.org/our-work/research-reports/focusing-fbi.]

When there are no less intrusive means available and when its use is properly limited and narrowly scoped, social media can also help augment preparation for public events, as set out below.

But unbounded social media use by law enforcement can cause considerable harm. As the White House Office of Science and Technology Policy recently recognized, “unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity — often without their knowledge or consent.” [10: Office of Science and Technology Policy, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House, October 2022, 3, https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.] These threats include incursions into constitutionally protected speech and association, disproportionate focus on and repercussions for marginalized communities, and overcollection of irrelevant information. The Department of Justice has advised that even publicly available social media data should not be collected or used indiscriminately by law enforcement. [11: Global Advisory Committee, Developing a Policy on the Use of Social Media in Intelligence and Investigative Activities: Guidance and Recommendations, Office of Justice Programs, Department of Justice, February 2013, 6, https://bja.ojp.gov/sites/g/files/xyckuh186/files/media/document/developing_a_policy_on_the_use_of_social_media_in_intelligence_and_inves.pdf.]

Moreover, although the issue is understudied, little empirical evidence supports the value of broadscale social media monitoring, [12: See German and Mueller-Hsia, Focusing the FBI, 6 (“No one could reasonably suggest that having the FBI employ a team of agents to collect, digitize, and scour for vague indicators of wrongdoing every book, newspaper, magazine, newsletter, press release, and broadcast interview, song, poem, or speech published would be an effective or cost-efficient way to prevent crime or terrorism, especially given that more than half of the violent crime in the U.S. goes unsolved every year. The same holds true for social media.”).] and government officials have expressed skepticism about the efficacy of the practice. A 2021 internal review by the Department of Homeland Security Office of the General Counsel, for instance, observed that agents trying to predict threats by collecting social media and other open-source data often instead gathered information on “a broad range of general threats,” ultimately yielding “information of limited value” that included “memes, hyperbole, statements on political organizations and other protected First Amendment speech.” [13: Office of the General Counsel, Report on DHS Administrative Review into I&A Open Source Collection and Dissemination Activities During Civil Unrest: Portland, Oregon, June through July 2020, Department of Homeland Security, January 6, 2021, 22, 27, http://cdn.cnn.com/cnn/2021/images/10/01/internal.review.report.20210930.pdf.]

This explainer sets out the major risks inherent to law enforcement’s use of social media and then outlines proposed best practices. It aims to contribute to the development of policies that clarify and build appropriate guardrails around social media use by law enforcement.

Risks from Law Enforcement Monitoring of Social Media

Suppression of First Amendment–Protected Activities

Law enforcement officers routinely monitor hashtags, event pages, location data, and other information on social media ahead of public gatherings and political protests to glean information, follow organizers and participants, and develop response plans. [1: See Brennan Center for Justice, “Civil Rights Concerns About Social Media Monitoring by Law Enforcement,” November 6, 2019, 2, https://www.brennancenter.org/our-work/research-reports/statement-civil-rights-concerns-about-monitoring-social-media-law; and Mundt, Ross, and Burnett, “Scaling Social Movements Through Social Media” (quoting a Black Lives Matter group leader who shared, “I made a Facebook event for a vigil we held for Terence Crutcher. Literally 3 minutes later I got a call from” the local FBI).] Some social media monitoring companies tout their ability to create revealing maps of individuals’ online networks, which implicates the right to freedom of association. It also raises questions about the extent to which a person’s ideology can be ascertained via their online presence. [2: Rachel Levinson-Waldman and Mary Pat Dwyer, “LAPD Documents Show What One Social Media Surveillance Firm Promises Police,” Brennan Center for Justice, November 17, 2021, https://www.brennancenter.org/our-work/analysis-opinion/lapd-documents-show-what-one-social-media-surveillance-firm-promises.] Concerningly, police have even used social media to compile profiles based on First Amendment–protected activities and share them among local, state, and federal agencies — increasing the risk that protesters will later face retaliatory targeting. [3: See, e.g., Antonia Noori Farzan, “Memphis Police Used Fake Facebook Account to Monitor Black Lives Matter, Trial Reveals,” Washington Post, August 23, 2018, https://www.washingtonpost.com/news/morning-mix/wp/2018/08/23/memphis-police-used-fake-facebook-account-to-monitor-black-lives-matter-trial-reveals; Alice Speri and Maryam Saleh, “An Immigrant Journalist Faces Deportation as ICE Cracks Down on Its Critics,” Intercept, November 28, 2018, https://theintercept.com/2018/11/28/ice-immigration-arrest-journalist-manuel-duran; and Geofeedia, “Baltimore County Police Department and Geofeedia Partner to Protect the Public During Freddie Gray Riots,” case study, accessed December 27, 2023, http://www.aclunc.org/docs/20161011_geofeedia_baltimore_case_study.pdf (document obtained by the ACLU of Northern California in 2016 through a public records request to the Glendale Police Department in California).]

Police in Memphis, Tennessee, for instance, created dossiers on local activists in 2015 and 2016 and shared them with the city council and multiple law enforcement agencies, violating a federal consent decree prohibiting infringement on First Amendment activities. [4: Brentin Mock, “Memphis Police Spying on Activists Is Worse Than We Thought,” Bloomberg, July 27, 2018, https://www.bloomberg.com/news/articles/2018-07-27/memphis-police-spying-on-black-lives-matter-runs-deep; and Kendrick v. Chandler, No. 76CV0449 (W.D. Tenn. September 14, 1978), order, judgment, and decree (“Kendrick Decree”), https://www.memphispdmonitor.com/_files/ugd/03602e_632d1f4ea1b94b579c559f3489fdaa71.pdf.] And in 2022, a social media monitoring vendor under contract with the Los Angeles Police Department (LAPD) alerted the department about a small community education event focused on LAPD’s own online surveillance. [5: Rachel Levinson-Waldman, “Documents Show LAPD Monitoring of Community Meeting on . . . LAPD Social Media Monitoring,” Brennan Center for Justice, September 9, 2022, https://www.brennancenter.org/our-work/analysis-opinion/documents-show-lapd-monitoring-community-meeting-lapd-social-media.] With no legitimate indication that the event or its attendees posed any threat to public order, the LAPD nevertheless included the event (in which the Brennan Center participated) in internal reports disseminated to several of its divisions.

When police target individuals for surveillance because of their political viewpoints, people may choose to censor their online activity and associations to reduce the risk of governmental monitoring. [6: Brennan Center for Justice, “Doc Society v. Blinken,” updated February 1, 2024, https://www.brennancenter.org/our-work/court-cases/doc-society-v-blinken; and Knight First Amendment Institute, “Twitter, Reddit File in Support of Lawsuit Challenging U.S. Government’s Social Media Registration Requirement for Visa Applicants,” news release, May 29, 2020, https://knightcolumbia.org/content/twitter-reddit-file-in-support-of-lawsuit-challenging-us-governments-social-media-registration-requirement-for-visa-applicants?_preview_=4d450decff.] Research bears this out: one study showed that people are less willing to share non-majority views online when reminded that the government monitors these activities. [7: Kaveh Waddell, “How Surveillance Stifles Dissent on the Internet,” Atlantic, April 5, 2016, https://www.theatlantic.com/technology/archive/2016/04/how-surveillance-mutes-dissent-on-the-internet/476955.] Remarkably, this effect was actually more prominent among people who thought they had nothing to hide.

This chilling effect undermines social media’s ability to serve as the new public square and weakens civic connections, particularly for the groups most likely to be targeted online. One environmental activist who had been targeted by police due in part to her online activity described it this way: “Once I realized we were being surveilled and information was being used against us in different ways, I stopped sharing and making these kinds of posts. . . . It made me think, am I safe to share things publicly? Photos of my children? Life events? Political beliefs?” [8: Gabriella Sanchez and Rachel Levinson-Waldman, “Police Social Media Monitoring Chills Activism,” Brennan Center for Justice, November 18, 2022, https://www.brennancenter.org/our-work/analysis-opinion/police-social-media-monitoring-chills-activism.]

Facilitation and Magnification of Bias in Policing

Racial and ethnic bias in policing, a well-recognized phenomenon, seeps into online monitoring as well. Documented examples abound of police using social media to target activists of color and groups seeking racial justice. The Boston Police Department, for example, enlisted a social media monitoring tool to track mentions of the terms Black Lives Matter and Muslim Lives Matter, among others. [9: Nasser Eledroos and Kade Crockford, “Social Media Monitoring in Boston: Free Speech in the Crosshairs,” ACLU of Massachusetts, 2018, https://privacysos.org/social-media-monitoring-boston-free-speech-crosshairs.] Similarly, the LAPD employed an online surveillance tool to monitor hashtags such as #BlackLivesMatter and #SayHerName, along with tweets about victims of police killings, including Sandra Bland and Tamir Rice. [10: Mary Pat Dwyer, “LAPD Documents Reveal Use of Social Media Monitoring Tools,” Brennan Center for Justice, September 8, 2021, https://www.brennancenter.org/our-work/analysis-opinion/lapd-documents-reveal-use-social-media-monitoring-tools.] And during the 2020 racial justice demonstrations following the deaths of George Floyd and Breonna Taylor, law enforcement agencies around the country turned to a tool from the artificial intelligence (AI) surveillance company Dataminr to monitor protesters through their social media activity. [11: Sam Biddle, “Police Surveilled George Floyd Protests with Help from Twitter-Affiliated Startup Dataminr,” Intercept, July 9, 2020, https://theintercept.com/2020/07/09/twitter-dataminr-police-spy-surveillance-black-lives-matter-protests.]

When it comes to police surveillance, what happens online does not always stay online. The Fresno Police Department in California has used a tool called Beware that draws on social media data to help calculate individuals’ “threat scores” and has shared those scores with operators dispatching officers on calls. If social media inaccurately flags someone as a threat, officers may arrive ready to shoot, or with a SWAT team in tow. [12: Justin Jouvenal, “The New Way Police Are Surveilling You: Calculating Your Threat ‘Score,’” Washington Post, January 10, 2016, https://www.washingtonpost.com/local/public-safety/the-new-way-police-are-surveilling-you-calculating-your-threat-score/2016/01/10/e42bccac-8e15–11e5-baf4-bdf37355da0c_story.html.]
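
To make the failure mode concrete, the sketch below shows how an additive score of this kind can inflate on entirely benign signals. It is a hypothetical illustration only: the term weights, tie weight, and dispatch threshold are invented, and nothing here reflects Beware's actual, proprietary model.

```python
# Hypothetical additive scoring: keyword hits plus network ties, with no
# notion of sarcasm, idiom, or context. All weights are invented.
FLAGGED_TERMS = {"bomb": 3.0, "shoot": 3.0, "gang": 2.0}
TIE_WEIGHT = 1.5          # added per connection to an already-flagged account
DISPATCH_THRESHOLD = 8.0  # score above which a call is marked "elevated"

def threat_score(posts: list[str], flagged_contacts: int) -> float:
    score = 0.0
    for post in posts:
        for term, weight in FLAGGED_TERMS.items():
            if term in post.lower():   # crude substring match
                score += weight
    return score + TIE_WEIGHT * flagged_contacts

# A benign user: slang, a news share, and two neighbors on some watchlist.
posts = ["this pizza is the bomb", "sad news: another shooting downtown"]
score = threat_score(posts, flagged_contacts=2)
print(score, score > DISPATCH_THRESHOLD)  # 9.0 True -> dispatched as "elevated"
```

Every signal the score rewards is ambiguous at best, which is why dispatching officers on its say-so is so dangerous.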

Images and other data gleaned from social media also feed the sweeping reach of gang databases, [13: See statement of Chief Dermot Shea, New York City Police Department, before the New York City Council Committee on Public Safety, June 13, 2018, 4, https://jjie.org/wp-content/uploads/2018/06/Gang-Testimony-Public-Version.docx.] which typically consist almost entirely of individuals of color and are plagued with inaccuracies, as reported by oversight offices in multiple big-city police departments. [14: See Matt Masterson, “Gang Database ‘Strains Police-Community Relations,’ City Watchdog Says,” WTTW (PBS Chicago), April 11, 2019, https://news.wttw.com/2019/04/11/gang-database-strains-police-community-relations-city-watchdog-says (describing a 2019 Chicago Police Department inspector general report finding that 95 percent of the individuals in Chicago’s gang database were Black or Latino and that the database was filled with poor-quality data and suffered from a lack of sufficient controls, procedural protections, and transparency); City of Chicago Office of Inspector General, Follow-Up Inquiry on the Chicago Police Department’s “Gang Database,” March 2021, 18, https://igchicago.org/wp-content/uploads/2021/03/OIG-Follow-Up-Inquiry-on-the-Chicago-Police-Departments-Gang-Database.pdf (finding that the Chicago Police Department had “made minimal progress” toward a functional database, thereby undermining crime-fighting efforts and misleading the public); and KCAL (CBS Los Angeles), “DOJ Revokes LAPD Access to CalGang Database After Gang Framing Scandal,” July 14, 2020, https://www.cbsnews.com/losangeles/news/doj-revokes-lapd-access-to-calgang-database-after-gang-framing-scandal/ (reporting that the LAPD was barred from using the California gang database after revelations that officers had entered inaccurate information into the database to frame people as gang members).] According to an April 2023 report by the New York City Police Department (NYPD) inspector general, individuals can be added to the NYPD’s gang database based solely on social media content that amounts to “self-admission” of gang membership. [15: NYPD Office of Inspector General (OIG), An Investigation into NYPD’s Criminal Group Database, April 2023, 25, https://www.nyc.gov/assets/doi/reports/pdf/2023/16CGDRpt.Release04.18.2023.pdf.] “Self-admission” is broadly defined to include “an individual’s use of language, symbols, pictures, or colors associated with a criminal group.” [16: NYPD OIG, Investigation into NYPD’s Criminal Group Database, 5.] Being photographed with a known gang member — even in a context that has nothing to do with gang activity — or using a specific emoji is sufficient. And inclusion in a gang database has real consequences: individuals listed as gang members may face increased bail and police escalation of routine traffic stops. [17: Josmar Trujillo and Alex S. Vitale, Gang Takedowns in the De Blasio Era: The Dangers of “Precision Policing,” Policing and Social Justice Project, City University of New York, 2019, 7, 12, 15, https://static1.squarespace.com/static/5de981188ae1bf14a94410f5/t/5df14904887d561d6cc9455e/1576093963895/2019+New+York+City+Gang+Policing+Report+-+FINAL%29.pdf. While the NYPD OIG stated that it did not find evidence of specific harms caused by inclusion in the database, it also declined to investigate potential harms, which it said would have been difficult to assess and outside the scope of its report. NYPD OIG, Investigation into NYPD’s Criminal Group Database, 3, 21–22.] Men of color experience these harms most acutely: 99 percent of individuals included in the NYPD’s gang database are Black or Latino, and Black men comprise the vast majority. [18: NYPD OIG, Investigation into NYPD’s Criminal Group Database, 34.] Furthermore, online surveillance can reinforce policing biases even when the police themselves are not doing the monitoring. Dataminr employees have reported that, when searching online for gang members and gang-related activity to include in the company’s news alert service for police and other public-sector clients, they focused predominantly on users of color, reflecting their perceptions of both the company’s orders and law enforcement’s appetite for “threat fodder.” [19: Sam Biddle, “Twitter Surveillance Startup Targets Communities of Color for Police,” Intercept, October 21, 2020, https://theintercept.com/2020/10/21/dataminr-twitter-surveillance-racial-profiling.]

Political inclinations also affect how law enforcement officers view online activity. Numerous online posts suggested that violence would occur at the U.S. Capitol on January 6, 2021, yet law enforcement minimized the threat and failed to plan for an attack. [20: See Justin Hendrix, “Facebook Provided Warning to FBI Before January 6, GAO Report Reveals,” Just Security (blog), May 5, 2022, https://www.justsecurity.org/81384/facebook-provided-warning-to-fbi-before-january-6-gao-report-reveals; and S. Comm. on Homeland Security and Governmental Affairs, Planned in Plain Sight: A Review of the Intelligence Failures in Advance of January 6th, 2021, June 2023, https://www.hsgac.senate.gov/wp-content/uploads/230627_HSGAC-Majority-Report_Jan-6-Intel.pdf.] Instead, in the months leading up to the attack, federal law enforcement focused on framing racial justice protests as instigated by antifa (or anti-fascists), in line with the Trump administration’s messaging and reflecting prevalent law enforcement sympathies. [21: Curtis Waltman, “In California, Homeland Security Continues to Argue that Antifa, Not White Supremacists, Pose ‘The Greatest Threat to Public Safety,’” Muckrock, April 10, 2018, https://www.muckrock.com/news/archives/2018/apr/10/dhss-antifa-neonazi-CA-rundown; and Will Carless, “As FBI Probed Jan. 6, Many Agents Sympathized with Insurrection, According to Newly Released Email,” USA Today, October 15, 2022, https://www.usatoday.com/story/news/nation/2022/10/15/jan-6-insurrection-fbi-agents-paul-abbate-warning/10498351002. See also Michael German, Hidden in Plain Sight: Racism, White Supremacy, and Far-Right Militancy in Law Enforcement, Brennan Center for Justice, August 27, 2020, https://www.brennancenter.org/our-work/research-reports/hidden-plain-sight-racism-white-supremacy-and-far-right-militancy-law.] This history suggests that even if widespread social media surveillance were effective in flagging threats, neither its use nor its application would be equitable.

Difficulty Accurately Interpreting Posts

Accurately assessing the meanings of posts, pictures, music, videos, and other forms of expression and communication on social media is notoriously challenging. Individuals use in-group slang, and both law enforcement personnel and algorithmic tools may fail to recognize sarcasm, satire, or hyperbole. Trying to interpret posts by young people, who often use memes and pop culture references that may be inscrutable to outsiders, can intensify these challenges. This effect is likely to be heightened for young people of color and immigrant youths, who are more heavily policed and more susceptible to inaccurate or biased presumptions that gestures, clothing, and other characteristics viewed online indicate gang activity or other criminal behavior. [22: See Kianna Ortiz and Ananya Roy, All Eyes on Us, Youth Justice Board, Center for Court Innovation, January 2020, 11–13, https://www.innovatingjustice.org/sites/default/files/media/document/2020/Report_YJB_06302020.pdf.] The Immigrant Legal Resource Center, for instance, reported on the deportation of a mentally disabled teenager that was evidently based in large part on Facebook pictures of the teen wearing a Chicago Bulls t-shirt, Nike shoes, and blue clothing, including a shirt that was part of his required school uniform — presumed to be evidence of membership in the MS-13 gang. [Laila L. Hlass and Rachel Prandini, Deportation by Any Means Necessary: How Immigration Officials Are Labeling Immigrant Youth as Gang Members, Immigrant Legal Resource Center, May 21, 2018, 3, https://www.ilrc.org/sites/default/files/resources/deport_by_any_means_nec-20180521.pdf. See also Philip Marcelo, “Court Decision Deals Blow to Boston Police Gang Database,” Boston.com, January 12, 2022, https://www.boston.com/news/local-news/2022/01/12/court-decision-deals-blow-to-boston-police-gang-database (describing how a federal appeals court overturned the immigration board’s decision to deport the teenager after determining that the Boston Police Department’s gang database relied on “an erratic point system built on unsubstantiated inferences” and did not contain compelling evidence of gang membership or association).]

Critically, police and others in positions of authority may be more apt to perceive a social media post as dangerous based on the speaker or their viewpoint. For example, police in Kansas arrested a Black teenager in 2020 on charges that he had contributed to inciting a riot through a Snapchat post; in fact, his post denounced violence rumored to be coming toward his hometown. [23: Amy Renee Leiker, “Outcry Follows Arrest of 2 Men over Social Media Post That Urged Violence in Wichita Area,” Wichita Eagle, June 8, 2020, https://www.kansas.com/news/local/crime/article243267626.html; and Quiaana Pinkston, “Justice for Rashawn Mayes,” GoFundMe, June 4, 2020, https://www.gofundme.com/f/justice-for-rashawn-mayes.] In another instance, the former head of the civil rights division at the Oregon Department of Justice was wrongly identified as a threat — and ultimately forced out of his job — because he tweeted graphics from a popular Public Enemy album that police misinterpreted as menacing. [24: John Sepulvado, “Black Lives Matter Report: Tweet Quoting Public Enemy Prompted DOJ Investigation,” Oregon Public Broadcasting, April 11, 2016, https://www.opb.org/news/article/black-lives-matter-report-tweet-quoting-public-enemy-prompted-doj-investigation. That tweet and others were discovered and flagged for DOJ officials by an investigator using a digital surveillance tool trained to look for social media mentions of “Black Lives Matter” and “KKK,” among other terms.]

Misleading Inferences

A person’s social media connections can reflect everything from close relationships to passing acquaintanceships, and an outside observer may struggle to gauge the strength or depth of those associations. People may feel social pressure to connect with family members, colleagues, professional acquaintances, or classmates online, and they may feel compelled to comment on or like their connections’ posts even when the relationship is remote. These dynamics make it particularly fraught to ascribe criminal intent or activity to an individual based on online linkages, which may be superficial or, even where they reflect an actual relationship, may not signify participation in illegal activity.

The presumption that online proximity necessarily reflects a real-life affiliation can have tragic consequences, especially for young people of color. In one example, a Black New York City teen spent more than a year at the Rikers Island correctional facility, much of it in solitary confinement, based largely on the district attorney’s assessment that he was a member of a criminal gang. [25: Ben Popper, “How the NYPD Is Using Social Media to Put Harlem Teens Behind Bars,” Verge, December 10, 2014, https://www.theverge.com/2014/12/10/7341077/nypd-harlem-crews-social-media-rikers-prison.] The district attorney relied on Facebook photos of the teen with members of a local crew (a group of young people, typically young men, loosely affiliated by block or housing development) and several posts from crew members that he had liked. In fact, the teen was simply connected to crew members because they were his neighbors and family members. [26: Popper, “How the NYPD Is Using Social Media.”]
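
A toy version of this "guilt by association" inference shows how thin the signal is. Everything below is invented for illustration: edges record only that two accounts interacted online (a like, a tag, a shared photo), yet the rule flags anyone with two or more flagged neighbors.

```python
# Invented social graph: an edge means only "these accounts interacted online."
GRAPH = {
    "teen":          {"cousin", "neighbor_a", "crew_member_1", "crew_member_2"},
    "cousin":        {"teen"},
    "neighbor_a":    {"teen", "crew_member_1"},
    "crew_member_1": {"teen", "neighbor_a", "crew_member_2"},
    "crew_member_2": {"teen", "crew_member_1"},
}
FLAGGED = {"crew_member_1", "crew_member_2"}  # e.g., listed in a gang database

def naive_affiliation(person: str, threshold: int = 2) -> bool:
    """Flag anyone with `threshold` or more flagged neighbors.
    The rule sees ties, not what the ties mean (family, block, classmates)."""
    return sum(1 for n in GRAPH[person] if n in FLAGGED) >= threshold

print(naive_affiliation("teen"))  # True -- though his ties are kin and neighbors
```

By this rule, the teen in the Rikers example looks "affiliated" even though nothing in the graph distinguishes a crew tie from a family one.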

Also troubling is the law enforcement practice of feeding pictures drawn from social media into facial recognition programs to generate leads and identify possible suspects. These algorithms have a documented history of performing less accurately on people with darker skin, and on women of color in particular. [27: See Malachi Barrett, “How Authorities Use Social Media to Aid Investigations,” Government Technology, August 11, 2021, https://www.govtech.com/news/how-authorities-use-social-media-to-aid-investigations.] At least six people in four states (Georgia, Maryland, Michigan, and New Jersey) are known to have been wrongfully arrested because they were misidentified by facial recognition software. Five of them were Black men; one was a Black woman. [28: Kashmir Hill, “Eight Months Pregnant and Arrested After False Facial Recognition Match,” New York Times, August 6, 2023, https://www.nytimes.com/2023/08/06/business/facial-recognition-false-arrest.html; Katie Hawkinson, “In Every Reported Case Where Police Mistakenly Arrested Someone Using Facial Recognition, That Person Has Been Black,” Business Insider, August 6, 2023, https://www.businessinsider.com/in-every-reported-false-arrests-based-on-facial-recognition-that-person-has-been-black-2023–8; Khari Johnson, “Face Recognition Software Led to His Arrest. It Was Dead Wrong,” Wired, February 28, 2023, https://www.wired.com/story/face-recognition-software-led-to-his-arrest-it-was-dead-wrong; and Khari Johnson, “How Wrongful Arrests Based on AI Derailed 3 Men’s Lives,” Wired, March 7, 2022, https://www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives.]

Social media can even be used for misdirection, further undermining its ostensible utility for law enforcement. In one such instance, a young man posted multiple photos of himself in different locations holding a pistol to convey the impression that he was ready for a violent confrontation. In reality, he had taken the photos in a single afternoon with a borrowed gun and posted them over time to suggest that he typically had the weapon with him. He told a sociologist conducting fieldwork in his neighborhood that he did not carry the weapon around, let alone plan to use it; he simply wanted people to assume he was armed so that he could feel more secure in public. [29: Melissa De Witte, “Gang-Associated Youth Avoid Violence by Acting Tough Online, Stanford Sociologist Finds,” Stanford News Service, May 1, 2019, https://news.stanford.edu/press-releases/2019/05/01/gangs-use-social-media.] The same sociologist, Stanford University professor Forrest Stuart, has documented the ways in which Black youths who drive Chicago’s drill music culture promote a tough and sometimes violent image online that, by design, often vastly overstates the actual levels of violence in their daily lives. [30: Forrest Stuart, Ballad of the Bullet: Gangs, Drill Music, and the Power of Online Infamy (Princeton, NJ: Princeton University Press, 2020).]

When lies reinforce existing biases, the risk that law enforcement or intelligence agencies will be duped into acting on fake posts is amplified. In 2020, the Maine fusion center disseminated FBI and DHS reports to local law enforcement agencies warning of potential violence at anti–police brutality demonstrations; the warnings turned out to be based on fake social media posts by right-wing provocateurs. [31: Nathan Bernard, “Maine Spy Agency Spread Far-Right Rumors of BLM Protest Violence,” Mainer, July 7, 2020, https://web.archive.org/web/20220218053843/https://mainernews.com/maine-spy-agency-spread-far-right-rumors-of-blm-protest-violence.] And online platforms can supercharge the reach of inaccurate information, as when a young South Asian student was wrongly identified as one of the Boston Marathon bombers, devastating his family. [See Jay Caspian Kang, “Should Reddit Be Blamed for the Spreading of a Smear?,” New York Times, July 25, 2013, https://www.nytimes.com/2013/07/28/magazine/should-reddit-be-blamed-for-the-spreading-of-a-smear.html; and NPR, “How Social Media Smeared a Missing Student as a Terrorism Suspect,” April 18, 2016, https://www.npr.org/sections/codeswitch/2016/04/18/474671097/how-social-media-smeared-a-missing-student-as-a-terrorism-suspect.]

Unreliability of Automated Digital Surveillance Tools

Many third-party social media surveillance tools rely on keyword lists to flag threats for law enforcement, yet these tools struggle to account for variables like tone, speaker, and context. [32: Natasha Duarte, Emma Llansó, and Anna Loup, Mixed Messages? The Limits of Automated Social Media Content Analysis, Center for Democracy and Technology, November 2017, 3, https://cdt.org/wp-content/uploads/2017/11/Mixed-Messages-Paper.pdf.] Automated programs may simply report every post containing a flagged word, producing piles of irrelevant reports. Police in Jacksonville, Florida, learned that rather than uncovering early threat indicators, flagging the word bomb mostly elicited posts describing first-rate pizza or beer as “the bomb.” [33: Ben Conarck, “Sheriff’s Office’s Social Media Tool Regularly Yielded False Alarms,” Jacksonville.com, May 30, 2017, https://www.jacksonville.com/story/news/crime/2017/05/30/sheriff-s-office-s-social-media-tool-regularly-yielded-false/15757269007.] School officials who have purchased monitoring software to scour students’ social media accounts have likewise found that those tools offer insufficient value. [34: Lizzie O’Leary, “Why Expensive Social Media Monitoring Has Failed to Protect Schools,” Slate, June 4, 2022, https://slate.com/technology/2022/06/social-media-monitoring-software-schools-safety.html.] Notably, these kinds of products also violate the major platforms’ policies, which prohibit developers from using their data for surveillance purposes. [35: Meta, “Meta Platform Terms,” updated April 25, 2023, https://developers.facebook.com/terms/dfc_platform_terms/#datause; and X Corp., “Developer Terms: More About Restricted Uses of the Twitter APIs,” accessed December 27, 2023, https://developer.twitter.com/en/developer-terms/more-on-restricted-use-cases.]
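
The Jacksonville experience is easy to reproduce. Here is a minimal sketch of the context-free keyword matching such products rely on; the watchword list and sample posts are invented, but the precision problem is the documented one.

```python
# Context-free keyword flagging, as described above. Watchwords and posts
# are invented for illustration.
WATCHWORDS = {"bomb", "riot", "attack"}

def flag(posts: list[str]) -> list[str]:
    """Report every post containing a watched word, regardless of context."""
    return [p for p in posts if any(w in p.lower() for w in WATCHWORDS)]

posts = [
    "that deep-dish pizza was the bomb",
    "this new IPA is the bomb, seriously",
    "this traffic is an attack on my sanity",
    "we will attack the depot at noon",  # the one post of genuine concern
]
hits = flag(posts)
print(f"{len(hits)} flagged, 1 relevant: precision {1 / len(hits):.0%}")
# 4 flagged, 1 relevant: precision 25%
```

At realistic volumes, that precision translates into exactly the piles of irrelevant reports the Jacksonville sheriff's office described.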

Moreover, natural language processing tools trained in one language can have trouble accurately interpreting others. [36: See Duarte, Llansó, and Loup, Mixed Messages?, 14–15.] In one example, Israeli police held and questioned a man who had posted the greeting “good morning” in Arabic after an automated tool mistranslated it into Hebrew as “attack them.” [37: Dirk Hovy, “Demographic Factors Improve Classification Performance,” Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing 1 (July 2015): 752–62, https://aclanthology.org/P15–1073.pdf.] The same bias surfaces when a system encounters dialects missing from its training data: a 2017 study found that natural language processing tools miscategorized African American Vernacular English (AAVE) as non-English, with one system identifying it as Danish with 99.9 percent confidence. [38: Duarte, Llansó, and Loup, Mixed Messages?, 15.] A more recent study demonstrated that even some of the more sophisticated large language models falter considerably in many non-English languages. [39: See Gabriel Nicholas and Aliya Bhatia, Lost in Translation: Large Language Models in Non-English Content Analysis, Center for Democracy and Technology, May 2023, https://cdt.org/wp-content/uploads/2023/05/non-en-content-analysis-primer-051223–1203.pdf.]
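
The "99.9 percent confidence" failure has a mechanical explanation: a classifier must divide its probability mass among the languages it knows, so text that resembles none of them still receives a confident label. The sketch below uses made-up log-likelihoods to show that normalization step; it is not the model from the study.

```python
import math

def normalized_confidence(log_likelihoods: dict[str, float]) -> dict[str, float]:
    """Softmax over per-language log-likelihoods: probabilities must sum to 1,
    even when every model fits the text terribly."""
    m = max(log_likelihoods.values())
    exp = {lang: math.exp(ll - m) for lang, ll in log_likelihoods.items()}
    total = sum(exp.values())
    return {lang: v / total for lang, v in exp.items()}

# Both fits are terrible for an out-of-distribution post, but one is
# marginally less terrible, so it absorbs nearly all the probability mass.
scores = {"english": -1250.0, "danish": -1241.0}
print(normalized_confidence(scores))
# {'english': ~0.0001, 'danish': ~0.9999}
```

The reported "confidence" measures only which known language fits least badly, not whether the text belongs to any of them.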

Undercover Accounts Ripe for Misuse

Finally, the use of undercover accounts or false identities on social media presents particular opportunities for mischief and privacy intrusions. Alias identities can be used to trick people into accepting connections they would not have permitted in real life, allowing law enforcement officers to see a wealth of information that they would not otherwise be privy to — including posts, pictures, and information about friends and family members — and even exchange and view private communications. In-person covert activity raises these same concerns but comes with built-in limitations: an officer interacting in person cannot easily pretend to be a target’s childhood best friend, or someone of a different race or ethnicity, or multiple people. None of these limitations applies to online covert activity.

The relative ease of posing as more than one person online implicates the Supreme Court’s growing recognition that effortless surveillance may “alter the relationship between citizen and government in a way that is inimical to democratic society.” [40: U.S. v. Jones, 565 U.S. 400, 416 (2012) (Sotomayor, J., concurring) (quoting U.S. v. Cuevas-Perez, 640 F.3d 272, 285 (7th Cir. 2011)) (internal quotation marks omitted).] Technological tools can populate fake accounts with a sufficient range of interests and connections to look legitimate. Some digital monitoring companies even create fake accounts in bulk and use them to scrape millions of data points from public social media accounts. This practice violates Facebook’s and Instagram’s policies, but as long as the surveillance companies evade detection, police departments can buy licenses to use these products to gather information on individuals and groups anonymously. [41: Jonathan Vanian, “Meta Sues Voyager Labs, Saying It Created Fake Accounts to Scrape User Data,” CNBC, January 12, 2023, https://www.cnbc.com/2023/01/12/meta-sues-voyager-labs-over-scraping-user-data.html; and Rachel Levinson-Waldman and Gabriella Sanchez, “Meta Sues Surveillance Firm That Worked with Police,” Brennan Center for Justice, January 26, 2023, https://www.brennancenter.org/our-work/analysis-opinion/meta-sues-surveillance-firm-worked-police. According to an amended complaint filed by Meta, Voyager continued to create fake accounts and scrape data from hundreds of thousands of Facebook users even after Meta served the company with multiple cease-and-desist letters and filed suit.]

At the same time, while many police departments have public-facing policies permitting personnel to use covert accounts in at least some circumstances, only a fraction of those policies incorporate baseline limitations on online undercover activity — such as requirements that accounts be used only when reasonable articulable suspicion of a crime exists or that they be subject to regular supervisory review — let alone the more robust restraints set out below. [42: Rachel Levinson-Waldman, “Directory of Police Department Social Media Policies,” Brennan Center for Justice, updated February 7, 2024, https://www.brennancenter.org/our-work/research-reports/directory-police-department-social-media-policies.] These loose controls have led to abuses: the Memphis Police Department used an undercover Facebook account to target racial justice activists, [43: Antonia Noori Farzan, “Memphis Police Used Fake Facebook Account to Monitor Black Lives Matter, Trial Reveals,” Washington Post, August 23, 2018, https://www.washingtonpost.com/news/morning-mix/wp/2018/08/23/memphis-police-used-fake-facebook-account-to-monitor-black-lives-matter-trial-reveals.] and the NAACP sued the city of Minneapolis for alleged discriminatory use of undercover social media accounts to target activists of color. [44: David Schuman, “NAACP Files Lawsuit Against Minneapolis for Alleged Discriminatory Social Media Surveillance,” WCCO (CBS News Minnesota), updated April 26, 2023, https://www.cbsnews.com/minnesota/news/naacp-files-lawsuit-against-minneapolis; and Minnesota Department of Human Rights v. City of Minneapolis, 27CV234177 (Minn. Dist. Ct. March 31, 2023), https://mn.gov/mdhr/assets/Court%20Enforceable%20Agreement_tcm1061–571942.pdf.] The lawsuit resulted in a consent decree between the city and the Minnesota Department of Human Rights that establishes new procedural and oversight mechanisms for undercover account use. [45: Jonah Kaplan, “MPD Settlement Agreement Approved, Altering the Future of Policing in Minneapolis,” WCCO (CBS News Minnesota), March 31, 2023, https://www.cbsnews.com/minnesota/news/minneapolis-state-officials-unanimously-approve-court-enforceable-settlement-agreement. Limitations on offline undercover operations have been an important part of consent decrees in places like New York City, where police have a long history of targeting political groups seeking racial justice. See Hamid Hassan Raza v. City of New York, No. 13CV3448 (E.D.N.Y. March 20, 2017), stipulation of settlement and order, exhibit A, 16–17, https://static1.squarespace.com/static/5c1bfc7eee175995a4ceb638/t/5d2f69ca7df2d700014089b8/1563388365355/Revised+Handschu+Guidelines.pdf.]

Use of a false name also directly violates Facebook’s terms of service. The company’s head of civil rights has emphasized that there is no exception to this policy for police, and Facebook has sent sharp letters of rebuke to both the Memphis and Los Angeles police departments. And Meta filed suit in 2022 against a company that uses fake accounts to scrape data from Facebook users and has advertised its services to the government. [46: Roy Austin, vice president of civil rights and deputy general counsel, Meta, letter to Michel Moore, chief, Los Angeles Police Department, November 11, 2021, https://about.fb.com/wp-content/uploads/2021/11/LAPD-Letter.pdf; Andrea Kirkpatrick, director and associate general counsel for security, Facebook, letter to Michael Rallings, director, Memphis Police Department, September 19, 2018, https://www.eff.org/document/facebook-letter-memphis-police-department-fake-accounts; and Levinson-Waldman and Sanchez, “Meta Sues Surveillance Firm That Worked with Police.” See also Mara Hvistendahl, “FBI Provides Chicago Police with Fake Online Identities for ‘Social Media Exploitation’ Team,” Intercept, May 20, 2022, https://theintercept.com/2022/05/20/chicago-police-fbi-social-media-surveillance-fake.]

Best Practices for Law Enforcement

Social media is here to stay, and no configuration of legal and policy safeguards will eliminate the possibility of harm or misuse. Circumstances will arise in which investigating a crime or ensuring public safety necessitates using social media to view or gather information. At a minimum, police agencies should have a legitimate law enforcement purpose to monitor or collect social media data, although that requirement alone would still risk permitting far too much online information gathering with too few guardrails.

Accordingly, and given the harms articulated above, local, regional, and state law enforcement agencies that use social media for investigative and other purposes should develop and implement policies and practices consistent with the recommendations outlined below. These policies should include substantial mechanisms for input from community members and experts in privacy, civil rights, and civil liberties, among other fields. Best practices will evolve as more information emerges about both the benefits and risks of this technology. Policies would also benefit from legislation making them legally enforceable through private lawsuits.

Social Media Use Policies

Agencies that use social media monitoring in furtherance of their official missions should have publicly available policies that describe their practices and set out restrictions and oversight requirements. Those policies should contain the following provisions:

1. Criminal Investigations

Social media data may be viewed, monitored, or collected only when an agency has established specific and articulable facts showing reasonable grounds to believe that the data is relevant and material to an ongoing criminal investigation. Information gathered from social media should be documented in the relevant investigative file as soon as practicable.

Data collected should not be shared with other law enforcement agencies absent either a showing of reasonable suspicion that the information contains evidence of criminal activity over which the receiving agency has jurisdiction, or relevance to an ongoing investigation or pending criminal trial in which the receiving agency is then engaged. Agencies should also have a memorandum of agreement in place confirming that the receiving agency will abide by equivalent limitations in any use or further dissemination of the data.

When agency use of social media is likely to yield information about First Amendment–protected rights, best practices regarding profiling and targeting of constitutionally protected activity (discussed below) should be followed.

2. Preparation for Public Events

Publicly available social media content may be monitored or viewed in advance of significant public events solely to determine the resources necessary to keep participants and the public safe. When agencies can make these determinations in ways that do not risk incidentally viewing First Amendment–protected information — such as by consulting a permit application or contacting an event organizer directly — these means are preferred. All social media monitoring should be approved by an individual at the rank of police chief or a named senior-level designee.

Social media surveillance should be undertaken only when specific, articulable, and credible facts demonstrate a public safety concern justifying the monitoring. Such concerns and the supporting facts should be documented in writing. The documentation should include a description of the social media searches to be made, including any search terms, individuals, or hashtags monitored; the justification(s) for those searches; and the factor(s) that make social media a necessary tool for making resource determinations.

A determination that a public safety concern exists should never be based to any degree on the constitutionally protected political or religious beliefs or the ethnic, racial, national, or religious identity of an individual or group, nor should it be based on violence at a previous event that resulted from police activity.

Only data relevant to a law enforcement agency’s resource and planning determinations to ensure public safety should be collected. If officers find no indication of criminal activity while monitoring, then social media data should not be retained past the event date. If online surveillance uncovers information connected to an existing criminal investigation, then that data may be retained in accordance with relevant statutes and departmental rules. [1: Global Advisory Committee, Recommendations for First Amendment–Protected Events for State and Local Law Enforcement Agencies, Office of Justice Programs, Department of Justice, December 2011, 12, https://bja.ojp.gov/sites/g/files/xyckuh186/files/media/document/Role_of_State_and_Local_Law_Enforcement_at_First_Amendment_Events_Reference_Card.pdf.]

Data should not be shared with other law enforcement agencies absent a demonstrable showing of necessity to address a specific and articulated public safety concern and a memorandum of agreement with the receiving agency confirming that it will abide by the limitations set out above in any use or further dissemination of the data.

3. Evaluation of Social Media Information

Any information collected from social media must be evaluated for validity and reliability prior to being used as criminal intelligence and must be authenticated before being used in a criminal investigation. [2: See Global Advisory Committee, Developing a Policy on the Use of Social Media, 15–16.]

4. Controls for Undercover Account Use

Because of the risk of abuse and the inherent lack of transparency, as well as platform policies prohibiting their use, undercover accounts should be used extremely sparingly if at all, and only with a policy in place that requires:

  • documentation and supervisory confirmation that no less invasive means are available, and that a subpoena or warrant to the social media platform is impossible or would not accomplish the law enforcement purpose;
  • a showing that use of the account is likely to obtain information from someone reasonably suspected of criminal activity related to a small category of serious crimes (as defined in advance), and that law enforcement expects the information is necessary to a properly initiated investigation of such crimes;
  • documentation and supervisory approval of the name on the undercover account, the officer who will use it, and the purpose for its use;
  • the names of the persons or groups with whom the officer will seek to connect;
  • regular, ongoing reviews at intervals no longer than 45 days to confirm the account’s continued necessity for the approved purpose; and
  • automatic termination of account access unless the chief of police or a named senior designee explicitly authorizes an extension.

Permission to impersonate an actual person in the course of an approved criminal investigation should be limited to a narrow set of circumstances (e.g., assisting in identifying an online stalker). In addition to satisfying the restrictions above, the officer must obtain the permission of the person being impersonated. Officers may not change the individual’s password or alter other private or sensitive information related to the account without explicit consent, and may not take any actions — including interacting with individuals or groups or posting on behalf of the impersonated individual — beyond the predetermined, limited set of actions necessary to carry out the investigation. Furthermore, the impersonated person must be allowed to withdraw permission at any point, at which time the officer must immediately cease use of and access to the account. All undercover activity on the impersonated account must be properly documented in the investigative file.

5. Protections Against Profiling and Targeting of Constitutionally Protected Activity

Collection or monitoring of social media is prohibited when it is based to any degree on: (1) the race, religion, ethnic or national origin, gender or gender identity, sexual orientation or characteristics, or immigration status of an individual or group, except when trustworthy information, specific and limited in time and location, links persons possessing these traits to the description of individuals suspected of criminal activity; or (2) a person’s exercise of First Amendment freedoms, or where the monitoring is reasonably likely to chill the exercise of such freedoms, except where there is reasonable suspicion of criminal activity or planning and clear evidence indicates that the First Amendment–protected activity directly relates to the suspected criminal activity or planning, or in the narrow context of event planning (subject to the restrictions set forth above).

When social media use during a criminal investigation is reasonably likely to yield information about the exercise of First Amendment–protected rights, data collection should not commence until the following measures have been met:

  • Completion of documentation clearly demonstrating that (i) the expected collection of information about First Amendment rights is unavoidably necessary for the proper conduct of the investigation and (ii) every reasonable precaution has been employed to minimize the collection and retention of information about, or interference with, First Amendment rights [3: See ACLU of Tennessee v. City of Memphis, No. 17CV02120 (W.D. Tenn. September 21, 2020), amended judgment and decree (“Modified Kendrick Decree”), 9, https://www.memphispdmonitor.com/_files/ugd/03602e_a3fae3908fa74b2aa1e325ce181de427.pdf]; and
  • Review by a supervisor confirming specifically that each factor immediately above has been met and approving the social media collection. Social media data collection should be subject to regular, ongoing reviews at intervals no longer than 45 days to confirm continued adherence to each condition above. These reviews should be conducted by the chief of police or a named senior designee, and they should include written documentation of a decision to reapprove or terminate the collection.

6. Whistleblower Protections

Protections to prevent retaliation against internal whistleblowers who disclose violations or abuses of social media monitoring practices — along with effective redress mechanisms — must be in place.

7. Contracting Limitations

The police department or its contracting authority must require any vendor to agree to be bound by any of the above conditions relevant to the vendor’s services, and should obtain from the vendor a demonstration or other explanation of how it will comply. Law enforcement agencies may not contract with any vendor that cannot or will not adhere to these requirements.

Social Media Use Reports

Every law enforcement agency that uses social media monitoring for investigative, intelligence, or event-planning purposes should publish a written report at least every two years. At a minimum, these reports should include the following information (a sketch of one possible machine-readable format follows the list):

  • the number of criminal investigations and cases in which social media was used to gather information, monitor individuals, or collect evidence;
  • the number of covert accounts used by the agency, broken down by division or other relevant sub-unit;
  • the number of criminal investigations and cases in which covert accounts were used, including both the total number and the number disaggregated by category of crime, as well as the length of time in each investigation for which a covert account was approved;
  • of the investigations in which covert accounts were used, the number that used impersonating accounts, including both the total number and the number disaggregated by category of crime;
  • the average length of time that covert accounts remained open;
  • the number of criminal investigations for which officers collected information about the exercise of First Amendment rights, including the total number, the number disaggregated by category of crime, and the number that involved any violations of this policy;
  • the number of public events (other than those hosted by the department itself) for which officers viewed social media content or collected online data, including the date of each event; and
  • the number of events for which agencies retained social media data beyond the event date or for which provisions of this policy were otherwise violated.
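
To illustrate how these disclosures could be standardized, here is a minimal sketch of one possible machine-readable format for the report. The field names and types are our own suggestion, not an existing reporting standard.

```python
from dataclasses import dataclass

@dataclass
class CovertAccountStats:
    total_accounts: int
    accounts_by_unit: dict[str, int]           # division/sub-unit -> count
    investigations_total: int
    investigations_by_crime: dict[str, int]    # crime category -> count
    approved_duration_days: list[int]          # approval length per investigation
    impersonation_investigations_total: int
    impersonation_by_crime: dict[str, int]
    avg_days_account_open: float

@dataclass
class SocialMediaUseReport:
    period_start: str                          # ISO 8601 date, e.g. "2024-01-01"
    period_end: str
    investigations_using_social_media: int
    covert_accounts: CovertAccountStats
    first_amendment_collections_total: int
    first_amendment_collections_by_crime: dict[str, int]
    first_amendment_policy_violations: int
    monitored_public_event_dates: list[str]    # excludes department-hosted events
    events_with_retention_or_policy_violations: int
```

A fixed structure like this would let auditors and the public compare disclosures across agencies and reporting periods rather than parsing free-form narratives.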

Social Media Monitoring Product Approvals

Jurisdictions should hold public hearings and obtain local government oversight and approval before contracting with vendors that facilitate the collection or analysis of social media data for the permitted purposes described above. The oversight and approval process must weigh costs and effects, including potential repercussions for marginalized communities and First Amendment–protected activities. The process should also establish accountability and oversight measures and should require a demonstration of efficacy evaluated and validated by an independent third party. Any contract must include stringent oversight, auditing, and public disclosure measures.

Third-Party Audits

Independent oversight entities should audit every law enforcement agency’s social media monitoring practices and disclosures on an ongoing basis to ensure compliance with departmental policies and with constitutional protections and safeguards. The results of each audit should be posted publicly on the agency’s website.

Related Resources

Directory of Police Department Social Media Policies

While many departments have policies addressing the use of social media data, most are too permissive or provide little transparency about actual practices.


Study Reveals Inadequacy of Police Departments’ Social Media Surveillance Policies

Hundreds of law enforcement agencies lack the safeguards needed to prevent officers from misusing social media to target First Amendment activity and minorities. Our best practices show how to fill the gaps.


Records Show DC and Federal Law Enforcement Sharing Surveillance Info on Racial Justice Protests

Officers tracked social media posts about racial justice protests with no evidence of violence, threatening First Amendment rights.

Documents Reveal How DC Police Surveil Social Media Profiles and Protest Activity

  • FTC Must Investigate Meta and X for Complicity with Government Surveillance
  • We’re Suing the NYPD to Uncover Its Online Surveillance Practices
  • Senate AI Hearings Highlight Increased Need for Regulation
  • Documents Reveal Widespread Use of Fake Social Media Accounts by DHS
  • Informed Citizens Are Democracy’s Best Defense


Hate speech and disinformation in South Africa’s elections: big tech make it tough to monitor social media


Guy Berger, Professor Emeritus, Rhodes University

Disclosure statement

Guy Berger has received funding from the thinktank Research ICT Africa, where he is a Distinguished Research Fellow.

Rhodes University provides funding as a partner of The Conversation AFRICA.


There’s a growing global movement to ensure that researchers can get access to the huge quantity of data assembled and exploited by digital operators.

Momentum is mounting because it’s becoming increasingly evident that data is power. And access to it is the key – for a host of reasons, not least transparency, human rights and electoral integrity.

But there’s currently a massive international asymmetry in access to data.

In the European Union and the US, some progress has been made. For example, EU researchers studying risks have a legal right of access. In the US too, some companies have taken voluntary steps to improve access.

The situation is generally very different in the global south.

The value of data access can be seen vividly in the monitoring of social media during elections. South Africa is a case in point. A powerful “big data” analysis was recently published about online attacks on women journalists there, raising the alarm about escalation around – and after – the election on 29 May.

A number of groups working with data are attempting to monitor hate speech and disinformation on social media ahead of South Africa’s national and provincial polls. At a recent workshop involving 10 of these initiatives, participants described trying to detect co-ordinated “information operations” that could harm the election, including via foreign interference.

But these researchers can’t get all the data they need because the tech companies don’t give them access.

This has been a concern of mine since I first commissioned a handbook about harmful online content – Journalism, Fake News & Disinformation: Handbook for Journalism Education and Training – six years ago. My experience since then includes overseeing a major UN study called Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression .

Over the years, I’ve learnt that to dig into online disinformation, you need to get right inside the social media engines. Without comprehensive access to the data they hold, you’re left in relative darkness about the workings of manipulators, the role of misled punters and the fuel provided by mysterious corporate algorithms.

Looking at social media in the South African elections, the researchers at the recent workshop shared how they were doing their best with what limited data they had. They were all monitoring text on social platforms. Some were monitoring audio, while a few were looking at “synthetic content” such as material produced with generative AI.

About half of the ten initiatives were tracking followers, impressions and engagement. Nearly all were checking content on Twitter; at least four were monitoring Facebook; three covered YouTube; and two included TikTok.

WhatsApp was getting scant attention. Though most messaging on the service is encrypted, the company knows (but doesn’t disclose) which registered user is bulk sending content to which others, who forwards this on, whether group admins are active or not, and a host of other “metadata” details that could help monitors to track dangerous trajectories.

But the researchers can’t do the necessary deep data dives. They’ve set out the difficult data conditions they work under in a public statement explaining how they are severely constrained in their access to data.

One data source they use is expensive (and limited) packages from marketing brokers (who in turn have purchased data assets wholesale from the platforms).

A second source is from analysing published posts online (which excludes in-group and WhatsApp communications). Using scraped data is limited and labour-intensive. Findings are superficial. And it’s risky: scraping is forbidden in most platforms’ terms of use.

None of the researchers covering South Africa’s elections have direct access to the platforms’ own Application Programme Interfaces (APIs). These gateways provide a direct pipeline into the computer servers hosting data. This major resource is what companies use to profile users, amplify content, target ads and automate content moderation. It’s an essential input for monitoring online electoral harms.

In the EU, the Digital Services Act enables vetted researchers to legally demand and receive free, and potentially wide-ranging, API access to search for “systemic risks” on the platforms.

It’s also more open in the US. There, Meta, the multinational technology giant that owns and operates Facebook, Instagram and WhatsApp, cherry-picked 16 researchers during the 2020 elections (only five of those projects have published their findings). The company has subsequently outsourced the judging of Facebook and Instagram access requests (from anywhere worldwide) to the University of Michigan.

One of the South African researchers tried that channel, without success.

Other platforms such as TikTok are still making unilateral decisions, even in the US, as to who has data access.

Outside the EU and the US, it’s hard even to get a dialogue going with the platforms.

The fightback

Last November, I invited the bigger tech players to join a workshop in Cape Town on data access and elections in Africa. There was effectively no response.

The same pattern is evident in an initiative earlier this year by the South African National Editors’ Forum. The forum suggested a dialogue around a human rights impact assessment of online risks to the South African elections. They were ignored.

Against this background, two South African NGOs – the Legal Resources Centre and the Campaign for Free Expression – are using South Africa’s expansive Promotion of Access to Information Act to compel platforms to disclose their election plans.

But the companies have refused to respond, claiming that they do not fall under South African jurisdiction. This has led to appeals being launched to the country’s Information Regulator to compel disclosures.

Further momentum for change may also come from Unesco, which is promoting international Guidelines for the Governance of Digital Platforms. These highlight transparency and the issue of research access. Unesco has also published a report that I researched titled Data Sharing to Foster Information as a Public Good.

In the works is an incipient African Alliance for Access to Data, now involving five pan-African formations. This coalition (I’m interim convenor) is engaging the African Union on the issues.

But there’s no guarantee yet that all this will lead the platforms to open up data to Africans and researchers in the global south.




Your child’s brain is developing rapidly, which makes them more susceptible to the harms of social media. And though they might put on a brave face, they could be hurting underneath. It’s time to unmask the harms of social media.

Up to 95% of youth ages 13–17 report using a social media platform, with more than a third saying they use social media “almost constantly.”

Concern about children and teens using social media is growing. Social media can be deeply harmful for youth, and kids need less screen time for healthy growth and development. We can work together to establish social media boundaries, model healthy social media use, and teach children how to use it safely.


The harms of social media

  • Teens who spent more than 3 hours per day on social media faced double the risk of experiencing poor mental health outcomes.
  • Nearly half of teens ages 13 to 17 said using social media makes them feel worse.
  • Almost 60% of teenage girls say they’ve been contacted by a stranger on social media platforms in ways that make them feel uncomfortable.
  • According to a survey of 8th and 10th graders, the average time spent on social media is 3.5 hours per day, and almost 15% (1 in 7) spend 7+ hours per day on social media.
  • More than 60% of teens are regularly exposed to hate-based content.
  • Excessive social media use has been linked to sleep problems, attention problems, and feelings of exclusion among teenagers.
  • In a review of 36 studies, a consistent relationship was found between cyberbullying on social media and depression among children of all ages.
  • In a national survey of girls ages 11 to 15, one-third or more say they feel “addicted” to a social media platform.
  • More than half of teens report that it would be hard to give up social media.


What do Utah parents think about social media?

It can be scary and intimidating raising kids in a world filled with technology—predators, inappropriate content, bullying, and a distorted reality are just some of the concerns you might have. But you’re not alone! We asked Utah parents what they thought about social media, its effects on their children, and what they’re doing to help protect their kids.

  • 88% believe social media has a detrimental impact on children and youth.
  • 63% were concerned about social media impacting their child’s mental health.
  • 60% were concerned about social media impacting their child’s body image.
  • 94% enforce boundaries on their children’s social media usage, such as setting time limits, content restrictions, and age limits.
  • 84% encourage their children to unplug from social media and participate in other activities.


What can you do to protect your child?


Reconsider allowing your child to have social media and encourage them to wait to use it until they are an adult.

Governor Spencer J. Cox

  • Create a family media plan. Agreed-upon expectations can help establish healthy technology boundaries at home, including around social media use. A family media plan can promote open family discussion and rules about media use and include topics such as balancing screen/online time, content boundaries, and not disclosing personal information.
  • Create tech-free zones and encourage children to foster in-person relationships. Electronics can be a distraction after bedtime and can interfere with sleep. Consider restricting the use of phones, tablets, and computers for at least 1 hour before bedtime and through the night. Keep family mealtimes and in-person gatherings device-free to build social bonds and engage in two-way conversation. Help your child develop social skills and nurture his or her in-person relationships by encouraging unstructured and offline connections with others and making unplugged interactions a daily priority. Learn more from the American Academy of Pediatrics.
  • Model responsible social media behavior. Children often learn behaviors and habits from what they see around them. Parents can set a good example of what responsible and healthy social media use looks like by limiting their own use and being mindful of social media habits.
  • Teach kids about technology and empower them to be responsible online participants at the appropriate age. Discuss with children the risks of social media as well as the importance of respecting privacy and protecting personal information in age-appropriate ways. Have conversations with children about who they are connecting with, their privacy settings, their online experiences, and how they are spending their time online. Encourage them to seek help should they need it. Learn more from the American Academy of Pediatrics Center of Excellence on Social Media and Youth Mental Health and the American Psychological Association Health Advisory on Social Media Use in Adolescence.
  • Report cyberbullying and online abuse and exploitation. Talk to your child about cyberbullying and what to do if they are being harassed through email, text message, online games, or social media. Make sure they understand the dangers of being contacted by an adult online, especially if they are being asked to share private images or perform intimate or sexual acts.
  • Work with other parents to help establish shared norms and practices and to support programs and policies around healthy social media use. Despite what your kids may say, you’re not the only parent who won’t let their children have social media or who sets family rules about phones and technology.

In 2023, the Utah State Legislature passed Senate Bill 152 and House Bill 311, enacting the Utah Social Media Regulation Acts.

Research & resources

  • Gabb: What is Sextortion? Everything You Need to Know
  • Gabb Study: When Teens Take a Break from Social Media
  • The Common Sense Census: Media Use by Tweens and Teens, 2021
  • U.S. Surgeon General’s Advisory: Social Media and Youth Mental Health
  • The Atlantic: All Work and No Play: Why Your Kids Are More Anxious, Depressed
  • American Academy of Pediatrics: National Center of Excellence on Social Media and Youth Mental Health
  • American Academy of Pediatrics: Media and Young Minds
  • American Psychological Association: Health advisory on social media use in adolescence
  • University of Utah Health: The impact of social media on teens’ mental health and Tips for healthy social media use: Parents and teens
  • PBS Utah: Social media and youth mental health
  • Teen Mental Health Is Plummeting, and Social Media is a Major Contributing Cause
  • Social media and mental health
  • U.S. Surgeon General’s Advisory: Our Epidemic of Loneliness and Isolation
  • Haidt, J., & Twenge, J. (ongoing). Adolescent mood disorders since 2010: A collaborative review. Unpublished manuscript, New York University.


'Hugely damaging': Most Americans are being harassed online when using Facebook, Twitter and Reddit

Online hate and harassment rose sharply in the United States this year across demographics, leading to the highest rates in the country since 2020, according to a new study released Tuesday.

The Anti-Defamation League’s annual survey found more than half of Americans reported experiencing online hate and harassment at some point in their lives, up from 40% in 2022. About 33% of adults said they experienced online hate, up from 23% in 2022. The increase was starker for teens, with 51% reporting online hate this year, up from 36%.

The poll surveyed 2,139 adults and 550 youth in March and April, asking about their lifetime experiences with online hate as well as what they had seen and heard in the past 12 months. The findings have a margin of error of plus or minus 2 percentage points for adults and 4 percentage points for youth. 
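For context, those margins of error are consistent with the standard normal approximation for a proportion at the reported sample sizes. The R sketch below is a back-of-the-envelope check, not ADL's own design-based calculation:

    # 95% margin of error for a proportion under the normal approximation,
    # using the worst case p = 0.5. Sample sizes are those reported above.
    moe <- function(n, p = 0.5, z = 1.96) z * sqrt(p * (1 - p) / n)
    round(100 * moe(2139), 1)  # adults: ~2.1 percentage points
    round(100 * moe(550), 1)   # youth:  ~4.2 percentage points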

“Online hate and harassment is a really serious problem,” Jordan Kraemer, director of research at ADL, said. “Even when it stays online, it’s hugely damaging and the people to whom it’s the most damaging are often those who are not in a position of power to make the necessary changes.” 

Which social media platforms see hate speech?  

Although Facebook is still the platform where harassment occurs the most, attacks have been steadily decreasing on the site, according to poll respondents. Of those who reported harassment, 54% indicated that it took place on Facebook, compared with 66% who said that in 2021.

“Although we saw a slight decline on Facebook, we saw increases on TikTok, Twitter, Reddit, and Instagram, so it’s certainly possible that it’s happening elsewhere as opposed to being reduced overall,” Kraemer said. 

Respondents who reported hate and harassment taking place on Reddit jumped from 5% in 2022 to 15% in 2023. A similar increase was reported by respondents on TikTok, with 19% saying they had been attacked on the platform this year compared with 15% saying that in 2022. About 27% of respondents said they were attacked on Twitter, compared with 21% of respondents saying that in 2022.

In its annual social media safety index, GLAAD, a nonprofit focused on LGBTQ advocacy, gave low and failing scores to all five main social media platforms – Facebook, Instagram, TikTok, YouTube and Twitter – on LGBTQ safety. Twitter was the most dangerous social platform for LGBTQ people, according to GLAAD.

Oliver Haimson, an assistant professor at the University of Michigan School of Information, said through his research he found that especially for the transgender community, Facebook is not typically used as the primary social media platform to explore gender and sexual identities. He said this could be a reason why people are experiencing less harassment on Facebook compared with before.

“Some of my past work has found that people use Facebook more to keep up with friends and family and people that they know in the physical world,” Haimson said. “So, people aren’t usually doing a lot of playing around with identity on Facebook.” 

Online hate targets LGBTQ people and transgender youth 

Increases in harassment rates are more pronounced among LGBTQ populations and marginalized communities of color, with about 75% of transgender respondents saying online hate and harassment is a problem.

Ross von Metzke, director of communications for the It Gets Better Project, a nonprofit aiming to uplift and empower LGBTQ youth, said the rise in online hate and harassment against LGBTQ and transgender youth in particular can be attributed to the rise in bills being proposed about these communities. 

More than 500 bills have been introduced by Republicans this year that affect the LGBTQ population. The majority of these bills specifically impact transgender life by limiting access to health care, bathrooms and sports. 

“That sort of rhetoric and behavior translates to online hate. There’s a direct line,” von Metzke said. “If you are seeing a rise in hateful bigoted behavior among school administrators, or lawmakers or parents … that’s going to bleed over into the digital space.” 

The ADL survey found online harassment can lead to withdrawal from online spaces. Experts noted this is particularly harmful for LGBTQ youth, who often utilize social media as a form of gender and sexual exploration. 

“This is really especially difficult because so many LGBTQ people and trans people in particular use online spaces to build community and it’s really important, especially for people who are more isolated,” Haimson said.

ADL said it strongly recommends tech companies enact policies that combat hate and harassment on their platforms in a transparent and equitable way. This should include regaining the public trust through regularly conducted individual audits, the ADL report stated. 

Addict Behav Rep, vol. 17, June 2023

Social media use and abuse: Different profiles of users and their associations with addictive behaviours

Deon Tullett-Prado (a), Vasileios Stavropoulos (b), Rapson Gomez (c)

(a) Victoria University, Australia; (b) University of Athens, Greece; (c) Federation University, Australia

Associated Data

The data are made available via a linked document.

Introduction

Social media use has become increasingly prevalent worldwide. Simultaneously, concerns surrounding social media abuse/problematic use, which resembles behavioural and substance addictions, have proliferated. This has prompted the introduction of ‘Social Media Addiction’ [SMA] as a condition requiring clarification regarding its definition, assessment and associations with other addictions. Thus, this study aimed to: (a) advance knowledge on the typology/structure of SMA symptoms experienced and (b) explore the association of these typologies with addictive behaviours related to gaming, gambling, alcohol, smoking, drug abuse, sex (including porn), shopping, internet use, and exercise.

A sample of 968 adults [M_age = 29.5, SD_age = 9.36; n_males = 622 (64.3 %), n_females = 315 (32.5 %)] was surveyed regarding their SMA experiences, using the Bergen Social Media Addiction Scale (BSMAS). Their experiences of gaming, internet, gambling, alcohol, cigarette, drug, sex, shopping and exercise addictions were additionally assessed, and latent profile analysis (LPA) was implemented.

Three distinct profiles were revealed, based on the severity of one’s SMA symptoms: ‘low’, ‘moderate’ and ‘high’ risk. Subsequent ANOVA analyses suggested that participants classified as ‘high’ risk reported significantly higher levels of behaviours related to internet, gambling, gaming, sex and, in particular, shopping addictions.

Conclusions

Results support SMA as a unitary construct, while they potentially challenge the distinction between technological and behavioural addictions. Findings also imply that the assessment of those presenting with SMA behaviours, as well as prevention and intervention targeting at-risk SMA groups, should consider other comorbid addictions.

1. Introduction

Social media – a form of online communication in which users create profiles, generate and share content, and form online social networks/communities ( Obar & Wildman, 2015 ) – is quickly growing to become almost all-consuming in the media landscape. Currently the number of daily social media users exceeds 53 % (∼4.5 billion users) of the global population, approaching 80 % among more developed nations ( Countrymeters, 2021 , DataReportal, 2021 ). Due to technological advancements, the rise of ‘digital natives’ (i.e. children and adolescents raised with and familiarised with digital technology) and coronavirus pandemic lockdowns, the frequency and duration of social media usage has been steadily increasing, as people compensate for a lack of face-to-face interaction or grow up with social media as a normal part of their lives (i.e. ∼2 h and 27 min average daily; DataReportal, 2021 , Heffer et al., 2019 , Zhong et al., 2020 , Nguyen, 2021 ). Furthermore, social media is increasingly involved in various domains of life including education, economics and even politics, to the point where engagement with the economy and wider society almost necessitates its use, driving the continued proliferation of social media use ( Calderaro, 2018 , Nguyen, 2021 , Mabić et al., 2020 , Mourão and Kilgo, 2021 ). This societal shift towards increased social media use has had some positive benefits, serving to facilitate the creation and maintenance of social groups, increase access to opportunities for career advancement and create wide-ranging and accessible education options for many users ( Calderaro, 2018 , Prinstein et al., 2020 , Bouchillon, 2020 , Nguyen, 2021 ). However, for a minority of users – roughly 5–10 % ( Bányai et al., 2017 , Luo et al., 2021 , Brailovskaia et al., 2021 ) – social media use has become excessive, to the point where it dominates one’s life, similarly to an addictive behaviour – a state known as 'problematic social media use' ( Sun & Zhang, 2020 ). For these users, social media is experienced as the single most important activity in life, while compromising their other roles and obligations (e.g. family, romance, employment; Sun and Zhang, 2020 , Griffiths and Kuss, 2017 ). This situation is associated with low mood/depression, the compromise of one’s identity, social comparison leading to anxiety and self-esteem issues, work and academic/career difficulties, compromised sleep schedules and physical health, and even social impairment leading to isolation ( Anderson et al., 2017 , Sun and Zhang, 2020 , Gorwa and Guilbeault, 2020 ).

1.1. Problematic social media engagement in the context of addictions

Problematic social media use is markedly similar to the experience of substance addiction, leading some to model it as a behavioural addiction – social media addiction (SMA; Sun and Zhang, 2020 ). In brief, an addiction loosely refers to a state where an individual experiences a powerful craving to engage in a behaviour, and an inability to control their related actions, such that it begins to negatively impact their life ( Starcevic, 2016 ). Although initially the term referred to substance addictions induced by psychotropic drugs (e.g., amphetamines), it later expanded to include behavioural addictions ( Chamberlain et al., 2016 ). These reflect a fixation and lack of control, similar to those experienced in the abuse of substances, related to one’s excessive/problematic behaviours ( Starcevic, 2016 ).

Indeed, behavioural addictions, such as gaming, gambling and (arguably) social media addiction (SMA) share many common features with substance related addictions ( Zarate et al., 2022 ). Their similarities extend beyond the core addiction manifestations of fixation, loss of control and negative life consequences ( Grant et al., 2010 , Bodor et al., 2016 , Martinac et al., 2019 , Zarate et al., 2022 ). For instance, it has been evidenced that common risk factors/mechanisms (e.g., low impulse control), behavioural patterns (e.g., chronic relapse; sudden “spontaneous” quitting), ages of onset (e.g., adolescence and young adulthood) and negative life consequences (e.g., financial and legal difficulties) are similar between the so-called behavioural addictions and formally diagnosed substance addictions ( Grant et al., 2010 ). Moreover, such commonalities often accommodate the concurrent experience of addictive presentations, and/or even the substitution/flow from one addiction to the next (e.g., gambling and alcoholism; Bodor et al., 2016 , Martinac et al., 2019 , Grant et al., 2010 ).

With these features in mind, SMA has been depicted as characterized by the following six symptoms: a deep preoccupation with social media use (salience); use of social media to increase positive feelings and/or buffer negative feelings (mood modification); the requirement for progressively increasing time-engagement to get the same effect (tolerance); symptoms such as irritability and frustration when access is reduced (withdrawal); the development of tensions with other people due to under-performance across several life domains (conflict); and reduced self-regulation resulting in an inability to reduce use (relapse; Andreassen et al., 2012 , Brown, 1993 , Griffiths and Kuss, 2017 , Sun and Zhang, 2020 ).

This developing model of SMA has been gaining popularity as the most widely used conceptualisation of problematic social media use, guiding the development of relevant measurement tools ( Andreassen et al., 2012 , Haand and Shuwang, 2020 , Prinstein et al., 2020 , Van den Eijnden et al., 2016 ). However, SMA is not currently uniformly accepted as an understanding of problematic social media use. Some critics have labelled the SMA model a premature pathologisation of ordinary social media use behaviours with low construct validity and little evidence for its existence, often inviting alternative proposed classifications derived from cognitive-behavioural or contextual models ( Sun & Zhang, 2020 ; Panova & Carbonell, 2018 ; Moretta, Buodo, Demetrovics & Potenza, 2022 ). Furthermore, the causes, risk factors and consequences of SMA, as well as the measures employed in its assessment, have yet to be elucidated in depth, with research in the area being largely exploratory in nature ( Prinstein et al., 2020 , Sun and Zhang, 2020 ). In this context, what functional, regular and excessive social media use behaviours may involve has also been debated ( Wegmann et al., 2022 ). Thus, there is a need for further research clarifying the nature of SMA, identifying risk factors and related negative outcomes, as well as potential methods of treatment ( Prinstein et al., 2020 , Sun and Zhang, 2020 , Moretta et al., 2022 ).

Two avenues important for realizing these goals (and the focus of this study) involve: a) profiling SMA behaviours in the broader community, and b) decoding their associations with other addictions. Profiling these behaviours would involve identifying groups of people with particular patterns of use rather than simply examining trends in behaviour across the greater population. This would allow for clearer understandings of the ways in which different groups experience SMA and a more person-centred analysis (i.e., focused on finer understandings of personal experiences; Bányai et al., 2017 ). Moreover, when combined with analyses of association, it can allow for assertions not only about whether SMA associates with a variable, but about which components of the experience of SMA associate with a variable, allowing for more nuanced understandings. One such association with much potential for exploration is that of SMA with other addictions (i.e., how does a certain SMA type differentially relate to other addictive behaviours, such as gambling and/or substance abuse?). Such knowledge would be useful, due to the shared common features and risk factors between addictions. It would allow for a greater understanding of the likelihood of comorbid addictions, or of flow from one addiction to the next ( Bodor et al., 2016 , Martinac et al., 2019 , Grant et al., 2010 ). However, the various links between different addictions are not identical, with alcoholism (for example) associating less strongly with excessive/problematic internet use than with problematic/excessive (so-called “addictive”) sex behaviours ( Grant et al., 2010 ). In that line, some studies have suggested the consideration of different addiction subgroups (e.g., substance, behavioural and technology addictions; Marmet et al., 2019 ), and/or different profiles of individuals being prone to manifest some addictive behaviours more than others ( Zilberman et al., 2018 ). Accordingly, one may assume that distinct profiles of those suffering from SMA behaviours may be more at risk for certain addictions over others, rather than for addictions in general ( Zarate et al., 2022 ).

Understanding these varying connections could be vital for SMA treatment. Co-occurring addictions often reinforce each other through their behavioural effects. Furthermore, when treatment targets only a single addiction type, other addictions an individual is vulnerable to can come to the fore ( Grant et al., 2010 , Miller et al., 2019 ). Thus, a holistic view of addictive vulnerability may require consideration ( Grant et al., 2010 , Miller et al., 2019 ). This makes the identification of individual SMA profiles, as well as any potential co-occurring addictions, pivotal for more efficient assessment, prevention and intervention of SMA behaviours.

To the best of the authors’ knowledge, four studies to date have attempted to explore SMA profiles. Three of those were conducted predominantly with European adolescent samples, and varied in terms of the type and number of profiles detected ( Bányai et al., 2017 , Brailovskaia et al., 2021 , Luo et al., 2021 , Cheng et al., 2022 ). The fourth was conducted with English-speaking adults from the United Kingdom and the United States ( Cheng et al., 2022 ). Of extant studies, Bányai et al. (2017) identified three profiles varying quantitatively (i.e., in terms of their SMA symptoms’ severity) across a low, moderate and high range. In contrast, Brailovskaia et al. (2021) and Luo et al. (2021) identified four and five profiles, respectively, that varied both quantitatively and qualitatively in terms of the type of SMA symptoms reported. Brailovskaia et al. (2021) proposed the ‘low symptom’, ‘low withdrawal’ (i.e., lower overall SMA symptoms with distinctively lower withdrawal), ‘high withdrawal’ (i.e., higher overall SMA symptoms with distinctively higher withdrawal) and ‘high symptom’ profiles. Luo et al. (2021) supported the ‘casual’, ‘regular’, ‘low risk high engagement’, ‘at risk high engagement’ and ‘addicted’ user profiles, which demonstrated progressively higher SMA symptom severity alongside significant differences regarding mood modification, relapse, withdrawal and conflict symptoms, which distinguished the low and high risk ‘high engagement’ profiles. Finally, considering the occurrence of different SMA profiles in adults, Cheng and colleagues (2022) supported the occurrence of ‘no-risk’, ‘at risk’ and ‘high risk’ social media users in both US and UK populations, with the UK sample showing a lower proportion of the ‘no-risk’ profile (i.e. UK = 55 % vs US = 62.2 %) and a higher percentage of the high risk profile (i.e. UK = 11.9 % vs US = 9.1 %). Thus, considering the number of identified profiles best describing the population of social media users, Cheng and colleagues’ (2022) findings were similar to Bányai and colleagues’ (2017) suggestions for SMA behaviour profiles of adolescents. At this point it should be noted that none of the four studies exploring SMA behaviour profiles to date has taken into consideration different profile parameterizations, meaning that potential differences in the heterogeneity/variability of those classified within the same profile were not considered (e.g. some profiles may be looser/more inclusive than others; Bányai et al., 2017 , Brailovskaia et al., 2021 , Luo et al., 2021 , Cheng et al., 2022 ).

The lack of convergence regarding the optimum number and description of occurring SMA profiles, as well as the age, cultural and parameterization limitations of the four available SMA profiling studies, invites further investigation. This is especially evident in light of preliminary evidence confirming one’s SMA profile may link more to certain addictions than others ( Zarate et al., 2022 ). Indeed, those suffering from SMA behaviours have been shown to display heightened degrees of alcohol and drug use and a vulnerability to internet addiction in general, while presenting lower proneness towards exercise addiction and tobacco use ( Grant et al., 2010 , Anderson et al., 2017 , Duradoni et al., 2020 , Spilkova et al., 2017 ). In terms of gambling addiction, social media addicts display similar results on tests of value-based decision making as gambling addicts ( Meshi et al., 2019 ). Finally, regarding shopping addiction, the proliferation of advertisements for products online, and the ease of access via social media to online stores, could be assumed to have an intensifying SMA effect ( Rose & Dhandayudham, 2014 ). Aside from these promising, yet relatively limited findings, the assessed connections between SMA and other addictions tend to be addressed either in isolation (e.g., SMA with gambling only and not multiple other addiction forms; Gainsbury et al., 2016a , Gainsbury et al., 2016b ) or in a variable- (and not person-) focused manner (e.g., higher levels of SMA relate with higher levels of drug addiction; Spilkova et al., 2017 ), which overlooks an individual’s profile. These profiles are vitally needed, as knowing the type of individual who may experience a series of disparate addictions is paramount for identifying at-risk social media users and populations in need of more focused prevention/intervention programs ( Grant et al., 2010 ). Hence, using person-focused methods such as latent profile analysis (LPA), which address the ways in which distinct variations/profiles in SMA behaviours may occur and how these relate to other addictions, is imperative ( Lanza & Cooper, 2016 ).

1.2. Present study

To address this research priority, while considering SMA behaviours as being normally distributed (i.e., a minimum–maximum continuum) across the different profiles of users in the general population, the present Australian study uses a large community sample, solid psychometric measures and a sequence of LPA models differing in parameterization, aiming to: (a) advance past knowledge on the typology/structure of the SMA symptoms one experiences and (b) innovatively explore the association of these typologies with a comprehensive list of addictive behaviours related to gaming, gambling, alcohol, smoking, drug abuse, sex (including porn), shopping, internet use, and exercise.

Based on Cheng and colleagues (2022) and Bányai and colleagues (2017), it was envisaged that three profiles arrayed in terms of ascending SMA symptom severity would likely be identified. Furthermore, guided by past literature supporting closer associations of SMA with technological and behavioural addictions than with substance related addictions, it was hypothesized that those classified in higher SMA risk profiles would report higher symptoms of other technological and behavioural addictions, such as those related to excessive gaming and gambling, than of drug addiction ( Chamberlain and Grant, 2019 , Zarate et al., 2022 ).

2.1. Participants

The current study was conducted in Australia. Responses initially retrieved included 1097 participants. Of those, 129 were not considered for the current analyses. In particular, 84 respondents were classified as preview-only registrations and did not address any items; 5 presented with systematic response inconsistencies and were thus considered invalid; 11 were excluded as potential bots; 11 had not provided their informed consent (i.e., did not tick the digital consent box, although they later addressed the survey); and 18 were excluded for not fulfilling the age conditions (i.e., being adults), in line with the ethics approval received. Therefore, responses from 968 English-speaking adults from the general community were examined. An online sample of adult, English-speaking participants aged 18 to 64 who were familiar with gaming [N = 968, M_age = 29.5, SD_age = 9.36, n_males = 622 (64.3 %), n_females = 315 (32.5 %), n_trans/non-binary = 26 (2.7 %), n_queer = 1 (0.1 %), n_other = 1 (0.1 %), n_missing = 3 (0.3 %)] was analysed. According to Hill (1998), random sampling error should lie below 4 %, a requirement satisfied by the current sample’s 3 % (SPH Analytics, 2021). See Table 1 for participants’ sociodemographic information.

Socio-demographic and online use characteristics of participants.

Note: Percentages represent the percentage of that sex which is represented by any one grouping, rather than percentages of the overall population.

2.2. Measures

Psychometric instruments targeting sociodemographics, SMA and a semi-comprehensive range of behavioural, digital and substance addictions were employed. These instruments involved the Bergen Social Media Addiction Scale (BSMAS; Andreassen et al., 2012 ), the Internet Gaming Disorder Scale 9-item Short Form (IGDS-SF9; Pontes & Griffiths, 2015 ), the Internet Disorder Scale (IDS9-SF; Pontes & Griffiths, 2016 ), the Online Gambling Disorder Questionnaire (OGD-Q; González-Cabrera et al., 2020 ), the 10-item Alcohol Use Disorders Identification Test (AUDIT; Saunders et al., 1993 ), the five-item Cigarette Dependence Scale (CDS-5; Etter et al., 2003 ), the 10-item Drug Abuse Screening Test (DAST-10; Skinner, 1982 ), the Bergen-Yale Sex Addiction Scale (BYSAS; Andreassen et al., 2018 ), the Bergen Shopping Addiction Scale (BSAS; Andreassen et al., 2015 ) and the 6-item Revised Exercise Addiction Inventory (EAI-R; Szabo et al., 2019 ). Precise details of these measures, including values related to assumptions, can be found in Table 2 .

Measure descriptions and internal consistency.

Note Table 2 : Streiner’s (2003) guidelines are used when measuring internal reliability, with Cronbach’s alpha scores in the range of 0.60–0.69 labelled ‘acceptable’, 0.70–0.89 ‘good’ and 0.90–1.00 ‘excellent’. Acceptable values of skewness fall between −3 and +3, and kurtosis is appropriate within a range of −10 to +10 ( Brown, 2006 ). OGD-Q kurtosis (13.90) and skewness (3.45) exceeded the recommended limits ( Brown, 2006 ). However, LPA does not assume data distribution linearity, normality and/or homogeneity ( Rosenberg et al., 2019 ). Considering aim B, related to detecting significant reported differences on measures of gaming, sex, shopping, exercise, gambling, alcohol, drug, cigarette and internet addiction symptoms respectively, ANOVA results were derived after bootstrapping the sample 1000 times to ensure that normality assumptions were met. Case bootstrapping calculates the means of 1000 resamples of the available data and computes the results analysing these means, which are normally distributed ( Tong, Saminathan, & Chang, 2016 ).
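As a point of reference, Cronbach’s alpha can be computed directly from item-level responses. The R sketch below is a minimal illustration, not the study’s code; ‘bsmas_items’, a data frame with one column per BSMAS item, is a hypothetical placeholder:

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    cronbach_alpha <- function(items) {
      k <- ncol(items)
      item_vars <- apply(items, 2, var)
      total_var <- var(rowSums(items))
      (k / (k - 1)) * (1 - sum(item_vars) / total_var)
    }
    # cronbach_alpha(bsmas_items)  # judge against Streiner's (2003) bands above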

2.3. Procedure

Approval was received from the Victoria University Human Research Ethics Committee (HRE20-169). Data was collected from August 2019 to August 2020 via an online survey link distributed via social media (i.e., Facebook, Instagram, Twitter), digital forums (i.e., Reddit) and the Victoria University learning management system. Familiarity with gaming was preferred, so that associations with one’s online gaming patterns could be studied. The link first took potential participants to the Plain Language Information Statement (PLIS), which informed them of the study requirements and of participants’ anonymity and free-of-penalty withdrawal rights. Digital provision of informed consent (i.e., ticking a box) was required of participants before proceeding to the survey.

2.4. Statistical analyses

Statistical analyses were conducted via: a) R-Studio for the latent profile analyses (LPA) and b) Jamovi for descriptive statistics and profile comparisons. Regarding aim A, LPA identifies naturally homogeneous subgroups within a population ( Rosenberg et al., 2019 ). Through the tidyLPA CRAN R package, a number of models varying in terms of their structure/parameterization and number of profiles were tested, using the six BSMAS criteria/items as indicators ( Rosenberg et al., 2019 ; see Table 3 ).
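A minimal sketch of that workflow in R follows, assuming the authors’ general approach rather than reproducing their exact code; ‘bsmas’ is a hypothetical data frame holding the six BSMAS item scores:

    library(tidyLPA)  # the CRAN package named above
    library(dplyr)

    # Estimate candidate solutions with 1 to 8 profiles on the six indicators
    # (default parameterization: equal variances, covariances fixed to zero;
    # the study additionally compared alternative parameterizations).
    fits <- bsmas %>%
      select(salience, tolerance, mood_modification,
             relapse, withdrawal, conflict) %>%
      estimate_profiles(n_profiles = 1:8)

    get_fit(fits)  # AIC, BIC, entropy, n_min and BLRT p-values per model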

LPA model parameterization characteristics.

Subsequently, the constructed models were compared on selected fit indices (i.e., the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the bootstrapped Lo-Mendell-Rubin test (B-LMR or BLRT), entropy and the N_min; Rosenberg et al., 2019 ) 1 . This involved: (1) dismissing any models with N_min equalling 0, as each profile requires at least one participant; (2) dismissing models with entropy scores below 0.64 ( Tein et al., 2013 ); (3) dismissing models with a non-significant BLMR value; and (4) assessing the remaining models on their AIC/BIC, looking for an elbow point in the decline or the lowest values.
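Scripted against tidyLPA’s fit table, that four-step elimination might look roughly like the following (a sketch assuming the ‘fits’ object from the previous snippet and get_fit()’s standard column names):

    library(dplyr)

    candidates <- get_fit(fits) %>%
      filter(n_min > 0,         # step 1: no empty profiles
             Entropy >= 0.64,   # step 2: adequate classification accuracy
             BLRT_p < 0.05)     # step 3: bootstrapped LRT still significant

    candidates %>% arrange(Classes)  # step 4: scan AIC/BIC down the rows for an elbow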

Regarding aim B of the study, ANOVA with bootstrapping (1000x) was employed to detect significant profile differences regarding one’s gaming, sex, shopping, exercise, gambling, alcohol, drug, cigarette and internet addiction symptoms respectively.

All analyses’ assumptions were met, with one exception (see Note 2). The measure of online gambling disorder experience violated guidelines for the acceptable departure from normality and homogeneity ( Kim, 2013 ). Given this violation, results regarding gambling addiction should be considered with some caution.
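The case-bootstrapping procedure described in the Table 2 note can be sketched in a few lines of R; ‘dat’, with a ‘profile’ factor and a ‘score’ column, is a hypothetical stand-in for any one addiction measure:

    # Resample rows with replacement 1000 times and collect per-profile means;
    # the distribution of these resampled means is approximately normal.
    set.seed(1)
    boot_means <- replicate(1000, {
      resample <- dat[sample(nrow(dat), replace = TRUE), ]
      tapply(resample$score, resample$profile, mean)
    })

    # Bootstrap 95% confidence intervals for each profile's mean score.
    apply(boot_means, 1, quantile, probs = c(0.025, 0.975))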

3.1. Aim A: LPA of BSMAS symptoms

The fit of the converged models, varying by number of profiles and parameterization, is displayed in Table 4 , with the CIP parameterisation presenting as the optimum (i.e. lower AIC and BIC, and 1–8 profiles converging; no CVDP, CIUP or CVUP model converged except the one-profile CVUP model). Subsequently, the CIP models were further examined via the tidyLPA Mclust function (see Table 5 ). AIC and BIC decreased as the number of profiles increased. This flattened past 3 profiles (i.e., the elbow point; Rosenberg et al., 2019 ). Furthermore, past 3 profiles, N_min reached zero, indicating profiles with zero participants in them – thus reducing interpretability. Lastly, the BLRT test reached non-significance once the model had 4 profiles, again indicating the 3-profile model as best fitting. Therefore, alternative CIP models were rejected in favour of the 3-profile one. This displayed a level of classification accuracy well above the suggested cut-off point of 0.76 (entropy = 0.90; Larose et al., 2016 ), suggesting over 90 % correct classification ( Larose et al., 2016 ). Regarding the profiles’ proportions, counts revealed 33.6 % as profile 1, 52.4 % as profile 2 and 14 % as profile 3.

Initial model testing.

Fit indices of CIP models with 1–8 classes.

Table 6 and Fig. 1 present the profiles’ raw mean scores across the 6 BSMAS items whilst Table 7 and Fig. 2 present the standardised mean scores.

Raw Mean Scores and Standard Error of the 6 BSMAS Criteria Across the Three Classes/Profiles.

Fig. 1. Raw symptom experience of the three classes.

Standardised mean scores of the 6 BSMAS criteria across the three classes/profiles.

Note: For standard errors, see Table 6 .

Fig. 2. Standardized symptom experience of the three classes.

Profile 1 scores varied from 1.74 to 2.98 raw, and between 0.08 and 0.58 standard deviations above the sample mean symptom experience. In terms of plateaus and steeps, profile 1 displayed a raw-score plateau across symptoms 1–3 (salience, tolerance, mood modification), a decline at symptom 4 (relapse), and another plateau across symptoms 5–6 (withdrawal and conflict). It further displayed a standardized-score plateau around the level of 0.5 standard deviations across symptoms 1–3 and a decline across symptoms 4–6. Profile 2 varied consistently between raw mean scores of 1 and 1.36 across the 6 SMA symptoms, and between −0.74 and −0.53 standard deviations from the sample mean, with general plateaus in standardized score across symptoms 1–3 and 4–6. Finally, profile 3 mean scores varied between 3.02 and 3.95 raw and 1.26 to 1.88 standardized. Plateaus were witnessed in the raw scores across symptoms 1–3 (salience, tolerance, mood modification), with a decline at symptom 4 (relapse), a relative peak at symptom 5 (withdrawal), and a further decline at symptom 6 (conflict). However, the standardized scores for profile 3 were relatively constant across the first four symptoms, before sharply reaching a peak at symptom 5 and then declining once more. Accordingly, the three profiles were identified as the severity profiles ‘low’ (profile 2), ‘moderate’ (profile 1) and ‘high’ (profile 3) risk. Table 8 , Table 9 provide the profile means and standard deviations, as well as their pairwise comparisons across the series of other addictive behaviours assessed.

Post Hoc Descriptives across a semi-comprehensive list of addictions.

Post Hoc Comparisons of the SMA profiles revealed across the addictive behaviors measured.

3.2. Aim B: BSMAS profiles and addiction risk/personal factors

Table 8 , Table 9 display the Jamovi outputs for the BSMAS profiles and their means and standard deviations, as well as their pairwise comparisons across the series of other addictive behaviours assessed using ANOVA. Cohen’s (1988) benchmarks were used for eta squared values, with >0.01 indicating small, >0.059 medium and >0.138 large effects. ANOVA results were derived after bootstrapping the sample 1000 times to ensure that normality assumptions were met. Case bootstrapping calculates the means of 1000 resamples of the available data and computes the results analysing these means, which are normally distributed ( Tong et al., 2016 ). SMA profiles significantly differed across the range of behavioural addiction forms examined, with more severe SMA profiles presenting consistently higher scores: medium effect sizes were found regarding gaming (F = 57.5, p < .001, η² = 0.108), sex (F = 39.53, p < .001, η² = 0.076) and gambling (F = 40.332, p < .001, η² = 0.078), and large effect sizes regarding shopping (F = 90.06, p < .001, η² = 0.159) and general internet addiction symptoms (F = 137.17, p < .001, η² = 0.223). Only relationships of ‘medium’ size or greater were considered further in this analysis, though small effects were found for alcoholism (F = 11.34, p < .001, η² = 0.023), substance abuse (F = 4.83, p = .008, η² = 0.01) and exercise addiction (F = 5.415, p = .005, η² = 0.011). Pairwise comparisons consistently confirmed that the ‘low’ SMA profile scored significantly lower than the ‘moderate’ and ‘high’ SMA profiles, and that the ‘moderate’ SMA profile scored significantly lower than the ‘high’ SMA profile across all addiction forms assessed (see Table 8 , Table 9 ).
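For readers unfamiliar with the effect size used here, eta squared is the between-group sum of squares over the total sum of squares. A hedged R sketch, with ‘dat’ again a hypothetical data frame holding a ‘profile’ factor and a ‘shopping’ score:

    # One-way ANOVA and eta squared = SS_between / SS_total.
    fit <- aov(shopping ~ profile, data = dat)
    ss  <- summary(fit)[[1]][["Sum Sq"]]   # c(SS_between, SS_residual)
    eta_sq <- ss[1] / sum(ss)
    eta_sq  # Cohen (1988): >0.01 small, >0.059 medium, >0.138 large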

4. Discussion

The present study examined the occurrence of distinct SMA profiles and their associations with a range of other addictive behaviours. It did so by uniquely combining a large community sample with measures of established psychometric properties, addressing both SMA and an extensive range of other proposed substance and behavioural addictions, to calculate the best fitting model in terms of parameterization and profile number. A model of the CIP parameterization with three profiles was supported by the data. The three identified SMA profiles ranged in terms of severity and were labelled ‘low’ (52.4 %), ‘moderate’ (33.6 %) and ‘high’ (14 %) SMA risk. Membership of the ‘high’ SMA risk profile was shown to link strongly with significantly higher reported experiences of internet and shopping addictive behaviours, and moderately with higher levels of addictive symptoms related to gaming, sex and gambling.

4.1. Number and variations of SMA profiles

Three SMA profiles, entailing ‘low’ (52.4 %), ‘moderate’ (33.6 %) and ‘high’ (14 %) SMA risk, were supported, with symptom 5 – withdrawal – displaying the highest inter-profile disparities. These results help clarify the number of SMA profiles in the population, as past findings were inconsistent, supporting either 3, 4 or 5 SMA profiles ( Bányai et al., 2017 , Brailovskaia et al., 2021 , Luo et al., 2021 ), as well as the nature of the differences between these profiles (i.e. quantitative: “how much/high one experiences SMA symptoms” or qualitative: “the type of SMA symptoms one experiences”). Our findings are consistent with those of Bányai and colleagues (2017) and Cheng and colleagues (2022), indicating a unidimensional experience of SMA (i.e., that the intensity/severity an individual reports best defines their profile membership, rather than the type of SMA symptoms), with three profiles ranging in severity from ‘low’ to ‘moderate’ to ‘high’ and those belonging to the higher risk profiles being the minority. Conversely, these results stand in opposition to two past studies identifying profiles that varied qualitatively (i.e., specific SMA symptoms experienced more by certain profiles) and suggesting the occurrence of 4 and 5 profiles respectively ( Brailovskaia et al., 2021 , Luo et al., 2021 ). Such differences might be explained by variations in the targeted populations of these studies. Characteristics such as gender, nationality and age all have significant effects on how and why social media is employed ( Andreassen et al., 2016 ; Hsu et al., 2015 ; Park et al., 2015 ). Given that the two studies in question utilized European adolescent samples, differences in the culture and age of our sample may have produced our varying results ( Brailovskaia et al., 2021 , Luo et al., 2021 ). Comparability issues may also explain these results, given that the profiling analyses implemented in the studies of Brailovskaia and colleagues (2021), as well as Luo and colleagues (2021), did not extensively consider different profile parameterizations, as the present study and Cheng et al. (2022) did. Furthermore, the results of this study closely replicated those of the Cheng et al. (2022) study, with both studies identifying a near identical pattern of symptom experience across three advancing levels of severity. This replication may indicate their accuracy, strengthening the validity of SMA experience models involving 3 differentiated profiles of staggered severity. Both our findings and Cheng et al.’s findings indicate profiles characterized by higher levels of cognitive symptoms (salience, tolerance and mood modification) for each class when compared to their experience of behavioural symptoms (relapse, withdrawal, conflict; Cheng et al., 2022 ). Further research may focus on any potentially mediating/moderating factors that may be interfering, and potentially further replicate such results, supporting their reliability. Furthermore, given that past studies (with different results) utilized European adolescent samples, cultural and age comparability limitations need to be considered and accounted for in future research ( Bányai et al., 2017 , Brailovskaia et al., 2021 ; Cheng et al., 2022 ).

Regarding withdrawal being the symptom of highest discrepancy between profiles, the findings suggest that it may be more predictive of SMA, and thus merit specific assessment or diagnostic attention, aligning with past literature ( Bányai et al., 2017 , Luo et al., 2021 , Brailovskaia et al., 2021 , Smith and Short, 2022 ). Indeed, the experience of irritability and frustration when abstaining from usage has been shown to possess higher differentiation power regarding diagnosing and measuring other technological addictions such as gaming, indicating the possibility of a broader centrality of withdrawal across the constellation of digital addictions ( Gomez et al., 2019 ; Schivinski et al., 2018 ).

Finally, the higher SMA risk profile percentage in the current study compared with previous research [e.g., 14 % in contrast to 4.5 % ( Bányai et al., 2017 ), 4.2 % ( Luo et al., 2021 ) and 7.2 % ( Brailovskaia et al., 2021 )] also invites plausible interpretations. The data collection for the present Australian study occurred between August 2019 and August 2020, while Bányai and colleagues (2017) collected their data in Hungary in March 2015, and Brailovskaia and colleagues (2021) in Lithuania and Germany between October 2019 and December 2019. The first cases of the COVID-19 pandemic outside China were reported in January 2020, and the pandemic isolation measures that unfolded later in 2020 prompted more intense social media usage, as people compensated for their lack of in-person interactions ( Ryan, 2021 , Saud et al., 2020 ). Thus, it is likely that the higher SMA symptom scores reported in the present study are inflated by the social isolation conditions imposed during the time the data was collected. Furthermore, the present study involves an adult English-speaking population rather than European adolescents, as in the studies of Bányai and colleagues (2017) and Brailovskaia and colleagues (2021). Thus, age and/or cultural differences may explain the higher proportion of the high SMA risk profile found. For instance, it is possible that there may be greater SMA vulnerability among older demographics and/or across countries. The cross-country explanation is reinforced by the findings of Cheng and colleagues (2022), who assessed and compared UK and US adult populations; the age explanation is less likely, as younger age has been shown to relate to higher SMA behaviours ( Lyvers et al., 2019 ). Overall, the present results closely align with those of Cheng and colleagues (2022), who also collected their data during a similar period (between May 18, 2020 and May 24, 2020) and from English-speaking countries (as the present study did). They, in line with our findings, also supported the occurrence of three SMA behaviour profiles, with the low risk profile exceeding 50 % of the general population and those at higher risk ranging above 9 %.

4.2. Concurrent addiction risk

Considering the second study aim, ascending risk profile membership was strongly related to increased experiences of internet and shopping addiction, while it moderately connected with gaming, gambling and sex addictions. Finally, it weakly associated with alcohol, exercise and drug addictions. These findings constitute the first semi-comprehensive cross-addiction risk ranking for individuals with a high-risk SMA profile, supporting the following implications.

Firstly, no distinction was found between the so-called “technological” and other behavioural addictions, potentially contradicting prior theory on the topic ( Gomez et al., 2022 ). Typically, the abuse of internet gaming/pornography/social media has been classified as behavioural addiction ( Enrique, 2010 , Savci and Aysan, 2017 ). However, their shared active substance – the internet – has prompted some scholars to suggest that these should be classified as a distinct subtype of behavioural addictions named “technological/Internet use addictions/disorders” ( Savci & Aysan, 2017 ). Nevertheless, the stronger association revealed between the “high” SMA risk profile and shopping addiction (which does not always necessitate the internet), compared to other technology related addictions, challenges this conceptual distinction ( Savci & Aysan, 2017 ). This finding may point to an expanding intersection between shopping and SMA, as an increasing number of social media platforms host easily accessible product and services advertising channels (e.g., Facebook property and car selling/marketing groups, Instagram shopping; Rose & Dhandayudham, 2014 ). In turn, the desire to shop may prompt a desire to find these services online, share shopping endeavours with others or find deals one can only access through social media, creating a reciprocal effect ( Rose & Dhandayudham, 2014 ). This possibility aligns with previous studies assuming reciprocal addictive co-occurrences ( Tullett-Prado et al., 2021 ). This relationship might also be exacerbated by shared causal factors underpinning addictions in general, such as one’s drive for immediate gratification and/or impulsive tendencies ( Andreassen et al., 2016 ; Niedermoser et al., 2021 ). Although such interpretations remain to be tested, the strong SMA and shopping addiction link evidenced suggests that clinicians should closely examine the shopping behaviours of those suffering from SMA behaviours and, if comorbidity is detected, address both addictions concurrently ( Grant et al., 2010 , Miller et al., 2019 ). In conclusion, despite some studies suggesting a distinction between technological, and especially internet related (e.g., SMA, internet gaming), addictions and other behavioural addictions ( Gomez et al., 2022 , Zarate et al., 2022 ), the current study’s high risk SMA profile associations appear not to differentiate based on the technological/internet nature that other addictions may involve.

Secondly, results suggest a novel hierarchical list of the types of addictions related to the higher SMA risk profile. While previous research has established links between various addictive behaviours and SMA (i.e., gaming and SMA; Wang et al., 2015 ), these have never before – to the best of the authors’ knowledge – been examined simultaneously, allowing their comparison/ranking. Therefore, our findings may allow for more accurate predictions about the addictive comorbidities of SMA, aiding in SMA’s assessment and treatment. For example, internet, shopping, gambling, gaming and sex addictions were all shown to associate more significantly with the high risk SMA profile than exercise and substance related addictive behaviours ( King et al., 2014 ; Gainsbury et al., 2016a ; Gainsbury et al., 2016b ; Rose and Dhandayudham, 2014 , Kamaruddin et al., 2018 , Leung, 2014 ). Thus, clinicians working with those with SMA may wish to screen for gaming and sex addictions. Regardless of the underlying causes, this hierarchy highlights the likelihood of one addiction precipitating and perpetuating another in a cyclical manner, guiding assessment, prevention, and intervention priorities for concurrent addictions.

Lastly, these results indicate a lower relevance of the high risk SMA profile to exercise/substance addictive behaviours. Considering excessive exercise, our study reinforces literature indicating decreased physical activity among those with SMA and problematic internet users in general ( Anderson et al., 2017 , Duradoni et al., 2020 ). Naturally, those suffering from SMA behaviours spend large amounts of time sedentary in front of a screen, precluding excessive physical activities. Similarly, the lack of a significant relationship between tobacco abuse and SMA has also been identified previously, perhaps due to the cultural divide between social media and smoking in terms of their acceptance by wider society and of the difference in their users ( Spilkova et al., 2017 ). Contrary to expectations, there were weak/negligible associations between the high SMA risk profile and substance and alcohol abuse behaviours. This finding contradicts current knowledge supporting their frequent comorbidity ( Grant et al., 2010 , Spilkova et al., 2017 ; Winpenny et al., 2014 ). It may potentially be explained by individual differences between these users: while one can assume many traits are shared between those vulnerable to substances and to SMA, these may be expressed differently. For example, despite narcissism being a common addiction risk factor, its predictive power is mediated by reward sensitivity in SMA, whereas in alcoholism and substance abuse no such relationship exists ( Lyvers et al., 2019 ). Perhaps the constant dopamine rewards and the addictive reward schedule of social media target this vulnerability in a way that alcohol does not. Overall, one could assume that the associations between SMA and less “traditionally” viewed (i.e., substance related; Gomez et al., 2022 ) addictions deserve more attention. Thus, future research is recommended.

4.3. Limitations and future direction

The current findings need to be considered in the light of various limitations. Firstly, limitations related to the cross-sectional, age-specific and self-report survey data are present. These methodological restrictions do not allow for conclusions regarding the longitudinal and/or causal associations between different addictions, nor for generalization of the findings to different age groups, such as adolescents. Furthermore, the self-report questionnaires employed may accommodate subjectivity biases (e.g., subjective and/or false memory recollections; Hoerger & Currell, 2012 ; Sun & Zhang, 2020 ). The latter risk is reinforced by the non-inclusion of social desirability subscales in the current study, posing obstacles to ensuring participant responses are accurate.

Additionally, there is a conceptual overlap between SMA and Internet Addiction (IA), which operates as an umbrella construct inclusive of all online addictions (i.e., irrespective of the aspect of the Internet being abused; Anderson et al., 2017 , Savci and Aysan, 2017 ). Thus, caution is warranted in interpreting the SMA profile and IA associations, as SMA may constitute a specific subtype included under the IA umbrella ( Savci & Aysan, 2017 ). However, one should also consider that: (a) SMA, as a particular IA subtype, is not identical to IA ( Pontes & Griffiths, 2014 ); and (b) recent findings show that IA and addictive behaviours related to specific internet applications, such as SMA, could correlate with different types of electroencephalogram [EEG] activity, suggesting their neurophysiological distinction (e.g. gaming disorder patients experience raised delta and theta activity and reduced beta activity, while Internet addiction patients experience raised gamma and reduced beta and delta activity; Burleigh et al., 2020 ). Overall, these considerations advocate in favour of a careful interpretation of the SMA profile and IA associations.

Finally, the role of demographic differences related to one’s gender and age, which have been shown to mediate the relationship between social media engagement and symptoms of other psychiatric disorders ( Andreassen et al., 2016 ), has not been attended to here.

Thus, regarding the present findings and their limitations, future studies should focus on a number of key avenues: (1) achieving a more granular understanding of SMA’s associations with comorbid addictions via case study or longitudinal research (e.g., cross-lag designs); (2) further clarifying the nature of the experience of SMA symptoms; (3) investigating the link between shopping addiction and SMA, as well as potential interventions that target both of these addictions simultaneously; and (4) attending to gender and age differences related to the different SMA risk profiles, as well as how these may associate with other addictions.

5. Conclusion

The present study bears significant implications for the way that SMA behaviours are assessed among adults in the community and subsequently addressed in adult clinical populations. By profiling the ways in which SMA symptoms are experienced, three groups of adult social media users, differing in the reported intensity of their SMA symptoms, were revealed. These included the ‘low’ (52.4 %), ‘moderate’ (33.6 %) and ‘high’ (14 %) SMA risk profiles. High SMA risk profile membership was strongly related to increased rates of reported internet and shopping related addictive behaviours, moderately associated with gaming, gambling and sex related addictive behaviours, and weakly associated with alcohol, exercise and drug related addictive behaviours, to the point that such associations were negligible at most. These results enable a better understanding of those experiencing higher SMA behaviours, and the introduction of a risk hierarchy of SMA-addiction comorbidities that needs to be taken into consideration when assessing and/or treating those suffering from SMA symptoms. Specifically, SMA and its potential addictive behaviour comorbidities may be addressed with psychoeducation and risk management techniques in the context of SMA relapse prevention and intervention plans, with a greater emphasis on shopping and general internet addictive behaviours. Regarding epidemiological implications, the inclusion of 14 % of the sample in the high SMA risk profile implies that while social media use can be a risky experience, it should not be over-pathologized. More importantly, and provided that the present findings are reinforced by other studies, SMA awareness campaigns might need to be introduced, while regulating policies should concurrently address the risk for multiple addictions among those suffering from SMA behaviours.

Note 1: Firstly, results were compared across all converged models. In brief, the AIC and BIC are measures of prediction error which penalize goodness of fit by the number of parameters to prevent overfitting; models with lower scores are deemed better fitting ( Tein et al., 2013 ). Of the 16 possible models, the parameterization with the most consistently low AICs and BICs across models with 1–8 profiles was chosen, eliminating 8 of the possible models. Subsequently, the remaining models were more closely examined through tidyLPA using the compare solutions command, with the BLMR operating as a direct comparison between 2 models (i.e. the model tested and a similar model with one profile less) on their relative fit using likelihood ratios. A BLMR-based output p value is obtained for each comparison pair, with lower p-values corresponding to greater fit among the models tested (i.e. if the BLMR p >.05, the model with the higher number of profiles needs to be rejected; Tein et al., 2013 ). Entropy is an estimate of the probability that any one individual is correctly allocated to their profile. Entropy ranges from 0 to 1, with higher scores corresponding to a better model ( Tein et al., 2013 ; Larose et al., 2016 ). Finally, the N_min represents the minimum proportion of sample participants in any one profile and aids in determining the interpretability/parsimony of a model. If N_min is 0, then there are one or more profiles in the model empty of members, and thus the interpretability and parsimony of the model are reduced ( CRAN, 2021 ). These differing fit indices were weighed up against each other in order to identify the best fitting model ( Akogul & Erisoglu, 2017 ). This best fitting model was subsequently applied to the datasheet, and the individual profiles were then examined through the use of descriptive statistics in order to identify their characteristics.
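For reference, the two information criteria discussed in Note 1 follow the standard definitions, which can be written in a couple of lines of R (a sketch; logLik is the model’s maximized log-likelihood, k its free-parameter count and n the sample size):

    # Standard definitions: lower values indicate better penalized fit.
    aic <- function(logLik, k)    2 * k - 2 * logLik
    bic <- function(logLik, k, n) k * log(n) - 2 * logLik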

Note 2: Regarding the assumptions of the LPA model, as a non-parametric procedure it makes no assumptions about the distribution of the data. The subsequent ANOVA analyses, however, carry two distributional assumptions: homogeneity of variances and normality. The distribution of the data was therefore assessed in Jamovi via the skewness and kurtosis of all measures employed in the ANOVA analyses. Skewness ranged from 0.673 to 2.49 for all variables except the OGD-Q, which had a skewness of 3.45; kurtosis ranged from 0.11 to 6 for all variables except the OGD-Q, which had a kurtosis of 13.9. Thus, all measures except the OGD-Q sat within the respective acceptable ranges of −3 to +3 and −10 to +10 recommended by Brown and Moore (2012).
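As a concrete illustration of this screening step, skewness and kurtosis can be computed per measure and flagged against the cut-offs above. This is a minimal sketch, not the study's Jamovi output; the file name and columns are hypothetical, and scipy's excess-kurtosis convention is assumed.

```python
# Hypothetical normality screen: flag measures whose skewness or kurtosis
# falls outside the +/-3 and +/-10 ranges cited above. File/columns invented.
import pandas as pd
from scipy.stats import skew, kurtosis

df = pd.read_csv("scores.csv")  # placeholder for the study's datasheet
for col in df.select_dtypes("number").columns:
    s = skew(df[col], nan_policy="omit")
    k = kurtosis(df[col], nan_policy="omit")  # Fisher's excess kurtosis
    flag = "" if (abs(s) <= 3 and abs(k) <= 10) else "  <-- outside range"
    print(f"{col:15s} skew={s:+.2f}  kurtosis={k:+.2f}{flag}")
```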

Dr Vasileios Stavropoulos received funding from:

Victoria University, Early Career Researcher Fund (ECR 2020), grant number 68761601.

The Australian Research Council, Discovery Early Career Researcher Award 2021, grant number DE210101107.

Ethical Standards – Animal Rights

All procedures performed in the study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Human Ethics Research Committee of Victoria University (Australia). This article does not contain any studies with animals performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Confirmation statement

The authors confirm that this paper has neither been previously published nor simultaneously submitted for publication elsewhere.

Publication

The authors confirm that this paper is not currently under consideration for publication elsewhere. They disclose, however, that the paper was previously submitted elsewhere, advanced to the pre-print stage, and was then withdrawn.

The authors assign copyright or license the publication rights for the present article.

Availability of data and materials

The data are deposited as a supplementary file accompanying this article.

CRediT authorship contribution statement

Deon Tullett-Prado: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation. Vasileios Stavropoulos: Supervision, Resources, Funding acquisition, Project administration. Rapson Gomez: Supervision, Resources. Jo Doley: Supervision, Resources.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Biographies

Deon Tullett-Prado: Deon Tullett-Prado is a PhD candidate and emerging researcher in the area of behavioural addictions, and in particular Internet Gaming Disorder. His expertise includes advanced statistical analysis and innovative population-profiling techniques.

Dr Vasileios Stavropoulos: Dr Vasileios Stavropoulos is a member of the Australian Psychological Society (APS) and a registered psychologist endorsed in Clinical Psychology with the Australian Health Practitioner Regulation Agency (AHPRA). Vasileios' research interests include the areas of behavioural addictions and developmental psychopathology. In that context, Vasileios is a member of the European Association of Developmental Psychology (EADP) and the EADP Early Researchers Union. Regarding academic collaborations, Vasileios maintains research ties with the Athena Studies for Resilient Adaptation Research Team of the University of Athens, the International Gaming Centre of Nottingham Trent University, Palo Alto University and the Korea Advanced Institute of Science and Technology. Vasileios received an ARC DECRA award in 2021.

Dr Rapson Gomez: Rapson Gomez is a professor of clinical psychology who formerly directed clinical training at the School of Psychology, University of Tasmania (Hobart, Australia). His research now applies innovative statistical techniques, with a particular focus on ADHD, biological models of personality, psychometrics and cyberpsychology.

Dr Jo Doley: A lecturer at Victoria University, Dr Doley has a keen interest in the social aspects of body image and eating disorders. With expertise in a variety of quantitative methodologies, including experimental studies, Delphi studies and systematic reviews, Dr Doley has been conducting research into the ways that personal characteristics such as sexual orientation and gender may impact body image. Furthermore, in conjunction with the cyberpsychology group at VU, they have been building new expertise on digital media and its potential addictive effects.

Appendix A. Supplementary data to this article can be found online at https://doi.org/10.1016/j.abrep.2023.100479.


References

  • Anderson E.L., Steen E., Stavropoulos V. Internet use and Problematic Internet Use: A systematic review of longitudinal research trends in adolescence and emergent adulthood. International Journal of Adolescence and Youth. 2017;22(4):430–454. doi: 10.1080/02673843.2016.1227716.
  • Andreassen C.S., Torsheim T., Brunborg G.S., Pallesen S. Development of a Facebook addiction scale. Psychological Reports. 2012;110(2):501–517. doi: 10.2466/02.09.18.PR0.110.2.501-517.
  • Andreassen C.S., Billieux J., Griffiths M.D., Kuss D.J., Demetrovics Z., Mazzoni E., et al. The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: A large-scale cross-sectional study. Psychology of Addictive Behaviors. 2016;30(2):252. doi: 10.1037/adb0000160.
  • Bányai F., Zsila Á., Király O., Maraz A., Elekes Z., Griffiths M.D., et al. Problematic social media use: Results from a large-scale nationally representative adolescent sample. PLoS ONE. 2017;12(1):e0169839.
  • Bodor D., Tomić A., Ricijaš N., Filipčić I. Impulsiveness in alcohol addiction and pathological gambling. Alcoholism and Psychiatry Research: Journal on Psychiatric Research and Addictions. 2016;52(2):149–158. doi: 10.20471/apr.2016.52.02.05.
  • Bouchillon B.C. Social networking for interpersonal life: A competence-based approach to the rich get richer hypothesis. Social Science Computer Review. 2020. doi: 10.1177/0894439320909506.
  • Brailovskaia J., Truskauskaite-Kuneviciene I., Kazlauskas E., Margraf J. The patterns of problematic social media use (SMU) and their relationship with online flow, life satisfaction, depression, anxiety and stress symptoms in Lithuania and in Germany. Current Psychology. 2021:1–12. doi: 10.1007/s12144-021-01711-w.
  • Brown R.I.F. Some contributions of the study of gambling to the study of other addictions. Gambling Behavior and Problem Gambling. 1993;1:241–272.
  • Brown T.A. Confirmatory factor analysis for applied research. The Guilford Press; 2006.
  • Burleigh T.L., Griffiths M.D., Sumich A., Wang G.Y., Kuss D.J. Gaming disorder and internet addiction: A systematic review of resting-state EEG studies. Addictive Behaviors. 2020;107:106429. doi: 10.1016/j.addbeh.2020.106429.
  • Calderaro A. Social media and politics. In: Outhwaite W., Turner S., editors. The SAGE Handbook of Political Sociology: Two Volume Set. SAGE Publications; 2018.
  • Chamberlain S.R., Grant J.E. Behavioral addictions. In: Fontenelle L.F., Yücel M., editors. A transdiagnostic approach to obsessions, compulsions and related phenomena. Cambridge University Press; 2019.
  • Chamberlain S.R., Lochner C., Stein D.J., Goudriaan A.E., van Holst R.J., Zohar J., et al. Behavioural addiction—a rising tide? European Neuropsychopharmacology. 2016;26(5):841–855. doi: 10.1016/j.euroneuro.2015.08.013.
  • Cheng C., Ebrahimi O.V., Luk J.W. Heterogeneity of prevalence of social media addiction across multiple classification schemes: Latent profile analysis. Journal of Medical Internet Research. 2022;24(1):e27000.
  • Countrymeters. (2021). World population. Retrieved from https://rb.gy/v6cqlq.
  • CRAN. (2021). Introduction to tidyLPA. https://cran.r-project.org/web/packages/tidyLPA/vignettes/Introduction_to_tidyLPA.html.
  • DataReportal. (2021). Digital 2021 October global statshot report. Retrieved from https://datareportal.com/reports/digital-2021-october-global-statshot.
  • Duradoni M., Innocenti F., Guazzini A. Well-being and social media: A systematic review of Bergen addiction scales. Future Internet. 2020;12(2):24. doi: 10.3390/fi12020024.
  • Enrique E. Addiction to new technologies and to online social networking in young people: A new challenge. Adicciones. 2010;22(2). doi: 10.20882/adicciones.196.
  • Etter J.F., Le Houezec J., Perneger T. A self-administered questionnaire to measure dependence on cigarettes: The Cigarette Dependence Scale. Neuropsychopharmacology. 2003;28:359–370. doi: 10.1038/sj.npp.1300030.
  • Grant J.E., Potenza M.N., Weinstein A., Gorelick D.A. Introduction to behavioral addictions. The American Journal of Drug and Alcohol Abuse. 2010;36(5):233–241. doi: 10.3109/00952990.2010.491884.
  • Griffiths M.D., Kuss D. Adolescent social media addiction (revisited). Education and Health. 2017;35(3):49–52.
  • Gomez R., Stavropoulos V., Brown T., Griffiths M.D. Factor structure of ten psychoactive substance addictions and behavioural addictions. Psychiatry Research. 2022:114605. doi: 10.1016/j.psychres.2022.114605.
  • Gainsbury S.M., King D.L., Russell A.M., Delfabbro P., Derevensky J., Hing N. Exposure to and engagement with gambling marketing in social media: Reported impacts on moderate-risk and problem gamblers. Psychology of Addictive Behaviors. 2016;30(2):270. doi: 10.1037/adb0000156.
  • Gainsbury S.M., Delfabbro P., King D.L., Hing N. An exploratory study of gambling operators' use of social media and the latent messages conveyed. Journal of Gambling Studies. 2016;32(1):125–141. doi: 10.1007/s10899-015-9525-2.
  • Gomez R., Stavropoulos V., Beard C., Pontes H.M. Item response theory analysis of the recoded Internet Gaming Disorder Scale–Short-Form (IGDS9-SF). International Journal of Mental Health and Addiction. 2019;17(4):859–879. doi: 10.1007/s11469-018-9890-z.
  • González-Cabrera J., Machimbarrena J.M., Beranuy M., Pérez-Rodríguez P., Fernández-González L., Calvete E. Design and measurement properties of the Online Gambling Disorder Questionnaire (OGD-Q) in Spanish adolescents. Journal of Clinical Medicine. 2020;9:120.
  • Gorwa R., Guilbeault D. Unpacking the social media bot: A typology to guide research and policy. Policy & Internet. 2020;12(2):225–248. doi: 10.1002/poi3.184.
  • Haand R., Shuwang Z. The relationship between social media addiction and depression: A quantitative study among university students in Khost, Afghanistan. International Journal of Adolescence and Youth. 2020;25(1):780–786. doi: 10.1080/02673843.2020.1741407.
  • Heffer T., Good M., Daly O., MacDonell E., Willoughby T. The longitudinal association between social-media use and depressive symptoms among adolescents and young adults: An empirical reply to Twenge et al. (2018). Clinical Psychological Science. 2019;7(3):462–470. doi: 10.1177/216770261881272.
  • Hoerger M., Currell C. Ethical issues in Internet research. In: Knapp S.J., Gottlieb M.C., Handelsman M.M., VandeCreek L.D., editors. APA handbook of ethics in psychology, Vol. 2: Practice, teaching, and research. American Psychological Association; 2012. pp. 385–400.
  • Hsu M.H., Chang C.M., Lin H.C., Lin Y.W. Determinants of continued use of social media: The perspectives of uses and gratifications theory and perceived interactivity. Information Research. 2015.
  • Kamaruddin N., Rahman A.W.A., Handiyani D. Pornography addiction detection based on neurophysiological computational approach. Indonesian Journal of Electrical Engineering and Computer Science. 2018;10(1):138–145.
  • Kim H.Y. Statistical notes for clinical researchers: Assessing normal distribution (2) using skewness and kurtosis. Restorative Dentistry & Endodontics. 2013;38(1):52–54.
  • King D.L., Delfabbro P.H., Kaptsis D., Zwaans T. Adolescent simulated gambling via digital and social media: An emerging problem. Computers in Human Behavior. 2014;31:305–313. doi: 10.1016/j.chb.2013.10.048.
  • Lanza S.T., Cooper B.R. Latent profile analysis for developmental research. Child Development Perspectives. 2016;10(1):59–64. doi: 10.1111/cdep.12163.
  • Larose C., Harel O., Kordas K., Dey D.K. Latent class analysis of incomplete data via an entropy-based criterion. Statistical Methodology. 2016;32:107–121. doi: 10.1016/j.stamet.2016.04.004.
  • Leung L. Predicting Internet risks: A longitudinal panel study of gratifications-sought, Internet addiction symptoms, and social media use among children and adolescents. Health Psychology and Behavioral Medicine: An Open Access Journal. 2014;2(1):424–439. doi: 10.1080/21642850.2014.902316.
  • Luo T., Qin L., Cheng L., Wang S., Zhu Z., Xu J., et al. Determination of the cut-off point for the Bergen social media addiction (BSMAS): Diagnostic contribution of the six criteria of the components model of addiction for social media disorder. Journal of Behavioral Addictions. 2021. doi: 10.1556/2006.2021.00025.
  • Lyvers M., Narayanan S.S., Thorberg F.A. Disordered social media use and risky drinking in young adults: Differential associations with addiction-linked traits. Australian Journal of Psychology. 2019;71(3):223–231. doi: 10.1111/ajpy.12236.
  • Mabić M., Gašpar D., Bošnjak L.L. Social media and employment: Students' vs. employers' perspective. In: Proceedings of the ENTRENOVA-ENTerprise REsearch InNOVAtion Conference. 2020;6(1):482–492.
  • Marmet S., Studer J., Wicki M., Bertholet N., Khazaal Y., Gmel G. Unique versus shared associations between self-reported behavioral addictions and substance use disorders and mental health problems: A commonality analysis in a large sample of young Swiss men. Journal of Behavioral Addictions. 2019;8(4):664–677. doi: 10.1556/2006.8.2019.70.
  • Martinac M., Karlović D., Babić D. Alcohol and gambling addiction. In: Neuroscience of Alcohol. Academic Press; 2019. pp. 529–535.
  • Meshi D., Elizarova A., Bender A., Verdejo-Garcia A. Excessive social media users demonstrate impaired decision making in the Iowa Gambling Task. Journal of Behavioral Addictions. 2019;8(1):169–173. doi: 10.1556/2006.7.2018.138.
  • Miller W.R., Forcehimes A.A., Zweben A. Treating addiction: A guide for professionals. Guilford Publications; 2019.
  • Moretta T., Buodo G., Demetrovics Z., Potenza M.N. Tracing 20 years of research on problematic use of the internet and social media: Theoretical models, assessment tools, and an agenda for future work. Comprehensive Psychiatry. 2022;112:152286. doi: 10.1016/j.comppsych.2021.152286.
  • Mourão R.R., Kilgo D.K. Black Lives Matter coverage: How protest news frames and attitudinal change affect social media engagement. Digital Journalism. 2021:1–21. doi: 10.1080/21670811.2021.1931900.
  • Nguyen M.H. The impact of social media on students' lives. LAB University of Applied Sciences; 2021.
  • Niedermoser D.W., Petitjean S., Schweinfurth N., Wirz L., Ankli V., Schilling H., et al. Shopping addiction: A brief review. Practice Innovations. 2021. doi: 10.1037/pri0000152.
  • Obar J.A., Wildman S.S. Social media definition and the governance challenge: An introduction to the special issue. Telecommunications Policy. 2015;39(9):745–750. doi: 10.1016/j.telpol.2015.07.014.
  • Panova T., Carbonell X. Is smartphone addiction really an addiction? Journal of Behavioral Addictions. 2018;7(2):252–259. doi: 10.1556/2006.7.2018.49.
  • Park C., Jun J., Lee T. Consumer characteristics and the use of social networking sites: A comparison between Korea and the US. International Marketing Review. 2015;32(3/4):414–437. doi: 10.1108/IMR-09-2013-0213.
  • Pontes H.M., Griffiths M.D. Internet addiction disorder and internet gaming disorder are not the same. Journal of Addiction Research & Therapy. 2014;5(4). doi: 10.4172/2155-6105.1000e124.
  • Pontes H.M., Griffiths M.D. Measuring DSM-5 Internet Gaming Disorder: Development and validation of a short psychometric scale. Computers in Human Behavior. 2015;45:137–143. doi: 10.1016/j.chb.2014.12.006.
  • Pontes H.M., Griffiths M.D. The development and psychometric properties of the Internet Disorder Scale–Short Form (IDS9-SF). Addicta: The Turkish Journal on Addictions. 2016;3(2). doi: 10.1016/j.addbeh.2015.09.003.
  • Prinstein M.J., Nesi J., Telzer E.H. Commentary: An updated agenda for the study of digital media use and adolescent development – future directions following Odgers & Jensen (2020). Journal of Child Psychology and Psychiatry. 2020;61(3):349–352. doi: 10.1111/jcpp.13190.
  • Rosenberg J.M., Beymer P.N., Anderson D.J., Van Lissa C.J., Schmidt J.A. tidyLPA: An R package to easily carry out latent profile analysis (LPA) using open-source or commercial software. Journal of Open Source Software. 2019;3(30):978. doi: 10.21105/joss.00978.
  • Rose S., Dhandayudham A. Towards an understanding of Internet-based problem shopping behaviour: The concept of online shopping addiction and its proposed predictors. Journal of Behavioral Addictions. 2014;3(2):83–89. doi: 10.1556/JBA.3.2014.003.
  • Ryan J.M. Timeline of COVID-19. In: COVID-19: Global pandemic, societal responses, ideological solutions. Routledge; 2021. pp. xiii–xxxii.
  • Savci M., Aysan F. Technological addictions and social connectedness: Predictor effect of internet addiction, social media addiction, digital game addiction and smartphone addiction on social connectedness. Dusunen Adam: Journal of Psychiatry & Neurological Sciences. 2017;30(3):202–216. doi: 10.5350/DAJPN2017300304.
  • Saud M., Mashud M.I., Ida R. Usage of social media during the pandemic: Seeking support and awareness about COVID-19 through social media platforms. Journal of Public Affairs. 2020;20(4):e2417.
  • Saunders J.B., Aasland O.G., Babor T.F., de la Fuente J.R., Grant M. Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption-II. Addiction. 1993;88(6):791–804. doi: 10.1111/j.1360-0443.1993.tb02093.x.
  • Schivinski B., Brzozowska-Woś M., Buchanan E.M., Griffiths M.D., Pontes H.M. Psychometric assessment of the internet gaming disorder diagnostic criteria: An item response theory study. Addictive Behaviors Reports. 2018;8:176–184. doi: 10.1016/j.abrep.2018.06.004.
  • Skinner H.A. The drug abuse screening test. Addictive Behaviors. 1982;7(4):363–371. doi: 10.1016/0306-4603(82)90005-3.
  • Smith T., Short A. Needs affordance as a key factor in likelihood of problematic social media use: Validation, latent profile analysis and comparison of TikTok and Facebook problematic use measures. Addictive Behaviors. 2022:107259. doi: 10.1016/j.addbeh.2022.107259.
  • Spilkova J., Chomynova P., Csemy L. Predictors of excessive use of social media and excessive online gaming in Czech teenagers. Journal of Behavioral Addictions. 2017;6(4):611–619. doi: 10.1556/2006.6.2017.064.
  • Starcevic V. Behavioural addictions: A challenge for psychopathology and psychiatric nosology. Australian & New Zealand Journal of Psychiatry. 2016;50(8):721–725. doi: 10.1177/0004867416654009.
  • Sun Y., Zhang Y. A review of theories and models applied in studies of social media addiction and implications for future research. Addictive Behaviors. 2020:106699. doi: 10.1016/j.addbeh.2020.106699.
  • Szabo A., Pinto A., Griffiths M.D., Kovácsik R., Demetrovics Z. The psychometric evaluation of the Revised Exercise Addiction Inventory: Improved psychometric properties by changing item response rating. Journal of Behavioral Addictions. 2019;8(1):157–161. doi: 10.1556/2006.8.2019.06.
  • Tein J.Y., Coxe S., Cham H. Statistical power to detect the correct number of classes in latent profile analysis. Structural Equation Modeling: A Multidisciplinary Journal. 2013;20(4):640–657. doi: 10.1080/10705511.2013.824781.
  • Tong L.I., Saminathan R., Chang C.W. Uncertainty assessment of non-normal emission estimates using non-parametric bootstrap confidence intervals. Journal of Environmental Informatics. 2016;28(1):61–70. doi: 10.3808/jei.201500322.
  • Tullett-Prado D., Stavropoulos V., Mueller K., Sharples J., Footitt T.A. Internet Gaming Disorder profiles and their associations with social engagement behaviours. Journal of Psychiatric Research. 2021;138:393–403. doi: 10.1016/j.jpsychires.2021.04.037.
  • Van den Eijnden R.J., Lemmens J.S., Valkenburg P.M. The Social Media Disorder Scale. Computers in Human Behavior. 2016;61:478–487. doi: 10.1016/j.chb.2016.03.038.
  • Wang C.W., Ho R.T., Chan C.L., Tse S. Exploring personality characteristics of Chinese adolescents with internet-related addictive behaviors: Trait differences for gaming addiction and social networking addiction. Addictive Behaviors. 2015;42:32–35. doi: 10.1016/j.addbeh.2014.10.039.
  • Wegmann E., Billieux J., Brand M. Internet-use disorders: A theoretical framework for their conceptualization and diagnosis. In: Mental health in a digital world. Academic Press; 2022. pp. 285–305.
  • Winpenny E.M., Marteau T.M., Nolte E. Exposure of children and adolescents to alcohol marketing on social media websites. Alcohol and Alcoholism. 2014;49(2):154–159. doi: 10.1093/alcalc/agt174.
  • Zarate D., Ball M., Montag C., Prokofieva M., Stavropoulos V. Unravelling the web of addictions: A network analysis approach. Addictive Behaviors Reports. 2022:100406. doi: 10.1016/j.abrep.2022.100406.
  • Zhong B., Huang Y., Liu Q. Mental health toll from the coronavirus: Social media usage reveals Wuhan residents' depression and secondary trauma in the COVID-19 outbreak. Computers in Human Behavior. 2020;114:106524. doi: 10.1016/j.chb.2020.106524.
  • Zilberman N., Yadid G., Efrati Y., Neumark Y., Rassovsky Y. Personality profiles of substance and behavioral addictions. Addictive Behaviors. 2018;82:174–181. doi: 10.1016/j.addbeh.2018.03.007.


What to Know About the Supreme Court Arguments on Social Media Laws

Both Florida and Texas passed laws regulating how social media companies moderate speech online. The laws, if upheld, could fundamentally alter how the platforms police their sites.


A view of the Supreme Court building.

By David McCabe

McCabe reported from Washington.

Social media companies are bracing for Supreme Court arguments on Monday that could fundamentally alter the way they police their sites.

After Facebook, Twitter and YouTube barred President Donald J. Trump in the wake of the Jan. 6, 2021, riots at the Capitol, Florida made it illegal for technology companies to ban from their sites a candidate for office in the state. Texas later passed its own law prohibiting platforms from taking down political content.

Two tech industry groups, NetChoice and the Computer & Communications Industry Association, sued to block the laws from taking effect. They argued that the companies have the right to make decisions about their own platforms under the First Amendment, much as a newspaper gets to decide what runs in its pages.

So what’s at stake?

The Supreme Court’s decision in those cases — Moody v. NetChoice and NetChoice v. Paxton — is a big test of the power of social media companies, potentially reshaping millions of social media feeds by giving the government influence over how and what stays online.

“What’s at stake is whether they can be forced to carry content they don’t want to,” said Daphne Keller, a lecturer at Stanford Law School who filed a brief with the Supreme Court supporting the tech groups’ challenge to the Texas and Florida laws. “And, maybe more to the point, whether the government can force them to carry content they don’t want to.”

If the Supreme Court says the Texas and Florida laws are constitutional and they take effect, some legal experts speculate that the companies could create versions of their feeds specifically for those states. Still, such a ruling could usher in similar laws in other states, and it is technically complicated to accurately restrict access to a website based on location.

Critics of the laws say the feeds to the two states could include extremist content — from neo-Nazis, for example — that the platforms previously would have taken down for violating their standards. Or, the critics say, the platforms could ban discussion of anything remotely political by barring posts about many contentious issues.

What are the Florida and Texas social media laws?

The Texas law prohibits social media platforms from taking down content based on the “viewpoint” of the user or expressed in the post. The law gives individuals and the state’s attorney general the right to file lawsuits against the platforms for violations.

The Florida law fines platforms if they permanently ban from their sites a candidate for office in the state. It also forbids the platforms from taking down content from a “journalistic enterprise” and requires the companies to be upfront about their rules for moderating content.

Proponents of the Texas and Florida laws, which were passed in 2021, say that they will protect conservatives from the liberal bias that they say pervades the platforms, which are based in California.

“People the world over use Facebook, YouTube, and X (the social-media platform formerly known as Twitter) to communicate with friends, family, politicians, reporters, and the broader public,” Ken Paxton, the Texas attorney general, said in one legal brief. “And like the telegraph companies of yore, the social media giants of today use their control over the mechanics of this ‘modern public square’ to direct — and often stifle — public discourse.”

Chase Sizemore, a spokesman for the Florida attorney general, said the state looked “forward to defending our social media law that protects Floridians.” A spokeswoman for the Texas attorney general did not provide a comment.

What are the current rights of social media platforms?

They now decide what does and doesn’t stay online.

Companies including Meta’s Facebook and Instagram, TikTok, Snap, YouTube and X have long policed themselves, setting their own rules for what users are allowed to say while the government has taken a hands-off approach.

In 1997, the Supreme Court ruled that a law regulating indecent speech online was unconstitutional, differentiating the internet from mediums where the government regulates content. The government, for instance, enforces decency standards on broadcast television and radio.

For years, bad actors have flooded social media with misleading information, hate speech and harassment, prompting the companies to come up with new rules over the last decade that include forbidding false information about elections and the pandemic. Platforms have banned figures like the influencer Andrew Tate for violating their rules, including against hate speech.

But there has been a right-wing backlash to these measures, with some conservatives accusing the platforms of censoring their views — and even prompting Elon Musk to say he wanted to buy Twitter in 2022 to help ensure users’ freedom of speech.

What are the social media platforms arguing?

The tech groups say that the First Amendment gives the companies the right to take down content as they see fit, because it protects their ability to make editorial choices about the content of their products.

In their lawsuit against the Texas law, the groups said that just like a magazine’s publishing decision, “a platform’s decision about what content to host and what to exclude is intended to convey a message about the type of community that the platform hopes to foster.”

Still, some legal scholars are worried about the implications of allowing the social media companies unlimited power under the First Amendment, which is intended to protect the freedom of speech as well as the freedom of the press.

“I do worry about a world in which these companies invoke the First Amendment to protect what many of us believe are commercial activities and conduct that is not expressive,” said Olivier Sylvain, a professor at Fordham Law School who until recently was a senior adviser to the Federal Trade Commission chair, Lina Khan.

How does this affect Big Tech’s liability for content?

A federal law known as Section 230 of the Communications Decency Act shields the platforms from lawsuits over most user content. It also protects them from legal liability for how they choose to moderate that content.

That law has been criticized in recent years for making it impossible to hold the platforms accountable for real-world harm that flows from posts they carry, including online drug sales and terrorist videos.

The cases being argued on Monday do not challenge that law head-on. But the Section 230 protections could play a role in the broader arguments over whether the court should uphold the Texas and Florida laws. And the state laws would indeed create new legal liability for the platforms if they take down certain content or ban certain accounts.

Last year, the Supreme Court considered two cases, directed at Google’s YouTube and Twitter, that sought to limit the reach of the Section 230 protections. The justices declined to hold the tech platforms legally liable for the content in question.

What comes next?

The court will hear arguments from both sides on Monday. A decision is expected by June.

Legal experts say the court may rule that the laws are unconstitutional, but provide a road map on how to fix them. Or it may uphold the companies’ First Amendment rights completely.

Carl Szabo, the general counsel of NetChoice, which represents companies including Google and Meta and lobbies against tech regulations, said that if the group’s challenge to the laws fails, “Americans across the country would be required to see lawful but awful content” that could be construed as political and therefore covered by the laws.

“There’s a lot of stuff that gets couched as political content,” he said. “Terrorist recruitment is arguably political content.”

But if the Supreme Court rules that the laws violate the Constitution, it will entrench the status quo: Platforms, not anybody else, will determine what speech gets to stay online.

Adam Liptak contributed reporting.

David McCabe covers tech policy. He joined The Times from Axios in 2019.
