Artificial intelligence and the Futures of Learning


The Artificial Intelligence and the Futures of Learning project builds on the Recommendation on the Ethics of Artificial Intelligence adopted at the 41st session of the UNESCO General Conference in 2021 and follows up on the recommendations of the UNESCO global report Reimagining our futures together: a new social contract for education, launched in November 2021. It is implemented within the framework of the Beijing Consensus on Artificial Intelligence and Education and against the backdrop of the UNESCO Strategy on technological innovation in education (2021-2025).

The project will address both the human and technological dimensions related to AI and the futures of learning. 

Strands of work

The project consists of three independent but complementary strands: 

  • AI and the Future of Learning
  • Guidance for Generative AI in education and research
  • AI Competency Frameworks for Students and Teachers 

Policy dialogue and consultations

  • International Forums on AI and Education: International Forum on AI and Education: Steering AI to Empower Teachers and Transform Teaching (December 2022); International Forum on AI and Education: Ensuring AI as a Common Good to Transform Education (December 2021); International Forum on AI and the Futures of Education: Developing competencies for the AI Era (December 2020); and International Conference on Artificial Intelligence and Education: Planning Education in the AI Era: Lead the leap (May 2019).
  • Ministerial Roundtable on Generative AI in Education: A virtual ministerial meeting on generative AI in education took place on 25 May 2023, gathering 25 ministers to debate the urgent need for regulations on generative AI and the competencies needed to reap its benefits. More information: https://www.unesco.org/en/articles/ministerial-roundtable-generative-ai-education
  • Consultations on the AI competency framework for teachers (October 2022): The consultation was attended by 15 international experts in AI and education and more than 70 participants. More information: https://www.unesco.org/en/articles/unesco-supports-definition-and-development-ai-competencies-teachers

Knowledge production

  • Guidance for Generative AI in Education and Research: The Guidance has been drafted and will be launched during Digital Learning Week (4-7 September 2023).
  • Drafting the AI Competency Framework for school students: A first draft of the Framework will be presented for consultation during Digital Learning Week (4-7 September 2023).
  • Drafting the AI Competency Framework for teachers: A first draft of the Framework will be presented for consultation during Digital Learning Week (4-7 September 2023).
  • K-12 AI curricula: a mapping of government-endorsed AI curricula: The report builds on the results of a survey on “AI curricula for school students” which was circulated to all UNESCO Member States in 2022. The report will inform the development of the AI Competency Framework for Students.
  • A survey on the governmental use of AI in education: Completed in early 2023, the survey covers assessments of Member States’ setting up of regulations on the ethics of AI and its use in education, strategies for AI in education, and national programmes on developing AI competencies for teachers. The results of the survey informed the AI Competency Framework for Teachers.
  • Definition of Algorithm Literacy and Data Literacy: A call for contributions to the definition of Algorithm Literacy and Data Literacy was launched in June 2023. The selected think-pieces will feed into the development of the AI Competency Frameworks for students and teachers.
  • An in-depth case study on the K-12 AI curricula of the United Arab Emirates: In collaboration with the Regional Center for Educational Planning (RCEP) and the Ministry of Education of the United Arab Emirates (UAE), UNESCO completed a case study on the UAE’s K-12 AI curriculum and its implementation. The case study is planned for release in 2023.

Capacity building

  • Workshop on AI curriculum development for schools in Oman (May 2022): UNESCO, the Ministry of Education of Oman, RCEP, UNESCO Doha and Ericsson conducted an online workshop to empower 25+ national curriculum developers in integrating AI competencies into K-12 education. More information: https://www.unesco.org/en/articles/oman-embarks-development-k-12-ai-curricula-support-unesco-and-rcep
  • Workshop on Coding and AI for teachers in Lebanon (May 2023): As part of the “Teaching Coding and AI for Teachers and K-12 Students” programme in Lebanon, UNESCO, UNESCO Beirut and the Ministry of Education of Lebanon conducted a three-day training on Coding and AI, empowering teachers and staff with AI skills and computational thinking. More information: https://www.unesco.org/en/articles/unesco-and-lebanon-join-forces-develop-coding-skills-teachers-and-disadvantaged-students

Publications

  • International forum on AI and education: steering AI to empower teachers and transform teaching, 5-6 December 2022: analytical report, UNESCO, 2023
  • K-12 AI curricula: a mapping of government-endorsed AI curricula, UNESCO, 2022
  • AI and education: Guidance for policy-makers, UNESCO, 2021
  • International Forum on AI and Education: Ensuring AI as a Common Good to Transform Education, synthesis report, UNESCO, 2021
  • International Forum on AI and the Futures of Education, developing competencies for the AI Era, 7-8 December 2020: synthesis report, UNESCO, 2020
  • Artificial intelligence in education: Compendium of promising initiatives, UNESCO, 2020
  • Beijing Consensus on Artificial Intelligence and Education, UNESCO, 2019
  • Artificial intelligence in education: Compendium of promising initiatives, UNESCO, 2019
  • International conference on Artificial intelligence and Education, Planning education in the AI Era: Lead the leap: final report, UNESCO, 2019

Related events

  • Digital Learning Week on “Steering technology for education”, 4-7 September 2023
  • Ministerial Roundtable on Generative AI in Education, 25 May 2023
  • International Forum on AI and Education: Steering AI to Empower Teachers and Transform Teaching, 5-6 December 2022
  • International Forum on AI and Education: Ensuring AI as a Common Good to Transform Education, 7-8 December 2021
  • Launch of the AI and the Futures of Learning Project, 30 September 2021
  • International Forum on Artificial Intelligence and the Futures of Education, 7-9 December 2020
  • Online Edition of Mobile Learning Week 2020 - Beyond Disruption: Technology Enabled Learning Futures, 12-14 October 2020
  • International Conference on Artificial Intelligence and Education, 16-18 May 2019
  • Mobile Learning Week 2019 – Artificial intelligence for Sustainable Development, 4-8 March 2019

The project is supported by the Tomorrow Advancing Life Education Group (TAL) of China, a long-term partner of UNESCO and one of the sponsors of the International Conference on Artificial Intelligence and Education.


Artificial Intelligence in Education


To prepare students to thrive as learners and leaders of the future, educators must become comfortable teaching with and about artificial intelligence. Generative AI tools such as ChatGPT, Claude and Midjourney, for example, offer an opportunity to rethink and redesign learning. Educators can use these tools to strengthen learning experiences while addressing the ethical considerations of using AI. ISTE is the global leader in supporting schools in thoughtfully, safely and responsibly introducing AI in ways that enhance learning and empower students and teachers.

Interested in learning how to teach AI?

Sign up to learn about ISTE’s AI resources and PD opportunities. 

ASCD + ISTE StretchAI

StretchAI: An AI Coach Just for Educators

ISTE and ASCD are developing the first AI coach specifically for educators. With StretchAI, educators can get tailored guidance to improve their teaching, from tips on ways to use technology to support learning to strategies for creating more inclusive learning experiences. Answers are based on a carefully validated set of resources and include citations from the source documents used to generate them. If you are interested in becoming a beta tester for StretchAI, please sign up below.

Leaders' Guide to Artificial Intelligence

School leaders must ensure the use of AI is thoughtful and appropriate, and supports the district’s vision. Download this free guide (or the UK version) to get the background you need to guide your district in an AI-infused world.

UPDATED! Free Guides for Engaging Students in AI Creation

ISTE and GM have partnered to create Hands-On AI Projects for the Classroom guides to provide educators with a variety of activities to teach students about AI across various grade levels and subject areas. Each guide includes background information for teachers and student-driven project ideas that relate to subject-area standards. 

The hands-on activities in the guides range from “unplugged” projects to explore the basic concepts of how AI works to creating chatbots and simple video games with AI, allowing students to work directly with innovative AI technologies and demonstrate their learning. 

These updated hands-on guides are available in downloadable PDF format in English, Spanish and Arabic from the list below.

Hands-On AI Projects for the Classroom: A Guide for Elementary Teachers

Artificial Intelligence Explorations for Educators unpacks everything educators need to know about bringing AI to the classroom. Sign up for the next course and find out how to earn graduate-level credit for completing the course.


As a co-founder of TeachAI, ISTE provides guidance to support school leaders and policy makers around leveraging AI for learning.


Dive deeper into AI and learn how to navigate ChatGPT in schools with curated resources and tools  from ASCD and ISTE.

Join our Educator AI Community on Connect

ISTE+ASCD’s free online community brings together educators from around the world to share ideas and best practices for using artificial intelligence to support learning.

Learn More From These Podcasts, Blog Posts, Case Studies and Websites


Partners Code.org, ETS, ISTE and Khan Academy offer engaging sessions with renowned experts to demystify AI, explore responsible implementation, address bias, and showcase how AI-powered learning can revolutionize student outcomes.

EdSurge podcast: Better Representation in AI

One of the challenges with bias in AI comes down to who has access to these careers in the first place, and that's the area that Tess Posner, CEO of the nonprofit AI4All, is trying to address.


Featuring in-depth interviews with practitioners, guidelines for classroom teachers and a webinar about the importance of AI in education, this site provides K-12 educators with practical tools for integrating AI and computational thinking across their curricula.


This 15-hour, self-paced introduction to artificial intelligence is designed for students in grades 9-12. Educators and students should create a free account at P-TECH before viewing the course.


Explore More in the Learning Library

Explore more books, articles, and tools about artificial intelligence in the Learning Library.



How to Enact an AI Policy in Your K–12 Schools

Bryan Krause

Bryan Krause is a K–12 Education Strategist for CDW Education. He is a former teacher, coach and district administrator with more than 30 years of experience in Colorado. He was principal of a school that suffered a school shooting and has shared his experience and learnings with numerous school districts and organizations nationally. In addition, he has led response teams in numerous school crisis situations to help in response and recovery.

Wendy Jones

Wendy Jones is a K–12 Education Strategist Manager for CDW•G.

As K–12 leaders plan for the return to school, many should consider the need for a policy on artificial intelligence. AI is a frequent topic of discussion in edtech circles, with experts and thought leaders weighing in as the technology advances. At this year’s Consortium for School Networking conference in March, panelists discussed ChatGPT and its place in schools. At ISTELive 23, presenters urged schools to have an AI policy in place, adding that it’s better to have a policy that can be revised than to have no guidance at all.

When talking about AI and looking for advice on creating a policy, it’s important to remember that AI and machine learning have been around for years. While ChatGPT accelerated interest in generative AI, many technologies in schools already rely on AI or ML to some extent. Microsoft and Google both use the technology for features as familiar as autocomplete.

There’s AI in school safety technology as well. Cybersecurity tools such as web filtering and network monitoring rely on the tech, and physical security solutions such as next-generation cameras use AI for license plate recognition and more.

Understanding how AI works and, more importantly, how it doesn’t work is key for modern school leaders.


Navigate Common Fears and Misconceptions Around AI in Schools

Many of the common fears about AI are unfounded. Rumors swirl that AI will replace teachers or that students will stop learning how to read and write.

Instead, some of the most innovative teachers have already begun incorporating AI into lesson plans , using the tool to improve student writing and comprehension.

Others fear the technology will cause classes to stray from the curriculum or render it irrelevant. Yet, in the same way that teachers are integrating this tool into individualized lessons, they are using it to support and further the standards to which they need to teach.

One genuine concern is that of digital equity. If schools ban the use of generative AI in classrooms or on school devices, students with devices and access at home will continue to explore the capabilities of AI. Those who rely on school-issued devices, meanwhile, will fall behind their peers when it comes to AI skills, which experts say will be crucial in the workforce in the near future.

LEARN MORE: How ChatGPT is impacting innovation in K-12 education.

Create AI Policies for Schools That Are Fair and Thoughtful

While it’s important to enact a policy, school leaders should make sure they plan it thoughtfully. To do so, they must gather input from all the necessary stakeholders. This includes school staff — administrators, the IT department and educators — as well as students and the community.

Students are the end users of this tech in many circumstances, and they will need AI skills in future jobs, so their perspective is important to the conversation.

Engaging the community is also essential before creating a policy on AI. This ensures community members are informed about how AI is being used in the classroom and how it works. Including the community can lessen or dispel the fears around this technology.

DIVE DEEPER: What are the pros and cons of ChatGPT in education?

Bookmark Resources on AI in K–12 Schools

School leaders should also turn to trusted resources for recommendations on their AI policies. The U.S. Department of Education recently released insights and recommendations on AI for schools. The White House also put out its Blueprint for an AI Bill of Rights. These resources can give admins a starting point when creating a policy.

The team of education strategists at CDW can also help K–12 IT professionals struggling to navigate an AI policy. They work with school districts and boards of education across the country to provide recommendations, guidance and direction on emerging technologies. Interested school leaders can also attend the AI webinar hosted by CDW Education July 27 .

This article is part of the “ ConnectIT: Bridging the Gap Between Education and Technology ” series.


  • Research article
  • Open access
  • Published: 24 April 2023

Artificial intelligence in higher education: the state of the field

  • Helen Crompton (ORCID: 0000-0002-1775-8219)
  • Diane Burke

International Journal of Educational Technology in Higher Education, volume 20, Article number: 22 (2023)


This systematic review provides unique findings with an up-to-date examination of artificial intelligence (AI) in higher education (HE) from 2016 to 2022. Using PRISMA principles and protocol, 138 articles were identified for a full examination. Using a priori and grounded coding, the data from the 138 articles were extracted, analyzed, and coded. The findings of this study show that in 2021 and 2022, publications rose to nearly two to three times the number of previous years. With this rapid rise in the number of AIEd HE publications, new trends have emerged. The findings show that research was conducted in six of the seven continents of the world. The trend has shifted from the US to China leading in the number of publications. Another new trend is in researcher affiliation: prior studies showed a lack of researchers from departments of education, which has now changed to be the most dominant department. Undergraduate students were the most studied students, at 72%. Similar to the findings of other studies, language learning was the most common subject domain; this included writing, reading, and vocabulary acquisition. In examining whom the AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers. In answering the overarching question of how AIEd was used in HE, grounded coding was used, and five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. This systematic review revealed gaps in the literature to be used as a springboard for future researchers, including new tools such as ChatGPT.

A systematic review examining AIEd in higher education (HE) up to the end of 2022.

A unique finding: leadership in the number of published studies has shifted from the US to China.

A two- to threefold increase in studies published in 2021 and 2022 compared to prior years.

AIEd was used for: Assessment/Evaluation, Predicting, AI Assistant, Intelligent Tutoring System, and Managing Student Learning.

Introduction

The use of artificial intelligence (AI) in higher education (HE) has risen quickly in the last 5 years (Chu et al., 2022), with a concomitant proliferation of new AI tools available. Scholars (viz., Chen et al., 2020; Crompton et al., 2020, 2021) report on the affordances of AI to both instructors and students in HE. These benefits include the use of AI in HE to adapt instruction to the needs of different types of learners (Verdú et al., 2017), in providing customized prompt feedback (Dever et al., 2020), in developing assessments (Baykasoğlu et al., 2018), and in predicting academic success (Çağataylı & Çelebi, 2022). These studies help to inform educators about how artificial intelligence in education (AIEd) can be used in higher education.

Nonetheless, a gap has been highlighted by scholars (viz., Hrastinski et al., 2019 ; Zawacki-Richter et al., 2019 ) regarding an understanding of the collective affordances provided through the use of AI in HE. Therefore, the purpose of this study is to examine extant research from 2016 to 2022 to provide an up-to-date systematic review of how AI is being used in the HE context.

Artificial intelligence has become pervasive in the lives of twenty-first century citizens and is being proclaimed as a tool that can be used to enhance and advance all sectors of our lives (Górriz et al., 2020). The application of AI has attracted great interest in HE, which is highly influenced by the development of information and communication technologies (Alajmi et al., 2020). AI is a tool used across subject disciplines, including language education (Liang et al., 2021), engineering education (Shukla et al., 2019), mathematics education (Hwang & Tu, 2021), and medical education (Winkler-Schwartz et al., 2019).

Artificial intelligence

The term artificial intelligence is not new. It was coined in 1956 by McCarthy (Cristianini, 2016), who followed up on the work of Turing (e.g., Turing, 1937, 1950). Turing described the existence of intelligent reasoning and thinking that could go into intelligent machines. The definition of AI has grown and changed since 1956, as there have been significant advancements in AI capabilities. A current definition of AI is “computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and the use of data for complex processing tasks” (Popenici et al., 2017, p. 2). The interdisciplinary interest of scholars from linguistics, psychology, education, and neuroscience, who connect AI to the nomenclature, perceptions, and knowledge of their own disciplines, can create a challenge when defining AI. This has created the need for categories of AI within specific disciplinary areas. This paper focuses on the category of AI in Education (AIEd) and how AI is specifically used in higher educational contexts.

As the field of AIEd is growing and changing rapidly, there is a need to increase the academic understanding of AIEd. Scholars (viz., Hrastinski et al., 2019 ; Zawacki-Richter et al., 2019 ) have drawn attention to the need to increase the understanding of the power of AIEd in educational contexts. The following section provides a summary of the previous research regarding AIEd.

Extant systematic reviews

This growing interest in AIEd has led scholars to investigate the research on the use of artificial intelligence in education. Some scholars have conducted systematic reviews that focus on a specific subject domain. For example, Liang et al. (2021) conducted a systematic review and bibliographic analysis of the roles and research foci of AI in language education. Shukla et al. (2019) focused their longitudinal bibliometric analysis on 30 years of using AI in engineering. Hwang and Tu (2021) conducted a bibliometric mapping analysis of the roles and trends in the use of AI in mathematics education, and Winkler-Schwartz et al. (2019) specifically examined the use of AI in medical education, looking for best practices in the use of machine learning to assess surgical expertise. These studies provide a specific focus on the use of AIEd in HE but do not provide an understanding of AI across HE.

Taking a broader view of AIEd in HE, Ouyang et al. (2022) conducted a systematic review of AIEd in online higher education and investigated the literature regarding the use of AI from 2011 to 2020. The findings show that performance prediction, resource recommendation, automatic assessment, and improvement of learning experiences are the four main functions of AI applications in online higher education. Salas-Pilco and Yang (2022) focused on AI applications in Latin American higher education. The results revealed that the main AI applications in higher education in Latin America are: (1) predictive modeling, (2) intelligent analytics, (3) assistive technology, (4) automatic content analysis, and (5) image analytics. These studies provide valuable information for the online and Latin American contexts but not an overarching examination of AIEd in HE.

Other studies have examined AIEd across HE as a whole. Hinojo-Lucena et al. (2019) conducted a bibliometric study on the impact of AIEd in HE. They analyzed the scientific production of AIEd HE publications indexed in the Web of Science and Scopus databases from 2007 to 2017. This study revealed that most of the published document types were proceedings papers. The United States had the highest number of publications, and the most cited articles were about implementing virtual tutoring to improve learning. Chu et al. (2022) reviewed the top 50 most cited articles on AI in HE from 1996 to 2020, revealing that predictions of students’ learning status were most frequently discussed. AI technology was most frequently applied in engineering courses, and AI technologies most often had a role in profiling and prediction. Finally, Zawacki-Richter et al. (2019) analyzed AIEd in HE from 2007 to 2018 to reveal four primary uses of AIEd: (1) profiling and prediction, (2) assessment and evaluation, (3) adaptive systems and personalization, and (4) intelligent tutoring systems. There do not appear to be any studies examining the last 2 years of AIEd in HE, and these authors describe the rapid speed of both AI development and the use of AIEd in HE and call for further research in this area.

Purpose of the study

The purpose of this study is to respond to the appeal from scholars (viz., Chu et al., 2022; Hinojo-Lucena et al., 2019; Zawacki-Richter et al., 2019) for research investigating the benefits and challenges of AIEd within HE settings. As prior academic work on AIEd in HE covers studies only up to 2020, this study provides the most up-to-date analysis, examining research through to the end of 2022.

The overarching question for this study is: what are the trends in HE research regarding the use of AIEd? The first two questions provide contextual information, such as where the studies occurred and the disciplines AI was used in. These contextual details are important for presenting the main findings of the third question of how AI is being used in HE.

  1. In what geographical location was the AIEd research conducted, and how has the trend in the number of publications evolved across the years?

  2. What departments were the first authors affiliated with, and what were the academic levels and subject domains in which AIEd research was being conducted?

  3. Who are the intended users of the AI technologies, and what are the applications of AI in higher education?

A PRISMA systematic review methodology was used to answer the three questions guiding this study. PRISMA principles (Page et al., 2021) were used throughout the study. The PRISMA extension Preferred Reporting Items for Systematic Reviews and Meta-Analysis for Protocols (PRISMA-P; Moher et al., 2015) provided an a priori roadmap for conducting a rigorous systematic review. Furthermore, the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA principles; Page et al., 2021) guided the searching, identification, and selection of articles, and then the reading, extraction, and management of the secondary data gathered from those studies (Moher et al., 2015; PRISMA Statement, 2021). This systematic review approach supports an unbiased synthesis of the data in an impartial way (Hemingway & Brereton, 2009). Within the systematic review methodology, extracted data were aggregated and presented as whole numbers and percentages. A qualitative deductive and inductive coding methodology was also used to analyze extant data and generate new theories on the use of AI in HE (Gough et al., 2017).

The research begins with the search for the research articles to be included in the study. Based on the research questions, the study parameters are defined, including the search years and the quality and types of publications to be included. Next, databases and journals are selected. A Boolean search is created and used for the search of those databases and journals. Once a set of publications is located from those searches, they are then examined against inclusion and exclusion criteria to determine which studies will be included in the final study. The relevant data matching the research questions are then extracted from the final set of studies and coded. This method section is organized to describe each of these methods with full details to ensure transparency.

Search strategy

Only peer-reviewed journal articles were selected for examination in this systematic review. This ensured a level of confidence in the quality of the studies selected (Gough et al., 2017). The search parameters narrowed the focus to studies published from 2016 to 2022. This timeframe was selected to ensure the research was up to date, which is especially important with the rapid change in technology and AIEd.

The data retrieval protocol employed an electronic and a hand search. The electronic search included educational databases within EBSCOhost. Then an additional electronic search was conducted of Wiley Online Library, JSTOR, Science Direct, and Web of Science. Within each of these databases a full text search was conducted. Aligned to the research topic and questions, the Boolean search included terms related to AI, higher education, and learning. The Boolean search is listed in Table 1 . In the initial test search, the terms “machine learning” OR “intelligent support” OR “intelligent virtual reality” OR “chatbot” OR “automated tutor” OR “intelligent agent” OR “expert system” OR “neural network” OR “natural language processing” were used. These were removed as they were subcategories of terms found in Part 1 of the search. Furthermore, inclusion of these specific AI terms resulted in a large number of computer science courses that were focused on learning about AI and not the use of AI in learning.

Part 2 of the search ensured that articles involved formal university education. The terms higher education and tertiary were both used to recognize the different terms used in different countries. The final Boolean search was “Artificial intelligence” OR AI OR “smart technologies” OR “intelligent technologies” AND “higher education” OR tertiary OR graduate OR undergraduate. Scholars (viz., Ouyang et al., 2022 ) who conducted a systematic review on AIEd in HE up to 2020 noted that they missed relevant articles from their study, and other relevant journals should intentionally be examined. Therefore, a hand search was also conducted to include an examination of other journals relevant to AIEd that may not be included in the databases. This is important as the field of AIEd is still relatively new, and journals focused on this field may not yet be indexed in databases. The hand search included: The International Journal of Learning Analytics and Artificial Intelligence in Education, the International Journal of Artificial Intelligence in Education, and Computers & Education: Artificial Intelligence.
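The two-part query described above combines an AI clause with a higher-education clause via AND. As an illustrative sketch (database search syntax varies across EBSCOhost, Web of Science, and the other engines used, so this shows only the logical structure, not any engine's exact syntax):

```python
# Illustrative assembly of the two-part Boolean query described above.
# Part 1 captures the AI-related terms; Part 2 restricts to formal
# university education. This is a sketch of the logical structure only.

part1_ai = '("Artificial intelligence" OR AI OR "smart technologies" OR "intelligent technologies")'
part2_he = '("higher education" OR tertiary OR graduate OR undergraduate)'

boolean_search = f"{part1_ai} AND {part2_he}"
print(boolean_search)
```

Keeping the two clauses as separate strings makes it easy to adapt the query to each database's field codes while preserving the same logical search.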

Electronic and hand searches resulted in 371 articles for possible inclusion. The search parameters within the electronic databases narrowed the results to peer-reviewed journal articles published from 2016 to 2022 and removed duplicates. Further screening was conducted manually: each remaining article was reviewed in full by two researchers against the inclusion and exclusion criteria found in Table 2.

Inter-rater reliability was calculated by percentage agreement (Belur et al., 2018). The researchers reached 95% agreement in the coding, and further discussion of misaligned articles resulted in 100% agreement. This screening against the inclusion and exclusion criteria, together with the removal of duplicates, resulted in the exclusion of 237 articles (see Fig. 1), leaving 138 articles for inclusion in this systematic review.
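Percentage agreement is simply the share of items on which the two coders assigned the same code. A minimal sketch, using hypothetical coding decisions rather than the study's actual data:

```python
# Minimal sketch (hypothetical data): percentage agreement between two
# coders is the proportion of items that received the same code.
def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical include/exclude decisions for 20 screened articles:
# the coders disagree on exactly one article.
coder_a = ["include"] * 19 + ["exclude"]
coder_b = ["include"] * 20
print(percent_agreement(coder_a, coder_b))  # 95.0
```

Unlike chance-corrected measures such as Cohen's kappa, raw percentage agreement does not adjust for agreement expected by chance, which is why disagreements are typically resolved by discussion, as the researchers did here.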

Figure 1. PRISMA flow chart of article identification and screening (from Page et al., 2021)

The 138 articles were then coded to answer each of the research questions using deductive and inductive coding methods. Deductive coding involves examining data using a priori codes, which are pre-determined criteria; this process was used to code the countries, years, author affiliations, academic levels, and domains into their respective groups. Author affiliations were coded using the academic department of the first author of each study. First authors were chosen as that person is the primary researcher of the study, following past research practice (e.g., Zawacki-Richter et al., 2019). Who the AI was intended for was also coded, using the a priori codes of Student, Instructor, Manager, or Others. The Manager code was used for those involved in organizational tasks, e.g., tracking enrollment; Others was used for those not fitting the first three categories.

Inductive coding was used for the overarching question of this study: how AI is being used in HE. Researchers of extant systematic reviews on AIEd in HE (viz., Chu et al., 2022; Zawacki-Richter et al., 2019) often matched the uses of AI to a priori, pre-existing frameworks. A grounded coding methodology (Strauss & Corbin, 1995) was selected for this study instead, to allow trends in AIEd in HE to emerge from the data. This is important as it allows a direct understanding of how AI is actually being used, rather than fitting the data to pre-existing ideas of how researchers think it is being used.

The grounded coding process involved extracting from the articles how the AI was being used in HE. “In vivo” coding (Saldana, 2015) was used alongside grounded coding: in vivo codes use language taken directly from the article to capture the primary authors’ wording and ensure consistency with their findings. The grounded coding design used a constant comparative method. Researchers identified important text related to the use of AI and, through an iterative process, developed initial codes into axial codes by constantly comparing uses of AI with uses of AI, uses of AI with codes, and codes with codes. Codes were deemed theoretically saturated when the majority of the data fit one of the codes. For both the a priori and the grounded coding, two researchers coded and reached an inter-rater percentage agreement of 96%; after discussing misaligned articles, 100% agreement was achieved.

Findings and discussion

The findings and discussion section is organized by the three questions guiding this study. The first two questions provide contextual information on the AIEd research, and the final question provides a rigorous investigation into how AI is being used in HE.

RQ1. In what geographical location was the AIEd research conducted, and how has the trend in the number of publications evolved across the years?

The 138 studies took place across 31 countries in six of the seven continents of the world. Nonetheless, that distribution was not equal across continents. Asia had the largest share of AIEd studies in HE at 41%; of the seven countries represented in Asia, 42 of the 58 studies were conducted in Taiwan and China. Europe, at 30%, was the second largest, with 15 countries contributing from one to eight studies apiece. North America, at 21% of the studies, was third, with the USA producing 21 of the 29 studies on that continent. The 21 studies from the USA place it second behind China. Only 1% of studies were conducted in South America and 2% in Africa. See Fig. 2 for a visual representation of the distribution of studies across countries. The continents with high numbers of studies are dominated by high-income countries, while there is a paucity of publications from low-income countries.

Figure 2. Geographical distribution of the AIEd HE studies

Zawacki-Richter et al.’s (2019) systematic review, covering 2007–2018, found that the USA conducted the most studies across the globe (43 of 146), with China second (11 of 146). Researchers have noted a rapid trend of Chinese researchers publishing more papers on AI and securing more patents than their US counterparts, in a field originally led by the US (viz., Li et al., 2021). The data from this study corroborate this trend, with China now leading in the number of AIEd publications.

With the accelerated use of AI in society, gathering data on the use of AIEd in HE provides the scholarly community with specific information on that growth and on whether it is as prolific as scholars anticipated (e.g., Chu et al., 2022). The analysis of the 138 studies shows that the trend towards the use of AIEd in HE has greatly increased. There is a drop in 2019, followed by a sharp rise in 2021 and 2022; see Fig. 3.

Figure 3. Chronological trend in AIEd in HE

Data on the rise of AIEd in HE are similar to the findings of Chu et al. (2022), who noted an increase from 1996–2010 to 2011–2020. Nonetheless, Chu et al.’s parameters span decades, and such a rise is to be anticipated for a relatively new technology across a longitudinal review. Data from this study show a dramatic rise since 2020, with a 150% increase over the prior two years (2019–2020). The rise in 2021 and 2022 could have been caused by the vast increase in HE faculty having to teach with technology during the pandemic lockdown. Faculty worldwide were using technologies, including AI, to explore how they could continue teaching and learning that had often been face-to-face prior to lockdown. The disadvantage of this rapid adoption is that there was little time to explore the possibilities of AI to transform learning, and AI may have been used to replicate past teaching practices, without considering new strategies previously inconceivable without the affordances of AI.
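To make the "150% increase" concrete: an increase of 150% means the later count equals the earlier count plus 1.5 times the earlier count, i.e., 2.5 times the earlier count. A small illustration with purely hypothetical numbers:

```python
# Hypothetical counts purely for illustration, not the study's data:
# a 150% increase means new = prior + 1.5 * prior = 2.5 * prior.
prior_period = 20                  # hypothetical studies in 2019-2020
increase = 1.5                     # 150% expressed as a fraction
new_period = prior_period + increase * prior_period
print(new_period)  # 50.0
```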

However, a further examination of the research from 2021 to 2022 suggests that new strategies are being considered. For example, Liu et al.’s (2022) study used AIEd to provide information on students’ interactions in an online environment and to examine their cognitive effort. Yao (2022) examined the use of AI to determine student emotions while learning.

RQ2. What departments were the first authors affiliated with, and what were the academic levels and subject domains in which AIEd research was being conducted?

Department affiliations

Data from the AIEd HE studies show that first authors were most frequently affiliated with colleges of education (28%), followed by computer science (20%). Figure 4 presents the 15 academic affiliations of the authors found in the studies. The wide variety of affiliations demonstrates the many ways AI can be used across educational disciplines, and how faculty in diverse areas, including tourism, music, and public affairs, were interested in how AI can be used for educational purposes.

Figure 4. Research affiliations

In an extant AIEd HE systematic review, Zawacki-Richter et al. (2019) titled their study Systematic review of research on artificial intelligence applications in higher education—where are the educators? The authors were keen to highlight that, of the AIEd studies in HE, only six percent were written by researchers directly connected to the field of education (i.e., from a college of education). They found a marked lack of attention to the pedagogical and ethical implications of implementing AI in HE and a need for more educational perspectives from the educators conducting this work. It appears from our data that educators are now showing greater interest in leading these research endeavors, with the largest affiliated group belonging to education. This may again be due to the pandemic, with those in the field of education needing to support faculty in other disciplines and/or needing to explore technologies for their own teaching during the lockdown. It may also be due to education professors becoming more familiar with AI tools, driven in part by increased societal attention to AI. As much of the research by education faculty focuses on teaching and learning, they are well positioned to share their research on the potential affordances of AIEd with faculty in other disciplines.

Academic levels

The a priori coding of academic levels shows that the majority of studies involved undergraduate students, with 99 of the 138 (72%) focused on these students, compared to 12 of 138 (9%) for graduate students. Some of the studies used AI for both academic levels; see Fig. 5.

Figure 5. Academic level distribution by number of articles

This high percentage of studies focused on the undergraduate population is congruent with an earlier AIEd HE systematic review (viz., Zawacki-Richter et al., 2019), which also reported student academic levels. The focus on undergraduate students may be due to the variety of affordances offered by AIEd, such as predictive analytics on dropouts and academic performance. These uses of AI may be less necessary for graduate students, who already have a record of performance from their undergraduate years. Another reason for this demographic focus may be convenience sampling, as researchers in HE typically have a much larger and more accessible undergraduate population than graduate population. This disparity between undergraduate and graduate populations is a concern, as AIEd has the potential to be valuable in both settings.

Subject domains

The studies were coded into 14 areas in HE: 13 subject domains and one category for AIEd used in the management of students; see Fig. 6. There is not a wide difference in the percentages of the top subject domains, with language learning at 17%, computer science at 16%, and engineering at 12%. The management of students category appeared third on the list at 14%. Prior studies have also found AIEd often used for language learning (viz., Crompton et al., 2021; Zawacki-Richter et al., 2019). These results differ, however, from Chu et al.’s (2022) findings, in which engineering dramatically led with 20 of the 50 studies while other subjects, such as language learning, appeared only once or twice. That study appears to be an outlier: while the searches were conducted in similar databases, it included only 50 studies from 1996 to 2020.

Figure 6. Subject domains of AIEd in HE

Previous scholars focusing on language learning primarily used AI for writing, reading, and vocabulary acquisition, drawing on the affordances of natural language processing and intelligent tutoring systems (e.g., Liang et al., 2021). This is similar to the findings in studies using AI for automated feedback on writing in a foreign language (Ayse et al., 2022) and for AI translation support (Al-Tuwayrish, 2016). The large use of AI for managerial activities in this systematic review focused on making predictions (12 studies) and on admissions (three studies). It is positive to see AI used to look across multiple databases for trends emerging from data that may not have been anticipated or cross-referenced before (Crompton et al., 2022). For example, to examine dropouts, researchers may consider class attendance but overlook other factors that appear unrelated. AI analysis can examine all factors and may find that dropping out is due to factors beyond class attendance.

RQ3. Who are the intended users of the AI technologies and what are the applications of AI in higher education?

Intended user of AI

Of the 138 articles, the a priori coding shows that 72% of the studies focused on Students, followed by Instructors at 17% and Managers at 11%; see Fig. 7. The studies provided examples of AI being used to support students, such as access to learning materials for inclusive learning (Gupta & Chen, 2022), immediate answers to student questions and self-testing opportunities (Yao, 2022), and instant personalized feedback (Mousavi et al., 2020).

Figure 7. Intended user

The data revealed a large emphasis on students in the use of AIEd in HE. This user focus differs from a recent systematic review on AIEd in K-12, which found that AIEd studies in K-12 settings prioritized teachers (Crompton et al., 2022). This may suggest that HE uses AI to focus more on students than K-12 does. However, the large number of student studies in HE may be due to the student population being more easily accessible to HE researchers, who may study their own students. The ethical review process is also typically much shorter in HE than in K-12. Therefore, the data on the intended focus should be reviewed with these alternative explanations in mind. It was interesting that Managers were the lowest focus both in K-12 and in this HE study. AI has great potential to collect, cross-reference, and examine data across large datasets in ways that yield actionable insight. More focus on the use of AI by managers would tap into this potential.

How is AI used in HE

Using grounded coding, the use of AIEd in each of the 138 articles was examined, and five major codes emerged from the data. These codes provide insight into how AI was used in HE. The five codes are: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. Each of these codes also has axial codes, which are secondary codes forming subcategories of the main category. Each code is delineated below with a figure providing further descriptive information and examples.

Assessment/evaluation

Assessment and Evaluation was the most common use of AIEd in HE. Within this code there were six axial codes, broken down into further codes; see Fig. 8. Automatic assessment was the most common, seen in 26 of the studies. Interestingly, this involved assessment not only of academic achievement but also of other factors, such as affect.

Figure 8. Codes and axial codes for assessment and evaluation

Automatic assessment was used to support a variety of learners in HE. As well as reducing the time it takes for instructors to grade (Rutner & Scott, 2022), automatic grading showed positive use for students with diverse needs. For example, Zhang and Xu (2022) used automatic assessment to improve the academic writing skills of Uyghur ethnic minority students living in China. Writing has a variety of cultural nuances, and in this study the students were shown to engage with the automatic assessment system behaviorally, cognitively, and affectively. This allowed the students to engage in self-regulated learning while improving their writing.

Feedback was a term often used in the studies, as students were given text and/or images as formative evaluation. Mousavi et al. (2020) developed a system to provide first-year biology students with automated personalized feedback tailored to each student’s specific demographics, attributes, and academic status. Drawing on AIEd’s unique ability to analyze multiple data sets involving many different students, AI was also used to assess and provide feedback on students’ group work (viz., Ouatik et al., 2021).

AI also supports instructors in generating questions and creating multiple-question tests (Yang et al., 2021). For example, Lu et al. (2021) used natural language processing to create a system that automatically generated tests. Following a Turing-type test, the researchers found that AI technologies can generate highly realistic short-answer questions. The ability of AI to develop multiple questions is a highly valuable affordance, as tests can take a great deal of time to create. However, instructors should always verify questions provided by the AI to ensure they are correct and match the learning objectives for the class, especially in high-value summative assessments.

Another axial code within assessment and evaluation revealed that AI was used to review activities in the online space. This included evaluating students’ reflections, achievement goals, community identity, and higher-order thinking (viz., Huang et al., 2021). Three studies used AIEd to evaluate educational materials, including general resources and textbooks (viz., Koć‑Januchta et al., 2022). It is interesting to see AI used to assess educational products rather than educational artifacts developed by students. While the process may be very similar in nature, this shows researchers thinking beyond the traditional use of AI for assessment to provide other affordances.

Predicting

Predicting was a common use of AIEd in HE, with 21 studies focused specifically on the use of AI to forecast trends in data. Ten axial codes emerged on the ways AI was used to predict different outcomes, with nine focused on predictions regarding students and one on predicting the future of higher education. See Fig. 9.

Figure 9. Predicting axial codes

Extant systematic reviews on HE highlighted the use of AIEd for prediction (viz., Chu et al., 2022; Hinojo-Lucena et al., 2019; Ouyang et al., 2022; Zawacki-Richter et al., 2019). Ten of the articles in this study used AI for predicting academic performance. Many of the axial codes overlapped, such as predicting at-risk students and predicting dropouts; however, each provided distinct affordances. An example is the study by Qian et al. (2021), which examined students taking a MOOC course. With the vast number of students enrolled, MOOCs can be challenging environments in which to gather information on individual students (Krause & Lowe, 2014). However, Qian et al. used AIEd to predict students’ future grades by inputting 17 different learning features, including past grades, into an artificial neural network. The model was able to predict students’ grades and highlight students at risk of dropping out of the course.
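As an illustrative sketch only, not Qian et al.'s model, the underlying idea can be shown with a single logistic neuron trained by gradient descent on two hypothetical learning features; Qian et al.'s network used 17 features and a multi-layer architecture, but the principle of mapping learning features to a risk estimate is the same.

```python
import math

# Toy sketch -- NOT Qian et al.'s model. A single logistic neuron
# trained by gradient descent to flag at-risk students from two
# hypothetical features (prior grade, activity level), both in [0, 1].

# (prior_grade, activity) -> label: 1 = at risk of dropping out
data = [((0.9, 0.8), 0), ((0.8, 0.9), 0), ((0.2, 0.1), 1),
        ((0.3, 0.2), 1), ((0.7, 0.6), 0), ((0.1, 0.3), 1)]

w = [0.0, 0.0]   # feature weights
b = 0.0          # bias
lr = 0.5         # learning rate

def predict(x):
    """Probability that the student is at risk."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

for _ in range(2000):            # gradient descent on log loss
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# A student with low prior grades and low activity is flagged at risk.
print(predict((0.25, 0.15)) > 0.5)  # True
```

A real system would add held-out validation and far richer features (clickstream activity, assignment timeliness, forum participation), but the output is the same kind of risk score that lets instructors intervene before a student drops out.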

In a systematic review on AIEd within the K-12 context (viz., Crompton et al., 2022), prediction was less pronounced in the findings. In the K-12 setting, there was a brief mention of the use of AI in predicting student academic performance. One of the studies mentioned students at risk of dropping out, but this was immediately followed by questions about privacy concerns and a description of such data as “sensitive.” The predictive uses found in this HE systematic review cover a wide range of affordances. Sensitivity regarding student data is still important in a HE setting, but it is positive to see the valuable insight prediction provides, which can be used to keep students from failing in their goals.

AI assistant

The studies evaluated in this review indicated that the AI Assistants used to support learners went by a variety of names, including virtual assistant, virtual agent, intelligent agent, intelligent tutor, and intelligent helper. Crompton et al. (2022) described the differences between these terms as reflecting the way the AI appears to the user: for example, whether there is an anthropomorphic presence, such as an avatar, or whether the AI provides support by other means, such as text prompts. The findings of this systematic review align with Crompton et al.’s (2022) descriptive differences among AI Assistants. Furthermore, this code included studies that provide assistance to students but may not have specifically used the word assistant, including the use of chatbots for student outreach, answering questions, and providing other help. See Fig. 10 for the axial codes for AI Assistant.

Figure 10. AI assistant axial codes

Many of these assistants offered multiple supports to students, such as Alex, the AI described as a virtual change agent in Kim and Bennekin’s (2016) study. Alex interacted with students in a college mathematics course by asking diagnostic questions and provided support depending on student needs. Alex’s support was organized into four stages: (1) goal initiation (“Want it”), (2) goal formation (“Plan for it”), (3) action control (“Do it”), and (4) emotion control (“Finish it”). Alex responded according to which of these four areas a student needed help with. These messages aimed to encourage persistence in pursuing studies and degree programs and to improve performance.

The role of AI in providing assistance connects back to the seminal work of Vygotsky (1978) and the Zone of Proximal Development (ZPD). The ZPD highlights the degree to which students can rapidly develop when assisted, and Vygotsky described this assistance as typically coming from a person. With technological advancements, the AI assistants in these studies now provide that support. The affordances of AI can also ensure that support is timely, without waiting for a person to be available, and can take into account students’ academic ability, preferences, and the best strategies for supporting them. These features were evident in Kim and Bennekin’s (2016) study using Alex.

Intelligent tutoring system

The use of Intelligent Tutoring Systems (ITS) was revealed in the grounded coding. An ITS is an adaptive instructional system that combines AI techniques with educational methods, customizing educational activities and strategies based on students’ characteristics and needs (Mousavinasab et al., 2021). While ITS may be an anticipated finding in AIEd HE systematic reviews, it was interesting that similar extant reviews did not always describe its use in HE. For example, Ouyang et al. (2022) included “intelligent tutoring system” in their search terms, describing it as a common technique, yet ITS was not mentioned again in the paper. Zawacki-Richter et al. (2019), on the other hand, noted ITS as one of the four overarching uses of AIEd in HE, and Chu et al. (2022) adopted Zawacki-Richter’s four uses of AIEd for their recent systematic review.

In this systematic review, 18 studies specifically mentioned that they were using an ITS. The ITS code did not necessitate axial codes, as the systems performed the same type of function in HE, namely providing adaptive instruction to students. For example, de Chiusole et al. (2020) developed Stat-Knowlab, an ITS that determines the level of competence and best learning path for each student. Stat-Knowlab thus personalizes students’ learning and provides only the educational activities that a student is ready to learn, monitoring the evolution of the learning process as the student interacts with the system. In another study, Khalfallah and Slama (2018) built an ITS called LabTutor for engineering students. LabTutor served as an experienced instructor, enabling students to access and perform experiments on laboratory equipment while adapting to the profile of each student.

The student population in university classes can run into the hundreds, and with the advent of MOOCs, class sizes can even reach the thousands. Even in a small class of 20 students, an instructor cannot physically provide immediate, unique, personalized questions to each student: instructors need time to read and check answers, and then further time to provide feedback before determining what the next question should be. Working with the instructor, AIEd can provide that immediate instruction, guidance, feedback, and follow-up questioning without delay or fatigue. This appears to be an effective use of AIEd, especially within the HE context.

Managing student learning

Another code that emerged in the grounded coding focused on the use of AI for managing student learning. AI is used by administrators and instructors to manage student learning, providing information, organization, and data analysis. The axial codes reveal the trends in the use of AI in managing student learning; see Fig. 11.

Figure 11

Learning analytics was an a priori term often found in the studies, describing “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Long & Siemens, 2011, p. 34). The studies investigated in this systematic review spanned grades and subject areas and provided administrators and instructors with different types of information to guide their work. One such study was conducted by Mavrikis et al. (2019), who described learning analytics as teacher assistance tools. In their study, learning analytics were used in an exploratory learning environment with targeted visualizations supporting classroom orchestration. These visualizations, displayed as screenshots in the study, provided information such as the interactions between students and their goal achievement. They appear similar to infographics that are brightly colored and quickly draw the eye to pertinent information. AI was also used for other tasks, such as organizing the sequence of curriculum in pacing guides for future groups of students and designing instruction. Zhang (2022) described how designing an AI teaching system for talent cultivation, and using its digital affordances to establish a quality assurance system for practical teaching, provides new mechanisms for the design of university education systems. In developing such a system, Zhang found that the stability of the AI-driven instructional design overcame the subjectivity inherent in traditional manual instructional design.

Another trend that emerged from the studies was the use of AI to manage student big data to support learning. Ullah and Hafiz (2022) lament that with traditional methods, including non-AI digital techniques, it is very difficult for an instructor to pay attention to every student’s learning progress, and that big data analysis techniques are therefore needed. The ability to look across and within large data sets to inform instruction is a valuable affordance of AIEd in HE. While the use of AIEd to manage student learning emerged from the data, this study uncovered only 19 studies in seven years (2016–2022) focused on the use of AIEd to manage student data. This limited use was also noted in a recent study in the K-12 space (Crompton et al., 2022). In Chu et al.’s (2022) study examining the 50 most cited AIEd articles, the use of AIEd for managing student data was not reported among the top uses of AIEd in HE. It would appear that more research should be conducted in this area to fully explore the possibilities of AI.

Gaps and future research

From this systematic review, six gaps emerged in the data, providing opportunities for future studies to investigate and provide a fuller understanding of how AIEd can be used in HE. (1) The majority of the research was conducted in high-income countries, revealing a paucity of research in developing countries. More research should be conducted in these countries to expand our understanding of how AI can enhance learning in under-resourced communities. (2) Almost 50% of the studies were conducted in the areas of language learning, computer science, and engineering. Research conducted by members of multiple, different academic departments would help to advance knowledge of the use of AI in more disciplines. (3) This study revealed that faculty affiliated with schools of education are taking an increasing role in researching the use of AIEd in HE. As this body of knowledge grows, faculty in schools of education should share their research on the pedagogical affordances of AI so that this knowledge can be applied by faculty across disciplines. (4) The vast majority of the research was conducted at the undergraduate level. More research needs to be done at the graduate level, as AI provides many opportunities in this environment. (5) Little study was done regarding how AIEd can assist instructors and managers in their roles in HE; the power of AI to assist both groups warrants further research. (6) Finally, much of the research investigated in this systematic review revealed AIEd being used in traditional ways that enhance or make current practices more efficient. More research needs to focus on the unexplored affordances of AIEd. As AI becomes more advanced and sophisticated, new opportunities will arise for AIEd, and researchers need to be at the forefront of these possible innovations.

In addition, empirical exploration is needed of new tools, such as ChatGPT, which became available for public use at the end of 2022. Given the time it takes for a peer-reviewed journal article to be published, ChatGPT did not appear in the articles for this study. What is interesting is that it could fit with a variety of the use codes found in this study, with students getting support in writing papers and instructors using ChatGPT to assess student work and to help write emails or descriptions for students. It would be pertinent for researchers to explore ChatGPT.

Limitations

The findings of this study show a rapid increase in the number of AIEd studies published in HE. However, to ensure a level of credibility, this study only included peer-reviewed journal articles, which take months to publish. Therefore, conference proceedings and gray literature such as blogs and summaries may reveal further findings not explored in this study. In addition, the articles in this study were all published in English, which excluded findings from research published in other languages.

Conclusion

In response to the calls by Hinojo-Lucena et al. (2019), Chu et al. (2022), and Zawacki-Richter et al. (2019), this study provides unique findings with an up-to-date examination of the use of AIEd in HE from 2016 to 2022. Past systematic reviews examined the research only up to 2020. The findings of this study show that in 2021 and 2022, publications rose to nearly two to three times the number in previous years. With this rapid rise in the number of AIEd HE publications, new trends have emerged.

The findings show that of the 138 studies examined, research was conducted in six of the seven continents of the world. Extant systematic reviews showed that the US led by a large margin in the number of studies published; this trend has now shifted to China. Another shift in AIEd HE is that while extant studies lamented the lack of professors of education leading this research, this systematic review found education to be the most common department affiliation at 28%, with computer science second at 20%. Undergraduate students were the most studied population at 72%. Similar to the findings of other studies, language learning was the most common subject domain, including writing, reading, and vocabulary acquisition. In examining who the AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers.

Grounded coding was used to answer the overarching question of how AIEd was used in HE. Five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. Assessment and evaluation served a wide variety of purposes, including assessing academic progress and student emotions towards learning, individual and group evaluations, and class-based online community assessments. Predicting emerged as a code with ten axial codes, as AIEd predicted dropouts and at-risk students, innovative ability, and career decisions. AI Assistants were specific to supporting students in HE. These assistants included those with an anthropomorphic presence, such as virtual agents, and persuasive intervention through digital programs. ITSs were not always noted in extant systematic reviews but were specifically mentioned in 18 of the studies in this review. ITSs in this study provided strategies and approaches customized to students' characteristics and needs. The final code in this study highlighted the use of AI in managing student learning, including learning analytics, curriculum sequencing, instructional design, and clustering of students.

The findings of this study provide a springboard for future academics, practitioners, computer scientists, policymakers, and funders in understanding the state of the field in AIEd HE and how AI is used. It also provides actionable items to ameliorate gaps in current understanding. As the use of AIEd will only continue to grow, this study can serve as a baseline for further research on the use of AIEd in HE.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Alajmi, Q., Al-Sharafi, M. A., & Abuali, A. (2020). Smart learning gateways for Omani HEIs towards educational technology: Benefits, challenges and solutions. International Journal of Information Technology and Language Studies, 4 (1), 12–17.

Al-Tuwayrish, R. K. (2016). An evaluative study of machine translation in the EFL scenario of Saudi Arabia. Advances in Language and Literary Studies, 7 (1), 5–10. https://doi.org/10.7575/aiac.alls.v.7n.1p.5

Ayse, T., & Nil, G. (2022). Automated feedback and teacher feedback: Writing achievement in learning English as a foreign language at a distance. The Turkish Online Journal of Distance Education, 23 (2), 120–139.

Baykasoğlu, A., Özbel, B. K., Dudaklı, N., Subulan, K., & Şenol, M. E. (2018). Process mining based approach to performance evaluation in computer-aided examinations. Computer Applications in Engineering Education, 26 (5), 1841–1861. https://doi.org/10.1002/cae.21971

Belur, J., Tompson, L., Thornton, A., & Simon, M. (2018). Interrater reliability in systematic review methodology: Exploring variation in coder decision-making. Sociological Methods & Research, 13 (3), 004912411887999. https://doi.org/10.1177/0049124118799372

Çağataylı, M., & Çelebi, E. (2022). Estimating academic success in higher education using big five personality traits, a machine learning approach. Arab Journal Scientific Engineering, 47 , 1289–1298. https://doi.org/10.1007/s13369-021-05873-4

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8 , 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510

Chu, H., Tu, Y., & Yang, K. (2022). Roles and research trends of artificial intelligence in higher education: A systematic review of the top 50 most-cited articles. Australasian Journal of Educational Technology, 38 (3), 22–42. https://doi.org/10.14742/ajet.7526

Cristianini, N. (2016). Intelligence reinvented. New Scientist, 232 (3097), 37–41. https://doi.org/10.1016/S0262-4079(16)31992-3

Crompton, H., Bernacki, M. L., & Greene, J. (2020). Psychological foundations of emerging technologies for teaching and learning in higher education. Current Opinion in Psychology, 36 , 101–105. https://doi.org/10.1016/j.copsyc.2020.04.011

Crompton, H., & Burke, D. (2022). Artificial intelligence in K-12 education. SN Social Sciences, 2 , 113. https://doi.org/10.1007/s43545-022-00425-5

Crompton, H., Jones, M., & Burke, D. (2022). Affordances and challenges of artificial intelligence in K-12 education: A systematic review. Journal of Research on Technology in Education . https://doi.org/10.1080/15391523.2022.2121344

Crompton, H., & Song, D. (2021). The potential of artificial intelligence in higher education. Revista Virtual Universidad Católica Del Norte, 62 , 1–4. https://doi.org/10.35575/rvuen.n62a1

de Chiusole, D., Stefanutti, L., Anselmi, P., & Robusto, E. (2020). Stat-Knowlab. Assessment and learning of statistics with competence-based knowledge space theory. International Journal of Artificial Intelligence in Education, 30 , 668–700. https://doi.org/10.1007/s40593-020-00223-1

Dever, D. A., Azevedo, R., Cloude, E. B., & Wiedbusch, M. (2020). The impact of autonomy and types of informational text presentations in game-based environments on learning: Converging multi-channel processes data and learning outcomes. International Journal of Artificial Intelligence in Education, 30 (4), 581–615. https://doi.org/10.1007/s40593-020-00215-1

Górriz, J. M., Ramírez, J., Ortíz, A., Martínez-Murcia, F. J., Segovia, F., Suckling, J., Leming, M., Zhang, Y. D., Álvarez-Sánchez, J. R., Bologna, G., Bonomini, P., Casado, F. E., Charte, D., Charte, F., Contreras, R., Cuesta-Infante, A., Duro, R. J., Fernández-Caballero, A., Fernández-Jover, E., … Ferrández, J. M. (2020). Artificial intelligence within the interplay between natural and artificial computation: Advances in data science, trends and applications. Neurocomputing, 410 , 237–270. https://doi.org/10.1016/j.neucom.2020.05.078

Gough, D., Oliver, S., & Thomas, J. (2017). An introduction to systematic reviews (2nd ed.). Sage.

Gupta, S., & Chen, Y. (2022). Supporting inclusive learning using chatbots? A chatbot-led interview study. Journal of Information Systems Education, 33 (1), 98–108.

Hemingway, P. & Brereton, N. (2009). In Hayward Medical Group (Ed.). What is a systematic review? Retrieved from http://www.medicine.ox.ac.uk/bandolier/painres/download/whatis/syst-review.pdf

Hinojo-Lucena, F., Aznar-Díaz, I., Cáceres-Reche, M., & Romero-Rodríguez, J. (2019). Artificial intelligence in higher education: A bibliometric study on its impact in the scientific literature. Education Sciences, 9 (1), 51. https://doi.org/10.3390/educsci9010051

Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., Jaldemark, J., Ryberg, T., Öberg, L.-M., Fuentes, A., Gustafsson, U., Humble, N., Mozelius, P., Sundgren, M., & Utterberg, M. (2019). Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigital Science and Education, 1 (2), 427–445. https://doi.org/10.1007/s42438-019-00046-x

Huang, C., Wu, X., Wang, X., He, T., Jiang, F., & Yu, J. (2021). Exploring the relationships between achievement goals, community identification and online collaborative reflection. Educational Technology & Society, 24 (3), 210–223.

Hwang, G. J., & Tu, Y. F. (2021). Roles and research trends of artificial intelligence in mathematics education: A bibliometric mapping analysis and systematic review. Mathematics, 9 (6), 584. https://doi.org/10.3390/math9060584

Khalfallah, J., & Slama, J. B. H. (2018). The effect of emotional analysis on the improvement of experimental e-learning systems. Computer Applications in Engineering Education, 27 (2), 303–318. https://doi.org/10.1002/cae.22075

Kim, C., & Bennekin, K. N. (2016). The effectiveness of volition support (VoS) in promoting students’ effort regulation and performance in an online mathematics course. Instructional Science, 44 , 359–377. https://doi.org/10.1007/s11251-015-9366-5

Koć-Januchta, M. M., Schönborn, K. J., Roehrig, C., Chaudhri, V. K., Tibell, L. A. E., & Heller, C. (2022). “Connecting concepts helps put main ideas together”: Cognitive load and usability in learning biology with an AI-enriched textbook. International Journal of Educational Technology in Higher Education, 19 (11), 11. https://doi.org/10.1186/s41239-021-00317-3

Krause, S. D., & Lowe, C. (2014). Invasion of the MOOCs: The promise and perils of massive open online courses . Parlor Press.

Li, D., Tong, T. W., & Xiao, Y. (2021). Is China emerging as the global leader in AI? Harvard Business Review. https://hbr.org/2021/02/is-china-emerging-as-the-global-leader-in-ai

Liang, J. C., Hwang, G. J., Chen, M. R. A., & Darmawansah, D. (2021). Roles and research foci of artificial intelligence in language education: An integrated bibliographic analysis and systematic review approach. Interactive Learning Environments . https://doi.org/10.1080/10494820.2021.1958348

Liu, S., Hu, T., Chai, H., Su, Z., & Peng, X. (2022). Learners’ interaction patterns in asynchronous online discussions: An integration of the social and cognitive interactions. British Journal of Educational Technology, 53 (1), 23–40. https://doi.org/10.1111/bjet.13147

Long, P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and education. Educause Review, 46 (5), 31–40.

Lu, O. H. T., Huang, A. Y. Q., Tsai, D. C. L., & Yang, S. J. H. (2021). Expert-authored and machine-generated short-answer questions for assessing students learning performance. Educational Technology & Society, 24 (3), 159–173.

Mavrikis, M., Geraniou, E., Santos, S. G., & Poulovassilis, A. (2019). Intelligent analysis and data visualization for teacher assistance tools: The case of exploratory learning. British Journal of Educational Technology, 50 (6), 2920–2942. https://doi.org/10.1111/bjet.12876

Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., & Stewart, L. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4 (1), 1–9. https://doi.org/10.1186/2046-4053-4-1

Mousavi, A., Schmidt, M., Squires, V., & Wilson, K. (2020). Assessing the effectiveness of student advice recommender agent (SARA): The case of automated personalized feedback. International Journal of Artificial Intelligence in Education, 31 (2), 603–621. https://doi.org/10.1007/s40593-020-00210-6

Mousavinasab, E., Zarifsanaiey, N., Kalhori, S. R. N., Rakhshan, M., Keikha, L., & Saeedi, M. G. (2021). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 29 (1), 142–163. https://doi.org/10.1080/10494820.2018.1558257

Ouatik, F., Ouatik, F., Fadli, H., Elgorari, A., Mohadab, M. E. L., Raoufi, M., et al. (2021). E-Learning & decision making system for automate students assessment using remote laboratory and machine learning. Journal of E-Learning and Knowledge Society, 17 (1), 90–100. https://doi.org/10.20368/1971-8829/1135285

Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011–2020. Education and Information Technologies, 27 , 7893–7925. https://doi.org/10.1007/s10639-022-10925-9

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T., Mulrow, C., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. British Medical Journal . https://doi.org/10.1136/bmj.n71

Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12 (22), 1–13. https://doi.org/10.1186/s41039-017-0062-8

PRISMA Statement. (2021). PRISMA endorsers. PRISMA statement website. http://www.prisma-statement.org/Endorsement/PRISMAEndorsers

Qian, Y., Li, C.-X., Zou, X.-G., Feng, X.-B., Xiao, M.-H., & Ding, Y.-Q. (2022). Research on predicting learning achievement in a flipped classroom based on MOOCs by big data analysis. Computer Applications in Engineering Education, 30 , 222–234. https://doi.org/10.1002/cae.22452

Rutner, S. M., & Scott, R. A. (2022). Use of artificial intelligence to grade student discussion boards: An exploratory study. Information Systems Education Journal, 20 (4), 4–18.

Salas-Pilco, S., & Yang, Y. (2022). Artificial Intelligence application in Latin America higher education: A systematic review. International Journal of Educational Technology in Higher Education, 19 (21), 1–20. https://doi.org/10.1186/S41239-022-00326-w

Saldana, J. (2015). The coding manual for qualitative researchers (3rd ed.). Sage.

Shukla, A. K., Janmaijaya, M., Abraham, A., & Muhuri, P. K. (2019). Engineering applications of artificial intelligence: A bibliometric analysis of 30 years (1988–2018). Engineering Applications of Artificial Intelligence, 85 , 517–532. https://doi.org/10.1016/j.engappai.2019.06.010

Strauss, A., & Corbin, J. (1995). Grounded theory methodology: An overview. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 273–285). Sage.

Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42 (1), 230–265.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59 , 433–460.

Ullah, H., & Hafiz, M. A. (2022). Exploring effective classroom management strategies in secondary schools of Punjab. Journal of the Research Society of Pakistan, 59 (1), 76.

Verdú, E., Regueras, L. M., Gal, E., et al. (2017). Integration of an intelligent tutoring system in a course of computer network design. Educational Technology Research and Development, 65 , 653–677. https://doi.org/10.1007/s11423-016-9503-0

Vygotsky, L. S. (1978). Mind and society: The development of higher psychological processes . Harvard University Press.

Winkler-Schwartz, A., Bissonnette, V., Mirchi, N., Ponnudurai, N., Yilmaz, R., Ledwos, N., Siyar, S., Azarnoush, H., Karlik, B., & Del Maestro, R. F. (2019). Artificial intelligence in medical education: Best practices using machine learning to assess surgical expertise in virtual reality simulation. Journal of Surgical Education, 76 (6), 1681–1690. https://doi.org/10.1016/j.jsurg.2019.05.015

Yang, A. C. M., Chen, I. Y. L., Flanagan, B., & Ogata, H. (2021). Automatic generation of cloze items for repeated testing to improve reading comprehension. Educational Technology & Society, 24 (3), 147–158.

Yao, X. (2022). Design and research of artificial intelligence in multimedia intelligent question answering system and self-test system. Advances in Multimedia . https://doi.org/10.1155/2022/2156111

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16 (1), 1–27. https://doi.org/10.1186/s41239-019-0171-0

Zhang, F. (2022). Design and application of artificial intelligence technology-driven education and teaching system in universities. Computational and Mathematical Methods in Medicine . https://doi.org/10.1155/2022/8503239

Zhang, Z., & Xu, L. (2022). Student engagement with automated feedback on academic writing: A study on Uyghur ethnic minority students in China. Journal of Multilingual and Multicultural Development . https://doi.org/10.1080/01434632.2022.2102175

Acknowledgements

The authors would like to thank Mildred Jones, Katherina Nako, Yaser Sendi, and Ricardo Randall for data gathering and organization.

Author information

Authors and Affiliations

Department of Teaching and Learning, Old Dominion University, Norfolk, USA

Helen Crompton

ODUGlobal, Norfolk, USA

Diane Burke

RIDIL, ODUGlobal, Norfolk, USA

Contributions

HC: Conceptualization; Data curation; Formal analysis; Methodology; Project administration; original draft; and review & editing. DB: Conceptualization; Data curation; Formal analysis; Methodology; Project administration; original draft; and review & editing. Both authors read and approved this manuscript.

Corresponding author

Correspondence to Helen Crompton .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Crompton, H., Burke, D. Artificial intelligence in higher education: the state of the field. Int J Educ Technol High Educ 20 , 22 (2023). https://doi.org/10.1186/s41239-023-00392-8

Received : 30 January 2023

Accepted : 23 March 2023

Published : 24 April 2023

DOI : https://doi.org/10.1186/s41239-023-00392-8


  • Artificial Intelligence
  • Systematic review
  • Higher education

Educating in a World of Artificial Intelligence

  • Posted February 9, 2023
  • By Jill Anderson
  • Learning Design and Instruction
  • Teachers and Teaching
  • Technology and Media

Senior Researcher Chris Dede isn't overly worried about growing concerns over generative artificial intelligence, like ChatGPT, in education. As a longtime researcher on emerging technologies, he's seen many decades in which new technologies promised to upend the field. Instead, Dede says artificial intelligence requires educators to get smarter about how they teach in order to truly take advantage of what AI has to offer. “The trick about AI is that to get it, we need to change what we're educating people for because if you educate people for what AI does well, you're just preparing them to lose to AI. But if you educate them for what AI can't do, then you've got IA [Intelligence Augmentation],” he says. Dede, the associate director of research for the National AI Institute for Adult Learning and Online Education, says AI raises the bar and has the power to impact learning in profound ways.

In this episode of the Harvard EdCast, Dede talks about how the field of education needs to evolve and get smarter, in order to work with — not against — artificial intelligence. 

ADDITIONAL RESOURCES

  • Dede's keynote on Intelligence Augmentation , delivered at an AI and Education conference
  • Brief on Intelligence Augmentation, co-authored by Dede for HGSE’s Next Level Lab

Jill Anderson:  I'm Jill Anderson. This is the Harvard EdCast. 

Chris Dede thinks we need to get smarter about using artificial intelligence and education. He has spent decades exploring emerging learning technologies as a Harvard researcher. The recent explosion of generative AI, like ChatGPT, has been met with mixed reactions in education. Some public school districts have banned it. Some colleges and universities have tweaked their teaching and learning already. 

Chris Dede: I've actually been working with AI for more than half a century. Way back when I was a graduate student, I read the first article on AI in education, which was published in 1970. And the author confidently predicted that we wouldn't need teachers within five or six years because AI was going to do everything. And of course, we still see predictions like that today. 

But having lived through nine hype cycles for AI, I'm both impressed by how much it's advanced, but I'm also wary about elaborate claims for it. And there is a lot of excitement now about generative AI, the term that people are using, which includes programs like ChatGPT. It includes things like DALL-E that are capable of creating images. It includes, really, AI on its own doing performances that we previously would have thought were something that people would have to do. 

But it's interesting to compare ChatGPT to a search engine. And people don't remember this, but there was a time when-- before search engines when people really struggled to find resources, and there was enormous excitement when search engines came out. And search engines are, in fact, AI. They are based on AI at the back end, coming up with lists of things that hopefully match what you typed in. In fact, the problem with the search engine becomes not trying to find anything, but trying to filter everything to decide what's really useful. 

So you can think of ChatGPT as the next step beyond a search engine where instead of getting a list of things and then you decide which might be useful and you examine them, you get an answer that says, this is what I think you want. And that is really more the AI taking charge than it is the AI saying, I can help you. Here's some things that you might look at and decide about. That makes me wary because AI is not at a stage where it really understands what it's saying. 

And so it will make up things when it doesn't know them, kind of a not very good student seeing if they can fake out the teacher. And it will provide answers that are not customized to somebody's culture or to somebody's reading level or to somebody's other characteristics. So it's really quite limited. 

I know that Harvard has sent some wording out that I've now put into my syllabi about students being welcome to use whatever tools they want. But when they present something as their work, it has to be something that they wrote themselves. It can't be something that somebody else wrote, which is classic plagiarism. It can't be something that Chat AI wrote that they're presenting as their work and so on. I think that what Chat AI does is it raises the bar for human performance. 

I know a lot about what people are going through now in terms of job interviews because my older daughter is an HR manager, and my younger daughter just graduated. And she's having a lot of job interviews. And in contrast to earlier times, now, job interviews typically involve a performance. 

If you're going to be hired for a marketing position, they'll say bring in a marketing plan when we do our face-to-face interview on this, and we'll evaluate it. Or in her case, in mechanical engineering, they say when you come in, there's this system that you're going to have a chance to debug, and we'll see how well you do it. Those employers are going to type the same thing into Chat AI. And if someone comes in with something that isn't any better than Chat AI, they're not going to get hired because why hire somebody that can't outcompete a free resource? 

Jill Anderson:  Oh interesting. 

Chris Dede: So it raises the bar for human performance in an interesting way. 

Jill Anderson:  Your research looks at something called intelligence augmentation. I want to know what that means and how that's different from artificial intelligence. 

Chris Dede: Intelligence augmentation is really about the opposite of this sort of negative example I was describing where now you've got to outthink Chat AI if you want to get a job. It says, when is the whole more than the sum of the parts? When do a person and AI working together do things that neither one could do as well on their own? 

And often, people think, well, yeah, I can see a computer programmer, there might be intelligence augmentation because I know that machines can start to do programming. What they don't realize is that it applies to a wide range of jobs, including mine, as a college professor. So I am the associate director for research in a national AI institute funded by the National Science Foundation on adult learning and online education. And one of the things the Institute is building is AI assistants for college faculty. 

So there's question answering assistants to help with student questions, and there's tutoring assistants and library assistants and laboratory assistants. There's even a social assistant that can help students in a large class meet other students who might be good learning partners. So now, as a professor, I'm potentially surrounded by all these assistants who are doing parts of my job, and I can be deskilled by that, which is a bad future. You sort of end up working for the assistant where they say, well, here's a question I can't answer. 

So you have to do it. Or you can upskill because the assistant is taking over routine parts of the job. And in turn, you can focus much more deeply on personalization to individual students, on bringing in cultural dimensions and equity dimensions that AI does not understand and cannot possibly help with. The trick about AI is that to get it, we need to change what we're educating people for because if you educate people for what AI does well, you're just preparing them to lose to AI. But if you educate them for what AI can't do, then you've got IA. 

Jill Anderson:  So that's the goal here. We have to change the way that we're educating young people, even older people at this point. I mean, everybody needs to change the way that they're learning about these things and interacting with them. 

Chris Dede: They do. And we're hampered by our system of assessment because the assessments that we use, including Harvard with the GRE and the SAT and so on, those are what AI does well. AI can score really well on psychometric tests. So we're using the wrong measure, if you will. We need to use performance assessments to measure what people can do to get into places like Harvard or higher education in general because that's emphasizing the skills that are going to be really useful for them. 

Jill Anderson:  You mentioned at the start artificial intelligence isn't really something brand new. This has been around for decades, but we're so slow to adapt and prepare and alter the way that we do things that once it reaches kind of the masses, we're already behind. 

Chris Dede:  Well, we are. And the other part of it is that we keep putting old wine in new bottles. I mean, this is — if I had to write a headline for the entire history of educational technology, it would be old wine in new bottles. But we don't understand what the new bottle really means. 

So let me give you an example of something that I think generative AI could make a big difference, be very powerful, but I'm not seeing it discussed in all the hype about generative AI. And that is evidence-based modeling for local decisions. So let's take climate change. 

One of the problems with climate change is that let's say that you're in Des Moines, Iowa, and you read about all this flooding in California. And you say to yourself, well, I'm not next to an ocean. I don't live in California. And I don't see why I should be that worried about this stuff. 

Now, no one has done a study, I assume, of flooding in Des Moines, Iowa, in 2050 based on mid-level projections about climate change. But with generative AI, we can estimate that now. 

Generative AI can reach out across topographic databases, meteorological databases, and other related databases to come up with here's the parts of Des Moines that are going to go underwater in 2050 and here's how often this is going to happen if these models are correct. That really changes the dialogue about climate change because now you're talking about wait a minute.  You mean that park I take my kids to is going to have a foot of water in it? So I think that kind of evidence-based modeling is not something that people are doing with generative AI right now, but it's perfectly feasible. And that's the new wine that we can put in the new bottle. 

Jill Anderson:  That's really a great way to use that. I mean, and you could even use that in your classroom. Something that you said a long, long time ago was that — and this is paraphrasing — the idea that we often implement new technology, and we make this mistake of focusing on students first rather than teachers. 

Chris Dede:  In December, I gave a keynote at a conference called Empowering Learners for the Age of AI that has been held the last few years. And one of the things I talked about was the shift from teaching to learning. Both are important, but teaching is ultimately sort of pouring knowledge into the minds of learners. And learning is much more open-ended, and it's essential for the future because every time you need to learn something new, you can't afford to go back and have another master's degree. You need to be able to do self-directed learning. 

And where AI can be helpful with this is that AI can be like an intellectual partner, even when you don't have a teacher that can help you learn in different ways. One of the things that I've been working on with a professor at the Harvard Business School is AI systems that can help you learn negotiation. 

Now, the AI can't be the person you're negotiating with. AI is not good at playing human beings — not yet and not for quite a long time, I think. But what AI can do is to create a situation where a human being can play three people at once. So here you are. You're learning how to negotiate a raise. 

You go into a virtual conference room. There's three virtual people who are three bosses. There's one simulation specialist behind all three, and you negotiate with them. And then at the end, the system gives you some advice on what you did well and not so well. 

And if you have a human mentor, that person gives you advice as well. Rhonda Bondie, who was a professor at HGSE until she moved to Hunter College, and I have published five articles on the work we did for HGSE's Reach Every Reader project on using this kind of digital puppeteering to help teachers practice equitable discussion leading. So again, here's something that people aren't talking about, where AI on the front end can create rich, evocative situations, and AI and machine learning on the back end can find really interesting patterns for improvement. 

Jill Anderson:  You know, Chris, how hard is it to get there for educators? 

Chris Dede: I think, in part, that's what these national AI institutes are about. Our institute, which is really adult learning with a workplace focus, is looking at that part of the spectrum. There's another institute whose focus is middle school and high school and developing AI partners for students where the student and the partner are learning together in a different kind of IA. There's a third Institute that's looking at narrative and storytelling as a powerful form of education and how can AI help with narrative and storytelling. 

You can imagine sitting down. Mom and dad aren't around. You've got a storybook like Goldilocks and the Three Bears, and you've got something like Alexa that can listen to what you're reading and respond. 

And so you begin, and you say, Goldilocks went out of her house one day and went into the woods and got lost. And Alexa says, why do you think Goldilocks went into the woods? Was she a naughty girl? No. Or was she an adventurous girl, or was she deeply concerned about climate change and wanting to study ecosystems? 

I mean, I'm being playful about this, but I think the point is that AI doesn't understand any of the questions that it's asking. But it can ask the questions, and then the child can start to think deeper than just regurgitating the story. So there's all sorts of possibilities here that we just have to think of as new wine, instead of asking how AI can automate our old thinking about teaching and learning. 

Jill Anderson:  I've been hearing a lot of concern about writing in particular -- writing papers where young people are actually expressing their own ideas -- and concerns about plagiarism and cheating, which I would say have long existed as challenges in education and aren't really new. Does AI really change this? And how might higher ed or any educator look at this differently? 

Chris Dede:  So I think where AI changes this is it helps us understand the kind of writing that we should be teaching versus the kind of writing that we are teaching. So I remember preparing my children for the SAT, and it used to have something called the essay section. And you had to write this very formal essay that was a certain number of paragraphs, and the topic sentences each had to do this and so on. 

Nobody in the world writes those kinds of essays in the real world. They're just like an academic exercise. And of course, AI now can do that beautifully. 

But any reporter will tell you that they could never use Chat AI to write their stories, because stories are what they write. They write narratives. If you just turn in a description, you'll be fired from your reportorial job, because no one is interested in descriptions. They want a story. 

So giving students a description and teaching them to turn it into a story, or to turn it into something else that has a human and creative dimension to it -- how would you write this for a seventh-grader who doesn't have much experience of the world? How would you write this for somebody in Russia? Building on the foundation of what AI gives you and taking it in directions that only people can -- that's where writing should be going. 

And of course, good writing teachers will tell you, well, that's nothing new. I've been teaching my students how to write descriptive essays. The people who are most qualified to talk about the limits of AI are the ones who teach what the AI is supposedly doing. 

Jill Anderson:  So do you have any helpful tips for educators regardless of what level they're working at on where to kind of begin embracing this technology? 

Chris Dede: What AI can do well is what's called reckoning, which is calculative prediction. And I've given some examples of that with flooding in Des Moines and other kinds of things. And what people do is practical wisdom, if you will, and it involves culture and ethics and what it's like to be embodied and to have the biological things that are part of human nature and so on. 

So when I look at what I'm teaching, I have to ask myself, how much of what I'm teaching is reckoning? Because there, I'm preparing people to lose to AI. And how much of what I'm teaching is practical wisdom? 

So for example, we spend a lot of time in vocational technical education and standard academic education teaching people to factor. How do you factor these complex polynomials? 

There is no workplace anywhere in the world, even in the most primitive possible conditions, where anybody makes a living by factoring. It's an app. It's an app on a phone. Should you know a little bit about factoring so it's not magic? Sure. 

Should you become fluent in factoring? Absolutely not. It's on the wrong side of the equation. So I think teachers and curriculum developers and assessors and stakeholders in the outcomes of education need to ask themselves: what is being taught now, and which parts of it are shifting over to AI? How do we include enough about those parts that AI isn't magic? And how do we change the balance of our focus to be more on the practical wisdom side? 
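Dede's point that factoring is routine "reckoning" is easy to make concrete: a few lines of code mechanise the search a student would otherwise do by hand. A toy sketch (not any particular app), using the rational root theorem for polynomials with integer roots:

```python
def integer_roots(coeffs):
    """Find the integer roots of a polynomial given its coefficients
    [a_n, ..., a_0] (highest degree first), by testing divisors of the
    constant term -- the rational root theorem, applied mechanically.
    Assumes a nonzero constant term."""
    def eval_poly(x):
        value = 0
        for c in coeffs:              # Horner's method
            value = value * x + c
        return value

    const = coeffs[-1]
    candidates = set()
    for d in range(1, abs(const) + 1):
        if const % d == 0:
            candidates.update({d, -d})
    return sorted(x for x in candidates if eval_poly(x) == 0)

# x**2 - 5*x + 6 factors as (x - 2)(x - 3):
print(integer_roots([1, -5, 6]))  # [2, 3]
```

The routine part -- trying candidates and checking -- is exactly what the machine does better; deciding when factoring is the right tool is the part that stays with the human.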

Jill Anderson:  So final thoughts here — don't be scared but figure out how to use this to your advantage? 

Chris Dede: Yeah, don't be scared. AI is not smart. It really isn't. People would be appalled if they knew how little AI understands what it's telling you, especially given how much people seem to be relying on it. But it is capable of taking over parts of what you do that are routine and predictable and, in turn, freeing up the creative and the innovative and the human parts that are really the rewarding part of both work and life. 

EdCast: Chris Dede is a senior research fellow at the Harvard Graduate School of Education. He is also a co-principal investigator of the National Artificial Intelligence Institute for Adult Learning and Online Education. I'm Jill Anderson. This is the Harvard EdCast, produced by the Harvard Graduate School of Education. Thanks for listening.  [MUSIC PLAYING] 


OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots


2. Artificial intelligence in education: bringing it all together

Artificial intelligence has led to a generation of technologies in education – for use in classrooms and by school systems more broadly – with considerable potential to bring education forward. This chapter provides a broad overview of the technologies currently being used, their core applications, and their potential going forward. The chapter also provides definitions of some of the key terms that will be used throughout this book. It concludes with a discussion of the potentials that may be achieved if these technologies are integrated, the shifts in thinking about supporting learners through one-on-one learning experiences to influencing systems more broadly, and other key directions for R&D and policy in the future.

For decades, educators and researchers have looked to computers as having the potential to revolutionise education. Today, much of the use of computers in education still falls short of revolutionary – a lot of learning still involves one instructor teaching many students simultaneously, and considerable computer-based learning takes place using curricula and technologies that replicate traditional practices such as drill and practice. However, the best practices of computers in education appear to go considerably beyond that. Millions of learners now use intelligent tutoring systems as part of their mathematics classes – systems that recognise student knowledge, implement mastery learning where students do not advance until they can demonstrate understanding of a topic, and have hints available on demand (VanLehn, 2011[1]). Millions of learners around the world watch lectures and complete exercises within massive online open courses, offering the potential to study thousands of topics that would not be available in colleges locally (Milligan and Littlejohn, 2017[2]). An increasing number of children and adults learn from (and are even assessed within) advanced online interactions such as simulations, games, virtual reality, and augmented reality (De Jong and Van Joolingen, 1998[3]; Rodrigo et al., 2015[4]; Shin, 2017[5]; De Freitas, 2018[6]). Perhaps none of these systems fully captures the sophistication dreamed of in early accounts of their potential (Carbonell, 1970[7]; Stephenson, 1998[8]). On the other hand, their scale and degree of integration into formal educational systems have gone beyond what seemed plausible even as recently as the turn of the century.

Increasingly, computer-based education has been artificially intelligent education. Advances in artificial intelligence (AI) in the 1980s, 1990s, and first decade of the new millennium have translated into new potentials for learning technologies in several areas. The core advances in AI in those decades led to advances in more specialised use of AI in education – the research and practice communities of learning analytics and educational data mining – from around 2004 to today. As the research field advanced, new methods filtered into systems used by learners at scale. AI is today used to recognise what students know (as well as their engagement and learning strategies), to predict their future trajectories, to better assess learners along multiple dimensions, and – ultimately – to help both humans and computers decide how to better support students.

As these technologies develop, as they mature, as they scale, it becomes worth asking the question: where are we going? And where could we be going? If we can understand the frontiers and potentials of artificial intelligence in education, we may be able to shape research and development through careful design of policy over the next decade to get there.

In the chapters of the book, the authors use their expertise in specific areas and challenges in artificial intelligence in education to explore the frontiers and potential of this area. What technology and teaching approaches are just becoming available in research classrooms that could soon be available to a much broader range of students? How can artificial intelligence shape educational systems more broadly, from academic advising to credentialing, to make them more adaptive to learner needs? Where could we be in 10 years with the right guidance and support for research and development? Where are the opportunities for incremental but positive impacts on learners? And where are the opportunities for radically transforming education and learner experiences?

In the remainder of this overview, I will clarify some terms and domains relevant to this book. Next, I will situate the chapters of this book in the context of broader trends and opportunities in the field (including some trends and opportunities that were not explicitly covered by the authors). The final section will discuss upcoming opportunities of a broader nature, cutting across types of artificially intelligent learning technologies that can be supported through shifts in policy.

Smart education technologies: definitions and context

This section presents some definitions and context that are key to understanding smart technologies in education.

Educational technology

Educational technology at its most obvious level refers simply to the use of any technology – any applied machinery or equipment – in education. Throughout the last hundred years, practitioners and researchers have sometimes become overly enthusiastic about finding applications for new technologies in education. See, for instance, reports by Cuban (1986[9]) of instructors teaching students with traditional lecture pedagogies but inside an early-generation airplane.

Today, most discussion of technology in education is focused on computers and digitalisation, though older technologies such as radio and television still play an important role – especially in many middle-income countries during the recent COVID-19 pandemic (OECD, n.d.[10]) . Educational technologies can refer to a range of technologies. I provide a few examples here (others are given in the context of the chapters in this book).

Computer tutors or intelligent tutoring systems provide students with a learning experience where the learning system adapts presentation based on some model or ongoing assessment of the student, a model of the subject area being learned, and a model of how to teach (Wenger, 1987[11]) . Each of these models can be more sophisticated or more basic. Baker (2016[12]) notes that contemporary intelligent tutoring systems tend to be sophisticated in only one area (which differs between systems) and very simple in other areas.

Digital learning games embed learning into a fun activity that resembles a game. The degree of gamification can vary from activities that embed learning into core gameplay and which may not even seem to be a learning activity (see, for instance, SimCity and Civilisation) to more obvious learning activities where the student gets rewards for successful performance (for instance, getting to throw a banana at a monkey after answering a math problem correctly in MathBlaster ). 

Simulations are computerised imitations of a process or activity that would be difficult or costly to do in the real world as an educational activity. Increasing numbers of students today use virtual laboratories to conduct experiments that could be dangerous, expensive, or difficult, and also to receive feedback and learning support while completing these activities.

Virtual reality systems embed learners in 3D depictions of real-world activities. Like simulations, they make it feasible to engage in activities from a home or computer lab that would be expensive, dangerous, or simply impossible to engage in otherwise. Augmented reality systems embed additional information and experiences into real-world activities, ranging from pop-up details that appear and ambient displays (information that is available in the environment without having to focus on it) to overlaying a different world on top of the current one. Both augmented reality and virtual reality often rely upon headsets to present visual information to learners.

Educational robots have a physical presence and interact with students in real-world activities to support their learning. While robots as educational DIY kits have been available since the 1980s, a recent development sees robots take up the role of tutor.

Massive online open courses (MOOCs) provide students with a basic learning experience, typically consisting of videos and quizzes. The innovation around MOOCs is not in the learning experience – it is typically a simplified version of a large lecture class – but, rather, in making materials developed by faculty at world-famous universities, often on highly specialised topics, accessible to learners around the world.

Educational data

Data are, quite simply, facts gathered together. Whereas a few facts gathered together do not enable us to reason about the relationships represented in that information, the accumulation of large quantities of information does, and that is the modern power of big data. Educational data used to be dispersed, hard to collect, and small-scale. Individual teachers might keep a gradebook on paper; the school might keep disciplinary records in the basement; and curriculum developers would have a very limited idea of how their materials were being used and what students were struggling with. Today, educational data are gathered at a much larger scale. Gradebooks, disciplinary data, assessment data, absence data and more are stored centrally by local education agencies (or often by national or even trans-national vendors). Curriculum developers often gather extensive data on usage and learning. As of this writing, the regulations around handling, storage, and use of educational data vary considerably between countries, with some countries having very strict practices (particularly on the European continent) and other countries having less restrictive regulations. Each of these sources of data can be used to improve educational quality and support learning, supporting both artificial intelligence/machine learning (next definition) and human refinement of learning content and experiences.

Artificial intelligence and machine learning

Artificial intelligence is the capacity for computers to perform tasks traditionally thought to involve human intelligence or, more recently, tasks beyond the ability of human intelligence. Stemming from relatively simple, general-purpose systems in the 1960s, artificial intelligence today generally involves more specific-purpose systems that complete a specific task involving reasoning about data or the world, and then interaction with the world (more commonly through a phone or a computer interface than actual physical interaction). Machine learning (increasingly called data science, and also called both data mining and analytics) is a sub-area of artificial intelligence, present at a low level since the beginning of the field but becoming a particular emphasis from the 1990s through to today. Machine learning is when a system discovers patterns from data – becoming more effective at doing so when more data is available (and even more so when more comprehensive or representative data is available). There is a broad range of machine-learning methods, classified mostly into supervised learning (attempting to predict or infer a specific known variable) and unsupervised learning (trying to discover the structure or relationships in a set of variables). There have been roughly two generations of machine learning: a first generation of relatively simple, interpretable methods and a second generation of much more complex, sophisticated, hard-to-interpret methods.
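The supervised/unsupervised split can be made concrete with two deliberately tiny learners, sketched in plain Python (all data invented): a one-nearest-neighbour classifier that predicts a known label, and a gap-based routine that discovers cluster structure without any labels.

```python
from math import dist

# Supervised learning: predict a known label from labelled examples.
train = [((1.0, 1.0), "pass"), ((1.2, 0.9), "pass"),
         ((0.2, 0.1), "fail"), ((0.1, 0.3), "fail")]

def nearest_neighbour(point):
    """1-nearest-neighbour: label a new point like its closest example."""
    return min(train, key=lambda ex: dist(point, ex[0]))[1]

# Unsupervised learning: discover structure without labels.
def one_d_clusters(xs, gap=1.0):
    """Group sorted values into clusters wherever consecutive values
    are separated by more than `gap`."""
    xs = sorted(xs)
    clusters = [[xs[0]]]
    for a, b in zip(xs, xs[1:]):
        if b - a > gap:
            clusters.append([])
        clusters[-1].append(b)
    return clusters

print(nearest_neighbour((0.9, 1.1)))                   # pass
print(one_d_clusters([0.1, 0.2, 1.0, 1.1, 5.0, 5.2]))  # [[0.1, 0.2, 1.0, 1.1], [5.0, 5.2]]
```

Real educational applications replace these toys with fitted statistical models, but the distinction is the same: the first learner needs a labelled outcome, the second only needs the data itself.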

Artificial Intelligence in Education (AIED)

Artificial Intelligence in Education (AIED) arose as an interdisciplinary subfield in the early 1980s with a biennial (now annual) conference and peer-reviewed journal, although examples of this research area were present even before that. Much of the early work in artificial intelligence in education involved intelligent tutoring systems, but the field has broadened over the years to include all of the types of educational systems/interactions defined above, and has expanded to include several independent conferences and journals. The revolution in machine learning and data mining impacted artificial intelligence in education as well, with a significant shift around 2010 – influenced by the emergence of a separate scientific conference, Educational Data Mining – towards much more use of this type of method. Today, AIED systems incorporate a range of functionality for identifying aspects of the learner, and a range of ways they can interact with and respond to learners.

Learning analytics

Learning Analytics, also referred to as Educational Data Mining, has emerged as a field since 2008, with two major international conferences and peer-reviewed journals. The goal of learning analytics is to use the increasing amounts of data coming from education to better understand and make inferences about learners and the contexts in which they learn. Learning analytics and educational data mining apply the methods of machine learning/data science to education, with methods and problems emerging specific to education. Challenges such as inferring student knowledge in real-time and predicting future school dropout have seen particular interest, but there have been a range of other applications for these methods, from inferring prerequisite relationships in a domain such as mathematics to understanding the factors that lead to student boredom. A taxonomy of methods and applications for learning analytics is given in (Baker and Siemens, 2014[13]; DeFalco et al., 2017[14]). Learning analytics models are most frequently deployed in two types of technology: intelligence augmentation systems and personalised learning systems (discussed in the next section).

Intelligence augmentation systems, also called decision support systems, communicate information to stakeholders such as teachers and school leaders in a way that supports decision-making. While they can simply provide raw data, they often provide information distilled through machine-learning models, predictions, or recommendations. Intelligence augmentation systems often leverage predictive analytics systems, which make predictions about students’ potential future outcomes and – ideally – also provide understandable reasons for these predictions. Predictive analytics systems are now used at scale to try to understand which students are at risk of dropping out of high school or failing to complete college, with an eye towards providing interventions which get students back on track. Intelligence augmentation systems often communicate information to stakeholders through dashboards, which communicate data through graphs and tables that allow the user to drill down for information about specific learners. Today, personalised learning systems and predictive analytics systems often use dashboards to communicate information to teachers, occasionally make dashboards available to school counsellors, academic advisors, and school leaders, and rarely make dashboards available to parents. The quality of the data presented in dashboards can vary considerably from learning system to learning system.
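As an illustration of the predictive-analytics idea, the sketch below combines early-warning indicators into a risk score and surfaces at-risk students for a dashboard. The indicators, weights, and threshold are invented stand-ins for what a trained model would supply.

```python
# Hypothetical early-warning data; a real system would pull this from
# gradebook, attendance, and assignment records.
STUDENTS = [
    {"name": "A", "absences": 2,  "avg_grade": 88, "missing_work": 1},
    {"name": "B", "absences": 14, "avg_grade": 61, "missing_work": 9},
    {"name": "C", "absences": 6,  "avg_grade": 74, "missing_work": 4},
]

def risk_score(s):
    """Weighted combination of indicators; higher is riskier.
    The weights are illustrative, standing in for fitted coefficients."""
    return (0.04 * s["absences"]
            + 0.02 * (100 - s["avg_grade"])
            + 0.05 * s["missing_work"])

def at_risk(students, threshold=0.5):
    """Flag students whose score reaches the threshold, riskiest first."""
    flagged = [(s["name"], round(risk_score(s), 2))
               for s in students if risk_score(s) >= threshold]
    return sorted(flagged, key=lambda t: -t[1])

print(at_risk(STUDENTS))
```

A genuine deployment would add the "understandable reasons" the chapter calls for, e.g. reporting which indicator contributed most to each flagged student's score.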

The uses of artificial intelligence in classrooms and educational systems

This book focuses on two key areas: 1) New Educational Technologies and Approaches for the Classroom, and 2) New Educational Technologies and Approaches for Educational Systems. These new technologies often, but not always, involve artificial intelligence. Within this section, I will summarise work in each of these areas, including the work discussed in the chapters of this report but also going beyond it.

New educational technologies and approaches for the classroom

As computerised educational technologies become more commonly accessible to teachers and students, there is increasing awareness that the technology does not simply increase convenience for teachers or provide a fun alternative activity for students – it can promote new methods for teaching and learning. 

Personalised Learning. One major trend within learning, driven by these technologies, is the move towards personalising learning to a greater degree. Personalisation of learning did not start with computerised technology – in a sense, it has been available since the first use of one-on-one tutoring, thousands of years ago (if not earlier). However, with the increase in systematised, standardised schooling and teaching over a hundred years ago, awareness increased that many students’ learning needs were being poorly met by one-size-fits-all curriculum. Classroom approaches such as mastery learning (each student works on material until mastery and only then moves on to the next topic) were developed, but proved difficult to scale due to the demands on the teacher. Educational technologies provided a ready solution to this problem – the computer could manage some of the demands of personalising learning, identifying each individual student’s degree of mastery and providing them with learning activities relevant to their current position within the curriculum.
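One classic technique for the kind of mastery tracking described above is Bayesian Knowledge Tracing. A minimal sketch follows, with illustrative parameter values; in practice the guess, slip, and learn rates are fitted from data for each skill.

```python
def bkt_update(p_known, correct, guess=0.2, slip=0.1, learn=0.15):
    """One step of Bayesian Knowledge Tracing: update P(skill is known)
    from one observed answer, then allow for learning on this attempt."""
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    return posterior + (1 - posterior) * learn

# Track one student's estimated mastery over a sequence of answers;
# a mastery-learning system moves on once the estimate passes a cut-off
# (0.95 is a commonly used convention).
p = 0.3  # prior probability the skill is already known
for answer in [True, True, False, True, True]:
    p = bkt_update(p, answer)
    print(round(p, 3))
```

The computer manages exactly the bookkeeping that made mastery learning hard to scale by hand: maintaining an up-to-date mastery estimate per student per skill and deciding when to move on.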

The first dimension that educational technologies became effective at personalising for was a student’s knowledge or state of learning. Molenaar (2021[15]) details efforts to develop better personalisation of learning for learners, providing a framework for the degree of automation in personalised learning systems. Her chapter discusses the shift from teacher-driven systems to computer-based technologies that can take a larger role in immediate decision-making, remaining within guidelines and goals specified by the teacher.

Next, educational technologies became more effective at personalising for differences in students’ self-regulated learning – their ability to make good choices during learning that enhance their learning outcomes and efficiency. This topic is also discussed in Inge Molenaar’s chapter (Molenaar, 2021[15]) . Modern educational technologies in many cases have the ability to recognise when students are using ineffective or inefficient strategies, and to provide them recommendations or nudges to get back onto a more effective trajectory.

A contemporary trend, which is still primarily in research classrooms rather than wide-scale deployment, is the move towards also recognising and adapting to student engagement, affect, and emotion. Discussed by Sidney D’Mello (2021[16]), these systems recognise these aspects of a student’s experience either from their interaction and behaviour within the system or from physical and physiological sensors. There are now several examples of educational technologies – particularly intelligent tutoring systems and games – which have been able to identify a student who is bored, frustrated, or gaming the system (trying to find strategies to complete materials without needing to learn) and re-engage them productively (e.g. DeFalco et al., 2017[14]).

Increasing research also looks at trying to personalise to increase broader motivation or interest. This work differs from the work on engagement and affect in terms of time-scale. Whereas engagement and affect often manifest in brief time periods – as short as a few seconds – motivation and interest are more stable, long-term aspects of student experience. Work by Kizilcec and colleagues (Kizilcec et al., 2017[17]), for instance, has tried to connect student learning experiences with their values, leading to greater degrees of completion of online courses. Work by Walkington and colleagues (Walkington, 2013[18]; Walkington and Bernacki, 2019[19]) has modified the contents of learning systems to match student personal interests, leading students to work faster, become disengaged less often, and learn more.

New Pedagogies. Although the most obvious impact of artificially-intelligent educational technologies is through personalising learning directly, new pedagogies and teacher practices have also emerged. These pedagogies and practices enable teachers to support their students or provide their students with experiences in ways that were generally not feasible prior to the technology being developed.

Perhaps the largest shift has been in the information available to teachers. Dashboards provide teachers with data on a range of aspects of their students’ performance and learning. This has produced a major shift in how homework is used. In the past, homework would need to be brought to class by students. It could be graded by the teacher after that (meaning that feedback and learning support would be delayed), or students could grade it with the teacher in a large group, which is not a very time-efficient approach. In contrast, data from homework technologies today can become available to teachers in real-time. This means that teachers can identify which students are struggling and which materials students struggled on in general before class even starts. This enables strategies where, for instance, teachers identify which students displayed common errors and can identify students who can demonstrate both incorrect and correct problem-solving strategies for whole-class discussion. It also enables teachers to message students who are behind in completing materials (or even in starting to work through materials), helping get the student back on track (Arnold and Pistilli, 2012[20]) .
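The before-class distillation described here can be sketched in a few lines. The record format and the cut-off below are invented for illustration; a real homework platform would feed this from its event stream.

```python
from collections import Counter

# Hypothetical homework data: one record per student response,
# available to the teacher before class starts.
responses = [
    {"student": "A", "item": "q1", "correct": True},
    {"student": "A", "item": "q2", "correct": False},
    {"student": "B", "item": "q1", "correct": False},
    {"student": "B", "item": "q2", "correct": False},
    {"student": "C", "item": "q1", "correct": True},
    {"student": "C", "item": "q2", "correct": True},
]

def hardest_items(rows):
    """Items ranked by number of wrong answers -- what to review first."""
    wrong = Counter(r["item"] for r in rows if not r["correct"])
    return wrong.most_common()

def struggling_students(rows, max_correct_rate=0.5):
    """Students at or below a correct-rate cut-off -- who to check on."""
    totals, rights = Counter(), Counter()
    for r in rows:
        totals[r["student"]] += 1
        rights[r["student"]] += r["correct"]
    return sorted(s for s in totals if rights[s] / totals[s] <= max_correct_rate)

print(hardest_items(responses))        # [('q2', 2), ('q1', 1)]
print(struggling_students(responses))  # ['A', 'B']
```

Even this crude aggregation supports the strategies described above: the teacher sees before class that q2 is the item to discuss and which students to check in with.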

Similar uses are available for formative assessment systems, which are being increasingly used in contexts where students have high-stakes end-of-year examinations. These systems often go beyond teacher-designed homework in terms of their breadth and comprehensiveness of coverage of key skills and concepts. They are increasingly used by teachers to determine what topics to review with their classes as well as what types of supplemental supports to provide to specific students.

Formative assessment systems are increasingly used in K-12 education worldwide. The most widely used formative assessment systems, such as NWEA MAP (Finnerty, 2018[21]), present students with traditional multiple-choice items and measure straightforward mathematics and language arts competencies – essentially providing another test for students to complete, but one where their teachers will get useful data linked to competencies that will be seen on future standardised examinations. A small number of emerging formative assessment systems assess more complicated constructs and/or embed assessment into more complex activities, such as games (Shute and Kim, 2014[22]).

Data from formative assessment systems can be used with platforms designed to provide lists of supplemental resources for specific skills, concepts, and topics. Especially post-COVID, both local education agencies, and regional and national governments have worked to develop platforms with supplemental learning resources for students and parents. However, right now, these platforms are generally not connected directly to formative assessment systems so the teacher or parent needs to look up the resources for a student struggling with a specific competency.

One concern about formative assessment systems is that time spent using a formative assessment system is time not spent learning – a loss of instructional time. For this reason, there has been a trend towards embedding formative assessment into personalised learning. Several widely-used personalised learning systems, such as MATHia, Mindspark, Reasoning Mind, and ASSISTments, provide teachers with formative assessment data on which competencies the student is missing (Feng and Heffernan, 2006[23] ; Khachatryan et al., 2014[24]) . This information is distilled from students’ regular learning activity, avoiding a loss of instructional time.

There is also better information available to teachers on what is going on in their classes in real-time, an area discussed in detail by Pierre Dillenbourg (Dillenbourg, 2021[25]) . Classroom analytics can provide the teacher with information on a range of aspects of class performance, from individual students’ difficulties with material in real-time to the relative effectiveness of collaboration by different student groups. A teacher cannot watch every student (or every student group) at all times – better data can help them understand where to focus their efforts, and which students would benefit from a conversation right now.

Beyond just providing better data, it is possible to use technology to give students a range of experiences that were not feasible a generation ago. In their chapter, Tony Belpaeme and Fumihide Tanaka (Belpaeme and Tanaka, 2021[26]) discuss the new potentials of having robots interact with students in classrooms.

Using simulations and games in class can enable teachers to demonstrate complex and hard to understand systems to students. They can also allow students to explore and interact with these systems on their own. There seems to be particular educational benefit to the combination of a rich simulation or game experience that enables a student to develop informal, practical understanding, and then a teacher lecture or explanation that helps a student bridge from that informal understanding to more formal, academic conceptual understanding (Asbell-Clarke et al., 2020[27]) . Modern technologies also offer new potentials for the use of collaborative learning, with systems that can scaffold effective collaboration strategies (Strauß and Rummel, 2020[28]) , and systems that can provide rich experiences to collaborate around, such as interactive tabletops (Martinez Maldonado et al., 2012[29]) . 

Equity. New educational technologies are typically designed to improve student and teacher experiences and outcomes. However, their designers do not always consider how the full spectrum of learners is impacted. Systems are often designed by members of specific demographic groups (typically of higher socio-economic status, not identified as having special needs, and belonging to racial/ethnic/national majority groups), with members of their own groups in mind – not always intentionally. This can lead to lower educational effectiveness for members of other groups.

For example, Judith Good (2021[30]) discusses how little effort there has been to create educational technologies specifically designed for students with disabilities or special needs, and gives examples of technologies that could support learners with autism, dysgraphia and visual impairment. The lack of attention to individuals with special needs from the scientific community and from developers of artificially intelligent educational technologies is a major source of inequity and a missed opportunity. Designing policies that facilitate the development of systems to support learners with special needs (for instance, approaches that improve access to data on disabilities while protecting student privacy) and creating incentives to develop for special-needs populations may help to address this inequity.

Another key area of inequity is in support of historically underserved and underrepresented populations, including ethnic/racial minorities and linguistic minorities. Most educational technologies are developed by members of historically well-supported populations, and are often first piloted with members of those same populations. Testing for effectiveness with historically underrepresented populations often occurs only in later stages of development (or in final large-scale evaluations of efficacy), when it is too late to make major design changes. There is increasing evidence that both educational research findings and algorithms obtained on majority populations can fail to apply, or can function more poorly, for other populations of learners (Ocumpaugh et al., 2014[31]; Karumbaiah, Ocumpaugh and Baker, 2019[32]).
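The kind of disparity reported in these studies can be detected with a simple subgroup audit: compute a model's accuracy separately for each population and compare. The data, group labels, and numbers below are invented for illustration:

```python
# Minimal subgroup audit sketch: compare a model's accuracy across student
# populations to surface performance gaps of the kind reported by
# Ocumpaugh et al. (2014). All records below are invented toy data.

def accuracy_by_group(records):
    """records: list of (group, predicted_label, actual_label) tuples."""
    totals, hits = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (pred == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 1, 0),
           ("rural", 1, 0), ("rural", 0, 1), ("rural", 1, 1), ("rural", 0, 0)]
acc = accuracy_by_group(records)
print(acc)  # {'urban': 0.75, 'rural': 0.5} – a gap worth investigating
```

Running such an audit early in development, rather than only in final efficacy trials, is precisely what the paragraph above argues for: it leaves time to make major design changes when a gap is found.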

REPORT on artificial intelligence in education, culture and the audiovisual sector

19.4.2021 - ( 2020/2017(INI) )

Committee on Culture and Education Rapporteur: Sabine Verheyen Rapporteur for the opinion (*): Ondřej Kovařík, Committee on Civil Liberties, Justice and Home Affairs (*) Associated committee – Rule 57 of the Rules of Procedure

MOTION FOR A EUROPEAN PARLIAMENT RESOLUTION

on artificial intelligence in education, culture and the audiovisual sector

( 2020/2017(INI) )

The European Parliament ,

–   having regard to the Charter of Fundamental Rights of the European Union,

–   having regard to Articles 165, 166 and 167 of the Treaty on the Functioning of the European Union,

–   having regard to the Council conclusions of 9 June 2020 on shaping Europe’s digital future [1] ,

–   having regard to the opinion of the European Economic and Social Committee of 19 September 2018 on the digital gender gap [2] ,

–   having regard to the Commission proposal for a regulation of the European Parliament and of the Council of 6 June 2018 establishing the Digital Europe Programme for the period 2021-2027 ( COM(2018)0434 ),

–   having regard to the Commission communication of 30 September 2020 on the Digital Education Action Plan 2021-2027: Resetting education and training for the digital age ( COM(2020)0624 ),

–   having regard to the Commission communication of 30 September 2020 on achieving the European Education Area by 2025 ( COM(2020)0625 ),

–   having regard to the Commission report of 19 February 2020 on the safety and liability implications of artificial intelligence, the Internet of Things and robotics ( COM(2020)0064 ),

–   having regard to the Commission white paper of 19 February 2020 entitled ‘Artificial Intelligence – A European approach to excellence and trust’ ( COM(2020)0065 ),

–   having regard to the Commission communication of 19 February 2020 on a European strategy for data ( COM(2020)0066 ),

–   having regard to the Commission communication of 25 April 2018 entitled ‘Artificial Intelligence for Europe’ ( COM(2018)0237 ),

–   having regard to the Commission communication of 17 January 2018 on the Digital Education Action Plan ( COM(2018)0022 ),

–   having regard to the report of the Commission High-Level Expert Group on Artificial Intelligence of 8 April 2019 entitled ‘Ethics Guidelines for Trustworthy AI’,

–   having regard to its resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics [3] ,

–   having regard to its resolution of 11 September 2018 on language equality in the digital age [4] ,

–   having regard to its resolution of 12 June 2018 on modernisation of education in the EU [5] ,

–   having regard to its resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics [6] ,

–   having regard to its resolution of 1 June 2017 on digitising European industry [7] ,

–   having regard to the briefing of its Policy Department for Structural and Cohesion Policies of May 2020 on the use of artificial intelligence in the cultural and creative sectors,

–   having regard to the in-depth analysis of its Policy Department for Structural and Cohesion Policies of May 2020 on the use of artificial intelligence in the audiovisual sector,

–   having regard to the study of its Policy Department for Citizens’ Rights and Constitutional Affairs of April 2020 on the education and employment of women in science, technology and the digital economy, including AI and its influence on gender equality,

–   having regard to Rule 54 of its Rules of Procedure,

–   having regard to the opinions of the Committee on Civil Liberties, Justice and Home Affairs, the Committee on the Internal Market and Consumer Protection, the Committee on Legal Affairs and the Committee on Women’s Rights and Gender Equality,

–   having regard to the report of the Committee on Culture and Education (A9-0127/2021),

A.   whereas artificial intelligence (AI) technologies, which may have a direct impact on our societies, are being developed at a fast pace and are increasingly being used in almost all areas of our lives, including education, culture and the audiovisual sector; whereas ethical AI is likely to help improve labour productivity and help accelerate economic growth;

B.   whereas the development, deployment and use of AI, including the software, algorithms and data used and produced by it, should be guided by the ethical principles of transparency, explainability, fairness, accountability and responsibility;

C.   whereas public investment in AI in the Union has been vastly lagging behind other major economies; whereas underinvestment in AI will be likely to have an impact on the Union’s competitiveness across all sectors;

D.   whereas an integrated approach to AI and the availability, collection and interpretation of high-quality, trustworthy, fair, transparent, reliable, secure and compatible data are essential for the development of ethical AI;

E.   whereas Article 21 of the EU Charter of Fundamental Rights prohibits discrimination on a wide range of grounds; whereas multiple forms of discrimination should not be replicated in the development, deployment and use of AI systems;

F.   whereas gender equality is a core principle of the Union enshrined in the Treaties and should be reflected in all Union policies, including in education, culture and the audiovisual sector, as well as in the development of technologies such as AI;

G.   whereas past experiences, especially in technical fields, have shown that developments and innovations are often based mainly on male data and that women’s needs are not fully reflected; whereas addressing these biases requires greater vigilance, technical solutions and the development of clear requirements of fairness, accountability and transparency;

H.   whereas incomplete and inaccurate data sets, the lack of gender-disaggregated data and incorrect algorithms can distort the processing of an AI system and jeopardise the achievement of gender equality in society; whereas data on disadvantaged groups and intersectional forms of discrimination tends to be incomplete or even absent;

I.   whereas gender inequalities, stereotypes and discrimination can also be created and replicated through the language and images disseminated by the media and AI-powered applications; whereas education, cultural programmes and audiovisual content have considerable influence in shaping people’s beliefs and values and are a fundamental tool for combatting gender stereotypes, decreasing the digital gender gap and establishing strong role models; whereas an ethical and regulatory framework must be in place ahead of implementing automatised solutions for these key areas in society;

J.   whereas science and innovation can bring life-changing benefits, especially for those who are furthest behind, such as women and girls living in remote areas; whereas scientific education is important for obtaining skills, decent work and jobs of the future, as well as for breaking with gender stereotypes that regard these as stereotypically masculine fields; whereas science and scientific thinking are key to democratic culture, which in turn is fundamental for advancing gender equality;

K.   whereas one woman in ten in the Union has already suffered some form of cyber-violence since the age of 15 and whereas cyber-harassment remains a concern in the development of AI, including in education; whereas cyber-violence is often directed at women in public life, such as activists, women politicians and other public figures; whereas AI and other emerging technologies can play an important role in preventing cyber-violence against women and girls and educating people;

L.   whereas the Union and its Member States have a particular responsibility to harness, promote and enhance the added value of AI technologies and to make sure that these technologies are safe and serve the well-being and general interest of Europeans; whereas these technologies can make a huge contribution to achieving our common goal of improving the lives of our citizens and fostering prosperity in the Union by helping to develop better strategies and innovation in a number of areas, namely in education, culture and the audiovisual sector;

M.   whereas most AI is based on open-source software, which means that source codes can be inspected, modified and enhanced;

N.   whereas certain adjustments to specific existing EU legislative instruments may be necessary to reflect the digital transformation and address new challenges posed by the use of AI technologies in education, culture and the audiovisual sector, such as the protection of personal data and privacy, combatting discrimination, promoting gender equality, and respecting intellectual property rights (IPR), environmental protection and consumers’ rights;

O.   whereas it is important to provide the audiovisual sector with access to data from the global platforms and major players in order to ensure a level playing field;

P.   whereas AI and future applications or inventions made with the help of AI can have a dual nature, much like with any other technology; whereas AI and related technologies raise many concerns regarding the ethics and transparency of their development, deployment and use, notably on data collection, use and dissemination; whereas the benefits and risks of AI technologies in education, culture and the audiovisual sector must be carefully assessed and their effects on all aspects of society thoroughly and continuously analysed, without undermining their potential;

Q.   whereas education aims to achieve human potential, creativity and authentic social change, while using data-driven AI systems incorrectly may hinder human and social development;

R.   whereas education and educational opportunities are a fundamental right; whereas the development, deployment and use of AI technologies in the education sector should be classified as high risk and subject to stricter requirements on safety, transparency, fairness and accountability;

S.   whereas high-quality, fast and secure pervasive connectivity, broadband, high-capacity networks, IT expertise, digital skills, digital equipment and infrastructure, as well as societal acceptance and a targeted and accommodating policy framework, are some of the preconditions for the broad and successful deployment of AI in the Union; whereas it is essential that such infrastructure and equipment be deployed equally across the Union in order to tackle the persistent digital gap between its regions and citizens;

T.   whereas addressing the gender gap in science, technology, engineering, arts and maths (STEAM) subjects is an absolute necessity to ensure that the whole of society is equally and fairly represented when developing, deploying and using AI technologies, including the software, algorithms and data used and produced by them;

U.   whereas it is essential to ensure that all people in the Union acquire the necessary skills from an early age in order to better understand the capabilities and limitations of AI, to prepare themselves for the increasing presence of AI and related technologies in all aspects of human activity, and to be able to fully embrace the opportunities that they offer; whereas the widespread acquisition of digital skills across all parts of society in the Union is a precondition for achieving a fair digital transformation beneficial to all;

V.   whereas, with that aim in view, the Member States must invest in digital education and media training, equipping schools with the proper infrastructure and the necessary end devices, and place greater emphasis on the teaching of digital skills and capabilities as part of school curricula;

W.   whereas AI and related technologies can be used to improve learning and teaching methods, notably by helping education systems to use fair data to improve educational equity and quality, while promoting tailor-made curricula and better access to education and improving and automating certain administrative tasks; whereas equal and fair access to digital technologies and high-speed connectivity are required in order to make the use of AI beneficial to the whole of society; whereas it is of the utmost importance to ensure that digital education is accessible to all, including those from disadvantaged backgrounds and people with disabilities; whereas learning outcomes do not depend on technology per se, but on how teachers can use technology in pedagogically meaningful ways;

X.   whereas AI has particular potential to offer solutions for the day-to-day challenges of the education sector, such as the personalisation of learning, monitoring learning difficulties, the automation of subject-specific content/knowledge, providing better professional training and supporting the transition to a digital society;

Y.   whereas AI could have practical applications in terms of reducing the administrative work of educators and educational institutions, freeing up time for their core teaching and learning activities;

Z.   whereas new AI-based applications in education are facilitating progress in a variety of disciplines, such as language learning and maths;

AA.   whereas AI-enabled personalised learning experiences can not only help to increase students’ motivation and enable them to reach their full potential, but also reduce drop-out rates;

AB.   whereas AI can increasingly help make teachers more effective by giving them a better understanding of students’ learning methods and styles and helping them to identify learning difficulties and better assess individual progress;

AC.   whereas the Union’s digital labour market is lacking almost half a million experts in big data sciences and data analysis, who are intrinsic to the development and use of quality and trustworthy AI;

AD.   whereas the application of AI in education raises concerns around the ethical use of data, learners’ rights, data access and protection of personal data, and therefore entails risks to fundamental rights such as the creation of stereotyped models of learners’ profiles and behaviour that could lead to discrimination or risks of doing harm by the scaling-up of bad pedagogical practices;

AE.   whereas culture plays a central role in the use of AI technologies at scale and is emerging as a key discipline for cultural heritage thanks to the development of innovative technologies and tools and their effective application to respond to the needs of the sector;

AF.   whereas AI technologies can be used to promote and protect cultural heritage, including by using digital tools to preserve historical sites and finding innovative ways to make the datasets of cultural artefacts held by cultural institutions across the Union more widely and easily accessible, while allowing users to navigate the vast amount of cultural and creative content; whereas the promotion of interoperability standards and frameworks is key in this regard;

AG.   whereas the use of AI technologies for cultural and creative content, notably media content and tailored content recommendations, raises issues around data protection, discrimination and cultural and linguistic diversity, risks producing discriminatory output based on biased entry data, and could restrict diversity of opinion and media pluralism;

AH.   whereas AI-based personalised content recommendations can often better target individuals’ specific needs, including cultural and linguistic preferences; whereas AI can help to promote linguistic diversity in the Union and contribute to the wider dissemination of European audiovisual works, in particular through automatic subtitling and dubbing of audiovisual content in other languages; whereas making media content available across languages is therefore fundamental to support cultural and linguistic diversity;

AI.   whereas AI drives innovation in newsrooms by automating a variety of mundane tasks, interpreting data and even generating news such as weather forecasts and sports results;

AJ.   whereas Europe’s linguistic diversity means that promoting computational linguistics for rights-based AI offers specific potential for innovations which can be used to make global cultural and information exchanges in the digital age democratic and non-discriminatory;

AK.   whereas AI technologies may have the potential to benefit special needs education, as well as the accessibility of cultural and creative content for people with disabilities; whereas AI enables solutions such as speech recognition, virtual assistants and digital representations of physical objects; whereas digital creations are already playing their part in making such content available to people with disabilities;

AL.   whereas AI applications are omnipresent in the audiovisual sector, in particular on audiovisual content platforms;

AM.   whereas AI technologies therefore contribute to the creation, planning, management, production, distribution, localisation and consumption of audiovisual media products;

AN.   whereas while AI can be used to generate fake content, such as ‘deepfakes’, which are growing exponentially and constitute an imminent threat to democracy, it can also be used as an invaluable tool for identifying and immediately combatting such malicious activity, for example through real-time fact checking or labelling of content; whereas most deepfake material is easy to spot; whereas at the same time, AI-powered detection tools are generally successful in flagging and filtering out such content; whereas there is a lack of a legal framework on this issue;

General observations

1.   Underlines the strategic importance of AI and related technologies for the Union; stresses that the approach to AI and its related technologies must be human-centred and anchored in human rights and ethics, so that AI genuinely becomes an instrument that serves people, the common good and the general interest of citizens;

2.   Underlines that the development, deployment and use of AI in education, culture and the audiovisual sector must fully respect fundamental rights, freedoms and values, including human dignity, privacy, the protection of personal data, non-discrimination and freedom of expression and information, as well as cultural diversity and intellectual property rights, as enshrined in the Union Treaties and the Charter of Fundamental Rights;

3.   Asserts that education, culture and the audiovisual sector are sensitive areas as far as the use of AI and related technologies is concerned, as they have the potential to impact on the cornerstones of the fundamental rights and values of our society; stresses, therefore, that ethical principles should be observed in the development, deployment and use of AI and related technologies in these sectors, including the software, algorithms and data used and produced by them;

4.   Recalls that algorithms and AI should be ‘ethical by design’, with no built-in bias, in a way that guarantees maximum protection of fundamental rights;

5.   Reiterates the importance of developing quality, compatible and inclusive AI and related technologies for use in deep learning which respect and defend the values of the Union, notably gender equality, multilingualism and the conditions necessary for intercultural dialogue, as the use of low-quality, outdated, incomplete or incorrect data may lead to poor predictions and in turn discrimination and bias; highlights that it is essential to develop capabilities at both national and Union level to improve data collection, safety, systematisation and transferability, without harming privacy; takes note of the Commission’s proposal to create a single European data space;

6.   Recalls that AI may give rise to biases and thus to various forms of discrimination based on sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation; recalls, in this regard, that the rights of all people must be ensured and that AI and related technologies must not be discriminatory in any form;

7.   Emphasises that such bias and discrimination can arise from already biased datasets that reflect existing discrimination in society; recalls, in this context, that it is essential to involve the relevant stakeholders, including civil society, to prevent gender, social and cultural biases from being inadvertently included in AI algorithms, systems and applications; stresses the need to work on the most efficient way of reducing bias in AI systems in line with ethical and non-discrimination standards; underlines that the datasets used to train AI should be as broad as possible in order to represent society in the best and most relevant way, that the outputs should be reviewed to avoid all forms of stereotypes, discrimination and bias and that, where appropriate, AI should be used to identify and rectify human bias wherever it exists; calls on the Commission to encourage and facilitate the sharing of de-biasing strategies for data;

8.   Calls on the Commission and the Member States to take into account ethical aspects, including from a gender perspective, when developing AI policy and legislation, and, if necessary, to adapt the current legislation, including Union programmes and ethical guidelines for the use of AI;

9.   Calls on the Commission and the Member States to devise measures that fully incorporate the gender dimension, such as awareness-raising campaigns, training and curricula, which should provide information to citizens on how algorithms operate and their impact on their daily lives; further calls on them to nurture gender-equal mindsets and working conditions that lead to the development of more inclusive technology products and work environments; urges the Commission and the Member States to ensure the inclusion of digital skills and AI training in school curricula and to make them accessible to all, as a way to close the digital gender divide;

10.   Stresses the need for training for workers and educators dealing with AI to promote the ability to identify and correct gender-discriminatory practices in the workplace and in education, and for workers developing AI systems and applications to identify and remedy gender-based discrimination in the AI systems and applications they develop; calls for the establishment of clear responsibilities in companies and educational institutions to ensure that there is no gender-based discrimination in the workplace or educational context; highlights that genderless images of AI and robots should be used for educational and cultural purposes, unless gender is a key factor for some reason;

11.   Highlights the importance of the development and deployment of AI applications in education, culture and the audiovisual sector in collecting gender-disaggregated and other equality data, and of applying modern machine learning de-biasing techniques, if needed, to correct gender stereotypes and gender biases which may have negative impacts;

12.   Calls on the Commission to include education in the regulatory framework for high-risk AI applications, given the importance of ensuring that education continues to contribute to the public good, as well as the high sensitivity of data on pupils, students and other learners; emphasises that in the education sector, this deployment should involve educators, learners and the wider society and should take into account the needs of all and the expected benefits in order to ensure that AI is used purposefully and ethically;

13.   Calls on the Commission to encourage the use of Union programmes such as Horizon Europe, Digital Europe and Erasmus+ to promote multidisciplinary research, pilot projects, experiments and the development of tools including training, for the identification of gender biases in AI, as well as awareness-raising campaigns for the general public;

14.   Stresses the need to create diverse teams of developers and engineers to work alongside the main actors in education, culture and the audiovisual sector in order to prevent gender or social bias from being inadvertently included in AI algorithms, systems and applications; stresses the need to consider the variety of different theories through which AI has been developed to date and could be further advanced in the future;

15.   Points out that taking due care to eliminate bias and discrimination against particular groups, including gender stereotypes, should not halt technological progress;

16.   Reiterates the importance of fundamental rights and the overarching supremacy of the legislation of data and privacy protection, which is imperative when dealing with such technologies; recalls that data protection and privacy can be particularly affected by AI, in particular children’s data; underlines that the principles established in the General Data Protection Regulation (GDPR) [8] are binding for the deployment of AI in that regard; recalls, moreover, that all AI applications must fully respect Union data protection law, namely the GDPR and the ePrivacy Directive [9] ; stresses the right to obtain human intervention when AI and related technologies are being used;

17.   Calls on the Commission and the Member States to implement an obligation of transparency and explainability of AI-based automated individual decisions taken within the framework of prerogatives of public power, and to implement penalties to enforce this; calls for the implementation of systems which use human verification and intervention by default, and for due process, including the right of appeal and redress as well as access to remedies;

18.   Notes the potentially negative impact of personalised advertising, in particular micro-targeted and behavioural advertising, and of the assessment of individuals, especially minors, without their consent, by interfering in the private life of individuals, asking questions as to the collection and use of the data used to personalise advertising, and offering products or services or setting prices; calls on the Commission, therefore, to introduce strict limitations on targeted advertising based on the collection of personal data, starting with a ban on cross-platform behavioural advertising, without harming small and medium-sized enterprises (SMEs); recalls that the ePrivacy Directive currently only allows targeted advertising subject to opt-in consent, otherwise making it illegal; calls on the Commission to prohibit the use of discriminatory practices for the provision of services or products;

19.   Stresses the need for media organisations to be informed about the main parameters of algorithm-based AI systems that determine ranking and search results on third-party platforms, and for users to be informed about the use of AI in decision-making services and empowered to set their privacy parameters via transparent and understandable measures;

20.   Stresses that AI can support content creation in education, culture and the audiovisual sector, alongside information and educational platforms, including listings of different kinds of cultural objects and a multitude of data sources; notes the risks of IPR infringement when blending AI and different technologies with a multiplicity of sources (documents, photos, films) to improve how that data is displayed, researched and visualised; calls for AI to be used to ensure a high level of IPR protection within the current legislative framework, such as by alerting individuals and businesses if they are in danger of inadvertently infringing the rules, or by assisting IPR rights holders if the rules are actually infringed; emphasises, therefore, the importance of having an appropriate legal framework at Union level for the protection of IPR in connection with the use of AI;

21.   Stresses the need to strike a balance between, on the one hand, the development of AI systems and their use in education, culture and the audiovisual sector and, on the other, measures to safeguard competition and market competitiveness for AI companies in these sectors; emphasises, in this regard, the need to encourage companies to invest in the innovation of AI systems used in these sectors, while also ensuring that those providing such applications do not obtain a market monopoly; underlines the need for AI to be made widely available to the cultural and creative sectors and industries (CCSI) across Europe in order to maintain a level playing field and fair competition for all stakeholders and actors in Europe; calls on the Commission and the Member States, when taking decisions on competition policy, including mergers, to take greater account of the role played by data and algorithms in the concentration of market power;

22.   Stresses the need to systematically address the social, ethical and legal issues raised by the development, deployment and use of AI such as the transparency and accountability of algorithms, non-discrimination, equal opportunities, freedom and diversity of opinion, media pluralism and the ownership, collection, use and dissemination of data and content; recommends that common European guidelines and standards to protect privacy be devised while making effective use of the data available; calls for transparency in the development and accountability in the use of algorithms;

23.   Calls on the Commission to put forward a comprehensive set of provisions designed to regulate AI applications on a horizontal basis and to supplement them with sector-specific rules, for example for audiovisual media services;

24.   Stresses the need for investment in research and innovation on the development, deployment and use of AI and its applications in education, culture and the audiovisual sector; highlights the importance of public investment in these services and the complementary added value provided by public-private partnerships in order to achieve this objective and deploy the full potential of AI in these sectors, in particular education, in view of the substantial amount of private investment made in recent years; calls on the Commission to find additional funding to promote research and innovation into AI applications in these sectors;

25.   Underlines that algorithmic systems can be an enabler for reducing the digital divide in an accelerated way, but unequal deployment risks creating new divides or accelerating the deepening of existing ones; expresses its concern that knowledge and infrastructure are not developed in a consistent way across the Union, which limits the accessibility of products and services that rely on AI, in particular in sparsely populated and socio‑economically vulnerable areas; calls on the Commission to ensure cohesion in the sharing of the benefits of AI and related technologies;

26.   Calls on the Commission to establish requirements for the procurement and deployment of AI and related technologies by Union public sector bodies in order to ensure compliance with Union law and fundamental rights; highlights the added value of using instruments such as public consultations and impact assessments prior to the procurement or deployment of AI systems, as recommended in the report of the Special Rapporteur to the UN General Assembly on AI and its impact on freedom of opinion and expression [10]; encourages public authorities to incentivise the development and deployment of AI by public funding and public procurement; stresses the need to strengthen the market by providing SMEs with the opportunity to participate in the procurement of AI applications in order to ensure the involvement of technology companies of all sizes and thus guarantee resilience and competition;

27.   Calls for independent audits to be conducted regularly to examine whether the AI applications being used and the related checks and balances are in accordance with specified criteria, and for those audits to be supervised by independent and adequately resourced oversight authorities; calls for specific stress tests to assess and enforce compliance;

28.   Notes the benefits and risks of AI in terms of cybersecurity and its potential in combatting cybercrime, and emphasises the need for any AI solutions to be resilient to cyberattacks while respecting Union fundamental rights, especially the protection of personal data and privacy; stresses the importance of monitoring the safe use of AI and the need for close collaboration between the public and private sectors to counter user vulnerabilities and the dangers arising in this connection; calls on the Commission to evaluate the need for better cybersecurity prevention and mitigation measures;

29.   Highlights that the COVID-19 pandemic can be considered a trial period for the development, deployment and use of digital and AI-related technologies in education and culture, as exemplified by the many online schooling platforms and online tools for cultural promotion employed across the Member States; calls on the Commission, therefore, to take stock of those examples when considering a common Union approach to the increased use of such technological solutions;

30.   Recalls the importance of strengthening digital skills and achieving a high standard of media, digital and information literacy at Union level as a prerequisite for the use of AI in education; underlines the need, in this regard, to ensure Union-wide digital and AI literacy, in particular through the development of training opportunities for teachers; insists that the use of AI technologies in schools should help to narrow the social and regional digital gap; welcomes the Commission’s updated Digital Education Action Plan, which addresses the use of AI in education; calls on the Commission, in that regard, to make digital capabilities, media literacy and training and AI-related skills the priorities of this plan, while raising awareness about the potential misuses and malfunctioning of AI; calls on the Commission, in that connection, to place special emphasis on children and young people in precarious situations, as they need particular support in the area of digital education; urges the Commission to duly address AI and robotics initiatives in education in its forthcoming AI legislative proposals; urges the Member States to invest in digital equipment in schools, using Union funds to this end;

31.   Highlights that the use of AI in education systems brings a wide range of possibilities, opportunities and tools for making it more innovative, inclusive, efficient and increasingly effective by introducing new high-quality learning methods that are quick, personalised and student-centric; stresses, however, that as it will impact education and social inclusion, the availability of such tools must be ensured for all social groups by establishing equal access to education and learning and leaving no one behind, especially people with disabilities;

32.   Underlines that in order to engage with AI both critically and effectively, citizens need at least a basic understanding of this technology; calls on the Member States to integrate awareness-raising campaigns about AI in their actions on digital literacy; calls on the Commission and the Member States to promote digital literacy plans and forums for discussion to involve citizens, parents and students in a democratic dialogue with public authorities and stakeholders concerning the development, deployment and use of AI technologies in education systems; stresses the importance of providing educators, trainers and others with the right tools and know-how with regard to AI and related technologies in terms of what they are, how they are used and how to use them properly and in accordance with the law, in order to avoid IPR infringements; highlights, in particular, the importance of digital literacy for people working in the education and training sectors and of improving digital training for the elderly, bearing in mind that the younger generations already have a basic notion of these technologies, having grown up with them;

33.   Stresses that the real objective of AI in education systems should be to make education as individualised as possible, offering students personalised academic paths in line with their strengths and weaknesses and didactic material tailored to their characteristics, while maintaining educational quality and the integrating principle of our education systems;

34.   Recalls the fundamental and multifaceted role that teachers play in education and in making it inclusive, especially in early childhood, where skills are acquired that will enable students to progress throughout their lives, such as in personal relations, study skills, empathy and cooperative work; stresses, therefore, that AI technologies cannot be used to the detriment or at the expense of in-person education, as teachers must not be replaced by any AI or AI-related technologies;

35.   Stresses that the learning benefits of using AI in education will depend not only on AI itself, but on how teachers use AI in the digital learning environment to meet the needs of pupils, students and teachers; points out, therefore, the need for AI programmers to involve teaching communities in the development, deployment and use of AI technologies where possible, creating a nexus environment to form connections and cooperation between AI programmers, developers, companies, schools, teachers and other public and private stakeholders in order to create AI technologies that are suitable for real-life educational environments, reflect the age and developmental readiness of each learner and meet the highest ethical standards; highlights that educational institutions should only deploy trustworthy, ethical, human-centred technologies which are auditable at every stage of their lifecycle by public authorities and civil society; emphasises the advantages of free and open-source solutions in this regard; calls for schools and other educational establishments to be provided with the financial and logistical support as well as the expertise required to introduce solutions for the learning of the future;

36.   Highlights, moreover, the need to continuously train teachers so they can adapt to the realities of AI-powered education and acquire the necessary knowledge and skills to use AI technologies in a pedagogical and meaningful way, enabling them to fully embrace the possibilities offered by AI and to understand its limitations; calls for digital teaching to be part of every teacher’s training in the future and calls for teachers and people working in education and training to be given the opportunity to continue their training in digital teaching throughout their lives; calls, therefore, for the development of training programmes in AI for teachers in all fields and across Europe; highlights, furthermore, the importance of reforming teaching programmes for new generations of teachers allowing them to adapt to the realities of AI-powered education, as well as the importance of drawing up and updating handbooks and guidelines on AI for teachers;

37.   Is concerned about the lack of specific higher education programmes for AI and the lack of public funding for AI across the Member States; believes that this is putting Europe’s future digital ambitions at risk;

38.   Is worried about the fact that few AI researchers are pursuing an academic career as tech firms can offer better pay and less bureaucracy for research; believes that part of the solution would be to direct more public money towards AI research at universities;

39.   Underlines the importance of equipping people with general digital skills from childhood onwards in order to close the qualifications gap and better integrate certain population groups into the digital labour market and digital society; points out that it will become more and more important to train highly skilled professionals from all backgrounds in the field of AI, ensure the mutual recognition of such qualifications throughout the Union, and upskill the existing and future workforce to enable it to cope with the future realities of the labour market; encourages the Member States, therefore, to assess their educational offer and to upgrade it with AI-related skills, where necessary, and to put in place specific curricula for AI developers, while also including AI in traditional curricula; highlights the need to ensure mutual recognition of professional qualifications in AI skills across the Union, as several Member States are upgrading their educational offer with AI-related skills and putting in place specific curricula for AI developers; welcomes the Commission’s efforts to include digital skills as one of the qualifications requirements for certain professions harmonised at Union level under the Professional Qualifications Directive [11]; stresses the need for these to be in line with the assessment list of the ethical guidelines for trustworthy AI, and welcomes the Commission’s proposal to transform this list into an indicative curriculum for AI developers; recalls the special needs of vocational education and training (VET) with regard to AI and calls for a collaborative approach across Europe to enhance the potential offered by AI in VET; underlines the importance of training highly skilled professionals in this area, including ethical aspects in curricula, and of supporting underrepresented groups in this field, as well as of creating incentives for those professionals to seek work within the Union; recalls that women are underrepresented in AI and that this may create significant gender imbalances in the future labour market;

40.   Stresses the need for governments and educational institutions to rethink, rework and adapt their educational curricula to the needs of the 21st century by devising educational programmes that place greater emphasis on STEAM subjects in order to prepare learners and consumers for the increasing prevalence of AI and facilitate the acquisition of cognitive skills; underlines, in this regard, the importance of diversifying this sector and of encouraging students, especially women and girls, to enrol in STEAM courses, in particular in robotics and AI-related subjects; calls for more financial and scientific resources to motivate skilled people to stay in the Union while attracting those with skills from third countries; notes, furthermore, the considerable number of start-ups working with AI and developing AI technologies; stresses that SMEs will require additional support and AI-related training to comply with digital and AI-related regulation;

41.   Notes that automation and the development of AI may drastically and irreversibly change employment; emphasises that priority should be given to tailoring skills to the needs of the future job market, in particular in education and the CCSI; underlines, in this context, the need to upskill the future workforce; stresses, furthermore, the importance of deploying AI to reskill and upskill the European labour market in the CCSI, in particular in the audiovisual sector, which has already been severely impacted by the COVID-19 crisis;

42.   Calls on the Commission to assess the level of risk of AI deployment in the education sector in order to ascertain whether AI applications in education should be classified as high-risk under the regulatory framework and made subject to stricter requirements on safety, transparency, fairness and accountability, in view of the importance of ensuring that education continues to contribute to the public good and the acute sensitivity of data on pupils, students and other learners; underlines that datasets used to train AI should be reviewed in order to avoid reinforcing certain stereotypes and other kinds of bias;

43.   Calls on the Commission to propose a futureproof legal framework for AI so as to provide legally binding ethical measures and standards to ensure fundamental rights and freedoms and the development of trustworthy, ethical and technically robust AI applications, including integrated digital tools, services and products such as robotics and machine learning, with particular regard to education; calls for the data used and produced by AI applications in education to be accessible, interoperable and of high quality, and to be shared with the relevant public authorities in an accessible way and with respect for copyright and trade secrets legislation; recalls that children constitute a vulnerable group who deserve particular attention and protection; stresses that while AI can benefit education, it is necessary to take into account its technological, regulatory and social aspects, with adequate safeguards and a human-centred approach that ultimately ensures that human beings are always able to control and correct the system’s decisions; points out, in this regard, that teachers must control and supervise any deployment and use of AI technologies in schools and universities when interacting with pupils and students; recalls that AI systems must not take any final decision that could affect educational opportunities, such as students’ final evaluation, without full human supervision; recalls that automated decisions about natural persons based on profiling, where they have legal or similar effects, must be strictly limited and always require the right to human intervention and the right to an explanation under the GDPR; underlines that this should be strictly adhered to, especially in the education system, where decisions about future chances and opportunities are taken;

44.   Expresses serious concern that schools and other education providers are becoming increasingly dependent on educational technology (edtech) services, including AI applications, provided by a few private companies that enjoy a dominant market position; believes that this should be scrutinised through Union competition rules; stresses the importance, in this regard, of supporting the uptake of AI by SMEs in education, culture and the audiovisual sector through the appropriate incentives that create a level playing field; calls, in this context, for investment in European IT companies in order to develop the necessary technologies within the Union, given that the major companies that currently provide AI are based outside the Union; strongly recalls that the data of minors is strictly protected by the GDPR and can only be processed if completely anonymised or consent has been given or authorised by the holder of parental responsibility, in strict compliance with the principles of data minimisation and purpose limitation; calls for more robust protection and safeguards in the education sector where children’s data is concerned and calls on the Commission to take more effective steps in that regard; calls for clear information to be provided to children and their parents about the possible use and processing of children’s data, including through awareness-raising and information campaigns;

45.   Underlines the specific risks in the use of AI automated recognition applications, which are developing at pace; recalls that children are a particularly sensitive group; recommends that the Commission and the Member States ban the use of automated biometric identification, such as facial recognition, for educational and cultural purposes on educational and cultural premises, unless its use is allowed by law;

46.   Stresses the need to increase customer choice to stimulate competition and broaden the range of services offered by AI technologies for educational purposes; encourages public authorities, in this regard, to incentivise the development and deployment of AI technologies through public funding and public procurement; considers that technologies used by public education providers or purchased with public funding should be based on open-source technologies;

47.   Notes that innovation in education is overdue, as highlighted by the COVID-19 pandemic and the ensuing switch to online and distance learning; stresses that AI-driven educational tools such as those for assessing and identifying learning difficulties can improve the quality and effectiveness of online learning;

48.   Stresses that next-generation digital infrastructure and internet coverage are of strategic significance for providing AI-powered education to European citizens; calls on the Commission, in light of the COVID-19 crisis, to elaborate a European 5G strategy that ensures Europe’s strategic resilience and is not dependent on technologies from states which do not share our values;

49.   Calls for the creation of a pan-European university and research network focused on AI in education, which should bring together institutions and experts from all fields to examine the impact of AI on learning and identify solutions to enhance its potential;

Cultural heritage

50.   Reiterates the importance of access to culture for every citizen throughout the Union; highlights, in this context, the importance of the exchange of best practices among Member States, educational facilities and cultural institutions and similar stakeholders; further considers it of vital importance that the resources available at both Union and national level are used to the maximum of their potential in order to further improve access to culture; stresses that there are a multitude of options to access culture and that all varieties should be explored in order to determine the most appropriate option; highlights the importance of consistency with the Marrakech Treaty;

51.   Stresses that AI technologies can play a significant role in preserving, restoring, documenting, analysing, promoting and managing tangible and intangible cultural heritage, including by monitoring and analysing changes to cultural heritage sites caused by threats such as climate change, natural disasters and armed conflicts;

52.   Stresses that AI technologies can increase the visibility of Europe’s cultural diversity; points out that these technologies provide new opportunities for cultural institutions, such as museums, to produce innovative tools for cataloguing artefacts as well as for documenting cultural heritage sites and making them more accessible, including through 3D modelling and augmented and virtual reality; stresses that AI will also enable museums and art galleries to introduce interactive and personalised services for visitors by providing them with a list of suggested items based on their interests, expressed in person and online;

53.   Stresses that the use of AI will herald new innovative approaches, tools and methodologies allowing cultural workers and researchers to create uniform databases with suitable classification schemes as well as multimedia metadata, enabling them to make connections between different cultural heritage objects and thus increase knowledge and provide a better understanding of cultural heritage;

54.   Stresses that good practices in AI technologies for the protection and accessibility of cultural heritage, in particular for people with disabilities, should be identified and shared between cultural networks across the Union, while encouraging research on the various uses of AI to promote the value, accessibility and preservation of cultural heritage; calls on the Commission and the Member States to promote the opportunities offered by the use of AI in the CCSI;

55.   Stresses that AI technologies can also be used to monitor the illicit trafficking of cultural objects and the destruction of cultural property, while supporting data collection for recovery and reconstruction efforts of both tangible and intangible cultural heritage; notes, in particular, that the development, deployment and use of AI in customs screening procedures may support efforts to prevent the illicit trafficking of cultural heritage, in particular to supplement systems which allow customs authorities to target their efforts and resources on items that pose the greatest risk;

56.   Notes that AI could benefit the research sector, for example through the role that predictive analytics can play in fine-tuning data analysis, for example on the acquisition and movement of cultural objects; stresses that the Union must step up investment and foster partnerships between industry and academia in order to enhance research excellence at European level;

57.   Recalls that AI can be a revolutionary tool for promoting cultural tourism and highlights its considerable potential in predicting tourism flows, which could help cities struggling with over-tourism;

Cultural and creative sectors and industries (CCSI)

58.   Regrets the fact that culture is not among the priorities outlined in policy options and recommendations on AI at Union level, notably the Commission’s white paper of 19 February 2020 on AI; calls for these recommendations to be revised in order to make culture an AI policy priority at Union level; calls on the Commission and the Member States to address the potential impact of the development, deployment and use of AI technologies on the CCSI and to make the most of the Next Generation EU recovery plan to digitise these sectors to respond to new forms of consumption in the 21st century;

59.   Points out that AI has now reached the CCSI, as exemplified by the automatic production of texts, videos and pieces of music; emphasises that creative artists and cultural workers must have the digital skills and training required to use AI and other digital technologies; calls on the Commission and the Member States to promote the opportunities offered by the use of AI in the CCSI, by making more funding available from science and research budgets, and to establish digital creativity centres in which creative artists and cultural workers develop AI applications, learn how to use these and other technologies and test them;

60.   Acknowledges that AI technologies have the potential to support a growing number of jobs in the CCSI, facilitated by greater access to these technologies; emphasises, therefore, the importance of boosting digital literacy in the CCSI to make these technologies more inclusive, usable, learnable and interactive for these sectors;

61.   Emphasises that the interaction between AI and the CCSI is complex and requires an in‑depth assessment; welcomes the Commission’s report of November 2020 entitled ‘Trends and Developments in Artificial Intelligence – Challenges to the IPRS Framework’ and the study entitled ‘Copyright and New Technologies: Copyright Data Management and Artificial Intelligence’; underlines the importance of clarifying the conditions of use of copyright‑protected content as data input (images, music, films, databases, etc.) and in the production of cultural and audiovisual outputs, whether created by humans with the assistance of AI or autonomously generated by AI technologies; invites the Commission to study the impact of AI on the European creative industries; reiterates the importance of European data and welcomes the statements made by the Commission in this regard, as well as the placing of artificial intelligence and related technologies high on the agenda;

62.   Stresses the need to set up a coherent vision of AI technologies in the CCSI at Union level; calls on the Member States to strengthen the focus on culture in their AI national strategies to ensure that the CCSI embrace innovation and remain competitive and that cultural diversity is safeguarded and promoted at Union level in the new digital context;

63.   Stresses the importance of creating a Union-wide heterogeneous milieu for AI technologies to encourage cultural diversity and support minorities and linguistic diversity, while also strengthening the CCSI through online platforms, allowing Union citizens to be included and to participate;

64.   Calls on the Commission and the Member States to support a democratic debate on AI technologies and to provide a regular forum for discussion with civil society, researchers, academia and stakeholders to raise awareness of the benefits and challenges of their use in the CCSI; emphasises, in that connection, the role which art and culture can play in familiarising people with AI and fostering public debate about it, as they can provide vivid, tangible examples of machine learning, for example in the area of music;

65.   Calls on the Commission and the Member States to address the issue of AI-generated content and the challenges it poses in terms of authorship and copyright infringement; asks the Commission, in that regard, to assess the impact of AI and related technologies on the audiovisual sector and the CCSI, with a view to promoting cultural and linguistic diversity, while respecting authors’ and performers’ rights;

66.   Stresses that the European Institute of Innovation and Technology (EIT), in particular its future Knowledge and Innovation Community (KIC) dedicated to cultural and creative industries (CCI), should play a leading role in developing a European strategy on AI in education, culture and the audiovisual sector and can help accelerate and harness the application of AI in these sectors;

67.   Notes that AI has already entered the creative value chain at the level of creation, production, dissemination and consumption and is therefore having an immense impact on the CCSI, including music, the film industry, art and literature, through new AI-assisted tools and software that facilitate production, while providing inspiration and enabling the broader public to create content;

68.   Calls on the Commission to carry out studies and consider policy options to tackle the detrimental impact of AI-based control of online streaming services designed to limit diversity and/or maximise profits by including or prioritising certain content in the consumer offer, as well as how this impacts cultural diversity and creators’ earnings;

69.   Believes that AI is becoming increasingly useful for the CCSI in creation and production activities;

70.   Emphasises the role of an author’s personality for the expression of free and creative choices that constitute the originality of works [12]; underlines the importance of limitations and exceptions to copyright when using content as data input, notably in education, academia and research, and in the production of cultural and creative output, such as audiovisual output and user-generated content;

71.   Takes the view that consideration should be given to protecting AI-generated technical and artistic creations in order to encourage this form of creativity;

72.   Stresses that in the data economy context, better copyright data management is achievable, for the purpose of better remunerating authors and performers, notably in enabling the swift identification of the authorship and right ownership of content, thus contributing to lowering the number of orphan works; further highlights that AI technological solutions should be used to improve copyright data infrastructure and the interconnection of metadata in works, but also to facilitate the transparency obligation provided in Article 19 of Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market [13] for up‑to‑date, relevant and comprehensive information on the exploitation of authors’ and performers’ works and performances, particularly in the presence of a plurality of rights holders and of complex licensing schemes;

73.   Calls for the intellectual property action plan announced by the Commission to address the question of AI and its impact on the creative sectors, taking account of the need to strike a balance between protecting IPR and encouraging creativity in the areas of education, culture and research; considers that the Union can be a leader in the creation of AI technologies if it adopts an operational regulatory framework and implements proactive public policies, particularly as regards training programmes and financial support for research; asks the Commission to assess the impact of IPR on the research and development of AI and related technologies, as well as on the CCSI, including the audiovisual sector, with particular regard to authorship, fair remuneration of authors and related questions;

74.   Calls on the Commission to consider the legal aspects of the output produced using AI technologies, as well as cultural content generated with the use of AI and related technologies; considers it important to support the production of cultural content; reiterates, however, the importance of safeguarding the Union’s unique IPR framework and that any changes should be made with the necessary due care, in order not to disrupt the delicate balance; calls on the Commission to produce an in-depth assessment with regard to the possible legal personality of AI-produced content, as well as the application of IPR to AI-generated content and to content created with the use of AI tools;

75.   Calls on the Commission, in addition, to consider developing, in very close cooperation with Member States and the relevant stakeholders, verification mechanisms or systems for publishers, authors and creators in order to assist them in verifying what content they may use and to more easily determine what is protected under IPR legislation;

76.   Calls on the Commission to lay down rules designed to guarantee effective data interoperability in order to make content purchased on a platform accessible via any digital tool irrespective of brand;

Audiovisual sector

77.   Notes that AI is often used to enable automated decision-making algorithms to disseminate and order the cultural and creative content displayed to users; stresses that these algorithms are a ‘black box’ for users; stresses that the algorithms used by media service providers, video sharing platforms (VSPs) and music streaming services should be designed in such a way that they do not privilege specific works by limiting their ‘personalised’ suggestions to the most popular works, for targeted advertising, commercial purposes or to maximise profit; calls for recommendation algorithms and personalised marketing to be explainable and transparent where possible, in order to give consumers an accurate and comprehensive insight into these processes and content and to ensure that personalised services are not discriminatory and in line with the recently adopted Platform to Business Regulation [14] and New Deal for Consumers Omnibus Directive [15] ; calls on the Commission to address the ways in which content moderation algorithms are optimised to engage users, and to propose recommendations to increase user control over the content they see, by guaranteeing and properly implementing the right of users to opt out of recommended and personalised services; underlines, moreover, that consumers must be informed when they are interacting with an automated decision process and that their choices and performance must not be limited; stresses that the use of AI mechanisms for the commercial surveillance of consumers must be countered, even if it concerns ‘free services’, by ensuring that it is strictly in line with fundamental rights and the GDPR; stresses that all regulatory changes must take into consideration the impact on vulnerable consumers;

78.   Underlines that what is illegal offline shall be illegal online; notes that AI tools have the potential and are already used to fight illegal content online, but strongly recalls ahead of the forthcoming Digital Services Act that such tools must always respect fundamental rights, especially freedom of expression and information, and should not lead to a general monitoring obligation for the internet, or to the removal of legal material disseminated for education, journalistic, artistic or research purposes; stresses that algorithms should be used only as a flagging mechanism in content moderation, subject to human intervention, as AI is unable to reliably distinguish between legal, illegal and harmful content; notes that terms and conditions should always include community guidelines as well as an appeal procedure;

79.   Recalls, furthermore, that there should be no general monitoring, as stipulated in Article 15 of the e-Commerce Directive [16] , and that specific content monitoring for audiovisual media services should be in accordance with the exceptions laid down in Union legislation; recalls that AI applications must adhere to internal and external safety protocols, which should be technically accurate and robust in nature; considers that this should extend to operation in normal, unknown and unpredictable situations alike;

80.   Stresses, moreover, that the use of AI in algorithm-based content recommendations on media service providers, such as video on demand services and VSPs, may have a serious impact on cultural and linguistic diversity, notably regarding the obligation to ensure the prominence of European works under Article 13 of the Audiovisual Media Services Directive (Directive (EU) 2018/1808 [17]); notes that the same concerns are equally relevant for music streaming services, and calls for the development of indicators to assess cultural diversity and the promotion of European works on such services;

81.   Calls on the Commission and the Member States to step up their financial support for the development, deployment and use of AI in the area of the automatic subtitling and dubbing of European audiovisual works, in order to foster cultural and language diversity in the Union and enhance the dissemination of and access to European audiovisual content;

82.   Calls on the Commission to establish a clear ethical framework for the use of AI technologies in media in order to prevent all forms of discrimination and ensure access to culturally and linguistically diverse content at Union level, based on accountable, transparent and inclusive algorithms, while respecting individuals’ choices and preferences;

83.   Points out that AI can play a major role in the rapid spread of disinformation; stresses, in that regard, that the framework should address the misuse of AI to disseminate fake news and online misinformation and disinformation, while avoiding censorship; calls on the Commission, therefore, to assess the risks of AI assisting the spread of disinformation in the digital environment as well as solutions on how AI could be used to help counter disinformation;

84.   Calls on the Commission to take regulatory measures to ensure that media service providers have access to the data generated by the provision and dissemination of their content on other providers’ platforms; emphasises that full data transfer from platform operators to media service providers is vital if the latter are to understand their audience better and thus improve the services they offer in keeping with people’s wishes;

85.   Stresses the importance of increasing funding for Digital Europe, Creative Europe and Horizon Europe in order to reinforce support for the European audiovisual sector, namely by collaborative research projects and experimental pilot initiatives on the development, deployment and use of ethical AI technologies;

86.   Calls for close collaboration between Member States in developing training programmes aimed at reskilling or upskilling workers to make them better prepared for the social transition that the use of AI technologies in the audiovisual sector will entail;

87.   Considers that AI has enormous potential to help drive innovation in the news media sector; believes that the widespread integration of AI, such as for content generation and distribution, the monitoring of comments sections, the use of data analytics, and identifying doctored photos and videos, is key to saving on costs in newsrooms in the light of diminishing advertising revenues, and to devoting more resources to reporting on the ground, thus increasing the quality and variety of content;

Online disinformation: deepfakes

88.   Stresses the importance of ensuring online and offline media pluralism to guarantee the quality, diversity and reliability of the information available;

89.   Recalls that accuracy, independence, fairness, confidentiality, humanity, accountability and transparency, as driving forces behind the principles of freedom of expression and access to information in online and offline media, are decisive in the fight against disinformation and misinformation;

90.   Notes the important role which independent media play in culture and the daily life of citizens; stresses that disinformation represents a fundamental problem, as copyright and IPR generally are being constantly infringed; calls on the Commission, in cooperation with the Member States, to continue its work on raising awareness of this problem, countering the effects of disinformation as well as the source problems; considers it important, furthermore, to develop educational strategies to specifically improve digital literacy in this regard;

91.   Recalls that with new techniques rapidly emerging, detecting false and manipulated content such as deepfakes may become increasingly challenging due to the ability of malicious producers to generate sophisticated algorithms that can be successfully trained to evade detection, thus seriously undermining our basic democratic values; asks the Commission to assess the impact of AI in the creation of deepfakes, to establish appropriate legal frameworks to govern their creation, production or distribution for malicious purposes, and to propose recommendations for, among other initiatives, action against any AI-powered threats to free and fair elections and democracy;

92.   Welcomes recent initiatives and projects to create more efficient deepfake detection tools and transparency requirements; stresses, in this regard, the need to explore and invest in methods for tackling deepfakes as a crucial step in combatting misinformation and harmful content; considers that AI-enabled solutions can be helpful in this regard; asks the Commission, therefore, to impose an obligation for all deepfake material, or any other realistically made synthetic video, to state that the material is not original, and to impose strict limitations on its use for electoral purposes;

93.   Is concerned that AI is having an ever greater influence on the way information is found and consumed online; points out that so-called filter bubbles and echo chambers are restricting diversity of opinion and undermining open debate in society; urges, therefore, that the way platform operators use algorithms to process information must be transparent and that users must be given greater freedom to decide whether and what information they want to receive;

94.   Points out that AI technologies are already being used in journalism, for example to produce texts or, in the context of investigative research, to analyse large data sets; emphasises that in the context of producing information of significance to society as a whole, it is important that automated journalism should draw on correct and comprehensive data, in order to prevent the dissemination of fake news; emphasises that the basic principles of quality journalism, such as editorial supervision, must also apply to journalistic content produced using AI technologies; calls for AI-generated texts to be clearly identified as such, in order to safeguard trust in journalism;

95.   Highlights the potential of AI to facilitate and encourage multilingualism by developing language-related technologies and enabling online European content to be discovered;

96.   Instructs its President to forward this resolution to the Council and the Commission.

“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”

Alan Turing, 1947

The last decade has been transformative for AI, arousing both fear and excitement for humanity. Often described as ‘the new electricity’, AI has advanced to the point where its systemic impact could substantially change all aspects of society over the next century.

Whilst it is easy to understand the potential effects of AI on sectors such as telecommunications, transportation, traffic management and health care, evaluating its long-term effects on education, culture and the audiovisual sector is considerably more challenging. Although there is a consensus that AI and automation are likely to create more wealth and to simplify a vast array of processes, the use of AI has also raised serious concerns that it may result in an increase in inequality, discrimination and unemployment.

The potential impact of AI on education, culture and the audiovisual sector is, however, rarely discussed and remains largely unknown. Yet this question is of utmost importance, because AI is already being used to teach curricula, as well as to produce movies, songs, stories and paintings.

The purpose of this report is therefore to understand concretely how AI currently impacts these sectors and how future technological advances in AI will impact them further over the next decade. In particular, the Rapporteur reflects on how AI may transform these sectors and which particular regulatory challenges the Union may have to face in that regard.

(i) AI is reshaping education

AI is transforming learning, teaching, and education radically. The whirlwind speed of technological development accelerates the radical transformation of educational practices, institutions and policies. In this field, AI has many applications, such as customisable approaches to learning, AI-based tutors, textbooks and course material with customised content, smart algorithms to determine best teaching methods, AI game engines, and adaptive user models in personalised learning environments (PLE) which can allow early identification of difficulties, such as dyslexia or risks of early school leaving.

Personalised learning experience is the cornerstone of the use of AI in education. It would allow students to enjoy an educational approach that is fully tailored to their individual abilities, needs and difficulties, whilst enabling teachers to closely monitor students’ progress. However, in order to make personalised education a reality, large amounts of personal data need to be collected, used and analysed.

The Rapporteur stresses in that regard that the current lack of access to personal data on students is likely to prevent the successful implementation of AI in education. It is thus essential to ensure the safety and transparency of personal data collection, use, management and dissemination, whilst safeguarding the confidentiality and privacy of learners’ personal data. Moreover, addressing the risks of potential AI bias, as well as tackling the issue of data storage, should be a priority in any initiative for the wide deployment of AI in the education system at Union level.

Although there is little chance that teachers will be replaced by machines in the near future, the increasing use of AI in education implies the need to rethink education overall, as well as to reflect on the redefinition of teaching, the role of teachers and, as a result, the subsequent retraining required to adapt to an AI-based educational system.

Considering that less than 40% of teachers in the Union have received courses on ICT inclusion in the classroom during their Initial Teacher Education (ITE), the Rapporteur would like to stress the crucial importance of training teachers to acquire digital skills as a prerequisite to becoming familiar with AI. Such training would enable them to take advantage of AI technologies, while also making them aware of the potential dangers of AI.

This issue can also be seen more widely, with 42% of the Union population still lacking basic digital skills. There are also serious regional discrepancies in access to digital infrastructure and in digital skills attainment across the Union.

Emerging technology trends related to digital transformation, such as AI, have profound implications in terms of the skills required for the evolving digital economy. In particular, the notion of lifelong learning in AI has emerged as one of the key strategies for job security and employment in the digital era.

The Rapporteur suggests that citizens be trained to acquire the necessary digital skills, whilst carefully assessing which AI-related skills are needed today and in the future, and that the necessary measures be taken to address existing and emerging skills gaps.

It is also crucial to ensure that the prerequisites for the deployment and the relevant use of AI, in terms of internet access, connectivity, networks and infrastructure, are met.

(ii) AI can be used to safeguard and promote cultural heritage

In recent years, AI has been of increasing relevance to cultural heritage, notably in response to modern threats such as climate change or conflicts. AI can have various applications in that regard: it can be used to enhance users’ experience by enabling visitors to cultural institutions and museums to create personal narrative trails or to enjoy virtual tour guides. Conversational bots could communicate in an interactive way about cultural heritage on any topic and in any language. They would also make access to information easier whilst providing a vivid cultural experience to users.

AI could also facilitate the understanding of the history of the Union. The ‘Time Machine Project’, for example, aims to create advanced AI technologies to make sense of vast amounts of information from complex historical data sets stored in archives and museums. This enables the transformation of fragmented data into usable knowledge by mapping the Union’s entire social, cultural and geographical evolution, and may facilitate the exploration and improve the understanding of the cultural, economic and historical development of European cities.

(iii) AI changes the way the cultural and creative industries work, in particular the audiovisual sector

AI use is rapidly expanding in the media, with many applications:

-   Data-driven marketing and advertising, by training machine learning algorithms to develop promotional movie trailers and design advertisements,

-   Personalisation of users’ experience, by using machine learning to recommend personalised content based on data from user activity and behaviour,

-   Search optimisation, by using AI to improve the speed and efficiency of the media production process and the ability to organise visual assets,

-   Content creation, by generating video clips from automatic video segments ready for broadcast and special effects, such as re-creating a younger version of an actor digitally or creating new content with a deceased actor,

-   Script writing, such as simple factual text creation (sports and news reports produced by robots), but also the writing of fictional stories, such as the experimental short movie ‘Sunspring’,

-   Viewer interaction on complex story lines, such as the last episode of the British series ‘Black Mirror’, ‘Bandersnatch’,

-   Automated captioning and subtitling, such as audio-to-text processes, for viewers with disabilities,

-   Automated content moderation on audiovisual content.

Whilst AI offers a wide range of opportunities in producing high quality cultural and creative content, the centralised distribution and access to such content raises a number of ethical and legal issues, notably on data protection, freedom of expression and cultural diversity.

Cultural and creative works, notably audiovisual works, are mainly distributed through large centralised platforms, which makes media consumption dependent on the proprietary algorithms developed by these platforms.

The Rapporteur points out that algorithm-based personalised recommendations are potentially detrimental to cultural and linguistic diversity, preventing under-represented cultural and creative content from appearing in suggestions provided by these systems. On the largest platforms, the criteria used to select or recommend a work are neither transparent nor auditable, and are likely to be decided on the basis of economic factors that solely benefit these platforms.

The question of cultural and linguistic diversity in recommendation systems is therefore crucial and must be addressed. The Rapporteur stresses the need to set up a clear legal framework for transparent, accountable and inclusive algorithms, in order to safeguard and promote cultural and linguistic diversity.

Regulatory challenges triggered by AI applications within the audiovisual sector are also linked to existing legal acts, such as the AVMSD. Thus a more in-depth assessment might be needed as to the urgency and/or political momentum for future adaptations of these files to AI.

Whilst AI can help empower many creators, making the CCS more prosperous and driving cultural diversity, the large majority of artists and entrepreneurs may still not be familiar with AI tools.

There is a lack of technical knowledge among creators precluding them from experimenting with machine learning and reaping the benefits such tools can bring. It is therefore essential to assess which skills will be needed in the near future, whilst at the same time improving training systems, including upskilling and reskilling, and guaranteeing lifelong learning throughout the whole working life and beyond.

In that context, the Rapporteur suggests setting up an AI observatory with the objective of harmonising and facilitating evidence-based scrutiny of new developments in AI, in order to tackle the question of the auditability and accountability of AI applications in the CCS.

(iv) Countering fake news

AI technologies are increasingly used to disseminate fake news, notably through the use of ‘deepfakes’.

Deepfakes are synthetic images or videos generated by AI using deep learning and generative adversarial networks (GANs). Humans cannot distinguish deepfakes from authentic content. Deepfakes can be used for all kinds of trickery, most commonly ‘face swaps’, ranging from harmless satire and film tweaks to malicious hoaxes, targeted harassment, deepfake pornography and financial fraud. The danger of deepfakes lies in making people believe that something is real when it is not; they may thus be used as a particularly powerful and potent weapon for online disinformation, spreading virally on platforms and social media, where they can influence public opinion, voting processes and election results.

Whilst AI is frequently singled out for its role in spreading fake news, it could also play a significant role in countering and combating fake news and disinformation, as evidenced by projects such as the ‘Fake News Challenge’. AI systems can reverse-engineer AI-generated fake news and help spot manipulated content. However, the algorithms generating deepfakes are becoming more and more sophisticated, and detecting them is, as a result, becoming increasingly difficult.

The Rapporteur therefore stresses the need to tackle the misuse of AI in disseminating fake news and online misinformation, notably by exploring ways to efficiently detect deepfakes.

OPINION OF THE COMMITTEE ON CIVIL LIBERTIES, JUSTICE AND HOME AFFAIRS (16.7.2020)

for the Committee on Culture and Education

Rapporteur for opinion (*): Ondřej Kovařík

(*) Associated committee – Rule 57 of the Rules of Procedure

SUGGESTIONS

The Committee on Civil Liberties, Justice and Home Affairs calls on the Committee on Culture and Education, as the committee responsible, to incorporate the following suggestions into its motion for a resolution:

1.   Underlines that the use of AI in the education, culture and audiovisual sectors must fully respect fundamental rights, freedoms and values, including privacy, the protection of personal data, non-discrimination and freedom of expression and information, as enshrined in the EU Treaties and the Charter of Fundamental Rights of the European Union; welcomes the Commission’s White Paper on Artificial Intelligence in this regard, and invites the Commission to include the educational sector, limited to areas posing significant risks, in the future regulatory framework for high-risk AI applications;

2.   Recalls that AI may give rise to biases and thus to various forms of discrimination based on sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation; in this regard, recalls that everyone’s rights must be ensured and that AI initiatives must not be discriminatory in any form;

3.   Emphasises that such bias and discrimination can arise from already biased sets of data, reflecting existing discrimination in society; stresses that AI must avoid bias leading to prohibited discrimination and must not reproduce discrimination processes; underlines the need to take these risks into account when designing AI technologies, as well as the importance of working with AI technology providers to address persistent loopholes facilitating discrimination, and recommends that AI design and development teams should reflect the diversity of society;

4.   Notes that the use of AI in education brings a wide range of possibilities and opportunities, for instance to facilitate access to information, to improve research methods or to understand how pupils learn and to offer them customisation, while at the same time posing risks regarding equal access to education and learning inequalities at an increasingly younger age and for vulnerable and historically disadvantaged groups; calls for a sufficient data-sharing infrastructure between AI applications and public research entities; points out that equity and inclusion are core values that should be duly taken into account when designing policies for AI in education; calls for the non-discriminatory use of AI in the education sector; recalls the risks and discrimination that may arise from recently developed AI tools used for purposes of school admissions, and calls for them to be rectified as soon as possible; underlines the need for a proper assessment of the AI tools used in the education sector to identify their impact on the rights of the child;

5.   Acknowledges that using digital and AI technologies can help develop increasingly effective educational tools and lead to a more inclusive society, by countering traditional forms of discrimination, including lack of access to services, by bringing education to disadvantaged communities, to persons with disabilities in line with the EU Accessibility Act, and to other categories of European citizens lacking proper access to education, and by providing access to adequate learning opportunities;

6.   Underlines that the benefits of AI should be shared with all parts of society, leaving no one behind; stresses the need to fully take into consideration the specific needs of the most vulnerable groups, such as children, persons with disabilities, elderly people and other groups at risk of exclusion; expresses its concerns about limited accessibility of the internet in some regions across the EU, and calls on the Commission and the Member States to deploy sustained efforts to ameliorate telecommunications infrastructures;

7.   Recognises the possibilities of AI in the culture sector in terms of developing music, art and other cultural expressions; emphasises that freedom of expression is an important freedom and value and that a pluriform cultural landscape is of great value to society; calls on the Commission to keep these values in mind when drafting its proposals on AI;

8.   Welcomes the Commission’s plan to update the Digital Education Action Plan so as to make it more ambitious and integrated with a view to making educational systems fit for the digital age, notably through making better use of data and AI-based technologies; calls on all stakeholders, both public and private, to closely cooperate in implementing these educational reforms;

9.   Stresses the need to ensure more general public awareness of AI at all levels, as a key element to enable the public to make informed decisions and help strengthen the resilience of our societies; underlines that this must also include public awareness of the risks in terms of privacy and biases related to AI; invites the Commission and the Member States to include the above in educational programmes and programmes which support the arts;

10.   Underlines the urgent need to educate the public at every level in the use of AI and to equip all European citizens, including vulnerable groups, with basic digital skills enabling equal social and economic opportunities, as well as the need to have high-quality ICT programmes in education systems at all levels; calls for the digital gender gap not to be underestimated and for measures to be taken to remedy it; welcomes the upcoming update of the Skills Agenda aimed at allowing everyone to benefit from the EU’s digital transformation; emphasises the importance of training teachers and educators in the use of AI, especially those responsible for underage students; notes that significant skills shortages still exist in the digital and technology sectors; underlines the importance of diversifying this sector and of encouraging students, in particular women and girls, to enrol in Science, Technology, Engineering and Mathematics (STEM) courses, in particular in robotics and AI-related subjects, in addition to those related to their career aspirations; calls for more financial and scientific resources to motivate skilled people to stay in the EU and to attract those with skills from abroad; furthermore, notes that there are a considerable number of start-ups working with AI and developing AI technologies; stresses that small and medium-sized enterprises (SMEs) will require additional support and AI-related training to comply with digital and AI-related regulation;

11.   Recalls that data protection and privacy can be particularly affected by AI; underlines the principles established in Regulation (EU) 2016/679 of the European Parliament and of the Council (the General Data Protection Regulation (GDPR)) [18] as binding principles for AI deployment; recalls that all AI applications need to fully respect Union data protection law, namely the GDPR and Directive 2002/58/EC of the European Parliament and of the Council (the ePrivacy Directive) [19] (currently under revision);

12.   Recalls that children constitute a vulnerable public who deserve particular attention and protection; recalls that automated decisions about natural persons based on profiling, where they have legal or similar effects, must be strictly limited and always require the right to human intervention and to explicability under the GDPR; underlines that this should be strictly adhered to, especially in the education system, where decisions about future chances and opportunities are taken; observes that a few private companies are dominating the educational technology (edtech) sector in some Member States, and believes this should be scrutinised through EU competition rules; strongly recalls that data of minors is strictly protected by the GDPR, and that children’s data can only be processed if completely anonymised or where consent has been given or authorised by the holder of parental responsibility over the child; therefore calls for stronger protection and safeguards in the education sector where children’s data are concerned; calls for clear information to be provided to children and their parents, including via awareness and information campaigns, about the possible use and processing of children’s data;

13.   Underlines the specific risks existing in the use of AI automated recognition applications, which are currently developing rapidly; recalls that children are a particularly sensitive public; recommends that the Commission and the Member States ban automated biometric identification, such as facial recognition for educational and cultural purposes, on educational and cultural premises, unless its use is allowed in law;

14.   Calls on the Commission and the Member States to implement an obligation of transparency and explainability of AI-based automated individual decisions taken within the framework of prerogatives of public power, and to implement penalties to enforce such obligations; calls for the implementation of systems which use human verification and intervention by default, and for due process, including the right of appeal, and access to remedies; recalls that automated decisions about natural persons based on profiling, where they have legal or similar effects, must be strictly limited and always require the right to human intervention and to explainability under the GDPR;

15.   Calls for independent audits to be conducted regularly to examine whether AI applications being used and the related checks and balances are in accordance with specified criteria, and for those audits to be supervised by independent and sufficient overseeing authorities; calls for specific stress tests to assist and enforce compliance;

16.   Points out that AI can play a major role in the rapid spread of disinformation; therefore calls on the Commission to assess the risks of AI assisting the spread of disinformation in the digital environment, and to propose recommendations, among others, for action against any AI-powered threats to free and fair elections and democracy; observes that deepfakes can also be used to manipulate elections, to disseminate disinformation and for other undesirable actions; notes furthermore that the immersive experiences facilitated by AI can be exploited by malicious actors; asks the Commission to propose recommendations, including possible restrictions in this regard, in order to adequately safeguard against the use of these technologies for illegal purposes; also calls for an assessment of how AI could be used to help counter disinformation; calls on the Commission to ensure that any future regulatory framework does not lead to censorship of legal individual content uploaded by users; recalls that critical thinking and the ability to interact with skill and confidence in the online environment are needed more than ever;

17.   Notes that AI is often used to enable automated decision-making algorithms to disseminate and order the content displayed to users; stresses that these algorithms are a ‘black box’ for users; calls on the Commission to address the ways in which content moderation algorithms are optimised towards engagement of their users; also calls on the Commission to propose recommendations to increase user control over the content they see, and to ask AI applications and internet platforms to give users the possibility to choose to have content displayed in a neutral order, in order to give them more control over the way content is ranked for them, including options for ranking outside their ordinary content consumption habits and for opting out completely from any content curation;

18.   Notes the potential negative impact of personalised advertising, in particular micro-targeted and behavioural advertising, and of assessment of individuals, especially minors, without their consent, by interfering in the private life of individuals, raising questions as to the collection and use of the data used to personalise advertising, and offering products or services or setting prices; calls, therefore, on the Commission to introduce strict limitations on targeted advertising based on the collection of personal data, starting by introducing a prohibition on cross-platform behavioural advertising, without harming SMEs; recalls that currently the ePrivacy Directive only allows targeted advertising subject to opt-in consent, otherwise making it illegal; calls on the Commission to prohibit the use of discriminatory practices for the provision of services or products;

19.   Underlines that what is illegal offline shall be illegal online; notes that AI tools have the potential and are already used to fight illegal content online, but strongly recalls ahead of the Digital Services Act expected for the end of this year that such tools must always respect fundamental rights, especially freedom of expression and information, and should not lead to a general monitoring obligation for the internet, or to the removal of legal material disseminated for educational, journalistic, artistic or research purposes; stresses that algorithms should be used only as a flagging mechanism in content moderation, subject to human intervention, as AI is unable to reliably distinguish between legal, illegal and harmful content; notes that terms and conditions should always include community guidelines as well as an appeal procedure;

20.   Notes the benefits and risks of AI in terms of cybersecurity and its potential in combating cybercrime, and emphasises the need for any AI solutions to be resilient to cyberattacks while respecting EU fundamental rights, especially the protection of personal data and privacy; stresses the importance of monitoring the safe use of AI and the need for close collaboration between the public and private sectors to counter user vulnerabilities and the dangers arising in this connection; calls on the Commission to evaluate the need for better prevention in terms of cybersecurity and mitigation measures thereof;

21.   Stresses that next-generation digital infrastructure and internet coverage are of strategic significance for providing AI-powered education to European citizens; in light of the COVID-19 crisis, calls on the Commission to elaborate a European 5G strategy that ensures Europe’s strategic resilience and is not dependent on technology from states that do not share our values;

22.   Calls on the Commission and the Member States to support the use of AI in the area of digitalised cultural heritage.

INFORMATION ON ADOPTION IN COMMITTEE ASKED FOR OPINION

Date adopted: 16.7.2020

Result of final vote: +: 59   –: 7   0: 1

Members present for the final vote

Magdalena Adamowicz, Konstantinos Arvanitis, Katarina Barley, Pietro Bartolo, Nicolas Bay, Vladimír Bilčík, Vasile Blaga, Ioan-Rareş Bogdan, Saskia Bricmont, Joachim Stanisław Brudziński, Jorge Buxadé Villalba, Damien Carême, Caterina Chinnici, Clare Daly, Marcel de Graaff, Lena Düpont, Laura Ferrara, Nicolaus Fest, Jean-Paul Garraud, Sylvie Guillaume, Andrzej Halicki, Balázs Hidvéghi, Evin Incir, Sophia in ‘t Veld, Patryk Jaki, Lívia Járóka, Fabienne Keller, Peter Kofod, Moritz Körner, Juan Fernando López Aguilar, Nuno Melo, Roberta Metsola, Nadine Morano, Javier Moreno Sánchez, Maite Pagazaurtundúa, Nicola Procaccini, Emil Radev, Paulo Rangel, Terry Reintke, Diana Riba i Giner, Ralf Seekatz, Michal Šimečka, Martin Sonneborn, Sylwia Spurek, Tineke Strik, Ramona Strugariu, Annalisa Tardino, Tomas Tobé, Milan Uhrík, Tom Vandendriessche, Bettina Vollath, Jadwiga Wiśniewska, Elena Yoncheva, Javier Zarzalejos

Substitutes present for the final vote

Abir Al-Sahlani, Bartosz Arłukowicz, Malin Björk, Delara Burkhardt, Gwendoline Delbos-Corfield, Nathalie Loiseau, Erik Marquardt, Sira Rego, Domènec Ruiz Devesa, Paul Tang, Hilde Vautmans, Tomáš Zdechovský

Substitutes under Rule 209(7) present for the final vote

Sven Mikser

FINAL VOTE BY ROLL CALL IN COMMITTEE ASKED FOR OPINION

59

+

PPE

Magdalena Adamowicz, Bartosz Arłukowicz, Vladimír Bilčík, Vasile Blaga, Ioan‑Rareş Bogdan, Lena Düpont, Andrzej Halicki, Balázs Hidvéghi, Lívia Járóka, Nuno Melo, Roberta Metsola, Nadine Morano, Emil Radev, Paulo Rangel, Ralf Seekatz, Tomas Tobé, Tomáš Zdechovský

S&D

Katarina Barley, Pietro Bartolo, Delara Burkhardt, Caterina Chinnici, Sylvie Guillaume, Evin Incir, Juan Fernando López Aguilar, Sven Mikser, Javier Moreno Sánchez, Domènec Ruiz Devesa, Sylwia Spurek, Paul Tang, Bettina Vollath, Elena Yoncheva

Renew

Abir Al‑Sahlani, Sophia in 't Veld, Fabienne Keller, Moritz Körner, Nathalie Loiseau, Maite Pagazaurtundúa, Michal Šimečka, Ramona Strugariu, Hilde Vautmans

Verts/ALE

Saskia Bricmont, Damien Carême, Gwendoline Delbos‑Corfield, Erik Marquardt, Terry Reintke, Diana Riba i Giner, Tineke Strik

ECR

Joachim Stanisław Brudziński, Jorge Buxadé Villalba, Patryk Jaki, Nicola Procaccini, Jadwiga Wiśniewska

GUE/NGL

Konstantinos Arvanitis, Malin Björk, Clare Daly, Sira Rego

NI

Laura Ferrara, Martin Sonneborn, Milan Uhrík

7

-

PPE

Javier Zarzalejos

ID

Nicolas Bay, Nicolaus Fest, Jean‑Paul Garraud, Marcel de Graaff, Peter Kofod, Tom Vandendriessche

1

0

ID

Annalisa Tardino

Key to symbols:

+   :   in favour

-   :   against

0   :   abstention

OPINION OF THE COMMITTEE ON THE INTERNAL MARKET AND CONSUMER PROTECTION (6.7.2020)

Rapporteur for opinion: Kim Van Sparrentak

The Committee on the Internal Market and Consumer Protection calls on the Committee on Culture and Education, as the committee responsible, to incorporate the following suggestions into its motion for a resolution:

A.   whereas artificial intelligence (AI) has the potential to offer solutions for day-to-day challenges of the education sector, such as personalising learning, monitoring learning difficulties, automating subject-specific content and knowledge, providing better professional training and supporting the transition to a digital society;

B.   whereas AI could have practical applications in terms of reducing the administrative work of educators and educational institutions, freeing up time for their core teaching and learning activities;

C.   whereas the application of AI in education raises concerns around the ethical use of data, learners’ rights, data access and protection of personal data, and therefore entails risks to fundamental rights such as the creation of stereotyped models of learners’ profiles and behaviour that could lead to discrimination or risks of doing harm by the scaling-up of bad pedagogical practices;

D.   whereas AI applications are omnipresent in the audiovisual sector, in particular on audiovisual content platforms;

1.   Notes that the Commission has proposed to support public procurement in intelligent digital services, in order to encourage public authorities to rapidly deploy products and services that rely on AI in areas of public interest and the public sector; highlights the importance of public investment in these services and the complementary added value provided by public-private partnerships in order to secure this objective and deploy the full potential of AI in the education, culture and audiovisual sectors; emphasises that in the education sector, the development and deployment of AI should involve all those participating in the educational process and wider society and take into account their needs and the expected benefits, especially for the most vulnerable and disadvantaged, in order to ensure that AI is used purposefully and ethically and delivers real improvements for those concerned; considers that products and services developed with public funding should be published under open-source licences with full respect for the applicable legislation, including Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market; stresses the importance of this deployment for reskilling and upskilling the European labour market, and particularly in the culture and audiovisual sectors, which will be severely impacted by the COVID-19 crisis;

2.   Recognises that children are an especially vulnerable group in terms of influencing their behaviour; stresses that while AI can be a tool that can benefit their education, it is necessary to take into account the technological, regulatory and social aspects of the introduction of AI in education, with adequate safeguards and a human-centric approach that ensures that human beings are, ultimately, always able to control and correct the system’s decisions; points to the need for a review and updating of the relevant sectoral rules; underlines in this regard that the legal framework governing AI in the education sector should, in particular, provide for legally binding measures and standards to prevent practices that would undermine fundamental rights and freedoms, and ensure the development of trustworthy, ethical and technically robust AI applications, including integrated digital tools, services and products such as robotics and machine learning;

3.   Notes the potential of AI-based products in education, especially in making high-quality education available to all pupils in the EU; stresses the need for governments and educational institutions to rethink and rework educational programmes with a stronger emphasis on STEAM subjects, in order to prepare learners and consumers for the increasing presence of AI and to facilitate the acquisition of cognitive skills; underlines the need to improve the digital skills of those participating in the educational process and wider society, while having regard to the objectives of ‘A Europe fit for the digital age’;

4.   Underlines that algorithmic systems can be an enabler for reducing the digital divide in an accelerated way, but unequal deployment risks creating new divides or accelerating the deepening of the existing ones; expresses its concern that knowledge and infrastructure are not developed in a consistent way across the EU, which limits the accessibility of products and services that rely on AI, in particular in sparsely populated and socio-economically vulnerable areas; calls on the Commission to ensure cohesion in the sharing of the benefits of AI and related technologies;

5.   Calls on the Commission to consider education as a sector where significant risks can be expected to occur from certain uses of AI applications, which may potentially undermine fundamental rights and result in high costs in both human and social terms, and to take this consideration into account when assessing what types or uses of AI applications would be covered by a regulatory framework for high-risk AI applications, given the importance of ensuring that education continues to contribute to the public good and given the high sensitivity of data on pupils, students and other learners; calls on the Commission to include certain AI applications in the education sector, such as those that are subject to certification schemes or include sensitive personal data, in the regulatory framework for high-risk AI applications; underlines that data sets used to train AI and the outputs should be reviewed in order to avoid all forms of stereotypes, discrimination and biases, and where appropriate, make use of AI to identify and correct human biases where they might exist; points out, accordingly, that appropriate conformity assessments are needed in order to verify and ensure that all the provisions concerning high-risk applications are complied with, including testing, inspection and certification requirements; stresses the importance of securing the integrity and the quality of the data;

6.   Welcomes the efforts of the Commission to include digital skills as part of the qualification requirements for certain professions harmonised at EU level under the Professional Qualifications Directive; highlights the need to ensure mutual recognition of professional qualifications in AI skills across the EU, as several Member States are upgrading their educational offer with AI-related skills and putting in place specific curricula for AI developers; stresses the need for these to be in line with the assessment list of the Ethical Guidelines for Trustworthy AI, and welcomes the Commission’s proposal to transform this list into an indicative curriculum for AI developers; underlines the importance of training highly skilled professionals in this area, including ethical aspects in their curricula, and supporting underrepresented groups in the field, as well as creating incentives for those professionals to seek work within the EU;

7.   Takes note that schools and other public education providers are increasingly using educational technology services, including AI applications; expresses its concern that these technologies are currently provided by just a few technology companies; stresses that this may lead to unequal access to data and limit competition by market dominance and restricting consumer choice; encourages public authorities to take an innovative approach towards public procurement, so as to broaden the range of offers that are made to public education providers across Europe; stresses in this regard the importance of supporting the uptake of AI by SMEs in the education, culture and audiovisual sector through the appropriate incentives that create a level playing field; calls, in this context, for investment in European IT companies in order to develop the necessary technologies within the EU; considers that technologies used by public education providers or purchased with public money should be based on open-source technology where possible, while having full respect for the applicable legislation, including Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market;

8.   Calls for the data used by AI applications in the education sector to be accessible, interoperable and of high quality, and to be shared with the relevant public authorities in a standardised way and with respect for copyright and trade secrets legislation, so that the data can be used, in accordance with the European data protection and privacy rules and ethical, democratic and transparency standards, in the development of curricula and pedagogical practices (in particular when these services are purchased with public money or offered to public education providers for free, considering that education is a common good); calls on the Commission to ensure fair access to data for all companies, and in particular SMEs and cultural and creative companies, which play an essential role in sustaining social cohesion and cultural diversity in Europe, as well as democratic values;

9.   Stresses the importance of developing guidelines for the public procurement of such services and applications for the public sector, including for education providers, in order to ensure the relevant educational objectives, consumer choice, a level and fair playing field for AI solution providers and respect for fundamental rights; stresses the need for public buyers to take into account specific criteria linked to the relevant educational objectives, such as non-discrimination, fundamental rights, diversity, the highest standards of privacy and data protection, accessibility for learners with special needs, environmental sustainability and, specifically when purchasing services for public education providers, the involvement of all those participating in the educational process; stresses the need to strengthen the market by providing SMEs with the opportunity to participate in the procurement of AI applications in order to ensure the involvement of technology companies of all sizes in the sector and thus guarantee resilience and competition;

10.   Underlines the unreliability of the current automated means of removing illegal content from online platforms on which audiovisual content is shared, which may lead to inadvertent removal of legitimate content; notes that neither the E-Commerce Directive nor the revised Audiovisual Media Services Directive on video sharing platforms imposes a general monitoring obligation; recalls, to that end, that there should be no general monitoring, as stipulated in Article 15 of the E-Commerce Directive, and that specific content monitoring for audiovisual services should be in accordance with the exceptions laid down in the European legislation; recalls the key requirements for AI applications, such as accountability, including review structures within business processes, and reporting of negative impacts; emphasises that transparency should also include traceability and explainability of the relevant systems; recalls that AI applications must adhere to internal and external safety protocols, which should be technically accurate and robust in nature; considers that this should extend to operation in normal, unknown and unpredictable situations alike;

11.   Calls for recommendation algorithms and personalised marketing on audiovisual platforms, including video streaming platforms, news platforms and platforms disseminating cultural and creative content, to be explainable, to the extent technically possible, in order to give consumers an accurate and comprehensive insight into these processes and content and ensure that personalised services are not discriminatory and are in line with the recently adopted Platform to Business Regulation and New Deal for Consumers Omnibus Directive; stresses the need to guarantee and properly implement the right of users to opt out from recommended and personalised services; points out in this regard that a description should be provided to users that allows for a general and adequate understanding of the functions concerned, notably on the data used, the purpose of the algorithm, and personalisation and its outcomes, following the principles of explainability and fairness; calls for the development of mechanisms providing monitoring of the consumer’s rights of informed consent and freedom of choice when submitting data;

12.   Notes that the deployment of AI in customs screening procedures may support efforts to prevent the illicit trafficking of cultural heritage, in particular to supplement systems which allow customs authorities to target their efforts and resources on those items presenting the highest risk;

13.   Underlines that consumers must be informed when they are interacting with an automated decision process and that their choices and performance must not be limited; stresses that the use of AI mechanisms for commercial surveillance of consumers must be countered, even if it concerns ‘free services’, by ensuring that it is strictly in line with fundamental rights and the GDPR; stresses that all regulatory changes must take into consideration the impact on vulnerable consumers;

14.   Points out that the deployment, development and implementation of AI must make it easier for consumers and learners with some form of disability to use tools to access audiovisual content;

15.   Underlines the need for upskilling of the future workforce; recognises the benefits of forecasting which jobs will be disrupted by digital technology such as automation, digitalisation and AI;

16.   Points out that the AI systems that are developed, implemented and used in the European Union, in any of the three sectors referred to in this report, must reflect the EU’s cultural diversity and multilingualism.

Date adopted: 29.6.2020

Result of final vote: +: 34   –: 3   0: 3

Members present for the final vote

Andrus Ansip, Pablo Arias Echeverría, Alessandra Basso, Brando Benifei, Adam Bielan, Biljana Borzan, Dita Charanzová, Deirdre Clune, David Cormand, Petra De Sutter, Carlo Fidanza, Alexandra Geese, Sandro Gozi, Maria Grapini, Svenja Hahn, Eugen Jurzyca, Arba Kokalari, Marcel Kolaja, Andrey Kovatchev, Maria-Manuel Leitão-Marques, Morten Løkkegaard, Adriana Maldonado López, Antonius Manders, Beata Mazurek, Leszek Miller, Dan-Ștefan Motreanu, Anne-Sophie Pelletier, Christel Schaldemose, Andreas Schwab, Ivan Štefanec, Kim Van Sparrentak, Marion Walsmann

Substitutes present for the final vote

Marc Angel, Pascal Arimont, Marco Campomenosi, Maria da Graça Carvalho, Salvatore De Meo, Karen Melchior, Tsvetelina Penkova, Antonio Maria Rinaldi

34

+

EPP

Pascal Arimont, Deirdre Clune, Arba Kokalari, Antonius Manders, Dan‑Ștefan Motreanu, Marion Walsmann

S&D

Marc Angel, Brando Benifei, Biljana Borzan, Maria Grapini, Maria‑Manuel Leitão‑Marques, Adriana Maldonado López, Leszek Miller, Tsvetelina Penkova, Christel Schaldemose

RENEW

Andrus Ansip, Dita Charanzová, Sandro Gozi, Svenja Hahn, Morten Løkkegaard, Karen Melchior

ID

Alessandra Basso, Marco Campomenosi, Antonio Maria Rinaldi

GREENS/EFA

David Cormand, Petra De Sutter, Alexandra Geese, Marcel Kolaja, Kim Van Sparrentak

ECR

Adam Bielan, Carlo Fidanza, Eugen Jurzyca, Beata Mazurek

EUL/NGL

Anne‑Sophie Pelletier

3

-

EPP

Pablo Arias Echeverría, Salvatore De Meo, Andreas Schwab

3

0

EPP

Maria da Graça Carvalho, Andrey Kovatchev, Ivan Štefanec

OPINION OF THE COMMITTEE ON LEGAL AFFAIRS (22.9.2020)

Rapporteur for opinion: Angel Dzhambazki

The Committee on Legal Affairs calls on the Committee on Culture and Education, as the committee responsible, to incorporate the following suggestions into its motion for a resolution:

1.   Underlines the strategic importance of using artificial intelligence (AI) and related technologies, and stresses that the European approach in this regard must be human-centred so that AI genuinely becomes an instrument in the service of people and the common good, contributing to the general interest of citizens, including in the audiovisual, cultural and educational sectors; stresses that AI can support content creation in the education, culture and audiovisual sectors, alongside information and educational platforms, including listings of different kinds of cultural objects and a multitude of data sources; notes the risks of infringement of intellectual property rights (IPRs) when blending AI and different technologies with a multiplicity of sources (documents, photos, films) to improve the way those data are displayed, researched and visualised; calls for the use of AI to ensure a high level of IPR protection within the current legislative framework, for example by alerting individuals and businesses if they are in danger of inadvertently infringing the rules or assisting IPR rightholders if the rules are actually infringed; emphasises, therefore, the importance of having an appropriate European legal framework for the protection of IPRs in connection with the use of AI;

2.   Highlights that the consistent integration of AI in the education sector has the potential to meet some of the biggest challenges of education, to come up with innovative teaching and learning practices, and finally, to accelerate progress towards achieving the Sustainable Development Goals in order to meet the targets of the 2030 Agenda for Education;

3.   Reiterates the importance of access to culture for every citizen throughout the Union; highlights in this context the importance of the exchange of best practices among Member States, educational facilities and cultural institutions and similar stakeholders; further considers it of vital importance that the resources available at both EU and national level are used to the maximum of their potential in order to further improve access to culture; stresses that there are a multitude of options to access culture and that all varieties should be explored in order to determine the most appropriate option; highlights the importance of consistency with the Marrakech Treaty;

4.   Calls on the Commission to realise the full potential of artificial intelligence (AI) for the purposes of improving communication with citizens, through cultural and audiovisual online platforms, for example by keeping citizens informed of what is happening at decision-making level, narrowing the gap between the EU and the grassroots, and promoting social cohesion between EU citizens;

5.   Highlights that education, culture and the audiovisual sector are sensitive areas for the use of AI and related technologies since they have the potential to impact our societies and the fundamental rights they uphold; contends, therefore, that legally binding ethical principles should be observed in their deployment, development and use;

6.   Notes how artificial intelligence and related technologies may be used in developing or applying new methods of education in areas including language learning, academia in general and specialised learning; highlights the importance not only of using such technologies for educational purposes but also of digital literacy and public awareness of these technologies; stresses the importance of providing educators, trainers and others with the right tools and know-how with regard to AI and related technologies in terms of what they are, how they are used and how to use them properly and in accordance with the law, so as to avoid IPR infringements; highlights in particular the importance of digital literacy for staff working in education, as well as of improving digital training for the elderly, considering that younger generations already have a basic notion of these technologies, having grown up with them;

7.   Emphasises that European artificial intelligence should safeguard and promote core values of our Union such as democracy, independent and free media and information sources, quality education, environmental sustainability, gender balance and cultural and linguistic diversity;

8.   Calls on the Commission, the Member States and the business community to actively and fully exploit the potential of AI in providing the facts and combating fake news, disinformation, xenophobia and racism on cultural and audiovisual online platforms, while at the same time avoiding censorship;

9.   Notes that the independence of the creative process raises issues related to ownership of IPRs; considers, in this connection, that it would not be appropriate to seek to impart legal personality to AI technologies;

10.   Notes that AI can play an important role in promoting and protecting our European and national cultural diversity, especially when used by audiovisual online platforms in promoting content to customers;

11.   Notes that AI could benefit the research sector, for example through the role that predictive analytics can play in fine-tuning data analysis, for example on the acquisition and movement of cultural objects; stresses that the EU must step up investment and foster partnerships between industry and academia in order to enhance research excellence at European level;

12.   Notes the important role which independent media play in culture and the daily life of citizens; stresses that fake media represent a fundamental problem, as copyright and IPRs generally are being constantly infringed; calls on the Commission, in cooperation with the Member States, to continue its work on raising awareness of this problem, countering the effects of fake media and addressing their underlying sources; considers it important, furthermore, to develop educational strategies to improve digital literacy specifically in this regard;

13.   Notes that AI-based software, such as image recognition software, could vastly enhance the ability of educational facilities and teachers to provide and develop modern, innovative and high-quality schooling methods, improve the digital literacy and e-skills of the entire population, and enable education to be more accessible; considers that such schooling methods should nevertheless be assessed as to their reliability and accuracy and should ensure fairness in education, non-discrimination, and the safety of children and minors both within educational facilities and when connected remotely within an educational context; highlights the importance of privacy and data protection legislation in order to ensure adequate protection of personal data, in particular children’s data, through transparent and reliable data sources respectful of IPRs; considers it vital that these technologies are only integrated into the existing systems if the protection of fundamental rights and privacy is an absolute given; stresses, however, that recognition software must be used only for educational purposes and not under any circumstances to monitor access to establishments; highlights in this regard the dependence on external data and a few market-dominating software providers; recalls that technologies procured with public money should be developed as open source software to enable the sharing and reuse of resources, making them available throughout the EU and thus increasing benefits and reducing public spending, while ensuring full respect for the applicable legislation, including Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market [20] ;

14.   Notes that if the use of AI is to benefit the education and research sector, the EU must encourage training in the skills of the future, and in particular an ethical and responsible approach to AI technologies; adds, with that aim in view, that this training must not be reserved for pupils focusing on scientific and technical subjects, who are already more familiar with these tools, but must instead target as many people as possible, in particular in the younger generations;

15.   Stresses that the need for investment in research and innovation regarding the use and development of AI and its cultural, educational and audiovisual applications is a key consideration in this respect; calls on the Commission to find additional funding to promote research and innovation regarding AI applications in these sectors;

16.   Expresses serious concern that schools and other education providers are becoming increasingly dependent on educational technology services, including AI applications, provided by companies with a dominant market position, most of which are based outside the EU;

17.   Underlines the need to ensure EU-wide digital and AI literacy, especially through the development of training opportunities for teachers; insists that the use of AI technologies in schools should contribute to narrowing the social and regional digital gap;

18.   Highlights that the COVID-19 pandemic can be considered a trial period for the development and use of digital and AI-related technologies in the educational and cultural sectors, as exemplified by the many online schooling platforms and online tools for cultural promotion employed across the Member States; calls on the Commission, therefore, to take stock of those examples when considering a common EU approach to the increased use of such technological solutions;

19.   Points out that data protection and privacy can be particularly seriously affected by AI; advocates compliance with the principles laid down in the General Data Protection Regulation (GDPR);

20.   Calls on the Commission to take more effective steps to protect the personal data of pupils and teachers in the education sphere;

21.   Emphasises that the interaction between AI and the creative industries is complex and requires an in‑depth assessment; welcomes the ongoing study ‘Trends and Developments in Artificial Intelligence - Challenges to the IPRS Framework’ and the study on ‘Copyright and new technologies: copyright data management and Artificial Intelligence’; underlines the importance of clarifying the conditions of use of copyright‑protected content as data input (images, music, films, databases, etc) and in the production of cultural and audiovisual outputs, whether created by humans with the assistance of AI or autonomously generated by AI technologies; invites the Commission to study the impact of AI on the European creative industries; reiterates the importance of European data and welcomes the statements made by the Commission in this regard, as well as the placing of artificial intelligence and related technologies high on the agenda;

22.   Emphasises the role of an author’s personality for the expression of free and creative choices that constitute the originality of works [21] ; underlines the importance of limitations and exceptions to copyright when using content as data input, notably in education, academia and research, and in the production of cultural and audiovisual outputs, including user-generated content;

23.   Emphasises that the interaction between AI and the creative industries is complex and requires an in-depth assessment; takes the view that consideration should be given to protecting AI-generated technical and artistic creations, in order to encourage this form of creativity;

24.   Stresses that in the data economy context, better copyright data management is achievable, for the purpose of better remunerating authors and performers, notably in enabling the swift identification of the authorship and right ownership of content, thus contributing to lowering the number of orphan works; further highlights that AI technological solutions should be used to improve copyright data infrastructure and the interconnection of metadata in works, but also to facilitate the transparency obligation provided in Article 19 of the Directive on Copyright and related rights in the Digital Single Market for up‑to‑date, relevant and comprehensive information on the exploitation of authors’ and performers’ works and performances, particularly in the presence of a plurality of rightholders and of complex licensing schemes;

25.   Stresses the need to work on the most efficient way of reducing bias in AI systems, in line with ethical and non-discrimination standards; underlines that data sets used to train AI should be as broad as possible in order to represent society in the best relevant way, that the outputs should be reviewed to avoid all forms of stereotypes, discrimination and biases, and, when appropriate, AI should be made use of to identify and correct human biases where they could exist; calls on the Commission to encourage and facilitate the sharing of de-biasing strategies for data;

26.   Asks the Commission to assess the impact of AI and AI-related technologies on the audiovisual and creative sector, in particular with regard to authorship and related questions;

27.   Calls for the intellectual property action plan announced by the Commission to address the question of AI and its impact on the creative sectors, taking account of the need to strike a balance between protecting IPRs and encouraging creativity in the areas of education, culture and research; considers that the EU can be a leader in the creation of AI technologies if it adopts an operational regulatory framework and implements proactive public policies, particularly as regards training programmes and financial support for research; asks the Commission to assess the impact of IPRs on the research and development of AI and related technologies, as well as on the audiovisual and creative sectors, in particular with regard to authorship, fair remuneration of authors and related questions;

28.   Highlights the future role that the inclusion of AI-based technological tools should have in terms of conservation, disclosure and heritage control, as also in the associated research projects;

29.   Stresses the need to strike a balance between, on the one hand, the development of AI systems and their use in the educational, cultural and audiovisual sectors and, on the other, measures to safeguard competition and market competitiveness for AI companies in these sectors; emphasises in this regard the need to encourage companies to invest in the innovation of AI systems used in these sectors, while at the same time ensuring that those providing such applications do not obtain a market monopoly;

30.   Stresses that in no scenario could the use of AI and related technologies become a reality without human oversight; reiterates the importance of fundamental rights and the overarching supremacy of data and privacy protection legislation, which is imperative when dealing with such technologies;

31.   Asks the Commission to assess the impact of AI and AI-related technologies in creating new audiovisual works such as deep fakes, and to establish appropriate legal consequences to be attached to their creation, production or distribution for malicious purposes;

32.   Notes that automation and the development of AI could pose a threat to employment, and emphasises once again that priority must be given to safeguarding jobs, in particular in the education, culture and creative sectors;

33.   Calls on the Commission to launch an EU-level education plan on digital and AI literacy, in coordination with the Member States, with a particular focus on school students and youth;

34.   Calls on the Commission to consider the legal aspects of the outputs produced using AI technologies, as well as cultural content generated with the use of AI and related technologies; considers it important to support the production of cultural content; reiterates, however, the importance of safeguarding the Union’s unique IPR framework and that any changes should be made with the necessary due care, in order not to disrupt the delicate balance; calls on the Commission to produce an in-depth assessment with regard to the possible legal personality of AI-produced content, as well as the application of IPRs to AI-generated content and to content created with the use of AI tools;

35.   Calls on the Commission to establish requirements for the procurement and deployment of artificial intelligence and related technologies by EU public sector bodies, to ensure compliance with Union law and fundamental rights; highlights the added value of instruments such as public consultations and impact assessments, to be run prior to the procurement or deployment of artificial intelligence systems, as recommended in the report of the Special Rapporteur to the UN General Assembly on AI and its impact on freedom of opinion and expression [22] ;

36.   Calls on the Commission to lay down rules designed to guarantee effective data interoperability, in order to make content purchased on a platform accessible via any digital tool irrespective of brand;

37.   Emphasises that the challenges brought by the use of artificial intelligence and related technologies can only be overcome by establishing data quality obligations and transparency and oversight requirements, in order to enable the public and authorities to assess compliance with Union law and fundamental rights; awaits the Commission’s proposals following its communication on a European strategy for data [23] as regards the sharing and pooling of datasets;

38.   Calls on the Commission, in addition, to consider developing, in very close cooperation with Member States and the relevant stakeholders, verification mechanisms or systems for publishers, authors, creators, etc, in order to assist them in verifying what content they may use and to more easily determine what is protected under IPR legislation.

Date adopted

10.9.2020

Result of final vote

+: 22
–: 2
0: 1

Members present for the final vote

Manon Aubry, Gunnar Beck, Geoffroy Didier, Angel Dzhambazki, Ibán García Del Blanco, Jean-Paul Garraud, Esteban González Pons, Mislav Kolakušić, Gilles Lebreton, Karen Melchior, Jiří Pospíšil, Franco Roberti, Marcos Ros Sempere, Liesje Schreinemacher, Stéphane Séjourné, Raffaele Stancanelli, Marie Toussaint, Adrián Vázquez Lázara, Axel Voss, Marion Walsmann, Tiemo Wölken, Lara Wolters, Javier Zarzalejos

Substitutes present for the final vote

Heidi Hautala, Emil Radev

22 +

EPP: Geoffroy Didier, Esteban González Pons, Jiří Pospíšil, Emil Radev, Axel Voss, Marion Walsmann, Javier Zarzalejos
S&D: Ibán García Del Blanco, Franco Roberti, Marcos Ros Sempere, Tiemo Wölken, Lara Wolters
RENEW: Karen Melchior, Liesje Schreinemacher, Stéphane Séjourné, Adrián Vázquez Lázara
ID: Gunnar Beck, Jean‑Paul Garraud, Gilles Lebreton
ECR: Angel Dzhambazki, Raffaele Stancanelli
NI: Mislav Kolakušić

2 -

VERTS/ALE: Heidi Hautala, Marie Toussaint

1 0

GUE/NGL: Manon Aubry

OPINION OF THE COMMITTEE ON WOMEN'S RIGHTS AND GENDER EQUALITY (14.9.2020)

Rapporteur for opinion: Maria da Graça Carvalho

The Committee on Women’s Rights and Gender Equality calls on the Committee on Culture and Education, as the committee responsible, to incorporate the following suggestions into its motion for a resolution:

A.   whereas gender equality is a core principle of the European Union enshrined in the Treaties, and should be reflected in all EU policies, including in education, culture, and the audiovisual sector, as well as in the development of technologies such as Artificial Intelligence (AI), these being key channels for changing attitudes and challenging stereotypes and gender biases in existing social norms; whereas the development of digitalisation and technologies like AI are fundamentally transforming our reality and their regulation today will highly influence our future societies; whereas there is a need to advocate for a human-centred approach anchored in human rights and ethics for the development and use of AI;

B.   whereas Article 21 of the EU Charter of Fundamental Rights prohibits discrimination on a wide range of grounds and should be a guiding principle; whereas multiple forms of discrimination should not be reproduced in the design, input, development and use of AI systems based on gender-biased algorithms, or in the social contexts in which such algorithms are used;

C.   whereas past experiences, especially in technical fields, have shown us that developments and innovations are often based mainly on male data and that women’s needs are not fully reflected; whereas addressing these biases requires greater vigilance, technical solutions and the development of clear requirements of fairness, accountability and transparency;

D.   whereas incomplete and inaccurate data sets, the lack of gender-disaggregated data and incorrect algorithms can distort the processing of an AI system and jeopardise the achievement of gender equality in society; whereas data on disadvantaged groups and intersectional forms of discrimination tend to be incomplete and even absent;

E.   whereas gender inequalities, stereotypes and discrimination can also be created and replicated through the language and images disseminated by the media and AI-powered applications; whereas education, cultural programmes and audiovisual content have considerable influence in shaping people’s beliefs and values and are a fundamental tool for combatting gender stereotypes, decreasing the digital gender gap, and establishing strong role models; whereas an ethical and regulatory framework must be in place ahead of implementing automatised solutions for these key areas in society;

F.   whereas science and innovation can bring life-changing benefits, especially for those who are furthest behind, such as women and girls living in remote areas; whereas scientific education is important for obtaining skills, decent work, and jobs of the future, as well as for breaking with gender stereotypes that regard these as stereotypically masculine fields; whereas science and scientific thinking are key to democratic culture, which in turn is fundamental for advancing gender equality;

G.   whereas women are significantly under-represented in the AI sector, whether as creators, developers or consumers; whereas the full potential of women’s skills, knowledge and qualifications in the digital and AI fields as well as that of information, communication and technology (ICT), along with their reskilling, can contribute to boosting the European economy; whereas globally only 22 % of AI professionals are female; whereas the lack of women in AI development not only increases the risk of bias, but also deprives the EU of diversity, talent, vision and resources, and is therefore an obstacle to innovation; whereas gender diversity enhances female attitudes in teams and team performance and favours the potential for innovation in both public and private sectors;

H.   whereas in the EU one woman in ten has already suffered some form of cyberviolence since the age of 15 and cyberharassment remains a concern in the development of AI, including in education; whereas cyberviolence is often directed at women in public life, such as activists, women politicians and other public figures; whereas AI and other emerging technologies can play an important role in preventing cyberviolence against women and girls and educating people;

I.   whereas the EU is facing an unparalleled shortage of women in Science, Technology, Engineering and Mathematics (STEM) careers and education, given that women account for 52 % of the European population, yet only for one in three of STEM graduates;

J.   whereas despite the positive trend in the involvement and interest of women in STEM education, the percentages remain insufficient, especially considering the importance of STEM-related careers in an increasingly digitalised world;

1.   Considers that AI has great potential to promote gender equality provided that existing conscious and unconscious biases are eliminated; stresses the need for further regulatory efforts to ensure that AI respects the principles and values of gender equality and non-discrimination as enshrined in Article 21 of the Charter of Fundamental Rights; stresses, further, the importance of accountability, of a differentiated and transparent risk-based approach, and of continuous monitoring of existing and new algorithms and of their results;

2.   Stresses the need for media organisations to be informed about the main parameters of algorithm-based AI systems that determine ranking and search results on third-party platforms, and for users to be informed about the use of AI in decision-making services and empowered to set their privacy parameters via transparent and understandable measures;

3.   Recalls that algorithms and AI should be ‘ethical by design’, with no built-in bias, in a way that guarantees maximum protection of fundamental rights;

4.   Calls for policies targeted at increasing the participation of women in the fields related to STEM, AI and the research and innovation sector, and for the adoption of a multi-level approach to address the gender gap at all levels of education, with particular emphasis on primary education, as well as employment in the digital sector, highlighting the importance of upskilling and reskilling;

5.   Recognises that gender stereotyping, cultural discouragement and the lack of awareness and promotion of female role models hinder and negatively affect girls’ and women’s opportunities in ICT, STEM and AI and lead to discrimination and fewer opportunities for women in the labour market; stresses the importance of increasing the number of women in these sectors, which will contribute to women’s participation and economic empowerment, as well as to reducing the risks associated with the creation of so-called ‘biased algorithms’;

6.   Encourages the Commission and the Member States to purchase educational, cultural and audiovisual services from providers that apply gender balance in their workplace, promote public procurement policies and guidelines that stimulate companies to hire more women for STEM jobs, and facilitate the distribution of funds to companies in the educational, cultural and audiovisual sectors that take account of gender balance criteria;

7.   Emphasises the cross-sectoral nature of gender-based discrimination rooted in conscious or unconscious gender bias and manifested in the education sector, the portrayal of women in the media and advertising on-screen and off-screen, and the responsibility of both public and private sectors in terms of proactively recruiting, developing and retaining female talent and instilling an inclusive business culture;

8.   Calls on the Commission and the Member States to take into account ethical aspects, including from a gender perspective, when developing AI policy and legislation, and, if necessary, to adapt the current legislation, also including EU programmes and ethical guidelines for the use of AI;

9.   Encourages the Member States to enact a strategy to promote women’s participation in STEM, ICT and AI-related studies and careers in relevant existing national strategies to achieve gender equality, defining a target for the participation of women researchers in STEM and AI projects; urges the Commission to address the gender gap in STEM, ICT and AI-related careers and education, and to set this as a priority of the Digital Skills Package in order to promote the presence of women at all levels of education, as well as in the upskilling and reskilling of the labour force;

10.   Recognises that producers of AI solutions must make a greater effort to test products thoroughly in order to anticipate potential errors impacting vulnerable groups; calls for work to be stepped up on a tool to teach algorithms to recognise disturbing human behaviour, which would identify those elements that most frequently contribute to discriminatory mechanisms in the automated decision-making processes of algorithms;

11.   Underlines the importance of ensuring that the interests of women experiencing multiple forms of discrimination and who belong to marginalised and vulnerable groups are adequately taken into account and represented in any future regulatory framework; notes with concern that marginalised groups risk suffering new technological, economic and social divides with the development of AI;

12.   Calls for specific measures and legislation to combat cyberviolence; stresses that the Commission and the Member States should provide appropriate funding for the development of AI solutions that prevent and fight cyberviolence and online sexual harassment and exploitation directed against women and girls and help educate young people; calls for the development and implementation of effective measures tackling old and new forms of online harassment for victims in the workplace;

13.   Notes that for the purpose of analysing the impacts of algorithmic systems on citizens, access to data should be extended to appropriate parties, notably independent researchers, media and civil society organisations, while fully respecting Union data protection and privacy law; points out that users must always be informed when an algorithm has been used to make a decision concerning them, particularly where the decision relates to access to benefits or to a product;

14.   Calls on the Commission and the Member States to devise measures that fully incorporate the gender dimension, such as awareness-raising campaigns, training and curricula, which should provide information to citizens on how algorithms operate and their impact on their daily lives; further calls on them to nurture gender-equal mindsets and working conditions that lead to the development of more inclusive technology products and work environments; urges the Commission and the Member States to ensure the inclusion of digital skills and AI training in school curricula and to make them accessible to all, as a way to close the digital gender divide; 

15.   Stresses the need for training for workers and educators dealing with AI to promote the ability to identify and correct gender-discriminatory practices in the workplace and in education, and for workers developing AI systems and applications to identify and remedy gender-based discrimination in the AI systems and applications they develop; calls for the establishment of clear responsibilities in companies and educational institutions to ensure that there is no gender-based discrimination in the workplace or educational context; highlights that genderless images of AI and robots should be used for educational and cultural purposes, unless gender is a key factor for some reason;

16.   Highlights the importance of the development and deployment of AI applications in the educational, cultural and audiovisual sectors in collecting gender-disaggregated and other equality data, and of applying modern machine learning de-biasing techniques, if needed, to correct gender stereotypes and gender biases which may have negative impacts; 

17.   Urges the Commission and the Member States to collect gender-disaggregated data in order to feed datasets in a way that promotes equality; also calls on them to measure the impact of the public policies put in place to incorporate the gender dimension by analysing the data collected; stresses the importance of using complete, reliable, timely, unbiased, non-discriminatory and gender-sensitive data in the development of AI;

18.   Calls on the Commission to include education in the regulatory framework for high-risk AI applications, given the importance of ensuring that education continues to contribute to the public good, as well as the high sensitivity of data on pupils, students and other learners; emphasises that in the education sector, this deployment should involve educators, learners and the wider society and should take into account the needs of all and the expected benefits in order to ensure that AI is used purposefully and ethically;

19.   Calls on the Commission to encourage the use of EU programmes such as Horizon Europe, Digital Europe and Erasmus+ to promote multidisciplinary research, pilot projects, experiments and the development of tools including training, for the identification of gender biases in AI, as well as awareness-raising campaigns for the general public;

20.   Stresses the need to create diverse teams of developers and engineers to work alongside the main actors in the educational, cultural and audiovisual sectors in order to prevent gender or social bias being inadvertently included in AI algorithms, systems and applications; stresses the need to consider the variety of different theories through which AI has been developed to date and could be further advanced in the future;

21.   Points out that the fact of taking due care to eliminate bias and discrimination against particular groups, including gender stereotypes, should not halt technological progress.

Date adopted

10.9.2020

Result of final vote

+: 28
–: 3
0: 4

Members present for the final vote

Christine Anderson, Simona Baldassarre, Robert Biedroń, Vilija Blinkevičiūtė, Annika Bruna, Margarita de la Pisa Carrión, Gwendoline Delbos-Corfield, Rosa Estaràs Ferragut, Frances Fitzgerald, Cindy Franssen, Heléne Fritzon, Lina Gálvez Muñoz, Arba Kokalari, Alice Kuhnke, Elżbieta Katarzyna Łukacijewska, Maria Noichl, Pina Picierno, Sirpa Pietikäinen, Samira Rafaela, Evelyn Regner, Diana Riba i Giner, Eugenia Rodríguez Palop, Christine Schneider, Jessica Stegrud, Isabella Tovaglieri, Ernest Urtasun, Hilde Vautmans, Elissavet Vozemberg-Vrionidi, Chrysoula Zacharopoulou, Marco Zullo

Substitutes present for the final vote

Maria da Graça Carvalho, Derk Jan Eppink, Elena Kountoura, Radka Maxová, Susana Solís Pérez

28 +

PPE: Maria da Graça Carvalho, Rosa Estaràs Ferragut, Frances Fitzgerald, Cindy Franssen, Arba Kokalari, Elżbieta Katarzyna Łukacijewska, Sirpa Pietikäinen, Christine Schneider, Elissavet Vozemberg‑Vrionidi
S&D: Robert Biedroń, Vilija Blinkevičiūtė, Heléne Fritzon, Lina Gálvez Muñoz, Maria Noichl, Pina Picierno, Evelyn Regner
Renew: Radka Maxová, Samira Rafaela, Susana Solís Pérez, Hilde Vautmans, Chrysoula Zacharopoulou
Verts/ALE: Gwendoline Delbos‑Corfield, Alice Kuhnke, Diana Riba i Giner, Ernest Urtasun
GUE/NGL: Elena Kountoura, Eugenia Rodríguez Palop
NI: Marco Zullo

3 -

ID: Annika Bruna
ECR: Derk Jan Eppink, Jessica Stegrud

4 0

ID: Christine Anderson, Simona Baldassarre, Isabella Tovaglieri
ECR: Margarita de la Pisa Carrión

Date adopted

16.3.2021

Result of final vote

+: 25
–: 0
0: 4

Members present for the final vote

Asim Ademov, Isabella Adinolfi, Christine Anderson, Ilana Cicurel, Gilbert Collard, Gianantonio Da Re, Laurence Farreng, Tomasz Frankowski, Hannes Heide, Irena Joveva, Petra Kammerevert, Niyazi Kizilyürek, Ryszard Antoni Legutko, Predrag Fred Matić, Dace Melbārde, Victor Negrescu, Niklas Nienaß, Peter Pollák, Marcos Ros Sempere, Domènec Ruiz Devesa, Monica Semedo, Andrey Slabakov, Massimiliano Smeriglio, Michaela Šojdrová, Sabine Verheyen, Theodoros Zagorakis, Milan Zver

Substitutes present for the final vote

Christian Ehler, Marcel Kolaja

25 +

ECR: Dace Melbārde
ID: Gilbert Collard
NI: Isabella Adinolfi
PPE: Asim Ademov, Christian Ehler, Tomasz Frankowski, Peter Pollák, Michaela Šojdrová, Sabine Verheyen, Theodoros Zagorakis, Milan Zver
Renew: Ilana Cicurel, Laurence Farreng, Irena Joveva, Monica Semedo
S&D: Hannes Heide, Petra Kammerevert, Predrag Fred Matić, Victor Negrescu, Marcos Ros Sempere, Domènec Ruiz Devesa, Massimiliano Smeriglio
The Left: Niyazi Kizilyürek
Verts/ALE: Marcel Kolaja, Niklas Nienaß

0 -

4 0

ECR: Ryszard Antoni Legutko, Andrey Slabakov
ID: Christine Anderson, Gianantonio Da Re

  • [1] OJ C 202 I, 16.6.2020, p. 1.
  • [2] OJ C 440, 6.12.2018, p. 37.
  • [3] OJ C 449, 23.12.2020, p. 37.
  • [4] OJ C 433, 23.12.2019, p. 42.
  • [5] OJ C 28, 27.1.2020, p. 8.
  • [6] OJ C 252, 18.7.2018, p. 239.
  • [7] OJ C 307, 30.8.2018, p. 163.
  • [8] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (OJ L 119, 4.5.2016, p. 1).
  • [9] Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37).
  • [10] Report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, 29 August 2018.
  • [11] Directive 2005/36/EC of the European Parliament and of the Council of 7 September 2005 on the recognition of professional qualifications (OJ L 255, 30.9.2005, p. 22).
  • [12] Court of Justice of the European Union, Case C-833/18, SI and Brompton Bicycle Ltd v Chedech / Get2Get.
  • [13] Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (OJ L 130, 17.5.2019, p. 92).
  • [14] Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services (OJ L 186, 11.7.2019, p. 57).
  • [15] Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules (OJ L 328, 18.12.2019, p. 7).
  • [16] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (OJ L 178, 17.7.2000, p. 1).
  • [17] Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services in view of changing market realities (OJ L 303, 28.11.2018, p. 69).
  • [18] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
  • [19] Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37).
  • [20] OJ L 130, 17.5.2019, p. 92.
  • [21] Court of Justice of the European Union, SI and Brompton Bicycle Ltd v Chedech Get2Get, Case C-833/18.
  • [22] Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, https://undocs.org/A/73/348.
  • [23] COM(2020) 66 final, https://ec.europa.eu/info/sites/info/files/communication-european-strategy-data-19feb2020_en.pdf.

AI in Education: Considering Ethics and Power


Difficult ethical and moral questions will play a central role in whether artificial intelligence expands opportunities and equity in STEM education or makes things worse, Northwestern University learning scientist Sepehr Vakil said during his closing keynote at a National Science Foundation convening of principal investigators in Arlington, Virginia.

“Researchers and classroom educators must also be involved and engaged in the evolution of AI in learning and schools,” Vakil said. 

Vakil, whose talk during the 2024 EDU Core Research meeting addressed the future of AI in STEM education, also participated in a Q&A with Yolanda Rankin, associate professor of computer science at Emory University. Rankin is considered one of the leading researchers working on issues of computing education and equity.

Vakil directs the Technology, Race, Ethics, and Equity in Education (TREE) Lab at Northwestern’s School of Education and Social Policy. An associate professor, he was also co-lead of the Spencer Foundation’s research convening on Equity and AI in Education, serves as a senior advisor to Expanding Computing Education Pathways in Illinois, and serves on the National Academies of Sciences, Engineering, and Medicine’s Committee on Computing and Data Science in K-12 Education.

“AI is unique in the kinds of moral and ethical questions that it raises about complex issues like the environment, warfare and surveillance, to name a few,” he said. “AI ethics in STEM education cannot be viewed as separate from these broader ethical and moral questions.”

“Our field must engage with the political economy of AI, asking always who is controlling the tech, the data, and to whose benefit?” Vakil added. “How might the National Science Foundation, and those of us who have NSF-funded projects, intervene and shift this ecosystem?”

  • Read Vakil's bio.
  • Read his full talk on Substack.

Artificial Intelligence 2024 Legislation: Summary


Artificial intelligence—the use of computer systems to perform tasks that normally require human intelligence, such as learning and decision making—has the potential to spur innovation and transform industry and government. As AI advances, more products and services are coming onto the market. For example, companies are developing AI to help consumers run their household appliances and allow the elderly to stay in their homes longer. AI is used in health care, self-driving cars, digital assistants and many other areas of daily life.

Concerns about potential misuse or unintended consequences of AI, however, have prompted efforts to develop standards. The National Institute of Standards and Technology, for example, is holding discussions with the public and private sectors to develop federal standards for the creation of reliable, robust and trustworthy AI systems.

In the 2024 legislative session, at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills, and six states, Puerto Rico and the Virgin Islands adopted resolutions or enacted legislation. Examples of those actions include:

  • Indiana created an AI task force.
  • South Dakota revised its statutes to clarify that a person is guilty of possessing child pornography if the person knowingly possesses any visual depiction of a minor engaging in a prohibited sexual act, or in a simulation of a prohibited sexual act, or any computer-generated child pornography. A violation of the revised law is a Class 4 felony.
  • Tennessee required the governing boards of public institutions of higher education to promulgate rules and required local education boards and public charter schools to adopt policies, regarding the use of AI by students, teachers, faculty and staff for instructional purposes.
  • Utah created the Artificial Intelligence Policy Act.
  • The Virgin Islands established a real-time, centralized crime data system within the state police department.
  • West Virginia created a select committee on artificial intelligence.

This webpage covers key legislation related to AI issues generally. Legislation related solely to specific AI technologies, such as facial recognition, deepfakes or autonomous cars, is being tracked separately.

 

 

 

 

 

 

Alabama 

 

Age of a Child for Offenses 

Pending 

Relates to child sexual abuse material; provides for the age of a child for offenses involving child sexual abuse material; provides a cause of action for certain offenses involving child sexual abuse material. 

Child Pornography; Criminal Use 

Alabama 

 

Distribution of Materially Deceptive Media 

Pending 

Relates to elections; provides that the distribution of materially deceptive media to influence an upcoming election is a crime; authorizes certain parties to seek permanent injunctive relief against anyone who distributes materially deceptive media in an attempt to influence an upcoming election; provides definitions. 

Elections 

Alaska 

 

Artificial Intelligence 

Pending 

Relates to artificial intelligence. 

Government Use; Impact Assessment; Notification; Private Right of Action; Responsible Use 

Alaska 

 

Person Definition 

Pending 

Relates to the definition of person. 

Personhood 

Alaska 

 

Artificial Intelligence Usage 

Pending 

Relates to the use of artificial intelligence to create or alter a representation of the voice or likeness of an individual. 

Criminal Use 

Alaska 

 

Artificial Intelligence 

Pending 

Relates to artificial intelligence. 

Elections; Government Use; Impact Assessment; Responsible Use 

Arizona 

 

Artificial Intelligence and Sexual Abuse Materials 

Pending 

Relates to artificial intelligence; relates to sexual abuse materials. 

Child Pornography; Criminal Use 

Arizona 

 

Sexual Exploitation of a Minor 

Pending 

Provides that a person commits sexual exploitation of a minor by knowingly recording, filming, photographing, developing or duplicating any visual depiction in which a minor is engaged in exploitive exhibition or other sexual conduct; producing, publishing, altering or generating with artificial intelligence any visual depiction in which a minor is engaged in exploitive exhibition or other sexual conduct. 

Child Pornography; Criminal Use 

Arizona 

 

Fraudulent Voice Recordings 

Pending 

Relates to fraudulent voice recordings. 

Criminal Use 

Arizona 

 

Ballot Processing and Electronic Adjudication and Limit 

Pending 

Provides that machines or devices used at any election for federal, state or county offices may only be certified for use in this state and may only be used in this state if they comply with the Help America Vote Act of 2002 and if those machines or devices have been tested and approved by a laboratory that is accredited pursuant to the Help America Vote Act of 2002; machines, devices, firmware or software used in this state may not include any artificial intelligence or learning hardware. 

Elections 

Arizona 

 

Artificial Intelligence Use 

Pending 

Relates to artificial intelligence use; relates to aggravating circumstance. 

Criminal Use 

Arkansas 

None 

 

 

 

 

California 

 

Automated Decision Tools 

Failed 

Relates to the state Fair Employment and Housing Act. Requires a deployer and a developer of an automated decision tool to perform an impact assessment for any such tool the deployer uses that includes a statement of the purpose of the tool and its intended benefits, uses and deployment contexts. Requires a public attorney to, before commencing an action for injunctive relief, provide written notice to a deployer or developer of the alleged violations and provide a specified opportunity to cure violations. 

Impact Assessment; Notification; Private Sector Use; Responsible Use 

California 

 

Contracts Against Public Policy: Personal 

Pending 

Provides that a provision in an agreement between an individual and any other person for the performance of personal or professional services is contrary to public policy and deemed unconscionable if the provision meets specified conditions relating to the use of a digital replica of the voice or likeness of an individual in lieu of the work of the individual or to train a generative artificial intelligence system. 

Private Sector Use 

California 

 

Mental Health: Impacts of Social Media 

Pending 

Requires the Mental Health Services Oversight and Accountability Commission to explore, among other things, the persons and populations that use social media and the negative mental health risks associated with social media and artificial intelligence. Requires the commission to report to specified policy committees of the Legislature a statewide strategy to understand, communicate and mitigate mental health risks associated with the use of social media by children and youth. 

Responsible Use 

California 

 

Health Care Coverage: Discrimination 

Failed 

Prohibits a health care service plan or health insurer from discriminating based on race, color, national origin, sex, age or disability through the use of clinical algorithms in its decision-making. 

Health Use; Private Sector Use 

California 

 

Artificial Intelligence: Standards and Content 

Pending 

Declares the intent of the Legislature to subsequently amend this bill to include provisions that would require California-based companies that are in the business of generative artificial intelligence to implement the Coalition for Content Provenance and Authenticity’s technical open standard and content credentials into their tools and platforms. 

Provenance 

California 

 

Artificial Intelligence: Disclosure 

Pending 

States the intent of the Legislature to enact legislation that would create a disclosure requirement for content generated through artificial intelligence. 

Provenance 

California 

 

Crimes: Child Pornography 

Pending 

Defines depicting a person under specified number of years of age personally engaging in or simulating sexual conduct as including a representation of a real or fictitious person through use of artificially intelligent software or computer-generated means, who is, or who a reasonable person would regard as being, a real person under specified number of years of age, engaging in or simulating sexual conduct. 

Child Pornography; Criminal Use 

California 

 

Crimes: Sexual Exploitation of a Child 

Pending 

Makes a person guilty of a misdemeanor or a felony if the person knowingly develops, duplicates, prints or exchanges any representation of information, data or image, generated using artificial intelligence, that depicts a person under the age of 18 years engaged in an act of sexual conduct. 

Child Pornography; Criminal Use 

California 

 

Artificial Intelligence: Training Data Transparency 

Pending 

Requires, on or before Jan. 1, 2026, a developer, as defined, of an artificial intelligence system or service, as defined, made available to Californians for use, regardless of whether the terms of that use include compensation, to post on the developer's internet website documentation regarding the data used to train the artificial intelligence system or service, as specified. 

Private Sector Use; Provenance 

California 

 

Automated Decision Systems 

Pending 

States the intent of the Legislature to enact legislation relating to commercial algorithms and artificial intelligence-enabled medical devices. 

Health Use 

California 

 

Political Advertisements: Artificial Intelligence 

Pending 

Requires a person, committee or other entity that creates, originally publishes or originally distributes a qualified political advertisement to include in the advertisement a specified disclosure that the advertisement was generated, in whole or in part, using artificial intelligence. Defines qualified political advertisement to include any paid advertisement that relates to a candidate for federal, state or local office. 

Elections 

California 

 

Community Colleges: Faculty: Artificial Intelligence 

Pending 

Prohibits artificial intelligence from being used to replace community college faculty for purposes of providing academic instruction to, and regular interaction with, students in a course of instruction, and would authorize artificial intelligence to only be used as a peripheral tool to support faculty in carrying out those tasks for uses such as course development, assessment and tutoring. 

Education Use; Government Use; Effect on Labor/Employment 

California 

 

Telecommunications: Automatic Dialing-Announcing Device 

Pending 

Provides that existing law authorizes the Public Utilities Commission to control and regulate the connection of an automatic dialing-announcing device to a telephone line. Provides that existing law imposes various requirements on the use of an automatic dialing-announcing device. Expands the definition of automatic dialing-announcing device to include calls made using an artificial voice. 

Private Sector Use 

California 

 

Contracts Against Public Policy 

Pending 

Provides that a provision in an agreement between an individual and any other person for the performance of personal or professional services is contrary to public policy and deemed unconscionable if the provision meets specified conditions relating to the use of a digital replica of the voice or likeness of an individual in lieu of the work of the individual or to train a generative artificial intelligence system. 

Private Sector Use 

California 

 

State Department of Education: Artificial Intelligence 

Pending 

Requires the superintendent of public instruction, in consultation with the State Board of Education, to, on or before Jan. 1, 2025, convene a working group for the purpose of exploring how artificial intelligence and other forms of similarly advanced technology are currently being used in education, identifying how they may be used in the future, and developing best practices to ensure that those technologies advance, rather than harm, educational quality. 

Education Use; Government Use; Studies 

California 

 

Artificial Intelligence: Legal Professionals 

Pending 

Expresses the intent of the Legislature to enact legislation that would require legal professionals to disclose to the court whether they have used artificial intelligence or machine learning to prepare any pleadings, motions or other documents filed with any court in this state. 

Judicial Use; Private Sector Use 

California 

 

Artificial Intelligence 

Pending 

States the intent of the Legislature to enact legislation to define the term artificial intelligence. 

 

California 

 

Automated Decision Tools 

Pending 

Requires a deployer and a developer of an automated decision tool, as defined, to, on or before Jan. 1, 2026, and annually thereafter, perform an impact assessment for any automated decision tool the deployer uses that includes, among other things, a statement of the purpose of the automated decision tool and its intended benefits, uses and deployment contexts. Requires a deployer or developer to provide the impact assessment to the Civil Rights Department within seven days of a request. 

Government Use; Impact Assessment; Notification; Private Sector Use; Responsible Use 

California 

 

Artificial Intelligence 

Pending 

Requires the Department of Technology to issue regulations to establish standards for watermarks to be included in covered AI-generated material. Requires the department's standard to, at a minimum, require an AI-generating entity to include digital content provenance in the watermarks. 

Provenance 

California 

 

Universal Basic Income 

Pending 

States that it is the intent of the Legislature to enact legislation to promote economic security and stability for California residents by creating a universal basic income program for residents whose employment is replaced by artificial intelligence. 

Effect on Labor/Employment 

California 

 

Artificial Intelligence 

Pending 

Declares the intent of the Legislature to enact legislation relating to artificial intelligence. 

 

California 

 

Data Digesters 

Pending 

Requires data digesters to register with the California Privacy Protection Agency, pay a registration fee and provide specified information. Prescribes penalties for a failure to register as required by these provisions. Requires the agency to create a page on its internet website where this registration information is accessible to the public, and creates a fund known as the Data Digester Registry Fund. 

Private Sector Use; Provenance 

California 

 

23 Asilomar AI Principles 

Pending 

Expresses the support of the Legislature for the 23 Asilomar AI Principles as guiding values for the development of artificial intelligence and of related public policy. 

Government Use; Private Sector Use; Responsible Use 

California 

 

Artificial Intelligence 

Pending 

Urges the U.S. government to impose an immediate moratorium on the training of AI systems more powerful than GPT-4 for at least six months to allow time to develop much-needed AI governance systems. 

 

California 

 

Health Care Coverage: Independent Medical Review 

Pending 

Requires a health care service plan or disability insurer that provides coverage for mental health or substance use disorders to treat a modification, delay or denial issued in response to an authorization request for coverage of treatment for a mental health or substance use disorder for an insured up to a specified age as if the modification, delay or denial is also a grievance submitted by the enrollee or insured. 

Effect on Labor/Employment; Responsible Use 

California 

 

Department of Technology: Artificial Intelligence 

Failed 

Requires any state agency that utilizes generative artificial intelligence to directly communicate with a natural person to provide notice to that person that the interaction with the state agency is being communicated through artificial intelligence. Requires the state agency to provide instructions to inform the natural person how they can directly communicate with a natural person from the state agency. 

Government Use; Oversight/Governance 

California 

 

Department of Technology: Advanced Technology: Research 

Failed 

Provides for the Artificial Intelligence for California Research Act, which would require the Department of Technology, upon appropriation by the Legislature, to develop and implement a comprehensive research plan to study the feasibility of using advanced technology to improve state and local government services. Requires the research plan to include, among other things, an analysis of the potential benefits and risks of using artificial intelligence technology in government services. 

Government Use 

California 

 

California Interagency AI Working Group 

Pending 

Creates the California Interagency AI Working Group to deliver a report to the Legislature regarding artificial intelligence. Requires require the working group members to be Californians with expertise in at least two of certain areas, including computer science, artificial intelligence and data privacy. Requires the report to include a recommendation of a definition of artificial intelligence as it pertains to its use in technology for use in legislation. 

Studies 

California 

 

Public Contracts: Artificial Intelligence Services 

Pending 

Requires the Department of Technology to establish safety, privacy and nondiscrimination standards relating to artificial intelligence services, as defined. Prohibits a contract for artificial intelligence services from being entered into by the state unless the provider meets those standards. 

Government Use; Responsible Use 

California 

 

California Artificial Intelligence Research Hub 

Pending 

Requires the Government Operations Agency, the Governor's Office of Business and Economic Development and the Department of Technology to collaborate to establish the California Artificial Intelligence Research Hub in the Government Operations Agency. The bill would require the hub to serve as a centralized entity to facilitate collaboration between government agencies, academic institutions and private sector partners to advance artificial intelligence research and development. 

Oversight/Governance 

California 

 

Artificial Intelligence Accountability Act 

Pending 

Requires the Government Operations Agency, the Department of Technology and the Office of Data and Innovation to produce a State of California Benefits and Risks of Generative Artificial Intelligence Report that includes certain items, including an examination of the most significant, potentially beneficial uses for deployment of generative artificial intelligence tools by the state, and would require those entities to update the report. 

Cybersecurity; Government Use; Notification; Responsible Use 

California 

 

Crimes: Child Pornography 

Pending 

Includes an image generated using artificial intelligence as a computer-generated image, relative to crimes and child pornography. 

Child Pornography; Criminal Use 

California 

 

Consumer Protection: Generative Artificial Intelligence 

Pending 

States the intent of the Legislature to enact legislation that would establish a mechanism to allow consumers to easily determine whether images, audio, video or text were created by generative artificial intelligence. 

Provenance 

California 

 

Artificial Intelligence Technology 

Pending 

Relates to artificial intelligence technology. Defines various terms related to artificial intelligence and synthetic voice, video and image recordings produced by artificial intelligence, and would clarify that use of such synthetic recordings, as specified, is deemed to be a false personation for purposes of these and other criminal provisions. 

Judicial Use; Private Sector Use 

California 

 

Safe and Secure Innovation for Artificial Intelligence 

Pending 

Enacts the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. Requires a developer of a covered model to determine whether it can make a positive safety determination with respect to a covered model before initiating training of that covered model, as specified. Defines positive safety determination to mean a determination with respect to a covered model, that is not a derivative model, that a developer can reasonably exclude the possibility that the covered model is hazardous. 

Oversight/Governance; Private Sector Use; Responsible Use 

California 

 

State Government: Obtaining Expertise 

Pending 

States the intent of the Legislature to subsequently amend the bill to support the state's efforts in meeting the challenges posed by specified urgent and critical issues, to strengthen the exchange of expertise and research between the state and its world-leading institutions of higher education to advance the state's global leadership in technology and innovation, and to partner with the state's world-leading institutions of higher education to identify solutions to responsibly spur innovation. 

Education Use; Effect on Labor/Employment 

California 

 

Health Care Coverage: Utilization Review 

Pending 

Requires a health care service plan or health insurer to ensure that a licensed physician supervises the use of artificial intelligence decision-making tools when those tools are used to inform decisions to approve, modify or deny requests by providers for authorization prior to, or concurrent with, the provision of health care services to enrollees or insureds. 

Health Use 

California 

 

Public Benefits Contracts: Phone Operator Jobs 

Pending 

Deletes a specified exception for contracts between a state agency and a health care service plan or a specialized health care service plan regulated by the Department of Managed Health Care and for contracts between a state agency and a disability insurer or specialized health insurer regulated by the Department of Insurance. 

Effect on Labor/Employment; Government Use 

California 

 

Insurance Disclosures 

Pending 

Requires a property and casualty insurer to disclose to an applicant or insured when it has used artificial intelligence to make decisions on, or make decisions that affect, applications and claims review, as specified. 

Private Sector Use 

California 

 

Public Postsecondary Education: Artificial Intelligence 

Pending 

Requires an unspecified public institution of higher education to establish the Artificial Intelligence and Deepfake Working Group to evaluate and advise the Legislature and the public on the relevant issues and impacts of artificial intelligence and deepfakes, as provided. 

Studies 

California 

 

Artificial Intelligence Group 

Pending 

Requires the superintendent, in consultation with the state Board of Education, to convene a working group, composed as provided, for the purpose of evaluating artificial intelligence-enabled teaching and learning practices, as specified. 

Education Uses; Government Use; Studies 

Colorado 

 

Funding for School Safety Firearm Detection Systems 

Pending 

Concerns a program to fund the acquisition of firearm detection software for use in schools. 

Education Use; Government Use 

Colorado 

 

Candidate Election Deepfake Disclosures 

Pending 

Concerns the use of a deepfake in a communication related to a candidate for elective office; requires disclosure, provides for enforcement, and creates a private cause of action for candidates. 

Elections 

Connecticut 

 

Unlawful Dissemination of Intimate Images 

Pending 

Concerns unlawful dissemination of intimate images that are digitally altered or created using artificial intelligence; criminalizes unauthorized dissemination of intimate images that are digitally altered or created using artificial intelligence. 

Criminal Use 

Connecticut 

 

Artificial Intelligence Deceptive Synthetic Media 

Pending 

Concerns artificial intelligence, deceptive synthetic media and elections; prohibits distribution of certain deceptive synthetic media within the 90-day period preceding an election or primary. 

Elections 

Connecticut 

 

Artificial Intelligence 

Pending 

Protects the public from harmful unintended consequences of artificial intelligence; trains Connecticut's workforce to use artificial intelligence. 

Education/Training; Education Use; Effect on Labor/Employment; Responsible Use 

Connecticut 

 

School Resources 

Pending 

Relates to grants for special education; repeals provisions relating to grants to certain local and regional boards of education; requires that the superintendent of schools annually provide updated copies of the blueprints and floor plans for each school to all law enforcement, fire, public health, emergency management and emergency medical services personnel. 

Education Use; Government Use 

Delaware 

 

Artificial Intelligence Commission 

Pending 

Creates the Delaware Artificial Intelligence (AI) Commission; provides that this commission shall be tasked with making recommendations to the General Assembly and Department of Technology and Information on AI utilization and safety within the state; provides that the commission shall additionally conduct an inventory of all generative AI usage within state executive, legislative and judicial agencies and identify high risk areas for the implementation of generative AI. 

Government Use; Studies 

District of Columbia 

 

Stop Discrimination by Algorithms Act of 2023 

Pending 

Prohibits users of algorithmic decision-making from utilizing algorithmic eligibility determinations in a discriminatory manner; requires corresponding notices to individuals whose personal information is used; provides for appropriate means of civil enforcement. 

Audit; Notification; Responsible Use 

Florida 

 

Education 

Pending 

Revises provisions and provides requirements for computer science in K-12 schools; revises provisions relating to weighted GPA calculations for certain courses; revises provisions relating to Bright Futures Scholarship Program; establishes Artificial Intelligence in Education Task Force. 

Education Use; Government Use; Studies 

Florida 

 

Artificial Intelligence Use in Political Advertising 

To governor 

Requires certain political advertisements, electioneering communications or other miscellaneous advertisements to include a specified disclaimer; specifies requirements for the disclaimer; provides for criminal and civil penalties; authorizes any person to file certain complaints; provides for expedited hearings. 

Elections 

Florida 

 

Verification of Reemployment Assistance 

Pending 

Provides requirements for reemployment assistance benefit conditions for non-Florida residents; removes requirements that certain skills assessments of claimants be voluntary; revises circumstances under which Department of Commerce disqualifies claimants from benefits; requires department to verify claimants’ identities before paying benefits and to weekly cross-check certain information; prohibits benefits from being paid for claims that have not been cross-checked. 

Government Use 

Florida 

 

Artificial Intelligence Transparency 

Failed 

Creates Government Technology Modernization Council within the Department of Management Services; requires entities and persons to create safety and transparency standards for content, images and videos generated by AI; requires disclosures for certain communications, interactions, images, likenesses and content; prohibits use of a natural person’s image or likeness without consent; provides certain political ads are subject to specified requirements. 

Child Pornography; Criminal Use; Elections; Government Use; Notification; Oversight/Governance; Private Sector Use; Provenance; Responsible Use 

Florida 

 

Health Care Innovation 

Failed 

Creates Health Care Innovation Council within the state Department of Health for specified purpose; requires council to submit annual reports to governor and Legislature; requires department to administer revolving loan program for applicants seeking to implement certain health care innovations in this state; authorizes department to contract with third party to administer program, including loan servicing, and manage revolving loan fund. 

Health Use; Studies 

Florida 

 

Use of Artificial Intelligence in Political Advertising 

Failed 

Defines the term generative artificial intelligence; requires that certain political advertisements, electioneering communications or other miscellaneous advertisements include a specified disclaimer; provides for civil penalties; authorizes the filing of complaints regarding violations with the Florida Elections Commission. 

Elections 

Florida 

 

Artificial Intelligence 

Failed 

Creates the Artificial Intelligence Advisory Council within the Department of Management Services; requires the department to provide administrative support to the council; requires members to be appointed to the council by a specified date; requires each state agency to prepare and submit, by a specified date and using money appropriated by the Legislature, an inventory report for all automated decision systems that are being developed, used or procured by the agency; provides legislative intent. 

Government Use; Responsible Use; Studies 

Florida 

 

Verification of Reemployment Assistance 

Pending 

Cites this act as the Promoting Work, Deterring Fraud Act of 2024; provides requirements for reemployment assistance benefit conditions for non-Florida residents; removes requirements that certain skills assessments of claimants be voluntary; revises circumstances under which the department disqualifies claimants from benefits; requires the department to verify claimants’ identities before paying benefits; requires the department to procure an online workforce search and match tool for a specified purpose. 

Government Use 

Florida 

 

Computer Science Education 

Failed 

Provides that state academic standards include computer science skills; requires K-12 public schools to provide computer science instruction; requires the department to publish specified information on its website relating to computer science education and certain industry certifications; requires the Department of Education to adopt and publish by a specified date a strategic plan for computer science education; creates the AI in Education Task Force within the department. 

Education/Training 

Florida 

 

Artificial Intelligence Transparency 

To governor 

Creates the Government Technology Modernization Council; requires the council to submit specified recommendations to the Legislature and specified reports to the governor and the Legislature by specified dates; prohibits a person from knowingly possessing or controlling or intentionally viewing photographs, motion pictures, representations, images, data files, computer depictions or other presentations which the person knows to include generated child pornography; provides for criminal penalties. 

Elections; Government Use; Notification; Oversight/Governance; Private Sector Use; Provenance; Responsible Use 

Florida 

 

Public Records and Artificial Intelligence Transparency 

Failed 

Provides an exemption from public records requirements for information relating to investigations by the Department of Legal Affairs and law enforcement agencies of certain artificial intelligence transparency violations; provides that certain information received by the department remains confidential and exempt upon completion or inactive status of an investigation; provides for future legislative review and repeal of the exemption; provides a statement of public necessity. 

Government Use 

Florida 

 

Health Care Innovation Council 

To governor 

Creates the Health Care Innovation Council to tap into the best knowledge and experience available by regularly bringing together subject matter experts to explore and discuss innovations in technology, workforce and service delivery models that can be exhibited as best practices, implemented or scaled to improve the quality and delivery of health care; provides for a revolving loan program for applicants seeking to implement innovative solutions; appropriates funds to the Department of Health. 

Health Use; Studies 

Florida 

 

Education 

Failed 

Revises eligibility requirements for a New Worlds Scholarship Account; requires each school district and prekindergarten provider to notify the parent of each eligible student of the process to request and receive a scholarship when providing certain screening and progress monitoring results; renames the New Worlds Reading Initiative as the New Worlds Learning Initiative; expands the initiative to include improvement in mathematics skills. 

Education Use; Government Use 

Georgia 

 

General Provisions Regarding Insurance 

Pending 

Relates to general provisions regarding insurance. 

Health Use 

Georgia 

 

Laws and Statutes 

Pending 

Relates to laws and statutes; provides for protections against discrimination by artificial intelligence and automated decision tools; prohibits certain defenses; provides for definitions; provides for related matters; repeals conflicting laws. 

Responsible Use 

Georgia 

 

Technology Authority 

Pending 

Provides that the state Technology Authority shall conduct an inventory of all systems that employ artificial intelligence and are in use by any agency and develop and establish policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence and are in use by agencies; requires an annual report. 

Government Use; Impact Assessment; Responsible Use 

Georgia 

 

Obscene Material 

Pending 

Prohibits the distribution of computer generated obscene material depicting a child; provides for affirmative defenses; provides that a person commits the offense of criminal trespass involving a wild animal in the first degree if such person enters a cage, enclosure or other area where a wild animal is housed or otherwise contained, into which the person knows he or she has no legal authority, license or permission to enter, and harasses the wild animal and such wild animal suffers an injury or death. 

Child Pornography; Criminal Use 

Georgia 

 

Senate Study Committee on Artificial Intelligence 

Pending 

Creates the Senate Study Committee on Artificial Intelligence. 

Government Use; Private Sector Use; Responsible Use; Studies 

Guam 

None 

Hawaii 

 

Algorithmic Decision Making 

Pending 

Prohibits users of algorithmic decision-making from utilizing algorithmic eligibility determinations in a discriminatory manner; requires users of algorithmic decision-making to send corresponding notices to individuals whose personal information is used; requires users of algorithmic decision-making to submit annual reports to the Department of the Attorney General; provides for appropriate means of civil enforcement. 

Audit; Notification; Private Sector Use; Responsible Use 

Hawaii 

 

Campaign Advertisements 

Pending 

Requires any campaign advertisement that contains any image, video footage or audio recording that is created with the use of generative artificial intelligence to include a disclosure statement regarding the use of that technology; subjects violators to administrative fines. 

Elections 

Hawaii 

 

Wildfire Forecast System 

Pending 

Provides that the Legislature finds that an early detection system for wildfire will improve public safety by allowing government agencies and emergency management personnel to issue timely warnings and enhance the preparedness of first responders; requires the University of Hawaii to develop a wildfire forecast system for the state; appropriates funds. 

Government Use 

Hawaii 

 

Generative Artificial Intelligence 

Pending 

Establishes a plan for the use of generative artificial intelligence in state agencies, departments and government branches; requires the Office of Enterprise Technology Services to carry out risk assessments and to prepare guidelines for state uses; requires reports to the Legislature. 

Government Use; Impact Assessment; Notification; Responsible Use 

Hawaii 

 

Office of Artificial Intelligence Safety and Regulation 

Pending 

Establishes an artificial intelligence working group in the Office of Enterprise Technology Services to develop acceptable use policies and guidelines for the regulation, development, deployment and use of artificial technologies in the state; appropriates funds. 

Government Use; Impact Assessment; Oversight/Governance; Private Sector Use; Responsible Use 

Hawaii 

 

Artificial Intelligence Government Services 

Pending 

Establishes and appropriates funds for an artificial intelligence government services pilot program to provide certain government services to the public through an internet portal that uses artificial intelligence technologies. 

Government Use 

Hawaii 

 

Law Enforcement Artificial Intelligence Training 

Pending 

Urges the leadership of the Department of Law Enforcement to periodically undergo training on crimes relating to artificial intelligence technology. 

Criminal Use; Government Use 

Hawaii 

 

Law Enforcement Artificial Intelligence Training 

Pending 

Urges the leadership of the Department of Law Enforcement to periodically undergo training on crimes relating to artificial intelligence technology. 

Criminal Use; Government Use 

Hawaii 

 

Wildfire Forecast System 

Pending 

Provides that the University of Hawaii shall establish and implement a program to develop a wildfire forecast system for the state using artificial intelligence approaches; provides that the university shall develop the system to forecast the risk of wildfire statewide to enhance public safety, preparedness and risk mitigation, including improving the preparedness of firefighters and enabling residents to take proactive fire mitigation measures for their homes and plan for evacuations; appropriates funds. 

Government Use 

Hawaii 

 

State Health Planning and Development Agency 

Pending 

Amends the functions and duties of the state health planning and development agency; appropriates moneys for administrative costs and to establish positions; requires the plan to be developed no later than 2025 and to be updated annually thereafter; adds remote monitoring, artificial intelligence and workforce development to examples of emerging health issues that the agency may provide reports, studies and recommendations on. 

Health Use 

Hawaii 

 

State Health Planning and Development Agency 

Pending 

Amends the functions of the state health planning and development agency; declares that the general fund expenditure ceiling is exceeded; establishes positions; makes an appropriation. 

Health Use 

Hawaii 

 

Algorithmic Eligibility Determinations 

Pending 

Prohibits users of algorithmic decision-making from utilizing algorithmic eligibility determinations in a discriminatory manner; requires users of algorithmic decision-making to send corresponding notices to individuals whose personal information is used; requires users of algorithmic decision-making to submit annual reports to the Department of the Attorney General; provides for appropriate means of civil enforcement. 

Audit; Impact Assessment; Notification; Private Sector Use; Responsible Use 

Hawaii 

 

Office of Artificial Intelligence Safety and Regulation 

Pending 

Establishes the Office of Artificial Intelligence Safety and Regulation within the Department of Commerce and Consumer Affairs to regulate the development, deployment and use of artificial intelligence technologies in the state; prohibits the deployment of artificial intelligence products in the state unless affirmative proof establishing the product's safety is submitted to the office; makes an appropriation. 

Impact Assessment; Oversight/Governance; Responsible Use 

Hawaii 

 

Department of Corrections Artificial Intelligence Use 

Pending 

Requires the Department of Corrections and Rehabilitation to conduct a study to determine the feasibility of using artificial intelligence technology to assist the department with improving safety at correctional institutions; authorizes the department to contract with consultants to conduct the study; requires a report to the Legislature; appropriates moneys; declares that the appropriation exceeds the state general fund expenditure ceiling for a specified fiscal year. 

Government Use 

Hawaii 

 

Artificial Intelligence Government Services 

Pending 

Establishes and appropriates funds for an artificial intelligence government services pilot program to provide certain government services to the public through an internet portal that uses artificial intelligence technologies. 

Government Use 

Hawaii 

 

State Health Planning and Development Agency 

Pending 

Consolidates the functions of the State Health Planning and Development Agency within the Department of Health; declares that the general fund expenditure ceiling is exceeded; establishes positions. 

Health Use 

Hawaii 

 

Artificial Intelligence and Art 

Pending 

Encourages the U.S. Congress to pass the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023 (No Fakes Act) and the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024 (No AI Fraud Act). 

Criminal Use 

Hawaii 

 

Artificial Intelligence and Art 

Pending 

Encourages the U.S. Congress to pass the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023 (No Fakes Act) and the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024 (No AI Fraud Act). 

Criminal Use 

Idaho 

 

Crimes Against Children 

Pending 

Amends existing law to provide that digital, computer-generated images may be considered in the crime of sexual exploitation of a child; revises provisions regarding the Internet Unit. 

Child Pornography; Criminal Use 

Idaho 

 

State Affairs 

Pending 

Relates to the artificial intelligence advisory council; adds to existing laws to define terms, to establish the state artificial intelligence advisory council, to provide for powers and duties of the council, to require inventory reports regarding automated decision systems employed by state agencies, to provide for the appointment of council members and council meeting requirements, and to provide a sunset date. 

Government Use; Studies 

Illinois 

 

Artificial Intelligence Video Interview Act 

Pending 

Amends the Artificial Intelligence Video Interview Act; makes a technical change in a section concerning the short title. 

Private Sector Use 

Illinois 

 

Hospital Diagnostics Certification 

Pending 

Amends the University of Illinois Hospital Act and the Hospital Licensing Act; provides that before using any diagnostic algorithm to diagnose a patient, a hospital must first confirm that the diagnostic algorithm has been certified by the Department of Public Health and the Department of Innovation and Technology, has been shown to achieve as or more accurate diagnostic results than other diagnostic means, and is not the only method of diagnosis available to a patient. 

Health Use 

Illinois 

 

Anti Click Gambling Data Analytics Collection Act 

Pending 

Creates the Anti-Click Gambling Data Analytics Collection Act; provides that no entity that operates a remote gambling platform or a subsidiary of the entity shall collect data from a participant with the intent to predict how the participant will gamble in a particular gambling or betting scenario. 

Private Sector Use 

Illinois 

 

Safe Patient Limits Act 

Pending 

Creates the Safe Patient Limits Act; provides the maximum number of patients that may be assigned to a registered nurse in specified situations; provides that nothing shall preclude a facility from assigning fewer patients to a registered nurse than the limits provided in act; provides that nothing in the act precludes the use of patient acuity systems consistent with the Nurse Staffing by Patient Acuity Act. 

Health Use; Effect on Labor/Employment 

Illinois 

 

Human Rights Act 

Pending 

Amends the Human Rights Act; provides that an employer that uses predictive data analytics in its employment decisions may not consider the applicant's race or ZIP code when used as a proxy for race to reject an applicant in the context of recruiting, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure or terms, privileges, or conditions of employment. 

Private Sector Use; Responsible Use 

Illinois 

 

Insurance Code and Motor Vehicle Liability 

Pending 

Amends the Insurance Code; provides that an insurer shall not, regarding any motor vehicle liability insurance practice, unfairly discriminate based on age, race, color, national or ethnic origin, immigration or citizenship status, sex, sexual orientation, disability, gender identity or gender expression, or use any external consumer data and information sources in a way that unfairly discriminates. 

Private Sector Use; Responsible Use 

Illinois 

 

Election Code 

Pending 

Amends the Election Code; in provisions concerning the prevention of voting or candidate support and conspiracy to prevent voting, provides that the term deception or forgery includes, but is not limited to the creation and distribution of a digital replica or deceptive social media content that a reasonable person would incorrectly believe is a true depiction of an individual, is made by a government official or candidate for office within the state, or is an announcement or communication. 

Elections 

Illinois 

 

Courses of Study Article of the School Code 

Pending 

Amends the Courses of Study Article of the School Code; provides that all school districts shall, with guidance and standards provided by the state Board of Education and a group of educators convened by the state Board of Education, ensure that students receive developmentally appropriate opportunities to gain digital literacy skills beginning in elementary school; provides that digital literacy instruction shall include developmentally appropriate instruction in digital citizenship skills, media. 

Education/Training 

Illinois 

 

Artificial Intelligence Reporting Act 

Pending 

Creates the Artificial Intelligence Reporting Act; provides that each state agency shall prepare an annual report concerning the state agency's use of covered algorithms in its operations; sets forth reporting requirements; provides that, within the specified number of months after the effective date of the act, and each year thereafter, each state agency shall submit the report to the General Assembly, the auditor general and the Department of Innovation and Technology. 

Government Use; Oversight/Governance 

Illinois 

 

State Government Law 

Pending 

Amends the Departments of State Government Law of the Civil Administrative Code of Illinois; provides that all state agency artificial intelligence systems or state-funded artificial intelligence systems must follow the trustworthiness, equity and transparency standards framework established by the National Institute for Standards and Technology's AI Risk Management Framework; specifies time frames for compliance. 

Government Use; Impact Assessment 

Illinois 

 

Criminal Code of 2012 and Child Pornography 

Pending 

Amends the Criminal Code of 2012; provides that for purposes of violating the child pornography law, depicting a person under specified years of age personally engaging in or personally simulating any act of sexual penetration or sexual conduct includes a representation of a real or fictitious person through use of artificially intelligent software or computer-generated means, who is, or who a reasonable person would regard as being, a real person under specified years of age. 

Child Pornography; Criminal Use 

Illinois 

 

Procurement Code 

Pending 

Amends the Procurement Code; requires a vendor who contracts for government services, grants, or leases or purchases of software or hardware to disclose if artificial intelligence technology is, has been, or will be used in the course of fulfilling the contract or in the goods, technology, or services being purchased; provides that the disclosure must be provided to the chief procurement officer, the Department of Innovation and Technology and the General Assembly. 

Government Use 

Illinois 

 

Hospital Act and the Hospital Licensing Act 

Pending 

Amends the University of Illinois Hospital Act and the Hospital Licensing Act; provides that before using any diagnostic algorithm to diagnose a patient, a hospital must first confirm that the diagnostic algorithm has been certified by the Department of Public Health and the Department of Innovation and Technology, has been shown to achieve as or more accurate diagnostic results than other diagnostic means, and is not the only method of diagnosis available to a patient; sets forth provisions concerning certification of the diagnostic algorithm and annual reporting by the proprietor of the diagnostic algorithm. Amends the Medical Patient Rights Act. Provides that a patient has the right to be told when a diagnostic algorithm will be used to diagnose them. Provides that before a diagnostic algorithm is used to diagnose a patient, the patient must first be presented with the option of being diagnosed without the diagnostic algorithm and consent to the diagnostic algorithm's use. 

Health Use 

Illinois 

 

Automated Decision Tools Act 

Pending 

Creates the Automated Decision Tools Act; provides that, on or before a specified date, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses or designs, codes or produces that includes specified information; provides that a deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision. 

Government Use; Impact Assessment; Notification; Private Sector Use; Responsible Use 

Illinois 

 

Procurement Code 

Pending 

Amends the Procurement Code; requires a vendor who contracts for government services, grants or leases or purchases of software or hardware to disclose if artificial intelligence technology is, has been or will be used in the course of fulfilling the contract or in the goods, technology or services being purchased; provides that the disclosure must be provided to the chief procurement officer, the Department of Innovation and Technology and the General Assembly. 

Government Use 

Illinois 

 

Consumer Fraud and Deceptive Business Practices Act 

Pending 

Amends the Consumer Fraud and Deceptive Business Practices Act; provides that each generative artificial intelligence system and artificial intelligence system that, using any means or facility of interstate or foreign commerce, produces image, video, audio or multimedia AI-generated content shall include on the AI-generated content a clear and conspicuous disclosure that satisfies specified criteria. 

Notification; Private Sector Use; Provenance 

Illinois 

 

Commercial Algorithmic Impact Assessments Act 

Pending 

Creates the Commercial Algorithmic Impact Assessments Act; defines algorithmic discrimination, artificial intelligence, consequential decision, deployer, developer and other terms; requires that by a specified date, and annually thereafter, a deployer of an automated decision tool must complete and document an assessment that summarizes the nature and extent of that tool, how it is used and an assessment of its risks, among other things. 

Government Use; Impact Assessment; Private Sector Use; Responsible Use 

Illinois 

 

Higher Education Act 

Pending 

Amends the Board of Higher Education Act; provides that within six months of the effective date of the Amendatory Act the Board of Higher Education shall prepare a report to the General Assembly on the state of artificial intelligence education and development in public and private institutions of higher education; sets forth what the report shall contain. 

Education Use; Government Use; Studies 

Illinois 

 

Bolstering Online Transparency Act 

Pending 

Creates the Bolstering Online Transparency Act; provides that a person shall not use an automated online account, or bot, to communicate or interact with another person in this state online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election, unless the person makes a specified disclosure. 

Private Sector Use 

Illinois 

 

Innovation and Technology Act 

Pending 

Amends the Department of Innovation and Technology Act; makes changes to the composition of the task force; provides that the task force shall include two members appointed by the speaker of the House of Representatives, one of whom shall serve as a co-chairperson. 

Education Use; Government Use; Effect on Labor/Employment; Private Sector Use; Studies 

Illinois 

 

Consumer Fraud and Deceptive Business Practices Act 

Pending 

Amends the Consumer Fraud and Deceptive Business Practices Act; provides that it is an unlawful practice within the meaning of the act for a licensed mental health professional to provide mental health services to a patient through the use of artificial intelligence without first obtaining informed consent from the patient for the use of artificial intelligence tools and disclosing the use of artificial intelligence tools to the patient before providing services through the use of artificial intelligence. 

Health Use; Notification 

Illinois 

 

Artificial Intelligence Video Interview Act 

Pending 

Amends the Artificial Intelligence Video Interview Act; makes a technical change in a section concerning the short title. 

Private Sector Use 

Illinois 

 

Safe Patient Limits Act 

Pending 

Creates the Safe Patient Limits Act; provides the maximum number of patients that may be assigned to a registered nurse in specified situations; provides that nothing shall preclude a facility from assigning fewer patients to a registered nurse than the limits provided in the act; provides that the maximum patient assignments may not be exceeded, regardless of the use and application of any patient acuity system. 

Health Use; Effect on Labor/Employment 

Illinois 

 

Information Act 

Pending 

Amends the Freedom of Information Act; provides that administrative or technical information associated with automated data operations shall be exempt from inspection and copying, but only to the extent that disclosure would jeopardize the security of the system or its data or the security of materials exempt under the act. 

Government Use 

Illinois 

 

Unlawful Deepfake and Minors 

Pending 

Amends the Criminal Code of 2012; creates the offense of unlawful deepfake of a minor engaging in sexual activity; provides that any person who, with knowledge that the material is a deepfake depicting a minor under 18 years of age, knowingly distributes, advertises, exhibits, exchanges with, promotes or sells any material that depicts a minor engaging in sexual conduct or sexual penetration is guilty of a Class 1 felony. 

Child Pornography; Criminal Use 

Illinois 

 

Election Code 

Pending 

Amends the Election Code; provides that, if a person, committee or other entity creates, originally publishes or originally distributes a qualified political advertisement, the qualified political advertisement shall include, in a clear and conspicuous manner, a statement that the qualified political advertisement was generated in whole or substantially by artificial intelligence that satisfies specified requirements; provides for civil penalties and exceptions to the provision. 

Elections 

Illinois 

 

Safe Patient Limits Act 

Pending 

Creates the Safe Patient Limits Act; provides the maximum number of patients that may be assigned to a registered nurse in specified situations; provides that nothing shall preclude a facility from assigning fewer patients to a registered nurse than the limits provided in the act; provides that the maximum patient assignments may not be exceeded, regardless of the use and application of any patient acuity system; requires the Department of Public Health to adopt rules governing the implementation. 

Health Use 

Illinois 

 

Artificial Intelligence and False Personation 

Pending 

Amends the Criminal Code of 2012; provides that certain forms of false personation may be accomplished by artificial intelligence; defines artificial intelligence. 

Criminal Use 

Indiana 

 

Technology 

 

Creates the Artificial Intelligence Task Force; provides that political subdivisions, state agencies, school corporations and state educational institutions may adopt a technology resources policy and cybersecurity policy, subject to specified guidelines; provides that a person with which a state agency enters into a licensing contract for use of a software application designed to run on generally available desktop or server hardware may not restrict the hardware on which the agency runs the software. 

Education Use; Government Use; Studies 

Iowa 

 

State of Disaster Emergencies 

Pending 

Relates to powers and duties applicable to state of disaster emergencies and public health disasters. 

Government Use 

Iowa 

 

Review and Ongoing Rescission of Administrative Rules 

Pending 

Provides for review and ongoing rescission of administrative rules. 

Government Use 

Iowa 

 

Conduct of Elections 

Pending 

Relates to the conduct of elections, including the use of artificial intelligence and deceptive statements and provides penalties. 

Elections 

Iowa 

 

State of Disaster Emergencies 

Pending 

Relates to powers and duties applicable to state of disaster emergencies and public health disasters. 

Government Use 

Kansas 

 

Generative Artificial Intelligence in Election Campaign 

Failed 

Relates to prohibiting the use of generative artificial intelligence to create false representations of candidates in election campaign media or of state officials. 

Elections 

Kansas 

 

Generative Artificial Intelligence in Election Campaign 

Pending 

Relates to prohibiting the use of generative artificial intelligence to create false representations of candidates in election campaign media or of state officials. 

Elections 

Kentucky 

 

Promotion of Family well-being 

Pending 

Requires that the eligibility periods for all public assistance programs administered by the Cabinet for Health and Family Services be extended to the maximum period of eligibility permitted under federal law; prohibits the Cabinet for Health and Family Services from relying exclusively on automated, artificial intelligence-based, or algorithmic software in the identification of fraud in programs administered by the cabinet. 

Government Use 

Kentucky 

 

Artificial Intelligence Task Force 

Pending 

Directs the Legislative Research Commission to establish the Artificial Intelligence Task Force to study the impact of artificial intelligence on operation and procurement policies of state government agencies and consumer protection needed in private and public sectors; provides recommendations on artificial intelligence systems that would enhance state government operations and legislative initiatives needed to provide consumer protection in the private and public sectors. 

Government Use; Studies 

Kentucky 

 

Cabinet for Health and Family Services 

Pending 

Requires that the eligibility periods for all public assistance programs administered by the Cabinet for Health and Family Services be extended to the maximum period of eligibility permitted under federal law; prohibits the Cabinet for Health and Family Services from relying exclusively on automated, artificial intelligence-based, or algorithmic software in the identification of fraud in programs administered by the cabinet. 

Government Use 

Kentucky 

 

Technology in Education 

Pending 

Makes legislative findings and declarations and establishes the Artificial Intelligence in Kentucky's Schools project, establishes requirements for the Kentucky Department of Education to implement the project, requires the department to design professional development trainings related to artificial intelligence, establishes professional development requirement for teachers, administrators, school council members and school board members. 

Education Use; Government Use 

Kentucky 

 

Automated Online Activity 

Pending 

Defines terms; prohibits an automated online account, or bot, from communicating or interacting with another person in Kentucky online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction; provides that a violation of the act is a deceptive act or practice in the conduct of trade or commerce; prohibits a private right of action. 

Private Sector Use 

Louisiana 

 

Insurance 

Pending 

Relates to unfair discrimination in insurance practices. 

Private Sector Use 

Louisiana 

 

Political Campaigns 

Pending 

Regulates the use of deepfakes and artificial intelligence technology in political advertising. 

Elections 

Louisiana 

 

Information Technology 

Pending 

Provides for registration of foundation models. 

Private Sector Use 

Maine 

 

Data Privacy and Protection Act 

Pending 

Enacts the data privacy and protection act; requires policies, practices and procedures for data privacy; prohibits retaliation for the exercise of a right relating to personal data and prohibits discriminatory practices in the collection, processing or transfer of personal data; provides for civil penalties. 

Impact Assessment; Private Sector Use 

Maine 

 

Address Unsafe Staffing of Nurses and Improve Patient Care 

Pending 

Establishes the Maine Quality Care Act to ensure adequate direct care registered nurse staffing assignments in health care facilities, including hospitals, freestanding emergency departments and ambulatory surgical facilities, to provide safe and effective patient care; establishes minimum direct-care registered nurse staffing requirements based on patient care unit and patient needs; specifies the method to calculate a health care facility's compliance with the staffing requirements; protects direct-care registered nurses from retaliation; includes notice, record-keeping and enforcement requirements. 

Health Use; Effect on Labor/Employment 

Maryland 

 

Pava LaPere Innovation Acceleration Grant Program 

Pending 

Establishes the Pava LaPere Innovation Acceleration Grant Program in the Maryland Technology Development Corporation; establishes the Baltimore Innovation Initiative Pilot Program within the Maryland Innovation Initiative of the corporation; requires certain appropriations for the programs to be included in the annual budget bill in certain fiscal years; repeals a mandated appropriation for a certain telework assistance program. 

Appropriations; Health Use; Private Sector Use 

Maryland 

 

Artificial Intelligence in Governmental Services Act 

Pending 

Alters certain requirements relating to an annual evaluation of the use of emerging technologies in providing public services by the secretary of information technology; requires the evaluation to include an assessment of the use of emerging technologies to ensure public services remain efficient, effective and responsive to state residents’ needs and the potential benefits and risks inherent in the deployment of artificial intelligence and other emerging technologies. 

Government Use 

Maryland 

 

Talent Innovation Program and Fund 

Pending 

Establishes the Talent Innovation Program in the state Department of Labor to increase access to high-quality job training by using innovative and sustainable talent financing mechanisms to help meet skill needs in the state's prominent and emerging industry sectors; requires the department, beginning on Jan. 1, 2025, and each Jan. 1 thereafter, to report to the governor, the president of the Senate and speaker of the House on program activities and use of the fund. 

Effect on Labor/Employment 

Maryland 

 

Technology Advisory Commission 

Pending 

Establishes the Technology Advisory Commission to study and make recommendations on technology and science developments and use in the state; requires the governor to include in the annual budget bill an appropriation of specified amount for the commission; requires the commission to submit a report on its activities and recommendations to the governor and the General Assembly by Dec. 31 each year. 

Responsible Use; Studies 

Maryland 

 

Automated Employment Decision Tools Prohibition 

Pending 

Prohibits, subject to a certain exception, an employer from using an automated employment decision tool to make certain employment decisions; requires an employer, under certain circumstances, to notify an applicant for employment of the employer's use of an automated employment decision tool within a specified number of days after the use; provides certain penalties per violation for an employer that violates the notification requirement of the act. 

Impact Assessment; Notification; Private Sector Use; Responsible Use 

Maryland 

 

Artificial Intelligence Governance Act of 2024 

Pending 

Requires each unit of state government to conduct certain inventories and assessments; requires the Department of Information Technology to conduct certain monitoring and adopt certain policies and procedures; prohibits a unit of state government from implementing or using a system that employs artificial intelligence under certain circumstances; establishes the Governor's Artificial Intelligence Subcabinet of the Governor's Executive Council. 

Government Use; Impact Assessment; Studies 

Maryland 

 

Artificial Intelligence Tools Sales and Use Tax 

Pending 

Prohibits the secretary of commerce from issuing a tax credit certificate for the purchase of certain cybersecurity technologies or services for a taxable year beginning after specified date; alters the definition of digital product under the state sales and use tax to include certain artificial intelligence tools; allows a credit against the state income tax for costs paid or incurred by a qualified buyer for certain artificial intelligence tools. 

Private Sector Use; Taxes 

Maryland 

 

Artificial Intelligence Guidelines and Pilot Program 

Pending 

Requires the state Department of Education, in consultation with the Department of Information Technology, to develop and update guidelines on artificial intelligence for county boards of education and to develop a pilot program to support the Artificial Intelligence Subcabinet of the Governor's Executive Council; requires the pilot program to identify best uses of artificial intelligence, reduce barriers to responsible use of artificial intelligence and promote training in artificial intelligence. 

Education Use; Government Use; Responsible Use 

Maryland 

 

Safe Artificial Intelligence Act 

Pending 

Affirms the state General Assembly’s commitment to aligning with President Joseph Biden's vision for the safe and responsible use of artificial intelligence, as delineated in the Blueprint for an Artificial Intelligence Bill of Rights. 

Responsible Use 

Maryland 

 

Center for School Safety Requirements 

Pending 

Requires the specified state Center for School Safety, in collaboration with public safety agencies, the state Department of Education, local school systems, the University System of specified state and other public institutions of higher education in the state, to conduct an evaluation of firearm detection platforms; authorizes funds from the Safe Schools Fund to be used to assist local school systems in the procurement and maintenance of firearm detection platforms. 

Education Use; Government Use 

Maryland 

 

Entrepreneurial Innovation Programs 

Pending 

Establishes the Pava LaPere Innovation Acceleration Grant Program in the Maryland Technology Development Corporation; establishes the Baltimore Innovation Initiative Pilot Program within the Maryland Innovation Initiative of the corporation; requires certain appropriations for the programs to be included in the annual budget bill in certain fiscal years; repeals a mandated appropriation for a certain telework assistance program. 

Appropriations; Health Use; Private Sector Use 

Maryland 

 

Artificial Intelligence Governance Act of 2024 

Pending 

Requires each unit of state government to conduct certain inventories and assessments; requires the Department of Information Technology to conduct certain monitoring and adopt certain policies and procedures; prohibits a unit of state government from implementing or using a system that employs artificial intelligence under certain circumstances; establishes the Governor's Artificial Intelligence Subcabinet of the Governor's Executive Council. 

Education Use; Government Use; Impact Assessment; Oversight/Governance; Responsible Use; Studies 

Maryland 

 

Technology Advisory Commission 

Pending 

Establishes the Technology Advisory Commission to study and make recommendations on technology and science developments and use in the state; requires the governor to include in the annual budget bill an appropriation of a specified amount for the commission; requires the commission to submit a report on its activities and recommendations to the governor and the General Assembly by the specified date of each year. 

Government Use; Effect on Labor/Employment; Responsible Use; Studies 

Maryland 

 

Automated Employment Decision Tools Prohibition 

Pending 

Prohibits, subject to a certain exception, an employer from using an automated employment decision tool to make certain employment decisions; requires an employer, under certain circumstances, to notify an applicant for employment of the employer's use of an automated employment decision tool within 30 days after the use; provides certain penalties per violation for an employer that violates the notification requirement of the act. 

Impact Assessment; Effect on Labor/Employment; Private Sector Use 

Maryland 

 

Artificial Intelligence Guidelines and Pilot Program 

Pending 

Requires the state Department of Education, in consultation with the AI Subcabinet of the Governor's Executive Council, to develop and update guidelines on artificial intelligence for county boards of education and to develop a pilot program to support the AI Subcabinet of the Governor's Executive Council. 

Education Use; Government Use; Responsible Use 

Maryland 

 

Computer Science Education Content Standards 

Pending 

Requires public high schools to promote and increase the enrollment of certain students in high school computer science courses; requires, beginning on or before the specified date, the state Board of Education to update computer science content standards to include certain information; requires county boards of education to provide developmentally appropriate computer science instruction in public elementary and middle schools in the county. 

Education/Training 

Maryland 

 

Artificial Intelligence Advisory and Oversight 

Pending 

Establishes the Maryland Artificial Intelligence Advisory and Oversight Commission to guide the state in growing, developing, using and diversifying artificial intelligence in the state; requires the commission to report its findings and recommendations to the governor and the General Assembly on or before Dec. 1, 2024, and each year thereafter. 

Oversight/Governance 

Maryland 

S 1089 

Student and School Employee Data Privacy 

Pending 

Amends student and school employee data privacy protections to apply to websites, services and applications that use artificial intelligence. 

Education Use; Government Use 

Massachusetts 

 

Economic Revitalization 

Pending 

Finances the immediate economic revitalization, community development and housing needs of the commonwealth. 

Appropriations 

Massachusetts 

 

State Agency Automated Decision Making 

Pending 

Relates to state agency automated decision-making, artificial intelligence, transparency, fairness and individual rights. 

Government Use; Responsible Use; Studies 

Massachusetts 

 

Media Literacy in Schools 

Pending 

Relates to media literacy in schools. 

Education Use 

Massachusetts 

 

Preventing Dystopian Work Environments 

Pending 

Relates to preventing dystopian work environments. 

Impact Assessment; Effect on Labor/Employment; Notification; Private Sector Use 

Massachusetts 

 

Use of Artificial Intelligence in Mental Health Service 

Pending 

Relates to the use of artificial intelligence in mental health services. 

Health Use 

Massachusetts 

 

Automated Decision Making 

Pending 

Establishes a commission on automated decision-making by government in the commonwealth. 

Government Use; Responsible Use; Studies 

Massachusetts 

 

Appropriations for the Fiscal Year 2023 

Pending 

Makes appropriations for fiscal year 2023 to provide for supplementing certain existing appropriations and for certain other activities and project reports, recommending that the same ought to pass with an amendment striking out all after the enacting clause and inserting in place thereof the text of Senate document No. 23. 

Appropriations 

Massachusetts 

 

Regulate Generative Artificial Intelligence 

Pending 

Drafted with the help of ChatGPT, this bill regulates generative artificial intelligence models like ChatGPT. 

Responsible Use 

Massachusetts 

 

Automated Decision Making by Government 

Pending 

Establishes a commission on automated decision-making by government in the commonwealth. 

Government Use; Responsible Use; Studies 

Massachusetts 

 

Cybersecurity and Artificial Intelligence 

Pending 

Relates to cybersecurity and artificial intelligence. 

Audit; Government Use; Responsible Use 

Michigan 

 

Campaign Finances 

Pending 

Modifies sentencing guidelines for campaign finance violations. 

Elections 

Minnesota 

 

State Government 

Pending 

Relates to state government; provides for certain crime, public safety, victim, sentencing, expungement, clemency, evidence, policing, private security, corrections, firearm, controlled substances, community supervision and 911 Emergency Communication System policy provisions in statutes and laws; provides for reports; authorizes rulemaking; appropriates money. 

Government Use 

Minnesota 

 

Consumer Protection 

Pending 

Relates to consumer protection; modifies various provisions governing debt collection, garnishment and consumer finance; provides for debtor protections; requires a review of certain statutory forms. 

Private Sector Use 

Minnesota 

 

Higher Education 

Pending 

Relates to higher education; establishes academic freedom protections for state colleges and universities faculty; creates an artificial intelligence working group; requires a report. 

Education Use; Studies 

Minnesota 

 

Consumer Protection 

Pending 

Relates to consumer protection; modifies various provisions governing debt collection, garnishment and consumer finance; provides for debtor protections; requires a review of certain statutory forms. 

Private Sector Use 

Minnesota 

 

Higher Education 

Pending 

Relates to higher education; establishes academic freedom protections for state colleges and universities faculty; creates an artificial intelligence working group; requires a report. 

Education Use; Studies 

Minnesota 

 

Consumer Protection 

Pending 

Relates to consumer protection; modifies various provisions governing debt collection, garnishment and consumer finance; provides for debtor protections; requires a review of certain statutory forms. 

Private Sector Use 

Mississippi 

 

Political Communications 

Failed 

Creates a new section of law to provide that if any political communications were generated in whole or in part by synthetic media using artificial intelligence algorithms, then such political communications shall have a clear and prominent disclaimer stating that the information contained in the political communication was generated using artificial intelligence algorithms; provides that if any published campaign materials or published printed materials were generated in whole or in part by synthetic media using artificial intelligence algorithms, then such published campaign materials or published printed materials shall have a clear and prominent disclaimer stating that the material was generated using artificial intelligence algorithms; provides that if any newspaper either domiciled in the state, or outside of the state circulating inside the state of Mississippi, shall print any editorial or news story that was generated in whole or in part by synthetic media using artificial intelligence algorithms, then such newspaper shall have a clear and prominent disclaimer stating that the editorial or news story was generated using artificial intelligence algorithms; and for related purposes. 

Elections 

Mississippi 

 

Artificial Intelligence in Education Task Force Act 

Pending 

Enacts the Artificial Intelligence in Education Task Force Act for the purpose of evaluating potential applications of artificial intelligence in K-12 and higher education and to develop policy recommendations for responsible and effective uses by students and educators; establishes the task force membership requirements and appointment criteria; provides the duties and responsibilities of the task force, including that the task force provide recommendations for incorporating AI into educational standards. 

Education Use; Government Use; Responsible Use; Studies 

Mississippi 

 

Political Advertisements Using AI 

Failed 

Requires qualified political advertisements that utilize artificial intelligence to disclose the use of artificial intelligence to the public; defines what is considered a qualified political advertisement and artificial intelligence as used in this section; clarifies what information must be present in an advertisement to satisfy the disclosure requirement; specifies who is not liable for the failure of disclosure of the use of artificial intelligence. 

Elections 

Missouri 

 

Task Forces 

Pending 

Modifies provisions relating to task forces. 

Government Use; Studies 

Missouri 

 

Duties of the Literacy Advisory Council 

Pending 

Modifies duties of the literacy advisory council to include reviews of the use of technology in schools. 

Education Use 

Montana 

No 2024 legislative session 

 

 

 

 

Nebraska 

 

Nebraska Political Accountability and Disclosure Act 

Pending 

Regulates artificial intelligence in media and political advertisement under the Nebraska Political Accountability and Disclosure Act. 

Elections; Provenance 

Nebraska 

 

Dyslexia Research Grant Program 

Pending 

Creates the Dyslexia Research Grant Program. 

Education Use 

Nevada 

No 2024 legislative session 

 

 

 

 

New Hampshire 

 

Artificial Intelligence 

Pending 

Establishes the crime of fraudulent use of artificial intelligence and sets penalties therefor; establishes a cause of action for fraudulent use of artificial intelligence; establishes registration of lobbyists who have been found to have fraudulently used artificial intelligence in certain cases; establishes a mechanism for the enforcement of a ban on the fraudulent use of artificial intelligence in elections. 

Elections 

New Hampshire 

 

Use of Artificial Intelligence for Personal Defense 

Failed 

Affirms that, under the Second Amendment to the U.S. Constitution, a person may use autonomous artificial intelligence for defense purposes, subject to specified limitations. 

 

New Hampshire 

 

Artificial Intelligence 

Pending 

Relates to the use of artificial intelligence by state agencies; prohibits state agencies from using artificial intelligence to manipulate, discriminate against or surveil members of the public. 

Government Use; Responsible Use 

New Jersey 

 

Artificial Intelligence Economic Growth Study 

Pending 

Requires the commissioner of Labor and Workforce Development to conduct a study and issue a report on the impact of artificial intelligence on the growth of the state economy. 

Effect on Labor/Employment 

New Jersey 

 

Hiring Decisions Automated Employment Decision Tools 

Pending 

Regulates use of automated employment decision tools in hiring decisions. 

Audit; Notification; Private Sector Use 

New Jersey 

 

Automated Employment Decision Tools Auditing 

Pending 

Creates standards for independent bias auditing of automated employment decision tools. 

Audit; Private Sector Use 

New Jersey 

 

Hiring Process Artificial Intelligence Uses 

Pending 

Regulates use of artificial intelligence-enabled video interviews in hiring process. 

Notification; Private Sector Use 

New Jersey 

 

Identity Theft 

Pending 

Extends crime of identity theft to include fraudulent impersonation or false depiction by means of artificial intelligence or deepfake technology. 

Criminal Use 

New Jersey 

 

Use of Automated Tools in Hiring Decisions 

Pending 

Regulates use of automated tools in hiring decisions to minimize discrimination in employment. 

Audit; Notification; Private Sector Use 

New Jersey 

 

State Agencies Automated Systems Regulations 

Pending 

Regulates use of automated systems and artificial intelligence by state agencies. 

Government Use; Oversight/Governance; Studies 

New Jersey 

 

Use of Automated Tools in Hiring Decisions 

Pending 

Regulates use of automated tools in hiring decisions to minimize discrimination in employment. 

Audit; Effect on Labor/Employment; Notification; Private Sector Use 

New Mexico 

 

Campaign Reporting Act 

 

Provides that if a person creates, produces or purchases an advertisement that contains materially deceptive media, which includes but is not limited to artificial intelligence, the advertisement shall include a disclaimer; creates the crime of distributing or entering into an agreement with another person to distribute materially deceptive media to, among other things, mislead electors; provides for civil and criminal penalties. 

Elections 

New Mexico 

 

Use of Artificial Intelligence Transparency 

Failed - Adjourned 

Relates to use of artificial intelligence transparency. 

Government Use; Impact Assessment 

New Mexico 

 

Artificial Intelligence Work Group 

Failed - Adjourned 

Relates to artificial intelligence work group. 

Government Use; Studies 

New York 

 

Automated Employment Decision Tools 

Failed 

Establishes criteria for the use of automated employment decision tools; provides for enforcement for violations of such criteria. 

Impact Assessment; Private Sector Use; Responsible Use 

New York 

 

Motor Vehicle Insurer Discrimination 

Pending 

Prohibits motor vehicle insurers from discrimination based on socioeconomic factors. 

Private Sector Use 

New York 

 

Digital Fairness Act 

Pending 

Enacts the Digital Fairness Act; requires any entity that conducts business in New York and maintains the personal information of 500 or more individuals to provide meaningful notice about their use of personal information; establishes unlawful discriminatory practices relating to targeted advertising. 

Audit; Government Use; Impact Assessment; Responsible Use; Studies 

New York 

 

State Privacy Act 

Pending 

Enacts the State Privacy Act to require companies to disclose their methods of de-identifying personal information, to place special safeguards around data sharing and to allow consumers to obtain the names of all entities with whom their information is shared. 

Audit; Government Use; Notification; Private Sector Use 

New York 

 

School Safety Planning and Training 

Pending 

Relates to classroom safety mechanisms, emergency medical equipment and evidence-based best practices for school safety planning and training. 

Education Use; Government Use 

New York 

 

State Units 

Pending 

Requires state units to purchase a product or service that is or contains an algorithmic decision system that adheres to responsible artificial intelligence standards; specifies content included in responsible artificial intelligence standards; requires the commissioner of taxation and finance to adopt certain regulations; alters the definition of unlawful discriminatory practice to include acts performed through algorithmic decision systems. 

Government Use; Responsible Use 

New York 

 

Political Artificial Intelligence Disclaimer (PAID) Act 

Pending 

Amends the Election Law, in relation to the use and disclosure of synthetic media; provides that a political communication which was produced by or includes any synthetic media shall be required to disclose the use of such synthetic media; provides that the disclosure on printed or digital political communications, including but not limited to brochures, flyers, posters, mailings or internet advertising, shall be printed or typed in an appropriate legible form. 

Elections 

New York 

 

State Office of Algorithmic Innovation 

Pending 

Creates a state Office of Algorithmic Innovation to set policies and standards to ensure algorithms are safe, effective, fair and ethical and that the state is conducive to promoting algorithmic innovation. 

Oversight/Governance; Responsible Use 

New York 

 

Applicants of the Empire State Film Production Credit 

Pending 

Amends the Tax Law; prohibits applicants of the Empire State Film Production Credit from using artificial intelligence that would displace any natural person in their productions. 

Effect on Labor/Employment; Private Sector Use 

New York 

 

Department of Labor Study on Artificial Intelligence 

Pending 

Requires the Department of Labor to study the long-term impact of artificial intelligence on the state workforce, including but not limited to on-the-job performance, productivity, training, education requirements, privacy and security; prohibits any state entity from using artificial intelligence in any way that would displace any natural person from their employment with such state entity until the department's final report is received. 

Education/Training; Effect on Labor/Employment; Studies 

New York 

 

Employers and Employment Agencies 

Pending 

Requires employers and employment agencies to notify candidates for employment if machine learning technology is used to make hiring decisions prior to the use of such technology. 

Effect on Labor/Employment; Private Sector Use 

New York 

 

Disclosure of the Use of Artificial Intelligence 

Pending 

Requires disclosure of the use of artificial intelligence in political communications; directs the state board of elections to create criteria for determining whether a political communication contains an image or video footage created through generative artificial intelligence and to create a definition of content generated by artificial intelligence. 

Elections 

New York 

 

Use of Automated Decision Tools by Landlords 

Pending 

Relates to the use of automated decision tools by landlords for making housing decisions; sets conditions and rules for use of such tools. 

Private Sector Use 

New York 

 

Generative Artificial Intelligence Created Books 

Pending 

Provides that any book that was wholly or partially created through the use of generative artificial intelligence, published in the state, shall conspicuously disclose upon the cover of the book that such book was created with the use of generative artificial intelligence; provides that books subject to such provisions shall include, but not be limited to, all printed and digital books, regardless of the target age group or audience, consisting of text, pictures, audio, puzzles, games or any combination. 

Private Sector Use 

New York 

 

Oaths of Responsible Use Requirement 

Pending 

Provides that every operator of a generative or surveillance advanced artificial intelligence system that is accessible to residents of the state shall require a user to create an account prior to utilizing such service; provides that prior to each user creating an account, such operator shall present the user with a conspicuous digital or physical document that the user must affirm under penalty of perjury prior to the creation or continued use of such account. 

Responsible Use; Private Sector Use 

New York 

 

Admissibility of Evidence Created or Processed by AI 

Pending 

Sets rules and procedures for the admissibility of evidence created or processed by artificial intelligence. 

Government Use; Judicial Use 

New York 

 

Artificial Intelligence Bill of Rights 

Pending 

Enacts the New York Artificial Intelligence Bill of Rights to provide residents of the state with rights and protections to ensure that any system making decisions without human intervention impacting their lives does so lawfully, properly and with meaningful oversight. 

Audit; Government Use; Impact Assessment; Notification; Responsible Use; Private Sector Use 

New York 

 

Contract Requirements for Digital Replicas 

Pending 

Establishes requirements for contracts involving the creation and use of digital replicas. 

Private Sector Use 

New York 

 

Publications Using Artificial Intelligence 

Pending 

Requires that every newspaper, magazine or other publication printed or electronically published in the state, which contains the use of generative artificial intelligence or other information communication technology, shall identify that certain parts of such newspaper, magazine or publication were composed through the use of artificial intelligence or other information communication technology. 

Provenance 

New York 

 

Robot Tax Act 

Pending 

Imposes on every corporation subject to certain taxes a tax surcharge in an amount equal to the sum of any taxes or fees imposed by the state or any political subdivision thereof, computed based on an employee's wage, including but not limited to income tax and unemployment insurance, paid by the corporation or the employee for the employee's final year of employment with the company, where such employee was displaced in such taxable year due to the employee's position being replaced by technology. 

Effect on Labor/Employment 

New York 

 

Advanced Artificial Intelligence Licensing Act 

Pending 

Enacts the Advanced Artificial Intelligence Licensing Act; provides for regulation of advanced artificial intelligence systems (Part A); requires registration and licensing of high-risk advanced artificial intelligence systems and related provisions regarding the operation of such systems (Part B); establishes the advanced artificial intelligence ethical code of conduct (Part C); prohibits the development and operation of certain artificial intelligence systems (Part D). 

Oversight/Governance; Responsible Use; Studies 

New York 

 

Electronic Monitoring by Employer or Employment Agency 

Failed 

Restricts the use by an employer or an employment agency of electronic monitoring or an automated employment decision tool to screen a candidate or employee for an employment decision unless such tool has been the subject of a bias audit within the last year and the results of such audit have been made public; requires notice to employment candidates of the use of such tools; provides remedies; makes a conforming change to the civil rights law. 

Audit; Private Sector Use; Responsible Use 

New York 

 

Determination of Insurance Rates 

Pending 

Prohibits the use of external consumer data and information sources when determining insurance rates; provides that no insurer shall unfairly discriminate based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression, or use any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources, in a way that unfairly discriminates. 

Private Sector Use 

New York 

 

Disclosure of Political Communication Produced by AI 

Pending 

Requires the disclosure of political communication produced by artificial intelligence technology; defines terms; provides that any person who, with intent to damage a candidate or deceive the electorate, creates and disseminates artificial media shall be guilty of a class E felony; establishes the fair use of artificial intelligence code; makes related provisions. 

Elections 

New York 

 

Political Communication Using Generative AI 

Pending 

Prohibits political communication containing any photo, video or audio depiction of a candidate created in whole or in part using generative artificial intelligence. 

Elections 

New York 

 

Political Communications Utilizing AI 

Pending 

Requires that any political communication, whether made by phone call, email or other message-based communication, that utilizes an artificial intelligence system to engage in human-like conversation with another person apprise that person, by reasonable means, of the fact that they are communicating with an artificial intelligence system. 

Elections 

New York 

 

Notice Requirements From Insurers 

Pending 

Provides for notice requirements where an insurer authorized to write accident and health insurance in this state, a corporation organized pursuant to article 43 of this chapter, or a health maintenance organization certified pursuant to article 44 of the public health law uses artificial intelligence-based algorithms in the utilization review process. 

Health Use; Notification 

New York 

 

New York AI Child Safety Act 

Pending 

Enacts the New York AI Child Safety Act; relates to the unlawful promotion or possession of a sexual performance of a child created by digitization; defines terms; increases penalties from a class D or E felony to a class C felony. 

Child Pornography; Criminal Use 

New York 

 

Use of Automated Employment Decision Tools 

Pending 

Establishes criteria for the use of automated employment decision tools; provides for enforcement for violations of such criteria. 

Impact Assessment; Private Sector Use; Responsible Use 

New York 

 

Screening of Candidate for Employment Decision 

Pending 

Restricts the use by an employer or an employment agency of electronic monitoring or an automated employment decision tool to screen a candidate or employee for an employment decision unless such tool has been the subject of a bias audit within the last year and the results of such audit have been made public; requires notice to employment candidates of the use of such tools; provides remedies; makes a conforming change to the civil rights law. 

Audit; Notification; Private Sector Use 

New York 

 

Digital Fairness Act 

Pending 

Enacts the Digital Fairness Act; requires any entity that conducts business in New York and maintains the personal information of 500 or more individuals to provide meaningful notice about their use of personal information; establishes unlawful discriminatory practices relating to targeted advertising. 

Audit; Government Use; Impact Assessment; Responsible Use; Studies 

New York 

 

Fashion Workers Act 

Pending 

Provides that a model management company shall not engage in business from offices in the state or enter any arrangement with a person for the purpose of providing model management company services to persons in the state unless the model management company is registered; provides that each model management company required to be registered shall provide the Department of Labor with specified information. 

Private Sector Use 

New York 

 

Use of Automated Employment Decision Tools 

Pending 

Specifies the requirements for a deployer of an automated employment decision tool; provides that a deployer shall perform an impact assessment for any automated employment decision tool that such deployer uses; provides that a developer of an automated employment decision tool shall provide a deployer with a statement regarding the intended use of the automated employment decision tool and documentation regarding, among other things, the known limitations of the consequential automated decision tool. 

Impact Assessment; Private Sector Use; Responsible Use 

New York 

 

State Commission to Study Artificial Intelligence 

Pending 

Creates a temporary state commission to study and investigate how to regulate artificial intelligence, robotics and automation; repeals such commission. 

Studies 

New York 

 

Political Artificial Intelligence Disclaimer Act 

Pending 

Provides that all political committees that make an expenditure for a political communication shall be required to disclose the identity of the political committee which made the expenditure for such political communication; provides that any political communication which was produced by or includes any synthetic media shall be required to disclose the use of such synthetic media; provides that all committees shall keep records of their use of synthetic media during each campaign cycle. 

Elections 

New York 

 

Motor Vehicle Insurers 

Pending 

Prohibits motor vehicle insurers from discrimination based on socioeconomic factors in determining algorithms used to construct actuarial tables, coverage terms, premiums and/or rates. 

Private Sector Use 

New York 

 

Advertisement Disclosures 

Pending 

Amends the General Business Law; requires advertisements to disclose the use of synthetic media; imposes a $1,000 civil penalty for a first violation and a $5,000 penalty for any subsequent violation. 

Private Sector Use 

New York 

 

Applicants of the Empire State Film Production Credit 

Pending 

Excludes a production which uses artificial intelligence in a manner which results in the displacement of employees whose salaries are qualified expenses, unless such replacement is permitted by a current collective bargaining agreement in force covering such employees, from the definition of qualified film for the purposes of the Empire State Film Production Credit; defines qualified film. 

Effect on Labor/Employment; Private Sector Use 

New York 

 

Legislative Oversight of Automated Decision-Making 

Pending 

Provides that any state agency shall be prohibited from utilizing or applying any automated decision-making system in performing any function that is related to the delivery of any public assistance benefit, will have a material impact on the rights, civil liberties, safety or welfare of any individual within the state, or affects any statutorily or constitutionally provided right of an individual, unless such utilization or application of the automated decision-making system is authorized in law. 

Government Use; Impact Assessment; Oversight/Governance 

New York 

 

Disclosure of the Use of Artificial Intelligence 

Pending 

Requires disclosure of the use of artificial intelligence in political communications; provides that any political communication, regardless of whether such communication is considered a substantial or nominal expenditure, that uses an image or video footage that was generated in whole or in part with the use of artificial intelligence, as defined by the state board of elections, shall be required to disclose that artificial intelligence was used in such communication. 

Elections 

New York 

 

Use of Electronic Monitoring by Employer 

Pending 

Restricts the use by an employer or an employment agency of electronic monitoring or an automated employment decision tool to screen a candidate or employee for an employment decision unless such tool has been the subject of a bias audit within the last year and the results of such audit have been made public; requires notice to employment candidates of the use of such tools; provides remedies; makes a technical change to the civil rights law. 

Impact Assessment; Private Sector Use; Responsible Use 

New York 

 

Contract Requirements 

Pending 

Establishes contract requirements for contracts involving the creation and use of digital replicas. 

Private Sector Use 

New York 

 

Use of Automated Decision Tools by Landlords 

Pending 

Relates to the use of automated decision tools by landlords for making housing decisions; sets conditions and rules for use of such tools. 

Private Sector Use; Responsible Use 

New York 

 

Publication Use of Generative Artificial Intelligence 

Pending 

Requires that every newspaper, magazine or other publication printed or electronically published in this state, which contains the use of generative artificial intelligence or other information communication technology, shall identify that certain parts of such newspaper, magazine or publication were composed through the use of artificial intelligence or other information communication technology. 

Private Sector Use; Provenance 

New York 

 

Publishers of Books Created Using AI 

Pending 

Requires publishers of books created wholly or partially with the use of generative artificial intelligence to disclose such use of generative artificial intelligence before the completion of such sale; applies to all printed and digital books consisting of text, pictures, audio, puzzles, games or any combination thereof. 

Private Sector Use; Provenance 

New York 

 

State Commission on AI, Robotics and Automation 

Pending 

Creates a temporary state commission to study and investigate how to regulate artificial intelligence, robotics and automation; repeals such commission. 

Effect on Labor/Employment; Studies 

New York 

 

Collection of Oaths of Responsible Use 

Pending 

Requires the collection of oaths of responsible use from users of certain generative or surveillance advanced artificial intelligence systems by the operators of such systems and transmission of such oaths to the attorney general. 

Government Use; Private Sector Use; Responsible Use 

New York 

 

Artificial Intelligence Bill of Rights 

Pending 

Enacts the state Artificial Intelligence Bill of Rights to provide residents of the state with rights and protections to ensure that any system making decisions without human intervention that impact their lives does so lawfully, properly and with meaningful oversight. 

Audit; Impact Assessment; Government Use; Notification; Private Sector Use; Responsible Use 

New York 

 

Registration of Artificial Intelligence Companies 

Pending 

Requires the registration of certain companies whose primary business purpose is related to artificial intelligence as evidenced by their North American Industry Classification System code. 

Private Sector Use 

New York 

 

Admissibility of Evidence Created by AI 

Pending 

Sets rules and procedures for the admissibility of evidence created or processed by artificial intelligence. 

Judicial Use 

New York 

 

New York Artificial Intelligence Ethics Commission 

Pending 

Establishes the New York Artificial Intelligence Ethics Commission. 

Government Use; Oversight/Governance; Private Sector Use; Responsible Use; Studies 

North Carolina 

 

Reductions In Energy and Water Consumption 

Pending 

Requires reductions in energy and water consumption in public buildings. 

Government Use 

North Carolina 

 

Automation and the Workforce 

Pending 

Establishes a study committee on automation and the workforce. 

Private Sector Use; Effect on Labor/Employment; Studies 

North Carolina 

 

Stormwater Permits 

Pending 

Establishes deadlines for decisions by the Department of Environmental Quality on applications for stormwater permits and applications for permits proceeding under the express review program; makes other changes. 

Appropriations; Government Use 

North Dakota 

No 2024 legislative session 

 

 

 

 

Ohio 

 

AI Generated Product Watermark 

Pending 

Requires AI-generated products to have a watermark; prohibits simulated child pornography; prohibits identity fraud using a replica of a person. 

Child Pornography; Criminal Use; Provenance 

Oklahoma 

 

Artificial Intelligence Technology Act of 2024 

Pending 

Relates to artificial intelligence technology; creates the state Artificial Intelligence Act of 2024; provides for non-codification; provides an effective date. 

 

Oklahoma 

 

Artificial Intelligence Bill of Rights 

Pending 

Relates to artificial intelligence; creates the state Artificial Intelligence Bill of Rights; provides definitions; establishes the rights of Oklahomans when interacting with artificial intelligence; provides for codification; provides an effective date. 

Notification; Provenance; Responsible Use 

Oklahoma 

 

Artificial Intelligence Utilization Review Act 

Pending 

Relates to health insurance; creates the Artificial Intelligence Utilization Review Act; provides definitions; mandates a notice for artificial intelligence use in review; mandates human review of specialist's denials; provides civil liability; provides penalties; provides caps on penalties; provides for codification; provides an effective date. 

Health Use 

Oklahoma 

 

Schools Artificial Intelligence Program 

Pending 

Relates to schools; directs the state Department of Education to make available a certain artificial intelligence program; directs the department to provide certain trainings, workshops and courses; directs the department to facilitate certain partnerships; mandates the integration of certain concepts in school curricula; directs the development of certain education modules; clarifies preference for certain learning and experience; creates the state Artificial Intelligence Education Revolving Fund. 

Appropriations; Education/Training; Education Use 

Oklahoma 

 

State Government and Definitions 

Pending 

Relates to state government; provides definitions; directs the Office of Management and Enterprise Services to conduct certain inventory; provides required information; directs inventory to be made publicly available; directs certain ongoing assessments be made of artificial intelligence systems; directs for development of certain policies and procedures; requires certain policies be included; permits revision of policies and procedures; requires policies and procedures be posted. 

Government Use; Impact Assessment; Responsible Use 

Oklahoma 

 

Ethical Artificial Intelligence Act 

Pending 

Relates to technology; creates a new title; creates the Ethical Artificial Intelligence Act; provides definitions; directs deployers of automated decision tools to complete and document certain impact assessment; provides required details of impact assessment; directs developers of automated decision tools to complete and document certain impact assessment; directs deployers and developers to make impact assessment of certain updates. 

Government Use; Impact Assessment; Private Sector Use; Responsible Use 

Oklahoma 

 

Crimes and Punishments 

Pending 

Relates to crimes and punishments; relates to the state law on obscenity and child pornography; expands scope of crime to include materials and pornography generated via artificial intelligence; modifies certain terms to include artificial intelligence-generated images; defines term; provides an effective date. 

Child Pornography; Criminal Use 

Oklahoma 

 

State Government 

Pending 

Relates to state government; creates the Citizen's Bill of Rights; provides short title; defines terms; restricts certain entities from taking certain actions relating to currency; guarantees certain rights for the use of gold and silver; restricts certain entities from taking certain actions relating to digital identification; prohibits certain entities from implementing a social credit score; prohibits certain entities from taking certain actions relating to medical procedures. 

Government Use; Health Use; Effect on Labor/Employment; Private Sector Use; Responsible Use 

Oregon 

 

Artificial Intelligence 

To governor 

Establishes the Task Force on Artificial Intelligence; provides that the task force shall examine and identify terms and definitions related to artificial intelligence that are used in technology-related fields and may be used for legislation; provides that the task force shall begin its work by examining the terms and definitions used by the U.S. government and relevant federal agencies. 

Studies 

Oregon 

 

AI in Campaign Ads 

To governor 

Relates to the use of artificial intelligence in campaign communications; provides that a campaign communication that includes any form of synthetic media must include a disclosure stating that the image, audio recording or video recording has been manipulated; provides that the secretary of state may institute proceedings to enjoin any violation of this requirement; provides that the court shall impose a civil penalty. 

Elections 

Pennsylvania 

 

Artificial Intelligence Registry 

Pending 

Amends the act known as The Administrative Code, in powers and duties of the Department of State and its departmental administrative board; provides for artificial intelligence registry. 

Oversight/Governance; Private Sector Use 

Pennsylvania 

 

Sexual Abuse of Children 

Pending 

Amends specified title on crimes and offenses of the Pennsylvania Consolidated Statutes, in sexual offenses; provides for the offense of unlawful dissemination of artificially generated depiction; relates to minors; provides for the offense of sexual abuse of children and for the offense of transmission of sexually explicit images by minor. 

Criminal Use 

Pennsylvania 

 

Uniform 911 Surcharge and for Termination 

Pending 

Relates to 911 emergency communication services; provides for a Legislative Budget and Finance Committee study; provides for termination; provides that the committee shall study and make recommendations with respect to, among other things, determining any efficiencies that can be gained in the current 911 system or potential efficiencies that can be gained with a different 911 system. 

Government Use 

Pennsylvania 

 

Administration of Assistance Programs 

Pending 

Amends the act known as the Human Services Code, in public assistance; provides for administration of assistance programs. 

Government Use 

Pennsylvania 

 

Health Insurers 

Pending 

Provides for disclosure by health insurers of the use of artificial intelligence-based algorithms in the utilization review process. 

Health Use; Notification 

Pennsylvania 

 

Automated Employment Decision Tool 

Pending 

Amends the act known as the Pennsylvania Human Relations Act; provides for definitions; provides for use of automated employment decision tool; provides for civil penalties. 

Audit; Notification; Private Sector Use; Responsible Use 

Pennsylvania 

 

Fraudulent Misrepresentation of a Candidate 

Pending 

Amends the act known as the Pennsylvania Election Code, in penalties; provides for the offense of fraudulent misrepresentation of a candidate; imposes a penalty. 

Elections 

Pennsylvania 

 

Artificial Intelligence 

Pending 

Directs the Joint State Government Commission to establish an advisory committee to conduct a study on the field of artificial intelligence and its impact and potential future impact. 

Government Use; Responsible Use; Private Sector Use; Studies 

Pennsylvania 

 

Offense of Sexual Abuse of Children 

Pending 

Amends a specified title related to crimes and offenses of the State Consolidated Statutes, in minors; provides for the offense of sexual abuse of children and for the offense of transmission of sexually explicit images by minor. 

Child Pornography; Criminal Use 

Pennsylvania 

 

Artificial Intelligence 

Pending 

Directs the Joint State Government Commission to establish an advisory committee to conduct a study on the field of artificial intelligence and its impact and potential future impact. 

Government Use; Responsible Use; Private Sector Use; Studies 

Puerto Rico 

 

Penal Code 

Pending 

Amends the Penal Code for the purposes of including as an aggravating circumstance to any crime that the crime has been committed through the use of artificial intelligence; adds a definition of artificial intelligence. 

Criminal Use 

Puerto Rico 

 

Electoral Code 

Pending 

Amends the Electoral Code 2020 to include as an aggravating circumstance any electoral crime that is committed through the use of artificial intelligence; adds a definition of artificial intelligence. 

Criminal Use; Elections 

Puerto Rico 

 

Innovation and Technology Service Law 

Pending 

Relates to the Innovation and Technology Service Law for the purposes of declaring and establishing the government’s public policy on the development and use of artificial intelligence capabilities by government agencies. 

Government Use 

Puerto Rico 

 

Public Safety and Science and Technology 

 

Requires the Public Safety, Science and Technology Commission of the House of Representatives of the Commonwealth of Puerto Rico to investigate the implications of the use of artificial intelligence technologies with respect to security, human rights and civil liberties, privacy, health, ethics, economy, education, manufacturing, agriculture, energy, environment, consumption and any other effect AI technologies may have on people's daily lives. 

Government Use; Private Sector Use; Responsible Use; Studies 

Puerto Rico 

 

Artificial Intelligence Officer 

Pending 

Creates an artificial intelligence officer attached to the Puerto Rico Innovation and Technology Service and establishes their duties; creates the Artificial Intelligence Council of the Government; establishes its duties; orders Puerto Rico Innovations and Technology Services to create and develop the public policy of the government in relation to the implementation of artificial intelligence through the agencies. 

Government Use; Oversight/Governance 

Puerto Rico 

 

Department of State of the Government 

Pending 

Orders the Department of State to make a registry of all companies or businesses that operate, develop or use artificial intelligence systems. 

Private Sector Use 

Rhode Island 

 

Business Regulation Regarding Insurance Discrimination 

Pending 

Prohibits the use of any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources, in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression, by an insurer regarding any insurance practice. 

Private Sector Use; Responsible Use 

Rhode Island 

 

The Atmosphere Protection Act 

Pending 

Prohibits the intentional release of hazardous polluting emissions into the atmosphere and provides for a natural climate while increasing resiliency by prohibiting deliberate atmospheric pollution and manipulation of the environment; provides that violation fees would be collected and placed into a trust fund for municipal-level allocation for projects that promote the safety of life and property as well as environmental and agricultural health free from hazardous atmospheric activities. 

Government Use; Private Sector Use 

Rhode Island 

 

Use of Artificial Intelligence in Mental Health Service 

Pending 

Defines artificial intelligence and regulates its use in providing mental health services. 

Health Use 

Rhode Island 

 

Generative Artificial Intelligence Models Regulations 

Pending 

Authorizes the office of attorney general to promulgate, adopt and enforce rules and regulations concerning generative artificial intelligence models, such as ChatGPT, to protect the public's safety, privacy and intellectual property rights. 

Private Sector Use 

Rhode Island 

 

Artificial Intelligence Accountability Act 

Pending 

Requires the Department of Administration to provide an inventory of all state agencies using artificial intelligence and establishes a permanent commission to monitor the use of artificial intelligence in state government and make recommendations for state government policy and other decisions; directs the commission to make recommendations regarding changes in the way state government uses artificial intelligence. 

Government Use; Responsible Use; Studies 

Rhode Island 

 

Automated Decision Tools 

Pending 

Requires a deployer and a developer of an automated decision tool to perform an impact assessment that includes a statement of the purpose of the automated decision tool and its intended benefits, uses and deployment contexts. 

Government Use; Impact Assessment; Notification; Private Sector Use; Responsible Use 

Rhode Island 

 

Regulatory Provisions on Automated Decision Tools 

Pending 

Requires companies that develop or deploy high-risk AI systems to conduct impact assessments and adopt risk management programs; applies to both developers and deployers of AI systems and imposes obligations on these different types of companies based on their role in the AI ecosystem; requires deployers of high-risk AI systems to perform an impact assessment prior to deploying an AI system and annually thereafter. 

Impact Assessment; Private Sector Use 

Rhode Island 

 

Artificial Intelligence in the Decision-Making Process 

Pending 

Establishes a permanent commission to monitor the use of artificial intelligence in state government to make state government policy and other decisions; directs the commission to make recommendations regarding changes in the way state government uses artificial intelligence. 

Government Use; Oversight/Governance; Responsible Use 

Rhode Island 

 

Use of Facial and Bio Metric Recognition Technology 

Pending 

Relates to state affairs and government; relates to video lottery games, table games and sports wagering; relates to the Rhode Island Consumer Protection Gaming Act; prohibits the use of facial recognition technology and biometric recognition technology in video-lottery terminals at pari-mutuel licensees in the state or in online betting applications. 

Private Sector Use 

A. Samoa 

Not available 

 

 

 

 

South Carolina 

None 

 

 

 

 

South Dakota 

 

Child Pornography 

 

Revises provisions related to the possession, distribution and manufacture of child pornography; provides that a person is guilty of possessing child pornography if the person knowingly possesses any visual depiction of a minor engaging in a prohibited sexual act, or in a simulation of a prohibited sexual act, or any computer-generated child pornography; provides that a violation is a Class 4 felony; defines the crime of distributing child pornography. 

Child Pornography; Criminal Use 

Tennessee 

 

Education Rules and Policies 

Pending 

Relates to education; requires the governing board of each public institution of higher education to promulgate rules, and requires each local board of education and governing body of a public charter school to adopt a policy, regarding the use of artificial intelligence technology by students, teachers, faculty and staff for instructional and assignment purposes. 

Education Use; Government Use 

Tennessee 

 

Election Laws 

Pending 

Requires political advertisements that are created in whole or in part by artificial intelligence to include certain disclaimers; requires materially deceptive media disseminated for purposes of a political campaign to include certain disclaimers; establishes criminal penalties and the right to injunctive relief for violations. 

Criminal Use; Elections 

Tennessee 

 

Boards and Commissions 

Pending 

Creates the artificial intelligence advisory council to recommend an action plan to guide awareness, education and usage of artificial intelligence in state government that aligns with the state's policies and goals and that supports public employees in the efficient and effective delivery of customer service. 

Government Use; Studies 

Tennessee 

 

State Government 

Pending 

Requires each department of the executive branch to develop a plan to prevent the malicious and unlawful use of artificial intelligence for the purpose of interfering with the operation of the department, its agencies and divisions and persons and entities regulated by the respective department; requires each department to report its plan, findings and recommendations to each member of the General Assembly no later than the specified date. 

Government Use; Responsible Use 

Tennessee 

 

Protecting Schools and Events Act 

Pending 

Enacts the Protecting State Schools and Events Act; subject to appropriations, requires the Department of Education to contract for the provision of walk-through metal detectors to local education agencies. 

Education Use; Government Use 

Tennessee 

 

Consumer Protection 

Pending 

Requires a person to include a disclosure on certain content generated by artificial intelligence that the content was generated using artificial intelligence; makes it an unfair or deceptive act or practice under the Tennessee Consumer Protection Act of 1977 to distribute certain content generated using artificial intelligence without the required disclosure. 

Provenance 

Tennessee 

 

Artificial Intelligence Advisory Council Act 

Pending 

Enacts the Tennessee Artificial Intelligence Advisory Council Act. 

Government Use; Effect on Labor/Employment; Private Sector Use; Responsible Use; Studies 

Tennessee 

 

Statutes and Codification 

Pending 

Defines life for statutory construction purposes to mean the condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity and continual change preceding death; excludes from the definition artificial intelligence, a computer algorithm, a software program, computer hardware or any type of machine. 

Personhood 

Tennessee 

 

Directed Studies 

Pending 

Requires Tennessee Advisory Commission on Intergovernmental Relations to conduct a study on approaches to the regulation of artificial intelligence and submit a report of such study, including recommended legislative approaches, to the speakers of each house and the legislative librarian no later than the specified date. 

Studies 

Tennessee 

 

Directed Studies 

Pending 

Relates to directed studies; requires Tennessee Advisory Commission on Intergovernmental Relations to conduct a study on approaches to the regulation of artificial intelligence and submit a report of such study, including recommended legislative approaches, to the speakers of each house and the legislative librarian no later than specified date. 

Studies 

Tennessee 

 

Use of Artificial Intelligence Technology by Students 

 

Requires the governing board of each public institution of higher education to promulgate rules, and requires each local board of education and governing body of a public charter school to adopt a policy, regarding the use of artificial intelligence technology by students, teachers, faculty and staff for instructional and assignment purposes. 

Education Uses 

Tennessee 

 

Election Laws 

Pending 

Requires political advertisements that are created in whole or in part by artificial intelligence to include certain disclaimers; requires materially deceptive media disseminated for purposes of a political campaign to include certain disclaimers; establishes criminal penalties and the right to injunctive relief for violations. 

Criminal Use; Elections 

Tennessee 

 

State Government 

Pending 

Requires each department of the executive branch to develop a plan to prevent the malicious and unlawful use of artificial intelligence for the purpose of interfering with the operation of the department, its agencies and divisions and persons and entities regulated by the respective department; requires each department to report its plan, findings and recommendations to each member of the General Assembly no later than the specified date. 

Government Use; Responsible Use 

Tennessee 

 

Boards and Commissions 

Pending 

Creates the artificial intelligence advisory council to recommend an action plan to guide awareness, education and usage of artificial intelligence in state government that aligns with the state's policies and goals and that supports public employees in the efficient and effective delivery of customer service. 

Government Use; Studies 

Tennessee 

 

Statutes and Codification 

Pending 

Defines life for statutory construction purposes to mean the condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity and continual change preceding death; excludes from the definition artificial intelligence, a computer algorithm, a software program, computer hardware or any type of machine. 

Personhood 

Tennessee 

 

Sexual Offenses 

Pending 

Specifies that for the purposes of sexual exploitation of children offenses, the term material includes computer-generated images created, adapted or modified by artificial intelligence; defines artificial intelligence. 

Child Pornography; Criminal Use 

Tennessee 

 

Artificial Intelligence Advisory Council Act 

Pending 

Enacts the Tennessee Artificial Intelligence Advisory Council Act. 

Education Uses; Government Use; Effect on Labor/Employment; Private Sector Use; Responsible Use; Studies 

Tennessee 

 

Walk Through Metal Detectors 

Pending 

Enacts the Protecting Tennessee Schools and Events Act; subject to appropriations, requires the Department of Education to contract for the provision of walk-through metal detectors to Local Education Agencies. 

Education Uses; Government Use 

Texas 

No 2024 legislative session 

 

 

 

 

Utah 

 

Utah Legal Personhood Amendments 

To governor 

Addresses legal personhood. 

Personhood 

Utah 

 

Artificial Intelligence in Political Advertising 

Failed 

Addresses artificial intelligence and political advertising. 

Elections 

Utah 

 

Criminal Justice Amendments 

 

Amends provisions regarding the chair of a Criminal Justice Coordinating Council; amends the crime for an escape; moves the crime for an aggravated escape to a separate statute; addresses the use of an algorithm or a risk assessment tool score in determinations about pretrial release, diversion, sentencing, probation and parole; relates to pretrial risk assessment tools; provides that the court may not rely solely on an algorithm or a risk assessment tool score when making any decision regarding probation. 

Government Use 

Utah 

 

Boards and Commissions Modifications 

To governor 

Modifies boards and commissions. 

Effect on Labor/Employment 

Utah 

 

Governors Office of Economic Opportunity 

 

Modifies provisions related to the Governor's Office of Economic Opportunity; revises definitions; modifies the membership of the Governor's Office of Economic Opportunity board; modifies provisions regarding the Unified Economic Opportunity Commission; modifies provisions about the purpose of the Economic Opportunity Act; makes technical and conforming changes. 

Education/Training 

Utah 

 

Information Technology Act Amendments 

 

Relates to an audio or visual communication that, among other things, is intended to influence voting for or against a candidate or ballot proposition in an election or primary in the state; provides that an audio communication that contains synthetic audio media shall include specified words audibly at the beginning and end of the communication; provides that a visual communication that contains synthetic media shall display specified words during each portion containing synthetic media. 

Elections 

Utah 

 

Artificial Intelligence Amendments 

 

Creates the Artificial Intelligence Policy Act. 

Notification; Oversight/Governance; Private Sector Use 

Vermont 

 

Electronic Monitoring 

Pending 

Relates to restricting electronic monitoring of employees and employment-related automated decision systems. 

Impact Assessment; Private Sector Use; Responsible Use 

Vermont 

 

Artificial Intelligence Systems 

Pending 

Relates to regulating developers and deployers of certain artificial intelligence systems. 

Government Use; Impact Assessment; Private Sector Use; Responsible Use; Provenance 

Vermont 

 

Inherently Dangerous Artificial Intelligence Systems 

Pending 

Relates to creating oversight and liability standards for developers and deployers of inherently dangerous artificial intelligence systems. 

Government Use; Impact Assessment; Private Sector Use; Responsible Use 

Virginia 

 

Department of Criminal Justice Services 

Failed 

Relates to Department of Criminal Justice Services; relates to law-enforcement agencies; relates to use of generative artificial intelligence and machine learning systems; provides that the Department of Criminal Justice Services shall have the power and duty to establish a comprehensive framework for the use of generative artificial intelligence and machine learning systems, both defined in the bill, by law-enforcement agencies, which shall include developing policies and procedures. 

Education/Training; Government Use 

Virginia 

 

Artificial Intelligence Developer Act 

Pending - Carryover 

Relates to Artificial Intelligence Developer Act established; relates to civil penalty; creates operating standards for developers and deployers, as those terms are defined in the bill, relating to artificial intelligence, including avoiding certain risks, protecting against discrimination, providing disclosures and conducting impact assessments and provides that the Office of the Attorney General shall enforce the provisions of the bill. 

Impact Assessment; Private Sector Use; Responsible Use 

Virginia 

 

Public Education 

Pending - Carryover 

Relates to public education; relates to dual enrollment and concurrent enrollment; relates to high school graduation. 

Education/Training 

Virginia 

 

Office of Education Economics 

Pending 

Relates to Office of Education Economics; relates to Administration of the Virginia Education and Workforce Longitudinal Data System; relates to report. 

Education Use; Government Use 

Virginia 

 

AI-Generated Image 

Failed 

Relates to unauthorized creation of image of another; relates to AI-generated image; relates to penalties; creates a Class 1 misdemeanor for any person who knowingly and intentionally creates any videographic or still image using artificial intelligence of any nonconsenting person if that person is totally nude, performing sexual acts, clad in undergarments, or in a state of undress so as to expose the genitals, pubic area, buttocks, or female breast and such. 

Criminal Use 

Virginia 

 

Artificial Intelligence Technology in Education Study 

Failed 

Relates to study; relates to Board of Education; relates to work group on the use of artificial intelligence technology in education; relates to report; requires the Board of Education, in collaboration with the State Council of Higher Education for Virginia, to convene a work group to study and make recommendations on guidelines for the use and integration of AI technology in education in public elementary and secondary schools and public institutions of higher education. 

Education Use; Studies 

Virginia 

 

Use of Artificial Intelligence by Public Bodies 

To governor 

Directs the Joint Commission on Technology and Science (JCOTS), in consultation with relevant stakeholders, to conduct an analysis of the use of artificial intelligence by public bodies in the commonwealth and the creation of a Commission on Artificial Intelligence. JCOTS shall submit a report of its findings and recommendations to the chairmen of the House Committees on Appropriations and Communications, Technology and Innovation and the Senate Committees on Finance and Appropriations and General Laws and Technology no later than Dec. 1, 2024. 

Education Use; Government Use; Impact Assessment; Responsible Use 

Virginia 

 

Public Education 

Pending - Carryover 

Relates to public education; relates to dual enrollment and concurrent enrollment; relates to high school graduation; makes several changes relating to graduation from a public high school in the commonwealth, including eliminating the requirement for a student to complete one virtual course in order to graduate from high school and specifying various options and requirements relating to earning career and technical education credentials for the purpose of satisfying high school graduation. 

Education/Training; Education Use 

Virginia 

 

Commission On Artificial Intelligence 

Failed 

Relates to Commission on Artificial Intelligence; relates to report; relates to sunset; creates the Commission on Artificial Intelligence to advise the governor on issues related to artificial intelligence and make advisory recommendations based on its findings. 

Effect on Labor/Employment; Responsible Use; Studies 

Virginia 

 

Joint Commission on Technology and Science 

Pending - Carryover 

Relates to Joint Commission on Technology and Science; relates to study; relates to advancements in artificial intelligence; relates to report; directs the Joint Commission on Technology and Science to study advancements in artificial intelligence (AI), including assessing the impacts of deepfakes, data privacy implications and misinformation; measures to ensure these technologies do not indirectly or directly lead to discrimination. 

Government Use; Responsible Use; Studies 

U.S. Virgin Islands 

 

Real Time Crime Center 

 

Amends specified title of the state code to establish a real-time crime center centralized crime data system within the State Police Department. 

Government Use 

Washington 

 

Charter of People's Personal Data Rights 

Pending 

Creates a charter of people's personal data rights. 

Government Use; Private Sector Use 

Washington 

 

Artificial Intelligence Task Force 

Pending 

Establishes an Artificial Intelligence Task Force. 

Government Use; Studies 

Washington 

 

Ethical Artificial Intelligence 

Pending 

Promotes ethical artificial intelligence by protecting against algorithmic discrimination. 

Impact Assessment; Responsible Use 

Washington 

 

Digital Empowerment and Workforce Inclusion Act 

Pending 

Creates the Washington digital empowerment and workforce inclusion act. 

Effect on Labor/Employment 

Washington 

 

Blueprint for an AI Bill of Rights 

Pending 

Affirms specified state commitment to the Blueprint for an AI Bill of Rights. 

 

Washington 

 

Use of Automated Decision Systems 

Pending 

Establishes guidelines for government procurement and use of automated decision systems to protect consumers, improve transparency and create more market predictability. 

Audit; Government Use; Impact Assessment; Notification 

Washington 

 

Charter of People's Personal Data Rights 

Pending 

Creates a charter of people's personal data rights. 

Government Use; Private Sector Use 

Washington 

 

Artificial Intelligence Task Force 

To governor 

Establishes a task force to assess current uses and trends and make recommendations to the Legislature regarding guidelines and legislation for the use of artificial intelligence systems; provides that task force findings and recommendations must include, among other things, a literature review of public policy issues with artificial intelligence, including benefits and risks to the public broadly and historically excluded communities, racial equity considerations, workforce impacts and ethical concerns. 

Studies 

Washington 

 

Fiscal Biennium Supplemental Operating Appropriations 

To governor 

Makes 2023-2025 fiscal biennium supplemental operating appropriations. 

Appropriations; Education/Training 

Washington 

 

Office of Privacy and Data Protection 

Pending 

Requires the office of privacy and data protection to develop guidelines for the use of artificial intelligence. 

Government Use 

Washington 

 

Use of Artificial Intelligence Language Learning Models 

Pending 

Concerns the use of artificial intelligence language learning models in official court filings. 

Judicial Use; Notification 

Washington 

 

Pornographic Material Involving Minors 

Pending 

Concerns deepfake artificial intelligence-generated pornographic material involving minors. 

Child Pornography; Criminal Use 

Washington 

 

Employee Rights in the Workplace 

Pending 

Protects employee rights in the workplace with regard to the use of digital technology. 

Private Sector Use 

West Virginia 

 

Artificial Intelligence Task Force 

Pending 

Creates an Artificial Intelligence Task Force. 

Cybersecurity; Education Use; Government Use; Effect on Labor/Employment; Responsible Use; Studies 

West Virginia 

 

State Task Force on Artificial Intelligence 

To governor 

Creates a state Task Force on Artificial Intelligence; sets forth the membership of the same; provides for appointment of members; delineates responsibilities of the task force; provides that it complete a report and specifies the contents of same; provides a date for termination of the task force. 

Education Use; Government Use; Private Sector Use; Studies 

West Virginia 

 

Artificial Intelligence Select Committee 

 

Creates a Select Committee on Artificial Intelligence. 

Studies 

West Virginia 

 

Child Pornography 

Pending 

Relates to establishing the criminal offenses of creating, producing, distributing or possessing with intent to distribute artificial intelligence-created visual depictions of child pornography when no actual minor is depicted. 

Child Pornography; Criminal Use 

Wisconsin 

 

Schools to Acquire Proactive Firearm Detection Software 

Pending 

Relates to grants to schools to acquire proactive firearm detection software; makes an appropriation. 

Education Use; Government Use 

Wisconsin 

 

Artificial Intelligence Content Disclosure 

Pending 

Concerns disclosures regarding content generated by artificial intelligence in political advertisements; grants rule-making authority; provides for a penalty. 

Elections 

Wisconsin 

 

Use of Artificial Intelligence By State Agencies 

Pending 

Concerns use of artificial intelligence by state agencies and staff reduction goals. 

Government Use; Effect on Labor/Employment 

Wisconsin 

 

Schools to Acquire Proactive Firearm Detection Software 

Pending 

Relates to grants to schools to acquire proactive firearm detection software; makes an appropriation. 

Education Use; Government Use 

Wisconsin 

 

Content Generated by Artificial Intelligence Disclosure 

Pending 

Concerns disclosures regarding content generated by artificial intelligence in political advertisements; grants rule-making authority; provides a penalty. 

Elections 

Wisconsin 

 

Use of Artificial Intelligence by State Agencies 

Pending 

Concerns use of artificial intelligence by state agencies and staff reduction goals. 

Government Use; Effect on Labor/Employment 

Wisconsin 

 

Generative Artificial Intelligence Disclaimer 

Pending 

Concerns disclaimer required when interacting with generative artificial intelligence that simulates conversation. 

Notification; Private Sector Use 

Wyoming 

None 

 

 

 

 


Explanation of Categories  

  • Appropriations: Legislation regarding funding for programs or studies.
  • Audit: Legislation that discusses an audit or evaluation of how the use of artificial intelligence is functioning.
  • Child Pornography: Legislation prohibiting the use of artificial intelligence to create or generate pornographic images of children or representations of children.
  • Criminal Use: Legislation that focuses on the use of artificial intelligence as an element or in the commission of a crime.
  • Cybersecurity: Use of artificial intelligence in cyberattacks or to assist in bolstering cybersecurity efforts.
  • Education/Training: Education or training programs to develop skills or knowledge in artificial intelligence.
  • Education Use: Legislation focused on the use of artificial intelligence by K-12 and other educational institutions, including use in instruction and use by students.
  • Elections: Use of artificial intelligence in processing election results and campaign materials (artificial intelligence specifically mentioned).
  • Government Use: Legislation focused on the use of artificial intelligence by government agencies and law enforcement.
  • Health Use: Legislation focused on the use of artificial intelligence in health care or by health care professionals.
  • Impact Assessment: May require a documented risk-based evaluation of an automated decision tool or other artificial intelligence tool.
  • Judicial Use: Legislation focused on the use of artificial intelligence in judicial proceedings and by legal professionals.
  • Effect on Labor/Employment: Legislation that relates to the effect artificial intelligence has on the workforce, type, quality and number of jobs and labor markets.
  • Notification: May require informing consumers or employees that they may be interacting in some way with artificial intelligence tools.
  • Oversight/Governance: Legislation that may require an office or agency to supervise or oversee the use of artificial intelligence and ensure its responsible use.
  • Private Right of Action: Provisions that grant individuals a private right of action as a legal remedy.
  • Private Sector Use: Legislation focused on the use of artificial intelligence by private sector businesses and organizations.
  • Provenance: Relates to requiring disclosures of data sources used to train artificial intelligence systems and mechanisms, like watermarking and disclosures, to help identify when artificial intelligence has been used.
  • Responsible Use: May prohibit use of artificial intelligence tools that contribute to any type of algorithmic discrimination, unjustified differential treatment, or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, language, disability, veteran status, genetic information, reproductive health or other classifications protected by state laws.
  • Studies: Legislation requiring a study of artificial intelligence issues or creating a task force, advisory body, commission or other regulatory, advisory or oversight entity.
  • Taxes: Legislation that provides tax benefits related to artificial intelligence.



Artificial Intelligence and Educational Policy: Bridging Research and Practice

  • Conference paper
  • First Online: 30 June 2023


  • Seiji Isotani (ORCID: 0000-0003-1574-0784)
  • Ig Ibert Bittencourt (ORCID: 0000-0001-5676-2280)
  • Erin Walker (ORCID: 0000-0002-0976-9065)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1831)

Included in the following conference series:

  • International Conference on Artificial Intelligence in Education


The use of artificial intelligence (AI) in education has been on the rise, and government and non-government organizations around the world are establishing policies and guidelines to support its safe implementation. However, there is a need to bridge the gap between AI research practices and their potential applications in the design and implementation of educational policies. To help the community address this challenge, we propose a workshop on AI and Educational Policy with the theme “Opportunities at the Intersection between AI and Education Policy.” The workshop aims to identify global challenges related to education and the adoption of AI, discuss ways in which AI might support learning scientists in addressing those challenges, learn about AI and education policy initiatives already in place, and identify opportunities for new policies to be established. We intend to develop action plans grounded in the learning sciences that identify opportunities and guidelines for specific AI policies in education.



Acknowledgments

This workshop initiative is supported by the Center for Integrative Research on Computer and Learning Sciences (CIRCLS - https://circls.org ), a center that connects learning sciences projects in the United States, and where AI and Education Policy is a key topic in the intersection between research and practice.

Other Organizing Committee Members:

• Deblina Pakhira, Digital Promise;

• Dalila Dragnic-Cindric, Digital Promise;

• Cassandra Kelley, University of Pittsburgh;

• Judi Fusco, Digital Promise;

• Jeremy Roschelle, Digital Promise.

The authors have used Grammarly and ChatGPT to improve the text.

Author information

Authors and Affiliations

Harvard Graduate School of Education, Cambridge, MA, 02138, USA

Seiji Isotani & Ig Ibert Bittencourt

NEES: Center for Excellence in Social Technologies, Federal University of Alagoas, Maceio, AL, 57072-970, Brazil

University of Pittsburgh, Pittsburgh, PA, 15260, USA

Erin Walker


Corresponding author

Correspondence to Seiji Isotani .

Editor information

Editors and Affiliations

University of Southern California, Los Angeles, CA, USA

University of British Columbia, Vancouver, BC, Canada

Genaro Rebolledo-Mendez

University of Leeds, Leeds, UK

Vania Dimitrova

North Carolina State University, Raleigh, NC, USA

Noboru Matsuda

UNED, Madrid, Spain

Olga C. Santos


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Isotani, S., Bittencourt, I.I., Walker, E. (2023). Artificial Intelligence and Educational Policy: Bridging Research and Practice. In: Wang, N., Rebolledo-Mendez, G., Dimitrova, V., Matsuda, N., Santos, O.C. (eds) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. AIED 2023. Communications in Computer and Information Science, vol 1831. Springer, Cham. https://doi.org/10.1007/978-3-031-36336-8_9

DOI: https://doi.org/10.1007/978-3-031-36336-8_9

Published: 30 June 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-36335-1

Online ISBN: 978-3-031-36336-8

eBook Packages: Computer Science (R0)


Next-Gen Education: 8 Strategies Leveraging AI In Learning Platforms

Forbes Technology Council


Ashok Manoharan, Founder/CTO, FocusLabs.

Education is changing as quickly as the digital landscape around it. The integration of artificial intelligence (AI) into learning systems has opened up new opportunities for improving the educational experience. For students of all ages, AI creates a multitude of options to improve results, increase engagement and personalize learning. Let's look at eight tactics that are transforming the landscape of next-generation education using AI-powered learning platforms, supported by real-world facts and insights.

Personalized Learning Paths

McKinsey's analysis found that individualized learning paths can boost student engagement by up to 60% while improving educational results by 30%.

AI systems use students' learning behaviors, preferences and performance data to generate personalized learning paths. These paths let students learn at their own pace and in their own style, encouraging a deeper grasp and mastery of concepts.
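One way a personalized path can be driven is by a prerequisite graph: the platform offers the first topic the student has not yet mastered but is ready for. The sketch below illustrates that idea only; the topic names and graph are invented, not drawn from any real platform.

```python
# Illustrative sketch (topic names and graph invented): choose the next
# step on a personalized path as the first unmastered topic whose
# prerequisites the student has already mastered.
prereqs = {
    "fractions": [],
    "decimals": ["fractions"],
    "ratios": ["fractions"],
    "percentages": ["fractions", "decimals"],
}

def next_topic(mastered):
    for topic, needs in prereqs.items():
        if topic not in mastered and all(n in mastered for n in needs):
            return topic
    return None  # path complete

print(next_topic({"fractions"}))  # decimals unlocks before percentages
```

A real system would weight this choice with the behavioral and performance data described above, rather than simple dictionary order.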

Adaptive Education

MarketsandMarkets estimates that the global market for adaptive learning products will reach $5.3 billion by 2025. Adaptive learning platforms use artificial intelligence algorithms to dynamically modify the complexity and pace of learning content based on students' real-time performance data. This individualized strategy gives each student targeted support and challenges, maximizing learning outcomes.
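The real-time adjustment loop can be as simple as tracking a rolling window of recent answers and stepping difficulty up or down. This is a minimal sketch with invented thresholds, not any vendor's actual algorithm:

```python
# Minimal sketch (window size and thresholds invented) of how an
# adaptive platform can adjust difficulty from recent answers:
# step up on high accuracy, step down when accuracy drops.
from collections import deque

class AdaptiveDifficulty:
    def __init__(self, window=5, raise_at=0.8, lower_at=0.5):
        self.recent = deque(maxlen=window)  # last N answers, True/False
        self.level = 1                      # difficulty on a 1-5 scale
        self.raise_at, self.lower_at = raise_at, lower_at

    def record(self, correct):
        self.recent.append(correct)
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy >= self.raise_at and self.level < 5:
                self.level += 1
                self.recent.clear()  # restart the window at the new level
            elif accuracy <= self.lower_at and self.level > 1:
                self.level -= 1
                self.recent.clear()
        return self.level

tutor = AdaptiveDifficulty()
for answer in [True, True, True, True, True]:  # a strong streak
    level = tutor.record(answer)
print(level)  # 2: difficulty stepped up after five correct answers
```

Production systems typically replace the rolling-accuracy heuristic with a learner model (e.g., item response theory), but the feedback loop has this shape.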

Intelligent Tutoring Systems

U.S. Department of Education research indicates that intelligent tutoring systems can raise student achievement levels to the same level as one-on-one tutoring. AI-powered tutoring systems mimic the work of human tutors by offering tailored comments, explanations and support to pupils. These systems respond to individual learning demands, providing interactive and engaging learning experiences that foster deep comprehension and skill development.

Natural Language Processing (NLP) For Feedback

A study published in the Journal of Educational Psychology found that providing timely and specific feedback via NLP boosts student learning and engagement. NLP algorithms assess students' written responses and provide real-time feedback on their comprehension and communication skills. By automating the feedback process, NLP makes assessment more efficient and effective, freeing educators to focus on targeted interventions.
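To make the feedback loop concrete, here is a deliberately simple, rule-based sketch. The checks and messages are invented for illustration; real NLP feedback systems use trained language models rather than these surface heuristics.

```python
# Minimal, rule-based sketch of automated writing feedback. Production
# systems use trained NLP models; these simple heuristics (all invented
# here) only illustrate the shape of the feedback loop.
import re

def feedback(text):
    notes = []
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    if sentences and len(words) / len(sentences) > 25:
        notes.append("Sentences are long on average; consider splitting some.")
    if text and text[0].islower():
        notes.append("Start the response with a capital letter.")
    if len(words) < 20:
        notes.append("The response is short; add supporting detail.")
    return notes or ["No issues flagged by these basic checks."]

print(feedback("this is short."))
```

The value of automating even checks this basic is turnaround time: the student sees the notes immediately, while the teacher reviews only the flagged work.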

Smart Content Curation

According to a Deloitte poll, 68% of educators believe AI-powered content curation improves student learning outcomes. AI algorithms help teachers select and arrange instructional materials from a variety of sources, including books, articles and videos. By recommending resources matched to learning objectives, AI deepens students' understanding of and engagement with the topic.
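Matching resources to a learning objective can be sketched as a ranking problem. The toy version below scores by keyword overlap; the resource titles and stop-word list are invented, and production curation systems use trained text embeddings instead of raw word overlap.

```python
# Illustrative sketch of smart content curation: rank resources against
# a learning objective by keyword overlap. Titles and stop list are
# invented; production systems use trained text embeddings.
import re

STOP = {"the", "a", "an", "of", "to", "and", "in", "for"}

def keywords(text):
    return set(re.findall(r"[a-z]+", text.lower())) - STOP

def score(objective, resource):
    return len(keywords(objective) & keywords(resource))

resources = [
    "Introduction to Photosynthesis (video)",
    "The Water Cycle Explained (article)",
    "Photosynthesis Lab Worksheet (PDF)",
]
objective = "Photosynthesis lab activities for students"
ranked = sorted(resources, key=lambda r: score(objective, r), reverse=True)
print(ranked[0])  # the lab worksheet matches two keywords
```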

Predictive Analytics For Student Success

By using artificial intelligence (AI) to evaluate student behavior and performance data, predictive analytics can spot trends that point to possible difficulties in the classroom or high dropout rates. Educators can use these insights to act early, offering focused assistance and interventions to help students succeed.
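A common form of such early-warning analytics is a classifier over engagement features. The sketch below trains a tiny logistic-regression model by gradient descent; the feature set, thresholds and data are invented for illustration, and any real deployment would need vetted data and a fairness review, for the bias reasons discussed later in this article.

```python
# Hedged sketch of an early-warning model: a tiny logistic-regression
# classifier trained by gradient descent on invented engagement data.
# Features: attendance rate, assignment completion rate, average quiz
# score. A real deployment needs vetted data and a fairness review.
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def dropout_risk(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

X = [[0.95, 0.90, 0.85], [0.90, 0.85, 0.80], [0.97, 0.95, 0.90],
     [0.50, 0.40, 0.45], [0.40, 0.30, 0.35], [0.55, 0.45, 0.50]]
y = [0, 0, 0, 1, 1, 1]  # 1 = student later dropped out

w, b = train(X, y)
print(round(dropout_risk(w, b, [0.45, 0.35, 0.40]), 2))  # high: flag for support
```

The point of the model is not the prediction itself but the intervention it triggers: a high risk score routes the student to focused assistance early.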

Gamification And Immersive Learning

AI-enhanced gamification incorporates game mechanics, challenges and rewards into the learning process, encouraging students to actively participate and progress. Virtual reality (VR) and augmented reality (AR) are examples of immersive technologies that further improve participation by establishing dynamic and immersive learning environments.

Continuous Assessment And Feedback

AI provides real-time assessment of student progress and comprehension, delivering immediate feedback to both students and educators. By monitoring learning activities and performance data, AI finds areas for development and allows for individualized interventions to help students advance.

Incorporating artificial intelligence (AI) into learning platforms has the potential to transform teaching and learning in the next generation of education. Using tailored learning paths, adaptive algorithms, intelligent tutoring systems and other AI-driven tactics, educators can provide more engaging, effective and inclusive learning experiences to students all around the world. The future of education is more promising than ever because of AI, which will enable students to realize their full potential and prosper in a rapidly changing world.

Incorporating AI into educational platforms is promising, but some big challenges remain. First, it has to be accessible: the AI needs to be affordable and adaptable for students from all sorts of backgrounds, speaking different languages and having various abilities.

Then there’s data privacy. Student information must be kept safe and used ethically; this builds trust and encourages broader adoption of AI in education. Bias in AI is another tough problem. If an AI system learns only from limited datasets, it can reinforce existing inequalities rather than break them down, so fair algorithms are essential if every student is to get a truly tailored learning experience. Finally, integrating this technology into schools well requires change both within school systems and in teacher training programs, which should now cover how teachers can best use these new tools alongside traditional teaching methods.
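One concrete way to start checking for the inequality problem is a disparity audit: compare how well a model performs for different student groups. This is a minimal sketch with invented records; real audits use larger samples and multiple fairness metrics:

```python
# Illustrative fairness check (hypothetical predictions): compare a model's
# accuracy across demographic groups to surface disparate performance.

def accuracy_by_group(records):
    """Accuracy per group from (group, predicted, actual) records."""
    totals, hits = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if predicted == actual else 0)
    return {g: hits[g] / totals[g] for g in totals}

def max_disparity(records):
    """Largest accuracy gap between any two groups; large gaps are a red flag."""
    acc = accuracy_by_group(records).values()
    return round(max(acc) - min(acc), 3)

records = [("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
           ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0)]
print(accuracy_by_group(records), max_disparity(records))
```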

Tackling these issues head-on will help us make sure that integrating artificial intelligence into education leads us where we hope: toward more engaging lessons personalized for each learner.


Ashok Manoharan


2023-24 Guidance for Artificial Intelligence Tools and Other Services

AP African American Studies Policy

Generative AI tools must be used ethically, responsibly, and intentionally to support student learning, not to bypass it. Accordingly, the AP African American Studies Individual Student Project must be the student’s own work. While students are permitted to use generative AI tools consistent with this policy, their use is optional and not mandatory.  

Students can use generative AI tools as optional aids for exploration of potential topics of inquiry, initial searches for sources of information, confirming their understanding of a complex text, or checking their writing for grammar and tone. However, students must read primary and secondary sources directly, perform their own analysis and synthesis of evidence, and make their own choices on how to communicate effectively in their presentations. It remains the student’s responsibility to engage deeply with credible, valid sources and integrate diverse perspectives when working on the project.  

AP Art and Design Policy

The use of artificial intelligence tools by AP Art and Design students is categorically prohibited at any stage of the creative process. 

AP Capstone Policy

Generative AI tools must be used ethically, responsibly, and intentionally to support student learning, not to bypass it. Accordingly, all performance tasks submitted in AP Seminar and AP Research must be the student’s own work. While students are permitted to use generative AI tools consistent with this policy, their use is optional and not mandatory. 

Students can use generative AI tools as optional aids for exploration of potential topics of inquiry, initial searches for sources of information, confirming their understanding of a complex text, or checking their writing for grammar and tone. However, students must read primary and secondary sources directly, perform their own analysis and synthesis of evidence, and make their own choices on how to communicate effectively both in their writing and presentations. It remains the student’s responsibility to engage deeply with credible, valid sources and integrate diverse perspectives when working on the performance tasks. Students must complete interim “checkpoints” with their teacher to demonstrate genuine engagement with the tasks.   

Required Checkpoints and Attestations for AP Capstone

To ensure students are not using generative AI to bypass work, students must complete interim checkpoints with their teacher to demonstrate genuine engagement with the tasks. AP Seminar and AP Research students will need to complete the relevant checkpoints successfully to receive a score for their performance tasks. Teachers must attest, to the best of their knowledge, that students completed the checkpoints authentically. Failure to complete the checkpoints will result in a score of 0 on the associated task.  

In AP Seminar, teachers assess the authenticity of student work based on checkpoints that take the form of short conversations with students during which students make their thinking and decision-making visible (similar to an oral defense). These checkpoints should occur during the sources and research phase (IRR and IWA), and argument outline phase (IWA only). A final validation checkpoint (IRR and IWA) requires teachers to confirm the student’s final submission is, to the best of their knowledge, authentic student work. 

In AP Research, students must complete checkpoints in the form of in-progress meetings and work in the Process and Reflection Portfolio (PREP). No further checkpoints will be required. 

College Board reserves the right to investigate submissions where there is evidence of the inappropriate use of generative AI as an academic integrity violation and request from students copies of their interim work for review.  

Please see the AP Seminar and AP Research course and exam descriptions (CEDs) for the current policy on AI and other tools along with guidance on administering mandatory checkpoints.

AP Computer Science Principles Policy

AP Computer Science Principles students are permitted to utilize generative AI tools as supplementary resources for understanding coding principles, assisting in code development, and debugging. This responsible use aligns with current guidelines for peer collaboration on developing code.    

Students should be aware that generative AI tools can produce incomplete code, code that introduces biases, code with errors or inefficiencies, or code so complex that it is difficult to understand and therefore to explain. It is the student’s responsibility to review and understand any code co-written with AI tools, ensuring its functionality. Additionally, students must be prepared to explain their code in detail, as required on the end-of-course exam.
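A small hypothetical example of the kind of review this policy asks for: an AI-suggested averaging function that looks correct but crashes on an empty list, alongside a student-reviewed fix. Both functions are invented for illustration:

```python
# Hypothetical illustration of reviewing AI co-written code: a plausible
# AI-suggested average function that fails on an empty list, and a fix.

def average_ai_suggested(scores):
    # Plausible AI output: correct for non-empty input, but raises
    # ZeroDivisionError when the list is empty.
    return sum(scores) / len(scores)

def average_reviewed(scores):
    # Student-reviewed version: handles the empty-list edge case explicitly.
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

print(average_reviewed([80, 90, 100]))  # 90.0
print(average_reviewed([]))             # 0.0 instead of a crash
```

Catching edge cases like this, and being able to explain why the fix is needed, is exactly the understanding the end-of-course exam tests.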


If your organization has not updated its policies to comply with Utah’s Artificial Intelligence Policy Act (the “Act”), now is the time. As we noted in a prior blog post, this law took effect on May 1st. While it imposes certain AI-related disclosure obligations on businesses and individuals generally, the obligations for regulated occupations (those licensed by the Utah Division of Professional Licensing, which cover clinical services provided by a licensed healthcare provider such as a physician or nurse) are stricter.

What Does the Act Require?

The Act requires that licensed providers “prominently disclose when…interacting with a generative artificial intelligence” in the provision of regulated services (i.e., the practice of medicine or nursing). The provider must make the disclosure verbally at the beginning of a conversation and electronically prior to a written exchange. The Act describes generative AI as an artificial system, trained on data, that communicates via text, audio, or visuals and generates unscripted, human-like outputs with limited or no human supervision. This definition is vague, and stakeholders are awaiting clarity as to which communication technologies constitute generative AI under the Act; whether all automated chatbots qualify, for example, may be up for debate. Non-regulated businesses and individuals are subject to a less stringent standard: they may communicate through deployed generative AI without prior disclosure. However, if a person asks whether he or she is communicating with a person or with AI, the business or individual must clearly and conspicuously disclose that the communications are with generative AI.

How Is It Enforced?

In addition to the enforcement powers available to the Division of Consumer Protection, individuals who violate the Act may be fined $2,500 per violation administratively and potentially sued in court. In a lawsuit, providers may be subject to an additional $2,500 penalty for each violation, and providers who violate any administrative or court orders may face a further civil penalty of $5,000 per violation. Providers also cannot shift liability to the generative AI (or its manufacturer) as a defense for their own violations of the Act.

Office of AI Policy

The Act also establishes the Office of Artificial Intelligence Policy to encourage the development of AI technologies by creating and overseeing an artificial intelligence learning laboratory program, in addition to consulting with stakeholders about regulatory changes in this sphere. The Office will accept participants for its learning laboratory program which allows for regulatory mitigation during AI testing and development periods. Regulatory mitigation may include reduced penalties under the Act. Such programs may last up to 12 months, which may be extended one time for an additional 12-month period.

Providers using generative AI in Utah should ensure that they have appropriate policies, procedures and disclosures in place to comply with the Act. While the Act clearly applies to licensed providers physically located in Utah, Utah-licensed clinicians providing telehealth services to Utah patients (even if the provider is not physically located in Utah) must also comply. Therefore, any telehealth provider with a presence in Utah that is using generative AI should evaluate whether it is subject to the Act. 

