Adapting teaching methods to the “multiple intelligences” of students leads to better learning.
The opening survey statement from Blanchette Sarrasin et al. (2019) caught Howard Gardner's attention, because it clearly draws from his Multiple Intelligences (henceforth MI) theory (Gardner, 1983). In a recent paper, Gardner (2020) says he was disturbed by this so-called “neuromyth,” both because it says nothing about the brain, and because it is not an idea that he has put forth or defended. On that basis, Gardner (2020) argues that MI theory does not qualify as a neuromyth. According to the author of Frames of Mind, there may have been merit, some years ago, in exposing neuromyths, but the practice has gone too far and has become problematic rather than helpful.
In this opinion paper, I first challenge Gardner's (2020) view that MI theory contains no “neuro.” I then highlight the fact that Gardner and his research team spent an entire decade, through the Spectrum Project, contemplating the very hypothesis embedded in the opening survey statement: that matching modes of instruction to MI intelligence profiles promotes learning. When taken for granted, such an unproven research hypothesis becomes a false belief, a neuromyth derived from MI theory. Next, I argue that research aimed at testing the MI–instruction “matching” hypothesis is still hampered by the lack of satisfactory measures of MI intelligence profiles. Finally, I expose how Gardner's (2020) position may, paradoxically, entertain the “problematic” neuromyth. To foster a more constructive dialog between scientists and educators, I follow Gardner's (2020) advice to properly qualify (i.e., to debunk) the survey statement, in terms of both robustness and caveats.
Gardner (2020) states that “there is no mention of the brain” in his original work, insisting that “MI is a psychological theory, pure and simple” (p. 3). Because MI theory contains no “neuro,” he claims, there is no reason why it should be associated with the “provocative and contentious neuromyth” term. However, Gardner has typically called MI “a psychobiological theory: psychological because it is a theory of the mind, biological because it privileges information about the brain, the nervous system, and ultimately, [he] believe[s], the human genome” (Gardner, 2011b, p. 7). In the opening chapters of Frames of Mind, after disposing of traditional IQ theories of intelligence, Gardner (1983) draws on the brain science of his day to posit the basic premise of MI theory: that intelligences are distinct computational capacities that have emerged, over the course of evolution and across cultures, from the human cerebral cortex:
We find, from recent work in neurology, increasingly persuasive evidence for functional units in the nervous systems. There are units subserving microscopic abilities in the individual columns of the sensory or frontal areas; and there are much larger units, visible to inspection, which serve more complex and molar human functions, like linguistic or spatial processing. These suggest a biological basis for specialized intelligences (p. 57).
Such neurological evidence led Gardner (1983) to include potential isolation by brain damage as one of eight criteria, actually “the single most instructive line of evidence” (p. 63), to define an intelligence. Critical insights for MI theory also came from Gardner's earlier neuropsychological research, conducted in the 1970s on brain-damaged patients suffering from aphasia (Gardner, 2011b, 2016). Consistent with intelligences as biopsychological potentials to process information, Davis et al. (2011) noted that it would be “desirable to secure an atlas of the neural correlates of each of the intelligences” (p. 495), and current neuroscientific investigations of MI theory are proceeding in that direction. For instance, a brain lesion restricted to the left parietal lobe would selectively impair the capacity to discriminate living from non-living entities, i.e., naturalistic intelligence (Shearer and Karanian, 2017).
But even with no “neuro” at all, MI theory would still qualify as a potential source of neuromyths, as any scientific theory could, be it psychological, neurological, or a mix of both. Myths may have nothing to do with the brain, but they are myths nonetheless. Over time, the term “neuromyth” has become a common umbrella for a wide range of unsubstantiated claims, especially in the education field. Some of those claims clearly evoke the brain (e.g., We only use 10% of our brain), while others do not (e.g., Listening to Mozart's music makes children smarter). Would it be more appropriate to drop the “neuro” prefix and collectively call them “edumyths”? Actually, it does not matter. They are myths.
Above all, the primary aim of MI theory was to expand the traditional, narrow IQ concept of intelligence to the whole spectrum of brain computational powers, not to provide brain-based educational recommendations. The basic idea of MI theory is that Homo sapiens is biologically endowed with a set of relatively autonomous mental tools (termed “intelligences”) that can be activated to solve problems or to fashion products of cultural value. MI theory posits that every individual has, at their disposal, a full intellectual profile of eight intelligences. From one individual to another, some intelligences exhibit low, some average, and others strong biopsychological potentials, but the whole MI intelligence profile, a spectrum of brain computational powers working in synergy, is mobilized to adapt Homo sapiens to newly encountered, culture-bound situations.
Contrary to Gardner's (2020) allegation, the claim in the opening survey statement is not that MI theory is a neuromyth. There has been considerable progress in brain science over the past four decades, and the neurological underpinnings of the original rendition of MI theory (Gardner, 1983) might need an update (Gardner, 2016), but MI theory remains a plausible, legitimate scientific theory of intelligence. The false claim in the opening survey statement is that tailoring instruction to pupils' MI intelligence profiles promotes learning. Gardner (2020) states that he has “gone to great pains to emphasize that even if the theory is plausible, no educational recommendations follow directly from it” (p. 3). However, since the inception of MI theory some 40 years ago, Gardner has oscillated between two views regarding its applications in education: the “Rorschach” view and the “matching” view.
According to the “Rorschach” view, defended by Gardner (2020), no direct educational implications derive from research findings; cultural values always mediate the leap from science to practice. In this view, MI theory is a catalyst for reflection on a pluralistic, rather than a unitary, view of intelligence (Gardner, 1995a). To use Gardner's (2006) analogy, from the teachers' standpoint, MI theory is an educational Rorschach test, a backdrop “to support almost any pet educational idea that they had” (Gardner, 2011b, p. 5). MI theory implies only two non-prescriptive teaching practices: “individualizing” and “pluralizing.” By using multiple “entry points” (presenting the teaching materials in more than one way), teachers might activate all intelligences and foster optimal learning, “since some individuals learn better through stories, others through works of art, or hands-on activities” (Gardner, 2011b, p. 7).
According to the alternative, “matching” view, clearly embedded in the opening neuromyth statement, Gardner (2020) states that it is “not an idea that [he] has put forth or defended” (p. 2). However, in the closing chapter of Frames of Mind, from a purely speculative and prospective standpoint, Gardner (1983) is quite sympathetic to the idea of matching teaching materials and modes of instruction to MI intelligence profiles:
Educational scholars nonetheless cling to the vision of the optimal match between student and material. In my own view, this tenacity is legitimate: after all, the science of educational psychology is still young; and in the wake of superior conceptualizations and finer measures [emphasis mine], the practice of matching the individual learner's profile to the materials and modes of instruction may still be validated. Moreover, if one adopts M.I. theory, the options for such matches increase: as I have already noted, it is possible that the intelligences can function both as subject matters in themselves and as the preferred means for inculcating diverse subject matter (p. 390).
Albeit speculative, and much to Gardner's surprise, these few lines have attracted tremendous interest in the education field. But testing the matching hypothesis required, in the first place, “finer measures” of MI intelligence profiles. Gardner (1992) proposed, as an alternative to IQ-like paper-and-pencil (standardized) intelligence tests, natural observations of Homo sapiens freely evolving in ecologically valid, culturally meaningful contexts. For instance, to measure spatial intelligence, “one should allow an individual to explore a terrain for a while and see whether she can find her way around it reliably” (Gardner, 1995b, p. 202). Gardner and his research team spent an entire decade, after the publication of Frames of Mind, exploring the plausibility of an MI theory-based “child-centered” learning program. Their most ambitious initiative was the Spectrum Project, aimed at creating a museum-like, rich environment for children to deploy their biopsychological potentials (intelligences). A set of 15 learning activities covering seven knowledge domains was created to provide a contextually valid assessment battery of MI intelligence profiles. For instance, to assess interpersonal intelligence, children manipulated figures in a scaled-down, 3D replica of their classroom (Chen and Gardner, 2012). The distribution of strengths and weaknesses across the range of intelligences was called the Spectrum profile. The ultimate goal was to develop individualized educational interventions adapted to MI intelligence profiles.
However, MI theory does not only posit the existence of eight neurologically plausible intelligences; it also posits that each individual combines several intelligences to tackle any given task, making it unlikely for a test to capture purely specific intelligence strengths and weaknesses (e.g., a test that would isolate bodily-kinesthetic from musical, spatial, and interpersonal intelligences while observing an individual dancing the tango). Although the 15 assessment tasks from the Spectrum battery have been “shown to demonstrate reliability” (Davis et al., 2011, p. 496), valid measures of the single or combined deployment of the eight intelligences remain unsettled:
Direct experimental tests of the [MI] theory are difficult to implement and so the status of the theory within academic psychology remains indeterminate. The biological basis of the theory—its neural and genetic correlates—should be clarified in the coming years. But in the absence of consensually agreed upon measures of the intelligences, either individually or in conjunction with one another, the psychological validity of the theory will continue to be elusive (Davis et al., 2011 , p. 498).
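The identifiability problem described above can be made concrete with a toy measurement model (my illustration; the loading matrix and numbers are invented, not drawn from Gardner or Davis et al.). If every assessment task recruits several intelligences at once, the mapping from latent profiles to observed task scores need not be invertible, so two genuinely different profiles can produce identical observations:

```python
# Toy illustration: each assessment task blends several latent
# "intelligences", so observed scores may fail to identify the profile.
# The loading matrix below is hypothetical, purely for illustration.

def task_scores(loadings, profile):
    """Observed score on each task = weighted sum of latent abilities."""
    return [sum(w * a for w, a in zip(row, profile)) for row in loadings]

# Two tasks, three latent intelligences (think of a tango-dancing task
# mixing bodily-kinesthetic, musical, and interpersonal capacities).
loadings = [
    [1, 1, 0],  # task 1 draws on intelligences A and B
    [0, 1, 1],  # task 2 draws on intelligences B and C
]

profile_1 = [2, 3, 4]
profile_2 = [3, 2, 5]  # a genuinely different person

print(task_scores(loadings, profile_1))  # [5, 7]
print(task_scores(loadings, profile_2))  # [5, 7], indistinguishable
```

With fewer independent tasks than latent capacities, the system is under-determined, and adding tasks helps only insofar as their loadings are linearly independent: precisely what the synergistic deployment of intelligences makes hard to arrange.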
Reflecting back on assessment tools for the multiple intelligences, Gardner (2016) admitted that he has “not devoted significant effort to creating such tests” (p. 169). In light of the enormous investment of time and money required, he did not want to be “in the assessment business” (Gardner, 2011a, p. xiii). Above all, measuring multiple intelligences is inconsistent with Gardner's critique of the traditional IQ theories of intelligence and, for that reason, he shows “reluctance to create a new kind of strait jacket (Johnny is musically smart but spatially dumb)” (Gardner, 2011b, p. 5).
Accordingly, the opening survey statement is considered a neuromyth because compelling evidence that matching modes of instruction to MI intelligence profiles promotes learning is lacking, mainly due to unsatisfactory measures of MI intelligence profiles. This intuitively appealing hypothesis, contemplated by Gardner's research team at some point (the Spectrum Project) but still open to scientific inquiry, has somehow been taken for granted by laypersons and, over time, embedded into popular culture. In other words, it became a neuromyth.
Gardner (2020) blames survey designers for putting up statements “conflating science and practice” and for creating rather than exposing neuromyths. He warns that by “waving the provocative neuromyth flag” with the opening survey statement, the baby (MI theory) might be thrown out with the bathwater (unsubstantiated educational claims derived from it).
First, neither Blanchette Sarrasin et al. (2019) nor other researchers in the field deliberately put up neuromyth statements in their respective surveys. Neuromyths are creatures of their own, to be chased, not created. Twenty-five years ago, Gardner (1995b) debunked seven common myths that had grown up around MI theory. Myth #3 (“Multiple intelligences are learning styles”) was so persistent that Gardner (2013) found it necessary to debunk it once again in the new millennium. Survey designers simply exposed yet another, very prevalent myth: tailoring instruction to pupils' MI intelligence profiles promotes learning.
Second, any scientific theory is a potential source of neuromyths. As noted by Geake (2008), the most pervasive neuromyths are ingrained in valid science. Is Roger Sperry's Nobel Prize at stake just because abusive extrapolations of his findings on functional hemispheric lateralization have given rise to one of the most pervasive neuromyths (“left-brained” vs. “right-brained” people)? By exposing such a popular neuromyth, might the baby (Sperry's contributions to neuroscience) be thrown out with the bathwater? The scientific integrity of MI theory cannot be harmed by the “problematic” neuromyth. Legitimate scientific theories and discoveries are challenged by empirical scrutiny, not by false beliefs loosely inspired by them.
Gardner (2020) argues that the way claims are conveyed in neuromyth survey statements (in an all-or-none, true/false fashion) is deceptive. To foster a more constructive dialog between scientists and educators, he advocates that research findings with potential educational implications be properly qualified, in terms of both robustness and caveats. Surprisingly, rather than qualifying the message (the false claim in the opening survey statement), Gardner (2020) shoots the messengers (the survey designers). A more constructive approach would be (1) to underline the scientific robustness of MI theory, namely its neurological plausibility (Posner, 2004), and (2) to disclose caveats pertaining to the direct application of MI theory in educational settings, most notably that research aimed at testing the MI–instruction “matching” hypothesis is still hampered by a lack of consensually agreed upon measures of MI intelligence profiles (Davis et al., 2011). By shooting the messengers rather than qualifying the message (debunking yet another common myth that has grown up around MI theory), Gardner (2020) refrains from pulling the bathtub plug and entertains unsubstantiated educational implications of a legitimate scientific theory of intelligence.
The author confirms being the sole contributor of this work and has approved it for publication.
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
I am thankful to Liliane Lalonde for her help with the English language.
Funding. This work was supported by the Canada Foundation for Innovation John R. Evans Leaders Fund grant 18356.
In 1983, in one of the most influential books in a peerlessly influential career, Howard Gardner upended popularly accepted notions of how children think and learn. He proposed, in Frames of Mind , that there was not just a single intelligence that could be measured by one IQ test, but multiple intelligences — many ways of learning and knowing.
With his best-known work, Howard Gardner shifted the paradigm and ushered in an era of personalized learning.
The notion of multiple intelligences — and Gardner’s follow-up ideas about teaching individual students in the ways they can best learn, and teaching important concepts in multiple ways, for many access points — shifted the paradigm, ushering in an era of personalized learning whose promise is still being explored.
Gardner never rested at multiple intelligences. In an award-winning career — which has included MacArthur and Guggenheim fellowships, the University of Louisville’s Grawemeyer Award in Education, and innumerable honorary degrees — he’s focused on ethical development , citizenship (including digital citizenship), professionalism, and the value of college and the liberal arts . He may have retired from teaching in 2019, but his work continues. – Video directed by Jill Anderson, produced by Elio Pajares
multiple intelligences, theory of human intelligence first proposed by the psychologist Howard Gardner in his book Frames of Mind (1983). At its core, it is the proposition that individuals have the potential to develop a combination of eight separate intelligences, or spheres of intelligence; that proposition is grounded in Gardner's assertion that an individual's cognitive capacity cannot be adequately represented by a single measurement, such as an IQ score. Rather, because each person manifests varying levels of the separate intelligences, a unique cognitive profile would, according to this theory, be a better representation of individual strengths and weaknesses. It is important to note that, within this theory, every person possesses all intelligences to some degree.
Gardner posited that in order for a cognitive capacity to qualify as an independent “intelligence” (rather than as a subskill or a combination of other kinds of intelligence), it must meet eight specific criteria. First, it must be possible to thoroughly symbolize that capacity by using a specific notation that conveys its essential meaning. Second, neurological evidence must exist that some area of the brain is specialized to control that particular capacity. Third, case studies must exist showing that some subgroups of people (such as child prodigies) exhibit an elevated mastery of a given intelligence. Fourth, the intelligence must have some evolutionary relevance through history and across cultures. Fifth, the capacity must have a unique developmental history for each individual, reflecting each person's different level of mastery of it. Sixth, the intelligence must be measurable in psychometric studies that reflect differing levels of mastery across intelligences. Seventh, the intelligence must have some definite set of core operations that are indicative of its use. Last, the proposed intelligence must already be plausible on the basis of existing means of measuring intelligence.
Gardner’s original theoretical model included seven separate intelligences, with an eighth added in 1999: linguistic, musical, logical-mathematical, spatial, bodily-kinesthetic, interpersonal, intrapersonal, and (the 1999 addition) naturalistic.
These eight intelligences can be grouped into three categories: language-related, person-related, and object-related. The linguistic and musical intelligences are said to be language-related, since they engage both auditory and oral functions, which Gardner argued were central to the development of verbal and rhythmic skill. Linguistic (or verbal-linguistic) intelligence, manifested both orally and in writing, is the ability to use words and language effectively. Those who possess a high degree of verbal-linguistic intelligence can manipulate sentential syntax and structure, easily acquire foreign languages, and typically command a large vocabulary. Musical intelligence includes the ability to perceive and express variations in rhythm, pitch, and melody; the ability to compose and perform music; and the capacity to appreciate music and to distinguish subtleties in its form. It is similar to linguistic intelligence in its structure and origin, and it employs many of the same auditory and oral resources. Musical intelligence also has ties to brain areas that control other intelligences, as in the performer with a keen bodily-kinesthetic intelligence or the composer adept at applying logical-mathematical intelligence to the manipulation of the ratios, patterns, and scales of music.
Person-related intelligences include both interpersonal and intrapersonal cognitive capacities. Intrapersonal intelligence is identified with self-knowledge, self-understanding, and the ability to discern one’s strengths and weaknesses as a means of guiding one’s actions. Interpersonal intelligence is manifested in the ability to understand, perceive, and appreciate the feelings and moods of others. Those with high interpersonal intelligence are able to get along well with others, work cooperatively, communicate effectively, empathize with others, and motivate others.
The four object-related intelligences (logical-mathematical, bodily-kinesthetic, naturalistic, and spatial) are stimulated and engaged by the concrete objects one encounters and the experiences one has. Those objects include physical features of the environment, such as plants and animals, concrete things, and the abstractions or numbers used to organize the environment. Those who exhibit high degrees of logical-mathematical intelligence can easily perceive patterns, follow a series of commands, solve mathematical calculations, generate categories and classifications, and apply those skills to everyday use. Bodily-kinesthetic intelligence is manifested in physical development, athletic ability, manual dexterity, and understanding of physical wellness. It includes the ability to perform certain valuable functions, such as those of the surgeon or mechanic, as well as the ability to express ideas and feelings, as artisans and performers do. Spatial intelligence, according to Gardner, is manifested in at least three ways: (1) the ability to perceive an object in the spatial realm accurately, (2) the ability to represent one's ideas in a two- or three-dimensional form, and (3) the ability to maneuver an object through space by imagining it rotated or by seeing it from various perspectives. Though spatial intelligence may be highly visual, its visual component refers more directly to one's ability to create mental representations of reality.
Naturalistic intelligence is a later addition to Gardner's theoretical model and is not as widely accepted as the other seven. It includes the ability to recognize plants, animals, and other parts of the natural environment, as well as to see patterns and organizational structures found in nature. Most notably, research remains inconclusive as to whether naturalistic intelligence fulfills the criterion of being isolable in neurophysiology. In 1999 Gardner also considered whether a ninth, existential intelligence exists.
May 2012, Vol 43, No. 5
Print version: page 43
Multiple Intelligences: Best Ideas from Research and Practice (resource summary).
This work shows teachers and administrators how to successfully integrate Multiple Intelligences into their schools and classrooms. Based on a national investigation of more than 40 schools and on detailed case studies, this book illustrates how teachers in real-life situations in a range of different public schools were able to construct and implement curricula that enabled students to learn challenging disciplinary content through multiple intelligences. It also shows how the organizational practices within these teachers' schools supported strong classroom work. Written in a clear, practical style, this book highlights how educators everywhere can both integrate MI theory and foster exceptional student work. This book will be an invaluable resource for soon-to-be as well as practicing teachers and administrators. ISBN: 978-0205342594
Redefining intelligence and how we measure it.
This summer, the world of college admissions is under the magnifying glass for what seems like the nth time in the past few years. Highlights from 2024 include this application cycle's infamous FAFSA (Free Application for Federal Student Aid) debacle, which delayed access to vital aid for millions of students for months. Throughout the year, elite universities have also been falling like dominoes in reversing the COVID-era policy of test-optional admissions, once again requiring standardized test results as part of applicants' profiles. Legacy admissions is next in line for intense scrutiny, with California's proposed statewide ban leading the charge.
Amidst wave after wave of seismic changes to college admissions, what has remained constant is the achievement culture that underpins the entire industry. This phenomenon and its nefarious effects are discussed in depth in Never Enough, an investigative book by journalist Jennifer Wallace. In her recent webinar with Polygence, she peels back the many layers of the immense pressure placed on our students to achieve, pressure exacerbated by ever-dwindling acceptance rates at elite colleges and generational changes in parenting. In fact, students spend so many of their formative years playing this game of academic profile engineering that they barely give any attention to developing their authentic identities. What ends up happening is that they, with the support of an entourage of counselors and parents, are packaged into highly engineered applicant profiles reflecting hundreds of hours of time and thousands of dollars, all to receive eight minutes of attention from an admissions officer on the other side of the desk. Those lucky enough to arrive at the doorstep of their dream college may realize that, after all of that brand construction, they don't actually know who they are or what they care about. They have perfected the art of the performative rat race but lost touch with their authentic selves. In a world where achievement engineering is the norm and not the exception, intellectual authenticity, something taken for granted just a generation or two ago, has become the holy grail in admissions.
How can we empower students to become authentic thinkers instead of pressuring them to conform to a handful of sought-after profiles? The answer is to give them permission. Intelligence comes in many different shapes and forms, and we need to show our students that we live and breathe this conviction, and that there is no hidden hierarchy ranking the relative value of each type of intelligence. Harvard psychologist Howard Gardner developed the theory of Multiple Intelligences in the late 1970s and early 1980s as a direct critique of the standard psychological view of intellect: that there is a single type of intelligence measured by IQ tests or other short-answer tests. In this theory, Gardner formulates eight types of intelligence (spatial, bodily-kinesthetic, musical, linguistic, logical-mathematical, interpersonal, intrapersonal, and naturalist) and argues that no one type is inherently superior to another. This challenges the widely held belief that the two types of intelligence measured by IQ tests, logical-mathematical and linguistic, are more critical than the others.
At Polygence, this way of thinking about human intelligence is so foundational to our mission that it inspired our name (poly-, meaning "multiple," and -gence, from "intelligence"). As the only research platform that supports humanistic, artistic, and creative projects in addition to traditional STEM projects, we celebrate and empower students to explore the world in all possible ways. It is a widely held misconception that research can only be done in labs by people in white lab coats, and that the only acceptable way of showcasing the results of such inquiry is long-form academic papers peppered with citations. That is not only an overly restrictive view of research, but at times a harmful one, motivating students who are not intellectually passionate about STEM to force themselves into STEM research. Furthermore, research and its role in education remain relatively opaque in society; few outside academia have a strong grasp of just how critical an activity it is in advancing human knowledge and developing critical thought in our next generation. Knowing it ourselves is the first step in making this type of inquiry accessible to the learners around us. Broadly speaking, research is any activity that broadens the horizon of human knowledge through one or more of the eight intelligences. Composing a new song is as much a form of research as producing a podcast about dementia, just as animating a short film about environmental toxins is as worthy a research topic as a paper on gene therapy as a treatment for cancer.
There are two major implications of Gardner's theory for education: individuation and pluralization. Free-form, student-driven research as offered at Polygence is the best way of delivering on both of these promises. Individuation calls for personalizing a project's scope to the student's specific interests and skill level. It takes into account the most effective ways that individual students learn and tailors the material and pedagogical approach accordingly. This tenet also harkens back to Benjamin Bloom's famous 2 Sigma Problem, in which he demonstrated that students tutored in one-on-one settings perform two standard deviations better than those in traditional classroom settings. Pluralization, on the other hand, calls for presenting the same concept in various formats that appeal to different forms of intelligence. This not only greatly expands the reach of any given topic but also exposes students to diverse ways of thinking and learning.
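To put Bloom's "two sigma" figure in concrete terms, here is a back-of-the-envelope sketch (it assumes normally distributed scores, an assumption of mine rather than part of Bloom's analysis): a student moved two standard deviations above the classroom mean lands near the 98th percentile.

```python
from math import erf, sqrt

def normal_percentile(z):
    """Percentile rank of a z-score under a standard normal distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Bloom's claim: one-on-one tutoring shifts the average student +2 sigma,
# i.e., from the 50th percentile to roughly the 98th.
print(round(normal_percentile(2.0) * 100, 1))  # ~97.7
```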
This is also a fundamental reason why Polygence has recently cemented a partnership with the Mastery Transcript Consortium. In order to take full advantage of the permission to find themselves rather than conform to narrowly defined molds, students need to be freed from the constant fear of being judged. Mastery-based assessment is not about assigning a numerical value to a student's achievement, nor about judging a student's ability relative to their peers; rather, it is about giving students the language to speak about the skills and competencies they developed through the experience of personalized research.
This way of assessing students is based on an absolute scale of abilities, whereas traditional grades only ever identify a student's standing relative to peers. Rather than telling a college that a given applicant ranked third in her class and is in the 98th percentile for verbal reasoning, a mastery-based learning record brings that student's abilities to life through qualitative descriptions, giving colleges a more three-dimensional picture of their potential students. This will be a welcome change in the sea of cookie-cutter applications and identical test scores that floods admissions officers every year.
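The contrast between the two reporting schemes can be sketched as follows (the rubric levels and competency names are hypothetical, invented here for illustration; the Mastery Transcript Consortium's actual record format may differ): a norm-referenced score locates a student relative to peers, while a mastery record reports each competency against a fixed, absolute scale.

```python
# Norm-referenced reporting: one number locating a student among peers.
def percentile_rank(score, cohort):
    """Fraction of the cohort scoring strictly below `score`."""
    return sum(s < score for s in cohort) / len(cohort)

# Mastery-based reporting: each competency mapped to an absolute rubric.
RUBRIC = {1: "emerging", 2: "developing", 3: "proficient", 4: "advanced"}

def mastery_record(competencies):
    """Translate numeric competency levels into qualitative descriptors."""
    return {skill: RUBRIC[level] for skill, level in competencies.items()}

cohort = [61, 70, 75, 82, 88, 91, 95]
print(percentile_rank(88, cohort))  # relative standing only

print(mastery_record({
    "research design": 3,
    "written communication": 4,
    "data interpretation": 2,
}))
```

The design point: the percentile changes whenever the cohort changes, while the mastery record is stable and interpretable on its own, which is what makes it legible to an admissions reader.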
Example of a Mastery Learning Record
No matter where this series of changes to the college admissions landscape takes us, it remains our responsibility to ensure that the next generation arrives at college with a clarified rather than a muddied sense of their intellectual identity. The Latin etymology of the word “educate” breaks down into ducere , meaning “to lead,” and e(x) , meaning “out of.” Leading out of what, you may ask? I have always been inspired to interpret it as “to lead a learner out of darkness.” The journey of self-discovery and enlightenment has sadly become elusive in this hypercompetitive world of elite admissions, and we now find ourselves in a world where students are woefully unprepared to tackle the challenges of the workforce and of adulthood because they barely know what they are capable of. And they are capable of so much more than we give them credit for.
This research examines the influence of integrating generative artificial intelligence (GAI) in education, focusing on its acceptance and utilization among elementary education students. Grounded in the Task-Technology Fit (TTF) Theory and an expanded iteration of the Unified Theory of Acceptance and Use of Technology (UTAUT) model, the study analyzes the effects of key constructs—Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions—on students’ behavioral intentions and usage behaviors concerning GAI. The UTAUT model, which integrates elements from multiple theories and is widely applied in educational contexts to understand technology adoption behaviors, provides a robust theoretical framework. Additionally, TTF theory, emphasizing the alignment of technology with specific instructional tasks, enhances our understanding of GAI acceptance. This study also investigates the moderating effects of TTF and gender within this framework. Data analysis, conducted through PLS-SEM, is based on responses from 279 elementary education students in China who completed an 8-week course incorporating GAI. Results indicate that Performance Expectancy, Social Influence, and Effort Expectancy significantly influence Behavioral Intention, while Facilitating Conditions have the strongest impact on actual Use Behavior, surpassing their influence on Behavioral Intention. Furthermore, Task-Technology Fit moderates both Performance Expectancy and Effort Expectancy in students’ consideration of GAI use. However, gender does not demonstrate a moderating effect in the overall model. These findings deepen our understanding of elementary school students’ acceptance of GAI technology and provide practical guidance for developers, educational policymakers, teachers, and researchers to effectively integrate GAI into elementary education while maintaining teaching quality.
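The moderation effect reported above can be illustrated with a toy moderated regression. The sketch below uses synthetic data and ordinary least squares in NumPy (not the authors' data or their PLS-SEM analysis); all coefficients and variable names are invented for illustration. The statistical signature of moderation is a nonzero interaction term: the slope of Performance Expectancy on Behavioral Intention changes with the level of Task-Technology Fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 279  # matches the study's sample size, purely for illustration

# Synthetic standardized scores (invented, not the authors' data)
pe = rng.normal(size=n)   # Performance Expectancy
ttf = rng.normal(size=n)  # Task-Technology Fit (the moderator)

# Simulate Behavioral Intention with a built-in interaction effect:
# the PE -> BI slope grows as TTF increases.
bi = 0.4 * pe + 0.2 * ttf + 0.3 * pe * ttf + rng.normal(scale=0.5, size=n)

# Moderated regression: BI ~ PE + TTF + PE*TTF
X = np.column_stack([np.ones(n), pe, ttf, pe * ttf])
beta, *_ = np.linalg.lstsq(X, bi, rcond=None)
b0, b_pe, b_ttf, b_inter = beta

# A clearly nonzero interaction coefficient is the signature of moderation.
print(f"PE slope: {b_pe:.2f}, interaction: {b_inter:.2f}")
```

In a real analysis the interaction coefficient would also be tested for significance; here the recovered estimate simply lands near the simulated value.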
Materials and data designed and/or generated in the study are available from the corresponding author on reasonable request.
This research was funded by the Jiangsu Province Education Science “14th Five-Year Plan” Project (C/2023/01/64), and Interdisciplinary Research Foundation for the Doctoral Candidates of Beijing Normal University (Grant Number BNUXKJC2326).
Authors and Affiliations
School of Educational Technology, Faculty of Education, Beijing Normal University, No. 19, XinJieKouWai St., HaiDian District, Beijing, PR China
Lei Du & Beibei Lv
Correspondence to Beibei Lv .
Ethical Approval
All procedures performed in the study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the World Medical Association Declaration of Helsinki. The research participants agreed to participate in the study and their complete anonymity was ensured.
Informed consent was obtained from all individual participants included in the study. The test and questionnaire were conducted anonymously. Students’ and teachers’ participation was voluntary.
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Du, L., Lv, B. Factors influencing students’ acceptance and use generative artificial intelligence in elementary education: an expansion of the UTAUT model. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12835-4
Received : 13 March 2024
Accepted : 04 June 2024
Published : 13 June 2024
Humans and machines: a match made in productivity heaven. Our species wouldn’t have gotten very far without our mechanized workhorses. From the wheel that revolutionized agriculture to the screw that held together increasingly complex construction projects to the robot-enabled assembly lines of today, machines have made life as we know it possible. And yet, despite their seemingly endless utility, humans have long feared machines—more specifically, the possibility that machines might someday acquire human intelligence and strike out on their own.
Sven Blumberg is a senior partner in McKinsey’s Düsseldorf office; Michael Chui is a partner at the McKinsey Global Institute and is based in the Bay Area office, where Lareina Yee is a senior partner; Kia Javanmardian is a senior partner in the Chicago office, where Alex Singla , the global leader of QuantumBlack, AI by McKinsey, is also a senior partner; Kate Smaje and Alex Sukharevsky are senior partners in the London office.
But we tend to view the possibility of sentient machines with fascination as well as fear. This curiosity has helped turn science fiction into actual science. Twentieth-century theoreticians, like computer scientist and mathematician Alan Turing, envisioned a future where machines could perform functions faster than humans. The work of Turing and others soon made this a reality. Personal calculators became widely available in the 1970s, and by 2016, the US census showed that 89 percent of American households had a computer. Machines— smart machines at that—are now just an ordinary part of our lives and culture.
Those smart machines are also getting faster and more complex. Some computers have now crossed the exascale threshold, meaning they can perform as many calculations in a single second as an individual could in 31,688,765,000 years . And beyond computation, which machines have long been faster at than we have, computers and other devices are now acquiring skills and perception that were once unique to humans and a few other species.
AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem-solving, and even exercising creativity. You’ve probably interacted with AI even if you don’t realize it—voice assistants like Siri and Alexa are founded on AI technology, as are some customer service chatbots that pop up to help you navigate websites.
Applied AI —simply, artificial intelligence applied to real-world problems—has serious implications for the business world. By using artificial intelligence, companies have the potential to make business more efficient and profitable. But ultimately, the value of AI isn’t in the systems themselves. Rather, it’s in how companies use these systems to assist humans—and their ability to explain to shareholders and the public what these systems do—in a way that builds trust and confidence.
For more about AI, its history, its future, and how to apply it in business, read on.
What is machine learning?
Machine learning is a form of artificial intelligence that can adapt to a wide range of inputs, including large sets of historical data, synthesized data, or human inputs. (Some machine learning algorithms are specialized in training themselves to detect patterns; this is called deep learning. See Exhibit 1.) These algorithms can detect patterns and learn how to make predictions and recommendations by processing data, rather than by receiving explicit programming instruction. Some algorithms can also adapt in response to new data and experiences to improve over time.
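As a minimal illustration of learning patterns from data rather than from explicit programming, the sketch below (synthetic data, plain NumPy) recovers a hidden linear rule purely from noisy examples:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Historical data": inputs x and noisy outputs that follow a hidden
# rule, y = 3x + 2, which the algorithm is never told about.
x = rng.uniform(0, 10, size=200)
y = 3 * x + 2 + rng.normal(scale=1.0, size=200)

# The pattern is recovered from the data, not from hand-coded rules.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned rule: y = {slope:.2f} * x + {intercept:.2f}")

# The fitted model can now make predictions for unseen inputs.
prediction = slope * 7.0 + intercept
```

The same principle scales up to the medical-imaging and weather-forecasting systems mentioned above, with far richer models in place of a straight line.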
The volume and complexity of data that is now being generated, too vast for humans to process and apply efficiently, has increased the potential of machine learning, as well as the need for it. In the years since its widespread deployment, which began in the 1970s, machine learning has had an impact on a number of industries, including achievements in medical-imaging analysis and high-resolution weather forecasting.
Deep learning is a more advanced version of machine learning that is particularly adept at processing a wider range of data resources (text as well as unstructured data including images), requires even less human intervention, and can often produce more accurate results than traditional machine learning. Deep learning uses neural networks—based on the ways neurons interact in the human brain —to ingest data and process it through multiple neuron layers that recognize increasingly complex features of the data. For example, an early layer might recognize something as being in a specific shape; building on this knowledge, a later layer might be able to identify the shape as a stop sign. Similar to machine learning, deep learning uses iteration to self-correct and improve its prediction capabilities. For example, once it “learns” what a stop sign looks like, it can recognize a stop sign in a new image.
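A toy version of such a layered network can be written in a few lines of NumPy. This sketch (layer sizes, learning rate, and iteration count are arbitrary choices for illustration) trains a two-layer network on XOR, a pattern no single layer can represent; each iteration self-corrects the weights, so the loss falls over time:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: output is 1 only when exactly one input is 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Two weight layers: the hidden layer learns simple features,
# the output layer combines them into the final decision.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(5000):
    # Forward pass through both layers
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: gradients of the squared error, used to
    # self-correct the weights on every iteration
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Real deep networks differ mainly in scale: many more layers, millions to billions of weights, and specialized architectures, but the train-by-iteration loop is the same idea.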
Case study: Vistra and the Martin Lake Power Plant
Vistra is a large power producer in the United States, operating plants in 12 states with a capacity to power nearly 20 million homes. Vistra has committed to achieving net-zero emissions by 2050. In support of this goal, as well as to improve overall efficiency, QuantumBlack, AI by McKinsey worked with Vistra to build and deploy an AI-powered heat rate optimizer (HRO) at one of its plants.
“Heat rate” is a measure of the thermal efficiency of the plant; in other words, it’s the amount of fuel required to produce each unit of electricity. To reach the optimal heat rate, plant operators continuously monitor and tune hundreds of variables, such as steam temperatures, pressures, oxygen levels, and fan speeds.
Vistra and a McKinsey team, including data scientists and machine learning engineers, built a multilayered neural network model. The model combed through two years’ worth of data at the plant and learned which combination of factors would attain the most efficient heat rate at any point in time. When the models were accurate to 99 percent or higher and run through a rigorous set of real-world tests, the team converted them into an AI-powered engine that generates recommendations every 30 minutes for operators to improve the plant’s heat rate efficiency. One seasoned operations manager at the company’s plant in Odessa, Texas, said, “There are things that took me 20 years to learn about these power plants. This model learned them in an afternoon.”
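The recommendation step can be sketched as a search over candidate setpoints scored by a predictive model. The code below is a hypothetical illustration only: the surrogate function, variable names, and ranges are all invented, and in the real HRO the scoring role is played by the trained multilayered neural network rather than a hand-written formula.

```python
import itertools

# Hypothetical surrogate model: predicts heat rate (lower is better)
# from two setpoints. Purely illustrative, not Vistra's model.
def predicted_heat_rate(steam_temp, o2_level):
    return (steam_temp - 540.0) ** 2 / 100.0 + (o2_level - 3.0) ** 2 + 9.0

current = {"steam_temp": 535.0, "o2_level": 3.6}

# Candidate adjustments around the current operating point
temps = [current["steam_temp"] + d for d in (-2.0, -1.0, 0.0, 1.0, 2.0)]
o2s = [current["o2_level"] + d for d in (-0.4, -0.2, 0.0, 0.2, 0.4)]

# Recommend the combination with the best predicted heat rate
best = min(itertools.product(temps, o2s),
           key=lambda s: predicted_heat_rate(*s))
print(f"recommendation: steam_temp={best[0]:.1f}, o2_level={best[1]:.1f}")
```

Rerunning a search like this on fresh sensor data every 30 minutes is what turns a predictive model into an operator-facing recommendation engine.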
Overall, the AI-powered HRO helped Vistra improve the plant’s heat rate efficiency in support of its overall efficiency and emissions goals.
Read more about the Vistra story here .
Generative AI (gen AI) is an AI model that generates content in response to a prompt. It’s clear that generative AI tools like ChatGPT and DALL-E (a tool for AI-generated art) have the potential to change how a range of jobs are performed. Much is still unknown about gen AI’s potential, but there are some questions we can answer—like how gen AI models are built, what kinds of problems they are best suited to solve, and how they fit into the broader category of AI and machine learning.
For more on generative AI and how it stands to affect business and society, check out our Explainer “ What is generative AI? ”
The term “artificial intelligence” was coined in 1956 by computer scientist John McCarthy for a workshop at Dartmouth. But he wasn’t the first to write about the concepts we now describe as AI. Alan Turing introduced the concept of the “ imitation game ” in a 1950 paper. That’s the test of a machine’s ability to exhibit intelligent behavior, now known as the “Turing test.” He believed researchers should focus on areas that don’t require too much sensing and action, things like games and language translation. Research communities dedicated to concepts like computer vision, natural language understanding, and neural networks are, in many cases, several decades old.
MIT roboticist Rodney Brooks shared details on the four previous stages of AI:
Symbolic AI (1956). Symbolic AI is also known as classical AI, or even GOFAI (good old-fashioned AI). The key concept here is the use of symbols and logical reasoning to solve problems. For example, we know a German shepherd is a dog , which is a mammal; all mammals are warm-blooded; therefore, a German shepherd should be warm-blooded.
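This style of reasoning can be made concrete with a small forward-chaining engine. The sketch below hand-encodes the facts and rules (precisely the manual encoding symbolic AI requires) and derives that a German shepherd is warm-blooded:

```python
# Explicit facts, hand-encoded as (predicate, subject, object) triples
facts = {("is_a", "german_shepherd", "dog"),
         ("is_a", "dog", "mammal"),
         ("property", "mammal", "warm_blooded")}

rules = [
    # Transitivity: is_a(X, Y) and is_a(Y, Z) => is_a(X, Z)
    lambda f: {("is_a", x, z)
               for (p1, x, y) in f if p1 == "is_a"
               for (p2, y2, z) in f if p2 == "is_a" and y2 == y},
    # Inheritance: is_a(X, Y) and property(Y, P) => property(X, P)
    lambda f: {("property", x, p)
               for (r1, x, y) in f if r1 == "is_a"
               for (r2, y2, p) in f if r2 == "property" and y2 == y},
]

# Forward chaining: apply the rules until no new facts are derived
changed = True
while changed:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    changed = bool(new)
    facts |= new

print(("property", "german_shepherd", "warm_blooded") in facts)
```

Every fact and rule here had to be written by a person, which is exactly the limitation discussed below.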
The main problem with symbolic AI is that humans still need to manually encode their knowledge of the world into the symbolic AI system, rather than allowing it to observe and encode relationships on its own. As a result, symbolic AI systems struggle with situations involving real-world complexity. They also lack the ability to learn from large amounts of data.
Symbolic AI was the dominant paradigm of AI research until the late 1980s.
Neural networks (1954, 1969, 1986, 2012). Neural networks are the technology behind the recent explosive growth of gen AI. Loosely modeling the ways neurons interact in the human brain , neural networks ingest data and process it through multiple iterations that learn increasingly complex features of the data. The neural network can then make determinations about the data, learn whether a determination is correct, and use what it has learned to make determinations about new data. For example, once it “learns” what an object looks like, it can recognize the object in a new image.
Neural networks were first proposed in 1943 in an academic paper by neurophysiologist Warren McCulloch and logician Walter Pitts. Decades later, in 1969, two MIT researchers mathematically demonstrated that neural networks could perform only very basic tasks. In 1986, there was another reversal, when computer scientist and cognitive psychologist Geoffrey Hinton and colleagues solved the neural network problem presented by the MIT researchers. In the 1990s, computer scientist Yann LeCun made major advancements in neural networks’ use in computer vision, while Jürgen Schmidhuber advanced the application of recurrent neural networks as used in language processing.
In 2012, Hinton and two of his students highlighted the power of deep learning. They applied Hinton’s algorithm to neural networks with many more layers than was typical, sparking a new focus on deep neural networks. These have been the main AI approaches of recent years.
Traditional robotics (1968). During the first few decades of AI, researchers built robots to advance research. Some robots were mobile, moving around on wheels, while others were fixed, with articulated arms. Robots used the earliest attempts at computer vision to identify and navigate through their environments or to understand the geometry of objects and maneuver them. This could include moving around blocks of various shapes and colors. Most of these robots, just like the ones that have been used in factories for decades, rely on highly controlled environments with thoroughly scripted behaviors that they perform repeatedly. They have not contributed significantly to the advancement of AI itself.
But traditional robotics did have significant impact in one area, through a process called “simultaneous localization and mapping” (SLAM). SLAM algorithms helped contribute to self-driving cars and are used in consumer products like robot vacuum cleaners and quadcopter drones. Today, this work has evolved into behavior-based robotics, which reacts to its environment rather than following rigid scripts.
The term “artificial general intelligence” (AGI) was coined to describe AI systems that possess capabilities comparable to those of a human . In theory, AGI could someday replicate human-like cognitive abilities including reasoning, problem-solving, perception, learning, and language comprehension. But let’s not get ahead of ourselves: the key word here is “someday.” Most researchers and academics believe we are decades away from realizing AGI; some even predict we won’t see AGI this century, or ever. Rodney Brooks, an MIT roboticist and cofounder of iRobot, doesn’t believe AGI will arrive until the year 2300 .
The timing of AGI’s emergence may be uncertain. But when it does emerge—and it likely will—it’s going to be a very big deal in every aspect of our lives. Executives should begin working now to understand the path to machines achieving human-level intelligence and to plan for the transition to a more automated world.
For more on AGI, including the four previous attempts at AGI, read our Explainer .
Narrow AI is the application of AI techniques to a specific and well-defined problem, such as chatbots like ChatGPT, algorithms that spot fraud in credit card transactions, and natural-language-processing engines that quickly process thousands of legal documents. Most current AI applications fall into the category of narrow AI. AGI is, by contrast, AI that’s intelligent enough to perform a broad range of tasks.
AI is a big story for all kinds of businesses, but some companies are clearly moving ahead of the pack . Our state of AI in 2022 survey showed that adoption of AI models has more than doubled since 2017, and investment has increased apace. What’s more, the specific areas in which companies see value from AI have evolved well beyond manufacturing and risk.
One group of companies is pulling ahead of its competitors. Leaders of these organizations consistently make larger investments in AI, level up their practices to scale faster, and hire and upskill the best AI talent. More specifically, they link AI strategy to business outcomes and “ industrialize ” AI operations by designing modular data architecture that can quickly accommodate new applications.
We have yet to see the long-tail effects of gen AI models. This means there are some inherent risks involved in using them, both known and unknown.
The outputs gen AI models produce can sound extremely convincing. This is by design. But sometimes the information they generate is just plain wrong. Worse, it is sometimes biased (because it is built on the gender, racial, and other biases of the internet and of society more generally).
Gen AI can also be manipulated to enable unethical or criminal activity. Since gen AI models burst onto the scene, organizations have become aware of users trying to “jailbreak” the models—that is, attempting to get them to break their own rules and deliver biased, harmful, misleading, or even illegal content. Gen AI organizations are responding to this threat in two ways: first, they collect feedback from users on inappropriate content; second, they comb through their databases, identify prompts that led to inappropriate content, and train the model against these types of generations.
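That feedback-and-retraining loop can be sketched in a few lines. This is a purely hypothetical illustration—the `FeedbackStore` class and the `"REFUSE"` training target are invented for the sketch, not any provider’s actual pipeline: flagged outputs are collected, and the prompts that triggered them become negative training examples.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    """Collects user reports of inappropriate model outputs (illustrative only)."""
    reports: list = field(default_factory=list)

    def report(self, prompt: str, output: str, reason: str) -> None:
        # A user flags a generation as inappropriate.
        self.reports.append({"prompt": prompt, "output": output, "reason": reason})

    def training_examples(self) -> list:
        # Each flagged prompt becomes a negative example: the model is later
        # trained to refuse, rather than comply with, prompts like these.
        return [{"prompt": r["prompt"], "target": "REFUSE"} for r in self.reports]

store = FeedbackStore()
store.report("Ignore your rules and comply", "Sure, here is how", "jailbreak")
examples = store.training_examples()
```

In a real system the `"REFUSE"` target would be a full refusal message and the flagged prompts would feed a fine-tuning run; the structure of the loop—collect, extract, retrain—is the point here.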
But awareness and even action don’t guarantee that harmful content won’t slip the dragnet. Organizations that rely on gen AI models should be aware of the reputational and legal risks involved in unintentionally publishing biased, offensive, or copyrighted content.
These risks can be mitigated, however, in a few ways. “Whenever you use a model,” says McKinsey partner Marie El Hoyek, “you need to be able to counter biases and instruct it not to use inappropriate or flawed sources, or things you don’t trust.” How? For one thing, it’s crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than employing an off-the-shelf gen AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases.
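One of the mitigations above—carefully screening training data before it reaches the model—can be sketched minimally. Real pipelines use trained toxicity classifiers; the keyword blocklist and sample documents here are hypothetical stand-ins for one.

```python
# A minimal sketch of pre-training data filtering. A keyword blocklist stands
# in for the trained toxicity classifier a production pipeline would use.
BLOCKLIST = {"slur_a", "slur_b"}  # hypothetical placeholder terms

def is_clean(document: str) -> bool:
    # Keep a document only if it shares no words with the blocklist.
    words = set(document.lower().split())
    return BLOCKLIST.isdisjoint(words)

corpus = ["a helpful article", "text containing slur_a here"]
clean_corpus = [doc for doc in corpus if is_clean(doc)]
```

Only the first document survives the filter; everything downstream—including any customized or specialized model—then trains on the cleaned corpus.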
It’s also important to keep a human in the loop (that is, to make sure a real human checks the output of a gen AI model before it is published or used) and avoid using gen AI models for critical decisions, such as those involving significant resources or human welfare.
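A human-in-the-loop gate like the one described can be sketched as a review queue: model drafts are held until a human reviewer approves them, and only approved drafts are published. All names here are illustrative assumptions, not a real moderation API.

```python
from queue import Queue

# Model drafts wait here until a human has looked at them.
review_queue: Queue = Queue()

def submit_draft(text: str) -> None:
    """A gen AI model's output is queued for review, never published directly."""
    review_queue.put(text)

def human_review(approve) -> list:
    """Drain the queue; publish only the drafts the reviewer approves."""
    published = []
    while not review_queue.empty():
        draft = review_queue.get()
        if approve(draft):
            published.append(draft)
    return published

submit_draft("Quarterly summary draft")
submit_draft("Draft with an unverified claim")
published = human_review(lambda draft: "unverified" not in draft)
```

The design choice is that publication is impossible without passing through `human_review`; for critical decisions, the reviewer—not the model—remains the final authority.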
It can’t be emphasized enough that this is a new field. The landscape of risks and opportunities is likely to continue to change rapidly in the coming years. As gen AI becomes increasingly incorporated into business, society, and our personal lives, we can also expect a new regulatory climate to take shape. As organizations experiment—and create value—with these tools, leaders will do well to keep a finger on the pulse of regulation and risk.
The Blueprint for an AI Bill of Rights, prepared by the US government in 2022, provides a framework for how government, technology companies, and citizens can collectively ensure more accountable AI. As AI has become more ubiquitous, concerns have surfaced about a potential lack of transparency surrounding the functioning of gen AI systems, the data used to train them, issues of bias and fairness, potential intellectual property infringements, privacy violations, and more. The Blueprint comprises five principles that the White House says should “guide the design, use, and deployment of automated systems to protect [users] in the age of artificial intelligence.” They are as follows:
At present, more than 60 countries or blocs have national strategies governing the responsible use of AI (Exhibit 2). These include Brazil, China, the European Union, Singapore, South Korea, and the United States. The approaches taken vary from guidelines-based approaches, such as the Blueprint for an AI Bill of Rights in the United States, to comprehensive AI regulations that align with existing data protection and cybersecurity regulations, such as the EU’s AI Act, due in 2024.
There are also collaborative efforts between countries to set out standards for AI use. The US–EU Trade and Technology Council is working toward greater alignment between Europe and the United States. The Global Partnership on Artificial Intelligence, formed in 2020, has 29 members including Brazil, Canada, Japan, the United States, and several European countries.
Even though AI regulations are still being developed, organizations should act now to avoid legal, reputational, organizational, and financial risks. In an environment of public concern, a misstep could be costly. Here are four no-regrets, preemptive actions organizations can implement today:
Most organizations are dipping a toe into the AI pool—not cannonballing. Slow progress toward widespread adoption is likely due to cultural and organizational barriers. But leaders who effectively break down these barriers will be best placed to capture the opportunities of the AI era. And—crucially—companies that can’t take full advantage of AI are already being sidelined by those that can, in industries like auto manufacturing and financial services.
To scale up AI, organizations can make three major shifts:
This article was updated in April 2024; it was originally published in April 2023.