From IQ to AI: The Case for Multidimensional Intelligence
For more than a century, we have tried to reduce intelligence to a single number, a value that promises to capture who we are, what we can do, and what we might become. But what if this tidy measurement has misled us all along? What if intelligence, in both people and machines, is far messier, richer, and more varied than any test score or benchmark could ever reveal? As artificial intelligence (AI) becomes more woven into daily life, the time has come to rethink what it really means to be “intelligent” and why our old models no longer suffice.
Human Intelligence
Intelligence quotient, or IQ, has long been set forth as a definitive measure of human cognitive ability. Its precursor was developed in the early 20th century by French psychologist Alfred Binet, in collaboration with Théodore Simon, as a framework intended not to rank intelligence hierarchically but to identify children in need of support in France’s newly established compulsory education system. Tests built on this framework produced a single number intended to reflect a child’s mental age. Binet never claimed that this value captured the full breadth of human intelligence, nor did he contend that it was fixed; indeed, as a measure of mental age, one would expect it to change as a child grows older. Despite this, tests derived from Binet’s work were soon co-opted for broader and more deterministic purposes.
Intelligence testing was institutionalized in the U.S. military during World War I and eventually found widespread civilian use, especially in schools. Over time, IQ scores came to be treated in the public mind not as fallible, context-dependent estimates but as stable, objective facts about individuals. This misplaced faith in IQ’s precision masked its deep unreliability, allowing a flawed measure to shape lives and to justify profoundly harmful policies.
There is no shortage of criticism of intelligence testing from psychologists, philosophers, and sociologists. A popularization of their objections appears in The Mismeasure of Man by biologist Stephen Jay Gould. There, Gould summarized arguments against the flattening of cognition into a single number. Furthermore, the assumption that intelligence is fixed is not supported by empirical data. Longitudinal studies show that intelligence test scores can vary significantly over time due to environmental, educational, and motivational factors.
Even worse, intelligence test outcomes have been used to support racism and eugenics in the U.S. and elsewhere. American psychologist Henry Goddard used test data to argue for forced sterilization of those deemed “feebleminded,” a broad and unscientific category often applied to the poor, the disabled, people of color, and immigrants. Goddard developed an infamous classification system that was used to institutionalize and sterilize tens of thousands under state eugenics laws. It is now believed that Goddard fabricated at least some—if not a large portion—of the data used to support these programs. Eventually, Goddard accepted that his research methods were flawed (but he remained a segregationist).
Modern intelligence testing is more broadly based but still problematic. The 2003 edition of the Stanford-Binet test produces an IQ value from a weighting of individual intelligences: fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory. The 2024 Wechsler Adult Intelligence Scale (WAIS) produces index scores (covering verbal comprehension, reasoning, working memory, and processing speed) that are aggregated into an IQ score. While outcomes of these tests appear more stable and predictive than those of the early 20th century, objections remain: the underlying assumption that intelligence is quantifiable in one number, cultural and socioeconomic biases, and an inability to predict test takers’ general success outside of a few specific contexts.
Nonetheless, even though the intelligence testing landscape is not as pernicious as it was in Goddard’s day, there is still ample room for improvement. Standardized tests such as the SAT (Scholastic Assessment Test) and ACT (American College Testing) have become deeply embedded in the U.S. educational system, marketed as objective tools for assessing college readiness and predicting academic success. In practice, however, these tests disproportionately reward socioeconomic privilege (students with high levels of household income, parental education, and access to resources tend to obtain higher scores) and have fueled the growth of a lucrative “testing industrial complex” that commodifies access to higher education.1

Furthermore, intelligence tests are poorly equipped to assess neurodivergent individuals, such as those with dyslexia, autism, attention challenges, or other atypical cognitive profiles. The tests rely on narrow, normative assumptions about how intelligence is expressed, processed, and demonstrated. Such assumptions often exclude or misrepresent the cognitive challenges (and strengths) of neurodivergence. Numerous efforts have been made to address the issue by recasting human intelligence as a multi-dimensional set of qualities that are dynamic over an individual’s lifespan, but none have gained widespread public traction. One of the more well-known efforts is Howard Gardner’s theory of multiple intelligences, which proposes that there are eight distinct intelligences rather than a single, general intelligence. These intelligences are linguistic, logical-mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalist.2
For example, bodily-kinesthetic includes coordination, dexterity, and movement control, including the ability to use one’s body to express emotion. This is a characteristic that is strong in professional athletes, dancers, and craftspeople. Interpersonal intelligence encompasses sensitivity to other individuals’ moods, feelings, temperaments, and motivations, while intrapersonal intelligence relates to self-reflection, introspection, and goal-setting. These latter two intelligences are applicable in almost any setting.
Other arrangements and re-imaginings of intelligence are possible, including some at much finer granularity. But for the purposes of this discussion, the exact number of intelligences—be it 8, 16, or 100—matters less than the recognition that multiple intelligences exist.3
When considering the cognitive strengths and challenges of individuals, it is not difficult to find examples of multiple intelligences in practice:
- A childhood friend who was a mediocre student but a successful business leader as an adult
- A university professor acclaimed in their field who seems unable to develop personal friendships or relationships
- A teenager who can ace a calculus class but struggles to tie their own shoes
- A highly empathic adult who is deathly afraid of small, harmless insects
- A technology company CEO who falls for obvious online conspiracy theories and pseudoscience
- A mechanic with little formal education who can diagnose and repair complex engine problems by sound and feel alone
- An artist who struggles with basic arithmetic yet creates profoundly moving visual works
- A child who has trouble reading but shows extraordinary sensitivity to animals and natural environments
- A brilliant scientist who routinely forgets to attend meetings and struggles to meet deadlines
None of these individuals—nor any of us—can have their intelligence, abilities, potential, or humanity summarized by any one scalar value. There is a tendency to think, “This person is so exceptional at task X that it makes no sense that they cannot perform task Y equally well.” But if tasks X and Y require different intelligences, such an inability should not be surprising at all.4 Along the same lines, an individual who has been unusually successful at one task may assume without evidence that they are equally competent across unrelated domains. This assumption often goes unchallenged when the individual’s status or authority reinforces it. Such an over-generalization reflects a misunderstanding of intelligence as unitary and transferable, when in reality, cognitive strengths are often domain-specific.
Accordingly, the facets of human intelligence are better represented as a vector: a multidimensional array of values in which each dimension corresponds to a distinct type of intelligence. Any such representation is necessarily incomplete, yet it is an improvement over previous characterizations. In simpler terms, intelligence is like a set of sliders on a control panel, each measuring a different skill or way of thinking. No one has all of the sliders maxed out.
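The slider metaphor can be sketched in code. The following is purely illustrative: the dimension names follow Gardner’s categories, but the numeric values are invented and carry no psychometric meaning.

```python
# A hypothetical "slider panel" for one person's intelligence profile.
# Dimension names follow Gardner's eight intelligences; the 0.0-1.0
# values are invented for illustration only.
profile = {
    "linguistic": 0.8,
    "logical_mathematical": 0.4,
    "spatial": 0.6,
    "bodily_kinesthetic": 0.3,
    "musical": 0.7,
    "interpersonal": 0.9,
    "intrapersonal": 0.5,
    "naturalist": 0.6,
}

# Collapsing the vector to one scalar (as IQ does) discards most of the
# information: very different profiles can share the same average.
iq_like_scalar = sum(profile.values()) / len(profile)

# The vector view, by contrast, lets us ask dimension-specific questions.
strongest = max(profile, key=profile.get)
```

Note that two people with wildly different sliders can produce the same `iq_like_scalar`; that lost detail is precisely the argument against the single-number view.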
Moreover, these values can change over time. This is largely due to neuroplasticity, the brain’s ability to reorganize itself by forming new neural connections throughout life. Neuroplasticity enables individuals to improve or compensate in one area of intelligence even if they have long-standing weaknesses in another. For example, someone with initially poor interpersonal skills may, through practice and feedback, develop greater empathy and social awareness over time.
Also, different types of intelligence tend to develop along different timelines. Bodily-kinesthetic intelligence may peak in early adulthood, while verbal skills often deepen with age and life experience. Even in older adults, studies have shown that targeted mental training, physical activity, and enriched environments can lead to measurable improvements in cognitive function. This challenges the outdated assumption that any form of intelligence is a static, genetically predetermined capacity.5
Machine Intelligence
A nuanced, dynamic view of human intelligence provides a point of comparison for consideration of machine intelligence, particularly in the age of AI and large language models (LLMs). While human cognition is shaped by biology, experience, and the brain’s inherent adaptability, machine intelligence is structured through algorithms, training data, and computational architectures. Yet both can exhibit specialized capabilities and limitations across different domains.
Just as no human has all of their cognitive sliders maxed out, AI systems also show uneven performance, excelling in some tasks while failing at others in ways that reveal underlying design choices and constraints of training data. Understanding these parallels and divergences can be part of making sense of what machine intelligence is (and isn’t), and how it compares to human intelligence.
This comparison suggests a broadening of the definition of intelligence to include facets of human intelligence and facets of machine intelligence with some overlap between the two. Thus, the multidimensional model can apply not only to the intelligence of humans but also to that of machines.
Traditional conceptions of machine intelligence have often focused on achieving human-like general intelligence or surpassing human performance on narrowly defined benchmarks. Indeed, the first proposed measure of machine intelligence, the Turing test, is effectively a single pass/fail metric, as naive and incomplete as the IQ score. Proposed by computer scientist Alan Turing in 1950, the test suggests that if a machine can produce text-based conversation indistinguishable from that of a human, it should be considered intelligent.
But just as humans demonstrate a range of cognitive strengths and weaknesses, modern AI systems like LLMs display highly specific patterns of competence. For example, an LLM might demonstrate remarkable fluency in generating coherent text or summarizing vast bodies of knowledge, while struggling with basic commonsense reasoning, long-term planning, or contextual nuance. These are capacities that, in humans, might draw upon interpersonal or intrapersonal intelligences, as well as lived experience. In an era when LLMs can pass superficial forms of the Turing test with ease, the test’s limitations are clear: it conflates linguistic mimicry with understanding, and success in conversation with intelligence.
Thinking in terms of multidimensional machine intelligence allows us to move past simplistic comparisons (e.g., “smarter than a human” or “able to fool a human”) and toward a more granular understanding of AI’s capabilities and limitations. In this light, intelligence, whether biological or artificial, emerges not as a singular quality but as a distributed profile of strengths and gaps, each informed by the respective system’s developmental context, whether that be a human’s neurons and experience or a machine’s datasets and models.
Along these lines, the table below includes definitions of eight possible machine intelligences, describes their core functions, and identifies overlap (or lack thereof) with human intelligences.
| Dimension | Definition | Core Functions | Overlap with Human Intelligence |
|---|---|---|---|
| Linguistic Intelligence | Ability to understand, generate, and manipulate human language. | Translation, summarization, dialogue, question answering. | Linguistic (functional, not structural: humans ground language in meaning, emotion, and experience; LLMs rely on statistical pattern recognition) |
| Logical-Computational Intelligence | Capacity for structured reasoning, symbolic manipulation, and problem-solving. | Code generation, mathematics, formal logic tasks. | Logical-Mathematical |
| Spatial-Relational Intelligence | Ability to interpret and generate visual or spatial relationships. | Image recognition, object detection, robotic navigation. | Spatial |
| Interfacing Intelligence | Competence in using external tools or systems to accomplish tasks. | Calling APIs or plugins, using calculators, database querying, tool augmentation. | No direct analog; akin to human tool-use |
| Contextual Adaptability | Ability to adjust behavior or output based on situational context. | Prompt tuning, memory-based conditioning, few-shot learning. | Intrapersonal / Interpersonal (partial overlap) |
| Self-Monitoring and Calibration | Capability to estimate uncertainty and recognize limitations. | Expressing uncertainty, deferring tasks, limiting hallucinations. | Intrapersonal (partial overlap) |
| Instructional or Task Intelligence | Facility with following instructions and executing procedural steps. | Step-by-step problem solving, rule-following, workflow management. | Bodily-Kinesthetic (execution/planning analog; partial overlap) |
| Synthetic Memory Integration | Ability to interface with persistent external memory systems at scale and with precision. | Embedding retrieval, long-context attention, querying structured data stores. | None—uniquely computational |
As an example, consider linguistic intelligence (the ability to understand and generate language). While both humans and LLMs exhibit linguistic intelligence, the underlying mechanisms are fundamentally different. Humans typically rely on conceptual reasoning, semantic understanding, and pragmatic context. Language use is deeply reliant on lived experience, social cognition, sensory input, and goal-directed thought. When a human speaks or writes, they often do so with intentionality, drawing on meaning, emotion, and a model of the listener’s mind.
LLMs, by contrast, operate through statistical pattern matching. They are trained on massive text corpora to predict the next token in a sequence based on learned associations between words, phrases, and structures. Although the output can appear fluent and coherent, often indistinguishable from human text, it arises from probabilistic inference over text patterns, not from understanding in the human sense. LLMs lack grounding in sensory reality, and they do not know what words mean—they merely recognize how words tend to be used in relation to one another.
The overlap, then, is functional but not structural. Both humans and machines can produce grammatically correct, contextually appropriate language, but only humans do so by connecting words to concepts, beliefs, intentions, and embodied knowledge. LLMs, on the other hand, can hallucinate by making statements that are probabilistically likely but factually incorrect.6
The eight intelligences selected for this table are certainly debatable. There is no well-established framework for machine intelligence. Further, the extent and nature of the overlap of any of these with human intelligence remains unclear as the capabilities of LLMs are evolving at a rapid pace. All of this strongly suggests that these definitions will need some tuning.
A Unified View
Nonetheless, even this initial effort can help frame and contextualize our interactions with other individuals and machines. The multiple intelligences theory for humans (as proposed by Gardner) and the multidimensional machine intelligence framework outlined above can be combined into a unified taxonomy of cognitive capabilities by treating both as vector spaces of distinct but sometimes overlapping functional domains. Each dimension in this common framework represents a type of cognitive processing or problem-solving mode, such as linguistic, spatial, reasoning, memory manipulation, or contextual adaptation. Where overlap exists, such as in linguistic and logical reasoning, comparisons can be made based on outputs, while recognizing architectural and functional differences (e.g., conceptual reasoning vs. statistical inference) that drive these outputs. Where no overlap exists, such as synthetic memory integration in machines or certain types of bodily-kinesthetic intelligence in humans, the framework can set those sliders to a null value.
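As a hypothetical sketch of this unified framework, human and machine profiles can share one dimension space, with a null value marking the "sliders" that do not apply to a given kind of cognition, and comparison restricted to the dimensions both possess. All dimension names and numbers below are invented for illustration.

```python
# Hypothetical unified profiles. None marks a dimension with no analog
# for that kind of cognition (a "null slider"); values are invented.
human = {
    "linguistic": 0.8,
    "logical": 0.5,
    "spatial": 0.6,
    "bodily_kinesthetic": 0.7,
    "synthetic_memory": None,    # no human analog
}
machine = {
    "linguistic": 0.9,
    "logical": 0.7,
    "spatial": 0.4,
    "bodily_kinesthetic": None,  # no LLM analog
    "synthetic_memory": 0.8,
}

def comparable_dimensions(a, b):
    """Dimensions where both profiles have a non-null value, i.e. the
    only axes on which an output-level comparison makes sense."""
    return sorted(
        d for d in a
        if d in b and a[d] is not None and b[d] is not None
    )

shared = comparable_dimensions(human, machine)
```

Here `shared` contains only the overlapping dimensions (linguistic, logical, spatial); the null-valued dimensions are excluded from comparison rather than scored as zero, reflecting that they are inapplicable, not merely weak.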
This unified view allows us to treat both human and machine intelligence as configurable profiles rather than scalar rankings. Just as we should not expect every person on the autism spectrum to thrive in a highly social or highly contextual educational environment, we should not expect an LLM to react with emotion when ingesting a piece of music (especially music that has not been described in its training data).
Recognizing these mismatches between abilities and expectations encourages a more compassionate and realistic approach to both human–human and human–machine interaction. We meet people and systems where they are, rather than where we assume they should be. By doing so with humans, we foster communication that is not only more empathic and respectful but also more effective. By doing so with machines we avoid misattributing capabilities they do not possess, reduce frustration and over-reliance, and instead design interactions that exercise their actual abilities.
Also, appreciating the dynamism of these intelligences is equally important. Human cognitive strengths can shift over time through development, learning, and experience, just as machine capabilities can evolve through new architectures, retraining, fine-tuning, or integration with new tools. Understanding the multidimensional profile of the cognitive substrate we are engaging with—human or artificial—makes it possible to tailor our communication in ways that are better received, better processed, and ultimately more useful.
Conclusion
There is a need to reject singular, reductive definitions of intelligence in both humans and machines. Intelligence is not a fixed quantity but a profile of varied abilities that evolve across time, context, and cognitive architecture. Recognizing this complexity allows us to better support neurodivergent individuals, critique outdated testing systems, and engage with AI in ways that are both pragmatic and principled. Moving forward, our educational, technological, and ethical frameworks should be built on the understanding that intelligence is not what fits into a number or a test. A multidimensional view can reshape how we design systems, build institutions, and relate to one another in a world of increasingly varied intelligence.
Notes
1. The College Board and other testing companies function as quasi-monopolies, generating substantial revenue not only from testing fees but also from prep materials, score reports, and student data sales.
2. Some variations include a ninth “existential” intelligence, the capacity to grapple with abstract questions about life, consciousness, religion, and reality.
3. A caveat is appropriate here. Given the misuse of IQ and other single numerical measures of intelligence, it is not out of the realm of possibility that any quantification of any number of intelligences could also be misused. Indeed, it is human nature to oversimplify and generalize. But a theory of human intelligence that is descriptive can and should be logically separated from any public policy that is prescriptive.
4. A classic example that we are all painfully aware of is that the intelligence profile that makes one a good candidate for public office is dramatically different from the intelligence profile that makes one an effective holder of that office.
5. But one should not ignore or downplay the importance of genetics. An individual’s genetic makeup is the starting point for that individual’s journey through life. It is not determinative, but it is well known that intellectual capabilities (and certain types of neurodivergence) travel together within families.
6. Humans too are prone to cognitive biases that lead us to form or express beliefs with unwarranted certainty, often in the absence of solid evidence.




