The General Intelligence Factor
Despite some popular assertions, a single factor for intelligence, called g, can be measured with IQ tests and does predict success in life
No subject in psychology has provoked more intense public controversy than the study of human intelligence. From its beginning, research on how and why people differ in overall mental ability has fallen prey to political and social agendas that obscure or distort even the most well-established scientific findings. Journalists, too, often present a view of intelligence research that is exactly the opposite of what most intelligence experts believe. For these and other reasons, public understanding of intelligence falls far short of public concern about it. The IQ experts discussing their work in the public arena can feel as though they have fallen down the rabbit hole into Alice's Wonderland.
The debate over intelligence and intelligence testing focuses on the question of whether it is useful or meaningful to evaluate people according to a single major dimension of cognitive competence. Is there indeed a general mental ability we commonly call "intelligence," and is it important in the practical affairs of life? The answer, based on decades of intelligence research, is an unequivocal yes. No matter their form or content, tests of mental skills invariably point to the existence of a global factor that permeates all aspects of cognition. And this factor seems to have considerable influence on a person's practical quality of life. Intelligence as measured by IQ tests is the single most effective predictor known of individual performance at school and on the job. It also predicts many other aspects of well-being, including a person's chances of divorcing, dropping out of high school, being unemployed or having illegitimate children [see illustration].
By now the vast majority of intelligence researchers take these findings for granted. Yet in the press and in public debate, the facts are typically dismissed, downplayed or ignored. This misrepresentation reflects a clash between a deeply felt ideal and a stubborn reality. The ideal, implicit in many popular critiques of intelligence research, is that all people are born equally able and that social inequality results only from the exercise of unjust privilege. The reality is that Mother Nature is no egalitarian. People are in fact unequal in intellectual potential--and they are born that way, just as they are born with different potentials for height, physical attractiveness, artistic flair, athletic prowess and other traits. Although subsequent experience shapes this potential, no amount of social engineering can make individuals with widely divergent mental aptitudes into intellectual equals.
Of course, there are many kinds of talent, many kinds of mental ability and many other aspects of personality and character that influence a person's chances of happiness and success. The functional importance of general mental ability in everyday life, however, means that without onerous restrictions on individual liberty, differences in mental competence are likely to result in social inequality. This gulf between equal opportunity and equal outcomes is perhaps what pains Americans most about the subject of intelligence. The public intuitively knows what is at stake: when asked to rank personal qualities in order of desirability, people put intelligence second only to good health. But with a more realistic approach to the intellectual differences between people, society could better accommodate these differences and minimize the inequalities they create.
Extracting g
Early in the century-old study of intelligence, researchers discovered that all tests of mental ability ranked individuals in about the same way. Although mental tests are often designed to measure specific domains of cognition--verbal fluency, say, or mathematical skill, spatial visualization or memory--people who do well on one kind of test tend to do well on the others, and people who do poorly generally do so across the board. This overlap, or intercorrelation, suggests that all such tests measure some global element of intellectual ability as well as specific cognitive skills. In recent decades, psychologists have devoted much effort to isolating that general factor, which is abbreviated g, from the other aspects of cognitive ability gauged in mental tests.
The statistical extraction of g is performed by a technique called factor analysis. Introduced at the turn of the century by British psychologist Charles Spearman, factor analysis determines the minimum number of underlying dimensions necessary to explain a pattern of correlations among measurements. A general factor suffusing all tests is not, as is sometimes argued, a necessary outcome of factor analysis. No general factor has been found in the analysis of personality tests, for example; instead the method usually yields at least five dimensions (neuroticism, extraversion, conscientiousness, agreeableness and openness to ideas), each relating to different subsets of tests. But, as Spearman observed, a general factor does emerge from analysis of mental ability tests, and leading psychologists, such as Arthur R. Jensen of the University of California at Berkeley and John B. Carroll of the University of North Carolina at Chapel Hill, have confirmed his findings in the decades since. Partly because of this research, most intelligence experts now use g as the working definition of intelligence.
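To make the extraction step concrete, here is a minimal sketch in Python (not part of the original article) of the kind of analysis the text describes. It simulates scores on six hypothetical mental tests that all share a single latent ability, then pulls out the dominant common factor from their correlation matrix. For brevity it uses a principal-axis style eigendecomposition to stand in for a full factor analysis; the test names, loadings and sample size are illustrative assumptions, not empirical values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

# Simulate six mental tests that each reflect one latent general ability
# plus test-specific noise; the loadings are illustrative, not real data.
loadings_true = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
g_latent = rng.standard_normal(n_people)
noise = rng.standard_normal((n_people, n_tests))
scores = g_latent[:, None] * loadings_true + noise * np.sqrt(1 - loadings_true**2)

# Correlation matrix: every pair of tests correlates positively,
# the intercorrelation (positive manifold) described in the text.
R = np.corrcoef(scores, rowvar=False)

# Principal-factor extraction: the leading eigenvector of R gives each
# test's loading on the dominant common factor.
eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
first = eigvecs[:, -1] * np.sqrt(eigvals[-1])
first *= np.sign(first.sum())                 # fix the arbitrary sign

print("estimated general-factor loadings:", np.round(first, 2))
print("share of variance explained:", round(eigvals[-1] / n_tests, 2))
```

Run on data like these, every test loads positively on the first factor and that factor accounts for far more variance than any other dimension, which is the pattern Spearman observed in real mental-test batteries and which does not appear for personality tests.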
The general factor explains most differences among individuals in performance on diverse mental tests. This is true regardless of what specific ability a test is meant to assess, regardless of the test's manifest content (whether words, numbers or figures) and regardless of the way the test is administered (in written or oral form, to an individual or to a group). Tests of specific mental abilities do measure those abilities, but they all reflect g to varying degrees as well. Hence, the g factor can be extracted from scores on any diverse battery of tests.
Conversely, because every mental test is "contaminated" by the effects of specific mental skills, no single test measures only g. Even the scores from IQ tests--which usually combine about a dozen subtests of specific cognitive skills--contain some "impurities" that reflect those narrower skills. For most purposes, these impurities make no practical difference, and g and IQ can be used interchangeably. But if they need to, intelligence researchers can statistically separate the g component of IQ. The ability to isolate g has revolutionized research on general intelligence, because it has allowed investigators to show that the predictive value of mental tests derives almost entirely from this global factor rather than from the more specific aptitudes measured by intelligence tests.
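The separation of g from an IQ composite can likewise be sketched in code. The following Python example, again a simplified assumption rather than any researcher's actual procedure, simulates about a dozen subtests, forms an IQ-like composite by summing them, and estimates g with a one-factor model from scikit-learn. The point is only that the two scores track each other closely while remaining distinct, since the composite still carries the "impurities" of the specific subtests.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_people, n_tests = 1000, 12   # roughly a dozen subtests, as in an IQ battery

# Hypothetical subtest scores: a shared general factor plus specific-skill noise.
load = rng.uniform(0.4, 0.8, n_tests)
g_true = rng.standard_normal(n_people)
subtests = (g_true[:, None] * load
            + rng.standard_normal((n_people, n_tests)) * np.sqrt(1 - load**2))

# "IQ" here is simply the sum of subtests; the g estimate is the factor
# score from a single-factor model fitted to the same subtests.
iq_composite = subtests.sum(axis=1)
g_score = FactorAnalysis(n_components=1, random_state=0).fit_transform(subtests).ravel()

# abs() because the sign of a factor score is arbitrary.
print("corr(IQ composite, estimated g):", round(abs(np.corrcoef(iq_composite, g_score)[0, 1]), 3))
print("corr(estimated g, true g):      ", round(abs(np.corrcoef(g_score, g_true)[0, 1]), 3))
```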
In addition to quantifying individual differences, tests of mental abilities have also offered insight into the meaning of intelligence in everyday life. Some tests and test items are known to correlate better with g than others do. In these items the "active ingredient" that demands the exercise of g seems to be complexity. More complex tasks require more mental manipulation, and this manipulation of information--discerning similarities and inconsistencies, drawing inferences, grasping new concepts and so on--constitutes intelligence in action. Indeed, intelligence can best be described as the ability to deal with cognitive complexity.
This description coincides well with lay perceptions of intelligence. The g factor is especially important in just the kind of behaviors that people usually associate with "smarts": reasoning, problem solving, abstract thinking, quick learning. And whereas g itself describes mental aptitude rather than accumulated knowledge, a person's store of knowledge tends to correspond with his or her g level, probably because that accumulation represents a previous adeptness in learning and in understanding new information. The g factor is also the one attribute that best distinguishes among persons considered gifted, average or retarded.
Several decades of factor-analytic research on mental tests have confirmed a hierarchical model of mental abilities. The evidence, summarized most effectively in Carroll's 1993 book, Human Cognitive Abilities, puts g at the apex in this model, with more specific aptitudes arrayed at successively lower levels: the so-called group factors, such as verbal ability, mathematical reasoning, spatial visualization and memory, are just below g, and below these are skills that are more dependent on knowledge or experience, such as the principles and practices of a particular job or profession.
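The shape of that hierarchy can be shown schematically. The nesting below is an illustrative sketch of the structure the text describes, not Carroll's complete three-stratum taxonomy; the example skills at the bottom level are assumptions chosen for clarity.

```python
# g at the apex, broad group factors beneath it, and narrower
# knowledge- or experience-dependent skills at the bottom.
ability_hierarchy = {
    "g": {
        "verbal ability": ["vocabulary", "reading comprehension"],
        "mathematical reasoning": ["arithmetic", "algebraic word problems"],
        "spatial visualization": ["mental rotation", "map reading"],
        "memory": ["digit span", "paired-associate recall"],
    }
}

def print_hierarchy(node, depth=0):
    """Print the nested factor structure, one stratum per indentation level."""
    for factor, children in node.items():
        print("  " * depth + factor)
        if isinstance(children, dict):
            print_hierarchy(children, depth + 1)
        else:
            for skill in children:
                print("  " * (depth + 1) + skill)

print_hierarchy(ability_hierarchy)
```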
Some researchers use the term "multiple intelligences" to label these sets of narrow capabilities and achievements. Psychologist Howard Gardner of Harvard University, for example, has postulated that eight relatively autonomous "intelligences" are exhibited in different domains of achievement. He does not dispute the existence of g but treats it as a specific factor relevant chiefly to academic achievement and to situations that resemble those of school. Gardner does not believe that tests can fruitfully measure his proposed intelligences; without tests, no one can at present determine whether the intelligences are indeed independent of g (or each other). Furthermore, it is not clear to what extent Gardner's intelligences tap personality traits or motor skills rather than mental aptitudes.