Intelligence quotient

An intelligence quotient or IQ is a score derived from a set of standardized tests developed to measure a person's cognitive abilities ("intelligence") in relation to their age group. An IQ test does not measure intelligence the way a ruler measures height (absolutely), but rather the way a race measures speed (relatively).

For people living in the prevailing conditions of the developed world, IQ is highly heritable, and by adulthood the influence of family environment on IQ is undetectable. IQ test scores are correlated with measures of brain structure and function, as well as performance on simple tasks that anyone can complete within a few seconds.

IQ is correlated with academic success; it can also predict important life outcomes such as job performance, socioeconomic advancement, and "social pathologies". Recent work has demonstrated links between IQ and health, longevity, and functional literacy.

History
Early attempts at mental testing were those of Sir Francis Galton (1863) and James McKeen Cattell (1888). These tests were more physical than mental. Their importance lay in developing the idea that intelligence can be measured and differs from person to person. They also proposed that the results of mental tests would be normally distributed within a large population.

Alfred Binet and his colleague Theodore Simon created the Binet-Simon scale in 1905, which used testing to identify students who could benefit from extra help in school. Their assumption was that lower scores indicated the need for more teaching, not an inability to learn. This interpretation is still held by some modern experts.

Notably, Binet himself made no claim that his test properly measured intelligence. He stated in his paper New Methods for the Diagnosis of the Intellectual Level of Subnormals that
 * "This scale properly speaking does not permit the measure of the intelligence, because intellectual qualities are not superposable, and therefore cannot be measured as linear surfaces are measured, but are on the contrary, a classification, a hierarchy among diverse intelligences; and for the necessities of practice this classification is equivalent to a measure."

In 1910, Henry H. Goddard proposed three categories for the "feeble-minded" based on IQ scores: moron (IQ of 51–70), imbecile (IQ of 26–50), and idiot (IQ of 0–25). This taxonomy was the standard of intelligence research for decades.

In 1916, Stanford University psychologist Lewis Terman released the "Stanford Revision of the Binet-Simon Scale", generally known as the Stanford-Binet test. This became the most commonly administered test for many decades. The term "intelligence quotient," in which each student's score was his or her tested mental age divided by chronological age (multiplied by 100), was adopted by Terman from a 1912 proposal by German psychologist William Stern. This led to refined testing developed by Robert Yerkes for United States Army recruits.

Today, the most commonly administered IQ test is the Wechsler Intelligence Scale for Children (WISC), originally developed by David Wechsler in 1949 and revised as the WISC-R in 1974. The WISC-III test comprises ten types of problems, categorized by difficulty and by skill type (verbal and performance scales). A further revision, the WISC-IV, was released in 2003 and is used regularly in assessments. However, the interpretation of various combinations of subscales is still being researched. Another notable IQ test is the Bayley Scales of Infant Development, regarded as the 'best' means of testing cognitive development in infants.

Today, informal online IQ tests are popular, but they are at best rough approximations. These tests are not expert-certified, and notable limitations include a small number of questions and a lack of time limits.

IQ score distribution
IQ scores are expressed as a number normalized so that the average IQ in an age group is 100. In other words, an individual scoring 115 is above average compared to people in the same age group. It is common practice to standardize so that the standard deviation (σ) of scores is 15, although some IQ tests use different scales (for example, the Stanford-Binet IQ test uses a standard deviation of 16, and the Cattell IIIB test uses a standard deviation of 24). Tests are designed so that the distribution of IQ scores is Gaussian; that is, it follows a bell curve. A difference has been documented between the IQ score distributions of left-handed and right-handed test subjects; the distribution in left-handed people tends to cluster at the two extremes of the IQ scale.

(The following numbers apply to IQ scales with standard deviation σ = 15.) Roughly 68% of the population has an IQ between 85 and 115. The "normal" range, between −2 and +2 standard deviations from the mean, is 70 to 130 and contains about 95% of the population. An accurate score below 70 may indicate mental retardation, and a score above 130 may indicate intellectual giftedness. Retardation may result from normal variation or from a genetic or developmental malady; analogously, some otherwise normal people are very short, while others have dwarfism. Giftedness appears to be normal variation; autistic savants often have astonishing cognitive powers but below-average IQs. Scores outside the range 55 to 145 must be interpreted cautiously, because there are fewer respondents with which to make comparisons in those ranges. Moreover, at such extreme values, the normal distribution is a less accurate estimate of the true IQ distribution.
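The percentages above can be reproduced directly from the normal model. The following sketch, which assumes the SD-15 scale described above, uses Python's standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

# Model IQ as a normal distribution with mean 100 and standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

def share_between(lo, hi):
    """Fraction of the population scoring between lo and hi under the normal model."""
    return iq.cdf(hi) - iq.cdf(lo)

print(round(share_between(85, 115), 3))  # within one standard deviation, ~68%
print(round(share_between(70, 130), 3))  # within two standard deviations, ~95%
print(round(1 - iq.cdf(130), 4))         # share above 130, a little over 2%
```

As the article notes, the real distribution departs from this model in the far tails, so these figures are least trustworthy beyond about ±3 standard deviations.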

In actuality, a higher percentage of the population scores three or more standard deviations from the mean than the normal distribution would predict. Some IQ scoring procedures attempt to integrate such clusters of statistical outliers into the curve by adjusting scores so that they better represent actual probabilities (according to Silverman); in these cases, scores around 145 and above might have been notably higher had they not been so adjusted.

Most IQ tests in the United States use an SD-15 or SD-16 scale, meaning that one standard deviation corresponds to 15 or 16 points on the IQ scale. However, some European IQ tests use an SD-24 or SD-25 scale, resulting in discrepancies. An IQ of 130 (+2 standard deviations) on an SD-15 test thus corresponds to an IQ of about 148-150 on an SD-24 or SD-25 scale. Due to these differences, percentiles are more accurate measurements than raw IQ numbers when comparing across tests.
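Converting between scales amounts to matching z-scores (and therefore percentiles). A minimal sketch; the function name is illustrative, not from any testing standard:

```python
from statistics import NormalDist

def convert_iq(score, sd_from, sd_to, mean=100):
    """Convert an IQ score between scales by matching z-scores (same percentile)."""
    z = (score - mean) / sd_from
    return mean + z * sd_to

print(convert_iq(130, sd_from=15, sd_to=24))  # +2 SD on SD-15 maps to 148.0 on SD-24

# Percentiles are scale-independent: +2 SD is the same percentile on any scale.
print(round(NormalDist(100, 15).cdf(130), 3))
```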

IQ and the general intelligence factor
Modern IQ tests produce scores for different areas (e.g., language fluency, three-dimensional thinking, etc.), with the summary score calculated from subtest scores. Individual subtest scores tend to correlate with one another, even when seemingly disparate in content. Analyses of an individual's scores on the subtests of a single IQ test or the scores from a variety of different IQ tests (e.g., Stanford-Binet, WISC-R, Raven's Progressive Matrices and others) will reveal that they all measure a single common factor and various factors that are specific to each test. This kind of factor analysis has led to the theory that underlying these disparate cognitive tasks is a single factor, termed the general intelligence factor (or g), that corresponds with the common-sense concept of intelligence. In the normal population, g and IQ are roughly 90% correlated and are often used interchangeably.

Genetics vs environment
The role of genes and environment (nature vs. nurture) in determining IQ is reviewed in Plomin et al. (2001, 2003). The degree to which genetic variation contributes to observed variation in a trait is measured by a statistic called heritability. Heritability scores range from 0 to 1 and can be interpreted as the percentage of variation (e.g., in IQ) that is due to variation in genes. Twin studies and adoption studies are commonly used to determine the heritability of a trait. Until recently, heritability was mostly studied in children. These studies yield a heritability estimate of 0.5; that is, half of the variation in IQ among the children studied was due to variation in their genes. The remaining half was thus due to environmental variation and measurement error. A heritability of 0.5 implies that IQ is "substantially" heritable. Studies with adults show that they have a higher heritability of IQ than children do and that heritability could be as high as 0.8, though it is probably not that high. The American Psychological Association's 1995 task force on "Intelligence: Knowns and Unknowns" concluded that within the White population the heritability of IQ is “around .75” (p. 85).

Considerable research has focused on biological correlates of g; see General intelligence factor and the section on brain size below. For example, general intelligence and MRI brain volume measurements are correlated, and the effect is primarily determined by genetic factors.

Environment
Environmental factors play a large role in determining IQ in situations where environmental conditions are variable. Proper childhood nutrition appears critical for cognitive development; malnutrition can lower IQ. Other research indicates environmental factors such as prenatal exposure to toxins, duration of breastfeeding, and micronutrient deficiency can affect IQ. However, in the developed world, none of these effects are sufficiently pronounced to be important.

In the developed world, there is some environmental effect on the IQ of children, accounting for up to a quarter of the variance. However, by adulthood, this correlation disappears, so that the cognitive ability of adults living in the prevailing conditions of the developed world is highly heritable.

Contrary to expectations, studies of nearly all personality traits show that environmental effects actually cause adoptive siblings raised in the same family to be as different as children raised in different families (Harris, 1998; Plomin & Daniels, 1987). Put another way, shared environmental variation for personality is zero, and all environmental effects are nonshared. Intelligence is an exception to this rule, at least among children: the IQs of adoptive siblings, who share no genetic relation but do share a common family environment, are correlated at .32. Despite attempts to isolate them, the factors that cause adoptive siblings to be similar have not been identified, though they could be related to parents choosing the type of children they will adopt. However, as explained below, shared family effects on IQ disappear after adolescence.

Active genotype-environment correlation, also called the "nature of nurture", is observed for IQ. This phenomenon is measured similarly to heritability; but instead of measuring variation in IQ due to genes, variation in environment due to genes is determined. One study found that 40% of variation in measures of home environment are accounted for by genetic variation. This suggests that the way human beings craft their environment is due in part to genetic influences.

Development
It is reasonable to expect that genetic influences on traits like IQ should become less important as we gain experience with age. Surprisingly, the opposite occurs. Heritability estimates are as low as 20% in infancy, around 40% in middle childhood, and as high as 80% in adulthood.

Shared family effects also seem to disappear by adulthood. Adoption studies show that, after adolescence, adopted siblings are no more similar in IQ than strangers (IQ correlation near zero), while full siblings show an IQ correlation of 0.6. Twin studies reinforce this pattern: monozygotic (identical) twins raised separately are highly similar in IQ (0.86), more so than dizygotic (fraternal) twins raised together (0.6) and much more than adopted siblings (~0.0).
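Twin correlations like these are the raw material for heritability estimates. One classical method is Falconer's formula, h² = 2(r_MZ − r_DZ); strictly it assumes both twin types are reared together, so the figures quoted above (which include reared-apart MZ twins) are used here only as illustrative inputs:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's estimate of heritability: h^2 = 2 * (r_MZ - r_DZ).
    Assumes both twin types share their environments to the same degree."""
    return 2 * (r_mz - r_dz)

# Illustrative inputs using the correlations quoted above (MZ 0.86, DZ 0.6):
print(round(falconer_heritability(0.86, 0.60), 2))  # 0.52
```

The logic: MZ twins share all their genes and DZ twins on average half, so doubling the gap between the two correlations isolates the genetic contribution.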

Most of the IQ studies described above were conducted in developed countries, such as the United States, Japan, and Western Europe. Also, a few studies have been conducted in Moscow, East Germany, and India, and those studies produce similar results. Any such investigation is limited to describing the genetic and environmental variation found within the populations studied. This is a caveat of any heritability study.

Mental retardation
About 75–80 percent of mental retardation is familial (runs in families), and 20–25 percent is due to organic problems, such as chromosomal abnormalities or brain damage. Mild to severe mental retardation is a symptom of several hundred single-gene disorders and many chromosomal abnormalities, including small deletions. Based on twin studies, moderate to severe mental retardation does not appear to be familial, but mild mental retardation does. That is, the relatives of the moderate to severely mentally retarded have normal ranges of IQs, whereas the families of the mildly mentally retarded have IQs skewing lower.

IQ score ranges (from DSM-IV):
 * mild mental retardation: IQ 50–55 to 70; children require mild support; formerly called "Educable Mentally Retarded".
 * moderate retardation: IQ 35–40 to 50–55; children require moderate supervision and assistance; formerly called "Trainable Mentally Retarded".
 * severe mental retardation: IQ 20–25 to 35–40; can be taught basic life skills and simple tasks with supervision.
 * profound mental retardation: IQ below 20–25; usually caused by a neurological condition; require constant care.

The rate of mental retardation is higher among males than females, and higher among blacks than whites, according to a 1991 U.S. Centers for Disease Control and Prevention (CDC) study.

By race, the overall rate was 16.6 per 1000 for blacks and 6.8 per 1000 for whites. Rates of mental retardation for black males, the group with the highest rates, were 1.7 times higher than black females, 2.4 times higher than white males, and 3.1 times higher than white females.

Individuals with IQs below 70 have been exempted from the death penalty in the U.S. since 2002.

IQ, education, and income
Tambs et al. (1989) found that occupational status, educational attainment, and IQ are individually heritable; and further found that "genetic variance influencing educational attainment … contributed approximately one-fourth of the genetic variance for occupational status and nearly half the genetic variance for IQ". In a sample of US siblings, Rowe et al. (1997) report that the inequality in education and income was predominantly due to genes, with shared environmental factors playing a subordinate role.

Regression
The heritability of IQ determines the extent to which the IQ of children will be similar to the IQ of their parents. Because the heritability of IQ is less than 100%, the IQ of children tends to "regress" towards the mean IQ of the population. That is, high-IQ parents tend to have children who are less bright than themselves, whereas low-IQ parents tend to have children who are brighter than themselves. The effect can be quantified by the equation $$\hat y = \bar x + h^2 \left ( \frac{\mbox{mom} + \mbox{dad}}{2} - \bar x \right)$$ where:
 * $$\hat y$$ is the predicted average IQ of Mom and Dad's children
 * $$\bar x$$ is the mean IQ of the population that Mom and Dad come from
 * $$h^2$$ is the heritability of IQ
Thus, if the heritability of IQ is 50%, a couple each with an IQ of 120 would be expected to have children averaging an IQ of around 110, assuming that both parents come from a population with a mean IQ of 100.
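The equation above translates directly into a short calculation (function and parameter names are illustrative):

```python
from statistics import mean

def predicted_child_iq(mom_iq, dad_iq, heritability=0.5, population_mean=100):
    """Regression-to-the-mean prediction: y_hat = x_bar + h^2 * (midparent - x_bar)."""
    midparent = mean([mom_iq, dad_iq])
    return population_mean + heritability * (midparent - population_mean)

print(predicted_child_iq(120, 120))  # 110.0: halfway back toward the population mean
print(predicted_child_iq(80, 80))    # 90.0: low-IQ parents regress upward
```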

Brain size and IQ
Modern MRI studies have shown that brain size correlates with IQ at approximately .40 among adults (McDaniel, 2005). The correlation between brain size and IQ seems to hold for comparisons both between and within families (Gignac et al. 2003; Jensen 1994; Jensen & Johnson 1994). However, one study found no within-family correlation (Schoenemann et al. 2000). A study on twins (Thompson et al., 2001) showed that frontal gray matter volume was correlated with g and highly heritable. A related study reported that the correlation between brain size (reported to have a heritability of 0.85) and g is 0.4, and that the correlation is mediated entirely by genetic factors (Posthuma et al. 2002).

Brain areas associated with IQ
Many different sources of information have converged on the view that the frontal lobes are critical for fluid intelligence. Patients with damage to the frontal lobe are impaired on fluid intelligence tests (Duncan et al 1995). The volume of frontal grey (Thompson et al 2001) and white matter (Schoenemann et al 2005) have also been associated with intelligence. In addition, recent neuroimaging studies have limited this association to the lateral prefrontal cortex. Duncan and colleagues (2000) showed using Positron Emission Tomography that problem-solving tasks that correlated more highly with IQ also activate the lateral prefrontal cortex. More recently, Gray and colleagues (2003) used functional magnetic resonance imaging (fMRI) to show that those individuals that were more adept at resisting distraction on a demanding working memory task had both a higher IQ and increased prefrontal activity. For a review of this topic, see Gray and Thompson (2004).

The Flynn effect
Worldwide, IQ scores appear to be slowly rising, a trend known as the Flynn effect. However, tests are only renormalized occasionally to obtain mean scores of 100, for example WISC-R (1974), WISC-III (1991) and WISC-IV (2003). Hence it is difficult to compare IQ scores measured years apart.

Race and IQ
While the distributions of IQ scores among different racial-ethnic groups overlap considerably, groups differ in where their members cluster along the IQ scale. Some groups (e.g. East Asians and Jews) tend to cluster higher than whites, while other groups (e.g. blacks and Hispanics) tend to cluster lower than whites. Similar clustering occurs with related variables, such as school achievement, reaction time, and brain size. Many hypotheses have been proposed to explain racial-ethnic group differences in IQ. Neither test bias nor simple differences in socioeconomic status explain the IQ differences. The primary focus of the scientific debate is whether group differences are entirely caused by environmental factors or whether they also reflect a genetic component. The findings of this field are often thought to conflict with fundamental social philosophies, and have thus engendered a large controversy.

Religiousness and IQ
Several studies show an inverse correlation between IQ and degree of religious belief. While almost all research indicates a negative correlation between intelligence and religiosity, this remains a controversial point.

Health and IQ
Persons with a higher IQ generally have lower adult morbidity and mortality, perhaps because they better avoid injury and take better care of their own health. Higher IQ also decreases the risk of Post-Traumatic Stress Disorder, severe depression, and schizophrenia. On the other hand, it increases the risk of Obsessive Compulsive Disorder.

Research in Scotland has shown that a 15-point lower IQ meant people had a fifth less chance of seeing their 76th birthday, while those with a 30-point disadvantage were 37% less likely than those with a higher IQ to live that long.

Economic development and IQ
The controversial book IQ and the Wealth of Nations claims to show that the wealth of a nation can in large part be explained by its average IQ score. This claim has been both disputed and supported in peer-reviewed papers, and the data used have also been questioned.

Practical validity


Evidence for the practical validity of IQ comes from examining the correlation between IQ scores and life outcomes.

Research shows that intelligence plays an important role in many valued life outcomes. In addition to academic success, intelligence correlates with job performance (see below), socioeconomic advancement (e.g., level of education, occupation, and income), and "social pathology" (e.g., adult criminality, poverty, unemployment, dependence on welfare, children outside of marriage). Recent work has demonstrated links between intelligence and health, longevity, and functional literacy. Correlations between g and life outcomes are pervasive, though IQ and happiness do not correlate. IQ and g correlate highly with school performance and job performance, less so with occupational prestige, moderately with income, and to a small degree with law-abidingness.

General intelligence (in the literature typically called "cognitive ability") is the best predictor of job performance by the standard measure, validity. Validity is the correlation between score (in this case cognitive ability, as measured, typically, by a paper-and-pencil test) and outcome (in this case job performance, as measured by a range of factors including supervisor ratings, promotions, training success, and tenure), and ranges between −1.0 (the score is perfectly wrong in predicting outcome) and 1.0 (the score perfectly predicts the outcome). See validity (psychometric). The validity of cognitive ability for job performance tends to increase with job complexity and varies across different studies, ranging from 0.2 for unskilled jobs to 0.8 for the most complex jobs.

A large meta-analysis (Hunter and Hunter, 1984) which pooled validity results across many studies encompassing thousands of workers (32,124 for cognitive ability), reports that the validity of cognitive ability for entry-level jobs is 0.54, larger than any other measure including job tryout (0.44), experience (0.18), interview (0.14), age (−0.01), education (0.10), and biographical inventory (0.37).

Because higher test validity allows more accurate prediction of job performance, companies have a strong incentive to use cognitive ability tests to select and promote employees. IQ thus has high practical validity in economic terms. The utility of using one measure over another is proportional to the difference in their validities, all else equal. This is one economic reason why companies use job interviews (validity 0.14) rather than randomly selecting employees (validity 0.0).
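The proportionality claim above can be made concrete with the standard Brogden-Cronbach-Gleser utility model, under which the expected performance gain per hire is validity × SD of job performance × the average standard score of those selected. The model is not named in the text, and the dollar figure below is an assumed illustration:

```python
from statistics import NormalDist

std_normal = NormalDist()

def mean_z_of_selected(selection_ratio):
    """Average standard score of applicants hired top-down at a given selection ratio."""
    z_cut = std_normal.inv_cdf(1 - selection_ratio)
    return std_normal.pdf(z_cut) / selection_ratio

def gain_per_hire(validity, sd_performance, selection_ratio):
    """Expected performance gain per hire over random selection
    (Brogden-Cronbach-Gleser utility model)."""
    return validity * sd_performance * mean_z_of_selected(selection_ratio)

# Hiring the top half of applicants; a performance SD of $10,000 is an assumed figure.
for name, r in [("cognitive ability", 0.54), ("interview", 0.14)]:
    print(f"{name}: ${gain_per_hire(r, 10_000, 0.5):,.0f} per hire")
```

The gap between the two predictors scales linearly with their validity difference, which is the economic logic the paragraph describes.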

However, legal barriers, most prominently the 1971 United States Supreme Court decision Griggs v. Duke Power Co., have prevented American employers from directly using cognitive ability tests to select employees, despite the tests' high validity. This is largely because using cognitive ability scores in selection adversely affects some minority groups, since different groups have different mean scores on tests of cognitive ability. However, cognitive ability tests are still used in some organizations. The U.S. military uses the Armed Forces Qualifying Test (AFQT), as higher scores correlate with significant increases in the effectiveness of both individual soldiers and units, and Microsoft is known for using legally permissible tests that correlate with IQ tests as part of its interview process, in many cases weighing the results more heavily than experience.

Some researchers have echoed the popular claim that "in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much." (Detterman and Daniel, 1989)

However, some studies suggest IQ continues to confer large benefits even at very high levels. Ability and performance for jobs are linearly related, such that at all IQ levels, an increase in IQ translates into a concomitant increase in performance (Coward and Sackett, 1990). In an analysis of hundreds of siblings, it was found that IQ has a substantial effect on income independently of family background (Murray, 1998).

Other studies question the real-world importance of whatever is measured by IQ tests, especially for differences in accumulated wealth and general economic inequality in a nation. IQ correlates highly with school performance, but the correlations decrease the closer one gets to real-world outcomes: lower for job performance, and lower still for income, where IQ explains less than one sixth of the variance. Even for school grades, other factors explain most of the variance. Regarding economic inequality, one study found that if everyone could magically be given identical IQs, 90 to 95 percent of today's inequality would remain. Another study (2002) found that wealth, race, and schooling are important to the inheritance of economic status, but that IQ is not a major contributor and the genetic transmission of IQ is even less important. Some argue that IQ scores are used as an excuse for not trying to reduce poverty or otherwise improve living standards for all. Claimed low intelligence has historically been used to justify the feudal system and unequal treatment of women (but note that many studies find identical average IQs among men and women; see sex and intelligence). In contrast, others claim that the refusal of high-IQ elites to take IQ seriously as a cause of inequality is itself immoral.

Use of IQ in the United States legal system
The Supreme Court of the United States has also validated the use of IQ results during the sentencing phase of some criminal proceedings. The Supreme Court case of Atkins v. Virginia, decided June 20, 2002, held that executions of mentally retarded criminals are "cruel and unusual punishments" prohibited by the Eighth Amendment. In Atkins the court stated that


 * "…[I]t appears that even among those States that regularly execute offenders and that have no prohibition with regard to the mentally retarded, only five have executed offenders possessing a known IQ less than 70 since we decided Penry. The practice, therefore, has become truly unusual, and it is fair to say that a national consensus has developed against it."

In overturning the Virginia Supreme Court's holding, the Atkins opinion stated that the petitioner's IQ result of 59 was a factor making the imposition of capital punishment a violation of his Eighth Amendment rights. In the opinion's notes, the court provided some of the facts relied upon in reaching its decision:


 * "At the sentencing phase, Dr. Nelson testified: "Atkins' full scale IQ is 59. Compared to the population at large, that means less than one percentile…. Mental retardation is a relatively rare thing. It's about one percent of the population." App. 274. According to Dr. Nelson, Atkins' IQ score "would automatically qualify for Social Security disability income." Id., at 280. Dr. Nelson also indicated that of the over 40 capital defendants that he had evaluated, Atkins was only the second individual who met the criteria for mental retardation. Id., at 310. He testified that, in his opinion, Atkins' limited intellect had been a consistent feature throughout his life, and that his IQ score of 59 is not an "aberration, malingered result, or invalid test score." Id., at 308."

Validity and g-loading of specific tests
While IQ is sometimes treated as an end unto itself, scholarly work on IQ focuses to a large extent on IQ's validity, that is, the degree to which IQ predicts outcomes such as job performance, social pathologies, or academic achievement. Different IQ tests differ in their validity for various outcomes.

Tests also differ in their g-loading, which is the degree to which the test score reflects general mental ability rather than a specific skill or "group factor" (such as verbal ability, spatial visualization, or mathematical reasoning). g-loading and validity are related in the sense that most IQ tests derive their validity mostly or entirely from the degree to which they measure g (Jensen 1998).

Social construct
Some maintain that IQ is a social construct invented by the privileged classes, used to maintain their privilege. Others maintain that intelligence, measured by IQ or g, reflects a real ability, is a useful tool in performing life tasks and has a biological reality.

The social-construct and real-ability interpretations for IQ differences can be distinguished because they make opposite predictions about what would happen if people were given equal opportunities. The social explanation predicts that equal treatment will eliminate differences, while the real-ability explanation predicts that equal treatment will accentuate differences. Evidence for both outcomes exists. Achievement gaps persist in socioeconomically advantaged, integrated, liberal, suburban school districts in the United States (see Noguera, 2001). Test-score gaps tend to be larger at higher socioeconomic levels (Gottfredson, 2003). Some studies have reported a narrowing of score gaps over time.

The reduction of intelligence to a single score seems extreme and wrong to many people. Opponents argue that it is much more useful to know a person's strengths and weaknesses than to know their IQ score. Such opponents often cite the example of two people with the same overall IQ score but very different ability profiles. As measured by IQ tests, most people have highly balanced ability profiles, with differences in subscores being greater among the more intelligent.

The creators of IQ testing did not intend for the tests to gauge a person's worth, and in many (or, as some people suggest, all) situations, IQ may have little relevance.

The Mismeasure of Man
Some scientists dispute psychometrics entirely. In The Mismeasure of Man, Professor Stephen Jay Gould argues that intelligence tests are based on faulty assumptions and shows their history of being used as the basis for scientific racism. He writes:


 * …the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status. (pp. 24–25)

He spends much of the book debunking the concept of IQ, including a historical discussion of how the IQ tests were created and a technical argument that g is simply a mathematical artifact. Later editions of the book include criticism of The Bell Curve.

Arthur Jensen, Professor of Educational Psychology, University of California, Berkeley, responds to Gould's criticisms in a paper titled The Debunking of Scientific Fossils and Straw Persons.

The view of the American Psychological Association
In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a task force to write a consensus statement on the state of intelligence research which could be used by all sides as a basis for discussion. The full text of the report is available at a third-party website. 

The findings of the task force state that IQ scores do have high predictive validity for individual (but not necessarily population) differences in school achievement. They confirm the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They agree that individual (again, not necessarily population) differences in intelligence are substantially influenced by genetics.

They state there is little evidence to show that childhood diet influences intelligence except in cases of severe malnutrition. They agree that there are no significant differences between the average IQ scores of males and females. The task force agrees that large differences do exist between the average IQ scores of blacks and whites, and that these differences cannot be attributed to biases in test construction. While they admit there is no empirical evidence supporting it, the APA task force suggests that explanations based on social status and cultural differences may be possible. Regarding genetic causes, they noted that there is not much direct evidence on this point, but what little there is fails to support the genetic hypothesis.

The APA journal that published the statement, American Psychologist, subsequently published eleven critical responses in January 1997, most arguing that the report failed to examine adequately the evidence for partly-genetic explanations.

The report was published in 1995 and thus does not include a decade of recent research.

Improving IQ
While a large portion of the variation in IQ is attributable to genetic factors, the environment plays a role as well. IQ can be improved to a certain extent through reading and application. Improvements in diet and regular exercise can help certain cognitive functions, and getting more sleep may help as well. Depression and stress reduce IQ somewhat, so removing these factors might also help.

Drugs designed to improve cognitive function, and sometimes IQ scores, are called nootropics.

Working memory training is an experimental treatment which, according to one study by Klingberg et al., substantially improved raw scores on Raven's Progressive Matrices and Raven's Advanced Progressive Matrices, both IQ tests. Some studies have also claimed that neurofeedback can increase IQ. However, some would argue that these studies should not necessarily be interpreted as proof that neurofeedback can increase IQ, as (a) they do not have a double-blind component and (b) it is unknown whether their effects would apply to persons without ADHD, since most of these studies were performed on persons with ADHD. It is possible that the increase in IQ was simply a result of better concentration in the subjects.

A recent scientific article on the concept of cognitive reserve included an argument that education and application of the mind can substantially increase IQ.

The "Mozart effect" is the claimed ability of certain music to enhance intelligence, especially spatial reasoning. However, this effect is not universally accepted. Musical education, as opposed to mere appreciation, has been shown a number of times to marginally increase IQ in children; however, information is sparse on whether such an effect might apply to adults.

The levels of a variety of chemicals in the brain, such as choline, have been shown to relate to intelligence in a variety of ways. It is possible that these levels could be substantially changed by adjusting diet.

Future possibilities for improving the skills measured by IQ tests include stem cell treatments, genetic modification, better education based on neurological and cognitive discoveries, and better nootropics.

Controversy
See article on IQ test controversy.