Tuesday, 10 July 2012
*
I often see people on IQ-interested blogs bandying about estimates of very high IQ, as if these were precise and predictive discriminations between the most intellectually able people: but they are not.
There are several reasons why the meaningfulness of IQ measurements breaks down above about two standard deviations above the mean (i.e. above about 130, or the top two percent of the population).
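The arithmetic behind that cutoff is easy to check; a minimal sketch, assuming the conventional IQ scale of mean 100 and standard deviation 15:

```python
import math

def fraction_above(iq, mean=100.0, sd=15.0):
    """Fraction of a normal population scoring above `iq`,
    assuming the conventional scale: IQ ~ Normal(100, 15)."""
    z = (iq - mean) / sd
    # Survival function of the standard normal, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"{fraction_above(130):.1%}")  # about 2.3% of the population scores above 130
```

So "two standard deviations above the mean" corresponds to roughly the top two percent, as stated.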
*
1. Ceiling effect.
Any IQ test suitable to be given to a random/representative sample of the population cannot discriminate between the abilities of those too far above (or below) the mean.
The very intelligent will all get maximum marks (except for variation due to random errors from test deficiencies, slapdashness, tiredness etc).
Therefore an IQ test administered in a representative school will always and inevitably have both ceiling and floor marks, and no discriminative ability above or below these levels.
(The WORDSUM test of vocabulary definitions in the GSS of the US population - for example - has a ceiling at an IQ less than 120 - so more than ten percent of the population achieve perfect marks).
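The effect of such a ceiling is easy to simulate; a toy sketch, where the 120 ceiling is illustrative (loosely modelled on the WORDSUM figure above) and all other numbers are invented:

```python
import random

random.seed(0)
CEILING = 120  # illustrative test ceiling

# Simulate 100,000 "true" IQs on the conventional Normal(100, 15) scale,
# then apply the ceiling the test imposes on observed scores
true_iqs = [random.gauss(100, 15) for _ in range(100_000)]
observed = [min(iq, CEILING) for iq in true_iqs]

at_ceiling = sum(s == CEILING for s in observed) / len(observed)
print(f"{at_ceiling:.1%} hit the ceiling")   # roughly 9% tie at the maximum score
# An IQ-125 person and an IQ-160 person both record the same observed score:
print(min(125, CEILING), min(160, CEILING))  # 120 120
```

Everyone above the ceiling records an identical score, so the test carries no information about differences among them.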
*
2. Problems with norming.
For an IQ test to be properly normed requires either a complete census of all suitable subjects, or else a truly random sample of sufficient size - yet these requirements are almost never met, and IQ testing is instead done on stratified samples that are non-random to unknown degrees.
Usually, what happens is that an IQ test is done on an average-ish population with amplified strata of people above and below average (e.g. much larger numbers of above-average college students than would be detected by a truly random sample of the population).
All this means that the shape of the curve is not known far away from the mean - and there the relationship between marks and IQ is conjectural.
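A toy simulation of this distortion (the sample sizes and the student stratum's mean and SD below are invented purely for illustration):

```python
import random
import statistics

random.seed(1)

# Hypothetical norming sample: a general-population group plus a
# deliberately over-sampled stratum of above-average college students
general  = [random.gauss(100, 15) for _ in range(9_000)]
students = [random.gauss(115, 10) for _ in range(3_000)]
norming_sample = general + students

mean = statistics.fmean(norming_sample)
sd = statistics.pstdev(norming_sample)
# Norms derived from this sample are shifted upward relative to the true
# population, so raw marks map onto the wrong IQ - worst far from the mean
print(round(mean, 1), round(sd, 1))
```

Here the estimated mean comes out several points above the true population mean of 100, so any IQ assigned from these norms is biased to an unknown degree.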
*
3. The 'manifold' of variation between IQ sub-tests increases with increasing IQ.
In other words, while people of moderately high intelligence tend to be all-rounders, about equally good at all the IQ sub-tests, people of the highest levels of intelligence are much more specialized in their abilities. Their very high abilities tend to be restricted to particular sub-tests or sub-domains of the intelligence tests.
A super-adept mathematician 4 SDs above average in number and symbol tasks is usually less than super at linguistic tasks (probably above average, but maybe not much above average) - while a literary super genius may be, often is, only very moderately good at mathematics.
(e.g. CS Lewis - clearly of extremely high intelligence in the linguistic domain - was utterly unable to pass the school certificate mathematics exam despite many attempts - an exam probably set to be passable by the top ten to fifteen percent of the population.)
*
In other words, the actual concept of 'general intelligence' or 'g' - which derives from the observation that all quantitatively-measurable cognitive abilities are significantly inter-correlated - begins to break down from around IQ 130.
I repeat: the actual concept of 'general intelligence' (hence IQ) begins to break down from around two standard deviations above average - in the top couple of percent of the population.
From around and above this point, therefore, ultra-high cognitive abilities tend to be specialized and found in isolation.
*
This may well explain why super-intelligent individuals such as William Shockley and Richard Feynman were seemingly not picked out by childhood intelligence tests.
Of course there are the possibilities of random measurement error, under-performance due to illness and other factors, and ceiling effects - but most probably some super-intelligent people are super-intelligent in only limited domains, and their modest performance in other domains drags down their average IQ score.
And while this phenomenon becomes common, indeed usual, in the top one percent of the population - it is probably found even within the top ten percent of the population, albeit infrequently.
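The drag-down effect described above can be shown with a deliberately simplified example. Real full-scale IQs are derived from summed scaled scores rather than a plain mean, and the subtest profile below is invented:

```python
# Invented subtest profile of a domain-specialized thinker
subtests = {"quantitative": 160, "verbal": 105, "spatial": 110}

# Crude composite: a plain average of the subtest scores
full_scale = sum(subtests.values()) / len(subtests)
print(full_scale)  # 125.0 - a 4-SD quantitative score, yet below a 130 cutoff
```

A single extreme ability is diluted by two merely above-average ones, leaving a composite that would not flag this person as exceptional at all.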
*
4. Problems with discriminating between ultra-high IQ people.
The manifold effect means that discriminating between ultra-highly intelligent people may become merely a matter of how sub-tests are weighted - of deciding which IQ test to use to put individuals into rank order.
But there is a further problem, which is that some very high IQ people can do pretty much any IQ test perfectly if they are given sufficient time.
When she studied intelligence in 'genius' scientists, Anne Roe (The Making of a Scientist, 1952) was forced to introduce a restrictive time limit in order to discriminate between her subjects - otherwise they would all simply score at the maximum.
Part of IQ is, indeed, related to speed of mental processing, rapidity of calculation and recall etc - so this has some validity.
But, as Roe recognized and discussed, making speed of completion into the major discriminating factor means that the IQ is confounded by other factors such as perceptual abilities, muscular quickness and accuracy, and general health at the time of the test.
In other words, a tightly-timed IQ test of the sort necessarily used to discriminate between those of very high intelligence will systematically under-estimate the intelligence of anyone with (for example) impaired eyesight (for reading the test items), muscular or coordination problems, or impaired alertness and concentration due perhaps to physical illness - and probably also fatigue, when testing goes on for more than a few minutes.
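A toy model of the confound: two test-takers of equal ability, one of whom works through items more slowly (say, because of impaired eyesight). All numbers here are invented for illustration:

```python
def timed_score(ability_items, items_per_minute, time_limit_min, total_items=60):
    """Toy model: a test-taker solves every item they both reach within
    the time limit and are able to solve."""
    attempted = min(total_items, int(items_per_minute * time_limit_min))
    return min(attempted, ability_items)

# Equal ability (all 60 items solvable), unequal working speed
print(timed_score(60, items_per_minute=3.0, time_limit_min=15))  # 45
print(timed_score(60, items_per_minute=2.0, time_limit_min=15))  # 30
# With a generous time limit the difference vanishes
print(timed_score(60, items_per_minute=2.0, time_limit_min=60))  # 60
```

Under the tight limit the slower worker scores a third lower despite identical ability; untimed, the two are indistinguishable - which is exactly why Roe's time limit discriminates, but on a confounded basis.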
*
In sum, it is not valid to discuss the differences in cognitive ability between those in the top one percent of the population in terms of IQ.
There are indeed important differences among these people, but they cannot be well captured by the concept of IQ, nor measured well by IQ tests, nor can IQ test results be validly normed onto the general population.
In short:
IQ differences above about 130 are only approximate.
Extremely high cognitive ability is usually specific rather than general;
therefore differences between those of ultra-high intelligence tend to be misleading in terms of their predictive value.
*