Thursday, 3 October 2013

Comparing mental age ratio with percentile measures of IQ - the intrinsic imprecision of high IQ measures in adults

*

I dislike the current method of expressing IQ rankings with an average of 100 and a standard deviation of 15 (or 16): in fact I think it is crazy and bizarre, and puts IQ research into a weird scientific category all of its own, while making IQ measurements all but incomprehensible to almost everybody.

Underneath these IQ numbers lies the more useful and comprehensible information about percentiles; and this would be a much better method of expressing intelligence - for example, instead of saying an IQ of 130, just state that the person is in the top 2% of the population (approximately).
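
For concreteness, here is a minimal sketch of that conversion in Python, assuming the conventional scale (mean 100, SD 15, normal distribution); the function name is my own, purely for illustration:

```python
# Convert an IQ score into "top X% of the population", assuming IQ is
# normally distributed with mean 100 and SD 15.
from math import erf, sqrt

def iq_to_top_percent(iq, mean=100.0, sd=15.0):
    z = (iq - mean) / sd
    upper_tail = 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))  # 1 - normal CDF
    return 100.0 * upper_tail

print(f"IQ 130 -> top {iq_to_top_percent(130):.1f}%")  # top 2.3% (approx)
```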

*

What is wrong with that? - and how much clearer it is.

Nonetheless, such percentages are still incomprehensible to many people, and it might be better to express the information as its reciprocal - that only one person in fifty has intelligence this high or higher.

This gives a clearer sense of IQ differences - because when the average is 100, an IQ of 130 looks almost identical to an IQ of 126, and only about twice as superior as an IQ of 115.

But it probably makes more sense to say that an IQ of 130 (in the top 2%) is more like twice as superior as an IQ of 126 (in the top 4%), and about eight times superior to an IQ of 115 (top 16%).
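
A short sketch of this rarity comparison, under the same normality assumption; the exact tails give roughly 1.8x and 7x, close to the 2x and 8x suggested by the rounded percentages:

```python
# How many times rarer is one IQ than another? Same normality assumption
# (mean 100, SD 15); compare the upper-tail fractions directly.
from math import erf, sqrt

def top_fraction(iq, mean=100.0, sd=15.0):
    return 1.0 - 0.5 * (1.0 + erf((iq - mean) / (sd * sqrt(2.0))))

for lower_iq in (126, 115):
    ratio = top_fraction(lower_iq) / top_fraction(130)
    print(f"IQ 130 is ~{ratio:.1f}x rarer than IQ {lower_iq}")
# IQ 130 is ~1.8x rarer than IQ 126
# IQ 130 is ~7.0x rarer than IQ 115
```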

*

But percentiles are still pretty abstract and incomprehensible, and not many people can think that way.

So there is much to be said for the original method of calculating IQ by mental age ratios.

In this system, the performance of a specific child in a specific examination is interpolated into a graph of the average performance of children of different age groups in that examination.

Thus, when an 8 year old child performs at the level of an average 10 year old, he is described as having a mental age of ten (or, if you must! - an IQ of 125 - since 10 is 25% more than 8).
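
The arithmetic is a one-liner; a trivial sketch (the function name is mine):

```python
# The original ratio IQ: mental age over chronological age, times 100.
def ratio_iq(mental_age, chronological_age):
    return 100.0 * mental_age / chronological_age

print(ratio_iq(10, 8))  # 125.0 -- the 8-year-old performing like a 10-year-old
```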

*

Why was this simple and elegant method of measuring intelligence dropped?

Well, in the first place it wasn't - and parents are used to being told their child's Reading Age, which is a straightforward mental age measurement.

But the main problem is in relation to adult IQs.

It is easy to calculate mental ages for most young children - but test performance reaches a maximum plateau sometime in the teens (and later for men than for women) - which means that mental age calculations can only be used to measure average or below-average performance in adults.

*

However, although the use of percentile measurements among high-ability adults would appear to be a good solution, the method is something of a fraud, because the calculated values of intelligence are almost always extrapolated rather than interpolated.

What I mean is that, in most IQ tests, the percentage prevalence of performance at high levels is not known from direct measurement, but is extrapolated on the assumption that performance has a normal distribution.
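
Here is a sketch of that extrapolation step, with made-up norming numbers purely for illustration: a raw score is standardised against the norming sample's mean and SD, then mapped onto the IQ scale, so the claimed rarity follows from the normality assumption alone, not from anyone actually counted at that level.

```python
# Extrapolating an IQ (and hence a rarity) from a raw score, given only
# the norming sample's mean and SD; normality is assumed, not observed.
from math import erf, sqrt

def score_to_iq(raw, norm_mean, norm_sd):
    z = (raw - norm_mean) / norm_sd   # standardise against the norming sample
    return 100.0 + 15.0 * z           # map onto the conventional IQ scale

def top_percent(iq):
    z = (iq - 100.0) / 15.0
    return 100.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

iq = score_to_iq(raw=52, norm_mean=40, norm_sd=4)   # 3 SD above the sample mean
print(f"IQ {iq:.0f} -> top {top_percent(iq):.2f}%")  # IQ 145 -> top 0.13%
```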

Just imagine what it means to say that a person has an IQ of 145 - approximately the top 0.1 percent, or one in a thousand of the randomly sampled total population.

To know the actual performance of the top 0.1 percent, it would be necessary to measure several people at this level - maybe twelve? But to get twelve people in the top 0.1 percent it would be necessary to test, on average, 12,000 randomly selected people - and to be reasonably sure of actually getting 12 in a random sample you would need a larger sample than that... maybe 48,000 people or more?
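
That sampling arithmetic can be checked with an exact binomial tail calculation (a sketch; the middle sample size is my own addition); it confirms that 12,000 people give only even odds of capturing twelve such performers:

```python
# With prevalence p = 1/1000, how likely is a random sample of size n to
# contain at least 12 people from the top 0.1 percent? (Exact binomial tail.)
from math import comb

def p_at_least(k, n, p):
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

for n in (12_000, 20_000, 48_000):
    print(f"n = {n}: P(at least 12) = {p_at_least(12, n, 0.001):.3f}")
# n = 12000: ~0.54 -- barely better than a coin flip, despite expecting 12
# n = 20000: ~0.98
# n = 48000: ~1.00
```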

Whatever the exact number, this is a huge IQ testing study, and a study of this size cannot possibly (in practice) be random, or anything like it. And if a total enumeration of the population (i.e. a census) were attempted, then there could not be a precise measure of IQ, since the test would need to be short and simple.

Indeed, the situation is even worse! - because any test simple and short enough to be doable by those of low intelligence would be unable to discriminate precisely among those of high intelligence.

Yet any test difficult enough to discriminate among adults of high IQ could not be normed accurately against tests suitable for those of lower IQ.

*

My point is that the allocation of numerical values to high levels of adult intelligence, although made to seem rational by the percentile method of calculation, must be taken with bucketloads of salt.

The only high IQ measures that are both clearly understandable and relatively assumption-free are mental age calculations done on young children.

Differences in high levels of adult IQ are indeed measurable; but these differences cannot be precisely allocated percentiles with respect to the whole population, nor can they be understood in terms of mental age.

When it comes to high adult intelligence we can only say that it is high (e.g. within the top one or two percent), and that the IQ of Dr X is higher or lower than that of Professor Y - but we cannot say much more than that.

*