Thursday, 17 April 2014

What IS intelligence (in the lay sense)? How do people think?


What is a good way of conceptualizing intelligence in the sense that lay people use the term?

Intelligence is mostly an ability to think, rather than an amount of knowledge (much knowledge is, rather, a result of intelligence); and intelligence is quantitative (people can be more or less intelligent)...

High intelligence is (I would say) a combination of two attributes: thinking rapidly, and thinking about a large number of things simultaneously.

Intelligence could therefore be reduced to a combination of fast processing speed (i.e. 'g'), and a large working memory.

So, how do people think (intelligently)?

I envisage intelligent thinking as going on within a 3D space - and the subject matter of thought being something like complex 3D shapes in this space.

But all thinking must be done in a finite time-frame - say 5 seconds (just to clarify the explanation).


(This 5 seconds represents that the neurons which do the thinking can only be activated (and interact) for a finite time - and that thinking therefore occurs in a time window. Thinking can only be done over about 5 seconds because after that time the earlier information will fade and be lost. That is, only a finite number of things can be 'held in mind simultaneously' - and the 'simultaneously' therefore actually boils down to the period of time in which an idea can remain active while other ideas are added to the 3D space.)


The number and size of 3D shapes are constrained by the size of the space (i.e. the size of working memory) and the stuff that can be done with them is constrained by the speed with which manipulations can be performed.

So, there is a maximum of 5 seconds in which to manipulate the ideas (3D shapes) - and higher intelligence means that either more manipulations of ideas can be done in those 5 seconds (because the processing speed is faster), or else that more ideas can be manipulated in 5 seconds (because working memory space is larger) - or both of these (more rapid simultaneous manipulation of more ideas).

What results from this thinking is a new combination of ideas - a new 3D shape - which can then be stored, decoded, or manipulated further.


So, what is the role of experience and knowledge?

I envisage learning as a matter of chunking.

Chunking is the process by which ideas are summarized or condensed - so that instead of memorizing 1 2 3 4 5 6 as six pieces of information, it could be remembered as (say) two chunks of 123 and 456, or a single chunk which abstractly represents "the first six digits".

In other words, by chunking we 'encode' information in a smaller and briefer form. In effect, chunking takes many 3D shapes and encodes them into one 3D shape which can later be decoded to yield the many shapes which made it.

When a person is learning, they are (in effect) taking large amounts of information and encoding them into 3D shapes - so that each shape can be decoded to yield a lot of information.

So a trained and experienced mathematician is able to think using ideas that could be understood as highly-encoded 3D shapes. And indeed, mathematical learning could be imagined as an iterative or cyclical process of chunking, then chunking these chunks into new chunks; by which more-and-more complex ideas are encoded.

(Many ideas are encoded into one chunk - then many of these encoded chunks are combined into one chunk - and so on.)
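This iterative chunking can be sketched in code. The following is a minimal illustration only - the nested-tuple representation and the function names are my own invention for the purpose of the sketch, not a claim about how the brain actually encodes anything:

```python
# A toy sketch of chunking: many items are encoded into one chunk, chunks
# are themselves chunked, and decoding unpacks the nesting again. The
# nested-tuple representation is purely illustrative.

def chunk(items):
    """Encode several items as one chunk - one 'shape' held in WM."""
    return ("chunk", tuple(items))

def decode(item):
    """Decode one chunk into the items it was built from (one level)."""
    if isinstance(item, tuple) and item and item[0] == "chunk":
        return list(item[1])
    return [item]

def decode_fully(item):
    """Decode repeatedly until only raw items remain."""
    if isinstance(item, tuple) and item and item[0] == "chunk":
        out = []
        for sub in item[1]:
            out.extend(decode_fully(sub))
        return out
    return [item]

# '1 2 3 4 5 6' as six items -> two chunks -> one chunk-of-chunks:
digits = [1, 2, 3, 4, 5, 6]
c1 = chunk(digits[:3])   # '123'
c2 = chunk(digits[3:])   # '456'
top = chunk([c1, c2])    # a single shape standing for six items

print(decode(top))        # one level down: the two sub-chunks
print(decode_fully(top))  # fully decoded: all six original items
```

The point of the sketch is that one working-memory 'slot' (the top-level chunk) stands for six raw items, yet can be decoded, level by level, back into everything that went into it.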


The basic 'intelligence' is still constrained by the number of ideas which can be held in mind 'simultaneously' (i.e. over a 5 second timespan) and by the speed of manipulation which constrains the number of manipulations possible in 5 seconds.

In other words, intelligence remains constrained by the fixed values of WM size and 'g'/ processing speed.

But after the manipulations are finished, after the thinking has been done; each 3D shape (or the resulting combinations of shapes) can (in effect) be decoded (and the products of decoding themselves be decoded, perhaps many times) to yield potentially a very large amount of information.


So, this model explains how people may suppose that 'intelligence' (in a lay sense) is increased by education, training, knowledge, experience etc - while the real underlying intelligence (consisting of 'g' reducible to processing speed, and Working memory) may be innate, fixed, mostly inherited.

And the cyclical, iterative chunking produced by education and experience, may disguise declining intelligence - either in ageing individuals whose processing speed is slowing, or in a society subject to dysgenic decline of 'g' with a generation upon generation slowing of processing speed.


What this means is that an increasingly specialized and prolonged education can result in exactly this kind of multi-cycle iterative chunking, so that within a specialized field the use of more complex chunks can disguise the decline in processing speed.

In other words, declining 'g'/ processing speed, means that less processing can occur in the constraint of the 5 second time window - but the ideas being processed (within the specialist domain of a specialized education and experience) may be more-complexly encoded. And this may superficially disguise the decline in processing speed.

BUT if an average modern person (with slower processing speed) had the same education and training as the average person of 150 years ago - and was thinking at the same level of generality - then the average modern would display a very clear inferiority in terms of the maximum complexity of function. 

AND THIS IS WHAT WE FIND - modern intellectual life is characterized by what I have termed microspecialization.

(See the chapter "No such thing as ‘Science’ anymore" in )

So, a human geneticist may function intellectually at a high level within his microspecialism - but will be utterly incompetent at thinking in terms of genetics as a whole, or natural selection, or biology generally; so intellectual life is necessarily fragmented into 'autonomous' micro-units - and when the average intellectually sluggish modern person tries to think in general terms about general issues, they reveal an often embarrassing degree of simple-minded incompetence. 

Because processing speed has declined, the average modern person simply cannot do much manipulation of ideas within the '5 second' temporal constraint of working memory - when compared with what was normal 150 years ago.  

Except on familiar territory, modern people therefore cannot follow complex (several step) explanations which were comprehensible to previous generations - in effect because the beginning of the explanation has long since faded and gone before they can get to the end of it! 


(More exactly, when the 3D space of WM has been filled with ideas, not much in the way of manipulation can be done with these many ideas in the brief time available before these ideas fade and are lost, and need to be re-loaded afresh.)


Friday, 4 April 2014

The Natural Selection of European Genius - a speculation


Speculation time...

Creative Genius is a combination of high general intelligence (g) and (moderately) high Psychoticism (see elsewhere on this blog for the evidence) plus some other things, including luck.

But these are the necessary attributes - intelligence gives mental quickness, quick learning and general knowledge - and psychoticism provides the creativity plus the personal autonomy required to focus on something due to its intrinsic interest, despite social pressure to stop doing it and do something else.


1. In the beginning, hunter gatherers were low in intelligence and high in Psychoticism - they were creative but lacked cognitive ability, and seldom made discoveries.

HG = Low IQ & High P


2. In stable and large scale agricultural societies there was selection for higher intelligence and higher 'General Factor Personality' GFP - GFP is what J. Philippe Rushton termed a putative underlying unitary 'pro-social' personality trait which can be assumed to underpin the Costa and McCrae Big Five (i.e. High GFP = high Agreeableness, Conscientiousness, Extraversion, Openness and low Neuroticism) or Eysenck's Big Three (i.e. High GFP = high Extraversion and low Neuroticism and Psychoticism).

(GFP statistically-underpins the various specific personality traits in just the same way as g underpins the various cognitive abilities.)


Such populations became high in intelligence but uncreative, since individuals would be focused on social expectations (for stable emotions, sociability, hard work and empathy) and would be most rewarded hence motivated by conforming to social expectations.

Their whole mental set up would be outward looking at other people - as contrasted with the inward looking self-evaluating set-up which seems to be required for creativity.

Agriculturalists = High IQ & High GFP (i.e. Low P)


3. Specifically in European societies (of the Middle Ages) there was a further selection for economic specialization in men.

This selection worked because the most reproductively successful men were cognitive specialists - merchants, skilled tradesmen, doctors, lawyers, clerks... indeed these were the ONLY men who actually managed to raise on average more than two children per family.

In order to do such work, these men were Naturally Selected to be motivated not by 'other people' but by the innate rewards of their, often solitary, cognitive and physical skills. This was in fact selection for high Psychoticism - for creativity.

Despite not being very sociable nor very 'charming', such men got good wives via arranged marriages - since the parents of a young and healthy girl would prefer their grandchildren not to starve to death.

The result was a population that was both intelligent and creative - due to having high P.

Europeans = High IQ & (relatively) High P

But the High P of the Europeans is NOT the same as the High P of the Hunter Gatherers - it is a High P which is not 'natural and spontaneous' but a High P which has secondarily evolved-from a High GFP.


So we get three types of world population in terms of what I regard as the primary variables of general intelligence (g) and Psychoticism

H-G - High Primary P and Low g

Agric - High GFP and High g

European - High Secondary P and High g.


This is my explanation for why a high incidence of Creative genius was confined to European populations.

As I said - this is speculation!


Friday, 28 March 2014

What IS somebody's IQ? Only an approximate measure - the example of Robert M Pirsig


IQ is not a precise measurement - especially not at the individual level, and especially not at the highest levels of intelligence when the whole concept of general intelligence breaks-down and there are increasing divergences between specific types of cognitive ability.

There is a tendency to focus upon a person's highest-ever IQ measure - for example in the (excellent!) philosophical novel Zen and the Art of Motorcycle Maintenance the author Robert Pirsig notes the startling fact (and it is a fact) that his (Stanford-Binet) IQ was measured at 170 at the age of nine - which is a level supposedly attained by one in fifty thousand (although such ratios are a result of extrapolation, not measurement).


But an IQ measure in childhood - even on a comprehensive test such as Stanford Binet, is not a measure of adult IQ - except approximately (presumably due to inter-individual differences in the rate of maturation towards mature adulthood).

A document on Pirsig's Wikipedia pages (Talk section) purports to be an official testimonial of Pirsig's IQ measurements from 1961 (when he was about 33 years old) and it reads:


   June 14, 1961
   To Whom it May Concern:
   Subject: Indices of the Intellectual Capacity of Robert M. Pirsig
Mr. Pirsig was a subject in one of the institute’s longitudinal research projects and was extensively evaluated as a preschool, elementary, secondary, college and adult on various measures of intellectual ability. A summary of these measures is presented below.
Childhood tests: Mr. Pirsig was administered seven individual intelligence tests between the ages of two and ten. He performed consistently at the 99 plus percentile during this period.
His IQ on the Stanford Binet Form M administered in 1938 when he was nine and a half years old was 170, a level reached by about 2 children in 100,000 at that age level.

In 1949 he took the Miller's Analogy at the Univer. of Minn.. His raw score was 83 and his percentile standing for entering graduate students at the University of Minnesota was 96%tile.
In 1961 he was administered a series of adult tests as part of a follow up study of intelligence. The General Aptitude Test Battery of the United States Employment Service was administered with the following results:
   General Intelligence .......99 % ile
   Verbal Ability .............98 % ile
   Numerical Ability ..........96 % ile
   Spacial Ability ............99 % ile
   John G. Hurst, PhD   Assistant Professor


So, as well as the stratospheric IQ 170, there are other measures at more modest levels around 130 plus a bit (top 2 percent).

Of course there may be ceiling effects - some IQ measures don't try to go higher than the top centile.

But still, lacking that age-nine test - and most nine-year-olds don't have a detailed personal IQ evaluation - Pirsig's measured IQ would be quoted at around one in fifty or one in a hundred - rather than 1:50,000.

Ultra-high IQ measures must be taken with a pinch of salt; because 1. at the individual level IQ measures are not terribly reliable; 2. high levels of IQ do not reflect general intelligence, but more specialized cognitive ability; and 3. even when honest, the number we hear about may be a one-off, and the highest ever recorded from perhaps multiple attempts at many lengths and types of IQ test.


Thursday, 27 March 2014

What is Working Memory? How does it relate to general intelligence? Speculations...


Some time ago I was very interested by 'Working Memory' - trying to understand how to conceptualize it, what its structural brain basis might be - and also to measure it in this study:

The concept of Working Memory played a very large role in my book Psychiatry and the Human Condition - published in 2000.

Here is an excerpt from Psychiatry and the Human Condition describing how I visualized WM c 1999 - other discussions can be found by word-searching the phrase 'Working Memory'.


Working memory (WM) is the site of awareness, located in the prefrontal lobe of the cerebral cortex. WM functions as an integration zone of the brain, where representations from different systems converge and where several items of thought to which we are attending can simultaneously be sustained and manipulated. When we deliberately grapple with a problem and try to think it through - this process is happening in working memory; when we are aware of something it is in working memory; when we wish to attend to a specific stimulus, we represent it in WM...

Awareness comprises attention and working memory (WM). To be aware of a perception it must be selectively attended to, and the representation of that entity must be kept active and held in the brain for a length of time adequate to allow other cognitive representations to interact with it, and in a place where other cognitive representations can be projected. Working memory is such a place, a place where information converges and is kept active for longer than usual periods. Hence working memory is the anatomical site of awareness.

The nature of working memory can be understood using concepts derived from cognitive neuroscience. Working memory is a three-dimensional space filled with neurons that can activate in patterns. Cognition is conceptualized as the processing of information in the form of topographically-organized (3-dimensional) patterns of neural activity called representations - because each specific pattern ‘represents’ a perceptual input. So that seeing a particular shape produces a pattern of cell activation on the retina, and this shape is reproduced, summarized, transformed, combined etc in patterns of cell activation in the visual system of the brain - and each pattern of brain cell activation in each visual region retains a formal relationship to the original retinal activation.

Representations are the units of thinking. In the visual system there may be representations of the colour, movement and shading of an object, each of these having been constructed from information extracted from the original pattern of cell activation in the retina (using many built-in and learned assumptions about the nature of the visual world). The propagation and combination of representations is the process of cognition.
Cognitive representations in most parts of the brain typically stay active and persist for a time scale of the order of several tens of milliseconds. But in working memory cognitive representations may be maintained over a much longer time scale - perhaps hundreds or thousands of milliseconds - and probably by the action of specialized ‘delay’ neurons which maintain firing over longer periods. So WM is a 3-D space which contains patterns of nerve firing that are sustained long enough that they can interact with other 'incoming' patterns. This sustaining of cognitive representations means that working memory is also a ‘convergence’ region which brings together and integrates highly processed data from several separate information streams.
Any animal that is able selectively to attend-to and sustain cognitive representations could be said to possess a WM and to be 'aware' - although the content of that awareness and the length of time it can be sustained may be simple and short. The capacity of WM will certainly vary between species, and the structures that perform the function of WM will vary substantially according to the design of the central nervous system. In other words working memory is a function which is performed by structures that have arisen by convergent evolution; WM is not homologous between all animals that possess it - presumably the large and effective WM of an octopus is performed by quite different brain structures from WM in a sheep dog, structures that have no common ancestor and evolved down quite different paths. The mechanism and connectivity of the human WM allows cognitive representations from different perceptual modalities or from different attended parts of the environment to be kept active simultaneously, to interact, and to undergo integration in order that appropriate whole-organism behavioral responses may be produced.
Working memory is reciprocally-linked to long term memory (LTM), such that representations formed in WM can be stored in LTM as patterns of enhanced or impaired transmission between nerve cells (the mechanism by which this occurs is uncertain but probably involves a structure called the hippocampus). So temporary patterns of active nerves are converted to much more lasting patterns of easier or harder transmission between nerves. The patterns in LTM may be later recalled and re-evoked in WM for further cycles of processing and elaboration.
This is how complex thinking gets done - a certain maximum number of representations can interact in WM in the time available (maybe a couple of seconds). So there is a limit to what can be done in WM during the span of activation of its representations. To do more requires storing the intermediate steps in reasoning. The products of an interaction in WM can be summarized (‘chunked’) and ‘posted’ to LTM where they wait until they are needed again. When recalled and reactivated these complex packaged representations from LTM can undergo further cycles of interaction and modification, each building up the complexity of representations and of conceptual thought.
WM is therefore conceptualized as a site for integration of attended perceptual information deriving from a range of sensory inputs. Awareness seems to be used to select and integrate relevant inputs from a complex environment to enable animals to choose between a large repertoire of behavioral responses. There is a selective pressure to evolve WM in any animal capable of complex behavioural responses to a complexly variable environment. So the cognitive representations in WM in non-conscious animals are derived from external sensory inputs (eg. vision, hearing, smell, taste and touch).
The critical point for this current argument is that non-conscious animals may be aware of their surroundings, but they lack the capacity to be aware of their own body states. Awareness of outer environment is common, but awareness of inner body states is unique to conscious animals.

Working Memory Revisited in light of IQ

But I now need to go back and revisit my old understanding of Working Memory in light of my more recent understanding of general intelligence - because when I wrote Psychiatry and the Human Condition I knew essentially nothing about IQ. That such a thing can happen has at least two causes - the first is my own obtuse ignorance, the second that when I did try to tackle intelligence I was put-off by the fact (and it is a fact) that nearly-all psychometricians (including many of the best and most famous) are non-biological and non-evolutionary in their basic mode of thinking.
And this is true even when psychologically-trained psychometricians were writing about biology and evolution - it was (and is) obvious that they fundamentally hadn't a clue! This is a matter of training, especially early training - and the fact that traditionally psychology was taught in isolation from biology, hence evolutionary theory - and from medicine - in a weird No Man's Land of proliferating ad hoc theories and slavish devotion to arbitrary methods and statistics.

Working Memory versus Intelligence 
My understanding is that Working Memory and Intelligence are conceptually different, serve somewhat different functions, and are dissociable - such that a person may have higher than average intelligence and lower than average WM - or vice versa.
Intelligence is, roughly, a measure of the speed of processing (which may be roughly equivalent to efficient connectivity) while WM is, roughly, the size of the active 'workspace' -  presumably constrained by the anatomical size of the effective working memory zone and the fact that the relevant nerve cells can only be activated for a timescale of a few seconds.
So, Working Memory might be visualized as the 3D size of the space in which processing occurs - that which is processed may be visualized as the interaction of complex 3D shapes which represent the content of thought (ideas, perceptions, emotions etc) - and intelligence is the speed with which all this happens.
High intelligence means that more interactions can occur within a given size (and duration) of Working Memory; while a larger WM means that for a given level of intelligence, more things can be thought-about simultaneously.
(Something to flag-up: In a computer analogy, intelligence might be equivalent to the speed of a microprocessor in terms of the efficiency and complexity of its circuitry; while working memory might be equivalent to a microprocessor's Cache Memory. So intelligence is how quickly the microprocessor can do operations; while WM represents the amount of information which can be included in active processing possible at a given time. But I will leave critique and development of that analogy to those who know more about computers than I do - which is nearly-everybody.)


So, the highest level of thinking - such as creative genius - would seem to need both a high intelligence and also a large capacity Working Memory; since the WM would allow a person to hold many things simultaneously in-mind, including emotional evaluations - while high intelligence would allow these things to interact complexly within the short time-frame of WM.

The combination of WM and Intelligence can be regarded as processing power, and it can be seen that various combinations can lead to the same overall power. 

For example, to use some simple numbers to give an idea of ratios - 

An IQ of 3 and a WM of 2 - compared with an IQ of 2 and a WM of 3:

The WM number represents the complexity of content, while the IQ number represents the number of iterations of processing. 


So, when the WM is half as much again (3 compared with 2) then that means more possible combinations between the items being processed; and when the IQ is increased by half - then there are 50% more iterations of processing (150% of the original number).  

A WM of 3 and IQ of 2 might be 2 iterations of 3 --

2 X 3 = 6 as a number representing power.

By contrast a lower WM of 2 and IQ of 3 has one and a half times the number of iterations - therefore three iterations instead of two, but with a lower WM number to represent less complex content --

2 X 2 X 2 = 8 as a number representing power


So, in percentage terms, IQ (general intelligence) seemingly has a greater influence on processing power, because it allows more iterations of processing.

However, the quality of thinking of a WM 3/ IQ 2 will be different from a WM 2/ IQ 3 - I would guess that relatively higher intelligence would lead to a more linear, narrow and logically-extrapolative style of thinking; while a relatively higher WM would I think have a more associative style - better at multifaceted judgment.

All speculative stuff! But let's see where it leads...


Monday, 24 March 2014

Researching the decline of intelligence measured by reaction times. Where next?


Now that the approximate magnitude of the previously estimated slowing of simple reaction times over the past century or so has been confirmed,

the evidence of a significant decline in intelligence since Victorian times can no longer be dismissed or ignored.

So, the question arises what next?

1. Researchers other than myself and Michael A Woodley's group need to get involved. So long as all the results come from one group, uncertainty remains; independent testing/ replication should be attempted.

2. While there is probably no more historical reaction time data to be had; the LPCSO method of comparing longitudinal with cross-sectional data could be applied to further samples of simple reaction times - and to other possible objective measures correlated with general intelligence and with plausible biological links to intelligence.

3. Further historical data on the effects or outcomes of intelligence may be discovered, to supplement Woodley's reanalysis of innovation rates and the incidence of creative geniuses. Any quantifiable human activity or achievement which depends strongly upon intelligence ought to show evidence of decline in-line with slowing simple reaction times.

Historical variability in heritable general intelligence: Its evolutionary origins and socio-cultural consequences. MA Woodley, AJ Figueredo - 2013 - University of Buckingham Press.

4. The effects of normal ageing on simple reaction times need to be known with more precision - age of onset, shape of curve, sex differences and so on.

5. The quantitative relationship between simple reaction time and currently-measured IQ needs to be known - so as to make a valid conversion formula. The sRT IQ correlation coefficient is too low to make such a formula useful for individuals, but in terms of group averages it could be valuable. Such a conversion formula might turn out to be non-linear - and opens the possibility of measuring (group) intelligence on an interval or even a ratio scale.

6. At a deeper level, an understanding of the relationship between general intelligence and reaction times needs development - in particular, can 'g' be coherently defined in terms of the objectively measurable speed of processing? What is the minimum possible sRT? What is the effect of slowing sRT on intelligence in terms of interactions with other cognitive constraints?

7. Assuming it is agreed that intelligence has declined very substantially over the past 150 years or so; then the mechanism of this decline needs elucidation - since the rate of decline seems to be faster (maybe even twice as fast?) than the rate predicted by the differential reproductive success of people with different IQ. My preferred explanation is that intelligence is being damaged by the generation-upon-generation accumulation of novel deleterious mutations (mutations which would, through most of history, have led to a high probability of early death during childhood - and thereby the filtering of such mutations from the gene pool).


Tuesday, 18 March 2014

Further evidence of significant slowing of reaction times, and decline of intelligence, over recent decades in the UK: a method comparing longitudinal prediction with cross-sectional observation (LPCSO)


This is a re-analysis of the data from Deary IJ, Der G. Reaction time, age and cognitive ability: longitudinal findings from age 16-63 years in representative population samples. Aging, Neuropsychology and Cognition. 2005; 12: 187-215.

The principle is that data on the longitudinal slowing of simple Reaction Times (sRTs) measured in longitudinal follow-up studies of segments through the human lifespan, may be interpolated to predict the expected slowing in sRT between 16 and 63; the prediction is compared with the actual measured sRT in cross-sectional studies at ages 16 and 63. 

In other words, the longitudinal data is used to construct an 'ageing curve' which describes the expected slowing of reaction times through the lifetime of an average person. But it was found that the measured reaction times of elderly people were considerably faster than would be expected from the effect of ageing of the youngest cohort - consistent with a generation by generation slowing in sRT. 

The difference between the predicted and observed sRT of elderly people is a measure of the slowing of sRT (ie. 'secular' change, or presumed dysgenic change) over the span of 47 years - and this can be extrapolated to estimate the slowing of sRT expected over longer periods (assuming that the rate of sRT slowing is constant).

This Longitudinal Prediction Cross-Sectional Observation (LPCSO) method predicts a slowing of sRT of about 80 ms in a century, which is very similar to the measured difference of 70ms slowing between Victorian and modern sRT. 

This confirms the very substantial slowing of sRT since the 1800s, from about 180 ms to about 250 ms (a slowing of about one or more standard deviations of modern sRT) which must surely correspond to a significant decline in general intelligence.

The LPCSO method could be used on other sRT data sets to check this result - and also applied to other possible measures of dysgenic or secular trends.

I use the longitudinal data (16-24, 36-44, 56-63) from women only (see note below) to generate three measures of declining sRT expressed as a slowing of ms/year.

16-24 slows from 295 to 306 ms = 1.375 ms/year over 8 years
36-44 slows from 315 to 332 ms = 2.125 ms/year over 8 years
56-63 slows from 345 to 375 ms = 4.286 ms/year over 7 years

To interpolate the declines from age 24-36 and from 44-56 I simply averaged the rates of decline from either side - so:

24-36 slows at a rate of 1.750 ms/year
44-56 slows at a rate of 3.206 ms/year

each of these gaps is 12 years (longer than the 7 or 8 years of the longitudinal studies), so the rate per year is multiplied by 12 years.

So to make the graph we have six points with the following amount of slowing - in ms:

16-24 - 11ms
24-36 - 21ms
36-44 - 17ms
44-56 - 39ms
56-63 - 30 ms

Total predicted decline in sRT from 16-63 = 118 ms - starting from 295 ms at age sixteen this would lead to an expected sRT of 413 ms at age 63

Predicted sRT at each age

Age    msec
16     295
24     306
36     327
44     344
56     383
63     413

But the actual measured sRT at age 63 was 375 ms 

Difference between expected and observed simple RT is 413 ms - 375 ms = 38 ms in 47 years 

- which is an extrapolated slowing of sRT of about 80 ms in a century.

[Graph: circles and dotted line = measured sRT in the three different age cohorts; crosses and solid lines = predicted sRT with increasing age.]

This is much the same as the amount of decline detected in the previous study, which measured approx 70 ms of slowing between the Victorian and modern sRTs.
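For concreteness, the whole LPCSO calculation above can be reproduced in a few lines (this is just a sketch of the arithmetic exactly as described; the unrounded totals come out within about half a millisecond of the rounded figures in the text):

```python
# Longitudinal slowing of sRT within each measured female cohort,
# expressed as ms of slowing per year over the follow-up period.
rates = {
    (16, 24): (306 - 295) / 8,   # 1.375 ms/year
    (36, 44): (332 - 315) / 8,   # 2.125 ms/year
    (56, 63): (375 - 345) / 7,   # ~4.286 ms/year
}

# Interpolate the unmeasured gaps (24-36 and 44-56) as the mean of
# the longitudinal rates on either side, applied over each 12-year gap.
rate_24_36 = (rates[(16, 24)] + rates[(36, 44)]) / 2   # 1.750 ms/year
rate_44_56 = (rates[(36, 44)] + rates[(56, 63)]) / 2   # ~3.206 ms/year

# Predicted sRT at 63, starting from the measured 295 ms at age 16.
predicted = 295
predicted += rates[(16, 24)] * 8   # age 24: 306 ms
predicted += rate_24_36 * 12       # age 36: 327 ms
predicted += rates[(36, 44)] * 8   # age 44: 344 ms
predicted += rate_44_56 * 12       # age 56: ~383 ms
predicted += rates[(56, 63)] * 7   # age 63: ~413 ms (412.5 unrounded)

observed_63 = 375                     # measured cross-sectionally
difference = predicted - observed_63  # ~38 ms of slowing in 47 years
per_century = difference * 100 / 47   # extrapolates to ~80 ms/century
```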

The next step is to locate other data suitable for this 'LPCSO' method of comparing the longitudinal-prediction of change with cross-sectional-observations between different generations, to test this estimate and to look at other potential variables.  

[Note: the above is a partial and preliminary version of a paper currently in preparation with several other authors.]


Note on the decision to analyze only the female data, and to exclude the male data.

This is to clarify that the exclusion of the male data from the above Deary & Der re-analysis, and the decision to focus only on the female subjects, was made before embarking upon the analysis - and was therefore not a post hoc decision performed after the analysis.

The youngest male age cohort is 16-24; and I have been convinced by the work of Richard Lynn that men mature in terms of IQ significantly later than women, and that average native British men were probably not cognitively mature until at least age 18. 

I therefore decided to omit the 16-24 age group from analysis, on the assumption that it would contain a significant proportion of cognitively-immature men whose IQ had not yet reached its maximum level (thus the 16-24 group would potentially contain people whose IQs were still rising, and others who had begun to decline from ageing - thereby obscuring the effect of ageing).

By contrast, my understanding was that a large majority of women would have reached cognitive maturity (and maximum IQ) by age 16 – so there was no problem with including women from the 16-24 age cohort.

Having decided to delete the age 16-24 cohort for the men, I was left with only four male data points for reaction times – and therefore just a single internal comparison for the analysis of longitudinal versus cross-sectional change – i.e. that comparison bounded by the 46-54 and 56-63 age cohorts. A graph made of just 4 points spread over just 27 years seemed clearly inadequate for the analysis I envisaged; and there was no internal replicate for the predicted versus actual change in reaction times between cohorts.

Therefore I discarded the male data and analyzed only the females.


Friday, 14 March 2014

Creativity is invisible, deniable, inevitably misunderstood - yet vast in impact


Creativity is, in practice, culturally invisible - although its impact may be seismic.

This is best seen in technologies - where the effects are most apparent and where the archaeological and historical record is of most value. 


The great mass of truly creative breakthroughs in history are unattributed - the men who made them are forgotten, their names were not attached to their creative acts.  This enables credit to be reassigned to 'the folk' or 'culture' - but all breakthroughs whose origins actually are known seem attributable to one, or at most two, men.


Creative breakthroughs are extremely difficult and rare - as is shown by the centuries, perhaps even millennia, of stasis which are then suddenly broken by simple breakthroughs - bow and arrow, arch, stirrup, new shapes of plough.

As soon as the breakthrough has been made into an artifact, then it is obvious - many people can understand it, many people can make it, and almost everybody can use it.

Why give special credit to someone just for discovering something obvious?  

So, once the creative breakthrough has been made, by one unattributed man perhaps, its effects can rapidly spread, even across the whole world - human life may be transformed by a single anonymous breakthrough.


Anonymous creative breakthroughs are a sufficient basis for mass cultural change. The mis-match between the obscurity of the individual creator and the vast consequences of that breakthrough really cannot be exaggerated. 

Yet many or most cultures show no evidence of any creative breakthroughs at all - presumably because they utterly lacked creative people. These cultures had sufficient ability to manufacture, train and use technologies of a certain type - and to pass on that knowledge between generations in a stereotypical fashion - but no more.

That is the norm for human history. That is the situation for most people who have ever lived.


So, creative breakthroughs are almost always deniable. As soon as the breakthrough has been made, within minutes perhaps, the extraordinarily rare and special nature of its occurrence is deniable.

Indeed, creativity is deniable largely because it is so rare - few can appreciate that which they cannot do. Alternative explanations are almost-always preferred - creativity is almost always explained-away - especially by the perennial and utterly false cry: 'but it was obvious!'


Sunday, 9 March 2014

What is the potential evidence AGAINST a one standard deviation decline of intelligence over the past 150-200 years?


Here is a list of some objections to and evidence against the assertion that average Victorian IQ would have been measured at one SD higher than moderns - that is at a modern IQ of 115 or more.

My comments follow [in square brackets]


1. The decline of intelligence is too fast to be accounted for by known mechanisms related to differential reproductive success between the most and the least intelligent people.

[I agree, that mechanism only accounts for about half the rate of decline required to produce 1 SD slowing in simple reaction times, hence intelligence. Another mechanism, or more than one extra mechanism, is required. I favour the accumulation of deleterious (intelligence damaging) mutations generation upon generation, due to very low child mortality rates since 1800, compared with all previous times in history.]


2. A 1 SD decline in intelligence since Victorian times would lead to a collapse of high level intellectual activity such as the number of creative geniuses and the rate of major innovations...

[I agree - it would lead to collapse...]

but this collapse has not happened - therefore there cannot have been a 1 SD decline.

[But my interpretation is that collapse has happened: the number of creative geniuses has collapsed and so has the rate of major innovations. Unless we are fooled by hype, or the self-interested self-promotion of insiders, I think this collapse is very obvious indeed across the whole of Western culture. I was writing about this collapse for many years before I came across the evidence of reducing intelligence - but I was trying to explain it in other ways such as the decline in scientific motivation, honesty, institutional factors, modern fashions, bureaucratization, Leftism etc. But the data for intellectual collapse are solid: what is in dispute are the best explanations.]


3. Intelligence has been rising, not falling, in developed countries - as evidenced by the rising average IQ test scores - a phenomenon usually called The Flynn Effect.

[I agree that average IQ test scores rose through the twentieth century - but this was a matter of rising test scores, while average intelligence was declining. In other words, test scores were subject to inflation - or more accurately stagflation: as when prices are rising but economic production is declining. IQ test scores were rising, but real intelligence was declining.]


4. The evidence of slowing simple reaction times is not valid, because measurements and sampling methods in Victorian times are too different from modern measurement and sampling methods.

[Michael A Woodley and I have argued that these micro-methodological quibbles are inappropriate and invalid - and I think we have refuted them.]


5. Simple reaction times are not a sufficiently accurate, or valid, measurement of intelligence. In fact the idea that reaction times measure intelligence is obvious nonsense, because the best fist fighters and athletes have the quickest reactions, so they would have to be the most intelligent people - but they aren't...

[Simple reaction times are nothing to do with what the general public thinks of as 'quick reactions', and nothing to do with athletics, sports, or that kind of thing. Since the mid 1800s it has been known that differences in simple reaction time - such as seeing a light flash and pressing a button - are correlated positively with differences in intelligence. The correlation is not very tight, there is a lot of scatter around the line, but there always is a correlation - and average sRT differences accurately predict measured intelligence differences between both individuals and groups such as class, sex and race. Nobody who knew the field disputed the robust correlation between sRT and IQ - and many of the main scholars (such as Jensen) have assumed that the reason for the correlation was causal - that sRT reflects speed of neural processing, which is a fundamental aspect of general intelligence. It is dishonest scientific practice to overturn more than a century of good research just because the sRT results go in a direction that you find surprising.]


6. One SD slowing in sRT does not necessarily imply a 15 point reduction in IQ.

[I agree, because IQ is not a 'real' interval scale - which means that the difference in intelligence measured by 1 IQ point is not known, and presumably varies at different points in the scale. Reaction time is, however, an interval scale - measured in milliseconds. I have assumed that sRT should therefore take priority as the most valid scale, and that IQ should be calibrated against sRT. Therefore I argue that if sRT has slowed by about one SD, then this should be understood to mean one SD decline in real intelligence.]


7. An sRT slowing of about 70 milliseconds between the 1880s and nowadays may average at about 1 IQ point per decade, but this does not necessarily imply a linear rate of decline - the rate of change may vary.

[I agree. The actual rate of decline will depend on the main causes of decline. This is not known. Indeed, if I am correct that a generation-upon-generation accumulation of deleterious, intelligence-damaging gene mutations is an important factor - the way that this works is not known. My feeling or hunch is that this kind of effect would not be linear, but that the incremental amount of damage would increase with each generation - perhaps exponentially, or by some other accelerating rate. So that if there were 2 new deleterious mutations per generation, then 4 would be more than twice as harmful as 2; and 8 would be more than twice as harmful as 4 - and so on. So the rate of decline of intelligence (and slowing of sRT) over 150 years need not be linear - but I would guess it is accelerating.]
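Purely as an illustration of what such an accelerating relationship between mutation load and harm would look like - the power-law form and the exponent here are arbitrary assumptions for the sketch, not estimates of any real genetic effect:

```python
# Toy model: total harm grows faster than linearly with mutation load.
# The exponent 1.5 is an arbitrary illustrative choice.
def harm(mutation_count, exponent=1.5):
    return mutation_count ** exponent

# Each doubling of the load (2 -> 4 -> 8 -> 16 mutations) multiplies
# the harm by 2**1.5 (about 2.8) - i.e. more than twice as harmful,
# matching the 'accelerating damage' idea in the text.
loads = [2, 4, 8, 16]
harms = [harm(n) for n in loads]
ratios = [harms[i + 1] / harms[i] for i in range(len(harms) - 1)]
```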


8. There is just not enough evidence. One historical study with not very many data points is not enough to overturn the consensus from the Flynn effect studies that intelligence is rising.

[Fair point - except that the current consensus is not very secure - since confidence that rising IQ test scores really means rising 'g' (general intelligence) has never been very high. But on the other hand, the sRT historical evidence of declining intelligence is too strong to ignore. The best response is to seek further methods of confirming the decline in intelligence using different data and methods. That is what Michael A Woodley and I are doing, as best we may - but it would be great to have other people also working on the problem.]


Saturday, 8 March 2014

What do YOU believe about the reported slowing of average simple Reaction Times and the (?One Standard Deviation) decline in intelligence since Victorian times?


1. Do you believe that Victorian simple reaction time (sRT) data are not comparable with modern data? If so, would you be convinced by evidence of rapidly slowing reaction times over recent decades, measured in one laboratory and using only modern RT machines? Because this kind of evidence is in the pipeline.

2. Do you believe that - despite about 140 years consensus that sRT and IQ are significantly correlated, and the general belief that this correlation is because general intelligence is dependent upon processing speed of which sRT is an indirect measure - there is NOT a causal relationship between simple reaction times and intelligence? That, therefore, average sRTs could be getting much, much slower but that this would not necessarily make any difference to average intelligence?

3. Do you believe that the measured decline in average simple reaction time from a Victorian sRT average speed of about 180 milliseconds (in several independent studies) to a modern average speed of 250 milliseconds (or slower) - a slowing of 70 milliseconds plus - is not enough to be of interest: that it is too small to reflect any significant or meaningful reduction in intelligence?

4. Do you believe that because the measured slowing of sRT over the past 150 years seems unexpected, and is larger than you would have supposed possible - indeed strikes you as ludicrous - we should therefore simply ignore it?

5. Do you believe that - because the data on long-term sRTs seem anomalous with your world view - we should assume that there is something wrong somewhere with the Victorian-to-modern comparison, and carry on as if we knew nothing about longitudinal changes in sRTs?

6. Do you believe that there has been a significant reduction in average general intelligence over the past 150 years, but that it is much less than one standard deviation - probably more like HALF a standard deviation? And the large size of the sRT slowing is just a Red Herring?

7. Do you believe that average intelligence has NOT changed over the past 150 years - that moderns have the same intelligence as Victorians? And the slowing of average sRT is irrelevant?

8. Do you believe that average intelligence has increased over the past 150 years despite slowing of sRTs, because you believe the pen-and-paper IQ tests are more valid, reliable and/or objective than reaction time data?

9. Or something else, or what?


Greg Cochran, slowing of simple reaction times and the 1SD decline in intelligence over the past 150-200 years


Greg Cochran has been the most significant (intellectually substantial) critic and opponent of the idea (deriving from myself and Michael A Woodley) that historical reaction time data have shown a significant (approx. one standard deviation or 15 modern IQ point) decline in intelligence since Victorian times. 


In his latest blog posting, Greg takes another side swipe at the idea.

Here is my comment in response.



As you presumably know, I have an extremely high regard for your work (e.g. having provided a back page blurb for 10,000 Year Explosion and invited you to write for Medical Hypotheses on the germ theory of male homosexuality).

And I am – on the whole! – grateful for your opposition to the finding of an approximately 1SD (15plus IQ points by modern measurements) decline in general intelligence in England (and similar places) as measured by simple reaction times since about 150-200 years ago – grateful because it has stimulated me to organize my thoughts on the subject.

But I continue to think you are wrong! and that the evidence you bring against this decline is inadequate – so I continue to hope to persuade you otherwise.

I have three considerations to offer.


1.       The decline in question is (roughly) from IQ 115 to IQ 100 over the space of 150 years – about one IQ point per decade (whatever that means!). But I suggest that this would not be expected to have analogous functional consequences to a decline from 100 to 85, since IQ is not an interval scale.

(In a nutshell, I think Victorian English IQ was *about* the same or a little more than recent Ashkenazi IQ – but has declined.)

This 150 year decline measure in modern IQ units corresponds to a slowing of simple reaction times from approximately 180 to 250ms for men – about 70 milliseconds.

And the minimum RT in the Victorian studies was about 150 ms – which is probably near the physiological minimum RT (and maximum real underlying IQ) constrained by the rate of nerve transmission, length of nerves, speed of synapse etc.

So average Victorian RT was about 30 ms above minimum RT, while modern RT is about 100 ms above minimum.
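In terms of 'headroom' above the presumed physiological floor, the comparison works out as follows (the floor and the averages are just the round figures quoted above):

```python
# Headroom of average sRT above the presumed physiological minimum.
floor_ms = 150          # approximate physiological minimum sRT
victorian_avg = 180     # approximate Victorian male average
modern_avg = 250        # approximate modern male average

victorian_headroom = victorian_avg - floor_ms   # 30 ms above the floor
modern_headroom = modern_avg - floor_ms         # 100 ms above the floor
ratio = modern_headroom / victorian_headroom    # more than 3x the distance
```

So on these figures the modern average sits more than three times as far above the presumed floor as the Victorian average did.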

By contrast – modern reaction times (in Silverman’s study) for men average approximately 250ms with a standard deviation of 50ms – however there are good recent studies with an average RT of 300ms for men.

I would argue (on theoretical grounds) that as RT slows there 'must' come a point when it comes up against the neural constraints of intelligence, such as short-term/'working' memory (the mental 'workspace', activation of which lasts a few seconds, seemingly) - and therefore there would be a non-linear effect of reducing intelligence - intelligence would cross a line and fall off a cliff.

My assumption is that a reduction in (modern normed) IQ from average 115 to 100 would *not* have such a catastrophic effect on high level intellectual (abstract, systemizing) performance as a reduction from average 100 to 85. (At a modern average IQ of 85, top level intellectual activity is *almost* entirely eliminated.)

When we are dealing with the intellectual elites, the same effect may be more apparent - the initial slowing of RT may retain the possibility of complex inner reasoning; while beyond a certain threshold the number of possible operations in the mental workspace would drop below the minimum needed for high-level intellectual operations.


2.       It may be that your example of maths does not refute the observation of reduced intelligence. It may be that modern mathematical breakthroughs are of a different character than breakthroughs of the past - and do not require such high intelligence.

I think this may be correct in the sense that I get the impression that modern maths seems to be substantially a cumulative, applied science – somewhat akin to engineering in the sense of bringing to bear already existing techniques to solve difficult problems.

So a top level modern mathematician has (I understand) spent many years of intensive effort learning a toolbox of often-recently-devised methods, and becoming adept at applying them, and learning by experience (and inspiration) where and how to apply them.

This seems more like the Kuhnian idea of Normal Science than the Revolutionary Science of the past – more like an incremental and accumulative social process, than the individualistic, radical re-writings and fresh starts of previous generations. And, relevantly, a method which does not require such great intelligence.

I also note that many other sciences, from biology to physics, have observed the near-disappearance of individual creative genius over the past 150 years – and especially obviously with people born in the past 50 or so years - which would be consistent with reducing intelligence.


3.       Michael Woodley and I have discovered further independent – but convergent – evidence consistent with about 1 SD (15 IQ point) decline in intelligence from Victorian times, again using simple reaction time data – but, as I say, using a completely different sample and methods. The paper is currently under submission.

I mention it because the unchallenged consensus post-Galton has been that simple reaction time has some causal - although not direct - relationship to intelligence; and if we have indeed established that RT has substantially slowed over recent generations, then either this would need to be acknowledged as implying a similarly substantial decline in intelligence - or else the post-Galton consensus of IQ depending on RT would need to be overturned.


Note added: An e-mail correspondent writes:

Take the following claim of Cochran's:

"In another application – if the average genetic IQ potential had decreased by a standard deviation since Victorian times, the number of individuals with the ability to develop new, difficult, and interesting results in higher mathematics would have crashed, bring [sic] such developments to a screeching halt. Of course that has not happened."

Cochran is completely correct in his reasoning, and in his prediction that higher mathematics would have crashed given a one sigma decline in g. His last sentence is however empirically false, because a crash is precisely what the data indicate happened.

Charles Murray, in his 2003 Human Accomplishment presents graphic data of the rate of eminent mathematicians and major accomplishments in mathematics (p. 313). The trends reveal a precipitous decline in the occurrences of both of these between the years 1825 and 1950. Extrapolating the decline in this period out to the year 2000 would place the rate of eminent mathematicians and their accomplishments below the rate observed in 1400, despite massive population growth in the West during this interval. The peak of mathematical accomplishment clearly occurred during the heyday of eugenic fertility in the West, between 1650 and 1800, and actually occurred earlier than the peaks experienced in other areas of science and technology, perhaps suggesting greater sensitivity to shifting population levels of g (a testable prediction incidentally).

These data completely concur with my sense that modern 'mathematics' has stagnated. There are virtually no valid proofs being offered for the long-standing mathematical problems these days. Six of the seven Millennial prize problems remain unsolved. More worrying still, no one seems to have grasped the enormity of the problem posed to the foundations of mathematics by Georg Cantor's work on transfinite numbers, and we are no closer to understanding how these fit into the foundations of mathematics today than we were in the 1900s.

The two greatest mathematicians alive today are Andrew Wiles, who solved Fermat's Last Theorem, and Grigori Perelman, who amongst other things, solved Poincare's Conjecture (the only Millennial prize problem to have been unambiguously solved thus far). Of the two of these, Perelman is the only one who would compare favorably with the great mathematicians of the past. Wiles, whilst having undoubtedly made a major discovery, is clearly second rate by historical standards, as he had to marshal enormous amounts of time and effort into solving just one problem, which was not completed until he was more than 40 years old - an achievement pattern atypical of great mathematicians who typically reach peak accomplishment at less than 35 years of age.

That leaves Perelman, who has been prodigious and productive from a relatively early age. He is also nothing if not scathing about the state of modern mathematics, having claimed the following in a 2006 interview about why he turned down various prestigious mathematics prizes:

"Of course, there are many mathematicians who are more or less honest. But almost all of them are conformists. They are more or less honest, but they tolerate those who are not honest."

This could of course equally well apply to every area of scientific inquiry in the modern world. Data, such as those presented by Murray and others, clearly reveal that what you have today are hordes of 'mathematicians' who are collectively not one iota as accomplished as the relatively less numerous, but vastly more talented, individuals who dominated this field in centuries past.

Just because these over-promoted self-promoters claim something is 'interesting', 'new' or even a 'breakthrough' in their field doesn't make it so - the decline in eminence in point of fact makes it antecedently highly implausible that 'mathematicians' today are even capable of generating anything approaching a breakthrough (ultra-rare individuals such as Perelman and Wiles excepted).




Monday, 17 February 2014

How does high intelligence evolve?


The simplest way is when (on average) only those of above-average intelligence are able to raise children to sexual maturity.


The assumption is that (in pre-industrial society) almost everybody has above-replacement fertility (averaging significantly more than two children per woman) - but also that people of lower intelligence almost never raise any children to adulthood, because almost all of their children die before reaching sexual maturity.


This was, in fact, probably the situation that prevailed in Medieval Britain, and probably Western Europe generally, and probably pre-modern China and East Asia, and among Ashkenazi Jews in the Middle Ages.


You can think of this as a bottom-threshold, or as (barring rare flukes) a minimum intelligence level for rearing children to adulthood...

With this minimum intelligence necessary to rear children to adulthood being above the average - and with this situation prevailing as a selection environment for long enough - the average level would be raised.

Of course, what will most likely happen is that as intelligence declines, there is a sharply-declining-probability of raising children to adulthood - a probability which reaches near-zero at somewhere around average intelligence for that group.
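A minimal simulation of this kind of threshold ('truncation') selection shows the mechanism - the heritability figure, the threshold, and the noise terms are arbitrary illustrative assumptions, not estimates for any real population:

```python
import random

random.seed(1)
heritability = 0.5   # fraction of parental deviation passed to offspring
threshold = 100      # only parents above this level rear children to adulthood

# Parental generation: normally distributed trait, mean 100, SD 15.
population = [random.gauss(100, 15) for _ in range(100_000)]
parents = [iq for iq in population if iq >= threshold]

# Offspring inherit part of the parental deviation, plus fresh noise -
# so they regress toward the old mean, but not all the way back.
offspring = [100 + heritability * (p - 100) + random.gauss(0, 10)
             for p in parents]

mean_before = sum(population) / len(population)   # ~100
mean_after = sum(offspring) / len(offspring)      # several points higher
```

Under these assumptions the offspring generation averages several points above the parental generation - the low end has simply been selected out, with no new 'genes for intelligence' required.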


The point is that there is not much need for evolving genes associated with higher intelligence; rather the mechanism is mostly one of selecting-out genes associated with low intelligence.

These selected-out genes may have other, not-intelligence-related advantages, which would, of course, be lost.

Possible/ plausible examples of selected-out genes (from surveying the higher intelligence populations) are genes for athleticism (e.g. genes associated with better running, jumping, one-on-one unarmed combat).


And of course this selection process for higher intelligence pretty-much ceased to operate in developed countries from about 1800-1850 - when childhood mortality rates began to plummet towards zero among the least intelligent; and selection became (more or less) selection for 'pure fertility' - so that any genes associated with any behavioural cause of maintained or increased fertility (including genes which damage biological functions, rendering someone unable or unwilling to use fertility-controlling technologies) would have been amplified in this post-industrial modern population.  

Wednesday, 15 January 2014

Intelligence is a tertiary phenomenon


What is intelligence for?

On the one hand higher intelligence helps in learning, it helps in analysis and understanding... stuff like that. 

But given that most of the most intelligent people have been doing silly and wicked things for much of history - what is the proper use of intelligence?

To be 'a good thing', to be valuable, to benefit mankind: intelligence ought to be a tertiary phenomenon.


1. Motivation

The most important thing is motivation - what you are trying to do.

If you are not trying to do the right thing, then you will do harm - and intelligence only increases the amount of harm you are capable of doing. 


2. Honesty

If you are dishonest, then the material which your intelligence works-upon will inevitably be worthless at best and most likely harmful.


3. Intelligence

If you are 1. properly motivated and 2. honest, there is a good chance that intelligence will yield something useful.

But only when built-upon 1 and 2.


Tuesday, 7 January 2014

The ageing population is a contribution to the decline of intelligence in developed nations


From considering some of the points in this post, it is clear that the ageing population - the change in the age structure (as represented by a population pyramid) - has been a factor in reducing average intelligence.


Almost all nations at present have grossly distorted population structures resembling one or another of these extremes:

In terms of the median (average) age, Angola is in the late-teens, Japan is in the mid-forties.

Such extremes of median age have not been seen in human history, and such an extreme difference between nations is highly significant.


The population structure ideal is something in between, and closer to the 'stationary' shape - therefore probably it would be roughly 'pyramidal', but with a much narrower base than Angola.

The developed world nations all approximate to the top-heavy Japanese shape among the indigenous population - very few children at the base, with larger proportions of the elderly above.


As a person ages they suffer a decline in intelligence - which is objectively (but only approximately) measurable by a slowing of simple reaction times.

Reaction times get faster from young childhood up to sexual maturity as intelligence increases throughout childhood and up to about age 16 for girls and 18 for boys.

Adults are more intelligent than children - and a population grossly over-dominated by children and early teenagers, like those of many developing world nations, will therefore naturally have a lower average intelligence. However, such populations would be expected to become more intelligent over the short to medium term (the forthcoming years to decades) as current children mature into adulthood.


The opposite applies in the developed nations.

Reaction times/intelligence do not change much during early adult life; but seemingly the decline gets faster and faster from the thirties or forties onward; so the decline in intelligence from age forty to fifty is much greater than from thirty to forty, and continues to accelerate.


(The actual amount of decline is only imprecisely known, I think, because it would require longitudinal studies lasting many decades. But as a very approximate ballpark figure, I would suggest a loss of about 10 IQ points (2/3 of a standard deviation) from age 20 to 70. Therefore the decline would go something like: 30-40, 1 IQ point lost; 40-50, 2 IQ points lost; 50-60, 3 IQ points lost; and 60-70, 4 IQ points lost.)
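The ballpark schedule above can be laid out as a simple cumulative trajectory (the per-decade figures are the rough guesses from the text, not measurements):

```python
# Rough, guessed schedule of IQ loss per decade of adult life.
loss_per_decade = {
    (20, 30): 0,   # little change in early adult life
    (30, 40): 1,
    (40, 50): 2,
    (50, 60): 3,
    (60, 70): 4,
}
total_loss = sum(loss_per_decade.values())   # 10 IQ points, ~2/3 SD

# Cumulative trajectory starting from a nominal 100 at age 20.
iq = 100
trajectory = []
for start, end in sorted(loss_per_decade):
    iq -= loss_per_decade[(start, end)]
    trajectory.append((end, iq))   # ends at age 70 with a nominal 90
```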


The developed countries currently have a median age in the mid-forties, which means that average intelligence has already declined from this cause - but as the median age rises above 45 and continues to climb, the rate of intelligence decline will increase further and further.

At a national level, there would then appear an apparently-sudden (because more rapid) and unavoidable decline in national capability to accomplish functions requiring a highly intelligent population.


Of course, intelligence is not the only thing that changes with age - physical ability declines, and personality also changes - but the objective nature of reaction times makes the picture simpler and more objectively measurable, in principle.

(Objectively measurable, that is, at the population level - where the imprecision of simple reaction times for estimating individual intelligence is overcome by averaging over large numbers.)


The point is that a population with a top-heavy population pyramid, a population with a median age in the forties and increasing, is a population with:

1. Reduced average intelligence compared with the optimal population structure - and the transition of population structure to the top-heavy form would be accompanied by an increasingly rapid reduction in average intelligence from this cause;

2. An expectation of further and more rapid decline in average intelligence over the short to medium term (the coming years and next few decades), due to the small proportion of the population contained in those age-cohorts that will be moving into young adulthood (with peak intelligence) in the near future.
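To illustrate point 1 numerically - how the age structure alone shifts the population average of an age-dependent trait - here is a sketch in which the two age distributions are invented for illustration (not census data), and the age-specific means loosely follow the ballpark ageing figures above:

```python
# Age-specific mean IQ (rough illustrative values, adult bands only).
iq_by_band = {'20s': 100, '30s': 100, '40s': 99, '50s': 97, '60s': 94}

# Two hypothetical adult age structures (proportions sum to 1):
# a 'young' pyramid and a top-heavy 'old' structure.
young_structure = {'20s': 0.35, '30s': 0.30, '40s': 0.20, '50s': 0.10, '60s': 0.05}
old_structure   = {'20s': 0.10, '30s': 0.15, '40s': 0.25, '50s': 0.25, '60s': 0.25}

def weighted_avg(structure):
    """Population-average IQ under a given age structure."""
    return sum(structure[band] * iq_by_band[band] for band in structure)

# The top-heavy structure has the lower average - here by roughly
# a point and a half - from age composition alone, with identical
# age-specific abilities.
gap = weighted_avg(young_structure) - weighted_avg(old_structure)
```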


Note: Of course, the Leftist media propound sustained mass immigration as a solution to this problem; but of course it is not a solution to this problem and very obviously leads to multiple other and intractable problems.

Suffice to say, any potential immigrant groups that could theoretically improve the cognitive deficit will not improve (and probably worsen) the reproductive deficit - and vice versa.

In the long term, the current top-heavy population structure of the developed nations will self-correct, because it is unsustainable in multiple ways; and thus the ageing-contribution to intelligence decline will cease.

However, the intelligence level of the population which stabilizes will, of course, be significantly lower than it was 150-200 years ago - due to the substantial intelligence decline over that period.