Monday, 30 June 2014

What proportion of offspring survived in historical times? - with reference to mutation accumulation


The paper referenced here:
Yann Lesecque et al. A Resolution of the Mutation Load Paradox in Humans. Genetics 2012; 191: 1321-1330.

could provide a way into the literature on accumulated mutation damage in other species.

There seem to be a number of variables to consider - how many new mutations per generation, what proportion of offspring survive, how fast the population is growing and probably others.

Although this literature says 88% failing to reproduce (i.e. 12% surviving to reproduce), this is only approximate - and there would have been considerable variation at different points in history.

It also seems a bit high for human reproductive capability - since hunter-gatherer women seem seldom to have more than six children (due to late menarche, births spaced out by the contraceptive effect of prolonged lactation, and then low fertility from age c. 40) - which would not be enough.

So I guess the real number would be more like an average 1/4 or 1/3 of human offspring surviving for most of the time and in most places.


What about delayed reproduction in modern populations?

Delayed reproduction leads to more chance of mutations (e.g. from sperm) and problems from poorer quality control on the release of older eggs (e.g. trisomy 21 is probably the tip of an iceberg of similar problems).

But late reproduction also reduces the number of generations, and the mutation accumulation from that cause - modern people fit only two generations (averaging thirty-plus years each) - i.e. two new lots of mutations - into sixty-something years; where in historical times there would have been three generations per 60-70 years - three lots of new mutations.

So slowing reproduction (by increasing the average age of reproduction) may perhaps reduce mutation accumulation temporarily; given that the effect of aging on mutations may be less per decade than the effect of an extra generation of new mutations.
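The trade-off sketched above can be put into rough numbers. The figures below are illustrative assumptions, not data from this post: a baseline of about 45 de novo mutations per child at paternal age 20, rising by roughly 2 per extra year of paternal age (numbers in the broad range reported by sequencing studies), with the father's age taken as equal to the generation time:

```python
# Toy comparison: new mutations accumulated per century under a short
# versus a long generation time. The mutation model is an ASSUMPTION
# for illustration: ~45 de novo mutations per child at paternal age 20,
# plus ~2 more per additional year of paternal age.

def mutations_per_child(paternal_age):
    return 45 + 2 * (paternal_age - 20)

def mutations_per_century(generation_time):
    # Father's age is assumed equal to the generation time.
    generations_per_century = 100 / generation_time
    return generations_per_century * mutations_per_child(generation_time)

for gen_time in (22, 33):
    print(f"generation time {gen_time}: "
          f"~{mutations_per_century(gen_time):.0f} new mutations per century")
```

Under these (assumed) numbers the two strategies come out close, with the longer generation time slightly ahead - consistent with the tentative suggestion that the per-decade ageing effect may be smaller than the cost of an extra generation.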


This was originally a comment at a new blog called Brain Size

Which is shaping-up to be a valuable contribution to intelligence research.

The author, Herr Professor Doktor Pumpkinperson, has the attributes of honesty, persistence (this especially), intelligence and a refreshing disinclination to take offense at the criticism of others!


Population expansion in England with respect to mutation accumulation


When the Black Death (arriving in 1348, with recurrent outbreaks into the late 1300s) halved the population of England, the deaths were disproportionately among the poorest (i.e. apparently 'eugenic').

Then the population took about 200 years (until around 1600) to recover from 2 to 4 million - all the time under strong 'eugenic' selection (probably, nearly all of the surviving children came from the elite of skilled craftsmen among the working class and from the 'intellectual' middle classes).

That is a 200-year doubling time. It then took another 200 years for the population of England to double to 8 million (around 1800); then about 50 years to double again to 16 million; about 50 more years to double to 32 million by around 1900; and then about 100 years for the most recent doubling.

So, 4 million was probably the usual maximum population for agrarian England, and there have been five doublings of population in the roughly 600 years since the Black Death.

(rounded numbers)

1350 - 4 million
1400 - 2 million
1600 - 4 million
1800 - 8 million
1850 - 16 million
1900 - 32 million
2014 - around 64 million
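The doubling times implicit in these rounded figures can be checked directly; a minimal sketch using only the numbers listed above:

```python
import math

# Rounded (year, population in millions) figures from the list above.
history = [(1400, 2), (1600, 4), (1800, 8), (1850, 16), (1900, 32), (2014, 64)]

# Each successive interval is one doubling; print how long it took.
for (y0, p0), (y1, p1) in zip(history, history[1:]):
    assert p1 == 2 * p0  # the rounded figures double at every step
    print(f"{y0}-{y1}: one doubling in {y1 - y0} years")

# Five doublings in total: 2 million -> 64 million since the Black Death.
print(int(math.log2(64 / 2)), "doublings")
```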

The rate of increase was slow, and child mortality was very high, until about 1800 or later - then three of the doublings happened in the 200 years since child mortality began to fall, fertility began to fall, and selection became more and more strongly dysgenic.


A comment on the personality trait of Openness (and Personality in general)


Personality is supposed to be independent of intelligence - Personality is a separate explanatory variable which can be seen after Intelligence is controlled-for.

Intelligence is primary as an explanation of behaviour - primary both historically, and because intelligence (very obviously) affects personality - but personality does not affect intelligence.

In other words, as a matter of routine - when measuring personality, one should also test for intelligence - and before looking at the effect of personality on behaviour, one ought to remove the effect of intelligence (by stratified analysis, preferably - i.e. creating narrow strata of IQ and only looking at personality effects within these strata - or else by some kind of regression).


However, much personality research is done on an already-intelligence-stratified sample - such as Psychology Students at Mudsville State University - in these situations the researcher can usually get away with omitting IQ testing and just evaluating Personality.

However, this does not apply to the pseudo-trait of Openness - which is often so sensitive to IQ differences that it varies even within strata such as the same class of the same college.


If Intelligence is controlled-for, then the effect of Openness disappears - because Openness is merely 'the personality type of intelligent people in Western-type societies' (but rather badly conceptualized).

While the other personality traits (C, E, A and N), which derive essentially from HJ Eysenck, are robust to IQ differences (especially in college populations which provide most of the subjects): Openness is not.

Openness is merely a (weak) correlate of IQ (in Western societies)... plus noise and cross-contamination from other personality traits (e.g. a little Psychoticism/ Schizotypy).


Take home message: all research on so-called Openness is either ignorant, incompetent - or (usually) both. 


This began as a comment on the Isegoria blog

Thursday, 26 June 2014

The genius as a 'medium': channeling external influences


(What follows goes outwith science.)

Pretty much all the geniuses I have heard of and who have expressed an opinion seem to say (in one way or another) that the key factor in their genius comes from outwith their conscious motivation - and feels as if it appeared 'ready-made' in their awareness.

In other words, geniuses will often decline credit for the essence of their achievement (and it is other people who often insist upon ascribing agency to the genius).

This means that - to a varying extent - genius seems to be experienced as a mediumistic phenomenon, that being a genius feels like being a channel for insights and understandings and inventions.

From this point, there may be a division among geniuses: crafted versus automatic. Some 'receive' the inspiration, and work out for themselves how to communicate it by craft; while other geniuses also receive the communication itself - deliberately crafted writing from within the writer, versus a more 'automatic' kind of writing whose emergence the writer (to some extent) mentally stands back and observes.


The difference between the crafted and automatic types of genius is seen when the product of a genius cannot satisfactorily be accounted for by the observable personality and ability of that person.

Tolkien and JK Rowling could be taken as examples of the two types. Tolkien received his inspiration as 'given' to him - as if discovered by him in fragments of ancient texts; and the achievement of The Lord of the Rings can easily be understood in terms of Tolkien's own disposition, his abilities, and what he wanted to do. When I see Tolkien in an interview, it is obvious how a man like him would write LotR.

By contrast, JK Rowling's Harry Potter series is, in my evaluation, also a work of genius - albeit a lesser one than LotR. But it is hard - I would say impossible - to understand Harry Potter as plausibly having been crafted by JK Rowling. When I see Rowling in an interview, there is a gross mismatch between the person and the work. I believe that the actual communication of Harry Potter was a kind of 'automatic writing' - experienced more like taking dictation than crafting prose.

In support of this specific interpretation is that Tolkien felt a strong loyalty to LotR, and a gratitude for having the inspiration; while Rowling appears to be hostile to Harry Potter and has a detached, critical and revisionist attitude towards it - consistent with her not having had much to do with its production, but having mostly observed it emerging.


Where does personal choice and motivation come in?

The genius must accept the external inspiration; and the automatic type of genius must also accept the 'dictation' of the actual mode of communication.

Any attempt to interfere or reshape the external inspiration - or to select or distort the automatic writing - will result in a drying-up of the source of inspiration and loss of automatic writing ability.

However, inspiration can be refused, and distortion of communication can be attempted - with the above consequences. Genius doesn't happen anymore.

Presumably, this accounts for the frequent situation when someone produces a single work of (inspired) genius - but everything else they produce (which is entirely the product of the creator, and lacks external inspiration) is at a qualitatively lower level.


Most of these ideas are derived from A Geography of Consciousness by William Arkle (1974), pp. 151-156.

Tuesday, 24 June 2014

The Lop-sided genius - mutations, channelling K, and group selection


The idea of Life History (LH) is that organisms tend to have a default 'r' strategy of fast growth and sexual maturation leading to large numbers of offspring requiring minimal parental investment; but that natural selection can act on groups of organisms to enhance a 'K' strategy of LH which is characterized by slower growth and sexual maturity, smaller numbers of offspring, and a greater investment of parental resources per offspring.

So, among mammals, mice are r-selected while humans are K-selected - crudely, the r strategy is for quantity of offspring, while K is for higher quality of offspring.

But a further aspect of LH theory is that within species there are a range of potential Life Histories - and the young organism may be able to respond to environmental conditions to channel development resources in various ways. For example if conditions are harsh and an early death seems likely, then resources are channeled in a relative r direction; while less stressful conditions may trigger a K strategy.

Michael A Woodley has suggested that the slow LH strategy of K is also a strategy for behavioural specialization - so that a more K-selected population of humans is also more likely to generate behavioural specialists, including cognitive specialists: people with high and also highly-specialized types of intelligence.

In other words, K-selected populations are more likely to produce geniuses - because geniuses have a Lop-sided kind of cognitive activity; geniuses prioritize their special ability and do not put so much effort into the kind of social interactions and reproductive strategies (mating, courting, marriage, child rearing) which dominate the majority of people.


So geniuses have something wrong with them, from the perspective of individual reproductive success.

This might suggest that genius is simply a pathology, a rare disease, probably a particular set of genetic mutations - which happens to be useful by chance, in some particular times and places...

Alternatively, it may suggest that genius is group-selected - on the basis that it was geniuses which provided the breakthroughs which led to the industrial revolution and the consequent expansion of those European national populations which produced the geniuses (England, France, Greater Germany, Italy etc).

On this scheme, a genius does not - on average - benefit his own reproductive success; but a population which produces enough geniuses will benefit its own population-level reproductive success.


So, what are the ingredients of genius? The answer is twofold: high intelligence plus a high level of the personality trait Psychoticism.

But what is Psychoticism, from the perspective of Life History? It can perhaps be seen as a rare result of Lop-sided K - a personality type which combines impairment in social domains (such as Agreeableness/ Empathizing, Conscientiousness, Social Conformity) with an autonomous/ selfish obsession with some other thing.

(Note: High Psychoticism is only rarely found in K-selected populations - there is probably an inverse correlation between the two variables - but it is that rare and strange combination of high K and moderately-high P which is required for creative genius.)

So the personality of a genius is defined, here, by default - by a strategically slow LH, but not of the type which tends to lead to social and sexual success: instead, one in which long-term interest, enjoyment and effort are channelled into... something else.

Something else could be any of the possible domains of genius: mathematics, science, literature, invention, art and sculpture, economics, music... So when there are a lot of geniuses in a population, they are of various and multiple types.

(Although not all types, nor all types at equal frequency - since some populations start with an innately higher level of some talents, and lower levels of others - populations differ).


So, why Europe? Why was it Europe, and nowhere else, that made the industrial revolution?

First, there had to be something - or some things - in Europe which selected-for what it is that geniuses provide: selected-for the products of genius... especially things like inventions. 

To focus on inventions - geniuses do not need to be encouraged: genius does what genius does, and unless actively prevented genius will produce... But that is only half of what is needed: the society must notice what is produced, and value it, and exploit it.


So, if a society has geniuses, then the geniuses will be producing inventions. But only some societies will use these inventions.

IF a society does use inventions, and as a result the society expands (if the population grows from which the geniuses have arisen) then this would indirectly tend to sustain the production of geniuses.

How might this happen? Perhaps by allowing/ encouraging mutations to occur specifically in some of the genes which sustain social intelligence, sexual selection and that kind of thing - thereby channelling K into specific functional channels, to create a variety of Lop-sided geniuses who are independent of social pressures and motivated to focus on their special ability; rather than a population of all-rounders who conform to societal norms.


Wednesday, 18 June 2014

Learning to Parrot - modern intelligence as a "Chinese Room" thought experiment


Suppose that I'm locked in a room and given a large batch of Chinese writing...[but] to me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols...from the point of view of somebody outside the room in which I am locked -- my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese.

John Searle, Behavioural and Brain Sciences, 1980


The nature of modern technology and educational evaluations is such that people typically understand much, much less than they appear - superficially - to understand.

A modern person is in a position much like that described in Searle's Chinese Room thought experiment outlined above.

Whether in school, college, work, the Mass Media or in almost any kind of discourse - a modern person is able to interact on subjects far beyond his comprehension by algorithmically implementing a predetermined set of rules - recognizing inputs from a chart (whether external or internalized), then matching and selecting 'appropriate' predetermined responses, then ordering and setting them out as a kind of mosaic of 'points'.

This activity is, more or less, automatic - and involves no necessary comprehension of the symbolic inputs or outputs - the whole thing is a matter of cycles of recognition, matching and arranging; back and forth between people or groups.


So, a project is assigned on a certain subject. This subject is looked up on the internet. Passages of text, illustrations, graphs are copied, modified, pasted and arranged stylishly in line with explicit guidelines. The work is returned and marked according to a template referencing the guidelines. Several of these projects are accumulated and an educational qualification is awarded. The student becomes a manager, and the same procedure is followed. A task is assigned, information is gathered and arranged and presented - and evaluated, and perhaps implemented - perhaps as bullet points; and if so these implementations will follow the same process: each bullet point leading to an analogous process of recognition, matching and arranging. Even the question "But does it work?" is 'answered' by the same process of gathering and selecting pre-approved forms of data (sales numbers, surveys, focus groups...), matching data to the outputs being evaluated, and arranging this into patterns.


In modern 'abstract' discourse, there is never any point at which any actual person evaluates the exchanges to determine whether real understanding is present or absent - because the formal evaluation procedures (whether in school, college, work, politics, government or punditry) are themselves typically conducted on exactly the same basis as that which is being evaluated.

A person who really knows the field may know that there is zero understanding - but from the perspective of formal evaluation procedures, this individual evaluation is merely opinion, rumour, hearsay and anecdote.

What really matters in modern bureaucratic organizations is the formal procedures - recognition, matching and mosaic-building; and these do not require understanding on the part of any of the participants.


So what is really going on behind the mechanical pretence of understanding?

Social interactions; the usual human stuff of gossip, or status competitions, or money-making, or attempted exploitation, or altruistic assistance... or whatever.

So the relevant thought experiment might be somewhat different from the impersonal and contact-less Chinese Room thought experiment - perhaps a better thought experiment might be interacting Parrots.


Imagine a group of parrots which have been taught a set number of English language phrases, and taught when to use these phrases in response to particular other phrases or the presence of key words; and taught rules about how to combine these phrases. These are then evaluated for their linguistic ability by other parrots who are checking whether the stimulus phrases match the proper response phrases according to the rules; and whether the phrases are being uttered in the proper combinations, according to the rules.


So, for these parrots, learning the English language, understanding the English language, is defined as following the proper rules in recognizing, emitting and combining phrases of English.

An intelligent parrot is defined as one that knows a lot of these rules and always follows them.


Throughout, none of the parrots have a clue what these phrases mean (if anything), nor are they in the slightest degree interested; so far as they themselves are concerned, what is really going on is showing-off or deferring, flirting or repulsing, threatening or submitting, and trying to get more food.


And this is a picture of modern 'intellectual' life - in science, medicine, the arts, politics, government, the mass media... the public arena in general.


Tuesday, 17 June 2014

So, you think you are in favour of eugenics? Do you know the implications?


Current information on the rate of mutation and the fraction of sites in the genome that are subject to selection suggests that each human has received, on average, at least two new harmful mutations from its parents. These mutations were subsequently removed by natural selection through reduced survival or fertility. It has been argued that the mutation load, the proportional reduction in population mean fitness relative to the fitness of an idealized mutation-free individual, allows a theoretical prediction of the proportion of individuals in the population that fail to reproduce as a consequence of these harmful mutations. Application of this theory to humans implies that at least 88% of individuals should fail to reproduce and that each female would need to have more than 16 offspring to maintain population size. This prediction is clearly at odds with the low reproductive excess of human populations. Here, we derive expressions for the fraction of individuals that fail to reproduce as a consequence of recurrent deleterious mutation (ϕ) for a model in which selection occurs via differences in relative fitness, such as would occur through competition between individuals. We show that ϕ is much smaller than the value predicted by comparing fitness to that of a mutation-free genotype. Under the relative fitness model, we show that ϕ depends jointly on U and the selective effects of new deleterious mutations and that a species could tolerate 10s or even 100s of new deleterious mutations per genome each generation.

  • Yann Lesecque, Peter D. Keightley, Adam Eyre-Walker. A Resolution of the Mutation Load Paradox in Humans. Genetics 2012; 191: 1321-1330.


I am not suggesting that the above paper is the last word - far from it. Its conclusions require modification in light of some important features the authors have neglected.

However, the basic point is that - according to a well-established genetic calculation - it would be expected that 88% of humans would fail to reproduce. The authors regard this as a long-standing unsolved paradox, and try to suggest an answer. But it may not be a paradox - it may simply be what happened in human populations, in most times and places through history (in equilibrium, on average), up to about 1800.
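The 'well-established genetic calculation' is the classical mutational load result (due to Haldane and Muller): at equilibrium, with U new deleterious mutations per genome per generation, the proportion of the population lost to selection is 1 - e^-U. A minimal sketch, taking U = 2.1 as an assumed value (roughly the "at least two" new harmful mutations per generation quoted in the abstract):

```python
import math

U = 2.1  # assumed: new deleterious mutations per genome per generation

# Haldane/Muller mutational load: proportion of individuals that must
# fail to reproduce at equilibrium, relative to a mutation-free genotype.
load = 1 - math.exp(-U)

# Offspring per female needed just to hold the population steady,
# if only the mutation-free fraction reproduces.
offspring_needed = 2 / (1 - load)

print(f"load: {load:.2f}")                              # ~0.88
print(f"offspring per female: {offspring_needed:.1f}")  # ~16.3
```

This reproduces the abstract's figures: about 88% failing to reproduce, and more than 16 offspring per female required to maintain population size.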


Even if this number is too big - even if it is much too big - the point is that in order to prevent the accumulation of damaging mutations generation upon generation, in order to prevent the population being overwhelmed and destroyed by genetic damage, a lot of humans would need to fail to reproduce...

Which, given that in pre-contraception and pre-abortion eras a lot of humans were born (i.e. fertility was high), means there must have been *very* high child mortality rates.

To put this in terms of eugenics: a large majority of people would not be allowed to reproduce at all, or else a large majority of children would have to die (or be killed), merely to stop dysgenics from mutation accumulation - this would have to happen just for things to stay the same.

To actually improve the functional adaptedness of the population - in other words, to practise eu-genics (by differentially breeding from the better-adapted) - would have to come on top of this.


To put it simplistically - to perform actual eu-genics as a matter of state policy would require something like the following:

1. Slaughter c. 88% of children or sterilize c. 88% of adults, to stay the same - and then...

2. Of the remaining c. 12%, breed only from the best-adapted minority - to improve the population.

Knowing this, are you still in favour of eugenics?


Monday, 16 June 2014

Could the Flynn effect be an invalid artefact? Yes - if IQ tests are no better than any other type of exam at tracking long-term changes in cognitive ability


Supposing we just accept that IQ tests are no better at measuring long-term change in abilities than any other type of examination?

Then it would not be surprising that the 'Flynn effect' - of rising raw IQ test scores over the twentieth century - seems to have no real-world validity; and is contradicted by slowing simple reaction times over the same timescale.


But why should we suppose, why should we assume (without proof) in the first place, that the raw scores of IQ tests are any better at tracking longitudinal changes of general intelligence than are the raw scores of examinations of (for instance) Latin vocabulary, arithmetic, or historical knowledge?

Everybody knows that academic exams in Latin, Maths, History or any other substantive field will depend on a multitude of factors - what is taught, how big the curriculum is, how it is taught, how the teaching relates to the exam, how much practice of exams and of what type, the conditions of the exam (including possibilities for cheating), how the exam is marked (including possibilities of cheating), and the proportion and nature of the population or sample to whom the exam is administered.

In cross-sectional use, this type of exam is good at predicting relative future performance on the basis of rank order in the results (not on the basis of absolute percentage scores) when applied to same-age groups which have been taught a common curriculum, etc. - and in this respect academic exams resemble IQ tests (IQ tests being, of course, marked and interpreted as age-specific, rank-order exams).

All of which means the raw score of academic exams - the percentage correct - means nothing (or not necessarily anything) when looked at longitudinally. Different percentage scores among different groups at different times are just what we expect from academic exams.


Cross-sectionally, performances in different academic exams correlate with each other; and with 'g' as calculated from IQ tests, or with sub-tests of IQ tests.

But just because differential performance in an IQ test (a specific test, in a specific group, at a specific time) is a valid predictor, it does not follow that IQ testing over time is a valid measure of change in general intelligence.

The two things are utterly different.

Cross-sectional use of IQ testing measures relative differences now to predict relative differences in future; but longitudinal use of IQ data uses relative differences at various time-points to try to measure objective change over time: the two are incommensurable.


So, what advantage do IQ tests have over academic exams? Mainly, that good IQ tests are less dependent on prior educational experience (also - which is not exactly the same thing - their components are 'g-loaded').

Historically, IQ tests were mainly used to pick out intelligent children from poor and deprived backgrounds - whose social and educational experience had led to them under-performing on, say, Latin, arithmetic and History exams - because they had never been taught these subjects, or because their teaching was insufficient or inadequate in some way.

It was found that a high rank-order score in IQ testing was usefully predictive of high rank-order performance in future educational exams (assuming that the requisite educational inputs were sufficient: high IQ does not lead to high scores in Latin vocabulary unless the child has actually studied Latin).

But IQ tests were used cross-sectionally - to put test-takers in rank order - they were not developed to measure longitudinal change within or between age cohorts. Indeed, since IQ tests are rank-order tests, they have no reference point to anchor them: 100 is the average IQ (for England, as the reference population), but that number 100 is not anchored or referenced to anything else - it is merely an average, and '100' does not mean anything at all as an absolute measure of intelligence; just as an average score of 50% in a Latin vocabulary exam is not an absolute measure of Latin ability - the score of 50 does not mean anything at all in terms of an absolute measure of Latin ability.


What applies to the academic exam or IQ test as a whole also applies to each of the individual items of the test. The ability to answer any specific individual test item correctly, or wrongly, depends on those things I mentioned before: "what is taught, how big the curriculum is, how it is taught, how the teaching relates to the exam, how much practice of exams and of what type, the conditions of the exam" etc. etc...

My point is that we have been too ready to assume that IQ testing (in particular, raw average scores and specific item scores) is immune to the limitations, variations and problems of all other types of academic exam - problems which render them more-or-less meaningless when raw average scores or specific item scores are used, decontextualized, in the attempt to track long-term changes in cognitive ability.


It is entirely conjectural to suppose, to assume, that IQ tests can function in a way that other cognitive ability tests (such as academic exams) cannot. And once this is understood, it can be seen that - far from being a mystery - there is nothing to explain about the Flynn effect.

If longitudinal raw average or test-item IQ scores have zero expected predictive validity as a measure of intelligence change, then there is no mystery to solve regarding why they might change, at such and such a rate, or stop changing, or anything else!

The Flynn effect might show IQ raw scores or specific item responses going up, down, or round in circles - and it would not necessarily mean anything at all!


Friday, 13 June 2014

Possible Dysgenic Trends in Simple Visual Reaction Time Performance in the Scottish Twenty-07 Cohort


Michael A. Woodley, Guy Madison, Bruce G. Charlton. Possible Dysgenic Trends in Simple Visual Reaction Time Performance in the Scottish Twenty-07 Cohort: A Reanalysis of Deary & Der (2005). Mankind Quarterly. In press.

In a 2005 publication, Deary and Der presented data on both longitudinal and cross-sectional aging effects for a variety of reaction time measures among a large sample of the Scottish population. These data are reanalyzed in order to look for secular trends in mean simple reaction time performance. By extrapolating longitudinal aging effects from within each cohort across the entire age span via curve fitting, it is possible to predict the reaction time performance at the start age of the next oldest cohort. The difference between the observed performance and the predicted one tells us whether older cohorts are slower than younger ones when age matched, or vice versa. Our analyses indicate a significant decline of 36 ms over a 40-year period amongst the female cohort. No trends of any sort were detected amongst the male cohort, possibly due to the well-known male neuro-maturation lag, which will be especially pronounced in the younger cohorts. These findings are tentatively supportive of the existence of secular declines in simple reaction time performance, perhaps consistent with a dysgenic effect. On the basis of validity generalization involving the female reaction time decline, the g equivalent decline was estimated at -7.2 IQ points, or -1.8 points per decade.



This is the full paper publication of some results previously reported here: