
Climate scientists behaving badly? Part 3: the conduct of enquiry.

Part 1

Part 2  

 

Now we move on to virtue in the conduct of enquiry.

honest dealing in the conduct of enquiry

There is some evidence giving cause for concern:

·        There is evidence of dogmatism: ‘The fact is that we can't account for the lack of warming at the moment and it is a travesty that we can't. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.’[1] Now it is indeed possible that the data is wrong, but the lack of a continued warming trend (since 1998?) runs contrary to the predictions of the models on which the IPCC's projections are based, and a common variety of dogmatism is to deny evidence that does not fit one's preconceived beliefs.

·        There is evidence of arbitrary data manipulation: ‘Another serious issue to be considered relates to the fact that the PC1 time series in the Mann et al. analysis was adjusted to reduce the positive slope in the last 150 years … At this point, it is fair to say that this adjustment was arbitrary.’[2]

·        In the computer code there is evidence of data manipulation conducted in order to produce a preconceived result.

·        Remarks from a programmer writing code indicate serious problems with the collection and recording of original data: ‘another problem that's based on the hopeless state of our databases’.[3]

·        For some time there has been controversy over the selective use of data. A recent example comes from a Russian institute commenting on the CRU's use of Russian data (report here): the continuous data records from Russia, taken in their entirety, show warming of 1.4 °C since 1860, whereas CRU used only 25% of that data to show a rise of 2.06 °C since 1860; CRU used stations with incomplete and interrupted records where such data shows warming, while omitting stations with complete and continuous records which do not.

·        More broadly, local scientists in Australia and New Zealand have found that rising official temperature records have been built on broadly constant original temperature data, through methods of data manipulation originating in or influenced by CRU practices. See this discussion of the problems in the raw data, the controversy over claims of inhomogeneity in that data, and the adjustments made to produce estimates of historical temperatures from weather stations in Northern Australia: http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

None of these examples demonstrates straightforward dishonesty. For example, all sorts of junk gets left in computer code. People put in pieces they call ‘fudge factors’ because they think they know the broad shape of some other correction process which has not yet been coded, so in early drafts a ‘fudge factor’ procedure stands as a proxy for the real adjusting factor. They are, however, evidence that more subtle vices may yet be in play.
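For instance, a minimal sketch in Python (the names and the offset are invented for illustration, not taken from the CRU code) of the kind of placeholder an early draft might contain, intended to be replaced later by a properly derived correction:

```python
import numpy as np

# Hypothetical early-draft routine: the real homogeneity correction has
# not been written yet, so a hard-coded "fudge factor" stands in for it.
FUDGE_FACTOR = 0.15  # placeholder offset in degrees C; to be replaced

def adjust_series(raw_temps):
    """Apply a provisional correction to a raw station series."""
    raw = np.asarray(raw_temps, dtype=float)
    # TODO: replace this flat offset with the real inhomogeneity
    # adjustment once the station metadata has been reconciled.
    return raw + FUDGE_FACTOR
```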

We are all capable of self-delusion, and we know that conviction can lead any of us to trim the evidence to support what we think we know to be true. Consider what can happen in even the hardest science:

 

‘it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that.’ (Feynman, ‘Cargo Cult Science’: http://www.lhup.edu/~DSIMANEK/cargocul.htm)
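To see how that kind of asymmetric scrutiny biases a result, here is a hypothetical simulation in Python (all the numbers are invented for illustration): honest measurements scatter around the true value, but results far from the earlier published value are scrutinised until a reason is found to reject them, while results near it are accepted without a second look. The surviving results drift toward the earlier value.

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 1.00    # the quantity actually being measured (arbitrary units)
prior_value = 0.95   # an earlier, slightly-too-low published value

# Honest measurements, scattered symmetrically around the true value.
measurements = true_value + rng.normal(0.0, 0.05, 10_000)

# Asymmetric scrutiny in the spirit of Feynman's remark: results far from
# the prior are examined until "a reason is found" to discard them;
# results near the prior are accepted as they stand.
accepted = np.abs(measurements - prior_value) < 0.05

print(f"true value:               {true_value:.3f}")
print(f"mean of all results:      {measurements.mean():.3f}")
print(f"mean of accepted results: {measurements[accepted].mean():.3f}")
```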

 

What is evident from the emails and computer code (indeed, it has been evident throughout) is that the raw data is messy, unreliable, subject to distorting factors, derived from many sources and proxies, and contaminated with artifacts from changes of instrumentation, location and methodology. Estimating the historical record from such data inevitably requires a significant amount of data manipulation. Tuning that manipulation involves making a large number of judgements about how to weight various factors, and those weights are, to some degree and unavoidably, not fully constrained by objective considerations. That is to say, although a weighting of 1 or 100 might be ruled out, anything between 10 and 20 might be defensible, and which value to use is a matter of professional taste.

I am not impugning such judgements: as Polanyi pointed out long ago, many disciplines require professional judgements heavily dependent on the application of tacit knowledge. But when a large number of such judgements is involved, the outcome is likely to be very sensitive to any consistent bias in the choice of weights within the unconstrained ranges. Furthermore, the existence of such ranges makes it possible to get away with the deliberate manufacture of tendentious results. Given the manifest failings in objectivity, our general susceptibility to self-delusion rooted in conviction, and the room for bias to influence the construction of estimates of historical temperatures, our confidence in those estimates must be weakened.
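To illustrate that sensitivity, here is a rough Monte Carlo sketch in Python (the proxies, weights and ranges are all hypothetical and are not meant to reconstruct any actual methodology): if each of fifty weights may defensibly sit anywhere between 10 and 20, a consistent lean toward the end of the range that favours the warmer-looking series shifts the combined estimate, even though every individual choice remains defensible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 proxy series, each giving a noisy estimate of an
# underlying trend, combined by a weighted average. Following the text,
# any weight between 10 and 20 is taken to be individually defensible.
n_proxies = 50
true_trend = 0.5                                       # e.g. deg C per century
trends = true_trend + rng.normal(0.0, 0.3, n_proxies)  # noisy per-proxy trends

def neutral_estimate():
    """Weights chosen arbitrarily within [10, 20], with no systematic lean."""
    weights = rng.uniform(10.0, 20.0, n_proxies)
    return np.average(trends, weights=weights)

def leaning_estimate():
    """Every weight still lies in [10, 20], but each choice leans toward
    whichever end of the range favours the warmer-looking proxies."""
    weights = np.where(trends > trends.mean(), 20.0, 10.0)
    return np.average(trends, weights=weights)

print(f"true underlying trend: {true_trend:.3f}")
print(f"neutral weighting:     {np.mean([neutral_estimate() for _ in range(2000)]):.3f}")
print(f"consistent lean:       {leaning_estimate():.3f}")
```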

intellectual competence

Remarks from a programmer re-writing code for processing raw data indicate serious problems with the competence of earlier data manipulation.[4] The clear implication is that, at various times, various historical temperature records (not necessarily made public) produced by CRU from its raw data have been processed by code containing multiple significant errors. Worse yet, we have now been told that the raw data is lost. Consequently, we know that we are relying on historical temperature records some of which contain garbage, and we cannot tell which. Yes, science is messy like this, but the problem here is that these facts sully the confidence we can have in the CRU historical temperature records. With this kind of mess down at the bottom, we have lost the means of estimating confidence by standard techniques, such as comparison with the output of fresh processing using code written independently of the original.
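The sort of check being pointed to can be sketched as follows (a schematic in Python with hypothetical function names; it has nothing to do with the actual CRU code): process the same raw archive with two independently written pipelines and treat their disagreement as a rough measure of how much the result owes to processing choices. Without the raw data, this comparison is no longer available.

```python
import numpy as np

def compare_independent_runs(raw_archive, process_a, process_b, tol=0.05):
    """Cross-check two independently written processing pipelines.

    `process_a` and `process_b` are hypothetical functions, each turning
    the same raw station archive into an annual temperature series of the
    same length. Large disagreement flags years whose values are dominated
    by processing choices rather than by the underlying data.
    """
    series_a = np.asarray(process_a(raw_archive), dtype=float)
    series_b = np.asarray(process_b(raw_archive), dtype=float)
    diff = np.abs(series_a - series_b)
    return {
        "max_disagreement": float(diff.max()),
        "years_outside_tolerance": int(np.sum(diff > tol)),
    }
```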

imagination and originality

The emails contain evidence of groupthink: the creation of in-groups and out-groups; in-group regimentation; language coding and regimentation to create signals of in-group membership and signals betraying weakening loyalty and heresy. These kinds of social regimentation suppress originality by restricting the countenancing of new or surprising information and forbidding the use of imagination outside permitted channels.


[1] http://www.eastangliaemails.com/emails.php?eid=1048&filename=1255352257.txt

[2] http://www.eastangliaemails.com/emails.php?eid=759&filename=1164120712.txt

[3] http://www.di2.nu/foia/HARRY_READ_ME.txt

[4] http://ecotretas.blogspot.com/2009/11/bugs-do-climategate.html
