Homogenisation: A Test for Validity

This follows on from my last post where I showed a quick comparison of Rutherglen raw data and adjusted data, from 1951 to 1980, with the 17 stations listed by the Bureau as the ones they used for comparison when detecting discontinuities. 

Here is an alternative and relatively painless way to check the validity of the Bureau’s homogenisation methods at Rutherglen, based on their own discontinuity checks.  According to the “Manual” (CAWCR Technical Report No. 49), the Bureau performed pair-wise comparisons with each of the 17 neighbours to detect discontinuities.  An abbreviated version of this can be used for before-and-after comparisons.  For each of the 17 stations, I calculated annual anomalies from the 1961-1990 means for both Rutherglen and the comparison site, then subtracted the comparison site’s anomalies from Rutherglen’s.  I did the same with Rutherglen’s adjusted Acorn data.
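The anomaly-and-difference calculation described above can be sketched in a few lines of Python.  The data below are synthetic and the function names are my own, purely for illustration; they are not the Bureau’s code or the real Rutherglen observations:

```python
import numpy as np

def annual_anomalies(temps, years, base=(1961, 1990)):
    """Annual anomalies relative to the mean over the base period."""
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    return temps - np.nanmean(temps[in_base])

def pairwise_difference(target, neighbour, years):
    """Target anomalies minus neighbour anomalies, year by year."""
    return annual_anomalies(target, years) - annual_anomalies(neighbour, years)

# Synthetic example: two stations with identical trends but different means.
years = np.arange(1913, 1991)
rutherglen = 14.0 + 0.01 * (years - 1913)   # hypothetical minima series
neighbour = 14.5 + 0.01 * (years - 1913)    # same trend, offset mean
diffs = pairwise_difference(rutherglen, neighbour, years)

# The constant offset cancels in the anomalies, so the differences are ~0.
print(np.allclose(diffs, 0.0))  # True
```

Note that taking anomalies first removes each site’s mean, so only differences in year-to-year behaviour survive into the difference series.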

A discontinuity is indicated by a sudden jump or drop in the output.  The ideal, if all sites were measuring accurately and there were no discontinuities, would be a steady line at zero: a zero value indicates temperatures are rising or falling at the same rate as the neighbours.  In practice no two sites will ever have identical responses to weather and climate events; however, the timing and sign of those responses should be the same.  Pairwise differencing will therefore indicate whether, and when, discontinuities should be investigated for possible adjustment.
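To illustrate what “a sudden jump or drop in the output” looks like numerically, here is a minimal step-detection sketch over a synthetic difference series.  The 1974 step, the window size, and the series itself are invented for the example; this is not the Bureau’s detection algorithm:

```python
import numpy as np

def step_sizes(years, diffs, window=5):
    """For each candidate year, mean of the window after minus mean of the window before."""
    out = {}
    for i in range(window, len(years) - window):
        before = np.mean(diffs[i - window:i])
        after = np.mean(diffs[i:i + window])
        out[int(years[i])] = after - before
    return out

# Synthetic difference series with a 1-degree step down starting in 1974.
years = np.arange(1950, 2000)
diffs = np.where(years < 1974, 0.0, -1.0)

steps = step_sizes(years, diffs)
biggest = max(steps, key=lambda y: abs(steps[y]))
print(biggest, steps[biggest])  # largest step lands at 1974, size -1.0
```

A real detection scheme would also test whether the step is statistically significant, but the principle is the same: a genuine discontinuity shows up as a large before/after shift in the difference series.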

Similarly, pairwise differencing is a valid test of the success of the homogenisation process.  Successful homogenisation will result in differences closer to zero, with zero trend in the differences over time.  The Bureau has told the media that adjustments are justified by discontinuities in 1966 and 1974.  Let’s see.
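The “zero trend” criterion can be checked with an ordinary least-squares slope on the difference series.  A minimal sketch, using made-up before/after series rather than the real Rutherglen differences:

```python
import numpy as np

# Hypothetical before/after difference series (degrees C), for illustration only.
rng = np.random.default_rng(0)
years = np.arange(1913, 1991)
diffs_raw = rng.normal(0.0, 0.3, years.size)             # noisy but centred on zero
diffs_acorn = diffs_raw + 0.01 * (years - years.mean())  # adjustments impose a trend

def trend_per_decade(years, diffs):
    """Least-squares slope of the differences, in degrees C per decade."""
    slope = np.polyfit(years, diffs, 1)[0]
    return 10.0 * slope

print(f"raw:   {trend_per_decade(years, diffs_raw):+.3f} C/decade")
print(f"acorn: {trend_per_decade(years, diffs_acorn):+.3f} C/decade")
```

On this construction the “raw” trend is near zero while the “adjusted” trend is about 0.1 C/decade larger, which is exactly the kind of signature the test is designed to expose.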

Fig. 1:  Rutherglen Raw minus each of 17 neighbours


Note the discernible drop from 1974 to 1977.  There is a very pronounced downwards spike in 1967 (ALL differences are below zero), indicating Rutherglen data were definitely too low.  There is also a step up in the 1950s, and another spike upwards in 1920.  Rutherglen is also lower than most neighbours in the early 1930s.  Several difference lines are obviously much higher or lower than the others and need further investigation, but the great majority cluster together: their differences from Rutherglen are fairly consistent, within about +/- 1 degree Celsius.

Now let’s look at the differences AFTER homogenisation adjustments:

Fig. 2:  Rutherglen Acorn minus the neighbours: The Test


The contrast is obvious.  The 1920 and 1967 spikes remain.  Differences from the adjusted data are NOT closer to zero: most of the differences before 1958 are now between 0 and -2 degrees Celsius, and there is now a large and apparently artificial discontinuity in the late 1950s.  This would indicate that the Rutherglen Acorn data themselves need to be homogenised!

Compare the before and after average of the differences:

Fig. 3:  Average of the pairwise differences, Rutherglen Raw v Acorn

There is now a large positive trend in the differences when the trend should be close to zero.

There are only two possible explanations for this:

(A)  The Bureau used a different set of comparison stations.  If so, the Bureau released false and misleading information. 

(B)   As that surely can’t be true, and these 17 stations were indeed the ones used, this is direct and clear evidence that the Bureau’s Percentile Matching algorithm for making homogenisation adjustments did not produce correct, successful, or useful results, and further, that no meaningful quality assurance occurred.

If homogenising did not work for Rutherglen minima, it may not have worked at the other 111 stations. 

While I am sure to be accused of “cherry picking”, this analysis covers 100% of the sites for which the identities of the comparison stations have been released.  When the Bureau releases the lists of comparison stations for the other 111 sites, we can continue the process.

A complete audit of the whole network is urgently needed.


10 Responses to “Homogenisation: A Test for Validity”

  1. anthony Says:

    Great post Ken.

  2. Mikky Says:

    Cherry picking would be a false accusation here, it only takes one clear error to establish that BoM has a quality control problem, and all data should be “recalled” for checking, just like car manufacturers do when a potential fault is found.

  3. blackduck19 Says:

    You have terrific tenacity Ken. These convoluted adjustments are finally being shown in the broad light of day. Most would have given up when the “high quality” series was ditched in favour of ACORN. The BOM should not have provided even one set of “data”. The other 111 may be a long time coming.

  4. Jennifer Marohasy Says:

    So the Bureau are numerically challenged, they can’t add up?

  5. Glen Michel Says:

    My analysis is one of sinister deception; there is too much that is wrong, and not so due to human error.  It is CONTRIVED! …and it is disturbing to say so.

  6. siliggy Says:

    Perhaps the second stevenson screen record from Albury can shed even more light on all this. The acting state meteorologist Mr Hunt put it there at St Matthew’s Church in Kiewa St in 1906 so that as a second Albury Stevenson record it would (with the other Albury record) solve the UHI question.

  7. kenskingdom Says:

    I think it’s time to request the full list of comparison stations from the Bureau, but I won’t hold my breath in eager anticipation….

  8. Bureau Caught in Own Tangled Web of Homogenisation | Jennifer Marohasy Says:

    […] how and why the Bureau homogenised the temperature series at Rutherglen. After several days work he came to the conclusion that either the wrong list of 17 stations (against which the Bureau claimed it has made […]

  9. My Submission to the BOM Review Panel | kenskingdom Says:

    […] further information and full explanation see https://kenskingdom.wordpress.com/2014/09/08/homogenisation-a-test-for-validity/ […]

