Open Letter to Tony Burke MP

(Readers may wish to refer to previous correspondence below.)

22 February 2011

Dear Mr Burke

Thank you for the letter written by Dr Greg Ayers on your behalf, dated 10 February 2011 (Ref: Exec 11.0009), in reply to my letter to you dated 26 October 2010.

Firstly, I am disappointed that there was no acknowledgement of, let alone apology for, either the delay in replying to my letter, or the lack of any response from the Bureau of Meteorology apart from the automated Webclim email of 09 September 2010.

The letter written on your behalf is of very little value.  It shows little attempt to address the issues I have raised, and is composed of redundant information (“padding”), misinformation, half-truths, evasions, contradictions and red herrings.  There is one genuinely satisfactory response, with one pending, in the whole four pages.

Given that this is a Ministerial reply, one would have thought more care would have been taken to check the actual information I have already assembled and the specific issues I have raised, not to mention more care with basic proof-reading.

Here is my response to the letter, starting with the second paragraph:  Monitoring Australia’s climate is a high priority for the Bureau of Meteorology…

There is nothing new here, and Dr Ayers could have quoted most of this paragraph from my own posts.

Paragraph 3: When assembling these data, we take great care…

Once again, nothing new: I am quite familiar with the BOM Climate website, which I agree is quite rich with useful information.

Paragraph 4: We welcome examination and testing of our climate data and the analysis methods we use….

This is basically a paraphrase of the automated Webclim email I received in September, and is nonsense.  Certainly a scientific paper, if and when accepted by a peer-reviewed journal, may eventually lead to a reply.  However, there are several examples of explanations of the Bureau’s procedures on the website which appear not to be published in peer-reviewed journals.  From these explanations, I have raised further queries.  A simple explanation could easily have cleared up my queries many months ago.  Moreover, as Dr Ayers later admits, the Bureau does make “operational adjustments” which have not, as yet, had any corresponding discussion in a peer-reviewed journal.

I now offer my comments on the responses to my specific questions.  My questions are in bold, quotes from Dr Ayers are in italics.

1. What explanation is there for the large discrepancy between the 100-year trend of the homogenised data from the 100 High Quality sites, and the trend of the raw data from these and nearby sites?

Dr Ayers’ response misses the point: the difference between raw and homogenised data at the 100 High Quality sites.  Comparing the High Quality data with those from a completely different dataset may give an interesting result but is not relevant. Quoting Jones et al. is a distraction – it is a red herring.  Figure 12 in that paper refers only to temperatures from 1950 to the present, whereas the bulk of the adjustments appear to have been applied prior to this.

2. Why, if the adjustments “tend to be equally positive and negative across the network”, have the great majority of sites had adjustments that have resulted in an increased warming trend?

No useful answer given.  I am well aware of the documented homogenisation process from the cited papers.  Dr Ayers states:  “… I assure you that an evaluation of potential bias in warming trends, either positive or negative, is firmly within the compass of that work.  We will publish our findings on this matter later in the year…” This is indeed comforting, but it has only taken 15 years from Torok and Nicholls’ 1996 paper.  Why the rush?  Forgive the sarcasm, but surely any potential bias should have been evaluated before publication of the High Quality data on the Bureau’s website.

3. What explanation is there for the inclusion in the High Quality series of 15 sites previously excluded because of urban influence?

It is not clear to which 15 sites you are referring.  I cannot believe that someone with the expertise of Dr Ayers, with all the resources of the Bureau at his disposal, cannot locate this information by comparing the list of Urban stations in Torok and Nicholls (1996) with the list of 100 High Quality non-urban sites.  Alternatively, he could have checked this site, specifically Part 8: The Big Picture, where I list them.

The decision to classify a station as urban or non-urban is made on the basis of evidence for an urban warming signal and the immediate environment of the station. In Part 9: An Urban Myth, I have shown how at the Urban sites the raw data and the adjusted data both show much less warming than at the non-urban sites, and despite the “evidence for an urban warming signal”, the majority of urban sites have had adjustments that increase warming.

4. Why, in a supposedly “High Quality” record, has so much low quality data been included: stations with short records (e.g. Woomera, Giles), large gaps infilled with estimates (e.g. Wilcannia, Cape Borda) or with data from sites many kilometres away (e.g. Newman), and records constructed by combining data from stations with no overlap at all (e.g. Port Hedland, Bourke, Cashmore)?

Dr Ayers’ answer is evasive: “deemed to be of high quality” indeed!  They may be “the longest and highest quality records in a particular region”, but that isn’t saying much in absolute terms.  In short, a null response.

5. Have there been any other adjustments made to the Australian temperature data apart from those documented at the BOM website?  If so, kindly supply details of all reviews and adjustments to the Australian raw temperature data including, but not limited to, Torok and Nicholls, and Della-Marta et al.

The three papers described above provide details about how the high quality station data have been prepared, with the thesis by Torok providing detailed accounts of adjustments at stations… This is misinformation.  The three papers provide overviews, not details.  Torok’s thesis does not give detailed accounts of adjustments at stations.  He gives a detailed account of adjustments at Mildura and a couple of other samples, out of 224.  He shows graphs of stations’ raw and adjusted data, and lists stations with dates and amounts of adjustments, but there are no explanations for the adjustments.

An updated summary of operational adjustments is under preparation…  So the real answer to my question is yes, other adjustments are continually being made, without, as yet, any journal article presenting arguments for them.  I look forward to the coming journal article, written after the event.

6. Can you please provide details (including dates, personnel, methods, results) of the quality control checks on the homogenisation adjustments.

The Bureau of Meteorology relies on careful metadata checks and a number of robust statistical tests and having a large number of comparison sites available, very good metadata and a year or more of parallel observations for the same station. In theory, this will prevent any lack of quality due to “subjective decision(s) about whether and how much to adjust” as outlined by Torok and Nicholls.  But this is internal monitoring as part of the process, not formal quality control.  Moreover, I remind Dr Ayers of the following:

“Generally, comparison observations for longer than five years were found to provide excellent comparison statistics between the old and new sites… Comparisons longer than two years, and sometimes between one and two years, were also found to be useful if complete and of good quality… Poor quality comparisons lasting less than two years were generally found to be of limited use.” (Della-Marta et al, 2004).

The five-year requirement has apparently now been reduced to one year; perhaps there was a journal article discussing this?  And what then of the stations with zero years of parallel observations?

7. Can you please provide details of the peer reviews of the two papers referenced by Dr Jones, namely

Della-Marta, P., Collins, D., Braganza, K. “Updating Australia’s high-quality annual temperature dataset” Australian Meteorological Magazine Vol. 53, no. 2, June 2004


Torok, S.J. and Nicholls, N. 1996. A historical annual temperature dataset for Australia. Australian Meteorological Magazine, 45, 251-260.

Scientific peer reviews are undertaken in confidence,… Thank you for the first straightforward answer to my questions.  I am satisfied with this.

8. Please provide complete details, including station metadata, of the reasons for the large adjustments to the temperature records of the following sites:


Deniliquin Post office


Wagga Wagga AMO


and of the following Urban sites (not used in climate analyses but adjusted):

Wangaratta Aero

Echuca Aerodrome

Benalla Shadford St

Dubbo Airport AWS.

I have asked our Climate Data Services Section to provide you with the specific data you have requested in your letter. Thank you in anticipation, as I hope this will answer some questions.  I trust that it will include some real explanations for the “subjective decisions” used to make adjustments at these sites.

Dr Ayers’ final paragraph:  No, this hasn’t addressed my concerns.  And I don’t believe I will be submitting a scientific paper anytime soon, as I don’t know any scientists or statisticians with the time or interest to assist me.  However, I will continue to debate scientific “facts”.  This issue will not go away.

In conclusion, Mr Burke, this hastily prepared, poorly researched, and carelessly written Ministerial letter, coming as it does more than three months after my letter to you, is of little worth.  It is nonsense to anyone with more than a superficial interest in meteorology.  It reflects poorly on your credibility as Minister responsible for the Bureau of Meteorology.

Yours sincerely

Ken Stewart

5 Responses to “Open Letter to Tony Burke MP”

  1. TRC Curtin Says:

    Well done, Ken, an excellent letter.

    But it is worth putting together as a short paper for the BoM’s own house journal or similar, say Quadrant.

  2. val majkus Says:

    why are scientific peer reviews done in confidence? I don’t understand that

  3. Geoff Sherrington Says:

    val majkus,

    A main reason why more of us do not publish is uncertainty of data. Regarding the BoM specifically, it is hard to be sure that data expressed as “raw” are indeed that. A paper that makes a wrong initial assumption about rawness of data is susceptible to being shot down without useful further comment.

    Then there is cooperation. In 2009 I invited David Jones to write a joint paper and he declined, saying he had too many papers to write.

    Then there is the withholding of data. There was a conference of BoM people and some others in Hobart late in 2010. David Jones suggested that I read the proceedings of a part, but I have been unable to obtain any proceedings at all, despite formal requests. It seems to me that a public body holding a conference ought to make the proceedings available to the citizen, unless restricted by security or commercial-in-confidence arrangements.

    Sure, I could write and submit papers on several aspects of Australian climate, but the reviewers are likely to have easy access to material that is hard for me to find (plus a large computer) and I am vulnerable to simple errors that destroy credibility.

    Blokes like me have been sent to Coventry.

    Reasons enough?

  4. val majkus Says:

    Geoff I understand; I’ve put a link on Warwick Hughes blog to Ken’s correspondence;
    linking to this post of Ken’s;
    there’s been a recent post on WUWT
    AND there’s a comment by Scott there saying that the BOM’s raw data is adjusted
    do you know whether or not that is so?

Comments are closed.
