Archive for the ‘temperature’ Category

My Submission to the BOM Review Panel

February 9, 2015

The Hon. Bob Baldwin MHR

Parliamentary Secretary to the Minister for the Environment

PO Box 6022

House of Representatives

Parliament House


Dear Mr Baldwin

Re: Recommendations for the Review Panel appointed to review official national temperature records

As a “citizen scientist” who has been researching Australia’s climate, and the ACORN-SAT record in particular, over the past several years, I am concerned about errors in the work of the Australian Bureau of Meteorology, in particular the warming bias introduced through homogenisation and an apparent general lack of quality control. I draw your panel’s attention to the issues listed under four categories in the following submission. More information, including supporting charts and tables, is provided in numbered attachments.

1. Adjustment Issues

1.1 Homogenisation distorts temperature records, causing warming bias at most locations, which indicates that the methods stated in the CAWCR Technical Reports (see CTR-049) are either not followed or do not work as designed. Homogenisation should leave candidate sites with trends in temperature anomalies more like their neighbours’. However, in many cases this does not occur, and homogenisation has resulted in wide disparities. This is obvious from a simple visual inspection of a plot of ACORN data at candidate sites against raw data of the listed neighbours (Attachment 1.1).

1.2 A better but still simple method of comparison involves differencing. Differencing (anomaly data of the candidate site minus that of the reference sites) should show improved results after homogenising, with differences closer to zero. Importantly, even if the differences fluctuate, there should be zero trend in the differences. Yet at a number of sites, homogenising has produced worse results (Attachments 1.2a, 1.2b, 1.2c).
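The differencing test just described can be sketched in a few lines of Python. The series below are invented for illustration only; real inputs would be annual anomaly series for the candidate site and the mean of its reference neighbours:

```python
import numpy as np

# Invented annual anomaly series (deg C) for a candidate site and the
# mean of its reference neighbours -- illustrative values only.
years = np.arange(1960, 1970)
candidate  = np.array([0.1, -0.2, 0.0, 0.3, 0.1, 0.4, 0.2, 0.5, 0.3, 0.6])
neighbours = np.array([0.0, -0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.4, 0.5])

# Difference series: candidate minus reference mean.
diff = candidate - neighbours

# Least-squares slope of the differences, in deg C per year.
# A well-homogenised record should have a slope close to zero.
slope = np.polyfit(years, diff, 1)[0]
print(f"trend in differences: {slope:.4f} C/year")
```

A slope near zero after homogenisation is the expected outcome; a persistent non-zero slope in the differences is exactly the kind of failure discussed above.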

1.3 In the most extreme examples, warming adjustments result in candidate-site trends greater than the homogenised trends at neighbouring ACORN sites (Attachment 1.3a). The converse applies where extreme cooling adjustments result in trends less than the homogenised trends of ACORN neighbours (Attachment 1.3b). This indicates over-correction, creating artificial trends warmer or cooler than the neighbours’.

1.4 Data are homogenised by reference to up to ten best-correlated neighbours. Some of these neighbours may be hundreds of kilometres away, with completely different climates. Deleting the two most distant neighbours greatly improves the data comparison between Mackay and its remaining neighbours (Attachments 1.4a, 1.4b).

1.5 The Bureau has belatedly tried to explain adjustments with the release of a 28-page PDF file of all adjustments, and has also provided Summaries of Adjustments for six sites as further explanation. Incredibly, these Summaries do not agree with the adjustments in the 28-page document (Attachment 1.5).

1.6 The Bureau claims that sites exhibiting Urban Heat Island (UHI) effect are “excluded from downstream products such as the calculation of national and regional temperature anomalies for the analysis of large scale climate change” (CTR-049, pp.71-73.) These sites include Townsville, Rockhampton, Laverton RAAF, Richmond NSW, Sydney, Melbourne, Adelaide, and Hobart.  Unfortunately they certainly are used as comparison sites when making adjustments.  Several Queensland sites, including Cairns, Charters Towers, Mackay, and Bundaberg, have Townsville and/or Rockhampton listed as neighbours used for making adjustments.  If a site’s temperatures are suspect due to UHI to the extent that they cannot be used for regional or national anomalies, it seems illogical that they can be suitable for comparison with neighbours.  This apparent contradiction needs explanation.

2. Impact on Trends

2.1 The Bureau of Meteorology has reportedly claimed “an extensive study has found homogeneity adjustments have little impact on national trends and changes in temperature extremes” (Weekend Australian, August 23-24, 2014). In support of this, the Bureau displays a plot on the adjustments tab of the ACORN-SAT web page purporting to show “temperature trends since 1910 from the unadjusted temperatures from more than 700 locations (AWAP), together with those that have been carefully curated, quality controlled and corrected for artificially induced biases at 112 locations (ACORN-SAT)”. However, the AWAP (Australian Water Availability Project) network is not “unadjusted”; according to CTR-050 p. 41, ‘the generation of stable climatologies implicit in the AWAP … (analysis) … goes part of the way towards removing the temporal inhomogeneities implicit in the raw data without the explicit application of temporal-inhomogeneity adjustments. … Hence it is reasonable to describe the AWAP … (analysis) as “partially homogenised” rather than unhomogenised.’ It is therefore misleading to describe the above-mentioned plot of ACORN vs AWAP as a comparison with “unadjusted temperatures”.

2.2 Moreover, the Bureau has made no attempt to compare ACORN data with minimally adjusted raw data (that is, data adjusted only to combine two incomplete records into one through examination of overlapping data). My comparison of annual ACORN data (1910-2012) with raw records (corrected only for overlap) at 83 sites for minima and 84 for maxima shows that the increase in trend of ACORN over raw is 66% and 13% respectively (Attachments 2.2a, 2.2b). (The remaining sites had no suitable overlap between discontinued and new stations.) Nearly two thirds of the sites analysed had their trends increased (warmed).
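The trend comparison underlying these percentages can be sketched as follows. The two series are made up purely to show the calculation (a raw trend of +0.5 C per century against an adjusted trend about 66% steeper); they are not the actual station data:

```python
import numpy as np

def trend_per_century(years, temps):
    """Least-squares linear trend in deg C per 100 years."""
    return np.polyfit(years, temps, 1)[0] * 100.0

# Invented annual series, 1910-2012: a "raw" record warming at
# +0.5 C/century and an "adjusted" record warming at +0.83 C/century.
years = np.arange(1910, 2013)
raw   = 0.005  * (years - 1910)
acorn = 0.0083 * (years - 1910)

# Percentage increase of the adjusted trend over the raw trend.
increase = (trend_per_century(years, acorn) / trend_per_century(years, raw) - 1) * 100
print(f"trend increase: {increase:.0f}%")
```

The same ratio applied to each station pair, then averaged across the network, gives the kind of figures quoted above.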

2.3 A few very remote sites, especially in northern and central Australia, have an enormous impact on the ACORN record. This is demonstrated by differencing the area-averaged means (the official national annual means) and the straight-averaged means of the 104 ACORN sites (Attachment 2.3). As 7-10% of the national climate signal is due to Alice Springs alone (International Review Panel Report, September 2011, p. 12), both the influence of remote sites and the area-averaging algorithm need to be investigated.
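The effect of area averaging can be illustrated with a toy example. The anomalies and weights below are invented; the Bureau's actual analysis uses a gridded averaging algorithm, but the principle that a heavily weighted remote site can pull the national mean is the same:

```python
import numpy as np

# Invented anomalies (deg C) for four hypothetical sites in one year.
# The last site is remote and strongly warming, and controls a large
# share of the continent's area in the gridded analysis.
anoms   = np.array([0.2, 0.3, 0.1, 0.9])
weights = np.array([0.1, 0.1, 0.1, 0.7])   # assumed effective area weights

straight = anoms.mean()                        # straight average of sites
area_avg = np.average(anoms, weights=weights)  # area-weighted average

# The difference shows how strongly the remote site tilts the result.
print(f"straight: {straight:.3f}  area-weighted: {area_avg:.3f}")
```

Differencing the two averages year by year, as in Attachment 2.3, exposes how much of the "national" signal comes from a handful of heavily weighted sites.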

2.4 The south-east portion of Australia (the area south and east of the median of ACORN stations’ latitudes and longitudes) has the greatest number of sites, and also the greatest change in trend in minima from raw to ACORN: 232% (Attachment 2.4a). The trend increase for New South Wales is 245% (Attachment 2.4b) and for Victoria is 350% (Attachment 2.4c). Homogenisation in the most heavily populated areas of the country cannot be described as having “little impact”.

2.5 I have also compared network-wide ACORN data with AWAP data (1911-2013) in annual, seasonal, and monthly analyses (the last two of which the Bureau has not yet completed). The results (Attachments 2.5a, 2.5b, 2.5c, 2.5d, 2.5e) are staggering and need urgent investigation. In particular, the 200% increase in trends for summer maxima is relevant to claims of increasing summer heat, especially in light of the recent Climate Institute report.

3. The effect of rounding on trends, and uncertainties in neighbouring stations’ data used for homogenising

3.1 The Bureau admits that rounding of temperatures to whole degrees in the Fahrenheit era may have led to an artificial breakpoint of +0.1C in 1972, but claims this is lost in the noisy signals of the 1970s (CTR-049, p. 70). This, however, is disputable and needs thorough investigation: an audit by several colleagues and me of daily minima and maxima in the Fahrenheit and Celsius eras (474 data records and 8,580,583 daily observations) found evidence indicating that the impact on trends could be between +0.1C and +0.4C (Attachment 3.1).
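The kind of audit described here amounts to counting how often each tenths digit appears in the record. A minimal sketch, using a handful of invented readings (the actual audit covered 8,580,583 observations):

```python
from collections import Counter

# Invented daily readings in deg C, recorded to one decimal place.
# In an unrounded record each tenths digit should appear about 10% of
# the time; a large excess of .0 values betrays whole-degree rounding.
readings = [21.0, 23.0, 22.0, 24.3, 25.0, 21.7, 26.0, 22.0, 23.1, 24.0]

tenths = Counter(round(t * 10) % 10 for t in readings)
share_whole = tenths[0] / len(readings)
print(f"share recorded as whole degrees: {share_whole:.0%}")
```

Run over each station's full record, a share of .0 values far above 10% identifies the sites and eras where whole-degree rounding prevailed.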

3.2 Our study also found that homogenising is based on records with large amounts of uncertainty (Attachment 3.2). Significantly, the first of the 2011 International Review Panel’s Recommendations (A1) was “Reduce the formal inspection tolerance on ACORN-SAT temperature sensors significantly below the present ±0.5 °C …”. It is inexcusable that no error bounds are given for ACORN data.

4. Lack of Quality Assurance

The apparent lack of quality assurance means ACORN-SAT is not fit for the purpose of serious climate analysis including the calculation of annual temperature trends, identifying hottest or coldest days on record, analysing the intensity, duration, and frequency of heatwaves, matching rainfall with temperature, calculating monthly means or medians, and calculating diurnal temperature range.

4.1 ACORN-SAT daily data (and consequently, monthly and annual means) present many obstacles to rigorous analysis. Days of data are missing, slabs of data are offset by one day (daily data being assigned to the wrong date, usually one day early), and many adjustments show glaring errors (Attachment 4.1).

4.2 Another glaring error gives Australia a new hottest day on record (Attachment 4.2).

4.3 Other researchers have reported at least 917 days where minimum temperature exceeds maximum (Attachment 4.3). Although a specific check for errors in recording maxima and minima was conducted before homogenising, this check cannot have been done on the homogenised data. It might be claimed that this feature is normal and due to a cold change arriving after 9.00 a.m. This would be especially evident in winter at high altitudes such as Cabramurra, with 212 occurrences. However, there are no instances of maximum less than minimum in the raw data for Cabramurra: all instances occur in the adjusted data before February 1999. Further, the Bureau has been aware of the problem since at least 1 July 2013, when Blair Trewin, lead author of ACORN, assured readers of the blog Open Mind that “in the next version of the data set (later this year), in cases where the adjusted max < adjusted min, we’ll set both the max and min equal to the mean of the two” (which merely hides the fault caused by adjustments). Yet the problem still exists: 212 occurrences remain in the ACORN record for Cabramurra.

In conclusion, ACORN-SAT is not reliable and should be scrapped. Its adjustments distort the temperature record and do not follow the procedures stated in the Bureau’s own Technical Reports, generating warming biases at a large number of sites and thus greatly increasing the network-wide trends. Furthermore, the Bureau does not take account of uncertainty, and the data are riddled with errors indicating poor quality assurance. Finally, its authors have not followed up on most of the undertakings made more than three years ago to permit replication and improve transparency.

I am delighted with the formation of the Review Panel. I hope that this review will bring about much-needed improvements in the way the Bureau of Meteorology collates, audits, analyses, and reports on national temperature data.

Yours sincerely

Ken Stewart


 Attachment 1.1:  One example of many – comparison of Acorn anomalies (black) with neighbours at Carnarvon


Attachment 1.2a:  Differencing: Rutherglen minus neighbours


Attachment 1.2b:  One example of many- Differences after homogenisation show worse results


Attachment 1.2c:  As for 1.2b, showing the mean of candidate data minus neighbours. 



There are many other sites with greater differences after homogenisation in minima or maxima.  I have also checked Deniliquin, Bourke, Amberley, Carnarvon, Williamtown, Mackay, Kalgoorlie-Boulder and Wilcannia, which all show this problem.  Some sites may show improved differences.  An audit of all sites is essential.

Attachment 1.3a: One example of many of warming adjustments over-correcting: Amberley Acorn vs the mean of the nearest Acorn neighbours’ homogenised data.



Attachment 1.3b: An example of cooling adjustments over-correcting: Acorn Tarcoola shows a decreased trend compared with its nearest Acorn neighbours’.



 Attachment 1.4a: Mackay maxima differencing including all listed neighbours



Attachment 1.4b: Mackay differencing with 2 most distant neighbours excluded, showing improved differences.



Attachment 1.5: The Bureau’s lists of adjustments at the six stations are different, except for one adjustment at Orbost.



Attachment 2.2a: Mean of Tmin annual anomalies at 83 sites- minimally adjusted raw data vs Acorn, 1910 – 2012 data. 



Attachment 2.2b: Mean of Tmax annual anomalies at 84 sites- minimally adjusted raw data vs Acorn, 1910 – 2012 data. 


Attachment 2.3: Differencing shows the effect of area averaging using very remote sites, 1910 – 2012 data.



Attachment 2.4a: Tmin increase in trend in different regions, 1910 – 2012 data. 



Attachment 2.4b: Increase in Tmin warming in NSW, 1910 – 2012 data.



Attachment 2.4c: Increase in Tmin warming in Victoria, 1910 – 2012 data.



Attachment 2.5a: ACORN vs AWAP comparison- by month (1911 – 2013 data)



Attachment 2.5b: ACORN vs AWAP comparison- annual and seasonal (1911 – 2013 data)



Attachment 2.5c: ACORN vs AWAP comparison- by season- Tmean (1911 – 2013 data)



Attachment 2.5d: ACORN vs AWAP comparison- by season- Tmin (1911 – 2013 data)



Attachment 2.5e: ACORN vs AWAP comparison- by season- Tmax (1911 – 2013 data)



Attachment 3.1: Comparison of the percentage of observations recorded at each tenths value (from .0, i.e. whole degrees, to .9) in the Fahrenheit era with that in the Celsius era at continuing sites, irrelevant sites excluded, indicating conditions suitable for the creation of artificial warming.

Attachment 3.2: Analysis of the impact of rounding on trends and uncertainties.

Our study concluded:

“As more than half of all sites in Australia had rounding probably greater than 50%, truncating at significant levels (33%, 50%, or 100%) before September 1972 would cause artificial warming of between +0.1C and +0.4C per 100 years.”

“Many …. sites have recorded large amounts of data in recent years that may be in error by up to 0.5° Celsius, being rounded to whole degrees, and more than half of the sample studied have recorded erroneous data at some time in the past 40 years.”

“As well, the vast majority of sites … inaccurately recorded observations in the Fahrenheit era by recording in whole degrees. For nearly half of all sites, this amounts to at least 50% of their total observations. It is probable that more than 50% of all Australian observations were rounded. This alone means that temperatures before 1972 may be inaccurate by up to 0.25° C.”

“The large amount of uncertainty in the records of so many sites means that homogenisation as practised by BOM researchers must be in question, and with it all analyses of Australia’s temperature trends.”

Attachment 4.1: Obvious errors indicate poor quality assurance.

There are numerous glaring errors for individual days at many sites.  The following graphic shows Rutherglen maxima at Climate Data Online for September to November 1926 compared with ACORN-SAT maxima from 30/09/1926 – 05/11/1926.


Also, on 13/10/1926 the Acorn minimum is -1.2 (adjusted down from +6.9).

Data for days such as 13/10/1926 with an obvious error, possibly the result of a missing leading digit, are not unusual and are found in the records of many stations.

The Acorn record for Rutherglen has some other peculiarities as well. There are several separate periods where Acorn’s maxima frequently do not match the data from Climate Data Online, being one day too early. These are:

1/11/1920 – 19/3/1940,

1/12/1940 – 31/10/1944,

1/5/1946 – 31/10/1947, and

1/12/1947 – 31/1/1948.

Attachment 4.2: Another glaring error, one of many

Australia’s hottest temperature is supposed to be 50.7C, recorded at Oodnadatta on 02/01/1960, but ACORN-SAT has a temperature of 51.2C at Albany on 08/02/1933. Many days at Albany have been adjusted by more than +6 degrees C, resulting in this ludicrous figure, which has passed quality assurance.


Attachment 4.3: List of 69 stations with ACORN minima exceeding maxima.

Station, Number of days with minimum temperature exceeding the maximum temperature.

Adelaide, 1. Albany, 2. Alice Springs, 36. Birdsville, 1. Bourke, 12. Burketown, 6. Cabramurra, 212. Cairns, 2. Canberra, 4. Cape Borda, 4. Cape Leeuwin, 2. Cape Otway Lighthouse, 63. Charleville, 30. Charters Towers, 8. Dubbo, 8. Esperance, 1. Eucla, 5. Forrest, 1. Gabo Island, 1. Gayndah, 3. Georgetown, 15. Giles, 3. Grove, 1. Halls Creek, 21. Hobart, 7. Inverell, 11. Kalgoorlie-Boulder, 11. Kalumburu, 1. Katanning, 1. Kerang, 1. Kyancutta, 2. Larapuna (Eddystone Point), 4. Longreach, 24. Low Head, 39. Mackay, 61. Marble Bar, 11. Marree, 2. Meekatharra, 12. Melbourne Regional Office, 7. Merredin, 1. Mildura, 1. Miles, 5. Morawa, 7. Moree, 3. Mount Gambier, 12. Nhill, 4. Normanton, 3. Nowra, 2. Orbost, 48. Palmerville, 1. Port Hedland, 2. Port Lincoln, 8. Rabbit Flat, 3. Richmond (NSW), 1. Richmond (Qld), 9. Robe, 2. St George, 2. Sydney, 12. Tarcoola, 4. Tennant Creek, 40. Thargomindah, 5. Tibooburra, 15. Wagga Wagga, 1. Walgett, 3. Wilcannia, 1. Wilsons Promontory, 79. Wittenoom, 4. Wyalong, 2. Yamba, 1.

(From Willis Eschenbach. Another study claims a total of 954 days.)

Not the third hottest year either

January 11, 2015

According to the Bureau’s surface temperature record, 2014 was the 3rd hottest year on record. The satellite-derived Lower Troposphere data from UAH (University of Alabama in Huntsville) show a different picture.

[Chart: UAH annual lower-troposphere temperature anomalies for Australia, 1979-2014]

If rankings are important to you, 2014 at +0.40C was in equal seventh place with 2006: cooler than 1980, and warmer than 1988 by just 0.01C.

Year Anomaly(C) Rank
2013 0.71 1
2009 0.64 2
1998 0.63 3
2005 0.51 4
2007 0.50 5
1980 0.49 6
2014 0.40 7
2006 0.40 8
1988 0.39 9
2002 0.23 10
1991 0.22 11
2010 0.22 12
1996 0.17 13
2008 0.16 14
2012 0.14 15
2011 0.10 16
1990 0.09 17
2004 0.02 18
1981 -0.01 19
1995 -0.04 20
2003 -0.05 21
1982 -0.12 22
1979 -0.13 23
1999 -0.15 24
1985 -0.22 25
1989 -0.22 26
1987 -0.22 27
1997 -0.22 28
2000 -0.24 29
2001 -0.29 30
1986 -0.29 31
1993 -0.29 32
1983 -0.36 33
1994 -0.38 34
1992 -0.56 35
1984 -0.62 36

But don’t expect to find this reported by the ABC.

How Much Warming Have School Leavers Seen?

December 7, 2014

Reports of the recent heatwave and record high temperatures in November (which coincided with the end of schooling for our Year 12 students) have been exciting the media here in Australia, and last week Professor Lesley Hughes of the Climate Council got herself overheated and joined in.

Professor Hughes, an ecologist, was this year awarded the Australian Government Eureka Prize for Promoting Understanding of Australian Science Research.



Last week, she claimed that “climate change was having a significant impact on Australia’s temperatures, with record-breaking weather becoming more frequent and more severe.”

“Nine of the 10 warmest springs have occurred in just the last 13 years,’’ Prof Hughes said. “Heatwaves are becoming hotter, lasting longer and occurring more often.

“This is resulting in year after year of recordbreaking temperatures, which increases the risk of bushfires, droughts and heatwave-related health issues.”

How much of this frightening climate change have the current school leavers experienced?

Here are some graphs of Australian monthly temperature anomalies, straight from the Bureau’s Climate Change website, for the entire period of their schooling, from January 2002 to November 2014 (Preschool to Year 12 in Queensland).


[Chart: Australian monthly minimum temperature anomalies, 2002-2014]

Yes, over the past 13 years minimum temperatures have increased at a rate of about +0.08 degrees C per decade, or +0.8 C per 100 years, but that is less than the post-1910 trend of +0.1 C per decade.


[Chart: Australian monthly maximum temperature anomalies, 2002-2014]

Oops! Over the whole period of the school leavers’ education, maxima have decreased at -0.06 degrees per decade.


[Chart: Australian monthly mean temperature anomalies, 2002-2014]

Throughout their school years our school leavers have experienced a warming trend of… +0.01 C per decade, or about 15 thousandths of a degree over 13 years. Even with the hottest November on record, that’s far less than the trend of about +0.09 C per decade since 1910. I’d call that a slowdown.

Welcome to the real world, school leavers.  It’s not as frightening as Professor Hughes (or Barack Obama) would have you believe.

Rain, clouds, and temperature

November 19, 2014

Looking at the continent of Australia as a whole, and using 12 month running means to smooth the very noisy data, we can see some intriguing patterns.

Firstly, here is a comparison of tropospheric temperatures above Australia from the University of Alabama in Huntsville (UAH) with surface air temperatures from the Bureau of Meteorology’s ACORN-SAT database. To be comparable, both datasets are expressed as anomalies from their 1981-2010 means. The data are monthly since December 1978, with a 12-month running mean applied.
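For readers wanting to reproduce the smoothing, a 12-month running mean can be computed as below. The series is synthetic (a pure annual cycle) rather than the real UAH or ACORN anomalies:

```python
import numpy as np
import pandas as pd

# Synthetic monthly series: a pure annual cycle of +/-0.5 deg C over
# three years, standing in for the real monthly anomaly data.
months = pd.date_range("1979-01-01", periods=36, freq="MS")
anom = pd.Series(0.5 * np.sin(np.arange(36) * 2 * np.pi / 12), index=months)

# 12-month running mean: each value averages that month and the
# preceding 11, smoothing out the seasonal cycle and monthly noise.
smooth = anom.rolling(window=12).mean()

# A window spanning exactly one year cancels a pure annual cycle.
print(f"largest smoothed value: {smooth.dropna().abs().max():.6f}")
```

Because each window spans exactly one year, the seasonal cycle averages out, leaving only the longer-term variation that the plots below compare.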

Fig. 1


Both datasets show concurrent rises and falls and are generally very similar (though not always). Note how Acorn means were very much cooler in 2011-2012 and much hotter in 2013. Note also that 2014 has Buckley’s chance of being the hottest year on record.

Mean equals the average of maximum and minimum, so let’s look at maxima and minima.

Fig. 2


Note that UAH usually tracks Acorn maxima, except when it doesn’t, as shown above by the Xs.

Perhaps it has something to do with rainfall, or lack of it.  In the next plot, rainfall is inverted, so dry is at the top, wet at the bottom.

Fig. 3


Incidentally, the Bureau also has 9 a.m. and 3 p.m. cloud data available.  Note how closely both cloud datasets match, and how rainfall largely corresponds.

Fig. 4


And the Southern Oscillation Index runs in close partnership with rainfall: sometimes the SOI leads rain, sometimes rain leads the SOI.

Fig. 5


Which is why I don’t take a lot of notice of predictions based on SOI.

Now see what happens when we plot inverted rainfall (dry at the top, wet at the bottom) and maxima.

Fig. 6


Only once does 12 month mean maximum temperature precede 12 month rainfall (1991-1992).  At all other times, rainfall peaks or troughs occur before maxima (or at most, simultaneously).

With minima the lead is even more obvious; however, there are apparent exceptions in 1982 and 1994-1995, although these may be further examples of rain leading minima by more than a year (marked with “?”).

Fig. 7


When we compare maxima with minima, the pattern is clear.

Fig. 8


Only in the summer of 1994-1995 do the records diverge.

Generalisations (and farmers have known about these rules of thumb for years):

  1. Climate is cyclical.  Rain and temperature rise and fall in roughly two or three year cycles.
  2. It always rains after a drought.
  3. Dry years are followed by spikes in maximum and minimum temperatures, from one to several months later.
  4. Wet years, with heavy cloud and rain, cause sharp drops in minimum and maximum temperatures, from one to several months later.
  5. Maximum temperatures lead minimum temperatures by several months in wet years, and by a shorter period in dry years.
  6. There are exceptions to all of the above.

Next step: Australia is a large continent with several distinct climatic regions.  I will next look at smaller regions to see if the above generalisations hold true and indeed may be modified or enhanced.

More Bizarre Adjustments

November 5, 2014

In September, the Bureau of Meteorology added two extra tabs to its ACORN-SAT webpage, in response to media and public pressure.  The first tab (“Adjustments”) included a link to a list of temperature adjustments for each of its 112 Acorn stations.  (This had been promised two and a half years earlier.)

Soon after, and probably in response to continued interest in adjustments at Amberley, Rutherglen, and Deniliquin (amongst others), six links to PDF files were added at the bottom of the adjustments page, giving further explanations and summaries of adjustments at six individual sites: Amberley, Deniliquin, Mackay, Orbost, Rutherglen, and Thargomindah.


Two days ago I posted about the bizarre case of Mackay 33119, listing the differing adjustments from the two sources and the extra neighbours found, and showing that the set of adjustments in the individual summary did not match the end result (the Acorn record for Mackay).

I thought this must be just a freak problem with Mackay.  Surely the other examples couldn’t all be wrong.

Not so.

Here is a table summarising the adjustments listed by the Bureau in the 28-page Station adjustment summary list, compared with those in the individual station summaries.


Only one station (Deniliquin) has a matching pair for every adjustment, but none of the values are the same.

Out of 25 pairs of matching adjustments, only one pair has the same adjustment.

Most of the adjustments differ by only a few hundredths of a degree, but some are hugely different (over 1 degree in the case of Mackay).

There are a total of 60 adjustments, but 10 of these have no matching adjustment: seven of the extras are in the individual station summaries, and three are in the original 28-page list.

Note that these station summaries are “indicative of the sorts of adjustments made across the 112 ACORN-SAT sites”. As a result, we can have no confidence in the accuracy of the Bureau’s adjustments, and we are left wondering what the Bureau would have us believe are the real temperatures at any site.

An old school teacher’s response to such sloppy work?

Fail.  Check your work and repeat.  Stay in at lunch time until you get it right.

The Bizarre Case of Mackay 33119

November 3, 2014

What has the Bureau of Meteorology done to Mackay’s temperatures?

Mackay’s temperature records from the old Post Office, Te Kowai, and the Met. Office have been combined into one, and this has been “homogenised” by reference to Mackay’s neighbours.

But in attempting to justify their actions the Bureau has provided TWO lists of neighbour stations and TWO lists of adjustments at their website.

First, the list of “neighbour” stations used at Mackay (from the 28-page Adjustments document):

33119    Mackay Met. Office (Mt Bassett) 1960-2014 is the official Acorn site.

33046    Mackay Post Office  1910-1949  (4 km away)

33047    Te Kowai  1910-2011  (10 km)

33058    Pine Islet Lighthouse  (70 km)

39023    Cape Capricorn Lighthouse (335 km)

33013    Collinsville Post Office  (154 km)

39083    Rockhampton Aero  (283 km)

32005    Cape Cleveland Lighthouse (290 km)

33077    Pacific Heights (Yeppoon- 273 km)

32078    Ingham  (421 km)

39122    Heron Island  (380 km)

32037    South Johnstone Experiment Station (Tully- 517 km)

34002    Charters Towers PO  (329 km)

33001    Burdekin Shire Council (Ayr- 255 km)

33007    Bowen PO  (160 km)

39069    Walterhall (Mt Morgan- 300 km)

35019    Clermont PO  (246 km)

However, they provide a different list in the explanation for Mackay’s adjustments, given at a link from the site above, and the two explanations are quite different.

Here is the second list, including four extras:

[Image: the Bureau’s second neighbour list for Mackay, including four extra stations]

You can imagine the reaction of Mackay residents on finding that Mackay’s temperatures have been homogenised using Tully, Heron Island, Townsville, Mount Morgan, Charters Towers, Clermont, and their old rival, Rockhampton.  None of these places has a climate anything like Mackay’s.

Further, the Bureau claims that Townsville and Rockhampton are excluded from climate analyses because they are both affected by Urban Heat Island (UHI) warming, but here they have been included in the climate analysis of Mackay.

Now the lists of adjustments:

[Image: comparison of the two lists of Mackay adjustments]

The matching adjustments are completely different, and there are three extra statistically detected breakpoints with adjustments, including two extra for maxima. So which set of adjustments was actually used?

Here is a chart of Mackay’s annual maximum temperature records. Suffice it to say that the Mackay record is a mess, and good luck to anyone trying to homogenise it.

Fig. 1:


These are the results from applying the two lists of adjustments to the raw Mackay temperatures, to see which matches the Acorn records.

Fig. 2: Calculated maximum temperatures (raw temperatures with the listed adjustments applied) minus Acorn temperatures. Zero difference equals a perfect match.


Fig. 3:   Minima:


The original list given in the 28-page adjustments document appears to be the one used for both maxima and minima. Mackay Acorn maxima cannot be replicated with the Station temperature adjustment summary list, which has two adjustments that were clearly not used, and which is moreover confusing and does not follow the protocol for 1939-1940. Similarly, the summary list has an additional adjustment for minima which does not match the Acorn record.

The summary list appears to have been put together in a hurry in an attempt to head off criticism about lack of transparency.  But why the different adjustments?

And were the actual adjustments justified?  A simple test is to find the differences between the station being homogenised and its neighbours.  If Mackay has been properly homogenised, the average difference after homogenisation should have a trend close to zero.  Here are the results:

Fig. 4:  Average differences in anomalies of the 10 listed neighbours for the period around the “statistical” breakpoint at 01/01/1971.


The adjustment of about -0.3C for all years up to 1970 makes the differences worse.  Interestingly, when the two most distant sites to the north and south are excluded (South Johnstone and Heron Island), the trend in raw difference is almost zero.  The raw Mackay MO record is similar to the neighbours, without any adjustment.

Fig. 5: As for Fig. 4, but excluding 2 distant sites:


The adjustments to the Post Office for 01/01/1941 and 01/01/1948 cause the following differences:

Fig. 6:


Once again, there is a major difference between the Acorn record and the average of the neighbours, as shown by the steep trend, which is not much better than the raw difference.

To conclude:

  1. the Bureau has made an embarrassing mistake in publishing two different lists of adjustments and neighbours for Mackay;
  2. the adjustments listed in the Mackay station adjustment summary are not those actually made;
  3. adjustments are based on “neighbours” up to 500 km away, including two with UHI effect;
  4. very few of these neighbours have climates similar to Mackay’s;
  5. differencing shows that homogenising makes Mackay Met Office maxima LESS like the neighbours, and Post Office maxima not much closer.

If the adjustments at Mackay are, as the Bureau claims, “indicative of the sorts of adjustments made across the 112 ACORN-SAT sites”, then we can look forward to finding many more problems.

Adjustments Grossly Exaggerate Monthly and Seasonal Warming

October 4, 2014

The Bureau of Meteorology has reportedly claimed “an extensive study has found homogeneity adjustments have little impact on national trends and changes in temperature extremes.”  (Weekend Australian, August 23-24).

I have always said that the true test of the homogenisation process is its effect on national trends.  Problems at individual stations like Rutherglen are merely symptoms of a system wide malady.

If the adjustments really do have “little impact on national trends” then the Acorn dataset is a reliable indicator of broad temperature change in Australia.

If not, the Bureau has a problem.

So, how do we define “little impact”?

The Bureau has known since March 2012 that the mean annual temperature increase from 1911 to 2010 in adjusted data (+0.94C) is 36% greater than in unadjusted data (+0.69C).  This information is publicly available in Table 1 on page 14 of On the sensitivity of Australian temperature trends and variability to analysis methods and observation networks (CAWCR Technical Report No. 050), R.J.B. Fawcett, B.C. Trewin, K. Braganza, R.J. Smalley, B. Jovanovic and D.A. Jones, March 2012 (hereafter CTR-050).  In this paper the authors claim that the rise in unadjusted data is “somewhat smaller”.  If this is so, then what increase in trend over unadjusted data may be considered beyond small or “little impact”?  50%?  More than 50%?

What about 200%?

The Bureau has this graphic on their new Adjustments tab, which presumably is meant to support the claim of “little impact”:

Fig. 1: Official comparison (click graphics to enlarge)

[image: BOM graphic]

How big is that increase?  The devil is in the detail: monthly and seasonal trends, which the Bureau is yet to analyse.

According to the Bureau, AWAP (Australian Water Availability Project) represents unadjusted data.  (It’s not: CTR-050 itself calls it “partially homogenised”, and there are major issues with it, but that’s another story to be discussed later.  For now, let’s play along with calling it “unadjusted”.)  Using this same “unadjusted” data, and the same method as the Bureau, here are results for the 1911 – 2013 period.  (See the Appendix below for full details.)

These tables summarize the results.  Highlighted cells show large (>50%) differences.

Fig. 2:  Summary Table: Percentage Increases to Unadjusted Data- Seasons

[image: summary table seasons]

The major effect is on the summer trend: the increase in the Mean trend is 64%, and in Maxima, 200%.

Fig. 3:  Summary Table: Percentage Increases to Unadjusted Data- Months

[image: summary table months]

Among the hot months, Maxima trends for November, December, and January have had large increases, and February and March have had cooling trends reversed.

June and November Mean, Minima, and Maxima trends have been massively increased.

One month (August) has had a warming trend reduced.

May, July, August, and September are largely unchanged.


Compared with ‘unadjusted’ data for the period 1911 – 2013, Acorn shows obvious changes in monthly and seasonal data.  Exploration of the reasons for this needs to be included in the terms of reference of the forthcoming “independent review”.

The difference between AWAP and Acorn, especially in summer maxima, is of particular concern for anyone wishing to analyse national data.  For example: What was the national summer maximum in 1926?  AWAP says 35.87C.  Acorn says 33.53C.  Which dataset is to be believed?

The Bureau has a problem.

The Acorn dataset is NOT a reliable indicator of broad temperature change in Australia.

Appendix: Background, Charts, Methods, and Analysis

CTR-050 analyses data for the 1911-2010 period, comparing Acorn with several other datasets, including AWAP.  All trends are determined by quadratic fit, rather than linear, to better show the temperature trends across the period: cooling then warming.  The authors also use anomalies from 1981-2010 means.

This table shows the change in temperature over the period, which represents the trend per 100 years (and I am annoyed at myself for not reading this more closely two years ago).

Fig. 4:  Table 1 from CTR-050:

[image: BOM table 1 comps]

The authors explain (pp. 41-46) that the difference between AWAP and Acorn is mainly between 1911 and 1955 and is largely due to the large impact on national temperature of very few remote sites in the earlier years of last century, and station moves to cooler sites around 1930 and the 1940s.  That may certainly be true, but the large discrepancy calls for closer analysis.

My methods

Monthly and annual AWAP data (minima, maxima, and mean) for 1911 – 2013, obtained from the Bureau, allows analysis of the impact of the adjustments.  I use 1961 – 1990 as the reference period for anomalies.  I also use quadratic trends and calculate temperature change per 100 years as (last quadratic trendline point – first point) × 100/103.  (These first and last points are accurately determined to 0.01C by zooming in on Excel charts; see Figures 22 and 23 below.)  I calculate percentage change in the 100 year trend as {(Acorn trend – AWAP trend)/AWAP trend} × 100.

For example: annual means.

           Quadratic first point (1911)   Quadratic last point (2013)   Change
AWAP:      -0.13                          +0.56                         +0.69
Acorn:     -0.34                          +0.58                         +0.92

AWAP quadratic trend per 100 years = 0.69 × 100/103 = 0.67

Acorn quadratic trend per 100 years = 0.92 × 100/103 = 0.89

Percentage change in trend = {(0.89 – 0.67) / 0.67} × 100 = 32.8%.
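The method above can be sketched in a few lines of code (a hedged illustration: the function and variable names are mine, and the endpoint values are those quoted in the worked example rather than values derived from the raw series):

```python
import numpy as np

def change_per_century(years, anomalies, divisor=103):
    """Fit a quadratic to the anomaly series and return the change in
    the fitted curve from the first to the last year, scaled by
    100/103 as in the method described above."""
    coeffs = np.polyfit(years, anomalies, 2)
    first, last = np.polyval(coeffs, [years[0], years[-1]])
    return (last - first) * 100 / divisor

# The worked example, using the endpoint values quoted above (read off
# the fitted curves, so no raw data is needed here):
awap = round((0.56 - -0.13) * 100 / 103, 2)   # 0.67 C per 100 years
acorn = round((0.58 - -0.34) * 100 / 103, 2)  # 0.89 C per 100 years
pct = (acorn - awap) / awap * 100             # about 32.8%
```

With the actual AWAP and Acorn annual anomaly series, `change_per_century` would be applied to each and the two results compared in the same way.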

While my analysis largely confirms the figures in Figure 4 above, the devil is in the detail.

Firstly, here are charts for comparison of mean temperatures, showing linear and quadratic trends to 2013:

Fig. 5: Linear

[image: mean linear]

Fig. 6: Quadratic

[image: mean quadratic]

Linear analysis produces a trend increase of 31%, a little less than the quadratic.  Acorn adjustments produce a quadratic trend about 32.8% greater than AWAP’s, not as great as for 1911-2010, but still substantial.  Quadratic trend lines produce a better fit than linear and clearly show the earlier cooling.

Fig.7:  Annual Minima

[image: min quadratic]

Over 25% increase.

Fig. 8: Annual Maxima

[image: max quadratic]

36.7% increase.

Seasonal and Monthly Means:

Fig. 9:  Table of Seasonal Differences for Means.

[image: mean table seasons]

Note summer mean trend has been increased by 64%.  Graphs may make the comparison starker.

Fig. 10:  Comparison of 100 year trends in unadjusted and adjusted seasonal data.

[image: mean trends diff seasons]

Fig. 11: Percentage Difference in Trends

[image: mean trends diff % seasons]

Fig. 12: Comparison of 100 year trends in unadjusted and adjusted monthly data.

[image: mean trends comp]

Fig. 13:  Percentage Difference in Trends

[image: mean trends diff % months]

The February trend has doubled; March, June, and November have increased by about 80%.


Fig. 14:  Table of Seasonal Differences for Minima.

[image: min table seasons]

Fig. 15:  Comparison of 100 year trends in unadjusted and adjusted seasonal data.

[image: min trends comp seasons]

Fig. 16:  Percentage Difference in Trends

[image: min trends diff % seasons]

Fig. 17: Comparison of 100 year trends in unadjusted and adjusted monthly data.

[image: min trends comp]

Fig. 18:  Percentage Difference in Trends

[image: min trends diff %]

Note the doubling of the June minima trend, and October and November increased by 50%.


Fig. 19:  Table of Seasonal Differences for Maxima.

[image: tmax table seasons]

Fig. 20:  Comparison of 100 year trends in unadjusted and adjusted seasonal data.

[image: max trends seasons]

Fig. 21:  Percentage Difference in Trends- we need to rescale the y-axis!

[image: max trends diff % seasons]

Don’t believe the 200% figure?  Here are close-ups of the graph.

Fig. 22:  Summer maxima detail

[image: max summer quadratic bottom]

Fig. 23:

[image: max summer quadratic top]

Fig. 24: Comparison of 100 year trends in unadjusted and adjusted monthly data.

[image: max trends comp]

Note the cooling trends in February and March are reversed, and August is reduced.

Fig. 25:  Percentage Difference in Trends

[image: max trends diff % months]

Strong August warming is slightly reduced.  No calculation is made for February and March (their unadjusted cooling trends were reversed, so a percentage increase is meaningless).  January, June, and December are greatly warmed.  November is massively warmed.

Why the huge discrepancies between unadjusted and adjusted data?

Acorn data freely available at

AWAP data available at a cost on request from

A Check on ACORN-SAT Adjustments: Part 1

September 18, 2014

I have commenced the long and tedious task of checking the Acorn adjustments of minimum temperatures at various stations by comparing them with the lists of “highly correlated” neighbouring stations that the Bureau of Meteorology has kindly, but so belatedly, provided.  Up to 10 stations are listed for each adjustment date, and these are presumably the sites used in the Percentile Matching process.

The Bureau assumes that any climate shifts will show up in all stations in the same (though undefined) region.  Therefore, by finding the differences between the target or candidate station’s data and its neighbours’, we can test for ‘inhomogeneities’ in the candidate site’s data, as explained in CTR-049, pp. 44-47.  Any inhomogeneities will show up as breakpoints where the data appear to suddenly rise or fall compared with the neighbours.  Importantly, we can use this method to test both the raw and the adjusted data.

Ideally, a perfect station with perfect neighbours will show zero differences: the average of their differences will be a straight line at zero.  Importantly, even if the differences fluctuate, there should be zero trend.  Any trend indicates past temperatures appear to be either relatively too warm or too cool at the station being studied.  It is not my purpose here to evaluate whether or not individual adjustments are justified, but to check whether the adjusted Acorn dataset compares with neighbours more closely.   If so, the trend in differences should be close to zero.

In all cases I used differences in annual minima anomalies from the 1961-1990 mean, or if the overlap was shorter than this period, anomalies from the actual period of overlap.  Where I am unable to calculate differences for an Acorn merge or recent adjustment due to absence of suitable overlapping data (e.g. Amberley 1997 and Bourke 1999, 1994), as a further test I have assumed these adjustments are correct and applied them to the raw data.
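The differencing test described above can be sketched as follows (a minimal illustration of my own, not the Bureau's code; the function names are mine, and real station series would need to be supplied):

```python
import numpy as np

def anomalies(years, temps, base=(1961, 1990)):
    """Anomalies from the mean of the base period (here 1961-1990),
    matching the reference period described above."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    in_base = (years >= base[0]) & (years <= base[1])
    return temps - temps[in_base].mean()

def difference_trend(years, candidate, neighbours):
    """Linear trend (degrees C per 100 years) of the candidate-minus-
    neighbour-average anomaly differences.  A trend near zero suggests
    the candidate tracks its region; a clear slope suggests a relative
    warming or cooling bias."""
    cand = anomalies(years, candidate)
    neigh_avg = np.mean([anomalies(years, n) for n in neighbours], axis=0)
    slope_per_year = np.polyfit(years, cand - neigh_avg, 1)[0]
    return slope_per_year * 100
```

A homogenised series that passes the test should give an absolute `difference_trend` close to zero.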

I have completed analyses for Rutherglen, Amberley, Bourke, Deniliquin, and Williamtown.

The results are startling.

In every case, the average difference between the Acorn adjusted data and the neighbouring comparison stations shows a strongly positive trend, indicating Acorn does not accurately reflect regional climate.

Even when later adjustments are assumed to be correct the same effect is seen.

Interim Conclusion:

Based on differencing raw and adjusted data against the listed comparison stations at five of the sites that have been discussed recently by Jennifer Marohasy, Jo Nova, or myself, Acorn adjustments to minima have a distinct warming bias.  It remains to be seen whether this is a widespread phenomenon.

I will continue analysing using this method for other Acorn sites, including those that are strongly cooled.  At those sites I expect to find the opposite: that the differences show a negative trend.

Scroll down for graphs showing the results.



Rutherglen

(Note the Rutherglen raw minus neighbours trend is flat, indicating good regional comparison.  Adjustments for discontinuities should maintain this relationship.)

Amberley (a)


(Note that the 1980 discontinuity is plainly obvious but may have been over-corrected.)

Amberley (b): 1997 merge (-0.44) assumed correct

[image: amberley inc 1997]

Treating the 1997 adjustment as correct has no effect on the trend in differences.

Bourke (a)


Bourke (b):  1999 and 1994 merges assumed correct.

[image: bourke inc merges]

No change in trend of differences.



(Note the adjusted differences still show a strong positive trend, but less than the other examples.)



(Applying an adjustment to all years before 1969 produces a strong positive trend in differences.)

Better Late Than Never- BOM Releases Adjustment Details

September 11, 2014

On Monday, quietly and without any announcement, a new tab appeared on the Bureau’s ACORN-SAT webpage.

[image: adj tab]

This “Adjustments” tab opens to a page explaining why homogenisation is necessary, supposedly showing how the adjustments don’t make much difference to the mean temperatures, and how Australia really is warming because everyone agrees.  More on this later.  So how do we get to see the actual adjustments for each site?  Tucked away under the first graph is a tiny link:

[image: adj tab link]

Click on that and a 27 page PDF file opens, listing every Acorn station, dates and reasons for adjustments, and most importantly, a list of reference stations used for comparison.  (You have to go to Climate Data Online to find the station names, their distance away, site details, and their raw data.)

Finally it will be possible to check the methods and results using the correct comparison stations; until now we could only guess.

Back in September 2011 the Independent Peer Review Panel made a series of recommendations, including that

“C1. A list of adjustments made as a result of the process of homogenisation should be assembled, maintained and made publicly available, along with the adjusted temperature series. Such a list will need to include the rationale for each adjustment.”

The Bureau responded on 15 February 2012, just before the release of Acorn:

“Agreed. The Bureau will provide information for all station adjustments (as transfer functions in tabular format), cumulative adjustments at the station level, the date of detected inhomogeneities and all supporting metadata that is practical. This will be provided in digital form. Summaries of the adjustments will be prepared and made available to the public.”

That was two and a half years ago.  What took so long?  Why was it not publicly available from the start?  Perhaps it is just a coincidence that the long-awaited information was released shortly after a series of articles by Graham Lloyd appeared in The Australian, pointing out some of the apparent discrepancies between raw and adjusted data.  Graham Lloyd deserves our heartfelt thanks.

The Bureau of Meteorology has been dragged kicking and screaming into the 21st Century.  The Bureau is having trouble coming to terms with this new era of transparency and accountability, an era in which decisions are held up to public scrutiny and need to be defensible.

I trust we won’t have to wait another two and a half years for the other information promised, such as “sufficient station metadata to allow independent replication of homogeneity analyses”, “computer codes… algorithms… and protocols”, “the statistical uncertainty values associated with calculating Australian national temperature trends” and “error bounds or confidence intervals along the time series”.

The final recommendation of the Review Panel, and undertaking by the Bureau:

“E6. The Review Panel recommends that the Bureau assembles and maintains for publication a thorough list of initiatives it has taken to improve transparency, public accessibility and comprehensibility of the ACORN-SAT data-set.

Agreed. The Bureau will provide such information on the Bureau website by March 2012.”

I must have missed that.




Homogenisation: A Test for Validity

September 8, 2014

This follows on from my last post where I showed a quick comparison of Rutherglen raw data and adjusted data, from 1951 to 1980, with the 17 stations listed by the Bureau as the ones they used for comparison when detecting discontinuities. 

Here is an alternative and relatively painless way to check the validity of the Bureau’s homogenisation methods at Rutherglen, based on their own discontinuity checks.  According to the “Manual” (CAWCR Technical Report No. 49), they performed pair-wise comparisons with each of the 17 neighbours to detect discontinuities.  An abbreviated version of this can be used for before-and-after comparisons.  For each of the 17 stations, I calculated annual anomalies from the 1961-1990 means for both Rutherglen and the comparison site, then subtracted the comparison data from Rutherglen’s.  I did the same with Rutherglen’s adjusted Acorn data.

A discontinuity is indicated by a sudden jump or drop in the output.  The ideal, if all sites were measuring accurately and there were no discontinuities, would be a steady line at zero: a zero value indicates temperatures are rising or falling at the same rate as the neighbours’.  In practice no two sites will ever have exactly the same responses to weather and climate events; however, the timing and sign should be the same.  Therefore pairwise differencing will indicate whether and when discontinuities should be investigated for possible adjustment.
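A minimal sketch of such a pairwise check follows (my own illustration, with my own function names; the Bureau's actual detection tests in CTR-049 are more sophisticated, so treat this as a crude stand-in):

```python
import numpy as np

def pairwise_diffs(candidate_anoms, neighbour_anoms):
    """One difference series per neighbour: candidate minus neighbour."""
    cand = np.asarray(candidate_anoms, dtype=float)
    return [cand - np.asarray(n, dtype=float) for n in neighbour_anoms]

def step_at(diff, i):
    """Mean of the difference series from index i onward, minus the
    mean before index i."""
    return diff[i:].mean() - diff[:i].mean()

def likely_breakpoint(years, diff, min_seg=5):
    """Year at which the before/after shift in a difference series is
    largest -- an indicator only, flagging where a possible
    discontinuity should be investigated."""
    candidates = range(min_seg, len(diff) - min_seg)
    best = max(candidates, key=lambda i: abs(step_at(diff, i)))
    return years[best], step_at(diff, best)
```

Running `likely_breakpoint` on each series from `pairwise_diffs` shows whether the neighbours agree on the timing and sign of a suspected shift.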

Similarly, pairwise differencing is a valid test of the success of the homogenisation process.  Successful homogenisation will result in differences closer to zero, with zero trend in the differences over time.  The Bureau has told the media that adjustments are justified by discontinuities in 1966 and 1974.  Let’s see.

Fig. 1:  Rutherglen Raw minus each of 17 neighbours

[image: pairwise diffs Rutherglen Raw]

Note: there is a discernible drop from 1974 to 1977.  There is a very pronounced downwards spike in 1967 (ALL differences below zero, indicating Rutherglen data were definitely too low).  There is also a step up in the 1950s, and another spike upwards in 1920.  Rutherglen is also lower than most neighbours in the early 1930s.  Several difference lines are obviously much higher or lower than the others and need further investigation, but the great majority cluster together.  Their differences from Rutherglen are fairly consistent, in the range +/- 1 degree Celsius.

Now let’s look at the differences AFTER homogenisation adjustments:

Fig. 2:  Rutherglen Acorn minus the neighbours: The Test

[image: pairwise diffs Rutherglen Acorn]

The contrast is obvious.  The 1920 and 1967 spikes remain.  Differences from adjusted data are NOT closer to zero: most of the differences before 1958 are now between 0 and -2 degrees Celsius, and there is now an apparent large and artificial discontinuity in the late 1950s.  This would indicate the need for the Rutherglen Acorn data themselves to be homogenised!

Compare the before and after average of the differences:

Fig. 3:

[image: pairwise diffs Rutherglen Raw v Acorn average]

There is now a large positive trend in the differences when the trend should be close to zero.

There are only two possible explanations for this:

(A)  The Bureau used a different set of comparison stations.  If so, the Bureau released false and misleading information. 

(B)  As this surely can’t be true, then if these 17 stations were indeed the ones used, this is direct and clear evidence that the Bureau’s Percentile Matching algorithm for making homogenisation adjustments did not produce correct, successful, or useful results, and further, that no meaningful quality assurance occurred.

If homogenising did not work for Rutherglen minima, it may not have worked at the other 111 stations. 

While I am sure to be accused of “cherry picking”, this analysis is of 100% of the sites for which the identities of comparison stations have been released.  When the Bureau releases the lists of comparison stations for the other 111 sites we can continue the process.

A complete audit of the whole network is urgently needed.

