ACORN-SAT: A Preliminary Assessment

Ken Stewart, May 2012

and

Update, April 2014

Readers may have come to this site via a poster campaign, featuring (with my permission) graphs of adjustments to minimum temperatures at Rutherglen, Victoria.  The graphs compare unadjusted daily temperatures as shown at Climate Data Online with temperatures downloaded from the Acorn site (see below).  The daily temperatures have been smoothed with a 365 day running mean and linear trends are shown.  The amount of adjustment in the Acorn data is clear.  Please read on for a full explanation.

Introduction

In March 2012, a new daily temperature reconstruction was released, called the Australian Climate Observations Reference Network – Surface Air Temperature, or ACORN-SAT (Acorn).  It appears that the previous “High Quality” Annual and Daily series will be quietly forgotten, as it had become apparent that they had significant, but never admitted, problems (see my previous posts: The Australian Temperature Record Part 8: The Big Picture; Part 9: An Urban Myth; and Part 10: BOM’s “Explanations”).  Congratulations are due to the Bureau of Meteorology (BOM) for the excellently presented information, including (some) metadata for all sites, technical papers giving excellent background and describing the homogenising process in some detail, and easily accessible data files.

The authors make a number of claims about Acorn’s quality and robustness; however, a number of problems can be identified.

Acorn’s authors claim that:

  • They have produced a daily record for the last 100 years.
  • Increasing data quantity post 1960 gives more confidence that the warming trend is strong and increasing.
  • Acorn produces similar trends to those already shown by previous Australian and international analyses (which is true), and allows improved analysis of the frequency of hot and cold extremes.
  • There is an approximate balance between positive and negative adjustments for maximum temperature but a weak tendency towards a predominance of negative adjustments (54% compared with 46% positive) for minimum temperature.

Time for a reality check.  Unfortunately,

  • The record is much shorter than 100 years for a significant number of sites.  For many others, the Acorn record has many gaps and contains spurious values, which imply poor quality control.
  • Post 1960 data indeed indicate warming but this is not the case over the whole period.
  • Acorn’s trends indeed reflect those of previous analyses but not those drawn from the raw data.
  • Hot and cold extremes have been adjusted, usually warming winters and cooling summers, and at some sites new and more extreme records have been set.
  • While there may be a numeric balance of positive and negative adjustments, analysis of a representative sample indicates that adjustments predominantly increase warming.
  • The Acorn record is impossible to replicate.

I have downloaded all data for the 112 Acorn sites, maxima and minima.  I have looked at this data from a number of angles: quality of data; length of record; completeness of data; trends in temperature and temperature range; selection of sites; and adjustments at a representative sample of individual Acorn sites.  I have compared Acorn data for this sample with the raw data from the contributing stations, on a daily basis and by calculating 365 day running means.  I have so far limited my analysis to 10 representative sites.  (Each site with data from 1910 to 2011 comprises 37,255 lines of data, so it takes some time to look at each site.  The whole dataset pushes my laptop to its limits!)
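For readers who want to try this at home, the basic smoothing and trend calculation can be sketched in a few lines of Python.  This is a minimal illustration on synthetic data: the function and variable names are mine, and the trailing 365 day window is one possible convention, not necessarily the one used in my spreadsheets or by the Bureau.

```python
import numpy as np
import pandas as pd

def running_mean_and_trend(dates, temps, window=365):
    """Smooth a daily series with a trailing 365 day running mean and
    fit a linear trend, returned in degrees per 100 years."""
    s = pd.Series(temps, index=pd.DatetimeIndex(dates))
    smoothed = s.rolling(window, min_periods=window).mean()
    # Fit a straight line to the raw daily values; the slope is in
    # degrees per day, so scale it to degrees per century.
    x = np.arange(len(s), dtype=float)
    ok = ~np.isnan(s.to_numpy())
    slope, _ = np.polyfit(x[ok], s.to_numpy()[ok], 1)
    return smoothed, slope * 365.25 * 100

# Synthetic check: a series warming at exactly +1.0 C per century
dates = pd.date_range("1910-01-01", "2010-12-31", freq="D")
temps = 20.0 + np.arange(len(dates)) / (365.25 * 100)
smoothed, trend = running_mean_and_trend(dates, temps)
print(round(trend, 2))  # → 1.0
```

On real station data, missing days would need to be handled first; a possible rule for that is described later in this post.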

Preliminary Findings

1. Metadata: The metadata in the Station Catalogue are not complete and are in some cases misleading, as will be shown in the analysis of a sample of sites.

Nevertheless, the metadata file gives some interesting information: many sites by their own description are very poor, with no overlapping data for station moves, but the authors claim the homogenisations successfully account for this.

There is no listing of adjustments, nor is there a list of reference sites used for comparison.  Code has not been released, and it is impossible to replicate Acorn.

2.  The Bureau misleadingly claims that Acorn will “provide a daily record of Australian temperatures over the last 100 years.”  Not for Learmonth: make that less than 37 years.  Here is a graph of the number of sites reporting over time (throughout this post, click on each figure for a closer look):

Fig. 1

Missing data and large gaps plague a number of Acorn sites.  Acorn’s data coverage is 81.4% of the possible 1910-2011 data available at each site (excluding the 8 urban sites).

Selection of sites will be discussed further in points 5 and 6 below; however, there is another consequence of site selection: length of record.  In 1995, Torok and Nicholls constructed a long-term climate record for Australia.  For this they needed long-term sites, those with at least 80 years of monthly data, and were able to create composite records so that 224 stations “were open by 1915”.  Of the 112 Acorn sites, 40 (35.7%) have less than 80 years of daily data.  This increases the uncertainty of any analysis of trends.  The problem is that large amounts of daily data have not yet been digitised; this must be a top priority.

3.  Data Precision:  Although Acorn’s authors specifically address (and dismiss) the issue of the metrication change in 1972 and the incidence of rounded observations in both Fahrenheit and Celsius eras, an audit of all Acorn sites by Chris Gillham has reinforced what we found in our audit of the Annual HQ sites: before 1972 very likely more than 50% of sites’ observations were rounded.  Acorn’s authors admit a possible 0.1C increase in the 1970s as a result of this, but contend that there was too much climate variability due to ENSO activity in this period to verify this.  I would agree about the ENSO variability, however this inability to distinguish between signals itself demonstrates how poor and imprecise the record is.
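To illustrate the kind of check involved, here is a rough sketch of how one might estimate the proportion of rounded observations.  This is my own illustration on synthetic data, not the method used in either audit.

```python
import numpy as np

def whole_degree_fraction(temps, fahrenheit_era=False):
    """Estimate how often observations were rounded to whole degrees.
    If readings were genuinely made to 0.1 of a degree, only about 10%
    would land on a whole degree by chance; a much larger fraction
    suggests systematic rounding.  For the pre-1972 era the check would
    be done on the Fahrenheit equivalent, the scale actually read."""
    t = np.asarray(temps, dtype=float)
    if fahrenheit_era:
        t = t * 9.0 / 5.0 + 32.0  # convert back to the observed scale
    # fraction of values sitting (almost) exactly on a whole degree
    return float(np.mean(np.abs(t - np.round(t)) < 0.05))

# Synthetic check: 80% of 'observations' rounded to the whole degree
rng = np.random.default_rng(0)
raw = rng.uniform(10, 30, 1000)
rounded = rng.random(1000) < 0.8
obs = np.where(rounded, np.round(raw), np.round(raw, 1))
print(whole_degree_fraction(obs) > 0.5)  # → True
```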

4.  Adjustments have reversed cooling trends and strongly increased warming trends apparent in the raw data at the sample of sites studied.  Frequently winters are warmed and summers are cooled, thus reducing extremes.  The earlier part of the record has been cooled and the later part warmed, increasing the apparent warming trend.

I have also calculated daily averages of all Acorn sites, and although there is no distance weighting the results are close to Acorn’s.  Here is a graph of the 365 day running mean of all 104 non-urban Acorn minima (1911-2010):

Fig. 2

Fig. 3 Maxima

Fig. 4 Mean

Fig. 5 Temperature Range

Note: this does not show the expected global warming signal, in which warming should be apparent at night, in winter, and towards the poles, such that the diurnal range decreases.

Note that I show both 2nd order and 4th order polynomials, as well as linear trend lines.  Acorn’s authors use quadratic (2nd order) fits.

I have previously analysed trends in annual data at a network of 89 sites having more than 80 years of data and with sufficient comparative data to make splices when sites change.  I called this the Minimally Adjusted Network (MAN).  Here is the mean for the MAN series:

Fig. 6

This trend (+0.65C/100 years) is nothing like that of Acorn (+0.9C/100 years).  Although Acorn’s authors state that there are almost equal numbers of positive and negative adjustments, the result appears to be stronger warming trends.

5.  Site selection:  The Acorn set of sites is based on Trewin’s 2001 network, and overlaps the HQ Annual sites: of the 134 HQ Annual sites, 78 are common (or close substitutes), with 34 different sites.  The Acorn authors explain the care taken with site selection in CAWCR Technical Report No. 049.  The average trend of annual raw mean temperatures for the unused HQ sites is +0.51C/100 years, while the raw trend for the sites retained is +0.71.  Let me be quite clear: there is no suggestion of deliberate bias in selection.  The omitted sites nearly all lacked sufficient digitised daily data.  However, the mere selection of one site and omission of another influences the climate record, means, anomalies, and trends.  The selection, substitution, deletion, and addition of the Acorn sites is not value free and has its own non-climatic influence on the record.

6.  New sites:  Acorn has introduced new sites for the 100 year record.  As discussed in previous paragraphs, the choice of these sites has an impact on the record.  While the Acorn network keeps approximately the same proportional distribution north/south and inland/coastal as the HQ network, it is intended that more sites will be added in future, especially in remote areas.  Although the necessity of having more sites in remote areas is obvious, adding sites in northern Australia or the outback will intensify any warming signal, as northern and inland sites have much larger swings than coastal and southern sites (see MAN analysis here and here); any warming trend will therefore be exaggerated.

7.  Several sites sampled contain spurious data, due either to human error or to unchecked blanket application of the adjustment algorithm (evident in month by month adjustments), indicating that quality control may not be adequate.

8.  While Acorn is supposed to better compare (and reduce) extremes, at several sites new (and more extreme) records are established, and more extreme temperature swings are created.

9.  It is impossible to replicate the ACORN-SAT record at some sites with the available data and information.

10.  An international panel was invited to review ACORN-SAT prior to publication.  (Note: this does NOT constitute standard peer review.)  They made some interesting observations and a list of recommendations, to which the Bureau has responded.  Here are some observations, with my emphasis.

  • (T)he surface temperature observation network fails to meet the internationally recommended minimum spatial density through much of inland Australia. 

Fig. 7: illustrating this point

From west to east Australia is about 4,000km, which is more than the distance across the USA.

Acorn’s lead author Blair Trewin admits this in a technical paper, saying “Even today, 23 of the 112 ACORN-SAT locations are 100 kilometres or more from their nearest neighbour, and this number has been greater at times in the past, especially prior to 1950.”

  • The WMO Guide states that an acceptable range of error for thermometers (including those used for measuring maximum and minimum temperature) is ±0.2 °C. However, throughout the last 100 years, Bureau of Meteorology guidance has allowed for a tolerance of ±0.5 °C for field checks of either in-glass or resistance thermometers. This is the primary reason the Panel did not rate the observing practices amongst international best practices.
  • The Bureau has advised that for privacy reasons regarding observers the Bureau cannot make its metadata database publicly available through the internet. However, the Panel considers that for transparency reasons it would be useful if sufficient metadata to allow independent replication of homogeneity analyses for individual ACORN-SAT sites was included within the public ACORN-SAT station catalogue being developed by the Bureau.

Acorn’s Methodology

Acorn adjusts the record in two phases: 1. Detecting discontinuities and 2. Homogenising.  These processes are described in detail in  CAWCR Technical Report No. 049 by Blair Trewin.

While the techniques used to detect discontinuities by comparing “neighbouring” station records (stations that are in the same climate regime, having similar climatic variations) are widely used internationally, the adjustment techniques used in Acorn have not been used outside Australia at a national level.  This is a first for BOM, so let’s hope they have done a good job.

Quoting from the above Technical Report, detecting discontinuities involves

a series of pairwise comparisons, with the candidate site being compared one-by-one with its neighbours … in the 41×41 matrix (candidate and 40 neighbours)…

 …For each candidate site, testing for inhomogeneities was carried out separately for time series of mean maximum and minimum temperature anomalies for annual means, and for seasonal means for each of the four calendar seasons (Dec-Feb, Mar-May, Jun-Aug, Sep-Nov)….

 …For each candidate site, 40 neighbouring site time series were chosen from all available Australian sites with some overlapping data with the candidate site.

 And 23 sites have no neighbours within 100km!

The adjustment technique is called the Percentile Matching (PM) algorithm, which

… takes two forms. The first, simpler, form is for the case of merging data from two sites where there is a useful overlap between sites. The second, more complex case, is where there is no overlap (or an overlap too short to be useful), and the adjustment is a two-step process involving the use of neighbouring sites.

 This complicated process is explained in detail in the above report.
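As a rough guide to the flavour of the simpler (overlap) case, here is a sketch of a basic quantile-mapping adjustment.  This is my own reading of the report’s description, not the Bureau’s unreleased code; the site data and parameter choices are invented.

```python
import numpy as np

def percentile_match(old_series, old_overlap, new_overlap, n_q=20):
    """Quantile-mapping sketch of the simpler PM case (two sites with a
    useful overlap): the adjustment applied to an old observation depends
    on where it sits in the old site's distribution, so hot and cold days
    can be shifted by different amounts, unlike a single constant offset."""
    qs = np.linspace(0.05, 0.95, n_q)
    old_q = np.quantile(old_overlap, qs)
    new_q = np.quantile(new_overlap, qs)
    deltas = new_q - old_q  # percentile-by-percentile offsets
    # interpolate an offset for each old observation from its value
    return old_series + np.interp(old_series, old_q, deltas)

# Synthetic check: the 'new' site runs 0.5 C warmer on cold days only,
# so a cold day is adjusted up by ~0.5 and a hot day is left alone.
rng = np.random.default_rng(1)
old_overlap = rng.normal(15, 5, 2000)
new_overlap = old_overlap + np.where(old_overlap < 15, 0.5, 0.0)
adjusted = percentile_match(np.array([5.0, 25.0]), old_overlap, new_overlap)
```

The point of the percentile approach is visible in the toy example: a constant offset would move the hot day and the cold day by the same amount, whereas matching percentiles lets the adjustment vary across the distribution, which is exactly what allows winters and summers to be treated differently.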

Site Analysis

Here is my analysis of a sample of 10 representative Acorn sites.  These are:

Metropolitan (Brisbane Airport), remote desert (Alice Springs), lighthouse (Cape Leeuwin), island (Horn Island), outback (Longreach, Wilcannia), country town (Gunnedah), regional city (Bathurst), and semi-rural (Rutherglen, Nhill).  Apart from Horn Island, these sites were included in the High Quality Annual datasets.  I compare daily temperatures and also use 365 day running means (as the Acorn methodology is based on annual and seasonal means).  Where there are more than 10 days in a month with no recordings, I omit the following 365 days’ means.
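The gap rule just described can be sketched in pandas.  This is a hypothetical implementation: the trailing-window convention, the `min_periods=1` shortcut, and the names are my own.

```python
import numpy as np
import pandas as pd

def masked_running_mean(series, window=365, max_missing=10):
    """If a calendar month has more than `max_missing` missing daily
    values, drop every 365 day running mean whose window touches that
    month.  Uses a trailing window with min_periods=1 for brevity."""
    missing = series.isna().resample("MS").sum()
    bad_months = missing[missing > max_missing].index.to_period("M")
    bad = series.index.to_period("M").isin(bad_months)
    smoothed = series.rolling(window, min_periods=1).mean()
    # a window is tainted if it contains any day from a bad month
    tainted = (pd.Series(bad.astype(float), index=series.index)
               .rolling(window, min_periods=1).max() > 0)
    return smoothed.mask(tainted)

# Synthetic check: two years of constant 20.0 with 15 days missing in
# June 2000, so running means drawing on June 2000 are dropped.
idx = pd.date_range("2000-01-01", periods=730, freq="D")
vals = pd.Series(20.0, index=idx)
vals.loc["2000-06-01":"2000-06-15"] = np.nan
result = masked_running_mean(vals)
```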

Alice Springs

The Alice is a beautiful town set in the MacDonnell Ranges in the centre of Australia.

Fig. 8- Minima

This shows the difference between the Acorn and raw records (Post Office and Airport).

Fig. 9

Fig. 10  Spliced records vs Acorn

The Maxima record has been adjusted to produce extra warming as well.

Fig. 11

Fig. 12 Splice -raw

The authors correctly point out that many mistakes enter the record due to human error, amongst other things.  The Acorn record, despite being world’s best practice, is not above human error.  Here’s a screenshot of one part of the record, 28/1/1944, with the airport’s maximum reading and Acorn’s highlighted.

Fig. 13

A leading “2” entered instead of “3”.

There are others:  Fig. 14

Perhaps they meant 8.4?

If my laptop can find errors such as this, why can’t BOM’s quality control processes?

The review panel, in commenting on the reliability of Acorn for monitoring national trends, had this to say about the scarcity of remote data and Alice Springs in particular:

The Panel considers the ACORN-SAT national anomaly temperature series can be relied on to quantify national climate variability and change.  The Panel is aware that one station, Alice Springs, contributes 7-10% of the signal which is why the Panel encourages adding a limited number of stations in remote areas to improve assessment and monitoring of sub-national regional temperature trends.

What neighbouring stations were used to make adjustments before the 1950s?  There are no sites with overlapping digitised data within cooee of Alice Springs.  Oodnadatta and Tennant Creek are about 460km away, and there are only a very few sites even for later periods.  Yet Alice Springs contributes 7-10% of the national warming signal.  How much do Giles, Tennant Creek, Birdsville, and Horn Island contribute?

Bathurst

Bathurst is a regional centre west of Sydney.  There are no pre-1966 daily data for Bathurst Agricultural Research Station in Climate Data Online, yet Acorn has data from 1910: where does this come from?  There is no way of comparing or replicating.  The authors of Acorn claim that they do not use data from Bathurst Gaol, but the pre-1966 data look suspiciously like the Gaol’s.  Acorn has 1,200 more missing observations than the Gaol, especially around 1950, yet the Gaol (it is claimed) isn’t used, despite having more consistent data.  But they get data from somewhere!  BOM needs to be more transparent about its data sources.

Fig. 15:  Minima

Fig. 16 : splice

Fig. 17: maxima

Acorn adjustments are supposed to improve analysis of climate extremes.  The record high minimum at the Gaol was 28.1 on 14/01/1939, which Acorn reduces to 26.8; the Research Station’s record is 23.  The record low for both the Gaol and the Ag Station is -8.9; Acorn makes the Gaol even colder in 1927 by adjusting to -10.6, and the Ag Station to -9.1 in 1971.  The record high maximum at the Gaol was 40.8 on 11/1/1939; this has been adjusted to 40.7.  The Ag Station’s record high was 40.1 on 15/2/2004, and Acorn agrees with this.  While Acorn correctly identifies some spurious recordings, e.g. a minimum of 16.7 in September 1961, and 2.8 in February 1952, others are questionable.  The biggest adjustments were -13.8 on 2/11/1919 and +12.7 on 2/1/1961, when a cool change came through and the temperature dropped from 30.6 on 1st January to 18.9 on the 2nd.  Acorn changes this to 31.6, yet 40.9mm of rain was measured on the 3rd.

Fig. 18

The adjustment does not appear warranted.  Perhaps they meant 21.6?

Brisbane   

Brisbane’s Acorn record begins in 1948 with the old Eagle Farm Airport, but this excludes the long previous record of the Regional Office, for no apparent reason; these data are not even mentioned.  The result is a warming bias.

Fig. 19 Raw minima

Fig 20: Spliced minima

Fig. 21: Raw maxima

Fig. 22: Spliced maxima

The warming of winters and cooling of summers is visible here:

Fig 23: Acorn less old airport minima

Fig 24: Acorn less old airport maxima

The old Airport’s record high of 39.6 on 14/11/68 has been superseded by Acorn’s 40.2 on 22/02/2004, but the Regional Office had a record of 43.2 on 26/01/1940, which Acorn ignores.  According to Acorn, Brisbane now has a new record low of -2.2 on 22 July 1951 at the old airport, just a few km from the sea, instead of -0.1 on 19/07/2007 at the new airport.

Here is another curious feature in the Maxima record, from when the New Airport data become available on 1/4/1994:

Fig. 25

Note how Acorn is exactly the same as the Old Airport until the changeover, when it follows the New Airport exactly apart from one day.

 Fig. 26

Note that Acorn continues to follow the New Airport data exactly, until it goes missing, when Acorn reverts to the Old Airport data with no adjustments at all.  This only happens with Maxima: Minima are adjusted:

Fig. 27

Laziness?  Certainly not quality controlled.

Cape Leeuwin

Cape Leeuwin is a lighthouse site in the far south west of Western Australia.  Its early maxima have been cooled, increasing the warming trend, and its minima warmed by about 0.3C up to 31/12/1995, decreasing that trend.

Fig. 28 Minima

Fig. 29 Maxima

Adjustments- Fig. 30 Minima

Fig. 31- maxima adjustments

Extremes: Maxima:  Record high on 8/2/1933 adjusted from 42.8 to 40.3; lowest maximum on 2/8/1932 adjusted from 10.1 to 9.6.

Minima:  The record low of 0.0 on 8/5/1960 has been removed as spurious: it seems unlikely for a windswept location right on the sea to be freezing.  However, there is now a new record low of 3.8 on 26/6/1956, adjusted up from 3.3.

Gunnedah

Gunnedah is in northern NSW, just west of the New England Tableland.

Fig. 32 Minima

Fig. 33- maxima

Acorn follows the Resource Centre temperatures almost exactly, as there is an obvious problem with the Pool records before the early 1960s.  We therefore have to make do with another short record.

Horn Island

Horn Island is the island airport for Thursday Island in Torres Strait.

Fig. 34- Minima

Fig.35- Maxima

These are very messy records, with very little overlap.  I found it impossible to make decent splices, so I cannot replicate the record.  However, it is possible to find where Acorn adjustments and splices are made.  The T.I. Met Office is used from 04/09/1950 to 31/12/1992, then T.I. Town data until 31/12/1995, and then Horn Island data.

Fig. 36- Minima adjustments

Fig. 37- Maxima adjustments

The metadata mention a site move in the 1950s, and the Acorn record is plausibly adjusted.  However, what neighbouring overlapping sites were found?  Weipa is 236km away, on the western side of Cape York, and has a gap in the 1950s.  Palmerville is an inland site 633km away.  That leaves Daru in Papua New Guinea, which didn’t start until the late 1950s.  Further, the Horn Island minimum data are adjusted up to 30/12/2005, after which they revert to the raw data.  The metadata in the Station Catalogue do not mention this, so why were these data adjusted until only 6 years ago?  According to Acorn’s authors, “In cases where no reference series is available (e.g. remote islands), techniques such as RHtestsV3, which do not use reference series, are also available”.  However, no explanation is given of what was actually done for Horn Island.  The Acorn record cannot be replicated and may be at best a good guess.

Longreach

Longreach is in central western Queensland and is the home of QANTAS.

Fig. 38 Minima

Fig. 39 Maxima

A long overlap between the town and the airport allows us to make a good splice.

Fig. 40 Minima

Fig. 41 Maxima

Comparison shows large adjustments, cooling the first half of the record and warming the second half.

Fig. 42

Oddly, Acorn has increased a couple of extreme records.

Maxima: The record high of 47.9 on 26/1/1947 was increased to 49.2; the old record low maximum of 10.6 on 5/7/1939 was increased to 10.8.  A new record low maximum of 7.0 was created on 19/6/1913, adjusted from 11.3.

Minima: The highest minimum of 31.7 on 26/1/1947 was adjusted to 30.4; the new record high is 31.4 on 4/2/1968 (up from 27.7).  The old record low of -2.8 on 16/7/1918 was adjusted to -3.4; the new record is -5.4 on 23/6/1949 (down from -2.2).

Here are plots for 1918:

Fig. 43

Fig. 44: 1949

Here are the adjustments made in 1949: notice the month by month changes, and the new record low.  (The vertical divisions are at 30 day intervals, not quite matching calendar months, which I’ve marked in red.)

Fig. 45

Nhill

Nhill is in western Victoria.  Its High Quality adjustments were examined closely when I analysed the HQ record.

Fig. 46 Minima

Fig. 47 Maxima

Fig. 48 splice minima

Fig. 49 splice maxima

In Nhill both minima and maxima trends are strongly warmed.  Past hot temperatures are reduced (from 45.9 to 45.4 on 13/1/1939) but past cold temperatures are increased (17/6/1959: -6.5 to -5.2).

The Acorn Station Catalogue gives no clue that in December 1994 the observer died and the station closed, with no observations from 17th December until 17th January.  In January 1995 the site moved 500 m (to the outskirts of town, a more open location) with no comparative data.  Here are graphs for July to June for minima.

Fig. 50 Minima

Fig. 51 Adjustments

Note again the month by month adjustments.  Note also that minima are increased after the site change: the Technical Paper indicates that, after initial homogenisation, Nhill had an anomalous frequency of extreme cold nights compared with neighbours, so this was corrected.  Now this more open location has warmer nights than it did before the move, and cooler days: exactly the opposite of what you would expect.

Rutherglen

This site is in a vineyard research farm in north-east Victoria.

Fig. 52 Minima

Fig. 53 Maxima

Fig. 54 minima adjustments

The maxima trend has been slightly cooled, but large minima adjustments have reversed cooling and produced steep warming.

The Acorn record has some other peculiarities as well.  There are several separate periods where Acorn’s maxima record is one day too early and has to be corrected.  These are: 1/11/1920 – 19/3/1940, 1/12/1940 – 31/10/1944, 1/5/1946 – 31/10/1947, and 1/12/1947 – 31/1/1948.  How did that slip through quality control?  Even after this has been corrected, there is another glaring error: an adjustment of -8.1 degrees.

Fig. 55

Fig. 56

The record minimum low has been changed.  In the raw data it is -7.5 on 14/06/2006.  Acorn has it as -7.9 on 14/08/1913.  Similarly, the record minimum high is changed from 29.2 on 24/12/1942 to 29.0 on 12/01/1982.

The diurnal temperature range is increasing in the raw data but decreasing in Acorn.

Fig. 57 DTR

Wilcannia

Wilcannia is in far western NSW and its record is plagued with missing data.

Fig. 58 Minima

Fig. 59 Maxima

Fig. 60 Minima adjustments

Fig. 61 Maxima adjustments

Acorn produces a major warming of the short record.

Maxima- lowest maximum on 8/7/1978 adjusted from 7.1 to 6.5; record high on 1/3/1973 adjusted from 48.2 to 47.4.

Minima- a new record low on 18/7/1977 adjusted from -5.0 to -7.2; no change to highest minima of 33.4 on 21/12/1994.

This analysis of 10 representative sites shows that Acorn has a number of problems which must be addressed.

Review Panel’s Recommendations

Basically the panel finds BOM has done a great job.  A number of recommendations have been made, however.  For a start, these are urgent:

Recommendations:

The Panel recommends that the Bureau of Meteorology should implement the following actions:

C1 A list of adjustments made as a result of the process of homogenisation should be assembled and maintained and made publicly available, along with the adjusted temperature series. Such a list will need to include the rationale for each adjustment.

C2 The computer codes underpinning the ACORN-SAT data-set, including the algorithms and protocols used by the Bureau for data quality control, homogeneity testing, and calculating adjustments to homogenize the ACORN-SAT data, should be made publicly available. An important preparatory step could be for key personnel to conduct code walk-throughs for members of the ACORN-SAT team.

C3 Both the raw and the homogenized ACORN-SAT data-sets should be analysed with the same gridding and trend analysis method, to identify the effects of the data homogenisation.

C4 The Bureau should better clarify whether or not there have been any network-wide changes in the instrument/observing practices that took place at all stations across large portions of Australia at about the same time. If so, it will be important to demonstrate how these network-wide changes have been addressed. This is important because tests based on comparing neighbouring station records usually cannot detect network-wide changes. (BOM says this has been addressed.)

C5 The Bureau is encouraged to calculate the adjustments using only the best correlated neighbour station record and compare the results with the adjustments calculated using several neighbouring stations. This would better justify one estimate or the other and quantify impacts arising from such choices.

C6 The Panel notes the intention of the Bureau to consider “in-filling” data gaps in a small number of stations’ data records. The Panel strongly recommends that, if the Bureau proceeds with this work, the processes should be carefully documented, and the in-filled data should be flagged and maintained separately from the original. (BOM says this is a misunderstanding.)

C7 Before public release of the ACORN-SAT dataset the Bureau should determine and document the reasons why the new data-set shows a lower average temperature in the period prior to 1940 than is shown by data derived from the whole network, and by previous international analyses of Australian temperature data. (BOM says this has been covered.)

The Bureau’s responses to all of the review panel’s recommendations are listed here.  Some are very enlightening: BOM appears to be not as keen on public accessibility as the review panel recommends!  BOM is quite happy to calculate monthly means for months with up to 12 missing days.  No mention is made of short-record sites.

Let the review panel have the final say (my emphasis):

The Panel’s overall confidence is derived from its close examination of the Bureau’s observation practices, its network selection methodology, its approach to data homogenisation, and its methodologies for the analysis of trends. All of these factors need to be satisfactorily handled before stakeholders can be confident about findings based on the data; a failure in any one of these factors will result in a loss of stakeholders’ confidence in the system as a whole. But the confidence of public stakeholders needs always to be nurtured in other ways as well. For that reason the Panel has placed special emphasis in its report on the need for good communications and greater transparency of the development and operations of the ACORN-SAT system. A side benefit of this transparency for the Bureau is that useful suggestions for improvements and refinements to the ACORN-SAT system will almost certainly be made.

We can only hope.


34 Responses to “ACORN-SAT: A Preliminary Assessment”

  1. filmisking Says:

    Fantastic effort, Ken. If only they were as rigorous as you.

  2. Ken McMurtrie Says:

    Reblogged this on The GOLDEN RULE and commented:
    Ken’s comprehensive analysis appears to be authentic. It suggests at least that the authenticity is demonstrably better than that of the BOM. Net inference is that the BOM data processing leaves something to be desired, contains suspicious manipulations with insufficient transparent justification which tend to bias the trend to a higher warming, and that they display a reluctance to respond to what could be termed “peer-review” by the panel.
    The article’s conclusion is inserted here for the reader’s interest.
    “The Bureau’s responses to all of the review panel’s recommendations are listed here. Some are very enlightening: BOM appears to be not as keen on public accessibility as the review panel recommends! BOM is quite happy to calculate monthly means for months with up to 12 missing days. No mention is made of short length sites.

    Let the review panel have the final say (my emphasis):

    The Panel’s overall confidence is derived from its close examination of the Bureau’s observation practices, its network selection methodology, its approach to data homogenisation, and its methodologies for the analysis of trends. All of these factors need to be satisfactorily handled before stakeholders can be confident about findings based on the data; a failure in any one of these factors will result in a loss of stakeholders’ confidence in the system as a whole. But the confidence of public stakeholders needs always to be nurtured in other ways as well. For that reason the Panel has placed special emphasis in its report on the need for good communications and greater transparency of the development and operations of the ACORN-SAT system. A side benefit of this transparency for the Bureau is that useful suggestions for improvements and refinements to the ACORN-SAT system will almost certainly be made.

    We can only hope.

  3. Carlos Says:

    Hi Ken,

    Another interesting analysis and thank god there are people like you who take the time to dig into the detail and publish the results.

    After reading several of your updates, I’m trying very hard not to be cynical of what appears to be going on with the data adjustments.

    Can’t we let the raw data stand on its own merits?

    Doesn’t the BoM understand that transparency builds trust?

    Great work!!

  4. kenskingdom Says:

    Thanks! The Bureau is trying very hard to improve its somewhat tarnished image, as they feel they have been unfairly criticised. Unfortunately they leave themselves open to criticism by not releasing the data and code and reasons for adjustments. Also the Acorn dataset has been rushed into publication without checking and is full of mistakes e.g. blank lines which make analysis tedious.

  5. siliggy Says:

    G’day Ken
    Great work!
    There are some clues about “The Meteorological station” at Bathurst in this heatwave story from 1896.

    http://trove.nla.gov.au/ndp/del/article/63935070

  6. IanD Says:

    Ken, good analysis. Have you done any analysis with just the highest daily max per month and the lowest daily min per month? I did this for Sydney a while back and was quite surprised at the result, in that it was a flat line apart from small bumps that aligned in summer with the odd-numbered sunspot cycles – 2003, 1981, 1960

  7. Auditing the adjusting of ACORN temperature down under | Watts Up With That? Says:

    [...] Ken Stewart and the independent BOM analysts team have sliced and diced through the ACORN data. They conclude: [...]

  8. Threat of ANAO Audit means Australia’s BOM throws out temperature set, | The GOLDEN RULE Says:

    [...] Ken Stewart and the independent BOM analysts team have sliced and diced through the ACORN data. They conclude: [...]

  9. Sonny Says:

    The BOM is intentionally rewriting the Australian climate history to fit the political model of global warming. This is a stunning perversion of science. Not even raw data can escape the manipulations of the green movement. George Orwell’s predictions are becoming reality.

  10. Andrew Partington Says:

    Wow that must have taken days and days – excellent work.
    Just wondering – if BOM were comparing the 40 closest stations, and the number of inland stations has increased gradually over the last 100 years but the number of coastal stations has stayed the same or increased less rapidly (which is how it appears on the map, figure 1 CTR_049) could that mean that they were comparing hot inland stations with cool coastal stations more frequently in the early days and comparing hot inland with hot inland stations more frequently, more recently? But considering the high number of coastal stations early on, they were always comparing cool stations with cool ones? If this was the case it could explain the gradual warming trend in the hotter stations.
    The increases might occur in discrete steps and might coincide with the introduction of each new inland station to the temperature record.

    • kenskingdom Says:

      Good point Andrew. However, there is roughly the same proportion of inland to coastal stations now as before. Yet many of the new sites have shorter records, so they will reflect the warming trend over the past 50-60 years, which to my mind would bias the trend.
      Ken

  11. kenskingdom Says:

    Not just days and days I can tell you! Each site took at least one day, some, e.g. Rutherglen, much more.

    Ken

  12. Andrew Partington Says:

    Actually the statistical effect would be the same if there are fewer than forty inland stations to compare to at any one time. The forty closest stations to Alice Springs in 1930 would appear to include Adelaide!!

  13. Andrew Partington Says:

    OMG – re the time it must have taken…!

  14. Richard C (NZ) Says:

    Ken, Part 1 of 2

    I’ve been reading CAWCR Technical Report No. 049 by Blair Trewin (Trewin049) http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf almost word-for-word and have revised my first impression re possible effects on trend by the daily adjustments. On reflection, my previous comment on this was a load of rubbish: I now do not think the daily adjustments have any effect beyond negligible, because they seem to be (for Alice Springs) equally up and down on what must be a data series that has already been adjusted for step changes (aka “breakpoints”). My suspicions were correct, however, wrt daily adjustments: BOM has implemented a methodology that has not been carried out anywhere else. From Trewin049 page 52 pdf:-

    7. DEVELOPMENT OF HOMOGENISED DATA SETS

    [...] The detection of inhomogeneities in a temperature record is a well-developed field of research (see section 7.2) and the methods used in the construction of the ACORN-SAT data set are closely based on those used previously for national-scale networks. However, adjustment of data to remove inhomogeneities at the daily timescale is a much less developed field, with the techniques used in ACORN-SAT not having been used outside Australia for a national-level data set.

    That explains the 34,698 separate daily adjustments but see Part 2 in regard to the sequence of adjustments i.e. step changes then daily?

    Now to the 660 step changes (breakpoints). You have previously answered “No” you did not think BOM carried out cumulative step changes. I should have asked “accumulated” step changes because there is a plot in Trewin049 on page 96 pdf Fig. 29. Accumulated annual mean adjustments (°C) for minimum temperature at Alice Springs, relative to 2009 (more in Part 2). There are definitely 660 step change adjustments in Table 6 page 72 Summary of adjustments (more in Part 2). From Trewin049 reference to these step adjustments:-
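    The “accumulated” adjustments described above can be illustrated with a short sketch. Homogenising to the most recent data means each step adjustment applies to all data before its breakpoint, so the total offset accumulates going back in time. The breakpoint years and offsets below are hypothetical, for illustration only; they are not BOM's actual Alice Springs values.

    ```python
    # Hypothetical step adjustments: (breakpoint year, offset in deg C),
    # each applied to all data before its breakpoint.
    steps = [(1974, -0.3), (1952, -0.5), (1931, +0.2)]

    def accumulated_adjustment(year, steps):
        """Total adjustment applied to data for `year`, relative to the most
        recent data: every step whose breakpoint falls after `year` contributes."""
        return sum(adj for bp, adj in steps if year < bp)

    for y in (1920, 1940, 1960, 1990):
        print(y, round(accumulated_adjustment(y, steps), 2))
    # 1920 -0.6, 1940 -0.8, 1960 -0.3, 1990 0.0
    ```

    This is why a plot of accumulated adjustments relative to 2009 (as in Fig. 29 of Trewin049) generally grows in magnitude toward the start of the record.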

    Page 20 pdf

    3.1 What is meant by homogenisation and composite sites?

    [...] Throughout this report, compositing of sites refers to the process of merging nearby sites to create a “single” location series, taking into account differences between the raw data at the sites that are due to the absolute differences in the climate between them, e.g. one site/location might be inherently warmer than another by a few tenths of a degree.

    And page 27 pdf

    3.4 The role of site composites and comparisons

    [...] The merging of sites to form a composite record requires one to account for systematic differences in temperature data or recording. The ideal situation is that a change substantial enough to warrant a change of site number is carried out with a substantial overlap between the two sites, sufficient to enable a good comparison between the two sites and appropriate adjustments to be determined.

    And page 53 pdf

    7. DEVELOPMENT OF HOMOGENISED DATA SETS

    As discussed in section 3, most major site moves in
    the last 15 years at ACORN-SAT locations have at least some parallel comparison data available, although those data are not always useful in determining an appropriate adjustment

    These “appropriate adjustments” are specific to step changes and are ill-defined but alluded to later in Trewin049 (see Part 2). It is also the early step adjustments that can change a linear trend radically, and these are disputed in the NZT7 by the NZCSET (before a Judge right now).

    Part 2 follows.

    Cross posted JoNova ‘ANAO Audit’, kenskingdom ‘Acorn-Sat’ and Climate Conversations Group ‘If it was settled science’ posts

  15. Richard C (NZ) Says:

    Ken, Part 2 of 2

    From Trewin049 page 54 pdf

    7.2 The detection of inhomogeneities

    [...] A comprehensive search of metadata, both hard-copy and electronic (see section 5), was undertaken to identify changes at a site that could indicate potential inhomogeneities, with a particular emphasis on site moves and significant developments in the vicinity of the observation site. This procedure includes the merging of records from two site numbers (something which was almost always associated with a site move) whether there was an overlap period or not. All such changes were viewed as potential inhomogeneities at this point. (In practice, some of these changes did not have any significant effect on temperature observations; such non-significant ‘inhomogeneities’ were filtered out of analyses during the adjustment process, as described in section 7.7.)

    7.2 goes on in considerable detail.

    And page 57 pdf

    7.3 Adjustment of data to remove inhomogeneities – an overview

    [...] Once potential inhomogeneities have been identified, the next step is to adjust the data to remove the effects of the inhomogeneity (normally by adjusting data prior to the inhomogeneity to make it homogeneous with the most recent data, although the reverse is also possible) and make the data set homogeneous. The practice of homogenising to the most recent data has clear advantages for ongoing monitoring as it allows new data to be simply appended to the location time series (until such time as the next inhomogeneity occurs). Most adjustment techniques that have been used in large-scale climate data sets have used either a uniform annual adjustment (e.g., Della-Marta et al., 2004), or adjustments calculated for each of the 12 calendar months (e.g., Jones et al, 1986). These adjustments have typically been calculated by comparing location means, or their difference with a reference series, before and after an inhomogeneity.
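    The “comparing location means … before and after an inhomogeneity” approach quoted above can be sketched in a few lines. This is synthetic data with an artificial step, not BOM's code or data; it just shows the uniform-annual-adjustment idea, homogenising the earlier data to match the most recent.

    ```python
    import numpy as np

    # Synthetic annual means with an artificial +0.4 deg C step at index 30
    # (e.g. a site move), for illustration only.
    rng = np.random.default_rng(0)
    series = rng.normal(20.0, 0.3, 60)
    series[30:] += 0.4

    # Uniform annual adjustment: difference of means after vs before the
    # breakpoint, applied to the earlier data so it matches the recent data.
    adjustment = series[30:].mean() - series[:30].mean()
    homogenised = series.copy()
    homogenised[:30] += adjustment

    print(round(adjustment, 2))  # close to the inserted 0.4 step
    ```

    In practice a reference series from neighbouring stations would be used rather than the raw location means, but the arithmetic of the step adjustment is the same.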

    This gets us approx half way through Trewin049 and is a good place to stop for now. I’ve skimmed the rest of the report and cannot find a reference or link to a tabulation of the step change adjustments. It is imperative that they are found. 10. CASE STUDIES OF SOME SPECIFIC INHOMOGENEITIES on page 87 does however provide a plot of Accumulated annual mean adjustments (°C) for minimum temperature at Alice Springs, relative to 2009 as mentioned above.

    Trewin049 goes on with the following wrt step adjustment methodology:-

    7.4 The percentile-matching (PM) algorithm
    7.4.1 The overlap case
    7.4.2 The non-overlap case
    7.5 Monthly adjustment method
    7.6 Evaluation of different adjustment methods

    Then the most important on page 70 pdf:-

    7.7 Implementation of data adjustment in the ACORN-SAT

    Note that on page 71 pdf Trewin049 says:-

    After the first round of homogenisation, the homogenised data sets were evaluated, using the following tools:

    So step change adjustments were made in the “first round” of adjustments. I suspect that daily adjustments were made subsequently but haven’t found any confirmation yet.

    More at a later date.

    Cross posted JoNova ‘ANAO Audit’, kenskingdom ‘Acorn-Sat’ and Climate Conversations Group ‘If it was settled science’ posts

  16. roger.edgbaston@gmail.com Says:

    When ACORN-SAT 2012 came out on 5/1/13 I thought I would put some data into a spreadsheet (which I have saved) – I put Hobart and Adelaide minima in for the last 3 years.

    To my surprise the Feb 29 2012 data points were missing – I double checked a few times – but sure enough, according to the BOM, 2012 had only 365 days, not 366.

    Through the BOM climate website I e-mailed them my concerns .

    No acknowledgment to date.

    But when re-checking the ACORN data-set for HBT and ADL mid-last week, the 29/2/2012 data points have now appeared!

    This is a real concern – what with all the BOM’s hype about the QA that has gone into ACORN-SAT, and yet you get these screaming bloopers.

    I thought I should alert you to this.

    Still no email – let alone thanks or apologies from the dweebs @BOM

  17. barry Says:

    Ken, what do you mean by;

    “I have so far limited my analysis to 10 representative sites.”

    1) What does ‘representative’ mean?

    2) Did you choose the stations randomly?

    If your laptop has had a rest, could you compare with 10 different sites randomly selected?

  18. kenskingdom Says:

    Gday Barry, Thanks for your questions. As I mentioned in the article, I chose 10 sites from different geographic types across the country- they were completely random within those geographic types (e.g. What lighthouse will I pick? Cape Leeuwin sounds good) except I deliberately chose Alice Springs as it’s in the centre. 9 of the 10 were in the HQ Annual dataset I had examined in 2010. However I had no idea what I would find in the Acorn record until I had spent more than a day on each one. I was so disappointed in the Acorn dataset, and disgusted with the lack of useful response from BOM to my queries re: the HQ network, that I spent no more time on it. (As well I was busy with personal matters for several months). I had a brief look at Birdsville’s record in January but did not closely analyse it, and I am about to have a closer look at Forrest in WA.
    If I analyse any more sites I would concentrate on the remote ones as they have the largest influence on the climate record. Another group I am interested in would be in Victoria and southern NSW where the HQ record was most strongly adjusted. Or I could number the remaining 102 sites and generate 10 random numbers to pick them, but that would be pretty boring. 10 sites would take about a month as I do have a life, so please don’t hold your breath!
    I am more interested in the post-1979 data (the satellite era), but my main interest is in cycles in weather systems.
    Ken

  19. barry Says:

    Thanks for the reply. Reading some of the information above, it seems we will have to wait for the BoM to make available their codes etc. Well done pursuing your interest. Look forward to a fair appraisal of their adjustments.

    I noticed there was contention about whether the warm/cool adjustments had been roughly equivalent. BoM gave a ratio – more cool than warm. You found otherwise. What was the ratio? I think you suggest they were roughly equal.

    The resulting warmer trend might be explained by more cooling adjustments made to the earlier records and/or vice versa. This was a similar situation with the global and US data sets. I followed as closely as a layman could, the evolution of spot checks and, later, full-blown analyses of raw and adjusted temperatures. It appears that the mean values for those data sets and resultant trends were roughly consonant with the official records (Fall et al, various analyses at The Air Vent [Jeff Condon, Roman M] and The Blackboard [Zeke Hausfather and others]). The US data set includes a suite of adjustments that have a similar impact to the BoM methodology. Fall et al discovered that the min/max data had issues, but that the mean result was corroborated. Perhaps someone may take a similar approach with Australian data, which would be a less onerous task, considering the relatively small number of stations. OTOH, the sparsity of weather stations would present other challenges.

    Comparing lower tropospheric trends with surface for regions is difficult, I think, as the variability (greater than global) would make even the 34-year data set dangerously short to make meaningful comparisons. The UAH trend for the US is lower by 0.1C/decade than the surface trends (1979 – 2008), but Fall et al (Anthony Watts was a co-author) suggests the mean trend for the surface data is solid, based on their ‘best stations’ method.

    Conversely, the differing trends in maximum and minimum temperature among classes cause the average temperature trends to be almost identical, especially for the fully adjusted data. In this case, no matter what CRN class is used, the estimated mean temperature trend for the period 1979–2008 is about 0.32°C/decade

    http://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf

    I would not presume a similar outcome for an investigation of BoM data and trends, but Fall et al should act as a caution against leaping to conclusions. Watts, Pielke et al’s results agreed with the surface records on mean temp trends, and not with the MSU record. But I expect you’re aware of this, Ken, and you don’t seem to be rushing to judgement as others have done.

    I’ll keep your blog in my bookmarks and check in as time goes by, and read any replies here. Keep up the good work.

    Cheers,

    barry.

    • kenskingdom Says:

      I would not presume to estimate the warm/cool adjustment ratio based on only 10 sites. In the sites I looked at there was a predominance of cooling adjustments of early data, resulting in an increase in warming (or complete reversal of cooling). In HQ the mean still showed warming bias, but I now prefer to keep Tmin and Tmax separate as they’re more interesting.
      Ken

  20. barry Says:

    Out of interest I ran a simple regression on UAH monthly data for Australia, 1979 to December 2012, and on the same period of BoM annual data. UAH was 0.1C/decade, BoM was 0.17C/decade, a similar difference to the US comparison I mentioned above.
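    A decadal trend of this kind is just the slope of an ordinary least-squares fit to the anomaly series, scaled to degrees per decade. The sketch below uses synthetic monthly anomalies with a built-in 0.1 C/decade trend, not the actual UAH or BoM data, which would need to be downloaded separately.

    ```python
    import numpy as np

    # Synthetic monthly anomalies, Jan 1979 - Dec 2012, with a built-in
    # 0.1 C/decade (0.01 C/year) trend plus noise; illustration only.
    months = np.arange(34 * 12)
    years = 1979 + months / 12.0
    rng = np.random.default_rng(1)
    anomalies = 0.01 * (years - 1979) + rng.normal(0, 0.15, months.size)

    # OLS slope in C/year, scaled to C/decade.
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    print(f"trend: {slope_per_year * 10:.2f} C/decade")
    ```

    With real data the only change is replacing the synthetic `anomalies` array with the downloaded series for the same months.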

    • kenskingdom Says:

      That is (almost) correct- see http://kenskingdom.wordpress.com/2013/03/17/how-angry-was-summer/. You should use the same comparison period for both i.e. 1981-2010 of course, which shows UAH slightly higher than BOM.

      • barry Says:

        I’m not sure that there is any need to choose the ‘climatology’ of either data set – after all, it’s just a reference point for anomalies.

        But I ran a regression with UAH data for the 1981 – 2010 time period, and came up with the same result you did.

        What I glean from both results is that regional data is more variable than global, and even a 30-year period, shifted by a couple of years, can produce a significant change in the slope. Trends of this length, then, are strongly influenced by variability, something to be cautious about when comparing surface and satellite data. The agreement you got may largely be a result of the time period you chose.

        • kenskingdom Says:

          I calculated climatology for Acorn over 1981-2010 instead of 1961-1990, and applied it to the whole period of the satellite record. Apples with apples. I didn’t cherry pick start or finish dates – exactly the same period. This discussion should be on the other thread.

          Ken
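    The apples-with-apples step Ken describes – recomputing the climatology over 1981-2010 and taking anomalies relative to it – can be sketched as follows. The annual means here are synthetic, not the actual Acorn series; the point is that re-baselining shifts the anomalies but leaves the trend unchanged.

    ```python
    import numpy as np

    # Synthetic annual means for 1979-2012, for illustration only.
    years = np.arange(1979, 2013)
    rng = np.random.default_rng(2)
    annual_means = 21.0 + 0.015 * (years - 1979) + rng.normal(0, 0.2, years.size)

    # Climatology over the 1981-2010 base period, anomalies relative to it.
    base = (years >= 1981) & (years <= 2010)
    climatology = annual_means[base].mean()
    anomalies = annual_means - climatology

    # Subtracting a constant baseline cannot change the regression slope:
    print(round(np.polyfit(years, anomalies, 1)[0] * 10, 3), "C/decade")
    ```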

  21. barry Says:

    “According to Acorn’s authors, “In cases where no reference series is available (e.g. remote islands), techniques such as RHtestsV3, which do not use reference series, are also available”. However, there is no explanation for this.”

    Have they described this process yet? Made a bet with myself it is based on relative humidity indices.

    • kenskingdom Says:

      I don’t expect BOM to describe any of their processes anytime soon. The “RH” in RHtests V3 is a clue, as you worked out. BOM don’t publish long term daily RH so we can’t check.

      • Bill Johnston Says:

        Hi Ken,

        RHtestV3 is a Canadian software package based on “R”; the H stands for homogeneity, nothing to do with relative humidity. As I recall you need to ask for it; I got it and used it a few times, but it is quite specialized.

  22. Jennifer Marohasy » Fiddling Temperatures for Bourke: Part 1, Hot Days Says:

    […] Ken Stewart, ACORN-Sat: A Preliminary Assessment, May 2012. http://kenskingdom.wordpress.com/2012/05/14/acorn-sat-a-preliminary-assessment/ […]

  23. The Australian Temperature Record Revisited: A Question of Balance | kenskingdom Says:

    […] studied a sample of 10 Acorn sites in May 2012, which convinced me that the Acorn dataset has many defects.  However, I have now […]
