Ken Stewart, May 2012
and
Update, April 2014
Readers may have come to this site via a poster campaign, featuring (with my permission) graphs of adjustments to minimum temperatures at Rutherglen, Victoria. The graphs compare unadjusted daily temperatures as shown at Climate Data Online with temperatures downloaded from the Acorn site (see below). The daily temperatures have been smoothed with a 365-day running mean, and linear trends are shown. The amount of adjustment in the Acorn data is clear. Please read on for a full explanation.
Introduction
In March 2012, a new daily temperature reconstruction was released, called the Australian Climate Observations Reference Network - Surface Air Temperatures, or ACORN-SAT (Acorn). It appears that the previous “High Quality” Annual and Daily series will be quietly forgotten, as it had become apparent that they had significant, but never admitted, problems (see my previous posts: The Australian Temperature Record Part 8: The Big Picture; Part 9: An Urban Myth; and Part 10: BOM’s “Explanations”). Congratulations are due to the Bureau of Meteorology (BOM) for the excellently presented information, including (some) metadata for all sites, technical papers giving excellent background and describing the homogenising process in some detail, and easily accessible data files.
The authors make a number of claims about Acorn’s quality and robustness; however, a number of problems can be identified.
Acorn’s authors claim that:
- They have produced a daily record for the last 100 years.
- Increasing data quantity post 1960 gives more confidence that the warming trend is strong and increasing.
- Acorn produces similar trends to those already shown by previous Australian and international analyses, (which is true), and allows improved analysis of the frequency of hot and cold extremes.
- There is an approximate balance between positive and negative adjustments for maximum temperature but a weak tendency towards a predominance of negative adjustments (54% compared with 46% positive) for minimum temperature.
Time for a reality check. Unfortunately,
- The record is much shorter than 100 years for a significant number of sites. For many others, the Acorn record has many gaps and contains spurious values, implying poor quality control.
- Post 1960 data indeed indicate warming but this is not the case over the whole period.
- Acorn’s trends indeed reflect those of previous analyses but not those drawn from the raw data.
- Hot and cold extremes have been adjusted, usually warming winters and cooling summers, and at some sites new and more extreme records have been set.
- While there may be a numeric balance of positive and negative adjustments, analysis of a representative sample indicates that adjustments predominantly increase warming.
- The Acorn record is impossible to replicate.
I have downloaded all data for the 112 Acorn sites, maxima and minima. I have looked at these data from a number of angles: quality of data; length of record; completeness of data; trends in temperature and temperature range; selection of sites; and adjustments at a representative sample of individual Acorn sites. I have compared Acorn data for this sample with the raw data from the contributing stations, on a daily basis and by calculating 365-day running means. I have so far limited my analysis to 10 representative sites. (Each site with data from 1910 to 2011 comprises 37,255 lines of data, so it takes some time to look at each site. The whole dataset pushes my laptop to its limits!)
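For readers who want to try this themselves, here is a minimal sketch of the smoothing-and-trend method just described, in Python. The data here are synthetic stand-ins (the real series would come from Climate Data Online and the Acorn files), and the function names are mine.

```python
import numpy as np

# Synthetic daily minima standing in for a 1910-2011 station record.
rng = np.random.default_rng(42)
days = np.arange(37255)                     # ~102 years of daily data
t_min = 15 + 5 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 2, days.size)

def running_mean(x, window=365):
    """365-day running mean: one value per complete window of daily data."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

smoothed = running_mean(t_min)              # removes most of the seasonal cycle

# Linear trend fitted to the daily values, expressed as degrees C per 100 years.
slope, intercept = np.polyfit(days, t_min, 1)
trend_per_century = slope * 365.25 * 100
```

The same running mean applied to both the raw and Acorn series, plotted together, gives the kind of comparison graphs shown throughout this post.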
Preliminary Findings
1. Metadata: The metadata in the Station Catalogue are not complete and are in some cases misleading, as will be shown in the analysis of a sample of sites.
Nevertheless, the metadata file gives some interesting information: many sites are, by their own description, very poor, with no overlapping data for station moves, but the authors claim the homogenisations successfully account for this.
There is no listing of adjustments, neither is there a list of reference sites used for comparison. Code has not been released, and it is impossible to replicate Acorn.
2. The Bureau misleadingly claims that Acorn will “provide a daily record of Australian temperatures over the last 100 years.” Not for Learmonth: make that less than 37 years. Here is a graph of the count of sites over time (throughout this post, click on each figure for a closer look):
Missing data and large gaps plague a number of Acorn sites. Acorn’s data coverage is 81.4% of the possible 1910-2011 data available at each site (excluding the 8 urban sites).
Selection of sites will be discussed further in points 5 and 6 below; however, there is another consequence of site selection: length of record. In 1995, Torok and Nicholls constructed a long-term climate record for Australia. For this they needed long-term sites, those with at least 80 years of monthly data, and were able to create composite records so that 224 stations “were open by 1915”. Of the 112 Acorn sites, 40 (35.7%) have less than 80 years of daily data. This increases the uncertainty of any analysis of trends. The problem is that large amounts of daily data have not yet been digitised; this must be a top priority.
3. Data Precision: Although Acorn’s authors specifically address (and dismiss) the issue of the metrication change in 1972 and the incidence of rounded observations in both Fahrenheit and Celsius eras, an audit of all Acorn sites by Chris Gillham has reinforced what we found in our audit of the Annual HQ sites: before 1972 very likely more than 50% of sites’ observations were rounded. Acorn’s authors admit a possible 0.1C increase in the 1970s as a result of this, but contend that there was too much climate variability due to ENSO activity in this period to verify this. I would agree about the ENSO variability, however this inability to distinguish between signals itself demonstrates how poor and imprecise the record is.
4. Adjustments have reversed cooling trends and strongly increased warming trends apparent in the raw data at the sample of sites studied. Frequently winters are warmed and summers are cooled, thus reducing extremes. The earlier part of the record has been cooled and the later part warmed, increasing the apparent warming trend.
I have also calculated daily averages of all Acorn sites, and although there is no distance weighting the results are close to Acorn’s. Here is a graph of the 365 day running mean of all 104 non-urban Acorn minima (1911-2010):
Note: this does not show the expected global warming signal, in which warming should be most apparent at night, in winter, and towards the poles, so that the diurnal range decreases.
Note that I show both 2nd-order and 4th-order polynomials, as well as linear trend lines. Acorn’s authors use quadratic (2nd-order) fits.
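Those trend lines can be fitted as follows; this is a sketch on synthetic annual data (the variable names and the 0.9C/century trend built into the example are mine, not Acorn's).

```python
import numpy as np

# Synthetic annual mean temperatures with a modest warming trend built in.
rng = np.random.default_rng(1)
years = np.arange(1911, 2011)
temps = 15 + 0.009 * (years - 1911) + rng.normal(0, 0.3, years.size)

# Centre the x values to keep the higher-order fits well-conditioned.
x = years - years.mean()

lin = np.polyfit(x, temps, 1)       # linear trend line
quad = np.polyfit(x, temps, 2)      # 2nd-order (quadratic), as Acorn uses
quart = np.polyfit(x, temps, 4)     # 4th-order polynomial

trend_per_century = lin[0] * 100    # degrees C per 100 years
fitted_quad = np.polyval(quad, x)   # for plotting against the data
```

With noisy data, the higher-order fits wiggle to follow decadal variation, which is why showing linear, quadratic, and quartic together is informative.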
I have previously analysed trends in annual data at a network of 89 sites with more than 80 years of data and with sufficient comparative data to make splices when sites change. I called this the Minimally Adjusted Network (MAN). Here is the mean for the MAN series:
This trend (+0.65C/100 years) is nothing like that of Acorn (+0.9). Although Acorn’s authors state that there are almost equal numbers of positive and negative adjustments, the result appears to be stronger warming trends.
5. Site selection: The Acorn set of sites is based on Trewin’s 2001 network, and overlaps the HQ Annual sites: of the 134 HQ Annual sites, 78 are common to Acorn (or close substitutes), leaving 34 Acorn sites that are new. The Acorn authors explain the care taken with site selection in CAWCR Technical Report No. 049. The average trend of annual raw mean temperatures for the unused HQ sites is +0.51C/100 years, while the raw trend for the sites retained is +0.71. Let me be quite clear: there is no suggestion of deliberate bias in selection. The omitted sites nearly all lacked sufficient digitised daily data. However, the mere selection of one site and omission of another influences the climate record: means, anomalies, and trends. The selection, substitution, deletion, and addition of the Acorn sites is not value free and has its own non-climatic influence on the record.
6. New sites: Acorn has introduced new sites for the 100-year record. As discussed above, the choice of these sites has an impact on the record. While the Acorn network keeps approximately the same proportional distribution north/south and inland/coastal as the HQ network, it is intended that more sites will be added in future, especially in remote areas. The need for more sites in remote areas is obvious, but because northern and inland sites have much larger swings than coastal and southern sites (see MAN analysis here and here), adding sites in northern Australia or the outback will exaggerate any warming trend.
7. Several sites sampled contain spurious data, either due to human error or unchecked blanket application of the adjustment algorithm (evident in month by month adjustments), that indicate quality control may not be adequate.
8. While Acorn is supposed to improve the analysis of extremes (and reduce spurious ones), at several sites new (and more extreme) records are established, and more extreme temperature swings are created.
9. It is impossible to replicate the ACORN-SAT record at some sites with the available data and information.
10. An international panel was invited to review ACORN-SAT prior to publication. (Note: this does NOT constitute standard peer review.) They made some interesting observations and a list of recommendations, to which the Bureau has responded. Here are some observations, with my emphasis.
- (T)he surface temperature observation network fails to meet the internationally recommended minimum spatial density through much of inland Australia.
Fig. 7: illustrating this point
From west to east Australia is about 4,000 km, comparable to the distance across the contiguous USA.
Acorn’s lead author Blair Trewin admits this in a technical paper, saying “Even today, 23 of the 112 ACORN-SAT locations are 100 kilometres or more from their nearest neighbour, and this number has been greater at times in the past, especially prior to 1950.”
- The WMO Guide states that an acceptable range of error for thermometers (including those used for measuring maximum and minimum temperature) is ±0.2 °C. However, throughout the last 100 years, Bureau of Meteorology guidance has allowed for a tolerance of ±0.5 °C for field checks of either in-glass or resistance thermometers. This is the primary reason the Panel did not rate the observing practices amongst international best practices.
- The Bureau has advised that for privacy reasons regarding observers the Bureau cannot make its metadata database publicly available through the internet. However, the Panel considers that for transparency reasons it would be useful if sufficient metadata to allow independent replication of homogeneity analyses for individual ACORN-SAT sites was included within the public ACORN-SAT station catalogue being developed by the Bureau.
Acorn’s Methodology
Acorn adjusts the record in two phases: 1. Detecting discontinuities and 2. Homogenising. These processes are described in detail in CAWCR Technical Report No. 049 by Blair Trewin.
While the techniques used to detect discontinuities by comparing “neighbouring” station records (stations in the same climate regime, with similar climatic variations) are widely used internationally, the adjustment techniques used in Acorn have not been used outside Australia at a national level. This is a first for BOM, so let’s hope they have done a good job.
Quoting from the above Technical Report, detecting discontinuities involves
a series of pairwise comparisons, with the candidate site being compared one-by-one with its neighbours … in the 41×41 matrix (candidate and 40 neighbours)…
…For each candidate site, testing for inhomogeneities was carried out separately for time series of mean maximum and minimum temperature anomalies for annual means, and for seasonal means for each of the four calendar seasons (Dec-Feb, Mar-May, Jun-Aug, Sep-Nov)….
…For each candidate site, 40 neighbouring site time series were chosen from all available Australian sites with some overlapping data with the candidate site.
And 23 sites have no neighbours within 100km!
The adjustment technique is called the Percentile Matching (PM) algorithm, which
… takes two forms. The first, simpler, form is for the case of merging data from two sites where there is a useful overlap between sites. The second, more complex case, is where there is no overlap (or an overlap too short to be useful), and the adjustment is a two-step process involving the use of neighbouring sites.
This complicated process is explained in detail in the above report.
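To make the idea concrete, here is a heavily simplified sketch of the first (overlap) form of Percentile Matching: compute the offset between the two sites at each percentile over the overlap period, then adjust the old site’s pre-overlap values percentile by percentile. All data and helper names here are my own illustration; the real algorithm in CTR_049 also works season by season and handles the no-overlap case via neighbouring sites.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overlap period: in this toy case the new site reads a constant 0.6 C warmer.
old_overlap = rng.normal(14.0, 4.0, 2000)
new_overlap = old_overlap + 0.6

# Offset between the sites at each percentile of their overlap distributions.
pcts = np.arange(5, 100, 5)
old_p = np.percentile(old_overlap, pcts)
offsets = np.percentile(new_overlap, pcts) - old_p

def pm_adjust(x):
    """Shift a pre-overlap value by the offset at its estimated percentile."""
    p = np.interp(x, old_p, pcts)           # which percentile is this value at?
    return x + np.interp(p, pcts, offsets)  # apply that percentile's offset

adjusted = pm_adjust(np.array([10.0, 14.0, 20.0]))
```

With a constant offset, every percentile shifts by the same 0.6 C; in real data the offsets differ across percentiles, which is how PM can adjust winter nights and summer days by different amounts.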
Site Analysis
Here is my analysis of a sample of 10 representative Acorn sites. These are:
Metropolitan (Brisbane Airport), remote desert (Alice Springs), lighthouse (Cape Leeuwin), island (Horn Island), outback (Longreach, Wilcannia), country town (Gunnedah), regional city (Bathurst), and semi-rural (Rutherglen, Nhill). Apart from Horn Island, these sites were included in the High Quality Annual datasets. I compare daily temperatures and also use 365-day running means (as the Acorn methodology is based on annual and seasonal means). Where there are more than 10 days in a month with no recordings, I omit the running means for the following 365 days.
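That missing-data rule can be sketched like this (synthetic data; the crude 30-day “months” and variable names are my simplification):

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 3 * 365
t = rng.normal(15.0, 3.0, n_days)
t[400:415] = np.nan                          # 15 missing days in one "month"

months = np.arange(n_days) // 30             # crude 30-day months for the sketch
bad_months = {int(m) for m in np.unique(months)
              if np.isnan(t[months == m]).sum() > 10}

# Drop every 365-day mean whose window touches a month with > 10 gaps.
means = np.full(n_days - 364, np.nan)
for i in range(means.size):
    if not set(months[i:i + 365]) & bad_months:
        means[i] = t[i:i + 365].mean()
```

Windows that overlap the gappy month come out as NaN and simply leave a break in the plotted running-mean line.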
Alice Springs
The Alice is a beautiful town set in the McDonnell Ranges in the centre of Australia.
This shows the difference between the Acorn and raw records (Post Office and Airport).
Fig. 10 Spliced records vs Acorn
The Maxima record has been adjusted to produce extra warming as well.
The authors correctly point out that many mistakes enter the record due to human error, amongst other things. The Acorn record, despite being world’s best practice, is not above human error. Here’s a screen shot of one part of the record, 28/1/1944, with the airport’s maximum reading and Acorn’s highlighted.
A leading “2” entered instead of “3”.
There are others: Fig. 14
Perhaps they meant 8.4?
If my laptop can find errors such as this, why can’t BOM’s quality control processes?
The review panel, in commenting on the reliability of Acorn for monitoring national trends, had this to say about the scarcity of remote data and Alice Springs in particular:
The Panel considers the ACORN-SAT national anomaly temperature series can be relied on to quantify national climate variability and change. The Panel is aware that one station, Alice Springs, contributes 7-10% of the signal which is why the Panel encourages adding a limited number of stations in remote areas to improve assessment and monitoring of sub-national regional temperature trends.
What neighbouring stations were used to make adjustments before the 1950s? There are no sites with overlapping digitised data within cooee of Alice Springs. Oodnadatta and Tennant Creek are about 460 km away, and there are only a very few sites even for later periods. Yet Alice Springs contributes 7-10% of the national warming signal. How much do Giles, Tennant Creek, Birdsville, and Horn Island contribute?
Bathurst
Bathurst is a regional centre west of Sydney. There are no pre-1966 daily data for Bathurst Agricultural Research Station in Climate Data Online, yet Acorn has data from 1910. Where does this come from? There is no way of comparing or replicating. The authors of Acorn claim that they do not use data from Bathurst Gaol, but the pre-1966 data look suspiciously like the Gaol’s. Acorn has 1,200 more missing observations than the Gaol, especially around 1950, yet the Gaol (it is claimed) isn’t used, despite having more consistent data. But the data come from somewhere! BOM needs to be more transparent about its data sources.
Acorn adjustments are supposed to improve analysis of climate extremes.
- Record high minimum: 28.1 on 14/01/1939 at the Gaol, which Acorn reduces to 26.8; the Research Station’s record is 23.
- Record low minimum: -8.9 at both the Gaol and the Ag Station; Acorn makes the Gaol colder still in 1927, adjusting to -10.6, and the Ag Station -9.1 in 1971.
- Record high maximum: 40.8 at the Gaol on 11/1/1939, adjusted to 40.7. The Ag Station’s record high of 40.1 on 15/2/2004 is left unchanged.
- Acorn correctly identifies some spurious recordings, e.g. a minimum of 16.7 in September 1961 and 2.8 in February 1952, but others are questionable.
- The biggest adjustments were -13.8 on 2/11/1919 and +12.7 on 2/1/1961, when a cool change came through and the temperature dropped from 30.6 on 1 January to 18.9 on the 2nd. Acorn changes this to 31.6, yet 40.9 mm of rain was measured on the 3rd.
The adjustment does not appear warranted. Perhaps they meant 21.6?
Brisbane
Brisbane’s Acorn record begins in 1948 with the old Eagle Farm Airport, but this excludes the long previous record of the Regional Office, for no apparent reason; these data are not even mentioned. The result is a warming bias.
The warming of winters and cooling of summers is visible here:
Fig 23: Acorn less old airport minima
Fig 24: Acorn less old airport maxima
The old Airport’s record high of 39.6 on 14/11/1968 has been superseded by Acorn’s 40.2 on 22/02/2004, but the Regional Office had a record of 43.2 on 26/01/1940; the RO has been ignored by Acorn. According to Acorn, Brisbane now has a new record low of -2.2 on 22/07/1951 at the old airport, just a few km from the sea, instead of -0.1 on 19/07/2007 at the new airport.
Here is another curious feature in the Maxima record, from when the New Airport data become available on 1/4/1994:
Note how Acorn is exactly the same as the Old Airport until the changeover, after which it follows the New Airport exactly, apart from one day.
Note also that Acorn continues to follow the New Airport data exactly until they go missing, when Acorn reverts to the Old Airport data with no adjustments at all. This only happens with maxima; minima are adjusted:
Laziness? Certainly not quality controlled.
Cape Leeuwin
Cape Leeuwin is a lighthouse site in the far south-west of Western Australia. Its early maxima have been cooled, increasing the warming trend, and its minima warmed by about 0.3C up to 31/12/1995, decreasing that trend.
Extremes: Maxima: record high on 8/2/1933 adjusted from 42.8 to 40.3; lowest maximum on 2/8/1932 adjusted from 10.1 to 9.6.
Minima: record low of 0.0 on 8/5/1960 has been removed as spurious; it seems unlikely for a windswept location right on the sea to be freezing. However, there is now a new record low of 3.8 on 26/6/1956, adjusted up from 3.3.
Gunnedah
Gunnedah is in northern NSW, just west of the New England Tableland.
Acorn follows the Resource Centre temperatures almost exactly, as there is an obvious problem with the Pool records before the early 1960s. Therefore we have to make do with another short record.
Horn Island
Horn Island is the island airport for Thursday Island in Torres Strait.
These are very messy records, with very little overlap. I found it impossible to make decent splices, so I cannot replicate the record. However, it is possible to find where Acorn adjustments and splices are made. The T.I. Met Office is used from 04/09/1950 to 31/12/1992, then T.I. Town data until 31/12/1995, and then Horn Island data.
Metadata mention a site move in the 1950s, and the Acorn record is plausibly adjusted. However, what neighbouring overlapping sites were found? Weipa is 236 km away, on the western side of Cape York, and has a gap in the 1950s. Palmerville is an inland site 633 km away. That leaves Daru in Papua New Guinea, which didn’t start until the late 1950s. Further, the Horn Island minimum data are adjusted up until 30/12/2005, when the record reverts to the raw data. The metadata in the Station Catalogue do not mention this, so why were these data adjusted until only six years ago? According to Acorn’s authors, “In cases where no reference series is available (e.g. remote islands), techniques such as RHtestsV3, which do not use reference series, are also available”. However, there is no explanation for this. The Acorn record cannot be replicated and may be, at best, a good guess.
Longreach
Longreach is in central western Queensland and is the home of QANTAS.
A long overlap between the town and the airport allows us to make a good splice.
Comparison shows large adjustments, cooling the first half of the record and warming the second half.
Oddly, Acorn has increased a couple of extreme records.
Maxima: 47.9 increased to 49.2 on 26/1/1947; the old record low maximum of 10.6 on 5/7/1939 increased to 10.8. A new record low maximum of 7.0 was created on 19/6/1913, adjusted from 11.3.
Minima: highest minimum of 31.7 on 26/1/1947 adjusted to 30.4; a new high of 31.4 now on 4/2/1968 (up from 27.7). Old record low of -2.8 on 16/7/1918 adjusted to -3.4; a new record of -5.4 now on 23/6/1949 (down from -2.2).
Here are plots for 1918:
Here are the adjustments made in 1949: notice the month-by-month changes, and the new record low. (The vertical divisions are at 30-day intervals, not quite matching calendar months, which I’ve marked in red.)
Nhill
Nhill is in western Victoria. Its High Quality adjustments were examined closely when I analysed the HQ record.
In Nhill both minima and maxima trends are strongly warmed. Past hot temperatures are reduced (from 45.9 to 45.4 on 13/1/1939) but past cold temperatures are increased (17/6/1959: -6.5 to -5.2).
The Acorn Station Catalogue gives no clue that in December 1994 the observer died and the station closed, with no observations from 17th December until 17th January. In January 1995 the site was moved 500 m (to the outskirts of town, a more open location) with no comparative data. Here are graphs for July to June for minima.
Note again the month-by-month adjustments. Note also that minima are increased after the site change; the Technical Paper indicates that after initial homogenisation, Nhill had an anomalous frequency of extreme cold nights compared with neighbours, so this was corrected. Now this more open location has warmer nights than it did before the move, and cooler days: exactly the opposite of what you would expect.
Rutherglen
This site is in a vineyard research farm in north-east Victoria.
The maxima trend has been slightly cooled, but large minima adjustments have reversed cooling and produced steep warming.
The Acorn record has some other peculiarities as well. There are several separate periods where Acorn’s maxima record is one day too early and has to be corrected. These are: 1/11/1920 – 19/3/1940, 1/12/1940 – 31/10/1944, 1/5/1946 – 31/10/1947, and 1/12/1947 – 31/1/1948. How did that slip through quality control? Even after this has been corrected, there is another glaring error: an adjustment of -8.1 degrees.
The record low minimum has been changed. In the raw data it is -7.5 on 14/06/2006; Acorn has it as -7.9 on 14/08/1913. Similarly, the record high minimum is changed from 29.2 on 24/12/1942 to 29.0 on 12/01/1982.
The diurnal temperature range is increasing in the raw data but decreasing in Acorn.
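The comparison is simply DTR = Tmax - Tmin with a linear trend fitted to each version. The series below are synthetic, constructed to behave as described (raw minima warming more slowly than maxima, adjusted minima warming faster); the slopes are illustrative only.

```python
import numpy as np

days = np.arange(36500)                      # ~100 years of daily data
tmax = 28.0 + 0.8e-5 * days                  # maxima warming slowly

raw_tmin = 13.0 + 0.2e-5 * days              # raw minima warm slower -> DTR widens
acorn_tmin = 12.5 + 1.6e-5 * days            # adjusted minima warm faster -> DTR narrows

raw_dtr_trend = np.polyfit(days, tmax - raw_tmin, 1)[0]
acorn_dtr_trend = np.polyfit(days, tmax - acorn_tmin, 1)[0]
```

A positive slope means the range is widening; the sign flip between the two versions is the point at issue here.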
Wilcannia
Wilcannia is in far western NSW and its record is plagued with missing data.
Acorn produces a major warming of the short record.
Maxima- lowest maximum on 8/7/1978 adjusted from 7.1 to 6.5; record high on 1/3/1973 adjusted from 48.2 to 47.4.
Minima- a new record low on 18/7/1977 adjusted from -5.0 to -7.2; no change to highest minima of 33.4 on 21/12/1994.
This analysis of 10 representative sites shows that Acorn has a number of problems which must be addressed.
Review Panel’s Recommendations
Basically the panel finds BOM has done a great job. A number of recommendations have been made, however. For a start, these are urgent:
Recommendations:
The Panel recommends that the Bureau of Meteorology should implement the following actions:
C1 A list of adjustments made as a result of the process of homogenisation should be assembled and maintained and made publicly available, along with the adjusted temperature series. Such a list will need to include the rationale for each adjustment.
C2 The computer codes underpinning the ACORN-SAT data-set, including the algorithms and protocols used by the Bureau for data quality control, homogeneity testing, and calculating adjustments to homogenize the ACORN-SAT data, should be made publicly available. An important preparatory step could be for key personnel to conduct code walk-throughs for members of the ACORN-SAT team.
C3 Both the raw and the homogenized ACORN-SAT data-sets should be analysed with the same gridding and trend analysis method, to identify the effects of the data homogenisation.
C4 The Bureau should better clarify whether or not there have been any network-wide changes in the instrument/observing practices that took place at all stations across large portions of Australia at about the same time. If so, it will be important to demonstrate how these network-wide changes have been addressed. This is important because tests based on comparing neighbouring station records usually cannot detect network-wide changes. (BOM says this has been addressed.)
C5 The Bureau is encouraged to calculate the adjustments using only the best correlated neighbour station record and compare the results with the adjustments calculated using several neighbouring stations. This would better justify one estimate or the other and quantify impacts arising from such choices.
C6 The Panel notes the intention of the Bureau to consider “in-filling” data gaps in a small number of stations’ data records. The Panel strongly recommends that, if the Bureau proceeds with this work, the processes should be carefully documented, and the in-filled data should be flagged and maintained separately from the original. (BOM says this is a misunderstanding.)
C7 Before public release of the ACORN-SAT dataset the Bureau should determine and document the reasons why the new data-set shows a lower average temperature in the period prior to 1940 than is shown by data derived from the whole network, and by previous international analyses of Australian temperature data. (BOM says this has been covered.)
The Bureau’s responses to all of the review panel’s recommendations are listed here. Some are very enlightening: BOM appears to be not as keen on public accessibility as the review panel recommends! BOM is quite happy to calculate monthly means for months with up to 12 missing days. No mention is made of short length sites.
Let the review panel have the final say (my emphasis):
The Panel’s overall confidence is derived from its close examination of the Bureau’s observation practices, its network selection methodology, its approach to data homogenisation, and its methodologies for the analysis of trends. All of these factors need to be satisfactorily handled before stakeholders can be confident about findings based on the data; a failure in any one of these factors will result in a loss of stakeholders’ confidence in the system as a whole. But the confidence of public stakeholders needs always to be nurtured in other ways as well. For that reason the Panel has placed special emphasis in its report on the need for good communications and greater transparency of the development and operations of the ACORN-SAT system. A side benefit of this transparency for the Bureau is that useful suggestions for improvements and refinements to the ACORN-SAT system will almost certainly be made.
We can only hope.
May 15, 2012 at 1:18 am
Fantastic effort, Ken. If only they were as rigorous as you.
May 17, 2012 at 11:53 am
Reblogged this on The GOLDEN RULE and commented:
Ken’s comprehensive analysis appears to be authentic. It suggests at least that his rigour is demonstrably better than the BOM’s. The net inference is that the BOM data processing leaves something to be desired, contains suspicious manipulations with insufficient transparent justification which tend to bias the trend towards higher warming, and that the Bureau displays a reluctance to respond to what could be termed “peer review” by the panel.
The article’s conclusion is inserted here for the reader’s interest.
“The Bureau’s responses to all of the review panel’s recommendations are listed here. Some are very enlightening: BOM appears to be not as keen on public accessibility as the review panel recommends! BOM is quite happy to calculate monthly means for months with up to 12 missing days. No mention is made of short length sites.
Let the review panel have the final say (my emphasis):
The Panel’s overall confidence is derived from its close examination of the Bureau’s observation practices, its network selection methodology, its approach to data homogenisation, and its methodologies for the analysis of trends. All of these factors need to be satisfactorily handled before stakeholders can be confident about findings based on the data; a failure in any one of these factors will result in a loss of stakeholders’ confidence in the system as a whole. But the confidence of public stakeholders needs always to be nurtured in other ways as well. For that reason the Panel has placed special emphasis in its report on the need for good communications and greater transparency of the development and operations of the ACORN-SAT system. A side benefit of this transparency for the Bureau is that useful suggestions for improvements and refinements to the ACORN-SAT system will almost certainly be made.
We can only hope.
May 17, 2012 at 11:12 pm
Hi Ken,
Another interesting analysis and thank god there are people like you who take the time to dig into the detail and publish the results.
After reading several of your updates, I’m trying very hard not to be cynical of what appears to be going on with the data adjustments.
Can’t we let the raw data stand on its own merits?
Doesn’t the BoM understand that transparency builds trust?
Great work!!
May 18, 2012 at 6:15 am
Thanks! The Bureau is trying very hard to improve its somewhat tarnished image, as they feel they have been unfairly criticised. Unfortunately they leave themselves open to criticism by not releasing the data, code, and reasons for adjustments. Also, the Acorn dataset has been rushed into publication without checking and is full of mistakes, e.g. blank lines which make analysis tedious.
May 20, 2012 at 9:03 pm
G’day Ken
Great work!
There are some clues about “The Meteorological station” at Bathurst in this heatwave story from 1896.
http://trove.nla.gov.au/ndp/del/article/63935070
May 29, 2012 at 1:54 pm
Ken, good analysis. Have you done any analysis with just the highest daily max per month, and the lowest daily min per month? I did this for Sydney a while back and was quite surprised at the result: it was a flat line apart from small bumps that aligned in summer with the odd-numbered sunspot cycles – 2003, 1981, 1960.
May 29, 2012 at 3:54 pm
Hi Ian, thank you. No I haven’t drilled down that far but it sounds interesting.
June 18, 2012 at 2:18 pm
[…] Ken Stewart and the independent BOM analysts team have sliced and diced through the ACORN data. They conclude: […]
June 19, 2012 at 11:44 am
[…] Ken Stewart and the independent BOM analysts team have sliced and diced through the ACORN data. They conclude: […]
June 19, 2012 at 11:51 am
The BOM is intentionally rewriting the Australian climate history to fit the political model of global warming. This is a stunning perversion of science. Not even raw data can escape the manipulations of the green movement. George Orwell’s predictions are becoming reality.
June 20, 2012 at 5:35 pm
Wow that must have taken days and days – excellent work.
Just wondering – if BOM were comparing the 40 closest stations, and the number of inland stations has increased gradually over the last 100 years while the number of coastal stations has stayed the same or increased less rapidly (which is how it appears on the map, figure 1 of CTR_049), could that mean that in the early days they were comparing hot inland stations with cool coastal stations more often, and more recently comparing hot inland stations with other hot inland stations? But given the high number of coastal stations early on, were they always comparing cool stations with cool ones? If so, that could explain the gradual warming trend at the hotter stations.
The increases might occur in discrete steps and might coincide with the introduction of each new inland station to the temperature record.
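For what it's worth, the "40 closest stations" pool is straightforward to reconstruct from station coordinates. A sketch with approximate, illustrative coordinates (the function names are mine, and this is not BOM's code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_stations(target, stations, n=40):
    # Rank candidate stations by distance from the target site and
    # return the n closest; n=40 matches the comparison pool size
    # described in CTR_049.
    ranked = sorted(
        stations,
        key=lambda s: haversine_km(target[1], target[2], s[1], s[2]))
    return ranked[:n]

# Illustrative (approximate) coordinates only.
alice = ("Alice Springs", -23.80, 133.89)
candidates = [
    ("Adelaide", -34.93, 138.60),
    ("Tennant Creek", -19.64, 134.18),
    ("Oodnadatta", -27.56, 135.45),
]
closest = nearest_stations(alice, candidates, n=2)
```

Running this over the full network year by year would show directly how often a distant coastal site such as Adelaide falls into an inland station's comparison pool.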
June 20, 2012 at 5:41 pm
Good point Andrew. However, the proportion of inland to coastal stations is roughly the same now as before. Many of the new sites have shorter records, though, so they will reflect the warming trend of the past 50-60 years, which to my mind would bias the trend.
Ken
June 20, 2012 at 5:42 pm
Not just days and days I can tell you! Each site took at least one day, some eg Rutherglen much more.
Ken
June 20, 2012 at 5:47 pm
Actually the statistical effect would be the same if there are fewer than forty inland stations to compare with at any one time. The forty closest stations to Alice Springs in 1930 would appear to include Adelaide!!
June 21, 2012 at 7:08 am
Hadn’t thought of that!
June 20, 2012 at 5:48 pm
OMG – re the time it must have taken…!
June 21, 2012 at 4:48 pm
Ken, Part 1 of 2
I’ve been reading CAWCR Technical Report No. 049 by Blair Trewin (Trewin049) http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf almost word-for-word and have revised my first impression re possible effects on trend by the daily adjustments. On reflection, my previous comment on this was a load of rubbish, and I now do not think the daily adjustments have any effect beyond the negligible, because they seem to be (for Alice Springs) equally up and down on what must be a data series that has already been adjusted for step changes (aka “breakpoints”). My suspicions were correct, however: with regard to the daily adjustments, BOM has implemented a methodology that has not been carried out anywhere else. From Trewin049, page 52 of the pdf:-
That explains the 34,698 separate daily adjustments, but see Part 2 in regard to the sequence of adjustments, i.e. step changes first, then daily?
Now to the 660 step changes (breakpoints). You have previously answered “No”, you did not think BOM carried out cumulative step changes. I should have asked about “accumulated” step changes, because there is a plot in Trewin049 on page 96 of the pdf, Fig. 29, “Accumulated annual mean adjustments (°C) for minimum temperature at Alice Springs, relative to 2009” (more in Part 2). There are definitely 660 step change adjustments in Table 6, page 72, “Summary of adjustments” (more in Part 2). From Trewin049, in reference to these step adjustments:-
Page 20 pdf
And page 27 pdf
And page 53 pdf
These “appropriate adjustments” are specific to step changes and are ill-defined, but alluded to later in Trewin049 (see Part 2). It is also the early step adjustments that can change a linear trend radically; these are disputed in the NZT7 by the NZCSET (before a Judge right now).
Part 2 follows.
Cross posted JoNova ‘ANAO Audit’, kenskingdom ‘Acorn-Sat’ and Climate Conversations Group ‘If it was settled science’ posts
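The “accumulated” adjustments in Fig. 29 follow mechanically from the individual step changes: every value before a breakpoint carries that breakpoint's adjustment, so the earliest years accumulate every later step. A minimal sketch with illustrative numbers (not BOM's actual Alice Springs adjustments):

```python
# Hypothetical breakpoints: (year of change, adjustment in degrees C
# applied to all data before that year). Values are illustrative only.
breakpoints = [(1974, -0.5), (1943, -0.3), (1932, +0.2)]

def accumulated_adjustment(year, breakpoints):
    # A year earlier than several breakpoints accumulates all of
    # their adjustments; a year after the last breakpoint gets none.
    return sum(adj for bp_year, adj in breakpoints if year < bp_year)
```

This is why early steps matter most for trend: an adjustment at 1974 shifts everything before it, so a handful of early negative steps can steepen a century-long linear trend.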
June 21, 2012 at 4:54 pm
Ken, Part 2 of 2
From Trewin049 page 54 pdf
7.2 goes on in considerable detail.
And page 57 pdf
This gets us approximately halfway through Trewin049 and is a good place to stop for now. I’ve skimmed the rest of the report and cannot find a reference or link to a tabulation of the step change adjustments. It is imperative that they are found. “10. CASE STUDIES OF SOME SPECIFIC INHOMOGENEITIES” on page 87 does however provide a plot of accumulated annual mean adjustments (°C) for minimum temperature at Alice Springs, relative to 2009, as mentioned above.
Trewin049 goes on with the following wrt step adjustment methodology:-
7.4 The percentile-matching (PM) algorithm
7.4.1 The overlap case
7.4.2 The non-overlap case
7.5 Monthly adjustment method
7.6 Evaluation of different adjustment methods
Then the most important on page 70 pdf:-
7.7 Implementation of data adjustment in the ACORN-SAT
Note that on page 71 pdf Trewin049 says:-
So step change adjustments were made in the “first round” of adjustments. I suspect that the daily adjustments were made subsequently but haven’t found any confirmation yet.
More at a later date.
Cross posted JoNova ‘ANAO Audit’, kenskingdom ‘Acorn-Sat’ and Climate Conversations Group ‘If it was settled science’ posts
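Section 7.4's percentile-matching (PM) algorithm is not reproduced here, but the general idea of quantile mapping in the overlap case can be sketched. This is a generic illustration under my own assumptions and naming, not BOM's implementation:

```python
import numpy as np

def percentile_match(value, old_overlap, new_overlap):
    # Generic quantile mapping: find the value's percentile rank in
    # the pre-move (old) overlap sample, then return the value at
    # the same percentile of the post-move (new) overlap sample.
    rank = np.searchsorted(np.sort(old_overlap), value)
    pct = 100.0 * rank / len(old_overlap)
    return np.percentile(new_overlap, pct)

# Toy overlap samples: the "new" record runs 1 degree warmer at
# every quantile, so a value maps to roughly itself plus 1.
old = np.arange(100.0)
new = old + 1.0
mapped = percentile_match(50.0, old, new)
```

The appeal of percentile matching over a single offset is that it can adjust the cold tail of the distribution differently from the warm tail, which matters for the extremes analysis Acorn is meant to support.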
January 14, 2013 at 10:03 am
When the ACORN-SAT 2012 came out on 5/1/13 I thought I would put some data into a spreadsheet ( which I have saved) – I put Hobart and Adelaide minima in for the last 3 years.
To my surprise Feb 29 2012 data points were missing – I double checked a few times – but sure enough, according to the BOM, 2012 had only 365 days, not 366.
Through the BOM climate website I e-mailed them my concerns.
No acknowledgment to date.
But when re-checking the ACORN data-set for HBT and ADL mid-last week, the 29/2/2012 data points had now appeared!
This is a real concern – what with all the BOM’s hype about the QA that has gone into ACORN-SAT, you still get screaming bloopers like this.
I thought I should alert you to this.
Still no email – let alone thanks or apologies from the dweebs @BOM
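A missing leap day of this kind is trivial to screen for programmatically. A minimal sketch (the helper name is mine; the series below deliberately reproduces the reported gap):

```python
import datetime

def missing_leap_days(dates):
    # Flag leap years whose Feb 29 is absent from a daily record,
    # the kind of gap reported in the ACORN-SAT 2012 release.
    years = {d.year for d in dates}
    present = {d for d in dates if d.month == 2 and d.day == 29}
    is_leap = lambda y: y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    return sorted(y for y in years
                  if is_leap(y) and datetime.date(y, 2, 29) not in present)

# A 2012 daily series with Feb 29 accidentally dropped:
series = [datetime.date(2012, 1, 1) + datetime.timedelta(days=i)
          for i in range(366)]
series = [d for d in series if not (d.month == 2 and d.day == 29)]
```

A check like this on every station file would catch calendar bloopers before release.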
July 3, 2013 at 2:15 am
Ken, what do you mean by;
“I have so far limited my analysis to 10 representative sites.”
1) What does ‘representative’ mean?
2) Did you choose the stations randomly?
If your laptop has had a rest, could you compare with 10 different sites randomly selected?
July 3, 2013 at 7:49 am
Good morning Barry,
I have to go out today but I will try to reply to you sometime this arvo (probably quite late). Current time 7.45 a.m.
Ken
July 3, 2013 at 8:30 am
I’m in Sydney, off to work myself, at a lazier hour. Have a good day.
July 3, 2013 at 3:55 pm
Gday Barry, Thanks for your questions. As I mentioned in the article, I chose 10 sites from different geographic types across the country- they were completely random within those geographic types (e.g. What lighthouse will I pick? Cape Leeuwin sounds good) except I deliberately chose Alice Springs as it’s in the centre. 9 of the 10 were in the HQ Annual dataset I had examined in 2010. However I had no idea what I would find in the Acorn record until I had spent more than a day on each one. I was so disappointed in the Acorn dataset, and disgusted with the lack of useful response from BOM to my queries re: the HQ network, that I spent no more time on it. (As well I was busy with personal matters for several months). I had a brief look at Birdsville’s record in January but did not closely analyse it, and I am about to have a closer look at Forrest in WA.
If I analyse any more sites I would concentrate on the remote ones as they have the largest influence on the climate record. Another group I am interested in would be in Victoria and southern NSW where the HQ record was most strongly adjusted. Or I could number the remaining 102 sites and generate 10 random numbers to pick them, but that would be pretty boring. 10 sites would take about a month as I do have a life, so please don’t hold your breath!
I am more interested in the post-1979 data (the satellite era), but my main interest is in cycles in weather systems.
Ken
July 3, 2013 at 10:23 pm
Thanks for the reply. Reading some of the information above, it seems we will have to wait for the BoM to make available their codes etc. Well done pursuing your interest. Look forward to a fair appraisal of their adjustments.
I noticed there was contention about whether the warm/cool adjustments had been roughly equivalent. BoM gave a ratio – more cool than warm. You found otherwise. What was the ratio? I think you suggest they were roughly equal.
The resulting warmer trend might be explained by more cooling adjustments made to the earlier records and/or vice versa. This was a similar situation with the global and US data sets. I followed, as closely as a layman could, the evolution of spot checks and, later, full-blown analyses of raw and adjusted temperatures. It appears that the mean values for those data sets and resultant trends were roughly consonant with the official records (Fall et al, various analyses at The Air Vent [Jeff Condon, Roman M] and The Blackboard [Zeke Hausfather and others]). The US data set includes a suite of adjustments that have a similar impact to the BoM methodology. Fall et al discovered that the min/max data had issues, but that the mean result was corroborated. Perhaps someone may take a similar approach with Australian data, which would be a less onerous task, considering the relatively small number of stations. OTOH, the sparsity of weather stations would present other challenges.
Comparing lower tropospheric trends with surface for regions is difficult, I think, as the variability (greater than global) would make even the 34-year data set dangerously short to make meaningful comparisons. The UAH trend for the US is lower by 0.1C/decade than the surface trends (1979 – 2008), but Fall et al (Anthony Watts was a co-author) suggests the mean trend for the surface data is solid, based on their ‘best stations’ method.
Click to access r-367.pdf
I would not presume a similar outcome for an investigation of BoM data and trends, but Fall et al should act as a caution against leaping to conclusions. Watts, Pielke et al’s results agreed with the surface records on mean temp trends, and not with the MSU record. But I expect you’re aware of this, Ken, and you don’t seem to be rushing to judgement as others have done.
I’ll keep your blog in my bookmarks and check in as time goes by, and read any replies here. Keep up the good work.
Cheers,
barry.
July 4, 2013 at 6:48 am
I would not presume to estimate the warm/cool adjustment ratio based on only 10 sites. In the sites I looked at there was a predominance of cooling adjustments of early data, resulting in an increase in warming (or a complete reversal of cooling). In HQ the mean still showed a warming bias, but I now prefer to keep Tmin and Tmax separate as they’re more interesting.
Ken
July 3, 2013 at 11:30 pm
Out of interest I ran a simple regression on UAH monthly data for Australia 1979 – 2013 (December 2012) and the same time period for BoM annual data. UAH was 0.1C/decade, BoM was 0.17C/decade, a similar difference to the US comparison I mentioned above.
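The regression above is simple to reproduce in outline. The series below is synthetic (warming at exactly 0.10 °C/decade) and the function name is mine; a real comparison would use the published UAH and BoM monthly anomalies:

```python
import numpy as np

def trend_per_decade(monthly_anomalies):
    # Ordinary least-squares slope of a monthly anomaly series,
    # converted from degrees C per month to per decade (120 months).
    x = np.arange(len(monthly_anomalies))
    slope = np.polyfit(x, monthly_anomalies, 1)[0]
    return slope * 120

# Synthetic 34-year series warming at exactly 0.10 degrees C/decade:
months = np.arange(34 * 12)
synthetic = months * (0.10 / 120)
```

On a noisy regional series, shifting the window even a couple of years can move a 30-year slope noticeably, which is the variability caveat raised below.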
July 4, 2013 at 6:35 am
That is (almost) correct – see https://kenskingdom.wordpress.com/2013/03/17/how-angry-was-summer/. You should use the same comparison period for both, i.e. 1981-2010 of course, which shows UAH slightly higher than BOM.
July 4, 2013 at 9:24 am
I’m not sure that there is any need to choose the ‘climatology’ of either data set – after all, it’s just a reference point for anomalies.
But I ran a regression with UAH data for the 1981 – 2010 time period, and came up with the same result you did.
What I glean from both results is that regional data is more variable than global, and even a 30-year period, shifted by a couple of years, can produce a significant change in the slope. Trends of this length, then, are strongly influenced by variability, something to be cautious about when comparing surface and satellite data. The agreement you got may largely be a result of the time period you chose.
July 4, 2013 at 3:14 pm
I calculated climatology for Acorn for 1981-2010 instead of 1961-1990, and applied it to the whole period of the satellite record. Apples with apples. I didn’t cherry pick start or finish dates: exactly the same period. This discussion should be on the other thread.
Ken
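For readers wanting to repeat the apples-with-apples comparison, the climatology step amounts to subtracting each calendar month's base-period mean from the whole series. A sketch under my own naming, with constant dummy data standing in for real temperatures:

```python
import numpy as np

def anomalies(monthly_temps, years, base=(1981, 2010)):
    # monthly_temps: array of shape (n_years, 12). Subtract each
    # calendar month's mean over the base period so that surface
    # and satellite series share the same reference climatology.
    mask = (years >= base[0]) & (years <= base[1])
    climatology = monthly_temps[mask].mean(axis=0)  # 12 monthly means
    return monthly_temps - climatology

# Illustrative data: 1979-2013, a constant 20 degrees C every month,
# so every anomaly should come out as zero.
years = np.arange(1979, 2014)
temps = np.full((35, 12), 20.0)
anom = anomalies(temps, years)
```

The baseline choice only shifts the anomaly series by a constant, so it cannot change either trend; using the same base period just keeps the two series visually comparable.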
July 3, 2013 at 11:56 pm
Have they described this process yet? Made a bet with myself it is based on relative humidity indices.
July 4, 2013 at 6:39 am
I don’t expect BOM to describe any of their processes anytime soon. The “RH” in RHtests V3 is a clue, as you worked out. BOM don’t publish long term daily RH so we can’t check.
November 10, 2013 at 9:48 am
Hi Ken,
RHtestV3 is a Canadian software package based on “R”; the “H” stands for homogeneity, nothing to do with relative humidity. As I recall you need to ask for it; I got it and used it a few times, but it is quite specialized.
March 30, 2014 at 11:05 pm
[…] Ken Stewart, ACORN-Sat: A Preliminary Assessment, May 2012. https://kenskingdom.wordpress.com/2012/05/14/acorn-sat-a-preliminary-assessment/ […]
May 16, 2014 at 12:35 pm
[…] studied a sample of 10 Acorn sites in May 2012, which convinced me that the Acorn dataset has many defects. However, I have now […]
December 22, 2016 at 7:56 pm
Ken, I’ve just perused through all this for now. You have done an enormous amount of work. It will take a while to go through all your work and I’m getting a lot of interference from Christmas.
I was asked to write that report from my knowledge and contacts at BOM. I don’t have any more information really, other than what I can collect from my research. Although I do know the brother of one of the ACORN-SAT Forum members and I have just started a dialogue with him, so maybe I can get more info, but I suspect that any info I get from him will be just what he wants me to know and not what I want to know.
Regarding all your work, I hope it hasn’t been in vain. Trump has already announced he’s going to dismantle NASA’s climate division for producing politicized science. Last week his team asked the department’s Bureau of Oceans and International Environmental and Scientific Affairs, “How much does the Department of State contribute annually to international environmental organizations in which the department participates?” He’s sounding like he’s going to defund the IPCC. There will be no more NASA-GISS manipulating of temperature data, and he most probably will put an end to NOAA’s homogenization program as well. You’d expect that to have a flow-on effect to BOM. It’s the IPCC requesting WMO for homogenized data; with no IPCC there will be no requests to BOM for homogenized data. And there will only be Australia and the UK doing it, so it will be a waste of time.
We’ll just have to wait and see.
December 22, 2016 at 8:49 pm
Gday Brendan, do you mind if I contact you by email? I have a lot more recent posts as well, and Christmas is going to put me out of circulation for a few days. Have just found that BOM have put digitised data online so daily data can be checked.
Cheers
Ken S
February 14, 2019 at 11:31 am
[…] I (along with others) found to have very many severe problems. (If you like, check these posts, here, here, here, and here. There are many […]
May 20, 2019 at 7:23 pm
[…] I (along with others) found to have very many severe problems. (If you like, check these posts, here, here, here, and here. There are many […]