Adjustments Grossly Exaggerate Monthly and Seasonal Warming

The Bureau of Meteorology has reportedly claimed “an extensive study has found homogeneity adjustments have little impact on national trends and changes in temperature extremes.”  (Weekend Australian, August 23-24).

I have always said that the true test of the homogenisation process is its effect on national trends.  Problems at individual stations like Rutherglen are merely symptoms of a system wide malady.

If the adjustments really do have “little impact on national trends” then the Acorn dataset is a reliable indicator of broad temperature change in Australia.

If not, the Bureau has a problem.

So, how do we define “little impact”?

The Bureau has known since March 2012 that the mean annual temperature increase from 1911 to 2010 in adjusted data (+0.94C) is 36% greater than in unadjusted data (+0.69C).  This information is publicly available in Table 1 on page 14 of On the sensitivity of Australian temperature trends and variability to analysis methods and observation networks (CAWCR Technical Report No. 050), R.J.B. Fawcett, B.C. Trewin, K. Braganza, R.J. Smalley, B. Jovanovic and D.A. Jones, March 2012 (hereafter CTR-050).  In this paper the authors claim that the rise in unadjusted data is “somewhat smaller”.  If that is so, what increase in trend over unadjusted data should be considered beyond small, or more than “little impact”? 50%? More than 50%?
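(For the record, the 36% is simple arithmetic on those two Table 1 figures; a quick check in Python:)

```python
# Temperature change over 1911-2010 from CTR-050 Table 1 (degrees C):
adjusted = 0.94     # Acorn, adjusted data
unadjusted = 0.69   # "unadjusted" data

increase = (adjusted - unadjusted) / unadjusted * 100
print(f"{increase:.0f}%")  # 36%
```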

What about 200%?

The Bureau has this graphic on their new Adjustments tab, which presumably is meant to support the claim of “little impact”:

Fig. 1: Official comparison (click graphics to enlarge)

BOM graphic

How big is that increase?  The devil is in the detail: monthly and seasonal trends, which the Bureau has yet to analyse.

According to the Bureau, AWAP (Australian Water Availability Project) represents unadjusted data.  (It’s not: CTR-050 itself calls it “partially homogenised”, and there are major issues with it, but that’s another story, to be discussed later.  For now, let’s play along with calling it “unadjusted”.)  Using this same “unadjusted” data, and the same method as the Bureau, here are results for the 1911 – 2013 period.  (See the Appendix below for full details.)

These tables summarize the results.  Highlighted cells show large (>50%) differences.

Fig. 2:  Summary Table: Percentage Increases to Unadjusted Data- Seasons

summary table seasons

The major effect is on summer trend:  increase in Mean trend 64%, Maxima 200%.

Fig. 3:  Summary Table: Percentage Increases to Unadjusted Data- Months

summary table months

In Maxima trends, of the hot months, November, December and January have had large increases, and February and March have had cooling trends reversed.

June and November Mean, Minima, and Maxima trends have been massively increased.

One month (August) has had a warming trend reduced.

May, July, August, and September are largely unchanged.


Compared with ‘unadjusted’ data, for the period 1911 – 2013 Acorn shows obvious changes in monthly and seasonal data.  Exploration of the reasons for this needs to be included in the terms of reference of the forthcoming “independent review”.

The difference between AWAP and Acorn, especially in summer maxima, is of particular concern for anyone wishing to analyse national data.  For example: What was the national summer maximum in 1926?  AWAP says 35.87C.  Acorn says 33.53C.  Which dataset is to be believed?

The Bureau has a problem.

The Acorn dataset is NOT a reliable indicator of broad temperature change in Australia.

Appendix: Background, Charts, Methods, and Analysis

CTR-050 analyses data for the 1911-2010 period, comparing Acorn with several other datasets, including AWAP.  All trends are determined by quadratic fit, rather than linear, to better show the temperature trends across the period: cooling then warming.  The authors also use anomalies from 1981-2010 means.
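The anomaly step is simple enough to sketch in code.  This is my own illustration, not the Bureau’s code; the function name and interface are invented:

```python
import numpy as np

def anomalies(years, temps, base_start=1981, base_end=2010):
    """Convert a temperature series to anomalies relative to the mean of a
    base period (CTR-050 uses 1981-2010; my own analysis below uses 1961-1990)."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    # Boolean mask selecting the base-period years, inclusive of both ends
    in_base = (years >= base_start) & (years <= base_end)
    return temps - temps[in_base].mean()
```

Anomalies remove the absolute-temperature baseline so that different stations and datasets can be compared on the same scale.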

This table shows the change in temperature over the period, which represents the trend per 100 years (and I am annoyed at myself for not reading this more closely two years ago).

Fig.4:  Table 1 from CTR 050:

BOM table 1 comps

The authors explain (pp. 41-46) that the difference between AWAP and Acorn is mainly between 1911 and 1955 and is largely due to the large impact on national temperature of very few remote sites in the earlier years of last century, and station moves to cooler sites around 1930 and the 1940s.  That may certainly be true, but the large discrepancy calls for closer analysis.

My methods

Monthly and annual AWAP data (minima, maxima, and mean) 1911 – 2013 obtained from the Bureau allows analysis of the impact of the adjustments.  I use 1961 – 1990 as the reference period for anomalies.  I also use quadratic trends and calculate temperature change per 100 years by (last quadratic trendline point – first point) X 100/103.  (These first and last points are accurately determined to 0.01C by zooming in on Excel charts; see Figures 22 and 23 below.)  I calculate percentage change in 100 year trend as {(Acorn trend – AWAP trend)/AWAP trend} x 100.

For example:  Annual means.

             Quadratic first point (1911)   Quadratic last point (2013)   Change

AWAP:                 -0.13                          +0.56                 +0.69

Acorn:                -0.34                          +0.58                 +0.92

AWAP Quadratic trend per 100 years =  0.69 X 100/103 = 0.67

Acorn Quadratic trend per 100 years =   0.92 X 100/103 = 0.89

Percentage change in trend = {(0.89 – 0.67) / 0.67} X 100 = 32.8%.
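The whole calculation can be scripted rather than read off chart zooms.  The sketch below is my own (NumPy’s quadratic fit standing in for Excel’s trendline, and the function names are invented); it reproduces the annual-means example above:

```python
import numpy as np

def quadratic_endpoints(years, anomalies):
    """Fit a quadratic to the anomaly series and return the fitted
    trendline values at the first and last years."""
    fit = np.poly1d(np.polyfit(years, anomalies, 2))
    return fit(years[0]), fit(years[-1])

def trend_per_century(first, last, span_years):
    """Endpoint-to-endpoint change scaled to degrees C per 100 years."""
    return (last - first) * 100.0 / span_years

def percent_change(acorn_trend, awap_trend):
    """Percentage increase of the adjusted trend over the 'unadjusted' one."""
    return (acorn_trend - awap_trend) / awap_trend * 100.0

# Worked example (annual means, 1911-2013, span = 103 years),
# starting from the trendline endpoints read off the charts:
awap = trend_per_century(-0.13, +0.56, 103)    # ~0.67 C / 100 yr
acorn = trend_per_century(-0.34, +0.58, 103)   # ~0.89 C / 100 yr
print(round(percent_change(round(acorn, 2), round(awap, 2)), 1))  # ~32.8
```

Note that, as in the worked example, the trends are rounded to 0.01C before the percentage step; carrying full precision through gives a fractionally different figure.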

While my analysis largely confirms the figures in Figure 4 above, the devil is in the detail.

Firstly, here are charts for comparison of mean temperatures, showing linear and quadratic trends to 2013:

Fig. 5: Linear

mean linear

Fig. 6: Quadratic

mean quadratic

Linear analysis produces a trend increase of 31%, a little less than the quadratic.  Acorn adjustments produce a quadratic trend about 32.8% greater than AWAP’s: not as great as for 1911-2010, but still substantial.  Quadratic trend lines produce a better fit than linear and clearly show the earlier cooling.

Fig.7:  Annual Minima

min quadratic

Over 25% increase.

Fig. 8: Annual Maxima

max quadratic

36.7% increase.

Seasonal and Monthly Means:

Fig. 9:  Table of Seasonal Differences for Means.

mean table seasons

Note summer mean trend has been increased by 64%.  Graphs may make the comparison starker.

Fig. 10:  Comparison of 100 year trends in unadjusted and adjusted seasonal data.

mean trends diff seasons

Fig. 11: Percentage Difference in Trends

mean trends diff % seasons

Fig. 12: Comparison of 100 year trends in unadjusted and adjusted monthly data.

mean trends comp

Fig. 13:  Percentage Difference in Trends

mean trends diff % months

February trend doubled, March, June, and November are increased by about 80%.


Fig. 14:  Table of Seasonal Differences for Minima.

min table seasons

Fig. 15:  Comparison of 100 year trends in unadjusted and adjusted seasonal data.

min trends comp seasons

Fig. 16:  Percentage Difference in Trends

min trends diff % seasons

Fig. 17: Comparison of 100 year trends in unadjusted and adjusted monthly data.

min trends comp

Fig. 18:  Percentage Difference in Trends

min trends diff %

Note the doubling of the June minima trend, and October and November increased by 50%.


Fig. 19:  Table of Seasonal Differences for Maxima.

tmax table seasons

Fig. 20:  Comparison of 100 year trends in unadjusted and adjusted seasonal data.

max  trends seasons

Fig. 21:  Percentage Difference in Trends (we need to rescale the y-axis!)

max trends diff % seasons

Don’t believe the 200% figure?  Here are close ups of the graph.

Fig. 22:  Summer maxima detail

max summer quadratic bottom

Fig. 23:

max summer quadratic top

Fig. 24: Comparison of 100 year trends in unadjusted and adjusted monthly data.

max trends comp

Note cooling trends in February and March reversed; August reduced.

Fig. 25:  Percentage Difference in Trends

max trends diff % months

Strong August warming slightly reduced.  No calculation for February and March, whose cooling trends were reversed (a percentage change is meaningless when the sign of the trend flips).  January, June, December greatly warmed.  November massively warmed.

Why the huge discrepancies between unadjusted and adjusted data?

Acorn data freely available at

AWAP data available at a cost on request from


23 Responses to “Adjustments Grossly Exaggerate Monthly and Seasonal Warming”

  1. Geoff Sherrington Says:

    Ken, Spent much of the last 4 days analysing Rutherglen Tmin daily data. Much work formatting Excel to get apples sitting next to apples. Just starting interpretation. Early stages indicate either an influence of different observers or non-raw ‘raw’ data in blocks with different rounding. It might turn out that statistical break point detection is detecting different observers. More work needed to exclude this possibility. There are other emerging lines of inquiry. They are complementary to your splendid work.

    Between us citizen scientists, there will emerge a story, because we are working objectively with data while others are investing in social talk, like making claims of the magnificence of peer review.

    If you like, I will bounce prelim findings off you. This cursed illness has robbed me of a lot of past ability to perform and express complex ideas.

    Well done with your latest post. I am still absorbing some finer points.


  2. Geoff Sherrington Says:

    P.s. Ken, I forgot to mention the term ‘statistical overfitting’. Search it for several references that describe the Acorn problem, very broadly expressed by me here as making heroically large adjustments one after another, some overlapping and compounding, to fix a perceived problem of heterogeneity of a similar order of magnitude, with both adjustments and breaks being numerically large compared to the total change in the data set. Like, the cure is worse than the ailment.

    You are encouraged to keep hammering that virtually any adjustment MUST change the trend; so you have mutual exclusion: you can fix the breaks but butcher the trends, or you can preserve the trends and bugger up the step change size. One or the other, not both. Cheers Geoff.

    • kenskingdom Says:

      Good morning Geoff, good to hear from you. I would be glad to see your findings as I think there’s a systemic problem with the adjustments.

  3. Ian George Says:

    Fantastic work, Ken. Let’s hope that Birmingham et al will look at your research to grill the BoM and find out how this has come to pass.

    Paraphrasing an old quote:
    If you torture the figures for long enough, they’ll ‘fess up to anything.

  4. peter azlac Says:

    Hi Ken

    Looking at the Trewin CTR report I see that they state:
    `Out of the 112 locations in the ACORN-SAT network (Trewin 2012a; Trewin 2012b), we omit from the analyses eight locations classified as urban, either because they are in the centres of major urban areas, or are in more peripheral locations but show evidence of anomalous temperature trends, in comparison to their surrounds. Those omitted stations are: 023090 Adelaide (Kent Town), 032040 Townsville Aero, 039083 Rockhampton Aero, 066062 Sydney (Observatory Hill), 067105 Richmond RAAF, 086071 Melbourne Regional Office, 087031 Laverton RAAF, and 094029 Hobart Ellerslie Road.`

    So straight off we have an admission of a warming bias in the Acorn data.

    Then, as I stated at the blog site of Jennifer Mahorasy:

    `……. In contrast, for Australia, the BEST climate site gives details of over 180 sites with continuous or near continuous temperature records of over 100 years. Of these some 75 have Tmin and Tmax records for the same periods. As Tmin and Tmax are important in determining the causes of any temperature change as reflected in DTR values it is critical that stations used in the compilation of a series such as BEST, CRU, GISS, NCDC and Acorn are based on such records alone.
    As one would expect these sites are mainly located in areas of agriculture, predominantly in the NE and SE (140), with limited number elsewhere. Of the total some 70 have Tmin and Tmax values that match the length of the records, out of which 60 are in the NE and SE and only 11 in the SW with none in the NW. Yet Acorn, that BOM claims is based on the best available records, uses only 26 of the BEST sites with 100 years of Tmin and Tmax data; instead using other sites where the Tmin and Tmax records only run post 1940 and for 55 of their 112 stations using sites that only started recording temperatures at all from this time onwards. From these they stitch together at temperature trend that they claim is representative of the true trend for Australia. But, the result is, as Matt Briggs says (, not data at all but modeled synthetic data that may conform to other synthetic series like BEST and the rest, but tells us little about the real trends in climate change. This is difficult to understand as as Trewin in a paper given at the Workshop on Pan evaporation cited above –emphasized the importance of Tmib and Tmax values:

    Click to access nc-ess-pan-evap.pdf

    So, to me, the problem is worse than you find, in that Acorn is not based on all the available long-term records with minimum and maximum values but a selective set, with most only having this data after the 1950s. Is this also true of the Trewin data? If so, neither analysis has any value, as to understand climate change one needs to take into account the effects of precipitation, soil type etc (as I detailed earlier at the same site), and for this minimum and maximum temperatures are an absolute necessity and Tav worse than useless, as you show, in that it introduces substantial bias.

    The evaluation that needs doing for Australia is to use only the 70 or so long-run records from 1910 or earlier and show trends for Tmax and Tmin compared to precipitation, as discussed in the above Workshop on Pan Evaporation, and to do this by Köppen Zones and not neat grids. The output should be given by Köppen Zone and not as some meaningless national average.

    • kenskingdom Says:

      Firstly, I regret your comment went into moderation per WordPress while I was getting my beauty sleep.
      Climate analyses do NOT include the 8 sites with UHI. I don’t know why they even bother with them for that reason.
      Acorn is an awful dataset as you say, for many and various reasons, including stations with short records. CTR050 discusses precipitation but discounts its significance. AWAP is an index including precipitation (amongst others). Both AWAP and Acorn are not suitable for climate analyses.

  5. Jo Nova Says:

    Excellent work Ken. This is very useful analysis, something CSIRO might have done back in the days when it was run by scientists…

  6. in looking at outlooks … | pindanpost Says:

    […] On Stewart’s site there is discussion about the value of quadratic trends versus linear trends, but the main point is that the BOM use quadratic trends, so Stewart copied their approach. As far as I’m concerned, all trends from either AWAP or ACORN are hopelessly compromised. Both are area-weighted, gridded data, neither are “unadjusted”, and there are far too many anomalies, and adjustments that can’t be justified with documentation. These are statistical creations. […]

  7. peter azlac Says:

    Hi Ken

    I applaud your efforts in showing how BOM have manipulated the Australian temperature series through dubious homogenisation, but the real problem is not only that they have done this but, as I said in my earlier comment, that they have been selective in the sites they include in their series, which has biased the results. You will be able to show their fudges via your analyses, but the problem goes beyond the selection and analysis of the station data to the question of whether station data has meaning outside the local areas it was intended for – that is, to supply farmers and others with relevant climate information. As someone who has worked in agricultural research for over 50 years, including spells in Africa as a soil scientist, I am more aware than most of the problems of extrapolating meteorological data too far from where it is collected, especially if the interpretation into crop selection then involves different patterns of soils, solar insolation, topography, precipitation etc. Of these, soil type, which affects soil moisture and evaporation rates, is of major importance as it has substantial impacts on Tmin and Tmax. What BOM see as discontinuities can well be real effects from responses to changes in these factors – some of the discontinuities discussed for Rutherglen, Bourne etc relate to hot dry years when soil moisture would have been low and given such responses.

    I have commented extensively on this subject in responses at the sites of Jennifer Mahorasy and Jo Nova as well as directly with Jennifer. I have put together the more important points for you to ponder if you would like to get in touch directly by email.

  8. Robert B Says:

    There was an interesting reply to a comment of mine in WUWT. Apparently Richmond RAAF site 67105 which was opened in 1993 has data from mid 1939 in Acorn. Sydney Airport has data from April 1939.

    Have they infilled data for almost 60 years from a station 50km closer to the coast?

  9. Global Warming Concerns Grow - Page 5 - Defending The Truth Political Forum Says:

    […] work here on the BOM fraud in Australia. Adjustments Grossly Exaggerate Monthly and Seasonal Warming | kenskingdom […]

  10. Keith Says:

    Ken, I saw a very interesting approach from Steve Goddard where he compares surface data with satellite data for a grid including that city surface station.

    The surface data shows warming which he ascribes to UHI, whereas the satellite data shows a minor cooling trend.

    It seems to me like a smart approach but I do not have the data skills to check it.

    I thought it might be interesting to bring it to your attention.

    Best wishes, and your work is much appreciated – Keith

  11. Truthseeker Says:

    Tony Heller (aka Steven Goddard) nails it very succinctly …

  12. DaveR Says:


    Good news from NZ, where a new paper by de Freitas et al exposes the faulty national temperature series adjustments made by NIWA.

    Now for the Bureau of Meteorology.

    Best summary here:

  13. My Submission to the BOM Review Panel | kenskingdom Says:

    […] (For further information and full explanation see…). […]

  14. The Bureau Boss on Temperature Trends, Heatwaves, and Climate Change | kenskingdom Says:

    […] an explanation for my interest in comparison with AWAP data, see my analysis of monthly and seasonal differences in trends between AWAP and Acorn from October last year.  My calculations indicate a 200% increase in trend in summer […]

  15. coworking Says:


    Adjustments Grossly Exaggerate Monthly and Seasonal Warming | kenskingdom

  16. After 15 Weeks, the Bureau Responds With Non-Answers | kenskingdom Says:

    […] has been undertaken.  Apparently I am the only person to have done this, and my results showing massive differences in maxima trends, largely due to just two adjustments, have not been […]

  17. ACORN-SAT 2.0: Western Australia- A State of Confusion | kenskingdom Says:

    […] found to have very many severe problems.  (If you like, check these posts, here, here, here, and here.  There are many […]

  18. ACORN-SAT 2.0: Nation-wide Summary | kenskingdom Says:

    […] to have very many severe problems.  (If you like, check these posts, here, here, here, and here.  There are many […]

