By Ken Stewart, ably assisted by Chris Gillham, Phillip Goode, Ian Hill, Lance Pidgeon, Bill Johnston, Geoff Sherrington, Bob Fernley-Jones, and Anthony Cox.
In the previous post of this series I explained how the Bureau of Meteorology presents summaries of weather observations at 526 weather stations around Australia, and questioned whether instrument error or sudden puffs of wind could cause the very large temperature fluctuations, occurring in less than 60 seconds, observed at a number of sites.
The maximum or minimum temperature you hear on the weather report or see at Climate Data Online is not the hottest or coldest hour, or even minute, but the highest or lowest ONE SECOND VALUE for the whole day. There is no error checking or averaging.
A Bureau officer explains:
Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)
Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600.
Automatic Weather Station instruments were introduced from the late 1980s, with the AWS becoming the primary temperature instrument at a large number of sites from 1 November 1996. They are now universal.
An AWS temperature probe collects temperature data every second; there are 60 datapoints per minute. The values given each half hour (and occasionally at times in between) on each station’s Latest Weather Observations page are samples: spot temperatures for the last second of the last minute of that half hour, and the Low Temp or High Temp values on the District Summary page are the lowest and highest one-second readings within that minute of reporting. The remaining seconds of data are filtered out. There is no averaging to find the mean over, say, one minute or ten minutes. There is NO error checking to flag rogue values. The maximum temperatures are dutifully reported in the media, especially if some record has been broken. Quality Control does not occur for at least two or three months, and then just quietly deletes spurious values, long after record temperatures have been spruiked in the media.
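To see how that reduction works in practice, here is a minimal sketch in Python (my own illustration, not Bureau code) of 60 one-second samples being reduced to the three values the Bureau officer describes; the sample values and function name are hypothetical.

```python
# Sketch of the per-minute reduction described above.
# Illustration only: sample values and names are hypothetical, not BoM code.

def reduce_minute(seconds):
    """Reduce 60 one-second readings to the three reported values."""
    return {
        "spot": seconds[-1],   # most recent one-second measurement
        "high": max(seconds),  # highest one-second value in the minute
        "low": min(seconds),   # lowest one-second value in the minute
    }

# A hypothetical minute containing a single one-second spike
minute = [30.1] * 57 + [31.4, 30.2, 30.1]   # 60 one-second readings

print(reduce_minute(minute))   # {'spot': 30.1, 'high': 31.4, 'low': 30.1}

# The daily maximum is simply the highest per-minute "high" over the day:
# a single one-second spike, with no averaging or error checking,
# becomes the maximum you see at Climate Data Online.
```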
In How Temperature is “Measured” in Australia: Part 1 I demonstrated how this method has resulted in large differences being recorded within the exact same minute at a number of stations.
What explanation is there for these differences?
The Bureau will insist they are due to natural weather conditions. Some rapid temperature changes are indeed due to weather phenomena. Here are some examples.
In semi-desert areas of far western Queensland, such as in this example from Urandangi, temperatures rise very rapidly in the early morning.
Fig. 1: Natural rapid temperature increase

For 24 minutes the temperature was increasing at an average of more than 0.2C per minute. That is the fastest I’ve seen, and entirely natural. Yet at Hervey Bay on 22 February the temperature rose more than two degrees in less than a minute, before 6 a.m., many times faster than it rose later in the morning.
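A quick back-of-envelope comparison of those two rates, assuming the Hervey Bay rise of 2.1C (see Fig. 9 below) took the full 60 seconds:

```python
# Rough comparison of warming rates (figures as quoted in the text;
# the Hervey Bay rise of 2.1C is assumed to have taken the full 60 seconds).
natural_rate = 0.2          # C per minute, fastest sustained morning rise seen
hervey_bay_rate = 2.1 / 1   # C per minute, at least (the rise took under a minute)
print(hervey_bay_rate / natural_rate)   # => 10.5, i.e. roughly ten times faster
```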
Similarly, on Wednesday 8 March, a cold change with strong wind and rain came through Rockhampton. Luckily the Bureau recorded temperatures at 4:48 and 4:49 p.m., and in that minute there was a drop of 1.2C.
Fig. 2: Natural rapid temperature decrease

That was also entirely natural, and associated with a weather event.
For the next plots, which show questionable readings, I have supplemented BOM data with data from WOW (Weather Observations Worldwide), an educational site run by the UK Met Office. The Met Office receives data from the BOM at about 10 minutes before the hour, so we have an additional source which increases the sampling frequency. The examples are all well-known locations in Queensland, frequently mentioned on ABC TV weather, and have been selected purely because they show large one-minute changes.
This plot is from Thangool Airport near Biloela, southwest of Rockhampton, on Friday 10 March. The weather was fine, sunny, and hot, with no storms or unusual weather events.
Fig. 3: Temperature spike and rapid fall at Thangool

This one is for Coolangatta International Airport on the Gold Coast on 20 February.
Fig. 4: Temperature spike and rapid fall at Coolangatta

And Maryborough Airport on 15 February:
Fig. 5: Temperature spike and rapid fall at Maryborough (Qld)

Fig. 5(b): The weirdest spike and fall: Coen Airport, 21 March

Thanks to commenter MikeR for finding that one.
All of these were in fine sunny conditions in the hottest part of the day. It is difficult to imagine a natural meteorological event that would cause such rapid fluctuations, in particular such rapid falls, as in the above examples. It is possible they were caused by some other event, such as jet blast or prop wash blowing hotter air over the probe during aircraft movements, quickly replaced by air at the ambient surrounding temperature. It is either that or random instrument error. Either way, the result is the same: rogue outliers are being captured as maxima and minima.
How often does this happen?
Over one week I collected 200 instances where the High Temps and Low Temps could be directly checked, as they occurred in the same minute as the 30-minute observation.
The results are astounding. The differences occurring in readings within the same minute are scattered across the range of temperatures. Most High Temp discrepancies are 0.1 or 0.2 degrees, but a significant number (39% of the sample) show decreases of 0.3C to 0.5C in less than one minute, and five are much larger.
Fig. 6: Temperature change within one minute from maximum

Notice that 95% of the differences were from 0.1C to 0.5C, which suggests that one-minute ranges of up to 0.5C are common and expected, while values above this are true outliers. The Bureau claims (see below) that in 90% of cases AWS probes have a tolerance of +/-0.2C, whereas the 2011 Review Panel referred to “the present ±0.5 °C”. Is the tolerance really +/-0.5C?
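For readers wanting to check this kind of tally themselves, here is a minimal sketch of the method, assuming the paired readings have been collected into a list of (High Temp, final-second spot) values; the figures shown are made-up placeholders, not the actual 200-case sample.

```python
from collections import Counter

# Hypothetical pairs of (High Temp, final-second spot reading) for the same
# minute, standing in for the 200 hand-collected cases described in the text.
pairs = [
    (33.4, 33.3), (29.8, 29.8), (31.2, 30.7), (35.0, 34.6), (27.9, 27.8),
]

# Discrepancy = how far the reported one-second High Temp sits above the
# final-second spot value, rounded to the 0.1C reporting resolution.
drops = [round(high - spot, 1) for high, spot in pairs]

counts = Counter(drops)
for diff in sorted(counts, reverse=True):
    print(f"{diff:+.1f}C fall within the minute: {counts[diff]} case(s)")

# Share of cases exceeding the +/-0.2C tolerance discussed below
outside = sum(1 for d in drops if abs(d) > 0.2)
print(f"{outside / len(drops):.0%} of cases exceed 0.2C")
```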
Fig. 7: Temperature change within one minute from minimum

There was one instance with no difference. The vast majority have a -0.1C difference, which is within the instruments’ tolerance.
This next plot shows the differences (the temperature falls within one minute, from the second with the highest reading to the final second of the minute) ordered from greatest to least.
Fig. 8: Ordered count of temperature falls

The few outliers are obvious. More than half the differences are of 0.1C or 0.2C.
One-minute temperature rises:
Fig. 9: Ordered count of temperature rises

Note the outlier at -2.1C: that was Hervey Bay Airport. Also note only one example with no difference, and the majority at -0.1C.
Is there any pattern to them?
The minimum temperature usually occurs around sunrise, although in summer this varies; it very rarely occurs when the sun is high in the sky. Therefore any rapid temperature rise at this time will be relatively small, as the analysis shows: 80% of the differences between the Low Temps and the corresponding final-second observations were zero or one tenth of a degree, and 91% were two tenths of a degree or less. As the instrument tolerance of AWS sensors is supposed to be +/-0.2C, the vast majority of Low Temps are within this range. Therefore, the Low Temps are not significantly different from the Latest Observation figures. Yet as it is the lowest temperature that is being recorded, all but one example have the Low Temp, and therefore the daily minimum, cooler than the final-second observation. The 9% outside the +/-0.2C range show a real discrepancy, i.e. a very rapid temperature rise within one minute, which is worth investigating. Remember, the fastest morning rise I’ve found averaged about 0.2C per minute.
The High Temps have 56% of discrepancies within the +/-0.2C tolerance range. Daytime temperatures are much more subject to rapid rises and falls. The 44% of discrepancies of 0.3C or more are worth investigating. Many are likely due to small localised air temperature changes, to which AWS probes are very sensitive, but the rapid decreases shown in the examples above, as well as the rapid rises in the Low Temp examples, mean that random noise is likely to be a factor as well.
Have they affected climate analysis?
Comparison of values at identical times has shown that, out of 200 cases, all but one had a higher or lower temperature at some earlier second than at the last second of that minute; a significant number of High Temp observations (39% of the sample) showed decreases of 0.3C to 0.5C in less than one minute, and five showed much larger falls. There is a very high probability that similar differences occur at every station in every state, every day.
In more than half of the sample of High Temps, and over 90% of the Low Temps, the discrepancy was within the stated instrumental tolerance range, and therefore the values are not significantly different, but the higher or lower reading becomes the maximum or minimum, with no tolerance range publicised.
This would of course be an advantage if greater extremes were being looked for.
Nearly 10 percent of minimum temperatures were followed by a rise of more than 0.2C, and 44 percent of maxima were followed by a fall of more than 0.2C. While many of these may have entirely natural causes, none of the very large discrepancies examined had an identifiable meteorological cause. It is questionable whether the mercury-in-glass or alcohol-in-glass thermometers used in the past would have responded as rapidly as this. This must make claims of record temperatures doubtful at best.
If you think that the +/-0.2C tolerance makes no difference in the big picture, as positives will balance negatives and errors will resolve to a net of zero, think again. Maximum temperature is the High Temp value for the day, and 44% of the discrepancies were more than +0.2C. If random instrument error is causing the apparent temperature spikes (and downward spikes in the hot part of the day are not reported unless they show up in the final second of the 30-minute reporting period), only the highest upward spike, with or without positive error, is reported. Negative error can never balance any positive error.
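A simple simulation illustrates why the errors cannot cancel once only the per-minute extreme is kept. This is a sketch with an assumed noise level of +/-0.2C; it is not a claim about the actual error distribution of Bureau probes.

```python
import random

random.seed(1)

# Assume a constant true air temperature of 30.0C over one minute, and add
# zero-mean random "instrument noise" of +/-0.2C to each one-second sample.
TRUE_TEMP = 30.0
NOISE = 0.2

minutes = 10_000
bias_sum = 0.0
for _ in range(minutes):
    seconds = [TRUE_TEMP + random.uniform(-NOISE, NOISE) for _ in range(60)]
    # Only the highest one-second value in the minute is kept...
    bias_sum += max(seconds) - TRUE_TEMP

# ...so even though the noise averages to zero, the retained value does not.
print(f"average excess of per-minute maximum: {bias_sum / minutes:+.3f}C")
# Typically about +0.19C with these assumptions: the negative errors are
# discarded, so they can never offset the positive ones.
```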
Further, these very precise but questionable values then become part of the climate monitoring system, either directly if they are for ACORN stations, or indirectly if they are used to homogenise “neighbouring” ACORN stations. They also contribute to temperature maps, showing for example how hot New South Wales was in summer.
Again, temperature datasets in the ACORN network are developed from historic, not very precise, but (we hope) fairly accurate data from slow-response mercury-in-glass or alcohol-in-glass thermometers observed by humans, merged with very precise but possibly unreliable, rapid-response, one-second data from Automatic Weather Stations. The rapid response and one-second sampling mean that temperatures measured by AWS probes are likely to be some tenths of a degree higher or lower than those from LIG thermometers in similar conditions, and the higher proportion of High Temp differences shown above, relative to Low Temp differences, will lead to higher maxima and means in the AWS era. Let’s consider maxima trends:
Fig. 10: Australian maxima 1910-2016

There are no error bars in any BOM graph. Maxima across Australia as a whole have increased by about 0.9C per 100 years according to the Bureau, based on analysis of ACORN data. Even if across the whole network of 526 automatic stations the instrument error is limited to +/-0.2C, that is 22.2% of the claimed temperature trend. In the past, indeed as recently as 2011 (see below), the instrument tolerance was as high as +/-0.5C, or about half of the 107-year temperature increase. No wonder the Bureau refuses to show error bands in its climate analyses.
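Those percentages are simply the ratio of the tolerance to the claimed trend; a quick check:

```python
# Ratio of instrument tolerance to the claimed century-scale maxima trend.
trend_per_century = 0.9          # C per 100 years (Bureau figure for maxima)

for tolerance in (0.2, 0.5):     # current target vs. the pre-2011 tolerance
    share = tolerance / trend_per_century
    print(f"+/-{tolerance}C tolerance = {share:.0%} of the claimed trend")

# +/-0.2C tolerance = 22% of the claimed trend
# +/-0.5C tolerance = 56% of the claimed trend
```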
There have been NO comparison studies published of AWS probes and LIG thermometers operating side by side. Can temperatures recorded in the past from liquid-in-glass thermometers really be compared with AWS one-second data? The following quotes are from 2011, when an Independent Review Panel gave its assessment of ACORN before its introduction.
Report of the Independent Peer Review Panel, p. 8 (2011)
Recommendations: The Review Panel recommends that the Bureau of Meteorology should implement the following actions:
A1 Reduce the formal inspection tolerance on ACORN-SAT temperature sensors significantly below the present ±0.5 °C. This future tolerance range should be an achievable value determined by the Bureau’s Observation Program, and should be no greater than the ±0.2 °C encouraged by the World Meteorological Organization.
A2 Analyse and document the likely influence if any of the historical ±0.5 °C inspection tolerance in temperature sensors, on the uncertainty range in both individual station and national multidecadal temperature trends calculated from the ACORN-SAT temperature series.
And the BoM Response (2012):
… … … An analysis of the results of existing instrument tolerance checks was also carried out. This found that tolerance checks, which are carried out six-monthly at most ACORN-SAT stations, were within 0.2 °C in 90% of cases for automatic temperature probes, 99% of cases for mercury maximum thermometers and 96% of cases for alcohol minimum thermometers.
These results give us a high level of confidence that measurement errors of sufficient size to have a material effect on data over a period of months or longer are rare.
This confirms that LIG thermometers are more reliably accurate than automatic probes, and that 10% of AWS probes are not accurate to within 0.2C, that is, probes at more than 50 of the 526 sites. If these are in remote areas, their inaccuracy will have an additional large effect on the climate signal. It is to be hoped that Alice Springs, which contributes 7-10% of the national climate signal, is not one of them.
Conclusion
It is very likely that one-minute differences like those found in 199 of a sample of 200 high and low temperature reports are also occurring every day at every weather station across Australia. It is very likely that nearly half of the High Temp cases will differ by more than 0.2 degrees Celsius.
Maxima and minima reported by modern temperature probes are likely to be some tenths of a degree higher or lower than those reported historically using Liquid-In-Glass thermometers.
Daily maximum and minimum temperatures reported at Climate Data Online are just noise, and cannot be used to determine record high or low temperatures.
These problems affect climate analyses directly where they occur at ACORN sites, or indirectly where the affected stations are used to homogenise ACORN sites, and may distort regional temperature maps.
Instrument error may account for between 22% and 55% of the national trend for maxima.
A Wish List of Recommendations (never likely to be adopted):
That the more than 50 sites at which AWS probes are not accurate to +/-0.2 degrees Celsius be identified, and their probes replaced with accurate ones as a matter of urgency.
That the Bureau show error bars on all of its products, in particular temperature maps and time series, as well as calculations of temperature trends.
That the Bureau of Meteorology recode its existing three-criteria filter to zero out spurious spikes, and preferably send them as fault flags to a separate file, in order to improve Quality Control.
That the Bureau replace its one-second spot maxima and minima reports with a method similar to that used for wind speed reports: the average over 10 minutes. That would be a much more realistic measure of temperature.
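As a sketch of what that last recommendation would mean in practice (with made-up sample data, not Bureau observations), a 10-minute mean damps a one-second spike to almost nothing:

```python
# Sketch of the recommended 10-minute mean versus the current one-second extreme.
# Illustration only: the data are hypothetical, not Bureau observations.

# Ten minutes of one-second readings (600 samples) at a steady 30.0C,
# with a single 1.5C one-second spike part-way through.
seconds = [30.0] * 600
seconds[300] = 31.5

one_second_max = max(seconds)                  # what is reported now
ten_minute_mean = sum(seconds) / len(seconds)  # wind speed-style averaging

print(f"one-second maximum: {one_second_max:.1f}C")   # 31.5C
print(f"ten-minute mean:    {ten_minute_mean:.2f}C")  # 30.00C: the spike barely registers
```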