Posts Tagged ‘bom’

Garbage In, Garbage Out

August 7, 2017

(By Ken Stewart, assisted by Bill Johnston and Phill Goode; and cross-posted with Jo Nova)

Early ABC Radio news bulletins last Wednesday morning were led by this item, which you can read in full at ABC Online.

More climate scientists needed to avoid expensive mistakes, review urges

Apparently we urgently need 77 climate scientists to predict the future of areas like the Murray-Darling Basin with climate modelling.

Interestingly, Professor McDougall of the Australian Academy of Science points out that one of those “expensive mistakes” was the $2 billion desalination plant built in Queensland as a response to the millennium drought, “which really wasn’t an indication of climate change at all”.   Why didn’t the good professor raise his voice before the money was wasted?

But I digress.

Reliable modelling and projections for the future are surely desirable.

But such modelling must be based on reliable data, and the reliability of temperature data in Australia is demonstrably poor.

Example 1:  As has been widely reported in The Australian, and by Jennifer Marohasy and Jo Nova, cold temperatures at two separate sites (and possibly many others) were altered to appear warmer, then changed back, then deleted.  The BOM gave two conflicting explanations, both of which cast grave doubt on the reliability of “raw” temperature data from an unknown number of stations.

Example 2:  When I enquired why there are frequently two different temperature readings for exactly the same minute at various weather stations, a Bureau spokesperson told me:

Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)

(See here and here.)

In other words, Australian maximum and minimum temperatures are taken from ONE SECOND readings from Automatic Weather Stations.  Spikes due to localised gusts of hot air, or instrument error, become the maximum for the day.  (This rarely has a large effect on minima, as night time temperatures are fairly smooth, whereas during the day temperature bounces rapidly up and down.  This is shown in this plot of temperatures at Thangool Airport in Queensland on Australia Day this year.)

[Figure: Thangool Airport, 26 January 2017, one minute temperatures]

And this is for the same day between 3.00pm and 4.00pm.

[Figure: Thangool Airport, 26 January 2017, 3.00pm to 4.00pm]

As you can see, the temperature spikes up and down in the heat of the day by up to one degree between one minute and the next.  But these are only the temperatures at the final second of each minute: during the intervening 59 seconds the temperature is spiking up and down as well.  We know this because occasionally the highest or lowest temperature for the day occurs in the same minute as a final second recording on the BOM database (usually on the hour or half hour), and it can be up or down by two or three degrees in less than 60 seconds.

This is in contrast to the rest of the world.  To combat this very problem of noisy data, the WMO recommends that 1 minute (60 second) averages of temperature be recorded, and this is followed in the UK.  In the USA, 5 minute (300 second) averages are calculated.
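The difference between these sampling schemes can be sketched with a little Python on synthetic data.  Everything below is invented for illustration (it is not BOM code or data): a flat 33C hour with sensor noise and a handful of one second spikes standing in for gusts of locally heated air.

```python
import random

random.seed(1)

# One hour of hypothetical 1 Hz readings: a 33 C base with small sensor
# noise, plus occasional 1 s spikes standing in for gusts of locally
# heated air or instrument error.
trace = []
for _ in range(3600):
    t = 33.0 + random.gauss(0, 0.1)
    if random.random() < 0.002:        # a handful of spikes per hour
        t += random.uniform(0.5, 2.0)
    trace.append(t)

def max_of_block_means(series, window):
    """Highest mean over consecutive non-overlapping blocks of `window` seconds."""
    means = [sum(series[i:i + window]) / window
             for i in range(0, len(series), window)]
    return max(means)

one_second_max = max(trace)                      # Australian method
wmo_60s_max    = max_of_block_means(trace, 60)   # WMO / UK: 1 minute means
usa_300s_max   = max_of_block_means(trace, 300)  # USA: 5 minute means

print(f"highest 1 s value:  {one_second_max:.2f} C")
print(f"highest 60 s mean:  {wmo_60s_max:.2f} C")
print(f"highest 300 s mean: {usa_300s_max:.2f} C")
```

Because a mean can never exceed the largest value it averages, the one second figure is always the highest of the three, and every short spike it captures goes straight into the record.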

From THE WEATHER OBSERVER’S HANDBOOK by Stephen Burt (Cambridge University Press, 2012):

[Image: extract from The Weather Observer's Handbook]

Even without software or human interference as in Example 1, this means Australian temperature data, in particular maxima, are not reliable.

Example 3:  Historically, temperatures were observed from Liquid In Glass (LIG) thermometers.  From the 1990s, Automatic Weather Stations (AWS) were installed using Platinum Resistance Thermometers (PRT) and are now the source for daily data.  AWS thermometers are very precise, but as I showed in Example 2, their data is used idiosyncratically to record 1 second spikes, frequently resulting in higher maxima and less often slightly lower minima than a 1 or 5 minute average.

One would think that with such a major change in technology there would be comparative studies reported in the BOM’s meteorological journal or other “peer reviewed” literature.  Apparently not.

Dr Bill Johnston has investigated this and says:

Parallel data were collected all over Australia for over a decade, some until last year when thermometers were removed, at manned sites, mainly major airports (Ceduna, Sydney, Hobart, Perth, Darwin, Alice Springs, Albany, Norfolk Island, Wagga to name a few) and also met-offices such as Cobar and Giles. However, comparisons between screens were done at one site only (Broadmeadows, Melbourne, which is not even an official weather station) using PRT only and reported as a “preliminary report”, which is available (https://www.wmo.int/pages/prog/www/IMOP/WebPortal-AWS/Tests/ITR649.pdf) however, after AWS became primary instruments, as I’ve reported before, the Bureau had an internal policy that parallel liquid-in-glass thermometer data were not databased. Furthermore, they had another policy that paper-data was destroyed after 2-years. So there is nothing that is easily available…. there is also no multi-site replicated study involving screen types and thermometers vs. PRT probes ….

Deliberate destruction of data is scandalous; the only way now to compare Automatic Weather Stations (AWS) and liquid-in-glass thermometers is to hunt for sites where there is overlap between two stations, where the AWS is given a new number. This is possible BUT the problem is that the change-over is invariably confounded with either a site move or the change to a small screen.

Therefore we suspect that the introduction and reliance on AWS has led to artificially higher maxima (and thus record temperatures) than in the past, but we have no way of knowing for sure or how much.

So we now have (1) temperatures that are altered before they even become ‘raw’ data; (2) use of one second spikes for recording daily maximum and minimum temperatures, very probably resulting in artificially high maxima and slightly lower minima; and (3) no way of telling how the resulting data compare with those from historical liquid-in-glass thermometers.

How can the CSIRO hope to produce reliable climate modelling with any number of climate scientists when the BOM cannot produce reliable temperature data?  Garbage in, garbage out.

TC Debbie

March 29, 2017

TC Debbie hit the Whitsunday coast and areas to the south and inland yesterday.  As I spent nearly half my life in places not far from Mackay and have many friends in the region, I was very interested to see what was happening.   I began checking online from 5 a.m. Tuesday morning.

Here is some initial analysis of TC Debbie.  Firstly, here is the table of cyclone intensities as found at http://www.bom.gov.au/cyclone/faq/index.shtml#definitions .

Fig. 1:  Cyclone Intensity



Fig. 2:  0500 forecast cyclone track map.


How accurate was the Bureau’s forecast?  Here is the forecast 22 hours later, at 0300 Wednesday morning.

Fig. 3:  Wednesday 0300 forecast cyclone track map.


The track forecast was pretty good.

The next images show Debbie’s progress across the Whitsunday Islands until the eyewall crossed the coast near Airlie Beach.

Fig. 4:  0720 Eyewall about to hit Hamilton Island


Fig. 5:  0910  Hamilton Island near the eyewall, Hayman Island in the eye


Fig. 6:  10.30  Hamilton Island near the eyewall, Hayman Island in the eye, and the eyewall about to pass over Airlie Beach


And four and a half hours later, the worst is over at Hamilton and Hayman Island and the eye is collapsing over Proserpine.

Fig. 7:  1510  Debbie weakening near Proserpine


Note the “gap” in the image in the northwest sector.  The Bowen radar failed and the Mackay radar was blocked by high mountains to the west.

What about forecasts of the cyclone’s intensity?

The next figures show plots of wind gusts, pressure, temperature, and rain at Hamilton Island, Proserpine, and Bowen, the closest stations to the cyclone’s track.

Fig. 8:  Wind gusts at Hamilton Island


The black line shows the period from just before 8.00 a.m. until about 2.30 p.m. during which Hamilton Island was close to the eyewall, the area of maximum wind strength.  For nine hours, from before 6.00 a.m. until nearly 3.00 p.m., wind gusts were of Category 3 strength.  From 8.00 a.m. until 12.30 p.m. gusts approached or exceeded 225 km/hr, bordering on Category 4, and between 10.30 and 10.35 reached 263 km/hr at least three times; the Bureau had forecast winds up to 270 km/hr.  While the station at Hamilton Island is too high to be completely reliable, these data are indicative that winds at 10 metres were at Category 4 level for some time.

Fig. 9:  Air Pressure at Hamilton Island


The red line shows the period from just before 8.00 a.m. until about 2.30 p.m. during which Hamilton Island was near the eyewall, the area of maximum wind strength.  From 2.00 a.m. until 5.00 p.m. pressure was below 985 hPa (Cat. 2), and from 10.00 a.m. until 1.30 p.m. was below 970 hPa (Cat. 3), but did not reach 955 hPa (Cat. 4).  Remember however that Hamilton Island was some 50 km from the centre of the eye, so 955 hPa is quite possible for central pressure.

On the basis of wind gusts and pressure at Hamilton Island, I believe Debbie was a strong Category 3, weak Category 4 system.

Fig. 10:  Air temperature at Hamilton Island


Note the sudden jump in temperature from 8.12 a.m. (3 degrees in 3 minutes), coinciding with a wind gust of 212 km/hr; the reading kept climbing to unbelievable values.  (Compare with Proserpine below.)  It is likely that the AWS probe malfunctioned, and it failed altogether at 12.00 noon.

Fig. 11:  Rain at Hamilton Island


Rain measurement is unlikely to be accurate in such ferocious winds.  Note how rainfall levelled off from 11.00 a.m. until 2.00 p.m., then increased after 3.00 p.m.

Fig. 12:  Wind gusts at Proserpine


Proserpine Airport is some 20 km inland, 41 km west of Hamilton Island and 56 km from Bowen.  As the cyclone arrived over land it began losing strength and the eye began to shrink.  From 10.00 a.m. until 2.00 p.m. gusts were at Category 2 strength, and at 1.00 p.m. reached the magic 165 km/hr of Category 3 strength.  They were very probably much stronger in the town itself, 9.1 km north.

Fig. 13:  Pressure at Proserpine Airport


From 12.30 p.m. until 5.00 p.m. the pressure at the airport, some 20-30 km from the centre, was below the Category 3 value of 970 hPa.

Wind gust and pressure data indicate Debbie was very likely still Category 3 as it passed over Proserpine town.

Fig. 14:  Air temperature at Proserpine


Fairly stable temperature with only about 1.5C range all day.

Fig. 15:  Rain at Proserpine


Steady rain all day, fairly typical of cyclonic conditions.  At Strathdickie, not far from Proserpine, 193 mm fell in one hour that morning, and at Dalrymple Heights, about 50 km south, 814 mm fell in 24 hours.

Fig. 16:  Wind gusts at Bowen


For four and a half hours wind gusts reached Category 2 strength, and were above 100 km/hr from 9.00 a.m. to 8.00 p.m.

Fig. 17:  Pressure at Bowen


Pressure was at Category 2 levels from 9.00 a.m.

Fig. 18:  Air temperature at Bowen


Winds were west south west most of the day, but as Debbie passed and winds turned northwest (over the ocean), the temperature climbed.

Fig. 19:  Rain at Bowen


Steady rain all day: 12 inches (about 300 mm) in 12 hours.

While no stations were directly in the cyclone’s path, nearby station data indicate that Debbie was a large Category 3 to Category 4 tropical cyclone when it hit the coast and brought very strong winds, very heavy rainfall, and widespread destruction.  It is still lingering as a tropical low 300 km inland, bringing more strong winds and very heavy rain, and will head south over the next couple of days.  The clean up begins.  We await the report from James Cook University engineers who will provide their assessment of damage and wind loadings in a few weeks’ time.

Give credit where credit is due: the Bureau of Meteorology got this one pretty right.

How Temperature is “Measured” in Australia: Part 2

March 21, 2017

By Ken Stewart, ably assisted by Chris Gillham, Phillip Goode, Ian Hill, Lance Pidgeon, Bill Johnston, Geoff Sherrington, Bob Fernley-Jones, and Anthony Cox.

In the previous post of this series I explained how the Bureau of Meteorology presents summaries of weather observations at 526 weather stations around Australia, and questioned whether instrument error or sudden puffs of wind could cause very large temperature fluctuations in less than 60 seconds observed at a number of sites.

The maximum or minimum temperature you hear on the weather report or see at Climate Data Online is not the hottest or coldest hour, or even minute, but the highest or lowest ONE SECOND VALUE for the whole day.  There is no error checking or averaging.

A Bureau officer explains:

Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)

Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600.

Automatic Weather Station instruments were introduced from the late 1980s, with the AWS becoming the primary temperature instrument at a large number of sites from November 1 1996.  They are now universal.

An AWS temperature probe collects temperature data every second; there are 60 datapoints per minute.  The values given each half hour (and occasionally at times in between) at each station’s Latest Weather Observations page are samples: spot temperatures for the last second of the last minute of that half hour, and the Low Temp or High Temp values on the District Summary page are the lowest and highest one second readings within that minute of reporting.  The remaining seconds of data are filtered out.  There is no averaging to find the mean over, say, one minute or ten minutes.  There is NO error checking to flag rogue values.  The maximum temperatures are dutifully reported in the media, especially if some record has been broken.  Quality control does not occur for at least two or three months, and then just quietly deletes spurious values, long after record temperatures have been spruiked in the media.
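A minimal sketch of that selection logic, with invented numbers (the structure of the per-minute triple follows the Bureau officer's description quoted above):

```python
# Each minute the AWS reports three values: the final 1 s reading, and
# the highest and lowest 1 s readings of that minute. These numbers are
# hypothetical.
minutes = {
    "12:59": (33.2, 33.3, 33.1),
    "13:00": (33.0, 34.6, 32.9),   # a 1 s spike hides inside this minute
    "13:01": (33.1, 33.2, 33.0),
}

# The Latest Weather Observations page shows only the final-second
# value on the hour or half hour...
shown_1300 = minutes["13:00"][0]

# ...but the High Temp (and eventually the daily maximum) is the
# highest of ALL the highest-1 s values, with no averaging or checking.
high_temp = max(high for _last, high, _low in minutes.values())

print(f"shown at 1:00pm: {shown_1300} C")
print(f"High Temp:       {high_temp} C")
```

The spike inside the 1:00pm minute never appears on the observations page, yet it becomes the High Temp for the day.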

In How Temperature is “Measured” in Australia: Part 1 I demonstrated how this method has resulted in large differences recorded in the exact same minutes at a number of stations.

What explanation is there for these differences? 

The Bureau will insist they are due to natural weather conditions.  Some rapid temperature changes are indeed due to weather phenomena.  Here are some examples.

In semi-desert areas of far western Queensland, such as in this example from Urandangi, temperatures rise very rapidly in the early morning.

Fig. 1:  Natural rapid temperature increase


For 24 minutes the temperature was increasing at an average of more than 0.2C per minute.  That is the fastest I’ve seen, and entirely natural.  Yet at Hervey Bay on 22 February the temperature rose more than two degrees in less than a minute, before 6 a.m., many times faster than it did later in the morning.

Similarly, on Wednesday 8 March, a cold change with strong wind and rain came through Rockhampton.  Luckily the Bureau recorded temperatures at 4:48 and 4:49 p.m., and in that minute there was a drop of 1.2C.

Fig. 2:  Natural rapid temperature decrease


That was also entirely natural, and associated with a weather event.

For the next plots, which show questionable readings, I have supplemented BOM data with data from an educational site run by the UK Met Office, WOW (Weather Observations Website).  The Met Office gets data from the BOM at about 10 minutes before the hour, so we have an additional source which increases the sample frequency.  The examples selected are all well-known locations in Queensland, frequently mentioned on ABC TV weather.  They have been selected purely because they are examples of large one minute changes.

This plot is from Thangool Airport near Biloela, southwest of Rockhampton, on Friday 10 March.  The weather was fine, sunny, and hot, with no storms or unusual weather events.

Fig. 3:  Temperature spike and rapid fall at Thangool


This one is for Coolangatta International Airport on the Gold Coast on 20th February.

Fig. 4:  Temperature spike and rapid fall at Coolangatta


And Maryborough Airport on 15th February:

Fig. 5:  Temperature spike and rapid fall at Maryborough (Qld)


Figure 5(b):  The weirdest spike and fall:  Coen Airport 21 March 


Thanks to commenter MikeR for finding that one.

All of these were in fine sunny conditions in the hottest part of the day.  It is difficult to imagine a natural meteorological event that would cause such rapid fluctuations, in particular rapid falls, as in the above examples.  It is possible they were caused by some other event, such as jet blast or prop wash blowing hotter air over the probe during aircraft movement, quickly replaced by air at the ambient surrounding temperature.  It is either that or random instrument error.  Either way, the result is the same: rogue outliers are being captured as maxima and minima.

How often does this happen?

Over one week I collected 200 instances where the High Temps and Low Temps could be directly checked as they occurred in the same minute as the 30 minute observation.

The results are astounding.  The differences occurring in readings in the same minute are scattered across the range of temperatures.  Most High Temp discrepancies are of 0.1 or 0.2 degrees, but there is a significant number (39% of the sample) with 0.3C to 0.5C decreases in less than one minute, and five much larger.

Fig. 6:  Temperature change within one minute from maximum


Notice that 95% of the differences were from 0.1C to 0.5C, which suggests that one minute ranges of up to 0.5C are common and expected, while values above this are true outliers.  The Bureau claims (see below) that in 90% of cases AWS probes have a tolerance of +/-0.2C, whereas the 2011 Review Panel mentioned “the present +/- 0.5 °C”.  Is the tolerance really +/-0.5C?

Fig. 7:  Temperature change within one minute from minimum


There was one instance where there was no difference.  The vast majority have a -0.1C difference, which is within the instruments’ tolerance.

This next plot shows the differences (temperature falls in one minute from the second with the highest reading to that of the final second) ordered from greatest to least.

Fig. 8:  Ordered count of temperature falls


The few outliers are obvious.  More than half the differences are of 0.1C or 0.2C.

One minute temperature rises:

Fig. 9:  Ordered count of temperature rises


Note the outlier at -2.1C: that was Hervey Bay Airport.  Also note only one example with no difference, and the majority at -0.1C.
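The tallying behind Figures 6 to 9 can be reproduced in miniature.  The pairs below are invented stand-ins for the 200 collected cases; only the method is the point:

```python
from collections import Counter

# Hypothetical (High Temp, final-second reading) pairs for the same
# minute, in degrees C.
pairs = [
    (30.5, 30.3), (28.1, 28.0), (33.4, 32.9), (25.7, 25.6),
    (31.0, 30.8), (29.9, 29.9), (34.2, 33.7), (27.5, 27.3),
]

# How far the reported High Temp sits above the final-second value.
diffs = [round(hi - last, 1) for hi, last in pairs]
tally = Counter(diffs)

within_tol = sum(1 for d in diffs if d <= 0.2)   # inside the +/-0.2 C tolerance
print("distribution:", sorted(tally.items()))
print(f"within +/-0.2 C: {within_tol} of {len(diffs)}")
```

Running the same tally over the real sample gives the counts graphed above.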

Is there any pattern to them? 

The minimum temperature usually occurs around sunrise (although in summer this varies), and very rarely when the sun is high in the sky.  Therefore rapid temperature rise at this time will be relatively small, as the analysis shows: 80% of the differences between the Low Temps and corresponding final second observations were zero or one tenth of a degree, and 91% were two tenths of a degree or less.  As the instrument tolerance of AWS sensors is supposed to be +/-0.2C, the vast majority of Low Temps are within this range, and therefore not significantly different from the Latest Observation figures.  Yet as it is the lowest temperature that is being recorded, all but one example have the Low Temp, and therefore the daily minimum, cooler than the final second observation.  9% are outside the +/-0.2C range and show a real discrepancy, i.e. very rapid temperature rise within one minute, that is worth investigating.  Remember, the fastest morning rise I’ve found averaged about 0.2C per minute.

The High Temps have 56% of discrepancies within the +/-0.2C tolerance range.  Daytime temperatures are much more subject to rapid rises and falls.  The 44% of discrepancies of 0.3C or more are worth investigation.  Many are likely due to small localised air temperature changes, the AWS probes being very sensitive to these, but the rapid decreases shown in the examples above, as well as the rapid rises in the Low Temp examples, mean that random noise is likely to be a factor as well.

Have they affected climate analysis? 

Comparison of values at identical times has shown that out of 200 cases, all but one had higher or lower temperatures at some previous second than at the last second of that minute, with a significant number of High Temp observations (39% of the sample) with 0.3C to 0.5C decreases in less than one minute, and five much larger.  There is a very high probability that similar differences occur at every station in every state, every day.

In more than half of the sample of High Temps, and over 90% of the Low Temps, the discrepancy was within the stated instrumental tolerance range, and therefore the values are not significantly different, but the higher or lower reading becomes the maximum or minimum, with no tolerance range publicised.

This would of course be an advantage if greater extremes were being looked for.

Nearly 10 percent of minimum temperatures were followed by a rise of more than 0.2C, and 44 percent of maxima were followed by a fall of more than 0.2C.  While many of these may have entirely natural causes, none of the very large discrepancies examined had an identifiable meteorological cause.   It is questionable whether mercury-in-glass or alcohol-in-glass thermometers used in the past would have responded as rapidly as this.  This must make claims for record temperatures questionable at best.

If you think that the +/-0.2C tolerance makes no difference in the big picture, as positives will balance negatives and errors will resolve to a net of zero, think again.  Maximum temperature is the High Temp value for the day, and 44% of the discrepancies were more than +0.2C.  If random instrument error is causing the apparent temperature spikes, only the highest upwards spike, with or without positive error, is reported (downwards spikes in the hot part of the day are not reported unless they show up in the final second of the 30 minute reporting period).  Negative error can never balance any positive error.
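This "errors can never cancel" point can be checked with a quick Monte Carlo sketch.  Assume a perfectly flat true temperature of 30C and a symmetric, zero-mean instrument error with a standard deviation of 0.2C; both assumptions are mine, purely for illustration:

```python
import random

random.seed(0)

# Taking the MAXIMUM of 86 400 one-second readings per day selects the
# most positive error of the day, so a symmetric error still produces a
# one-sided bias in the reported maximum.
N_DAYS, SECONDS_PER_DAY = 30, 86_400
TRUE_TEMP, ERROR_SD = 30.0, 0.2

total_bias = 0.0
for _ in range(N_DAYS):
    daily_max = max(TRUE_TEMP + random.gauss(0, ERROR_SD)
                    for _ in range(SECONDS_PER_DAY))
    total_bias += daily_max - TRUE_TEMP

mean_bias = total_bias / N_DAYS
print(f"mean bias of the daily maximum: +{mean_bias:.2f} C")
```

Even though the error averages to zero second by second, the reported maximum in this sketch runs warm by the better part of a degree, because negative errors simply never get selected.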

Further, these very precise but questionable values then become part of the climate monitoring system, either directly if they are for ACORN stations, or indirectly if they are used to homogenise “neighbouring” ACORN stations. They also contribute to temperature maps, showing for example how hot New South Wales was in summer.

Again, temperature datasets in the ACORN network are developed from historic, not very precise, but (we hope) fairly accurate data from slow response mercury-in-glass or alcohol-in-glass thermometers observed by humans, merged with very precise but possibly unreliable, rapid response, one second data from Automatic Weather Stations.  The rapid one second response means that temperatures measured by AWS probes are likely to be some tenths of a degree higher or lower than LIG thermometers in similar conditions, and the higher proportion of High Temp differences shown above, relative to Low Temp differences, will lead to higher maxima and means in the AWS era.  Let’s consider maxima trends:

Fig. 10:  Australian maxima 1910-2016


There are no error bars in any BOM graph.  Maxima across Australia as a whole have increased by about 0.9 C per 100 years according to the Bureau, based on analysis of ACORN data.  Even if across the whole network of 526 automatic stations the instrument error is limited to +/- 0.2C, that is 22.2% of the claimed temperature trend.  In the past, indeed as recently as 2011 (see below), instrument error was as high as +/-0.5C, or about half of the 107 year temperature increase.  No wonder the Bureau refuses to show error bands in its climate analyses.

There have been NO comparison studies published of AWS probes and LIG thermometers side by side.  Can temperatures recorded in the past from liquid-in-glass thermometers really be compared with AWS one second data?  The following quotes are from 2011, when an Independent Review Panel gave its assessment of ACORN before its introduction.

Report of the Independent Peer Review Panel p8 (2011)

Recommendations: The Review Panel recommends that the Bureau of Meteorology should implement the following actions:

A1 Reduce the formal inspection tolerance on ACORN-SAT temperature sensors significantly below the present ±0.5 °C. This future tolerance range should be an achievable value determined by the Bureau’s Observation Program, and should be no greater than the ±0.2 °C encouraged by the World Meteorological Organization.

A2 Analyse and document the likely influence if any of the historical ±0.5 °C inspection tolerance in temperature sensors, on the uncertainty range in both individual station and national multidecadal temperature trends calculated from the ACORN-SAT temperature series.

And the BoM Response: (2012)

… … …   An analysis of the results of existing instrument tolerance checks was also carried out. This found that tolerance checks, which are carried out six-monthly at most ACORN-SAT stations, were within 0.2 °C in 90% of cases for automatic temperature probes, 99% of cases for mercury maximum thermometers and 96% of cases for alcohol minimum thermometers.

These results give us a high level of confidence that measurement errors of sufficient size to have a material effect on data over a period of months or longer are rare.

This confirms that LIG thermometers have more reliable accuracy than automatic probes, and that 10% of AWS probes, i.e. those at more than 50 sites, are not sufficiently accurate.  If they are in remote areas, their inaccuracy will have an additional large effect on the climate signal.   It is to be hoped that Alice Springs, which contributes 7-10% of the national climate signal, is not one of them.

Conclusion

It is very likely that the 199 one minute differences found in a sample of 200 high and low temperature reports are also occurring every day at every weather station across Australia.  It is very likely that nearly half of the High Temp cases will differ by more than 0.2 degree Celsius.

Maxima and minima reported by modern temperature probes are likely to be some tenths of a degree higher or lower than those reported historically using Liquid-In-Glass thermometers.

Daily maximum and minimum temperatures reported at Climate Data Online are just noise, and cannot be used to determine record high or low temperatures.

These problems are affecting climate analyses directly if they are at ACORN sites, or indirectly, if they are used to homogenise ACORN sites, and may distort regional temperature maps.

Instrument error may account for between 22% and 55% of the national trend for maxima.

A Wish List of Recommendations (never likely to be adopted):

That the more than 50 sites at which AWS probes are not accurate to +/- 0.2 degree Celsius be identified and replaced with accurate probes as a matter of urgency.

That the Bureau show error bars on all of its products, in particular temperature maps and time series, as well as calculations of temperature trends.

That the Bureau of Meteorology recode its existing three criteria filter, to zero-out spurious spikes and preferably send them as fault flags into a separate file in order to improve Quality Control.

That the Bureau replace its one second spot maxima and minima reports with a method similar to that used for wind speed: the average over 10 minutes.  That would be a much more realistic measure of temperature.
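For what it's worth, the last two recommendations are straightforward to implement.  Here is a hedged sketch: the window sizes, the 0.5C threshold, and the median-based spike test are my own choices for illustration, not anything the Bureau uses.

```python
import random
from itertools import accumulate
from statistics import median

random.seed(7)

def despike(series, half_window=5, threshold=0.5):
    """Zero-out spikes: replace any reading more than `threshold` C from
    the local median with that median, and return the flagged indices
    (candidates for a separate fault file)."""
    cleaned, flagged = [], []
    for i, t in enumerate(series):
        lo, hi = max(0, i - half_window), min(len(series), i + half_window + 1)
        local = median(series[lo:hi])
        if abs(t - local) > threshold:
            cleaned.append(local)
            flagged.append(i)
        else:
            cleaned.append(t)
    return cleaned, flagged

def running_mean_max(series, window=600):
    """Daily maximum as the highest 600 s (10 minute) running mean,
    analogous to the way wind speed is reported."""
    sums = [0.0] + list(accumulate(series))
    return max((sums[i + window] - sums[i]) / window
               for i in range(len(series) - window + 1))

# Two hours of hypothetical 1 Hz data: 31 C base, small noise, and one
# 2 second spike of +1.8 C.
trace = [31.0 + random.gauss(0, 0.05) for _ in range(7200)]
trace[3000] += 1.8
trace[3001] += 1.8

cleaned, flagged = despike(trace)
print(f"seconds flagged as spikes:  {flagged}")
print(f"raw 1 s maximum:            {max(trace):.2f} C")
print(f"10 minute running-mean max: {running_mean_max(cleaned):.2f} C")
```

The spike is flagged for the fault file rather than silently deleted, and the 10 minute mean reports a maximum close to the true ambient temperature.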

How Temperature Is “Measured” in Australia: Part 1

March 1, 2017

By Ken Stewart, ably assisted by Chris Gillham, Phillip Goode, Ian Hill, Lance Pidgeon, Bill Johnston, Geoff Sherrington, Bob Fernley-Jones, and Anthony Cox.

The Bureau of Meteorology maintains the Southern Oscillation Index (SOI), one of the most useful climate and weather records in the world.  In About SOI,  the Bureau says:

 Daily or weekly values of the SOI do not convey much in the way of useful information about the current state of the climate, and accordingly the Bureau of Meteorology does not issue them. Daily values in particular can fluctuate markedly because of daily weather patterns, and should not be used for climate purposes.

It is a pity that the BOM doesn’t follow this approach with temperature, and in fact goes to the opposite extreme.

Record temperatures, maximum and minimum temperatures, and monthly, seasonal, and annual analyses are based not on daily values but on ONE SECOND VALUES.

The Bureau reports daily maximum and minimum temperatures at Climate Data Online, but also gives a daily summary for each site in more detail on the State summary observations page, and a continuous 72 hour record of 30 minute observations (examples below), issued every 30 minutes, with the page automatically refreshed every 10 minutes, also handily graphed.  These last two pages have the previous 72 hours of readings, after which they disappear for good.  However, the State summary page, also refreshed every 10 minutes, is for the current calendar day only.

This screenshot shows part of the Queensland observations page for February 26, showing the stations in the North Tropical Coast and Tablelands district.

Fig. 1:  District summary page


Note especially the High Temp of 30.5C at 01:26pm.  Clicking on the station name at the left takes us to the Latest Weather Observations for Mareeba page:

Fig. 2:  Latest Observations for Mareeba


Notice that temperature recordings are shown every 30 minutes, on the hour and half hour.

In Figure 1 I have circled the Low Temp and High Temp for Mareeba.  Except in unusual circumstances, High Temp and Low Temp values become the maximum and minimum temperatures and are listed on the Climate Data Online page, and for stations that are part of the ACORN network, become part of the official climate record.  It is most important that these High Temp and Low Temp values, the highest and lowest recorded temperatures of each day, should be accurate and trustworthy.

But frequently they are higher or lower than the half hourly observations, as in the Mareeba example (0.6C higher), and I wanted to know why.  In this post I show some recent examples, with the explanation from the Bureau.

Perhaps the difference between the Latest Weather Observations and maximum temperature reported at Climate Data Online is due to brief spikes in temperature in between the reported temperatures of the latest observations, such as in this example from Amberley RAAF on February 12.

Fig. 3:  Amberley RAAF temperatures, 12 February 2017


A probable cause would be that the Automatic Weather Station probe is extremely sensitive to sudden changes in temperature as breezes blow warmer or cooler air around or a cloud passes over the sun.

However, this may not be the whole story.

Occasionally the report time for the High Temp or Low Temp is exactly on the hour or half hour, and therefore can be directly compared with the temperature shown for that time at the station’s page.

These progressive Low and/or High Temps timed exactly on the half hour or hour can be observed throughout the day at various times, as well as at the end of the reporting period.

For example, here is a mid-afternoon screenshot of the Queensland Wide Bay and Burnett district summary for Wednesday 15 February.  I have highlighted the High Temp value for Maryborough at 1:00pm.

Fig. 4:  District summary at 2:00pm for Maryborough 15 February 2017

obs-mboro-15th

In the Latest Observations for Maryborough, I have highlighted the 1:00pm reading.

Fig. 5: Latest Observations at Maryborough at 01:00pm on 15 February

obs-mboro-15th-detail

The difference is +1.5 degrees.  Here I have graphed the results.

Fig. 6:  Maryborough 15 February

mboro-15th-graph

That’s a 1.5 degree difference at the exact same minute.

Here is a screenshot of Latest Observations values at Hervey Bay Airport on Wednesday 22 February.  Low Temp for the morning of 23.2C was reached at 6.00 a.m.

Fig. 7:  Hervey Bay, 06:00am  22 February 2017

hervey-bay-22nd

Note that at 6.00am, just after sunrise, the Latest Observations page shows that the temperature was 25.3 degrees.  The daily Low Temp was reported as 23.2 degrees at 6.00am – 2.1 degrees cooler.  This graph will show the discrepancy more plainly.

Fig. 8:  Hervey Bay temperatures 22 February

hervey-bay-22nd-graph

What possible influence would cause a dawn temperature to drop 2.1 degrees?

I sent a query to the Bureau about Hervey Bay, and the explanation from the Bureau’s officer was enlightening:

Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)

Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600.

I’ve looked at the data for Hervey Bay at 0600 on the 22nd February.
25.3, 25.4, 23.2 .

The temperature reported each half hour on the station Latest Observations page is the instantaneous temperature at that exact second, in this case 06:00:00, and the High Temp or Low Temp for the day is the highest or lowest one second temperature out of every minute for the whole day so far.  There is no filtering or averaging.
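The scheme the Bureau's officer describes can be sketched in a few lines.  The function names and data layout below are my own illustration, not the Bureau's actual processing code; the point is simply that the daily extremes are the raw one-second extremes, with no filtering or averaging anywhere:

```python
# Sketch of the one-second AWS scheme as described in the Bureau's reply
# above.  Each minute yields three values: the spot (last-second) reading,
# and the highest and lowest one-second readings of that minute.

def minute_summary(one_second_temps):
    """From 60 one-second readings, return (spot, highest, lowest)."""
    return one_second_temps[-1], max(one_second_temps), min(one_second_temps)

def daily_extremes(minutes):
    """Daily High/Low Temp: extremes of the per-minute values, no filtering."""
    highs = [minute_summary(m)[1] for m in minutes]
    lows = [minute_summary(m)[2] for m in minutes]
    return max(highs), min(lows)

# A single spurious one-second spike in any minute becomes the daily maximum:
steady = [[25.0] * 60 for _ in range(10)]   # ten minutes of steady 25.0C
steady[4][30] = 27.6                        # one bad second out of 600
print(daily_extremes(steady))               # (27.6, 25.0)
```

Under this scheme one anomalous second out of 86,400 in a day is enough to set the official maximum or minimum.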

The explanation for the large discrepancy was that “Sometimes the initial heating from the sun causes cooler air closer to the ground to mix up to the temperature probe (1.2m above ground).”

However, in Figure 7 above it can be seen that the wind was south east at 17 km/hr, gusting to 26 km/hr, and had been like that all night, over flat ground at the airport, so an unmixed cooler surface layer mixing up to the probe seems very unlikely.

You will also note that the temperatures in the final second of every half hour period from 12.30 to 6.30 ranged from 25C to 25.5C, yet in some second in the final minute before 6.00 a.m. it was at 23.2C.  I have shown these values in the graph below.

Fig. 9:  Hervey Bay 05:59 to 06:00am

hervey-bay-22nd-at-6am

The orange row shows the highest temperature for this last minute at 25.4C at some unknown second, the blue row the lowest temperature for this minute (and for the morning) at 23.2C at some unknown second, and the spot temperature of 25.3C at exactly 06:00:00am.  The black lines show the upper and lower values of half hourly readings between 12:30 and 06:30: the high temp and 06:00am readings are within this range.

23.2C looks a lot like instrument error, and it was not subject to any filtering.

Further, there are only two possibilities:  either from a low of 23.2C, the temperature rose 2.2 degrees to 25.4C, then down to 25.3C; or else from a high of 25.4C it fell 2.2 degrees to 23.2C, then rose 2.1 degrees to 25.3C, all in the 60 seconds or less prior to 06:00:00 a.m.
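A quick arithmetic check on those two possibilities.  The three figures are the Bureau's values for the 0559-0600 minute, quoted in their reply above; the calculation of the implied swings is mine:

```python
# Bureau's three values for the 0559-0600 minute at Hervey Bay:
# spot reading at 06:00:00, plus the minute's one-second extremes.
spot, high, low = 25.3, 25.4, 23.2

# Whichever order the extremes occurred in, the probe must have swung by
# at least (high - low) inside 60 seconds.
min_swing = round(high - low, 1)
print(min_swing)  # 2.2

# Total movement implied by each of the two possible paths:
path_low_first = round((high - low) + (high - spot), 1)   # low, up to high, down to spot
path_high_first = round((high - low) + (spot - low), 1)   # high, down to low, up to spot
print(path_low_first, path_high_first)  # 2.3 4.3
```

Either path requires the air at the probe to move through at least 2.2 degrees in under a minute, in steady 17 km/hr winds.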

How often does random instrument error affect the High and Low Temps reported at the other 526 stations?  Like Thargomindah, where on February 12 the High Temp was 2.3 degrees to 2.5 degrees higher than the temperatures 15 minutes before and after?

Fig. 10:  Thargomindah temperatures 12 February 2017

thargomindah-12-feb

Or was this due to a sudden rise and fall caused by a puff of wind, even a whirl-wind?

Who knows?  The Bureau certainly doesn’t.

 

In Part 2, I will look at patterns arising from analysis of 200 High and Low Temps occurring in the same minute as the half hourly values, and implications this has for our climate record.

No Excess Winter Warming for 103 Years!

January 9, 2014

Greenhouse Myth Buster No. 2

Another key indicator of greenhouse warming, a pattern of temperature change “uniquely associated with the enhanced greenhouse effect” according to Dr Braganza, is greater warming in winter compared with summer.

Not in Australia.

This is a graph of summer annual means minus winter annual means for the years 1910 – 2012, straight from BOM’s time series data.

summ-wint2012

No winter increase over summer in 103 years.  This summer (we find out in early March) will have to be less than +0.7 C above average to make the trend ever so slightly negative (to 5 decimal places).

But then how will we get another “Angry Summer”?

No Evidence of Greenhouse Warming for 67 Years!

January 8, 2014

The release of 2013 data by the BOM has provided me with plenty to work on.  Various commentators are busily alarming people by claiming that the hottest year on record is an indication that global warming due to the enhanced greenhouse effect is already impacting Australia.  What is most disappointing is that the BOM has done nothing to report the truth: that while Australia has definitely been warming, and breaking records, the data show no evidence of greenhouse warming.

One of the key indicators of warming uniquely associated with the enhanced greenhouse effect is night time temperatures (minima) increasing faster than daytime temperatures (maxima).  The difference between the two is called the Diurnal Temperature Range, or DTR.  So, decreasing DTR would be evidence of greenhouse warming.
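The DTR check described above is simple to reproduce.  The toy series below is illustrative only (not the Bureau's data): both maxima and minima warm at the same rate, so DTR is flat, which is exactly the "no greenhouse signature" outcome under the criterion quoted:

```python
# Minimal sketch of the DTR check: DTR = tmax - tmin each year, then a
# linear trend.  Illustrative series, not real BOM data.

def linear_trend(ys):
    """Ordinary least-squares slope, per step (here, per year)."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Both series warming at 0.01 C/yr for 67 years -> DTR trend is zero:
tmax = [30.0 + 0.01 * i for i in range(67)]
tmin = [15.0 + 0.01 * i for i in range(67)]
dtr = [a - b for a, b in zip(tmax, tmin)]
print(round(abs(linear_trend(dtr)), 6))  # 0.0 -- warming, but no DTR decrease
```

Decreasing DTR requires the blue (minima) slope to exceed the red (maxima) slope; equal warming of both, however strong, leaves DTR flat.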

Here is Australian DTR since 1947:

dtr1947-2013

That’s dead flat or slightly rising for 67 years!

I couldn’t believe it either, and double checked.  There’s no mistake: DTR shows no evidence of greenhouse warming in Australia, with a flat trend for 67 years.

Still No Evidence of Greenhouse Warming!

January 8, 2014

This morning I noticed at Jennifer Marohasy’s post (http://jennifermarohasy.com/2014/01/last-year-2013-a-hot-year-for-australia/) a comment from “Luke” (who else) objecting to my use of 2nd order polynomials in yesterday’s post.  Strictly I should stick to linear trends for a 35 year timescale, and use polynomials only for much longer periods.  Therefore, here is a plot of Australian annual minima and maxima for the 104 years from 1910 to 2013, using data straight from the BOM.

minvmax poly2

Note that the red 2nd polynomial curve (maxima) shows a fairly flat trend until the 1950s, with an increasing rise since then. (Yes! It’s getting hotter!)

Note how the blue (minima) curve also gradually rises over the years and apparently continues to do so.

However, I have circled the graphs in the 1980s and the last few years.  I have blown this up so you can see more clearly what is happening.

minvmax blownup

Since the mid 1980s there is a divergence in trends.  Daytime temperatures are rising faster than night time temperatures.

This is a problem because increasing CO2 and other greenhouse gases should be increasing back radiation, which should be evident in night time temperatures rising faster.

Something else is happening.

 

The Hottest Year, but NOT due to Greenhouse Warming

January 7, 2014

ACORN-SAT: the gift that keeps on giving!

Unfortunately for doomsayers, the fact that 2013 was the hottest year on record in Australia is no evidence for the effects of greenhouse warming.  In fact, it is the very opposite.

Why?  Any sort of warming will eventually produce the hottest year on record.  But warming due to the enhanced greenhouse effect is quite special.  Warming due to greenhouse gases is evidenced by

“greater warming of night time temperatures than daytime temperatures”

amongst other things, according to Dr Karl Braganza (http://theconversation.com/the-greenhouse-effect-is-real-heres-why-1515).

I discussed this in April  last year.  Now, with the updated data for 2013, it’s time for a reality check to see whether there is now evidence of greenhouse warming in Australia (a region as large as Antarctica, Greenland, the USA, or Europe, and supposed to be especially vulnerable to the effects of global warming.)

Once again I am using data straight from the Bureau’s website.

Fig. 1: Monthly maxima and minima with 12 month smoothing, December 1978 – December 2013, from http://www.bom.gov.au/climate/change/index.shtml#tabs=Tracker&tracker=timeseries&tQ%5Bgraph%5D=tmax&tQ%5Barea%5D=aus&tQ%5Bseason%5D=01&tQ%5Bave_yr%5D=0

max v min linear

For the past 35 years, there is much LESS warming of night time temperatures than daytime temperatures.  And the divergence is increasing:

Fig. 2:  fitted with a 2nd order polynomial

max v min poly

Sorry, but this is not evidence of greenhouse warming over the period of the satellite era, when greenhouse gases have been increasing rapidly.  It is merely evidence of warming.

Was 2013 the Hottest Year on Record? Update!

January 6, 2014

Update:  Warwick Hughes has reminded me of his post on 5 December at http://www.warwickhughes.com/blog/?p=2496 where he shows a distinct drift in UAH data compared with RSS, and in later posts he confirms this in southern Africa and the USA.  Warwick says:

"I have checked UAH against CRUT4 and GHCN CAMS for all Australia and it looks like there was a drift in UAH 2005-2006.

Until UAH resolves the issue, I think their ranking of Australian hot years is not worth repeating."

That may help explain the large divergence in recent years.  

I will leave this post as is, with the caveat that it is based on available UAH and Acorn data.

Yes.

On Friday, 2 January, the BOM released its Climate Statement claiming 2013 as the hottest year on record.

The UAH dataset for lower troposphere temperatures has also been just released.

I have compared BOM monthly data with UAH by converting the BOM anomalies to the same reference period as UAH (1981-2010).
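The re-baselining step is simply a matter of subtracting each series' mean over the common reference years.  The numbers below are invented for illustration; the method is the standard one for putting two anomaly series on the same footing:

```python
# Sketch of converting anomalies to a common reference period (UAH uses
# 1981-2010).  Subtract the series' own mean over the reference years.

def rebaseline(anoms, years, ref_start, ref_end):
    """Shift anomalies so they are relative to the mean over the reference years."""
    ref = [a for a, y in zip(anoms, years) if ref_start <= y <= ref_end]
    offset = sum(ref) / len(ref)
    return [round(a - offset, 2) for a in anoms]

# Invented six-year series, anomalies relative to some other base period:
years = list(range(1979, 1985))
bom = [0.10, 0.30, 0.20, 0.40, 0.10, 0.30]
print(rebaseline(bom, years, 1980, 1983))
# [-0.15, 0.05, -0.05, 0.15, -0.15, 0.05]
```

After this shift the two series share a zero line, so their year-to-year divergences can be compared directly.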

Here is the result:  UAH vs BOM 1978-2013 (12 month running means)

uah v bom

It is plain to see that in the satellite era, Australian surface temperatures (as calculated by the BOM) reached a record last year.

For the 12 month periods to December, UAH agrees that 2013 was the hottest, just ahead of 1998 and 2009.

According to UAH, the 12-month period to October 2013 was just edged out by the 12 months to June 2010.

So, the BOM is right in saying 2013 was the hottest on their 104 year (and very much adjusted) record.

While the two datasets match reasonably well in most years, especially 1996-1999, they diverge markedly in recent extreme years.  It appears that the BOM area averaging algorithm accentuates extremes, probably because of the scarcity of observing sites in the remote inland, where warming and cooling are much greater.  Alice Springs, for example, being hundreds of kilometres from the nearest neighbouring site, contributes 7 – 10% of the national warming signal.
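The weighting effect can be illustrated with a toy example.  The stations, anomalies, and area weights below are entirely invented; the point is only that in an area-weighted mean, a station representing a large empty region pulls the national figure far more than its single reading would suggest:

```python
# Sketch of why a remote inland station can dominate an area-averaged
# national anomaly.  All numbers are invented for illustration.

def area_weighted_mean(anoms, areas):
    """Mean anomaly with each station weighted by the area it represents."""
    total = sum(areas)
    return sum(a * w for a, w in zip(anoms, areas)) / total

# Ten coastal stations each representing 1 unit of area, plus one inland
# station (no near neighbours) representing 9 units:
anoms = [0.5] * 10 + [2.0]
areas = [1.0] * 10 + [9.0]
mean = area_weighted_mean(anoms, areas)
print(round(sum(anoms) / len(anoms), 3), round(mean, 3))  # 0.636 1.211
```

One hot reading at the lone inland station nearly doubles the "national" anomaly relative to the simple station average, so any spike or error at such a site is amplified in the area average.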

As well, the satellites’ remote sensors do not necessarily match the atmospheric conditions at ground level, depending on different seasonal conditions.  However, to quote Dr John Christy, “the temperature of the lower troposphere (TLT) more accurately represents what the bulk atmosphere is doing – which is the quantity that is most directly related to greenhouse gas impacts.”

So, if you are interested in the weather and how hot it is locally, consult the BOM (the old Weather Bureau).  If you are interested in whether the climate is changing due to greenhouse gases, consult the satellite data.

And yes, the weather has been hot (and still is where I live).

BOM’s Annual Climate Statement

January 3, 2014

This is just a quick comment on the 2013 Annual Climate Statement released today by the Bureau.

I have checked only minimum temperatures; the calculation I released yesterday gave a 2013 annual mean minima anomaly of +0.82 C.  The Bureau has reported an annual average of +0.94 C.

The BOM’s figures are derived from the Acorn dataset.  They acknowledge in the small print that:

Note that all values in this statement are as compiled from data available on 2 January 2014. Subsequent quality control and the availability of additional data may result in minor changes to final values.

What they really mean is that Acorn won’t be updated with 2013 data for several weeks, and that nearly all Acorn sites have many months of daily data that has not yet been quality controlled.  This means that the values they give cannot be checked for several weeks.

As well, three Acorn sites in Western Australia ceased reporting in August 2012.  Bridgetown has years of data that is out by two days, and Rutherglen has years of data out by one day.   And on some 800 occasions the minimum temperature is higher than the maximum.  Quality checking is not as rigorous as you might expect.

Any changes may be only minor, but rushing to publish before the data can be checked is not a good look.