
It’s Getting Hot Out: The Efficacy of Heat Warnings


Summer’s about here, and we can look forward to more of that Environment Canada staple – The Heat Warning. You know, the alerts about high temps and humidity you see on your favourite source for weather info.

I never think much about them, figuring people are pretty good at understanding when it’s hot out and what to do about it. It turns out some local researchers got to wondering whether these alerts do any measurable good.

Their work was written up in the Freeps some while back, in an article headlined:

Do hot-weather alerts help? No, not really: London researchers

– published on Aug 22, 2022.

The tagline below the headline reads “Those heat alerts telling us to be careful when temperatures spike? Turns out they do little to keep people overcome by heat out of hospital, say London researchers calling for changes to make the warnings more effective.”

The Freeps reporter has the research right in this case. In the research article you will find the following two paragraphs –

“The researchers compiled data on patients with heat-related illnesses who showed up in emergency rooms from 2012-18 and looked at whether their numbers dropped after the harmonized heat warnings kicked in.”

Then later –

“While there did appear to be a slight drop in heat-related emergency room visits after the provincial warning system was introduced, particularly in children and adults with chronic conditions, the results were not statistically significant, Clemens said.”

I went and read the research paper, published in The Canadian Journal of Public Health in 2022 (I’m a geek; you can read it too, here, although you will have to get past the paywall). That is indeed what the researchers say in the paper.

This research paper strikes me as reporting on potentially useful research. The Freeps article notes that “In southern Ontario, heat alerts are issued when daytime highs hit 31 C or higher, lows of 20 C or when the humidex reaches 40.” You want to put off digging that garden to another, cooler, day. Old coots like me are particularly aware of this.

But setting aside my own instincts, I am all in favour of research to determine whether government initiatives are having their hoped-for effect. My unease about the research arose from the following lines in the Freeps article, in which the lead researcher is quoted –

“This research points to the need to raise awareness of heat-related illness. I’d like to see this translate into more education and physician-level awareness . . . ,” Clemens said. “As an endocrinologist, (I) could help inform or prepare my complex patients to better protect themselves.”

Huh? Exactly how does this research point to that? These research findings say the current warning system had no impact on heat-related emergency-room visits. What is the logic leading from that useful finding to the first sentence in the quote above? And as to the second quoted sentence, by all means, go ahead and inform and prepare your patients, but what does “this research” have to do with that?

Then, at the very end of the paper, we find this:

What are the key implications for public health interventions, policy or practice?

  1. More heat alerts were triggered in Ontario between 2013 and 2018, and many cities spent more days under heat warnings. The implementation of a harmonized HWIS appeared to reduce rates of ED visits for heat-related illness in some subpopulations, but at a provincial level, the change was not statistically significant.


  2. Given HWPs are a main policy tool to protect populations against heat, we suggest ongoing efforts to support effective HWP in our communities, with a particular focus on at-risk groups.


The journal itself probably requires – since it is a public health journal – that any published paper include a final statement on the public health implications of the research. But point 1 is not an implication of the research findings. It is just a restatement of the fact that the research found the warnings had no measurable impact. Worse, it is entirely misleading to say that the HWIS ‘appeared to reduce rates of ED visits…’ and then immediately add that ‘the change was not statistically significant’. All social science research operates with the knowledge that there is a lot going on in the world that we can’t identify, or even know about, and so any difference we see in data (like differences in ER visits) might be due to random chance. Researchers can’t just say something ‘appeared’ to be different when in fact the difference was not statistically significant.
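To see how an ‘apparent’ drop can be nothing more than noise, here is a minimal sketch of a standard two-proportion z-test. The visit counts below are entirely hypothetical – they are not the paper’s data – and are chosen only to illustrate how a visible-looking decline can fall well short of statistical significance:

```python
import math

# Hypothetical counts (illustrative only -- NOT the study's data):
# heat-related ED visits out of a fixed population, before and after
# a warning system is introduced.
before_visits, before_pop = 520, 1_000_000
after_visits,  after_pop  = 495, 1_000_000

p_before = before_visits / before_pop
p_after = after_visits / after_pop

# Pooled two-proportion z-test: is the apparent drop distinguishable
# from random chance at the usual 5% level (|z| > 1.96)?
p_pool = (before_visits + after_visits) / (before_pop + after_pop)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / before_pop + 1 / after_pop))
z = (p_before - p_after) / se

print(f"apparent drop: {p_before - p_after:.6f} "
      f"({(p_before - p_after) / p_before:.1%} relative)")
print(f"z = {z:.2f}  (|z| < 1.96, so not significant at the 5% level)")
```

Run it and the visits do drop – by roughly five percent in relative terms – yet the z-statistic sits far below the conventional 1.96 threshold. That is exactly the situation the quoted paragraph describes: a difference you can see in the numbers, but one you cannot distinguish from chance.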

So, why cling to the ‘we found this, but it wasn’t significant’ language? Why not just say ‘we found no impact’? That is a useful thing to find; indeed, researchers should expect to find exactly nothing much of the time. Finding nothing advances our knowledge about the world – it is very useful to learn ‘well, that doesn’t seem to have any impact’.

Then, in implication 2 above, they write “…we suggest ongoing efforts to support effective HWP in our communities….”

C’mon folks, you just found that HWPs are ineffective in reducing ER visits, so how is it an implication of that finding that we should support effective HWPs? Particularly since nothing in your research tells anyone what an effective HWP might look like.

Having hung around with social science researchers nearly all my adult life, I will bravely put forward a hypothesis about motivations here: nothing would have induced the researchers to write, instead of the two misguided points above, this implication of their research:

Our research suggests that the HWIS program and its associated HWPs be ended, and the resources involved be directed toward programs for which there is evidence of effectiveness.

That sentence never stood a chance of appearing in their paper.