
On Sin

I recently did a post about disability accommodations at Canadian universities, in which ADHD diagnoses played a prominent role. ADHD is far from the only non-physical disability whose presence in universities has increased greatly, but it is up there. After writing that, I was reading a post on a blog I have come to find very interesting, called On the Contrary, written by a fellow named Simon Sobo. He bills himself as an ‘81 year old unheralded, frustrated, but serious writer’.

The particular post I read was titled ‘ADHD and Other Sins of Our Children, Part 2’, and you can read it here. Free. (There is a Part 1, which I have not yet read, and which opens with the remark that Part 2 has been read five times as often as Part 1. Hmmm…..)

The post is quite long, and the first section of it is sub-titled A Memory. It’s an interesting account of how his Jewish parents raised him, and his attempts to behave as they wished him to, especially during a long sermon by the Rabbi in his synagogue. My own Catholic upbringing was not dissimilar. The penultimate section of the article is subtitled ‘Sin and ADHD’ and starts with this sentence:

“First a few interesting statistics about adults diagnosed with ADHD and their sense of moral responsibility.”

It makes for fascinating reading, although the academic in me wishes Sobo had done a better job of providing citations for those stats. (There is a set of serious references at the end of the article, which I intend to pursue).

Anyway, I found it thought-provoking, and think you might, also.

Disability, Inc.

‘You don’t ask a barber whether you need a haircut’.

I have no idea where that bit of wisdom comes from or who said it, but I have no doubt of its wisdom.

A story in the Dec 27 Globe and Mail is titled ‘As demand for disability accommodations in universities grows, professors contend with how to handle students’ requests’

Not far into the piece one sees the graph which I have reproduced below:

 

A more than two-fold increase in the percentage of individuals in any sub-category of any population is notable. When something changes that dramatically there is something going on. Maybe more than one thing, but something, almost surely.

An important aspect of the phenomenon is depicted in the following graph, also from the article:

It is clear that the number of students with physical disabilities has not changed much over the depicted 5-year period, as the purple region’s upper boundary is nearly flat. However, the number of students registered with non-physical disabilities has increased by some 20,000 over the same period. Non-physical disabilities refer to things like test-taking anxiety, ADHD, difficulty with concentration, etc.

It is thus unavoidable that the source of the steep increase in students with registered disabilities is to be found in the steep increase in non-physical disabilities. Importantly, these disabilities are not objectively verifiable. If a student claims to be blind or deaf or unable to walk, this can be verified by a doctor. More pointedly, if a test (the PSA, for example) is used to diagnose prostate cancer, it is possible to determine how accurate that test is. That is, it is possible to determine what percentage of false positives and false negatives arise in any population in which the test is used diagnostically. That is because surgeries and autopsies of individuals who are diagnosed in this way can determine whether they actually had prostate cancer.
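To make the contrast concrete, here is a minimal sketch of how false positive and false negative rates get computed once independent verification exists. The counts below are invented purely for illustration; they are not real PSA figures.

```python
# Invented screening results for 1,000 hypothetical men, each later verified
# independently (biopsy, surgery, or autopsy). Counts are for illustration only.
true_positive = 90     # test said cancer, verification confirmed cancer
false_positive = 160   # test said cancer, verification found none
true_negative = 700    # test said no cancer, and there was none
false_negative = 50    # test said no cancer, but cancer was present

# These rates can only be computed because a verified answer exists for every case.
false_positive_rate = false_positive / (false_positive + true_negative)
false_negative_rate = false_negative / (false_negative + true_positive)

print(f"False positive rate: {false_positive_rate:.1%}")  # about 18.6%
print(f"False negative rate: {false_negative_rate:.1%}")  # about 35.7%
```

The whole calculation rests on having a verified answer to compare the test against. For ADHD or test-taking anxiety there is nothing to fill that role except another expert’s opinion, so the calculation cannot even be set up.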

There is no analog to surgery for test-taking anxiety or ADHD. There are tests, of a sort, and questionnaires, and criteria, for sure, but all of these have been developed by the ‘experts’ who are doing the diagnosing, and again – there is no way to objectively verify their results. One has ADHD if an expert on ADHD – whoever that may be – says so. That expert perhaps went through some process in reaching that diagnosis, but there is no device outside that ring of experts which can tell us how accurate their diagnoses of ADHD are. Determining ‘accuracy’ is impossible without the ability to determine what is true in a way that does not depend on who is doing the determining.

You can go to this link to get, for example, a pdf containing the DSM-5 Diagnostic Criteria for ADHD in children. You will find it is a series of statements about observed behavior, such as ‘Lacks ability to complete schoolwork and other assignments or to follow instructions’ and ‘Incapable of staying seated in class’. The reality is that none of these observations is likely to be made by the ‘expert’ directly; they come from the expert interviewing the parents and perhaps a teacher or two. The point is that this set of ‘criteria’ is all there is. An MD or psychologist talks to the parents and/or teachers, maybe the kid for a bit, consults this checklist, and then ‘decides’ ADHD, yes or no. One can then go to another expert, who might or might not agree, but one cannot get verification. It is not possible.

It is known, for example, that the PSA test produces ‘false positives’ at a high rate. This results in urologists generally requiring more evidence than a high PSA value before they do anything invasive. There is no way to talk about a ‘false positive rate’ for an ADHD diagnosis. Different experts might do the interviews, go through the checklist, and come to different answers, but there is no such thing as a ‘false positive’. If some expert says you have ADHD, the law and the university treat that as a certainty.

In the latter part of my teaching career, when the explosion in disability exemptions among students was well underway, I was told that most university students with a disability diagnosis went into their first year with it already in place from their high school, if not earlier. The universities just carried on with accommodating them. The ‘advisors’ at Queen’s mentioned in the article don’t diagnose these conditions; they just decree what accommodation the student must be given for having them.

The psychology and psychiatry professions have been devoted to expanding the range of disabilities for decades now. Pretty much every kind of behavior has been given a spot as a ‘disorder’ in the DSM, including ‘oppositional defiant disorder’. Type that into Google and see what comes up. I’m not an expert, but I’m quite sure I suffered from a bout of it at one point in high school.

These are barbers setting the rules for when someone must get a haircut. To be sure, not all members of all health care professions buy into the medicalization and amelioration of life. There is a reader comment at the end of the GM article from a health care worker who notes that when she pushed back against providing a ‘sick note’ for a student, the parent threatened to report her to her professional college.

Professional colleges in ‘regulated professions’ have become great agents for the enforcement of conformity. And the counsellors who set the accommodations for these diagnosed students cannot be argued with, either. Certainly not by the likes of lowly faculty. It is worth noting that there is no science whatever behind the ‘accommodation’ business. That is to say, there are no scientific studies of the effects of giving vs not giving various ‘accommodations’ to students with registered disabilities. The two standard accommodations are extra time to complete tests and assignments, and taking tests in an environment separate from other students. Note that these are things that would likely allow any student to get a higher mark.

So, the disability accommodation system in Canadian education has become a system in which professionals with an interest in seeing disabilities and accommodations grow are in a position to help it do so, and in which the accommodations are such that every student would find them beneficial.

I would timidly suggest we may have found a cause for the pattern shown in the second figure above.

Predictably, this Globe article generated plenty of outrage in the comments sections, as well as some suggestions for what would or should happen now. I will close by dealing with a few of these.

  1. Universities should indicate on degree certificates or transcripts when a student has received accommodation in earning their credential. That’s a non-starter under both privacy and discrimination law. That it would improve the information available to society is quite beside the point; this cannot be done under current law, even if universities were willing.
  2. These accommodated students will be unable to compete or hold a job once they get out into the real work world. I fear this is false. The various Disabilities Acts in federal and provincial law – not to mention the Canadian Charter – pretty much insist that accommodation be given once someone is expertly diagnosed. I expect this will apply no less to workplaces than it does to educational institutions, so those accommodated students are likely to move into the workforce and insist on – and receive – similar special treatment from their employers. It is likely just a matter of time before most firms of any size have ‘disability counsellors’, just as they have HR and DEI people now.
  3. Once people start dying on operating tables and bridges start falling down, society will see that these accommodations are granting credentials to incompetent people and this will have to stop. This point is often made with regard to DEI (now EDIDA at my old employer – guess what that stands for, gwan) policies as well.

I actually think there is something to this, as some very undeserving and – honestly, ignorant – people have been receiving both high marks and degrees at my former place of employment for some time, and I have no problem believing their general ignorance, coupled with a sense of entitlement, will have costs down the road. Costs to others, I mean.

But neither the existence of EDIDA nor of disability accommodation means that all university graduates are ignorant and entitled; only that too many are. After all, universities have always given credentials to some who were undeserving. However, it is likely to take a loooong time for the impact of that increase in the undeserving to be visible out in the world, and an even longer time for it to be sufficiently obvious to enough people that the will to do something about it takes hold in society in general.

I would not hold my breath. Actually, I won’t be able to, as I expect to be long gone before it happens.

Motivational Workshops are Good For You. Really. Stanford Says So

I mentioned in my last post I had been reading about academic fraud, and I was. I do. Frequently. I didn’t post anything on it, but after posting my last piece I came upon some material I had dug up a while back on a case that is, well, improbable. Not fraud, or so say the experts who know more than I, but research that makes you wonder ‘What were these people thinking?’

The story starts with an improbable figure: Tony Robbins. You may remember him, motivational speaker, was all over TV at one point, good looking guy, great teeth. This guy:

He was on OWN for a while and wrote several books, including Unlimited Power: The New Science of Personal Achievement (1986). He also went through some rough times, having been accused of sexual misconduct in a 2019 Buzzfeed story. I don’t know what came of that, and it is quite separate from his role in our current story.

A group of researchers, some from Stanford’s Genetics Department and its Stanford Health Innovation Lab (SHIL), undertook some research projects in which Robbins’s seminars figured. I don’t know how many projects there were in total, or how many papers they got published on them; here I will focus on two I do know about:

Non-traditional immersive seminar enhances learning by promoting greater physiological and psychological engagement compared to a traditional lecture format  

This was published in the journal Physiology and Behavior in 2021, and you can read it here if you like. I’ll refer to this as the learning paper. The other is

Effects of an immersive psychosocial training program on depression and well-being: A randomized clinical trial

Published in the Journal of Psychiatric Research in 2022; you can read it here. I’ll refer to this as the depression paper.

Each paper lists nine separate author-researchers, five of whom appear as authors on both papers. Michael Snyder, the Director of SHIL, is listed only on the depression paper, but other authors on the learning paper are from the Stanford Dept of Genetics.

The learning paper’s abstract starts with this sentence:

“The purpose of this study was to determine the impact of an immersive seminar, which included moderate intensities of physical activity, on learning when compared to traditional lecture format.”

In fact the researchers observed 26 people, 13 in an ‘immersive treatment’ group and 13 in a control group, all 26 of whom went through a two-day seminar. The difference in the two groups is described as follows:

“For the IMS group, participants were provided with a hard copy of lecture material which was presented at UPW with a combination of state elevation sessions (jumping, shouting, fist pumping, and high five behaviors) which were conducted approximately once every hour to raise arousal and to interrupt sedentary behavior, as well as mindfulness mediation[sic] that focused on a wide variety of awareness, affective states, thoughts and images [35] that were conducted once at the end of each day.”

The control group got the same seminar and hard copy of material, but without the meditation and ‘elevation sessions’.

What does this have to do with Tony Robbins? You will find his name in the paper, sort of, at the very end –

Funding

This study financially supported by Robbins Research International Inc.

What this does not mention is that the ‘lectures’ the research subjects attended were in fact a two-day Robbins seminar, one of those things one pays a lot for.

Still, nothing wrong that I can see with someone like Robbins sponsoring research to see if one of two different approaches leads to better learning outcomes in his seminars. However, because all the subjects are people who paid a lot to be in this environment, one can’t really claim they are representative of the general population, so one has no reason to think that the results of this study tell us much about the impact of such ‘immersive’ learning in other situations.

Beyond that, there are only 13 research subjects in each group. That is a rather small number, which makes one wonder whether the results mean much. Here’s what the researchers say they found:

“The primary findings of this study were that learning was greater in the IMS compared to the CON as the increased performance on the exam was sustained 30- days post event when compared to CON, which decreased 30-days post event.”

Ok, but the question that comes to me at this point is – why did these well-placed researchers bother with this? The paper’s Conclusion section actually notes that “Previous studies have shown that physical activity can promote learning in traditional classrooms [2,3].” They then go on to note that those previous studies utilized different types of exercise. Whoop-de-do. I would think researchers at this high level would be interested in research that could move the needle more than that. I get that Robbins (maybe only partly) funded the work, but I doubt these people are hard up for research funding. So, in the end my only question is really: why and how did these researchers get involved with Robbins at all, and on such a mediocre project?

Hang on. I came upon this in Gelman’s statistics blog, which I read regularly and mention here often. He didn’t take these guys on; all he did was reprint some material from an article in The San Francisco Chronicle which, based on the bits I’ve been able to read (paywall), was really hard on these researchers and this research.

As I noted, Robbins had been in the news not long ago for bad behavior, so if the learning paper was the extent of this Robbins/researchers interaction, I would likely chalk this up to no more than a newspaper thinking it can score points by embarrassing some Stanford researchers for their association with Robbins, and take another strip off Robbins himself in the process. Nothing (much) to see here.

But then there’s the depression paper.

Again, Robbins’ name appears only once, at the end of the paper, like this:

“This study was not funded by Robbins Research International; however, they did allow participants to participate in the DWD program at no charge. They also provided housing for two research coordinators who stayed on site during the trial.”

DWD here refers to Date With Destiny, which is a Tony Robbins seminar that the paper describes as follows:

“…a six-day immersive training program that includes a subsequent 30-day daily psychosocial exercise follow-up period. DWD is popular with thousands of people using this intervention annually. The program combines a variety of lifestyle and psychological approaches that seek to improve well-being, including cognitive reframing, guided meditation and visualization, neurolinguistic programming, gratitude, goal setting, guided hypnosis, community belonging and engagement, and exercise. Although components of the program such as exercise, gratitude, and cognitive reframing have independently been found to improve mental health and wellness (Goyalet al., 2014; Kvam et al., 2016; Mikkelsen et al., 2017; Schuch et al., 2016), the effectiveness of the DWD program has not been investigated.”

No mention anywhere of the fact that this is an ongoing Tony Robbins (money-making) seminar series for which people (although not the subjects of this study) pay big bucks to attend, although people familiar with the Robbins operation would recognize the DWD name. Here’s what the Robbins website says about DWD:

Create life according to your terms

Dive deep into the patterns that are holding you back, ignite your motivation, and build momentum toward the life of your dreams.

I guess that’s kinda ‘mental health and wellness’, right?

The researchers describe their experiment as follows:

“A randomized clinical trial was conducted in which 45 participants were randomized at 1:1 ratio to DWD (n = 23) or a gratitude journaling control group (n = 22) (Fig. 1). Depressed individuals (n = 27), as assessed by the Patient Health Questionnaire-9 (PHQ-9; see below), and those without depression (n = 18) were recruited by email, flyers, and physician referral in the U.S.”

At least there are more subjects this time, right? Ah, no. In fact there were 14 depressed subjects assigned to the Seminar and 13 assigned to the ‘gratitude journaling’ control group. The other 18 subjects, 9 in each group, were not designated as depressed according to their own original responses on the above-mentioned PHQ-9 questionnaire, and as we shall see, it is the originally depressed subjects who are the stars of the show.

That’s because the big question that is being asked here is: does attending a Tony Robbins six-day DWD event reduce depression?

Can anyone guess what answer Mr. Robbins would like to hear to that question?

And indeed, that is the answer we get – in spades. Here’s the payoff, from the paper’s Abstract:

“Seventy-nine percent (11/14) of depressed participants in the intervention condition were in remission (PHQ-9 ≤ 4) by week one and 100% (14/14) were in remission at week six.”

In remission here means that on that self-reported questionnaire, the scores generated by their answers were below the threshold at which they are considered depressed. In other words, the Tony Robbins DWD seminar has a 100% cure rate for depression.

One need not even ask how the control group did, my god, all the depressed people were cured by the treatment. Huzzah!

More times than I could count, I have written in this blog, ‘If it seems too good to be true, it ain’t true’.

Reasons for skepticism are many and varied.

In clinical trials of anti-depressants, typically something like half of participants report ‘feeling better’ after six to eight weeks. A paper in the Lancet I dug up said that 62% of adults reported ‘improvement’ in depression after psychotherapy, at varying time frames.

But DWD – 100% cure rate. Go, Tony.

There is once again the small sample issue here, just as in the learning paper, but to this under-educated economist, the subjects themselves are the big question mark. These are people who self-reported being depressed, went to a DWD seminar for six days for free and then were asked again afterward about how they felt. Ya think maybe they were inclined to believe in the power of DWD?
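On that small-sample point, here is a rough sketch of how little a 14-out-of-14 result can pin down, using a standard exact (Clopper-Pearson) 95% confidence interval. The remission count comes from the paper’s abstract; the calculation is mine, not theirs.

```python
# Exact (Clopper-Pearson) 95% confidence interval for an observed 14/14 remission rate.
# When every subject "succeeds", the lower bound has a simple closed form, (alpha/2)**(1/n),
# and the upper bound is 100% by construction.
n = 14        # depressed participants in the treatment arm, per the paper's abstract
alpha = 0.05  # conventional 95% interval

lower_bound = (alpha / 2) ** (1 / n)
print(f"95% CI for the true remission rate: [{lower_bound:.1%}, 100.0%]")
# -> roughly [76.8%, 100.0%]
```

So even taking the self-reported scores at face value, a sample this small only tells you the ‘true’ remission rate is somewhere north of about three-quarters, and it says nothing at all about whether these particular subjects, treated to a free six-day seminar, would have reported feeling better regardless.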

The bits of the SF Chronicle article that are quoted on Gelman’s blog suggest that some of the SHIL researchers knew and were fans of Robbins before the research started. I can’t say anything about that, and I don’t think you need to know it to wonder about the results in the depression paper.

A final note. If you do go to download the depression paper, you will find that in 2024 the journal also published a Corrigendum about the original 2022 paper, and this corrigendum has the same original nine authors on it. Corrigenda are published by a journal when a mistake is found in a published paper but the mistake is thought not so egregious as to warrant retracting the paper completely. This one notes that there was an error in calculating the post-treatment PHQ-9 score for one of the treatment subjects, and as a result, the cure rate was ‘really’ only 93%.

Most interesting in this corrigendum, all two pages of it, is the following paragraph, which I quote:

“Finally, we note that after the article was first made available online on March 9th, 2022, Dr. Snyder became a co-founder of a startup, Marble Therapeutics, on July 12th, 2022. Mr. Robbins later invested in Marble Therapeutics on September 26th, 2022, three months after the final version of the article was published. We do not believe there was a conflict at the time this work was done, but nevertheless wish to note this relationship.”

There’s that other thing I often write: can’t make this shit up.

Canadian Health Care System Goes in for an Exam

My title is a bit inaccurate, maybe even a lot, because even though people write and talk a lot about the ‘Canadian’ health care system, in fact health care is a provincial responsibility. The Canada Health Act is intended to promote some level of commonality across the provincial systems, but I have little doubt that there is a good deal of variation in the quality and timeliness of the provision of health care services across the provinces.

Still, folks do talk as though we have a ‘Canadian system’, and the report by my favourite Canadian think tank, the Fraser Institute, that prompted this article is no exception.

Before I tell you about that, here’s a different report regarding what Canadians think about the health care system. In a poll done in August of ’23, Angus Reid asked a sample of Canadians a variety of questions about health care. Some highlights (you can read the entire AR report here):

– two-thirds of Canadians (66%) think there are structural problems within health care that will not be solved with more funding.

– 19% reported not having a family doctor, and 29% reported that they have one but find it difficult to get in to see them.

– 68% did not expect to see improvement in the health care system in two years, and 56% did not expect to see improvement in five years.

– 68% report they think their province is doing a poor or terrible job measuring health care system performance, and 69% think that performance would improve if key health care indicators were made public.

Now this is an interesting snapshot of what a sample of Canadians think about the health care system under which they live, but it prompts a lot of questions. What health care indicators do the people surveyed have in mind? What do they mean when they tick the box ‘there are structural problems in health care that money won’t fix’?

Still, the survey suggests a lot of folks aren’t happy with the system, and my own circle of friends and acquaintances echoes that, although I will add that, in my experience, there is also a not-insignificant set of people who get angry at the suggestion that Canada has anything less than one of the best health care systems in the world.

Which leads one to wonder – at least it leads me to wonder – just how good is it?

Well, the good ol’ Fraser Institute wondered that too, and the result of that wondering is a new report from them titled ‘Comparing Performance of Universal Health Care Countries, 2024’, which you can download and read here. Really, all 66 pages, no paywall.

Suspecting that not all my readers are as enthusiastic about 66 pages of numbers, graphs and tables as am I, I valiantly dug in to see what I could learn and then report on here.

One thing to note right off is that Fraser decided to look only at OECD countries with relatively high incomes, and only those with universal health care systems. That tossed the USA out of the comparison right off, as well as lower-income countries like Mexico, Costa Rica and Poland. This left 31 countries that met the Fraser criteria, and the list includes all the countries you would expect: Canada, the UK, France, Germany, Australia and New Zealand, even Israel and Slovenia.

There’s a lot in the report obviously, and the first key comparison is – how much does Canada spend on its health care system relative to these other countries? The next comparison will be, of course – what we get for that spending.

The first question is answered in two ways. After converting all monetary amounts into US dollars (despite that country’s absence from the list) at purchasing power parity exchange rates (don’t ask), the report makes two international comparisons: one looking at expenditures as a percentage of GDP, and a second looking at expenditures per capita.

Canada is the fourth highest spending country on the first measure, spending 11.5% of its GDP on health care, trailing only Germany, Switzerland and New Zealand. On a per-capita basis, Canada comes in 9th of 31 countries, spending $US7035/person, the leader of the pack being Switzerland at $US9218/person.

So, Canada’s system is not inexpensive. We spend a bigger chunk of our national income on health care than all but three other rich countries and are in the top half of countries ranked by spending per person.

So now, what do we get for this?

This is much harder to measure in any credible way than are expenditures, so Fraser measures a lot of different things to try and get a handle on what and how well the system delivers. An imperfect place to start is to measure inputs. How well does the system do in providing the labour and capital (cuz that’s what it is, folks) that is needed to deliver health care services?

The answers are varied.

Canada ranks 28/31 in the number of physicians per capita, but a better 13/31 in the number of nurses per capita. However, we slip to 25/31 in both overnight-stay and psychiatric hospital beds per capita. We are 27th in MRI units per capita and 28th in CT scanners. We are a surprising 3rd in gamma scanners per capita, devices that detect gamma radiation emitted by radioisotopes. (I had to look that up.)

Canada doesn’t look good on this dimension outside that gamma scanner thing, but the report notes quite reasonably that what matters is not inputs but output. When these people and equipment go to work, what results do they get?

This is even harder to measure (it’s relatively easy to count people and pieces of equipment).

So….

Step 1: How well are the people and equipment utilized?

Two countries were left out of this part of the report for lack of data, but

Canada ranks 17/29 in doctor consultations per capita, 22/27 in MRI exams per capita, 13/27 in CT scans per capita.

We squeaked into the top half on that last one, but Canada also ranked dead-last in ‘curative-care discharge rates’ from hospitals, which are basically instances per thousand population of a patient being admitted to a hospital for treatment of an illness or injury (so, not maternity stays, for example) and then being discharged.

So, last in just going to the hospital to be treated and released. Hard to give that a positive spin, I think. (Thinking about it, that number could be low either because we have a hard time getting into the hospital to be treated, or because when we are, we tend to die rather than be discharged. Or both.)

And now, everybody’s favourite….ok, my favourite, waiting to be treated.

The set of countries in this ranking is even smaller, and I’m not sure why. The countries included are Netherlands, New Zealand, Canada, Switzerland, Germany, Australia, France, Sweden and Switzerland.

– 65% of Canadians reported waiting more than one month for an appointment with a specialist, placing Canada 8th out of 9. [France was even higher. Sacre bleu.]

– 58% waited more than two months for elective surgery, placing Canada 9/9.

Based on the stories I hear from my (admittedly mostly not-young) friends, I find it amazing that 42% of Canadians apparently got elective surgery of any kind in less than two months. I will add the cautionary note that this is based on patient self-reports, which are never the most accurate stats. Ideally one would like administrative data on wait times.

Anyway, on to –

Step 2: what happens to people who do get into the system?

There are a number of measures of how well the system actually does in making people better. I won’t go through them all, but here are a few so you can see what sorts of things are involved and how we did. Not all 31 countries are included in each measure, again for lack of data.

a. The rate of diabetes-related lower extremity amputations was not statistically different from the 23-country average.

b. Canada ranked 8th (out of 26) for performance on the indicator measuring 30-day mortality after admission to hospital for acute myocardial infarction (heart attack) which was statistically better than the average.

c. 11th (out of 26) for performance on the indicator measuring 30-day mortality after admission to hospital for a hemorrhagic stroke, which is not statistically different from the average, and

d. 15th (out of 26) for performance on the indicator measuring 30-day mortality after admission to hospital for an ischemic stroke, which is also not statistically different from the average.

Wait, there’s more….

e. Canada ranks 5th (out of 28) on the indicator measuring the rate of 5-year survival after treatment for breast cancer (statistically better than average),

f. 11th (out of 28) for the rate of 5-year survival after treatment for cervical cancer (not statistically different from the average),

g. 8th (out of 28) for the rate of 5-year survival after treatment for colon cancer (statistically better than average),

h. 6th (out of 28) for the rate of 5-year survival after treatment for rectal cancer (statistically better than average)

Clearly, Canada does okay on these measures overall, so I will take a paragraph here to note what I think is ironic. The most terrible thing you can suggest to most Canadians is that we make our health care system more like that of the US. In fact, no one reasonable ever suggests that, but Canadians seem to be terrified of the very idea of the US system, which they think leaves people dying on the sidewalk, unless they can afford the Cadillac-level health care available to those with money and/or good jobs.

But what this report suggests is that Canadians have a terrible time getting access to health care, yet if they do, the system does a good job of treating them. So, in the US access is difficult unless you have money, but you get good care if you get in, while in Canada access is difficult because governments won’t spend the necessary money, but again, if you get in, you get good care.

Not what I would have expected before reading this report.

There’s a lot more in it, including comparisons of longevity and mortality and such. I don’t think those things tell us anything much about our health care system, as they depend on too many other things, so I will pass on reporting them, but you can read the report if you’re interested. The one exception is ‘treatable mortality’, which, as its name suggests, is mortality from causes that a health care system can ameliorate. On that measure, Canada is 20/31, not terrible, consistent with the numbers we just saw.

My bottom line is what it has been for a long time regarding the Canadian Health Care System. Despite its acronym, OHIP is not an Insurance Program. It may have been at the start, based on what I have read about its origins. Now, it has morphed into an all-encompassing bureaucratic nightmare in which too many decisions are made at too high a level. The politicians and high-level bureaucrats in Ontario try to ‘run’ the system from Queen’s Park, and if the Soviet Union was not sufficient demonstration that that doesn’t work, I don’t know what could be. In addition, politicians are unwilling to commit the spending to the system needed to reduce wait times and ameliorate other access issues, because doing so would make it much harder for them to do the things they believe will get them re-elected: subsidies to manufacturing plants, sending cheques to taxpayers and the like.

I have seen the enemy of good health care, and it ain’t us, it’s the people we vote into office. All of them.


It’s Getting Hot Out: The Efficacy of Heat Warnings

 

Summer’s about here, and we can look forward to more of that Environment Canada staple – The Heat Warning. You know, the alerts about high temps and humidity you see on your favourite source for weather info.

I never think much about them, figuring people are pretty good at understanding when it’s hot out and what to do about it. It turns out some local researchers got to wondering whether these alerts did any measurable good.

Their work was written up in the Freeps some while back, in an article headlined:

Do hot-weather alerts help? No, not really: London researchers

– published on Aug 22, 2022.

The tag line below the title reads “Those heat alerts telling us to be careful when temperatures spike? Turns out they do little to keep people overcome by heat out of hospital, say London researchers calling for changes to make the warnings more effective.”

The Freeps reporter has the research right in this case. In the newspaper article you will find the following two paragraphs –

“The researchers compiled data on patients with heat-related illnesses who showed up in emergency rooms from 2012-18 and looked at whether their numbers dropped after the harmonized heat warnings kicked in.”

Then later –

“While there did appear to be a slight drop in heat-related emergency room visits after the provincial warning system was introduced, particularly in children and adults with chronic conditions, the results were not statistically significant, Clemens said.”

I went and read the research paper, published in The Canadian Journal of Public Health in 2022 (I’m a geek; you can read it too, here, although you will have to get past the paywall). That is indeed what the researchers say in the paper.

This research paper strikes me as reporting on potentially useful research. The Freeps article notes that, in southern Ontario, heat alerts are issued when daytime highs hit 31 C or higher, lows of 20 C or when the humidex reaches 40. You want to put off digging that garden to another, cooler, day. Old coots like me are particularly aware of this.

But setting aside my own instincts, I am all in favor of research to determine whether government initiatives are having their hoped-for effect. My unease about the research arose from the following lines in the Freeps article, in which the lead researcher is quoted –

“This research points to the need to raise awareness of heat-related illness. I’d like to see this translate into more education and physician-level awareness . . . ,” Clemens said. “As an endocrinologist, (I) could help inform or prepare my complex patients to better protect themselves.”

Huh? Exactly how does this research point to that? These research findings say the current warning system had no impact on heat-related emergency-room visits. What is the logic leading from that useful finding to the first sentence in the quote above? And as to the second quoted sentence, by all means, go ahead and inform and prepare your patients, but what does “this research” have to do with that?

Then, at the very end of the paper, we find this:

What are the key implications for public health interventions, policy or practice?

  1. More heat alerts were triggered in Ontario between 2013 and 2018, and many cities spent more days under heat warnings. The implementation of a harmonized HWIS appeared to reduce rates of ED visits for heat-related illness in some subpopulations, but at a provincial level, the change was not statistically significant.

 

  2. Given HWPs are a main policy tool to protect populations against heat, we suggest ongoing efforts to support effective HWP in our communities, with a particular focus on at-risk groups.

 

The journal itself probably has a requirement – since it is a public health journal – to include in any published paper a final statement on the public health implications of the research. However, point 1 is not an implication of the research findings. It is just a restatement of the fact that the research found the warnings had no impact. And it is entirely misleading to say that the HWIS ‘appeared to reduce rates of ED visits…’ and then immediately say ‘the change was not statistically significant’. All social science research operates with the knowledge that there is a lot going on in the world that we can’t identify, or even know about, and so any difference we see in data (like differences in ER visits) might be due to random chance. Researchers can’t just say something ‘appeared’ to be different when in fact the difference was statistically insignificant.
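To illustrate what ‘not statistically significant’ means here, consider a toy calculation with invented counts (not the paper’s data): an apparent drop in heat-related ED visit rates that a simple two-proportion test cannot distinguish from chance.

```python
# Toy example with invented counts: heat-related ED visits per million people,
# before and after a warning system is introduced. Not the paper's data.
from math import sqrt, erf

before_visits, before_pop = 520, 1_000_000
after_visits, after_pop = 500, 1_000_000

p_before = before_visits / before_pop
p_after = after_visits / after_pop
p_pooled = (before_visits + after_visits) / (before_pop + after_pop)

# Two-proportion z-test for the difference in visit rates.
se = sqrt(p_pooled * (1 - p_pooled) * (1 / before_pop + 1 / after_pop))
z = (p_before - p_after) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approximation

print(f"Visits 'appeared' to drop by {before_visits - after_visits} per million")
print(f"z = {z:.2f}, p-value = {p_value:.2f}")  # roughly z = 0.63, p = 0.53
```

The visits did drop in this made-up example, but a drop of that size is exactly the kind of wobble chance produces on its own, which is why ‘appeared to reduce’ is the wrong way to describe a difference that is not statistically significant.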

So, why cling to the ‘we found this, but it wasn’t significant’ language? Why not just say ‘we found no impact’? That is a useful thing to find, and researchers should expect to find exactly nothing much of the time. Finding nothing advances our knowledge about the world; it is very useful to learn ‘well, that doesn’t seem to have any impact’.

Then, in implication 2 above, they write “…we suggest ongoing efforts to support effective HWP in our communities….”

C’mon folks, you just found that HWPs are ineffective in reducing ER visits, so how is it an implication of that finding that we should support effective HWPs? Particularly since nothing in your research tells anyone what an effective HWP might look like.

Having hung around with social science researchers nearly all my adult life, I will bravely put forward a hypothesis about motivations here: nothing would have induced the researchers to write, instead of the two misguided points above, this implication of their research:

Our research suggests that the HWIS program and its associated HWPs be ended, and the resources involved be directed toward programs for which there is evidence of effectiveness.

That sentence never stood a chance of appearing in their paper.