Not Political  

Hey, Kids!

This is going to be an article about the recent US election that is not about politics. I’m sure you’re all relieved – I know I am.

One staple of any US (or Canadian) election is pre-election polling. Media organizations do it, the parties and candidates do it if they can afford it, and movements in those polls are written about endlessly in the months leading up to the election. It might even be true that they matter for the outcome, because they affect people’s beliefs about the eventual likely outcome, and there is credible evidence that the expected closeness of an election affects people’s decisions to vote. Why bother if you expect a landslide?

This last US election was predicted by almost all polls to be close at every moment, but there were differences, and one notable difference was between the predictions made by traditional polls and those made by prediction markets. These market platforms, like Polymarket, allow people to make real-money bets on who will win the election. The price of a bet on a Trump win is set in this market, based on what bets are being made, and varies between $0.01 and $1.00. Paying the going price at any moment gives you one ‘share’ in a Trump victory, which means that if he wins, as he did, you collect $1 for each share you bought. Similarly for shares in Harris, which trade on their own separate market (although prices on the two markets are obviously linked). Thus, if a share in Trump costs $0.40, that suggests that in this market the general belief is that Trump is more likely to lose than win, while if his shares are going for $0.65, as they were at various times a week or so before the election on Polymarket, that suggests the general view is that he is more likely to win. Those who bought Trump shares when the price hit $0.65 (see below) made $0.35 for each share they bought.
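The payoff arithmetic is worth making concrete. Here is a minimal sketch (my own illustration – the function and variable names are mine, and I ignore the platform’s fees):

```python
def profit_per_share(price: float, candidate_wins: bool) -> float:
    """Profit on one $1-payout share bought at `price` (fees ignored)."""
    return (1.0 - price) if candidate_wins else -price

# One 'Trump wins' share bought at $0.65, as at times on Polymarket:
print(f"{profit_per_share(0.65, True):+.2f}")   # +0.35 if he wins
print(f"{profit_per_share(0.65, False):+.2f}")  # -0.65 if he loses
```

Buy at $0.65 and collect $1 on a win: a $0.35 profit per share, which is exactly the gain described above.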

Polls, on the other hand, do not directly predict the winner, but rather try to predict the share of votes each candidate will get, both nationally and in each state, based on surveys of likely voters. Those state vote predictions then have implications for the Electoral College vote count, which can of course be used to generate a prediction about who will win.

Below is the platform FiveThirtyEight’s final poll from Election Day.

Well, here are some facts (so far as I can tell) about all these predictions and polls in this election.

1. All the polls I know about got the national popular vote wrong. Not by a lot, maybe, but the ones that I know about predicted that however the Electoral College turned out, Harris would win the popular vote. They’re still counting votes in California for some reason, but it looks like Trump’s national vote total will exceed Harris’s by maybe 3 or 4 million.

2. None of the polls I saw at any point predicted that Trump would sweep the swing states of Pennsylvania, Michigan, Georgia, Nevada and Wisconsin.

3. If this makes you want to thumb your nose at polls generally, you should probably pick on the one that appears in – wait for it – The Economist. On Election Day – yes, you read that right – it posted an update to its prediction about the winner, giving Harris a 56 percent probability of victory.

Gonna be hard to live that one down……

4. There is plenty of chatter out there in the mainstream media now that the polls, like those in The Economist, are done for, having had their collective asses kicked by the political betting markets. That’s unwarranted, for sure. If you want to go full geek you can read a blog post here by statistician Andrew Gelman as to why that is an unwarranted conclusion, but know that Gelman was involved in putting together the prediction model used by The Economist. (I don’t believe he is involved in its day-to-day operation, so is likely innocent of any involvement in that big Harris jump on Election Day.)

However, I think I will back Gelman on this. First of all, it is worth noting that the betting markets back in 2016 were predicting a Clinton victory right up to election night, by which I mean a Clinton share cost more than $0.50 and a Trump share cost less than that. Beyond that, it is important to note that all US presidential elections in this century have been quite close, some of them really really close. It has become a 50/50 country at the national level politically, so predicting election outcomes is always going to be done with a great deal of uncertainty. That’s what makes sporting contests between equally capable teams fun, right? Whether that fun translates into politics you can judge for yourself.

All of which brings me to one particular prediction market in which something a bit unusual happened this year. I think it’s an interesting story in its own right. A couple of weeks before the election, the prediction market Polymarket, which had been selling Trump and Harris shares for prices not far from $0.50 for some time, suddenly saw the price of a Trump share vault into the mid-$0.60s.

It turned out, and Polymarket was pretty up front about this, that the price had increased so markedly because a large buyer – later known as ‘The Trump Whale’ – had jumped in and bought a lot, as in millions of dollars’ worth, of Trump shares. It’s called a market because that’s what it is, that’s how it’s designed, so his big buy pushed the price of a Trump share up markedly.

It is still not known who this is, and Polymarket sure isn’t going to say, but The Wall Street Journal has featured a couple of stories on him, in which he has said he has no political agenda, but rather that he had a ‘hunch’ that Trump was going to win, that the polls were missing something. [It appears now to be common knowledge that he is a wealthy Frenchman who goes by the alias ‘Théo’. The WSJ claims he is set to rake in some $50 million from his election bets.] He bet not only that Trump would win the presidency, but also that he would win the national popular vote, something no poll got right.

The latest version of his story is that he believed that the polls were once again missing a ‘shy-Trump-voter’ effect, as they did in 2016 and 2020. That is, people who are going to vote for Trump, but won’t tell a pollster that, or just don’t respond to polls.  The WSJ story continues as follows –

To solve this problem, Théo argued that pollsters should use what are known as neighbor polls that ask respondents which candidates they expect their neighbors to support. The idea is that people might not want to reveal their own preferences, but will indirectly reveal them when asked to guess who their neighbors plan to vote for.

Théo cited a handful of publicly released polls conducted in September using the neighbor method alongside the traditional method. These polls showed Harris’s support was several percentage points lower when respondents were asked who their neighbors would vote for, compared with the result that came from directly asking which candidate they supported.

Well, now – that’s an interesting idea. If you think so, too, Gelman has an entire post devoted to it, not completely geeky, but very long, which you can read here. Bottom line: he also thinks ‘ask about your neighbors’ polling is an interesting (although not entirely new) idea, but needs more research. Of course he does, Gelman’s an academic, remember.

It is by now well-recognized by pollsters themselves that getting truly representative samples of people to survey has become more difficult with the demise of the universal use of landline telephones. What that all means for the polling business going forward remains to be seen, for sure, but it seems at least true that after three Trump-involved elections, US pollsters have still not figured out how to stop underestimating his support.

 

Grade Inflation Two – Local

There are times when this blog writes itself. No sooner did my last piece on Harvard’s grading practices get posted than I acquired – by various means – documents that reveal what is going on in my old department regarding undergraduate marking.

First, a memo was sent around (including to me, for some reason) about new grading standards to be followed within the Dept. Instructors in the first-year introductory courses and in the second-year core theory courses (which almost all students take) have been informed that they should plan to give an average course mark of 75% (a middle B) and to award As and Bs to 60% of their students. Now, this is nowhere near Harvardesque, as that fine institution of higher learning is, as noted, giving As and A minuses to nearly 80% of its students. Even the Econ Dept-inclusive Faculty of Social Science at Harvard is giving out A-range marks to 65% of its students, much more generous than the new UWO Econ guideline, which in any case only applies to first- and second-year courses.

None the less, this is a much higher grading expectation for UWO Econ students than reigned in my day, and the reasons for this are illustrative of what goes on in much of higher ed these days.  A separate report to the Department’s Committee on Academic Policy (also sent to me) notes that Economics tends to award fewer As and Bs than other Departments in courses of all levels. This is bad for enrollment in Econ courses, and enrollment in courses is what Departments live on in the 21st century. This is no doubt why the new grading guidelines have been struck.

Further, other information I have seen indicates that enrollment in all UWO Econ programs is on the decline, precipitously so, in some cases. For example, in UWO Econ’s once world-class Econ Honours undergrad program, enrollment in non-required courses has dropped 60% in five years. The PhD program took in 8 new students last year, when it used to take in 15-20 not long ago, when I was still employed. Even the new and previously successful Master’s program in Financial Economics has only 16 new entrants, where it used to have nearly 30.

The reasons for this are many and varied, as is always true, but for the undergrad Honors program, one cause is abundantly clear. Some 20 years back, when the Ivey Business School’s MBA program crashed and burned, Ivey had to find a new way to generate revenue. It chose to massively expand its undergrad HBA program, which students enroll in for only their last two years. Tuition for Ontario students in this program is $25,200/year this year, so $50K for the program ($60K/year for foreign students). Ivey’s intake into the first year of this program has, since the 2000s, gone from fewer than 200 to 765 students in 23/24, according to its own website.

Not many programs at UWO have grown like that, although another one that has is also relevant, and its name is MOS. That stands for Management and Organizational Studies, and is a program within the Faculty of Social Science that – so far as I can tell – has also grown massively over the same period. The Faculty of Social Science is the largest at UWO, with nearly 8,000 students, and when I left my position two years ago, half of those Social Science students were said to be MOS students. (I note in passing that MOS now likes to be referred to as DAN Management, as entrepreneur Aubrey Dan left it a couple of large donations, and got the program and Department re-named in his honour some years back.)

Anyway, this quasi-business school’s growth, coupled with that of the older Ivey undergrad program, has left UWO as The Business School of Western Ontario, and done much to reduce enrollment in Econ, as well as other non-Bus programs, I expect. The fact that Econ courses are hard, and, as noted last week, Econ profs are not much inclined to be easy markers, has helped feed the recent precipitous decline in Econ enrollment, and the resulting attempt to reverse this downward trend by awarding higher marks. This illustrates one of the forces militating against having grading standards that are difficult for students to meet. Another one can be seen in a document sent to me by one of my not-yet-retired colleagues in Econ. Said document is a product of what UWO calls its ‘Teaching and Learning Centre’ or TLC. It is headed:

Professional Development Workshop

Grading and Assignments

Under the heading ‘Assignment Design’ in the document is included this advice:

‘Cut down on the stuff they have to think about (and perhaps reduce cheating).’

Well yea, less reason to cheat – or study – if you aren’t expected to think about very much. I mean – who comes to university expecting to think about a lot of stuff, right?

The bureaucrats, who are really in charge now at Western, do not want faculty messing things up by making students think hard, or – heaven forbid – giving them low marks. It’s very bad for business, and business, with a Capital B – or maybe Capital I – is what BSWO is all about.

Learning, thinking – not so much.

I add an epilogue to further demonstrate what has happened in 40 years to Ontario universities. When I arrived at Western in 1980 I was absolutely floored by how well-prepared, smart and hard-working were UWO undergrads compared to the US undergrads I had taught during my graduate training. When I first was given an Intro Econ course to teach, I was sternly told that the Dept’s undergrad grading guidelines were to give about 1/3 of students an A or B, 1/3 a C, and 1/3 a D or F. Easy to remember, eh? 1/3, 1/3, 1/3.

So, 33% As and Bs versus 60% now, or 79% As, as at Harvard. You do the math. It’s a good bet UWO undergrads can’t. Math requires thinking about many things.

Grade Inflation, Harvard Edition

I sometimes write posts for this blog and then file them without posting, either thinking I will get back to them later and improve/shorten them, or just deciding that the post isn’t up to snuff. I have a looong one sitting in limbo right now on grade inflation, specifically in US universities. I think it’s an issue in Canadian universities (and secondary schools) also, but there is better data on it for the US.

Anyway, I may post that still-simmering article at some point, as it is a topic about which I care, but the Harvard Crimson, that august institution’s student paper, has published an article about grade inflation at Harvard that provides a nice introduction to the topic – and a shorter post.

Kurt Vonnegut is reported to have said ‘Most kids can’t afford to go to Harvard to be misinformed,’ and he wrote that long before Harvard’s most recent troubles – in 1987, in fact. It turns out that if a kid can afford it, he is all but assured of getting an A while being misinformed.


The Crimson article is from 2023, and titled ‘Harvard Report Shows 79% A-Range Grades Awarded in 2020-21, Sparking Faculty Discussion’

The Crimson has no paywall, so you can find the article and read it yourself, I expect, but here’s the graph that it opens with.

So, nearly 80% of the course grades awarded in 2020-21 were As or A minuses. Wow, is all I can say. I thought we gave marks that were too high at UWO back in the day, but I don’t think we ever approached 70+% As.

As is always the case, the percentage of As awarded varies by discipline. Here’s another graph from the Crimson report, which also goes further back into Harvard history.

SEAS is the School of Engineering and Applied Science, and I have to admit that I am surprised that Social Science profs were just as tough graders as were those in Science. (Economists, almost always housed in Social Science, are typically tough graders, but this is normally more than made up for by the Sociologists, Psychologists, Political Scientists, etc. Not at Harvard, apparently.) That the A & H faculty were the easiest graders should surprise no one. They would almost certainly look tough compared to the faculty in the Faculty of Education – if Harvard has one. And, of course, I am using ‘tough’ here ironically. The 60% A marks handed out in SEAS are anything but ‘tough’.

Yet the truly remarkable thing here is that the percentage of A grades awarded has more than doubled in 20 years in all faculties.

Now why do you think that happened?

I can tell you that there were administrators at Western in my day who attributed the ever-rising admission average to get into the Faculty of Social Science to the fact that ‘Western is just attracting better and better students.’ Secondary School grade inflation? – oh, no, certainly not.

The Crimson has some quotes from various administrators at Harvard about this, one of them from Dean of Undergraduate Education Amanda Claybaugh:

In the meeting, Claybaugh said that the “report establishes we have a problem — or rather, we have two: the intertwined problems of grade inflation and compression.”

By ‘grade compression’ academics mean the fact that all grades awarded are concentrated on a few possibilities, making it hard to distinguish really outstanding students from others. I find it encouraging – remarkable, even, given my own interactions with administrators – that the Dean thinks this is a problem. She is later quoted again:

“There is a sense that giving a wider range of grades would give students better information about their performance, and it would give us better information about where they are ranked against other students,” Claybaugh said in an interview after the meeting.

Claybaugh said that the evidence for the existence of grade inflation was less clear, as many student grades are well-deserved and faculty have increasingly focused on learning objectives.

Yea, that sounds more like an administrator – these grades are well-deserved. At least, ‘many’ of them are.

The next quote actually makes me grumpy. Well, grumpier…..

Nonetheless, she said it seems, as one faculty member put it, external “market forces” are influencing grading, particularly as faculty rely on positive course evaluations from students for professional advancement, she said in the interview.

Ok, I’m an economist, I am used to the fact that everyone wants to blame those awful ‘market forces’ when things go bad, but c’mon, unnamed faculty member. If faculty are giving high marks to get higher student evaluations there is nothing ‘market’ about that, it is an internal Harvard decision to use those (almost entirely uninformative, imho) student evaluations for ‘professional advancement’. You could stop doing that, you know. I’m talking to you, Harvard.

Here’s my favourite quote:

Claybaugh said she would defer to the full faculty to decide whether or not to implement concrete reforms to Harvard’s grading policies, but said she would be “interested in exploring” changes “that put more information on the transcript that put the grade in context.”

The Crimson then mentions some of the things that ‘the full faculty’ considered in their meeting. Someone from A&H (natch) suggested grading simply be abolished, but in the end nothing was decided. Shocking. A faculty meeting in which nothing was decided.

Here’s a fact, from my time in the trenches, that is never mentioned in the article. Giving grades that use the entire grading range, from F to A at Harvard, from 0 to 100 at Western, is hard. It is hard for faculty to give students low marks, partly because they will come to your office and bitch and moan and plead, and their parents may call you (yes, they do) and plead on their behalf, but also because it is not fun. It is not pleasant. Few faculty are sadists, few want to be known as The Grim Reaper of Harvard – or Western.

But the thing is, it is part of the job. That is, it is part of the job to give accurate information to students about their learning, and giving almost everyone an A is not doing that. It is also part of the job to inform the world outside academia how good a student is. Giving almost everyone an A is not doing that, either.

This is, if you ask me – and I know you didn’t, but it’s my damn blog – the fundamental reason why over the last twenty years marks in secondary and post-secondary institutions all over North America have gone up and up and up. Instructors (most of them) don’t want to do that part of their job. And, since they don’t have to, they don’t. Giving lots of As is waaaaay easier on everyone, so that is what happens.

It is the ultimate slogan of the 21st century. ‘Easy is best.’


I Couldn’t Not Write About This

By now I’m sure you have all seen headlines or read something about this on a newsfeed, but I’ll paste in here the first paragraph of the story that was in the Wall Street Journal on Tuesday afternoon, Sept 17.

BEIRUT—Pagers carried by thousands of Hezbollah operatives exploded at about the same time Tuesday afternoon in what appeared to be an unprecedented attack that authorities said injured more than 2,700 and killed eight across Lebanon.

Pagers. Exploding pagers, and not a few of them, more than 2,700 of them, exploding simultaneously. This is James Bond stuff – no, it’s not, really; Ian Fleming would not have tried to sell such an outrageous idea as plausible, and even in the 007 movies this would be a stretch. It’s more like Maxwell Smart stuff, except that in Smart’s case the pagers wouldn’t explode and the remote control that was supposed to detonate them all would blow up in Smart’s face, and 99 would have to rescue poor Max from his own folly.

How the hell does one pull this off? How does one get exploding pagers into the hands – or, I suppose, onto the belts – of 2,800 of one’s enemies and then set them off? And, if eight people were killed by these things, they are not just devices that go bang, they are potentially lethal.

Hezbollah immediately blamed Israel, and of course they would, because 1) Israel and Hezbollah are long-time enemies, and 2) there is no other organization or state on the planet that anyone could imagine being capable of pulling this off. The CIA, MI6, Canada’s CSIS – not a chance, especially that last one, and the CIA repeatedly failed to assassinate just one Caribbean dictator. Supposedly with an exploding cigar, as I recall. Russia’s FSB might be good at basic murder, poison umbrella tips to kill one man, but not something as complex and audacious as this. I don’t know much about China’s Ministry of State Security, I suspect no one does, and while they may have the technical chops for this, you still have to put your money on Mossad.

Later, like the next day, radios and walkie-talkies started blowing up, even more lethally. One can easily imagine Hezbollah operatives having the fridges and washing machines removed from their homes.

Sorry, I have nothing intelligent or insightful to say about this other than – you really don’t want Israel as an enemy.

 

A Tip of My Hat to Jagmeet Singh

Canadians will know that Jagmeet Singh is the leader of the Federal New Democratic Party, and they can probably guess that I have nothing good to say about any of the policies espoused by him or his party.

However, I want, here and now, to give him a public tip of my hat for something I saw him do in a video on the CBC website, which you can also view here (you’ll have to scroll to the bottom of the page for the ‘Featured Videos’, and I don’t know how long it will stay posted).

Singh is walking near Parliament with a staffer when two dudes come up behind him, filming him with their phones, and Dude 1 says out loud “Would you vote a non-confidence today if it came up?”

Singh ignores him and keeps walking.

Then Dude 1 can easily be heard to say “Corrupted bastard.”

At that Singh turns around and walks back toward Dude 1, saying “Wanna say something?”

Dude 1: “What?”

Singh: “Wanna say something to me?”

Dude 1: “I didn’t say nothing.”

And it goes on like that, with Dude 1, in the manner of confronted cowards everywhere, denying that he said anything, while his buddy, Dude 2, continues to film.

Security officers were right there the whole time, but I here tip my hat to Mr Singh for turning around and calling out the asshole who was only willing to insult him while Singh’s back was turned. Bravo, Mr Singh. Had Singh smacked the guy upside the head, no jury of real people would convict him of anything. The asshole asked for it.

 

What Does a 226% Improvement Smell Like?

Let me first express my thanks to Andrew Gelman of Columbia whose blog (see above) first brought this to my attention, so I can bring it to yours.

This is another of those ‘if it seems too good to be true it probably is’ research papers that I so love. The paper is titled ‘Overnight olfactory enrichment using an odorant diffuser improves memory and modifies the uncinate fasciculus in older adults’ and it was published in Frontiers in Neuroscience in July of 2023.

It reports on a study in which participants – all elderly, like me – were given scent diffusers to take home and use for two hours each night, starting when they went to bed – the diffusers automatically shut off two hours after they were started. The participants were given cognition tests and a functional MRI (that’s where the uncinate fasciculus bit in the title comes from) before they started the experiment and again six months later, after they had used the diffusers for that period of time.

The ‘treatment group’ got a set of 7 genuine essential oils to use in their diffusers, while the control group got ‘de minimis amounts of odorant’ according to the researchers. (Do you suppose those in the Control Group noticed that their diffusers produced no scent? But I digress.)

In the end there were a total of 43 participants in the two groups, and the headline result of this research was, quoting the paper,

“A statistically significant 226% improvement was observed in the enriched group compared to the control group on the Rey Auditory Verbal Learning Test….”

Pretty impressive, eh? Six months of smelling essential oils for two hours/night at bedtime, that’s all it took to get that huge improvement in Auditory Verbal Learning.

Anyone smell anything?

Here are some facts about this ‘controlled’ experiment. First, 43 participants? A bit more than 20 in each group? That is what statisticians call a small sample. But wait, there’s more. If you look at the flow chart of how they recruited and screened participants for this study, you find that 132 subjects passed the initial screening. Of these, only 68 were included in the Control and Treatment groups that were used in the statistical analysis of the results, and of those 68, 25 dropped out during the study. That leaves the 43 whose results are reported on, of the 132 who passed the screen.

Smell anything yet? Why did those 25 people drop out? That’s 36% of the 68 whose results were analyzed and reported. What does that drop out rate imply for the credibility of the results? People don’t drop out randomly, they do it for reasons.

And, as one of the readers of the Gelman blog pointed out, that 226% improvement claim comes from the control group scoring on average 0.73 points worse post-treatment than pre-treatment on a particular test, while the treatment group scored 0.92 points better on average. So you have a difference of 1.65 points in the two groups’ average ‘improvement’ on the test, and 1.65 is 2.26 times 0.73.

Interesting arithmetic. I think Gelman’s reader is right, as that 226% number doesn’t come up anywhere else in the paper. However, note that 1.65 being 2.26 times 0.73 is not the same as 1.65 being a 226% ‘improvement’ over 0.73. The latter would mean that it was more than three times greater, and it is not. Neuroscientists don’t do a lot of basic arithmetic, I guess. That detail aside, just looking at the difference in average scores for the two groups – what does a ‘point’ mean in this context, anyway? Is it big? How ‘big’ is a 1.65 point improvement on this test? What does that actually translate into, memory-wise? The researchers do not say.
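To see where the number probably came from, here is the arithmetic as a minimal sketch (the group averages are the ones quoted above; the variable names are mine):

```python
control_change = -0.73    # control group scored 0.73 points worse, on average
treatment_change = 0.92   # treatment group scored 0.92 points better, on average

difference = treatment_change - control_change   # 1.65 points
ratio = difference / abs(control_change)         # 1.65 / 0.73 ≈ 2.26

# What a genuine 226% improvement over the 0.73-point baseline would be:
implied_difference = abs(control_change) * (1 + 2.26)   # ≈ 2.38 points, not 1.65

print(round(difference, 2), round(ratio, 2), round(implied_difference, 2))
```

So 1.65 is 2.26 times 0.73, but a true ‘226% improvement’ over 0.73 would be about 2.38. Different things.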

One last thing. At the end of the paper there is a section called Funding, under which is written ‘This research was sponsored by Procter and Gamble.’

There, now you smell it, eh?

I know nothing about the journal Frontiers in Neuroscience, but if their editors did not stop and wonder about that 226% claim, well, I don’t think I’ll subscribe. Actually, it’s an open access journal so I can read it for free, as I did this smelly article. But I won’t. Read the journal, I mean. Unless maybe I’m looking for something else to write about here.

An Ex-Mayor and an Editor Walk into a Bar

A tip of the hat to The Wall Street Journal for putting this in their Notable and Quotable column. The Journal’s Editors don’t comment on whatever appears in this column, they just publish it for their readers to see. Having seen it, I have a comment or two.

As background, Keisha Lance Bottoms is a former mayor of Atlanta, Georgia, a city with a population of about 500,000, which is the centre of a metropolitan area of some 6 million. Not a small job.

Ms. Bottoms has a different job now, something in the commentariat business. After the Biden-Trump debate, she was interviewed on MSNBC, and here is the WSJ’s report of part of that conversation –

Former Atlanta Mayor Keisha Lance Bottoms speaks with MSNBC host Chris Jansing, July 1:

Ms. Jansing: Your hometown paper the Atlanta Journal-Constitution is among those saying it’s time for President Biden to pass the torch. The editorial board wrote, “This wasn’t a bad night. It was confirmation of the worst fears of some of Biden’s most ardent supporters.” . . .

Ms. Bottoms: Let me just say I was very disappointed with the Atlanta Journal-Constitution. We have talked about making sure we’re protecting elections and making sure there’s no undue influence. This was undue influence by the Atlanta Journal-Constitution or an attempt to influence. I think voters should be able to make decisions the same way they did in the primaries.

Ms. Jansing: But isn’t that what editorial boards are supposed to do?

Ms. Bottoms: Editorial boards are supposed to honor fair elections. I don’t think it’s fair when an editorial board with 10 people sitting in a room are trying to influence an election.

– There you have Ms. Bottoms’ take on the role of newspapers in the 21st century.

I’ll first just say that this is a good example of the tendency for supposedly knowledgeable people to say things that would have – even 20 years ago – been considered laughable.

Note the use by Ms. Bottoms of the terms ‘undue influence’, ‘protecting elections’, and ‘honor fair elections’. This is typical cant for most members of what passes for an intelligentsia in the 21st century. There is a list of unquestionable and unpardonable sins, like colonialism and oppression, ready and waiting to be attached to anything one is against. ‘Election influencing’ is another such sin – although I suspect only when practiced by the wrong people to influence elections in the wrong way.

Beyond that – what is it about the ‘10 people sitting in a room’ that is unfair? Would 5 people be fair, or would a thousand be more fair? Is it the fact they are sitting in a room at all that makes it unfair? Would it be fair if they were standing, or – kneeling?

And the sentence ‘I think voters should be able to make decisions the same way they did in the primaries.’ is beyond the pale. Does Ms. Bottoms believe that The Atlanta Journal-Constitution Editorial Board did not publish any commentary on the candidates in the primaries when those were being held? Or, perhaps they did not write those when sitting in a room.

It has become the sole job of virtually all political operatives, be they candidates, office-holders, advocates, activists or spin doctors, to quote talking points. Never mind a reasoned analysis, god forbid you should explain why you disagree with the Editorial Board’s position. Just get in your words – ‘honor fair elections’ – and retreat from the field claiming a score.

As I say, someone 20 years ago who said what Ms. Bottoms said above would have been laughed at. Today, I’m sure she has been favourably quoted by other, similar, political operatives.

The impact of strip clubs on sex crimes

I ended a post back on April 25 with the following question:

Does the presence of bricks-and-mortar adult entertainment establishments have a positive, negative, or no effect on the commission of sex crimes in the surrounding neighborhood?

I then asked you to consider what sort of data would be required to provide credible evidence as to the correct answer to that question.

Fair warning, this is going to be a longish article, but I would suggest that a credible answer to the first question above has some social value. And, full disclosure, this post is part of my ‘Studies show’ inoculation campaign.

‘Swat I do.

I do think the answer to this question is of more than passing interest.

If the presence of adult entertainment establishments (aees, henceforth) like strip clubs and such could be shown to reduce the incidence of sex crimes like sexual assault and rape, this might be counted as a reason to allow them to operate. If, on the other hand, they are associated with an increase in such crimes, then that is a reason to ban them entirely. The ban/allow decision for aees is of course complex, and other factors may also be important (e.g., links to organized crime, drug use). Still, the answer could be a significant input into city policy-setting on such places.

More disclosure, this is not a very original post. I got wind of all this reading Andrew Gelman’s Statistical Inference blog back when. However, he didn’t dig into the details much.  I have, and I think it is another nice illustration of an important principle: if it sounds really good, be skeptical.

Ok, then – our story begins with a paper by two economists titled “The Effect of Adult Entertainment Establishments on Sex Crime: Evidence from New York City”, written by Riccardo Ciacci and Maria Micaela Sviatschi and published in 2021 in The Economic Journal, a well-respected outlet in my old discipline.

The following sentence from the Abstract of their paper lays out what they find –

“We find that these businesses decrease sex crime by 13% per police precinct one week after the opening, and have no effect on other types of crime. The results suggest that the reduction is mostly driven by potential sex offenders frequenting these establishments rather than committing crimes.”

Trust me, if true, that’s a big deal. A 13% reduction on average, and in the first week after the aees open.

Social scientists rarely find effects of that size attributable to any single thing. That’s huge. One might even venture to say – unbelievable.

It is not surprising that The Economic Journal was happy to give space in its pages to publish these results. And, coming back to what I wrote above, what city politician could ignore the possibility that licensing aees in their jurisdiction might reduce sex crimes by 13%?

To dig deeper we return to the ‘extra credit’ question I posed in that post of April 25 – what kind of data would one need to answer the question?

Well, you need to be able to make a comparison of sex crime numbers between areas where aees operate, and areas where they do not. An obvious possibility is to find two political jurisdictions such that one contains aees, and the other, perhaps due to different laws, does not. Then you can compare the incidence of sex crimes in those two jurisdictions and get your answer.

That approach is just fraught with difficulties, all following from the fact that the two jurisdictions are bound to be different from one another in a whole host of ways, any one of which might be the reason for any sex-crime difference you find. Demographics, incomes, legal framework, policing differences, the list goes on and on. You can try to account for all that, but it’s very difficult, you need all kinds of extra data, and you can never be certain that any difference you find can actually be attributed to the presence/absence of aees.

The alternative is to look at a single jurisdiction, like NYC, and find data on where aees operate and where they do not. Now NYC is a highly heterogeneous place – it’s huge, and its neighborhoods differ a lot, so it sort of seems like we’re back to the same problem.

However, suppose you can get data on when and where aees open and close in NYC. Then, you have before and after data for each establishment and its neighborhood. If an aee started operating in neighborhood X on June 23, 2012, you can then look at sex crime data in that area before and after the opening date. You still want to assure yourself that nothing else important in that neighborhood changed around that same time, but that seems like a doable thing.
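As a minimal sketch of that before/after comparison, suppose you had weekly counts for one precinct (all numbers below are invented for illustration, not the paper’s data):

```python
from statistics import mean

# Hypothetical weekly sex-crime counts for one precinct, centred on the
# week an aee opened (invented numbers, purely for illustration).
weeks_before = [7, 5, 6, 8, 6, 7, 5, 6]
weeks_after = [5, 6, 4, 5, 6, 5, 4, 5]

before_avg = mean(weeks_before)
after_avg = mean(weeks_after)
pct_change = 100 * (after_avg - before_avg) / before_avg

print(f"before: {before_avg:.2f}/week, after: {after_avg:.2f}/week, "
      f"change: {pct_change:+.1f}%")
```

The paper’s actual estimation is a good deal more elaborate than this, of course, but the core idea is just this before/after contrast, establishment by establishment.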

This is pretty much what our economists did, as we will see, but there is still another issue: data on sex crimes.

All data on criminal activity carries with it certain problems. Data on arrests and convictions for crimes is generally pretty reliable, but crimes are committed all the time for which no arrests are made and/or no convictions occur. Still, the crimes occur, and for the purposes of this question, you want data on the occurrence of sex crimes, not on arrests for them.

We’ll come back to the crime data below, but I’ll start with the data on aees.

The authors note that if you are going to open a strip club in NYC there is a bureaucratic process to go through, and the first thing a potential operator of such has to do is register the business with a government bureau.

To quote directly from the paper:

“We construct a new data set on adult entertainment establishments that includes the names and addresses of the establishments, providing precise geographic information. We complement this with information on establishment registration dates from the New York Department of State and Yellow Pages, which we use to define when an establishment opened.”

So, the researchers know where each aee opened, and they know when, but do note for later that, for the ‘when’ bit, they use the date of registration with the NY Department of State.

The location that they get from the Dept and the Yellow Pages then allows the researchers to determine in which NYPD precinct the aee is located, and that is going to allow them to associate each aee, once it opens, with crime data from that precinct.

So, what crime data do they use? As I’ve noted, such data always has issues.

Here’s one thing the economists say about their crime data.

“The crime data include hourly information on crimes observed by the police, including sex crimes. The data set covers the period from 1 January 2004 to 29 June 2012. Since these crimes are reported by the police, it minimises the biases associated with self-reported data on sex crime.”

Ok, hold on. ‘Crimes observed by police’? What does that mean? How many of the people arrested for or even suspected of a crime by the police had that crime observed by the police? Speeders, stop-sign ignorers, perhaps? But burglars, murderers, and – the point here – sexual assault or rape? How often are those crimes observed by police?

The vast majority of crimes come to light and are investigated by police on the basis of a report by a private citizen. In the case of sex crimes, most often a victim is found somewhere or comes to the police after the crime has occurred, inducing police to begin an investigation.

This sentence from the paper clears things up….a bit.

“We categorise adult entertainment establishments by New York Police Department (NYPD) precincts to match crime data from the ‘stop-and-frisk’ program.”

Ah. You may remember NYC’s (in)famous ‘stop and frisk’ program of several years (and mayors) ago. NYPD officers would stop folks on the street and – chat them up. Ask questions of various kinds, and then fill out and turn in a card that recorded various aspects of the encounter. As we will see below, virtually none of these s-a-f encounters resulted in a report of a crime or an arrest.

So….’crime data’? From stop and frisk encounters? Need to know a lot more about how that data was used.

And we shall, but let’s go back to the other key bit of data – where and when aees opened in NYC. The date used for the aees ‘opening’ is, according to the quote above, the date on which each establishment was registered  with the NY Dept of State.

Can you think of any establishment that needs a city or health or any other license to operate, that actually starts serving customers the day after it files the licensing paperwork?

To be sure, I have never operated a business, but I don’t think that can possibly be how it works. For one thing, how many different licenses do you suppose a strip club needs to operate at all? A health inspection, a liquor license, a fire inspection, building safety certificate….?

This is not a detail, because the BIG Headline this paper starts with is that a strip club reduces the number of sex crimes in the precinct in which it is located in the first week of operation. If the researchers are using the date of registration to determine when that first week was – there’s a problem.

Ok, time to let the rest of the cats out of the proverbial bag. I mentioned above that I came on this research through a post on Gelman’s blog in which some folks expressed considerable skepticism about the economists’ findings. Those skeptics are, to give credit where due:

Brandon del Pozo, PhD, MPA, MA (corresponding author); Division of General Internal Medicine, Rhode Island Hospital/The Warren Alpert Medical School

Peter Moskos, PhD; Department of Law, Police Science, and Criminal Justice Administration, John Jay College of Criminal Justice, New York

John K. Donohue, JD, MBA; Center on Policing, Rutgers University

John Hall, MPA, MS; Crime Control Strategies, New York Metropolitan Transportation Authority Police Department

They lay out their issues with the paper in considerable detail in a paper of their own titled:

Registering a proposed business reduces police stops of innocent people? Reconsidering the effects of strip clubs on sex crimes found in Ciacci & Sviatschi’s study of New York City

which was published in Police Practice and Research, 2024-05-03.

This post is already quite long, so I am going to just give you the two most salient (in my opinion) points that are made by the skeptics in their paper.

First, as to the economists’ ‘sex crime’ data:

“The study uses New York City Police Department stop, question and frisk (SQF) report data to measure what it asserts are police-observed sex crimes, and uses changes in the frequency of the reports to assert the effect of opening an adult entertainment establishment on these sex crimes. These reports document forcible police stops of people based on less than probable cause, not crimes. Affirmatively referring to the SQF incidents included in the study as ‘sex crimes,’ which the paper does throughout (see p. 2 and p. 6, for example), is a category error. Over 94% of the analytic sample used in the study records a finding that there was insufficient cause to believe the person stopped had committed a crime….In other words, 94% of the reports are records of people who were legally innocent of the crime the police stopped them to investigate.”

And then, for the data on the openings of aees:

“This brings us back to using the date a business is registered with New York State as a proxy for its opening date, considering it provides a discrete date memorialized by a formal process between the government and a business. However, the date of registration is not an opening date, and has no predictable relationship to it, regardless of the type of business, or whether it requires the extra reviews necessary for a liquor license. New York City’s guidance to aspiring business owners reinforces the point that registration occurs well before opening.”

I close with the following. It turns out our four skeptics sent a comment to The Economic Journal laying out all their concerns about the original research, the Journal duly sent said commentary on to the authors, Ciacci and Sviatschi, and those authors responded that they did not think these concerns affected the important points in their paper. So, the journal not only did not retract the paper, it declined to publish a Commentary on its findings by the four skeptics. (Econ Journals do publish such Comments from time to time. Not this time.)

I mean, that would just make the original authors – and the Journal – look bad, no? The skeptics did, as we saw, eventually get their concerns into the public domain via a different publication – one read by pretty much nobody who reads The Economic Journal, I’m thinking.

Again – if it seems too good to be true…Objects in mirror may be smaller than they appear.

Sample-based bullshit

Standards in general, and in academia in particular, are a keen concern of mine, and I will be writing about them frequently here. This post is about an open letter written by a faculty member in the Department of Computer Science at the University of Regina to the Head of the Department, a letter which he shared in a publication to which I subscribe.

The letter concerns an email sent to all CS students and faculty at UR on August 30, 2023, in which the following sentence appears – “In an effort to provide timely feedback on student work, some of our courses will be moving to a sample-based marking approach.”

The email goes on to explain what that means – that when a student turns in an assignment or test, not all of it will be marked. The parts to be marked will not be revealed until after the marked work has been returned to the students, and their grade will be determined only by that which is marked. So if a test consists of 6 problems, perhaps only three will be marked and feedback provided; the same three for all students.

The reason given for this is the increase in the size of CS classes, driven in turn by an increase in the number of CS majors, said to be nearly 1000 in the email.

The faculty member wrote his open letter to the Head of his Department (Computer Science) decrying this new grading approach, and explaining why he thought it would lead to a decline in academic standards.

I will first just record here that ‘sample-based marking’ as described is in itself a reduction in academic standards. When I taught at University my assignments and tests were conceived as a whole, different parts of them designed to test different parts of the material, but also different abilities. Some questions could not be answered well without having the ability to write clearly and concisely about something complex, while other questions were designed to test one’s ability to deal with more formal logical or technical issues. To mark or provide feedback on only some aspects of the work is to ignore some part of what the course is about.

I understand well that the idea behind this is that, because the students don’t know up-front which parts of their work will be marked, they still have an incentive to work hard learning all of it. This does not change the fact they will get no feedback on some of their work, a primary point of marking. But in addition, anyone who knows students knows this will lead to a cottage industry in figuring out which parts of work any given instructor is likely to mark, which is not in any way part of what higher education is supposed to teach students.

This policy is, in the end, a further piece of evidence as to what University administrators’ goal is. Get as many students through to a degree as possible, at as low a cost as possible. So far as I can tell, their political masters in Canada are perfectly in agreement with this goal.

This is why sample-based marking is being implemented, rather than the solution suggested by the CS faculty member who objected to it; hiring more faculty to accommodate the growing number of students. Faculty are expensive. And, note that the letter did not indicate that CS students would be seeing a discount on their tuition bill to accompany this sample-based marking initiative.

Imagine a McDonald’s franchise-holder, or local restaurateur, who found themselves with a (delightful) increase in patronage, and responded by filling only part of all food orders, rather than hiring more workers, while charging for everything ordered.

One final note. The original email laying out how this scheme is envisioned working at UR also says that the parts of any student work that are not marked will none the less have solutions posted, or will be gone over in class, so as to provide students with the correct solutions. So, there’s your all-around feedback, eh?

Right. In a university atmosphere in which students feel free – indeed, are encouraged – to argue for higher marks for most any reason they can think up, this will open up a whole new area of student appeals. To wit: “I got the parts of the exam you did not mark nearly perfect, according to your own solution key, so I deserve much more than the 63 I received, which is based solely on the parts you did mark.”…followed by the ever-popular ‘This isn’t fair.’


Following the Hot Hand of Science

Anyone vaguely familiar with basketball has heard of the ‘hot hand’ phenomenon. Someone on the team gets a hot shooting streak going, they can’t seem to miss, and their teammates start looking to get the hot-handed player the ball. I played backyard hoops a lot in my youth, and there were (very few) times when it happened to me; every shot I threw up seemed to go in – briefly.

Well, academics got wind of this long ago also, and decided to investigate whether there was anything to it. Yea, sure, players talk about experiencing it, or seeing it, but it could easily be just a matter of perception, something that would disappear into the ether once subjected to hard-nosed observation and statistical analysis.

The canonical paper to do this analysis was published in 1985 in Cognitive Psychology, authored by Gilovich, Vallone and Tversky. The last of this trio, Amos Tversky, was a sufficiently notable scholar that young economists like me were told to read some of his work back in the day. He died young, age 59, in 1996, six years before his frequent co-researcher, Daniel Kahneman, was awarded the Nobel Prize in Economics. The work the Nobel committee cited in awarding the prize to Kahneman was mostly done with Tversky, so there is little doubt Tversky would have shared the prize had he lived long enough, but Nobels are, by rule, not given to the dead.

Now, as a research question, looking for a basketball hot hand is in many ways ideal: the trio used data on shots made and missed by players in the NBA, which tracks such data very carefully, and beyond that, they did their own controlled experiment, putting the Cornell basketball teams to work taking shots, and recording the results. Good data is everything in social science, and the data doesn’t get much better than that. Well, bear with me here, this is most of the Abstract of that 1985 paper:

“Basketball players and fans alike tend to believe that a player’s chance of hitting a shot are greater following a hit than following a miss on the previous shot. However, detailed analyses of the shooting records of the Philadelphia 76ers provided no evidence for a positive correlation between the outcomes of successive shots. The same conclusions emerged from free-throw records of the Boston Celtics, and from a controlled shooting experiment with the men and women of Cornell’s varsity teams. The outcomes of previous shots influenced Cornell players’ predictions but not their performance. The belief in the hot hand and the “detection” of streaks in random sequences is attributed to a general misconception of chance according to which even short random sequences are thought to be highly representative of their generating process.”

That is, a player who hits a shot expects he is likely to hit the next one, too. When he does, he files this away as ‘having a hot hand’, but the actual frequency with which he hits the second shot is not actually higher than when he had missed his previous shot. Standard ‘cognitive bias’ causes the player – and fans – to see it that way, that’s all. They remember when the second shot is made more than they remember it being missed.

Damn scientists are always messing with our hopes and dreams, right? No Easter Bunny, no extra-terrestrials in Mississauga, and no hot hand. Is nothing sacred?  Other researchers went looking for evidence of a hot hand over the ensuing years, but it became known in academic circles as ‘the hot hand fallacy’, the general consensus being that it did not exist in the real world of basketball.

33 years later

But wait, it’s now 2018 and a paper by Miller and Sanjurjo appears in Econometrica, the premier journal for economic analysis involving probability and/or statistics. Its title is “Surprised by the hot-hand fallacy? A truth in the law of small numbers”.

Here’s some of what their Abstract says:

We prove that a subtle but substantial bias exists in a common measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data…. We observe that the canonical study [that is, Gilovich, Vallone and Tversky] in the influential hot hand fallacy literature, along with replications, are vulnerable to the bias. Upon correcting for the bias, we find that the longstanding conclusions of the canonical study are reversed.

It took over 30 years for two economists to figure out that ‘the canonical study’ of the hot hand did its ciphering wrong, and that once this is corrected, its findings are not just no longer true, they are reversed. The data collected in 1985 do provide evidence of the existence of a hot hand.
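The bias itself is easy to demonstrate with a toy simulation (my sketch, not Miller and Sanjurjo’s code): generate short sequences of 50/50 shots, compute within each sequence the fraction of makes that immediately follow a make, and average that fraction across sequences.

```python
import random

def prop_hits_after_hit(shots):
    """Fraction of makes among attempts that immediately follow a make."""
    after_hit = [shots[i + 1] for i in range(len(shots) - 1) if shots[i] == 1]
    return sum(after_hit) / len(after_hit) if after_hit else None

random.seed(1)
props = []
for _ in range(200_000):
    shots = [random.randint(0, 1) for _ in range(4)]  # a 50% shooter, no hot hand
    p = prop_hits_after_hit(shots)
    if p is not None:          # skip sequences with no make to condition on
        props.append(p)

print(sum(props) / len(props))  # ≈ 0.40, not the 0.50 naive intuition expects
```

For four-shot sequences that average comes out near 0.40, even though every shot is an independent 50/50 proposition. So a real shooter whose measured hit rate after a hit merely equals his overall hit rate is actually running hotter than chance – which is the direction of the 2018 reversal.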

Think about this. In 1985 some very clever academics showed there was no such thing as a hot hand in the real world of basketball, and the academics who peer-reviewed their work agreed with them. Thirty-plus years later, some other clever academics realized that first set had gotten something wrong, and that fixing it reversed the previous findings – and the academics who peer-reviewed their work agreed with them.

Ain’t social science wonderful? A question for which there is excellent data, a situation rarer than hen’s teeth in social science, is investigated and a conclusive answer arrived at, and thirty years later that answer is shown to be not just wrong but backwards.

No one did anything shady here. There was no messing with data, the 2018 guys used the same data used in 1985. A mistake, a subtle but significant mistake, accounts for the turnaround, and it took 33 years to discover it. One can hardly blame the 1985 researchers for not seeing the mistake, given that no one else did for such a long time.

The Lesson?

So, in case my point is not yet obvious, science is not a set of settled facts. Those do exist – sort of – but anyone who understands the process of science even a little understands that settled facts are settled only until they are overturned. And if that is true for such a clean research question as an investigation of a basketball hot hand, think about a more typical social science question in which two things are almost always true. One, the data is not at all what the researchers need, so they make do with what they can actually gather. Two, the right way to analyze that data – among endless possibilities – is a matter of disagreement among respectable social scientists. Following that kinda science will make you dizzy, my friends.

A teaser: think about this social scientific question. It is arguably of more importance than basketball shooting.

Does the availability of bricks-and-mortar adult entertainment establishments have a positive, negative, or no effect on the commission of sex crimes in the surrounding neighborhood?

Whaddya think is the right answer?

For extra credit: what kind of data would a researcher need to gather to answer that question?

Now that’s real (i.e., messy) social science.

Stay tuned, because a couple of economists set out to investigate the question above, and I’ll have a go at what they did and their findings in a future post.


Streaming service warnings, or…..huh?

A pervasive feature of the 21st century in North America is the deterioration in the quality of written language. Words with quite precise meanings, like ‘phone’, ‘mail’, ‘email’ and ‘text’, get replaced with the coverall ‘reach out’.

I have access to exactly one internet streaming service, and it provides one of the more amusing examples of language abuse in the warnings it attaches to the previews of the films that one can watch on it.

Now, some of these warnings are easily understandable: Nudity, Sex, Violence – the Classics. Attaching any of these to the preview of a film is particularly useful to any teens or pre-teens who live in the household. I have experience from an earlier era. In my pre-teen years my good Polish Catholic parents subscribed to The Catholic Chronicle, a weekly paper put out by the local diocese. This featured a lot of boring stuff I never read, but it also provided ratings of all the movies that would be shown that week on the 5 or 6 TV stations available in our town. Those ratings told me which channel to put on when I stayed up past my parents’ bedtime on Friday or Saturday night. I was most grateful to the Bishop for this service, even though nothing on TV in that era was actually all that scandalous. It doesn’t really take much to get a 12-year-old boy excited.

However, contemporary warning words beyond that Big Three are rather more mysterious to me.

One warning is Language. Not Profanity, not Cussin’, not even Bad Language, just – Language. That seems to suggest that the characters in the film are going to talk, but there is also another warning of Pervasive Language. I suppose it is useful for some people to know there will be a lot of talking, so they should pause the stream if they have to go to the bathroom.

There is also a warning for Smoking, which I presume is due to our enlightened age realizing that all it takes is for some young’n to see someone smoking in a film to provoke them to go out and steal some smokes and try it themselves.

However, there is also a distinct warning about Historical Smoking. Clearly this would be attached to a film set in the past in which people smoke. What is not clear to me is whether the distinction is made because seeing past smoking is more or less harmful than seeing current smoking. Whichever way it is, why is there not then a warning about Historical Nudity (Adam and Eve?) or Historical Violence (Conan the Barbarian?) or, really – Historical Sex; you know, before people knew how to do it right like we do.

Undoubtedly, the biggest mystery to me is when a film preview comes with this warning:

Some Thematic Elements

Whatever in the hell does that mean? I can’t even make a joke about it.

One might think that, whatever the environment, posting a warning whose meaning is unclear would be a terrible idea. Do we want Environment Canada putting out Alerts that say Something Might be Coming? [I admit, EnvCan’s Special Weather Statements are pretty close to that.]

However, here in the 21st century, when offence lurks around every corner, it may be that posting a warning whose meaning no one understands has real value.

Consider this scenario – a subscriber phones up or texts the customer service dept of the service.

Subscriber: “Hey, that movie had a blonde-haired woman chasing a blue aardvark around with a flyswatter, that was appalling, I had no idea me and the kids would be exposed to that. What is wrong with you people?”

Customer Service: “Ah, but Madam, we did make it clear the movie contained Some Thematic Elements.”

Young, Rogan and the Cost of Principles

Came across an article in the Wall Street Journal last month headlined as:

Neil Young Will Return to Spotify After Two-Year Boycott Over Joe Rogan

Singer-songwriter says he had no choice but to return to streaming platform due to wider distribution of Rogan’s podcast

March 13, 2024 Gareth Vipers

For you non-WSJ subscribers who may have forgotten what this is all about, here’s a quote from the WSJ piece:

Young penned an open letter to his manager and label in 2022 asking them to remove his music from the platform, saying it was spreading fake information about Covid-19 vaccines through Rogan’s show.

The article explains that in fact, “…Young’s label legally has control over how and where his music is distributed…” but Vipers claims that they had reason to honor his request. The piece does not say if they actually did, and if they did not, then this would seem to have been a rather empty gesture on ol’ Neil’s part.

Anyway, the point of this piece was that Rogan had since 2022 made a very lucrative deal to have his podcast more widely streamed, including on Apple and Amazon, and in light of that, Young was going to start letting his musical recordings be distributed on Spotify again. [I am inferring from that piece of info that Warner Bros did indeed pull his stuff from Spotify in ’22.]

I am a fan of Young’s music. Hearing Cinnamon Girl blasting out of a pair of car speakers was one of the great thrills of my youth, and one of the few truly wonderful musical moments on the old Saturday Night Live show was when Young and Crazy Horse brought down the house with a searing version of Rockin’ in the Free World. The man was a serious rocker, and he wrote some great songs.

One of my favourite Neil Young moments was in 1988, when he put out an album titled This Note’s for You. It was a blast at other musicians who allow their music to be used to sell shit. One of my (admittedly costless to hold) convictions is that musicians (or actors or other performers) who have made serious money in their career and then allow their output or their selves to be used to sell shit – any shit – are putzes who I wouldn’t trust if I ever ran into them.

As one example, I was depressed a couple of years ago to hear the Who’s Eminence Front – one of their best recordings – being used to sell Nissans. From the movies we have Samuel L Jackson, Danny DeVito, Rob Lowe, Matthew McConaughey, Jennifer Garner and on and on….one sees them more often in ads than in movies.

[I would like to think there is a special place in hell for celebs who accept money to promote online gambling sites – Gretzky, McDavid, Matthews, Jamie Foxx, etc. But I’m sure there’s not.]

These people are not needy. I’m an economist, I get it, no one thinks they have enough money, but I happen to think there ought to be some things one will not do for more. And no, I am not saying that celebrities or people with more wealth than some specified number should be prohibited from selling other people’s shit. They all have a perfect right to do what they are doing. I’m really only saying I think less of them for doing it – which troubles them not in the least, I know.

So back to ol’ Neil. His original move to pull his music from Spotify had two characteristics. One, it harmed Spotify – maybe. Spotify operates a subscription model in which folks pay a monthly fee for the right to listen to music from its catalog, including Young’s. So, it would appear that Young’s move hurt Spotify only to the extent that people cancelled their Spotify plans, either out of sympathy with Young, or simply because they would no longer be able to listen to his tunes on the platform. I don’t know if that happened (though I rather doubt it), but more important to me is the second characteristic, which is that Young paid a price himself for doing that. He lost his share of that revenue, too, and about that there is no doubt. To me, that speaks to a level of integrity in Young. I don’t mean to say I agree with Young’s apparent position that Rogan is evil. I’ve never listened to one of Rogan’s podcasts, and don’t know what was said on them that upset Young. My point is only that incurring a cost yourself over a principle signals integrity. Anyone can run around bashing others and imposing costs on them; people do that just for amusement. Taking a hit yourself says something: it says you mean it. Similarly, Young’s apparent past refusal to let his music be used to sell shit cost him real $. Someone would surely have paid him to use his music to sell cinnamon or something, back in the day.

Of course, the corollary to all this is that Neil could have reacted to the recent news of the now-wider distribution of Rogan’s podcast by asking Warner Bros to pull his music from Amazon and Apple, too. That would be even more costly to Young, and would leave me even more impressed with his integrity and commitment. What he has actually done (allowing his music back on Spotify, according to the article, while leaving it on the other platforms) says to this observer that Young was not willing to pay that high a price for his principles.

And, to be clear, by ‘price’ I am not pointing only to the money he would lose from streaming payments. He’s a musician, composer and performer, and having people hear his music has been his life’s work. Losing that is a serious price to pay, even were no cash involved.

I judge Neil Young not, and I still thank him for putting out the This Note’s for You album and writing and recording Cinnamon Girl. I merely point out that everything has a price, and we all have to decide which prices we will pay and which we will not, and I continue to believe that those who pay a price to adhere to a principle deserve my respect, if not necessarily my agreement. And – those who have made millions, become famous and then go on to accept money to sell other people’s shit deserve my contempt.

Btw, if I’m right that Young’s original move in 2022 cost Spotify nothing, it raises another question: what was ol’ Neil trying to accomplish? Topic for another post, perhaps.

Sci-fi in aid of Science

I was a pretty big fan of science fiction in my younger days, and still read some from time to time. I think Frank Herbert’s Dune is a great novel (the sequels not so much), and I enjoyed reading works by Heinlein, Le Guin and Asimov.

One of the genre’s leading lights back then was Arthur C Clarke, who wrote the novel 2001: A Space Odyssey (in 1982) [not true, see below] on which the film was based. I was not a Clarke fan, don’t remember that I read any of his stuff. However, he made an interesting contribution to the culture beyond his books themselves, when he formulated three ‘laws’ regarding technology that have come to be known as Clarke’s Laws. He didn’t proclaim these all at once, and in any case it is the third law that is most cited, which so far as I can determine first appeared in a letter he wrote to Science in 1968. [If anyone has better info on the third law’s original appearance and antecedents I’d love to hear it.]

Clarke’s Third Law is: ‘Any sufficiently advanced technology is indistinguishable from magic.’

That strikes me – and many others, apparently – as a perceptive statement. Think of how someone living in 1682 anywhere in the world would regard television or radio. 

As with any perceptive and oft-repeated assertion, this one prompted others to lay down similar edicts, such as Grey’s Law: “Any sufficiently egregious incompetence is indistinguishable from malice.”

[I cannot trace Grey’s law back to anyone named Grey – if you can, let me know.]

Note that there is a difference, as Clarke’s law speaks to how something will be perceived, whereas Grey’s points at the consequences of incompetence vs malice. If you are denied a mortgage by a bank despite your stellar credit rating, the impact on you of that decision does not depend on whether it is attributable to the credit officer’s incompetence or dislike of you. 

On to Science, then, and what I will call Gelman’s Law (although Gelman himself does not refer to it that way). 

Most non-academics I know view academics and their research with a somewhat rosy glow. If someone with letters after their name writes something, and particularly if they write it in an academic journal, they believe it. 

It does nothing to increase my popularity with my friends to repeatedly tell them: it ain’t so. There is a lot of crappy (a technical academic term, I will elaborate in future posts) research being done, and a lot of crappy research being published, even in peer-reviewed journals. What is worse is that as far as I can tell, the credible research is almost never the stuff that gets written up in the media. Some version of Gresham’s Law [‘bad money drives out good money’] seems to be at work here. 

A blog that I read regularly is titled Statistical Modeling, Causal Inference and Social Science (gripping title, eh?), written by Andrew Gelman, a Political Science and Stats prof at Columbia U. I recommend it to you, but warn that you better have your geek hard-hat on for many of the posts. 

Although I often disagree with Gelman, he generally writes well and I have learned tons from his blog. One of the things that has endeared it to me is his ongoing campaign against academic fraud and incompetent research. 

He has formulated a Law of his own, which he modestly attributes to Clarke, but which I will here dub Gelman’s Third Law: 

“Any sufficiently crappy research is indistinguishable from fraud.”

I think this law combines the insights of Clarke’s and Grey’s. The consequences of believing the results from crappy research do not differ from the consequences of believing the results from fraudulent research, as with Grey. However, it is also true that there is no reason to see the two things as different. If you are so incompetent at research as to produce crap, then you should be seen as a fraud, as with Clarke. 

I will be writing about crappy/fraudulent research often here, in hopes of convincing readers that they should be very skeptical the next time they read those deadly words: “Studies show…”

I will close this by referring you, for your reading pleasure, to a post by Gelman titled:

It’s bezzle time: The Dean of Engineering at the University of Nevada gets paid $372,127 a year and wrote a paper that’s so bad, you can’t believe it.

It’s a long post, but non-geeky, and quite illuminating. (Aside: I interviewed for an academic position at U of Nevada in Reno a hundred years ago. They put me up in a casino during my visit. Didn’t gamble, didn’t get a job offer.) You can read more about this intrepid and highly paid Dean here. His story is really making the (academic) rounds these days. 

You’re welcome, and stay tuned. I got a million of ‘em….

p.s. Discovered this since I wrote the above, but before posting. One of many reasons this stuff matters, from Nevada Today:

University receives largest individual gift in its history to create the George W. Gillemot Aerospace Engineering Department 

The $36 million gift is the largest individual cash gift the University has received in its 149-year history 

Anyone care to bet on whether this Dean gets canned?

Corrigendum: An alert reader has pointed out that Clarke’s novel was not written in 1982 – the novel and the film both appeared in 1968. In fact the 2001 film was based largely on one of Clarke’s short stories from 1951: The Sentinel. Clarke did write a novel called 2010: Odyssey Two, in 1982, and a not-so-successful movie was based on that, in 1984.