Following the Hot Hand of Science

Anyone vaguely familiar with basketball has heard of the ‘hot hand’ phenomenon. Someone on the team gets a hot shooting streak going, they can’t seem to miss, and their teammates start looking to get the hot-handed player the ball. I played backyard hoops a lot in my youth, and there were (very few) times when it happened to me; every shot I threw up seemed to go in – briefly.

Well, academics got wind of this long ago too, and decided to investigate whether there was anything to it. Yeah, sure, players talk about experiencing it, or seeing it, but it could easily be just a matter of perception, something that would disappear into the ether once subjected to hard-nosed observation and statistical analysis.

The canonical paper to do this analysis was published in 1985 in Cognitive Psychology, authored by Gilovich, Vallone, and Tversky. The last of this trio, Amos Tversky, was a sufficiently notable scholar that young economists like me were told to read some of his work back in the day. He died young, at age 59, in 1996, six years before his frequent co-researcher, Daniel Kahneman, was awarded the Nobel Prize in Economics. The work the Nobel committee cited in awarding the prize to Kahneman was mostly done with Tversky, so there is little doubt Tversky would have shared the prize had he lived long enough, but Nobels are, by rule, not given to the dead.

Now, as a research question, looking for a basketball hot hand is in many ways ideal: the trio used data on shots made and missed by players in the NBA, which tracks such data very carefully, and beyond that, they did their own controlled experiment, putting the Cornell basketball teams to work taking shots, and recording the results. Good data is everything in social science, and the data doesn’t get much better than that. Well, bear with me here, this is most of the Abstract of that 1985 paper:

“Basketball players and fans alike tend to believe that a player’s chances of hitting a shot are greater following a hit than following a miss on the previous shot. However, detailed analyses of the shooting records of the Philadelphia 76ers provided no evidence for a positive correlation between the outcomes of successive shots. The same conclusions emerged from free-throw records of the Boston Celtics, and from a controlled shooting experiment with the men and women of Cornell’s varsity teams. The outcomes of previous shots influenced Cornell players’ predictions but not their performance. The belief in the hot hand and the “detection” of streaks in random sequences is attributed to a general misconception of chance according to which even short random sequences are thought to be highly representative of their generating process.”

That is, a player who hits a shot expects he is likely to hit the next one, too. When he does, he files this away as ‘having a hot hand’, but the frequency with which he hits the second shot is, in fact, no higher than when he had missed his previous shot. Standard ‘cognitive bias’ causes the player – and fans – to see it that way, that’s all. They remember the times the second shot went in more than the times it missed.
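To make the 1985 test concrete, here is a minimal sketch of the basic comparison (my own illustration in Python, not the authors’ code, and the shot record below is invented): take a player’s sequence of makes and misses and compare the hit rate on shots that follow a make with the hit rate on shots that follow a miss.

```python
def conditional_hit_rates(shots):
    """Compare the hit rate after a make vs. after a miss in one shot record.
    `shots` is a list of booleans: True = make, False = miss."""
    after_hit = [shots[i] for i in range(1, len(shots)) if shots[i - 1]]
    after_miss = [shots[i] for i in range(1, len(shots)) if not shots[i - 1]]

    def rate(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    return rate(after_hit), rate(after_miss)

# A made-up shot record: 1 = make, 0 = miss
record = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
p_hit, p_miss = conditional_hit_rates([bool(s) for s in record])
print(f"hit rate after a make: {p_hit:.2f}, after a miss: {p_miss:.2f}")
```

The 1985 paper ran comparisons of essentially this kind, along with serial-correlation tests and longer streaks, player by player, and found no reliable gap between the two rates.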

Damn scientists are always messing with our hopes and dreams, right? No Easter Bunny, no extra-terrestrials in Mississauga, and no hot hand. Is nothing sacred? Other researchers went looking for evidence of a hot hand over the ensuing years, but belief in it became known in academic circles as ‘the hot hand fallacy’, the general consensus being that it did not exist in the real world of basketball.

33 years later

But wait, it’s now 2018 and a paper by Miller and Sanjurjo appears in Econometrica, the premier journal for economic analysis involving probability and/or statistics. Its title is “Surprised by the hot-hand fallacy? A truth in the law of small numbers.”

Here’s some of what their Abstract says:

We prove that a subtle but substantial bias exists in a common measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data…. We observe that the canonical study [that is, Gilovich, Vallone, and Tversky] in the influential hot hand fallacy literature, along with replications, are vulnerable to the bias. Upon correcting for the bias, we find that the longstanding conclusions of the canonical study are reversed.

It took over 30 years for two economists to figure out that ‘the canonical study’ of the hot hand did its ciphering wrong, and that once this is corrected, its findings are not just no longer true, they are reversed. The data collected in 1985 do provide evidence of the existence of a hot hand.
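The bias is easy to see in simulation. Below is a minimal sketch of the idea as I understand it (my own code, not Miller and Sanjurjo’s): give a simulated shooter a fixed 50% chance on every shot, so there is no hot hand by construction, then measure within each 100-shot sequence the proportion of makes on shots that immediately follow three straight makes, and average that proportion across shooters, which is essentially how the 1985 study aggregated its data.

```python
import random

def prop_make_after_streak(shots, k):
    """Proportion of makes among shots that immediately follow k straight makes.
    Returns None if the sequence has no run of k makes before its last shot."""
    followers = [shots[i] for i in range(k, len(shots)) if all(shots[i - k:i])]
    if not followers:
        return None
    return sum(followers) / len(followers)

random.seed(0)
n_shots, streak_len, n_shooters = 100, 3, 100_000
estimates = []
for _ in range(n_shooters):
    shots = [random.random() < 0.5 for _ in range(n_shots)]  # 50% shooter, no hot hand
    p = prop_make_after_streak(shots, streak_len)
    if p is not None:
        estimates.append(p)

print(sum(estimates) / len(estimates))  # comes out roughly 0.46, not 0.50
```

Nothing about any individual shooter changed; selecting the shots that follow streaks inside a finite sequence drags the measured proportion below the true 50%. So when the 1985 data showed post-streak hit rates roughly equal to baseline, that was not the “no hot hand” result it appeared to be: after correcting for the downward bias, those rates sit above baseline.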

Think about this. In 1985 some very clever academics showed there was no such thing as a hot hand in the real world of basketball, and the academics who peer-reviewed their work agreed with them. Thirty-plus years later, some other clever academics realized that the first set had gotten something wrong, and that fixing it reversed the previous findings – and the academics who peer-reviewed their work agreed with them.

Ain’t social science wonderful? A question for which there is excellent data, a situation rarer than hen’s teeth in social science, is investigated and a conclusive answer arrived at, and thirty years later that answer is shown to be not just wrong but backwards.

No one did anything shady here. There was no messing with the data; the 2018 guys used the same data used in 1985. A mistake, a subtle but significant mistake, accounts for the turnaround, and it took 33 years to discover it. One can hardly blame the 1985 researchers for not seeing the mistake, given that no one else did for such a long time.

The Lesson?

So, in case my point is not yet obvious, science is not a set of settled facts. Those do exist – sort of – but anyone who understands the process of science even a little understands that settled facts are settled only until they are overturned. And if that is true for such a clean research question as an investigation of a basketball hot hand, think about a more typical social science question in which two things are almost always true. One, the data is not at all what the researchers need, so they make do with what they can actually gather. Two, the right way to analyze that data – among endless possibilities – is a matter of disagreement among respectable social scientists. Following that kinda science will make you dizzy, my friends.

A teaser: think about this social scientific question. It is arguably of more importance than basketball shooting.

Does the availability of bricks-and-mortar adult entertainment establishments have a positive, negative, or no effect on the commission of sex crimes in the surrounding neighborhood?

Whaddya think is the right answer?

For extra credit: what kind of data would a researcher need to gather to answer that question?

Now that’s real (i.e., messy) social science.

Stay tuned, because a couple of economists set out to investigate the question above, and I’ll have a go at what they did and their findings in a future post.

Who needs experts – and who needs AI?

I had planned to follow up my post about the Freeps printing a ‘news’ article in which the only news was a set of comments by one person (link) with a post on the use of ‘experts’ in media more generally. Before I could, a regular reader pointed me at a piece on a site called The Hub that covered the same ground. Having read Howard Anglin’s piece carefully, and enjoyed it very much, I’ve decided the best thing for me to do is just provide my own readers with a link to it (link).

I can’t see myself writing anything better than he did… at least not yet.

Sci-fi in aid of Science

I was a pretty big fan of science fiction in my younger days, and still read some from time to time. I think Frank Herbert’s Dune is a great novel (the sequels not so much), and I enjoyed works by Heinlein, Le Guin, and Asimov.

One of the genre’s leading lights back then was Arthur C. Clarke, who wrote the novel 2001: A Space Odyssey (in 1982) [not true, see below] on which the film was based. I was not a Clarke fan; I don’t remember reading any of his stuff. However, he made an interesting contribution to the culture beyond his books themselves when he formulated three ‘laws’ regarding technology that have come to be known as Clarke’s Laws. He didn’t proclaim these all at once, and in any case it is the third law that is most often cited; so far as I can determine, it first appeared in a letter he wrote to Science in 1968. [If anyone has better info on the third law’s original appearance and antecedents, I’d love to hear it.]

Clarke’s Third Law is: ‘Any sufficiently advanced technology is indistinguishable from magic.’

That strikes me – and many others, apparently – as a perceptive statement. Think of how someone living in 1682 anywhere in the world would regard television or radio. 

As with any perceptive and oft-repeated assertion, this prompted others to lay down similar edicts, such as Grey’s Law: “Any sufficiently egregious incompetence is indistinguishable from malice.”

[I cannot trace Grey’s law back to anyone named Grey – if you can, let me know.]

Note that there is a difference: Clarke’s law speaks to how something will be perceived, whereas Grey’s points at the consequences of incompetence vs. malice. If you are denied a mortgage by a bank despite your stellar credit rating, the impact on you of that decision does not depend on whether it is attributable to the credit officer’s incompetence or dislike of you.

On to Science, then, and what I will call Gelman’s Law (although Gelman himself does not refer to it that way). 

Most non-academics I know view academics and their research with a somewhat rosy glow. If someone with letters after their name writes something, and particularly if they write it in an academic journal, they believe it. 

It does nothing to increase my popularity with my friends to repeatedly tell them: it ain’t so. There is a lot of crappy (a technical academic term, I will elaborate in future posts) research being done, and a lot of crappy research being published, even in peer-reviewed journals. What is worse is that as far as I can tell, the credible research is almost never the stuff that gets written up in the media. Some version of Gresham’s Law [‘bad money drives out good money’] seems to be at work here. 

A blog that I read regularly is titled Statistical Modeling, Causal Inference and Social Science (gripping title, eh?), written by Andrew Gelman, a Political Science and Stats prof at Columbia U. I recommend it to you, but warn that you better have your geek hard-hat on for many of the posts. 

Although I often disagree with Gelman, he generally writes well and I have learned tons from his blog. One of the things that has endeared it to me is his ongoing campaign against academic fraud and incompetent research. 

He has formulated a Law of his own, which he modestly attributes to Clarke, but which I will here dub Gelman’s Third Law: 

“Any sufficiently crappy research is indistinguishable from fraud.”

I think this law combines the insights of Clarke’s and Grey’s. The consequences of believing the results of crappy research do not differ from the consequences of believing the results of fraudulent research, as with Grey. And, as with Clarke, there is no way to tell the two apart from the outside: research incompetent enough to produce crap looks just like fraud.

I will be writing about crappy/fraudulent research often here, in hopes of convincing readers that they should be very skeptical the next time they read those deadly words: “Studies show…”

I will close this by referring you, for your reading pleasure, to a post by Gelman titled:    

 It’s bezzle time: The Dean of Engineering at the University of Nevada gets paid $372,127 a year and wrote a paper that’s so bad, you can’t believe it.

It’s a long post, but non-geeky, and quite illuminating. (Aside: I interviewed for an academic position at U of Nevada in Reno a hundred years ago. They put me up in a casino during my visit. Didn’t gamble, didn’t get a job offer.) You can read more about this intrepid and highly paid Dean here. His story is really making the (academic) rounds these days. 

You’re welcome, and stay tuned. I got a million of ’em…

P.S. I discovered this after writing the above, but before posting. One of many reasons this stuff matters, from Nevada Today:

University receives largest individual gift in its history to create the George W. Gillemot Aerospace Engineering Department 

The $36 million gift is the largest individual cash gift the University has received in its 149-year history 

Anyone care to bet on whether this Dean gets canned?

Corrigendum: An alert reader has pointed out that Clarke’s novel was not written in 1982 – indeed, the film came out in 1968, and the novel was published that same year, developed alongside the screenplay. In fact, the film was based largely on one of Clarke’s short stories from 1951, The Sentinel. Clarke did write a novel called 2010: Odyssey Two, in 1982, and a not-so-successful movie was based on that, in 1984.