
Following the Hot Hand of Science

Anyone vaguely familiar with basketball has heard of the ‘hot hand’ phenomenon. Someone on the team gets a hot shooting streak going, they can’t seem to miss, and their teammates start looking to get the hot-handed player the ball. I played backyard hoops a lot in my youth, and there were (very few) times when it happened to me; every shot I threw up seemed to go in – briefly.

Well, academics got wind of this long ago also, and decided to investigate whether there was anything to it. Yea, sure, players talk about experiencing it, or seeing it, but it could easily be just a matter of perception, something that would disappear into the ether once subjected to hard-nosed observation and statistical analysis.

The canonical paper to do this analysis was published in 1985 in Cognitive Psychology, authored by Gilovich, Vallone and Tversky. The last of this trio, Amos Tversky, was a sufficiently notable scholar that young economists like me were told to read some of his work back in the day. He died young, age 59, in 1996, six years before his frequent co-researcher, Daniel Kahneman, was awarded the Nobel Prize in Economics. The work the Nobel committee cited in awarding the prize to Kahneman was mostly done with Tversky, so there is little doubt Tversky would have shared the prize had he lived long enough, but Nobels are, by rule, not given to the dead.

Now, as a research question, looking for a basketball hot hand is in many ways ideal: the trio used data on shots made and missed by players in the NBA, which tracks such data very carefully, and beyond that, they did their own controlled experiment, putting the Cornell basketball teams to work taking shots, and recording the results. Good data is everything in social science, and the data doesn’t get much better than that. Well, bear with me here, this is most of the Abstract of that 1985 paper:

“Basketball players and fans alike tend to believe that a player’s chances of hitting a shot are greater following a hit than following a miss on the previous shot. However, detailed analyses of the shooting records of the Philadelphia 76ers provided no evidence for a positive correlation between the outcomes of successive shots. The same conclusions emerged from free-throw records of the Boston Celtics, and from a controlled shooting experiment with the men and women of Cornell’s varsity teams. The outcomes of previous shots influenced Cornell players’ predictions but not their performance. The belief in the hot hand and the “detection” of streaks in random sequences is attributed to a general misconception of chance according to which even short random sequences are thought to be highly representative of their generating process.”

That is, a player who hits a shot expects he is likely to hit the next one, too. When he does, he files this away as ‘having a hot hand’, but the frequency with which he actually hits the second shot is no higher than when he had missed his previous shot. Standard ‘cognitive bias’ causes the player – and fans – to see it that way, that’s all. They remember the second shot being made more than they remember it being missed.
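To make that concrete, the core comparison is between a player’s hit rate on shots taken right after a make and his hit rate on shots taken right after a miss. Here is a minimal sketch of that calculation in Python, with a made-up ten-shot record; the function name and the toy data are mine, not the paper’s:

```python
def hit_rate_after(shots, prev_made):
    """Hit rate on shots whose immediately preceding shot was a make (prev_made=1)
    or a miss (prev_made=0). Shots are recorded as 1 for a make, 0 for a miss."""
    followers = [shots[i] for i in range(1, len(shots)) if shots[i - 1] == prev_made]
    return sum(followers) / len(followers) if followers else None

# A made-up ten-shot record, purely for illustration
record = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print("hit rate after a make:", hit_rate_after(record, prev_made=1))   # 0.5
print("hit rate after a miss:", hit_rate_after(record, prev_made=0))   # about 0.67
```

If having a hot hand were real, you would expect the first number to be reliably higher than the second across players; the 1985 study (which also conditioned on longer streaks, not just the single previous shot) found no such gap.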

Damn scientists are always messing with our hopes and dreams, right? No Easter Bunny, no extra-terrestrials in Mississauga, and no hot hand. Is nothing sacred?  Other researchers went looking for evidence of a hot hand over the ensuing years, but it became known in academic circles as ‘the hot hand fallacy’, the general consensus being that it did not exist in the real world of basketball.

33 years later

But wait, it’s now 2018 and a paper by Miller and Sanjurjo appears in Econometrica, the premier journal for economic analysis involving probability and/or statistics. Its title is “Surprised by the Hot Hand Fallacy? A Truth in the Law of Small Numbers.”

Here’s some of what their Abstract says:

We prove that a subtle but substantial bias exists in a common measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data…. We observe that the canonical study [that is, Gilovich, Vallone and Tversky] in the influential hot hand fallacy literature, along with replications, are vulnerable to the bias. Upon correcting for the bias, we find that the longstanding conclusions of the canonical study are reversed.

It took over 30 years for two economists to figure out that ‘the canonical study’ of the hot hand did its ciphering wrong, and that once this is corrected, its findings are not just no longer true, they are reversed. The data collected in 1985 do provide evidence of the existence of a hot hand.
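To get a feel for the bias they found, here is a rough simulation, my own sketch rather than their code or their exact estimator. Generate many shot records from a 50% shooter who, by construction, has no hot hand; for each record, compute the proportion of makes on shots that immediately follow three straight makes; then average those proportions across records, which is roughly what the 1985-style analysis does:

```python
import random

def prop_after_streak(shots, k=3):
    """Proportion of makes on shots that immediately follow k consecutive makes."""
    hits = total = 0
    for i in range(k, len(shots)):
        if all(shots[i - k:i]):          # previous k shots were all makes
            total += 1
            hits += shots[i]
    return hits / total if total else None

random.seed(1)
n_records, n_shots = 20_000, 100
props = []
for _ in range(n_records):
    # a pure 50/50 shooter, so no hot hand exists in this data by construction
    shots = [1 if random.random() < 0.5 else 0 for _ in range(n_shots)]
    p = prop_after_streak(shots)
    if p is not None:
        props.append(p)

# The average of the per-record proportions comes out noticeably below 0.5,
# even though every shot is an independent 50/50 coin flip.
print(sum(props) / len(props))
```

The average lands noticeably below 50%, which means a real shooter who merely matches 50% after a streak is, once the bias is accounted for, showing evidence of a hot hand. The size of the gap depends on the record length and the streak length; the paper derives it exactly, but even this crude simulation makes the point that ‘no better than 50% after a streak’ is the wrong benchmark.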

Think about this. In 1985 some very clever academics showed there was no such thing as a hot hand in the real world of basketball, and the academics who peer-reviewed their work agreed with them. Thirty-plus years later, some other clever academics realized that the first set had gotten something wrong, and that fixing it reversed the previous findings – and the academics who peer-reviewed their work agreed with them.

Ain’t social science wonderful? A question for which there is excellent data, a situation rarer than hen’s teeth in social science, is investigated and a conclusive answer arrived at, and thirty years later that answer is shown to be not just wrong but backwards.

No one did anything shady here. There was no messing with data, the 2018 guys used the same data used in 1985. A mistake, a subtle but significant mistake, accounts for the turnaround, and it took 33 years to discover it. One can hardly blame the 1985 researchers for not seeing the mistake, given that no one else did for such a long time.

The Lesson?

So, in case my point is not yet obvious, science is not a set of settled facts. Those do exist – sort of – but anyone who understands the process of science even a little understands that settled facts are settled only until they are overturned. And if that is true for such a clean research question as an investigation of a basketball hot hand, think about a more typical social science question in which two things are almost always true. One, the data is not at all what the researchers need, so they make do with what they can actually gather. Two, the right way to analyze that data – among endless possibilities – is a matter of disagreement among respectable social scientists. Following that kinda science will make you dizzy, my friends.

A teaser: think about this social scientific question. It is arguably of more importance than basketball shooting.

Does the availability of bricks-and-mortar adult entertainment establishments have a positive, negative, or no effect on the commission of sex crimes in the surrounding neighborhood?

Whaddya think is the right answer?

For extra credit: what kind of data would a researcher need to gather to answer that question?

Now that’s real (i.e., messy) social science.

Stay tuned, because a couple of economists set out to investigate the question above, and I’ll have a go at what they did and their findings in a future post.

