
Noah Meets Nash


My dumbphone service provider is Telus, a big company. They send me an email with my monthly bill in it, but they send me plenty of other emails, too. The subject line of this one caught my attention.

Introducing a convenient new way to connect with a human expert about your services

Here is the step-by-step way to do this, according to the email:

How it works:
On the Web
1. Log in to My TELUS online
2. Select the Virtual Assistant icon in the bottom right corner of the screen to start a chat
3. The Virtual Assistant will quickly transfer you to the TELUS Expert Messaging platform
Reading the rest of the email made it pretty clear that you should not expect a reply from the ‘expert’ immediately. You should just go about your day, and you will get a reply eventually. My real question is: why would I believe that I will get a response from a ‘human expert’? I have to send my request to the Virtual Assistant first – meaning a chatbot. There is a missing bit of information here: what does one type to the Virtual Assistant to signal that you want your query answered by a human? And, whatever that requires, there is no way to know whether the response you eventually get back really comes from a human.

Thankfully, I almost never need to contact Telus – that is another advantage of having a dumbphone. However, this ‘convenient new way’ looks to me like another step in the online enshittification of everything. Companies everywhere are quickly moving to LLM chatbots for all customer enquiries, guaranteeing a miserable time for us all.

However, the world remains teeming with AI enthusiasts. One of these is Noah Smith, an economist (yes, one of those) whose blog Noahpinion I generally recommend. He recently posted an article titled:

Superintelligence is already here, today

It’s going to revolutionize science. It also might take control of this planet.

You can kind of guess what he has to say just from that, and if you want to read the whole piece, it is freely available here.

What I want to write about here is something that his article illustrates really, really well: the tendency of people who are enthusiastic about something to abandon all their critical thinking skills so as to stay enthusiastic.

Here is a quote from Smith’s article titled as above:

This doesn’t mean that AI can now do everything a human being can do. Its intelligence is “jagged” — there are still some things humans are better at. But this is also true of human beings’ advantages over animals. Did you know that chimps are better than humans at game theory and have better working memory?

Now, notice that he is not saying here that animals are better at physical things than humans. Yes, cheetahs can run at speeds no human can match, and bald eagles’ distance vision puts ours to shame. No, Noah wants us to believe that animals – chimps, specifically – are better at some mental tasks than we are. One of these, as quoted above, is game theory. If you click on the link there, you will be taken to an article in something called Science, which indeed has the headline ‘Chimps best humans at game theory’.

Here’s a quote from that article:

The chimpanzees trumped the humans. They learned the game faster than their human counterparts and performed in line with the Nash equilibrium—hitting the theoretical benchmark.

Here’s the deal – ‘trumped the humans’ is bullshit.

Some background. Nash equilibrium is named for John Nash, the brilliant but troubled academic who was portrayed by Russell Crowe in the movie A Beautiful Mind, based on a book of the same name.

His concept is indeed the central piece used by economists and others to try to understand human behaviour in strategic situations. What is a strategic situation? Any situation with more than one participant in which the payoff to any participant depends on what they do as well as on what the other participants do. This includes all the games you can think of – poker, Monopoly, bridge – hence the name ‘game theory’. However, it also applies to important economic situations, such as two gasoline stations at the same intersection trying to decide what prices to set. Economists got very, very interested in game theory right when I joined the profession, because strategic situations as I just defined them include almost everything that happens in the economy.

John Nash’s equilibrium gave us the first idea of how to predict the choices that would actually be made by those two gasoline station owners. Nash won a much-deserved Economics Nobel for this idea.

However, it is highly problematic. In plenty of games there is no Nash equilibrium in pure strategies, and in others there are many, many of them. For a concept that is supposed to predict human behaviour, those are serious deficiencies. So is the fact that if you watch humans in actual strategic situations, they often don’t behave as Nash’s concept predicts they would.
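Both deficiencies show up already in tiny two-player games. The sketch below (all payoff numbers are invented purely for illustration) scans every pure-strategy profile of a 2x2 game and keeps those where each player’s choice is a best response to the other’s: matching pennies has no pure-strategy equilibrium at all, while a simple coordination game has two.

```python
# payoff[a][b] = (row player's payoff, column player's payoff).
# All numbers here are made up for illustration.

def pure_nash_equilibria(payoff):
    """All pure-strategy profiles where each player's choice is a
    best response to the other player's choice."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    eqs = []
    for a in range(n_rows):
        for b in range(n_cols):
            row_best = max(payoff[r][b][0] for r in range(n_rows))
            col_best = max(payoff[a][c][1] for c in range(n_cols))
            if payoff[a][b][0] == row_best and payoff[a][b][1] == col_best:
                eqs.append((a, b))
    return eqs

# Matching pennies: pure opposition, so no pure-strategy equilibrium.
matching_pennies = [
    [(1, -1), (-1, 1)],
    [(-1, 1), (1, -1)],
]

# A coordination game: two pure-strategy equilibria.
coordination = [
    [(2, 2), (0, 0)],
    [(0, 0), (1, 1)],
]

print(pure_nash_equilibria(matching_pennies))  # []
print(pure_nash_equilibria(coordination))      # [(0, 0), (1, 1)]
```

(Nash’s own theorem guarantees an equilibrium in mixed strategies for any finite game, which is why the ‘no equilibrium’ complaint only bites for pure strategies; the multiplicity problem, as the coordination game shows, bites either way.)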

But none of that is the real problem with the Science article, or with Smith’s jumping on it to explain to us non-believers that animals are sometimes smarter than us.

What is very well known to all economists is that the Nash equilibrium choices in a strategic situation are almost always inefficient. That is, there are other choices the people in the situation could make that would be better for all of them than the ones Nash’s theory predicts. Indeed, in my two-gas-station example, if the two owners got together and agreed on higher-than-Nash prices for their gasoline, they would both make more profit. (This is illegal, by the way, but it happens frequently.)
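To make that inefficiency concrete, here is a minimal sketch of the two-gas-station story as a 2x2 game. The profit numbers are invented for illustration only: each station picks a low or a high price, undercutting is always tempting, and the unique Nash equilibrium leaves both owners worse off than mutual high pricing.

```python
# Hypothetical pricing game between two gas stations at one intersection.
# Profit numbers are invented for illustration only.
# Strategies: 0 = low price, 1 = high price.
# payoff[a][b] = (station A's profit, station B's profit).
payoff = [
    [(3, 3), (6, 1)],   # A low:  vs B low, vs B high
    [(1, 6), (5, 5)],   # A high: vs B low, vs B high
]

def best_responses(player, other_choice):
    """Choices maximizing this player's profit, holding the other fixed."""
    if player == 0:   # station A picks the first index
        profits = [payoff[a][other_choice][0] for a in (0, 1)]
    else:             # station B picks the second index
        profits = [payoff[other_choice][b][1] for b in (0, 1)]
    top = max(profits)
    return [i for i, p in enumerate(profits) if p == top]

# A profile is a Nash equilibrium when each choice is a best response
# to the other's.
equilibria = [(a, b) for a in (0, 1) for b in (0, 1)
              if a in best_responses(0, b) and b in best_responses(1, a)]

print(equilibria)        # [(0, 0)]: both price low
print(payoff[0][0])      # (3, 3) at the equilibrium...
print(payoff[1][1])      # (5, 5) if both had priced high instead
```

With these numbers, pricing low is each owner’s best response no matter what the other does (6 beats 5, and 3 beats 1), so (low, low) is the only equilibrium – even though (high, high) pays both of them more. That is the collusion temptation described above.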

So, what the Science headline should say is ‘Nash’s equilibrium concept does a better job of predicting chimp behaviour than it does human behaviour in some situations’. The experiment described, with chimps and humans, tells us something about the usefulness of Nash’s equilibrium concept. It tells us nothing about the relative mental ability of humans and chimps, as there is nothing ‘better’ about those Nash equilibrium choices.

And Noah Smith, who is supposedly an economist, should know that, too. But that would not help him make his point about AI – and that is what he desperately wanted to do, the truth be damned. Later in his post he writes:

But if what we’re talking about is domination of the planet’s resources, and control of the destiny of life on Earth, we don’t actually need AI to be better at every cognitive task. Humans conquered the planet from animals despite having worse short-term memories than chimps and being worse at differentiating sounds than rabbits.

So now he’s back to the ‘chimps have better short term memory’ thing.

Smith mentioned that in the original quote above, too. Well, I looked into that claim as well, and it is in fact based on yet another game theory experiment in which chimps and humans played. Here’s a quote about it:

In Camerer’s experiment, it turned out that chimps played a near-ideal game, as their choices leaned closer to game theory equilibrium.

Again, not ‘ideal’ in any normative sense, just closer to the theory. I doubt ol’ John Nash would have appreciated learning that he had developed a theory of chimp behaviour.

He’s a believer, is Noah. I do not worry about AI taking over the world or destroying mankind. Get real. I do worry about what humans – like the people who run Telus – are going to do to life for most of us, with their chatbots and general attitude toward customer service. However, they were enshittifying our lives before AI came along. AI may well make it worse, and the true believers and cheerleaders like Mr. Smith will help that along, I expect.