The Responsibility of Deciding
A while ago I went to Chapters and bought a book titled 'These Strange New Minds' by one Christopher Summerfield, a professor of cognitive neuroscience at the University of Oxford and an AI researcher. The book's subtitle is 'How AI learned to talk and what it means'. As so often happens, I first read about the book in a review in a literary magazine.
I have by now had enough of people saying AI means the end of mankind or the birth of a new golden age, both of which are the sort of tripe uttered by people who don't know much but have some kind of vested interest for or against something new like AI.
I am utterly dubious about AI and LLMs and what they are doing to society, but then I am a luddite who is also sure that the internet has been a net force for bad in the world and that owning a smartphone is a terrible idea if you value your humanity.
I went to a seminar by a now-departed colleague of mine a couple of years ago in which he laid out the actual mathematical workings of LLMs like ChatGPT. I was quite surprised to see that the highest mathematics used in building these programs is the sort of linear algebra taught in a first-year university course. What makes LLMs special, if they are, is putting massive computing power to work on massive data sets to train them. Huh.
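To make that claim concrete, here is a minimal sketch in Python of the core computation inside a transformer layer, the scaled dot-product self-attention that models like ChatGPT are built around. The weights and sizes below are made up purely for illustration, but notice that every step is a matrix multiply or a softmax; nothing beyond that first-year course.

```python
# A minimal sketch of scaled dot-product self-attention, the core
# operation in a transformer layer. All numbers here are toy values
# for illustration; the point is that the math is just linear algebra.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (sequence_length, d_model) token embeddings.
    Wq, Wk, Wv: learned (d_model, d_k) weight matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # three matrix multiplies
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # another one, then a rescale
    return softmax(scores) @ V                 # weighted average of the values

# Toy example: a "sentence" of 4 tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (4, 8)
```

In a real model the weight matrices are learned rather than random, and layers like this are stacked hundreds of times over billions of parameters, which is where the massive computing power and data come in.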
But I wanted to understand as well as I could how these things work, as it's becoming clear that companies are going to start using them to do all kinds of things, and most certainly they will be what you must deal with when you want to interact with any LBO (large bureaucratic organization). It is nearly impossible to speak to a person at any insurance company, government office or other service-oriented org now. Soon it will be LLMs and nothing but LLMs, all across the corporate and government world.
It's actually quite a well-written book. I am about halfway through, and I now have a better understanding of how LLMs evolved over the last 20 years and a better handle on how they work. The author is pretty much a proponent of the wonderfulness of LLMs, albeit not to the utterly mindless extent of someone like Sam Altman. He has not yet convinced me in any way that the coming deluge of LLMs is going to be good for anybody but the people in the companies that run them, but I'm still reading.
Here's one reason why. The stats blog I follow is pretty big on this stuff, although most of the people who write on it do admit that AI and LLMs are currently being over-hyped by the Sam Altmans and Marc Andreessens of the world. [Here's a direct quote from Andreessen: "We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder."]
A recent post from a computer scientist named Jessica on the stats blog described some of what she learned and heard at a workshop discussing the use of AI in corporate strategizing. The post is titled 'How will/should LLMs change how organizations do strategy?', and you can read the whole post here yourself if you like. In her words, 'The workshop revolved around the question, How capable are LLMs of making hard strategic decisions?'
There is a regular commenter on Gelman's stats blog who goes by the nom de plume 'Anonymous'. He has written in the past (I am assuming it's a he) that he is a faculty member at a university and prefers to remain anonymous due to worries about the backlash he would face from colleagues for some of his comments.
Whether that is true or not, he is a gadfly on the blog: very contrary, taking views not held by others who post or comment on the site, and I do appreciate him for that. After Jessica's post, he posted the following comment on the topic of AI making corporate decisions:
I’m strongly reminded of a famous quote from an IBM presentation in 1979:
“A computer can never be held accountable, so it should never be allowed to make a management decision.”
This holds regardless of how “good” it is at making such decisions. What happens when, not if, it goes wrong?
A worthy point, in my view, but once again, I am a luddite. I simply do not understand why people use auto-complete in their emails and other writing. Even if the damn thing didn’t make tons of mistakes, have you no pride of authorship, no desire to write well yourself?
Similarly, why in the hell does anyone want to let a computer program make decisions for them?
Dale Lehman posted a reply to Anonymous’s comment above that included the following:
I’m on the same page here…..Everything seems to concern what LLMs “can” or “cannot” do. What about “should?” Why do we want LLMs to make strategic decisions?….. Why not have someone decide who I should marry? I don’t think it matters whether it is an LLM or a human deciding for me – I would say that is my decision to make, regardless of whether or not I can do it as well as others.
The answer for most LBOs to that ‘why?’ question would be the same: it’s cheaper than paying managers.
Maybe it is, at least in the short run. I fear we are going to find out soon, en masse. I predict an ongoing shit-show, but I am, of course, a luddite.
Postscript to my faithful few readers:
I have regretfully shut down the comments on my blog. Few of you ever commented, but I still did this reluctantly, driven only by the fact that I found myself deleting dozens of spam comments, no doubt generated by robots, every hour. What was an annoyance at first lately turned into a torrent. Seeing no way to stop the deluge, and getting angrier and more depressed by it each day, I had my IT people just shut down that capability. The web is where assholes go to prosper, sadly. Or, perhaps it is more accurate to say that the web allows the assholes of the world to multiply their assholeness by a huge factor.