AI Will Set You Free
[Note of clarification: In the font used by my website, AI looks like my first name. In this post it should be read as ae-eye. You know, the technology that will change the world for the better.]
Here’s the title from a recent article in The Free Press by one Molly Cantillon, dated Jan 22:
AI Took Control of My Life and I Love It
Ms Cantillon is an executive in a tech company, as she writes –
“For context, I’m the founder of NOX, an AI-assisted inbox organizing tool, and I have a very busy schedule.”
Very busy. She says she was making commitments she could not keep track of, had put her phone in a constant state of Do Not Disturb, and had no idea what she was subscribed to – and paying for.
She gathered up her various programs and fed them into Claude, the AI program, and had it cancel her subscriptions (after asking her whether she needed each one) and find and pay all her accumulated parking tickets; she also had Claude write an app to ‘optimize her sleep’.
There’s more:
I hit inbox zero for the first time ever, thanks to Claude filtering and deleting spam while auto-drafting replies for all relevant inbound emails. It analyzed my bottomless pile of Apple Notes and meeting transcripts, found the things I’d procrastinated on and took care of them instantaneously—from confirming travel details to following up on action items from meetings.
Claude Code has taken over her personal finances:
Every morning, it scans the news and writes me a personalized brief on my portfolio and opportunities to buy or sell. Last month it flagged Democratic congresswoman Cleo Fields buying at least $500,000 in Netflix stock.
Whoa, Claude does insider trading on insider traders. [Aside: Ain’t democracy great, tho?]
She is, in a word, ecstatic about all this.
It’s hard to overstate the value of an impartial observer that reads all the data that makes up your life, catches what you’ve unconsciously dropped, notices patterns across domains you’d kept stubbornly separate, and—crucially—tells you what to do about it.
Two parts of that last quote struck me.
One, ‘impartial observer’.
Two, ‘tells you what to do about it.’
She says ‘Living with Claude feels like having a tireless chief of staff.’
Regarding One, if one did have a chief of staff, albeit a human, and thus not-tireless, one, it is fair to say that one could not be entirely sure that he was ‘impartial’. He might have an agenda. He might, for example, steer one away from any decisions, or even information, that he felt jeopardized his position or in any way made him look bad.
I am left to ask: from whence comes Ms Cantillon’s belief that her AI does not also have an agenda? It is generally true that the AI chatbots that people use for various inquiries work hard to ingratiate themselves with their users. They are apparently ‘trained’ to behave that way. How is her Claude chief of staff trained? Does she know? Does she care?
As to Two, a human chief of staff would also tell you (or at least ‘recommend’) what you should do. And, according to her, the default setting on her AICoS is that it asks her before it does anything in her name.
Ok, that seems wise, if it works. I guess you would not know whether Claude had gone off the reservation on its own until shit started to happen. But I suppose that would also be true of a human CoS who went renegade on you.
Then she writes this.
Naturally, the watchtower has a landlord. Anthropic, Claude’s creator, sees every query you make. The value exchange is explicit: their visibility into your thinking for access to a small army of taskrabbits.
Indeed. And there it is. It is Anthropic, the corporation, in which one must have some level of trust if one is to do this. If AICoS has an agenda, Anthropic might just be from whence it comes.
She writes further:
“I know I am dependent. If Claude disappeared tomorrow, my day-to-day routine would fall apart.”
Ah.
She also writes this at the end –
“To paraphrase Charles Goodhart, the famed British economist: Meticulous optimization yields hollow victory.”
I have never heard of this Goodhart economist guy, but I am very sure I do not want to meticulously optimize my life, with or without an AICoS. I suspect, however, that such a move might have been more appealing to me when I was younger, more ambitious, climbing the academic career ladder.
Today, at this stage in my life, I find all this appalling. On reflection, that feeling has two levels to it.
One is, I think, highly personal. I could never trust Anthropic, or any other organization, to not take advantage of that ‘value exchange’ Cantillon mentions in the quote above. I have no doubt they would use all the info they gathered on me to their own advantage, and I do not see that serving my interests in the long run.
My second concern is more… social. I see looming a world in which all or most of us get the same info, make the same decisions, have the same ideas, because all of it is being run through a small set of incomparably complex programs that no one really understands. It is a future of the ‘hive mind’, a term Cantillon herself uses. We are all to become ‘dependent’ on this software, for everything. If it fails, or its ‘landlords’ just decide to fuck with us, things will indeed fall apart. Perhaps in a way that we will be unable to detect, or to do anything about if we do notice.
No, thanks.
Epilogue
After I had finished a third draft of the above post (I do write and re-write), an article appeared in The Free Press by Tyler Cowen, economist, titled ‘Can AI help us find God?’
Cowen is an AI enthusiast who has written a number of breathless articles in TFP about how great it is going to be for humanity. Might be a glitch here and there, sure, but in general – transformative. Awesome. He does go on.
I add in one quote from this latest Cowen article:
It’s common now to ask AI about the meaning of life, or about the ultimate nature of reality. From there, it’s only a small step to having AI be your major conversational partner about God, and then, for some people, a god or spirit as well.
Cowen presents no evidence for this, so I wonder: is it really common for people to ask AI those questions? God help us, if so. It’s a bloody computer program; it knows nothing that was not put into it. Actually, it ‘knows’ nothing, period.