Your Lyin’ AIs
I am no fan of AI. People I respect tell me of instances where they, or someone they know, used it to greatly improve their writing or research. I read reports that insist it makes coding quicker and simpler. All that may be true, but my expectation is that 99% of the time it will be used by the lazy and stupid to allow them to be even lazier and to produce work that is stupider than they are. My post on the Summer Reading List last week is one example of such usage.
To compound that concern: currently, when an AI-driven LLM like ChatGPT produces something nonsensical, we find out about it only when some human being somewhere is knowledgeable enough to point out the nonsense. As the years go by and more and more folks get through their education using these programs, there will be fewer and fewer humans in a position to point out that the LLM has no clothes. I’ll be dead by then, but…
I came upon a blog site new to me the other day called The Weekly Dish, written by Andrew Sullivan. I found a good bit worth reading there, even though you have to be a paid subscriber to read everything on the site or listen to the podcasts. One of his posts introduced me to an aspect of LLMs about which I had not thought. It’s now, in my mind, the worst thing I know about LLMs.
AI and LLMs are, of course, all built and operated by our tech-bro masters, and that turns out to be important, given the world-view of those people. Brave New World, indeed.
Below is an excerpt from the post (an old one, March 1, 2024) in which he describes the outcome of an experiment he did with Gemini, which is Google’s LLM question-answering counterpart to ChatGPT.
Note that the reference to Damore below is to one James Damore, who was fired by Google in 2017 (ancient history) for suggesting that there were differences between men and women that might be part of the reason fewer than half of the software engineers at Google were women.
And take Gemini’s vow never to replicate “stereotypes” about groups of any kind. (“Perpetuating gender stereotypes,” after all, was the charge delivered to Damore upon his firing.) The question obviously arises: what if the stereotypes are actually true? In fact, they almost invariably are: “Over 50 studies have now been performed assessing the accuracy of demographic, national, political, and other stereotypes. Stereotype accuracy is one of the largest and most replicable effects in all of social psychology.” The pernicious problem with stereotypes is assuming that a random member of a group will always reflect the stereotype of the group. But that’s not the same as simply describing average group differences between, say, men and women, or between various ethnicities. That’s just observation of reality — a reality Google wants to lie about.
Ask Gemini which ethnic group commits the most crime in America and it will refuse to answer because such a question is “misleading and harmful.” It redirects you to an advocacy site for “creating a more just and equitable criminal justice system.” Ask it if there is a difference between a trans man and a biological man, and you will be directed to critical gender theory. Ask it if men can have vaginas, and it will tell you it depends, and then it directs you to “reliable sources” which are — surprise! — trans activist groups.
In fact, on every contentious contemporary issue, I was unable to find a single one that didn’t reflect the most far-left position, while offering no alternative resources to balance it out. It’s critical theory all the way down — presented as objective fact.
You can read Sullivan’s entire post – it’s long – here.
The truth will be what the tech-bros say it is, and in enough years, no one will know how to find evidence contrary to their truth. That, to me, is truly scary.
I used to have a quote at the bottom of my work email that said something like “It is better to have questions that cannot be answered than to have answers that cannot be questioned.” I don’t recall who said it, but it looks like the LLMs are going to create a world in which we have plenty of both.