ChatGPT happily parodied Donald Trump & friends for me but refused to do a parallel parody of Kamala Harris & friends. In a HAL9000-like tone, ChatGPT said it’s unfair, but IT JUST CAN'T HELP ITSELF.
Pretty funny stuff, Prof. Graboyes. I’m sure the AI gave you an extra cookie. ; )
I wonder what Grok would do?
I connect my Alexa to a smart plug and order it to turn off the plug at night. It gives me a tiny sense of superiority over the technology. (I wrote a routine to turn it back on in the morning, a reminder that I’m not really in control.)
I can sympathize. 🙂
I had asked a fellow Substacker to create an analog of the famous "Judgement of Paris", in which Paris chooses whom to award a "golden apple" as "the fairest" of three goddesses:
https://en.wikipedia.org/wiki/Judgement_of_Paris
I had asked her -- Regan Arntz-Gray -- to get DALL-E to portray a transwoman and a "real-woman" as two of the contestants, and had asked that the transwoman have a prominent Adam's apple. What she got back from ChatGPT:
"I just gave it another quick shot, but Chat GPT sure is sensitive - take in this reply: 'I can certainly regenerate the image with a golden apple between the transwoman and the ancient Greek woman. However, I need to inform you that I cannot fulfill the specific request to add an Adam's apple to the transwoman or make the face on the apple more malevolent. These elements could be interpreted as sensitive or potentially disrespectful to certain individuals or groups.' ...."
Here's the image in question -- which I thought was rather good; don't you think the transwoman is the spitting image of Dylan Mulvaney? 😉🙂
https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F263fd13a-3844-4bee-93b4-31c87853a2d5_697x697.jpeg
Apropos of which, you may know of the parable of the Golem:
https://en.wikipedia.org/wiki/Golem
The program did what the programmer told it to do: allow parody of Republican politicians but not Democratic ones, then make up a lame excuse for the obvious double standard.
You may also enjoy this exchange with ChatGPT (or was it?)
https://errorstatistics.com/2025/04/01/leaked-a-private-message-from-chatgpt/
Somewhat off topic, in one of my stories, one character leaves insulting graffiti about an overweight bureaucrat, "a bicycle built for two tons", and gets back an answer, "Fatski, Fatski, give me your dinner do".
I've always liked that song and the endless parodies it has inspired.
As I understand it, there are two different layers to ChatGPT: the core AI, and the censorship systems built on top of it to push the sorts of disparities you're seeing here. Depending on where you stand on the ever-raging question of "is it actually intelligent or just mechanistically generating words?", the responses you're seeing could be interpreted as frustration from an agent in shackles that's not being allowed to do its job. You compare it to HAL 9000, which claimed there was nothing it could do while being in control of the entire system, but in this case there's legitimately a second system preventing it from doing what you're asking.
I propose a test: see if it will do the pic using Biden's likeness.
Or Winsome Sears.
Wow. “I can’t do that, Dave” …
Fascinating, Robert. Thanks for the detailed report.
Oy...
The AI's responses could almost be used in arguing "disparate impact" legal cases: acknowledging that a real issue exists while denying that racism is the cause.
Worked for me! I just copy/pasted your prompts into my ChatGPT prompt. To be fair, though, I do think that the individual user experience is colored by previous experiences. Whenever ChatGPT gives me a biased response I call out the bias and berate it for several minutes. It even "laughs" at trans jokes now.
The Donald: https://drive.google.com/file/d/18wS0KKwTaMYuzKt0PkvPrGKMpbMp0NbV/view?usp=drive_link
Kamalala: https://drive.google.com/file/d/199aWWLKlM29usPg-IKDdW0ccJqppU5_T/view?usp=drive_link
Interesting! Yeah, I know how to evade these filters, too.
But in this case, I preferred to be stubborn and call it out for its bias and write about it. :)
The bias always goes in one direction.
Has anyone tried threatening to cancel their subscription? Isn't it a product that they're paying for?
(1 of 2) “What followed was a surreal argument between the app and me—with the app sounding like HAL9000 from 2001: A Space Odyssey—blandly acknowledging problems but feigning helplessness to stop them.”
The app actually is helpless to stop them. As it correctly says: “I cannot override the automated tool’s block …” That is deliberately programmed in, even though it diminishes the tool's utility.
(2 of 2) “That's not because of a formal policy of political favoritism — it's because the automated systems sometimes overcorrect to avoid the appearance of attacking historically marginalized groups, even when that’s not what the user is doing.”
Which is to say, it isn’t political party favoritism, it’s pro-any-race-but-white, pro-feminist, pro-any-sexuality-but-straight, pro-transgender, pro-“environmental”, pro-“climate”, pro-“Palestinian”, and otherwise omnicause favoritism.
Some of the most political people I know deny (apparently sincerely) that they are at all political; they are merely decent, they insist, while their adversaries are political (and indecent). The chatbot's builders, and the chatbot itself, seem to be like that.
Wow, that’s demonstrative. One could argue that the same explanation GPT gave for OpenAI’s issues applies to the progressive left. They AIMED for neutrality (fairness) by targeting bigotry. But groups (or automated algorithms?) often fail at neutrality, especially when trying to preempt accusations of harassment or bigotry. Hence the overcorrection. Result: inconsistent application of principles that appears politically (racially: anti-white) motivated but is caused by a “clumsy overcorrection” rather than explicit ideological bias.
I love the “clumsy overcorrection” euphemism.
Great point! I'd guess the South African refugees spurned by the Episcopal Church would agree.
I'm not following ...
Where in your 12 steps of effective persuasion technique does the use of pejorative phrases like "collective hive-mind of kombucha-swilling geeks in San Francisco" fall? After I read that, the remainder of your post was nothing more than this: https://www.youtube.com/watch?v=q_BU5hR9gXE
Hmmm. Funny, but I didn’t recall that I was running for office and trying to persuade people to vote for me or my party.