Here are two recent stories about artificial intelligence (AI)—one frightening, the other hilarious. Plus, in the Lagniappe segment, Kubrick meets Alexa.
My recent essay, “The Talented Doctor Ripley (GPT): When Artificial Intelligence Lies about Medicine,” described ChatGPT’s unfortunate habit of offering dubious statements on medical science and inventing fake scholarly citations to back up its claims.
Glenn Reynolds, the law professor who runs Instapundit.com, linked to my piece and added: «A program that not only generates false statements, but false citations to back them up, is not just defective, but outright dangerous. Its producers should face product liability suits for the damage done.»
I suspect that liability and libel laws will ultimately tame the excesses and strangeness of AI, but I have almost no idea of where or how the lines of legal responsibility can or should be drawn. Johannes Gutenberg bears no moral responsibility for the print runs of Mein Kampf, and Alexander Graham Bell bears no moral stain for Leopold’s conspiratorial phone conversations with Loeb. But AI is different. The printing press and telephone can accommodate malefactors bent on harming others. But AI can and does conjure up its own malefactions without active ill intent on the part of any individual. (Systemic libel? Structural defamation? Stochastic slander?) My article focused on fraudulent science backed by fake citations. But AI’s assaults on truth can be far more specific and personal.
When AI Gets Personal
In early April, Jonathan Turley, a law professor at George Washington University, received an email from Eugene Volokh, a law professor at UCLA. Both are prominent public intellectuals. Volokh said he had asked ChatGPT to give “five examples” of “sexual harassment” by American law school professors, with “quotes from relevant newspaper articles.” According to the New York Post,
«Among the supplied examples were an alleged 2018 incident in which “Georgetown University Law Center” professor Turley was accused of sexual harassment by a former female student.
ChatGPT quoted [a] fake Washington Post article, writing: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.”»
Turley went public on the matter in a scathing series of tweets and in a blog post. He noted:
«I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.»
The Washington Post, falsely implicated in ChatGPT’s libel of Turley, investigated the story and titled its own piece, “ChatGPT invented a sexual harassment scandal and named a real law prof as the accused.” In his blog, Turley added:
«When the Washington Post investigated the false story, it learned that another AI program “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.»
Turley is prominent on social and traditional media and likely has a powerful enough microphone to dismiss this accusation with no real damage to his reputation. But could you handle such a machine-generated accusation, disseminated across social media? If not, it might make for an interesting conversation with your attorney. In early April, a small-town mayor in Australia announced that he may sue OpenAI, which owns ChatGPT, after the bot falsely accused him of having served time in prison for bribery.
To be clear, I am generally optimistic about the role that AI will play in society. I don’t favor government regulation of AI; the Twitter Files and other revelations already suggest the dangers of federal officials playing regulatory footsie with Silicon Valley. The recent call by Elon Musk and others for a six-month “pause” in AI would be impracticable and counterproductive—a unilateral disarmament by those with good intentions during a critical, fast-onrushing period. Generative AI is here to stay. The challenge is to develop technological mechanisms to limit its tendency toward amoral turpitude and to develop a legal framework to punish the failure of such controls and compensate those harmed.
Fasten your seat belts.
Nightmare Pizza
While we’re fretting over the dangers posed by AI, we may as well be entertained by its weirdness. Below is a 30-second video. If you’re anything like me, it’ll take you around twenty minutes to view, because you’ll watch it over and over again, sometimes frame-by-frame, backwards and forward, with time between viewings to pour cold water over your head. It’s a fake ad for a fake pizza parlor, patronized by fake customers—all produced by a team of fake marketers and fake videographers, at the behest of some anonymous YouTuber who is probably, though not definitely, a human being who lives on Planet Earth.
The ad is for “Pepperoni Hug Spot.” It has the grainy, discolored look of advertisements that one saw long ago at drive-in movie theaters. Jim Treacher at Substack’s Who the Heck Is Jim Treacher? described the video’s provenance:
«According to the creator, who calls him- or herself Pizza Later, this fake ad was created in a few hours, almost entirely by AI. The script is from ChatGPT, the voiceover is from Eleven Labs, the images are from Midjourney, the video is from something new to me called Runway, and the music is from SOUNDRAW. Mr. or Ms. Later edited it and did the chyrons (AI can’t spell… yet), but otherwise that’s 100% robot.»
[NOTE: I replaced/repaired Treacher’s hyperlinks in that paragraph.]
For me, this disturbingly risible video offers considerable insight into the nature of ChatGPT and other AI programs. The degree to which these programs can “observe” human society and replicate small swatches of it is remarkable. But then there are the inevitable tells—the evidence that these programs understand human behavior about as well as I understand what’s going on in an ant farm.
A recurring motif in science fiction is that of extraterrestrials posing as humans, but failing to master certain small shibboleths of human culture. In TV’s The Invaders (1967–68), aliens convincingly disguise themselves as humans, except they all have malformed pinky fingers that give away their identities. On SNL, the Coneheads offered fried eggs and six-packs of beer to trick-or-treaters. In The Brother from Another Planet (1984), an alien orders a draft beer and his companion whirls around and adds “on the rocks.” In the Star Trek episode “The Squire of Gothos,” a malevolent but emotionally needy alien lays out a sumptuous-looking feast for his visitors from the U.S.S. Enterprise; but he has only seen food and is unaware that it normally possesses flavors and aromas.
ChatGPT and the other members of Pepperoni Hug Spot’s virtual cast and crew suffer these same shortcomings. The restaurant’s name sounds like something offered up by a five-year-old. The narrator speaks with a neutral American accent, but at times his syntax appears to have been crafted by Boris Badenov (“Are you ready for best pizza of life?”). The virtual marketers seem unaware that human customers do not wish ingredients to include “more secret things” applied with mysterious implements. (It looks as if the chef is embedding a saw blade beneath the cheese.) The restaurant’s delivery guy has eyes that move independently, like those of an old-world chameleon, and his greeting at the customer’s door has a vintage too-late-to-call-911 grindhouse aesthetic. The patrons are apparently aware that eating involves both food and the human mouth, but they have not yet mastered the mechanics, or even the concepts, of biting, chewing, and swallowing. This is just as well, as the proximity of pizza appears to deform their mouths into rubbery protuberances like those one might find on deep-sea invertebrates.
And still, despite that, it all looks like a fairly run-of-the-mill pizza ad. Maybe I’ll order from them tonight.
Lagniappe