Banzai, Miss American Pie — Did American Ideals Veto Kamikaze Tactics?
AI helps me navigate a long-ago classroom anecdote about WWII history, economic theory, cognitive bias, and ethical norms. A strange thought experiment, plus hints on how AI changes research.
Did America consider launching kamikaze-style suicide bombers during World War II? In the early 1980s, one of my economics professors told an anecdote suggesting so and used the story to illustrate concepts related to decision-making under risk and uncertainty. The discussion in class (or maybe it was a seminar) delved into the interaction of economic behavior with psychology (e.g., cognitive bias) and national ethics (e.g., Japan’s Bushido Code versus America’s Mom-‘n’-Apple-Pie optimism). For decades, I’ve recounted the discussion while never being certain of its veracity. I occasionally Googled the topic but always came up blank. Recently, I wondered whether artificial intelligence (AI) platforms might shed light where search engines had failed, and the answer was a resounding *yes*.
Grok and Gemini said no, the U.S. never considered suicide bombing; ChatGPT said yes, the topic was broached and rejected. It offered a lengthy discussion of how America employed near-suicidal tactics but ultimately rejected certain-death suicide missions. The text explored cultural differences between mid-century America and Japan and, after a follow-up prompt, psychological aspects of the story. It found no evidence of the professor’s story but suggested that it might have been a teaching parable in decision theory or military ethics—a sort of airborne trolley problem.
ChatGPT did describe near-suicidal endeavors by U.S. personnel in WWII. Among those tales was one mission that killed a serviceman whom some believed (with good reason) would one day become president of the United States.
As much as anything, this post explores the role of AI in research and teaching—not as a source of instant copy-and-paste journal articles, but rather as a remarkably efficient tool for beginning a research paper or student essay. Before AI, all this information was available, but gathering it would have required laborious journeys through physical libraries and/or hit-or-miss web searches. Now, it took only seconds to churn up the raw materials. With that in mind, treat all the assertions below as tentative. AI platforms often err, and my own fact-checking here was quick and incomplete.
HERE’S THE ANECDOTE
Somewhere around 1980, in a discussion of rationality under risk and uncertainty, one of my economics professors told a strange and thought-provoking anecdote. I remember it being presented as fact, but perhaps he only told it as a hypothetical.
A group of American fighter pilots in WWII, he said, were offered the chance to improve their odds of survival by switching from standard bombing runs to kamikaze-style suicide missions. The U.S. needed to eliminate some Japanese ships, and military actuaries considered two scenarios: (1) Some high percentage of U.S. fliers (say, 2/3) would perish if they flew standard bombing missions; or (2) Some low percentage (say, 1/3) would perish if those 1/3, randomly chosen by lottery, agreed to fly suicide missions. Offered this choice, so went the anecdote, the pilots decided to stick with traditional tactics, and 2/3 died in sinking the Japanese ships.
This decision by pilots would contradict a simplistic interpretation of economic theory (specifically, expected utility analysis). The pilots chose the tactic yielding a randomly distributed 2/3 mortality rate over an equally valuable tactic yielding a randomly distributed 1/3 mortality rate. Why, his question went, would they so choose? One admirable purpose of the exercise was to dissuade us from reflexively assuming that human beings follow behaviors mechanically predicted by textbook models.
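The arithmetic behind the puzzle can be made explicit with a minimal sketch. The 2/3 and 1/3 figures are the anecdote’s hypotheticals, not historical data:

```python
# Illustrative sketch of the professor's scenario. The 2/3 and 1/3
# mortality figures are the anecdote's hypotheticals, not real data.

def survival_probability(mortality_rate: float) -> float:
    """Probability that a randomly chosen flier survives."""
    return 1.0 - mortality_rate

# Option 1: standard bombing runs -- 2/3 of fliers perish.
standard = survival_probability(2 / 3)

# Option 2: lottery -- 1/3 of fliers are chosen for suicide
# missions and perish with certainty; the other 2/3 survive.
lottery = survival_probability(1 / 3)

print(f"Standard missions: P(survive) = {standard:.3f}")
print(f"Suicide lottery:   P(survive) = {lottery:.3f}")

# A textbook expected-utility maximizer who cares only about survival
# would pick the lottery; the pilots in the anecdote did the opposite.
assert lottery > standard
```

Viewed purely as a survival gamble, the lottery dominates, doubling each flier’s odds of coming home. That is exactly why the pilots’ reported choice makes the anecdote interesting.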
THE ANTIDOTE IN THE ANECDOTE
A mainstay of economic theory is the rational actor—the assumption that individuals have preferences, are fully aware of available options, and choose options that maximize their utility/happiness. I’ve noted in the past that economists don’t believe that people (or dogs) are literally rational, but rather that the rationality assumption imposes a pragmatic discipline on economic research, making it difficult for economists to simply conjure up ad hoc theories to accord with their pre-existing wishes and biases. Economists may deviate from the rationality assumption, but they must do so transparently and coherently. Psychologist Daniel Kahneman received the Nobel Prize in Economics for systematizing ways in which people engage in irrational behavior. (I highly recommend his book, *Thinking, Fast and Slow*.) Psychology offers a vast array of cognitive biases that can lead people away from the strict rational actor model. Those biases are arrayed in the following diagram:
[Diagram: The Cognitive Bias Codex]
The southwest zone of the above codex features cognitive biases in situations where individuals “need to act fast.” More specifically, around seven o’clock on the diagram are biases that occur in situations where “we must be confident we can make an impact and feel what we do is important.” In the middle of that stretch lies Optimism Bias, which was a topic in our classroom discussion. (For those interested in the world of cognitive biases, click on the link in the caption. Every bias in this rather incredible diagram is a hyperlink.)
ChatGPT described Optimism Bias as follows:
“a well-documented cognitive phenomenon where individuals believe they are less likely than others to experience negative events. In the military, this manifests as: ‘Most of us might not make it—but I probably will.’ This bias preserves morale and operational functioning. If everyone fully accepted the statistical risk personally, cohesion and will to fight might collapse.”
Thus, a flier facing the choice between 1/3 survival odds (standard missions) and 2/3 survival odds (the suicide lottery) might choose the former because he believes the odds don’t apply to him. ChatGPT mentioned several relevant stories:
“Eighth Air Force bomber missions over Germany in 1943 had staggering loss rates—sometimes 12–20% per mission. Crews were expected to complete 25 missions, but actuarially, few could survive that many. Still, most soldiers believed they personally would beat the odds.” …
“In a few rare cases, ‘volunteer’ tactics bordering on suicide were considered (e.g., night raids on Japanese islands, near-impossible rescue missions).” …
“Crews occasionally voted or were polled on strategy preferences—though not in the stark kamikaze-vs-non-kamikaze framing described in your story.” …
“As fighter ace Robin Olds put it: ‘No fighter pilot ever thought he was going to be the one to die today.’”
One might say that optimism bias is a defining characteristic of the American character and that it served as an antidote to any establishment of American kamikazes.
DIVINE WIND VERSUS DIVINE GRACE
Optimism bias is an attractive explanation for the behavior described in the professor’s anecdote, but it is not the only explanation. Other explanations may lie in the prevailing ethical standards of the nations involved. ChatGPT offered the following comparison between the ethical norms of the United States and Japan—suggesting why America rejected and Japan accepted suicide as a military tactic. (Note: “Aphrodite” is discussed farther down.)
ChatGPT’s narrative added that:
“The United States never formally adopted kamikaze-style tactics, but the concept of suicide or near-suicidal missions was indeed discussed and even experimented with, particularly in desperate or highly experimental contexts. However, such ideas were quickly marginalized or reframed to align with American military ethics and strategic values. …”
“American military leadership viewed kamikaze attacks—a tactic increasingly used by Japan in the Pacific theater—as the product of a desperate and fanatical ideology. The idea of deliberately sacrificing pilots was antithetical to U.S. doctrine, which emphasized preservation of life and the return of the soldier wherever possible. Suicide was associated with dishonor in Western culture, unlike the Bushido-influenced worldview of the Japanese military elite.” …
“There is evidence that true suicide attacks were occasionally proposed in extreme theoretical contexts—e.g., the use of manned gliders or boats packed with explosives—but they were consistently rejected. Proposals of this nature were usually fringe ideas, quickly dismissed by senior command as tactically unnecessary and morally repugnant.”
We can summarize these two philosophies as “Divine Wind versus Divine Grace.” The term “kamikaze” translates as “Divine Wind” and refers to the divine protection afforded those sacrificing their lives out of duty to emperor and homeland. In contrast, the controlling ethic for the American military was more that of “Divine Grace”—the primacy of life and its preservation. Americans were willing to engage in near-suicidal tactics, but not, as a rule, in outright suicide, no matter how the probabilities played out. ChatGPT’s take was that this stance allowed American servicemen to maintain “agency, dignity, hope, and the illusion of control.”
APHRODITE AND THE APPROACH TO SUICIDE

America rejected outright suicide missions as a tactic of war, but came closest with a program mentioned by all the AI platforms—Operation Aphrodite. The operation is perhaps best remembered for the death of one of its fliers, Joseph P. Kennedy, Jr., seen by his father and others as a potential postwar U.S. president. Of course, his brother John Kennedy won the office, and both remaining brothers sought it.
The U.S. Army’s Operation Aphrodite (and the U.S. Navy’s Operation Anvil) involved quasi-suicidal missions that preserved for their fliers a sliver of hope for survival—that essential quantum of optimism bias and divine grace that made them acceptable to American military authorities. Obsolete or worn-out bombers were hollowed out and packed to the gills with explosives. Two-man crews would fly the planes toward their targets and then bail out, with radio control from an accompanying aircraft guiding the final descent into the target—a sort of proto-cruise missile. The crews’ likelihood of survival was low, but not zero, and the missions proved terribly unreliable and perilous. In Kennedy’s case, the explosives detonated prematurely, killing him and his crewmate in midair. In hindsight, Aphrodite and Anvil were viewed with considerable discomfort.
TO CLASSROOMS PAST AND FUTURE
Looking back to my Columbia classroom, circa 1980, the AI platforms likely resolved my decades-old question as to the veracity of the professor’s tale, and the experience suggests how future professors ought to encourage students to use AI in their research.
Looking back, I suspect ChatGPT offered the correct take on my professor’s anecdote:
“No direct historical record confirms this exact scenario—a democratic vote between suicide tactics and traditional tactics with equal expected fatality. However, it may be a composite tale, pulling together: (1) Real fatality statistics (e.g., Black Thursday, Schweinfurt raids); (2) Real psychological research on pilot perceptions and biases; (3) Real moral debates within units (e.g., whether to volunteer for a mission).”
As a soldier in World War II, my father taught military law at Camp Lee, Virginia. Denied a college education by poverty, he was nevertheless a profound thinker and gifted educator. My professor’s anecdote is exactly the sort of question I could imagine my father posing to his students who, as he often noted, included generals and one future Secretary of the Army.
Looking forward, my experience with this decades-old question illustrates precisely why AI will be both useful and dangerous in university classrooms. Useful in that AI can instantaneously provide the raw materials for research that once would have taken countless hours to accumulate. Dangerous in that students will no doubt be inclined to copy-and-paste results without further inquiry. In this case, a student using Grok or Gemini and a student using ChatGPT would have arrived at radically different conclusions. And, to reiterate, I’m not guaranteeing the veracity of the output I’ve shared—only its status as the precursor chemicals of meaningful research. Were I still teaching, I would encourage students to use AI but insist that they document their prompts and, perhaps, provide links to each increment of output. (All the major AI platforms allow you to share links to specific queries and their responses.)
Really interesting essay - thank you!
> The pilots chose the tactic yielding a randomly distributed 2/3 mortality rate over an equally valuable tactic yielding a randomly distributed 1/3 mortality rate. Why, his question went, would they so choose? One admirable purpose of the exercise was to dissuade us from reflexively assuming that human beings follow behaviors mechanically predicted by textbook models.
Thinking about it, this can be rational in the sense that in the first case the 1/3 survivors become instant war heroes, whereas surviving a lottery isn't nearly as impressive.
Also, from a strategic point of view, Japan's kamikaze strategy was not that effective, since there were no survivors of kamikaze missions to give after-action feedback.