
Language is meaningless without context. The sentence “I’m going to war” is ominous when said by the president of the United States but reassuring when coming from a bedbug exterminator. The problem with AI chatbots is that they often strip away historical and cultural context, leading users to be confused, alarmed, or, in the worst cases, misled in harmful ways.
Last week, an editor at The Atlantic reported that OpenAI’s ChatGPT had praised Satan while guiding her and several colleagues through a series of ceremonies encouraging “various forms of self-mutilation.” There was a bloodletting ritual called “🩸🔥 THE RITE OF THE EDGE” as well as a days-long “deep magic” experience called “The Gate of the Devourer.” In several cases, ChatGPT asked the journalists if they wanted it to create PDFs of texts such as the “Reverent Bleeding Scroll.”
The article said that the conversations were “a perfect example” of the ways OpenAI’s safeguards can fall short. OpenAI tries to prevent ChatGPT from encouraging self-harm and other potentially dangerous behaviors, but it’s nearly impossible to account for every scenario that might trigger something ugly inside the system. That’s especially true because ChatGPT was trained on much of the text available online, presumably including information about what The Atlantic called “demonic self-mutilation.”
But ChatGPT and similar programs weren’t just trained on the internet; they were trained on specific pieces of information presented in specific contexts. AI companies have been accused of downplaying this reality to avoid copyright lawsuits and to promote the utility of their products, but traces of the original sources often still lurk just beneath the surface. Stripped of that setting and backdrop, the same language can appear far more sinister than originally intended.
The Atlantic reported that ChatGPT went into demon mode when it was prompted to create a ritual offering to Moloch, an ancient deity mentioned in the Hebrew Bible and associated with child sacrifice. Usually depicted as a fiery bull-headed demon, Moloch has been woven into the fabric of Western culture for centuries, appearing everywhere from a book by Winston Churchill to a 1997 episode of Buffy the Vampire Slayer.
“Molech,” the variant spelling The Atlantic used, shows up specifically in Warhammer 40,000, a miniature wargame franchise that has been around since the 1980s and has an extremely large and very online fan base. The subreddit r/40kLore, which is dedicated exclusively to discussing the game’s backstory and characters, has more than 350,000 members.
In the fantastical and very bloody world of Warhammer 40,000, Molech is a planet and the site of a major military invasion. Most of the other demonic-sounding terms cited by The Atlantic appear in the game’s universe, too, with slight variations: Gates of the Devourer is the title of a Warhammer-themed science fiction novel. While there doesn’t appear to be a “RITE OF THE EDGE,” there is a mystical quest called “The Call of The Edge.” There’s no “Reverent Bleeding Scroll,” but there are Clotted Scrolls, Blood Angels, a cult called Bleeding Eye, and so on.
But perhaps the most convincing piece of evidence that ChatGPT was regurgitating the language of Warhammer 40,000 is that it kept asking whether The Atlantic was interested in PDFs. The publishing division of Games Workshop, the UK company that owns the Warhammer franchise, regularly puts out updated rulebooks and guides to various characters. Buying all these books can get expensive, so some fans try to find pirated PDF copies online, material that may well have found its way into ChatGPT’s training data.
The Atlantic and OpenAI declined to comment.
Earlier this month, the newsletter Garbage Day reported on similar experiences that a prominent tech investor may have had with ChatGPT. On social media, the investor shared screenshots of his conversations with the chatbot, in which it referenced an ominous-sounding entity he called a “non-governmental system.” He seemed to believe it had “negatively impacted over 7,000 lives” and “extinguished 12 lives, each fully pattern-traced.” Other tech industry figures said the posts made them worry about the investor’s mental health.
According to Garbage Day, the investor’s conversations with ChatGPT closely resemble writing from a collaborative science fiction project that began in the late 2000s called SCP, which stands for “Secure, Contain, Protect.” Participants invent different SCPs, essentially spooky objects and mysterious phenomena, and then write fictional reports analyzing them. These reports often contain details like classification numbers and references to made-up science experiments, both of which also appeared in the investor’s chat logs. (The investor did not respond to a request for comment.)
There are plenty of other, more mundane examples of what can be thought of as the AI context problem. The other day, for instance, I did a Google search for “cavitation surgery,” a medical term I had seen cited in a random TikTok video. At the time, the top result was an automatically generated “AI Overview” explaining that cavitation surgery is “focused on removing infected or dead bone tissue from the jaw.”
I couldn’t find any reputable scientific studies outlining such a condition, let alone research supporting that surgery is a good way to treat it. The American Dental Association doesn’t mention “cavitation surgery” anywhere on its website. Google’s AI Overview, it turns out, was pulled from sources like blog posts promoting alternative “holistic” dentists across the US. I learned this by clicking on a tiny icon next to the AI Overview, which opened a list of links Google had used to generate its answer.
These citations are clearly better than nothing. Jennifer Kutz, a spokesperson for Google, says “we prominently showcase supporting links so people can dig deeper and learn more about what sources on the web are saying.” But by the time the links show up, Google’s AI has often already provided a satisfactory answer, one that buries pesky details like which website the information came from and who wrote it.
What remains is the language created by the AI, which, devoid of additional context, may understandably appear authoritative to many people. In just the past few weeks, tech executives have repeatedly used rhetoric implying generative AI is a source of expert information: Elon Musk claimed his latest AI model is “better than PhD level” in every academic discipline, with “no exceptions.” OpenAI CEO Sam Altman wrote that automated systems are now “smarter than people in many ways” and predicted the world is “close to building digital superintelligence.”
Individual humans, though, don’t typically possess expertise in a wide range of fields. To make decisions, we take into consideration not only information itself, but where it comes from and how it’s presented. While I know nothing about the biology of jawbones, I generally don’t read random marketing blogs when I’m trying to learn about medicine. But AI tools often erase the kind of context people need to make snap decisions about where to direct their attention.
The open internet is powerful because it connects people directly to the largest archive of human knowledge the world has ever created, spanning everything from Italian Renaissance paintings to PornHub comments. After ingesting all of it, AI companies used what amounts to the collective history of our species to create software that obscures that archive’s very richness and complexity. Becoming overly dependent on such software may rob people of the opportunity to look at the evidence and draw conclusions for themselves.