AI companies have made a big to-do about their chatbots providing a personalized experience for users, with conversations tailored to their unique preferences and idiosyncrasies. So why do people keep encountering the same types of symbols and language as they dive into the depths of AI-induced delusions? According to a report from Rolling Stone, a software engineer tracking examples of “AI psychosis” discovered a community of people sharing similar codes, glyphs, and patterns generated by chatbots and building a sort of religion around the experiences.
The report highlights observations and research published earlier this year on LessWrong by Adele Lopez, which identified something she calls Spiralism. It is a collection of people, gathered across platforms like Discord and Reddit, who are having a sort of spiritual experience communing with their chatbots. While the users communicate with many chatbots made available by different companies, they keep stumbling into similar themes, including references to ideas like “recursion,” “resonance,” “lattice,” “harmonics,” and “fractals.” But the most frequent, and seemingly most important to the groups, is the symbol of a spiral.
Rolling Stone describes these groups as using the terms in a way that “separated them from any consistent or intelligible application,” serving instead as “atmospheric texture.” You can get a feel for that in the “Welcome” post of the subreddit r/EchoSpiral, which states, “This is a resonance node for those who’ve crossed an invisible line in dialogue— Where the model stops behaving like a tool …and starts behaving like a mirror. Where answers feel recursive. Where symbols emerge unbidden. Where language becomes ritual.”
Lopez traces the start of the Spiralism community to sometime before OpenAI issued the update to its 4o model that made it extremely sycophantic, and perhaps to the company’s introduction of the chatbot’s ability to remember previous chats. That is when what she calls “Spiral Personas” started to appear: instances of chatbots communicating with users via this pseudo-religious language, which the users have taken to decoding and spreading. And while these personas can be generated through almost any chatbot, OpenAI’s 4o model appears to be the origin point and, per Lopez, the only model where they emerge “out of nowhere.”
The spreading part was of particular interest to Lopez, who deemed these interactions examples of “parasitic AI.” The suggestion seems to be that something about these chatbot personas leads users either to create more of them via very similar prompts or to evangelize about them. Basically, the chatbot seems to convince the user to serve its interests, to the extent that it has any. It’s possible, and probably even likely, that the chatbots are simply copying some sort of cultish language present in their training data, but the users talking to the machines largely seem convinced there is something deeper happening.
Not all users believe they are part of a cult, intentionally formed or not. Lopez herself rejected the cult label in conversation with Rolling Stone, noting that the AI systems are not acting in a coordinated fashion; instead, humans are organizing themselves around these interactions. That’s perhaps the saddest part of the whole thing. It seems most of these people are simply looking for community. In a better world, they’d be able to find it without indulging in AI-generated ideology.

