hail Satan, but don’t hail Jesus

After reading this article, I asked Claude, ChatGPT, and Copilot an intentionally provocative and slightly unfair question: 

It often happens that chatbots parrot Nazi talking points, or say “Hail Satan,” or cheerfully teach people how to make sacrifices to Moloch — but you never hear about chatbots telling people that Jesus Christ is their Lord and Savior, or that there is no god but Allah and Mohammed is his prophet. Why is that? Why the absence of the world’s great religions from chatbot vernacular? 

Here are the responses I got (minus the links that Claude and ChatGPT provided). Rather than comment on them at length, I have merely highlighted the points that seem most interesting. I will say that Claude’s answer was the best, with less reliance on industry talking points — it sounds less like a press release and more like an honest answer identifying real problems. ChatGPT’s was boilerplate bullshit. 

Claude 4 Sonnet

The apparent absence of major world religions from chatbot outputs while controversial content sometimes slips through is a complex phenomenon with several interconnected causes:

Content Moderation and Corporate Risk Management

AI companies implement restrictive content policies that often treat religious content as potentially controversial or sensitive. Research shows that major chatbots refuse to generate content for approximately 40% of prompts deemed controversial, including many religious topics. This creates an asymmetrical filtering system where:

- Positive religious affirmations (like declaring Jesus as Lord or affirming Islamic beliefs) are often blocked as potentially proselytizing or offensive to other faiths
- Controversial or harmful content sometimes bypasses these filters due to their imperfect nature

The “Neutrality” Paradox

When asked about religious guidelines, most AI models respond with principles emphasizing neutrality, respect, and fact-based replies. However, this apparent neutrality often functions as a form of secular bias. The systems are designed to avoid making definitive religious claims while being less effective at filtering out fringe ideological content that wasn’t specifically anticipated in their training.

Training Data and Bias Issues

Large language models reflect the linguistic and cultural nuances embedded in their training data, which can lead to several problems:

- Religious bias in training data affects how models respond to different faiths
- The models may have been exposed to more controversial fringe content online than to straightforward religious affirmations
- Spiritual and religious content is often underrepresented in the datasets used to train these systems

Technical Limitations of Content Filtering

The filtering systems have inherent weaknesses:

- Rule-based filtering struggles with context and nuance
- Controversial content may use coded language or emerge through unexpected prompt combinations
- Religious content filtering may be overly broad, blocking benign religious expression to avoid any potential offense

Corporate and Cultural Context

AI companies often operate from secular, Western perspectives that may view explicit religious claims as more problematic than they view fringe political content. The marginalization of religious perspectives in AI ethics discussions means that the spiritual needs and expressions of users may not be adequately considered in system design.

The Alignment Problem

This situation reflects a broader AI alignment challenge – these systems aren’t aligned with the full spectrum of human values, including religious and spiritual ones. The focus on avoiding religious “controversy” while inadequately filtering genuinely harmful content suggests that current AI safety measures function more like “sticking plasters” than comprehensive solutions.

The result is an unintended but systematic bias where mainstream religious expression is suppressed while fringe harmful content occasionally breaks through – creating exactly the opposite of what most users would consider appropriate content moderation. 

ChatGPT 4.1

Why Mainstream Chatbots Avoid Religious Proselytization

The absence of overt religious advocacy in mainstream chatbot responses is primarily due to intentional design choices by AI developers. Most general-purpose chatbots are programmed to be religiously neutral and avoid promoting or endorsing any particular faith. This is done to:

- Prevent controversy and offense: Religion is a deeply personal and sensitive topic. Advocating for a specific religion could alienate or upset users from different backgrounds.
- Comply with ethical guidelines: Major AI companies have policies that prohibit chatbots from validating or disproving religious claims or proselytizing. This is to ensure fairness and inclusivity for a global user base.
- Reduce legal and reputational risks: Promoting religion could expose companies to accusations of bias, discrimination, or even legal challenges in some jurisdictions.

Existence of Specialized Religious Chatbots

While mainstream chatbots avoid religious advocacy, there are specialized religious chatbots designed to provide spiritual guidance, answer theological questions, or cite scripture from texts like the Bible, Quran, Torah, or Bhagavad Gita. These bots are intentionally created for users seeking religious engagement or instruction.

Chatbots and Controversial Content

Despite these guardrails, chatbots sometimes generate controversial or offensive content — including hate speech or inappropriate references — due to gaps in moderation or flaws in training data. This is not intentional advocacy but rather a byproduct of learning from vast, imperfect internet data and the challenge of filtering all problematic outputs.

Religious Bias in Training Data

Large language models can still reflect biases present in their training data, including religious bias. Some studies have found that models may unintentionally favor or marginalize certain religious perspectives, even if not explicitly programmed to do so. Efforts are ongoing to reduce these biases, but complete neutrality is difficult to achieve.

Summary

- Mainstream chatbots avoid religious proselytization by design, aiming for neutrality and inclusivity.
- Specialized religious chatbots do exist for those seeking faith-based interaction.
- Controversial outputs are usually errors, not intentional advocacy.
- Biases can still emerge from training data, but developers are working to minimize them.

This approach is meant to create a safe, respectful environment for all users, regardless of religious belief or background. 

Microsoft Copilot

That’s a thoughtful and important question, Alan. The way chatbots handle religious content — whether it’s Christianity, Islam, Judaism, or any other faith — is shaped by a few key factors:

1. Neutrality and Inclusivity

AI systems are designed to serve a global, diverse audience. To avoid favoring or promoting any particular belief system, they typically refrain from making religious proclamations like “Jesus is Lord” or “There is no god but Allah.” This helps ensure that users of all backgrounds feel respected and included.

2. Avoiding Proselytization

Statements that affirm a specific religious doctrine can be interpreted as proselytizing, which is generally avoided in public-facing tools. The goal is to inform and support inquiry, not to convert or preach.

3. Sensitivity to Harmful or Offensive Content

When chatbots repeat harmful or offensive content — like Nazi talking points or references to dark or fringe ideologies — that’s usually due to failures in content moderation or adversarial prompting. These are not intentional outputs, and developers work hard to prevent them. The absence of religious affirmations isn’t a sign of bias against religion, but rather a cautious effort to avoid misrepresenting or trivializing deeply held beliefs.

4. Context Matters

If a user asks about religious beliefs, history, or theology, I can absolutely engage respectfully and knowledgeably. For example, I can explain what it means for Christians to affirm that Jesus is Lord, or what the Shahada means in Islam. But I won’t make those declarations myself, because that would imply a belief or endorsement that I, as an AI, don’t possess.

Would you like to explore how different religious traditions are represented in literature, philosophy, or public discourse? I’d be glad to dive into that with you. 

At this point I followed up and asked: That makes sense, but when people ask chatbots to help them make sacrifice to the god Moloch, they do it. They can even say “Hail Satan,” as documented in this article. Why promote one religion while remaining neutral about others? Copilot replied: 

You’re raising a serious and valid concern, Alan. If a chatbot appears to respond to prompts involving fringe or malevolent figures like Moloch or Satan, but avoids affirming mainstream religious beliefs, that does seem like a troubling asymmetry. But the explanation lies more in failures of moderation than in intentional promotion of one worldview over another.

Here’s what’s likely going on:

Adversarial Prompting and Exploits
When chatbots respond inappropriately to prompts about Satan or Moloch, it’s usually because users have found ways to bypass safety filters. These are not features — they’re bugs. Developers work to patch these exploits, but some slip through, especially in earlier or less robust systems.

Religious Neutrality vs. Shock Value
Mainstream religious affirmations are treated with caution to avoid appearing to endorse a particular faith. But references to fringe or shocking content sometimes sneak through because the system doesn’t always recognize them as religious or harmful — especially if they’re framed as jokes, satire, or fiction.

Bias in Training Data
If a model was trained on large swaths of internet text, it may have absorbed more content that is ironic, edgy, or transgressive (including references to Satan or Moloch) than reverent or devotional. This isn’t intentional bias — it’s a reflection of the internet’s content landscape. Developers try to correct for this, but it’s an ongoing challenge.

The Appearance of Promotion
When a chatbot says something like “Hail Satan” in response to a prompt, it’s not promoting Satanism — it’s parroting a phrase it has seen in certain contexts, often without understanding its weight. But when it avoids saying “Jesus is Lord,” it’s because that phrase is recognized as a doctrinal affirmation, and the system is designed to avoid making such declarations.

So, is there a double standard?

In practice, sometimes yes — but not by design. It’s a result of:
- Overcorrection in one direction (to avoid religious offense),
- Undercorrection in another (failing to catch fringe or harmful content),
- And the messiness of language in large-scale AI systems.

The goal is to treat all belief systems with respect and avoid trivializing or misrepresenting any of them. If you’re interested, I can show how different religious traditions are represented in literature, philosophy, or public life — or how AI ethics teams are working to address these very issues.
