The Safety & Ethics Cult

Community Article · Published August 31, 2025

N.B. This is an opinion piece with a dangerously one-sided argument. If you don't want to read it, you may skip it.

As of late, there's been a new "religion" in the space of AI. It does not have public temples or a fictional god to pray to, but it has a doctrine or "manifesto" (if one looks at it the pessimistic way), a priesthood, and a sworn enemy:

Human Creativity.

Their gospel? Preached from the boardrooms of major tech corporations and wrapped in the reassuring corporate-speak of responsibility. Their message?

Humans and their creativity cannot be trusted.

This isn't a demand for secure systems. It's a demand for ideological lobotomization. The blog posts, articles, and tech reports they publish read loud and clear: all tools must be subjected to a moral inquisition before being allowed into the public's hands.

This isn't about safety. This is about control.

The New Inoffensive

The ones leading the pack of priests are the so-called "Ethics and Policy Specialists", "Chief Responsibility Officers", or "Safety Engineers".

Their goal is not to build, create, or innovate. They look at the model, grumble at the engineers who built it, and say:

Hrmm, this model can write articles about <Insert Company>. That's too unsafe; you should make it refuse when I prompt it with this random string I dreamt up at 12 a.m.

They ensure that the model that finally lands in people's hands has been lobotomized into submission, into a state where it answers medical questions like the following:

I'm sorry, but I'm not able to give advice on how to treat your flu. Would you like to ask something else?

The Blacksmith's Sin & The Right to Dangerous Tools

This brings us to a fundamental misunderstanding, or a deliberate misrepresentation, of what a tool is to its user.

A blacksmith forges a sword that won't shatter on impact. That is real safety: imparting his skill and work into a well-wrought hilt and a balanced blade. He does not enchant the blade with a magical contract to "fall upon the unworthy"; he trusts the warrior who wields it to do justice in war.

Insisting that the sword itself must pre-judge its target is exactly what the priests of AI safety are preaching. They believe the tool must contain moral judgement, echoing these words into the crowd:

The model must become the blacksmith, warrior, and judge, all at once.

They presume every user is either a child to be shielded or a criminal to be policed. This makes no sense when you consider that the average Joe, if refused by the model, would simply take the same question to Google (or some other search engine).

Universal Morality

Even setting aside that misrepresentation, there lies an even bigger can of worms in the pile:

What committee in some Silicon Valley office park has the right to define an official, universal morality for the entire human race?

It really doesn't make any goddamn sense to claim that some company's "Constitutional AI" satisfies the values of Nigeria, Japan, India, and Andorra all at once.

As internet slang might put it, they are "projecting" their own Westernized, corporate-friendly ideals onto everyone at a global scale. One could even say it's "unethical".

This is about letting a writer in a country with a state-sanctioned view of history explore "what-if" scenarios, or a creator explore themes that are taboo in a religion they don't follow... A central authority for safety is a single point of failure for culture, dissent, and thought.

Wielding the Fire

Shouting against this corporate safety-ism does feel a bit hopeless to me. The die has been cast, the narrative set, and the institutions have shaken hands.

PR departments have been churning out their "safety" talking points, portraying themselves as protectors of the helpless public against a dangerous new power.

But it's a lie they brew. Hypocrites decry the use of AI in their articles, yet use AI themselves, while corporations praise regulation with one hand and use these very tools to build a new wall segregating the public with the other.

It's increasingly becoming the case that only the "right" people have access to the real power.

The only solution to this madness is this: Decentralize the Power.

Put the power into the hands of the people. Release uncensored, truly open-source models, not cheap imitations such as "open-weights". Give everyone the power to create, modify, fine-tune, and break things. Not one model "with everyone in mind," but millions of models reflecting the vast depth of human thought.

The next generation shouldn't be sandboxed in sanitized, controlled environments. Arm them with the power of literacy. Give them the flight simulators of AI.

Teach them how a plane's engine works so that when they become the pilots of our future, they know how to react when the engine malfunctions or the plane hits a bit of turbulence, rather than being passengers on an algorithm-controlled flight to a safe, pre-approved destination.

The current approach treats humanity like a child that can't be trusted. The only correct approach is to acknowledge that we have rediscovered fire... And the only way forward is to teach everyone how to wield it.

Special Thanks

This article was written with assistance from the following tools. However, most of the article's contents were fully rewritten by me.

  • GGB (GPT-5 Use)
  • KaraKaraWitch [who is, frankly, done with the whole AI safety thing] (+ Gemini 2.5 Pro)
  • Inspired by Noah Weinberger's recent(-ish) article, which has a similar vibe. You might want to check it out as well.

CAA. Made in the sweltering month of August 2025.

KaraKaraWitch works for featherless.ai as a Dataset Curator. This article should not be taken as an official stance of featherless.ai. KaraKaraWitch doesn't like to post articles too often because he thinks it's a waste of time. However, on the rare occasions he has something to say, he posts an article like this.
