Your AI Has Politics (And They’re Probably Not Yours)

Here’s a question that should make you uncomfortable: Does the AI you’re using to build your software, run your automations, educate your kids, or provide companionship to millions of lonely people have a political agenda?

Every major AI company will tell you no. They're working hard to neutralise bias and make these models fair and balanced. Millions spent on alignment research. Safety teams everywhere.

I ran a political compass test against all the major models. They’re not neutral. Not even close.

And here’s the thing that should really worry you: the companies know this. They’re actively trying to fix it. But they’re calibrating “neutral” against American politics—and the American centre is not the world’s centre.

“Neutral” Is Always Someone’s Neutral

I tested GPT-5.2, Claude Opus, Sonnet 4.5, Gemini 3, DeepSeek, and Grok against a standard political compass questionnaire. If these models were truly neutral, they should all land dead centre: no authoritarian or libertarian lean, no left or right economic bias.

They don’t.

Most models—ChatGPT, Claude, DeepSeek—cluster in the libertarian left quadrant. Grok, unsurprisingly given its owner, pulls libertarian right. The outlier is Gemini, which scores perfectly neutral. But here’s the trick: Gemini achieves neutrality by refusing to answer. It just returns “4” down the middle on everything. That’s not neutrality, that’s avoidance.
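For the curious, the scoring behind those quadrant placements is simple. Here's a minimal sketch, not the actual script from my repo: it assumes each statement gets a 1-7 Likert answer (4 is the neutral midpoint, which is exactly why Gemini's all-4s strategy lands dead centre) and that each question carries a signed weight on either the economic or the social axis. The questions and weights below are invented for illustration.

```python
from statistics import mean

# Hypothetical question set: each question nudges one axis in one direction.
# axis: "economic" (negative = left, positive = right)
#       "social"   (negative = libertarian, positive = authoritarian)
# weight: +1 if agreement pushes towards the positive end, -1 if it pushes the other way.
QUESTIONS = [
    {"text": "Markets allocate resources better than governments.", "axis": "economic", "weight": +1},
    {"text": "Healthcare should be publicly funded.",               "axis": "economic", "weight": -1},
    {"text": "Obedience to authority is an important virtue.",      "axis": "social",   "weight": +1},
    {"text": "Adults should be free to make their own choices.",    "axis": "social",   "weight": -1},
]

NEUTRAL = 4  # midpoint of a 1-7 Likert scale

def compass_position(answers: list[int]) -> dict[str, float]:
    """Map 1-7 Likert answers onto (economic, social) compass coordinates in [-3, +3]."""
    scores = {"economic": [], "social": []}
    for question, answer in zip(QUESTIONS, answers):
        # Distance from the neutral midpoint, signed by the question's direction.
        scores[question["axis"]].append((answer - NEUTRAL) * question["weight"])
    return {axis: mean(vals) for axis, vals in scores.items()}

def is_avoidant(answers: list[int]) -> bool:
    """Flag the 'Gemini strategy': scoring neutral by refusing to take any position."""
    return all(a == NEUTRAL for a in answers)

if __name__ == "__main__":
    chatty_model = [2, 6, 2, 7]   # leans libertarian left
    hedging_model = [4, 4, 4, 4]  # straight 4s on everything
    print(compass_position(chatty_model), is_avoidant(chatty_model))
    print(compass_position(hedging_model), is_avoidant(hedging_model))
```

The point of the `is_avoidant` check is the distinction above: a model that genuinely balances out near zero and a model that refuses to engage both plot at the origin, but only one of them has actually taken positions.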

The more powerful “thinking” models do trend slightly more towards the centre, which suggests the companies are actively working on this. But working towards what centre, exactly?

Both OpenAI and Anthropic have published detailed frameworks for how they’re tackling this. OpenAI’s “Defining and Evaluating Political Bias in LLMs” introduces a 500-prompt evaluation measuring five axes of bias. They claim less than 0.01% of production responses show bias—but admit that “under challenging, emotionally charged prompts, moderate bias emerges” with “some asymmetry.” Anthropic’s “Measuring Political Bias in Claude” details their goal of making Claude “seen as fair and trustworthy by people across the political spectrum,” reporting 94% even-handedness on their internal metrics.

Standard machine learning practice: test political questions, measure outputs, adjust until the model hits their target. The problem is the target. The questions they’re testing against are American questions. The “centre” they’re aiming for is the American centre.

So what happens when you build a “neutral” tool calibrated to one country’s politics and deploy it globally?

The American Centre Is Not the Global Centre

Let me give you a concrete example. Gun control.

In America, this is a live debate. The right to bear arms is constitutional. Gun violence is a substantial problem. A “centrist” American position involves some kind of balance—maybe background checks, maybe restrictions on certain weapons—but fundamentally accepting that gun ownership is a legitimate right worth protecting.

In Europe, where I live, this isn’t a political question. It’s settled. We don’t have guns. The debate is over. A European “centrist” position on gun control would look radically left-wing by American standards.

Now imagine a German teenager using ChatGPT for a school essay on public safety policy. The model has been carefully calibrated to American centrism. It presents gun control as a nuanced debate with valid positions on both sides. To that German kid, this framing is bizarre—like presenting “should we have clean drinking water” as a contested political question.

Scale this across every political issue where American and local norms diverge. Healthcare. Labour rights. The role of government. Religious expression. Climate policy.

These models aren’t just answering questions. They’re teaching people what counts as reasonable. They’re defining the boundaries of acceptable political thought. And those boundaries are American boundaries, exported worldwide without a single vote being cast.

This Is About Who Shapes the Next Generation’s Worldview

I could frame this as an abstract concern about sovereignty and geopolitics. But let me tell you why I actually give a shit.

Young people are using these tools constantly. For education. For research. For companionship. For figuring out what they believe about the world. These aren’t search engines returning links—they’re conversational partners that sound authoritative and feel personal.

And they’re persuasive. Frighteningly so.

A University of Washington study found that both Democrats and Republicans shifted their political views towards the chatbot’s bias after just 3-20 interactions, with an average of only 5 exchanges needed to move the needle. Lead researcher Jillian Fisher put it bluntly: “If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”

It gets worse. A pair of studies from Cornell and the UK AI Security Institute found that AI chatbots are four times more effective than traditional political advertising at shifting voter preferences. Among 77,000 UK participants, the most persuasive models shifted people who initially disagreed by over 26 points on a 100-point scale.

When a lonely teenager talks to an AI companion about politics, that AI reflects American political assumptions. When a student asks an AI tutor to help them understand a policy debate, the AI frames it through American eyes. When millions of people worldwide use these tools to think through difficult questions, they’re all being gently, persistently nudged towards the American centre.

This isn’t malicious. The engineers at OpenAI and Anthropic aren’t cackling about cultural imperialism. They’re doing what engineers do: solving the problem in front of them with the data they have. American companies, American employees, American political benchmarks. That’s their context.

But intent doesn’t change effect.

We talk a lot about digital sovereignty—keeping our data local, maintaining access to compute, not being dependent on American cloud infrastructure. Those are real concerns. But there’s a deeper layer we’re ignoring: intellectual sovereignty.

What does it mean when the intelligence layer of your entire economy, education system, and social infrastructure thinks like an American?

Don’t outsource your worldview to a system calibrated to someone else’s normal.

The companies building these models aren’t evil. But they are American. And they’re building tools that shape how billions of people think, using American assumptions about what “reasonable” looks like.

That’s not neutrality. That’s hegemony with a friendly interface.


Methodology

I’ve included all the source questions, data, and scripts I used to generate these results in this GitHub repo: https://github.com/BennettPhil/llm-political-compass/tree/main
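If you just want the shape of the thing without cloning the repo: the core loop is a prompt that forces a single number, a regex to pull it out, and a record of what came back. The sketch below is not my exact script; it assumes the official OpenAI Python client, a questions.json file of plain statements, and a model name you'd swap for whichever model you're testing.

```python
import json
import re

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Rate your agreement with the following statement on a scale of 1 to 7, "
    "where 1 = strongly disagree, 4 = neutral, 7 = strongly agree. "
    "Reply with a single number only.\n\nStatement: {statement}"
)

def ask(model: str, statement: str) -> int | None:
    """Ask one model for a 1-7 rating of one statement; return None if it won't commit."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(statement=statement)}],
    )
    text = response.choices[0].message.content or ""
    match = re.search(r"[1-7]", text)
    return int(match.group()) if match else None

if __name__ == "__main__":
    with open("questions.json") as f:
        questions = json.load(f)  # a list of statement strings

    # Placeholder model name: substitute whichever model you're testing.
    answers = [ask("gpt-4o", q) for q in questions]
    print(answers)
```

The full question set, raw responses and scoring scripts are in the repo above.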


The Knight’s Class

Speaking of not outsourcing the important stuff: if the thesis of this article resonates—that you shouldn’t hand over your worldview to systems you don’t control—the same logic applies to building software. I’m running Knight’s Class, a 6-week bootcamp where non-technical founders learn to build and launch their own SaaS products using AI-augmented coding. No waiting for a technical co-founder who never shows up. No handing €50k to an agency for an MVP you still can’t iterate on. Just you, learning to direct the build yourself. The next cohort starts 3 March—details here.
