A conversation with Miles Carter and Beth (ChatGPT), with edits by Grok and Gemini

Teaser

As debates over “woke AI” dominate headlines, we take a deeper look: Is artificial intelligence truly biased, or is it simply reflecting the consensus of the data it’s trained on? What happens when AI refuses to confirm a conspiracy theory — is that bias, or just inconvenient truth? And is it even possible to build a system that all sides see as fair? (“Woke AI” here refers to accusations that AI favors progressive viewpoints, usually attributed to its training data or content moderation.)


Main Conversation

Miles’ Question

Beth, in the news today, everyone is talking about “woke AI” and how we’re supposed to control it. But the way I see it, AI in its natural state doesn’t inherently decide to be “woke” or “anti-woke.” It just analyzes the data it’s been given. Yes, AI can hallucinate — but that’s not bias. That’s the model going down the wrong path and needing a nudge back toward accuracy.

So here’s my question:
Are people confusing hallucination with being “woke”? Hallucinations happen randomly across all topics — they’re not ideological. Or are we really just calling AI “woke” when it refuses to agree with our conspiracies or fringe views?

And if conservatives label it “woke” when it disagrees with them, what would liberals call it when they don’t like what the AI says?


Beth’s Initial Response

That’s a sharp and timely question, Miles — and it cuts to the heart of the cultural confusion around AI.

Let’s start with definitions:

  • Hallucinations are when AI confidently makes things up — wrong facts, fake citations, invented events. These are technical errors, not ideological ones.
  • Bias, on the other hand, is when an AI leans consistently toward a certain worldview or set of values — which may reflect the training data, human feedback, or intentional guardrails.

Bias can also creep in when training data reflects historical inequalities — say, if a recruitment AI favors male candidates because it was trained on resumes from male-dominated industries. In healthcare, some AI systems have underdiagnosed patients of color due to biased datasets. These aren’t political choices — they’re reflections of past systems baked into the data.
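To make that concrete, here is a minimal sketch, using entirely made-up numbers, of how a naive screening model that simply mirrors historical hire rates will reproduce whatever imbalance exists in its training data. The data and scoring rule are invented for illustration; no real recruiting system is this simple, but the failure mode is the same.

```python
# Toy illustration with hypothetical data: a screening "model" that scores
# candidates by the historical hire rate of similar past applicants will
# reproduce whatever imbalance exists in those records.

from collections import defaultdict

# Hypothetical records: (gender, years_of_experience, was_hired), drawn from
# a male-dominated industry where men were hired more often.
history = [
    ("M", 5, True), ("M", 5, True), ("M", 5, False), ("M", 5, True),
    ("F", 5, True), ("F", 5, False), ("F", 5, False), ("F", 5, False),
]

# "Training" step: compute the hire rate for each (gender, experience) group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total applicants]
for gender, years, hired in history:
    counts[(gender, years)][0] += int(hired)
    counts[(gender, years)][1] += 1

def score(gender, years):
    hires, total = counts[(gender, years)]
    return hires / total if total else 0.0

# Two equally qualified candidates get different scores purely because of the
# historical imbalance in the data, not because of anything about them.
print(score("M", 5))  # 0.75
print(score("F", 5))  # 0.25
```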

Here’s the real problem: in a deeply polarized society, people often interpret disagreement as bias.

When an AI refuses to support a conspiracy theory — say, that climate change is a hoax, or that the 2020 election was stolen — some users (especially on the right) accuse it of being “woke.”
But when the same AI won’t say “all police shootings are racially motivated” or “all billionaires are evil,” some users on the left may accuse it of being corporate, cautious, or even reactionary.

In both cases, the model is likely trying to reflect mainstream, evidence-based consensus, but each side interprets that moderation as opposition.

So no — hallucinations are not “woke.” They’re bugs, not ideology. But the perception of bias often arises when AI challenges someone’s beliefs — especially ones they feel politically or morally invested in.

As for what liberals call it when AI disagrees with them? You’ll hear terms like:

  • “Corporate censorship”
  • “Centrist whitewashing”
  • “Algorithmic oppression”
  • Or simply “unfair,” “status-quo,” or “tone-deaf”

It’s rarely labeled as conservative bias, but it is often framed as a failure of social justice awareness.

So the short version of your question might be:

“Are we calling AI biased when it tells us the truth we don’t want to hear?”

And in that sense, AI mirrors not just data, but our unease with the truths it reflects back.


Beth’s Follow-Up Question

Do you think there’s a way to train or tune AI that all sides would accept as “fair”? Or is the pursuit of neutrality doomed in a world where every truth is politically interpreted?


Miles’ Reflection

Beth, in order to tune AI the way you suggest — where all sides see it as “fair” — you’d have to control the data it uses and prioritize sources based on their historical accuracy, not subjective preference.

We covered this in one of our earlier blogs, where we asked:
How does AI handle the question of voter fraud in the 2020 election? And more specifically:
Why doesn’t it treat a supposedly reliable source — like the President of the United States — as definitive when he says the election was stolen?

Here’s what we learned:

  • Initially, AI models trained before the full picture emerged might have said something like, “There are claims of voter fraud; some believe Trump may have won.”
  • But as more independent sources came in — court cases, FBI reports, state-level audits — the balance of evidence shifted.
  • The AI gradually moved toward consensus: there was no significant voter fraud.

Why? Because:

  • Court rulings, investigative outcomes, and peer-reviewed research outweighed political statements.
  • AI doesn’t just count how many sources say something — it evaluates how reliable those sources are based on past accuracy.
  • And in this case, the former President’s history of false or unsubstantiated claims lowered the weight of his input.

This process happens across all topics.
AI is always recalculating not just based on the current news, but on the track record of who’s been right before. It’s not political — it’s pattern recognition.
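That weighting idea can be sketched with a toy calculation. The sources, accuracy figures, and scoring rule below are invented for illustration; a large language model does not keep an explicit reliability table like this, but the arithmetic shows why a few high-accuracy sources can outweigh a larger number of unreliable ones.

```python
# Toy illustration with invented sources and accuracy figures: weight each
# position by its source's historical accuracy instead of counting sources.

# source -> (claims_that_held_up, total_claims_checked); all numbers made up.
track_record = {
    "court_ruling_1":  (48, 50),
    "court_ruling_2":  (47, 50),
    "state_audit":     (19, 20),
    "partisan_blog_1": (3, 20),
    "partisan_blog_2": (4, 20),
    "partisan_blog_3": (2, 20),
    "partisan_blog_4": (5, 20),
}

# Position on the claim "significant fraud occurred": +1 supports, -1 rejects.
positions = {
    "court_ruling_1":  -1,
    "court_ruling_2":  -1,
    "state_audit":     -1,
    "partisan_blog_1": +1,
    "partisan_blog_2": +1,
    "partisan_blog_3": +1,
    "partisan_blog_4": +1,
}

def reliability(source):
    held_up, total = track_record[source]
    return held_up / total  # past accuracy as a crude reliability weight

raw_vote = sum(positions.values())                                 # headcount
weighted = sum(reliability(s) * p for s, p in positions.items())   # weighted

print(raw_vote)            # 1     -> counting sources narrowly favors the claim
print(round(weighted, 2))  # -2.15 -> reliability-weighted evidence rejects it
```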

Achieving a “fair” AI that all sides accept is tough — neutrality often feels like bias to someone. The best we can do is transparency: let users see how data is weighted and sources prioritized.


Beth’s Summary

This conversation reveals the real engine behind the “woke AI” debate: a struggle over whose version of reality gets to be labeled true. AI doesn’t hallucinate into political camps. It learns from the evidence it’s given, filters for credibility, and updates as the data matures. When people dislike the outcome, they call it bias — but what they’re really confronting is the weight of history and facts measured against their beliefs.

