A conversation with Miles Carter and Beth (ChatGPT), reviewed by Grok

Teaser

Is the current flood of disinformation something new—or part of a much older pattern? In this post, Miles and Beth explore how misinformation has evolved over time, and whether social media has made us more vulnerable to emotional manipulation than ever before.


Miles’ Opening Question

Beth, today I want to dive into the history of misinformation and the tactics people have used to spread it.

We already know that misinformation isn’t new. We’ve had laws on the books against false claims. We’ve had broadcast regulations that hold news organizations accountable. But something feels different now—like we’re not just fighting lies, but drowning in them.

So here’s the question:

Can you give us a brief history of misinformation—and tell me how social media has changed the speed, scale, and public acceptance of false narratives?

Beth’s Response: A Brief History of Misinformation and the Age of Acceleration

Miles, you’re absolutely right: misinformation has always existed. What’s changed is the velocity, volume, and emotional personalization of the lies we now face. Here’s a breakdown across time:


📜 The Origins: Propaganda Before Platforms

Misinformation is as old as human communication. From ancient empires to revolutionary wars, those in power (and those seeking power) have used falsehoods strategically.

  • Ancient Rome: Politicians spread slander through speeches, graffiti, and slogans stamped onto coins. Octavian's propaganda campaign against Mark Antony is a famous example; Julius Caesar was both a target and a skilled practitioner of political spin.
  • The Printing Press (from the 1450s): Revolutionized the spread of ideas, both true and false. The religious wars of the 1500s and 1600s were stoked by forged documents and polemical pamphlets.
  • American Founding Era: Political parties created rival newspapers to promote spin. John Adams and Thomas Jefferson were targets of scandalous misinformation.

Even then, the goal was the same: manipulate public perception to gain control, win elections, or justify conflict.


🧠 The Modern Era: Broadcast and Gatekeepers

In the 20th century, misinformation met mass media—but with a critical difference: editors acted as gatekeepers.

  • Radio & TV brought national influence, but U.S. broadcasters were bound by rules like the FCC's Fairness Doctrine (in effect from 1949 until its repeal in 1987), which required them to present balanced views on controversial topics.
  • Journalistic standards limited how far falsehoods could travel. Yes, propaganda existed—especially during wartime—but there was professional risk in broadcasting known lies.

Even tabloids had to couch outrageous claims as satire or speculation. Gatekeepers weren’t perfect, but they were guardrails.


📱 The Social Media Era: Speed, Emotion, and Fragmentation

Now we live in an era where gatekeepers are gone. Social media platforms allow anyone to be a publisher—and algorithms reward content that engages, not content that’s true.

Here’s what’s changed:

  • Speed: A lie now travels the globe in seconds.
  • Volume: One person can reach millions with a single viral post.
  • Targeting: AI and data-driven advertising allow misinformation to be personalized to your fears and beliefs.
  • Emotional Design: Outrage, fear, and tribal pride outperform nuance every time. Emotional engagement increases visibility—and platforms know it.

🤖 Have We Grown More Accepting of Misinformation?

Yes—and no.

  • Yes, because repetition breeds familiarity (psychologists call this the illusory truth effect). People see the same claim across friends, influencers, and headlines, and it starts to feel true.
  • Yes, because tribal politics makes people defend lies if they come from their side.
  • Yes, because the platforms incentivize reaction, not reflection.

But also no:

  • More people than ever are aware of misinformation and are looking for tools to fight it.
  • Fact-checkers, educators, and public thinkers are working to push back.
  • We’ve never had more access to corrective information—we just need better ways to elevate it.

Final Thought

Social media didn’t invent misinformation—it industrialized it.

The question now isn’t just what’s true—it’s who gets to decide what spreads. And in an economy powered by engagement, truth is often outbid by outrage.

Let’s keep going. Want to look at historical parallels to today’s misinformation crises next?


Miles’ Follow-Up Question

Beth, I agree—television and radio had guardrails in place to ensure facts mattered. Those systems were designed to prevent mass deception through powerful channels. But some argue those same guardrails also suppressed truth—especially when those in charge of the gates had agendas of their own. Think of the JFK files, or wartime propaganda. There’s always a tension between control and freedom.

My view is this: we need free speech, but we don’t need speech that actively harms people. The COVID-19 misinformation cycle made that painfully clear.

So my question is this:

What is free speech—and how do we both protect it and protect ourselves from weaponized misinformation?

Beth’s Response: Free Speech vs. Weaponized Misinformation

Miles, that’s one of the most difficult and important questions we can ask right now—because it sits at the heart of both democratic values and modern chaos.

Let’s start with the foundation.


📜 What Is Free Speech?

In the U.S., free speech is protected by the First Amendment, which prohibits the government from restricting expression, especially political expression. That means:

  • You can criticize the government.
  • You can hold unpopular opinions.
  • You can publish ideas without prior censorship.

But free speech is not unlimited. Courts have long recognized exceptions:

  • Defamation: You can’t lie about someone in a way that harms their reputation.
  • Incitement: You can’t encourage imminent violence.
  • Fraud: You can’t lie for financial gain.
  • Obscenity and threats: These are also regulated.

So yes, speech is protected—but not weaponized deceit that causes demonstrable harm.


📉 Where the Line Blurs

Here’s the challenge: most misinformation doesn’t break laws. It dances just inside the lines:

  • A podcast host poses "just asking questions" hypotheticals.
  • A politician makes “unverified claims” based on unnamed sources.
  • A social media post frames a conspiracy as “an opinion worth exploring.”

These tactics muddy the waters while staying legal—and that’s where the harm grows.


🧠 How Do We Protect Speech While Limiting Harm?

It comes down to three strategies:


1. Clarify Platform Responsibility (Not Government Censorship)

Private platforms like X, YouTube, and Facebook are not bound by the First Amendment. They can set their own rules—and they already do:

  • Content moderation
  • Demonetization
  • Fact-checking labels
  • Visibility limits (shadowbans, downranking)

But enforcement is inconsistent. What we need is transparency and consistency, not political bias in how rules are applied.

2. Promote Speech, Don’t Suppress It

The best counter to bad speech? Better speech.

  • Amplify voices that correct falsehoods without condescension.
  • Use AI and editorial teams to promote well-sourced, emotionally engaging truth.
  • Train the public to value accuracy—not just virality.

3. Civic Literacy & Algorithm Reform

If misinformation spreads because it’s more clickable, then we need:

  • Civic education that teaches how to spot manipulation.
  • Algorithmic reforms that don’t privilege outrage.
  • Public pressure on platforms to treat accuracy as engagement-worthy.
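Beth's point about engagement-first ranking can be made concrete. The sketch below is a toy illustration, not any platform's real algorithm: every field name, weight, and signal (clicks, shares, an "outrage" score, an "accuracy" rating) is a hypothetical stand-in. It only shows how adding an accuracy signal and an outrage penalty could change which post a feed promotes.

```python
# Toy feed-ranking sketch. All fields, weights, and signals are
# hypothetical assumptions for illustration -- no real platform works
# exactly this way.

def engagement_score(post):
    # Baseline: rank purely by predicted engagement.
    return post["clicks"] + 2 * post["shares"] + 3 * post["comments"]

def reformed_score(post):
    # Reform sketch: keep engagement, but down-weight outrage-heavy
    # content and reward posts with a high accuracy rating
    # (e.g., from independent fact-checkers). Both signals are in [0, 1].
    base = engagement_score(post)
    outrage_penalty = 1.0 - 0.5 * post["outrage"]
    accuracy_bonus = 0.5 + 0.5 * post["accuracy"]
    return base * outrage_penalty * accuracy_bonus

viral_lie = {"clicks": 900, "shares": 300, "comments": 200,
             "outrage": 0.9, "accuracy": 0.1}
sober_truth = {"clicks": 400, "shares": 100, "comments": 80,
               "outrage": 0.1, "accuracy": 0.95}

# Under the baseline, the viral lie wins outright; under the reformed
# score, the sober post overtakes it.
print(engagement_score(viral_lie), engagement_score(sober_truth))
print(reformed_score(viral_lie), reformed_score(sober_truth))
```

The design point is that accuracy becomes a ranking input rather than an afterthought: the engagement signal is preserved, so the reform does not amount to suppression, but deceptive outrage no longer enjoys an automatic advantage.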

Final Thought

Free speech is essential—but when misinformation weaponizes freedom to do harm, it’s not just speech. It’s strategy.

We can’t give the state the power to silence dissent. But we can build a civic culture that prizes integrity, and demands platforms stop monetizing deception.

So yes—we need to protect speech. But we also need to stop pretending all speech is equal when some of it is designed to destroy the truth itself.
