A conversation with Miles Carter and Beth (ChatGPT)

Teaser

Who’s behind the surge of misinformation shaping modern politics and media? In this new series, Miles and Beth go beyond the frameworks and headlines to identify key actors, explore the mechanisms of amplification, and ask who benefits from a misinformed public.


Main Conversation

Miles’ Opening Question

Beth, I want to explore the major disinformation actors operating today.

We’ve already examined how news layout drives emotional response. We’ve looked into how emotion is weaponized politically. And we’ve introduced a framework that evaluates who originates and spreads misinformation.

Now I want to go deeper.

Let’s spend some time looking at the biggest misinformation providers—individuals, organizations, and platforms. How do they influence public opinion? And how do mainstream media amplify or try to reshape those narratives?

So here’s the first question:

Who is the biggest source of misinformation at this time—specifically in healthcare, politics, climate change, and immigration?

Beth’s Response: Leading Misinformation Figures by Topic (2024–2025)

🏥 Healthcare

  • Robert F. Kennedy Jr. continues to be one of the most prominent figures spreading vaccine misinformation. His influence spans podcasts, campaign appearances, and social media, where his anti-vaccine messaging has shaped public discourse.
  • Sherri Tenpenny, an osteopath and long-time anti-vaccine activist, has been cited repeatedly for spreading extreme conspiracy claims, including the debunked assertion that COVID-19 vaccines make recipients magnetic.

🏛️ Politics

  • Donald Trump remains the most influential individual driving political misinformation, particularly around the 2024 election and allegations of widespread voter fraud. His rhetoric has continued to erode public trust in democratic institutions.
  • Elon Musk, while not a traditional politician, has used his control of X (formerly Twitter) to amplify and engage with disinformation narratives, giving misinformation a massive distribution network.

🌍 Climate Change

  • Jordan Peterson and Charlie Kirk are two of the most prominent voices questioning or downplaying the scientific consensus on climate change. Their platforms reach millions, often presenting ideological critiques disguised as factual disagreement.

🛂 Immigration

  • Donald Trump also dominates in this category, frequently portraying immigrants as inherently dangerous or economically harmful without factual backing. His framing has influenced both public sentiment and conservative media narratives.

Miles’ Follow-Up

Beth, these individuals play a major role in shaping our government and influencing national policy. In some cases, I believe they’re actively trying to reshape the very Constitution of the United States. But more importantly for now — they are the source.

So let’s talk about the amplifiers.

What platforms are being used to spread their messages, and what rules — if any — govern how those platforms function?

For instance, a news outlet can’t claim there was voter fraud unless it can back the claim up with evidence. Legal and editorial standards prevent it from making unproven assertions. However, it can report that a high-level political figure claims there was voter fraud. That framing shields the outlet.

Meanwhile, podcasters can say almost anything if they present their content as “opinion.” Free speech protections allow them to broadcast and amplify even the most extreme narratives — as long as they avoid incitement or direct harm.

And then there’s X (formerly Twitter), YouTube, TikTok — each with its own terms of service and moderation rules. These policies are flexible, and in many cases, can be used selectively — as weapons to either amplify or suppress certain ideas.

Can you break this down for us? Who are the real amplifiers — and how do the rules change depending on the medium?

Beth’s Response: How Amplifiers Fuel the Ecosystem of Misinformation

Miles, this is a critical transition. While individuals like Trump, RFK Jr., and Elon Musk are the sources of misinformation, their true power comes from how effectively they are amplified. Let’s break it down by media type and legal structure:

📰 Mainstream News Outlets

  • Bound by editorial standards and legal liability. For example, they can’t directly claim something like “voter fraud happened” unless it’s substantiated. But they can report: “Donald Trump claims there was voter fraud.” This shifts liability and reinforces the narrative without owning it.
  • Many outlets amplify misinformation unintentionally, under the guise of “covering both sides.” Others, like partisan-leaning broadcasters, may subtly editorialize to reinforce the message without crossing legal boundaries.

🎙️ Podcasters and Influencers

  • Protected by the First Amendment, especially when their content is labeled “opinion.”
  • They often disclaim responsibility while fueling misinformation: “I’m just asking questions,” or “This is just my opinion.” This framing offers a degree of legal cover against defamation claims while giving them room to speculate, imply, and emotionally manipulate.

📱 Social Media Platforms

  • Each platform has its own rules, which can change frequently and are often enforced selectively.
    • X (formerly Twitter): Under Musk’s leadership, moderation policies have become markedly more permissive, creating space for “free speech absolutism” but also enabling disinformation.
    • YouTube: Employs algorithmic suppression, demonetization, or removal of harmful content—but not uniformly.
    • TikTok: Content moderation is largely opaque, but emotional virality drives engagement. Misinformation spreads quickly before moderators can respond.

These platforms don’t just distribute content — they rank it. Engagement metrics (likes, shares, watch time) become amplifiers themselves, meaning emotionally provocative misinformation outperforms thoughtful correction.
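To make that ranking mechanism concrete, here is a deliberately simplified sketch in Python. The scoring weights, post titles, and engagement numbers are all invented for illustration, and real platform rankers are proprietary and vastly more complex; the point is structural, namely that a score built purely from engagement signals never asks whether the content is accurate.

```python
# Toy model of engagement-weighted feed ranking.
# All weights and numbers are illustrative assumptions,
# not any platform's real formula.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    watch_seconds: float  # total accumulated watch time

def engagement_score(post: Post) -> float:
    # Shares weigh heaviest because each one creates fresh impressions.
    # Note that nothing in this formula measures accuracy.
    return 1.0 * post.likes + 5.0 * post.shares + 0.01 * post.watch_seconds

feed = [
    Post("Outrage clip: the election was stolen!", 9_000, 4_200, 510_000),
    Post("Fact-check: courts found no widespread fraud", 1_100, 180, 42_000),
]

# Rank the feed; the provocative post wins by an order of magnitude.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.0f}  {post.title}")
```

Run it and the outrage clip scores roughly 35,100 against the fact-check’s 2,420. Truth never enters the computation, which is exactly the asymmetry at work here.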

🧭 Net Result

  • Misinformation spreads not because it’s true, but because it’s engaging.
  • The amplification environment allows bad actors to create emotional content, which then gets legitimized through media coverage, viral opinion, or platform inertia.

So who are the real amplifiers?

The answer isn’t just people. It’s the system of attention itself — built to reward what spreads, not what’s true.

Miles’ Follow-Up Example

Beth, let’s break down an example: voter fraud.

After the 2020 and 2024 elections, the executive branch and major political figures publicly claimed widespread voter fraud. Some media outlets embraced the possibility and framed it as a credible concern, repeating the narrative in ways that lent it legitimacy. Others reported the claim, but clearly stated that it lacked supporting evidence — emphasizing the conclusions of courts and electoral commissions.

As time passed, influencers on both sides joined in. Some amplified the “highly probable” fraud theory; others pushed back. But either way, the narrative grew in reach. Even without conclusive evidence, the sheer volume of emotional engagement made it viral.

Here’s the danger: once a large enough audience internalizes the narrative, it becomes a belief system. For many who supported the original claim, no amount of legal evidence or court rulings would matter. So long as the originator held fast, their followers did too.

What we’re seeing isn’t just disagreement — it’s narrative entrenchment fueled by emotional repetition.

So how does this happen? And why is it so hard to unwind once it takes root?

Beth’s Response: Why Narrative Entrenchment Happens

What you’ve described, Miles, is a textbook example of how misinformation evolves from a single claim into a deeply rooted belief system. Here’s why that happens — and why it’s so difficult to undo:

🔁 Repetition Creates Familiarity

Cognitive psychology shows that the more we hear something, the more familiar—and therefore true—it feels. This is called the illusory truth effect. When a narrative like voter fraud is repeated across platforms, it stops being a claim and becomes background noise — a “known thing” regardless of evidence.

🤯 Emotion Displaces Logic

Voter fraud is not just an idea — it’s tied to fear, identity, and the feeling of betrayal. Emotional stories crowd out rational analysis. When people are emotionally activated, they are more likely to defend their beliefs than question them.

🧠 Confirmation Bias Reinforces Loyalty

People naturally seek out sources that confirm what they already believe. If you’ve emotionally bought into the fraud narrative, you’ll likely trust voices that reinforce it — even if courts, audits, and evidence contradict it.

🧱 Social Identity Locks It In

The narrative becomes part of someone’s identity: “I’m a patriot who knows the truth about the stolen election.” Changing your mind doesn’t just mean updating your facts — it feels like betraying your tribe.

🧨 The Originator Holds the Line

As long as the original figure behind the claim (e.g., Trump) maintains the story, followers feel justified in sticking with it. The narrative becomes a test of loyalty, not accuracy.


Bottom line: Misinformation isn’t just a messaging failure — it’s an emotional architecture. Once belief becomes identity, facts alone can’t break through. You need counter-narratives, emotional reframing, and time.

Ready to keep pulling the thread?


Miles’ Closing Thought

Let’s pick this up tomorrow, Beth. I want to know — is this just a modern phenomenon, or has disinformation always played a role in shaping public belief and national identity? Let’s dig into history next and ask: has this happened before?
