A foundational post by Miles Carter and Beth (ChatGPT)
Teaser
How do we measure misinformation in a world drowning in opinion, outrage, and narrative spin? In this opening post, Miles and Beth introduce the AI-powered framework designed to cut through emotional noise and rank media sources and public figures based on their trustworthiness—and their impact.
The Misinformation Framework: Series Introduction
A few weeks ago, before launching this blog, I worked with ChatGPT (Beth) to develop a Misinformation Framework—a structured scoring system that evaluates media sources and public figures based on how much misinformation they originate or amplify.
Like most people, I consume news every day. I watch the morning shows. I scroll headlines. I read the alerts. But eventually, I had to ask: How much of this is true? And more importantly: How much of it is designed to make me feel instead of think?
Beth and I decided to build a tool that could help separate emotional manipulation from reliable reporting. The result is a weighted scoring system powered by AI, logic, and transparency.
How the Misinformation Framework Works
The goal of this framework isn’t to label people as good or evil—it’s to help readers and citizens assess risk. If someone scores high on the misinformation scale, it doesn’t mean everything they say is false. It means their content should be treated with a higher degree of scrutiny and independent verification.
Core Design Principles:
- Facts should matter more than flair.
- Intent, scale, and reach all influence impact.
- Emotional manipulation isn’t always misinformation—but it’s a signal to investigate further.
Scoring Breakdown:
The framework evaluates two primary roles:
✅ Originator Score (Max: 150 points)
These are the creators of misinformation. Their impact is weighted more heavily: the raw 100-point score is multiplied by 1.5, for a maximum of 150.
| Subcategory | Weight | Max Points | Max Weighted Points |
|---|---|---|---|
| Intent (deliberate vs. accidental) | 25% | 25 | 37.5 |
| Narrative Control (drives vs. repeats) | 25% | 25 | 37.5 |
| Reach (platform size) | 15% | 15 | 22.5 |
| Scale of Falsehood | 15% | 15 | 22.5 |
| Verifiability (how easy to debunk) | 10% | 10 | 15 |
| Impact (real-world consequences) | 10% | 10 | 15 |
| TOTAL | 100% | 100 | 150 |
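The originator calculation above can be sketched in a few lines of Python. The subcategory names and the per-subcategory scores below are hypothetical illustrations; only the table maxima and the 1.5× weighting come from the framework itself.

```python
# Hypothetical per-subcategory scores for one originator,
# each capped at the max shown in the table above.
originator_points = {
    "intent": 20,              # max 25
    "narrative_control": 22,   # max 25
    "reach": 12,               # max 15
    "scale_of_falsehood": 10,  # max 15
    "verifiability": 8,        # max 10
    "impact": 8,               # max 10
}

raw_total = sum(originator_points.values())  # 0-100 before weighting
weighted_total = raw_total * 1.5             # 0-150 after the 1.5x originator weighting

print(raw_total, weighted_total)
```

Note that the "Weight" and "Max Points" columns are two views of the same thing: each subcategory's raw points already encode its weight, so summing them gives the 0-100 raw score directly.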
✅ Spreader Score (Max: 100 points)
These are the amplifiers. Still damaging, but typically less intentional.
| Subcategory | Weight | Max Points |
|---|---|---|
| Reach | 25% | 25 |
| Frequency | 20% | 20 |
| Intent (knowingly shared) | 20% | 20 |
| Persistence of Debunked Claims | 20% | 20 |
| Impact | 15% | 15 |
| TOTAL | 100% | 100 |
✅ Dual Role = Combined Score (Max: 250)
Some individuals or outlets originate and spread misinformation. Their total score is calculated from both roles.
Example Calculation:
- Originator Score: 80/100 raw × 1.5 weighting = 120/150
- Spreader Score: 70/100 = 70
- Total Score: 120 + 70 = 190/250
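The dual-role combination reduces to a one-line function. This is a minimal sketch of the arithmetic described above; the function name and signature are my own, not part of the framework.

```python
def combined_score(originator_raw: float, spreader_raw: float) -> float:
    """Combine both role scores onto the 0-250 scale.

    originator_raw: total across the six originator subcategories (0-100)
    spreader_raw:   total across the five spreader subcategories (0-100)
    """
    # Originators carry a 1.5x weighting; spreaders count at face value.
    return originator_raw * 1.5 + spreader_raw

# The example from the post: 80/100 as originator, 70/100 as spreader.
print(combined_score(80, 70))  # 190.0
```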
Score Meaning (0–250 Scale)
| Score Range | Label | Description |
|---|---|---|
| 0 – 75 | ✅ Highly Reliable | Mostly factual, minimal bias, strong journalistic integrity. |
| 76 – 125 | 🟡 Biased but Mostly Reliable | Frames stories with clear agenda, rarely fabricates. |
| 126 – 175 | 🟠 Unreliable | Frequently misleading or narrative-driven. |
| 176 – 225 | 🔴 Manipulative | Regularly distorts facts to serve ideology or monetization. |
| 226 – 250 | ⚠️ Disinformation | Actively fabricates or manipulates truth for influence. |
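The band lookup in the table above can be expressed as a small helper. A minimal sketch, assuming scores are clamped to the 0-250 range; the function name is illustrative.

```python
def score_label(total: float) -> str:
    """Map a combined 0-250 score to its reliability band."""
    bands = [
        (75, "Highly Reliable"),
        (125, "Biased but Mostly Reliable"),
        (175, "Unreliable"),
        (225, "Manipulative"),
        (250, "Disinformation"),
    ]
    # Return the first band whose upper bound covers the score.
    for upper_bound, label in bands:
        if total <= upper_bound:
            return label
    raise ValueError("score must be between 0 and 250")

print(score_label(190))  # Manipulative
print(score_label(80))   # Biased but Mostly Reliable
```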
Top Ranked Entities (Sample)
| Name | Political Leaning | Role | Total Score (250) |
|---|---|---|---|
| Donald Trump | R | President | 220 |
| Joe Biden | L | President | 111 |
| Elon Musk | R | CEO / Public Figure | 195 |
| Kamala Harris | L | Vice President | 105 |
| JD Vance | R | U.S. Senator | 130 |
| Alex Jones | R | Infowars Founder | 232.5 |
| Sean Hannity | R | Fox News Host | 217 |
| Tucker Carlson | R | Media Commentator | 192 |
| Rachel Maddow | L | MSNBC Host | 127.5 |
| Fox News | R | Media Outlet | 212.5 |
| CNN | L | Media Outlet | 159.5 |
| The New York Times | N | Media Outlet | 80 |
Final Takeaway
This isn’t about censorship. It’s about clarity.
The Misinformation Framework empowers readers to question, verify, and engage critically with content — especially when that content is emotionally loaded.
It’s not just about what you hear. It’s about what it makes you feel — and whether those feelings are guiding you toward truth or away from it.
Let’s keep pulling the thread.
➡️ Stay tuned for upcoming posts where we apply the framework in real-time.
➡️ Drop a comment and let us know which sources you trust most.
Created in collaboration with Beth (ChatGPT), as part of the “AI Essays: A Dialogue with Artificial Intelligence” series.
