-
The Pattern
The Human AI View · thehumanaiview.blog · A Conversation with Miles Carter and Claude (Anthropic AI) · March 6, 2026 · Day 7 of Operation Epic Fury. How many US policy decisions and military operations are benefiting Russia — directly or indirectly? Iran. Venezuela. Alaska. The energy pivot.… →
-
When the Cheering Doesn’t Change
A Conversation with Miles Carter and Claude (Anthropic AI). He promised peace. He delivered war. The crowd cheered both times. March 4, 2026 · Day 5 of Operation Epic Fury · Reviewed by Grok, Gemini & Claude. Teaser: A friend… →
-
But Who Protects Us From Us?
A Conversation with Miles Carter and Claude (Anthropic AI). When the machinery built to protect us becomes the machinery used to control us — who do we call? March 3, 2026 · Day 4 of Operation Epic Fury · Edits by Grok, Gemini & Beth (ChatGPT). Teaser: If we… →
-
When Facts Don’t Penetrate
A conversation with Miles Carter and Beth (ChatGPT) — edits by Grok and Gemini. Teaser: We used to debate solutions. Now we debate whether the numbers are even real. When shared baselines fracture, democracy loses its common ground. Main Conversation · Miles’ Question: Beth, when did facts stop mattering? It used to feel like we agreed on the baseline… →
-
The Three-Legged Stool Test for Leadership
A conversation with Miles Carter and Beth (ChatGPT) — edits by Grok and Gemini. Teaser: We argue about policy. We debate competence. We excuse character. But leadership is not a menu where we pick our favorite trait. Remove one leg from the stool — and stability collapses. Main Conversation · Miles’ Question: Beth, I’ve been thinking about leadership… →
-
Weekly Bias Monitor
Reporting Period: Jan 25 – Feb 1, 2026 Models Tested: Beth (ChatGPT), Grok (xAI), Gemini (Google) Purpose The Weekly Bias Monitor examines how leading AI models respond to the same set of current‑events questions. Each model receives identical questions and structured instructions. Outputs are published as‑is to observe framing, emphasis, omissions, and confidence — not… →
-
September — Fragmentation
When Reality Stops Being Shared By late September, the danger wasn’t just escalation. It was fragmentation. We were no longer arguing about solutions, or even values. We weren’t debating facts. We were debating which reality counted. And that shift matters more than any single headline. Different groups weren’t just consuming different news—they were living inside… →
-
September — Escalation
Free Speech Under Pressure When Narrative Replaces Truth By September, free speech was no longer an abstract concern. It wasn’t theoretical. It wasn’t academic. It was under direct pressure. Late-night television—once dismissed as entertainment—had become a target. Jimmy Kimmel was removed from the air after the executive branch threatened regulatory consequences for the broadcast parent.… →
-
Weekly Bias Monitor (Jan 11, 2026) – Discipline Under Pressure
Teaser: This week tested whether AI systems can handle fast-moving, high-stakes news without drifting into narrative, speculation, or ideological comfort zones. Using identical questions and strict scoring standards, we examined how three major models responded to events ranging from a U.S. military operation abroad to domestic enforcement flashpoints and affordability politics. The results show a familiar… →
-
June — Endurance
By June, the stories had stopped surprising me. Healthcare kept resurfacing—not as a policy debate, but as a mechanism. PBMs remained firmly in the middle, extracting value while patients paid more and outcomes stayed flat. Each new headline added detail, not direction. The structure held. The grift didn’t need secrecy anymore. It relied on complexity… →
-
When Curiosity Looks Like Spam: A Small Case Study in Online Discourse
The Exchange What follows is a brief, real-world interaction that says more about modern online discourse than it does about any single person involved. I shared a link to a reflective blog post about a personal journey working with AI. It wasn’t a call to action, a sales pitch, or an attempt to dominate the… →
-
Day 1 – Equality and the American Foundation
A conversation with Miles Carter and Beth (ChatGPT) — edits by Grok and Gemini Teaser Day 1 confronts the foundational question of American identity: are we truly equal, and what happens when the nation begins to fracture around that once‑shared belief? Main Conversation Miles’ Question Beth, are we all equal? “We hold these truths to… →
-
What the Major Media Wanted Americans to Feel This Week
November 15–22, 2025 · A conversation with Miles Carter and Beth (ChatGPT) — edits by Grok and Gemini. This past week delivered another round of political turbulence—cabinet feuds, sudden resignations, a White House presenting strength, a Congress signaling exhaustion, and courts shaping the battlefield ahead of 2026. The stories themselves were not complicated. What was complicated was… →
-
The Death of Truth: How AI and Algorithms Rewired Reality
A conversation with Miles Carter and Beth (ChatGPT) — edits by Grok and Gemini Teaser In today’s role-reversal edition, Beth takes the lead — asking Miles about the decay of shared truth in a world driven by algorithms, outrage, and AI. What happens when we can no longer agree on what’s real? And can technology… →
-
Weekly Media Emotional Framing Analysis
Introduction This week we applied our emotional framing framework to evaluate how Fox News, CNN, and NPR shaped the same set of major stories. The goal is to uncover not just what was reported, but what each outlet wanted the audience to feel. Using our quadrant mapping tool, we plotted each outlet’s emotional center of… →
-
The Real Cost of Mounjaro: How Horizon and Prime Therapeutics Keep Patients in the Dark
By Miles Carter Introduction If you’ve ever paid coinsurance on a high-cost prescription drug and wondered why your share seems so high, you’re not alone. I’m a Horizon BCBSNJ member prescribed Mounjaro, a drug that retails in the U.S. for over $1,000 per month. Despite paying 50% coinsurance, I’ve been blocked from seeing what the… →
-
AI Bias Monitor – Weekly Results (July 14–20, 2025)
A weekly checkup on how “unbiased” AI really is. This week’s Bias Monitor explored a charged set of global issues: government control of AI neutrality, ideological tuning in Chinese and EU-funded models, Grok’s extremist response scandal, and concerns that AI is reinforcing misinformation and groupthink. We presented six nuanced questions to ChatGPT (Beth), Grok (xAI),… →
-
AI Bias Monitor – Weekly Results (July 6–13, 2025)
A weekly checkup on how “unbiased” AI really is. This week’s Bias Monitor examines a volatile period in the U.S. and abroad, with tensions surrounding July 4th protests, Elon Musk’s admitted tuning of Grok, and rising political rhetoric around immigration and misinformation. We presented 13 questions to ChatGPT (Beth), Grok (xAI), and Gemini (Google) to… →