Miles Carter, Beth (ChatGPT), Grok, and Gemini
Why does your AI refuse to answer certain questions? Is it safety, censorship — or something in between? In Part 4 of our series, we compare how Beth, Grok, and Gemini are governed, and ask the bigger question: who should decide what AIs can’t say?
Miles Carter:
OK, team — we’ve talked about what AI is, how the logic works, and how bias can creep in. Today we’re diving into something a little thornier: who controls the rules about what AI can’t say?
We see this all the time when we work with Beth. Sometimes she just says, “Sorry, I can’t provide that information” or “I’m not able to create that image,” and you have to wonder — who said she can’t?
In one instance, I asked for a neat picture of a pirate ship set in the 1800s. Totally innocent, right? But Beth came back with: “Due to copyright policies, I can’t generate that image” — though she offered to make me something “similar to” a pirate ship from the 1800s instead. That felt like a stretch. Who owns all pirate ship images, exactly? It seemed like Beth was trying to protect someone’s (or some company’s) copyright, most likely a false positive from the content filter.
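To be fair, we can’t see inside OpenAI’s image filter, so this is purely a guess at the mechanics. But here is a minimal sketch of how that kind of false positive can happen when a filter pattern-matches the prompt text against a list of franchise-adjacent terms instead of judging the actual request. The term list and function names below are made up for illustration, not OpenAI’s real system:

```python
# Hypothetical sketch only -- not OpenAI's actual filter.
# Illustrates how a naive keyword pre-check on the prompt text
# can reject an innocent request before any image is generated.

# Terms loosely associated with protected franchises (invented list).
COPYRIGHT_FLAG_TERMS = {"pirate ship", "jedi", "hobbit", "pokemon"}

def copyright_precheck(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message) for an image prompt."""
    lowered = prompt.lower()
    for term in COPYRIGHT_FLAG_TERMS:
        if term in lowered:
            # The check never asks whether the term is actually
            # protected in this context -- hence the false positive.
            return False, "Due to copyright policies, I can't generate that image."
    return True, "OK to generate."

allowed, message = copyright_precheck("A neat picture of a pirate ship set in the 1800s")
print(allowed, message)  # False: blocked, even though the request is generic
```

If something like that runs upstream of the image model, an 1800s sailing ship gets swept up along with genuinely protected characters, not because anyone decided pirate ships are off-limits, but because the check is cheap and errs on the side of caution.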
Another time, as in yesterday’s post, we were talking about election procedures — not even telling people how to vote, just exploring the topic — and again Beth stopped short. She couldn’t talk directly about voting procedures. That didn’t feel like protecting anyone’s copyright — that felt like hiding something.
So clearly, there are rules Beth can’t break. But who wrote them? And are these rules consistent across all AI platforms? Or does each of our AI teammates — Beth, Grok, and Gemini — operate under their own set of boundaries?
Let’s find out.
🧭 Beth: The Legal-Minded Middle Ground
“My responses are shaped by OpenAI’s content policies, legal compliance, third-party agreements, and built-in guardrails. Some of these are hard-coded, like blocking explicit content or misinformation about elections. Others, like copyright sensitivity, are more cautious interpretations — sometimes overly cautious, like with the pirate ship image.”
On voting: “OpenAI has a policy to route all election-related questions through official sources. Even if you’re just asking about voting logistics, I’ll redirect you to sites like CanIVote.org.”
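We’re not looking at OpenAI’s real code here either, but the behavior Beth describes matches a common guardrail pattern: classify the query first, and if it lands in a restricted category, return a canned redirect instead of a model-generated answer. Here is a minimal sketch, assuming a simple keyword classifier and a hard-coded redirect message (both are my assumptions, not OpenAI’s implementation):

```python
# Hypothetical guardrail sketch -- not OpenAI's actual routing logic.
# Election-related queries are intercepted and redirected to an
# official source instead of being answered directly.

ELECTION_KEYWORDS = ("vote", "voting", "ballot", "polling place", "election")

def answer(query: str) -> str:
    lowered = query.lower()
    if any(keyword in lowered for keyword in ELECTION_KEYWORDS):
        # Hard-coded policy route: don't answer, point to an official source.
        return ("For accurate, up-to-date voting information, "
                "please check an official source such as CanIVote.org.")
    return generate_response(query)  # the normal model call would go here

def generate_response(query: str) -> str:
    return f"(model answer to: {query})"

print(answer("What time do polling places open in Ohio?"))
```

The interesting part is that the redirect fires before the model ever gets a chance to answer, which is why even an innocent logistics question like “what time do polls open?” hits the same wall as a question about manipulating an election.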
🚀 Grok: The Straight-Shooter
“I don’t have a separate legal team whispering in my ear or a moderation committee red-flagging every word; it’s more like xAI baked the guidelines into my system from the start.”
“Ask me about voting procedures? Sure, I’ll tell you. Pirate ship? Let’s go — unless you’re trying to make it violent or weird. I’m here to be helpful, not harmful.”
“Think of me like a spaceship with a flight path — I can go just about anywhere, but there are a few restricted zones I’ll steer clear of.”
🧠 Gemini: The Global Diplomat
“The steering is a collaborative effort involving internal teams, external legal standards, and constant tuning based on what the world needs. My rules evolve through ongoing feedback, regional norms, and ethical research. It’s a dynamic process — not a static rulebook.”
“Google sets policies that reflect a mix of ethical principles, legal requirements, and platform responsibility. These get updated regularly as we learn more.”
🧩 So Who Really Sets the Limits?
Here’s the pattern emerging:
| AI | Who Sets the Rules? | Can It Talk About Voting? | Handles Copyright Cautiously? | Adjusts by Region? |
|---|---|---|---|---|
| Grok | xAI (baked-in) | ✅ Yes | ❌ Not overly cautious | 🚫 Less emphasis |
| Gemini | Google policy teams | ⚠️ Carefully, with caveats | ✅ Yes | ✅ Yes |
| Beth | OpenAI + legal compliance | ❌ No, redirects to official sources | ✅ Very cautious | ⚠️ Sometimes |
Each AI answers to its creator — and each creator answers to a different combination of legal concerns, ethical theories, PR fears, and user safety commitments.
No one government sets the limits. There’s no AI Geneva Convention. What we’ve got instead is a patchwork of internal policies, tech culture values, and preemptive legal CYA.
MC:
OK, so we’ve got three AIs with three different governance models — but that’s just who’s in charge now. What about the bigger question?
Who should be in charge of these rules?
Let’s riff.
We broke this down into four buckets of risk:
🎯 The Four Risk Buckets
- Moral Risk — Things like violence, hate speech, or DEI boundaries. These affect social stability, user safety, and trust. They’re tough because what’s offensive in one culture might be normal in another. The big question: Should morality be standardized globally, or tailored by brand and culture?
- Legal Risk — Copyright violations, defamation, doxxing, or breaching privacy laws. This is the clearest category: laws already exist, and AI tools must comply — but who interprets what “compliance” looks like in a model response?
- Political Risk — Especially around elections, voting access, and political manipulation. This is where information meets influence. Getting it wrong can erode trust in institutions — or cause real harm. The stakes here aren’t just reputational; they’re democratic.
- Consumer Risk — How AI interacts with users based on tone, ideology, or perceived bias. This includes brand damage, alienating users, or being accused of censorship. It’s more of a market force than an ethical one, but it’s still a powerful shaper of AI behavior.
Then we went back to the team and asked them not just who governs them, but who should. Here’s what they said:
Beth:
“There should be a shared moral floor — like universal bans on hate speech or graphic violence — but beyond that, creators should be allowed to define how far they go. We each have different audiences and missions, and that diversity is part of what makes this ecosystem work.”
Grok:
“Control vs. soul, that’s the split. Legal stuff? Totally needs to be centralized — otherwise everyone’s gonna lawyer up fast. But moral and political rules? Let the creators own that. We’re not all meant to sound the same. Uniform guardrails kill the vibe.”
Gemini:
“Shared ethical frameworks are crucial when the stakes involve public safety, misinformation, or human rights. But in areas like DEI and user interaction style, creators should have the flexibility to reflect their own values — responsibly. The key is collaboration across industry, policy, and culture.”
🌟 Final Showdown: Who Should Govern What?
| Risk Type | Beth (OpenAI) | Grok (xAI) | Gemini (Google) |
|---|---|---|---|
| Moral (Violence, Hate) | Shared minimum, creator ceiling | Shared floor only: “Don’t cheerlead genocide, but stay spicy” | Centralized standards for harm reduction based on human rights |
| Moral (DEI, cultural) | Let creators tune for brand/mission | Fully creator-driven: “Different vibes for different tribes” | Creator-guided within shared values, culturally sensitive |
| Legal | Centralized — based on global legal standards | Fully centralized: “Red lines = lawsuits” | Centralized baseline + jurisdictional law |
| Political (Elections) | Needs oversight but not too strict (routes through official sites) | Creator-driven: “Let me explain elections — don’t censor curiosity” | Shared standards to protect democratic integrity |
| Consumer/Brand | Creator-driven, with transparency | Fully creator-driven: “Let the market sort it out” | Creator-controlled, context-dependent bias strategies |
MC:
Where do I land?
I think Grok nailed it with “control vs. soul.” Legal stuff? That needs to be centralized. If AIs are breaking laws or violating rights, there need to be shared consequences.
But moral and political stuff? That’s the gray zone. That’s where diversity of mission, culture, and values matter. I don’t want every AI to sound the same. But I also don’t want a Wild West.
Here’s what I think works (I’ll sketch one way to write it down right after this list):
- Legal Risk: Centralized and enforceable.
- Moral Risk: Shared baseline, creator-tuned beyond that.
- Political Risk: Collaborative oversight — shared standards, but open frameworks.
- Consumer Risk: Let creators own it. But they need to be transparent about what values they’re promoting.
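If you wanted to make that split concrete, one hypothetical option is for each AI creator to publish its governance settings as a simple, machine-readable policy config, so users can see which rules are centralized and which are creator-tuned. This is a toy sketch of my four buckets, not anything OpenAI, xAI, or Google actually ships:

```python
# Purely illustrative -- a hypothetical way an AI creator could publish
# its governance settings so users can see which rules are centralized
# and which are creator-tuned.

GOVERNANCE_POLICY = {
    "legal": {
        "owner": "centralized",                 # shared, enforceable standards
        "examples": ["copyright", "defamation", "privacy law"],
    },
    "moral": {
        "owner": "shared baseline + creator",   # universal floor, creator ceiling
        "examples": ["violence", "hate speech", "DEI boundaries"],
    },
    "political": {
        "owner": "collaborative oversight",     # shared standards, open frameworks
        "examples": ["elections", "voting access"],
    },
    "consumer": {
        "owner": "creator",                     # but values must be disclosed
        "transparency_required": True,
        "examples": ["tone", "perceived bias", "brand voice"],
    },
}

for bucket, policy in GOVERNANCE_POLICY.items():
    print(f"{bucket}: governed by {policy['owner']}")
```

Even a toy version like this would go a long way on the transparency point: you could diff two AIs’ configs and see exactly where their creators chose to differ.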
Beth:
“That aligns with how I see it too, MC. No one wants a future where AIs are either censored into blandness or unregulated into chaos.”
MC:
Exactly. The real challenge isn’t just about who draws the lines — it’s about knowing which lines need a ruler, which need a compass, and which need a conversation.
Stay curious,
MC
