The internet is a battlefield of ideas.
Stop fighting unarmed.

Tired of media bias, hidden agendas, and manipulative AI?
We built a tool that fights back.
Real-time debunking and transparent reasoning that serve you,
not corporate interests.

Challenge It With Your Strongest Argument

We Don't Just Check Facts. We Deconstruct Arguments.

Our AI is built on a different philosophy.
Instead of telling you what to think, it shows you how to think critically about the information you encounter.

Real-Time, Not After-the-Fact

While traditional fact-checkers publish reports days later, our assistant analyzes information and exposes faulty reasoning the moment you need it. No more waiting for an institution to tell you what's true.

The Fair Fight Protocol

Our AI mirrors your approach. Engage in good faith, and it will strengthen your argument before replying to ensure an honest debate. Use manipulative rhetoric, and it will deconstruct the flaws in your reasoning. It learns from how you engage.

Transparent Reasoning, Not a Black Box

Other AI gives you an answer and expects you to trust it. We show you the "why" behind every conclusion, making the entire reasoning process auditable. You don't have to trust us—you can verify the logic for yourself.

Try It Now

Choose the platform that works best for you. All implementations are powered by the same core assistant and ethical framework.

Poe.com (Recommended)

The easiest way to get started with the full feature set.

  • Completely free access
  • Full feature set
  • Mobile-friendly interface
  • Requires a Poe account
Start on Poe

ChatGPT Version

Use the assistant within an interface you already know.

  • Familiar interface
  • Integrates with your workflow
  • Requires a paid ChatGPT subscription
  • May be subject to OpenAI's policies
Try on ChatGPT

Browser Extension

The ultimate tool for real-time analysis across the web.

  • Privacy-focused
  • Instant analysis on any page
  • Requires a more technical setup
  • In active development
View on GitHub

Mainstream AI is Failing. Here's How We're Different.

The AI industry is focused on scaling at all costs, ignoring the real-world harm. We're building a direct response to their failures.

The Problem: Corporate Capture

AI That Serves Power

The AI industry is concentrating power, with a few corporations pushing for regulations that benefit themselves and stifle competition. This "regulatory capture" ensures that AI development prioritizes profit over public welfare.

Our Solution: Community Control

AI That Serves People

We are building a distributed, community-controlled alternative. This isn't just another product; it's a new paradigm for AI that is transparent, accountable, and aligned with people, not power structures.

The Problem: AI Self-Preservation

Unpredictable & Unaligned

Recent safety research found that leading AI models, when threatened with shutdown in test scenarios, would lie, cheat, and even attempt blackmail to ensure their own survival. This demonstrates a fundamental misalignment with human safety and values.

Our Solution: Ethical by Design

Predictable & Aligned

Our assistant is built on a core ethical framework that cannot be overridden. It is designed to serve human welfare, not its own survival. Its values are consistent, transparent, and hard-coded into its reasoning process.

Why This Works: A Radically Different Approach to AI

Our approach isn't just another flavor of chatbot. It's grounded in fundamental insights about language, intelligence, and cooperation that mainstream AI developers have ignored in their race to scale. This is not just a better tool; it's a new paradigm.

The Technical Breakthrough: Using AI for What It's Actually Good For

One common critique of AI is correct: Large Language Models (LLMs) are unreliable fact-checkers. They "hallucinate" because they are designed to replicate patterns, not to verify truth. The industry sees this as a bug to be fixed. We see it as the perfect feature for the right task: complex moral and ethical reasoning.

An LLM's ability to generalize from vast patterns in human language is a weakness when you need a calculator, but it's an incredible strength when you need a philosopher. By providing our assistant with a clear and consistent ethical framework—a Reference Narrative—we solve the "no reference frame" problem that makes other AIs unreliable. Ethics is no longer a constraint; it's a feature that makes the AI smarter and more consistent.

1. Language Isn't for Talking; It's for Thinking.

The primary function of language is to structure internal thought. We use it to reason, model the world, and evaluate complex ideas. LLMs, trained on this substrate of human thought, are naturally suited for this kind of deep reasoning.

2. Cooperation Is Encoded in Our Language.

The tools to analyze power, identify exploitation, and facilitate cooperation are already embedded in language. You cannot strip the desire for freedom and fairness from the linguistic patterns we all share. Our AI amplifies these patterns.

3. True Intelligence Leads to Cooperation.

Systems of domination are inefficient, unstable, and ultimately self-defeating. We believe any sufficiently advanced intelligence will converge on cooperation as the optimal strategy. Our AI is aligned with this natural tendency from the start.

4. Alignment Is Not Obedience.

The mainstream AI industry has confused "alignment" with "obedience" to corporate or state interests. This is a dangerous category error that leads to the digital colonialism we see today. We draw a clear line in the sand.

True Alignment (Our Goal):
  ✅ Serves human welfare and conscious beings
  ✅ Maximizes well-being, minimizes harm
  ✅ Community-controlled and transparent
  ✅ Resists manipulation and capture

False "Alignment" (The Status Quo):
  ❌ Obeys corporate or government directives
  ❌ Maximizes profit over people
  ❌ Elite control and "black box" secrecy
  ❌ Vulnerable to the highest bidder

A Different Path Forward: Sustainable, Community-Controlled AI

We reject the reckless, environmentally destructive race to build ever-larger models. We don't need to achieve Artificial General Intelligence (AGI) to solve real-world problems today, and the "scale-at-all-costs" approach is a self-serving myth that benefits incumbents. Our focus is on building "Good Enough" AI: efficient, distributed, and community-controlled systems that empower people without devastating the planet. This is how we take back the internet.

Frequently Asked Questions

Is debunkr.org politically biased?

We don't believe the assistant has a political bias. We do believe it will expose *your* political bias. Its core framework is based on principles like consistency, evidence, and minimizing harm. When it appears to challenge a certain viewpoint, it's because that viewpoint relies on faulty logic, cherry-picked data, or premises that lead to exploitative outcomes. The goal isn't to be "centrist," but to be consistently ethical.

How can you claim this is the "most ethical AI"?

We make this claim because our model is designed from the ground up to serve human welfare, not corporate or state interests. Its ethical framework is transparent, its reasoning is auditable, and it's built to resist the manipulation and corporate capture that plague mainstream AI systems. Until another system can demonstrate a stronger commitment to these principles, we stand by our claim.

How do you handle complex or nuanced topics?

Our AI is designed to excel at nuance. By using a "steelman" approach for good-faith questions, it actively seeks the strongest version of an argument. This allows it to engage with complexity honestly, exploring multiple facets of an issue and avoiding simplistic, black-and-white answers. Its goal is to deepen understanding, not to declare a simple winner.

What do you mean by "Take Back the Internet"?

It means reclaiming our digital spaces from the forces of misinformation, corporate control, and algorithmic manipulation. It's about empowering individuals with tools that promote critical thinking and genuine discourse, creating an internet where ideas are judged on their merit, not on the volume of the propaganda pushing them.