Governance, not secrecy

Moderation Policy

Last updated: January 2026
Scope: Global
Applies to: Users, brands, moderators
⚠️ Demo Preview: This policy describes our vision for the platform we're building. Data collected via Google Forms.

BrandedFacts will be designed so that moderation, corrections, and brand participation are visible, accountable, and evidence-based, not hidden or influenced by payment.

Moderation Principles

BrandedFacts is a community-driven fact validation platform. Facts are evaluated on evidence, community consensus, and policy compliance - not on commercial pressure or reputational concerns.

Moderation decisions follow the principles of:

  • Transparency
  • Evidence-based review
  • Proportional response
  • Public accountability

Fact Status Outcomes

Facts on BrandedFacts may fall into one of the following states:

  • Community Verified - validated by community consensus
  • Community Rejected - rejected by community consensus
  • Under Evidence Review - temporarily paused for investigation
  • Archived (Outdated) - previously accurate but no longer current
  • Removed (Policy) - removed due to policy or legal violations (no longer publicly visible)
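
For illustration only, the lifecycle above could be modeled as a small set of states. This is a minimal sketch with hypothetical names, not the platform's actual schema:

```typescript
// Hypothetical sketch of the fact lifecycle states described above.
type FactStatus =
  | "community_verified"    // validated by community consensus
  | "community_rejected"    // rejected by consensus (still visible)
  | "under_evidence_review" // temporarily paused for investigation
  | "archived_outdated"     // previously accurate but no longer current
  | "removed_policy";       // policy/legal removal; hidden from public view

// Only the removed state hides a fact from public view.
function isPubliclyVisible(status: FactStatus): boolean {
  return status !== "removed_policy";
}
```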

When a Fact May Be Removed

A fact may be removed (not merely rejected) only under the following circumstances:

Policy Violations

  • Contains hate speech, harassment, or targeted abuse
  • Promotes violence or illegal activity
  • Discloses personal or confidential information
  • Infringes copyright or intellectual property
  • Is demonstrably false and defamatory (a false statement, presented as fact, that harms reputation)

Legal Risk

  • Is subject to a credible legal complaint, such as a defamation claim
  • Is accompanied by a formal legal notice (e.g., cease-and-desist, takedown request)
  • Poses material legal risk to the platform if left published

In such cases, the fact may be:

  • Temporarily removed pending review
  • Escalated to Legal Review
  • Permanently removed if confirmed to violate law or policy

What Does NOT Justify Removal

The following do not justify removal on their own:

  • A brand disputes the fact without evidence
  • A fact damages reputation but is evidence-based
  • A fact is Community Rejected
  • Commercial pressure or paid requests
  • Threats without legal substance

Community Rejection is not deletion. Rejected facts may remain visible for transparency unless policy or legal thresholds are met.

Brand-Initiated Requests

Brands may:

  • Request an Evidence Review
  • Submit documentation for Correction or Clarification
  • Publish an Official Brand Statement

Brands may not:

  • Pay to delete facts
  • Override community outcomes
  • Suppress audit history

Audit & Transparency

All major moderation actions - including removals, legal reviews, and corrections - are logged in the public audit history, except where prohibited by law.

Why transparency matters

Platforms lose trust when decisions are invisible. BrandedFacts takes the opposite approach: significant actions are logged, labeled, and open to scrutiny.

Transparency protects users, brands, moderators, and the platform itself.

How moderation works

  • Facts can be flagged by users or brands with reasons and evidence
  • Moderators review flags, evidence, and community signals
  • Actions follow defined policy, not popularity or payment
  • Outcomes include uphold, correction, archival, or escalation
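
A minimal sketch of that flow, assuming hypothetical names throughout (Flag, reviewFlag, and the reason values are illustrative, not a real API):

```typescript
// Hypothetical sketch of the flag-review flow described above.
type FlagReason =
  | "policy_violation"
  | "legal_risk"
  | "accuracy_dispute"
  | "outdated";

type ModerationOutcome = "uphold" | "correction" | "archival" | "escalation";

interface Flag {
  factId: string;
  reason: FlagReason;
  evidenceUrls: string[];        // supporting documentation
  submittedBy: "user" | "brand";
}

// Policy-based review: the outcome depends on the reason and the evidence,
// never on who flagged the fact or whether payment was involved.
function reviewFlag(flag: Flag): ModerationOutcome {
  // Legal risk and policy violations escalate for formal review.
  if (flag.reason === "legal_risk" || flag.reason === "policy_violation") {
    return "escalation";
  }
  // A fact shown to be out of date is archived, not erased.
  if (flag.reason === "outdated" && flag.evidenceUrls.length > 0) {
    return "archival";
  }
  // A documented accuracy dispute can lead to a correction.
  if (flag.reason === "accuracy_dispute" && flag.evidenceUrls.length > 0) {
    return "correction";
  }
  // A dispute without evidence does not change the fact.
  return "uphold";
}
```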

What gets logged in the audit history

  • Fact creation and edits
  • Community voting milestones
  • Reports and moderation decisions
  • Brand claim approvals
  • Paid evidence review requests
  • Status changes (verified, corrected, archived, rejected)
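
As an illustration, one possible shape for a public audit entry covering these events. All field names here are hypothetical, not the platform's actual schema:

```typescript
// Hypothetical shape of a public audit-history entry.
interface AuditEntry {
  factId: string;
  timestamp: string; // ISO 8601, e.g. "2026-01-12T00:00:00Z"
  actor: "user" | "brand" | "moderator" | "system";
  action:
    | "fact_created"
    | "fact_edited"
    | "voting_milestone"
    | "report_filed"
    | "moderation_decision"
    | "brand_claim_approved"
    | "paid_review_requested"
    | "status_changed";
  detail: string; // human-readable summary shown on the fact page
}
```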

What cannot be hidden or erased

  • That a fact existed
  • That a correction or review occurred
  • That a brand submitted evidence or a statement
  • That moderators took an action

Important principle

Payment may fund review work. It never guarantees outcomes or removes history.

Example audit timeline (illustrative)

This is an example of how activity may appear on a fact page.

  • Jan 12, 2026 - Fact submitted with evidence (user)
  • Jan 14, 2026 - Community voting opened
  • Jan 20, 2026 - Brand submitted official correction request
  • Jan 22, 2026 - Moderator applied correction and updated fact status
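
Expressed as data, the same timeline could look like the sketch below. This is illustrative only; the entries match the hypothetical AuditEntry shape sketched above:

```typescript
// Illustrative timeline data; all values are mock examples.
const exampleTimeline = [
  { factId: "fact-123", timestamp: "2026-01-12T00:00:00Z", actor: "user",
    action: "fact_created", detail: "Fact submitted with evidence" },
  { factId: "fact-123", timestamp: "2026-01-14T00:00:00Z", actor: "system",
    action: "voting_milestone", detail: "Community voting opened" },
  { factId: "fact-123", timestamp: "2026-01-20T00:00:00Z", actor: "brand",
    action: "report_filed", detail: "Official correction request submitted" },
  { factId: "fact-123", timestamp: "2026-01-22T00:00:00Z", actor: "moderator",
    action: "moderation_decision", detail: "Correction applied; status updated" },
];
```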

Who can do what

  • Users: submit facts, vote, report issues
  • Brands: submit evidence, corrections, official statements
  • Moderators: apply policy-based decisions
  • BrandedFacts: maintain infrastructure and enforcement
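
A hypothetical role-to-capability map mirroring this division of responsibilities (names are illustrative, not the platform's actual permission model):

```typescript
// Hypothetical mapping of roles to the actions listed above.
const capabilities = {
  user: ["submit_fact", "vote", "report_issue"],
  brand: ["submit_evidence", "request_correction", "publish_official_statement"],
  moderator: ["apply_policy_decision"],
  platform: ["maintain_infrastructure", "enforce_policy"],
} as const;
```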