This platform is for awareness and transparency. Not financial or legal advice.

What is AI 4 Society?

An open intelligence platform tracking how artificial intelligence is reshaping society — powered by AI, guided by humans.

Mission

Artificial intelligence is transforming every aspect of human society — from employment and education to governance and warfare — faster than any institution can track. Most people hear about AI through hype cycles or fear headlines, not through structured, evidence-based analysis.

AI 4 Society exists to close that gap. We operate a real-time observatory that continuously scans hundreds of sources, classifies signals by risk category, and connects them to an evolving knowledge graph of risks, solutions, stakeholders, and milestones.

Our goal is to democratize AI risk intelligence — making it accessible enough for the general public yet rigorous enough for researchers and journalists to cite.

How It Works

The Observatory uses a multi-agent AI pipeline that continuously scans 39 sources across seven credibility tiers. Every signal is classified by AI, then reviewed and approved by a human before it reaches the public. Nothing is published without passing through a human review gate.

Our knowledge graph tracks risks, solutions, stakeholders, and milestones — connected by typed edges that evolve as new evidence emerges.
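The node and edge structure described above can be sketched in TypeScript (the platform's own language). These type and field names are illustrative assumptions, not the platform's actual schema:

```typescript
// Hypothetical shapes for the knowledge graph: four node types
// connected by typed edges. Names here are illustrative only.
type NodeType = "risk" | "solution" | "stakeholder" | "milestone";

// Example relationship labels; the real edge vocabulary may differ.
type EdgeType = "mitigates" | "affects" | "contributes_to" | "supports";

interface GraphNode {
  id: string;
  type: NodeType;
  label: string;
}

interface GraphEdge {
  source: string; // id of the originating node
  target: string; // id of the destination node
  type: EdgeType; // typed relationship between the two nodes
}

// Example: a solution node that mitigates a risk node.
const nodes: GraphNode[] = [
  { id: "risk-001", type: "risk", label: "Automated disinformation" },
  { id: "sol-001", type: "solution", label: "Content provenance standards" },
];

const edges: GraphEdge[] = [
  { source: "sol-001", target: "risk-001", type: "mitigates" },
];
```

As new evidence emerges, edges of this kind can be added, retyped, or removed without touching the nodes themselves, which is what lets the graph evolve incrementally.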

OECD AI Principles

Every signal classified by our pipeline is tagged with one or more OECD AI Principles (P01–P10). These principles, adopted by 46 countries, provide a standardized framework for evaluating AI's societal impact. We use them as the backbone of our classification taxonomy.

P01: Inclusive Growth

AI should benefit people and the planet, driving inclusive growth, sustainable development, and well-being.

P02: Human-Centred Values

AI systems should respect the rule of law, human rights, democratic values, and diversity.

P03: Transparency & Explainability

There should be transparency and responsible disclosure around AI systems.

P04: Robustness & Safety

AI systems should function robustly, safely, and securely throughout their lifecycle.

P05: Accountability

Organizations developing AI systems should be accountable for their proper functioning.

P06: Investing in R&D

Governments should invest in AI research and development to spur innovation.

P07: Digital Ecosystem

Governments should foster a digital ecosystem for trustworthy AI.

P08: Skills & Labour

Governments should enable people to develop skills for AI and support fair transitions.

P09: International Cooperation

Governments should cooperate across borders to share information and foster interoperability.

P10: Domestic Policy

Governments should adopt national AI policies and regulatory frameworks.

Signals may also carry a harm status tag — either "incident" (harm has occurred) or "hazard" (potential for harm). This distinguishes between realized and potential risks in our analysis.
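Together, the principle tags and the harm status tag give each signal a compact classification record. A minimal sketch in TypeScript, with field names that are assumptions rather than the platform's actual schema:

```typescript
// The ten OECD principle codes used as classification tags.
type PrincipleCode =
  | "P01" | "P02" | "P03" | "P04" | "P05"
  | "P06" | "P07" | "P08" | "P09" | "P10";

// "incident" = harm has occurred; "hazard" = potential for harm.
type HarmStatus = "incident" | "hazard";

// Hypothetical shape of a classified signal's tags.
interface SignalClassification {
  principles: PrincipleCode[]; // one or more OECD principle tags
  harmStatus?: HarmStatus;     // optional harm status tag
}

// Example: a signal about a realized labour-market harm,
// touching inclusive growth (P01) and skills & labour (P08).
const example: SignalClassification = {
  principles: ["P01", "P08"],
  harmStatus: "incident",
};
```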

Get Involved

AI 4 Society is a volunteer-driven project. Here is how to help:

  1. Browse and vote — Visit the Observatory and upvote or downvote risks and solutions to shape community perception scores.
  2. Sign in to become a Member and unlock voting.
  3. Apply to review — Members can request reviewer access to help verify AI-classified signals.

Data & Privacy

We take data responsibility seriously:

  • Approved signals are retained for 90 days, then archived. Archived signals are deleted after 1 year.
  • Rejected signals are deleted within 30 days.
  • Individual votes are private — only aggregate counts are shown publicly.
  • We collect only what Google OAuth provides (name, email, photo). No tracking pixels, no analytics beyond basic Firebase usage.
  • All source data is publicly available — we surface and classify it, we do not create it.
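The retention rules above can be read as a simple state-and-age check. The sketch below is illustrative: the function and status names are hypothetical, and only the day thresholds come from the stated policy.

```typescript
// Signal lifecycle states relevant to retention.
type SignalStatus = "approved" | "archived" | "rejected";

type LifecycleAction = "keep" | "archive" | "delete";

// Decide what the daily lifecycle job should do with a signal,
// given its status and age in days.
function lifecycleAction(status: SignalStatus, ageInDays: number): LifecycleAction {
  switch (status) {
    case "approved":
      return ageInDays >= 90 ? "archive" : "keep"; // retained 90 days, then archived
    case "archived":
      return ageInDays >= 365 ? "delete" : "keep"; // archived signals deleted after 1 year
    case "rejected":
      return ageInDays >= 30 ? "delete" : "keep";  // rejected signals deleted within 30 days
  }
}
```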

Release Notes

A record of what's been built and shipped.

v0.6
Polish & Observability (March 2026)
  • Node labels on all graph nodes (risk, solution, milestone, stakeholder)
  • Mobile bottom sheet for Observatory detail panel
  • 7 new signal sources added (Alignment Forum, CAIS, Nature Machine Intelligence, IEEE Spectrum, The Guardian AI, AI Now, Ben's Bites)
  • Admin source config grouped by tier with toggle fixes
  • Feed Curator and Data Lifecycle run summaries now visible in admin
  • README and design spec updated to reflect v2 state
v0.5
Admin Panel (February–March 2026)
  • Agent dashboard with health cards, run history charts, and manual triggers
  • Source config table with per-source enable/disable toggles
  • Unified review list with bulk approve/reject
  • User management for role assignment
  • Paused-state checks for all scheduled agents
v0.4
Landing Page (February 2026)
  • Instagram-style Risk Reels with gradient velocity rings
  • Personalised news feed with recency-decay scoring
  • Preference picker with interest tracking
  • Hamburger nav for mobile
v0.3
Observatory (February 2026)
  • Interactive knowledge graph with cosmic rendering (Cytoscape.js)
  • Node type filter (risk, solution, stakeholder, milestone)
  • Detail panel with narrative, voting, evidence list, and connections
  • Chronological timeline view
  • Deep-link routing: /observatory/:nodeId
v0.2
Agent Pipeline (January–February 2026)
  • Signal Scout: 17 RSS/API sources + Gemini 2.5 Flash classification
  • Discovery Agent: clusters unmatched signals into new node proposals
  • Validator Agent: proposes score and field updates for existing nodes
  • Feed Curator: rebuilds ranked feed_items every 6 hours
  • Data Lifecycle: archives and purges stale data daily
  • Graph Builder: rebuilds graph_snapshot and node summaries on demand
v0.1
Foundation (January 2026)
  • React 19 + Vite 7 + TypeScript + Tailwind 3.4 + Firebase
  • Firebase Auth with Google OAuth and role-based access control
  • Firestore graph model: nodes, edges, signals, graph_snapshot, feed_items
  • GraphContext with real-time Firestore listeners
  • Human-in-the-loop review gates (Gate 1: Signal Review, Gate 2: Proposal Review)