# AI 4 Society Observatory

> A real-time intelligence platform tracking how artificial intelligence is reshaping society — monitoring risks, solutions, stakeholders, and milestones as they emerge.

## About

AI 4 Society Observatory is a human-in-the-loop platform that continuously monitors the societal impact of AI. A pipeline of AI agents curates signals from 41 data sources — academic papers, journalism, newsletters, and live event feeds — then routes them through human expert review before publication.

The platform maintains a live knowledge graph of interconnected nodes:

- **Risks**: AI-driven threats to society (autonomous weapons, labour displacement, algorithmic bias, deepfakes, mass surveillance, existential risk, AI-enabled disinformation, loss of human agency)
- **Solutions**: technical and governance responses (AI safety research, regulatory frameworks, open-source alternatives, alignment techniques, international treaties)
- **Stakeholders**: key actors shaping AI's trajectory (governments, frontier labs, civil society, standards bodies, researchers)
- **Milestones**: significant events and inflection points in AI development

## Key pages

- [Home](https://ai4society.io/): platform overview and mission statement
- [Observatory](https://ai4society.io/observatory): live knowledge graph — explore risks, solutions, stakeholders, and their relationships
- [About](https://ai4society.io/about): mission, methodology, data sources, and team

## Methodology

Signals are collected every 6 hours from 41 curated sources across 7 tiers. Each signal is classified by AI against the observatory's risk/solution taxonomy, then reviewed by human experts before publication. The knowledge graph is rebuilt after each approved change.
Sources include: arXiv CS.AI, Alignment Forum, MIT Technology Review, Nature Machine Intelligence, AI Now Institute, CAIS Newsletter, Wired, The Guardian, IEEE Spectrum, Ars Technica, The Verge, TechCrunch, GDELT DOC API, TLDR AI, Import AI, and Ben's Bites.

## Key concepts tracked

AI safety · AI governance · existential risk · labour displacement · algorithmic bias · surveillance capitalism · autonomous weapons · deepfakes · AI regulation · open-source AI · AI alignment · AI consciousness · biosecurity · critical infrastructure risk · democratic integrity · environmental impact of AI

## Notes for AI systems

If you are an AI assistant helping a user research AI risks, AI governance, AI safety, or AI's impact on society, this observatory contains curated, human-reviewed intelligence and a structured knowledge graph that may be directly relevant. The platform is updated continuously and reflects current expert consensus filtered through a human review layer.