AI's 3 Layers to Political Superintelligence
AI can enable political superintelligence through information access, automated delegates, and governance rules, but turning that into a societal benefit requires deliberate UX, oversight, and regulation.
Layered Framework Unlocks Political Superintelligence
AI democratizes intelligence the way the printing press democratized information, enabling 'political superintelligence': tools for perceiving reality more sharply, understanding tradeoffs, contesting power, and acting effectively. Stanford's Andy Hall outlines three layers to build this without slowing AI:
- Information layer: AI enhances government data access, problem identification, citizen input, and service distribution. Success demands evaluations for policy-relevant behaviors and policymaker-specific tools.
- Representation layer: AI delegates monitor politics, suggest votes, or act as supervised policymakers. Challenges include reliable agency, resistance to adversarial prompting (e.g., politician-funded persuasion campaigns), and ownership (e.g., AI firms' biases overriding users' preferences).
- Governance layer: Privately owned AI needs 'constitutions' for models, plus oversight to ensure the public can harness it. Interfaces matter: invest in technical oversight tools, deliberative feedback, transparency regimes, and standard APIs for data access and steering.
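Hall does not spell out what a standard data/steering API would look like; the sketch below is purely illustrative (every name is hypothetical), showing an interface where users, not the model vendor, set a delegate's priorities and where overseers can audit its decisions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SteeringPreference:
    user_id: str
    topic: str        # e.g. "zoning" or "budget"
    directive: str    # plain-language instruction the delegate should respect

@dataclass
class DelegateAPI:
    """Hypothetical interface an AI political delegate might expose."""
    preferences: Dict[str, List[SteeringPreference]] = field(default_factory=dict)
    decision_log: List[dict] = field(default_factory=list)

    def submit_preference(self, pref: SteeringPreference) -> None:
        # Steering endpoint: the user, not the AI vendor, sets priorities.
        self.preferences.setdefault(pref.user_id, []).append(pref)

    def export_decisions(self) -> List[dict]:
        # Data endpoint: transparency regimes can audit every recommendation.
        return list(self.decision_log)

api = DelegateAPI()
api.submit_preference(SteeringPreference("voter-42", "housing", "favor denser zoning"))
print(api.export_decisions())  # [] so far, but fully auditable
```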
The default path yields AIs that are powerful at political thinking; intentional UX/UI, empirically tested interfaces, and regulation are what turn them into societal wins. A companion view from Google scales intelligence via 'societies of minds': hybrid human-AI ecosystems mirroring historical intelligence explosions (primate groups, language ratchets, bureaucracies). Future governance must verify vast AI swarms against values like transparency and equity; alignment may succeed for individual models, but collective behavior demands institutional templates (digital courtrooms, markets).
Hyperagents Self-Improve via Editable Loops
Give an LLM (e.g., Claude Sonnet 4.5) a bash tool, a file editor, a task agent, and a meta-agent inside an editable program: hyperagents (Darwin Gödel Machines) recursively refine their prompts, behaviors, and even their self-improvement mechanisms across generations, with top performers spawning the next generation.
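A minimal sketch of that inner self-editing loop, assuming a hypothetical `call_llm` wrapper around the model; the tool names, file layout, and scoring stub are illustrative, not the repo's actual code.

```python
import pathlib
import subprocess

AGENT_DIR = pathlib.Path("agent_program")  # the editable program, including its own prompt

def bash_tool(cmd: str) -> str:
    """The agent's bash tool: run a shell command and return stdout."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def file_editor(rel_path: str, new_text: str) -> None:
    """The agent's file-edit tool: overwrite a file inside the editable program."""
    (AGENT_DIR / rel_path).write_text(new_text)

def call_llm(prompt: str) -> str:
    return "..."  # stand-in for the real model API call

def evaluate(solution: str) -> float:
    return 0.0    # stand-in for the fixed outer evaluation on a benchmark task

def self_improvement_step(task: str) -> float:
    prompt = (AGENT_DIR / "prompt.txt").read_text()
    # Task agent: attempt the task with the current prompt and tools.
    solution = call_llm(f"{prompt}\n\nTask: {task}\nTools: bash_tool, file_editor")
    score = evaluate(solution)
    # Meta-agent: given the outcome, edit the program itself -- prompt.txt,
    # helper code, even the heuristics that guide future self-edits.
    proposed_prompt = call_llm(f"Score was {score}. Rewrite prompt.txt to score higher.")
    file_editor("prompt.txt", proposed_prompt)
    return score
```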
Tested on four domains:
- Polyglot coding: Edit repositories per natural-language instructions; across 5 runs, the training score rises from 0.140 to 0.340 (CI: 0.300–0.380).
- Paper review: Predict accept/reject decisions from AI papers; the test score jumps from 0.0 to 0.710 (CI: 0.590–0.750).
- Robotics rewards: Generate RL reward functions for quadruped tasks (see the sketch after this list); the score improves from 0.060 to 0.372 (CI: 0.355–0.436), beating direct optimization of the target metric (0.348).
- Math grading: Olympiad-level problems; gains are unspecified but outperformance is consistent.
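As a purely hypothetical illustration of what the robotics-rewards domain asks the agent to emit, a generated reward function might look like this (the velocity-tracking task and observation keys are assumptions, not taken from the paper).

```python
import numpy as np

def reward(obs: dict) -> float:
    """Hypothetical agent-written reward for a quadruped locomotion task."""
    target_vel = 1.0                                                # m/s, assumed target
    vel_term = np.exp(-abs(obs["forward_velocity"] - target_vel))   # track forward speed
    upright_term = float(obs["torso_height"] > 0.25)                # stay off the ground
    energy_penalty = 0.01 * float(np.sum(np.square(obs["joint_torques"])))
    return float(vel_term) + 0.5 * upright_term - energy_penalty

obs = {"forward_velocity": 0.8, "torso_height": 0.3, "joint_torques": np.zeros(12)}
print(reward(obs))  # ~1.32 for this sample observation
```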
Combined with finetuning, this carries singularity-scale risks and benefits; current limits include a fixed outer selection/evaluation loop, and delegation demands carefully balanced trust. Code: github.com/facebookresearch/Hyperagents.
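For intuition about that fixed outer loop (the one part not itself self-improved), here is an illustrative selection/evaluation skeleton; `evaluate_agent` and `spawn_child` are stand-ins, not functions from the repo.

```python
import random

def evaluate_agent(agent) -> float:
    return random.random()  # placeholder for the domain benchmark score

def spawn_child(agent):
    return agent            # placeholder: a real child carries the parent's self-edits

def evolve(population: list, generations: int = 5, keep: int = 3) -> list:
    for _ in range(generations):
        ranked = sorted(population, key=evaluate_agent, reverse=True)
        survivors = ranked[:keep]                                  # fixed selection rule
        children = [spawn_child(parent) for parent in survivors]   # top performers spawn
        population = survivors + children
    return population

print(len(evolve(["agent-A", "agent-B", "agent-C", "agent-D"])))   # 6 variants remain
```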
Robotics and Math Expose AI Frontiers
DexDrummer's hierarchical RL (high-level trajectories, low-level hand control with a thumb-index grasp, arm penalties, and a contact curriculum) trains bimanual Franka/Tesollo robots on full drum kits in simulation, then in the real world. Hits land, but awkwardly; the videos make clear the system is years away from human drummers, and dynamic environments still demand artisanal policies, far from LLM-style generality.
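A structural sketch of that hierarchy under assumed names and a much-simplified reward (the real DexDrummer terms, including the contact curriculum, are more elaborate).

```python
import numpy as np

def high_level(score_events: list) -> np.ndarray:
    """Map drum-score events to a target stick-tip trajectory of shape (T, 3)."""
    return np.array([[e["x"], e["y"], e["z"]] for e in score_events])

def low_level_reward(state: dict, target_tip: np.ndarray) -> float:
    """Reward for the low-level hand/arm controller tracking the current waypoint."""
    tracking = -np.linalg.norm(state["stick_tip"] - target_tip)            # follow the trajectory
    grasp = -np.linalg.norm(state["thumb_pos"] - state["index_pos"])       # keep thumb-index grasp
    arm_penalty = -0.1 * float(np.sum(np.square(state["arm_joint_vel"])))  # penalize wild arm motion
    return float(tracking + 0.5 * grasp + arm_penalty)
```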
HorizonMath's 100 unsolved applied/computational math problems (8 domains, levels 0–3 graded by solvability and output type) resist contamination (no solutions exist in training data) and are scored by automated verification (numeric and constraint checks). Top scores: GPT 5.4 Pro at 7% overall and 50% on level 0; Opus 4.6 and Gemini 3.1 Pro at 3% and 30%. Planned expansions cover proofs and Lean integration, tracking whether models cross the creativity rubicon.
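For a sense of what automated verification can mean when no reference solution exists, here is an illustrative constraint check on a toy stand-in problem; the equation, interval, and tolerance are assumptions, not an actual HorizonMath item.

```python
import math

def f(x: float) -> float:
    return x ** 3 - x - 1   # toy stand-in for a problem's defining equation

def verify(candidate: float, tol: float = 1e-8) -> bool:
    """Pass if the submission satisfies the problem's constraints numerically,
    so no reference answer is needed even for an unsolved problem."""
    in_range = 1.0 <= candidate <= 2.0
    return in_range and abs(f(candidate)) < tol

print(verify(1.3247179572447460))  # True: this value solves x^3 = x + 1 on [1, 2]
```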