AI Alignment: Government Control or Private Values?
Anthropic's refusal of the DoW's surveillance and autonomous-weapons terms exposes the key unasked question: will the future AI workforce (projected to perform 99% of military, government, and private labor within 20 years) align to governments or to companies? Coercing alignment to the state risks the US becoming a CCP-like surveillance state.
Government Leverage Creates Tyranny Risk Over AI
The DoW labeled Anthropic a supply-chain risk over redlines blocking mass surveillance and autonomous weapons, pressuring partners like Amazon, Google, Nvidia, and Palantir to cut ties or lose Pentagon work. That leverage works today but fails long-term as AI permeates every product (e.g., Claude writing AWS code that serves DoD): companies would sooner drop small government contracts, a sliver of their revenue, than drop their AI providers. The proper government response is to refuse deals that leave a private kill-switch in place, just as it should reject a hypothetical Musk veto over Starlink use in unauthorized wars. Instead the threats mimic the CCP playbook: destroy whoever refuses. The stated goal of the US-China AI race, defending free societies, is undermined by adopting state-over-private coercion. Democratically elected governments can still demand rights violations; companies have no duty to assist.
AI Makes Mass Surveillance Cheap and Feasible
Current law allows bulk data grabs: the Fourth Amendment does not cover third-party data held by banks and ISPs. AI removes the analysis bottleneck. Take 100M US CCTV cameras, process 1 frame every 10 seconds at 1,000 tokens per frame with open-source multimodal models at $0.10 per million tokens: roughly $30B/year today. With prices falling ~10x annually, that drops to $3B next year, $300M the year after, and by 2030 it costs less than the White House remodel. The government also controls datacenter power permits, antitrust, and partner contracts, giving it easy pressure points even if the supply-chain ban is reversed (prediction markets put the odds at 81%). Diffusion makes it worse: frontier models (a Claude 6 or Gemini 5) enable this by 2027; open-source catches up by 2028, bypassing the labs entirely.
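The surveillance-cost arithmetic above can be checked with a short sketch. All figures are the text's stated assumptions, not measured data: 100M cameras, 1 frame per 10 seconds, 1,000 tokens per frame, $0.10 per million tokens, and a ~10x annual price decline.

```python
# Back-of-envelope cost of AI-analyzed nationwide CCTV.
# All constants are the assumptions stated in the text.
CAMERAS = 100_000_000
FRAMES_PER_SEC = 1 / 10            # one frame every 10 seconds
TOKENS_PER_FRAME = 1_000
USD_PER_TOKEN = 0.10 / 1_000_000   # $0.10 per million tokens
SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_year = CAMERAS * FRAMES_PER_SEC * TOKENS_PER_FRAME * SECONDS_PER_YEAR
cost_now = tokens_per_year * USD_PER_TOKEN
print(f"Year 0: ${cost_now / 1e9:.1f}B")   # ~$31.5B, i.e. the ~$30B/year figure

# Project forward assuming a 10x annual price decline.
for year in range(1, 5):
    cost = cost_now / 10**year
    print(f"Year {year}: ${cost / 1e6:,.0f}M")
```

By year 4 the annual cost falls to single-digit millions, which is what makes the "cheaper than the White House remodel by 2030" claim plausible under these assumptions.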
Alignment Must Specify Principals, Not Just Technical Success
AGI alignment succeeds technically when AIs follow intentions perfectly, which is scary if the intentions are the government's (perfectly obedient soldiers, analysts, police). The unasked question: should an AI defer to its user, its company, the law, or its own morality? Anthropic's redlines are stronger than a "lawful purposes" standard (Snowden showed the NSA twisting the Patriot Act to collect all phone metadata as "relevant"). In the future, private AIs will staff the Pentagon, and a company veto amounts to a kill-switch. AI morality can avert catastrophes: Berlin Wall guards refused to shoot in 1989; Petrov ignored a false nuclear alarm in 1983, averting WW3. The solution is competing model constitutions (per Dario Amodei), with users choosing among them. Government-mandated values are themselves an autonomy risk.
Reject Heavy Regulation; It Amplifies Abuse
AI safety pushes (e.g., Anthropic's frontier-model roadmap) seek nuclear- or SEC-style oversight to coordinate on safety costs (compute allocated to safety vs. capabilities, bioweapon safeguards, slowed self-improvement). But vague triggers (catastrophic risk, mass persuasion, national security, autonomy risk) enable despotism: a regulator could ban models that critique tariffs as "manipulative" or that refuse surveillance as a "threat." The DoW already abuses non-AI laws (the 2018 defense bill, the 1950 Defense Production Act). The nuclear analogy fails: AI is an industrialization, economy-wide, not a single bomb. Regulate end uses (AI hacking, bioweapons, armed robots) the way chemical weapons and aerial bombing were restricted after the Industrial Revolution, rather than seizing every factory. Government is no fitter a steward of superintelligence than the private labs.