Sora Fails on Economics as Agents Disrupt Dev Tools
OpenAI kills Sora after a $15M/day compute burn and a 66% download drop driven by unsustainable costs and the AI-slop backlash; Linear, whose agents now sit in 75% of enterprise workspaces, argues issue tracking is ending; and Coinbase's no-code experiment points to continuous development run by autonomous agents.
Sora's Shutdown Exposes Flaws in High-Compute AI Media
OpenAI discontinued its Sora AI video app, website, and API shortly after launch, collapsing a $1B Disney licensing deal covering 200+ characters. Downloads peaked at 3.3M in November 2025 but dropped 66% to 1.1M by February 2026, with lifetime in-app revenue at just $2.1M. Video generation costs scale with resolution, duration, complexity, and iteration count, so the flat $200/month price effectively subsidized heavy users, unlike text/chat plans, where marginal costs are low enough for flat pricing to work. Peak compute spend hit $10-15M/day ($15M is the most-cited figure), turning the app into a fast-mounting loss. Alternatives such as Runway Gen-4, Kling 3.0, and Google Veo now lead the category. OpenAI is pivoting Sora teams to robotics and is rumored to be building a 'super app' merging ChatGPT, Codex, and a browser, refocusing on coding against Claude ahead of an IPO. The shutdown also ties into broader AI slop fatigue: Wikipedia bans AI-written articles, Reddit verifies humans, and Spotify fights AI music clones, killing the appetite for flooding feeds with generated video.
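The unit-economics problem can be sketched with rough arithmetic. The per-second generation cost, resolution multiplier, and usage profiles below are illustrative assumptions, not OpenAI's actual numbers; only the $200/month flat price comes from the article.

```python
# Illustrative sketch of flat-price vs. usage-scaled cost economics.
# All cost and usage figures are assumptions for illustration only.

def video_cost(seconds: float, resolution_factor: float, iterations: int,
               base_cost_per_sec: float = 0.50) -> float:
    """Compute cost scales with duration, resolution, and retry count."""
    return seconds * resolution_factor * iterations * base_cost_per_sec

FLAT_PRICE = 200.0  # $/month, the flat plan cited in the article

# A light user: ten short, low-res clips per month, two takes each.
light = 10 * video_cost(seconds=5, resolution_factor=1.0, iterations=2)
# A heavy user: a hundred long, high-res clips with many retries.
heavy = 100 * video_cost(seconds=20, resolution_factor=4.0, iterations=5)

print(f"light user cost ${light:,.0f} vs. ${FLAT_PRICE:,.0f} paid")  # $50
print(f"heavy user cost ${heavy:,.0f} vs. ${FLAT_PRICE:,.0f} paid")  # $20,000
```

Flat pricing survives only if light users outnumber heavy ones by enough to cover the gap; with video, a single heavy user's compute dwarfs the flat fee, which text/chat plans rarely face.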
Agents Eliminate Handoffs in Software Development
Linear's CEO argues issue tracking dies as AI agents, now in 75% of enterprise workspaces, shift work from PM-to-engineer handoffs to context-driven systems. Linear Agent accesses the full workspace (threads, backlog, customer requests, codebase) to synthesize context, make recommendations, and act. 'Skills' save and reuse workflows as slash commands: for example, an agent groups the backlog by customer impact and drafts the top three issues. Coinbase ran a test by deleting dev environments for two weeks, with no code written, to expose the 'hidden tax' of unanswered questions: context switches slow teams down more than the act of writing code does. The result is continuous development, where agents run PRs overnight while engineers review, spin up new agents, and deep-dive the complex work. The key metric is autonomous operation time, the minutes an agent runs without human intervention, and it is steadily rising. Tool boundaries blur: Cursor could add product-management features, Claude or Figma could generate code, and everything centers on shared context as PM, engineering, and design roles collapse together.
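The autonomous-operation-time metric described above can be computed from an agent session log. The event schema here is a hypothetical illustration for the sketch, not Linear's actual data model.

```python
# Sketch: measure uninterrupted agent runtime between human interventions.
# The AgentEvent schema is a hypothetical stand-in, not Linear's.
from dataclasses import dataclass

@dataclass
class AgentEvent:
    timestamp: float   # seconds since session start
    needs_human: bool  # True if the agent paused for human input

def autonomous_stretches(events: list[AgentEvent]) -> list[float]:
    """Lengths of uninterrupted runs ending at each human intervention."""
    stretches, last_intervention = [], 0.0
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.needs_human:
            stretches.append(ev.timestamp - last_intervention)
            last_intervention = ev.timestamp
    return stretches

# Example session: interventions 12 and 45 minutes in.
log = [AgentEvent(720, True), AgentEvent(2700, True), AgentEvent(3000, False)]
runs = autonomous_stretches(log)
print(f"mean autonomous run: {sum(runs) / len(runs) / 60:.1f} min")  # 22.5 min
```

Tracking the mean (or better, the distribution) of these stretches over weeks is what "steadily rising" would mean operationally.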
Brain Simulations and Self-Maintenance Threaten SaaS
Meta's Tribe V2, trained on 1,000+ hours of brain scans from 720 people, predicts brain responses to video, audio, and language, enabling simulated user testing of designs, ads, and onboarding flows in place of costly surveys and lab studies. Cisco replaced a presentation tool with AI agents, saving $5M/year in licenses, and is targeting $50-200M more by automating apps into workflows, a question every SaaS vendor now faces. Ramp's self-maintaining codebase uses 'Ramp Inspect' agents wired to Datadog: monitors fire, agents reproduce the bug in a sandbox, generate a fix, and open a PR within minutes. Ramp scaled from 10 manual monitors to 1,000 AI monitors (one per 75 lines of code) in weeks, catching issues faster than users can report them. That kills the 'maintenance overhead' excuse for not cloning SaaS products, reshaping software economics.
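The monitor-to-PR loop Ramp describes could be structured like the skeleton below. Every name and interface here (the stubbed reproduction, fix-generation, and test steps) is a hypothetical stand-in for illustration, not Ramp's actual agents or Datadog's API.

```python
# Skeleton of an alert -> reproduce -> fix -> PR loop, with stubbed
# integrations. All names are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    monitor_id: str
    service: str
    error_signature: str

def reproduce_in_sandbox(alert: Alert) -> Optional[str]:
    # Stub: replay the failing request in an isolated environment.
    return f"repro of {alert.error_signature}"

def generate_fix(repro: str) -> str:
    # Stub: an LLM would propose a patch from the reproduction trace.
    return f"patch for {repro}"

def sandbox_tests_pass(patch: str) -> bool:
    # Stub: run the service's test suite against the candidate patch.
    return True

def handle_alert(alert: Alert) -> Optional[str]:
    """One agent run: reproduce, patch, verify, then open a PR for review."""
    repro = reproduce_in_sandbox(alert)
    if repro is None:
        return None                  # cannot reproduce: escalate to humans
    patch = generate_fix(repro)
    if not sandbox_tests_pass(patch):
        return None                  # a failing patch never reaches review
    return f"PR opened: {patch} ({alert.monitor_id})"

print(handle_alert(Alert("mon-42", "payments", "NullPointer in invoice")))
```

The design point is that humans stay in the loop only at the PR-review gate; everything upstream (reproduction, patch generation, sandbox verification) runs unattended, which is what lets monitor count scale to one per 75 lines of code.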