AI Scales Cyberattacks Rapidly, Boosts Startups 1.9x

Frontier models double cyberoffense capability every 5.7 months; startups using AI internally discover 44% more use cases and earn 1.9x the revenue; automation rises gradually toward ~90% success on text tasks by 2029; yet GDP forecasts add just ~1 percentage point by 2030.

Frontier AI Doubles Cyberoffense Power Every 5.7 Months

Lyptus Research evaluated AI on cyberattack benchmarks including CyBashBench, NL2Bash, InterCode CTF, NYUCTF, CyBench, CVEBench, and CyberGym, plus a new 291-task dataset calibrated by cybersecurity professionals. From 2019's GPT-2 to 2026's GPT-5.3 Codex and Opus 4.6, capabilities follow scaling laws: an overall doubling time of 9.8 months, accelerating to 5.7 months for post-2024 models. Top models hit 50% success on tasks that take human experts 3.1-3.2 hours, roughly half a workday. Open-weight GLM-5 trails the closed-source leaders by only 5.7 months, implying rapid diffusion of offensive cyber skills. This dual-use scaling means the same capabilities that power defensive tooling also enable attacks, multiplying policy challenges as models become 'everything machines'.
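A minimal sketch of what the reported scaling implies, assuming a simple exponential law: the 5.7-month doubling time and the ~3.1-hour 50%-success horizon come from the study, while the projection dates are illustrative assumptions, not the study's forecasts.

```python
# Hedged extrapolation of the cyber time horizon under a plain
# exponential scaling law. Reported inputs: 5.7-month doubling time
# (post-2024 models) and a ~3.1 h 50%-success horizon for 2026
# frontier models. Months-ahead values are illustrative only.
DOUBLING_MONTHS = 5.7   # reported post-2024 doubling time
H0_HOURS = 3.1          # reported 50%-success horizon, 2026 baseline

def horizon_hours(months_ahead: float) -> float:
    """Task length (hours) solvable at 50% success, months after baseline."""
    return H0_HOURS * 2 ** (months_ahead / DOUBLING_MONTHS)

for months in (0, 12, 24):
    print(f"+{months:2d} months: ~{horizon_hours(months):.1f} h")
```

At this rate, a one-year lag (as for the open-weight gap) translates into roughly a 4x capability-horizon difference, which is why a 5.7-month trailing distance reads as "quick diffusion."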

Internal AI Adoption Yields 1.9x Revenue for Startups

INSEAD and Harvard Business School ran a field experiment on 515 AI Founder Sprint startups. Treated firms received $25k in API credits, OpenAI onboarding, and workshops on real AI use cases such as Gamma's pattern detection for product variants (one PM ships team-scale features), Ryz Labs' parallel AI coding from PRDs, FazeShift's AR automation, and Ranger's traction bootstrapping. Treated firms discovered 44% more use cases (2.7 extra), concentrated in product and strategy; completed 12% more tasks (2.2 more internal ones); raised their odds of landing a paying customer by 18%; and earned 1.9x the revenue. Each extra use case adds 0.85 tasks and 26% more revenue. Capital demand dropped 39.5% ($220k less) with no accompanying rise in labor demand, evidence that AI cuts experimentation costs and enables faster scaling. Founders describe AI as a 'force multiplier', replacing $1k of outsourcing in hours. Non-AI firms risk losing to AI-native competitors, raising the need for managerial education on mapping AI into production.
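A back-of-envelope consistency check on the reported numbers, assuming the 26%-per-use-case revenue effect compounds multiplicatively (the paper may model the marginal effect differently):

```python
# Hedged consistency check of the startup-experiment figures.
# Assumption: the 26% marginal revenue gain per extra use case
# compounds multiplicatively across the 2.7 extra use cases.
extra_use_cases = 2.7        # reported treatment effect
revenue_per_use_case = 0.26  # reported marginal revenue gain

implied_multiplier = (1 + revenue_per_use_case) ** extra_use_cases
print(f"Implied revenue multiplier: {implied_multiplier:.2f}x")  # ≈ 1.87x, near the reported 1.9x

# If 2.7 extra use cases is a 44% increase, the control baseline is:
baseline = extra_use_cases / 0.44
print(f"Implied baseline use cases: {baseline:.1f}")  # ≈ 6.1
```

The two headline numbers (2.7 extra use cases, 26% per use case, 1.9x revenue) cohere under this compounding assumption, which suggests the marginal and total effects were estimated consistently.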

AI Automates Text Tasks via Gradual 'Rising Tide'

MIT analyzed 3,000 O*NET tasks with 17,000 worker evaluations, finding that AI progress arrives as a broad 'rising tide' rather than disruptive 'crashing waves'. Frontier models shifted from 50% success on 3-4 hour tasks (2024-Q2) to 1-week tasks (2025-Q3), and reach 70% on tasks lasting one minute to one hour. The slope of task success versus duration stays flat across job families such as management. By 2029, most few-hour text-based tasks hit 80-95% success at sufficient quality (90% median), validating METR's time-horizon scaling. Expect steady labor displacement favoring capital over humans, challenging economic stability.
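The two reported data points imply a horizon doubling time, under stated assumptions: "3-4 hour" is taken as 3.5 h, "1 week" as 40 working hours, and 2024-Q2 to 2025-Q3 as ~15 months. These conversions are mine, not the study's.

```python
import math

# Hedged sketch: doubling time implied by the two reported points.
# Assumptions (not from the study): 3-4 h task ≈ 3.5 h, 1 week
# ≈ 40 working hours, 2024-Q2 → 2025-Q3 ≈ 15 months.
h_start, h_end = 3.5, 40.0
months = 15

doublings = math.log2(h_end / h_start)       # how many doublings occurred
doubling_time = months / doublings           # months per doubling
print(f"{doublings:.1f} doublings over {months} months "
      f"-> ~{doubling_time:.1f} months per doubling")
```

The result lands in the same 4-6 month range as the cyberoffense doubling times above, consistent with the article's framing of one broad tide rather than domain-specific waves.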

GDP Forecasts Paradox: Fast AI Progress, Minor ~1% Boost

The Forecasting Research Institute surveyed 69 economists, 52 AI/policy experts, 38 superforecasters, and 401 members of the public (Oct 2025-Feb 2026). All groups expect moderate-to-rapid AI progress by 2030 (from basic to top-human performance on research, coding, creativity, and physical tasks), yet forecast GDP growth gaining only ~1pp (from 2.4% to 3.4%), with flat TFP and labor participation and rising inequality. Economists see a 14% chance of a major short-term GDP/inequality surge, and favor retraining, unemployment insurance, and an 'AI Manhattan Project' over UBI or a compute tax. By 2050, experts predict multi-pp GDP gains. This underplays lab visions of exponential change, highlighting forecasters' conservatism about exponentials.
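What a ~1pp growth bump means for the GDP level, under an assumption of my own: that the bump applies over roughly five years (2026-2030); the survey's exact window may differ.

```python
# Hedged arithmetic: cumulative GDP-level effect of a 1pp growth bump.
# Assumption (mine): the 2.4% -> 3.4% shift holds for ~5 years.
years = 5
baseline, boosted = 1.024, 1.034  # annual growth factors (reported rates)

level_gain = (boosted / baseline) ** years - 1
print(f"GDP level ~{level_gain:.1%} higher after {years} years")
```

A ~5% higher GDP level after five years is real but modest, which is the paradox the survey highlights: forecasters who expect top-human AI performance still price in only a small macro effect.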

Summarized by x-ai/grok-4.1-fast via openrouter


© 2026 Edge