#llm-pricing
GitHub Copilot Limits Tighten as Agents Spike Compute Costs
GitHub pauses individual Copilot signups, adds per-session and per-week token limits, and restricts top models to the $39/mo Pro+ tier, citing agentic workflows that burn 10x more tokens than six months ago.
Simon Willison's Weblog
DeepSeek V4: Frontier Power at 1/10th Frontier Price
DeepSeek V4 Pro (1.6T params) and V4 Flash (284B params) match top models on benchmarks while costing $0.14-$3.48/M tokens, the cheapest in class, thanks to 1M-token-context efficiency that cuts FLOPs and KV cache by 73-90% vs V3.2.
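Per-million-token rates like the range quoted above translate to request costs with simple arithmetic. A minimal sketch, assuming the low end of the quoted $0.14-$3.48/M range applies to input tokens and the high end to output tokens (that mapping is an illustration, not stated in the summary):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = 0.14, out_rate: float = 3.48) -> float:
    """Cost in USD for one request; rates are USD per million tokens.

    Default rates are the endpoints of the $0.14-$3.48/M range quoted
    for DeepSeek V4; which endpoint maps to input vs output is an
    assumption for illustration.
    """
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A long-context request: 800K input tokens, 4K output tokens.
cost = request_cost(800_000, 4_000)
print(f"${cost:.4f}")  # roughly $0.13 for a near-1M-context call
```

At these rates even a near-full 1M-token context stays in the tens of cents, which is the practical meaning of "frontier power at 1/10th frontier price".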