Linear's Patient AI Bet Pays Off for SaaS
Linear skipped the early AI chatbot hype, built an agent-friendly platform, and positioned itself as the sticky context layer for AI workflows, showing that SaaS thrives by understanding where real value lies rather than rushing to burn tokens.
Skipping the AI Chatbot Rush for Real Workflows
Karri Saarinen, co-founder and CEO of Linear, argues that most SaaS companies mishandled early AI by bolting on chatbots without validating the underlying workflows. Linear spent years studying how teams actually use AI, resisting the pull of "everyone else is doing it." Instead, it shipped an open agent platform with strong documentation, enabling integrations from coding agents such as OpenAI's Codex, Coinbase's homegrown tools, and others. That made Linear the hub for guiding agents, supplying context like issues, priorities, and customer requests, without bearing the token costs itself.
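The "open platform, strong docs" pattern comes down to agents pulling structured context out of Linear rather than Linear running the agents. A minimal sketch of what such a context fetch might look like against Linear's public GraphQL endpoint (the endpoint URL is Linear's documented one, but the specific fields, filter shape, and `team_key` value here are assumptions for illustration; check the API docs before relying on them):

```python
import json
import urllib.request

LINEAR_API = "https://api.linear.app/graphql"  # Linear's public GraphQL endpoint


def build_context_query(team_key: str) -> dict:
    """Build a GraphQL payload asking Linear for the open issues an agent
    might want as context (identifier, title, priority, description).
    The field and filter names are illustrative; verify against the schema."""
    query = """
    query TeamContext($teamKey: String!) {
      issues(filter: { team: { key: { eq: $teamKey } } }, first: 20) {
        nodes { identifier title priority description }
      }
    }
    """
    return {"query": query, "variables": {"teamKey": team_key}}


def fetch_context(api_key: str, team_key: str) -> dict:
    """POST the query with an API key in the Authorization header."""
    req = urllib.request.Request(
        LINEAR_API,
        data=json.dumps(build_context_query(team_key)).encode(),
        headers={"Content-Type": "application/json", "Authorization": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

An agent given this context knows which issues exist and how they are prioritized before it executes anything, which is the "guiding the agents" role the article describes.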
"We have spent all this couple years now like trying to understand these workflows like how do people actually want to use these things," Saarinen says. The result: Linear synthesizes customer requests, spots patterns in feature asks (e.g., hundreds of users requesting multiple assignees), and clarifies organizational intent before agents execute.
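At its core, that pattern-spotting step is about counting how often themes recur across many requests. A toy sketch of the idea (the request records and theme tags below are invented for illustration; Linear's actual synthesis presumably runs over real customer-request data, not a hardcoded list):

```python
from collections import Counter

# Hypothetical customer requests, each tagged with one or more themes.
requests = [
    {"id": 1, "themes": ["multiple-assignees"]},
    {"id": 2, "themes": ["multiple-assignees", "dark-mode"]},
    {"id": 3, "themes": ["sla-alerts"]},
    {"id": 4, "themes": ["multiple-assignees"]},
]

# Count how many requests mention each theme, surfacing patterns like
# "hundreds of users asking for multiple assignees".
theme_counts = Counter(t for r in requests for t in r["themes"])
top = theme_counts.most_common(3)
print(top)  # [('multiple-assignees', 3), ('dark-mode', 1), ('sla-alerts', 1)]
```

Ranking themes this way turns a pile of individual asks into a prioritized signal a team (or an agent) can act on.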
This positions Linear as a "sticky interface" where work starts and is recorded, ideal for an era of many agents per company. Saarinen notes, "Linear becomes kind of like a system for guiding the agents and like building this context... You're the one who has the sort of sticky interface cuz it's where everyone is kicking things off from."
SaaS Isn't Dead—But Public Giants Face Inertia
The market's "SaaS is dead" narrative overlooks nuance, per Saarinen. Investors are right to worry about uncertain cash flows in an AI-driven shift, but the idea that custom AI-built tools will simply wipe out SaaS is simplistic. Public companies suffer most, weighed down by decades of inertia and eroding moats, while nimble growth-stage firms like Linear adapt by rethinking their products from scratch.
Linear, with roughly 120 people (half on product), operates in "day one" mode: no reliance on past decisions. The team tracks AI signals amid the noise (hype cycles in which ideas flare up and are then dismissed) but tests them in large-org contexts where outcomes matter. Freedom from investor pressure helped; Linear picked backers who trust deliberate calls. "The public companies probably get hit the hardest here because they are like their moats are kind of like disappearing in a way," Saarinen observes.
Ditching Vanity Metrics for Product Outcomes
Internally, Linear shifted from skepticism ("Is AI just autocomplete?") to full adoption: engineers, designers, and PMs all use agents. But vanity metrics such as token spend, PR volume, or the percentage of agent-written code mislead, because activity is not value. Token sellers are incentivized to encourage over-spending, regardless of negative impacts.
The true signals are product improvement (user love, revenue), bug rates, and feature feedback. Linear enforces a "zero bugs" policy: bugs are triaged by a dedicated Linear team and fixed within a one-week SLA. Agents handle first-pass fixes; engineers review them in-app. "Now I almost feel like with the agents and AI is almost like why do you even have bugs in your product like you should be like there's no excuse for it anymore."
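A one-week SLA is easy to check mechanically. A small sketch of the kind of check an automation could run over triaged bugs (the helper name and sample timestamps are ours; the ISO-8601 format matches what GraphQL APIs like Linear's typically return):

```python
from datetime import datetime, timedelta, timezone

SLA = timedelta(days=7)  # the one-week fix window described in the article


def breaches_sla(created_at: str, now: datetime) -> bool:
    """Return True if a triaged bug has been open longer than the SLA.

    `created_at` is an ISO-8601 timestamp such as "2024-05-01T12:00:00Z".
    """
    opened = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return now - opened > SLA


now = datetime(2024, 5, 10, tzinfo=timezone.utc)
print(breaches_sla("2024-05-01T12:00:00Z", now))  # True: open longer than 7 days
print(breaches_sla("2024-05-05T12:00:00Z", now))  # False: still within the SLA
```

Flagged bugs can then be routed to an agent for a first-pass fix, with an engineer reviewing the result in-app.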
Lagging indicators like profit guide the big picture, while per-team token use serves as a signal, not an absolute. Quality trumps quantity: "It's not always like activity is always positive like sometimes it can be negative too."
AI Accelerates Execution, Not Problem-Finding
AI shortens feedback loops across roles, but Saarinen balances speed with deliberation. Product: a custom "Linear way" skill digests docs and feature requests, synthesizing the underlying problems (e.g., the core reasons behind multi-assignee asks) to drive prioritization. No more manual hunting.
Design: Saarinen prefers manual Figma exploration for thoughtful iteration; too much speed skips the self-checks. The team prototypes via VR builds for live testing. Engineering: Slack conversations become agent-created issues instantly. Overall: fast execution once a decision is made, slow problem selection.
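The Slack-to-issue loop can be sketched as a transform from a Slack event into a Linear `issueCreate` mutation. Everything below is illustrative: the message shape is a simplified Slack Events API payload, the `team_id` value is made up, and while `issueCreate` is part of Linear's public GraphQL schema, the exact input fields should be checked against the docs:

```python
def slack_message_to_issue_payload(message: dict, team_id: str) -> dict:
    """Turn a Slack message dict into a Linear issueCreate mutation payload.

    Assumes `message` has a "text" key and, optionally, a "permalink" key.
    """
    title = message["text"].splitlines()[0][:80]  # first line becomes the title
    permalink = message.get("permalink", "")
    mutation = """
    mutation CreateIssue($input: IssueCreateInput!) {
      issueCreate(input: $input) { success issue { identifier url } }
    }
    """
    return {
        "query": mutation,
        "variables": {
            "input": {
                "teamId": team_id,
                "title": title,
                "description": f"{message['text']}\n\nFrom Slack: {permalink}",
            }
        },
    }


msg = {
    "text": "Bug: export fails on large workspaces\nSteps: ...",
    "permalink": "https://example.slack.com/archives/C123/p456",
}
payload = slack_message_to_issue_payload(msg, team_id="team_abc")
print(payload["variables"]["input"]["title"])  # Bug: export fails on large workspaces
```

Posting this payload to Linear's GraphQL endpoint from a Slack event handler is what closes the loop from informal conversation to tracked, actionable work.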
"I don't want the problem finding to be fast. Like you should take the time to find the right problem and like the right approach for the problem and then once you decide that then you can go faster on it," Saarinen emphasizes. The danger: speed-running ideas without framing them against alternatives leaves you with a pile of unprioritized prototypes.
Linear's tasteful, patient build (closed beta, minimal funding) mirrors this philosophy: quality over hype, craft over chaos.
Key Takeaways
- Study AI workflows deeply before building; chatbots rarely add real value without validated use cases.
- Build open platforms (e.g., strong docs for agent integrations) to become the context layer, avoiding token costs.
- Ignore vanity metrics like token spend or PR counts; track bugs, user feedback, and revenue for true progress.
- Enforce zero-bug policies with agents for triage/fixes—demand quality in AI outputs.
- Slow down problem-finding and prioritization; speed up execution once committed.
- SaaS wins by adapting fresh: treat AI as day-one rethink, not bolt-on.
- Use AI to synthesize customer requests/patterns for faster prioritization.
- Turn informal chats (Slack) into actionable issues instantly to close loops.
- Pick investors who trust deliberate pacing over market noise.