Build Agent-Ready Platforms with Self-Service APIs

The platform best practices that serve human developers (self-service, API-first design, local workflows, API-accessible observability) also unlock AI agent autonomy, letting agents close the loop on build-debug-ship cycles.

Self-Service APIs Eliminate Human Dependencies for Agents

New developers waste time copying pipelines, debugging infrastructure errors, and waiting on other teams for databases or storage. These handoffs slow humans down and stop agents entirely, since agents cannot work a social process. The fix is a fully self-service platform: automate resource provisioning (Kubernetes compute, databases, blob storage, secrets, messaging) so no human handoff is required. Base everything on well-defined APIs with schemas, which give agents discoverability, validation, authentication/authorization, and structured responses. Agents excel in this setting, looping on calls until they succeed (e.g., deploy the app, check the response, iterate). Wrap the APIs in CLIs or MCP servers for flexibility. Banking Circle's Atlas platform (serving 250+ builders who process over €1T/year for 700+ institutions) uses this approach to abstract cloud complexity, letting teams focus on payments APIs, core banking, and data science.

Local-First Workflows and API Observability Close Agent Loops

Agents run locally, so shift left: validate configs, run previews, and fail fast on your machine before pushing to version control or triggering remote workflows. Define success criteria precisely (e.g., 'deployment succeeds if the API returns 200 and metrics show healthy traffic'). Expose observability (logs, metrics, traces) via APIs, CLIs, or MCP servers rather than dashboards agents can't parse. This lets agents verify outcomes autonomously and iterate without human oversight. The result: agents build, debug, and ship independently, boosting productivity where tribal knowledge once ruled.
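A precise success criterion like the one quoted above can be made machine-checkable. This is a hedged sketch: `check_health` and `query_error_rate` are stand-ins for whatever your observability API actually exposes.

```python
# Machine-checkable version of: "deployment succeeds if the API returns 200
# and metrics show healthy traffic". Function names and the metrics dict
# shape are illustrative assumptions, not a real observability API.

def check_health(status_code: int) -> bool:
    """Did the deployed API respond successfully?"""
    return status_code == 200

def query_error_rate(metrics: dict) -> float:
    """Error rate over the last window, read from a metrics API, not a dashboard."""
    total = metrics["requests"]
    return metrics["errors"] / total if total else 0.0

def deployment_succeeded(status_code: int, metrics: dict,
                         max_error_rate: float = 0.01) -> bool:
    """An agent can evaluate this after every deploy and iterate on failure."""
    return check_health(status_code) and query_error_rate(metrics) <= max_error_rate

print(deployment_succeeded(200, {"requests": 1000, "errors": 3}))   # True
print(deployment_succeeded(200, {"requests": 1000, "errors": 50}))  # False
```

The point is that success is a pure function of API-accessible signals, so an agent can close the loop without a human reading a dashboard.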

Structured Docs and Guardrails Boost Contributions

Colocate docs with code in repos for small projects; centralize platform docs (as API-accessible snippets, not full HTML pages) for discovery. Use agent.md (or CLAUDE.md, instructions.md) for repo-specific rules: 'build, test, deploy, and verify this way.' Codify conventions as 'skills' (markdown guides) for tasks like platform interactions. Welcome AI-powered contributions to the platform itself, since AI lowers the barrier to contributing, but enforce quality via policies (security, compliance) plus contextual markdown files that guide agents. Combine hard gates with soft guidance to keep the code maintainable.
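An agent.md following the 'build/test/deploy/verify this way' pattern might look like the following. The contents are entirely hypothetical: the `make` targets and `platform` CLI commands are placeholders for whatever your repo actually uses.

```markdown
# Agent instructions (illustrative example)

## Build
Run `make build`. Do not invoke the compiler directly.

## Test
Run `make test`; all tests must pass before committing.

## Deploy
Use the platform CLI: `platform deploy --env staging`.

## Verify
After deploying, query `platform status` and confirm the API returns 200
and the error rate stays below 1%.
```

Keeping these rules in the repo means every agent session starts with the same conventions a human onboarding doc would convey.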

Measure Impact and Use AI Hype for Best Practices

Track DORA metrics (deployment frequency, lead time for changes, time to restore service, change failure rate) before and after platform changes; monitor reliability (error rates, traffic performance); count support tickets, where fewer tickets signal that self-service is working for agents as well as humans; and survey developer experience (e.g., with the SPACE framework). Finally, leverage the current AI excitement: pitch long-ignored best practices (API-first design, documentation, local tooling) as 'agent prerequisites' to overcome resistance from executives down to individual contributors.
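As a sketch, two of the DORA metrics listed above can be computed from plain deployment records. The record shape (timestamp plus a failed flag) is an assumption for illustration, not a prescribed schema.

```python
from datetime import datetime

# Illustrative deployment log; in practice this would come from your
# CI/CD system's API.
deployments = [
    {"at": datetime(2025, 1, 1), "failed": False},
    {"at": datetime(2025, 1, 2), "failed": True},
    {"at": datetime(2025, 1, 4), "failed": False},
    {"at": datetime(2025, 1, 7), "failed": False},
]

def deployment_frequency(records: list, days: int) -> float:
    """Deployments per day over the measurement window."""
    return len(records) / days

def change_failure_rate(records: list) -> float:
    """Share of deployments that caused a failure in production."""
    return sum(r["failed"] for r in records) / len(records)

print(round(deployment_frequency(deployments, days=7), 2))  # → 0.57
print(change_failure_rate(deployments))                     # → 0.25
```

Comparing these numbers before and after opening the platform to agents gives a concrete, low-effort baseline.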

Video description
As AI coding agents become first-class users of internal developer platforms, the practices that make platforms accessible to humans turn out to be the same ones that enable AI to thrive. Self-service interfaces, well-defined APIs with schemas and documentation, local-first workflows, and rich observability have always been important elements of a good platform. Now they are prerequisites for agents that can autonomously build, debug, and ship software.

This talk explores what it means to design platforms where both humans and AI can collaborate effectively. We'll cover:

- How to expose your platform as a product with structured APIs (and perhaps MCPs)
- Why prioritizing local tooling pays dividends when agents need to iterate on errors
- How observability becomes the bridge between runtime behavior and AI understanding

We'll also discuss the flip side: AI is making it easier than ever to *contribute* to platform code, but that comes with new responsibilities around quality gates, context files like CLAUDE.md, and maintainability. Walk away with concrete practices to ensure your platform is ready for a future where agents are not just tools, but users of it.

Juan Herreros Elorza - Team Lead, Banking Circle
I'm Juan, a Platform Engineering enthusiast. I am working for Banking Circle as the Team Lead in our Cloud Native Technology team. When I'm not working, I'm most likely rehearsing or performing improv comedy.

Socials:
https://juanherreros.com/
https://linkedin.com/in/juan-herreros-elorza
https://github.com/jherreros

Slides: https://speakerdeck.com/jherreros/platforms-for-humans-and-machines-engineering-for-the-age-of-agents


© 2026 Edge