Event Stream as the Sole Primitive for Agents

The foundation is a simple event log at events.iterate.com, where every interaction—user inputs, LLM streaming chunks, tool calls, errors, circuit breakers—is an immutable event with type, optional payload, stream path, offset, and timestamp. Append events via POST /:path with JSON or raw payloads; invalid inputs auto-generate error events like https://events.iterate.com/invalid-event-appended.
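
For orientation, a minimal append over HTTP might look like the sketch below. The JSON body fields follow the workshop example (type plus optional payload); the path /yourname/agent and the X-Project-Slug value are placeholders, and the exact response format may differ from what the comment assumes.

// Sketch: append one event to a stream via POST.
const res = await fetch('https://events.iterate.com/yourname/agent', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Project-Slug': 'my-experiment', // isolated namespace for experiments (see pitfalls below)
  },
  body: JSON.stringify({ type: 'user_message', payload: 'Hello' }),
});
console.log(res.status); // the service assigns stream path, offset, and timestamp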

Three key operations enable agent-like behavior without custom servers: appending events, tailing a stream live (SSE), and replaying the full history.

Circuit breakers automatically pause a stream that exceeds 100 events/sec, keeping runaway loops in check. This setup mimics familiar agent logs (e.g., Claude/Pi traces) but makes everything replayable: when your program restarts after 100 events, you derive the full state from the log without re-calling any LLMs.

Common pitfall: Forgetting paths start with / (URL-encode if nested). Use project slugs via X-Project-Slug header for isolated namespaces during experiments.

Stream Processor: Reducer + Side-Effects Hook

A processor is pure JS/TS code with two parts:

  1. Synchronous reducer: Function deriving state from events. Input: array of events; output: new state (JSON-serializable). Runs on every append, replaying from offset 0 for consistency.
  2. Async after-append hook: Triggered post-reduce, for side effects like LLM calls or tools. Input: current state + latest event; no state mutation.

Example processor (from workshop repo):

import { EventsClient } from '@iterate-labs/ai-engineer-workshop'; // SDK client, used when appending events from the hook

type Event = { type: string; payload?: any };
type State = { messages: any[]; assistant: Event[] };

const processor = {
  // Pure, synchronous reducer: re-derives state from the full event array on every append.
  reduce(events: Event[]): State {
    return {
      messages: events.filter(e => e.type === 'user_message').map(e => e.payload),
      assistant: events.filter(e => e.type === 'assistant_message'),
    };
  },
  // Async hook: side effects only. The reducer has already folded the latest event into state.
  async afterAppend(state: State, event: Event) {
    if (event.type === 'user_message') {
      // state.messages already includes event.payload, so it is the full LLM context.
      // Call the LLM here and append the response chunks back to the stream as events.
    }
  }
};

Deploy by appending a https://events.iterate.com/dynamic-worker-configured event whose payload is the processor source as a string. The service runs it in an isolated, sandboxed worker per stream; the mechanism is polyglot in principle (see events.iterate.com/apidocs for other languages).
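
A deployment sketch, assuming the createEventsClient/append calls shown in the hands-on steps below and that the payload is simply the source string (the exact configuration fields may differ; check the API docs):

import { readFileSync } from 'node:fs';
import { createEventsClient } from '@iterate-labs/ai-engineer-workshop';

const client = createEventsClient({
  baseUrl: 'https://events.iterate.com',
  pathPrefix: '/yourname/agent',
});

// Read the processor source (the reduce/afterAppend code above) and append it
// as a dynamic-worker-configured event; the stream then runs it per append.
const source = readFileSync('./processor.ts', 'utf8');
await client.append({
  type: 'https://events.iterate.com/dynamic-worker-configured',
  payload: source,
});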

Quality criteria: Reducer must be pure/deterministic (no side effects, I/O). State should capture only essentials for decision-making (e.g., message history, not raw chunks). Test by replaying event logs: curl full history, pipe to jq, verify state.
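
A replay test can be as small as the sketch below: fetch the full history and re-run the reducer locally, with no LLM calls. It assumes a GET on the stream path returns the event array as JSON and that the processor object from above is in scope; adjust to the actual response format.

// Replay test sketch: rebuild state from the raw log.
const history = await fetch('https://events.iterate.com/yourname/agent');
const events: { type: string; payload?: any }[] = await history.json();

const state = processor.reduce(events); // the reducer defined above
console.log(`${state.messages.length} messages reconstructed from the log`);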

Pitfall avoidance: Avoid LLM calls in reducer (blocks replay). Use hooks for async work. Handle streaming: append partial tokens as https://events.iterate.com/llm-chunk events.
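
For streaming, the hook can forward each partial token as its own llm-chunk event; a sketch, where llmStream stands in for your provider's async iterator and client.append is the call from the hands-on steps:

// Streaming sketch: capture the LLM stream itself in the event log.
async function streamToLog(
  llmStream: AsyncIterable<string>,
  client: { append: (e: { type: string; payload?: any }) => Promise<unknown> },
) {
  for await (const token of llmStream) {
    await client.append({
      type: 'https://events.iterate.com/llm-chunk',
      payload: token,
    });
  }
}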

Composing Processors for Extensible Agents

Multiple processors attach to one stream via separate dynamic-worker-configured events. They run in parallel:

  • Author's processor: Core agent logic (e.g., OpenAI chat).
  • Safety checker: Injects context pre-LLM (e.g., appends a guardrail event within a 200ms window); a sketch follows this list.
  • External: Rust/TS plugins from different servers compose seamlessly.
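
A safety checker is just another processor attached to the same stream; a rough sketch, reusing the reduce/afterAppend shape from above (looksRisky and the guardrail wiring are illustrative placeholders, not part of the workshop API):

type Event = { type: string; payload?: any };

// Second processor on the same stream: it never calls the LLM itself; it only
// appends a guardrail event that the main agent's processor can pick up.
const safetyChecker = {
  reduce(events: Event[]) {
    return {
      lastUserMessage: events.filter(e => e.type === 'user_message').at(-1)?.payload,
    };
  },
  async afterAppend(state: { lastUserMessage?: unknown }, event: Event) {
    if (event.type === 'user_message' && looksRisky(state.lastUserMessage)) {
      // Append a guardrail event here (e.g., via client.append) so it lands in the
      // stream before the main processor's LLM call reads the conversation.
    }
  },
};

// Placeholder policy check for the sketch.
function looksRisky(text: unknown): boolean {
  return typeof text === 'string' && /password|secret/i.test(text);
}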

Example flow:

  1. User appends message.
  2. All processors reduce to their state.
  3. Hooks fire: Safety checks, then LLM call, append response chunks.
  4. On replay, the reducer rebuilds state without re-running any LLM calls.

Trade-offs: you gain composability (mix-and-match extensions) and distribution (edge-deployed, HTTP-only). Risks: races and loops (mitigate with pausing and idempotency) and no auth yet (rotate secrets post-demo).
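
One way to get idempotency is to derive a "pending" flag in the reducer and have the hook act only when it is set; a sketch under that assumption, reusing the reduce/afterAppend shape from above:

type Event = { type: string; payload?: any };

// Idempotency sketch: the reducer checks whether the latest user message already has
// an assistant reply in the log. On replay after a restart, the reply is already in
// the history, so pending is false and no duplicate LLM call fires.
const guardedProcessor = {
  reduce(events: Event[]) {
    const userCount = events.filter(e => e.type === 'user_message').length;
    const replyCount = events.filter(e => e.type === 'assistant_message').length;
    return { pending: userCount > replyCount };
  },
  async afterAppend(state: { pending: boolean }, event: Event) {
    if (event.type === 'user_message' && state.pending) {
      // Call the LLM and append the assistant_message here; once it is in the log,
      // pending becomes false and repeated triggers do nothing.
    }
  },
};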

Before/after: a naive agent has opaque traces and non-replayable side effects; here, the full event log is the single source of truth, each agent gets a URL (/jonas/agent1), and webhooks/forms serve as inputs.

Hands-On: From Curl to Full Agent

Prerequisites: Node/TS basics and familiarity with agents (e.g., function calling). This fits early in the workflow: prototype here before reaching for frameworks like LangChain.

Steps to build (a combined sketch follows the list):

  1. Clone github.com/iterate/ai-engineer-workshop, npm i.
  2. Create client: const client = createEventsClient({ baseUrl: 'https://events.iterate.com', pathPrefix: '/yourname/agent' });
  3. Append init event: await client.append({ type: 'user_message', payload: 'Hello' });
  4. Tail live: client.tail({ live: true }, console.log);
  5. Define/deploy processor: Serialize as string, append dynamic-worker-configured.
  6. Extend: Append JSON transformer for event rewriting; schedule tools.
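
Putting steps 2 through 5 together, a minimal script might look like this sketch. The createEventsClient, append, and tail calls are the ones listed above; the export shape of the serialized processor source is an assumption, and /yourname/agent is a placeholder.

import { createEventsClient } from '@iterate-labs/ai-engineer-workshop';

// Step 2: create the client.
const client = createEventsClient({
  baseUrl: 'https://events.iterate.com',
  pathPrefix: '/yourname/agent',
});

// Step 3: append an init event.
await client.append({ type: 'user_message', payload: 'Hello' });

// Step 4: tail the stream live, logging each event as it arrives.
client.tail({ live: true }, console.log);

// Step 5: deploy a processor by appending its serialized source
// (see the deploy sketch in the previous section).
const processorSource = `export default {
  reduce(events) { return { messages: events.filter(e => e.type === 'user_message') }; },
  async afterAppend(state, event) { /* call the LLM and append response events here */ }
}`;
await client.append({
  type: 'https://events.iterate.com/dynamic-worker-configured',
  payload: processorSource,
});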

Exercise: Build coding agent—append code events, reduce to context window, hook to LLM for edits. Combine with external webhook for GitHub PRs.

"The split matters: when your program restarts after 100 events, you want to catch up state without replaying LLM requests."

"Dynamic worker configured... Append it to any stream and that stream becomes an AI agent without server or dependencies."

"Processors from different authors on different servers can compose against the same stream."

"Everything that happens (streaming chunks, tool calls, errors, circuit breaker triggers) is an event in the log."

"I would like to build agent harnesses... purely event sourced. Aka debugable."

Key Takeaways

  • Start every agent with an event stream: Append user inputs/tools as typed events; tail via SSE for real-time.
  • Implement processors as {reduce(events), afterAppend(state, event)}—pure sync state, async effects only.
  • Deploy dynamically: POST processor JS as event payload; runs isolated per stream.
  • Prevent loops: Use pausing, idempotency, and rate limits out of the box.
  • Compose boldly: Multiple workers per stream enable plugins/safety without forking codebases.
  • Debug via replay: Curl full log, reduce manually to verify state sans LLMs.
  • Edge-first: Agents get public URLs instantly; HTTP webhooks/forms as inputs/outputs.
  • Polyglot/extensible: TS SDK for workshop, but curl/OpenAPI for any lang.
  • Test rigorously: Simulate 100+ events, pause/resume, check reducer purity.