Master Agent Fundamentals Before Building
Agents follow a universal loop regardless of provider (Anthropic, OpenAI, etc.): user input triggers the LLM to think, then either respond directly from context or select tools (e.g., web search, the Twitter API), execute a plan, observe the results, and loop back while updating memory. This differs from deterministic workflows, where fixed prompts yield identical outputs cheaply and predictably. Agents are dynamic (the LLM chooses tool calls and paths on the fly), but they cost more and are less reliable.
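The loop above can be sketched in a few lines of Python. `fake_llm` and the `TOOLS` table are hypothetical stubs standing in for a real model API and real tool implementations:

```python
def fake_llm(messages):
    # Hypothetical stub for a real model call: asks for a tool once,
    # then answers from the observed result.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "web_search", "input": "ETH price today"}
    return {"type": "answer", "text": "Summary based on search: ETH price found."}

TOOLS = {"web_search": lambda query: f"search results for: {query}"}

def run_agent(user_input, max_steps=5):
    memory = [{"role": "user", "content": user_input}]  # short-term memory
    for _ in range(max_steps):
        decision = fake_llm(memory)                      # think
        if decision["type"] == "answer":                 # respond directly
            return decision["text"]
        result = TOOLS[decision["tool"]](decision["input"])  # act
        memory.append({"role": "tool", "content": result})   # observe + remember
    return "Step limit reached."
```

A production loop adds streaming, error handling, and a real tool-call schema, but the think → act → observe → remember cycle is the same.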
Skip agents for most tasks; use Anthropic's five workflow patterns first:
- Prompt chaining: Break a task into sequential subtasks (e.g., outline marketing copy → verify quality → write the full draft → translate). More accurate than cramming everything into a single prompt.
- Routing: Classify the input (e.g., general customer service vs. billing vs. tech support) and direct it to a specialized handler.
- Parallelization: Run variants of a task in parallel and aggregate the results.
- Orchestrator-workers: A central LLM dynamically assigns subtasks to worker LLMs; suited to unpredictable, complex tasks like deep research.
- Evaluator-optimizer: A generator LLM creates output; an evaluator LLM critiques it and loops the feedback until the criteria are met.
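As an illustration, the first pattern (prompt chaining) might look like this sketch, where `call_llm` and `passes_quality_gate` are hypothetical stubs for real model calls:

```python
def call_llm(prompt):
    # Hypothetical stub standing in for a real model call.
    return f"[output for: {prompt}]"

def passes_quality_gate(text):
    # Stand-in for an LLM-based quality check on the intermediate step.
    return bool(text.strip())

def marketing_copy_chain(brief):
    outline = call_llm(f"Outline marketing copy for: {brief}")
    if not passes_quality_gate(outline):          # gate before continuing
        raise ValueError("outline failed the quality gate")
    draft = call_llm(f"Write the full copy from this outline: {outline}")
    return call_llm(f"Translate this copy to Spanish: {draft}")
```

Each step gets one focused prompt, and the gate stops the chain early instead of compounding a bad outline.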
Graduate to agents only when workflows fail, starting simple to avoid overkill.
Build v1 Agents in One Day with the Formula
Define before coding: the exact outcome (e.g., a structured report, not vague "help"), the required information (web, files, databases, or the user's message), the allowed actions (search, edit, send), and the rules (tone, format, how to handle uncertainty).
Formula: Agent = Role + Goal + Tools + Rules + Output Format. Paste the filled-in formula as markdown into the Claude Code extension to generate a runnable project (e.g., launch it with `npm run dev`).
Beginner types:
- Research: Gather/summarize info.
- Content: Write/rewrite/transform.
- Workflow: Repeatable processes.
- Personal knowledge: Query private docs.
- Operator: Take actions in an environment.
Example: a crypto research agent. Role: research assistant; Goal: find and summarize information accurately; Tools: web search, file search, calculator; Rules: cite sources, flag uncertainty; Output: a .docx report. This yields a project with a system prompt, runnable via queries like "research Ethereum". To brainstorm, ask Claude: "Help me design an Anthropic agent for this goal and fill in the formula."
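One way to make the formula concrete in code is a small dataclass that renders the markdown spec. The field names follow the formula; the `AgentSpec` name and the exact markdown layout are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    role: str
    goal: str
    tools: list
    rules: list
    output_format: str

    def to_markdown(self):
        # Render the spec in the Role/Goal/Tools/Rules/Output order.
        return "\n".join([
            f"# Role\n{self.role}",
            f"# Goal\n{self.goal}",
            "# Tools\n" + "\n".join(f"- {t}" for t in self.tools),
            "# Rules\n" + "\n".join(f"- {r}" for r in self.rules),
            f"# Output Format\n{self.output_format}",
        ])

crypto_agent = AgentSpec(
    role="Crypto research assistant",
    goal="Find and summarize crypto information accurately",
    tools=["web search", "file search", "calculator"],
    rules=["cite sources", "flag uncertainty"],
    output_format="docx report",
)
```

The rendered markdown is what you would paste into Claude Code to generate the project.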
Newsletter example: input a transcript, output a polished article matching your voice (e.g., written for builders using AI and no-code tools). Change the output format to HTML/CSS (Notion-style sticky scroll) to auto-publish blog posts from YouTube videos.
Optimize with Minimal Tools, Memory, and Debugging
Fewer tools mean higher reliability: add a tool only for external data or actions the model can't handle natively (e.g., current weather, news, calculations, spreadsheets). Tasks that need no tools: rewriting an email, summarizing, explaining concepts. Prompt the LLM: "Given this goal and these actions, which need tools? Suggest a minimal set of simple ones with descriptions and inputs." Then instruct the agent precisely: "Use the calculator only for math; never guess."
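For reference, here is a minimal calculator tool in the JSON shape Anthropic's tool-use API expects (name, description, JSON-schema input); the precise description bakes the "never guess" rule into the tool itself:

```python
# A single, tightly-scoped tool definition. The description tells the
# model exactly when to use it and when not to.
calculator_tool = {
    "name": "calculator",
    "description": (
        "Evaluate an arithmetic expression. Use ONLY for math; "
        "never guess numeric results."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {
                "type": "string",
                "description": "An arithmetic expression, e.g. '2 * (3 + 4)'",
            }
        },
        "required": ["expression"],
    },
}
```

A minimal, well-described tool list like this is usually more reliable than a dozen overlapping tools the model has to choose between.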
Memory types:
- Short-term: Conversation history.
- Long-term: External (DB/docs/PDFs).
Test whether you need it by prompting the LLM with the role and goal: "Does this agent need conversational or external memory? Why?" Skip memory entirely if the agent works without it.
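A toy sketch of the two memory types, with a plain dict standing in for a real long-term store (database, docs, or a vector index):

```python
class Memory:
    def __init__(self, max_turns=10):
        self.history = []    # short-term: recent conversation turns
        self.docs = {}       # long-term: external store (stub)
        self.max_turns = max_turns

    def remember(self, role, text):
        # Append a turn and trim to keep the context window small.
        self.history.append((role, text))
        self.history = self.history[-self.max_turns:]

    def recall(self, keyword):
        # Naive long-term retrieval; real systems use embeddings/search.
        return [t for t in self.docs.values() if keyword.lower() in t.lower()]
```

Short-term memory is just the trimmed message list you pass back to the model; long-term memory is anything you fetch from outside the conversation.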
Real inputs are messy, vague, and full of slang (e.g., "Why the f did IRS charge us?"), so test against them rigorously. To debug, prompt: "Here is the agent's prompt, the input, and the output. What failed, and how do I fix it?"
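That debug prompt can be automated into a tiny harness that replays messy inputs and wraps each result for an LLM to diagnose; `debug_report` and its exact layout are illustrative assumptions:

```python
def debug_report(agent, inputs, system_prompt):
    # Replay each messy input through the agent and package the result
    # into the "what failed? fix?" debug prompt from the text above.
    reports = []
    for user_input in inputs:
        try:
            output = agent(user_input)
        except Exception as exc:
            output = f"<crashed: {exc}>"
        reports.append(
            f"Agent prompt: {system_prompt}\n"
            f"Input: {user_input}\n"
            f"Output: {output}\n"
            f"What failed, and how do I fix it?"
        )
    return reports
```

Feed each report back to the model (or read it yourself) to find the failure pattern before shipping.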
Scale to Multi-Agents Only When Single Fails
Master one agent first. Add more only for distinct skills or roles (e.g., a newsletter generator plus a frontend designer/deployer). Conditions for going multi-agent: the task splits cleanly, one agent struggles to do everything, or the agents need different permissions (e.g., access to private finance data).
Pipeline: input → analysis/writing → design/deployment. Use a supervisor (orchestrator) as the user-facing hub that routes work to sub-agents.
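A minimal sketch of the supervisor hub, with `route` standing in for an LLM classification call and the two sub-agents stubbed out:

```python
# Each sub-agent owns one distinct skill; the supervisor only routes.
SUB_AGENTS = {
    "write": lambda task: f"article draft for: {task}",
    "design": lambda task: f"HTML/CSS page for: {task}",
}

def route(task):
    # Hypothetical stand-in for an LLM router that classifies the task.
    return "design" if "html" in task.lower() else "write"

def supervisor(task):
    # User-facing hub: classify, dispatch, return the sub-agent's result.
    return SUB_AGENTS[route(task)](task)
```

The user only ever talks to `supervisor`; adding a deployer later means adding one entry to `SUB_AGENTS` and one branch to the router.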
Decide via a prompt: "This agent does X. Should it be a single agent or multiple? What roles, and why?" Start simple to keep the workflow sustainable.