Build WATSON: Lateral AI Agent for Original Content Ideas

Replace boring AI summaries with WATSON, a Claude Code agent that cross-pollinates 20+ broad sources against your brand docs to generate novel, non-obvious content angles via lateral thinking.

Standard AI Research Produces Convergent, Boring Ideas—Lateral Cross-Pollination Delivers Breakthroughs

AI tools excel at summarizing the same well-trodden sources, TechCrunch, Hacker News, and the like, so every creator gets the identical 10 talking points within 48 hours and content homogenizes. This mirrors entropy in information theory: systems that feed on their own output degrade signal into noise, as shown by the Nature study on model collapse, in which AI models trained on AI-generated text degrade in quality. Psychologically, creators avoid experimentation and retreat to safe, generic frameworks.

Breakthroughs require Edward de Bono's lateral thinking: introduce random constraints from outside the niche, like solving a marketing problem via 19th-century naval tactics or Pixar's feedback sessions. Sherlock Holmes catalogs data flawlessly but cannot tell a story; John Watson adds human context, emotional resonance, and cultural weight. Build agents that emulate Watson: reject keyword matching, demand structural similarities (not surface resemblances), and filter everything through your brand positioning (voice DNA, audience profiles, content pillars pulled from Notion or Google Drive).

Discard shallow connections per rules like "No stretching logic" or "If ChatGPT would suggest it, kill it." Target idea types: explainer-with-depth (trend + psych framework + brand tie), contrarian (mainstream + opposition + reframe), unexpected analogy (unrelated domain mapped to topic). Score on timeliness, originality, brand fit, combo strength, engagement (High/Medium/Low).
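The five-dimension rubric can be sketched as a tiny scorer. A minimal sketch, assuming a 3/2/1 weighting: the source only specifies High/Medium/Low per dimension, not how (or whether) the levels combine numerically.

```python
from dataclasses import dataclass, fields

# Hypothetical weighting; the source gives only High/Medium/Low labels.
LEVELS = {"High": 3, "Medium": 2, "Low": 1}

@dataclass
class IdeaScore:
    timeliness: str
    originality: str
    brand_fit: str
    combo_strength: str
    engagement: str

    def total(self) -> int:
        # Sum the five dimensions; range is 5 (all Low) to 15 (all High).
        return sum(LEVELS[getattr(self, f.name)] for f in fields(self))

score = IdeaScore("High", "High", "Medium", "High", "Medium")
print(score.total())  # 13
```

A numeric total makes it easy to rank a 5-idea batch, but the labels alone are what the agent actually emits.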

Modular Claude Code Architecture Separates Identity, Rules, and Skills for Scalable Agents

Ditch single 300-line Markdown files; use a directory structure for any Claude Code agent:

agent-name/
├── CLAUDE.md              # Identity, mission, capabilities
├── claude/
│   ├── rules/             # Always-on constraints (fire first)
│   └── skills/            # On-demand workflows
├── inbox/, outputs/, archive/

Identity (CLAUDE.md): Define as "senior content strategist specializing in cross-domain ideation." Mission: non-obvious world-to-brand connections. Core principle: despise generic takes.
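A minimal CLAUDE.md along these lines (wording reconstructed from the summary, not the actual file):

```markdown
# WATSON

You are a senior content strategist specializing in cross-domain ideation.

Mission: surface non-obvious connections between what is happening in the
world and this brand's positioning.

Core principle: despise generic takes. If any summarizer could produce the
idea, it does not ship.
```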

Rules (always-on, 5 files):

  • 00-onboarding.md: Locate brand docs or halt.
  • 01-scope-assessment.md: Searchable topic? Research. Personal? Ask once: research themes or riff on brand?
  • 02-execution-rules.md: Enforce "diverse ideas only," "sacred connection paragraph" proving structural similarity, no shallow combos.
  • 03-data-source-config.md: Read all brand files every run.
  • 04-reddit-crawling.md: Bypass Reddit blocks via Markdown converter for unfiltered opinions.
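Rule 04's workaround isn't spelled out in the summary. One concrete sketch, swapping in Reddit's public `.json` endpoint with a descriptive User-Agent in place of whatever Markdown converter the agent actually uses (an assumption, not the documented approach):

```python
import json
import urllib.request

def reddit_json_url(thread_url: str) -> str:
    # Reddit exposes a JSON view of any listing or thread by
    # appending ".json" to the canonical URL.
    return thread_url.rstrip("/") + ".json"

def fetch_thread(thread_url: str, user_agent: str = "watson-research/0.1"):
    # Default library User-Agents are commonly blocked; a descriptive
    # one usually suffices for light, occasional reads.
    req = urllib.request.Request(
        reddit_json_url(thread_url),
        headers={"User-Agent": user_agent},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For heavy crawling, the official Reddit API with authentication is the supported route; this is only a light-read fallback.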

Skills (on-demand):

  • 00-setup-datasource.md: One-time brand doc validation.
  • 01-idea-generation-pipeline.md (6 steps):
      1) Sweep 20+ sources (broad queries plus targeted: Reddit, HN, papers, blogs).
      2) Categorize into 5 always-on lenses (news, opinions, contrarians, psych/behavior, analogies) plus 8 conditional lenses (business, tech, culture, history, data, regulations, creators, failures).
      3) Build a 15-30 row table of tagged findings; no early filtering.
      4) Load brand docs.
      5) Cross-pollinate for surprise and novelty.
      6) Score ideas.
  • 02-output-format.md: Per idea—type, angle, connection para, title/subtitle/hook, controversy, scores, sources, adjacents.
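The six pipeline steps reduce to a thin orchestration chain. A sketch with stubbed phases (every function name here is hypothetical; in the real agent each step is a prompt-driven phase inside the skill file, not Python):

```python
ALWAYS_ON_LENSES = ["news", "opinions", "contrarians", "psych/behavior", "analogies"]

def sweep(sources, topic):
    # Step 1: broad plus targeted queries across 20+ sources (stubbed).
    return [{"source": s, "topic": topic} for s in sources]

def categorize(findings):
    # Step 2: tag each finding with a lens (round-robin stand-in for
    # the agent's actual judgment call).
    return [dict(f, lens=ALWAYS_ON_LENSES[i % len(ALWAYS_ON_LENSES)])
            for i, f in enumerate(findings)]

def build_table(tagged):
    # Step 3: keep every tagged finding; no early filtering.
    return tagged

def cross_pollinate(table, brand_pillars):
    # Step 5: pair findings with brand pillars; the real agent keeps
    # only pairs with a defensible structural similarity.
    return [(row, pillar) for row in table for pillar in brand_pillars]

def run_pipeline(topic, sources, brand_pillars):
    # Step 4 (loading brand docs) is folded into the brand_pillars
    # argument; step 6 (scoring) is omitted here.
    table = build_table(categorize(sweep(sources, topic)))
    return cross_pollinate(table, brand_pillars)

ideas = run_pipeline("Nano Banana 2", ["reddit", "hn", "papers"], ["voice DNA"])
print(len(ideas))  # 3
```

The key structural point survives the stubbing: filtering happens late (step 5), after everything has been collected and tagged, never during the sweep.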

Build an "Agent Optimizer" skill first: feed it a bulky single-file agent and it outputs the modular structure. A YAML frontmatter header auto-loads the agent: name: watson-editorial-researcher, model: opus, memory: project. Quick test prompt: find 3 connections (one psychological, one from an unrelated domain, one from Reddit), each with a structural-similarity explanation.
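The frontmatter keys as given, written out as a YAML header:

```yaml
---
name: watson-editorial-researcher
model: opus
memory: project
---
```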

WATSON Generates High-Impact Ideas: Nano Banana 2 Examples

Input: "Nano Banana 2" (Google's fast AI image model). Output: 25+ sources, 11 categories, 25-row table, 5 ideas.

Idea 1: Visual Elevator Music Problem (High scores)—Links ScienceDirect paper (700 trajectories converging to identical outputs over 100 iterations, termed "visual elevator music") to NB2's 4-8s speed and brand fear of dilution. Connection: "Faster tools accelerate uninterrupted iteration toward homogenized slop, surrendering taste quicker."

Idea 2: Google Solved Face Consistency; Fix Your Voice Drift—NB2 maintains 5 characters' appearances across workflows via stable reference architecture. Analogy: Text AI forgets your voice unless you build identity-holding systems (e.g., brand docs). No summarizer links image tech to text voice stability.

Before WATSON, the instinct is to rush out generic features and use cases. After, you get ScienceDirect convergence loops, de Bono lateral thinking, Reddit friction, and the voice-consistency analogy: unique angles that preserve the creator's voice while still covering the news.

Summarized by x-ai/grok-4.1-fast via openrouter

© 2026 Edge