Self-Improving LinkedIn Pipeline with Claude Code & Autoresearch
Duncan Rogoff uses Claude Code to build a daily automated system that generates lead magnets, LinkedIn posts with scroll videos, publishes via Blot, scrapes metrics with Apify, and applies Karpathy's autoresearch loop to iteratively boost performance—all running on GitHub Actions.
From Manual Posts to Autonomous Content Flywheel
Duncan Rogoff, a former art director for Apple, PlayStation, and Nissan now running a six-figure AI agency, faced the challenge of scaling LinkedIn content manually. His posts garnered 4,500 to 90,000 impressions, driving social proof, leads, and community signups to Buildroom (his Skool community for AI-powered personal brands). But consistency required hands-on effort: crafting lead magnets in Notion, writing posts, recording 6-7 second scroll videos with Gotham font overlays (ultra/medium weights, neon green box with black stroke), and overlaying captions. The opportunity? Automate end-to-end while making it self-improving via Karpathy's autoresearch concept—a feedback loop where engagement metrics (hooks, formats, lengths, angles, topics) refine future outputs.
He rejected both pure manual scaling and basic automation without a learning loop. Instead, he chose Claude Code in the Antigravity IDE for its plan mode, agent teams, browser capabilities, and parallel execution. The tradeoffs: agent teams (multiple sub-agents collaborating) cost more tokens than a single agent, but delivered a cohesive build faster (about 10 minutes total). GitHub Actions beat local runs for scheduling reliability, though it required secure secrets storage for API keys. The Apify scraper ($5 per 1,000 results, with a $5 free monthly credit) won out over the direct LinkedIn API or Claude's own browser scraping for simplicity and proven JSON output (post text, URL, reactions, likes, comments).
"Claude Code is changing the way I do everything and the way I run my business." – Duncan introduces the build, highlighting its business leverage for a technical founder juggling agency work.
Architecture: Daily Generation + Weekly Optimization Loop
The system runs three GitHub Actions:
- Daily Pipeline (9 AM): Scrapes Reddit for trending topics (audience-aligned: experts with low online visibility seeking AI content strategies). Claude Code's lead magnet skill generates Notion pages (e.g., prompt packs, frameworks with storytelling). It then crafts a LinkedIn post emphasizing personal hooks ("I grew my LinkedIn to 10k followers using Claude Code") and numbers for performance. Publishes via Blot (pre-configured MCP). Records a browser-scroll video of the Notion page, burns in branded overlay. Stores post ID, hook type, text, angle in Notion's tracking database.
- Metrics Scraper (10 AM): Apify actor (high-rated LinkedIn scraper) fetches engagement for recent posts (initially seeded with Rogoff's last 20). Updates Notion with likes, comments, shares, impressions.
- Weekly Autoresearch (Sundays, tunable to 2-3x/week): Analyzes Notion data for patterns in hooks, line length, format, post length, content type, angle. Rewrites strategy (e.g., favor I-statements with metrics if they outperform). Feeds improved prompts back into daily pipeline.
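The daily pipeline's orchestration can be sketched in plain Python. Every function below is a hypothetical stub standing in for a step Claude Code wired up (Reddit scraping, Notion page creation, Blot publishing, browser video); only the control flow and the tracked fields mirror the build described above, not Rogoff's actual repo code.

```python
"""Hypothetical sketch of the 9 AM daily pipeline's orchestration.

All function bodies are stubs; the real steps call Reddit, Notion, Blot,
and Claude's built-in browser.
"""
from dataclasses import dataclass
from datetime import date


@dataclass
class TrackedPost:
    """One row in the Notion tracking database."""
    post_id: str
    hook_type: str
    text: str
    angle: str
    posted_on: str


def scrape_reddit_trends() -> str:
    # Stub: real step pulls audience-aligned trending topics from Reddit
    return "AI content strategies for low-visibility experts"


def write_linkedin_post(topic: str) -> tuple[str, str]:
    # Stub: real step applies the current strategy (personal hooks + numbers)
    hook = "I grew my LinkedIn to 10k followers using Claude Code"
    return f"{hook}\n\nHere's the playbook for {topic}...", "personal + number"


def publish_via_blot(text: str) -> str:
    # Stub: real step posts through the pre-configured Blot MCP
    return "post_001"


def run_daily_pipeline() -> TrackedPost:
    topic = scrape_reddit_trends()
    text, hook_type = write_linkedin_post(topic)
    post_id = publish_via_blot(text)
    # Scroll-video recording and overlay burn-in omitted here; Claude's
    # built-in browser handles those steps in the actual build.
    return TrackedPost(post_id, hook_type, text, topic, date.today().isoformat())


record = run_daily_pipeline()
```

Logging the hook type and angle alongside the post ID is what makes the weekly analysis possible later: without per-post features, there is nothing for the loop to correlate engagement against.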
Key integrations:
- Notion: Lead magnets, post storage, results DB (auto-created by Claude).
- Blot: Autopublish posts.
- Apify: Async actor run + dataset items endpoint for metrics JSON.
- GitHub: Repo for code/files, secrets (Anthropic API, Apify token, Notion, Blot), workflows.
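The Apify leg of the integrations can be sketched as follows. The `/acts/{id}/runs` and `/datasets/{id}/items` routes are real Apify v2 endpoints; the actor input schema (`postUrls`) and the returned field names are assumptions, since both vary per actor.

```python
"""Hedged sketch of the 10 AM metrics scrape against Apify's v2 REST API."""
import json
import os
import urllib.request

APIFY = "https://api.apify.com/v2"


def start_run_url(actor_id: str, token: str) -> str:
    # Async actor run: a POST here returns a run object with defaultDatasetId
    return f"{APIFY}/acts/{actor_id}/runs?token={token}"


def dataset_items_url(dataset_id: str, token: str) -> str:
    # Once the run finishes, its dataset holds one JSON item per scraped post
    return f"{APIFY}/datasets/{dataset_id}/items?token={token}&format=json"


def scrape_metrics(actor_id: str, post_urls: list[str]) -> list[dict]:
    token = os.environ["APIFY_TOKEN"]  # stored as a GitHub Actions secret
    body = json.dumps({"postUrls": post_urls}).encode()  # input schema is actor-specific
    req = urllib.request.Request(
        start_run_url(actor_id, token),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        run = json.loads(resp.read())["data"]
    # Polling for run completion is omitted for brevity.
    items_req = dataset_items_url(run["defaultDatasetId"], token)
    with urllib.request.urlopen(items_req) as resp:
        return json.loads(resp.read())  # one dict per post: text, URL, reactions, comments
```

Keeping the token in a GitHub Actions secret (rather than the repo) is the secure-storage tradeoff noted above.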
Claude Code handled adding the Gotham font (pulled from a font library) and recording video via its built-in browser. Seeding used real examples: Rogoff's top posts (e.g., one at 25k impressions) as MD files, plus saved high-performers, plus an audience-info MD (pain points: deep expertise without online visibility).
Tradeoffs surfaced: weekly research is deliberately conservative so data can accumulate (running it more often risks acting on noisy signals); Apify is cheap but an external cost, versus free yet less reliable scraping through Claude's browser; agent teams parallelize the build but are token-heavy.
"Auto research is taking over the web right now. Basically, all it is is this self-improving loop that this guy Carpathy created essentially for machine learning, but now people are adapting it to all sorts of other use cases." – Explains the core loop: generate → measure → analyze → iterate, adapted from ML evals to content.
Build Process: Plan, Execute, Debug with Claude
The build started in Antigravity: /plan mode for a brain-dump (lead magnet skill + autoresearch + Notion + Apify + video). Rogoff fed it the GitHub repo link, an example MP4, and MDs of high-performing posts. Claude output a thorough plan unprompted: daily generation and scraping, weekly rewrite. He refined it via chat: added the audience MD and the starting hypothesis (personal hooks + numbers), and confirmed the agent team.
Execution: a 10-minute agent-team build pushed the full codebase to GitHub (Python-heavy, TypeScript optional). Rogoff added the secrets manually (Anthropic, Apify, etc.), then tested the workflows and iterated on errors.
Debugging chain:
- Initial error (workflow permissions): Copied log → Claude diagnosed/fixed all three workflows.
- No initial metrics: Seeded with 20-post scrape.
- Scraper mismatch: Specified Apify actor ID, sample JSON response.
"Troubleshooting is 90% of the job. You have to get comfortable spending a little bit of time asking the right questions and working with Claude to basically cover the last 5 to 10% of the project." – Rogoff on the reality of AI builds, emphasizing iterative prompting over one-shot perfection.
Claude's plan quote (paraphrased in voice but verbatim intent): "Build a fully automated, self-improving LinkedIn lead magnet system that runs daily on GitHub Actions. Each day it generates a lead magnet plus a LinkedIn post, publishes via Blotato, creates a six to seven second Notion scroll video... tracks engagement via Apify, and runs a weekly autoresearch loop."
Results and Early Signals
Post-build: the Notion DB auto-populated, and the pipelines ran successfully after fixes. The system was primed with Rogoff's historical data (hooks, impressions). No live metrics yet (the transcript cuts off mid-seeding), but the loop is positioned to compound: poor hooks get dropped, winners get amplified.
Business impact projected: More consistent high-impression posts (target 90k+) → amplified social proof → agency leads + Buildroom growth. Cost: Negligible (Apify <$5/month initially).
"Better content on LinkedIn creates more social proof for me, which leads to more leads for my business, which then leads to more social proof for me, which leads to more leads for my business." – Ties content directly to flywheel of proof → leads → proof.
Key Takeaways
- Use Claude Code's /plan mode + real examples (MD posts, MP4 demos, audience MD) to bootstrap complex systems; agent teams for parallelism despite token cost.
- Schedule via GitHub Actions with secrets; Apify for cheap, structured scraping (specify actor + sample JSON).
- Seed autoresearch with historical data (20+ posts) for faster convergence; track specifics: hooks, lengths, formats, angles.
- Hypothesis-driven starts (e.g., I-statements + numbers) + Reddit trends for relevance; tune research frequency (2-3x/week post-seed).
- Debug by pasting full errors into Claude—expect 90% troubleshooting; fixes often cascade across workflows.
- Chain tools orthogonally: Notion for storage, Blot for publishing, the browser for video; end-to-end coverage without custom infra.
- Personalize: Feed audience pains/hopes; test scroll videos at 6-7s with branded overlays for engagement.
- Measure everything in Notion DB upfront; let loop rewrite prompts autonomously.