Hyperframes: AI Pipeline for Website-to-Cinematic Videos

Hyperframes uses HTML compositions and a 7-step AI agent pipeline in Claude Code to turn any website into a 20-second Apple Keynote-style video—no After Effects needed.

HTML Beats React for AI-Driven Video Animations

Hyperframes outperforms Remotion for programmatic video because it builds compositions from plain HTML rather than React components, which lets AI agents produce smoother, more natural animations. Any landing page, design system, or CodePen demo can be pasted directly into an HTML composition and animated; in side-by-side prompt tests, React's abstractions produced clunky, unnatural movement. This DOM-based renderer suits AI-written videos and visual editors because HTML expresses visuals more directly than component trees. The trade-off: AI output quality is still early-stage, though user prompts and accumulated data improve it over time.

Setup takes minutes in Claude Code: install via npx create-hyperframes-app, then add the GSAP skills for professional animations (smooth, playful effects drawn from Webflow's library). A cold start with a descriptive prompt (e.g., "10-second intro with fade-outs, specific colors and typography") generates a previewable composition; run hyperframes preview for the editor view and hyperframes render to export an MP4.
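A minimal composition of the kind these prompts generate might look like the sketch below. The markup, class names, CDN URL, and timings are illustrative assumptions, not Hyperframes' actual output format; the point is that the whole scene is plain HTML driven by one GSAP timeline.

```html
<!-- Illustrative 10-second intro composition (structure assumed, not Hyperframes' real output) -->
<div class="scene" style="background:#0b0b0f;color:#fff;font-family:sans-serif">
  <h1 class="headline" style="opacity:0">Introducing Acme</h1>
  <p class="tagline" style="opacity:0">Ship faster. Stress less.</p>
</div>
<script src="https://cdn.jsdelivr.net/npm/gsap@3/dist/gsap.min.js"></script>
<script>
  // One GSAP timeline drives the scene; no hand-placed keyframes.
  const tl = gsap.timeline({ defaults: { duration: 1, ease: "power2.out" } });
  tl.to(".headline", { opacity: 1, y: -10 })
    .to(".tagline", { opacity: 1 }, "-=0.4")              // overlap for a natural feel
    .to(".scene",   { opacity: 0, duration: 1.5 }, "+=6"); // fade out near the 10s mark
</script>
```

Because the composition is ordinary DOM plus a timeline, an agent can edit it the same way it edits any other HTML file.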

7-Step Pipeline Transforms Websites into Product Videos

Warm start pulls any URL (e.g., linear.app, framer.com) through an automated 7-step agent pipeline: (1) Capture (DOM/text summary), (2) Design, (3) Script, (4) Storyboard, (5) VO timing, (6) Build, (7) Validate. Each step outputs artifacts that feed the next, and the agents trigger automatically when a prompt pairs a URL with a video request like "product launch" or "brand reel."
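The artifact handoff above can be sketched as a simple data flow. The step names follow the list; the artifact shapes, field names, and runPipeline helper are assumptions for illustration only:

```javascript
// Sketch of the 7-step pipeline: each step consumes the previous step's
// artifact and emits its own. All artifact contents here are illustrative.
const steps = [
  { name: "capture",    run: (url)        => ({ url, dom: "<summary>", text: "..." }) },
  { name: "design",     run: (capture)    => ({ palette: ["#5E6AD2"], typeface: "..." }) },
  { name: "script",     run: (design)     => ({ lines: ["Meet the new way to build."] }) },
  { name: "storyboard", run: (script)     => ({ shots: ["logo", "ui-popup", "tagline"] }) },
  { name: "voTiming",   run: (storyboard) => ({ cues: [{ at: 0, shot: "logo" }] }) },
  { name: "build",      run: (timing)     => ({ html: "<div class='scene'>...</div>" }) },
  { name: "validate",   run: (build)      => ({ ok: true, artifact: build }) },
];

// Run the chain: each step's output becomes the next step's input.
function runPipeline(url) {
  return steps.reduce((artifact, step) => step.run(artifact), url);
}
```

In the real tool each "run" is an agent invocation in Claude Code rather than a pure function, but the sequential artifact contract is the same.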

Prompt example: "Create a 20-second product launch video from linear.app. Make it feel like an Apple Keynote announcement." Results: the logo grows in as an SVG, UI elements pop up, particle effects fire, and taglines are purpose-built; cinematic output without manual keyframing. It works on Airbnb, Twitter, and YouTube too. The pipeline runs in Claude Code and produces editable previews for iteration.

Gemini Vision and Prompt Vocab Boost Quality

Default captures use DOM context (text, headings, CSS); adding a Gemini API key in a .env file enables vision-powered descriptions (e.g., detailed image breakdowns), which yield richer assets. Prompt tweaks from the Hyperframes guide refine outputs: "Swap to dark mode, add a fade-out, show a lower third at 3s with name and title." Vocabulary shifts like "Apple Keynote announcement," caption tones, transitions, and audio-reactive animations elevate results; feed the full guide to Claude to build custom skills.
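The vision upgrade is just a config change. The variable name below is an assumption; check the Hyperframes documentation for the exact key it reads:

```
# .env — enables Gemini vision captures (variable name assumed, not confirmed by the source)
GEMINI_API_KEY=your-key-here
```

With the key absent, captures fall back to DOM-only context as described above.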

Iterate by continuing the chat (e.g., fix a logo by supplying its Figma SVG). For founders, designers, and devs, this cuts video production from hours to seconds, though high-end polish still needs manual refinement.

Summarized by x-ai/grok-4.1-fast via openrouter


© 2026 Edge