Fix AI Blind Spots in Human Design Systems
Current Figma design systems fail AI agents because they don't answer five key questions: Should I use this component? Which variant? What goes inside? What rules apply? What should I avoid? Agents hallucinate buttons, spacing, and variants because raw files, readmes, and vague prompts provide no structured data. The solution: encode components as queryable metadata that mirrors human design decisions, letting agents pull exact props into Storybook and speeding design-to-code by roughly 10x.
Semantic naming beats technical naming (e.g., 'emphasis-default-subtle' over 'primary/secondary' with raw hex codes) because it speaks the 'English' AI understands. Add descriptions to all tokens (e.g., 'hover state on items, subtle raising') so agents grasp usage context such as 'active items, emphasized.' Anti-patterns are as crucial as patterns: explicitly define 'never do X' rules (e.g., never place two primary buttons side by side; never use buttons for navigation) to prevent misuse.
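To make this concrete, here is a minimal sketch of semantic tokens with agent-readable descriptions and encoded anti-patterns. The token names, hex values, and field names are illustrative assumptions, not taken from any real design system:

```typescript
// Hypothetical token sketch: names, values, and descriptions are illustrative.
type DesignToken = {
  value: string;
  description: string; // plain-English usage context an agent can query
};

const emphasisTokens: Record<string, DesignToken> = {
  "emphasis-default": {
    value: "#1f2937",
    description: "Default emphasis for active items; highlights the current selection.",
  },
  "emphasis-subtle": {
    value: "#e5e7eb",
    description: "Subtle raising for hover states on list items.",
  },
};

// Anti-patterns encoded as queryable data, not tribal knowledge.
const buttonAntiPatterns: string[] = [
  "Never place two primary buttons side by side.",
  "Never use a button for navigation; use a link instead.",
];
```

The point is that each token carries its own usage context, so an agent never has to guess what a hex value means.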
Three Pillars for Every Agent-Ready Component
Build components around props, relationships, and tokens:
- Props: Capture states (primary/hover/press/disabled), variants (appearance/size/density), and boolean or utility props (loading, leading icon, onClick). List every Figma definition explicitly.
- Relationships: Define hierarchy (child/parent), common contexts (forms, dialogs, toolbars), and purpose (e.g., button as 'interactive trigger for a single decisive action, the most common primitive'). Use exactly one component per intent; let variants, not duplicates, signal hierarchy.
- Tokens: Reference spacing, colors (e.g., core-gray-200), typography from Figma variables. Ensure inheritance (e.g., fonts from repo) for consistency.
Metadata output includes category (e.g., atom), variant explanations (primary for the main CTA, destructive for irreversible actions), common patterns (submit in forms), and AI hints. Review and iterate: agents miss details like loading states or tokens, so query and fix the gaps (e.g., 'Why is there no font inheritance? Update to pull fonts from the cal.com repo.').
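Putting the three pillars together, a component's metadata might look like the sketch below. The schema and field names are assumptions for illustration; the skill's actual output format may differ:

```typescript
// Illustrative metadata schema; field names are assumptions, not the
// skill's actual output format.
type ComponentMetadata = {
  name: string;
  category: "atom" | "molecule" | "organism";
  props: {
    variants: Record<string, string>; // variant name -> when to use it
    states: string[];
    booleans: string[];
  };
  relationships: {
    purpose: string;
    commonContexts: string[];
  };
  tokens: string[]; // referenced Figma variables
  antiPatterns: string[];
  aiHints: string[];
};

const buttonMetadata: ComponentMetadata = {
  name: "Button",
  category: "atom",
  props: {
    variants: {
      primary: "Main call to action; at most one per view.",
      destructive: "Irreversible actions such as delete.",
    },
    states: ["default", "hover", "press", "disabled"],
    booleans: ["loading", "leadingIcon"],
  },
  relationships: {
    purpose: "Interactive trigger for a single decisive action; the most common primitive.",
    commonContexts: ["forms", "dialogs", "toolbars"],
  },
  tokens: ["core-gray-200"],
  antiPatterns: ["Never use a button for navigation."],
  aiHints: ["Prefer the primary variant for form submission."],
};
```

Because every field is structured data, an agent can answer all five key questions (use it? which variant? what inside? what rules? what to avoid?) with a lookup rather than a guess.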
Workflow: Figma MCP + Claude Skill to Storybook
- Install the AI Component Metadata skill (npx claude skill): it generates templates for metadata, CSS, component, stories, tests, and an index file per component.
- Branch the repo, create a sibling UI package (e.g., Next.js), and define the metadata schema in Markdown using the skill.
- Spin up Storybook (use the Context7 plugin for up-to-date docs).
- In Figma: ensure variants and states are clearly defined and tokens are semantic and descriptive. Copy the component link.
- Claude prompt: 'Using the Figma MCP and the metadata schema, turn the linked button into an agent-ready Storybook component.' This generates the six files; review the output (e.g., fix the destructive hover state, loading styles, and fonts).
- Iterate: add anti-patterns (e.g., no disabled navigation) and test in Storybook to enforce the visual rules.
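The stories step above can be sketched as a CSF-style module. This is a minimal, dependency-free approximation: in a real project the types come from `@storybook/react` and `Button` from the UI package, and the prop and parameter names here are illustrative assumptions:

```typescript
// Dependency-free sketch of a CSF-style stories module; in a real project
// the meta/story types come from @storybook/react. Names are illustrative.
type ButtonProps = {
  appearance: "primary" | "secondary" | "destructive";
  size: "sm" | "md" | "lg";
  loading?: boolean;
  disabled?: boolean;
  children: string;
};

const meta = {
  title: "Primitives/Button",
  // Agent-facing rules live alongside the stories themselves.
  parameters: {
    antiPatterns: [
      "No two primary buttons side by side.",
      "No disabled navigation.",
    ],
  },
};
export default meta;

export const Primary = {
  args: { appearance: "primary", size: "md", children: "Submit" } satisfies ButtonProps,
};

export const DestructiveLoading = {
  args: {
    appearance: "destructive",
    size: "md",
    loading: true,
    children: "Delete",
  } satisfies ButtonProps,
};
```

Keeping anti-patterns in the story parameters means both humans browsing Storybook and agents reading the file see the same rules next to the same variants.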
Scale by building more components (icons next), refine the process, then skill-ify it for reuse. The result: agents build pages from context-aware components, creating a living source-of-truth library. For teams, workshops can personalize this approach for tighter design-dev loops.