SKILL.md Enforces Consistent Cortex Code Analysis

Upload SKILL.md to mandate a 4-step procedure in Snowflake Cortex Code: classify intent, run a ReAct loop on structured data (max 5 turns), extract facts from documents, and output a fixed 13-field report, delivering auditable, leadership-ready answers every time.

SKILL.md Delivers Repeatable AI Outputs Without New Capabilities

SKILL.md is a markdown file uploaded once to Snowflake Cortex Code's Skills feature. It enforces a mandatory 4-step procedure for every user query, regardless of phrasing or who asks. It doesn't add SQL, analysis, or Cortex functions (Coco already handles those); instead it guarantees identical reasoning patterns, tool calls, and a fixed 13-field structured report format. Key proof: every output includes SKILL_APPLIED: true, confirming the full procedure ran. This transforms variable prose responses into glanceable reports with labeled fields, traceable numbers, three findings, and three recommendations. For example, across datasets with SALES (67 rows, 64 won/3 lost deals), REVENUE_SUMMARY (regional summaries with anomaly flags), and DOCUMENTS (resignation emails, a deal-loss email, a Slack export), it connects structured metrics such as LATAM's 64% QoQ drop to unstructured root causes such as two rep resignations and one lost Argentina deal.
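To make the demo concrete, here is a minimal sketch of the three datasets' shapes and the structured-to-unstructured link the skill draws. The column names and the single illustrative SALES row are assumptions; the row counts, the -64% figure, and the document types come from the write-up:

```python
# Illustrative shapes for the three demo datasets (column names assumed).
SALES = [  # 67 rows in the real table: 64 won, 3 lost; one sample row here
    {"rep": "Sofia Reyes", "region": "LATAM", "quarter": "Q3",
     "amount": 40_000, "status": "won"},
]
REVENUE_SUMMARY = [
    {"region": "LATAM", "quarter": "Q3", "revenue": 310_000,
     "qoq_change_pct": -64, "anomaly_flag": True},
]
DOCUMENTS = [
    {"doc_type": "resignation_email", "author": "Carlos Lima"},
    {"doc_type": "resignation_email", "author": "Diego Herrera"},
    {"doc_type": "deal_loss_email", "subject": "Argentina deal lost"},
]

# The skill's job in one line each: find the structured anomaly,
# then pull the unstructured documents that explain it.
anomaly = next(r for r in REVENUE_SUMMARY if r["anomaly_flag"])
causes = [d for d in DOCUMENTS if d["doc_type"].endswith("email")]
```

The point is the join across boundaries: neither list alone answers "why did LATAM drop?", but together they do.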

The skill persists across sessions and schemas; update the table names in a few lines to reuse it on any Snowflake setup with transactions, summaries, and documents. Audit trails log every ReAct turn to AGENT_RUN_LOG, providing full traceability without manual intervention.
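The text names AGENT_RUN_LOG but not its schema, so the columns below are hypothetical. A minimal in-memory sketch of per-turn audit logging (in Snowflake this would be an INSERT into the table):

```python
import json
from datetime import datetime, timezone

def log_turn(log, run_id, turn, thought, action, observation):
    """Append one ReAct turn to an in-memory stand-in for AGENT_RUN_LOG.
    Column names (run_id, turn, thought, ...) are assumed, not documented."""
    log.append({
        "run_id": run_id,
        "turn": turn,
        "thought": thought,
        "action": action,          # e.g. the SQL text sent to sql_tool
        "observation": json.dumps(observation),
        "logged_at": datetime.now(timezone.utc).isoformat(),
    })

audit_log = []
log_turn(audit_log, "run-001", 1, "Find the regional outlier",
         "SELECT region, SUM(amount) FROM SALES GROUP BY region",
         {"LATAM": 310000})
```

Because every turn carries its run_id and SQL text, any number in the final report can be traced back to the query that produced it.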

4-Step Procedure Bridges Structured and Unstructured Data

Step 1 classifies query intent via CORTEX.CLASSIFY_TEXT(), routing to DataAgent, AnomalyAgent, ReportAgent, or ForecastAgent.
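A sketch of the routing step, assuming nothing beyond the four agent names in the text. The keyword heuristic below is a runnable stand-in for CORTEX.CLASSIFY_TEXT, which in Snowflake picks a label from a caller-supplied category list:

```python
AGENTS = ["DataAgent", "AnomalyAgent", "ReportAgent", "ForecastAgent"]

def classify_intent(query: str) -> str:
    """Stand-in for SNOWFLAKE.CORTEX.CLASSIFY_TEXT(query, AGENTS):
    a keyword heuristic so the routing logic is runnable offline."""
    q = query.lower()
    if "anomal" in q or "outlier" in q or "drop" in q:
        return "AnomalyAgent"
    if "forecast" in q or "predict" in q:
        return "ForecastAgent"
    if "report" in q or "summary" in q:
        return "ReportAgent"
    return "DataAgent"       # default route for plain data questions
```

The fixed category list is what makes routing deterministic: every query lands on exactly one of the four agents.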

Step 2 runs a ReAct loop (max 5 turns) via CORTEX.COMPLETE(). Each turn follows Thought (reason about the next data need), Action (SQL query via sql_tool), and Observation (inspect results). For the LATAM query, turn 1 identifies the Q3 outlier; turn 2 confirms the 64% drop and rep-level gaps (e.g., Sofia Reyes as the sole Q3 closer); the loop stops when confident, avoiding unnecessary queries, unlike static SQL.
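The loop structure can be sketched generically. Here `think` stands in for CORTEX.COMPLETE and `run_sql` for sql_tool; the scripted stub replays the two LATAM turns from the text, and the function names are illustrative, not from the skill itself:

```python
MAX_TURNS = 5

def react_loop(question, think, run_sql):
    """Generic ReAct driver. `think` returns (thought, sql_or_None);
    a None action means the model is confident and the loop stops early."""
    history = []
    for turn in range(1, MAX_TURNS + 1):
        thought, sql = think(question, history)
        if sql is None:                      # confident: no further query
            return history, thought
        observation = run_sql(sql)
        history.append({"turn": turn, "thought": thought,
                        "action": sql, "observation": observation})
    return history, "max turns reached"

# Scripted stub replaying the LATAM example's two turns, then stopping.
script = iter([
    ("Scan regional trends for an outlier",
     "SELECT region, SUM(amount) FROM SALES GROUP BY region"),
    ("Confirm the LATAM QoQ drop and rep-level gaps",
     "SELECT rep, COUNT(*) FROM SALES WHERE region = 'LATAM' GROUP BY rep"),
    ("LATAM fell 64% QoQ; Sofia Reyes was the sole Q3 closer", None),
])
history, answer = react_loop("Why did LATAM revenue drop?",
                             lambda q, h: next(script),
                             lambda sql: {"rows": ["stub result"]})
```

Note the early exit: the cap is 5 turns, but the LATAM run uses only 2 because turn 2 already answers the question.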

Step 3 uses CORTEX.EXTRACT_ANSWER() on DOCUMENTS to surface the why (e.g., the Carlos Lima/Brazil resignation, the Diego Herrera/Colombia resignation, the Argentina loss).
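A runnable stand-in for the extraction step, under the assumption that what matters here is the call shape (document text plus a question in, a short answer out); the naive word-overlap scorer below is only a placeholder for the actual Cortex model, and the sample email is illustrative:

```python
def extract_answer(document: str, question: str) -> str:
    """Stand-in for SNOWFLAKE.CORTEX.EXTRACT_ANSWER(document, question):
    returns the sentence with the most word overlap with the question."""
    q_words = set(question.lower().split())
    best, best_score = "", 0
    for sentence in document.split("."):
        score = len(q_words & set(sentence.lower().split()))
        if score > best_score:
            best, best_score = sentence.strip(), score
    return best

doc = ("I am writing to resign from my role covering Brazil. "
       "My last day will be Friday. Regards, Carlos Lima.")
answer = extract_answer(doc, "Who is resigning and from which market?")
```

In the skill, this runs once per relevant document, turning the DOCUMENTS table into the "why" half of the report.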

Step 4 synthesizes everything into the identical 13-field report: e.g., FINDING_1: "LATAM revenue dropped 64% QoQ from $860K to $310K"; RECOMMENDATION_1: "Hire 2 reps for Brazil/Colombia"; plus metrics, evidence, and SKILL_APPLIED: true.
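The text names only a few of the 13 fields, so the full field list below is an assumption; what comes from the source is the fixed count and order, the three findings, the three recommendations, and SKILL_APPLIED: true. The sample values past FINDING_1/RECOMMENDATION_1 are illustrative:

```python
REPORT_FIELDS = [  # 13 fields; names beyond the documented ones are assumed
    "QUERY", "INTENT", "METRIC_1", "METRIC_2", "METRIC_3",
    "FINDING_1", "FINDING_2", "FINDING_3",
    "RECOMMENDATION_1", "RECOMMENDATION_2", "RECOMMENDATION_3",
    "EVIDENCE", "SKILL_APPLIED",
]

def build_report(values: dict) -> str:
    """Render the fixed-field report; missing fields fail loudly so the
    format can never silently drift between runs."""
    values = {**values, "SKILL_APPLIED": "true"}   # always stamped last
    missing = [f for f in REPORT_FIELDS if f not in values]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return "\n".join(f"{f}: {values[f]}" for f in REPORT_FIELDS)

report = build_report({
    "QUERY": "Why did LATAM revenue drop?", "INTENT": "AnomalyAgent",
    "METRIC_1": "Q3 revenue $310K", "METRIC_2": "Q2 revenue $860K",
    "METRIC_3": "QoQ change -64%",
    "FINDING_1": "LATAM revenue dropped 64% QoQ from $860K to $310K",
    "FINDING_2": "Two reps resigned (Brazil, Colombia)",
    "FINDING_3": "One Argentina deal was lost",
    "RECOMMENDATION_1": "Hire 2 reps for Brazil/Colombia",
    "RECOMMENDATION_2": "Run a post-mortem on the Argentina loss",
    "RECOMMENDATION_3": "Monitor the Q4 LATAM pipeline weekly",
    "EVIDENCE": "AGENT_RUN_LOG run-001; resignation emails",
})
```

Rendering from a fixed list rather than free-form prose is what makes every report glanceable and diffable across runs.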

ReAct Pattern Ensures Completeness Over One-Shot Queries

ReAct outperforms a single SQL query by iteratively deciding what to ask next based on prior observations: start broad (regional trends), drill down (QoQ history, rep deals), then integrate unstructured facts. This yields complete answers that neither source alone provides: SALES and REVENUE_SUMMARY show what happened (the 64% drop), while DOCUMENTS explains why (attrition, a lost deal). Pre-skill, a useful narrative varied run to run; post-skill, the same depth and format apply to any query (e.g., "Q3 closed-lost deals" lists the 3 losses with reasons). The result: leadership gets board-ready reports in seconds instead of hours of analyst query-chaining, fostering a habit of querying Coco first.

Three Core Benefits: Consistency, Completeness, Commitment

Consistency: Same procedure/format every query, eliminating format variance.

Completeness: ReAct + extraction crosses data boundaries for root-cause synthesis.

Commitment: SKILL_APPLIED: true plus the logs verify rigor, building trust for production use. In the demo, the VP gets actionable LATAM intel (numbers, causes, hires) instantly, scalable to pipeline and forecasts; one upload shifts Coco from experimental to reliable.

Summarized by x-ai/grok-4.1-fast via openrouter


© 2026 Edge