Gemini Skills Make Chrome a Multi-Tab Agent Workflow Hub
Chrome's Gemini Skills enable reusable prompts across tabs for tasks like spec comparison, reducing retyping friction; Gemini Robotics-ER 1.6 hits 93% gauge-reading accuracy with agentic vision; Vantage's executive-LLM evaluator scores human skills like creativity at 0.88 correlation with expert raters.
Browser-Level Prompt Templating Solves Repetitive Analysis
Save prompts as 'skills' in Chrome's Gemini and run them instantly on the current tab or across multiple tabs, eliminating retyping for tasks like ingredient analysis, spec comparison, or document summarization. The feature has been available since April 14 on Mac, Windows, and ChromeOS (English, US only) and ships with a built-in skill library for tasks like gift picking and key-info extraction. Multi-tab execution turns the browser into a retrieval system: open five product pages, trigger once, get one unified comparison. Safety gates require approval for actions like sending email. This exposes prompt libraries (previously an engineer-only pattern via tools like LangChain) to everyday users, paving the way for browser agents with persistent workflows.
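The multi-tab pattern is simple to see in code. A minimal sketch, assuming a saved skill is a prompt template and each open tab contributes its page text; `Skill`, `run_skill`, and the URLs are all hypothetical names, and the LLM call is left abstract:

```python
# Hypothetical sketch of a reusable prompt "skill" applied across open tabs.
# Skill, run_skill, and the example URLs are illustrative, not Chrome's API.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    template: str  # prompt with a {pages} placeholder

    def render(self, pages: dict) -> str:
        # Fold every open tab's content into one unified prompt.
        joined = "\n\n".join(f"[{url}]\n{text}" for url, text in pages.items())
        return self.template.format(pages=joined)

def run_skill(skill: Skill, open_tabs: dict, run_llm) -> str:
    # One trigger, one LLM call over all tabs at once.
    return run_llm(skill.render(open_tabs))

compare = Skill("spec-compare",
                "Compare these product pages and rank by value:\n{pages}")
tabs = {"https://a.example": "Laptop A: 16GB RAM, $999",
        "https://b.example": "Laptop B: 8GB RAM, $799"}
prompt = compare.render(tabs)  # the single prompt the skill would send
```

The design choice worth noting: the skill stores only the template, so the same saved prompt works on one tab or fifty without edits.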
Enterprise Agents and Desktop Execution Emerge
Gemini Enterprise is testing an 'Agent' tab with 'New Task' and 'Inbox' views for multi-step workflows: define a goal, connect apps and files, and toggle human review on or off. The design mirrors Claude's workspace and hints at desktop integration via future apps. NotebookLM adds Canvas for turning sources into timelines, visualizers, and apps, plus Connectors for pulling external data and autolabeling for navigating large datasets, shifting it from static analysis to a dynamic research hub.
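The human-review toggle described above amounts to a gate in the task loop. A minimal sketch, assuming a task is a list of step names and an `approve` callback stands in for the reviewer; all names here are hypothetical, not Gemini Enterprise's API:

```python
# Hypothetical sketch of a multi-step agent task with a human-review toggle.
# Step names and the approve callback are illustrative.
def run_task(steps, require_review=True, approve=lambda step: True):
    completed = []
    for step in steps:
        # With review enabled, every step must be approved before it runs.
        if require_review and not approve(step):
            return completed, f"paused for review at: {step}"
        completed.append(step)
    return completed, "done"

steps = ["gather sources", "draft summary", "send email"]
# Auto-approve everything except outbound actions like emailing.
done, status = run_task(steps, approve=lambda s: "email" not in s)
# done == ["gather sources", "draft summary"];
# status == "paused for review at: send email"
```

Pausing (rather than skipping) a rejected step matches the safety-gate behavior: the workflow waits in the Inbox until a human signs off.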
Robotics Reasoning Jumps to Production Reliability
Gemini Robotics-ER 1.6 improves spatial reasoning (pointing, counting, object relations) and success detection via multi-view fusion, preventing wasted retries in occluded environments. New instrument reading (gauges, meters) uses agentic vision: the model zooms in, estimates the needle's position as a proportion of the scale, and applies world knowledge about the instrument. On a Spot robot, success rises from ER 1.5's 23% to 93% with agentic vision, crucial for real facilities, where hallucinated readings cause failed grasps.
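The proportion-estimation step can be expressed as plain arithmetic: once the model has the needle angle and the angles of the gauge's min/max markings, the reading is a linear interpolation. A minimal sketch; the specific angles and PSI scale are made-up examples, not values from the evaluation:

```python
# Hypothetical sketch of the proportion step in agentic gauge reading:
# map the needle's angular fraction of the arc onto the printed scale.
def read_gauge(needle_deg, min_deg, max_deg, min_val, max_val):
    # Fraction of the arc the needle has swept past the minimum marking.
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    return min_val + frac * (max_val - min_val)

# A 0-100 PSI gauge sweeping from -135° to +135°; needle at 0° is midscale.
print(read_gauge(0, -135, 135, 0, 100))  # → 50.0
```

The hard part the model solves is not this formula but the vision: locating the needle and the scale endpoints reliably enough that the interpolation is meaningful.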
LLMs Evaluate 'Durable' Human Skills Accurately
Vantage deploys an executive LLM that steers AI personas in conversation, probing collaboration, creativity, and critical thinking against a rubric (e.g., injecting conflict to test resolution skills). It outperforms independent agents: a 92.4% evidence rate for project management and 85% for conflict resolution; scoring matches human raters (Cohen's Kappa 0.45-0.64), and creativity scores correlate 0.88 with experts across 180 submissions. The system can simulate skill levels for cheap testing and outputs interpretable skill maps linked to specific conversation segments.
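Cohen's Kappa, the agreement statistic cited above, corrects raw rater agreement for agreement expected by chance. A minimal sketch of the computation; the rating lists are made-up examples, not Vantage's data:

```python
# Cohen's Kappa for two raters over categorical rubric scores:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

llm_scores   = [2, 3, 1, 2, 3, 2, 1, 3]  # illustrative rubric scores
human_scores = [2, 3, 1, 3, 3, 2, 1, 2]
kappa = cohens_kappa(llm_scores, human_scores)  # 0 = chance, 1 = perfect
```

On this toy data kappa lands near 0.62; values in the reported 0.45-0.64 band are conventionally read as moderate-to-substantial agreement.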