Master Gemini CLI for Vibe Coding in Terminal

Set up Gemini CLI in Google Cloud Shell, engineer context via gemini.md files, connect MCP servers and extensions to build AI-powered coding agents that handle tools, memory, and real projects like websites.

Google Cloud Lab Setup for Free AI Coding

Gemini CLI thrives in a managed environment like Google Cloud Shell, a persistent, VS Code-like editor in the browser. Start by claiming $5 in free GCP credits with a personal Gmail account (avoid corporate/edu accounts, which often carry restrictions). Access the credits via the lab link, accept the coupon, and confirm that no charges apply for the Gemini models or services used.

Activate Cloud Shell from console.cloud.google.com (top-right button, open in new window for editor+terminal). Authenticate with gcloud auth list and switch accounts if needed via gcloud config set account <your@gmail.com>. Clone starter repos:

  • agentverse-developer: Templates for agent building (imports, files prepped).
  • agentverse-dungeon: Container for boss fight agent (deployed later via A2A communication).

Create a project: gcloud projects create agentverse-shadow-$(whoami) --set-as-default. Enable APIs: Artifact Registry, Cloud Build, Cloud Run (gcloud services enable ...). Create repo: gcloud artifacts repositories create agentverse --repository-format=docker --location=us-central1. Verify in console under Artifact Registry.
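The bootstrap commands above can be collected into one script (a sketch: the API service names are assumed from the products listed, and it requires an authenticated gcloud session, so it only runs inside Cloud Shell or after gcloud auth login):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Create a uniquely named project and make it the active default.
gcloud projects create "agentverse-shadow-$(whoami)" --set-as-default

# Enable the services the lab relies on: Artifact Registry, Cloud Build, Cloud Run.
gcloud services enable \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com \
  run.googleapis.com

# Docker repository that built images will be pushed into.
gcloud artifacts repositories create agentverse \
  --repository-format=docker \
  --location=us-central1
```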

Grant roles such as Artifact Registry Writer and Cloud Build Editor to the default compute service account (<project-number>-compute@...). The lab uses one service account for speed; in production, follow least privilege with separate SAs per role, e.g., AI Platform User for model calls and Cloud Build Editor for image builds.

Deploy the dungeon by running the deployment script. Install or update the Gemini CLI and verify with gemini --version (the free tier covers Gemini 2.5 Flash; Gemini 2.5 Pro requires a paid tier with higher limits).

Common Mistake: Skipping personal Gmail leads to auth blockers. Quality Check: Yellow font in terminal confirms project setup; console shows repo.

Context Engineering to Control AI Outputs

Vibe coding is unpredictable because the underlying models are stochastic: the AI might edit the wrong files or hallucinate. Tame it with layered context in GEMINI.md files:

  • User-level (~/.gemini/GEMINI.md): Global instructions that apply everywhere (e.g., "Always use TypeScript, prefer functional components").
  • Project-level (GEMINI.md in the project root): Local rules (e.g., "This project uses React; focus on tabletop RPG mechanics").

Steps:

  1. Create a .gemini folder: mkdir .gemini.
  2. Edit GEMINI.md: Add instructions ("Think step-by-step, confirm before edits") and memory settings; pick the model (e.g., gemini-2.5-flash) in .gemini/settings.json or with the -m flag.
  3. Add skills.md: Define custom skills (covered in depth next episode).
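Laid out on disk, the steps above look roughly like this (a sketch: the settings.json model key is an assumption about the CLI's config schema, and the GEMINI.md content is the lab's example):

```shell
mkdir -p .gemini                       # project-local config directory

# Project-level context file, loaded automatically when gemini starts here.
cat > GEMINI.md <<'EOF'
This project uses React; focus on tabletop RPG mechanics.
Think step-by-step and confirm before edits.
EOF

# Model selection lives in settings rather than the context file (key name assumed).
cat > .gemini/settings.json <<'EOF'
{ "model": { "name": "gemini-2.5-flash" } }
EOF

ls .gemini GEMINI.md
```

Launching gemini from this folder picks up GEMINI.md automatically; the user-level ~/.gemini/GEMINI.md layers on top for global rules.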

Memory persists across sessions: the project and user layers ensure consistent reasoning. Pick models wisely: Flash for speed, Pro for reasoning.

Differentiate from tools like Antigravity: Gemini CLI is terminal-native for quick tasks (file search, summaries); Antigravity is an IDE built for visual workflows and planning.

Before/After: A vague prompt yields scattered edits. A context-engineered prompt is precise: "Edit only src/components/Player.tsx, add a health bar."

Launch: gemini (trust folder on first run). Commands:

  • /help: List all commands.
  • !ls or !echo hello: Shell mode (runs the command directly, bypassing the AI; press Esc to return).
  • /tools: View connected tools.

Pro Tip: Clear the terminal (clear) for clean chats. Mistake: Overloading context; keep it concise and layered.

MCP Servers and Extensions for External Integration

Gemini CLI is an agent: LLM brain + tools for world interaction. Connect via settings.json in .gemini/:

MCP Servers (Model Context Protocol): Zero-friction access to external APIs and tools.

  • Edit settings.json: Add servers under the mcpServers map, e.g. GitHub MCP ("mcpServers": { "github": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"] } }).
  • Use: /tools lists; natural language: "Push this code to GitHub" or "Open issue #42".
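A minimal .gemini/settings.json wiring in the GitHub MCP server might look like this (a sketch: the package name is the modelcontextprotocol reference server, and the token env var is an assumption needed for authenticated calls):

```shell
# Write an MCP server config, then sanity-check that it is valid JSON.
mkdir -p .gemini
cat > .gemini/settings.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "$GITHUB_TOKEN" }
    }
  }
}
EOF
python3 -m json.tool .gemini/settings.json
```

After restarting gemini, /tools should list the GitHub server's tools, and natural-language requests like "Push this code to GitHub" route through it.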

Extensions: Custom AI powers.

  1. Install: gemini extensions install <extension-repo-URL> (e.g., the Nano Banana extension, which generates images in the terminal).
  2. Use: "Generate Nano Banana art."

Full flow:

  1. gemini → Chat.
  2. Context loads automatically.
  3. Invoke tools: AI reasons, calls MCP/extension.

Quality Criteria: AI confirms actions ("Plan: Edit file X, commit via GitHub MCP?"); reject/iterate.

Live Vibe Coding: From Prompt to Website

In tabletop/ folder:

  1. gemini → Write design doc: "Design a tabletop RPG site: Player stats, combat log."
  2. Generate code: "Implement React app from doc."
  3. Iterate: "Add Nano Banana images for monsters."
  4. Test/eval: Write tests, CI/CD (next episode).
  5. Deploy agent, boss fight vs. dungeon.

Practice: Build website live—AI handles boilerplate, you steer via context/tools. Scales to agents with hooks/guardrails (next: deploy to Cloud Run).

Exercise: Fork lab, add custom MCP for your API; vibe code a feature.

Notable Quotes:

  • "Vibe coding is an art... manage context, provide instructions and skills." — Ayo Adedeji, on controlling stochastic outputs.
  • "Project level for folder-specific, user level for global—no matter what terminal folder." — Annie Wang, explaining memory layers.
  • "Gemini CLI: terminal coding agent. Anti-gravity: IDE with visual plan editing." — Annie Wang, tool comparison.
  • "Separate service accounts in production: one for AI calls, one for builds." — Ayo Adedeji, security best practice.
  • "Shell mode bypasses agent: !ls, press Esc to return." — Annie Wang, command demo.

Key Takeaways

  • Claim GCP credits with personal Gmail; use Cloud Shell for persistent dev.
  • Engineer context in GEMINI.md (user/project levels) to predict and control AI edits.
  • Connect MCP servers in settings.json for GitHub pushes/issues in plain English.
  • Install extensions like Nano Banana for terminal images/generation.
  • Launch with gemini, use /help, !shell, /tools; iterate plans before execution.
  • Pick Flash/Pro models: Speed vs. reasoning; free tier limits to 2.5 Flash.
  • Production: Separate SAs, concise context to avoid overload.
  • Practice: Build/test/deploy in lab; extend to full agents next.
Video description
[Lab] Vibe coding with Gemini CLI → https://goo.gle/shadowblade
GCP credit → https://goo.gle/handson-ep5-lab1
Try Gemini CLI → https://goo.gle/4ttWwHf

Welcome to Episode 1 of vibe coding with Gemini CLI. Annie and Ayo cover everything a developer needs to go from zero to AI-powered developer:

  • Context engineering — teach your AI what to remember and how to think.
  • Memory management — keep your AI partner sharp across every session.
  • MCP servers — plug in external tools and APIs with zero friction.
  • GitHub MCP server — push code, open issues, and get updates in plain English.
  • Gemini CLI extensions — extend your CLI with custom AI powered capabilities.
  • Nano Banana extension — generate stunning images straight from your terminal.
  • Vibe code a website — build a real project, live, from scratch.

Whether you're a seasoned engineer or just AI curious, this episode gives you the foundation to wield Gemini CLI like a pro.

More resources:
Gemini CLI Extension → https://goo.gle/4sc5fwI/
MCP protocol overview → https://goo.gle/41dDAAy
NanoBanana Gemini CLI extension → https://goo.gle/4ttWHlT
Watch more Hands on AI → https://goo.gle/HowToWithGemini
🔔 Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech

#GeminiCLI #VibeCoding #GoogleCloud

Speakers: Annie Wang, Ayo Adedeji
Products Mentioned: Gemini CLI, Gemini API, Nano Banana


© 2026 Edge