Master Gemini CLI for Vibe Coding in Terminal
Set up Gemini CLI in Google Cloud Shell, engineer context via gemini.md files, connect MCP servers and extensions to build AI-powered coding agents that handle tools, memory, and real projects like websites.
Google Cloud Lab Setup for Free AI Coding
Gemini CLI thrives in a managed environment like Google Cloud Shell, a persistent VS Code-like editor in the browser. Start by claiming $5 in free GCP credits using a personal Gmail account (avoid corporate/edu accounts to prevent restrictions). Access credits via the lab link, accept the coupon, and ensure no charges apply for Gemini models or services.
Activate Cloud Shell from console.cloud.google.com (top-right button, open in new window for editor+terminal). Authenticate with gcloud auth list and switch accounts if needed via gcloud config set account <your@gmail.com>. Clone starter repos:
- agentverse-developer: templates for agent building (imports and files prepped).
- agentverse-dungeon: container for the boss-fight agent (deployed later via A2A communication).
Create a project: gcloud projects create agentverse-shadow-$(whoami) --set-as-default. Enable APIs: Artifact Registry, Cloud Build, Cloud Run (gcloud services enable ...). Create repo: gcloud artifacts repositories create agentverse --repository-format=docker --location=us-central1. Verify in console under Artifact Registry.
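The project-setup commands above, collected into one script. This is a sketch under the lab's assumptions (region `us-central1`, repo name `agentverse`); the exact API service names are the standard ones for Artifact Registry, Cloud Build, and Cloud Run:

```shell
# Create a project and make it the default (project IDs must be globally unique).
gcloud projects create "agentverse-shadow-$(whoami)" --set-as-default

# Enable the APIs the lab uses.
gcloud services enable artifactregistry.googleapis.com \
  cloudbuild.googleapis.com run.googleapis.com

# Create the Docker repository, then list repos to verify it exists.
gcloud artifacts repositories create agentverse \
  --repository-format=docker --location=us-central1
gcloud artifacts repositories list --location=us-central1
```

The final `list` mirrors the console check described above: if the `agentverse` repo appears, setup succeeded.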
Grant the compute service account (project-id-compute@...) roles such as Artifact Registry Writer and Cloud Build Editor. The lab uses one service account for speed; in production, follow the principle of least privilege with separate SAs, e.g., AI Platform User for model calls and Cloud Build Editor for image builds.
Deploy the dungeon: run the deployment script. Install or update Gemini CLI and confirm with gemini --version (the free tier covers Gemini 2.5 Flash; a paid tier unlocks 2.5 Pro with higher limits).
Common Mistake: Skipping personal Gmail leads to auth blockers. Quality Check: Yellow font in terminal confirms project setup; console shows repo.
Context Engineering to Control AI Outputs
Vibe coding unpredictability stems from stochastic models—AI might edit wrong files or hallucinate. Master it via context layers in gemini.md files:
- User-level (`~/.gemini/gemini.md`): global instructions that apply everywhere (e.g., "Always use TypeScript, prefer functional components").
- Project-level (`.gemini/gemini.md` in the project folder): local rules (e.g., "This project uses React; focus on tabletop RPG mechanics").
Steps:
- Create the `.gemini` folder: `mkdir .gemini`.
- Edit `gemini.md`: define the model (`model: gemini-2.5-flash`), instructions ("Think step-by-step, confirm before edits"), and memory settings.
- Add `skills.md`: define custom skills (covered in depth next episode).
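The steps above can be scripted in one pass. The `gemini.md` contents here are illustrative project rules taken from this section, not a fixed schema:

```shell
# Create the project-level context folder and a starter gemini.md.
mkdir -p .gemini
cat > .gemini/gemini.md <<'EOF'
model: gemini-2.5-flash

# Project rules
- This project uses React; focus on tabletop RPG mechanics.
- Think step-by-step and confirm before edits.
EOF

# Confirm the file landed where Gemini CLI will look for it.
grep "gemini-2.5-flash" .gemini/gemini.md
```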
Memory persists across sessions: Project/user layers ensure consistent reasoning. Pick models wisely—Flash for speed, Pro for reasoning.
Differentiate from tools like Antigravity: Gemini CLI is terminal-native for quick tasks (file search, summaries); Antigravity is IDE-based for visual workflows and planning.
Before/After: Vague prompt → scattered edits. Context-engineered → Precise: "Edit only src/components/Player.tsx, add health bar."
Launch: `gemini` (trust the folder on first run). Commands:
- `/help`: list all commands.
- `!ls` or `!echo hello`: shell mode (bypasses the AI; press Esc to exit).
- `/tools`: view connected tools.
Pro Tip: Clear terminal (clear) for clean chats. Mistake: Overloading context—keep concise, layered.
MCP Servers and Extensions for External Integration
Gemini CLI is an agent: LLM brain + tools for world interaction. Connect via settings.json in .gemini/:
MCP Servers (Model Context Protocol): zero-friction external APIs and tools.
- Edit `settings.json`: add servers like the GitHub MCP server (`"mcpServers": { "github": { "command": "npx", "args": ["@modelcontextprotocol/server-github"] } }` — note `mcpServers` is an object keyed by server name, not an array).
- Use: `/tools` lists them; natural language works: "Push this code to GitHub" or "Open issue #42".
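A minimal `settings.json` sketch for the GitHub MCP server described above. The `mcpServers` shape follows Gemini CLI's object-keyed format; the `GITHUB_PERSONAL_ACCESS_TOKEN` env entry and `$GITHUB_TOKEN` variable are assumptions you would supply yourself:

```shell
# Write a project-level settings.json wiring up the GitHub MCP server.
mkdir -p .gemini
cat > .gemini/settings.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "$GITHUB_TOKEN" }
    }
  }
}
EOF

# Sanity-check the JSON before launching gemini.
python3 -m json.tool .gemini/settings.json > /dev/null && echo "settings.json OK"
```

On the next launch, `/tools` should list the GitHub server's tools alongside the built-ins.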
Extensions: Custom AI powers.
- Install: `gemini /install-extension nanobanana` (generates images in the terminal).
- Use: "Generate a banana nano art."
Full flow:
- `gemini` → chat.
- Context loads automatically.
- Invoke tools: AI reasons, calls MCP/extension.
Quality Criteria: AI confirms actions ("Plan: Edit file X, commit via GitHub MCP?"); reject/iterate.
Live Vibe Coding: From Prompt to Website
In tabletop/ folder:
- `gemini` → write a design doc: "Design a tabletop RPG site: player stats, combat log."
- Generate code: "Implement a React app from the doc."
- Iterate: "Add Nano Banana images for monsters."
- Test/eval: Write tests, CI/CD (next episode).
- Deploy agent, boss fight vs. dungeon.
Practice: Build website live—AI handles boilerplate, you steer via context/tools. Scales to agents with hooks/guardrails (next: deploy to Cloud Run).
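A hedged sketch of the Cloud Run deployment the next episode points at, reusing the Artifact Registry repo from setup. The service name `tabletop-site` and image tag are assumptions for illustration:

```shell
# Build the container with Cloud Build and push it to the agentverse repo.
gcloud builds submit --tag \
  "us-central1-docker.pkg.dev/$(gcloud config get-value project)/agentverse/tabletop:v1"

# Deploy the image to Cloud Run (unauthenticated access is for demo sites only).
gcloud run deploy tabletop-site \
  --image "us-central1-docker.pkg.dev/$(gcloud config get-value project)/agentverse/tabletop:v1" \
  --region us-central1 --allow-unauthenticated
```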
Exercise: Fork lab, add custom MCP for your API; vibe code a feature.
Notable Quotes:
- "Vibe coding is an art... manage context, provide instructions and skills." — Ayo Adedeji, on controlling stochastic outputs.
- "Project level for folder-specific, user level for global—no matter what terminal folder." — Annie Wang, explaining memory layers.
- "Gemini CLI: terminal coding agent. Antigravity: IDE with visual plan editing." — Annie Wang, tool comparison.
- "Separate service accounts in production: one for AI calls, one for builds." — Ayo Adedeji, security best practice.
- "Shell mode bypasses agent: !ls, press Esc to return." — Annie Wang, command demo.
Key Takeaways
- Claim GCP credits with personal Gmail; use Cloud Shell for persistent dev.
- Engineer context in `gemini.md` (user/project levels) to predict and control AI edits.
- Connect MCP servers in `settings.json` for GitHub pushes/issues in plain English.
- Install extensions like Nano Banana for terminal image generation.
- Launch with `gemini`; use `/help`, `!` shell mode, and `/tools`; iterate on plans before execution.
- Pick Flash/Pro models: speed vs. reasoning; the free tier is limited to 2.5 Flash.
- Production: Separate SAs, concise context to avoid overload.
- Practice: Build/test/deploy in lab; extend to full agents next.