Equip Local Dev with Python/Node Managers and Gemini CLI Skills
Manage Python versions consistently with pyenv (install from its GitHub repo) to land on mainstream 3.13.13 (verify with python --version), avoiding the cross-platform mismatches common in AI/ML workflows. Pair it with nvm for Node.js (a Gemini CLI dependency) to keep environments current. Install Gemini CLI globally (npm install -g @google/gemini-cli), authenticate with your Google account, and activate the ADK skills (/skills list): adk-cheatsheet for API patterns, orchestration, and state; adk-dev-guide for lifecycle and coding rules; adk-deploy-guide for Cloud Run, GKE, and CI/CD; adk-eval-guide for metrics and LLM-as-judge evaluation; adk-scaffold for new projects and RAG additions; adk-observability-guide for tracing and BigQuery. These skills cut debugging time on agent tools and callbacks by providing indexed docs and MCP servers (e.g., adk-docs-mcp for fetching sources).
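A sketch of the toolchain setup, assuming the pyenv-installer script and the official nvm install script; the nvm release tag shown is illustrative, so pin whichever you trust:

```shell
# Install pyenv via pyenv-installer (cloning the GitHub repo also works)
curl -fsSL https://pyenv.run | bash

# Build and select the target Python, then verify
pyenv install 3.13.13
pyenv global 3.13.13
python --version

# Install nvm and a current LTS Node.js for Gemini CLI
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
nvm install --lts

# Install Gemini CLI globally; authenticate, then run /skills list in the REPL
npm install -g @google/gemini-cli
gemini
```

These installers hit the network and modify your shell profile, so restart the shell (or re-source your rc file) before the pyenv and nvm commands will resolve.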
Clone https://github.com/xbill9/gemini-cli-aws, source init2.sh and set_env.sh to export PROJECT_ID and related variables, then run make install in multi-lightsail/ to pip-install ADK/shared-utils and the npm dependencies for the frontend and backend. This yields a testbed that goes beyond the basic codelabs, with Gemini CLI providing real-time code assistance on the A2A multi-agent system.
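The clone-and-install flow as a command sketch; script and directory names are taken from the text, and what each step does is summarized rather than verified:

```shell
git clone https://github.com/xbill9/gemini-cli-aws
cd gemini-cli-aws
source init2.sh      # environment bootstrap
source set_env.sh    # exports PROJECT_ID and related vars
cd multi-lightsail
make install         # pip-installs ADK/shared-utils; npm-installs frontend/backend deps
```

Sourcing (rather than executing) the two scripts matters: the exported variables must land in your current shell so that make and the agents can see them.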
Verify and Run 5-Agent Workflow Locally Before Deploy
Agents communicate over the A2A protocol: Researcher (gemini-2.5-flash model; searches and topics), Judge (validates), Orchestrator (routes), Content Builder (generates), and Course Builder (structures outputs). Test a single agent first: cd agents; adk run researcher/ opens a chat interface, with logs at /tmp/agents_log/agent.latest.log and state held in in-memory storage, .env, session.db, and artifacts. Then scale up to the web UI: adk web --host 0.0.0.0 (add --allow_origins 'regex:.*' for Cloud Shell CORS) serves at http://0.0.0.0:8000.
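The single-agent and web-UI checks, sketched as commands; flags and the log path are as described above:

```shell
cd agents
adk run researcher/                       # interactive chat with the Researcher agent

# In a second terminal: watch the agent log while you chat
tail -f /tmp/agents_log/agent.latest.log

# Web UI for all agents; CORS relaxed for Cloud Shell previews
adk web --host 0.0.0.0 --allow_origins 'regex:.*'
```

Running one agent in isolation first keeps the log readable: you see only that agent's tool calls and model traffic before the A2A routing adds the other four.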
The Makefile orchestrates everything: make start launches the agents on ports 8001-8004 (researcher on 8001, judge on 8002, and so on), the backend on 8000, and the frontend on 5173; check make local-status for TCP listeners and processes. Run make test (pytest), make e2e-test against localhost, and make lint (ruff). make stop cleans up. This local loop (build and debug via Gemini CLI skills, verify via logs and the UI) ensures production readiness without upfront cloud costs, surfacing issues like experimental PLUGGABLE_AUTH warnings early.
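A hypothetical port check along the lines of what make local-status reports, using ss (assumes Linux; prints DOWN for any port without a listener):

```shell
# Check each expected listener: backend 8000, agents 8001-8004, frontend 5173
for port in 8000 8001 8002 8003 8004 5173; do
  if ss -ltn "sport = :$port" 2>/dev/null | grep -q LISTEN; then
    echo "port $port: up"
  else
    echo "port $port: DOWN"
  fi
done
```

One line per port makes this easy to eyeball after make start, and a stray DOWN usually means the corresponding agent crashed on boot, so check /tmp/agents_log/ next.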
One-Command Lightsail Deploy for Predictable VPS Hosting
AWS Lightsail provides a low, fixed-price VPS (console: https://lightsail.aws.amazon.com/ls/webapp/home/containers) for dev and prod without full AWS complexity, ideal for agent apps compared with EC2 sprawl. From the local setup, make deploy-lightsail containerizes and deploys all services; monitor with make lightsail-status, get the public endpoint via make endpoint-lightsail, and tear down with make destroy-lightsail. Trade-off: simpler than GKE but capped at small workloads; it pairs with ADK's modularity for scaling agents autonomously. The full flow, re-engineered here from the codelab base (https://codelabs.developers.google.com/codelabs/production-ready-ai-roadshow/1-building-a-multi-agent-system), yields a deployable multi-agent A2A system in minutes rather than hours.
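The Lightsail lifecycle in one place; the Makefile target names come from the text, and the behavior comments summarize rather than verify what each target does:

```shell
make deploy-lightsail    # containerize and push all services to Lightsail
make lightsail-status    # poll the container service state
make endpoint-lightsail  # print the public URL once healthy
make destroy-lightsail   # tear down to stop billing
```

Keeping deploy, status, endpoint, and destroy as symmetric targets means the whole stack can be stood up for a demo and torn down the same afternoon, which is where Lightsail's fixed pricing pays off.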