Flexible Model Access Without Vendor Lock-in

Osaurus acts as a harness: a control layer that connects local and cloud LLMs through a single interface on Apple hardware. For local runs, switch among models such as MiniMax M2.5, Gemma 4, Qwen3.6, GPT-OSS, Llama, DeepSeek V4, Apple's on-device models, or Liquid AI's LFM family; for remote access, connect to OpenAI, Anthropic, Gemini, xAI/Grok, Venice AI, and OpenRouter in the cloud, or to other local servers such as Ollama and LM Studio. Because model strengths vary, you can pick the best model for each task while keeping memory, files, and tools stored locally. As a full MCP server, Osaurus exposes more than 20 plugins (Mail, Calendar, Vision, macOS tools, XLSX/PPTX, Browser, Music, Git, Filesystem, Search, Fetch) to compatible clients, and voice support was recently added.
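The "one interface, many backends" idea can be sketched with an OpenAI-compatible chat request: the payload is identical whichever backend serves the model, so switching models is just a name and base-URL change. The port, base URLs, and model names below are illustrative assumptions, not Osaurus defaults.

```python
# Sketch: one OpenAI-compatible interface, many backends.
# Endpoints and model names are hypothetical -- adjust to your setup.
import json
import urllib.request

BACKENDS = {
    "local": "http://localhost:1337/v1",   # assumed local server port
    "cloud": "https://api.openai.com/v1",  # any OpenAI-compatible host
}

def chat_request(backend: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request; the payload shape is the same
    regardless of which backend ultimately serves the model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BACKENDS[backend]}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Pick the best model per task: a local model for private data,
# a cloud model for heavy reasoning -- only the name/backend change.
local_req = chat_request("local", "llama-3.2-3b", "Summarize my notes")
cloud_req = chat_request("cloud", "gpt-4o", "Plan a data migration")
# urllib.request.urlopen(local_req)  # dispatch once a server is running
```

Because the request shape never changes, a harness like this can re-route a task without touching the calling code.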

Hardware Sandbox Secures Local Runs

Everything runs in a hardware-isolated virtual sandbox that scopes what the AI can access and protects both your data and your computer, in contrast to developer-focused tools like OpenClaw (which carries security risks) or Hermes (which is terminal-heavy). Osaurus evolved from the Dinoki desktop AI and is built for consumers, with an easy-to-use UI. Local models need a minimum of 64GB of RAM, with 128GB recommended for large ones like DeepSeek V4. Local AI efficiency keeps rising ('intelligence per wattage' improves), enabling browser access, code writing, tool use, even placing Amazon orders, far beyond last year's sentence completion.
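The RAM guidance above (64GB floor, 128GB recommended for the largest models) implies a simple routing rule, sketched here. The thresholds come from the text; the function names and the macOS `sysctl` probe are illustrative assumptions, not part of Osaurus.

```python
# Sketch: gate local inference on installed RAM, using the thresholds
# from the text (64GB minimum, 128GB recommended for large models).
import platform
import subprocess

def installed_ram_gb() -> int:
    """Probe physical memory on macOS via sysctl; 0 elsewhere."""
    if platform.system() == "Darwin":
        out = subprocess.run(
            ["sysctl", "-n", "hw.memsize"], capture_output=True, text=True
        )
        return int(out.stdout.strip()) // (1024 ** 3)
    return 0  # unknown on non-macOS hosts

def pick_runtime(ram_gb: int, large_model: bool) -> str:
    """Run locally only above the 64GB floor; send the largest models
    (e.g. DeepSeek-class) to the cloud below the 128GB recommendation."""
    if ram_gb < 64:
        return "cloud"
    if large_model and ram_gb < 128:
        return "cloud"
    return "local"
```

For example, a 96GB machine would run a mid-size model locally but fall back to a cloud backend for the largest ones.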

Production Path and Impact

The open-source project has been downloaded more than 112,000 times on GitHub in roughly a year. Founders Terence Pae (an ex-Tesla/Netflix engineer) and Sam Yoo are in the Alliance accelerator and are eyeing enterprise markets such as legal and healthcare, where privacy matters most. Deploying a Mac Studio on-premises cuts data center reliance and power use while matching cloud capabilities, a counter to explosive cloud scaling.