Oxide's Values-Driven LLM Guidelines

LLMs are encouraged as tools whose use is anchored in human responsibility, rigor, empathy, teamwork, and urgency: use them for reading, editing, and debugging; avoid them for writing prose; reject mandates and shaming.

Anchor LLM Use in Core Values for Responsible Outcomes

Oxide prioritizes five values, in descending order, when using LLMs: responsibility (humans own all outputs and keep judgment in the loop), rigor (LLMs sharpen thinking when used carefully but erode it when used recklessly), empathy (consider the human readers and writers on both ends), teamwork (avoid eroding trust; even disclosed use can distance a person from ownership), and urgency (accelerate without sacrificing the other values). These values ensure LLMs enhance rather than undermine high-quality work. For example, LLM-generated artifacts such as code or docs demand full human accountability, preventing pace from trumping direction, as has happened at other organizations.

This framework rejects hype-driven adoption: LLMs aren't magic, but tools requiring judgment. The trade-offs are explicit: quick outputs risk producing flotsam that displaces crisp thinking, while careful use can expose gaps in one's own reasoning.

Leverage LLMs Selectively by Task to Boost Productivity

Reading and research: LLMs excel at instant comprehension, answering questions about or summarizing docs and datasheets; treat them as a search-engine replacement for light tasks, but verify sources, since LLM-generated content pollutes the web (e.g., fabricated claims about Oxide). Always check privacy settings to block training on uploads (opt out of OpenAI's "Improve the model for everyone"). Use them as a jumping-off point, not a final product, and don't substitute them for reading a human is expected to do, as in hiring (per RFD 3).

Editing and review: LLMs work best late in the process, giving feedback on structure and phrasing without displacing the writer's voice; ignore their sycophantic praise. For code, target specific issues, but never let them replace human review.

Debugging and programming: LLMs act as an "animatronic rubber duck" that sparks questions (even from oscilloscope screenshots); they generate experimental or throwaway code de novo effectively, but self-review it rigorously before peer review. Avoid re-generating code wholesale in response to review feedback; iterative review demands that a human evolve the code. The closer code is to production, the more caution is required; resist dependency on LLMs to prevent complexity bloat.

Writing: Generally avoid. Outputs are cliché-ridden or hallucinated, eroding authenticity, trust, and the writer-reader social contract (readers invest effort on the assumption that the writer understands what they wrote). Readers detect the hallmarks of LLM prose and dismiss it; at Oxide, the hiring process ensures everyone can write well.

These uses deliver faster comprehension and prototyping while preserving rigor: always have a human review LLM-generated code, follow citations back to their sources, and turn to colleagues rather than working in isolation.

Eliminate Anti-Patterns to Protect Autonomy and Trust

Ban LLM mandates (no executive fiats that undermine mastery) and shaming (treat LLM use like dietary choices: accommodate it without judgment). Reject anthropomorphization (no personas or voices that imply an accountability LLMs can't provide, risking the kind of chaos described in the Shell Game podcast).

Encouragement paired with responsibility yields reliable acceleration: practical mechanics live in an internal GitHub doc, and undisclosed use does not erode trust so long as the output is fully owned.
