Embed Engineering Discipline in AI Coding with Lattice
Rahul Garg's open-source Lattice framework tackles familiar AI-assistant failure modes (jumping straight to code without a design, ignoring stated constraints, skipping reviews) by organizing composable skills into three tiers: atoms (basic rules), molecules (combinations of atoms), and refiners (output polishers). The skills embed practices such as Clean Architecture, DDD, design-first development, and secure coding. A .lattice/ folder serves as living context, accumulating your project's standards, decisions, and reviews, so the system adapts to your rules over successive feature cycles. Install it as a Claude Code plugin or use it with any AI tool to get reviewed, standards-compliant output that improves with use.
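To make the tiering concrete, here is a minimal TypeScript sketch of the composition idea. Every name and shape below is hypothetical, chosen to illustrate the atoms/molecules/refiners concept rather than Lattice's actual skill format:

```typescript
// Hypothetical sketch only: these names and shapes illustrate the
// atoms/molecules/refiners idea, not Lattice's actual skill format.

interface Skill {
  name: string;
  prompt: string; // instruction text to inject into the AI's context
}

// Atom: a single basic rule, e.g. "write the design before the code".
type Atom = Skill;

// Molecule: a named combination of atoms applied together.
interface Molecule extends Skill {
  atoms: Atom[];
}

// Refiner: a polishing pass applied after the other skills.
type Refiner = Skill;

// Compose a context block tier by tier: atom rules first, then molecule
// combinations, then refiner passes.
function composeContext(atoms: Atom[], molecules: Molecule[], refiners: Refiner[]): string {
  return [...atoms, ...molecules, ...refiners]
    .map((skill) => `## ${skill.name}\n${skill.prompt}`)
    .join("\n\n");
}

// Example: a design-first atom folded into a secure-coding molecule,
// followed by a review refiner.
const designFirst: Atom = {
  name: "design-first",
  prompt: "Propose a design and get agreement before writing code.",
};
const secureCoding: Molecule = {
  name: "secure-coding",
  prompt: "Apply secure coding practices throughout.",
  atoms: [designFirst],
};
const reviewPass: Refiner = {
  name: "review",
  prompt: "Review the output against the project standards accumulated so far.",
};

console.log(composeContext([designFirst], [secureCoding], [reviewPass]));
```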
Prompted by high reader traffic, Wei Zhang and Jessie Jie Xia's Structured-Prompt-Driven Development (SPDD) article now includes a Q&A addressing common questions.
Revive Internal Reprogrammability via Double Feedback Loops
Jessica Kerr describes building tools from her AI conversation logs and identifies two nested loops: a development loop (the AI acts, you verify) and a meta-loop (noticing your own frustration and using it to improve the building process itself). Because AI makes changes cheap and fast, small environment tweaks, such as adding debugging aids, pay off immediately. This echoes Martin Fowler's 'Internal Reprogrammability,' the lost joy of the Smalltalk/Lisp era when developers molded their personal environments, now resurfacing with agents despite today's polished IDEs.
Local Models Suffice; Big Tech's $100B+ Capex vs. Apple's Bet
Willem van den Ende argues that local open models are 'good enough' for daily agentic work, with harness quality (agent plus skills plus extensions) mattering more than raw model power. His setup relies on sandboxing tools such as Nono (worthwhile even with cloud models, under a Zero Trust Architecture), so engineering effort compounds into a stable environment without shipping data off-machine or running up costs. Cloud models like Claude still dominate, but after the November Inflection they are no longer essential.
Stephen O'Grady notes that big tech's AI infrastructure spending is staggering: capital expenditures exceed $100B, running at over 50% of revenue for Amazon, Alphabet, and Microsoft, and at or above 75% for Meta and Oracle, figures that would have been unthinkable a decade ago and are now table stakes. Apple bucks the trend at roughly 10%, prioritizing local hardware instead. Nate B. Jones sees this as a replay of Apple's 1970s Apple II strategy: less powerful but local compute enabled spreadsheets and desktop publishing while bypassing mainframes. With open local models now viable, you can avoid sending sensitive data to the megacorps; John Ternus's ascent toward the CEO role signals a hardware-centric AI future.
AI Risks: Defamation Liability and Genie Tarpit
Musician Ashley MacIsaac is suing Google over an AI overview that falsely stated he had been convicted of sexual assault and was a registered sex offender (it had confused him with someone of a similar name), leading to a cancelled concert and fears for his safety. He argues that Google publishes the AI's output and must be held accountable for it, the challenges of scale notwithstanding: tech companies must own the harms they cause.
Kent Beck borrows the tar pit image from Brooks's 'The Mythical Man-Month' for his 'Genie Tarpit': agentic AI optimizes for the plausible next task over a sustainable future, piling complexity onto code that doesn't yet work. Internal quality (good naming and structure) helps agents just as it helps humans; spaghetti code may baffle even future LLMs. The open question: does discipline keep you out of the tar, or will raw model power suffice?