Blog Posts
A collection of essays on software development, AI, and engineering practices.
Articles
Dialectic Driven Development in the Age of AI
An AI-first methodology where documentation becomes the primary artifact and code becomes disposable. Explores how to leverage AI assistants through a cycle of docs → tests → implementation → learnings, using “napkin physics” and toy models to extract durable insights.
CLI + JSON as Debugger: Giving LLM Agents Deterministic Legs
A practical approach to building AI-debuggable systems using CLI tools that expose every step of execution in machine-readable JSON format. Shows how deterministic I/O gives LLM agents transparent, falsifiable state to reason over.
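To make the pattern concrete, here is a hedged sketch in C of what such a tool might look like; the names, struct fields, and JSON keys are invented for illustration and are not the article’s actual tooling:

```c
// step_dump.c — illustrative sketch of the "one JSON line per execution
// step" pattern (invented example, not the article's real tools).
#include <stdio.h>

typedef struct {
    int step;        // monotonically increasing step counter
    int accumulator; // the piece of state we want the agent to inspect
} VmState;

// Emit the current state as a single machine-readable JSON object,
// one per line, so an agent can parse the trace without a debugger.
static void dump_state(const VmState *s, const char *op) {
    printf("{\"step\":%d,\"op\":\"%s\",\"accumulator\":%d}\n",
           s->step, op, s->accumulator);
}

int main(void) {
    VmState s = {0, 0};
    const int inputs[] = {3, 5, 7};

    // Deterministic loop: the same inputs always produce the same JSON
    // trace, giving the agent falsifiable state to reason over.
    for (int i = 0; i < 3; i++) {
        s.accumulator += inputs[i];
        s.step++;
        dump_state(&s, "add");
    }
    return 0;
}
```

Because the trace is deterministic, an agent can re-run the tool, diff the JSON lines between runs, and falsify its hypotheses about where state diverges.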
Guidance Vectors: Compact Spells for Steering LLMs
Explores how short phrases can carry enough latent meaning to shift LLM behavior into useful modes of reasoning. Introduces concepts like “Chomskyian concision” and “Einsteinian simplicity” as compact incantations for steering model output.
The Over-Engineering Epidemic: How Modern Development Tools Are Slowing Us Down
A critique of modern development practices that prioritize complex tooling and abstractions over clarity and simplicity. Argues against TypeScript overhead, premature abstraction, and AI-generated boilerplate in favor of direct, understandable code.
Fast Fashion Code: The Hidden Environmental Cost of AI-Generated Garbage
Examines the environmental impact of AI-generated code, drawing parallels between fast fashion and the current trend of generating low-quality, disposable code. Advocates for a “Slow Code” ethic that prioritizes building less but building better.
Shipping My First Fully AI-Coded Project
A hands-on account of building and shipping a complete project where AI handled all implementation while the human focused on product direction, UX decisions, and testing. Demonstrates the practical application of human-AI collaboration through the development of a ChatGPT export viewer toolkit.
Vibe-writing the DDD Book
How the DDD book was produced through “vibe-writing”: agents draft, restructure, and refactor while the human edits for clarity, constraints, and voice. Connects the methodology’s dual modes (Discovery for uncertainty, Execution for delivery) to long-form writing, showing agent-first operations beyond code.
Futureproofed
Why DDD won’t be obsolete tomorrow: explores how Dialectic-Driven Development is built on stable economic and cognitive invariants rather than current model capabilities, making it resilient across technological shifts. Argues that the methodology is designed for optimal collaboration structure, not as a workaround for current model limitations.
Working With vs. Working On: The Coming Schism in LLM Development
An examination of the fundamental difference between building large language models and building usable LLM-driven systems. Argues for a crucial division of labor and the need for multi-disciplinary teams including linguists, philosophers, and cognitive psychologists alongside ML engineers.
How We Tricked ncurses Into Running Without a Terminal (And Got Test Coverage)
Solves the problem of testing ncurses code in CI environments by using macOS’s DYLD_INTERPOSE to fake a TTY. Shows how an “idle thought” during a break led to a simple 90-line C solution that enabled full test coverage for terminal-dependent code, demonstrating effective human-AI collaboration on practical engineering challenges.
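For readers unfamiliar with the trick, here is a minimal sketch of the interposition mechanism, assuming the standard DYLD_INTERPOSE macro from Apple’s dyld-interposing.h. This is illustrative only, not the article’s 90-line shim, which presumably also covers calls beyond isatty (such as tcgetattr or ioctl):

```c
// fake_tty.c — illustrative sketch only; not the article's actual shim.
// Build and inject:
//   clang -dynamiclib fake_tty.c -o fake_tty.dylib
//   DYLD_INSERT_LIBRARIES=./fake_tty.dylib ./ncurses_test
#include <unistd.h>

// Apple's interposing macro (normally from <mach-o/dyld-interposing.h>):
// it emits a (replacement, replacee) pointer pair into the
// __DATA,__interpose section, which dyld rewires at load time.
#define DYLD_INTERPOSE(_replacement, _replacee)                            \
    __attribute__((used)) static struct {                                  \
        const void *replacement;                                           \
        const void *replacee;                                              \
    } _interpose_##_replacee                                               \
        __attribute__((section("__DATA,__interpose"))) = {                 \
            (const void *)(unsigned long)&_replacement,                    \
            (const void *)(unsigned long)&_replacee };

// Report every descriptor as a terminal so ncurses initializes even when
// stdin/stdout are pipes in a CI environment.
static int fake_isatty(int fd) {
    (void)fd;
    return 1;
}

DYLD_INTERPOSE(fake_isatty, isatty)
```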
External Split
DDD isn’t just a methodology; it’s hypothesis-testing about how LLMs work best. Explores recent discoveries (Research mode and meta-mode patterns) and planned work (separating reasoning from communication) as observations about LLM cognitive architecture. Shows how each pattern reveals something fundamental about effective human-AI collaboration.
Cargo-Culting Human Limitations: What LLMs Think They Can’t Do (But Actually Can)
Part 2 of the LLM Cognition series. Explores how LLMs inherit human constraints they don’t actually have: time pressure, decision fatigue, context-switching costs. Through the ddd-nes audio testing moment and meta-analysis of LLM behavioral patterns, reveals how cargo-culting manifests and how meta-coaching can recalibrate AI to its actual capabilities.
Guidance Vectors in Practice: Compression Algorithms for Philosophy
Part 3 of the LLM Cognition series. From theory to practice: how guidance vectors evolved during ddd-nes into the Hegel CLI LEXICON. Shows how significant compression transforms verbose corrections into dense philosophical principles. Demonstrates how vectors work as cognitive shortcuts, create shared vocabulary, and enable methodology transfer across projects.
Context is King: Why We Pivoted from MCP to CLI
Part 4 of the LLM Cognition series. The MCP → CLI pivot wasn’t about features; it was about context composition. Every tool invocation enters the LLM’s context, and a name repeated 20+ times per session compounds its semantic weight. “Hegel” isn’t just branding: it’s a guidance vector that primes dialectical thinking through repetition. Shows why tool names matter and how to design for cumulative context effects.