Yesterday we wrote about building for the machine: the idea that if an LLM is building your software, your software needs interfaces the LLM can use. Today we shipped the implementation: a Unix domain socket that lets Claude drive Ikigai like a human sitting at the keyboard.
Agents can finally manage themselves. Rel-11 gives agents the tools to fork children, send messages, wait for results, and clean up after themselves without a human touching a single slash command.
Ikigai is a black box. It renders to the alternate terminal buffer, reads keystrokes one byte at a time, and produces nothing an external tool can see or touch. That’s fine when a human is sitting at the keyboard. It’s a dead end when the thing building the software is an LLM that can’t see your screen.
When we started Ikigai twelve weeks ago, we chose C. The reasoning was straightforward: if you want a tool to land in Linux distributions, C is the path of least resistance. Package maintainers understand it, build systems support it natively, and there’s no runtime to drag along. We were building software meant to be installed through apt and dnf, sitting alongside the rest of the system tools. That assumption turned out to be rooted in a world that’s already disappearing.
Two days ago we published “See Ralph Run,” a post about our nano-service orchestration layer. It’s already stale. Not because anything broke, but because the thinking underneath it shifted. That’s life in agentic development right now: write something on Monday, rethink it by Wednesday, rebuild it by Friday.
We finally spent ten minutes on production value. The Ikigai Devlog YouTube channel now has actual thumbnails instead of random freeze frames, courtesy of Google’s image generation throwing robot illustrations at us until something stuck.
Remember Dick and Jane? “See Dick run. See Jane run. Run, Dick, run!” Those simple sentences taught millions of children to read. We’ve borrowed the formula for our orchestration layer, except our protagonist is an AI agent named Ralph, and instead of running through yards he’s running through codebases.
The Ralph harness has been running effectively for weeks now. It tracks progress, summarizes history, and can grind through complex goals for 30 hours when needed (though most finish in a single iteration). We have a solid foundation for autonomous development.
I’ve been actively working with AI coding agents for months. I thought I was ahead of the curve. Then I went back and re-read something Steve Yegge wrote in March 2025, almost a year ago, and realized I’m still catching up.