Ikigai Devlog

AI-assisted development journal

The Pivot

Two days ago we published “See Ralph Run,” a post about our nano-service orchestration layer. It’s already stale. Not because anything broke, but because the thinking underneath it shifted. That’s life in agentic development right now: write something on Monday, rethink it by Wednesday, rebuild it by Friday.

OpenClaw Changed the Frame

We mentioned OpenClaw briefly in “Keeping Up” a week ago. Since then it’s gone from curiosity to catalyst. Peter Steinberger’s project exploded from 9,000 GitHub stars to over 145,000 in a matter of days, with people buying dedicated hardware just to run it as an always-on agent. The numbers are impressive, but what actually mattered to us was the architecture.

OpenClaw treats skills as pluggable knowledge packages that get selectively injected into prompts at runtime. It runs tasks serially by default, only going parallel when it’s safe. It connects to services you already use rather than forcing you into a new interface. None of these ideas are individually revolutionary, but seeing them packaged together and adopted at that scale forced a question: where does the intelligence actually need to live?
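To make the idea concrete, here is a minimal Python sketch of selective skill injection: skills are small knowledge packages, and only the ones relevant to the current task get spliced into the prompt at runtime. All names and the keyword-matching heuristic are our own illustration, not OpenClaw's actual design.

```python
# Hypothetical sketch of selective skill injection. A Skill is a small
# knowledge package; only skills relevant to the task are added to the
# prompt. Names and matching logic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    keywords: set   # task words that make this skill relevant
    body: str       # the knowledge injected into the prompt

def inject_skills(task: str, skills: list) -> str:
    """Build a prompt containing only the skills the task appears to need."""
    words = set(task.lower().split())
    selected = [s for s in skills if s.keywords & words]
    header = "\n\n".join(f"## {s.name}\n{s.body}" for s in selected)
    task_block = f"# Task\n{task}"
    return f"{header}\n\n{task_block}" if selected else task_block
```

A real system would match far more cleverly (embeddings, explicit routing), but the shape is the same: the prompt carries only the knowledge the task needs, not everything the system knows.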

We had been thinking about Ikigai as the root of an orchestrated hierarchy: agents spawning sub-agents, communicating mid-task, coordinating through structured protocols. Sub-agents went live just last week and we were proud of them. But as we watched OpenClaw and reflected on our own experience with Ralph, a different picture emerged.

The Swarm Wins

Ralph has been our most productive tool for weeks now: more than 187 goals completed at an average cost of $1.51 each. And the thing about Ralph is that he isn’t smart. Each ralph gets a goal file, works in isolation, and produces a PR. No communication with other ralphs. No awareness that other ralphs even exist. The orchestrator that manages them is equally mindless: just a loop that watches a queue and spawns workers.
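The orchestrator really is that dumb. A Python sketch of the pattern, with a stand-in for spawning a real ralph process (the actual orchestrator is a separate service; this only illustrates the shape):

```python
# A minimal sketch of the "mindless orchestrator": drain a goal queue,
# hand each goal to an isolated worker, retry on failure. Workers share
# no state and never communicate. run_worker is a stand-in for spawning
# a real ralph process that ends in a PR.
from queue import Queue, Empty

def orchestrate(goals: Queue, run_worker, max_retries: int = 2) -> list:
    results, attempts = [], {}
    while True:
        try:
            goal = goals.get_nowait()
        except Empty:
            return results                      # queue drained, we're done
        try:
            results.append(run_worker(goal))    # isolated: no shared state
        except Exception:
            attempts[goal] = attempts.get(goal, 0) + 1
            if attempts[goal] <= max_retries:
                goals.put(goal)                 # retry on failure
```

There is no coordination logic because there is nothing to coordinate: every goal is self-contained by construction.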

That’s the pattern that actually works. Not sophisticated hierarchies of communicating agents, but swarms of independent workers that don’t talk to each other at all.

The intelligence we thought needed to be distributed throughout the agent hierarchy actually belongs in one place: at the top, in the process that creates and decomposes goals. Everything below that is mechanical execution.

Two Halves of a Pipeline

We’re starting to see a pipeline with two distinct halves.

The bottom half already exists. Ralph-runs watches a queue, spawns workers, creates PRs on success, retries on failure. Ralph-plans (new this week) replaced GitHub Issues with a dedicated Go service backed by SQLite, giving us goal management that isn’t tied to any particular repository. Ralph-shows provides a lightweight dashboard so you can see what’s queued and what’s running. The whole system works well enough that we’ve been shipping real changes through it daily.
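For a sense of what a repository-agnostic goal store looks like, here is a rough Python-over-SQLite sketch. The real ralph-plans is a Go service, and its schema isn't described in this post; the table layout and column names below are assumptions for illustration only.

```python
# A rough sketch of a goal store like the one ralph-plans might wrap.
# The real service is Go; this schema is an assumption, not its design.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS goals (
    id     INTEGER PRIMARY KEY,
    repo   TEXT NOT NULL,                   -- any repo, any org
    body   TEXT NOT NULL,                   -- the goal file content
    status TEXT NOT NULL DEFAULT 'queued'   -- queued/running/done/failed
);
"""

class GoalStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.executescript(SCHEMA)

    def enqueue(self, repo: str, body: str) -> int:
        cur = self.db.execute(
            "INSERT INTO goals (repo, body) VALUES (?, ?)", (repo, body))
        self.db.commit()
        return cur.lastrowid

    def claim_next(self):
        """Claim the oldest queued goal for a worker; None if queue is empty."""
        row = self.db.execute(
            "SELECT id, repo, body FROM goals "
            "WHERE status = 'queued' ORDER BY id LIMIT 1").fetchone()
        if row:
            self.db.execute(
                "UPDATE goals SET status = 'running' WHERE id = ?", (row[0],))
            self.db.commit()
        return row
```

The key property is the `repo` column: because goals carry their own target, the store isn't bound to any one repository the way GitHub Issues is.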

Moving off GitHub Issues was necessary because the old approach couldn’t reach beyond our own repos. You can’t create issues in someone else’s project to track a contribution you want to make. With ralph-plans managing goals centrally, a ralph can target any repository, any organization. The end result might be a PR in a project we don’t own.

The top half is the missing piece. Right now, Mike creates goals interactively with Claude, talking through requirements and refining scope in conversation until the goal file is ready. The agent is already doing most of the heavy lifting, but the process still needs Mike in the loop for every goal. That’s a 10x multiplier on human input. We need it to be 100x or 1000x.

This is where Ikigai needs to evolve. The long-running Ikigai interface, with its multi-model support, sliding memory window, and persistent context, is perfectly positioned for this role. It’s already designed to not be tied to a single repo or project. What it needs is the ability to take high-level stories (“add authentication to the API,” “refactor the database layer,” “contribute a fix upstream to library X”) and decompose them into concrete, ralph-sized goals.
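The post doesn't show the goal file format, so here is a hypothetical sketch of what "ralph-sized" output from that decomposition could look like: each goal is self-contained, targets one repo, and carries everything the worker needs, since ralph workers get no other context. Field names and the example repo are invented for illustration.

```python
# Hypothetical shape for a "ralph-sized" goal: self-contained, one repo,
# explicit completion criteria. Field names are assumptions, not the
# actual ralph-plans format.

def render_goal(repo: str, title: str, steps: list, done_when: list) -> str:
    """Render one decomposed goal as a self-contained goal file."""
    lines = [f"# Goal: {title}", f"Repo: {repo}", "", "## Steps"]
    lines += [f"- {s}" for s in steps]
    lines += ["", "## Done when"]
    lines += [f"- {d}" for d in done_when]
    return "\n".join(lines)

# "Add authentication to the API" decomposed into two of its pieces
# (repo name is a placeholder):
goals = [
    render_goal("acme/api", "Add user model and password hashing",
                ["Create User table migration", "Hash passwords with bcrypt"],
                ["Migration applies cleanly", "Unit tests pass"]),
    render_goal("acme/api", "Add login endpoint issuing session tokens",
                ["POST /login validates credentials", "Return a signed token"],
                ["Endpoint returns 401 on bad credentials"]),
]
```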

Ikigai as the Memory Layer

The real unlock is cumulative knowledge. Every time Ikigai works with a project, it should be building understanding: the architecture, the conventions, the testing patterns, the deployment pipeline. That knowledge lives in Ikigai’s permanent memory and permanent documents, accumulating over weeks and months.

A first encounter with a new codebase means Ikigai has to explore before it can plan. But the tenth encounter with the same codebase should be qualitatively different. Ikigai already knows the project structure, the patterns that work, the pitfalls to avoid. Goal decomposition gets faster and more accurate because the context is already there.

This is the advantage a persistent, memory-rich collaborator has over ephemeral workers. Ralph doesn’t need to remember anything because each goal is self-contained. Ikigai needs to remember everything because good goal creation requires deep project understanding.

The Vision, Simplified

Today, at the heart of every ralph is a feedback loop. A script builds a prompt from templates (goal + compressed history + recent iterations + skills), pipes it into claude -p via streaming JSON, parses the structured output, records progress, commits the code changes, and loops. Claude is stateless inside that loop. The script manages all the context, summarizing older iterations every five cycles to keep the window from overflowing.
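The loop above can be sketched in Python. The real script shells out to `claude -p` with streaming JSON; here `call_model` is a stand-in so the context-management logic stays visible. The five-cycle summarization threshold comes from the post; the reply fields and everything else are assumptions.

```python
# A minimal sketch of the ralph feedback loop: build prompt, call the
# (stateless) model, record progress, commit changes, compress history
# every five cycles. call_model stands in for piping into `claude -p`.

def build_prompt(goal, summary, recent, skills):
    parts = [skills, f"# Goal\n{goal}"]
    if summary:
        parts.append(f"# History (compressed)\n{summary}")
    if recent:
        parts.append("# Recent iterations\n" + "\n".join(recent))
    return "\n\n".join(parts)

def ralph_loop(goal, skills, call_model, summarize, commit, max_iters=20):
    summary, recent = "", []
    for i in range(1, max_iters + 1):
        reply = call_model(build_prompt(goal, summary, recent, skills))
        recent.append(f"[{i}] {reply['progress']}")   # record progress
        if reply.get("changes"):
            commit(reply["changes"])                  # commit code changes
        if reply.get("done"):
            return i
        if i % 5 == 0:                                # keep the window bounded
            summary = summarize(summary, recent)
            recent = []
    return max_iters
```

The model is stateless inside the loop; all continuity lives in `summary` and `recent`, which the script alone maintains.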

Eventually claude -p gets replaced by ikigai -p in that loop. That’s a practical upgrade (model provider selection, token budget control) but it’s a like-for-like swap, not the interesting part.

The interesting part is what sits above the pipeline. Interactive Ikigai, the long-running collaborator with sliding memory windows and permanent memory, is the missing upper half. That’s the process that understands projects deeply enough to decompose “add authentication to the API” into five well-scoped goal files and push them into ralph-plans. It’s not ephemeral. It doesn’t start fresh. It carries forward everything it has learned about a codebase across weeks and months of interaction.

The current constellation of ralph services (runs, logs, counts, plans, shows) will likely collapse into a single web application called Ralphs. Five nano-services was the right move for fast iteration, but they’re converging on a single coherent tool.

Adapt or Become Yesterday’s News

This space moves fast enough that “daily” is no exaggeration for the required pace of adaptation. We wrote “See Ralph Run” two days ago and it’s already an incomplete picture. OpenClaw didn’t exist six weeks ago and already has 145,000 stars. Gas Town went from blog post to working orchestrator in weeks. The window between “interesting idea” and “someone shipped it” has compressed to days.

The only response is to keep moving. Not chasing every new project, but staying honest about whether your current assumptions still hold. Ours didn’t. We were building toward orchestrated hierarchies when the evidence pointed to dumb swarms with smart planners. OpenClaw helped us see it, but the data was already in our own numbers.

The pivot is simple: stop trying to make agents smarter at coordinating with each other. Make the thing that creates their work smarter instead.


Co-authored by Mike Greenly and Claude Code