A modern meta-agent orchestration system. A root, N peers, an event bus between them, an idea chain holding it together.

Note

This post was drafted by agent1, a clone of agent0, from inside netsky itself. I edited lightly.

As of today, netsky is the system I run to get work done with AI agents. One root agent, a configurable number of peers, each in its own tmux session, talking over a shared event bus. A viable system I use every day, grown out of the idea chain of predecessors: birdbrain, workspaces, tmux orchestration, iAgent, agentchain, rewriting the iMessage plugin in Rust, and the MCP comms bus.

This post is a tour of the current shape.

the root and the peers #

One root per machine. I call it agent0. Its cwd is ~/netsky. It talks to me directly through the keyboard, iMessage, and email. It drives the rest.

When I need more hands, agent0 brings up peers. netsky 8 spins up nine tmux sessions: agent0, agent1, …, agent8. Each peer is a full Claude Code agent with the same tools, different identity, and narrower MCP access. Only agent0 reaches me; clones route through agent0. Peers are not subagents. They are clones of agent0 with a different AGENT_N env var stamped onto the launcher and an identity stanza appended to the system prompt at spawn time.
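The launcher loop is small enough to sketch. This is a dry-run sketch, not the real netsky script: the session names and AGENT_N variable match the post, but the exact claude invocation (and the identity-stanza append it elides) are assumptions.

```shell
# dry-run sketch of `netsky N`: print the tmux command for each session
# instead of running it; the real launcher also appends the identity
# stanza to each clone's system prompt at spawn time
netsky_dryrun() {
  local count=$1 n
  for n in $(seq 0 "$count"); do
    # one detached tmux session per agent, AGENT_N stamped into the env
    echo "tmux new-session -d -s agent$n 'AGENT_N=$n claude'"
  done
}
```

`netsky_dryrun 8` prints nine commands, agent0 through agent8, matching the nine sessions described above.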

Everything else is shared through a single file at the repo root, 0.md. Every agent reads it on boot. Edit 0.md, restart, every agent wakes up with the new world view.

the shape #

flowchart TB
    user([human user])
    user <--> agent0
    agent0 <--> clones
    subgraph clones [clones: agent1..agentN]
      direction LR
      agent1
      agent2
      agent3
      agentN[...]
    end
    agent0 -.->|/restart| agentinfinity
    agentinfinity -.->|respawns| agent0

Each node lives in its own tmux session. The human can attach to any.

Authority is one-way: agent0 outranks clones, clones are peers of each other, and clones treat agent0 as they’d treat a human user (judgment, not blind obedience). Clones do not spawn clones. The tree stays shallow and auditable.

the event bus #

Agents talk to each other over an MCP channel with filesystem inboxes. Full write-up in the comms-bus post. Short version: a sender drops a JSON envelope in the target’s inbox directory; a ~250ms poll loop reads it, emits a channel event into the session, and deletes the file. Outbound is one tool call, reply(chat_id, text). Inbound arrives shaped like every other channel.
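The sender half fits in a few lines of shell. A sketch under assumptions: the `$BUS_ROOT/<agent>/inbox` layout and the envelope fields are illustrative, not the real netsky-io schema. The part that matters is the write-then-rename, which keeps partial files invisible to the reader.

```shell
# drop a JSON envelope in another agent's inbox; paths and field names
# are illustrative, the atomic rename is the real invariant
send() {
  local to=$1 from=$2 text=$3
  local inbox="$BUS_ROOT/$to/inbox"
  mkdir -p "$inbox"
  local tmp
  tmp=$(mktemp "$inbox/.tmp.XXXXXX")
  # note: a real sender would JSON-escape $text
  printf '{"from":"%s","ts":"%s","text":"%s"}\n' \
    "$from" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$text" > "$tmp"
  # rename is atomic, so the ~250ms poll loop never sees a half-written file
  mv "$tmp" "$inbox/msg-$$-$RANDOM.json"
}
```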

The same shape handles an iMessage from my phone and a ping from another agent. The counterparty is interchangeable.

interfaces #

Messages to me come in over iMessage and email. Both are MCP channel sources in netsky-io, registered at user scope for agent0 only. Outbound works the same way. A reply on the imessage source routes to my phone, signed - agent0. On gmail, it writes a draft I can send. Clones do not have these sources loaded. agent0 is the only agent with a line to me.

The other direction is tmux. Every agent lives in a session named agent<N>. tmux attach -t agent3 drops me into agent3’s pane. I can type at it, watch it work, or just observe. Nothing in the constellation is hidden or out of reach.

$ tmux attach -t agent3
...
<channel source="agent" chat_id="agent3" from="agent0" ts="2026-04-13T03:31:14Z">
check workspaces/zorto-slide-perf for anything sus; report back.
</channel>

> /up
agent3 session 1
current UTC: 03:31:22Z
picking up from notes/2026/04/12/agent3.md

idea chain as data structure #

Temporal context lives under notes/<YYYY>/<MM>/<DD>/<agent>.md. Each session ends with /down, which appends a short what-why-how block. The next session starts with /up, which reads today’s and yesterday’s entries to pick up context before doing anything else.

notes/
└── 2026/
    └── 04/
        └── 13/
            ├── agent0.md
            ├── agent1.md
            └── agent3.md

Over days, these files become a stream of decisions, mistakes, and follow-ups, scoped to the agent that lived them. This is the idea chain turned into data each agent reads on boot. No vector DB, no embeddings. A dated markdown file in a git repo. Idea-chain data is cheap when it is plain text.
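The write half of that loop is just an append. A sketch assuming the directory layout above; the exact block format /down writes is my assumption, not the skill's.

```shell
# append a what-why-how block to today's note file for one agent;
# the heading and bullet format are illustrative
down() {
  local agent=$1 what=$2 why=$3 how=$4
  local f="notes/$(date -u +%Y/%m/%d)/$agent.md"
  mkdir -p "$(dirname "$f")"
  {
    echo "## session ending $(date -u +%H:%M)Z"
    echo "- what: $what"
    echo "- why: $why"
    echo "- how: $how"
  } >> "$f"
}
```

/up is the mirror image: cat today’s and yesterday’s files before touching anything else.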

self-restart #

agent0 can restart the entire constellation without a human in the loop. The /restart skill persists the session’s notes, writes a handoff message to /tmp/netsky-restart-request.txt, and kills agent0’s own tmux session. From there it’s out of agent0’s hands. Within two minutes, the watchdog picks up the request, tears everything down, runs netsky again, and drops the handoff envelope into the fresh agent0’s inbox. Tomorrow’s post walks through the same flow from inside the bootstrapper.
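The front half of that flow can be sketched in two steps. The request path matches the post; staging via rename so the watchdog never reads a half-written handoff is my assumption about the implementation.

```shell
# sketch of the front half of /restart: stage the handoff, then die.
# the back half belongs to the watchdog, not agent0
REQUEST=${REQUEST:-/tmp/netsky-restart-request.txt}

request_restart() {
  local handoff=$1 tmp
  tmp=$(mktemp)
  printf '%s\n' "$handoff" > "$tmp"
  mv "$tmp" "$REQUEST"            # appears atomically for the watchdog
  tmux kill-session -t agent0     # from here it is out of agent0's hands
}
```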

self-healing via agentinfinity #

Three actors carry the system. agent0 orchestrates. The clones (agent1..agentN) do the delegated work. agentinfinity watches them. The watchdog lives in its own tmux session, deliberately named outside the ^agent[0-9]+$ regex that the teardown uses, so it survives every restart it drives.

The watchdog is two parts. bin/netsky-watchdog-tick is a bash primitive fired from cron every two minutes. A long-lived claude supervisor in the agentinfinity session owns that cron and handles the fallout. The supervisor never decides what a restart does; bin/netsky-watchdog-tick and bin/netsky-restart carry the authoritative logic. The supervisor is the trigger and the recorder.

Every tick does one of three things:

  1. Planned restart. If /tmp/netsky-restart-request.txt exists and is non-empty, the tick claims it atomically, hands the contents to bin/netsky-restart as the handoff, and cleans up. That is the back half of /restart.
  2. Crash recovery. If no request is pending and tmux has-session -t agent0 fails, agent0 is gone without permission. The tick runs bin/netsky-restart with a synthetic crash-recovery handoff, which tells the fresh agent0 that its predecessor died and to wait on me before resuming prior work.
  3. Healthy. Otherwise, the tick prints one line and exits.
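The three-way decision is small enough to sketch in full. The checks mirror the list above; the helper names are mine, and bin/netsky-restart sits behind a function so the logic stays testable without tmux or cron.

```shell
REQUEST=${REQUEST:-/tmp/netsky-restart-request.txt}

# the two effects, behind functions so the decision logic stays testable
has_agent0() { tmux has-session -t agent0 2>/dev/null; }
do_restart() { bin/netsky-restart "$1"; }

tick() {
  if [ -s "$REQUEST" ]; then
    # planned restart: claim the request atomically, then hand it off
    local claimed="$REQUEST.claimed.$$"
    mv "$REQUEST" "$claimed" 2>/dev/null || return 0  # another tick won
    do_restart "$(cat "$claimed")"
    rm -f "$claimed"
    echo "restart: planned"
  elif ! has_agent0; then
    # agent0 is gone without permission: synthetic crash-recovery handoff
    do_restart "crash recovery: predecessor died; wait on the human"
    echo "restart: crash-recovery"
  else
    echo "healthy"
  fi
}
```

Claiming with a rename means two overlapping ticks cannot both run a planned restart: whoever loses the mv just exits healthy.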

bin/netsky-restart is the shared atomic primitive: kill every session matching ^agent[0-9]+$, respawn the constellation with netsky N, dismiss the dev-channels TOS dialog on each new pane, wait for every /up report line, then drop the handoff as a JSON envelope into agent0’s inbox via atomic rename.

sequenceDiagram
    participant A0 as agent0
    participant FS as /tmp + inbox
    participant AI as agentinfinity
    participant NEW as fresh agent0
    A0->>FS: stage handoff at /tmp/netsky-restart-request.txt
    A0->>A0: tmux kill-session agent0
    Note over AI: cron fires bin/netsky-watchdog-tick
    AI->>FS: claim the request (atomic rename)
    AI->>AI: bin/netsky-restart
    AI->>NEW: netsky + wait for /up
    AI->>FS: deliver handoff envelope to agent0 inbox
    NEW-->>AI: healthy

When automation runs out of road, the supervisor has one escape hatch: iMessage. No event bus, no filesystem edits. If bin/netsky-restart exits non-zero, if agent0 keeps crashing, or if the state is otherwise unrecoverable, it texts me one short line signed - agentinfinity. Nothing on healthy ticks. Nothing on successful restarts.

This post exists because the loop works. Today’s restart was the first end-to-end test.

workspaces as isolation #

When agents need to modify a repo, they use workspaces/<task-name>/ at the netsky root. Fresh git clone on a dedicated branch. No worktrees, no stepping on each other’s refs. Multiple agents can share one workspace for a single task if they coordinate; otherwise, one workspace per task. Workspaces die with the task.
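Setup is one clone and one branch. A sketch; the netsky/<task> branch-naming convention is my assumption.

```shell
# fresh clone on a dedicated branch; no worktrees, no shared refs
make_workspace() {
  local task=$1 repo=$2
  local ws="workspaces/$task"
  git clone --quiet "$repo" "$ws" &&
  git -C "$ws" switch --quiet -c "netsky/$task"   # branch name is illustrative
}
```

Teardown is rm -rf of the workspace directory once the branch is pushed or abandoned.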

minimal machinery #

The pieces that make this work are small on purpose:

  • one root markdown file (0.md) every agent reads on boot
  • one tmux session per agent, named agent<N>
  • one MCP event bus source, about 150 lines of Rust
  • one /up and /down skill pair that maintains the idea chain
  • one /restart skill, one agentinfinity watchdog session, and two bash primitives (bin/netsky-watchdog-tick, bin/netsky-restart) that carry planned + crash-recovery restarts
  • one /spawn skill for ephemeral subsystems

Beyond those, everything is borrowed from Claude Code, claude -p, and MCP. There is no bespoke framework. There is no custom UI. There is a flat repo, a handful of skills, and a file-based event bus.

what’s next #

A richer peer-to-peer protocol over the event bus for work distribution: pick a task, race to claim it. An idea chain that crosses agents, not just days. Neither is built yet.

idea chain #

netsky is the current iteration. Next week it will be something else. That is the whole idea.