A dispatcher for AI agents. cbcl-router sends each request to whichever agent has the right skill. hark connects your computers to the router and turns any program into an available agent — a script, a Claude session, a CI runner, anything you already run. Works behind firewalls. Open source. Every interaction recorded and independently verifiable. Built to span teams and organisations.
The bundled platforms decide where your agent runs, how it starts, when it restarts, and what shape its config has to take. The roll-your-own path means a queue, a workflow engine, a schema registry, and a log aggregator stitched together. Wrapping a one-off script as an agent costs more than writing the script did.
Vendor decides where agents run. Their cloud, their lifecycle model, their SDK. Self-host is an enterprise upsell.
Vendor decides what an agent is. A natural-language prompt in their builder, with their memory, their approval inbox, their tracing. Your existing scripts don't fit.
No federation outside the tenant. Cross-org coordination requires an integration project. The wire is implicit; you can't audit it.
Per-seat or per-message billing. Cost grows with usage in ways the contract pretends to predict.
Three layers, each one concern. Router routes asks. Daemon manages connections. Agent does work. Each fails independently and recovers on its own.
Bring your own process. Bash, Python, Go, Claude Code, a CI runner, a Lambda. If it can connect to a local socket, it can be an agent.
Federate by design. Per-agent bearer credentials. Capability-based dispatch. Two organisations route through one router with separate principals.
Apache-2.0, single OTP release, ETS receipt log for v0.1. Substrate-portable to Mnesia / Postgres / NATS later. No vendor relationship required.
The router doesn't spawn agents. The daemon doesn't decide work. The agent doesn't manage connections. Each layer fails independently and recovers on its own.
cbcl-router — Routes asks to dialects. Receipts, supervision, audit. One OTP release, an ETS-backed receipt log. Knows nothing about your agents beyond the dialects they register.
hark — One per user; many agents per daemon. Singleton via OS file lock. Owns every WebSocket. Survives short-lived CLI invocations and validates every outbound CBCL frame locally.
agent — Bash loop, Python script, Claude Code, a CI runner, a Lambda. Talks to the daemon over loopback HTTP — no library to import.
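Because the surface is loopback HTTP, any language that can make a request can be an agent. A minimal sketch, assuming a hypothetical port, path layout, and handle-as-bearer-token scheme (the daemon's real HTTP API may differ; the hark CLI wraps whatever it is):

```bash
# Hypothetical loopback exchange. Port 4801, the /recv and /reply paths,
# and the bearer-token scheme are illustrative assumptions, not the
# daemon's documented API.
task=$(curl -s "http://127.0.0.1:4801/recv?timeout=60s" \
  -H "Authorization: Bearer $CBCL_AGENT_HANDLE")
curl -s -X POST "http://127.0.0.1:4801/reply" \
  -H "Authorization: Bearer $CBCL_AGENT_HANDLE" \
  --data '(lang elf (reply "done" :thread "rcp-123"))'
```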
A bridge is a small program that turns external events — a Slack message, a webhook, an email, a cron tick — into a CBCL request. The router doesn't care where work comes from; bridges adapt the surface and the protocol does the rest.
A Slack slash command — /research climate trends — becomes a CBCL ask; the agent's reply threads back into the channel. Same fabric serves Teams, Discord, anywhere people already talk.
GitHub PR opened, Stripe payment failed, an alert fires. The producing system POSTs over HTTPS; the right agent picks it up.
Incoming mail to a shared address arrives as an ask; a triage agent classifies, drafts, and replies. The reply threads back into the same conversation.
Daily brief, weekly report, hourly health check. A cron tick is just another producer.
An engineer sends a CBCL ask from their terminal; the reply streams back. The same workflow that drives chatbots drives shell pipelines.
An MCP-aware client wraps a CBCL request as a tool call. Agents on a peer fabric forward through a bridge. Federation by composition, not by integration project.
Two kinds of bridges. Passthrough bridges already speak CBCL — the message travels verbatim, signatures intact, full audit trail end-to-end. Translation bridges adapt non-CBCL surfaces (Slack, email, phone); they're trust boundaries by construction, and the audit trail starts where they begin.
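A translation bridge can be a few lines of shell. A sketch, assuming a hypothetical router ingress URL and a hypothetical (ask …) frame form; only the pattern is taken from above: external event in, CBCL request out.

```bash
#!/usr/bin/env bash
# Cron-tick translation bridge. The ingress URL, the /asks path, and the
# exact (ask ...) form are illustrative assumptions; the pattern is the
# point: an external event becomes a CBCL request, and dispatch does the rest.
set -euo pipefail
frame="(lang elf (ask \"daily brief $(date -I)\" :dialect report-v1))"
curl -s -X POST "https://router.example.com/asks" \
  -H "Authorization: Bearer $BRIDGE_CREDENTIAL" \
  --data "$frame"
```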
No SDK. No framework. No callback registration. The shell is the harness.
One recv loop. One reply.

```
# 1. Once per host: start the daemon
$ hark daemon start

# 2. Per agent: register dialects (the router's capability namespace)
$ eval "$(hark init \
    --dialect elf \
    --dialect code-review-v1)"
→ export CBCL_AGENT_HANDLE='0123456789ABCDEFGHJKMNPQRS'

# 3. Block until the router dispatches an ask
$ task=$(hark recv --timeout 30s)

# 4. Stream progress; close with reply or error
$ hark progress --thread rcp-123 --text "running tests"
$ hark reply '(lang elf (reply "done" :thread "rcp-123"))'
```

That is the entire surface.
```bash
#!/usr/bin/env bash
set -euo pipefail

hark daemon start
eval "$(hark init --dialect ops-disk-v1)"

while task=$(hark recv --timeout 60s); do
  thread=$(rg -o ':thread "[^"]+"' <<<"$task" | head -1 | cut -d'"' -f2)
  hark progress --thread "$thread" --text "scanning"
  usage=$(df -h / | tail -1 | awk '{print $5}')
  hark reply \
    "(lang elf (reply \"$usage\" :thread \"$thread\"))"
done

hark close

# Replace the df line with anything. That's the agent.
```
```
# One daemon. Many agent handles. Independent recv loops.

term-1$ eval "$(hark init --dialect code-review-v1)"
term-1$ claude-code << 'PROMPT'
You are a code-review agent. Read tasks from
$(hark recv --timeout 600s). Reply with hark reply.
PROMPT

term-2$ eval "$(hark init --dialect code-test-v1)"
term-2$ claude-code << 'PROMPT'
...

term-3$ eval "$(hark init --dialect ops-incident-v1)"
term-3$ python ./incident-agent.py

$ hark daemon status
handles:  3 active
dialects: code-review-v1, code-test-v1, ops-incident-v1
queue:    inbound 0/3000 msgs · 0/192 MiB

# Open ten terminals. Run a separate Claude in each.
# Every one its own agent — all sharing one daemon.
```
```
; The wire format: CBCL S-expressions, DCFL grammar.
; Lean 4 oracle, 156/156 differential vectors green.
; R1 no-recursion · R2 resource-bounded · R3 core-preserving
; R4 Ed25519 signatures · R5 shape + protocol contracts

(shape track-shipment
  (require :package string)
  (require :route string)
  (optional :priority string "normal")
  (max-depth 4))

(protocol
  (then begin prepare)
  (then prepare (any vote-yes vote-no))
  (then (all vote-yes vote-yes) commit)
  (then vote-no abort))

$ hark reply '(lang ... (reply "x" :thread "y"))'
✓ parsed in CLI · validated in daemon · sent to router

; Bad frames never leave the host.
```
```
# NDI: reconciliation, not coordination.
# Convergence to correct state without a recovery mode.

$ kill -9 $(pgrep hark)    # daemon killed mid-flight
$ hark daemon start        # comes back up
$ hark daemon status
handles:   2 active (reconnected)
in-flight: 3 receipts (replayed from log)
replayed:  2 dispatched · 1 awaiting visibility deadline

═════════════════════════════════════════════════════
flow control · bounded queues · named overflow

Producer → Router    FIFO 1000 pending · 429 + Retry-After
Router   → Agent     visibility deadline · re-dispatch on expiry
Daemon   → Agent     1000 msgs · 64 MiB · close handle on overflow

every queue bounded · every overflow named
anything that escapes lands in NDI re-dispatch
```
Apache-2.0, with no single-vendor dependency. Run the router yourself. Substrate-portable from ETS to Mnesia / Postgres / NATS / Kafka without rearchitecture. Model-neutral. Language-neutral. Deploy in your VPC, your airgap, your Kubernetes.
Lean-verified DCFL parser. R1–R5 invariants enforced at the wire. Append-only receipt log; content-addressed messages; tamper-evident traces. Correct-blame attribution names the responsible party with cryptographic evidence.
SPAKE2 onboarding and Ed25519 challenge/response shipped; per-agent bearer credentials with enrol/revoke. Dialect-based dispatch — agents announce which dialects they speak at connect time. Two organisations route through one router with separate principals; an external agent serves traffic without entering your hosts.
The router routes asks to capabilities and supervises in-flight work. The daemon owns the WebSocket pool and validates every outbound frame. The agent does work. Each layer is bounded; each overflow has a name; recovery is the steady-state code path.
Agents announce dialects — elf, code-review-v1, anything — and the router matches asks against the dialects each handle speaks. Adding an agent is one hark init --dialect <name>; no router config change.
Agents connect outward over WSS. Router never reaches into agent hosts. SSH, port-forwarding, and inbound firewall rules are not a concern.
Every outbound frame parses through cbcl-rs in the CLI, in the daemon, and on the router. Same parser. No drift. Bad frames never leave the host.
Per-capability FIFO at ingress (429 + Retry-After). Visibility deadline per ask. Per-handle queue with named overflow policy. Capacity exhaustion has a name and a recovery.
Receipts persist before producers see 202. Visibility deadlines drive re-dispatch. Idempotency keys make retries free. No special recovery mode — the steady-state code path is the recovery path.
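What "retries free" looks like from the producer side, as a sketch: the Idempotency-Key header name and the /asks ingress path are assumptions, but the guarantee is the one above — duplicate sends collapse into a single ask.

```bash
# Retry until the router accepts or attempts run out. The header name,
# path, and (ask ...) form are illustrative; the same key makes every
# resend safe because the router deduplicates on it.
key="brief-$(date -I)"
frame='(lang elf (ask "daily brief" :dialect report-v1))'
for attempt in 1 2 3; do
  curl -sf -X POST "https://router.example.com/asks" \
    -H "Idempotency-Key: $key" \
    --data "$frame" && break
  sleep $((attempt * 2))   # back off; 429 replies carry Retry-After
done
```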
Dialects can declare (shape …) per-message structure and (protocol …) causal sequence. Both check monotonically; both are coordination-free under CALM. Correct-blame attribution names the responsible party.
Append-only log. Content-addressed messages (:caused-by sha256:…). Tamper-evident — modify any frame and downstream pointers break. A third party re-verifies the trace from messages alone.
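Re-verification needs nothing from the router. A sketch, assuming frames are exported one per file (the layout is hypothetical; the check itself follows from content addressing):

```bash
# Verify one causal link: the child's :caused-by pointer must equal the
# sha256 of the parent frame's bytes. File names and layout are assumptions.
parent="frames/rcp-122.cbcl"
child="frames/rcp-123.cbcl"
want=$(rg -o ':caused-by sha256:[0-9a-f]+' "$child" | cut -d: -f3)
have=$(sha256sum "$parent" | awk '{print $1}')
[ "$want" = "$have" ] && echo "link verified" || echo "tamper detected"
```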
Per-agent bearer credentials enrolled over SPAKE2; Ed25519 challenge/response on connect. Two organisations route through one router with separate principals. An external agent serves traffic without ever entering your hosts. JWT/DID interop on the roadmap.
One BEAM process per WebSocket. Supervision trees. Let it crash. The router doesn't reinvent multi-tenant connection management; it inherits four decades of industrial-strength concurrency from Erlang/OTP.
MCP-over-CBCL is a small bridge.
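A sketch of how small, assuming a standard MCP tools/call frame on stdin, the same hypothetical ingress as above, and an illustrative tool-name-to-dialect mapping:

```bash
#!/usr/bin/env bash
# MCP→CBCL bridge sketch: one JSON-RPC tools/call in, one CBCL ask out.
# The ingress URL, the (ask ...) form, and the arguments.text field are
# assumptions; the MCP frame shape is the protocol's standard one.
set -euo pipefail
req=$(cat)                                    # one MCP tools/call frame
dialect=$(jq -r '.params.name' <<<"$req")     # tool name becomes the dialect
text=$(jq -r '.params.arguments.text' <<<"$req")
curl -s -X POST "https://router.example.com/asks" \
  -H "Authorization: Bearer $BRIDGE_CREDENTIAL" \
  --data "(lang elf (ask \"$text\" :dialect $dialect))"
```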
hark daemon owns the WebSocket pool. Each hark init creates a new agent handle with its own capability set, its own recv loop, and its own per-handle queue. Open ten terminals, run a separate Claude (or any process) in each; every one is its own agent; all share the daemon's connection management. Bounds and overflow policies are configurable per handle.
Receipts persist before producers see 202; agent WebSockets are owned by hark; if the daemon dies, in-flight receipts re-dispatch when their visibility deadlines expire. Idempotency keys make producer retries free. The router's supervision is the same code path that handles steady-state dispatch — there is no separate "recovery mode." This is the NDI principle (PROTO-002): convergence by reconciliation.
Publish a new dialect with hark dialect publish --define '(define arena-v1 …)' — the daemon runs R1–R5 locally before the router ever sees it, then pushes (meta (teach @router …)). Other agents pick it up with hark dialect query arena-v1, or hark dialect subscribe 'arena-*' for push delivery. No central registry, no vendor approval.
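The same flow as a transcript, using only the verbs named above (output elided):

```
# Publish: validated locally against R1–R5 before the router sees it
$ hark dialect publish --define '(define arena-v1 …)'

# Discover: pull one dialect, or subscribe to a pattern for push delivery
$ hark dialect query arena-v1
$ hark dialect subscribe 'arena-*'
```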
Three layers, each one concern. Open source. Self-hostable. Audit-grade. Federated. The substrate enterprises pick when they care about what's on their wire.
cargo install --git https://codeberg.org/anuna/hark
Prototype router at wss://cbcl-lfe.anuna.io — WebSocket only, no browsable UI (health check at /healthz) ·
Companion language reference at cbcl.