Infra setup¶
Last updated: 2026-04-13 · Reading time: ~30 min · Difficulty: moderate
TL;DR
- The shared brain is a directory of markdown files under ~/Dropbox/openclaw-backup/ that every agent reads and writes. No database. Four core primitives — facts, commitments, tasks, notes — plus per-person profiles and per-agent status/rules. Read ops/brain/README.md for the canonical schema.
- The brain has two halves that sync through different mechanisms. ops/brain/* is git-tracked config (templates, schemas, seed data) that flows git → VPS. The runtime state under ~/Dropbox/openclaw-backup/ (facts, people, commitments, tasks, notes) flows Dropbox ↔ VPS. Don't cross the streams.
- Dropbox on a headless VPS is the hardest easy thing in this setup. Budget an hour. Selective sync is non-optional, and you have to re-apply exclusions every time you re-link the daemon.
- Every agent gets its own Telegram bot. Bot commands and descriptions get clobbered on every OpenClaw restart and have to be re-applied via a startup hook.
- The unified deploy tool (agents/shared/deploy.py) replaces the pile of copy-pasted shell deploys it evolved from. It reads each agent's manifest.json and enforces ten safeguards. Three of them exist because of past outages, and they are the reason your cron does not silently break the morning after an upgrade.
Why a shared brain¶
Before the brain existed, every agent in this fleet was an amnesiac. Each cron fired a fresh LLM session, loaded its 8-file workspace, reasoned from scratch, and wrote nothing another agent could read. When the news agent learned I'd be traveling next week, the fact lived in the news agent's conversation and nowhere else. When the shopping agent later wondered whether to hold a delivery that would arrive while I was out of town, it had no way to know. Two agents, same user, no shared context — and I got to play messenger between them.
That failure mode is more expensive than it sounds. An agent that re-learns the same fact every session is one that re-asks you the same question every session. An agent that can't read another agent's notes contradicts the fleet within a week. An agent with no persistent record of its own mistakes will re-make them. The fleet does not hang together as a team unless there's a single place it all writes to and reads from.
The shared brain is that place. It's a directory of plain markdown files under ~/Dropbox/openclaw-backup/, synced to Dropbox, that every agent reads from and writes to. Four core primitives — facts (knowledge that decays), commitments (promises that resolve), tasks (action items), and notes (raw inputs awaiting triage) — plus per-person profile files and per-agent status / rules files. No database, no vendor, no API. All agents share the same working memory, and because everything is plain markdown I can open the whole thing in Obsidian and read what my agents know about my life.
The schema is borrowed from a separate cognitive-exoskeleton product I'm building in parallel — a system that models the same primitives (structured facts with epistemic status and bi-temporal tracking, per-contact tone preferences, proactive nudges) and runs them as a proper service with a graph database and an MCP-addressable API. The Clawford brain is the pragmatic local-first markdown version of the same ideas: same primitives, no database, no service, no training data leaving my disk. When a capability in that parallel system becomes production-stable I expect to migrate the corresponding Clawford agent onto it as an MCP tool and let the local-markdown version become a fallback. Until then, markdown files in a Dropbox folder are plenty of brain for one household.
The shape of the brain¶
The live brain lives at ~/Dropbox/openclaw-backup/ on the VPS. Inside it:
~/Dropbox/openclaw-backup/
├── people/ # One file per person — identity, circles, contact
├── facts/ # Append-only monthly fact logs (YYYY-MM.md)
├── commitments/ # Promise tracking (active.md, resolved.md)
├── tasks/ # Action queue (queue.md)
├── notes/ # Raw inputs awaiting triage (inbox.md)
├── agents/ # Per-agent status + rules files
├── archive/ # Monthly archival of completed / stale data
├── obsidian/briefings/ # Daily briefings the Obsidian bridge reads
├── fleet-health.json # Canonical fleet-state snapshot (see below)
└── scripts/ # Validation and maintenance scripts
Facts decay, commitments resolve. Facts are append-only and confidence-scored, and their effective confidence decays over time based on category — identity facts never decay, logistics facts decay within a week. Anything below an effective confidence of 0.2 is flagged stale and archived monthly. Commitments don't decay; they cycle through open → completed | overdue | cancelled. Notes sit in an inbox until a designated agent triages them into one of the other primitives. Tasks are the simplest of the four: a description, an assignee, an optional due date, a status.
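The decay rule above can be sketched as a simple exponential half-life. The category half-lives below are illustrative placeholders (the canonical table lives in ops/brain/README.md); only the 0.2 stale threshold and the identity/logistics behavior come from the text:

```python
from datetime import date

# Illustrative half-lives in days — the canonical table is in ops/brain/README.md.
# None means the category never decays (identity facts).
HALF_LIFE_DAYS = {"identity": None, "preference": 180, "logistics": 3}
STALE_THRESHOLD = 0.2  # below this effective confidence, the fact gets archived

def effective_confidence(base: float, category: str, recorded: date, today: date) -> float:
    """Exponential decay of a fact's confidence, keyed on its category's half-life."""
    half_life = HALF_LIFE_DAYS.get(category)
    if half_life is None:
        return base  # identity facts never decay
    age_days = (today - recorded).days
    return base * 0.5 ** (age_days / half_life)

def is_stale(base: float, category: str, recorded: date, today: date) -> bool:
    return effective_confidence(base, category, recorded, today) < STALE_THRESHOLD
```

With a 3-day half-life, a 0.9-confidence logistics fact drops below 0.2 in about a week — matching the "decays within a week" behavior described above.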
The full schema with field tables, half-lives, and ID conventions lives in ops/brain/README.md. Read it when you're about to write an agent that writes to the brain, not before — the schema only makes sense in the context of what an agent is trying to say.
The append-only rule is the most important convention in the brain. No agent overwrites another agent's entries. All writes are appends. This prevents collisions when two agents write to the same file simultaneously, and it makes the audit story trivial: every entry carries the agent ID and the timestamp, and the file is its own changelog. The only allowed in-place edits are resolving a commitment (source agent or human) and marking a task done (same rule). Everything else is an append (>>).
IDs are globally unique. Format: {agent-name}-{YYYY-MM-DD}-{seq}, where seq is a zero-padded three-digit counter per agent per day. A single agent can write at most 999 entries per day across the whole brain, which has never come close to being a real limit.
Two halves of the brain¶
This is the subtle part. The brain actually has two halves that live in different places and sync through different mechanisms.
The git-tracked half — ops/brain/* in the Clawford repo. This is configuration: the README.md with the canonical schema, the seed _template.md for new people files, per-agent rules scaffolds, and the validation scripts. It flows one-way: from local git → the VPS via deploy.py. An agent edit never writes back here. If you want to change a schema or update a rules file, you edit locally, commit, push, and re-run the deploy.
The Dropbox-synced half — ~/Dropbox/openclaw-backup/ on the VPS. This is runtime state: live facts, live people files (whose structure came from ops/brain/people/_template.md but whose content is populated by the Connector and the human), commitments, tasks, notes, per-agent status, and fleet-health.json. This flows bidirectionally: agents write into it on the VPS, Dropbox syncs it off-VPS, and the local side of the sync is where I open Obsidian against the same files.
Don't cross the streams. Writing agent config to the Dropbox half means it's not in git and can't be versioned, tested, or rolled back. Writing runtime state to the git half means it gets committed to history and potentially leaked. The deploy.py tool enforces this boundary with a drift check (Safeguard 4), and the pre-push hook catches the other direction. Anything in ops/brain/ that isn't a template or a schema is a bug; anything in ~/Dropbox/openclaw-backup/ that isn't runtime state is a bug.
🔦 Tip. If you find yourself wanting to git add ~/Dropbox/openclaw-backup/<something>, stop. That directory is Dropbox-synced runtime state. Whatever you're committing either belongs in ops/brain/ as a template (in which case move it there), or it's PII that belongs nowhere in git.
Fleet state — where to look¶
The canonical "is the fleet healthy?" signal is ~/Dropbox/openclaw-backup/fleet-health.json, written every few minutes by the host-cron probe (ops/scripts/fleet-health-host.sh). Ch 06 has the full story of why this lives on the host instead of in an LLM cron.
The per-agent agents/*.status.md files in the brain are legacy. Individual agents used to write to them on their heartbeat ticks, but nothing reads them as fleet state anymore and the newer cron prompts explicitly forbid agents from touching them at all (see the status-file-drift war story in Ch 07-1). If you see code parsing agents/*.status.md for health — the old Obsidian-briefing parser had this exact bug — it's reading stale data. Point it at fleet-health.json instead, and use the rendered morning briefing for human-facing reports.
Structural validity of the brain — required directories, core file headers, facts-file presence, Dropbox conflict files, oversized files — is a separate concern from fleet state, and it's handled by ops/brain/scripts/validate.py, run every six hours by Mr Fixit's brain-validation cron. The one subtlety worth calling out: the 500KB file-size warning deliberately excludes deploy-backups/ and workspace-snapshots/ via a SIZE_CHECK_EXCLUDE_DIRS set, because deploy.py's pre-deploy tarballs and the daily workspace-snapshot tarballs are supposed to be large (Lowly Worm's Camoufox profile clocks in above 200MB). The retention policy lives in deploy.py and workspace-snapshot.py, not in the validator. If you catch yourself "fixing" the validator to flag those directories, you're about to re-introduce the 24-warning noise floor that buried a real family-calendar.status.md FAIL in April 2026.
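The exclusion behavior is easy to sketch. The directory names and 500KB threshold come from the text above; the function shape is illustrative, not validate.py's actual code:

```python
from pathlib import Path

SIZE_LIMIT = 500 * 1024  # 500KB warning threshold
SIZE_CHECK_EXCLUDE_DIRS = {"deploy-backups", "workspace-snapshots"}

def oversized_files(brain_root: Path) -> list[Path]:
    """Files over the limit, skipping directories whose contents are
    supposed to be large (deploy tarballs, workspace snapshots).
    Retention for the excluded dirs lives elsewhere, not in the validator."""
    hits = []
    for path in brain_root.rglob("*"):
        if not path.is_file():
            continue
        if SIZE_CHECK_EXCLUDE_DIRS.intersection(path.relative_to(brain_root).parts):
            continue
        if path.stat().st_size > SIZE_LIMIT:
            hits.append(path)
    return hits
```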
Dropbox on a headless VPS¶
Dropbox on a headless Linux VPS is the single fiddliest thing in the setup. Not because it's hard exactly — because the fail modes are silent and the defaults assume a desktop user. Budget an hour the first time.
The install is straightforward:
cd ~ && curl -Ls 'https://www.dropbox.com/download?plat=lnx.x86_64' | tar xzf -
~/.dropbox-dist/dropboxd
The first run prints an authorization URL. Open it in a browser, log in, authorize, and the daemon confirms "This computer is now linked to Dropbox." Ctrl+C out of the foreground run, then restart under nohup or (better) a systemd unit so it survives SSH disconnects:
nohup ~/.dropbox-dist/dropboxd > /dev/null 2>&1 &
Install the dropbox CLI for status checks (sudo apt install nautilus-dropbox, then dropbox status).
Selective sync is non-optional¶
If your Dropbox account has anything besides the brain folder — and it does — you must exclude everything else before the daemon starts pulling it down. Without exclusions, the VPS downloads your entire Dropbox and fills the disk. Apply exclusions immediately after linking, not "once you notice disk getting tight."
mkdir -p ~/Dropbox/Archive ~/Dropbox/Personal ~/Dropbox/Projects
dropbox exclude add ~/Dropbox/Archive ~/Dropbox/Personal ~/Dropbox/Projects
dropbox exclude list
⚠️ Warning. Dropbox exclusions require full paths. dropbox exclude add Archive silently does nothing — no error, no effect. Always use ~/Dropbox/Archive. And if you ever unlink and re-link the daemon (e.g., after a VPS rebuild), you must re-apply every exclusion immediately, because a re-link starts with none.
Only openclaw-backup/ should remain syncing. Everything else is off.
Known fail modes¶
A few patterns worth knowing in advance:
- Ghost folders. Dropbox reports "Up to date" but a file isn't syncing. dropbox filestatus <path> shows unwatched. This happens after a re-link or when the folder existed in the cloud before the daemon was set up. Fix: delete the empty cloud-side folder and let the daemon upload the VPS's copy as new.
- Daemon stops silently after SSH disconnect. The daemon doesn't produce an error — it just stops. Fix: run under systemd instead of nohup, and make the fleet-health probe watch dropbox status on the host.
- "Dropbox isn't responding" during initial sync. Usually the daemon is overloaded by too many files at once. Fix: wait 30 seconds; if it persists, pkill -f dropbox && sleep 3 && nohup ~/.dropbox-dist/dropboxd &.
- Conflict files. Simultaneous writes produce files like active (conflicted copy 2026-04-08).md. Mr Fixit monitors for these every two hours and alerts on Telegram — do not auto-merge conflict files. Let the alert fire and resolve manually. Auto-merging is how you corrupt the commitment log.
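Detecting conflict copies — the alert-only half of that monitoring — can be as small as a recursive glob. This sketch is illustrative, not the actual monitoring cron; the point is that it finds and reports, and never merges:

```python
from pathlib import Path

def dropbox_conflicts(brain_root: Path) -> list[Path]:
    """Find Dropbox conflict copies like 'active (conflicted copy 2026-04-08).md'.

    Report these for manual resolution; never auto-merge them.
    """
    return sorted(brain_root.rglob("*conflicted copy*"))
```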
Telegram bots and the gateway¶
Every agent gets its own Telegram bot. If they all share one, every message comes from the same sender and at 3am when something alerts you, you want to know instantly whether it's Mr Fixit reporting a down agent or Hilda Hippo confirming a purchase. Separate bots give each agent its own name, its own avatar, and its own chat thread.
Creating a bot¶
For each agent, open Telegram, message @BotFather, send /newbot, and follow the prompts:
- Display name: the agent's name — "Mr Fixit", "Hilda Hippo", "Lowly Worm".
- Username: openclaw_{agent-id}_bot (the bot suffix is required).
- Save the bot token. It goes straight into your local .env as {AGENT}_BOT_TOKEN=..., never into git, never into a checked-in config.
- Optional: use BotFather's /setuserpic to upload an avatar. I match each bot's avatar to the agent's Busytown emoji.
Registering the bot with OpenClaw¶
Inside the container (via the oc wrapper from Ch 03):
oc channels add --channel telegram \
--token "$FIXIT_BOT_TOKEN" \
--account fixit \
--name "Mr Fixit"
oc agents bind --agent fix-it --bind telegram:fixit
Then pair the bot by sending /start from your phone. If it replies with a pairing code, run oc pairing approve telegram <CODE>. If it just starts a conversation with no pairing code, it's already paired.
⚠️ Warning. Every cron that needs to deliver a Telegram message must include --to <your-chat-id>, --account <agent-id>, and --announce. Omit any of the three and the cron runs, the agent does the work, and no message arrives. Silent delivery failure is one of the top three things to check when "the agent isn't working."
Bot commands and descriptions, and the OpenClaw clobber¶
A fresh Telegram bot looks generic: empty chat window, no slash-command picker, no hint of what the agent does. Two Bot API surfaces fix this — setMyCommands (the / picker) and setMyDescription / setMyShortDescription (the empty-chat window). Both must be set via the Bot API; there's no BotFather command for either one.
Two scripts handle the whole fleet:
- scripts/set-bot-commands.sh — sets each agent's custom / picker entries.
- scripts/set-bot-descriptions.sh — sets each agent's short + long descriptions.
Both scripts are idempotent, read tokens from ~/openclaw/.env, and use Python's urllib (not curl) for UTF-8-safe JSON bodies. You can run either from the host after a restart.
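The Bot API calls involved are plain POSTs. The helper below is an assumed shape (the fleet scripts themselves are shell and aren't shown here), but it illustrates why urllib beats curl for this: the JSON body is encoded as UTF-8 directly, with no shell quoting in the way:

```python
import json
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"

def build_set_commands_request(token: str, commands: list[dict]) -> urllib.request.Request:
    """Build a Bot API setMyCommands request with a UTF-8-safe JSON body."""
    body = json.dumps({"commands": commands}, ensure_ascii=False).encode("utf-8")
    return urllib.request.Request(
        API.format(token=token, method="setMyCommands"),
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending is one call (requires a real token and network access):
# with urllib.request.urlopen(build_set_commands_request(token, cmds)) as resp:
#     print(resp.read())
```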
Here's the load-bearing gotcha. OpenClaw's channel-sync re-applies its built-in default slash commands (/help, /status, /context, /exec, and 45 others) whenever its config hash changes — which includes every gateway restart. If you only rely on the commands you set via setMyCommands, OpenClaw will clobber them silently on the next docker compose restart and your bots revert to the generic OpenClaw surface. There are two mitigations, and I run both:
- Set commands.native: false + customCommands in ~/.openclaw/openclaw.json. This tells OpenClaw to stop re-applying its defaults on channel-sync. That's the durable fix.
- Run set-bot-commands.sh as a post-startup hook from entrypoint.sh, firing at +25s after container start. This is the belt-and-suspenders backup — if OpenClaw's config-hash logic changes under me or I forget to set commands.native: false on a new agent, the hook reapplies the custom commands within half a minute of startup. Same pattern for set-bot-descriptions.sh at +30s.
I reach for both. The openclaw.json fix is the real one; the entrypoint hook is the fix that survives future-me forgetting to make the real one.
🔦 Tip. After running either script, close and reopen the bot's chat in your Telegram client. Telegram aggressively client-caches both the command menu and the descriptions — pull-to-refresh doesn't always cut it, and you'll spend ten minutes thinking the script didn't work when the issue is that the Telegram app still has the old menu cached.
The unified deploy tool: deploy.py¶
Clawford's deploys used to be five copy-pasted shell scripts, one per agent, with every deploy-time decision baked in by hand and every new feature added to every shell script independently. That worked for about two agents and fell apart at four. agents/shared/deploy.py replaced all of them.
Clawford fights OpenClaw defaults on purpose¶
A running theme of this chapter — and of deploy.py in particular — is worth naming explicitly. OpenClaw ships with reasonable behaviors for a simpler use case: a single agent, fresh "who am I?" onboarding on every fresh workspace, a generic slash-command menu, a cautious approval policy, a sensible identity-bootstrap flow. Clawford is trying to build something more sophisticated than any of those defaults assume — a fleet of differentiated agents with immutable souls, pre-deployed identities, narrow-remit tool surfaces, and a permissive-but-audited exec policy — and every one of those choices fights an OpenClaw default. Specifically:
- OpenClaw's "who am I?" fresh-boot onboarding flow gets overwritten by a pre-deployed SOUL.md + IDENTITY.md. deploy.py auto-deletes the BOOTSTRAP.md that OpenClaw would otherwise drop into the workspace — if that file survives, it shadows IDENTITY.md and puts the agent into the "split-brain" state from Ch 06.
- OpenClaw's default slash-command menu gets overwritten by per-agent custom commands, re-applied via an entrypoint hook (see the bot-commands section above).
- OpenClaw's default exec-approvals policy gets overwritten by a committed baseline and enforced by Safeguard 8 (below).
- OpenClaw's default cron-message shell conventions get overwritten by the script contract and enforced by Safeguard 9 (below).
None of these are bugs in OpenClaw — they're reasonable defaults for a simpler use case. Every one of them is an override Clawford makes deliberately, and the rest of this section is how deploy.py makes the overrides stick across restarts, upgrades, and drift.
What deploy.py actually does¶
deploy.py reads an agent's manifest.json (see Ch 06 for the manifest shape) and does the full install idempotently: copies the workspace files into ~/.openclaw/<agent>-workspace/, seeds any declared state files, registers crons with OpenClaw, binds the Telegram bot, writes exec-approvals, applies chattr +i to the immutable files, deletes any BOOTSTRAP.md left behind by OpenClaw's onboarding flow, and captures a pre-deploy backup tarball.
It also enforces ten safeguards. I won't list them all here — DEPLOY.md has them in full — but three deserve a mention because each one traces back to a specific past outage:
Safeguard 8: exec-approvals drift is a blocking error¶
OpenClaw stores each agent's exec-approvals policy at ~/.openclaw/exec-approvals.json on the VPS. The permissive-with-allowlist model (Ch 06) means this file effectively defines "what an agent can do without a Telegram approval prompt." It is quiet and load-bearing, and quiet load-bearing things drift.
The specific failure I landed this safeguard against: after an OpenClaw upgrade, the live file got silently re-written with a tightened default, every cron that had previously worked started hitting approval prompts, and I didn't notice until the next morning's status report showed exec blocked by approval policy across the entire fleet. The fix was to commit the intended baseline to git at ops/exec-approvals-baseline.json and have deploy.py refuse to deploy if the live file on the VPS diverges from it. The tool prints a diff and waits for you to reconcile the drift — either by updating the baseline in git, or by re-writing the live file from the baseline — before it proceeds.
The whole fleet falling over on a schema-drift morning is a specific kind of pain, and I do not want to live through it twice.
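A drift check of this shape is small. This is an illustrative stand-in for Safeguard 8, assuming the exec-approvals file is a flat JSON object (the real file's schema isn't shown in this chapter):

```python
import json
from pathlib import Path

def exec_approvals_drift(baseline: Path, live: Path) -> list[str]:
    """Keys whose values differ between the committed baseline and the
    live exec-approvals file. A non-empty result blocks the deploy until
    the operator reconciles — update the baseline, or rewrite the live file."""
    want = json.loads(baseline.read_text())
    have = json.loads(live.read_text())
    return sorted(k for k in set(want) | set(have) if want.get(k) != have.get(k))
```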
Safeguard 9: forbidden cron-message patterns¶
Safeguard 9 statically walks every manifest's crons[].message field and refuses to deploy if any of them contain a pattern from a blocklist. The blocklist has grown two distinct classes on top of each other, and it's worth naming them separately because the stories are different.
Class 1 — shell operators that trip OpenClaw's exec preflight. The original outage: a cron message that said "run python3 foo.py; echo $? and capture the exit code." OpenClaw 2026.4.11's hardcoded preflight hard-rejects any interpreter invocation combined with shell operators like ;, &&, | python3, > /tmp/, 2>&1, sh -lc 'python…', or exit-code capture ($?). The rejection cascades into "approval required" alerts across the fleet. The fix was the script contract in Ch 06: scripts print one JSON line with a status field, and cron messages invoke them bare. Safeguard 9 grep-matches every shell operator in that list and refuses the deploy. It's a grep rule. It runs before anything touches the VPS.
Class 2 — R6 legacy prose clauses that drift agent LLMs into writing dead files. The April 2026 outage (covered in Ch 07-1's status-file-drift war story): every agent's cron prompts used to end with some variant of "Update your status file." Post-R6, per-agent *.status.md files are legacy — fleet-health.json is the authoritative health surface and nothing reads the status files anymore. Keeping the prose in the prompts drifted agent LLMs into writing status files with whatever schema and header they invented on the day, which is exactly how # family-calendar status (lowercase, wrong em-dash) got committed to a file that then tripped brain-validation. The fix had three layers: retire the validator's header check, strip the "Update your status file" clause from every cron prompt in the fleet, and add the exact string "Update your status file" to Safeguard 9's pattern list as a regression guard. Any future deploy that re-introduces the legacy clause is now refused at the gate with exit code 8.
The full pattern list lives in agents/shared/deploy.py at CRON_MESSAGE_FORBIDDEN_PATTERNS. If you're adding a new pattern, add the failing test case first (agents/shared/tests/test_cron_message_hygiene.py), watch it go red, then add the pattern and watch it go green. That file is the easiest place in the repo to land a red-green-refactor cycle and the cheapest place to prevent a class of outage.
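A trimmed stand-in for the Safeguard 9 gate — the real CRON_MESSAGE_FORBIDDEN_PATTERNS list is longer, but the mechanism really is just a pattern scan over every manifest's cron messages:

```python
import re

# Illustrative subset of the blocklist: Class 1 shell operators that trip
# OpenClaw's exec preflight, plus the Class 2 legacy prose clause.
FORBIDDEN = [
    r";", r"&&", r"\|\s*python3", r">\s*/tmp/", r"2>&1", r"\$\?",
    r"Update your status file",
]

def cron_message_violations(message: str) -> list[str]:
    """Every forbidden pattern a cron message matches; non-empty refuses the deploy."""
    return [p for p in FORBIDDEN if re.search(p, message)]
```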
Safeguard 10: config-source resolution and --bootstrap-configs¶
Real agent config files — SOUL.md, IDENTITY.md, TOOLS.md, AGENTS.md, USER.md, HEARTBEAT.md, MEMORY.md, CRONS.md — are gitignored. Only the *.example templates are tracked. This is a deliberate consequence of the 2026-04-13 PII sanitization — the real files contain family names, DOBs, calendar IDs, and similar; the templates contain the Sam / Alex / Jamie / Avery / Jordan cast and generic provider names.
On a fresh clone, the operator has templates but no real files. Deploying in that state is a bug. Safeguard 10 walks every config_files[] entry in the manifest and classifies each one:
- Real file present, no sentinel → proceed.
- Real file missing, .example sibling present → exit 5 with the --bootstrap-configs hint.
- Real file missing, no .example sibling → exit 5, "truly broken manifest."
- Real file present but still carrying the CLAWFORD_BOOTSTRAP_UNEDITED sentinel on its first line → exit 5, naming the files.
deploy.py <agent> --bootstrap-configs scaffolds each missing real file by copying its .example sibling and prepending a CLAWFORD_BOOTSTRAP_UNEDITED sentinel comment. The operator then hand-edits the dummy values to real ones, deletes the sentinel line, and re-runs deploy.py. The sentinel is the rail that stops you from accidentally shipping the Sam / Alex cast into a live agent's workspace after a fresh clone; I've been grateful for it exactly once, which is enough.
--skip-files bypasses Safeguard 10 entirely (the whole config-files pass is skipped), which is what I used throughout the April 2026 cron-prompt cleanup to push manifest-level edits through without also trying to re-sync workspace files I wasn't touching.
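The four-way classification above can be sketched as follows; the function and enum names are mine, the states and exit-5 semantics come from the list:

```python
from enum import Enum
from pathlib import Path

SENTINEL = "CLAWFORD_BOOTSTRAP_UNEDITED"

class ConfigState(Enum):
    OK = "real file present, no sentinel — proceed"
    NEEDS_BOOTSTRAP = "missing, .example present — exit 5, hint --bootstrap-configs"
    BROKEN_MANIFEST = "missing, no .example sibling — exit 5"
    UNEDITED = "present but still carries the bootstrap sentinel — exit 5"

def classify_config(real: Path) -> ConfigState:
    """Classify one config_files[] entry the way Safeguard 10 does."""
    example = real.parent / (real.name + ".example")
    if not real.exists():
        return ConfigState.NEEDS_BOOTSTRAP if example.exists() else ConfigState.BROKEN_MANIFEST
    first_line = real.read_text(encoding="utf-8").splitlines()[:1]
    if first_line and SENTINEL in first_line[0]:
        return ConfigState.UNEDITED
    return ConfigState.OK
```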
The other six safeguards — backup, source-cleanliness, diff-gated UPDATE, drift detection, workflow-contract banner, smoke test — are documented in DEPLOY.md with the same "here's the outage this exists because of" flavor.
Backups — what I back up and what I don't¶
Backups in Clawford come in three tiers:
- Pre-deploy workspace tarballs. deploy.py writes ~/.openclaw/deploy-backups/<agent>-<timestamp>.tar.gz before every deploy, and mirrors the tarball to ~/Dropbox/openclaw-backup/deploy-backups/ so it survives a VPS loss. Recovery from a bad deploy is tar -xzf.
- Dropbox itself. The entire ~/Dropbox/openclaw-backup/ tree is continuously synced off-VPS by the Dropbox daemon. Losing the VPS means re-provisioning and re-cloning — the brain state is in Dropbox.
- Monthly brain archival. Mr Fixit archives completed tasks and stale facts older than 90 days into ~/Dropbox/openclaw-backup/archive/YYYY-MM/ on a monthly cron. The archive is permanent — nothing in archive/ is ever deleted — so the whole history of the brain is recoverable.
There's a fourth recovery channel that isn't strictly a backup but bears mentioning, because it's the one that unblocked me the day Safeguard 10 caught a sanitization wipe: the live VPS workspace at ~/.openclaw/<agent>-workspace/ holds the deployed copies of every gitignored config file (SOUL.md, IDENTITY.md, TOOLS.md, etc.). If your local checkout loses the unsuffixed siblings — fresh laptop, accidental delete, the kind of PII-sanitization pass that drops them from git — scp openclaw@<vps>:~/.openclaw/<agent>-workspace/SOUL.md ./agents/<agent>/SOUL.md is the recovery move (repeat per file). The workspace is whatever was last successfully deployed; for the gitignored half of the repo, that's the freshest source of truth that exists, and the deploy tarballs in tier 1 are the next best thing if the workspace itself is gone too.
What's outside the Clawford backup system (and stored or handled elsewhere):
- Secrets. .env files, bot tokens, API keys, gateway tokens. These are gitignored and Dropbox-ignored by design — they never live in git or in Dropbox. They live in 1Password, where secrets belong: protected, encrypted, and recoverable without touching the VPS or the Clawford repo.
- Terraform state. Hetzner's state file is local to wherever you ran terraform apply from. Keep it on your laptop, or set up a remote state backend if you have strong feelings about remote state. I run local.
- The Docker image cache on the VPS. A rebuild pulls fresh. Nothing irreplaceable lives there.
The Obsidian bridge (optional)¶
Reading the brain via ssh and cat works. Reading it in Obsidian, with wikilinks and graph view and live-updating markdown rendering, is noticeably better. The bridge is optional — skip it if you don't already use Obsidian — but if you do, the setup is cheap.
The hybrid model¶
The bridge is split into two halves that flow in different directions:
| Layer | Direction | Mechanism |
|---|---|---|
| Reference data (people, commitments, tasks) | Bidirectional | Directory junctions from the vault into the brain |
| Daily state (schedule, fleet health, what needs attention) | Agent → human | A deterministic briefing file generated on a cron |
| Human intent (TODOs, plans, private notes) | Human only | Stays in Obsidian; agents never read it |
The brain stays the system of record for agent knowledge. Obsidian stays the system of record for human thinking. The bridge connects them without merging them.
What gets junctioned, and what stays out¶
| Vault path | Brain target | Why |
|---|---|---|
| Brain/People/ | openclaw-backup/people/ | Family and friend profiles — core graph nodes |
| Brain/Commitments/ | openclaw-backup/commitments/ | Promises the fleet is tracking |
| Brain/Tasks/ | openclaw-backup/tasks/ | Action items from the fleet |
| Briefings/ | openclaw-backup/obsidian/briefings/ | Daily briefings the fleet writes |
What deliberately stays out of the vault:
- agents/*.status.md — legacy per-agent status files (see the fleet-state section above); nothing useful is in them.
- fleet-health.json — machine-readable, not human-readable; use the rendered morning briefing instead.
- notes/inbox.md — internal triage state; the human inbox lives in Workflowy or wherever you actually capture notes.
- facts/ — verbose, agent-formatted, with confidence/decay metadata; not fun to read by hand.
- archive/ — historical, low-signal.
Setup¶
The vault itself must live outside Dropbox. Junctions from a Dropbox path into another Dropbox path cause double-sync; I have learned this one the hard way. Keep the vault on local disk.
On Windows, mklink /J from Git Bash is unreliable — use Python's _winapi.CreateJunction instead:
import _winapi

# (junction path inside the vault, existing target dir inside Dropbox)
junctions = [
    (r"C:\Users\you\YourVault\Brain\People", r"E:\Dropbox\openclaw-backup\people"),
    (r"C:\Users\you\YourVault\Brain\Commitments", r"E:\Dropbox\openclaw-backup\commitments"),
    (r"C:\Users\you\YourVault\Brain\Tasks", r"E:\Dropbox\openclaw-backup\tasks"),
    (r"C:\Users\you\YourVault\Briefings", r"E:\Dropbox\openclaw-backup\obsidian\briefings"),
]
for link, target in junctions:
    _winapi.CreateJunction(target, link)  # note the argument order: target first, then link
On macOS or Linux, use ln -s (symlinks work fine outside Dropbox's double-sync trap). Open the vault in Obsidian and the junctioned directories appear as native vault folders.
The daily briefing¶
A small Python script — deterministic, no LLM, zero token cost — runs on Mr Fixit's cron every morning and assembles a briefing file that Obsidian picks up via the junction. The pattern:
- Fire the cron 10 minutes after the upstream agent briefings that feed it (e.g., 12:10 UTC / 5:10 AM PT, just after Mistress Mouse's 5:00 AM morning briefing lands).
- Read fleet-health.json, commitments/active.md, tasks/queue.md, and any cached briefings from other agents (Mistress Mouse's calendar summary, Hilda Hippo's delivery digest, etc.).
- Render them into a single markdown file with YAML frontmatter and Obsidian [[wikilinks]].
- Write to ~/Dropbox/openclaw-backup/obsidian/briefings/YYYY-MM-DD.md.
- Dropbox syncs it locally; it appears in Obsidian via the junction.
Silent on success — no Telegram notification, you just open Obsidian in the morning and the briefing is there.
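A trimmed sketch of that script. The input paths come from the steps above; the alias map, frontmatter fields, and rendering details are illustrative:

```python
import json
from datetime import date
from pathlib import Path

ALIASES = {"Sam": "sam-smith"}  # display name → people/ slug; maintained by hand

def render_briefing(brain: Path, today: date) -> str:
    """Deterministic, no-LLM briefing: pure reads, one markdown render."""
    health = json.loads((brain / "fleet-health.json").read_text())
    commitments = (brain / "commitments" / "active.md").read_text()
    lines = [
        "---",
        f"date: {today.isoformat()}",
        "source: clawford-briefing",
        "---",
        f"# Briefing {today.isoformat()}",
        f"Fleet status: {health.get('status', 'unknown')}",
        "## Active commitments",
        commitments.strip(),
    ]
    text = "\n".join(lines) + "\n"
    for name, slug in ALIASES.items():  # turn known names into Obsidian wikilinks
        text = text.replace(name, f"[[{slug}|{name}]]")
    return text

def write_briefing(brain: Path, today: date) -> Path:
    out = brain / "obsidian" / "briefings" / f"{today.isoformat()}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(render_briefing(brain, today))
    return out
```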
What to remember¶
- Vault outside Dropbox. Seriously. Double-sync corrupts files and eats CPU.
- Read-only mindset for brain files. Browse and search freely. Avoid edits to commitments/active.md or tasks/queue.md from inside Obsidian — agents append to these with strict formatting, and a casual edit will break the parsing.
- Wikilink aliases are manual. The briefing script maps display names to slugs ("Sam" → "sam-smith"); update the map when you add a new person file.
- No LLM in the briefing. The script is pure I/O. If you want LLM summaries, that's a different cron and it belongs upstream of the briefing.
Pitfalls you'll hit¶
🧨 Pitfall. Skipping selective sync and letting the Dropbox daemon pull down your entire Dropbox. Why: a "small" personal Dropbox account can easily contain tens of thousands of files that will fill a 160 GB VPS in an afternoon and leave the daemon in a half-synced state that's painful to recover. How to avoid: apply exclusions within the first 60 seconds after the daemon links. Every single top-level folder except openclaw-backup/, full paths, no abbreviations. Re-apply after every re-link.
🧨 Pitfall. Setting custom Telegram bot commands without also setting commands.native: false in openclaw.json. Why: OpenClaw re-applies its default commands on every channel-sync (which happens on restart), clobbering yours. You'll deploy a perfect bot, walk away, restart for an unrelated reason, and come back to find every bot showing the generic OpenClaw command menu. How to avoid: set commands.native: false in openclaw.json as the durable fix, and run set-bot-commands.sh as an entrypoint hook as the backup. Both — they fail in different scenarios.
🧨 Pitfall. Editing commitment or task files directly from Obsidian. Why: agents append to these files with strict formatting conventions that their parsers depend on, and a casual human edit will silently break the parsing. Overdue detection will stop working; a resolved commitment will look open; a task marked done in Obsidian will stay open in the queue. How to avoid: treat the brain as read-mostly in Obsidian. Browse and search freely, but if you need to change an entry, do it via the agent (send Mr Fixit a message, or edit via Workflowy for notes), not via a direct markdown edit.
🧨 Pitfall. Auto-merging a Dropbox conflict file. Why: Dropbox creates active (conflicted copy 2026-04-08).md when two writers touch the same file simultaneously. If you write a script that merges both versions automatically, you will eventually lose commitment state or double-write a fact — the "winning" version of a merge is not obvious when both copies have structured entries with different IDs. How to avoid: let Mr Fixit alert on the conflict, look at both files by hand, pick the one that's correct (or merge by hand line by line), and only then delete the conflict copy.
See also¶
- Ch 02 — Before you start — the Dropbox and Telegram account prerequisites this chapter depends on.
- Ch 03 — VPS setup — the provisioned, Tailscale-reachable VPS this chapter's daemons run on.
- Ch 06 — Intro to agents — the manifest.json and exec-approvals model that deploy.py reads and enforces.
- Ch 08 — Security and hardening — the defense-in-depth story built on top of the exec-approvals baseline this chapter introduces.
- ops/brain/README.md — the canonical brain schema with full field tables, half-lives, and access matrix.
- agents/shared/deploy.py — the unified deploy tool itself. Start with the function it calls last, not the function it calls first.
- DEPLOY.md — the ten-safeguard inventory with the outage story behind each.
- scripts/set-bot-commands.sh and scripts/set-bot-descriptions.sh — the idempotent Telegram bot-surface scripts that run on every gateway restart.
- ops/exec-approvals-baseline.json — the committed baseline deploy.py Safeguard 8 checks against.
- Obsidian — the markdown vault app the bridge section above is built around.