OpenClaw Backup, Restore, and Migration: The Complete Guide
Your OpenClaw instance accumulates valuable state over time -- conversation memory, custom skills, integration configurations, API keys, and messaging session data. Losing this means rebuilding from scratch: re-pairing Telegram bots, re-creating skills, and losing all the context your agent has built up.
This guide covers backing up your data, restoring after failure, and migrating between servers.
What Data Needs Backing Up
Everything lives under ~/.openclaw/:
~/.openclaw/
openclaw.json # Main configuration
gateway.db # SQLite database (memory, conversations, state)
credentials/ # API keys and integration tokens
workspace/skills/ # Custom skills
workspace/memory/ # Persistent memory files
sessions/ # Telegram/WhatsApp session state
| Component | Loss Impact |
|---|---|
| openclaw.json | Must reconfigure from scratch |
| gateway.db | Lose all accumulated context and history |
| credentials/ | Must re-authenticate everything |
| workspace/skills/ | Must rewrite custom skills |
| sessions/ | Must re-pair messaging integrations |
The most painful losses are gateway.db (weeks of conversation memory) and sessions/ (WhatsApp re-pairing is especially tedious[1]).
For Docker installations, data lives in a Docker volume with the same internal structure.
Method 1: Manual Backup with tar
# Stop gateway for database consistency
sudo systemctl stop openclaw
tar -czf ~/openclaw-backup-$(date +%Y%m%d-%H%M%S).tar.gz -C ~ .openclaw/
sudo systemctl start openclaw
Stopping the gateway matters -- SQLite databases can corrupt during live copies[2]. For zero-downtime backup, use SQLite's built-in backup:
sqlite3 ~/.openclaw/gateway.db ".backup '/tmp/gateway-backup.db'"
tar -czf ~/openclaw-backup-$(date +%Y%m%d-%H%M%S).tar.gz \
    --exclude='.openclaw/gateway.db' \
    --exclude='.openclaw/gateway.log' \
    -C ~ .openclaw/
# /tmp/gateway-backup.db is now your only copy of the database --
# keep it alongside the archive (the live file was excluded above)
Always verify:
sqlite3 /tmp/gateway-backup.db "PRAGMA integrity_check;"
# Should output: ok
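The .backup and integrity_check pair can be tried end-to-end on a throwaway database first. The paths below are scratch files, but the same two commands apply unchanged to ~/.openclaw/gateway.db:

```shell
# Build a scratch database standing in for gateway.db
sqlite3 /tmp/demo-gateway.db \
  "CREATE TABLE IF NOT EXISTS memory(id INTEGER PRIMARY KEY, note TEXT);
   INSERT INTO memory(note) VALUES ('context worth keeping');"

# .backup takes a consistent snapshot even if other processes are writing
sqlite3 /tmp/demo-gateway.db ".backup '/tmp/demo-gateway-backup.db'"

# Never trust a copy you have not verified
sqlite3 /tmp/demo-gateway-backup.db "PRAGMA integrity_check;"   # prints: ok
```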
Method 2: Automated Daily Backups with Cron
Create /usr/local/bin/openclaw-backup.sh:
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/var/backups/openclaw"
OPENCLAW_DIR="$HOME/.openclaw"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_FILE="$BACKUP_DIR/openclaw-$TIMESTAMP.tar"

mkdir -p "$BACKUP_DIR"

# Safe, consistent copy of the live database
if [ -f "$OPENCLAW_DIR/gateway.db" ]; then
    sqlite3 "$OPENCLAW_DIR/gateway.db" ".backup '$BACKUP_DIR/gateway-$TIMESTAMP.db'"
fi

# Archive everything except the live database and log.
# Build an uncompressed tar first: tar -rf cannot append to a .tar.gz.
tar -cf "$BACKUP_FILE" \
    --exclude='.openclaw/gateway.db' \
    --exclude='.openclaw/gateway.log' \
    -C "$(dirname "$OPENCLAW_DIR")" "$(basename "$OPENCLAW_DIR")"

# Append the safe copy under its normal .openclaw/ path so a plain
# extract restores it exactly where the gateway expects it
if [ -f "$BACKUP_DIR/gateway-$TIMESTAMP.db" ]; then
    STAGING=$(mktemp -d)
    mkdir -p "$STAGING/.openclaw"
    mv "$BACKUP_DIR/gateway-$TIMESTAMP.db" "$STAGING/.openclaw/gateway.db"
    tar -rf "$BACKUP_FILE" -C "$STAGING" .openclaw/gateway.db
    rm -rf "$STAGING"
fi

# Compress the finished archive
gzip -f "$BACKUP_FILE"
BACKUP_FILE="$BACKUP_FILE.gz"

# Remove backups older than the retention window
find "$BACKUP_DIR" -name "openclaw-*.tar.gz" -mtime +"$RETENTION_DAYS" -delete

echo "[$(date)] Backup: $BACKUP_FILE ($(du -sh "$BACKUP_FILE" | cut -f1))"
Schedule with cron:
chmod +x /usr/local/bin/openclaw-backup.sh
crontab -e
# Add: 0 3 * * * /usr/local/bin/openclaw-backup.sh >> /var/log/openclaw-backup.log 2>&1
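Backups you have never restored are unverified. This self-contained round-trip (all paths are throwaway /tmp locations) is the shape of a monthly restore test:

```shell
# Build a miniature ~/.openclaw stand-in
mkdir -p /tmp/bk-src/.openclaw/workspace/skills
echo '{"model":"demo"}' > /tmp/bk-src/.openclaw/openclaw.json
echo 'skill body'       > /tmp/bk-src/.openclaw/workspace/skills/hello.md

# Archive it the same way the backup script does
tar -czf /tmp/bk-test.tar.gz -C /tmp/bk-src .openclaw/

# Restore into a separate tree and compare byte-for-byte
mkdir -p /tmp/bk-dst
tar -xzf /tmp/bk-test.tar.gz -C /tmp/bk-dst
diff -r /tmp/bk-src/.openclaw /tmp/bk-dst/.openclaw && echo "restore test passed"
```

Run the same sequence against a real backup on a spare machine or container before you need it in anger.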
Method 3: Cloud Backup to S3 or B2
Local backups do not protect against hardware failure. Add cloud redundancy.
Amazon S3
aws s3 cp "$BACKUP_FILE" "s3://your-openclaw-backups/openclaw-$TIMESTAMP.tar.gz" \
--storage-class STANDARD_IA
S3 Standard-IA costs roughly $0.0125/GB/month[3]. Set lifecycle rules to auto-delete after 90 days:
aws s3api put-bucket-lifecycle-configuration \
--bucket your-openclaw-backups \
--lifecycle-configuration '{"Rules":[{"ID":"Cleanup","Status":"Enabled","Filter":{"Prefix":"openclaw-"},"Expiration":{"Days":90}}]}'
Backblaze B2
At $0.005/GB/month, B2 is even cheaper:
b2 upload-file your-openclaw-backups "$BACKUP_FILE" "$(basename "$BACKUP_FILE")"
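Whichever provider you use, record a checksum at upload time so a later download can be verified before you restore from it. A sketch using a stand-in file in place of a real archive:

```shell
# Stand-in for a real backup archive
echo "demo archive contents" > /tmp/demo-backup.tar.gz

# At backup time: record the digest alongside the archive
sha256sum /tmp/demo-backup.tar.gz > /tmp/demo-backup.tar.gz.sha256

# After downloading from S3/B2: verify before extracting
sha256sum -c /tmp/demo-backup.tar.gz.sha256
```

Upload the .sha256 file next to the archive; it costs nothing and catches silent transfer corruption.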
Method 4: Docker Volume Backup
docker stop openclaw
docker run --rm \
-v openclaw_data:/data \
-v "$(pwd)":/backup \
alpine tar -czf /backup/openclaw-docker-$(date +%Y%m%d).tar.gz -C /data .
docker start openclaw
Restore:
docker stop openclaw
docker run --rm \
-v openclaw_data:/data \
-v "$(pwd)":/backup \
alpine sh -c "rm -rf /data/* && tar -xzf /backup/openclaw-docker-20260224.tar.gz -C /data"
docker start openclaw
Restoring from Backup
sudo systemctl stop openclaw 2>/dev/null || true
# Move the old state aside rather than deleting it -- keep it until the restore checks out
mv ~/.openclaw ~/.openclaw.old 2>/dev/null || true
tar -xzf /var/backups/openclaw/openclaw-20260224-030000.tar.gz -C ~
sqlite3 ~/.openclaw/gateway.db "PRAGMA integrity_check;"
sudo systemctl start openclaw
openclaw status
For partial restores, extract to /tmp and copy only what you need -- skills, the database, or credentials individually.
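A partial restore can also skip the staging copy entirely: tar extracts individual members by name. Sketched here on throwaway paths (GNU tar assumed):

```shell
# Build and archive a miniature .openclaw tree
mkdir -p /tmp/pr-src/.openclaw/workspace/skills
echo '{}'         > /tmp/pr-src/.openclaw/openclaw.json
echo 'skill body' > /tmp/pr-src/.openclaw/workspace/skills/hello.md
tar -czf /tmp/pr-backup.tar.gz -C /tmp/pr-src .openclaw/

# Extract only the skills directory; openclaw.json stays untouched
mkdir -p /tmp/pr-out
tar -xzf /tmp/pr-backup.tar.gz -C /tmp/pr-out .openclaw/workspace/skills
ls /tmp/pr-out/.openclaw/workspace/skills/
```

The same pattern pulls a single file, such as .openclaw/gateway.db, out of a real backup.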
Server-to-Server Migration
1. Prepare the new server
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
npm install -g openclaw
# Do NOT run openclaw onboard -- we will restore over it
2. Backup and transfer
# On OLD server
sudo systemctl stop openclaw
tar -czf ~/openclaw-migration.tar.gz -C ~ .openclaw/
scp ~/openclaw-migration.tar.gz user@new-server:~/
3. Restore and configure
# On NEW server
tar -xzf ~/openclaw-migration.tar.gz -C ~
sqlite3 ~/.openclaw/gateway.db "PRAGMA integrity_check;"
Update ~/.openclaw/openclaw.json if the server has a different IP, hostname, or port. Then set up systemd and start the gateway.
4. Verify
openclaw status
openclaw chat "Are you working?"
openclaw integrations status
Keep the old server's backup for at least a week before decommissioning.
Messaging Session Migration
Telegram: Bot sessions transfer cleanly -- the bot token in credentials/telegram.json is all that is needed. You may need to re-approve devices with openclaw devices approve <id>.
WhatsApp: Session data in sessions/whatsapp/ contains cryptographic keys. It often migrates successfully, but if not, you will need to re-pair by scanning a QR code. Group memberships and chat history are preserved on the WhatsApp side[4]. Note down your groups before migrating.
Migration for Hosted Instances
If migrating to or from ClawTank:
# Export (self-hosted to ClawTank)
openclaw export --format clawtank -o ~/openclaw-export.zip
# Import (ClawTank to self-hosted)
openclaw import ~/clawtank-export.zip
API keys are excluded from exports for security -- re-enter them after import.
Backup Strategy Recommendations
Minimum: Daily local cron backups with 30-day retention.
Recommended: Daily local backups + weekly cloud backups to S3 or B2 (90-day retention) + pre-change backups before upgrades + monthly restore tests.
Production: Hourly local backups + daily cloud backups to two providers (1-year retention) + automated monitoring that alerts when backups stop.
Common Issues
Database locked during backup: Use sqlite3 ... ".backup" instead of copying the file directly.
Corrupt archive: If tar -tzf reports errors, salvage what you can: tar -xzf backup.tar.gz -C /tmp/salvage 2>/dev/null || true extracts every entry up to the damaged region, then inspect what survived.
Messaging fails after restore: Ensure the system clock is accurate (sudo timedatectl set-ntp true). Session tokens include timestamps.
Permission errors: Fix ownership with sudo chown -R your-user:your-user ~/.openclaw/ and set chmod 600 ~/.openclaw/credentials/*.
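The ownership and mode fixes can be rehearsed on a scratch directory first (stat -c below is GNU coreutils syntax; BSD/macOS uses stat -f):

```shell
# Recreate the recommended layout on scratch paths
mkdir -p /tmp/cred-demo/credentials
echo 'secret-token' > /tmp/cred-demo/credentials/telegram.json

# Credentials should be readable only by the owning user
chmod 700 /tmp/cred-demo/credentials
chmod 600 /tmp/cred-demo/credentials/*

stat -c '%a %n' /tmp/cred-demo/credentials/telegram.json   # prints: 600 /tmp/cred-demo/credentials/telegram.json
```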
Summary
A solid backup strategy takes 30 minutes to set up and protects months of accumulated agent state. The essential steps: automate daily backups, add cloud redundancy, test your restores, and use the same backup/restore process for server migrations. Do not wait for a failure to discover your backups do not work.
