Fix 'OpenClaw Send Failed: Gateway Not Connected' Error (2026)

7 min read

You send a message to your OpenClaw assistant and get back:

send failed: error: gateway not connected

Or your Telegram bot goes silent. Messages are delivered but never answered. The web interface shows a spinning indicator that never resolves.

This error means the OpenClaw client (CLI, Telegram, or web interface) cannot reach the gateway process. The gateway is either crashed, unreachable due to network issues, or running but not accepting connections. Here is how to systematically find and fix the problem.

Step 1: Check Gateway Status

Start with the basics. Is the gateway actually running?

openclaw status

Healthy output:

Gateway:    connected (v2.x.x)
Uptime:     3d 14h 22m
Port:       3001
Devices:    2 paired

Unhealthy output:

Gateway:    not connected

If the status shows "not connected," the gateway process has stopped. Proceed to Step 2 to find out why.

If openclaw status itself hangs or times out, the issue may be at the system level (the OpenClaw binary cannot communicate with anything at all). In that case, start with the resource and OOM checks at the end of the diagnostic sequence in Step 5 -- they usually reveal the cause.

Step 2: Check the Logs

The logs almost always reveal why the gateway stopped:

# Direct installation
openclaw logs --tail 100

# Docker
docker logs <container_name> --tail 100

Look for error messages near the end of the output. The most common patterns:

Pattern A: OOM Kill

Killed

or in dmesg:

Out of memory: Killed process 1234 (node) total-vm:2048000kB

The system ran out of memory and the Linux OOM killer terminated the gateway process. This is the number one cause of "gateway not connected" on budget VPS instances[1].

Fix:

# Check current memory
free -h

# If no swap exists, add some
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Restart the gateway
openclaw restart

For Docker, set memory limits with a swap allowance:

docker run -d --memory=512m --memory-swap=1g openclaw-stack

Pattern B: Port Already in Use

Error: listen EADDRINUSE: address already in use :::3001

Another process grabbed the port, or a previous gateway instance did not shut down cleanly.

Fix:

# Find what is using the port
lsof -i :3001
# or
ss -tlnp | grep 3001

# Kill the conflicting process
kill <PID>

# Or change the port
openclaw config set gateway.port 3002
openclaw restart

Pattern C: Crash Loop

[gateway] starting...
[gateway] error: <some error>
[gateway] starting...
[gateway] error: <some error>

The gateway is crashing and auto-restarting in a loop. The specific error message tells you the cause. Common ones:

  • SQLITE_CANTOPEN -- database file permissions or disk full
  • ECONNREFUSED -- an external service the gateway depends on is down
  • SyntaxError in config -- malformed openclaw.json

Fix for config issues:

# Validate config
openclaw config validate

# If invalid, reset to defaults
openclaw config reset
# Then reconfigure
openclaw config set gateway.mode local

Fix for disk issues:

# Check disk space
df -h

# Check database file permissions
ls -la ~/.openclaw/data/

Pattern D: Segfault or Unexpected Exit

Segmentation fault (core dumped)

or the log simply ends with no error message.

This is rare but can happen with Node.js native module incompatibilities, especially on ARM architectures.
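Before reinstalling, it is worth confirming whether an architecture mismatch is in play. A quick check (assuming Node.js is on your PATH):

```shell
# Compare the CPU architecture with the architecture Node was built for.
# A mismatch (e.g. an x64 Node binary running on arm64 under emulation)
# is a common trigger for native-module segfaults.
uname -m                                          # hardware arch, e.g. aarch64
node -p "process.arch" 2>/dev/null || echo "node not found"
```

If uname -m reports aarch64 but Node reports x64, install an ARM-native Node build before reinstalling OpenClaw.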

Fix:

# Reinstall OpenClaw
npm install -g openclaw --force

# If in Docker, pull the latest image
docker pull openclaw-stack:latest

Step 3: Network and Firewall Issues

If the gateway is running (it appears in the process list) but clients cannot connect, the issue is network-level.

Check if the Gateway Is Listening

# Verify the gateway port is open
ss -tlnp | grep 3001

# Should show something like:
# LISTEN  0  128  *:3001  *:*  users:(("node",pid=1234,fd=18))

If nothing shows, the gateway started but is not listening on the expected port. Check the configured port:

openclaw config get gateway.port

Check Firewall Rules

# UFW (Ubuntu)
sudo ufw status

# iptables
sudo iptables -L -n | grep 3001

# firewalld (CentOS/RHEL)
sudo firewall-cmd --list-ports

If you run OpenClaw behind a reverse proxy (Caddy, Nginx), port 3001 should be open only to localhost, but the reverse proxy ports (80/443) must be open:

# For UFW
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

Check Reverse Proxy Status

If you access OpenClaw through a domain name:

# Caddy
systemctl status caddy
caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile

# Nginx
systemctl status nginx
nginx -t

A stopped or misconfigured reverse proxy causes "gateway not connected" even though the gateway itself is healthy. The --adapter caddyfile flag is important for Caddy -- omitting it causes validation to fail silently[2].

WebSocket Connection Issues

The gateway uses WebSocket for persistent connections. Some reverse proxies or CDNs do not properly handle WebSocket upgrades.

Symptoms: Initial page loads work, but messages fail to send. The connection drops after a few seconds.

Fix for Caddy (usually handles this automatically, but verify):

yourdomain.com {
    reverse_proxy localhost:3001
}

Fix for Nginx:

location / {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;
}

The proxy_read_timeout is critical -- the default 60-second timeout will kill idle WebSocket connections.
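You can probe the upgrade path from the command line. A sketch, assuming the gateway accepts WebSocket connections at the site root (adjust the path if yours differs):

```shell
# Attempt a raw WebSocket handshake through the proxy. A correctly
# configured proxy answers "HTTP/1.1 101 Switching Protocols"; a proxy
# that strips the Upgrade header answers with a plain 200/400 instead.
curl -i -N --max-time 10 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(openssl rand -base64 16)" \
  https://yourdomain.com/
```

Anything other than a 101 response means the upgrade is being blocked before it reaches the gateway.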

Step 4: Docker-Specific Issues

Container Stopped or Restarting

docker ps -a | grep openclaw

If the status shows "Exited" or "Restarting":

# Check why it stopped
docker logs <container> --tail 50

# Restart with auto-restart policy
docker run -d --restart=unless-stopped openclaw-stack

Volume Mount Issues

If the data volume is not mounted correctly, the gateway cannot access its database:

# Verify volume mounts
docker inspect <container> | grep -A 5 "Mounts"
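If the mount is missing, recreate the container with the data volume attached. A sketch, assuming the image stores its database under /data (as in the token example below) and using openclaw as a hypothetical container name -- substitute your own:

```shell
# Remove the broken container and start a fresh one with a named volume
# mounted at /data so the gateway database survives restarts.
docker rm -f openclaw
docker run -d --name openclaw \
  --restart=unless-stopped \
  -v openclaw_data:/data \
  -p 3001:3001 \
  openclaw-stack
```

The named volume openclaw_data persists independently of the container, so docker rm does not delete the database.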

Gateway Token Regeneration

In Docker, if OPENCLAW_GATEWAY_TOKEN is not set, the gateway generates a new token on every restart. Connected clients still hold the old token and receive "not connected" errors[3].

# Generate a persistent token
TOKEN=$(openssl rand -hex 32)

# Restart with it set
docker run -d \
  -e OPENCLAW_GATEWAY_TOKEN=$TOKEN \
  --restart=unless-stopped \
  -v openclaw_data:/data \
  openclaw-stack

Step 5: The Full Diagnostic Sequence

When you are not sure where the problem is, run through this sequence:

# 1. Is the gateway process alive?
openclaw status

# 2. What do the logs say?
openclaw logs --tail 100

# 3. Run automated diagnostics
openclaw doctor

# 4. Is the port open?
ss -tlnp | grep 3001

# 5. Can you reach it locally?
curl -s http://localhost:3001/health

# 6. Is the reverse proxy working?
curl -s https://yourdomain.com/health

# 7. Check system resources
free -h && df -h

# 8. Check for OOM events
dmesg | tail -20

The answer is almost always found in step 2 (logs) or step 7 (resources).

Preventing Future "Send Failed" Errors

Set Up Auto-Restart

# With systemd
openclaw service install
systemctl enable openclaw

# With Docker
docker run -d --restart=unless-stopped openclaw-stack

# With PM2
pm2 start openclaw -- start
pm2 save
pm2 startup

Monitor Gateway Health

# Simple cron-based monitoring (check every 5 minutes)
*/5 * * * * curl -sf http://localhost:3001/health || openclaw restart

Set Resource Alerts

If you are on a VPS, set up basic monitoring to catch OOM events before they crash the gateway:

# Alert when memory drops below 100MB free
*/5 * * * * [ $(free -m | awk '/Mem:/ {print $7}') -lt 100 ] && echo "Low memory" | mail -s "OpenClaw Alert" you@example.com

When You Are Tired of Debugging

Server maintenance is not for everyone, and there is no shame in that. If "send failed" errors keep disrupting your workflow, managed hosting through ClawTank eliminates the entire category of infrastructure problems. The gateway runs on managed containers with automatic restarts, resource monitoring, and proper reverse proxy configuration already in place.

But if you prefer self-hosting -- and many people do for excellent reasons -- the steps in this guide cover every known cause of the "gateway not connected" error. Keep openclaw doctor in your toolkit and check the logs first. The answer is almost always there.

References

  1. Linux OOM Killer Explained
  2. Caddy Reverse Proxy Configuration
  3. OpenClaw Docker Environment Variables
  4. OpenClaw CLI Reference -- status and doctor

Ready to deploy OpenClaw?

No Docker, no SSH, no DevOps. Deploy in under 1 minute.

Get started free