Series: Hardening Your Self-Hosted OpenClaw
L0 (Naked Exposure) → L1: Basic Auth → L2: OAuth/OIDC → L3: Rate Limiting & Audit → L4: Full Automation
OpenClaw was built to run on your PC. You put it on a VPS. That’s fine — a lot of us did. But there’s a gap between “it works” and “it’s safe to leave on the internet,” and this series is about closing that gap.
No judgment. The gap isn’t obvious until someone points it out.
The Original Design Wasn’t Wrong
OpenClaw’s pitch is simple: a personal AI agent that runs locally, keeps your data on your machine, and doesn’t send anything anywhere it shouldn’t. The threat model is a single trusted user on a trusted network. That’s a reasonable scope for a local-first tool, and the team designed for it honestly.
For remote access, the official answer is tunneling — SSH port forwarding, Tailscale, Cloudflare Tunnel. Private, encrypted, zero public exposure. If that setup works for you, close this tab and go do something more interesting. You don’t need this series.
But “always-on, accessible from a browser, maybe shared with a teammate” is a different use case entirely, and it’s where a lot of OpenClaw deployments end up. Once you’re on a VPS with a public IP, you’re not running a local assistant anymore. You’re running a web service. Web services need web service security.
What the Default Security Actually Gives You
The Gateway uses one token. You paste it into the Control UI, click connect, you’re in. Clean, simple, works great locally.
On a public server, that single token is the only thing standing between the open internet and full control of your agent. And here’s the problem: there’s nothing rate-limiting the guesses.
The OpenClaw community noticed. GitHub issue #29567 lays it out:
“When deployed on a public-facing server, there’s no built-in protection against: brute-force token guessing — no rate limiting on authentication attempts; abuse by token holders — no per-IP or per-user rate limiting on message volume; geo/IP filtering — no allowlist for trusted IP ranges. Users must manually configure external tools (Nginx, UFW, Cloudflare) to add these protections. This requires significant infrastructure knowledge and is not documented in the official setup guide.”
That last sentence is doing a lot of work. The protections exist — they just require you to know to add them, and know how. This series is the documentation that issue is asking for.
To be clear about what this isn’t: it’s not a vulnerability disclosure. The OpenClaw team built a local-first tool and it behaves like one. The mismatch is between the original design scope and how people are actually deploying it. That’s a documentation problem, not a security bug.
How Many Instances Are We Talking About?
The OpenClaw Exposure Watchboard tracks OpenClaw instances reachable from the public internet. The current count is over 600,000.
That’s not 600,000 people who made a mistake. That’s 600,000 deployments where the innocent setup met a use case it wasn’t designed for, and nobody pointed out the gap. Some of those instances have additional hardening. Many don’t.
If you deployed OpenClaw on a VPS and haven’t added an external auth layer, check the count. You’re probably in there.
Three Honest Options
There’s no single right answer — it depends on who needs access and how much you care about where your traffic goes.
Tailscale (or SSH tunneling). Everyone who needs access installs Tailscale, joins your network, done. Zero public exposure, nothing to harden. The catch: every user needs a Tailscale client, which makes it impractical for sharing with external users or anyone you can’t onboard to your private network. Also worth knowing — Tailscale’s default relay (DERP) means your traffic routes through their servers unless you self-host the relay infrastructure, which is its own project. Great for personal use, awkward at anything resembling a team.
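For the personal-access path, the setup really is short. A sketch under assumptions: the Tailscale client is installed on the VPS, and `<your-openclaw-port>`, `<vps-tailnet-hostname>`, and `user@<vps>` are placeholders you fill in.

```shell
# On the VPS: join your tailnet (opens a browser auth flow)
tailscale up

# Clients on the same tailnet then reach the Gateway directly:
#   http://<vps-tailnet-hostname>:<your-openclaw-port>

# The plain SSH-tunnel equivalent, no Tailscale required:
ssh -N -L <your-openclaw-port>:127.0.0.1:<your-openclaw-port> user@<vps>
# then open http://127.0.0.1:<your-openclaw-port> in a local browser
```

Either way, nothing listens on a public interface, which is why this option needs no hardening at all.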
Cloudflare Tunnel + Access. No open ports, CF handles the TLS and the auth UI, relatively easy to set up. The tradeoff is that all traffic — every prompt, every response, everything your agent does — flows through Cloudflare’s edge nodes. For a privacy-first tool like OpenClaw, that’s worth thinking about. You’re also adding a hard dependency on CF’s availability and policy decisions.
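If you go the Cloudflare route anyway, the tunnel side looks roughly like this. The `cloudflared` subcommands are real; the tunnel name `openclaw` and the hostname are examples, and the Access policy itself is configured separately in the Cloudflare dashboard.

```shell
# Authenticate and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create openclaw

# Point a hostname at the tunnel, then run it against the local Gateway
cloudflared tunnel route dns openclaw openclaw.example.com
cloudflared tunnel run --url http://127.0.0.1:<your-openclaw-port> openclaw
```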
Self-hosted reverse proxy with your own auth layer (this series). Traffic goes directly from the client to your VPS to OpenClaw. No third party in the path, no relay, no edge node seeing your data. Full control over the auth mechanism, the logs, the rate limiting. The cost is that you’re the one building and maintaining the security layer — which is exactly what L1 through L4 are for.
If you only need personal access and Tailscale fits your workflow, use Tailscale. If you need to share access with external users and keeping traffic off third-party infrastructure matters to you, this series is the right path.
What L1–L3 Build
Each level is a standalone operations manual — prerequisites, steps, verification. No background, no repeated context. You’re reading the background right now.
L1 — Caddy + Basic Auth. A reverse proxy that demands a username and password before any request touches the Gateway. Shuts down automated token guessing immediately. One config file, twenty minutes.
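As a preview of what that one config file looks like, here is a minimal sketch. Assumptions: the `basic_auth` directive spelling is Caddy 2.8+ (older releases call it `basicauth`), and the domain, username, and port are placeholders.

```shell
# Generate a bcrypt hash for your chosen password (interactive Caddy subcommand)
caddy hash-password

# Minimal Caddyfile: Basic Auth in front of the Gateway
cat > /etc/caddy/Caddyfile <<'EOF'
openclaw.example.com {
    basic_auth {
        alice <bcrypt-hash-from-above>
    }
    reverse_proxy 127.0.0.1:<your-openclaw-port>
}
EOF

systemctl reload caddy
```

L1 walks through this line by line, including DNS and the automatic TLS certificate Caddy issues for the domain.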
L2 — oauth2-proxy + OAuth/OIDC. Swaps the password prompt for a real login flow — Google, GitHub, or any OIDC provider. Adds sessions, per-user access control, and a proper audit trail.
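To give a sense of the shape of L2, here is a hedged sketch of an oauth2-proxy invocation with GitHub as the provider. The flag names are real oauth2-proxy options; every value is a placeholder, and the usernames are illustrative.

```shell
oauth2-proxy \
  --provider=github \
  --client-id=<oauth-app-client-id> \
  --client-secret=<oauth-app-client-secret> \
  --cookie-secret="$(openssl rand -base64 32 | tr -- '+/' '-_')" \
  --email-domain='*' \
  --github-user='alice,bob' \
  --http-address=127.0.0.1:4180 \
  --upstream=http://127.0.0.1:<your-openclaw-port>
# The reverse proxy then forwards public traffic to 127.0.0.1:4180,
# and only the listed GitHub users get past the login screen.
```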
L3 — Rate limiting, IP filtering, access auditing. Network-level controls layered on top of L1/L2. The stuff that matters when you want to know what’s hitting your instance and slow it down.
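The IP-filtering half of that can be as small as a few ufw rules. A sketch, assuming an Ubuntu-style VPS with the OpenSSH app profile available; 203.0.113.0/24 is a documentation range standing in for whatever CIDR you actually trust.

```shell
# Deny everything inbound by default, keep SSH reachable,
# and allow HTTPS only from the trusted range
ufw default deny incoming
ufw allow OpenSSH
ufw allow from 203.0.113.0/24 to any port 443 proto tcp
ufw enable
```

Rate limiting and access auditing live at the proxy layer rather than the firewall, and L3 covers both.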
L4 — Warded: One Command, Fully Protected.
Coming soon
L1 through L3 work. They’re also, let’s be honest, a lot of YAML, a lot of config files, and a non-trivial amount of infrastructure knowledge to get right. That’s fine if you enjoy this stuff. Not everyone does, and even those who do don’t always want to spend an afternoon on it.
Warded is a tool I’m building to handle the entire stack automatically — reverse proxy, TLS certificate, subdomain, passwordless login, rate limiting, access logging. Everything L1 through L3 cover, configured in seconds. You get a yourname.warded.me subdomain out of the box, traffic goes directly to your instance with no third-party relay in the path, and the whole thing is triggered by a single conversation with your OpenClaw agent:
“Set up Warded for me.”
That’s it. The agent handles the rest.
Warded is designed for the user who wants their OpenClaw deployment properly protected but would rather spend that time actually using the agent. If that’s you, join the waitlist at warded.me and I’ll let you know when it’s ready.
Your agent. Warded.
One Check Before You Continue
Confirm your OpenClaw port is actually exposed before spending time on L1:
ss -tlnp | grep <your-openclaw-port>
0.0.0.0:<port> or *:<port>: the port is exposed; proceed to L1.
127.0.0.1:<port>: bound to localhost only; check what's already in front of it before continuing.
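If you want that check scriptable, the two outcomes fold into a small helper. `classify_bind` is my name for it, not anything OpenClaw ships; it classifies the local-address column of `ss` output.

```shell
# Hypothetical helper: classify an ss(8) local-address entry
classify_bind() {
  case "$1" in
    "0.0.0.0:"* | "*:"* | "[::]:"*)  echo "exposed" ;;
    "127.0.0.1:"* | "[::1]:"*)       echo "local only" ;;
    *)                               echo "unknown" ;;
  esac
}

# Example: classify every listening socket on the machine
# ss -tlnH | awk '{print $4}' | while read -r addr; do
#   printf '%s\t%s\n' "$addr" "$(classify_bind "$addr")"
# done
```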