When browser automation fails, it often looks like “OpenClaw is slow”, but there are actually two different timeout classes:
- The browser control service is not reachable / crashed / wedged.
- The browser is alive, but the page/action is slow (heavy site, bot protection, infinite loading, large JS bundle).
This guide helps you separate those causes and apply fixes that improve reliability long-term.
If you’re seeing a URL-encoding bug (e.g. mangled Chinese query params) rather than a timeout, use:
0) Start with the truth sources: status + logs
On the gateway host:
```shell
openclaw status --deep
openclaw logs --follow
```
If OpenClaw is generally unstable (restarting, low disk, config invalid), fix that first:
1) Reproduce with the smallest possible run
Before changing timeouts, shrink the repro:
- One URL (prefer a lightweight, static page)
- One action (load page, extract title)
- One run
Why: if even a simple page times out, you have an environment/service problem. If only complex sites time out, you have a site complexity / bot protection / performance issue.
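The smallest run above can be reproduced outside OpenClaw entirely, which tells you whether the environment can drive a browser at all. A minimal sketch with Playwright for Python (an assumption: it is installed via `pip install playwright && playwright install chromium`; the URL is a placeholder):

```python
# Smallest possible run: one lightweight URL, one action (read the title).
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com", timeout=15_000)  # fail fast; value is in ms
    print(page.title())
    browser.close()
```

If even this times out, suspect the environment (section 2) before touching per-site timeouts.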
2) Common environment causes (and symptoms)
2.1 Docker/WSL2 headless dependencies and sandbox constraints
Symptoms:
- browser launches inconsistently
- random timeouts across many sites
Fixes:
- Ensure the runtime image includes required Playwright/Chromium deps.
- Avoid over-restrictive container sandboxes that prevent the browser from launching.
- Increase shared memory (/dev/shm) for Chromium-heavy pages (common in containers).
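For containers you launch yourself, the shared-memory fix is a one-line setting (the image name below is a placeholder, not an official OpenClaw image):

```shell
# Give Chromium a larger /dev/shm (Docker's default is often only 64 MB).
docker run --shm-size=1g your-openclaw-image

# docker-compose equivalent:
#   services:
#     openclaw:
#       shm_size: "1gb"
```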
If you’re already on Docker, read:
2.2 Low memory (OOM) looks like timeouts
Heavy sites can OOM the browser process and present as “timeout”.
Check your platform logs and consider:
- higher memory limit
- reducing concurrent browser runs
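Reducing concurrency can be as simple as a semaphore around whatever launches a browser. A minimal sketch (assumption: `run_browser_job` and its `work` callable are your own wrappers, not OpenClaw APIs):

```python
# Cap concurrent browser runs so heavy pages don't stack up and OOM the host.
import threading

MAX_CONCURRENT = 2
_slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def run_browser_job(job, work):
    """Run one browser job; at most MAX_CONCURRENT execute at a time."""
    with _slots:  # blocks until a slot frees up
        return work(job)
```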
2.3 DNS / egress instability
If many outbound requests hang, “timeout” is sometimes just network egress failure.
Use the same host to curl the target domain and verify DNS resolves consistently.
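A consistency check can also be scripted with the standard library; run it a few times and compare (the hostname is a placeholder for your target domain):

```python
# Resolve a hostname and return its addresses; flapping results across
# repeated calls point at DNS/egress trouble, not the browser.
import socket

def resolve(host: str) -> list[str]:
    """Return the sorted set of A/AAAA addresses for host."""
    infos = socket.getaddrinfo(host, None)
    return sorted({info[4][0] for info in infos})

# for _ in range(5): print(resolve("example.com"))
```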
3) Timeouts: increase only after fixing the root cause
If you increase timeouts too early, you convert crashes into “slow stuck jobs”.
Recommended approach:
- Fix environment stability first (deps/memory/egress).
- Add a modest timeout increase for known heavy pages.
- Add bounded retries with evidence output (log the URL, attempt count, and result).
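The bounded-retries-with-evidence step can be sketched as a small wrapper (assumption: `action` is your own callable that loads one URL and returns a result; nothing here is an OpenClaw API):

```python
# Retry a browser action a bounded number of times, logging URL, attempt
# count, and result on every outcome so failed runs leave evidence.
import logging
import time

log = logging.getLogger("browser-runs")

def run_with_retries(url, action, attempts=3, base_delay=2.0):
    """Try `action(url)` up to `attempts` times with linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            result = action(url)
            log.info("ok url=%s attempt=%d result=%r", url, attempt, result)
            return result
        except Exception as exc:
            log.warning("fail url=%s attempt=%d error=%r", url, attempt, exc)
            if attempt < attempts:
                time.sleep(base_delay * attempt)
    raise RuntimeError(f"{url}: all {attempts} attempts failed")
```

Keeping retries bounded matters: unbounded retries against a bot-protected site just burn quota while hiding the real failure.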
4) Make browser automation safe for cron
Browser automation + cron becomes reliable when each run:
- writes a timestamped artifact (HTML snapshot, extracted JSON, screenshot)
- sends a short status message (success/failure + artifact path)
- fails fast and leaves evidence
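The three properties above can be sketched in one small wrapper (assumptions: the artifact directory and the `extract` callable are yours; the status message here is just a printed line you would feed to your notifier):

```python
# Cron-safe run wrapper: every run writes a timestamped artifact, emits a
# short status line, and exits nonzero on failure so cron can see it.
import json
import pathlib
import sys
import time

def cron_run(extract, out_dir="artifacts"):
    stamp = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())
    path = pathlib.Path(out_dir) / f"run-{stamp}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    try:
        data = extract()
        path.write_text(json.dumps({"ok": True, "data": data}))
        print(f"OK {path}")
        return 0
    except Exception as exc:
        path.write_text(json.dumps({"ok": False, "error": repr(exc)}))
        print(f"FAIL {path}", file=sys.stderr)
        return 1
```

The exit code doubles as the "fails fast" signal: cron (or a heartbeat monitor) only needs the return status and the artifact path to triage a bad run.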
Related:
- Cron reliability patterns: /guides/openclaw-cron-and-heartbeat-24x7
- Persistence/workspace correctness: /guides/openclaw-state-workspace-and-memory
5) Dedicated fix page
If you see the specific error string, also check: