
Custom OpenAI-compatible endpoint rejects tools or tool_choice

Fix custom or proxy AI endpoints that can chat normally but fail once OpenClaw sends tools, tool_choice, parallel_tool_calls, or later tool-result turns.

By CoClaw Team

Symptoms

  • Basic chat works against your custom endpoint.
  • The first turn may succeed, but failures appear once tools are involved.
  • Requests may fail when OpenClaw sends tools, tool_choice, or tool-result continuation messages on later turns.
  • You may see 400/422 errors, empty replies, or the model printing tool JSON as plain text.

Cause

Many custom OpenAI-compatible endpoints implement only a subset of the modern tool-calling contract.

Common breakpoints include:

  • rejecting tools entirely,
  • accepting tools but rejecting tool_choice,
  • rejecting parallel_tool_calls,
  • mishandling tool-result continuation on later turns,
  • or accepting tool calls only in single-turn playground-style usage.

This is especially common when the endpoint sits behind a local-model server, a relay, or a proxy that normalizes requests imperfectly.
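You can reproduce these breakpoints directly with curl, outside OpenClaw. Below is a minimal probe sketch; `host:port`, `my-model`, `MY_API_KEY`, and the `get_weather` tool are placeholders, not values from your setup:

```shell
# Build a minimal tools request of the same shape OpenClaw sends.
# All identifiers here (host:port, my-model, get_weather) are placeholders.
cat > /tmp/tools-probe.json <<'EOF'
{
  "model": "my-model",
  "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }],
  "tool_choice": "auto",
  "parallel_tool_calls": false
}
EOF

# Send it to your endpoint. A 400/422 here, when plain chat succeeds,
# confirms the tools boundary. To find the exact breakpoint, delete
# "parallel_tool_calls", then "tool_choice", then "tools" one at a time
# and resend after each deletion.
# curl -sS http://host:port/v1/chat/completions \
#   -H "Authorization: Bearer $MY_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @/tmp/tools-probe.json
echo "probe payload written to /tmp/tools-probe.json"
```

Probing with curl first keeps OpenClaw's session handling out of the picture, so a failure is unambiguously the endpoint's.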

Fix

1) Prove whether plain chat is the only supported mode

Temporarily configure the model as a plain chat backend:

{
  models: {
    providers: {
      myprovider: {
        api: "openai-completions",
        baseUrl: "http://host:port/v1",
        apiKey: "${MY_API_KEY}",
        models: [
          {
            id: "my-model",
            reasoning: false,
            input: ["text"],
            compat: {
              supportsTools: false,
            },
          },
        ],
      },
    },
  },
}

If that stabilizes the provider, you have confirmed a tools-compatibility boundary rather than a networking problem.
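If you later move this provider to a tool-capable backend, the same model entry can drop the compat override so tools are sent again (a sketch reusing the field names from the block above; whether you remove the block or set the flag explicitly is a style choice):

```
{
  id: "my-model",
  reasoning: false,
  input: ["text"],
  // compat.supportsTools removed: OpenClaw sends tools to this model again
}
```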

2) Retry with a fresh session

Tool-related failures often poison the session history, so retries in the same session keep failing even after the underlying issue is fixed.

Start from a new session id:

openclaw agent --session-id "tool-compat-test" -m "hi"

3) Check whether the provider only fails on later turns

Ask:

  • does the first chat turn succeed?
  • does the failure appear only after a tool runs?
  • do errors mention bad response shape, invalid arguments, or empty tool names?

If yes, your endpoint may support only partial tool-calling behavior.
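The later-turn failure mode can be isolated the same way: replay a continuation turn containing an assistant `tool_calls` message followed by a `role: "tool"` result, which is the shape OpenClaw sends after a tool runs. A sketch with placeholder values (`my-model`, `call_1`, `get_weather` are illustrative, not from your setup):

```shell
# Continuation-turn probe: the assistant message carries a tool call, and the
# next message returns its result with a matching tool_call_id. Endpoints that
# only support single-turn, playground-style tool use often 400/422 on this.
cat > /tmp/tool-continuation-probe.json <<'EOF'
{
  "model": "my-model",
  "messages": [
    {"role": "user", "content": "What is the weather in Paris?"},
    {"role": "assistant", "content": null, "tool_calls": [{
      "id": "call_1",
      "type": "function",
      "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}
    }]},
    {"role": "tool", "tool_call_id": "call_1", "content": "{\"temp_c\": 18, \"sky\": \"clear\"}"}
  ]
}
EOF

# curl -sS http://host:port/v1/chat/completions \
#   -H "Authorization: Bearer $MY_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @/tmp/tool-continuation-probe.json
echo "continuation probe written to /tmp/tool-continuation-probe.json"
```

If the first probe (tools on turn one) succeeds but this one fails, the endpoint mishandles tool-result continuation specifically.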

4) Prefer a provider path with a clearer tools contract

If you need reliable agent tooling, prefer:

  • a native API path designed for that backend,
  • a provider mode with stronger tool semantics,
  • or a different relay/backend known to handle tool-result continuation correctly.

Do not assume every /v1/chat/completions endpoint is equally capable once tools enter the session.

Verify

The issue is resolved if:

  • OpenClaw no longer fails when tools are enabled,
  • later turns after tool execution also succeed,
  • and the model stops dumping raw tool JSON into plain text output.

Verification & references

  • Reviewed by: CoClaw Editorial Team
  • Last reviewed: March 14, 2026
  • Verified on: macOS · Linux · Windows

Related Resources

Local llama.cpp, Ollama, and vLLM tool-calling compatibility
Fix
Understand why local-model servers can chat normally but still fail on agent tool calling, tool-result continuation, or OpenAI-compatible multi-turn behavior in OpenClaw.
Model outputs '[Historical context]' / tool-call JSON instead of a normal reply
Fix
Fix chat replies that leak internal tool metadata (e.g. '[Historical context: ... Do not mimic ...]') by switching to a tool-capable model/provider and ensuring function calling is enabled.
Browser tool: URLs with Chinese characters are mis-encoded
Fix
Work around a browser tool encoding bug by pre-encoding non-ASCII query parameters (UTF-8) before calling the browser tool.
OpenClaw only chats and won't use tools after update
Fix
Fix OpenClaw when it suddenly stops reading files, patching code, or running commands after a recent update. The most common cause is `tools.profile: messaging` or a narrower tool policy.
How to Choose Between Native Ollama, OpenAI-Compatible /v1, vLLM, and LiteLLM for OpenClaw
Guide
A decision guide for choosing the right local or proxy AI API path for OpenClaw: native Ollama, Ollama /v1, llama.cpp, vLLM, LiteLLM, and generic OpenAI-compatible relays.
OpenClaw Not Using Tools After the Update? Fix the ‘Only Chats, Doesn’t Act’ Problem
Guide
A practical step-by-step guide to fix OpenClaw when it suddenly stops using tools after recent updates. Learn how to check `tools.profile`, restore coding tools safely, and verify the agent can act again.