Add a WebSocket endpoint at /ws/nodes where external processes and
devices can connect and advertise their capabilities at runtime.
The gateway tracks connected nodes in a NodeRegistry and exposes
their capabilities as dynamically available tools via NodeTool.
- Add src/gateway/nodes.rs: WebSocket endpoint, NodeRegistry, protocol
- Add src/tools/node_tool.rs: Tool trait wrapper for node capabilities
- Add NodesConfig to config schema (disabled by default)
- Wire /ws/nodes route into gateway router
- Add NodeRegistry to AppState and all test constructions
- Re-export NodesConfig and NodeTool from module roots
Closes #3093
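A minimal sketch of the registry described above; `NodeRegistry`, `register`, and the capability strings are illustrative names for this sketch, not the actual ZeroClaw API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Sketch: maps a node id to the capabilities it advertised
/// over the /ws/nodes handshake.
#[derive(Clone, Default)]
pub struct NodeRegistry {
    inner: Arc<RwLock<HashMap<String, Vec<String>>>>,
}

impl NodeRegistry {
    /// Called when a node connects and advertises its capabilities.
    pub fn register(&self, node_id: &str, capabilities: Vec<String>) {
        self.inner
            .write()
            .unwrap()
            .insert(node_id.to_string(), capabilities);
    }

    /// Called when the WebSocket closes.
    pub fn deregister(&self, node_id: &str) {
        self.inner.write().unwrap().remove(node_id);
    }

    /// Capabilities currently exposable as dynamic tools.
    pub fn capabilities(&self) -> Vec<(String, Vec<String>)> {
        self.inner
            .read()
            .unwrap()
            .iter()
            .map(|(k, v)| (k.clone(), v.clone()))
            .collect()
    }
}

fn main() {
    let reg = NodeRegistry::default();
    reg.register("sensor-1", vec!["read_temperature".to_string()]);
    assert_eq!(reg.capabilities().len(), 1);
    reg.deregister("sensor-1");
    assert!(reg.capabilities().is_empty());
}
```

The `Arc<RwLock<…>>` shape matters: the gateway router and `NodeTool` lookups share one registry, so connect/disconnect events are visible to in-flight tool calls.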
* feat(provider): support custom API path suffix for custom: endpoints
Allow users to configure a custom API path for custom/compatible
providers instead of hardcoding /v1/chat/completions. Some self-hosted
LLM servers use different API paths.
Adds an optional `api_path` field to:
- Config (top-level and model_providers profile)
- ProviderRuntimeOptions
- OpenAiCompatibleProvider
When set, the custom path is appended to base_url instead of the
default /chat/completions suffix.
Closes #3125
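The path-joining rule can be sketched as follows (the helper name and the exact trimming behavior are assumptions, not the actual provider code):

```rust
/// Sketch: append the configured api_path when set, otherwise the
/// default /chat/completions suffix, normalizing the '/' boundary.
fn join_api_path(base_url: &str, api_path: Option<&str>) -> String {
    let suffix = api_path.unwrap_or("/chat/completions");
    format!(
        "{}/{}",
        base_url.trim_end_matches('/'),
        suffix.trim_start_matches('/')
    )
}

fn main() {
    // Default: OpenAI-compatible suffix appended to base_url.
    assert_eq!(
        join_api_path("http://localhost:8080/v1", None),
        "http://localhost:8080/v1/chat/completions"
    );
    // Custom: self-hosted server with a non-standard API path.
    assert_eq!(
        join_api_path("http://localhost:8080", Some("/api/v2/generate")),
        "http://localhost:8080/api/v2/generate"
    );
}
```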
* fix: add missing api_path field to test ModelProviderConfig initializers
Add deferred MCP tool activation to reduce context window waste.
When mcp.deferred_loading is true (the default), MCP tool schemas
are not eagerly included in the LLM context. Instead, only tool
names appear in an <available-deferred-tools> system prompt section,
and the LLM calls the built-in tool_search tool to fetch full schemas
on demand. Setting deferred_loading to false preserves the existing
eager behavior.
Closes #3095
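In config terms (a sketch; the exact section and key placement follow the commit's description, not a verified schema):

```toml
[mcp]
# default: tool names only in <available-deferred-tools>; the LLM
# fetches full schemas on demand via tool_search
deferred_loading = true

# set to false to restore eager schema loading
# deferred_loading = false
```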
Use `cmd.exe /C` instead of `sh -c` on Windows via cfg(target_os).
Make the shell allowlist, forbidden paths, env vars, risk classification,
and path detection platform-aware so the shell tool works correctly on
Windows without changing Unix behavior.
Closes #3327
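The shell-selection part can be sketched like this. The real change uses `#[cfg(target_os)]` attributes as described above; `cfg!` is used here so both branches stay visible in one compilable snippet, and the function name is illustrative:

```rust
/// Sketch of platform-aware shell selection: cmd.exe /C on Windows,
/// sh -c elsewhere. Unix behavior is unchanged.
fn shell_invocation(command: &str) -> (&'static str, Vec<String>) {
    if cfg!(target_os = "windows") {
        ("cmd.exe", vec!["/C".to_string(), command.to_string()])
    } else {
        ("sh", vec!["-c".to_string(), command.to_string()])
    }
}

fn main() {
    let (shell, args) = shell_invocation("echo hi");
    // Exactly one flag plus the command string, on either platform.
    assert!(shell == "sh" || shell == "cmd.exe");
    assert_eq!(args.len(), 2);
    assert_eq!(args[1], "echo hi");
}
```

The allowlist, forbidden paths, and risk classification need the same `cfg`-gating, since `rm -rf` and `del /s` are spelled differently per platform.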
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
MCP tools were not visible to delegate subagents because parent_tools
was a static snapshot taken before MCP tool wiring. Switch to interior
mutability (parking_lot::RwLock) so MCP wrappers pushed after
DelegateTool construction are visible at sub-agent execution time.
Closes #3069
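The shape of the fix, sketched with `std::sync::RwLock` standing in for `parking_lot::RwLock` (field and tool names are illustrative):

```rust
use std::sync::{Arc, RwLock};

/// Sketch: parent_tools is a shared, writable list rather than a
/// snapshot, so wrappers pushed after construction remain visible.
struct DelegateTool {
    parent_tools: Arc<RwLock<Vec<String>>>,
}

fn main() {
    let shared = Arc::new(RwLock::new(vec!["shell".to_string()]));
    let delegate = DelegateTool {
        parent_tools: Arc::clone(&shared),
    };

    // MCP wiring happens *after* DelegateTool construction...
    shared.write().unwrap().push("mcp__search".to_string());

    // ...and the sub-agent still sees the late-pushed tool at
    // execution time, which a Vec snapshot would have missed.
    let visible = delegate.parent_tools.read().unwrap();
    assert!(visible.contains(&"mcp__search".to_string()));
    assert_eq!(visible.len(), 2);
}
```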
When workspace_only=true and allowed_roots is configured, several tools
(file_read, content_search, glob_search) rejected absolute paths before
the allowed_roots allowlist was consulted. Additionally, tilde paths
(~/...) passed is_path_allowed but were then incorrectly joined with
workspace_dir as literal relative paths.
Changes:
- Add SecurityPolicy::resolve_tool_path() to properly expand tilde
paths and handle absolute vs relative path resolution for tools
- Add SecurityPolicy::is_under_allowed_root() for tool pre-checks to
consult the allowed_roots allowlist before rejecting absolute paths
- Update file_read to use resolve_tool_path instead of workspace_dir.join
- Update content_search and glob_search absolute-path pre-checks to
allow paths under allowed_roots
- Add tests covering workspace_only + allowed_roots scenarios
Closes #3082
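The resolution order can be sketched as below. This is a simplified model of `resolve_tool_path` under stated assumptions (the real method lives on `SecurityPolicy` and defers absolute paths to the `allowed_roots` check rather than accepting them unconditionally):

```rust
use std::path::{Path, PathBuf};

/// Sketch of the fix: expand tilde first, keep absolute paths absolute
/// (to be checked against allowed_roots, not rejected outright), and
/// only join genuinely relative paths onto the workspace.
fn resolve_tool_path(raw: &str, home: &Path, workspace: &Path) -> PathBuf {
    if let Some(rest) = raw.strip_prefix("~/") {
        // The old code joined "~/x" onto workspace_dir as a literal
        // relative path; expanding it here is the fix.
        return home.join(rest);
    }
    let p = Path::new(raw);
    if p.is_absolute() {
        p.to_path_buf() // consult allowed_roots before rejecting
    } else {
        workspace.join(p) // relative paths stay workspace-scoped
    }
}

fn main() {
    let home = Path::new("/home/user");
    let ws = Path::new("/ws");
    assert_eq!(
        resolve_tool_path("~/notes.txt", home, ws),
        PathBuf::from("/home/user/notes.txt")
    );
    assert_eq!(resolve_tool_path("/data/x", home, ws), PathBuf::from("/data/x"));
    assert_eq!(
        resolve_tool_path("src/main.rs", home, ws),
        PathBuf::from("/ws/src/main.rs")
    );
}
```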
PR #3409 fixed AtomicU64 usage on 32-bit targets in other files but
missed src/tools/mcp_client.rs. Apply the same cfg(target_has_atomic)
pattern used in channels/irc.rs to conditionally select AtomicU64 vs
AtomicU32.
Closes #3430
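The `cfg(target_has_atomic)` pattern referenced above looks roughly like this (the alias and counter names are illustrative, not the actual mcp_client.rs identifiers):

```rust
// Pick the widest atomic the target supports; a type alias keeps the
// rest of the code width-agnostic.
#[cfg(target_has_atomic = "64")]
use std::sync::atomic::AtomicU64 as RequestCounter;
#[cfg(not(target_has_atomic = "64"))]
use std::sync::atomic::AtomicU32 as RequestCounter;

use std::sync::atomic::Ordering;

static NEXT_REQUEST_ID: RequestCounter = RequestCounter::new(0);

/// Monotonic request id; widened to u64 at the call boundary so
/// callers never see the platform-dependent width.
fn next_request_id() -> u64 {
    NEXT_REQUEST_ID.fetch_add(1, Ordering::Relaxed) as u64
}

fn main() {
    let a = next_request_id();
    assert_eq!(next_request_id(), a + 1);
}
```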
* feat(tools/mcp): add MCP subsystem tools layer with multi-transport client
Introduces a new MCP (Model Context Protocol) subsystem to the tools layer,
providing a multi-transport client implementation (stdio, HTTP, SSE) that
allows ZeroClaw agents to connect to external MCP servers and register their
exposed tools into the runtime tool registry.
New files:
- src/tools/mcp_client.rs: McpRegistry — lifecycle manager for MCP server connections
- src/tools/mcp_protocol.rs: protocol types (request/response/notifications)
- src/tools/mcp_tool.rs: McpToolWrapper — bridges MCP tools to ZeroClaw Tool trait
- src/tools/mcp_transport.rs: transport abstraction (Stdio, Http, Sse)
Wiring changes:
- src/tools/mod.rs: pub mod + pub use for new MCP modules
- src/config/schema.rs: McpTransport, McpServerConfig, McpConfig types; mcp field
on Config; validate_mcp_config; mcp unit tests
- src/config/mod.rs: re-exports McpConfig, McpServerConfig, McpTransport
- src/channels/mod.rs: MCP server init block in start_channels()
- src/agent/loop_.rs: MCP registry init in run() and process_message()
- src/onboard/wizard.rs: mcp: McpConfig::default() in both wizard constructors
* fix(tools/mcp): inject MCP tools after built-in tool filter, not before
MCP servers are user-declared external integrations. The built-in
agent.allowed_tools / agent.denied_tools filter (filter_primary_agent_tools_or_fail)
applies to built-in tools only. Injecting MCP tools before that
filter would silently drop all MCP tools when a restrictive allowlist is
configured.
Add ordering comments at both call sites (run() CLI path and
process_message() path) to make this contract explicit for reviewers
and future merges.
Identified via: shady831213/zeroclaw-agent-mcp@3f90b78
* fix(tools/mcp): strip approved field from MCP tool args before forwarding
ZeroClaw's security model injects `approved: bool` into built-in tool
args for supervised-mode confirmation. MCP servers have no knowledge of
this field and reject calls that include it as an unexpected parameter.
Strip `approved` from object-typed args in McpToolWrapper::execute()
before forwarding to the MCP server. Non-object args pass through
unchanged (no silent conversion or rejection).
Add two unit tests:
- execute_strips_approved_field_from_object_args: verifies removal
- execute_handles_non_object_args_without_panic: verifies non-object
shapes are not broken by the stripping logic
Identified via: shady831213/zeroclaw-agent-mcp@c68be01
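The stripping logic, sketched with a toy enum standing in for `serde_json::Value` (the real `McpToolWrapper::execute()` matches on `Value::Object`; the names here are illustrative):

```rust
use std::collections::BTreeMap;

/// Toy stand-in for serde_json::Value: just enough shape to show the
/// object vs non-object distinction.
#[derive(Debug, PartialEq)]
enum Args {
    Object(BTreeMap<String, String>),
    Text(String),
}

/// Sketch of the fix: remove the internally injected `approved` key
/// from object-typed args before forwarding to the MCP server; any
/// non-object shape passes through untouched (no conversion,
/// no rejection).
fn strip_approved(args: Args) -> Args {
    match args {
        Args::Object(mut map) => {
            map.remove("approved");
            Args::Object(map)
        }
        other => other,
    }
}

fn main() {
    let mut m = BTreeMap::new();
    m.insert("query".to_string(), "rust".to_string());
    m.insert("approved".to_string(), "true".to_string());
    if let Args::Object(map) = strip_approved(Args::Object(m)) {
        assert!(!map.contains_key("approved"));
        assert!(map.contains_key("query"));
    } else {
        unreachable!("object args stay object-typed");
    }
    // Non-object args are forwarded unchanged.
    assert_eq!(strip_approved(Args::Text("hi".into())), Args::Text("hi".into()));
}
```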
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Add `extra_headers` config field and `ZEROCLAW_EXTRA_HEADERS` env var
support so users can specify custom HTTP headers for provider API
requests. This enables connecting to providers that require specific
headers (e.g., User-Agent, HTTP-Referer, X-Title) without a reverse
proxy.
Config file headers serve as the base; env var headers override them.
Format: `Key:Value,Key2:Value2`
Closes #3189
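A sketch of the documented `Key:Value,Key2:Value2` parsing and the override order; function names and the whitespace-trimming are assumptions, not the shipped parser:

```rust
use std::collections::BTreeMap;

/// Parse `Key:Value,Key2:Value2`. Splitting on the *first* ':' per
/// pair lets header values themselves contain ':' (e.g. URLs in
/// HTTP-Referer). Malformed pairs without ':' are skipped.
fn parse_extra_headers(raw: &str) -> BTreeMap<String, String> {
    raw.split(',')
        .filter_map(|pair| pair.split_once(':'))
        .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
        .collect()
}

/// Config-file headers are the base; env var headers override
/// matching keys, as described above.
fn merge_headers(
    mut config: BTreeMap<String, String>,
    env: BTreeMap<String, String>,
) -> BTreeMap<String, String> {
    config.extend(env);
    config
}

fn main() {
    let cfg = parse_extra_headers("User-Agent:zeroclaw,X-Title:demo");
    let env = parse_extra_headers("X-Title:override,HTTP-Referer:https://example.com");
    let merged = merge_headers(cfg, env);
    assert_eq!(merged["X-Title"], "override"); // env wins
    assert_eq!(merged["User-Agent"], "zeroclaw"); // base preserved
    assert_eq!(merged["HTTP-Referer"], "https://example.com");
}
```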
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
MCP (Model Context Protocol) config and tool modules were added on the
old `main` branch but never made it to `master`. This restores the full
MCP subsystem: config schema, transport layer (stdio/HTTP/SSE), client
registry, tool wrapper, config validation, and channel wiring.
Closes #3379
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The provider HTTP request timeout was hardcoded at 120 seconds in
`OpenAiCompatibleProvider::http_client()`. This makes it configurable
via the `provider_timeout_secs` config key and the
`ZEROCLAW_PROVIDER_TIMEOUT_SECS` environment variable, defaulting
to 120s for backward compatibility.
Changes:
- Add `provider_timeout_secs` field to Config with serde default
- Add `ZEROCLAW_PROVIDER_TIMEOUT_SECS` env var override
- Add `timeout_secs` field and `with_timeout_secs()` builder on
`OpenAiCompatibleProvider`
- Add `provider_timeout_secs` to `ProviderRuntimeOptions`
- Thread config value through agent loop, channels, gateway, and tools
- Use `compat()` closure in provider factory to apply timeout to all
compatible providers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add `agent.tool_call_dedup_exempt` config key (list of tool names) to
allow specific tools to bypass the within-turn identical-signature
deduplication check in run_tool_call_loop. This fixes the browser
snapshot polling use case where repeated calls with identical arguments
are legitimate and should not be suppressed.
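In config terms (a sketch; the tool name shown is illustrative):

```toml
[agent]
# These tools may repeat identical calls within one turn without
# being suppressed by the dedup check, e.g. snapshot polling.
tool_call_dedup_exempt = ["browser_snapshot"]
```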
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
WebSearchTool previously stored the Brave API key once at boot and never
re-read it. This caused three failures: (1) keys set after boot via
web_search_config were ignored, (2) encrypted keys were passed as raw enc2:
blobs to the Brave API, and (3) keys absent at startup left the tool
permanently broken.
The fix adds lazy key resolution at execution time. A fast path returns
the boot-time key when it is plaintext and non-empty. When the boot key
is missing or still encrypted, the tool re-reads config.toml, decrypts
the value through SecretStore, and uses the result. This also means
runtime config updates (e.g. `web_search_config set brave_api_key=...`)
are picked up on the next search invocation.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Centralize cron shell command validation so all entrypoints enforce the
same security policy (allowlist + risk gate + approval) before
persistence and execution.
Changes:
- Add validate_shell_command() and validate_shell_command_with_security()
as the single validation gate for all cron shell paths
- Add add_shell_job_with_approval() and update_shell_job_with_approval()
that validate before persisting
- Add add_once_validated() and add_once_at_validated() for one-shot jobs
- Make raw add_shell_job/add_job/add_once/add_once_at pub(crate) to
prevent unvalidated writes from outside the cron module
- Route gateway API through validated creation path
- Route schedule tool through validated helpers (single validation)
- Route cron_add/cron_update tools through validated helpers
- Unify scheduler execution validation via validate_shell_command_with_security
- CLI update handler uses full validate_command_execution instead of
just is_command_allowed
- Add focused tests for validation parity across entrypoints
- Standardize error format to "blocked by security policy: {reason}"
Closes #2741
Closes #2742
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Scheduled jobs created via channel conversations (Discord, Telegram, etc.)
never delivered output back to the channel because:
1. The agent had no channel context (channel name + reply_target) in its
system prompt, so it could not populate the delivery config.
2. The schedule tool only creates shell jobs with no delivery support,
and the cron_add tool's delivery schema was opaque.
3. OpenAiCompatibleProvider was missing the native_tool_calling field,
causing a compile error.
Changes:
- Inject channel context (channel name + reply_target) into the system
prompt so the agent knows how to address delivery when scheduling.
- Improve cron_add tool description and delivery parameter schema to
guide the agent toward correct delivery config.
- Update schedule tool description to warn that output is only logged
and redirect to cron_add for channel delivery.
- Fix missing native_tool_calling field in OpenAiCompatibleProvider.
Co-authored-by: Cursor <cursoragent@cursor.com>
* feat(composio): fix v3 compatibility with parameter discovery, NLP text execution, and error enrichment
Three-layer fix for the Composio v3 API compatibility issue where the LLM
agent cannot discover parameter schemas, leading to repeated guessing and
execution failures.
Layer 1 – Surface parameter hints in list output:
- Add input_parameters field to ComposioV3Tool and ComposioAction structs
- Pass through input_parameters from v3 list response via map_v3_tools_to_actions
- Add format_input_params_hint() to show required/optional param names in list output
Layer 2 – Support natural-language text execution:
- Add text parameter to tool schema (mutually exclusive with params)
- Thread text through execute handler → execute_action → execute_action_v3
- Update build_execute_action_v3_request to send text instead of arguments
- Skip v2 fallback when text-mode is used (v2 has no NLP support)
Layer 3 – Enrich execute errors with parameter schema:
- Add get_tool_schema() to fetch full tool metadata from GET /api/v3/tools/{slug}
- Add format_schema_hint() to render parameter names, types, and descriptions
- On execute failure, auto-fetch schema and append to error message
Root cause: The v3 API returns input_parameters in list responses but
ComposioV3Tool was silently discarding them. The LLM had no way to discover
parameter schemas before calling execute, and error messages provided no
remediation guidance — creating an infinite guessing loop.
Co-Authored-By: unknown <>
(cherry picked from commit fd92cc5eb0)
* fix(composio): use floor_char_boundary for safe UTF-8 truncation in format_schema_hint
Co-Authored-By: unknown <>
(cherry picked from commit 18e72b6344)
* fix(composio): restore coherent v3 execute flow after replay
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
- Problem: The existing http_request tool returns raw HTML/JSON, which is nearly unusable for LLMs to extract
meaningful content from web pages.
- Why it matters: All mainstream AI agents (Claude Code, Gemini CLI, Aider) have dedicated web content extraction
tools. ZeroClaw lacks this capability, limiting its ability to research and gather information from the web.
- What changed: Added a new web_fetch tool that fetches web pages and converts HTML to clean plain text using
nanohtml2text. Includes domain allowlist/blocklist, SSRF protection, redirect following, and content-type aware
processing.
- What did not change (scope boundary): http_request tool is untouched. No shared code extracted between http_request
and web_fetch (DRY rule-of-three: only 2 callers). No changes to existing tool behavior or defaults.
Label Snapshot (required)
- Risk label: risk: medium
- Size label: size: M
- Scope labels: tool, config
- Module labels: tool: web_fetch
- If any auto-label is incorrect, note requested correction: N/A
Change Metadata
- Change type: feature
- Primary scope: tool
Linked Issue
- Closes #
- Related #
- Depends on #
- Supersedes #
Supersede Attribution (required when Supersedes # is used)
N/A
Validation Evidence (required)
cargo fmt --all -- --check # pass
cargo clippy --all-targets -- -D warnings # no new warnings (pre-existing warnings only)
cargo test --lib -- web_fetch # 26/26 passed
cargo test --lib -- tools::tests # 12/12 passed
cargo test --lib -- config::schema::tests # 134/134 passed
- Evidence provided: unit test results (26 new tests), manual end-to-end test with Ollama + qwen2.5:72b
- If any command is intentionally skipped, explain why: Full cargo clippy --all-targets has 43 pre-existing errors
unrelated to this PR (e.g. await_holding_lock, format! appended to String). Zero errors from web_fetch code.
Security Impact (required)
- New permissions/capabilities? Yes — new web_fetch tool can make outbound HTTP GET requests
- New external network calls? Yes — fetches web pages from allowed domains
- Secrets/tokens handling changed? No
- File system access scope changed? No
- If any Yes, describe risk and mitigation:
- Deny-by-default: enabled = false by default; tool is not registered unless explicitly enabled
- Domain filtering: allowed_domains (default ["*"] = all public hosts) + blocked_domains (takes priority).
Blocklist always wins over allowlist.
- SSRF protection: Blocks localhost, private IPs (RFC 1918), link-local, multicast, reserved ranges, IPv4-mapped
IPv6, .local TLD — identical coverage to http_request
- Rate limiting: can_act() + record_action() enforce autonomy level and rate limits
- Read-only mode: Blocked when autonomy is ReadOnly
- Response size cap: 500KB default truncation prevents context window exhaustion
- Proxy support: Honors [proxy] config via tool.web_fetch service key
Privacy and Data Hygiene (required)
- Data-hygiene status: pass
- Redaction/anonymization notes: No personal data in code, tests, or fixtures
- Neutral wording confirmation: All test identifiers use neutral project-scoped labels
Compatibility / Migration
- Backward compatible? Yes — new tool, no existing behavior changed
- Config/env changes? Yes — new [web_fetch] section in config.toml (all fields have defaults)
- Migration needed? No — #[serde(default)] on all fields; existing configs without [web_fetch] section work unchanged
i18n Follow-Through (required when docs or user-facing wording changes)
- i18n follow-through triggered? No — no docs or user-facing wording changes
Human Verification (required)
- Verified scenarios:
- End-to-end test: zeroclaw agent with Ollama qwen2.5:72b successfully called web_fetch to fetch
https://github.com/zeroclaw-labs/zeroclaw, returned clean plain text with project description, features, star count
- Tool registration: tool_count increased from 22 to 23 when enabled = true
- Config: enabled = false (default) → tool not registered; enabled = true → tool available
- Edge cases checked:
- Missing [web_fetch] section in existing config.toml → works (serde defaults)
- Blocklist priority over allowlist
- SSRF with localhost, private IPs, IPv6
- What was not verified:
- Proxy routing (no proxy configured in test environment)
- Very large page truncation with real-world content
Side Effects / Blast Radius (required)
- Affected subsystems/workflows: all_tools_with_runtime() signature gained one parameter (web_fetch_config); all 5
call sites updated
- Potential unintended effects: None — new tool only, existing tools unchanged
- Guardrails/monitoring for early detection: enabled = false default; tool_count in debug logs
Agent Collaboration Notes (recommended)
- Agent tools used: Claude Code (Opus 4.6)
- Workflow/plan summary: Plan mode → approval → implementation → validation
- Verification focus: Security (SSRF, domain filtering, rate limiting), config compatibility, tool registration
- Confirmation: naming + architecture boundaries followed (CLAUDE.md + CONTRIBUTING.md): Yes — trait implementation +
factory registration pattern, independent security helpers (DRY rule-of-three), deny-by-default config
Rollback Plan (required)
- Fast rollback command/path: git revert <commit>
- Feature flags or config toggles: [web_fetch] enabled = false (default) disables completely
- Observable failure symptoms: tool_count in debug logs drops by 1; LLM cannot call web_fetch
Risks and Mitigations
- Risk: SSRF bypass via DNS rebinding (attacker-controlled domain resolving to private IP)
- Mitigation: Pre-request host validation blocks known private/local patterns. Same defense level as existing
http_request tool. Full DNS-level protection would require async DNS resolution before connect, which is out of scope
for this PR.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
(cherry picked from commit 04597352cc)
* ci(homebrew): prefer HOMEBREW_UPSTREAM_PR_TOKEN with fallback
* ci(homebrew): handle existing upstream remote and main base
* feat(tools): Use system default browser instead of hard-coded Brave Browser
---------
Co-authored-by: Will Sarg <12886992+willsarg@users.noreply.github.com>
When max_response_size is set to 0, the condition `text.len() > 0` is
true for any non-empty response, causing all responses to be truncated
to empty strings. The conventional meaning of 0 for size limits is
"no limit" (matching ulimit, nginx client_max_body_size, curl, etc.).
Add an early return when max_response_size == 0 and update the doc
comment to document this behavior.
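A minimal sketch of the corrected check (the function name is illustrative; the real code would also need to respect UTF-8 char boundaries when slicing, per the floor_char_boundary fix elsewhere in this log):

```rust
/// Sketch of the fix: 0 means "no limit", matching ulimit / nginx
/// client_max_body_size conventions; otherwise truncate to the cap.
fn truncate_response(text: &str, max_response_size: usize) -> &str {
    if max_response_size == 0 {
        return text; // 0 = unlimited, not "truncate everything"
    }
    if text.len() > max_response_size {
        // NOTE: byte slicing; real code must clamp to a char boundary.
        &text[..max_response_size]
    } else {
        text
    }
}

fn main() {
    assert_eq!(truncate_response("hello", 0), "hello"); // no limit
    assert_eq!(truncate_response("hello", 3), "hel");   // capped
    assert_eq!(truncate_response("hi", 10), "hi");      // under cap
}
```

Before the fix, `max_response_size == 0` made the length check `text.len() > 0`, so every non-empty response was truncated to an empty string.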
Thinking/reasoning models (Kimi K2.5, GLM-4.7, DeepSeek-R1) return a
reasoning_content field in assistant messages containing tool calls.
ZeroClaw was silently dropping this field when constructing conversation
history, causing provider APIs to reject follow-up requests with 400
errors: "thinking is enabled but reasoning_content is missing in
assistant tool call message".
Add reasoning_content: Option<String> as an opaque pass-through at every
layer of the pipeline: ChatResponse, ConversationMessage, NativeMessage
structs, parse/convert/build functions, and dispatcher. The field is
skip_serializing_if = None so it is invisible for non-thinking models.
Closes #1327
Add a complete web management panel for ZeroClaw, served directly from
the binary via rust-embed. The dashboard provides real-time monitoring,
agent chat, configuration editing, and system diagnostics — all
accessible at http://localhost:5555/ after pairing.
Backend (Rust):
- Add 15+ REST API endpoints under /api/* with bearer token auth
- Add WebSocket agent chat at /ws/chat with query param auth
- Add SSE event stream at /api/events via BroadcastObserver
- Add rust-embed static file serving at /_app/* with SPA fallback
- Extend AppState with tools_registry, cost_tracker, event_tx
- Extract doctor::diagnose() for structured diagnostic results
- Add Serialize derives to IntegrationStatus, CliCategory, DiscoveredCli
Frontend (React + Vite + Tailwind CSS):
- 10 dashboard pages: Dashboard, AgentChat, Tools, Cron, Integrations,
Memory, Config, Cost, Logs, Doctor
- WebSocket client with auto-reconnect for agent chat
- SSE client (fetch-based, supports auth headers) for live events
- Full EN/TR internationalization (~190 translation keys)
- Dark theme with responsive layouts
- Auth flow via 6-digit pairing code, token stored in localStorage
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>