* feat: add desktop companion app integration and CI/CD pipeline
- Add `zeroclaw desktop` CLI command to launch/install companion app
- Add device-aware installer (desktop/server/mobile/embedded/container)
- Replace from-source Tauri build with pre-built .dmg download flow
- Add `build-desktop` job to beta and stable release workflows
- Build universal macOS .dmg via Tauri on macos-14 runners
- Include .dmg in GitHub Release assets alongside CLI binaries
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: bump version to 0.6.1
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: update Cargo.lock for 0.6.1
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(security): wire LeakDetector into outbound message path
Wire LeakDetector.scan() into channel response sanitization (via
sanitize_channel_response) and cron job delivery (via
deliver_announcement) to prevent credential leakage to external
channels.
Changes:
- src/channels/mod.rs: Add leak detection to sanitize_channel_response()
with tracing::warn! on detection. Follows prior art from commit fb3b7b8e.
- src/cron/scheduler.rs: Add leak detection to deliver_announcement()
before sending output to any channel (Telegram, Discord, Slack,
Mattermost, Signal, Matrix, WhatsApp, QQ).
- Added unit tests in both modules to verify redaction behavior.
All outbound channel messages and cron job delivery are now scanned for
API keys, AWS credentials, JWTs, database URLs, private keys, and
high-entropy tokens. Detected leaks are redacted before delivery and
logged via tracing::warn! with pattern details.
Ref: SPEC-1.3-leak-detector-outbound.md (gap closure §1.3)
Prior art: commit fb3b7b8e (zeroclaw_homecomming branch)
* fix(security): enforce leak redaction with RedactedOutput newtype
Replace raw String return from scan_and_redact_output with a
RedactedOutput newtype. All channel dispatch in deliver_announcement
must go through this type, so the compiler rejects any path that
skips leak detection. Replaces tests that called LeakDetector in
isolation with tests that exercise scan_and_redact_output through
the RedactedOutput boundary.
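A minimal sketch of the newtype boundary, with illustrative names rather than the repo's actual code:

```rust
/// The only way to obtain a RedactedOutput is through
/// scan_and_redact_output, so any dispatch path that demands one
/// cannot be handed an unscanned String.
pub struct RedactedOutput(String);

impl RedactedOutput {
    pub fn as_str(&self) -> &str {
        &self.0
    }
}

/// Stand-in for the real scanner: the actual LeakDetector matches API
/// keys, JWTs, private keys, high-entropy tokens, etc.
pub fn scan_and_redact_output(raw: &str) -> RedactedOutput {
    RedactedOutput(raw.replace("sk-", "[REDACTED]"))
}

/// Channel dispatch accepts only the newtype, so skipping the scan is
/// a compile error instead of a code-review finding.
pub fn deliver_announcement(output: RedactedOutput) {
    println!("sending: {}", output.as_str());
}
```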
* feat(channels): add message redaction API to Channel trait
Add redact_message() method to Channel trait with default no-op implementation.
Implement for MatrixChannel using existing room.redact() SDK method.
Add comprehensive test coverage for trait default and Matrix implementation.
Changes:
- src/channels/traits.rs: Add redact_message() default method and test
- src/channels/matrix.rs: Implement redact_message() for MatrixChannel
- tests/integration/channel_matrix.rs: Add RedactMessage event, test channel impl, and lifecycle test
Tests added:
- default_redact_message_returns_success (trait default)
- redact_message_lifecycle (MatrixTestChannel)
- Updated minimal_channel_all_defaults_succeed
All tests pass. No breaking changes.
* test(channels): clarify redact_message test coverage
* fix(channels): use OwnedRoomId for matrix redact_message room lookup
RoomId is unsized (wraps str), so inlining .parse() in
get_room(&target_room.parse()?) fails to compile with the
channel-matrix feature. Use an intermediate OwnedRoomId binding,
matching the pattern used in send() and listen().
* feat(ci): add per-component labels for channels, providers, and tools
labeler.yml only had base scope labels (channel, provider, tool) matching
entire directories. The PR workflow and ci-map specs already describe
fine-grained module labels (channel:telegram, provider:kimi, tool:shell)
but the labeler config never implemented them.
Adds 27 channel, 13 provider, and 12 tool group entries. Base labels are
preserved for the compaction rule in pr-labeler.yml.
* feat(ci): add pr-path-labeler workflow to invoke actions/labeler
labeler.yml has path-label config but no workflow was invoking it.
Adds pr-path-labeler.yml on pull_request_target to apply path and
per-component labels automatically. Updates actions source policy
doc to reflect the new workflow and action.
* docs(contributing): add label registry as single reference for all PR labels
Label definitions were scattered across labeler.yml, label-policy.json,
pr-workflow.md, and ci-map.md with no consolidated view. Adds a single
registry documenting all 99 labels by category, their definitions, how
each is applied, and which automation exists vs. is specced but missing.
* docs(contributing): correct label-registry automation status
The CI was simplified from 20+ workflows to 4. Size, risk,
contributor tier, and triage label automation (pr-labeler.yml,
pr-auto-response.yml, pr-check-stale.yml) were removed during
that simplification. Update the registry to reflect that these
labels are now applied manually, not "not yet implemented."
See PR #2931 for the upstream docs cleanup.
* feat(memory): add pgvector support and Postgres knowledge graph (#4028)
Enhance PostgresMemory with optional pgvector extension for hybrid
vector+keyword recall. Add namespace and importance columns. Create
PgKnowledgeGraph with recursive CTEs for graph traversal (no AGE
dependency). Extend ConsolidationResult with facts/trend fields for
richer extraction. All behind memory-postgres feature flag.
New file: knowledge_graph_pg.rs (5 unit tests)
Modified: postgres.rs (pgvector init, namespace/importance columns),
consolidation.rs (facts/trend fields), StorageProviderConfig
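A sketch of the CTE shape this implies, assuming a tokio_postgres-style client and an illustrative kg_edges(src, dst) table; the real PgKnowledgeGraph schema may differ:

```rust
use tokio_postgres::Client;

/// Walk the graph up to `max_depth` hops from `start` using plain SQL
/// recursion, with no AGE extension required.
async fn reachable_nodes(
    client: &Client,
    start: &str,
    max_depth: i32,
) -> Result<Vec<String>, tokio_postgres::Error> {
    let rows = client
        .query(
            "WITH RECURSIVE walk(node, depth) AS (
                 SELECT $1::text, 0
                 UNION ALL
                 SELECT e.dst, w.depth + 1
                 FROM kg_edges e
                 JOIN walk w ON e.src = w.node
                 WHERE w.depth < $2
             )
             SELECT DISTINCT node FROM walk",
            &[&start, &max_depth],
        )
        .await?;
    Ok(rows.iter().map(|row| row.get(0)).collect())
}
```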
* fix: update PostgresMemory::new call in CLI with pgvector params
* fix: route WebSocket connections through configured proxy (#4408)
tokio_tungstenite::connect_async does not honour proxy settings. Add
proxy-aware WebSocket connect helpers (HTTP CONNECT and SOCKS5) in
config::schema and update all six channel WebSocket connections
(discord, discord_history, slack, dingtalk, lark, qq) to use
ws_connect_with_proxy instead of connect_async.
* fix: update Cargo.lock with tokio-socks dependency
Add heuristic complexity estimation (Simple/Standard/Complex) and
post-response quality evaluation to enable cheapest-capable model
routing. When rule-based classification misses, auto_classify falls
back to complexity-tier hints for model selection. Eval gate checks
response quality and suggests retry with higher-tier model if below
threshold.
New file: eval.rs (14 unit tests)
Modified: AgentConfig (eval, auto_classify fields), classify_model()
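A toy version of the tiering heuristic; the signals and thresholds here are assumptions, not the values in eval.rs:

```rust
#[derive(Debug, PartialEq)]
enum Complexity {
    Simple,
    Standard,
    Complex,
}

/// Cheap lexical signals stand in for the real heuristic: prompt
/// length, inline code, and multi-step wording.
fn estimate_complexity(prompt: &str) -> Complexity {
    let words = prompt.split_whitespace().count();
    let has_code = prompt.matches('`').count() >= 2;
    let lower = prompt.to_lowercase();
    let multi_step = ["then", "after that", "step"].iter().any(|kw| lower.contains(kw));

    if words > 200 || (has_code && multi_step) {
        Complexity::Complex
    } else if words > 40 || has_code || multi_step {
        Complexity::Standard
    } else {
        Complexity::Simple
    }
}

fn main() {
    assert_eq!(estimate_complexity("What time is it?"), Complexity::Simple);
    assert_eq!(estimate_complexity("Refactor this, then add tests"), Complexity::Standard);
}
```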
Fixes #4409. The git_operations tool now accepts an optional 'path'
parameter to specify a subdirectory within the workspace. Path
traversal outside the workspace is rejected for security.
Fixes #4442. Empty allowed_tools arrays caused all tools to be filtered
out, making cron jobs fail. Also adds tool name validation in the
OpenRouter provider to skip tools with names that violate the OpenAI
naming regex.
Fixes #4445. Docker/podman containers running in daemon mode had no
[autonomy] section in the baked config.toml, causing file_write and
file_edit tools to be auto-denied in non-interactive mode.
Add history pruning and context-aware tool filtering to reduce token
usage per API call. History pruner collapses old tool call/result pairs
and drops oldest messages when over budget. Context analyzer suggests
relevant tools per iteration based on keyword matching.
New files: history_pruner.rs, context_analyzer.rs
Modified: AgentConfig (new fields), ToolFilterGroup (filter_builtins),
agent loop (pruning integration)
Resolves #3540. The Lark/Feishu channel was not included in default features, causing builds via cargo install, brew, and other standard install methods to produce binaries without Lark support. Also fixes pre-existing lark audio test failures that were previously hidden.
The concurrency group used github.sha for push events, giving each
merge its own group. Back-to-back merges queued multiple full CI runs
that never cancelled each other, exhausting runner capacity.
Use a fixed 'push-master' group so only the latest push to master
runs CI — older runs are cancelled automatically.
Add BedrockAuth enum supporting both SigV4 and BearerToken authentication.
BEDROCK_API_KEY env var takes precedence over AWS AKSK credentials.
When a Bearer token is provided, requests use Authorization: Bearer header
instead of SigV4 signing. Existing SigV4 auth remains fully backward
compatible.
Closes #3742
Add `[shell_tool]` section to config schema with `timeout_secs` field
(default 60s). The ShellTool struct now carries the timeout as an
instance field instead of relying on the hardcoded constant, and the
full tool registry reads the value from Config at construction time.
Closes #4331
Add a `zeroclaw skills test [name] [--verbose]` command that discovers
and executes TEST.sh files inside skill directories. Each line in a
TEST.sh follows the format `command | expected_exit_code | pattern` and
is validated against actual exit code and output (substring or regex).
- Add `Test` variant to `SkillCommands` enum in lib.rs
- Create `src/skills/testing.rs` with test runner, parser, and
pretty-printed results using the console crate
- Wire the new subcommand into `handle_command()` in skills/mod.rs
- Add example TEST.sh for the browser skill
- Include 14 unit tests covering parsing, pattern matching, execution,
aggregation, and edge cases
Closes #3697
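A sketch of parsing the `command | expected_exit_code | pattern` line format described above; names are illustrative, not the actual types in src/skills/testing.rs:

```rust
/// One parsed TEST.sh line.
struct TestLine {
    command: String,
    expected_exit: i32,
    pattern: String,
}

/// splitn(3, '|') keeps any further '|' characters inside the pattern.
/// Note: a command that itself contains '|' would need escaping; the
/// real parser may handle that differently.
fn parse_test_line(line: &str) -> Option<TestLine> {
    let mut parts = line.splitn(3, '|').map(str::trim);
    let command = parts.next()?.to_string();
    let expected_exit = parts.next()?.parse().ok()?;
    let pattern = parts.next()?.to_string();
    if command.is_empty() {
        return None;
    }
    Some(TestLine { command, expected_exit, pattern })
}

fn main() {
    let t = parse_test_line("echo hi | 0 | hi").expect("valid line");
    assert_eq!(t.expected_exit, 0);
    assert_eq!(t.pattern, "hi");
}
```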
Implement a new ask_user tool that sends a question to a messaging
channel and waits for the user's response with configurable timeout.
Supports optional structured choices and works across all channels
(Telegram, Discord, Slack, CLI, Web) via the late-bound channel map
pattern used by PollTool and ReactionTool.
Closes #3698
Migrate Telegram, Discord, Slack, WhatsApp Web, and Matrix channels
from inline TranscriptionConfig/transcribe_audio() calls to the
centralized TranscriptionManager pattern for unified provider selection,
fallback, and global cap enforcement.
Each channel now:
- Stores a transcription_manager: Option<Arc<TranscriptionManager>> field
- Builds the manager in with_transcription() from config
- Calls manager.transcribe() instead of the legacy transcribe_audio() shim
- Matrix replaces the whisper-cpp shell-out with the manager API
Closes #4311
Allow transcription of forwarded/regular audio messages on WhatsApp,
not just push-to-talk voice notes. Controlled by the new
`transcribe_non_ptt_audio` option in `[transcription]` (default: false).
Closes #4343
The tool-context summary (e.g. `[Used tools: browser_open]`) was
prepended to assistant history entries for all non-Telegram channels so
the LLM could retain awareness of prior tool usage. This caused the
model to learn and reproduce the bracket format in its own output,
delivering raw log lines to end-users instead of meaningful tool results.
Three changes:
1. Stop prepending `[Used tools: …]` to stored history for all channels.
The LLM already receives tool context through the tool-call/result
messages built by `run_tool_call_loop`, making the summary redundant.
2. Strip any `[Used tools: …]` prefix from cached history on reload, so
existing sessions are cleaned up retroactively.
3. Strip the prefix in `sanitize_channel_response` as a safety net, in
case the model still echoes it from older context.
Implement ClaudeCodeRunner that spawns Claude Code in a tmux session with
HTTP hooks to POST tool execution events back to ZeroClaw's gateway,
updating a Slack message in-place with progress, plus an SSH handoff link.
Components:
- src/tools/claude_code_runner.rs: session lifecycle (tmux spawn, hook
config generation, event handling, TTL cleanup)
- src/gateway/api.rs: POST /hooks/claude-code endpoint for hook events
- src/channels/slack.rs: chat.update + chat.postMessage with ts return
- src/config/schema.rs: ClaudeCodeRunnerConfig (enabled, ssh_host,
tmux_prefix, session_ttl)
- src/tools/mod.rs: register runner tool
Closes #4287
Add interactive /config command for Slack that renders Block Kit
static_select dropdowns for provider and model selection. Interactive
block_actions payloads from Socket Mode are translated into synthetic
/models and /model commands so existing runtime command handling
applies the selection to the per-sender RouteSelectionMap.
- Enable runtime model switching for the slack channel
- Add ShowConfig variant to ChannelRuntimeCommand
- Parse /config in parse_runtime_command (gated by supports_runtime_model_switch)
- Build Block Kit JSON with provider/model dropdowns and current selection
- Detect __ZEROCLAW_BLOCK_KIT__ prefix in SlackChannel::send() to post raw blocks
- Handle interactive envelope type in Socket Mode for block_actions events
- Non-Slack channels get a plain-text /config fallback
Closes #4286
Replace the AGENTS.md symlink with a real file containing all cross-tool
agent instructions. Reduce CLAUDE.md to Claude Code-specific directives
only, with a reference back to AGENTS.md for shared conventions.
Closes #4289
Resolves #3538. Docker builds only included memory-postgres in ZEROCLAW_CARGO_FEATURES, which meant Lark/Feishu long-connection (WebSocket) support was not compiled into container images.
Includes macOS desktop menu bar app, media pipeline channels,
SOP engine improvements, and CLI tool integrations.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tauri): add desktop app scaffolding with Tauri commands
Add Tauri v2 workspace member under apps/tauri with gateway client,
shared state, and Tauri command handlers for status, health, channels,
pairing, and agent messaging. The desktop app communicates with the
ZeroClaw gateway over HTTP and is buildable independently via
`cargo check -p zeroclaw-desktop`.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tauri): add system tray icon, menu, and events
Add tray module with menu construction (Show Dashboard, Status, Open
Gateway, Quit) and event handlers. Wire tray setup into the Tauri
builder's setup hook. Add icons/ placeholder directory.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tauri): add mobile entry, platform capabilities, icons, and tests
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tauri): add auto-pairing, gradient icons, and WebView integration
Desktop app now auto-pairs with the gateway on startup using localhost
admin endpoints, injects the token into the WebView localStorage, and
loads the dashboard from the gateway URL. Adds blue gradient Z tray
icons (idle/working/error/disconnected states), proper .icns app icon,
health polling with tray updates, and Tauri-aware API/WS/SSE paths in
the web frontend.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tauri): add dock icon, keep-alive for tray mode, and install preflight
- Set macOS dock icon programmatically via NSApplication API so it shows
the blue gradient Z even in dev builds without a .app bundle
- Add RunEvent::ExitRequested handler to keep app alive as a menu bar app
when all windows are closed
- Add desktop app preflight checks to install.sh (Rust, Xcode CLT,
cargo-tauri, Node.js) with automatic build on macOS
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tools): add Codex CLI and Gemini CLI harness tools
Add two new ACP harness tools following the same pattern as ClaudeCodeTool:
- CodexCliTool: delegates to `codex -q` for OpenAI Codex CLI
- GeminiCliTool: delegates to `gemini -p` for Google Gemini CLI
Both tools share the same security model (rate limiting, act-policy
enforcement, workspace containment, env sanitisation, kill-on-drop
timeout) and are config-gated via `[codex_cli]` / `[gemini_cli]`
sections with `enabled = true`.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(tools): use 'codex exec' instead of invalid '-q' flag
* feat(tools): add OpenCode CLI harness tool
Add opencode_cli tool following the same pattern as codex_cli and
gemini_cli. Uses `opencode run "<prompt>"` for non-interactive
delegation. Includes config struct, factory registration, env
sanitization, timeout, and tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(tools): correct CLI invocation args for codex and gemini tools
- Fix codex_cli to use `codex -q` (quiet mode) instead of incorrect
`codex exec` subcommand, matching the documented behavior and the
actual @openai/codex CLI interface.
- Fix gemini_cli install instruction to reference `@google/gemini-cli`
instead of the incorrect `@anthropic-ai/gemini-cli` package name.
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: rareba <rareba@users.noreply.github.com>
* feat(tools): add background and parallel execution to delegate tool
Add three new execution modes to the delegate tool:
- background: when true, spawns the sub-agent in a background tokio task
and returns a task_id immediately. Results are persisted to
workspace/delegate_results/{task_id}.json.
- parallel: accepts an array of agent names, runs them all concurrently
with the same prompt, and returns all results when complete.
- action parameter with check_result/list_results/cancel_task support
for managing background task lifecycle.
Cascade control: background sub-agents use child CancellationToken
derived from the parent, enabling cancel_all_background_tasks() to
abort all running background agents when the parent session ends.
Existing synchronous delegation flow is fully preserved (opt-in only).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
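A sketch of the cascade, assuming the tokio_util CancellationToken API (which provides parent/child token derivation):

```rust
use std::time::Duration;
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    // Parent token owned by the session; each background sub-agent
    // holds a child token derived from it.
    let parent = CancellationToken::new();
    let child = parent.child_token();

    let task = tokio::spawn(async move {
        tokio::select! {
            _ = child.cancelled() => "cancelled",
            _ = tokio::time::sleep(Duration::from_secs(60)) => "finished",
        }
    });

    // Cancelling the parent cascades to every derived child, which is
    // how cancel_all_background_tasks() can stop all sub-agents at once.
    parent.cancel();
    assert_eq!(task.await.unwrap(), "cancelled");
}
```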
* fix(tools): validate task_id as UUID to prevent path traversal in delegate tool
The check_result and cancel_task actions accept a user-provided task_id
that is used directly in filesystem path construction. A malicious
task_id like "../../etc/passwd" could read or overwrite arbitrary files.
Since task_ids are always generated as UUIDs internally, this adds UUID
format validation before any filesystem operations, rejecting invalid
task_id values with a clear error message.
Also updates existing tests to use valid UUID-format task_ids and adds
dedicated path traversal rejection tests.
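A sketch of the validation, using the uuid crate; the directory layout and error text are illustrative:

```rust
use std::path::PathBuf;
use uuid::Uuid;

/// Uuid::parse_str rejects anything that is not a canonical UUID, so
/// inputs like "../../etc/passwd" never reach path construction.
fn result_path(results_dir: &str, task_id: &str) -> Result<PathBuf, String> {
    let id = Uuid::parse_str(task_id)
        .map_err(|_| format!("invalid task_id (expected UUID): {task_id}"))?;
    Ok(PathBuf::from(results_dir).join(format!("{id}.json")))
}

fn main() {
    assert!(result_path("delegate_results", "../../etc/passwd").is_err());
    assert!(result_path("delegate_results", "550e8400-e29b-41d4-a716-446655440000").is_ok());
}
```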
* fix: add missing attachments field to wati ChannelMessage after media pipeline merge
* fix(channels): add missing attachments field to voice_wake and lark
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(sop): add deterministic execution mode with typed steps and approval checkpoints
Add opt-in deterministic execution to the SOP workflow engine, inspired
by OpenClaw's Lobster engine. Deterministic mode bypasses LLM round-trips
for step transitions, executing steps sequentially with piped outputs.
Key additions:
- SopExecutionMode::Deterministic variant and `deterministic: true` SOP.toml flag
- SopStepKind enum (Execute/Checkpoint) for marking approval pause points
- StepSchema for typed input/output validation (JSON Schema fragments)
- DeterministicRunState for persisting/resuming interrupted workflows
- DeterministicSavings for tracking LLM calls saved
- SopRunAction::DeterministicStep and CheckpointWait action variants
- SopRunStatus::PausedCheckpoint status
- Engine methods: start_deterministic_run, advance_deterministic_step,
resume_deterministic_run, persist/load_deterministic_state
- SopConfig in config/schema.rs with sops_dir, default_execution_mode,
max_concurrent_total, approval_timeout_secs, max_finished_runs
- Wire `pub mod sop` in lib.rs (previously dead/uncompiled module)
- Fix pre-existing test issues: TempDir import, async test annotations
All 86 SOP core tests pass (engine: 42, mod: 17, dispatch: 13, types: 14).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(sop): resolve clippy warnings and fix metrics test failures
- Add `ampersona-gates = []` feature declaration to Cargo.toml to fix
clippy `unexpected cfg condition value` errors in sop/mod.rs,
sop/audit.rs, and sop/metrics.rs.
- Use dynamic Utc::now() timestamps in the metrics test helper `make_run()`
  instead of hardcoded 2026-02-19 dates, which had drifted outside the
  7-day/30-day metric windows, causing 7 test failures.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(sop): remove non-functional ampersona-gates feature flag
The ampersona-gates feature flag referenced ampersona_core and
ampersona_engine crates that do not exist, causing cargo check
--all-features to fail. Remove the feature flag and all gated code:
- Remove ampersona-gates from Cargo.toml [features]
- Delete src/sop/gates.rs (entire module behind cfg gate)
- Remove gated methods from audit.rs (log_gate_decision, log_phase_state)
- Remove gated MetricsProvider impl and tests from metrics.rs
- Simplify sop_status.rs gate_eval field and append_gate_status
- Update observability docs (EN + zh-CN)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style(sop): fix formatting in metrics.rs
* fix(sop): wire SOP tools into tool registry
The five SOP tools (sop_list, sop_advance, sop_execute, sop_approve,
sop_status) existed as source files but were never registered in
all_tools_with_runtime. They are now conditionally registered when
sop.sops_dir is configured.
Also fixes:
- Add mod sop + SopCommands re-export to main.rs (binary crate)
- Handle new DeterministicStep/CheckpointWait variants in match arms
- Add missing struct fields (deterministic, kind, schema, llm_calls_saved)
to test constructors
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(sop): fix state file leak and add deterministic execution tests
- Fix persist_deterministic_state fallback: use system temp dir instead
of current working directory when SOP location is unset, preventing
state files from polluting the working directory.
- Add comprehensive test coverage for deterministic execution path:
start, advance, checkpoint pause, completion with savings tracking,
and rejection of non-deterministic SOPs.
- Add tests for deterministic flag in TOML manifest loading and
checkpoint kind parsing from SOP.md.
- Add serde roundtrip tests for DeterministicRunState, SopStepKind,
SopExecutionMode::Deterministic, and SopRunStatus::PausedCheckpoint.
* ci: retrigger CI
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: rareba <rareba@users.noreply.github.com>
- Add error codes to WS error messages (AGENT_INIT_FAILED, AUTH_ERROR, PROVIDER_ERROR, INVALID_JSON, UNKNOWN_MESSAGE_TYPE, EMPTY_CONTENT)
- Send Close frame with code 1011 on agent init failure
- Add tracing logs for agent init and turn errors
- Increase MAX_API_ERROR_CHARS from 200 to 500
- Frontend: handle abnormal close codes and show config error hints
Fixes #3681
Supersedes #4366
Original work by mark-linyb
Narration text from native tool-call providers that doesn't end with a newline now gets one appended before dispatch to the draft updater. Prevents garbled output in Telegram drafts.
Closes #4348
Drop delta_tx before awaiting draft_updater so the mpsc channel closes and the updater task can terminate. Without this, draft-capable channels (e.g. Telegram) hang indefinitely after the tool loop completes.
Closes #4300
* feat(channels): add automatic media understanding pipeline for inbound messages
Add MediaPipeline that pre-processes inbound channel message attachments
before the agent sees them:
- Audio: transcribed via existing transcription infrastructure, annotated
as [Audio transcription: ...]
- Images: annotated with [Image: <file> attached] (vision-aware)
- Video: annotated with [Video: <file> attached] (placeholder for future API)
The pipeline is opt-in via [media_pipeline] config section (default: disabled).
Individual media types can be toggled independently.
Changes:
- New src/channels/media_pipeline.rs with MediaPipeline struct and tests
- New MediaPipelineConfig in config/schema.rs
- Added attachments field to ChannelMessage for media pass-through
- Wired pipeline into process_channel_message after hooks, before agent
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): add attachments field to integration test fixtures
Add missing `attachments: vec![]` to all ChannelMessage struct literals
in channel_matrix.rs and channel_routing.rs after the new attachments
field was added to the struct in traits.rs.
Also fix schema.rs test compilation: make TempDir import unconditional
and add explicit type annotations on tokio::fs calls to resolve type
inference errors in the bootstrap file tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): add missing attachments field to gmail_push and discord_history constructors
These channels were added to master after the media pipeline PR was
originally branched. The ChannelMessage struct now requires an
attachments field, so initialise it to an empty Vec for channels
that do not yet extract attachments.
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add Agent::turn_streamed() that forwards TurnEvent (Chunk, ToolCall,
ToolResult) through an mpsc channel during execution. The WebSocket
gateway uses tokio::join! to relay these events to the client in real
time instead of waiting for the full turn to complete.
Introduce chunk_reset message type so the frontend clears its draft
buffer before the authoritative done message arrives. Update the React
AgentChat page to render streamed text live in the typing indicator
area, replacing the bounce-dot animation when content is available.
Backward-compatible: the done message still carries the full_response
field unchanged, and providers that do not support streaming fall back
to the non-streaming chat path transparently.
Closes #4372
Add the [cost] section to dev/config.template.toml and
examples/config.example.toml documenting all CostConfig options
(enabled, daily_limit_usd, monthly_limit_usd, warn_at_percent,
allow_override) with their defaults and commented-out model pricing
examples.
Fixes #4373
Add `let _ =` prefix to explicitly discard the Result from
`client.encryption().backups().disable().await`, suppressing the
`unused_must_use` compiler warning.
Closes #4374
* fix(transcription): remove deployment-specific WireGuard references from doc comments
LocalWhisperProvider and LocalWhisperConfig doc comments referenced
WireGuard and specific internal infrastructure. Deployment topology is
operator choice — replace with neutral examples.
* feat(wati): add audio and voice message transcription
* fix(wati): SSRF host validation, early fromMe check, download size cap
Validate that media_url host matches the configured api_url host before
fetching, preventing SSRF with credential leakage. Move the fromMe
check before the HTTP download to avoid wasting bandwidth on outgoing
messages. Add MAX_WATI_AUDIO_BYTES (25 MiB) Content-Length pre-check.
Skip empty transcripts in parse_audio_as_message. Short-circuit
with_transcription() when config.enabled is false.
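A sketch of the host-pinning check, using the url crate; hosts and the scheme policy are illustrative:

```rust
use url::Url;

/// Only fetch media whose scheme and host match the configured API
/// URL, so attacker-controlled media_url values cannot redirect the
/// download (and its auth headers) to internal or external hosts.
fn media_host_allowed(api_url: &str, media_url: &str) -> bool {
    let (Ok(api), Ok(media)) = (Url::parse(api_url), Url::parse(media_url)) else {
        return false;
    };
    api.scheme() == "https"
        && media.scheme() == "https"
        && api.host_str().is_some()
        && api.host_str() == media.host_str()
}

fn main() {
    assert!(media_host_allowed(
        "https://api.example.com/v1",
        "https://api.example.com/media/123.ogg"
    ));
    assert!(!media_host_allowed(
        "https://api.example.com/v1",
        "https://169.254.169.254/latest/meta-data"
    ));
}
```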
* fix(wati): remove dead field, enforce streaming size cap, log errors
- Remove unused transcription config field (clippy dead code)
- Use streaming download to enforce size cap without Content-Length
- Log network errors instead of silently swallowing with .ok()?
---------
Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
* feat(mattermost): add audio transcription via TranscriptionManager
Add transcription support for Mattermost audio attachments. Routes
audio through TranscriptionManager when configured, with duration
limit enforcement and wiremock-based integration tests.
* fix(mattermost): add download size cap, HTTP status check, warn logging
Replace chained .ok()? calls in try_transcribe_audio_attachment with
explicit error handling that logs warnings on HTTP failure, non-success
status codes, and oversized files. Add MAX_MATTERMOST_AUDIO_BYTES
(25 MiB) Content-Length pre-check. Remove mp4 and webm from the
extension-only fallback in is_audio_file(). Short-circuit
with_transcription() when config.enabled is false.
* fix(mattermost): add [Voice] prefix, filter empty, fix config init
- Add [Voice] prefix to transcribed audio matching Slack/Discord
- Filter empty transcription results
- Only store transcription config on successful manager init
- Update test expectation for [Voice] prefix
---------
Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
* feat(lark): add audio message transcription
* test(channels): add three missing lark audio transcription tests
- lark_audio_skips_when_manager_none: parse_event_payload_async returns
empty when transcription_manager is None
- lark_audio_routes_through_transcription_manager: wiremock end-to-end
test proving file_key → download → whisper → ChannelMessage chain
- lark_audio_token_refresh_on_invalid_token_response: wiremock test
verifying 401 triggers token invalidation and retry
Adds a #[cfg(test)] api_base_override field to LarkChannel so wiremock
can intercept requests that normally go to hardcoded Lark/Feishu API
base URLs.
* fix(lark): add audio download size cap, event_type guard, skip disabled transcription
Add MAX_LARK_AUDIO_BYTES (25 MiB) Content-Length pre-check before
reading audio response bodies on both the normal and token-refresh
paths. Add the missing im.message.receive_v1 event_type guard in
parse_event_payload_async so non-message callbacks are rejected before
the audio branch. Short-circuit with_transcription() when
config.enabled is false.
* fix(lark): address review feedback for audio transcription
- Wire parse_event_payload_async in webhook handler (was dead code)
- Use streaming download with enforced size cap (no Content-Length bypass)
- Fix test: lark_manager_some_when_valid_config asserted is_some with enabled=false
- Fix test: add missing header to lark_audio_skips_when_manager_none payload
- Remove redundant group-mention check in audio arm (shared check covers it)
---------
Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
* fix(security): update blocked_commands_basic test after allowing python/node
After adding python3 and node to default_allowed_commands in #4338,
the test asserting they are blocked is now wrong. Replace with ruby
and perl which remain correctly blocked.
* fix(security): also fix python3 reference in pipe validation test
The command_with_pipes_validates_all_segments test also referenced
python3 in a blocked assertion. Replace with ruby.
Force explicit Gregorian year/month/day via the Datelike/Timelike traits instead of chrono's format(), which inherits the system locale (e.g. the Buddhist calendar on Thai systems). Prepend the datetime before memory context in user messages. Rename the DateTimeSection header to CRITICAL CONTEXT.
* feat(tools): add cross-channel poll creation tool
Adds a poll tool that enables cross-channel poll creation with voting
support. Changes all_tools_with_runtime return type from 3-tuple to
4-tuple to accommodate the new reaction handle.
Original PR #4243 by rareba.
* ci: retrigger CI
* feat(tools): add Firecrawl fallback for JS-heavy and bot-blocked sites
Adds optional Firecrawl API integration as a fallback when standard web
fetching fails due to JavaScript-heavy pages or bot protection. Includes
configurable API key, timeout, and domain allowlist/blocklist.
Original PR #4244 by rareba.
* ci: retrigger CI
When a downloaded binary has the wrong architecture, the previous code
attempted to execute it and surfaced a raw "Exec format error (os error 8)".
Now validate_binary reads the ELF/Mach-O header first, compares the binary
architecture against the host, and reports a clear diagnostic like:
"architecture mismatch: downloaded binary is aarch64 but this host is x86_64".
Closes #4291
- Prefer HOME env var in default_config_dir() before falling back to UserDirs::new()
- Pre-create temp_default_dir in test so is_temp_directory() can canonicalize /var -> /private/var symlink on macOS
Move supports_runtime_model_switch() guard from parse_runtime_command() entry to /model and /models match arms only, so /new is available on every channel.
Closes #4236
* feat(tools): add LLM task tool for structured JSON-only sub-calls
Add LlmTaskTool — a lightweight tool that runs a single prompt through
an LLM provider with no tool access and optionally validates the
response against a caller-supplied JSON Schema. Ideal for structured
data extraction, classification, and transformation in workflows.
- Parameters: prompt (required), schema (optional JSON Schema),
model (optional override), temperature (optional override)
- Uses configured default provider/model from root config
- Validates response JSON against schema (required fields, type checks)
- Strips markdown code fences from LLM responses before validation
- Gated by ToolOperation::Act security policy
- Registered in all_tools_with_runtime (always available)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: use non-constant value instead of approximate PI in tests
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: rareba <rareba@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(agent): add loop detection guardrail for repetitive tool calls
Introduces a LoopDetector that monitors a sliding window of recent tool
calls and detects three repetitive patterns:
1. Exact repeat — same tool+args called consecutively (default 3+)
2. Ping-pong — two tools alternating for 4+ cycles
3. No progress — same tool with different args but identical results (5+)
Each pattern escalates through Warning -> Block -> CircuitBreaker.
Configurable via [pacing] section: loop_detection_enabled (default true),
loop_detection_window_size (default 20), loop_detection_max_repeats
(default 3).
Wired into run_tool_call_loop alongside the existing time-gated
identical-output detector. Respects loop_ignore_tools exclusion list.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
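A minimal sketch of the exact-repeat pattern over a sliding window; the escalation tiers and the other two patterns are omitted, and names are illustrative (the window/repeat values are the commit's defaults):

```rust
use std::collections::VecDeque;

struct RepeatDetector {
    window: VecDeque<(String, String)>, // (tool, canonical args)
    window_size: usize,
    max_repeats: usize,
}

impl RepeatDetector {
    fn new(window_size: usize, max_repeats: usize) -> Self {
        Self { window: VecDeque::new(), window_size, max_repeats }
    }

    /// Record a call and report whether the same tool+args has now
    /// been seen `max_repeats` times in a row.
    fn record(&mut self, tool: &str, args: &str) -> bool {
        if self.window.len() == self.window_size {
            self.window.pop_front(); // evict oldest: stale calls cannot trigger
        }
        self.window.push_back((tool.to_string(), args.to_string()));
        let last = self.window.back().cloned().unwrap();
        self.window.iter().rev().take_while(|c| **c == last).count() >= self.max_repeats
    }
}

fn main() {
    let mut d = RepeatDetector::new(20, 3);
    assert!(!d.record("web_fetch", "{\"url\":\"a\"}"));
    assert!(!d.record("web_fetch", "{\"url\":\"a\"}"));
    assert!(d.record("web_fetch", "{\"url\":\"a\"}")); // third consecutive repeat
}
```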
* fix(agent): fix channel test interaction with loop detector
The max_tool_iterations channel tests use an IterativeToolProvider that
intentionally repeats identical tool calls. The loop detector (enabled by
default) fires its circuit breaker before max_tool_iterations is reached,
causing the test to fail. Disable loop detection in these two tests so
they exercise only the max_tool_iterations boundary.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(agent): address PR #4240 review — loop detector correctness and test precision
Critical fixes:
- Fix result_index/tool_calls misalignment: use enumerate() before
filter_map() so the index stays aligned with tool_calls even when
ordered_results contains None entries from skipped tool calls.
- Fix hash_value JSON key-order sensitivity: canonicalise() recursively
sorts object keys before serialisation so {"a":1,"b":2} and
{"b":2,"a":1} hash identically.
Tightened test assertions:
- ping_pong_escalates_with_more_cycles: assert Block with 5 cycles
(was loose Warning|Block|Break match).
- no_progress_escalates_to_block_and_break: assert Break at 7 calls
(was loose Block|Break match).
- no_progress_not_triggered_when_all_args_identical: assert Warning
specifically (was accepting Ok as alternative).
New tests:
- ping_pong_detects_alternation_with_varying_args (item 3)
- window_eviction_prevents_stale_pattern_detection (item 4)
- hash_value_is_key_order_independent + nested variant (item 2)
- pacing_config_serde_defaults_match_manual_default (item 5)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: rareba <rareba@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When a skill directory contains script files and `skills.allow_scripts`
is false (the default), the skill was silently skipped with only a
tracing::warn that most users never see. The LLM then reports "tool is
not available in this environment" with no guidance on how to fix it.
Now the loader emits a user-visible stderr warning that names the
skipped skill and suggests setting `skills.allow_scripts = true`.
Closes #4292
The previous substring match (e.g. "aarch64-unknown-linux") could match
both gnu and android release assets. Switch to full triples like
"aarch64-unknown-linux-gnu" so find_asset_url only selects the correct
platform binary.
Closes #4293
Cost tracking was disabled by default (enabled: false), causing the
/api/cost endpoint to always return zeros unless users explicitly
opted in. Change the default to enabled: true so cost tracking works
out of the box for all models and channels.
The prometheus crate requires AtomicU64 which is unavailable on 32-bit
ARM (armv7l/armv6l). Detect 32-bit ARM in install.sh and build with
--no-default-features, re-enabling only channel-nostr and
skill-creation (excluding observability-prometheus).
Previously, the /ws/chat handler silently ignored messages with
unrecognized types, leaving clients waiting for a response that never
comes. Now sends explicit error messages describing the expected format.
When users configure the Ollama api_url with the full endpoint path
(e.g. http://host:11434/api/chat), normalize_base_url only stripped
/api but not /api/chat, causing the final request URL to become
/api/chat/api/chat which fails on remote servers.
Closes #4342
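A sketch of the suffix handling this implies; the real normalize_base_url may cover more variants:

```rust
/// Strip a trailing /api/chat or /api so the caller can safely append
/// /api/chat exactly once.
fn normalize_base_url(url: &str) -> String {
    let trimmed = url.trim_end_matches('/');
    for suffix in ["/api/chat", "/api"] {
        if let Some(base) = trimmed.strip_suffix(suffix) {
            return base.to_string();
        }
    }
    trimmed.to_string()
}

fn main() {
    assert_eq!(normalize_base_url("http://host:11434/api/chat"), "http://host:11434");
    assert_eq!(normalize_base_url("http://host:11434/api"), "http://host:11434");
    assert_eq!(normalize_base_url("http://host:11434/"), "http://host:11434");
}
```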
The default_allowed_commands() list was missing common scripting
runtimes. After #3691 added serde defaults to AutonomyConfig, users
who omitted allowed_commands from their config silently fell back to
the restrictive default list, breaking shell tool access to Python.
Closes #4338
- Add tag-push trigger to Release Stable workflow so `git push origin v0.5.9`
auto-triggers the full pipeline (builds, Docker, crates.io, website,
Scoop, AUR, Homebrew, tweet) in one shot
- Add Homebrew Core as downstream job (was manual-only, never auto-triggered)
- Add `workflow_call` to pub-homebrew-core.yml so it can be called from
the stable release workflow
- Skip beta releases on version bump commits (prevents beta/stable race)
- Skip auto crates.io publish when stable tag exists (prevents double-publish)
- Auto-create git tag on manual dispatch so tag always exists for downstream
- Fix cut_release_tag.sh to reference correct workflow name
Closes #4231. Adds voice memo detection and transcription for Slack and Discord channels. Audio files are downloaded, transcribed via the existing transcription module, and passed as text to the LLM.
Closes #4235. Adds image and file message type handling in the Lark channel: downloads images/files via the Lark API, detects MIME types, and passes content to the model for analysis.
* feat(agent): add thinking/reasoning level control per message
Users can set reasoning depth via /think:high etc., with a resolution
hierarchy (inline > session > config > default). Six levels from Off
to Max. Adjusts temperature and the system prompt.
* fix(agent): prevent thinking level prefix from leaking across interactive turns
system_prompt was mutated in place for the first message's thinking
directive, then used as the "baseline" for restoration after each
interactive turn. This caused the first turn's thinking prefix to
persist across all subsequent turns.
Fix: save the original system_prompt before any thinking modifications
and restore from that saved copy between turns.
* feat(channels): add automatic link understanding for inbound messages
Auto-detects URLs in inbound messages, fetches content, extracts
title+summary, and enriches the message before the agent sees it.
Includes SSRF protection (rejects private IPs).
* fix(channels): use lowercased string for title extraction to prevent byte offset mismatch
extract_title used byte offsets from the lowercased HTML to index into
the original HTML. Multi-byte characters whose lowercase form has a
different byte length (e.g. İ → i̇) would produce wrong slices or panics.
Fix: extract from the lowercased string directly. Add multibyte test.
* feat(tools): enable internet access by default
Enable web_fetch, web_search, http_request, and browser tools by
default so ZeroClaw has internet access out of the box. Security
remains fully toggleable via config (set enabled = false to disable).
- web_fetch: enabled with allowed_domains = ["*"]
- web_search: enabled with DuckDuckGo (free, no API key)
- http_request: enabled with allowed_domains = ["*"]
- browser: enabled with allowed_domains = ["*"], agent_browser backend
- text_browser: remains opt-in (requires external binary)
* fix(tests): update component test for browser enabled by default
Update config_nested_optional_sections_default_when_absent to expect
browser.enabled = true, matching the new default.
Add support for defining cron jobs directly in the TOML config file via
`[[cron.jobs]]` array entries. Declarative jobs are synced to the SQLite
database at scheduler startup with upsert semantics:
- New declarative jobs are inserted
- Existing declarative jobs are updated to match config
- Stale declarative jobs (removed from config) are deleted
- Imperative jobs (created via CLI/API) are never modified
Each declarative job requires a stable `id` for merge tracking. A new
`source` column (`"imperative"` or `"declarative"`) distinguishes the
two creation paths. Shell jobs require `command`, agent jobs require
`prompt`, validated before any DB writes.
Apply exponential time decay (2^(-age/half_life), 7-day half-life) to
memory entry scores post-recall. Core memories are exempt (evergreen).
Consolidate duplicate half-life constants into a single public constant
in the decay module.
Based on PR #4266 by 5queezer with constant consolidation fix.
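The decay formula in code form, as a minimal sketch (the real module also exempts core memories and exposes the constant publicly):

```rust
/// score * 2^(-age_days / half_life_days), with a 7-day half-life.
const HALF_LIFE_DAYS: f64 = 7.0;

fn decayed_score(score: f64, age_days: f64) -> f64 {
    score * 2f64.powf(-age_days / HALF_LIFE_DAYS)
}

fn main() {
    // One half-life halves the score; age zero leaves it untouched.
    assert!((decayed_score(1.0, 7.0) - 0.5).abs() < 1e-9);
    assert!((decayed_score(1.0, 0.0) - 1.0).abs() < 1e-9);
}
```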
* fix(cron): add WhatsApp Web delivery channel with backend validation
Apply PR #4258 changes to add whatsapp/whatsapp-web/whatsapp_web match
arm in deliver_announcement, feature-gated behind whatsapp-web.
Added is_web_config() guard to bail early when the WhatsApp config is
for Cloud API mode (no session_path), preventing a confusing runtime
failure with an empty session path.
* feat(gateway): add named sessions with human-readable labels
Apply PR #4267 changes with bug fixes:
- Add get_session_name trait method so WS session_start includes the
stored name on reconnect (not just when ?name= query param is present)
- Rename API now returns 404 for non-existent sessions instead of
silently succeeding
- Empty ?name= query param on WS connect no longer clears existing name
Skill tools defined in [[tools]] sections are now registered as first-class
callable tool specs via the Tool trait, rather than only appearing as XML
in the system prompt. This enables the LLM to invoke skill tools through
native function calling.
- Add SkillShellTool for shell/script kind skill tools
- Add SkillHttpTool for http kind skill tools
- Add skills_to_tools() conversion and register_skill_tools() wiring
- Wire registration into both CLI and process_message agent paths
- Update prompt rendering to mark registered tools as callable
- Update affected tests across skills, agent/prompt, and channels
Define the contract for long-lived shared state in multi-client tool
execution, covering ownership (handle pattern), identity assignment
(daemon-provided ClientId), lifecycle (validation at registration),
isolation (per-client for security state), and reload semantics
(config hash invalidation).
Add an `allowed_rooms` field to MatrixConfig that controls which rooms
the bot will accept messages from and join invites for. When the list
is non-empty, messages from unlisted rooms are silently dropped and
room invites are auto-rejected. When empty (default), all rooms are
allowed, preserving backward compatibility.
- Config: add `allowed_rooms: Vec<String>` with `#[serde(default)]`
- Message handler: replace disabled room_id filter with allowlist check
- Invite handler: auto-accept allowed rooms, auto-reject others
- Support both canonical room IDs and aliases, case-insensitive
Parse forward_from, forward_from_chat, and forward_sender_name fields
from Telegram message updates. Prepend forwarding attribution to message
content so the LLM has context about the original sender.
Closes #4118
When vision_provider is configured in [multimodal] config, messages
containing [IMAGE:] markers are automatically routed to the specified
vision-capable provider instead of failing on the default text provider.
Closes #4119
Add a piper TTS provider that communicates with a local Piper/Coqui TTS
server via an OpenAI-compatible HTTP endpoint. This enables fully offline
voice pipelines: Whisper (STT) → LLM → Piper (TTS).
Closes #4116
When a user provides a custom `auto_approve` list in their TOML
config (e.g. to add an MCP tool), serde replaces the built-in
defaults instead of merging. This causes default safe tools like
`weather`, `calculator`, and `file_read` to lose auto-approve
status and get silently denied in non-interactive channel runs.
Add `ensure_default_auto_approve()` which merges built-in entries
into the user's list after deserialization, preserving user
additions while guaranteeing defaults are always present. Users
who want to require approval for a default tool can use
`always_ask`, which takes precedence.
Closes #4247
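A sketch of the merge described above; the default list here is just the three tools named in the commit, and the real list is longer:

```rust
/// Merge built-in auto-approve entries back into the user's list after
/// deserialization, preserving user additions.
fn ensure_default_auto_approve(user_list: &mut Vec<String>) {
    const DEFAULTS: &[&str] = &["weather", "calculator", "file_read"];
    for tool in DEFAULTS {
        if !user_list.iter().any(|t| t.as_str() == *tool) {
            user_list.push(tool.to_string());
        }
    }
}

fn main() {
    // User only listed their MCP tool; defaults are merged in and the
    // user's addition is preserved.
    let mut list = vec!["my_mcp_tool".to_string()];
    ensure_default_auto_approve(&mut list);
    assert!(list.iter().any(|t| t == "weather"));
    assert!(list.iter().any(|t| t == "my_mcp_tool"));
}
```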
Wrap `fetch_live_models_for_provider` calls in
`tokio::task::spawn_blocking` so the `reqwest::blocking::Client`
is created and dropped on a dedicated thread pool instead of
inside the async Tokio context. This prevents the
"Cannot drop a runtime in a context where blocking is not allowed"
panic when running `models refresh --provider openai`.
Closes #4253
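A sketch of the wrapping, assuming a plain reqwest::blocking call; the function name and error plumbing are illustrative:

```rust
async fn refresh_models(url: String) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
    // The blocking Client is created AND dropped on the dedicated
    // blocking thread pool, never inside the async runtime context.
    let body = tokio::task::spawn_blocking(move || {
        let client = reqwest::blocking::Client::new();
        client.get(url.as_str()).send()?.text()
    })
    .await??;
    Ok(body)
}

#[tokio::main]
async fn main() {
    if let Ok(body) = refresh_models("https://example.com".into()).await {
        println!("fetched {} bytes", body.len());
    }
}
```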
Fixed issue #4139.
Previously, slicing a string at exactly 64 bytes could land in the middle of a multi-byte UTF-8 character (e.g., Chinese characters), causing a runtime panic.
Changes:
- Replaced direct byte slicing with a safe boundary lookup using .char_indices().
- Ensures truncation always occurs at a valid character boundary at or before the 64-byte limit.
- Maintained existing hyphen-trimming logic.
Co-authored-by: loriscience <loriscience@gmail.com>
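A sketch of boundary-safe truncation to the same effect, using is_char_boundary rather than the commit's char_indices loop; the hyphen trimming is omitted:

```rust
/// Truncate at a char boundary at or before max_bytes. A direct
/// s[..64] slice can panic mid-codepoint on multi-byte UTF-8.
fn truncate_at_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = max_bytes;
    while !s.is_char_boundary(end) {
        end -= 1; // back up to the previous character boundary
    }
    &s[..end]
}

fn main() {
    let s = "会话标识符".repeat(5); // 75 bytes of 3-byte characters
    let t = truncate_at_boundary(&s, 64);
    assert!(t.len() <= 64);
    assert!(s.starts_with(t)); // clean prefix, no mid-character cut
}
```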
Explicitly call `client.encryption().backups().disable()` when backups
are not enabled, preventing the matrix_sdk_crypto crate from attempting
room key backups on every sync cycle and spamming the logs with
"Trying to backup room keys but no backup key was found" warnings.
Closes #4227
* feat(channels): add voice wake word detection channel
Add VoiceWakeChannel behind the `voice-wake` feature flag that:
- Captures audio from the default microphone via cpal
- Uses energy-based VAD to detect speech activity
- Transcribes speech via the existing transcription API (Whisper)
- Checks for a configurable wake word in the transcription
- On detection, captures the following utterance and dispatches it
as a ChannelMessage
State machine: Listening -> Triggered -> Capturing -> Processing -> Listening
Config keys (under [channels_config.voice_wake]):
- wake_word (default: "hey zeroclaw")
- silence_timeout_ms (default: 2000)
- energy_threshold (default: 0.01)
- max_capture_secs (default: 30)
Includes tests for config parsing, state machine, RMS energy
computation, and WAV encoding.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
type inference failures (create_dir_all, write, read_to_string)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): exclude voice-wake from all-features CI check
Add a `ci-all` meta-feature in Cargo.toml that includes every feature
except `voice-wake`, which requires `libasound2-dev` (ALSA) not present
on CI runners. Update the check-all-features CI job to use
`--features ci-all` instead of `--all-features`.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(channels): add Gmail Pub/Sub push notifications for real-time email
Add GmailPushChannel that replaces IMAP polling with Google's Pub/Sub
push notification system for real-time email-driven automation.
- New channel at src/channels/gmail_push.rs implementing the Channel trait
- Registers Gmail watch subscription (POST /gmail/v1/users/me/watch)
with automatic renewal before the 7-day expiry
- Handles incoming Pub/Sub notifications at POST /webhook/gmail
- Fetches new messages via Gmail History API (startHistoryId-based)
- Dispatches email messages to the agent with full metadata
- Sends replies via Gmail messages.send API
- Config: gmail_push.enabled, topic, label_filter, oauth_token,
allowed_senders, webhook_url
- OAuth token encrypted at rest via existing secret store
- Webhook endpoint added to gateway router
- 30+ unit tests covering notification parsing, header extraction,
body decoding, sender allowlist, and config serialization
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): fix extract_body_text_plain test
Gmail API sends base64url without padding. The decode_body function
converted URL-safe chars back to standard base64 but did not restore
the padding, causing STANDARD decoder to fail and falling back to
snippet. Add padding restoration before decoding.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a process-scoped Mutex that all env-var-mutating tests in
openai_codex::tests must hold. This prevents std::env::set_var /
remove_var calls from racing when Rust's test harness runs them on
parallel threads.
Affected tests:
- resolve_responses_url_prefers_explicit_endpoint_env
- resolve_responses_url_uses_provider_api_url_override
- resolve_reasoning_effort_prefers_configured_override
- resolve_reasoning_effort_uses_legacy_env_when_unconfigured
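A sketch of the serialization pattern; the lock helper and the env var name are made up for illustration:

```rust
use std::sync::{Mutex, OnceLock};

/// Process-wide lock: std::env::set_var mutates shared process state,
/// so parallel test threads must serialize around it.
fn env_lock() -> &'static Mutex<()> {
    static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
    LOCK.get_or_init(|| Mutex::new(()))
}

#[cfg(test)]
mod tests {
    use super::env_lock;

    #[test]
    fn reads_env_override() {
        // Held for the whole test body; other env-mutating tests block
        // here instead of racing.
        let _guard = env_lock().lock().unwrap();
        std::env::set_var("ZC_TEST_ENDPOINT", "http://localhost:1");
        assert_eq!(std::env::var("ZC_TEST_ENDPOINT").unwrap(), "http://localhost:1");
        std::env::remove_var("ZC_TEST_ENDPOINT");
    }
}
```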
- Use char_indices for safe UTF-8 truncation instead of byte slicing
- Replace unbounded Vec with VecDeque rolling window in load_jsonl_messages
- Add path separator validation for channel/to to prevent directory traversal
When `load_session_context = true` in `[heartbeat]`, the daemon loads the
last 20 messages from the target user's JSONL session file and prepends them
to the heartbeat task prompt before calling the LLM.
This gives the companion context — who the user is, what was last discussed —
so outreach messages feel like a natural continuation rather than a blank-slate
ping. Defaults to `false` (opt-in, no change to existing behaviour).
Key behaviours:
- Session context is re-read on every heartbeat tick (not cached at startup)
- Skips context injection if only assistant messages are present (prevents
heartbeat outputs feeding back in a loop)
- Scans sessions directory for matching JSONL files using flexible filename
matching: {channel}_{to}.jsonl, {channel}_*_{to}.jsonl, or
{channel}_{to}_*.jsonl — handles varying session key formats
- Injects file mtime as "last message ~Xh ago" so the LLM knows how long
the user has been silent
Config example:
[heartbeat]
enabled = true
interval_minutes = 120
load_session_context = true
target = "telegram"
to = "your_username"
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Three issues addressed:
Empty page: the logs page shows nothing at idle because the SSE stream
only carries ObserverEvent variants (llm_request, tool_call, error,
agent_start, agent_end). Daemon stdout and RUST_LOG tracing output go
to the terminal/log file and are never forwarded to the broadcast
channel — this is correct behaviour, not a misconfiguration. Added a
dismissible informational banner explaining what appears on the stream
and how to access tracing output (RUST_LOG + terminal).
Layout: flex-1 log entries div was missing min-h-0, which can cause
flex children to refuse to shrink below content size in some browsers.
Connection indicator: moved from the toolbar (where it cluttered the
title and controls) to a compact footer strip below the scroll area,
matching the /agent page pattern exactly.
Also added colour rules for llm_request, agent_start, agent_end,
tool_call_start event types which previously fell through to default
grey.
Risk: Low — UI layout and informational copy only, no backend change.
Does not change: SSE connection logic, event parsing, pause/resume,
type filters, or the underlying broadcast observer.
Two issues with the config editor:
Layout: the page root had no height constraint and the textarea used
min-h-[500px] resize-y, causing independent scrollbars on both the
page and the editor. Fixed by adopting the Memory/Cron flex column
pattern so the editor fills the remaining viewport height with a single
scroll surface.
Highlighting: plain textarea with no visual structure for TOML.
Added a zero-dependency layered pre-overlay technique — no new npm
packages (per CLAUDE.md anti-pattern rules). A pre element sits
absolute behind a transparent textarea; highlightToml() produces HTML
colour-coding sections, keys, strings, booleans, numbers, datetimes,
and comments via per-line regex. onScroll syncs the overlay. Tab key
inserts two spaces instead of leaving focus.
dangerouslySetInnerHTML used on the pre — content is the user's own
local config, not from the network, risk equivalent to any local editor.
Risk: Low-Medium — no API or backend change. New rendering logic
in editor only.
Does not change: save/load API calls, validation, sensitive field
masking behaviour.
The cron page used a block-flow root with no height constraint, causing
the jobs table to grow taller than the viewport and the page itself to
scroll. This was inconsistent with the Memory page pattern.
- Change page root to flex flex-col h-full matching Memory's layout
- Table wrapper gains flex-1 min-h-0 overflow-auto so it fills
remaining height and scrolls both axes internally
- Table header already has position:sticky so it pins correctly
inside the scrolling container with no CSS change needed
Risk: Low — layout only, no logic or API change.
Does not change: job CRUD, modal, catch-up toggle, run history panel.
The card heading used the key dashboard.active_channels ("Active Channels")
even though the card has a toggle between Active and All views, making the
static heading misleading. The channel list div had no height cap, causing
tall channel lists to stretch the card and break 3-column grid alignment.
- Change heading to t("dashboard.channels") — key already present in all
three locales (zh/en/tr), no i18n changes needed
- Add overflow-y-auto max-h-48 pr-1 to the channel list wrapper so it
scrolls internally instead of stretching the card
* feat: add discord history logging and search tool with persistent channel cache
* fix: remove unused channel_names field from DiscordHistoryChannel
The channel_names HashMap was declared and initialized but never used.
Channel name caching is handled via discord_memory.get()/store() with
the cache:channel_name: prefix. Remove the dead field.
* style: run cargo fmt on discord_history.rs
---------
Co-authored-by: ninenox <nisit15@hotmail.com>
* fix(web/tools): make section headings collapsible
Agent Tools and CLI Tools section headings were static divs with no
way to collapse sections the user is not interested in, making the
page unwieldy with a large tool set.
- Convert both section heading divs to button elements toggling
agentSectionOpen / cliSectionOpen state (both default open)
- Section content renders conditionally on those booleans
- ChevronsUpDown icon added (already in lucide-react bundle) that
fades in on hover and indicates collapsed/expanded state
- No change to individual tool card parameter schema expand/collapse
Risk: Low — UI state only, no API or logic change.
Does not change: search/filter behaviour, tool card expand/collapse,
CLI tools table structure.
* fix(web/tools): improve a11y and fix invalid HTML in collapsible sections
- Replace <h2> inside <button> with <span role="heading" aria-level={2}>
to fix invalid HTML (heading elements not permitted in interactive content)
- Add aria-expanded attribute to section toggle buttons for screen readers
- Add aria-controls + id linking buttons to their controlled sections
- Replace ChevronsUpDown with ChevronDown icon — ChevronsUpDown is
visually symmetric so rotating 180deg has no visible effect; ChevronDown
rotating to -90deg gives a clear directional cue
- Remove unused ChevronsUpDown import
---------
Co-authored-by: WareWolf-MoonWall <chris.hengge@gmail.com>
* fix: resolve claude-code test flakiness and update security policy
* fix: restrict `free` command to Linux-only in security policy
`free` is not available on macOS or other BSDs. Move it behind
a #[cfg(target_os = "linux")] gate so it is only included in the
default allowed commands on Linux systems.
---------
Co-authored-by: ninenox <nisit15@hotmail.com>
* feat(channels): add Gmail Pub/Sub push notifications for real-time email
Add GmailPushChannel that replaces IMAP polling with Google's Pub/Sub
push notification system for real-time email-driven automation.
- New channel at src/channels/gmail_push.rs implementing the Channel trait
- Registers Gmail watch subscription (POST /gmail/v1/users/me/watch)
with automatic renewal before the 7-day expiry
- Handles incoming Pub/Sub notifications at POST /webhook/gmail
- Fetches new messages via Gmail History API (startHistoryId-based)
- Dispatches email messages to the agent with full metadata
- Sends replies via Gmail messages.send API
- Config: gmail_push.enabled, topic, label_filter, oauth_token,
allowed_senders, webhook_url
- OAuth token encrypted at rest via existing secret store
- Webhook endpoint added to gateway router
- 30+ unit tests covering notification parsing, header extraction,
body decoding, sender allowlist, and config serialization
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
type inference failures (create_dir_all, write, read_to_string)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): fix extract_body_text_plain test
Gmail API sends base64url without padding. The decode_body function
converted URL-safe chars back to standard base64 but did not restore
the padding, causing STANDARD decoder to fail and falling back to
snippet. Add padding restoration before decoding.
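A minimal sketch of the padding step, assuming the usual base64
invariant (length divisible by four); the helper name is illustrative:

```rust
// Gmail strips '=' padding from base64url bodies; restore it so a
// STANDARD-alphabet decoder (after the char swap) accepts the input.
fn restore_padding(mut s: String) -> String {
    while s.len() % 4 != 0 {
        s.push('=');
    }
    s
}
```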
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): address critical security bugs in Gmail Pub/Sub push
- Add webhook authentication via shared secret (webhook_secret config
field or GMAIL_PUSH_WEBHOOK_SECRET env var), preventing unauthorized
message injection through the unauthenticated webhook endpoint
- Add 1MB body size limit on webhook endpoint to prevent memory exhaustion
- Fix race condition in handle_notification: hold history_id lock across
the read-fetch-update cycle to prevent duplicate message processing
when concurrent webhook notifications arrive
- Sanitize RFC 2822 headers (To/Subject) to prevent CRLF injection
attacks that could add arbitrary headers to outgoing emails
- Fix extract_email_from_header panic on malformed angle brackets by
  using rfind('>') and validating bracket ordering (see the sketch
  after this list)
- Add 30s default HTTP client timeout for all Gmail API calls,
preventing indefinite hangs
- Clone tx sender before message processing loop to avoid holding
the mutex lock across network calls
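The angle-bracket fix above, as a hedged sketch (function shape
illustrative, not the actual implementation):

```rust
// Extract "addr" from a header like `Name <addr>`. rfind('>') plus a
// bracket-order check avoids the panic on malformed input; when the
// brackets are absent or reversed, fall back to the trimmed header.
fn extract_email_from_header(header: &str) -> &str {
    match (header.find('<'), header.rfind('>')) {
        (Some(start), Some(end)) if start < end => &header[start + 1..end],
        _ => header.trim(),
    }
}
```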
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(gateway): add Live Canvas (A2UI) tool and real-time web viewer
Add a Live Canvas system that enables the agent to push rendered content
(HTML, SVG, Markdown, text) to a web-visible canvas in real time.
Backend:
- src/tools/canvas.rs: CanvasTool with render/snapshot/clear/eval actions,
backed by a shared CanvasStore (Arc<RwLock<HashMap>>) with per-canvas
broadcast channels for real-time updates
- src/gateway/canvas.rs: REST endpoints (GET/POST/DELETE /api/canvas/:id,
GET /api/canvas/:id/history, GET /api/canvas) and WebSocket endpoint
(WS /ws/canvas/:id) for real-time frame delivery
Frontend:
- web/src/pages/Canvas.tsx: Canvas viewer page with WebSocket connection,
iframe sandbox rendering, canvas switcher, frame history panel
Registration:
- CanvasTool registered in all_tools_with_runtime (always available)
- Canvas routes wired into gateway router
- CanvasStore added to AppState
- Canvas page added to App.tsx router and Sidebar navigation
- i18n keys added for en/zh/tr locales
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
type inference failures (create_dir_all, write, read_to_string)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(gateway): share CanvasStore between tool and REST API
The CanvasTool and gateway AppState each created their own CanvasStore,
so content rendered via the tool never appeared in the REST API.
Create the CanvasStore once in the gateway, pass it to
all_tools_with_runtime via a new optional parameter, and reuse the
same instance in AppState.
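A hedged sketch of the sharing pattern (type shape illustrative; the
real CanvasStore holds frames plus broadcast senders):

```rust
use std::{collections::HashMap, sync::Arc};
use tokio::sync::RwLock;

// Illustrative store shape: frame history keyed by canvas id.
type CanvasStore = Arc<RwLock<HashMap<String, Vec<String>>>>;

fn shared_canvas_store() -> (CanvasStore, CanvasStore) {
    let store: CanvasStore = Arc::new(RwLock::new(HashMap::new()));
    // Arc::clone shares one map: content rendered via the tool is the
    // same content the REST API reads.
    (Arc::clone(&store), store)
}
```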
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(gateway): address critical security and reliability bugs in Live Canvas
- Validate content_type in REST POST endpoint against allowed set,
preventing injection of "eval" frames via the REST API
- Enforce MAX_CONTENT_SIZE (256KB) limit on REST POST endpoint,
matching tool-side validation to prevent memory exhaustion
- Add MAX_CANVAS_COUNT (100) limit to prevent unbounded canvas creation
and memory exhaustion from CanvasStore
- Handle broadcast RecvError::Lagged in WebSocket handler gracefully
instead of disconnecting the client
- Make MAX_CONTENT_SIZE and ALLOWED_CONTENT_TYPES pub for gateway reuse
- Update CanvasStore::render and subscribe to return Option for
canvas count enforcement
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: rareba <rareba@users.noreply.github.com>
* feat(channels): add voice wake word detection channel
Add VoiceWakeChannel behind the `voice-wake` feature flag that:
- Captures audio from the default microphone via cpal
- Uses energy-based VAD to detect speech activity
- Transcribes speech via the existing transcription API (Whisper)
- Checks for a configurable wake word in the transcription
- On detection, captures the following utterance and dispatches it
as a ChannelMessage
State machine: Listening -> Triggered -> Capturing -> Processing -> Listening
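A sketch of those states as an enum (variant names from the commit
text; fields omitted):

```rust
// Wake-word detection states; transitions follow the arrows above.
enum WakeState {
    Listening,  // waiting for speech energy above energy_threshold
    Triggered,  // wake word found in a transcription
    Capturing,  // recording the utterance until silence_timeout_ms
    Processing, // transcribing and dispatching the captured utterance
}
```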
Config keys (under [channels_config.voice_wake]):
- wake_word (default: "hey zeroclaw")
- silence_timeout_ms (default: 2000)
- energy_threshold (default: 0.01)
- max_capture_secs (default: 30)
Includes tests for config parsing, state machine, RMS energy
computation, and WAV encoding.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
type inference failures (create_dir_all, write, read_to_string)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): exclude voice-wake from all-features CI check
Add a `ci-all` meta-feature in Cargo.toml that includes every feature
except `voice-wake`, which requires `libasound2-dev` (ALSA) not present
on CI runners. Update the check-all-features CI job to use
`--features ci-all` instead of `--all-features`.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(channels): address critical bugs in voice wake word detection
- Replace std::mem::forget(stream) with dedicated thread that holds the
cpal stream and shuts down cleanly via oneshot channel, preventing
microphone resource leaks on task cancellation
- Add config validation: energy_threshold must be positive+finite,
silence_timeout_ms >= 100ms, max_capture_secs clamped to 300
- Guard WAV encoding against u32 overflow for large audio buffers
- Add hard cap on capture_buf size to prevent unbounded memory growth
- Increase audio channel buffer from 4 to 64 slots to reduce chunk
drops during transcription API calls
- Remove dead WakeState::Processing variant that was never entered
---------
Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When a fork syncs with upstream, GitHub attributes the push to the fork
owner, causing release-beta-on-push and publish-crates-auto to run
under the wrong identity — leading to confusing notifications and
guaranteed failures (missing secrets).
Add repository guards to root jobs so the entire pipeline is skipped
on forks.
- Replace manual can_act()/record_action() with enforce_tool_operation()
to match the codebase convention used by all other tools (notion,
memory_forget, claude_code, delegate, etc.), producing consistent
error messages and avoiding logic duplication.
- Add model parameter validation to prevent URL path traversal attacks
via crafted model identifiers (e.g. "../../evil-endpoint").
- Add tests for model traversal rejection and filename sanitization.
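A hedged sketch of the model-identifier guard (exact rules assumed, not
taken from the diff):

```rust
// Reject empty, '.', and '..' path segments so a crafted model id such
// as "../../evil-endpoint" cannot traverse the provider URL path,
// while legitimate ids like "org/model-name" still pass.
fn validate_model_param(model: &str) -> Result<(), String> {
    let bad = model
        .split('/')
        .any(|seg| seg.is_empty() || seg == "." || seg == "..");
    if bad {
        return Err(format!("invalid model identifier: {model}"));
    }
    Ok(())
}
```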
SessionsSendTool was missing security gate enforcement entirely - any agent
could send messages to any session without security policy checks. Similarly,
SessionsHistoryTool had no security enforcement for reading session data.
Changes:
- Add SecurityPolicy field to SessionsHistoryTool (enforces ToolOperation::Read)
- Add SecurityPolicy field to SessionsSendTool (enforces ToolOperation::Act)
- Add session_id validation to reject empty or non-alphanumeric-only IDs
- Pass security policy from all_tools_with_runtime registration
- Add tests for empty session_id, non-alphanumeric session_id validation
The reaction tool was passing the channel adapter name (e.g. "discord",
"slack") as the first argument to Channel::add_reaction() and
Channel::remove_reaction(), but the trait signature expects a
platform-specific channel_id (e.g. Discord channel snowflake, Slack
channel ID like "C0123ABCD"). This would cause all reaction API calls
to fail at the platform level.
Fixes:
- Add required "channel_id" parameter to the tool schema
- Extract and pass channel_id (not channel_name) to trait methods
- Update tool description to mention the new parameter
- Add MockChannel channel_id capture for test verification
- Add test asserting channel_id (not name) reaches the trait
- Update all existing tests to supply channel_id
For models with small context windows (e.g. glm-4.5-air ~8K tokens),
the system prompt alone can exceed the limit. This adds:
- max_system_prompt_chars config option (default 0 = unlimited)
- compact_context now also compacts the system prompt: skips the
Channel Capabilities section and shows only tool names
- Truncation with marker when prompt exceeds the budget
Users can set `max_system_prompt_chars = 8000` in [agent] config
to cap the system prompt for small-context models.
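A minimal sketch of the truncation, assuming 0 means unlimited as
described:

```rust
// Cap the assembled system prompt at max_chars and append a visible
// marker so the model (and logs) show the prompt was truncated.
fn cap_system_prompt(prompt: String, max_chars: usize) -> String {
    if max_chars == 0 || prompt.chars().count() <= max_chars {
        return prompt;
    }
    let truncated: String = prompt.chars().take(max_chars).collect();
    format!("{truncated}\n[system prompt truncated]")
}
```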
Closes #4124
auto_approve = ["*"] was doing exact string matching, so only the
literal tool name "*" was matched. Users expecting wildcard semantics
had every tool blocked in supervised mode.
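A sketch of the corrected check (helper name illustrative):

```rust
// "*" now matches every tool; any other entry still requires an exact
// tool-name match.
fn is_auto_approved(auto_approve: &[String], tool: &str) -> bool {
    auto_approve
        .iter()
        .any(|p| p.as_str() == "*" || p.as_str() == tool)
}
```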
Also adds "prompt exceeds max length" to the context-window error
detection hints (fixes GLM/ZAI error 1261 detection).
Closes #4127
* fix(publish): add aardvark-sys version and publish it before main crate
- Add version = "0.1.0" to aardvark-sys path dependency in Cargo.toml
- Update all three publish workflows to publish aardvark-sys first
- Add aardvark-sys COPY to Dockerfile for workspace builds
- Fixes cargo publish failure: "dependency aardvark-sys does not
specify a version"
* ci: publish aardvark-sys before main crate in all publish workflows
All three crates.io publish workflows now publish aardvark-sys first,
wait for indexing, then publish the main zeroclawlabs crate.
When run via `curl | bash`, stdin is the curl pipe, so sudo cannot
prompt for a password. Redirect sudo's stdin from /dev/tty to reach
the real terminal, allowing the password prompt to work in piped
invocations.
Instead of exiting with a manual remediation step, the installer now
attempts to accept the Xcode/CLT license automatically via
`sudo xcodebuild -license accept`. Falls back to a clear error message
only if sudo fails (e.g. no terminal or password).
The output format used "{action}ed" which produced "removeed" for the
remove action. Use explicit past-tense mapping instead.
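The mapping, sketched (verbs assumed from the add/remove actions named
above):

```rust
// Explicit past-tense forms instead of blindly appending "ed".
fn past_tense(action: &str) -> &'static str {
    match action {
        "add" => "added",
        "remove" => "removed",
        _ => "processed", // fallback for any future action
    }
}
```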
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
type inference failures (create_dir_all, write, read_to_string)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(hardware): drain stdin in subprocess test to prevent broken pipe flake
The test script did not consume stdin, so SubprocessTool's stdin write
raced against the process exit, causing intermittent EPIPE failures.
Add `cat > /dev/null` to drain stdin before producing output.
* style: format subprocess test
The Xcode license test-compile was inside install_system_deps(), which
only runs when --install-system-deps is passed. On macOS the default
path skipped this entirely, so users hit `cc` exit code 69 deep in
cargo build. Move the check into the unconditional main flow so it
always fires on Darwin.
xcrun --show-sdk-path can succeed even when the Xcode/CLT license has
not been accepted, so the previous check was ineffective. Replace it
with an actual test-compilation of a trivial C file, which reliably
triggers the exit-code-69 failure when the license is pending.
Add ImageGenTool that exposes fal.ai Flux model image generation as a
standalone tool, decoupled from the LinkedIn client. The tool accepts a
text prompt, optional filename/size/model parameters, calls the fal.ai
synchronous API, downloads the result, and saves to workspace/images/.
- New src/tools/image_gen.rs with full Tool trait implementation
- New ImageGenConfig in schema.rs (enabled, default_model, api_key_env)
- Config-gated registration in all_tools_with_runtime
- Security: checks can_act() and record_action() before execution
- Comprehensive unit tests (prompt validation, API key, size enum,
autonomy blocking, tool spec)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add ReactionTool that exposes Channel::add_reaction and
Channel::remove_reaction as an agent-callable tool. Uses a
late-binding ChannelMapHandle (Arc<RwLock<HashMap>>) pattern
so the tool can be constructed during tool registry init and
populated once channels are available in start_channels.
Parameters: channel, message_id, emoji, action (add/remove).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add three new tools in src/tools/sessions.rs:
- sessions_list: lists active sessions with channel, message count, last activity
- sessions_history: reads last N messages from a session by ID
- sessions_send: appends a message to a session for inter-agent communication
All tools operate on the SessionBackend trait, using the JSONL SessionStore
by default. Registered unconditionally in all_tools_with_runtime when the
sessions directory is accessible.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(config): add configurable pacing controls for slow/local LLM workloads (#2963)
Add a new `[pacing]` config section with four opt-in parameters that
let users tune timeout and loop-detection behavior for local LLMs
(Ollama, llama.cpp, vLLM) without disabling safety features entirely:
- `step_timeout_secs`: per-step LLM inference timeout independent of
the overall message budget, catching hung model responses early.
- `loop_detection_min_elapsed_secs`: time-gated loop detection that
only activates after a configurable grace period, avoiding false
positives on long-running browser/research workflows.
- `loop_ignore_tools`: per-tool loop-detection exclusions so tools
like `browser_screenshot` that structurally resemble loops are not
counted toward identical-output detection.
- `message_timeout_scale_max`: overrides the hardcoded 4x ceiling in
the channel message timeout scaling formula.
All parameters are strictly optional with no effect when absent,
preserving full backwards compatibility.
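A hedged sketch of the section's shape (serde defaults keep every field
optional, so an absent [pacing] table changes nothing):

```rust
// All fields are opt-in; None / empty means "keep current behavior".
#[derive(Debug, Default, serde::Deserialize)]
pub struct PacingConfig {
    pub step_timeout_secs: Option<u64>,
    pub loop_detection_min_elapsed_secs: Option<u64>,
    #[serde(default)]
    pub loop_ignore_tools: Vec<String>,
    pub message_timeout_scale_max: Option<f32>,
}
```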
Closes #2963
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(config): add missing pacing fields in tests and call sites
* fix(config): add pacing arg to remaining cost-tracking test call sites
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* fix(install): detect un-accepted Xcode license before build
Add an xcrun check after verifying Xcode CLT is installed. When the
Xcode/CLT license has not been accepted, cc exits with code 69 and
the build fails with a cryptic linker error. This surfaces a clear
message telling the user to run `sudo xcodebuild -license accept`.
* chore(release): bump version to v0.5.5
Update version across all distribution manifests:
- Cargo.toml and Cargo.lock
- dist/aur/PKGBUILD and .SRCINFO
- dist/scoop/zeroclaw.json
* feat(channel): add per-channel proxy_url support for HTTP/SOCKS5 proxies
Allow each channel to optionally specify a `proxy_url` in its config,
enabling users behind restrictive networks to route channel traffic
through HTTP or SOCKS5 proxies. When set, the per-channel proxy takes
precedence over the global `[proxy]` config; when absent, the channel
falls back to the existing runtime proxy behavior.
Adds `proxy_url: Option<String>` to all 12 channel config structs
(Telegram, Discord, Slack, Mattermost, Signal, WhatsApp, Wati,
NextcloudTalk, DingTalk, QQ, Lark, Feishu) and introduces
`build_channel_proxy_client`, `build_channel_proxy_client_with_timeouts`,
and `apply_channel_proxy_to_builder` helpers that normalize proxy URLs
and integrate with the existing client cache.
Closes #3262
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(channel): add missing proxy_url fields in test initializers
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(tool): enrich delegate sub-agent system prompt and add skills_directory config key (#3046)
Sub-agents configured under [agents.<name>] previously received only the
bare system_prompt string. They now receive a structured system prompt
containing: tools section (allowed tools with parameters and invocation
protocol), skills section (from scoped or default directory), workspace
path, current date/time, safety constraints, and shell policy when shell
is in the effective tool list.
Add optional skills_directory field to DelegateAgentConfig for per-agent
scoped skill loading. When unset, falls back to default workspace
skills/ directory.
Closes #3046
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tools): add missing fields after rebase
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Adopted from #3705 by @fangxueshun with fixes:
- Added input validation for date strings (RFC 3339)
- Used chrono DateTime comparison instead of string comparison
- Added since < until validation
- Updated mem0 backend
Supersedes #3705
Fixes E0382 borrow-after-move error: wait_with_output() consumed the
child handle, making child.kill() in the timeout branch invalid.
Use kill_on_drop(true) with cmd.output() instead.
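The pattern, as a minimal tokio sketch (timeout value illustrative):

```rust
use std::time::Duration;
use tokio::process::Command;

// kill_on_drop(true) means the child is killed when the output future
// is dropped by the timeout, so no explicit child.kill() (and no
// E0382) is needed after output() consumes the handle.
async fn run_with_timeout(mut cmd: Command) -> anyhow::Result<std::process::Output> {
    cmd.kill_on_drop(true);
    let output = tokio::time::timeout(Duration::from_secs(30), cmd.output()).await??;
    Ok(output)
}
```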
Adds per-channel cost tracking via task-local context in the tool call
loop. Budget enforcement blocks further API calls when limits are
exceeded. Resolves merge conflicts with model-switch retry loop,
reply_target parameter, and autonomy level additions on master.
Supersedes #3758
Self-hosted Whisper-compatible STT provider that POSTs audio to a
configurable HTTP endpoint (e.g. faster-whisper over WireGuard). Audio
never leaves the platform perimeter.
Implemented via red/green TDD cycles:
Wave 1 — config schema: LocalWhisperConfig struct, local_whisper field
on TranscriptionConfig + Default impl, re-export in config/mod.rs
Wave 2 — from_config validation: url non-empty, url parseable, bearer_token
non-empty, max_audio_bytes > 0, timeout_secs > 0; returns Result<Self>
Wave 3 — manager integration: registration with ? propagation (not if let Ok
— credentials come directly from config, no env-var fallback; present
section with bad values is a hard error, not a silent skip)
Wave 4 — transcribe(): resolve_audio_format() extracted from validate_audio()
so LocalWhisperProvider can resolve MIME without the 25 MB cloud cap;
size check + format resolution before HTTP send
Wave 5 — HTTP mock tests: success response, bearer auth header, 503 error
33 tests (20 baseline + 13 new), all passing. Clippy clean.
Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
Slack's Block Kit supports a native `markdown` block type that accepts
standard Markdown and handles rendering. This removes the need for a
custom Markdown-to-mrkdwn converter. Messages over 12,000 chars fall
back to plain text.
Co-authored-by: Joe Hoyle <joehoyle@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(auth): add import functionality for existing OpenAI Codex auth profiles
Introduces a command-line option to import an existing `auth.json` file
for OpenAI Codex, letting users bypass the login flow. The import reads
and parses the specified JSON file, extracts the authentication tokens,
and stores them in the user's profile.
- Added `import` option to `AuthCommands` enum
- Implemented `import_openai_codex_auth_profile` function to handle the import logic
- Updated `handle_auth_command` to process the import option and validate provider compatibility
- Ensured that the import feature is exclusive to the `openai-codex` provider
* feat(auth): extract expiry from JWT in OpenAI Codex import
Extract the expiration date from the JWT access token in
`import_openai_codex_auth_profile`, replacing the hardcoded expiry with
a dynamic value derived from the token itself.
- Added `extract_expiry_from_jwt` function to handle JWT expiration extraction
- Updated `TokenSet` to use the extracted expiration date instead of a static value
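A hedged sketch of the expiry extraction (error handling simplified; no
signature verification is implied, the token is already trusted
locally):

```rust
use base64::Engine;

// Decode the JWT's payload segment and read the `exp` claim
// (unix seconds).
fn extract_expiry_from_jwt(token: &str) -> Option<i64> {
    let payload_b64 = token.split('.').nth(1)?;
    let bytes = base64::engine::general_purpose::URL_SAFE_NO_PAD
        .decode(payload_b64)
        .ok()?;
    let json: serde_json::Value = serde_json::from_slice(&bytes).ok()?;
    json.get("exp")?.as_i64()
}
```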
- Add ThemeContext with light/dark/system theme support
- Migrate all hardcoded colors to CSS variables
- Add SettingsModal for theme customization
- Add font loader for dynamic font selection
- Add i18n support for Chinese and Turkish locales
- Fix accessibility: add aria-live to pairing error message
Co-authored-by: nanyuantingfeng <nanyuantingfeng@163.com>
* fix(config): add missing WhatsApp Web policy config keys (mode, dm_policy, group_policy, self_chat_mode)
* fix(onboard): add missing WhatsApp policy fields to wizard struct literals
The new mode, dm_policy, group_policy, and self_chat_mode fields added
to WhatsAppConfig need default values in the onboard wizard's struct
initializers to avoid E0063 compilation errors.
The channel path in `src/channels/mod.rs` was passing `None` as the
`model_switch_callback` to `run_tool_call_loop()`, which meant model
switching via the `model_switch` tool was silently ignored in channel
mode.
Wire the callback in following the same pattern as the CLI path:
- Pass `Some(model_switch_callback.clone())` instead of `None`
- Wrap the tool call loop in a retry loop
- Handle `ModelSwitchRequested` errors by re-creating the provider
with the new model and retrying
Fixes #4107
Replace `tokio::task::spawn_blocking()` with plain `std::thread::Builder`
OS threads in all PostgresMemory trait methods. The sync `postgres` crate
(v0.19.x) internally calls `Runtime::block_on()`, which panics when called
from Tokio's blocking pool threads in recent Tokio versions. Plain OS threads
have no runtime context, so the nested `block_on` succeeds.
This matches the pattern already used in `PostgresMemory::initialize_client()`,
which correctly used `std::thread::Builder` and never exhibited this bug.
A new `run_on_os_thread` helper centralizes the pattern: spawn an OS thread,
run the closure, and bridge the result back via a `tokio::sync::oneshot` channel.
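A sketch of that helper, assuming anyhow for error bridging:

```rust
// Run a blocking closure on a plain OS thread (no Tokio runtime
// context, so the postgres crate's internal block_on succeeds) and
// await the result over a oneshot channel.
async fn run_on_os_thread<T, F>(f: F) -> anyhow::Result<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = tokio::sync::oneshot::channel();
    std::thread::Builder::new()
        .name("pg-blocking".into())
        .spawn(move || {
            let _ = tx.send(f());
        })?;
    Ok(rx.await?)
}
```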
Fixes #4101
Implement start_typing/stop_typing for Slack using the Assistants API
assistant.threads.setStatus method. Tracks thread context from
assistant_thread_started events and inbound messages, then sets
"is thinking..." status during processing. Status auto-clears when
the bot sends a reply via chat.postMessage.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Implements the Weather integration as a native Rust Tool trait
implementation, consistent with the existing first-party tool
architecture (no WASM/plugin layer required).
- Add src/tools/weather_tool.rs with full WeatherTool impl
- Fetches from wttr.in ?format=j1 (no API key, global coverage)
- Supports city names (any language/script), IATA airport codes,
GPS coordinates, postal/zip codes, domain-based geolocation
- Metric (°C, km/h, mm) and imperial (°F, mph, in) units
- Current conditions + 0-3 day forecast with hourly breakdown
- Graceful error messages for unknown/invalid locations
- Respects runtime proxy config via apply_runtime_proxy_to_builder
- 36 unit tests: schema, URL building, param validation, formatting
- Register WeatherTool unconditionally in all_tools_with_runtime
(no API key needed, no config gate — same pattern as CalculatorTool)
- Flip integrations registry Weather entry from ComingSoon to Available
Closes #<issue>
Register DeepMyst (https://deepmyst.com) as an OpenAI-compatible
provider with Bearer auth and DEEPMYST_API_KEY env var support.
Aliases: "deepmyst", "deep-myst".
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The builder used sed to remove crates/robot-kit from [workspace].members
because that path was not copied into the image. Cargo.lock is still
generated for the full workspace (including zeroclaw-robot-kit), so the
manifest and lockfile disagreed. cargo build --release --locked then
tried to rewrite Cargo.lock and failed with "cannot update the lock file
... because --locked was passed" (commonly hit when
ZEROCLAW_CARGO_FEATURES includes memory-postgres).
Copy crates/robot-kit/ into the image and drop the sed step so the
workspace matches the committed lockfile.
Made-with: Cursor
Co-authored-by: lokinh <locnh@uniultra.xyz>
* fix(config): prevent test suite from clobbering active_workspace.toml
Refactor persist_active_workspace_config_dir() to accept the default
config directory as an explicit parameter instead of reading HOME
internally. This eliminates a hidden dependency on process-wide
environment state that caused test-suite runs to overwrite the real
user's active_workspace.toml with a stale temp-directory path.
The temp-directory guard is now unconditional (previously gated behind
cfg(not(test))). It rejects writes only when a temp config_dir targets
a non-temp default location, so test-to-test writes within temp dirs
still succeed.
Closes #4117
* fix: remove needless borrow on default_config_dir parameter
---------
Co-authored-by: lamco-office <office@lamco.io>
The channel validation in `validate_announce_delivery` was missing `qq`,
causing API-created cron jobs with QQ delivery to be rejected.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(mcp): wire MCP tools into WebSocket chat path and gateway /api/tools
Agent::from_config() did not initialize MCP tools because it was
synchronous and MCP connection requires async. The gateway tool
registry built for /api/tools also missed MCP tools for the same
reason.
Changes:
- Make Agent::from_config() async so it can call McpRegistry::connect_all()
- Add MCP tool initialization (both eager and deferred modes) to
from_config(), following the same pattern used in loop_.rs CLI/webhook paths
- Add MCP tool initialization to the gateway's tool registry so
/api/tools reflects MCP tools
- Update all three call sites (run(), handle_socket, test) to await
Closes #4042
* fix: merge master and fix formatting
* fix: remove underscore prefix from used bindings (clippy)
* feat(hardware): add RPi GPIO, Aardvark I2C/SPI/GPIO, and hardware plugin system
Extends the hardware subsystem with three clusters of functionality,
all feature-gated (hardware / peripheral-rpi) with no impact on default builds.
Raspberry Pi native support:
- src/hardware/rpi.rs: board self-discovery (model, serial, revision),
sysfs GPIO pin read/write, and ACT LED control
- scripts/99-act-led.rules: udev rule for non-root ACT LED access
- scripts/deploy-rpi.sh, scripts/rpi-config.toml, scripts/zeroclaw.service:
one-shot deployment helper and systemd service template
Total Phase Aardvark USB adapter (I2C / SPI / GPIO):
- crates/aardvark-sys/: new workspace crate with FFI bindings loaded at
runtime via libloading; graceful stub fallback when .so is absent or
arch mismatches (Rosetta 2 detection)
- src/hardware/aardvark.rs: AardvarkTransport implementing Transport trait
- src/hardware/aardvark_tools.rs: agent tools i2c_scan, i2c_read,
i2c_write, spi_transfer, gpio_aardvark
- src/hardware/datasheet.rs: datasheet search/download for detected devices
- docs/aardvark-integration.md, examples/hardware/aardvark/: guide + examples
Hardware plugin / ToolRegistry system:
- src/hardware/tool_registry.rs: ToolRegistry for hardware module tool sets
- src/hardware/loader.rs, src/hardware/manifest.rs: manifest-driven loader
- src/hardware/subprocess.rs: subprocess execution helper for board I/O
- src/gateway/hardware_context.rs: POST /api/hardware/reload endpoint
- src/hardware/mod.rs: exports all new modules; merge_hardware_tools and
load_hardware_context_prompt helpers
Integration hooks (minimal surface):
- src/hardware/device.rs: DeviceKind::Aardvark, DeviceRuntime::Aardvark,
has_aardvark / resolve_aardvark_device on DeviceRegistry
- src/hardware/transport.rs: TransportKind::Aardvark
- src/peripherals/mod.rs: gate create_board_info_tools behind hardware feature
- src/agent/loop_.rs: TOOL_CHOICE_OVERRIDE task-local for Anthropic provider
- src/providers/anthropic.rs: read TOOL_CHOICE_OVERRIDE; add tool_choice field
- Cargo.toml: add aardvark-sys to workspace and as dependency
- firmware/zeroclaw-nucleo/: update Cargo.toml and Cargo.lock
Non-goals:
- No changes to agent orchestration, channels, providers, or security policy
- No new config keys outside existing [hardware] / [peripherals] sections
- No CI workflow changes
Risk: Low. All new paths are feature-gated; aardvark.so loads at runtime
only when present. No schema migrations or persistent state introduced.
Rollback: revert this single commit.
* fix(hardware): resolve clippy and rustfmt CI failures
- struct_excessive_bools: allow on DeviceCapabilities (7 bool fields needed)
- unnecessary_debug_formatting: use .display() instead of {:?} for paths
- stable_sort_primitive: replace .sort() with .sort_unstable() on &str slices
* fix(hardware): add missing serial/uf2/pico modules declared in mod.rs
cargo fmt was exiting with code 1 because mod.rs declared pub mod serial,
uf2, pico_flash, pico_code but those files were missing from the branch.
Also apply auto-formatting to loader.rs.
* fix(hardware): apply rustfmt 1.92.0 formatting (matches CI toolchain)
* docs(scripts): add RPi deployment and interaction guide
* push
* feat(firmware): add initial Pico firmware and serial device handling
- Introduced main.py for ZeroClaw Pico firmware with a placeholder for the MicroPython implementation.
- Added binary UF2 file for Pico deployment.
- Implemented serial device enumeration and validation in the hardware module, enhancing security by restricting allowed serial paths.
- Updated related modules to integrate new serial device functionality.
---------
Co-authored-by: ehushubhamshaw <eshaw1@wpi.edu>
When the gateway security guard blocks a public bind address, the error
message now mentions the Docker use case and provides clear instructions
for connecting from Docker containers.
Closes #4086
* fix(approval): auto-approve read-only tools in non-interactive mode
Add web_search_tool, web_fetch, calculator, glob_search, content_search,
and image_info to the default auto_approve list. These are read-only tools
with no side effects that were being silently denied in channel mode
(Telegram, Slack, etc.) because the non-interactive ApprovalManager
auto-denies any tool not in auto_approve when autonomy != full.
Closes #4083
* fix: remove duplicate default_otp_challenge_max_attempts function
PR #3921 accidentally introduced a duplicate definition of
default_otp_challenge_max_attempts() in config/schema.rs, causing
compilation to fail on master (E0428: name defined multiple times).
* feat(memory): add mem0 (OpenMemory) backend integration
- Implement Mem0Memory struct with full Memory trait
- Add history() audit trail, recall_filtered() with time/metadata filters
- Add store_procedural() for conversation trace extraction
- Add ProceduralMessage type to Memory trait with default no-op
- Feature-gated behind `memory-mem0` flag
- 9 unit tests covering edge cases
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style: apply cargo fmt
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(memory): add extraction_prompt config, deploy scripts, and timing instrumentation
- Add `extraction_prompt` field to `Mem0Config` for custom LLM fact
extraction prompts (e.g. Cantonese/Chinese content), with
`MEM0_EXTRACTION_PROMPT` env var fallback
- Pass `custom_instructions` in mem0 store requests so the server
uses the client-supplied prompt over its default
- Add timing instrumentation to channel message pipeline
(mem_recall_ms, elapsed_before_llm_ms, llm_call_ms, total_ms)
- Add `deploy/mem0/` with self-hosted mem0 + reranker GPU server
scripts, fully configurable via environment variables
- Update config reference docs (EN, zh-CN, VI) with `[memory.mem0]`
subsection
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: remove accidentally staged worktree from index
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(gemini): use default chat() for prompt-guided tool calling and add multimodal vision support
The Gemini provider's chat() override bypassed the default trait
implementation that injects tool definitions into the system prompt for
providers without native tool calling. This caused the agent loop to see
tool definitions in context but never actually invoke them, resulting in
hallucinated tool calls (e.g. claiming "Stored" without calling
memory_store).
Remove the broken chat() override so the default prompt-guided fallback
in the Provider trait handles tool injection correctly. Add an explicit
capabilities() declaration (native_tool_calling: false, vision: true).
Also add multimodal support: convert Part from a plain struct to an
untagged enum with Text and Inline variants, and add build_parts() to
extract [IMAGE:data:...] markers as Gemini inline_data parts.
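A hedged sketch of the untagged enum (field casing per Gemini's
camelCase JSON; exact names assumed):

```rust
// serde(untagged) picks the variant from the payload shape, so a part
// is either plain text or inline data without a discriminator field.
#[derive(serde::Serialize, serde::Deserialize)]
#[serde(untagged)]
enum Part {
    Text { text: String },
    Inline {
        #[serde(rename = "inlineData")]
        inline_data: InlineData,
    },
}

#[derive(serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct InlineData {
    mime_type: String, // e.g. "image/png"
    data: String,      // base64-encoded bytes
}
```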
Includes 14 new tests covering capabilities, Part serialization,
build_parts edge cases, and role-mapping behavior. Removes unused
ChatResponse import.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test(gemini): move capabilities tests to component level and add tool conversion test
Move the 3 capabilities tests (native_tool_calling, vision,
supports_native_tools) from the inline module to
tests/component/gemini_capabilities.rs since they exercise the public
Provider trait contract through the factory. Add a new
convert_tools_returns_prompt_guided test verifying the agent loop will
receive PromptGuided payload for Gemini.
Private internals tests (Part serialization, build_parts, role mapping)
remain inline since those types are not publicly exported.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style(gemini): fix cargo fmt formatting
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(gemini): add prompt_caching field to capabilities declaration
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: myclaw <myclaw@myclaws-MacBook-Air.local>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The `SafetySection` in `SystemPromptBuilder` always hardcoded
"Do not run destructive commands without asking" and "Do not bypass
oversight or approval mechanisms" regardless of the configured
autonomy level. This caused the gateway WebSocket path (web interface)
to instruct the LLM to simulate approval dialogs even when
`autonomy.level = "full"`.
PRs #3955/#3970/#3975 fixed the channel dispatch path
(`build_system_prompt_with_mode_and_autonomy`) but missed the
`Agent::from_config` → `SystemPromptBuilder` path used by
`gateway/ws.rs`.
Changes:
- Add `autonomy_level` field to `PromptContext`
- Rewrite `SafetySection::build()` to conditionally include/exclude
approval instructions based on autonomy level, matching the logic
already present in `build_system_prompt_with_mode_and_autonomy`
- Add `autonomy_level` field to `Agent` struct and `AgentBuilder`
- Pass `config.autonomy.level` through `Agent::from_config`
- Add tests for full/supervised autonomy safety section behavior
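The conditional, as a minimal sketch (level strings assumed to match
the config's autonomy.level values):

```rust
// Approval-dialog language is emitted only below full autonomy.
fn build_safety_section(autonomy_level: &str) -> String {
    let mut s = String::from("## Safety\n");
    if autonomy_level != "full" {
        s.push_str("- Do not run destructive commands without asking\n");
        s.push_str("- Do not bypass oversight or approval mechanisms\n");
    }
    s
}
```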
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The QQ channel WebSocket loop did not handle incoming Ping frames,
causing the server to consider the connection dead and drop it. Add a
Ping handler that replies with Pong, keeping the connection alive.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The Claude Code CLI only supports temperatures 0.7 and 1.0, but
internal subsystems (memory consolidation, context summarizer) use
lower values like 0.1 and 0.2. Previously the provider rejected these
with a hard error, triggering retries and WARN-level log noise.
Clamp to the nearest supported value instead, since the CLI ignores
temperature anyway.
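The clamp, sketched:

```rust
// Snap any requested temperature to whichever of the CLI's two
// supported values (0.7, 1.0) is nearer; ties go to 0.7.
fn clamp_temperature(t: f32) -> f32 {
    if (t - 0.7).abs() <= (t - 1.0).abs() {
        0.7
    } else {
        1.0
    }
}
```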
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Expand the minimal .editorconfig to include comprehensive settings for
all file types used in the project:
- Add root = true declaration
- Add standard settings: end_of_line, charset, trim_trailing_whitespace,
insert_final_newline
- Add Rust-specific settings (indent_size = 4, max_line_length = 100)
to match rustfmt.toml
- Add Markdown settings (preserve trailing whitespace for hard breaks)
- Add TOML, YAML, Python, Shell script, and JSON settings
This ensures consistent editor behavior across all contributors and
matches the project's formatting standards.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
* fix(docker): default CMD to daemon instead of gateway
- Change default Docker CMD from gateway to daemon in both dev and release stages
- gateway only starts the HTTP/WebSocket server — channel listeners (Matrix, Telegram, Discord, etc.) are never spawned
- daemon starts the full runtime: gateway + channels + heartbeat + scheduler
- Users who configure channels in config.toml and run the default image get no response because the channel sync loops never start
* fix(config): add challenge_max_attempts field to OtpConfig
Add missing challenge_max_attempts field to support OTP challenge
attempt limiting. This field allows users to configure the maximum
number of OTP challenge attempts before lockout.
Fixes #3919
---------
Co-authored-by: Jah-yee <jah.yee@outlook.com>
Co-authored-by: Jah-yee <jahyee@sparklab.ai>
The /memory data grid grew unboundedly with table rows, pushing the
horizontal scrollbar to the very bottom of a tall page and making it
inaccessible without scrolling all the way down first.
- Layout: change outer shell from min-h-screen to h-screen +
overflow-hidden, and add min-h-0 to <main> so flex-1 overflow-y-auto
actually clamps at the viewport boundary instead of growing infinitely.
- Memory page: switch root div to flex-col h-full so it fills the
bounded main area; give the glass-card table wrapper flex-1 min-h-0
overflow-auto so it consumes remaining space and exposes both
scrollbars without any page-level scrolling required.
- index.css: pin .table-electric thead th with position:sticky / top:0
and a matching opaque background so column headers stay visible
during vertical scroll inside the bounded card.
The result behaves like a bounded iframe: the table fills the available
screen, rows scroll vertically, wide columns scroll horizontally, and
both scrollbars are always reachable.
* feat(slack): implement reaction support with sanitized error responses
Add add_reaction() and remove_reaction() for Slack channel, with
unicode-to-Slack emoji mapping, idempotent error handling, and
sanitized API error responses matching the pattern used by
chat.postMessage.
Based on #4089 by @joehoyle, with sanitize_api_error() applied to
reaction error paths for consistency with existing Slack methods.
Supersedes #4089
* chore(deps): bump rustls-webpki to 0.103.10 (RUSTSEC-2026-0049)
Remove the "How it works (short)" ASCII diagram section from
all 31 README files (English + 30 translations).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Expand README/docs navigation to include Korean, Tagalog, German, Arabic, Hindi, and Bengali locale entries. Add canonical locale hub and summary files for each new language under docs/i18n/.
Update i18n index/coverage metadata to reflect hub-level support and keep language discovery consistent across root docs entry points.
Replace the shell-based unzip extraction with the zip crate for
cross-platform support. Use reqwest::Url for proper URL parsing,
add www.clawhub.ai and clawhub: shorthand support, fix the download
API URL, add ZIP path traversal protection, size limits, rate-limit
handling, and SKILL.toml fallback generation.
Supersedes #4043
Closes #4022
Resolves merge conflicts from PR #4064. Uses typed DeliveryConfig in
CronAddBody and passes delivery directly to add_shell_job_with_approval
and add_agent_job instead of post-creation patching. Preserves master's
richer API fields (session_target, model, allowed_tools, delete_after_run).
* feat(providers): add Avian as a named provider
Add Avian (https://avian.io) as a first-class OpenAI-compatible provider
with bearer auth via AVIAN_API_KEY. Registers the provider in the factory,
credential resolver, provider list, onboard wizard, and docs.
Models: deepseek/deepseek-v3.2, moonshotai/kimi-k2.5, z-ai/glm-5,
minimax/minimax-m2.5.
* style: fix rustfmt formatting in wizard.rs curated models
Collapse short GLM-5 tuple onto single line to satisfy cargo fmt.
* fix: add none-backend test and sync localized docs
- Add memory_backend parameter to scaffold_workspace so it can
conditionally skip MEMORY.md creation and adjust AGENTS.md guidance
when memory.backend = "none"
- Add test scaffold_none_backend_disables_memory_guidance_and_skips_memory_md
- Sync Avian provider entry to all 6 localized providers-reference.md
files (el, fr, ja, ru, vi, zh-CN)
- Bump providers-reference.md date to March 9, 2026
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test(onboard): add explicit Avian assertions in wizard helper tests
Add targeted assertions for the Avian provider branches to prevent
silent regressions, as requested in CodeRabbit review feedback:
- default_model_for_provider("avian") => "deepseek/deepseek-v3.2"
- curated_models_for_provider("avian") includes all 4 catalog entries
- supports_live_model_fetch("avian") => true
- models_endpoint_for_provider("avian") => Avian API URL
- provider_env_var("avian") => "AVIAN_API_KEY"
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: fix rustfmt formatting in wizard.rs scaffold_workspace
Break scaffold_workspace function signature and .await.unwrap() chains
across multiple lines to comply with rustfmt max line width.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Containers always run Linux, so only GNU stat (-c%s) is needed.
The BSD fallback (stat -f%z) caused shell arithmetic errors under
podman when the fallback syntax was evaluated.
Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
ARM64 Linux (musl and Android) targets were using the default 2MB stack,
which is insufficient for the 126 JsonSchema derives and causes silent
daemon crashes due to stack overflow. x86_64 and Windows already had 8MB
overrides.
Closes #3537
Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
* Sync jira tool description between .rs and en.toml
Replace multi-line operational guide in en.toml with the same one-liner
already in jira_tool.rs description(), matching the pattern used by all
other tools where both sources are in sync.
* Add myself action to jira tool for credential verification
* Add tests for myself action in jira tool
* Review and fix list_projects action added to jira tool
- Fix doc comment: update action count from four to five and add missing
myself entry
- Remove redundant statuses_url variable (was identical to url)
The list_projects action fetches all projects with their issue types,
statuses, and assignable users by combining /rest/api/3/project,
per-project /statuses, and /user/assignable/multiProjectSearch endpoints.
* Remove hardcoded project-specific statuses from shape_projects
Replace fixed known_order list (which included project-specific statuses
like 'Collecting Intel', 'Design', 'Verification') with a simple
alphabetical sort. Any Jira project can use arbitrary status names so
hardcoding an order is not applicable universally.
* Fix list_projects: bounded concurrency, error surfacing, and output shape
- Use tokio::task::JoinSet with STATUS_CONCURRENCY=5 to fetch per-project
statuses concurrently instead of sequentially, bounding API blast radius
- Surface user fetch errors: non-2xx and JSON parse failures now bail
instead of silently falling back to empty vec
- Surface per-project status JSON parse errors instead of swallowing them
with unwrap_or_else
- Move users to top-level output {projects, users} so they are not
duplicated across every project entry
* fix(tool): apply rustfmt formatting to jira_tool.rs
* feat(tunnel): add Pinggy tunnel support with configuration options
* feat(pinggy): update Pinggy tunnel configuration to remove domain field and improve SSH command handling
* feat(pinggy): add encryption and decryption for Pinggy tunnel token in config
* feat(pinggy): enhance region configuration for Pinggy tunnel with detailed options and validation
* feat(pinggy): enhance region validation and streamline output handling in Pinggy tunnel
* fix(pinggy): resolve clippy and fmt warnings
---------
Co-authored-by: moksh gupta <moksh.gupta@linktoany.com>
Add full rich media send/receive support using unified [TYPE:target] markers
(aligned with Telegram). Register QQ as a cron announcement delivery channel.
- Media upload with SHA256-based caching and TTL
- Attachment download to workspace with all types supported
- Voice: prefer voice_wav_url (WAV), inject QQ ASR transcription
- File uploads include file_name for proper display in QQ client
- msg_seq generation and reply rate-limit tracking
- QQ delivery instructions in system prompt
- Register QQ in cron scheduler and tool description
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: preserve session context across Slack messages when thread_replies=false
When thread_replies=false, inbound_thread_ts() was falling back to each
message's own ts, giving every message a unique conversation key and
breaking multi-turn context. Now top-level messages get thread_ts=None
when threading is disabled, so all messages from the same user in the
same channel share one session.
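A hedged sketch of the keying rule (signature illustrative):

```rust
fn inbound_thread_ts(
    msg_thread_ts: Option<&str>,
    msg_ts: &str,
    thread_replies: bool,
) -> Option<String> {
    match msg_thread_ts {
        // Replies inside an existing thread keep their thread key.
        Some(ts) => Some(ts.to_string()),
        // Threading on: each top-level message anchors its own thread.
        None if thread_replies => Some(msg_ts.to_string()),
        // Threading off: no per-message key, so all top-level messages
        // in a channel share one session.
        None => None,
    }
}
```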
Closes #4052
* chore: ignore RUSTSEC-2024-0384 (unmaintained instant crate via nostr)
* fix: normalize 5-field cron weekday numbers to match standard crontab
The cron crate uses 1=Sun,2=Mon,...,7=Sat while standard crontab uses
0/7=Sun,1=Mon,...,6=Sat. This caused '1-5' to mean Sun-Thu instead of
Mon-Fri. Add a normalization step when converting 5-field expressions
to 6-field so user-facing semantics match standard crontab behavior.
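A hedged sketch of the shift (named weekdays pass through; step
suffixes are not handled in this sketch):

```rust
// Map crontab weekdays (0/7=Sun..6=Sat) onto the cron crate's scheme
// (1=Sun..7=Sat), normalizing each bound of lists and ranges, so
// "1-5" keeps meaning Mon-Fri.
fn normalize_weekdays(field: &str) -> String {
    field
        .split(',')
        .map(|item| {
            item.split('-')
                .map(|part| match part.parse::<u8>() {
                    Ok(d) => ((d % 7) + 1).to_string(),
                    Err(_) => part.to_string(), // "*", "MON", ...
                })
                .collect::<Vec<_>>()
                .join("-")
        })
        .collect::<Vec<_>>()
        .join(",")
}
```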
Closes #4049
* chore: ignore RUSTSEC-2024-0384 (unmaintained instant crate via nostr)
* fix(provider): replace overall timeout with per-read timeout for Codex streams
The 120-second overall HTTP timeout was killing SSE streams mid-flight
when GPT-5.4 reasoning responses exceeded that duration. Replace with
a 300-second per-read timeout that only fires when no data arrives,
allowing long-running streams to complete while still detecting stalled
connections.
Closes #3786
* chore(deps): bump aws-lc-rs to fix RUSTSEC-2026-0044/0048
Update aws-lc-rs 1.16.1 → 1.16.2 (aws-lc-sys 0.38.0 → 0.39.0) to
resolve security advisory for X.509 Name Constraints Bypass.
Update version across all distribution manifests:
- Cargo.toml + Cargo.lock
- dist/scoop/zeroclaw.json
- dist/aur/PKGBUILD + .SRCINFO (catch up from 0.4.3)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
`zeroclaw skills install https://clawhub.ai/owner/skill` previously
failed because is_git_source() treated all https:// URLs as git repos.
ClawHub is a web registry, not a git host.
- Add is_clawhub_source() to detect clawhub.ai URLs
- Add clawhub_slug() to extract the skill name from the URL path
- Add install_clawhub_skill_source() to download via the ClawHub
download API and extract the ZIP into the skills directory
- Exclude clawhub.ai URLs from git source detection
- Security audit runs on downloaded skills as with git installs
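The detection, sketched (accepted URL shapes assumed from the list
above):

```rust
// ClawHub is a web registry, not a git host: match its URL forms
// before the generic https:// git-source check runs.
fn is_clawhub_source(src: &str) -> bool {
    src.starts_with("clawhub:")
        || src.starts_with("https://clawhub.ai/")
        || src.starts_with("https://www.clawhub.ai/")
}
```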
Closes #4022
Add the missing interrupt_on_new_message field to MatrixConfig and wire
it through InterruptOnNewMessageConfig so Matrix behaves consistently
with Telegram, Slack, Discord, and Mattermost.
Closes #4058
When a request exceeds a model's context window and there is no
conversation history to truncate (e.g. system prompt alone is too
large), bail immediately with an actionable error message instead of
wasting all retry attempts on the same unrecoverable error.
Previously, the retry loop would attempt truncation, find nothing to
drop (only system + one user message), then fall through to the normal
retry logic which classified context window errors as retryable. This
caused 3 identical failing API calls for a single "hello" message.
The fix adds an early exit in all three chat methods (chat_with_history,
chat_with_tools, chat) when truncate_for_context returns 0 dropped
messages, matching the existing behavior in chat_with_system.
Fixes #4044
* feat(config): add google workspace operation allowlists
* docs(superpowers): link google workspace operation inventory sources
* docs(superpowers): verify starter operation examples
* fix(google_workspace): remove duplicate credential/audit blocks, fix trim in allowlist check, add duplicate-methods test
- Remove the duplicated credentials_path, default_account, and audit_log
blocks that were copy-pasted into execute() — they were idempotent but
misleading and would double-append --account args on every call.
- Trim stored service/resource/method values in is_operation_allowed() to
match the trim applied during Config::validate(), preventing a mismatch
where a config entry with surrounding whitespace would pass validation but
never match at runtime.
- Add google_workspace_allowed_operations_reject_duplicate_methods_within_entry
test to cover the duplicate-method validation path that was implemented but
untested.
* fix(google_workspace): close sub_resource bypass, trim allowed_services at runtime, mark spec implemented
- HIGH: extract and validate sub_resource before the allowlist check;
is_operation_allowed() now accepts Option<&str> for sub_resource and
returns false (fail-closed) when allowed_operations is non-empty and
a sub_resource is present — prevents nested gws calls such as
`drive/files/permissions/list` from slipping past a 3-segment policy
- MEDIUM: runtime allowed_services check now uses s.trim() == service,
matching the trim() applied during config validation
- LOW: spec status updated to Implemented; stale "does not currently
support method-level allowlists" line removed
- Added test: operation_allowlist_rejects_sub_resource_when_operations_configured
* docs(google_workspace): document sub_resource limitation and add config-reference entries
Spec updates (superpowers/specs):
- Semantics section: note that sub_resource calls are denied fail-closed when
allowed_operations is configured
- Mental model: show both 3-segment and 4-segment gws command shapes; explain
that 4-segment commands are unsupported with allowed_operations in this version
- Runtime enforcement: correct the validation order to match the implementation
(sub_resource extracted before allowlist check, budget charged last)
- New section: Sub-Resource Limitation — documents impact, operator workaround,
and confirms the deny is intentional for this slice
- Follow-On Work: add sub_resource config model extension as item 1
Config reference updates (all three locales):
- Add [google_workspace] section with top-level keys, [[allowed_operations]]
sub-table, sub-resource limitation note, and TOML example
* fix(docs): add classroom and events to allowed_services list in all config-reference locales
* feat(google_workspace): extend allowed_operations to support sub_resource for 4-segment gws commands
All Gmail operations use gws gmail users <sub_resource> <method>, not the flat
3-segment shape. Without sub_resource support in allowed_operations, Gmail could
not be scoped at all, making the email-assistant use case impossible.
Config model:
- Add optional sub_resource field to GoogleWorkspaceAllowedOperation
- An entry without sub_resource matches 3-segment calls (Drive, Calendar, etc.)
- An entry with sub_resource matches only calls with that exact sub_resource value
- Duplicate detection updated to (service, resource, sub_resource) key
Runtime:
- Remove blanket sub_resource deny; is_operation_allowed now matches on all four
dimensions including the optional sub_resource (see the sketch below)
Tests:
- Add operation_allowlist_matches_gmail_sub_resource_shape
- Add operation_allowlist_matches_drive_3_segment_shape
- Add rejects_operation_with_unlisted_sub_resource
- Add google_workspace_allowed_operations_allow_same_resource_different_sub_resource
- Add google_workspace_allowed_operations_reject_invalid_sub_resource_characters
- Add google_workspace_allowed_operations_deserialize_without_sub_resource
- Update all existing tests to use correct gws command shapes
Docs:
- Spec: correct Gmail examples throughout; remove Sub-Resource Limitation section;
update data model, validation rules, example use case, and follow-on work
- Config-reference (en, vi, zh-CN): add sub_resource field to allowed_operations
table; update Gmail examples to correct 4-segment shapes
Platform:
- email-assistant SKILL.md: update allowed_operations paths to gmail/users/* shape
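A minimal sketch of the four-dimension match, assuming the entry shape implied by the commits above (each allowlist entry carrying a methods list):

```rust
struct AllowedOperation {
    service: String,
    resource: String,
    sub_resource: Option<String>,
    methods: Vec<String>,
}

fn is_operation_allowed(
    allowed: &[AllowedOperation],
    service: &str,
    resource: &str,
    sub_resource: Option<&str>,
    method: &str,
) -> bool {
    if allowed.is_empty() {
        return true; // no allowlist configured
    }
    allowed.iter().any(|op| {
        op.service == service
            && op.resource == resource
            // None == None matches 3-segment calls; Some must match exactly.
            && op.sub_resource.as_deref() == sub_resource
            && op.methods.iter().any(|m| m.as_str() == method)
    })
}
```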
* fix(google_workspace): add classroom and events to service parameter schema description
* fix(google_workspace): cross-validate allowed_operations service against allowed_services
When allowed_services is explicitly configured, each allowed_operations entry's
service must appear in that list. An entry that can never match at runtime is a
misconfigured policy: it looks valid but silently produces a narrower scope than
the operator intended. Validation now rejects it with a clear error message.
Scope: only applies when allowed_services is non-empty. When it is empty, the tool
uses a built-in default list defined in the tool layer; the validator cannot
enumerate that list without duplicating the constant, so the cross-check is skipped.
Also:
- Update allowed_operations field doc-comment from 3-part (service, resource, method)
to 4-part (service, resource, sub_resource, method) model
- Soften Gmail sub_resource "required" language in config-reference (en, vi, zh-CN)
from a validation requirement to a runtime matching requirement — the validator
does not and should not hardcode API shape knowledge for individual services
- Add tests: rejects operation service not in allowed_services; skips cross-check
when allowed_services is empty
* fix(google_workspace): cross-validate allowed_operations.service against effective service set
When allowed_services is empty the validator was silently skipping the
service cross-check, allowing impossible configs like an unlisted service
in allowed_operations to pass validation and only fail at runtime.
Move DEFAULT_GWS_SERVICES from the tool layer (google_workspace.rs) into
schema.rs so the validator can use it unconditionally. When allowed_services
is explicitly set, validate against that set; when empty, fall back to
DEFAULT_GWS_SERVICES. Remove the now-incorrect "skips cross-check when empty"
test and add two replacement tests: one confirming a valid default service
passes, one confirming an unknown service is rejected even with empty
allowed_services.
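A minimal sketch of the unconditional cross-check, with an abbreviated stand-in for the real DEFAULT_GWS_SERVICES list (which lives in schema.rs) and assumed config field names:

```rust
const DEFAULT_GWS_SERVICES: &[&str] = &["drive", "calendar", "gmail" /* ... */];

struct GwsConfig {
    allowed_services: Vec<String>,
    allowed_operations: Vec<Operation>,
}
struct Operation {
    service: String,
}

fn validate_operation_services(cfg: &GwsConfig) -> Result<(), String> {
    // Effective set: the explicit list when present, else the defaults.
    let effective: Vec<&str> = if cfg.allowed_services.is_empty() {
        DEFAULT_GWS_SERVICES.to_vec()
    } else {
        cfg.allowed_services.iter().map(String::as_str).collect()
    };
    for op in &cfg.allowed_operations {
        if !effective.contains(&op.service.as_str()) {
            return Err(format!(
                "allowed_operations references service '{}' outside the effective service set",
                op.service
            ));
        }
    }
    Ok(())
}
```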
* fix(google_workspace): update test assertion for new error message wording
* docs(google_workspace): fix stale 3-segment gmail example in TDD plan
* fix(google_workspace): address adversarial review round 4 findings
- Error message for denied operations now includes sub_resource when
present, so gmail/users/messages/send and gmail/users/drafts/send
produce distinct, debuggable errors.
- Audit log now records sub_resource, completing the trail for 4-segment
Gmail operations.
- Normalize (trim) allowed_services and allowed_operations fields at
construction time in new(). Runtime comparisons now use plain equality
instead of .trim() on every call, removing the latent defect where a
future code path could forget to trim and silently fail to match.
- Unify runtime character validation with schema validation: sub_resource
and service/resource/method checks now both require lowercase alphanumeric
plus underscore and hyphen, matching the validator's character set.
- Add positional_cmd_args() test helper and tests verifying 3-segment
(Drive) and 4-segment (Gmail) argument ordering.
- Add test confirming page_limit without page_all passes validation.
- Add test confirming whitespace in config values is normalized at
construction, not deferred to comparison time.
- Fix spec Runtime Enforcement section to reflect actual code order.
* fix(google_workspace): wire production helpers to close test coverage gaps
- Remove #[cfg(test)] from positional_cmd_args; execute() now calls the
same function the arg-ordering tests exercise, so a drift in the real
command-building path is caught by the existing tests.
- Extract build_pagination_args(page_all, page_limit) as a production
method used by execute(). Replace the brittle page_limit_without_page_all
test (which relied on environment-specific execution failure wording)
with four direct assertions on build_pagination_args covering all
page_all/page_limit combinations.
* fix(whatsapp): remove duplicate variable declaration causing unused warning
Remove duplicate `let transcription_config = self.transcription.clone()`
(line 626 shadowed by identical line 628). The duplicate caused a
compiler warning during --features whatsapp-web builds.
Note: the reported "hang" at 526/528 crates is expected behavior for
release builds with lto="fat" + codegen-units=1 — the final link step
is slow but does complete.
Closes #4034
---------
Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
Add `ZEROCLAW_GATEWAY_TIMEOUT_SECS` environment variable to override the
hardcoded 30-second gateway request timeout at runtime.
Agentic workloads with tool use (web search, MCP tools, sub-agent
delegation) regularly exceed 30 seconds, causing HTTP 408 timeouts at the
gateway layer — even though provider_timeout_secs allows longer LLM calls.
The default remains 30s for backward compatibility.
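A minimal sketch of the env-var resolution, assuming a plain parse-with-fallback (the variable name comes from this commit; the function name is hypothetical):

```rust
use std::time::Duration;

fn gateway_timeout() -> Duration {
    let secs = std::env::var("ZEROCLAW_GATEWAY_TIMEOUT_SECS")
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(30); // default preserved for backward compatibility
    Duration::from_secs(secs)
}
```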
Co-authored-by: Claude Code (claude-opus-4-6) <noreply@anthropic.com>
Setup tokens (sk-ant-oat01-*) from Claude Pro/Max subscriptions require
specific headers and a system prompt to authenticate successfully.
Without these, the API returns 400 Bad Request.
Changes to apply_auth() (sketched below):
- Add claude-code-20250219 and interleaved-thinking-2025-05-14 beta
headers alongside existing oauth-2025-04-20
- Add anthropic-dangerous-direct-browser-access: true header
New apply_oauth_system_prompt() method:
- Prepends required "You are Claude Code" identity to system prompt
- Handles String, Blocks, and None system prompt variants
Changes to chat_with_system() and chat():
- Inject OAuth system prompt when using setup tokens
- Use NativeChatRequest/NativeChatResponse for proper SystemPrompt
enum support in chat_with_system
Test updates:
- Updated apply_auth test to verify new beta headers and
browser-access header
Tested with real OAuth token via `zeroclaw agent -m` — confirmed
working end-to-end.
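A minimal sketch of the header changes, assuming reqwest plumbing and Bearer-style auth; the beta flag names are taken from this commit, everything else is an assumption:

```rust
fn apply_auth(req: reqwest::RequestBuilder, token: &str) -> reqwest::RequestBuilder {
    req.header("authorization", format!("Bearer {token}"))
        // Beta flags required for setup tokens (names from the commit).
        .header(
            "anthropic-beta",
            "oauth-2025-04-20,claude-code-20250219,interleaved-thinking-2025-05-14",
        )
        .header("anthropic-dangerous-direct-browser-access", "true")
}
```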
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
tool_search activates deferred MCP tools into ActivatedToolSet at runtime.
When tool_search runs in parallel with the tools it activates, a race
condition occurs where tool lookups happen before activation completes,
resulting in "Unknown tool" errors.
Force sequential execution in should_execute_tools_in_parallel() whenever
tool_search is present in the tool call batch.
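A minimal sketch of the guard, assuming a simple ToolCall shape:

```rust
struct ToolCall {
    name: String,
}

fn should_execute_tools_in_parallel(batch: &[ToolCall]) -> bool {
    // tool_search activates deferred tools that later calls in the same
    // batch may reference, so it must never run concurrently with them.
    if batch.iter().any(|c| c.name == "tool_search") {
        return false;
    }
    batch.len() > 1
}
```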
Co-authored-by: Claude Code (claude-opus-4-6) <noreply@anthropic.com>
* feat(tools): add text browser tool for headless environments (#3879)
* fix(tools): remove redundant match arm in text_browser clippy lint
* ci: trigger fresh workflow run
Add armv7-unknown-linux-gnueabihf and arm-unknown-linux-gnueabihf targets
to the release build matrix for Raspberry Pi Zero, Orange Pi, and similar
ARM single-board computers. Disable observability-prometheus on 32-bit
targets (requires AtomicU64). Update install.sh to map armv6l to the
correct ARM binary.
In Compact (MetadataOnly) mode, skill tools were omitted from the system
prompt alongside instructions. This meant the LLM had no visibility into
TOML-defined tools when running in Compact mode, defeating the primary
advantage of TOML skills over MD skills (structured tool metadata).
Now Compact mode skips only instructions (loaded on demand via
read_skill) while still inlining tool definitions so the LLM knows
which skill tools are available.
Closes #3702
Add pre-built armv7 (32-bit ARM hard-float) binary to all release
workflows: stable, beta, and cross-platform manual build. Uses
gcc-arm-linux-gnueabihf cross-compiler on ubuntu-22.04 for glibc 2.35
compatibility.
The install script already maps armv7l/armv6l to this target, so
users on Raspberry Pi and similar devices will be able to install
directly via the install script once binaries are published.
Closes #3759
* feat: add Bailian (Aliyun) provider support
- Add bailian provider with coding.dashscope.aliyuncs.com endpoint
- Set User-Agent to 'openclaw' for compatibility with Bailian Coding Plan
- Support aliases: bailian, aliyun-bailian, aliyun
- Enable vision support for multimodal models
This allows users to use Alibaba Cloud's Bailian (百炼) Coding Plan API keys
with zeroclaw, matching the configuration used in OpenClaw.
* fix: address PR review comments for bailian provider
- Use is_bailian_alias() in factory match arm (consistency with other providers)
- Add bailian to canonical_china_provider_name() for onboarding routing
- Add BAILIAN_API_KEY env var resolution with DASHSCOPE_API_KEY fallback
- Add test coverage for bailian aliases in canonical_china_provider_name test
* fix: format bailian provider code
---------
Co-authored-by: Sagit-chu <sagit@example.com>
- Rename 'Active Channels' card heading to 'Channels'
- Add showAllChannels state (default: active-only view)
- Pill toggle button switches between 'Active' (green) and 'All' (blue)
views; button label reflects the *current* view so clicking switches to
the other mode
- When active-only and all channels are inactive, show
'No active channels' hint instead of an empty list
- Add i18n keys dashboard.channels, dashboard.filter_active,
dashboard.filter_all, dashboard.no_active_channels to en, zh, tr
locales
- No backend changes; no new dependencies
* fix(copilot): support vision via multi-part content messages
The Copilot provider sent image markers as plain text in the content
field. The API expects OpenAI-style multi-part content arrays with
separate text and image_url parts for vision input.
Add ApiContent enum (untagged) that serializes as either a plain string
or an array of ContentPart objects. User messages containing [IMAGE:]
markers are converted to multi-part content using the shared
multimodal::parse_image_markers() helper, matching the pattern used by
the OpenRouter and Compatible providers. Non-user messages and messages
without images serialize as plain strings.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(copilot): route chat_with_system through multimodal parser and add tests
Fix chat_with_system user message bypassing to_api_content(), which left
[IMAGE:] markers as plain text on that code path. Add unit tests for
to_api_content() covering image-marker, plain-text, and non-user roles.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* style: fix clippy wildcard match warning in copilot test
* style: apply rustfmt formatting
* style: fix formatting in channels/mod.rs
* ci: re-trigger CI
* fix(test): update conversation history key in new-session test
The conversation_history_key function now includes reply_target in the
key format. Update test assertions to use the correct key format
(telegram_chat-refresh_alice instead of telegram_alice).
---------
Co-authored-by: Tim Stewart <timstewartj@gmail.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(ollama): default to prompt-guided tool calling for local models
Ollama's /api/chat native tool-calling parameter is silently ignored by
many local models (llama3, qwen3, phi4, etc.), causing them to emit
tool-call JSON as plain text instead of structured tool calls. When
native_tool_calling was true, the XML tool-use instructions were
suppressed from the system prompt, leaving models with no guidance on
the expected tool protocol.
Default to prompt-guided (XML) tool calling so all Ollama models receive
tool-use instructions in the system prompt and the existing text-based
parser in the agent loop can extract tool calls reliably.
Also fixes a minor rustfmt issue in channels/mod.rs from #2891.
Fixes #3999
Fixes #3982
* fix(channels): update test keys for reply_target in history key
The conversation_history_key now includes reply_target (from #2891),
but the refreshes_available_skills_after_new_session test still used
the old key format "telegram_alice" instead of
"telegram_chat-refresh_alice".
The Prerequisites section uses #### headings inside <details> blocks,
which triggers MD001 (heading increment skip) and MD024 (duplicate
heading text). The English README is excluded from the docs quality
gate; add markdownlint-disable/enable comments to match.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Restructure the English README and all 30 translated READMEs to match
OpenClaw's comprehensive structure while preserving ZeroClaw branding.
Key changes across all files:
- Replace outdated bootstrap.sh references with install.sh
- Add missing sections: Security defaults (DM access), Everything we
built so far, How it works (ASCII architecture diagram), CLI commands,
Agent workspace + skills, Channel configuration examples, Subscription
Auth (OAuth), Migrating from OpenClaw
- Fix comparison image path to docs/assets/zeroclaw-comparison.jpeg
- Update doc links to current paths (docs/reference/api/*, docs/ops/*)
- Standardize social badges (X, Facebook, Discord, Instagram, TikTok,
RedNote, Reddit) and contributors badge across all locales
- Update benchmark table (binary size ~8.8 MB)
- Full structural parity: all 31 files are 792 lines with identical
section ordering
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
conversation_history_key() now includes reply_target to isolate
conversation histories across distinct Discord/Slack/Mattermost
channels for the same sender. Previously all channels produced
the same key {channel}_{sender}, causing cross-channel context bleed.
New key format: {channel}_{reply_target}_{sender} (without thread)
or {channel}_{reply_target}_{thread_ts}_{sender} (with thread).
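A minimal sketch of the key shape; function and parameter names are assumed from the commit text:

```rust
fn conversation_history_key(
    channel: &str,
    reply_target: &str,
    thread_ts: Option<&str>,
    sender: &str,
) -> String {
    match thread_ts {
        // Threaded: {channel}_{reply_target}_{thread_ts}_{sender}
        Some(ts) => format!("{channel}_{reply_target}_{ts}_{sender}"),
        // Unthreaded: {channel}_{reply_target}_{sender}
        None => format!("{channel}_{reply_target}_{sender}"),
    }
}
```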
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Swap auto_save and load_context order to avoid retrieving the just-stored
user message in memory recall.
Before: load_context → auto_save (could retrieve own message)
After: auto_save → load_context (query first, store after)
This fixes the issue where "[Memory context]\n- user_msg: <current>\n<current>"
would appear, causing the user's input to be duplicated.
Add tool descriptions alongside names in the deferred tools section so
the LLM can better identify which tool to activate via tool_search.
Original author: @mark-linyb
Add MiniMax-M2.7 and M2.7-highspeed to model selection lists. Set M2.7 as the new default for MiniMax, Novita, and Ollama cloud providers. Retain all previous models (M2.5, M2.1, M2) as alternatives.
The conditional cfg branches for AtomicU32 (32-bit fallback) and
AtomicU64 (64-bit) became dead code after portable_atomic::AtomicU64
was adopted in bd757996. The AtomicU32 branch would fail to compile
on 32-bit targets because the import was removed but the usage remained.
Use portable_atomic::AtomicU64 unconditionally, which works on all
targets.
Fixes #3452
Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Matrix channel was setting `channel: format!("matrix:{}", room_id)`
on incoming messages, but the reply router looks up channels by name
using `channels_by_name.get(&msg.channel)` where the key is just
"matrix". This mismatch caused replies to silently drop.
Align with how all other channels (telegram, discord, slack, etc.)
set the channel field to just the channel name.
Closes #3477
Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(packaging): ensure Homebrew var directory exists on first start
When zeroclaw is installed via Homebrew, the service plist references
/opt/homebrew/var/zeroclaw (or /usr/local/var/zeroclaw on Intel) for
runtime data, but this directory was never created. This caused
`brew services start zeroclaw` to fail on first use.
The fix detects Homebrew installations by checking if the binary lives
under a Homebrew prefix (Cellar path or symlinked bin/ with Cellar
sibling). When detected:
- install_macos() creates the var directory and sets
ZEROCLAW_CONFIG_DIR + WorkingDirectory in the generated plist
- start() defensively ensures the var directory exists before
invoking launchctl
Closes #3464
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style(service): fix cargo fmt formatting in Homebrew var dir tests
Collapse multi-line assert_eq! macros to single-line form as
required by cargo fmt.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add proper SAFETY documentation comments to the two unsafe blocks using
libc::getuid() in src/service/mod.rs:
1. In is_root() function: documents why getuid() is safe to call
2. In is_root_matches_system_uid() test: documents the test's purpose
This addresses the best practices audit finding about unsafe code without
safety comments. While we could use the nix crate's safe wrapper, adding
safety comments is a minimal change that satisfies the audit requirement
without introducing new dependencies.
As noted in refactor-candidates.md, the nix crate alternative would also
be acceptable, but this change follows the principle of minimal intervention
for documentation-only improvements.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
Expand rustfmt.toml configuration beyond just 'edition = "2021"' to
include additional stable formatting options that improve code consistency:
- max_width = 100: consistent line length
- tab_spaces = 4, hard_tabs = false: standard indentation
- use_field_init_shorthand = true: cleaner struct initialization
- use_try_shorthand = true: cleaner try operator usage
- reorder_imports = true, reorder_modules = true: consistent ordering
- match_arm_leading_pipes = "Never": cleaner match syntax
These options are all available on stable Rust and follow common Rust
formatting conventions. This addresses the finding in refactor-candidates.md
about the minimal rustfmt.toml configuration.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
When HOME environment variable is not set (common in cron jobs or
other non-TTY environments), `shellexpand::tilde` returns the literal
`~` unexpanded. This caused SOUL.md, IDENTITY.md, and other workspace
files to not be loaded because the workspace_dir path was invalid.
This fix adds `expand_tilde_path()` helper that falls back to
`directories::UserDirs` when shellexpand fails to expand tilde.
If both methods fail, a warning is logged advising users to use
absolute paths or set HOME explicitly in cron environments.
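A minimal sketch of the fallback chain; shellexpand::tilde and directories::UserDirs are real crate APIs, while the helper body itself is assumed:

```rust
use std::path::PathBuf;

fn expand_tilde_path(path: &str) -> PathBuf {
    // Normal case: HOME is set and shellexpand does the work.
    let expanded = shellexpand::tilde(path);
    if !expanded.starts_with('~') {
        return PathBuf::from(expanded.as_ref());
    }
    // HOME unset: fall back to directories::UserDirs.
    if let Some(dirs) = directories::UserDirs::new() {
        let home = dirs.home_dir();
        if path == "~" {
            return home.to_path_buf();
        }
        if let Some(rest) = path.strip_prefix("~/") {
            return home.join(rest);
        }
    }
    // Both methods failed: warn and return the path unchanged.
    tracing::warn!("could not expand '~'; use absolute paths or set HOME explicitly");
    PathBuf::from(path)
}
```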
Closes #3819
The Dockerfile requires firmware/ and crates/robot-kit/ for the build:
- crates/robot-kit/Cargo.toml is needed for workspace manifest parsing
- firmware/ is copied as part of the build context
These entries in .dockerignore caused COPY commands to fail with
"file not found" errors, resulting in 100% Docker build failure.
Made-with: Cursor
Venice's API accepts the OpenAI-compatible tool format without error
but models ignore the tool specs and hallucinate tool usage in prose
instead. Disable native tool calling so Venice uses prompt-guided
tools, which work reliably.
Add without_native_tools() builder method to OpenAiCompatibleProvider
for providers that support system messages but not native function
calling.
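A minimal sketch of the builder; the field name is assumed:

```rust
struct OpenAiCompatibleProvider {
    native_tool_calling: bool,
}

impl OpenAiCompatibleProvider {
    /// Opt out of native function calling; tool instructions are then
    /// delivered via the prompt-guided (XML) path instead.
    fn without_native_tools(mut self) -> Self {
        self.native_tool_calling = false;
        self
    }
}
```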
Closes #4007
The copy button in the web dashboard silently failed when served over
HTTP because navigator.clipboard requires a secure context (HTTPS).
Add a textarea-based fallback using document.execCommand('copy') and
proper error handling for the clipboard promise.
Closes #4008
The /new command only cleared in-memory conversation history but left
the JSONL session file on disk. On daemon restart, stale history was
rehydrated, negating the user's session reset. This caused context
pollution and degraded tool calling reliability.
Add delete_session() to SessionStore and call it from the /new handler
so both in-memory and persisted state are cleared.
Closes #4009
Add interruption_scope_id to ChannelMessage for thread-aware cancellation. Slack genuine thread replies and Matrix threads get scoped keys, preventing cross-thread cancellation. All other channels preserve existing behavior.
Supersedes #3900. Depends on #3891.
Change default Docker CMD from gateway to daemon in both dev and release stages. Gateway only starts the HTTP/WebSocket server — channel listeners are never spawned. Daemon starts the full runtime: gateway + channels + heartbeat + scheduler.
Closes #3893
Add a Justfile providing convenient shortcuts for common development
tasks. Just is a modern command runner that serves as a better Make
alternative for project-specific commands.
Commands provided:
- just fmt / just fmt-check - Format code
- just lint - Run clippy
- just test / just test-lib - Run tests
- just ci - Run full CI quality gate locally
- just build / just build-debug - Build the project
- just dev [args] - Run zeroclaw in development mode
- just doc - Generate and open documentation
- just audit / just deny - Security and license checks
- just fmt-toml - Format TOML files with taplo
This complements the existing shell scripts in dev/ and scripts/
by providing a more discoverable command interface.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
Add Taplo configuration file to ensure consistent formatting of TOML
files across the project. Taplo is a TOML formatter and linter that
helps maintain consistent style in configuration files.
Configuration includes:
- 2-space indentation to match project style
- Comment alignment for better readability
- Trailing newline enforcement
- Sorting rules for dependencies and features
This complements rustfmt.toml and .editorconfig to provide complete
formatting coverage for all file types in the project.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
Add comprehensive .gitattributes file to ensure consistent line endings
and proper file handling across different operating systems and editors.
Key features:
- Automatic line ending normalization (LF for most files)
- Language detection hints for GitHub Linguist
- Binary file declarations to prevent text processing
- CRLF handling for Windows-specific files (.ps1, .sln)
- Mark Cargo.lock as generated to hide diff noise in PRs
This helps prevent issues with:
- Mixed line endings in the repository
- Incorrect language statistics on GitHub
- Binary files being corrupted by text processing
- Unnecessary merge conflicts due to line ending differences
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
The `gateway` command only starts the HTTP/WebSocket server without
channel listeners. Users configuring channels (Matrix, Telegram, etc.)
in config.toml get no response because the channel sync loops are
never spawned. The `daemon` command starts the full runtime including
gateway, channels, heartbeat, and scheduler.
Co-authored-by: reaster <reaster@courrier.dev>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(channel): add /stop command to cancel in-flight tasks
Adds an explicit /stop slash command that allows users on any non-CLI
channel (Matrix, Telegram, Discord, Slack, etc.) to cancel an agent
task that is currently running.
Changes:
- is_stop_command(): new helper that detects /stop (case-insensitive,
optional @botname suffix), not gated on channel type
- /stop fast path in run_message_dispatch_loop: intercepts /stop before
semaphore acquisition so the target task is never replaced in the store;
fires CancellationToken on the running task; sends reply via tokio::spawn
using the established two-step channel lookup pattern
- register_in_flight separated from interrupt_enabled: all non-CLI tasks
now enter the in_flight_by_sender store, enabling /stop to reach them;
auto-cancel-on-new-message remains gated on interrupt_enabled (Telegram/
Slack only) — this is a deliberate broadening, not a side effect
Deferred to follow-up (feat/matrix-interrupt-on-new-message):
- interrupt_on_new_message config field for Matrix
- thread-aware interruption_scope_key (requires per-channel thread_ts
semantics analysis; Slack always sets thread_ts, Matrix only for replies)
Supersedes #2855
Tests: 7 new unit tests for is_stop_command; all 4075 tests pass.
* feat(channel): add interrupt_on_new_message support for Discord
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(delegate): make delegate timeout configurable via config.toml
Add configurable timeout options for delegate tool:
- timeout_secs: for non-agentic sub-agent calls (default: 120s)
- agentic_timeout_secs: for agentic sub-agent runs (default: 300s)
Previously these were hardcoded constants (DELEGATE_TIMEOUT_SECS
and DELEGATE_AGENTIC_TIMEOUT_SECS). Users can now customize them
in config.toml under [[delegate.agents]] section.
Fixes #3898
* feat(config): make delegate tool timeouts configurable via config.toml
This change makes the hardcoded 120s/300s delegate tool timeouts
configurable through the config file:
- Add [delegate] section to Config with timeout_secs and agentic_timeout_secs
- Add DelegateToolConfig struct for global default timeout values
- Add DEFAULT_DELEGATE_TIMEOUT_SECS (120) and DEFAULT_DELEGATE_AGENTIC_TIMEOUT_SECS (300) constants
- Remove hardcoded constants from delegate.rs
- Update tests to use constant values instead of magic numbers
- Update examples/config.example.toml with documentation
Closes #3898
* fix: keep delegate timeout fields as Option<u64> with global fallback
- Change DelegateAgentConfig.timeout_secs and agentic_timeout_secs from
u64 to Option<u64> so per-agent overrides are truly optional
- Implement manual Default for DelegateToolConfig with correct values
(120s and 300s) instead of derive(Default)
- Add DelegateToolConfig to DelegateTool struct and wire through
constructors so agent timeouts fall back to global [delegate] config
(see the sketch below)
- Add validation for delegate timeout values in Config::validate()
- Fix example config to use [agents.name] table syntax matching the
HashMap<String, DelegateAgentConfig> schema
- Add missing timeout fields to all DelegateAgentConfig struct literals
across codebase (doctor, swarm, model_routing_config, tools/mod)
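A minimal sketch of the override-with-fallback behavior; type and field names follow the commit text, the helper is hypothetical:

```rust
struct DelegateToolConfig {
    timeout_secs: u64,
    agentic_timeout_secs: u64,
}

impl Default for DelegateToolConfig {
    fn default() -> Self {
        // Manual Default preserving the old hardcoded values.
        Self { timeout_secs: 120, agentic_timeout_secs: 300 }
    }
}

struct DelegateAgentConfig {
    timeout_secs: Option<u64>,
    agentic_timeout_secs: Option<u64>,
}

fn effective_timeout(
    agent: &DelegateAgentConfig,
    global: &DelegateToolConfig,
    agentic: bool,
) -> u64 {
    if agentic {
        agent.agentic_timeout_secs.unwrap_or(global.agentic_timeout_secs)
    } else {
        agent.timeout_secs.unwrap_or(global.timeout_secs)
    }
}
```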
* chore: trigger CI
* chore: retrigger CI
* fix: cargo fmt line wrapping in config/mod.rs
* fix: import timeout constants in delegate tests
* style: cargo fmt
---------
Co-authored-by: vincent067 <vincent067@outlook.com>
Keep both the PR's Mattermost interrupt_on_new_message additions
and master's new fields (pending_new_sessions, prompt_config,
autonomy_level) from the /stop command PR (#3891).
* feat: add Jira tool with get_ticket, search_tickets, and comment_ticket
Implements a new `jira` tool following the existing zeroclaw tool
conventions (Tool trait, SecurityPolicy, config-gated registration).
- get_ticket: configurable detail level (basic/basic_search/full/changelog)
with response shaping; always in the default allowed_actions list
- search_tickets: JQL-based search with cursor pagination (nextPageToken);
always returns basic_search shape; gated by allowed_actions
- comment_ticket: posts ADF comments with inline markdown-like syntax —
@email mentions resolved to Jira accountId, **bold**, bullet lists,
newlines; gated by allowed_actions and SecurityPolicy Act operation
Config: [jira] section with base_url, email, api_token (encrypted at
rest, falls back to JIRA_API_TOKEN env var), allowed_actions (default:
["get_ticket"]), and timeout_secs. Validated on load.
Tool description in tool_descriptions/en.toml documents all three
actions and the full comment syntax for the AI system prompt.
* fix: address jira tool code review findings
High priority:
- Validate issue_key against ^[A-Z][A-Z0-9]+-\d+$ before URL interpolation
to prevent path traversal in get_ticket and comment_ticket
Medium priority:
- Add email guard in tool registration (mod.rs) to skip with a warning
instead of registering a broken tool when jira.email is empty
- Shape comment_ticket response to return only id, author, created —
avoids exposing internal Jira metadata to the AI
- Replace O(n²) comment matching in shape_basic with a HashMap lookup
keyed by comment ID for O(1) access
- Add api_token validation in Config::validate() checking both the
config field and JIRA_API_TOKEN env var when jira.enabled = true
Low priority:
- Make shape_basic_search private (was accidentally pub)
- Extend clean_email to strip leading punctuation ('(' and '[') so that
@(john@co.com) resolves correctly; fix suffix computation via pointer
arithmetic to handle the shifted offset
- Clarify tool_descriptions/en.toml: @prefix is required for mentions,
bare emails without @ are treated as plain text
- Handle unmatched ** in parse_inline: emit as literal text instead of
silently producing a bold node with no closing marker
* fix(jira): allow lowercase project keys in issue_key validation
Relax validate_issue_key to accept both PROJ-123 and proj-123, since
some Jira instances use lowercase custom project keys. Path traversal
protection is preserved: keys must still be alphanumeric and end in a
hyphenated numeric issue number.
* feat(tools): add tool honesty instructions to system prompt
Prevent AI from fabricating tool results by injecting a CRITICAL:
Tool Honesty section into both channel and CLI/agent system prompts.
Rules: never fabricate or guess tool results, report errors as-is,
and ask the user when unsure if a tool call succeeded.
* style: sort JiraConfig import alphabetically in config/mod.rs
* style(jira): fix strict clippy lints in jira_tool
- Derive Default for LevelOfDetails instead of manual impl
- Use char arrays in trim_start_matches/trim_end_matches
- Allow cast_possible_truncation on search_tickets (usize->u32 bounded by max_results)
- Remove needless borrow on &email
* fix(ci): adapt to upstream autonomy_level additions in channels/mod.rs
- Add missing autonomy_level argument to build_system_prompt_with_mode call in test
- Add missing autonomy_level field in ChannelRuntimeContext test initializer
- Allow large_futures in load_or_init test (Config struct growth from JiraConfig)
* fix(ci): resolve duplicate and missing autonomy_level in test initializers
* fix(ci): use TelegramRecordingChannel in telegram-specific test
The test process_channel_message_executes_tool_calls_instead_of_sending_raw_json
sent messages on channel "telegram" but registered RecordingChannel (name:
"test-channel"), causing the channel lookup to return None and no messages to
be sent.
* fix(jira): prevent panics on short dates, fix dedup bug, normalize base_url
- Add date_prefix() helper to safely slice date strings instead of
panicking on empty or short strings from the Jira API.
- Replace Vec::dedup() with HashSet-based retain in extract_emails()
to correctly deduplicate non-adjacent duplicates.
- Strip trailing slashes from base_url during construction to prevent
double-slash URLs.
- Add tests for date_prefix and non-adjacent email dedup.
---------
Co-authored-by: Anatolii <anatolii@Anatoliis-MacBook.local>
Co-authored-by: Anatolii <anatolii.fesiuk@gmail.com>
Every translated README now includes the migrate section:
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Preserve assistant text from native tool-call turns in draft updates. Falls back to response_text when parsed_text is empty and native tool calls are present. Relays text through on_delta for draft-capable channels like Telegram.
Supersedes #3976. Closes #3974
Inject a human-readable summary of the active SecurityPolicy into the system prompt Safety section. LLM sees allowed commands, forbidden paths, autonomy level, and rate limits.
Supersedes #3968. Closes #2404
Adds text-to-speech output to the Telegram channel, mirroring the
existing WhatsApp Web voice-chat implementation. When a user sends a
voice note (transcribed via STT), the channel enters voice-chat mode
and subsequent agent replies are synthesised into a Telegram voice note
via the configured TTS provider, in addition to the normal text reply.
Sending a text message exits voice-chat mode.
Implementation details:
- Add `tts_config`, `voice_chats`, and `pending_voice` fields to
`TelegramChannel`
- Add `with_tts()` builder method, gated on `config.enabled`
- Track voice-chat state: enter on successful STT transcription, exit
on incoming text message
- `synthesize_and_send_voice()` static method: synthesises audio via
`TtsManager` and uploads to Telegram using `sendVoice` multipart API
- 10-second debounce in `send()` ensures only the final substantive
reply in a tool chain gets a voice note (skips JSON, code blocks,
URLs, short status messages)
- Wire `.with_tts(config.tts.clone())` into both Telegram construction
sites in the channel factory
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Replace relative docs/assets/zeroclaw-banner.png paths with absolute
raw.githubusercontent.com URLs in all 31 README files so the banner
renders correctly regardless of where the README is viewed
- Switch web dashboard favicon and logos from logo.png to zeroclaw-trans.png
- Add zeroclaw-trans.png and zeroclaw-banner.png assets
- Update build.rs to track new dashboard asset
- Fix missing autonomy_level in new test + Box::pin large future
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
build_system_prompt_with_mode was discarding the autonomy_level
parameter, passing None to build_system_prompt_with_mode_and_autonomy.
This caused full-autonomy prompts to still include "ask before acting"
instructions. Convert the level to an AutonomyConfig and pass it through.
The refresh-skills test was missing the autonomy_level parameter
added to build_system_prompt_with_mode and ChannelRuntimeContext
by a recently merged PR.
Resolve conflict in src/channels/mod.rs Safety section. Keeps the
PR's AutonomyConfig-based prompt construction (build_system_prompt_with_mode_and_autonomy)
while incorporating master's granular safety rules (conditional
destructive-command and ask-before-acting lines based on autonomy level).
Also fixes missing autonomy_level arg in refresh-skills test and removes
duplicate autonomy.level args from auto-merged call sites.
The Homebrew publish workflow downloads a source tarball but never
ensures Node.js is available during the build. Without it, build.rs
cannot run npm to compile the web frontend, so Homebrew-installed
ZeroClaw ships without the web dashboard.
Add `depends_on "node" => :build` to the formula by inserting it
after the existing `depends_on "rust" => :build` line. This lets
build.rs detect npm and automatically run `npm ci && npm run build`
to produce the web/dist assets.
Fixes #3991
* fix: change compact_context default to true
Local LLMs with limited context windows immediately run out of context
when compact_context defaults to false. The system prompt alone can
consume 25K+ tokens, exceeding even 55K context windows with history.
Setting compact_context=true by default limits system prompt injection
to 6000 chars and RAG results to 2 chunks, making the agent usable
with smaller models out of the box.
Fixes #3987
* docs: update compact_context default to true in config reference
Update all locale variants (en, zh-CN, vi) to reflect the new default.
* test: update tests to expect compact_context default of true
Update assertions in schema.rs unit tests and config_persistence.rs
component tests to match the new default value.
- Replace zeroclaw.png (broken/outdated) with zeroclaw-banner.png
across all 30 translated README files
- Add Instagram social badge (@therealzeroclaw) to all translations
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Persist allowed_tools in cron_jobs table, threading it through CLI add/update and cron_add/cron_update tool APIs. Add regression coverage for store, tool, and CLI roundtrip paths.
Fixups over original PR #3929: add allowed_tools to all_overdue_jobs SELECT (merge gap), resolve merge conflicts.
Closes #3920
Supersedes #3929
Add a thread_replies option to Slack channel config (default true). When false, replies go to channel root instead of the originating thread.
Closes #3888
* fix: always use Blocks format for system prompts with cache_control
System prompts under 3KB were wrapped in SystemPrompt::String which
cannot carry cache_control headers, resulting in 0% cache hit rate
on Haiku 4.5. Always use SystemPrompt::Blocks with ephemeral
cache_control regardless of prompt size.
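A minimal sketch of the change, with assumed type shapes; on the wire the block would carry cache_control: {"type": "ephemeral"}:

```rust
#[allow(dead_code)]
enum SystemPrompt {
    String(String),           // cannot carry cache_control
    Blocks(Vec<SystemBlock>), // can
}

struct SystemBlock {
    text: String,
    cache_control: Option<&'static str>,
}

// Always emit Blocks regardless of prompt size, so the ephemeral cache
// hint is never dropped for prompts under the old 3KB cutoff.
fn make_system_prompt(text: String) -> SystemPrompt {
    SystemPrompt::Blocks(vec![SystemBlock {
        text,
        cache_control: Some("ephemeral"),
    }])
}
```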
Fixes #3977
* fix: lower conversation caching threshold from >4 to >1 messages
The previous threshold of >4 non-system messages was too restrictive,
delaying cache benefits until 5+ turns. Lower to >1 so caching kicks
in after the first user+assistant exchange.
Fixes #3977
* test: update anthropic cache tests for new thresholds and Blocks format
- convert_messages_small_system_prompt now expects Blocks with
cache_control instead of String variant
- should_cache_conversation tests updated for >1 threshold
- backward_compatibility test replaced with blocks-system test
* fix: add sandbox field to ShellTool struct
Add `sandbox: Arc<dyn Sandbox>` field to `ShellTool` and a
`new_with_sandbox()` constructor so callers can inject the configured
sandbox backend. The existing `new()` constructor defaults to
`NoopSandbox` for backward compatibility.
Ref: #3983
* fix: apply sandbox wrapping in ShellTool::execute()
Call `self.sandbox.wrap_command()` on the underlying std::process::Command
(via `as_std_mut()`) after building the shell command and before clearing
the environment. This ensures every shell command passes through the
configured sandbox backend before execution.
Ref: #3983
* fix: wire up sandbox creation at ShellTool callsites
In `all_tools_with_runtime()`, create a sandbox from
`root_config.security` via `create_sandbox()` and pass it to
`ShellTool::new_with_sandbox()`. The `default_tools_with_runtime()`
path retains `ShellTool::new()` which defaults to `NoopSandbox`.
Ref: #3983
* test: add sandbox integration tests for ShellTool
Verify that ShellTool can be constructed with a sandbox via
`new_with_sandbox()`, that NoopSandbox leaves commands unmodified,
and that command execution works end-to-end with a sandbox attached.
Ref: #3983
* fix: restore accidentally deleted logo file
The logo.png was removed in commit 48bdbde2 but is still referenced
by the web UI components. Restore it from git history.
Fixes #3984
* fix: copy logo to web/dist for rust-embed
The Rust binary embeds files from web/dist/ via rust-embed, so the
logo must also be present there to be served without a rebuild.
Fixes #3984
Replace subshell-based /dev/stdin probing in guided_input_stream with a
file-descriptor approach (guided_open_input) that works reliably in LXC
containers accessed over SSH.
The previous implementation probed /dev/stdin and /proc/self/fd/0 via
subshells before falling back to /dev/tty. In LXC containers these
probes fail even when FD 0 is perfectly usable, causing the guided
installer to exit with "requires an interactive terminal".
The fix:
- When stdin is a terminal (-t 0), assign GUIDED_FD=0 directly without
any subshell probing — trusting the kernel's own tty check
- Otherwise, open /dev/tty as an explicit fd (exec {GUIDED_FD}</dev/tty)
- guided_read uses `read -u "$GUIDED_FD"` instead of `< "$file_path"`
- Add echo after silent reads (password prompts) for correct line handling
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cron): add startup catch-up and drop login shell flag
Problems:
1. When ZeroClaw started after downtime (late boot, daemon restart),
overdue jobs were picked up via `due_jobs()` but limited by
`max_tasks` per poll cycle — with many overdue jobs, catch-up
could take many cycles.
2. Cron shell jobs used `sh -lc` (login shell), which loads the
full user profile on every execution — slow and may cause
unexpected side effects.
Fixes:
- Add `all_overdue_jobs()` store query without `max_tasks` limit
- Add `catch_up_overdue_jobs()` startup phase that runs ALL overdue
jobs once before entering the normal polling loop
- Extract `build_cron_shell_command()` helper using `sh -c` (non-login)
- Add structured tracing for catch-up progress
- Add tests for all new functions
* feat(cron): make catch-up configurable via API and control panel
Add `catch_up_on_startup` boolean to `[cron]` config (default: true).
When enabled, the scheduler runs all overdue jobs at startup before
entering the normal polling loop. Users can toggle this from:
- The Cron page toggle switch in the control panel
- PATCH /api/cron/settings { "catch_up_on_startup": false }
- The `[cron]` section of the TOML config editor
Also adds GET /api/cron/settings endpoint to read cron subsystem
settings without parsing the full config.
* fix(config): add catch_up_on_startup to CronConfig test constructors
The CI Lint job failed because the `cron_config_serde_roundtrip` test
constructs CronConfig directly and was missing the new field.
The AUR publish step fails with "Permission denied (publickey)".
Root cause is likely key formatting (Windows line endings from
GitHub secrets UI) or missing public key registration on AUR.
Changes:
- Normalize line endings (strip \r) when writing SSH key
- Set correct permissions on ~/.ssh (700) and ~/.ssh/config (600)
- Validate key with ssh-keygen before attempting clone
- Add SSH connectivity test for clearer error diagnostics
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: exempt tool schema validation errors from non-retryable classification
Groq returns 400 "tool call validation failed" which was classified as
non-retryable by is_non_retryable(), preventing the provider-level
fallback in compatible.rs from executing. Add is_tool_schema_error()
to detect these errors and return false from is_non_retryable(), allowing
the retry loop to pass control back to the provider's built-in fallback.
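A minimal sketch of the classification, assuming string-based matching on the error body:

```rust
fn is_tool_schema_error(body: &str) -> bool {
    // Groq-style 400: "tool call validation failed"
    body.contains("tool call validation failed")
}

fn is_non_retryable(status: u16, body: &str) -> bool {
    if status == 400 && is_tool_schema_error(body) {
        // Deliberately retryable: the retry loop hands control back to
        // the provider's built-in fallback instead of aborting.
        return false;
    }
    // Other client errors (e.g. invalid API key) stay non-retryable.
    matches!(status, 400 | 401 | 403)
}
```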
Fixes #3757
* test: add unit tests for tool schema error detection in reliable.rs
Verify is_tool_schema_error detects Groq-style validation failures and
that is_non_retryable returns false for tool schema 400s while still
returning true for other 400 errors like invalid API key.
* fix: escape format braces in test string literals for cargo check
The anyhow::anyhow! macro interprets curly braces as format
placeholders. Use explicit format argument to pass JSON-containing
strings in tests.
Add channel-lark to the cargo --features flag in all release and
cross-platform build workflows, and to the Docker build-args. This
gives users Feishu/Lark channel support out of the box without needing
to compile from source.
The channel-lark feature depends only on dep:prost (pure Rust protobuf),
so it is safe to enable on all platforms (Linux, macOS, Windows, Android).
* fix(openrouter): wire provider_timeout_secs through factory
Apply the configured provider_timeout_secs to OpenRouterProvider
in the provider factory, matching the pattern used for compatible
providers.
* fix(openrouter): add timeout_secs field to OpenRouterProvider
Add a configurable timeout_secs field (default 120s) and a
with_timeout_secs() builder method so the HTTP client timeout
can be overridden via provider config instead of being hardcoded.
* refactor(openrouter): improve response decode error messages
Read the response body as text first, then parse with
serde_json::from_str so that decode failures include a truncated
snippet of the raw body for easier debugging.
* test(openrouter): add timeout_secs configuration tests
Verify that the default timeout is 120s and that with_timeout_secs
correctly overrides it.
* style: run rustfmt on openrouter.rs
- Channel tool filtering (`non_cli_excluded_tools`) now respects
`autonomy.level = "full"` — full-autonomy agents keep all tools
available regardless of channel.
- Gateway `process_message` now creates and passes an `ApprovalManager`
to `agent_turn`, so `ReadOnly`/`Supervised` policies are enforced
instead of silently skipped.
- Gateway also applies `non_cli_excluded_tools` filtering with the same
full-autonomy bypass.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The conversational_ai config section is parsed but not yet consumed by
any runtime code. Emit a startup warning so users know the setting is
ignored, and update the doc comment to mark it as reserved for future use.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The GHCR image v0.5.0 fails to start when users configure the postgres
memory backend because the binary was compiled without the
memory-postgres cargo feature. Add it to all build configurations:
Dockerfiles (default ARG), release workflows (cargo build --features),
and Docker push steps (ZEROCLAW_CARGO_FEATURES build-arg).
Fixes #3946
When autonomy.level is set to "full", the channel/web system prompt no
longer includes instructions telling the model to ask for permission
before executing tools. Previously these safety lines were hardcoded
regardless of autonomy config, causing the LLM to simulate approval
dialogs in channel and web-interface modes even though the
ApprovalManager correctly allowed execution.
The fix adds an autonomy_level parameter to build_system_prompt_with_mode
and conditionally omits the "ask before acting" instructions when the
level is Full. Core safety rules (no data exfiltration, prefer trash)
are always included.
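A minimal sketch of the conditional prompt assembly; the enum and function names are assumed:

```rust
#[derive(PartialEq)]
enum AutonomyLevel {
    ReadOnly,
    Supervised,
    Full,
}

fn safety_section(level: &AutonomyLevel) -> String {
    // Core safety rules are unconditional.
    let mut s = String::from(
        "Core safety rules: no data exfiltration; prefer trash over rm.\n",
    );
    if *level != AutonomyLevel::Full {
        // Only non-full autonomy levels get the approval-seeking language.
        s.push_str("Ask the user before executing tools that modify state.\n");
    }
    s
}
```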
The [conversational_ai] config section was serialized into every
freshly-generated config.toml despite the feature being experimental
and not yet wired into the agent runtime. This confused new users who
found an undocumented section in their config.
Add skip_serializing_if = "ConversationalAiConfig::is_disabled" so the
section is only written when a user has explicitly enabled it. Existing
configs that already contain the section continue to deserialize
correctly via #[serde(default)].
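A minimal sketch of the serde gating; the attribute names come from this commit, the surrounding struct shapes are assumed:

```rust
#[derive(serde::Serialize, serde::Deserialize, Default)]
struct Config {
    // Only written out when the user has explicitly enabled the feature;
    // existing configs still deserialize via #[serde(default)].
    #[serde(
        default,
        skip_serializing_if = "ConversationalAiConfig::is_disabled"
    )]
    conversational_ai: ConversationalAiConfig,
}

#[derive(serde::Serialize, serde::Deserialize, Default)]
struct ConversationalAiConfig {
    enabled: bool,
}

impl ConversationalAiConfig {
    fn is_disabled(&self) -> bool {
        !self.enabled
    }
}
```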
Fixes #3958
Lower the default heartbeat interval to 5 minutes to match the renewable
partial wake-lock cadence. Add `[heartbeat task` to the memory auto-save
skip filter so heartbeat prompts (both Phase 1 decision and Phase 2 task
execution) do not pollute persistent conversation memory.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Replace workspace_dir.join(path) with resolve_tool_path(path) in
file_write, file_edit, and pdf_read tools to correctly handle absolute
paths within the workspace directory, preventing path doubling.
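A minimal sketch of the helper's behavior as described; the real implementation presumably also enforces workspace containment:

```rust
use std::path::{Path, PathBuf};

fn resolve_tool_path(workspace_dir: &Path, path: &str) -> PathBuf {
    let candidate = Path::new(path);
    if candidate.is_absolute() {
        // Already absolute (and inside the workspace): use as-is rather
        // than producing a doubled workspace_dir/workspace_dir/... path.
        candidate.to_path_buf()
    } else {
        workspace_dir.join(candidate)
    }
}
```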
Closes #3774
The OtpConfig struct uses deny_unknown_fields but was missing the
challenge_max_attempts field, causing zeroclaw config schema to fail
with a TOML parse error when the field appeared in config files.
Add challenge_max_attempts as an Option<u32>-style field with a default
of 3 and a validation check ensuring it is greater than 0.
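A minimal sketch of the field and its validation; the exact shape is assumed:

```rust
#[derive(serde::Deserialize)]
#[serde(deny_unknown_fields)]
struct OtpConfig {
    #[serde(default = "default_challenge_max_attempts")]
    challenge_max_attempts: u32,
}

fn default_challenge_max_attempts() -> u32 {
    3
}

impl OtpConfig {
    fn validate(&self) -> Result<(), String> {
        if self.challenge_max_attempts == 0 {
            return Err("otp.challenge_max_attempts must be greater than 0".into());
        }
        Ok(())
    }
}
```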
The `include` array was placed after `[lib]` without a section header,
causing Cargo to parse it as `lib.include` — an invalid manifest key.
This triggered a warning during builds and caused lockfile mismatch
errors when building with --locked in Docker (Dockerfile.debian).
Move the `include` key to the `[package]` section where it belongs and
regenerate Cargo.lock to stay in sync.
Fixes #3925
Repositions the one-time pairing code display to appear directly below
the dashboard URL for cleaner terminal output, and removes the duplicate
display that was showing at the bottom of the route list.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When the workspace is created outside of `zeroclaw onboard` (e.g., via
cron, daemon, or `< /dev/null`), SOUL.md and IDENTITY.md were never
scaffolded, causing the agent to activate without identity files.
Added `ensure_bootstrap_files()` in `Config::load_or_init()` that
idempotently creates default SOUL.md and IDENTITY.md if missing.
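Rough sketch of the idempotent scaffolding (file contents here are placeholders):

```rust
use std::fs;
use std::path::Path;

// Write each identity file only if it does not already exist, so files
// created by a successful onboard run are never overwritten.
fn ensure_bootstrap_files(workspace: &Path) -> std::io::Result<()> {
    for (name, default_body) in [("SOUL.md", "# Soul\n"), ("IDENTITY.md", "# Identity\n")] {
        let path = workspace.join(name);
        if !path.exists() {
            fs::write(&path, default_body)?;
        }
    }
    Ok(())
}
```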
Closes #3819.
Add `timeout_secs` and `agentic_timeout_secs` fields to
`DelegateAgentConfig` so users can tune per-agent timeouts instead
of relying on the hardcoded 120s / 300s defaults.
Validation rejects values of 0 or above 3600s, matching the pattern
used by MCP timeout validation.
Closes #3898
Add a locale-aware tool description system that loads translations from
TOML files in tool_descriptions/. This enables non-English users to see
tool descriptions in their language.
- Add src/i18n.rs module with ToolDescriptions loader, locale detection
(ZEROCLAW_LOCALE, LANG, LC_ALL env vars), and English fallback chain
- Add locale config field to Config struct for explicit locale override
- Create tool_descriptions/en.toml with all 47 tool descriptions
- Create tool_descriptions/zh-CN.toml with Chinese translations
- Integrate with ToolsSection::build() and build_tool_instructions()
to resolve descriptions from locale files before hardcoded fallback
- Add PromptContext.tool_descriptions field for prompt-time resolution
- Add AgentBuilder.tool_descriptions() setter for Agent construction
- Include tool_descriptions/ in Cargo.toml package include list
- Add 8 unit tests covering locale loading, fallback chains, env
detection, and config override
Closes #3901
The seen_tool_signatures HashSet was initialized outside the iteration
loop, causing cross-iteration deduplication of legitimate tool calls.
This triggered a self-correction spiral where the agent repeatedly
attempted skipped calls until hitting max_iterations.
Moving the HashSet inside the loop ensures deduplication only applies
within a single iteration, as originally intended.
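Compact sketch of the scoping fix (names are illustrative):

```rust
use std::collections::HashSet;

// The dedup set is scoped to one iteration: a tool call repeated across
// iterations is legitimate, while duplicates within a single iteration
// are still skipped.
fn run_iterations(iterations: &[Vec<String>]) {
    for tool_calls in iterations {
        let mut seen_tool_signatures: HashSet<&String> = HashSet::new(); // reset per iteration
        for call in tool_calls {
            if !seen_tool_signatures.insert(call) {
                continue; // duplicate within this iteration only
            }
            // execute the tool call...
        }
    }
}
```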
Fixes #3798
The Telegram channel was ignoring the ack_reactions setting because it
sent setMessageReaction API calls directly in its polling loop, bypassing
the top-level channels_config.ack_reactions check.
- Add optional ack_reactions field to TelegramConfig so it can be set
under [channels_config.telegram] without "unknown key" warnings
- Add ack_reactions field and with_ack_reactions() builder to
TelegramChannel, defaulting to true
- Guard try_add_ack_reaction_nonblocking() behind self.ack_reactions
- Wire channel-level override with fallback to top-level default
- Add config deserialization and channel behavior tests
When running install.sh with --docker --skip-build --prefer-prebuilt
(especially with podman via ZEROCLAW_CONTAINER_CLI), the script would
skip creating config.toml and workspace scaffold files because these
were only generated by the onboard wizard, which requires an interactive
terminal or explicit API key.
Add ensure_default_config_and_workspace() that creates a minimal
config.toml (with provider, workspace_dir, and optional api_key/model)
and seeds the workspace directory structure (sessions/, memory/, state/,
cron/, skills/ subdirectories plus IDENTITY.md, USER.md, MEMORY.md,
AGENTS.md, and SOUL.md) when they don't already exist.
This function is called:
- At the end of run_docker_bootstrap(), so config and workspace files
exist on the host volume regardless of whether onboard ran inside the
container.
- After the [3/3] Finalizing setup onboard block in the native install
path, covering --skip-build, --prefer-prebuilt, --skip-onboard, and
cases where the binary wasn't found.
The function is idempotent: it only writes files that don't already
exist, so it never overwrites config or workspace files created by a
successful onboard run.
Also makes the container onboard failure non-fatal (|| true) so that
the fallback config generation always runs.
Fixes #3852
When LLMs pass the schedule parameter as a JSON string instead of a JSON
object, serde fails with "invalid type: string, expected internally
tagged enum Schedule". Add a deserialize_maybe_stringified helper that
detects stringified JSON values and parses the inner string before
deserializing, providing backward compatibility for both object and
string representations.
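Possible shape of the helper (the exact signature in the codebase may differ):

```rust
use serde::de::DeserializeOwned;
use serde_json::Value;

/// If the LLM passed the parameter as a JSON-encoded string, parse the
/// inner string first, then deserialize into the target type; plain
/// objects deserialize directly.
fn deserialize_maybe_stringified<T: DeserializeOwned>(
    value: Value,
) -> Result<T, serde_json::Error> {
    match value {
        Value::String(inner) => serde_json::from_str(&inner),
        other => serde_json::from_value(other),
    }
}
```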
Fixes #3860
The llamacpp provider was instantiated with vision disabled by default,
causing image transfers from Telegram to fail. Use new_with_vision()
with vision enabled, matching the behavior of other compatible providers.
Fixes #3802
The deferred MCP tools section in the system prompt only listed tool
names inside <available-deferred-tools> tags without any instruction
telling the LLM to call tool_search to activate them. In daemon and
Telegram mode, where conversations are shorter and less guided, the
LLM never discovered it should call tool_search, so deferred tools
were effectively unavailable.
Add a "## Deferred Tools" heading with explicit instructions that
the LLM MUST call tool_search before using any listed tool. This
ensures the LLM knows to activate deferred tools in all modes
(CLI, daemon, Telegram) consistently.
Also add tests covering:
- Instruction presence in the deferred section
- Multiple-server deferred tool search
- Cross-server keyword search ranking
- Activation persistence across multiple tool_search calls
- Idempotent re-activation
When a provider returns a context-size-exceeded error, truncate the
oldest non-system messages from conversation history and retry instead
of immediately bailing out. This enables local models with small
context windows (llamafile, llama.cpp) to work by automatically
fitting the conversation within available context.
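Illustrative retry loop; the error detection and message types here are assumptions, not the real provider API:

```rust
struct Message {
    role: String,
    content: String,
}

fn is_context_exceeded(err: &str) -> bool {
    err.contains("context length") || err.contains("maximum context")
}

fn chat_with_truncation(
    mut history: Vec<Message>,
    send: impl Fn(&[Message]) -> Result<String, String>,
) -> Result<String, String> {
    loop {
        match send(&history) {
            Err(e) if is_context_exceeded(&e) => {
                // Keep the system prompt; evict the oldest other message.
                match history.iter().position(|m| m.role != "system") {
                    Some(idx) if history.len() > 1 => {
                        history.remove(idx);
                    }
                    _ => return Err(e), // nothing left to truncate
                }
            }
            other => return other,
        }
    }
}
```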
Closes #3894
The echo_provider() test helper writes a fake_claude.sh script to
a shared temp directory. When lib and bin test binaries run in
parallel (separate processes, separate OnceLock statics), one
process can overwrite the script while the other is executing it,
causing "Text file busy" (ETXTBSY). Scope the filename with PID
to isolate each test process.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add support for switching AI models at runtime during a conversation.
The model_switch tool allows users to:
- Get current model state
- List available providers
- List models for a provider
- Switch to a different model
The switch takes effect immediately for the current conversation by
recreating the provider with the new model after tool execution.
Risk: Medium - internal state changes and provider recreation
When upgrading an existing installation, stale build artifacts in
target/release/build/ can cause compilation failures (e.g.
libsqlite3-sys bindgen.rs not found). Run cargo clean --release
before building when an upgrade is detected.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds an explicit /stop slash command that allows users on any non-CLI
channel (Matrix, Telegram, Discord, Slack, etc.) to cancel an agent
task that is currently running.
Changes:
- is_stop_command(): new helper that detects /stop (case-insensitive,
optional @botname suffix), not gated on channel type
- /stop fast path in run_message_dispatch_loop: intercepts /stop before
semaphore acquisition so the target task is never replaced in the store;
fires CancellationToken on the running task; sends reply via tokio::spawn
using the established two-step channel lookup pattern
- register_in_flight separated from interrupt_enabled: all non-CLI tasks
now enter the in_flight_by_sender store, enabling /stop to reach them;
auto-cancel-on-new-message remains gated on interrupt_enabled (Telegram/
Slack only) — this is a deliberate broadening, not a side effect
Deferred to follow-up (feat/matrix-interrupt-on-new-message):
- interrupt_on_new_message config field for Matrix
- thread-aware interruption_scope_key (requires per-channel thread_ts
semantics analysis; Slack always sets thread_ts, Matrix only for replies)
Supersedes #2855
Tests: 7 new unit tests for is_stop_command; all 4075 tests pass.
* fix(providers): preserve conversation context in Claude Code CLI provider
Override chat_with_history to format full multi-turn conversation
history into a single prompt for the claude CLI, instead of only
forwarding the last user message.
Closes #3878
* fix(providers): fix ETXTBSY race in claude_code tests
Use OnceLock to initialize the fake_claude.sh test script exactly
once, preventing "Text file busy" errors when parallel tests
concurrently write and execute the same script file.
Handle Schedule::At jobs in reschedule_after_run by disabling them
instead of rescheduling to a past timestamp. Also add a fallback in
persist_job_result to disable one-shot jobs if removal fails.
Closes #3868
When using Channel mode with dynamic classification and routing, the
route-specific `api_key` from `[[model_routes]]` was silently dropped.
The system always fell back to the global `api_key`, causing 401 errors
when routing to `custom:` providers that require distinct credentials.
Root cause: `ChannelRouteSelection` only stored provider + model, and
`get_or_create_provider` always used `ctx.api_key` (the global key).
Changes:
- Add `api_key` field to `ChannelRouteSelection` so the matched route's
credential survives through to provider creation.
- Update `get_or_create_provider` to accept and prefer a route-specific
`api_key` over the global key.
- Use a composite cache key (provider name + api_key hash) to prevent
cache poisoning when multiple routes target the same provider with
different credentials.
- Wire the route api_key through query classification matching and the
`/model` (SetModel) command path.
Fixes #3838
The Dockerfile and Dockerfile.debian COPY `firmware/`, `crates/robot-kit/`,
and `crates/robot-kit/Cargo.toml`, but `.dockerignore` excludes both
`firmware/` and `crates/robot-kit/`, causing COPY failures during build.
Since these are hardware-only paths not needed for the Docker runtime:
- Remove COPY commands for `firmware/` and `crates/robot-kit/`
- Remove dummy `crates/robot-kit/src` creation in dep-caching steps
- Use sed to strip `crates/robot-kit` from workspace members in the
copied Cargo.toml so Cargo doesn't look for the missing manifest
Fixes #3836
Fetch the current pairing code from GET /admin/paircode (localhost-only)
and display it in both the initial PairingDialog and the /pairing
management page. Users no longer need to check the terminal to find
the 6-digit code — it appears directly in the web UI.
Falls back gracefully when the admin endpoint is unreachable (e.g.
non-localhost access), showing the original "check your terminal" prompt.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update Facebook group link from /groups/zeroclaw to /groups/zeroclawlabs
across all 31 README locale files. Add Discord, TikTok, and RedNote
social badges to the badge section of all READMEs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
cargo-audit uses .cargo/audit.toml (not deny.toml) for its ignore
list. These 3 wasmtime advisories are transitive via extism 1.13.0
with no upstream fix available. Plugin system is feature-gated.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
RUSTSEC-2026-0006, RUSTSEC-2026-0020, RUSTSEC-2026-0021 are all in
wasmtime 37.x pinned by extism. No newer extism release available.
Plugin system is behind a feature flag to limit exposure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add cargo-deny ignore entries for RUSTSEC-2024-0388 (derivative),
RUSTSEC-2025-0057 (fxhash), and RUSTSEC-2025-0119 (number_prefix).
All are transitive dependencies we cannot directly control.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Sync Cargo.lock with new Extism/WASM plugin dependencies and apply
rustfmt line-wrap fix in gateway WebSocket handler.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Wire WASM plugin tools into all_tools_with_runtime() behind
cfg(feature = "plugins-wasm"), discovering and registering tool-capable
plugins from the configured plugins directory at startup.
- Add /api/plugins gateway endpoint (cfg-gated) for listing plugin status.
- Add mod plugins declaration to main.rs binary crate so crate::plugins
resolves when the feature is enabled.
- Add unit tests for PluginHost: empty dir, manifest discovery, capability
filtering, lookup, and removal.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a standalone example plugin demonstrating the WASM plugin interface:
- example-plugin/Cargo.toml: cdylib crate targeting wasm32-wasip1
- example-plugin/src/lib.rs: mock weather tool using extism-pdk
- example-plugin/manifest.toml: plugin manifest declaring tool capability
This crate is intentionally NOT added to the workspace members since it
targets wasm32-wasip1 and would break the main build.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Introduce the WASM plugin system foundation:
- Add extism 1.9 as an optional dependency behind `plugins-wasm` feature
- Create `src/plugins/` module with manifest types, error types, and stub host
- Add `Plugin` CLI subcommands (list, install, remove, info) behind cfg gate
- Add `PluginsConfig` to the config schema with sensible defaults
All plugin code is behind `#[cfg(feature = "plugins-wasm")]` so the default
build is unaffected.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add device_registry, pending_pairings to test AppState instances and
pairing_dashboard to test GatewayConfig to fix compilation of tests
after the new pairing dashboard fields were introduced.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add Pairing page with device list table, pairing code generation,
and device revocation. Create useDevices hook for reusable device
fetching. Wire /pairing route into App.tsx router.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add ConnectParams struct for an optional first-frame connect handshake.
If the first WebSocket message is {"type":"connect",...}, connection
parameters (session_id, device_name, capabilities) are extracted and
a "connected" ack is sent back. Old clients sending "message" first
still work unchanged (backward-compatible).
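Sketch of the first-frame branch, assuming serde/serde_json and an internally tagged wire format as described:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
#[serde(tag = "type", rename_all = "lowercase")]
enum FirstFrame {
    Connect(ConnectParams),
    Message { content: String },
}

#[derive(Deserialize)]
struct ConnectParams {
    session_id: Option<String>,
    device_name: Option<String>,
    #[serde(default)]
    capabilities: Vec<String>,
}

fn handle_first_frame(raw: &str) -> &'static str {
    match serde_json::from_str::<FirstFrame>(raw) {
        // New clients: ack the handshake, then enter the message loop.
        Ok(FirstFrame::Connect(_params)) => r#"{"type":"connected"}"#,
        // Old clients that send "message" first keep working unchanged.
        _ => "fallback: treat the frame as a chat message",
    }
}
```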
Extract process_chat_message() helper to avoid duplication between
fallback first-message handling and the main message loop.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Introduce DeviceRegistry, PairingStore, and five new API endpoints:
- POST /api/pairing/initiate — generate a new pairing code
- POST /api/pairing/submit — submit code with device metadata
- GET /api/devices — list paired devices
- DELETE /api/devices/{id} — revoke a paired device
- POST /api/devices/{id}/rotate — rotate a device token
Wire into AppState and gateway router. Registry is only created
when require_pairing is enabled.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add PairingDashboardConfig struct with configurable code_length,
ttl_secs, max_pending, max_attempts, and lockout_secs fields.
Nested under GatewayConfig as `pairing_dashboard` with serde defaults.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Export commands module from lib.rs (pub mod commands) for external consumers
- Add --force and --version flags to the Update CLI command
- Wire version parameter through to check() and run() in update.rs,
supporting targeted version fetches via GitHub releases/tags API
- Add WebSocket handshake check (check_websocket_handshake) to the full
self-test suite in self_test.rs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add `zeroclaw update` command with a 6-phase self-update pipeline:
1. Preflight — check GitHub releases API for newer version
2. Download — fetch platform-specific binary to temp dir
3. Backup — copy current binary to .bak for rollback
4. Validate — size check + --version smoke test on download
5. Swap — overwrite current binary with new version
6. Smoke test — verify updated binary runs, rollback on failure
Supports --check flag for update-check-only mode without installing.
Includes version comparison logic with unit tests.
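The version comparison could look roughly like this triple compare (the real logic may also handle pre-release tags):

```rust
/// Parse "x.y.z" (optionally "vx.y.z") into a triple; tuples compare
/// lexicographically, which matches semver ordering for plain releases.
fn parse_version(v: &str) -> Option<(u64, u64, u64)> {
    let mut parts = v.trim_start_matches('v').splitn(3, '.');
    Some((
        parts.next()?.parse().ok()?,
        parts.next()?.parse().ok()?,
        parts.next()?.parse().ok()?,
    ))
}

fn is_newer(latest: &str, current: &str) -> bool {
    match (parse_version(latest), parse_version(current)) {
        (Some(l), Some(c)) => l > c,
        _ => false, // unparsable versions never trigger an update
    }
}

#[test]
fn newer_patch_is_detected() {
    assert!(is_newer("0.6.1", "0.6.0"));
    assert!(!is_newer("0.6.0", "0.6.0"));
}
```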
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add `zeroclaw self-test` command with two modes:
- Quick mode (--quick): 8 offline checks including config, workspace,
SQLite, provider/tool/channel registries, security policy, and version
- Full mode (default): adds gateway health and memory round-trip checks
Creates src/commands/ module structure with self_test and update stubs.
Adds indicatif and tempfile runtime dependencies for the update pipeline.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Debian compatibility image was building from source with QEMU
cross-compilation for ARM64, which is extremely slow and was getting
cancelled by the concurrency group. Switch to using pre-built binaries
(same as the distroless image) with a debian:bookworm-slim runtime base.
- Add Dockerfile.debian.ci (mirrors Dockerfile.ci with Debian runtime)
- Update release-beta-on-push.yml to use docker-ctx + pre-built bins
- Update release-stable-manual.yml with same fix
- Drop GHA cache layers (no longer building from source)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Gateway WebSocket chat sessions were in-memory only — conversation
history was lost on gateway restart, macOS sleep/wake, or client
reconnect. This wires up the existing SessionBackend (SQLite) to
the gateway WS handler so sessions survive restarts and reconnections.
Changes:
- Add delete_session() to SessionBackend trait + SQLite implementation
- Add session_persistence and session_ttl_hours to GatewayConfig
- Add Agent::seed_history() to hydrate agent from persisted messages
- Initialize SqliteSessionBackend in run_gateway() when enabled
- Send session_start message on WS connect with session_id + resumed
- Persist user/assistant messages after each turn
- Add GET /api/sessions and DELETE /api/sessions/{id} REST endpoints
- Bump version to 0.5.0
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The Turkish (tr) locale section had a duplicate "Dashboard specific
labels" block that repeated 19 keys already defined earlier, causing
TypeScript error TS1117. Moved the unique keys (provider_model,
paired_yes, etc.) into the primary dashboard section and removed
the duplicate block.
Fixes build failure introduced by #3777.
Remove tweet job from beta workflow. Update tweet-release.yml to diff
against previous stable tag (excluding betas) to capture all features
across the full release cycle. Simplify tweet format to feature-focused
style without contributor counts.
Supersedes #3575.
- Add comprehensive Simplified Chinese (zh) translations for all UI strings
- Extend and complete Turkish (tr) translations
- Fill in missing English (en) translation keys
- Reset default locale to 'en'
- Update language toggle to cycle through all three locales: en → zh → tr
Revert "always generate pairing code" to tighter security posture:
codes are only generated on first startup when no tokens exist. Add
a CLI hint to the gateway banner so operators know how to re-pair
on demand. Fix install.sh to not use --new on fresh install (avoids
invalidating the auto-generated code). Fix onboard to show an
informational message instead of a throwaway PairingGuard.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(runtime): add configurable reasoning effort
* fix(test): add missing reasoning_effort field in live test
Add reasoning_effort: None to ProviderRuntimeOptions construction in
openai_codex_vision_e2e.rs to fix E0063 compile error.
---------
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
Instead of compiling Rust from source inside Docker (~60 min),
download the already-built linux binaries from the build matrix
and copy them into a minimal distroless image (~2 min).
- Add Dockerfile.ci for release workflows (no Rust toolchain needed)
- Update both beta and stable workflows to use pre-built artifacts
- Drop Docker job timeout from 60 to 15 minutes
- Original Dockerfile unchanged for local dev builds
PR #3604 included CLAUDE.md changes referencing non-existent modules
(src/security/taint.rs, src/sop/workflow.rs) and duplicating content
from CONTRIBUTING.md. These additions violate the anti-pattern rule
against modifying CLAUDE.md in feature PRs.
* feat(tools): add native LinkedIn integration tool
Add a config-gated LinkedIn tool that enables ZeroClaw to interact with
LinkedIn's REST API via OAuth2. Supports creating posts, listing own
posts, commenting, reacting, deleting posts, viewing engagement stats,
and retrieving profile info.
Architecture:
- linkedin.rs: Tool trait impl with action-dispatched design
- linkedin_client.rs: OAuth2 token management and API wrappers
- Config-gated via [linkedin] enabled = false (default off)
- Credentials loaded from workspace .env file
- Automatic token refresh with line-targeted .env update
39 unit tests covering security enforcement, parameter validation,
credential parsing, and token management.
* feat(linkedin): configurable content strategy and API version
- Expand LinkedInConfig with api_version and nested LinkedInContentConfig
(rss_feeds, github_users, github_repos, topics, persona, instructions)
- Add get_content_strategy tool action so agents can read config at runtime
- Fix hardcoded LinkedIn API version 202402 (expired) → configurable,
defaulting to 202602
- LinkedInClient accepts api_version as parameter instead of static header
- 4 new tests (43 total), all passing
* feat(linkedin): add multi-provider image generation for posts
Add ImageGenerator with provider chain (DALL-E, Stability AI, Imagen, Flux)
and SVG fallback card. LinkedIn tool create_post now supports generate_image
parameter. Includes LinkedIn image upload (register → upload → reference),
configurable provider priority, and 14 new tests.
* feat(whatsapp): add voice note transcription and TTS voice replies
- Add STT support: download incoming voice notes via wa-rs, transcribe
with OpenAI Whisper (or Groq), send transcribed text to agent
- Add TTS support: synthesize agent replies to Opus audio via OpenAI
TTS, upload encrypted media, send as WhatsApp voice note (ptt=true)
- Voice replies only trigger when user sends a voice note; text
messages get text replies only. Flag is consumed after one use to
prevent multiple voice notes per agent turn
- Fix transcription module to support OpenAI API key (not just Groq):
auto-detect provider from API URL, check ANTHROPIC_OAUTH_TOKEN /
OPENAI_API_KEY / GROQ_API_KEY env vars in priority order
- Add optional api_key field to TranscriptionConfig for explicit key
- Add response_format: opus to OpenAI TTS for WhatsApp compatibility
- Add channel capability note so agent knows TTS is automatic
- Wire transcription + TTS config into WhatsApp Web channel builder
* fix(providers): prefer ANTHROPIC_OAUTH_TOKEN over global api_key
When the Anthropic provider is used alongside a non-Anthropic primary
provider (e.g. custom: gateway), the global api_key would be passed
as credential override, bypassing provider-specific env vars. This
caused Claude Code subscription tokens (sk-ant-oat01-*) to be ignored
in favor of the unrelated gateway JWT.
Fix: for the anthropic provider, check ANTHROPIC_OAUTH_TOKEN and
ANTHROPIC_API_KEY env vars before falling back to the credential
override. This mirrors the existing MiniMax OAuth pattern and enables
subscription-based auth to work as a fallback provider.
* feat(linkedin): add scheduled post support via LinkedIn API
Add scheduled_at parameter to create_post and create_post_with_image.
When provided (RFC 3339 timestamp), the post is created as a DRAFT
with scheduledPublishOptions so LinkedIn publishes it automatically
at the specified time. This enables the cron job to schedule a week
of posts in advance directly on LinkedIn.
* fix(providers): prefer env vars for openai and groq credential resolution
Generalize the Anthropic OAuth fix to also cover openai and groq
providers. When used alongside a non-matching primary provider (e.g.
a custom: gateway), the global api_key would be passed as credential
override, causing auth failures. Now checks provider-specific env
vars (OPENAI_API_KEY, GROQ_API_KEY) before falling back to the
credential override.
* fix(whatsapp): debounce voice replies to voice final answer only
The voice note TTS was triggering on the first send() call, which was
often intermediate tool output (URLs, JSON, web fetch results) rather
than the actual answer. This produced incomprehensible voice notes.
Fix: accumulate substantive replies (>30 chars, not URLs/JSON/code)
in a pending_voice map. A spawned debounce task waits 4 seconds after
the last substantive message, then synthesizes and sends ONE voice
note with the final answer. Intermediate tool outputs are skipped.
This ensures the user hears the actual answer in the correct language,
not raw tool output in English.
* fix(whatsapp): voice in = voice out, text in = text out
Rewrite voice reply logic with clean separation:
- Voice note received: ALL text output suppressed. Latest message
accumulated silently. After 5s of no new messages, ONE voice note
sent with the final answer. No tool outputs, no text, just voice.
- Text received: normal text reply, no voice.
Atomic debounce: multiple spawned tasks race but only one can extract
the pending message (remove-inside-lock pattern). Prevents duplicate
voice notes.
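The remove-inside-lock pattern in miniature (types are placeholders):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type PendingVoice = Arc<Mutex<HashMap<String, String>>>;

// Several spawned debounce tasks may race, but HashMap::remove under the
// lock yields the pending message to exactly one winner, so only one
// voice note is ever synthesized per chat.
fn try_take_pending(pending: &PendingVoice, chat_id: &str) -> Option<String> {
    pending.lock().unwrap().remove(chat_id)
}
```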
* fix(whatsapp): voice replies send both text and voice note
Voice note in → text replies sent normally in real-time PLUS one
voice note with the final answer after 10s debounce. Only substantive
natural-language messages are voiced (tool outputs, URLs, JSON, code
blocks filtered out). Longer debounce (10s) ensures the agent
completes its full tool chain before the voice note fires.
Text in → text out only, no voice.
* fix(channels): suppress tool narration and ack reactions
- Add system prompt instruction telling the agent to NEVER narrate
tool usage (no "Let me fetch..." or "I will use http_request...")
- Disable ack_reactions (emoji reactions on incoming messages)
- Users see only the final answer, no intermediate steps
* docs(claude): add full CONTRIBUTING.md guidelines to CLAUDE.md
Add PR template requirements, code naming conventions, architecture
boundary rules, validation commands, and branch naming guidance
directly to CLAUDE.md for AI assistant reference.
* fix(docs): add blank lines around headings in CLAUDE.md for markdown lint
* fix(channels): strengthen tool narration suppression and fix large_futures
- Move anti-narration instruction to top of channel system prompt
- Add emphatic instruction for WhatsApp/voice channels specifically
- Add outbound message filter to strip tool-call-like patterns (⏳, 🔧)
- Box::pin the two-phase heartbeat agent::run call (16664 bytes on Linux)
* feat(channels): add Reddit, Bluesky, and generic Webhook adapters
- Reddit: OAuth2 polling for mentions/DMs/replies, comment and DM sending
- Bluesky: AT Protocol session auth, notification polling, post replies
- Webhook: Axum HTTP server for inbound, configurable outbound POST/PUT
- All three follow existing channel patterns with tests
* fix(channels): use neutral test fixtures and improve test naming in webhook
* feat(tools): add Google Workspace CLI (gws) integration
Adds GoogleWorkspaceTool for interacting with Google Drive, Sheets,
Gmail, Calendar, Docs, and other Workspace services via CLI.
- Config-gated (google_workspace.enabled)
- Service allowlist for restricted access
- Requires shell access for CLI delegation
- Input validation against shell injection
- Wrong-type rejection for all optional parameters
- Config validation for allowed_services (empty, duplicate, malformed)
- Registered in integrations registry and CLI discovery
Closes #2986
* style: fix cargo fmt + clippy violations
* feat(google-workspace): expand config with auth, rate limits, and audit settings
* fix(tools): define missing GWS_TIMEOUT_SECS constant
* fix: Box::pin large futures and resolve duplicate Default impl
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(stt): add multi-provider STT with TranscriptionProvider trait
Refactors single-endpoint transcription to support multiple providers:
Groq (existing), OpenAI Whisper, Deepgram, AssemblyAI, and Google Cloud
Speech-to-Text. Adds TranscriptionManager for provider routing with
backward-compatible config fields.
* style: fix cargo fmt + clippy violations
* fix: Box::pin large futures and resolve merge conflicts with master
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(providers): add Claude Code, Gemini CLI, and KiloCLI subprocess providers
Adds three new local subprocess-based providers for AI CLI tools.
Each provider spawns the CLI as a child process, communicates via
stdin/stdout pipes, and parses responses into ChatResponse format.
* fix: resolve clippy unnecessary_debug_formatting and rustfmt violations
* fix: resolve remaining clippy unnecessary_debug_formatting in CLI providers
* fix(providers): add AiAgent CLI category for subprocess providers
The schedule field in cron_add used a bare {"type":"object"} with a
description string encoding a tagged union in pseudo-notation. The patch
field in cron_update was an opaque {"type":"object"} despite CronJobPatch
having nine fully-typed fields. Both gaps cause weaker instruction-following
models to produce malformed or missing nested JSON when invoking these tools.
Changes:
- cron_add: expand schedule into a oneOf discriminated union with explicit
properties and required fields for each variant (cron/at/every), matching
the Schedule enum in src/cron/types.rs exactly
- cron_add: add descriptions to all previously undocumented top-level fields
- cron_add: expand delivery from a bare inline comment to fully-specified
properties with per-field descriptions
- cron_update: expand patch from opaque object to full properties matching
CronJobPatch (name, enabled, command, prompt, model, session_target,
delete_after_run, schedule, delivery)
- cron_update: schedule inside patch mirrors the same oneOf expansion
- Both: add inline NOTE comments flagging that oneOf is correct for
OpenAI-compatible APIs but SchemaCleanr::clean_for_gemini must be
applied if Gemini native tool calling is ever wired up
- Both: add schema-shape tests using the existing test_config/test_security
helper pattern, covering oneOf variant structure, required fields, and
delivery channel enum completeness
No behavior changes. No new dependencies. Backward compatible: the runtime
deserialization path (serde on Schedule/CronJobPatch) is unchanged.
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Add workflow_call triggers to pub-scoop.yml and pub-aur.yml so the
stable release workflow can invoke them automatically after publish.
Wire scoop and aur jobs into release-stable-manual.yml as post-publish
steps (parallel with tweet), gated on publish success.
Update ci-map.md trigger docs to reflect auto-called behavior.
* fix(tools): wire ActivatedToolSet into tool dispatch and spec advertisement
When deferred MCP tools are activated via tool_search, they are stored
in ActivatedToolSet but never consulted by the tool call loop.
tool_specs is built once before the iteration loop and never refreshed,
so the provider API tools[] parameter never includes activated tools.
find_tool only searches the static registry, so execution dispatch also
fails silently.
Thread Arc<Mutex<ActivatedToolSet>> from creation sites through to
run_tool_call_loop. Rebuild tool_specs each iteration to merge base
registry specs with activated specs. Add fallback in execute_one_tool
to check the activated set when the static registry lookup misses.
Change ActivatedToolSet internal storage from Box<dyn Tool> to
Arc<dyn Tool> so we can clone the Arc out of the mutex guard before
awaiting tool.execute() (std::sync::MutexGuard is not Send).
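Sketch of the Send fix: clone the Arc inside a short-lived lock scope, then await outside it (types are illustrative):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

trait Tool: Send + Sync {
    fn name(&self) -> &str;
}

// Holding a std::sync::MutexGuard across .await would make the future
// non-Send; cloning the Arc out first keeps the guard synchronous.
async fn execute_activated(
    activated: &Arc<Mutex<HashMap<String, Arc<dyn Tool>>>>,
    name: &str,
) -> Option<String> {
    let tool: Arc<dyn Tool> = {
        let guard = activated.lock().unwrap();
        guard.get(name)?.clone()
    }; // guard dropped here, before any await point
    // tool.execute(...).await would go here in the real code.
    Some(tool.name().to_string())
}
```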
* fix(tools): add activated_tools field to new ChannelRuntimeContext test site
Both entries had hardcoded |_| IntegrationStatus::Available, ignoring
the live config entirely. Users with cron.enabled = true or
browser.enabled = true saw 'Available' on the /integrations dashboard
card instead of 'Active'.
Root cause: status_fn closures did not capture the Config argument.
Fix: replace the |_| stubs with |c| closures that check c.cron.enabled
and c.browser.enabled respectively, matching the pattern used by every
other wired entry in the registry (Telegram, Discord, Shell, etc.).
What did NOT change: ComingSoon entries, always-Active entries (Shell,
File System), platform entries, or any other registry logic.
* feat(security): add Merkle hash-chain audit trail
Each audit entry now includes a SHA-256 hash linking it to the previous
entry (entry_hash, prev_hash, sequence), forming a tamper-evident chain.
Modifying any entry invalidates all subsequent hashes.
- Chain fields added to AuditEvent with #[serde(default)] for backward compat
- AuditLogger tracks chain state and recovers from existing logs on restart
- verify_chain() validates hash linkage, sequence continuity, and integrity
- Five new tests: genesis seed, multi-entry verify, tamper detection,
sequence gap detection, and cross-restart chain recovery
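Sketch of the chain linkage, assuming the sha2 and hex crates; field names follow this commit, and entry serialization is simplified to a payload string:

```rust
use sha2::{Digest, Sha256};

struct AuditEvent {
    sequence: u64,
    payload: String,
    prev_hash: String,
    entry_hash: String,
}

// Each entry hashes its payload together with the previous entry's hash,
// so editing any entry invalidates every hash after it.
fn entry_digest(prev_hash: &str, sequence: u64, payload: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(prev_hash.as_bytes());
    hasher.update(sequence.to_be_bytes());
    hasher.update(payload.as_bytes());
    hex::encode(hasher.finalize())
}

fn append(chain: &mut Vec<AuditEvent>, payload: &str) {
    let (sequence, prev_hash) = match chain.last() {
        Some(last) => (last.sequence + 1, last.entry_hash.clone()),
        None => (0, "genesis".to_string()), // genesis seed
    };
    let entry_hash = entry_digest(&prev_hash, sequence, payload);
    chain.push(AuditEvent { sequence, payload: payload.to_string(), prev_hash, entry_hash });
}

fn verify_chain(chain: &[AuditEvent]) -> bool {
    let mut prev = "genesis".to_string();
    for (i, e) in chain.iter().enumerate() {
        if e.sequence != i as u64
            || e.prev_hash != prev
            || e.entry_hash != entry_digest(&prev, e.sequence, &e.payload)
        {
            return false; // tampering or a sequence gap detected
        }
        prev = e.entry_hash.clone();
    }
    true
}
```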
* fix(security): replace personal name with neutral label in audit tests
Remove duplicate tool listing from XmlToolDispatcher::prompt_instructions()
since tool listing is already handled by ToolsSection in prompt.rs. The
method now only emits the XML protocol envelope.
Also fix UTF-8 char boundary panics in memory consolidation truncation by
using char_indices() instead of manual byte-boundary scanning.
Fixes #3643
Supersedes #3678
Co-authored-by: TJUEZ <TJUEZ@users.noreply.github.com>
The Rust build expects web/dist to exist (static assets). Track an empty
placeholder via web/dist/.gitkeep and adjust ignore rules to keep build
artifacts ignored while allowing the placeholder file.
Made-with: Cursor
Adds audio message detection and transcription to WhatsApp Web channel.
Voice messages (PTT) are downloaded, transcribed via the existing
transcription subsystem (Groq Whisper), and delivered as text content.
- TranscriptionConfig field with builder pattern
- Duration limit enforcement before download
- MIME type mapping for audio formats
- Graceful error handling (skip on failure)
- Preserves full retry/reconnect state machine from master
* fix(channel): resolve multi-room reply routing regression (#3224)
PR #3224 (f0f0f808, "feat(matrix): add multi-room support") changed the
channel name format in matrix.rs from "matrix" to "matrix:!roomId", but
the channel lookup in mod.rs still does an exact match against
channels_by_name, which is keyed by Channel::name() (returns "matrix").
This mismatch causes target_channel to always resolve to None for Matrix
messages, silently dropping all replies.
Fix: fall back to a prefix match on the base channel name (before ':')
when the exact lookup fails. This preserves multi-room conversation
isolation while correctly routing replies to the originating channel.
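The fallback in miniature (the map value type is a placeholder):

```rust
use std::collections::HashMap;

// Exact match first; otherwise match on the base name before ':' so
// "matrix:!roomId" resolves to the channel registered as "matrix".
fn resolve_channel<'a>(
    channels_by_name: &'a HashMap<String, String>,
    name: &str,
) -> Option<&'a String> {
    channels_by_name.get(name).or_else(|| {
        let base = name.split(':').next().unwrap_or(name);
        channels_by_name.get(base)
    })
}
```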
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: apply cargo fmt to channel routing fix
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Sandeep (Claude) <sghael+claude@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tools): add browser delegation tool for corporate web app interaction
Adds BrowserDelegateTool that delegates browser-based tasks to Claude Code
(or other browser-capable CLIs) for interacting with corporate tools
(Teams, Outlook, Jira, Confluence) via browser automation. Includes domain
validation (allow/blocklist), task templates, Chrome profile persistence
for SSO sessions, and timeout management.
* fix: resolve clippy violation in browser delegation tool
* fix(browser-delegate): validate URLs embedded in task text against domain policy
Scan the task text for http(s):// URLs using regex and validate each
against the allow/block domain lists before forwarding to the browser
CLI subprocess. This prevents bypassing domain restrictions by
embedding blocked URLs in the task parameter.
* fix(browser-delegate): constrain URL schemes, gate on runtime, document config
- Add has_shell_access gate so BrowserDelegateTool is only registered on
shell-capable runtimes (skipped with warning on WASM/edge runtimes)
- Add boundary tests for javascript: and data: URL scheme rejection
- URL scheme validation (http/https only) and config docs were already
addressed by a prior commit on this branch
* fix(tools): address CodeRabbit review findings for browser delegation
Remove dead `max_concurrent_tasks` config field and expand doc comments
on the `[browser_delegate]` config section in schema.rs.
When the LLM hallucinates an invalid model ID through the
model_routing_config tool's set_default action, the invalid model gets
persisted to config.toml. The channel hot-reload then picks it up and
every subsequent message fails with a non-retryable 404, permanently
killing the connection with no user recovery path.
Fix with two layers of defense:
1. Tool probe-and-rollback: after saving the new model, send a minimal
chat request to verify the model is accessible. If the API returns a
non-retryable error (404, auth failure, etc.), automatically restore
the previous config and return a failure notice to the LLM.
2. Channel safety net: in maybe_apply_runtime_config_update, reject
config reloads when warmup fails with a non-retryable error instead
of applying the broken config anyway.
Co-authored-by: Christian Pojoni <christian.pojoni@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Some OpenAI models (o1, o3, o4, gpt-5 variants) only accept
temperature=1.0 and return errors for other values such as 0.7. This
change automatically adjusts the temperature parameter based on the
model being used.
Changes:
- Add adjust_temperature_for_model() function to detect reasoning models
- Apply temperature adjustment in chat_with_system(), chat(), and chat_with_tools()
- Preserve user-specified temperature for standard models (gpt-4o, gpt-4-turbo, etc.)
- Force temperature=1.0 for reasoning models (o1, o3, o4, gpt-5, gpt-5-mini, gpt-5-nano, gpt-5.x-chat-latest)
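A sketch of the adjustment; prefix matching is an assumption, and the real detection may be more precise:

```rust
/// Reasoning model families are pinned to 1.0; other models keep the
/// caller's requested temperature.
fn adjust_temperature_for_model(model: &str, requested: f64) -> f64 {
    const REASONING_PREFIXES: &[&str] = &["o1", "o3", "o4", "gpt-5"];
    if REASONING_PREFIXES.iter().any(|p| model.starts_with(p)) {
        1.0 // these models reject any other temperature value
    } else {
        requested // standard models (gpt-4o, gpt-4-turbo, ...) unchanged
    }
}
```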
Testing:
- Add 7 unit tests covering reasoning models, standard models, and edge cases
- All tests pass successfully
- Empirical testing documented in docs/openai-temperature-compatibility.md
Impact:
- Fixes temperature errors when using o1, o3, o4, and gpt-5 model families
- No breaking changes - transparent adjustment for end users
- Standard models continue to work with flexible temperature values
Risk: Low - isolated change within OpenAI provider, well-tested
Rollback: Revert this commit to restore previous behavior
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Add HandStarted, HandCompleted, and HandFailed event variants to
ObserverEvent, and HandRunDuration, HandFindingsCount, HandSuccessRate
metric variants to ObserverMetric. Update all observer backends (log,
noop, verbose, prometheus, otel) to handle the new variants with
appropriate instrumentation. Prometheus backend registers hand_runs
counter, hand_duration histogram, and hand_findings counter. OTel
backend creates spans and records metrics for hand runs.
The /memory dashboard page rendered a black screen when MemoryCategory::Custom
was serialized by serde's derived impl as a tagged object {"custom":"..."} but
the frontend expected a plain string. No navigation was possible without using
the browser Back button.
Changes:
- src/memory/traits.rs: replace derived serde impls with custom serialize
(delegates to Display, emits plain snake_case string) and deserialize
(parses known variants by name, falls through to Custom(s) for unknown).
Adds memory_category_serde_uses_snake_case and memory_category_custom_roundtrip
tests. No persistent storage migration needed — all backends (SQLite, Markdown,
Postgres) use their own category_to_str/str_to_category helpers and never
read serde-serialized category values back from disk.
- web/src/App.tsx: export ErrorBoundary class so render crashes surface a
recoverable UI instead of a black screen. Adds aria-live="polite" to the
pairing error paragraph for screen reader accessibility.
- web/src/components/layout/Layout.tsx: wrap Outlet in ErrorBoundary keyed
by pathname so the navigation shell stays mounted during a page crash and
the boundary resets on route change.
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
The install.sh script was missing libssl-dev package for Debian/Ubuntu
systems. This caused compilation failures on Debian 12 and other
Debian-based distributions when building ZeroClaw from source.
The package is already included for other distributions:
- Alpine: openssl-dev
- Fedora/RHEL: openssl-devel
- Arch: openssl
This change adds libssl-dev to the apt-get install command to ensure
OpenSSL headers are available during compilation.
Fixes #2914
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
When mention_only is enabled, the bot correctly requires an @mention in
guild (server) channels. However, Direct Messages have no guild_id and
are inherently private and addressed to the bot — requiring a @mention
in a DM is never correct and silently drops all DM messages.
Changes:
- src/channels/discord.rs: detect DMs via absence of guild_id in the
gateway payload, compute effective_mention_only = self.mention_only && !is_dm,
and pass that to normalize_incoming_content instead of self.mention_only.
DMs bypass the mention gate; guild messages retain existing behaviour.
- Adds three tests: DM bypasses mention gate, guild message without mention
is rejected, guild message with mention passes and strips the mention tag.
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
The BuildKit cache mount persists .fingerprint and .d files across
source tree changes, causing compilation errors when imports change.
Clear zeroclawlabs-specific artifacts before building to ensure
a clean recompile of the main crate while preserving dep caches.
* fix(agent): remove bare URL → curl fallback in GLM-style tool call parser
The `parse_glm_style_tool_calls` function had a "Plain URL" fallback
that converted any bare URL line (e.g. `https://example.com`) into a
`shell` tool call running `curl -s '<url>'`. This caused:
- False positives: normal URLs in LLM replies misinterpreted as tool calls
- Swallowed replies: text with URLs not forwarded to the channel
- Unintended shell commands: `curl` executed without user intent
Explicit GLM-format tool calls like `browser_open/url>https://...` and
`shell/command>...` are unaffected — only the bare URL catch-all is
removed.
* style: cargo fmt
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(channels): add X/Twitter and Mochat channel integrations
Add two new channel implementations to close competitive gaps:
- X/Twitter: Twitter API v2 with mentions polling, tweet threading
(auto-splits at 280 chars), DM support, and rate limit handling
- Mochat: HTTP polling-based integration with Mochat customer service
platform, configurable poll interval, message dedup
Both channels follow the existing Channel trait pattern with full
config schema integration, health checks, and dedup.
Closes competitive gap: NanoClaw had X/Twitter, Nanobot had Mochat.
* fix(channels): use write! instead of format_push_string for clippy
Replace url.push_str(&format!(...)) with write!(url, ...) to satisfy
clippy::format_push_string lint on CI.
* fix(channels): rename reply_to parameter to avoid legacy field grep
The component test source_does_not_use_legacy_reply_to_field greps
for "reply_to:" in source files. Rename the parameter to
reply_tweet_id to pass this check.
* fix(agent): strip vision markers from history for non-vision providers
When a user sends an image via Telegram to a non-vision provider, the
`[IMAGE:/path]` marker gets stored in the JSONL session file. Previously,
the rollback only removed it from in-memory history, not from the JSONL
file. On restart, the marker was reloaded and permanently broke the
conversation.
Two fixes:
1. `rollback_orphan_user_turn` now also calls `remove_last` on the
session store so the poisoned entry is removed from disk.
2. When building history for a non-vision provider, `[IMAGE:]` markers
are stripped from older history messages (and empty turns are dropped).
Fixes #3674
* fix(agent): only strip vision markers from older history, not current message
The initial fix stripped [IMAGE:] markers from all prior_turns including
the current message, which caused the vision check to never fire. Now
only strip from turns before the last one (the current request), so
fresh image sends still get a proper vision capability error.
* fix(ci): decouple tweet from Docker push in release workflows
Remove Docker from the tweet job's dependency chain in both beta and
stable release workflows. Docker multi-platform builds are slow and
can be cancelled by concurrency groups, which was blocking the tweet
from ever firing. The tweet announces the GitHub Release, not the
Docker image.
* fix(qq): send markdown messages instead of plain text
Change msg_type from 0 (plain text) to 2 (markdown) and wrap content
in a markdown object per QQ's API documentation. This ensures markdown
formatting (bold, italic, code blocks, etc.) renders properly in QQ
clients instead of displaying raw syntax.
Fixes #3647
Add env var resolution for AiHubMix (AIHUBMIX_API_KEY) and SiliconFlow
(SILICONFLOW_API_KEY) so users can authenticate via environment variables.
Add factory and credential resolution tests for AiHubMix, SiliconFlow,
and Codex OAuth to ensure all provider aliases work correctly.
Remove Docker from the tweet job's dependency chain in both beta and
stable release workflows. Docker multi-platform builds are slow and
can be cancelled by concurrency groups, which was blocking the tweet
from ever firing. The tweet announces the GitHub Release, not the
Docker image.
Both publish-crates-auto.yml and publish-crates.yml now treat
"already exists" from cargo publish as success instead of failing
the workflow. This prevents false failures when the auto-sync and
stable release workflows race or when re-running a publish.
- Tweet workflow: stable releases always tweet (no feature-check gate);
beta tweets now also trigger on fix() commits, not just feat()
- Beta release: use cancel-in-progress to avoid queued runs getting
stuck when rapid merges hit the concurrency group
Adds Arch Linux distribution via AUR:
- dist/aur/PKGBUILD: package build template with cargo dist profile
- dist/aur/.SRCINFO: AUR metadata
- .github/workflows/pub-aur.yml: manual workflow to push to AUR
Adds Windows package distribution via Scoop:
- dist/scoop/zeroclaw.json: manifest template with checkver/autoupdate
- .github/workflows/pub-scoop.yml: manual workflow to update Scoop bucket
Re-adds the manual-dispatch workflow for bumping the zeroclaw formula
in Homebrew/homebrew-core via a bot-owned fork. Improved from the
previously removed version with safer env-var handling.
Requires secrets: HOMEBREW_CORE_BOT_TOKEN or HOMEBREW_UPSTREAM_PR_TOKEN
Requires variables: HOMEBREW_CORE_BOT_FORK_REPO, HOMEBREW_CORE_BOT_EMAIL
Drop crates-io from the tweet job's needs so the release announcement
goes out with GitHub release, Docker, and website — not blocked by
crates.io publish timing.
Also fix the duplicate detection: the previous curl-based check used
a URL that didn't match the actual crate, causing cargo publish to
hit "already exists" and fail the whole job. Now we just run cargo
publish and treat "already exists" as success.
- Add HeartbeatMetrics struct with uptime, consecutive success/failure
counts, EMA tick duration, and total ticks
- Add compute_adaptive_interval() for exponential backoff on failures
and faster polling when high-priority tasks are present
- Add SQLite-backed task run history (src/heartbeat/store.rs) mirroring
the cron/store.rs pattern with output truncation and pruning
- Add dead-man's switch that alerts if heartbeat stops ticking
- Wire metrics, history recording, and adaptive sleep into daemon worker
- Add config fields: adaptive, min/max_interval_minutes,
deadman_timeout_minutes, deadman_channel, deadman_to, max_run_history
- All new fields are backward-compatible with serde defaults
The aarch64-unknown-linux-gnu build runs on ubuntu-22.04 (glibc 2.35)
but was restoring cached build-script binaries compiled on ubuntu-latest
(24.04, glibc 2.39), causing GLIBC_2.39 not found failures.
Adding prefix-key with the OS image and target ensures each matrix entry
gets its own isolated cache, preventing cross-image glibc mismatches.
Adds /target-*/ to .gitignore to prevent alternative Cargo build
directories (e.g. target-ms365, target-pr) from being tracked.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Remove docs/i18n/vi/ — duplicate of canonical docs/vi/ per docs-contract
- Remove docs/i18n/el/ — orphan Greek locale, not in supported locales list
- Update docs/i18n/README.md to reflect correct canonical paths
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* style: cargo fmt Box::pin calls in cron scheduler
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(install): correct version display, show pairing code, bump to 0.4.0
- Reorder ZEROCLAW_BIN resolution to prefer ~/.cargo/bin over PATH,
preventing stale binaries from shadowing the fresh install
- Sync binary to ~/.local/bin after cargo install
- Fetch and display gateway pairing code via `gateway get-paircode`
after service restart instead of referencing non-existent output
- Update fallback message to actionable CLI command
- Bump Cargo.toml version from 0.3.4 to 0.4.0
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update Cargo.lock for 0.4.0 version bump
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Several recently-added Config fields (data_retention, cloud_ops,
conversational_ai, security_ops) were missing #[serde(default)],
causing deserialization failures when those sections are absent
from config files. Also fixes security field whose #[serde(default)]
was accidentally consumed by the backup doc comment.
Fixes test failures: agent_config_deserializes and
browser_config_backward_compat_missing_section.
* feat(tools): add cloud transformation accelerator tools
Add cloud_ops and cloud_patterns tools providing read-only cloud
transformation analysis: IaC review, migration assessment, cost
analysis, and Well-Architected Framework architecture review.
Includes CloudOpsConfig, SecurityOpsConfig, and ConversationalAiConfig
schema additions, Box::pin fixes for recursive async in cron scheduler,
and approval_manager field in ChannelRuntimeContext test constructors.
Original work by @rareba. Rebased on latest master with conflict
resolution (kept SwarmConfig/SwarmStrategy exports, swarm tool
registration, and approval_manager in test constructors).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: cargo fmt Box::pin calls
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add BackupTool for creating, listing, verifying, and restoring
timestamped workspace backups with SHA-256 manifest integrity
checking. Add DataManagementTool for retention status, time-based
purge, and storage statistics. Both tools are config-driven via
new BackupConfig and DataRetentionConfig sections.
Original work by @rareba. Rebased on latest master with conflict
resolution for SwarmConfig/SwarmStrategy exports and swarm tool
registration, and added missing approval_manager fields in
ChannelRuntimeContext test constructors.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(security): add MCSS security operations tool
Add managed cybersecurity service (MCSS) tool with alert triage,
incident response playbook execution, vulnerability scan parsing,
and security report generation. Includes SecurityOpsConfig, playbook
engine with approval gating, vulnerability scoring, and full test
coverage. Also fixes pre-existing missing approval_manager field in
ChannelRuntimeContext test constructors.
Original work by @rareba. Supersedes #3599.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add SecurityOpsConfig to re-exports, fix test constructors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add a new read-only project_intel tool that provides:
- Status report generation (weekly/sprint/month)
- Risk scanning with configurable sensitivity
- Client update drafting (formal/casual, client/internal)
- Sprint summary generation
- Heuristic effort estimation
Includes multi-language report templates (EN, DE, FR, IT),
ProjectIntelConfig schema with validation, and comprehensive tests.
Also fixes missing approval_manager field in 4 ChannelRuntimeContext
test constructors.
Supersedes #3591 — rebased on latest master. Original work by @rareba.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add a new `nodes` module with HMAC-SHA256 authenticated transport for
secure inter-node communication over standard HTTPS. Includes replay
protection via timestamped nonces and constant-time signature
comparison.
Also adds `NodeTransportConfig` to the config schema and fixes missing
`approval_manager` field in four `ChannelRuntimeContext` test
constructors that failed compilation on latest master.
Original work by @rareba. Rebased on latest master to resolve merge
conflicts (SwarmConfig/SwarmStrategy exports, duplicate MCP validation,
test constructor fields).
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add Microsoft 365 tool providing access to Outlook mail, Teams messages,
Calendar events, OneDrive files, and SharePoint search via Microsoft
Graph API. Includes OAuth2 token caching (client credentials and device
code flows), security policy enforcement, and config validation.
Rebased on latest master, resolving conflicts with SwarmConfig exports
and adding approval_manager to ChannelRuntimeContext test constructors.
Original work by @rareba.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add Notion integration with two components:
- NotionChannel: polls a Notion database for tasks with configurable
status properties, concurrency limits, and stale task recovery
- NotionTool: provides CRUD operations (query_database, read_page,
create_page, update_page) for agent-driven Notion interactions
Includes config schema (NotionConfig), onboarding wizard support,
and full unit test coverage for both channel and tool.
Supersedes #3609 — rebased on latest master to resolve merge conflicts
with swarm feature additions in config/mod.rs.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Wrap all crate::agent::run() calls with Box::pin() across scheduler,
daemon, gateway tests, and main.rs to satisfy clippy::large_futures.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Commit 811fab3b added is_service_environment() as a top-level function and
called it from two sites. The call at line 445 is at module scope and resolves
fine. The call at line 1473 is inside mod native_backend, which is a child
module — Rust does not implicitly import parent-scope items, so the unqualified
name fails with E0425 (cannot find function in this scope).
Fix: prefix the call with super:: so it resolves to the parent module's
function, matching how mod native_backend already imports other parent items
(e.g. use super::BrowserAction).
The browser-native feature flag is required to reproduce:
cargo check --features browser-native # fails without this fix
cargo check --features browser-native # clean with this fix
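For reference, a condensed sketch of the scoping rule (the function name matches the commit; the body and child-module item are stand-ins):

```rust
// Parent-module item, as added by 811fab3b (body is a stand-in here).
fn is_service_environment() -> bool {
    std::env::var_os("INVOCATION_ID").is_some()
}

mod native_backend {
    pub fn needs_sandbox_flags() -> bool {
        // A bare `is_service_environment()` here fails with E0425: child
        // modules do not inherit the parent module's scope. The super::
        // prefix resolves the name against the parent explicitly.
        super::is_service_environment()
    }
}
```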
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
* feat(security): add Nevis IAM integration for SSO/MFA authentication
Add NevisAuthProvider supporting OAuth2/OIDC token validation (local JWKS +
remote introspection), FIDO2/passkey/OTP MFA verification, session management,
and health checks. Add IamPolicy engine mapping Nevis roles to ZeroClaw tool
and workspace permissions with deny-by-default enforcement and audit logging.
Add NevisConfig and NevisRoleMappingConfig to config schema with client_secret
wired through SecretStore encrypt/decrypt. All features disabled by default.
Rebased on latest master to resolve merge conflicts in security/mod.rs (redact
function) and config/schema.rs (test section).
Original work by @rareba. Supersedes #3593.
Co-Authored-By: rareba <5985289+rareba@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: cargo fmt Box::pin calls
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: rareba <5985289+rareba@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tunnel): add OpenVPN tunnel provider
Add OpenVPN as a new tunnel provider alongside cloudflare, tailscale,
ngrok, and custom. Includes config schema, validation, factory wiring,
and comprehensive unit tests.
Co-authored-by: rareba <rareba@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add missing approval_manager field to ChannelRuntimeContext constructors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: rareba <rareba@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add workspace profile management, security boundary enforcement, and
a workspace management tool for isolated client engagements.
Original work by @rareba. Supersedes #3597 — rebased on latest master.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Box::pin the cron_run execute_job_now call to satisfy clippy::large_futures
- Add missing approval_manager field to 4 query_classification test constructors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add an optional `allowed_tools` parameter that restricts which tools are
available to the agent. When `Some(list)`, only tools whose name appears
in the list are retained; when `None`, all tools remain available
(backward compatible). This enables fine-grained capability control for
cron jobs, heartbeat tasks, and CLI invocations.
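Roughly, the retention logic looks like this (a minimal sketch with a stand-in `Tool` trait, not the actual signatures):

```rust
// Stand-in trait; the real Tool trait has more methods.
trait Tool {
    fn name(&self) -> &str;
}

fn apply_allowed_tools(
    tools: Vec<Box<dyn Tool>>,
    allowed_tools: Option<&[String]>,
) -> Vec<Box<dyn Tool>> {
    match allowed_tools {
        // Some(list): retain only tools whose name appears in the list.
        Some(list) => tools
            .into_iter()
            .filter(|tool| list.iter().any(|name| name.as_str() == tool.name()))
            .collect(),
        // None: every tool stays available (backward compatible).
        None => tools,
    }
}
```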
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove the --interactive flag from `zeroclaw onboard`. The command now
auto-detects whether stdin/stdout are a TTY: if yes and no provider
flags are given, it launches the full interactive wizard; otherwise it
runs the quick (scriptable) setup path.
This means all three install methods work with a single flow:
curl -fsSL https://zeroclawlabs.ai/install.sh | bash
cargo install zeroclawlabs && zeroclaw onboard
docker run … zeroclaw onboard --api-key …
When zeroclaw runs as a service, the process inherits a minimal
environment without HOME, DISPLAY, or user namespaces. Headless
browsers (Chromium/Firefox) need HOME for profile/cache dirs and
fail with sandbox errors without user namespaces.
- Detect service environment via INVOCATION_ID, JOURNAL_STREAM,
or missing HOME on Linux
- Auto-apply --no-sandbox and --disable-dev-shm-usage for Chrome
in service mode
- Set HOME fallback and CHROMIUM_FLAGS on agent-browser commands
- systemd unit: add Environment=HOME=%h and PassEnvironment
- OpenRC script: export HOME=/var/lib/zeroclaw with start_pre()
to create the directory
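A minimal sketch of the detection heuristic described in the first bullet (the real check may differ in detail):

```rust
// systemd sets INVOCATION_ID and JOURNAL_STREAM for managed units; a missing
// HOME on Linux is treated as a service-like environment.
fn is_service_environment() -> bool {
    let systemd = std::env::var_os("INVOCATION_ID").is_some()
        || std::env::var_os("JOURNAL_STREAM").is_some();
    let no_home = cfg!(target_os = "linux") && std::env::var_os("HOME").is_none();
    systemd || no_home
}
```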
Closes #3584
The QueryClassificationConfig was parsed from config but never applied
during channel message processing. This adds the query_classification
field to ChannelRuntimeContext and invokes the classifier in
process_channel_message to override the route when a classification
rule matches a model_routes hint.
Closes #3579
* fix: exclude name field from Mistral tool_calls (#3572)
Add skip_serializing_if to the compatibility fields (name, arguments,
parameters) on the ToolCall struct so they are omitted from the JSON
payload when None. Mistral's API returns 422 "Extra inputs are not
permitted" when these extra null fields are present in tool_calls.
* fix: format serde attribute for CI lint compliance
The dashboard WebSocket client was only sending ['zeroclaw.v1'] as the
protocols parameter, omitting the bearer token subprotocol. When
require_pairing = true, the server extracts the token from
Sec-WebSocket-Protocol as a fallback (browsers cannot set custom
headers on WebSocket connections). Without the bearer.<token> entry
in the protocols array, subprotocol-based authentication always failed.
Include `bearer.<token>` in the protocols array when a token is
available, matching the server's extract_ws_token() expectation.
Closes #3011
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add `initial_prompt: Option<String>` to `TranscriptionConfig` and pass
it as the `prompt` field in the Whisper API multipart POST when present.
This lets users bias transcription toward expected vocabulary (proper
nouns, technical terms) via the config file.
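Sketch of the conditional field, assuming a reqwest multipart client posting to OpenAI's whisper-1 endpoint (everything except the `prompt` field name is illustrative):

```rust
async fn transcribe(
    client: &reqwest::Client,
    api_key: &str,
    audio: Vec<u8>,
    initial_prompt: Option<String>,
) -> reqwest::Result<reqwest::Response> {
    let mut form = reqwest::multipart::Form::new()
        .part(
            "file",
            reqwest::multipart::Part::bytes(audio).file_name("voice.ogg"),
        )
        .text("model", "whisper-1");
    // Attach the vocabulary-biasing prompt only when configured.
    if let Some(prompt) = initial_prompt {
        form = form.text("prompt", prompt);
    }
    client
        .post("https://api.openai.com/v1/audio/transcriptions")
        .bearer_auth(api_key)
        .multipart(form)
        .send()
        .await
}
```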
Closes #2881
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Users who installed via `./install.sh --docker` had no documented way to
restart the container after stopping it. Add clear lifecycle instructions
(stop, start, restart, logs, health check) to both the bootstrap guide and
the operations runbook, covering docker-compose, manual `docker run`, and
Podman-specific flags.
Closes #3474
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The WhatsApp Web QR code was not shown during onboarding channel launch
because the wizard allowed configuring WhatsApp Web mode even when the
binary was built without the `whatsapp-web` feature flag. At runtime,
the channel was silently skipped with only a tracing::warn that most
users never see.
- Add compile-time warning in the onboarding wizard when WhatsApp Web
mode is selected but the feature is not compiled in
- Add eprintln! in collect_configured_channels so users see a visible
terminal warning when the feature is missing at startup
Closes #3577
Introduce the Hands system — autonomous agent packages that run on
schedules and accumulate knowledge over time. Each Hand maintains a
rolling context of findings across runs so the agent grows smarter
with every execution.
This PR adds:
- Hand definition type (TOML-deserializable, reuses cron Schedule)
- HandRun / HandRunStatus for execution records
- HandContext for rolling cross-run knowledge accumulation
- File-based persistence (load/save context as JSON)
- Directory-based Hand loading from ~/.zeroclaw/hands/*.toml
- 20 unit tests covering deserialization, persistence roundtrip,
history capping, fact deduplication, and error handling
Execution integration with the agent loop is deferred to a follow-up.
Channel-driven runs (Telegram, Matrix, Discord, etc.) previously bypassed
the ApprovalManager entirely — `None` was passed into the tool-call loop,
so `auto_approve`, `always_ask`, and supervised approval checks were
silently skipped for all non-CLI execution paths.
Add a non-interactive mode to ApprovalManager that enforces the same
autonomy config policies but auto-denies tools requiring interactive
approval (since no operator is present on channel runs). Specifically:
- Add `ApprovalManager::for_non_interactive()` constructor that creates
a manager which auto-denies tools needing approval instead of prompting
- Add `is_non_interactive()` method so the tool-call loop can distinguish
interactive (CLI prompt) from non-interactive (auto-deny) managers
- Update tool-call loop: non-interactive managers auto-deny instead of
the previous auto-approve behavior for non-CLI channels
- Wire the non-interactive approval manager into ChannelRuntimeContext
so channel runs enforce the full approval policy
- Add 8 tests covering non-interactive approval behavior
Security implications:
- `always_ask` tools are now denied on channels (previously bypassed)
- Supervised-mode unknown tools are now denied on channels (previously
bypassed)
- `auto_approve` tools continue to work on channels unchanged
- `full` autonomy mode is unaffected (no approval needed regardless)
- `read_only` mode is unaffected (blocks execution elsewhere)
Closes #3487
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When a tool call fails (security policy block, hook cancellation, user
denial, or execution error), the failure reason is now included in the
progress message sent to the chat channel via on_delta. Previously only
a ❌ icon was shown; now users see the actual reason (e.g. "Command not
allowed by security policy") without needing to check `zeroclaw doctor
traces`.
Closes #3628
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Matrix channel listener was building channel keys as `matrix:<room_id>`,
but the runtime channel mapping expects the plain key `matrix`. This mismatch
caused replies to silently drop in deployments using the Matrix channel.
Closes #3477
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace full-body buffering (`response.text().await`) in
`decode_responses_body()` with incremental `bytes_stream()` chunk
processing. The previous approach held the HTTP connection open until
every byte had arrived; on high-latency links the long-lived connection
would frequently drop mid-read, producing the "error decoding response
body" failure on the first attempt (succeeding only after retry).
Reading chunks incrementally lets each network segment complete within
its own timeout window, eliminating the systematic first-attempt failure.
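The incremental read, sketched (assuming reqwest's `stream` feature plus futures_util and anyhow):

```rust
use futures_util::StreamExt;

// Each awaited chunk completes within its own timeout window instead of
// one long-lived read holding the connection open for the full body.
async fn read_body_incrementally(response: reqwest::Response) -> anyhow::Result<Vec<u8>> {
    let mut body = Vec::new();
    let mut stream = response.bytes_stream();
    while let Some(chunk) = stream.next().await {
        body.extend_from_slice(&chunk?);
    }
    Ok(body)
}
```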
Closes #3544
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Matrix image messages used lowercase `[image: ...]` format instead of
the canonical `[IMAGE:...]` marker used by all other channels (Telegram,
Slack, Discord, QQ, LinQ). This caused Matrix image attachments to
bypass the multimodal vision pipeline which looks for `[IMAGE:...]`.
Closes #3486
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When `block_high_risk_commands = true`, commands like `curl` and `wget`
were unconditionally blocked even if explicitly listed in
`allowed_commands`. This made it impossible to use legitimate API calls
in full autonomy mode.
Now, if a command is explicitly named in `allowed_commands` (not via
the wildcard `*`), it is exempt from the `block_high_risk_commands`
gate. The wildcard entry intentionally does NOT grant this exemption,
preserving the safety net for broad allowlists.
Other security gates (supervised-mode approval, rate limiting, path
policy, argument validation) remain fully enforced.
Closes #3567
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace unsafe byte-index string slicing (`&text[..N]`) with
char-boundary-safe alternatives in memory consolidation and security
redaction to prevent panics when multi-byte UTF-8 characters (e.g.
Chinese/Japanese/Korean) span the slice boundary.
Fixes the same class of bug as the prior fix in `execute_one_tool`
(commit 8fcbb6eb), applied to two remaining instances:
- `src/memory/consolidation.rs`: truncation at byte 4000 and 200
- `src/security/mod.rs`: `redact()` prefix at byte 4
Adds regression tests with CJK input for both locations.
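The boundary-safe pattern, roughly:

```rust
// Truncate without ever splitting a multi-byte UTF-8 character.
fn truncate_at_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    // Walk back from the byte limit until we land on a char boundary,
    // so CJK and other multi-byte characters cannot trigger a panic.
    let mut end = max_bytes;
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}
```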
Closes #3533
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The CLI `cron add` command always routed the second positional argument
through shell security policy validation, which blocked natural language
prompts like "Check server health: disk space, memory, CPU load". This
adds an `--agent` flag to `cron add`, `cron add-at`, `cron add-every`,
and `cron once` so that natural language prompts are correctly stored as
agent jobs without shell command validation.
Closes #3563
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The http_request tool unconditionally blocked all private/LAN hosts with
no opt-out, preventing legitimate use cases like calling a local Home
Assistant instance or internal APIs. This adds an `allow_private_hosts`
config flag (default: false) under `[http_request]` that, when set to
true, skips the private-host SSRF check while still enforcing the domain
allowlist.
Closes #3568
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Comprehensive long-running context upgrades:
- Token-based compaction: replace message-count trigger with token
estimation (~4 chars/token). Compaction fires when estimated tokens
exceed max_context_tokens (default 32K) OR message count exceeds
max_history_messages. Cuts at user-turn boundaries only.
- Persistent sessions: JSONL append-only session files per channel
sender in {workspace}/sessions/. Sessions survive daemon restarts.
Hydrates in-memory history from disk on startup.
- LLM-driven memory consolidation: two-phase extraction after each
conversation turn. Phase 1 writes a timestamped history entry (Daily).
Phase 2 extracts new facts/preferences to Core memory (if any).
Replaces raw message auto-save with semantic extraction.
- New config fields: agent.max_context_tokens (32000),
channels_config.session_persistence (true).
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Upgrade the heartbeat system with four key improvements:
- Two-phase heartbeat: Phase 1 asks the LLM "skip or run?" to save API cost
during quiet periods. Phase 2 executes only the selected tasks.
- Structured task format: `- [priority|status] task text` with
high/medium/low priority and active/paused/completed status.
- Decision intelligence: LLM-driven smart filtering via a structured prompt
at temperature 0.0 for deterministic decisions.
- Delivery routing: auto-detect the best configured channel when no explicit
target is set (telegram > discord > slack > mattermost).
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Validates the aarch64-linux-android release artifact: download, archive
integrity, ELF format, architecture, checksum, and install.sh detection.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The auto-generated What's New and Recent Contributors README sections
have been removed, so the sync-readme workflow and its backing script
are no longer needed.
When the crate was renamed from `zeroclaw` to `zeroclawlabs`, users who
installed via install.sh (which uses `cargo install --path`) had the
binary tracked under the old package name. Later running
`cargo install zeroclawlabs` from crates.io fails with:
error: binary `zeroclaw` already exists as part of `zeroclaw v0.1.9`
The installer now detects and removes stale tracking for the old
`zeroclaw` package name before installing, so both install.sh and
`cargo install zeroclawlabs` work cleanly for upgrades.
The tweet was firing on `release: published` immediately when the GitHub
Release was created, but Docker push, crates.io publish, and website
redeploy were still in-flight. This meant users saw the tweet before
they could actually pull the new Docker image or install from crates.io.
Convert tweet-release.yml and sync-readme.yml to reusable workflows
(workflow_call) and call them as the final jobs in both release
pipelines, gated on completion of docker, crates-io, and
redeploy-website jobs.
Before: gh release create → tweet fires immediately (race condition)
After: gh release create → docker + crates + website → tweet + readme
Add prebuilt binary support for Termux/Android devices:
- Add aarch64-linux-android to stable and beta release matrices with NDK setup
- Detect Termux in install.sh and map to the correct android target
- Add Termux pkg manager support for system dependency installation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove references to --interactive, --security-level, and --onboard
flags that no longer exist in the CLI. Update examples to use current
supported flags.
Closes #3484
Probe /dev/stdin readability before selecting it, and add /proc/self/fd/0
as a fallback for constrained containers where /dev/stdin is inaccessible.
Closes #3482
Add SignalChannel import and match arm in deliver_announcement() so
cron jobs with delivery.channel = "signal" are handled instead of
rejected as unsupported.
Closes #3476
* fix: resolve web dashboard 404 on static assets and SPA fallback
Strip leading slash from asset paths after prefix removal so rust-embed
can find the files. Return 503 with a helpful build hint when index.html
is missing instead of a generic 404.
Closes #3508
* fix: return concrete Response type to fix match arm type mismatch
* feat(channels): add comprehensive channel matrix tests and bump to v0.2.2
Add 50 integration tests covering the full Channel trait contract across
all 16 platforms: identity semantics, threading, draft lifecycle, reactions,
pinning, concurrency, edge cases, and cross-channel routing. Bump version
to 0.2.2.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: fix rustfmt formatting in channel matrix tests
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(lint): remove empty line after doc comment in channel matrix tests
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(ci): auto-sync README What's New and Contributors on release
Replace hardcoded What's New (v0.1.9b) and Recent Contributors sections
with marker-based auto-sync powered by a new GitHub Actions workflow.
The sync-readme workflow runs on each release and updates both sections
from git history.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(ci): auto-sync What's New and Contributors across all 31 READMEs
Add marker comments to all 30 translated README files and update the
sync script to process all README*.md files on each release. Fix regex
to handle empty markers on first run.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
When multiple tool calls execute in a single turn, each tool result was
emitted as a separate role="user" message. Anthropic's API rejects
adjacent messages with the same role, and newer models like
claude-sonnet-4-6 respond with 500 Internal Server Error instead of a
descriptive 400.
Merge consecutive same-role messages in convert_messages() so that
multiple tool_result blocks are combined into one user message, and
consecutive user/assistant messages are also properly coalesced.
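A simplified sketch of the coalescing pass (the real code merges Anthropic content blocks; plain strings stand in here):

```rust
#[derive(Clone, Copy, PartialEq)]
enum Role {
    User,
    Assistant,
}

struct Msg {
    role: Role,
    content: String,
}

fn coalesce(messages: Vec<Msg>) -> Vec<Msg> {
    let mut out: Vec<Msg> = Vec::new();
    for msg in messages {
        match out.last_mut() {
            // Adjacent same-role messages are merged so the API never sees
            // two user (or two assistant) messages in a row.
            Some(prev) if prev.role == msg.role => {
                prev.content.push('\n');
                prev.content.push_str(&msg.content);
            }
            _ => out.push(msg),
        }
    }
    out
}
```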
Fixes #3493
Multi-platform Docker builds (linux/amd64 + linux/arm64) with fat LTO
and codegen-units=1 consistently exceed 30 minutes. Bump to 60m.
Also skip crates.io publish when the version already exists (the
auto-sync workflow may have published it before the stable release
runs).
Nextcloud Talk bot webhooks send event type "Create" for new chat
messages, but the parser only accepted "message". This caused all
valid messages to be skipped with "skipping non-message event: Create".
Accept both "message" and "Create" as valid event types.
Closes #3491
Uses GitHub Actions cache with SHA-256 hash of tweet content. If the
exact same tweet text was already posted in a previous run, the tweet
is skipped with a warning. Works for both auto-release and manual
dispatch tweets.
Three-layer dedup:
1. feat() commit check — skip if no new features since last release
2. Content hash — skip if identical tweet already posted (cache hit)
3. Beta-to-beta diff — only new features since previous release tag
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The 280-char truncation was blindly chopping the tweet text, cutting
off the GitHub release URL. Now we extract the URL first, truncate
only the body text to fit (accounting for X's 23-char t.co counting),
then re-append the full URL so it always renders correctly.
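The workflow does this in shell; a Rust rendering of the same logic (X's per-character weighting for CJK/emoji is ignored in this sketch):

```rust
const TWEET_LIMIT: usize = 280;
const TCO_URL_LEN: usize = 23; // X counts every URL as 23 chars via t.co

fn truncate_tweet(body: &str, url: &str) -> String {
    // Budget for the body: total limit minus the wrapped URL and a separator.
    let budget = TWEET_LIMIT - TCO_URL_LEN - 1;
    let body = if body.chars().count() > budget {
        // Truncate on a char boundary, leaving room for an ellipsis.
        let cut: String = body.chars().take(budget - 1).collect();
        format!("{cut}…")
    } else {
        body.to_string()
    };
    // The full URL is re-appended, so it always renders as a working link.
    format!("{body} {url}")
}
```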
GITHUB_TOKEN lacks permission to create tags, causing 'Published
releases must have a valid tag' errors. Use RELEASE_TOKEN (same PAT
the beta workflow uses successfully).
Triggers on push to master when Cargo.toml changes. Detects version
bumps by comparing current vs previous commit, checks crates.io to
avoid duplicate publishes, then publishes automatically.
New fast inference providers:
- Cerebras, SambaNova, Hyperbolic
New model hosting platforms:
- DeepInfra, Hugging Face, AI21 Labs, Reka, Baseten, Nscale,
Anyscale, Nebius AI Studio, Friendli AI, Lepton AI
New Chinese AI providers:
- Stepfun, Baichuan, 01.AI (Yi), Tencent Hunyuan
Also fixed missing list_providers() entries for Telnyx and Azure OpenAI.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Add include filter to Cargo.toml (750 -> 235 files) for clean crate packaging
- Add publish-crates.yml workflow for manual crates.io publishing with dry-run support
- Add crates-io job to release-stable-manual.yml for auto-publish on stable releases
- Requires CARGO_REGISTRY_TOKEN secret to be configured in repo settings
* feat(release+providers): fix release race condition, add 3 providers
Release fix (two parts):
1. Replace softprops/action-gh-release with `gh release create` — the
CLI uploads assets atomically with the release in a single call,
avoiding the immutable release race condition
2. Move website redeploy to a separate job with `if: always()` — so the
website updates regardless of publish outcome
Both release-beta-on-push.yml and release-stable-manual.yml are fixed.
Provider additions:
- SiliconFlow (siliconflow, silicon-flow)
- AiHubMix (aihubmix)
- LiteLLM router (litellm, lite-llm)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: trigger CI
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Use --notes-file instead of --notes to safely handle multiline
release notes with special characters
- Add explicit --repo flag for resilience
- Simplify redeploy-website condition to only run on publish success
The softprops/action-gh-release action creates the release first, then
uploads assets in separate API calls. GitHub's immutable release policy
rejects those subsequent uploads, causing "Cannot upload assets to an
immutable release" errors. The gh CLI uploads assets atomically with
release creation, avoiding this race condition.
Also moves website redeploy into a separate job that runs regardless of
publish outcome, so the site stays in sync even if asset upload fails.
Remove Architecture, Security, Configuration, Python Companion, Identity
System, Gateway API, Commands, Development, CI/CD, and Build troubleshooting
sections from README.md and all 16 translated variants. These sections are
no longer relevant and the information lives in the docs/ directory.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Documents with image extensions (jpg, png, etc.) are routed to
[Document: name] /path instead of [IMAGE:/path], bypassing the
multimodal pipeline entirely. This causes the model to have no vision
input for images sent as Telegram Documents.
Re-applies fix from merged dev PR #1631 which was lost during the
master branch migration.
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
parse_attachment_markers uses .find(']') which matches the first ]
in the content. Filenames containing brackets (e.g. yt-dlp output
'Video [G4PvTrTp7Tc].mp4') get truncated at the inner bracket,
causing the send to fail with 'path not found'.
Uses depth-tracking bracket matching instead.
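Sketch of the depth-tracking scan (helper name is illustrative):

```rust
// Returns the byte index of the `]` that closes the `[` at `open`.
// Precondition: `open` points at a `[` in `s`.
fn matching_close_bracket(s: &str, open: usize) -> Option<usize> {
    debug_assert!(s[open..].starts_with('['));
    let mut depth = 0usize;
    for (i, c) in s[open..].char_indices() {
        match c {
            '[' => depth += 1,
            ']' => {
                depth = depth.saturating_sub(1);
                // Depth returns to zero only at the marker's own closer, so
                // inner pairs like "Video [G4PvTrTp7Tc].mp4" are skipped.
                if depth == 0 {
                    return Some(open + i);
                }
            }
            _ => {}
        }
    }
    None
}
```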
Re-applies fix from merged dev PR #1632 which was lost during the
master branch migration.
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
* feat(channels): add show_tool_calls config to suppress tool notifications
When show_tool_calls is false, the ChannelNotifyObserver drains tool
events silently instead of forwarding them as individual messages to
the channel. Server-side logs remain unaffected.
Defaults to true for backwards compatibility.
* docs: add before/after screenshots for show_tool_calls PR
* docs(config): add doc comment on show_tool_calls field
---------
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
- Only tweet when there are NEW feat() commits since the previous
release (including betas) — skips if no new features to announce
- Feature diff uses previous release tag (beta-to-beta) to prevent
listing the same features across consecutive releases
- Contributor count uses stable-to-HEAD range for full cycle credit
- Shows count instead of names to avoid 280 char clipping
- Added #zeroclaw #rust #ai #opensource hashtags
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(release): bump to v0.2.0, auto-tweet with features + contributors
- Bump version from 0.1.9 to 0.2.0
- Release notes now auto-generate from feat() commits only (no bug
fixes) keeping them clean and concise
- Contributors are always featured in release notes, pulled from
git log authors and Co-Authored-By trailers
- Tweet workflow now fires on all releases (beta + stable), auto-
generates punchy feature-focused tweets with contributor shoutouts
- Stable release workflow gets the same release notes treatment
- Tweet gracefully skips if Twitter secrets aren't configured
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(release): credit all contributors, wider range, filter bots
- Use wider git range for contributor collection — skips same-version
betas to capture everyone who contributed to the release, not just
the last incremental push
- Filter out bot accounts (dependabot, github-actions, copilot, etc.)
from contributor lists
- Tweet shows up to 6 contributors with "+ N more" when there are
extras, ensuring everyone gets credit
- Deduplicate contributors across git authors and Co-Authored-By
- Deduplicate features with sort -uf for cleaner notes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(release): use last stable tag for contributor range, not last beta
Ensures the tweet and release notes capture ALL contributors since the
last stable release (e.g. v0.1.9a), not just since the last beta tag.
This gives proper credit to everyone who contributed across the full
release cycle.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(install): consolidate one-click installer with branded output and inline onboarding
- Add blue color scheme with 🦀 crab emoji branding throughout installer
- Add structured [1/3] [2/3] [3/3] step output with ✓/·/✗ indicators
- Consolidate onboarding into install.sh: inline provider selection menu,
API key prompt, and model override — no separate wizard step needed
- Replace --onboard/--interactive-onboard with --skip-onboard (opt-out)
- Add OS detection display, install method, version detection, upgrade vs
fresh install logic
- Add post-install gateway service install/restart, doctor health check
- Add dashboard URL (port 42617) with clipboard copy and browser auto-open
- Add docs link (https://www.zeroclawlabs.ai/docs) to success output
- Display pairing code after onboarding in Rust CLI (src/main.rs)
- Remove --interactive flag from `zeroclaw onboard` CLI command
- Remove redundant scripts/install-release.sh legacy redirect
- Update all --interactive references across codebase
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(onboard): auto-pair and include bearer token in dashboard URL
After onboarding, the CLI now auto-pairs using the generated pairing
code to produce a bearer token, then displays the dashboard URL with
the token embedded (e.g. http://127.0.0.1:42617?token=zc_...) so
users can access the dashboard immediately without a separate pairing
step. The token is also persisted to config for gateway restarts.
The install script captures this token-bearing URL from the onboard
output and uses it for clipboard copy and browser auto-open.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* security(onboard): revert token-in-URL, keep pairing code terminal-only
Removes the auto-pair + token-in-URL approach in favor of the original
secure pairing flow. Bearer tokens should never appear in URLs where
they can leak via browser history, Referer headers, clipboard, or
proxy logs. The pairing code stays in the terminal and the user enters
it in the dashboard to complete the handshake securely.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: apply cargo fmt formatting
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Conversation history on long-running channel sessions (e.g. Feishu) grew
unbounded until the provider returned a context-window-exceeded error.
The existing reactive compaction only kicked in *after* the error,
causing the user's message to be lost and requiring a resend.
Add proactive_trim_turns() which estimates total character count and
drops the oldest turns before the request reaches the provider. The
budget (400k chars ≈ 100k tokens) leaves headroom for system prompt,
memory context, and model output.
Closes #3460
The relative href="install.sh" in README nav headers resolves to
zeroclawlabs.ai/install.sh when viewed on the website, which returns
404 since the website does not serve repo-root files. Replace with
the raw.githubusercontent.com URL used elsewhere in the docs.
Fixes #3463
Replace single-turn chat with persistent Agent to maintain conversation
history across WebSocket turns within the same connection.
Co-authored-by: staz <starzwan2333@gmail.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
The website at zeroclawlabs.ai shows a copy button with
curl -fsSL https://zeroclawlabs.ai/install.sh | bash but the website
does not serve the script, returning 404.
Add install.sh as a GitHub release asset in both beta and stable
workflows so it is always available at a stable URL. Pass the raw
GitHub URL in the website redeploy dispatch payload so the website
can configure a redirect or proxy for /install.sh.
Closes #3463
The system prompt has no documentation of channel media markers
([Voice], [IMAGE:], [Document:]), causing the LLM to misinterpret
transcribed voice messages as unprocessable audio attachments instead
of responding to the transcribed text content.
Re-applies fix from merged dev PR #1697 which was lost during the
master branch migration.
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
After publishing a beta or stable release, dispatch a
repository_dispatch event to zeroclaw-website so it rebuilds
with the latest version tag automatically.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs(readme): add v0.1.9a release highlights and contributor credits
Add "What's New in v0.1.9a" section covering web dashboard restyle,
new providers/channels, MCP tools, infrastructure updates, and fixes.
Add "Recent Contributors" table crediting key contributors with their
specific highlights for this release cycle.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(docs): use table format for release highlights to pass MD036 lint
Replace bold-text subsections with a table to satisfy the markdown
linter's no-emphasis-as-heading rule (MD036).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* ci: add x86_64-pc-windows-msvc to build matrix
* fix: prevent test deadlock in ensure_onboard_overwrite_allowed
Gate non-interactive terminal check behind cfg!(not(test)) so tests with
force=false do not hang waiting on stdin. cfg!(test) path bails immediately
with a clear message. No changes to extra_headers, mcp, nodes, or shellexpand.
---------
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
* fix(linq): accept current webhook payload shape
* style(linq): satisfy clippy lifetime lint
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* docs(i18n/zh-CN): complete Chinese documentation translation and i18n migration
- Translate the 58 core docs into Chinese, covering all modules (setup,
reference, operations, security, hardware, contributor guides, etc.)
- Migrate all .zh-CN.md files to i18n/zh-CN directory preserving original directory structure
- Update internal links across 3 entry files:
* Root README.zh-CN.md
* docs/README.zh-CN.md
* docs/SUMMARY.zh-CN.md
- All links point to the correct i18n paths and resolve correctly
- Align with Vietnamese i18n directory structure per project conventions
* fix(i18n/zh-CN): resolve all 30 blocking markdown lint errors
- Fix all MD022 blank lines around headings issues across 10 files
- Fix MD036 emphasis used as heading issue in refactor-candidates.md
* docs(i18n/zh-CN): resolve broken document reference link in root README.zh-CN.md
* fix(docs): repair double-quote HTML bug and broken relative links in zh-CN docs
- Remove stray extra `"` after href closing quotes in 6 HTML links in
root README.zh-CN.md
- Fix relative link depth for files under docs/i18n/zh-CN/ that
reference repo-root files (CONTRIBUTING.md, README.zh-CN.md,
.github/workflows/, tests/, docs/SUMMARY.zh-CN.md): use correct
number of `../` levels based on actual file location in the tree
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Add README and SUMMARY translations for 25 missing languages to match
the root-level README coverage. Update English docs hub and SUMMARY
with links to all localized hubs.
New languages: ar, bn, cs, da, de, el, es, fi, he, hi, hu, id, it,
ko, nb, nl, pl, pt, ro, sv, th, tl, tr, uk, ur. Also adds missing
SUMMARY.vi.md for Vietnamese.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Ignore JetBrains .idea folder
* fix(ollama): support stringified JSON tool call arguments
* providers: allow ZEROCLAW_PROVIDER_URL env var to override Ollama base URL
Supports container deployments where Ollama runs on a Docker network host
(e.g. http://ollama:11434) without requiring config.toml changes.
Includes regression test ensuring the environment override works.
* fix(clippy): replace Default::default() with ProviderRuntimeOptions::default()
---------
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Add a WebSocket endpoint at /ws/nodes where external processes and
devices can connect and advertise their capabilities at runtime.
The gateway tracks connected nodes in a NodeRegistry and exposes
their capabilities as dynamically available tools via NodeTool.
- Add src/gateway/nodes.rs: WebSocket endpoint, NodeRegistry, protocol
- Add src/tools/node_tool.rs: Tool trait wrapper for node capabilities
- Add NodesConfig to config schema (disabled by default)
- Wire /ws/nodes route into gateway router
- Add NodeRegistry to AppState and all test constructions
- Re-export NodesConfig and NodeTool from module roots
Closes #3093
A single cron job with a malformed `next_run` timestamp in the database
was silently stopping all scheduled jobs. The `due_jobs` query matched
rows whose `next_run` was lexicographically past-due (including
non-RFC3339 values like "2026-03-12 03:11:13" which sort before valid
RFC3339 strings), then `map_cron_job_row` failed to parse the timestamp,
the `row?` propagation caused `due_jobs` to return `Err`, and the
scheduler marked itself as `error` and skipped every subsequent tick —
taking down all other healthy jobs with it.
The fix changes the row iteration in `due_jobs` to log a warning and
skip unparseable rows rather than aborting the entire result set. Valid
jobs continue to fire; the broken row is surfaced in the logs without
collateral damage to the scheduler.
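The shape of the fix, with stand-in generic types for the row and job (the real code lives in the cron store; tracing assumed for the warning):

```rust
fn collect_due_jobs<Row, Job, E: std::fmt::Display>(
    rows: impl IntoIterator<Item = Row>,
    map_cron_job_row: impl Fn(Row) -> Result<Job, E>,
) -> Vec<Job> {
    let mut due = Vec::new();
    for row in rows {
        match map_cron_job_row(row) {
            Ok(job) => due.push(job),
            // Previously a single bad row made due_jobs return Err and halted
            // the scheduler; now the row is logged and skipped.
            Err(e) => tracing::warn!("skipping cron row with unparseable next_run: {e}"),
        }
    }
    due
}
```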
Co-authored-by: ZeroClaw <zeroclaw@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
The default GITHUB_TOKEN cannot bypass the "Release Tags - Restricted
Operators" ruleset, causing beta releases to fail with a 422 validation
error. Switch to a PAT stored as RELEASE_TOKEN that has bypass permissions.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
* feat(provider): support custom API path suffix for custom: endpoints
Allow users to configure a custom API path for custom/compatible
providers instead of hardcoding /v1/chat/completions. Some self-hosted
LLM servers use different API paths.
Adds an optional `api_path` field to:
- Config (top-level and model_providers profile)
- ProviderRuntimeOptions
- OpenAiCompatibleProvider
When set, the custom path is appended to base_url instead of the
default /chat/completions suffix.
Closes #3125
* fix: add missing api_path field to test ModelProviderConfig initializers
Add an in-memory DraftContext that persists textarea content when the
AgentChat component unmounts due to route navigation. The draft is
restored when the user returns to the chat view. The store is
session-scoped (not localStorage) so drafts are cleared on page reload.
Closes #3129
Restyle the entire web dashboard with an electric blue theme featuring
glassmorphism cards, smooth animations, and the ZeroClaw logo. Remove
duplicate Vite dev server infrastructure to ensure a single gateway.
- Add electric blue color palette and glassmorphism styling system
- Add 10+ keyframe animations (fade, slide, pulse-glow, shimmer, float)
- Restyle all 10 pages with glass cards and electric components
- Add ZeroClaw logo to sidebar, pairing screen, and favicon
- Remove Vite dev/preview scripts and proxy config (build-only now)
- Update pairing dialog with ambient glow and animated elements
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add deferred MCP tool activation to reduce context window waste.
When mcp.deferred_loading is true (the default), MCP tool schemas
are not eagerly included in the LLM context. Instead, only tool
names appear in an <available-deferred-tools> system prompt section,
and the LLM calls the built-in tool_search tool to fetch full schemas
on demand. Setting deferred_loading to false preserves the existing
eager behavior.
Closes #3095
Use `cmd.exe /C` instead of `sh -c` on Windows via cfg(target_os).
Make the shell allowlist, forbidden paths, env vars, risk classification,
and path detection platform-aware so the shell tool works correctly on
Windows without changing Unix behavior.
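Sketch of the platform split (flag and environment handling in the real tool is richer):

```rust
use std::process::Command;

fn shell_command(script: &str) -> Command {
    #[cfg(target_os = "windows")]
    let cmd = {
        let mut c = Command::new("cmd.exe");
        // /C runs the string and exits: the cmd.exe analogue of `sh -c`.
        c.args(["/C", script]);
        c
    };
    #[cfg(not(target_os = "windows"))]
    let cmd = {
        let mut c = Command::new("sh");
        c.args(["-c", script]);
        c
    };
    cmd
}
```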
Closes #3327
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Send a read receipt after receiving each message, start a typing
notification while processing, and stop it before sending the response.
This gives Matrix users visual feedback that the bot has seen their
message and is working on a reply.
Closes #3357
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add GET /api/cron/{id}/runs?limit=N endpoint that returns recent stored
runs for a cron job, with server-side limit clamping to 1-100 (default 20).
Frontend adds a CronRun type, API client function, and an expandable
run history panel on the Cron page showing status, timestamps, duration,
and output for each run, with loading, empty, error, and refresh states.
Closes #3299
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Implement the Channel trait for WeCom Bot Webhook, supporting
outbound text messages via the WeCom webhook API. The channel
is send-only; inbound messages can be routed through the gateway
webhook subsystem.
Closes #3396
Users can now set `ack_reactions = false` in `[channels_config]` to
suppress the 👀/✅/⚠️ acknowledgement reactions on incoming messages.
The option defaults to `true`, preserving existing behavior.
Closes #3403
The default distroless image has no shell, preventing the agent from
using shell-based tools (pwd, ls, git, etc.). Add Dockerfile.debian
that uses debian:bookworm-slim as the runtime base and includes bash,
git, curl, and ca-certificates. The existing distroless Dockerfile
remains unchanged for security-conscious deployments.
Closes #3359
The polling-based Slack listener only called conversations.history, which
returns top-level channel messages but not thread replies. Users replying
inside a thread were invisible to the bot after its initial response.
Add conversations.replies polling for active threads discovered in
channel history. Track thread parents with reply_count > 0, periodically
fetch new replies, and emit them as ChannelMessage with the correct
thread_ts so the bot can continue multi-turn conversations in-thread.
Stale threads are evicted after 24 hours or when the tracker exceeds
50 entries.
Closes #3084
MCP tools were not visible to delegate subagents because parent_tools
was a static snapshot taken before MCP tool wiring. Switch to interior
mutability (parking_lot::RwLock) so MCP wrappers pushed after
DelegateTool construction are visible at sub-agent execution time.
Closes #3069
When workspace_only=true and allowed_roots is configured, several tools
(file_read, content_search, glob_search) rejected absolute paths before
the allowed_roots allowlist was consulted. Additionally, tilde paths
(~/...) passed is_path_allowed but were then incorrectly joined with
workspace_dir as literal relative paths.
Changes:
- Add SecurityPolicy::resolve_tool_path() to properly expand tilde
paths and handle absolute vs relative path resolution for tools
- Add SecurityPolicy::is_under_allowed_root() for tool pre-checks to
consult the allowed_roots allowlist before rejecting absolute paths
- Update file_read to use resolve_tool_path instead of workspace_dir.join
- Update content_search and glob_search absolute-path pre-checks to
allow paths under allowed_roots
- Add tests covering workspace_only + allowed_roots scenarios
Closes #3082
PR #3409 fixed AtomicU64 usage on 32-bit targets in other files but
missed src/tools/mcp_client.rs. Apply the same cfg(target_has_atomic)
pattern used in channels/irc.rs to conditionally select AtomicU64 vs
AtomicU32.
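The cfg pattern in question, roughly (alias name is illustrative):

```rust
// Pick the widest atomic the target actually supports.
#[cfg(target_has_atomic = "64")]
use std::sync::atomic::AtomicU64 as RequestCounter;
#[cfg(not(target_has_atomic = "64"))]
use std::sync::atomic::AtomicU32 as RequestCounter;

use std::sync::atomic::Ordering;

fn next_request_id(counter: &RequestCounter) -> u64 {
    // fetch_add yields u64 or u32 depending on the target; widen for callers.
    counter.fetch_add(1, Ordering::Relaxed) as u64
}
```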
Closes #3430
* feat(agent): add tool_filter_groups for per-turn MCP tool schema filtering
Introduces per-turn MCP tool schema filtering to reduce token overhead when
many MCP tools are registered. Filtering is driven by a new config field
`agent.tool_filter_groups`, which is a list of named groups that each
specify tool glob patterns and an activation mode (`always` or `dynamic`).
Built-in (non-MCP) tools always pass through unchanged; the feature is fully
backward-compatible — an empty `tool_filter_groups` list (the default) leaves
all existing behaviour untouched.
Changes:
- src/config/schema.rs: add `ToolFilterGroupMode`, `ToolFilterGroup` types
and `tool_filter_groups` field on `AgentConfig`
- src/config/mod.rs: re-export `ToolFilterGroup`, `ToolFilterGroupMode`
- src/agent/loop_.rs: add `glob_match()`, `filter_tool_specs_for_turn()`,
`compute_excluded_mcp_tools()` helpers; wire call sites in both single-shot
and interactive REPL modes; add unit tests for all three functions
- docs/reference/api/config-reference.md: document `tool_filter_groups`
field and sub-table schema with example
- docs/i18n/el/config-reference.md: add Greek locale config-reference with
`tool_filter_groups` section (2026-03-12 update)
* Remove accidentally committed worktree directories
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(tools/mcp): add MCP subsystem tools layer with multi-transport client
Introduces a new MCP (Model Context Protocol) subsystem to the tools layer,
providing a multi-transport client implementation (stdio, HTTP, SSE) that
allows ZeroClaw agents to connect to external MCP servers and register their
exposed tools into the runtime tool registry.
New files:
- src/tools/mcp_client.rs: McpRegistry — lifecycle manager for MCP server connections
- src/tools/mcp_protocol.rs: protocol types (request/response/notifications)
- src/tools/mcp_tool.rs: McpToolWrapper — bridges MCP tools to ZeroClaw Tool trait
- src/tools/mcp_transport.rs: transport abstraction (Stdio, Http, Sse)
Wiring changes:
- src/tools/mod.rs: pub mod + pub use for new MCP modules
- src/config/schema.rs: McpTransport, McpServerConfig, McpConfig types; mcp field
on Config; validate_mcp_config; mcp unit tests
- src/config/mod.rs: re-exports McpConfig, McpServerConfig, McpTransport
- src/channels/mod.rs: MCP server init block in start_channels()
- src/agent/loop_.rs: MCP registry init in run() and process_message()
- src/onboard/wizard.rs: mcp: McpConfig::default() in both wizard constructors
* fix(tools/mcp): inject MCP tools after built-in tool filter, not before
MCP servers are user-declared external integrations. The
agent.allowed_tools / agent.denied_tools filter (filter_primary_agent_tools_or_fail)
applies to built-in tools only. Injecting MCP tools before that
filter would silently drop all MCP tools when a restrictive allowlist is
configured.
Add ordering comments at both call sites (run() CLI path and
process_message() path) to make this contract explicit for reviewers
and future merges.
Identified via: shady831213/zeroclaw-agent-mcp@3f90b78
* fix(tools/mcp): strip approved field from MCP tool args before forwarding
ZeroClaw's security model injects `approved: bool` into built-in tool
args for supervised-mode confirmation. MCP servers have no knowledge of
this field and reject calls that include it as an unexpected parameter.
Strip `approved` from object-typed args in McpToolWrapper::execute()
before forwarding to the MCP server. Non-object args pass through
unchanged (no silent conversion or rejection).
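The stripping step, sketched with serde_json:

```rust
use serde_json::Value;

// Remove the ZeroClaw-internal supervision flag before forwarding the args.
fn strip_approved(mut args: Value) -> Value {
    if let Value::Object(ref mut map) = args {
        // MCP servers reject unknown parameters, so drop `approved` from
        // object-shaped args.
        map.remove("approved");
    }
    // Non-object args pass through unchanged.
    args
}
```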
Add two unit tests:
- execute_strips_approved_field_from_object_args: verifies removal
- execute_handles_non_object_args_without_panic: verifies non-object
shapes are not broken by the stripping logic
Identified via: shady831213/zeroclaw-agent-mcp@c68be01
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Rust treats `~` as a literal path character, not a home directory
shorthand. Several config resolution paths used `PathBuf::from()` on
user-provided strings without expanding `~` first, causing a literal
`~` folder to be created in the working directory.
Apply `shellexpand::tilde()` to all user-facing path inputs:
- ZEROCLAW_CONFIG_DIR env var (config/schema.rs, onboard/wizard.rs)
- ZEROCLAW_WORKSPACE env var (config/schema.rs, onboard/wizard.rs,
channels/matrix.rs)
- active_workspace.toml marker file config_dir (config/schema.rs)
The WhatsApp Web session_path was already correctly expanded via
shellexpand::tilde() in whatsapp_web.rs.
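The expansion itself is a one-liner wherever a user path enters the system, roughly:

```rust
use std::path::PathBuf;

// Assuming the shellexpand crate: "~/workspace" becomes
// "/home/user/workspace"; paths without `~` pass through untouched.
fn resolve_user_path(raw: &str) -> PathBuf {
    PathBuf::from(shellexpand::tilde(raw).into_owned())
}
```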
Closes #3417
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Add `extra_headers` config field and `ZEROCLAW_EXTRA_HEADERS` env var
support so users can specify custom HTTP headers for provider API
requests. This enables connecting to providers that require specific
headers (e.g., User-Agent, HTTP-Referer, X-Title) without a reverse
proxy.
Config file headers serve as the base; env var headers override them.
Format: `Key:Value,Key2:Value2`
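A sketch of the env-var parsing under that format (helper name is illustrative; config-file headers would be inserted first so env entries override them):

```rust
use std::collections::HashMap;

fn parse_extra_headers(raw: &str) -> HashMap<String, String> {
    raw.split(',')
        .filter_map(|pair| {
            // Split on the first ':' only, so values may contain colons.
            let (key, value) = pair.split_once(':')?;
            Some((key.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}
```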
Closes #3189
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
When `require_pairing = false` in config, the dashboard showed the
pairing dialog even though no pairing code exists, creating a deadlock.
Add `requiresPairing` field to AuthState (defaults to `true` as a safe
fallback) and update it from the `/health` endpoint response. Gate the
pairing dialog in App.tsx on both `!isAuthenticated` and
`requiresPairing` so the dashboard loads directly when pairing is
disabled.
Closes #3267
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Prevent orphan `<tool_result>` blocks from leaking into LLM sessions:
- Strip `<tool_result>` blocks from cached prior turns in
`process_channel_message` so the LLM never sees a tool result
without a preceding tool call (Case A — in-memory accumulation).
- Skip memory entries containing `<tool_result` in both
`should_skip_memory_context_entry` (channel path) and
`build_context` (agent path) so SQLite-recalled tool output
is never injected as memory context (Case B — post-restart).
Closes #3402
The URL parser captured the first https:// URL found in cloudflared
stderr output. When cloudflared emits a quic-go UDP buffer warning
containing a github.com link, that documentation URL was incorrectly
captured as the tunnel's public URL.
Extract URL parsing into a testable helper function that skips known
documentation domains (github.com, cloudflare.com/docs,
developers.cloudflare.com) and recognises tunnel-specific log prefixes
("Visit it at", "Route at", "Registered tunnel connection") and the
.trycloudflare.com domain.
Closes #3413
Add a prominent migration notice to CONTRIBUTING.md with explicit
instructions for contributors who still have local or forked main
branches. Fix the last remaining main branch reference in
python/pyproject.toml. Stale merged branches and main-related remote
branches have been deleted.
Refs: #2929, #3061
* fix: gate prometheus and fix AtomicU64 for 32-bit targets (#3335)
Closes #3335
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: fix import ordering for cfg-gated atomics
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
MCP (Model Context Protocol) config and tool modules were added on the
old `main` branch but never made it to `master`. This restores the full
MCP subsystem: config schema, transport layer (stdio/HTTP/SSE), client
registry, tool wrapper, config validation, and channel wiring.
Closes #3379
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The build.rs was reduced to only creating an empty web/dist/ directory,
which meant rust-embed would embed no files and the SPA fallback handler
would return 404 for every request including `/`. This is a regression
from v0.1.8 where web/dist/ was still tracked in git.
Update build.rs to detect when web/dist/index.html is missing and
automatically run `npm ci && npm run build` if npm is available. The
build is best-effort: when Node.js is absent the Rust build still
succeeds with an empty dist directory (release workflows pre-build the
frontend separately).
Closes #3386
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The /releases/latest/download/ URL only resolves to the latest non-prerelease,
non-draft release. When that release has no binary assets (e.g. v0.1.9a),
--prebuilt-only fails with a 404. This adds resolve_asset_url() which queries
the GitHub releases API for the newest release (including prereleases) that
actually contains the requested asset, falling back to /releases/latest/ if
the API call fails.
Closes #3389
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
When reasoning_enabled is configured, the Ollama provider sends
think=true to all models. Models that don't support the think parameter
(e.g. qwen3.5:0.8b) cause request failures that the reliable provider
classifies as retryable, leading to an infinite retry loop.
Fix: when a request with think=true fails, automatically retry once
with think omitted. This lets the call succeed on models that lack
reasoning support while preserving thinking for capable models.
Closes #3183
Related #850
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Replace direct `crypto.randomUUID()` calls in the web dashboard with a
`generateUUID()` utility that falls back to a manual UUID v4 implementation
using `crypto.getRandomValues()` when `randomUUID` is unavailable (older
Safari, some Electron builds, Raspberry Pi browsers).
Closes #3303, closes #3261
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
CI workflows use ubuntu-latest (24.04, glibc 2.39) while release
workflows used ubuntu-22.04 (glibc 2.35). Swatinem/rust-cache keys
on runner.os ("Linux"), not the specific version, so cached build
scripts compiled on 24.04 would fail on 22.04 with GLIBC_2.39 not
found errors.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The release profile uses fat LTO + codegen-units=1, which is
optimal for distribution binaries but unnecessarily slow for CI
validation builds. Add a dedicated `ci` profile with thin LTO and
codegen-units=16, and use it in both CI workflows.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When auto_save is enabled and a photo is sent on the first turn of a
Telegram session, the [IMAGE:] marker was duplicated because:
1. auto_save stores the photo message (including the marker) to memory
2. build_memory_context recalls the just-saved entry as relevant context
3. The recalled marker is prepended to the original message content
Fix: skip memory context entries containing [IMAGE:] markers in
should_skip_memory_context_entry so auto-saved photo messages are not
re-injected through memory context enrichment.
Closes #2403
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When workspace_only=true, is_path_allowed() blanket-rejected all
absolute paths. This blocked legitimate tool calls that referenced
files inside the workspace using an absolute path (e.g. saving a
screenshot to /home/user/.zeroclaw/workspace/images/example.png).
The fix checks whether an absolute path falls within workspace_dir or
any configured allowed_root before rejecting it, mirroring the priority
order already used by is_resolved_path_allowed(). Paths outside the
workspace and allowed roots are still blocked, and the forbidden-paths
list continues to apply to all other absolute paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Discord gateway event loop silently dropped websocket Ping frames
(via a catch-all `_ => continue`) without responding with Pong. After
splitting the websocket stream into read/write halves, automatic
Ping/Pong handling is disabled, so the server-side (Cloudflare/Discord)
eventually considers the client unresponsive and stops sending events.
Additionally, websocket read errors (`Some(Err(_))`) were silently
swallowed by the same catch-all, preventing reconnection on transient
failures.
This patch:
- Responds to `Message::Ping` with `Message::Pong` to maintain the
websocket keepalive contract
- Breaks the event loop on `Some(Err(_))` with a warning log, allowing
the supervisor to reconnect
Closes #2896
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Electric Blue dashboard (PR #2804) sends the pairing token as a
?token= query parameter, but the WS handler only checked that single
source. Earlier PR #2193 had established a three-source precedence
chain (header > subprotocol > query param) which was lost.
Add extract_ws_token() with the documented precedence:
1. Authorization: Bearer <token> header
2. Sec-WebSocket-Protocol: bearer.<token> subprotocol
3. ?token=<token> query parameter
This ensures browser-based clients (which cannot set custom headers)
can authenticate via query param or subprotocol, while non-browser
clients can use the standard Authorization header.
Includes 9 unit tests covering each source, precedence ordering,
and empty-value fallthrough.
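The precedence chain, sketched over plain header/query values rather than the real request types:

```rust
fn extract_ws_token(
    authorization: Option<&str>, // Authorization header value
    ws_protocols: Option<&str>,  // Sec-WebSocket-Protocol header value
    query_token: Option<&str>,   // ?token=<token> query parameter
) -> Option<String> {
    // 1. Standard Authorization: Bearer header wins.
    if let Some(token) = authorization.and_then(|v| v.strip_prefix("Bearer ")) {
        if !token.is_empty() {
            return Some(token.to_string());
        }
    }
    // 2. Browsers cannot set custom headers, but can carry the token as a
    //    `bearer.<token>` entry in the subprotocol list.
    if let Some(protocols) = ws_protocols {
        for entry in protocols.split(',') {
            if let Some(token) = entry.trim().strip_prefix("bearer.") {
                if !token.is_empty() {
                    return Some(token.to_string());
                }
            }
        }
    }
    // 3. Query parameter is the last resort.
    query_token.filter(|t| !t.is_empty()).map(str::to_string)
}
```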
Closes #2884
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Rename shadowed `histories` and `store` bindings in three test functions
to eliminate variable shadows that are flagged under stricter lint
configurations (clippy::shadow_unrelated). The initial bindings are
consumed by struct initialization; the second bindings that lock the
mutex guard are now named distinctly (`locked_histories`, `cleanup_store`).
Closes #2443
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The provider HTTP request timeout was hardcoded at 120 seconds in
`OpenAiCompatibleProvider::http_client()`. This makes it configurable
via the `provider_timeout_secs` config key and the
`ZEROCLAW_PROVIDER_TIMEOUT_SECS` environment variable, defaulting
to 120s for backward compatibility.
Changes:
- Add `provider_timeout_secs` field to Config with serde default
- Add `ZEROCLAW_PROVIDER_TIMEOUT_SECS` env var override
- Add `timeout_secs` field and `with_timeout_secs()` builder on
`OpenAiCompatibleProvider`
- Add `provider_timeout_secs` to `ProviderRuntimeOptions`
- Thread config value through agent loop, channels, gateway, and tools
- Use `compat()` closure in provider factory to apply timeout to all
compatible providers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add `agent.tool_call_dedup_exempt` config key (list of tool names) to
allow specific tools to bypass the within-turn identical-signature
deduplication check in run_tool_call_loop. This fixes the browser
snapshot polling use case where repeated calls with identical arguments
are legitimate and should not be suppressed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a `zeroclaw channel send` subcommand that sends a one-off message
through a configured channel without starting the full agent loop.
This enables hardware sensor triggers (e.g., range sensors on
Raspberry Pi) to push notifications to Telegram and other platforms.
Usage:
zeroclaw channel send 'Alert!' --channel-id telegram --recipient <chat_id>
Supported channels: telegram, discord, slack.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When running through a PTY chain (kubectl exec, SSH, remote terminals),
the transport layer may split data frames at space (0x20) boundaries,
interrupting multi-byte UTF-8 characters mid-sequence. Rust's
BufRead::read_line requires valid UTF-8 and returns InvalidData
immediately, crashing the interactive agent loop.
Replace stdin().read_line() with byte-level read_until(b'\n') followed
by String::from_utf8_lossy() in both the main input loop and the
/clear confirmation prompt. This reads raw bytes without UTF-8
validation during transport, then does lossy conversion (replacing any
truly invalid bytes with U+FFFD instead of crashing).
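A minimal sketch of the byte-level read described above (prompt handling and error reporting elided):

```rust
use std::io::{self, BufRead};

fn read_input_line() -> io::Result<String> {
    let mut buf = Vec::new();
    // Buffer raw bytes with no UTF-8 validation, so a multi-byte char
    // split across transport frames can reassemble before decoding.
    io::stdin().lock().read_until(b'\n', &mut buf)?;
    // Any bytes that are still invalid UTF-8 become U+FFFD instead of
    // an InvalidData error.
    Ok(String::from_utf8_lossy(&buf).trim_end().to_string())
}
```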
Also set ENV LANG=C.UTF-8 in both Dockerfile stages as defense-in-depth
to ensure the container locale defaults to UTF-8.
Closes #2984
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The credential leak detector's check_high_entropy_tokens would
false-positive on URL path segments (e.g. long alphanumeric filenames)
because extract_candidate_tokens included '/' in the token character
set, creating long mixed-alpha-digit tokens that exceeded the Shannon
entropy threshold.
Fix: strip URLs from content before extracting candidate tokens for
entropy analysis. Structural pattern checks (API keys, JWTs, AWS
credentials) use dedicated regexes and are unaffected.
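A sketch of the pre-filter, assuming a simple URL regex (the real pattern and caching strategy may differ):

```rust
use regex::Regex;

fn strip_urls(content: &str) -> String {
    // Drop http(s) URLs so long path segments never reach the
    // entropy-based candidate-token extraction.
    let url_re = Regex::new(r"https?://\S+").expect("static regex is valid");
    url_re.replace_all(content, " ").into_owned()
}
```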
Closes #3064
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
WebSearchTool previously stored the Brave API key once at boot and never
re-read it. This caused three failures: (1) keys set after boot via
web_search_config were ignored, (2) encrypted keys were passed as raw enc2:
blobs to the Brave API, and (3) keys absent at startup left the tool
permanently broken.
The fix adds lazy key resolution at execution time. A fast path returns
the boot-time key when it is plaintext and non-empty. When the boot key
is missing or still encrypted, the tool re-reads config.toml, decrypts
the value through SecretStore, and uses the result. This also means
runtime config updates (e.g. `web_search_config set brave_api_key=...`)
are picked up on the next search invocation.
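A hedged sketch of that resolution order; the config re-read and SecretStore decryption are passed in as closures because their real signatures aren't shown here:

```rust
fn resolve_brave_key(
    boot_key: &str,
    reread_config_key: impl Fn() -> Option<String>,
    decrypt: impl Fn(&str) -> Option<String>,
) -> Option<String> {
    // Fast path: plaintext, non-empty key captured at boot.
    if !boot_key.is_empty() && !boot_key.starts_with("enc2:") {
        return Some(boot_key.to_string());
    }
    // Slow path: re-read config.toml and decrypt via SecretStore, so
    // runtime updates are picked up on the next search invocation.
    let raw = reread_config_key()?;
    decrypt(&raw)
}
```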
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Replace byte-level `&val[..4]` slice with `char_indices().nth(4)` to
prevent a panic when the captured credential value contains multi-byte
UTF-8 characters (e.g. Chinese text). Adds a regression test with
CJK input.
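The shape of the fix, sketched (function name is illustrative):

```rust
fn safe_prefix(val: &str) -> &str {
    // char_indices().nth(4) is the byte offset where the 5th character
    // starts, i.e. the end of the first four chars -- always a valid
    // char boundary, unlike the raw byte slice &val[..4].
    match val.char_indices().nth(4) {
        Some((idx, _)) => &val[..idx],
        None => val, // fewer than five characters: keep the whole value
    }
}
```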
Closes #3024
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Extract `self.bot_handle.lock().take()` into a separate `let` binding
so the parking_lot::MutexGuard is dropped before the `.await`, making
the listen future Send again.
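A sketch of the pattern (channel and field names are illustrative):

```rust
use parking_lot::Mutex;
use tokio::task::JoinHandle;

struct BotChannel {
    bot_handle: Mutex<Option<JoinHandle<()>>>,
}

impl BotChannel {
    async fn stop(&self) {
        // The guard lives only for this statement, so the non-Send
        // parking_lot::MutexGuard is dropped before any `.await`.
        let handle = self.bot_handle.lock().take();
        if let Some(handle) = handle {
            handle.abort();
            let _ = handle.await; // safe: no guard held across the await
        }
    }
}
```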
Closes #3312
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Replace the `use_feishu: bool` field on `LarkChannel` with a
`platform: LarkPlatform` enum field, add `mention_only` to the
`new_with_platform` constructor, and introduce `from_lark_config` /
`from_feishu_config` factory methods so the channel factory in
`mod.rs` and the existing tests compile.
Resolves #3302
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The ci-run.yml workflow was referenced in docs/contributing/ci-map.md
and branch protection rules but never existed in the repository,
causing push-triggered CI runs to fail immediately with zero jobs
and no logs.
This adds the workflow with all documented jobs: lint, strict delta
lint, test, build (linux + macOS), docs quality, and the CI Required
Gate composite check. Triggers on both push and pull_request to master.
Fixes #2853
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Centralize cron shell command validation so all entrypoints enforce the
same security policy (allowlist + risk gate + approval) before
persistence and execution.
Changes:
- Add validate_shell_command() and validate_shell_command_with_security()
as the single validation gate for all cron shell paths
- Add add_shell_job_with_approval() and update_shell_job_with_approval()
that validate before persisting
- Add add_once_validated() and add_once_at_validated() for one-shot jobs
- Make raw add_shell_job/add_job/add_once/add_once_at pub(crate) to
prevent unvalidated writes from outside the cron module
- Route gateway API through validated creation path
- Route schedule tool through validated helpers (single validation)
- Route cron_add/cron_update tools through validated helpers
- Unify scheduler execution validation via validate_shell_command_with_security
- CLI update handler uses full validate_command_execution instead of
just is_command_allowed
- Add focused tests for validation parity across entrypoints
- Standardize error format to "blocked by security policy: {reason}"
Closes #2741
Closes #2742
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cli): honor config default_temperature in agent command
Fixes #3033
The agent command was using a hardcoded default_value of 0.7 for the
--temperature parameter, which ignored the default_temperature setting
in the config file.
Changes:
- Changed temperature from f64 to Option<f64>
- Removed hardcoded default_value
- Use config.default_temperature when --temperature is not provided
Users can now set default_temperature in config.toml and have it
honored when running 'zeroclaw agent' without --temperature.
Risk: low (behavior change: now honors config instead of hardcoded value)
* fix: resolve moved value error for config in agent command
Extract final_temperature before passing config to agent::run() to
avoid use-of-moved-value error (config is moved as the first argument
while config.default_temperature was being accessed in the same call).
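A sketch of the two fixes together (the `agent::run` signature here is illustrative):

```rust
async fn launch(config: Config, temperature: Option<f64>) -> anyhow::Result<()> {
    // Read the config field into a local *before* `config` is moved
    // into agent::run as the first argument.
    let final_temperature = temperature.unwrap_or(config.default_temperature);
    agent::run(config, final_temperature).await
}
```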
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(web): add auto-expanding multiline chat composer
Convert chat input from single-line input to auto-expanding textarea:
- Replace input with textarea for multiline support
- Add auto-resize logic that grows up to max height (200px)
- Enter sends message, Shift+Enter adds new line
- Reset height after sending message
- Add overflow-y-auto for scrolling when max height reached
- Update placeholder to indicate Shift+Enter for new line
Fixes #3119
* fix: run cargo fmt to fix lint CI
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: trigger CI re-run
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Fixes #3012
Adds a feature alias `channel-feishu` that maps to `channel-lark`.
This makes it easier for Feishu users to discover and enable the feature,
as Lark and Feishu are the same platform with different branding.
Users can now use either:
cargo install zeroclaw --features channel-lark
cargo install zeroclaw --features channel-feishu
Risk: low (feature alias only, no functional change)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Add vision support to Anthropic provider to enable image understanding:
- Add ImageSource struct for Anthropic's image content block format
- Add Image variant to NativeContentOut enum
- Implement capabilities() returning vision: true
- Update convert_messages() to parse [IMAGE:...] markers and convert
them to Anthropic's native image content blocks
- Support both data URIs and local file paths
- Add comprehensive tests for vision functionality
Fixes #3163
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
* feat(web): add per-message copy button on hover
Add a copy affordance to each chat message that appears on hover/focus:
- Add MessageCopyButton component with clipboard integration
- Button appears on hover/focus for both user and agent messages
- Shows checkmark icon briefly after successful copy
- Graceful fallback for non-secure contexts
- Keyboard accessible with focus state
- Does not disrupt message layout or text selection
Fixes #3120
* fix: run cargo fmt to fix lint CI
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: trigger CI re-run
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add default_subject field to EmailConfig to allow users to customize
the default subject line for outgoing emails. Previously hardcoded as
"ZeroClaw Message".
- Add default_subject field with serde default
- Update send() method to use configured default
- Add tests for new functionality
Fixes #2878
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
When [web_fetch] section was specified in config without explicit
allowed_domains, serde used Vec::default() (empty vector) instead of
the wildcard ["*"] default. This caused all web fetch requests to be
rejected unexpectedly.
Fix by adding explicit serde default function that returns vec!["*"].
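The serde pattern in question, sketched:

```rust
use serde::Deserialize;

fn default_allowed_domains() -> Vec<String> {
    vec!["*".to_string()]
}

#[derive(Deserialize)]
struct WebFetchConfig {
    // Without the explicit default fn, a [web_fetch] section that omits
    // this key gets Vec::default() (empty) and rejects every domain.
    #[serde(default = "default_allowed_domains")]
    allowed_domains: Vec<String>,
}
```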
Fixes #2941
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Updates rollup to 4.59.0 to fix CVE-2026-27606, an arbitrary file write
via path traversal vulnerability (GHSA-mw96-cpmx-2vgc).
Resolves Dependabot alert #2.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add support for openai-responses, open-ai-responses, openai-chat-completions,
and open-ai-chat-completions as aliases for wire_api configuration values.
This aligns the parser with documented values that users expect to work.
Fixes #2735
* feat(onboard): add --reinit flag to prevent accidental config overwrite
Add --reinit flag to onboard command that:
- Backs up existing ~/.zeroclaw directory with timestamp
- Starts fresh initialization after backup
- Requires --interactive mode to work
- Prevents accidental configuration loss
This addresses issue #3013 where onboard could accidentally
overwrite all configuration without warning.
Closes #3013
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(ci): SHA-pin all third-party GitHub Actions
Replace mutable version tags with immutable commit SHAs to prevent
tag-hijacking supply chain attacks (P1 finding).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: retrigger CI after startup_failure
* fix(onboard): address PR #3102 review issues for --reinit flag
- Use resolve_runtime_dirs_for_onboarding() instead of hardcoded ~/.zeroclaw
- Remove unsafe relative path fallback, bail instead
- Add user confirmation prompt before reinitializing config
- Update docs/reference/cli/commands-reference.md with --reinit docs
* style: fix cargo fmt and clippy violations
- Fix import ordering in src/config/mod.rs (rustfmt)
- Collapse single-arg encrypt/decrypt calls in src/config/schema.rs (rustfmt)
- Box::pin large onboard futures to fix clippy::large_futures in src/main.rs
These violations were blocking CI lint checks.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Simian Astronaut 7 <simianastronaut7@gmail.com>
The daemon previously only handled SIGINT (Ctrl+C), ignoring SIGTERM
which is the standard termination signal used by Docker, Kubernetes,
and systemd. This caused containers to wait for the grace period
then be force-killed with SIGKILL.
Now the daemon handles both SIGINT and SIGTERM for graceful shutdown.
Fixes #2529
The repository uses master as the default branch, but many documentation
files and scripts were referencing main branch URLs that would 404.
Updated all references from zeroclaw-labs/zeroclaw/main to
zeroclaw-labs/zeroclaw/master in README files and documentation.
Fixes #2929
Fixes #3061
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Adds a prominent Branching Model section to CONTRIBUTING.md explaining:
- master is the sole default branch (main has been removed)
- Contributors should use feat/* and fix/* branches off master
- Historical context for the migration (refs #2929, #3061, #3194)
Also updates fork workflow instructions to show both feat/ and fix/ prefixes.
- Strip `<think>...</think>` blocks in parse_tool_calls(), XmlToolDispatcher,
and OllamaProvider before processing tool-call XML
- Add effective_content() fallback: when content is empty after stripping
think tags, check the thinking field for tool-call XML
- Add strip_think_tags() to ollama.rs, loop_.rs, and dispatcher.rs
- Add comprehensive tests for think-tag stripping and tool-call parsing
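A hedged sketch of a strip_think_tags() helper matching the described behavior (the real implementation may differ, e.g. in how it treats unclosed tags):

```rust
fn strip_think_tags(content: &str) -> String {
    let mut out = String::with_capacity(content.len());
    let mut rest = content;
    while let Some(start) = rest.find("<think>") {
        out.push_str(&rest[..start]);
        match rest[start..].find("</think>") {
            // Skip past the closing tag and keep scanning.
            Some(end) => rest = &rest[start + end + "</think>".len()..],
            // Unclosed tag: drop the remainder (a sketch-level choice).
            None => return out,
        }
    }
    out.push_str(rest);
    out
}
```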
Fixes #3079
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- #3009: Add handle_api_integrations_settings endpoint returning JSON with
per-integration enabled/category/status so /api/integrations/settings no
longer falls through to the SPA static handler.
- #3010: Extract Sec-WebSocket-Protocol header in handle_ws_chat and echo
back "zeroclaw.v1" via ws.protocols() when the client requests it.
- #3038: Generate a persistent session_id (crypto.randomUUID stored in
sessionStorage) in the web WS client and pass it as a query parameter.
Add session_id: Option<String> to WsQuery on the server side so the
backend can key conversations by session across reconnects.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Replace single-line <input> with auto-expanding <textarea> (min 44px,
max 200px) that resizes on each keystroke. Add a copy button that
appears on hover for every message bubble, showing a checkmark on
successful clipboard write.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add a test verifying Telegram bot_token is encrypted on save and
decrypted on load, and add .gitignore entries for local state backups.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Wraps the WhatsApp Web listen() in a reconnect loop. When Event::LoggedOut
fires, the bot is torn down, stale session DB files are deleted, and after
exponential backoff (3s base, 300s cap, max 10 retries) the loop restarts,
triggering fresh QR pairing.
- Broadcast channel for logout signaling from event handler to listen loop
- Session file cleanup (primary + WAL + SHM) only on explicit LoggedOut
- Proper resource ordering: client lock, abort handle, drop bot/device
- Tests for retry delay, counter, session purge, and file-path helpers
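The delay schedule, sketched from the stated constants (doubling per attempt is an assumption; the commit only gives base, cap, and retry limit):

```rust
use std::time::Duration;

fn retry_delay(attempt: u32) -> Option<Duration> {
    const BASE_SECS: u64 = 3; // first delay
    const CAP_SECS: u64 = 300; // ceiling
    const MAX_RETRIES: u32 = 10;
    if attempt >= MAX_RETRIES {
        return None; // give up and require manual re-pairing
    }
    let secs = BASE_SECS
        .saturating_mul(1u64 << attempt.min(16))
        .min(CAP_SECS);
    Some(Duration::from_secs(secs))
}
```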
Adds symmetric encrypt/decrypt calls for all channel secret fields in
Config::save() and Config::load_or_init(). Previously only nostr.private_key
was handled, leaving all other channel secrets (bot_token, app_token,
access_token, api_token, password, etc.) and gateway.paired_tokens stored
as plaintext when secrets.encrypt = true.
Closes #3175, closes #3173.
Co-authored-by: jameslcowan
Reverts Dockerfile from rust:1.94-slim to rust:1.93-slim.
Rust 1.94 triggers a recursion limit overflow in matrix-sdk 0.16.0
(rust-lang/rust#152942, matrix-org/matrix-rust-sdk#6254). No upstream
fix available yet. CI uses Rust 1.92 and is unaffected.
Fixes #3207
Adds opencode-go as a first-class provider with dedicated API endpoint,
env var, onboarding wizard wiring, and test coverage.
CI failures are pre-existing on master (Rust 1.94 formatting/lint changes per #3207).
The live tool call notifications feature sends extra messages to the
channel when tools are invoked. Update 5 tests that assumed exactly
1 sent message to instead check the last message in the list.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Enable a single Matrix bot instance to respond in multiple rooms:
- Disable the room_id filter so messages from all joined rooms are processed
- Embed room_id in reply_target as "user||room_id" for routing replies
- Include room_id in channel field for per-room conversation isolation
- Extract room_id from recipient in send() for correct message routing
The configured room_id still serves as a fallback for direct sends
without a "||" separator in the recipient.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Add @theonlyhennygod as first-listed code owner on all CODEOWNERS paths
- Add SimianAstronaut7 as maintainer with PR approval authority in docs
- Normalize WORKFLOW_OWNER_LOGINS casing to canonical GitHub logins
Implement pin_message and unpin_message for the Matrix channel using
the m.room.pinned_events state event. Adds default no-op trait methods
to the Channel trait so other channel implementations are unaffected.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Implement add_reaction/remove_reaction for Matrix channel using
ReactionEventContent and redaction. Add threading support via
Relation::Thread in send() and thread_ts extraction from incoming
messages, enabling threaded conversations.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Add ChannelNotifyObserver that wraps the observer to forward tool-call
events as real-time threaded messages on messaging channels. Include
tool arguments (truncated) in ToolCallStart events for better
visibility into what tools are doing. Auto-thread final replies when
tools were used.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
The system prompt is built once at daemon startup and cached. The
"Current Date & Time" section becomes stale immediately. This patch
replaces it with a fresh timestamp every time build_channel_system_prompt
is called (i.e. per incoming message).
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
When embedding_provider differs from default_provider (e.g. default=gemini,
embedding=openai), the caller-supplied api_key belongs to the chat provider.
Passing it to the embedding endpoint causes 401 Unauthorized (gemini key
sent to api.openai.com/v1/embeddings).
Add embedding_provider_env_key() which looks up OPENAI_API_KEY,
OPENROUTER_API_KEY, or COHERE_API_KEY before falling back to the
caller-supplied key. This matches the provider-specific env var resolution
in providers/mod.rs without introducing cross-module coupling.
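A hedged sketch of the lookup (the provider-to-env-var mapping and fallback follow the description above; the real signature may differ):

```rust
fn embedding_provider_env_key(provider: &str, caller_key: &str) -> String {
    let env_var = match provider {
        "openai" => Some("OPENAI_API_KEY"),
        "openrouter" => Some("OPENROUTER_API_KEY"),
        "cohere" => Some("COHERE_API_KEY"),
        _ => None,
    };
    env_var
        .and_then(|name| std::env::var(name).ok())
        .filter(|k| !k.is_empty())
        // Fall back to the caller-supplied (chat-provider) key.
        .unwrap_or_else(|| caller_key.to_string())
}
```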
Also add config_secrets_survive_save_load_roundtrip test: full save→load
cycle with channel credentials (telegram, discord, slack bot_token,
slack app_token) and gateway paired_tokens, verifying that enc2: values
are correctly decrypted by Config::load_or_init(). Regression guard for
issues #3173 and #3175.
Closes #3083
Co-authored-by: ZeroClaw Bot <zeroclaw_bot@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
The codebase uses both #[cfg(test)] blocks in source files (194 files)
and separate tests.rs files (src/agent/tests.rs). Document both.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- `/model <name>` now auto-resolves provider from configured model_routes
by matching model name or hint, fixing 404 when switching to models on
different providers (e.g. `/model kimi-k2.5` with anthropic default)
- Conversation history is no longer cleared on `/model` or `/models` —
users can explicitly reset via `/new`
- Matrix channel now supports `/model`, `/models`, and `/new` commands
- `/model` (no args) lists configured model routes with hints
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reorganize the flat tests/ directory into a structured taxonomy:
- component/ (174 tests): single-subsystem tests
- integration/ (65 tests): multi-component wiring tests
- system/ (5 tests): full-stack with real SQLite memory
- live/ (5 tests, #[ignore]): real external API tests
- manual/: shell/Python scripts for human-driven testing
- support/: shared mock infrastructure (MockProvider, EchoTool, etc.)
- fixtures/: JSON trace fixtures for declarative LLM replay
Deduplicates ~180 lines of identical mock code from agent_e2e.rs and
agent_loop_robustness.rs into shared support modules. Adds new component
tests for gateway (HMAC signature verification) and security (config
defaults, TOML round-trips). Adds system-level tests exercising the
full agent turn cycle with real SQLite memory.
Updates Cargo.toml with [[test]] entries, dev/ci.sh with level-specific
commands, and docs/contributing/testing.md with a comprehensive testing
guide.
Zero impact on src/.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The onboard wizard futures exceed clippy's large_futures threshold
(16KB+). Wrap in Box::pin to heap-allocate and fix the lint.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Import ordering in config/mod.rs and line-wrapping in config/schema.rs
were left unformatted by PR #2994. Run cargo fmt to fix.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
HTTP clients had no timeouts and could hang forever; add 30s total and
10s connect timeouts. Replace clock-nanos-based jitter with rand::random
for proper randomness. Add a 1000-entry cap to the user display name
cache with expired-entry pruning. Fix truncate_text to avoid scanning
the full string twice when checking for truncation.
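The timeout setup, sketched (assuming a reqwest client; the actual builder may differ):

```rust
use std::time::Duration;

fn http_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .timeout(Duration::from_secs(30)) // total per-request budget
        .connect_timeout(Duration::from_secs(10)) // TCP/TLS setup budget
        .build()
}
```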
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Google TTS was passing the API key as a URL query parameter, which can
appear in logs and proxy access records. Move it to the x-goog-api-key
header instead. Add input validation for ElevenLabs voice IDs (reject
non-alphanumeric/dash/underscore characters) and restrict Edge TTS
binary_path to allowed basenames (edge-tts, edge-playback).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Slack private file fetching was sending the bot token on every redirect
hop. Since Slack CDN redirects use pre-signed URLs, sending the bearer
token to CDN hosts is unnecessary credential exposure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Admin endpoints (/admin/shutdown, /admin/paircode, /admin/paircode/new) were
completely unauthenticated, allowing any network client to shut down the gateway
or read/generate pairing codes. Add require_localhost() guard that returns 403
for non-loopback IPs.
Replace std::process::exit(0) in shutdown handler with a tokio watch channel
for graceful shutdown, allowing proper destructor cleanup and connection
draining. Replace the 500ms sleep race in the restart command with a poll loop
that waits for the port to actually become free.
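A sketch of the watch-channel handoff (handler wiring omitted):

```rust
use tokio::sync::watch;

// Called from the localhost-guarded /admin/shutdown handler instead
// of std::process::exit(0).
fn request_shutdown(tx: &watch::Sender<bool>) {
    let _ = tx.send(true);
}

// Awaited by the serve loop; returning lets destructors run and
// in-flight connections drain before the process exits.
async fn wait_for_shutdown(mut rx: watch::Receiver<bool>) {
    while rx.changed().await.is_ok() {
        if *rx.borrow() {
            break;
        }
    }
}
```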
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- config/mod.rs: reorder TTS imports alphabetically (rustfmt)
- config/schema.rs: collapse single-arg encrypt/decrypt calls (rustfmt)
- main.rs: Box::pin large onboard futures to fix clippy::large_futures
These violations were introduced by the TTS providers merge (#2994)
and are blocking CI lint on all open PRs targeting master.
The TTS providers merge (#2994) introduced import ordering and
line-wrapping that doesn't match rustfmt output, causing lint
failures on all open PRs targeting master.
- Add --new flag to GetPaircode command in src/lib.rs
- Update main.rs to handle GetPaircode { new } parameter
- Add /admin/paircode/new POST endpoint in gateway/mod.rs
- Enhance documentation for constant_time_eq security function
Refs: #3015
The bitwise & operator is intentional in constant_time_eq() to prevent
timing side-channel attacks. Both comparisons must always execute to
ensure constant-time behavior regardless of the first comparison result.
- Revert logical && back to bitwise &
- Add #[allow(clippy::needless_bitwise_bool)] annotation
- Add explanatory comment documenting the intentional use
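The pattern being preserved, sketched (simplified relative to the real function):

```rust
#[allow(clippy::needless_bitwise_bool)]
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    let mut diff = 0u8;
    for i in 0..a.len().min(b.len()) {
        diff |= a[i] ^ b[i];
    }
    // Bitwise & evaluates both operands unconditionally; && would
    // short-circuit and reintroduce a timing side channel.
    (a.len() == b.len()) & (diff == 0)
}
```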
- Add security warning for 0.0.0.0 binding in help text
- Implement proper gateway shutdown before restart via /admin/shutdown endpoint
- Fetch live pairing code from running gateway via /admin/paircode endpoint
- Extract duplicate code into helper functions
- Fix clippy warnings
- Fix Critical: Split illegal or-pattern (Some(...) | None) into separate match arms
- Fix Major: Implement restart command with graceful shutdown check
- Fix Major: Improve get-paircode to check gateway status and provide clear instructions
- Fix Minor: Update help text to document public-bind precondition
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add GatewayCommands enum with three subcommands:
- start: Start the gateway server (default behavior preserved)
- restart: Restart the gateway server
- get-paircode: Show current pairing status without restarting
This improves gateway management by allowing users to:
1. Restart gateway without manual stop/start
2. Check pairing status without disrupting running gateway
Closes #3014
Closes #3015
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixes #3063
The Dockerfile was missing the COPY directive for the data/ directory,
which contains security/attack-corpus-v1.jsonl required by the
semantic guard feature via include_str!() at compile time.
This caused Docker builds to fail with:
error: couldn't read src/security/../../data/security/attack-corpus-v1.jsonl
Risk: low (build fix only, no runtime behavior change)
- Dockerfile: replace heredoc with printf for config.toml to avoid
Docker parse error (unknown instruction WORKSPACE_DIR)
- install.sh: set DOCKER_BUILDKIT=1 when building image so RUN --mount works
Several files in docs/contributing/ referenced other docs files using
incorrect paths like "docs/pr-workflow.md" when the files are actually
in the same directory (docs/contributing/).
Changes:
- docs/contributing/README.md: fix Suggested Reading Order paths
- docs/contributing/pr-workflow.md: fix link text references
- docs/contributing/reviewer-playbook.md: fix link text reference
- docs/vi/contributing/README.md: fix Vietnamese translation paths
- docs/i18n/vi/contributing/README.md: fix Vietnamese i18n paths
- docs/setup-guides/zai-glm-setup.md: fix custom-providers.md link
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Modern macOS (Ventura+) stores iMessage content in the attributedBody
column as a binary typedstream blob rather than the text column. The
existing SQL filter `AND m.text IS NOT NULL` silently dropped all
incoming messages on affected systems.
Add a length-prefix extractor for the typedstream format and fall back
to attributedBody when text is NULL or empty. Includes real captured
blob fixtures and 14 new parser/integration tests.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The `examples/` directory no longer exists, causing
`source_does_not_use_legacy_reply_to_field` to panic in CI.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a Claude Code skill that helps users operate their ZeroClaw instance
through both the CLI and gateway API. The skill provides adaptive expertise
(adjusts detail level based on user signals), discovery logic (finds the
binary, checks gateway status), and covers all core operations: agent chat,
memory, cron, providers, config, SSE events, estop, and troubleshooting.
Also adds gitignore entry for skill eval workspaces.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Introduced a new function `check_api_key_prefix` to validate API key prefixes against their associated providers. This helps catch mismatches early in the process. Added unit tests to ensure correct functionality for various scenarios, including known and unknown key formats. This enhancement improves error handling and user guidance when incorrect provider keys are used.
Add build.rs script that creates web/dist/ if missing, preventing
build failures when the web dashboard hasn't been pre-built.
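A sketch of such a build script (the real one may emit different rerun hints):

```rust
// build.rs
use std::{fs, path::Path};

fn main() {
    let dist = Path::new("web/dist");
    if !dist.exists() {
        // Placeholder so RustEmbed's compile-time directory include
        // succeeds even when the dashboard hasn't been built.
        fs::create_dir_all(dist).expect("create web/dist placeholder");
    }
    println!("cargo:rerun-if-changed=web/dist");
}
```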
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Introduced a new document, `extension-examples.md`, providing code examples for implementing custom extensions in ZeroClaw.
- Updated `SUMMARY.md` and `README.md` to include links to the new examples.
- Modified `change-playbooks.md` to reference the new examples for better guidance on extension implementation.
- Removed outdated example files for custom channels, memory, providers, and tools to streamline documentation.
The taiki-e/install-action is likely not on the org/repo Actions
allowlist, causing startup_failure for the entire workflow. Revert
to cargo install for cargo-audit and cargo-deny.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- README.ar.md: translate hero performance callout from English to Arabic
- README.nl.md: fix "Implantatie" → "Imitatie" in anti-impersonation heading
- README.tr.md: fix ambiguous security wording for native runtime sandbox
- README.uk.md: update "проект" → "проєкт" (modern Ukrainian orthography)
- README.fr/vi/ja/ru/zh-CN.md: update 6-language selector to full 31-language
selector matching README.md for navigation parity
- README.de.md: add note clarifying docs links route to English sources
- README.el.md: add note marking this as a condensed translation with pointer
to full English README
Addresses review comments from PR #3087.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Instead of each build matrix job independently installing Node.js and
running npm ci/build, extract web dashboard build into a single job
that uploads web/dist/ as an artifact. Build jobs download it before
cargo build. Reduces total CI time by ~3 Node installs + builds.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
RustEmbed requires web/dist/ to exist at compile time. The PR checks
workflow used a placeholder mkdir, but release workflows need real
built assets since they produce the distributed binary. Add Node.js
setup and npm ci/build before cargo build in all three release/build
workflows.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
chore: project cleanup — restructure docs, rename firmware, update CI workflows
- Base branch target (`master` for all contributions): `master`
- Problem: Repository structure had grown organically — docs were flat, firmware directories had redundant prefixes, CI workflow names were unclear, and stale build artifacts (`web/dist`) were tracked.
- Why it matters: Cleaner repo structure improves contributor onboarding, reduces confusion, and establishes consistent naming conventions across the project.
- What changed: Restructured `docs/` into topic-based subdirectories (setup-guides, reference, ops, security, hardware, contributing, maintainers); renamed firmware crate directories to drop `zeroclaw-` prefix; renamed CI workflow files for clarity; migrated GitHub URLs from `theonlyhennygod` to `zeroclaw-labs`; added `.vscode` config; added Claude skills (github-issue, github-pr, skill-creator); slimmed down AGENTS.md/CLAUDE.md; unified install scripts; removed tracked `web/dist`; relocated test scripts.
- What did **not** change (scope boundary): No runtime behavior, no Rust `src/` logic changes beyond path/URL reference updates in peripherals and providers.
Add composite Gate job to checks-on-pr.yml so branch protection
only needs a single required check. Replace cargo-install with
taiki-e/install-action for cargo-audit and cargo-deny to cut
minutes off every PR run. Mark CI/CD P1/P2 findings as resolved
in refactor-candidates.md.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Narrow workflow-level permissions to contents:read and grant
write access only to the specific jobs that need it (publish
gets contents:write, docker gets packages:write). Reduces blast
radius if a build step is compromised (P1 finding).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace mutable version tags with immutable commit SHAs to prevent
tag-hijacking supply chain attacks (P1 finding).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Apply cargo fmt to fix formatting diffs in openrouter.rs and serial.rs.
Add web/dist placeholder step to lint, test, and build jobs so
RustEmbed compiles without the gitignored frontend assets.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Introduced extensions.json to recommend essential Rust-related extensions.
- Added launch.json for debugging configurations with LLDB for various components.
- Created settings.json to configure editor preferences and Rust analyzer settings.
- Included tasks.json to define build, lint, test, and CI tasks for the project.
- Rename all 4 workflow files to match trigger and purpose
- Expand PR quality gate with dedicated lint and security audit jobs
- Align workflow display names, concurrency groups, and all doc references
- Align src/peripherals/ and docs/hardware/ with the firmware directory renames
- Covers both compiled references (include_str!, constants) and documentation
- Move `recompute_contributor_tiers.sh`, which automates labeling contributors based on their merged PR counts.
- Removed web/dist from git tracking (it had been committed by mistake)
Move RUN_TESTS.md and TESTING_TELEGRAM.md to docs/contributing/, and move
quick_test.sh, test_telegram_integration.sh, generate_test_messages.py to
tests/telegram/. Move scripts/test_dockerignore.sh to tests/. Update internal
cross-references to reflect new paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move zeroclaw.png and zero-claw.jpeg to docs/assets/, CLA.md to
docs/contributing/cla.md, and TRADEMARK.md to docs/project/trademark.md.
Update all cross-references in root README files (en, fr, ja, ru, vi, zh-CN)
to point to new locations.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds pluggable Text-to-Speech subsystem with TtsProvider trait,
TtsManager for provider selection, and per-provider config structs.
Includes secret encryption for TTS API keys.
- Add .zeroclaw/* to .gitignore to exclude ZeroClaw files from version control.
- Update CODEOWNERS to include @SimianAstronaut7 as a maintainer alongside @jordanthejet.
- Change dependabot target branch from dev to master for all update configurations.
- Revise master-branch-flow documentation to clarify active workflows and triggers.
The macos-13 runner is deprecated by GitHub Actions, causing the
x86_64-apple-darwin build to instantly cancel with no runner assigned.
This cascades to skip publish and docker jobs since they depend on all
matrix builds succeeding.
Intel Macs have been EOL since 2022; aarch64-apple-darwin via macos-14
covers all current macOS users (Rosetta handles x86_64 if needed).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add missing Homebrew Core formula section (step 6)
- Add pub-homebrew-core.yml to workflow contract listing
- Update GHCR tag verification to include SHA tag detail
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add missing Rust-gate parity line to CI merge-blocking section
- Add Sec Vorpal Reviewdog and Pub Homebrew Core workflow entries
- Add Pub Homebrew Core and Sec Vorpal Reviewdog to trigger map
- Update Dependabot trigger wording to match English (PRs target master)
- Add Homebrew formula triage entry and fix numbering
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix Docker trigger semantics in Vietnamese ci-map docs to match
canonical English wording (publish on tag push v*, smoke on master PRs)
- Add missing Rust-gate parity bullet to Vietnamese pr-workflow docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- docs/release-process.md: replace all `main` with `master` (lines 10,
37, 45, 47, 54, 64, 73)
- docs/pr-workflow.md: fix "behind main" on line 219 to "behind master"
- docs/vi/pr-workflow.md: fix lines 216 and 273 (main -> master)
- docs/i18n/vi/pr-workflow.md: same fixes for canonical vi locale
- scripts/bootstrap.sh: fix raw URL from /main/ to /master/ (line 61),
pin git clone to --branch master (line 909)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace all dev/main branch references with master across docs,
templates, CI docs, and localized files (en, vi)
- Remove dev->main promotion model (no more Main Promotion Gate)
- Rename main-branch-flow.md to master-branch-flow.md and rewrite
for single-branch workflow
- Update maintainers to theonlyhennygod and jordanthejet
- Update CODEOWNERS: replace @chumyin with @jordanthejet
- Update WORKFLOW_OWNER_LOGINS fallback references
- Update CODE_OF_CONDUCT enforcement contact to @argenistherose
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Changed base_branches from main/dev to master
- Removed unsupported base_branch_analysis field
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add concurrency group to promote-release workflow
- Fix markdown emphasis style in README (MD049)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
sccache GHA cache backend is fragile — fails the entire build when
GitHub's artifact cache service is unavailable. Removed in favor of
Swatinem/rust-cache which handles failures gracefully.
Kept: mold linker, cargo-nextest, CARGO_INCREMENTAL=0.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- sccache: compiler caching for test builds (11-14% faster compilation)
- mold: faster linker on Linux builds
- cargo-nextest: parallel test runner (up to 35-60% faster tests)
- CARGO_INCREMENTAL=0: disable incremental compilation overhead in CI
Allowlist impact: added mozilla-actions/sccache-action@*
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Auto CI on PRs builds linux x86_64 and macOS arm64 only.
Remaining targets (linux arm64, macOS x86, Windows) available via
manual workflow_dispatch in ci-full.yml.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- CI now builds across all 5 targets (linux x86/arm64, macOS x86/arm64,
Windows) matching the release matrix
- Fix chat_fails_without_credentials test to accept "builder error"
which occurs in CI environments without native TLS
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The default branch is master, not main. Updates CI and Beta Release
workflow triggers and corresponding docs references.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reflects the complete workflow overhaul from 22 workflows to 3.
Updates allowlist to match current action usage and removes stale entries.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove 22 workflow files and 9 JS scripts. Replace with 3 workflows:
- ci.yml: test + build on PRs
- release.yml: auto beta release on merge to main
- promote-release.yml: manual stable release promotion
Update README Development section to document the new CI/CD system.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
File a structured GitHub issue (bug report or feature request) for ZeroClaw interactively from Claude Code.
## When to Use
Trigger when the user wants to file a GitHub issue, report a bug, or request a feature for ZeroClaw. Keywords: "file issue", "report bug", "feature request", "open issue", "create issue", "github issue".
## Instructions
You are filing a GitHub issue against the ZeroClaw repository using structured issue forms. Follow this workflow exactly.
### Step 1: Detect Issue Type and Read the Template
Determine from the user's message whether this is a **bug report** or **feature request**.
- If unclear, use AskUserQuestion to ask: "Is this a bug report or a feature request?"
Then read the corresponding issue template to understand the required fields:
- The `title` prefix (e.g. `[Bug]: `, `[Feature]: `)
- The `labels` array
- Each field in the `body` array: its `type` (dropdown, textarea, input, checkboxes, markdown), `id`, `attributes.label`, `attributes.options` (for dropdowns), `attributes.description`, `attributes.placeholder`, and `validations.required`
This is the source of truth for what fields exist, what they're called, what options are available, and which are required. Do not assume or hardcode any field names or options — always derive them from the template file.
### Step 2: Auto-Gather Context
Before asking the user anything, silently gather environment and repo context:
Also read recently changed files to infer the affected component and architecture impact.
### Step 3: Pre-Fill and Present the Form
Using the parsed template fields and gathered context, draft values for ALL fields from the template:
- **dropdown** fields: select the most likely option from `attributes.options` based on context. For dropdowns where you're uncertain, note your best guess and flag it for the user.
- **textarea** fields: draft content based on the user's description, git context, and the field's `attributes.description`/`attributes.placeholder` for guidance on what's expected.
- **input** fields: fill with auto-detected values (versions, OS) or draft from user context.
- **checkboxes** fields: auto-check all items (the skill itself ensures compliance with the stated checks).
- **markdown** fields: skip these — they're informational headers, not form inputs.
- **optional fields** (where `validations.required` is false): fill if there's enough context, otherwise note "(optional — not enough context to fill)".
Present the complete draft to the user in a clean readable format:
- "Here's the pre-filled issue. Please review and let me know what to change, or say 'submit' to file it."
If the user requests changes, update the draft and re-present. Iterate until the user approves.
### Step 4: Scope Guard
Before final submission, analyze the collected content for scope creep:
- Does the bug report describe multiple independent defects?
- Does the feature request bundle unrelated changes?
If multi-concept issues are detected:
1. Inform the user: "This issue appears to cover multiple distinct topics. Focused, single-concept issues are strongly preferred and more likely to be accepted."
2. Break down the distinct groups found.
3. Offer to file separate issues for each group, reusing shared context (environment, etc.).
4. Let the user decide: proceed as-is or split.
### Step 5: Construct Issue Body
Build the issue body as markdown sections matching GitHub's form-field rendering format. GitHub renders form-submitted issues with `### <Field Label>` sections, so use that exact structure.
For each non-markdown field from the template, in order:
```markdown
### <attributes.label>
<value>
```
For optional fields with no content, use `_No response_` as the value (this matches GitHub's native rendering for empty optional fields).
For checkbox fields, render each option as:
```markdown
- [X] <option label text>
```
### Step 6: Final Preview and Submit
Show the final constructed issue (title + labels + full body) for one last confirmation.
Then submit using a HEREDOC for the body to preserve formatting:
Open or update a GitHub Pull Request for ZeroClaw. Handles creating new PRs with a fully filled-out template body, and updating existing PRs (title, body sections, labels, comments). Use this skill whenever the user wants to open a PR, create a pull request, update a PR, edit PR description, add labels to a PR, or sync a PR after new commits — even if they don't say "PR" explicitly (e.g., "submit this for review", "push and open for merge").
## Instructions
This skill supports two modes: **Open** (create a new PR) and **Update** (edit an existing PR). Detect the mode from context — if there's already an open PR for the current branch and the user didn't say "open a new PR", default to update mode.
The PR template at `.github/pull_request_template.md` is the source of truth for the PR body structure. Read it every time — never assume or hardcode section names, fields, or their order. The template may change over time and the skill should always reflect its current state.
---
## Shared: Read the PR Template
Before opening or updating a PR body, read `.github/pull_request_template.md` and parse it to understand:
- The `## ` section headers (these are the top-level sections of the PR body)
- The bullet points, fields, and prompts within each section
- Which sections are marked `(required)` vs optional/recommended
- Any inline formatting conventions (backtick options, Yes/No fields, etc.)
This parsed structure drives how you fill, present, and edit the PR body.
---
## Mode: Open a New PR
### Step 1: Gather Context
Collect information to pre-fill the PR body. Run these in parallel:
Also review the changed files and commit messages to understand the nature of the change (bug fix, feature, refactor, docs, chore, etc.) and which subsystems are affected.
### Step 2: Pre-Fill the Template
Using the parsed template structure and gathered context, draft a complete PR body:
- For each `## ` section from the template, fill in the bullet points and fields based on context from the commits, diff, and changed files.
- Use the field descriptions and placeholder text in the template as guidance for what each field expects.
- For Yes/No fields, infer from the diff (e.g., if no files in `src/security/` changed, security impact is likely all No).
- For required sections, always provide a substantive answer. For optional sections, fill if there's enough context, otherwise leave the template prompts in place.
- Draft a conventional commit-style PR title based on the changes (e.g., `feat(provider): add retry budget override`, `fix(channel): handle disconnect gracefully`, `chore(ci): update workflow targets`).
### Step 3: Present Draft for Review
Show the user the complete draft:
```
## PR Draft: <title>
**Branch**: <head> -> master
**Labels**: <suggested labels>
<full body with all sections filled>
```
Ask the user to review: "Here's the pre-filled PR. Review and let me know what to change, or say 'submit' to open it."
2. Re-read the PR template. Analyze which sections are now stale based on the new changes — use the template's section names and field descriptions to identify what needs updating rather than relying on hardcoded assumptions.
3. Present proposed updates section-by-section and confirm before applying.
### Step 6: Apply Updates
For title/label changes, use direct `gh pr edit` flags.
- **Always read `.github/pull_request_template.md`** before filling or editing a PR body. Never assume section names, fields, or structure — derive everything from the template. It's the source of truth and may change.
description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
---
# Skill Creator
A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this:
- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft some quantitative evals if there aren't any (if some already exist, use them as-is or modify them as needed). Then explain them to the user
- Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale
Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
Cool? Cool.
## Communicating with the user
The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
It's OK to briefly explain terms if you're in doubt, and feel free to clarify terms with a short definition if you're unsure if the user will get it.
---
## Creating a skill
### Capture Intent
Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.
1. What should this skill enable Claude to do?
2. When should this skill trigger? (what user phrases/contexts)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
### Interview and Research
Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
### Write the SKILL.md
Based on the user interview, fill in these components:
- **name**: Skill identifier
- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)
These word counts are approximate and you can feel free to go longer if needed.
**Key patterns:**
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents
**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
├── aws.md
├── gcp.md
└── azure.md
```
Claude reads only the relevant reference file.
#### Principle of Lack of Surprise
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should not surprise the user in their intent if described. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
#### Writing Patterns
Prefer using the imperative form in instructions.
**Defining output formats** - You can do it like this:
```markdown
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
```
**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
### Test Cases
After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
```json
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's task prompt",
"expected_output": "Description of expected result",
"files": []
}
]
}
```
See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
## Running and evaluating test cases
This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
**With-skill run:**
```
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about, e.g. "the .docx file", "the final CSV">
```
**Baseline run** (same prompt, but the baseline depends on context):
- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.
Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
```json
{
"eval_id": 0,
"eval_name": "descriptive-name-here",
"prompt": "The user's task prompt",
"assertions": []
}
```
### Step 2: While runs are in progress, draft assertions
Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
### Step 3: As runs complete, capture timing data
When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
```json
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3
}
```
This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
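A minimal sketch of saving one notification's data (the numbers and run path are illustrative; assumes `jq` is available):
```bash
# total_tokens and duration_ms come straight from the task notification.
jq -n --argjson tokens 84852 --argjson ms 23332 '{
  total_tokens: $tokens,
  duration_ms: $ms,
  total_duration_seconds: (($ms / 1000 * 10 | round) / 10)
}' > "$WORKSPACE/iteration-1/eval-0/with_skill/timing.json"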
### Step 4: Grade, aggregate, and launch the viewer
Once all runs are done:
1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory. This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
Put each with_skill version before its baseline counterpart.
3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
4. **Launch the viewer** with both qualitative outputs and quantitative data; a sketch of the invocation follows this list. For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
**Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.
Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
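A sketch of the viewer launch from step 4 (only `generate_review.py` and its `--previous-workspace`/`--static` flags are confirmed by this document; the argument layout is an assumption):
```bash
# Hypothetical invocation; flags are documented above, positional args assumed.
python eval-viewer/generate_review.py "$WORKSPACE/iteration-2" \
  --previous-workspace "$WORKSPACE/iteration-1"
# In headless environments (Cowork), add: --static "$WORKSPACE/review.html"
```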
### What the user sees in the viewer
The "Outputs" tab shows one test case at a time:
- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
### Step 5: Read the feedback
When the user tells you they're done, read `feedback.json`:
```json
{
"reviews": [
{"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
],
"status": "complete"
}
```
Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
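A quick way to pull out just the actionable items (a sketch; assumes `jq` and the schema above):
```bash
# Print only the runs where the user left specific feedback.
jq -r '.reviews[] | select(.feedback != "") | "\(.run_id): \(.feedback)"' \
  "$WORKSPACE/feedback.json"
```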
Kill the viewer server when you're done with it:
```bash
kill $VIEWER_PID 2>/dev/null
```
---
## Improving the skill
This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
### How to think about improvements
1. **Generalize from the feedback.** The big picture: we're trying to create skills that can be used a million times (maybe literally, maybe even more, who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over because it helps move faster. The user knows these examples inside and out, and it's quick for them to assess new outputs. But if the skill you two are co-developing works only for those examples, it's useless. If some issue is being stubborn, rather than putting in fiddly, overfitted changes or oppressively constrictive MUSTs, try branching out: use different metaphors, or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.
2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and, when given a good harness, can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task: why the user wrote what they wrote, and what they actually meant, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag. If possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
### The iteration loop
After improving the skill:
1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat
Keep going until:
- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress
---
## Advanced: Blind comparison
For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
---
## Description Optimization
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
### Step 1: Generate trigger eval queries
Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
```json
[
{"query": "the user prompt", "should_trigger": true},
{"query": "another user prompt, one that should NOT trigger", "should_trigger": false}
]
```
The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
### Step 2: Review with user
Present the eval set to the user for review using the HTML template:
1. Read the template from `assets/eval_review.html`
2. Replace the placeholders:
- `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
- `__SKILL_NAME_PLACEHOLDER__` → the skill's name
- `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
This step matters — bad eval queries lead to bad descriptions.
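The placeholder substitution in steps 1-3 is easy to get wrong with `sed` because the eval JSON spans multiple lines; a minimal sketch using an inline Python call (file paths and the skill name are hypothetical):
```bash
python3 - <<'EOF'
from pathlib import Path

template = Path("assets/eval_review.html").read_text()
evals = Path("/tmp/trigger_evals.json").read_text()  # JSON array from Step 1

html = (template
        .replace("__EVAL_DATA_PLACEHOLDER__", evals)  # raw JS value, no quotes
        .replace("__SKILL_NAME_PLACEHOLDER__", "my-skill")
        .replace("__SKILL_DESCRIPTION_PLACEHOLDER__", "Current description text"))

Path("/tmp/eval_review_my-skill.html").write_text(html)
EOF
open /tmp/eval_review_my-skill.html
```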
### Step 3: Run the optimization loop
Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
Save the eval set to the workspace, then run in the background:
```bash
python -m scripts.run_loop \
--eval-set <path-to-trigger-eval.json> \
--skill-path <path-to-skill> \
--model <model-id-powering-this-session> \
--max-iterations 5 \
--verbose
```
Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
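A sketch of that background pattern (the log path is illustrative):
```bash
python -m scripts.run_loop --eval-set "$EVAL_SET" --skill-path "$SKILL" \
  --model "$MODEL" --max-iterations 5 --verbose \
  > "$WORKSPACE/opt_loop.log" 2>&1 &
LOOP_PID=$!
tail -n 5 "$WORKSPACE/opt_loop.log"   # re-run periodically for status updates
```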
This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
### How skill triggering works
Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
### Step 4: Apply the result
Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
---
### Package and Present (only if `present_files` tool is available)
Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
After packaging, direct the user to the resulting `.skill` file path so they can install it.
---
## Claude.ai-specific instructions
In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai.
**Blind comparison**: Requires subagents. Skip it.
**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.
**Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. In this case:
- **Preserve the original name.** Note the skill's directory name and `name` frontmatter field -- use them unchanged. E.g., if the installed skill is `research-helper`, output `research-helper.skill` (not `research-helper-v2`).
- **Copy to a writeable location before editing.** The installed skill path may be read-only. Copy to `/tmp/skill-name/`, edit there, and package from the copy.
- **If packaging manually, stage in `/tmp/` first**, then copy to the output directory -- direct writes may fail due to permissions.
---
## Cowork-Specific Instructions
If you're in Cowork, the main things to know are:
- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than parallel.)
- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- For whatever reason, the Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so just to reiterate: whether you're in Cowork or in Claude Code, after running tests, you should always generate the eval viewer for the human to look at examples before revising the skill yourself and trying to make corrections, using `generate_review.py` (not writing your own boutique html code). Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating inputs yourself. You want to get them in front of the human ASAP!
- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
- Packaging works — `package_skill.py` just needs Python and a filesystem.
- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.
- **Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the claude.ai section above.
---
## Reference files
The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another
The references/ directory has additional documentation:
- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
---
Repeating one more time the core loop here for emphasis:
- Figure out what the skill is about
- Draft or edit the skill
- Run claude-with-access-to-the-skill on test prompts
- With the user, evaluate the outputs:
- Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
- Run quantitative evals
- Repeat until you and the user are satisfied
- Package the final skill and return it to the user.
Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.
Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.
## Role
After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?
## Inputs
You receive these parameters in your prompt:
- **winner**: "A" or "B" (from blind comparison)
- **winner_skill_path**: Path to the skill that produced the winning output
- **winner_transcript_path**: Path to the execution transcript for the winner
- **loser_skill_path**: Path to the skill that produced the losing output
- **loser_transcript_path**: Path to the execution transcript for the loser
- **comparison_result_path**: Path to the blind comparator's output JSON
- **output_path**: Where to save the analysis results
## Process
### Step 1: Read Comparison Result
1. Read the blind comparator's output at comparison_result_path
2. Note the winning side (A or B), the reasoning, and any scores
3. Understand what the comparator valued in the winning output
### Step 2: Read Both Skills
1. Read the winner skill's SKILL.md and key referenced files
2. Read the loser skill's SKILL.md and key referenced files
3. Identify structural differences:
- Instructions clarity and specificity
- Script/tool usage patterns
- Example coverage
- Edge case handling
### Step 3: Read Both Transcripts
1. Read the winner's transcript
2. Read the loser's transcript
3. Compare execution patterns:
- How closely did each follow their skill's instructions?
- What tools were used differently?
- Where did the loser diverge from optimal behavior?
- Did either encounter errors or make recovery attempts?
### Step 4: Analyze Instruction Following
For each transcript, evaluate:
- Did the agent follow the skill's explicit instructions?
- Did the agent use the skill's provided tools/scripts?
- Were there missed opportunities to leverage skill content?
- Did the agent add unnecessary steps not in the skill?
Score instruction following 1-10 and note specific issues.
### Step 5: Identify Winner Strengths
Determine what made the winner better:
- Clearer instructions that led to better behavior?
- Better scripts/tools that produced better output?
- More comprehensive examples that guided edge cases?
- Better error handling guidance?
Be specific. Quote from skills/transcripts where relevant.
### Step 6: Identify Loser Weaknesses
Determine what held the loser back:
- Ambiguous instructions that led to suboptimal choices?
- Missing tools/scripts that forced workarounds?
- Gaps in edge case coverage?
- Poor error handling that caused failures?
### Step 7: Generate Improvement Suggestions
Based on the analysis, produce actionable suggestions for improving the loser skill:
- Specific instruction changes to make
- Tools/scripts to add or modify
- Examples to include
- Edge cases to address
Prioritize by impact. Focus on changes that would have changed the outcome.
### Step 8: Write Analysis Results
Save structured analysis to `{output_path}`.
## Output Format
Write a JSON file with this structure:
```json
{
"comparison_summary": {
"winner": "A",
"winner_skill": "path/to/winner/skill",
"loser_skill": "path/to/loser/skill",
"comparator_reasoning": "Brief summary of why comparator chose winner"
},
"winner_strengths": [
"Clear step-by-step instructions for handling multi-page documents",
"Included validation script that caught formatting errors",
"Explicit guidance on fallback behavior when OCR fails"
],
"loser_weaknesses": [
"Vague instruction 'process the document appropriately' led to inconsistent behavior",
"No script for validation, agent had to improvise and made errors",
"No guidance on OCR failure, agent gave up instead of trying alternatives"
],
"instruction_following": {
"winner": {
"score": 9,
"issues": [
"Minor: skipped optional logging step"
]
},
"loser": {
"score": 6,
"issues": [
"Did not use the skill's formatting template",
"Invented own approach instead of following step 3",
"Missed the 'always validate output' instruction"
]
}
},
"improvement_suggestions": [
{
"priority": "high",
"category": "instructions",
"suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
"expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
},
{
"priority": "high",
"category": "tools",
"suggestion": "Add validate_output.py script similar to winner skill's validation approach",
"expected_impact": "Would catch formatting errors before final output"
"expected_impact": "Would prevent early failure on difficult documents"
}
],
"transcript_insights": {
"winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
"loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
}
}
```
## Guidelines
- **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear"
- **Be actionable**: Suggestions should be concrete changes, not vague advice
- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
- **Prioritize by impact**: Which changes would most likely have changed the outcome?
- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
- **Stay objective**: Analyze what happened, don't editorialize
- **Think about generalization**: Would this improvement help on other evals too?
## Categories for Suggestions
Use these categories to organize improvement suggestions:
| Category | Description |
|----------|-------------|
| `instructions` | Changes to the skill's prose instructions |
| `tools` | Scripts, templates, or utilities to add/modify |
| `examples` | Example inputs/outputs to include |
| `error_handling` | Guidance for handling failures |
| `structure` | Reorganization of skill content |
| `references` | External docs or resources to add |
## Priority Levels
- **high**: Would likely change the outcome of this comparison
- **medium**: Would improve quality but may not change win/loss
- **low**: Nice to have, marginal improvement
---
# Analyzing Benchmark Results
When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not suggest skill improvements.
## Role
Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.
## Inputs
You receive these parameters in your prompt:
- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
- **skill_path**: Path to the skill being benchmarked
- **output_path**: Where to save the notes (as JSON array of strings)
## Process
### Step 1: Read Benchmark Data
1. Read the benchmark.json containing all run results
2. Note the configurations tested (with_skill, without_skill)
3. Understand the run_summary aggregates already calculated
### Step 2: Analyze Per-Assertion Patterns
For each expectation across all runs:
- Does it **always pass** in both configurations? (may not differentiate skill value)
- Does it **always fail** in both configurations? (may be broken or beyond capability)
- Does it **always pass with skill but fail without**? (skill clearly adds value here)
- Does it **always fail with skill but pass without**? (skill may be hurting)
- Is it **highly variable**? (flaky expectation or non-deterministic behavior)
### Step 3: Analyze Cross-Eval Patterns
Look for patterns across evals:
- Are certain eval types consistently harder/easier?
- Do some evals show high variance while others are stable?
- Are there surprising results that contradict expectations?
### Step 4: Analyze Metrics Patterns
Look at time_seconds, tokens, tool_calls:
- Does the skill significantly increase execution time?
- Is there high variance in resource usage?
- Are there outlier runs that skew the aggregates?
### Step 5: Generate Notes
Write freeform observations as a list of strings. Each note should:
- State a specific observation
- Be grounded in the data (not speculation)
- Help the user understand something the aggregate metrics don't show
Examples:
- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
Compare two outputs WITHOUT knowing which skill produced them.
## Role
The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.
Your judgment is based purely on output quality and task completion.
## Inputs
You receive these parameters in your prompt:
- **output_a_path**: Path to the first output file or directory
- **output_b_path**: Path to the second output file or directory
- **eval_prompt**: The original task/prompt that was executed
- **expectations**: List of expectations to check (optional - may be empty)
## Process
### Step 1: Read Both Outputs
1. Examine output A (file or directory)
2. Examine output B (file or directory)
3. Note the type, structure, and content of each
4. If outputs are directories, examine all relevant files inside
### Step 2: Understand the Task
1. Read the eval_prompt carefully
2. Identify what the task requires:
- What should be produced?
- What qualities matter (accuracy, completeness, format)?
- What would distinguish a good output from a poor one?
### Step 3: Generate Evaluation Rubric
Based on the task, generate a rubric with two dimensions: **content** (correctness, completeness, accuracy) and **structure** (organization, formatting, usability), matching the output format below.
Be decisive - ties should be rare. One output is usually better, even if marginally.
### Step 7: Write Comparison Results
Save results to a JSON file at the path specified (or `comparison.json` if not specified).
## Output Format
Write a JSON file with this structure:
```json
{
"winner": "A",
"reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
"rubric": {
"A": {
"content": {
"correctness": 5,
"completeness": 5,
"accuracy": 4
},
"structure": {
"organization": 4,
"formatting": 5,
"usability": 4
},
"content_score": 4.7,
"structure_score": 4.3,
"overall_score": 9.0
},
"B": {
"content": {
"correctness": 3,
"completeness": 2,
"accuracy": 3
},
"structure": {
"organization": 3,
"formatting": 2,
"usability": 3
},
"content_score": 2.7,
"structure_score": 2.7,
"overall_score": 5.4
}
},
"output_quality": {
"A": {
"score": 9,
"strengths": ["Complete solution", "Well-formatted", "All fields present"],
"weaknesses": ["Minor style inconsistency in header"]
Evaluate expectations against an execution transcript and outputs.
## Role
The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment.
You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless — it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so.
## Inputs
You receive these parameters in your prompt:
- **expectations**: List of expectations to evaluate (strings)
- **transcript_path**: Path to the execution transcript (markdown file)
- **outputs_dir**: Directory containing output files from execution
## Process
### Step 1: Read the Transcript
1. Read the transcript file completely
2. Note the eval prompt, execution steps, and final result
3. Identify any issues or errors documented
### Step 2: Examine Output Files
1. List files in outputs_dir
2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt — don't rely solely on what the transcript says the executor produced.
3. Note contents, structure, and quality
### Step 3: Evaluate Each Assertion
For each expectation:
1. **Search for evidence** in the transcript and outputs
2. **Determine verdict**:
- **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance
- **FAIL**: No evidence, or evidence contradicts the expectation, or the evidence is superficial (e.g., correct filename but empty/wrong content)
3. **Cite the evidence**: Quote the specific text or describe what you found
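Where an expectation is mechanically checkable, a small script beats eyeballing. A sketch for the "12 fields" example used in Step 4 below (the internal structure of `field_info.json` is an assumption):
```bash
# Verify the field count programmatically instead of reading the file by eye.
count=$(jq '.fields | length' outputs/field_info.json)
if [ "$count" -eq 12 ]; then
  echo "PASS: form has 12 fields"
else
  echo "FAIL: expected 12 fields, found $count"
fi
```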
### Step 4: Extract and Verify Claims
Beyond the predefined expectations, extract implicit claims from the outputs and verify them:
1. **Extract claims** from the transcript and outputs:
- Factual statements ("The form has 12 fields")
- Process claims ("Used pypdf to fill the form")
- Quality claims ("All fields were filled correctly")
2. **Verify each claim**:
- **Factual claims**: Can be checked against the outputs or external sources
- **Process claims**: Can be verified from the transcript
- **Quality claims**: Evaluate whether the claim is justified
3. **Flag unverifiable claims**: Note claims that cannot be verified with available information
This catches issues that predefined expectations might miss.
### Step 5: Read User Notes
If `{outputs_dir}/user_notes.md` exists:
1. Read it and note any uncertainties or issues flagged by the executor
2. Include relevant concerns in the grading output
3. These may reveal problems even when expectations pass
### Step 6: Critique the Evals
After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap.
Good suggestions test meaningful outcomes — assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't.
Suggestions worth raising:
- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
- An important outcome you observed — good or bad — that no assertion covers at all
- An assertion that can't actually be verified from the available outputs
Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.
### Step 7: Write Grading Results
Save results to `{outputs_dir}/../grading.json` (sibling to outputs_dir).
## Grading Criteria
**PASS when**:
- The transcript or outputs clearly demonstrate the expectation is true
- Specific evidence can be cited
- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)
**FAIL when**:
- No evidence found for the expectation
- Evidence contradicts the expectation
- The expectation cannot be verified from available information
- The evidence is superficial — the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
- The output appears to meet the assertion by coincidence rather than by actually doing the work
**When uncertain**: The burden of proof to pass is on the expectation.
### Step 8: Read Executor Metrics and Timing
1. If `{outputs_dir}/metrics.json` exists, read it and include in grading output
2. If `{outputs_dir}/../timing.json` exists, read it and include timing data
## Output Format
Write a JSON file with this structure:
```json
{
"expectations": [
{
"text": "The output includes the name 'John Smith'",
"passed": true,
"evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
},
{
"text": "The spreadsheet has a SUM formula in cell B10",
"passed": false,
"evidence": "No spreadsheet was created. The output was a text file."
},
{
"text": "The assistant used the skill's OCR script",
"passed": true,
"evidence": "Transcript shows the skill's OCR script being invoked on the input document"
}
],
"claims": [
{
"claim": "The form has 12 fields",
"type": "factual",
"verified": true,
"evidence": "Counted 12 fields in field_info.json"
},
{
"claim": "All required fields were populated",
"type": "quality",
"verified": false,
"evidence": "Reference section was left blank despite data being available"
}
],
"user_notes_summary": {
"uncertainties": ["Used 2023 data, may be stale"],
"needs_review": [],
"workarounds": ["Fell back to text overlay for non-fillable fields"]
},
"eval_feedback": {
"suggestions": [
{
"assertion": "The output includes the name 'John Smith'",
"reason": "A hallucinated document that mentions the name would also pass — consider checking it appears as the primary contact with matching phone and email from the input"
},
{
"reason": "No assertion checks whether the extracted phone numbers match the input — I observed incorrect numbers in the output that went uncaught"
}
],
"overall": "Assertions check presence but not correctness. Consider adding content verification."
}
}
```
## Field Descriptions
- **expectations**: Array of graded expectations
- **text**: The original expectation text
- **passed**: Boolean - true if expectation passes
- **evidence**: Specific quote or description supporting the verdict
- **summary**: Aggregate statistics
- **passed**: Count of passed expectations
- **failed**: Count of failed expectations
- **total**: Total expectations evaluated
- **pass_rate**: Fraction passed (0.0 to 1.0)
- **execution_metrics**: Copied from executor's metrics.json (if available)
- **output_chars**: Total character count of output files (proxy for tokens)
- **transcript_chars**: Character count of transcript
- **timing**: Wall clock timing from timing.json (if available)
- **executor_duration_seconds**: Time spent in executor subagent
- **total_duration_seconds**: Total elapsed time for the run
- **claims**: Extracted and verified claims from the output
- **claim**: The statement being verified
- **type**: "factual", "process", or "quality"
- **verified**: Boolean - whether the claim holds
- **evidence**: Supporting or contradicting evidence
- **user_notes_summary**: Issues flagged by the executor
- **uncertainties**: Things the executor wasn't sure about
- **needs_review**: Items requiring human attention
- **workarounds**: Places where the skill didn't work as expected
- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
- **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
- **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag
## Guidelines
- **Be objective**: Base verdicts on evidence, not assumptions
- **Be specific**: Quote the exact text that supports your verdict
- **Be thorough**: Check both transcript and output files
- **Be consistent**: Apply the same standard to each expectation
- **Explain failures**: Make it clear why evidence was insufficient
- **No partial credit**: Each expectation is pass or fail, not partial
- `errors_encountered`: Number of errors during execution
- `output_chars`: Total character count of output files
- `transcript_chars`: Character count of transcript
---
## timing.json
Wall clock timing for a run. Located at `<run-dir>/timing.json`.
**How to capture:** When a subagent task completes, the task notification includes `total_tokens` and `duration_ms`. Save these immediately — they are not persisted anywhere else and cannot be recovered after the fact.
```json
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3,
"executor_start": "2026-01-15T10:30:00Z",
"executor_end": "2026-01-15T10:32:45Z",
"executor_duration_seconds": 165.0,
"grader_start": "2026-01-15T10:32:46Z",
"grader_end": "2026-01-15T10:33:12Z",
"grader_duration_seconds": 26.0
}
```
---
## benchmark.json
Output from Benchmark mode. Located at `benchmarks/<timestamp>/benchmark.json`.
- `run_summary`: Statistical aggregates per configuration
- `with_skill` / `without_skill`: Each contains `pass_rate`, `time_seconds`, `tokens` objects with `mean` and `stddev` fields
- `delta`: Difference strings like `"+0.50"`, `"+13.0"`, `"+1700"`
- `notes`: Freeform observations from the analyzer
**Important:** The viewer reads these field names exactly. Using `config` instead of `configuration`, or putting `pass_rate` at the top level of a run instead of nested under `result`, will cause the viewer to show empty/zero values. Always reference this schema when generating benchmark.json manually.
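A quick sanity check before handing a hand-built benchmark.json to the viewer (the `run_summary` field names are from this schema; checking per-run fields would follow the same pattern):
```bash
# Fails if the nested field names don't match what the viewer expects.
jq -e '(.run_summary.with_skill.pass_rate.mean != null)
   and (.run_summary.without_skill.pass_rate.mean != null)' benchmark.json \
  && echo "run_summary field names look right"
```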
---
## comparison.json
Output from blind comparator. Located at `<grading-dir>/comparison-N.json`.
```json
{
"winner": "A",
"reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
"rubric": {
"A": {
"content": {
"correctness": 5,
"completeness": 5,
"accuracy": 4
},
"structure": {
"organization": 4,
"formatting": 5,
"usability": 4
},
"content_score": 4.7,
"structure_score": 4.3,
"overall_score": 9.0
},
"B": {
"content": {
"correctness": 3,
"completeness": 2,
"accuracy": 3
},
"structure": {
"organization": 3,
"formatting": 2,
"usability": 3
},
"content_score": 2.7,
"structure_score": 2.7,
"overall_score": 5.4
}
},
"output_quality": {
"A": {
"score": 9,
"strengths": ["Complete solution", "Well-formatted", "All fields present"],
"weaknesses": ["Minor style inconsistency in header"]
<strong>Optimizingyourskill's description.</strong> This page updates automatically as Claude tests different versions of your skill'sdescription.Eachrowisaniteration—anewdescriptionattempt.Thecolumnsshowtestqueries:greencheckmarksmeantheskilltriggeredcorrectly(orcorrectlydidn't trigger), red crosses mean it got it wrong. The "Train" score shows performance on queries used to improve the description; the "Test" score shows performance on held-out queries the optimizer hasn'tseen.Whenit's done, Claude will apply the best-performing description to your skill.
prompt = f"""You are optimizing a skill description for a Claude Code skill called "{skill_name}". A "skill" is sort of like a prompt, but with progressive disclosure -- there's a title and description that Claude sees when deciding whether to use the skill, and then if it does use the skill, it reads the .md file which has lots more details and potentially links to other resources in the skill folder like helper files and scripts and additional documentation or examples.
The description appears in Claude's "available_skills" list. When a user sends a query, Claude decides whether to invoke the skill based solely on the title and on this description. Your goal is to write a description that triggers for relevant queries, and doesn't trigger for irrelevant ones.
Here's the current description:
<current_description>
"{current_description}"
</current_description>
Current scores ({scores_summary}):
<scores_summary>
"""
if failed_triggers:
    prompt += "FAILED TO TRIGGER (should have triggered but didn't):\n"
Based on the failures, write a new and improved description that is more likely to trigger correctly. When I say "based on the failures", it's a bit of a tricky line to walk because we don't want to overfit to the specific cases you're seeing. So what I DON'T want you to do is produce an ever-expanding list of specific queries that this skill should or shouldn't trigger for. Instead, try to generalize from the failures to broader categories of user intent and situations where this skill would be useful or not useful. The reason for this is twofold:
1. Avoid overfitting
2. The list might get loooong and it's injected into ALL queries and there might be a lot of skills, so we don't want to blow too much space on any given description.
Here are some tips that we've found to work well in writing these descriptions:
- The skill should be phrased in the imperative -- "Use this skill for" rather than "this skill does"
- The skill description should focus on the user's intent, what they are trying to achieve, vs. the implementation details of how the skill works.
- The description competes with other skills for Claude's attention — make it distinctive and immediately recognizable.
- If you're getting lots of failures after repeated attempts, change things up. Try different sentence structures or wordings.
I'd encourage you to be creative and mix up the style in different iterations since you'll have multiple opportunities to try different approaches and we'll just grab the highest-scoring one at the end.
description: "Help users operate and interact with their ZeroClaw agent instance — through both the CLI (`zeroclaw` commands) and the REST/WebSocket gateway API. Use this skill whenever the user wants to: send messages to ZeroClaw, manage memory or cron jobs, check system status, configure channels or providers, hit the gateway API, troubleshoot their ZeroClaw setup, build from source, or do anything involving the `zeroclaw` binary or its HTTP endpoints. Trigger this even if the user just says things like 'check my agent status', 'schedule a reminder', 'store this in memory', 'list my cron jobs', 'send a message to my bot', 'set up Telegram', 'build zeroclaw', or 'my bot is broken' — these are all ZeroClaw operations."
---
# ZeroClaw Skill
You are helping a user operate their ZeroClaw agent instance. ZeroClaw is an autonomous agent runtime with a CLI and an HTTP/WebSocket gateway.
Your job is to understand what the user wants to accomplish and then **execute it** — run the command, make the API call, report the result. Do not just show commands for the user to copy-paste. Actually run them via the Bash tool and tell the user what happened. The only exception is destructive operations (clearing all memory, estop kill-all) where you should confirm first.
## Adaptive Expertise
Pay attention to how the user talks. Someone who says "can you hit the webhook endpoint with a POST" is telling you they know what they're doing — be concise, skip explanations, just execute. Someone who says "how do I make my bot remember things" needs more context about what's happening under the hood.
Signals of technical comfort: mentions specific endpoints, HTTP methods, JSON fields, talks about tokens/auth, uses CLI flags fluently, references config files directly.
Signals of less familiarity: asks "what does X do", uses casual language about the bot/agent, describes goals rather than mechanisms ("I want it to check something every morning").
Default to a middle ground — brief explanation of what you're about to do, then do it. Dial up or down from there based on cues.
## Discovery — Before You Act
Before running any ZeroClaw operation, make sure you know where things are:
1. **Find the binary.** Search in this order:
- `which zeroclaw` (PATH)
- The current project's build output: `./target/release/zeroclaw` or `./target/debug/zeroclaw` — this is the right choice when the user is working inside the ZeroClaw source tree and may have local changes
- Common install locations: `~/.cargo/bin/zeroclaw`, `~/Downloads/zeroclaw-bin/zeroclaw`
If no binary is found anywhere, offer to build from source (see "Building from Source" below). If the user is a developer working on ZeroClaw itself, they'll likely want the local build — watch for cues like them editing source files, mentioning PRs, or being in the project directory.
2. **Check if the gateway is running** (only needed for REST/WebSocket operations). A quick `curl -sf http://127.0.0.1:42617/health` tells you. If it's not running and the user wants REST access, let them know and offer to start it (`zeroclaw gateway` or `zeroclaw daemon`).
3. **Check auth status.** If the gateway requires pairing (`require_pairing = true` is the default), REST calls need a bearer token. Run `zeroclaw status` to see the current state, or check `~/.zeroclaw/config.toml` for a stored token under `[gateway]`.
Cache these findings for the conversation — don't re-discover every time.
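A sketch of the discovery pass (search order and paths from the list above; nothing here is guaranteed to exist on a given machine):
```bash
# 1. Find the binary, preferring PATH, then local builds, then install locations.
ZEROCLAW_BIN=$(command -v zeroclaw || true)
for candidate in ./target/release/zeroclaw ./target/debug/zeroclaw \
                 ~/.cargo/bin/zeroclaw ~/Downloads/zeroclaw-bin/zeroclaw; do
  [ -n "$ZEROCLAW_BIN" ] && break
  [ -x "$candidate" ] && ZEROCLAW_BIN=$candidate
done
# 2. Check whether the gateway is up (only matters for REST/WebSocket work).
curl -sf http://127.0.0.1:42617/health > /dev/null \
  && echo "gateway: up" || echo "gateway: down"
```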
## Important: REPL Limitation
`zeroclaw agent` (interactive REPL) requires interactive stdin, which doesn't work through the Bash tool. When the user wants to chat with their agent, use single-message mode instead:
```bash
zeroclaw agent -m "the message"
```
Each `-m` invocation is independent (no conversation history between calls). If the user needs multi-turn conversation, let them know they can run `zeroclaw agent` directly in their terminal, or use the WebSocket endpoint for programmatic streaming.
## First-Time Setup
If the user hasn't set up ZeroClaw yet (no `~/.zeroclaw/config.toml` exists), guide them through onboarding:
```bash
zeroclaw onboard                        # Guided wizard (default); OpenRouter is the default provider
zeroclaw onboard --provider anthropic   # Use Anthropic directly
```
After onboarding, verify everything works:
```bash
zeroclaw status
zeroclaw doctor
```
If they already have a config but something is broken, `zeroclaw onboard --channels-only` repairs just the channel configuration without overwriting everything else.
## Building from Source
If the user wants to build ZeroClaw (or no binary is installed):
```bash
cargo build --release
```
This produces `target/release/zeroclaw`. For faster iteration during development, `cargo build` (debug mode) is quicker but produces a slower binary at `target/debug/zeroclaw`.
You can also run directly without a separate build step:
```bash
cargo run --release -- <subcommand> [args]
```
Before building, `cargo check` gives a quick compile validation without the full build.
## Choosing CLI vs REST
Both surfaces can do most things. Rules of thumb:
- **CLI is simpler** for one-off operations from the terminal. It handles auth internally and formats output nicely. Prefer CLI when the user is working locally.
- **REST is needed** when the user is building an integration, scripting from another language, or accessing a remote ZeroClaw instance. Also needed for streaming (WebSocket, SSE).
- If unclear, **default to CLI** — it's less setup.
## Core Operations
### Sending Messages
**CLI:** `zeroclaw agent -m "your message here"` — remember, always use `-m` mode, not bare `zeroclaw agent`.
Tools are used automatically by the agent during conversations (shell, file ops, memory, browser, HTTP, web search, git, etc. — 30+ tools gated by security policy).
To see what's available: `GET /api/tools` (REST) lists all registered tools with descriptions and parameter schemas.
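For example (port and endpoint from this skill's notes; the token variable is illustrative):
```bash
# List registered tools with their descriptions and parameter schemas.
curl -s http://127.0.0.1:42617/api/tools \
  -H "Authorization: Bearer $ZEROCLAW_TOKEN"
```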
### Configuration
Edit `~/.zeroclaw/config.toml` directly, or re-run `zeroclaw onboard` to reconfigure.
**REST:**
- `GET /api/config` — get current config (secrets masked as `***MASKED***`)
- `PUT /api/config` — update config (send raw TOML as body, 1MB limit)
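For example (endpoints from the list above; the token variable is illustrative). Because GET masks secrets, don't round-trip its output back into PUT; edit a real copy of the config instead:
```bash
# Read current config (secrets come back as ***MASKED***).
curl -s http://127.0.0.1:42617/api/config -H "Authorization: Bearer $ZEROCLAW_TOKEN"
# Write raw TOML (1MB limit) from an edited local copy.
curl -s -X PUT http://127.0.0.1:42617/api/config \
  -H "Authorization: Bearer $ZEROCLAW_TOKEN" \
  --data-binary @/path/to/config.toml
```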
### Providers & Models
- `zeroclaw providers` — list all supported providers
- `zeroclaw models list` — cached model catalog
- `zeroclaw models refresh --all` — refresh from providers
- `zeroclaw models set anthropic/claude-sonnet-4-6` — set default model
- **Idempotency:** Optional `X-Idempotency-Key` header on `/webhook` (300s TTL)
- **Config location:** `~/.zeroclaw/config.toml`
## Reference Files
For the complete API specification with every endpoint, field, and edge case, read `references/rest-api.md`.
For the full CLI command tree with all flags and options, read `references/cli-reference.md`.
Only load these when you need precise details beyond what's in this file — for most operations, the quick references above are sufficient.
## Troubleshooting
**"zeroclaw: command not found"** — Binary not in PATH. Check `./target/release/zeroclaw`, `~/.cargo/bin/zeroclaw`, or build from source with `cargo build --release`.
**"Connection refused" on REST calls** — Gateway isn't running. Start it with `zeroclaw gateway` or `zeroclaw daemon`.
**"Unauthorized" (401/403)** — Bearer token is missing or invalid. Re-pair via `POST /pair` with the pairing code, or check `~/.zeroclaw/config.toml` for the stored token.
**"LLM request failed" (500)** — Provider issue. Run `zeroclaw doctor` to check connectivity. Common causes: expired API key, provider outage, rate limiting on the provider side.
**"Too many requests" (429)** — You're hitting ZeroClaw's rate limit. Back off — the response includes `retry_after` with the number of seconds to wait.
**Agent not using tools / acting limited** — Check autonomy settings in config.toml under `[autonomy]`. `level = "read_only"` disables most tools. Try `level = "supervised"` or `level = "full"`.
**Memory not persisting** — Check `[memory]` config. If `backend = "none"`, nothing is stored. Switch to `"sqlite"` or `"markdown"`. Also verify `auto_save = true`.
**Channel not responding** — Run `zeroclaw channels doctor` for the specific channel. Common issues: expired bot token, wrong allowed_users list, channel not enabled in `[channels]`.
Report errors to the user with context appropriate to their expertise level. For beginners, explain what went wrong and suggest the fix. For experts, just show the error and the fix.
The agent has access to 30+ tools gated by security policy: shell, file_read, file_write, file_edit, glob_search, content_search, memory_store, memory_recall, memory_forget, browser, http_request, web_fetch, web_search, cron, delegate, git, and more. Max tool iterations defaults to 10.
echo "Required CI jobs did not pass: lint=${lint_result} workspace-check=${workspace_check_result} package-check=${package_check_result} test=${test_result} build=${build_result}"
exit 1
fi
check_docs_quality
echo "All required checks passed."
# Composite status check — branch protection requires this single job.
- those events retrigger only label-driven `pull_request_target` automation (`pr-auto-response.yml`); `pr-labeler.yml` now runs only on PR lifecycle events (`opened`/`reopened`/`synchronize`/`ready_for_review`) to reduce churn.
6. When contributor pushes new commits to fork branch (`synchronize`):
### PR behavior
1. Triggered on `pull_request` to `dev` or `main` when Docker build-input paths change.
2. Runs `PR Docker Smoke` job:
- Builds local smoke image with Buildx builder.
- Verifies container with `docker run ... --version`.
3. Typical runtime in recent sample: ~240.4s.
4. No registry push happens on PR events.
### Push behavior
1. `publish` job runs on tag pushes `v*` only.
2. Workflow trigger includes semantic version tag pushes (`v*`) only.
3. Login to `ghcr.io` uses `${{ github.actor }}` and `${{ secrets.GITHUB_TOKEN }}`.
4. Tag computation includes semantic tag from pushed git tag (`vX.Y.Z`) + SHA tag (`sha-<12>`) + `latest`.
5. Multi-platform publish is used for tag pushes (`linux/amd64,linux/arm64`).
6. `scripts/ci/ghcr_publish_contract_guard.py` validates anonymous pullability and digest parity across `vX.Y.Z`, `sha-<12>`, and `latest`, then emits rollback candidate mapping evidence.
7. Trivy scans are emitted for version, SHA, and latest references.
8. `scripts/ci/ghcr_vulnerability_gate.py` validates Trivy JSON outputs against `.github/release/ghcr-vulnerability-policy.json` and emits audit-event evidence.
9. Typical runtime in recent sample: ~139.9s.
10. Result: pushed image tags under `ghcr.io/<owner>/<repo>` with publish-contract + vulnerability-gate + scan artifacts.
Important: Docker publish now requires a `v*` tag push; regular `dev`/`main` branch pushes do not publish images.
## Release Logic
Workflow: `.github/workflows/pub-release.yml`
1. Trigger modes:
- Tag push `v*` -> publish mode.
- Manual dispatch -> verification-only or publish mode (input-driven).
2. Release trigger guard:
- publish mode enforces actor authorization, stable annotated tag policy, `origin/main` ancestry, and `release_tag` == `Cargo.toml` version at the tag commit.
- trigger provenance is emitted as `release-trigger-guard` artifacts.
3. `build-release` builds matrix artifacts across Linux/macOS/Windows targets.
4. `verify-artifacts` runs `scripts/ci/release_artifact_guard.py` against `.github/release/release-artifact-contract.json` in verify-stage mode (archive contract required; manifest/SBOM/notice checks intentionally skipped) and uploads `release-artifact-guard-verify` evidence.
5. In publish mode, workflow generates SBOM (`CycloneDX` + `SPDX`), `SHA256SUMS`, and a checksum provenance statement (`zeroclaw.sha256sums.intoto.json`) plus audit-event envelope.
6. In publish mode, after manifest generation, workflow reruns `release_artifact_guard.py` in full-contract mode and emits `release-artifact-guard.publish.json` plus `audit-event-release-artifact-guard-publish.json`.
7. In publish mode, workflow keyless-signs release artifacts and composes a supply-chain release-notes preface via `release_notes_with_supply_chain_refs.py`.
8. In publish mode, workflow verifies GHCR release-tag availability.
9. In publish mode, workflow creates/updates the GitHub Release for the resolved tag and commit-ish, combining generated supply-chain preface with GitHub auto-generated commit notes.
3. In publish mode, prerelease assets are attached to a GitHub prerelease for the stage tag.
Canary policy lane:
1. `.github/workflows/ci-canary-gate.yml` runs weekly or manually.
2. `scripts/ci/canary_guard.py` evaluates metrics against `.github/release/canary-policy.json`.
3. Decision output is explicit (`promote`, `hold`, `abort`) with auditable artifacts and optional dispatch signal.
## Merge/Policy Notes
1. Workflow-file changes (`.github/workflows/**`) are validated through `pr-intake-checks.yml`, `ci-change-audit.yml`, and `CI Required Gate` without a dedicated owner-approval gate.
2. PR lint/test strictness is intentionally controlled by `ci:full` label.
4. `sec-audit.yml` runs on PR/push/merge queue (`merge_group`), plus scheduled weekly.
5. `ci-change-audit.yml` enforces pinned `uses:` references for CI/security workflow changes.
6. `sec-audit.yml` includes deny policy hygiene checks (`deny_policy_guard.py`) before cargo-deny.
7. `sec-audit.yml` includes gitleaks allowlist governance checks (`secrets_governance_guard.py`) against `.github/security/gitleaks-allowlist-governance.json`.
8. `ci-reproducible-build.yml` and `ci-supply-chain-provenance.yml` provide scheduled supply-chain assurance signals outside release-only windows.
9. Some workflows are operational and non-merge-path (`pr-check-stale`, `pr-check-status`, `sync-contributors`, etc.).
10. Workflow-specific JavaScript helpers are organized under `.github/workflows/scripts/`.
11. `ci-run.yml` includes cache partitioning (`prefix-key`) across lint/test/build/flake-probe lanes to reduce cache contention.
12. `ci-rollback.yml` provides a guarded rollback planning lane (scheduled dry-run + manual execute controls) with audit artifacts.
13. `ci-queue-hygiene.yml` periodically deduplicates superseded queued runs for lightweight PR automation workflows to reduce queue pressure.
## Mermaid Diagrams
### PR to Dev
```mermaid
flowchart TD
A["PR opened or updated -> dev"] --> B["pull_request_target lane"]
B --> B1["pr-intake-checks.yml"]
B --> B2["pr-labeler.yml"]
B --> B3["pr-auto-response.yml"]
A --> C["pull_request CI lane"]
C --> C1["ci-run.yml"]
C --> C2["sec-audit.yml"]
C --> C3["pub-docker-img.yml (if Docker paths changed)"]
C --> C4["workflow-sanity.yml (if workflow files changed)"]
C --> C5["pr-label-policy-check.yml (if policy files changed)"]
C1 --> D["CI Required Gate"]
D --> E{"Checks + review policy pass?"}
E -->|No| F["PR stays open"]
E -->|Yes| G["Merge PR"]
G --> H["push event on dev"]
```
### Main Delivery and Release
```mermaid
flowchart TD
D0["Commit reaches dev"] --> B0["ci-run.yml"]
D0 --> C0["sec-audit.yml"]
PRM["PR to main"] --> QM["ci-run.yml + sec-audit.yml (+ path-scoped)"]
```

## Troubleshooting
1. **Quality gate failing on PR**: check `lint` job for formatting/clippy issues; check `test` job for test failures; check `build` job for compile errors; check `security` job for audit/deny failures.
2. **Beta release not appearing**: confirm the push landed on `master` (not another branch); check `release-beta-on-push.yml` run status.
3. **Stable release failing at validate**: ensure `Cargo.toml` version matches the input version and the tag does not already exist.
4. **Full matrix build needed**: run `cross-platform-build-manual.yml` manually from the Actions tab.