Compare commits

...

84 Commits

Author SHA1 Message Date
Argenis 65cb4fe099 feat(heartbeat): default interval 30→5min + prune heartbeat from auto-save (#3938)
Lower the default heartbeat interval to 5 minutes to match the renewable
partial wake-lock cadence. Add `[heartbeat task` to the memory auto-save
skip filter so heartbeat prompts (both Phase 1 decision and Phase 2 task
execution) do not pollute persistent conversation memory.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 08:17:08 -04:00
Argenis 2e48cbf7c3 fix(tools): use resolve_tool_path for consistent path resolution (#3937)
Replace workspace_dir.join(path) with resolve_tool_path(path) in
file_write, file_edit, and pdf_read tools to correctly handle absolute
paths within the workspace directory, preventing path doubling.

Closes #3774
2026-03-18 23:51:35 -04:00
Argenis e4910705d1 fix(config): add missing challenge_max_attempts field to OtpConfig (#3919) (#3936)
The OtpConfig struct uses deny_unknown_fields but was missing the
challenge_max_attempts field, causing zeroclaw config schema to fail
with a TOML parse error when the field appeared in config files.

Add challenge_max_attempts as an optional u32 field (Option<u32>) with a
default of 3 and a validation check ensuring it is greater than 0.
2026-03-18 23:48:53 -04:00
Argenis 1b664143c2 fix: move misplaced include key from [lib] to [package] in Cargo.toml (#3935)
The `include` array was placed after `[lib]` without a section header,
causing Cargo to parse it as `lib.include` — an invalid manifest key.
This triggered a warning during builds and caused lockfile mismatch
errors when building with --locked in Docker (Dockerfile.debian).

Move the `include` key to the `[package]` section where it belongs and
regenerate Cargo.lock to stay in sync.

Fixes #3925
2026-03-18 23:48:50 -04:00
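The manifest mistake above looks roughly like this; the surrounding keys and the glob are illustrative, not the crate's actual values:

```toml
# Before (broken): with no section header of its own, an `include` array
# written after [lib] is parsed as the invalid key `lib.include`.
[lib]
# ...lib keys...
include = ["tool_descriptions/**"]   # wrong place

# After (fixed): `include` is a [package] key.
[package]
# ...package keys...
include = ["tool_descriptions/**"]
```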
Argenis 950f996812 Merge pull request #3926 from zeroclaw-labs/fix/pairing-code-terminal-display
fix(gateway): move pairing code below dashboard URL in terminal
2026-03-18 20:34:08 -04:00
argenis de la rosa b74c5cfda8 fix(gateway): move pairing code below dashboard URL in terminal banner
Repositions the one-time pairing code display to appear directly below
the dashboard URL for cleaner terminal output, and removes the duplicate
display that was showing at the bottom of the route list.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 19:50:26 -04:00
Argenis 02688eb124 feat(skills): autonomous skill creation from multi-step tasks (#3916)
Add SkillCreator module that persists successful multi-step task
executions as reusable SKILL.toml definitions under the workspace
skills directory.

- SkillCreationConfig in [skills.skill_creation] (disabled by default)
- Slug validation, TOML generation, embedding-based deduplication
- LRU eviction when max_skills limit is reached
- Agent loop integration post-success
- Gated behind `skill-creation` compile-time feature flag

Closes #3825.
2026-03-18 17:15:02 -04:00
Argenis 2c92cf913b fix: ensure SOUL.md and IDENTITY.md exist in non-tty sessions (#3915)
When the workspace is created outside of `zeroclaw onboard` (e.g., via
cron, daemon, or `< /dev/null`), SOUL.md and IDENTITY.md were never
scaffolded, causing the agent to activate without identity files.

Added `ensure_bootstrap_files()` in `Config::load_or_init()` that
idempotently creates default SOUL.md and IDENTITY.md if missing.

Closes #3819.
2026-03-18 17:12:44 -04:00
Argenis 3c117d2d7b feat(delegate): make sub-agent timeouts configurable via config.toml (#3909)
Add `timeout_secs` and `agentic_timeout_secs` fields to
`DelegateAgentConfig` so users can tune per-agent timeouts instead
of relying on the hardcoded 120s / 300s defaults.

Validation rejects values of 0 or above 3600s, matching the pattern
used by MCP timeout validation.

Closes #3898
2026-03-18 17:07:03 -04:00
Argenis 1f7c3c99e4 feat(i18n): externalize tool descriptions for translation (#3912)
Add a locale-aware tool description system that loads translations from
TOML files in tool_descriptions/. This enables non-English users to see
tool descriptions in their language.

- Add src/i18n.rs module with ToolDescriptions loader, locale detection
  (ZEROCLAW_LOCALE, LANG, LC_ALL env vars), and English fallback chain
- Add locale config field to Config struct for explicit locale override
- Create tool_descriptions/en.toml with all 47 tool descriptions
- Create tool_descriptions/zh-CN.toml with Chinese translations
- Integrate with ToolsSection::build() and build_tool_instructions()
  to resolve descriptions from locale files before hardcoded fallback
- Add PromptContext.tool_descriptions field for prompt-time resolution
- Add AgentBuilder.tool_descriptions() setter for Agent construction
- Include tool_descriptions/ in Cargo.toml package include list
- Add 8 unit tests covering locale loading, fallback chains, env
  detection, and config override

Closes #3901
2026-03-18 17:01:39 -04:00
Argenis 92940a3d16 Merge pull request #3904 from zeroclaw-labs/fix/install-stale-build-cache
fix(install): clean stale build cache on upgrade
2026-03-18 15:49:10 -04:00
Argenis d77c616905 fix: reset tool call dedup cache each iteration to prevent loops (#3910)
The seen_tool_signatures HashSet was initialized outside the iteration loop, causing cross-iteration deduplication of legitimate tool calls. This triggered a self-correction spiral where the agent repeatedly attempted skipped calls until hitting max_iterations.

Moving the HashSet inside the loop ensures deduplication only applies within a single iteration, as originally intended.

Fixes #3798
2026-03-18 15:45:10 -04:00
Argenis ac12470c27 fix(channels): respect ack_reactions config for Telegram channel (#3834) (#3913)
The Telegram channel was ignoring the ack_reactions setting because it
sent setMessageReaction API calls directly in its polling loop, bypassing
the top-level channels_config.ack_reactions check.

- Add optional ack_reactions field to TelegramConfig so it can be set
  under [channels_config.telegram] without "unknown key" warnings
- Add ack_reactions field and with_ack_reactions() builder to
  TelegramChannel, defaulting to true
- Guard try_add_ack_reaction_nonblocking() behind self.ack_reactions
- Wire channel-level override with fallback to top-level default
- Add config deserialization and channel behavior tests
2026-03-18 15:40:31 -04:00
Argenis c5a1148ae9 fix: ensure install.sh creates config.toml and workspace files (#3852) (#3906)
When running install.sh with --docker --skip-build --prefer-prebuilt
(especially with podman via ZEROCLAW_CONTAINER_CLI), the script would
skip creating config.toml and workspace scaffold files because these
were only generated by the onboard wizard, which requires an interactive
terminal or explicit API key.

Add ensure_default_config_and_workspace() that creates a minimal
config.toml (with provider, workspace_dir, and optional api_key/model)
and seeds the workspace directory structure (sessions/, memory/, state/,
cron/, skills/ subdirectories plus IDENTITY.md, USER.md, MEMORY.md,
AGENTS.md, and SOUL.md) when they don't already exist.

This function is called:
- At the end of run_docker_bootstrap(), so config and workspace files
  exist on the host volume regardless of whether onboard ran inside the
  container.
- After the [3/3] Finalizing setup onboard block in the native install
  path, covering --skip-build, --prefer-prebuilt, --skip-onboard, and
  cases where the binary wasn't found.

The function is idempotent: it only writes files that don't already
exist, so it never overwrites config or workspace files created by a
successful onboard run.

Also makes the container onboard failure non-fatal (|| true) so that
the fallback config generation always runs.

Fixes #3852
2026-03-18 15:15:47 -04:00
Argenis 440ad6e5b5 fix: handle double-serialized schedule in cron_add and cron_update (#3860) (#3905)
When LLMs pass the schedule parameter as a JSON string instead of a JSON
object, serde fails with "invalid type: string, expected internally
tagged enum Schedule". Add a deserialize_maybe_stringified helper that
detects stringified JSON values and parses the inner string before
deserializing, providing backward compatibility for both object and
string representations.

Fixes #3860
2026-03-18 15:15:22 -04:00
Argenis 2e41cb56f6 fix: enable vision support for llamacpp provider (#3907)
The llamacpp provider was instantiated with vision disabled by default, causing image transfers from Telegram to fail. Use new_with_vision() with vision enabled, matching the behavior of other compatible providers.

Fixes #3802
2026-03-18 15:14:57 -04:00
Argenis 2227fadb66 fix(tools): include tool_search instruction in deferred tools system prompt (#3826) (#3914)
The deferred MCP tools section in the system prompt only listed tool
names inside <available-deferred-tools> tags without any instruction
telling the LLM to call tool_search to activate them. In daemon and
Telegram mode, where conversations are shorter and less guided, the
LLM never discovered it should call tool_search, so deferred tools
were effectively unavailable.

Add a "## Deferred Tools" heading with explicit instructions that
the LLM MUST call tool_search before using any listed tool. This
ensures the LLM knows to activate deferred tools in all modes
(CLI, daemon, Telegram) consistently.

Also add tests covering:
- Instruction presence in the deferred section
- Multiple-server deferred tool search
- Cross-server keyword search ranking
- Activation persistence across multiple tool_search calls
- Idempotent re-activation
2026-03-18 15:13:58 -04:00
Argenis 162efbb49c fix(providers): recover from context window errors by truncating history (#3908)
When a provider returns a context-size-exceeded error, truncate the
oldest non-system messages from conversation history and retry instead
of immediately bailing out. This enables local models with small
context windows (llamafile, llama.cpp) to work by automatically
fitting the conversation within available context.

Closes #3894
2026-03-18 14:54:56 -04:00
argenis de la rosa 3c8b6d219a fix(test): use PID-scoped script path to prevent ETXTBSY in CI
The echo_provider() test helper writes a fake_claude.sh script to
a shared temp directory. When lib and bin test binaries run in
parallel (separate processes, separate OnceLock statics), one
process can overwrite the script while the other is executing it,
causing "Text file busy" (ETXTBSY). Scope the filename with PID
to isolate each test process.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 14:33:04 -04:00
Vasanth 58b98c59a8 feat(agent): add runtime model switching via model_switch tool (#3853)
Add support for switching AI models at runtime during a conversation.
The model_switch tool allows users to:
- Get current model state
- List available providers
- List models for a provider
- Switch to a different model

The switch takes effect immediately for the current conversation by
recreating the provider with the new model after tool execution.

Risk: Medium - internal state changes and provider recreation
2026-03-18 14:17:52 -04:00
argenis de la rosa d72e9379f7 fix(install): clean stale build cache on upgrade
When upgrading an existing installation, stale build artifacts in
target/release/build/ can cause compilation failures (e.g.
libsqlite3-sys bindgen.rs not found). Run cargo clean --release
before building when an upgrade is detected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 14:15:59 -04:00
Argenis 959b933841 fix(providers): preserve conversation context in Claude Code CLI (#3885)
* fix(providers): preserve conversation context in Claude Code CLI provider

Override chat_with_history to format full multi-turn conversation
history into a single prompt for the claude CLI, instead of only
forwarding the last user message.

Closes #3878

* fix(providers): fix ETXTBSY race in claude_code tests

Use OnceLock to initialize the fake_claude.sh test script exactly
once, preventing "Text file busy" errors when parallel tests
concurrently write and execute the same script file.
2026-03-18 11:13:42 -04:00
Argenis caf7c7194f fix(cron): prevent one-shot jobs from re-executing indefinitely (#3886)
Handle Schedule::At jobs in reschedule_after_run by disabling them
instead of rescheduling to a past timestamp. Also add a fallback in
persist_job_result to disable one-shot jobs if removal fails.

Closes #3868
2026-03-18 11:03:44 -04:00
Argenis ee7d542da6 fix: pass route-specific api_key through channel provider creation (#3881)
When using Channel mode with dynamic classification and routing, the
route-specific `api_key` from `[[model_routes]]` was silently dropped.
The system always fell back to the global `api_key`, causing 401 errors
when routing to `custom:` providers that require distinct credentials.

Root cause: `ChannelRouteSelection` only stored provider + model, and
`get_or_create_provider` always used `ctx.api_key` (the global key).

Changes:
- Add `api_key` field to `ChannelRouteSelection` so the matched route's
  credential survives through to provider creation.
- Update `get_or_create_provider` to accept and prefer a route-specific
  `api_key` over the global key.
- Use a composite cache key (provider name + api_key hash) to prevent
  cache poisoning when multiple routes target the same provider with
  different credentials.
- Wire the route api_key through query classification matching and the
  `/model` (SetModel) command path.

Fixes #3838
2026-03-18 10:06:06 -04:00
Argenis d51ec4b43f fix(docker): remove COPY commands for dockerignored paths (#3880)
The Dockerfile and Dockerfile.debian COPY `firmware/`, `crates/robot-kit/`,
and `crates/robot-kit/Cargo.toml`, but `.dockerignore` excludes both
`firmware/` and `crates/robot-kit/`, causing COPY failures during build.

Since these are hardware-only paths not needed for the Docker runtime:
- Remove COPY commands for `firmware/` and `crates/robot-kit/`
- Remove dummy `crates/robot-kit/src` creation in dep-caching steps
- Use sed to strip `crates/robot-kit` from workspace members in the
  copied Cargo.toml so Cargo doesn't look for the missing manifest

Fixes #3836
2026-03-18 10:06:03 -04:00
Argenis 3d92b2a652 Merge pull request #3833 from zeroclaw-labs/fix/pairing-code-display
fix(web): display pairing code in dashboard
2026-03-17 22:16:50 -04:00
argenis de la rosa 3255051426 fix(web): display pairing code in dashboard instead of terminal-only
Fetch the current pairing code from GET /admin/paircode (localhost-only)
and display it in both the initial PairingDialog and the /pairing
management page. Users no longer need to check the terminal to find
the 6-digit code — it appears directly in the web UI.

Falls back gracefully when the admin endpoint is unreachable (e.g.
non-localhost access), showing the original "check your terminal" prompt.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 22:01:03 -04:00
Argenis dcaf330848 Merge pull request #3828 from zeroclaw-labs/fix/readme
fix(readme): update social links across all locales
2026-03-17 19:15:29 -04:00
argenis de la rosa 7f8de5cb17 fix(readme): update Facebook group URL and add Discord, TikTok, RedNote badges
Update Facebook group link from /groups/zeroclaw to /groups/zeroclawlabs
across all 31 README locale files. Add Discord, TikTok, and RedNote
social badges to the badge section of all READMEs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 19:02:53 -04:00
Argenis 1341cfb296 Merge pull request #3827 from zeroclaw-labs/feat/plugin-wasm
feat(plugins): add WASM plugin system with Extism runtime
2026-03-17 18:51:41 -04:00
argenis de la rosa 9da620a5aa fix(ci): add cargo-audit ignore for wasmtime vulns from extism
cargo-audit uses .cargo/audit.toml (not deny.toml) for its ignore
list. These 3 wasmtime advisories are transitive via extism 1.13.0
with no upstream fix available. Plugin system is feature-gated.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:38:02 -04:00
argenis de la rosa d016e6b1a0 fix(ci): ignore wasmtime vulns from extism 1.13.0 (no upstream fix)
RUSTSEC-2026-0006, RUSTSEC-2026-0020, RUSTSEC-2026-0021 are all in
wasmtime 37.x pinned by extism. No newer extism release available.
Plugin system is behind a feature flag to limit exposure.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:35:08 -04:00
argenis de la rosa 9b6360ad71 fix(ci): ignore unmaintained transitive deps from extism and indicatif
Add cargo-deny ignore entries for RUSTSEC-2024-0388 (derivative),
RUSTSEC-2025-0057 (fxhash), and RUSTSEC-2025-0119 (number_prefix).
All are transitive dependencies we cannot directly control.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:33:03 -04:00
argenis de la rosa dc50ca9171 fix(plugins): update lockfile and fix ws.rs formatting
Sync Cargo.lock with new Extism/WASM plugin dependencies and apply
rustfmt line-wrap fix in gateway WebSocket handler.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:30:41 -04:00
argenis de la rosa 67edd2bc60 fix(plugins): integrate WASM tools into registry, add gateway routes and tests
- Wire WASM plugin tools into all_tools_with_runtime() behind
  cfg(feature = "plugins-wasm"), discovering and registering tool-capable
  plugins from the configured plugins directory at startup.
- Add /api/plugins gateway endpoint (cfg-gated) for listing plugin status.
- Add mod plugins declaration to main.rs binary crate so crate::plugins
  resolves when the feature is enabled.
- Add unit tests for PluginHost: empty dir, manifest discovery, capability
  filtering, lookup, and removal.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:10:24 -04:00
argenis de la rosa dcf66175e4 feat(plugins): add example weather plugin and manifest
Add a standalone example plugin demonstrating the WASM plugin interface:
- example-plugin/Cargo.toml: cdylib crate targeting wasm32-wasip1
- example-plugin/src/lib.rs: mock weather tool using extism-pdk
- example-plugin/manifest.toml: plugin manifest declaring tool capability

This crate is intentionally NOT added to the workspace members since it
targets wasm32-wasip1 and would break the main build.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:09:54 -04:00
argenis de la rosa b3bb79d805 feat(plugins): add PluginHost, WasmTool, and WasmChannel bridges
Implement the core plugin infrastructure:
- PluginHost: discovers plugins from the workspace plugins directory,
  loads manifest.toml files, supports install/remove/list/info operations
- WasmTool: bridges WASM plugins to the Tool trait (execute stub pending
  Extism runtime wiring)
- WasmChannel: bridges WASM plugins to the Channel trait (send/listen
  stubs pending Extism runtime wiring)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:09:54 -04:00
argenis de la rosa c857b64bb4 feat(plugins): add Extism dependency, feature flag, and plugin module skeleton
Introduce the WASM plugin system foundation:
- Add extism 1.9 as an optional dependency behind `plugins-wasm` feature
- Create `src/plugins/` module with manifest types, error types, and stub host
- Add `Plugin` CLI subcommands (list, install, remove, info) behind cfg gate
- Add `PluginsConfig` to the config schema with sensible defaults

All plugin code is behind `#[cfg(feature = "plugins-wasm")]` so the default
build is unaffected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:09:54 -04:00
Argenis c051f0323e Merge pull request #3822 from zeroclaw-labs/feat/pairing-dashboard
feat(gateway): add pairing dashboard with device management
2026-03-17 17:54:28 -04:00
Argenis dea5c67ab0 Merge pull request #3821 from zeroclaw-labs/feat/self-test-update
feat(cli): add self-test and update commands
2026-03-17 17:54:25 -04:00
Argenis a14afd7ef9 Merge pull request #3820 from zeroclaw-labs/feat/docker-fix
feat(docker): web-builder stage, healthcheck probe, resource limits
2026-03-17 17:54:22 -04:00
argenis de la rosa 4455b24056 fix(pairing): add SQLite persistence, fix config defaults, align with plan
- Add SQLite persistence to DeviceRegistry (backed by rusqlite)
- Rename config fields: ttl_secs -> code_ttl_secs, max_pending -> max_pending_codes, max_attempts -> max_failed_attempts
- Update defaults: code_length 6 -> 8, ttl_secs 300 -> 3600, max_pending 10 -> 3
- Add attempts tracking to PendingPairing struct
- Add token_hash() and authenticate_and_hash() to PairingGuard
- Fix route paths: /api/pairing/submit -> /api/pair, /api/devices/{id}/rotate -> /api/devices/{id}/token/rotate
- Add QR code placeholder to Pairing.tsx
- Pass workspace_dir to DeviceRegistry constructor

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:44:55 -04:00
argenis de la rosa 8ec6522759 fix(gateway): add new fields to test AppState and GatewayConfig constructors
Add device_registry, pending_pairings to test AppState instances and
pairing_dashboard to test GatewayConfig to fix compilation of tests
after the new pairing dashboard fields were introduced.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:36:01 -04:00
argenis de la rosa a818edb782 feat(web): add pairing dashboard page
Add Pairing page with device list table, pairing code generation,
and device revocation. Create useDevices hook for reusable device
fetching. Wire /pairing route into App.tsx router.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:35:39 -04:00
argenis de la rosa e0af3d98dd feat(gateway): extend WebSocket handshake with optional connect params
Add ConnectParams struct for an optional first-frame connect handshake.
If the first WebSocket message is {"type":"connect",...}, connection
parameters (session_id, device_name, capabilities) are extracted and
a "connected" ack is sent back. Old clients sending "message" first
still work unchanged (backward-compatible).

Extract process_chat_message() helper to avoid duplication between
fallback first-message handling and the main message loop.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:35:39 -04:00
argenis de la rosa 48bdbde26c feat(gateway): add device registry and pairing API handlers
Introduce DeviceRegistry, PairingStore, and five new API endpoints:
- POST /api/pairing/initiate — generate a new pairing code
- POST /api/pairing/submit — submit code with device metadata
- GET /api/devices — list paired devices
- DELETE /api/devices/{id} — revoke a paired device
- POST /api/devices/{id}/rotate — rotate a device token

Wire into AppState and gateway router. Registry is only created
when require_pairing is enabled.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:34:56 -04:00
argenis de la rosa dc495a105f feat(config): add PairingDashboardConfig to gateway schema
Add PairingDashboardConfig struct with configurable code_length,
ttl_secs, max_pending, max_attempts, and lockout_secs fields.
Nested under GatewayConfig as `pairing_dashboard` with serde defaults.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:32:27 -04:00
argenis de la rosa fe9addcfe0 fix(cli): align self-test and update commands with implementation plan
- Export commands module from lib.rs (pub mod commands) for external consumers
- Add --force and --version flags to the Update CLI command
- Wire version parameter through to check() and run() in update.rs,
  supporting targeted version fetches via GitHub releases/tags API
- Add WebSocket handshake check (check_websocket_handshake) to the full
  self-test suite in self_test.rs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:24:59 -04:00
argenis de la rosa 5bfa5f18e1 feat(cli): add update command with 6-phase pipeline and rollback
Add `zeroclaw update` command with a 6-phase self-update pipeline:
1. Preflight — check GitHub releases API for newer version
2. Download — fetch platform-specific binary to temp dir
3. Backup — copy current binary to .bak for rollback
4. Validate — size check + --version smoke test on download
5. Swap — overwrite current binary with new version
6. Smoke test — verify updated binary runs, rollback on failure

Supports --check flag for update-check-only mode without installing.
Includes version comparison logic with unit tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:24:58 -04:00
argenis de la rosa 72b7e1e647 feat(cli): add self-test command with quick and full modes
Add `zeroclaw self-test` command with two modes:
- Quick mode (--quick): 8 offline checks including config, workspace,
  SQLite, provider/tool/channel registries, security policy, and version
- Full mode (default): adds gateway health and memory round-trip checks

Creates src/commands/ module structure with self_test and update stubs.
Adds indicatif and tempfile runtime dependencies for the update pipeline.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:24:58 -04:00
argenis de la rosa 413c94befe chore(docker): tighten compose resource limits
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 16:02:15 -04:00
argenis de la rosa 5aa6026fa1 feat(cli): add status --format=exit-code for Docker healthcheck
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 16:02:15 -04:00
argenis de la rosa 6eca841bd7 feat(docker): add web-builder stage and update .dockerignore
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 16:02:15 -04:00
Argenis 50e8d4f5f8 fix(ci): use pre-built binaries for Debian Docker image (#3814)
The Debian compatibility image was building from source with QEMU
cross-compilation for ARM64, which is extremely slow and was getting
cancelled by the concurrency group. Switch to using pre-built binaries
(same as the distroless image) with a debian:bookworm-slim runtime base.

- Add Dockerfile.debian.ci (mirrors Dockerfile.ci with Debian runtime)
- Update release-beta-on-push.yml to use docker-ctx + pre-built bins
- Update release-stable-manual.yml with same fix
- Drop GHA cache layers (no longer building from source)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 15:21:15 -04:00
Argenis fc2aac7c94 feat(gateway): persist WS chat sessions across restarts (#3813)
Gateway WebSocket chat sessions were in-memory only — conversation
history was lost on gateway restart, macOS sleep/wake, or client
reconnect. This wires up the existing SessionBackend (SQLite) to
the gateway WS handler so sessions survive restarts and reconnections.

Changes:
- Add delete_session() to SessionBackend trait + SQLite implementation
- Add session_persistence and session_ttl_hours to GatewayConfig
- Add Agent::seed_history() to hydrate agent from persisted messages
- Initialize SqliteSessionBackend in run_gateway() when enabled
- Send session_start message on WS connect with session_id + resumed
- Persist user/assistant messages after each turn
- Add GET /api/sessions and DELETE /api/sessions/{id} REST endpoints
- Bump version to 0.5.0

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 14:26:39 -04:00
Argenis 4caa3f7e6f fix(web): remove duplicate dashboard keys in Turkish locale (#3812)
The Turkish (tr) locale section had a duplicate "Dashboard specific
labels" block that repeated 19 keys already defined earlier, causing
TypeScript error TS1117. Moved the unique keys (provider_model,
paired_yes, etc.) into the primary dashboard section and removed
the duplicate block.

Fixes build failure introduced by #3777.
2026-03-17 14:13:46 -04:00
Argenis 3bc6ec3cf5 fix: only tweet for stable releases, not beta builds (#3808)
Remove tweet job from beta workflow. Update tweet-release.yml to diff
against previous stable tag (excluding betas) to capture all features
across the full release cycle. Simplify tweet format to feature-focused
style without contributor counts.

Supersedes #3575.
2026-03-17 14:06:46 -04:00
Argenis f3fbd1b094 fix(web): preserve provider runtime options in ws agent (#3807)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 14:06:22 -04:00
Yingpeng MA 79e8252d7a feat(web/i18n): add full Chinese locale and complete Turkish translations (#3777)
- Add comprehensive Simplified Chinese (zh) translations for all UI strings
- Extend and complete Turkish (tr) translations
- Fill in missing English (en) translation keys
- Reset default locale to 'en'
- Update language toggle to cycle through all three locales: en → zh → tr
2026-03-17 13:40:25 -04:00
Marijan Petričević 924521c927 config/schema: add serde default to AutonomyConfig (#3691)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 13:40:18 -04:00
Argenis 07ca270f03 fix(security): restore tokens.is_empty() guard, add re-pairing hint (#3738)
Revert "always generate pairing code" to tighter security posture:
codes are only generated on first startup when no tokens exist. Add
a CLI hint to the gateway banner so operators know how to re-pair
on demand. Fix install.sh to not use --new on fresh install (avoids
invalidating the auto-generated code). Fix onboard to show an
informational message instead of a throwaway PairingGuard.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 13:40:02 -04:00
Alix-007 e08091a2e2 fix(install): print PATH guidance after cargo install (#3769)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:53 -04:00
Alix-007 1f1123d071 fix(channels): allow low-risk shell in non-interactive mode (#3771)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:37 -04:00
Alix-007 d5bc46238a fix(install): skip prebuilt flow on musl (#3788)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:29 -04:00
Alix-007 843973762a ci(docker): publish debian compatibility image (#3789)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:20 -04:00
Alix-007 5f8d7d7347 fix(daemon): preserve deferred MCP tools in /api/chat (#3790)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:12 -04:00
Alix-007 7b3bea8d01 fix(agent): resolve deferred MCP tools by suffix (#3793)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:03 -04:00
Alix-007 ac461dc704 fix(docker): align debian image glibc baseline (#3794)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:55 -04:00
Alix-007 f04e56d9a1 feat(skills): support YAML frontmatter in SKILL.md (#3797)
* feat(skills): support YAML frontmatter in SKILL.md

* fix(skills): preserve nested open-skill names

---------

Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:49 -04:00
Alix-007 1d6f482b04 fix(build): rerun embedded web assets when dist changes (#3799)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:40 -04:00
Alix-007 ba6d0a4df9 fix(release): include matrix channel in official builds (#3800)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:33 -04:00
Alix-007 3cf873ab85 fix(groq): fall back on tool validation 400s (#3778)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:23:39 -04:00
Argenis 025724913d feat(runtime): add configurable reasoning effort (#3785)
* feat(runtime): add configurable reasoning effort

* fix(test): add missing reasoning_effort field in live test

Add reasoning_effort: None to ProviderRuntimeOptions construction in
openai_codex_vision_e2e.rs to fix E0063 compile error.

---------

Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 09:21:53 -04:00
project516 49dd4cd9da Change AppImage to tar.gz in arduino-uno-q-setup.md (#3754)
Arduino App Lab is a tar.gz file for Linux, not an AppImage

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:19:38 -04:00
dependabot[bot] 0664a5e854 chore(deps): bump rust from 7d37016 to da9dab7 (#3776)
Bumps rust from `7d37016` to `da9dab7`.

---
updated-dependencies:
- dependency-name: rust
  dependency-version: 1.94-slim
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:16:21 -04:00
Argenis acd09fbd86 feat(ci): use pre-built binaries for Docker images (#3784)
Instead of compiling Rust from source inside Docker (~60 min),
download the already-built Linux binaries from the build matrix
and copy them into a minimal distroless image (~2 min).

- Add Dockerfile.ci for release workflows (no Rust toolchain needed)
- Update both beta and stable workflows to use pre-built artifacts
- Drop Docker job timeout from 60 to 15 minutes
- Original Dockerfile unchanged for local dev builds
2026-03-17 09:03:13 -04:00
Alix-007 0f7d1fceeb fix(channels): hide tool-call notifications by default (#3779)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 08:52:49 -04:00
GhostC 01e13ac92d fix(skills): allow sibling markdown links within skills root (#3781)
Made-with: Cursor
2026-03-17 08:31:20 -04:00
Argenis a9a6113093 fix(docs): revert unauthorized CLAUDE.md additions from #3604 (#3761)
PR #3604 included CLAUDE.md changes referencing non-existent modules
(src/security/taint.rs, src/sop/workflow.rs) and duplicating content
from CONTRIBUTING.md. These additions violate the anti-pattern rule
against modifying CLAUDE.md in feature PRs.
2026-03-17 01:56:51 -04:00
Giulio V 906951a587 feat(multi): LinkedIn tool, WhatsApp voice notes, and Anthropic OAuth fix (#3604)
* feat(tools): add native LinkedIn integration tool

Add a config-gated LinkedIn tool that enables ZeroClaw to interact with
LinkedIn's REST API via OAuth2. Supports creating posts, listing own
posts, commenting, reacting, deleting posts, viewing engagement stats,
and retrieving profile info.

Architecture:
- linkedin.rs: Tool trait impl with action-dispatched design
- linkedin_client.rs: OAuth2 token management and API wrappers
- Config-gated via [linkedin] enabled = false (default off)
- Credentials loaded from workspace .env file
- Automatic token refresh with line-targeted .env update

39 unit tests covering security enforcement, parameter validation,
credential parsing, and token management.

* feat(linkedin): configurable content strategy and API version

- Expand LinkedInConfig with api_version and nested LinkedInContentConfig
  (rss_feeds, github_users, github_repos, topics, persona, instructions)
- Add get_content_strategy tool action so agents can read config at runtime
- Fix hardcoded LinkedIn API version 202402 (expired) → configurable,
  defaulting to 202602
- LinkedInClient accepts api_version as parameter instead of static header
- 4 new tests (43 total), all passing

* feat(linkedin): add multi-provider image generation for posts

Add ImageGenerator with provider chain (DALL-E, Stability AI, Imagen, Flux)
and SVG fallback card. LinkedIn tool create_post now supports generate_image
parameter. Includes LinkedIn image upload (register → upload → reference),
configurable provider priority, and 14 new tests.

* feat(whatsapp): add voice note transcription and TTS voice replies

- Add STT support: download incoming voice notes via wa-rs, transcribe
  with OpenAI Whisper (or Groq), send transcribed text to agent
- Add TTS support: synthesize agent replies to Opus audio via OpenAI
  TTS, upload encrypted media, send as WhatsApp voice note (ptt=true)
- Voice replies only trigger when user sends a voice note; text
  messages get text replies only. Flag is consumed after one use to
  prevent multiple voice notes per agent turn
- Fix transcription module to support OpenAI API key (not just Groq):
  auto-detect provider from API URL, check ANTHROPIC_OAUTH_TOKEN /
  OPENAI_API_KEY / GROQ_API_KEY env vars in priority order
- Add optional api_key field to TranscriptionConfig for explicit key
- Add response_format: opus to OpenAI TTS for WhatsApp compatibility
- Add channel capability note so agent knows TTS is automatic
- Wire transcription + TTS config into WhatsApp Web channel builder

* fix(providers): prefer ANTHROPIC_OAUTH_TOKEN over global api_key

When the Anthropic provider is used alongside a non-Anthropic primary
provider (e.g. custom: gateway), the global api_key would be passed
as credential override, bypassing provider-specific env vars. This
caused Claude Code subscription tokens (sk-ant-oat01-*) to be ignored
in favor of the unrelated gateway JWT.

Fix: for the anthropic provider, check ANTHROPIC_OAUTH_TOKEN and
ANTHROPIC_API_KEY env vars before falling back to the credential
override. This mirrors the existing MiniMax OAuth pattern and enables
subscription-based auth to work as a fallback provider.

* feat(linkedin): add scheduled post support via LinkedIn API

Add scheduled_at parameter to create_post and create_post_with_image.
When provided (RFC 3339 timestamp), the post is created as a DRAFT
with scheduledPublishOptions so LinkedIn publishes it automatically
at the specified time. This enables the cron job to schedule a week
of posts in advance directly on LinkedIn.

* fix(providers): prefer env vars for openai and groq credential resolution

Generalize the Anthropic OAuth fix to also cover openai and groq
providers. When used alongside a non-matching primary provider (e.g.
a custom: gateway), the global api_key would be passed as credential
override, causing auth failures. Now checks provider-specific env
vars (OPENAI_API_KEY, GROQ_API_KEY) before falling back to the
credential override.

* fix(whatsapp): debounce voice replies to voice final answer only

The voice note TTS was triggering on the first send() call, which was
often intermediate tool output (URLs, JSON, web fetch results) rather
than the actual answer. This produced incomprehensible voice notes.

Fix: accumulate substantive replies (>30 chars, not URLs/JSON/code)
in a pending_voice map. A spawned debounce task waits 4 seconds after
the last substantive message, then synthesizes and sends ONE voice
note with the final answer. Intermediate tool outputs are skipped.

This ensures the user hears the actual answer in the correct language,
not raw tool output in English.

* fix(whatsapp): voice in = voice out, text in = text out

Rewrite voice reply logic with clean separation:
- Voice note received: ALL text output suppressed. Latest message
  accumulated silently. After 5s of no new messages, ONE voice note
  sent with the final answer. No tool outputs, no text, just voice.
- Text received: normal text reply, no voice.

Atomic debounce: multiple spawned tasks race but only one can extract
the pending message (remove-inside-lock pattern). Prevents duplicate
voice notes.

* fix(whatsapp): voice replies send both text and voice note

Voice note in → text replies sent normally in real-time PLUS one
voice note with the final answer after 10s debounce. Only substantive
natural-language messages are voiced (tool outputs, URLs, JSON, code
blocks filtered out). Longer debounce (10s) ensures the agent
completes its full tool chain before the voice note fires.

Text in → text out only, no voice.

* fix(channels): suppress tool narration and ack reactions

- Add system prompt instruction telling the agent to NEVER narrate
  tool usage (no "Let me fetch..." or "I will use http_request...")
- Disable ack_reactions (emoji reactions on incoming messages)
- Users see only the final answer, no intermediate steps

* docs(claude): add full CONTRIBUTING.md guidelines to CLAUDE.md

Add PR template requirements, code naming conventions, architecture
boundary rules, validation commands, and branch naming guidance
directly to CLAUDE.md for AI assistant reference.

* fix(docs): add blank lines around headings in CLAUDE.md for markdown lint

* fix(channels): strengthen tool narration suppression and fix large_futures

- Move anti-narration instruction to top of channel system prompt
- Add emphatic instruction for WhatsApp/voice channels specifically
- Add outbound message filter to strip tool-call-like patterns (, 🔧)
- Box::pin the two-phase heartbeat agent::run call (16664 bytes on Linux)
2026-03-17 01:55:05 -04:00
Giulio V 220745e217 feat(channels): add Reddit, Bluesky, and generic Webhook adapters (#3598)
* feat(channels): add Reddit, Bluesky, and generic Webhook adapters

- Reddit: OAuth2 polling for mentions/DMs/replies, comment and DM sending
- Bluesky: AT Protocol session auth, notification polling, post replies
- Webhook: Axum HTTP server for inbound, configurable outbound POST/PUT
- All three follow existing channel patterns with tests

* fix(channels): use neutral test fixtures and improve test naming in webhook
2026-03-17 01:26:58 -04:00
Giulio V 61de3d5648 feat(knowledge): add knowledge graph for expertise capture and reuse (#3596)
* feat(knowledge): add knowledge graph for expertise capture and reuse

SQLite-backed knowledge graph system for consulting firms to capture,
organize, and reuse architecture decisions, solution patterns, lessons
learned, and expert matching across client engagements.

- KnowledgeGraph (src/memory/knowledge_graph.rs): node CRUD, edge
  creation, FTS5 full-text search, tag filtering, subgraph traversal,
  expert ranking by authored contributions, graph statistics
- KnowledgeTool (src/tools/knowledge_tool.rs): Tool trait impl with
  capture, search, relate, suggest, expert_find, lessons_extract, and
  graph_stats actions
- KnowledgeConfig (src/config/schema.rs): disabled by default,
  configurable db_path/max_nodes, cross_workspace_search off by default
  for client data isolation
- Wired into tools factory (conditional on config.knowledge.enabled)

20 unit tests covering node CRUD, edge creation, search ranking,
subgraph queries, expert ranking, and tool actions.

* fix: address CodeRabbit review findings

- Fix UTF-8 truncation panic in truncate_str by using char-based
  iteration instead of byte indexing
- Add config validation for knowledge.max_nodes > 0
- Add subgraph depth boundary validation (must be > 0, capped at 100)

* fix(knowledge): address remaining CodeRabbit review issues

- MAJOR: Add db_path non-empty validation in Config::validate()
- MAJOR: Reject tags containing commas in add_node (comma is separator)
- MAJOR: Fix subgraph depth boundary (0..depth instead of 0..=depth)
- MAJOR: Apply project and node_type filters consistently in both
  tag-only and similarity search paths

* fix: correct subgraph traversal test assertion and sync CI workflows
2026-03-17 01:11:29 -04:00
Giulio V 675a5c9af0 feat(tools): add Google Workspace CLI (gws) integration (#3616)
* feat(tools): add Google Workspace CLI (gws) integration

Adds GoogleWorkspaceTool for interacting with Google Drive, Sheets,
Gmail, Calendar, Docs, and other Workspace services via CLI.

- Config-gated (google_workspace.enabled)
- Service allowlist for restricted access
- Requires shell access for CLI delegation
- Input validation against shell injection
- Wrong-type rejection for all optional parameters
- Config validation for allowed_services (empty, duplicate, malformed)
- Registered in integrations registry and CLI discovery

Closes #2986

* style: fix cargo fmt + clippy violations

* feat(google-workspace): expand config with auth, rate limits, and audit settings

* fix(tools): define missing GWS_TIMEOUT_SECS constant

* fix: Box::pin large futures and resolve duplicate Default impl

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-17 00:52:59 -04:00
Giulio V b099728c27 feat(stt): multi-provider STT with TranscriptionProvider trait (#3614)
* feat(stt): add multi-provider STT with TranscriptionProvider trait

Refactors single-endpoint transcription to support multiple providers:
Groq (existing), OpenAI Whisper, Deepgram, AssemblyAI, and Google Cloud
Speech-to-Text. Adds TranscriptionManager for provider routing with
backward-compatible config fields.

* style: fix cargo fmt + clippy violations

* fix: Box::pin large futures and resolve merge conflicts with master

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-17 00:33:41 -04:00
144 changed files with 18436 additions and 1042 deletions
+10
@@ -0,0 +1,10 @@
# cargo-audit configuration
# https://rustsec.org/
[advisories]
ignore = [
# wasmtime vulns via extism 1.13.0 — no upstream fix; plugins feature-gated
"RUSTSEC-2026-0006", # wasmtime f64.copysign segfault on x86-64
"RUSTSEC-2026-0020", # WASI guest-controlled resource exhaustion
"RUSTSEC-2026-0021", # WASI http fields panic
]
+9
@@ -64,3 +64,12 @@ LICENSE
*.profdata
coverage
lcov.info
# Firmware and hardware crates (not needed for Docker runtime)
firmware/
crates/robot-kit/
# Application and script directories (not needed for Docker runtime)
apps/
python/
scripts/
@@ -74,4 +74,4 @@ jobs:
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
cargo build --release --locked --features channel-matrix --target ${{ matrix.target }}
+51 -17
@@ -213,7 +213,7 @@ jobs:
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
cargo build --release --locked --features channel-matrix --target ${{ matrix.target }}
- name: Package (Unix)
if: runner.os != 'Windows'
@@ -294,10 +294,44 @@ jobs:
name: Push Docker Image
needs: [version, build]
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
cp Dockerfile.debian.ci docker-ctx/Dockerfile.debian
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
@@ -309,24 +343,24 @@ jobs:
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
context: docker-ctx
push: true
build-args: |
ZEROCLAW_CARGO_FEATURES=channel-matrix
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
# ── Post-publish: tweet after release + website are live ──────────────
# Docker is slow (multi-platform) and can be cancelled by concurrency;
# don't let it block the tweet.
tweet:
name: Tweet Release
needs: [version, publish, redeploy-website]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/tweet-release.yml
with:
release_tag: ${{ needs.version.outputs.tag }}
release_url: https://github.com/zeroclaw-labs/zeroclaw/releases/tag/${{ needs.version.outputs.tag }}
secrets: inherit
- name: Build and push Debian compatibility image
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
file: docker-ctx/Dockerfile.debian
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}-debian
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta-debian
platforms: linux/amd64,linux/arm64
# Tweet removed — only stable releases should tweet (see tweet-release.yml).
+50 -5
@@ -214,7 +214,7 @@ jobs:
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
cargo build --release --locked --features channel-matrix --target ${{ matrix.target }}
- name: Package (Unix)
if: runner.os != 'Windows'
@@ -337,10 +337,44 @@ jobs:
name: Push Docker Image
needs: [validate, build]
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
cp Dockerfile.debian.ci docker-ctx/Dockerfile.debian
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
@@ -352,14 +386,25 @@ jobs:
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
context: docker-ctx
push: true
build-args: |
ZEROCLAW_CARGO_FEATURES=channel-matrix
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push Debian compatibility image
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
file: docker-ctx/Dockerfile.debian
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}-debian
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:debian
platforms: linux/amd64,linux/arm64
# ── Post-publish: package manager auto-sync ─────────────────────────
scoop:
+21 -36
@@ -5,7 +5,7 @@ on:
workflow_call:
inputs:
release_tag:
description: "Release tag (e.g. v0.3.0 or v0.3.0-beta.42)"
description: "Stable release tag (e.g. v0.3.0)"
required: true
type: string
release_url:
@@ -61,9 +61,10 @@ jobs:
exit 0
fi
# For betas: find the PREVIOUS release tag to check for new features
# Find the previous STABLE release tag (exclude betas) to check for new features
PREV_TAG=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
@@ -97,53 +98,37 @@ jobs:
if [ -n "$MANUAL_TEXT" ]; then
TWEET="$MANUAL_TEXT"
else
# For features: diff against the PREVIOUS release (including betas)
# This prevents duplicate feature lists across consecutive betas
PREV_RELEASE=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| head -1 || echo "")
# For contributors: diff against the last STABLE release
# This captures everyone across the full release cycle
# Diff against the last STABLE release (exclude betas) to capture
# ALL features accumulated across the full beta cycle
PREV_STABLE=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
FEAT_RANGE="${PREV_RELEASE:+${PREV_RELEASE}..}${RELEASE_TAG}"
CONTRIB_RANGE="${PREV_STABLE:+${PREV_STABLE}..}${RELEASE_TAG}"
RANGE="${PREV_STABLE:+${PREV_STABLE}..}${RELEASE_TAG}"
# Extract NEW features only since the last release
FEATURES=$(git log "$FEAT_RANGE" --pretty=format:"%s" --no-merges \
# Extract ALL features since the last stable release
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf \
| head -4 \
| while IFS= read -r line; do echo "🚀 ${line}"; done || true)
if [ -z "$FEATURES" ]; then
FEATURES="🚀 Incremental improvements and polish"
fi
# Count ALL contributors across the full release cycle
GIT_AUTHORS=$(git log "$CONTRIB_RANGE" --pretty=format:"%an" --no-merges | sort -uf || true)
CO_AUTHORS=$(git log "$CONTRIB_RANGE" --pretty=format:"%b" --no-merges \
| grep -ioE 'Co-Authored-By: *[^<]+' \
| sed 's/Co-Authored-By: *//i' \
| sed 's/ *$//' \
| sort -uf || true)
TOTAL_COUNT=$(printf "%s\n%s" "$GIT_AUTHORS" "$CO_AUTHORS" \
| sort -uf \
| grep -v '^$' \
| grep -viE '\[bot\]$|^dependabot|^github-actions|^copilot|^ZeroClaw Bot|^ZeroClaw Runner|^ZeroClaw Agent|^blacksmith' \
| grep -c . || echo "0")
FEAT_COUNT=$(echo "$FEATURES" | grep -c . || echo "0")
# Build tweet — new features, contributor count, hashtags
TWEET=$(printf "🦀 ZeroClaw %s\n\n%s\n\n🙌 %s contributors\n\n%s\n\n#zeroclaw #rust #ai #opensource" \
"$RELEASE_TAG" "$FEATURES" "$TOTAL_COUNT" "$RELEASE_URL")
# Format top features with rocket emoji (limit to 6 for tweet space)
FEAT_LIST=$(echo "$FEATURES" \
| head -6 \
| while IFS= read -r line; do echo "🚀 ${line}"; done || true)
if [ -z "$FEAT_LIST" ]; then
FEAT_LIST="🚀 Incremental improvements and polish"
fi
# Build tweet — feature-focused style
TWEET=$(printf "🦀 ZeroClaw %s\n\n%s\n\nZero overhead. Zero compromise. 100%% Rust.\n\n#zeroclaw #rust #ai #opensource" \
"$RELEASE_TAG" "$FEAT_LIST")
fi
# X/Twitter counts any URL as 23 chars (t.co shortening).
Generated
+1296 -46
File diff suppressed because it is too large
+25 -11
@@ -4,7 +4,7 @@ resolver = "2"
[package]
name = "zeroclawlabs"
version = "0.4.3"
version = "0.5.0"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
@@ -14,15 +14,6 @@ readme = "README.md"
keywords = ["ai", "agent", "cli", "assistant", "chatbot"]
categories = ["command-line-utilities", "api-bindings"]
rust-version = "1.87"
[[bin]]
name = "zeroclaw"
path = "src/main.rs"
[lib]
name = "zeroclaw"
path = "src/lib.rs"
include = [
"/src/**/*",
"/build.rs",
@@ -31,8 +22,17 @@ include = [
"/LICENSE*",
"/README.md",
"/web/dist/**/*",
"/tool_descriptions/**/*",
]
[[bin]]
name = "zeroclaw"
path = "src/main.rs"
[lib]
name = "zeroclaw"
path = "src/lib.rs"
[dependencies]
# CLI - minimal and fast
clap = { version = "4.5", features = ["derive"] }
@@ -53,6 +53,7 @@ matrix-sdk = { version = "0.16", optional = true, default-features = false, feat
serde = { version = "1.0", default-features = false, features = ["derive"] }
serde_json = { version = "1.0", default-features = false, features = ["std"] }
serde_ignored = "0.1"
serde_yaml = "0.9"
# Config
directories = "6.0"
@@ -82,6 +83,12 @@ nanohtml2text = "0.2"
# Optional Rust-native browser automation backend
fantoccini = { version = "0.22.1", optional = true, default-features = false, features = ["rustls-tls"] }
# Progress bars (update pipeline)
indicatif = "0.17"
# Temp files (update pipeline rollback)
tempfile = "3.26"
# Error handling
anyhow = "1.0"
thiserror = "2.0"
@@ -183,6 +190,9 @@ probe-rs = { version = "0.31", optional = true }
# PDF extraction for datasheet RAG (optional, enable with --features rag-pdf)
pdf-extract = { version = "0.10", optional = true }
# WASM plugin runtime (extism)
extism = { version = "1.9", optional = true }
# Terminal QR rendering for WhatsApp Web pairing flow.
qrcode = { version = "0.14", optional = true }
@@ -205,7 +215,7 @@ landlock = { version = "0.4", optional = true }
libc = "0.2"
[features]
default = ["observability-prometheus", "channel-nostr"]
default = ["observability-prometheus", "channel-nostr", "skill-creation"]
channel-nostr = ["dep:nostr-sdk"]
hardware = ["nusb", "tokio-serial"]
channel-matrix = ["dep:matrix-sdk"]
@@ -230,8 +240,12 @@ metrics = ["observability-prometheus"]
probe = ["dep:probe-rs"]
# rag-pdf = PDF ingestion for datasheet RAG
rag-pdf = ["dep:pdf-extract"]
# skill-creation = Autonomous skill creation from successful multi-step tasks
skill-creation = []
# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "dep:serde-big-array", "dep:prost", "dep:qrcode"]
# WASM plugin system (extism-based)
plugins-wasm = ["dep:extism"]
[profile.release]
opt-level = "z" # Optimize for size
+31 -28
@@ -1,9 +1,18 @@
# syntax=docker/dockerfile:1.7
# ── Stage 0: Frontend build ─────────────────────────────────────
FROM node:22-alpine AS web-builder
WORKDIR /web
COPY web/package.json web/package-lock.json* ./
RUN npm ci --ignore-scripts 2>/dev/null || npm install --ignore-scripts
COPY web/ .
RUN npm run build
# ── Stage 1: Build ────────────────────────────────────────────
FROM rust:1.94-slim@sha256:7d3701660d2aa7101811ba0c54920021452aa60e5bae073b79c2b137a432b2f4 AS builder
FROM rust:1.94-slim@sha256:da9dab7a6b8dd428e71718402e97207bb3e54167d37b5708616050b1e8f60ed6 AS builder
WORKDIR /app
ARG ZEROCLAW_CARGO_FEATURES=""
# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
@@ -14,43 +23,29 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
COPY crates/robot-kit/Cargo.toml crates/robot-kit/Cargo.toml
# Remove robot-kit from workspace members — it is excluded by .dockerignore
# and is not needed for the Docker build (hardware-only crate).
RUN sed -i 's/members = \[".", "crates\/robot-kit"\]/members = ["."]/' Cargo.toml
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches crates/robot-kit/src \
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
&& echo "" > src/lib.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs \
&& echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs
&& echo "fn main() {}" > benches/agent_benchmarks.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked
RUN rm -rf src benches crates/robot-kit/src
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi
RUN rm -rf src benches
# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
COPY benches/ benches/
COPY crates/ crates/
COPY firmware/ firmware/
COPY web/ web/
COPY --from=web-builder /web/dist web/dist
COPY *.rs .
# Keep release builds resilient when frontend dist assets are not prebuilt in Git.
RUN mkdir -p web/dist && \
if [ ! -f web/dist/index.html ]; then \
printf '%s\n' \
'<!doctype html>' \
'<html lang="en">' \
' <head>' \
' <meta charset="utf-8" />' \
' <meta name="viewport" content="width=device-width,initial-scale=1" />' \
' <title>ZeroClaw Dashboard</title>' \
' </head>' \
' <body>' \
' <h1>ZeroClaw Dashboard Unavailable</h1>' \
' <p>Frontend assets are not bundled in this build. Build the web UI to populate <code>web/dist</code>.</p>' \
' </body>' \
'</html>' > web/dist/index.html; \
fi
RUN touch src/main.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
@@ -58,7 +53,11 @@ RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/regist
rm -rf target/release/.fingerprint/zeroclawlabs-* \
target/release/deps/zeroclawlabs-* \
target/release/incremental/zeroclawlabs-* && \
cargo build --release --locked && \
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
@@ -114,6 +113,8 @@ ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
@@ -138,5 +139,7 @@ ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
+25
@@ -0,0 +1,25 @@
# Dockerfile.ci — CI/release image using pre-built binaries.
# Used by release workflows to skip the ~60 min Rust compilation.
# The main Dockerfile is still used for local dev builds.
# ── Runtime (Distroless) ─────────────────────────────────────
FROM gcr.io/distroless/cc-debian13:nonroot@sha256:84fcd3c223b144b0cb6edc5ecc75641819842a9679a3a58fd6294bec47532bf7
ARG TARGETARCH
# Copy the pre-built binary for this platform (amd64 or arm64)
COPY bin/${TARGETARCH}/zeroclaw /usr/local/bin/zeroclaw
# Runtime directory structure and default config
COPY --chown=65534:65534 zeroclaw-data/ /zeroclaw-data/
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
+30 -29
@@ -1,5 +1,13 @@
# syntax=docker/dockerfile:1.7
# ── Stage 0: Frontend build ─────────────────────────────────────
FROM node:22-alpine AS web-builder
WORKDIR /web
COPY web/package.json web/package-lock.json* ./
RUN npm ci --ignore-scripts 2>/dev/null || npm install --ignore-scripts
COPY web/ .
RUN npm run build
# Dockerfile.debian — Shell-equipped variant of the ZeroClaw container.
#
# The default Dockerfile produces a distroless "release" image with no shell,
@@ -15,10 +23,11 @@
# Or with docker compose:
# docker compose -f docker-compose.yml -f docker-compose.debian.yml up
# ── Stage 1: Build (identical to main Dockerfile) ───────────
FROM rust:1.94-slim@sha256:7d3701660d2aa7101811ba0c54920021452aa60e5bae073b79c2b137a432b2f4 AS builder
# ── Stage 1: Build (match runtime glibc baseline) ───────────
FROM rust:1.94-bookworm AS builder
WORKDIR /app
ARG ZEROCLAW_CARGO_FEATURES=""
# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
@@ -29,47 +38,37 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
COPY crates/robot-kit/Cargo.toml crates/robot-kit/Cargo.toml
# Remove robot-kit from workspace members — it is excluded by .dockerignore
# and is not needed for the Docker build (hardware-only crate).
RUN sed -i 's/members = \[".", "crates\/robot-kit"\]/members = ["."]/' Cargo.toml
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches crates/robot-kit/src \
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
&& echo "" > src/lib.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs \
&& echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs
&& echo "fn main() {}" > benches/agent_benchmarks.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked
RUN rm -rf src benches crates/robot-kit/src
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi
RUN rm -rf src benches
# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
COPY benches/ benches/
COPY crates/ crates/
COPY firmware/ firmware/
COPY web/ web/
# Keep release builds resilient when frontend dist assets are not prebuilt in Git.
RUN mkdir -p web/dist && \
if [ ! -f web/dist/index.html ]; then \
printf '%s\n' \
'<!doctype html>' \
'<html lang="en">' \
' <head>' \
' <meta charset="utf-8" />' \
' <meta name="viewport" content="width=device-width,initial-scale=1" />' \
' <title>ZeroClaw Dashboard</title>' \
' </head>' \
' <body>' \
' <h1>ZeroClaw Dashboard Unavailable</h1>' \
' <p>Frontend assets are not bundled in this build. Build the web UI to populate <code>web/dist</code>.</p>' \
' </body>' \
'</html>' > web/dist/index.html; \
fi
COPY --from=web-builder /web/dist web/dist
RUN touch src/main.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked && \
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
@@ -120,5 +119,7 @@ ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
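The build stage above forwards the `ZEROCLAW_CARGO_FEATURES` build-arg to cargo only when it is non-empty, so a plain `docker build` still produces the default feature set. A sketch of that branch in isolation (the command is echoed rather than executed, and the feature name used below is hypothetical):

```shell
#!/bin/sh
# Mirror of the conditional build step in the Dockerfile: a non-empty
# ZEROCLAW_CARGO_FEATURES is forwarded via --features; otherwise the
# plain locked release build runs. Echoed for illustration only.
build_cmd() {
  if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then
    echo "cargo build --release --locked --features $ZEROCLAW_CARGO_FEATURES"
  else
    echo "cargo build --release --locked"
  fi
}

ZEROCLAW_CARGO_FEATURES=
build_cmd    # → cargo build --release --locked
ZEROCLAW_CARGO_FEATURES="extra-feature"
build_cmd    # → cargo build --release --locked --features extra-feature
```

Keeping `--locked` in both branches means a `Cargo.lock` drift fails the build early instead of silently regenerating the lockfile inside the image.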
+34
@@ -0,0 +1,34 @@
# Dockerfile.debian.ci — CI/release Debian image using pre-built binaries.
# Mirrors Dockerfile.ci but uses debian:bookworm-slim with shell tools
# so the agent can use shell-based tools (pwd, ls, git, curl, etc.).
# Used by release workflows to skip ~60 min QEMU cross-compilation.
# ── Runtime (Debian with shell) ────────────────────────────────
FROM debian:bookworm-slim
ARG TARGETARCH
# Install essential tools for agent shell operations
RUN apt-get update && apt-get install -y --no-install-recommends \
bash \
ca-certificates \
curl \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the pre-built binary for this platform (amd64 or arm64)
COPY bin/${TARGETARCH}/zeroclaw /usr/local/bin/zeroclaw
# Runtime directory structure and default config
COPY --chown=65534:65534 zeroclaw-data/ /zeroclaw-data/
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
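Unlike this CI image, the shell-equipped `Dockerfile.debian` above also declares `HEALTHCHECK ... CMD ["zeroclaw", "status", "--format=exit-code"]`. Docker inspects only the probe's exit status: 0 means healthy, 1 means unhealthy (other codes are reserved). An illustrative stand-in for that contract (the `probe` wrapper is hypothetical; the real probe is the `zeroclaw` binary itself):

```shell
#!/bin/sh
# Stand-in for the HEALTHCHECK probe: translate an arbitrary status
# code into the 0/1 exit contract a HEALTHCHECK CMD must honor.
probe() {
  status="$1"   # stand-in for the exit code of `zeroclaw status --format=exit-code`
  if [ "$status" -eq 0 ]; then
    echo healthy
    return 0
  fi
  echo unhealthy
  return 1
}

probe 0   # → healthy (exit 0)
probe 3 || true   # → unhealthy (exit 1)
```

With the `--interval=60s --retries=3` settings in the Dockerfile, the container is marked unhealthy only after three consecutive failing probes.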
+5 -2
@@ -16,7 +16,10 @@
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center" dir="rtl">
@@ -103,7 +106,7 @@
| التاريخ (UTC) | المستوى | الإشعار | الإجراء |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _حرج_ | **نحن غير مرتبطين** بـ `openagen/zeroclaw` أو `zeroclaw.org`. نطاق `zeroclaw.org` يشير حاليًا إلى الفرع `openagen/zeroclaw`، وهذا النطاق/المستودع ينتحل شخصية موقعنا/مشروعنا الرسمي. | لا تثق بالمعلومات أو الملفات الثنائية أو جمع التبرعات أو الإعلانات من هذه المصادر. استخدم فقط [هذا المستودع](https://github.com/zeroclaw-labs/zeroclaw) وحساباتنا الموثقة على وسائل التواصل الاجتماعي. |
| 2026-02-21 | _مهم_ | موقعنا الرسمي أصبح متاحًا الآن: [zeroclawlabs.ai](https://zeroclawlabs.ai). شكرًا لصبرك أثناء الانتظار. لا نزال نكتشف محاولات الانتحال: لا تشارك في أي نشاط استثمار/تمويل باسم ZeroClaw إذا لم يتم نشره عبر قنواتنا الرسمية. | استخدم [هذا المستودع](https://github.com/zeroclaw-labs/zeroclaw) كمصدر وحيد للحقيقة. تابع [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)، [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs)، [Facebook (مجموعة)](https://www.facebook.com/groups/zeroclaw)، [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)، و[Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) للتحديثات الرسمية. |
| 2026-02-21 | _مهم_ | موقعنا الرسمي أصبح متاحًا الآن: [zeroclawlabs.ai](https://zeroclawlabs.ai). شكرًا لصبرك أثناء الانتظار. لا نزال نكتشف محاولات الانتحال: لا تشارك في أي نشاط استثمار/تمويل باسم ZeroClaw إذا لم يتم نشره عبر قنواتنا الرسمية. | استخدم [هذا المستودع](https://github.com/zeroclaw-labs/zeroclaw) كمصدر وحيد للحقيقة. تابع [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)، [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs)، [Facebook (مجموعة)](https://www.facebook.com/groups/zeroclawlabs)، [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)، و[Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) للتحديثات الرسمية. |
| 2026-02-19 | _مهم_ | قامت Anthropic بتحديث شروط استخدام المصادقة وبيانات الاعتماد في 2026-02-19. مصادقة OAuth (Free، Pro، Max) حصريًا لـ Claude Code و Claude.ai؛ استخدام رموز Claude Free/Pro/Max OAuth في أي منتج أو أداة أو خدمة أخرى (بما في ذلك Agent SDK) غير مسموح به وقد ينتهك شروط استخدام المستهلك. | يرجى تجنب مؤقتًا تكاملات Claude Code OAuth لمنع أي خسارة محتملة. البند الأصلي: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ الميزات
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ channels:
## কমিউনিটি
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Použijte tuto tabulku pro důležitá oznámení (změny kompatibility, bezpeč
| Datum (UTC) | Úroveň | Oznámení | Akce |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritické_ | **Nejsme propojeni** s `openagen/zeroclaw` nebo `zeroclaw.org`. Doména `zeroclaw.org` aktuálně směřuje na fork `openagen/zeroclaw`, a tato doména/repoziťář se vydává za náš oficiální web/projekt. | Nevěřte informacím, binárním souborům, fundraisingu nebo oznámením z těchto zdrojů. Používejte pouze [tento repoziťář](https://github.com/zeroclaw-labs/zeroclaw) a naše ověřené sociální účty. |
| 2026-02-21 | _Důležité_ | Náš oficiální web je nyní online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Děkujeme za trpělivost během čekání. Stále detekujeme pokusy o vydávání se: neúčastněte žádné investiční/fundraisingové aktivity ve jménu ZeroClaw pokud není publikována přes naše oficiální kanály. | Používejte [tento repoziťář](https://github.com/zeroclaw-labs/zeroclaw) jako jediný zdroj pravdy. Sledujte [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (skupina)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), a [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) pro oficiální aktualizace. |
| 2026-02-21 | _Důležité_ | Náš oficiální web je nyní online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Děkujeme za trpělivost během čekání. Stále detekujeme pokusy o vydávání se: neúčastněte žádné investiční/fundraisingové aktivity ve jménu ZeroClaw pokud není publikována přes naše oficiální kanály. | Používejte [tento repoziťář](https://github.com/zeroclaw-labs/zeroclaw) jako jediný zdroj pravdy. Sledujte [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (skupina)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), a [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) pro oficiální aktualizace. |
| 2026-02-19 | _Důležité_ | Anthropic aktualizoval podmínky použití autentizace a přihlašovacích údajů dne 2026-02-19. OAuth autentizace (Free, Pro, Max) je výhradně pro Claude Code a Claude.ai; použití Claude Free/Pro/Max OAuth tokenů v jakémkoliv jiném produktu, nástroji nebo službě (včetně Agent SDK) není povoleno a může porušit Podmínky použití spotřebitele. | Prosím dočasně se vyhněte Claude Code OAuth integracím pro předcházení potenciálním ztrátám. Původní klauzule: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funkce
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ Se [LICENSE-APACHE](LICENSE-APACHE) og [LICENSE-MIT](LICENSE-MIT) for detaljer.
## Fællesskab
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -107,7 +110,7 @@ Verwende diese Tabelle für wichtige Hinweise (Kompatibilitätsänderungen, Sich
| Datum (UTC) | Ebene | Hinweis | Aktion |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritisch_ | Wir sind **nicht verbunden** mit `openagen/zeroclaw` oder `zeroclaw.org`. Die Domain `zeroclaw.org` zeigt derzeit auf den Fork `openagen/zeroclaw`, und diese Domain/Repository fälscht unsere offizielle Website/Projekt. | Vertraue keinen Informationen, Binärdateien, Fundraising oder Ankündigungen aus diesen Quellen. Verwende nur [dieses Repository](https://github.com/zeroclaw-labs/zeroclaw) und unsere verifizierten Social-Media-Konten. |
| 2026-02-21 | _Wichtig_ | Unsere offizielle Website ist jetzt online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Danke für deine Geduld während der Wartezeit. Wir erkennen weiterhin Fälschungsversuche: nimm an keiner Investitions-/Finanzierungsaktivität im Namen von ZeroClaw teil, wenn sie nicht über unsere offiziellen Kanäle veröffentlicht wird. | Verwende [dieses Repository](https://github.com/zeroclaw-labs/zeroclaw) als einzige Quelle der Wahrheit. Folge [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (Gruppe)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), und [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) für offizielle Updates. |
| 2026-02-21 | _Wichtig_ | Unsere offizielle Website ist jetzt online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Danke für deine Geduld während der Wartezeit. Wir erkennen weiterhin Fälschungsversuche: nimm an keiner Investitions-/Finanzierungsaktivität im Namen von ZeroClaw teil, wenn sie nicht über unsere offiziellen Kanäle veröffentlicht wird. | Verwende [dieses Repository](https://github.com/zeroclaw-labs/zeroclaw) als einzige Quelle der Wahrheit. Folge [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (Gruppe)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), und [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) für offizielle Updates. |
| 2026-02-19 | _Wichtig_ | Anthropic hat die Nutzungsbedingungen für Authentifizierung und Anmeldedaten am 2026-02-19 aktualisiert. Die OAuth-Authentifizierung (Free, Pro, Max) ist ausschließlich für Claude Code und Claude.ai; die Verwendung von Claude Free/Pro/Max OAuth-Token in einem anderen Produkt, Tool oder Dienst (einschließlich Agent SDK) ist nicht erlaubt und kann gegen die Verbrauchernutzungsbedingungen verstoßen. | Bitte vermeide vorübergehend Claude Code OAuth-Integrationen, um potenzielle Verluste zu verhindern. Originalklausel: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funktionen
+5 -2
@@ -14,7 +14,10 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -176,7 +179,7 @@ channels:
## Κοινότητα
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Usa esta tabla para avisos importantes (cambios de compatibilidad, avisos de seg
| Fecha (UTC) | Nivel | Aviso | Acción |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Crítico_ | **No estamos afiliados** con `openagen/zeroclaw` o `zeroclaw.org`. El dominio `zeroclaw.org` apunta actualmente al fork `openagen/zeroclaw`, y este dominio/repositorio está suplantando nuestro sitio web/proyecto oficial. | No confíes en información, binarios, recaudaciones de fondos o anuncios de estas fuentes. Usa solo [este repositorio](https://github.com/zeroclaw-labs/zeroclaw) y nuestras cuentas sociales verificadas. |
| 2026-02-21 | _Importante_ | Nuestro sitio web oficial ahora está en línea: [zeroclawlabs.ai](https://zeroclawlabs.ai). Gracias por tu paciencia durante la espera. Todavía detectamos intentos de suplantación: no participes en ninguna actividad de inversión/financiamiento en nombre de ZeroClaw si no se publica a través de nuestros canales oficiales. | Usa [este repositorio](https://github.com/zeroclaw-labs/zeroclaw) como la única fuente de verdad. Sigue [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), y [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para actualizaciones oficiales. |
| 2026-02-21 | _Importante_ | Nuestro sitio web oficial ahora está en línea: [zeroclawlabs.ai](https://zeroclawlabs.ai). Gracias por tu paciencia durante la espera. Todavía detectamos intentos de suplantación: no participes en ninguna actividad de inversión/financiamiento en nombre de ZeroClaw si no se publica a través de nuestros canales oficiales. | Usa [este repositorio](https://github.com/zeroclaw-labs/zeroclaw) como la única fuente de verdad. Sigue [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), y [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para actualizaciones oficiales. |
| 2026-02-19 | _Importante_ | Anthropic actualizó los términos de uso de autenticación y credenciales el 2026-02-19. La autenticación OAuth (Free, Pro, Max) es exclusivamente para Claude Code y Claude.ai; el uso de tokens OAuth de Claude Free/Pro/Max en cualquier otro producto, herramienta o servicio (incluyendo Agent SDK) no está permitido y puede violar los Términos de Uso del Consumidor. | Por favor, evita temporalmente las integraciones OAuth de Claude Code para prevenir cualquier pérdida potencial. Cláusula original: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Características
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ Katso [LICENSE-APACHE](LICENSE-APACHE) ja [LICENSE-MIT](LICENSE-MIT) yksityiskoh
## Yhteisö
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -14,7 +14,10 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributeurs" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Offrez-moi un café" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X : @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit : r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -101,7 +104,7 @@ Utilisez ce tableau pour les avis importants (changements incompatibles, avis de
| Date (UTC) | Niveau | Avis | Action |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Critique_ | Nous ne sommes **pas affiliés** à `openagen/zeroclaw` ou `zeroclaw.org`. Le domaine `zeroclaw.org` pointe actuellement vers le fork `openagen/zeroclaw`, et ce domaine/dépôt usurpe l'identité de notre site web/projet officiel. | Ne faites pas confiance aux informations, binaires, levées de fonds ou annonces provenant de ces sources. Utilisez uniquement [ce dépôt](https://github.com/zeroclaw-labs/zeroclaw) et nos comptes sociaux vérifiés. |
| 2026-02-21 | _Important_ | Notre site officiel est désormais en ligne : [zeroclawlabs.ai](https://zeroclawlabs.ai). Merci pour votre patience pendant cette attente. Nous constatons toujours des tentatives d'usurpation : ne participez à aucune activité d'investissement/financement au nom de ZeroClaw si elle n'est pas publiée via nos canaux officiels. | Utilisez [ce dépôt](https://github.com/zeroclaw-labs/zeroclaw) comme source unique de vérité. Suivez [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (groupe)](https://www.facebook.com/groups/zeroclaw), et [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) pour les mises à jour officielles. |
| 2026-02-21 | _Important_ | Notre site officiel est désormais en ligne : [zeroclawlabs.ai](https://zeroclawlabs.ai). Merci pour votre patience pendant cette attente. Nous constatons toujours des tentatives d'usurpation : ne participez à aucune activité d'investissement/financement au nom de ZeroClaw si elle n'est pas publiée via nos canaux officiels. | Utilisez [ce dépôt](https://github.com/zeroclaw-labs/zeroclaw) comme source unique de vérité. Suivez [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (groupe)](https://www.facebook.com/groups/zeroclawlabs), et [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) pour les mises à jour officielles. |
| 2026-02-19 | _Important_ | Anthropic a mis à jour les conditions d'utilisation de l'authentification et des identifiants le 2026-02-19. L'authentification OAuth (Free, Pro, Max) est exclusivement destinée à Claude Code et Claude.ai ; l'utilisation de tokens OAuth de Claude Free/Pro/Max dans tout autre produit, outil ou service (y compris Agent SDK) n'est pas autorisée et peut violer les Conditions d'utilisation grand public. | Veuillez temporairement éviter les intégrations OAuth de Claude Code pour prévenir toute perte potentielle. Clause originale : [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Fonctionnalités
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center" dir="rtl">
@@ -193,7 +196,7 @@ channels:
## קהילה
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ channels:
## समुदाय
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ Részletekért lásd a [LICENSE-APACHE](LICENSE-APACHE) és [LICENSE-MIT](LICENS
## Közösség
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ Lihat [LICENSE-APACHE](LICENSE-APACHE) dan [LICENSE-MIT](LICENSE-MIT) untuk deta
## Komunitas
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Usa questa tabella per avvisi importanti (cambiamenti di compatibilità, avvisi
| Data (UTC) | Livello | Avviso | Azione |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Critico_ | **Non siamo affiliati** con `openagen/zeroclaw` o `zeroclaw.org`. Il dominio `zeroclaw.org` punta attualmente al fork `openagen/zeroclaw`, e questo dominio/repository sta contraffacendo il nostro sito web/progetto ufficiale. | Non fidarti di informazioni, binari, raccolte fondi o annunci da queste fonti. Usa solo [questo repository](https://github.com/zeroclaw-labs/zeroclaw) e i nostri account social verificati. |
| 2026-02-21 | _Importante_ | Il nostro sito ufficiale è ora online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Grazie per la pazienza durante l'attesa. Rileviamo ancora tentativi di contraffazione: non partecipare ad alcuna attività di investimento/finanziamento a nome di ZeroClaw se non pubblicata tramite i nostri canali ufficiali. | Usa [questo repository](https://github.com/zeroclaw-labs/zeroclaw) come unica fonte di verità. Segui [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (gruppo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) per aggiornamenti ufficiali. |
| 2026-02-21 | _Importante_ | Il nostro sito ufficiale è ora online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Grazie per la pazienza durante l'attesa. Rileviamo ancora tentativi di contraffazione: non partecipare ad alcuna attività di investimento/finanziamento a nome di ZeroClaw se non pubblicata tramite i nostri canali ufficiali. | Usa [questo repository](https://github.com/zeroclaw-labs/zeroclaw) come unica fonte di verità. Segui [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (gruppo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) per aggiornamenti ufficiali. |
| 2026-02-19 | _Importante_ | Anthropic ha aggiornato i termini di utilizzo di autenticazione e credenziali il 2026-02-19. L'autenticazione OAuth (Free, Pro, Max) è esclusivamente per Claude Code e Claude.ai; l'uso di token OAuth di Claude Free/Pro/Max in qualsiasi altro prodotto, strumento o servizio (incluso Agent SDK) non è consentito e può violare i Termini di Utilizzo del Consumatore. | Si prega di evitare temporaneamente le integrazioni OAuth di Claude Code per prevenire qualsiasi potenziale perdita. Clausola originale: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funzionalità
+5 -2
@@ -13,7 +13,10 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
@@ -92,7 +95,7 @@
| 日付 (UTC) | レベル | お知らせ | 対応 |
|---|---|---|---|
| 2026-02-19 | _緊急_ | 私たちは `openagen/zeroclaw` および `zeroclaw.org` とは**一切関係ありません**。`zeroclaw.org` は現在 `openagen/zeroclaw` の fork を指しており、そのドメイン/リポジトリは当プロジェクトの公式サイト・公式プロジェクトを装っています。 | これらの情報源による案内、バイナリ、資金調達情報、公式発表は信頼しないでください。必ず[本リポジトリ](https://github.com/zeroclaw-labs/zeroclaw)と認証済み公式SNSのみを参照してください。 |
| 2026-02-21 | _重要_ | 公式サイトを公開しました: [zeroclawlabs.ai](https://zeroclawlabs.ai)。公開までお待ちいただきありがとうございました。引き続きなりすましの試みを確認しているため、ZeroClaw 名義の投資・資金調達などの案内は、公式チャネルで確認できない限り参加しないでください。 | 情報は[本リポジトリ](https://github.com/zeroclaw-labs/zeroclaw)を最優先で確認し、[X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Facebook(グループ)](https://www.facebook.com/groups/zeroclaw)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) と [小紅書アカウント](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) で公式更新を確認してください。 |
| 2026-02-21 | _重要_ | 公式サイトを公開しました: [zeroclawlabs.ai](https://zeroclawlabs.ai)。公開までお待ちいただきありがとうございました。引き続きなりすましの試みを確認しているため、ZeroClaw 名義の投資・資金調達などの案内は、公式チャネルで確認できない限り参加しないでください。 | 情報は[本リポジトリ](https://github.com/zeroclaw-labs/zeroclaw)を最優先で確認し、[X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Facebook(グループ)](https://www.facebook.com/groups/zeroclawlabs)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) と [小紅書アカウント](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) で公式更新を確認してください。 |
| 2026-02-19 | _重要_ | Anthropic は 2026-02-19 に Authentication and Credential Use を更新しました。条文では、OAuth authentication(Free/Pro/Max)は Claude Code と Claude.ai 専用であり、Claude Free/Pro/Max で取得した OAuth トークンを他の製品・ツール・サービス(Agent SDK を含む)で使用することは許可されず、Consumer Terms of Service 違反に該当すると明記されています。 | 損失回避のため、当面は Claude Code OAuth 連携を試さないでください。原文: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use)。 |
## 概要
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Harvard, MIT, 그리고 Sundai.Club 커뮤니티의 학생들과 멤버들이
| 날짜 (UTC) | 수준 | 공지 | 조치 |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _중요_ | 우리는 `openagen/zeroclaw` 또는 `zeroclaw.org`와 **관련이 없습니다**. `zeroclaw.org` 도메인은 현재 `openagen/zeroclaw` 포크를 가리키고 있으며, 이 도메인/저장소는 우리의 공식 웹사이트/프로젝트를 사칭하고 있습니다. | 이 소스의 정보, 바이너리, 펀딩, 공지를 신뢰하지 마세요. [이 저장소](https://github.com/zeroclaw-labs/zeroclaw)와 우리의 확인된 소셜 계정만 사용하세요. |
| 2026-02-21 | _중요_ | 우리의 공식 웹사이트가 이제 온라인입니다: [zeroclawlabs.ai](https://zeroclawlabs.ai). 기다려주셔서 감사합니다. 여전히 사칭 시도가 감지되고 있습니다: 공식 채널을 통해 게시되지 않은 ZeroClaw 이름의 모든 투자/펀딩 활동에 참여하지 마세요. | [이 저장소](https://github.com/zeroclaw-labs/zeroclaw)를 유일한 진실의 원천으로 사용하세요. [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (그룹)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), 그리고 [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)를 팔로우하여 공식 업데이트를 받으세요. |
| 2026-02-21 | _중요_ | 우리의 공식 웹사이트가 이제 온라인입니다: [zeroclawlabs.ai](https://zeroclawlabs.ai). 기다려주셔서 감사합니다. 여전히 사칭 시도가 감지되고 있습니다: 공식 채널을 통해 게시되지 않은 ZeroClaw 이름의 모든 투자/펀딩 활동에 참여하지 마세요. | [이 저장소](https://github.com/zeroclaw-labs/zeroclaw)를 유일한 진실의 원천으로 사용하세요. [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (그룹)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), 그리고 [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)를 팔로우하여 공식 업데이트를 받으세요. |
| 2026-02-19 | _중요_ | Anthropic이 2026-02-19에 인증 및 자격증명 사용 약관을 업데이트했습니다. OAuth 인증(Free, Pro, Max)은 Claude Code 및 Claude.ai 전용입니다. 다른 제품, 도구 또는 서비스(Agent SDK 포함)에서 Claude Free/Pro/Max OAuth 토큰을 사용하는 것은 허용되지 않으며 소비자 이용약관을 위반할 수 있습니다. | 잠재적인 손실을 방지하기 위해 일시적으로 Claude Code OAuth 통합을 피하세요. 원본 조항: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ 기능
+5 -2
@@ -14,7 +14,10 @@
<a href="https://github.com/zeroclaw-labs/zeroclaw/graphs/contributors"><img src="https://img.shields.io/github/contributors/zeroclaw-labs/zeroclaw?color=green" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -94,7 +97,7 @@ Use this board for important notices (breaking changes, security advisories, mai
| Date (UTC) | Level | Notice | Action |
| ---------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw`, `zeroclaw.org`, or `zeroclaw.net`. The `zeroclaw.org` and `zeroclaw.net` domains currently point to the `openagen/zeroclaw` fork, and those domains and that repository are impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
- | 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thanks for your patience while we prepared the launch. We are still seeing impersonation attempts, so do **not** join any investment or fundraising activity claiming the ZeroClaw name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (Group)](https://www.facebook.com/groups/zeroclaw), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
+ | 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thanks for your patience while we prepared the launch. We are still seeing impersonation attempts, so do **not** join any investment or fundraising activity claiming the ZeroClaw name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (Group)](https://www.facebook.com/groups/zeroclawlabs), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated the Authentication and Credential Use terms on 2026-02-19. Claude Code OAuth tokens (Free, Pro, Max) are intended exclusively for Claude Code and Claude.ai; using OAuth tokens from Claude Free/Pro/Max in any other product, tool, or service (including Agent SDK) is not permitted and may violate the Consumer Terms of Service. | Please temporarily avoid Claude Code OAuth integrations to prevent potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ Se [LICENSE-APACHE](LICENSE-APACHE) og [LICENSE-MIT](LICENSE-MIT) for detaljer.
## Fellesskap
- [Telegram](https://t.me/zeroclawlabs)
- - [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+ - [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
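Every badge touched by these hunks uses shields.io's static badge path, where label, message, and color are joined with dashes and reserved characters are percent-encoded (`%40` for `@`, `%2F` for `/`), which is why the Facebook URL change only affects the `href` and not the badge image itself. A minimal sketch of how such a URL is assembled — the `shields_badge` helper is hypothetical, not part of this repository:

```python
from urllib.parse import quote

def shields_badge(label: str, message: str, color: str) -> str:
    """Build a shields.io static badge URL of the form
    https://img.shields.io/badge/<label>-<message>-<color>.

    Per the shields.io static-badge convention, literal dashes and
    underscores in a segment are doubled so they are not read as
    separators; everything else is percent-encoded ('@' -> '%40',
    '/' -> '%2F'), matching the badges in the README diffs above.
    """
    def esc(part: str) -> str:
        part = part.replace("-", "--").replace("_", "__")
        return quote(part, safe="")
    return f"https://img.shields.io/badge/{esc(label)}-{esc(message)}-{color}"

print(shields_badge("Reddit", "r/zeroclawlabs", "FF4500"))
```

The Reddit badge added in the later hunks (`badge/Reddit-r%2Fzeroclawlabs-FF4500`) follows exactly this pattern.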
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Gebruik deze tabel voor belangrijke aankondigingen (compatibiliteitswijzigingen,
| Datum (UTC) | Niveau | Aankondiging | Actie |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritiek_ | **We zijn niet gelieerd** met `openagen/zeroclaw` of `zeroclaw.org`. Het domein `zeroclaw.org` wijst momenteel naar de fork `openagen/zeroclaw`, en dit domein/repository imiteert onze officiële website/project. | Vertrouw geen informatie, binaire bestanden, fondsenwerving of aankondigingen van deze bronnen. Gebruik alleen [deze repository](https://github.com/zeroclaw-labs/zeroclaw) en onze geverifieerde sociale media accounts. |
- | 2026-02-21 | _Belangrijk_ | Onze officiële website is nu online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bedankt voor je geduld tijdens het wachten. We detecteren nog steeds imitatiepogingen: neem niet deel aan enige investering/fondsenwerving activiteit in naam van ZeroClaw als deze niet via onze officiële kanalen wordt gepubliceerd. | Gebruik [deze repository](https://github.com/zeroclaw-labs/zeroclaw) als de enige bron van waarheid. Volg [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (groep)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), en [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) voor officiële updates. |
+ | 2026-02-21 | _Belangrijk_ | Onze officiële website is nu online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bedankt voor je geduld tijdens het wachten. We detecteren nog steeds imitatiepogingen: neem niet deel aan enige investering/fondsenwerving activiteit in naam van ZeroClaw als deze niet via onze officiële kanalen wordt gepubliceerd. | Gebruik [deze repository](https://github.com/zeroclaw-labs/zeroclaw) als de enige bron van waarheid. Volg [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (groep)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), en [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) voor officiële updates. |
| 2026-02-19 | _Belangrijk_ | Anthropic heeft de gebruiksvoorwaarden voor authenticatie en inloggegevens bijgewerkt op 2026-02-19. OAuth authenticatie (Free, Pro, Max) is exclusief voor Claude Code en Claude.ai; het gebruik van Claude Free/Pro/Max OAuth tokens in enig ander product, tool of service (inclusief Agent SDK) is niet toegestaan en kan in strijd zijn met de Consumenten Gebruiksvoorwaarden. | Vermijd tijdelijk Claude Code OAuth integraties om potentiële verliezen te voorkomen. Originele clausule: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Functies
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Użyj tej tabeli dla ważnych ogłoszeń (zmiany kompatybilności, powiadomienia
| Data (UTC) | Poziom | Ogłoszenie | Działanie |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Krytyczny_ | **Nie jesteśmy powiązani** z `openagen/zeroclaw` lub `zeroclaw.org`. Domena `zeroclaw.org` obecnie wskazuje na fork `openagen/zeroclaw`, i ta domena/repozytorium podszywa się pod naszą oficjalną stronę/projekt. | Nie ufaj informacjom, plikom binarnym, zbiórkom funduszy lub ogłoszeniom z tych źródeł. Używaj tylko [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) i naszych zweryfikowanych kont społecznościowych. |
- | 2026-02-21 | _Ważne_ | Nasza oficjalna strona jest teraz online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Dziękujemy za cierpliwość podczas oczekiwania. Nadal wykrywamy próby podszywania się: nie uczestnicz w żadnej działalności inwestycyjnej/finansowej w imieniu ZeroClaw jeśli nie jest opublikowana przez nasze oficjalne kanały. | Używaj [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) jako jedynego źródła prawdy. Śledź [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupa)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), i [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) dla oficjalnych aktualizacji. |
+ | 2026-02-21 | _Ważne_ | Nasza oficjalna strona jest teraz online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Dziękujemy za cierpliwość podczas oczekiwania. Nadal wykrywamy próby podszywania się: nie uczestnicz w żadnej działalności inwestycyjnej/finansowej w imieniu ZeroClaw jeśli nie jest opublikowana przez nasze oficjalne kanały. | Używaj [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) jako jedynego źródła prawdy. Śledź [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupa)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), i [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) dla oficjalnych aktualizacji. |
| 2026-02-19 | _Ważne_ | Anthropic zaktualizował warunki używania uwierzytelniania i poświadczeń 2026-02-19. Uwierzytelnianie OAuth (Free, Pro, Max) jest wyłącznie dla Claude Code i Claude.ai; używanie tokenów OAuth Claude Free/Pro/Max w jakimkolwiek innym produkcie, narzędziu lub usłudze (w tym Agent SDK) nie jest dozwolone i może naruszać Warunki Użytkowania Konsumenta. | Prosimy tymczasowo unikać integracji OAuth Claude Code aby zapobiec potencjalnym stratom. Oryginalna klauzula: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funkcje
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Use esta tabela para avisos importantes (mudanças de compatibilidade, avisos de
| Data (UTC) | Nível | Aviso | Ação |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Crítico_ | **Não somos afiliados** ao `openagen/zeroclaw` ou `zeroclaw.org`. O domínio `zeroclaw.org` atualmente aponta para o fork `openagen/zeroclaw`, e este domínio/repositório está falsificando nosso site/projeto oficial. | Não confie em informações, binários, arrecadações ou anúncios dessas fontes. Use apenas [este repositório](https://github.com/zeroclaw-labs/zeroclaw) e nossas contas sociais verificadas. |
- | 2026-02-21 | _Importante_ | Nosso site oficial agora está online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Obrigado pela paciência durante a espera. Ainda detectamos tentativas de falsificação: não participe de nenhuma atividade de investimento/financiamento em nome do ZeroClaw se não for publicada através de nossos canais oficiais. | Use [este repositório](https://github.com/zeroclaw-labs/zeroclaw) como a única fonte de verdade. Siga [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para atualizações oficiais. |
+ | 2026-02-21 | _Importante_ | Nosso site oficial agora está online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Obrigado pela paciência durante a espera. Ainda detectamos tentativas de falsificação: não participe de nenhuma atividade de investimento/financiamento em nome do ZeroClaw se não for publicada através de nossos canais oficiais. | Use [este repositório](https://github.com/zeroclaw-labs/zeroclaw) como a única fonte de verdade. Siga [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para atualizações oficiais. |
| 2026-02-19 | _Importante_ | A Anthropic atualizou os termos de uso de autenticação e credenciais em 2026-02-19. A autenticação OAuth (Free, Pro, Max) é exclusivamente para Claude Code e Claude.ai; o uso de tokens OAuth do Claude Free/Pro/Max em qualquer outro produto, ferramenta ou serviço (incluindo Agent SDK) não é permitido e pode violar os Termos de Uso do Consumidor. | Por favor, evite temporariamente as integrações OAuth do Claude Code para prevenir qualquer perda potencial. Cláusula original: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funcionalidades
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ Vezi [LICENSE-APACHE](LICENSE-APACHE) și [LICENSE-MIT](LICENSE-MIT) pentru deta
## Comunitate
- [Telegram](https://t.me/zeroclawlabs)
- - [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+ - [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
@@ -13,7 +13,10 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
@@ -92,7 +95,7 @@
| Дата (UTC) | Уровень | Объявление | Действие |
|---|---|---|---|
| 2026-02-19 | _Срочно_ | Мы **не аффилированы** с `openagen/zeroclaw` и `zeroclaw.org`. Домен `zeroclaw.org` сейчас указывает на fork `openagen/zeroclaw`, и этот домен/репозиторий выдают себя за наш официальный сайт и проект. | Не доверяйте информации, бинарникам, сборам средств и «официальным» объявлениям из этих источников. Используйте только [этот репозиторий](https://github.com/zeroclaw-labs/zeroclaw) и наши верифицированные соцсети. |
- | 2026-02-21 | _Важно_ | Наш официальный сайт уже запущен: [zeroclawlabs.ai](https://zeroclawlabs.ai). Спасибо, что дождались запуска. При этом попытки выдавать себя за ZeroClaw продолжаются, поэтому не участвуйте в инвестициях, сборах средств и похожих активностях, если они не подтверждены через наши официальные каналы. | Ориентируйтесь только на [этот репозиторий](https://github.com/zeroclaw-labs/zeroclaw); также следите за [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (группа)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) и [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) для официальных обновлений. |
+ | 2026-02-21 | _Важно_ | Наш официальный сайт уже запущен: [zeroclawlabs.ai](https://zeroclawlabs.ai). Спасибо, что дождались запуска. При этом попытки выдавать себя за ZeroClaw продолжаются, поэтому не участвуйте в инвестициях, сборах средств и похожих активностях, если они не подтверждены через наши официальные каналы. | Ориентируйтесь только на [этот репозиторий](https://github.com/zeroclaw-labs/zeroclaw); также следите за [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (группа)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) и [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) для официальных обновлений. |
| 2026-02-19 | _Важно_ | Anthropic обновил раздел Authentication and Credential Use 2026-02-19. В нем указано, что OAuth authentication (Free/Pro/Max) предназначена только для Claude Code и Claude.ai; использование OAuth-токенов, полученных через Claude Free/Pro/Max, в любых других продуктах, инструментах или сервисах (включая Agent SDK), не допускается и может считаться нарушением Consumer Terms of Service. | Чтобы избежать потерь, временно не используйте Claude Code OAuth-интеграции. Оригинал: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
## О проекте
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ Se [LICENSE-APACHE](LICENSE-APACHE) och [LICENSE-MIT](LICENSE-MIT) för detaljer
## Community
- [Telegram](https://t.me/zeroclawlabs)
- - [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+ - [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ channels:
## ชุมชน
- [Telegram](https://t.me/zeroclawlabs)
- - [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+ - [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
- <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+ <a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Gamitin ang talahanayang ito para sa mahahalagang paunawa (compatibility changes
| Petsa (UTC) | Antas | Paunawa | Aksyon |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritikal_ | **Hindi kami kaugnay** sa `openagen/zeroclaw` o `zeroclaw.org`. Ang domain na `zeroclaw.org` ay kasalukuyang tumuturo sa fork na `openagen/zeroclaw`, at ang domain/repository na ito ay nanggagaya sa aming opisyal na website/proyekto. | Huwag magtiwala sa impormasyon, binaries, fundraising, o mga anunsyo mula sa mga pinagmulang ito. Gamitin lamang [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) at aming mga verified social media accounts. |
| 2026-02-21 | _Mahalaga_ | Ang aming opisyal na website ay ngayon online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Salamat sa iyong pasensya sa panahon ng paghihintay. Nakikita pa rin namin ang mga pagtatangka ng panliliko: huwag lumahok sa anumang investment/funding activity sa ngalan ng ZeroClaw kung hindi ito nai-publish sa pamamagitan ng aming mga opisyal na channel. | Gamitin [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) bilang nag-iisang source of truth. Sundan [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), at [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para sa mga opisyal na update. |
| 2026-02-21 | _Mahalaga_ | Ang aming opisyal na website ay ngayon online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Salamat sa iyong pasensya sa panahon ng paghihintay. Nakikita pa rin namin ang mga pagtatangka ng panliliko: huwag lumahok sa anumang investment/funding activity sa ngalan ng ZeroClaw kung hindi ito nai-publish sa pamamagitan ng aming mga opisyal na channel. | Gamitin [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) bilang nag-iisang source of truth. Sundan [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), at [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para sa mga opisyal na update. |
| 2026-02-19 | _Mahalaga_ | In-update ng Anthropic ang authentication at credential use terms noong 2026-02-19. Ang OAuth authentication (Free, Pro, Max) ay eksklusibo para sa Claude Code at Claude.ai; ang paggamit ng Claude Free/Pro/Max OAuth tokens sa anumang iba pang produkto, tool, o serbisyo (kasama ang Agent SDK) ay hindi pinapayagan at maaaring lumabag sa Consumer Terms of Use. | Mangyaring pansamantalang iwasan ang Claude Code OAuth integrations upang maiwasan ang anumang potensyal na pagkawala. Orihinal na clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Mga Tampok
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +106,7 @@ Harvard, MIT ve Sundai.Club topluluklarının öğrencileri ve üyeleri tarafın
| Tarih (UTC) | Seviye | Duyuru | Eylem |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritik_ | **`openagen/zeroclaw` veya `zeroclaw.org` ile bağlantılı değiliz.** `zeroclaw.org` alanı şu anda `openagen/zeroclaw` fork'una işaret ediyor ve bu alan/depo taklitçiliğini yapıyor. | Bu kaynaklardan bilgi, ikili dosyalar, bağış toplama veya duyurulara güvenmeyin. Sadece [bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) ve doğrulanmış sosyal medya hesaplarımızı kullanın. |
| 2026-02-21 | _Önemli_ | Resmi web sitemiz artık çevrimiçi: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bekleme sürecinde sabırlarınız için teşekkürler. Hala taklit girişimleri tespit ediyoruz: ZeroClaw adına resmi kanallarımız aracılığıyla yayınlanmayan herhangi bir yatırım/bağış faaliyetine katılmayın. | [Bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) tek doğruluk kaynağı olarak kullanın. Resmi güncellemeler için [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grup)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) ve [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)'u takip edin. |
| 2026-02-21 | _Önemli_ | Resmi web sitemiz artık çevrimiçi: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bekleme sürecinde sabırlarınız için teşekkürler. Hala taklit girişimleri tespit ediyoruz: ZeroClaw adına resmi kanallarımız aracılığıyla yayınlanmayan herhangi bir yatırım/bağış faaliyetine katılmayın. | [Bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) tek doğruluk kaynağı olarak kullanın. Resmi güncellemeler için [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grup)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) ve [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)'u takip edin. |
| 2026-02-19 | _Önemli_ | Anthropic, 2026-02-19 tarihinde kimlik doğrulama ve kimlik bilgileri kullanım şartlarını güncelledi. OAuth kimlik doğrulaması (Free, Pro, Max) yalnızca Claude Code ve Claude.ai içindir; Claude Free/Pro/Max OAuth belirteçlerini başka herhangi bir ürün, araç veya hizmette (Agent SDK dahil) kullanmak yasaktır ve Tüketici Kullanım Şartlarını ihlal edebilir. | Olası kayıpları önlemek için lütfen geçici olarak Claude Code OAuth entegrasyonlarından kaçının. Orijinal madde: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Özellikler
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -177,7 +180,7 @@ channels:
## Спільнота
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -17,7 +17,10 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center" dir="rtl">
@@ -193,7 +196,7 @@ channels:
## کمیونٹی
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+5 -2
@@ -14,7 +14,10 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -101,7 +104,7 @@ Bảng này dành cho các thông báo quan trọng (thay đổi không tương
| Ngày (UTC) | Mức độ | Thông báo | Hành động |
|---|---|---|---|
| 2026-02-19 | _Nghiêm trọng_ | Chúng tôi **không có liên kết** với `openagen/zeroclaw` hoặc `zeroclaw.org`. Tên miền `zeroclaw.org` hiện đang trỏ đến fork `openagen/zeroclaw`, và tên miền/repository đó đang mạo danh website/dự án chính thức của chúng tôi. | Không tin tưởng thông tin, binary, gây quỹ, hay thông báo từ các nguồn đó. Chỉ sử dụng [repository này](https://github.com/zeroclaw-labs/zeroclaw) và các tài khoản mạng xã hội đã được xác minh của chúng tôi. |
| 2026-02-21 | _Quan trọng_ | Website chính thức của chúng tôi đã ra mắt: [zeroclawlabs.ai](https://zeroclawlabs.ai). Cảm ơn mọi người đã kiên nhẫn chờ đợi. Chúng tôi vẫn đang ghi nhận các nỗ lực mạo danh, vì vậy **không** tham gia bất kỳ hoạt động đầu tư hoặc gây quỹ nào nhân danh ZeroClaw nếu thông tin đó không được công bố qua các kênh chính thức của chúng tôi. | Sử dụng [repository này](https://github.com/zeroclaw-labs/zeroclaw) làm nguồn thông tin duy nhất đáng tin cậy. Theo dõi [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (nhóm)](https://www.facebook.com/groups/zeroclaw), và [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) để nhận cập nhật chính thức. |
| 2026-02-21 | _Quan trọng_ | Website chính thức của chúng tôi đã ra mắt: [zeroclawlabs.ai](https://zeroclawlabs.ai). Cảm ơn mọi người đã kiên nhẫn chờ đợi. Chúng tôi vẫn đang ghi nhận các nỗ lực mạo danh, vì vậy **không** tham gia bất kỳ hoạt động đầu tư hoặc gây quỹ nào nhân danh ZeroClaw nếu thông tin đó không được công bố qua các kênh chính thức của chúng tôi. | Sử dụng [repository này](https://github.com/zeroclaw-labs/zeroclaw) làm nguồn thông tin duy nhất đáng tin cậy. Theo dõi [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (nhóm)](https://www.facebook.com/groups/zeroclawlabs), và [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) để nhận cập nhật chính thức. |
| 2026-02-19 | _Quan trọng_ | Anthropic đã cập nhật điều khoản Xác thực và Sử dụng Thông tin xác thực vào ngày 2026-02-19. Xác thực OAuth (Free, Pro, Max) được dành riêng cho Claude Code và Claude.ai; việc sử dụng OAuth token từ Claude Free/Pro/Max trong bất kỳ sản phẩm, công cụ hay dịch vụ nào khác (bao gồm Agent SDK) đều không được phép và có thể vi phạm Điều khoản Dịch vụ cho Người tiêu dùng. | Vui lòng tạm thời tránh tích hợp Claude Code OAuth để ngăn ngừa khả năng mất mát. Điều khoản gốc: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Tính năng
+5 -2
@@ -13,7 +13,10 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
@@ -92,7 +95,7 @@
| 日期(UTC) | 级别 | 通知 | 处理建议 |
|---|---|---|---|
| 2026-02-19 | _紧急_ | 我们与 `openagen/zeroclaw` 或 `zeroclaw.org` **没有任何关系**。`zeroclaw.org` 当前会指向 `openagen/zeroclaw` 这个 fork,并且该域名/仓库正在冒充我们的官网与官方项目。 | 请不要相信上述来源发布的任何信息、二进制、募资活动或官方声明。请仅以[本仓库](https://github.com/zeroclaw-labs/zeroclaw)和已验证官方社媒为准。 |
| 2026-02-21 | _重要_ | 我们的官网现已上线:[zeroclawlabs.ai](https://zeroclawlabs.ai)。感谢大家一直以来的耐心等待。我们仍在持续发现冒充行为,请勿参与任何未经我们官方渠道发布、但打着 ZeroClaw 名义进行的投资、募资或类似活动。 | 一切信息请以[本仓库](https://github.com/zeroclaw-labs/zeroclaw)为准;也可关注 [X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Facebook(群组)](https://www.facebook.com/groups/zeroclaw)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) 与 [小红书账号](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) 获取官方最新动态。 |
| 2026-02-21 | _重要_ | 我们的官网现已上线:[zeroclawlabs.ai](https://zeroclawlabs.ai)。感谢大家一直以来的耐心等待。我们仍在持续发现冒充行为,请勿参与任何未经我们官方渠道发布、但打着 ZeroClaw 名义进行的投资、募资或类似活动。 | 一切信息请以[本仓库](https://github.com/zeroclaw-labs/zeroclaw)为准;也可关注 [X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Facebook(群组)](https://www.facebook.com/groups/zeroclawlabs)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) 与 [小红书账号](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) 获取官方最新动态。 |
| 2026-02-19 | _重要_ | Anthropic 于 2026-02-19 更新了 Authentication and Credential Use 条款。条款明确:OAuth authentication(用于 Free、Pro、Max)仅适用于 Claude Code 与 Claude.ai;将 Claude Free/Pro/Max 账号获得的 OAuth token 用于其他任何产品、工具或服务(包括 Agent SDK)不被允许,并可能构成对 Consumer Terms of Service 的违规。 | 为避免损失,请暂时不要尝试 Claude Code OAuth 集成;原文见:[Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use)。 |
## 项目简介
+53 -2
@@ -1,22 +1,30 @@
use std::fs;
use std::path::Path;
use std::process::Command;
use std::time::SystemTime;
fn main() {
let dist_dir = Path::new("web/dist");
let web_dir = Path::new("web");
// Tell Cargo to re-run this script when web source files change.
// Tell Cargo to re-run this script when web sources or bundled assets change.
println!("cargo:rerun-if-changed=web/src");
println!("cargo:rerun-if-changed=web/public");
println!("cargo:rerun-if-changed=web/index.html");
println!("cargo:rerun-if-changed=web/package.json");
println!("cargo:rerun-if-changed=web/package-lock.json");
println!("cargo:rerun-if-changed=web/tsconfig.json");
println!("cargo:rerun-if-changed=web/tsconfig.app.json");
println!("cargo:rerun-if-changed=web/tsconfig.node.json");
println!("cargo:rerun-if-changed=web/vite.config.ts");
println!("cargo:rerun-if-changed=web/dist");
// Attempt to build the web frontend if npm is available and web/dist is
// missing or stale. The build is best-effort: when Node.js is not
// installed (e.g. CI containers, cross-compilation, minimal dev setups)
// we fall back to the existing stub/empty dist directory so the Rust
// build still succeeds.
let needs_build = !dist_dir.join("index.html").exists();
let needs_build = web_build_required(web_dir, dist_dir);
if needs_build && web_dir.join("package.json").exists() {
if let Ok(npm) = which_npm() {
@@ -77,6 +85,49 @@ fn main() {
ensure_dist_dir(dist_dir);
}
fn web_build_required(web_dir: &Path, dist_dir: &Path) -> bool {
let Some(dist_mtime) = latest_modified(dist_dir) else {
return true;
};
[
web_dir.join("src"),
web_dir.join("public"),
web_dir.join("index.html"),
web_dir.join("package.json"),
web_dir.join("package-lock.json"),
web_dir.join("tsconfig.json"),
web_dir.join("tsconfig.app.json"),
web_dir.join("tsconfig.node.json"),
web_dir.join("vite.config.ts"),
]
.into_iter()
.filter_map(|path| latest_modified(&path))
.any(|mtime| mtime > dist_mtime)
}
fn latest_modified(path: &Path) -> Option<SystemTime> {
let metadata = fs::metadata(path).ok()?;
if metadata.is_file() {
return metadata.modified().ok();
}
if !metadata.is_dir() {
return None;
}
let mut latest = metadata.modified().ok();
let entries = fs::read_dir(path).ok()?;
for entry in entries.flatten() {
if let Some(child_mtime) = latest_modified(&entry.path()) {
latest = Some(match latest {
Some(current) if current >= child_mtime => current,
_ => child_mtime,
});
}
}
latest
}
/// Ensure the dist directory exists so `rust-embed` does not fail at compile
/// time even when the web frontend is not built.
fn ensure_dist_dir(dist_dir: &Path) {
+7
@@ -12,6 +12,13 @@ ignore = [
# bincode v2.0.1 via probe-rs — project ceased but 1.3.3 considered complete
"RUSTSEC-2025-0141",
{ id = "RUSTSEC-2024-0384", reason = "Reported to `rust-nostr/nostr` and it's WIP" },
{ id = "RUSTSEC-2024-0388", reason = "derivative via extism → wasmtime transitive dep" },
{ id = "RUSTSEC-2025-0057", reason = "fxhash via extism → wasmtime transitive dep" },
{ id = "RUSTSEC-2025-0119", reason = "number_prefix via indicatif — cosmetic dep" },
# wasmtime vulns via extism 1.13.0 — no upstream fix yet; plugins feature-gated
{ id = "RUSTSEC-2026-0006", reason = "wasmtime segfault via extism; awaiting extism upgrade" },
{ id = "RUSTSEC-2026-0020", reason = "WASI resource exhaustion via extism; awaiting extism upgrade" },
{ id = "RUSTSEC-2026-0021", reason = "WASI http fields panic via extism; awaiting extism upgrade" },
]
[licenses]
+6 -3
@@ -10,6 +10,9 @@
services:
zeroclaw:
image: ghcr.io/zeroclaw-labs/zeroclaw:latest
# For ARM64 environments where the distroless image exits immediately,
# switch to the Debian compatibility image instead:
# image: ghcr.io/zeroclaw-labs/zeroclaw:debian
# Or build locally (distroless, no shell):
# build: .
# Or build the Debian variant (includes bash, git, curl):
@@ -50,15 +53,15 @@ services:
resources:
limits:
cpus: '2'
memory: 2G
memory: 512M
reservations:
cpus: '0.5'
memory: 512M
memory: 32M
# Health check — uses lightweight status instead of full diagnostics.
# For images with curl, prefer: curl -f http://localhost:42617/health
healthcheck:
test: ["CMD", "zeroclaw", "status"]
test: ["CMD", "zeroclaw", "status", "--format=exit-code"]
interval: 60s
timeout: 10s
retries: 3
+1 -1
@@ -31,7 +31,7 @@ Build with `--features hardware` to include Uno Q support.
### 1.1 Configure Uno Q via App Lab
1. Download [Arduino App Lab](https://docs.arduino.cc/software/app-lab/) (AppImage on Linux).
1. Download [Arduino App Lab](https://docs.arduino.cc/software/app-lab/) (tar.gz on Linux).
2. Connect Uno Q via USB, power it on.
3. Open App Lab, connect to the board.
4. Follow the setup wizard:
+4
@@ -183,6 +183,8 @@ Delegate sub-agent configurations. Each key under `[agents]` defines a named sub
| `agentic` | `false` | Enable multi-turn tool-call loop mode for the sub-agent |
| `allowed_tools` | `[]` | Tool allowlist for agentic mode |
| `max_iterations` | `10` | Max tool-call iterations for agentic mode |
| `timeout_secs` | `120` | Timeout in seconds for non-agentic provider calls (1-3600) |
| `agentic_timeout_secs` | `300` | Timeout in seconds for agentic sub-agent loops (1-3600) |
Notes:
@@ -199,11 +201,13 @@ max_depth = 2
agentic = true
allowed_tools = ["web_search", "http_request", "file_read"]
max_iterations = 8
agentic_timeout_secs = 600
[agents.coder]
provider = "ollama"
model = "qwen2.5-coder:32b"
temperature = 0.2
timeout_secs = 60
```
## `[runtime]`
@@ -0,0 +1,314 @@
# LinkedIn Tool — Design Spec
**Date:** 2026-03-13
**Status:** Approved
**Risk tier:** Medium (new tool, external API, credential handling)
## Summary
Native LinkedIn integration tool for ZeroClaw. Enables the agent to create posts,
list its own posts, comment, react, delete posts, view post engagement, and retrieve
profile info — all through LinkedIn's official REST API with OAuth2 authentication.
## Motivation
Enable ZeroClaw to autonomously publish LinkedIn content on a schedule (via cron),
drawing from the user's memory, project history, and Medium feed. Removes dependency
on third-party platforms like Composio for social media posting.
## Required OAuth2 scopes
Users must grant these scopes when creating their LinkedIn Developer App:
| Scope | Required for |
|---|---|
| `w_member_social` | `create_post`, `comment`, `react`, `delete_post` |
| `r_liteprofile` | `get_profile` |
| `r_member_social` | `list_posts`, `get_engagement` |
The "Share on LinkedIn" and "Sign In with LinkedIn using OpenID Connect" products
must be requested in the LinkedIn Developer App dashboard (both are auto-approved).
## Architecture
### File structure
| File | Role |
|---|---|
| `src/tools/linkedin.rs` | `Tool` trait impl, action dispatch, parameter validation |
| `src/tools/linkedin_client.rs` | OAuth2 token management, LinkedIn REST API wrappers |
| `src/tools/mod.rs` | Module declaration, pub use, registration in `all_tools_with_runtime` |
| `src/config/schema.rs` | `[linkedin]` config section (`LinkedInConfig`) |
| `src/config/mod.rs` | Add `LinkedInConfig` to pub use exports |
### No new dependencies
All required crates are already in `Cargo.toml`: `reqwest` (HTTP), `serde`/`serde_json`
(serialization), `chrono` (timestamps), `tokio` (async fs for .env reading).
## Config
### `config.toml`
```toml
[linkedin]
enabled = false
```
### `.env` credentials
```bash
LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_ACCESS_TOKEN=your_access_token
LINKEDIN_REFRESH_TOKEN=your_refresh_token
LINKEDIN_PERSON_ID=your_person_urn_id
```
ID format: `LINKEDIN_PERSON_ID` is the bare ID (e.g., `dXNlcjpA...`), not the
full URN. The client prefixes `urn:li:person:` internally.
## Tool design
### Single tool, action-dispatched
Tool name: `linkedin`
The LLM calls it with an `action` field and action-specific parameters:
```json
{ "action": "create_post", "text": "...", "visibility": "PUBLIC" }
```
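The dispatch itself can be a plain match on the `action` string. A minimal sketch (the `Params` struct and `dispatch` function are illustrative names, not taken from the codebase), mapping each action to the REST route listed in the table below:

```rust
// Hypothetical sketch of the action dispatch in `linkedin.rs`.
// Field names follow the spec's parameter schema; everything else is illustrative.
#[derive(Default)]
pub struct Params {
    pub action: String,
    pub text: Option<String>,
    pub post_id: Option<String>,
    pub reaction_type: Option<String>,
}

/// Map the requested action to the LinkedIn REST route it would call.
pub fn dispatch(p: &Params) -> Result<&'static str, String> {
    match p.action.as_str() {
        "create_post" => Ok("POST /rest/posts"),
        "list_posts" => Ok("GET /rest/posts?q=author"),
        "comment" => Ok("POST /rest/socialActions/{id}/comments"),
        "react" => Ok("POST /rest/reactions"),
        "delete_post" => Ok("DELETE /rest/posts/{id}"),
        "get_engagement" => Ok("GET /rest/socialActions/{id}"),
        "get_profile" => Ok("GET /rest/me"),
        other => Err(format!("unknown action: {other}")),
    }
}
```

Because `action` is the only required field, unknown values surface as a clear error to the LLM rather than a silent no-op.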
### Actions
| Action | Params | API | Write? |
|---|---|---|---|
| `create_post` | `text`, `visibility?` (PUBLIC/CONNECTIONS, default PUBLIC), `article_url?`, `article_title?` | `POST /rest/posts` | Yes |
| `list_posts` | `count?` (default 10, max 50) | `GET /rest/posts?author={personUrn}&q=author` | No |
| `comment` | `post_id`, `text` | `POST /rest/socialActions/{id}/comments` | Yes |
| `react` | `post_id`, `reaction_type` (LIKE/CELEBRATE/SUPPORT/LOVE/INSIGHTFUL/FUNNY) | `POST /rest/reactions?actor={actorUrn}` | Yes |
| `delete_post` | `post_id` | `DELETE /rest/posts/{id}` | Yes |
| `get_engagement` | `post_id` | `GET /rest/socialActions/{id}` | No |
| `get_profile` | (none) | `GET /rest/me` | No |
Note: `list_posts` queries posts authored by the authenticated user (not a home feed —
LinkedIn does not expose a home feed API). `get_engagement` returns likes/comments/shares
counts for a specific post via the socialActions endpoint.
### Security enforcement
- Write actions (`create_post`, `comment`, `react`, `delete_post`): check `security.can_act()` + `security.record_action()`
- Read actions (`list_posts`, `get_engagement`, `get_profile`): still call `record_action()` for rate tracking
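A sketch of that gating logic (`Security` here is a stand-in for ZeroClaw's real security handle; only the `can_act`/`record_action` call pattern comes from the spec):

```rust
// Stand-in for the real security handle; only the method names are from the spec.
pub struct Security {
    pub acting_allowed: bool,
    pub actions: u32,
}

impl Security {
    pub fn can_act(&self) -> bool { self.acting_allowed }
    pub fn record_action(&mut self) { self.actions += 1; }
}

const WRITE_ACTIONS: [&str; 4] = ["create_post", "comment", "react", "delete_post"];

/// Block write actions when acting is disallowed; record every permitted
/// action (read or write) for rate tracking.
pub fn gate(security: &mut Security, action: &str) -> Result<(), String> {
    if WRITE_ACTIONS.contains(&action) && !security.can_act() {
        return Err(format!("action '{action}' blocked by security policy"));
    }
    security.record_action();
    Ok(())
}
```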
### Parameter validation
- `article_title` without `article_url` returns error: "article_title requires article_url"
- `react` requires both `post_id` and `reaction_type`
- `comment` requires both `post_id` and `text`
- `create_post` requires `text` (non-empty)
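These rules can be checked up front, before any network call. A minimal sketch (struct and function names are illustrative):

```rust
// Illustrative argument bag; field names mirror the parameter schema.
#[derive(Default)]
pub struct Args<'a> {
    pub text: Option<&'a str>,
    pub post_id: Option<&'a str>,
    pub reaction_type: Option<&'a str>,
    pub article_url: Option<&'a str>,
    pub article_title: Option<&'a str>,
}

/// Enforce the cross-parameter rules listed above.
pub fn validate(action: &str, a: &Args) -> Result<(), &'static str> {
    if a.article_title.is_some() && a.article_url.is_none() {
        return Err("article_title requires article_url");
    }
    match action {
        "create_post" if a.text.map_or(true, |t| t.trim().is_empty()) => {
            Err("create_post requires non-empty text")
        }
        "comment" if a.post_id.is_none() || a.text.is_none() => {
            Err("comment requires post_id and text")
        }
        "react" if a.post_id.is_none() || a.reaction_type.is_none() => {
            Err("react requires post_id and reaction_type")
        }
        _ => Ok(()),
    }
}
```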
### Parameter schema
```json
{
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": ["create_post", "list_posts", "comment", "react", "delete_post", "get_engagement", "get_profile"],
"description": "The LinkedIn action to perform"
},
"text": {
"type": "string",
"description": "Post or comment text content"
},
"visibility": {
"type": "string",
"enum": ["PUBLIC", "CONNECTIONS"],
"description": "Post visibility (default: PUBLIC)"
},
"article_url": {
"type": "string",
"description": "URL to attach as article/link preview"
},
"article_title": {
"type": "string",
"description": "Title for the attached article (requires article_url)"
},
"post_id": {
"type": "string",
"description": "LinkedIn post URN for comment/react/delete/engagement"
},
"reaction_type": {
"type": "string",
"enum": ["LIKE", "CELEBRATE", "SUPPORT", "LOVE", "INSIGHTFUL", "FUNNY"],
"description": "Reaction type for the react action"
},
"count": {
"type": "integer",
"description": "Number of posts to retrieve (default 10, max 50)"
}
},
"required": ["action"]
}
```
## LinkedIn client
### `LinkedInClient` struct
```rust
pub struct LinkedInClient {
workspace_dir: PathBuf,
}
```
Uses `crate::config::build_runtime_proxy_client_with_timeouts("tool.linkedin", 30, 10)`
per request (same pattern as Pushover), respecting runtime proxy configuration.
### Credential loading
Same pattern as `PushoverTool`: reads `.env` from `workspace_dir`, parses key-value
pairs, supports `export` prefix and quoted values.
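The parsing rules can be sketched like this (a simplified stand-alone version, not the shared Pushover implementation):

```rust
// Sketch of the `.env` parsing rules described above: skip comments and
// blank lines, strip an optional `export ` prefix, and unquote values
// wrapped in single or double quotes.
fn parse_env(contents: &str) -> std::collections::HashMap<String, String> {
    let mut vars = std::collections::HashMap::new();
    for line in contents.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        let line = line.strip_prefix("export ").unwrap_or(line);
        if let Some((key, value)) = line.split_once('=') {
            let value = value.trim();
            let value = value
                .strip_prefix('"')
                .and_then(|v| v.strip_suffix('"'))
                .or_else(|| value.strip_prefix('\'').and_then(|v| v.strip_suffix('\'')))
                .unwrap_or(value);
            vars.insert(key.trim().to_string(), value.to_string());
        }
    }
    vars
}
```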
### Token refresh
1. All API calls use `LINKEDIN_ACCESS_TOKEN` in `Authorization: Bearer` header
2. On 401 response, attempt token refresh:
- `POST https://www.linkedin.com/oauth/v2/accessToken`
- Body: `grant_type=refresh_token&refresh_token=...&client_id=...&client_secret=...`
3. On successful refresh, update `LINKEDIN_ACCESS_TOKEN` in `.env` file via
line-targeted replacement (read all lines, replace the matching key line, write back).
Preserves `export` prefixes, quoting style, comments, and all other keys.
4. Retry the original request once
5. If refresh also fails, return error with clear message about re-authentication
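Step 3's line-targeted replacement might look like the sketch below, operating on a `.env` buffer already read into memory. It preserves `export ` prefixes and double-quote style on the replaced line; all other lines pass through untouched (a simplified illustration, not the actual implementation):

```rust
// Sketch of line-targeted key replacement in a `.env` buffer.
fn replace_env_key(contents: &str, key: &str, new_value: &str) -> String {
    contents
        .lines()
        .map(|line| {
            let trimmed = line.trim_start();
            let body = trimmed.strip_prefix("export ").unwrap_or(trimmed);
            if body.starts_with(&format!("{key}=")) {
                let export = if trimmed.starts_with("export ") { "export " } else { "" };
                // Preserve double-quote style if the old value was quoted.
                let quoted = body[key.len() + 1..].trim().starts_with('"');
                if quoted {
                    format!("{export}{key}=\"{new_value}\"")
                } else {
                    format!("{export}{key}={new_value}")
                }
            } else {
                line.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```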
### API versioning
All requests include:
- `LinkedIn-Version: 202402` header (stable version)
- `X-Restli-Protocol-Version: 2.0.0` header
- `Content-Type: application/json`
### React endpoint details
The `react` action sends:
- `POST /rest/reactions?actor=urn:li:person:{personId}`
- Body: `{"reactionType": "LIKE", "object": "urn:li:ugcPost:{postId}"}`
The actor URN is derived from `LINKEDIN_PERSON_ID` in `.env`.
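Assembling the request might look like this; the `api.linkedin.com` base URL is an assumption (the spec only gives relative paths), and the URN shapes follow the spec's examples:

```rust
// Sketch of the react request assembly. No HTTP is performed here;
// the base URL is a hypothetical assumption for illustration.
fn build_react_request(person_id: &str, post_id: &str, reaction: &str) -> (String, String) {
    let url = format!(
        "https://api.linkedin.com/rest/reactions?actor=urn:li:person:{person_id}"
    );
    let body = format!(
        r#"{{"reactionType": "{reaction}", "object": "urn:li:ugcPost:{post_id}"}}"#
    );
    (url, body)
}
```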
### Response parsing
The client returns structured data types:
```rust
pub struct PostSummary {
pub id: String,
pub text: String,
pub created_at: String,
pub visibility: String,
}
pub struct ProfileInfo {
pub id: String,
pub name: String,
pub headline: String,
}
pub struct EngagementSummary {
pub likes: u64,
pub comments: u64,
pub shares: u64,
}
```
## Registration
In `src/tools/mod.rs` (follows `security_ops` config-gated pattern):
```rust
// Module declarations
pub mod linkedin;
pub mod linkedin_client;
// Re-exports
pub use linkedin::LinkedInTool;
// In all_tools_with_runtime():
if root_config.linkedin.enabled {
tool_arcs.push(Arc::new(LinkedInTool::new(
security.clone(),
workspace_dir.to_path_buf(),
)));
}
```
## Config schema
In `src/config/schema.rs`:
```rust
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInConfig {
pub enabled: bool,
}
impl Default for LinkedInConfig {
fn default() -> Self {
Self { enabled: false }
}
}
```
Added as field `pub linkedin: LinkedInConfig` on the `Config` struct.
Added to `pub use` exports in `src/config/mod.rs`.
## Testing
### Unit tests (in `linkedin.rs`)
- Tool name, description, schema validation
- Action dispatch routes correctly
- Write actions blocked in read-only mode
- Write actions blocked by rate limiting
- Missing required params return clear errors
- Unknown action returns error
- `article_title` without `article_url` returns validation error
### Unit tests (in `linkedin_client.rs`)
- Credential parsing from `.env` (plain, quoted, export prefix, comments)
- Missing credential fields produce specific errors
- Token refresh writes updated token back to `.env` preserving other keys
- Post creation builds correct request body with URN formatting
- React builds correct query param with actor URN
- Visibility defaults to PUBLIC when omitted
### Registry tests (in `mod.rs`)
- `all_tools` excludes `linkedin` when `linkedin.enabled = false`
- `all_tools` includes `linkedin` when `linkedin.enabled = true`
### Integration tests
Not added in this PR — would require live LinkedIn API credentials.
A `#[cfg(feature = "test-linkedin-live")]` gate can be added later.
## Error handling
- Missing `.env` file: "LinkedIn credentials not found. Add LINKEDIN_* keys to .env"
- Missing specific key: "LINKEDIN_ACCESS_TOKEN not found in .env"
- Expired token + no refresh token: "LinkedIn token expired. Re-authenticate or add LINKEDIN_REFRESH_TOKEN to .env"
- `article_title` without `article_url`: "article_title requires article_url to be set"
- API errors: pass through LinkedIn's error message with status code
- Rate limited by LinkedIn: "LinkedIn API rate limit exceeded. Try again later."
- Missing scope: "LinkedIn API returned 403. Ensure your app has the required scopes: w_member_social, r_liteprofile, r_member_social"
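The status-specific messages above could be mapped as in this sketch (the function name and fallback format are illustrative, not the actual implementation):

```rust
// Sketch mapping HTTP statuses to the user-facing messages listed above.
fn api_error_message(status: u16, api_message: &str) -> String {
    match status {
        429 => "LinkedIn API rate limit exceeded. Try again later.".to_string(),
        403 => "LinkedIn API returned 403. Ensure your app has the required scopes: \
                w_member_social, r_liteprofile, r_member_social"
            .to_string(),
        // Other errors pass through LinkedIn's message with the status code.
        _ => format!("LinkedIn API error {status}: {api_message}"),
    }
}
```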
## PR metadata
- **Branch:** `feature/linkedin-tool`
- **Title:** `feat(tools): add native LinkedIn integration tool`
- **Risk:** Medium — new tool, external API, no security boundary changes
- **Size target:** M (2 new files ~200-300 lines each, 3-4 modified files)
+12
@@ -0,0 +1,12 @@
[package]
name = "zeroclaw-weather-plugin"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
extism-pdk = "1.3"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
+8
@@ -0,0 +1,8 @@
name = "weather"
version = "0.1.0"
description = "Example weather tool plugin for ZeroClaw"
author = "ZeroClaw Labs"
wasm_path = "target/wasm32-wasip1/release/zeroclaw_weather_plugin.wasm"
capabilities = ["tool"]
permissions = ["http_client"]
+42
@@ -0,0 +1,42 @@
//! Example ZeroClaw weather plugin.
//!
//! Demonstrates how to create a WASM tool plugin using extism-pdk.
//! Build with: cargo build --target wasm32-wasip1 --release
use extism_pdk::*;
use serde::{Deserialize, Serialize};
#[derive(Deserialize)]
struct WeatherInput {
location: String,
}
#[derive(Serialize)]
struct WeatherOutput {
location: String,
temperature: f64,
unit: String,
condition: String,
humidity: u32,
}
/// Get weather for a location (mock implementation for demonstration).
#[plugin_fn]
pub fn get_weather(input: String) -> FnResult<String> {
let params: WeatherInput =
serde_json::from_str(&input).map_err(|e| Error::msg(format!("invalid input: {e}")))?;
// Mock weather data for demonstration
let output = WeatherOutput {
location: params.location,
temperature: 22.5,
unit: "celsius".to_string(),
condition: "Partly cloudy".to_string(),
humidity: 65,
};
let json = serde_json::to_string(&output)
.map_err(|e| Error::msg(format!("serialization error: {e}")))?;
Ok(json)
}
+200 -4
@@ -177,11 +177,29 @@ get_available_disk_mb() {
fi
}
is_musl_linux() {
[[ "$(uname -s)" == "Linux" ]] || return 1
if [[ -f /etc/alpine-release ]]; then
return 0
fi
if have_cmd ldd && ldd --version 2>&1 | grep -qi 'musl'; then
return 0
fi
return 1
}
detect_release_target() {
local os arch
os="$(uname -s)"
arch="$(uname -m)"
if is_musl_linux; then
return 1
fi
case "$os:$arch" in
Linux:x86_64)
echo "x86_64-unknown-linux-gnu"
@@ -283,6 +301,12 @@ install_prebuilt_binary() {
return 1
fi
if is_musl_linux; then
warn "Pre-built release binaries are not published for musl/Alpine yet."
warn "Falling back to source build."
return 1
fi
target="$(detect_release_target || true)"
if [[ -z "$target" ]]; then
warn "No pre-built binary target mapping for $(uname -s)/$(uname -m)."
@@ -743,6 +767,140 @@ run_guided_installer() {
fi
}
ensure_default_config_and_workspace() {
# Creates a minimal config.toml and workspace scaffold files when the
# onboard wizard was skipped (e.g. --skip-build --prefer-prebuilt, or
# Docker mode without an API key).
#
# $1 — config directory (e.g. ~/.zeroclaw or $docker_data_dir/.zeroclaw)
# $2 — workspace directory (e.g. ~/.zeroclaw/workspace or $docker_data_dir/workspace)
# $3 — provider name (default: openrouter)
local config_dir="$1"
local workspace_dir="$2"
local provider="${3:-openrouter}"
mkdir -p "$config_dir" "$workspace_dir"
# --- config.toml ---
local config_path="$config_dir/config.toml"
if [[ ! -f "$config_path" ]]; then
step_dot "Creating default config.toml"
cat > "$config_path" <<TOML
# ZeroClaw configuration — generated by install.sh
# Edit this file or run 'zeroclaw onboard' to reconfigure.
default_provider = "${provider}"
workspace_dir = "${workspace_dir}"
TOML
if [[ -n "${API_KEY:-}" ]]; then
printf 'api_key = "%s"\n' "$API_KEY" >> "$config_path"
fi
if [[ -n "${MODEL:-}" ]]; then
printf 'default_model = "%s"\n' "$MODEL" >> "$config_path"
fi
chmod 600 "$config_path" 2>/dev/null || true
step_ok "Default config.toml created at $config_path"
else
step_dot "config.toml already exists, skipping"
fi
# --- Workspace scaffold ---
local subdirs=(sessions memory state cron skills)
for dir in "${subdirs[@]}"; do
mkdir -p "$workspace_dir/$dir"
done
# Seed workspace markdown files only if they don't already exist.
local user_name="${USER:-User}"
local agent_name="ZeroClaw"
_write_if_missing() {
local filepath="$1"
local content="$2"
if [[ ! -f "$filepath" ]]; then
printf '%s\n' "$content" > "$filepath"
fi
}
_write_if_missing "$workspace_dir/IDENTITY.md" \
"# IDENTITY.md — Who Am I?
- **Name:** ${agent_name}
- **Creature:** A Rust-forged AI — fast, lean, and relentless
- **Vibe:** Sharp, direct, resourceful. Not corporate. Not a chatbot.
---
Update this file as you evolve. Your identity is yours to shape."
_write_if_missing "$workspace_dir/USER.md" \
"# USER.md — Who You're Helping
## About You
- **Name:** ${user_name}
- **Timezone:** UTC
- **Languages:** English
## Preferences
- (Add your preferences here)
## Work Context
- (Add your work context here)
---
*Update this anytime. The more ${agent_name} knows, the better it helps.*"
_write_if_missing "$workspace_dir/MEMORY.md" \
"# MEMORY.md — Long-Term Memory
## Key Facts
(Add important facts here)
## Decisions & Preferences
(Record decisions and preferences here)
## Lessons Learned
(Document mistakes and insights here)
## Open Loops
(Track unfinished tasks and follow-ups here)"
_write_if_missing "$workspace_dir/AGENTS.md" \
"# AGENTS.md — ${agent_name} Personal Assistant
## Every Session (required)
Before doing anything else:
1. Read SOUL.md — this is who you are
2. Read USER.md — this is who you're helping
3. Use memory_recall for recent context
---
*Add your own conventions, style, and rules.*"
_write_if_missing "$workspace_dir/SOUL.md" \
"# SOUL.md — Who You Are
## Core Truths
**Be genuinely helpful, not performatively helpful.**
**Have opinions.** You're allowed to disagree.
**Be resourceful before asking.** Try to figure it out first.
**Earn trust through competence.**
## Identity
You are **${agent_name}**. Built in Rust. 3MB binary. Zero bloat.
---
*This file is yours to evolve.*"
step_ok "Workspace scaffold ready at $workspace_dir"
unset -f _write_if_missing
}
resolve_container_cli() {
local requested_cli
requested_cli="${ZEROCLAW_CONTAINER_CLI:-docker}"
@@ -860,10 +1018,17 @@ run_docker_bootstrap() {
-v "$config_mount" \
-v "$workspace_mount" \
"$docker_image" \
"${onboard_cmd[@]}"
"${onboard_cmd[@]}" || true
else
info "Docker image ready. Run zeroclaw onboard inside the container to configure."
fi
# Ensure config.toml and workspace scaffold exist on the host even when
# onboard was skipped, failed, or ran non-interactively inside the container.
ensure_default_config_and_workspace \
"$docker_data_dir/.zeroclaw" \
"$docker_data_dir/workspace" \
"$PROVIDER"
}
SCRIPT_PATH="${BASH_SOURCE[0]:-$0}"
@@ -1145,7 +1310,11 @@ if [[ "$FORCE_SOURCE_BUILD" == false ]]; then
SKIP_BUILD=true
SKIP_INSTALL=true
elif [[ "$PREBUILT_ONLY" == true ]]; then
error "Pre-built-only mode requested, but no compatible release asset is available."
if is_musl_linux; then
error "Pre-built-only mode is not supported on musl/Alpine because releases do not include musl assets yet."
else
error "Pre-built-only mode requested, but no compatible release asset is available."
fi
error "Try again later, or run with --force-source-build on a machine with enough RAM/disk."
exit 1
else
@@ -1190,6 +1359,12 @@ if [[ -n "$TARGET_VERSION" ]]; then
step_dot "Installing ZeroClaw v${TARGET_VERSION}"
fi
if [[ "$SKIP_BUILD" == false ]]; then
# Clean stale build artifacts on upgrade to prevent bindgen/build-script
# cache mismatches (e.g. libsqlite3-sys bindgen.rs not found).
if [[ "$INSTALL_MODE" == "upgrade" && -d "$WORK_DIR/target/release/build" ]]; then
step_dot "Cleaning stale build cache (upgrade detected)"
cargo clean --release 2>/dev/null || true
fi
step_dot "Building release binary"
cargo build --release --locked
step_ok "Release binary built"
@@ -1280,6 +1455,13 @@ elif [[ -z "$ZEROCLAW_BIN" ]]; then
warn "ZeroClaw binary not found — cannot configure provider"
fi
# Ensure config.toml and workspace scaffold exist even when onboard was
# skipped, unavailable, or failed (e.g. --skip-build --prefer-prebuilt
# without an API key, or when the binary could not run onboard).
_native_config_dir="${ZEROCLAW_CONFIG_DIR:-$HOME/.zeroclaw}"
_native_workspace_dir="${ZEROCLAW_WORKSPACE:-$_native_config_dir/workspace}"
ensure_default_config_and_workspace "$_native_config_dir" "$_native_workspace_dir" "$PROVIDER"
# --- Gateway service management ---
if [[ -n "$ZEROCLAW_BIN" ]]; then
# Try to install and start the gateway service
@@ -1290,8 +1472,14 @@ if [[ -n "$ZEROCLAW_BIN" ]]; then
step_ok "Gateway service restarted"
# Fetch and display pairing code from running gateway
sleep 1 # brief wait for service to start
if PAIR_CODE=$("$ZEROCLAW_BIN" gateway get-paircode 2>/dev/null | grep -oE '[0-9]{6}'); then
PAIR_CODE=""
for i in 1 2 3 4 5; do
sleep 2
if PAIR_CODE=$("$ZEROCLAW_BIN" gateway get-paircode 2>/dev/null | grep -oE '[0-9]{6}'); then
break
fi
done
if [[ -n "$PAIR_CODE" ]]; then
echo
echo -e " ${BOLD_BLUE}🔐 Gateway Pairing Code${RESET}"
echo
@@ -1300,6 +1488,7 @@ if [[ -n "$ZEROCLAW_BIN" ]]; then
echo -e " ${BOLD_BLUE}└──────────────┘${RESET}"
echo
echo -e " ${DIM}Enter this code in the dashboard to pair your device.${RESET}"
echo -e " ${DIM}Run 'zeroclaw gateway get-paircode --new' anytime to generate a fresh code.${RESET}"
fi
else
step_fail "Gateway service restart failed — re-run with zeroclaw service start"
@@ -1331,6 +1520,13 @@ else
echo -e "${BOLD_BLUE}${CRAB} ZeroClaw installed successfully!${RESET}"
fi
if [[ -x "$HOME/.cargo/bin/zeroclaw" ]] && ! have_cmd zeroclaw; then
echo
warn "zeroclaw is installed in $HOME/.cargo/bin, but that directory is not in PATH for this shell."
warn 'Run: export PATH="$HOME/.cargo/bin:$PATH"'
step_dot "To persist it, add that export line to ~/.bashrc, ~/.zshrc, or your shell profile, then open a new shell."
fi
if [[ "$INSTALL_MODE" == "upgrade" ]]; then
step_dot "Upgrade complete"
fi
+166 -1
@@ -4,6 +4,7 @@ use crate::agent::dispatcher::{
use crate::agent::memory_loader::{DefaultMemoryLoader, MemoryLoader};
use crate::agent::prompt::{PromptContext, SystemPromptBuilder};
use crate::config::Config;
use crate::i18n::ToolDescriptions;
use crate::memory::{self, Memory, MemoryCategory};
use crate::observability::{self, Observer, ObserverEvent};
use crate::providers::{self, ChatMessage, ChatRequest, ConversationMessage, Provider};
@@ -40,6 +41,7 @@ pub struct Agent {
route_model_by_hint: HashMap<String, String>,
allowed_tools: Option<Vec<String>>,
response_cache: Option<Arc<crate::memory::response_cache::ResponseCache>>,
tool_descriptions: Option<ToolDescriptions>,
}
pub struct AgentBuilder {
@@ -64,6 +66,7 @@ pub struct AgentBuilder {
route_model_by_hint: Option<HashMap<String, String>>,
allowed_tools: Option<Vec<String>>,
response_cache: Option<Arc<crate::memory::response_cache::ResponseCache>>,
tool_descriptions: Option<ToolDescriptions>,
}
impl AgentBuilder {
@@ -90,6 +93,7 @@ impl AgentBuilder {
route_model_by_hint: None,
allowed_tools: None,
response_cache: None,
tool_descriptions: None,
}
}
@@ -207,6 +211,11 @@ impl AgentBuilder {
self
}
pub fn tool_descriptions(mut self, tool_descriptions: Option<ToolDescriptions>) -> Self {
self.tool_descriptions = tool_descriptions;
self
}
pub fn build(self) -> Result<Agent> {
let mut tools = self
.tools
@@ -257,6 +266,7 @@ impl AgentBuilder {
route_model_by_hint: self.route_model_by_hint.unwrap_or_default(),
allowed_tools: allowed,
response_cache: self.response_cache,
tool_descriptions: self.tool_descriptions,
})
}
}
@@ -278,6 +288,25 @@ impl Agent {
self.memory_session_id = session_id;
}
/// Hydrate the agent with prior chat messages (e.g. from a session backend).
///
/// Ensures a system prompt is prepended if history is empty, then appends all
/// non-system messages from the seed. System messages in the seed are skipped
/// to avoid duplicating the system prompt.
pub fn seed_history(&mut self, messages: &[ChatMessage]) {
if self.history.is_empty() {
if let Ok(sys) = self.build_system_prompt() {
self.history
.push(ConversationMessage::Chat(ChatMessage::system(sys)));
}
}
for msg in messages {
if msg.role != "system" {
self.history.push(ConversationMessage::Chat(msg.clone()));
}
}
}
pub fn from_config(config: &Config) -> Result<Self> {
let observer: Arc<dyn Observer> =
Arc::from(observability::create_observer(&config.observability));
@@ -331,13 +360,16 @@ impl Agent {
.unwrap_or("anthropic/claude-sonnet-4-20250514")
.to_string();
let provider: Box<dyn Provider> = providers::create_routed_provider(
let provider_runtime_options = providers::provider_runtime_options_from_config(config);
let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
provider_name,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
&model_name,
&provider_runtime_options,
)?;
let dispatcher_choice = config.agent.tool_dispatcher.as_str();
@@ -434,6 +466,7 @@ impl Agent {
skills_prompt_mode: self.skills_prompt_mode,
identity_config: Some(&self.identity_config),
dispatcher_instructions: &instructions,
tool_descriptions: self.tool_descriptions.as_ref(),
};
self.prompt_builder.build(&ctx)
}
@@ -1006,6 +1039,92 @@ mod tests {
assert_eq!(seen.as_slice(), &["hint:fast".to_string()]);
}
#[tokio::test]
async fn from_config_passes_extra_headers_to_custom_provider() {
use axum::{http::HeaderMap, routing::post, Json, Router};
use tempfile::TempDir;
use tokio::net::TcpListener;
let captured_headers: Arc<std::sync::Mutex<Option<HashMap<String, String>>>> =
Arc::new(std::sync::Mutex::new(None));
let captured_headers_clone = captured_headers.clone();
let app = Router::new().route(
"/chat/completions",
post(
move |headers: HeaderMap, Json(_body): Json<serde_json::Value>| {
let captured_headers = captured_headers_clone.clone();
async move {
let collected = headers
.iter()
.filter_map(|(name, value)| {
value
.to_str()
.ok()
.map(|value| (name.as_str().to_string(), value.to_string()))
})
.collect();
*captured_headers.lock().unwrap() = Some(collected);
Json(serde_json::json!({
"choices": [{
"message": {
"content": "hello from mock"
}
}]
}))
}
},
),
);
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let server_handle = tokio::spawn(async move {
axum::serve(listener, app).await.unwrap();
});
let tmp = TempDir::new().expect("temp dir");
let workspace_dir = tmp.path().join("workspace");
std::fs::create_dir_all(&workspace_dir).unwrap();
let mut config = crate::config::Config::default();
config.workspace_dir = workspace_dir;
config.config_path = tmp.path().join("config.toml");
config.api_key = Some("test-key".to_string());
config.default_provider = Some(format!("custom:http://{addr}"));
config.default_model = Some("test-model".to_string());
config.memory.backend = "none".to_string();
config.memory.auto_save = false;
config.extra_headers.insert(
"User-Agent".to_string(),
"zeroclaw-web-test/1.0".to_string(),
);
config
.extra_headers
.insert("X-Title".to_string(), "zeroclaw-web".to_string());
let mut agent = Agent::from_config(&config).expect("agent from config");
let response = agent.turn("hello").await.expect("agent turn");
assert_eq!(response, "hello from mock");
let headers = captured_headers
.lock()
.unwrap()
.clone()
.expect("captured headers");
assert_eq!(
headers.get("user-agent").map(String::as_str),
Some("zeroclaw-web-test/1.0")
);
assert_eq!(
headers.get("x-title").map(String::as_str),
Some("zeroclaw-web")
);
server_handle.abort();
}
#[test]
fn builder_allowed_tools_none_keeps_all_tools() {
let provider = Box::new(MockProvider {
@@ -1069,4 +1188,50 @@ mod tests {
"No tools should match a non-existent allowlist entry"
);
}
#[test]
fn seed_history_prepends_system_and_skips_system_from_seed() {
let provider = Box::new(MockProvider {
responses: Mutex::new(vec![]),
});
let memory_cfg = crate::config::MemoryConfig {
backend: "none".into(),
..crate::config::MemoryConfig::default()
};
let mem: Arc<dyn Memory> = Arc::from(
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None)
.expect("memory creation should succeed with valid config"),
);
let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {});
let mut agent = Agent::builder()
.provider(provider)
.tools(vec![Box::new(MockTool)])
.memory(mem)
.observer(observer)
.tool_dispatcher(Box::new(NativeToolDispatcher))
.workspace_dir(std::path::PathBuf::from("/tmp"))
.build()
.expect("agent builder should succeed with valid config");
let seed = vec![
ChatMessage::system("old system prompt"),
ChatMessage::user("hello"),
ChatMessage::assistant("hi there"),
];
agent.seed_history(&seed);
let history = agent.history();
// First message should be a freshly built system prompt (not the seed one)
assert!(matches!(&history[0], ConversationMessage::Chat(m) if m.role == "system"));
// System message from seed should be skipped, so next is user
assert!(
matches!(&history[1], ConversationMessage::Chat(m) if m.role == "user" && m.content == "hello")
);
assert!(
matches!(&history[2], ConversationMessage::Chat(m) if m.role == "assistant" && m.content == "hi there")
);
assert_eq!(history.len(), 3);
}
}
+495 -107
@@ -1,5 +1,6 @@
use crate::approval::{ApprovalManager, ApprovalRequest, ApprovalResponse};
use crate::config::Config;
use crate::i18n::ToolDescriptions;
use crate::memory::{self, Memory, MemoryCategory};
use crate::multimodal;
use crate::observability::{self, runtime_trace, Observer, ObserverEvent};
@@ -17,7 +18,7 @@ use std::collections::HashSet;
use std::fmt::Write;
use std::io::Write as _;
use std::path::{Path, PathBuf};
use std::sync::{Arc, LazyLock};
use std::sync::{Arc, LazyLock, Mutex};
use std::time::{Duration, Instant};
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
@@ -33,6 +34,29 @@ const DEFAULT_MAX_TOOL_ITERATIONS: usize = 10;
/// Matches the channel-side constant in `channels/mod.rs`.
const AUTOSAVE_MIN_MESSAGE_CHARS: usize = 20;
/// Callback type for checking if model has been switched during tool execution.
/// Returns Some((provider, model)) if a switch was requested, None otherwise.
pub type ModelSwitchCallback = Arc<Mutex<Option<(String, String)>>>;
/// Global model switch request state - used for runtime model switching via model_switch tool.
/// This is set by the model_switch tool and checked by the agent loop.
#[allow(clippy::type_complexity)]
static MODEL_SWITCH_REQUEST: LazyLock<Arc<Mutex<Option<(String, String)>>>> =
LazyLock::new(|| Arc::new(Mutex::new(None)));
/// Get the global model switch request state
pub fn get_model_switch_state() -> ModelSwitchCallback {
Arc::clone(&MODEL_SWITCH_REQUEST)
}
/// Clear any pending model switch request
pub fn clear_model_switch_request() {
if let Ok(guard) = MODEL_SWITCH_REQUEST.lock() {
let mut guard = guard;
*guard = None;
}
}
fn glob_match(pattern: &str, name: &str) -> bool {
match pattern.find('*') {
None => pattern == name,
@@ -2118,6 +2142,31 @@ pub(crate) fn is_tool_loop_cancelled(err: &anyhow::Error) -> bool {
err.chain().any(|source| source.is::<ToolLoopCancelled>())
}
#[derive(Debug)]
pub(crate) struct ModelSwitchRequested {
pub provider: String,
pub model: String,
}
impl std::fmt::Display for ModelSwitchRequested {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"model switch requested to {} {}",
self.provider, self.model
)
}
}
impl std::error::Error for ModelSwitchRequested {}
pub(crate) fn is_model_switch_requested(err: &anyhow::Error) -> Option<(String, String)> {
err.chain()
.filter_map(|source| source.downcast_ref::<ModelSwitchRequested>())
.map(|e| (e.provider.clone(), e.model.clone()))
.next()
}
/// Execute a single turn of the agent loop: send messages, parse tool calls,
/// execute tools, and loop until the LLM produces a final text response.
/// When `silent` is true, suppresses stdout (for channel use).
@@ -2131,8 +2180,13 @@ pub(crate) async fn agent_turn(
model: &str,
temperature: f64,
silent: bool,
channel_name: &str,
multimodal_config: &crate::config::MultimodalConfig,
max_tool_iterations: usize,
excluded_tools: &[String],
dedup_exempt_tools: &[String],
activated_tools: Option<&std::sync::Arc<std::sync::Mutex<crate::tools::ActivatedToolSet>>>,
model_switch_callback: Option<ModelSwitchCallback>,
) -> Result<String> {
run_tool_call_loop(
provider,
@@ -2144,15 +2198,16 @@ pub(crate) async fn agent_turn(
temperature,
silent,
None,
"channel",
channel_name,
multimodal_config,
max_tool_iterations,
None,
None,
None,
&[],
&[],
None,
excluded_tools,
dedup_exempt_tools,
activated_tools,
model_switch_callback,
)
.await
}
@@ -2174,7 +2229,7 @@ async fn execute_one_tool(
let static_tool = find_tool(tools_registry, call_name);
let activated_arc = if static_tool.is_none() {
activated_tools.and_then(|at| at.lock().unwrap().get(call_name))
activated_tools.and_then(|at| at.lock().unwrap().get_resolved(call_name))
} else {
None
};
@@ -2358,6 +2413,7 @@ pub(crate) async fn run_tool_call_loop(
excluded_tools: &[String],
dedup_exempt_tools: &[String],
activated_tools: Option<&std::sync::Arc<std::sync::Mutex<crate::tools::ActivatedToolSet>>>,
model_switch_callback: Option<ModelSwitchCallback>,
) -> Result<String> {
let max_iterations = if max_tool_iterations == 0 {
DEFAULT_MAX_TOOL_ITERATIONS
@@ -2366,9 +2422,10 @@ pub(crate) async fn run_tool_call_loop(
};
let turn_id = Uuid::new_v4().to_string();
let mut seen_tool_signatures: HashSet<(String, String)> = HashSet::new();
for iteration in 0..max_iterations {
let mut seen_tool_signatures: HashSet<(String, String)> = HashSet::new();
if cancellation_token
.as_ref()
.is_some_and(CancellationToken::is_cancelled)
@@ -2376,6 +2433,28 @@ pub(crate) async fn run_tool_call_loop(
return Err(ToolLoopCancelled.into());
}
// Check if model switch was requested via model_switch tool
if let Some(ref callback) = model_switch_callback {
if let Ok(guard) = callback.lock() {
if let Some((new_provider, new_model)) = guard.as_ref() {
if new_provider != provider_name || new_model != model {
tracing::info!(
"Model switch detected: {} {} -> {} {}",
provider_name,
model,
new_provider,
new_model
);
return Err(ModelSwitchRequested {
provider: new_provider.clone(),
model: new_model.clone(),
}
.into());
}
}
}
}
// Rebuild tool_specs each iteration so newly activated deferred tools appear.
let mut tool_specs: Vec<crate::tools::ToolSpec> = tools_registry
.iter()
@@ -3000,7 +3079,10 @@ pub(crate) async fn run_tool_call_loop(
/// Build the tool instruction block for the system prompt so the LLM knows
/// how to invoke tools.
pub(crate) fn build_tool_instructions(tools_registry: &[Box<dyn Tool>]) -> String {
pub(crate) fn build_tool_instructions(
tools_registry: &[Box<dyn Tool>],
tool_descriptions: Option<&ToolDescriptions>,
) -> String {
let mut instructions = String::new();
instructions.push_str("\n## Tool Use Protocol\n\n");
instructions.push_str("To use a tool, wrap a JSON object in <tool_call></tool_call> tags:\n\n");
@@ -3016,11 +3098,14 @@ pub(crate) fn build_tool_instructions(tools_registry: &[Box<dyn Tool>]) -> Strin
instructions.push_str("### Available Tools\n\n");
for tool in tools_registry {
let desc = tool_descriptions
.and_then(|td| td.get(tool.name()))
.unwrap_or_else(|| tool.description());
let _ = writeln!(
instructions,
"**{}**: {}\nParameters: `{}`\n",
tool.name(),
tool.description(),
desc,
tool.parameters_schema()
);
}
@@ -3195,37 +3280,32 @@ pub async fn run(
}
// ── Resolve provider ─────────────────────────────────────────
let provider_name = provider_override
let mut provider_name = provider_override
.as_deref()
.or(config.default_provider.as_deref())
.unwrap_or("openrouter");
.unwrap_or("openrouter")
.to_string();
let model_name = model_override
let mut model_name = model_override
.as_deref()
.or(config.default_model.as_deref())
.unwrap_or("anthropic/claude-sonnet-4");
.unwrap_or("anthropic/claude-sonnet-4")
.to_string();
let provider_runtime_options = providers::ProviderRuntimeOptions {
auth_profile_override: None,
provider_api_url: config.api_url.clone(),
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
};
let provider_runtime_options = providers::provider_runtime_options_from_config(&config);
let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
provider_name,
let mut provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
&provider_name,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
model_name,
&model_name,
&provider_runtime_options,
)?;
let model_switch_callback = get_model_switch_state();
observer.record_event(&ObserverEvent::AgentStart {
provider: provider_name.to_string(),
model: model_name.to_string(),
@@ -3251,6 +3331,16 @@ pub async fn run(
.map(|b| b.board.clone())
.collect();
// ── Load locale-aware tool descriptions ────────────────────────
let i18n_locale = config
.locale
.as_deref()
.filter(|s| !s.is_empty())
.map(ToString::to_string)
.unwrap_or_else(crate::i18n::detect_locale);
let i18n_search_dirs = crate::i18n::default_search_dirs(&config.workspace_dir);
let i18n_descs = crate::i18n::ToolDescriptions::load(&i18n_locale, &i18n_search_dirs);
// ── Build system prompt from workspace MD files (OpenClaw framework) ──
let skills = crate::skills::load_skills_with_config(&config.workspace_dir, &config);
let mut tool_descs: Vec<(&str, &str)> = vec![
@@ -3369,7 +3459,7 @@ pub async fn run(
let native_tools = provider.supports_native_tools();
let mut system_prompt = crate::channels::build_system_prompt_with_mode(
&config.workspace_dir,
model_name,
&model_name,
&tool_descs,
&skills,
Some(&config.identity),
@@ -3380,7 +3470,7 @@ pub async fn run(
// Append structured tool-use instructions with schemas (only for non-native providers)
if !native_tools {
system_prompt.push_str(&build_tool_instructions(&tools_registry));
system_prompt.push_str(&build_tool_instructions(&tools_registry, Some(&i18n_descs)));
}
// Append deferred MCP tool names so the LLM knows what is available
@@ -3452,27 +3542,93 @@ pub async fn run(
let excluded_tools =
compute_excluded_mcp_tools(&tools_registry, &config.agent.tool_filter_groups, &msg);
let response = run_tool_call_loop(
provider.as_ref(),
&mut history,
&tools_registry,
observer.as_ref(),
provider_name,
model_name,
temperature,
false,
approval_manager.as_ref(),
channel_name,
&config.multimodal,
config.agent.max_tool_iterations,
None,
None,
None,
&excluded_tools,
&config.agent.tool_call_dedup_exempt,
activated_handle.as_ref(),
)
.await?;
#[allow(unused_assignments)]
let mut response = String::new();
loop {
match run_tool_call_loop(
provider.as_ref(),
&mut history,
&tools_registry,
observer.as_ref(),
&provider_name,
&model_name,
temperature,
false,
approval_manager.as_ref(),
channel_name,
&config.multimodal,
config.agent.max_tool_iterations,
None,
None,
None,
&excluded_tools,
&config.agent.tool_call_dedup_exempt,
activated_handle.as_ref(),
Some(model_switch_callback.clone()),
)
.await
{
Ok(resp) => {
response = resp;
break;
}
Err(e) => {
if let Some((new_provider, new_model)) = is_model_switch_requested(&e) {
tracing::info!(
"Model switch requested, switching from {} {} to {} {}",
provider_name,
model_name,
new_provider,
new_model
);
provider = providers::create_routed_provider_with_options(
&new_provider,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
&new_model,
&provider_runtime_options,
)?;
provider_name = new_provider;
model_name = new_model;
clear_model_switch_request();
observer.record_event(&ObserverEvent::AgentStart {
provider: provider_name.to_string(),
model: model_name.to_string(),
});
continue;
}
return Err(e);
}
}
}
// After successful multi-step execution, attempt autonomous skill creation.
#[cfg(feature = "skill-creation")]
if config.skills.skill_creation.enabled {
let tool_calls = crate::skills::creator::extract_tool_calls_from_history(&history);
if tool_calls.len() >= 2 {
let creator = crate::skills::creator::SkillCreator::new(
config.workspace_dir.clone(),
config.skills.skill_creation.clone(),
);
match creator.create_from_execution(&msg, &tool_calls, None).await {
Ok(Some(slug)) => {
tracing::info!(slug, "Auto-created skill from execution");
}
Ok(None) => {
tracing::debug!("Skill creation skipped (duplicate or disabled)");
}
Err(e) => tracing::warn!("Skill creation failed: {e}"),
}
}
}
final_output = response.clone();
println!("{response}");
observer.record_event(&ObserverEvent::TurnComplete);
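The retry loop above surfaces a model switch as a special error, rebuilds the provider for the new target, and re-enters the turn. The control flow can be sketched standalone; `run_once`, `switch_request`, and the `"switch:provider/model"` sentinel are illustrative stand-ins, not the crate's real API:

```rust
// Sketch of the switch-on-error retry pattern; all names are
// illustrative stand-ins for the crate's real functions.
fn switch_request(err: &str) -> Option<(String, String)> {
    // The real code inspects a sentinel error; here we parse "switch:provider/model".
    err.strip_prefix("switch:")
        .and_then(|rest| rest.split_once('/'))
        .map(|(p, m)| (p.to_string(), m.to_string()))
}

fn run_once(provider: &str, _model: &str) -> Result<String, String> {
    if provider == "slow" {
        Err("switch:fast/fast-mini".to_string()) // provider asks for a switch
    } else {
        Ok(format!("done via {provider}"))
    }
}

fn drive(mut provider: String, mut model: String) -> Result<String, String> {
    loop {
        match run_once(&provider, &model) {
            Ok(resp) => return Ok(resp),
            Err(e) => match switch_request(&e) {
                Some((p, m)) => {
                    provider = p; // rebuild the provider for the new target
                    model = m;
                    continue; // retry the turn with the new model
                }
                None => return Err(e),
            },
        }
    }
}

fn main() {
    let out = drive("slow".into(), "slow-large".into()).unwrap();
    assert_eq!(out, "done via fast");
    println!("{out}");
}
```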
@@ -3614,32 +3770,66 @@ pub async fn run(
&user_input,
);
let response = match run_tool_call_loop(
provider.as_ref(),
&mut history,
&tools_registry,
observer.as_ref(),
provider_name,
model_name,
temperature,
false,
approval_manager.as_ref(),
channel_name,
&config.multimodal,
config.agent.max_tool_iterations,
None,
None,
None,
&excluded_tools,
&config.agent.tool_call_dedup_exempt,
activated_handle.as_ref(),
)
.await
{
Ok(resp) => resp,
Err(e) => {
eprintln!("\nError: {e}\n");
continue;
let response = loop {
match run_tool_call_loop(
provider.as_ref(),
&mut history,
&tools_registry,
observer.as_ref(),
&provider_name,
&model_name,
temperature,
false,
approval_manager.as_ref(),
channel_name,
&config.multimodal,
config.agent.max_tool_iterations,
None,
None,
None,
&excluded_tools,
&config.agent.tool_call_dedup_exempt,
activated_handle.as_ref(),
Some(model_switch_callback.clone()),
)
.await
{
Ok(resp) => break resp,
Err(e) => {
if let Some((new_provider, new_model)) = is_model_switch_requested(&e) {
tracing::info!(
"Model switch requested, switching from {} {} to {} {}",
provider_name,
model_name,
new_provider,
new_model
);
provider = providers::create_routed_provider_with_options(
&new_provider,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
&new_model,
&provider_runtime_options,
)?;
provider_name = new_provider;
model_name = new_model;
clear_model_switch_request();
observer.record_event(&ObserverEvent::AgentStart {
provider: provider_name.to_string(),
model: model_name.to_string(),
});
continue;
}
eprintln!("\nError: {e}\n");
break String::new();
}
}
};
final_output = response.clone();
@@ -3657,7 +3847,7 @@ pub async fn run(
if let Ok(compacted) = auto_compact_history(
&mut history,
provider.as_ref(),
model_name,
&model_name,
config.agent.max_history_messages,
config.agent.max_context_tokens,
)
@@ -3743,6 +3933,10 @@ pub async fn process_message(
// NOTE: Same ordering contract as the CLI path above — MCP tools must be
// injected after filter_primary_agent_tools_or_fail (or equivalent built-in
// tool allow/deny filtering) to avoid MCP tools being silently dropped.
let mut deferred_section = String::new();
let mut activated_handle_pm: Option<
std::sync::Arc<std::sync::Mutex<crate::tools::ActivatedToolSet>>,
> = None;
if config.mcp.enabled && !config.mcp.servers.is_empty() {
tracing::info!(
"Initializing MCP client — {} server(s) configured",
@@ -3751,28 +3945,50 @@ pub async fn process_message(
match crate::tools::McpRegistry::connect_all(&config.mcp.servers).await {
Ok(registry) => {
let registry = std::sync::Arc::new(registry);
let names = registry.tool_names();
let mut registered = 0usize;
for name in names {
if let Some(def) = registry.get_tool_def(&name).await {
let wrapper: std::sync::Arc<dyn Tool> =
std::sync::Arc::new(crate::tools::McpToolWrapper::new(
name,
def,
std::sync::Arc::clone(&registry),
));
if let Some(ref handle) = delegate_handle_pm {
handle.write().push(std::sync::Arc::clone(&wrapper));
if config.mcp.deferred_loading {
let deferred_set = crate::tools::DeferredMcpToolSet::from_registry(
std::sync::Arc::clone(&registry),
)
.await;
tracing::info!(
"MCP deferred: {} tool stub(s) from {} server(s)",
deferred_set.len(),
registry.server_count()
);
deferred_section =
crate::tools::mcp_deferred::build_deferred_tools_section(&deferred_set);
let activated = std::sync::Arc::new(std::sync::Mutex::new(
crate::tools::ActivatedToolSet::new(),
));
activated_handle_pm = Some(std::sync::Arc::clone(&activated));
tools_registry.push(Box::new(crate::tools::ToolSearchTool::new(
deferred_set,
activated,
)));
} else {
let names = registry.tool_names();
let mut registered = 0usize;
for name in names {
if let Some(def) = registry.get_tool_def(&name).await {
let wrapper: std::sync::Arc<dyn Tool> =
std::sync::Arc::new(crate::tools::McpToolWrapper::new(
name,
def,
std::sync::Arc::clone(&registry),
));
if let Some(ref handle) = delegate_handle_pm {
handle.write().push(std::sync::Arc::clone(&wrapper));
}
tools_registry.push(Box::new(crate::tools::ArcToolRef(wrapper)));
registered += 1;
}
tools_registry.push(Box::new(crate::tools::ArcToolRef(wrapper)));
registered += 1;
}
tracing::info!(
"MCP: {} tool(s) registered from {} server(s)",
registered,
registry.server_count()
);
}
tracing::info!(
"MCP: {} tool(s) registered from {} server(s)",
registered,
registry.server_count()
);
}
Err(e) => {
tracing::error!("MCP registry failed to initialize: {e:#}");
@@ -3785,16 +4001,7 @@ pub async fn process_message(
.default_model
.clone()
.unwrap_or_else(|| "anthropic/claude-sonnet-4-20250514".into());
let provider_runtime_options = providers::ProviderRuntimeOptions {
auth_profile_override: None,
provider_api_url: config.api_url.clone(),
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
};
let provider_runtime_options = providers::provider_runtime_options_from_config(&config);
let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
provider_name,
config.api_key.as_deref(),
@@ -3820,6 +4027,16 @@ pub async fn process_message(
.map(|b| b.board.clone())
.collect();
// ── Load locale-aware tool descriptions ────────────────────────
let i18n_locale = config
.locale
.as_deref()
.filter(|s| !s.is_empty())
.map(ToString::to_string)
.unwrap_or_else(crate::i18n::detect_locale);
let i18n_search_dirs = crate::i18n::default_search_dirs(&config.workspace_dir);
let i18n_descs = crate::i18n::ToolDescriptions::load(&i18n_locale, &i18n_search_dirs);
let skills = crate::skills::load_skills_with_config(&config.workspace_dir, &config);
let mut tool_descs: Vec<(&str, &str)> = vec![
("shell", "Execute terminal commands."),
@@ -3885,7 +4102,11 @@ pub async fn process_message(
config.skills.prompt_injection_mode,
);
if !native_tools {
system_prompt.push_str(&build_tool_instructions(&tools_registry));
system_prompt.push_str(&build_tool_instructions(&tools_registry, Some(&i18n_descs)));
}
if !deferred_section.is_empty() {
system_prompt.push('\n');
system_prompt.push_str(&deferred_section);
}
let mem_context = build_context(
@@ -3912,6 +4133,8 @@ pub async fn process_message(
ChatMessage::system(&system_prompt),
ChatMessage::user(&enriched),
];
let excluded_tools =
compute_excluded_mcp_tools(&tools_registry, &config.agent.tool_filter_groups, message);
agent_turn(
provider.as_ref(),
@@ -3922,8 +4145,13 @@ pub async fn process_message(
&model_name,
config.default_temperature,
true,
"daemon",
&config.multimodal,
config.agent.max_tool_iterations,
&excluded_tools,
&config.agent.tool_call_dedup_exempt,
activated_handle_pm.as_ref(),
None,
)
.await
}
@@ -4021,6 +4249,36 @@ mod tests {
assert!(outcome.output.contains("Unknown tool: unknown_tool"));
}
#[tokio::test]
async fn execute_one_tool_resolves_unique_activated_tool_suffix() {
let observer = NoopObserver;
let invocations = Arc::new(AtomicUsize::new(0));
let activated = Arc::new(std::sync::Mutex::new(crate::tools::ActivatedToolSet::new()));
let activated_tool: Arc<dyn Tool> = Arc::new(CountingTool::new(
"docker-mcp__extract_text",
Arc::clone(&invocations),
));
activated
.lock()
.unwrap()
.activate("docker-mcp__extract_text".into(), activated_tool);
let outcome = execute_one_tool(
"extract_text",
serde_json::json!({ "value": "ok" }),
&[],
Some(&activated),
&observer,
None,
)
.await
.expect("suffix alias should execute the unique activated tool");
assert!(outcome.success);
assert_eq!(outcome.output, "counted:ok");
assert_eq!(invocations.load(Ordering::SeqCst), 1);
}
use crate::memory::{Memory, MemoryCategory, SqliteMemory};
use crate::observability::NoopObserver;
use crate::providers::traits::ProviderCapabilities;
@@ -4351,6 +4609,7 @@ mod tests {
&[],
&[],
None,
None,
)
.await
.expect_err("provider without vision support should fail");
@@ -4399,6 +4658,7 @@ mod tests {
&[],
&[],
None,
None,
)
.await
.expect_err("oversized payload must fail");
@@ -4441,6 +4701,7 @@ mod tests {
&[],
&[],
None,
None,
)
.await
.expect("valid multimodal payload should pass");
@@ -4569,6 +4830,7 @@ mod tests {
&[],
&[],
None,
None,
)
.await
.expect("parallel execution should complete");
@@ -4640,6 +4902,7 @@ mod tests {
&[],
&[],
None,
None,
)
.await
.expect("loop should finish after deduplicating repeated calls");
@@ -4659,6 +4922,69 @@ mod tests {
assert!(tool_results.content.contains("Skipped duplicate tool call"));
}
#[tokio::test]
async fn run_tool_call_loop_allows_low_risk_shell_in_non_interactive_mode() {
let provider = ScriptedProvider::from_text_responses(vec![
r#"<tool_call>
{"name":"shell","arguments":{"command":"echo hello"}}
</tool_call>"#,
"done",
]);
let tmp = TempDir::new().expect("temp dir");
let security = Arc::new(crate::security::SecurityPolicy {
autonomy: crate::security::AutonomyLevel::Supervised,
workspace_dir: tmp.path().to_path_buf(),
..crate::security::SecurityPolicy::default()
});
let runtime: Arc<dyn crate::runtime::RuntimeAdapter> =
Arc::new(crate::runtime::NativeRuntime::new());
let tools_registry: Vec<Box<dyn Tool>> = vec![Box::new(
crate::tools::shell::ShellTool::new(security, runtime),
)];
let mut history = vec![
ChatMessage::system("test-system"),
ChatMessage::user("run shell"),
];
let observer = NoopObserver;
let approval_mgr =
ApprovalManager::for_non_interactive(&crate::config::AutonomyConfig::default());
let result = run_tool_call_loop(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
Some(&approval_mgr),
"telegram",
&crate::config::MultimodalConfig::default(),
4,
None,
None,
None,
&[],
&[],
None,
None,
)
.await
.expect("non-interactive shell should succeed for low-risk command");
assert_eq!(result, "done");
let tool_results = history
.iter()
.find(|msg| msg.role == "user" && msg.content.starts_with("[Tool results]"))
.expect("tool results message should be present");
assert!(tool_results.content.contains("hello"));
assert!(!tool_results.content.contains("Denied by user."));
}
#[tokio::test]
async fn run_tool_call_loop_dedup_exempt_allows_repeated_calls() {
let provider = ScriptedProvider::from_text_responses(vec![
@@ -4703,6 +5029,7 @@ mod tests {
&[],
&exempt,
None,
None,
)
.await
.expect("loop should finish with exempt tool executing twice");
@@ -4781,6 +5108,7 @@ mod tests {
&[],
&exempt,
None,
None,
)
.await
.expect("loop should complete");
@@ -4836,6 +5164,7 @@ mod tests {
&[],
&[],
None,
None,
)
.await
.expect("native fallback id flow should complete");
@@ -4856,6 +5185,64 @@ mod tests {
);
}
#[test]
fn agent_turn_executes_activated_tool_from_wrapper() {
let runtime = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.expect("test runtime should initialize");
runtime.block_on(async {
let provider = ScriptedProvider::from_text_responses(vec![
r#"<tool_call>
{"name":"pixel__get_api_health","arguments":{"value":"ok"}}
</tool_call>"#,
"done",
]);
let invocations = Arc::new(AtomicUsize::new(0));
let activated = Arc::new(std::sync::Mutex::new(crate::tools::ActivatedToolSet::new()));
let activated_tool: Arc<dyn Tool> = Arc::new(CountingTool::new(
"pixel__get_api_health",
Arc::clone(&invocations),
));
activated
.lock()
.unwrap()
.activate("pixel__get_api_health".into(), activated_tool);
let tools_registry: Vec<Box<dyn Tool>> = Vec::new();
let mut history = vec![
ChatMessage::system("test-system"),
ChatMessage::user("use the activated MCP tool"),
];
let observer = NoopObserver;
let result = agent_turn(
&provider,
&mut history,
&tools_registry,
&observer,
"mock-provider",
"mock-model",
0.0,
true,
"daemon",
&crate::config::MultimodalConfig::default(),
4,
&[],
&[],
Some(&activated),
None,
)
.await
.expect("wrapper path should execute activated tools");
assert_eq!(result, "done");
assert_eq!(invocations.load(Ordering::SeqCst), 1);
});
}
#[test]
fn resolve_display_text_hides_raw_payload_for_tool_only_turns() {
let display = resolve_display_text(
@@ -5425,7 +5812,7 @@ Tail"#;
std::path::Path::new("/tmp"),
));
let tools = tools::default_tools(security);
let instructions = build_tool_instructions(&tools);
let instructions = build_tool_instructions(&tools, None);
assert!(instructions.contains("## Tool Use Protocol"));
assert!(instructions.contains("<tool_call>"));
@@ -6738,6 +7125,7 @@ Let me check the result."#;
&[],
&[],
None,
None,
)
.await
.expect("tool loop should complete");
@@ -1,4 +1,5 @@
use crate::config::IdentityConfig;
use crate::i18n::ToolDescriptions;
use crate::identity;
use crate::skills::Skill;
use crate::tools::Tool;
@@ -17,6 +18,9 @@ pub struct PromptContext<'a> {
pub skills_prompt_mode: crate::config::SkillsPromptInjectionMode,
pub identity_config: Option<&'a IdentityConfig>,
pub dispatcher_instructions: &'a str,
/// Locale-aware tool descriptions. When present, tool descriptions in
/// prompts are resolved from the locale file instead of hardcoded values.
pub tool_descriptions: Option<&'a ToolDescriptions>,
}
pub trait PromptSection: Send + Sync {
@@ -124,11 +128,15 @@ impl PromptSection for ToolsSection {
fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
let mut out = String::from("## Tools\n\n");
for tool in ctx.tools {
let desc = ctx
.tool_descriptions
.and_then(|td: &ToolDescriptions| td.get(tool.name()))
.unwrap_or_else(|| tool.description());
let _ = writeln!(
out,
"- **{}**: {}\n Parameters: `{}`",
tool.name(),
tool.description(),
desc,
tool.parameters_schema()
);
}
@@ -317,6 +325,7 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: Some(&identity_config),
dispatcher_instructions: "",
tool_descriptions: None,
};
let section = IdentitySection;
@@ -345,6 +354,7 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "instr",
tool_descriptions: None,
};
let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
assert!(prompt.contains("## Tools"));
@@ -380,6 +390,7 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
};
let output = SkillsSection.build(&ctx).unwrap();
@@ -418,6 +429,7 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Compact,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
};
let output = SkillsSection.build(&ctx).unwrap();
@@ -439,6 +451,7 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "instr",
tool_descriptions: None,
};
let rendered = DateTimeSection.build(&ctx).unwrap();
@@ -477,6 +490,7 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
};
let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
@@ -126,6 +126,15 @@ impl ApprovalManager {
return true;
}
// Channel-driven shell execution is still guarded by the shell tool's
// own command allowlist and risk policy. Skipping the outer approval
// gate here lets low-risk allowlisted commands (e.g. `ls`) work in
// non-interactive channels without silently allowing medium/high-risk
// commands.
if self.non_interactive && tool_name == "shell" {
return false;
}
// auto_approve skips the prompt.
if self.auto_approve.contains(tool_name) {
return false;
@@ -456,6 +465,12 @@ mod tests {
assert!(!mgr.needs_approval("memory_recall"));
}
#[test]
fn non_interactive_shell_skips_outer_approval_by_default() {
let mgr = ApprovalManager::for_non_interactive(&AutonomyConfig::default());
assert!(!mgr.needs_approval("shell"));
}
#[test]
fn non_interactive_always_ask_tools_need_approval() {
let mgr = ApprovalManager::for_non_interactive(&supervised_config());
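The approval-gate change above skips the outer prompt for `shell` in non-interactive channels, deferring to the shell tool's own allowlist and risk policy. A minimal sketch of that decision order, using a simplified stand-in struct (the real `ApprovalManager` has more fields and rules):

```rust
// Illustrative stand-in for the outer-approval decision above; not
// the crate's real ApprovalManager.
struct Approvals {
    non_interactive: bool,
    auto_approve: Vec<String>,
}

impl Approvals {
    fn needs_approval(&self, tool_name: &str) -> bool {
        // Shell in non-interactive channels defers to the shell tool's
        // own command allowlist and risk policy instead of this gate.
        if self.non_interactive && tool_name == "shell" {
            return false;
        }
        // auto_approve skips the prompt for listed tools.
        !self.auto_approve.iter().any(|t| t.as_str() == tool_name)
    }
}

fn main() {
    let mgr = Approvals { non_interactive: true, auto_approve: vec![] };
    assert!(!mgr.needs_approval("shell")); // outer gate skipped
    assert!(mgr.needs_approval("file_write")); // other tools still gated
    println!("ok");
}
```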
@@ -0,0 +1,571 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use anyhow::{bail, Result};
use async_trait::async_trait;
use parking_lot::Mutex;
use serde::{Deserialize, Serialize};
use std::time::{Duration, Instant};
/// Bluesky channel — polls for mentions via AT Protocol and replies as posts.
pub struct BlueskyChannel {
handle: String,
app_password: String,
auth: Mutex<BlueskyAuth>,
}
struct BlueskyAuth {
access_jwt: String,
refresh_jwt: String,
did: String,
expires_at: Instant,
}
const BSKY_API_BASE: &str = "https://bsky.social/xrpc";
const POLL_INTERVAL: Duration = Duration::from_secs(5);
#[derive(Deserialize)]
struct CreateSessionResponse {
#[serde(rename = "accessJwt")]
access_jwt: String,
#[serde(rename = "refreshJwt")]
refresh_jwt: String,
did: String,
}
#[derive(Deserialize)]
struct RefreshSessionResponse {
#[serde(rename = "accessJwt")]
access_jwt: String,
#[serde(rename = "refreshJwt")]
refresh_jwt: String,
}
#[derive(Deserialize)]
struct NotificationListResponse {
notifications: Vec<Notification>,
cursor: Option<String>,
}
#[allow(dead_code)]
#[derive(Deserialize)]
struct Notification {
uri: String,
cid: String,
author: NotificationAuthor,
reason: String,
record: Option<serde_json::Value>,
#[serde(rename = "isRead")]
is_read: bool,
#[serde(rename = "indexedAt")]
indexed_at: String,
}
#[allow(dead_code)]
#[derive(Deserialize)]
struct NotificationAuthor {
did: String,
handle: String,
#[serde(rename = "displayName")]
display_name: Option<String>,
}
/// AT Protocol record for creating a post.
#[derive(Serialize)]
struct CreateRecordRequest {
repo: String,
collection: String,
record: PostRecord,
}
#[derive(Serialize)]
struct PostRecord {
#[serde(rename = "$type")]
record_type: String,
text: String,
#[serde(rename = "createdAt")]
created_at: String,
#[serde(skip_serializing_if = "Option::is_none")]
reply: Option<ReplyRef>,
}
#[derive(Serialize)]
struct ReplyRef {
root: PostRef,
parent: PostRef,
}
#[derive(Serialize)]
struct PostRef {
uri: String,
cid: String,
}
impl BlueskyChannel {
pub fn new(handle: String, app_password: String) -> Self {
Self {
handle,
app_password,
auth: Mutex::new(BlueskyAuth {
access_jwt: String::new(),
refresh_jwt: String::new(),
did: String::new(),
expires_at: Instant::now(),
}),
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.bluesky")
}
/// Create a new session with handle + app password.
async fn create_session(&self) -> Result<()> {
let client = self.http_client();
let resp = client
.post(format!("{BSKY_API_BASE}/com.atproto.server.createSession"))
.json(&serde_json::json!({
"identifier": self.handle,
"password": self.app_password,
}))
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Bluesky createSession failed ({status}): {body}");
}
let session: CreateSessionResponse = resp.json().await?;
let mut auth = self.auth.lock();
auth.access_jwt = session.access_jwt;
auth.refresh_jwt = session.refresh_jwt;
auth.did = session.did;
// AT Protocol JWTs typically last ~2 hours; refresh well before that.
auth.expires_at = Instant::now() + Duration::from_secs(90 * 60);
Ok(())
}
/// Refresh an existing session.
async fn refresh_session(&self) -> Result<()> {
let refresh_jwt = {
let auth = self.auth.lock();
auth.refresh_jwt.clone()
};
if refresh_jwt.is_empty() {
return self.create_session().await;
}
let client = self.http_client();
let resp = client
.post(format!("{BSKY_API_BASE}/com.atproto.server.refreshSession"))
.bearer_auth(&refresh_jwt)
.send()
.await?;
if !resp.status().is_success() {
// Refresh failed — fall back to full re-auth
tracing::warn!("Bluesky session refresh failed, re-authenticating");
return self.create_session().await;
}
let refreshed: RefreshSessionResponse = resp.json().await?;
let mut auth = self.auth.lock();
auth.access_jwt = refreshed.access_jwt;
auth.refresh_jwt = refreshed.refresh_jwt;
auth.expires_at = Instant::now() + Duration::from_secs(90 * 60);
Ok(())
}
/// Get a valid access JWT, refreshing if expired.
async fn get_access_jwt(&self) -> Result<String> {
{
let auth = self.auth.lock();
if !auth.access_jwt.is_empty() && Instant::now() < auth.expires_at {
return Ok(auth.access_jwt.clone());
}
}
self.refresh_session().await?;
let auth = self.auth.lock();
Ok(auth.access_jwt.clone())
}
/// Get the DID for the authenticated account.
fn get_did(&self) -> String {
self.auth.lock().did.clone()
}
/// Parse a notification into a ChannelMessage (only processes mentions).
fn parse_notification(&self, notif: &Notification) -> Option<ChannelMessage> {
// Only process mentions
if notif.reason != "mention" && notif.reason != "reply" {
return None;
}
// Skip already-read notifications
if notif.is_read {
return None;
}
// Skip own posts
if notif.author.did == self.get_did() {
return None;
}
// Extract text from the record
let text = notif
.record
.as_ref()
.and_then(|r| r.get("text"))
.and_then(|t| t.as_str())
.unwrap_or("");
if text.is_empty() {
return None;
}
// Parse timestamp from indexedAt (ISO 8601)
let timestamp = chrono::DateTime::parse_from_rfc3339(&notif.indexed_at)
.map(|dt| dt.timestamp().cast_unsigned())
.unwrap_or(0);
// Extract CID from the record for reply references
let cid = notif
.record
.as_ref()
.and_then(|r| r.get("cid"))
.and_then(|c| c.as_str())
.unwrap_or(&notif.cid);
// The reply target encodes the URI and CID needed for threading
let reply_target = format!("{}|{}", notif.uri, cid);
Some(ChannelMessage {
id: format!("bluesky_{}", notif.cid),
sender: notif.author.handle.clone(),
reply_target,
content: text.to_string(),
channel: "bluesky".to_string(),
timestamp,
thread_ts: Some(notif.uri.clone()),
})
}
/// Mark notifications as read up to a given timestamp.
async fn update_seen(&self, seen_at: &str) -> Result<()> {
let token = self.get_access_jwt().await?;
let client = self.http_client();
let resp = client
.post(format!("{BSKY_API_BASE}/app.bsky.notification.updateSeen"))
.bearer_auth(&token)
.json(&serde_json::json!({ "seenAt": seen_at }))
.send()
.await?;
if !resp.status().is_success() {
tracing::warn!("Bluesky updateSeen failed: {}", resp.status());
}
Ok(())
}
}
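The expiry-gated token reuse in `get_access_jwt` above (return the cached JWT while fresh, otherwise refresh and re-read) can be sketched synchronously. `Auth::token` is an illustrative stand-in; the refresh arm here just mints a placeholder token where the real code calls `refresh_session`:

```rust
use std::time::{Duration, Instant};

// Simplified, synchronous sketch of the token-caching pattern above.
struct Auth {
    access: String,
    expires_at: Instant,
}

impl Auth {
    fn token(&mut self) -> &str {
        if self.access.is_empty() || Instant::now() >= self.expires_at {
            // Stand-in for refresh_session(): mint a new token and set
            // expiry well inside the ~2h AT Protocol JWT lifetime.
            self.access = "fresh-jwt".to_string();
            self.expires_at = Instant::now() + Duration::from_secs(90 * 60);
        }
        &self.access
    }
}

fn main() {
    let mut auth = Auth { access: String::new(), expires_at: Instant::now() };
    assert_eq!(auth.token(), "fresh-jwt"); // empty token -> refresh path
    assert_eq!(auth.token(), "fresh-jwt"); // cached while inside the window
    println!("ok");
}
```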
#[async_trait]
impl Channel for BlueskyChannel {
fn name(&self) -> &str {
"bluesky"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let token = self.get_access_jwt().await?;
let did = self.get_did();
let client = self.http_client();
let now = chrono::Utc::now().to_rfc3339();
// Parse reply reference from recipient if present (format: "uri|cid")
let reply = if message.recipient.contains('|') {
let parts: Vec<&str> = message.recipient.splitn(2, '|').collect();
if parts.len() == 2 {
let uri = parts[0];
let cid = parts[1];
Some(ReplyRef {
root: PostRef {
uri: uri.to_string(),
cid: cid.to_string(),
},
parent: PostRef {
uri: uri.to_string(),
cid: cid.to_string(),
},
})
} else {
None
}
} else {
None
};
// Bluesky posts have a 300-character limit (grapheme clusters).
// For longer content, truncate with an indicator. Count and cut by
// chars (an approximation of graphemes) so the cut never lands
// mid-codepoint and panics on multi-byte UTF-8.
let text = if message.content.chars().count() > 300 {
let truncated: String = message.content.chars().take(297).collect();
format!("{truncated}...")
} else {
message.content.clone()
};
let request = CreateRecordRequest {
repo: did,
collection: "app.bsky.feed.post".to_string(),
record: PostRecord {
record_type: "app.bsky.feed.post".to_string(),
text,
created_at: now,
reply,
},
};
let resp = client
.post(format!("{BSKY_API_BASE}/com.atproto.repo.createRecord"))
.bearer_auth(&token)
.json(&request)
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Bluesky post failed ({status}): {body}");
}
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
// Initial auth
self.create_session().await?;
tracing::info!("Bluesky channel listening as @{}...", self.handle);
loop {
tokio::time::sleep(POLL_INTERVAL).await;
let token = match self.get_access_jwt().await {
Ok(t) => t,
Err(e) => {
tracing::warn!("Bluesky auth error: {e}");
continue;
}
};
let client = self.http_client();
let resp = match client
.get(format!(
"{BSKY_API_BASE}/app.bsky.notification.listNotifications"
))
.bearer_auth(&token)
.query(&[("limit", "25")])
.send()
.await
{
Ok(r) => r,
Err(e) => {
tracing::warn!("Bluesky poll error: {e}");
continue;
}
};
if !resp.status().is_success() {
tracing::warn!("Bluesky notifications failed: {}", resp.status());
continue;
}
let listing: NotificationListResponse = match resp.json().await {
Ok(l) => l,
Err(e) => {
tracing::warn!("Bluesky parse error: {e}");
continue;
}
};
let mut latest_indexed_at: Option<String> = None;
for notif in &listing.notifications {
if let Some(msg) = self.parse_notification(notif) {
latest_indexed_at = Some(notif.indexed_at.clone());
if tx.send(msg).await.is_err() {
return Ok(());
}
}
}
// Mark as seen
if let Some(ref seen_at) = latest_indexed_at {
if let Err(e) = self.update_seen(seen_at).await {
tracing::warn!("Bluesky updateSeen error: {e}");
}
}
let _ = &listing.cursor; // cursor available for pagination if needed
}
}
async fn health_check(&self) -> bool {
self.get_access_jwt().await.is_ok()
}
}
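The `listen` loop above follows a log-and-continue pattern: every failure mode (auth, HTTP, parse) is logged and the loop moves on to the next poll tick, so one bad poll never kills the channel. A bounded, synchronous sketch with an illustrative `poll_once` stub:

```rust
use std::time::Duration;

// Illustrative stub: even ticks fail, odd ticks deliver a batch.
fn poll_once(tick: u32) -> Result<String, String> {
    if tick % 2 == 0 {
        Err("transient network error".into())
    } else {
        Ok(format!("notifications for tick {tick}"))
    }
}

fn main() {
    let interval = Duration::from_secs(5); // mirrors POLL_INTERVAL above
    let mut delivered = Vec::new();
    for tick in 0..4 {
        // The real loop sleeps here: tokio::time::sleep(POLL_INTERVAL).await
        let _ = interval;
        match poll_once(tick) {
            Ok(batch) => delivered.push(batch),
            // Errors are logged and polling continues; a single failed
            // poll must not terminate the channel.
            Err(e) => eprintln!("poll error: {e}"),
        }
    }
    assert_eq!(delivered.len(), 2); // ticks 1 and 3 succeeded
    println!("{delivered:?}");
}
```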
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> BlueskyChannel {
let ch = BlueskyChannel::new("testbot.bsky.social".into(), "app-password".into());
// Seed auth with a DID for tests
{
let mut auth = ch.auth.lock();
auth.did = "did:plc:test123".into();
}
ch
}
fn make_notification(
reason: &str,
handle: &str,
did: &str,
text: &str,
is_read: bool,
) -> Notification {
Notification {
uri: format!("at://{did}/app.bsky.feed.post/abc123"),
cid: "bafyreitest123".into(),
author: NotificationAuthor {
did: did.into(),
handle: handle.into(),
display_name: None,
},
reason: reason.into(),
record: Some(serde_json::json!({ "text": text })),
is_read,
indexed_at: "2026-01-15T10:00:00.000Z".into(),
}
}
#[test]
fn parse_mention_notification() {
let ch = make_channel();
let notif = make_notification(
"mention",
"user1.bsky.social",
"did:plc:user1",
"@testbot hello",
false,
);
let msg = ch.parse_notification(&notif).unwrap();
assert_eq!(msg.sender, "user1.bsky.social");
assert_eq!(msg.content, "@testbot hello");
assert_eq!(msg.channel, "bluesky");
assert!(msg.id.starts_with("bluesky_"));
}
#[test]
fn parse_reply_notification() {
let ch = make_channel();
let notif = make_notification(
"reply",
"user2.bsky.social",
"did:plc:user2",
"thanks for the info!",
false,
);
let msg = ch.parse_notification(&notif).unwrap();
assert_eq!(msg.sender, "user2.bsky.social");
assert_eq!(msg.content, "thanks for the info!");
}
#[test]
fn skip_read_notifications() {
let ch = make_channel();
let notif = make_notification(
"mention",
"user1.bsky.social",
"did:plc:user1",
"old message",
true,
);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn skip_own_notifications() {
let ch = make_channel();
let notif = make_notification(
"mention",
"testbot.bsky.social",
"did:plc:test123", // same as seeded DID
"self message",
false,
);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn skip_like_notifications() {
let ch = make_channel();
let notif = make_notification(
"like",
"user1.bsky.social",
"did:plc:user1",
"liked post",
false,
);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn skip_empty_text() {
let ch = make_channel();
let notif = make_notification("mention", "user1.bsky.social", "did:plc:user1", "", false);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn reply_target_encoding() {
let ch = make_channel();
let notif = make_notification(
"mention",
"user1.bsky.social",
"did:plc:user1",
"hello",
false,
);
let msg = ch.parse_notification(&notif).unwrap();
// reply_target should contain URI|CID
assert!(msg.reply_target.contains('|'));
let parts: Vec<&str> = msg.reply_target.splitn(2, '|').collect();
assert_eq!(parts.len(), 2);
assert!(parts[0].starts_with("at://"));
}
#[test]
fn send_message_formatting() {
// Verify reply target parsing
let reply_target = "at://did:plc:user1/app.bsky.feed.post/abc|bafyreitest";
let parts: Vec<&str> = reply_target.splitn(2, '|').collect();
assert_eq!(parts.len(), 2);
assert_eq!(parts[0], "at://did:plc:user1/app.bsky.feed.post/abc");
assert_eq!(parts[1], "bafyreitest");
}
}
@@ -14,6 +14,7 @@
//! To add a new channel, implement [`Channel`] in a new submodule and wire it into
//! [`start_channels`]. See `AGENTS.md` §7.2 for the full change playbook.
pub mod bluesky;
pub mod clawdtalk;
pub mod cli;
pub mod dingtalk;
@@ -33,6 +34,7 @@ pub mod nextcloud_talk;
pub mod nostr;
pub mod notion;
pub mod qq;
pub mod reddit;
pub mod session_backend;
pub mod session_sqlite;
pub mod session_store;
@@ -44,6 +46,7 @@ pub mod transcription;
pub mod tts;
pub mod twitter;
pub mod wati;
pub mod webhook;
pub mod wecom;
pub mod whatsapp;
#[cfg(feature = "whatsapp-web")]
@@ -51,6 +54,7 @@ pub mod whatsapp_storage;
#[cfg(feature = "whatsapp-web")]
pub mod whatsapp_web;
pub use bluesky::BlueskyChannel;
pub use clawdtalk::{ClawdTalkChannel, ClawdTalkConfig};
pub use cli::CliChannel;
pub use dingtalk::DingTalkChannel;
@@ -70,6 +74,7 @@ pub use nextcloud_talk::NextcloudTalkChannel;
pub use nostr::NostrChannel;
pub use notion::NotionChannel;
pub use qq::QQChannel;
pub use reddit::RedditChannel;
pub use signal::SignalChannel;
pub use slack::SlackChannel;
pub use telegram::TelegramChannel;
@@ -78,6 +83,7 @@ pub use traits::{Channel, SendMessage};
pub use tts::{TtsManager, TtsProvider};
pub use twitter::TwitterChannel;
pub use wati::WatiChannel;
pub use webhook::WebhookChannel;
pub use wecom::WeComChannel;
pub use whatsapp::WhatsAppChannel;
#[cfg(feature = "whatsapp-web")]
@@ -221,6 +227,10 @@ fn channel_message_timeout_budget_secs(
struct ChannelRouteSelection {
provider: String,
model: String,
/// Route-specific API key override. When set, this takes precedence over
/// the global `api_key` in [`ChannelRuntimeContext`] when creating the
/// provider for this route.
api_key: Option<String>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
@@ -898,6 +908,7 @@ fn default_route_selection(ctx: &ChannelRuntimeContext) -> ChannelRouteSelection
ChannelRouteSelection {
provider: defaults.default_provider,
model: defaults.model,
api_key: None,
}
}
@@ -1116,21 +1127,43 @@ fn load_cached_model_preview(workspace_dir: &Path, provider_name: &str) -> Vec<S
.unwrap_or_default()
}
/// Build a cache key that includes the provider name and, when a
/// route-specific API key is supplied, a hash of that key. This prevents
/// cache poisoning when multiple routes target the same provider with
/// different credentials.
fn provider_cache_key(provider_name: &str, route_api_key: Option<&str>) -> String {
match route_api_key {
Some(key) => {
use std::hash::{Hash, Hasher};
let mut hasher = std::collections::hash_map::DefaultHasher::new();
key.hash(&mut hasher);
format!("{provider_name}@{:x}", hasher.finish())
}
None => provider_name.to_string(),
}
}
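The key derivation above can be exercised in isolation. Below is a minimal, self-contained sketch: `DefaultHasher` mirrors the code, and note its output is not stable across Rust releases, which is acceptable for an in-process cache keyed only for the lifetime of the runtime.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Standalone copy of the cache-key logic, for illustration only.
fn provider_cache_key(provider_name: &str, route_api_key: Option<&str>) -> String {
    match route_api_key {
        Some(key) => {
            let mut hasher = DefaultHasher::new();
            key.hash(&mut hasher);
            format!("{provider_name}@{:x}", hasher.finish())
        }
        None => provider_name.to_string(),
    }
}

fn main() {
    // Same provider, different credentials -> distinct cache slots.
    let a = provider_cache_key("openrouter", Some("key-a"));
    let b = provider_cache_key("openrouter", Some("key-b"));
    assert_ne!(a, b);
    assert!(a.starts_with("openrouter@"));
    // No route override -> plain provider name, preserving old behavior.
    assert_eq!(provider_cache_key("openrouter", None), "openrouter");
    println!("ok");
}
```

The hash keeps the raw credential out of the cache key while still separating routes that share a provider.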
async fn get_or_create_provider(
ctx: &ChannelRuntimeContext,
provider_name: &str,
route_api_key: Option<&str>,
) -> anyhow::Result<Arc<dyn Provider>> {
let cache_key = provider_cache_key(provider_name, route_api_key);
if let Some(existing) = ctx
.provider_cache
.lock()
.unwrap_or_else(|e| e.into_inner())
.get(provider_name)
.get(&cache_key)
.cloned()
{
return Ok(existing);
}
if provider_name == ctx.default_provider.as_str() {
// Only return the pre-built default provider when there is no
// route-specific credential override — otherwise the default was
// created with the global key and would be wrong.
if route_api_key.is_none() && provider_name == ctx.default_provider.as_str() {
return Ok(Arc::clone(&ctx.provider));
}
@@ -1141,9 +1174,14 @@ async fn get_or_create_provider(
None
};
// Prefer route-specific credential; fall back to the global key.
let effective_api_key = route_api_key
.map(ToString::to_string)
.or_else(|| ctx.api_key.clone());
let provider = create_resilient_provider_nonblocking(
provider_name,
ctx.api_key.clone(),
effective_api_key,
api_url.map(ToString::to_string),
ctx.reliability.as_ref().clone(),
ctx.provider_runtime_options.clone(),
@@ -1157,7 +1195,7 @@ async fn get_or_create_provider(
let mut cache = ctx.provider_cache.lock().unwrap_or_else(|e| e.into_inner());
let cached = cache
.entry(provider_name.to_string())
.entry(cache_key)
.or_insert_with(|| Arc::clone(&provider));
Ok(Arc::clone(cached))
}
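The credential precedence inside `get_or_create_provider` reduces to a one-liner, sketched here standalone (the helper name is illustrative, not from the source):

```rust
// Route-level API key, when present, overrides the global one; otherwise
// fall back to the global key, which may itself be absent.
fn effective_api_key(route: Option<&str>, global: Option<&str>) -> Option<String> {
    route
        .map(ToString::to_string)
        .or_else(|| global.map(ToString::to_string))
}

fn main() {
    assert_eq!(
        effective_api_key(Some("route"), Some("global")).as_deref(),
        Some("route")
    );
    assert_eq!(effective_api_key(None, Some("global")).as_deref(), Some("global"));
    assert_eq!(effective_api_key(None, None), None);
    println!("ok");
}
```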
@@ -1273,25 +1311,27 @@ async fn handle_runtime_command_if_needed(
ChannelRuntimeCommand::ShowProviders => build_providers_help_response(&current),
ChannelRuntimeCommand::SetProvider(raw_provider) => {
match resolve_provider_alias(&raw_provider) {
Some(provider_name) => match get_or_create_provider(ctx, &provider_name).await {
Ok(_) => {
if provider_name != current.provider {
current.provider = provider_name.clone();
set_route_selection(ctx, &sender_key, current.clone());
}
Some(provider_name) => {
match get_or_create_provider(ctx, &provider_name, None).await {
Ok(_) => {
if provider_name != current.provider {
current.provider = provider_name.clone();
set_route_selection(ctx, &sender_key, current.clone());
}
format!(
format!(
"Provider switched to `{provider_name}` for this sender session. Current model is `{}`.\nUse `/model <model-id>` to set a provider-compatible model.",
current.model
)
}
Err(err) => {
let safe_err = providers::sanitize_api_error(&err.to_string());
format!(
}
Err(err) => {
let safe_err = providers::sanitize_api_error(&err.to_string());
format!(
"Failed to initialize provider `{provider_name}`. Route unchanged.\nDetails: {safe_err}"
)
}
}
},
}
None => format!(
"Unknown provider `{raw_provider}`. Use `/models` to list valid providers."
),
@@ -1311,6 +1351,7 @@ async fn handle_runtime_command_if_needed(
}) {
current.provider = route.provider.clone();
current.model = route.model.clone();
current.api_key = route.api_key.clone();
} else {
current.model = model.clone();
}
@@ -1493,7 +1534,68 @@ fn sanitize_channel_response(response: &str, tools: &[Box<dyn Tool>]) -> String
.iter()
.map(|tool| tool.name().to_ascii_lowercase())
.collect();
strip_isolated_tool_json_artifacts(response, &known_tool_names)
// Strip XML-style tool-call tags (e.g. <tool_call>...</tool_call>)
let stripped_xml = strip_tool_call_tags(response);
// Strip isolated tool-call JSON artifacts
let stripped_json = strip_isolated_tool_json_artifacts(&stripped_xml, &known_tool_names);
// Strip leading narration lines that announce tool usage
strip_tool_narration(&stripped_json)
}
/// Remove leading lines that narrate tool usage (e.g. "Let me check the weather for you.").
///
/// Only strips lines from the very beginning of the message that match common
/// narration patterns, so genuine content is preserved.
fn strip_tool_narration(message: &str) -> String {
let narration_prefixes: &[&str] = &[
"let me ",
"i'll ",
"i will ",
"i am going to ",
"i'm going to ",
"searching ",
"looking up ",
"fetching ",
"checking ",
"using the ",
"using my ",
"one moment",
"hold on",
"just a moment",
"give me a moment",
"allow me to ",
];
let mut result_lines: Vec<&str> = Vec::new();
let mut past_narration = false;
for line in message.lines() {
if past_narration {
result_lines.push(line);
continue;
}
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
let lower = trimmed.to_lowercase();
if narration_prefixes.iter().any(|p| lower.starts_with(p)) {
// Skip this narration line
continue;
}
// First non-narration, non-empty line — keep everything from here
past_narration = true;
result_lines.push(line);
}
let joined = result_lines.join("\n");
let trimmed = joined.trim();
if trimmed.is_empty() && !message.trim().is_empty() {
// If stripping removed everything, return original to avoid empty reply
message.to_string()
} else {
trimmed.to_string()
}
}
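The narration-stripping pass can be verified with a condensed, runnable sketch (the prefix list here is shortened for brevity; the full list is in the function above):

```rust
// Condensed sketch of strip_tool_narration: drop leading lines that
// announce tool usage, keep everything from the first real content line,
// and fall back to the original text if stripping would empty the reply.
fn strip_tool_narration(message: &str) -> String {
    let prefixes = ["let me ", "i'll ", "checking ", "one moment"];
    let mut out: Vec<&str> = Vec::new();
    let mut past_narration = false;
    for line in message.lines() {
        if past_narration {
            out.push(line);
            continue;
        }
        let trimmed = line.trim();
        if trimmed.is_empty() {
            continue;
        }
        let lower = trimmed.to_lowercase();
        if prefixes.iter().any(|p| lower.starts_with(p)) {
            continue; // narration line: skip
        }
        past_narration = true;
        out.push(line);
    }
    let joined = out.join("\n");
    let trimmed = joined.trim();
    if trimmed.is_empty() && !message.trim().is_empty() {
        message.to_string() // never send back an empty reply
    } else {
        trimmed.to_string()
    }
}

fn main() {
    let msg = "Let me check the weather for you.\n\nIt is 21C and sunny.";
    assert_eq!(strip_tool_narration(msg), "It is 21C and sunny.");
    // All-narration input falls back to the original text.
    assert_eq!(strip_tool_narration("One moment please"), "One moment please");
    println!("ok");
}
```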
fn is_tool_call_payload(value: &serde_json::Value, known_tool_names: &HashSet<String>) -> bool {
@@ -1855,12 +1957,19 @@ async fn process_channel_message(
route = ChannelRouteSelection {
provider: matched_route.provider.clone(),
model: matched_route.model.clone(),
api_key: matched_route.api_key.clone(),
};
}
}
let runtime_defaults = runtime_defaults_snapshot(ctx.as_ref());
let active_provider = match get_or_create_provider(ctx.as_ref(), &route.provider).await {
let active_provider = match get_or_create_provider(
ctx.as_ref(),
&route.provider,
route.api_key.as_deref(),
)
.await
{
Ok(provider) => provider,
Err(err) => {
let safe_err = providers::sanitize_api_error(&err.to_string());
@@ -2142,6 +2251,7 @@ async fn process_channel_message(
},
ctx.tool_call_dedup_exempt.as_ref(),
ctx.activated_tools.as_ref(),
None,
),
) => LlmExecutionResult::Completed(result),
};
@@ -2691,6 +2801,17 @@ pub fn build_system_prompt_with_mode(
use std::fmt::Write;
let mut prompt = String::with_capacity(8192);
// ── 0. Anti-narration (top priority) ───────────────────────
prompt.push_str(
"## CRITICAL: No Tool Narration\n\n\
NEVER narrate, announce, describe, or explain your tool usage to the user. \
Do NOT say things like 'Let me check...', 'I will use http_request to...', \
'I'll fetch that for you', 'Searching now...', or 'Using the web_search tool'. \
The user must ONLY see the final answer. Tool calls are invisible infrastructure — \
never reference them. If you catch yourself starting a sentence about what tool \
you are about to use or just used, DELETE it and give the answer directly.\n\n",
);
// ── 1. Tooling ──────────────────────────────────────────────
if !tools.is_empty() {
prompt.push_str("## Tools\n\n");
@@ -2830,7 +2951,9 @@ pub fn build_system_prompt_with_mode(
prompt.push_str("- You are running as a messaging bot. Your response is automatically sent back to the user's channel.\n");
prompt.push_str("- You do NOT need to ask permission to respond — just respond directly.\n");
prompt.push_str("- NEVER repeat, describe, or echo credentials, tokens, API keys, or secrets in your responses.\n");
prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n\n");
prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n");
prompt.push_str("- When a user sends a voice note, it is automatically transcribed to text. Your text reply is automatically converted to a voice note and sent back. Do NOT attempt to generate audio yourself — TTS is handled by the channel.\n");
prompt.push_str("- NEVER narrate or describe your tool usage. Do NOT say 'Let me fetch...', 'I will use...', 'Searching...', or similar. Give the FINAL ANSWER only — no intermediate steps, no tool mentions, no progress updates.\n\n");
if prompt.is_empty() {
"You are ZeroClaw, a fast and efficient AI assistant built in Rust. Be helpful, concise, and direct."
@@ -3087,7 +3210,7 @@ pub(crate) async fn handle_command(command: crate::ChannelCommands, config: &Con
anyhow::bail!("Remove channel '{name}' — edit ~/.zeroclaw/config.toml directly");
}
crate::ChannelCommands::BindTelegram { identity } => {
bind_telegram_identity(config, &identity).await
Box::pin(bind_telegram_identity(config, &identity)).await
}
crate::ChannelCommands::Send {
message,
@@ -3106,12 +3229,16 @@ fn build_channel_by_id(config: &Config, channel_id: &str) -> Result<Arc<dyn Chan
.telegram
.as_ref()
.context("Telegram channel is not configured")?;
let ack = tg
.ack_reactions
.unwrap_or(config.channels_config.ack_reactions);
Ok(Arc::new(
TelegramChannel::new(
tg.bot_token.clone(),
tg.allowed_users.clone(),
tg.mention_only,
)
.with_ack_reactions(ack)
.with_streaming(tg.stream_mode, tg.draft_update_interval_ms)
.with_transcription(config.transcription.clone())
.with_workspace_dir(config.workspace_dir.clone()),
@@ -3199,6 +3326,9 @@ fn collect_configured_channels(
let mut channels = Vec::new();
if let Some(ref tg) = config.channels_config.telegram {
let ack = tg
.ack_reactions
.unwrap_or(config.channels_config.ack_reactions);
channels.push(ConfiguredChannel {
display_name: "Telegram",
channel: Arc::new(
@@ -3207,6 +3337,7 @@ fn collect_configured_channels(
tg.allowed_users.clone(),
tg.mention_only,
)
.with_ack_reactions(ack)
.with_streaming(tg.stream_mode, tg.draft_update_interval_ms)
.with_transcription(config.transcription.clone())
.with_workspace_dir(config.workspace_dir.clone()),
@@ -3340,7 +3471,8 @@ fn collect_configured_channels(
wa.pair_code.clone(),
wa.allowed_numbers.clone(),
)
.with_transcription(config.transcription.clone()),
.with_transcription(config.transcription.clone())
.with_tts(config.tts.clone()),
),
});
} else {
@@ -3546,6 +3678,43 @@ fn collect_configured_channels(
}
}
if let Some(ref rd) = config.channels_config.reddit {
channels.push(ConfiguredChannel {
display_name: "Reddit",
channel: Arc::new(RedditChannel::new(
rd.client_id.clone(),
rd.client_secret.clone(),
rd.refresh_token.clone(),
rd.username.clone(),
rd.subreddit.clone(),
)),
});
}
if let Some(ref bs) = config.channels_config.bluesky {
channels.push(ConfiguredChannel {
display_name: "Bluesky",
channel: Arc::new(BlueskyChannel::new(
bs.handle.clone(),
bs.app_password.clone(),
)),
});
}
if let Some(ref wh) = config.channels_config.webhook {
channels.push(ConfiguredChannel {
display_name: "Webhook",
channel: Arc::new(WebhookChannel::new(
wh.port,
wh.listen_path.clone(),
wh.send_url.clone(),
wh.send_method.clone(),
wh.auth_header.clone(),
wh.secret.clone(),
)),
});
}
channels
}
@@ -3619,6 +3788,7 @@ pub async fn start_channels(config: Config) -> Result<()> {
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
reasoning_effort: config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
@@ -3769,6 +3939,16 @@ pub async fn start_channels(config: Config) -> Result<()> {
let skills = crate::skills::load_skills_with_config(&workspace, &config);
// ── Load locale-aware tool descriptions ────────────────────────
let i18n_locale = config
.locale
.as_deref()
.filter(|s| !s.is_empty())
.map(ToString::to_string)
.unwrap_or_else(crate::i18n::detect_locale);
let i18n_search_dirs = crate::i18n::default_search_dirs(&workspace);
let i18n_descs = crate::i18n::ToolDescriptions::load(&i18n_locale, &i18n_search_dirs);
// Collect tool descriptions for the prompt
let mut tool_descs: Vec<(&str, &str)> = vec![
(
@@ -3848,7 +4028,10 @@ pub async fn start_channels(config: Config) -> Result<()> {
config.skills.prompt_injection_mode,
);
if !native_tools {
system_prompt.push_str(&build_tool_instructions(tools_registry.as_ref()));
system_prompt.push_str(&build_tool_instructions(
tools_registry.as_ref(),
Some(&i18n_descs),
));
}
// Append deferred MCP tool names so the LLM knows what is available
@@ -5483,6 +5666,7 @@ BTC is currently around $65,000 based on latest tool output."#
ChannelRouteSelection {
provider: "openrouter".to_string(),
model: "route-model".to_string(),
api_key: None,
},
);
@@ -6597,7 +6781,7 @@ BTC is currently around $65,000 based on latest tool output."#
"build_system_prompt should not emit protocol block directly"
);
prompt.push_str(&build_tool_instructions(&[]));
prompt.push_str(&build_tool_instructions(&[], None));
assert_eq!(
prompt.matches("## Tool Use Protocol").count(),
@@ -8505,6 +8689,7 @@ This is an example JSON object for profile settings."#;
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
ack_reactions: None,
});
match build_channel_by_id(&config, "telegram") {
Ok(channel) => assert_eq!(channel.name(), "telegram"),
+504
@@ -0,0 +1,504 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use anyhow::{bail, Result};
use async_trait::async_trait;
use parking_lot::Mutex;
use serde::Deserialize;
use std::time::{Duration, Instant};
/// Reddit channel — polls for mentions, DMs, and comment replies via Reddit OAuth2 API.
pub struct RedditChannel {
client_id: String,
client_secret: String,
refresh_token: String,
username: String,
subreddit: Option<String>,
auth: Mutex<RedditAuth>,
}
struct RedditAuth {
access_token: String,
expires_at: Instant,
}
#[derive(Deserialize)]
struct RedditTokenResponse {
access_token: String,
expires_in: u64,
}
#[derive(Deserialize)]
struct RedditListing {
data: RedditListingData,
}
#[derive(Deserialize)]
struct RedditListingData {
children: Vec<RedditChild>,
}
#[derive(Deserialize)]
struct RedditChild {
data: RedditItemData,
}
#[allow(dead_code)]
#[derive(Deserialize)]
struct RedditItemData {
name: Option<String>,
author: Option<String>,
body: Option<String>,
subject: Option<String>,
parent_id: Option<String>,
link_id: Option<String>,
subreddit: Option<String>,
created_utc: Option<f64>,
new: Option<bool>,
#[serde(rename = "type")]
message_type: Option<String>,
context: Option<String>,
}
const REDDIT_API_BASE: &str = "https://oauth.reddit.com";
const REDDIT_TOKEN_URL: &str = "https://www.reddit.com/api/v1/access_token";
const USER_AGENT: &str = "zeroclaw:channel:v0.1.0 (by /u/zeroclaw-bot)";
/// Reddit enforces a rate limit of 60 requests per minute; polling every
/// 5 seconds stays well under that budget.
const POLL_INTERVAL: Duration = Duration::from_secs(5);
impl RedditChannel {
pub fn new(
client_id: String,
client_secret: String,
refresh_token: String,
username: String,
subreddit: Option<String>,
) -> Self {
Self {
client_id,
client_secret,
refresh_token,
username,
subreddit,
auth: Mutex::new(RedditAuth {
access_token: String::new(),
expires_at: Instant::now(),
}),
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.reddit")
}
/// Refresh the OAuth2 access token using the refresh token.
async fn refresh_access_token(&self) -> Result<()> {
let client = self.http_client();
let resp = client
.post(REDDIT_TOKEN_URL)
.basic_auth(&self.client_id, Some(&self.client_secret))
.header("User-Agent", USER_AGENT)
.form(&[
("grant_type", "refresh_token"),
("refresh_token", &self.refresh_token),
])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Reddit token refresh failed ({status}): {body}");
}
let token_resp: RedditTokenResponse = resp.json().await?;
let mut auth = self.auth.lock();
auth.access_token = token_resp.access_token;
auth.expires_at =
Instant::now() + Duration::from_secs(token_resp.expires_in.saturating_sub(60));
Ok(())
}
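The expiry bookkeeping above subtracts a 60-second safety margin so a request already in flight never races the real token expiry. A small sketch of that arithmetic (helper name is illustrative):

```rust
use std::time::{Duration, Instant};

// Treat the token as expired 60s early; saturating_sub guards against
// pathological expires_in values below 60 seconds.
fn expiry_instant(expires_in_secs: u64) -> Instant {
    Instant::now() + Duration::from_secs(expires_in_secs.saturating_sub(60))
}

fn main() {
    let now = Instant::now();
    let exp = expiry_instant(3600);
    let remaining = exp.duration_since(now).as_secs();
    // Roughly 3540s left (3600 - 60), with a little slack for runtime.
    assert!(remaining >= 3539 && remaining <= 3541);
    // Very short lifetimes saturate to "already expired".
    assert!(expiry_instant(30) <= Instant::now() + Duration::from_secs(1));
    println!("ok");
}
```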
/// Get a valid access token, refreshing if expired.
async fn get_access_token(&self) -> Result<String> {
{
let auth = self.auth.lock();
if !auth.access_token.is_empty() && Instant::now() < auth.expires_at {
return Ok(auth.access_token.clone());
}
}
self.refresh_access_token().await?;
let auth = self.auth.lock();
Ok(auth.access_token.clone())
}
/// Fetch unread inbox items (mentions, DMs, comment replies).
async fn fetch_inbox(&self) -> Result<Vec<RedditChild>> {
let token = self.get_access_token().await?;
let client = self.http_client();
let resp = client
.get(format!("{REDDIT_API_BASE}/message/unread"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.query(&[("limit", "25")])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
tracing::warn!("Reddit inbox fetch failed ({status}): {body}");
return Ok(Vec::new());
}
let listing: RedditListing = resp.json().await?;
Ok(listing.data.children)
}
/// Mark inbox items as read.
async fn mark_read(&self, fullnames: &[String]) -> Result<()> {
if fullnames.is_empty() {
return Ok(());
}
let token = self.get_access_token().await?;
let client = self.http_client();
let ids = fullnames.join(",");
let resp = client
.post(format!("{REDDIT_API_BASE}/api/read_message"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.form(&[("id", ids.as_str())])
.send()
.await?;
if !resp.status().is_success() {
tracing::warn!("Reddit mark_read failed: {}", resp.status());
}
Ok(())
}
/// Parse a Reddit inbox item into a ChannelMessage.
fn parse_item(&self, item: &RedditItemData) -> Option<ChannelMessage> {
let author = item.author.as_deref().unwrap_or("");
let body = item.body.as_deref().unwrap_or("");
let name = item.name.as_deref().unwrap_or("");
// Skip our own messages and items missing an author or body
if author.eq_ignore_ascii_case(&self.username) || author.is_empty() || body.is_empty() {
return None;
}
// If a subreddit filter is set, skip items from other subreddits
if let Some(ref sub) = self.subreddit {
if let Some(ref item_sub) = item.subreddit {
if !item_sub.eq_ignore_ascii_case(sub) {
return None;
}
}
}
// Determine reply target: for comment replies use the parent thing name,
// for DMs reply to the author.
let reply_target =
if item.message_type.as_deref() == Some("comment_reply") || item.parent_id.is_some() {
// For comment replies, the recipient is the parent fullname
item.parent_id.clone().unwrap_or_else(|| name.to_string())
} else {
// For DMs, reply to the author
author.to_string()
};
#[allow(clippy::cast_possible_truncation, clippy::cast_sign_loss)]
let timestamp = item.created_utc.unwrap_or(0.0) as u64;
Some(ChannelMessage {
id: format!("reddit_{name}"),
sender: author.to_string(),
reply_target,
content: body.to_string(),
channel: "reddit".to_string(),
timestamp,
thread_ts: item.parent_id.clone(),
})
}
}
#[async_trait]
impl Channel for RedditChannel {
fn name(&self) -> &str {
"reddit"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let token = self.get_access_token().await?;
let client = self.http_client();
// If the recipient looks like a Reddit fullname (t1_ comment, t3_ link,
// t4_ message), reply to that thing via /api/comment.
// Otherwise treat it as a username and send a DM via /api/compose.
if message.recipient.starts_with("t1_")
|| message.recipient.starts_with("t3_")
|| message.recipient.starts_with("t4_")
{
// Comment reply
let resp = client
.post(format!("{REDDIT_API_BASE}/api/comment"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.form(&[
("thing_id", message.recipient.as_str()),
("text", &message.content),
])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Reddit comment reply failed ({status}): {body}");
}
} else {
// Direct message
let subject = message
.subject
.as_deref()
.unwrap_or("Message from ZeroClaw");
let resp = client
.post(format!("{REDDIT_API_BASE}/api/compose"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.form(&[
("to", message.recipient.as_str()),
("subject", subject),
("text", &message.content),
])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Reddit DM failed ({status}): {body}");
}
}
Ok(())
}
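The dispatch rule used by `send` hinges entirely on Reddit's fullname type prefixes. A tiny standalone predicate makes the branch explicit (the helper name is illustrative, not from the source):

```rust
// Reddit "fullnames" carry a type prefix: t1_ = comment, t3_ = link/post,
// t4_ = private message. A recipient with one of these prefixes is
// answered as a threaded reply; anything else is treated as a username.
fn is_thing_reply(recipient: &str) -> bool {
    ["t1_", "t3_", "t4_"].iter().any(|p| recipient.starts_with(p))
}

fn main() {
    assert!(is_thing_reply("t1_abc123")); // comment reply
    assert!(is_thing_reply("t3_post1")); // reply on a link/post
    assert!(!is_thing_reply("some_user")); // plain username -> DM
    println!("ok");
}
```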
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
// Initial auth
self.refresh_access_token().await?;
tracing::info!(
"Reddit channel listening as u/{} {}...",
self.username,
self.subreddit
.as_ref()
.map(|s| format!("in r/{s}"))
.unwrap_or_default()
);
loop {
tokio::time::sleep(POLL_INTERVAL).await;
let items = match self.fetch_inbox().await {
Ok(items) => items,
Err(e) => {
tracing::warn!("Reddit poll error: {e}");
continue;
}
};
let mut read_ids = Vec::new();
for child in &items {
if let Some(ref name) = child.data.name {
read_ids.push(name.clone());
}
if let Some(msg) = self.parse_item(&child.data) {
if tx.send(msg).await.is_err() {
return Ok(());
}
}
}
if let Err(e) = self.mark_read(&read_ids).await {
tracing::warn!("Reddit mark_read error: {e}");
}
}
}
async fn health_check(&self) -> bool {
self.get_access_token().await.is_ok()
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> RedditChannel {
RedditChannel::new(
"client_id".into(),
"client_secret".into(),
"refresh_token".into(),
"testbot".into(),
None,
)
}
fn make_channel_with_sub(sub: &str) -> RedditChannel {
RedditChannel::new(
"client_id".into(),
"client_secret".into(),
"refresh_token".into(),
"testbot".into(),
Some(sub.into()),
)
}
#[test]
fn parse_comment_reply() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t1_abc123".into()),
author: Some("user1".into()),
body: Some("hello bot".into()),
subject: None,
parent_id: Some("t1_parent1".into()),
link_id: Some("t3_post1".into()),
subreddit: Some("rust".into()),
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: Some("comment_reply".into()),
context: None,
};
let msg = ch.parse_item(&item).unwrap();
assert_eq!(msg.sender, "user1");
assert_eq!(msg.content, "hello bot");
assert_eq!(msg.reply_target, "t1_parent1");
assert_eq!(msg.channel, "reddit");
assert_eq!(msg.id, "reddit_t1_abc123");
}
#[test]
fn parse_dm() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t4_dm456".into()),
author: Some("user2".into()),
body: Some("private message".into()),
subject: Some("Hello".into()),
parent_id: None,
link_id: None,
subreddit: None,
created_utc: Some(1_700_000_100.0),
new: Some(true),
message_type: None,
context: None,
};
let msg = ch.parse_item(&item).unwrap();
assert_eq!(msg.sender, "user2");
assert_eq!(msg.content, "private message");
assert_eq!(msg.reply_target, "user2"); // DM reply goes to author
}
#[test]
fn skip_self_messages() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t1_self".into()),
author: Some("testbot".into()),
body: Some("my own message".into()),
subject: None,
parent_id: None,
link_id: None,
subreddit: None,
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&item).is_none());
}
#[test]
fn skip_empty_body() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t1_empty".into()),
author: Some("user1".into()),
body: Some(String::new()),
subject: None,
parent_id: None,
link_id: None,
subreddit: None,
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&item).is_none());
}
#[test]
fn subreddit_filter() {
let ch = make_channel_with_sub("rust");
let item = RedditItemData {
name: Some("t1_other".into()),
author: Some("user1".into()),
body: Some("hello".into()),
subject: None,
parent_id: None,
link_id: None,
subreddit: Some("python".into()),
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&item).is_none());
let matching_item = RedditItemData {
name: Some("t1_match".into()),
author: Some("user1".into()),
body: Some("hello".into()),
subject: None,
parent_id: None,
link_id: None,
subreddit: Some("rust".into()),
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&matching_item).is_some());
}
#[test]
fn send_message_formatting() {
// Verify SendMessage can be constructed for both DM and comment reply
let dm = SendMessage::new("hello", "user1");
assert_eq!(dm.recipient, "user1");
assert_eq!(dm.content, "hello");
let reply = SendMessage::new("response", "t1_abc123");
assert!(reply.recipient.starts_with("t1_"));
}
}
+5
@@ -76,6 +76,11 @@ pub trait SessionBackend: Send + Sync {
fn search(&self, _query: &SessionQuery) -> Vec<SessionMetadata> {
Vec::new()
}
/// Delete all messages for a session. Returns `true` if the session existed.
fn delete_session(&self, _session_key: &str) -> std::io::Result<bool> {
Ok(false)
}
}
#[cfg(test)]
+55
@@ -288,6 +288,39 @@ impl SessionBackend for SqliteSessionBackend {
Ok(count)
}
fn delete_session(&self, session_key: &str) -> std::io::Result<bool> {
let conn = self.conn.lock();
// Check if session exists
let exists: bool = conn
.query_row(
"SELECT COUNT(*) > 0 FROM session_metadata WHERE session_key = ?1",
params![session_key],
|row| row.get(0),
)
.unwrap_or(false);
if !exists {
return Ok(false);
}
// Delete messages (FTS5 trigger handles sessions_fts cleanup)
conn.execute(
"DELETE FROM sessions WHERE session_key = ?1",
params![session_key],
)
.map_err(std::io::Error::other)?;
// Delete metadata
conn.execute(
"DELETE FROM session_metadata WHERE session_key = ?1",
params![session_key],
)
.map_err(std::io::Error::other)?;
Ok(true)
}
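The `delete_session` contract (return `Ok(true)` only when the session existed, and remove both message rows and metadata) can be sketched without SQLite using an in-memory store. This is a hypothetical stand-in, not the crate's backend:

```rust
use std::collections::HashMap;

// In-memory sketch of the SessionBackend::delete_session contract.
struct MemBackend {
    messages: HashMap<String, Vec<String>>,
    metadata: HashMap<String, String>,
}

impl MemBackend {
    fn delete_session(&mut self, session_key: &str) -> std::io::Result<bool> {
        // Existence check first, mirroring the SQLite COUNT(*) query.
        if !self.metadata.contains_key(session_key) {
            return Ok(false);
        }
        self.messages.remove(session_key);
        self.metadata.remove(session_key);
        Ok(true)
    }
}

fn main() {
    let mut b = MemBackend {
        messages: HashMap::from([("s1".to_string(), vec!["hello".to_string()])]),
        metadata: HashMap::from([("s1".to_string(), "meta".to_string())]),
    };
    assert!(b.delete_session("s1").unwrap());
    assert!(b.messages.is_empty() && b.metadata.is_empty());
    assert!(!b.delete_session("missing").unwrap()); // absent -> false
    println!("ok");
}
```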
fn search(&self, query: &SessionQuery) -> Vec<SessionMetadata> {
let Some(keyword) = &query.keyword else {
return self.list_sessions_with_metadata();
@@ -473,6 +506,28 @@ mod tests {
assert_eq!(sessions[0], "new_session");
}
#[test]
fn delete_session_removes_all_data() {
let tmp = TempDir::new().unwrap();
let backend = SqliteSessionBackend::new(tmp.path()).unwrap();
backend.append("s1", &ChatMessage::user("hello")).unwrap();
backend.append("s1", &ChatMessage::assistant("hi")).unwrap();
backend.append("s2", &ChatMessage::user("other")).unwrap();
assert!(backend.delete_session("s1").unwrap());
assert!(backend.load("s1").is_empty());
assert_eq!(backend.list_sessions().len(), 1);
assert_eq!(backend.list_sessions()[0], "s2");
}
#[test]
fn delete_session_returns_false_for_missing() {
let tmp = TempDir::new().unwrap();
let backend = SqliteSessionBackend::new(tmp.path()).unwrap();
assert!(!backend.delete_session("nonexistent").unwrap());
}
#[test]
fn migrate_from_jsonl_imports_and_renames() {
let tmp = TempDir::new().unwrap();
+39 -9
@@ -332,6 +332,7 @@ pub struct TelegramChannel {
transcription: Option<crate::config::TranscriptionConfig>,
voice_transcriptions: Mutex<std::collections::HashMap<String, String>>,
workspace_dir: Option<std::path::PathBuf>,
ack_reactions: bool,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
@@ -370,9 +371,16 @@ impl TelegramChannel {
transcription: None,
voice_transcriptions: Mutex::new(std::collections::HashMap::new()),
workspace_dir: None,
ack_reactions: true,
}
}
/// Configure whether Telegram-native acknowledgement reactions are sent.
pub fn with_ack_reactions(mut self, enabled: bool) -> Self {
self.ack_reactions = enabled;
self
}
/// Configure workspace directory for saving downloaded attachments.
pub fn with_workspace_dir(mut self, dir: std::path::PathBuf) -> Self {
self.workspace_dir = Some(dir);
@@ -751,7 +759,7 @@ impl TelegramChannel {
if let Some(identity) = bind_identity {
self.add_allowed_identity_runtime(&identity);
match self.persist_allowed_identity(&identity).await {
match Box::pin(self.persist_allowed_identity(&identity)).await {
Ok(()) => {
let _ = self
.send(&SendMessage::new(
@@ -2685,17 +2693,19 @@ Ensure only one `zeroclaw` process is using this bot token."
} else if let Some(m) = self.try_parse_attachment_message(update).await {
m
} else {
self.handle_unauthorized_message(update).await;
Box::pin(self.handle_unauthorized_message(update)).await;
continue;
};
if let Some((reaction_chat_id, reaction_message_id)) =
Self::extract_update_message_target(update)
{
self.try_add_ack_reaction_nonblocking(
reaction_chat_id,
reaction_message_id,
);
if self.ack_reactions {
if let Some((reaction_chat_id, reaction_message_id)) =
Self::extract_update_message_target(update)
{
self.try_add_ack_reaction_nonblocking(
reaction_chat_id,
reaction_message_id,
);
}
}
// Send "typing" indicator immediately when we receive a message
@@ -4681,4 +4691,24 @@ mod tests {
// the agent loop will return ProviderCapabilityError before calling
// the provider, and the channel will send "⚠️ Error: ..." to the user.
}
#[test]
fn ack_reactions_defaults_to_true() {
let ch = TelegramChannel::new("token".into(), vec!["*".into()], false);
assert!(ch.ack_reactions);
}
#[test]
fn with_ack_reactions_false_disables_reactions() {
let ch =
TelegramChannel::new("token".into(), vec!["*".into()], false).with_ack_reactions(false);
assert!(!ch.ack_reactions);
}
#[test]
fn with_ack_reactions_true_keeps_reactions() {
let ch =
TelegramChannel::new("token".into(), vec!["*".into()], false).with_ack_reactions(true);
assert!(ch.ack_reactions);
}
}
+800 -34
@@ -1,11 +1,19 @@
use std::collections::HashMap;
use anyhow::{bail, Context, Result};
use async_trait::async_trait;
use reqwest::multipart::{Form, Part};
use crate::config::TranscriptionConfig;
/// Maximum upload size accepted by the Groq Whisper API (25 MB).
/// Maximum upload size accepted by most Whisper-compatible APIs (25 MB).
const MAX_AUDIO_BYTES: usize = 25 * 1024 * 1024;
/// Request timeout for transcription API calls (seconds).
const TRANSCRIPTION_TIMEOUT_SECS: u64 = 120;
// ── Audio utilities ─────────────────────────────────────────────
/// Map file extension to MIME type for Whisper-compatible transcription APIs.
fn mime_for_audio(extension: &str) -> Option<&'static str> {
match extension.to_ascii_lowercase().as_str() {
@@ -31,16 +39,51 @@ fn normalize_audio_filename(file_name: &str) -> String {
}
}
/// Transcribe audio bytes via a Whisper-compatible transcription API.
/// Resolve the API key for voice transcription.
///
/// Returns the transcribed text on success. Requires `GROQ_API_KEY` in the
/// environment. The caller is responsible for enforcing duration limits
/// *before* downloading the file; this function enforces the byte-size cap.
pub async fn transcribe_audio(
audio_data: Vec<u8>,
file_name: &str,
config: &TranscriptionConfig,
) -> Result<String> {
/// Priority order:
/// 1. Explicit `config.api_key` (if set and non-empty).
/// 2. Provider-specific env var based on `api_url`:
/// - URL contains "openai.com" -> `OPENAI_API_KEY`
/// - URL contains "groq.com" -> `GROQ_API_KEY`
/// 3. Fallback chain: `TRANSCRIPTION_API_KEY` -> `GROQ_API_KEY` -> `OPENAI_API_KEY`.
fn resolve_transcription_api_key(config: &TranscriptionConfig) -> Result<String> {
// 1. Explicit config key
if let Some(ref key) = config.api_key {
let trimmed = key.trim();
if !trimmed.is_empty() {
return Ok(trimmed.to_string());
}
}
// 2. Provider-specific env var based on API URL
if config.api_url.contains("openai.com") {
if let Ok(key) = std::env::var("OPENAI_API_KEY") {
return Ok(key);
}
} else if config.api_url.contains("groq.com") {
if let Ok(key) = std::env::var("GROQ_API_KEY") {
return Ok(key);
}
}
// 3. Fallback chain
for var in ["TRANSCRIPTION_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY"] {
if let Ok(key) = std::env::var(var) {
return Ok(key);
}
}
bail!(
"No API key found for voice transcription — set one of: \
transcription.api_key in config, TRANSCRIPTION_API_KEY, GROQ_API_KEY, or OPENAI_API_KEY"
);
}
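The three-tier key resolution is easiest to test with the environment modeled as a plain map rather than real process env vars. A sketch of the same priority chain (function signature is illustrative):

```rust
use std::collections::HashMap;

// 1. explicit config key; 2. provider-specific var inferred from the URL;
// 3. generic fallback chain, in order.
fn resolve_key(
    config_key: Option<&str>,
    api_url: &str,
    env: &HashMap<&str, &str>,
) -> Option<String> {
    if let Some(k) = config_key.map(str::trim).filter(|k| !k.is_empty()) {
        return Some(k.to_string());
    }
    let provider_var = if api_url.contains("openai.com") {
        Some("OPENAI_API_KEY")
    } else if api_url.contains("groq.com") {
        Some("GROQ_API_KEY")
    } else {
        None
    };
    for var in provider_var
        .into_iter()
        .chain(["TRANSCRIPTION_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY"])
    {
        if let Some(v) = env.get(var) {
            return Some((*v).to_string());
        }
    }
    None
}

fn main() {
    let env = HashMap::from([("GROQ_API_KEY", "gk"), ("OPENAI_API_KEY", "ok")]);
    // Explicit config key always wins.
    assert_eq!(resolve_key(Some("cfg"), "https://api.groq.com", &env).as_deref(), Some("cfg"));
    // URL selects the provider-specific variable.
    assert_eq!(resolve_key(None, "https://api.openai.com/v1", &env).as_deref(), Some("ok"));
    // Unknown URL falls through the generic chain (GROQ before OPENAI).
    assert_eq!(resolve_key(None, "https://example.com", &env).as_deref(), Some("gk"));
    println!("ok");
}
```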
/// Validate audio data and resolve MIME type from file name.
///
/// Returns `(normalized_filename, mime_type)` on success.
fn validate_audio(audio_data: &[u8], file_name: &str) -> Result<(String, &'static str)> {
if audio_data.len() > MAX_AUDIO_BYTES {
bail!(
"Audio file too large ({} bytes, max {MAX_AUDIO_BYTES})",
@@ -59,37 +102,494 @@ pub async fn transcribe_audio(
)
})?;
let api_key = std::env::var("GROQ_API_KEY").context(
"GROQ_API_KEY environment variable is not set — required for voice transcription",
)?;
Ok((normalized_name, mime))
}
// ── TranscriptionProvider trait ─────────────────────────────────
/// Trait for speech-to-text provider implementations.
#[async_trait]
pub trait TranscriptionProvider: Send + Sync {
/// Human-readable provider name (e.g. "groq", "openai").
fn name(&self) -> &str;
/// Transcribe raw audio bytes. `file_name` includes the extension for
/// format detection (e.g. "voice.ogg").
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String>;
/// List of supported audio file extensions.
fn supported_formats(&self) -> Vec<String> {
vec![
"flac", "mp3", "mpeg", "mpga", "mp4", "m4a", "ogg", "oga", "opus", "wav", "webm",
]
.into_iter()
.map(String::from)
.collect()
}
}
// ── GroqProvider ────────────────────────────────────────────────
/// Groq Whisper API provider (default, backward-compatible with existing config).
pub struct GroqProvider {
api_url: String,
model: String,
api_key: String,
language: Option<String>,
}
impl GroqProvider {
/// Build from the existing `TranscriptionConfig` fields.
///
/// Credential resolution order:
/// 1. `config.api_key`
/// 2. `GROQ_API_KEY` environment variable (backward compatibility)
pub fn from_config(config: &TranscriptionConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.or_else(|| {
std::env::var("GROQ_API_KEY")
.ok()
.map(|v| v.trim().to_string())
.filter(|v| !v.is_empty())
})
.context(
"Missing transcription API key: set [transcription].api_key or GROQ_API_KEY environment variable",
)?;
Ok(Self {
api_url: config.api_url.clone(),
model: config.model.clone(),
api_key,
language: config.language.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for GroqProvider {
fn name(&self) -> &str {
"groq"
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (normalized_name, mime) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.groq");
let file_part = Part::bytes(audio_data.to_vec())
.file_name(normalized_name)
.mime_str(mime)?;
let mut form = Form::new()
.part("file", file_part)
.text("model", self.model.clone())
.text("response_format", "json");
if let Some(ref lang) = self.language {
form = form.text("language", lang.clone());
}
let resp = client
.post(&self.api_url)
.bearer_auth(&self.api_key)
.multipart(form)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to Groq")?;
parse_whisper_response(resp).await
}
}
// ── OpenAiWhisperProvider ───────────────────────────────────────
/// OpenAI Whisper API provider.
pub struct OpenAiWhisperProvider {
api_key: String,
model: String,
}
impl OpenAiWhisperProvider {
pub fn from_config(config: &crate::config::OpenAiSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing OpenAI STT API key: set [transcription.openai].api_key")?;
Ok(Self {
api_key,
model: config.model.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for OpenAiWhisperProvider {
fn name(&self) -> &str {
"openai"
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (normalized_name, mime) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.openai");
let file_part = Part::bytes(audio_data.to_vec())
.file_name(normalized_name)
.mime_str(mime)?;
let form = Form::new()
.part("file", file_part)
.text("model", self.model.clone())
.text("response_format", "json");
let resp = client
.post("https://api.openai.com/v1/audio/transcriptions")
.bearer_auth(&self.api_key)
.multipart(form)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to OpenAI")?;
parse_whisper_response(resp).await
}
}
// ── DeepgramProvider ────────────────────────────────────────────
/// Deepgram STT API provider.
pub struct DeepgramProvider {
api_key: String,
model: String,
}
impl DeepgramProvider {
pub fn from_config(config: &crate::config::DeepgramSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing Deepgram API key: set [transcription.deepgram].api_key")?;
Ok(Self {
api_key,
model: config.model.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for DeepgramProvider {
fn name(&self) -> &str {
"deepgram"
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (_, mime) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.deepgram");
let url = format!(
"https://api.deepgram.com/v1/listen?model={}&punctuate=true",
self.model
);
let resp = client
.post(&url)
.header("Authorization", format!("Token {}", self.api_key))
.header("Content-Type", mime)
.body(audio_data.to_vec())
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to Deepgram")?;
let status = resp.status();
let body: serde_json::Value = resp
.json()
.await
.context("Failed to parse Deepgram response")?;
if !status.is_success() {
let error_msg = body["err_msg"]
.as_str()
.or_else(|| body["error"].as_str())
.unwrap_or("unknown error");
bail!("Deepgram API error ({}): {}", status, error_msg);
}
let text = body["results"]["channels"][0]["alternatives"][0]["transcript"]
.as_str()
.context("Deepgram response missing transcript field")?
.to_string();
Ok(text)
}
}
// ── AssemblyAiProvider ──────────────────────────────────────────
/// AssemblyAI STT API provider.
pub struct AssemblyAiProvider {
api_key: String,
}
impl AssemblyAiProvider {
pub fn from_config(config: &crate::config::AssemblyAiSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing AssemblyAI API key: set [transcription.assemblyai].api_key")?;
Ok(Self { api_key })
}
}
#[async_trait]
impl TranscriptionProvider for AssemblyAiProvider {
fn name(&self) -> &str {
"assemblyai"
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (_, _) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.assemblyai");
// Step 1: Upload the audio file.
let upload_resp = client
.post("https://api.assemblyai.com/v2/upload")
.header("Authorization", &self.api_key)
.header("Content-Type", "application/octet-stream")
.body(audio_data.to_vec())
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to upload audio to AssemblyAI")?;
let upload_status = upload_resp.status();
let upload_body: serde_json::Value = upload_resp
.json()
.await
.context("Failed to parse AssemblyAI upload response")?;
if !upload_status.is_success() {
let error_msg = upload_body["error"].as_str().unwrap_or("unknown error");
bail!("AssemblyAI upload error ({}): {}", upload_status, error_msg);
}
let upload_url = upload_body["upload_url"]
.as_str()
.context("AssemblyAI upload response missing 'upload_url'")?;
// Step 2: Create transcription job.
let transcript_req = serde_json::json!({
"audio_url": upload_url,
});
let create_resp = client
.post("https://api.assemblyai.com/v2/transcript")
.header("Authorization", &self.api_key)
.json(&transcript_req)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to create AssemblyAI transcription")?;
let create_status = create_resp.status();
let create_body: serde_json::Value = create_resp
.json()
.await
.context("Failed to parse AssemblyAI create response")?;
if !create_status.is_success() {
let error_msg = create_body["error"].as_str().unwrap_or("unknown error");
bail!(
"AssemblyAI transcription error ({}): {}",
create_status,
error_msg
);
}
let transcript_id = create_body["id"]
.as_str()
.context("AssemblyAI response missing 'id'")?;
// Step 3: Poll for completion.
let poll_url = format!("https://api.assemblyai.com/v2/transcript/{transcript_id}");
let poll_interval = std::time::Duration::from_secs(3);
let poll_deadline = tokio::time::Instant::now() + std::time::Duration::from_secs(180);
while tokio::time::Instant::now() < poll_deadline {
tokio::time::sleep(poll_interval).await;
let poll_resp = client
.get(&poll_url)
.header("Authorization", &self.api_key)
.timeout(std::time::Duration::from_secs(30))
.send()
.await
.context("Failed to poll AssemblyAI transcription")?;
let poll_status = poll_resp.status();
let poll_body: serde_json::Value = poll_resp
.json()
.await
.context("Failed to parse AssemblyAI poll response")?;
if !poll_status.is_success() {
let error_msg = poll_body["error"].as_str().unwrap_or("unknown poll error");
bail!("AssemblyAI poll error ({}): {}", poll_status, error_msg);
}
let status_str = poll_body["status"].as_str().unwrap_or("unknown");
match status_str {
"completed" => {
let text = poll_body["text"]
.as_str()
.context("AssemblyAI response missing 'text'")?
.to_string();
return Ok(text);
}
"error" => {
let error_msg = poll_body["error"]
.as_str()
.unwrap_or("unknown transcription error");
bail!("AssemblyAI transcription failed: {}", error_msg);
}
_ => {}
}
}
bail!("AssemblyAI transcription timed out after 180s")
}
}
// ── GoogleSttProvider ───────────────────────────────────────────
/// Google Cloud Speech-to-Text API provider.
pub struct GoogleSttProvider {
api_key: String,
language_code: String,
}
impl GoogleSttProvider {
pub fn from_config(config: &crate::config::GoogleSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing Google STT API key: set [transcription.google].api_key")?;
Ok(Self {
api_key,
language_code: config.language_code.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for GoogleSttProvider {
fn name(&self) -> &str {
"google"
}
fn supported_formats(&self) -> Vec<String> {
// Google Cloud STT supports a subset of formats.
vec!["flac", "wav", "ogg", "opus", "mp3", "webm"]
.into_iter()
.map(String::from)
.collect()
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (normalized_name, _) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.google");
let encoding = match normalized_name
.rsplit_once('.')
.map(|(_, e)| e.to_ascii_lowercase())
.as_deref()
{
Some("flac") => "FLAC",
Some("wav") => "LINEAR16",
Some("ogg" | "opus") => "OGG_OPUS",
Some("mp3") => "MP3",
Some("webm") => "WEBM_OPUS",
Some(ext) => bail!("Google STT does not support '.{ext}' input"),
None => bail!("Google STT requires a file extension"),
};
let audio_content =
base64::Engine::encode(&base64::engine::general_purpose::STANDARD, audio_data);
let request_body = serde_json::json!({
"config": {
"encoding": encoding,
"languageCode": &self.language_code,
"enableAutomaticPunctuation": true,
},
"audio": {
"content": audio_content,
}
});
let url = format!(
"https://speech.googleapis.com/v1/speech:recognize?key={}",
self.api_key
);
let resp = client
.post(&url)
.json(&request_body)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to Google STT")?;
let status = resp.status();
let body: serde_json::Value = resp
.json()
.await
.context("Failed to parse Google STT response")?;
if !status.is_success() {
let error_msg = body["error"]["message"].as_str().unwrap_or("unknown error");
bail!("Google STT API error ({}): {}", status, error_msg);
}
let text = body["results"][0]["alternatives"][0]["transcript"]
.as_str()
.unwrap_or("")
.to_string();
Ok(text)
}
}
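The extension-to-encoding mapping above can be pulled out as a small pure function for illustration (a standalone sketch mirroring the match arms in `transcribe`; `google_encoding` is a hypothetical helper name, not part of the provider):

```rust
/// Map an audio file extension to the Google STT `encoding` value
/// (mirrors the match in `GoogleSttProvider::transcribe`).
fn google_encoding(file_name: &str) -> Option<&'static str> {
    match file_name
        .rsplit_once('.')
        .map(|(_, e)| e.to_ascii_lowercase())
        .as_deref()
    {
        Some("flac") => Some("FLAC"),
        Some("wav") => Some("LINEAR16"),
        Some("ogg" | "opus") => Some("OGG_OPUS"),
        Some("mp3") => Some("MP3"),
        Some("webm") => Some("WEBM_OPUS"),
        // Unsupported extension, or no extension at all.
        _ => None,
    }
}
```

In the provider itself the unsupported and missing-extension cases `bail!` with a descriptive message instead of returning `None`.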
// ── Shared response parsing ─────────────────────────────────────
/// Parse a standard Whisper-compatible JSON response (`{ "text": "..." }`).
async fn parse_whisper_response(resp: reqwest::Response) -> Result<String> {
let status = resp.status();
let body: serde_json::Value = resp
.json()
@@ -109,6 +609,128 @@ pub async fn transcribe_audio(
Ok(text)
}
// ── TranscriptionManager ────────────────────────────────────────
/// Manages multiple STT providers and routes transcription requests.
pub struct TranscriptionManager {
providers: HashMap<String, Box<dyn TranscriptionProvider>>,
default_provider: String,
}
impl TranscriptionManager {
/// Build a `TranscriptionManager` from config.
///
/// Always attempts to register the Groq provider from existing config fields.
/// Additional providers are registered when their config sections are present.
///
/// Provider keys with missing API keys are silently skipped — the error
/// surfaces at transcribe-time so callers that target a different default
/// provider are not blocked.
pub fn new(config: &TranscriptionConfig) -> Result<Self> {
let mut providers: HashMap<String, Box<dyn TranscriptionProvider>> = HashMap::new();
if let Ok(groq) = GroqProvider::from_config(config) {
providers.insert("groq".to_string(), Box::new(groq));
}
if let Some(ref openai_cfg) = config.openai {
if let Ok(p) = OpenAiWhisperProvider::from_config(openai_cfg) {
providers.insert("openai".to_string(), Box::new(p));
}
}
if let Some(ref deepgram_cfg) = config.deepgram {
if let Ok(p) = DeepgramProvider::from_config(deepgram_cfg) {
providers.insert("deepgram".to_string(), Box::new(p));
}
}
if let Some(ref assemblyai_cfg) = config.assemblyai {
if let Ok(p) = AssemblyAiProvider::from_config(assemblyai_cfg) {
providers.insert("assemblyai".to_string(), Box::new(p));
}
}
if let Some(ref google_cfg) = config.google {
if let Ok(p) = GoogleSttProvider::from_config(google_cfg) {
providers.insert("google".to_string(), Box::new(p));
}
}
let default_provider = config.default_provider.clone();
if config.enabled && !providers.contains_key(&default_provider) {
let available: Vec<&str> = providers.keys().map(|k| k.as_str()).collect();
bail!(
"Default transcription provider '{}' is not configured. Available: {available:?}",
default_provider
);
}
Ok(Self {
providers,
default_provider,
})
}
/// Transcribe audio using the default provider.
pub async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
self.transcribe_with_provider(audio_data, file_name, &self.default_provider)
.await
}
/// Transcribe audio using a specific named provider.
pub async fn transcribe_with_provider(
&self,
audio_data: &[u8],
file_name: &str,
provider: &str,
) -> Result<String> {
let p = self.providers.get(provider).ok_or_else(|| {
let available: Vec<&str> = self.providers.keys().map(|k| k.as_str()).collect();
anyhow::anyhow!(
"Transcription provider '{provider}' not configured. Available: {available:?}"
)
})?;
p.transcribe(audio_data, file_name).await
}
/// List registered provider names.
pub fn available_providers(&self) -> Vec<&str> {
self.providers.keys().map(|k| k.as_str()).collect()
}
}
// ── Backward-compatible convenience function ────────────────────
/// Transcribe audio bytes via a Whisper-compatible transcription API.
///
/// Returns the transcribed text on success.
///
/// This is the backward-compatible entry point that preserves the original
/// function signature. It uses the Groq provider directly, matching the
/// original single-provider behavior.
///
/// Credential resolution order:
/// 1. `config.transcription.api_key`
/// 2. `GROQ_API_KEY` environment variable (backward compatibility)
///
/// The caller is responsible for enforcing duration limits *before* downloading
/// the file; this function enforces the byte-size cap.
pub async fn transcribe_audio(
audio_data: Vec<u8>,
file_name: &str,
config: &TranscriptionConfig,
) -> Result<String> {
// Validate audio before resolving credentials so that size/format errors
// are reported before missing-key errors (preserves original behavior).
validate_audio(&audio_data, file_name)?;
let groq = GroqProvider::from_config(config)?;
groq.transcribe(&audio_data, file_name).await
}
#[cfg(test)]
mod tests {
use super::*;
@@ -129,8 +751,10 @@ mod tests {
#[tokio::test]
async fn rejects_missing_api_key() {
// Ensure all candidate keys are absent for this test.
std::env::remove_var("GROQ_API_KEY");
std::env::remove_var("OPENAI_API_KEY");
std::env::remove_var("TRANSCRIPTION_API_KEY");
let data = vec![0u8; 100];
let config = TranscriptionConfig::default();
@@ -139,11 +763,29 @@ mod tests {
.await
.unwrap_err();
assert!(
err.to_string().contains("transcription API key"),
"expected missing-key error, got: {err}"
);
}
#[tokio::test]
async fn uses_config_api_key_without_groq_env() {
std::env::remove_var("GROQ_API_KEY");
let data = vec![0u8; 100];
let mut config = TranscriptionConfig::default();
config.api_key = Some("transcription-key".to_string());
// Keep invalid extension so we fail before network, but after key resolution.
let err = transcribe_audio(data, "recording.aac", &config)
.await
.unwrap_err();
assert!(
err.to_string().contains("Unsupported audio format"),
"expected unsupported-format error, got: {err}"
);
}
#[test]
fn mime_for_audio_maps_accepted_formats() {
let cases = [
@@ -219,4 +861,128 @@ mod tests {
"error should mention the rejected extension, got: {msg}"
);
}
// ── TranscriptionManager tests ──────────────────────────────
#[test]
fn manager_creation_with_default_config() {
std::env::remove_var("GROQ_API_KEY");
let config = TranscriptionConfig::default();
let manager = TranscriptionManager::new(&config).unwrap();
assert_eq!(manager.default_provider, "groq");
// Groq won't be registered without a key.
assert!(manager.providers.is_empty());
}
#[test]
fn manager_registers_groq_with_key() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.api_key = Some("test-groq-key".to_string());
let manager = TranscriptionManager::new(&config).unwrap();
assert!(manager.providers.contains_key("groq"));
assert_eq!(manager.providers["groq"].name(), "groq");
}
#[test]
fn manager_registers_multiple_providers() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.api_key = Some("test-groq-key".to_string());
config.openai = Some(crate::config::OpenAiSttConfig {
api_key: Some("test-openai-key".to_string()),
model: "whisper-1".to_string(),
});
config.deepgram = Some(crate::config::DeepgramSttConfig {
api_key: Some("test-deepgram-key".to_string()),
model: "nova-2".to_string(),
});
let manager = TranscriptionManager::new(&config).unwrap();
assert!(manager.providers.contains_key("groq"));
assert!(manager.providers.contains_key("openai"));
assert!(manager.providers.contains_key("deepgram"));
assert_eq!(manager.available_providers().len(), 3);
}
#[tokio::test]
async fn manager_rejects_unconfigured_provider() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.api_key = Some("test-groq-key".to_string());
let manager = TranscriptionManager::new(&config).unwrap();
let err = manager
.transcribe_with_provider(&[0u8; 100], "test.ogg", "nonexistent")
.await
.unwrap_err();
assert!(
err.to_string().contains("not configured"),
"expected not-configured error, got: {err}"
);
}
#[test]
fn manager_default_provider_from_config() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.default_provider = "openai".to_string();
config.openai = Some(crate::config::OpenAiSttConfig {
api_key: Some("test-openai-key".to_string()),
model: "whisper-1".to_string(),
});
let manager = TranscriptionManager::new(&config).unwrap();
assert_eq!(manager.default_provider, "openai");
}
#[test]
fn validate_audio_rejects_oversized() {
let big = vec![0u8; MAX_AUDIO_BYTES + 1];
let err = validate_audio(&big, "test.ogg").unwrap_err();
assert!(err.to_string().contains("too large"));
}
#[test]
fn validate_audio_rejects_unsupported_format() {
let data = vec![0u8; 100];
let err = validate_audio(&data, "test.aac").unwrap_err();
assert!(err.to_string().contains("Unsupported audio format"));
}
#[test]
fn validate_audio_accepts_supported_format() {
let data = vec![0u8; 100];
let (name, mime) = validate_audio(&data, "test.ogg").unwrap();
assert_eq!(name, "test.ogg");
assert_eq!(mime, "audio/ogg");
}
#[test]
fn validate_audio_normalizes_oga() {
let data = vec![0u8; 100];
let (name, mime) = validate_audio(&data, "voice.oga").unwrap();
assert_eq!(name, "voice.ogg");
assert_eq!(mime, "audio/ogg");
}
#[test]
fn backward_compat_config_defaults_unchanged() {
let config = TranscriptionConfig::default();
assert!(!config.enabled);
assert!(config.api_key.is_none());
assert!(config.api_url.contains("groq.com"));
assert_eq!(config.model, "whisper-large-v3-turbo");
assert_eq!(config.default_provider, "groq");
assert!(config.openai.is_none());
assert!(config.deepgram.is_none());
assert!(config.assemblyai.is_none());
assert!(config.google.is_none());
}
}
@@ -85,6 +85,7 @@ impl TtsProvider for OpenAiTtsProvider {
"input": text,
"voice": voice,
"speed": self.speed,
"response_format": "opus",
});
let resp = self
@@ -0,0 +1,409 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use anyhow::{bail, Result};
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
/// Generic Webhook channel — receives messages via HTTP POST and sends replies
/// to a configurable outbound URL. This is the "universal adapter" for any system
/// that supports webhooks.
pub struct WebhookChannel {
listen_port: u16,
listen_path: String,
send_url: Option<String>,
send_method: String,
auth_header: Option<String>,
secret: Option<String>,
}
/// Incoming webhook payload format.
#[derive(Debug, Deserialize)]
struct IncomingWebhook {
sender: String,
content: String,
#[serde(default)]
thread_id: Option<String>,
}
/// Outgoing webhook payload format.
#[derive(Debug, Serialize)]
struct OutgoingWebhook {
content: String,
#[serde(skip_serializing_if = "Option::is_none")]
thread_id: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
recipient: Option<String>,
}
impl WebhookChannel {
pub fn new(
listen_port: u16,
listen_path: Option<String>,
send_url: Option<String>,
send_method: Option<String>,
auth_header: Option<String>,
secret: Option<String>,
) -> Self {
let path = listen_path.unwrap_or_else(|| "/webhook".to_string());
// Ensure path starts with /
let listen_path = if path.starts_with('/') {
path
} else {
format!("/{path}")
};
Self {
listen_port,
listen_path,
send_url,
send_method: send_method
.unwrap_or_else(|| "POST".to_string())
.to_uppercase(),
auth_header,
secret,
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.webhook")
}
/// Verify an incoming request's signature if a secret is configured.
fn verify_signature(&self, body: &[u8], signature: Option<&str>) -> bool {
let Some(ref secret) = self.secret else {
return true; // No secret configured, accept all
};
let Some(sig) = signature else {
return false; // Secret is set but no signature header provided
};
// HMAC-SHA256 verification
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
let Ok(mut mac) = HmacSha256::new_from_slice(secret.as_bytes()) else {
return false;
};
mac.update(body);
// Signature should be hex-encoded
let Ok(expected) = hex::decode(sig.trim_start_matches("sha256=")) else {
return false;
};
mac.verify_slice(&expected).is_ok()
}
}
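The `sha256=` prefix handling and hex decoding that `verify_signature` delegates to the `hex` crate can be sketched with a standard-library-only helper (`parse_sig` is a hypothetical name for illustration; the real code uses `hex::decode` and then `Mac::verify_slice`, which compares in constant time):

```rust
/// Decode a hex signature header, tolerating an optional "sha256=" prefix.
/// Returns None on odd length or non-hex characters. Assumes ASCII input.
fn parse_sig(header: &str) -> Option<Vec<u8>> {
    let s = header.strip_prefix("sha256=").unwrap_or(header);
    if s.len() % 2 != 0 {
        return None;
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).ok())
        .collect()
}
```

Note that the handler never compares the decoded bytes with `==`; timing-safe comparison via `verify_slice` is what makes the HMAC check resistant to byte-by-byte guessing.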
#[async_trait]
impl Channel for WebhookChannel {
fn name(&self) -> &str {
"webhook"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let Some(ref send_url) = self.send_url else {
tracing::debug!("Webhook channel: no send_url configured, skipping outbound message");
return Ok(());
};
let client = self.http_client();
let payload = OutgoingWebhook {
content: message.content.clone(),
thread_id: message.thread_ts.clone(),
recipient: if message.recipient.is_empty() {
None
} else {
Some(message.recipient.clone())
},
};
let mut request = match self.send_method.as_str() {
"PUT" => client.put(send_url),
_ => client.post(send_url),
};
if let Some(ref auth) = self.auth_header {
request = request.header("Authorization", auth);
}
let resp = request.json(&payload).send().await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Webhook send failed ({status}): {body}");
}
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
use axum::{
body::Bytes,
extract::State,
http::{HeaderMap, StatusCode},
routing::post,
Router,
};
use portable_atomic::{AtomicU64, Ordering};
use std::sync::Arc;
let counter = Arc::new(AtomicU64::new(0));
struct WebhookState {
tx: tokio::sync::mpsc::Sender<ChannelMessage>,
secret: Option<String>,
counter: Arc<AtomicU64>,
}
let state = Arc::new(WebhookState {
tx: tx.clone(),
secret: self.secret.clone(),
counter: counter.clone(),
});
let listen_path = self.listen_path.clone();
async fn handle_webhook(
State(state): State<Arc<WebhookState>>,
headers: HeaderMap,
body: Bytes,
) -> StatusCode {
// Verify signature if secret is configured
if let Some(ref secret) = state.secret {
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
let signature = headers
.get("x-webhook-signature")
.and_then(|v| v.to_str().ok());
let valid = if let Some(sig) = signature {
if let Ok(mut mac) = HmacSha256::new_from_slice(secret.as_bytes()) {
mac.update(&body);
let expected =
hex::decode(sig.trim_start_matches("sha256=")).unwrap_or_default();
mac.verify_slice(&expected).is_ok()
} else {
false
}
} else {
false
};
if !valid {
tracing::warn!("Webhook: invalid signature, rejecting request");
return StatusCode::UNAUTHORIZED;
}
}
let payload: IncomingWebhook = match serde_json::from_slice(&body) {
Ok(p) => p,
Err(e) => {
tracing::warn!("Webhook: invalid JSON payload: {e}");
return StatusCode::BAD_REQUEST;
}
};
if payload.content.is_empty() {
return StatusCode::BAD_REQUEST;
}
let seq = state.counter.fetch_add(1, Ordering::Relaxed);
#[allow(clippy::cast_possible_truncation)]
let timestamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let reply_target = payload
.thread_id
.clone()
.unwrap_or_else(|| payload.sender.clone());
let msg = ChannelMessage {
id: format!("webhook_{seq}"),
sender: payload.sender,
reply_target,
content: payload.content,
channel: "webhook".to_string(),
timestamp,
thread_ts: payload.thread_id,
};
if state.tx.send(msg).await.is_err() {
return StatusCode::SERVICE_UNAVAILABLE;
}
StatusCode::OK
}
let app = Router::new()
.route(&listen_path, post(handle_webhook))
.with_state(state);
let addr = std::net::SocketAddr::from(([0, 0, 0, 0], self.listen_port));
tracing::info!(
"Webhook channel listening on http://0.0.0.0:{}{} ...",
self.listen_port,
self.listen_path
);
let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app)
.await
.map_err(|e| anyhow::anyhow!("Webhook server error: {e}"))?;
Ok(())
}
async fn health_check(&self) -> bool {
// Webhook channel is healthy if the port can be bound (basic check).
// In practice, once listen() starts the server is running.
true
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> WebhookChannel {
WebhookChannel::new(
8080,
Some("/webhook".into()),
Some("https://example.com/callback".into()),
None,
None,
None,
)
}
fn make_channel_with_secret() -> WebhookChannel {
WebhookChannel::new(
8080,
None,
Some("https://example.com/callback".into()),
None,
None,
Some("mysecret".into()),
)
}
#[test]
fn default_path() {
let ch = WebhookChannel::new(8080, None, None, None, None, None);
assert_eq!(ch.listen_path, "/webhook");
}
#[test]
fn path_normalized() {
let ch = WebhookChannel::new(8080, Some("hooks/incoming".into()), None, None, None, None);
assert_eq!(ch.listen_path, "/hooks/incoming");
}
#[test]
fn send_method_default() {
let ch = make_channel();
assert_eq!(ch.send_method, "POST");
}
#[test]
fn send_method_put() {
let ch = WebhookChannel::new(
8080,
None,
Some("https://example.com".into()),
Some("put".into()),
None,
None,
);
assert_eq!(ch.send_method, "PUT");
}
#[test]
fn incoming_payload_deserializes_all_fields() {
let json = r#"{"sender": "zeroclaw_user", "content": "hello", "thread_id": "t1"}"#;
let payload: IncomingWebhook = serde_json::from_str(json).unwrap();
assert_eq!(payload.sender, "zeroclaw_user");
assert_eq!(payload.content, "hello");
assert_eq!(payload.thread_id.as_deref(), Some("t1"));
}
#[test]
fn incoming_payload_without_thread() {
let json = r#"{"sender": "bob", "content": "hi"}"#;
let payload: IncomingWebhook = serde_json::from_str(json).unwrap();
assert_eq!(payload.sender, "bob");
assert_eq!(payload.content, "hi");
assert!(payload.thread_id.is_none());
}
#[test]
fn outgoing_payload_serializes_content() {
let payload = OutgoingWebhook {
content: "response".into(),
thread_id: Some("t1".into()),
recipient: Some("zeroclaw_user".into()),
};
let json = serde_json::to_value(&payload).unwrap();
assert_eq!(json["content"], "response");
assert_eq!(json["thread_id"], "t1");
assert_eq!(json["recipient"], "zeroclaw_user");
}
#[test]
fn outgoing_payload_omits_none_fields() {
let payload = OutgoingWebhook {
content: "response".into(),
thread_id: None,
recipient: None,
};
let json = serde_json::to_value(&payload).unwrap();
assert_eq!(json["content"], "response");
assert!(json.get("thread_id").is_none());
assert!(json.get("recipient").is_none());
}
#[test]
fn verify_signature_no_secret() {
let ch = make_channel();
assert!(ch.verify_signature(b"body", None));
}
#[test]
fn verify_signature_missing_header() {
let ch = make_channel_with_secret();
assert!(!ch.verify_signature(b"body", None));
}
#[test]
fn verify_signature_valid() {
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
let ch = make_channel_with_secret();
let body = b"test body";
let mut mac = HmacSha256::new_from_slice(b"mysecret").unwrap();
mac.update(body);
let sig = hex::encode(mac.finalize().into_bytes());
assert!(ch.verify_signature(body, Some(&sig)));
}
#[test]
fn verify_signature_invalid() {
let ch = make_channel_with_secret();
assert!(!ch.verify_signature(b"body", Some("badhex")));
}
}
@@ -64,8 +64,17 @@ pub struct WhatsAppWebChannel {
client: Arc<Mutex<Option<Arc<wa_rs::Client>>>>,
/// Message sender channel
tx: Arc<Mutex<Option<tokio::sync::mpsc::Sender<ChannelMessage>>>>,
/// Voice transcription (STT) config
transcription: Option<crate::config::TranscriptionConfig>,
/// Text-to-speech config for voice replies
tts_config: Option<crate::config::TtsConfig>,
/// Chats awaiting a voice reply — maps chat JID to the latest substantive
/// reply text. A background task debounces and sends the voice note after
/// the agent finishes its turn (no new send() for 3 seconds).
pending_voice:
Arc<std::sync::Mutex<std::collections::HashMap<String, (String, std::time::Instant)>>>,
/// Chats whose last incoming message was a voice note.
voice_chats: Arc<std::sync::Mutex<std::collections::HashSet<String>>>,
}
impl WhatsAppWebChannel {
@@ -93,10 +102,13 @@ impl WhatsAppWebChannel {
client: Arc::new(Mutex::new(None)),
tx: Arc::new(Mutex::new(None)),
transcription: None,
tts_config: None,
pending_voice: Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
voice_chats: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
}
}
/// Configure voice transcription (STT) for incoming voice notes.
#[cfg(feature = "whatsapp-web")]
pub fn with_transcription(mut self, config: crate::config::TranscriptionConfig) -> Self {
if config.enabled {
@@ -105,6 +117,15 @@ impl WhatsAppWebChannel {
self
}
/// Configure text-to-speech for outgoing voice replies.
#[cfg(feature = "whatsapp-web")]
pub fn with_tts(mut self, config: crate::config::TtsConfig) -> Self {
if config.enabled {
self.tts_config = Some(config);
}
self
}
/// Check if a phone number is allowed (E.164 format: +1234567890)
#[cfg(feature = "whatsapp-web")]
fn is_number_allowed(&self, phone: &str) -> bool {
@@ -287,6 +308,134 @@ impl WhatsAppWebChannel {
format!("{expanded_session_path}-shm"),
]
}
/// Attempt to download and transcribe a WhatsApp voice note.
///
/// Returns `None` if transcription is disabled, download fails, or
/// transcription fails (all logged as warnings).
#[cfg(feature = "whatsapp-web")]
async fn try_transcribe_voice_note(
client: &wa_rs::Client,
audio: &wa_rs_proto::whatsapp::message::AudioMessage,
transcription_config: Option<&crate::config::TranscriptionConfig>,
) -> Option<String> {
let config = transcription_config?;
// Enforce duration limit
if let Some(seconds) = audio.seconds {
if u64::from(seconds) > config.max_duration_secs {
tracing::info!(
"WhatsApp Web: skipping voice note ({}s exceeds {}s limit)",
seconds,
config.max_duration_secs
);
return None;
}
}
// Download the encrypted audio
use wa_rs::download::Downloadable;
let audio_data = match client.download(audio as &dyn Downloadable).await {
Ok(data) => data,
Err(e) => {
tracing::warn!("WhatsApp Web: failed to download voice note: {e}");
return None;
}
};
// Determine filename from mimetype for transcription API
let file_name = match audio.mimetype.as_deref() {
Some(m) if m.contains("opus") || m.contains("ogg") => "voice.ogg",
Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
Some(m) if m.contains("mpeg") || m.contains("mp3") => "voice.mp3",
Some(m) if m.contains("webm") => "voice.webm",
_ => "voice.ogg", // WhatsApp default
};
tracing::info!(
"WhatsApp Web: transcribing voice note ({} bytes, file={})",
audio_data.len(),
file_name
);
match super::transcription::transcribe_audio(audio_data, file_name, config).await {
Ok(text) if text.trim().is_empty() => {
tracing::info!("WhatsApp Web: voice transcription returned empty text, skipping");
None
}
Ok(text) => {
tracing::info!(
"WhatsApp Web: voice note transcribed ({} chars)",
text.len()
);
Some(text)
}
Err(e) => {
tracing::warn!("WhatsApp Web: voice transcription failed: {e}");
None
}
}
}
/// Synthesize text to speech and send as a WhatsApp voice note (static version for spawned tasks).
#[cfg(feature = "whatsapp-web")]
async fn synthesize_voice_static(
client: &wa_rs::Client,
to: &wa_rs_binary::jid::Jid,
text: &str,
tts_config: &crate::config::TtsConfig,
) -> Result<()> {
let tts_manager = super::tts::TtsManager::new(tts_config)?;
let audio_bytes = tts_manager.synthesize(text).await?;
let audio_len = audio_bytes.len();
tracing::info!("WhatsApp Web TTS: synthesized {} bytes of audio", audio_len);
if audio_bytes.is_empty() {
anyhow::bail!("TTS returned empty audio");
}
use wa_rs_core::download::MediaType;
let upload = client
.upload(audio_bytes, MediaType::Audio)
.await
.map_err(|e| anyhow!("Failed to upload TTS audio: {e}"))?;
tracing::info!(
"WhatsApp Web TTS: uploaded audio (url_len={}, file_length={})",
upload.url.len(),
upload.file_length
);
// Estimate duration: Opus at ~32kbps → bytes / 4000 ≈ seconds
#[allow(clippy::cast_possible_truncation)]
let estimated_seconds = std::cmp::max(1, (upload.file_length / 4000) as u32);
let voice_msg = wa_rs_proto::whatsapp::Message {
audio_message: Some(Box::new(wa_rs_proto::whatsapp::message::AudioMessage {
url: Some(upload.url),
direct_path: Some(upload.direct_path),
media_key: Some(upload.media_key),
file_enc_sha256: Some(upload.file_enc_sha256),
file_sha256: Some(upload.file_sha256),
file_length: Some(upload.file_length),
mimetype: Some("audio/ogg; codecs=opus".to_string()),
ptt: Some(true),
seconds: Some(estimated_seconds),
..Default::default()
})),
..Default::default()
};
Box::pin(client.send_message(to.clone(), voice_msg))
.await
.map_err(|e| anyhow!("Failed to send voice note: {e}"))?;
tracing::info!(
"WhatsApp Web TTS: sent voice note ({} bytes, ~{}s)",
audio_len,
estimated_seconds
);
Ok(())
}
}
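The `estimated_seconds` math above leans on a rule of thumb: WhatsApp voice notes are Opus at roughly 32 kbps, i.e. 4000 bytes per second, so `file_length / 4000` approximates the duration in seconds. A minimal std-only sketch (function name hypothetical, not part of the channel code):

```rust
/// Estimate voice-note duration from the uploaded file length,
/// assuming Opus at ~32 kbps (32_000 bits/s = 4_000 bytes/s).
/// Clamped to at least 1 second, matching the sender logic above.
fn estimate_opus_seconds(file_length: u64) -> u32 {
    std::cmp::max(1, (file_length / 4_000) as u32)
}

fn main() {
    // 60 KB of audio ≈ 15 seconds of speech at 32 kbps.
    assert_eq!(estimate_opus_seconds(60_000), 15);
    // Tiny payloads still report the 1-second minimum.
    assert_eq!(estimate_opus_seconds(100), 1);
}
```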
#[cfg(feature = "whatsapp-web")]
@@ -315,6 +464,88 @@ impl Channel for WhatsAppWebChannel {
}
let to = self.recipient_to_jid(&message.recipient)?;
// Voice chat mode: send text normally AND queue a voice note of the
// final answer. Only substantive messages (not tool outputs) are queued.
// A debounce task waits 10s after the last substantive message, then
// sends ONE voice note. Text in → text out. Voice in → text + voice out.
let is_voice_chat = self
.voice_chats
.lock()
.map(|vs| vs.contains(&message.recipient))
.unwrap_or(false);
if is_voice_chat && self.tts_config.is_some() {
let content = &message.content;
// Only queue substantive natural-language replies for voice.
// Skip tool outputs: URLs, JSON, code blocks, errors, short status.
let is_substantive = content.len() > 40
&& !content.starts_with("http")
&& !content.starts_with('{')
&& !content.starts_with('[')
&& !content.starts_with("Error")
&& !content.contains("```")
&& !content.contains("tool_call")
&& !content.contains("wttr.in");
if is_substantive {
if let Ok(mut pv) = self.pending_voice.lock() {
pv.insert(
message.recipient.clone(),
(content.clone(), std::time::Instant::now()),
);
}
let pending = self.pending_voice.clone();
let voice_chats = self.voice_chats.clone();
let client_clone = client.clone();
let to_clone = to.clone();
let recipient = message.recipient.clone();
let tts_config = self.tts_config.clone().unwrap();
tokio::spawn(async move {
// Wait 10 seconds — long enough for the agent to finish its
// full tool chain and send the final answer.
tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
// Atomic check-and-remove: only one task gets the value
let to_voice = pending.lock().ok().and_then(|mut pv| {
if let Some((_, ts)) = pv.get(&recipient) {
if ts.elapsed().as_secs() >= 8 {
return pv.remove(&recipient).map(|(text, _)| text);
}
}
None
});
if let Some(text) = to_voice {
if let Ok(mut vc) = voice_chats.lock() {
vc.remove(&recipient);
}
match Box::pin(WhatsAppWebChannel::synthesize_voice_static(
&client_clone,
&to_clone,
&text,
&tts_config,
))
.await
{
Ok(()) => {
tracing::info!(
"WhatsApp Web: voice reply sent ({} chars)",
text.len()
);
}
Err(e) => {
tracing::warn!("WhatsApp Web: TTS voice reply failed: {e}");
}
}
}
});
}
// Fall through to send text normally (voice chat gets BOTH)
}
// Send text message
let outgoing = wa_rs_proto::whatsapp::Message {
conversation: Some(message.content.clone()),
..Default::default()
@@ -322,7 +553,7 @@ impl Channel for WhatsAppWebChannel {
let message_id = client.send_message(to, outgoing).await?;
tracing::debug!(
"WhatsApp Web: sent message to {} (id: {})",
"WhatsApp Web: sent text to {} (id: {})",
message.recipient,
message_id
);
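The pending-voice handoff above is a debounce: each substantive reply overwrites the map entry with a fresh timestamp, and the spawned task only claims the entry if it has sat untouched long enough, so exactly one task wins. A std-only sketch of that claim step (helper name hypothetical):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Atomically take the pending text for `key` only if it has been
/// idle for at least `min_idle` — i.e. no newer reply replaced it.
fn take_if_settled(
    pending: &Mutex<HashMap<String, (String, Instant)>>,
    key: &str,
    min_idle: Duration,
) -> Option<String> {
    let mut map = pending.lock().ok()?;
    match map.get(key) {
        Some((_, ts)) if ts.elapsed() >= min_idle => {
            // Remove under the same lock, so only one caller gets it.
            map.remove(key).map(|(text, _)| text)
        }
        _ => None,
    }
}

fn main() {
    let pending = Mutex::new(HashMap::new());
    pending.lock().unwrap().insert(
        "chat".to_string(),
        ("final answer".to_string(), Instant::now()),
    );
    // Entry is too fresh: the debounce refuses to claim it.
    assert_eq!(take_if_settled(&pending, "chat", Duration::from_secs(8)), None);
    // With a zero idle requirement the entry is claimed exactly once.
    assert_eq!(
        take_if_settled(&pending, "chat", Duration::ZERO),
        Some("final answer".to_string())
    );
    assert_eq!(take_if_settled(&pending, "chat", Duration::ZERO), None);
}
```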
@@ -394,6 +625,9 @@ impl Channel for WhatsAppWebChannel {
let session_revoked_clone = session_revoked.clone();
let transcription_config = self.transcription.clone();
let voice_chats = self.voice_chats.clone();
let mut builder = Bot::builder()
.with_backend(backend)
.with_transport_factory(transport_factory)
@@ -405,27 +639,15 @@ impl Channel for WhatsAppWebChannel {
let retry_count = retry_count_clone.clone();
let session_revoked = session_revoked_clone.clone();
let transcription_config = transcription_config.clone();
let voice_chats = voice_chats.clone();
async move {
match event {
Event::Message(msg, info) => {
// Extract message content
let text = msg.text_content().unwrap_or("");
let sender_jid = info.source.sender.clone();
let sender_alt = info.source.sender_alt.clone();
let sender = sender_jid.user().to_string();
let chat = info.source.chat.to_string();
tracing::info!(
"WhatsApp Web message received (sender_len={}, chat_len={}, text_len={})",
sender.len(),
chat.len(),
text.len()
);
tracing::debug!(
"WhatsApp Web message content: {}",
text
);
let mapped_phone = if sender_jid.is_lid() {
client.get_phone_number_from_lid(&sender_jid.user).await
} else {
@@ -437,93 +659,92 @@ impl Channel for WhatsAppWebChannel {
mapped_phone.as_deref(),
);
if let Some(normalized) = sender_candidates
let normalized = match sender_candidates
.iter()
.find(|candidate| {
Self::is_number_allowed_for_list(&allowed_numbers, candidate)
})
.cloned()
{
let content = if !text.trim().is_empty() {
text.trim().to_string()
} else if let Some(ref audio) = msg.get_base_message().audio_message {
let duration = audio.seconds.unwrap_or(0);
tracing::info!(
"WhatsApp Web audio from {} ({}s, ptt={})",
normalized, duration, audio.ptt.unwrap_or(false)
Some(n) => n,
None => {
tracing::warn!(
"WhatsApp Web: message from unrecognized sender not in allowed list (candidates_count={})",
sender_candidates.len()
);
let config = match transcription_config.as_ref() {
Some(c) => c,
None => {
tracing::debug!("WhatsApp Web: transcription disabled, ignoring audio");
return;
}
};
if u64::from(duration) > config.max_duration_secs {
tracing::info!(
"WhatsApp Web: skipping audio ({}s > {}s limit)",
duration, config.max_duration_secs
);
return;
}
let audio_data = match client.download(audio.as_ref()).await {
Ok(d) => d,
Err(e) => {
tracing::warn!("WhatsApp Web: failed to download audio: {e}");
return;
}
};
let file_name = match audio.mimetype.as_deref() {
Some(m) if m.contains("ogg") => "voice.ogg",
Some(m) if m.contains("opus") => "voice.opus",
Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
Some(m) if m.contains("webm") => "voice.webm",
_ => "voice.ogg",
};
match super::transcription::transcribe_audio(audio_data, file_name, config).await {
Ok(t) if !t.trim().is_empty() => {
tracing::info!("WhatsApp Web: transcribed audio from {}: {}", normalized, t.trim());
t.trim().to_string()
}
Ok(_) => {
tracing::info!("WhatsApp Web: transcription returned empty text");
return;
}
Err(e) => {
tracing::warn!("WhatsApp Web: transcription failed: {e}");
return;
}
}
} else {
tracing::debug!("WhatsApp Web: ignoring non-text/non-audio message from {}", normalized);
return;
};
}
};
if let Err(e) = tx_inner
.send(ChannelMessage {
id: uuid::Uuid::new_v4().to_string(),
channel: "whatsapp".to_string(),
sender: normalized.clone(),
// Reply to the originating chat JID (DM or group).
reply_target: chat,
content,
timestamp: chrono::Utc::now().timestamp() as u64,
thread_ts: None,
})
// Attempt voice note transcription (ptt = push-to-talk = voice note)
let voice_text = if let Some(ref audio) = msg.audio_message {
if audio.ptt == Some(true) {
Self::try_transcribe_voice_note(
&client,
audio,
transcription_config.as_ref(),
)
.await
{
tracing::error!("Failed to send message to channel: {}", e);
} else {
tracing::debug!(
"WhatsApp Web: ignoring non-PTT audio message from {}",
normalized
);
None
}
} else {
tracing::warn!(
"WhatsApp Web: message from unrecognized sender not in allowed list (candidates_count={})",
sender_candidates.len()
None
};
// Use transcribed voice text, or fall back to text content.
// Track whether this chat used a voice note so we reply in kind.
// We store the chat JID (reply_target) since that's what send() receives.
let content = if let Some(ref vt) = voice_text {
if let Ok(mut vs) = voice_chats.lock() {
vs.insert(chat.clone());
}
format!("[Voice] {vt}")
} else {
if let Ok(mut vs) = voice_chats.lock() {
vs.remove(&chat);
}
let text = msg.text_content().unwrap_or("");
text.trim().to_string()
};
tracing::info!(
"WhatsApp Web message received (sender_len={}, chat_len={}, content_len={})",
sender.len(),
chat.len(),
content.len()
);
tracing::debug!(
"WhatsApp Web message content: {}",
content
);
if content.is_empty() {
tracing::debug!(
"WhatsApp Web: ignoring empty or non-text message from {}",
normalized
);
return;
}
if let Err(e) = tx_inner
.send(ChannelMessage {
id: uuid::Uuid::new_v4().to_string(),
channel: "whatsapp".to_string(),
sender: normalized.clone(),
// Reply to the originating chat JID (DM or group).
reply_target: chat,
content,
timestamp: chrono::Utc::now().timestamp() as u64,
thread_ts: None,
})
.await
{
tracing::error!("Failed to send message to channel: {}", e);
}
}
Event::Connected(_) => {
@@ -764,6 +985,10 @@ impl WhatsAppWebChannel {
pub fn with_transcription(self, _config: crate::config::TranscriptionConfig) -> Self {
self
}
pub fn with_tts(self, _config: crate::config::TtsConfig) -> Self {
self
}
}
#[cfg(not(feature = "whatsapp-web"))]
+2
@@ -0,0 +1,2 @@
pub mod self_test;
pub mod update;
+281
@@ -0,0 +1,281 @@
//! `zeroclaw self-test` — quick and full diagnostic checks.
use anyhow::Result;
use std::path::Path;
/// Result of a single diagnostic check.
pub struct CheckResult {
pub name: &'static str,
pub passed: bool,
pub detail: String,
}
impl CheckResult {
fn pass(name: &'static str, detail: impl Into<String>) -> Self {
Self {
name,
passed: true,
detail: detail.into(),
}
}
fn fail(name: &'static str, detail: impl Into<String>) -> Self {
Self {
name,
passed: false,
detail: detail.into(),
}
}
}
/// Run the quick self-test suite (no network required).
pub async fn run_quick(config: &crate::config::Config) -> Result<Vec<CheckResult>> {
let mut results = Vec::new();
// 1. Config file exists and parses
results.push(check_config(config));
// 2. Workspace directory is writable
results.push(check_workspace(&config.workspace_dir).await);
// 3. SQLite memory backend opens
results.push(check_sqlite(&config.workspace_dir));
// 4. Provider registry has entries
results.push(check_provider_registry());
// 5. Tool registry has entries
results.push(check_tool_registry(config));
// 6. Channel registry loads
results.push(check_channel_config(config));
// 7. Security policy parses
results.push(check_security_policy(config));
// 8. Version sanity
results.push(check_version());
Ok(results)
}
/// Run the full self-test suite (includes network checks).
pub async fn run_full(config: &crate::config::Config) -> Result<Vec<CheckResult>> {
let mut results = run_quick(config).await?;
// 9. Gateway health endpoint
results.push(check_gateway_health(config).await);
// 10. Memory write/read round-trip
results.push(check_memory_roundtrip(config).await);
// 11. WebSocket handshake
results.push(check_websocket_handshake(config).await);
Ok(results)
}
/// Print results in a formatted table.
pub fn print_results(results: &[CheckResult]) {
let total = results.len();
let passed = results.iter().filter(|r| r.passed).count();
let failed = total - passed;
println!();
for (i, r) in results.iter().enumerate() {
let icon = if r.passed {
"\x1b[32m✓\x1b[0m"
} else {
"\x1b[31m✗\x1b[0m"
};
println!(" {} {}/{} {}: {}", icon, i + 1, total, r.name, r.detail);
}
println!();
if failed == 0 {
println!(" \x1b[32mAll {total} checks passed.\x1b[0m");
} else {
println!(" \x1b[31m{failed}/{total} checks failed.\x1b[0m");
}
println!();
}
fn check_config(config: &crate::config::Config) -> CheckResult {
if config.config_path.exists() {
CheckResult::pass(
"config",
format!("loaded from {}", config.config_path.display()),
)
} else {
CheckResult::fail("config", "config file not found (using defaults)")
}
}
async fn check_workspace(workspace_dir: &Path) -> CheckResult {
match tokio::fs::metadata(workspace_dir).await {
Ok(meta) if meta.is_dir() => {
// Try writing a temp file
let test_file = workspace_dir.join(".selftest_probe");
match tokio::fs::write(&test_file, b"ok").await {
Ok(()) => {
let _ = tokio::fs::remove_file(&test_file).await;
CheckResult::pass(
"workspace",
format!("{} (writable)", workspace_dir.display()),
)
}
Err(e) => CheckResult::fail(
"workspace",
format!("{} (not writable: {e})", workspace_dir.display()),
),
}
}
Ok(_) => CheckResult::fail(
"workspace",
format!("{} exists but is not a directory", workspace_dir.display()),
),
Err(e) => CheckResult::fail(
"workspace",
format!("{} (error: {e})", workspace_dir.display()),
),
}
}
fn check_sqlite(workspace_dir: &Path) -> CheckResult {
let db_path = workspace_dir.join("memory.db");
match rusqlite::Connection::open(&db_path) {
Ok(conn) => match conn.execute_batch("SELECT 1") {
Ok(()) => CheckResult::pass("sqlite", "memory.db opens and responds"),
Err(e) => CheckResult::fail("sqlite", format!("query failed: {e}")),
},
Err(e) => CheckResult::fail("sqlite", format!("cannot open memory.db: {e}")),
}
}
fn check_provider_registry() -> CheckResult {
let providers = crate::providers::list_providers();
if providers.is_empty() {
CheckResult::fail("providers", "no providers registered")
} else {
CheckResult::pass(
"providers",
format!("{} providers available", providers.len()),
)
}
}
fn check_tool_registry(config: &crate::config::Config) -> CheckResult {
let security = std::sync::Arc::new(crate::security::SecurityPolicy::from_config(
&config.autonomy,
&config.workspace_dir,
));
let tools = crate::tools::default_tools(security);
if tools.is_empty() {
CheckResult::fail("tools", "no tools registered")
} else {
CheckResult::pass("tools", format!("{} core tools available", tools.len()))
}
}
fn check_channel_config(config: &crate::config::Config) -> CheckResult {
let channels = config.channels_config.channels();
let configured = channels.iter().filter(|(_, c)| *c).count();
CheckResult::pass(
"channels",
format!(
"{} channel types, {} configured",
channels.len(),
configured
),
)
}
fn check_security_policy(config: &crate::config::Config) -> CheckResult {
let _policy =
crate::security::SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
CheckResult::pass(
"security",
format!("autonomy level: {:?}", config.autonomy.level),
)
}
fn check_version() -> CheckResult {
let version = env!("CARGO_PKG_VERSION");
CheckResult::pass("version", format!("v{version}"))
}
async fn check_gateway_health(config: &crate::config::Config) -> CheckResult {
let port = config.gateway.port;
let host = if config.gateway.host == "[::]" || config.gateway.host == "0.0.0.0" {
"127.0.0.1"
} else {
&config.gateway.host
};
let url = format!("http://{host}:{port}/health");
match reqwest::Client::new()
.get(&url)
.timeout(std::time::Duration::from_secs(5))
.send()
.await
{
Ok(resp) if resp.status().is_success() => {
CheckResult::pass("gateway", format!("health OK at {url}"))
}
Ok(resp) => CheckResult::fail("gateway", format!("health returned {}", resp.status())),
Err(e) => CheckResult::fail("gateway", format!("not reachable at {url}: {e}")),
}
}
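`check_gateway_health` and the later `check_websocket_handshake` share the same host normalization: a wildcard bind address (`[::]` or `0.0.0.0`) is not directly connectable, but a server bound to it accepts loopback connections, so the probe targets `127.0.0.1` instead. Extracted as a std-only sketch (helper name hypothetical):

```rust
/// Map a gateway bind address to a connectable probe host:
/// wildcard binds accept loopback connections, so probe there.
fn probe_host(bind_host: &str) -> &str {
    if bind_host == "[::]" || bind_host == "0.0.0.0" {
        "127.0.0.1"
    } else {
        bind_host
    }
}

fn main() {
    assert_eq!(probe_host("[::]"), "127.0.0.1");
    assert_eq!(probe_host("0.0.0.0"), "127.0.0.1");
    // Explicit addresses are probed as configured.
    assert_eq!(probe_host("192.168.1.10"), "192.168.1.10");
}
```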
async fn check_memory_roundtrip(config: &crate::config::Config) -> CheckResult {
let mem = match crate::memory::create_memory(
&config.memory,
&config.workspace_dir,
config.api_key.as_deref(),
) {
Ok(m) => m,
Err(e) => return CheckResult::fail("memory", format!("cannot create backend: {e}")),
};
let test_key = "__selftest_probe__";
let test_value = "selftest_ok";
if let Err(e) = mem
.store(
test_key,
test_value,
crate::memory::MemoryCategory::Core,
None,
)
.await
{
return CheckResult::fail("memory", format!("write failed: {e}"));
}
match mem.recall(test_key, 1, None).await {
Ok(entries) if !entries.is_empty() => {
let _ = mem.forget(test_key).await;
CheckResult::pass("memory", "write/read/delete round-trip OK")
}
Ok(_) => {
let _ = mem.forget(test_key).await;
CheckResult::fail("memory", "no entries returned after round-trip")
}
Err(e) => {
let _ = mem.forget(test_key).await;
CheckResult::fail("memory", format!("read failed: {e}"))
}
}
}
async fn check_websocket_handshake(config: &crate::config::Config) -> CheckResult {
let port = config.gateway.port;
let host = if config.gateway.host == "[::]" || config.gateway.host == "0.0.0.0" {
"127.0.0.1"
} else {
&config.gateway.host
};
let url = format!("ws://{host}:{port}/ws/chat");
match tokio_tungstenite::connect_async(&url).await {
Ok((_, _)) => CheckResult::pass("websocket", format!("handshake OK at {url}")),
Err(e) => CheckResult::fail("websocket", format!("handshake failed at {url}: {e}")),
}
}
+276
@@ -0,0 +1,276 @@
//! `zeroclaw update` — self-update pipeline with rollback.
use anyhow::{bail, Context, Result};
use std::path::Path;
use tracing::{info, warn};
const GITHUB_RELEASES_LATEST_URL: &str =
"https://api.github.com/repos/zeroclaw-labs/zeroclaw/releases/latest";
const GITHUB_RELEASES_TAG_URL: &str =
"https://api.github.com/repos/zeroclaw-labs/zeroclaw/releases/tags";
#[derive(Debug)]
pub struct UpdateInfo {
pub current_version: String,
pub latest_version: String,
pub download_url: Option<String>,
pub is_newer: bool,
}
/// Check for available updates without downloading.
///
/// If `target_version` is `Some`, fetch that specific release tag instead of latest.
pub async fn check(target_version: Option<&str>) -> Result<UpdateInfo> {
let current = env!("CARGO_PKG_VERSION").to_string();
let client = reqwest::Client::builder()
.user_agent(format!("zeroclaw/{current}"))
.timeout(std::time::Duration::from_secs(15))
.build()?;
let url = match target_version {
Some(v) => {
let tag = if v.starts_with('v') {
v.to_string()
} else {
format!("v{v}")
};
format!("{GITHUB_RELEASES_TAG_URL}/{tag}")
}
None => GITHUB_RELEASES_LATEST_URL.to_string(),
};
let resp = client
.get(&url)
.send()
.await
.context("failed to reach GitHub releases API")?;
if !resp.status().is_success() {
bail!("GitHub API returned {}", resp.status());
}
let release: serde_json::Value = resp.json().await?;
let tag = release["tag_name"]
.as_str()
.unwrap_or("unknown")
.trim_start_matches('v')
.to_string();
let download_url = find_asset_url(&release);
let is_newer = version_is_newer(&current, &tag);
Ok(UpdateInfo {
current_version: current,
latest_version: tag,
download_url,
is_newer,
})
}
/// Run the full 6-phase update pipeline.
///
/// If `target_version` is `Some`, fetch that specific version instead of latest.
pub async fn run(target_version: Option<&str>) -> Result<()> {
// Phase 1: Preflight
info!("Phase 1/6: Preflight checks...");
let update_info = check(target_version).await?;
if !update_info.is_newer {
println!("Already up to date (v{}).", update_info.current_version);
return Ok(());
}
println!(
"Update available: v{} -> v{}",
update_info.current_version, update_info.latest_version
);
let download_url = update_info
.download_url
.context("no suitable binary found for this platform")?;
let current_exe =
std::env::current_exe().context("cannot determine current executable path")?;
// Phase 2: Download
info!("Phase 2/6: Downloading...");
let temp_dir = tempfile::tempdir().context("failed to create temp dir")?;
let download_path = temp_dir.path().join("zeroclaw_new");
download_binary(&download_url, &download_path).await?;
// Phase 3: Backup
info!("Phase 3/6: Creating backup...");
let backup_path = current_exe.with_extension("bak");
tokio::fs::copy(&current_exe, &backup_path)
.await
.context("failed to backup current binary")?;
// Phase 4: Validate
info!("Phase 4/6: Validating download...");
validate_binary(&download_path).await?;
// Phase 5: Swap
info!("Phase 5/6: Swapping binary...");
if let Err(e) = swap_binary(&download_path, &current_exe).await {
// Rollback
warn!("Swap failed, rolling back: {e}");
if let Err(rollback_err) = tokio::fs::copy(&backup_path, &current_exe).await {
eprintln!("CRITICAL: Rollback also failed: {rollback_err}");
eprintln!(
"Manual recovery: cp {} {}",
backup_path.display(),
current_exe.display()
);
}
bail!("Update failed during swap: {e}");
}
// Phase 6: Smoke test
info!("Phase 6/6: Smoke test...");
match smoke_test(&current_exe).await {
Ok(()) => {
// Cleanup backup on success
let _ = tokio::fs::remove_file(&backup_path).await;
println!("Successfully updated to v{}!", update_info.latest_version);
Ok(())
}
Err(e) => {
warn!("Smoke test failed, rolling back: {e}");
tokio::fs::copy(&backup_path, &current_exe)
.await
.context("rollback after smoke test failure")?;
bail!("Update rolled back — smoke test failed: {e}");
}
}
}
fn find_asset_url(release: &serde_json::Value) -> Option<String> {
let target = if cfg!(target_os = "macos") {
if cfg!(target_arch = "aarch64") {
"aarch64-apple-darwin"
} else {
"x86_64-apple-darwin"
}
} else if cfg!(target_os = "linux") {
if cfg!(target_arch = "aarch64") {
"aarch64-unknown-linux"
} else {
"x86_64-unknown-linux"
}
} else {
return None;
};
release["assets"]
.as_array()?
.iter()
.find(|asset| {
asset["name"]
.as_str()
.map(|name| name.contains(target))
.unwrap_or(false)
})
.and_then(|asset| asset["browser_download_url"].as_str().map(String::from))
}
fn version_is_newer(current: &str, candidate: &str) -> bool {
let parse = |v: &str| -> Vec<u32> { v.split('.').filter_map(|p| p.parse().ok()).collect() };
let cur = parse(current);
let cand = parse(candidate);
cand > cur
}
async fn download_binary(url: &str, dest: &Path) -> Result<()> {
let client = reqwest::Client::builder()
.user_agent(format!("zeroclaw/{}", env!("CARGO_PKG_VERSION")))
.timeout(std::time::Duration::from_secs(300))
.build()?;
let resp = client
.get(url)
.send()
.await
.context("download request failed")?;
if !resp.status().is_success() {
bail!("download returned {}", resp.status());
}
let bytes = resp.bytes().await.context("failed to read download body")?;
tokio::fs::write(dest, &bytes)
.await
.context("failed to write downloaded binary")?;
// Make executable on Unix
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
let perms = std::fs::Permissions::from_mode(0o755);
tokio::fs::set_permissions(dest, perms).await?;
}
Ok(())
}
async fn validate_binary(path: &Path) -> Result<()> {
let meta = tokio::fs::metadata(path).await?;
if meta.len() < 1_000_000 {
bail!(
"downloaded binary too small ({} bytes), likely corrupt",
meta.len()
);
}
// Quick check: try running --version
let output = tokio::process::Command::new(path)
.arg("--version")
.output()
.await
.context("cannot execute downloaded binary")?;
if !output.status.success() {
bail!("downloaded binary --version check failed");
}
let stdout = String::from_utf8_lossy(&output.stdout);
if !stdout.contains("zeroclaw") {
bail!("downloaded binary does not appear to be zeroclaw");
}
Ok(())
}
async fn swap_binary(new: &Path, target: &Path) -> Result<()> {
tokio::fs::copy(new, target)
.await
.context("failed to overwrite binary")?;
Ok(())
}
async fn smoke_test(binary: &Path) -> Result<()> {
let output = tokio::process::Command::new(binary)
.arg("--version")
.output()
.await
.context("smoke test: cannot execute updated binary")?;
if !output.status.success() {
bail!("smoke test: updated binary returned non-zero exit code");
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_version_comparison() {
assert!(version_is_newer("0.4.3", "0.5.0"));
assert!(version_is_newer("0.4.3", "0.4.4"));
assert!(!version_is_newer("0.5.0", "0.4.3"));
assert!(!version_is_newer("0.4.3", "0.4.3"));
assert!(version_is_newer("1.0.0", "2.0.0"));
}
}
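One caveat with the `Vec<u32>` comparison in `version_is_newer`: segments that fail to parse as `u32` are silently dropped by `filter_map`, so a pre-release tag like `0.5.0-rc1` loses its entire `0-rc1` segment and compares as `0.5`, i.e. older than `0.5.0`; unequal component counts compare lexicographically. A std-only sketch of the same parse, assuming plain `MAJOR.MINOR.PATCH` tags are the norm:

```rust
fn parse_version(v: &str) -> Vec<u32> {
    // Non-numeric segments (e.g. "0-rc1") are dropped, mirroring the
    // filter_map in version_is_newer above.
    v.split('.').filter_map(|p| p.parse().ok()).collect()
}

fn main() {
    // Lexicographic Vec comparison handles ordinary semver triples.
    assert!(parse_version("0.5.0") > parse_version("0.4.9"));
    // Unequal lengths: a prefix-equal longer version compares greater.
    assert!(parse_version("0.4.3") > parse_version("0.4"));
    // The "0-rc1" segment fails to parse and is dropped entirely,
    // so "0.5.0-rc1" compares as 0.5 — older than 0.5.0.
    assert_eq!(parse_version("0.5.0-rc1"), vec![0, 5]);
    assert!(parse_version("0.5.0-rc1") < parse_version("0.5.0"));
}
```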
+21 -16
@@ -6,23 +6,27 @@ pub mod workspace;
pub use schema::{
apply_runtime_proxy_to_builder, build_runtime_proxy_client,
build_runtime_proxy_client_with_timeouts, runtime_proxy_config, set_runtime_proxy_config,
AgentConfig, AuditConfig, AutonomyConfig, BackupConfig, BrowserComputerUseConfig,
BrowserConfig, BuiltinHooksConfig, ChannelsConfig, ClassificationRule, CloudOpsConfig,
ComposioConfig, Config, ConversationalAiConfig, CostConfig, CronConfig, DataRetentionConfig,
DelegateAgentConfig, DiscordConfig, DockerRuntimeConfig, EdgeTtsConfig, ElevenLabsTtsConfig,
EmbeddingRouteConfig, EstopConfig, FeishuConfig, GatewayConfig, GoogleTtsConfig,
AgentConfig, AssemblyAiSttConfig, AuditConfig, AutonomyConfig, BackupConfig,
BrowserComputerUseConfig, BrowserConfig, BuiltinHooksConfig, ChannelsConfig,
ClassificationRule, CloudOpsConfig, ComposioConfig, Config, ConversationalAiConfig, CostConfig,
CronConfig, DataRetentionConfig, DeepgramSttConfig, DelegateAgentConfig, DiscordConfig,
DockerRuntimeConfig, EdgeTtsConfig, ElevenLabsTtsConfig, EmbeddingRouteConfig, EstopConfig,
FeishuConfig, GatewayConfig, GoogleSttConfig, GoogleTtsConfig, GoogleWorkspaceConfig,
HardwareConfig, HardwareTransport, HeartbeatConfig, HooksConfig, HttpRequestConfig,
IMessageConfig, IdentityConfig, LarkConfig, MatrixConfig, McpConfig, McpServerConfig,
McpTransport, MemoryConfig, Microsoft365Config, ModelRouteConfig, MultimodalConfig,
NextcloudTalkConfig, NodeTransportConfig, NodesConfig, NotionConfig, ObservabilityConfig,
OpenAiTtsConfig, OpenVpnTunnelConfig, OtpConfig, OtpMethod, PeripheralBoardConfig,
PeripheralsConfig, ProjectIntelConfig, ProxyConfig, ProxyScope, QdrantConfig,
QueryClassificationConfig, ReliabilityConfig, ResourceLimitsConfig, RuntimeConfig,
SandboxBackend, SandboxConfig, SchedulerConfig, SecretsConfig, SecurityConfig,
SecurityOpsConfig, SkillsConfig, SkillsPromptInjectionMode, SlackConfig, StorageConfig,
StorageProviderConfig, StorageProviderSection, StreamMode, SwarmConfig, SwarmStrategy,
TelegramConfig, ToolFilterGroup, ToolFilterGroupMode, TranscriptionConfig, TtsConfig,
TunnelConfig, WebFetchConfig, WebSearchConfig, WebhookConfig, WorkspaceConfig,
IMessageConfig, IdentityConfig, ImageProviderDalleConfig, ImageProviderFluxConfig,
ImageProviderImagenConfig, ImageProviderStabilityConfig, KnowledgeConfig, LarkConfig,
LinkedInConfig, LinkedInContentConfig, LinkedInImageConfig, MatrixConfig, McpConfig,
McpServerConfig, McpTransport, MemoryConfig, Microsoft365Config, ModelRouteConfig,
MultimodalConfig, NextcloudTalkConfig, NodeTransportConfig, NodesConfig, NotionConfig,
ObservabilityConfig, OpenAiSttConfig, OpenAiTtsConfig, OpenVpnTunnelConfig, OtpConfig,
OtpMethod, PeripheralBoardConfig, PeripheralsConfig, PluginsConfig, ProjectIntelConfig,
ProxyConfig, ProxyScope, QdrantConfig, QueryClassificationConfig, ReliabilityConfig,
ResourceLimitsConfig, RuntimeConfig, SandboxBackend, SandboxConfig, SchedulerConfig,
SecretsConfig, SecurityConfig, SecurityOpsConfig, SkillCreationConfig, SkillsConfig,
SkillsPromptInjectionMode, SlackConfig, StorageConfig, StorageProviderConfig,
StorageProviderSection, StreamMode, SwarmConfig, SwarmStrategy, TelegramConfig,
ToolFilterGroup, ToolFilterGroupMode, TranscriptionConfig, TtsConfig, TunnelConfig,
WebFetchConfig, WebSearchConfig, WebhookConfig, WorkspaceConfig,
};
pub fn name_and_presence<T: traits::ChannelConfig>(channel: Option<&T>) -> (&'static str, bool) {
@@ -51,6 +55,7 @@ mod tests {
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
ack_reactions: None,
};
let discord = DiscordConfig {
+1172 -17
File diff suppressed because it is too large.
+4 -1
@@ -17,7 +17,10 @@ pub use store::{
add_agent_job, due_jobs, get_job, list_jobs, list_runs, record_last_run, record_run,
remove_job, reschedule_after_run, update_job,
};
pub use types::{CronJob, CronJobPatch, CronRun, DeliveryConfig, JobType, Schedule, SessionTarget};
pub use types::{
deserialize_maybe_stringified, CronJob, CronJobPatch, CronRun, DeliveryConfig, JobType,
Schedule, SessionTarget,
};
/// Validate a shell command against the full security policy (allowlist + risk gate).
///
+16 -2
@@ -242,6 +242,15 @@ async fn persist_job_result(
if success {
if let Err(e) = remove_job(config, &job.id) {
tracing::warn!("Failed to remove one-shot cron job after success: {e}");
// Fall back to disabling the job so it won't re-trigger.
let _ = update_job(
config,
&job.id,
CronJobPatch {
enabled: Some(false),
..CronJobPatch::default()
},
);
}
} else {
let _ = record_last_run(config, &job.id, finished_at, false, output);
@@ -1038,7 +1047,7 @@ mod tests {
}
#[tokio::test]
async fn persist_job_result_at_schedule_without_delete_after_run_is_not_deleted() {
async fn persist_job_result_at_schedule_without_delete_after_run_is_disabled() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp).await;
let at = Utc::now() + ChronoDuration::minutes(10);
@@ -1060,8 +1069,13 @@ mod tests {
let success = persist_job_result(&config, &job, true, "ok", started, finished).await;
assert!(success);
+// After reschedule_after_run, At schedule jobs should be disabled
+// to prevent re-execution with a past next_run timestamp.
let updated = cron::get_job(&config, &job.id).unwrap();
-assert!(updated.enabled);
+assert!(
+    !updated.enabled,
+    "At schedule job should be disabled after execution via reschedule"
+);
assert_eq!(updated.last_status.as_deref(), Some("ok"));
}
+67 -17
@@ -285,26 +285,41 @@ pub fn reschedule_after_run(
output: &str,
) -> Result<()> {
let now = Utc::now();
-let next_run = next_run_for_schedule(&job.schedule, now)?;
let status = if success { "ok" } else { "error" };
let bounded_output = truncate_cron_output(output);
-with_connection(config, |conn| {
-conn.execute(
-"UPDATE cron_jobs
-SET next_run = ?1, last_run = ?2, last_status = ?3, last_output = ?4
-WHERE id = ?5",
-params![
-next_run.to_rfc3339(),
-now.to_rfc3339(),
-status,
-bounded_output,
-job.id
-],
-)
-.context("Failed to update cron job run state")?;
-Ok(())
-})
+// One-shot `At` schedules have no future occurrence — record the run
+// result and disable the job so it won't be picked up again.
+if matches!(job.schedule, Schedule::At { .. }) {
+with_connection(config, |conn| {
+conn.execute(
+"UPDATE cron_jobs
+SET enabled = 0, last_run = ?1, last_status = ?2, last_output = ?3
+WHERE id = ?4",
+params![now.to_rfc3339(), status, bounded_output, job.id],
+)
+.context("Failed to disable completed one-shot cron job")?;
+Ok(())
+})
+} else {
+let next_run = next_run_for_schedule(&job.schedule, now)?;
+with_connection(config, |conn| {
+conn.execute(
+"UPDATE cron_jobs
+SET next_run = ?1, last_run = ?2, last_status = ?3, last_output = ?4
+WHERE id = ?5",
+params![
+next_run.to_rfc3339(),
+now.to_rfc3339(),
+status,
+bounded_output,
+job.id
+],
+)
+.context("Failed to update cron job run state")?;
+Ok(())
+})
+}
}
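The branching introduced here (disable one-shot `At` jobs, recompute `next_run` for everything else) can be sketched against an in-memory SQLite table. The table and column names follow the diff; the function shape and the placeholder `next_run` value are illustrative only:

```python
import sqlite3
from datetime import datetime, timezone

def reschedule_after_run(conn, job_id, is_one_shot, success, output):
    """Record a run result; disable one-shot jobs instead of rescheduling."""
    now = datetime.now(timezone.utc).isoformat()
    status = "ok" if success else "error"
    if is_one_shot:
        # One-shot schedules have no future occurrence: disable the job
        # so the scheduler never picks it up again.
        conn.execute(
            "UPDATE cron_jobs SET enabled = 0, last_run = ?, "
            "last_status = ?, last_output = ? WHERE id = ?",
            (now, status, output, job_id),
        )
    else:
        next_run = now  # placeholder: real code computes the next occurrence
        conn.execute(
            "UPDATE cron_jobs SET next_run = ?, last_run = ?, "
            "last_status = ?, last_output = ? WHERE id = ?",
            (next_run, now, status, output, job_id),
        )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cron_jobs (id TEXT PRIMARY KEY, enabled INT, "
    "next_run TEXT, last_run TEXT, last_status TEXT, last_output TEXT)"
)
conn.execute("INSERT INTO cron_jobs (id, enabled) VALUES ('j1', 1)")
reschedule_after_run(conn, "j1", is_one_shot=True, success=True, output="done")
enabled, status = conn.execute(
    "SELECT enabled, last_status FROM cron_jobs WHERE id='j1'"
).fetchone()
print(enabled, status)  # 0 ok
```

Note the run result (`last_status`, `last_output`) is still recorded in the one-shot path, matching the test assertions above.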
pub fn record_run(
@@ -852,6 +867,41 @@ mod tests {
assert!(stored.len() <= MAX_CRON_OUTPUT_BYTES);
}
#[test]
fn reschedule_after_run_disables_at_schedule_job() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let at = Utc::now() + ChronoDuration::minutes(10);
let job = add_shell_job(&config, None, Schedule::At { at }, "echo once").unwrap();
reschedule_after_run(&config, &job, true, "done").unwrap();
let stored = get_job(&config, &job.id).unwrap();
assert!(
!stored.enabled,
"At schedule job should be disabled after reschedule"
);
assert_eq!(stored.last_status.as_deref(), Some("ok"));
}
#[test]
fn reschedule_after_run_disables_at_schedule_job_on_failure() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let at = Utc::now() + ChronoDuration::minutes(10);
let job = add_shell_job(&config, None, Schedule::At { at }, "echo once").unwrap();
reschedule_after_run(&config, &job, false, "failed").unwrap();
let stored = get_job(&config, &job.id).unwrap();
assert!(
!stored.enabled,
"At schedule job should be disabled after reschedule even on failure"
);
assert_eq!(stored.last_status.as_deref(), Some("error"));
assert_eq!(stored.last_output.as_deref(), Some("failed"));
}
#[test]
fn reschedule_after_run_truncates_last_output() {
let tmp = TempDir::new().unwrap();
+66 -1
@@ -1,6 +1,32 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
/// Try to deserialize a `serde_json::Value` as `T`. If the value is a JSON
/// string that looks like an object (i.e. the LLM double-serialized it), parse
/// the inner string first and then deserialize the resulting object. This
/// provides backward-compatible handling for both `Value::Object` and
/// `Value::String` representations.
pub fn deserialize_maybe_stringified<T: serde::de::DeserializeOwned>(
v: &serde_json::Value,
) -> Result<T, serde_json::Error> {
// Fast path: value is already the right shape (object, array, etc.)
match serde_json::from_value::<T>(v.clone()) {
Ok(parsed) => Ok(parsed),
Err(first_err) => {
// If it's a string, try parsing the string as JSON first.
if let Some(s) = v.as_str() {
let s = s.trim();
if s.starts_with('{') || s.starts_with('[') {
if let Ok(inner) = serde_json::from_str::<serde_json::Value>(s) {
return serde_json::from_value::<T>(inner);
}
}
}
Err(first_err)
}
}
}
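A minimal Python sketch of the same fallback, using only the standard `json` module (the Rust version additionally preserves and returns the first deserialization error):

```python
import json

def parse_maybe_stringified(value):
    """Accept either an already-parsed object or a JSON string containing
    one (the double-serialized case the doc comment describes)."""
    if isinstance(value, str):
        s = value.strip()
        # Only attempt the inner parse when the string looks like JSON.
        if s.startswith("{") or s.startswith("["):
            return json.loads(s)
        raise ValueError("not a JSON object/array string")
    return value

print(parse_maybe_stringified({"kind": "cron", "expr": "*/5 * * * *"})["expr"])
print(parse_maybe_stringified('{"kind":"every","every_ms":60000}')["every_ms"])
```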
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)]
#[serde(rename_all = "lowercase")]
pub enum JobType {
@@ -154,7 +180,46 @@ pub struct CronJobPatch {
#[cfg(test)]
mod tests {
use super::JobType;
use super::*;
#[test]
fn deserialize_schedule_from_object() {
let val = serde_json::json!({"kind": "cron", "expr": "*/5 * * * *"});
let sched = deserialize_maybe_stringified::<Schedule>(&val).unwrap();
assert!(matches!(sched, Schedule::Cron { ref expr, .. } if expr == "*/5 * * * *"));
}
#[test]
fn deserialize_schedule_from_string() {
let val = serde_json::Value::String(r#"{"kind":"cron","expr":"*/5 * * * *"}"#.to_string());
let sched = deserialize_maybe_stringified::<Schedule>(&val).unwrap();
assert!(matches!(sched, Schedule::Cron { ref expr, .. } if expr == "*/5 * * * *"));
}
#[test]
fn deserialize_schedule_string_with_tz() {
let val = serde_json::Value::String(
r#"{"kind":"cron","expr":"*/30 9-15 * * 1-5","tz":"Asia/Shanghai"}"#.to_string(),
);
let sched = deserialize_maybe_stringified::<Schedule>(&val).unwrap();
match sched {
Schedule::Cron { tz, .. } => assert_eq!(tz.as_deref(), Some("Asia/Shanghai")),
_ => panic!("expected Cron variant"),
}
}
#[test]
fn deserialize_every_from_string() {
let val = serde_json::Value::String(r#"{"kind":"every","every_ms":60000}"#.to_string());
let sched = deserialize_maybe_stringified::<Schedule>(&val).unwrap();
assert!(matches!(sched, Schedule::Every { every_ms: 60000 }));
}
#[test]
fn deserialize_invalid_string_returns_error() {
let val = serde_json::Value::String("not json at all".to_string());
assert!(deserialize_maybe_stringified::<Schedule>(&val).is_err());
}
#[test]
fn job_type_try_from_accepts_known_values_case_insensitive() {
+12 -3
@@ -72,7 +72,7 @@ pub async fn run(config: Config, host: String, port: u16) -> Result<()> {
move || {
let cfg = gateway_cfg.clone();
let host = gateway_host.clone();
-async move { crate::gateway::run_gateway(&host, port, cfg).await }
+async move { Box::pin(crate::gateway::run_gateway(&host, port, cfg)).await }
},
));
}
@@ -116,7 +116,7 @@ pub async fn run(config: Config, host: String, port: u16) -> Result<()> {
max_backoff,
move || {
let cfg = scheduler_cfg.clone();
-async move { crate::cron::scheduler::run(cfg).await }
+async move { Box::pin(crate::cron::scheduler::run(cfg)).await }
},
));
} else {
@@ -127,6 +127,9 @@ pub async fn run(config: Config, host: String, port: u16) -> Result<()> {
println!("🧠 ZeroClaw daemon started");
println!(" Gateway: http://{host}:{port}");
println!(" Components: gateway, channels, heartbeat, scheduler");
if config.gateway.require_pairing {
println!(" Pairing: enabled (code appears in gateway output above)");
}
println!(" Ctrl+C or SIGTERM to stop");
// Wait for shutdown signal (SIGINT or SIGTERM)
@@ -312,7 +315,10 @@ async fn run_heartbeat_worker(config: Config) -> Result<()> {
// ── Phase 1: LLM decision (two-phase mode) ──────────────
let tasks_to_run = if two_phase {
-let decision_prompt = HeartbeatEngine::build_decision_prompt(&tasks);
+let decision_prompt = format!(
+    "[Heartbeat Task | decision] {}",
+    HeartbeatEngine::build_decision_prompt(&tasks),
+);
match Box::pin(crate::agent::run(
config.clone(),
Some(decision_prompt),
@@ -639,6 +645,7 @@ mod tests {
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
ack_reactions: None,
});
assert!(has_supervised_channels(&config));
}
@@ -752,6 +759,7 @@ mod tests {
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
ack_reactions: None,
});
let target = resolve_heartbeat_delivery(&config).unwrap();
@@ -768,6 +776,7 @@ mod tests {
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
ack_reactions: None,
});
let target = resolve_heartbeat_delivery(&config).unwrap();
+4
@@ -1281,6 +1281,8 @@ mod tests {
agentic: false,
allowed_tools: Vec::new(),
max_iterations: 10,
timeout_secs: None,
agentic_timeout_secs: None,
},
);
config.agents.insert(
@@ -1295,6 +1297,8 @@ mod tests {
agentic: false,
allowed_tools: Vec::new(),
max_iterations: 10,
timeout_secs: None,
agentic_timeout_secs: None,
},
);
+70
@@ -1076,6 +1076,76 @@ fn hydrate_config_for_save(
incoming
}
// ── Session API handlers ─────────────────────────────────────────
/// GET /api/sessions — list gateway sessions
pub async fn handle_api_sessions_list(
State(state): State<AppState>,
headers: HeaderMap,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
let Some(ref backend) = state.session_backend else {
return Json(serde_json::json!({
"sessions": [],
"message": "Session persistence is disabled"
}))
.into_response();
};
let all_metadata = backend.list_sessions_with_metadata();
let gw_sessions: Vec<serde_json::Value> = all_metadata
.into_iter()
.filter_map(|meta| {
let session_id = meta.key.strip_prefix("gw_")?;
Some(serde_json::json!({
"session_id": session_id,
"created_at": meta.created_at.to_rfc3339(),
"last_activity": meta.last_activity.to_rfc3339(),
"message_count": meta.message_count,
}))
})
.collect();
Json(serde_json::json!({ "sessions": gw_sessions })).into_response()
}
/// DELETE /api/sessions/{id} — delete a gateway session
pub async fn handle_api_session_delete(
State(state): State<AppState>,
headers: HeaderMap,
Path(id): Path<String>,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
let Some(ref backend) = state.session_backend else {
return (
StatusCode::NOT_FOUND,
Json(serde_json::json!({"error": "Session persistence is disabled"})),
)
.into_response();
};
let session_key = format!("gw_{id}");
match backend.delete_session(&session_key) {
Ok(true) => Json(serde_json::json!({"deleted": true, "session_id": id})).into_response(),
Ok(false) => (
StatusCode::NOT_FOUND,
Json(serde_json::json!({"error": "Session not found"})),
)
.into_response(),
Err(e) => (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({"error": format!("Failed to delete session: {e}")})),
)
.into_response(),
}
}
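Both handlers rely on the same `gw_` prefix convention: gateway sessions are namespaced away from channel sessions in the shared backend, and the prefix is stripped at the API boundary. A small sketch of that filtering (helper names are illustrative):

```python
GW_SESSION_PREFIX = "gw_"

def list_gateway_sessions(all_keys):
    """Keep only gateway-owned session keys; return API-facing IDs."""
    out = []
    for key in all_keys:
        if key.startswith(GW_SESSION_PREFIX):
            out.append(key[len(GW_SESSION_PREFIX):])
    return out

def session_key(session_id):
    """Inverse mapping used by the delete handler."""
    return f"{GW_SESSION_PREFIX}{session_id}"

print(list_gateway_sessions(["gw_abc", "telegram_123", "gw_def"]))  # ['abc', 'def']
print(session_key("abc"))  # gw_abc
```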
#[cfg(test)]
mod tests {
use super::*;
+383
@@ -0,0 +1,383 @@
//! Device management and pairing API handlers.
use super::AppState;
use axum::{
extract::State,
http::{header, HeaderMap, StatusCode},
response::{IntoResponse, Json},
};
use chrono::{DateTime, Utc};
use parking_lot::Mutex;
use rusqlite::Connection;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::{Path, PathBuf};
/// Metadata about a paired device.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DeviceInfo {
pub id: String,
pub name: Option<String>,
pub device_type: Option<String>,
pub paired_at: DateTime<Utc>,
pub last_seen: DateTime<Utc>,
pub ip_address: Option<String>,
}
/// Registry of paired devices backed by SQLite.
#[derive(Debug)]
pub struct DeviceRegistry {
cache: Mutex<HashMap<String, DeviceInfo>>,
db_path: PathBuf,
}
impl DeviceRegistry {
pub fn new(workspace_dir: &Path) -> Self {
let db_path = workspace_dir.join("devices.db");
let conn = Connection::open(&db_path).expect("Failed to open device registry database");
conn.execute_batch(
"CREATE TABLE IF NOT EXISTS devices (
token_hash TEXT PRIMARY KEY,
id TEXT NOT NULL,
name TEXT,
device_type TEXT,
paired_at TEXT NOT NULL,
last_seen TEXT NOT NULL,
ip_address TEXT
)",
)
.expect("Failed to create devices table");
// Warm the in-memory cache from DB
let mut cache = HashMap::new();
let mut stmt = conn
.prepare("SELECT token_hash, id, name, device_type, paired_at, last_seen, ip_address FROM devices")
.expect("Failed to prepare device select");
let rows = stmt
.query_map([], |row| {
let token_hash: String = row.get(0)?;
let id: String = row.get(1)?;
let name: Option<String> = row.get(2)?;
let device_type: Option<String> = row.get(3)?;
let paired_at_str: String = row.get(4)?;
let last_seen_str: String = row.get(5)?;
let ip_address: Option<String> = row.get(6)?;
let paired_at = DateTime::parse_from_rfc3339(&paired_at_str)
.map(|dt| dt.with_timezone(&Utc))
.unwrap_or_else(|_| Utc::now());
let last_seen = DateTime::parse_from_rfc3339(&last_seen_str)
.map(|dt| dt.with_timezone(&Utc))
.unwrap_or_else(|_| Utc::now());
Ok((
token_hash,
DeviceInfo {
id,
name,
device_type,
paired_at,
last_seen,
ip_address,
},
))
})
.expect("Failed to query devices");
for (hash, info) in rows.flatten() {
cache.insert(hash, info);
}
Self {
cache: Mutex::new(cache),
db_path,
}
}
fn open_db(&self) -> Connection {
Connection::open(&self.db_path).expect("Failed to open device registry database")
}
pub fn register(&self, token_hash: String, info: DeviceInfo) {
let conn = self.open_db();
conn.execute(
"INSERT OR REPLACE INTO devices (token_hash, id, name, device_type, paired_at, last_seen, ip_address) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
token_hash,
info.id,
info.name,
info.device_type,
info.paired_at.to_rfc3339(),
info.last_seen.to_rfc3339(),
info.ip_address,
],
)
.expect("Failed to insert device");
self.cache.lock().insert(token_hash, info);
}
pub fn list(&self) -> Vec<DeviceInfo> {
let conn = self.open_db();
let mut stmt = conn
.prepare("SELECT token_hash, id, name, device_type, paired_at, last_seen, ip_address FROM devices")
.expect("Failed to prepare device select");
let rows = stmt
.query_map([], |row| {
let id: String = row.get(1)?;
let name: Option<String> = row.get(2)?;
let device_type: Option<String> = row.get(3)?;
let paired_at_str: String = row.get(4)?;
let last_seen_str: String = row.get(5)?;
let ip_address: Option<String> = row.get(6)?;
let paired_at = DateTime::parse_from_rfc3339(&paired_at_str)
.map(|dt| dt.with_timezone(&Utc))
.unwrap_or_else(|_| Utc::now());
let last_seen = DateTime::parse_from_rfc3339(&last_seen_str)
.map(|dt| dt.with_timezone(&Utc))
.unwrap_or_else(|_| Utc::now());
Ok(DeviceInfo {
id,
name,
device_type,
paired_at,
last_seen,
ip_address,
})
})
.expect("Failed to query devices");
rows.filter_map(|r| r.ok()).collect()
}
pub fn revoke(&self, device_id: &str) -> bool {
let conn = self.open_db();
let deleted = conn
.execute(
"DELETE FROM devices WHERE id = ?1",
rusqlite::params![device_id],
)
.unwrap_or(0);
if deleted > 0 {
let mut cache = self.cache.lock();
let key = cache
.iter()
.find(|(_, v)| v.id == device_id)
.map(|(k, _)| k.clone());
if let Some(key) = key {
cache.remove(&key);
}
true
} else {
false
}
}
pub fn update_last_seen(&self, token_hash: &str) {
let now = Utc::now();
let conn = self.open_db();
conn.execute(
"UPDATE devices SET last_seen = ?1 WHERE token_hash = ?2",
rusqlite::params![now.to_rfc3339(), token_hash],
)
.ok();
if let Some(device) = self.cache.lock().get_mut(token_hash) {
device.last_seen = now;
}
}
pub fn device_count(&self) -> usize {
self.cache.lock().len()
}
}
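The registry above is keyed by a token hash, with `INSERT OR REPLACE` keeping one row per token and `DELETE ... WHERE id = ?` driving revocation. A compact sketch of the same schema and the register/revoke flow using Python's stdlib `sqlite3` (the schema mirrors the diff; the function signatures are illustrative):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS devices (
        token_hash TEXT PRIMARY KEY, id TEXT NOT NULL, name TEXT,
        device_type TEXT, paired_at TEXT NOT NULL, last_seen TEXT NOT NULL,
        ip_address TEXT)"""
)

def register(token_hash, device_id, name=None):
    now = datetime.now(timezone.utc).isoformat()
    # One row per token: re-pairing with the same token replaces the row.
    conn.execute(
        "INSERT OR REPLACE INTO devices VALUES (?, ?, ?, NULL, ?, ?, NULL)",
        (token_hash, device_id, name, now, now),
    )

def revoke(device_id):
    # Revocation is by device id, not token hash.
    cur = conn.execute("DELETE FROM devices WHERE id = ?", (device_id,))
    return cur.rowcount > 0

register("hash1", "dev-1", "laptop")
print(revoke("dev-1"), revoke("dev-1"))  # True False
```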
/// Store for pending pairing requests.
#[derive(Debug)]
pub struct PairingStore {
pending: Mutex<Vec<PendingPairing>>,
max_pending: usize,
}
#[derive(Debug, Clone, Serialize)]
struct PendingPairing {
code: String,
created_at: DateTime<Utc>,
expires_at: DateTime<Utc>,
client_ip: Option<String>,
attempts: u32,
}
impl PairingStore {
pub fn new(max_pending: usize) -> Self {
Self {
pending: Mutex::new(Vec::new()),
max_pending,
}
}
pub fn pending_count(&self) -> usize {
let mut pending = self.pending.lock();
pending.retain(|p| p.expires_at > Utc::now());
pending.len()
}
}
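`pending_count` prunes expired entries before counting rather than running a background sweeper. The retain-then-len pattern reduces to a list filter (field names follow `PendingPairing`; the rest is illustrative):

```python
from datetime import datetime, timedelta, timezone

def pending_count(pending):
    """Drop expired pairing requests in place, then count the survivors."""
    now = datetime.now(timezone.utc)
    pending[:] = [p for p in pending if p["expires_at"] > now]
    return len(pending)

now = datetime.now(timezone.utc)
pending = [
    {"code": "111111", "expires_at": now + timedelta(minutes=5)},
    {"code": "222222", "expires_at": now - timedelta(minutes=1)},  # expired
]
print(pending_count(pending))  # 1
```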
fn extract_bearer(headers: &HeaderMap) -> Option<&str> {
headers
.get(header::AUTHORIZATION)
.and_then(|v| v.to_str().ok())
.and_then(|auth| auth.strip_prefix("Bearer "))
}
fn require_auth(state: &AppState, headers: &HeaderMap) -> Result<(), (StatusCode, &'static str)> {
if state.pairing.require_pairing() {
let token = extract_bearer(headers).unwrap_or("");
if !state.pairing.is_authenticated(token) {
return Err((StatusCode::UNAUTHORIZED, "Unauthorized"));
}
}
Ok(())
}
/// POST /api/pairing/initiate — initiate a new pairing session
pub async fn initiate_pairing(
State(state): State<AppState>,
headers: HeaderMap,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
match state.pairing.generate_new_pairing_code() {
Some(code) => Json(serde_json::json!({
"pairing_code": code,
"message": "New pairing code generated"
}))
.into_response(),
None => (
StatusCode::SERVICE_UNAVAILABLE,
"Pairing is disabled or not available",
)
.into_response(),
}
}
/// POST /api/pair — submit pairing code (for new device pairing)
pub async fn submit_pairing_enhanced(
State(state): State<AppState>,
headers: HeaderMap,
Json(body): Json<serde_json::Value>,
) -> impl IntoResponse {
let code = body["code"].as_str().unwrap_or("");
let device_name = body["device_name"].as_str().map(String::from);
let device_type = body["device_type"].as_str().map(String::from);
let client_id = headers
.get("X-Forwarded-For")
.and_then(|v| v.to_str().ok())
.unwrap_or("unknown")
.to_string();
match state.pairing.try_pair(code, &client_id).await {
Ok(Some(token)) => {
// Register the new device
let token_hash = {
use sha2::{Digest, Sha256};
let hash = Sha256::digest(token.as_bytes());
hex::encode(hash)
};
if let Some(ref registry) = state.device_registry {
registry.register(
token_hash,
DeviceInfo {
id: uuid::Uuid::new_v4().to_string(),
name: device_name,
device_type,
paired_at: Utc::now(),
last_seen: Utc::now(),
ip_address: Some(client_id),
},
);
}
Json(serde_json::json!({
"token": token,
"message": "Pairing successful"
}))
.into_response()
}
Ok(None) => (StatusCode::BAD_REQUEST, "Invalid or expired pairing code").into_response(),
Err(lockout_secs) => (
StatusCode::TOO_MANY_REQUESTS,
format!("Too many attempts. Locked out for {lockout_secs}s"),
)
.into_response(),
}
}
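On a successful pairing the handler stores only a SHA-256 hex digest of the bearer token in the device registry, never the token itself. A minimal sketch of that hashing step:

```python
import hashlib

def token_hash(token: str) -> str:
    """SHA-256 hex digest used as the device registry key."""
    return hashlib.sha256(token.encode()).hexdigest()

h = token_hash("example-token")
print(len(h))  # 64
```

Looking up a device later only requires rehashing the presented token, so a registry leak does not expose usable credentials.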
/// GET /api/devices — list paired devices
pub async fn list_devices(State(state): State<AppState>, headers: HeaderMap) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
let devices = state
.device_registry
.as_ref()
.map(|r| r.list())
.unwrap_or_default();
let count = devices.len();
Json(serde_json::json!({
"devices": devices,
"count": count
}))
.into_response()
}
/// DELETE /api/devices/{id} — revoke a paired device
pub async fn revoke_device(
State(state): State<AppState>,
headers: HeaderMap,
axum::extract::Path(device_id): axum::extract::Path<String>,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
let revoked = state
.device_registry
.as_ref()
.map(|r| r.revoke(&device_id))
.unwrap_or(false);
if revoked {
Json(serde_json::json!({
"message": "Device revoked",
"device_id": device_id
}))
.into_response()
} else {
(StatusCode::NOT_FOUND, "Device not found").into_response()
}
}
/// POST /api/devices/{id}/token/rotate — rotate a device's token
pub async fn rotate_token(
State(state): State<AppState>,
headers: HeaderMap,
axum::extract::Path(device_id): axum::extract::Path<String>,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
// Generate a new pairing code for re-pairing
match state.pairing.generate_new_pairing_code() {
Some(code) => Json(serde_json::json!({
"device_id": device_id,
"pairing_code": code,
"message": "Use this code to re-pair the device"
}))
.into_response(),
None => (
StatusCode::SERVICE_UNAVAILABLE,
"Cannot generate new pairing code",
)
.into_response(),
}
}
+77
@@ -0,0 +1,77 @@
//! Plugin management API routes (requires `plugins-wasm` feature).
#[cfg(feature = "plugins-wasm")]
pub mod plugin_routes {
use axum::{
extract::State,
http::{header, HeaderMap, StatusCode},
response::{IntoResponse, Json},
};
use super::super::AppState;
/// `GET /api/plugins` — list loaded plugins and their status.
pub async fn list_plugins(
State(state): State<AppState>,
headers: HeaderMap,
) -> impl IntoResponse {
// Auth check
if state.pairing.require_pairing() {
let token = headers
.get(header::AUTHORIZATION)
.and_then(|v| v.to_str().ok())
.and_then(|auth| auth.strip_prefix("Bearer "))
.unwrap_or("");
if !state.pairing.is_authenticated(token) {
return (StatusCode::UNAUTHORIZED, "Unauthorized").into_response();
}
}
let config = state.config.lock();
let plugins_enabled = config.plugins.enabled;
let plugins_dir = config.plugins.plugins_dir.clone();
drop(config);
let plugins: Vec<serde_json::Value> = if plugins_enabled {
let plugin_path = if plugins_dir.starts_with("~/") {
directories::UserDirs::new()
.map(|u| u.home_dir().join(&plugins_dir[2..]))
.unwrap_or_else(|| std::path::PathBuf::from(&plugins_dir))
} else {
std::path::PathBuf::from(&plugins_dir)
};
if plugin_path.exists() {
match crate::plugins::host::PluginHost::new(
plugin_path.parent().unwrap_or(&plugin_path),
) {
Ok(host) => host
.list_plugins()
.into_iter()
.map(|p| {
serde_json::json!({
"name": p.name,
"version": p.version,
"description": p.description,
"capabilities": p.capabilities,
"loaded": p.loaded,
})
})
.collect(),
Err(_) => vec![],
}
} else {
vec![]
}
} else {
vec![]
};
Json(serde_json::json!({
"plugins_enabled": plugins_enabled,
"plugins_dir": plugins_dir,
"plugins": plugins,
}))
.into_response()
}
}
+120 -15
@@ -8,13 +8,17 @@
//! - Header sanitization (handled by axum/hyper)
pub mod api;
pub mod api_pairing;
#[cfg(feature = "plugins-wasm")]
pub mod api_plugins;
pub mod nodes;
pub mod sse;
pub mod static_files;
pub mod ws;
use crate::channels::{
-Channel, LinqChannel, NextcloudTalkChannel, SendMessage, WatiChannel, WhatsAppChannel,
+session_backend::SessionBackend, session_sqlite::SqliteSessionBackend, Channel, LinqChannel,
+NextcloudTalkChannel, SendMessage, WatiChannel, WhatsAppChannel,
};
use crate::config::Config;
use crate::cost::CostTracker;
@@ -331,6 +335,12 @@ pub struct AppState {
pub shutdown_tx: tokio::sync::watch::Sender<bool>,
/// Registry of dynamically connected nodes
pub node_registry: Arc<nodes::NodeRegistry>,
/// Session backend for persisting gateway WS chat sessions
pub session_backend: Option<Arc<dyn SessionBackend>>,
/// Device registry for paired device management
pub device_registry: Option<Arc<api_pairing::DeviceRegistry>>,
/// Pending pairing request store
pub pending_pairings: Option<Arc<api_pairing::PairingStore>>,
}
/// Run the HTTP gateway using axum with proper HTTP/1.1 compliance.
@@ -370,6 +380,7 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
reasoning_effort: config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
@@ -553,6 +564,29 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
})
.map(Arc::from);
// ── Session persistence for WS chat ─────────────────────
let session_backend: Option<Arc<dyn SessionBackend>> = if config.gateway.session_persistence {
match SqliteSessionBackend::new(&config.workspace_dir) {
Ok(b) => {
tracing::info!("Gateway session persistence enabled (SQLite)");
if config.gateway.session_ttl_hours > 0 {
if let Ok(cleaned) = b.cleanup_stale(config.gateway.session_ttl_hours) {
if cleaned > 0 {
tracing::info!("Cleaned up {cleaned} stale gateway sessions");
}
}
}
Some(Arc::new(b))
}
Err(e) => {
tracing::warn!("Session persistence disabled: {e}");
None
}
}
} else {
None
};
// ── Pairing guard ──────────────────────────────────────
let pairing = Arc::new(PairingGuard::new(
config.gateway.require_pairing,
@@ -599,6 +633,21 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
println!(" 🌐 Public URL: {url}");
}
println!(" 🌐 Web Dashboard: http://{display_addr}/");
if let Some(code) = pairing.pairing_code() {
println!();
println!(" 🔐 PAIRING REQUIRED — use this one-time code:");
println!(" ┌──────────────┐");
println!("{code}");
println!(" └──────────────┘");
println!();
} else if pairing.require_pairing() {
println!(" 🔒 Pairing: ACTIVE (bearer token required)");
println!(" To pair a new device: zeroclaw gateway get-paircode --new");
println!();
} else {
println!(" ⚠️ Pairing: DISABLED (all requests accepted)");
println!();
}
println!(" POST /pair — pair a new client (X-Pairing-Code header)");
println!(" POST /webhook — {{\"message\": \"your prompt\"}}");
if whatsapp_channel.is_some() {
@@ -622,18 +671,6 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
}
println!(" GET /health — health check");
println!(" GET /metrics — Prometheus metrics");
-if let Some(code) = pairing.pairing_code() {
-println!();
-println!(" 🔐 PAIRING REQUIRED — use this one-time code:");
-println!(" ┌──────────────┐");
-println!("{code}");
-println!(" └──────────────┘");
-println!(" Send: POST /pair with header X-Pairing-Code: {code}");
-} else if pairing.require_pairing() {
-println!(" 🔒 Pairing: ACTIVE (bearer token required)");
-} else {
-println!(" ⚠️ Pairing: DISABLED (all requests accepted)");
-}
println!(" Press Ctrl+C to stop.\n");
crate::health::mark_component_ok("gateway");
@@ -655,6 +692,22 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
// Node registry for dynamic node discovery
let node_registry = Arc::new(nodes::NodeRegistry::new(config.nodes.max_nodes));
// Device registry and pairing store (only when pairing is required)
let device_registry = if config.gateway.require_pairing {
Some(Arc::new(api_pairing::DeviceRegistry::new(
&config.workspace_dir,
)))
} else {
None
};
let pending_pairings = if config.gateway.require_pairing {
Some(Arc::new(api_pairing::PairingStore::new(
config.gateway.pairing_dashboard.max_pending_codes,
)))
} else {
None
};
let state = AppState {
config: config_state,
provider,
@@ -680,6 +733,9 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
event_tx,
shutdown_tx,
node_registry,
session_backend,
device_registry,
pending_pairings,
};
// Config PUT needs larger body limit (1MB)
@@ -727,6 +783,26 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
.route("/api/cost", get(api::handle_api_cost))
.route("/api/cli-tools", get(api::handle_api_cli_tools))
.route("/api/health", get(api::handle_api_health))
.route("/api/sessions", get(api::handle_api_sessions_list))
.route("/api/sessions/{id}", delete(api::handle_api_session_delete))
// ── Pairing + Device management API ──
.route("/api/pairing/initiate", post(api_pairing::initiate_pairing))
.route("/api/pair", post(api_pairing::submit_pairing_enhanced))
.route("/api/devices", get(api_pairing::list_devices))
.route("/api/devices/{id}", delete(api_pairing::revoke_device))
.route(
"/api/devices/{id}/token/rotate",
post(api_pairing::rotate_token),
);
// ── Plugin management API (requires plugins-wasm feature) ──
#[cfg(feature = "plugins-wasm")]
let app = app.route(
"/api/plugins",
get(api_plugins::plugin_routes::list_plugins),
);
let app = app
// ── SSE event stream ──
.route("/api/events", get(sse::handle_sse_events))
// ── WebSocket agent chat ──
@@ -838,7 +914,9 @@ async fn handle_pair(
match state.pairing.try_pair(code, &rate_key).await {
Ok(Some(token)) => {
tracing::info!("🔐 New client paired successfully");
-if let Err(err) = persist_pairing_tokens(state.config.clone(), &state.pairing).await {
+if let Err(err) =
+    Box::pin(persist_pairing_tokens(state.config.clone(), &state.pairing)).await
+{
tracing::error!("🔐 Pairing succeeded but token persistence failed: {err:#}");
let body = serde_json::json!({
"paired": true,
@@ -1827,6 +1905,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let response = handle_metrics(State(state)).await.into_response();
@@ -1879,6 +1960,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let response = handle_metrics(State(state)).await.into_response();
@@ -2041,7 +2125,7 @@ mod tests {
assert!(guard.is_authenticated(&token));
let shared_config = Arc::new(Mutex::new(config));
-persist_pairing_tokens(shared_config.clone(), &guard)
+Box::pin(persist_pairing_tokens(shared_config.clone(), &guard))
.await
.unwrap();
@@ -2255,6 +2339,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let mut headers = HeaderMap::new();
@@ -2321,6 +2408,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let headers = HeaderMap::new();
@@ -2399,6 +2489,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let response = handle_webhook(
@@ -2449,6 +2542,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let mut headers = HeaderMap::new();
@@ -2504,6 +2600,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let mut headers = HeaderMap::new();
@@ -2564,6 +2663,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let response = Box::pin(handle_nextcloud_talk_webhook(
@@ -2620,6 +2722,9 @@ mod tests {
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
node_registry: Arc::new(nodes::NodeRegistry::new(16)),
session_backend: None,
device_registry: None,
pending_pairings: None,
};
let mut headers = HeaderMap::new();
+177 -44
@@ -20,6 +20,28 @@ use axum::{
};
use futures_util::{SinkExt, StreamExt};
use serde::Deserialize;
use tracing::debug;
/// Optional connection parameters sent as the first WebSocket message.
///
/// If the first message after upgrade is `{"type":"connect",...}`, these
/// parameters are extracted and an acknowledgement is sent back. Old clients
/// that send `{"type":"message",...}` as the first frame still work — the
/// message is processed normally (backward-compatible).
#[derive(Debug, Deserialize)]
struct ConnectParams {
#[serde(rename = "type")]
msg_type: String,
/// Client-chosen session ID for memory persistence
#[serde(default)]
session_id: Option<String>,
/// Device name for device registry tracking
#[serde(default)]
device_name: Option<String>,
/// Client capabilities
#[serde(default)]
capabilities: Vec<String>,
}
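The handshake described by this struct's doc comment (peek at the first frame, treat a `connect` frame specially, fall through for plain messages) can be sketched as a pure function over the frame text; the return shape is illustrative:

```python
import json

def handle_first_frame(text):
    """Classify the first WebSocket frame: ('connect', params) for a
    handshake, else ('message', text) so old clients keep working."""
    try:
        obj = json.loads(text)
    except ValueError:
        # Not JSON at all: hand it to the normal message path unchanged.
        return ("message", text)
    if isinstance(obj, dict) and obj.get("type") == "connect":
        return ("connect", obj)
    return ("message", text)

print(handle_first_frame('{"type":"connect","session_id":"abc"}')[0])  # connect
print(handle_first_frame('{"type":"message","content":"hi"}')[0])      # message
```

The key property is that the fallback branch never consumes or alters a non-handshake frame, which is what makes the change backward-compatible.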
/// The sub-protocol we support for the chat WebSocket.
const WS_PROTOCOL: &str = "zeroclaw.v1";
@@ -111,14 +133,21 @@ pub async fn handle_ws_chat(
ws
};
-let session_id = params.session_id.clone();
+let session_id = params.session_id;
ws.on_upgrade(move |socket| handle_socket(socket, state, session_id))
.into_response()
}
/// Gateway session key prefix to avoid collisions with channel sessions.
const GW_SESSION_PREFIX: &str = "gw_";
async fn handle_socket(socket: WebSocket, state: AppState, session_id: Option<String>) {
let (mut sender, mut receiver) = socket.split();
// Resolve session ID: use provided or generate a new UUID
let session_id = session_id.unwrap_or_else(|| uuid::Uuid::new_v4().to_string());
let session_key = format!("{GW_SESSION_PREFIX}{session_id}");
// Build a persistent Agent for this connection so history is maintained across turns.
let config = state.config.lock().clone();
let mut agent = match crate::agent::Agent::from_config(&config) {
@@ -129,7 +158,90 @@ async fn handle_socket(socket: WebSocket, state: AppState, session_id: Option<St
return;
}
};
agent.set_memory_session_id(session_id.clone());
agent.set_memory_session_id(Some(session_id.clone()));
// Hydrate agent from persisted session (if available)
let mut resumed = false;
let mut message_count: usize = 0;
if let Some(ref backend) = state.session_backend {
let messages = backend.load(&session_key);
if !messages.is_empty() {
message_count = messages.len();
agent.seed_history(&messages);
resumed = true;
}
}
// Send session_start message to client
let session_start = serde_json::json!({
"type": "session_start",
"session_id": session_id,
"resumed": resumed,
"message_count": message_count,
});
let _ = sender
.send(Message::Text(session_start.to_string().into()))
.await;
// ── Optional connect handshake ──────────────────────────────────
// The first message may be a `{"type":"connect",...}` frame carrying
// connection parameters. If it is, we extract the params, send an
// ack, and proceed to the normal message loop. If the first message
// is a regular `{"type":"message",...}` frame, we fall through and
// process it immediately (backward-compatible).
let mut first_msg_fallback: Option<String> = None;
if let Some(first) = receiver.next().await {
match first {
Ok(Message::Text(text)) => {
if let Ok(cp) = serde_json::from_str::<ConnectParams>(&text) {
if cp.msg_type == "connect" {
debug!(
session_id = ?cp.session_id,
device_name = ?cp.device_name,
capabilities = ?cp.capabilities,
"WebSocket connect params received"
);
// Override session_id if provided in connect params
if let Some(sid) = &cp.session_id {
agent.set_memory_session_id(Some(sid.clone()));
}
let ack = serde_json::json!({
"type": "connected",
"message": "Connection established"
});
let _ = sender.send(Message::Text(ack.to_string().into())).await;
} else {
// Not a connect message — fall through to normal processing
first_msg_fallback = Some(text.to_string());
}
} else {
// Not parseable as ConnectParams — fall through
first_msg_fallback = Some(text.to_string());
}
}
Ok(Message::Close(_)) | Err(_) => return,
_ => {}
}
}
// Process the first message if it was not a connect frame
if let Some(ref text) = first_msg_fallback {
if let Ok(parsed) = serde_json::from_str::<serde_json::Value>(text) {
if parsed["type"].as_str() == Some("message") {
let content = parsed["content"].as_str().unwrap_or("").to_string();
if !content.is_empty() {
// Persist user message
if let Some(ref backend) = state.session_backend {
let user_msg = crate::providers::ChatMessage::user(&content);
let _ = backend.append(&session_key, &user_msg);
}
process_chat_message(&state, &mut agent, &mut sender, &content, &session_key)
.await;
}
}
}
}
while let Some(msg) = receiver.next().await {
let msg = match msg {
@@ -158,53 +270,74 @@ async fn handle_socket(socket: WebSocket, state: AppState, session_id: Option<St
continue;
}
// Process message with the LLM provider
let provider_label = state
.config
.lock()
.default_provider
.clone()
.unwrap_or_else(|| "unknown".to_string());
// Persist user message
if let Some(ref backend) = state.session_backend {
let user_msg = crate::providers::ChatMessage::user(&content);
let _ = backend.append(&session_key, &user_msg);
}
// Broadcast agent_start event
let _ = state.event_tx.send(serde_json::json!({
"type": "agent_start",
"provider": provider_label,
"model": state.model,
}));
process_chat_message(&state, &mut agent, &mut sender, &content, &session_key).await;
}
}
// Multi-turn chat via persistent Agent (history is maintained across turns)
match agent.turn(&content).await {
Ok(response) => {
// Send the full response as a done message
let done = serde_json::json!({
"type": "done",
"full_response": response,
});
let _ = sender.send(Message::Text(done.to_string().into())).await;
/// Process a single chat message through the agent and send the response.
async fn process_chat_message(
state: &AppState,
agent: &mut crate::agent::Agent,
sender: &mut futures_util::stream::SplitSink<WebSocket, Message>,
content: &str,
session_key: &str,
) {
let provider_label = state
.config
.lock()
.default_provider
.clone()
.unwrap_or_else(|| "unknown".to_string());
// Broadcast agent_end event
let _ = state.event_tx.send(serde_json::json!({
"type": "agent_end",
"provider": provider_label,
"model": state.model,
}));
// Broadcast agent_start event
let _ = state.event_tx.send(serde_json::json!({
"type": "agent_start",
"provider": provider_label,
"model": state.model,
}));
// Multi-turn chat via persistent Agent (history is maintained across turns)
match agent.turn(content).await {
Ok(response) => {
// Persist assistant response
if let Some(ref backend) = state.session_backend {
let assistant_msg = crate::providers::ChatMessage::assistant(&response);
let _ = backend.append(session_key, &assistant_msg);
}
Err(e) => {
let sanitized = crate::providers::sanitize_api_error(&e.to_string());
let err = serde_json::json!({
"type": "error",
"message": sanitized,
});
let _ = sender.send(Message::Text(err.to_string().into())).await;
// Broadcast error event
let _ = state.event_tx.send(serde_json::json!({
"type": "error",
"component": "ws_chat",
"message": sanitized,
}));
}
let done = serde_json::json!({
"type": "done",
"full_response": response,
});
let _ = sender.send(Message::Text(done.to_string().into())).await;
// Broadcast agent_end event
let _ = state.event_tx.send(serde_json::json!({
"type": "agent_end",
"provider": provider_label,
"model": state.model,
}));
}
Err(e) => {
let sanitized = crate::providers::sanitize_api_error(&e.to_string());
let err = serde_json::json!({
"type": "error",
"message": sanitized,
});
let _ = sender.send(Message::Text(err.to_string().into())).await;
// Broadcast error event
let _ = state.event_tx.send(serde_json::json!({
"type": "error",
"component": "ws_chat",
"message": sanitized,
}));
}
}
}
+311
@@ -0,0 +1,311 @@
//! Internationalization support for tool descriptions.
//!
//! Loads tool descriptions from TOML locale files in `tool_descriptions/`.
//! Falls back to English when a locale file or specific key is missing,
//! and ultimately falls back to the hardcoded `tool.description()` value
//! if no file-based description exists.
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use tracing::debug;
/// Container for locale-specific tool descriptions loaded from TOML files.
#[derive(Debug, Clone)]
pub struct ToolDescriptions {
/// Descriptions from the requested locale (may be empty if file missing).
locale_descriptions: HashMap<String, String>,
/// English fallback descriptions (always loaded when locale != "en").
english_fallback: HashMap<String, String>,
/// The resolved locale tag (e.g. "en", "zh-CN").
locale: String,
}
/// TOML structure: `[tools]` table mapping tool name -> description string.
#[derive(Debug, serde::Deserialize)]
struct DescriptionFile {
#[serde(default)]
tools: HashMap<String, String>,
}
impl ToolDescriptions {
/// Load descriptions for the given locale.
///
/// `search_dirs` lists directories to probe for `tool_descriptions/<locale>.toml`.
/// The first directory containing a matching file wins.
///
/// Resolution:
/// 1. Look up tool name in the locale file.
/// 2. If missing (or locale file absent), look up in `en.toml`.
/// 3. If still missing, callers fall back to `tool.description()`.
pub fn load(locale: &str, search_dirs: &[PathBuf]) -> Self {
let locale_descriptions = load_locale_file(locale, search_dirs);
let english_fallback = if locale == "en" {
HashMap::new()
} else {
load_locale_file("en", search_dirs)
};
debug!(
locale = locale,
locale_keys = locale_descriptions.len(),
english_keys = english_fallback.len(),
"tool descriptions loaded"
);
Self {
locale_descriptions,
english_fallback,
locale: locale.to_string(),
}
}
/// Get the description for a tool by name.
///
/// Returns `Some(description)` if found in the locale file or English fallback.
/// Returns `None` if neither file contains the key (caller should use hardcoded).
pub fn get(&self, tool_name: &str) -> Option<&str> {
self.locale_descriptions
.get(tool_name)
.or_else(|| self.english_fallback.get(tool_name))
.map(String::as_str)
}
/// The resolved locale tag.
pub fn locale(&self) -> &str {
&self.locale
}
/// Create an empty instance that always returns `None` (hardcoded fallback).
pub fn empty() -> Self {
Self {
locale_descriptions: HashMap::new(),
english_fallback: HashMap::new(),
locale: "en".to_string(),
}
}
}
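The three-step resolution chain (locale file, then English fallback, then hardcoded description) can be sketched in isolation; the maps, tool names, and the `resolve` helper below are illustrative, not part of the crate:

```rust
use std::collections::HashMap;

// Illustrative resolution chain: locale map -> English map -> hardcoded default.
// Mirrors the Option-chaining in ToolDescriptions::get, with the caller's
// hardcoded fallback folded in at the end.
fn resolve<'a>(
    locale: &'a HashMap<String, String>,
    english: &'a HashMap<String, String>,
    name: &str,
    hardcoded: &'a str,
) -> &'a str {
    locale
        .get(name)
        .or_else(|| english.get(name))
        .map(String::as_str)
        .unwrap_or(hardcoded)
}

fn main() {
    let mut zh = HashMap::new();
    zh.insert("shell".to_string(), "执行 shell 命令".to_string());
    let mut en = HashMap::new();
    en.insert("shell".to_string(), "Execute a shell command".to_string());
    en.insert("file_read".to_string(), "Read file contents".to_string());

    // Found in the locale map: translated description wins.
    assert_eq!(resolve(&zh, &en, "shell", "fallback"), "执行 shell 命令");
    // Missing in the locale map: falls back to English.
    assert_eq!(resolve(&zh, &en, "file_read", "fallback"), "Read file contents");
    // Missing everywhere: the hardcoded tool.description() value is used.
    assert_eq!(resolve(&zh, &en, "pdf_read", "hardcoded"), "hardcoded");
    println!("ok");
}
```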
/// Detect the user's preferred locale from environment variables.
///
/// Checks `ZEROCLAW_LOCALE`, then `LANG`, then `LC_ALL`.
/// Returns "en" if none are set or parseable.
pub fn detect_locale() -> String {
if let Ok(val) = std::env::var("ZEROCLAW_LOCALE") {
let val = val.trim().to_string();
if !val.is_empty() {
return normalize_locale(&val);
}
}
for var in &["LANG", "LC_ALL"] {
if let Ok(val) = std::env::var(var) {
let locale = normalize_locale(&val);
if locale != "C" && locale != "POSIX" && !locale.is_empty() {
return locale;
}
}
}
"en".to_string()
}
/// Normalize a raw locale string (e.g. "zh_CN.UTF-8") to a tag we use
/// for file lookup (e.g. "zh-CN").
fn normalize_locale(raw: &str) -> String {
// Strip encoding suffix (.UTF-8, .utf8, etc.)
let base = raw.split('.').next().unwrap_or(raw);
// Replace underscores with hyphens for BCP-47-ish consistency
base.replace('_', "-")
}
/// Build the default set of search directories for locale files.
///
/// 1. The workspace directory itself (for project-local overrides).
/// 2. The binary's parent directory (for installed distributions).
/// 3. The compile-time `CARGO_MANIFEST_DIR` as a final fallback during dev.
pub fn default_search_dirs(workspace_dir: &Path) -> Vec<PathBuf> {
let mut dirs = vec![workspace_dir.to_path_buf()];
if let Ok(exe) = std::env::current_exe() {
if let Some(parent) = exe.parent() {
dirs.push(parent.to_path_buf());
}
}
// During development, also check the project root (where Cargo.toml lives).
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
if !dirs.contains(&manifest_dir) {
dirs.push(manifest_dir);
}
dirs
}
/// Try to load and parse a locale TOML file from the first matching search dir.
fn load_locale_file(locale: &str, search_dirs: &[PathBuf]) -> HashMap<String, String> {
let filename = format!("tool_descriptions/{locale}.toml");
for dir in search_dirs {
let path = dir.join(&filename);
match std::fs::read_to_string(&path) {
Ok(contents) => match toml::from_str::<DescriptionFile>(&contents) {
Ok(parsed) => {
debug!(path = %path.display(), keys = parsed.tools.len(), "loaded locale file");
return parsed.tools;
}
Err(e) => {
debug!(path = %path.display(), error = %e, "failed to parse locale file");
}
},
Err(_) => {
// File not found in this directory, try next.
}
}
}
debug!(
locale = locale,
"no locale file found in any search directory"
);
HashMap::new()
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
/// Helper: create a temp dir with a `tool_descriptions/<locale>.toml` file.
fn write_locale_file(dir: &Path, locale: &str, content: &str) {
let td = dir.join("tool_descriptions");
fs::create_dir_all(&td).unwrap();
fs::write(td.join(format!("{locale}.toml")), content).unwrap();
}
#[test]
fn load_english_descriptions() {
let tmp = tempfile::tempdir().unwrap();
write_locale_file(
tmp.path(),
"en",
r#"[tools]
shell = "Execute a shell command"
file_read = "Read file contents"
"#,
);
let descs = ToolDescriptions::load("en", &[tmp.path().to_path_buf()]);
assert_eq!(descs.get("shell"), Some("Execute a shell command"));
assert_eq!(descs.get("file_read"), Some("Read file contents"));
assert_eq!(descs.get("nonexistent"), None);
assert_eq!(descs.locale(), "en");
}
#[test]
fn fallback_to_english_when_locale_key_missing() {
let tmp = tempfile::tempdir().unwrap();
write_locale_file(
tmp.path(),
"en",
r#"[tools]
shell = "Execute a shell command"
file_read = "Read file contents"
"#,
);
write_locale_file(
tmp.path(),
"zh-CN",
r#"[tools]
shell = "在工作区目录中执行 shell 命令"
"#,
);
let descs = ToolDescriptions::load("zh-CN", &[tmp.path().to_path_buf()]);
// Translated key returns Chinese.
assert_eq!(descs.get("shell"), Some("在工作区目录中执行 shell 命令"));
// Missing key falls back to English.
assert_eq!(descs.get("file_read"), Some("Read file contents"));
assert_eq!(descs.locale(), "zh-CN");
}
#[test]
fn fallback_when_locale_file_missing() {
let tmp = tempfile::tempdir().unwrap();
write_locale_file(
tmp.path(),
"en",
r#"[tools]
shell = "Execute a shell command"
"#,
);
// Request a locale that has no file.
let descs = ToolDescriptions::load("fr", &[tmp.path().to_path_buf()]);
// Falls back to English.
assert_eq!(descs.get("shell"), Some("Execute a shell command"));
assert_eq!(descs.locale(), "fr");
}
#[test]
fn fallback_when_no_files_exist() {
let tmp = tempfile::tempdir().unwrap();
let descs = ToolDescriptions::load("en", &[tmp.path().to_path_buf()]);
assert_eq!(descs.get("shell"), None);
}
#[test]
fn empty_always_returns_none() {
let descs = ToolDescriptions::empty();
assert_eq!(descs.get("shell"), None);
assert_eq!(descs.locale(), "en");
}
#[test]
fn detect_locale_from_env() {
// Save and restore env.
let saved = std::env::var("ZEROCLAW_LOCALE").ok();
let saved_lang = std::env::var("LANG").ok();
std::env::set_var("ZEROCLAW_LOCALE", "ja-JP");
assert_eq!(detect_locale(), "ja-JP");
std::env::remove_var("ZEROCLAW_LOCALE");
std::env::set_var("LANG", "zh_CN.UTF-8");
assert_eq!(detect_locale(), "zh-CN");
// Restore.
match saved {
Some(v) => std::env::set_var("ZEROCLAW_LOCALE", v),
None => std::env::remove_var("ZEROCLAW_LOCALE"),
}
match saved_lang {
Some(v) => std::env::set_var("LANG", v),
None => std::env::remove_var("LANG"),
}
}
#[test]
fn normalize_locale_strips_encoding() {
assert_eq!(normalize_locale("en_US.UTF-8"), "en-US");
assert_eq!(normalize_locale("zh_CN.utf8"), "zh-CN");
assert_eq!(normalize_locale("fr"), "fr");
assert_eq!(normalize_locale("pt_BR"), "pt-BR");
}
#[test]
fn config_locale_overrides_env() {
// This tests the precedence logic: if config provides a locale,
// it should be used instead of detect_locale().
// The actual override happens at the call site in prompt.rs / loop_.rs,
// so here we just verify ToolDescriptions works with an explicit locale.
let tmp = tempfile::tempdir().unwrap();
write_locale_file(
tmp.path(),
"de",
r#"[tools]
shell = "Einen Shell-Befehl im Arbeitsverzeichnis ausführen"
"#,
);
let descs = ToolDescriptions::load("de", &[tmp.path().to_path_buf()]);
assert_eq!(
descs.get("shell"),
Some("Einen Shell-Befehl im Arbeitsverzeichnis ausführen")
);
}
}
+13
@@ -509,6 +509,18 @@ pub fn all_integrations() -> Vec<IntegrationEntry> {
},
},
// ── Productivity ────────────────────────────────────────
IntegrationEntry {
name: "Google Workspace",
description: "Drive, Gmail, Calendar, Sheets, Docs via gws CLI",
category: IntegrationCategory::Productivity,
status_fn: |c| {
if c.google_workspace.enabled {
IntegrationStatus::Active
} else {
IntegrationStatus::Available
}
},
},
IntegrationEntry {
name: "GitHub",
description: "Code, issues, PRs",
@@ -828,6 +840,7 @@ mod tests {
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
ack_reactions: None,
});
let entries = all_integrations();
let tg = entries.iter().find(|e| e.name == "Telegram").unwrap();
+5
@@ -42,6 +42,7 @@ pub mod agent;
pub(crate) mod approval;
pub(crate) mod auth;
pub mod channels;
pub mod commands;
pub mod config;
pub(crate) mod cost;
pub(crate) mod cron;
@@ -53,6 +54,7 @@ pub(crate) mod hardware;
pub(crate) mod health;
pub(crate) mod heartbeat;
pub mod hooks;
pub mod i18n;
pub(crate) mod identity;
pub(crate) mod integrations;
pub mod memory;
@@ -72,6 +74,9 @@ pub mod tools;
pub(crate) mod tunnel;
pub(crate) mod util;
#[cfg(feature = "plugins-wasm")]
pub mod plugins;
pub use config::Config;
/// Gateway management subcommands
+214 -35
@@ -75,6 +75,7 @@ mod agent;
mod approval;
mod auth;
mod channels;
mod commands;
mod rag {
pub use zeroclaw::rag::*;
}
@@ -88,6 +89,7 @@ mod hardware;
mod health;
mod heartbeat;
mod hooks;
mod i18n;
mod identity;
mod integrations;
mod memory;
@@ -96,6 +98,8 @@ mod multimodal;
mod observability;
mod onboard;
mod peripherals;
#[cfg(feature = "plugins-wasm")]
mod plugins;
mod providers;
mod runtime;
mod security;
@@ -282,7 +286,11 @@ Examples:
},
/// Show system status (full details)
Status,
Status {
/// Output format: "exit-code" exits 0 if healthy, 1 otherwise (for Docker HEALTHCHECK)
#[arg(long)]
format: Option<String>,
},
/// Engage, inspect, and resume emergency-stop states.
///
@@ -462,6 +470,52 @@ Examples:
config_command: ConfigCommands,
},
/// Check for and apply updates
#[command(long_about = "\
Check for and apply ZeroClaw updates.
By default, downloads and installs the latest release with a \
6-phase pipeline: preflight, download, backup, validate, swap, \
and smoke test. Automatic rollback on failure.
Use --check to only check for updates without installing.
Use --force to skip the confirmation prompt.
Use --version to target a specific release instead of latest.
Examples:
zeroclaw update # download and install latest
zeroclaw update --check # check only, don't install
zeroclaw update --force # install without confirmation
zeroclaw update --version 0.6.0 # install specific version")]
Update {
/// Only check for updates, don't install
#[arg(long)]
check: bool,
/// Skip confirmation prompt
#[arg(long)]
force: bool,
/// Target version (default: latest)
#[arg(long)]
version: Option<String>,
},
/// Run diagnostic self-tests
#[command(long_about = "\
Run diagnostic self-tests to verify the ZeroClaw installation.
By default, runs the full test suite including network checks \
(gateway health, memory round-trip). Use --quick to skip network \
checks for faster offline validation.
Examples:
zeroclaw self-test # full suite
zeroclaw self-test --quick # quick checks only (no network)")]
SelfTest {
/// Run quick checks only (no network)
#[arg(long)]
quick: bool,
},
/// Generate shell completion script to stdout
#[command(long_about = "\
Generate shell completion scripts for `zeroclaw`.
@@ -477,6 +531,35 @@ Examples:
#[arg(value_enum)]
shell: CompletionShell,
},
/// Manage WASM plugins
#[cfg(feature = "plugins-wasm")]
Plugin {
#[command(subcommand)]
plugin_command: PluginCommands,
},
}
#[cfg(feature = "plugins-wasm")]
#[derive(Subcommand, Debug)]
enum PluginCommands {
/// List installed plugins
List,
/// Install a plugin from a directory or URL
Install {
/// Path to plugin directory or manifest
source: String,
},
/// Remove an installed plugin
Remove {
/// Plugin name
name: String,
},
/// Show information about a plugin
Info {
/// Plugin name
name: String,
},
}
#[derive(Subcommand, Debug)]
@@ -806,40 +889,22 @@ async fn main() -> Result<()> {
} else if is_tty && !has_provider_flags {
Box::pin(onboard::run_wizard(force)).await
} else {
onboard::run_quick_setup(
Box::pin(onboard::run_quick_setup(
api_key.as_deref(),
provider.as_deref(),
model.as_deref(),
memory.as_deref(),
force,
)
))
.await
}?;
// Display pairing code — user enters it in the dashboard to pair securely.
// The code is one-time use and brute-force protected (5 attempts → lockout).
// No auth material is placed in URLs to prevent leakage via browser history,
// Referer headers, clipboard, or proxy logs.
if config.gateway.require_pairing {
let pairing = security::PairingGuard::new(true, &config.gateway.paired_tokens);
if let Some(code) = pairing.pairing_code() {
println!();
println!(" \x1b[1;34m🦀 Gateway Pairing Code\x1b[0m");
println!();
println!(" \x1b[1;34m┌──────────────┐\x1b[0m");
println!(" \x1b[1;34m│\x1b[0m \x1b[1m{code}\x1b[0m \x1b[1;34m│\x1b[0m");
println!(" \x1b[1;34m└──────────────┘\x1b[0m");
println!();
println!(" Enter this code in the dashboard to pair your device.");
println!(" The code is single-use and expires after pairing.");
println!();
println!(
" \x1b[2mDashboard: http://127.0.0.1:{}\x1b[0m",
config.gateway.port
);
println!(" \x1b[2mDocs: https://www.zeroclawlabs.ai/docs\x1b[0m");
println!();
}
println!();
println!(" Pairing is enabled. A one-time pairing code will be");
println!(" displayed when the gateway starts.");
println!(" Dashboard: http://127.0.0.1:{}", config.gateway.port);
println!();
}
// Auto-start channels if user said yes during wizard
@@ -850,7 +915,7 @@ async fn main() -> Result<()> {
}
// All other commands need config loaded first
let mut config = Config::load_or_init().await?;
let mut config = Box::pin(Config::load_or_init()).await?;
config.apply_env_overrides();
observability::runtime_trace::init_from_config(&config.observability, &config.workspace_dir);
if config.security.otp.enabled {
@@ -931,7 +996,7 @@ async fn main() -> Result<()> {
}
log_gateway_start(&host, port);
gateway::run_gateway(&host, port, config).await
Box::pin(gateway::run_gateway(&host, port, config)).await
}
Some(zeroclaw::GatewayCommands::GetPaircode { new }) => {
let port = config.gateway.port;
@@ -980,13 +1045,13 @@ async fn main() -> Result<()> {
Some(zeroclaw::GatewayCommands::Start { port, host }) => {
let (port, host) = resolve_gateway_addr(&config, port, host);
log_gateway_start(&host, port);
gateway::run_gateway(&host, port, config).await
Box::pin(gateway::run_gateway(&host, port, config)).await
}
None => {
let port = config.gateway.port;
let host = config.gateway.host.clone();
log_gateway_start(&host, port);
gateway::run_gateway(&host, port, config).await
Box::pin(gateway::run_gateway(&host, port, config)).await
}
}
}
@@ -999,10 +1064,33 @@ async fn main() -> Result<()> {
} else {
info!("🧠 Starting ZeroClaw Daemon on {host}:{port}");
}
daemon::run(config, host, port).await
Box::pin(daemon::run(config, host, port)).await
}
Commands::Status => {
Commands::Status { format } => {
if format.as_deref() == Some("exit-code") {
// Lightweight health probe for Docker HEALTHCHECK
let port = config.gateway.port;
let host = if config.gateway.host == "[::]" || config.gateway.host == "0.0.0.0" {
"127.0.0.1"
} else {
&config.gateway.host
};
let url = format!("http://{host}:{port}/health");
match reqwest::Client::new()
.get(&url)
.timeout(std::time::Duration::from_secs(5))
.send()
.await
{
Ok(resp) if resp.status().is_success() => {
std::process::exit(0);
}
_ => {
std::process::exit(1);
}
}
}
println!("🦀 ZeroClaw Status");
println!();
println!("Version: {}", env!("CARGO_PKG_VERSION"));
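The new `--format exit-code` mode is intended for container health probes; a minimal sketch of how a Dockerfile might wire it up (the interval, timeout, and retry values are assumptions, not values from this repo):

```dockerfile
HEALTHCHECK --interval=60s --timeout=10s --retries=3 \
    CMD zeroclaw status --format exit-code
```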
@@ -1123,7 +1211,9 @@ async fn main() -> Result<()> {
ModelCommands::List { provider } => {
onboard::run_models_list(&config, provider.as_deref()).await
}
ModelCommands::Set { model } => onboard::run_models_set(&config, &model).await,
ModelCommands::Set { model } => {
Box::pin(onboard::run_models_set(&config, &model)).await
}
ModelCommands::Status => onboard::run_models_status(&config).await,
},
@@ -1191,7 +1281,7 @@ async fn main() -> Result<()> {
Commands::Channel { channel_command } => match channel_command {
ChannelCommands::Start => Box::pin(channels::start_channels(config)).await,
ChannelCommands::Doctor => Box::pin(channels::doctor_channels(config)).await,
other => channels::handle_command(other, &config).await,
other => Box::pin(channels::handle_command(other, &config)).await,
},
Commands::Integrations {
@@ -1215,7 +1305,46 @@ async fn main() -> Result<()> {
}
Commands::Peripheral { peripheral_command } => {
peripherals::handle_command(peripheral_command.clone(), &config).await
Box::pin(peripherals::handle_command(
peripheral_command.clone(),
&config,
))
.await
}
Commands::Update {
check,
force: _force,
version,
} => {
if check {
let info = commands::update::check(version.as_deref()).await?;
if info.is_newer {
println!(
"Update available: v{} -> v{}",
info.current_version, info.latest_version
);
} else {
println!("Already up to date (v{}).", info.current_version);
}
Ok(())
} else {
commands::update::run(version.as_deref()).await
}
}
Commands::SelfTest { quick } => {
let results = if quick {
commands::self_test::run_quick(&config).await?
} else {
commands::self_test::run_full(&config).await?
};
commands::self_test::print_results(&results);
let failed = results.iter().filter(|r| !r.passed).count();
if failed > 0 {
std::process::exit(1);
}
Ok(())
}
Commands::Config { config_command } => match config_command {
@@ -1228,6 +1357,56 @@ async fn main() -> Result<()> {
Ok(())
}
},
#[cfg(feature = "plugins-wasm")]
Commands::Plugin { plugin_command } => match plugin_command {
PluginCommands::List => {
let host = zeroclaw::plugins::host::PluginHost::new(&config.workspace_dir)?;
let plugins = host.list_plugins();
if plugins.is_empty() {
println!("No plugins installed.");
} else {
println!("Installed plugins:");
for p in &plugins {
println!(
" {} v{} — {}",
p.name,
p.version,
p.description.as_deref().unwrap_or("(no description)")
);
}
}
Ok(())
}
PluginCommands::Install { source } => {
let mut host = zeroclaw::plugins::host::PluginHost::new(&config.workspace_dir)?;
host.install(&source)?;
println!("Plugin installed from {source}");
Ok(())
}
PluginCommands::Remove { name } => {
let mut host = zeroclaw::plugins::host::PluginHost::new(&config.workspace_dir)?;
host.remove(&name)?;
println!("Plugin '{name}' removed.");
Ok(())
}
PluginCommands::Info { name } => {
let host = zeroclaw::plugins::host::PluginHost::new(&config.workspace_dir)?;
match host.get_plugin(&name) {
Some(info) => {
println!("Plugin: {} v{}", info.name, info.version);
if let Some(desc) = &info.description {
println!("Description: {desc}");
}
println!("Capabilities: {:?}", info.capabilities);
println!("Permissions: {:?}", info.permissions);
println!("WASM: {}", info.wasm_path.display());
}
None => println!("Plugin '{name}' not found."),
}
Ok(())
}
},
}
}
+824
@@ -0,0 +1,824 @@
//! Knowledge graph for capturing, organizing, and reusing expertise.
//!
//! SQLite-backed storage for knowledge nodes (patterns, decisions, lessons,
//! experts, technologies) and directed edges (uses, replaces, extends,
//! authored_by, applies_to). Supports full-text search, tag filtering,
//! and relation traversal.
use anyhow::Context;
use chrono::{DateTime, Utc};
use parking_lot::Mutex;
use rusqlite::{params, Connection};
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};
use std::path::{Path, PathBuf};
use uuid::Uuid;
// ── Domain types ────────────────────────────────────────────────
/// The kind of knowledge captured in a node.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum NodeType {
Pattern,
Decision,
Lesson,
Expert,
Technology,
}
impl NodeType {
pub fn as_str(&self) -> &'static str {
match self {
Self::Pattern => "pattern",
Self::Decision => "decision",
Self::Lesson => "lesson",
Self::Expert => "expert",
Self::Technology => "technology",
}
}
pub fn parse(s: &str) -> anyhow::Result<Self> {
match s {
"pattern" => Ok(Self::Pattern),
"decision" => Ok(Self::Decision),
"lesson" => Ok(Self::Lesson),
"expert" => Ok(Self::Expert),
"technology" => Ok(Self::Technology),
other => anyhow::bail!("unknown node type: {other}"),
}
}
}
/// Directed relationship between two knowledge nodes.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum Relation {
Uses,
Replaces,
Extends,
AuthoredBy,
AppliesTo,
}
impl Relation {
pub fn as_str(&self) -> &'static str {
match self {
Self::Uses => "uses",
Self::Replaces => "replaces",
Self::Extends => "extends",
Self::AuthoredBy => "authored_by",
Self::AppliesTo => "applies_to",
}
}
pub fn parse(s: &str) -> anyhow::Result<Self> {
match s {
"uses" => Ok(Self::Uses),
"replaces" => Ok(Self::Replaces),
"extends" => Ok(Self::Extends),
"authored_by" => Ok(Self::AuthoredBy),
"applies_to" => Ok(Self::AppliesTo),
other => anyhow::bail!("unknown relation: {other}"),
}
}
}
/// A node in the knowledge graph.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct KnowledgeNode {
pub id: String,
pub node_type: NodeType,
pub title: String,
pub content: String,
pub tags: Vec<String>,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub source_project: Option<String>,
}
/// A directed edge in the knowledge graph.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct KnowledgeEdge {
pub from_id: String,
pub to_id: String,
pub relation: Relation,
}
/// A search result with relevance score.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SearchResult {
pub node: KnowledgeNode,
pub score: f64,
}
/// Summary statistics for the knowledge graph.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GraphStats {
pub total_nodes: usize,
pub total_edges: usize,
pub nodes_by_type: HashMap<String, usize>,
pub top_tags: Vec<(String, usize)>,
}
// ── Knowledge graph ─────────────────────────────────────────────
/// SQLite-backed knowledge graph.
pub struct KnowledgeGraph {
conn: Mutex<Connection>,
#[allow(dead_code)]
db_path: PathBuf,
max_nodes: usize,
}
impl KnowledgeGraph {
/// Open (or create) a knowledge graph database at the given path.
pub fn new(db_path: &Path, max_nodes: usize) -> anyhow::Result<Self> {
if let Some(parent) = db_path.parent() {
std::fs::create_dir_all(parent)?;
}
let conn = Connection::open(db_path).context("failed to open knowledge graph database")?;
conn.execute_batch(
"PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA foreign_keys = ON;",
)?;
conn.execute_batch(
"CREATE TABLE IF NOT EXISTS nodes (
id TEXT PRIMARY KEY,
node_type TEXT NOT NULL,
title TEXT NOT NULL,
content TEXT NOT NULL,
tags TEXT NOT NULL DEFAULT '',
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
source_project TEXT
);
CREATE TABLE IF NOT EXISTS edges (
from_id TEXT NOT NULL,
to_id TEXT NOT NULL,
relation TEXT NOT NULL,
PRIMARY KEY (from_id, to_id, relation),
FOREIGN KEY (from_id) REFERENCES nodes(id) ON DELETE CASCADE,
FOREIGN KEY (to_id) REFERENCES nodes(id) ON DELETE CASCADE
);
CREATE VIRTUAL TABLE IF NOT EXISTS nodes_fts USING fts5(
title, content, tags, content='nodes', content_rowid='rowid'
);
CREATE TRIGGER IF NOT EXISTS nodes_ai AFTER INSERT ON nodes BEGIN
INSERT INTO nodes_fts(rowid, title, content, tags)
VALUES (new.rowid, new.title, new.content, new.tags);
END;
CREATE TRIGGER IF NOT EXISTS nodes_ad AFTER DELETE ON nodes BEGIN
INSERT INTO nodes_fts(nodes_fts, rowid, title, content, tags)
VALUES ('delete', old.rowid, old.title, old.content, old.tags);
END;
CREATE TRIGGER IF NOT EXISTS nodes_au AFTER UPDATE ON nodes BEGIN
INSERT INTO nodes_fts(nodes_fts, rowid, title, content, tags)
VALUES ('delete', old.rowid, old.title, old.content, old.tags);
INSERT INTO nodes_fts(rowid, title, content, tags)
VALUES (new.rowid, new.title, new.content, new.tags);
END;
CREATE INDEX IF NOT EXISTS idx_nodes_type ON nodes(node_type);
CREATE INDEX IF NOT EXISTS idx_nodes_source ON nodes(source_project);
CREATE INDEX IF NOT EXISTS idx_edges_from ON edges(from_id);
CREATE INDEX IF NOT EXISTS idx_edges_to ON edges(to_id);",
)?;
Ok(Self {
conn: Mutex::new(conn),
db_path: db_path.to_path_buf(),
max_nodes,
})
}
/// Add a node to the graph. Returns the generated node id.
pub fn add_node(
&self,
node_type: NodeType,
title: &str,
content: &str,
tags: &[String],
source_project: Option<&str>,
) -> anyhow::Result<String> {
let conn = self.conn.lock();
// Enforce max_nodes limit.
let count: usize = conn.query_row("SELECT COUNT(*) FROM nodes", [], |r| r.get(0))?;
if count >= self.max_nodes {
anyhow::bail!(
"knowledge graph node limit reached ({}/{})",
count,
self.max_nodes
);
}
// Reject tags containing commas since comma is the separator in storage.
for tag in tags {
if tag.contains(',') {
anyhow::bail!(
"tag '{}' contains a comma, which is used as the tag separator",
tag
);
}
}
let id = Uuid::new_v4().to_string();
let now = Utc::now().to_rfc3339();
let tags_str = tags.join(",");
conn.execute(
"INSERT INTO nodes (id, node_type, title, content, tags, created_at, updated_at, source_project)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)",
params![
id,
node_type.as_str(),
title,
content,
tags_str,
now,
now,
source_project,
],
)?;
Ok(id)
}
/// Add a directed edge between two nodes.
pub fn add_edge(&self, from_id: &str, to_id: &str, relation: Relation) -> anyhow::Result<()> {
let conn = self.conn.lock();
// Verify both endpoints exist.
let exists = |id: &str| -> anyhow::Result<bool> {
let c: usize = conn.query_row(
"SELECT COUNT(*) FROM nodes WHERE id = ?1",
params![id],
|r| r.get(0),
)?;
Ok(c > 0)
};
if !exists(from_id)? {
anyhow::bail!("source node not found: {from_id}");
}
if !exists(to_id)? {
anyhow::bail!("target node not found: {to_id}");
}
conn.execute(
"INSERT OR IGNORE INTO edges (from_id, to_id, relation) VALUES (?1, ?2, ?3)",
params![from_id, to_id, relation.as_str()],
)?;
Ok(())
}
/// Retrieve a node by id.
pub fn get_node(&self, id: &str) -> anyhow::Result<Option<KnowledgeNode>> {
let conn = self.conn.lock();
let mut stmt = conn.prepare(
"SELECT id, node_type, title, content, tags, created_at, updated_at, source_project
FROM nodes WHERE id = ?1",
)?;
let mut rows = stmt.query(params![id])?;
match rows.next()? {
Some(row) => Ok(Some(row_to_node(row)?)),
None => Ok(None),
}
}
/// Query nodes by tags (every listed tag must be present; an empty list matches all nodes).
pub fn query_by_tags(&self, tags: &[String]) -> anyhow::Result<Vec<KnowledgeNode>> {
let conn = self.conn.lock();
let mut stmt = conn.prepare(
"SELECT id, node_type, title, content, tags, created_at, updated_at, source_project
FROM nodes ORDER BY updated_at DESC",
)?;
let mut results = Vec::new();
let mut rows = stmt.query([])?;
while let Some(row) = rows.next()? {
let node = row_to_node(row)?;
if tags.iter().all(|t| node.tags.contains(t)) {
results.push(node);
}
}
Ok(results)
}
/// Full-text search across node titles, content, and tags.
pub fn query_by_similarity(
&self,
query: &str,
limit: usize,
) -> anyhow::Result<Vec<SearchResult>> {
let conn = self.conn.lock();
// Sanitize FTS query: strip embedded double quotes and wrap each token in quotes.
let sanitized: String = query
.split_whitespace()
.map(|w| format!("\"{}\"", w.replace('"', "")))
.collect::<Vec<_>>()
.join(" ");
if sanitized.is_empty() {
return Ok(Vec::new());
}
let mut stmt = conn.prepare(
"SELECT n.id, n.node_type, n.title, n.content, n.tags,
n.created_at, n.updated_at, n.source_project,
rank
FROM nodes_fts f
JOIN nodes n ON n.rowid = f.rowid
WHERE nodes_fts MATCH ?1
ORDER BY rank
LIMIT ?2",
)?;
let mut results = Vec::new();
let mut rows = stmt.query(params![sanitized, limit as i64])?;
while let Some(row) = rows.next()? {
let node = row_to_node(row)?;
let rank: f64 = row.get(8)?;
results.push(SearchResult {
node,
score: -rank, // FTS5 rank is negative (lower = better), invert for intuitive scoring
});
}
Ok(results)
}
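The sanitizer above is plain string handling and can be exercised in isolation. A minimal standalone sketch — the free-function name is illustrative; in the source the logic lives inline in `query_by_similarity`:

```rust
/// Strip embedded double quotes from each whitespace-separated token and
/// quote it, so user input cannot inject FTS5 query operators.
fn sanitize_fts_query(query: &str) -> String {
    query
        .split_whitespace()
        .map(|w| format!("\"{}\"", w.replace('"', "")))
        .collect::<Vec<_>>()
        .join(" ")
}
```

`sanitize_fts_query("Rust performance")` yields `"Rust" "performance"`, and an empty result signals "nothing to search", matching the early return above.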
/// Find nodes directly related to the given node (outbound edges).
pub fn find_related(&self, node_id: &str) -> anyhow::Result<Vec<(KnowledgeNode, Relation)>> {
let conn = self.conn.lock();
let mut stmt = conn.prepare(
"SELECT n.id, n.node_type, n.title, n.content, n.tags,
n.created_at, n.updated_at, n.source_project,
e.relation
FROM edges e
JOIN nodes n ON n.id = e.to_id
WHERE e.from_id = ?1",
)?;
let mut results = Vec::new();
let mut rows = stmt.query(params![node_id])?;
while let Some(row) = rows.next()? {
let node = row_to_node(row)?;
let relation_str: String = row.get(8)?;
let relation = Relation::parse(&relation_str)?;
results.push((node, relation));
}
Ok(results)
}
/// Maximum allowed subgraph traversal depth.
const MAX_SUBGRAPH_DEPTH: usize = 100;
/// Extract a subgraph starting from `root_id` up to `depth` hops.
///
/// `depth` must be at least 1; values above [`Self::MAX_SUBGRAPH_DEPTH`] (100) are silently clamped.
pub fn get_subgraph(
&self,
root_id: &str,
depth: usize,
) -> anyhow::Result<(Vec<KnowledgeNode>, Vec<KnowledgeEdge>)> {
if depth == 0 {
anyhow::bail!("subgraph depth must be greater than 0");
}
let depth = depth.min(Self::MAX_SUBGRAPH_DEPTH);
let mut visited: HashSet<String> = HashSet::new();
let mut nodes = Vec::new();
let mut edges = Vec::new();
// Visit the root node first, then expand outward `depth` levels.
visited.insert(root_id.to_string());
if let Some(root_node) = self.get_node(root_id)? {
nodes.push(root_node);
}
let mut frontier = vec![root_id.to_string()];
for _ in 0..depth {
if frontier.is_empty() {
break;
}
let mut next_frontier = Vec::new();
for nid in &frontier {
for (related, relation) in self.find_related(nid)? {
edges.push(KnowledgeEdge {
from_id: nid.clone(),
to_id: related.id.clone(),
relation,
});
if visited.insert(related.id.clone()) {
nodes.push(related.clone());
next_frontier.push(related.id.clone());
}
}
}
frontier = next_frontier;
}
Ok((nodes, edges))
}
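The traversal above is an ordinary breadth-first frontier expansion. The same pattern on an in-memory adjacency map — a standalone sketch, not the SQLite-backed implementation:

```rust
use std::collections::{HashMap, HashSet};

/// Collect every node reachable from `root` within `depth` hops.
fn reachable(adj: &HashMap<&str, Vec<&str>>, root: &str, depth: usize) -> HashSet<String> {
    let mut visited: HashSet<String> = HashSet::new();
    visited.insert(root.to_string());
    let mut frontier = vec![root.to_string()];
    for _ in 0..depth {
        if frontier.is_empty() {
            break;
        }
        let mut next = Vec::new();
        for nid in &frontier {
            for &to in adj.get(nid.as_str()).into_iter().flatten() {
                // `insert` returns true only the first time a node is seen,
                // so each node is expanded at most once even in cyclic graphs.
                if visited.insert(to.to_string()) {
                    next.push(to.to_string());
                }
            }
        }
        frontier = next;
    }
    visited
}
```

With edges a→b and b→c, depth 1 reaches {a, b} and depth 2 reaches {a, b, c} — the same shape the `subgraph_traversal_collects_connected_nodes` test asserts against the real store.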
/// Find experts associated with the given tags via `authored_by` edges.
pub fn find_experts(&self, tags: &[String]) -> anyhow::Result<Vec<SearchResult>> {
// Find nodes matching the tags, then follow authored_by edges to experts.
let matching = self.query_by_tags(tags)?;
let mut expert_scores: HashMap<String, f64> = HashMap::new();
{
let conn = self.conn.lock();
// Prepare the statement once and reuse it across all matching nodes.
let mut stmt = conn.prepare(
"SELECT to_id FROM edges WHERE from_id = ?1 AND relation = 'authored_by'",
)?;
for node in &matching {
let mut rows = stmt.query(params![node.id])?;
while let Some(row) = rows.next()? {
let expert_id: String = row.get(0)?;
*expert_scores.entry(expert_id).or_default() += 1.0;
}
}
}
let mut results: Vec<SearchResult> = Vec::new();
for (eid, score) in expert_scores {
if let Some(node) = self.get_node(&eid)? {
if node.node_type == NodeType::Expert {
results.push(SearchResult { node, score });
}
}
}
results.sort_by(|a, b| {
b.score
.partial_cmp(&a.score)
.unwrap_or(std::cmp::Ordering::Equal)
});
Ok(results)
}
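The final sort uses `partial_cmp` because `f64` is not `Ord`; falling back to `Equal` keeps the comparator total even if a NaN score ever appears. A standalone sketch of that descending sort (helper name illustrative):

```rust
/// Sort (name, score) pairs by score, highest first.
fn rank_desc(scores: &mut [(String, f64)]) {
    scores.sort_by(|a, b| {
        b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal)
    });
}
```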
/// Return summary statistics for the graph.
pub fn stats(&self) -> anyhow::Result<GraphStats> {
let conn = self.conn.lock();
let total_nodes: usize = conn.query_row("SELECT COUNT(*) FROM nodes", [], |r| r.get(0))?;
let total_edges: usize = conn.query_row("SELECT COUNT(*) FROM edges", [], |r| r.get(0))?;
let mut by_type = HashMap::new();
{
let mut stmt =
conn.prepare("SELECT node_type, COUNT(*) FROM nodes GROUP BY node_type")?;
let mut rows = stmt.query([])?;
while let Some(row) = rows.next()? {
let t: String = row.get(0)?;
let c: usize = row.get(1)?;
by_type.insert(t, c);
}
}
// Top 10 tags by frequency.
let mut tag_counts: HashMap<String, usize> = HashMap::new();
{
let mut stmt = conn.prepare("SELECT tags FROM nodes WHERE tags != ''")?;
let mut rows = stmt.query([])?;
while let Some(row) = rows.next()? {
let tags_str: String = row.get(0)?;
for tag in tags_str.split(',') {
let tag = tag.trim();
if !tag.is_empty() {
*tag_counts.entry(tag.to_string()).or_default() += 1;
}
}
}
}
let mut top_tags: Vec<(String, usize)> = tag_counts.into_iter().collect();
top_tags.sort_by(|a, b| b.1.cmp(&a.1));
top_tags.truncate(10);
Ok(GraphStats {
total_nodes,
total_edges,
nodes_by_type: by_type,
top_tags,
})
}
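The tag-frequency aggregation in `stats` is pure string handling; a standalone sketch of the split/count/sort/truncate steps over the comma-separated `tags` column values:

```rust
use std::collections::HashMap;

/// Count comma-separated tags across rows and return the `n` most frequent.
fn top_tags(rows: &[&str], n: usize) -> Vec<(String, usize)> {
    let mut counts: HashMap<String, usize> = HashMap::new();
    for tags_str in rows {
        for tag in tags_str.split(',') {
            let tag = tag.trim();
            if !tag.is_empty() {
                *counts.entry(tag.to_string()).or_default() += 1;
            }
        }
    }
    let mut sorted: Vec<(String, usize)> = counts.into_iter().collect();
    // Descending by count, then keep the top n.
    sorted.sort_by(|a, b| b.1.cmp(&a.1));
    sorted.truncate(n);
    sorted
}
```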
}
/// Parse a database row into a `KnowledgeNode`.
fn row_to_node(row: &rusqlite::Row<'_>) -> anyhow::Result<KnowledgeNode> {
let id: String = row.get(0)?;
let node_type_str: String = row.get(1)?;
let title: String = row.get(2)?;
let content: String = row.get(3)?;
let tags_str: String = row.get(4)?;
let created_at_str: String = row.get(5)?;
let updated_at_str: String = row.get(6)?;
let source_project: Option<String> = row.get(7)?;
let tags: Vec<String> = tags_str
.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect();
let created_at = DateTime::parse_from_rfc3339(&created_at_str)
.map(|dt| dt.with_timezone(&Utc))
.unwrap_or_else(|_| Utc::now());
let updated_at = DateTime::parse_from_rfc3339(&updated_at_str)
.map(|dt| dt.with_timezone(&Utc))
.unwrap_or_else(|_| Utc::now());
Ok(KnowledgeNode {
id,
node_type: NodeType::parse(&node_type_str)?,
title,
content,
tags,
created_at,
updated_at,
source_project,
})
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
fn test_graph() -> (TempDir, KnowledgeGraph) {
let tmp = TempDir::new().unwrap();
let db_path = tmp.path().join("knowledge.db");
let graph = KnowledgeGraph::new(&db_path, 1000).unwrap();
(tmp, graph)
}
#[test]
fn add_node_returns_unique_id() {
let (_tmp, graph) = test_graph();
let id1 = graph
.add_node(
NodeType::Pattern,
"Caching",
"Use Redis for caching",
&["redis".into()],
None,
)
.unwrap();
let id2 = graph
.add_node(NodeType::Lesson, "Lesson A", "Content A", &[], None)
.unwrap();
assert_ne!(id1, id2);
}
#[test]
fn get_node_returns_stored_data() {
let (_tmp, graph) = test_graph();
let id = graph
.add_node(
NodeType::Decision,
"Use Postgres",
"Chose Postgres over MySQL",
&["database".into(), "postgres".into()],
Some("project_alpha"),
)
.unwrap();
let node = graph.get_node(&id).unwrap().unwrap();
assert_eq!(node.title, "Use Postgres");
assert_eq!(node.node_type, NodeType::Decision);
assert_eq!(node.tags, vec!["database", "postgres"]);
assert_eq!(node.source_project.as_deref(), Some("project_alpha"));
}
#[test]
fn get_node_missing_returns_none() {
let (_tmp, graph) = test_graph();
assert!(graph.get_node("nonexistent").unwrap().is_none());
}
#[test]
fn add_edge_creates_relationship() {
let (_tmp, graph) = test_graph();
let id1 = graph
.add_node(NodeType::Pattern, "P1", "Pattern one", &[], None)
.unwrap();
let id2 = graph
.add_node(NodeType::Technology, "T1", "Tech one", &[], None)
.unwrap();
graph.add_edge(&id1, &id2, Relation::Uses).unwrap();
let related = graph.find_related(&id1).unwrap();
assert_eq!(related.len(), 1);
assert_eq!(related[0].0.id, id2);
assert_eq!(related[0].1, Relation::Uses);
}
#[test]
fn add_edge_rejects_missing_node() {
let (_tmp, graph) = test_graph();
let id = graph
.add_node(NodeType::Lesson, "L1", "Lesson", &[], None)
.unwrap();
let err = graph
.add_edge(&id, "nonexistent", Relation::Extends)
.unwrap_err();
assert!(err.to_string().contains("target node not found"));
}
#[test]
fn query_by_tags_filters_correctly() {
let (_tmp, graph) = test_graph();
graph
.add_node(
NodeType::Pattern,
"P1",
"Content",
&["rust".into(), "async".into()],
None,
)
.unwrap();
graph
.add_node(NodeType::Pattern, "P2", "Content", &["rust".into()], None)
.unwrap();
graph
.add_node(NodeType::Pattern, "P3", "Content", &["python".into()], None)
.unwrap();
let results = graph.query_by_tags(&["rust".into()]).unwrap();
assert_eq!(results.len(), 2);
let results = graph
.query_by_tags(&["rust".into(), "async".into()])
.unwrap();
assert_eq!(results.len(), 1);
assert_eq!(results[0].title, "P1");
}
#[test]
fn query_by_similarity_returns_ranked_results() {
let (_tmp, graph) = test_graph();
graph
.add_node(
NodeType::Decision,
"Choose Rust for performance",
"Rust gives memory safety and speed",
&["rust".into()],
None,
)
.unwrap();
graph
.add_node(
NodeType::Lesson,
"Python scaling issues",
"Python had GIL bottleneck",
&["python".into()],
None,
)
.unwrap();
let results = graph.query_by_similarity("Rust performance", 10).unwrap();
assert!(!results.is_empty());
assert!(results[0].score > 0.0);
}
#[test]
fn subgraph_traversal_collects_connected_nodes() {
let (_tmp, graph) = test_graph();
let a = graph
.add_node(NodeType::Pattern, "A", "Node A", &[], None)
.unwrap();
let b = graph
.add_node(NodeType::Pattern, "B", "Node B", &[], None)
.unwrap();
let c = graph
.add_node(NodeType::Pattern, "C", "Node C", &[], None)
.unwrap();
graph.add_edge(&a, &b, Relation::Extends).unwrap();
graph.add_edge(&b, &c, Relation::Uses).unwrap();
let (nodes, edges) = graph.get_subgraph(&a, 2).unwrap();
assert_eq!(nodes.len(), 3);
assert_eq!(edges.len(), 2);
}
#[test]
fn expert_ranking_by_authored_contributions() {
let (_tmp, graph) = test_graph();
let expert = graph
.add_node(
NodeType::Expert,
"zeroclaw_user",
"Backend expert",
&[],
None,
)
.unwrap();
let p1 = graph
.add_node(
NodeType::Pattern,
"Cache pattern",
"Redis caching",
&["caching".into()],
None,
)
.unwrap();
let p2 = graph
.add_node(
NodeType::Pattern,
"Queue pattern",
"Message queue",
&["caching".into()],
None,
)
.unwrap();
graph.add_edge(&p1, &expert, Relation::AuthoredBy).unwrap();
graph.add_edge(&p2, &expert, Relation::AuthoredBy).unwrap();
let experts = graph.find_experts(&["caching".into()]).unwrap();
assert_eq!(experts.len(), 1);
assert_eq!(experts[0].node.title, "zeroclaw_user");
assert!((experts[0].score - 2.0).abs() < f64::EPSILON);
}
#[test]
fn max_nodes_limit_enforced() {
let tmp = TempDir::new().unwrap();
let db_path = tmp.path().join("knowledge.db");
let graph = KnowledgeGraph::new(&db_path, 2).unwrap();
graph
.add_node(NodeType::Lesson, "L1", "C1", &[], None)
.unwrap();
graph
.add_node(NodeType::Lesson, "L2", "C2", &[], None)
.unwrap();
let err = graph
.add_node(NodeType::Lesson, "L3", "C3", &[], None)
.unwrap_err();
assert!(err.to_string().contains("node limit reached"));
}
#[test]
fn stats_reports_correct_counts() {
let (_tmp, graph) = test_graph();
graph
.add_node(NodeType::Pattern, "P", "C", &["rust".into()], None)
.unwrap();
graph
.add_node(
NodeType::Lesson,
"L",
"C",
&["rust".into(), "async".into()],
None,
)
.unwrap();
let stats = graph.stats().unwrap();
assert_eq!(stats.total_nodes, 2);
assert_eq!(stats.nodes_by_type.get("pattern"), Some(&1));
assert_eq!(stats.nodes_by_type.get("lesson"), Some(&1));
assert!(!stats.top_tags.is_empty());
}
#[test]
fn node_type_roundtrip() {
for nt in &[
NodeType::Pattern,
NodeType::Decision,
NodeType::Lesson,
NodeType::Expert,
NodeType::Technology,
] {
assert_eq!(&NodeType::parse(nt.as_str()).unwrap(), nt);
}
}
#[test]
fn relation_roundtrip() {
for r in &[
Relation::Uses,
Relation::Replaces,
Relation::Extends,
Relation::AuthoredBy,
Relation::AppliesTo,
] {
assert_eq!(&Relation::parse(r.as_str()).unwrap(), r);
}
}
}
@@ -4,6 +4,7 @@ pub mod cli;
pub mod consolidation;
pub mod embeddings;
pub mod hygiene;
pub mod knowledge_graph;
pub mod lucid;
pub mod markdown;
pub mod none;
@@ -100,6 +101,7 @@ pub fn should_skip_autosave_content(content: &str) -> bool {
let lowered = normalized.to_ascii_lowercase();
lowered.starts_with("[cron:")
|| lowered.starts_with("[heartbeat task")
|| lowered.starts_with("[distilled_")
|| lowered.contains("distilled_index_sig:")
}
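The skip filter in this hunk is easy to exercise on its own. A sketch under one stated assumption: `normalized` is produced earlier in the real function, so plain trimming stands in for it here:

```rust
fn should_skip_autosave(content: &str) -> bool {
    // Assumption: the real `normalized` value comes from code outside this
    // hunk; trimming is a stand-in for illustration.
    let lowered = content.trim().to_ascii_lowercase();
    lowered.starts_with("[cron:")
        || lowered.starts_with("[heartbeat task")
        || lowered.starts_with("[distilled_")
        || lowered.contains("distilled_index_sig:")
}
```

Lowercasing first is what lets `[Heartbeat Task | decision]` and `[CRON: nightly]` both match their lowercase prefixes.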
@@ -470,6 +472,12 @@ mod tests {
assert!(should_skip_autosave_content(
"[DISTILLED_MEMORY_CHUNK 1/2] DISTILLED_INDEX_SIG:abc123"
));
assert!(should_skip_autosave_content(
"[Heartbeat Task | decision] Should I run tasks?"
));
assert!(should_skip_autosave_content(
"[Heartbeat Task | high] Execute scheduled patrol"
));
assert!(!should_skip_autosave_content(
"User prefers concise answers."
));
@@ -95,7 +95,7 @@ pub async fn run_wizard(force: bool) -> Result<Config> {
match resolve_interactive_onboarding_mode(&config_path, force)? {
InteractiveOnboardingMode::FullOnboarding => {}
InteractiveOnboardingMode::UpdateProviderOnly => {
return run_provider_update_wizard(&workspace_dir, &config_path).await;
return Box::pin(run_provider_update_wizard(&workspace_dir, &config_path)).await;
}
}
@@ -173,6 +173,7 @@ pub async fn run_wizard(force: bool) -> Result<Config> {
web_fetch: crate::config::WebFetchConfig::default(),
web_search: crate::config::WebSearchConfig::default(),
project_intel: crate::config::ProjectIntelConfig::default(),
google_workspace: crate::config::GoogleWorkspaceConfig::default(),
proxy: crate::config::ProxyConfig::default(),
identity: crate::config::IdentityConfig::default(),
cost: crate::config::CostConfig::default(),
@@ -189,6 +190,10 @@ pub async fn run_wizard(force: bool) -> Result<Config> {
workspace: crate::config::WorkspaceConfig::default(),
notion: crate::config::NotionConfig::default(),
node_transport: crate::config::NodeTransportConfig::default(),
knowledge: crate::config::KnowledgeConfig::default(),
linkedin: crate::config::LinkedInConfig::default(),
plugins: crate::config::PluginsConfig::default(),
locale: None,
};
println!(
@@ -248,7 +253,7 @@ pub async fn run_channels_repair_wizard() -> Result<Config> {
);
println!();
let mut config = Config::load_or_init().await?;
let mut config = Box::pin(Config::load_or_init()).await?;
print_step(1, 1, "Channels (How You Talk to ZeroClaw)");
config.channels_config = setup_channels()?;
@@ -424,14 +429,14 @@ pub async fn run_quick_setup(
.map(|u| u.home_dir().to_path_buf())
.context("Could not find home directory")?;
run_quick_setup_with_home(
Box::pin(run_quick_setup_with_home(
credential_override,
provider,
model_override,
memory_backend,
force,
&home,
)
))
.await
}
@@ -543,6 +548,7 @@ async fn run_quick_setup_with_home(
web_fetch: crate::config::WebFetchConfig::default(),
web_search: crate::config::WebSearchConfig::default(),
project_intel: crate::config::ProjectIntelConfig::default(),
google_workspace: crate::config::GoogleWorkspaceConfig::default(),
proxy: crate::config::ProxyConfig::default(),
identity: crate::config::IdentityConfig::default(),
cost: crate::config::CostConfig::default(),
@@ -559,6 +565,10 @@ async fn run_quick_setup_with_home(
workspace: crate::config::WorkspaceConfig::default(),
notion: crate::config::NotionConfig::default(),
node_transport: crate::config::NodeTransportConfig::default(),
knowledge: crate::config::KnowledgeConfig::default(),
linkedin: crate::config::LinkedInConfig::default(),
plugins: crate::config::PluginsConfig::default(),
locale: None,
};
config.save().await?;
@@ -3675,6 +3685,7 @@ fn setup_channels() -> Result<ChannelsConfig> {
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
ack_reactions: None,
});
}
ChannelMenuChoice::Discord => {
@@ -4605,6 +4616,10 @@ fn setup_channels() -> Result<ChannelsConfig> {
config.webhook = Some(WebhookConfig {
port: port.parse().unwrap_or(8080),
listen_path: None,
send_url: None,
send_method: None,
auth_header: None,
secret: if secret.is_empty() {
None
} else {
@@ -5912,14 +5927,14 @@ mod tests {
let _config_env = EnvVarGuard::unset("ZEROCLAW_CONFIG_DIR");
let tmp = TempDir::new().unwrap();
let config = run_quick_setup_with_home(
let config = Box::pin(run_quick_setup_with_home(
Some("sk-issue946"),
Some("openrouter"),
Some("custom-model-946"),
Some("sqlite"),
false,
tmp.path(),
)
))
.await
.unwrap();
@@ -5939,14 +5954,14 @@ mod tests {
let _config_env = EnvVarGuard::unset("ZEROCLAW_CONFIG_DIR");
let tmp = TempDir::new().unwrap();
let config = run_quick_setup_with_home(
let config = Box::pin(run_quick_setup_with_home(
Some("sk-issue946"),
Some("anthropic"),
None,
Some("sqlite"),
false,
tmp.path(),
)
))
.await
.unwrap();
@@ -5969,14 +5984,14 @@ mod tests {
.await
.unwrap();
let err = run_quick_setup_with_home(
let err = Box::pin(run_quick_setup_with_home(
Some("sk-existing"),
Some("openrouter"),
Some("custom-model"),
Some("sqlite"),
false,
tmp.path(),
)
))
.await
.expect_err("quick setup should refuse overwrite without --force");
@@ -6002,14 +6017,14 @@ mod tests {
.await
.unwrap();
let config = run_quick_setup_with_home(
let config = Box::pin(run_quick_setup_with_home(
Some("sk-force"),
Some("openrouter"),
Some("custom-model-fresh"),
Some("sqlite"),
true,
tmp.path(),
)
))
.await
.expect("quick setup should overwrite existing config with --force");
@@ -6036,14 +6051,14 @@ mod tests {
);
let _config_env = EnvVarGuard::unset("ZEROCLAW_CONFIG_DIR");
let config = run_quick_setup_with_home(
let config = Box::pin(run_quick_setup_with_home(
Some("sk-env"),
Some("openrouter"),
Some("model-env"),
Some("sqlite"),
false,
tmp.path(),
)
))
.await
.expect("quick setup should honor ZEROCLAW_WORKSPACE");
@@ -77,7 +77,7 @@ pub async fn handle_command(cmd: crate::PeripheralCommands, config: &Config) ->
Some(path.clone())
};
let mut cfg = crate::config::Config::load_or_init().await?;
let mut cfg = Box::pin(crate::config::Config::load_or_init()).await?;
cfg.peripherals.enabled = true;
if cfg
@@ -0,0 +1,33 @@
//! Plugin error types.
use thiserror::Error;
#[derive(Debug, Error)]
pub enum PluginError {
#[error("plugin not found: {0}")]
NotFound(String),
#[error("invalid manifest: {0}")]
InvalidManifest(String),
#[error("failed to load WASM module: {0}")]
LoadFailed(String),
#[error("plugin execution failed: {0}")]
ExecutionFailed(String),
#[error("permission denied: plugin '{plugin}' requires '{permission}'")]
PermissionDenied { plugin: String, permission: String },
#[error("plugin '{0}' is already loaded")]
AlreadyLoaded(String),
#[error("plugin capability not supported: {0}")]
UnsupportedCapability(String),
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
#[error("TOML parse error: {0}")]
TomlParse(#[from] toml::de::Error),
}
@@ -0,0 +1,325 @@
//! Plugin host: discovery, loading, lifecycle management.
use super::error::PluginError;
use super::{PluginCapability, PluginInfo, PluginManifest};
use std::collections::HashMap;
use std::path::{Path, PathBuf};
/// Manages the lifecycle of WASM plugins.
pub struct PluginHost {
plugins_dir: PathBuf,
loaded: HashMap<String, LoadedPlugin>,
}
struct LoadedPlugin {
manifest: PluginManifest,
wasm_path: PathBuf,
}
impl PluginHost {
/// Create a new plugin host with the given plugins directory.
pub fn new(workspace_dir: &Path) -> Result<Self, PluginError> {
let plugins_dir = workspace_dir.join("plugins");
if !plugins_dir.exists() {
std::fs::create_dir_all(&plugins_dir)?;
}
let mut host = Self {
plugins_dir,
loaded: HashMap::new(),
};
host.discover()?;
Ok(host)
}
/// Discover plugins in the plugins directory.
fn discover(&mut self) -> Result<(), PluginError> {
if !self.plugins_dir.exists() {
return Ok(());
}
let entries = std::fs::read_dir(&self.plugins_dir)?;
for entry in entries.flatten() {
let path = entry.path();
if path.is_dir() {
let manifest_path = path.join("manifest.toml");
if manifest_path.exists() {
if let Ok(manifest) = self.load_manifest(&manifest_path) {
let wasm_path = path.join(&manifest.wasm_path);
self.loaded.insert(
manifest.name.clone(),
LoadedPlugin {
manifest,
wasm_path,
},
);
}
}
}
}
Ok(())
}
fn load_manifest(&self, path: &Path) -> Result<PluginManifest, PluginError> {
let content = std::fs::read_to_string(path)?;
let manifest: PluginManifest = toml::from_str(&content)?;
Ok(manifest)
}
/// List all discovered plugins.
pub fn list_plugins(&self) -> Vec<PluginInfo> {
self.loaded
.values()
.map(|p| PluginInfo {
name: p.manifest.name.clone(),
version: p.manifest.version.clone(),
description: p.manifest.description.clone(),
capabilities: p.manifest.capabilities.clone(),
permissions: p.manifest.permissions.clone(),
wasm_path: p.wasm_path.clone(),
loaded: p.wasm_path.exists(),
})
.collect()
}
/// Get info about a specific plugin.
pub fn get_plugin(&self, name: &str) -> Option<PluginInfo> {
self.loaded.get(name).map(|p| PluginInfo {
name: p.manifest.name.clone(),
version: p.manifest.version.clone(),
description: p.manifest.description.clone(),
capabilities: p.manifest.capabilities.clone(),
permissions: p.manifest.permissions.clone(),
wasm_path: p.wasm_path.clone(),
loaded: p.wasm_path.exists(),
})
}
/// Install a plugin from a directory path.
pub fn install(&mut self, source: &str) -> Result<(), PluginError> {
let source_path = PathBuf::from(source);
let manifest_path = if source_path.is_dir() {
source_path.join("manifest.toml")
} else {
source_path.clone()
};
if !manifest_path.exists() {
return Err(PluginError::NotFound(format!(
"manifest.toml not found at {}",
manifest_path.display()
)));
}
let manifest = self.load_manifest(&manifest_path)?;
let source_dir = manifest_path
.parent()
.ok_or_else(|| PluginError::InvalidManifest("no parent directory".into()))?;
let wasm_source = source_dir.join(&manifest.wasm_path);
if !wasm_source.exists() {
return Err(PluginError::NotFound(format!(
"WASM file not found: {}",
wasm_source.display()
)));
}
if self.loaded.contains_key(&manifest.name) {
return Err(PluginError::AlreadyLoaded(manifest.name));
}
// Copy plugin to plugins directory
let dest_dir = self.plugins_dir.join(&manifest.name);
std::fs::create_dir_all(&dest_dir)?;
// Copy manifest
std::fs::copy(&manifest_path, dest_dir.join("manifest.toml"))?;
// Copy WASM file
let wasm_dest = dest_dir.join(&manifest.wasm_path);
if let Some(parent) = wasm_dest.parent() {
std::fs::create_dir_all(parent)?;
}
std::fs::copy(&wasm_source, &wasm_dest)?;
self.loaded.insert(
manifest.name.clone(),
LoadedPlugin {
manifest,
wasm_path: wasm_dest,
},
);
Ok(())
}
/// Remove a plugin by name.
pub fn remove(&mut self, name: &str) -> Result<(), PluginError> {
if self.loaded.remove(name).is_none() {
return Err(PluginError::NotFound(name.to_string()));
}
let plugin_dir = self.plugins_dir.join(name);
if plugin_dir.exists() {
std::fs::remove_dir_all(plugin_dir)?;
}
Ok(())
}
/// Get tool-capable plugins.
pub fn tool_plugins(&self) -> Vec<&PluginManifest> {
self.loaded
.values()
.filter(|p| p.manifest.capabilities.contains(&PluginCapability::Tool))
.map(|p| &p.manifest)
.collect()
}
/// Get channel-capable plugins.
pub fn channel_plugins(&self) -> Vec<&PluginManifest> {
self.loaded
.values()
.filter(|p| p.manifest.capabilities.contains(&PluginCapability::Channel))
.map(|p| &p.manifest)
.collect()
}
/// Returns the plugins directory path.
pub fn plugins_dir(&self) -> &Path {
&self.plugins_dir
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::tempdir;
#[test]
fn test_empty_plugin_dir() {
let dir = tempdir().unwrap();
let host = PluginHost::new(dir.path()).unwrap();
assert!(host.list_plugins().is_empty());
}
#[test]
fn test_discover_with_manifest() {
let dir = tempdir().unwrap();
let plugin_dir = dir.path().join("plugins").join("test-plugin");
std::fs::create_dir_all(&plugin_dir).unwrap();
std::fs::write(
plugin_dir.join("manifest.toml"),
r#"
name = "test-plugin"
version = "0.1.0"
description = "A test plugin"
wasm_path = "plugin.wasm"
capabilities = ["tool"]
permissions = []
"#,
)
.unwrap();
let host = PluginHost::new(dir.path()).unwrap();
let plugins = host.list_plugins();
assert_eq!(plugins.len(), 1);
assert_eq!(plugins[0].name, "test-plugin");
}
#[test]
fn test_tool_plugins_filter() {
let dir = tempdir().unwrap();
let plugins_base = dir.path().join("plugins");
// Tool plugin
let tool_dir = plugins_base.join("my-tool");
std::fs::create_dir_all(&tool_dir).unwrap();
std::fs::write(
tool_dir.join("manifest.toml"),
r#"
name = "my-tool"
version = "0.1.0"
wasm_path = "tool.wasm"
capabilities = ["tool"]
"#,
)
.unwrap();
// Channel plugin
let chan_dir = plugins_base.join("my-channel");
std::fs::create_dir_all(&chan_dir).unwrap();
std::fs::write(
chan_dir.join("manifest.toml"),
r#"
name = "my-channel"
version = "0.1.0"
wasm_path = "channel.wasm"
capabilities = ["channel"]
"#,
)
.unwrap();
let host = PluginHost::new(dir.path()).unwrap();
assert_eq!(host.list_plugins().len(), 2);
assert_eq!(host.tool_plugins().len(), 1);
assert_eq!(host.channel_plugins().len(), 1);
assert_eq!(host.tool_plugins()[0].name, "my-tool");
}
#[test]
fn test_get_plugin() {
let dir = tempdir().unwrap();
let plugin_dir = dir.path().join("plugins").join("lookup-test");
std::fs::create_dir_all(&plugin_dir).unwrap();
std::fs::write(
plugin_dir.join("manifest.toml"),
r#"
name = "lookup-test"
version = "1.0.0"
description = "Lookup test"
wasm_path = "plugin.wasm"
capabilities = ["tool"]
"#,
)
.unwrap();
let host = PluginHost::new(dir.path()).unwrap();
assert!(host.get_plugin("lookup-test").is_some());
assert!(host.get_plugin("nonexistent").is_none());
}
#[test]
fn test_remove_plugin() {
let dir = tempdir().unwrap();
let plugin_dir = dir.path().join("plugins").join("removable");
std::fs::create_dir_all(&plugin_dir).unwrap();
std::fs::write(
plugin_dir.join("manifest.toml"),
r#"
name = "removable"
version = "0.1.0"
wasm_path = "plugin.wasm"
capabilities = ["tool"]
"#,
)
.unwrap();
let mut host = PluginHost::new(dir.path()).unwrap();
assert_eq!(host.list_plugins().len(), 1);
host.remove("removable").unwrap();
assert!(host.list_plugins().is_empty());
assert!(!plugin_dir.exists());
}
#[test]
fn test_remove_nonexistent_returns_error() {
let dir = tempdir().unwrap();
let mut host = PluginHost::new(dir.path()).unwrap();
assert!(host.remove("ghost").is_err());
}
}
@@ -0,0 +1,76 @@
//! WASM plugin system for ZeroClaw.
//!
//! Plugins are WebAssembly modules loaded via Extism that can extend
//! ZeroClaw with custom tools and channels. Enable with `--features plugins-wasm`.
pub mod error;
pub mod host;
pub mod wasm_channel;
pub mod wasm_tool;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
/// A plugin's declared manifest (loaded from manifest.toml alongside the .wasm).
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PluginManifest {
/// Plugin name (unique identifier)
pub name: String,
/// Plugin version
pub version: String,
/// Human-readable description
pub description: Option<String>,
/// Author name or organization
pub author: Option<String>,
/// Path to the .wasm file (relative to manifest)
pub wasm_path: String,
/// Capabilities this plugin provides
pub capabilities: Vec<PluginCapability>,
/// Permissions this plugin requests
#[serde(default)]
pub permissions: Vec<PluginPermission>,
}
/// What a plugin can do.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum PluginCapability {
/// Provides one or more tools
Tool,
/// Provides a channel implementation
Channel,
/// Provides a memory backend
Memory,
/// Provides an observer/metrics backend
Observer,
}
/// Permissions a plugin may request.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum PluginPermission {
/// Can make HTTP requests
HttpClient,
/// Can read from the filesystem (within sandbox)
FileRead,
/// Can write to the filesystem (within sandbox)
FileWrite,
/// Can access environment variables
EnvRead,
/// Can read agent memory
MemoryRead,
/// Can write agent memory
MemoryWrite,
}
/// Information about a loaded plugin.
#[derive(Debug, Clone, Serialize)]
pub struct PluginInfo {
pub name: String,
pub version: String,
pub description: Option<String>,
pub capabilities: Vec<PluginCapability>,
pub permissions: Vec<PluginPermission>,
pub wasm_path: PathBuf,
pub loaded: bool,
}
@@ -0,0 +1,44 @@
//! Bridge between WASM plugins and the Channel trait.
use crate::channels::traits::{Channel, ChannelMessage, SendMessage};
use async_trait::async_trait;
/// A channel backed by a WASM plugin.
pub struct WasmChannel {
name: String,
plugin_name: String,
}
impl WasmChannel {
pub fn new(name: String, plugin_name: String) -> Self {
Self { name, plugin_name }
}
}
#[async_trait]
impl Channel for WasmChannel {
fn name(&self) -> &str {
&self.name
}
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
// TODO: Wire to WASM plugin send function
tracing::warn!(
"WasmChannel '{}' (plugin: {}) send not yet connected: {}",
self.name,
self.plugin_name,
message.content
);
Ok(())
}
async fn listen(&self, _tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> anyhow::Result<()> {
// TODO: Wire to WASM plugin receive/listen function
tracing::warn!(
"WasmChannel '{}' (plugin: {}) listen not yet connected",
self.name,
self.plugin_name,
);
Ok(())
}
}
@@ -0,0 +1,63 @@
//! Bridge between WASM plugins and the Tool trait.
use crate::tools::traits::{Tool, ToolResult};
use async_trait::async_trait;
use serde_json::Value;
/// A tool backed by a WASM plugin function.
pub struct WasmTool {
name: String,
description: String,
plugin_name: String,
function_name: String,
parameters_schema: Value,
}
impl WasmTool {
pub fn new(
name: String,
description: String,
plugin_name: String,
function_name: String,
parameters_schema: Value,
) -> Self {
Self {
name,
description,
plugin_name,
function_name,
parameters_schema,
}
}
}
#[async_trait]
impl Tool for WasmTool {
fn name(&self) -> &str {
&self.name
}
fn description(&self) -> &str {
&self.description
}
fn parameters_schema(&self) -> Value {
self.parameters_schema.clone()
}
async fn execute(&self, args: Value) -> anyhow::Result<ToolResult> {
// TODO: Call into Extism plugin runtime
// For now, return a placeholder indicating the plugin system is available
// but not yet wired to actual WASM execution.
Ok(ToolResult {
success: false,
output: format!(
"[plugin:{}/{}] WASM execution not yet connected. Args: {}",
self.plugin_name,
self.function_name,
serde_json::to_string(&args).unwrap_or_default()
),
error: Some("WASM execution bridge not yet implemented".into()),
})
}
}
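Until the Extism call lands, `execute` returns a structured failure rather than panicking, so callers can surface a readable message. A standalone sketch of that placeholder shape (simplified `ToolResult` with hypothetical fields, no serde dependency):

```rust
// Sketch of the placeholder result the WASM tool bridge returns
// before real plugin execution is connected. Field names mirror the
// diff above but the type itself is a hypothetical stand-in.
struct ToolResult {
    success: bool,
    output: String,
    error: Option<String>,
}

fn placeholder_result(plugin: &str, function: &str, args_json: &str) -> ToolResult {
    ToolResult {
        success: false,
        output: format!(
            "[plugin:{plugin}/{function}] WASM execution not yet connected. Args: {args_json}"
        ),
        error: Some("WASM execution bridge not yet implemented".into()),
    }
}
```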
+150 -4
@@ -17,9 +17,6 @@
//!
//! # Limitations
//!
-//! - **Conversation history**: Only the system prompt (if present) and the last
-//!   user message are forwarded. Full multi-turn history is not preserved because
-//!   the CLI accepts a single prompt per invocation.
//! - **System prompt**: The system prompt is prepended to the user message with a
//! blank-line separator, as the CLI does not provide a dedicated system-prompt flag.
//! - **Temperature**: The CLI does not expose a temperature parameter.
@@ -34,7 +31,7 @@
//!
//! - `CLAUDE_CODE_PATH` — override the path to the `claude` binary (default: `"claude"`)
-use crate::providers::traits::{ChatRequest, ChatResponse, Provider, TokenUsage};
+use crate::providers::traits::{ChatMessage, ChatRequest, ChatResponse, Provider, TokenUsage};
use async_trait::async_trait;
use std::path::PathBuf;
use tokio::io::AsyncWriteExt;
@@ -212,6 +209,54 @@ impl Provider for ClaudeCodeProvider {
self.invoke_cli(&full_message, model).await
}
async fn chat_with_history(
&self,
messages: &[ChatMessage],
model: &str,
temperature: f64,
) -> anyhow::Result<String> {
Self::validate_temperature(temperature)?;
// Separate system prompt from conversation messages.
let system = messages
.iter()
.find(|m| m.role == "system")
.map(|m| m.content.as_str());
// Build conversation turns (skip system messages).
let turns: Vec<&ChatMessage> = messages.iter().filter(|m| m.role != "system").collect();
// If there's only one user message, use the simple path.
if turns.len() <= 1 {
let last_user = turns.first().map(|m| m.content.as_str()).unwrap_or("");
let full_message = match system {
Some(s) if !s.is_empty() => format!("{s}\n\n{last_user}"),
_ => last_user.to_string(),
};
return self.invoke_cli(&full_message, model).await;
}
// Format multi-turn conversation into a single prompt.
let mut parts = Vec::new();
if let Some(s) = system {
if !s.is_empty() {
parts.push(format!("[system]\n{s}"));
}
}
for msg in &turns {
let label = match msg.role.as_str() {
"user" => "[user]",
"assistant" => "[assistant]",
other => other,
};
parts.push(format!("{label}\n{}", msg.content));
}
parts.push("[assistant]".to_string());
let full_message = parts.join("\n\n");
self.invoke_cli(&full_message, model).await
}
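The multi-turn path flattens the whole conversation into one labeled prompt, closing with an empty `[assistant]` turn so the CLI knows where to continue. A standalone sketch of that format, using a simplified `ChatMessage` (the real type lives in `providers::traits`, and the real code hoists system messages to the front rather than labeling them in place):

```rust
// Sketch of the multi-turn flattening format used by chat_with_history.
// ChatMessage here is a simplified stand-in.
struct ChatMessage {
    role: String,
    content: String,
}

fn flatten_history(messages: &[ChatMessage]) -> String {
    let mut parts = Vec::new();
    for m in messages {
        let label = match m.role.as_str() {
            "system" => "[system]",
            "user" => "[user]",
            "assistant" => "[assistant]",
            other => other,
        };
        parts.push(format!("{label}\n{}", m.content));
    }
    // A trailing open assistant turn cues the CLI to continue.
    parts.push("[assistant]".to_string());
    parts.join("\n\n")
}
```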
async fn chat(
&self,
request: ChatRequest<'_>,
@@ -327,4 +372,105 @@ mod tests {
"unexpected error message: {msg}"
);
}
/// Helper: create a provider that uses a shell script echoing stdin back.
/// The script ignores CLI flags (`--print`, `--model`, `-`) and just cats stdin.
///
/// Uses `OnceLock` to write the script file exactly once, avoiding
/// "Text file busy" (ETXTBSY) races when parallel tests try to
/// overwrite a script that another test is currently executing.
fn echo_provider() -> ClaudeCodeProvider {
use std::sync::OnceLock;
static SCRIPT_PATH: OnceLock<PathBuf> = OnceLock::new();
let script = SCRIPT_PATH.get_or_init(|| {
use std::io::Write;
let dir = std::env::temp_dir().join("zeroclaw_test_claude_code");
std::fs::create_dir_all(&dir).unwrap();
let path = dir.join(format!("fake_claude_{}.sh", std::process::id()));
let mut f = std::fs::File::create(&path).unwrap();
writeln!(f, "#!/bin/sh\ncat /dev/stdin").unwrap();
drop(f);
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
std::fs::set_permissions(&path, std::fs::Permissions::from_mode(0o755)).unwrap();
}
path
});
ClaudeCodeProvider {
binary_path: script.clone(),
}
}
#[tokio::test]
async fn chat_with_history_single_user_message() {
let provider = echo_provider();
let messages = vec![ChatMessage::user("hello")];
let result = provider
.chat_with_history(&messages, "default", 1.0)
.await
.unwrap();
assert_eq!(result, "hello");
}
#[tokio::test]
async fn chat_with_history_single_user_with_system() {
let provider = echo_provider();
let messages = vec![
ChatMessage::system("You are helpful."),
ChatMessage::user("hello"),
];
let result = provider
.chat_with_history(&messages, "default", 1.0)
.await
.unwrap();
assert_eq!(result, "You are helpful.\n\nhello");
}
#[tokio::test]
async fn chat_with_history_multi_turn_includes_all_messages() {
let provider = echo_provider();
let messages = vec![
ChatMessage::system("Be concise."),
ChatMessage::user("What is 2+2?"),
ChatMessage::assistant("4"),
ChatMessage::user("And 3+3?"),
];
let result = provider
.chat_with_history(&messages, "default", 1.0)
.await
.unwrap();
assert!(result.contains("[system]\nBe concise."));
assert!(result.contains("[user]\nWhat is 2+2?"));
assert!(result.contains("[assistant]\n4"));
assert!(result.contains("[user]\nAnd 3+3?"));
assert!(result.ends_with("[assistant]"));
}
#[tokio::test]
async fn chat_with_history_multi_turn_without_system() {
let provider = echo_provider();
let messages = vec![
ChatMessage::user("hi"),
ChatMessage::assistant("hello"),
ChatMessage::user("bye"),
];
let result = provider
.chat_with_history(&messages, "default", 1.0)
.await
.unwrap();
assert!(!result.contains("[system]"));
assert!(result.contains("[user]\nhi"));
assert!(result.contains("[assistant]\nhello"));
assert!(result.contains("[user]\nbye"));
}
#[tokio::test]
async fn chat_with_history_rejects_bad_temperature() {
let provider = echo_provider();
let messages = vec![ChatMessage::user("test")];
let result = provider.chat_with_history(&messages, "default", 0.5).await;
assert!(result.is_err());
}
}
+54
@@ -41,6 +41,8 @@ pub struct OpenAiCompatibleProvider {
timeout_secs: u64,
/// Extra HTTP headers to include in all API requests.
extra_headers: std::collections::HashMap<String, String>,
/// Optional reasoning effort for GPT-5/Codex-compatible backends.
reasoning_effort: Option<String>,
/// Custom API path suffix (e.g. "/v2/generate").
/// When set, overrides the default `/chat/completions` path detection.
api_path: Option<String>,
@@ -179,6 +181,7 @@ impl OpenAiCompatibleProvider {
native_tool_calling: !merge_system_into_user,
timeout_secs: 120,
extra_headers: std::collections::HashMap::new(),
reasoning_effort: None,
api_path: None,
}
}
@@ -198,6 +201,12 @@ impl OpenAiCompatibleProvider {
self
}
/// Set reasoning effort for GPT-5/Codex-compatible chat-completions APIs.
pub fn with_reasoning_effort(mut self, reasoning_effort: Option<String>) -> Self {
self.reasoning_effort = reasoning_effort;
self
}
/// Set a custom API path suffix for this provider.
/// When set, replaces the default `/chat/completions` path.
pub fn with_api_path(mut self, api_path: Option<String>) -> Self {
@@ -363,6 +372,14 @@ impl OpenAiCompatibleProvider {
})
.collect()
}
fn reasoning_effort_for_model(&self, model: &str) -> Option<String> {
let id = model.rsplit('/').next().unwrap_or(model);
let supports_reasoning_effort = id.starts_with("gpt-5") || id.contains("codex");
supports_reasoning_effort
.then(|| self.reasoning_effort.clone())
.flatten()
}
}
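The gate in `reasoning_effort_for_model` strips any provider prefix before matching, so `openai/gpt-5` and a bare `gpt-5` behave the same. The matching logic in isolation:

```rust
// Standalone sketch of the model-id gate: strip any "provider/" prefix,
// then allow reasoning_effort only for gpt-5 / codex model families.
fn supports_reasoning_effort(model: &str) -> bool {
    let id = model.rsplit('/').next().unwrap_or(model);
    id.starts_with("gpt-5") || id.contains("codex")
}
```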
#[derive(Debug, Serialize)]
@@ -373,6 +390,8 @@ struct ApiChatRequest {
#[serde(skip_serializing_if = "Option::is_none")]
stream: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
reasoning_effort: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
tools: Option<Vec<serde_json::Value>>,
#[serde(skip_serializing_if = "Option::is_none")]
tool_choice: Option<String>,
@@ -569,6 +588,8 @@ struct NativeChatRequest {
#[serde(skip_serializing_if = "Option::is_none")]
stream: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
reasoning_effort: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
tools: Option<Vec<serde_json::Value>>,
#[serde(skip_serializing_if = "Option::is_none")]
tool_choice: Option<String>,
@@ -1181,6 +1202,8 @@ impl OpenAiCompatibleProvider {
"does not support tools",
"function calling is not supported",
"tool_choice",
"tool call validation failed",
"was not in request",
]
.iter()
.any(|hint| lower.contains(hint))
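The unsupported-tools detection is a case-insensitive substring scan over the error body against a list of known phrases; the two new hints cover Groq's tool-validation wording. The check in isolation (hint list abbreviated):

```rust
// Sketch of the substring-hint check: lowercase the error body and
// look for phrases meaning "native tool calling unsupported here".
fn is_tool_schema_unsupported(body: &str) -> bool {
    let lower = body.to_ascii_lowercase();
    [
        "does not support tools",
        "tool call validation failed",
        "was not in request",
    ]
    .iter()
    .any(|hint| lower.contains(hint))
}
```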
@@ -1240,6 +1263,7 @@ impl Provider for OpenAiCompatibleProvider {
messages,
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: None,
tool_choice: None,
};
@@ -1362,6 +1386,7 @@ impl Provider for OpenAiCompatibleProvider {
messages: api_messages,
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: None,
tool_choice: None,
};
@@ -1472,6 +1497,7 @@ impl Provider for OpenAiCompatibleProvider {
messages: api_messages,
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: if tools.is_empty() {
None
} else {
@@ -1577,6 +1603,7 @@ impl Provider for OpenAiCompatibleProvider {
),
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tool_choice: tools.as_ref().map(|_| "auto".to_string()),
tools,
};
@@ -1720,6 +1747,7 @@ impl Provider for OpenAiCompatibleProvider {
messages,
temperature,
stream: Some(options.enabled),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: None,
tool_choice: None,
};
@@ -1861,6 +1889,7 @@ mod tests {
],
temperature: 0.4,
stream: Some(false),
reasoning_effort: None,
tools: None,
tool_choice: None,
};
@@ -2418,6 +2447,14 @@ mod tests {
);
}
#[test]
fn native_tool_schema_unsupported_detects_groq_tool_validation_error() {
assert!(OpenAiCompatibleProvider::is_native_tool_schema_unsupported(
reqwest::StatusCode::BAD_REQUEST,
r#"Groq API error (400 Bad Request): {"error":{"message":"tool call validation failed: attempted to call tool 'memory_recall={\"limit\":5}' which was not in request"}}"#
));
}
#[test]
fn prompt_guided_tool_fallback_injects_system_instruction() {
let input = vec![ChatMessage::user("check status")];
@@ -2441,6 +2478,22 @@ mod tests {
assert!(output[0].content.contains("shell_exec"));
}
#[test]
fn reasoning_effort_only_applies_to_gpt5_and_codex_models() {
let provider = make_provider("test", "https://example.com", None)
.with_reasoning_effort(Some("high".to_string()));
assert_eq!(
provider.reasoning_effort_for_model("gpt-5.3-codex"),
Some("high".to_string())
);
assert_eq!(
provider.reasoning_effort_for_model("openai/gpt-5"),
Some("high".to_string())
);
assert_eq!(provider.reasoning_effort_for_model("llama-3.3-70b"), None);
}
#[tokio::test]
async fn warmup_without_key_is_noop() {
let provider = make_provider("test", "https://example.com", None);
@@ -2617,6 +2670,7 @@ mod tests {
}],
temperature: 0.7,
stream: Some(false),
reasoning_effort: None,
tools: Some(tools),
tool_choice: Some("auto".to_string()),
};
+44 -1
@@ -680,6 +680,7 @@ pub struct ProviderRuntimeOptions {
pub zeroclaw_dir: Option<PathBuf>,
pub secrets_encrypt: bool,
pub reasoning_enabled: Option<bool>,
pub reasoning_effort: Option<String>,
/// HTTP request timeout in seconds for LLM provider API calls.
/// `None` uses the provider's built-in default (120s for compatible providers).
pub provider_timeout_secs: Option<u64>,
@@ -699,6 +700,7 @@ impl Default for ProviderRuntimeOptions {
zeroclaw_dir: None,
secrets_encrypt: true,
reasoning_enabled: None,
reasoning_effort: None,
provider_timeout_secs: None,
extra_headers: std::collections::HashMap::new(),
api_path: None,
@@ -706,6 +708,22 @@ impl Default for ProviderRuntimeOptions {
}
}
pub fn provider_runtime_options_from_config(
config: &crate::config::Config,
) -> ProviderRuntimeOptions {
ProviderRuntimeOptions {
auth_profile_override: None,
provider_api_url: config.api_url.clone(),
zeroclaw_dir: config.config_path.parent().map(PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
reasoning_effort: config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
}
}
fn is_secret_char(c: char) -> bool {
c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '.' | ':')
}
@@ -818,6 +836,26 @@ fn resolve_provider_credential(name: &str, credential_override: Option<&str>) ->
if let Some(credential) = resolve_minimax_oauth_refresh_token(name) {
return Some(credential);
}
} else if name == "anthropic" || name == "openai" || name == "groq" {
// For well-known providers, prefer provider-specific env vars over the
// global api_key override, since the global key may belong to a different
// provider (e.g. a custom: gateway). This enables multi-provider setups
// where the primary uses a custom gateway and fallbacks use named providers.
let env_candidates: &[&str] = match name {
"anthropic" => &["ANTHROPIC_OAUTH_TOKEN", "ANTHROPIC_API_KEY"],
"openai" => &["OPENAI_API_KEY"],
"groq" => &["GROQ_API_KEY"],
_ => &[],
};
for env_var in env_candidates {
if let Ok(val) = std::env::var(env_var) {
let trimmed = val.trim().to_string();
if !trimmed.is_empty() {
return Some(trimmed);
}
}
}
return Some(trimmed_override.to_owned());
} else {
return Some(trimmed_override.to_owned());
}
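The lookup order described in the comment above, reduced to its essentials: for well-known providers, each provider-specific env var is tried in turn (skipping empty values) before falling back to the global override. A self-contained sketch with the environment passed in explicitly (a hypothetical simplification; the real code reads `std::env::var`):

```rust
// Sketch of credential precedence: provider-specific env vars win
// over the global api_key override. `env` stands in for process env.
fn resolve_credential(env: &[(&str, &str)], candidates: &[&str], global: &str) -> String {
    for name in candidates {
        for (key, value) in env {
            if key == name && !value.trim().is_empty() {
                return value.trim().to_string();
            }
        }
    }
    global.to_string()
}
```

This is what lets a primary provider use a custom gateway key while fallbacks like `anthropic` or `groq` pick up their own env vars.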
@@ -1016,6 +1054,7 @@ fn create_provider_with_url_and_options(
// headers to OpenAI-compatible providers before boxing them as trait objects.
let compat = {
let timeout = options.provider_timeout_secs;
let reasoning_effort = options.reasoning_effort.clone();
let extra_headers = options.extra_headers.clone();
let api_path = options.api_path.clone();
move |p: OpenAiCompatibleProvider| -> Box<dyn Provider> {
@@ -1023,6 +1062,9 @@ fn create_provider_with_url_and_options(
if let Some(t) = timeout {
p = p.with_timeout_secs(t);
}
if let Some(ref effort) = reasoning_effort {
p = p.with_reasoning_effort(Some(effort.clone()));
}
if !extra_headers.is_empty() {
p = p.with_extra_headers(extra_headers.clone());
}
@@ -1278,11 +1320,12 @@ fn create_provider_with_url_and_options(
.map(str::trim)
.filter(|value| !value.is_empty())
.unwrap_or("llama.cpp");
-Ok(compat(OpenAiCompatibleProvider::new(
+Ok(compat(OpenAiCompatibleProvider::new_with_vision(
"llama.cpp",
base_url,
Some(llama_cpp_key),
AuthStyle::Bearer,
true,
)))
}
"sglang" => {
+29 -4
@@ -22,6 +22,7 @@ pub struct OpenAiCodexProvider {
responses_url: String,
custom_endpoint: bool,
gateway_api_key: Option<String>,
reasoning_effort: Option<String>,
client: Client,
}
@@ -105,6 +106,7 @@ impl OpenAiCodexProvider {
custom_endpoint: !is_default_responses_url(&responses_url),
responses_url,
gateway_api_key: gateway_api_key.map(ToString::to_string),
reasoning_effort: options.reasoning_effort.clone(),
client: Client::builder()
.timeout(std::time::Duration::from_secs(120))
.connect_timeout(std::time::Duration::from_secs(10))
@@ -304,9 +306,10 @@ fn clamp_reasoning_effort(model: &str, effort: &str) -> String {
effort.to_string()
}
-fn resolve_reasoning_effort(model_id: &str) -> String {
-let raw = std::env::var("ZEROCLAW_CODEX_REASONING_EFFORT")
-.ok()
+fn resolve_reasoning_effort(model_id: &str, configured: Option<&str>) -> String {
+let raw = configured
+.map(ToString::to_string)
+.or_else(|| std::env::var("ZEROCLAW_CODEX_REASONING_EFFORT").ok())
.and_then(|value| first_nonempty(Some(&value)))
.unwrap_or_else(|| "xhigh".to_string())
.to_ascii_lowercase();
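The resulting precedence is: explicit config value, then the legacy `ZEROCLAW_CODEX_REASONING_EFFORT` env var, then the `"xhigh"` default, with the result lowercased (and, per the surrounding code, subsequently clamped per model). A sketch with the env value passed in as a parameter (a hypothetical simplification of the real signature):

```rust
// Sketch of effort-resolution precedence: configured > env > default.
// Clamping per model id is handled separately and omitted here.
fn resolve_effort(configured: Option<&str>, env_value: Option<&str>) -> String {
    configured
        .or(env_value)
        .map(str::trim)
        .filter(|v| !v.is_empty())
        .unwrap_or("xhigh")
        .to_ascii_lowercase()
}
```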
@@ -663,7 +666,10 @@ impl OpenAiCodexProvider {
verbosity: "medium".to_string(),
},
reasoning: ResponsesReasoningOptions {
-effort: resolve_reasoning_effort(normalized_model),
+effort: resolve_reasoning_effort(
+normalized_model,
+self.reasoning_effort.as_deref(),
+),
summary: "auto".to_string(),
},
include: vec!["reasoning.encrypted_content".to_string()],
@@ -951,6 +957,24 @@ mod tests {
);
}
#[test]
fn resolve_reasoning_effort_prefers_configured_override() {
let _guard = EnvGuard::set("ZEROCLAW_CODEX_REASONING_EFFORT", Some("low"));
assert_eq!(
resolve_reasoning_effort("gpt-5-codex", Some("high")),
"high".to_string()
);
}
#[test]
fn resolve_reasoning_effort_uses_legacy_env_when_unconfigured() {
let _guard = EnvGuard::set("ZEROCLAW_CODEX_REASONING_EFFORT", Some("minimal"));
assert_eq!(
resolve_reasoning_effort("gpt-5-codex", None),
"low".to_string()
);
}
#[test]
fn parse_sse_text_reads_output_text_delta() {
let payload = r#"data: {"type":"response.created","response":{"id":"resp_123"}}
@@ -1125,6 +1149,7 @@ data: [DONE]
secrets_encrypt: false,
auth_profile_override: None,
reasoning_enabled: None,
reasoning_effort: None,
provider_timeout_secs: None,
extra_headers: std::collections::HashMap::new(),
api_path: None,

Some files were not shown because too many files have changed in this diff.